Re: idea for a universal disk interface
> On Apr 17, 2022, at 1:28 PM, shad via cctalk wrote:
>
> hello,
> there's much discussion about the right method to transfer data in and out. Of course there are several methods, the right one must be carefully chosen after some review of all the disk interfaces that must be supported. The idea of having a copy of the whole disk in RAM is OK, assuming that a maximum size of around 512MB is required, as the RAM is also needed for the OS, and for Zynq maximum is 1GB.

For reading a disk, an attractive approach is to do a high speed analog capture of the waveforms. That way you don't need a priori knowledge of the encoding, and it also allows you to use sophisticated algorithms (DSP, digital filtering, etc.) to recover marginal media. A number of old tape recovery projects have used this approach. For disk you have to go faster if you use an existing drive, but the numbers are perfectly manageable with modern hardware.

If you use this technique, you do generate a whole lot more data than the formatted capacity of the drive; 10x to 100x or so. Throw in another order of magnitude if you step across the surface in small increments to avoid having to identify the track centerline in advance -- again, somewhat like the tape recovery machines that use a 36 track head to read 7 or 9 or 10 track tapes.

Fred mentioned how life gets hard if you don't have a drive. I'm wondering how difficult it would be to build a usable "spin table": basically an accurate spindle that will accept the pack to be recovered and that will rotate at a modest speed, with a head positioner that can accurately position a read head along the surface. One head would suffice, RAMAC fashion. For slow rotation you'd want an MR head, and perhaps supplied air to float the head off the surface.
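[The first step of the analog-capture approach described above -- turning a sampled read-head waveform into transition timings, with no a priori knowledge of the encoding -- can be sketched in a few lines. This is an illustrative sketch only; all signal parameters are invented, and real recovery pipelines add filtering, equalization, and PLL-style clock tracking.]

```python
def transition_times(samples, sample_rate_hz, threshold=0.0):
    """Return the times (seconds) at which the waveform crosses `threshold`
    going upward -- a crude stand-in for peak/transition detection on the
    captured read-head signal."""
    times = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < threshold <= b:
            # Linear interpolation gives sub-sample timing resolution.
            frac = (threshold - a) / (b - a)
            times.append((i - 1 + frac) / sample_rate_hz)
    return times

def classify_intervals(times, cell_s):
    """Quantize the gaps between transitions to whole bit cells.
    Reverse engineering the encoding (FM, MFM, GCR, ...) starts from
    exactly this interval histogram."""
    return [round((t2 - t1) / cell_s) for t1, t2 in zip(times, times[1:])]
```

[With the intervals in hand, the encoding can be guessed from which multiples of the nominal cell time occur; the decode itself is then pure software, which is the point of capturing raw waveforms in the first place.]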
Perhaps a scheme like this with slow rotation could allow for recovery of much of the data on a platter that suffered a head crash, because you could spin it slowly enough that either the head doesn't touch the scratched areas, or touches them slowly enough that no further damage results.

paul
Re: AlphaServer 2100s available
> On Apr 15, 2022, at 6:49 PM, Peter Coghlan via cctalk wrote:
>
> We occasionally hear of aged computers being employed in the nuclear power industry, certain military applications or long lifed medical equipment for example. I imagine that these machines can have a significant commercial value long after their contemporaries which are not involved in these roles.

An example: in the past year, a CDC mainframe (the last generation of what started with the 6600 in 1964) was taken out of service at Vandenberg SFB. And actually, the architecture is still in use, but the replacement is an emulator rather than a "real" machine.

paul
Re: idea for a universal disk interface
> On Apr 15, 2022, at 6:54 PM, Fred Cisin via cctalk wrote:
>
> On Thu, 14 Apr 2022, Tom Gardner via cctalk wrote:
>> This was the approach IBM used in it's first RAMAC RAID where I think they had to buffer a whole cylinder but that was many generations ago
>
> (my copy of the specs may not be exact):
> Buffering a whole cylinder, or a whole surface, of the RAMAC was no big deal. One hundred surfaces (52 platters, but not using bottom of bottommost nor top of topmost) totalling to 5 million 6 bit characters.
>
> That's 50,000 characters per surface.
> OR 50,000 characters per cylinder
> ("square geometry" :-)

"Was" as in "back in the day"? 50k characters would have been quite a large memory in the 1950s. And for an I/O device, any kind of buffer is not necessarily all that useful. What does make sense is a track buffer in drum memory machines, as found for example in the Dutch ARMAC, where the famous "shortest path" algorithm was first implemented.

paul
Re: phishing increase lately
What, on this list? I haven't seen any of that. I get a large amount of dishonest mail; much of it is caught by my "send this immediately to trash without reading it" filter rules. Like you, I have no idea what's attached, simply because I have no interest in investigating the machinations of criminal elements on the net.

paul

> On Apr 14, 2022, at 1:47 PM, dwight via cctalk wrote:
>
> I've been getting a bunch of "release of lien waiver" messages lately. I've not been opening the click bate. I'm just wondering if anyone else is getting them. I don't know what nasty is attached as I don't have a secure system to look at it.
> Dwight
Re: idea for a universal disk interface
> On Apr 13, 2022, at 9:45 PM, Fred Cisin via cctech wrote:
>
> On Wed, 13 Apr 2022, Paul Koning wrote:
>> Indeed. Though even that is hard for the more exotic formats, if original controllers are unavailable. How would you read, for example, an IBM 1620 or CDC 6600 disk pack, given that the machine is hard to find and those that exist may not have the right controllers? But both are certainly doable with a "generic" track extractor engine. Turning track waveforms into data then becomes a matter of reverse engineering the encoding and constructing the software to do that. This may be hard, or not so hard. For example, if you wanted to do this for a CDC 844 disk pack (RP04/06 lookalike but with 322 12-bit word sectors) you'd get a lot of help from the fact that the source code of the disk controller firmware, and the manual describing it, have been preserved.
>>
>> Then as you said the real goal is to recover files, which means also having to reverse engineer the file system. That too may be documented adequately (it is in the 6600 case, for example) or not so much (does such documentation, or the OS source code, exist for the 1620 disk operating system?).
>
> Some projects are well beyond the reach of even the most insane of us.
>
> I don't think that any of us here today have the ability to build a replacement drive from scratch. Even with full access to the original construction documents.
>
> Now, if we had NSA level of facilities, . . .

I don't think a 1970s era disk drive replica is quite as hard as you suggest. In my comment I wasn't actually thinking of that, but rather of the possibility that you might have a drive and packs, but no computer to connect the drive to.

That said, consider what, say, a 1311 looks like. It's a spindle and a head carriage, each with the levels of precision you would find on a good quality lathe.
That suggests the guts of a small CNC lathe, or the building blocks of such a thing, could be put to work for this.

One data point: I remember when our RC11 spindle bearings failed (in college, 1974). DEC Field Service was called in; the tech decided to look for a low cost solution since the machine was not under contract. The normal procedure would have been to replace the spindle motor, of course. Instead, he disassembled the drive and took the motor to Appleton Motor Service Co., which pulled off the failed bearings and pressed on new ones. He reinstalled the spindle motor, and presto, good as new. He didn't even have to reformat the drive; all the data on it remained intact. So the tolerances on drives of that era are not all that severe, not out of reach for ordinary skilled machinists.

paul
Re: idea for a universal disk interface
> On Apr 13, 2022, at 8:12 PM, Fred Cisin via cctech wrote:
>
> ...
> My mindset is/was still stuck in the disk format conversion realm, of trying to get information (hopefully information in the form of files, not just data as track images) from alien media.
> And, more often than not, unidirectionally.

Indeed. Though even that is hard for the more exotic formats, if original controllers are unavailable. How would you read, for example, an IBM 1620 or CDC 6600 disk pack, given that the machine is hard to find and those that exist may not have the right controllers? But both are certainly doable with a "generic" track extractor engine. Turning track waveforms into data then becomes a matter of reverse engineering the encoding and constructing the software to do that. This may be hard, or not so hard. For example, if you wanted to do this for a CDC 844 disk pack (RP04/06 lookalike but with 322 12-bit word sectors) you'd get a lot of help from the fact that the source code of the disk controller firmware, and the manual describing it, have been preserved.

Then as you said the real goal is to recover files, which means also having to reverse engineer the file system. That too may be documented adequately (it is in the 6600 case, for example) or not so much (does such documentation, or the OS source code, exist for the 1620 disk operating system?).

paul
Re: idea for a universal disk interface
> On Apr 13, 2022, at 5:27 PM, Fred Cisin via cctech wrote:
>
> On Wed, 13 Apr 2022, shad via cctech wrote:
>> The main board should include a large enough array of bidirectional transceivers, possibly with variable voltage, to support as much interfaces as possible, namely at least Shugart floppy, ST506 MFM/RLL, ESDI, SMD, IDE, SCSI1, DEC DSSI, DEC RX01/02, DG6030, and so on, to give a starting point.
>
> Hmmm. rather than re-inventing the wheel, as we usually do, . . .
>
> It may be possible to accomplish a subset of those, specifically including Shugart floppy, ST506/412 MFM, RLL, ESDI, IDE, SASI, SCSI by repurposing certain commercial hardware.
>
> You would have a collection of boards, that you would remove/insert into a connector.
>
> The main board would have RAM, and a ROM for certain basic (and BASIC?) functions, but would load software for various modules and output results to and from one or more interfaces that remain connected.
>
> I don't doubt that you could design a far better system, but there already exists a crude version, ready to implement!
>
> It has a marginal power supply, and it has a poorly designed group of 8 62 pin connectors for the interfaces, although some of those would need to be dedicated for other functions of the device, including user interface hardware. Some software is already available, but some crude tools are available for creating more.
>
> It says "IBM", "5160" on the back panel label, although there were plenty of generic second sources.
> The updated "5170" version of it could be more easily set up even for USB.

:-) But the main goal that was mentioned is device emulation, which existing products generally don't do. I see the idea as a generalized form of David Gesswein's MFM emulator, which is primarily a device emulator but is also capable of reading and writing devices to capture everything that's on them.
The puzzle is how to make it do, say, 2311 emulation suitable to talk to an IBM/360, 1311 emulation for the decimal IBM 1620, or 844 emulation for a CDC 6600 -- to mention just a few of the more exotic cases. paul
Re: idea for a universal disk interface
> On Apr 13, 2022, at 1:35 AM, shad via cctalk wrote:
>
> Hello,
> I'm a decent collector of big iron, aka mini computers, mainly DEC and DG. I'm often facing common problems with storage devices, magnetic discs and tapes are a little prone to give headaches after years, and replacement drives/media in case of a severe failure are unobtainable. In some cases, the ability to make a dump of the media, also without a running computer is very important.
>
> Whence the idea: realize an universal device, with several input/output interfaces, which could be used both as storage emulator, to run a computer without real storage, and as controller emulator, to read/write a media without a running computer. To reduce costs as much as possible, and to allow the better compatibility, the main board shall host enough electrical interfaces to support a large number of disc standard interfaces, ideally by exchanging only a personality adapter for each specific interface, i.e. connectors and few components.
>
> There are several orders of problems:
> - electrical signals, number and type (most disk employ 5V TTL or 3.3V TTL, some interfaces use differential mode for some faster signals?)
> - logical implementation: several electrical signals are used for a specific interface. These must be handled with correct timings
> - software implementation: the universal device shall be able to switch between interface modes and be controlled by a remote PC
>
> I suppose the only way to obtain this is to employ an FPGA for logic implementation of the interface, and a microprocessor running Linux to handle software management, data interchange to external (via Ethernet). This means a Xilinx Zynq module for instance. I know there are several ready devices based on cheaper microcontrollers, but I'm sure these can't support fast and tight timing required by hard disk interfaces (SMD-E runs at 24MHz).

Interesting idea.
I wonder if it's cost-effective given the wide variety involved.

It's surprising what tight timing you can manage with some modern programmable PIO blocks in devices like the Raspberry Pi Pico or the BeagleBone Black. David Gesswein used the latter for his MFM emulator. That's not quite as fast as the SMD interface you mentioned. And I've used the Pico for some comms work, and realized at some point that it's fast enough to do Ethernet in software. Again, not quite as fast but close, and with care and under certain constraints that device (a $4 item) can indeed generate and receive 24 MHz waveforms.

The thing that makes me wonder is the interfacing part. Disks might have anywhere from a few wires (SI port on DEC RA series disks) to dozens of wires (older disks like RP04 or RK05). Signal levels might be TTL at various voltages, differential, or even analog signals. Would you want to emulate the disk to formatter interface, or the formatter to controller interface?

Another difficulty may be the low level format details. As I recall, there is a massive amount of research and code that went into David Gesswein's emulator to understand and reproduce the great variety of bit level layouts. If you're at a low enough level that might not matter, but if you want to manage data in terms of what the host thinks of as blocks, it does. Things might get very strange with more exotic stuff than you find at DEC -- for example, emulating IBM 360 disk drives with their variable length blocks and key/data capability would be interesting. Or CDC 6000 disks with sectors containing 322 12-bit words.

So, it seems that building a device to emulate one class of disks with one class of interface, as David has done, is hard enough. Making something much more general is appealing, but I wonder if it's so hard, and requires so many parts, that it becomes infeasible and/or too expensive.

paul
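[The "bit level layouts" mentioned above are where much of the work hides. The core MFM rule itself is simple to state -- a clock transition is written between two data bits only when both are zero -- and can be sketched in a few lines. This is an illustrative sketch only: it encodes raw data bits and assumes the bit preceding the stream was 0; a real emulator also handles sync fields, sector headers, CRCs, and per-controller variations.]

```python
def mfm_encode(bits):
    """Encode a list of data bits (0/1) as MFM half-bit cells,
    alternating clock cell and data cell.  The MFM rule: write a
    clock transition only between two adjacent zero data bits."""
    out = []
    prev = 0  # assumed value of the data bit before the stream
    for b in bits:
        out.append(1 if (prev == 0 and b == 0) else 0)  # clock cell
        out.append(b)                                   # data cell
        prev = b
    return out
```

[Decoding from a drive is the harder direction: cell boundaries must be recovered from a jittery flux stream, which is what consumes most of the research effort in projects like the MFM emulator.]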
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 3:45 PM, John-Paul Stewart via cctalk wrote:
>
> On 2022-04-12 09:49, Paul Koning via cctalk wrote:
>> Does anyone still remember the other 100 Mb Ethernet-like proposal, I think from HP, which added various types of complexity instead of simply being a faster Ethernet?
>
> HP's proposal was called 100BaseVG, aka 100VG-AnyLAN, and could carry Token Ring frames in addition to Ethernet.

Ah, that would certainly be one part of the explanation why it didn't go anywhere. Supporting Token Ring would add a vast amount of complexity for no real benefit.

I noticed the Wikipedia article has a wrong and misleading description of the supposed "deterministic" behavior of token rings. It speaks of "consistent performance no matter how large the network became". That, of course, is nonsense. What is true is that, in the absence of errors, token rings have a hard upper bound on the time required to transmit the first frame in the NIC transmit queue. What is always unstated in the marketing documents making that claim is how large that upper bound is.

I remember an IBM document that spent 30 or so pages saying "token ring good, ethernet bad". DEC, with some help from 3Com, wrote a detailed paragraph by paragraph refutation of that; I participated in creating that and have a copy somewhere. One of the points IBM made was that determinism claim. So we calculated the worst case transmit latency for a max size 802.5 ring. I don't remember the answer exactly; I'm pretty sure it was over a minute.

FDDI is an entirely different protocol, with dramatically better latency at high load than 802.5. (Its ancestor is 802.4, not 802.5.) But even there, the max latency is surprisingly long. Ethernet, on the other hand, is nondeterministic (in the half duplex shared bus topology), but its latency, except at insanely high loads, is much lower than that of any token ring.

paul
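[The kind of worst-case arithmetic described above is easy to reproduce in outline: before your frame can go, every other station may hold the token and transmit first. A sketch, with illustrative parameters only -- these are not the numbers from the DEC/3Com rebuttal, and real 802.5 adds token-holding timers, priority reservations, and per-station ring latency that push the bound up further:]

```python
def worst_case_token_wait(stations, frame_bits, line_rate_bps,
                          frames_per_token=1, latency_per_station_s=0.0):
    """Upper bound (seconds) on the wait before this station's first
    queued frame can be sent: every other station transmits its full
    allowance before the token arrives here."""
    frame_s = frame_bits / line_rate_bps
    per_station = frames_per_token * frame_s + latency_per_station_s
    return (stations - 1) * per_station

# e.g. 260 stations, ~4500-byte frames, 4 Mb/s ring:
# already over two seconds for a single frame per token holding,
# and it grows linearly with frames_per_token.
bound = worst_case_token_wait(260, 36000, 4_000_000)
```

[The point of the exercise is the one Paul makes: "deterministic" only means the bound exists, not that it is small.]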
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 3:11 PM, Grant Taylor via cctalk wrote:
>
> On 4/12/22 11:41 AM, Paul Koning via cctalk wrote:
>> ...
>> Spanning tree is indeed another algorithm / protocol, but it's a control plane algorithm with relatively easy time constraints, so it's just SMOP.
>
> I guess I always assumed that spanning tree came along /after/ and / or /independently/ of bridging / switching.
>
> After all, the BPDU in spanning tree is "Bridge ..." so that name tends to imply to me that it came about /after/ bridges were a thing on at least some level.

No, definitely not. The DECbridge-100 included two critical elements: learning / selective forwarding, and spanning tree. Both were day-one features.

paul
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 2:51 PM, Grant Taylor via cctalk wrote:
>
> On 4/12/22 11:44 AM, Paul Koning wrote:
>> In my experience, "hub" is a vague marketing term. ...
>> Non-learning layer 2 packet switching devices to me are hypothetical beasts, I never met one and I'm glad I didn't.
>
> Nope. Hubs are definitely not a marketing term, nor a hypothetical beast.
>
> See the quote from the following Cisco page.

But that's a marketing document.

> ...
>> Building such a thing would be a silly thing to do in my view.
>
> Well, it may be silly, but a non-learning device, or "hub", was ~> is very much a thing that was mass marketed and sold all over the place.
>
> Switches supplanted hubs for obvious reasons in the mid-to-late '90s.

That doesn't match how I see the history. From the DEC point of view, non-learning packet switches did not exist. We sold either real bridges, or routers, or (early on) repeaters. I never heard of a DEC customer using non-learning devices; if anyone had and ran into trouble I'm certain our answer would have been "please use a real bridge".

paul
Re: Retro networking / WAN communities
Yes, that's the way the term was used by DEC. There have long been many things in our business that were called by multiple names. Gateway is one (router, protocol translator). Dataset (modem or file), file (file as we know it, disk drive) are other examples.

paul

> On Apr 12, 2022, at 2:01 PM, Wayne S wrote:
>
> Good clarification.
> In my day, gateway was some unique device or software that provided access to a service or another non-standard device. Think a device that dials out to batch send information to a specific service. Router meant networking within the company. Different times.
>
> Sent from my iPhone
>
>> On Apr 12, 2022, at 10:49, Paul Koning via cctalk wrote:
>>
>>> On Apr 12, 2022, at 1:20 PM, Grant Taylor via cctalk wrote:
>>>
>>>> On 4/12/22 10:11 AM, Wayne S wrote:
>>>> Wiki says ethernet became commercially available in 1980 and invented in 1973. So if enet was 1980 what were routers routing 10 years earlier in 1970?
>>>
>>> I feel like IMPs were "routing" and could be considered "routers" long before Ethernet was a thing.
>>
>> Exactly. For that matter, DECnet included routing before Ethernet came out (in Phase III, with DDCMP links). And Typeset-11 did routing before DECnet did, starting around 1977.
>>
>> I think the term used in the IMP days was "gateway" but by today's terminology they are routers.
>>
>> paul
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 1:20 PM, Grant Taylor via cctalk wrote:
>
> On 4/12/22 10:11 AM, Wayne S wrote:
>> Wiki says ethernet became commercially available in 1980 and invented in 1973. So if enet was 1980 what were routers routing 10 years earlier in 1970?
>
> I feel like IMPs were "routing" and could be considered "routers" long before Ethernet was a thing.

Exactly. For that matter, DECnet included routing before Ethernet came out (in Phase III, with DDCMP links). And Typeset-11 did routing before DECnet did, starting around 1977.

I think the term used in the IMP days was "gateway" but by today's terminology they are routers.

paul
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 1:44 PM, Todd Goodman via cctalk wrote:
>
> On 4/12/2022 1:28 PM, Grant Taylor via cctalk wrote:
>> On 4/12/22 7:56 AM, Todd Goodman via cctalk wrote:
>>> The big difference in my mind between bridge and switch is:
>>>
>>> * Switches learn what port given MACs are on and only sends unicast traffic destined for that MAC address on that port and not all
>>> * Bridges send unicast traffic to all ports
>>
>> So what would differentiate the bridge (using your understanding) from a hub?
>
> To me a hub is a layer 1 device (physical layer) that doesn't look at the traffic at all while the bridge does look at the traffic and generally implements 802.1d Spanning Tree Protocol and processes BPDUs.

For 802.3, a physical layer relay is called a repeater. Other LAN types, if they have such things, may use other names; for example FDDI calls it a concentrator.

paul
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 1:25 PM, Grant Taylor via cctalk wrote:
>
> On 4/12/22 8:50 AM, Paul Koning via cctalk wrote:
>> A device that doesn't do address learning and floods unicast frames is not a bridge but rather a non-standard piece of hardware.
>
> I feel like a "hub" qualifies as "a device that doesn't do address learning and floods unicast frames".
>
> To me, the fundamental difference between a hub and a switch / bridge is address learning.
>
> I can't tell if your (quoted) statement is specific to /just/ bridges / switches or could include hubs. Your first comment addresses bridges directly, thus meaning that your second non-targeted comment might target more.

In my experience, "hub" is a vague marketing term. It might mean a backplane into which networking modules are plugged -- the DEChub-90 and DEChub-900 are examples. It might mean a chassis accepting networking cards that offer repeater, bridging, or other services -- I think Chipcom and Cabletron used the term in that fashion.

Non-learning layer 2 packet switching devices to me are hypothetical beasts; I never met one and I'm glad I didn't. Building such a thing would be a silly thing to do in my view. So no, I don't think I would call that a "hub", because all the "hubs" I ever ran into were something different entirely.

paul
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 12:45 PM, Grant Taylor via cctalk wrote:
>
> On 4/12/22 7:49 AM, Paul Koning via cctalk wrote:
>> ...
>> The concept of a repeater goes back to day 1 of Ethernet; you'll find them in the D/I/X Ethernet spec. And they were part of the first batch of Ethernet products from DEC.
>
> Repeaters existing from day 1 of Ethernet sort of surprises me.
>
> I wonder if there is some difference in the original 3 Mbps Ethernet at Xerox PARC vs the 10 Mbps Ethernet (II?) that was commercialized.

I don't know anything about the 3 Mb/s prototype other than that it existed. When I speak of Ethernet and its "day 1" I mean 10 Mb/s Ethernet as defined by the DEC/Intel/Xerox spec. Repeaters are a core part of that spec, and they were among the first wave of products delivered by DEC.

> ...
>> I first saw "structured wiring" -- the star wiring with a hierarchy of wiring closets and devices -- around 1986, in the new Littleton King Street DEC building. It had distribution cabinets at the end of each row of cubicles. These looked just like standard office supplies storage cabinets, with shelves; inside you'd find a bridge and a couple of DEMPR repeaters, connected to 10Base2 coax drops to each cubicle.
>
> Interesting use case. -- Now I'm wondering if each station run was standard 10Base2 with it's T connector and terminator.

I no longer remember. That's possible, or perhaps they were a number of small segments each with a handful of stations on them.

>> ...
>
> I now feel the need to call out what I think are two very distinct things that need to be differentiated:
>
> 1) Learning of which port a source MAC address is on for the purposes of not sending a frame out to a destination segment when the location of the destination is known.
> 2) Spanning Tree / 802.1D learning the path to the root of the tree.
>
> The former is a fairly easy algorithm that doesn't require anything other than passive listening for data collection. The latter requires introduction of a new type of active traffic, namely BPDUs.

That's true, but only part of the story. For one thing, as I said, both mechanisms were part of bridges from the start (at least from the start of DEC's bridges, which may not be quite the very earliest ever but certainly are the earliest significant ones).

The learning part of bridging is actually the hard part. It involves (a) extracting the SA of each received packet, (b) looking it up, in very short time, in a large address table (8k entries in the original DECbridge-100), (c) timing out each one of those addresses. And then also (d) looking up the DA of a received packet in that table and (e) if found, forwarding the packet only to the port for that address, or dropping it if output port == input port. (a) is easy, (b) through (d) are not. And (d)/(e) are the purpose of the exercise; they are why bridged networks have better scaling properties than repeater based networks.

If you consider a two port 10 Mb Ethernet bridge, steps (b) and (d) amount to two table lookups every 30 microseconds (minimum packet interval across two ports), and step (b) includes restarting the aging timer for that address. With early 1980s technology this is a seriously hard problem; as I recall, in the DECbridge-100 it involved a hardware assist device to do the table lookup. And (c) is not easy either; it required the invention of the "timer wheel" algorithm, which has the property that starting, stopping, and keeping N timers is O(1), i.e., independent of N. (Expiration of n timers is O(n), but most timers do not expire in this application.)

Later on it became possible, with great care, to do these things in software. The DECbridge-900 (FDDI to 6 x 10M Ethernet) has its fast path in a 25 MHz 68040, very carefully handcrafted assembly code, with a bit of help from a 64-entry CAM acting as address cache.
It can handle around 60k packets/second, which means it needs some additional specialized rules to ensure it won't miss BPDUs under overload, since it can't handle worst case packet arrival at all ports concurrently in the normal forwarding path. We patented that.

Spanning tree is indeed another algorithm / protocol, but it's a control plane algorithm with relatively easy time constraints, so it's just SMOP.

>> ...
>> Does anyone still remember the other 100 Mb Ethernet-like proposal, I think from HP, which added various types of complexity instead of simply being a faster Ethernet? I forgot what it was called, or what other things it added. Something about isochronous mode, perhaps? Or maybe I'm confused with FDDI 2 -- another concept that never got anywhere, being much more
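[The learning and forwarding steps (a) through (e) described above can be sketched as a toy model. This is illustrative only -- the real bridges did the lookup with hardware assist and CAMs, and the aging timers (step (c)) are omitted here; they are the timer-wheel part of the problem:]

```python
class LearningBridge:
    """Toy model of bridge learning/forwarding: learn source addresses
    per port, forward by destination, flood unknowns.  No aging, no
    spanning tree -- just steps (a)-(e)."""
    def __init__(self, ports):
        self.ports = ports
        self.table = {}                      # MAC address -> port

    def receive(self, in_port, src, dst):
        """Return the list of ports the frame should go out on."""
        self.table[src] = in_port            # (a)+(b): learn/refresh the SA
        out = self.table.get(dst)            # (d): look up the DA
        if out == in_port:
            return []                        # (e): same segment, drop
        if out is not None:
            return [out]                     # (e): forward to the known port
        return [p for p in self.ports if p != in_port]   # unknown DA: flood
```

[Selective forwarding -- returning `[]` or a single port instead of flooding -- is the whole point: it is what gives bridged networks better scaling than repeater-based ones.]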
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 11:40 AM, Todd Goodman wrote:
>
> On 4/12/2022 10:50 AM, Paul Koning wrote:
>>> ...
>>
>> Learning has always been part of what bridges do. It's a core part of the DEC bridge spec, and a core part of the DECbridge-100 functionality. It is the reason why Tony Lauck and George Varghese invented the "timer wheels" scheme for keeping 8000 timers in constant time.
>>
>> A device that doesn't do address learning and floods unicast frames is not a bridge but rather a non-standard piece of hardware. I don't actually know if anyone ever implemented such a device. Certainly I've never seen one or built one myself, even though what I built was called "bridge".
>>
>> paul
>
> I'm not talking about pre-standard DEC devices.
>
> I can show you a standard commodity bridge from multiple vendors right now that will allow you to monitor unicast traffic destined for other ports just by plugging in to one of the other ports on the bridge.
>
> I don't have my 802.1d spec I implemented a bridge from in the 90s

I do have 802.1d, but it's in a box. I know for a fact that learning is part of it, just as it was in the DEC spec, for the obvious reason that they are basically the same architecture. So any compliant bridge forwards unicast traffic to the port on which that address is known to be, flooding only to unknown addresses.

paul
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 10:44 AM, Todd Goodman wrote:
>
> On 4/12/2022 10:12 AM, Paul Koning wrote:
>>
>>> On Apr 12, 2022, at 9:56 AM, Todd Goodman via cctalk wrote:
>>> ...
>>> The big difference in my mind between bridge and switch is:
>>>
>>> * Switches learn what port given MACs are on and only sends unicast traffic destined for that MAC address on that port and not all
>>> * Bridges send unicast traffic to all ports
>>
>> Absolutely not. The only standard device that forwards unicast to all ports is the repeater. I don't know of any packet forwarding device that sends unicast traffic to all ports; certainly no such thing can be found in any standard.
>>
>> Learning was introduced by DEC in the DECbridge 100 (along with spanning tree); IEEE later standardized this, with some small mods, in 802.1d.
>>
>> paul
>
> You snipped the part where I said except for ports that should not receive the traffic due to blocked ports from the Spanning Tree Protocol in 802.1d and that if that fails you end up with a broadcast storm.
>
> Well, I didn't mention STP in 802.1d specifically because I thought it was obvious.
>
> Bridges were useful even after switches arrived to allow monitoring of traffic on any port of the bridge. It was useful before switches got port mirroring and even after as it didn't require any configuration.

Yes, I snipped part of what you said, but that doesn't affect my point.

Learning has always been part of what bridges do. It's a core part of the DEC bridge spec, and a core part of the DECbridge-100 functionality. It is the reason why Tony Lauck and George Varghese invented the "timer wheels" scheme for keeping 8000 timers in constant time.

A device that doesn't do address learning and floods unicast frames is not a bridge but rather a non-standard piece of hardware. I don't actually know if anyone ever implemented such a device. Certainly I've never seen one or built one myself, even though what I built was called "bridge".
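[The "timer wheels" idea mentioned above can be sketched in miniature. The trick, for a workload like bridge address aging where every timer has the same timeout, is a circular array of buckets: (re)arming a timer and ticking the clock are both O(1), independent of how many timers exist. This is an illustrative simplification; the published scheme generalizes to hierarchical wheels and varying timeouts:]

```python
class TimerWheel:
    """Minimal timer wheel for the fixed-timeout case: an entry armed
    now expires exactly `slots` ticks later.  restart() and tick() are
    O(1) in the number of armed timers (tick is O(expired))."""
    def __init__(self, slots):
        self.wheel = [set() for _ in range(slots)]
        self.slot_of = {}                    # key -> slot it currently sits in
        self.now = 0

    def restart(self, key):
        """(Re)arm `key`; refreshing an entry simply moves it -- O(1)."""
        old = self.slot_of.get(key)
        if old is not None:
            self.wheel[old].discard(key)
        s = self.now % len(self.wheel)       # bucket one full revolution away
        self.wheel[s].add(key)
        self.slot_of[key] = s

    def tick(self):
        """Advance the clock one slot; return the set of expired keys."""
        self.now += 1
        s = self.now % len(self.wheel)
        expired, self.wheel[s] = self.wheel[s], set()
        for k in expired:
            del self.slot_of[k]
        return expired
```

[In the bridge, restart() is what step (b) does on every received frame; the aging tick only ever touches one bucket, so 8000 address timers cost no more per tick than eight.]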
paul
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 12:52 AM, Grant Taylor wrote:
>
> ...
> I vaguely remember that there were three main forms of switching: store and forward, cut-through, and a hybrid of the two. My understanding is that S&F had the ability to sanity check (checksum?) frames and only re-send out valid / non-corrupted frames. Conversely C.T. could not do this sanity checking and thus could re-send corrupted frames. The 3rd form did a sanity check on the first part of the frame. -- I think.

The normal type of bridge / switch is the store and forward type, which discards bad packets and forwards only good ones.

Cut through means starting to forward a packet before the end of it has been received. That necessarily means forwarding it without knowing if it's a good frame (good CRC, length, alignment if applicable). The remaining question is what happens with the cut-through frame when the end of packet arrives and is seen to be bad. One possibility is to propagate the received packet exactly, in which case (barring an unfortunate additional data error) it will be seen as bad by the eventual recipient. The other possibility is to force an explicit abort of some sort to make sure the packet is seen as bad. For a mixed LAN type bridge, only the second option is valid (because you aren't doing CRC forwarding in that case). Of course, a lot of mixed type bridges are also mixed speed, where cut through isn't really an option. Theoretically you could have, say, 100 Mb/s Ethernet to FDDI, but in practice I don't know if those existed and doubt that, if so, they used cut through.

You can't do sanity checking on the frame beginning; there isn't anything that gives you a clue whether the start is valid or not. At least not apart from trivial checks like "source address can't be a multicast address". The only data link protocol I can think of that lets you check the frame beginning is DDCMP.

paul
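[The store-and-forward check described above -- verify the trailing FCS, forward only good frames -- can be sketched with CRC-32, the polynomial 802.3 uses. This is an illustrative simplification: bit-serial transmission order, minimum frame size, and alignment checks are glossed over, and the little-endian FCS placement here is an assumption of the sketch:]

```python
import zlib

def append_fcs(payload: bytes) -> bytes:
    """Append a 32-bit CRC (the polynomial 802.3 uses) as a trailer."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def store_and_forward(frame: bytes):
    """Forward the frame only if its trailing FCS checks out; else drop.
    A cut-through switch, by contrast, has already begun transmitting
    before this check is even possible."""
    if len(frame) < 5:
        return None                          # too short to carry an FCS
    body, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(body).to_bytes(4, "little") == fcs:
        return frame
    return None
```

[Note that the check requires the whole frame: the CRC covers everything up to the trailer, which is exactly why, as Paul says, there is nothing comparable you can verify at the frame beginning.]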
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 9:56 AM, Todd Goodman via cctalk > wrote: > ... > The big difference in my mind between bridge and switch is: > > * Switches learn what port given MACs are on and only sends unicast > traffic destined for that MAC address on that port and not all > * Bridges send unicast traffic to all ports Absolutely not. The only standard device that forwards unicast to all ports is the repeater. I don't know of any packet forwarding device that sends unicast traffic to all ports; certainly no such thing can be found in any standard. Learning was introduced by DEC in the DECbridge 100 (along with spanning tree); IEEE later standardized this, with some small mods, in 802.1d. paul
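The learning-and-filtering behavior at issue here can be sketched in a few lines (a hypothetical illustration of 802.1d-style forwarding, not DEC's actual implementation): learn the source address's port on every frame, forward known unicast out only the learned port, and flood unknown or multicast destinations to all other ports.

```python
# Minimal learning-bridge forwarding decision (illustrative sketch).

def is_multicast(mac):
    # the I/G bit: low-order bit of the first address octet
    return int(mac.split(":")[0], 16) & 1 == 1

def bridge_frame(table, ports, in_port, src, dst):
    """Return the list of ports to forward a frame to."""
    table[src] = in_port                 # learning: remember where src lives
    if dst in table and not is_multicast(dst):
        out = table[dst]
        # filter traffic local to its own segment; never send back out
        # the arrival port
        return [] if out == in_port else [out]
    return [p for p in ports if p != in_port]   # flood unknown / multicast
```

A real bridge additionally ages table entries (hence the timer discussion) and only forwards on ports that spanning tree has not blocked.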
Re: Retro networking / WAN communities
> Begin forwarded message: > > From: Grant Taylor via cctalk > Subject: Re: Retro networking / WAN communities > Date: April 12, 2022 at 2:08:22 AM EDT > To: Wayne S , "General Discussion: On-Topic and > Off-Topic Posts" > Reply-To: Grant Taylor , "General > Discussion: On-Topic and Off-Topic Posts" > > On 4/11/22 11:38 PM, Wayne S wrote: >> In the beginning there was thick ethernet limited to 100 m. > > Um > > I *REALLY* thought the 5 & 2 in 10Base5 and 10Base2 was the number of > hundreds of meters that the cable segment could exist on it's own. > > My understanding is that the 100 meter limit came about with 10Base-T. Yes, that is correct. 10Base5 is 500 meter segment limit, 10Base2 is 200 meters max. There are some other small differences: 10Base5 wants you to put the transceiver attachments on the marks on the cable (to avoid having impedance bumps aligned with the wave length); 10Base2 omits that requirement. See the 802.3 spec for all the gory details. >> People wanted computers that were on different floors connected together >> between floors and buildings. That exceeded the 100 meter spec so the >> repeater was born to connect two 100 m thick ethernet seqments. > > I feel like even only 100 meters / 300(+) feet gives quite a bit of > flexibility to connect multiple floors. Especially if you consider the AUI > drop cable. > > Aside: I'm not sure how long an AUI drop cable could be. I'm anticipating > between single and low double digit feet. The spec says 50 meters. And given the 500 meter segment limit, 10Base5 would handle quite a large building. Repeaters serve several purposes. One is to allow a larger station count than is permitted on a single segment. Another is to allow multiple segments either for still greater distances or for convenience. 
For example, it would make a lot of sense to run a segment per floor, a backbone segment up the elevator shaft, and repeaters to connect floor to backbone, even if in principle you can zig-zag a single segment across several floors within the distance limits. >> A repeater was basically a signal booster between two ethernet segments. As >> you added segments interference and collisions became a problem as traffic >> on one segment was passed to all the other connected segments. > > Yep, the 3, 4, 5, rule. Up to five segments of cable with four repeaters and > stations on no more than three of the segments. > >> Hence the bridge was born. It had some intelligence And didn’t pass packets >> intended for computers on its own segment to the other segments thereby >> reducing congestion and collisions. > > Didn't repeaters operate in the analog domain? Meaning that they would also > repeat ~> amplify any noise / distortion too? No, they are digital devices (except that collision sense of course is an analog function, but that lives in the transceiver). They do clock recovery and regeneration. So you'd get some added jitter but not noise or distortion. > Conversely bridges operated in the digital domain. Meaning that they > received an Ethernet frame and then re-generated and transmitted a new > pristine Ethernet frame? Yes, except that bridges were encouraged to repeat the CRC rather than recalculate it, if possible. You can't do that on mixed LANs (like Ethernet to FDDI) but for the Ethernet to Ethernet case you can, and it's a very good thing to do so. >> Then the router was born to connect multiple segments together at one point. >> And it had intelligence to determine what segment a packet should go to and >> route it there. It also prevented packets from going onto segments that >> didn’t have the packet’s intended target thereby reducing congestion. Yes, except that historically speaking this is not accurate; routers predate Ethernet by 10 years or so. 
>> Hubs were born approximately the same time to get over the ethernet tap >> distance because by this time there were more computers in the single area >> that needed to be connected together to the Ethernet. > > Hum > > I can see problems with having enough actual taps on a network segment to > connect all the machines in a given area with AUI drop cable length issues. > > But I know that multi-port taps were a thing. I've read about them and seen > pictures of them for sale. I think I've read about a singular tap that had > eight AUI ports on it. I've seen pictures of four AUI ports on a single tap. Yes, DEC came out with that very early on, the DELNI. > ... >> The switch came about. It was a smart hub that had intelligence. It could >> filter out packets that were not intended for other computers connected to >> it thereby reducing congestion. > > I feel like the switch and the bridge are doing the same thing from a > learning / forwarding / blocking perspective. Learning and spanning tree were part of bridges from the start (the DECbridge 100). That's precisely what made bridges offer
Re: Retro networking / WAN communities
> On Apr 12, 2022, at 12:42 AM, Grant Taylor > wrote: > > On 4/11/22 6:16 PM, Paul Koning wrote: >> I think "hub" is another word for "repeater" (just like "switch" is another >> word for "bridge"). > > Interesting. > > Do you know of any documentation, preferably not marketing materials, that > used "repeater" in lieu of "hub"? DEC documentation. > From my naive point of view, hubs came about when multiple stations connected > to a central location, the center, or hub, of the star if you will. > Conversely, I remember reading (after the fact) about repeaters as something > that existed in pure 10Base5 / 10Base2 networks, predating hubs. > > I'm questioning from a place of ignorance. Like a child asking why fire is > hot. The concept of a repeater goes back to day 1 of Ethernet; you'll find them in the D/I/X Ethernet spec. And they were part of the first batch of Ethernet products from DEC. Yes, AUI based devices, two port. But the next thing out the door was the DEMPR, "Digital Multi-Port Repeater", an 8 port repeater. I think that's 10Base2. I first saw "structured wiring" -- the star wiring with a hierarchy of wiring closets and devices -- around 1986, in the new Littleton King Street DEC building. It had distribution cabinets at the end of each row of cubicles. These looked just like standard office supplies storage cabinets, with shelves; inside you'd find a bridge and a couple of DEMPR repeaters, connected to 10Base2 coax drops to each cubicle. > I think there is a large, > 80%, overlap between switch and bridge, but they > aren't perfect. Bridging some traffic between otherwise incompatible > networks comes to mind; e.g. SNAP between Token Ring and Ethernet or Ethernet > to xDSL (RFC 1483). That's not where the term "switch" was introduced. And devices like that were called "bridge" by market leaders like DEC -- the two generations of FDDI to Ethernet bridges I mentioned were both called "bridge". 
Also, the general operation of the device is the same whether it does MAC frame tweaking or not, 802.1d applies unchanged. Ethernet to non-Ethernet bridges have to do some tinkering with Ethernet protocol type frames (which is where SNAP comes in, all nicely standardized in the FDDI days). For 802.5 they also have to deal with the misnamed "functional" addresses, but that's not hard. There also was such a thing as a "source routing bridge", an 802.5 only bad idea invented by IBM and sold for a while until the whole idea faded away. >> The device I have is a small standalone box, about the size of today's small >> 4-6 port switches you buy at Staples for $100. But it's actually a >> repeater, not a switch, and one of its ports is a 10Base2 connector (BNC >> jack). > > I would firmly consider what you describe as a "hub". I think "hub" is what DEC called the chassis that these boxes could plug in to. >> ... >> That's rather odd because even if someone doesn't obey the letter of the law >> you'd think they would at least support 100BaseT. Or was the problem lack of >> half duplex? Do those management interfaces want to run half duplex? > > No. It's more nefarious than that. You mentioned supporting n - 1 > generation. I'm talking about switches that support 1 Gbps / 10 Gbps / 25 > Gbps / 40 Gbps / 50 Gbps / 100 Gbps. They quite simply don't scale down to > 100 Mbps much less 10 Mbps. -- Why would someone want to support those slow > speed connections on such a high speed switch? Devices like intelligent power > strips or serial consoles or the likes in a cabinet that uses said switch as > a Top of Rack device. -- Our reluctant solution has been to put in a lower > end / un-managed 10 Mbps / 100 Mbps / 1 Gbps switch that can link at 1 Gbps to the > main ToR. I understand now. Yes, that's annoying indeed. >> I think I saw in the standard that Gigabit Ethernet in theory includes a >> half duplex mode, but I have never seen anyone use it and I wonder if it >> would work if tried. 
Perhaps I misread things. > > My understanding is that Gigabit Ethernet (and beyond) only supports full > duplex. Maybe I'm mis-remembering or thinking about what is actually > produced vs theoretical / lab experiments. I took a quick look in the 802.3 spec. In the 2002 edition, Part 3 describes gigabit Ethernet. The intro ("clause 34") has this to say: "In full duplex mode, the minimum packet transmission time has been reduced by a factor of ten. Achievable topologies for 1000 Mb/s full duplex operation are comparable to those found in 100BASE-T full duplex mode. In half duplex mode, the minimum packet transmission time has been reduced, but not by a factor of ten." So yes, it's theoretically part of the spec. As you said, it doesn't seem to be in actual use. > Similarly, I know someone that has 100 Mbps Token Ring, a.k.a. High Speed > Token Ring (HSTR) equipment for their mainframe. And 1 Gigabit Token Ring > was designed in the
Re: Retro networking / WAN communities
> On Apr 11, 2022, at 8:16 PM, Cameron Kaiser via cctalk > wrote: > I still have 10 Mb Ethernet at home (on my Pro, and while it's not in use I have a few 10Base2 bits). >>> Please expand "my Pro". There's not much to go on. >>> #LivingRetroVicariouslyThoughOthers >> DEC Professional 380 (and a caseless 350) -- PDP-11s with a screwball bus >> and their own set of peripherals. I have an Ethernet card for one of them. >> Working on the driver. > > I'd love Ethernet to work in Venix/PRO but I think my 380 is just going to > have > to do some user-level SLIP driver. I suppose that's something I could write up > for gits and shiggles. Perhaps you could adapt the Linux or NetBSD driver for the 82586. I'm not sure the NetBSD one is complete but the Linux one (sun3-82586.c) looks plausible. You'll also need to study the CNA chapter of the Pro technical manual (on Bitsavers) since the board includes some additional control logic as well as the packet memory that the chip talks to. And of course, you'll get to discover all over again that Intel never could design Ethernet chips for beans; the 586 is an especially ugly example of architectural absurdity, with its linked list of descriptors that introduce lots of bad race conditions between the CPU and I/O device. (The depressing thing is that Dijkstra figured out and documented how to do this right 20 years earlier.) paul
Re: Retro networking / WAN communities
> On Apr 11, 2022, at 6:35 PM, Grant Taylor via cctalk > wrote: > > On 4/11/22 4:18 PM, Cameron Kaiser via cctalk wrote: >> Were there ever actual true 10b2 switches? DECbridge-90: AUI or 10Base2 to 10Base2. > ... > IMHO an unmanaged switch is an evolution of a bridge. Or in the past, I used > to say (a very long time ago) a switch was three or more ports and a > bridge was exactly two ports. -- Probably inaccurate in some way. But it > worked for the conversation at the time. That's not accurate. "Switch" is a marketing term invented by certain companies that wanted to pretend their products were different from (and better than) other people's bridges. It never was true that bridges are specifically two port devices. Yes, the very first few models (DEC's DECbridge-100 for example) were two port devices, as was one whose manufacturer I no longer remember that bridged Ethernet over a satellite link (InterLAN?). But the standard never assumed that, neither the original DEC one nor its later 802.1d derivative. To pick one example, the DECbridge-500 is a four port bridge: FDDI to 3 Ethernets. The DECbridge-900 is a 7 port bridge: FDDI to 6 Ethernets. Neither, at the time when DEC introduced them, were called or described as anything other than bridges. The marketeers who flogged the other term also tried to use it to claim it referred to other supposed improvements, like cut-through operation. That was an oddball notion that never made much sense but some people seemed to like doing it in the 10 Mb and 100 Mb era. Of course it doesn't work for any mixed media, and at higher speeds the difficulty goes up while the benefits, if they ever were meaningful in the first place, shrink to microscopic values. For sure it hasn't been heard of in quite a while. I forgot the name of the company, mid 1980s I think, that made a big fuss over "cut through" and I think may also have been the inventor of the term "switch". Cisco bought them at some point. 
Also: neither "bridge" nor "switch" by itself implies either managed or unmanaged. I think DEC bridges were generally unmanaged, though that was mostly because no management standards existed yet. I wasn't around when SNMP became a big deal so I don't know if DEC adopted it when that happened. paul
Re: Retro networking / WAN communities
> On Apr 11, 2022, at 6:07 PM, Grant Taylor via cctalk > wrote: > > On 4/11/22 2:58 PM, Paul Koning via cctalk wrote: >> I don't have a 10Base2 switch, but I have an old repeater with 4-5 10BaseT >> ports and a 10Base2 port. And I have a 10Base2 transceiver (as well as 2 or >> 3 10BaseT transceiver). Good thing because the Pro has an AUI connector. > > I have to ask ... > > Q1 Would you be referring a "hub"? I think "hub" is another word for "repeater" (just like "switch" is another word for "bridge"). The device I have is a small standalone box, about the size of today's small 4-6 port switches you buy at Staples for $100. But it's actually a repeater, not a switch, and one of its ports is a 10Base2 connector (BNC jack). > Q2 Are your transceivers from AUI to 10Base2 / 10BaseT? Or are they > something else more mid-span? AUI connector, yes. Two are little boxes about the size of the connector body but maybe 2-3 inches long, with the coax or RJ45 connector at the other end. The 10BaseT is a DEC product, the 10Base2 I don't remember. I also have an ancient 10BaseT transceiver that's about twice as big, with a jack for an external power source, forgot the maker of that one. > Sorry if this is too pedantic. But I do feel like the pedantry is somewhat > warranted for this list. You won't get an argument from me on that one... :-) >> Same here, except unmanaged. I didn't realize until years into the 1G era >> that 1G is backwards compatible TWO generations, not the usual one. And the >> switch seems to be happy to speak 10 Mb half duplex, nice. > > I'm running into issues with switches not supporting 10 / 100 Mbps management > interfaces for other equipment. That's rather odd because even if someone doesn't obey the letter of the law you'd think they would at least support 100BaseT. Or was the problem lack of half duplex? Do those management interfaces want to run half duplex? 
I think I saw in the standard that Gigabit Ethernet in theory includes a half duplex mode, but I have never seen anyone use it and I wonder if it would work if tried. Perhaps I misread things. paul
Re: Retro networking / WAN communities
> On Apr 11, 2022, at 5:57 PM, Grant Taylor via cctalk > wrote: > > On 4/11/22 11:27 AM, Paul Koning via cctalk wrote: >> I still have 10 Mb Ethernet at home (on my Pro, and while it's not in use I >> have a few 10Base2 bits). > > Please expand "my Pro". There's not much to go on. > #LivingRetroVicariouslyThoughOthers DEC Professional 380 (and a caseless 350) -- PDP-11s with a screwball bus and their own set of peripherals. I have an Ethernet card for one of them. Working on the driver. paul
Re: Retro networking / WAN communities
> On Apr 11, 2022, at 3:36 PM, Zane Healy wrote: > > > >> On Apr 11, 2022, at 10:27 AM, Paul Koning via cctalk >> wrote: >> >> I still have 10 Mb Ethernet at home (on my Pro, and while it's not in use I >> have a few 10Base2 bits). And I did ATM for a living for about 5 years, >> back around 1995, so I can still talk a bit of that. >> >> Hey, you didn't mention FDDI. :-) >> >> Paul > > Hi Paul, > You’re one of the people I’d expect to be running 10Base-2 at home. :-) > I only have one 10Mbit switch still online, it’s for my DECserver 90TL, the > only 10Base-2 device that I keep online (I have a couple others). I don't have a 10Base2 switch, but I have an old repeater with 4-5 10BaseT ports and a 10Base2 port. And I have a 10Base2 transceiver (as well as 2 or 3 10BaseT transceiver). Good thing because the Pro has an AUI connector. > All my 10Base-T devices are plugged into 1Gbit managed switches. Same here, except unmanaged. I didn't realize until years into the 1G era that 1G is backwards compatible TWO generations, not the usual one. And the switch seems to be happy to speak 10 Mb half duplex, nice. paul
Re: Retro networking / WAN communities
> On Apr 11, 2022, at 1:02 PM, Grant Taylor via cctalk > wrote: > > Hi, > > Does anyone know of any communities / mailing lists / newsgroups / et al. for > retro networking / WAN technologies? > > I find myself interested in (at least) the following and would like to find > others with similar (dis)interests to chat about things. > > - 10Base5 / 10Base2 / 10BaseT > - ISDN > - DSL / ADSL / SDSL / HDSL > - T1 / E1 > - ATM > - Frame Relay > - ARCnet > - PSTN / PBX / PABX I still have 10 Mb Ethernet at home (on my Pro, and while it's not in use I have a few 10Base2 bits). And I did ATM for a living for about 5 years, back around 1995, so I can still talk a bit of that. Hey, you didn't mention FDDI. :-) paul
Re: Glass memory?
> On Apr 2, 2022, at 6:27 AM, Liam Proven via cctalk > wrote: > > On Sat, 2 Apr 2022 at 00:34, Bill Gunshannon via cctalk > wrote: >> >> And, as you say, an Arduino or a Pi that fits in my pocket is orders >> of magnitude more powerful and costs pocket money. > > The comparisons of size, power, storage, cost, power usage, heat > output and so on are often made. > > What is less often observed are the facts that a machine that takes > multiple trailers can be repaired with spare parts. Anything made from > ICs basically can't, except by replacing the ICs. But that's true for earlier machines, too. Replacing broken transistors requires having replacement transistors suitable for the circuit in question. For a 1960s era machine that may be quite hard; while transistors are easy to find, transistors with suitable characteristics might not be. And what are they? I'd love to know what the specs of the transistors in CDC's 6000 series "cordwood" modules are. Other than the stage delay (5 ns) I have no idea. > What if you can't make ICs any more? Or rather, what level of IC > fabrication would it be possible to construct from scratch? That's a fun question and a full answer would probably make a good book. For transistors the answer is only marginally simpler. For tubes, quite a lot simpler. (There's a nice Youtube video of someone making tubes, in his basement. You need glass blowing equipment, a spot welder, vacuum pumps, and an inductive heating system. That's about it for tools, as I recall. Materials: pyrex glass, kovar feed through wires, tungsten filaments, not sure what the electrodes are made of.) For semiconductors, you'd start with machinery to make ultra-pure materials (silicon, I'd assume). A Czochralski crystal growing machine to make the cylinders of pure mono-crystal silicon from which wafers are sliced. Polishing machinery. Wafer coating machines. Wafer steppers. Etching, metal coating, diffusion, etc. 
most of which also require very pure and often exotic ingredients. (I remember being amazed to read that chlorine trifluoride is used as a cleaner in the semiconductor industry. Look up the properties of that compound, it will blow your mind.) Reading the specs of the latest generation wafer steppers from ASML (the only company in the world with the technology) boggles the mind, especially if you have some understanding of precision machinery design. And even earlier generation steppers are not easy devices to make. I'm not sure what's involved in doing one precise enough for, say, an 1980s era microprocessor. Or even an SN7400. Transistors are basically the same as small ICs, unless you go to really ancient types (point contact, mesa, alloy diffusion). If I had to build a simple computer starting from a pile of rubble I'd seriously consider building it from tubes. paul
Re: UNIBUS powoer on/off spec
> On Apr 6, 2022, at 9:20 AM, Noel Chiappa via cctalk > wrote: > > ... > I have been told that at one point Google was 'downgrading' results that used > plain HTTP, instead of HTTPS, because they were trying to push people to > switch to HTTPS (this was when everyone was hyperventilating over the Snowden > revelations). Given the near-ubiquitous use of HTTPS these days, I'd have > thought that piece of 'information credit engineering' by our tech overlords > was past its 'sell by' date, and now serves primarily to block people from > finding the material they are looking for (as here). That's a classic example of a rule invented by people who can't think. In fact, HTTP is perfectly fine for sites that are not conducting web-based business activity. Blogs are a good example, and I know at least one that runs HTTP for the simple reason that nothing else is needed. Bitsavers is another example; nothing would be gained by adding all the overhead inflicted by HTTPS. paul
Re: UNIBUS powoer on/off spec
Very impressive detail. You might give a precise source citation on that page. paul > On Apr 5, 2022, at 8:07 AM, Noel Chiappa via cctalk > wrote: > > So, I looked at the early editions of the "pdp11 peripherals hanbook", which > have good, detailed discussions of UNIBUS operations (at the back; chapter 5, > "UNIBUS Theory and Operation", in the 1976 edition), but in contrast to the > level of detail given for master/slave operations, and bus requests and > interrupts, the level of detail on power up/down is very low, especially > compared to that in the later "pdp11 bus hanbook" (which, as mentioned, does > not seem to be online yet, alas). So, I have transcribed that section, and > posted it: > > https://gunkies.org/wiki/UNIBUS_Initialization > > I have yet to scan and post the associated timing diagrams (which are useful, > but not critical); the desktop that runs my scanner is down at the moment, > alas. (If anyone who has a copy would like to volunteer to scan them, that > would be great.) > > Noel
Re: Data recovery (was: Re: SETI@home (ca. 2000) servers heading to salvage)
> On Apr 4, 2022, at 10:55 AM, Warner Losh via cctalk > wrote: > > That's what a sanitize operation does. It forgets the key and reformats the > metadata with a new key. Yes, but the devil is in the details. For example, for the SSD case, it is necessary to verify that the flash block that previously held the key information has been explicitly erased (successfully) and not merely put on the free list. That's a detail I once pried out of a manufacturer, insisting they were required to answer in order to get the business. paul
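The idea behind "forget the key" sanitization can be illustrated with a toy model (not any vendor's design; real self-encrypting drives use AES, this uses a hash-derived keystream purely for demonstration): user data only ever exists on the media in encrypted form, so destroying the media key renders every block unreadable at once, even though the ciphertext remains.

```python
# Toy crypto-erase model: the ciphertext stays on the "media", but once
# the old key is truly gone the data is unrecoverable.
import hashlib
import os

def keystream_xor(key, block_no, data):
    # same operation encrypts and decrypts; blocks up to 32 bytes in this toy
    stream = hashlib.sha256(key + block_no.to_bytes(8, "big")).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

class ToySED:
    def __init__(self):
        self.key = os.urandom(32)   # media key, never leaves the "drive"
        self.media = {}             # block number -> ciphertext

    def write(self, block_no, data):
        self.media[block_no] = keystream_xor(self.key, block_no, data)

    def read(self, block_no):
        return keystream_xor(self.key, block_no, self.media[block_no])

    def sanitize(self):
        # Replace the media key; the old one must be truly destroyed --
        # which is exactly the detail Paul describes verifying above.
        self.key = os.urandom(32)
```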
Re: Data recovery (was: Re: SETI@home (ca. 2000) servers heading to salvage)
> On Apr 4, 2022, at 10:20 AM, Jules Richardson via cctalk > wrote: > > On 4/3/22 10:51, Eric J. Korpela via cctalk wrote: >> drive removed and destroyed for privacy reason. > > For those in the know, how much success - assuming a "money is no object" > approach - do data recovery companies have in retrieving data from drives > that have a) been overwritten with zeros using dd or similar, and b) been > overwritten with random data via a more comprehensive tool? There's a research group in, I think, UCSD which studies that question. From what I recall, in modern hard disk drives with microscopic tracks and not a whole lot of margin anywhere, one overwrite is plenty good. The legendary multiple erase schemes are mostly rumors -- I looked long and hard for the supposed government standards that specify these and found they don't seem to exist -- and no longer useful. SSDs are a different story entirely because there you don't write over the actual data; instead a write updates internal metadata saying where the most recent version of block number xyz lives. So, given that you tend to have a fair amount (10 or 20 percent if not more) of "spare space" in the SSD, previous data are likely to be hanging around. I suspect if you write long enough you could traverse all that, but how to do that depends on the internals of the firmware. That's likely to be confidential and may not even be reliably known. There are SSD SEDs. If designed correctly those would give you cryptographically strong security and "instant erase". Not all disk designers know how to do these designs correctly. If I needed an SED (of any kind) I'd insist on a detailed disclosure of its keying and key management. Prying that out of manufacturers is hard. I've done it, but it may be that my employer's name and unit volume was a factor. paul
Re: Core memory
> On Apr 1, 2022, at 5:13 PM, Brent Hilpert via cctalk > wrote: > > On 2022-Apr-01, at 11:51 AM, Paul Koning wrote: >>> On Apr 1, 2022, at 2:38 PM, Brent Hilpert via cctalk >>> wrote: >>> On 2022-Apr-01, at 6:02 AM, Paul Koning via cctalk wrote: >>> >>>> When I looked at that ebay listing of "glass memory" it pointed me to >>>> another item,https://www.ebay.com/itm/265623663142 -- described as "core >>>> rope memory". Obviously it isn't -- it's conventional core RAM. >>>> Interestingly enough, it seems to be three-wire memory (no inhibit line >>>> that I can see). It looks to be in decent shape. No manufacturer marks, >>>> and "GC-6" doesn't ring any bells. >>> >>> Well, it would still work for 1-bit-wide words, so to speak. One wonders >>> what the application was. >> >> I wonder if the sense wire was used as inhibit during write cycles -- that >> seems doable. It would make the core plane simpler at the expense of more >> complex electronics. With that approach, you have regular memory, not >> limited to 1 bit words. > > Maybe I'm being overly cautious, but offhand I'm initially skeptical without > doing the math or some good vector diagrams, or seeing an example. With the > diagonal wire you're changing the current/magnetic sum vectors in direction > and magnitude. The question is coming up with a current that reliably > performs the cancellation function on the selected core of a bit-array while > reliably *not* selecting another core, while accounting for all the variation > tolerances in the array. > > While there's probably some value by which it would work in theory, I wonder > whether the diagonal wire would narrow the operating margins. From some stuff > I've seen, the hysteresis curves for cores weren't spectacularly square. With > the usual 3D-3wire scheme of a close parallel inhibit wire you have > 'cancellation by simplicity', you maximise the difference (cancellation) > influence on one wire while minimising it's sum influence on the other. 
> > A related issue is the normal diagonal sense stringing (which this looks to > have) has the wire entering the cores from both directions relative to the > address wires, which is why sense amplifiers respond to pulses of both > polarity. If this diagonal wire is put to use as an inhibit wire, some logic > is needed to decide the direction of the inhibit current from the address, > though that may not be very difficult. You're right about the diagonal wiring, that suggests the idea isn't very practical. I suppose another possibility for a 3-wire core plane is that it's linear-select with inhibit (as, for example, in CDC 6000 series ECS, extended core storage). Speaking of CDC and inhibit, the CDC mainframe memories (12 bit by 4k modules) are peculiar in that they have inhibit wire pairs, one X and one Y, and four of them in each direction so a given inhibit acts on a 1/16th square of the full plane. The best reason I can figure for this is to have the number of cores traversed by the inhibit wires and by the address wires to be roughly the same, so the inductance is fairly consistent and a single design of ultra high speed current pulse drivers will work for all of them. The drivers used are current diversion drivers -- they don't switch on and off, instead they switch the current from an idling path to the core wire. It's pretty wild stuff, one of the 6600 training manuals describes it in a lot of detail. My assumption is that all this was done to achieve the unusually high speed: 1 microsecond full read/restore cycle in 1964. paul
Re: Core memory
> On Apr 1, 2022, at 3:38 PM, Joshua Rice wrote: > > > >> On Apr 1, 2022, at 7:51 PM, Paul Koning via cctalk >> wrote: >> >> Neat looking stuff. It doesn't look like core rope memory in the sense of >> the AGC ROM, nor in the sense of the Electrologica X1. It looks more like >> the transformer memory used in Wang calculators that you documented in your >> core ROM paper. >> >> paul >> > > I second the Transformer ROM theory. I guess the transformers are the epoxied > modules on the top half of the board, with some weird magnetic/inductance > wizardry at the bottom doing the adressing. You may find it’s a hybrid of > core rope and transformer ROM for super dense ROMs. I’m no expert at the > nuances of this field though. > > Reminds me a bit of my Wagner Computer transformer ROM: > https://www.reddit.com/r/vintagecomputing/comments/m3pe29/a_very_photogenic_rom_board_from_an_early_70s/ > Nice. Is that actually a transformer ROM, or a square loop type core rope ROM? The physical appearance supports both conclusions. To be sure you'd have to analyze the driving circuitry, or check the properties of the cores used in it. paul
Re: Core memory
> On Apr 1, 2022, at 2:38 PM, Brent Hilpert via cctalk > wrote: > > On 2022-Apr-01, at 6:02 AM, Paul Koning via cctalk wrote: > >> When I looked at that ebay listing of "glass memory" it pointed me to >> another item, https://www.ebay.com/itm/265623663142 -- described as "core >> rope memory". Obviously it isn't -- it's conventional core RAM. >> Interestingly enough, it seems to be three-wire memory (no inhibit line that >> I can see). It looks to be in decent shape. No manufacturer marks, and >> "GC-6" doesn't ring any bells. > > Well, it would still work for 1-bit-wide words, so to speak. One wonders what > the application was. I wonder if the sense wire was used as inhibit during write cycles -- that seems doable. It would make the core plane simpler at the expense of more complex electronics. With that approach, you have regular memory, not limited to 1 bit words. > There are a couple of Soviet core-rope memories up right now: > https://www.ebay.com/itm/294558261336 > https://www.ebay.com/itm/294851032351 Neat looking stuff. It doesn't look like core rope memory in the sense of the AGC ROM, nor in the sense of the Electrologica X1. It looks more like the transformer memory used in Wang calculators that you documented in your core ROM paper. paul
Re: Glass memory?
> On Apr 1, 2022, at 1:25 PM, Chuck Guzis via cctalk > wrote: > > Wasn't some of this glass delay line memory used in early raster-scanned > computer video displays? I don't know about that one, but a delay line is a key component of a PAL (European) system color TV receiver. paul
Core memory
When I looked at that ebay listing of "glass memory" it pointed me to another item, https://www.ebay.com/itm/265623663142 -- described as "core rope memory". Obviously it isn't -- it's conventional core RAM. Interestingly enough, it seems to be three-wire memory (no inhibit line that I can see). It looks to be in decent shape. No manufacturer marks, and "GC-6" doesn't ring any bells. paul
Re: Glass memory?
> On Apr 1, 2022, at 2:56 AM, Mark Huffstutter via cctalk > wrote: > > Here is some pretty good information. > > https://archive.org/details/TNM_Glass_computer_memories_-_Corning_Electronics_20171206_0185 > > Mark Interesting stuff. When I saw Corning I thought glass fiber (optical pulse signals delay line) but their development of optical fiber came later, I think. That Corning document is also interesting because of the comparison of memory technologies it shows. Tunnel diode memories? Hm. And cryogenic, in 1962? Hm again. paul
Re: Webinar: Ethernet's Emergence from Xerox PARC: 1975-1980
> On Mar 28, 2022, at 2:12 PM, Joseph S. Barrera III via cctalk > wrote: > > That was the ALOHA network, which inspired Ethernet but was not Ethernet. The differences are quite crucial. ALOHA is a broadcast radio packet network, which doesn't have collision detect and probably not carrier sense either. So it's about 1/3rd of Ethernet -- just MA. :-) A consequence is that the theoretical channel capacity is also about 1/3rd; ALOHA tops out around 30% of data rate, while Ethernet -- thanks to CS and CD -- can reach pretty much the full wire capacity. paul
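As a footnote to the capacity figures above: the classic textbook throughput formulas put pure ALOHA's peak at 1/(2e), about 18% of the channel rate, and slotted ALOHA's at 1/e, about 37% — which brackets the roughly-one-third figure quoted. A few lines of Python (an illustrative sketch, not part of the original discussion) compute the peaks:

```python
import math

def pure_aloha(G):
    # Throughput of pure (unslotted) ALOHA at offered load G,
    # per the classic S = G * e^(-2G) analysis
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    # Slotted ALOHA: S = G * e^(-G)
    return G * math.exp(-G)

# Peaks occur at G = 0.5 and G = 1.0 respectively
print(f"pure ALOHA peak:    {pure_aloha(0.5):.3f}")    # ~0.184
print(f"slotted ALOHA peak: {slotted_aloha(1.0):.3f}")  # ~0.368
```

Either way the point stands: without carrier sense and collision detect, most of the wire is wasted at high load.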
Re: Unusual “gold” IC chips?
> On Mar 22, 2022, at 1:25 PM, Magnus Ringman via cctech > wrote: > > Those look like "stripline" RF/microwave packages. PCBs will have cutouts > for the package body, so that the leads can be soldered flat (no bends) > directly onto impedance-controlled leads on the board. > > On Tue, Mar 22, 2022 at 6:17 PM Oldcompu via cctech > wrote: > >> Anybody know what these are? Maybe RF related? Found on a box of computer >> chips. >> >> https://share.icloud.com/photos/0294pRVHPFQMShUZic2vFvneg Yes, microwave. And the wide connection strips suggest high power devices, so perhaps power transistors (though why so many leads is unclear) or MMIC amplifiers. paul
Re: LSSM is chasing this, was Re: General Data? Computer
> On Mar 18, 2022, at 3:15 PM, W2HX wrote: > >> For a number of years they had on display the world's oldest broadcast >> transmitter, an FM transmitter from 1919 invented in The Hague by Hanso >> Idzerda. > > Interesting as that would have predated the invention of FM by Edwin Howard > Armstrong in 1933 (or at least what we thought was the invention). But > notably, vacuum tube technology that existed in 1919 might be hard-pressed to > be up to the task. I look forward to doing some more research on this topic. > Thanks! FWIW, in an article I wrote about Idzerda's work I mentioned an analogy: Leif Eriksson's discovery of America, well before the journeys of Columbus. The difference is that Eriksson's travels did not produce any historic followup while Columbus's travels did. Similarly, Idzerda's work was a technological dead end; while a few additional transmitters were built from his design, it disappeared in the late 1920s, and the reactance modulator used by Armstrong was a better technology. In the world of computers you can apply this analogy as well; the Analytical Engine, the ABC computer, and perhaps Zuse's computers would be examples of early work that didn't produce any real descendants. Somewhat different but similar are all the various dead end technology bits, from core rope ROM to bubble memory to magnetic card memory, all things that had a brief and very limited existence but faded and left no progeny. paul
Re: LSSM is chasing this, was Re: General Data? Computer
> On Mar 18, 2022, at 1:31 PM, Dave Wade G4UGM via cctalk > wrote: > > I missed a lot of this because g-mail decided to bounce some e-mails. > > I would like to make a couple of observations:- > > 1. Many real accredited museums have a smaller percentage of their artifacts > on display than private collectors. In the UK both TNMOC and the Science > Museum Group have large quantities of hardware that is not displayed. > The science museum usually catalogues it but it is not really helpful if you > can't see it. They might also get rid of stuff, not necessarily for an obvious reason. I saw a case of this recently, in the Dutch museum Boerhaave in Leiden, which is a national science-related museum. For a number of years they had on display the world's oldest broadcast transmitter, an FM transmitter from 1919 invented in The Hague by Hanso Idzerda. Some time recently it was removed from the museum collection. In that case it went back to the organization it came from, the Picture and Sound Institute, but whether it will be displayed by them is not clear. In any case, that's an example of the uncertain future of artefacts in museum collections. paul
Re: LSSM is chasing this, was Re: General Data? Computer Equipment Auction - GSA
> On Mar 17, 2022, at 1:56 PM, Ethan O'Toole via cctalk > wrote: > > ... > I mean what is a museum really? What about low attendence museums versus > private collections that serve tons of people? Aren't museums private > collections too? Some museums are government establishments, but they don't necessarily play any better than any other collector. paul
Re: LSSM is chasing this, was Re: General Data? Computer Equipment Auction - GSA
> On Mar 17, 2022, at 4:22 PM, Dave McGuire via cctalk > wrote: > > On 3/17/22 14:19, Ethan O'Toole via cctalk wrote: >>> In LSSM's case, it's a wholly-occupied 14,000 square foot commercial >>> storefront building that nobody lives in, in a downtown shopping district, >>> as distinguished from the typical private collection in a garage, basement, >>> etc. >> Right, I know you have a real building and are open to the public and >> operate what I would consider a real museum. That's a good definition. But the fact that an organization has a building, is open to the public, and has a separate legal identity, doesn't necessarily protect adequately. The LCM fits that definition, I believe, yet it long acted as a gray hole, and now seems to have gone essentially full black. For that matter, there are examples in "traditional" musea, as in places with paintings and sculptures in them, going rogue. I remember an infamous court case in the USA involving a museum obliged to be in a particular town, by its founding documents, yet it decided unilaterally to pick up and move and somehow got a court to agree it could do so. Another consideration is how items get removed from the collection. Often there is a storage building that holds much of the collection, while the actual museum displays only a fraction. That's a consideration if access is a concern. A bigger question is how a museum can decide to get rid of ("deaccess" I think is the buzzword used) things in its collection, and if it does so, what becomes of them. paul
Re: LSSM is chasing this, was Re: General Data? Computer Equipment Auction - GSA
> On Mar 17, 2022, at 9:30 AM, Tom Hunter via cctalk > wrote: > > Dave, > > Your following comment is offensive: > > "I hope these systems go to a good home, and don't disappear into the black > hole of a private collection." > > You equate private collections with black holes. I think on the contrary > many private collectors do a better job at preserving old systems than > "museums". > > I remember several "museums" which have failed in recent years. > At least four of them have been much too greedy and took on way more than > they could handle and in effect turned the collections into scrap. > > And then there is of course the sorry saga of the Living Computer "Museum" > in Seattle which has sucked up a lot of old systems from private collectors. Perhaps better than taking offense is to work with those who do a good job in this area and help them do so. Yes, some of us saw the warning signs about the LCM years ago. And I know of various musea doing things contrary to the wishes of their donors, sometimes even with the aid and comfort of the courts. But, with care, it's possible for both private collections and musea to do a good job. The "with care" is the key point, and it applies to every collector. paul
Re: Does anyone/museum test disk packs?
> On Mar 16, 2022, at 10:28 PM, Chris Zach via cctalk > wrote: > >> I vividly recall a log by an operator who had a bad CDC 844 pack who >> proceeded to destroy 5 drives and 3 additional packs. It was detailed >> enough that it read like Gerard Hoffnung's "Bricklayer's Story". > > When I was testing one of my RL02 drives I had a head skid on the disk. > Problem was the air filter was so clogged there wasn't enough air to allow > the heads to fly. Huh? The way I've always understood it is that heads fly from the air entrained by the disk surface as it spins, not from air blown through the drive via the air filters. And clearly that is true for modern drives, since they are sealed. paul
Re: gcobol
> On Mar 15, 2022, at 1:18 PM, Bill Gunshannon via cctalk > wrote: > > On 3/15/22 12:57, Paul Koning wrote: >>> ... > One difference is that GDB will be able to do COBOL mode debugging. >>> >>> Never had a reason to try it but I thought GnuCOBOL allowed the use >>> of GDB. FAQ seems to say it can be used. >> Yes, but presumably in C language mode. > > But I thought there was a comment that because of the liberal use > of comments it was easy tracing a problem back to the COBOL source. > > I'll probably never find out. :-) Same here since I'm not a COBOL programmer. What I meant: COBOL has data types like decimal numbers, which C doesn't seem to have. So how would GDB view such a variable? How would you enter a value if you want to change it? paul
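To illustrate the point about decimal types (a hypothetical sketch, not tied to GnuCOBOL's or gcobol's actual runtime layout): a COBOL item such as PIC S9(3)V99 COMP-3 is conventionally stored as packed BCD, two digits per byte with a trailing sign nibble — something a C-mode debugger would show only as raw bytes. Decoding it looks roughly like:

```python
from decimal import Decimal

def unpack_comp3(data: bytes, scale: int = 0) -> Decimal:
    """Decode IBM-style packed decimal (COBOL COMP-3).
    Each byte holds two BCD digits; the final low nibble is the
    sign (0xC or 0xF positive, 0xD negative)."""
    nibbles = []
    for b in data:
        nibbles.append(b >> 4)
        nibbles.append(b & 0x0F)
    sign = nibbles.pop()
    value = 0
    for d in nibbles:
        if d > 9:
            raise ValueError("invalid BCD digit")
        value = value * 10 + d
    if sign == 0x0D:
        value = -value
    # 'scale' is the implied decimal point position (the V in the PICTURE)
    return Decimal(value).scaleb(-scale)

# PIC S9(3)V99 COMP-3 holding 123.45 -> bytes 12 34 5C
print(unpack_comp3(bytes([0x12, 0x34, 0x5C]), scale=2))  # 123.45
```

A COBOL-aware debugger would do this translation (and the reverse, for depositing a new value) transparently; a C-mode one leaves it to the user.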
Re: gcobol
> On Mar 15, 2022, at 12:39 PM, Bill Gunshannon via cctalk > wrote: > > On 3/15/22 09:12, Paul Koning wrote: >>> On Mar 14, 2022, at 9:05 PM, Bill Gunshannon via cctalk >>> wrote: >>> >>> On 3/14/22 20:53, Paul Koning via cctalk wrote: >>>> Saw a note on the GCC list that I thought some here might find >>>> interesting: it announces the existence (not quite done but getting there) >>>> of a COBOL language front end for GCC. Interesting. For those who deal >>>> in legacy COBOL applications that want a more modern platform, I wonder if >>>> this might be a good way to get there. Run old COBOL dusty decks on >>>> Linux, yeah... >>> >>> We already have GnuCOBOL which works just fine (most of the time). >> Yes, although that one is apparently more limited. > > In what way? I thought I saw a comment to that effect in the announcement; looking more closely that isn't the case, other than the limitations you get from going through C as an intermediate language. (Same sort of reason why the C++ to C converter is no longer used.) >> And GnuCOBOL is a COBOL to C converter. gcobol is a full front end. > > Is there some shortcoming in using C as an intermediate language? Yes, debugging. It means the debugger sees a C program, and it's somewhere between difficult and impossible to apply the original source semantics while debugging. >> One difference is that GDB will be able to do COBOL mode debugging. > > Never had a reason to try it but I thought GnuCOBOL allowed the use > of GDB. FAQ seems to say it can be used. Yes, but presumably in C language mode. paul
Re: gcobol
> On Mar 14, 2022, at 9:05 PM, Bill Gunshannon via cctalk > wrote: > > On 3/14/22 20:53, Paul Koning via cctalk wrote: >> Saw a note on the GCC list that I thought some here might find interesting: >> it announces the existence (not quite done but getting there) of a COBOL >> language front end for GCC. Interesting. For those who deal in legacy >> COBOL applications that want a more modern platform, I wonder if this might >> be a good way to get there. Run old COBOL dusty decks on Linux, yeah... > > We already have GnuCOBOL which works just fine (most of the time). Yes, although that one is apparently more limited. And GnuCOBOL is a COBOL to C converter. gcobol is a full front end. One difference is that GDB will be able to do COBOL mode debugging. >> I wonder if I can make build that front end with the pdp11 back-end. :-) > > I wasn't aware it was still possible to build the PDP-11 back-end. I > thought support for that was dropped ages ago. No, it's still there. I picked it up when it needed a maintainer. It's actually been upgraded to deal with GCC changes, for example the new condition code handling which produces somewhat better code. (Also the new register allocator, as an option, which unfortunately produces somewhat worse code.) paul
gcobol
Saw a note on the GCC list that I thought some here might find interesting: it announces the existence (not quite done but getting there) of a COBOL language front end for GCC. Interesting. For those who deal in legacy COBOL applications that want a more modern platform, I wonder if this might be a good way to get there. Run old COBOL dusty decks on Linux, yeah... I wonder if I can build that front end with the pdp11 back-end. :-) paul
Re: Rack Discussion Continued - Slide lubricant
> On Mar 4, 2022, at 4:06 PM, Peter Coghlan via cctalk > wrote: > >> >> I have several difficult slides in my H960 rack. >> What is the best lubricant for the slides? >> I was wondering if graphite would work better than oil due to the fact that >> it won't pick up dirt and dust. > > Powdered graphite for lubricating locks? > > I wouldn't like to have conductive stuff like that anywhere it might > get sucked in by fans and deposited on PCBs. I had the same thought. And from what I've heard, graphite is no longer recommended for locks either; part of the reason seems to be that it absorbs moisture. Instead there are spray cans with neat teflon-bearing very light lubricant, they work very nicely. If I had slides that needed lubrication I might try some anti-seize compound like the stuff some guns want in the action. paul
Re: While on the subject of cabinets...
> On Mar 2, 2022, at 11:45 AM, Chris Elmquist via cctalk > wrote: > > On Tuesday (03/01/2022 at 04:36PM -0800), Marc Howard via cctech wrote: >> I've got a PDP 11/34 I've never opened up. It's mounted in a H9642 >> cabinet. I can't get the bloody thing to extend on the chassis track >> slides. >> >> Is there a catch or lock screw on this unit? > > Mine (and, we may be learning, may not be a proper configuration) does not > have any release or catch to allow the CPU to slide out. I just grab it > and start pulling and it slides out-- although it does not slide easily. > That could be due to old, stiffened lubricant on the slides. Might be a non-standard slide, or a defective lock. > BUT! make sure you pull out the front foot at the bottom of the rack to keep > the whole rack from tipping forward if you do get the CPU to slide out. > > The CPU is a heavy beast and the rack WILL tip forward once the CPU is > out far enough. That's why H960 cabinets have optional front stabilizer feet. paul
Re: Racking a PDP-11/24
> On Feb 26, 2022, at 3:05 PM, Rob Jarratt wrote: > >> ... >> Hardware stores can fix that. Or Brownells, where you can get really good >> screwdrivers that are less likely to damage screw heads than standard >> hardware store ones do. > > I am not in the USA, but I should be able to look for other screwdrivers > here in the UK. I already have one quite big one, but I think it is still > way too small for this purpose. Brownells is a gunsmith supply store, but the screwdrivers I was talking about are also known as "clockmaker's screwdrivers". Either way, they have hollow-ground hardened blades rather than flat-sided bevel tips, and they come in a range of widths as well as thicknesses. I use them for any situation where a good fit in the screw head is important. For oversized screwdrivers, the ones that are sold as pry bars can serve... paul
Re: Racking a PDP-11/24
> On Feb 26, 2022, at 3:14 AM, Rob Jarratt via cctalk > wrote: > > I am wondering if I have racked my 11/24 correctly. > > > > As you can see here: > https://robs-old-computers.com/2022/02/10/pdp-11-24-progress/ I have put the > CPU at the top and the two RL02 drives underneath. That seems fine. Others mentioned having them at the top of a low cabinet, but the RL02s I used were in H960 (6 foot) racks, mid-level with stuff above them. > The problem is that the CPU enclosure catches on the RL02 underneath. There > is a bit of play in the mounting bracket: > https://rjarratt.files.wordpress.com/2022/02/cpu-mounting-bracket.jpg. With > a bit of manipulation I can get the CPU to slide in. However, I am wondering > if I have racked it correctly? I don't think there is room to move the RL02s > down and it would presumably leave a bit of a gap below the CPU. There seems > to be very little clearance between the CPU and the RL02 at the front but > more at the back, but I am sure that the rails are mounted horizontally. Is > it just a matter of tightening the big screws that hold the mounting > brackets to stop the play? If so I am not sure I have a big enough > screwdriver! Hardware stores can fix that. Or Brownells, where you can get really good screwdrivers that are less likely to damage screw heads than standard hardware store ones do. Something I observed on my H960 that wasn't all that obvious at first: the holes are NOT evenly spaced. If I remember right, they come in groups of four where the spacing between groups is something like 1/8th of an inch more than the spacing within groups. The consequence is that if you attach your brackets using the wrong set of holes things may be 1/4 inch (or whatever the delta is) closer than they were meant to be. paul
Re: Information about an unknown IC
> On Feb 24, 2022, at 1:16 PM, Brent Hilpert via cctalk > wrote: > > On 2022-Feb-24, at 8:29 AM, Clemar Folly via cctalk wrote: >> >> I'm looking for information about Texas Instruments TB-759933 IC. >> >> Does anyone have the datasheet or any other information about this IC? > > > A search shows this question was posted over here, with a picture: > > https://atariage.com/forums/topic/331769-unknown-cart-ic-please-i-need-some-help/ Wow, that's a sorry looking board. It looks like it was assembled by someone using a soldering gun and acid-core solder. But most plumbers would do better work than that. paul
Re: 11/83 operating system load update -2
> On Feb 23, 2022, at 12:38 PM, Ethan Dicks via cctalk > wrote: > > On Tue, Feb 22, 2022 at 9:29 PM Rod Smallwood via cctalk > wrote: >> 2. The PC I want to use is a DEC Celeibris FX ie the PC and its W95 >> software is as supplied by DEC. > . > . > . >> 5. putR was supposed to be able to do this. It does not. > > Rod, > > My memory is that programs like putr need to run on "real" DOS, not a > DOS window. So if you are trying to run putr without booting to MS-DOS > 6.2 or older, that could be the source of your problems with it. I don't know PUTR, but my experience with DOS int13 code in RSTSFLX (built with DJGPP) is that it worked fine in a DOS window on Win95. And that makes sense, because Win95 is basically just a bit of UI veneer over DOS. Win NT is an entirely different thing, of course, and I would not expect DOS low level I/O to work in Win NT command windows since those really are not DOS. paul
Re: 11/83 operating system load update -2
I think you're unnecessarily limiting your options by refusing to use Linux, which as we've pointed out is something you can do on your existing PC without overwriting the OS that is on it now. As for SIMH, I am quite convinced that it IS a perfectly good answer. But sure, if you have all the floppies you need to do an actual install via floppy media, that's fine too. It does limit you somewhat; it means you have to use an OS for which RX50 was a supported kit type. For example, RSTS/E doesn't come that way, so if you want that, the SIMH route is your only option. And it would not be hard to do so long as you can find a PC interface for that SCSI drive. paul > On Feb 22, 2022, at 9:29 PM, Rod Smallwood via cctalk > wrote: > > Hi > >Well I have had a huge response to my request. > > I am unsure as to if I have defined the problem properly. > > So a few bullet points. > > 1. The objective is to copy RX50 disk images (*.dsk format) to genuine DEC > RX50 disks. > > 2. The PC I want to use is a DEC Celeibris FX ie the PC and its W95 software > is as supplied by DEC. > > 3. It has an RX33 5.25 inch floppy drive. > > 4. The RX33 _*is*_ capable of reading and writing RX50 disks. > > 5. putR was supposed to be able to do this. It does not. > > 6. All that is lacking is the right utility. > > 7. Doing this does not need any disks other than RX50's. > > 8. Linux in any of its myriad of forms is not the answer. > > 9. simH is good at what it does but of no use here > > 10. Its just a W95 utility program to copy an RX50 disk image to an RX50 > disk on an RX33 drive on a DEC PC. > > 11. So whats it called? Does it work given the above situation? > > Rod > >
Re: Installing an operating system on the 11/83 - update.
> On Feb 22, 2022, at 7:33 PM, Fred Cisin via cctalk > wrote: > > From the FDC point of view, which doesn't have optical view of the drive and > media, the 80 track DD 5.25" looks similar to a "720K 3.5" drive. > (80 tracks, 9 sectors per track, 300 RPM, 250K data transfer rate) > > On SOME PCs, setting the CMOS floppy setting to "720K" may take care of it. Originally I wrote my RX50 floppies on a Windows PC. The drive was a plain old 5.25 inch PC drive, normally used for 9 sector per track PC floppies. It turns out some BIOS operations will reset it to 10 sectors, which is RX50 format, and then BIOS int13 operations can read and write it. I coded up support for that in RSTSFLX, which can be found on my Subversion server (in branches/V2.6). The original version was built with Borland C++, but I switched to DJGPP which made all that much easier. No CMOS or other magic needed, just an application that knows how to speak int13. And of course an old enough Windows, or plain DOS, which allows you to do those operations. Subsequently I moved all this to Linux. There is (was?) a tool -- fdparm? -- that you could use to tweak the floppy layout settings. A simple entry in its config file would give the RX50 layout. Then it's just a matter of handling the sector interleaving, track skew, and odd track numbering. With just a few more lines of code, the application can handle the parameter setting so no prior setup is needed, which is what I ended up doing in RSTSFLX (in C). The latest version does this as well, but in Python. So as far as I can see, all this stuff is perfectly easy if you just use a plain ordinary floppy drive. paul
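The logical-to-physical mapping Paul alludes to can be sketched in a few lines. The constants below (2:1 sector interleave, a skew of two sectors per track, and a one-track offset in the track numbering) are assumptions drawn from common descriptions of the RX50 format, not taken from RSTSFLX; check a real implementation such as rx50.py before relying on them:

```python
# Hypothetical sketch of an RX50 logical-to-physical translation.
SECTORS = 10   # 512-byte sectors per track
TRACKS = 80

def phys(lbn):
    """Map a logical block number to (physical track, physical sector)."""
    track, lsec = divmod(lbn, SECTORS)
    # 2:1 interleave within a track: logical 0..9 -> 0,2,4,6,8,1,3,5,7,9
    psec = (lsec * 2) % SECTORS + (1 if lsec >= 5 else 0)
    # skew: each track's numbering is rotated two sectors from the last
    psec = (psec + 2 * track) % SECTORS
    # "odd track numbering": logical track 0 lives on physical track 1
    ptrack = (track + 1) % TRACKS
    return ptrack, psec + 1   # sectors are numbered from 1 on the medium

print(phys(0))   # -> (1, 1)
print(phys(1))   # -> (1, 3)
```

With a table like this, copying a logical-order image file to a physical-order medium (or to an xHomer-style physical-order image) is just a reshuffle of 512-byte records.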
Re: cctalk Digest, Vol 89, Issue 21
You could boot a packaged Linux that doesn't need installation but runs directly from the boot device. I haven't done this but I know they are out there and easy to use. SimH complex and lots of setup? Not my experience. The documentation may be sparse in places, as I found when configuring a PDP-10 setup, but the PDP-11 setups are well documented. SCSI controller, that's beside the point. I assume it looks to the PDP-11 as an MSCP controller, right? It would have to be, else you'd have no chance of running a standard OS. If so, you'd just tell SIMH to configure an MSCP controller with a disk of size matching what you have. When you said "won't write a disk image to a real RX50" do you mean an RX50 drive, or an RX50 floppy in a plain PC 5.25 inch drive? I don't know about the former, but the latter has long worked for me. I haven't used Windows for stuff like that in ages, and don't want to use it if I can avoid it, but my RSTSFLX 2.6 can be built for DOS (using DJGPP). I don't have an executable of that version handy but could probably create one. That doesn't create from images, though; it manipulates RSTS file systems. A simple program to copy an image, along the lines of the rx50.py I mentioned, would not be hard to make. paul > On Feb 22, 2022, at 5:20 PM, Rod Smallwood via cctalk > wrote: > > I'm sure that will work. Unfortunately dd is a linux command. > > I only have windows PC's. > > simH is highly complex and needs a lot of setup. ( I know - I tried - total > nightmare) > > It does not have support for the CMD CDD 220 SCSI controller and a RH-18A > > I have a working 11/83 with a 2gig SCSI drive and RX50.(it passes the diags > and boots XXDP+) > > None of the methods suggested so far gets me an RX50 bootable OS install set. > > Latest fail.. putR does not as claimed write disk images to a real RX50 under > W95. (write protect error) > > The SCSI2SD costs $150 US in the UK. 
> > So the simple requirement to copy an RX50 disk image (which I have) to an > RX50 remains. > > Rod > > > > > > On 22/02/2022 19:27, Adam Thornton via cctalk wrote: >> The 11/83 question sounds like a job for SCSI2SD to me. Install a system >> with simh. dd the resulting disk image to your sd card. Hook the SCSI2SD >> up to your 11/83 and boot from the card. Copy the contents of that drive >> to your real SCSI drive. Done. >> >> SCSI2SD cards are not expensive and are a tremendous value for money.
Re: Installing an operating system on the 11/83 - update.
> On Feb 22, 2022, at 12:27 PM, Joshua Rice via cctalk > wrote: > > I have a generic 5.25” (not sure of brand) in my dell GX1 but it writes plain > SSDD floppies in RX50 format no problem. > > The RX33 was a pretty standard PC floppy drive AFAIK, just configured (with > jumpers) to work as an RX33. You may find better milage configuring it as a > PC floppy drive, as PUTR expects to work on PC drives at the device level. > Having a real RX33 might be throwing it off. Don’t take it as gospel, since > i’ve not got an RX33 to test it with. > > Not sure if PUTR can copy images to a floppy, as i’ve only used it to build a > bootable RT11 disk, and make a few RT11 disks out of the contents of images > mounted by PUTR. You might find it better to work on a blank formatted floppy > and build up from there. I have a utility to go between real floppies and images, it's included with my RSTSFLX utility. It can deal with interleaving, so (for example) you can have an image file in logical block order as they usually are, and copy that to a floppy in the correct physical order. Or you can have an image that's in physical order, as you would use for the xHomer fork of SIMH. Look for svn://akdesign.dyndns.org/flx/trunk -- the program I mentioned is rx50.py. I have no idea if this can be made to work on Windows, but it runs fine on Linux. (I did once, long ago, write code for DOS -- DJGPP -- to access the PC floppy in the right way to read/write RX50 format floppies, but while that works fine under Win95 it probably won't work under WinNT derivatives.) paul
Re: Installing an operating system on an 11/83
> On Feb 21, 2022, at 10:11 PM, Zane Healy via cctalk > wrote: > > On Feb 21, 2022, at 4:32 PM, Rod Smallwood via cctalk > wrote: >> >> Hi >> >> I have built an 11/83 in a BA23 box. >> >> It has a KDJ-11B, 2mB PMI memory, an RQDX3 with an RX50 attached, >> >> Plus a CMD CQD 220A Disk controller with a digital RH18A 2Gig SCSI drive >> attached. >> >> Diag sees drive as RA82. >> >> It boots and runs the diag disk and XXDP+ just fine. >> >> I do not have install distributions for any of the 11/83 operating systems. >> >> Daily driver system is a Windows 10 PC. >> >> So how do I install an operating system? >> >> Suggestions please. > > You can install RT-11, RSX-11M, and RSX-11M+ from CD-R, I couldn’t figure out > how to install RSTS/E from CD-R. > > Zane How did you get a CD-R image of kits for those OS? I'm not sure if it has been done for RSTS but it should be possible. I once did some work for Fred Knight when he was looking into creating a CD image of the OS and all its layered products; the question was whether a bootable CD could be created that would nevertheless look like it had a valid ISO file system on it. The answer is yes and my RSTSFLX program (the V2 version) had a feature intended to produce such an image. But the project faded away before it completed, and I don't know that such a CD was ever produced. Still, a RSTS disk kit is a simple thing: a bootable disk with a RSTS file system on it, containing a few files needed to get the new system disk set up and all the remaining bits in the form of a collection of backup sets. Boot the distribution device, use the "dskint" and "copy" options to copy the basic files to the destination disk, boot that disk, and run the installation script. More precisely, those steps will all run automatically, triggered by the fact that the kit is a read-only file system. 
A RSTS floppy kit is tricky only because the basic files don't fit on one floppy, so you have to split them across several and include marker files that trigger media swaps. I've looked for the MicroRSTS kit building scripts but don't think I've seen them. Reverse engineering them is certainly possible. Not trivial; all that machinery assumes it's running on the RSTS team's main development machine, which isn't what I have. As for the question why there aren't RX50 kits for many of the choices: that's because RX50 isn't a convenient distribution device, and DEC didn't sell configs such as the one we're talking about here, at least not for RSTS systems. With RSTS, you got a choice of a handful of kit media, which typically were things you'd want anyway (like a magtape, good for backups). So you'd get a system with that kind of configuration, and everything works painlessly. BTW, Rod, do you have any kind of network interface? An Ethernet device would be ideal. With that, you could install just a core setup from floppies or other hairy procedures, then copy the remaining kits across the network and install from the local copies of the kits. paul
Re: Seeking paper tape punch
> On Feb 21, 2022, at 6:07 PM, Guy Fedorkow wrote: > > hi Paul, > Yes, I should have said -- I'm looking for a machine that can punch under > control of a computer. > Whirlwind actually used seven-bit Flexowriters for reading and punching > (along with a high-speed reader later on), but I think it would be even > harder to find fresh seven-level tape even if a seven bit machine turned up. > I actually have been using a BRPE on loan from another contributor to this > list, but it's time to return the unit, so I've started to look for > alternatives. > I assume something like an ASR-33 would do the trick, although a machine > without keyboard and printer might have fewer moving parts to go wrong. But > I don't see many plausible choices on ebay. > If anyone can suggest other sources, I'll poke around The nice thing about an ASR33 (or other hardcopy terminal with reader/punch like a TT model 15) is that you can interface them to a computer rather easily, just hook up a UART with appropriate driver/receiver circuitry. RS232 to 20 mA (or 60mA for a Model 15) isn't totally trivial but it certainly is no big deal. And those slow machines actually have the nice benefit that it's easy for people to see the action, and to get some understanding at a gut level of how slow computers were in those days. I understand there is a group called "Green keys" -- ham radio operators who use old "teletype" machines -- which in that community means any sort of keyboard telex-type machine, not necessarily made by Teletype Co. though US ones often are. 5 bit machines are common in that crowd, some 8 bit machines also appear. I haven't participated, but I would think that you might find pointers to options there. As for 7 bit tape media: I found out in the past year or so that there actually was such a thing as paper tape of width designed for 7 tracks, but a lot of "7 bit" paper tape work actually used 1 inch wide tape, i.e., what is normally considered 8 bit tape. 
For example, the Flexowriters on which I did my first programming at TU Eindhoven used a 7-bit code but on 8 bit tape. paul
Re: Installing an operating system on an 11/83
> On Feb 21, 2022, at 7:32 PM, Rod Smallwood via cctalk > wrote: > > Hi > > I have built an 11/83 in a BA23 box. > > It has a KDJ-11B, 2mB PMI memory, an RQDX3 with an RX50 attached, > > Plus a CMD CQD 220A Disk controller with a digital RH18A 2Gig SCSI drive > attached. > > Diag sees drive as RA82. > > It boots and runs the diag disk and XXDP+ just fine. > > I do not have install distributions for any of the 11/83 operating systems. > > Daily driver system is a Windows 10 PC. > > So how do I install an operating system? > > Suggestions please. So all you have is a BIG disk and a floppy drive? You're in a world of hurt. The straightforward answer is to get an OS kit on floppies, and run the installation procedure. The problem is that a lot of the more obvious choices for OS (given that you have a 22 bit machine with big memory and disk) don't have floppy kits. For example, RSTS had a trimmed down kit called "micro-RSTS" back in the V9 era; I don't know that a V10 version of that was ever done. And even that severe trim job still took 10 floppies. Do you have a PC that can do I/O to that SCSI drive? If so, the best answer may be to run SIMH on that PC, with the SCSI disk as its disk drive. Then feed an install kit to an emulated SIMH tape drive or whatever you need for the kit media. Or you could do an image copy of someone else's system, provided it sits on a disk that's close enough in size to what you have. "Close enough" depends on the OS. For example, with RSTS it would work so long as the two devices have the same "device cluster size", i.e., their sizes rounded up to the next power of two are the same. An obvious question is what sort of system you're looking for. RT, RSX-11M+, RSTS, Ultrix, BSD 2.11 are all possibilities. Exotic choices like IAS or DSM may not like the CPU and/or the controller. But RT and Ultrix are, to put it mildly, rather different systems. paul
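Paul's "close enough" rule for RSTS can be checked mechanically. A minimal sketch of that rule as stated above -- two disks are compatible if their sizes round up to the same power of two; the block counts used below are hypothetical, not any real drive's geometry:

```python
def device_cluster_equivalent(blocks_a, blocks_b):
    """Paul's rule of thumb: two disks have the same RSTS 'device
    cluster size' if their sizes, rounded up to the next power of
    two, are equal."""
    def next_pow2(n):
        p = 1
        while p < n:
            p *= 2
        return p
    return next_pow2(blocks_a) == next_pow2(blocks_b)

# Hypothetical sizes in 512-byte blocks:
print(device_cluster_equivalent(1_200_000, 1_100_000))  # True: both round up to 2**21
print(device_cluster_equivalent(1_200_000, 900_000))    # False: 2**21 vs 2**20
```

An image copy between the first pair would work under this rule; the second pair would not, even though the sizes differ by only about 25%.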
Re: Seeking paper tape punch
> On Feb 21, 2022, at 4:26 PM, Guy Fedorkow via cctalk > wrote: > > [apologies if this is a dup, but I didn't see it coming back in any of the > cctalk digests] > > Greetings CC-Talk, > I've been working on a low-budget project to help to introduce students to > history of computing through material we have from MIT's 1950's Whirlwind > project. The activity would have more of a hands-on feel if we could use > actual paper tape. > A simple reader is easy enough, but a punch is a bit harder. We don't need > anything "authentic", or fast, or high performance, just something fairly > reliable. >If anyone can suggest where to find such a machine, could you let me know? > Fanuc PPR, GNT 4601/4604, and the DSI NC-2400 have been cited as possible > candidates, but I don't see anything that looks like a good match on ebay. > > Thanks! > /guy fedorkow Do you mean a punch as a computer peripheral, or a keyboard operated tape punch? For the former, the ones you mentioned are obvious choices; BRPE is another. Also the DEC paper tape reader/punch (PC01 or some such number). For keyboard operated, there's Teletype, Flexowriter, Creed, Siemens, depending on where you're located. ASR33 is a common 8-bit punching terminal. Older models that use 5-level tape ("Baudot") may also be around, and those could certainly serve for 1950s era machines that may well have actually used those. I don't know what Whirlwind used, but I know some other 1950s machines that used 5 bit tape for their I/O. Electrologica X1 is an example. paul
Re: VAX9000 unearthed
> On Feb 19, 2022, at 1:28 PM, Jon Elson via cctalk > wrote: > > On 2/18/22 21:43, ben via cctalk wrote: >> >> The 70's was all low scale tech. I suspect it was the high speed/edge rates >> more the power that kept ECL from common use. Any other views on this topic. >> Ben, who only had access to RADIO SHACK in the 70's. >> PS: Still grumbling about buying life time tubes at a big price, >> just to see all tubes discontinued a year or two later. > > Edge rates on pedestrian MECL 10K were not crazy fast. Rise and fall of > about 1 ns, but the gate propagation delay was ALSO about 1 ns, so that was a > lot faster than TTL. ECL was very easy to work with, crosstalk was not a > common issue. But, you HAD to terminate any line over a foot, and better to > make it 6" to be sure. And, the termination and pulldown resistors ate a LOT > of power! I think there are a number of reasons why ECL was niche technology. One is that TTL was fast enough for most applications. Another is that more people knew TTL, and ECL requires (some) different design techniques. Yet another is that higher levels of integration appeared in CMOS but not ECL. Yet another is that ECL was expensive compared to the alternatives, partly because of the low integration and partly because of the low volume. In the mid-1980s (I think) there was a very interesting project at DEC Western Research Lab to build a custom VLSI ECL processor chip. A lot of amazing design was done for it. One area was power and cooling: the chip was estimated to consume about 100 watts, which in that day was utterly unheard of, by a substantial margin. This was solved by a package with integral heat pipe. Another issue was the fact that ECL foundries each had their own design rules, and they were shutting down frequently. So the CAD system needed to be able to let you specify a design where the fab rules were inputs to the layout algorithms. 
The design took great advantage of ECL-specific logic capabilities like wire OR or stacked pass transistors. I remember that the CAD system let the designer work at multiple levels in the same chip: at the rectangle level (for memory arrays), transistor level, gate level, and even write some constructs as programming language notations. For example a 64-bit register could be specified as: for (i = 0; i < 64; i++) { transistor-level schematic of a one-bit register } Originally the idea was to use this for a 1 GHz Alpha; I think it ended up being a 1 GHz MIPS processor. Possibly the project was killed before it quite finished. That seems to have been one of the very few examples of ECL going beyond SSI. The physical possibility existed; the economics did not. paul
Re: VAX9000 unearthed
> On Feb 18, 2022, at 4:30 PM, Chris Zach via cctalk > wrote: > >> XMI already existed as the system bus for the VAX 6000 series machines. >> I/O on the VAX 6000's was via an XMI-to-BI bridge. I don't remember the >> exact performance specs on XMI, but it was wider and faster than BI. > > I thought XMI was only supposed to be a CPU/memory bus, with IO being done by > multiple VaxBI busses. That's what we had on the 6000 at the computer > Society: 2 CPUs, memory, and two VaxBi with a SCSI disk controller on each. From what was just reported, the 6000 series indeed did it that way. But I think on the 9000 it was an I/O bus too. I definitely remember some work on XMI based I/O devices, in particular an FDDI card. And indeed you can find a spec for that device in http://www.bitsavers.org/pdf/dec/xmi/ . paul
Re: VAX9000 unearthed
> On Feb 18, 2022, at 3:18 PM, Gary Grebus wrote: > > On 2/18/22 09:46, Paul Koning wrote: >> ...The 9000 also had its own I/O bus, XMI, different from BI. I don't know >> how its performance compares, whether it was worth the effort. > > XMI already existed as the system bus for the VAX 6000 series machines. I/O > on the VAX 6000's was via an XMI-to-BI bridge. I don't remember the exact > performance specs on XMI, but it was wider and faster than BI. > > XMI was then also used as one of the possible I/O buses on the VAX 1 and > AlphaServer 7000 and 8000 series machines, via a system bus to XMI bridge. > So the XMI I/O adapters were common across all these series of machines. I didn't remember all those details, thanks. There also was an effort at one point to adopt FutureBus in DEC systems. We did a pile of design in the network architecture group to figure out how to handle interrupts and bus cycles efficiently; I don't remember if anything actually shipped with that stuff. paul
Re: VAX9000 unearthed
> On Feb 18, 2022, at 12:16 PM, Lee Courtney wrote: > > Paul, > > What was the timeframe for the MPP? I thought late 1980s. Just did some searching, which turns up some manuals for the "DecMPP 12000". And a trade press article that says it's a rebadged MasPar machine. https://en.wikipedia.org/wiki/MasPar says that MasPar was founded by ex-DEC chip VP Jeff Kalb. He took a design done at DEC, for a massively parallel machine inspired by the Goodyear MPP with some changes. DEC decided not to build that so MasPar did and DEC then resold it. The description sounds vaguely familiar. The manual I downloaded says it has 1024 cores per board, and up to 16 boards. Neat. paul
Re: VAX9000 unearthed
> On Feb 18, 2022, at 7:08 AM, Joerg Hoppe via cctalk > wrote: > > Hi, > > my computer club c-c-g.de could acquire the remains of a VAX9000 ! > The machine ran at the GWDG computing center in Göttingen, Germany, around > 1993. > Parts of it were in stock of their museum for 20+ years. > > See lots of hires-pictures at > > https://c-c-g.de/fachartikel/359-vax-9000-ein-starker-exot > > (scroll to the bottom for a slide show). > > Joerg Excellent photos! I didn't realize the 9000 had a vector processor. One reason the design was so expensive is that it was originally planned as a water-cooled machine -- code name "Aquarius". At some point that idea was dropped and switched to air cooling -- code name "Aridus". I guess those skinny pipes with red and blue markers carry jets of cooling air, but were originally going to carry water. The 9000 also had its own I/O bus, XMI, different from BI. I don't know how its performance compares, whether it was worth the effort. Speaking of vector processors: there's a very obscure DEC processor, the DEC MPP. I remember seeing the processor architecture document when it was being designed, not sure why. It's a very-RISC machine, just a few instructions, but lots of cores especially for that time -- 256? More? Recently I saw it mentioned in some documents, apparently it did get produced and shipped, perhaps only in small numbers. I wonder if any have been preserved. As far as I know there is no family connection between that machine and anything else DEC did before or since. paul
Re: Also WTB: DEC VSXXX-AA Mouse or Compatible
> On Feb 11, 2022, at 2:52 PM, Jonathan Stone via cctalk > wrote: > > > If available, I'd like to purchase a bunch. FWIW: it would not be all that hard to convert from readily available USB (or PS/2) mice to DEC protocol. I did the analogous work for keyboards (LK201) on Arduino. That same hardware could do this job with different firmware. Someone asked me about that a while ago; it's not something I would undertake because I don't have any relevant systems to use it on, but it doesn't look like a difficult job. paul
Re: DEC Tape TU56 head pictures
> On Feb 9, 2022, at 11:11 AM, Mike Katz via cctalk > wrote: > > I am in the process or restoring a TU56 so it's in pieces. Pictures of the > head were requested so here they are. > > These were taken with my phone so the quality is only mediocre. > > This, like the rest of the drive, is in the process of being restored. > > I'm still not sure how to clean the front of the head where the tape touches > the head. Any ideas? We always used isopropyl alcohol (91% or better). paul
Re: DECTape head problem
> On Feb 8, 2022, at 5:14 PM, Wayne S via cctech wrote: > > Searched a lille bit for Western Magnetics. Here’s a site that has some > surplus heads, even a western magnetics onebut probably not the correct one. > There is a corporate charter record for Western Magnetics in Minnesota dated > 1964. Maybe this is the same company. There’s also a tape head from Michigan > Magnetics. Maybe a merged company? > > https://www.surplussales.com/Equipment/magnetic-tape.html Those all look like audio heads, nothing even vaguely resembling a DECtape head. paul
Re: DECTape head problem
> On Feb 8, 2022, at 4:04 PM, Ron Pool via cctech wrote: > >> So it seems dectape heads are special. I don’t think Dec would have the >> desire to make them internally so they probably contracted with a company >> already set up to do that. Who were the big tape head manufacturers at that >> time? Does anyone know? > > A photo of the back of a TU56 DECtape head can be seen at > https://www.pdp8online.com/tu56/pics/head_label.shtml?small . > The head has a label on it that reads: > Western Magnetics > Glendale Calif. > Record > 7282 > > I've never seen a TU56 in person and have no idea if they have separate read, > write, and erase heads or some other combo. The "Record" notation on the > above head's label hints to me this might be a write head. > > I found that and other DECtape photos at > https://www.pdp8online.com/tu56/tu56.shtml . This picture https://www.pdp8online.com/tu56/pics/TU56_front.shtml?large shows the tape path clearly. There is just one head assembly that performs reading as well as writing. I don't know what "Record" refers to; the numbers near it look vaguely like a date code though not the usual year and week number. The maintenance manual (on Bitsavers) speaks of a "read/write head" and has an illustration that shows one of the head elements with a "read/write coil". So the implication is that (a) there isn't an erase head, and (b) the same head serves for read or write according to whether the coil is being driven or sensed. Come to think of it, I think erase heads are an aspect of audio tapes, not relevant to computer tapes. paul
Re: DECTape head problem
> On Feb 8, 2022, at 2:53 PM, Wayne S via cctech wrote: > > Since so many audio tape players and computer magtape units were made it > would stand to reason that there has to be a stash somewhere of tape heads > and it’s just a matter of finding where they are. > Are there any part numbers on the dectape heads? The schematics are bound to show DEC part numbers, but how those translate into supplier part numbers is anyone's guess. Or perhaps they were made internally by DEC? In any case, DECtape heads are unusual. Computer tapes are normally 1/2 inch wide (a few old tape drives had different widths, like the 14 track 1 inch CDC tape). But DECtape and LINCtape are 3/4 inches wide, with 10 head positions. Audio tapes are unlikely to be helpful; consumer reel to reel tape is 1/4 inch with 2 tracks (interleaved for when you flip over the reel?); professional decks might have 8 tracks or more on 1/2 or 1 or 2 inch wide tape, but I don't remember ever seeing 3/4 inch wide audio or instrumentation heads. paul
Re: DEC AXV11-C analog board
> On Feb 8, 2022, at 2:34 PM, Douglas Taylor via cctalk > wrote: > > Update on this: I did put together a battery and voltage divider to test the > AXV11. The label on the A/D module says it brings the output from the > multiplexer to one of the external pins. I was able to verify that the > voltage applied to a couple of the A/D inputs makes it through the > multiplexer when selected using the CSR. The next output available is from > the Sample and Hold, and this is always pegged at +12v. Am I wrong to assume > that the sample and hold will 'freeze' its output when the A/D go bit is set? You're correct, a S/H circuit is supposed to hold the value that was on its input at the time it was told to take the sample. Typically it won't hold it "forever"; S/H circuits have a hold time spec chosen so it is substantially longer than the time it takes the A/D behind it to complete its measurement. paul
Re:
> On Feb 2, 2022, at 1:20 PM, John Ames via cctalk > wrote: > >> Back in the bad old days of the 5160 PC, some DTC controllers allowed for >> partitioning a drive (using witch settings) > I think "witch settings" is my new preferred term for this. They're > certainly mysterious and arcane enough. Nice. It would be a good term to apply to VMS SYSGEN parameters that are documented as having units "microfortnights". paul
Re: OT: Who? What? Was: Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 6:46 PM, Jon Elson via cctalk > wrote: > > On 2/1/22 15:40, Paul Koning via cctalk wrote: >> >>> On Feb 1, 2022, at 4:31 PM, Grant Taylor via cctalk >>> wrote: >>> >>> On 2/1/22 11:23 AM, Paul Koning via cctalk wrote: >>>> Did any DEC MSCP disks use it? >>> Please expand "MSCP". My brain is failing to do so at the moment. >> Mass Storage Control Protocol, the geometry-independent storage access >> scheme DEC created in the early 1980s. Early implementations include the >> HSC50 (for VAXclusters) and the UDA50 (Unibus adapter), talking to disk >> drives such as the RA80. >> >> With MSCP, DEC switched to addressing disks by sector offset, as SCSI did >> later, rather than by geometry (cylinder, track, sector) > > All SCSI devices were logical block number, all the way back to the original > SASI (Shugart Associates System Interface). I had a 10 MB Memorex Winchester > drive with SASI adapter on my Z-80 CP/M system in about 1981 or so. Maybe I > misunderstood your sentence above, what the "later" applied to. I meant that SCSI appeared later than MSCP. And that it used LBA addressing, but MSCP did it before SCSI. paul
Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 6:00 PM, Warner Losh via cctalk > wrote: > > On Tue, Feb 1, 2022 at 12:42 PM Grant Taylor via cctalk < > cctalk@classiccmp.org> wrote: > >> On 2/1/22 2:14 AM, Joshua Rice via cctalk wrote: >>> There's several advantages to doing it that way, including balancing >>> wear on a disk (especially today, with SSDs), as a dedicated swap >>> partition could put undue wear on certain areas of disk. >> >> I thought avoiding this very problem was the purpose of the wear >> leveling functions in SSD controllers. > > All modern SSD's firmware that I'm aware of decouple the physical location > from the LBA. They implement some variation of 'append store log' that > abstracts out the LBAs from the chips the data is stored in. One big reason > for this is so that one worn out 'erase block' doesn't cause a hole in the > LBA > range the drive can store data on. You expect to retire hundreds or > thousands of erase blocks in today's NAND over the life of the drive, and > coupling LBAs to a physical location makes that impossible. Another reason is that the flash memory write block size is larger than the sector size exposed to the host, and the erase block size is much larger than the write block size. So the firmware has to keep track of retired data, move stuff around to collect an erase block worth of that, then erase it to make it available again to receive incoming writes. The spare capacity of an SSD can be pretty substantial. I remember one some years ago that had a bug which, in a subtle way, exposed the internal structure of the device. It turned out the exposed capacity was 49/64th of the physical flash space. Strange fraction, I don't think we were ever told why, but the supplier did confirm we analyzed it correctly. paul
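The "append store log" decoupling described above can be sketched in a few lines. This is a toy model, not any vendor's actual FTL: every write appends to the currently open erase block and updates a logical-to-physical map, so retiring a worn-out erase block never punches a hole in the LBA range.

```python
class ToyFTL:
    """Toy flash translation layer: LBAs map to (block, page) through a
    table, and every write appends to the current open erase block."""
    def __init__(self, blocks, pages_per_block):
        self.pages_per_block = pages_per_block
        self.free_blocks = list(range(blocks))  # erase blocks not yet in use
        self.map = {}                           # lba -> (block, page)
        self.flash = {}                         # (block, page) -> data
        self.open_block = self.free_blocks.pop(0)
        self.next_page = 0

    def write(self, lba, data):
        if self.next_page == self.pages_per_block:
            # open block is full: start filling a fresh erase block
            self.open_block = self.free_blocks.pop(0)
            self.next_page = 0
        loc = (self.open_block, self.next_page)
        self.flash[loc] = data
        self.map[lba] = loc     # any older copy of this LBA is now stale
        self.next_page += 1

    def read(self, lba):
        return self.flash[self.map[lba]]

ftl = ToyFTL(blocks=4, pages_per_block=2)
ftl.write(7, b"old")
ftl.write(7, b"new")   # an overwrite appends; LBA 7 now points elsewhere
```

Real firmware adds garbage collection to reclaim the stale pages and wear leveling across erase blocks; that bookkeeping, plus spare blocks, is where over-provisioning like the 49/64 fraction Paul mentions goes.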
Re: OT: Who? What? Was: Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 5:08 PM, Chuck Guzis via cctalk > wrote: > > On 2/1/22 13:40, Paul Koning via cctalk wrote: >> > >> With MSCP, DEC switched to addressing disks by sector offset, as SCSI did >> later, rather than by geometry (cylinder, track, sector) on devices like the >> RK05 and RP06. If the OS sees only an LBA, it doesn't matter whether the >> drive uses zone recording; such complexity can be hidden inside the >> controller firmware. But I don't know if that was actually done, either at >> that time or in later generations. > > Good grief, it took DEC all that time? CDC was doing it in the 1960s. > Had to, because of the wide variety of RMS available. I think that > one of the early 2311 clone drives (854?) used 256-byte (8 bit byte) > hard-sectored media, which isn't very friendly to systems with 60 bit > words. I recall that several sectors were used to create a logical > 60-bit word addressable sector, with a substantial part of the last > sector of a logical PRU left unused. I didn't know that one. The only drive I really know is the 844, an RP04 lookalike, which does have friendly size sectors, laid out by the controller ("BSC"). LBA addressing, in CDC? Where is that? On the 6000 series, I remember classic c/h/s addressing. The OS would convert those to "logical track and sector" addresses, sure. But that was a file system structure thing really. PLATO ignored all that overhead and laid its own file system directly on top of the disks, with the file system block offset to c/h/s mapping done in the PP. So yes, it necessarily knew the drive layout. For that matter, with "logical tracks" the OS still had to know the layout; it just got buried into the logical to physical mapping system request code, in the CP monitor for extra inefficiency. paul
Re: OT: Who? What? Was: Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 4:31 PM, Grant Taylor via cctalk > wrote: > > On 2/1/22 11:23 AM, Paul Koning via cctalk wrote: >> Did any DEC MSCP disks use it? > > Please expand "MSCP". My brain is failing to do so at the moment. Mass Storage Control Protocol, the geometry-independent storage access scheme DEC created in the early 1980s. Early implementations include the HSC50 (for VAXclusters) and the UDA50 (Unibus adapter), talking to disk drives such as the RA80. With MSCP, DEC switched to addressing disks by sector offset, as SCSI did later, rather than by geometry (cylinder, track, sector) on devices like the RK05 and RP06. If the OS sees only an LBA, it doesn't matter whether the drive uses zone recording; such complexity can be hidden inside the controller firmware. But I don't know if that was actually done, either at that time or in later generations. paul
Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 2:42 PM, Grant Taylor via cctalk > wrote: > > On 2/1/22 2:14 AM, Joshua Rice via cctalk wrote: >> There's several advantages to doing it that way, including balancing wear on >> a disk (especially today, with SSDs), as a dedicated swap partition could >> put undue wear on certain areas of disk. > > I thought avoiding this very problem was the purpose of the wear leveling > functions in SSD controllers. Definitely. But apparently wear from repeated writes is a thing on very high density modern HDDs, much to my surprise. It's not as dramatic as flash memory but it apparently does exist. For most purposes it probably isn't very important. Especially not swap partitions: if you're swapping enough for this to matter you have bigger problems. :-) paul
Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 1:03 PM, Chuck Guzis via cctalk > wrote: > > On 2/1/22 09:16, Mike Katz via cctalk wrote: >> In the rotating drive world there is constant linear velocity (CLV) and >> constant angular velocity (CAV) drives. >> >> On CLV drives the speed of rotation would vary based on the track >> (slower in the inner tracks and faster on the outer tracks). This meant >> that the data rate and number of bits/track remained constant. >> >> On CAV drives the rotational speed of the drive doesn't change, this >> means that the data rate and number of bits/track changes depending on >> the track. > > I suspect that most recent ATA and SCSI drives employ "Zoned" recording. > That is, the disk is divided up into several annular "zones", each with > its own data rate. The rotational speed remains constant, however. As far as I know zoned recording is universal at this point. Don't know how far back it goes. Did any DEC MSCP disks use it? > This is not even recent. The old Bryant 4000 disks used such a scheme > and it was used on many old drives after that. Those are the ones CDC sold as 6603. Four zones, handled by the driver. (Also 12 bit parallel data rather than the usual serial data stream.) paul
Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 1:09 PM, Glen Slick via cctalk > wrote: > > On Tue, Feb 1, 2022 at 10:04 AM Paul Koning via cctalk > wrote: >> >>> Slower on the outer tracks, I believe. CDs work this way. >> >> I suspect CLV was invented for CDs, in fact. > > Which came first CLV CDs, or CLV LaserDiscs? I forgot about LaserDisc. That's earlier, says Wikipedia. paul
Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 12:21 PM, Paul Koning via cctalk > wrote: > >> On Feb 1, 2022, at 12:16 PM, Mike Katz via cctalk >> wrote: >> >> In the rotating drive world there is constant linear velocity (CLV) and >> constant angular velocity (CAV) drives. >> >> On CLV drives the speed of rotation would vary based on the track (slower in >> the inner tracks and faster on the outer tracks). This meant that the data >> rate and number of bits/track remained constant. > > Slower on the outer tracks, I believe. CDs work this way. I suspect CLV was invented for CDs, in fact. The reason is obvious: CDs contain uncompressed digital audio, i.e., constant bit rate. If you want to avoid big buffers -- an expensive thing to have in 1980s consumer electronics -- the bits have to come off the media at essentially the desired payload data rate. So you either use CAV with constant sector counts, which wastes a whole lot of capacity given that the ratio of inner to outer radius is quite large on a CD, or you go to CLV. The variable rotation rate is easy enough to handle because CDs are accessed sequentially; the speed change on track switch is small because track switches are only by +1 (during play). You can often hear the RPM changes clearly, if you're asking the CD player to do random access by skipping around the songs. paul
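The capacity argument above is easy to quantify. A back-of-the-envelope comparison, assuming a program area running from roughly 25 mm to 58 mm radius (approximate CD figures, used here only for illustration):

```python
# CLV vs CAV capacity on the same disc, assuming approximate
# CD program-area radii.
r_inner_mm, r_outer_mm = 25.0, 58.0

# CLV packs bits at constant linear density on every track, so total
# capacity scales with the average track circumference.
clv = (r_inner_mm + r_outer_mm) / 2

# CAV with a constant sector count stores on every track only what fits
# on the innermost track, so capacity scales with the inner radius.
cav = r_inner_mm

print(f"CLV holds about {clv / cav:.2f}x what constant-sector CAV would")
```

With these radii the ratio comes out around 1.66, i.e., a constant-sector CAV layout would throw away roughly 40% of the disc -- which is why CLV made sense for a medium whose payload is a fixed-rate audio stream.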
Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 12:21 PM, Paul Koning via cctalk > wrote: > > > >> On Feb 1, 2022, at 12:16 PM, Mike Katz via cctalk >> wrote: >> >> In the rotating drive world there is constant linear velocity (CLV) and >> constant angular velocity (CAV) drives. >> >> On CLV drives the speed of rotation would vary based on the track (slower in >> the inner tracks and faster on the outer tracks). This meant that the data >> rate and number of bits/track remained constant. > > Slower on the outer tracks, I believe. CDs work this way. More precisely: CLV means slower rotation when positioned on the outer cylinders. The outer cylinders have more sectors; the layout is such that the linear bit density is roughly constant, which, combined with the constant linear velocity, in turn means a constant data rate. >> On CAV drives the rotational speed of the drive doesn't change, this means >> that the data rate and number of bits/track changes depending on the track. > > It means that only if the sector count changes. That's true for modern > drives and for the CDC 6603; it wasn't true for quite a while. A lot of > "classic" disk drives have constant sector counts. So, for example, an RP06 > is a CAV drive and its transfer rate is independent of cylinder number since > the sector count per track is constant. > > I think hard drives are CAV as a rule because changing the spin rate as part > of a seek takes too long. Variable sector count is independent of CLV vs. CAV. Modern drives have it, classic CAV drives mostly do not. A CAV drive with fixed sector counts has fixed data rate; a CAV drive with more sectors on the outer tracks has higher transfer rate on those tracks. paul
Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 12:16 PM, Mike Katz via cctalk > wrote: > > In the rotating drive world there is constant linear velocity (CLV) and > constant angular velocity (CAV) drives. > > On CLV drives the speed of rotation would vary based on the track (slower in > the inner tracks and faster on the outer tracks). This meant that the data > rate and number of bits/track remained constant. Slower on the outer tracks, I believe. CDs work this way. > On CAV drives the rotational speed of the drive doesn't change, this means > that the data rate and number of bits/track changes depending on the track. It means that only if the sector count changes. That's true for modern drives and for the CDC 6603; it wasn't true for quite a while. A lot of "classic" disk drives have constant sector counts. So, for example, an RP06 is a CAV drive and its transfer rate is independent of cylinder number since the sector count per track is constant. I think hard drives are CAV as a rule because changing the spin rate as part of a seek takes too long. paul
Re: Origin of "partition" in storage devices
> On Feb 1, 2022, at 5:02 AM, Liam Proven via cctalk > wrote: > > ... > I suggested making a D: drive and putting the swap file on it -- you > saved space and reduced fragmentation. > > One of our favourite small PC builders, Panrix, questioned this. They > reckoned that having the swap file on the outer, longer tracks of the > drive made it slower, due to slower access times and slower transfer > speeds. They were adamant. And very obviously wrong -- elementary geometry. It is true that the outer tracks are physically longer. But that doesn't mean transfer rates are slower. Given the older formatting where the count of sectors per track is constant, so is the transfer rate -- the same number of sectors pass the head per revolution, i.e., in constant time, no matter what track you're on. The bits are physically longer, of course. That's why later drives put more sectors per track as you move outward, and that means that the transfer rate on outer tracks is *higher* than for inner tracks. And some storage systems indeed use that knowledge. Incidentally, while constant sector count was the norm for a long time, it wasn't universal; the CDC 6603, in 1964, had "zones" with the sector count changing between zones. Outer zones had more sectors per track. Unlike modern drives, the OS driver had to handle that. paul
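The transfer-rate point can be made concrete. At constant RPM the time per revolution is fixed, so sustained rate is just sectors per track times sector size times revolutions per second; outer zones with more sectors are proportionally faster. The zone table below is made up for illustration, not any particular drive's geometry:

```python
RPM = 7200
SECTOR_BYTES = 512
revs_per_sec = RPM / 60

# Hypothetical zones of a zoned-recording drive, outermost first:
# zone name -> sectors per track
zones = {"outer": 1170, "middle": 900, "inner": 630}

for name, sectors in zones.items():
    rate = sectors * SECTOR_BYTES * revs_per_sec  # bytes per second
    print(f"{name:6s}: {rate / 1e6:6.1f} MB/s")
```

With these numbers the outer zone streams nearly twice as fast as the inner one, which is why some storage systems deliberately place hot sequential data on low LBAs (which conventionally map to the outer tracks).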
Re: Origin of "partition" in storage devices
> On Jan 31, 2022, at 7:35 PM, Noel Chiappa via cctalk > wrote: > >> From: Tom Gardner > >> You define logical disks by assigning a logical disk unit number to a >> file on a physical disk. You can then use the logical disk as though it >> were a physical disk. > > To me, 'partition' implies a contiguous are of the disk; "a file" to me > implies that it might not be contiguous? Or are files contiguous in the RT-11 > filesystem? (I know there were filesystems which supported contiguous files.) Yes, RT-11 is a somewhat unusual file system in that it doesn't just support contiguous files -- it supports ONLY contiguous files. That makes for a very small and very fast file system. The only other example I know of that does this is the PLATO file system. As for partition vs. file, the two differences I see are: (1) layering: the partition is below the file system. (2) partitions are originally entirely static (set at creation and never changed) and even later on changed only rarely and typically with substantial technical difficulty. paul
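Why contiguous-only allocation makes a file system small and fast: mapping a file offset to a disk block needs no index structure at all, just one addition. A sketch of the idea (the directory entries here are hypothetical and not RT-11's actual on-disk format):

```python
# With contiguous-only files a directory entry reduces to
# (start_block, length_in_blocks); lookup is a bounds check plus an add.
directory = {
    "PROG.SAV": (120, 35),   # hypothetical entries
    "DATA.TXT": (155, 10),
}

def block_of(name, file_block_offset):
    """Map a block offset within a file to a physical block number."""
    start, length = directory[name]
    if not 0 <= file_block_offset < length:
        raise ValueError("offset past end of file")
    return start + file_block_offset  # no extent tree, no block map

print(block_of("DATA.TXT", 3))
```

The price, of course, is the one Paul implies: files can't grow in place, and free space fragments until the disk is squeezed.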
Re: Origin of "partition" in storage devices
> On Jan 31, 2022, at 2:01 PM, Tom Gardner via cctalk > wrote: > > There is a discussion of the origin of the term "partition" in storage > devices such as HDDs at: > https://en.wikipedia.org/wiki/Talk:Disk_partitioning#Where_did_the_term_%22p > artition%22_originate? > > It seems clear it was used in memory well before HDDs but when it got > started there is unclear. > * IBM PC DOS v2 was an early user in 1983 with FDISK and its first PC > support of HDDs > * UNIX, Apple OS's and IBM mainframe all seem to come later. > > Partitioning as a "slice" probably predates IBM PC DOS v2 > > Would appreciate some recollections about DEC usage, other minicomputers and > the BUNCH. > > You can either post directly to Wikipedia or let me know; links to > references would greatly be appreciated > > Tom RSX has partitions; the term goes back at least as far as RSX-11/D. It may well go back to older versions such as RSX-15, but I haven't looked. OS/360 came in three flavors, PCP, MFT, and MVT. The OS/360 article describes MFT as having partitions selected at sysgen time (the acronym stands for "multiprogramming with a fixed number of tasks", with the concurrent tasks assigned across the available partitions). Both of these are memory partitions. The only OS I can think of predating the ones you mentioned is RT-11, in its later versions (V2 did not have them). When did Unix first get partitions? paul
Re: What was a Terminal Concentration Device in DEC's products?
> On Jan 31, 2022, at 11:04 AM, Noel Chiappa via cctalk > wrote: > >> From: Bob Smith > >> the original UART was designed by DEC, Vince Bastiani was the project >> lead and designer, Gordon Bell was behind the project, and it may have >> been his idea. > > "Computer Engineering: A DEC View of Hardware Systems Design" covers this, in > a footnote on pg. 73. > > I seem to recall reading that Ken Olsen was involved in doing early modem > work (presumably on Whirlwind), but maybe my memory is failing, and it was > someone else (like Bell, or someone). Too busy to research it at the moment... Don't know about that. But I do know that Ken Olsen was involved in magnetic logic -- it's his thesis subject -- which is the basis of core rope ROM memory design. There's a reference to his work in one of the 1960s era papers about core rope memory. (It names "Olsen" without a first name, but I assume there weren't two Olsens at that lab doing that work.) paul
Re: What was a Terminal Concentration Device in DEC's products?
> On Jan 30, 2022, at 4:44 PM, John Forecast via cctalk > wrote: > > ... > Is it possible that the DMX11 was a CSS product? Clearly it is; the Option/Module list (1983 edition, from Bitsavers) says so. It shows the controller and three different 64 port line units, for different signal specs ("differential" presumably RS-422, "EIA" presumably RS-232, and 20 mA current loop). paul
Re: What was a Terminal Concentration Device in DEC's products?
> On Jan 30, 2022, at 2:43 PM, Chris Zach via cctalk > wrote: > > ... > From what I can see, the kmc11 was an M8204 single board which is > different from the 8200 used in the dmc11. I had a DMC11 somewhere. > > From the books, the kmc11 had an "lsi11 on board", 1k of 16 bit ram, 1k of 8 > bit data memory, a 300ns cycle time, 16 bit microprocessor with a 16 bit > micro-instruction bus and 8 bit data path. This is according to the 1980 > Terminal and Communications handbook, so it's a few years after the 1976 > timeframe of Sha Tin. A KMC-11 has no resemblance whatsoever to an LSI-11 or any other PDP-11 processor. It's a custom microcontroller designed to be a coprocessor on the Unibus. The DMC-11 processor card is not quite the same thing as a KMC-11; its firmware is in ROM rather than RAM, for one thing. I don't know if there are any subtle instruction set differences. Certainly the architecture is at least mostly the same; this can be seen from the fact that RSTS at startup probes various internal state of the DMC-11 by making it execute instructions, and those instructions can be readily understood by reading the KMC-11 manual. It looks like the DMC-11 had a 1k program ROM, the KMC-11/B a 4k RAM, and the DMR/DMP microprocessor seems to be 6k ROM (the drawings are a bit confusing). A consequence of the tiny program memory in the DMC was that the high speed version had a couple of limitations and bugs, described in the DMC-11 microprocessor manual. paul
Re: What was a Terminal Concentration Device in DEC's products?
> On Jan 29, 2022, at 3:58 PM, Noel Chiappa via cctalk > wrote: > >> From: Paul Koning > >> DH-11 is unusual in that it has DMA in both directions > > McNamara's DH11? (I don't know of another DEC device of that name.) Per: > > > http://www.bitsavers.org/pdf/dec/unibus/EK-ODH11-OP-002_DH11_Asynchronous_16-line_Multiplexer_Users_Manual_Sep76.pdf > > it's DMA on output only; the input side has a FIFO that has to be emptied by > the CPU. Oh. That's amazing; all these years I thought it had DMA both ways. Clearly not. I wonder how I got that misunderstanding. paul
Re: What was a Terminal Concentration Device in DEC's products?
> On Jan 29, 2022, at 12:28 AM, Chris Zach via cctalk > wrote: > > Old question: I'm looking through some old reports from 1977 about a failed > DEC project with the DMX11 multiplexer system and there is reference to the > following key items: > > 1) The DMX was designed to handle block mode devices. Fine. > 2) Character mode devices like the VT52's were supposed to use a "TCD" > product from DEC. > > The reason the project imploded was because apparently DEC stopped supporting > the TCD in RSX11/D in late 1976, so someone in CSS had the great idea of > agreeing to extend the microcode in the DMX11 to handle both block AND > character mode devices. This did not work well and it sank the project. > > What I'm wondering is what was the TCD for PDP11's back then? I don't see > anything in my communications handbooks on this, and even the DMX11 doesn't > really appear; instead there are the COMM/IO/P type boards which worked with a > pile of DZ11's. From what I can glean from this documentation it looks like > the DMX11 worked in a similar fashion, as the requirement was that the DMX11 system > be a nine board solution (possibly 8 DZ11's and one controller board). > > More oddly, it looks like the TCD *was* still supported in RSX11/M and > ultimately the decision was made to build the thing in M, so it's weird they > continued to whack away at the DMX solution instead of going with TCD's for > async and proven DMX microcode for block devices. > > Any thoughts, or does this jog any memories? Nothing comes to mind here; the name "DMX" does not ring any bells. It's a bit before my time, admittedly. DEC made some products that used block mode terminals: the moderately successful Typeset-11 with the VT-61/t forms and page editing terminal, and the VT-71 with embedded LSI-11 to do full file local editing. Both have some form of block transfer to the host, but as far as I can remember they used ordinary DH-11 terminal interfaces. 
DH-11 is unusual in that it has DMA in both directions, which is unhelpful for interactive use but great for block transfer. Typeset-11 also supported a specialized terminal made by Harris (the 2200), another local processor device, this one connected to the PDP-11 host with a DL-11/E, using half duplex multidrop BISYNC with modem signal handshakes. I kid you not... I have some scars debugging that protocol at 2 am in downtown Philadelphia. DEC also built yet another VT-51 variation, the VT-62, which was the terminal for the TRAX system. That was, I think, some sort of RSX derivative (-M+ perhaps, but I'm not sure), which made it to field test but was canceled before becoming an official product. Not sure why. paul
Re: Question about DECtape formulation
> On Jan 26, 2022, at 1:28 PM, Tom Gardner via cctalk > wrote: > > ...The UNISERVO I, of Univac I, tape drives had a separate spool of clear > very thin film that was clock motor wound across the head when tape was > moving, since the phosphor bronze plated tape was very abrasive. That > existed long before LINCtape/DECtape." > > Note that LINCtape is DECtape :-) Media-wise, yes. The format is very different indeed. paul
Re: Question about DECtape formulation
> On Jan 25, 2022, at 3:13 PM, Bjoren Davis via cctalk > wrote: > > > On 1/25/2022 9:18 AM, Paul Koning via cctalk wrote: >> >>> On Jan 24, 2022, at 10:27 PM, Gary Oliver via cctech >>> wrote: >>> >>>> ... >>> >>> BTW, does anyone know of a source for these vinyl strips? My old ones are >>> 10 mil blue very-flexible vinyl without any adhesive. They rely only on the >>> cohesive properties of vinyl-to-a-non-porous surface. I tried using some >>> of the 'dry vinyl' sheets from Cricut (the plastic decal printer company.) >>> They have a couple of colors without adhesive that they call "window cling" >>> but they are only 4 mils thick and a bit flimsy, though so far they are >>> holding ok. >> There's a children's toy I remember: shapes cut from vinyl, intended to be >> stuck to windows to make pictures. That seems to be the same stuff. >> >> paul >> > Paul, > > Are you thinking of Colorforms (https://en.wikipedia.org/wiki/Colorforms)? Yes, that's it. I would think those will work fine. I had something similar as a kid, and as I recall the thickness was similar to that of the DECtape "little blue things". paul
Re: Question about DECtape formulation
> On Jan 24, 2022, at 10:27 PM, Gary Oliver via cctech > wrote: > >> ... > > As to the real reason I was doing this: Most of my tapes are un-boxed and > have suffered being in a dusty area (before I got them) with the dust forming > a sort of 'crust' on the outside of the tape. It's only on the first wrap or > so, but it's enough that it keeps those handy vinyl cohesive tapes from > sticking. For that reason, I was trying to find something to clean off this > dusty gunk so the vinyl strip would hold the tape in a spooled condition. > It was the side-effect of this effort that led me to the discovery of this > "removable layer" on the DECtape. > > BTW, does anyone know of a source for these vinyl strips? My old ones are 10 > mil blue very-flexible vinyl without any adhesive. They rely only on the > cohesive properties of vinyl-to-a-non-porous surface. I tried using some of > the 'dry vinyl' sheets from Cricut (the plastic decal printer company.) They > have a couple of colors without adhesive that they call "window cling" > but they are only 4 mils thick and a bit flimsy, though so far they are holding > ok. There's a children's toy I remember: shapes cut from vinyl, intended to be stuck to windows to make pictures. That seems to be the same stuff. paul
Re: Question about DECtape formulation
> On Jan 25, 2022, at 1:13 AM, Chuck Guzis via cctalk > wrote: > > So, can we assume that the words about a "tape sandwich" refer to a > mylar base, oxide coating, and a lubricant/protective coating? > > That is, not an oxide coating sandwiched between two layers of mylar. The 3M spec for the media doesn't quite say it, but yes, that implication is unavoidable from what it does say. paul
Re: Typing in lost code
> On Jan 24, 2022, at 5:57 PM, ben via cctalk wrote: > >> ... > Document source is also a problem. > You would want to scan it in the best data format, > not something in a lossy format. That's true generally. Anything other than actual photographs (continuous tone images) should NOT be run through JPEG because JPEG is not intended for, and is unfit for, anything else. Printouts, line drawings, and anything else with crisp edges between dark and light will be messed up by JPEG. PNG and TIFF are examples of appropriate lossless compression schemes. paul
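The lossless point is easy to demonstrate. A sketch (my illustration, not from the thread) using Python's standard zlib module, which implements the same DEFLATE compression that PNG uses internally: a synthetic bilevel "printout" with crisp edges round-trips bit-exactly, and compresses well precisely because of the long runs of identical pixels, whereas JPEG's block transform would smear those hard edges with ringing artifacts.

```python
# Why lossless formats suit scans of printouts and line art: bilevel
# data round-trips exactly through DEFLATE (the compressor inside PNG),
# so no pixel is ever altered.  (Illustration only; a real scanning
# pipeline would write PNG or TIFF via an imaging library.)
import zlib

# Synthetic 64x64 bilevel "scan": vertical black bars on white,
# standing in for printed text with hard dark/light edges.
width = height = 64
pixels = bytes(0 if (x // 8) % 2 == 0 else 255
               for y in range(height) for x in range(width))

packed = zlib.compress(pixels, level=9)
restored = zlib.decompress(packed)

assert restored == pixels      # bit-exact recovery, no edge artifacts
print(len(pixels), "bytes ->", len(packed), "bytes, lossless")
```

The same exact-recovery guarantee holds no matter how complex the image; the compression ratio just gets worse, which is the trade JPEG refuses to make.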