Re: WTB Intel 7110 Bubble Memory Subsystem or Chipset

2018-02-22 Thread Eric Smith via cctalk
On Thu, Feb 22, 2018 at 2:28 PM, Mark J. Blair via cctalk <
cctalk@classiccmp.org> wrote:

> Is my understanding correct that removing the entire 7110 module as a unit
> (whether socketed or soldered in) should be somewhat safe, but any attempt
> to disassemble the module would likely disturb the bias field and destroy
> the data?
>

Yes, you can safely remove the entire 7110 without altering the data
within, as long as you don't subject it to strong external magnetic fields.


Re: WTB Intel 7110 Bubble Memory Subsystem or Chipset

2018-02-22 Thread Eric Smith via cctalk
On Thu, Feb 22, 2018 at 2:23 PM, dwight via cctalk 
wrote:

> Do not remove the chip from the bias magnets. All will be lost if you do.
>

That's true, but AFAIK in all commercially produced bubble memory devices,
including Intel (7110 1Mbit, 7114 4Mbit) and TI parts, the bias magnets are
integral to the packaging of the device, so there's no danger of that
unless you pry apart the device packaging.


Re: WTB Intel 7110 Bubble Memory Subsystem or Chipset

2018-02-22 Thread Eric Smith via cctalk
It's easy to design an emulator at the level of the D7220 host interface.

It is _difficult_ to design an emulator at the interface between the D7220
controller and the 7242 Formatter/Sense Amplifier, because the 7242 is a
tricky little beastie, and while the interface is somewhat documented, the
docs aren't terribly clear nor entirely complete, because Intel didn't
think anyone would want to use the 7242 without the D7220.

Unfortunately what Mark needs is to emulate it at the 7242 level, because
the 7242 is in the cartridge and the D7220 is in the host.


Re: HP 9816 CP/M-68K

2018-02-12 Thread Eric Smith via cctalk
On Mon, Feb 12, 2018 at 3:56 PM, js--- via cctalk 
wrote:

> That's really "slick," Glen. If it's not too burdensome to give a brief
> answer, how would you keep track of the time, or know how long feeding a
> byte at a time took?
>

On an original PC or XT (without special turbo modes of clones), you could
probably measure time well enough by counting CPU cycles. Otherwise you'd
need a timer. However, if you make the decision to reset the FDC based on
when it asks for data, you just reset it when it asks for the sector
information for the next sector past the last one that you want to format.
(I haven't actually tried that. I used a timer.)


Re: HP 9816 CP/M-68K

2018-02-12 Thread Eric Smith via cctalk
On Mon, Feb 12, 2018 at 1:23 PM, Fred Cisin via cctalk <
cctalk@classiccmp.org> wrote:

> Reading or writing multiple sized sectors can be done with multiple passes.
> But, I don't know how to FORMAT a track with multiple sector sizes with
> NEC 765 type controller.  Not as hard with WD style controllers.
> With multiple sector sizes, can squeeze 440K on a "360K" disk.
>

First format the track with the sector size that occurs later on the track,
with dummy sectors ahead of the real ones and gap sizes selected to position
them properly.
Then start formatting with the sector size for the earlier sectors, and
abort the format once the desired number of sectors has been written.

I'm not sure whether there's any way to abort a track format on a PC. I did
it on a machine that had control over the μPD765 reset pin.

Definitely much easier with the WD style controllers.

It is possible to format disks on an NEC style controller with a format
that cannot be created on a WD style controller, for instance using
particular track or sector numbers above 0xf0, which are specially
interpreted by the WD during a track write.
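
For the curious, here's a rough Python sketch (my own illustrative
numbers, not anything from the 765 or WD datasheets) of the track byte
budget that makes the 440K-on-a-"360K"-disk trick plausible. The
overhead and gap constants are loose approximations of IBM
System/34-style MFM formatting:

RAW_TRACK_BYTES = 6250    # 250,000 bits/s * 0.2 s per rev / 8 bits per byte
OVERHEAD_PER_SECTOR = 62  # approx: ID field, gap 2, data-field preamble, CRCs
GAP3 = 30                 # inter-sector format gap (tunable)

def track_bytes_used(sector_sizes):
    """Raw bytes consumed by a track holding the given sector sizes."""
    return sum(size + OVERHEAD_PER_SECTOR + GAP3 for size in sector_sizes)

# One 4096-, one 1024-, and one 512-byte sector per track is 5632 data
# bytes; over 80 tracks (40 cylinders x 2 sides) that's 440 KB.
mix = [4096, 1024, 512]
print(sum(mix), "data bytes using", track_bytes_used(mix), "of", RAW_TRACK_BYTES)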


Re: Intel 8085 - interview?

2018-02-09 Thread Eric Smith via cctalk
On Fri, Feb 9, 2018 at 3:41 PM, Chuck Guzis via cctalk <
cctalk@classiccmp.org> wrote:

> The V-series may be a good example of why Intel didn't want to publicize
> the added 8085 instructions.
>

Maybe. What I'd heard from multiple sources was that they only wanted the
8085 to replace the 8080, so it was supposed to be "better" in terms of
being a lower-cost 8080 replacement, needing fewer support chips (except an
address latch, but that's cheaper than an 8228/8238), but they didn't want
it to have a better instruction set that might put it into sockets that
might otherwise get filled with an 8086/8088.


Re: Intel 8085 - interview?

2018-02-09 Thread Eric Smith via cctalk
On Thu, Feb 8, 2018 at 9:56 PM, Chuck Guzis via cctalk <
cctalk@classiccmp.org> wrote:

> On 02/08/2018 07:18 PM, Eric Smith via cctalk wrote:
> > At some point I read an article or a transcript of an interview with an
> > Intel employee (or former employee) who had been involved with the design
> > of the 8085, describing how he had specified additional instructions over
> > those of the 8080, and they had been implemented in the silicon, but then
> > the decision was made to not document any of the new instructions other
> > than RIM and SIM.
> >
> > I no longer recall which Intel employee that was, and can't find the
> > article or interview. Does anyone else remember that, and perhaps have a
> > copy?
>
> Do you mean Cort Allen?   His email a couple of years ago was:
>
> manofqu...@aol.com
>

That's not the interview I was thinking of, but it's definitely interesting!


Intel 8085 - interview?

2018-02-08 Thread Eric Smith via cctalk
At some point I read an article or a transcript of an interview with an
Intel employee (or former employee) who had been involved with the design
of the 8085, describing how he had specified additional instructions over
those of the 8080, and they had been implemented in the silicon, but then
the decision was made to not document any of the new instructions other
than RIM and SIM.

I no longer recall which Intel employee that was, and can't find the
article or interview. Does anyone else remember that, and perhaps have a
copy?

Eric


Re: [RESOLVED] Re: EPROM baking

2018-02-05 Thread Eric Smith via cctalk
On Mon, Feb 5, 2018 at 12:06 PM, Chuck Guzis via cctalk <
cctalk@classiccmp.org> wrote:

> You should be aware that many "thin" Far East USB cables will not pass
> the full USB 1.5A current without substantial voltage drop.
>

"Full USB current" is only 0.5A for USB 2, and 0.9A for USB 3. Any USB
device that needs more current than that should be using the Battery
Charging, Power Delivery, or Type C options, or some combination thereof,
and suitable cabling.

However, I don't disagree with your assertion that there are a lot of
really crappy USB cables out there.
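
(To put rough numbers on it, mine rather than anything from a spec
sheet: 28 AWG wire runs about 0.21 ohms per meter, so a one-meter cable
with 28 AWG power conductors presents roughly 0.42 ohms out and back. At
1.5A that's a drop of around 0.6V, a big bite out of a 5V budget.)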


Re: IBM 9331-011 8" External Floppy Drive - eBay 183038271095

2018-02-02 Thread Eric Smith via cctalk
On Thu, Feb 1, 2018 at 7:04 PM, Chuck Guzis via cctalk <
cctalk@classiccmp.org> wrote:

> Much to my surprise, a P3 Intel i820 (that's the one with RDRAM) FIC
> board not only handles FM, but 128-byte sector MFM.
>

Well, great, but then what do you do when you want to read and write 80
(decimal) byte MFM sectors?
:-)


Re: SuperTerm Maintenance Manual

2018-02-01 Thread Eric Smith via cctalk
On Thu, Feb 1, 2018 at 10:56 AM, Paul Koning  wrote:

> > On Feb 1, 2018, at 12:51 PM, Eric Smith via cctalk <
> cctalk@classiccmp.org> wrote:
> > console terminal [...] VT52. (It was not good
> > practice to use a CRT as the system console, IMO.)
>
> As for CRTs, it all depends on the design assumptions.  Lots of operating
> system console interfaces are designed on the assumption you have hardcopy
> consoles, and if so a CRT is a bad idea.  But you can certainly make CRT
> consoles and have it work -- consider the CDC 6000 series.
>

Just a wild-ass guess, but I suspect that a typical CDC 6600 system would
have had a printer that logged console interaction?  I'm only suggesting
that a CRT console with no logging was a bad idea.

Of course, in principle the logging could be to disk or tape, but I don't
think most "machine-room" people would have trusted that nearly as much for
a console log. One wants a log of what happened on the console even when
the system was not working well.


Re: SuperTerm Maintenance Manual

2018-02-01 Thread Eric Smith via cctalk
On Thu, Feb 1, 2018 at 10:19 AM, Mike Norris via cctalk <
cctalk@classiccmp.org> wrote:

> The SuperTerm was manufactured by Intertec Data Systems c. 1978, it was a
> 180 CPS dot matrix printer (RS232), quite often used as a console printer
> in place of a LA36,


I know it sounds snarky, and admittedly my sample size is small, but it
seems to me that it was quite _rarely_ used as a console printer in place
of a LA36. Of the DEC machine rooms I saw back in the day (DECsystem-10,
PDP-11, VAX-11/7xx), most used an LA36 or LA120 as the console terminal,
but I also saw one Teletype Model 43 and one VT52. (It was not good
practice to use a CRT as the system console, IMO.)

I saw Intertec Intertube CRT terminals and SuperBrain microcomputers a fair
bit outside the machine rooms, but never saw a SuperTerm, though I'd seen
advertising for it. Given that it cost slightly more than an LA120, if I'd
had to make the choice, I'd have bought an LA120.  Also typically DEC
offered good deals on buying a complete system, at least for the sort of
large systems you'd find in a machine room, so substituting another
vendor's console terminal would cost more than just the delta in price
between the DEC terminal and the other vendor's terminal.

It's possible that some of the console terminals I saw could have had LA36
internal upgrades produced by Intertec or other companies. The only
advantage was that an LA36 could be upgraded to higher speed. The various
third-party graphics upgrades for the LA36 obviously weren't worthwhile for
a console terminal except in so far as they included the speed upgrade.

However, a scan of the SuperTerm maintenance manual definitely would be
good to archive for posterity.


Re: where to find ZCPR2, ZCPR3, ZCPR33, ZCPR34?

2018-01-31 Thread Eric Smith via cctalk
On Wed, Jan 31, 2018 at 1:43 PM, Eric Smith  wrote:

>
> Still looking for ZCPR2 and ZCPR34.
>

Elsewhere nathanael pointed out to me that ZCPR1, ZCPR2, and ZCPR33 may be
found at:
http://www.classiccmp.org/cpmarchives/ftp.php?b=cpm/Software/WalnutCD/cpm

So, still looking for the ZCPR34 source code, which is the version upon
which NZ-COM and Z3PLUS are built.


Re: where to find ZCPR2, ZCPR3, ZCPR33, ZCPR34?

2018-01-31 Thread Eric Smith via cctalk
On Wed, Jan 31, 2018 at 12:46 PM, Eric Smith  wrote:

> That site has NZ-COM and Z3PLUS, but I've dug through it and cannot find
> ZCPR2, ZCPR33, or ZCPR34. It's possible that they are there somewhere and I
> just didn't find them.
>

OK. Found ZCPR33 on that site in the FOG collection, disks 205 through 208.

Still looking for ZCPR2 and ZCPR34.


Re: where to find ZCPR2, ZCPR3, ZCPR33, ZCPR34?

2018-01-31 Thread Eric Smith via cctalk
On Tue, Jan 30, 2018 at 9:29 PM,  wrote:

> On January 30, 2018 at 3:21 PM Eric Smith via cctalk wrote:
> Now I'm still looking for ZCPR2, ZCPR33, and ZCPR34.
>
> I believe you will find this site:
>
> http://www.znode51.de/indexe.htm
>
> useful.  I could be wrong, but I think it has the most up to date zcpr
> software.
>

That site has NZ-COM and Z3PLUS, but I've dug through it and cannot find
ZCPR2, ZCPR33, or ZCPR34. It's possible that they are there somewhere and I
just didn't find them.

Apparently NZ-COM and Z3PLUS are based on ZCPR34, but are fancy
auto-installing things with no source code, whereas what I'm looking for is
the original ZCPR2, 33, and/or 34 distributions that included source code.

Best regards,
Eric


Re: who is in this picture? (VCF 199x)

2018-01-30 Thread Eric Smith via cctalk
On Tue, Jan 30, 2018 at 2:55 PM, Bill Degnan via cctech <
cct...@classiccmp.org> wrote:

> https://retropopplanet.files.wordpress.com/2011/06/vintage-computer.jpg
>

Pavl Zachary


Re: Interest Check: Belden Thicknet 10base5 Ethernet Coax

2018-01-30 Thread Eric Smith via cctalk
On Tue, Jan 30, 2018 at 4:47 PM, systems_glitch via cctalk <
cctalk@classiccmp.org> wrote:

> Per the recent discussion on thicknet/early Ethernet, I figured I'd see if
> there's any interest in cut-to-length Belden thicknet/10base5 Ethernet
> cable. I've got a local surplus guy who's got at least one 1100 foot roll.
> It's the real Ethernet spec stuff, sez so on the cable, and it has the
> bands to locate your vampire taps.
>

Anyone have extra thicknet transceivers w/ the vampire taps? I'd like to
have some of the cable if I had some transceivers to go with it.


Re: where to find ZCPR2, ZCPR3, ZCPR33, ZCPR34?

2018-01-30 Thread Eric Smith via cctalk
On Tue, Jan 30, 2018 at 1:18 PM, geneb via cctalk 
wrote:

> Eric, take a peek here:
> http://www.classiccmp.org/cpmarchives/ftp.php?b=cpm/Software
> /WalnutCD/zsys/
>

Thanks! It does look like that contains the ZCPR3 distribution.

Now I'm still looking for ZCPR2, ZCPR33, and ZCPR34.


Re: where to find ZCPR2, ZCPR3, ZCPR33, ZCPR34?

2018-01-30 Thread Eric Smith via cctalk
On Tue, Jan 30, 2018 at 1:10 PM, Chuck Guzis via cctalk <
cctalk@classiccmp.org> wrote:

>
> http://www.classiccmp.org/cpmarchives/ftp.php?b=cpm%
> 2Fmirrors%2Foak.oakland.edu%2Fpub%2Fsigm
>
> Is the SIG/M collection, all 310 volumes of it.
>
> Does this help?
>
>

Thanks, but no. That actually does NOT contain all 310 volumes. At a
minimum, it is missing volumes 184 to 192; it may be missing more.


Re: where to find ZCPR2, ZCPR3, ZCPR33, ZCPR34?

2018-01-30 Thread Eric Smith via cctalk
On Tue, Jan 30, 2018 at 11:38 AM, Bill Degnan  wrote:

> https://archive.org/details/LOGIC_AppleII_Disk-CPM014
>
> is this what you mean?
>

While that's useful (thanks!), I'm really looking for the complete ZCPRn
distributions, which included source code for the CCP replacement and the
utilities etc.

That's why ZCPR3 took up _nine_ volumes of the SIG/M library.


where to find ZCPR2, ZCPR3, ZCPR33, ZCPR34?

2018-01-30 Thread Eric Smith via cctalk
I've become interested in ZCPR2, 3, 33, and 34, and am surprised at how
difficult it is to locate them online. Or maybe I'm just an idiot. Are they
out there somewhere?

It looks like ZCPR3 was on SIG/M volumes 184 to 192, but those specific
volumes seem to be missing from the SIG/M archives I can find.

I'm specifically NOT looking for NZ-COM or Z3PLUS.

Thanks!
Eric


Re: MY COMPUTER LIKES ME when i speak in BASIC.

2018-01-22 Thread Eric Smith via cctalk
My computer likes me when I speak MACRO-10.
:-)

On Jan 17, 2018 7:28 AM, "Mattis Lind via cctalk" 
wrote:

> I scanned a nice little booklet I found in my father's stuff.
>
> "MY COMPUTER LIKES ME when i speak in BASIC" by Bob Albrecht.
>
> http://www.datormuseum.se/documentation-software/my-computer-likes
>
> If someone feels like they can straighten it up, please do! I didn't feel
> like ripping it apart to have it scanned so it was troublesome to scan it
> perfectly in my page scanner.
>


Re: IP address classes vs CIDR (was Re: Reviving ARPAnet)

2018-01-18 Thread Eric Smith via cctalk
On Thu, Jan 18, 2018 at 11:35 AM, Grant Taylor via cctalk <
cctalk@classiccmp.org> wrote:

> On 01/18/2018 11:00 AM, Eric Smith wrote:
>
>> Years ago I added a configurable "bozo-arp" feature to the Telebit
>> NetBlazer router, which would respond to ARP requests for non-local
>> addresses and reply with the router's MAC address (on that interface),
>> specifically in order to make classful-only hosts work on a CIDR network.
>>
>
> That functionality sounds exactly like my understanding of what Proxy ARP
> is supposed to do.
>

Proxy ARP is (or was, at the time) something that had to be configured for
individual IP addresses or ranges. What I did was have it reply to an ARP
for _any_ IP address outside the subnet(s) configured on that interface.

> Since you stated that anyipd "…would respond to ARP requests for non-local
> addresses…" I'm assuming that you are talking IP and not another protocol.
>

Yes. Specifically IPv4.

>> Recently I've needed that functionality on Linux, as I have multiple old
>> systems that only understand classful, including the AT&T UnixPC (7300 or
>> 3B1). I suppose I should rewrite and open-source it.
>>
>

> I /think/ (it's been too long since I've done this) that you would
> configure one classless interface with 10.20.30.254/24 and another
> classless interface with 10.10.10.254/24 -and- enable Proxy ARP on both
> (?) interfaces.  You will likely need to enter the target machine's IP
> addresses in a file that the Proxy ARP sub-system references to learn what
> target IPs that it needs to Proxy ARP for.
>

The point of bozo-arp and anyipd was that the only necessary configuration
was to turn it on. Of course, there may be scenarios in which one does not
want the router to respond to bogus ARP requests, in which case
bozo-arp/anyipd should not be used.


IP address classes vs CIDR (was Re: Reviving ARPAnet)

2018-01-18 Thread Eric Smith via cctalk
On Thu, Jan 18, 2018 at 10:39 AM, Grant Taylor via cctalk <
cctalk@classiccmp.org> wrote:

> I was not aware that there was code that supported /only/ Class A (/8)
> addresses and /not/ Class B (/16) or Class C (/24) addresses.
>
> I /thought/ that everything was either classful (as in supports all three
> classes: A, B, and C) or classless (as in supports CIDR).
>

Years ago I added a configurable "bozo-arp" feature to the Telebit
NetBlazer router, which would respond to ARP requests for non-local
addresses and reply with the router's MAC address (on that interface),
specifically in order to make classful-only hosts work on a CIDR network.
Later someone paid me to write a NetBSD daemon ("anyipd") to do the same
thing, though for an entirely different reason. Recently I've needed that
functionality on Linux, as I have multiple old systems that only understand
classful, including the AT&T UnixPC (7300 or 3B1). I suppose I should
rewrite and open-source it.
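
In the meantime, here's a minimal sketch of the bozo-arp/anyipd behavior
in Python with scapy. This is my own reconstruction, untested, and not
the NetBlazer or anyipd code; the interface name and subnet are
placeholders:

import ipaddress
from scapy.all import ARP, Ether, sniff, sendp, get_if_hwaddr

IFACE = "eth0"                                 # placeholder interface
LOCAL = ipaddress.ip_network("10.20.30.0/24")  # placeholder local subnet
MY_MAC = get_if_hwaddr(IFACE)

def handle(pkt):
    # Answer any who-has ARP whose target lies outside the local subnet,
    # claiming our own MAC, so a classful-only host will send us its
    # off-net traffic.
    if ARP in pkt and pkt[ARP].op == 1:
        if ipaddress.ip_address(pkt[ARP].pdst) not in LOCAL:
            reply = (Ether(dst=pkt[Ether].src, src=MY_MAC) /
                     ARP(op=2, hwsrc=MY_MAC, psrc=pkt[ARP].pdst,
                         hwdst=pkt[ARP].hwsrc, pdst=pkt[ARP].psrc))
            sendp(reply, iface=IFACE, verbose=False)

sniff(iface=IFACE, filter="arp", prn=handle, store=False)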


Re: non-PC Floppy imaging

2018-01-07 Thread Eric Smith via cctalk
On Fri, Jan 5, 2018 at 3:45 PM, Dave Wade via cctalk 
wrote:

> IBM invented the 8" floppy disk format. Generally their disks follow the
> standard 3740 format.
>

True for anything you're likely to encounter in the "real world", but in
the interest of muddying the waters I'll point out that IBM's _first_
floppy drives, used for microcode load on big iron, were NOT even remotely
compatible with the later 3740 and successors. The disk was the same
physical size, but the index hole was near the edge of the disk, rather
than near the spindle. They spun at 90 RPM rather than 360, and were
read-only.  (Obviously IBM had some drives that could write that format,
but they didn't provide them to customers.)

I think it's a safe bet that the 4331 microcode disks do NOT use that
format. Guy would have noticed if the diskettes didn't look like "normal"
8-inchers.


Re: Large discs (Was: Spectre & Meltdown

2018-01-04 Thread Eric Smith via cctalk
On Jan 4, 2018 22:17, "TeoZ via cctalk"  wrote:

100GB M-Discs are dual-layer Blu-ray media, correct (not readable on a DVD
player)? I actually have a BDXL BR burner.


They are three-layer, and will ONLY work on BDXL drives, not older BD
drives.


Re: Dumping my first EPROM

2018-01-02 Thread Eric Smith via cctalk
On Tue, Jan 2, 2018 at 12:13 PM, Brad H via cctalk 
wrote:

> Thanks Paul.  I found an srec2bin converter and ran that.. it created a 1K
> bin file.  I then opened that with a hex editor (slick edit).. but alas, no
> readable strings.
>

Not too surprising, since AFAIK the basic Dynamicro doesn't have any ASCII
I/O. Next step would be to run the binary file through an 8080 (or Z80)
disassembler. I use z80dasm, which is provided as C source code:

https://www.tablix.org/~avian/blog/articles/z80dasm/

but there are many others to choose from.

Without the disassembler, just looking at the object code in hex, you might
see whether there's a pattern in the first 64 bytes, with each 8-byte group
containing a few bytes of code then zeros. That's common (but not
universal) in 8080/Z80 code because the reset vector is at address 00, and
the RST instruction vectors are at 00, 08, 10, 18, 20, 28, 30, and 38
hexadecimal.
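
Something like this little Python fragment (a sketch of that heuristic,
with a placeholder filename) makes the pattern easy to eyeball:

# Print the first 64 bytes in 8-byte rows, one row per RST vector.
with open("dump.bin", "rb") as f:
    data = f.read(64)
for vec in range(0, 64, 8):
    row = " ".join(f"{b:02x}" for b in data[vec:vec + 8])
    print(f"RST {vec:02x}h: {row}")

A few nonzero bytes (often c3, a JP) followed by 00 padding in each row
is a good hint that you're looking at real 8080/Z80 code with the
vectors in use.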


Re: tumble tiff to pdf converter

2017-12-27 Thread Eric Smith via cctalk
On Wed, Dec 20, 2017 at 3:30 AM, Christian Corti via cctalk <
cctalk@classiccmp.org> wrote:

> Ok, I see, whoever changed tumble as found on github forgot to change all
> version numbers, to update the README and many things more :-(
>

I can't find anywhere in the github repo where the version number was not
updated to 0.35.  (Just now updated to 0.36.)

> But anyway, it compiles happily with two modifications in tumble_pbm.c:
> - add the following line in front of the first include statement:
> #define HAVE_BOOL
> - change the following line from
> #include 
>   to
> #include 
>

I'm happy to accept pull requests.

The HAVE_BOOL would probably be fine, but I'm not going to change the
include as it would then fail to build on Fedora and RHEL.


Re: Dec-10 Day announcement from Living Computers: Museum + Labs

2017-12-11 Thread Eric Smith via cctalk
On Sun, Dec 10, 2017 at 4:19 PM, Noel Chiappa via cctalk <
cctalk@classiccmp.org> wrote:

> Oh, one detail I didn't look at: what's the physical interface this uses?
> Hopefully three of the Berg/DuPont connectors (i.e. what's on the RHxx
> boards, with flat cables going to the adapter to the standard MASSBUS
> connector, a device rejoicing in the name 'Receptacle Housing Assembly');
> the
> original MASSBUS cables (along with the 'Receptacle Housing Assembly' are
> now
> rare as hen's teeth). And there's also the MASSBUS termination...
>

It uses three 40-pin dual-row headers, so you can cable it to a real
Massbus connector (RHA), or you might be able to cable it directly to the
Berg headers of a DEC Massbus adapter.


Re: Ideas for a simple, but somewhat extendable computer bus

2017-11-20 Thread Eric Smith via cctalk
On Nov 20, 2017 7:41 AM, "Tapley, Mark via cctalk" 
wrote:

Catching up late, sorry if this is an old question, but what did
the Digital Group computers use? My recollection is that they offered cards
with 6800, 6502, 8080, and Z-80 CPUs on the same bus, and that part of the
system seemed to work reasonably well.


The Digital Group had two separate buses, a memory bus and an I/O bus, as
well as two other slot types incompatible with either bus, for a CPU card
and a TVC (video and cassette) card. They didn't support interrupts or DMA
on any bus. If you wanted to use an interrupt, you had to wire it over the
top. Doc Suding said that he didn't put interrupts on the bus because
(paraphrasing) they are complicated and you don't need them.

As you say, they did support various CPUs, but not more than one in a
system. I wouldn't recommend that anyone consider The Digital Group as an
example of good bus design.


Re: Ideas for a simple, but somewhat extendable computer bus

2017-11-19 Thread Eric Smith via cctalk
On Nov 19, 2017 7:18 PM, "allison via cctalk"  wrote:

The rest is the specific implementation.  What happens if the CPU is an
1802 or something else that does not match the 6500 or 8080/Z80 models?


There is nothing that prevents either the serial or parallel arbitration
schemes from working with other processors.

In the case of the 1802, it would work easily for interrupts, but would
need some additional circuitry for DMA, because the 1802 doesn't include
any feature whereby another bus master can request that the 1802 surrender
control of the bus. Instead, the 1802 has a built-in single-channel DMA
controller.

That 1802 bus master problem exists for interfacing _any_ sort of bus
master to any 1802, and is totally independent of what kind of DMA
arbitration is chosen.


Re: Ideas for a simple, but somewhat extendable computer bus

2017-11-19 Thread Eric Smith via cctalk
On Sat, Nov 18, 2017 at 10:48 PM, Jim Brain via cctalk <
cctalk@classiccmp.org> wrote:

> Looking at the schematic for the ECB, I cannot find any description of the
> signals BAI, BAO, IEI, and IEO.  Can anyone shed some light on the function
> of these signals?
>

Bus Acknowledge In and Out, Interrupt Enable In and Out, used for serial
arbitration of the bus request and interrupt.  These signals are
daisy-chained rather than bused.

If any card is requesting the bus by asserting the bus request, at some
point the CPU will acknowledge that by asserting its bus acknowledge
output, which is wired to the BAI signal of the first bus slot. The BAO of
each slot is wired to the BAI of the next slot.  Since more than one card
can request the bus, it is necessary for there to be some arbitration
scheme to determine which card gets the bus grant. In the serial
arbitration scheme, the highest priority goes to the card that is earliest
in the daisy chain (closest to the CPU).  If a particular card is NOT
requesting the bus, it passes the BAI signal on to BAO to make the
acknowledge available to the next card. If it is requesting the bus, it
does not pass BAI to BAO, but instead sets BAO inactive, so that no
lower-priority card will see the bus acknowledge.
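
In code form (my own toy model, just to make the combinational rule
concrete): each card takes the grant if it asked for the bus, and
otherwise passes it downstream, so the requester nearest the CPU wins.

def card(requesting, bai):
    """Return (granted, bao) for one slot."""
    granted = bai and requesting     # this card takes the bus
    bao = bai and not requesting     # pass the grant on only if idle
    return granted, bao

def serial_arbitrate(requests):
    bai, grants = True, []           # CPU acknowledge enters slot 0
    for req in requests:
        g, bai = card(req, bai)
        grants.append(g)
    return grants

print(serial_arbitrate([False, True, False, True]))  # only slot 1 granted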

Similarly for how the card deals with interrupts, but using the IEI and IEO
as the daisy chain.

This has been a common technique since the 1960s; for microcomputers it was
used by Intel Multibus in 1975 and by Zilog Z80 family peripherals in 1976.

The drawback is that if there are a lot of cards, there can be a long
propagation delay of the interrupt acknowledge from the CPU to the last
card of the bus, particularly if routing the IEI/IEO chain through NMOS Z80
peripherals.  The result can be that no device responds to the interrupt
acknowledge with a vector by the time the CPU needs it.  This can be solved
by adding wait states to the interrupt vector read, or by using a parallel
arbitration scheme instead of serial.

Parallel arbitration can be done with the same slot pin assignments, but
instead of busing the request and daisy-chaining the acknowledge, the
requests are each separately fed into a priority encoder.  The "any" output
of the encoder goes to the request input of the CPU.  The acknowledge from
the CPU goes into the enable of a decoder, and the select inputs of the
decoder come from the priority encoder, so that each slot gets its own
decoded acknowledge signal to its BAI input.  In this case the backplane
doesn't connect the BAO output of a slot to anything.   This parallel
arbitration scheme can be used for bus request and/or interrupt request.
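
Again as a toy model (mine, not from any particular backplane spec), the
parallel scheme reduces to a priority encoder feeding a decoder:

def parallel_arbitrate(requests, cpu_ack):
    # Encoder: lowest slot number = highest priority; its "any" output
    # is any(requests). The decoder, enabled by the CPU acknowledge,
    # drives exactly one slot's BAI.
    if not (cpu_ack and any(requests)):
        return [False] * len(requests)
    winner = requests.index(True)
    return [i == winner for i in range(len(requests))]

print(parallel_arbitrate([False, True, False, True], cpu_ack=True))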

Eric


Re: WTB: HP-85 16k RAM Module and HPIB Floppy Drive

2017-11-16 Thread Eric Smith via cctalk
Hi Eric,

It's not urgent, but when you have a chance, could you dump the 9122C
ROM(s) and take high resolution photos of the controller board?

Since it does HD, I suspect it probably does not use a 600 RPM mechanism.

Thanks!

Best regards,
Eric


On Nov 15, 2017 17:45, "Eric Schlaepfer via cctalk" 
wrote:

> It'd be interesting to find out how well that PRM-85 works. I've laid out a
> board for a rough equivalent but I haven't fabbed it out. It may be cheaper
> for me to buy that instead.
>
> I've also got a 9122C but I don't have the mass storage ROM so I can't use
> it with my 85. Right now I'm using it with my 9000 series 300.
>
> On Tue, Nov 14, 2017 at 8:26 PM, Mark J. Blair via cctalk <
> cctalk@classiccmp.org> wrote:
>
> >
> >
> > > On Nov 14, 2017, at 20:11, Ed Sharpe via cctalk  >
> > wrote:
> > >
> > > wondervifcthec9122 drives,will work on 85?
> > >
> >
> > I think I can guess what you meant to say there... :)
> >
> > I’ve ordered a PRM-85 (a modern reprogrammable ROM drawer replacement)
> > which includes the HP-85B version of the Mass Storage ROM, and the
> Extended
> > Mass Storage ROM. Based on what I have read, I think that should let my A
> > model use the newer 9122C drive, and other drives using either the Amigo
> or
> > SS-80 protocols.
> >
> > I’d like to get the 9122C mostly because I have a much easier time
> finding
> > 1.44M media than the older double density media. eBay and I don’t talk,
> so
> > that limits my options a bit. If I had easy access to lots of 3.5” DD
> > media, then I would consider getting one of the more plentiful (?) other
> > 3.5” HPIB floppy drives.
> >
>


[no subject]

2017-11-09 Thread Eric Smith via cctalk
My first FPGA-Elf (2009) used an FPGA board that is long-since obsolete,
and while I updated it last year, it used an FPGA board that was not
commercially available, and would have been frighteningly expensive if it
was. For the most recent RetroChallenge, I updated the FPGA-Elf to work on
a readily-available, inexpensive FPGA module, the Digilent CMOD-A7-35T,
which is available for $89.  (It can also be made to work on the $75
CMOD-A7-15T, but I recommend the -35T as it can provide more RAM.)  As part
of the RetroChallenge, I added emulation of the CDP1861 PIXIE graphics.
Various photos can be seen at:
https://www.flickr.com/photos/22368471@N04/albums/72157687136287141

The project progress is described, in reverse chronological order, on my
blog:

http://whats.all.this.brouhaha.com/category/computing/retrocomputing/retrochallenge/

I designed a base PCB into which the CMOD-A7 module plugs. The
base board provides for hexadecimal displays (either HP or TI) for data and
(optionally) address, a connector for the switches, a serial port, a
composite video port, and an optional MicroSD breakout board.  A 5V 2A
regulated wall-wart provides power.

There are a few issues with the board design that require cutting a few
traces and adding jumpers and resistors, and I haven't yet written any
software to deal
with the MicroSD card.  I plan to have a new revision of the main board
made to correct the known issues. The switch PCB and bezel PCB don't need
another revision.

I still need to write some documentation, but I've put the rev 0 main board
Eagle files, Gerber files, and PDF files of the schematic and layout at:

http://www.frobco.com/e1000/

I'm willing to make bare boards available for those who want to build their
own.

This version runs at 256x the speed of a normal Elf w/ PIXIE. It's clocked
at 56.34 MHz, but it executes all instructions in one-eighth the clock
cycles required by an 1802. My 1861 implementation uses a dual-port RAM to
allow the CPU to run fast while still producing normal NTSC-rate video.  I
plan to make the processor speed configurable to 1x or 256x, with perhaps a
few intermediate choices.
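
(If you're wondering where those numbers come from: assuming the usual
1.76 MHz crystal rate of a PIXIE Elf, which is my assumption here, 56.34
MHz is 32 times the original clock, and 32x the clock rate times 8x
fewer clocks per instruction gives the 256x figure.)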


Re: HP 9836U processor mystery...

2017-11-06 Thread Eric Smith via cctalk
On Mon, Nov 6, 2017 at 9:59 PM, Tony Duell via cctalk  wrote:

> Mine identifies the CPU as a 68010 in the power-on diagnostic. But from
> what
> I remember the PGA socket could also take a 68012 (with extra address pins
> brought out). I don't have such a chip, so no idea what it would identify
> as.
>

Do you mean that it will actually use the extra address pins?

I suppose the most likely way for the software to identify the MC68012 (vs
MC68010) would be to try accessing two memory addresses differing only in
address bits A24 or higher (but not A30), and test whether whatever MMU
hardware they've built will actually map them distinctly.  The other
difference in the MC68012 is the availability of a /RMC pin to better
identify read-modify-write cycles, but since their board has to work with
an MC68010, I doubt that it would use the /RMC signal at all.

I've heard claims that HP used the MC68012 in some systems, but I've never
seen any definite confirmation.


Re: Which Dec Emulation is the MOST useful and Versatile?

2017-11-03 Thread Eric Smith via cctalk
On Fri, Nov 3, 2017 at 6:07 PM, Paul Koning via cctalk <
cctalk@classiccmp.org> wrote:

> Could be.  Then again, today's main architectures are all decades old;
> they get refined but not redone.
>

I'm not sure whether you consider the 64-bit ARM architecture to be one of
"today's main architectures", though it's probably shipping in higher unit
volume than x86. Anyhow, the 64-bit ARM architecture is pretty much brand
new; it's not the 32-bit ARM architecture stretched to 64 bits.


Otrona Attache disk format?

2017-11-01 Thread Eric Smith via cctalk
From the Otrona Attache Technical Manual, July 1983:

"The diskettes Attache uses have fourty-six tracks on the top side and
fifty tracks on the bottom side, [...]"

Really???


RE: Which Dec Emulation is the MOST useful and Versatile?

2017-10-30 Thread Eric Smith via cctalk
On Oct 29, 2017 09:54, "Dave Wade via cctalk"  wrote:

I am not sure they invented computer emulation. I think that the concept
Emulation/Simulation is as old as, or perhaps even older than computing.
Whilst it was a pure concept Alan Turing's "Universal Turing Machine" was a
Turing machine that could emulate or simulate the behaviour of any arbitrary
Turing machine...


1. Did Turing use the word "emulate"? I honestly have no idea. My (possibly
wrong) impression was that no published literature used the word emulate
with that meaning (one computer emulating another) before the IBM papers.

2. What a UTM does is simulate another machine using only a general-purpose
machine. In fact, the UTM is arguably the most general-purpose machine ever
described. What IBM defined as emulation was use of extremely specialized
hardware and/or microcode (specifically, not the machine's general-purpose
microcode used for natively programming the host machine). If anyone else
did _that_ in a product before IBM, I'm very interested.


Re: Which Dec Emulation is the MOST useful and Versatile?

2017-10-29 Thread Eric Smith via cctalk
IBM invented computer emulation and introduced it with System/360 in 1964.
They defined it as using special-purpose hardware and/or microcode on a
computer to simulate a different computer.

Anything you run on your x86 (or ARM, MIPS, SPARC, Alpha, etc) does not
meet that definition, and is a simulator, since those processors have only
general-purpose hardware and microcode.

Lots of people have other definitions of "emulator" which they've just
pulled out of their a**, but since the System/360 architects invented it, I
see no good reason to prefer anyone else's definition.


Re: Where is the memory on the AP-101S memory board?

2017-10-14 Thread Eric Smith via cctalk
On Oct 11, 2017 07:35, "Shoppa, Tim via cctalk" 
wrote:

The AMD chips probably have two layers of legs.


I doubt it, but without a close-up at a different angle it's hard to say.

Are the AMD chips the memory, and the 54F the glue logic?


I suspect so.

 Maybe backtranslating to a core memory bus?


There was reportedly a lot of redesign throughout the computer, so I doubt
it.

Or are there memory SIP's on the other side?


I doubt it, but we need more photos.

In which case maybe the AMD chips might be some sort of error-correction
LSI's?


AMD did make ECC chips, so that is somewhat plausible.

This is just 1 Mbyte of RAM but of course with a long manned spaceflight
leadtime, date codes of 1986-1988 might imply a design from most of a
decade before.


Not in this case. 54F series TTL didn't exist a decade before, and I don't
think it was even on the drawing board yet. If it really was designed in
1978 or earlier, it would have used 54S.

Maybe could have been designed for 54S and built with 54F, but I think
that's unlikely, as it would have required a full requalification of the
entire computer, for pretty limited benefit. Not something that's commonly
done, due to extreme expense of requalifying flight hardware.

I think it's more likely a design from 1982-1984. Possibly IBM FSD was
redesigning the computer for multiple aerospace applications, not just the
Space Shuttle.


Re: A Mystery

2017-10-10 Thread Eric Smith via cctalk
On Tue, Oct 10, 2017 at 6:07 AM, Rod Smallwood via cctalk <
cctalk@classiccmp.org> wrote:

> I have in my possession a back plane from a BA23.
> Somebody has put glue in the last three slots.
> Can anybody explain that?


DEC sold a lower-priced, limited expansion MicroVAX II/RC, and rather than
actually manufacture a backplane with fewer connectors installed, they took
standard backplanes and blocked slots with glue.


Re: Did DEC make a Daisy Wheel printer?

2017-10-09 Thread Eric Smith via cctalk
On Sun, Oct 8, 2017 at 10:19 AM, Zane Healy via cctalk <
cctalk@classiccmp.org> wrote:

> Did DEC make any sort of impact printer, besides dot-matrix printers?  I
> have an LA50 or two, and dot-matrix isn’t what I’m after.
>

They sold some drum, chain, and band printers, but I think they were all
OEM'd.  Late examples were the LP25, LP26, and LP27/LP29. Note that some
system-specific printer subsystems (e.g., LP11-xx, LP20-xx, and LP32-xx)
contained printers with their own separate designations.

As far as I recall, the only printer mechanisms DEC made themselves were
dot matrix impact or electrolytic.  Of course, they also OEM'd some
dot-matrix printers, including the LA50 and LA75.


32-bit x86 (was Re: The origin of the phrases ATA and IDE)

2017-10-06 Thread Eric Smith via cctalk
On Oct 6, 2017 12:42, "ben via cctalk"  wrote:

On 10/05/2017 03:46 PM, Chuck Guzis via cctalk wrote:
>
>>
>>
>> I recall an Intel engineer opining on the subject.  "We give you a
>> 32-bit advanced architecture CPU and you p*ss it away running DOS."
>>
>> Compatibility is a tough mistress.
>>
>> --Chuck
>
>
Did anything ever use *ADVANCED ARCHITECTURE*?
Games don't count here.


Not much. Only Xenix, BSD, and other Unixes, Netware, Windows 3.x with
Win32S, Windows 95/98/ME, Windows NT/2000/XP/Vista/7/8/10, Linux, and maybe
a few other obscure things.


Re: HP 9845 complete system on auction in Sweden

2017-09-23 Thread Eric Smith via cctalk
On Sep 22, 2017 11:47 PM, "Curious Marc via cctalk" 
wrote:

I didn't know you could interface a 9845 with a 7970 tape drive.


The 9845 was the top-of-the-line workstation. It could be interfaced to
almost everything computer-controllable that HP made.


Re: RIP Jerry Pournelle - Firsts

2017-09-12 Thread Eric Smith via cctalk
On Mon, Sep 11, 2017 at 7:43 PM, Charles Dickman via cctalk <
cctalk@classiccmp.org> wrote:

> On Mon, Sep 11, 2017 at 1:15 PM, Shoppa, Tim via cctalk
>  wrote:
> > He seems to have been the first to mention ARPANET in a popular
> hobbyist-type context like BYTE. (Leading him to get kicked off ARPANET!)
>
> Yes I remember reading something like that too. I would like to know
> the story of that.
>

http://www.stormtiger.org/bob/humor/pournell/story.html


Re: origin of 3D-printing?

2017-09-01 Thread Eric Smith via cctalk
See also "Pay for the Printer" by Philip K. Dick, 1956.


Re: IBM 5110 with 5114 & 5103 on Pittsburgh Craigslist

2017-08-06 Thread Eric Smith via cctalk
On Aug 6, 2017 12:44 PM, "Sam O'nella via cctalk" 
wrote:

They're always out of my reach but is there a way to upgrade or convert
5100/5110s to IPL or basic or are you stuck with what you get?


I assume you meant APL.

In general you're stuck with what you get.

In principle if you found all of the necessary ROS (ROM) cards and wired a
switch or jumper to the backplane, you could convert a 5100/5110/5120 to
the other language or to dual-marked.

I need to design a 5100 language ROS card replacement, as one of the APL
language ROS chips in my 5100 model C has gone bad. It fails self-test.

There is also "executable ROS" associated with each language. The system
can't self-test that.

Naturally IBM made their own ROM chips which are neither pin nor
electrically compatible with industry standards. The backplane signals are
TTL compatible, though.


Re: scary warning about bubble memory loop mask, TI 763/765 maint manual

2017-08-06 Thread Eric Smith via cctalk
On Aug 6, 2017 08:20, "dwight via cctalk"  wrote:

I wonder if they can be reset by just removing the bias magnets.

The bias field is needed to maintain the domains in the material.


With the Intel bubble parts, if they get erased, you have to use special
electronics and procedure to recreate the seed bubble.  The later Intel
parts added a "Z coil" specifically for fast bulk erase, desired for
military use.

TI doesn't document any procedures for recovery of erased or corrupted
modules.

Of course, that might make them totally useless.


It might, but if they're already useless that's not going to be any worse.


Re: scary warning about bubble memory loop mask, TI 763/765 maint manual

2017-08-05 Thread Eric Smith via cctalk
On Aug 5, 2017 19:42, "Chuck Guzis via cctalk" 
wrote:

My recollection is that the track data was printed by Intel.  At least
mine came that way.


Both Intel and TI printed the bad loop map/mask on the device label.  Intel
also programmed the map into a special "boot loop" in the device; TI did
not, at least on their 92Kbit devices.

The Intel boot loop was deliberately set up to require a special procedure
to write, so that under normal circumstances it was written once at the
factory, and never had to be written again. However, there was a procedure
to rewrite the boot loop in the field if necessary.

For the TI 763/765 terminal, losing the mask data isn't the problem. The
TEST MASK command lets you type in the mask data if needed.

The problem is that if you type in the mask data wrong on the TI, it can
corrupt the device such that it is NOT FIELD RECOVERABLE. Re-entering the
correct mask and "reformatting" won't fix it.

With Intel devices, there is a special "seed module" and procedure that can
be used to recover bubble devices which have become corrupted. TI just says
that such devices have to be replaced, which is pretty difficult these days.


scary warning about bubble memory loop mask, TI 763/765 maint manual

2017-08-05 Thread Eric Smith via cctalk
Al just put the TI Silent 700 Model 763/765 maintenance manual up on
Bitsavers. (Thanks Al!)

The 763 and 765 are the models using internal bubble memory for between
10,000 and 80,000 characters of local storage.  They use either one or two
"discrete memory boards", with one 92 Kbit bubble device each, or one to
four "dual memory boards", each with two 92 Kbit devices.

Bubble memory devices typically have many minor loops, not all of which are
usable.  TI printed mask data on the label of the bubble device,
identifying the defective loops.  When a bubble memory board is installed
or replaced in the terminal, it is necessary to put the terminal into
command mode and issue a test command to enter the mask.

The scary part is this warning in section 5.2.4.2:

CAUTION

Care should be exercised in the installation of the bubble memory
mask. If an incorrect mask is entered, the bubble device will not work
correctly even if the correct mask is entered at a later time. If an
incorrect mask is entered, replacement memory board must be
installed because the bubble device is no longer reliable.

Intel provided the necessary technical information and hardware to recover
corrupted bubble devices in the field.  As far as I can tell, TI did not.


Re: 2.11BSD on two RL02 drives? Probably not, but...

2017-08-02 Thread Eric Smith via cctalk
On Wed, Aug 2, 2017 at 7:24 PM, systems_glitch via cctalk <
cctalk@classiccmp.org> wrote:

> You might consider just adding another storage controller. I'd recommend
> something that talks MSCP. SCSI seems to be what most people are after
> nowadays, but ESDI controllers are much cheaper, and the drives aren't that
> hard to find. If you have SMD drives kicking around already, there are SMD
> MSCP interfaces that also work well. I've had excellent luck with Emulex
> MSCP controllers.
>

I had good results on both Unibus and Qbus systems with using a CMD MSCP
SCSI controller and an Iomega ZIP drive. It was very convenient to be able
to stick the ZIP cartridge in a drive on a PC for access from simulators
etc.  I used it with RT11, RSTS/E, and BSD.

100MB might not be enough to be interesting for general purpose use on a PC
any more, but it's fine for a PDP-11.


Re: WTB: RX02 Floppy Disks

2017-08-01 Thread Eric Smith via cctalk
On Tue, Aug 1, 2017 at 6:21 PM, Charles Dickman via cctalk <
cctalk@classiccmp.org> wrote:

> Are RX02 disks actually special or will any SSDD 8in floppy work?


TL;DR: No. As standard 8-inch floppy disks go, only SSSD are useful in an
RX02 drive, even for double-density use (DEC RX02 modified MFM format).


Any standard single-sided single-density (SSSD) 8-inch floppy (IBM 3740
format) will work in an RX02 drive, and can be used for either single
or double density.  In principle media sold as single-density could be
marginal for DEC RX02 modified MFM format, but I've only ever seen that on
*really* awful quality disks.

A standard single-sided double-density (SSDD) 8-inch floppy will not work
in an RX02 drive, unless some other system is used to reformat it to
single-density. Arguably if one doesn't have true RX02 media, this is the
preferred way to make some, since media certified for standard MFM
double-density should be perfectly satisfactory for DEC RX02 modified MFM
format.

Standard double-sided 8-inch floppies (either DSSD or DSDD) will NOT work
in an RX02 drive, because double-sided 8-inch floppies have the index hole
in the jacket in a different place.  They could be used if one punches an
extra index hole in the jacket, and if double-density, another system is
used to reformat to single-density.


Re: Importing a PDP-8 from Canada

2017-07-31 Thread Eric Smith via cctalk
On Mon, Jul 31, 2017 at 6:15 PM, Michael Thompson via cctech <
cct...@classiccmp.org> wrote:

> The RICM has an opportunity to get a PDP-8/M (built in Maynard, MA) that is
> in Canada. I remember that there was a discussion on the procedure here,
> but I can't find it with Google.
>

If it actually bears a label stating the origin as the US, then there's
nothing special to do, other than show that label to the customs official
if they don't take your word for it.


Re: Seeking correct EuroCard dimensions

2017-07-28 Thread Eric Smith via cctalk
On Fri, Jul 28, 2017 at 5:16 PM, David Griffith via cctalk <
cctalk@classiccmp.org> wrote:

>
> I'm trying to verify the correct dimensions for a 160mm x 100mm EuroCard.
> I figured this would be simple: 160 millimeters by 100 millimeters.  But
> when I submitted a template to the Kicad project at
> https://github.com/KiCad/kicad-library/pull/1441.
>
>
> I was told that I need to trim the board back by .15mm, citing this
> document: https://www.elma.com/-/media/general-web-content-files/resou
> rces/pdfs/us-corporate-dimensions-of-pcbs-d.ashx?la=en&hash=
> 13B1EAF4FF743087F8B7E87BB10A1D20C44C40BD
>
> This doesn't seem to make sense to me.  I always thought that the bearing
> surfaces on the card cage would be spaced slightly more than the card
> width.  What's going on here?  What am I missing?


I think what they're saying is that the board size specs have a tolerance
of +0.0/-0.3mm, so you should spec 0.15mm less than nominal to be right in
the middle of the tolerance range, so that any positive tolerance variation
from manufacturing should still be within the specification.
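
(Worked out: speccing 159.85 mm with a typical +/-0.15 mm fab tolerance,
my assumption about board-house capability, gives a finished board
between 159.70 and 160.00 mm, exactly the standard's 160.0 mm +0.0/-0.3
window.)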

I'm in the process of designing some 220x100 mm Eurocards myself, and
hadn't previously given the tolerance limit of +0.0mm any consideration. I
think the advice to trim 0.15mm is good.

Eric


Re: Diskette size (was: Repurposed Art (ahem...)

2017-07-20 Thread Eric Smith via cctalk
On Jul 19, 2017 10:15 AM, "Fred Cisin via cctalk" 
wrote:
> That Steve Jobs was pestering them for a cheap drive, but due to the
> holes in his jeans and personal hygiene?, they never took him seriously.

I think Shugart settled on 5.25" for the size of a minifloppy at least a
year, and more likely two years, before Steve Jobs would have visited. I
don't have proof, but SA400 public intro was in 1976, and they probably
took more than a year of development to get to that point.

There's evidence in engineering notes recently published indicating that
Woz did some design work using a normal SA400 before Shugart was convinced
to sell Apple the SA390, which was the SA400 sans the standard drive
electronics PCB.


Re: PDP11 and Simh Floating point

2017-07-20 Thread Eric Smith via cctalk
On Jul 19, 2017 10:43 AM, "Douglas Taylor via cctalk" 
wrote:

The pdp11_fp.c code is quite intricate.  If simh was a simple simulation it
would take the easy route and use the intel fp co-processor as you point
out, but it doesn't.  It actually 'emulates' what the pdp11 would do in
hardware.


Do all PDP-11 FPP hardware (and/or microcode) implementations give the same
results in all cases? Does FIS?


Re: early (pre-1971) edge-triggered D flip-flop ICs

2017-07-20 Thread Eric Smith via cctalk
On Wed, Jul 19, 2017 at 11:29 PM, Ethan Dicks  wrote:

> I have no datasheet, but I have examples on DEC M-series FLIP-CHIP
> modules from my PDP-8/L, c. 1968.
>
> I am pretty sure I have examples with 1968 date codes and possibly
> 1967 date codes.
>

Thanks! Also, the 1967 Allied catalog lists the SN7474 (flat pack) and
SN7474N (plastic DIP), priced at $8.00.


Re: early (pre-1971) edge-triggered D flip-flop ICs

2017-07-19 Thread Eric Smith via cctalk
Thanks for all the info, Brent!

The MECL II MC1022 is an edge-triggered D flip-flop using master-slave
design.  I'll have to look up the others you mentioned, especially the
National DM8510 and Sprague NE8828.

I've previously overlooked the MC778 mW RTL D flip-flop, which also uses a
variant of the three-SR design. However, it is a cruder device than the
MC3060 and SN7474, in that its asynchronous preset and clear only work when
the clock input is high, whereas they work at any time in the MC3060 and
SN7474. I haven't analyzed the circuit, but I'm guessing that they simply
didn't gate the preset and clear inputs into the master flip-flop of the
MC778, but only into the slave FF.

Best regards,
Eric


early (pre-1971) edge-triggered D flip-flop ICs

2017-07-19 Thread Eric Smith via cctalk
I'm interested in the history of the logic design for the edge-triggered D
flip-flop, as used in the SN7474. The design is composed of three set-reset
latches (six NAND gates total) per flip-flop.
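
As a sanity check on that topology, here is a little Python model of the
six-NAND network. It's my own construction from the usual textbook
drawing, not from any particular datasheet, and it omits the
asynchronous preset and clear; iterating the gates to a fixed point
shows that Q takes on D's value only at the rising clock edge:

def nand(*ins):
    return 0 if all(ins) else 1

def settle(state, d, clk):
    x1, x2, x3, x4, q, qn = state
    for _ in range(10):          # iterate the gate network to a fixed point
        x1 = nand(x4, x2)
        x2 = nand(x1, clk)       # /S into the output latch
        x3 = nand(x2, clk, x4)   # /R into the output latch
        x4 = nand(x3, d)
        q  = nand(x2, qn)
        qn = nand(q, x3)
    return (x1, x2, x3, x4, q, qn)

s = (1, 1, 1, 1, 0, 1)
for d, clk in [(1, 0), (1, 1), (0, 1), (0, 0), (0, 1)]:
    s = settle(s, d, clk)
    print(f"D={d} CLK={clk} -> Q={s[4]}")

Q goes to 1 at the first rising edge, ignores D falling while the clock
is high, and captures the 0 at the next rising edge; that's exactly the
behavior the pulse-triggered J-K master-slave design fails to deliver.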

Does anyone know what year the SN7474 was introduced, or have an early
datasheet for it (prior to the 1973 TTL Data Book For Design Engineers 1st
Edition)?

The earliest datasheet I've found using this specific logic design for an
edge-triggered D flip-flop is from a non-7400-series TTL chip, the Motorola
MC3060/3160, which is a member of the MTTL III MC3000/MC3100 series. The
MC3060 is covered in the Motorola 1968 IC databook, on page 4-138.

I've searched US patents for edge-triggered flip-flop design, but have not
found one specifically for the three S-R latch design.

The subject came up as a result of a discussion on a private mailing list
regarding the fact that the conventional J-K master-slave flip-flop design
is NOT edge-triggered; pulses on J and/or K while the clock is high but
stable can affect the Q (and not-Q) outputs of the FF at the following
falling edge of the clock. That behavior is known as "pulse catching", and
such a flip-flop is properly called pulse-triggered or level-triggered, but
not edge-triggered.  Early datasheets on J-K master-slave flip-flops
actually had correct terminology and specifically stated that J and K
should not change while the clock is high.


Re: IBM 5110 - Where does the character set live? And other questions.

2017-07-16 Thread Eric Smith via cctalk
On Jul 16, 2017 4:10 PM, "Robert via cctalk"  wrote:

Does anybody know
whether upgrading the memory is as simple as plugging in another
board, or does it involve wiring changes to the backplane, or other
complex manoeuvres?


Just plug in the memory cards.


Re: Xerox stores

2017-07-15 Thread Eric Smith via cctalk
On Fri, Jul 14, 2017 at 8:59 PM, jim stephens via cctalk <
cctalk@classiccmp.org> wrote:

> Backwater == St. Louis, BTW, a place I visit pretty often.
>
> Still my all time favorite, Stu's Gateway Electronics still going.
>

I really miss the Gateway Electronics location in Denver, closed 15 years
ago. Wow, has it really been that long? On the last day they were open, a
friend and I bought pizza for the employees (and any other customers who
showed up).

The other electronic surplus places in Denver and Boulder have all closed
too. AFAIK the nearest is OEM Parts in Colorado Springs. Maybe there's
something in Fort Collins.


Re: Through-hole desoldering (was Re: IBM 5110 - Where does the character set live? And other questions.)

2017-07-13 Thread Eric Smith via cctalk
On Thu, Jul 13, 2017 at 10:42 AM, William Sudbrink via cctalk <
cctalk@classiccmp.org> wrote:

> If you have the bucks, go for a Pace station with an SX-100 desoldering
> tool.  40 pin chips
> fall out like they were never soldered in the first place.
>

That's my experience with the Hakko 472D-01. Presumably the FR410-03 would
work as well or better.


Through-hole desoldering (was Re: IBM 5110 - Where does the character set live? And other questions.)

2017-07-13 Thread Eric Smith via cctalk
On Wed, Jul 12, 2017 at 8:38 PM, Robert via cctalk 
wrote:

> Side note: It's probably not a good time to try out my shiny new heat
> gun that I've never yet used. Maybe save my first go on it for
> something more replaceable.
>

A heat gun is definitely NOT the right tool for desoldering through-hole
parts, especially DIP ICs.

If you're not intending to reuse a DIP IC, cut the leads off before
desoldering.  Cut the leads close to the package body, not close to the PCB.

Some people say solder wick is good enough for desoldering DIP ICs, but
I've never been satisfied with it. Maybe my technique is faulty. I've had
best results with vacuum desoldering equipment. In order of my preference:

1) vacuum desoldering station with pencil tool: I use a Hakko 472D-01,
which sadly is discontinued. Last fall I accidentally installed a DIN 41612
96-pin connector on the wrong side of a board, and had already soldered
more than half of the pins before noticing the error. It only took me a few
minutes with the Hakko to desolder the pins. The connector and board were
very clean, so I was able to reinstall the same connector on the correct
side of the board. When I purchased it, the Hakko 472D-01 was around $500;
the replacement is the FR410-03 which has better specs (mostly higher power
at 140W vs 110W) but is nearly $1000.

2) vacuum desoldering gun: lots of people liked the Hakko 808, but it's
discontinued. The Hakko FR-300 looks like a reasonable replacement, and
sells for around $310. The drawback compared to the vacuum desoldering
station with pencil tool is that the handpiece is much heavier and bulkier
since it contains the vacuum pump; this is probably not an issue if you
don't use it to do a lot of desoldering in a single session.

3) desoldering pencil with a built-in manual piston-operated pump; I use
one from Paladin, but they seem to have discontinued it, though there are
many similar ones such as:
http://www.mcmelectronics.com/product/21-8240

4) soldering iron with separate manual piston-operated pump - you have to
be quick switching from soldering iron to pump

5) soldering iron with separate squeeze-bulb - in my experience a bulb just
doesn't work as well as a piston-operated pump

6) soldering iron with built-in squeeze-bulb - same bulb issue as #5, but
even more awkward to handle

Of course, YMMV.

There are also vacuum desoldering stations that use "shop air" to derive
the vacuum, rather than having an internal pump. I've never used them as I
don't normally have an air compressor anywhere near my electronics
workbench.

Since there is a lot less through-hole production now than in the past,
some of the soldering equipment companies that formerly made vacuum
desoldering equipment have abandoned that market segment.


Re: tape baking (Rob Jarratt)

2017-07-10 Thread Eric Smith via cctalk
On Mon, Jul 10, 2017 at 3:38 PM, Rob Jarratt via cctalk <
cctalk@classiccmp.org> wrote:

> Do you have any videos (with sound!) of the LP20 operating?
>

The LP20 is just the printer interface. It looks exactly the same whether
it's operating or not, and doesn't make any sound unless something is very
very wrong.


Re: IBM 5110 - Where does the character set live? And other questions.

2017-07-10 Thread Eric Smith via cctalk
On Mon, Jul 10, 2017 at 1:13 PM, Robert via cctalk 
wrote:

> I've recently picked up a 5110 (BASIC only), along with a 5114 floppy
>
...

> It powers on, completes its self test and gets to LOAD0, but several
> of the characters are only partially drawn on the screen. The lower
>
...

> 1. Can anybody tell me which card the character set is held on? None
> of the manuals that I've looked at provide that info.
>

The Maintenance Information Manual (SY31-0550) has that information. The
most relevant pages are 3-3, 3-6, and 3-35 through 3-39.

The character generator is the "Display ROS" on the display adapter card,
which is installed in the main backplane (A1) socket G.


VT100 keycaps? "Z" wanted

2017-06-22 Thread Eric Smith via cctalk
My VT180 is missing the "Z" keycap. Does anyone happen to have VT100-series
keycaps to spare? (Or an entire keyboard to sell?)

Best regards,
Eric


Re: Electronic Systems TRS-80 Serial I/O Board?

2017-06-14 Thread Eric Smith via cctalk
On Wed, Jun 14, 2017 at 6:14 PM, jim stephens via cctalk <
cctalk@classiccmp.org> wrote:

> FWIW the send and receive clocks are separate on the 1602 Uart.
>

That is true of all traditional UARTs.  For fancier parts intended for
direct connection to microprocessor buses, some have separate clocks,
others don't. Some bond-out options of the Z80-SIO have separate clocks for
the first channel but a combined clock for the second channel, because the
Z80-SIO really needed 41 signals, and compromises had to be made to fit it
into a 40-pin DIP.

Separate rx and tx rates were important when using a modem with asymmetric
rates, like Bell 202, which was 1200bps in the "forward" direction and
75bps in the reverse direction.  That mostly went away when synchronous modem
modulations came into vogue, e.g., Bell 212 and V.22, for 1200 bps full
duplex, and most things after that.

Even when synchronous modem modulations started having different rates in
the opposite directions again, e.g., V.90 and V.92, using a single bit rate
on the electrical interface to the modem for both rx and tx had become so
ingrained that no one seriously entertained the idea of going back to split
rates on that interface.  The problem is solved (usually) by flow control.


Re: Electronic Systems TRS-80 Serial I/O Board?

2017-06-13 Thread Eric Smith via cctalk
It's actually a legit measurement, but it's basically how much total jitter
+ bit-proportioned rate error the rx signal can have as a percentage of bit
time, and doesn't matter nearly as much when receiving from all-electronic
transmitters, vs mechanical such as TTY.

I'm sure it made the marketing dept happy to have a way to claim it
tolerates >45% error instead of only 2.5% error, even though it's just a
different way of specifying it. Basically it's a brag about 16x
oversampling, except that it was essentially meaningless since all the
other UARTs on the market used the same 16x oversampling.

Eric



On Jun 13, 2017 9:33 PM, "Jon Elson via cctalk" 
wrote:

On 06/13/2017 07:59 PM, Chuck Guzis via cctalk wrote:

> Well, I didn't say "timing error", I did say "timing distortion", which is
> not quite the same thing. My reference was the "TR1602/TR1863/TR1865
> MOS/LSI Application Notes Asynchronous Receiver Transmitter", which can be
> found in the WD 1984 Data Communications Handbook (I think there's a copy
> online). Page 126-127. "Thus, signals with up to 46.875% distortion could
> be received."
>
Well, I think it is wild market-speak inflation.  Yes, if everything else
was perfect, then as long as the serial data was at the correct level while
the UART sampled the signal, the rest could be garbage. But, how will a
seriously degraded channel ALWAYS pass the signal correctly just when the
UART samples it?  NOT very likely.

Jon


Re: Electronic Systems TRS-80 Serial I/O Board?

2017-06-13 Thread Eric Smith via cctalk
On Tue, Jun 13, 2017 at 6:59 PM, Chuck Guzis via cctalk <
cctalk@classiccmp.org> wrote:

> Well, I didn't say "timing error", I did say "timing distortion", which
> is not quite the same thing. My reference was the "TR1602/TR1863/TR1865
> MOS/LSI Application Notes Asynchronous Receiver Transmitter", which can
> be found in the WD 1984 Data Communications Handbook (I think there's a
> copy online). Page 126-127.  "Thus, signals with up to 46.875%
> distortion could be received."


They're referring to 46.875% of a bit time as the maximum error in the
sampling time of a single bit. That still corresponds to no more than 5%
error of the overall bit rate, which is where use of an RC oscillator runs
into trouble.
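
To make the two ways of quoting the same tolerance concrete, here's a
little arithmetic sketch (my derivation of the app note's figure, not WD's;
the 16x oversampling ratio is the only input):

    #include <stdio.h>

    int main(void)
    {
        /* With 16x oversampling, each bit is nominally sampled at its
           center.  A single edge can be displaced by up to half a bit
           minus half a sample period before the wrong level is read: */
        double per_bit = (8.0 - 0.5) / 16.0;    /* = 46.875% of a bit */

        /* A constant rate error, by contrast, accumulates over the
           whole character, which is what limits it to single digits: */
        double rate = 0.5 / 10.0;               /* ~5% for 10-bit 8N1 */

        printf("per-bit distortion tolerance: %.3f%%\n", per_bit * 100.0);
        printf("overall rate tolerance:      ~%.0f%%\n", rate * 100.0);
        return 0;
    }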


> Obviously, the developer of the subject board didn't have
> much of a problem either; or else he wouldn't be able to sell the thing.
>

They sold it, then spent a bunch of money on Field Service trips to make it
work for customers. It cost them enough to justify multiple redesigns,
including (finally) switching to a crystal.

I'd think that if an RC oscillator could have been (inexpensively) made to
work well enough at that time, DEC would have done it. Their hardware
people weren't complete idiots. (Usually.)

Eric


Re: Electronic Systems TRS-80 Serial I/O Board?

2017-06-13 Thread Eric Smith via cctalk
On Tue, Jun 13, 2017 at 5:53 PM, Chuck Guzis via cctalk <
cctalk@classiccmp.org> wrote:

> The TR1602 UART, like its cousin, the AY-3-1013 used in the TVT,
> tolerates a pretty wide range of bit rate distortion.  The app note
> gives a figure of something like 49%.  And, since it's async, the game
> starts all over at the next character.
>

It's mathematically impossible for a normal UART [*] to handle 49% timing
error.  The cumulative timebase error by the end of a character can't be
more than one bit time, or the wrong bit will get sampled, resulting in
incorrect data, or, (if that happens on the stop bit) a possible framing
error.  For 8N1 [**], there are 10 bits in total (including start and
stop), so that's an absolute maximum timing error of 10%, but for various
reasons even 10% speed variation won't actually work in practice.  If they
meant 4.9%, that is believable, but even that won't work if the other side
is more than slightly off-speed in the other direction.  Normal spec is a
maximum timing variation of within +/-2% at each end [***], so that things
still work properly if one side is at +2% and the other is at -2%,


> Maybe DEC wasn't using the same sort of UART; I don't know.
>

I'm pretty sure it was a common UART.  DEC invented the electronic UART,
though it was originally two DEC System Modules, effectively a UAR and a
UAT.  They worked with Western Digital to have the first single-chip UART
developed, resulting in the TR1402A. Almost all other UARTs were designed
to be compatible with the TR1402A.

Eric


* I refer to a "normal UART" as one that oversamples the receive data
signal (typically at 16x the bit rate) to find the leading edge of the
start bit, delays 1/2 bit time, then samples the start bit and subsequent
data bits at one bit time intervals.  That is how nearly all UART chips
work, since the very first ones.  This analysis is disregarding various
"auto-baud" techniques, which are not performed by normal UARTs, and
certainly not by the TR1602 or AY-3-1013.

** Seven-level mechanical teleprinters, such as the Teletype Model 33, use
two stop bits, and may substitute parity for the 8th data bit, so they have
a total of 11 bits per character, and thus can actually tolerate slightly
less timing error than a UART configured for 8N1.

*** Various sources make different claims regarding allowable variation;
some claim only 1%, but in general 2% is workable. The ITU-T V.14 standard
for transmitting async data over synchronous modem modulation, as used in
all standard full-duplex modem protocols for 1200bps and over, specifies a
basic range of +1% to -2.5%, and an extended range of +2.3% to -2.5%.  They
specify the two ranges because the methods necessary to deal with the
extended range introduce issues that can negatively affect the operation of
some equipment. For instance, the NEC uPD7201, used in among other things
the AT&T 7300 & 3B1 "Unix PC", is unable to handle V.14 stop bit shaving.


Re: Electronic Systems TRS-80 Serial I/O Board?

2017-06-13 Thread Eric Smith via cctalk
On Tue, Jun 13, 2017 at 12:39 AM, Chuck Guzis via cctalk <
cctalk@classiccmp.org> wrote:

> The trimpot on the board says to me that the clock is most likely a
> simple RC affair.


That does seem likely.


> For low bitrates, that's perfectly adequate.
>

A person might think so, but as DEC found out with the PDP-11/05 console
serial port, it's really not. The percentage tolerance of async serial is
not any higher at low bit rates than at higher bit rates, and the
percentage tolerance of RC oscillators isn't any better at low frequencies
than at higher frequencies.  Being off by only a few percent is enough to
be a problem, because the other end might be off by a few percent also. 1%
resistors are dirt cheap now, but they weren't in the 1970s.  It's become
only slightly easier since then to get capacitors with 1% tolerance and low
tempco.
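
To put rough numbers on that, a sketch with hypothetical 1970s-grade parts
(f is simply taken as proportional to 1/RC; DEC's actual oscillator circuit
may have differed):

    #include <stdio.h>

    int main(void)
    {
        /* To first order f ~ 1/(R*C), so the worst-case fractional
           frequency error is about the sum of the component tolerances
           (ignoring tempco and aging, which only make things worse). */
        double r_tol = 0.05;  /* 5% resistor, typical before 1% got cheap */
        double c_tol = 0.10;  /* 10% capacitor */

        printf("worst-case frequency error: ~%.0f%%\n",
               (r_tol + c_tol) * 100.0);    /* ~15% without trimming */
        return 0;
    }

A trimpot can calibrate out the initial tolerance, but not the subsequent
drift, which squares with the experience described below.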

DEC went through multiple board revisions with changes to the RC oscillator
in an attempt to make it sufficiently reliable. I've heard that they finally
gave up and put a crystal oscillator on the board, but all of mine have the
RC, and they've given me some grief over the years. I replaced one with a
crystal oscillator.


Re: Serial keyboards

2017-06-06 Thread Eric Smith via cctalk
On Tue, Jun 6, 2017 at 3:48 PM, Guy Sotomayor Jr via cctalk <
cctalk@classiccmp.org> wrote:

> Yes, I’ve been dealing with the morons who strip the keyboards off of (now
> rare) IBM 327x terminals,
> cut the connectors off and wire them up to PS/2 or USB.  May they burn in
> hell.
>

I have an IBM 1389194, which is a 122-key model M, apparently for a 3192 G
series terminal, with APL keycaps. I do not have such a terminal; someone
else separated the keyboard from it.  I wouldn't mind getting a 3192
terminal, but I'm not willing to spend much money on one.

https://www.flickr.com/photos/22368471@N04/25859890091/

I'm converting it into a USB keyboard, but I'm doing it in a fully
reversible manner.  If I burn in hell, I hope it's not because of modifying
this keyboard.

Eric

"Why this is hell, nor am I out of it."
- Christopher Marlowe, The Tragical History of the Life and Death of Doctor
Faustus


Re: Commodore Pet 8032 keyboard repair - conductive or capacitive?

2017-06-04 Thread Eric Smith via cctalk
Commodore didn't use any capacitive keyboards on 6502-based computers.
That would have taken extra electronics and cost more.

I don't know whether any of the Amiga keyboards were capacitive, but I
suspect not.


Re: Firefly dual processor card

2017-05-31 Thread Eric Smith via cctalk
On May 31, 2017 11:06 AM, "Paul Koning via cctalk" 
wrote:

To clarify: Firefly is an internal-only device built by DEC research in
Palo Alto.  It wasn't a product and as far as I remember wasn't the basis
of one, either.  There are some DEC SRC reports that describe aspects of
the Firefly.  I think Modula-3 was invented for it, but I may have my
research platforms mixed up.


DEC claimed that the VAXstation 3520/3540 was based on Firefly.  Certainly
many aspects of 35x0 are similar to 2nd gen Firefly, though details are
different.


Re: Firefly dual processor card

2017-05-30 Thread Eric Smith via cctalk
Did you get an actual Firefly (research) board, or a production VAXstation
3520/3540 board? I don't think you're likely to find schematics or pinouts
for either, but it's not impossible to find 3520/3540 stuff, while I've
never before heard of anyone encountering any actual Firefly boards in the
wild.


Re: Commercial AIM-65 Video Controller?

2017-05-20 Thread Eric Smith via cctalk
On May 19, 2017 10:46 PM, "dwight via cctalk"  wrote:

I don't believe the AIM-65 normally does color??


The AIM-65 normally does one color, which is red, on its alphanumeric LED
displays.


Re: AT&T Work Group System Voice Power voice processing boards for Unix PC 6300/7300 for sale

2017-05-18 Thread Eric Smith via cctalk
Back when the Voice Power board for the 7300/3B1 UnixPC was of mainstream
interest, I spent some time trying to obtain specifications and programming
information regarding the Western Electric DSP20 chip it used. Unlike the
DSP16 and DSP32, WE (and AT&T Microelectronics) did not offer the chip for
sale, and the technical documentation was unobtanium.


Re: Extracting files off “unknown” 8 inch disks. Any thoughts…

2017-05-05 Thread Eric Smith via cctalk
On Fri, May 5, 2017 at 3:02 PM, allison via cctech 
wrote:

> In the PDP-10 realm not less than a handful Tops10. ITC, more.
>

TOPS-10 doesn't have any filesystems for floppy disks, though the KL10
front-end PDP-11/40 running RSX-20F does, and there are utilities to access
RSX and RT11 filesystems from TOPS-10.

AFAIK, the situation is the same for TOPS-20.  I don't have any idea
whether ITS, WAITS, Tenex, or the Compuserve Monitor ever had any different
floppy disk support.


Re: Looking for TRS-80 Model parts (and/or someone in the Phoenix, AZ area)

2017-05-02 Thread Eric Smith via cctalk
On Mon, May 1, 2017 at 12:49 PM, Peter Cetinski via cctalk <
cctalk@classiccmp.org> wrote:

> The 8mhz [MC68000 for TRS-80 Model 16/6000] boards are similarly unobtanium


Is a schematic for the 8 MHz board available?  Is there any other tech info
about the differences between the original 6 MHz boards and the 8 MHz?  I
only have a 6 MHz, but if I knew the differences, I'd contemplate hacking
it into an 8 MHz, or just laying out a new 8 MHz board in Eagle.

Will TRSDOS-16 work on the 8 MHz board?


Re: TRS-80 Model 12 versus 16B

2017-04-26 Thread Eric Smith via cctalk
On Tue, Apr 25, 2017 at 8:28 PM, Jim Brain via cctalk  wrote:

> Been trying to Google things, but not having a lot of luck.  I understand
> both are white case, both have slimline drives, 12 had no card cage, I
> think I read somewhere that the 16 came with 68K std (no Z80?), and 12 had
> KB conn on case, 16B had KB conn on KB. Beyond that, though, would love
> more information.
>

All machines in the II/12/16/6000 family have the Z80.  When using the
68000 in equipped machines (16, 16b, 6000 from factory, II and 12 with
upgrade), the Z80 is responsible for booting the system and handling all of
the I/O.  The 68000 can't talk directly to anything except the Z80.

All the machines in the family can run Z80 software, including Model II
TRSDOS and CP/M.


Program Logic Manual (PLM) for APL\360 or APL.SV?

2017-04-16 Thread Eric Smith via cctalk
Did IBM publish a Program Logic Manual (PLM) for APL\360, APL.SV, or any
other APL language implementation, as they did for e.g. their FORTRAN(E)
and PL/I(F) compilers?


Re: If C is so evil why is it so successful?

2017-04-12 Thread Eric Smith via cctalk
On Wed, Apr 12, 2017 at 9:55 AM, Sean Conner via cctalk <
cctalk@classiccmp.org> wrote:

>   Yeah, I'm having a hard time with that too.  I mean, pedantically, it
> should be:
>
> #include <stdlib.h>
> int main(void) { return EXIT_SUCCESS; }
>
> where EXIT_SUCCESS is 0 on every platform except for some obscure system no
> one has heard of but managed to influence the C committee back in the late
> 80s.
>

Returning zero from main to indicate success is perfectly valid according
to the most recent three C standards.  ISO/IEC 9899:1990(E) §7.10.4.3,
ISO/IEC 9899:1999(E) §7.20.4.3 ¶5, and ISO/IEC 9899:2011(E) §7.22.4.4 ¶5
all require that either 0 or EXIT_SUCCESS as an argument to exit() be
considered success.  EXIT_SUCCESS may or may not be zero, but zero is
considered success regardless of that.

One annoyance with the way the standard defines the EXIT_x macros is that
if you use other exit status values, including those from sysexits.h (not
part of the C standard), it's possible that an intended failure status
value might happen to match EXIT_SUCCESS on some standard-compliant
implementation.

§5.1.2.2.3 ¶1 of both :1999 and :2011 states that if execution reaches the
closing brace of main without a return statement, it is equivalent to
returning zero, so even the return statement in this allegedly non-portable
example is unnecessary.

On the other hand, the earlier ISO/IEC 9899:1990(E) §5.1.2.2.3 says that
main returning with no value yields an undefined termination status.
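
A minimal demonstration of the point (nothing here is exotic; it should
behave identically on any hosted implementation):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* EXIT_SUCCESS is not required to be 0, but 0 must nonetheless
           be treated as a successful termination status. */
        printf("EXIT_SUCCESS here = %d\n", EXIT_SUCCESS);

        /* Under C99/C11, deleting this return would still yield a
           success status when control reaches the closing brace; under
           C90 the termination status would be undefined. */
        return 0;
    }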

-- Eric "not a language lawyer but I play one on the internet" Smith


Re: remember xvscan?

2017-04-11 Thread Eric Smith via cctalk
On Apr 11, 2017 5:29 AM, "E. Groenenberg via cctalk" 
wrote:
> Wasn't that not an add-on to 'xv' (xv-3.10a)?

xvscan was based on xv but was sold including xv, with the xvscan price
including the cost of an xv license.


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-11 Thread Eric Smith via cctalk
On Apr 11, 2017 11:29 AM, "Chuck Guzis via cctalk" 
wrote:
> This has me wondering about how the 432 people implemented FORTRAN.

Oh, there's a very simple answer to that. They didn't!

Early in the 8800/432 development (which started in 1975), Intel was
developing their own language for it, generally in the Algol family. It's
possible that they intended to support other languages, but Fortran
definitely would have been a poor fit.

When Ada came along, they decided that it was a reasonably good fit, and
with the DoD pushing Ada, that would be an easier sell to customers than a
proprietary language. Intel marketing basically claimed that the 432 was
designed for Ada, though that wasn't really the case.

The only two programming languages Intel supported on the 432 were:

1) Ada, using a cross-compiler written in Pascal and hosted on a VAX, to
run on "real" 432 systems such as the 432/670

2) Object Programming Language (OPL), a Smalltalk dialect based on Rosetta
Smalltalk, which only ran on the 432/100 demo board, a Multibus board
inserted in a slot of an Intel MDS development system.

Late in the 432 timeline there was an unsupported port of XPL, but it did
not generate native code.

Apparently there was little concern for either Fortran or COBOL, the most
widely used programming languages at the time.


Re: remember xvscan?

2017-04-11 Thread Eric Smith via cctalk
On Tue, Apr 11, 2017 at 4:44 AM, David Griffith via cctalk <
cctalk@classiccmp.org> wrote:

> Does anyone remember using xvscan?  Does anyone know how to get a hold of
> it anymore?
>

I bought xvscan many years ago from tummy.com, but at some point realized
that I no longer have it. I inquired several times to tummy.com about
getting another copy, or even buying it again, but never got any response.


Re: The iAPX 432 and block languages (was Re: RTX-2000 processor PC/AT add-in card (any takers?))

2017-04-11 Thread Eric Smith via cctalk
On Mon, Apr 10, 2017 at 3:39 PM, Sean Conner  wrote:

>   What about C made it difficult for the [Intel iAPX] 432 to run?
>

The iAPX 432 was a capability based architecture; the only kind of pointer
supported by the hardware was an Access Descriptor, which is a pointer to
an object (or a refinement, which is a subset of an object).  There is no
efficient way to do any kind of pointer arithmetic, even with refinements.

In the Release 1 and 2 architectures, objects were either Access Objects,
which could contain Access Descriptors (pointers to objects), or Data
Objects, which could NOT contain Access Descriptors. As a result,
architectural objects were often used in pairs, with the Access Object
having an Access Descriptor at a specific offset (generally 0) pointing to
the corresponding Data Object.

In the Release 3 architecture, a single object could have both an Access
Part and a Data Part, with basically the same restriction: the Access Part
can only store Access Descriptors, and the Data Part can NOT store Access
Descriptors.

As a consequence, a C pointer to a structure containing both pointer and
non-pointer data would have to be represented as a composite of:
   1)  an Access Descriptor to the containing object
   2)  an offset into the data object or data part, for the non-pointer
data, and the non-Access-Descriptor portion of any pointers
   3)  an offset into the access object or access part, for the Access
Descriptor portion of any pointers
The architecture provides no assistance for managing this sort of pointer;
the compiler would just have to emit all the necessary code.

However, C requires that it be possible to cast other data types into
pointers. The 432 can easily enough let you read an access descriptor as
data, but it will not allow you to write data to an access descriptor. That
will raise an exception. It would take really awful hacks in the operating
system to subvert that, and would be insanely slow. (On a machine that was
already quite slow under normal conditions.)  You can't even cast an Access
Descriptor (which occupies 32 bits of memory) to uint32_t, then cast it
back unmodified, e.g., to store a pointer into an intptr_t then put it back
in a pointer.

It would almost certainly be more efficient to implement C on the 432 by
simply allocating a single large array of bytes as the memory for the C
world, and implementing pointers only as offsets within that C world.  This
would preclude all access from C code to normal 432 objects, except by
calling native libraries through hand-written glue. It would effectively be
halfway to an abstract C machine; the compiler could emit a subset of
normal 432 machine instructions that operate on the data.

Note that the 432 segment size is limited to 64KB. Accessing an array
larger than that, such as the proposed C world, is expensive. You have to
have an array of access descriptors to data objects of 64KB (or some other
power of 2) each. Release 1 and 2 provide no architectural support for it,
so the machine code would have to take C virtual addresses and split them
into the object index and offset.  Release 3 provides an instruction for
indexing a large array in this fashion; IIRC the individual data objects
comprising the array are 2KB each.
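
For illustration, here's roughly what the "C world" pointer representation
sketched above might look like from the compiler's point of view (the names
and the example address are mine; this is ordinary C modeling the address
split, not anything that meaningfully runs on a 432):

    #include <stdint.h>
    #include <stdio.h>

    /* A C pointer becomes a plain 32-bit offset into one big emulated
       address space, split into an index selecting one of many 64KB
       data objects plus an offset within the selected object. */
    #define OBJ_BITS 16u                  /* 64KB per data object */
    #define OBJ_MASK ((1u << OBJ_BITS) - 1u)

    typedef uint32_t c_ptr;               /* a "pointer" in the C world */

    static uint32_t object_index(c_ptr p)  { return p >> OBJ_BITS; }
    static uint32_t object_offset(c_ptr p) { return p & OBJ_MASK; }

    int main(void)
    {
        c_ptr p = 0x0003ABCDu;            /* hypothetical C-world address */
        printf("data object #%u, offset 0x%04X\n",
               (unsigned)object_index(p), (unsigned)object_offset(p));
        return 0;
    }

Pointer arithmetic then stays ordinary integer arithmetic; the cost shows
up when the generated code has to turn the index into an Access Descriptor
selection on every dereference, which is exactly what Release 1 and 2 give
no help with.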

> -spc (Curious here, as some aspects of the 432 made their way to the 286
> and we all know what happened to that architecture ... )
>

The only significant aspect of the 432 that made it into the 286 was the
use of 64KB segments, and that had already been done (badly) in the 8086.

The 432 architects went on to design a RISC processor that eliminated most
of the drawbacks of the 432, but still supported object-oriented
addressing, type safety, and memory safety, using a 33-bit word with one
bit being the tag to differentiate Access Descriptors from data. This
became the BiiN machine, which was unsuccessful. With the tag bit and
object-oriented instructions removed, it became the i960; the tag bit and
object-oriented instructions were later offered as the i960MX. The military
used the i960MX, but it is unclear whether they actually made use of the
tagging.

Eric


Re: RTX-2000 processor PC/AT add-in card (any takers?)

2017-04-10 Thread Eric Smith via cctalk
On Apr 10, 2017 2:43 PM, "Chuck Guzis via cctalk" 
wrote:
> Were there any microprocessor chips that attempted to mimic the
> Burroughs B5000 series and natively execute Algol of any flavor?

Yes, that's what the HP 3000 did (before PA RISC), and they did make
microprocessor implementations of it.

The Intel iAPX 432 was also designed to explicitly support block-structured
languages. The main language Intel pushed was Ada, but there was no
technical reason it couldn't have supported Algol, Pascal, Modula, Euclid,
Mesa, etc. just as well. (Or just as poorly, depending on your point of
view.)

The iAPX 432 could not have supported standard C, though, except in the
sense that since the 432 GDP was Turing-complete, code running on it could
provide an emulated environment suitable for standard C.

When the 432 project (originally 8800) started, there weren't many people
predicting that C (and its derivatives) would take over the world.


Re: Trip to CHM - Hotel/Restaurant Advice

2017-03-31 Thread Eric Smith via cctalk
On Thu, Mar 30, 2017 at 5:05 PM, Rich Alderson via cctalk <
cctalk@classiccmp.org> wrote:

> and what there are are not
> easily within walking distance of good food (In 'n' Out does not qualify).
>

Them's fightin' words!


Re: PSU protection with resettable polyfuse

2017-03-29 Thread Eric Smith via cctalk
On Wed, Mar 29, 2017 at 9:24 AM, Systems Glitch via cctalk <
cctalk@classiccmp.org> wrote:

> > Any downsides to resettable polyfuses?
>
> If you hit them hard enough, they'll sometimes permanently open, which is
> desirable anyway but does require rework. I don't remember how they stack
> up speed-wise, I'm sure it's in the datasheets.
>

They're not very fast. They're comparable to a slow-blow fuse.


Re: Floating point routines for the 6809

2017-03-27 Thread Eric Smith via cctalk
On Mon, Mar 27, 2017 at 2:53 PM, Sean Conner via cctalk <
cctalk@classiccmp.org> wrote:

>   Some time ago I came across the MC6839 ROM which contains floating point
> routines for the 6809.  The documentation that came with it stated:
>
> Written for Motorola by Joel Boney, 1980
> Released into the public domain by Motorola in 1988
> Docs and apps for Tandy Color Computer by Rich Kottke, 1989
>
>   What I haven't been able to find is the actual *source code* to the
> module.  Is it available anywhere?  I've been playing around with the MC6839
> on an emulator but having the source would clear up some issues I've been
> having with the code.
>

https://github.com/brouhaha/float09

I haven't modified it to assemble with a readily available assembler, so I
don't know whether it assembles into the exact MC6839 ROM image.


Re: TRS-80 Model 1 Expansion Interface question?

2017-03-20 Thread Eric Smith via cctalk
On Mon, Mar 20, 2017 at 6:46 PM, Win Heagy via cctech  wrote:

> The expansion interface hardware manual indicates
> it is an FD1771B-01, but the service manual indicates a couple
> possibilities: FD1771 A/B -01 -11.  Any considerations to look for here?
>

All other things being equal, I'd use the FD1771x-01, with x being either A
or B.

The A vs. B is just whether the chip is in a ceramic (A) or plastic (B)
package. This makes no difference in the TRS-80 EI.

The numeric suffix indicates functional or specification differences.
The -01 suffix is rated for operation at 1 MHz or 2 MHz, which is fine for
the TRS-80 EI, which runs it at 1 MHz (as required for 5.25-inch
standard-density floppies).
The -02 suffix is rated for operation at 2 MHz only, so in principle it
isn't guaranteed to work in the TRS-80 EI, but in practice it should work
fine.
The -11 suffix is for a part that has an internal substrate bias generator,
so pin 1 (Vbb) should be disconnected. This wouldn't meet spec in the
TRS-80 EI without disconnecting pin 1, though it's possible that it might
work OK without doing so.

At 1 MHz, the available stepping rate selections for the -01 and -11 are
12, 20, and 40 ms, while for the -02 they are 12, 16, and 20 ms.


Re: Pair of Twiggys

2017-03-15 Thread Eric Smith via cctalk
On Mar 15, 2017 3:28 PM, "Fred Cisin via cctalk" 
wrote:
> I was surprised that Jobs didn't make the Lisa floppy 5.0 or 5.5 inches,

I assume that Apple wanted to get at least a small benefit of economy of
scale from media manufacturers not having to retool for a different size,
even though they had to use a higher coercivity coating and a different
punch for the jacket.

> and used a relatively standard drive for the Mac.

The Mac used a Twiggy drive (AKA FileWare, AKA Apple 871 drive) until very
late in development. Twiggy drives were intended for use on the Apple II
and III as well, though they didn't go into production. The decision to use
Sony 3.5" drives was a response to the huge problems Apple had with the
Twiggy.

>  I would have thought that he would want people to buy even their media
from Apple.

Other vendors sold Twiggy media under the FileWare trademark, presumably
under license. I have no idea whether a per-disk royalty was involved. I
have unopened boxes of Verbatim FileWare diskettes.


Re: I hate the new mail system

2017-03-07 Thread Eric Smith via cctalk
On Tue, Mar 7, 2017 at 2:48 AM, Christian Corti via cctalk <
cctalk@classiccmp.org> wrote:

> And cctalk@... is neither responsible for the writing of the message nor
> does it belong to the author of the message. But replies should be directed
> there, so there should be a Reply-To: field containing cctalk@... and the
> From: field should contain the author's address.
>

And thus we come full circle. The "From:" header containing the original
author's address is the cause of the unsubscribes due to bounces, and (I
assume) the motivation for the recent change to the list behavior.

The problem is that major production MTAs will reject (bounce) email with a
"From:" whose domain uses DKIM or SPF, when the sending MTA isn't in the
DKIM or SPF authorized sender list. This will almost always be true when
messages are forwarded by a mailing list. Since the bounces go back to the
mailing list, the mailing list software then drops the entirely legitimate
list subscriber's subscription.  :-(

The behavior you describe is certainly correct according to the RFCs, but
unfortunately now falls apart in practice.  :-(
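
To make the failure concrete, imagine the author's domain publishes a
strict SPF record; the record below is purely illustrative, not anyone's
real policy:

    example.org.  IN TXT  "v=spf1 mx ip4:192.0.2.10 -all"

Strictly speaking, SPF is checked against the envelope sender and DKIM
against the signed headers and body; DMARC is what ties both back to the
"From:" domain. A list normally rewrites the envelope to its own domain, so
the author domain's SPF no longer aligns, and the list's subject tag and
footer break the author domain's DKIM signature. With a DMARC policy of
p=reject, either failure turns into the bounces described above.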


Re: Clock program for COSMAC Elf microcomputer with PIXIE graphics

2017-03-03 Thread Eric Smith via cctalk
On Fri, Mar 3, 2017 at 3:27 AM, Eric Smith  wrote:

> I've written a clock program to run on an unexpanded Elf with PIXIE
> graphics
>

A few seconds of video of it running on a Netronics Elf II:
https://www.youtube.com/watch?v=vKCsw_7wpdw


Re: Binary keypad front panel

2017-03-03 Thread Eric Smith via cctalk
On Mar 3, 2017 12:58 PM, "John Wilson via cctalk" 
wrote:
> It occurred to me that lots of old machines had binary front panels
> (switches and lights) and lots of machines had keypad front panels (octal
> or hex, with 7-segment LEDs), but I'd never seen a binary keypad front
> panel.

It wasn't a computer, but the first commercial frequency-synthesized
scanning receiver, the Tennellec Memoryscan, circa 1974, used a binary
keypad. It came with a fat book listing a 16-bit binary code for each
frequency the scanner could receive. You could program up to 16 such codes
into the scanner.

My grandfather bought one for my grandmother when I was 10 years old, and I
was put in charge of programming in the codes for the frequencies my
grandmother selected. Since I already new binary, I worked out formulas for
the codes so that I wouldn't have to use the book. That way I could program
the scanner in only ten times the time.

It did come in handy some years later when my grandmother wanted to change
the frequencies, but the code book had been lost.


Clock program for COSMAC Elf microcomputer with PIXIE graphics

2017-03-03 Thread Eric Smith via cctalk
I've written a clock program to run on an unexpanded Elf with PIXIE
graphics. It proved to be quite a challenge to fit it into 256 bytes, but
I've now got it working, with two bytes of RAM to spare. There are 12-hour
and 24-hour versions. I've released it under the GPL 3.0 license.

The source code is in a github repository:
https://github.com/brouhaha/elf-clock

A text file containing instructions and hexadecimal object code is at:

https://github.com/brouhaha/elf-clock/releases/download/v0.1/elf-clock-v0.1.txt


Re: MIPS I-IV instruction set standards

2017-02-28 Thread Eric Smith via cctalk
On Mon, Feb 27, 2017 at 11:59 PM, Angelo Papenhoff via cctalk <
cctalk@classiccmp.org> wrote:

> I'm wondering where the MIPS I-IV standards that are referenced
> everywhere are defined. I was able to actually find what seems to be the
> IV standard [1] but found no such thing for I-III. I didn't even find
> any bibliographic references to them. Did they only exist as printed
> books and nobody bothered to scan them? Or are they under copyright?
>

AFAIK, there weren't any formal MIPS architecture standards published for
the early versions of the architecture, though there may have been such
things internally to MIPS. That was not an uncommon practice for computer
architectures; for instance, the formal DEC VAX architecture standard was a
DEC confidential document not available to customers, even though there was
plenty of customer documentation covering most aspects of the architecture.

It might be worth asking on comp.arch.

