Re: Spelunking the places where files are not

2021-03-05 Thread Paul Koning via cctalk



> On Mar 5, 2021, at 5:02 PM, Glen Slick via cctalk  
> wrote:
> 
> On Fri, Mar 5, 2021 at 1:46 PM Paul Koning via cctalk
>  wrote:
>> 
>> Yes, RT11 has contiguous files.  That actually made it rather unusual.  For 
>> example, while RSTS supports contiguous files that isn't the default and 
>> because of disk fragmentation wasn't commonly used.
> 
> On VMS you can copy files with the /CONTIGUOUS switch to specify that
> the output file must occupy contiguous physical disk blocks. Of course
> the default is /NOCONTIGUOUS.
> 
> I vaguely remember using the /CONTIGUOUS switch to copy MDM (MicroVAX
> Diagnostic Monitor) diagnostic files from one bootable MDM disk to
> another. I forget if that is necessary for proper operation of MDM.

I like to make RSTS floppy files contiguous to avoid spending so much time 
going back to the directory to find the next set of file data pointers.  In 
RSTS, a few files have to be contiguous: run time systems and shared libraries, 
swap files, the system error message file, and the DECtape directory buffer 
file.  That's about it.  In early versions, the monitor had to be contiguous as 
well, but as of V6B that is no longer true (not for INIT either).

Strangely enough, in the RSTS file system there are always pointers to each 
file cluster, even if the file is contiguous.  It didn't dawn on me until a few 
weeks ago that I should have changed that -- I could have done that back around 
1982 or so.  Oops.

paul



Re: [simh] RSTS processor identification

2021-03-05 Thread Paul Koning via cctalk



> On Mar 5, 2021, at 7:22 PM, Johnny Billquist via cctalk 
>  wrote:
> 
> ...
>> Maybe this weekend I'll hack that SSD floppy thingie and load up the P/OS 
>> 3.2 disks to see how that works.
> 
> Can't run split I/D space on any version of P/OS. Neither does it support 
> supervisor mode. Also, the J11 on the Pro-380 is running a bit on the slow 
> side. Rather sad, but I guess they didn't want to improve the support chips 
> on the Pro, which limited speed, and they didn't want to start having Pro 
> software that didn't run on all models, which prevented the I/D space and 
> supervisor mode.
> 
> In the end I would probably just put it down to additional ways DEC 
> themselves crippled the Pro, which otherwise could have been a much better 
> machine.

The most embarrassing blunder with the Pro is that the bus supports DMA, but no 
I/O cards use it.  Even though a bunch of them should have -- hard disk 
controller obviously, network adapter possibly as well.

I/D and supervisor mode work fine on RSTS.  :-)

The explanation I heard for the slow J-11 clock is that the original J-11 spec 
called for it to operate at 20 MHz.  When Harris failed to deliver and the max 
useable clock speed ended up being 18 MHz, most designs had no trouble.  But 
the Pro support chips were designed to run synchronous with the CPU clock and 
for various other reasons needed a clock frequency that's a multiple of 10 MHz, 
so when 20 MHz was ruled out that left 10 MHz as the only alternative.

I would have liked better comms.  The USART has such a tiny FIFO that you can't 
run it at higher than 9600 bps even with the J-11 CPU.  At least not with RSTS; 
perhaps a lighter weight OS can do better.  The printer port is worse, that one 
can't run DDCMP reliably at more than 4800 bps.  I normally run DDCMP on the 
PC3XC, which is a 4-line serial card that uses two dual UART chips (2681?) with 
reasonable FIFO.

paul



Re: [simh] RSTS processor identification

2021-03-07 Thread Paul Koning via cctalk



> On Mar 5, 2021, at 9:02 PM, Johnny Billquist  wrote:
> 
> On 2021-03-06 02:33, Paul Koning wrote:
>>> ...
> 
>> I would have liked better comms.  The USART has such a tiny FIFO that you 
>> can't run it at higher than 9600 bps even with the J-11 CPU.  At least not 
>> with RSTS; perhaps a lighter weight OS can do better.  The printer port is 
>> worse, that one can't run DDCMP reliably at more than 4800 bps.  I normally 
>> run DDCMP on the PC3XC, which is a 4-line serial card that uses two dual 
>> UART chips (2681?) with reasonable FIFO.
> 
> Hmm. I'm pretty sure I was running my -380 with the printer port for DDCMP on 
> HECnet for a while, and at 9600 bps.

DDCMP runs fairly well on RSTS with the printer port at 9600, but I get some 
overruns.  My guess is that the terminal driver (which is front ending the 
DDCMP machinery) isn't as lightweight as the equivalent on RSX.  Or do you 
bypass the terminal driver and get a separate comms-specific driver for this 
case?

> But with P/OS, you are not using the console port as such. That's all on the 
> graphics side.
> But unless I'm confused, that's the same port. The printer port just can also 
> be the console port, if you short pins 8-9, right? Except it won't fully work 
> the same as the DL11, since interrupts work differently. But polled I/O will 
> work the same.
> But I would expect the speed characteristics to be the same for the console 
> as for the printer port.

Correct, printer and console are actually the same thing.  If you use the 
console cable (pin 8 connected to 9) then that materializes a DL11-like CSR set 
at 177560.  Yes, with polled I/O such as the ODT microcode uses that works just 
like a real DL11, but for interrupts it's different.  In RSTS, either way that 
port becomes a terminal port.

RSTS does have support for the graphics module, in "glass TTY" mode within the 
initialization code and full VT220 emulation in RSTS proper.  Well, except for 
blink mode, and no bold in 132 column mode.

paul



Re: [simh] RSTS processor identification

2021-03-08 Thread Paul Koning via cctalk



> On Mar 7, 2021, at 6:42 PM, Johnny Billquist  wrote:
> 
> 
> 
> On 2021-03-07 23:00, Paul Koning wrote:
>>> On Mar 5, 2021, at 9:02 PM, Johnny Billquist  wrote:
>>> 
>>> On 2021-03-06 02:33, Paul Koning wrote:
> ...
>>> 
 I would have liked better comms.  The USART has such a tiny FIFO that you 
 can't run it at higher than 9600 bps even with the J-11 CPU.  At least not 
 with RSTS; perhaps a lighter weight OS can do better.  The printer port is 
 worse, that one can't run DDCMP reliably at more than 4800 bps.  I 
 normally run DDCMP on the PC3XC, which is a 4-line serial card that uses 
 two dual UART chips (2681?) with reasonable FIFO.
>>> 
>>> Hmm. I'm pretty sure I was running my -380 with the printer port for DDCMP 
>>> on HECnet for a while, and at 9600 bps.
>> DDCMP runs fairly well on RSTS with the printer port at 9600, but I get some 
>> overruns.  My guess is that the terminal driver (which is front ending the 
>> DDCMP machinery) isn't as lightweight as the equivalent on RSX.  Or do you 
>> bypass the terminal driver and get a separate comms-specific driver for this 
>> case?
> 
> I realized I might have spoken too soon. There is also a comm port, and now 
> I'm unsure if DECnet isn't running over that one actually.

That would make a difference.  The printer port is a 2661 on the Pro 350, or 
the gate array equivalent on the Pro 380.  Either way, it's a UART without a 
FIFO.  The comm port is an 8274, which has a 3 byte FIFO.  So does the 2681 
dual UART, which is what the 4 port comm card uses.  In my tests, that FIFO 
makes the difference between running reliably at 9600 baud, and getting 
frequent overrun errors.
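
Back of the envelope, assuming the usual 10 bits on the wire per character 
(start + 8 data + stop -- an assumption, not something measured on the Pro):

    # Rough interrupt-latency budget at the speeds discussed above.
    def char_time_ms(bps, bits_per_char=10):
        return 1000.0 * bits_per_char / bps

    for bps in (4800, 9600):
        t = char_time_ms(bps)
        print(f"{bps} bps: {t:.2f} ms per character, roughly {t:.2f} ms of "
              f"latency slack with a bare UART, ~{3 * t:.2f} ms with a 3-byte FIFO")

At 9600 bps that's about a millisecond per character without a FIFO, which is 
not much slack for a driver that has other work to do.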

> Anyway, in RSX, when running DDCMP on the serial port, DECnet has its own 
> device driver. So not talking through any terminal device driver, which have 
> all kind of features and capabilities expected for a terminal line.
> 
> Same with normal RSX, which is why you have to dedicate the whole controller 
> to either DECnet or TT. You can't mix.

That's probably more efficient.  In RSTS I added the DDCMP support as an 
"auxiliary" function attached to the terminal driver, so the regular terminal 
driver does the device control and then diverts the data stream to/from the 
DDCMP driver.  It's a bit like how Linux does these things, I forgot what term 
they use.  In fact, it would be possible to add DDCMP support to Linux in the 
same way if someone wants to try that... :-)
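
In toy form the hookup looks something like this -- the names and structure 
here are invented for illustration, not RSTS (or Linux) code; only the idea of 
the terminal driver diverting its byte stream to an attached protocol handler 
comes from the description above:

    class TerminalPort:
        """Device control stays here; data can be diverted to an aux handler."""
        def __init__(self):
            self.aux = None                     # e.g. a DDCMP handler, when attached
        def attach(self, handler):
            self.aux = handler
        def receive(self, byte):
            if self.aux:                        # diverted to the protocol machinery
                self.aux.input(byte)
            else:
                print("tty <-", byte)           # normal terminal input path

    class DdcmpHandler:
        def __init__(self):
            self.frame = bytearray()
        def input(self, byte):
            self.frame.append(byte)             # framing/CRC handling would go here

    port = TerminalPort()
    port.attach(DdcmpHandler())
    port.receive(0x05)                          # 0x05 = ENQ, the DDCMP control-frame lead-in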

>>> But with P/OS, you are not using the console port as such. That's all on 
>>> the graphics side.
>>> But unless I'm confused, that's the same port. The printer port just can 
>>> also be the console port, if you short pins 8-9, right? Except it won't 
>>> fully work the same as the DL11, since interrupts work differently. But 
>>> polled I/O will work the same.
>>> But I would expect the speed characteristics to be the same for the console 
>>> as for the printer port.
>> Correct, printer and console are actually the same thing.  If you use the 
>> console cable (pin 8 connected to 9) then that materializes a DL11-like CSR 
>> set at 177560.  Yes, with polled I/O such as the ODT microcode uses that 
>> works just like a real DL11, but for interrupts it's different.  In RSTS, 
>> either way that port becomes a terminal port.
>> RSTS does have support for the graphics module, in "glass TTY" mode within 
>> the initialization code and full VT220 emulation in RSTS proper.  Well, 
>> except for blink mode, and no bold in 132 column mode.
> 
> Well, in P/OS you do have the option of also play graphics, and do different 
> resolutions. But the "terminal" handling for it have similar limitations. I 
> think blink isn't working the same as in a VT100, nor is reverse (if I 
> remember correctly). And of course, smooth scrolling do not work you you 
> don't scroll the whole screen, since the hardware isn't capable, and doing it 
> in software would be way too slow.

Right, I forgot about partial smooth scroll.  Blink could be done fairly easily 
with EBO through the color lookup table; I haven't bothered doing that.  Same 
for bold.  Reverse wasn't a problem in my experience.

paul



Re: [simh] RSTS processor identification

2021-03-08 Thread Paul Koning via cctalk



> On Mar 5, 2021, at 9:15 PM, Chris Zach via cctalk  
> wrote:
> 
>>> Can't run split I/D space on any version of P/OS. Neither does it support 
>>> supervisor mode. Also, the J11 on the Pro-380 is running a bit on the slow 
>>> side. Rather sad, but I guess they didn't want to improve the support chips 
>>> on the Pro, which limited speed, and they didn't want to start having Pro 
>>> software that didn't run on all models, which prevented the I/D space and 
>>> supervisor mode.
> 
> That sucks. I sometimes wonder how hard it would be to code the hard disk 
> driver, if it doesn't do DMA it's probably simple as dirt to be honest. Any 
> idea if it worked like MSCP or was it totally off the wall?

The Pro hard disk driver is indeed pretty simple.  Nothing like MSCP; it's 
more like an old style disk controller with CSRs to tell the device what to do. 
 Basically, you convert linear sector number to cylinder/track/sector (which 
requires knowing the specific drive type), then load that into the CSRs along 
with a command.  Then for a write operation you write the words of data, 256 of 
them, to the data buffer CSR.  For a read, you wait for the data ready 
interrupt, then read the data one word at a time from that same CSR.  Repeat 
for the next sector.
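
In outline, and in Python rather than a real driver, it looks something like 
this.  The geometry numbers and the FakeController register interface are 
invented for illustration; only the convert-address / load-CSRs / move-words 
shape follows the description above:

    WORDS_PER_SECTOR = 256

    class FakeController:
        """Stand-in for the disk CSRs: just records what a driver would do."""
        def __init__(self):
            self.log = []
        def load_address(self, cyl, track, sector):
            self.log.append(("addr", cyl, track, sector))
        def command(self, name):
            self.log.append(("cmd", name))
        def write_word(self, word):
            self.log.append(("data_out", word))
        def wait_data_ready(self):
            self.log.append(("wait",))          # data-ready interrupt in real life
        def read_word(self):
            self.log.append(("data_in",))
            return 0

    def lbn_to_chs(lbn, sectors_per_track=16, tracks_per_cyl=4):
        """Convert a linear block number to cylinder/track/sector.
        The per-track and per-cylinder counts depend on the drive type."""
        sector = lbn % sectors_per_track
        track = (lbn // sectors_per_track) % tracks_per_cyl
        cyl = lbn // (sectors_per_track * tracks_per_cyl)
        return cyl, track, sector

    def write_sector(ctrl, lbn, words):
        """Load the address and command into the CSRs, then feed 256 words."""
        ctrl.load_address(*lbn_to_chs(lbn))
        ctrl.command("write")
        for w in words:
            ctrl.write_word(w)                  # one word at a time, no DMA

    def read_sector(ctrl, lbn):
        """Command the read, wait for data ready, pull 256 words back."""
        ctrl.load_address(*lbn_to_chs(lbn))
        ctrl.command("read")
        ctrl.wait_data_ready()
        return [ctrl.read_word() for _ in range(WORDS_PER_SECTOR)]

    ctrl = FakeController()
    write_sector(ctrl, 1234, [0] * WORDS_PER_SECTOR)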

The floppy is similar except that the transfer is byte-at-a-time, and the 
address mapping is more complicated because the software has to deal with the 
sector interleave, track skew, and funny cylinder numbering.

An entirely different odd design is the Pro Ethernet card, which uses the evil 
82586 Ethernet chip.  That does DMA -- into a 64kW memory that's part of the 
DECNA card.  So the OS would allocate Ethernet buffers to that memory space and 
can then do DMA.  That's not too bad, and 64kW is a decent amount of memory.  
The real problem is that the 82586 is by far the worst DMA engine design ever 
created.  It actually implements design errors that were well understood and 
well documented (and solved) 20 years earlier, but such considerations never 
stopped Intel.

>> The most embarrassing blunder with the Pro is that the bus supports DMA, but 
>> no I/O cards use it.  Even though a bunch of them should have -- hard disk 
>> controller obviously, network adapter possibly as well.
> 
> I think they used an intel chipset to handle the CTI bus, so the normal Q-Bus 
> DMA methods just doesn't work. Hm. Wonder if the problem is they just didn't 
> build the driver to support DMA, or if they found some problem that made DMA 
> just not work at all

The documentation clearly describes DMA operation of the CT bus on both 350 and 
380 (see the Technical Manual on Bitsavers).  I don't know why it was never 
used, I never heard any rumors about it.

I don't believe the bus control uses Intel chips.  The interrupt controller 
does, yes, which is another bad design decision but one that can be worked 
around adequately.  The Pro 380 implements only a subset of the full interrupt 
controller, the parts that aren't totally absurd.

> The 380 *was* a mess, mine is a formidable bit of kit with DECNA and 
> everything, but without I/D space it's really not too very useful as more 
> than a really nice VT terminal.

I/D space is just an OS issue; it works fine in the 380.  As for a VT terminal, 
it's actually reasonably good at that but not great; the video hardware can't 
quite do everything a real VT220 can.

paul




Re: [simh] RSTS processor identification

2021-03-08 Thread Paul Koning via cctalk



> On Mar 5, 2021, at 9:02 PM, Johnny Billquist  wrote:
> 
> On 2021-03-06 02:33, Paul Koning wrote:
> ...
>> The explanation I heard for the slow J-11 clock is that the original J-11 
>> spec called for it to operate at 20 MHz.  When Harris failed to deliver and 
>> the max useable clock speed ended up to be 18 MHz, most designs had no 
>> trouble.  But the Pro support chips were designed to run synchronous with 
>> the CPU clock and for various other reasons needed a clock frequency that's 
>> a multiple of 10 MHz, so when 20 MHz was ruled out that left 10 MHz as the 
>> only alternative.
> 
> I do think it sounds weird that the support chips would require a clock that 
> is a multiple of 10 MHz. But I wouldn't know for sure.
> Somewhere else I read/heard that they didn't work reliable above 10 MHz, but 
> for the F11 that was ok. When the -380 came, they just reused those support 
> chips.

The 380 has an entirely different core design.  Instead of lots of discrete 
support chips including a pile of screwball Intel chips, it uses a pair of gate 
arrays that incorporate all those functions.  Or more precisely, the subset 
that the OS actually needs.  This is really obvious when you compare the 350 
and 380 documentation for the interrupt controllers -- the 350 uses Intel 
chips, the 380 only implements a tiny subset of what those chips do.

I'm guessing here, but a possible reason for the 10 MHz issue is if the support 
chips use that clock, and use a synchronous design for the clock boundary 
crossing rather than an asynchronous design.  It's entirely possible to design 
a chip that has an outside interface with an unrelated clock frequency, but 
it's harder to do and harder to get right.

paul




Re: DF32?

2021-03-09 Thread Paul Koning via cctalk



> On Mar 9, 2021, at 6:53 PM, Chris Zach via cctalk  
> wrote:
> 
>> So did one you bid over $1500?
> 
> Not me. $1k would have been my limit, it's really kind of insane to run 
> something like that. As I put on my old memory hat I remember that the 
> platter would rust but at least the heads would not weld to the platter.

Hm.  I know we had that exact problem in college with an RS11 disk (on our RSTS 
system).  That required replacing the heads, platter, and I think motor.

> 
> Also there were two timing tracks on it and if they were toast the platter 
> was as well.

Really?  The very similar RS64, as well as the RS11, both had a formatter 
device that field service could use to write the timing tracks if they were 
lost.  Or, for that matter, if the platter had to be replaced, since it arrived 
from the factory totally blank.

> Although these days you could probably just build a formatter for it from a 
> Beaglebone and reformat. But at that point you could just have the BB spit 
> out the head data right to the controller. And you could just replace the 
> whole thing with a BB that could replicate every disk drive DEC made for the 
> pdp8.

Sure, a generalization of Dave Gesswein's MFM emulator.  I was just looking the 
other day at how practical it would be for such a device to do an RK05 emulation.  
The answer seems to be: quite practical.

paul




Re: DF32?

2021-03-09 Thread Paul Koning via cctalk



> On Mar 9, 2021, at 7:38 PM, Paul Koning via cctalk  
> wrote:
> 
> 
> 
>> On Mar 9, 2021, at 6:53 PM, Chris Zach via cctalk  
>> wrote:
>> 
>>> So did one you bid over $1500?
>> 
>> Not me. $1k would have been my limit, it's really kind of insane to run 
>> something like that. As I put on my old memory hat I remember that the 
>> platter would rust but at least the heads would not weld to the platter.
> 
> Hm.  I know we had that exact problem in college with an RS11 disk (on our 
> RSTS system).  That required replacing the heads, platter, and I think motor.

I meant that I have experienced welding of the heads.  I haven't seen rusting, 
though I could believe that it's possible -- the platter looks like it's been 
blued like old style firearms.

paul



Re: DF32?

2021-03-09 Thread Paul Koning via cctalk



> On Mar 9, 2021, at 8:32 PM, Chris Zach via cctalk  
> wrote:
> 
>> Really?  The very similar RS64, as well as the RS11, both had a formatter 
>> device that field service could use to write the timing tracks if they were 
>> lost.  Or, for that matter, if the platter had to be replaced, since it 
>> arrived from the factory totally blank.
> 
> Oh, sorry, meant the data was lost. I don't think it had the formatter on the 
> unit though.

Right, the formatter was a piece of field service hardware.  I think typically 
it had to be shipped up from Maynard, there wasn't enough call for them to have 
them at each field office.

One oddity is that the timing track clock frequency on those writers is 
variable.  The device would write the correct number of timing pulses and then 
read the timing track to verify the length of the gap at the end.  Lights would 
indicate whether the gap was too short, correct, or too long, and you'd adjust 
the frequency knob accordingly until the "ok" light came on.  It's documented 
in the maintenance manual.  I read it long before I saw it done, and was amazed 
that yes, it actually works just as strangely as the manual claims.

Judging by the block diagram in the manual, you could build your own in an 
afternoon or two.
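
The control loop amounts to something like this -- the revolution time, pulse 
count, and "ok" window below are made up, only the write / check-gap / adjust 
cycle follows the manual's procedure:

    REV_TIME_US = 16_667        # one revolution of the disk, microseconds (assumed)
    PULSES = 4096               # timing pulses written per revolution (assumed)
    GAP_OK_US = (40, 60)        # end gap that lights the "ok" lamp (assumed)

    def end_gap(freq_hz):
        """Gap left at the end after writing PULSES pulses at freq_hz."""
        return REV_TIME_US - PULSES * 1e6 / freq_hz

    def adjust(freq_hz, step_hz=50):
        """Turn the frequency knob until the measured gap lands in the ok window."""
        while True:
            gap = end_gap(freq_hz)
            if gap < GAP_OK_US[0]:
                freq_hz += step_hz      # gap too short: clock too slow, speed it up
            elif gap > GAP_OK_US[1]:
                freq_hz -= step_hz      # gap too long: clock too fast, slow it down
            else:
                return freq_hz          # "ok" light

    print(adjust(245_000))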

>> Sure, a generalization of Dave Gesswein's MFM emulator.  I was just looking 
>> the other day how practical it would be for such a device to do an RK05 
>> emulation.  The answer seems to be: quite practical.
> 
> The MFM emulator is an amazing bit of kit.

It certainly is.  It works wonderfully well.

paul



DECtape ancestry

2021-03-11 Thread Paul Koning via cctalk
I just read part of the Grant Saviers interview from CHM, where near the end he 
gives a bit of history of DECtape.  In particular, the fact that it was 
derived from LINCtape, though the format details are quite different.

A question popped into my mind, prompted by having read Guy Fedorkow's paper 
about Whirlwind just a few days earlier: the Whirlwind tape format has 6 
physical tracks but 3 logical tracks (each logical track is recorded 
redundantly on two physical tracks) and one of those tracks is a clock track.  
LINCtape and DECtape have the same redundant recording scheme, and also have a 
clock track; the difference is that they add a mark track to enable the 
recording of block numbers and in-place block writing.

That made me wonder if LINCtape was, in part, inspired by the Whirlwind tape 
system, or if those analogies are just a coincidence.

Incidentally, it's probably not widely known that LINCtape/DECtape is not the 
only tape system with random block write capability.  Another one that does 
this is the Electrologica X1 tape system, which uses 1/2 inch 10 track tapes, 
which include a clock and a mark track.  An interesting wrinkle is that the X1 
tape system lets you choose the block size when formatting the tape, and then 
data block writes allow for the writing of any block size up to the formatted 
block size.  I'm not sure when that device was introduced; the documentation I 
have is from 1964.  There's no sign the designers knew of DECtape (or vice 
versa).

paul



Re: Wagner WAC40

2021-03-13 Thread Paul Koning via cctalk



> On Mar 13, 2021, at 3:17 AM, Joshua Rice via cctalk  
> wrote:
> 
> Hi, 
> 
> I recently bought a core rope memory unit from a Wagner WAC40, mainly because 
> it’s very aesthetically pleasing and looks good on display: 
> https://i.redd.it/h9sb550uhnm61.jpg 

Very interesting looking.  I can't quite make out what is going on in that 
rectangular area where all the wires terminate, labeled 0-15 and A-R.  Are 
there diodes there?  Anything on the other side of that board?

The large cores with all the wires are reminiscent of core rope ROM.  If so, 
I wonder if it's AGC (Lincoln Labs) style, EL-X1 style, or a scheme different 
from either of those two.

paul



Re: Wagner WAC40

2021-03-14 Thread Paul Koning via cctalk



> On Mar 13, 2021, at 1:34 PM, Joshua Rice via cctalk  
> wrote:
> 
> 
>> Very interesting looking.  I can't quite make out what is going on in that 
>> rectangular area where all the wires terminate, labeled 0-15 and A-R.  Are 
>> there diodes there?  Anything on the other side of that board?
> 
> Nothing but traces on the other side, though you’re right on them being 
> diodes.
> 
>> The large cores with all the wires are reminiscent of core rope ROM.  If 
>> so, I wonder if it's AGC (Lincoln Labs) style, EL-X1 style, or a scheme 
>> different from either of those two.
> 
> It’s definitely some form of core rope ROM. 
> 
> Interestingly, the ferrite rings are built in pairs, with a "selection" coil 
> wrapped around both, joining them. Therefore (i assume, i’m really no expert) 
> they’ll be a positive pulse induced when passing through one coil, but a 
> negative pulse when passed through the opposite coil. This probably helps in 
> differentiating between a 0, a 1, or a NULL state (ie 0v). 
> 
> I have no idea if that correlates with any particular format of Core Rope, 
> but as far as my eyes can tell, that's how the core rope is woven and 
> functions.

The key component of core rope memory (and X1 ROM) is square-loop cores, like 
the cores used in conventional read/write core memory.

There is another kind of core ROM where the cores are simply transformer cores. 
 Since you mentioned a "selection" coil, chances are that's what we're dealing 
with here.

Brent Hilpert has a great writeup on a number of the technologies used.  
http://madrona.ca/e/corerope/index.html 

paul



Re: DECtape ancestry

2021-03-20 Thread Paul Koning via cctalk



> On Mar 20, 2021, at 4:07 PM, Kyle Owen via cctalk  
> wrote:
> 
> Why did DEC not use the LINCtape format for the PDP-8? I assume maintaining
> format compatibility between their low, mid, and high range systems was
> important to them? I suppose there was no other good solution to
> transferring large files across different DEC platforms than to use
> magnetic tape...so I may have answered my own question here.
> 
> Kyle

Speculating here since I have no direct knowledge: the DECtape format allows 
read and write in either direction, while LINCtape only allows read and write 
forward.  The bidirectional I/O capability was part of DECtape format from the 
start, and I suspect the desire was to keep that.

Also, the DECtape format (ignoring the funny bit order in the PDP-1 case) is 
the same across all models except for the block size and count in the PDP-8 
case.  Chances are the desire was to reuse all that design.

paul



Re: DECtape ancestry

2021-03-20 Thread Paul Koning via cctalk



> On Mar 20, 2021, at 4:21 PM, Kyle Owen via cctalk  
> wrote:
> 
> On Sat, Mar 20, 2021, 16:13 Paul Koning  wrote
> 
>> Speculating here since I have no direct knowledge: the DECtape format
>> allows read and write in either direction, while LINCtape only allows read
>> and write forward.  The bidirectional I/O capability was part of DECtape
>> format from the start, and I suspect the desire was to keep that.
> 
> 
> What systems took advantage of the bidirectional nature?
> 
> Kyle

DOS-11 for one (and thus RSTS, which reuses that format).  DOS DECtape files 
are linked lists; each block contains a link to the next block.  To allow 
reading one block at a time (start/stop mode), DOS interleaves 4:1.  If you're 
allocating a long file and the allocation reaches end of tape, allocation then 
continues in the reverse direction.  The extreme case of a single file that 
takes up the whole drive looks like two up/down passes over the tape, each one 
touching 1/4th of the blocks in each direction.  When a file is read, blocks 
are read in the same direction as they were written.  The direction is given by 
the sign of the block number in the link word, negative means reverse.
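
Walking such a file boils down to following signed links; in sketch form (the 
block layout and the end-of-chain marker here are assumptions for 
illustration, only the signed-link idea is from the real format):

    # fake tape: block number -> (link word, payload)
    tape = {
        10: (14, "first"),      # forward to block 14
        14: (-37, "second"),    # negative link: next block is read in reverse
        37: (0, "last"),        # 0 = end of chain (assumed marker)
    }

    def read_file(first_block):
        data, blkno, reverse = [], first_block, False
        while blkno != 0:
            link, payload = tape[abs(blkno)]    # a real driver reads the block
            data.append(payload)                # in the 'reverse' direction here
            reverse = link < 0                  # sign of the link gives direction
            blkno = abs(link)
        return data

    print(read_file(10))                        # ['first', 'second', 'last']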

As Grant pointed out in the oral history interview, bidirectional DECtape I/O 
in the sense that you could read a block in the opposite direction it was 
written isn't all that useful.  While the PDP-11 controller does the obverse 
complement thing, that just means you get the bits in the word correct but the 
256 words are still in the opposite order.  That could be handled, of course, 
but I haven't seen programs that do so.

Well, one exception: the tape formatter writes the timing/mark tracks forward, 
then writes all the blocks in reverse, then reads (to check) forward.  But those are test 
patterns so the job of dealing with the direction change is easy.

paul



Re: TC08 DECtape bootloader question

2021-03-21 Thread Paul Koning via cctalk



> On Mar 21, 2021, at 9:35 AM, Rick Murphy via cctalk  
> wrote:
> ...
> Trying again - my reply got chopped off for some reason.
> 
> You have to read the bootstrap code in the TC0x driver to understand this.
> 
> What happens is that the code watches the buffer pointer (7755) and when it 
> hits 7642, the remaining read is directed to field 1. The boot is looping on 
> 7616/DTSF and 7617/JMP .-1 when it's overwritten by the boot (the NOP below 
> overwrites the DTSF).

The details are different, but it reminds me a bit of the magic used in the 
bootstrap on the CDC 6000 mainframes.  The "deadstart panel" (boot rom 
implemented as 12 rows of 12 toggle switches) does a rewind followed by reading 
the first tape block into the top of memory.  During a read (or write) 
instruction, the program counter is temporarily stored in location 0 so it can 
be put to work as a buffer pointer instead.  The starting address of the read 
is arranged so the block read wraps around into location zero, the last word of 
the block overwrites the saved PC and causes execution to continue at that 
address.  Saves two words in the boot ROM.
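
A toy model of that wrap-around, with arbitrary sizes and addresses rather 
than real 6000 deadstart values:

    MEMSIZE = 4096                      # a CDC 6000 PP has 4096 12-bit words

    def deadstart_read(mem, start_addr, block):
        """Simulate a block read whose buffer pointer wraps past the top of
        memory so the final word of the block overwrites the saved PC at 0."""
        mem[0] = 0o1000                 # PC parked at location 0 during the read (value arbitrary)
        addr = start_addr
        for word in block:
            mem[addr % MEMSIZE] = word  # wraps around into location 0
            addr += 1
        return mem[0]                   # execution resumes at whatever landed here

    mem = [0] * MEMSIZE
    block = list(range(100))            # last word, 99, becomes the new "PC"
    print(deadstart_read(mem, MEMSIZE - 99, block))   # 99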

paul



Re: TC08 DECtape bootloader question

2021-03-21 Thread Paul Koning via cctalk



> On Mar 21, 2021, at 2:28 PM, Paul Koning  wrote:
> 
> 
> 
>> On Mar 21, 2021, at 9:35 AM, Rick Murphy via cctalk  
>> wrote:
>> ...
>> Trying again - my reply got chopped off for some reason.
>> 
>> You have to read the bootstrap code in the TC0x driver to understand this.
>> 
>> What happens is that the code watches the buffer pointer (7755) and when it 
>> hits 7642, the remaining read is directed to field 1. The boot is looping on 
>> 7616/DTSF and 7617/JMP .-1 when it's overwritten by the boot (the NOP below 
>> overwrites the DTSF).
> 
> The details are different, but it reminds me a bit of the magic used in the 
> bootstrap on the CDC 6000 mainframes. 

Another, more significant, example of a self-modifying boot loading 
process is the "emulator IPL" on the IBM 360 model 44.  The "emulator" is a 
chunk of separate memory and control used to emulate the SS instructions not 
implemented in the hardware.  I used such a machine in college and looked at 
the card deck for the emulator.

IPL ("initial program load") reads a record from the boot device -- the card 
reader in this case -- which is a channel program that is then executed to read 
the actual initial code.  In the emulator case, the rest of the card deck is a 
standard binary output file from the assembler -- think LDA format on a PDP-11. 
 The first card is a nice concoction of several channel commands that read 
another card, drop the load address and byte count fields for that card into 
another channel command word, then execute that CCW to send the data on the 
card to the right memory location.  It then loops back (CCW "command chaining") 
to do the same with the next card.  So the entire deck load is executed by the 
channel, no CPU involvement at all, transferring the right number of bytes from 
each card to the location it asked for.

I don't have any of this preserved, but it wouldn't be too hard to reconstruct 
the details from that description.  An exercise for the student... :-)
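
In outline, the effect of that first card is the loop below.  The card layout 
(where the load address, count, and data sit) is invented here; only the 
read-card / patch-the-CCW / move-the-data / chain-back behavior follows the 
description:

    def channel_loader(cards, memory):
        """Each card is (load_address, byte_count, data).  A real channel program
        would patch the address and count into a data-transfer CCW and execute it;
        here we just perform the equivalent moves, one card per chained command."""
        for load_addr, count, data in cards:
            memory[load_addr:load_addr + count] = data[:count]
        return memory

    memory = bytearray(64 * 1024)
    deck = [(0x1000, 4, b"\xDE\xAD\xBE\xEF"), (0x1004, 2, b"\x12\x34")]
    channel_loader(deck, memory)
    print(memory[0x1000:0x1006].hex())          # deadbeef1234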

paul




Re: DEC CTI Bus Technical Manual, or looking for Ken Wellsch or Megan Gentry

2021-03-24 Thread Paul Koning via cctalk
I'd be very interested in that document too.  The closest I've seen is not very 
close, the technical description in XT_Hardware_Handbook_1982.pdf.  But there's 
more detail that would be good to know, for example more 
information about what the option card ROMs look like.

I sent a message to Megan, we'll see if she can help.

paul

> On Mar 23, 2021, at 10:04 PM, Bjoren Davis via cctalk  
> wrote:
> 
> Hello All,
> 
> Does anyone have a copy of the DEC CTI Bus Technical Manual (EK-00CTI-TM-002) 
> I can scan?
> 
> If not, does anyone have an email address for Ken Wellsch or Megan Gentry as 
> they both appear to be authorities on the CTI bus (see 
> https://en.wikiversity.org/wiki/DEC_Professional_(computer)/Archive question 
> 10)?
> 
> Thanks in advance!
> 
> --Bjoren Davis
> 
> 



Re: Logic Analyser Usage Advice

2021-03-26 Thread Paul Koning via cctalk



> On Mar 26, 2021, at 5:08 AM, Rob Jarratt via cctalk  
> wrote:
> 
> I have an old HP 1630G logic analyser. I am trying to use it to debug a
> problem with an 82C206 peripheral controller (or rather I think damage
> between the CPU and the peripheral controller). I am not very experienced
> with logic analysers and I wonder if I am using it correctly.
> 
> What I am trying to do is see which internal registers are being
> read/written and the values. To do this there are two signals (XIOR and
> XIOW) that trigger the read/write on their rising edge. So I have connected
> the XIOR and XIOW signals to the J and K clock inputs and set the LA to
> clock on the rising edge. I have then told the LA to trigger on a particular
> address range (in the State Trace screen if anyone is familiar with this
> LA).
> 
> When I run the analyser it complains of a slow clock. This makes sense,
> because I am using the read/write signals to drive the clock inputs so that
> I only capture actual reads and writes to the peripheral controller.
> However, I don't seem to be getting sensible values in the trace and I am
> wondering if the LA is really not capturing anything because of the slow
> clock?
> 
> I don't think it makes sense to clock the LA on the actual clock signal
> because I won't be able to capture the address and data values on the rising
> edge of the read/write signals and I would end up with traces full of
> useless data.

If you have the trigger set to the event you want that wouldn't be a problem; 
the LA would not store anything until the trigger hits.

I have a different ancient logic analyzer, a Philips/Fluke model.  It has 
"state plus timing" capture, meaning that it can capture sequences of clocked 
state changes, time-labeled waveforms, or both simultaneously.  What you're 
doing corresponds to "state" capture, which uses a clock.

If you're capturing with a clock that means the LA captures the inputs at each 
specified clock edge -- in your case, rising edge of either of those two 
signals.  (Does it really have two clocks and defines that it captures on a 
clock event from either of them?)  That would mean you see ONLY the points in 
time when that edge occurs.

If you have a bus transaction that begins with a rising XIOR or XIOW, and then 
some other things happen -- like an address or data transfer perhaps 
accompanied by some control signal -- then what you're doing won't work because 
you won't be capturing those later points in time, since they don't occur at an 
XIOR/W edge.  What you need instead is either to specify a constantly running 
bus clock as your clock, or capture in timing mode (every N nanoseconds) if the 
LA has such a mode.  You would then specify a trigger along the lines of: wait 
for edge on XIOR or XIOW, then look for address in the range x to y.  If the 
address is on the bus at that edge this is easy: "(rising XIOR or rising XIOW) 
and (addr >= x and addr <= y)".  If the address occurs later, you'd have to 
specify something along the lines of "edge then address match within z 
nanoseconds" to describe an address match occurring within that same bus cycle. 
 Or if the address is accompanied by an address strobe it would be "edge then 
within z nanoseconds (address strobe and address in range)".  Depending on your 
LA trigger machinery you may be pushing the limits of what it can do.  If all 
else fails you might need to concoct some external circuit to implement part of 
the trigger condition, and hook the output from that helper circuit to another 
LA pin as one of the trigger terms.
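
In effect the trigger machinery is evaluating something like the function 
below over the captured samples.  The per-sample representation is made up for 
illustration; XIOR/XIOW and the address range are from the setup being 
discussed:

    def find_trigger(samples, addr_lo, addr_hi):
        """Index of the first sample where XIOR or XIOW rises while the
        address bus holds a value in [addr_lo, addr_hi]."""
        prev = {"XIOR": 1, "XIOW": 1}
        for i, s in enumerate(samples):
            rising = (s["XIOR"] and not prev["XIOR"]) or \
                     (s["XIOW"] and not prev["XIOW"])
            if rising and addr_lo <= s["ADDR"] <= addr_hi:
                return i
            prev = {"XIOR": s["XIOR"], "XIOW": s["XIOW"]}
        return None

    samples = [
        {"XIOR": 0, "XIOW": 0, "ADDR": 0x000},
        {"XIOR": 1, "XIOW": 0, "ADDR": 0x2F8},   # rising XIOR with address in range
    ]
    print(find_trigger(samples, 0x2F0, 0x2FF))   # 1

If the address isn't valid until some time after the edge, the comparison has 
to be deferred to a later sample within the same cycle, which is exactly the 
"edge then address match within z nanoseconds" formulation above.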

paul



Re: Which Dec Emulation is the MOST useful and Versatile?

2017-10-24 Thread Paul Koning via cctalk

> On Oct 24, 2017, at 1:44 AM, Kip Koon via cctalk  
> wrote:
> 
> Hi DEC Enthusiast's,
> 
> If I were to have to decide on just one model DEC PDP system to run in a DEC
> Emulator, which one would be the most useful, versatile and has the most
> software available for it?
> 
> I have only ever used a real PDP-8/e system way back in high school so I'm
> not up to par on any other model of DEC PDP system and I only know BASIC on
> the PDP-8/e so not much there either.
> 
> I hear a lot about the PDP-11.  I found out that there were 16 major PDP
> models at one time so I'm not too sure which one to pick.  
> 
> I built Oscar Vermeulen's PiDP-8/I which I'm waiting on 1 part for.  Other
> than that project which is in a holding pattern at the moment, I have no
> other PDP anything running in any form.

When you say "emulation" do you mean a software emulator like SIMH or E11?  For 
those, the model choice is just a startup parameter, so you can change at will.

Or do you mean an FPGA based one like PDP2011?  There too the choice is a 
parameter, when you build the VHDL into the actual FPGA bits.

In any case, if you want to pick a particular model, I would say 11/70 is a 
good choice.  While near the end of the PDP11 era the Q-bus became mainstream, 
for much of the time the Unibus was either the only or at least the primary I/O 
bus.  It has the full memory management unit and full floating point, so any 
software that requires these is happy.  It has 22 bit addressing for big 
memory.  And it is old enough that early operating systems like DOS will work.  
You could even turn on CIS instructions and call it an 11/74, the semi-mythical 
11/70 variant for commercial applications (COBOL) that never shipped, some say 
because it was too good compared to the VAX 11/780.

One more consideration: if by "emulator" you mean something in hardware that 
has an actual DEC I/O bus coming out of it and accepts real DEC cards, then a 
Q-bus system may be better, it depends on what I/O devices you can most readily 
find.  If so, I'd go for the 11/73.

paul




Re: Which Dec Emulation is the MOST useful and Versatile?

2017-10-26 Thread Paul Koning via cctalk

> On Oct 24, 2017, at 10:40 PM, Kip Koon via cctalk  
> wrote:
> 
> ...
> 2nd, a hardware emulator running a simulator written in 6809 assembly
> language for the PDP-8/e running on a 6809 Core & I/O board system seems
> like a good choice for me as I understand the 6809 microprocessor, ...

I would call that a software emulator; the fact that it runs on some 
microprocessor eval board doesn't make a difference.  Running SIMH on a 
Beaglebone would be analogous (though easier).

When you said "hardware emulator" I figured you meant an FPGA implementation of 
a VHDL or Verilog model of the machine.  There are a bunch of those for a 
variety of DEC computers.  One I have looked at is this one: 
http://pdp2011.sytse.net/wordpress/ which incidentally is also configurable to 
implement a choice of PDP11 model.

paul



Re: Image de-warping tool, and Multics/GCOS panels

2017-10-27 Thread Paul Koning via cctalk

> On Oct 27, 2017, at 11:30 AM, Noel Chiappa via cctalk  
> wrote:
> 
> Hey all, I've been doing research on Multics front panels, which it turns out
> are slightly different from those on the Honeywell 6000 series machines which
> ran GCOS, and are often confused with them.
> 
> So, I've put together a Web page about them:
> 
>  Multics and Related 6000 Series Front Panels
>  http://ana-3.lcs.mit.edu/~jnc/tech/multics/MulticsPanels.html
> 
> and I've taken some new images, so make sure the captions are all readable.
> 
> 
> I'm having an issue with the images, though: taking a picture of a flat,
> rectangular panel with a camera usually produces distortion (even with the
> lens set to the narrowest angle possible).
> 
> Does anyone know of any freeware which will fix this? The image tool I
> normally use (ImagePals, sort of a poor man's Photoshop) does have a 'warp'
> function, but it requires setting up a grid of points, and is a pain to use:
> optimal would be something where you mark the 4 corners, and few intermediate
> edge points, and the image is automagically fixed.

GIMP has something that does this, after a fashion.  I've played with it a bit, 
to straighten out snapshots of book pages moderately.  It doesn't work all that 
well, but for modest distortion (pincushion, for example) of photos taken with 
some care, it's probably good enough.

If I want a good clean image, my solution is to take a decent photo or scan, 
then turn it into vector graphics.  By hand is often best; I recently 
discovered Inkscape which is pretty friendly.

paul




Re: Which Dec Emulation is the MOST useful and Versatile?

2017-10-27 Thread Paul Koning via cctalk

> On Oct 27, 2017, at 4:54 AM, Dave Wade via cctalk  
> wrote:
> 
> Kip,
> I think "emulation" and "simulation" get used pretty much interchangeable.
> SIMH is touted a simulator, Hercules/390 as an emulator yet they are both
> programs that provide a "bare metal" machine via software on which an
> operating system can be installed. Neither make any attempt to reproduce the
> speed of the original CPU.

True.  And by some argument, an FPGA implementation (from an HDL behavioral 
model) is also a software implementation, just written in a different 
programming language.

Recently I commented to an old colleague that there are many different levels 
of emulation possible, and any one of those may make sense -- it's just a 
question of what you're after.  So you can emulate in a conventional 
programming language, as SIMH does, reproducing the programmer-visible behavior 
of the machine but not its timing.  Bugs from the original might be reproduced if 
they are known to be important, but probably not otherwise.  This kind is 
(nowadays) likely to run faster than the original; certainly it won't usually 
mimic the original timing, neither for computation nor I/O.

You can make timing-accurate software emulators, with lots of work.  SIMH, in 
paced mode, and provided the I/O waits are reasonably accurately expressed in 
units of machine cycles, isn't quite timing accurate but is somewhat similar.

You can build a behavioral simulator (SIMH style, basically) in an FPGA.  That 
isn't necessarily any more capable or accurate than a software simulator.  
PDP-2011 is an example I know of, and I've seen articles about other PDP 
emulations of this kind.  Since the design is new, created from a behavioral 
description (data book, functional spec, architecture spec) it will be about as 
accurate as SIMH.

You can also, if the data exists, build a lower level (gate level or 
thereabouts) FPGA model.  Given schematics and wire lists, it should be 
possible to build an implementation that's an exact copy of how the original 
machine worked (assuming of course the documentation is accurate, which is not 
necessarily the case).  Such an emulation would replicate strange and 
undocumented behavior of the original -- and allow you to find out where that 
came from.  I've been working on such a thing for the CDC 6600, which is 
surprisingly hard given that the design lives right on the hairy edge of not 
working at all timing-wise.  But it does accurately model the peripheral 
processors right now, and indeed it shows and explains some undocumented 
oddities that are part of that machine's folklore.

So it's a question of what you're after.  If you want to run the software, or 
teach the machine at the programmer level, SIMH or equivalent is quite 
adequate.  If you want to teach FPGA skills, an FPGA behavioral model emulation 
is a good project, especially for a small machine like a PDP-8.  As for the 
gate level model, I'm not sure what argument to make for that other than "paul 
is a bit crazy" and "because the data exists to do it".  :-)

paul



Re: Which Dec Emulation is the MOST useful and Versatile?

2017-10-27 Thread Paul Koning via cctalk

> On Oct 27, 2017, at 1:47 PM, Rob Jarratt  wrote:
> 
> If I had the skill, data and time, I would always go for a gate level model.
> However, I do most (sim/em)ulation in SIMH instead, like I have been doing
> for MU5 where I lack the data and the time and probably the skill as well,
> but I can always acquire the skill, the other two are harder to find.

I've read some VHDL before, but my 6600 gate level model was my first VHDL 
project.  It's actually quite easy, easier than the level of fluency needed to 
do a good behavioral model.  A decent textbook helps a lot.  My favorite is The 
Designer's Guide to VHDL by Peter Ashenden.  The point is that modeling small 
modules (SSI gates, or 6600 "cordwood" modules, or the like) is easy because 
they are small and have quite simple behavior.  Then it's just a matter of 
wiring them together.

Note that you don't need an FPGA to do logic level design; all you need is a 
VHDL simulator.  I use GHDL, which is open source, part of GCC so you can hook 
in C code if you need it.  For example, that allows you to make a model of the 
I/O channel and connect it to a SIMH style emulation of a peripheral device.

The real issue for gate level modeling is the availability of the necessary 
documentation.  If you have schematics, and they include critical detail such 
as microcode ROM contents, you're all set.  If all you have is functional 
specs, you can't even start.

It helps to have a machine built with sane design principles.  Things like RS 
flops that don't have both inputs active at the same time.  And a properly 
clocked architecture.  Neither of these properties holds for the CDC 6600...

paul



Re: Which Dec Emulation is the MOST useful and Versatile?

2017-10-27 Thread Paul Koning via cctalk

> On Oct 27, 2017, at 2:55 PM, ben via cctalk  wrote:
> 
> On 10/27/2017 12:28 PM, Paul Koning via cctalk wrote:
> 
>> It helps to have a machine built with sane design principles.  Things like 
>> RS flops that don't have both inputs active at the same time.  And a 
>> properly clocked architecture.  Neither of these properties holds for the 
>> CDC 6600...
>>  paul
> 
> But you can still get TTL for the common stuff,and PAL/GAL chips as well, so 
> nothing is preventing you from doing the common logic of
> the 1965 to 1985 era, if it not for production use.
> Ben.

True if you have a TTL machine.  6600 is discrete transistor, and the actual 
transistor specs are nowhere to be found as far as I have been able to tell.

But that doesn't directly relate to gate level emulation.  If you have gate 
level documentation you can of course build a copy of the machine out of actual 
gate-type parts, like 7400 chips.  Or you can write a gate level model in VHDL, 
which is not the most popular form but certainly perfectly straightforward.  
Either way, though, you have to start with a document that shows what the gates 
are in the original and how they connect.  And to get it to work, you need to 
deal with timing issues and logic abuse, if present.  In the 6600, both are 
very present and very critical.  For example, I've been debugging a section 
(the central processor branch logic) where the behavior changes quite 
substantially depending on whether you favor S or R in an R/S flop, i.e., if 
both are asserted at the same time, who wins?  And the circuit and wire delays 
matter, down to the few-nanosecond level.
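
The modeling choice is as simple, and as consequential, as this (a generic 
sketch, not a model of any particular 6600 module):

    def rs_step(q, s, r, favor_s=True):
        """One evaluation of an R/S flop.  When both inputs are asserted the
        winner is a modeling decision -- exactly the ambiguity described above."""
        if s and r:
            return 1 if favor_s else 0
        if s:
            return 1
        if r:
            return 0
        return q                    # neither asserted: hold the current state

    print(rs_step(0, 1, 1, favor_s=True))    # 1
    print(rs_step(0, 1, 1, favor_s=False))   # 0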

Most machines are not so crazy; I would assume a PDP-11/20 gate level model 
would be quite painless.

paul



Re: Which Dec Emulation is the MOST useful and Versatile?

2017-10-27 Thread Paul Koning via cctalk

> On Oct 27, 2017, at 3:21 PM, Al Kossow via cctalk  
> wrote:
> 
> 
> 
> On 10/27/17 12:16 PM, Chuck Guzis via cctalk wrote:
> 
>> I've long had a fantasy about building a core-logic CPU such as the
>> Univac Solid State.
> 
> I have been told the behavior of Univac magnetic logic was similar to NMOS

That doesn't sound even close.

Ken Olsen did his thesis on magnetic core logic.  There is some (but 
surprisingly little) information on line about this technology.  There are two 
flavors: with permanent magnetic cores (memory cores) and with "soft" cores 
(transformers).  Apollo "rope" core memory ROM is an example of the former.

paul




Re: Which Dec Emulation is the MOST useful and Versatile?

2017-10-29 Thread Paul Koning via cctalk

> On Oct 28, 2017, at 10:09 PM, Eric Smith via cctech  
> wrote:
> 
> IBM invented computer emulation and introduced it with System/360 in 1964.
> They defined it as using special-purpose hardware and/or microcode on a
> computer to simulate a different computer.

That's certainly a successful early commercial implementation of emulation, 
done using a particular implementation approach.  At least for some of the 
emulator features -- I believe you're talking about the 1401 emulator.  IBM 
didn't use that all the time; the emulator feature in the 360 model 44, to 
emulate the missing instructions, uses standard 360 code. 

It's not clear if that IBM product amounts to inventing emulation.  It seems 
likely there are earlier ones, possibly not with that particular choice of 
implementation techniques.


> Anything you run on your x86 (or ARM, MIPS, SPARC, Alpha, etc) does not
> meet that definition, and is a simulator, since those processors have only
> general-purpose hardware and microcode.
> 
> Lots of people have other definitions of "emulator" which they've just
> pulled out of their a**, but since the System/360 architects invented it, I
> see no good reason to prefer anyone else's definition.

"emulation" is just a standard English word.  I don't see a good reason to 
limit its application here to a specific interpretation given to it in a 
particular IBM product.  It's not as if IBM's terminology is necessarily the 
predominant one in IT (consider "data set").  And in particular, as was pointed 
out before, "emulator" has a quite specific (and different) meaning in the 
1980s through 2000 or so in microprocessor development hardware.

paul



Re: Which Dec Emulation is the MOST useful and Versatile?

2017-10-31 Thread Paul Koning via cctalk

> On Oct 27, 2017, at 5:00 PM, Phil Blundell via cctalk  
> wrote:
> 
> On Fri, 2017-10-27 at 13:38 -0700, Brent Hilpert via cctalk wrote:
>> I wonder if they were just trying to draw an analogy between the
>> inherent dynamic operation requirements of magnetic logic and the
>> dynamic operation requirements of some (many?) NMOS designs (not
>> really inherent to NMOS).
> 
> On the subject of NMOS dynamic logic, someone recently pointed out a
> paragraph in the technical manual for a 1990s ARM2-based computer which
> warned of dire consequences, including possibly destruction of the
> chipset, if the circuitry was left powered with the clock stopped for
> more than a second or two.
> 
> Obviously if the clock is stopped for more than a few hundred
> microseconds then the logic will start to lose its marbles and the
> system will need a reset to recover.  But I don't think I've previously
> heard any suggestion that dynamic logic ICs would actually be damaged
> or destroyed under these circumstances.  I can just about imagine that
> there might be some situation where an invalid internal state would
> result in a short circuit between power and ground, but that's just
> supposition really.  Anybody know of a case where something bad has
> actually happened?

I don't understand this at all.  "Dynamic logic" is not a familiar concept, and 
certainly the NMOS logic I know isn't dynamic.  Memory (DRAM) is dynamic, and 
will forget if you don't refresh it.  But DRAM doesn't mind if you stop the 
clock, it just won't remember its data.

So I don't know how you might have a logic design that "loses its marbles" if 
you stop the clock.  And anything that is fried by clock loss is, in my view, 
the work of someone who should not be allowed anywhere near an EE shop.

Incidentally, while "soft core" magnetic logic is dynamic, memory core logic is 
not.  You could slow that down and it would still work.  The signals are 
pulses, not levels, but the pulses will still happen with a 1 Hz clock.

paul



Re: Which Dec Emulation is the MOST useful and Versatile?

2017-11-01 Thread Paul Koning via cctalk

> On Oct 31, 2017, at 5:59 PM, allison via cctalk  wrote:
> 
> ...FYI rope core was basically many
> transformers either with a wire
> in for the bit or wire around for the not bit.  The cores for rope
> didn't change magnetic state like
> coincident current cores of the bistable type as that allowed read write
> but was DRO (destructive
> read out with re-write) which is the more familiar core and why it had a
> shorter read time and a
> longer cycle time between reads.

Actually, that's not accurate.  Core rope memory (Apollo Computer style) does 
use memory cores.  You can find it described well on a web page by Brent 
Hilpert that discusses that and other (transformer style) memories.  In core 
rope memory, the selected core has its state changed by the combined select 
currents, and then it delivers pulses to whichever sense lines are threaded 
through that core when the core is reset again.

The Electrologica X1 has a core ROM that also uses memory cores, but in a 
different way than core rope.

paul



Re: Which Dec Emulation is the MOST useful and Versatile?

2017-11-03 Thread Paul Koning via cctalk


> On Nov 3, 2017, at 11:58 AM, allison via cctalk  wrote:
> 
> Emulation of another computer was important to two groups early on...
> designers
> that wanted to try new architecture and the result of evolution and
> retirement of
> hardware the need to run costly to develop programs for which source or the
> needed components had become extinct.  The latter I believe is more rampant
> since the mid 70s with machines getting replaced with bigger and faster
> at an
> ever increasing rate. 

Could be.  Then again, today's main architectures are all decades old; they get 
refined but not redone.

Emulation of new architectures on old ones goes back quite a long time.  I've 
seen a document from 1964, describing the emulation of the Electrologica X8 
(which came out around 1964) on its predecessor the X1 (which dates back to 
1958).  

paul



Re: Details about IBM's early 'scientific' computers

2017-11-15 Thread Paul Koning via cctalk


> On Nov 14, 2017, at 10:58 PM, Jon Elson via cctalk  
> wrote:
> 
> On 11/14/2017 11:20 AM, Chuck Guzis via cctalk wrote:
>> It's always struck me how revolutionary (for IBM) the change in
>> architecture from the 700x to the S/360 was.  The 709x will probably
>> strike the average reader of today as being arcane, what with
>> sign-magnitude representation, subtractive index registers and so on.
>> The 7080, probably even more so.  But then, most of IBM's hardware
>> before S/360 had its quirky side; the only exception I can think of,
>> offhand, would be the 1130, which was introduced at about the same time
>> as the S/360.
> Pretty much all computers of that early-60's vintage, where a maze of logic 
> was used to decode instructions, and everything was done with discrete 
> transistors and diodes, had quirky arcane instruction sets.  Some of this was 
> due to the prevailing thought on instruction sets, but part of it was done to 
> save a few transistors here and there, and to heck with the side effects.  
> Most of these computers had very few registers, or put the "registers" in 
> fixed core locations, due to the cost of a flip-flop.  The 709x series was 
> certainly like that.  Hard to BELIEVE, with 55,000 transistors!

I can't remember how many transistors a CDC 6600 has.  A lot more than that, 
I'm pretty sure.

On "quirky arcane instruction sets" -- some yes, some no.  The CDC 6000 series 
can make a pretty good argument for being the first RISC machine.  Its 
instructions are certainly quite nicely constructed and the decoding involved 
is pretty compact.  While I don't think the term "orthogonal" had been applied 
yet to instruction set design -- I first saw that used for the VAX -- it fits 
the 6000 too.

Another example of an instruction set design that's pretty orthogonal is the 
Electrologica, especially in the X1 (from 1958).  It's a one address machine, 
not a register machine like the 6000 or traditional RISC, but in other ways it 
looks a lot like RISC.  Wide instructions with fixed fields allocated for fixed 
purposes (like register numbers, operation numbers, conditional execution 
modifiers, etc.).

The 360 was certainly significant in delivering many of these things in a very 
successful commercial package.  And I can believe it being revolutionary for 
IBM -- but not quite so much for the industry as a whole.

paul




Re: "Personal" Computers (Was: Details about IBM's early 'scientific' computers)

2017-11-15 Thread Paul Koning via cctalk


> On Nov 15, 2017, at 8:06 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 11/15/2017 02:39 PM, Rick Bensene via cctalk wrote:
> 
>> Perhaps the glass-room meme isn't so much bogus, as it is a sign of
>> the cultural times.   In those days, the big machines were very
>> expensive, and required a lot of support --  that meant special
>> power, air conditioning, raised floors, and highly-trained people.
>> The "management" of these big machine installations had a lot at
>> stake...and as such, they were very protective of their machines,
>> which is most of the reason they were encased in glass (they needed
>> to be glass to be able to show them off without letting people
>> in...in the days, big computer installations were class icons).
> 
> Remember also, that this was long before the indoor "no smoking" rules.
> Many folks smoked like chimneys and just about every installation that
> I experienced back then prohibited smoking around the machines.

Then again, our college computer room (1973) was the place where the computer 
services director was often seen, chain smoking away.  No mainframe there, but a 
large PDP-11 and an IBM 1620.

Earlier, there was the SAGE computer (the air defense one, not the PC by the 
same name), which had built-in ash trays at each operator station.

paul




Re: Ideas for a simple, but somewhat extendable computer bus

2017-11-17 Thread Paul Koning via cctalk


> On Nov 17, 2017, at 8:11 PM, Jim Brain via cctalk  
> wrote:
> 
> I'm currently working on a single board computer system, designing from 
> scratch partially as an education experience, and also as something that 
> might be of interest to others.
> 
> I've laid out the first version of the SBC, and I realize it would cost 
> nothing to add an edge connector on the PCB, allowing expansion options.  As 
> well, assuming the design has any merit, I can see creating one of these SBcs 
> for each family (8080/Z80, 65XX, 68XX, and maybe even 16 bit options like 
> TMS9900, 68K, etc.)
> 
> However, as the design is not *for* any purpose, and I've never designed a 
> bus that could be shared among multiple CPUs, I am wondering what bus layout 
> would satisfy the following criteria: ...

You might start with the Unibus and make some small tweaks.  If you think of 
each of the several CPUs as a DMA device, which asks for the bus and gets the 
grant from a central arbiter, you've got your MP bus right there.  Strip out 
some unneeded stuff, like multiple interrupt levels (if you want).

One key question is whether it should be asynchronous, as the Unibus is, or 
synchronous.  If you put a central clock on the bus also (presumably from the 
arbiter since there's one of those) everything else gets a whole lot simpler.  
There are good reasons for the Unibus to be async, but if you can do sync 
that's a much better choice.

A synchronous version of the Unibus would be quite easy; all the funny one-shot 
delays would disappear and actions would simply be taken on the clock edge 
(rising or falling, pick one).  Just make the clock period comfortably longer 
than the worst case propagation delay and you're in business.

I'm assuming it doesn't need to be all that fast.  If keeping the clock period 
longer than the propagation delay becomes an issue, things get vastly more 
complicated.  If so, you might want to stick with something that's already been 
sorted out, like PCIe.
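
To put rough numbers on that rule of thumb, here's a quick Python sketch.  All 
of the figures (bus length, velocity factor, settling time, margin) are 
illustrative assumptions, not from any real backplane design:

# Back-of-the-envelope clock period for a synchronous backplane bus.
# Every number here is an assumption for illustration only.

C = 3.0e8              # speed of light, m/s
VF = 0.6               # assumed velocity factor of backplane traces
BUS_LENGTH_M = 0.5     # assumed end-to-end bus length
SETTLE_NS = 30.0       # assumed driver turn-on + receiver settling time
MARGIN = 2.0           # "comfortably longer": factor-of-two margin

flight_ns = BUS_LENGTH_M / (C * VF) * 1e9
worst_ns = flight_ns + SETTLE_NS
period_ns = MARGIN * worst_ns

print(f"one-way flight time:  {flight_ns:.1f} ns")
print(f"worst-case settle:    {worst_ns:.1f} ns")
print(f"clock period >= {period_ns:.0f} ns  (about {1000 / period_ns:.0f} MHz)")

With those made-up numbers you land in the mid-teens of MHz, which is why a 
leisurely hobby bus can afford the simple "one clock, longer than worst case" 
approach.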

paul



Re: Preventing VAX running VMS / Multinet from being used as SMTP relay

2017-12-07 Thread Paul Koning via cctalk


> On Dec 2, 2017, at 5:48 AM, Doug Jackson via cctech  
> wrote:
> 
> Camiel,
> 
> Without sounding super negative (my day job as a security consultant let's
> me do that  enough...)  I would be especially wary of connecting anything
> with a 10 year old stack to the modern internet.  The range of automatic
> attacks based on what the state of the OS was when it was last patched is
> staggering.

That's true to a point.  On the other hand, many attacks require that the 
machine is running on Intel instruction set hardware, and most of them also 
depend on the OS being Windows.

While bugs happen, the level of security competence applied by VMS engineering 
is quite high compared to the usual "hack it till it no longer crashes" 
practice seen all too often nowadays.  That applies especially to network 
protocol implementations.

If the issue is design defects in the protocol specifications, such as may be 
found in various revisions of SSL, then having a good OS is not a complete 
answer.  Even there, it can help; for example, I suspect that the "Heartbleed" 
attack on older SSL stacks, if it were operable on VMS, wouldn't get you very 
far because of OS and instruction set differences.  Certainly script kiddy 
attacks would not work.

paul




Re: VAX Q-bus identical to PDP-11 Q-bus?

2017-12-07 Thread Paul Koning via cctalk


> On Dec 7, 2017, at 11:50 AM, Jon Elson via cctalk  
> wrote:
> 
> ...
> MSCP is a software protocol.  Any device that has a driver available for the 
> PDP-11 operating system you want to use can use that device.  

True with small variations.  A sufficiently large disk might not be supported 
on some OS because the on-disk structure is limited in what device size it can 
handle.  (This applies to RSTS for example.)  Some devices use obscure MSCP 
mechanisms that might not be in all drivers -- for example, the RA80 uses 
host-based bad block replacement, which is quite a complicated process; I know 
RSTS supports that but it might be omitted in some other operating systems.

Also, in DEC terminology, "supported" doesn't mean "it works in the software" 
but rather "we stand behind it".  That means tested, sold, handled by product 
support and field service, etc.  For example, the RP07 works in RSTS on an 
11/70, but it is not "supported".  I'd expect the same is true for any number 
of MSCP or TMSCP devices that were intended to be sold only on VAXen -- they 
may very well work, but if you had plugged one in on a machine where they 
aren't supported, DEC would give you no help with any problems.

paul



Re: VAX Q-bus identical to PDP-11 Q-bus?

2017-12-07 Thread Paul Koning via cctalk


> On Dec 7, 2017, at 2:22 PM, allison via cctalk  wrote:
> 
> On 12/07/2017 12:44 PM, Paul Koning via cctalk wrote:
>> 
>>> On Dec 7, 2017, at 11:50 AM, Jon Elson via cctalk  
>>> wrote:
>>> 
>>> ...
>>> MSCP is a software protocol.  Any device that has a driver available for 
>>> the PDP-11 operating system you want to use can use that device.  
>> True with small variations.  A sufficiently large disk might not be 
>> supported on some OS because the on-disk structure is limited in what device 
>> size it can handle.  (This applies to RSTS for example.)  Some devices use 
>> obscure MSCP mechanisms that might not be in all drivers -- for example, the 
>> RA80 uses host-based bad block replacement, which is quite a complicated 
>> process; I know RSTS supports that but it might be omitted in some other 
>> operating systems.
> ...
> Things like block replacement are options of the OS and the device IO is
> then just a
> interface.  The protocol for to talk to the device is a lower level
> layer in most cases.

No, bad block replacement in the UDA50/RA80 case is an MSCP mechanism where the 
controller (UDA50) offloads the work of managing bad blocks in part to the OS.  
The host still sees a logically contiguous error-free block space, just like in 
other MSCP devices (but unlike pre-MSCP).  But if a block is found to be bad, 
the host driver has to help out with the process of assigning a replacement 
block.  It's very different from the earlier bad block management which simply 
involves marking the offending LBAs as unavailable in the file system free 
space tables, and much more complicated.  (The code in RSTS is about 2000 
lines.)

paul



Re: Dec-10 Day announcement from Living Computers: Museum + Labs

2017-12-11 Thread Paul Koning via cctalk


> On Dec 11, 2017, at 4:45 AM, Pontus Pihlgren via cctalk 
>  wrote:
> 
> On Sun, Dec 10, 2017 at 06:19:10PM -0500, Noel Chiappa via cctalk wrote:
>>> From: Ethan Dicks
>> 
>>> I look forward to taking a stab at this.
>> 
>> I suspect there are a number of people who'd be interested in MASSBUS storage
>> devices (e.g. me - suddenly all those RH11's I've got are no longer boat-
>> anchors :-). We should try and organize an group build, to share the load.
>> Anyone else interested?
>> 
> 
> I am and possibly the Update computer club and some of it's members. 
> However, I have very little experience with making hardware. How would 
> we aproach this?

There are plenty of companies in the "prototype production" business; they will 
make either blank printed circuit boards, or assembled boards, in small 
quantities.  For our purposes a group build probably counts as "small 
quantity".  One I can think of (and have used for blank PCBs) is PCB Pool.

Depending on the parts involved and the skill level of the consumers, it may be 
reasonable to get blank boards and a parts bag and have the user assemble it -- 
or it may be better to let professionals do the assembly.

paul



XKCD on what we're doing

2017-12-18 Thread Paul Koning via cctalk
https://xkcd.com/1909/

Our community isn't just about that, but it's part of what makes us tick.

paul



Re: Extra chips in PDP11/23 plus cpu board

2017-12-20 Thread Paul Koning via cctalk


> On Dec 20, 2017, at 1:16 AM, Nigel Williams via cctalk 
>  wrote:
> 
> On Wed, Dec 20, 2017 at 4:45 PM, Douglas Taylor via cctalk
>  wrote:
>> There is a CPU board for sale on ebay, M8189, and it has the usual 3 chips
>> CPU, MMU, FPU.  However, there are 2 extra chips and I've never seen that
>> before.
> 
> https://en.wikipedia.org/wiki/PDP-11_architecture#Optional_instruction_sets
> 
> Microcode ROMs for the CIS (Commercial Instruction Set)

I believe PDP11 COBOL makes use of those if available, and the RSTS kernel will 
use the string move instruction for block memory copies if it sees CIS.

paul



Re: Miss categorized DEC box on ebay

2017-12-21 Thread Paul Koning via cctalk


> On Dec 21, 2017, at 3:08 PM, Bill Gunshannon via cctalk 
>  wrote:
> 
> Further comment.  Notice the difference in height compared to a standard
> DEC box of that style.  Also notice he is holding up the #150 box with his
> fingers.  Remind me never to arm wrestle that guy.
> Personally, I don't think it is a DEC system at all.

Quite possible.  After all, he advertised it as "Digital Instruments" which is 
a valid company name different from Digital Equipment.

paul




Re: RT-11 idle light pattern

2017-12-27 Thread Paul Koning via cctalk
It's been a standard feature of RT-11 FB since it first came out (in V2).  You 
need to set the select switch to display the "Display" register (unlike most 
other OS idle patterns which rely on the data path display showing R0 when at a 
WAIT instruction).

Here's what it looked like in V2.0 rmonfb.mac:

; "A SOURCE OF INNOCENT MERRIMENT!"
;   - W.S. GILBERT, "MIKADO"
; "DID NOTHING IN PARTICULAR, AND DID IT VERY WELL"
;   - W.S. GILBERT, "IOLANTHE"
; "TO BE IDLE IS THE ULTIMATE PURPOSE OF THE BUSY"
;   - SAMUEL JOHNSON, "THE IDLER"

10$:    DEC     (PC)+           ;THE RT-11 LIGHTS ROUTINE!
20$:    1
        BNE     14$             ;NOT TOO OFTEN
        ADD     #512.,20$       ;RESET COUNT, CLEAR CARRY
16$:    ROL     13$             ;JUGGLE THE LIGHTS
        BNE     11$             ;NOT CLEAR YET
        COM     13$             ;TURN ON LIGHTS, SET CARRY
11$:    BCC     12$             ;NOTHING FELL OFF, KEEP MOVING
        ADD     #100,16$        ;REVERSE DIRECTION
        BIC     #200,16$        ;ROL/ROR FLIP
12$:    MOV     (PC)+,@(PC)+    ;PUT IN LIGHTS
13$:    .WORD   0,SR
14$:    MOVB    #MXJNUM/2+200,INTACT ;DO A COMPLETE SCAN
EXUSLK: BR      EXUSER          ;BACK INTO LOOKFOR LOOP

paul

> On Dec 27, 2017, at 11:03 AM, william degnan via cctalk 
>  wrote:
> 
> Do you have an octal or asm listing for the part of the code with the
> migrating bar effect?   This would be a good practice / test for me to try
> on my RT 11 system.  Merry Christmas
> Bill



Re: RT-11 idle light pattern

2017-12-29 Thread Paul Koning via cctalk
Yes, RT11 (when it introduced Sysgen, which was later than V2) did so by 
supplying sources that had been stripped of their comments.  So they were 
useful for sysgen but not (easily) useable for custom OS changes.  

DEC did offer source licenses for many of its operating systems, at extremely 
high prices.  They also offered listings, typically on microfiche, still 
substantially more expensive than the binary licenses but not nearly as crazy 
as source.

And sometimes you could get source or listings as a special deal.  In college 
we started out with RSTS-11 V4, which had a major reliability problem (as in: 
roughly daily crash).  As part of trying to keep the customer placated, DEC 
supplied full OS sources, 5 dectapes.  We printed them (on our 30 cps Silent 
733 terminals).  I used them to learn about RSTS as a student, which got me 
hired by the computer center.  ("I make it my habit to hire students before 
they become dangerous" -- Michael A. Hall, director of computer services.)  I 
still have copies of those files.

paul

> On Dec 28, 2017, at 10:57 PM, David C. Jenner via cctalk 
>  wrote:
> 
> The sources to each release were usually included with the distribution so 
> that custom system settings could be sysgened.  The sources are uncommented, 
> however.
> 
> You could implement this by finding the commented out source in the sources 
> and regenerating the system, with the code in the appropriate place.
> 
> Dave



Re: RT-11 idle light pattern

2017-12-29 Thread Paul Koning via cctalk


> On Dec 29, 2017, at 9:16 AM, Noel Chiappa via cctalk  
> wrote:
> 
>> From: Paul Koning
> 
>> Here's what it looked like 
> 
> Not having RT11, I embedded this in a small stand-alone program (which took a
> little work, Unix assembler being rather different :-), so I could see it (it
> wasn't obvious from the code what it did).
> 
> Pretty clever, to get that complex a pattern out of so few instructions.
> Although the self-modifying code is, err (If anyone wants the source or
> .LDA, let me know, I can post/upload it.)

Yes, a bit weird indeed.  Stranger still is the "fancy" lights in RSTS, also an 
Anton Chernoff creation.  "Fancy" because it produces a rotating pattern not 
just in the data lights which is easy, but also in the address lights.  It runs 
in supervisor mode, in versions of RSTS that did not use that mode for real 
work.

paul




Re: Computing from 1976

2017-12-30 Thread Paul Koning via cctalk


> On Dec 30, 2017, at 5:55 PM, Fred Cisin via cctalk  
> wrote:
> 
> ...
> "Moore's Law", which was a prediction, not a "LAW", has often been mis-stated 
> as predicting a doubling of speed/capacity every 18 months.

True, but that applies also to any "law of nature".  They are not rules, as 
political laws are; instead they are (a) a compact statement of what has been 
observed and (b) a prediction of what will be observed in the future.   They 
are always subject to revision if contradicted by evidence, as Newton's law of 
gravitation was.

paul




Re: RT-11 idle light pattern

2017-12-31 Thread Paul Koning via cctalk


> On Dec 31, 2017, at 9:41 AM, Noel Chiappa via cctalk  
> wrote:
> 
>> From: Paul Koning
> 
>> RSTS-11 V4, which had a major reliability problem ... As part of trying
>> to keep the customer placated, DEC supplied full OS sources, 5
>> dectapes. ... We printed them ... I still have copies of those files.
> 
> Is that version available online? If not, maybe an OCR project?

I don't know; if not I should dig through mine and submit it to Bitsavers.  I 
don't have the original DECtapes, but rather a copy of the files on magtape, so 
metadata is largely missing but the actual files should be there.

> (Although I know other versions of RSTS-11 are available, so maybe it's not
> rare enough to make the tedium of OCR worth it. That has been used on a
> number of systems; notably CTSS, but also the IMP code and the Apollo
> Guidance Computer, that I know of. I'm currently looking into getting an
> early version of MERT, and that may also come down to OCR - if we're lucky!)
> 
> 
>> Stranger still is the "fancy" lights in RSTS ... "Fancy" because it
>> produces a rotating pattern not just in the data lights which is easy,
>> but also in the address lights. It runs in supervisor mode
> 
> Ah; it must busy loop at loops spread across the address space? Clever!
> (Perhaps using the mapping hardware so that it doesn't use too much _actual_
> memory.) Is the source available?

Correct, it uses the MMU so it only needs 64 bytes of table space to get the 
low order bits right.  See attached.

paul




Re: RT-11 idle light pattern

2017-12-31 Thread Paul Koning via cctalk


> On Dec 31, 2017, at 10:21 AM, Paul Koning via cctalk  
> wrote:
> 
> ...
>> 
>> Ah; it must busy loop at loops spread across the address space? Clever!
>> (Perhaps using the mapping hardware so that it doesn't use too much _actual_
>> memory.) Is the source available?
> 
> Correct, it uses the MMU so it only needs 64 bytes of table space to get the 
> low order bits right.  See attached.

Ok, so the list stripped the attachment.  Try it this way.

paul

.INCLUDE /CMN:COMMON/
TITLE   LIGHTS,,0A,10-MAY-91,MHB/ABC/WBN

;
;   COPYRIGHT (c) 1974, 1991 BY
;   DIGITAL EQUIPMENT CORPORATION, MAYNARD, MASS.
;
; THIS SOFTWARE IS FURNISHED UNDER A LICENSE AND MAY BE USED AND  COPIED
; ONLY  IN  ACCORDANCE  WITH  THE  TERMS  OF  SUCH  LICENSE AND WITH THE
; INCLUSION OF THE ABOVE COPYRIGHT NOTICE.  THIS SOFTWARE OR  ANY  OTHER
; COPIES  THEREOF MAY NOT BE PROVIDED OR OTHERWISE MADE AVAILABLE TO ANY
; OTHER PERSON.  NO TITLE TO AND OWNERSHIP OF  THE  SOFTWARE  IS  HEREBY
; TRANSFERRED.
;
; THE INFORMATION IN THIS SOFTWARE IS SUBJECT TO CHANGE  WITHOUT  NOTICE
; AND  SHOULD  NOT  BE  CONSTRUED  AS  A COMMITMENT BY DIGITAL EQUIPMENT
; CORPORATION.
;
; DIGITAL ASSUMES NO RESPONSIBILITY FOR THE USE  OR  RELIABILITY  OF ITS
; SOFTWARE ON EQUIPMENT WHICH IS NOT SUPPLIED BY DIGITAL.
;
.SBTTL EDIT HISTORY FOR LIGHTS
;+
;
;  000  RRF  06-MAR-81  CREATION - COPIED FROM V7.0-07
;
;-
DEFORG  LIGHTS

.SBTTL  A FANCY NULL JOB

; NEEDED DEFINITIONS

SISDR0  =   172200  ;SUPERVISOR INSTRUCTION DESC REG 0
SISAR0  =   172240  ;SUPERVISOR INSTRUCTION ADDR REG 0

PS  =   16  ;PROCESSOR STATUS

ORG NULJOB

; INITIAL ENTRY POINT

NULJOB: BIT     #004000,@#PS    ;DO WE HAVE SUPERVISOR MODE (2 REG SETS)?
        BNE     30$             ;YES, DO IT FANCY...

; THE SIMPLE NULL JOB

10$:    MOV     R2,R1           ;RELOAD THE WAIT COUNTER
20$:    WAIT                    ;DISPLAY THE LIGHTS (R0) A WHILE
        SOB     R1,20$          ;KEEP WAITING
        ROL     R0              ;ELSE SHIFT PATTERN 1 PLACE LEFT
        BR      10$             ; AND AROUND AGAIN...

; FANCY NULL JOB SETUP

30$:    MOV     #176000,R3      ;PRE-SET THE MEM ADR LIGHT PATTERN
        MOV     (PC)+,R4        ;GET DESC REG VALUE FOR
         .BYTE  4!2,128.-1      ;   R/W AND 4K
        MOV     #SISDR0,R5      ;POINT TO SUPERVISOR DESC REGS
        MOV     R4,(R5)+        ;LOAD SISDR0 WITH 4K AND R/W
        MOV     R4,(R5)+        ;LOAD SISDR1 WITH 4K AND R/W
        MOV     R4,(R5)+        ;LOAD SISDR2 WITH 4K AND R/W
        MOV     R4,(R5)+        ;LOAD SISDR3 WITH 4K AND R/W
        MOV     #177600,SISAR0-SISDR0(R5) ;LOAD PAR4 FOR THE I/O PAGE
        MOV     R4,(R5)+        ;LOAD SISDR4 WITH 4K AND R/W
        MOV     #40$,R1         ;FORM A MMU ADDRESS
        ASH     #-6,R1          ; THAT WILL MAP OUR CODE
        BIC     #^C<001777>,R1  ;  IN SUPERVISOR MODE
        MOV     R1,SISAR0-SISDR0(R5) ;LOAD MMU ADDRESS FOR PAR5 (CODE)
        MOV     (PC)+,(R5)+     ;LOAD SISDR5 WITH
         .BYTE  2,128.-1        ;   R-O AND 4K
        CLR     (R5)+           ;LOAD SISDR6 WITH "ABORT"
        MOV     R4,(R5)+        ;LOAD SISDR7 WITH 4K AND R/W
        MOV     #000340,@#PS    ;;;WE CAN'T AFFORD AN INTERRUPT HERE
        MOV     #054040,-(SP)   ;;;NEW PS OF SUPERVISOR MODE @ PR1
        MOV     #40$,-(SP)      ;;;NEW PC OF OUR ROUTINE
        BIC     #^C<77>,(SP)    ;;; CORRECTED FOR RUNNING
        BIS     #12,(SP)        ;;;  OUT OF PAR5
        RTI                     ;;;DROP INTO SUPERVISOR MODE!!!

; THE FANCY NULL JOB

40$:    MOV     R3,R1           ;COPY PATTERN FOR MEM ADR LIGHTS
        BIC     #^C<06>,R1      ; AND ENSURE AN HONEST ADDRESS
        CLR     R4              ;CLEAR A HIGH ORDER
        MOV     R1,R5           ; AND SET LOW ORDER AS ADDRESS
        ASHC    #3,R4           ;EXTRACT THE APR #
        ASL     R4              ; AND FORM APR # TIMES 2
        ADD     #SISAR0-16+10,R4 ;FIND PAR TO USE (I/O PAGE = PAR4)
        ASH     #-3,R5          ;CORRECT THE VIRTUAL ADDRESS
        BIC     #^C<017700>,R5  ; AND ISOLATE OFFSET WITHIN PAR
        NEG     R5              ;SUBTRACT OFFSET WITHIN PAR
        ADD     #70$,R5         ; FROM OUR WORK TABLE
        ASH     #-6,R5          ;FIND THAT AS A MMU ADDRESS
        BIC     #^C<001777>,R5  ; WITH NO SIGN EXTENSION
        MOV     R5,(R4)         ;LOAD CORRECT PAR WITH CORRECT ADDRESS
        BIT     R4,#16          ;ARE WE USING PAR0?
        BNE     50$             ;NO
        ADD     #20,R4          ;YES, CORRECT FOR PAR7 NEXT
50$:    SUB     #200,R5         ;GO BACKWARDS 4K
        MOV     R5,-(R4)        ; AND LOAD NEXT LOWER PAR WITH THAT
        MOV     (PC)+,(R1)      ;LOAD THE RETURN INSTRUCTION
        JMP     (R4)            ; WHICH IS JUMP THROUGH R4
        MOV

Re: Attaching SIMH devices without halting simulation?

2017-12-31 Thread Paul Koning via cctalk
Typically you need to have devices online so the OS will see them at startup.  
But for removable media devices (tapes and many disks) you may want to mount 
the media (image files) later on, and switch them at runtime, exactly as you 
would do with disk packs or tape reels on a real computer.

paul

> On Dec 31, 2017, at 5:10 PM, william degnan via cctalk 
>  wrote:
> 
> I thought you set all devices you planned to use "online" so you could
> mount them later?  I don't simh vax much so thanks for the correction as to
> the procedure.  I mostly do pdp8 or pdp11 stuff, or esoteric hardware.
> 
> Bill Degnan


Re: Computing from 1976

2018-01-01 Thread Paul Koning via cctalk


> On Jan 1, 2018, at 12:26 PM, dwight via cctalk  wrote:
> 
> One other thing that larger/faster becomes a problem. That is probability!
> 
> We think of computers always making discrete steps. This is not always true. 
> The processors run so fast that different areas, even using the same clock, 
> have enough skew that the data has to be treated as asynchronous. 
> Transferring asynchronous information it always a probability issue. It used 
> to be that 1 part in 2^30 was such a large number, it could be ignored.
> 
> Parts often use ECC to account for this but that just works if the lost is 
> recoverable ( not always so ).

That doesn't sound quite right.

"Asychronous" does not mean the clock is skewed, it means the system operates 
without a clock -- instead relying either on worst case delays or on explicit 
completion signals.  That used to be done at times.  The Unibus is a classis 
example of an asynchronous bus, and I suppose there are others from that era.  
The only asynchronous computer I can think of is the Dutch ARRA 1, which is 
notorious for only ever executing one significant program successfully for that 
reason.  Its successor (ARRA 2) was a conventional synchronous design.

About 15 years or so ago, an ASIC company attempted to build processors with an 
asynchronous structure.  That didn't work out, partly because the design tools 
didn't exist.  I think they ended up building packet switch chips instead.

Clock skew applies to synchronous devices (since "synchronous" means "it has a 
clock").  It is a real issue in any fast computer, going back at least as far 
as the CDC 6600.  The way it's handled is by analyzing worst case skew and 
designing the logic for correct operation in that case.  (Or, in the case of 
the 6600, by tweaking until the machine seems to work.)  ECC isn't applicable; 
computer logic doesn't use ECC, it doesn't really fit.  ECC applies to memory, 
where it is used to handle the fact that data is not stored with 100% 
reliability. 

I suppose you can design logic with error correction in it, and indeed you will 
find this in quantum computers, but I haven't heard of it being done in 
conventional computers.

> Even the flipflops have a probability of loosing their value. Flops that are 
> expected to be required to hold state for a long time are designed 
> differently that flops that are only used for transient data.

Could you give an example of that, i.e., the circuit design and where it is 
used?  I have never heard of such a thing and find it rather surprising.

paul



Re: Computing from 1976

2018-01-01 Thread Paul Koning via cctalk


> On Jan 1, 2018, at 3:57 PM, David Bridgham via cctalk  
> wrote:
> 
> On 01/01/2018 03:33 PM, Noel Chiappa via cctalk wrote:
> 
>>> From: Paul Koning
>> 
>>> The only asynchronous computer I can think of is the Dutch ARRA 1
>> 
>> Isn't the KA10 basically asynchronous? (I know, it has a clock, but I'm
>> not sure how much it is used for.)
> 
> This was my understanding, as well.
> 
> More recently there was the AMULET processors designed at the University
> of Manchester.
> 
> https://en.wikipedia.org/wiki/AMULET_microprocessor
> 
> One of the stories I read about the AMULET was that they wrote a little
> program to blink an LED where the timing was determined by a busy loop. 
> If they sat a hot cup of coffee on the processor, the light would blink
> slower; a cup of ice water and it would blink faster. 

Neat.  I found this 2011 paper that's interesting: 
http://www.cs.columbia.edu/~nowick/nowick-singh-ieee-dt-11-published.pdf

The company I was trying to remember is Fulcrum, which was bought by Intel; 
they had morphed into an Ethernet switch chip company by then.  A pretty good 
one, as I recall.  But the original concept was a microprocessor, possibly a 
MIPS one, I don't remember.  The idea was that the chip speed would depend on 
how fast things happened to work, so different chips would run at different 
speeds due to process variations, and power supply and temperature changes 
would also affect things just as you described.

The paper I just mentioned lists a number of early computer designs as 
asynchronous, though it doesn't mention the ARRA 1, probably because it's not 
well known (a problem common to Dutch computers).  Also, those other computers 
did work.

paul




Re: Large discs

2018-01-05 Thread Paul Koning via cctalk


> On Jan 5, 2018, at 3:24 PM, Warner Losh via cctalk  
> wrote:
> 
> On Fri, Jan 5, 2018 at 1:13 PM, Fred Cisin via cctalk > wrote:
> 
>> On Fri, 5 Jan 2018, Mazzini Alessandro wrote:
>> 
>>> I'm  not sure I would use SSD for long term "secure" storage, unless maybe
>>> using enterprise level ones.
>>> Consumer level SSD are, by specifics, guaranteed to retain data for 6
>>> months
>>> 
>> 
> The JEDEC spec for Consumer grade SSDs is 1 year unpowered at 30C at end of
> life.
> The JEDEC spec for Enterprise grade SSDs is 90 days, unpowered at 30C at
> end of life.

That's curious.  Then again, end of life for enterprise SSDs is many thousands 
of write passes over the full disk (or the same amount of writes to smaller 
address ranges thanks to remapping).  Under high but not insane loads that 
takes 5-7 years.  So presumably the retention while fairly new (not very worn) 
is much better.  Still it's surprising to see a number that small.
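
As a rough sanity check on that 5-7 year figure, here's a back-of-the-envelope 
Python sketch.  The capacity, rated write passes, and sustained write rate are 
assumptions for illustration, not any particular drive's data sheet:

# Back-of-the-envelope SSD endurance estimate.  All inputs are assumed
# values chosen only to illustrate the arithmetic.

capacity_tb = 4.0        # assumed drive capacity, TB
rated_passes = 5000      # assumed full-drive write passes at end of life
write_rate_mb_s = 100.0  # assumed sustained host write rate, MB/s

total_tb = capacity_tb * rated_passes          # total TB written at end of life
seconds = total_tb * 1e6 / write_rate_mb_s     # TB -> MB, then divide by rate
years = seconds / (3600 * 24 * 365)
print(f"{total_tb:.0f} TB of writes -> about {years:.1f} years at "
      f"{write_rate_mb_s:.0f} MB/s")

With those assumed numbers you get a bit over six years of continuous heavy 
writing before the rated endurance is used up, which is consistent with the 
5-7 year figure.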

> As far as I've seen, all SATA and NVME drive vendors adhere to these specs
> as a minimum, but there's also a new class of drive for 'cold storage'
> which has high retention, but low endurance and longer data read times...

I don't know if the "cold storage" SSD stuff is going anywhere.  But in any 
case, it seems to aim at high density at the expense of endurance.  I don't 
remember hearing retention discussed at all, higher or unchanged.

Having drives with limited retention seems quite problematic.  And "unpowered" 
suggests that leaving the power on would help -- but I don't see why that would 
be so.

As for writable DVDs and such, do they have any useful retention specs?  

paul



Re: Large discs

2018-01-05 Thread Paul Koning via cctalk


> On Jan 5, 2018, at 5:07 PM, Diane Bruce via cctalk  
> wrote:
> 
> On Fri, Jan 05, 2018 at 09:33:47PM +, Bill Gunshannon via cctalk wrote:
>> Do they also guarantee there will be a device capable of reading it
>> in 1000 years?
> 
> It was bad enough with the BBC Domesday project.
> 
> Paper. Paper is the only way.
> acid free paper.

Paper isn't all that reliable.  The Long Now foundation is one outfit that has 
worked on this issue.  Check out their "Rosetta Disk".

paul




Re: DL10 documentation

2018-01-10 Thread Paul Koning via cctalk


> On Jan 9, 2018, at 7:56 PM, Phil Budne via cctalk  
> wrote:
> 
> ...
> (*) "A Network For 10s?" possibly based on a VERY early spec for
> DECnet.  It may have used link-state routing.  I don't think routing
> in DECnet appeared before Phase III; Between Phase II systems you
> needed to use a passthru service, and ended up hand specifying routes,
> like in the UUCP world: A::B::C:: -- DECnet routing (at least up to
> Phase IV) was distance vector (within an area, I think node zero was
> designated to be a route to an inter-area router).  The ONE nice thing
> I remember about Phase IV is that an area could span multiple Ethernet
> links, so you didn't have to waste a "network number" on each Ether
> segment the way you had to use a Class-C in TCP/IP before subnetting.
> I've wondered how much longer the IPv4 address space might have lasted
> if there hadn't been a constraint that each network link have its own
> network number (and each interface be uniquely addressable).

DECnet phase 1 was point to point, usually described as RSX only though there 
is a DECnet-8 document that describes it.  

DECnet phase 2 is also point to point except for "intercept" nodes which do 
routing (by node name -- not number).  As I understand it, intercept was 
intended for PDP-10/20 systems where the front end would be the intercept, but 
that may be a misunderstanding on my part.  I worked on DECnet/E, which neither 
asked for nor offered intercept.  An intercept node is more than a router, 
actually; it keeps connection state (NSP state) so it can disconnect 
connections whose destination has gone away.  Note that Phase 2 NSP doesn't do 
timeout and retransmit, because it works on a "reliable" datalink (DDCMP).

DECnet phase 3 adds distance vector routing, NSP now has timeout and 
retransmit.  255 nodes max, no hierarchy.  Still only point to point (X.25 was 
added).

DECnet phase 4 adds hierarchy, Ethernet support.  This is where the infamous 
"high order MAC address" hack was concocted.  And yes, areas are not subnets, 
for that matter addresses are node addresses, not interface addresses, in all 
versions of DECnet.  That made a bunch of things much cleaner while 
complicating a few others.  Phase 4 is still distance vector, now with two 
instances: one for routing within the area, one for routing among areas.  The 
latter is present only in area routers.  And yes, in the within-area routing 
table, node number 0 is the alias for "any destination outside this area".
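
A minimal sketch of that two-level lookup, in Python.  The node numbers, areas, 
and interface names are invented for illustration; this shows the concept, not 
DEC's implementation:

# Sketch of DECnet Phase IV style two-level forwarding.
MY_AREA = 5

# Within-area (level 1) table: node number -> next hop.  Node 0 is the
# alias for "any destination outside this area", i.e. toward an area router.
level1 = {0: "toward-area-router", 12: "eth-0", 33: "ddcmp-1"}

# Between-area (level 2) table, present only in area routers: area -> next hop.
level2 = {1: "ddcmp-2", 7: "eth-1"}

def forward(dest_area, dest_node):
    if dest_area != MY_AREA:
        # Out-of-area traffic: an area router consults the area table;
        # a plain level 1 router just falls back to the node-0 alias.
        return level2.get(dest_area, level1[0])
    return level1.get(dest_node)

print(forward(5, 12))   # in-area destination   -> eth-0
print(forward(7, 40))   # destination in area 7 -> eth-1
print(forward(3, 9))    # unknown area          -> toward-area-router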

> DECnet Phase V encompassed ISO, and might have included IS-IS,
> which Rhea Perlman had a hand in (while at DEC?).  XNS (and hence
> Netware) had 32-bits network number (host/node address was 48 bits
> (ethernet address) and might also have had longer legs for global
> use.

Phase 5 adopted OSI ES-IS (network layer) and TP-4 (transport layer).  ISO 
didn't have a routing protocol; their theory was that the world is X.25-ish 
stuff where telcos do the routing in a proprietary way.  That was obviously 
nonsense, so the DECnet architecture team created a link state routing protocol 
inspired by earlier IP work, with a lot of fixes to deal with failures.  That 
was then adopted by OSI as IS-IS, and further tweaked to become OSPF.

A bit of obscure history: When she first arrived at DEC (1981?), Radia proposed 
a link state routing protocol for what would be phase 4.  That wasn't adopted 
because it was considered too complicated by the VMS team; instead "phase 3e (3 
extended)" was created by a straightforward hack of phase 3, and that is what 
we now know as phase 4.  But the packet headers in Radia's proposal were 
retained for the Ethernet case, which is where the "long headers" come from with 
a whole pile of fields with strange names that are for practical purposes 
simply reserved values.  When we outgrew phase 4 and link state was dusted off 
again, OSI had become relevant so a new design was created on that basis.  So 
the link state algorithm is in IS-IS but the packet formats and addressing are 
entirely different from the previous "long header" Ethernet stuff.

All DECnet versions from phase 3 onward were one phase backward compatible.  
Phase 2 wasn't backward compatible with phase 1; the packet formats are rather 
different.  I'm not sure why this wasn't done; perhaps no one thought it would 
be interesting.  No DEC product that I know of was multiple-phase backward 
compatible and no spec says how to do that, but it isn't actually hard; my 
Python based router does so.

paul



Re: DL10 documentation

2018-01-10 Thread Paul Koning via cctalk


> On Jan 10, 2018, at 10:52 AM, Noel Chiappa via cctalk  
> wrote:
> 
>> From: Paul Koning
> 
>> That was then adopted by OSI as IS-IS, and further tweaked to become
>> OSPF.
> 
> Err, no. OSPF was not a descendant of IS-IS - it was a separate development,
> based mainly on the ARPANET's original link state routing. (I can't recall if
> John Moy and I took a lot from the later 'area' version of the ARPANET link
> state, although we knew of it.) I think we became aware of IS-IS as OSPF
> progressed, and IIRC John 'borrowed' a few ideas (maybe the sequence number
> thing).

That may be the story, but I don't believe it.  Contemporary accounts have it 
that he started from a draft IS-IS spec.

paul



Re: DL10 documentation

2018-01-10 Thread Paul Koning via cctalk


> On Jan 9, 2018, at 7:56 PM, Phil Budne via cctalk  
> wrote:
>...
>DC44TYPESET-10 front end (PDP-11) for PTR (PA611R), PTP (PA611P), CAT? 
> photocomposition machine (LPC11)

That takes me back a while... 6 channel paper tape equipment, for communicating 
with typesetting machinery of that era.

Which reminds me:

> ...(*) "A Network For 10s?" possibly based on a VERY early spec for
> DECnet.  It may have used link-state routing.  I don't think routing
> in DECnet appeared before Phase III; 

I don't know anything about ANF-10.  But while routing appeared in DECnet with 
phase 3, that was not the first time DEC did routing.  Earlier (late 1977, I 
think -- certainly by summer 1978), Typeset-11 did link state routing.  It had 
a primitive kind of cluster that operated by passing work around as files, via 
a proprietary protocol over DMC-11 links, with link state routing.  It was 
pretty transparent: terminals were connected to any of the nodes, and could 
edit work and pass it around (to other people or to processing components such 
as typesetting back ends) independent of the location of those other resources.

paul




PDP11 media looking for a good home

2018-01-10 Thread Paul Koning via cctalk
Gentlepeople,

I have two items that I'd like to send to a good home.  That means, someone who 
can read the item in question and make it available so it's preserved.

1. A DECtape labeled "VT30 distribution for RSX11D V06-B".  VT30 is a DEC CSS 
product, a color alphanumeric terminal.

2. An RA60 pack labeled "RT11 V5.6" and possibly (it's hard to see) "kit".  
That "kit" seems a bit unlikely, an RA60 is way bigger than makes sense for an 
RT11 kit.  But if it were a source pack that would be a different matter.

#2 was found in an abandoned DEC facility; #1 I don't remember, possibly the 
same.

An RA60 pack looks physically like an RM03 pack, but its capacity is much 
larger so the format is entirely different.  A PDP11 or VAX with an RA60 drive 
should be able to read it.

If you have the ability to use one or both of these and are willing to read the 
data and post it, please contact me.

paul



Re: DL10 documentation

2018-01-11 Thread Paul Koning via cctalk


> On Jan 11, 2018, at 9:47 AM, Noel Chiappa via cctalk  
> wrote:
> ...
> Like I said, we did 'borrow' some idea from IS-IS, in particular the sequence
> number thing - but that may have come direct from Radia's paper:
> 
>  Radia Perlman, "Fault-Tolerant Broadcast of Routing Information", Computer
>Networks, Dec. 1983

Yes, that documents work she did at DEC early on, while developing the original 
link state routing proposal that was intended to be Phase IV but was set aside 
as "too complicated".

> I don't recall where the concept of a designated router stuff came from, if
> IS-IS was any influence there or not.

Designated router was part of DECnet Phase IV, so early 1980s.  OSPF does it in 
a fundamentally different way: DECnet aimed to be deterministic, OSPF aims to 
be stable.  The consequence is that in DECnet a given topology always has the 
same designated router no matter the sequence in which things came together, 
while in OSPF the designated router depends on the order in which things happened. 
 There are arguments for either approach; in routers it doesn't matter much.

paul



Re: DECtape madness

2018-01-13 Thread Paul Koning via cctalk


> On Jan 13, 2018, at 12:28 PM, Al Kossow via cctalk  
> wrote:
> 
> 
> 
> On 1/13/18 9:04 AM, Jon Elson via cctalk wrote:
> 
>> I don't know what you are talking about with Mylar on both sides. They were 
>> conventional magnetic tape, a clear mylar
>> film with oxide applied to one side.
> 
> the actual spec is here:
> 
> http://bitsavers.org/pdf/dec/dectape/3M_DECtape_Spec_Nov66.pdf

And that spec is quite clear, "protective overlay".  This is the reason for the 
legendary robustness of DECtape media.  It was possible to wear it out, but 
only if you used it -- as done at Lawrence University for example -- as 
permanently mounted public file storage so it was read/written many times per 
hour for months on end.  When used as private removable storage it was pretty 
much invulnerable.  Stories of DECtapes being laundered by accident and still 
working fine afterwards have been around for a long time.

paul




Re: Spectre & Meltdown

2018-01-13 Thread Paul Koning via cctalk


> On Jan 13, 2018, at 1:08 PM, Murray McCullough via cctalk 
>  wrote:
> 
> I wrote about Spectre and Meltdown recently: INTEL took its time to inform
> the world! 

Of course, and for good reason.  The current practice has been carefully 
crafted by the consensus of security vulnerability workers.  That is: when a 
vulnerability is discovered, the responsible party is notified confidentially 
and given a reasonable amount of time to produce a fix before the issue is 
announced publicly.  There's a big incentive for that response to happen and 
typically it does.  If the issue is ignored, the announcement happens anyway 
along with public shaming of the party who didn't bother to respond.

With this approach, a fix can often be released concurrently with the 
disclosure of the issue, which dramatically reduces the opportunity for 
criminals to take advantage of the problem.  This isn't a case of being nice to 
Intel; it's an attempt to benefit Intel's customers.

If you read the Meltdown and Spectre papers (by the researchers who discovered 
the problem, not the news rags reporting on it) you'll see this policy 
mentioned in passing.  

paul



Re: Spectre & Meltdown

2018-01-13 Thread Paul Koning via cctalk


> On Jan 13, 2018, at 1:22 PM, Dave Wade via cctalk  
> wrote:
> 
> ...
> It delayed telling the world to allow time for OS providers to apply fixes. 
> This is now standard and the delays are defined...
> 
> http://abcnews.go.com/Technology/wireStory/intel-fixing-security-vulnerability-chips-52122993
> 
> but it looks like in this case it leaked early. Similar bugs affect ARM, AMD 
> and PowerPC but nothing from them either. IBM won't tell the world (it will 
> tell customers, but I am not a customer) if and how it affects Z.

There are two bugs that are largely unrelated other than the fact they both 
start from speculative execution.  One is "Meltdown" which is specific to Intel 
as far as is known.  The other is "Spectre" which is a pretty much unavoidable 
side effect of the existence of speculative execution and appears to apply to 
multiple architectures.  There may be variations; I assume some designs have 
much shorter speculation pipelines than others and if so would be less affected.

Meltdown has a software workaround (it could also be fixed in future chips by 
changing how speculative loads work, to match what other companies did).  
Spectre needs software fixes, possibly along with microcode changes (for 
machines that have such a thing).  You're likely to hear more when the fixes 
are available; it would not make sense to have much discussion before then for 
the reason you mentioned at the top.

paul



Re: DECtape madness

2018-01-16 Thread Paul Koning via cctalk


> On Jan 16, 2018, at 1:04 PM, Doug Ingraham via cctalk  
> wrote:
> 
> On Sat, Jan 13, 2018 at 7:34 AM, David Bridgham via cctalk <
> cctalk@classiccmp.org> wrote:
> 
>> I've wondered if you might not make DECtape tape from 3/4" video tape.
>> I know that DECtape has mylar on both sides but what if you somehow
>> glued two strips of video tape together with the mylar backing on the
>> outside.  Probably want to build a jig of some sort and I'm not sure
>> what glue to use.
>> 
> 
> I have read on several occasions about the mylar on both faces of the
> tape.  I have over 300 reels of DECTape in my collection.  Most of these
> are 3M Scotch branded but around 30 of them are DEC branded in the blue
> plastic boxes.  I have never seen one with mylar on both sides.  This may
> have been something that existed early on but certainly wasn't the norm.

Well, the spec is clear about a protective layer on top.  And I've always been 
told that it's mylar.  And the fact that DECtape is far more wear resistant 
than regular magtape makes it clear it isn't constructed the same way.

It is correct that it doesn't have a glossy top layer matching the glossy 
substrate.  But that doesn't mean there isn't a top layer present.

paul



Re: Google, Wikipedia directly on ASCII terminal?

2018-01-16 Thread Paul Koning via cctalk


> On Jan 16, 2018, at 4:02 PM, Chuck Guzis via cctalk  
> wrote:
> 
> ...
> Of course, when the power goes out during a winter storm, *everything*
> goes out, even if you have emergency backup power for your home.  Said
> fiber-fed terminal has only about an hour of reserve power,
> 
> So a mobile phone, lousy coverage and all, is still a necessity.

Which of course also goes out if the power fails, perhaps not as quickly as a 
poorly constructed POTS system but it will.  Various emergency situations 
(hurricanes etc.) have demonstrated this repeatedly.

paul



Re: Google, Wikipedia directly on ASCII terminal?

2018-01-16 Thread Paul Koning via cctalk


> On Jan 16, 2018, at 4:19 PM, Grant Taylor via cctalk  
> wrote:
> 
> On 01/16/2018 02:07 PM, Paul Koning via cctalk wrote:
>> Which of course also goes out if the power fails, perhaps not as quickly as 
>> a poorly constructed POTS system but it will.  Various emergency 
>> situations (hurricanes etc.) have demonstrated this repeatedly.
> 
> That surprises me.  In Missouri, analog (a.k.a. B1) phone lines are 
> considered "life saving devices" and have (had?) mandates to be available for 
> service even when the power is out.

Sure.  That's why I said that a POTS that fails in an hour or so is "poorly 
constructed".

Still, any telecom service is going to deal only with limited power failures.  
Once the batteries drain, or the generators run out of fuel, *poof*.  And any 
of them rely on quite complex infrastructure that can, and sometimes will, fall 
apart.  I still remember a small NH telco which broke 911 service for a full 
day because their SONET loop wasn't a loop.  They had only bothered to connect 
one end, so when a squirrel chewed through a fiber cable the supposedly fault 
tolerant connection wasn't, and the whole town went off line.

> This is one of the reasons that TelCo equipment had such massive battery 
> backups.
> 
> I expect that a true analog (B1) phone line should stay in service even 
> without power.

It certainly does, which is why I still use them.  Then again, mine is the only 
house of about 20 on this one-mile stretch of line that still uses POTS.

paul



Re: Weird thing ID (core stack?)

2018-01-17 Thread Paul Koning via cctalk
The marking on the connector certainly says CDC.  And the next to last picture 
shows a black faceplate that pretty much matches what you see in 6000 
computers.  Look in the Thornton book 
(http://www.bitsavers.org/pdf/cdc/cyber/books/DesignOfAComputer_CDC6600.pdf 
) 
which shows a photo on page 31.  Two connectors, 30 pins each in 2 rows of 15 
matches what those memories use.

You could confirm it further by looking at the number of core planes in the 
stack.  The 6000 memory modules use 12 planes for the 12 bit PPU words (5 
memory units combine to make the 60 bit CPU word).

Finally, if you're inclined to take off some covers so you can look at the 
memory plane, the fact that it has 5 wires per core (x, y, x inhibit, y 
inhibit, and sense) is distinctive.  Most other memories have only a single 
inhibit wire per core, not two.  The details of how this is used are in chapter 
4 of 60147400A_6600_Training_Manual_Jun65.pdf which you can find on Bitsavers.

paul

> On Jan 17, 2018, at 3:24 PM, Toby Thain via cctalk  
> wrote:
> 
> Hi,
> 
> An acquiantance was wondering about more details on this part:
> 
>  https://imgur.com/a/p1GQ2
> 
> It seems to be a core memory stack? But of what type? CDC?
> 
> Any info appreciated.
> --Toby



Re: Weird thing ID (core stack?)

2018-01-17 Thread Paul Koning via cctalk


> On Jan 17, 2018, at 3:58 PM, Christian Kennedy via cctalk 
>  wrote:
> 
> 
> 
> On 1/17/18 12:24 PM, Toby Thain via cctalk wrote:
> 
>> It seems to be a core memory stack? But of what type? CDC?
> 
> Almost certainly a 6000-series core memory "block" from a PP. 

6000 series central memory uses the same memory blocks, in groups of 5 to make 
up 60 bit words, 4kW per bank, 32 banks in a fully loaded 6600 (128 kW).

paul




Re: Malware history was: Spectre & Meltdown

2018-01-17 Thread Paul Koning via cctalk


> On Jan 17, 2018, at 6:55 PM, Fred Cisin via cctalk  
> wrote:
> 
>>> I used to have a tiny portable manual card punch.
>>> An acquaintance used it to punch /* in the first two columns of his
>>> punchcard based utility bills.   (those characters have special meaning
>>> to 360 JCL.  They have multiple punches per column, so it required
>>> making a punch, then backspacing to make the other punch(es))
> 
> On Wed, 17 Jan 2018, Chuck Guzis via cctalk wrote:
>> /* = end of data set
>> /& = end of job
>> One wonders how a S/360 "C" compiler might deal with this. Preceding it
>> with a space might do the trick.
> 
> Yes, it would, but how would you get 100% compliance wiht no mistakes from 
> PROGRAMMERS?
> 
> A 360 s'posedly COULD be told to ignore, or to respond to something else, but 
> that wasn't usually available.

// DD DATA would ignore // in cols 1,2, but not /*.  I found // DD 
DATA,DLM='@@' -- not sure when that appeared.  I don't remember it from my 
OS/360 dabblings.

paul




Re: Reviving ARPAnet

2018-01-18 Thread Paul Koning via cctalk


> On Jan 18, 2018, at 12:27 PM, Grant Taylor via cctalk  
> wrote:
> 
> On 01/17/2018 01:12 PM, Frank McConnell via cctalk wrote:
> ...
>> So you might think I'd be able to move files between it and a modern FreeBSD 
>> box, right?  I mean, it's all just Ethernet, right?
> 
> Ethernet != Ethernet
> 
> I'm wondering if it might be possible to use an old NetWare 4.x / 5.x box as 
> a router to convert from one Ethernet frame type to another Ethernet frame 
> type.  I.e. from IP over Ethernet II frames to IP over 802.3 frames.

I didn't know there's any such thing as IP over 802.3.  There's IP over 802.2 
(LLC) which is used for things like FDDI, but it would be weird to attempt that 
on Ethernet.

> ...
> Here are the four frame types that NetWare supports:
> 
> - Ethernet II
>- I think this is what we are using for just about everything today.
> - IEEE 802.3 "raw"
>- I'm speculating that this is the frame type that Frank is referring to 
> above.

That's the infamous non-compliant mess Netware came up with by not 
understanding the 802 standard.  It's never valid to run "raw 802.3" -- the 
only correct usage is 802.2 (LLC n for some n) over a MAC layer like 802.3 or 
FDDI.  SNAP is essentially an additional muxing layer on top of 802.2.

paul



Re: GT-40 etc.

2018-01-21 Thread Paul Koning via cctalk


> On Jan 20, 2018, at 11:06 PM, Ethan Dicks via cctalk  
> wrote:
> 
> On Sat, Jan 20, 2018 at 8:32 PM, Paul Anderson  wrote:
>> I think just the VR12, VR14, and the VR17.
> 
> OK.  I've never had any of those.  I'm more wondering what modern
> tubes might work.

Remember that the GT40 is a vector drawing display, not a raster scan.  So you 
need a tube and associated deflection machinery that can handle high frequency 
X and Y deflection waveforms accurately.  This is not easy, especially with 
magnetic deflection.  I don't know what DEC used; CDC did it both ways with the 
6000 series consoles.  The original ones had "dual radar tubes" with 
electrostatic deflection, hairy circuits with 3cx100a5 final amplifier tubes.  
The next generation, in the 170 series, had a single large tube with magnetic 
deflection but still random access vector drawing.  How they did that with 
magnetic deflection is not clear to me, it sounds hard.

paul




Re: Apollo Software

2018-01-21 Thread Paul Koning via cctalk


> On Jan 21, 2018, at 2:50 PM, Al Kossow via cctalk  
> wrote:
> 
> CHM has an agreement with HP to host Apollo and 68K HP 9000 software legally.
> 
> 
> On 1/21/18 11:42 AM, David Collins via cctalk wrote:
>> The HP Computer Museum would be happy to host copies of any Apollo software 
>> if it can be imaged..
> 

It just dawned on me that the subject is Apollo the company bought by HP, not 
Apollo the spacecraft.  Oh well...

paul



Re: Ethernet cable (Was: Sun3 valuations?)

2018-01-23 Thread Paul Koning via cctalk
> On Jan 23, 2018, at 11:10 AM, Bill Gunshannon via cctalk 
>  wrote:
> 
> If you didn't locate the transceivers on those black marks you would
> have had terrible performance as that affects collisions.  Timing (among
> other things like grounding) was very important with that version of
> ethernet hardware.
> 
> bill

Yes, the purpose of the marks is to make the collision mechanism reliable.

Ethernet does not have any critical timing; collisions do not depend on timing. 
The black stripes on official Ethernet cable exist for a different reason: to 
get you to place the taps at positions that are NOT round multiples of a 
quarter wavelength.  The reason: a tap is a (small) impedance bump, which 
causes reflections on the cable.  If you have a lot of taps and they are spaced 
multiples of a wavelength apart, those reflections will combine to produce a 
large reflection, which if you're unlucky will look like a collision.  If you 
pick the correct spacing, the reflections from the various taps are spread out 
across time and don't combine, so none of them add up to a strong enough pulse 
to be seen as a collision.

This is clearly stated in the Ethernet V2 spec, section 7.6.2:

> Coaxial cables marked as specified in 7.3.1.1.6 have marks at regular 2.5 
> meters spacing; a transceiver may be placed at any mark on the cable. This 
> guarantees both a minimum spacing between transceivers of 2.5 meters, as well 
> as controlling the relative spacing of transceivers to insure non-alignment 
> on fractional wavelength boundaries.

Reading between the lines, it's clear you could ignore those marks and get away 
with it in many cases.  Low tap count, for example.  Other positioning that 
meets the "non-alignment" intent.  But for large installations, using the marks 
ensures that you stay out of trouble.

The need to have a transmission line with controlled reflections is also why 
the cable is required to be terminated with accurate terminating resistors, at 
both end points (but not at any other point :-) ) and why splices are made with 
constant impedance connectors (N connector barrels).

Apart from the marks, the 10Base5 cable is pretty ordinary.  It's not exactly 
RG-8/U but it is not all that different either, and if the diameter is close 
enough something like RG-8/U would make an acceptable substitute.

The same sort of considerations could apply to 10Base2, but there things are 
not as strict because the cable is shorter and the station count is 
significantly lower (max of 30).  So the spec simply states that stations 
should be at least 1/2 meter apart, and that there must not be a significant 
stub (more than a few centimeters) between the T connector and the transceiver 
electronics.

If you build with transmission line design rules in mind, you can make Ethernet 
buses out of cable of your choice, so long as it's 50 ohms and good quality 
components are used throughout.  You can, for example, splice 10Base5 to 
10Base2 (with a barrel, not a T) if you follow the more restrictive of the two 
configuration rules.

paul




Re: Ethernet cable (Was: Sun3 valuations?)

2018-01-23 Thread Paul Koning via cctalk


> On Jan 23, 2018, at 3:19 PM, Noel Chiappa via cctalk  
> wrote:
> 
>> From: Grant Taylor
> 
>> I can fairly clearly see the RG-8/U on the side of the cable that David
>> is holding ... Sure, there was probably a better alternative that came
>> along after, with better shielding and marking bands. 
> 
> You keep mixing up the 3 Mbit and 10 Mbit. _They were not the same_. (I
> _really_ need to retake those photos with a ruler in them...)
> 
> The stuff with better shield, marking bands, etc is 10 Mb; it's about 1.05cm
> in diagmeter. The black stuff (the stuff Dave is holding in the video) is 3Mb;
> the piece I have is .95 cm.

The Ethernet spec says that the cable OD is in the range .365 to .415 inch, 
which is 9.27 to 10.54 mm.  The nominal OD of RG-8/U is .405 inches, or 10.28 
mm, which is within spec for Ethernet cable.

One place where the two cable specs differ is in the velocity factor, 0.66 for 
RG-8/U and 0.77 for Ethernet cable.  That relates to the dielectric -- solid 
polyethylene for RG-8/U and foamed material (unspecified) for Ethernet.  Also, 
Ethernet requires a solid inner conductor (for the tap) while RG-8/U may come 
stranded.  (Maybe only in some variants, I'm not sure.)  And there are the 
stripes, of course, but those have no electrical significance.  You can use a 
tape measure if you don't have the stripes.

paul



Re: Experimental Ethernet, XGP, etc.

2018-01-23 Thread Paul Koning via cctalk


> On Jan 23, 2018, at 1:35 PM, Mark Kahrs via cctech  
> wrote:
> 
> A few notes:
> ...
> The vampire tap transceiver used RG-8 cable originally.  That's before they
> added the lines around the cable and added additional shielding.

The cable spec given in the Ethernet standard doesn't mention additional 
shielding.  It does differ from RG-8 in that it calls for foam dielectric and a 
higher velocity factor (0.77 rather than 0.66).  Another common difference (not 
required by the spec) is that RG-8 is polyethylene while Ethernet coax is 
usually PTFE.

paul



Re: Experimental Ethernet, XGP, etc.

2018-01-23 Thread Paul Koning via cctalk


> On Jan 23, 2018, at 4:17 PM, Glen Slick via cctalk  
> wrote:
> 
> On Tue, Jan 23, 2018 at 11:15 AM, Paul Koning via cctalk
>  wrote:
>> 
>> The cable spec given in the Ethernet standard doesn't mention additional 
>> shielding.  It does differ from RG-8 in that it calls for foam dielectric 
>> and a higher velocity factor (0.77 rather than 0.66).  Another common 
>> difference (not required by the spec) is that RG-8 is polyethylene while 
>> Ethernet coax is usually PTFE.
>> 
> 
> FWIW, here is a spec sheet for Belden 89880 Thicknet 10BASE5 cable,
> which apparently DEC used for their part number 17-00324-00 cable.
> 
> https://catalog.belden.com/techdata/EN/89880_techdata.pdf 
> <https://catalog.belden.com/techdata/EN/89880_techdata.pdf>

Yes, that looks like the stuff.  A smoky orange color due to the translucent 
jacket.  I once saw at DEC a run of prototype cable, which was bright yellow 
with black stripes.  Perhaps polyethylene jacket, which would explain why it 
was changed before becoming a product -- that was right around the time when 
the "plenum rated" cable specs were appearing.

paul



Re: Ethernet cable (Was: Sun3 valuations?)

2018-01-24 Thread Paul Koning via cctalk


> On Jan 24, 2018, at 4:05 PM, Brent Hilpert via cctalk  
> wrote:
> 
> On 2018-Jan-23, at 12:27 PM, Paul Koning via cctalk wrote:
>> The Ethernet spec says that the cable OD is in the range .365 to .415 inch, 
>> which is 9.27 to 10.54 mm.  The nominal OD of RG-8/U is .405 inches, or 
>> 10.28 mm, which is within spec for Ethernet cable.
>> 
>> One place where the two cable specs differ is in the velocity factor, 0.66 
>> for RG-8/U and 0.77 for Ethernet cable.  That relates to the dielectric -- 
>> solid polyethylene for RG-8/U and foamed material (unspecified) for 
>> Ethernet.  Also, Ethernet requires a solid inner conductor (for the tap) 
>> while RG-8/U may come stranded.  (Maybe only in some variants, I'm not 
>> sure.)  And there are the stripes, of course, but those have no electrical 
>> significance.  You can use a tape measure if you don't have the stripes.
> 
> I was attempting some calculations to see if I could derive the 2.5M 
> transceiver spacing and was wondering what the velocity factor for the cable 
> was, as it should affect the transceiver spacing in theory.

The velocity factor is specified as 0.77.  The Manchester encoding of 10 Mb 
Ethernet means the dominant frequency is 10 MHz, which in the coax would have a 
wavelength of 23.1 meters (0.77 * c / 10e6).  So the 2.5 meter spacing is 
0.1082 wavelengths, i.e., a number chosen NOT to be a round value.  If you look 
at the integer multiples of that value you'll find hardly any that are close to 
an integer; the first one I see is 37x which is 4.004.  This ensures that there 
is very little in the way of coincident reflections.
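
That check is easy to reproduce with a few lines of Python (c rounded to 3e8 as 
above; the 0.01 "close enough" threshold is just an illustrative choice):

# Tap spacing of 2.5 m expressed in 10 MHz wavelengths in the coax, and the
# first tap count whose total spacing lands near a whole number of wavelengths.

C = 3.0e8        # speed of light, m/s (rounded)
VF = 0.77        # velocity factor from the Ethernet spec
F = 10e6         # dominant frequency of the Manchester-encoded signal
SPACING = 2.5    # tap spacing, meters

wavelength = VF * C / F              # 23.1 m
frac = SPACING / wavelength          # ~0.1082 wavelengths per tap interval
print(f"wavelength {wavelength:.1f} m, spacing = {frac:.4f} wavelengths")

for n in range(1, 201):              # 200 intervals = 500 m of cable
    if abs(n * frac - round(n * frac)) < 0.01:
        print(f"first near-integer multiple: {n} taps = {n * frac:.3f} wavelengths")
        break

Running it prints 0.1082 wavelengths per interval and finds 37 intervals as the 
first near-integer multiple (4.004 wavelengths), matching the numbers above.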

There are lots of other spacings that would work, for example 2.0 meters is 
also a decent choice -- all that is required is "pick a spacing such that 
nearly all the tap points are not round multiples of 1/2 wavelength of 10 MHz 
apart".  2.5 seems a fine choice given the max cable length (500 meters) and 
station count (100); there is no real benefit in picking a smaller value.

If you're using actual RG-8/U (VF = 0.66) then the answers come out different, 
of course.

paul




Re: QSIC update - v6 Unix boots and runs

2018-01-29 Thread Paul Koning via cctalk


> On Jan 29, 2018, at 4:06 PM, David Bridgham via cctalk 
>  wrote:
> 
> For those of you who are following along with our QSIC project, today we
> booted v6 Unix successfully for the first time.  We'd first tried this a
> week or two back but discovered that Unix does use partial block reads
> and writes after all and I hadn't implemented those yet. 

FWIW, so does RT11, and in the case of writes, it requires the rest of the 
block to be zero-filled.  Not everything depends on this, but some parts do; I 
think Fortran is one.

I remember this because I had to handle it in the driver for the RC11, since it 
has a 64 byte blocksize.
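
In case it helps anyone writing such a driver or emulation, the general shape of
the zero fill is something like this (a Python sketch of the technique, not RT11
or RC11 code; the 64-byte block size is just the RC11's number):

    # Sketch of a partial-block write with zero fill (illustration only).  The
    # device accepts only whole blocks, so the tail of the final block is padded
    # with zeros rather than left holding whatever was in the buffer before.
    BLOCK = 64                                   # RC11 block size in bytes

    def write_with_zero_fill(dev, first_block, data: bytes):
        pad = (-len(data)) % BLOCK               # bytes needed to reach a block boundary
        buf = data + b"\x00" * pad
        for offset in range(0, len(buf), BLOCK):
            dev.write_block(first_block + offset // BLOCK, buf[offset:offset + BLOCK])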

paul




Re: QSIC update - v6 Unix boots and runs

2018-01-29 Thread Paul Koning via cctalk


> On Jan 29, 2018, at 6:03 PM, Noel Chiappa via cctalk  
> wrote:
> 
> ...
>> (actually, this should work with Q18 QBUS systems as well)
> 
> Goodness, never thought of that. Hmmm.. it's probably enough hassle to mod
> the software (who ever heard of a 'QBUS map' on a QBUS -11 - but you'd need
> it to give DMA devices access to high memory) ...

No, there is no such thing as a QBUS map.  Q18 systems are like 18 bit Unibus 
systems such as the 11/45: they have a max of 124 kW of memory so 18 bits 
address all available memory.
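(For the arithmetic: 18 address bits reach 2^18 bytes, i.e. 128 kW of address 
space; the top 4 kW of that space is the I/O page, which leaves the 124 kW of 
memory mentioned above.)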

paul



Re: Foonlies

2018-01-31 Thread Paul Koning via cctalk


> On Jan 31, 2018, at 3:28 AM, Lars Brinkhoff via cctalk 
>  wrote:
> 
> This document seems to imply that the Super Foonly and the Foonly F1
> were separate machines.  When I've seen them discussed, they always
> seemed to be used synonymously.
> 
> http://www.bitsavers.org/pdf/dec/pdp10/KC10_Jupiter/memos/foonly_19840410.pdf
> 
> SUPERFOONLY   DESIGNED 1968-71
> 10,000 TTL IC'S
> 3 MIPS
> 
> F1 (1978)
> 5,000 ECL IC'S
> 3.5 MIPS

Wow, 10 years later, with faster chips, and still the same speed?  That's 
surprising.

paul




Re: Foonlies

2018-01-31 Thread Paul Koning via cctalk


> On Jan 31, 2018, at 7:20 PM, Mark Linimon via cctalk  
> wrote:
> 
> On Wed, Jan 31, 2018 at 10:00:53AM -0800, Chuck Guzis via cctalk wrote:
>> An all-ECL redesign (details escape me) resulted in no appreciable
>> improvement in performance.
> 
> But I'm sure the local power company appreciated the extra revenue they
> got from it.
> 
> (I recently donated the little chunk of ECL logic I had back to Rice
> University, where it came from lo those many years ago.  Even by the
> time they were building their "fast" computer in 1970, 74S was already
> starting to catch up.)

Then again, DEC Western Research Lab in the mid 1980s did an interesting 
project to do a full custom single ECL chip implementation of a MIPS (or 
Alpha?) CPU, intended to run at 1 GHz.  The CAD system they built for this was 
quite interesting, as were bits of key technology like a heat pipe based chip 
cooling setup, possibly the first such device.  It wasn't finished (the ECL fab 
shops kept going out of business faster than the CAD team could tweak the 
design rules in the tools) but some neat stuff came out of it, in internal 
reports only, unfortunately.

paul



Re: chip technology dead-ends (was: Foonlies)

2018-02-01 Thread Paul Koning via cctalk


> On Feb 1, 2018, at 12:40 AM, Mark Linimon via cctalk  
> wrote:
> 
> On Wed, Jan 31, 2018 at 07:07:23PM -0800, Chuck Guzis via cctalk wrote:
>> Back in the 70s, 4000-series CMOS was among the slowest logic around.
> 
> I really wish I still had one technical magazine that came out during
> the late 70s/early 80s.  (I don't remember which one it was, anymore.)
> It was devoted to keeping you up with the latest chip/minicomputer
> technology.

Lambda? (Later renamed VLSI Design if I remember right.)  I still have the 
first issue, with an article by Ron Rivest describing the full-custom RSA chip 
(512 bit ALU) he designed.

As for CMOS for high speed computing, I recently read an interesting article 
about CDC spinoff ETA betting the company on that.  It worked in the sense that 
the technology was a success, but the company closed anyway because it was 
controlled by CDC.

http://ethw.org/w/index.php?title=First-Hand:The_First_CMOS_And_The_Only_Cryogenically_Cooled_Supercomputer&oldid=154872

paul



Re: SuperTerm Maintenance Manual

2018-02-01 Thread Paul Koning via cctalk


> On Feb 1, 2018, at 12:51 PM, Eric Smith via cctalk  
> wrote:
> 
> On Thu, Feb 1, 2018 at 10:19 AM, Mike Norris via cctalk <
> cctalk@classiccmp.org> wrote:
> 
>> The SuperTerm was manufactured by Intertec Data Systems c. 1978, it was a
>> 180 CPS dot matrix printer (RS232), quite often used as a console printer
>> in place of a LA36,
> 
> 
> I know it sounds snarky, and admittedly my sample size is small, but it
> seems to me that it was quite _rarely_ used as a console printer in place
> of a LA36. Of the DEC machine rooms I saw back in the day (DECsystem-10,
> PDP-11, VAX-11/7xx), most used an LA36 or LA120 as the console terminal,
> but I also saw one Teletype Model 43 and one VT52. (It was not good
> practice to use a CRT as the system console, IMO.)

My college experience (1973-1975) with consoles started with an ASR-33, then an 
LA30, and finally an LA36.  The LA30 was, amazingly enough, even less reliable 
than the ASR-33.  The LA36, on the other hand, was rock solid (as was the 
LA120, which I didn't see until after I went to DEC).

As for CRTs, it all depends on the design assumptions.  Lots of operating 
system console interfaces are designed on the assumption you have hardcopy 
consoles, and if so a CRT is a bad idea.  But you can certainly make CRT 
consoles and have it work -- consider the CDC 6000 series.

paul



Re: SuperTerm Maintenance Manual

2018-02-01 Thread Paul Koning via cctalk


> On Feb 1, 2018, at 1:01 PM, Eric Smith  wrote:
> 
> On Thu, Feb 1, 2018 at 10:56 AM, Paul Koning  wrote:
> > On Feb 1, 2018, at 12:51 PM, Eric Smith via cctalk  
> > wrote:
> > console terminal [...] VT52. (It was not good
> > practice to use a CRT as the system console, IMO.)
> 
> As for CRTs, it all depends on the design assumptions.  Lots of operating 
> system console interfaces are designed on the assumption you have hardcopy 
> consoles, and if so a CRT is a bad idea.  But you can certainly make CRT 
> consoles and have it work -- consider the CDC 6000 series.
> 
> Just a wild-ass guess, but I suspect that a typical CDC 6600 system would 
> have had a printer that logged console interaction?  I'm only suggesting that 
> a CRT console with no logging was a bad idea.

True.  The CDC OS would log anything interesting to a "dayfile", essentially a 
running log of system events including operator actions.  Those go to disk.  
Dayfile messages related to a particular job would also be printed with that 
job output.

> Of course, in principle the logging could be to disk or tape, but I don't 
> think most "machine-room" people would have trusted that nearly as much for a 
> console log. One wants a log of what happened on the console even when the 
> system was not working well.

I guess they trusted the disk enough.  Normal practice would be to save the 
dayfile to a regular disk file periodically (perhaps as part of daily 
maintenance), at which point you could print it, or archive it to tape, or 
whatever else comes to mind.

There was also the "accounting log", a second dayfile with accounting related 
messages coded in a fashion that made it straightforward to extract the data 
for billing.  And an "error log" with messages related to hardware problems 
(I/O errors with the hardware error detail data).

paul



Re: SuperTerm Maintenance Manual

2018-02-01 Thread Paul Koning via cctalk


> On Feb 1, 2018, at 1:56 PM, Fred Cisin via cctalk  
> wrote:
> 
> On Thu, 1 Feb 2018, Paul Koning via cctalk wrote:
>> I guess they trusted the disk enough.  Normal practice would be to save the 
>> dayfile to a regular disk file periodically (perhaps as part of daily 
>> maintenance), at which point you could print it, or archive it to tape, or 
>> whatever else comes to mind.
> 
> Was there typically any other way to access the disk file, such as if the 
> system were down?
> It could be useful in troubleshooting, such as if the system were down.

Not that I know of.  The file system structure was quite trivial, so it would be 
easy to write a standalone inspection tool, but I don't remember any such thing.

> At least they would not have had "Help" to suggest, "If the system will not 
> IPL/Boot, then run Troubleshooting Wizard"

That at least isn't an issue, since deadstart (CDC for "IPL") was traditionally 
done from magnetic tape, and could also load from other media if needed.

paul



Re: Interest Check: Belden Thicknet 10base5 Ethernet Coax

2018-02-02 Thread Paul Koning via cctalk


> On Feb 2, 2018, at 4:35 PM, Tony Aiuto via cctalk  
> wrote:
> 
> Interesting. Count me in for 20'. I would want to pick up at VCF east.
> 
> These people have taps, but no transceivers.
> https://www.connectorpeople.com/Connector/TYCO-AMP-TE_CONNECTIVITY/2/228752-1
> 
> Has anyone found the right terminators?

It's simply a good quality 50 ohm terminator, coax type, with a matching 
connector.  If you use standard 10Base5 conventions, the connectors are Type N 
male, so you splice pieces together with F-F type N barrel connectors, and you 
terminate with female type N 50 ohm terminators.  If you can't find female 
terminators, a male terminator plus a barrel will of course serve.

paul



Re: Interest Check: Belden Thicknet 10base5 Ethernet Coax

2018-02-02 Thread Paul Koning via cctalk
On Feb 2, 2018, at 5:08 PM, Bill Gunshannon via cctalk  
wrote:
> 
> ... Just remember,
> never ground both ends.  :-)

More precisely: ground the cable at exactly one point.  Any point will do, but 
it must be grounded (because none of the taps provide a ground).

paul



Re: Interest Check: Belden Thicknet 10base5 Ethernet Coax

2018-02-02 Thread Paul Koning via cctalk


> On Feb 2, 2018, at 5:22 PM, Grant Taylor via cctalk  
> wrote:
> 
> On 02/02/2018 03:10 PM, Paul Koning via cctalk wrote:
>> More precisely: ground the cable at exactly one point.  Any point will do, 
>> but it must be grounded (because none of the taps provide a ground).
> 
> Why is that?
> 
> Is it in an attempt to avoid current loops / sneak current paths?

Yes, exactly.  And if the cable crosses between buildings, which at least for 
10Base5 is plausible, they might have different ground systems.  If so, 
grounding both ends might produce a LARGE current through the cable, possibly 
enough to be hazardous.

(Somewhat different but similar: there's a story about the DEC building at 
Marlborough, which apparently had two separate power sources at the two ends, 
from different external supplies.  Each was grounded at the service entry, but 
the two were not bonded together (a code violation).  One of the machine rooms 
had branch circuits from both.  One system had a string of large disk drives, 
RP04 or the like, some fed from one branch, some from the other.  As required 
by the book, the drives were bonded together by substantial braided wire 
jumpers.  One of those got hot, possibly enough to melt it, because the two 
grounds were at different voltages and the "sneak current" was many amps.  I'm 
not sure if the story is true, but it sounded somewhat plausible.)

> I thought the outside of BNC connectors (et al) was typically bonded to the 
> card edge connector, which is (ideally) bonded to the system chassis, which 
> should be grounded either directly or indirectly.

Many coax connectors have the shell connected to the chassis.  But 10Base2 
Ethernet connectors are required to be insulated: if you look closely you will 
(should) see a plastic sleeve between the jack shell and the mounting flange.  
I don't remember for sure, but it may be that 10Base2 repeaters ground that 
end, or have an option to do so.  That would make sense because you usually 
have just one repeater on a 10Base2 segment, so grounding there is a logical 
thing to do.

The requirement for controlling grounding is also why 10Base2 connectors are 
often made with insulating sleeves.  For example, the ones DEC sold had plastic 
shells as an integral part of the connector assemblies (T connectors too).   
Similarly, you might find plastic shells covering 10Base5 barrel splices or 
terminators (those were separate from the connectors themselves).

paul



Re: Tangent: Interest Check: Belden Thicknet 10base5 Ethernet Coax

2018-02-02 Thread Paul Koning via cctalk


> On Feb 2, 2018, at 7:26 PM, Paul Berger via cctalk  
> wrote:
> 
> 
> 
> On 2018-02-02 7:13 PM, Fred Cisin via cctalk wrote:
>>> they might have different ground systems. If so, grounding both ends might 
>>> produce a LARGE current through the cable, possibly enough to be hazardous.
>> 
>> OB_Ignorant_Question:  Is that the reason why RS232 DB25 has both pin 1 and 
>> pin 7?
> Pin 1 is supposed to be protective (shield) ground and pin 7  is signal 
> ground.
> 
> Paul.

Similar to safety ground (green) and neutral (white, in the USA) on power 
wiring.

paul



Re: Control Data 841 disk drive's 3-phase power supply resurrection

2018-02-18 Thread Paul Koning via cctalk


> On Feb 18, 2018, at 12:34 PM, P Gebhardt via cctalk  
> wrote:
> 
> Hello list,
> 
> currently, I am in the process of trying to bring back to life a disk drive 
> installation from Control Data known as "841 Multiple Disk Drive" ( MDD ). 
> From the early '70s. It uses hydraulic disk head actuators! Pictures of the 
> subsystem are here:
> 
> http://www.digitalheritage.de/peripherals/cdc/841/841.htm
> 
> I started with the power supply. Most of the electrolytic capacitors need to 
> be reformed which is being done. 
> As far as I know, some computer installations used 400Hz 3-phase back in the 
> days. Does anybody know, if that is the case for this type of drive systems? 
> I couldn't find any indication so far, except for the input filter that 
> supports up to 400Hz (written on it). 
> I've quite some experience with old linear power supplies, but never worked 
> with three-phase supplies, yet.
> Has anybody experience with this? Anything particular to be considered?

CDC mainframe shops used 400 Hz three-phase power for the CPU and the display 
console (DD60).  I don't know if it was used in other places.  The advantage of 
400 Hz power is smaller transformers and filters; in addition, it was generated 
by motor-generator units which produce good clean power even if the building 
receives crummy power from the utility.

> There is an operator's manual, but there don't seem to be manuals or 
> schematics for this type of CDC drive on bitsavers or elsewhere on the net.
> Who could help me by pointing out where to get these?
> A lot of questions, I know :)

There is a by-invitation list of CDC experts and fans; I included a BCC to the 
list owner on this reply.  There might be help available from that group.

There are two considerations for disk drive power.  Older disk drives often use 
AC induction motors, which tend to be three-phase motors.  With induction 
motors, the rotational speed is determined by the motor design and the power 
line frequency.  But I haven't heard of induction motors operating from 400 Hz 
power; those would spin quite rapidly indeed unless they had lots of poles.  50 
or 60 Hz induction motors are fairly common, which is why you tend to see 3600 
rpm nominal rotation specs for a lot of disk drives.
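(For reference, the governing relation is synchronous speed = 120 x line 
frequency / number of poles, less a few percent of slip.  A common 2-pole motor 
gives the familiar nominal 3600 rpm at 60 Hz; the same motor on 400 Hz would try 
to run at 24,000 rpm, which is why it would need many poles or a different motor 
type.)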

The other aspect is DC power supply design.  If a supply is designed 
specifically for 400 Hz input, it won't operate properly from 60 Hz mains.  The 
transformers will be too small, and the filtering inadequate.  But if the spec 
says 50-400 or 60-400 Hz, that means the transformer and filter sizing is done 
for mains power (50/60 Hz) and the transformer core is constructed to still 
have acceptable losses at 400 Hz.  That's not common but I could imagine CDC 
doing this because of their use of 400 Hz.  If you have such a wide-range 
device, just feed it mains power and all will be well.  But, say, a CDC 6600 
central processor can't be powered by 50 or 60 Hz mains power.

As for 3-phase vs. 1-phase DC power supplies, that just means there are more 
rectifiers and smaller filters because the ripple is reduced by rectifying 3 
phases.  You might be able to get away with feeding such a device single-phase 
power, but it would probably be marginal, and 3-phase would be a better option 
if you can get it.  There are converters that do a decent job, and depending on 
where you live you might just get it from the power company.
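(Concretely: a full-wave single-phase rectifier puts out ripple at twice the 
line frequency, while a full-wave three-phase bridge puts it out at six times 
the line frequency and at a much smaller amplitude, so the filter chokes and 
capacitors can be correspondingly smaller.)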

paul



Re: Control Data 841 disk drive's 3-phase power supply resurrection

2018-02-19 Thread Paul Koning via cctalk


> On Feb 18, 2018, at 11:02 PM, Chuck Guzis via cctalk  
> wrote:
> 
> On 02/18/2018 04:55 PM, Al Kossow via cctalk wrote:
>> 
>> 
>> On 2/18/18 4:07 PM, Chuck Guzis via cctalk wrote:
>> 
>>> Generally, the electromechanical stuff (motors) was run from 208V
>>> 3-phase and often, the electronics from 400Hz.
>>> 
>>> At least that's what I recall.
>>> 
>>> --Chuck
>>> 
>> 
>> that isn't what the schematic looks like.
>> there is a low voltage transformer hung off one of the phases
> 
> Perhaps that's only for controllers and CPUs and such.  It's been too
> long...

It varies.  Looking at CDC 6600 CPU cabinet power schematics, you can see 400 
Hz 3 phase powering the DC supplies, and 50/60 Hz three phase for the cooling 
system compressors.  An interesting detail is that the DC supplies seem to be 
unregulated, with choke input filters.  That makes some sense: the load is 
reasonably constant with the logic used in the 6600, and choke input supplies 
have decent regulation.

The DD60 console takes 400 Hz 3 phase for the high voltage supply, and uses 60 
Hz single phase (120 volt) for the other supplies.

paul



Re: Writing emulators [Was: Re: VCF PNW 2018: Pictures!]

2018-02-20 Thread Paul Koning via cctalk


> On Feb 20, 2018, at 2:22 PM, Pontus Pihlgren via cctalk 
>  wrote:
> 
> On Mon, Feb 19, 2018 at 06:36:13PM -0600, Adrian Stoness via cctalk wrote:
>> whats invovled in makin an emulator?
>> i have a chunk of stuff for the phillips p1000
> 
> I would say it depends a lot on how complex your target machine is. But 
> in essence you will have to write code for each device you wish to 
> emulate mapping their functionality over to your host machine, the one 
> running the emulator.
> 
> As a minimum you will write code for the CPU and some sort of output 
> device, such as a serial console. 

I would add: unless you have a very strange situation (which is unlikely with a 
Philips machine), SIMH is the foundation to use.  There are other simulators 
out there built on other foundations, and they work fine.  But SIMH has an 
excellent structure and takes care of a whole lot of work, allowing you to 
concentrate on the actual technical substance of emulating a specific machine.

paul



Re: Writing emulators [Was: Re: VCF PNW 2018: Pictures!]

2018-02-21 Thread Paul Koning via cctalk


> On Feb 20, 2018, at 8:18 PM, Sean Conner via cctalk  
> wrote:
> 
> It was thus said that the Great Eric Christopherson via cctalk once stated:
>> On Tue, Feb 20, 2018 at 5:30 PM, dwight via cctalk 
>> wrote:
>> 
>>> In order to connect to the outside world, you need a way to queue event
>>> based on cycle counts, execution of particular address or particular
>>> instructions. This allows you to connect to the outside world. Other than
>>> that it is just looking up instructions in an instruction table.
>>> 
>>> Dwight
>>> 
>> 
>> What I've always wondered about was how the heck cycle-accurate emulation
>> is done. In the past I've always felt overwhelmed looking in the sources of
>> emulators like that to see how they do it, but maybe it's time I tried
>> again.
> 
>  It depends upon how cycle accurate you want.  My own MC6809 emulator [1]
> keeps track of cycles on a per-instruction basis, so it's easy to figure out
> how many cycles have passed.  

SIMH in principle allows the writing of cycle-accurate CPU simulators, but I 
don't believe anyone has bothered.  It's hard to see why that would be all that 
interesting.  For some CPUs, the full definition of how long instructions take 
is extremely complex.  Take a look at the instruction timing appendix in a 
PDP-11 processor handbook, for example; there are dozens of possibilities even 
for simple instructions like MOV, and things get more interesting still on 
machines that have caches.

Another consideration is that you don't get accurate system timing unless the 
whole system, not just the CPU emulation, is cycle-accurate.  And while there 
is roughly-accurate simulation of DECtape in SIMH (presumably for TOPS-10 
overlapped seek to work?) in general it is somewhere between impractical and 
impossible to accurately model the timing of moving media storage devices.  
You'd have to deal not just with seek timing but with the current sector 
position -- yes, I can imagine how in theory that would be done but it would be 
amazingly hard, and for what purpose?

If you have a machine with just trivial I/O devices like a serial port or a 
typewriter, then things get a bit more manageable.

SIMH certainly has event queueing to deal with I/O.  For correctly written 
operating systems that is extremely non-critical; if an I/O completes in a few 
cycles the OS doesn't care, it just has a fast I/O device.  Poorly written 
operating systems may have unstated dependencies on interrupts occurring "slow 
enough".  So a common practice is for the CPU emulation just to count 
instructions (not cycles or nanoseconds) and for the I/O events to be scheduled 
in terms of a nominal delay expressed in average instruction times.  I haven't 
yet run into trouble with that (but then again, I've been working with 
well-built operating systems).
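
A minimal Python sketch of that scheme (purely illustrative; SIMH's real event
machinery is in C and much more elaborate):

    import heapq

    # Toy version of the instruction-count event queue described above.
    class ToySim:
        def __init__(self, cpu_step):
            self.cpu_step = cpu_step     # callable that executes one emulated instruction
            self.icount = 0              # instructions executed so far
            self._events = []            # min-heap of (due_icount, seq, callback)
            self._seq = 0

        def schedule(self, delay_insns, callback):
            """Arrange for callback() to run 'delay_insns' instruction times from now."""
            self._seq += 1
            heapq.heappush(self._events, (self.icount + delay_insns, self._seq, callback))

        def step(self):
            self.cpu_step()
            self.icount += 1
            while self._events and self._events[0][0] <= self.icount:
                _, _, cb = heapq.heappop(self._events)
                cb()                     # e.g. set the controller's DONE bit and interrupt

    # A disk read handler might call sim.schedule(1000, ctrl.finish_read), i.e.
    # "complete after roughly 1000 average instruction times" -- no seek or
    # rotational modelling, but slow enough to keep fussy drivers happy.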

paul




Re: Writing emulators (was Re: VCF PNW 2018: Pictures!)

2018-02-21 Thread Paul Koning via cctalk


> On Feb 21, 2018, at 11:19 AM, Ray Arachelian via cctalk 
>  wrote:
> 
> On 02/19/18 19:36, Adrian Stoness via cctalk wrote:
>> whats invovled in makin an emulator?
>> i have a chunk of stuff for the phillips p1000
> 
> Quite a lot actually.  A single CPU system is difficult enough, but a
> mainframe might be much, much harder.  The idea to use an existing
> emulator framework, such as SIMH, is a great one.
> 
> An easy implementation is to implement an interpretive CPU emulation at
> first, and then later on add other things such as JITs or caching.
> Does this machine implement microcode?  Do you want to emulate it all
> the way down to the microcode level?  Is microcode changeable by the
> OS/applications?  If not maybe implement the higher level (assuming
> CISC) CPU layer.

If microcode is not user-changeable, or if that capability is not a core 
feature, then you can easily omit it.  That tends to make the job much easier.  
For example, I don't know that anyone emulates the PDP-11/60 WCS.  The absence 
of that emulation isn't a big deal, unless you want to run the Richy Lary PDP-8 
emulator on that emulated 11/60.  (Has it been preserved?)

Caching doesn't change user-visible functionality, so I can't imagine wanting 
to emulate that.  The same goes for certain error handling.  I've seen an 
emulator that included support for bad parity and the instructions that control 
wrong-parity writing.  So you could run the diagnostic that handles memory 
parity errors.  But that's a pretty uncommon thing to do and I wouldn't bother.

> There's a lot to consider.  The CPU(s), any co-processors, I/O
> devices/busses, peripherals/terminals, etc.  Are you going to emulate
> every co-processor in software, or is the system documented enough so
> you can emulate just the protocols that the main CPU(s) use to talk to
> those devices?  For example, many systems have some sort of storage
> processors.  You could emulate everything 100% in software, but for that
> you'd need disk and firmware dumps of everything.  Or, if the firmware
> on those is fairly fixed, just emulate the functionality.

Typically you'd emulate the I/O device functionality, regardless of whether 
that is implemented in gates or in co-processor firmware.  That's the approach 
taken with the MSCP I/O device emulation in SIMH, or the disk controller 
emulation in the CDC 6000 emulator DtCyber.  All those use coprocessors, but 
the internals of those engines are much more obscure and much less documented 
than the APIs of the I/O devices, and finding executable code may also be very 
hard (never mind source code and assemblers).  For example, I have only seen 
UDA50 firmware once, on a listing on a desk in CXO back around 1981.

> .
> You'll need to fully understand all aspects of the hardware of that
> machine, every detail.  If you have schematics and can read them, they
> can be an absolute gold mine as documentation sometimes is vague, and
> looking at the schematics and resolve what actually happens.  If you
> have source code for an OS, that too can be a great resource to help
> understand what happens and how.

Exact emulation is ideal, but often not necessary.  What's required is 
emulation that delivers what the OS and drivers rely on.  Devices may have 
undocumented features, and bugs, that aren't touched by the software that you 
care about.  Just the documented features (from the programmer manuals) can 
often be sufficient, especially in well documented machines (DEC machines are 
examples of such, most of the time).  Sometimes you get surprised by some OS.  
RSTS/E occasionally pokes at strange internals, perhaps to complain about a 
missing ECO that needs to be installed for the OS to be happy.

> ...
> You'll have to implement more than 90% of a working emulator before
> you'll even be able to see progress and know you're on the right track,

Handwritten short test sequences are very helpful for this.  While you'll need 
90% or so for an OS to boot, you might need only 5% to test a few instructions 
entered via deposit commands.
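
For instance, with the SIMH PDP-11 simulator you can exercise a couple of
instructions with no OS at all.  Something along these lines (typed from memory,
so treat the exact syntax as approximate; the deposited words are MOV #5,R0,
ADD #3,R0, and HALT, in octal):

    sim> d 1000 012700
    sim> d 1002 000005
    sim> d 1004 062700
    sim> d 1006 000003
    sim> d 1010 000000
    sim> g 1000
    sim> ex r0

If MOV and ADD behave, examining R0 afterwards should show 000010 (octal 10,
i.e. decimal 8).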

paul



Re: Writing emulators (was Re: VCF PNW 2018: Pictures!)

2018-02-21 Thread Paul Koning via cctalk


> On Feb 21, 2018, at 2:24 PM, Guy Sotomayor Jr  wrote:
> 
> 
> 
>> On Feb 21, 2018, at 10:59 AM, Paul Koning via cctalk  
>> wrote:
>> 
>> 
>> Caching doesn't change user-visible functionality, so I can't imagine 
>> wanting to emulate that.  The same goes for certain error handling.  I've 
>> seen an emulator that included support for bad parity and the instructions 
>> that control wrong-parity writing.  So you could run the diagnostic that 
>> handles memory parity errors.  But that's a pretty uncommon thing to do and 
>> I wouldn't bother.
> 
> I disagree, especially if you’re using an emulator for development.  Caching 
> is one of those things that can go
> horribly wrong and not having them emulated properly (or at all) can lead to 
> bugs/behaviors that are significantly
> different from real HW.  The same goes for error reporting/handling.  There 
> are cases where errors may be expected
> and not having them can cause the SW to behave differently.

Yes, there may be cases where errors are expected.  For example, out of range 
address errors, which are used by memory size probing.  But wrong-parity errors 
are far less likely.

I saw it emulated on Dick Gruene's EL-X8 emulator, and tested by the associated 
test program he wrote (which compared bare-metal execution of the test program 
with execution of that same program on the emulator, just like what Ray 
suggested).  There is a CWI report that describes this; it's pretty neat.  But 
apart from testing the completeness of his understanding and the emulation of the 
"write wrong parity" instruction, it didn't really add anything useful to any 
other software.  No OS or application I have found depends on that capability, 
so our SIMH based emulator omits this and no harm is done by that omission.
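
That compare-against-the-real-machine approach is easy to sketch (illustration
only, not Gruene's actual harness):

    # Illustration only: log the machine state after each step on the real
    # hardware and on the emulator, then report the first point of disagreement.
    def first_divergence(reference_trace, emulator_trace):
        for i, (ref, emu) in enumerate(zip(reference_trace, emulator_trace)):
            if ref != emu:
                return i, ref, emu          # step number plus the two differing states
        return None                         # no disagreement up to the shorter trace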

>>> ...
> 
> However, it is my belief (and I think others have also stated) that assuming 
> infinitely fast I/O (e.g. no delays what so ever) can cause issues because in 
> many cases the SW expects to be able to do some work between the time that 
> the I/O is started and when it completes.

True, that is unfortunately a fairly common type of software bug.  And because 
it is, emulators have to work around those bugs.  I make it a point to call it 
a bug, though, because I don't want anyone to get the impression that OS 
programmers who wrote such things were doing the right thing.

paul




Re: Writing emulators (was Re: VCF PNW 2018: Pictures!)

2018-02-21 Thread Paul Koning via cctalk


> On Feb 21, 2018, at 1:36 PM, Adrian Stoness via cctalk 
>  wrote:
> 
> I don't think there is any physical hardware left to test from, except for
> maybe part of the front panel and a tape drive sitting in a museum in Europe,
> and a couple of random bits in some private collections.
> 
> As for drawings, there are some, but not a lot.  Most of the known
> documentation is in Dutch  ...

There are some people around who speak Dutch and are interested in classic 
computers.  I've done a bunch, but my interest is Electrologica so I don't want 
to divert to other architectures.

I wonder if Google Translate, or analogous services, would do a tolerable job 
on technical documents of this sort.

paul



Re: Writing emulators (was Re: VCF PNW 2018: Pictures!)

2018-02-21 Thread Paul Koning via cctalk


> On Feb 21, 2018, at 2:47 PM, Henk Gooijen via cctalk  
> wrote:
> 
> ...
> Text on my website:
> 
> The only other PDP-11 that has a WCS option (KUV11, M8018) is the PDP-11/03, 
> KD11-F processor.
> Ritchie Lary wrote the micro-code for the PDP-11/60 to emulate the PDP-8 
> instruction set, making it the "fastest PDP-8 ever".

A few more snippets of information: it was used by the WPS-8 development group 
in Merrimack, NH (a few cubes over from my first office at DEC).  The host ran 
RSTS/E, and there was a RSTS "Runtime System" as part of the machinery.

I've never seen any of the code; the brief description above is all I know.

paul




Re: SimH DECtape vs. Tops-10 [was RE: Writing emulators [Was: Re: VCF PNW 2018: Pictures!]]

2018-02-21 Thread Paul Koning via cctalk


> On Feb 21, 2018, at 3:25 PM, Rich Alderson via cctalk  
> wrote:
> 
> From: Paul Koning
> Sent: Wednesday, February 21, 2018 6:41 AM
> 
>> And while there is roughly-accurate simulation of DECtape in SIMH (presumably
>> for TOPS-10 overlapped seek to work?)
> 
> It's not for Tops-10.  SimH only provides the KS-10 processor[1], so DECtape 
> is
> not a possible peripheral.

Ok, then it could be for VMS, which also does this (via Andy's unsupported 
driver).  I don't know of PDP-11 or other minicomputer systems that do DECtape 
overlapped seek.  I suppose it could be for artistic verisimilitude...

paul



Re: Writing emulators [Was: Re: VCF PNW 2018: Pictures!]

2018-02-22 Thread Paul Koning via cctalk


> On Feb 22, 2018, at 3:09 AM, Chris Hanson via cctalk  
> wrote:
> 
> On Feb 21, 2018, at 11:09 AM, Al Kossow via cctalk  
> wrote:
>> 
>> That is tricky to cleanly and efficiently implement where each component is 
>> modeled independently and
>> glued together with a higher-level framework.
> 
> This is why I wonder if multithreaded emulation might be a reasonable future 
> approach: Model more components of a system as operating independently as 
> they produce and react to signals, have them block when not reacting (either 
> to a clock pulse or a signal), and let the operating system manage scheduling.

It depends on the machine being emulated.  In some cases, multiple components 
that seem to be independent actually have tightly coupled timing, and software 
relies on that.

For example, a CDC 6000 series mainframe has 10 or 20 PPUs plus one or two 
CPUs.  With a bit of care, you can model the two CPUs using two threads.  But 
all the PPUs have to be done in one thread because they run in lockstep.  If 
you make them each a thread, the OS won't boot.  I tried it and gave up.  It 
would have been nice, it might have opened a path to a power-efficient 
emulation, but it didn't appear doable.

Processors vs. I/O devices might work, but again the devil is in the details.
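
To make the lockstep point concrete, here is the shape of the loop that worked
versus the one that did not (a Python sketch of the structure, not actual
DtCyber code):

    # Structure sketch only.  What works: one thread advances every PPU one
    # cycle in a fixed order, so their relative timing stays exact, much like
    # the hardware barrel.
    def run_lockstep(ppus, cpus):
        while True:
            for ppu in ppus:             # 10 or 20 of them, one cycle each, in order
                ppu.step()
            for cpu in cpus:             # or, with care, give the CPUs their own threads
                cpu.step()

    # What failed: a thread per PPU.  The OS relies on the PPUs' exact relative
    # timing, and independent scheduling destroys that, so it won't boot.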

paul




Re: DEC Pro 350

2018-02-26 Thread Paul Koning via cctalk


> On Feb 25, 2018, at 5:39 PM, Kurt Hamm via cctalk  
> wrote:
> 
> Thanks for the suggestions. Interestingly, upon first boot I was able to
> get the hard disk controller error with the picture of the computer.  Then,
> sure enough, subsequent reboots failed to display anything.
> 
> I removed all the cards and booted with no luck.

That would be the expected result if the status lights indicate a motherboard 
failure -- it means you're not reaching the point where it looks at the I/O 
cards.

> It looks like I will need to build a cable to try terminal mode.  I did
> hook up a vt220 with a 9-to-25 cable, but didn't get anything.

You need a cable specifically wired as console cable.  Check the technical 
manual for the details.  The DB25 connector is used to connect either a 
(serial) printer or a console, and the two are distinguished by a jumper 
between two of the pins.  So a console cable has that jumper in its connector, 
a printer cable does not.  If you don't have the jumper, the speed will be set 
differently (4800 rather than 9600) and the UART will not appear at the console 
UART address.

paul



Re: SimH DECtape vs. Tops-10 [was RE: Writing emulators [Was: Re: VCF PNW 2018: Pictures!]]

2018-02-26 Thread Paul Koning via cctalk


> On Feb 26, 2018, at 12:06 PM, Doug Ingraham via cctalk 
>  wrote:
> 
> The purpose of an emulator is to accurately pretend to be the original
> hardware.  It doesn't matter that the original OS runs on a particular
> emulator.  If a program can be written that runs on the original hardware
> but fails on the emulator then there is a flaw in that emulator.

That's true.  But it is unfortunately also true that creating a bug-for-bug 
accurate model of an existing machine is extremely hard.  Building an 
OS-compatible version is not nearly as hard, but still hard enough.  Passing 
diagnostics is yet another hurdle; in some cases that isn't feasible without an 
entirely different design.  For example, in the CDC 6600 there is the "exchange 
jump" test, which at some point depends on the execution time of a divide 
instruction and the timing of exchange instructions.  It is very hard for an 
emulator to  mimic that (and an utter waste of effort for every other bit of 
software available for that machine).

Another example is the work pdp2011 had to do in order to make RSTS boot on 
that FPGA based PDP-11 emulation, because RSTS was doing some CPU-specific 
hackery to test for an obscure CPU (or FPU?) bug that had been corrected in 
some ECO that it wanted to require.  The only way to figure out how to do that 
is to reverse engineer that particular bit of code, which isn't normally 
available in source form.

paul




Re: DEC Pro 350

2018-02-26 Thread Paul Koning via cctalk


> On Feb 26, 2018, at 2:56 PM, Kurt Hamm  wrote:
> 
> Yeah, the fact that the issue is intermittent (mostly not booting) is weird.
> 
> Just to be clear,  You mentioned a 25 pin connector.  My understanding is 
> that the console port is the printer port which is a 9 pin connector.

Sorry, you're right, my bad memory.  The DB25 is at the other end of the 
standard DEC cables, of course.  

The relevant documentation is table 5-26, page 181 (5-131) of the PRO technical 
manual, volume 1.  "Terminal L" (pin 9) is the signal that says "this is a 
console terminal" -- jumper that to signal ground (pin 7).  That is also stated 
explicitly on page 177 (5-127), third paragraph.

paul



Re: Bug-for-bug compatibility [was RE: SimH DECtape vs. Tops-10 [was RE: Writing emulators [Was: Re: VCF PNW 2018: Pictures!]]]

2018-02-28 Thread Paul Koning via cctalk


> On Feb 28, 2018, at 1:10 PM, David Bridgham via cctalk 
>  wrote:
> 
> 
>> Imagine our chagrin when days of trying to correct the
>> problem led to the conclusion that the diagnostic was incorrect.
> 
> I may have a situation like this in working on my FPGA PDP-10.  The
> Processor Reference Manuals seem quite clear that the rotate
> instructions take E mod 256.  One of the manuals I've found even adds
> that they never move more than 255 positions.  And yet the diagnostics I
> have clearly want ROT AC,-256 to move 256 positions to the right, not
> 0.  Not having a real PDP-10 to compare against, I don't know which is
> right.

In general, manuals are only a rough approximation of reality.  I remember an 
old joke that "PDP-11/x is compatible with PDP-11/y if and only if x == y".  
And sure enough, if you look at the models appendix of the PDP-11 Architecture 
Handbook you will see clearly that this is true.  More precisely, it is if you 
ignore cases where two model numbers were assigned to the same thing, such as 
11/35 and 11/40.

With the VAX, this got cleaned up to a significant extent, and ditto with 
Alpha.  In both cases, an internal validator tool was created to verify that, 
at least from the point of view of instruction execution, a new machine worked 
the same as an existing reference machine.  But this seems to be quite an 
unusual notion in the history of computer hardware development generally.  Even 
when standard specifications exist that appear to spell out how an architecture 
is supposed to work, the reality is that two implementations will in general do 
it differently.  That is particularly likely to happen in cases of "no one will 
do this" -- like shifts by more than the word size, or other oddball stuff.  

And sometimes CPU designers do stuff that's just plain nuts, like the CDC 6600 
which has a shift instruction where some of the high order bits must be zero 
and some are ignored.  Or the way it executes a 30-bit instruction that starts 
in the last 15 bits of the instruction word.  Both are cases where there is 
additional logic involved (or at least extra wires) to do something that 
clearly serves no purpose.  And these things are definitely not documented in 
any user manual, though you can find them if you read the schematics carefully 
enough.

paul



Re: PDP11 I/O page memory map

2018-02-28 Thread Paul Koning via cctalk
The various handbooks are useful.  Processor, peripherals, and architecture 
handbooks all give parts of the picture.

paul


> On Feb 28, 2018, at 8:15 PM, Douglas Taylor via cctalk 
>  wrote:
> 
> Is there a document that describes the bank 7 memory page and what addresses 
> are reserved for what?  I think I've seen this before but can't seem to put 
> my hands on it.
> 
> Another question, bootstrap is reserved for 173000, how many words are 
> allowed there for this?  How do the more complicated bootstraps, e.g. 
> microPDP11-53, accommodate this limitation?
> 
> Doug
> 



Re: PDP11 I/O page memory map

2018-03-01 Thread Paul Koning via cctalk


> On Mar 1, 2018, at 6:12 AM, allison via cctalk  wrote:
> 
> ... and the MMU also
> understands that peripherals live in that physical space be it 16/18/22
> bit memory map.

That's true when the MMU is disabled; in that case it supplies 1 bits for the 
upper address bits on page 7 references, and zeroes for the other pages.  But 
if the MMU is enabled, 
all mapping goes through its mapping registers, and page 7 is no longer 
special.  By software convention, kernel data page 7 is configured to point to 
the I/O page, but that isn't required.  If you wanted to be perverse you 
could map the I/O page via page 6 and confuse a whole generation of programmers.
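
A sketch of that MMU-off relocation (Python, illustrative only, not from any DEC
document; 18-bit physical space as on an 11/45):

    # 16-bit virtual to 18-bit physical with the MMU disabled.  Page 7 (the top
    # 8 KB, 160000-177777 octal) has its upper physical bits forced to 1 so it
    # lands on the I/O page; everything else gets zeroes.
    def mmu_off_map(vaddr16):
        if vaddr16 >= 0o160000:              # page 7
            return vaddr16 | 0o600000        # 760000-777777: the I/O page
        return vaddr16                       # pages 0-6 map straight through

    assert mmu_off_map(0o177560) == 0o777560     # console DL11 CSR ends up on the I/O page
    assert mmu_off_map(0o001000) == 0o001000
    # (On 22-bit machines the I/O page sits at 17760000-17777777 instead.)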

paul


