[cctalk] SMD/ESDI emulator progress report

2024-03-14 Thread Guy Sotomayor via cctalk

Hi,

I just wanted to provide a bit of a progress report on the SMD/ESDI 
emulator project.


Now that I'm retired I have a bit more time to actually work on it.  
Previously I was just doing a bunch of research and writing notes on the 
design.  I now have a solid design and I'm starting on the implementation.


I'm going to list some of the design goals and then sketch out a few of 
the major issues and how they're being addressed.


Goals:

 * Emulate any drive geometry
 * Emulate a drive's performance characteristics
 * Work across different interface types
 * Fully built emulator cost below $500

Major Issues:

 * SMD/ESDI have head switch times < 10 microseconds (basically the
   time it takes for the read amplifiers to settle on a "real" drive). 
   Solving this issue drives the majority of the design
 * Address marks on a "real" drive are implemented by writing a DC
   signal on the track and the read circuitry detects that and
   generates the address mark signal

When looking at the specifications for SMD and ESDI disks, there really 
isn't a lot of difference in how the drives behave.  The interfaces 
differ in detail but in reality the differences are pretty minor.  So 
the current design should allow for 95+% code sharing between the SMD 
and ESDI emulators.


To solve the head-switch performance problem, it is necessary to have an 
entire cylinder in some sort of RAM.  This allows for very fast head 
switch times (the selected head just addresses a particular portion of 
the RAM).  However, this means that loading a cylinder (which in some 
cases could be as much as 1MB) could take considerable time.  It will 
take even longer if some of the tracks in the cylinder are "dirty" due 
to their having been written prior to the seek.


Since I want the emulator to be able to faithfully emulate drives in all 
respects, the limiting factor is the cylinder-to-cylinder seek time 
(i.e. moving between adjacent cylinders).  This is typically in the 
4-8ms range.  So doing the math, one must move 1MB in 4ms (that turns 
out to be ~250MB/sec of bandwidth; using 32-bit transfers, this means 
over 60M transfers/sec).
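Spelled out, the math above works out like this (a quick sanity-check sketch in C; the 1MB cylinder and 4ms seek are the worst-case figures from the text):

```c
#include <assert.h>

/* Required sustained bandwidth (bytes/sec) to load one cylinder
 * during a single adjacent-cylinder seek. */
double required_bandwidth(double cylinder_bytes, double seek_sec) {
    return cylinder_bytes / seek_sec;
}

/* The same requirement expressed as 32-bit (4-byte) transfers/sec. */
double required_transfers(double cylinder_bytes, double seek_sec) {
    return required_bandwidth(cylinder_bytes, seek_sec) / 4.0;
}
```

For a 1MB cylinder and a 4ms seek this gives ~262MB/s and ~65.5M transfers/sec, comfortably above the ~250MB/s and 60M transfers/sec quoted above.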


The above implies that the cylinder RAM and the storage holding the 
cylinders of the image must be capable of at least 60M transfers/sec 
between them.  This is going to involve a complex FPGA that has 
large internal RAMs and a direct connection to some sort of DRAM to 
hold the full image.  I've chosen to use a SOM (System-On-Module) 
version of the Xilinx Zynq 7020.  This has dual 32-bit ARM cores (plus a 
whole bunch of peripherals), 32-bit DDR3 memory interface, plus a fairly 
large FPGA with enough block RAM to contain the maximum cylinder.  The 
calculations I've done should allow a new cylinder to be loaded from 
DRAM into the cylinder RAM in 4-8ms (I think with a few tricks I can 
keep it in the lower range).


I've looked at quite a few Zynq SOMs (and have acquired quite a few for 
evaluation purposes).  I finally found one that's ~$200 (most of the 
others are in the $400-$1000+ range).  This SOM brings out most of the 
Zynq's I/Os (94 I/Os) in addition to having ethernet, USB, serial, SD 
card, etc. as well as 1GB of 32-bit DDR3 DRAM.  It also runs Linux, 
which means that developing the SW is fairly straightforward.


The next issue was how to emulate address marks.  The emulated drive 
will have a bit clock, which is necessary for clocking the data out when 
reading (or in when writing).  The bit clock is always running (just 
like a "real" drive when spinning).  That will drive a counter (which 
represents which bit is under the emulated "head"); that counter (along 
with the head number) will be used to address the block RAM.  The 
counter is always running, so as to emulate the spinning disk.  The 
address marks are emulated by having a series of comparators (one for 
each possible sector).  Each compares the bit counter with the value 
associated with the comparator; if there's a match, that signals an 
address mark.  It's a bit more complicated than that because writing 
address marks (in the case of soft sectors) also has to be dealt with.
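A software model of that comparator scheme might look like this (a C sketch; the sector positions and counts are hypothetical, and the real thing would of course be RTL):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_SECTORS 32

/* One emulated track: a free-running bit counter is compared against
 * one preloaded value per sector; a match raises the address mark. */
typedef struct {
    unsigned am_position[MAX_SECTORS]; /* bit offset of each address mark */
    unsigned num_sectors;
    unsigned track_bits;               /* bits per track; counter wraps here */
} track_format_t;

/* One "bit clock" tick: advance the counter (emulating rotation) and
 * report whether any comparator matched, i.e. an address mark is under
 * the emulated head. */
bool tick(const track_format_t *fmt, unsigned *bit_counter) {
    *bit_counter = (*bit_counter + 1) % fmt->track_bits;
    for (unsigned i = 0; i < fmt->num_sectors; i++)
        if (*bit_counter == fmt->am_position[i])
            return true;
    return false;
}
```

In the FPGA the loop becomes parallel comparators, one per possible sector, all evaluated every bit clock.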


The emulator is composed of 4 major components:

1. Emulator application plus some utilities
   I'm currently writing all of this code...since I've been a SW
   engineer for 50+ years, this is all "production" quality code and is
   extremely well documented...still a ways to go.
2. Linux device driver which provides the interface between the
   emulator application and the emulator HW
   I haven't started on the driver yet, but it should be fairly
   straightforward as it really only exposes the emulator HW to the
   emulator application
3. Emulator HW RTL
   I haven't started on this other than to do some basic blocks of
   what's here.  It is mainly the cylinder RAM, serdes (I *may* be able
   to finesse this by having 32-bits on the AXI bus and 1 bit on the
   interface side...a nice 

[cctalk] Re: Getting floppy images to/from real floppy disks.

2023-05-25 Thread Guy Sotomayor via cctalk



On 5/25/23 13:21, Paul Koning via cctalk wrote:



On May 25, 2023, at 3:30 PM, Chuck Guzis via cctalk  
wrote:

On 5/25/23 10:06, Guy Sotomayor via cctalk wrote:

The way SPARK works is that you have code and then can also provide
proofs for the code.  Proofs, as you might expect, are *hard* to write
and in many cases are *huge* relative to the actual code (at least if
you want a platinum-level proof).

...and we still get gems like the Boeing 737MAX...

--Chuck

Yes.  The problem is the gap between informal understanding and formal 
description.  For many programmers, that gap occurs when the program source is 
created.  If the programs are subjected to formal proofs, the gap occurs when 
the formal specs are written.

So such things are largely a non-solution.  They may help a little if the gap 
to the formal spec is smaller.  If, as Guy is saying, the formal spec is larger 
than the code, then obviously that won't be the case.


In our particular case, we spend about 10x more time developing all of 
the "safety" collateral (requirement docs, architecture docs, design 
docs, etc.) than actually writing, debugging, and testing the code.


Part of the problem is that most of the automotive safety standards were 
developed for fairly simple use cases (thousands to a few tens of 
thousands of lines of code).  In our particular case, we're looking at 
10's of millions of lines of code, and we've discovered that a lot of 
the processes specified by the standards do not scale well to that level 
of code.  :-/



Languages other than C and C++ have advantages in that they detect, or avoid, a 
whole lot of bugs that C/C++ ignore, like bounds violations or memory leaks.  
So Ada can be helpful in that some bugs are harder or impossible to create, or 
more likely to be detected in testing.  But, in spite of having taken a very 
interesting week-long course on program proofs by pioneer E.W. Dijkstra, I 
don't actually believe in those things.
I don't either.  ;-)  Proofs are *hard* and take a special way of 
thinking about the problem.  For example, prove that a doubly linked 
list points only to elements allowed in the linked list (e.g. things 
that have only been placed on the list) and that the forward and 
backward pointers actually point to the elements they're supposed 
to...and that's one of the simpler things that needs to be proved.  It 
gets *really* interesting when you try to prove that the scheduler is 
actually scheduling the way it's supposed to.  :-/
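For a flavor of that invariant, here it is written as a runtime check in C rather than a formal proof (a SPARK proof would establish this statically over all possible lists; this is just the informal, testable version):

```c
#include <assert.h>
#include <stddef.h>

/* A minimal doubly linked list node. */
typedef struct node {
    struct node *prev, *next;
} node_t;

/* Returns 1 if, for every node reachable from head, the forward and
 * backward pointers are mutually consistent (each neighbour points
 * back at the node), else 0. */
int list_invariant_holds(const node_t *head) {
    for (const node_t *n = head; n != NULL; n = n->next) {
        if (n->next && n->next->prev != n) return 0;
        if (n->prev && n->prev->next != n) return 0;
    }
    return 1;
}
```

Note that even this check says nothing about the "only elements placed on the list" half of the property, which is precisely the part that is hard to state without proof machinery.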


The 737MAX is a classic example of designers turning off their brains before 
doing their work.  It is obvious even to me (who have never created 
safety-sensitive software) that you don't attach systems with single points of 
failure such as non-replicated sensors to a control system whose specific 
purpose is to point the airplane nose DOWN.  If you do your work with your 
brain disabled you can't produce correct software, with or without formal 
proofs.
Yes, in self-driving cars we do "sensor fusion", which allows us to 
derive (and validate/replicate) data from various sensors.  For example, 
we use cameras, LIDAR, etc. to validate each other's data.  The point is 
to not have a "single point of failure".


--
TTFN - Guy



[cctalk] Re: Getting floppy images to/from real floppy disks.

2023-05-25 Thread Guy Sotomayor via cctalk



On 5/25/23 10:00, Chuck Guzis via cctalk wrote:

On 5/25/23 08:58, Guy Sotomayor via cctalk wrote:

Ada and SPARK (a stripped-down version of Ada) are used heavily in
embedded systems that have to be "safety certified".  SPARK also allows
the code to be "proven" (as in you can write formal proofs to ensure
that the code does what you say it does).  Ask me how I know.  ;-)

I was aware of Ada's requirements in the defense- and aerospace-related
industry.  Is that where your experience lies?  Is SPARK the "magic
bullet" that's been searched for decades to write provably correct code?


I'm familiar with it from the higher end automotive perspective 
(self-driving cars).  Even when using C/C++ we have *lots* of standards 
that we have to adhere to (MISRA, CERT-C, ISO-26262, etc).


The way SPARK works is that you have code and then can also provide 
proofs for the code.  Proofs, as you might expect, are *hard* to write 
and in many cases are *huge* relative to the actual code (at least if 
you want a platinum-level proof).


--
TTFN - Guy



[cctalk] Re: Getting floppy images to/from real floppy disks.

2023-05-25 Thread Guy Sotomayor via cctalk



On 5/25/23 07:55, Chuck Guzis via cctalk wrote:

On 5/25/23 04:52, Tony Duell via cctalk wrote:

For the programming language, I stick with C, not C++, not Python and
plain old makefiles--that's what the support libraries are written in.
I don't use an IDE, lest I become reliant on one--a text editor will do.
I document the heck out of code.  Over the 50 or so years that I've been
cranking out gibberish, it's nice to go back to code that I wrote 30 or
40 years ago and still be able to read it.


That's basically what I do too.  It's too easy to get stuck with an 
unsupported environment.  A text editor and makefiles mean that I can 
(generally) port my code over to any new environment fairly easily.




I'm all too aware of the changing trends in the industry--and how
quickly they can change.  I remember when there was a push in embedded
coding not long ago to use Ada--where is that today?
Ada and SPARK (a stripped-down version of Ada) are used heavily in 
embedded systems that have to be "safety certified".  SPARK also allows 
the code to be "proven" (as in you can write formal proofs to ensure 
that the code does what you say it does).  Ask me how I know.  ;-)


--
TTFN - Guy



Re: DEC OSF/1 for i386?

2022-04-29 Thread Guy Sotomayor via cctalk
I knew folks who worked on A/UX at Apple, but I don't have any details 
about its internals.


TTFN - Guy

On 4/29/22 11:40, Cameron Kaiser via cctalk wrote:

but I know at IBM we had 2 principal "ports" that we maintained (PPC


Did this have anything to do with Apple's alleged "A/UX for PowerPC" which was
supposedly OSF/1 based?


--
TTFN - Guy



Re: DEC OSF/1 for i386?

2022-04-29 Thread Guy Sotomayor via cctalk
I was at IBM when OSF (and subsequently OSF/1) was created and had a lot 
of discussions with OSF at that time.  At IBM I was working on the IBM 
Microkernel.  OSF/1 also used Mach (but a different source base) as the 
kernel.  The big effort was to keep the APIs and documentation 
"similar".  We had huge arguments about RPC and I think that's the area 
that we didn't converge which I think made the whole thing pointless 
since the IPC/RPC was one of the main points of Mach.  :-/


I don't know what DEC did in terms of their OSF/1 product, but I know at 
IBM we had 2 principal "ports" that we maintained (PPC & x86) as well as 
a few others (MIPS, StrongARM, and 68K being the other ones as I recall) 
that we "kept alive".


TTFN - Guy

On 4/29/22 07:45, Dennis Grevenstein via cctech wrote:

Hi,

just recently I found this archive:

https://vetusware.com/download/OSF1%20Source%20Code%201.10/?id=11574

this is a package of source code for DEC OSF/1 V 1.0. I knew that this is
supposed to run on DECstations (with MIPS), in fact I have a DS3100
running it myself.
However, one thing really puzzled me: This archive apparently includes
support for i386. There is even a kernel build log from 1990.
Now that was news to me. I never realized that this worked on i386.
Can anybody here tell any stories about this?

regards,
Dennis


--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-20 Thread Guy Sotomayor via cctalk
I'm using Zynq SOMs (System on a Module) that will plug into a "base 
board" (with Hirose connectors).  It is the base board that will have 
the "personality" of the emulator: level shifters, a small bit of logic, 
and the drive interface transceivers.  So the base board is fairly 
simple (I think I have an early version in KiCad...but it needs updating).


I'm trying to use as much as I can from the free libraries so I'm trying 
to keep stuff as simple as possible from a logic design perspective.  
Since I already have everything (in multiples) except for the base 
board, the cost to me is time at this point (which I don't have a lot of 
at the moment).


I also didn't want to get into doing any design with BGAs (at least 
where I need to worry about it) hence the decision to go with SOMs.  
With those, the SOM has the Zynq FPGA, flash, DRAM, etc (including the 
critical VRs and clocks).  All I need to provide is 3.3v.  ;-)


I should be able to dig up the docs.  Many are already on bitsavers.  
Let me know what you can't find on Bitsavers.


TTFN - Guy

On 4/20/22 11:22, shad via cctech wrote:

Guy,
I agree that accessing data in blockram (synchronous with fixed latency) is 
really easier than accessing it from RAM (asynchronous with variable latency).
Anyway I'm weighing the "cost" of additional complexity, which in exchange would 
allow saving on Zynq cost.
In any case memory access is never sequential, but a sequence of bursts with 
shorter length (16 beats or less).
Considering this, the fact of starting or ending a sequential transfer is just 
a matter of generating addresses multiple of burst length. For this however you 
have to forget about Xilinx's free IP cores, and work directly with AXI3 bus of 
HP ports.

As I would have to invest a large amount of time and money, it would be nice 
to have somebody interested in buying a working and assembled kit at a moderate 
price gain, so as to repay part of the investment.
This however drives toward bottom-end FPGAs, with a very limited amount of internal 
memory... whence the memory-sparing design.

About documentation: you mentioned several documents about SMD/ESDI standards 
and related details.
Would you mind sharing this collection?

Many thanks.

Andrea


--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-19 Thread Guy Sotomayor via cctalk
It's not really fast enough, and you'll get into all sorts of 
complications once you start to think about trying to keep up with the 
emulated rotation.  For example, if someone starts a read halfway 
through a rotation (e.g. after the index pulse), now you have to have 
logic/code that can start/stop the transfer in random places.  The way 
that I have it designed, it's all sequential (no random starts or 
lengths) and it's all done during a seek, while the data isn't being 
clocked out.


The Zynq-7020 (which is my low-end design) has 4.9Mb of block RAM (in 
140 36Kb blocks).  In the cylinders I actually use 9 bits per byte, as I 
need an escape in order to encode some other data.  ;-)  With that it can 
hold the 512KB needed with some to spare.  My high-end design will use 
the Zynq UltraScale+ (ZU3CG), which has 7.6Mb of block RAM (in 216 36Kb 
blocks).  If I go to the next higher version (ZU4CG), the block RAM goes 
down to 4.5Mb (in 128 36Kb blocks) but gains 13.5Mb of "UltraRAM", which 
should allow for any reasonable cylinder buffering.
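As a sanity check on the 9-bits-per-byte encoding (a quick C sketch; the figures are the ones above):

```c
#include <assert.h>

/* Bits of block RAM needed to hold a cylinder when each byte is
 * stored as 9 bits (8 data bits plus the escape bit). */
unsigned long cylinder_bits(unsigned long cylinder_bytes) {
    return cylinder_bytes * 9UL;
}
```

A 512KB cylinder needs 512*1024*9 = 4,718,592 bits (~4.6Mb), which fits in the 7020's 140*36Kb (~4.9Mb) with roughly 430Kb to spare.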


Of course, I'm just describing my design and design requirements.  First 
and foremost I wanted a simple HW & SW design that could provide 
accurate drive timings (e.g. I could faithfully reproduce the timings of 
any particular drive) so as to maximize the compatibility with any 
controller (and I have some weird ones).


I've been poring over ANSI specs, controller specs, and drive specs for 
SMD/ESDI for a few years now and have thought about a number of 
different ways to do this; what I've described is what I've come up with.


You may have different goals which may drive you to make different 
choices/decisions.


TTFN - Guy

On 4/19/22 11:49, shad via cctalk wrote:

Guy,
I understand that the cylinder command has no particular timing requirements, while 
the head command must be effective within microseconds. My doubt is whether RAM 
access on the high-performance port could be fast enough to satisfy the latter as well.
In case it couldn't, or wasn't assured, I think the best strategy could be to 
preload only a small block of data for each head, for a prompt start on the head 
command; enough to safely cover the RAM access latency.
Each block would also work as a buffer for data from subsequent RAM accesses, until 
the whole cylinder had been processed.
This strategy would remove the strict block-RAM capacity requirement for the 
Zynq, and given that bigger models cost a lot, it would be significant savings 
for anybody.
Furthermore, support for any hypothetical disk with bigger cylinders (not SMD) or for tape 
with very large blocks or "infinite" streams might not be feasible with the 
whole-cylinder design. I would prefer to avoid any such limitation, so as to possibly 
reuse the same data transfer modules for any media.

Andrea


--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-19 Thread Guy Sotomayor via cctalk
The problem is that you don't get the cylinder and head information in 
the same command (they are 2 different commands).  So when you're doing a 
seek, you don't know which track(s) to prioritize.  That is why during a 
seek command I will transfer the entire cylinder, so when the head 
command arrives it can be handled quickly.  That's the only way I could 
think of to ensure maximum compatibility with the controllers (e.g. I 
can provide identical timings to an actual drive...you never really know 
what assumptions a particular controller might have).
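The "handled quickly" part falls out of the addressing: once the cylinder is in block RAM, a head switch is just a change in the high-order address bits. A sketch in C (the track size is hypothetical):

```c
#include <assert.h>

#define TRACK_BYTES (32u * 1024u)  /* bytes per track (illustrative) */

/* Byte address into the cylinder RAM for a given head and rotational
 * position.  A head switch changes only `head`; no data moves. */
unsigned cyl_ram_addr(unsigned head, unsigned byte_in_track) {
    return head * TRACK_BYTES + byte_in_track;
}
```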


TTFN - Guy

On 4/18/22 10:26, shad via cctalk wrote:

Guy,
I agree on keeping Linux out of the loop, to allow fast access on head 
location, selection.
However, I'm not convinced on the fact that a whole cylinder must be on 
blockram to achieve this. Given that ram access is fast (on Zynq with PL 
working at 200MHz and HP port at 64bits I'm running at around 1200MB/s peak), 
logic can jump across the whole disk without the software intervention, it's 
just a matter of being able to calculate conversion from CHS to address and 
read with sufficient buffer.
Probably using Xilinx IP cores could be a severe limit, as these are really 
full of bugs and inefficient implementations... but are free, so you can't 
argue.

On the software side, given that you can also go slow, there's no need for very 
complex driver development; just a user-level UIO driver could make do.
About language, I know VHDL very well, and it's a little bit higher level 
than Verilog, so development with implementation parameters is maybe a little 
easier.

About interfaces which don't have separate clock recovery: these need a sort 
of oversampling, but you don't need to store every sample, just the ones with a 
state change.  Leveraging the IOSERDES you can work at a multiple of the internal 
clock.

Please keep in consideration that the idea is to develop a single device that 
can work both as drive and as interface, so implementation should be reversible.
Probably this is not very difficult to obtain, as fast data paths for read and 
write are already in opposite directions.

Andrea




--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-17 Thread Guy Sotomayor via cctalk
I have proceeded as far as full block diagrams (still have to write all 
of the verilog) and basic SW architecture.  This is why I've had this 
discussion.  I've thought about this *a lot* and have gone through 
several iterations of what will or will not work given timing constraints.


I have all of the components for putting a prototype together but I just 
haven't had the time yet to write the verilog, the Linux device driver 
and the "personality board".  That is, there is still a lot to do.  ;-)


Some requirements that I've put on my design:

 * straightforward SW architecture
 * SW is *not* time critical (that is I didn't want SW in the critical
   path of keeping the data stream)
 * Must be able to emulate any SMD/ESDI drive
 * Must be able to match performance of the drive (or better)
 * Must be able to work with any controller (ESDI or SMD...depending
   upon interface)

With those in mind, that's how I came up with my design.

I found that the Zynq has sufficient Block RAM to contain a full 
cylinder of 512KB.  I'm keeping a full cylinder because that allows 
everything to be done in verilog except for seeks (see SW not being 
required to be in the critical path).  If I didn't do that, then SW 
would have to be involved in some aspects of head switch, etc and those 
can have tight (<< 100us) latencies and I just didn't want to try and 
get Linux to handle that.  Yes, I could use some form of RTOS (I'm 
actually in the middle of writing one...but that's still a ways away) 
but I don't see any that are really up to what I need/want to do for 
this project.


BTW, I'm basing my initial implementation on the Zynq 7020 which has 1GB 
of DRAM.  However, I'm also planning on a "bigger/better" one based upon 
the Zynq Ultrascale+ which has 4GB of DRAM so that I can support 
multiple/larger drives.


The amount required by Linux doesn't have to be large...I plan on having 
the KMD just allocate a really big buffer (e.g. sufficient for 
containing the entire drive image).  Linux will run happily in 
128MB-256MB since there won't be any GUI.  It could be significantly 
less if I were to strip out everything that isn't needed by the kernel 
and only have a basic shell for booting/debug.  My plan is to have the 
emulated drive data and the configuration file on the SD card...so 
there's no real user interaction necessary (and Linux would not be on 
the SD card but on the embedded flash on the Zynq module).


TTFN - Guy

On 4/17/22 10:28, shad via cctech wrote:

hello,
there's much discussion about the right  method to transfer data in and out.
Of course there are several methods, the right one must be carefully chosen 
after some review of all the disk interfaces that must be supported. The idea 
of having a copy of the whole disk in RAM is OK, assuming that a maximum size 
of around 512MB is required, as the RAM is also needed for the OS, and for Zynq 
maximum is 1GB.
About logic implementation, we know that the device must be able to work with 
one cylinder at a time. Given RAM bandwidth, this doesn't means that it must 
fit completely in blockram, also it can be produced at output while it is read, 
so delay time is really the time between first data request and actual read 
response. In between an elastic FIFO is required to adapt synchronous constant 
rate transfer of the disk to the burst transfer toward RAM.

Guy, you mentioned about development of a similar interface.
So you already produced some working hardware?

Andrea


--
TTFN - Guy


Re: idea for a universal disk interface

2022-04-17 Thread Guy Sotomayor via cctalk
I chose ESDI and SMD fundamentally because the interface is 100% digital 
(e.g. the data/clock separator is in the drive itself). So I don't need 
to do any oversampling.


TTFN - Guy

On 4/17/22 11:12, Paul Koning via cctalk wrote:



On Apr 17, 2022, at 1:28 PM, shad via cctalk  wrote:

hello,
there's much discussion about the right  method to transfer data in and out.
Of course there are several methods, the right one must be carefully chosen 
after some review of all the disk interfaces that must be supported. The idea 
of having a copy of the whole disk in RAM is OK, assuming that a maximum size 
of around 512MB is required, as the RAM is also needed for the OS, and for Zynq 
maximum is 1GB.

For reading a disk, an attractive approach is to do a high speed analog capture 
of the waveforms.  That way you don't need a priori knowledge of the encoding, 
and it also allows you to use sophisticated algorithms (DSP, digital filtering, 
etc.) to recover marginal media.  A number of old tape recovery projects have 
used this approach.  For disk you have to go faster if you use an existing 
drive, but the numbers are perfectly manageable with modern hardware.

If you use this technique, you do generate a whole lot more data than the 
formatted capacity of the drive; 10x to 100x or so.  Throw in another order of 
magnitude if you step across the surface in small increments to avoid having to 
identify the track centerline in advance -- again, somewhat like the tape 
recovery machines that use a 36 track head to read 7 or 9 or 10 track tapes.

Fred mentioned how life gets hard if you don't have a drive.  I'm wondering how difficult 
it would be to build a usable "spin table", basically an accurate spindle that 
will accept the pack to be recovered and that will rotate at a modest speed, with a head 
positioner that can accurately position a read head along the surface.  One head would 
suffice, RAMAC fashion.  For slow rotation you'd want an MR head, and perhaps supplied 
air to float the head off the surface.  Perhaps a scheme like this with slow rotation 
could allow for recovery of much of the data on a platter that suffered a head crash, 
because you could spin it slowly enough that either the head doesn't touch the scratched 
areas, or touches them slowly enough that no further damage results.

paul



--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-17 Thread Guy Sotomayor via cctalk
I think the issue is that you're thinking of somehow emulating the 
formatted data.  I'm working on just emulating the bit-stream as then 
it'll work with any controller and sector/track layout so I won't 
actually know what a sector really is (unless I do "hard sectoring" 
which some drives did support).


At a 15MHz clock rate, 30 bytes is ~16us.  Not a lot of time.  And 
frankly, that's defined by the controller and not the drive (though 
usually the drives specify some layout, that's only a 
recommendation).  Dealing with drive speed variations doesn't solve 
anything because it's actually handled by the drive itself (e.g. the 
drive provides the clock to the controller, so any variation is already 
accounted for).  The drive really only cares about total bits (e.g. 
bits-per-inch) that the media supports.


If we assume a 32KB track at a 500MB/s DMA transfer rate, the transfer 
takes 65us.  But as I've said, the spec says that the time between a 
head select and a read is 15us or so, so you can see that you can't just 
transfer a track and still meet the minimum timings.  I will agree that 
you can probably take longer, but I'm trying to have a design that can 
meet all of the minimum timings so I can emulate any drive/controller 
combination with at least the same performance as a real drive (and in 
many cases I can provide *much* higher performance).
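The timing claim can be checked directly (C sketch; figures from the text above):

```c
#include <assert.h>

/* Microseconds needed to DMA one track at a given byte rate. */
double track_transfer_us(double track_bytes, double bytes_per_sec) {
    return track_bytes / bytes_per_sec * 1e6;
}
```

32KB at 500MB/s comes out to ~65.5us, more than four times the ~15us head-select-to-read budget, which is what forces the whole cylinder into block RAM ahead of time.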


By keeping a full cylinder in the FPGA Block RAM I can keep the head 
select time < 1us (it's basically just selecting the high order address 
bits going to the block RAM).


By keeping the entire disk image in DRAM, I can emulate any drive (that 
fits in the DRAM) with identical (or faster) performance. If I wanted to 
do something simpler (not much though) I could have a smaller DRAM (but 
since the Zynq modules I'm using have 1GB or 4GB of DRAM there isn't 
much motivation) but then any seek would be limited by access to the 
backing store.  Also remember, in the worst case you have to write the 
previous track out if it was written to so that will slow things down as 
well.  With the full image maintained in DRAM, any writes can be 
performed in a lazy manner in the background so that won't impact the 
performance of the emulated drive.


TTFN - Guy

On 4/16/22 14:32, Tom Gardner wrote:

-Original Message-----
From: Guy Sotomayor [mailto:g...@shiresoft.com]
Sent: Friday, April 15, 2022 3:25 PM
To: t.gard...@computer.org; cct...@classiccmp.org
Subject: Re: idea for a universal disk interface

I'm looking at what the spec says.  ;-)  The read command delay from the head 
set command is 15us (so I was wrong) but still not a lot of time (that is after 
a head set, a read command must be at least 15us later).



-
And after the read command is given there is a gap, usually all zeros, at the 
end of which is a sync byte which is then followed by the first good data (or 
header) byte.  In SMD the gaps can be  20 or 30 bytes long so there is quite a 
bit of time until good data.

Tom



--
TTFN - Guy



Re: idea for a universal disk interface

2022-04-15 Thread Guy Sotomayor via cctalk
I'm looking at what the spec says.  ;-)  The read command delay from the 
head set command is 15us (so I was wrong) but still not a lot of time 
(that is after a head set, a read command must be at least 15us later).


Since I'm not looking at formatted data rate (just handling the raw bit 
stream) it doesn't really matter what the formatted rate is...and the 
formatted data is different between different controllers, so I don't 
want to even try to do that on the fly (and they might do tricks where 
different tracks/cylinders have different formats).


If someone wants the "formatted" data, then I'd let them post-process 
that off the captured data.


As I said, I'm trying to do this with fairly simple logic and low cost 
storage (as such this isn't going to be particularly cheap).  I don't want 
to add another $100+ to the cost just to have a high performance drive 
when the HW is capable of doing a suitable job with a $10 SD card.


In reality an SD card (from a storage perspective) is way overkill.  
We're talking about emulating drives with capacities < 1GB and good 
quality SD cards contain 32GB for $10 or so.


TTFN - Guy

On 4/15/22 12:12, Tom Gardner wrote:


I haven't looked it up but I bet the head switch time is a lot longer 
than 1-2 usec - that's what the leading gap is for and the sync took 
most of the gap back in those days.


The issue is sustained data rate isn't it?  The ESMD raw data rate is 
24 Mb/s but the formatted data is something like 80% of that or maybe 
2.5 MB/sec.  A modern HDD in sequential mode can sustain a much higher 
rate, e.g. Seagate SAS 
<https://www.seagate.com/files/www-content/solutions/mach-2-multi-actuator-hard-drive/files/tp714-dot-2-2006us-mach-2-technology-paper.pdf> 
at 520 MB/sec.  My understanding is that the sectors are slipped 
and/or cylinders are horizontal so that head switching doesn't lose 
any revolutions.  Maybe one would run into a problem at the moment of a 
cylinder seek, so one might have to keep each full emulated cylinder on 
the modern drive's cylinder; but with terabytes of data on a modern 
drive, who cares about some wasted storage?


Tom

-Original Message-
From: Guy Sotomayor [mailto:g...@shiresoft.com]
Sent: Friday, April 15, 2022 10:56 AM
To: t.gard...@computer.org; cct...@classiccmp.org
Subject: Re: idea for a universal disk interface

I ran the numbers for Zynq FPGAs.  First of all for ESDI and SMD the 
head switch time is 1-2us (basically the time it takes for the clocks 
to re-lock on the new data).


Two tracks isn't sufficient (whichever track you pick as the "other" 
one, you will often guess wrong).


So I decided to go and have a full cylinder (I'm allowing for up to 
32KB tracks and up to 16 heads) which is 512KB.  The Zynq DMA from HW 
block RAM to DRAM (at 500MB/s) is ~1ms.  Given that the previous 
cylinder could be dirty (e.g. has written data), the worst case seek 
time is ~2ms.  This allows me to emulate any seek latency curve(s) I want.


In my design, any dirty data is written back to storage in a lazy 
manner so the performance of the storage isn't really an issue.


I should note that the Zynq 7020 module has 1GB of DRAM on it, so 
there is no additional cost to just put the entire disk contents in 
DRAM, and I'm using the attached SD Card interface for storage (so you 
can use a $10 SD Card for storage).  Adding a high speed disk 
interface (e.g. M.2, PCIe, or other serially attached storage) would 
add additional cost in terms of having to create the interface as well 
as a reasonably fast drive, and I don't see the advantage.


I'm planning on using a Zynq UltraScale+ module to allow for larger 
disks and multiple disk emulations (it has more block RAM and 4GB of 
DRAM on the module).


TTFN - Guy

On 4/14/22 23:34, Tom Gardner wrote:

> I suggest if we are talking about an emulator it really isn't 
necessary to have the entire disk in DRAM, two tracks of DRAM acting 
as a buffer with a modern HDD holding the emulated drive's data should 
be fast enough to keep any old iron controller operating without 
missing any revolutions.  The maximum unformatted track length of any 
old iron drive is well known and therefore one can allocate the number 
of blocks sufficient to store a full track and then write every track, 
gaps and all to the modern disk.  Given the data rate, track size and 
sequential seek times of a modern HDD one should be able to fill the 
next track buffer before the current track buffer is read into the 
controller.  If two track buffers and an HDD isn't fast enough then 
one could add a track buffer or two or go to SSD's.


>

> This was the approach IBM used in its first RAMAC RAID where I think

> they had to buffer a whole cylinder but that was many generations ago

>

> Tom

>

> -Original Message-

> From: Guy Sotomayor [mailto:g...@shiresoft.com 
<mailto:g...@shiresoft.com>]


> Sent: Wednesday, April 13, 2022 10:02 AM

> To: cct..

Re: idea for a universal disk interface

2022-04-15 Thread Guy Sotomayor via cctalk
I ran the numbers for Zynq FPGAs.  First of all for ESDI and SMD the 
head switch time is 1-2us (basically the time it takes for the clocks to 
re-lock on the new data).


Two tracks isn't sufficient (whichever track you pick as the "other" 
one, you will often guess wrong).


So I decided to go and have a full cylinder (I'm allowing for up to 32KB 
tracks and up to 16 heads) which is 512KB.  The Zynq DMA from HW block 
RAM to DRAM (at 500MB/s) is ~1ms.  Given that the previous cylinder 
could be dirty (e.g. has written data), the worst case seek time is 
~2ms.  This allows me to emulate any seek latency curve(s) I want.
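The arithmetic above checks out; a quick back-of-envelope sketch using the 512KB cylinder and 500MB/s DMA figures from the message:

```python
# Worst-case seek time: write back the dirty previous cylinder,
# then load the new one.  Figures from the message above.

CYL_BYTES = 512 * 1024      # 32 KB/track * 16 heads
DMA_RATE = 500e6            # bytes/sec, block RAM <-> DRAM DMA

load_ms = CYL_BYTES / DMA_RATE * 1e3          # one cylinder transfer
worst_case_ms = 2 * load_ms                   # dirty write-back + load
print(f"load: {load_ms:.2f} ms, worst-case seek: {worst_case_ms:.2f} ms")
```

Since ~2ms is well under real SMD/ESDI track-to-track seek times, there is slack left over to shape any seek-latency curve on top of it.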


In my design, any dirty data is written back to storage in a lazy manner 
so the performance of the storage isn't really an issue.


I should note that the Zynq 7020 module has 1GB of DRAM on it, so there 
is no additional cost to just put the entire disk contents in DRAM and 
I'm using the attached SD Card interface for storage (so you can use a 
$10 SD Card for storage).  Adding a high speed disk interface (e.g. 
M.2, PCIe, or other serially attached storage) would add additional 
cost in terms of having to create the interface as well as a reasonably 
fast drive and I don't see the advantage.


I'm planning on using a Zynq UltraScale+ module to allow for larger 
disks and multiple disk emulations (it has more block RAM and 4GB of 
DRAM on the module).


TTFN - Guy

On 4/14/22 23:34, Tom Gardner wrote:

I suggest if we are talking about an emulator it really isn't necessary to have 
the entire disk in DRAM, two tracks of DRAM acting as a buffer with a modern 
HDD holding the emulated drive's data should be fast enough to keep any old 
iron controller operating without missing any revolutions.  The maximum 
unformatted track length of any old iron drive is well known and therefore one 
can allocate the number of blocks sufficient to store a full track and then 
write every track, gaps and all to the modern disk.  Given the data rate, track 
size and sequential seek times of a modern HDD one should be able to fill the 
next track buffer before the current track buffer is read into the controller.  
If two track buffers and an HDD isn't fast enough then one could add a track 
buffer or two or go to SSD's.

This was the approach IBM used in its first RAMAC RAID where I think they had 
to buffer a whole cylinder, but that was many generations ago.

Tom

-Original Message-----
From: Guy Sotomayor [mailto:g...@shiresoft.com]
Sent: Wednesday, April 13, 2022 10:02 AM
To: cct...@classiccmp.org
Subject: Re: idea for a universal disk interface

I've had a similar project in the works for a while (mainly for ESDI and SMD).

I think the main issue you're going to face is that what you need to do for 
something like ESDI or SMD (or any of the bit serial interfaces) is going to be 
radically different than something like IDE or SCSI.  This is not just the 
interface signals but also what's needed in the FPGA as well as the embedded SW.

For example, for the ESDI and SMD interface in order to meet the head switch 
times (1-2 microseconds) requires that a full cylinder be cached in HW.  Once 
you do that and look at the timings to move a max cylinder between the HW cache 
(that will serialize/de-serialize the data over the
interface) and storage, you'll see that the only way to have any reasonable 
performance (e.g. not have seek times be > 40ms for *any*
seek) is to cache the entire drive image in DRAM and lazily write back dirty 
tracks.

I've been looking at the Xilinx Zynq SoCs for this (mainly the Zynq 7020 for single drive 
emulation and the Zynq Ultrascale+ for up to 4 drives).  In my case the HW, FPGA logic 
and SW will share significant portions but they will not be identical.  In my case there 
is no need for an external PC (just adds complexity) other than something to do basic 
configuration (e.g. drive parameters such as number of heads, number of cylinders, etc) 
which will actually be over USB/serial.  The actual persistent storage will be an SD card 
since all reading will be done at "boot time" and writes will be handled in a 
lazy manner (since the writes will first go to the DRAM based upon time or seek).

It may also be sufficient for configuration purposes to have a file
(text) on the SD card that defines the configuration so no external 
interactions would be necessary.  I'm still thinking about that one.  ;-)

TTFN - Guy

On 4/12/22 22:35, shad via cctech wrote:

Hello,
I'm a decent collector of big iron, aka mini computers, mainly DEC and DG.
I'm often facing common problems with storage devices, magnetic discs and tapes 
are a little prone to give headaches after years, and replacement drives/media 
in case of a severe failure are unobtainable.
In some cases, the ability to make a dump of the media, also without a running 
computer is very important.

Whence the idea: realize a universal device, with several input/output 
interfaces, which could be used both as s

Re: idea for a universal disk interface

2022-04-13 Thread Guy Sotomayor via cctalk
I've had a similar project in the works for a while (mainly for ESDI and 
SMD).


I think the main issue you're going to face is that what you need to do 
for something like ESDI or SMD (or any of the bit serial interfaces) is 
going to be radically different than something like IDE or SCSI.  This 
is not just the interface signals but also what's needed in the FPGA as 
well as the embedded SW.


For example, for the ESDI and SMD interface in order to meet the head 
switch times (1-2 microseconds) requires that a full cylinder be cached 
in HW.  Once you do that and look at the timings to move a max cylinder 
between the HW cache (that will serialize/de-serialize the data over the 
interface) and storage, you'll see that the only way to have any 
reasonable performance (e.g. not have seek times be > 40ms for *any* 
seek) is to cache the entire drive image in DRAM and lazily write back 
dirty tracks.


I've been looking at the Xilinx Zynq SoCs for this (mainly the Zynq 7020 
for single drive emulation and the Zynq Ultrascale+ for up to 4 
drives).  In my case the HW, FPGA logic and SW will share significant 
portions but they will not be identical.  In my case there is no need 
for an external PC (just adds complexity) other than something to do 
basic configuration (e.g. drive parameters such as number of heads, 
number of cylinders, etc) which will actually be over USB/serial.  The 
actual persistent storage will be an SD card since all reading will be 
done at "boot time" and writes will be handled in a lazy manner (since 
the writes will first go to the DRAM based upon time or seek).


It may also be sufficient for configuration purposes to have a file 
(text) on the SD card that defines the configuration so no external 
interactions would be necessary.  I'm still thinking about that one.  ;-)
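A configuration file like that could be as simple as key/value text. The format, field names, and drive values below are purely hypothetical (as the message says, the actual format is still undecided):

```python
# Hypothetical drive-parameter file on the SD card.  The format and the
# example values are invented for illustration only.
CONFIG_TEXT = """
drive       = fujitsu-m2333     # label only
cylinders   = 823
heads       = 10
track_bytes = 20480
"""

def parse_config(text):
    """Parse 'key = value' lines, ignoring blanks and # comments."""
    params = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments and blanks
        if not line:
            continue
        key, _, value = line.partition("=")
        value = value.strip()
        params[key.strip()] = int(value) if value.isdigit() else value
    return params

cfg = parse_config(CONFIG_TEXT)
print(cfg["cylinders"], cfg["heads"])          # 823 10
```

Something this simple would let the emulator come up with no host PC attached at all, with USB/serial needed only for changes.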


TTFN - Guy

On 4/12/22 22:35, shad via cctech wrote:

Hello,
I'm a decent collector of big iron, aka mini computers, mainly DEC and DG.
I'm often facing common problems with storage devices, magnetic discs and tapes 
are a little prone to give headaches after years, and replacement drives/media 
in case of a severe failure are unobtainable.
In some cases, the ability to make a dump of the media, also without a running 
computer is very important.

Whence the idea: realize a universal device, with several input/output 
interfaces, which could be used both as storage emulator, to run a computer 
without real storage, and as controller emulator, to read/write a media without 
a running computer.
To reduce costs as much as possible, and to allow the better compatibility, the 
main board shall host enough electrical interfaces to support a large number of 
disc standard interfaces, ideally by exchanging only a personality adapter for 
each specific interface, i.e. connectors and few components.

There are several orders of problems:
- electrical signals, number and type (most disks employ 5V TTL or 3.3V TTL, 
some interfaces use differential mode for some faster signals?)
- logical implementation: several electrical signals are used for a specific 
interface. These must be handled with correct timings
- software implementation: the universal device shall be able to switch between 
interface modes and be controlled by a remote PC

I suppose the only way to obtain this is to employ an FPGA for the logic 
implementation of the interface, and a microprocessor running Linux to handle 
software management and data interchange with the outside world (via Ethernet). 
This means a Xilinx Zynq module, for instance.
I know there are several ready-made devices based on cheaper microcontrollers, 
but I'm sure these can't support the fast, tight timing required by hard disk 
interfaces (SMD-E runs at 24MHz).

The main board should include a large enough array of bidirectional 
transceivers, possibly with variable voltage, to support as many interfaces as 
possible, namely at least Shugart floppy, ST506 MFM/RLL, ESDI, SMD, IDE, SCSI1, 
DEC DSSI, DEC RX01/02, DG6030, and so on, to give a starting point.
The common factor determining what kind of disc interface can be supported on 
the hardware side is obviously the type of transceiver employed; for instance, 
SATA would require a differential serial channel, which might not be available.
But most old electronics are based on TTL/CMOS 5V logic, so a large variety of 
computer generations should be doable.

For the first phase, I would ask you to contribute a list of interfaces that 
would be interesting to emulate, especially if they are similar to one from my 
list.
Please send me, by email or by web link when possible, detailed documentation 
about the interface you propose, so I can check whether it is doable and what 
kind of electrical signals are needed.
Detailed information about the interfaces I listed is also appreciated, as it 
could fill in details I'm missing.

Thanks
Andrea


--
TTFN - Guy



Re: IBM 5110 (5100)

2022-03-17 Thread Guy Sotomayor via cctalk

But it has APL (you can tell by the keyboard *and* the BASIC/APL switch).

I can't say if the price is worth it for that...but having the APL ROS 
and the keytops has some value.


TTFN - Guy

On 3/17/22 17:56, Brent Hilpert via cctalk wrote:

On 2022-Mar-17, at 5:02 PM, D. Resor via cctalk wrote:

Was the computer auction in question a 5100 or a 5110?

Presently I see there is a 5110-C for sale
https://www.ebay.com/itm/294865912729

Yes, that was/is the one. So it has been relisted. It had been listed as a 5100 
or 5100-C earlier, then delisted, and I couldn't find a new listing for 
anything like a 5100 or 5110 this morning.

It might be easy to fix. Or it might have the computing potential of a doorstop.



There were also a 5110 with 8" external drives and a printer which sold:
https://www.ebay.com/itm/304377532685


--
TTFN - Guy



Re: VAX 780 on eBay

2022-01-01 Thread Guy Sotomayor via cctalk



On 1/1/22 10:40 AM, Paul Koning via cctalk wrote:



On Jan 1, 2022, at 1:12 PM, Noel Chiappa via cctalk  
wrote:

This:

https://www.ebay.com/itm/275084268137

The starting price is expensive, but probably not utterly unreasonable,
given that:

- the 780 was the first VAX, and thus historically important

- 780's are incredibly rare; this is the first one I recall seeing for sale
  in the classic computer era (versus several -11/70's, /40s, etc)

- this one appears to be reasonably complete; no idea if all the key CPU
  boards are included, but it's things like the backplane, etc (all of which
  seem to be there) which would be completely impossible to find now - if any
  boards _are_ missing, there's at least the _hope_ that they can be located
  (780 boards seem to come by every so often on eBait), since people seem to
  keep boards, not realizing that without the other bits they are useless

Interesting, but the argument for why it's not tested is implausible, which 
makes me very suspicious.  I suppose there might be a few American homes that 
have only 110 volt power, but I'm hard pressed to think of any I have ever 
seen, and that includes really old houses.


Without replacing the power controller in the 11/780, you need 208v 
3-phase to run it.  It's not impossible...nothing in the CPU actually 
*needs* 3-phase as the individual power supplies are 120v, but the 
overall maximum load is greater than a 30A 120v circuit can supply.


TTFN - Guy



Re: RC11 controller (Was: Reproduction DEC 144-lamp indicator panels)

2021-12-10 Thread Guy Sotomayor via cctalk



On 12/10/21 6:21 AM, Jay Jaeger via cctalk wrote:

On 12/9/2021 11:06 PM, Guy Sotomayor via cctalk wrote:


On 12/9/21 8:15 PM, Jay Jaeger via cctalk wrote:


One could perhaps emulate the RS64 data stream using a fast-enough 
micro, a la the MFM emulator.


Why does everyone seem to want to emulate HW like this with a micro 
when a reasonable FPGA implementation with some external FRAM would 
do the job?




1)  Because not everyone has that kind of design experience and 
capability (I do, but that is beside the point).  In such a case, 
suggesting a FPGA might cause those readers to just skip it without 
further thought, whereas suggesting a micro is less likely to have 
that effect on someone who *does* have the design experience.


2)  Because the tooling on FPGAs is sometimes a pain and the parts 
themselves are always in flux, and the updated tools often don't 
support the older parts.  Over the last 20 years I have gone through 
at least 3 different FPGA development boards and toolsets, where as my 
original Arduino is just as useful as ever.


3)  Because a highly flexible FPGA development board costs a lot more 
than a micro, and micros would be a lot cheaper on a stand-alone PCB 
than an FPGA part (or an FPGA through-hole carrier for those who are 
not up to doing something like an FPGA part on a PCB.)


4)  Because a micro form factor is smaller than an FPGA development 
board.


5)  For someone well versed in software but not as well versed in 
design (though enough that they could still do what you suggest), 
doing the software might only take a couple of days for something like 
a 64Kw disk (if it isn't too fast), and easier to debug/fix/enhance as 
well.


6)  Because it was just a *suggestion* that one might emulate the disk 
itself in hardware (see also point 1).


All valid points.  My frustration has been with projects that use an 
RPi for something that a simple HW circuit/CPLD/FPGA could have done 
more simply and efficiently.


I've lost count of the FPGA boards that I have.  I also typically don't 
use eval boards for actual projects other than testing a few "flows".  
Everything gets done with a custom board because I typically need other 
components and it gets too messy if I'm just using an "off the shelf" 
eval board (and more than likely the eval board doesn't have enough I/Os).


I should also note that the BeagleBone MFM emulator isn't actually fast 
enough.  It works OK if you only have one drive but it's not fast enough 
to handle the drive select signal when you have more than one.




On the other hand, speed kills, and some disks are just too fast for a 
micro alone to do.

Project below.  ;-)


The SMD/ESDI emulator that I've been working on has to "brute force" 
the emulation because of BW concerns.  That is, it has to read the 
entire emulated disk image into DRAM because:


1. You need at least a track's worth of buffering to send/receive the
    data through the data interface (serial)
2. You don't have enough time to transfer tracks in/out of the track
    buffer to flash (or whatever) to meet the head switch times
3. You don't have enough time to transfer whole cylinders in/out of the
    cylinder buffer to flash (or whatever) to have reasonable
    track-to-track seek times

So it will require a micro, but that's mainly to manage what's going 
in/out of the (large) DRAM back to flash (it reads the entire 
emulated disk image into DRAM during boot).  All of the actual 
commands and data movement across the interface are done by an FPGA.




Cool.  Would love an ESDI emulator for my Apollo DN3000 and SMD 
emulation for my VAXen and PDP-11/24.


Yes, it's on my project list...I have it mostly designed but other stuff 
has pushed in front of it.  The big issue is the SW (HW is fairly 
straightforward)...which is funny because I'm a system SW guy.  ;-)  I'm 
using a Xilinx Zynq FPGA mainly because I need:


 * A reasonably fast processor for handling the run-time management of
   buffers (one version has 4 ARM Cortex-A9 CPUs running at 1+GHz and
   the other has 4 ARM Cortex-A53 CPUs running at 1.5GHz).
 * Lots of DRAM (has to contain the entire emulated disk image).
   Smaller Zynq FPGA will support 1GB of DRAM and the larger one (at
   least the one that I'm using) supports 4GB of DRAM.
 * Lots of internal RAM (has to contain the maximum sized cylinder)
 * A *fast* connection between the DRAM and internal RAM (this
   determines the track-to-track latency).  Cylinders can be up to 1MB
   (32KB/track, 32 heads per cylinder) so when seeking, up to 2MB (1MB
   in, 1MB out) has to be moved to/from DRAM.  I'm trying to keep the
   seek times < 10ms (ideally ~4ms) so that means my data rate has to
   be on the order of 200-500MB/s.

I'm doing this for my Symbolics machines (ESDI and SMD) and 11/70 (SMD).
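The data-rate requirement in the last bullet above follows directly from the stated numbers; a sketch of the arithmetic (using decimal megabytes, as the message appears to):

```python
# Seek-time budget -> required DRAM <-> internal-RAM bandwidth.
# Worst case moves 2 MB: 1 MB dirty cylinder out, 1 MB new cylinder in.

MOVE_BYTES = 2e6                  # 2 MB in decimal megabytes

required = {}
for budget_ms in (10, 4):         # <10 ms hard goal, ~4 ms ideal
    required[budget_ms] = MOVE_BYTES / (budget_ms / 1e3) / 1e6  # MB/s
    print(f"{budget_ms} ms seek budget -> "
          f"{required[budget_ms]:.0f} MB/s required")
```

That reproduces the 200-500MB/s range quoted above: the 10ms ceiling sets the 200MB/s floor, and the ~4ms ideal pushes the requirement to 500MB/s.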

--
TTFN - Guy



Re: RC11 controller (Was: Reproduction DEC 144-lamp indicator panels)

2021-12-09 Thread Guy Sotomayor via cctalk



On 12/9/21 8:15 PM, Jay Jaeger via cctalk wrote:


One could perhaps emulate the RS64 data stream using a fast-enough 
micro, a la the MFM emulator.


Why does everyone seem to want to emulate HW like this with a micro when 
a reasonable FPGA implementation with some external FRAM would do the job?


The SMD/ESDI emulator that I've been working on has to "brute force" the 
emulation because of BW concerns.  That is, it has to read the entire 
emulated disk image into DRAM because:


1. You need at least a track's worth of buffering to send/receive the
   data through the data interface (serial)
2. You don't have enough time to transfer tracks in/out of the track
   buffer to flash (or whatever) to meet the head switch times
3. You don't have enough time to transfer whole cylinders in/out of the
   cylinder buffer to flash (or whatever) to have reasonable
   track-to-track seek times

So it will require a micro, but that's mainly to manage what's going 
in/out of the (large) DRAM back to flash (it reads the entire emulated 
disk image into DRAM during boot).  All of the actual commands and data 
movement across the interface are done by an FPGA.


--
TTFN - Guy



Re: RK11-C indicator panel inlays?

2021-12-06 Thread Guy Sotomayor via cctalk
I haven't priced anything out yet.  My current project will have a 
reasonably large board but will be using DIN 41612 style connectors (so 
I don't need edge fingers).  I haven't gone to different board vendors 
yet to see what pricing will be (still settling on board size and 
number of layers...right now it looks like it'll be 4 layers).


On 12/6/21 3:50 PM, Mike Katz via cctalk wrote:
For other boards without gold fingers where would you recommend and 
how expensive for omnibus size boards?


On 12/6/2021 5:07 PM, Guy Sotomayor via cctalk wrote:


On 12/6/21 2:45 PM, Mike Katz via cctalk wrote:



If I may ask a question.  I have never had boards made before.  How 
do I find a good board house that is reasonable and how do I specify 
the board especially for the PDP-8 Omnibus which should have gold 
fingers on the edge connectors?


Anything the size of an Omnibus board with gold fingers is *not* 
going to be "reasonable" especially if you want "hard gold" (which 
IMHO is the only way to go if you want reasonable life of the boards 
and sockets).


I've used Advanced Circuits for all my boards that needed gold 
fingers (they are *not* cheap...you've been warned).  When you submit 
your Gerber files, you also specify if you have edge fingers and how 
you want them plated.  I have been 100% satisfied with the boards 
that I've received from them.


TTFN - Guy





--
TTFN - Guy



Re: RK11-C indicator panel inlays?

2021-12-06 Thread Guy Sotomayor via cctalk



On 12/6/21 2:45 PM, Mike Katz via cctalk wrote:



If I may ask a question.  I have never had boards made before.  How do 
I find a good board house that is reasonable and how do I specify the 
board especially for the PDP-8 Omnibus which should have gold fingers 
on the edge connectors?


Anything the size of an Omnibus board with gold fingers is *not* going 
to be "reasonable" especially if you want "hard gold" (which IMHO is the 
only way to go if you want reasonable life of the boards and sockets).


I've used Advanced Circuits for all my boards that needed gold fingers 
(they are *not* cheap...you've been warned).  When you submit your 
Gerber files, you also specify if you have edge fingers and how you want 
them plated.  I have been 100% satisfied with the boards that I've 
received from them.


TTFN - Guy




Re: PDP-11/70 Boards

2021-11-30 Thread Guy Sotomayor via cctalk



On 11/30/21 10:06 AM, Noel Chiappa via cctalk wrote:


 From the blog of someone who got a KB11-A working, you'll really need KM11
cards; dunno if Guy Steele still has those clones he was selling.


I think you meant me.  Guy Steele is of Common Lisp fame.  ;-)

I do still have KM11 boards and some overlays (I'd have to check to see 
if I have the appropriate overlays for the 11/70).  I don't 
unfortunately have any light masks or full kits.


--
TTFN - Guy



Re: The precarious state of classic software and hardware preservation

2021-11-22 Thread Guy Sotomayor via cctalk
In my case it's stuff that *I* didn't save and just tossed it because 
"Why would I ever want this anymore?".  I *really* regret tossing all of 
the source for stuff I wrote while I was at IBM. It was after all IBM's 
property (since I wrote it all as an IBM employee) and I doubt any of it 
survives in any form anywhere, but I still wish I had some of it.  :-(


After it's all said and done, one has to wonder if we really leave any 
lasting impact.  :-/


TTFN - Guy

On 11/22/21 2:00 PM, s shumaker via cctalk wrote:
and yet, after it's over and there's *nothing* left from 30+ years of 
collecting, there are occasional reflections on what you left behind...


just saying...

Steve


On 11/22/2021 11:50 AM, John Ames via cctalk wrote:

On 2021-11-21 9:45 a.m., Adam Thornton via cctalk wrote:

On 11/19/21 9:33 PM, Steve Malikoff via cctalk wrote:

And what happens when you wake  up one morning to find archive.org is
gone, too?



Fundamentally, eventually we're all going to be indistinguishable
mass-components inside the supermassive black hole that used to be the
Milky Way and Andromeda galaxies anyway.

Smoke 'em while you got 'em.

Yeah, I had a long, hard think about this while the Caldor Fire was
looking like it was about to come knocking on my doorstep this fall
and I was trying to prep myself for a short-notice evacuation and
decide what I could and couldn't take (read: leave stowed in the trunk
of the car for the next couple weeks.) Ultimately, while I'd *like*
what I have and enjoy to pass on to someone else once I get busy
decomposing, in the long run it's all dust, so I'm not gonna worry
myself too much over it.



--
TTFN - Guy



Re: Found my favorite DOS editor

2021-09-28 Thread Guy Sotomayor via cctalk



On 9/28/21 3:41 PM, Fred Cisin via cctalk wrote:
"I've been using vi for about two years, mostly because I can't 
figure out how to exit it."

:q
you're welcome

Or having to power cycle the machine to get out of EMACS.

On Tue, 28 Sep 2021, Mike Katz via cctalk wrote:

To Exit EMACS:  Control-X Control-C



Can EMACS be expanded enough to emulate VI?


Yes.  There is an elisp package called EVIL (Extensible VI Layer) that 
emulates VI in EMACS.


Since EMACS has a full programming language (elisp), you can write 
anything you want in it (mail readers, browsers, calendar apps, other 
editors, etc).  I've written a few things in elisp to mainly deal with 
global changes that were more complicated than I could figure out with a 
SED script.



Can VI be expanded enough to emulate EMACS?

No idea.

--
TTFN - Guy



Re: Found my favorite DOS editor

2021-09-28 Thread Guy Sotomayor via cctalk



On 9/28/21 3:02 PM, jim stephens via cctalk wrote:



On 9/28/2021 2:48 PM, Al Kossow via cctalk wrote:


"I've been using vi for about two years, mostly because I can't 
figure out

how to exit it."


:q

you're welcome


Or having to power cycle the machine to get out of EMACS.


Why would you ever want to get out of EMACS?  ;-)

Editors I've used:

 * SOS (Son-Of-Stopgap) on TOPS-10
 * TECO-10 on TOPS-10
 * XEDIT on VM/370
 * EMACS

I only use VI if I absolutely must and always have issues with the modality.

--
TTFN - Guy



Re: Multiprocessor Qbus PDP-11

2021-08-20 Thread Guy Sotomayor via cctalk
There were a couple of other PDP-11 multiprocessors that I know of (and 
used):


 * C.MMP (eventually 16 PDP-11/40e's in an SMP configuration with a
   crosspoint switch accessing a large memory).  It ran a capability
   based OS called Hydra.
 * CM*  this was a cluster of LSI-11s (as I recall) that were
   hierarchically interconnected to allow for distributed operation (I
   think it was potentially capable of running with 255 nodes). I don't
   recall what OS CM* used.

Of course both of the above did not use off the shelf OS's or software.

TTFN - Guy

On 8/20/21 12:41 PM, Alan Frisbie via cctalk wrote:

Charles Dickman  wrote:

> There are indications in the KDJ11-B processor spec on bitsavers that
> the M8190 could be used in a multiprocessor configuration. For
> example, bit 10 of the Maintenance Register (17 777 750) is labeled
> "Multiprocessor Slave" and indicates that the bus arbitrator is
> disabled. There is also section 6.6, "Cache Multi-Processor Hooks",
> that describes cache features that allow multiprocessor operation.
>
>Would it be as simple as connecting to 11/83 qbus together? And adding
> the proper software.
>
> Anybody ever heard of such a thing?

Such a system was put together and tested at DEC with the RSX group
(who did the PDP-11/74 multiprocessor work).  I'm told that while it
worked, it wasn't terribly successful, and the project was abandoned.

I was given a gift of one of the CPU modules that was used in the test
and I might still have it around here.  I can't recall for certain,
but I think the module required some ECOs to make it work in a
multi-processor configuration.

The person to ask about this, Brian McCarthy, is unfortunately no
longer with us.  :-(

Alan Frisbie


--
TTFN - Guy



Re: Reading MT/ST Tapes

2021-07-31 Thread Guy Sotomayor via cctalk



On 7/31/21 9:19 AM, Chuck Guzis via cctalk wrote:

On 7/31/21 8:55 AM, Paul Berger via cctalk wrote:


Since there were still a few 360s around when I started I also got to see
the inside of a 1052 a few times, they are a really stripped down
keyboardless selectric.  They used a function cam to space and since
they did not have a tab rack they would space a lot which would cause
the space cam to wear, I remember one that was so worn  that when it
cycled it wobbled very noticeably, the customer would not let us replace
it as this was the console for the 360 and they did not want it
unavailable for the time it would take to replace it.  Some customers
apparently would have a spare 1052 onsite.  The keyboard on the 1052 is
the keyboard from a keypunch machine.

Did the 1620 Mod II and the 1130 use the same Selectric mechanism as the
S/360 1052?  I remember that the Model B on the CADET always felt as if
it would shake itself to pieces every time the carriage returned.


I was "loaned" a 1052 when I was in college and it was built like a 
tank.  Much heavier than typical selectrics from what I could tell.


I built my own 48v drivers to run it and wrote a bunch of code (8080/Z80) 
to run it as an ASCII terminal (yea, I know). Unfortunately, I've lost 
all of it in various moves/purges.


--
TTFN - Guy



Re: Looking for VAX6000 items

2021-07-14 Thread Guy Sotomayor via cctalk



On 7/14/21 9:50 AM, Paul Koning wrote:



On Jul 14, 2021, at 12:33 PM, Guy Sotomayor via cctalk  
wrote:

I've found 2 issues w.r.t. "rotary converters".

* They *always* consume lots of power regardless of the actual load

Really?  That seems odd.  A rotary converter is merely a three phase motor with 
run capacitors.  Just like any other motor, its power demand depends on the 
applied load.  A normal motor spinning without anything connected to it 
consumes power to overcome electrical, magnetic, and friction losses, but 
none of these are particularly large.

Can you cite a source for this?
Spec sheets for various rotary converters that I looked at.  I'd have to 
go back and find them again but they typically drew full load power all 
the time...and they were *loud*.



* They typically don't have great frequency regulation as they are
   really designed for machine tools (which are pretty tolerant) so if
   the load varies, the frequency will vary until the "mass" catches up

They have no frequency regulation at all; what comes out of the third wire is a 
phase shifted version of the line input.

You may be thinking about motor-generators, where the output frequency is 
defined by the construction of the generator section and how fast it spins.  
Yes, under high load those will slow down some, reducing the output frequency.


I did a fair amount of investigation of this in order to power the peripherals 
for my IBM 4331.  The peripherals in total require on the order of 21KVA of 
3-phase power and with them (printer, card reader/punch and tape drives) the 
load will vary *a lot*, which would screw up the DASD (string of 3340 drives and 
some 3350 clones).

Yes, I would expect that.  Power supplies would not care much.  Another example 
is the CDC 6000 series, which uses 400 Hz M/G sets feeding power supplies.  The 
disk drives run off mains power, so any M/G speed variations are not a factor.


I ended up looking at a solid state phase converter (takes in 220v single phase 
and produces 208v 3-phase).  It has good frequency regulation (< 1%) and only 
consumes 100W at idle.  Plus it's relatively small and quiet.  The downside is 
cost (~$5000).

$5000 ???  I have a VFC on my lathe (3 hp rating, so about 2 kW electric).  It 
cost only $150 or so as I recall -- TECO Westinghouse brand. I think they are 
still around.  That particular model was rated for single phase input.  Larger 
ones are not, though I'm told that they still work if connected that way (220 
to two of the input terminals and the third left open) at reduced rating.

Here is a current example, 3 hp single phase input: 
https://www.wolfautomation.com/vfd-3hp-230v-single-phase-ip20/

The concern with VFCs is the pulse width modulated output waveform, which I am 
told will bother some types of loads (some electronics) but not others.  Motors 
will certainly be fine with them, so if you're looking at feeding disk drive 
motor loads, this is the perfect answer.


The one I looked at produced full sine wave output for all 3 phases.  I 
don't recall the THD but it was sub 1%.


21KVA I think works out to 15 or 20HP.  The input for what I was looking 
at was 75A @ 220v single phase.  So it's quite a bit more than 2KW and 
the MOSFETs they use are *huge*.


Yes, the "small" VFCs are relatively inexpensive if they are just PWM 
outputs.


I was concerned because there are ferro-resonant transformers in some 
of this gear, and the IBM specs for these devices were pretty tight on 
frequency and THD.  Given the nature of this gear, I'd rather not have 
to go and start replacing unobtainium parts due to poor-quality power.


--
TTFN - Guy



Re: Looking for VAX6000 items

2021-07-14 Thread Guy Sotomayor via cctalk



On 7/14/21 6:21 AM, Paul Koning via cctalk wrote:



On Jul 13, 2021, at 11:34 PM, Chris Zach via cctalk  
wrote:


When we got an 8530 at work in the early 90s (needed a machine with a
Nautilus bus for specific hardware testing), it was definitely a
3-phase machine and since we were in an industrial setting, I just
tapped into our panel at the back of the warehouse and wired up a
3-phase outlet for it.  It never sat on our datacenter floor as a
result, but it really only ever had one purpose and that wasn't a
daily driver.  Too much power, too much heat for so few employees (at
that stage of the company).

Interesting. Were the power supplies 3 phase input? Like you I have noticed 
that most pdp and vax gear just pull 120 volt legs off the 3 phase to balance 
power loads. So you can run them on a couple of 120 circuits. Outside of say 
the RP07 (which is a real 3 phase motor)

A number of the large disk drives use 3 phase motors; RP04/5/6 are examples as 
well.

Three phase motors won't run on single phase power without help from run capacitors.  
(There is no such thing as "two phase power" -- 220 volts is single phase, 
balanced.)

If the issue is motors, a "variable frequency converter" will do the job 
easily.  I have suggested in the past that three phase power supplies could run from 
those, but others have pointed out I overlooked some issues.  So that's probably not a 
good idea.

If you need three phase power to feed power supplies or other non-motor power consumers, 
the best answer is probably a "rotary converter".  You can find those in 
machine tool supply catalogs.  Basically they are a three phase motor equipped with run 
capacitors so they can be fed single phase power; the three phase power needed is then 
taken off the three motor terminals.  You can think of these as rotary transformers -- 
dynamotors in a sense, for those of you who remember electronics that old.  :-)

Don't look at "static converters" -- those are only for motors, it seems they 
aren't much more than run capacitors in a box.  They won't help you for anything other 
than a motor, and even for motors they aren't very good.

paul


I've found 2 issues w.r.t. "rotary converters".

 * They *always* consume lots of power regardless of the actual load
 * They typically don't have great frequency regulation as they are
   really designed for machine tools (which are pretty tolerant) so if
   the load varies, the frequency will vary until the "mass" catches up

I did a fair amount of investigation of this in order to power the 
peripherals for my IBM 4331.  The peripherals in total require on the 
order of 21KVA of 3-phase power and with them (printer, card 
reader/punch and tape drives) the load will vary *a lot*, which would 
screw up the DASD (string of 3340 drives and some 3350 clones).


I ended up looking at a solid state phase converter (takes in 220v 
single phase and produces 208v 3-phase).  It has good frequency 
regulation (< 1%) and only consumes 100W at idle.  Plus it's relatively small 
and quiet.  The downside is cost (~$5000).


--
TTFN - Guy



Re: PDP-11/05 (was: PDP-11/05 microcode dump?)

2021-06-15 Thread Guy Sotomayor via cctalk



On 6/15/21 12:16 PM, Tom Uban via cctalk wrote:

On 6/15/21 2:02 PM, Josh Dersch wrote:

Just to provide some real-world data, I used a pair of KM11's to debug my 
11/05, see the picture here:

http://yahozna.dyndns.org/scratch/1105-debug.jpg 


They worked fine.  (These are clones, from Guy Sotomayor's kit.)  I can verify 
tonight whether I
have the earlier or later rev CPU set, if that helps.

- Josh


Interesting! From your pic, you have the M7260 without the circular baud rate 
selector switch, but I
cannot tell which M7261 board you have.

Does the machine come up and run normally with the boards in and the switches 
all in the disabled
positions, or do you have to do a special sequence to start?

I will have to look at the schematics to see how the two slots connect to the 
processor on each of
the board versions and maybe also take a look at Guy's KL11 schematic if it is 
on his site.


The schematic should be in the user's manual for the KM11.

--

TTFN - Guy



Re: Is this a new record?

2021-04-22 Thread Guy Sotomayor via cctalk
I have a number of keyboards that folks of this ilk like (several 
Symbolics keyboards and a number of 3278/9 keyboards). Fortunately, 
they're all connected to respective machines.


I did see that someone (on ebay) had taken an APL 3278 keyboard and 
converted it to USB!  Grr.  These people make me mad.


On 4/22/21 4:19 PM, Josh Dersch via cctalk wrote:

https://www.ebay.com/itm/164815576309

$9570 for a keyboard.

As much as I'd like to find a keyboard for my Lambda's second head, I
somehow doubt that's going to happen.  And now I think I need to go find a
really, really (really) safe place to keep the keyboard I *do* have...

- Josh


--
TTFN - Guy



Re: Anyone know ancient versions of XLC?

2021-04-15 Thread Guy Sotomayor via cctalk



On 4/15/21 9:42 AM, Liam Proven via cctalk wrote:

On Thu, 15 Apr 2021 at 16:00, Stefan Skoglund  wrote:


FRAME from that era was nice and fast.

As in FrameMaker? I barely know it. Back in the '80s I was a total
Aldus PageMaker fanboy. :-) IMHO one of the greatest GUI apps ever
written.

I've used FrameMaker a lot...it's great for handling large documents and 
collections of documents.  Used it quite a bit at IBM and handled 1000+ 
page documents (of course that wasn't all one "source" file).


I could never get my head around Word for anything more than 10 pages or 
so.  Just too hard to deal with everything in massive documents.


Now I almost exclusively use LaTeX.  I've found that being able to use 
my own text editor to actually write the content means I don't have to 
switch between different notions of how moving around and editing should 
work.  Using a mark-up language also means I generally have more control 
over how things appear in the document (something that continually 
frustrated me with Word especially when dealing with cross references 
and figures).


--
TTFN - Guy



Re: 80286 Protected Mode Test

2021-03-15 Thread Guy Sotomayor via cctalk

On 3/15/21 7:23 AM, Noel Chiappa via cctalk wrote:

 > From: Guy Sotomayor

 > the LOADALL instructions including all of its warts (and its inability
 > to switch back from protected mode)

Good to have that confirmed (for the 286; apparently it works in the 386).
The 386 loadall instruction was different (not really a surprise since 
the internal microarchitecture was different).  The 386 didn't need to 
do this "hack" because it had vm86 mode for tasks so that accomplished 
what everyone was really using LOADALL on the 286 for.


 > the other way to get back to real mode from protected mode is via a
 > triple-fault.

Any insight into why IBM didn't use that, but went with the (allegedly slow)
keyboard hack?
At this point I don't recall.  But I suspect it was thought to be 
conceptually simpler.


--
TTFN - Guy



Re: 80286 Protected Mode Test

2021-03-14 Thread Guy Sotomayor via cctalk



On 3/14/21 11:09 AM, Peter Corlett via cctalk wrote:

On Sun, Mar 14, 2021 at 04:32:20PM +0100, Maciej W. Rozycki via cctalk wrote:

On Sun, 7 Mar 2021, Noel Chiappa via cctalk wrote:

The 286 can exit protected mode with the LOADALL instruction.

[...]

The existence of LOADALL (used for in-circuit emulation, a predecessor
technique to modern JTAG debugging and the instruction the modern x86 RSM
instruction grew from) in the 80286 wasn't public information for a very
long time, and you won't find it in public Intel 80286 CPU documentation
even today. Even if IBM engineers knew of its existence at the time the
PC/AT was being designed, surely they have decided not to rely in their
design on something not guaranteed by the CPU manufacturer to exist.


I can say with a fair amount of certainty, that we at IBM knew of the 
existence of the LOADALL instructions including all of its warts (and 
its inability to switch back from protected mode) from the earliest days.


There were many heated discussions in various task forces (this was of 
course IBM) about the next generation OS (to become OS/2) about the 
'286.  First and foremost was how to be able to run DOS programs on the 
'286. Over very vocal opposition, management decided to use "mode 
switching" rather than any of the other techniques.  It should be noted, 
that a significant portion of us advocated abandoning the '286 in favor 
of the '386 to solve this problem.  The argument that management made 
against that approach assumed that OS/2 would be ready in 9 months and 
that the '386 would be late ('386 at the time was about 12-18 months 
away).  It turned out that OS/2 took well over 18 months to develop.


At the time I was fairly familiar with the LOADALL instruction.  I had 
modified PC/AT Xenix to use the LOADALL instruction to allow for running 
Xenix programs and multiple DOS programs simultaneously.  I gave 
multiple demos to various folks in management but to no avail.  They had 
decided that mode switching was *the* way that OS/2 was going to work.


I should also note that the other way to get back to real mode from 
protected mode is via a triple-fault.  What gets me (and I railed on 
this at Intel when I worked there for a time) is that it still exists in 
the architecture even though they now have a machine check architecture 
(which, while at IBM, I pushed Intel to implement for the '386!).



The Wikipedia page on LOADALL claims "The 80286 LOADALL instruction can not
be used to switch from protected back to real mode (it can't clear the PE
bit in the MSW). However, use of the LOADALL instruction can avoid the need
to switch to protected mode altogether."

I find that paragraph very persuasive. The author knows about LOADALL and
the desire to use it to avoid going into protected mode, and also explains
that there's a specific exception in its behaviour which prevents returning
to real mode. All of the other hacky uses of LOADALL would be unnecessary if
it could be used to switch modes at will. It just doesn't seem like
something that would be written if it was wrong.

Is Wikipedia incorrect and the 286 LOADALL *can* exit protected mode, and if
so, how?


--
TTFN - Guy



Re: DEC RK11-C Disk Controller - on ebay...or is it?

2021-02-08 Thread Guy Sotomayor via cctalk
It looks like it could be an RK11-C.  Are you possibly thinking of the 
RK11-D which fits in a BA11 chassis?


TTFN - Guy

On 2/8/21 2:53 PM, Bill Degnan via cctalk wrote:

If you search ebay for "DEC RK11-C Disk Controller", you'll find a listing
of a backplane of flipchip cards, but it's not like any RK11-C I have ever
seen.  Am I right, this is a mis-labeled auction?
Bill


--
TTFN - Guy



Re: PDP-11/70 debugging advice

2021-01-31 Thread Guy Sotomayor via cctalk
Did you check to make sure that power is wired correctly to the 
PEP-70/Hypercache?  They are typically installed in "empty" slots and 
don't have power (or anything else) routed to them.  They require some 
additional jumpers to be installed on the backplane so that they get power.



On 1/31/21 2:31 PM, Josh Dersch via cctalk wrote:

Hi all --

Making some progress with the "fire sale" PDP-11/70. Over the past month
I've rebuilt the power supplies and burned them in on the bench, and I've
gotten things cleaned up and reassembled.  I'm still waiting on some new
chassis fans but my curiosity overwhelmed my caution and I decided to power
it up for a short time (like 30 seconds) just to see what happens.  Good
news: no smoke or fire.  Voltages look good (need a tiny bit of adjustment
yet) and AC LO and DC LO looked good everywhere I tested them.  Bad news:
processor is almost entirely unresponsive; comes up with the RUN and MASTER
lights on, toggling Halt, and hitting Start causes the RUN light to go out,
but that's the only response I get from the console.

I got out the KM11 boardset and with that installed I can step through
microinstructions and it's definitely executing them, and seems to be
following the flow diagrams in the engineering drawings.  Left to its own
devices, however, the processor doesn't seem to be executing
microinstructions at all, it's stuck at uAddress 200.

In the troubleshooting section of the 11/70 service docs (diagram on p.
5-16) it states:

IF LOAD ADRS DOES NOT WORK AND:
- RUN, MASTER & ALL DATA INDICATORS ARE ON
- uADRS = 200 (ZAP)
THEN MEMORY HAS LOST POWER

Which seems to adequately describe the symptoms I'm seeing, but as far as I
can tell the AC and DC LO signals are all fine.  (This system has a Setasi
PEP70/Hypercache installed, so there's no separate memory chassis to worry
about.)  I'm going to go back and re-check everything, but I was curious if
anyone knows whether loss of AC or DC would prevent the processor from
executing microcode -- from everything I understand it should cause a trap,
and I don't see anything in the docs about inhibiting microcode execution.
But perhaps if this happens at power-up things behave differently?  And the
fact that the troubleshooting flowchart calls out these exact symptoms
would seem to indicate that this is expected.  But I'm curious why the KM11
can step the processor, in this case.

I'm going to wait until the new fans arrive (hopefully tomorrow or tuesday)
before I poke at this again, just looking for advice here on the off chance
anyone's seen this behavior before.

Thanks as always!
- Josh


--
TTFN - Guy



Re: APL\360

2021-01-30 Thread Guy Sotomayor via cctalk



On 1/30/21 9:52 AM, Chuck Guzis via cctalk wrote:

On 1/29/21 10:03 PM, Guy Sotomayor via cctalk wrote:


And unfortunately in some industries it is prohibited.  Those industries
*require* conformance to MISRA, CERT-C, ISO-26262 and others.  There is
*no* choice since the code has to be audited and compliance is *not*
optional.

Just an illustration of what happens when you take a "portable
alternative to assembly" and put lipstick on it.   I've been programming
C since System III Unix and I still consider it to be a portable (sort
of) alternative to assembly.

One of the problems with C, in my view, is a lack of direction.  There
are plenty of languages that aim for specific ends.  (e.g. COBOL =
business/commercial, FORTRAN = scientific, Java = web applications,
etc.).   But whence C or C++?

In my dotage, I do a fair amount of MCU programming nowadays, and C is
the lingua franca in that world; the only real alternative is assembly,
so that makes some sense.  Python, Ada, etc. never really managed to
make much headway there.  C is far more prevalent than C++ in that
world, FWIW.

Does standard C have vector extensions yet?  I was an alternate rep for
my firm for F90 (was supposed to be F88) for vector extensions; it's
just a matter of curiosity.


I've been writing in C since 1977 (Unix V6 days and went through the =+ 
to += conversion in V7).  I've seen *a lot* of changes in C over that time.


Most of what I do is low-level stuff (OS, RTOS, etc.) and I actually 
*rarely* even use the C library (most of what I build is built with 
-nostdlib).


I typically build using -std=c99 but I'm looking at C11 because of the 
atomics that were introduced then, though I have to see what the 
compiler generates natively versus what it relies on library calls for.  
I haven't yet looked at what's in C17.  I've also been known to write a 
special hand-crafted function so that an entire portion of the C library 
doesn't get pulled in.  Not only did it save a bunch of space but it was 
*much* faster too.



TTFN - Guy




Re: APL\360

2021-01-29 Thread Guy Sotomayor via cctalk



On 1/29/21 4:32 PM, Fred Cisin via cctalk wrote:

if ( !(myfile = fopen( filename, "r")) )


On Fri, 29 Jan 2021, Guy Sotomayor via cctalk wrote:
In a lot of industry standard coding practices (MISRA, CERT-C) that 
type of statement is prohibited and *will* result in an error being 
reported by the checker/scanner.
The if statement in your example has at least 2 errors from MISRA's 
perspective:

* assignment within a conditional statement
* the conditional not being a boolean type (that is you can't assume 0
  is false and non-0 is true...you actually need to compare...in this
  case against NULL)


That particular structure has become an industry standard.
MOST dialects of C return a NULL pointer on fopen error.
Similarly the code in strcpy has an assignment and is using the 
numeric value of each character as if it were boolean, with the 
terminating NUL ending the while condition.



And unfortunately in some industries it is prohibited.  Those industries 
*require* conformance to MISRA, CERT-C, ISO-26262 and others.  There is 
*no* choice since the code has to be audited and compliance is *not* 
optional.



--
TTFN - Guy



Re: APL\360

2021-01-29 Thread Guy Sotomayor via cctalk
In a lot of industry standard coding practices (MISRA, CERT-C) that type 
of statement is prohibited and *will* result in an error being reported 
by the checker/scanner.


The if statement in your example has at least 2 errors from MISRA's 
perspective:


 * assignment within a conditional statement
 * the conditional not being a boolean type (that is you can't assume 0
   is false and non-0 is true...you actually need to compare...in this
   case against NULL)


On 1/29/21 3:59 PM, Fred Cisin via cctalk wrote:

On Fri, 29 Jan 2021, Chuck Guzis via cctalk wrote:


In the past (and occasionally today, I use the following construct:

FILE *myfile;

if ( !(myfile = fopen( filename, "r")) )
{
 fprintf( stderr, "Couldn\'t open %s - exiting\n", filename);
 exit (1);
}

Yes, it only saves a line, but neatly describes what's being done.

--Chuck


Yes.
That is another excellent example of where you DO want to do an 
assignment AND a comparison (to zero).  A better example than my 
strcpy one, although yours does not need to save that extra line, but 
a string copy can't afford to be slowed down even a little.


That is why it MUST be a WARNING, not an ERROR.
Of course, the error is when that wasn't what you intended to do.


--
TTFN - Guy



Re: APL\360

2021-01-29 Thread Guy Sotomayor via cctalk



On 1/29/21 12:21 PM, ben via cctalk wrote:

On 1/29/2021 12:59 PM, Fred Cisin via cctalk wrote:



Without OTHER changes in parsing arithmetic expressions, that may or 
may not be warranted, just replacing the '=' being used for 
assignment with an arrow ELIMINATED that particular confusion.  Well, 
mostly.  You can't use a right pointing arrow to fix 3 = X




Blame K with C for the '=' and '==' mess, because assignment is an 
operation. I never hear that C or PASCAL have problems.


We complained bitterly about this in the early days (Unix v6 days).  
They at least listened and fixed the = (e.g. =+, =-) because of 
ambiguity but refused to change assignment.  I find it annoying that a 
typo of forgetting an '=' in a comparison can result in a hard-to-find 
bug.



TTFN - Guy



Re: DEC backplane power connectors

2021-01-27 Thread Guy Sotomayor via cctalk

Could you post the part numbers?

Thanks.

TTFN - Guy

On 1/27/21 7:19 AM, Tom Uban via cctalk wrote:

Thanks much. I think I found the mating plugs I need on the te.com site and 
digikey has them.

--tom

On 1/27/21 2:05 AM, Mattis Lind wrote:


Den ons 27 jan. 2021 kl 06:59 skrev Tom Uban via cctalk mailto:cctalk@classiccmp.org>>:

 Are the power connectors on the DEC PDP-11 backplanes (e.g. DD11-DF 15pin 
and 6pin) Molex or
 other?
 Are they still commonly available?


They are called Commercial Mate-n-lok.  Company is called TE-Connectivity 
nowadays.

Later on DEC used Universal Mate-n-lok. For example in the VAX-11/750.

/Mattis


 --tnx
 --tom


--
TTFN - Guy



Re: Keyboard storage

2020-12-21 Thread Guy Sotomayor via cctalk
No worries.  I use Uline for all sorts of stuff and they generally
deliver within 2 days (even out here in the boonies). I always find a
use for any extras.  ;-)

I generally avoid USPS partly because they don't deliver to our house,
so we have a P.O. Box (which means I have to talk to the shipper to determine 
what method they use for shipping so I can give them the right address).  
Frankly, I don't understand because UPS and FedEx deliver right to our door 
(although sometimes it's fun to figure out *which* door they left the package 
at).

TTFN - Guy

On Mon, 2020-12-21 at 23:05 -0800, Alan Perry wrote:
> Thanks. I had seen that one before, but didn't know what to do with
> the 
> extra 15 boxes.
> 
> The USPS box has the advantages of being free and being a box I am
> more 
> likely to use to ship something with (because of the flat rate price
> and 
> not having to deal with weighing the box).
> 
> alan
> 
> On 12/21/20 10:59 PM, Guy Sotomayor wrote:
> > Try ULine (uline.com).  They have a keyboard shipping box (p/n S-
> > 6496).
> >   They're only $2.70/ea but the minimum order is 25.  :-(
> > 
> > TTFN - Guy
> > 
> > On Mon, 2020-12-21 at 22:17 -0800, Alan Perry via cctalk wrote:
> > > I have a bunch of Sun keyboards that I need to store more
> > > efficiently
> > > and don't want to risk damaging by stacking on top of each other.
> > > They
> > > are Type 4s, 5s, and 6s (without the wrist rest), maybe 10 in
> > > total.
> > > Anyone here know of a box or boxes that would work well for this?
> > > 
> > > alan



Re: Keyboard storage

2020-12-21 Thread Guy Sotomayor via cctalk
Try ULine (uline.com).  They have a keyboard shipping box (p/n S-6496). 
 They're only $2.70/ea but the minimum order is 25.  :-(

TTFN - Guy

On Mon, 2020-12-21 at 22:17 -0800, Alan Perry via cctalk wrote:
> I have a bunch of Sun keyboards that I need to store more
> efficiently 
> and don't want to risk damaging by stacking on top of each other.
> They 
> are Type 4s, 5s, and 6s (without the wrist rest), maybe 10 in total. 
> Anyone here know of a box or boxes that would work well for this?
> 
> alan



Re: Strange magtape anecdote

2020-10-27 Thread Guy Sotomayor via cctalk
We had a similar problem when I was at IBM and we were developing a
follow on to the PC/AT (it never shipped).  We had a bunch of
prototypes in the lab running tests with stepper HDDs (rather than
voice coils).  We kept having disk errors (failure to find track 0) when
running tests.

It took us a while to figure several things out:
1) all of the machines were run with their covers off
2) all of the machines that failed were by the windows
3) the failures always happened at a particular time of day

After a bit of head scratching we discovered that the track 0 sensor was
optical and at that particular time of day the sun at a particular
angle did not allow the sensor to register that the drive was at track
0.

TTFN - Guy

 
On Tue, 2020-10-27 at 06:50 +0100, nico de jong via cctalk wrote:
> Hi all,
> 
> Back in the early 70's I was an operator on an IBM 360/40 with 4 
> tapedrives. Nobody could understand that sometimes a tape transfer
> would 
> stop saying "end of tape", mainly around 3 PM, when not called for.
> It 
> was mainly one specific drive, but its two neighbours, one on each
> side, 
> could also behave like this. Tape drive specialists visitied us, 
> scratched their heads, and went off again. When the blinds were
> rolled 
> down, the error disappeared. The reason for the strange behaviour
> was 
> that the sun could shine into the machine room when it was in a
> specific 
> position, so it could send some light into the drive, where the tape 
> then reflected the light into the sensor, making it believe that it
> had 
> met the end-of-tape marker.
> 
> /Nico OZ 1 BMC
> 
> On 2020-10-26 17:01, Al Kossow via cctalk wrote:
> > 
> > http://mnembler.com/computers_mini_stories.html
> > 
> > "George Dragner always wore a belt with a metal dragon buckle.  He
> > was 
> > a colorful character known for pissing off management.  His most 
> > famous act was tossing a chair through the window at a customer
> > site. 
> > The customer refused to believe that the lack of humidity in the
> > room 
> > was screwing up his magnetic tape media.  As the tape heads depend
> > on 
> > the moisture from the air to prevent the magnetic oxide from being 
> > torn off the media from the friction during a rewind. George broke
> > the 
> > window to prove his point.  He was right ! "
> > 
> > There is a minimum RH specified for tape, but "tape heads depend
> > on 
> > the moisture from the air"  ??
> > 



Re: Next project: 11/24. Does it need memory?

2020-10-19 Thread Guy Sotomayor via cctalk
On Mon, 2020-10-19 at 20:20 -0400, Chris Zach via cctalk wrote:
> 
> won't work. Maybe I'll just drag out the 11/05 and get that working 
> first, it's got a nice front panel that doesn't lock up often :-)
> 
> 
The 11/05 was the first 11 that I repaired and got working.  You should
note that the 11/05's front panel is driven by the uCode of the CPU. 
Its connection to the CPU is through a "serial" protocol (it's been
too long...I think it's just a big shift register) to keep the pin
count (e.g. cost) low.

TTFN - Guy




Re: RL02 Disk and maybe pdp11 something at auction.

2020-10-19 Thread Guy Sotomayor via cctalk
On Mon, 2020-10-19 at 17:45 -0400, Noel Chiappa via cctalk wrote:
> > From: Guy Sotomayor ggs at shiresoft.com 
> 
> > It looks like it's 11/84 from the badge on the front.
> 
> In a 10-1/2" box. Seen them in the docs (forget the model number),
> never seen
> a real one.

I had a number of 11/84s in the 10-1/2" box and in the 21" box.  Got
rid of them all in the last move (along with 3(!) 11/78x VAXen...I was
a bit surprised because I thought I had only 2).

TTFN - Guy



Re: RL02 Disk and maybe pdp11 something at auction.

2020-10-19 Thread Guy Sotomayor via cctalk
On Mon, 2020-10-19 at 13:12 -0700, Wayne Sudol via cctalk wrote:
> I spotted this for an auction from the  FORMER OYSTER CREEK NUCLEAR
> GENERATING STATION. 
> Looks like a pair of RL02 with a pdp something in the middle. I can't
> make out what model it is from the photo.
> Anyone know?
> 
> 
> 
https://www.bidspotter.com/en-us/auction-catalogues/bscunited/catalogue-id-united4-10061/lot-9f3350e0-a11b-493d-868b-ac43015bce6d
> 

It looks like it's 11/84 from the badge on the front.

TTFN - Guy



Re: 11/84 print set

2020-10-19 Thread Guy Sotomayor via cctalk
On Mon, 2020-10-19 at 11:22 -0700, Fred Cisin via cctalk wrote:
> On Mon, 19 Oct 2020, Al Kossow via cctalk wrote:
> > yes, I went ahead and got it even though I can't afford to
> > paypal is my normal aek@bitsavers adr
> 
> Done $50
> 
> 
Me too.

TTFN - Guy




Re: Tutor needed for college student

2020-10-12 Thread Guy Sotomayor via cctalk
I agree with the others: go look for other textbooks.  There are also
surprisingly good "webinar's" on various math related topics on YouTube
(free), so it might be worthwhile to have him do a bit of searching.

Oddly, I never had any discrete math courses in school...it was "old
school EE" so everything was differential equations and stochastic
processes.

I did end up teaching myself about finite fields (Galois fields to be
specific) when I needed to do some work with error correcting codes.  I
ended up with 3 or 4 different textbooks on the topic.  I have since
gone back to refresh myself about them and found several good video
courses on YouTube.

TTFN - Guy



Re: 9 track tapes and block sizes

2020-10-03 Thread Guy Sotomayor via cctalk
On Sat, 2020-10-03 at 08:33 -0700, Chuck Guzis via cctalk wrote:
> 
> In particular, consider a government project where several hundred
> millions of 1970s dollars were spent by the government, yet almost
> nothing other than a few papers survives.  Those involved with
> intimate
> knowledge are inexorably dying off as the community ages out.  The
> lessons of "what did we learn from all of this?" will be gone
> forever.
> 
> Sometimes it seems that we spend as many resources in forgetting as
> we
> spend trying to remember.
> 

I couldn't agree more...and it's not just governments (at all levels)
but companies as well.

In the mid-90's I worked on the IBM Microkernel project (was one of the
original 6 people who started it).  It eventually grew to hundreds of
people and morphed into Workplace OS.

I still have some of the printed documentation from that project but
have long since lost a set of CDs that contained not only the PDFs for
those documents (the source was in FrameMaker) but also all of the IBM
microkernel source *and* build environment and tools.

And that was only a part of the project...there was all of the
personality-neutral software as well as the various OS personalities
(including AIX and OS/2).  I seriously doubt that any of it survived in
any form because of the way that the project was shut down.

The last estimate of the cost to IBM of the project was over
$2,000,000,000 (in 1995 dollars).  To my knowledge not much survived. 
What a waste.

TTFN - Guy



Re: Small C ver 1.00 source?

2020-07-14 Thread Guy Sotomayor via cctalk
Yes, I spent a good amount of my time at CMU in the late 70's re-
writing the TOPS-10 version of that compiler with a new P-Code
definition so that the target code could be run efficiently on small
machines.  I did the original work to target the PDP-11s on C.MMP.

I still have the compiler source, documentation I wrote and all of the
test cases.  Unfortunately I no longer have the PDP11 P-Code
interpreter that I wrote (all in PDP-11 assembler and BLISS-11).  :-( 
However, I *think* I still have the interpreter I wrote in Pascal that
I used for testing the compiler changes and code generation.

TTFN - Guy

On Tue, 2020-07-14 at 12:19 -0600, Eric Smith via cctalk wrote:
> On Tue, Jul 14, 2020 at 10:42 AM Chuck Guzis via cctalk <
> cctalk@classiccmp.org> wrote:
> 
> > The term "p-code" comes from the 1973 Pascal-P version of UCSD
> > Pascal.
> > 
> 
> "p-code" does come from Pascal-P, but Pascal-P wasn't a version of
> UCSD
> Pascal. Pascal-P was developed on the CDC 6600 in 1972.
> 
> UCSD Pascal didn't come about until 1977, so the term p-code predates
> UCSD
> Pascal by five years.



Re: Fixing an RK8E ....

2020-06-19 Thread Guy Sotomayor via cctalk
On Fri, 2020-06-19 at 12:24 -0700, Robert Armstrong via cctech wrote:
>   It appears that my RK8E has a problem - it fails the diskless
> control test
> with
> 
>   .R DHRKAE.DG
>   SR= 
> 
>   COMMAND REGISTER ERROR
>   PC:1160 GD: CM:0001 
>   DHRKAE  FAILED   PC:6726  AC:  MQ:  FL:
>   WAITING
> 
> Ok, maybe a bad bit in the command register so I'll check it
> out.  But then
> it dawns on me - how do you work on this thing?  It's three boards
> connected
> with "over the top" connectors - you can't use a module extender on
> it.
> Worse, the M7105 Major Registers board is the middle one of the
> stack!   Is
> there some secret to working on this thing?  Has anybody fixed
> one?  Any
> suggestions?
> 
>   I hadn't thought about it before, but the KK8E CPU would have the
> same
> problem.  Fingers crossed that one never dies...
> 

I seem to recall that there were some "special" (read unobtanium) over
the top connectors that permitted one of the boards in a board set to
be up on an extender.

TTFN - Guy




Re: Living Computer Museum

2020-05-27 Thread Guy Sotomayor via cctalk
On Wed, 2020-05-27 at 14:57 -0700, geneb wrote:
> On Wed, 27 May 2020, Guy Sotomayor via cctalk wrote:
> 
> > I just received an email from the Living Computer Museum that they
> > were
> > suspending operations.  It wasn't clear from the email what that
> > actually means.
> > 
> 
> They've been closed to visitors since early March I think.

That I knew.  It's just that the email that was sent sounded pretty
ominous.

TTFN - Guy



Living Computer Museum

2020-05-27 Thread Guy Sotomayor via cctalk
I just received an email from the Living Computer Museum that they were
suspending operations.  It wasn't clear from the email what that
actually means.

TTFN - Guy



Re: history is hard (was: Microsoft open sources GWBASIC)

2020-05-25 Thread Guy Sotomayor via cctalk
On Mon, 2020-05-25 at 14:13 -0700, Fred Cisin via cctalk wrote:
> > I hadn't thought about IBMCACHE.SYS in *years*.  I wrote it in
> > its entirety (there's even a patent that covers some of its
> > operation).  I was in an AdTech (Advanced Technology) group at the
> > time and was looking at how to make disk operations faster in DOS
> > at the time when I came up with the idea.  There was a *huge*
> > battle within IBM on if it should be released and in order to do
> > so, it was fairly well hidden.
> 
> I think that I recall a mention of REFERENCE disk of PS/2?
> (NOT model 25 or 30, which didn't have extended memory)
> 
> 
> Can IBMCACHE co-exist with HIMEM.SYS?
> Or require it?
> Or the A20 support needed by Windows 3.10?
> When SMARTDRV was activated, did it disable IBMCACHE? or conflict
> with it?
> 

No, IBMCACHE was standalone.  As I recall (I wish I'd kept a copy of
the source), you could tell it how much memory above 1MB to use (and
the starting address); I think there was also a mode that allowed you
to use memory below 1MB as well.  That was done to allow for
co-existence with HIMEM.SYS.

When the write back cache was enabled (it would always allow write-
thru), in addition to intercepting INT 13 (and timer) it would also
intercept INT 21 so that if you did a "close" it would immediately
flush out the dirty buffers.

One of the differences between IBMCACHE and SMARTDRV as I
recall (I really didn't spend too much time thinking about SMARTDRV)
was that IBMCACHE was block based versus SMARTDRV being track based. 
It allowed for much better caching (from my own analysis when I was
developing it).  It also allowed for caching blocks that had bad
sectors (which was one of the patents for IBMCACHE).

When IBMCACHE did a write out of dirty blocks they were always in
sorted order (the list of dirty blocks was kept in sorted order).  I
recall playing around with dual elevator algorithms (it knew where the
last read/write was) so it could do the writes that required the
fewest/shortest seeks.  It turned out not to be a huge win (for DOS)
versus the complexity, so I never released that.
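For illustration, the sorted dirty list plus an elevator-style flush can be modeled in a few lines of Python.  This is a hypothetical sketch with invented names (the real IBMCACHE was x86 assembler), showing one plausible one-directional elevator: sweep upward from the last head position, then wrap.

```python
import bisect

class WriteBackCache:
    """Toy model of block write-back kept in sorted order (illustrative only)."""

    def __init__(self):
        self.dirty = []   # dirty block numbers, kept sorted at all times
        self.head = 0     # block number the disk arm last visited

    def mark_dirty(self, block):
        # Insert at the sorted position so a flush never needs a sort pass.
        i = bisect.bisect_left(self.dirty, block)
        if i == len(self.dirty) or self.dirty[i] != block:
            self.dirty.insert(i, block)

    def flush_order(self):
        # One-directional elevator: write blocks at or above the current
        # head position first, then wrap around to the low-numbered blocks,
        # minimizing direction reversals of the seek arm.
        i = bisect.bisect_left(self.dirty, self.head)
        order = self.dirty[i:] + self.dirty[:i]
        self.dirty = []
        if order:
            self.head = order[-1]
        return order
```

With dirty blocks {3, 7, 9} and the head at block 5, `flush_order()` yields `[7, 9, 3]`: one upward sweep, then the wrap.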

I even had a version that cached floppies (but would *never* enable the
write-back cache for devices that it thought were removable).  If it
detected a disk change it would flush the cache for that drive.  

TTFN - Guy



Re: history is hard (was: Microsoft open sources GWBASIC)

2020-05-25 Thread Guy Sotomayor via cctalk
On Mon, 2020-05-25 at 13:21 -0700, Ali wrote:
> 
> > I hadn't thought about IBMCACHE.SYS in *years*.  I wrote it in its
> > entirety (there's even a patent that covers some of its operation).
> > I was in an AdTech (Advanced Technology) group at the time and was
> > looking at how to make disk operations faster in DOS at the time
> > when I came up with the idea.
> > 
> > There was a *huge* battle within IBM on if it should be released
> > and in order to do so, it was fairly well hidden.
> 
> 
> Guy,
> 
> It is so well hidden I don't think I have ever seen it. Was it part
> of pc-dos? If so what version?

No, it came on one of the diskettes supplied with PS/2 systems though
it would work on any system.  That is, it didn't do anything to detect
that it was running on a PS/2 system.  There was a lot of discussion to
have the "core" of IBMCACHE actually in BIOS and a tiny .SYS file to
allocate the memory above 1MB.

Most interest in it faded when Microsoft started shipping smartdrv.sys
which IMHO was not as good as IBMCACHE, but smartdrv.sys came with DOS.

TTFN - Guy



Re: history is hard (was: Microsoft open sources GWBASIC)

2020-05-25 Thread Guy Sotomayor via cctalk
On Mon, 2020-05-25 at 20:28 +0200, Liam Proven via cctalk wrote:
> On Mon, 25 May 2020 at 20:22, Guy Sotomayor 
> wrote:
> > 
> > I hadn't thought about IBMCACHE.SYS in *years*.  I wrote it in its
> > entirety (there's even a patent that covers some of its operation).
> > I
> > was in an AdTech (Advanced Technology) group at the time and was
> > looking at how to make disk operations faster in DOS at the time
> > when I
> > came up with the idea.
> 
> Oh my word! Well I thank you for it. It helped a very great deal and
> made dozens of users of rather expensive IBM PS/2s in the Isle of Man
> very happy for a while in the late 1980s and early 1990s. :-)

You're very welcome!  I know that there were some bids that IBM
marketing needed IBMCACHE.SYS to win (millions of dollars) and it was
*still* a battle to get it released!

> 
> > There was a *huge* battle within IBM on if it should be released
> > and in
> > order to do so, it was fairly well hidden.
> 
> I can believe that! I think I read of it in a magazine and thought
> "never! I'd know!" -- so I looked and there it was.
> 
> > There was a switch on config.sys statement for IBMCACHE.SYS to turn
> > off
> > the write-back cache (e.g. writes would always go straight to
> > disk).
> > As I recall, there was a 30 second timer for the writeback cache so
> > that if a disk block was "dirty" for more than 30 seconds it would
> > get
> > flushed to disk.
> 
> Yes, both true. I think I may have used the write-through switch for
> some people, but ISTR it reduced performance a little bit. Just
> teaching people to be a bit more patient was sometimes hard -- after
> all, this was a tool that appealed to the impatient!
> I think for them it was easier to teach them to  press C-A-D and then
> wait for the RAM check before turning off.
> 
> Or hit C-A-D, let it boot all the way, then turn it off!
> 
> Great bit of work, if I may say so!

Yea, not only did I have to write it, but I had to write a series of
tests to run through billions of disk operations (and go validate the
internal state of the cache) before it could even be considered for
release.  ;-)

BTW, as a bit of copyright paranoia, if you do an ASCII dump of
IBMCACHE.SYS, you'll see my 3 initials (GGS) (or it may have been
IBM...it's been so long I can't remember).  They are actually
instructions!  It was required at the time to have code embed a text
string as actual instructions that get executed.  It took me a bit of
time to figure out (in x86 assembler) how to generate an appropriate
string.  The idea was that if someone "cloned" the program and just did
a replacement of the string, it would stop working because the string
was actually instructions.
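The trick works because many printable ASCII codes double as harmless 8086 opcodes.  A hypothetical Python illustration follows; I don't know the actual byte sequence IBMCACHE used, but the opcode table below is standard 8086 encoding (0x40-0x47 are INC r16, 0x50-0x57 are PUSH r16).

```python
# Printable ASCII bytes that are also innocuous 8086 instructions.
# A real in-line signature would balance stack effects, e.g. pair
# 'S' (PUSH BX) with a '[' (POP BX) somewhere downstream.
OPCODE_8086 = {
    0x40: "INC AX",   # '@'
    0x47: "INC DI",   # 'G'
    0x48: "DEC AX",   # 'H'
    0x53: "PUSH BX",  # 'S'
    0x5B: "POP BX",   # '['
    0x90: "NOP",
}

def decode_signature(text):
    """Return the 8086 mnemonic for each byte of an ASCII signature."""
    return [OPCODE_8086[b] for b in text.encode("ascii")]

print(decode_signature("GGS"))   # → ['INC DI', 'INC DI', 'PUSH BX']
```

So "GGS" executes as two INC DI instructions and a PUSH BX: patch the string to different letters and the instruction stream changes underneath the cloner.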

TTFN - Guy



Re: history is hard (was: Microsoft open sources GWBASIC)

2020-05-25 Thread Guy Sotomayor via cctalk
On Mon, 2020-05-25 at 20:00 +0200, Liam Proven via cctalk wrote:
> On Mon, 25 May 2020 at 05:30, Fred Cisin via cctalk
>  wrote:
> > 
> > 
> IBMs came with an installable driver called, I think, IBMCACHE.SYS.
> This used extended RAM (above 1MB) as a hard disk cache, without XMS
> or HIMEM.SYS or any of that. I played with it and was amazed by the
> results. I started enabling it by default on customers' machines. 

I hadn't thought about IBMCACHE.SYS in *years*.  I wrote it in its
entirety (there's even a patent that covers some of its operation). I
was in an AdTech (Advanced Technology) group at the time and was
looking at how to make disk operations faster in DOS at the time when I
came up with the idea.

There was a *huge* battle within IBM on if it should be released and in
order to do so, it was fairly well hidden.

> Most
> were happy but some had the habit of just turning off -- DOS didn't
> really have a shutdown routine. Some, I could train to press
> Ctrl-Alt-Del before turning off. Some I couldn't, so I had to disable
> the disk cache.

There was a switch on config.sys statement for IBMCACHE.SYS to turn off
the write-back cache (e.g. writes would always go straight to disk). 
As I recall, there was a 30 second timer for the writeback cache so
that if a disk block was "dirty" for more than 30 seconds it would get
flushed to disk.
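A toy model of that age-based flush rule, in hypothetical Python (the real driver hooked the timer interrupt rather than polling a clock; `DIRTY_MAX_AGE` just mirrors the 30-second figure above, and all names are invented):

```python
import time

DIRTY_MAX_AGE = 30.0  # seconds a block may stay dirty before forced write-out

def blocks_to_flush(dirty_since, now=None):
    """Given {block_number: time_marked_dirty}, return the blocks whose
    dirty age has reached the limit, in sorted (seek-friendly) order."""
    now = time.time() if now is None else now
    return sorted(b for b, t in dirty_since.items()
                  if now - t >= DIRTY_MAX_AGE)
```

On each timer tick the driver would write out `blocks_to_flush(...)` and clear those entries, bounding how long a "dirty" block could be lost to a power-off.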

> 
> But for those that could learn and adapt, it made DOS _much_ faster,
> and on a 1MB PS/2 Model 50 or 60, it was about the only thing you
> could do with the extra 386 KB of RAM before MS-DOS 5 came out.
> 

TTFN - Guy



Re: ISO: Diablo 30 heads

2020-05-14 Thread Guy Sotomayor via cctalk
I chatted with him on FB earlier in the day and he's doing fine.

TTFN - Guy

On Thu, 2020-05-14 at 19:45 +, dwight via cctalk wrote:
> I just emailed him an hour ago and he replied. I suspect he is fine.
> Dwight
> 
> 
> From: cctalk  on behalf of Al Kossow
> via cctalk 
> Sent: Thursday, May 14, 2020 9:54 AM
> To: cctalk@classiccmp.org 
> Subject: Re: ISO: Diablo 30 heads
> 
> On 5/13/20 6:28 PM, Jay Jaeger via cctalk wrote:
> 
> > Carl, feel free to contact me off list.
> 
> Has anyone heard anything from Carl?
> I'm a bit concerned since there have been no updates on his 1130 page
> for a while.
> 
> 



Re: APL-11

2020-03-30 Thread Guy Sotomayor via cctalk
I don't have an easy way to dump the ROMs at the moment.

TTFN - Guy

On Mon, 2020-03-30 at 13:49 -0600, Eric Smith wrote:
> On Mon, Mar 30, 2020 at 10:24 AM Guy Sotomayor via cctalk <
> cctalk@classiccmp.org> wrote:
> > I have a DEC Writer III with the APL character set ROM and the APL
> > keyboard!  Just need to hook it up to something that has APL on it
> > and will generate the correct character sequences.  ;-)
> 
> Cool!  When you get a chance, could you please dump the DECwriter III
> ROMs?
> 



Re: APL-11

2020-03-30 Thread Guy Sotomayor via cctalk
On Mon, 2020-03-30 at 11:07 -0400, Diane Bruce via cctalk wrote:
> On Mon, Mar 30, 2020 at 10:58:46AM -0400, Bill Gunshannon via cctalk
> wrote:
> > 
> > Haven't given up on DIBOL.  May try installing the RT-11 version
> > and
> > see if it runs.
> > 
> > But now another language of interest has reared its ugly head.  :-)
> > 
> > Anybody have an image of the tape for APL-11?  Manual claims it
> > runs on all of the PDP-11 OSes and it is another language from
> > my past that I haven't touched (other than to read some programs
> > out of curiosity) in more than two decades.
> 
> Oh neat! Be sure you have the special keyboard and character set for
> it!
> e.g. just overlays for the keyboard.

I have a DEC Writer III with the APL character set ROM and the APL
keyboard!  Just need to hook it up to something that has APL on it
and will generate the correct character sequences.  ;-)

TTFN - Guy




Re: DIBOL and RPG for RSTS

2020-03-29 Thread Guy Sotomayor via cctalk
On Sun, 2020-03-29 at 10:21 -0400, Paul Koning via cctalk wrote:
> > On Mar 28, 2020, at 2:55 PM, dwight via cctalk <
> > cctalk@classiccmp.org> wrote:
> > 
> > There are a few reasons most don't like Forth:
> > 
> >  1.   no type checking ( supposed to save dumb programmers )
> >  2.   Often, no floating point. ( Math has to be well thought out
> > but when done right in integer math it has few bugs ).
> >  3.  Few libraries ( One can often make code to attach to things
> > like C libraries but it is a pain in the A. Often if you know what
> > needs to be done it is easier and better to write your own low
> > level code. Things like USB are tough to get at the low level
> > stuff, though )
> >  4.  Too many cryptic symbols ( : , . ! @ ; )
> >  5.  Too much stack noise ( dup swap rot over )
> > 
> > I still use Forth for all my hobby work. It is the easiest language
> > to get something working of any of the languages I've worked with.
> > ...
> > Learning to be effective with Forth has a relatively steep learning
> > curve. You have to understand the compiler and how it deals with
> > your source code. You need to get used to proper comments to handle
> > stack usage. You need to learn how to write short, easily tested
> > ( routines ). It is clearly not just a backwards LISP. It is not
> > Python either.
> > Dwight
> 
> No, it certainly isn't Python, which is my other major fast-coding
> language.
> 
> FORTH started as a small fast real-time control language; its
> inventor worked in an observatory and needed a way to control
> telescopes.  It's still used for that today.  I recently went looking
> for FORTH processors in FPGA, there are several.  One that looked
> very good was designed for robotics and machine vision work.  The
> designer posted both the FPGA design and the software, which includes
> a TCP/IP (or UDP/IP ?) stack.  He reports that the code is both much
> smaller and faster than compiled C code running on conventional FPGA
> embedded processors.
> 
Yes, that would be J1.  I've used it and even wrote a simulator for it
(in FORTH 'natch) so that I could debug my code.  It's a useful FPGA
implementation.

TTFN - Guy



Re: HPE OpenVMS Hobbyist license program is closing

2020-03-10 Thread Guy Sotomayor via cctalk
Maybe I'm forgetting, but isn't BSD (4.3/4.4, as I recall) available on
the VAX?  That seems more suitable for running on classic hardware than
moving to something newer.

Of course I got rid of all of my 11/780 and 11/785 systems (along with
a smattering of VAXStations) years ago so I don't have any particular
interest here.  ;-)

TTFN - Guy

On Tue, 2020-03-10 at 16:44 +0100, Jan-Benedict Glaw via cctalk wrote:
> On Tue, 2020-03-10 09:06:57 -0600, Warner Losh via cctalk <
> cctalk@classiccmp.org> wrote:
> > On Tue, Mar 10, 2020 at 3:48 AM Peter Corlett via cctalk <
> > cctalk@classiccmp.org> wrote
> > > Linux has taken thirty years to get this far. It's arguable what
> > > is "major" but to a rough approximation, there are no good open
> > > source clones of other operating systems of similar complexity:
> > > I'm aware of FreeDOS, AROS, EmuTOS and a few others, but they're
> > > relatively simple.
> > 
> > Linux never was a thing on the VAX that was very good. It was too
> > late in
> > its life cycle to get enough love.
> 
> I quite apologize for that!
> 
> > Linux and/or NetBSD/vax would be a good choice, though, to
> > implement the
> > VAX's system calls and execute it's binaries. Though there were
> > more
> > concerted efforts to do this years ago, but I don't know what
> > became of
> > them. Google shows a smattering of efforts littered with broken
> > links. :(
> 
> There was a vax-linux port started by others, and I cared for it for
> a
> good number of years. My life changed a lot since then, I quite
> failed
> (and failed hard!) to bring up the needed time to care for Linux,
> care
> for GCC and Binutils, GNU libc and all those programs silently
> expecting IEEE floating point support.
> 
>   I still have a good number of VAXen around, though all powered off
> and in good storage. We're actually searching for a larger room to
> put
> all the old iron in there, get them on cables (power, network and
> serial) and eventually even restart on hacking them.
> 
>   Hacking VAXen was a great thing do to! ...at least for me. I
> learned
> so much from doing so, about Linux, libc, their interface, about
> Binutils and GCC. It really made me "fit" for paid business. But lets
> face it: I'm in my forties, have a family and a day still does only
> have 24 hours.
> 
>   So... Once getting all my hardware into usable condition is
> settled,
> I'd be quite willing to hand out serial and power access to them, for
> whatever you'd like to do. (If it's not already too late.)
> 
> MfG, JBG
> 



Re: Mach

2020-01-05 Thread Guy Sotomayor via cctalk
On Sun, 2020-01-05 at 15:21 -0800, Chris Hanson via cctalk wrote:
> On Jan 5, 2020, at 2:30 PM, Guy Sotomayor via cctalk <
> cctalk@classiccmp.org> wrote:
> > 
> > It did seem for a while that a lot of things were based on Mach,
> > but
> > > 
> > > very few seemed to make it to market. NeXTstep and OSF/1, the
> > > only
> > > version of which to ship AFAIK was DEC OSF/1 AXP, later Digital
> > > UNIX,
> > > later Tru64.
> > 
> > Yes, a lot of things were based on Mach. One OS that you're
> > forgetting
> > is OS X. That is based upon Mach 2.5.
> 
> Nope, Mac OS X 10.0 was significantly upgraded and based on Mach 4
> and BSD 4.4 content (via FreeBSD among other sources). It was NeXT
> that never got beyond Mach 2.5 and BSD 4.2. (I know, distinction
> without a difference, but this is an issue of historicity.)
> 
> I think only some of the changes from Mach 2.5→3→4 made it into Mac
> OS X Server 1.0 (aka Rhapsody) so maybe that’s what you’re
> remembering.

You're probably thinking about the user space.  I was working on the
OS X kernel from 2006-2012.  I can tell you that most of the kernel
that was still Mach related (most actually got removed...about all that
was left was mach message) was 2.5 based with some enhancements.

> 
> > > MkLinux didn't get very far, either, did it?
> > > 
> > 
> > I think that was the original Linux port for PPC.
> 
> It was the original Linux port for NuBus PowerPC Macs at least. It
> was never really intended to “get very far” in the first place, it
> was more of an experimental system that a few people at Apple threw
> together and managed to allow the release of to the public.
> 
> MkLinux was interesting for two reasons: It documented the NuBus
> PowerMac hardware such that others could port their OSes to it, and
> it enabled some direct performance comparisons of things like running
> the kernel in a Mach task versus running it colocated with the
> microkernel (and thus turning all of its IPCs into function calls).
> Turns out running the kernel as an independent Mach task cost 10-15%
> overhead, which was significant on a system with a clock under
> 100MHz. Keep in mind too that this was in the early Linux 2.x days
> where Linux “threads” were implemented via fork()…

At IBM we spent a *significant* amount of time optimizing the
microkernel performance.  I recall that on a 90MHz 601 PPC, we got
round-trip RPC below 1 microsecond.

I personally spent a significant amount of time optimizing the
Pentium kernel entry/exit code and optimizing the CPU-specific
portion of Mach RPC (it actually took advantage of the x86
segmentation hardware).

> 
> I don’t recall if anyone ever did any “multi-server” experiments with
> it like were done at CMU, where the monolithic kernel were broken up
> into multiple cooperating tasks by responsibility. It would have been
> interesting to see whether the overhead stayed relatively constant,
> grew, or shrank, and how division of responsibility affected that.

The IBM microkernel project was *very* multi-server.  There were
versions of AIX and OS/2 that ran on top of the IBM microkernel (which
was a heavily modified version of Mach 3.0), where there were quite a
few OS-neutral servers (including most device drivers) that were all
in their own server tasks.

-- 
TTFN - Guy



Re: Taligent

2020-01-05 Thread Guy Sotomayor via cctalk
On Sun, 2020-01-05 at 15:06 -0800, Chris Hanson via cctalk wrote:
> On Jan 5, 2020, at 12:56 AM, Jeffrey S. Worley via cctalk <
> cctalk@classiccmp.org> wrote:
> 
> > Does Taligent Pink sound familiar?  OS/2 was ported to PowerPC,
> > and so
> > was Netware iirc.  The field was quite busy with hopeful Microsoft
> > killers.  OS/2 was to be morphed into a cross-platform o/s, to wean
> > folks from dos/x86. Then PPC kills the x86 and we all get a
> > decent
> > os.  That was the plan anyway.  I never saw OS2 for PPC or Netware
> > for OS/2, though I know both to have shipped.
> 
> Pink was the C++ operating system project at Apple that became
> Taligent. I know a couple of people who did a developer kitchen for
> Pink pre-Taligent, and I also know a number of folks who worked on
> the Taligent system and tools—and have personally seen a demo of the
> Taligent Application Environment running on AIX.
> 
> I’ve even seen a CD set for Taligent Application Environment (TalAE)
> 1.0 on AIX, and I have a beta developer and user documentation set.
> Unfortunately my understanding is that the CD sets given to employees
> to commemorate shipping TalAE were all *blank*—the rumor I’ve heard
> is that IBM considered it too valuable to give them the actual
> software that they had worked for years on. (Maybe there were tax
> implications because of what IBM valued the license at, and the fact
> that it would have to be considered compensation?)
> 
> Taligent itself was only one component of IBM’s Workplace/OS
> strategy, which was a plan to rebase everything atop Mach so you
> could run AIX and OS/2 and Taligent all at once on the same hardware
> without quite using virtual machines for it all. The idea is that
> Apple would do pretty much the same with Copland and Taligent atop
> NuKernel rather than Mach.
> 
> It would be really great to actually get the shipping Taligent
> environment and tools archived somewhere. While only bits and pieces
> of it are still in use—for example, ICU—a lot of important and
> influential work was done as part of the project. For example, the
> design of most of the unit testing frameworks today actually comes
> from *Taligent*, since Kent Beck wrote SUnit to re-create it in
> Smalltalk, and JUnit and OCUnit were based on SUnit’s design and
> everything else derived from JUnit…

No, you don't.  The object model that they used was *seriously*
deranged.  When I last looked at it there were >1200 objects and they
were so interdependent that it was nearly impossible to make a change
to one object without the change cascading across a large number of
objects.  They were also proud of the fact that on average only 6
*instructions* would be executed between method invocations...so
performance sucked because you were just doing method calls.

Rather than having a standardized "size" method for an object they
actually had code in the object look at the new operator for the
object (e.g. the binary machine code) in order to determine its
size.

As I said, I have scars from my interactions with Taligent.

-- 
TTFN - Guy



Re: cctalk Digest, Vol 64, Issue 3

2020-01-05 Thread Guy Sotomayor via cctalk
On Sun, 2020-01-05 at 23:41 +0100, Liam Proven via cctalk wrote:
> On Sun, 5 Jan 2020 at 23:30, Guy Sotomayor via cctalk
>  wrote:
> > 
> > Yes.  We first started with Mach 3.0 build MK58.  We did our final
> > fork at MK68.  We made some *significant* changes from what CMU
> > had (things like changing mach messages from IPC to RPC) and a
> > whole lot of work in the area of scheduling.
> 
> Very interesting. If you are allowed to, you should blog about this
> somewhere -- it is historic stuff.

Yea, unfortunately I've lost most of the historical documentation
starting when we were all packed up to move from Boca Raton, FL to
Austin, TX and then when I left IBM in 1997.

I still have a set of the IBM Microkernel manuals (several 1000 pages
that was all written in Framemaker) and I *may* still have a CD with
the final set of sources on it (but where that might be would be an
interesting question).

> 
> > Yes, a lot of things were based on Mach. One OS that you're
> > forgetting
> > is OS X. That is based upon Mach 2.5.
> 
> Well, firstly, no, I wasn't. I didn't mention OS X, or macOS as it's
> called now, because it's based on NeXTstep. It's a later version of
> the same OS.
> 
> Secondly, AIUI, NeXTstep used Mach 2.5 but one of the changes in Mac
> OS X 1.0 is that they moved to Mach 3 and updated the userland from
> BSD 4.4-Lite to FreeBSD then-current, hiring Jordan Hubbard to do
> much
> of that work..

No, OS X uses Mach 2.5.  I worked in the kernel group at Apple for a
number of years and am fairly familiar with the kernel.  They may have
pulled a few things from Mach 3.0, but it is still fundamentally
Mach 2.5.

> 
> > > MkLinux didn't get very far, either, did it?
> > > 
> > 
> > I think that was the original Linux port for PPC.
> 
> It was, and I think only on Apple hardware. There were a few dev
> builds and then it disappeared, IIRC.
> 
> [*Checkes*]
> 
> Yup, OldWorld-ROM NuBus PowerMacs, and later OldWorld PCI PowerMacs
> --
> but later Linux supported PCI Macs directly.
> 
> There were apparently 4 "developer releases", an R1 and an unfinished
> R2. Supplanted by Mac OS X, but apparently the Mach work really
> helped
> to get NeXTstep and "Rhapsody" bootstrapped on PowerMacs.
> 
-- 
TTFN - Guy



Re: cctalk Digest, Vol 64, Issue 3

2020-01-05 Thread Guy Sotomayor via cctalk
On Sun, 2020-01-05 at 21:54 +0100, Liam Proven via cctalk wrote:
> On Sun, 5 Jan 2020 at 19:02, Guy Sotomayor via cctalk
>  wrote:
> 
> I had been working on the IBM Microkernel (was one of the original 6
> people on that team).  It was eventually to form the basis of OS/2
> for PPC.  The way that the microkernel project was structured was
> that most of the "OS" was personality neutral (e.g. could be used
> for Unix, OS/2, DOS, etc) and then there was an OS personality that
> ran on top of the infrastructure.  OS/2 on PPC was supposed to be
> the first to ship.
> 
> I think I read that it was based on CMU Mach -- is that right?

Yes.  We first started with Mach 3.0 build MK58.  We did our final
fork at MK68.  We made some *significant* changes from what CMU
had (things like changing mach messages from IPC to RPC) and a
whole lot of work in the area of scheduling.


> It did seem for a while that a lot of things were based on Mach, but
> very few seemed to make it to market. NeXTstep and OSF/1, the only
> version of which to ship AFAIK was DEC OSF/1 AXP, later Digital UNIX,
> later Tru64.

Yes, a lot of things were based on Mach. One OS that you're forgetting
is OS X. That is based upon Mach 2.5.

> MkLinux didn't get very far, either, did it?
> 

I think that was the original Linux port for PPC.



-- 
TTFN - Guy



Re: cctalk Digest, Vol 64, Issue 3

2020-01-05 Thread Guy Sotomayor via cctalk
On Sun, 2020-01-05 at 03:56 -0500, Jeffrey S. Worley via cctalk wrote:
> A lot of odd PPC work happened in a group a friend worked for in
> Austin TX, but not sure if they did Netware work there.  There was a
> lot of OS2 work there as well, but that's off track a bit more.
> thanks
> Jim
> 
> I was lead tech at a small computer company in Asheville, NC. in
> those days.  I ran OS/2 from version 2 in the early 90's to
> Ecomstation in the early 2000's.
> Does Taligent Pink sound familiar?  OS/2 was ported to PowerPC, and
> so was Netware iirc.  The field was quite busy with hopeful Microsoft
> killers.  OS/2 was to be morphed into a cross-platform o/s, to wean
> folks from dos/x86.  Then PPC kills the x86 and we all get a decent
> os.  That was the plan anyway.  I never saw OS2 for PPC or Netware
> for OS/2, though I know both to have shipped.
> Jeff
> 

Yes, Taligent Pink is very familiar (and I still have the scars to
prove it!).  I was part of the IBM team that evaluated Pink.  We (IBM)
were mainly looking at it to see how to converge OSes between IBM and
Apple...at least in terms of the micro-kernel.  The Pink team was,
shall we say, "difficult to work with".

I had been working on the IBM Microkernel (was one of the original 6
people on that team).  It was eventually to form the basis of OS/2 for
PPC.  The way that the microkernel project was structured was that most
of the "OS" was personality neutral (e.g. could be used for Unix, OS/2,
DOS, etc) and then there was an OS personality that ran on top of the
infrastructure.  OS/2 on PPC was supposed to be the first to ship.

-- 
TTFN - Guy


Re: Ordering parts onesie twosie

2020-01-03 Thread Guy Sotomayor via cctalk
On Fri, 2020-01-03 at 09:22 -0400, Paul Berger via cctalk wrote:
> On 2020-01-03 2:51 a.m., Chuck Guzis via cctalk wrote:
>   On 2020-01-02 9:58 p.m., Nemo Nusquam via cctalk wrote:
>   > Well, Canada Post stopped delivering to individual houses years
>   > ago.
> I assume that rural delivery still goes house-to-house.
> --Chuck   
> Rural delivery is done to mail boxes along the roads, which means the
> people have to travel from their house to said road to get their
> mail.  We lived on a farm for part of the time I was growing up and
> for us that was 3/4 of a mile, and that was not uncommon in the area,
> for some it was even further.  Quite different from walking a block,
> maybe, to a community box.
> 
> 
> 
Yea, our bank of mailboxes is 2.5 miles from our house.  We got into heated 
arguments with the Post Office because we didn't go down and empty our box 
every day.  We finally got a P.O. Box at a different (more convenient) Post 
Office.  Now we have to deal with folks who don't understand that our mailing 
address and physical address are different.  :-/
It also infuriates me that *every* other shipper (UPS, FedEx) can deliver 
right to our door but USPS can't be bothered.
-- 
TTFN - Guy


Re: RCA 1802s available

2019-12-17 Thread Guy Sotomayor Jr via cctalk
According to the website, they are “genuine” original RCA parts.

TTFN - Guy

> On Dec 17, 2019, at 11:22 AM, crufta cat via cctalk  
> wrote:
> 
> They are likely later than the 70s and even more likely Intersil made parts
> from
> the 80s and even later.  Same for 1806s.
> 
> People forget that Chrysler used them for engine controls in the late 80s.
> 
> There is also "refurb parts" of questionable quality and origin.
> 
> Allison
> 
> On Tue, Dec 17, 2019 at 1:56 PM Al Kossow via cctalk 
> wrote:
> 
>> 
>> 
>> On 12/17/19 9:30 AM, Will Cooke via cctalk wrote:
>>> An ad was emailed to me today with an interesting item:  RCA 1802
>> processors.
>> 
>> Not a bad price, did you buy any?
>> Were they actually RCA-branded parts from the 70's?
>> 
>> 
>> 



Re: Bad heads on RL02: Worth replacing

2019-12-16 Thread Guy Sotomayor Jr via cctalk



> On Dec 16, 2019, at 3:07 PM, Chris Zach via cctalk  
> wrote:
> 
> So one of my RL02 drives (bought on Ebay years ago) is eating RL02 packs. 
> Makes tings, then the disks have errors in my other RL02 drive. Took a bit to 
> figure out which drive was eating what, but I'm 100% certain it's this newer 
> drive.
> 
> So pulled the heads. The top one had significant gunk on it, the bottom one a 
> bit. Pics below.
> 
> Top: https://i.imgur.com/FELhF9X.jpg
> Bottom: https://i.imgur.com/Tmsf5Nd.jpg
> 
> With alcohol and lintless swabs I managed to clean both of the heads up.
> 
> Top: https://i.imgur.com/gmACM4R.jpg
> Bottom: https://i.imgur.com/SfZQV5F.jpg
> 
> Then put them back in the drive and mounted a scratch pack. With finger on 
> the load/run I let the drive spin up and when I heard tinging I immediately 
> spun down. Hopefully I didn't trash my scratch pack.
> 
> Top head has gunk, bottom one had a few flecks, but looks pretty ok.
> Top: https://i.imgur.com/EAvgmuH.jpg
> Bottom: (picture didn't upload)
> 
> Obviously the head is crashing, any idea why and if it's worth replacing the 
> head or should I put this drive out for parts? Yes I cleaned the RL02 pack 
> before putting it in.
> 
> Never dull.

I would try replacing the head(s).

TTFN - Guy



Re: SMD disk specifications

2019-12-14 Thread Guy Sotomayor Jr via cctalk
Thanks Eric, I somehow missed that one.

TTFN - Guy

> On Dec 13, 2019, at 11:42 AM, Eric Smith  wrote:
> 
> On Fri, Dec 13, 2019 at 10:55 AM Guy Sotomayor via cctalk 
> mailto:cctalk@classiccmp.org>> wrote:
> I’ve been trying to find *detailed* specifications (mainly detailed signal 
> timings) for the SMD disk interface but all I’ve found so far are the 
> interface specifications for individual disks (CDC, Fujitsu, etc).  I’ve 
> looked in the usual places (bitsavers mostly) and haven’t found the spec 
> itself.  If anyone has any pointers, I’d appreciate it.
> 
> You've seen that the SMD spec (as of March 1981) is on Bitsavers?
> pdf/cdc/discs/interface_specs/64712400_SMDCableSpec_Mar81.pdf 
> 
> That doesn't cover later enhancements such as SMD-E.
> 



SMD disk specifications

2019-12-13 Thread Guy Sotomayor via cctalk
Hi,

I’ve been trying to find *detailed* specifications (mainly detailed signal 
timings) for the SMD disk interface but all I’ve found so far are the interface 
specifications for individual disks (CDC, Fujitsu, etc).  I’ve looked in the 
usual places (bitsavers mostly) and haven’t found the spec itself.  If anyone 
has any pointers, I’d appreciate it.

Thanks.

TTFN - Guy

Re: Converting C for KCC on TOPS20

2019-12-11 Thread Guy Sotomayor Jr via cctalk
I think the challenge will be whether binutils (where nm, objcopy and objdump 
live) supports the object file format used by TOPS20.

I haven’t looked at the TOPS20 object file format, but it seems like the best 
approach would be to have the C compiler generate symbols as it normally would 
and write a utility to “fixup” the too-long symbols rather than munging the 
source (which is basically what you’re proposing using the stuff from 
binutils…just a bit more work).
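For what it’s worth, the renaming step is easy to script.  Here is a minimal
sketch (Python; the function name `shorten_symbols` is invented for the
example) of generating unique six-character replacements for a list of global
symbols, assuming the TOPS20 linker compares names case-insensitively as
described above.  The resulting old/new pairs could then be fed to
`objcopy --redefine-syms` as an "old new" pair per line.

```python
def shorten_symbols(symbols, maxlen=6):
    """Map each symbol to a unique name of at most `maxlen` characters.

    Uniqueness is enforced on the upper-cased form, since the linker in
    question is case-insensitive.  Truncated names that collide get a
    numeric suffix squeezed into the final characters.
    """
    mapping = {}
    seen = set()
    for sym in symbols:
        base = sym[:maxlen]
        candidate = base
        n = 0
        while candidate.upper() in seen:
            n += 1
            suffix = str(n)
            candidate = base[:maxlen - len(suffix)] + suffix
        seen.add(candidate.upper())
        mapping[sym] = candidate
    return mapping

# Example: the last two names collide with the first after truncation.
m = shorten_symbols(["interpret_instruction", "interpret_branch", "Interpret"])
print(m)  # {'interpret_instruction': 'interp', 'interpret_branch': 'inter1',
          #  'Interpret': 'Inter2'}
```

In a real pipeline you would harvest the symbol list from `nm` output on the
compiled objects, write the mapping to a file, and let objcopy rewrite the
object files in place.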

TTFN - Guy

> On Dec 11, 2019, at 9:07 AM, Guy N. via cctalk  wrote:
> 
> On Wed, 2019-12-11 at 00:25 +, David Griffith via cctalk wrote:
>> I'm trying to convert some C code[1] so it'll compile on TOPS20 with KCC. 
>> KCC is mostly ANSI compliant, but it needs to use the TOPS20 linker, which 
>> has a limit of six case-insensitive characters.  [...] Does anyone here have 
>> any knowledge of existing tools or techniques to do what I'm trying to do?
> 
> Is "objcopy --redefine-syms" any help?  Compile the code as-is to
> produce object files, use nm or objdump to find all of the global
> symbols, generate unique six-character names for them, and then use
> objcopy to create new object files with the new names.
> 
> Or have I completely missed the point?  I'm not familiar with KCC, does
> it produce object modules in a format objcopy doesn't support?
> 
> I know someone who was working on gcc support for the PDP-10, I wonder
> if he's still doing that or has given up
> 



Re: FPGA based 3174 replacement

2019-11-18 Thread Guy Sotomayor Jr via cctalk
I have several Zynq boards that I’m using (or will be) for other projects but 
having a replacement
for the 3174 would be nice.

TTFN - Guy

> On Nov 18, 2019, at 9:51 AM, Dave Wade via cctalk  
> wrote:
> 
> Chris,
> I have several Spartan boards, no Zync...
> ... but would be interested to see how it works...
> Dave
> 
>> -Original Message-
>> From: cctalk  On Behalf Of cw via cctalk
>> Sent: 18 November 2019 17:43
>> To: cctalk@classiccmp.org
>> Subject: FPGA based 3174 replacement
>> 
>> As it happens I wrote a Verilog module last week for serializing and
>> deserializing 3270 coax frames, without realizing someone had already done
>> this with an arduino. What I’ve written is intended for a zynq device but is
>> general enough to be used in other designs.
>> 
>> At the moment I have petalinux installed and can send frames with a small 
>> test
>> program and see them on my scope. A loop back configuration also seems to
>> work. I haven’t build the coax driver circuit yet so I can’t be sure of it’s 
>> correct
>> operation.
>> 
>> I’d be willing to make this code available tonight. Although I’m not sure how
>> many people have the right hardware lying around.
>> 
>> Chris
> 



Re: 3270 controller simulation

2019-11-18 Thread Guy Sotomayor Jr via cctalk
Yes, I have my 3174 communicating with Hercules (as well as my MP3000).

TTFN - Guy

> On Nov 18, 2019, at 8:45 AM, Al Kossow via cctalk  
> wrote:
> 
> 
> 
> On 11/18/19 8:22 AM, Grant Taylor via cctalk wrote:
> 
>> I believe the *Remote* 3174 (et al.) is one of these devices from IBM.
> 
> The bigger 3174s have ethernet as an option, as did the Memorex/Telex 1174
> More commonly, 3174s had token ring and people have gone Cisco to 3174 to
> get coax terminals talking to Hercules. I believe Guy S., Dave W. and Jay W.
> have done this. I have the parts to do either token ring or ethernet on a
> 3174, and just got probably one of the last 1174s in the world with ethernet
> (but no docs)
> 
> One of my projects is to try to get something similar going that is 
> open-source
> which is why I was interested in ajk's arduino project
> 
> 



Re: LISP implementations on small machines

2019-10-03 Thread Guy Sotomayor Jr via cctalk



> On Oct 3, 2019, at 10:26 AM, Paul Koning via cctalk  
> wrote:
> 
> 
> 
>> On Oct 3, 2019, at 12:39 PM, Chuck Guzis via cctalk  
>> wrote:
>> 
>> On 10/3/19 9:01 AM, Noel Chiappa via cctalk wrote:
>> 
>>> The PDP-6 and KA10 (basically a re-implementation of the PDP-6 architecture)
>>> both had cheapo versions where addresses 0-15 were in main memory, but also
>>> had an option for real registers, e.g. in the PDP-6: "The Type 162 Fast
>>> Memory Module contains 16 words with a 0.4 usecond cycle." The KA10 has
>>> a similar "fast memory option".
>> 
>> A bit more contemporary example might be the low-end PIC
>> microcontrollers (e.g. the 12F series).   Harvard architecture (14 bit
>> instructions, 8 bit data), but data is variously described as
>> "registers" (when used an instruction operand) or "memory" when
>> addressed indirectly.   That is, the 64 bytes of SRAM can be referred to
>> as either a memory location or as a register operand.
> 
> Then again, the PDP-10 has that "two ways to refer to it" as well.  In that 
> case, you do have dedicated register logic, and what happens is that memory 
> addresses 0-15 are instead redirected to the register array.  The same 
> applies to the EL-X8.  The way you can address things doesn't necessarily 
> tell you what sort of storage mechanism is used for it.
> 

So does the PDP-11.  The 8 registers are mapped to the top 8 words of memory so 
you can do some quite interesting things.  It is also possible to run a (small) 
program in only the registers (e.g. no memory at all).
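The "registers addressable as memory" idea can be sketched in a few lines.
This toy model (Python; the addresses and word granularity are illustrative
only, not a faithful PDP-11 model) simply redirects loads and stores aimed at
the top eight locations of the address space into a register array:

```python
# Toy illustration: accesses to the top eight word locations are redirected
# to the register file, the way R0-R7 appear at the top of a PDP-11's
# address space.  REG_BASE is illustrative.

REG_BASE = 0o177700  # "top of memory" alias region for R0..R7

class AddressSpace:
    def __init__(self):
        self.ram = {}          # sparse "real" memory
        self.regs = [0] * 8    # R0..R7

    def read(self, addr):
        if REG_BASE <= addr <= REG_BASE + 7:
            return self.regs[addr - REG_BASE]   # register read, not RAM
        return self.ram.get(addr, 0)

    def write(self, addr, value):
        if REG_BASE <= addr <= REG_BASE + 7:
            self.regs[addr - REG_BASE] = value  # "memory" store lands in a register
        else:
            self.ram[addr] = value

m = AddressSpace()
m.write(REG_BASE + 2, 0o1234)   # store through a memory address...
assert m.regs[2] == 0o1234      # ...shows up in R2
```

This is what makes tricks like a register-only program possible: as far as
addressing is concerned, the registers are just eight more locations.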

TTFN - Guy



Re: Early Univac Commercial

2019-09-20 Thread Guy Sotomayor Jr via cctalk
Yea, I recall having to take that test.  I almost didn’t because my degree is 
EE but then
they realized I was applying for SW positions.  Go figure!  ;-)

Worked there for 17+ years.

TTFN - Guy

> On Sep 20, 2019, at 8:52 AM, Chuck Guzis via cctalk  
> wrote:
> 
> On 9/20/19 8:16 AM, William Sudbrink via cctalk wrote:
>> Isn't there also one that's a "help wanted" for programming positions?
>> I seem to recall that they didn't say anything about professional training
>> or experience, just things like "do you have a logical, ordered way of
>> thinking?"
> 
> I don't recall, but IBM had a "computer aptitude test" that it
> administered to just about anyone involved in sales or the technical
> end, regardless of education.   I recall taking such a test, though I
> turned down IBM's job offer.
> 
> --Chuck
> 



Re: [Simh] Fwd: VAX + Spectre

2019-09-18 Thread Guy Sotomayor Jr via cctalk



> On Sep 18, 2019, at 9:59 AM, Chris Elmquist  wrote:
> 
> On Wednesday (09/18/2019 at 09:19AM -0700), Guy Sotomayor Jr via cctalk wrote:
>> 
>> 
>>> On Sep 18, 2019, at 12:42 AM, Liam Proven via cctalk 
>>>  wrote:
>>> 
>>> On Wed, 18 Sep 2019 at 02:19, Paul Koning via cctalk
>>>  wrote:
>>>>> ...
>>>> Speaking of timing, that reminds me of two amazing security holes written 
>>>> up in the past few years.  Nothing to do with the Spectre etc. issue.
>>>> 
>>>> One is the recovery of speech from an encrypted VoIP channel such as 
>>>> Skype, by looking at the sizes of the encrypted data blocks.  (Look for a 
>>>> paper named "Hookt on fon-iks" by White et al.)  The fix for this is 
>>>> message padding.
>>>> 
>>>> The other is the recovery of the RSA private key in a smartphone by 
>>>> listening to the sound it makes while decrypting.  The fix for this is 
>>>> timing tweaks in the decryption inner loop.  (Look for a paper by, among 
>>>> others, Adi Shamir, the S in RSA and one of the world's top 
>>>> cryptographers.)
>>>> 
>>>> It's pretty amazing what ways people find to break into security 
>>>> mechanisms.
>>> 
>>> ... Wow.
>>> 
>>> *Wow.*
>>> 
>>> Thanks for those!
>> 
>> In the deep dark days of yore, I recall an actual demonstration of being 
>> able to read/replicate the contents of the screen (CRT) of a PC by looking 
>> at the AC (e.g. mains) that the PC was plugged into.  Admittedly it was 
>> relatively low fidelity, but yikes!
> 
> https://en.wikipedia.org/wiki/Van_Eck_phreaking 
> <https://en.wikipedia.org/wiki/Van_Eck_phreaking>

Cool!

Yea, I had to make a trip to a “secure facility” once and there were entire 
“tempest” rooms with conditioned power and no external communications 
equipment.  The room itself (think *large*) was a Faraday cage with a vault 
door that was kept closed whenever there was sensitive stuff going on.  Since 
I didn’t have a security clearance, the door was open and everywhere I went 
there were red lights in the rooms/halls that would be on to indicate that no 
sensitive information should be discussed (makes you feel really wanted).  ;-)

TTFN - Guy

Re: [Simh] Fwd: VAX + Spectre

2019-09-18 Thread Guy Sotomayor Jr via cctalk



> On Sep 18, 2019, at 12:42 AM, Liam Proven via cctalk  
> wrote:
> 
> On Wed, 18 Sep 2019 at 02:19, Paul Koning via cctalk
>  wrote:
>>> ...
>> Speaking of timing, that reminds me of two amazing security holes written up 
>> in the past few years.  Nothing to do with the Spectre etc. issue.
>> 
>> One is the recovery of speech from an encrypted VoIP channel such as Skype, 
>> by looking at the sizes of the encrypted data blocks.  (Look for a paper 
>> named "Hookt on fon-iks" by White et al.)  The fix for this is message 
>> padding.
>> 
>> The other is the recovery of the RSA private key in a smartphone by 
>> listening to the sound it makes while decrypting.  The fix for this is 
>> timing tweaks in the decryption inner loop.  (Look for a paper by, among 
>> others, Adi Shamir, the S in RSA and one of the world's top cryptographers.)
>> 
>> It's pretty amazing what ways people find to break into security mechanisms.
> 
> ... Wow.
> 
> *Wow.*
> 
> Thanks for those!

In the deep dark days of yore, I recall an actual demonstration of being able 
to read/replicate the contents of the screen (CRT) of a PC by looking at the AC 
(e.g. mains) that the PC was plugged into.  Admittedly it was relatively low 
fidelity, but yikes!

TTFN - Guy

Re: UNIBUS FTGH: EMM / CMU MICRORAM memories

2019-09-17 Thread Guy Sotomayor Jr via cctalk



> On Sep 17, 2019, at 8:17 AM, Stefan Skoglund  wrote:
> 
> mån 2019-09-16 klockan 11:17 -0700 skrev Guy Sotomayor Jr via cctalk:
>> 
>> And that’s just the HW.  Hydra (the OS that ran on C.MMP) was a
>> capability based system (so you needed the proper capability to do
>> anything).  I recall at one point the grad student who was doing work
>> on the file system, “lost” the root capability to the file system…so
>> it was no longer possible to create new file systems.
>> 
>> 
> 
> Which also means that for boot-strap you need to create a correct fully
> populated binary dump of all the file system in the system.
> A dump which is sane which regards to capabilities and user
> capabilities 
> 
> It is a little like kick-starting a database system ie writing the
> system database (pg_system) into such a state that is it possible to
> correctly run for example:
> ---
> create tablespace 'stefan';
> create schema 'pg';
> ---
> and so on…
> 

The problem (and I’d be thrilled to be wrong) is that the probability of having
a binary dump of Hydra is 0 for small values of 0.  Part of the issue is that
C.MMP had a few RP06 drives (4-6?).  If it were ever saved, it would have to
be to 9-track tapes and I don’t recall any sort of backup facilities.  The RP06
packs *might* still be around but I wouldn’t hold out too much hope for them
either.

TTFN - Guy




Re: UNIBUS FTGH: EMM / CMU MICRORAM memories

2019-09-16 Thread Guy Sotomayor Jr via cctalk
11/40’s were pretty ubiquitous at CMU when I was there and at least as far as I 
could tell, were all configured pretty much the same (in that they all had 
custom writable control store).  I personally dealt with 3 different sets of 
11/40s:

 * A single 11/40 with WCS that I used for doing some image processing work
 * 2 11/40’s tied together with a prototype of C.MMP’s cross-point switch
 * C.MMP (16-way PDP-11…I saw it running with 4 11/20s and 12 11/40s).  At the
   end the 11/20s were removed and just the 11/40s remained.

There were only 2 11/45s that I knew of.  The first was the “front end” that 
sat in front of all of the terminals and allowed connection to the various 10s 
(at the time there were 3: 2 KA10s and a KL10), C.MMP and CM*.  The other 11/45 
ran the XGP (Xerox Graphics Printer)…granddaddy of laser printers so that we 
could get “high quality” output (versus line printer).

TTFN - Guy

> On Sep 16, 2019, at 2:27 PM, Fritz Mueller via cctalk  
> wrote:
> 
> First off, I've had a couple of follow-ups on these units, so they are spoken 
> for at this point.
> 
> The member with first dibs has also offered to scan the docs and see that 
> they make their way to Al.
> 
> I was wondering if these were c.mmp cast-offs?  Guy: I encountered these in 
> the CMU computer club hardware room (Doherty Hall basement, I think?) circa 
> 1986.  There were a couple of '11/40s adjacent, and those did have some sort 
> of custom writable control store cards.
> 
> The computer club was cleaning house, so I hauled off an '11/45 with CPU 
> spares that looked pretty stock, the aforementioned memory units, and a rack 
> mount Tek 'scope (about all I could convince my friends to help me haul off 
> campus at the time :-)
> 
> Not sure what ever happened to the rest of the equipment that was down there. 
>  I know they had a couple of working Altos, on a thick net segment with the 
> old vampire transceivers that had the little round glass windows in an 
> aluminum box.  And what must have been parts of an earlier PDP (I remember a 
> smallish teletype bolted on to a piece of white Formica desktop.)
> 
>   --FritzM.
> 



Re: UNIBUS FTGH: EMM / CMU MICRORAM memories

2019-09-16 Thread Guy Sotomayor Jr via cctalk
I don’t know.  It would be hard to replicate because of the custom HW and the 
custom uCode that ran on the 11/40s.  I even think that the 11/20s were 
modified as well.  So trying to figure that out would be “interesting”.  ;-)  
There is documentation on bitsavers that covers the custom uCode HW for the 
11/40s.  The MMU as I recall was also radically different than what was 
standard on 11/40s (and non-existent on the 11/20s…the 11/20 changes started 
to get to be so large, I believe later on they just ditched the 11/20s and 
C.MMP was just all 11/40s) to allow for really large memory spaces…I don’t 
recall what the maximum possible memory on C.MMP was; it did have 1.2MB while 
I was there.

And that’s just the HW.  Hydra (the OS that ran on C.MMP) was a capability 
based system (so you needed the proper capability to do anything).  I recall at 
one point the grad student who was doing work on the file system, “lost” the 
root capability to the file system…so it was no longer possible to create new 
file systems.

Since C.MMP was a “one off” system, don’t expect (even if the SW survives) that 
there’s an “installation guide”.  ;-)  It was pretty organic.  Last and not 
least, all of the code was either PDP-11 assembler or BLISS-11.  It was all 
cross built from the (heavily) modified TOP-10 systems that the CS department 
was running.

Hydra did a number of things that eventually led to Accent and then Mach 
(portions of which are still in use in the guts of OS X).  It was what we would 
call today a microkernel system in that the kernel was the only thing that ran 
in privileged mode.  Everything else ran as user processes (file system, 
drivers, terminal system, etc).  As I said, it was a capability based system, 
so to use something you needed to have a capability to it (files didn’t have 
Unix style permissions…if you had a capability to a file, that capability 
determined what you could do to the file).  It had a number of reliability 
traits: it could detect failures in HW and in SW and restart the appropriate 
failed item.  In the case of CPUs and memory, it could “wall off” the failed 
component and cause diagnostics to be run to either isolate the problem 
further or determine that the failure is no longer present.

TTFN - Guy

> On Sep 16, 2019, at 10:59 AM, Paul Koning  wrote:
> 
> 
> 
>> On Sep 16, 2019, at 1:52 PM, Guy Sotomayor Jr via cctalk 
>>  wrote:
>> 
>> The only thing that I believe would have used these would have been C.MMP.  
>> It had 1.2MB of memory on it when I was there.
>> 
>> TTFN - Guy
> 
> It's been a long time since I've heard that reference.  Did any of that 
> software get preserved?  I wonder how hard it would be to make SIMH handle it.
> 
>   paul
> 



Re: UNIBUS FTGH: EMM / CMU MICRORAM memories

2019-09-16 Thread Guy Sotomayor Jr via cctalk
The only thing that I believe would have used these would have been C.MMP.  It 
had 1.2MB of memory on it when I was there.

TTFN - Guy

> On Sep 16, 2019, at 10:45 AM, Al Kossow via cctalk  
> wrote:
> 
> I would be interested in putting up the later docs
> I wonder if Guy remembers what these were used for at CMU
> 
> On 9/16/19 10:19 AM, Shoppa, Tim via cctalk wrote:
>> The Microram was a multipurpose solid state memory chassis sold by EMM 
>> (Electronic Memories and Magnetics) with what we called later in the 1970's 
>> a "personality board" that plugged it into each different CPU's backplane.  
>> They sold a similar system (maybe even plug compatible at some level) with 
>> core planes under "Micromemory" brand name. I see we already have a "emm" 
>> directory in bitsavers with docs about some of their core products.
>> 
> 



Re: SMD disks

2019-08-30 Thread Guy Sotomayor Jr via cctalk
I’m actively working on SMD and ESDI emulators.  However, given my work 
schedule this is a long term project.  :-(

TTFN - Guy

> On Aug 30, 2019, at 10:56 AM, Jonathan Haddox via cctalk 
>  wrote:
> 
> With SMD disks even harder to come by than MFM disks, has there been any 
> plug-in replacements developed for them? I've seen MFM disk emulators, 
> haven't seen SMD ones though, anyone know if they exist?  



Re: Shipping from Europe to USA

2019-08-25 Thread Guy Sotomayor Jr via cctalk


> On Aug 25, 2019, at 2:05 AM, Noel Chiappa via cctalk  
> wrote:
> 
>> From: Jon Elson
> 
>> I have NEVER had even the SLIGHTEST damage with FedEx, even their
>> ground service. This could just be statistical chance
> 
> This. I once had FexEx Ground destroy the entire packaging of a shipment (one
> of those rigid plastic tubs, sealed closed with those tension tapes) so badly
> they had to build entirely new packaging for it.
> 
> Assume _all_ shippers will throw your item across the room, and pack
> accordingly - because they will.
> 

I have found that if the item is packed *appropriately* in a crate and then put
on a pallet it receives much gentler handling than something that’s been stuffed
in a cardboard box.

It all comes down to what is the item worth to *you*.  Yes, doing what I 
proposed
will cost more in shipping but what is that cost relative to the value (to you) 
of the
item and the difficulty in replacing it?

TTFN - Guy



Re: S/23 machine update card

2019-08-19 Thread Guy Sotomayor Jr via cctalk



> On Aug 19, 2019, at 9:35 AM, Dennis Boone  wrote:
> 
>> The uCode in the S/23 is 8085 assembly code that is contained within
>> the ROMs.  The ROMs have the ability to be patched and the card
>> you’re referencing is used to hold those updates.  So without that
>> card you’re not able to apply any ROM updates (which are loaded each
>> boot).
> 
> Ah, ok, that makes sense.  It's only 16k of RAM.
> 
>> It’s been long enough that I don’t recall what (if any) updates there
>> are and when (and from what) they’re loaded.
> 
> When the machine powers up, it pre-enters a command (PROC START?  PROC
> INIT?  Brane faid.) which could presumably load firmware from diskette.
> There aren't a lot of other options.  They could be loaded from fixed
> disk if you had one, but they'd have to get there somehow.

Yea, that’s part of setting up the fixed disk.  I was working on other projects
(mainly the PC) at that point.

> 
>> The system architecture allows for *much* more than the 64KB normally
>> accessible by the 8085 CPU.  The memory is bank switched.  There is a
>> fixed ROM and fix RAM portion of the address space and a bank
>> switched ROM and RAM portion of the address space.  16KB of fixed
>> (for ROM/RAM) sticks in my head for some reason.  I don’t recall the
>> granularity of the bank switched areas.
> 
> Right, the memory map for all of that is in the service manuals.  The
> pageable sections each have 16 possible pages, and footnotes indicate
> that two ROM and one RAM (think I have that right way round) pages are
> not used.  A total of 32k of address space for each of ROM and RAM, so
> it would make sense that the pages are 16k, and that the fixed portions
> are 16k also.
> 
> What's not there is how the RAM on the card is paged in.  The base RAM
> card and the feature RAM card are mentioned, but I don't believe the
> details of their mapping is described.

I never actually used a version with actual ROMs (except for the one I have
sitting here in my shop).  All development was done with special RAM cards
in place of the ROM.  Made development *much* easier.

However, development was not without its pain.  When I started, a full build
of the base software (e.g. full ROM image) took a week (yep, 7 days).  So
we would carry around “patches”.  Both “blessed” patches and our own
individual test patches.

Getting those into the “official” build cleanly usually resulted in 2 or 3 
attempts, because how you needed to develop a patch was different from how you 
actually put the change into the built source.  So when a new build came out, 
tests would be run and additional patches applied.  It was a real pain.

We were jumping up and down for joy when the build was moved to the
mainframe and we would get builds back in ~24 hours!

> 
>> The patching was accomplished by having each major or critical
>> function in the ROM be dispatched through a call table (that is
>> placed in RAM at boot and can be “patched” to point to a different
>> function).  It was *more* than just the ROM address as it also
>> contained the bank # of the ROM as well since (with few exceptions)
>> all calls were “long calls”.
> 
> Thanks for the enlightenment!  Were you involved in the development of
> these machines?

Yes, it was my first job out of college.  I wrote about 20% of the ROM code:

 * floating point code…which because it was a “business” machine was all done 
   in BCD
 * formatting code (anything that is done via print and print using).  I don’t 
   recall if I did anything on the “input” side.
 * unwinding expressions when you do a “list”.  The source is always thrown 
   away and just the bytecode is retained so the source needs to be 
   “recreated”.
 * a bunch of other stuff that I can’t recall now.

> 
> Presumably there's an I/O to be done to the mapping hardware as part of
> a "long call”?

That is correct.

TTFN - Guy



Re: S/23 machine update card

2019-08-19 Thread Guy Sotomayor Jr via cctalk
The uCode in the S/23 is 8085 assembly code that is contained within the ROMs.
The ROMs have the ability to be patched and the card you’re referencing is used 
to
hold those updates.  So without that card you’re not able to apply any ROM 
updates
(which are loaded each boot).

It’s been long enough that I don’t recall what (if any) updates there are and 
when (and
from what) they’re loaded.

The system architecture allows for *much* more than the 64KB normally 
accessible by
the 8085 CPU.  The memory is bank switched.  There is a fixed ROM and fix RAM 
portion
of the address space and a bank switched ROM and RAM portion of the address 
space.
16KB of fixed (for ROM/RAM) sticks in my head for some reason.  I don’t recall 
the
granularity of the bank switched areas.

There was a lot of confusion when the S/23 came out about what the ROM/RAM
specifications (192KB of ROM, 128KB of RAM) because an 8085 could only address
64KB.  ;-)

The patching was accomplished by having each major or critical function in the 
ROM 
be dispatched through a call table (that is placed in RAM at boot and can be 
“patched” 
to point to a different function).  It was *more* than just the ROM address as 
it also
contained the bank # of the ROM as well since (with few exceptions) all calls 
were
“long calls”.
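The patch mechanism described above can be sketched quite compactly.  This is
an illustrative model only (Python; the service names, bank numbers and
addresses are invented for the example; the real S/23 did all of this in 8085
assembly): every major routine is reached through a call table copied into RAM
at boot, each entry holding both a ROM bank number and an address, and a ROM
update just overwrites table entries.

```python
# RAM-resident call table built at boot: service name -> (bank, address).
# Entries are invented for illustration.
call_table = {
    "FP_ADD": (2, 0x4100),
    "PRINT":  (3, 0x5200),
}

def long_call(service):
    """Dispatch through the RAM table: select the ROM bank, then call."""
    bank, addr = call_table[service]
    return f"select bank {bank}, call {addr:#06x}"

def apply_patch(service, bank, addr):
    """A ROM update loaded at boot simply rewrites the table entry."""
    call_table[service] = (bank, addr)

assert long_call("FP_ADD") == "select bank 2, call 0x4100"
apply_patch("FP_ADD", 5, 0x0040)   # redirect to replacement code
assert long_call("FP_ADD") == "select bank 5, call 0x0040"
```

Because every long call is indirected through the table, patched and unpatched
routines are called identically, which is what lets updates be applied at each
boot without touching the ROMs themselves.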

TTFN - Guy

> On Aug 18, 2019, at 6:11 PM, Jon Elson via cctalk  
> wrote:
> 
> 
> On 08/18/2019 06:38 PM, Dennis Boone via cctalk wrote:
>> Folks,
>> 
>> I've determined that the piece of my S/23 that's causing the power
>> supply to blow its 12V fuse is the machine update card.  The manual says
>> this provides additional R/W storage for microprogram updates.  That
>> sounds like something that wouldn't be necessary for normal operation.
>> 
>> 
> Not knowing anything about this system, but you might check the card for a 
> bad Tantalum capacitor.
> 
> Jon



Re: Identification of an HP minicomputer

2019-08-15 Thread Guy Sotomayor Jr via cctalk


> On Aug 15, 2019, at 6:57 PM, Mark Linimon  wrote:
> 
> On Thu, Aug 15, 2019 at 02:27:16PM -0700, Guy Sotomayor Jr via cctalk wrote:
>> Between work and preparing for potential fire evacuations (they're
>> expecting ~300 wild fires in my area this fire season: we've only had 
>> about 6 so far so I expect *a lot* more soon)
> 
> Yikes!  Please stay safe.

That’s the plan.  Thanks.

We’ve had a fire (when I say fire, I mean wildfire and not what most folks are 
familiar with which are structure fires) near town yesterday and another one (a 
bit further away) today.  So it’s picking up.  To hit the expected 300 for this 
season, we’ll need about 2 per day!  Fortunately most so far have been fairly 
small (20-80 acres).  I know that sounds *large* (our property is 10 acres) but 
last year’s fire in Paradise was over 150,000 acres (~240 sq miles) and 
destroyed over 18,000 buildings.  It is really hard to imagine the scale of the 
devastation.

So everyone is taking this *much* more seriously now.  Today’s fire had 6+ 
fire engines, 2 bulldozers and 2 air tankers respond.

TTFN - Guy



Re: Identification of an HP minicomputer

2019-08-15 Thread Guy Sotomayor Jr via cctalk
Thanks Marc.

What I’ve done is about all I have time for at the moment.  Between work and 
prep’ing for potential fire evacuations (they’re expecting ~300 wild fires in 
my area this fire season…we’ve only had about 6 so far…so I expect *a lot* more 
soon) all of my time is gone.  :-(

TTFN - Guy

> On Aug 15, 2019, at 2:22 PM, Curious Marc via cctalk  
> wrote:
> 
> I found Brent Hilpert’s site most useful in getting a quick meaning for these 
> numbers:
> http://madrona.ca/e/HP21xx/index.html
> http://madrona.ca/e/HP21xx/iointerfaces.html
> There is also a very useful series 1000 reference manual that lists most of 
> the configs and options and cards, I will get to it when I am home and try to 
> send you a link.
> 
> My experience is that you absolutely have to open them up to figure out what 
> they actually are. They are so modular and upgradable and interchangeable 
> that the original config sticker rarely matches what’s inside. Actually, I 
> have yet to see one that has a config that matches the factory sticker. 
> Sometimes the motherboard isn’t even the series that the front panel says!
> 
> Also you need to find out what optional microcode ROMs they are fitted with 
> (extended/virtual memory, fast fortran, vector, scientific, etc...) to know 
> what version of RTE they can actually run, and which boot ROMs are installed. 
> That said they are very easy to take apart, just open front and back, slide 
> out top and bottom covers, slide the cards out, and admire the modular 
> design. They are also very well documented.
> 
> Marc
> 
>> On Aug 12, 2019, at 3:21 PM, Norman Jaffe via cctalk  
>> wrote:
>> 
>> Perhaps these will help? 
>> https://www.hpmuseum.net/exhibit.php?hwimg=108 
>> http://www.datormuseum.se/computers/hewlett-packard/hp-21mx 
>> 
>> 
>> From: "Guy Sotomayor Jr"  
>> To: "myself" , "cctalk"  
>> Sent: Monday, August 12, 2019 3:04:31 PM 
>> Subject: Re: Identification of an HP minicomputer 
>> 
>> It’s a 9-slot variant that says HP-1000 M-Series on the front panel. From 
>> what I can tell the front panel appears to be the same as any of the other 
>> HP-1000 series. 
>> 
>> What I’m trying to figure out is what the actual CPU configuration is 
>> without disassembly (which I still need to figure out) so that I can 
>> actually examine the boards. 
>> 
>> Thanks. 
>> 
>> TTFN - Guy 
>> 
>>> On Aug 12, 2019, at 2:59 PM, Norman Jaffe via cctalk 
>>>  wrote: 
>>> 
>>> Can you provide a picture of the front panel? 
>>> 2113 implies a 21MX-E; the nine-slot version is a 2109 while the 
>>> fourteen-slot would be a 2113. 
>>> This might help - https://www.hpmuseum.net/display_item.php?hw=109 . 
>>> 
>>> From: "cctalk"  
>>> To: "cctalk"  
>>> Sent: Monday, August 12, 2019 2:52:18 PM 
>>> Subject: Identification of an HP minicomputer 
>>> 
>>> Hi, 
>>> 
>>> I have sitting in my pile of stuff an HP minicomputer that I’m trying to 
>>> identify (at least in terms of exactly what it is and what sort of 
>>> configuration it might have). 
>>> 
>>> As far as I can tell, it’s an HP-1000 M-Series minicomputer (that should 
>>> hopefully get us *some* details). The “asset tag” lists the part number as 
>>> 2113023-108. Looking at the back there’s space for 9 I/O cards (5 are 
>>> occupied). 
>>> 
>>> So my question is which of the several CPUs could this be and how do I tell 
>>> (for example) what the configuration is (e.g. how much memory, etc). 
>>> 
>>> Yes, I have looked on bitsavers, but short of disassembling the box to look 
>>> at the (at least) 2 boards that are below the I/O slots, I can’t tell 
>>> what’s there and I’d like to see if there’s a way to determine what this is 
>>> without resorting to disassembly. 
>>> 
>>> Thanks. 
>>> 
>>> TTFN - Guy 



Re: GW-DEC-1: A New DEC Prototyping Board

2019-08-15 Thread Guy Sotomayor Jr via cctalk
Speaking from experience, having done a few Unibus boards now (none of them 
available yet, unfortunately): providing a general Unibus interface on a 
quad board will consume a reasonable amount of the board space and limit 
flexibility in which driver/receiver/transceiver parts can be used.  
That’s just for the Unibus drivers.  If you want to actually *run* the 
interface then you’re talking a lot more stuff.

Of course, the boards I’m doing are all SMD (with the exception of the Unibus 
interface parts).  I also have to add in 5V to 3.3V conversion.  Even on a 
4-layer board there’s lots of “congestion” which limits the number of parts that 
can actually be placed on the board.  :-(

TTFN - Guy

> On Aug 15, 2019, at 1:23 AM, emanuel stiebler via cctalk 
>  wrote:
> 
> On 2019-08-15 02:13, systems_glitch via cctalk wrote:
>> Connor Krukosky and I have been working on laying out a new quad-height DEC
>> protoboard, which can also be sheared down into a dual-height board. Full
>> announce on the VC Forums:
>> 
>> http://www.vcfed.org/forum/showthread.php?71177-GW-DEC-1-A-New-Quad-Height-DEC-Prototyping-Board=582892#post582892
> 
> Was always hoping somebody would do something like that, but with the
> bus interface already on it ...



Re: Electr* Engineering

2019-08-13 Thread Guy Sotomayor Jr via cctalk
I can attest to that.  ;-)

Where I went (CMU) the CS department grew out of the Math department…while I 
was there the only degree that the CS department granted was PhD.  So everyone 
else majored in something else (EE in my case…which had a bunch of digital 
stuff but still focused on a lot of theory…differential equations, 
electromagnetic fields/waves and communications theory) and took CS courses as 
electives (which focused on data structures, algorithms, etc…e.g. a lot of CS 
theory).

TTFN - Guy

> On Aug 12, 2019, at 11:05 PM, Adam Thornton via cctalk 
>  wrote:
> 
> At Rice in the early 90s the department was "Electrical and Computer 
> Engineering" if my hazy memory serves.
> 
> The genealogy of Computer Science departments (and their curricula) (at least 
> in the US) is also weird and historically-contingent.  Basically it seems to 
> have been a tossup at any given school whether it came out of the 
> Electr[ical|onic] Engineering department, in which case it was memories and 
> logic gates and a bottom-up, hardware-focused curriculum, or out of the 
> Mathematics department, in which case it was algorithms and complexity 
> analysis and a software-focused curriculum.
> 
> Adam



Re: Identification of an HP minicomputer

2019-08-12 Thread Guy Sotomayor Jr via cctalk
Cool!

Thanks.

TTFN - Guy

> On Aug 12, 2019, at 4:50 PM, Mike Loewen via cctalk  
> wrote:
> 
> 
>   Not a single reference, but these two directories should provide most of 
> what you need:
> 
> http://www.bitsavers.org/pdf/hp/1000/
> 
> http://hpmuseum.net/exhibit.php?hwdoc=108
> 
>   The CE Handbook, Loader ROMS, Interfaces, and Standard Memory manuals will 
> all be useful.
> 
> 
> On Mon, 12 Aug 2019, Guy Sotomayor Jr wrote:
> 
>> OK, thanks.
>> 
>> Is there a sheet somewhere that I can use to decode all of these part 
>> numbers?
>> 
>> TTFN - Guy
>> 
>>> On Aug 12, 2019, at 4:25 PM, Mike Loewen via cctalk  
>>> wrote:
>>> 
>>> 
>>>  Sorry, I mistyped.  12746A is a 64KB (32KW) memory module.
>>> 
>>> On Mon, 12 Aug 2019, Guy Sotomayor Jr wrote:
>>> 
>>>> Except that I don’t have a 12745A memory board, I believe it’s a 12746A 
>>>> which I think I saw was a 16K board.
>>>> 
>>>> Thanks.
>>>> 
>>>> TTFN - Guy
>>>> 
>>>>> On Aug 12, 2019, at 4:07 PM, Mike Loewen via cctalk 
>>>>>  wrote:
>>>>> 
>>>>> 
>>>>> 2102B is the Standard Performance Memory Controller
>>>>> 12745A is a 64KB (32KW) memory board
>>>>> 12897B is a DCPC (Dual Channel Port Controller)
>>>>> 12992B is a 7905/7906/7920/7925 disc loader PROM
>>>>> 12892B is a Memory Protect board
>>>>> 12944B is the Power Fail Recovery System
>>>>> 
>>>>> On Mon, 12 Aug 2019, Guy Sotomayor Jr wrote:
>>>>> 
>>>>>> Thanks all!
>>>>>> 
>>>>>> The trick was opening up the front panel (I’m used to keylocks that are 
>>>>>> only electrical and not just physical).
>>>>>> 
>>>>>> Here’s the HP label with the options:
>>>>>> CPU 2103
>>>>>> MEM BP 1713
>>>>>> IO BP 1727
>>>>>> Accessories
>>>>>> 12992B
>>>>>> 12944B
>>>>>> 2102B
>>>>>> 12897B
>>>>>> 12892B
>>>>>> 12746A
>>>>>> 
>>>>>> In opening the panel on the front card cage, I saw that it only had 16K 
>>>>>> of memory.  :-(
>>>>>> 
>>>>>> I’ll see about firing it up and if that goes well (anyone have 
>>>>>> suggestions for this type of mini?) I’ll see if I find more memory and 
>>>>>> suitable peripherals.
>>>>>> 
>>>>>> Thanks.
>>>>>> 
>>>>>> TTFN - Guy
>>>>>> 
>>>>>> 
>>>>>>> On Aug 12, 2019, at 3:29 PM, Mike Loewen via cctalk 
>>>>>>>  wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> The original M-Series machines were the 2105A and the 2108A (9-slot), 
>>>>>>> which sound like what you have.  The early machines didn't say 
>>>>>>> "M-Series" on the front panel, and had a different lock than the later 
>>>>>>> models:
>>>>>>> 
>>>>>>> http://q7.neurotica.com/Oldtech/HP/2108A/HP2108A-8L.jpg (my model 2108A)
>>>>>>> 
>>>>>>> Early models had the power switch on the back panel, while later models 
>>>>>>> had it behind the front panel.
>>>>>>> 
>>>>>>> It sounds like you might have a later model M. It would be helpful to 
>>>>>>> see a closeup of the read card cage (with readable labels), as well as 
>>>>>>> the front card cage.  The front card cage is accessed by unlocking the 
>>>>>>> panel and removing the cover on the right side over the card cage.  
>>>>>>> That's where the memory boards live.
>>>>>>> 
>>>>>>> On Mon, 12 Aug 2019, Guy Sotomayor Jr via cctalk wrote:
>>>>>>> 
>>>>>>>> It’s a 9-slot variant that says HP-1000 M-Series on the front panel.  
>>>>>>>> From what I can tell the front panel appears to be the same as any of 
>>>>>>>> the other HP-1000 series.
>>>>>>>> 
>>>>>>>> What I’m trying to figure out is what the actual CPU configuration is 
>>>>>>>>

Re: Identification of an HP minicomputer

2019-08-12 Thread Guy Sotomayor Jr via cctalk
Fun!

I have 4 HP minis at the moment:
2116C that was running the last time I checked
2 2114Bs that are in various states of “not working”.  Interestingly, the most 
promising one (i.e. the one that hasn’t had various parts clipped or otherwise 
buggered) is the one I can’t get to power up at all (not even the fan).  So I 
have to go and dig into the power supply a bit more…it could also be that the 
power cord is wired up incorrectly, since it uses an old-style Hubbell twist-lock 
connector that I may not have wired up quite right.
HP-1000 M Series

TTFN - Guy

> On Aug 12, 2019, at 4:38 PM, Guy Dunphy  wrote:
> 
> Hi Guy,
> 
> If you didn't see this, it may be of interest: 
>   http://everist.org/NobLog/20131112_HP_1000_minicomputer_teardown.htm
> 
> It won't help you identify your system model, but could be of help with 
> disassembly.
> 
> Funny coincidence that we have the same name, and similar HP-1000 
> minicomputers.
> 
> Sigh... 2019 slips by, and I still haven't returned to that project.
> 
> Guy
> 
> 
> At 02:52 PM 12/08/2019 -0700, you wrote:
>> Hi,
>> 
>> I have sitting in my pile of stuff an HP minicomputer that I’m trying to 
>> identify (at least in terms of exactly what it is and what sort of 
>> configuration it might have).
>> 
>> As far as I can tell, it’s an HP-1000 M-Series minicomputer (that should 
>> hopefully get us *some* details).  The “asset tag” lists the part number 
>> as 2113023-108.  Looking at the back there’s space for 9 I/O cards (5 are 
>> occupied).
>> 
>> So my question is which of the several CPUs could this be and how do I tell 
>> (for example) what the configuration is (e.g. how much memory, etc).
>> 
>> Yes, I have looked on bitsavers, but short of disassembling the box to look 
>> at the (at least) 2 boards that are below the I/O slots, I can’t tell 
>> what’s there and I’d like to see if there’s a way to determine what 
>> this is without resorting to disassembly.
>> 
>> Thanks.
>> 
>> TTFN - Guy



Re: Identification of an HP minicomputer

2019-08-12 Thread Guy Sotomayor Jr via cctalk
OK, thanks.

Is there a sheet somewhere that I can use to decode all of these part numbers?

TTFN - Guy

> On Aug 12, 2019, at 4:25 PM, Mike Loewen via cctalk  
> wrote:
> 
> 
>   Sorry, I mistyped.  12746A is a 64KB (32KW) memory module.
> 
> On Mon, 12 Aug 2019, Guy Sotomayor Jr wrote:
> 
>> Except that I don’t have a 12745A memory board, I believe it’s a 12746A 
>> which I think I saw was a 16K board.
>> 
>> Thanks.
>> 
>> TTFN - Guy
>> 
>>> On Aug 12, 2019, at 4:07 PM, Mike Loewen via cctalk  
>>> wrote:
>>> 
>>> 
>>>  2102B is the Standard Performance Memory Controller
>>>  12745A is a 64KB (32KW) memory board
>>>  12897B is a DCPC (Dual Channel Port Controller)
>>>  12992B is a 7905/7906/7920/7925 disc loader PROM
>>>  12892B is a Memory Protect board
>>>  12944B is the Power Fail Recovery System
>>> 
>>> On Mon, 12 Aug 2019, Guy Sotomayor Jr wrote:
>>> 
>>>> Thanks all!
>>>> 
>>>> The trick was opening up the front panel (I’m used to keylocks that are 
>>>> only electrical and not just physical).
>>>> 
>>>> Here’s the HP label with the options:
>>>> CPU 2103
>>>> MEM BP 1713
>>>> IO BP 1727
>>>> Accessories
>>>> 12992B
>>>> 12944B
>>>> 2102B
>>>> 12897B
>>>> 12892B
>>>> 12746A
>>>> 
>>>> In opening the panel on the front card cage, I saw that it only had 16K of 
>>>> memory.  :-(
>>>> 
>>>> I’ll see about firing it up and if that goes well (anyone have suggestions 
>>>> for this type of mini?) I’ll see if I find more memory and suitable 
>>>> peripherals.
>>>> 
>>>> Thanks.
>>>> 
>>>> TTFN - Guy
>>>> 
>>>> 
>>>>> On Aug 12, 2019, at 3:29 PM, Mike Loewen via cctalk 
>>>>>  wrote:
>>>>> 
>>>>> 
>>>>> The original M-Series machines were the 2105A and the 2108A (9-slot), 
>>>>> which sound like what you have.  The early machines didn't say "M-Series" 
>>>>> on the front panel, and had a different lock than the later models:
>>>>> 
>>>>> http://q7.neurotica.com/Oldtech/HP/2108A/HP2108A-8L.jpg (my model 2108A)
>>>>> 
>>>>> Early models had the power switch on the back panel, while later models 
>>>>> had it behind the front panel.
>>>>> 
>>>>> It sounds like you might have a later model M. It would be helpful to see 
>>>>> a closeup of the read card cage (with readable labels), as well as the 
>>>>> front card cage.  The front card cage is accessed by unlocking the panel 
>>>>> and removing the cover on the right side over the card cage.  That's 
>>>>> where the memory boards live.
>>>>> 
>>>>> On Mon, 12 Aug 2019, Guy Sotomayor Jr via cctalk wrote:
>>>>> 
>>>>>> It’s a 9-slot variant that says HP-1000 M-Series on the front panel.  
>>>>>> From what I can tell the front panel appears to be the same as any of 
>>>>>> the other HP-1000 series.
>>>>>> 
>>>>>> What I’m trying to figure out is what the actual CPU configuration is 
>>>>>> without disassembly (which I still need to figure out) so that I can 
>>>>>> actually examine the boards.
>>>>>> 
>>>>>> Thanks.
>>>>>> 
>>>>>> TTFN - Guy
>>>>>> 
>>>>>>> On Aug 12, 2019, at 2:59 PM, Norman Jaffe via cctalk 
>>>>>>>  wrote:
>>>>>>> 
>>>>>>> Can you provide a picture of the front panel?
>>>>>>> 2113 implies a 21MX-E; the nine-slot version is a 2109 while the 
>>>>>>> fourteen-slot would be a 2113.
>>>>>>> This might help - https://www.hpmuseum.net/display_item.php?hw=109 .
>>>>>>> 
>>>>>>> From: "cctalk" 
>>>>>>> To: "cctalk" 
>>>>>>> Sent: Monday, August 12, 2019 2:52:18 PM
>>>>>>> Subject: Identification of an HP minicomputer
>>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> I have sitting in my pile of stuff an HP minicomputer that I’m trying 
>>>>>>> to identify (at least in terms of exactly what it is and what sort of 
>>>>>>> configuration it might have).
>>>>>>> 
>>>>>>> As far as I can tell, it’s an HP-1000 M-Series minicomputer (that 
>>>>>>> should hopefully get us *some* details). The “asset tag” lists the part 
>>>>>>> number as 2113023-108. Looking at the back there’s space for 9 I/O 
>>>>>>> cards (5 are occupied).
>>>>>>> 
>>>>>>> So my question is which of the several CPUs could this be and how do I 
>>>>>>> tell (for example) what the configuration is (e.g. how much memory, 
>>>>>>> etc).
>>>>>>> 
>>>>>>> Yes, I have looked on bitsavers, but short of disassembling the box to 
>>>>>>> look at the (at least) 2 boards that are below the I/O slots, I can’t 
>>>>>>> tell what’s there and I’d like to see if there’s a way to determine 
>>>>>>> what this is without resorting to disassembly.
>>>>>>> 
>>>>>>> Thanks.
>>>>>>> 
>>>>>>> TTFN - Guy
>>>>>> 
>>>>>> 
>>>>> 
>>>>> Mike Loewen   mloe...@cpumagic.scol.pa.us
>>>>> Old Technology   http://q7.neurotica.com/Oldtech/
>>>> 
>>>> 
>>> 
>>> Mike Loewen mloe...@cpumagic.scol.pa.us
>>> Old Technology  http://q7.neurotica.com/Oldtech/
>> 
>> 
> 
> Mike Loewen   mloe...@cpumagic.scol.pa.us
> Old Technology   http://q7.neurotica.com/Oldtech/



Re: Identification of an HP minicomputer

2019-08-12 Thread Guy Sotomayor Jr via cctalk
Except that I don’t have a 12745A memory board, I believe it’s a 12746A which I 
think I saw was a 16K board.

Thanks.

TTFN - Guy

> On Aug 12, 2019, at 4:07 PM, Mike Loewen via cctalk  
> wrote:
> 
> 
>   2102B is the Standard Performance Memory Controller
>   12745A is a 64KB (32KW) memory board
>   12897B is a DCPC (Dual Channel Port Controller)
>   12992B is a 7905/7906/7920/7925 disc loader PROM
>   12892B is a Memory Protect board
>   12944B is the Power Fail Recovery System
> 
> On Mon, 12 Aug 2019, Guy Sotomayor Jr wrote:
> 
>> Thanks all!
>> 
>> The trick was opening up the front panel (I’m used to keylocks that are only 
>> electrical and not just physical).
>> 
>> Here’s the HP label with the options:
>> CPU 2103
>> MEM BP 1713
>> IO BP 1727
>> Accessories
>> 12992B
>> 12944B
>> 2102B
>> 12897B
>> 12892B
>> 12746A
>> 
>> In opening the panel on the front card cage, I saw that it only had 16K of 
>> memory.  :-(
>> 
>> I’ll see about firing it up and if that goes well (anyone have suggestions 
>> for this type of mini?) I’ll see if I find more memory and suitable 
>> peripherals.
>> 
>> Thanks.
>> 
>> TTFN - Guy
>> 
>> 
>>> On Aug 12, 2019, at 3:29 PM, Mike Loewen via cctalk  
>>> wrote:
>>> 
>>> 
>>>  The original M-Series machines were the 2105A and the 2108A (9-slot), 
>>> which sound like what you have.  The early machines didn't say "M-Series" 
>>> on the front panel, and had a different lock than the later models:
>>> 
>>> http://q7.neurotica.com/Oldtech/HP/2108A/HP2108A-8L.jpg (my model 2108A)
>>> 
>>>  Early models had the power switch on the back panel, while later models 
>>> had it behind the front panel.
>>> 
>>>  It sounds like you might have a later model M. It would be helpful to see 
>>> a closeup of the read card cage (with readable labels), as well as the 
>>> front card cage.  The front card cage is accessed by unlocking the panel 
>>> and removing the cover on the right side over the card cage.  That's where 
>>> the memory boards live.
>>> 
>>> On Mon, 12 Aug 2019, Guy Sotomayor Jr via cctalk wrote:
>>> 
>>>> It’s a 9-slot variant that says HP-1000 M-Series on the front panel.  From 
>>>> what I can tell the front panel appears to be the same as any of the other 
>>>> HP-1000 series.
>>>> 
>>>> What I’m trying to figure out is what the actual CPU configuration is 
>>>> without disassembly (which I still need to figure out) so that I can 
>>>> actually examine the boards.
>>>> 
>>>> Thanks.
>>>> 
>>>> TTFN - Guy
>>>> 
>>>>> On Aug 12, 2019, at 2:59 PM, Norman Jaffe via cctalk 
>>>>>  wrote:
>>>>> 
>>>>> Can you provide a picture of the front panel?
>>>>> 2113 implies a 21MX-E; the nine-slot version is a 2109 while the 
>>>>> fourteen-slot would be a 2113.
>>>>> This might help - https://www.hpmuseum.net/display_item.php?hw=109 .
>>>>> 
>>>>> From: "cctalk" 
>>>>> To: "cctalk" 
>>>>> Sent: Monday, August 12, 2019 2:52:18 PM
>>>>> Subject: Identification of an HP minicomputer
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I have sitting in my pile of stuff an HP minicomputer that I’m trying to 
>>>>> identify (at least in terms of exactly what it is and what sort of 
>>>>> configuration it might have).
>>>>> 
>>>>> As far as I can tell, it’s an HP-1000 M-Series minicomputer (that should 
>>>>> hopefully get us *some* details). The “asset tag” lists the part number 
>>>>> as 2113023-108. Looking at the back there’s space for 9 I/O cards (5 are 
>>>>> occupied).
>>>>> 
>>>>> So my question is which of the several CPUs could this be and how do I 
>>>>> tell (for example) what the configuration is (e.g. how much memory, etc).
>>>>> 
>>>>> Yes, I have looked on bitsavers, but short of disassembling the box to 
>>>>> look at the (at least) 2 boards that are below the I/O slots, I can’t 
>>>>> tell what’s there and I’d like to see if there’s a way to determine what 
>>>>> this is without resorting to disassembly.
>>>>> 
>>>>> Thanks.
>>>>> 
>>>>> TTFN - Guy
>>>> 
>>>> 
>>> 
>>> Mike Loewen mloe...@cpumagic.scol.pa.us
>>> Old Technology  http://q7.neurotica.com/Oldtech/
>> 
>> 
> 
> Mike Loewen   mloe...@cpumagic.scol.pa.us
> Old Technology   http://q7.neurotica.com/Oldtech/
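
The sticker options in this thread map to boards via a simple lookup.  A 
minimal sketch of that decoding in Python (the table reproduces the 
descriptions given in Mike Loewen's replies; the dictionary and the helper 
function are illustrative, not an official HP parts catalog):

```python
# Decode HP-1000 front-panel sticker option labels into board descriptions.
# Descriptions are those quoted in this thread; anything not listed is
# reported as unknown rather than guessed.
OPTION_DESCRIPTIONS = {
    "2102B":  "Standard Performance Memory Controller",
    "12745A": "64KB (32KW) memory board",
    "12746A": "64KB (32KW) memory module",
    "12897B": "DCPC (Dual Channel Port Controller)",
    "12992B": "7905/7906/7920/7925 disc loader PROM",
    "12892B": "Memory Protect board",
    "12944B": "Power Fail Recovery System",
}

def decode(labels):
    """Return one human-readable line per option label."""
    return [f"{p}: {OPTION_DESCRIPTIONS.get(p, 'unknown option')}" for p in labels]

# The accessories listed on the sticker in the message above:
for line in decode(["12992B", "12944B", "2102B", "12897B", "12892B", "12746A"]):
    print(line)
```

Accessories labels the table doesn't know about simply come back flagged as 
unknown, which matches how one would work through such a sticker by hand.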



Re: Identification of an HP minicomputer

2019-08-12 Thread Guy Sotomayor Jr via cctalk
Thanks all!

The trick was opening up the front panel (I’m used to keylocks that are only 
electrical and not just physical).

Here’s the HP label with the options:
CPU 2103
MEM BP 1713
IO BP 1727
Accessories
12992B
12944B
2102B
12897B
12892B
12746A

In opening the panel on the front card cage, I saw that it only had 16K of 
memory.  :-(

I’ll see about firing it up and if that goes well (anyone have suggestions for 
this type of mini?) I’ll see if I find more memory and suitable peripherals.

Thanks.

TTFN - Guy


> On Aug 12, 2019, at 3:29 PM, Mike Loewen via cctalk  
> wrote:
> 
> 
>   The original M-Series machines were the 2105A and the 2108A (9-slot), which 
> sound like what you have.  The early machines didn't say "M-Series" on the 
> front panel, and had a different lock than the later models:
> 
> http://q7.neurotica.com/Oldtech/HP/2108A/HP2108A-8L.jpg (my model 2108A)
> 
>   Early models had the power switch on the back panel, while later models had 
> it behind the front panel.
> 
>   It sounds like you might have a later model M. It would be helpful to see a 
> closeup of the read card cage (with readable labels), as well as the front 
> card cage.  The front card cage is accessed by unlocking the panel and 
> removing the cover on the right side over the card cage.  That's where the 
> memory boards live.
> 
> On Mon, 12 Aug 2019, Guy Sotomayor Jr via cctalk wrote:
> 
>> It’s a 9-slot variant that says HP-1000 M-Series on the front panel.  From 
>> what I can tell the front panel appears to be the same as any of the other 
>> HP-1000 series.
>> 
>> What I’m trying to figure out is what the actual CPU configuration is 
>> without disassembly (which I still need to figure out) so that I can 
>> actually examine the boards.
>> 
>> Thanks.
>> 
>> TTFN - Guy
>> 
>>> On Aug 12, 2019, at 2:59 PM, Norman Jaffe via cctalk 
>>>  wrote:
>>> 
>>> Can you provide a picture of the front panel?
>>> 2113 implies a 21MX-E; the nine-slot version is a 2109 while the 
>>> fourteen-slot would be a 2113.
>>> This might help - https://www.hpmuseum.net/display_item.php?hw=109 .
>>> 
>>> From: "cctalk" 
>>> To: "cctalk" 
>>> Sent: Monday, August 12, 2019 2:52:18 PM
>>> Subject: Identification of an HP minicomputer
>>> 
>>> Hi,
>>> 
>>> I have sitting in my pile of stuff an HP minicomputer that I’m trying to 
>>> identify (at least in terms of exactly what it is and what sort of 
>>> configuration it might have).
>>> 
>>> As far as I can tell, it’s an HP-1000 M-Series minicomputer (that should 
>>> hopefully get us *some* details). The “asset tag” lists the part number as 
>>> 2113023-108. Looking at the back there’s space for 9 I/O cards (5 are 
>>> occupied).
>>> 
>>> So my question is which of the several CPUs could this be and how do I tell 
>>> (for example) what the configuration is (e.g. how much memory, etc).
>>> 
>>> Yes, I have looked on bitsavers, but short of disassembling the box to look 
>>> at the (at least) 2 boards that are below the I/O slots, I can’t 
>>> tell what’s there and I’d like to see if there’s a way to determine what this is 
>>> without resorting to disassembly.
>>> 
>>> Thanks.
>>> 
>>> TTFN - Guy
>> 
>> 
> 
> Mike Loewen   mloe...@cpumagic.scol.pa.us
> Old Technology   http://q7.neurotica.com/Oldtech/



Re: Identification of an HP minicomputer

2019-08-12 Thread Guy Sotomayor Jr via cctalk
It’s a 9-slot variant that says HP-1000 M-Series on the front panel.  From what 
I can tell the front panel appears to be the same as any of the other HP-1000 
series.

What I’m trying to figure out is what the actual CPU configuration is without 
disassembly (which I still need to figure out) so that I can actually examine 
the boards.

Thanks.

TTFN - Guy

> On Aug 12, 2019, at 2:59 PM, Norman Jaffe via cctalk  
> wrote:
> 
> Can you provide a picture of the front panel? 
> 2113 implies a 21MX-E; the nine-slot version is a 2109 while the 
> fourteen-slot would be a 2113. 
> This might help - https://www.hpmuseum.net/display_item.php?hw=109 . 
> 
> From: "cctalk"  
> To: "cctalk"  
> Sent: Monday, August 12, 2019 2:52:18 PM 
> Subject: Identification of an HP minicomputer 
> 
> Hi, 
> 
> I have sitting in my pile of stuff an HP minicomputer that I’m trying to 
> identify (at least in terms of exactly what it is and what sort of 
> configuration it might have). 
> 
> As far as I can tell, it’s an HP-1000 M-Series minicomputer (that should 
> hopefully get us *some* details). The “asset tag” lists the part number as 
> 2113023-108. Looking at the back there’s space for 9 I/O cards (5 are 
> occupied). 
> 
> So my question is which of the several CPUs could this be and how do I tell 
> (for example) what the configuration is (e.g. how much memory, etc). 
> 
> Yes, I have looked on bitsavers, but short of disassembling the box to look 
> at the (at least) 2 boards that are below the I/O slots, I can’t tell what’s 
> there and I’d like to see if there’s a way to determine what this is without 
> resorting to disassembly. 
> 
> Thanks. 
> 
> TTFN - Guy 



Identification of an HP minicomputer

2019-08-12 Thread Guy Sotomayor via cctalk
Hi,

I have sitting in my pile of stuff an HP minicomputer that I’m trying to 
identify (at least in terms of exactly what it is and what sort of 
configuration it might have).

As far as I can tell, it’s an HP-1000 M-Series minicomputer (that should 
hopefully get us *some* details).  The “asset tag” lists the part number as 
2113023-108.  Looking at the back there’s space for 9 I/O cards (5 are 
occupied).

So my question is which of the several CPUs could this be and how do I tell 
(for example) what the configuration is (e.g. how much memory, etc).

Yes, I have looked on bitsavers, but short of disassembling the box to look at 
the (at least) 2 boards that are below the I/O slots, I can’t tell what’s there 
and I’d like to see if there’s a way to determine what this is without 
resorting to disassembly.

Thanks.

TTFN - Guy

Re: Control Data 9766 drive on epay

2019-08-12 Thread Guy Sotomayor Jr via cctalk
Well, crap.

I got rid of my 2 9766’s and all the packs that I had for them a couple of 
years ago for nothing compared to what this guy is asking for his.  ;-)
I probably still have a pile of heads for them (but they’d probably go to the 
guy who purchased the drives/packs from me).

What are folks using these types of drives for?  Media content recovery or 
just using them as intended?

TTFN - Guy

> On Aug 12, 2019, at 11:09 AM, William Donzelli via cctalk 
>  wrote:
> 
> There is a Make Offer option, and it does look like the seller does
> take offers fairly regularly. I will not be buying it.
> 
> If someone does, I have a huge amount of spares for 976x drives,
> including refurbished heads. It might take a while to find them in my
> mess, however.
> 
> --
> Will
> 
> On Mon, Aug 12, 2019 at 1:43 PM P Gebhardt via cctalk
>  wrote:
>> 
>> Hi list,
>> 
>> Just came across this:
>> 
>> https://www.ebay.com/itm/Vintage-Computing-CDC-Magnetic-Peripherals-Control-Data-9766-Storage-Module/143351908424?hash=item2160708848:g:3yEAAOSw1oJdTo9u
>> 
>> Haven't seen one listed in years. The price lets me assume that this offer 
>> addresses customers that may use these drives in a production environment or 
>> so...
>> I am not aware of museums or hobbyists who have such drives currently in a 
>> functional state to read and write from and to 80MB (CDC 9762) or 300MB (CDC 
>> 9766) disk packs. Maybe the CHM? ... not taking into consideration the CHM 
>> activities related to the Xerox disk cartridge (2315-equivalent) software 
>> archive project.
>> Anybody out there? Would be interesting to know.
>> 
>> Best regards,
>> Pierre
>> 
>> -
>> http://www.digitalheritage.de


