Re: [Simh] smallest pdp-11 that can run TECO and sockets(*)?

2012-07-09 Thread Michael Bloom

On 07/08/2012 09:22 AM, Johnny Billquist b...@softjar.se wrote:

On 2012-07-08 13:58, Michael Bloom wrote:

version of TECO, it might be beneficial to make as much use as possible 
of local Q-registers (those with two-character names beginning with 
"."), so that you don't unintentionally accumulate data that you no 
longer need.  You could think of them as a TECO equivalent to alloca().


They came after V36. But they are not strictly needed, as you can push 
down Q-registers yourself if you want to play with them without 
affecting someone else.
The whole point is avoiding the need to push Q-registers.  It is all too 
easy to make a mistake when pushing Q-registers that costs you a lot of 
debugging time.  If you don't push Q-regs, you never have to pop them! 
If you have local Q-regs, there is little legitimate use for 
pushing/popping them other than to rapidly copy both parts of one Q-reg 
to another (it's a good idea to use Q-regs as two-member structs when 
you can).


If you run out of memory, you are always in trouble...
That's why defensive coding is especially important with as little 
memory space as TECO has.


Not sure when and why you'd need 32-bit arithmetic, though...
I'm not sure either, since, as I've already admitted, I don't know the 
HTTP protocol.  But I did want to make a suggestion about long 
arithmetic, just in case HTTP packets _did_ contain 32-bit fields upon 
which arithmetic might be performed.  With a heads-up about this, 
Richard can look for places where this might be needed and plan 
accordingly.  It's always beneficial to strategize how to deal with 
problems prior to dealing with them, rather than just jumping in to 
code and then figuring out how to cross each bridge as it is 
encountered.
I doubt you'll ever have TECO leak memory. However, you can run out of 
memory, so cleaning up your Q-registers, especially if you know they 
might store lots of data, is a good idea.
(TECO's memory handling is rather simplistic, not to mention well 
tested by now, which is why I doubt you have any memory leaks.)

Of course, TECO itself is robust,  but . . .

I was not referring to *TECO* leaking memory, but rather the program 
running /within/ TECO, which may append to Q-register space, push 
Q-regs without popping them, or make memory disappear in other ways.  If 
you've ever written a reasonably large TECO program (such as the DECUS 
11-737 package that I previously mentioned), you've got a good chance 
of having experienced trying to debug a TECO memory leak.  This is the 
kind of place where defensive programming really shines.  As one of my 
college profs was known to say, "The main prerequisite for debugging is 
'bugging'."  And especially with a language that resembles line noise 
as much as TECO does, avoiding bugging takes care.
Dumping out a file is something TECO can do all day long without a 
problem.

You can either do it page by page yourself, or let TECO do it.
I was assuming that Richard planned to make use of the TECO data 
manipulation facilities, not just use it as a glorified "cat".
There are pros and cons to both. But neither will cause you any weird 
memory issues.
Yes, it's how you program that determines whether or not you reclaim 
memory that's no longer needed.  Local Q-regs allow de-allocation to be 
automatic when you leave a macro, eliminating a source of coding errors 
that can result in weird memory issues.  That was why I made a 
reference to alloca(), since local Q-regs effectively allocate their 
space on the TECO program's execution stack.  With a C program, if you 
allocate memory within a routine that you subsequently exit without 
saving or freeing the allocated space, you get a memory leak, but 
memory allocated with alloca() is automatically freed.  Same thing 
with local Q-regs: when you leave the routine they belong to, poof, 
they are gone.
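
The analogy, sketched in C (purely illustrative; nothing here is TECO or 
simh code, and the function names are invented):

    #include <alloca.h>
    #include <stdlib.h>
    #include <string.h>

    /* Leaks: the scratch block is never freed, so each call loses memory,
       like appending to a global Q-register and never cleaning it up.     */
    size_t leaky_length(const char *s)
    {
        char *scratch = malloc(strlen(s) + 1);
        if (scratch == NULL)
            return 0;
        strcpy(scratch, s);
        return strlen(scratch);        /* returns without free(scratch)    */
    }

    /* Cannot leak: the space vanishes when the function returns, just as
       a local Q-register vanishes when its macro exits.                   */
    size_t tidy_length(const char *s)
    {
        char *scratch = alloca(strlen(s) + 1);
        strcpy(scratch, s);
        return strlen(scratch);        /* reclaimed automatically          */
    }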


 michael

___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] smallest pdp-11 that can run TECO and sockets(*)?

2012-07-08 Thread Michael Bloom

On 07/07/2012 10:09 AM, Johnny Billquist b...@softjar.se wrote:
What is left is actually writing the code, something that seems to get 
much less attention...


Johnny
Good point.  I had written a message last night which considered this, 
but did not get around to sending it.  It also addressed hardware and 
OS options, which are now moot, since an 11/03 running RT-11 has all 
but been chosen.  Here it is below, somewhat edited.  I've chopped off 
the tail end, which discussed Unix TECO.



 Unsent Message 
[ --- snip --- Message headers snipped -- snip -- ]

On 07/06/2012 15:05:38 EDT, Richard <legalize at xmission.com> wrote:

In article <4FF6AE2C.6050104 at dslextreme.com>,
Michael Bloom <mabloom at dslextreme.com> writes:

 What aspect of the experiment requires a pdp-11 architecture?

Desire.


That's a legitimate reason.  I do not understand the reason behind it,  
but if that is a design requirement, so be it. (Although it does not add 
any technical information that will help us help you).


Even so, with the limits you've chosen,  here are a few considerations:

You will need enough memory to include the TECO executable, the program 
written in TECO, and the Q-register data storage necessary for your 
TECO program, all on top of system overhead.  If using a late enough 
version of TECO, it might be beneficial to make as much use as possible 
of local Q-registers (those with two-character names beginning with 
"."), so that you don't unintentionally accumulate data that you no 
longer need.  You could think of them as a TECO equivalent to alloca().  
They are documented in the V40 manual (dated May 1985), but I don't 
recall them being present in V36, so I'm not certain when they were 
introduced.


TECO may not work reliably (except as an editor) without maxing out (to 
the degree permitted on a PDP-11) the process address space.  Under 
RSTS/E, that would mean 48 KB (the remaining 16 KB is needed for the 
TECO run-time system) minus stack space.  I do not recall what the 
exact overhead might be with other DEC OSes.


For RT-11, you'll lose the 8 KB of address space reserved for device 
registers, plus the amount of space RT-11 itself occupies (4 KB maybe? 
Anyone remember?), and of course the space needed for the TECO 
interpreter itself.  A rough guess might be 38 KB for TECO (16 KB for 
instructions, 6 KB for TECO's private data, 4 KB(??) for TECO's stack), 
RT-11, and I/O space.  That's 38 KB already used, leaving 26 KB for 
your buffer, your own TECO code, and your code's Q-register variables.


If you need to do any 32-bit arithmetic, you'll need to write your own 
32-bit arithmetic macros.  (I'd suggest using 4 bytes of the text 
portion of a Q-register for storing a 32-bit datum, rather than wasting 
the integer portion of two Q-registers.  For anyone not familiar with 
TECO, there are 36 two-part Q-registers: data areas which can be used 
for 36 16-bit variables plus 36 string variables, and you can have 
executable TECO code as the data in the string variables.)  Using a 
late enough version of TECO that also has macro-local Q-registers 
(accessed as, for example, Q.1 or Q.b instead of Q1 or Qb) will greatly 
ease that limitation by not restricting you to just the global 
Q-registers.  V36 did not have this feature; at least V39 and V40 do 
(as does the Almy Unix TECO version).
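
The underlying arithmetic for such macros is just ordinary 
multi-precision addition on 16-bit halves.  Here is the idea in C, only 
as an illustration of the carry handling a TECO macro would have to 
reproduce (not TECO code, and the function name is invented):

    #include <stdint.h>

    /* Add two 32-bit values held as (high, low) 16-bit pairs, the way a
       macro limited to 16-bit Q-register integers would have to do it. */
    void add32(uint16_t ah, uint16_t al, uint16_t bh, uint16_t bl,
               uint16_t *rh, uint16_t *rl)
    {
        uint32_t low = (uint32_t)al + bl;         /* low halves; may carry   */
        *rl = (uint16_t)low;
        *rh = (uint16_t)(ah + bh + (low >> 16));  /* propagate the carry bit */
    }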


The maximum buffer size shrank from one TECO release to the next as new 
features were added. And obviously,  the more Q-register space you use 
for code and data, the smaller the maximum buffer size will be at any 
given time.


As you proceed with coding, it might be a good idea to periodically 
check for memory leaks, to prevent your server from crashing due to 
being out of space.  One way to do this is to check whether the number 
of characters that the buffer can hold shrinks after each EC command.
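
The same bookkeeping, sketched in C rather than TECO just to show the 
idea (the names are invented for illustration):

    #include <stdio.h>

    /* Remember the free space reported after each request; a steady
       decline suggests Q-register data is accumulating somewhere.   */
    static long previous_free = -1;

    void check_for_leak(long free_now)
    {
        if (previous_free >= 0 && free_now < previous_free)
            fprintf(stderr, "possible leak: free space fell from %ld to %ld\n",
                    previous_free, free_now);
        previous_free = free_now;
    }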


I don't know the HTTP protocol, so I don't know whether there is a 
maximum response size, but for larger responses, you might need to build 
part of the response in the text buffer, write it to the output stream,  
replace the data in the buffer with the next part of the response, write 
that out, and so on (probably using PW and HK commands after building 
each part of the response).


[ Afterthought: it might be better to first build the response header 
info in the text buffer, use the A command to append the first page of 
the reply, then write the served file using the EC command or one of 
its derivatives.  (This approach would reduce the risk of running out 
of memory.)  If you need to make modifications to the file data before 
sending it, or if you need to send a trailer after the data, then you 
might choose to page through the buffer with P commands before using EC. ]
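
The overall strategy, rendered as a rough C sketch just to make the flow 
concrete (hypothetical names; the real thing would of course be TECO 
commands such as A, P, PW, HK, and EC):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Send the headers, then push the file out one buffer-full at a time
       so the whole reply never has to sit in memory at once.             */
    int send_reply(int sock, FILE *served_file, const char *headers)
    {
        char page[4096];
        size_t n;

        if (write(sock, headers, strlen(headers)) < 0)
            return -1;
        while ((n = fread(page, 1, sizeof page, served_file)) > 0)
            if (write(sock, page, n) < 0)   /* short writes ignored: sketch only */
                return -1;
        return 0;
    }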


[ --snip --the rest of this message talked about approaches that have 
already been excluded, so I have snipped it  ]
___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] cpu idle with NetBSD

2012-04-11 Thread Michael Bloom

On 04/10/2012 11:19 PM, Gregg Levine wrote:
Hello! Mike, what were you looking for when you clicked there? That's 
how his resource presents us with zip files of his work.
I had assumed that clicking on a link named 
"https://github.com/markpizz/simh/zipball/v3.9-0-rc1" would yield a zip 
file whose name contained the string "3.9-0-rc1" or something similar.  
My message was commenting on the fact that the name of the zip file 
retrieved by clicking that link instead contained the string 
"v3.8-2-rc2", which seemingly indicated earlier code.


If a zip file named after v3.8-2-rc2 actually contains code from the 
later version 3.9-0-rc1,  how is one to be certain which file in one's 
Downloads directory contains what contents, after several downloads of a 
development release whose packaging names do not match the purported 
versions have taken up residence in that directory?


Cygnus was glommed by Red Hat because of its interest in eCos . . . 
however that part got spun off years later, leaving them holding the 
bag for the Cygnus tool chains.  The question is, "What company were 
you thinking of?" . . . Please explain off list.


As long as doing so ends with this clarification, I don't think it 
hurts to respond on-list.  There are many companies and individuals who 
sell support services for free software, probably in part due to the 
example set by Cygnus years ago.  I only said "like Cygnus" because I 
didn't want to imply that they were the only ones.  A large list of 
such companies and individuals is maintained at 
http://www.fsf.org/resources/service .


- michael

___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] cpu idle with NetBSD

2012-04-10 Thread Michael Bloom

Hi Mark,

I like the idea of splx(splx(7)), but it might be good to make the 
level configurable.  Currently, the lowest software interrupt level 
used on NetBSD is level 8, for softclock, but conceivably another 
system (or a future version) might have an additional software 
interrupt level and make use of 7.
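
A sketch of what I mean, in C (SIMH_IDLE_IPL is an invented name, and 
splx() here stands for the kernel's own routine; this is not actual 
NetBSD or simh source):

    /* Let the porter override the idle IPL at build time instead of
       hard-coding 7.                                                */
    #ifndef SIMH_IDLE_IPL
    #define SIMH_IDLE_IPL 7
    #endif

    #define cpu_idle() do { splx(splx(SIMH_IDLE_IPL)); } while (/*CONSTCOND*/ 0)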


Regarding your last paragraph, there are actually (believe it or not) 
still job postings for people with VAX experience, including various 
BSDs, not just VMS.


There is a surprising amount of application software out there that 
still runs on VAXes, but the cost of maintaining those machines must 
keep increasing.  And there is more support available for NetBSD (which 
can run 4.3BSD a.out binaries) via the net or a company similar to 
Cygnus than there is for, say, More/BSD, whose vendor disappeared years 
ago.  Replacing a VAX running that system with a simulator running 
NetBSD may make sense to those with a large investment in applications 
that run on their VAXes.


Also, clicking on https://github.com/markpizz/simh/zipball/v3.9-0-rc1 
unexpectedly downloaded a file named 
markpizz-simh-v3.8-2-rc2-17-g15570e5.zip.


- michael

On 01/-10/-28163 11:59 AM, Mark Pizzolato - Info Comm wrote:


Hi Chris,

There were issues like this on prior versions of simh (V3.8-1 and 
earlier), which you are running.


The latest (about to be released) version is v3.9-0-rc2 which has 
significant improvements to the idle implementation, including a 
solution to the issue you found.  My earlier comments were 
specifically referring to that new idle implementation for the VAX.


The release candidate which is close to release is available at 
https://github.com/markpizz/simh/zipball/v3.9-0-rc1


Save what that URL returns as a zip file and unpack it and build a vax 
simulator with networking support using:


   unzip -a zipfilename.zip

   cd markpizz*

   make vax

The key issue with recent versions of NetBSD is that earlier versions 
of the OS had the VAX-specific idle routine within an assembler module 
called subr.S.  The simh idle logic detects the code which is 
implemented for idle in subr.S.  Meanwhile, newer versions of NetBSD 
don't carry this assembler code anymore, and a much more complicated 
sequence of things goes on, essentially all from compiled modules 
(from a little examination of the code I've done).  The structure of 
the idle management has been adjusted to accommodate the features we 
have on modern systems (everything multi-core, hyperthreading, etc.), 
with some low-level tasks delegated to the idle loop as well (page 
zeroing).  There is one platform-specific callout to cpu_idle().  
cpu_idle() is defined in usr/src/sys/arch/vax/include/cpu.h.  It is 
defined to be a macro:

   #define cpu_idle() do {} while (/*CONSTCOND*/0)

A normal compiler wouldn't generate any code for this macro.  The 
suggested change is to define the macro instead as:

   #define cpu_idle() do { splx(splx(7)); } while (/*CONSTCOND*/0)


I have sent a message to the NetBSD vax mailing list with the above 
suggested change to the base source code.  Maybe it will get adopted.  
OpenBSD has similar, but different, code, but I'll make the same 
suggestion there as well.  Maybe this will end up built into these OS 
builds.


I come back to the question of why folks would want to run the new 
version of NetBSD on a simulated VAX when they can run a native one 
for their host platform, which will be the same OS and be more 
naturally behaved.  If the point is merely to test to see if the OS 
still works, that's great, but then you boot it, test a few things, and 
then turn it off.  Great idle support isn't needed since it won't be 
running continuously.


-Mark




___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] A few more bugs found

2012-03-22 Thread Michael Bloom
I've updated the Makefile diff to fix a couple of things and to allow 
individual simulators to be linted.


After applying the diff,  you can lint individual simulators by saying, 
for example:


make lint-pdp11 BIN=lint-

Or you can lint everything with just "make lint".

The changes are in the attached file DIFF.  They can be applied to a 
makefile from Wednesday's github master branch via


patch makefile < DIFF

If your makefile contains carriage returns as well as newlines, you'll 
need to run:


tr -d \\r < makefile

first, otherwise you'll get six "hunk failed" messages.

If you have a different makefile version, it's not hard to update by 
hand as there are only six hunks in the diff.  The last two hunks 
comprise about 72% of the diff, and can be manually accomplished by a 
global replace of the string "-o $@" with "$(OUTPUTSPEC)".  The first 
four hunks total all of 2238 bytes.


--- /tmp/makefile   2012-03-22 01:28:33.300598646 -0700
+++ makefile2012-03-22 13:36:19.201126159 -0700
@@ -29,6 +29,16 @@
 # Asynchronous I/O support can be disabled if GNU make is invoked with
 # NOASYNCH=1 on the command line.
 #
+ifneq (,$(or $(findstring lint, $(MAKECMDGOALS)) ,$(findstring lint, $(LINTALL))))
+STD=--std=c99
+OUTPUTSPEC=
+DEBUG=lint
+GCC=cppcheck
+else
+OUTPUTSPEC= -o $@
+STD=-std=c99
+endif
+#
 # CC Command (and platform available options).  (Poor man's autoconf)
 #
 # building the pdp11, or any vax simulator could use networking support
@@ -288,9 +298,13 @@
   endif
 endif
 ifneq ($(DEBUG),)
-  CFLAGS_G = -g -ggdb -g3
-  CFLAGS_O = -O0
-  BUILD_FEATURES = - debugging support
+   ifeq ($(OUTPUTSPEC),)   
+ CFLAGS_G = --enable=all --template=gcc --suppress=variableScope 
--suppress=invalidscanf 
+   else
+ CFLAGS_G = -g -ggdb -g3
+ CFLAGS_O = -O0
+ BUILD_FEATURES = - debugging support
+   endif
 else
   CFLAGS_O = -O2
   LDFLAGS_O = 
@@ -338,14 +352,19 @@
 ifneq ($(DONT_USE_ROMS),)
   ROMS_OPT = -DDONT_USE_INTERNAL_ROM
 else
-  BUILD_ROMS = ${BIN}BuildROMs${EXE}
+  ifeq (,$(findstring lint,$(MAKECMDGOALS)))
+BUILD_ROMS = ${BIN}BuildROMs${EXE}
+  endif
 endif
 ifneq ($(DONT_USE_READER_THREAD),)
   NETWORK_OPT += -DDONT_USE_READER_THREAD
 endif
 
-CC = $(GCC) -std=c99 -U__STRICT_ANSI__ $(CFLAGS_G) $(CFLAGS_O) -I . 
$(OS_CCDEFS) $(ROMS_OPT)
-LDFLAGS = $(OS_LDFLAGS) $(NETWORK_LDFLAGS) $(LDFLAGS_O)
+CC = $(GCC) $(STD) -U__STRICT_ANSI__ $(CFLAGS_G) $(CFLAGS_O) -I . $(OS_CCDEFS) 
$(ROMS_OPT)
+
+ifeq (,$(findstring lint,$(MAKECMDGOALS)))
+  LDFLAGS = $(OS_LDFLAGS) $(NETWORK_LDFLAGS) $(LDFLAGS_O)
+endif
 
 #
 # Common Libraries
@@ -586,6 +605,17 @@
id32 sds lgp h316 swtp
 
 all : ${ALL}
+#
+# Having a lint entry was once traditional in unix makefiles.  There is no free
+# lint (tanstaafl :), however that can fully handle ANSI C without choking.  
+# For example, splint 3.1.2 chokes on ellipses in #define parameter lists.
+#
+# However... the latest version of cppcheck (1.53) works pretty well. I can 
+# guarantee, though, that 1.47 (currently in ubuntu repositories) will not.
+#
+
+lint: ;
+   ${MAKE} GCC=$(GCC) all LINTALL=lint LDFLAGS=
 
 clean :
 ifeq ($(WIN32),)
@@ -598,9 +628,9 @@
 ${BIN}BuildROMs${EXE} :
${MKDIRBIN}
 ifeq (agcc,$(findstring agcc,$(firstword $(CC))))
-   gcc $(wordlist 2,1000,${CC}) sim_BuildROMs.c -o $@
+   gcc $(wordlist 2,1000,${CC}) sim_BuildROMs.c $(OUTPUTSPEC)
 else
-   ${CC} sim_BuildROMs.c -o $@
+   ${CC} sim_BuildROMs.c $(OUTPUTSPEC)
 endif
 ifeq ($(WIN32),)
$@
@@ -620,160 +650,160 @@
 
 ${BIN}pdp1${EXE} : ${PDP1} ${SIM}
${MKDIRBIN}
-   ${CC} ${PDP1} ${SIM} ${PDP1_OPT} -o $@ ${LDFLAGS}
+   ${CC} ${PDP1} ${SIM} ${PDP1_OPT} $(OUTPUTSPEC) ${LDFLAGS}
 
 pdp4 : ${BIN}pdp4${EXE}
 
 ${BIN}pdp4${EXE} : ${PDP18B} ${SIM}
${MKDIRBIN}
-   ${CC} ${PDP18B} ${SIM} ${PDP4_OPT} -o $@ ${LDFLAGS}
+   ${CC} ${PDP18B} ${SIM} ${PDP4_OPT} $(OUTPUTSPEC) ${LDFLAGS}
 
 pdp7 : ${BIN}pdp7${EXE}
 
 ${BIN}pdp7${EXE} : ${PDP18B} ${SIM}
${MKDIRBIN}
-   ${CC} ${PDP18B} ${SIM} ${PDP7_OPT} -o $@ ${LDFLAGS}
+   ${CC} ${PDP18B} ${SIM} ${PDP7_OPT} $(OUTPUTSPEC) ${LDFLAGS}
 
 pdp8 : ${BIN}pdp8${EXE}
 
 ${BIN}pdp8${EXE} : ${PDP8} ${SIM}
${MKDIRBIN}
-   ${CC} ${PDP8} ${SIM} ${PDP8_OPT} -o $@ ${LDFLAGS}
+   ${CC} ${PDP8} ${SIM} ${PDP8_OPT} $(OUTPUTSPEC) ${LDFLAGS}
 
 pdp9 : ${BIN}pdp9${EXE}
 
 ${BIN}pdp9${EXE} : ${PDP18B} ${SIM}
${MKDIRBIN}
-   ${CC} ${PDP18B} ${SIM} ${PDP9_OPT} -o $@ ${LDFLAGS}
+   ${CC} ${PDP18B} ${SIM} ${PDP9_OPT} $(OUTPUTSPEC) ${LDFLAGS}
 
 pdp15 : ${BIN}pdp15${EXE}
 
 ${BIN}pdp15${EXE} : ${PDP18B} ${SIM}
${MKDIRBIN}
-   ${CC} ${PDP18B} ${SIM} ${PDP15_OPT} -o $@ ${LDFLAGS}
+   ${CC} ${PDP18B} ${SIM} ${PDP15_OPT} $(OUTPUTSPEC) ${LDFLAGS}
 
 pdp10 : ${BIN}pdp10${EXE}
 
 ${BIN}pdp10${EXE} : ${PDP10} ${SIM}
${MKDIRBIN}
-   ${CC} ${PDP10} ${SIM} ${PDP10_OPT} -o $@ ${LDFLAGS}
+   ${CC} ${PDP10} ${SIM} 

Re: [Simh] A few more bugs found

2012-03-22 Thread Michael Bloom

Please ignore my updated diff.  I should not have posted it.  It is broken!

On 03/22/2012 02:18 PM, Michael Bloom wrote:
I've updated the Makefile diff to fix a couple of things and to allow 
individual simulators to be linted.


After applying the diff,  you can lint individual simulators by 
saying, for example:


make lint-pdp11 BIN=lint-

Or you can lint everything with just "make lint".

The changes are in the attached file DIFF.  They can be applied to a 
makefile from Wednesday's github master branch via


patch makefile < DIFF

If your makefile contains carriage returns as well as newlines, you'll 
need to run:


tr -d \\r < makefile

first, otherwise you'll get six "hunk failed" messages.

If you have a different makefile version, it's not hard to update by 
hand as there are only six hunks in the diff.  The last two hunks 
comprise about 72% of the diff, and can be manually accomplished by a 
global replace of the string "-o $@" with "$(OUTPUTSPEC)".  The first 
four hunks total all of 2238 bytes.




___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh


Re: [Simh] rasberry pi

2012-03-08 Thread Michael Bloom
With the ARM, gcc compilation options could have a significant impact 
on CPU performance.  Compiling with -O2 might help some at the cost of 
losing debuggability, but there are so many different ARM processors 
that the biggest improvements may come in matching the code generation 
to the processor using code generation that uses the lowest common 
denominator strategy.  The -mcpu= and -mtune= options can help with 
this.  Check the gcc man page for the list of ARM processor types that 
can be specified.


Also, -mfloat-abi=softfp can help in some circumstances when you've 
previously been using -mfloat-abi=soft despite having an FPU (it 
generates FP instructions but retains the library call API used when 
linking with libraries that contain floating point simulations, in 
order to retain binary compatibility).


On Thu Mar 8 05:58:36 EST 2012, Quentin North (noisy) <quentin at quentin.org.uk> wrote:

I run simh on a sheevaplug with two concurrent HP2100 systems simulated with 
network based interconnect kits between the two. With only a single Arm core 
the performance takes a hit on CPU before memory becomes an issue.

Sent from my iPad


___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] rasberry pi

2012-03-08 Thread Michael Bloom
Oops.  That should have read ". . . the biggest improvements may come 
in matching the code generation to the processor versus using code 
generation that uses the lowest common denominator strategy."


Leaving out the word "versus" resulted in a totally senseless 
statement!  Sorry for any confusion.


On 03/08/2012 09:35 AM, Michael Bloom wrote:
With the ARM, gcc compilation options could have a significant impact 
on CPU performance.  Compiling with -O2 might help some at the cost of 
losing debuggability, but there are so many different ARM processors 
that the biggest improvements may come in matching the code generation 
to the processor using code generation that uses the lowest common 
denominator strategy.  The -mcpu= and -mtune= options can help with 
this.  Check the gcc man page for the list of ARM processor types that 
can be specified.


Also, -mfloat-abi=softfp can help in some circumstances when you've 
previously been using -mfloat-abi=soft despite having an FPU (it 
generates FP instructions but retains the library call API used when 
linking with libraries that contain floating point simulations, in 
order to retain binary compatibility).


On Thu Mar 8 05:58:36 EST 2012, Quentin North (noisy) <quentin at quentin.org.uk> wrote:

I run simh on a sheevaplug with two concurrent HP2100 systems simulated with 
network based interconnect kits between the two. With only a single Arm core 
the performance takes a hit on CPU before memory becomes an issue.

Sent from my iPad




___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] SIMH performance

2012-03-07 Thread Michael Bloom

On 03/05/2012 06:30 AM, Mark Pizzolato - Info Comm wrote:
Comments in the pdp11_cpu code suggest that Bob added Idle support. 
I'm not a pdp11 user so I don't have direct experience there, but I 
suspect he probably didn't test with 2.11BSD. Please suggest what 
might be appropriate for this operating system


The 2.11BSD idle routine sets priority to zero,  issues a wait 
instruction, then restores the original priority. (**1)


So the appropriate thing would be for the CPU to just do nothing until 
interrupted.


(And as I write that last sentence, I see your points more clearly than 
ever.  Even though the wait is supposed to do nothing until interrupted, 
the device interrupting it is not atomic from its own viewpoint.  It 
has a lot of work to do in preparation for generating the interrupt! 
. . . so it is actually the simulated device(s) that need a way to idle 
efficiently until they have work to do.)


Once again, on a tangent . . .

I think that PDP-11 implementors might find 2.11BSD to be a useful tool, 
if they were to start using it, as it exercises the PDP-11 in ways that 
other systems either don't exercise at all, or exercise only without 
source code, the latter making it harder to ascertain whether a problem 
is in the kernel or in the hardware.


One example might be that networking runs in supervisor mode (**2), in 
contrast with some other PDP-11 operating systems, which use that mode 
to support larger user programs.


That may involve much different usage patterns than, say, RSX-11M, 
where a user-mode program utilizes a supervisor-mode library written 
and compiled by the programmer at the same time as the main program.


As another example, both the kernel and any user-mode program may each 
have up to 15 overlays.  Programs too large for split I/D can be built 
with overlays; there is now an optimizing overlay makefile generator, 
mkovmake, which helps make overlays nearly transparent to user 
programs. (**3)  Here again, if there were some obscure bug in 
simulating a segment register, having the 2.11BSD source can make it 
easier to distinguish what the PDP-11 is actually doing with that 
register from what the kernel code intends for it to do.


- Michael

Notes:
(**1) Just for amusement, and for the benefit of models with a front 
panel, the idle routine also rotates a bit pattern on the front panel 
lights once every few times it is called.  A comment in the code reads: 
/* If the system is very active the display will appear blurred. */  
SIMH coders shouldn't care about that, which is why I moved it down to 
a note.


(**2) Moving the network from kernel space to supervisor space freed up 
a lot of room for making the system look more like 4.3BSD.  So much 
stuff, ranging from kernel internals and aspects of the filesystem to 
the system call API to the C library and user mode programs, has been 
brought over or ported from 4.3BSD that, except for the limitations 
imposed by the word size, it really does look like a 4.3BSD system.  
Programs cross-port very easily between 4.3BSD and 2.9 BSD.  The only 
real coding issues occur when the coder assumes int to be 32 bits.  The 
usual problems of this type occur when man pages are ignored (for 
example, the coder passes an int rather than an off_t as the second 
argument to lseek()).
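
A tiny C illustration of that lseek() pitfall (sketched with modern 
headers; the point is the 16-bit int versus the 32-bit off_t):

    #include <sys/types.h>
    #include <unistd.h>

    void seek_example(int fd)
    {
        int   bad_off  = 70000;    /* with a 16-bit int this value is already
                                      truncated before lseek() ever sees it   */
        off_t good_off = 70000L;   /* off_t is a long, as the man page says   */

        lseek(fd, (off_t)bad_off, SEEK_SET);   /* seeks to the wrong place    */
        lseek(fd, good_off, SEEK_SET);         /* seeks where you intended    */
    }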


(**3) As an example of near transparency: 
2.11BSD comes with the source to the game rogue.  When I tried to 
build it, I got a "text overflow" message from ld.  After very little 
digging, I discovered mkovmake.  I ran it, following the mkovmake man 
page, and was given a new Ovmakefile that built a binary with eight 
overlays. 
I started it up, and immediately ran into what I thought was a Kobold 
(letter K) and found that it was now called a Kestrel.


   size rogue
   text    data    bss     dec     hex
   25856   18384   10556   54796   d60c    total text: 88192
   overlays: 8192,7936,7936,8192,7552,7488,6976,8064

Note that this is a PDP-11 binary with an 88192-byte text size.

Just for the hell of it, here's the kernel, using a total of 108288 bytes of text:

   size /unix
   text    data    bss     dec     hex
   55424   6340    41960   103724  1952c   total text: 108288
   overlays: 7744,7488,7872,7296,2240,7104,4992,5696,2432

That is without networking. The kernel separately loads  /netnix to 
supervisor space:


   size /netnix
   text    data    bss     dec     hex
   60864   2362    38448   101674  18d2a




___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] Using TAP/TUN

2012-03-06 Thread Michael Bloom
I've started looking a little deeper,  and the first thing I found is 
this from linux-source/Documentation/networking/tuntap.txt:


   4. What is the difference between TUN driver and TAP driver?
   TUN works with IP frames. TAP works with Ethernet frames.

So it would seem that to get the desired multi-protocol generality, one 
would want to use the TAP ioctls, not the TUN ioctls that tunctl.c uses.  
Meaning that to support non-IP protocols, the example in 
0readme_ethernet.txt would need to change to use a tapctl program 
rather than the tunctl program it currently uses (or tunctl could be 
made into a more generic tuntapctl program).  Or perhaps it somehow 
works as desired right now, in spite of the available documentation.


I downloaded the uml_utilities source, and unfortunately, the code 
contains precious little commenting other than "Licensed under the 
GPL", and no documentation.  So figuring it out may take a little time.


There is a uml_switch program.  If we're lucky, this may be a network 
switch in software.  Unfortunately, it uses the TUN ioctls, so if it is 
a switch it would need reworking to use TAP ioctls for it to be an 
ethernet switch rather than an IP switch.  If only it were commented, I 
could say more at this time.


- Michael


On 03/05/2012 05:55 AM, Mark Pizzolato - Info Comm wrote:

That is completely true for IP traffic.

However, the key goal for the simulator level networking is to have
these simulated systems be able to talk to other real or simulated systems
in all the ways that they did when they were natively networking.  When
these systems were in their prime, IP was not the dominant networking
protocol.

These systems spoke DECnet, LAT, SCS(Vax Cluster Communications), etc.
All of these protocols were designed around communications on a LAN.
The 0readme_ethernet.txt document's goal is to try and get a simulator
to BOTH be able to participate with these protocols on the local LAN
AND to have the host system be able to also communicate with the host
system with whatever protocols they may actually share (usually only
IP these days).
___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

Re: [Simh] Using TAP/TUN

2012-03-06 Thread Michael Bloom
I previously sent a reply to the original message (below), but used 
only "reply" (oops), not "reply all", so it did not go to the list.


It was too long anyway, but the major gist was that the bridge calls 
will not attach a wireless LAN card to a bridge, at least under Linux 
3.0.0 (the successor to Linux 2.6.n).  They return EOPNOTSUPP.  So some 
other means is necessary to gate a simulated network of SIMH systems to 
the real world.


Perhaps a sufficient workaround might be for one of the machines on 
the simulated network to have two NICs (one of which is just tunnelled 
to the host) and to do routing on that machine between the NIC on the 
simulated network and the merely tunnelled NIC.


As things stand now, however, at least with Linux version 3, the 
simulated LAN seemingly cannot be merged with a wireless LAN via bridging.


- Michael

On 03/05/2012 05:55 AM, Mark Pizzolato - Info Comm wrote:

That is completely true for IP traffic.

However, the key goal for the simulator level networking is to have
these simulated systems be able to talk to other real or simulated systems
in all the ways that they did when they were natively networking.  When
these systems were in their prime, IP was not the dominant networking
protocol.

These systems spoke DECnet, LAT, SCS(Vax Cluster Communications), etc.
All of these protocols were designed around communications on a LAN.
The 0readme_ethernet.txt document's goal is to try and get a simulator
to BOTH be able to participate with these protocols on the local LAN
AND to have the host system be able to also communicate with the host
system with whatever protocols they may actually share (usually only
IP these days).

In the earliest days of simh networking, the only strategy we could
come up with which would achieve the full networking goal was to
install an additional NIC in the hosting system which was dedicated
to the simh instance and connect that NIC to the same LAN as the
primary host NIC.  The host's network stack would be configured to
not use this additional NIC for anything and things would work just
fine.  This strategy was one which also worked for essentially any
host platform.

Meanwhile, many folks either had host systems which couldn't easily
accommodate the addition of additional NICs or they merely wanted
to come up with ways to achieve the full set of goals without the
addition of extra hardware.  The current simh networking document
(0readme_ethernet.txt) describes how these combined goals can be
achieved in various host specific ways.  On Linux the bridging
approach achieves this functionality.  On Windows, the native stack
(combined with some extra code in sim_ether.c) can achieve the goal
without any extra hardware or any special host configuration steps.

If your only goal is to use IP to communicate between a simulator
and other places (including the hosting system), then your recipe
will be sufficient.  I'm not sure how well it would achieve this
goal if you happened to want to run multiple simulators on the
same host.  The bridging recipe works for this extended case as
well.

- Mark Pizzolato






___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh


Re: [Simh] Using TAP/TUN

2012-03-04 Thread Michael Bloom
Here's a relatively simple shell script I cobbled together to automate 
setting up TAP/TUN on Linux:

-- CUT HERE 
#! /bin/sh

USERNAME=mab

ETH_IF=wlan3
TUNNEL_IF=tap0
NET_BITS=24
SUB=0

LAN_NET=192.168.$SUB
TUNNEL_NET=192.168.$SUB

# Addresses for both sides of the tunnel
SIMH_HOST=$TUNNEL_NET.253
TUNNEL_GATE=$TUNNEL_NET.254

#### END OF STUFF YOU CAN EDIT ####

# If tunnel doesn't exist, create it.
ifconfig $TUNNEL_IF > /dev/null 2>&1 || tunctl -u $USERNAME

ifconfig $TUNNEL_IF $TUNNEL_GATE up

if [ x$LAN_NET = x$TUNNEL_NET ]
then
    # The above ifconfig of tap0 created a default route through our tunnel,
    # superseding the default route via $ETH_IF to our router.  Remove it.
    route delete -net $TUNNEL_NET/$NET_BITS gw 0.0.0.0 dev $TUNNEL_IF
else
    echo "Your router probably has at least a default route going to the internet,"
    echo "and a route for your lan.  If your tunnel and lan are on different nets,"
    echo "you should make sure the router knows to try to send packets intended"
    echo "for $TUNNEL_NET through $LAN_NET.  Linux is supposed to automagically"
    echo "proxy arp for a whole subnet when forwarding, but there must"
    echo "still be a route to this host from your router for $TUNNEL_NET."
fi

echo 1 > /proc/sys/net/ipv4/conf/$TUNNEL_IF/proxy_arp

# create a route to SIMH's system host through our tunnel

route add -host $SIMH_HOST dev $TUNNEL_IF

# prepare to answer arps for the simulated machine

arp -Ds $SIMH_HOST $ETH_IF pub

exit 0

___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh

[Simh] Using TAP/TUN

2012-03-03 Thread Michael Bloom
I just wanted to comment that the instructions for networking in 
0readme_ethernet.txt can be simplified for Linux.  The bridge utilities 
are not really necessary after using tunctl, unless you need to do 
something more complex.


On an Ubuntu Oneiric system, I was able to use just the commands listed 
in the tunctl man page.


On that system, the commands I used were:

# /usr/sbin/tunctl -t tap0 -u username


# ifconfig tap0 192.168.0.254 up
# route add -host 192.168.0.253 dev tap0
# bash -c 'echo 1 > /proc/sys/net/ipv4/conf/tap0/proxy_arp'
# arp -Ds 192.168.0.253 wlan3 pub
(if hardwired, use eth0 in place of wlan3)

$ Run the simulator and "attach xq tap:tap0" ;

finally, on PDP-11 (or VAX) Unix:

ifconfig qe0 192.168.0.253 netmask ... broadcast ...
etc . . .

There was one surprise, however.

Without one more command,  I lost the ability to reach the outside world.

The fix was adding this command:

route delete -net 192.168.0/24 gw 0.0.0.0 dev tap0

which unmasked normal default routing through the ethernet interface.

After that,  the emulated system and the host were both able to see each 
other,  and both could access the outside world.


Apparently, the commands I took from the man page resulted in an 
unspecified

route add -net 192.168.0/24 gw 0.0.0.0 dev tap0

being treated as implied by the man page's "route add -host ..." command.


The result was that *all* packets on the host that were destined for the 
network 192.168.0/24 were being sent to my simulator through tap0 rather 
than to my router through my usual interface.


If I had made the simulator be on a different network, I don't think 
that would have been a problem.  But the man page does not tell you any 
of that, and while its example shows both sides of the tunnel being on 
the same network, it leaves unspecified whether or not the host is on 
the same network as the tunnel.


- Michael

___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh


Re: [Simh] trouble selecting VAX ethernet on CentOS

2012-03-02 Thread Michael Bloom

On 02/29/2012 15:38, Michael L. Umbricht wrote:

With no special options to "make vax" I do get "push %rbp" from 
"disas eth_open".

Thanks, Michael.  That is good to know.

Thanks also to Mark for the explanation about the stack trashing.  
(Perhaps pthread_attr_setguardsize(), or the equivalent where 
available, might help prevent such a bug from affecting its 
debuggability?)
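
For reference, the call I have in mind looks roughly like this; a 
sketch only, not simh code, and the guard size chosen here is arbitrary:

    #include <pthread.h>
    #include <unistd.h>

    /* Create a worker thread with an enlarged guard region, so a runaway
       stack faults immediately instead of silently trashing adjacent memory. */
    pthread_t start_guarded_thread(void *(*fn)(void *), void *arg)
    {
        pthread_t      t;
        pthread_attr_t attr;

        pthread_attr_init(&attr);
        pthread_attr_setguardsize(&attr, 4 * (size_t)sysconf(_SC_PAGESIZE));
        pthread_create(&t, &attr, fn, arg);
        pthread_attr_destroy(&attr);
        return t;
    }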


The source of my worry had been this: I've seen several systems (they 
were not Linux, though) in which insufficient thought had been given 
to the needs of debugging when setting up the stack after the 
occurrence of a trap.


On one such (System V R3) system, I ended up rewriting part of the trap 
handling code so that an expanded signal info frame placed on the stack 
put a copy of the frame pointer in the same relative position it would 
have been in had the handler been invoked by a subroutine call, so that 
the debugger could easily trace (from the signal handler, if the signal 
was caught) back to the routine where the trap had been generated, with 
no gaps resulting from insufficient thought in the design.


The only tricky part was making it backward compatible so that old 
binaries would still run correctly.  To do this, my new signal 
trampoline took advantage of the fact that the old binaries were looking 
for a flag placed on the stack in the event of traps like SEGV's having 
happened at user level to tell them to issue a specific 
return-from-signal sys3b() call (despite it being a 68020 machine not a 
3b2).  If it was an old binary, it would look for the flag and use the 
old mechanism, but a new binary would be aware it was in a BSD style 
signal trampoline and use a BSD style sigreturn() that had been added in 
support of the new mechanism. It's so long ago, I no longer remember the 
details clearly,  but the improvement in debuggability was so dramatic 
that I subsequently modified the kernel to save the frame pointer on all 
traps so that panic dumps could be debugged just as easily, at only a 
very slight cost in kernel time.


When I saw that stack, and read in the gcc manual page that the default 
for frame pointer generation on Intel machines had been reversed, I 
guess I panicked a bit myself.  But empirical evidence seems to show 
the manual page to be wrong in that regard.

___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh


Re: [Simh] trouble selecting VAX ethernet on CentOS

2012-02-26 Thread Michael Bloom

On 01/-10/-28163 11:59 AM, Jan-Benedict Glaw wrote:

Are all object files built with -g and is the `vax' binary not
stripped? Then you'd just run it with gdb:

$ gdb ./vax .

Then do your work (-> make it segfault) and when you're back at GDB's
prompt, use the `bt' command to show us where it's breaking.


It may be obvious, but in case it is not, don't replace the "." 
with simh arguments.  That would produce a "No such file or directory" 
message.


Instead, after typing:

$ gdb ./vax

run the program with

(gdb) run Program_Arguments

where Program_Arguments are the arguments you would normally provide 
on the command line to simh.


Then, follow the rest of Jan-Benedict's instructions (make it segfault, 
and get a backtrace).  Actually, after typing "bt", it might help to 
also say "bt full", to provide additional information (about local 
variable values) without cluttering up the initial backtrace.


- Michael


___
Simh mailing list
Simh@trailing-edge.com
http://mailman.trailing-edge.com/mailman/listinfo/simh