Re: [Simh] Simulator development: Advice on system timing

2017-10-26 Thread Mark Pizzolato
On Thursday, October 26, 2017 at 10:56 AM, Seth Morabito wrote:
> I'm battling system timing in the 3B2/400 emulator I'm working on. As
> with any system, particular activities such as disk seeks, reads, and
> writes must be completed within certain margins -- if they happen too
> early or too late, an interrupt will be missed. But the "fudge factor",
> so to speak, seems pretty tight on the 3B2, possibly because it runs at
> a system clock frequency of 10MHz.
> 
> In the simulator, I really want to be able to say "call sim_activate()
> with a delay of 8ms (or 720us, or whatever) of simulated time". I'm
> trying to come up with the best strategy to map simulator steps to
> simulated time.
> 
> If I know that the real system runs at 10MHz, I know each clock cycle
> takes 100ns. So far so good -- but of course on a real system, each
> instruction takes several system clock steps. If I had to hazard a
> guess, I'd say each real instruction on a real system takes an average
> of 8-10 clock cycles, depending on the instruction length and number of
> memory accesses. Each step of the simulator does a complete instruction
> cycle - fetch, decode, execute, reads and writes - in one go, so it's
> not a direct mapping of simulator step to the 10 MHz clock.
> 
> How do I translate this knowledge into accurate delays for
> "sim_activate()" in my disk access routines? Is there a best practice
> for this?

On top of Paul's explanation, there are some relevant concepts relating to 
how other simulators address this subject.

In general, device simulations use sim_activate() delay times which are 
determined somewhat empirically, based on the minimum instruction-time 
delays that the software commonly run on the system requires.  For many 
devices that is completely sufficient, and in combination with simulator 
throttling the user experience can come reasonably close to the 
experience of the original systems.  This would probably be considered 
best practice.
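
As a rough sketch of that pattern (the device names and the delay
constant below are invented, standing in for empirically tuned values):

    /* sim_activate() delays are counted in simulated instructions, so
       the constant below would be adjusted until the guest software
       behaves.  Names (hd_unit, hd_svc, HD_SEEK_DELAY) are made up. */
    #include "sim_defs.h"                /* UNIT, t_stat, sim_activate */

    #define HD_SEEK_DELAY 2000           /* instructions; tuned by hand */

    static t_stat hd_svc (UNIT *uptr);   /* completion service routine */
    static UNIT hd_unit = { UDATA (&hd_svc, UNIT_ATTABLE, 0) };

    void hd_start_seek (void)            /* guest wrote the "go" bit */
    {
        sim_activate (&hd_unit, HD_SEEK_DELAY);  /* hd_svc runs later */
    }

    static t_stat hd_svc (UNIT *uptr)
    {
        /* post completion status and request the device interrupt */
        return SCPE_OK;
    }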

Meanwhile, if you REALLY want explicit time-based device activation 
delays, you can use sim_activate_after().  This API takes the time until 
activation in usecs.  Time here is dynamically calibrated against the 
actual simulated instruction execution rate.
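
Continuing the sketch above, an 8 ms transfer delay would just be:

    #define HD_XFER_USECS 8000           /* Seth's 8 ms example */

    void hd_start_transfer (UNIT *uptr)
    {
        /* hd_svc runs ~8 ms of calibrated simulated time from now */
        sim_activate_after (uptr, HD_XFER_USECS);
    }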

- Mark

Re: [Simh] Simulator development: Advice on system timing

2017-10-26 Thread Paul Koning

> On Oct 26, 2017, at 2:23 PM, Armistead, Jason wrote:
> 
> If you need accurate device timing, then perhaps something like the core of 
> MAME/MESS is a better choice than SIMH.  All those retro arcade machine games 
> in MAME depend on counting cycles in order to give realistic game behavior 
> for the human who is playing them.  If they ran twice as fast, they'd be 
> unplayable (or at least, very challenging), so everything is carefully 
> handled to ensure it runs smoothly at a realistic rate.
> 
> Maybe not the answer you're looking for, but it is one alternative.

I didn't know about MAME, thanks.  Then again, both computation and I/O can 
look pretty realistic in SIMH with instruction pacing enabled (set to the 
average instruction time).  You're quite right that this isn't exact, and if 
you have real-time software (like games) that would be visible.  But it's quite 
good enough to give you an "oh yes, it really *was* that slow" experience.  :-)
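
(For anyone trying this, pacing is enabled from the sim> prompt;
assuming the usual SCP throttle syntax, something like:

    sim> SET THROTTLE 1250K      ; pace execution at roughly 1.25 MIPS
    sim> SHOW THROTTLE           ; display the current setting
    sim> SET NOTHROTTLE          ; back to full host speed

where 1250K matches the ~1.25 MIPS average worked out later in the
thread.)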

paul


Re: [Simh] Simulator development: Advice on system timing

2017-10-26 Thread Armistead, Jason
If you need accurate device timing, then perhaps something like the core of 
MAME/MESS is a better choice than SIMH.  All those retro arcade machine games 
in MAME depend on counting cycles in order to give realistic game behavior for 
the human who is playing them.  If they ran twice as fast, they'd be unplayable 
(or at least, very challenging), so everything is carefully handled to ensure 
it runs smoothly at a realistic rate.

Maybe not the answer you're looking for, but it is one alternative.

-Original Message-
From: Simh [mailto:simh-boun...@trailing-edge.com] On Behalf Of Paul Koning
Sent: Thursday, 26 October 2017 2:16 PM
To: Seth Morabito
Cc: simh@trailing-edge.com
Subject: Re: [Simh] Simulator development: Advice on system timing


> On Oct 26, 2017, at 1:55 PM, Seth Morabito <w...@loomcom.com> wrote:
> 
> Hello all, and especially those who have written or are writing 
> simulators,
> 
> I'm battling system timing in the 3B2/400 emulator I'm working on. As 
> with any system, particular activities such as disk seeks, reads, and 
> writes must be completed within certain margins -- if they happen too 
> early or too late, an interrupt will be missed. But the "fudge 
> factor", so to speak, seems pretty tight on the 3B2, possibly because 
> it runs at a system clock frequency of 10MHz.

In most systems, odd things can happen if interrupts happen too soon.  If an 
I/O completes essentially instantaneously, then software that relies on being 
able to start I/O, then do some more stuff, and count on that completing before 
the interrupt -- even though interrupts are enabled -- will break.

The correct description for such software is "defective," though there certainly 
is quite a lot of it in the wild.

For this reason, simulators need to delay interrupts by some number of 
instruction times, and SIMH makes that easy.  But it doesn't normally matter 
that the timing is not exact; all that's needed in most cases is that the 
interrupt is held off long enough to work around the sort of poorly written 
code I mentioned.  So if you base your delays on average instruction times and 
average I/O latency, you'll normally be fine.  In the 3B2 case, with a 10 MHz 
clock and an average of 8 cycles per instruction, that's 1.25 MIPS.  A disk I/O 
might take 20 ms (seek time plus rotational latency; a half rotation at 
3600 RPM is about 8.3 ms by itself), so that would be 25,000 instruction 
times.  Quite likely you could crank that number way down and have the code 
still run.

If you want to have realistic timing, that's a different matter.  You'd find 
yourself tracking the cylinder position and charging for seek timing.  The 
DECtape emulation does that, and it matters because some operating systems 
(TOPS-10, VMS) do tape position prediction based on elapsed time.  But that's 
an unusual case.

paul



Re: [Simh] Simulator development: Advice on system timing

2017-10-26 Thread Paul Koning

> On Oct 26, 2017, at 1:55 PM, Seth Morabito wrote:
> 
> Hello all, and especially those who have written or are writing
> simulators,
> 
> I'm battling system timing in the 3B2/400 emulator I'm working on. As
> with any system, particular activities such as disk seeks, reads, and
> writes must be completed within certain margins -- if they happen too
> early or too late, an interrupt will be missed. But the "fudge factor",
> so to speak, seems pretty tight on the 3B2, possibly because it runs at
> a system clock frequency of 10MHz.

In most systems, odd things can happen if interrupts happen too soon.  If an 
I/O completes essentially instantaneously, then software that relies on being 
able to start I/O, then do some more stuff, and count on that completing before 
the interrupt -- even though interrupts are enabled -- will break.

The correct description for such software is "defective," though there certainly 
is quite a lot of it in the wild.
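
A sketch of that defective pattern, as hypothetical guest driver code
(write_csr, CSR_GO, and finish_read are all made-up names):

    /* If the simulated device raises its completion interrupt with
       zero delay, disk_intr() can run before io_pending is set and
       the completion is effectively lost. */
    extern void write_csr (int bits);    /* hypothetical device access */
    extern void finish_read (void);
    #define CSR_GO 1

    volatile int io_pending = 0;

    void start_read (void)
    {
        write_csr (CSR_GO);     /* start the I/O; interrupts enabled */
        /* ... "do some more stuff" ... */
        io_pending = 1;         /* handler assumes this is already set */
    }

    void disk_intr (void)       /* completion interrupt handler */
    {
        if (io_pending)         /* zero-delay I/O: still 0 here ... */
            finish_read ();     /* ... so the completion never posts */
        io_pending = 0;
    }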

For this reason, simulators need to delay interrupts by some number of 
instruction times, and SIMH makes that easy.  But it doesn't normally matter 
that the timing is not exact; all that's needed in most cases is that the 
interrupt is held off long enough to work around the sort of poorly written 
code I mentioned.  So if you base your delays on average instruction times and 
average I/O latency, you'll normally be fine.  In the 3B2 case, with a 10 MHz 
clock and an average of 8 cycles per instruction, that's 1.25 MIPS.  A disk I/O 
might take 20 ms (seek time plus rotational latency; a half rotation at 
3600 RPM is about 8.3 ms by itself), so that would be 25,000 instruction 
times.  Quite likely you could crank that number way down and have the code 
still run.
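
In code form, that conversion might look like the following sketch (the
constants and names are assumptions following the arithmetic above, and
int32 is the usual sim_defs.h typedef):

    #define CPU_HZ          10000000.0   /* 10 MHz system clock */
    #define CYCLES_PER_INST 8.0          /* rough average */
    #define INSTS_PER_SEC   (CPU_HZ / CYCLES_PER_INST)  /* 1.25 MIPS */

    /* real-machine microseconds -> sim_activate() instruction count */
    static int32 usecs_to_insts (double usecs)
    {
        return (int32) (usecs * INSTS_PER_SEC / 1000000.0);
    }

    /* usecs_to_insts (20000.0) == 25000, matching the figure above */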

If you want to have realistic timing, that's a different matter.  You'd find 
yourself tracking the cylinder position and charging for seek timing.  The 
DECtape emulation does that, and it matters because some operating systems 
(TOPS-10, VMS) do tape position prediction based on elapsed time.  But that's 
an unusual case.
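
If you did want to charge for seek time, the usual shape is to remember
the current head position and scale the delay by the distance moved.  A
sketch, with invented timing constants (assumes sim_defs.h and the
sim_activate_after() API mentioned earlier in the thread):

    #define SEEK_BASE_USECS    3000.0    /* settle time, made up */
    #define SEEK_USECS_PER_CYL  100.0    /* per-cylinder, made up */

    static int32 hd_cur_cyl;             /* current head position */

    void hd_seek (UNIT *uptr, int32 new_cyl)
    {
        int32 delta = (new_cyl > hd_cur_cyl) ? (new_cyl - hd_cur_cyl)
                                             : (hd_cur_cyl - new_cyl);
        hd_cur_cyl = new_cyl;
        sim_activate_after (uptr,
            (uint32) (SEEK_BASE_USECS + SEEK_USECS_PER_CYL * delta));
    }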

paul



[Simh] Simulator development: Advice on system timing

2017-10-26 Thread Seth Morabito
Hello all, and especially those who have written or are writing
simulators,

I'm battling system timing in the 3B2/400 emulator I'm working on. As
with any system, particular activities such as disk seeks, reads, and
writes must be completed within certain margins -- if they happen too
early or too late, an interrupt will be missed. But the "fudge factor",
so to speak, seems pretty tight on the 3B2, possibly because it runs at
a system clock frequency of 10MHz.

In the simulator, I really want to be able to say "call sim_activate()
with a delay of 8ms (or 720us, or whatever) of simulated time". I'm
trying to come up with the best strategy to map simulator steps to
simulated time.

If I know that the real system runs at 10MHz, I know each clock cycle
takes 100ns. So far so good -- but of course on a real system, each
instruction takes several system clock steps. If I had to hazard a
guess, I'd say each real instruction on a real system takes an average
of 8-10 clock cycles, depending on the instruction length and number of
memory accesses. Each step of the simulator does a complete instruction
cycle - fetch, decode, execute, reads and writes - in one go, so it's
not a direct mapping of simulator step to the 10 MHz clock.

How do I translate this knowledge into accurate delays for
"sim_activate()" in my disk access routines? Is there a best practice
for this?

-Seth
-- 
  Seth Morabito
  w...@loomcom.com