In these cases, I think we may be describing the same thing. To
reconcile terms and elaborate:
a) I've been clumsily trying to describe an interactive command shell
with loadable configuration/extension scripts. With this, one could
"change directories" through the design hierarchy, select signals to
trace, inspect state, step the simulation forward, etc. Is this what you
mean by "control simulation from outside (asynchronously)"?
Yes, let me throw in a somewhat documented scenario (with the current
limitations):
http://tech.section5.ch/news/?p=124
The "plus" features would be something like:
$ ./sim --interactive --port=2016
Python:
> import time
> import ghdlex
> def cond(signals):
>     return True
> ...
> sim = ghdlex.connect(":2016")
> sim.set("Timescale", "1ns")
> s0 = ghdlex.StopCondition(cond)
> sim.stop_conditions.insert(s0)
> sim.start()
> while sim.state() == ghdlex.RUNNING:
>     time.sleep(0.1)
The interface would also have to react to the assert events that you
would normally use to terminate your sim from *within* your HDL.
Arguably, a blocking call like
> evt = sim.run()
could be offered as well.
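To make the shape of such a blocking API concrete, here is a minimal, self-contained sketch; `Simulation`, `StopCondition`, and the `counter` signal are hypothetical stand-ins invented for illustration, not anything that exists in ghdlex today:

```python
# Hypothetical sketch of a blocking run() that polls stop conditions.
# None of these names are real ghdlex API.
class StopCondition:
    def __init__(self, predicate):
        self.predicate = predicate

class Simulation:
    def __init__(self):
        self.stop_conditions = []
        self.signals = {"counter": 0}   # stand-in for traced signal values

    def run(self, max_steps=100):
        """Blocking variant: step until a condition fires or the budget runs out."""
        for step in range(max_steps):
            self.signals["counter"] += 1   # stand-in for advancing simulation time
            for cond in self.stop_conditions:
                if cond.predicate(self.signals):
                    return ("stopped", step)
        return ("timeout", max_steps)

sim = Simulation()
sim.stop_conditions.append(StopCondition(lambda s: s["counter"] >= 5))
evt = sim.run()
# evt == ("stopped", 4)
```

The same loop is where assert events from within the HDL would surface: they would simply be another source of stop events returned from run().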
Some LabVIEW users would do it similarly, with the simulation then being
accessed from a VI.
b) I've vaguely imagined some kind of generic VHPI library connected to
the embedded interpreter such that writing VHDL that interfaces through
configurable IPC to whatever environment one prefers (e.g., Octave, R,
SciPy, etc.) would be easy.
Making blocking calls from the simulation to something else can slow
down your system massively. If you only do that every now and then with
an artificially slow clock, it might be OK (I'm using that to refresh a
virtual display; a somewhat outdated example:
http://tech.section5.ch/news/?p=130).
The more performant way is, I think, a software FIFO running in a
separate thread. I've also implemented a virtual RAM for more
performance that can be accessed through the network backdoor. In fact,
that's the more elegant way to run a framebuffer display.
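As a rough illustration of the FIFO idea (the names `sim_fifo` and `consume` are made up for this sketch; the real thing would sit behind the netpp/VHPI layer), the standard library's thread-safe queue already provides the producer/consumer plumbing:

```python
# Sketch: a software FIFO between the simulation side (producer) and a
# consumer thread, using the stdlib's thread-safe Queue.
import queue
import threading

sim_fifo = queue.Queue(maxsize=1024)  # bounded, so a stalled consumer back-pressures the producer
received = []

def consume():
    while True:
        word = sim_fifo.get()
        if word is None:              # sentinel: simulation finished
            break
        received.append(word)

t = threading.Thread(target=consume)
t.start()

for w in range(8):                    # stand-in for values written from the simulation
    sim_fifo.put(w)
sim_fifo.put(None)
t.join()
# received == [0, 1, 2, 3, 4, 5, 6, 7]
```

The bounded queue is the important design choice: the simulation only blocks when the FIFO is actually full, instead of on every single transfer.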
If you have any non-covered application scenarios in mind, feel free to
elaborate (no pun intended).
By "client side stuff from inside the simulation" do you mean actually
implementing the VHPI coroutines in the embedded interpreter? I suppose
that is where the embedded Python [program] interpreter would be *much*
more useful than a light-weight Tcl [command] interpreter. But that
would also most likely mean that Python would be used as the interactive
command interface.
Well, if you'd embed the Python interpreter as a CLI which would fire up
everything behind it in a separate (asynchronous) thread, you could
probably have both (a) and (b) using callbacks.
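A toy sketch of that arrangement, with `run_simulation` and `on_assert` as purely illustrative names: the simulation loop runs in a background thread and reports events through a callback, while the foreground thread stays free for interactive commands:

```python
# Sketch: simulation in a background thread, events delivered via callback.
# run_simulation/on_assert are illustrative, not any existing GHDL interface.
import threading
import time

events = []

def on_assert(message):
    events.append(message)        # callback invoked from the simulation thread

def run_simulation(callback, steps):
    for n in range(steps):
        time.sleep(0.001)         # stand-in for simulation work
        if n == 2:
            callback("assert fired at step %d" % n)

t = threading.Thread(target=run_simulation, args=(on_assert, 5), daemon=True)
t.start()
# ... the foreground could keep accepting interactive commands here ...
t.join()
# events == ["assert fired at step 2"]
```

Note the callback runs on the simulation thread, so anything it touches from the interactive side would need the same locking discipline as the FIFO above.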
But that typically calls for complicated scenarios. That's why I tend to
insert the small "netpp" property layer only into the executables that
are supposed to speak to each other, i.e. resolving into a clear
client/server (master/slave) relationship with fewer pitfalls, I guess.
The simulation executable just does not feel like a "master" to me.
The ugly side of it (from an engineering point of view) is: I wouldn't
know how Ada can interface with Python except by going through a C API.
That's why it would probably be wise to separate things and look at a
plugin concept like the one that already exists with the .vpi DLL
interface.
...
How often does shell syntax need to change? (Rhetorical).
Not too often. For the Tcl user, things don't change much; it's rather
an implementation matter. A few years ago I tried to migrate some old
(10-year-old) code to a newer Tcl version. Compared to updating old
Python crap, this turned out to be more of a maintenance pain. You might
start an argument about Python 2.x vs. 3.x, but since this would be a
new design anyway, there's no point looking at legacy.
On the other side, the pain you may have with Python/C interfacing is
reference-counting issues. Certainly, I wouldn't want to create too many
dynamic Python objects from within a simulation.
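The bookkeeping that a C extension has to mirror by hand with Py_INCREF/Py_DECREF can at least be observed from the Python side; a small illustration (not GHDL-specific):

```python
# Observing CPython reference counts from Python. In a C extension,
# every one of these increments would be a manual Py_INCREF to balance.
import sys

x = object()
r0 = sys.getrefcount(x)   # baseline (the call itself holds one temporary reference)
refs = [x] * 1000         # every list slot is another reference to the same object
r1 = sys.getrefcount(x)
assert r1 - r0 == 1000    # the count rose by exactly one per new reference
del refs                  # dropping the list releases all 1000 references again
assert sys.getrefcount(x) == r0
```

Missing one decrement leaks; one too many crashes the interpreter, which is exactly why spawning lots of dynamic objects per simulation step is asking for trouble.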
I hadn't really considered that including Tk would be desirable. Would
people want to create a graphical interface to the [executable
simulation model][2]? The jimTcl implementation does *not* have a Tk
interface.
I know jimTcl somewhat from the OpenOCD debugger side, but to me that
always only worked halfway. That's probably got nothing to do with
jimTcl but with the messy implementation concept of OpenOCD. But
eventually, it's always up to the one doing the job to decide :-)
[2]: What is a good way to refer to the executable binary file that is
produced by the ghdl compiler?
Sounds verbose to me "as is". Or 'simulation runtime'? Or, Herman the
German style: Elaboriertes Kompilat ("elaborated compilate") :-)
Now we've gotten off the actual synthesis topic though.
If we close the loop from Python back to synthesis, I'd like to point
out that there's a powerful MyHDL framework that already has all the
AST/elaboration issues covered inside the Python runtime environment.
However, it's lacking a proper, strongly typed direct transfer language
usable for mapping, and I believe it shouldn't be used for verification
except from a functional point of view. That's where GHDL comes in...
Greetings,
- Martin
_______________________________________________
Ghdl-discuss mailing list
Ghdl-discuss@gna.org
https://mail.gna.org/listinfo/ghdl-discuss