On 4 Jun 2013, at 6:13 AM, Jon Taylor <[email protected]> wrote:

> Setting stop-delta to 2 billion seems to make the simulation run long enough 
> to enable me to test my code - the native sockets I'm using to emulate a bus 
> aren't timing out any more. Later on when Controlix gets more controls and 
> the execution environment is ready to run bare-metal on a CPU, I will 
> probably need to compile my own simulation code with -stop-delta disabled as 
> you describe.
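For reference, the delta-cycle cap you're raising is GHDL's run-time option 
--stop-delta, which stops the simulation after N delta cycles at the same 
simulation time (the default is 5000). A sketch of the invocation, assuming a 
hypothetical top-level entity named controlix_top:

```shell
# Analyze, elaborate, and run with a very large delta-cycle cap.
# 'controlix_top' is a placeholder for your actual top-level entity.
ghdl -a controlix_top.vhdl
ghdl -e controlix_top
ghdl -r controlix_top --stop-delta=2000000000
```

Raising the cap postpones the stop, but it doesn't change the underlying issue 
that the design never advances simulation time.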

I was under the impression from the Controlix blog update that you were planning 
to eventually work around this limitation, which hurts portability:

All processes are now self-clocked to run independently. Each process is made 
sensitive to a unique signal which is precessed within the process itself 
(usually at the end). While this isn't synthesizable at its core (each process 
effectively becomes a testbench), it shouldn't be much work later on to precess 
each process' clock signal off a master clock of some type, thus solving that 
problem. I just don't want to have to fiddle with synchronizing that many 
signals right now, when I have a lot of general code I need to get working... 
at least I don't have to run any more infinite loops anywhere in the whole 
codebase to get processes to progress in general, and I am free to write "real" 
(i.e. orthogonal) VHDL code as I need to.
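As I read that description, the pattern is a process sensitive only to a signal 
it toggles itself, so every activation schedules the next one in the following 
delta cycle and simulation time never advances. A minimal sketch (entity and 
signal names are my own, not from your code):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity self_clocked is
end entity;

architecture sim of self_clocked is
  signal tick : std_logic := '0';
begin
  worker : process (tick)
  begin
    -- ... one step of this process's work goes here ...
    tick <= not tick;  -- "precess" our own clock: retriggers in the next delta
  end process;
end architecture;
```

Since all activity happens in delta cycles at time zero, this is exactly the 
shape of design that runs into GHDL's delta-cycle limit.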

One design goal of Controlix is stated as:

Based on the VHDL language for concurrent, verifiable and synthesizable 
programming. Run the same systems code in userspace, as an OS on a CPU-based 
system, in parallel on an FPGA or even masked into an integrated circuit!

That is, the ability to target a hardware implementation through synthesis.

I was an ASIC guy from the 1980s forward.  The complexity of the networking 
portion is on the same order as a full-featured network switch chip these days, 
and you can imagine it likewise requiring exception processing (a CPU 
somewhere, dragging down performance).

Coming up with targeted 'chip' boundaries might allow you to tackle conversion 
to a hardware-targetable implementation on a case-by-case basis.  The idea is 
a pure behavioral model which becomes mixed-mode and is eventually capable of 
becoming hardware. The early conversions teach you how to do the later ones 
faster and more efficiently.
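One way to sketch that migration path in VHDL is one entity per 'chip' 
boundary with two architectures, behavioral and RTL, so each block can be 
converted and verified independently while the rest of the design stays 
behavioral. The entity and port names below are illustrative only:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity bus_bridge is
  port (clk  : in  std_logic;
        din  : in  std_logic_vector(7 downto 0);
        dout : out std_logic_vector(7 downto 0));
end entity;

-- Pure model: fine for simulation, not synthesizable.
architecture behavioral of bus_bridge is
begin
  dout <= din after 10 ns;
end architecture;

-- Synthesizable registered version with the same interface.
architecture rtl of bus_bridge is
begin
  reg : process (clk)
  begin
    if rising_edge(clk) then
      dout <= din;
    end if;
  end process;
end architecture;
```

A configuration declaration then selects which architecture each instance 
binds to, so the design can stay mixed-mode throughout the conversion.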

Your short-term solution sounds like it will only work as long as simulation 
time doesn't advance, and it's likely that the more you rely on synthetic 
programming in lieu of hardware simulation, the harder it will be to target 
hardware later.

_______________________________________________
Ghdl-discuss mailing list
[email protected]
https://mail.gna.org/listinfo/ghdl-discuss
