> So the root problem IMO seems to me that the data structures used in grt
> cause non-sensitized processes to use simulation time even when they're
> idle (i.e. waiting for some event). grt should IMO keep with every
> signal a list of non-sensitized processes to wake up in case of an event
> (as it does for sensitized processes, but unlike the sensitized case it
> must be changed dynamically at runtime) and a sorted list of wakeup
> times. That way, wait would be somewhat more expensive (as it has to
> update those two data structures), but there would be no need to
> continually iterate over the list of non-sensitized processes, thus
> inactive non-sensitized processes would not consume any CPU at all.
>   
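The data structures proposed above can be sketched roughly as follows. This is a minimal Python illustration, not grt's actual Ada implementation: every name here is hypothetical, and a real kernel would also need to remove (or tombstone) stale timeout entries when a process is woken early by an event.

```python
import heapq

class Signal:
    def __init__(self):
        self.value = 0
        # Dynamic list of non-sensitized processes to resume on the
        # next event on this signal; updated at runtime by 'wait'.
        self.waiters = []

class Kernel:
    def __init__(self):
        self.now = 0
        self.timeouts = []  # min-heap of (wakeup_time, process)

    def wait_on(self, signal, process, timeout=None):
        # 'wait' becomes somewhat more expensive: it must update
        # both structures.  In exchange, idle processes are never
        # iterated over again until a signal event or timeout.
        signal.waiters.append(process)
        if timeout is not None:
            heapq.heappush(self.timeouts, (self.now + timeout, process))

    def drive(self, signal, value):
        # An event on the signal wakes exactly the registered
        # waiters -- no scan over all non-sensitized processes.
        signal.value = value
        runnable = signal.waiters
        signal.waiters = []
        return runnable

    def advance(self):
        # Jump straight to the earliest wakeup time.
        if self.timeouts:
            self.now, proc = heapq.heappop(self.timeouts)
            return proc
        return None
```

The point of the sketch is the cost model: a process that is waiting consumes no CPU at all between events, because the kernel only ever touches the waiter lists of signals that actually change and the head of the timeout heap.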

It sounds to me like you're talking about very fundamental changes to 
the simulation kernel in GHDL, changes that unfortunately few of us are 
equipped to help Tristan with for lack of Ada coding experience.

Please correct me if I am wrong, but is the need to iterate over 
processes to decide whether to resume them a testbench-specific 
issue for your particular situation? You say you've changed from 
non-sensitized to sensitized VITAL blocks but are still getting the 
same behaviour, so it's therefore intrinsic to grt itself.

Is there any way to determine whether adding the data structures to 
track sensitized and non-sensitized processes would help or hinder 
performance for other types of simulation runs? GHDL has been around 
for some time, and this appears to be the first time something like 
this has come up.

This particular situation is about test benches, but perhaps in most 
other situations the best thing to do is to iterate over the list of 
processes, in which case the change might slow things down overall.


_______________________________________________
Ghdl-discuss mailing list
[email protected]
https://mail.gna.org/listinfo/ghdl-discuss
