Hi folks! Since the systemc implementation seems to be nearing an
interesting point, I thought it would be a good idea to let everyone know
what's going on.

1. Even though I've been checking in a batch of 40 CLs a week, the list of
pending CLs still sits at 92. Since the previous CLs haven't been reviewed,
I'm hoping it will be fine to just wait a week (without annotating all 92
CLs as such) and then check them all in to get everything up to date.


2. I'm planning to make some examples to show how you can run a systemc
simulation based on sc_main, or one which has systemc objects instantiated
by gem5 and built into a gem5 hierarchy. I asked about this in an earlier
email, but I'm assuming putting these in a new directory called "examples"
at the top level of the repository is ok. CLs will of course be available
for review if people want to object and/or suggest alternatives.


3. The systemc sources have been guarded by a scons variable, which means
that while you all likely have them in your checkouts (as of 92 CLs ago),
they aren't currently doing anything. I'm planning to flip that switch to
default on in the near future, and then remove the switch entirely after
that. This is just a heads up.


4. I have an adapted version of Accellera's (the upstream systemc folks')
package of tests and run script in src/systemc/tests. The script is pretty
flexible and deserves some documentation, but in short, if you want to run
it you can use a command line like this:

src/systemc/tests/verify.py -j 48 build/ARM/ --filter-file
src/systemc/tests/working.filt

Of the 853 total tests, some are simply uncompilable, and a few are for
features which are deprecated or highly non-standard and which I chose not
to implement, leaving 818. Exactly which tests are excluded and why is
described in the working.filt file mentioned in the command line above.

Of those 818, 798 are considered passing right now. Of the 20 remaining,
the majority, 18, are "failing" because my implementation chooses to run
some processes in one order, while the Accellera implementation chooses a
different order. Both are within the spec, but since "correctness" is
determined by matching the output, these tests fail. One of the others is
because of a corner case in how reset signals work at the very beginning of
simulation, and the last is because I don't free up dead processes, and one
of the tests attempts to reuse their names. That works in the Accellera
implementation, but mine recognizes the conflict and renames the new
objects per the spec.

I've made considerable effort to match the ordering of the Accellera
implementation, but these last few are, in my opinion, not worth chasing
down or contorting the gem5 implementation to match. My thought as of right
now is to update the reference output so that the tests pass with gem5's
ordering.

In general it's not great to use the output and arbitrary non-standard
ordering decisions to determine correctness. There are several places in
the implementation where I've done things in ways that I know are
inefficient, but which match Accellera. I would like to fix those, but
doing so would make some of the tests fail. In a perfect world, we would
have tests which checked only the required behavior and not arbitrary
undefined decisions the implementation is allowed to make.


5. There is no mechanism yet for connecting models to gem5, either through
TLM with a bridge, or through gem5 ports with a systemc interface on the
far side. This is the next major thing to implement, probably starting with
an interface/channel for speaking the gem5 protocol. This is also where the
existing gem5/TLM bridge would be very handy.

Gabe
_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
