> On Aug. 21, 2016, 4:29 a.m., Michael LeBeane wrote:
> > Well, having played around with this solution for KVM MMIO + Ruby a bit it 
> > does work (with the addition of a timing_noncacheable state as Andreas S. 
> > noted), but not very well.  If you put the system in timing mode you get 
> > events like DRAM refresh that make it so you can't stay in KVM very long, 
> > which kinda defeats the purpose. Any non-hacky ideas how to get around this?

This is an unfortunate side-effect of using KVM. A display processor would 
cause the same type of issue (you'd get events at least once per refresh, and 
possibly as often as once per N pixels). There are basically two high-level 
solutions: 

   1. Don't issue frequent events when running in KVM mode. I have been 
considering this for the HDLCD. If running in *_noncacheable, we'd just reduce 
simulation fidelity to get events down to something manageable (see the 
sketch after this list).
   2. Run KVM in a separate thread similar to when simulating a multi-core 
system using KVM. This allows you to put devices in one event queue and each of 
the simulated KVM cores in separate event queues and control when the queues 
are synchronised.
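
To make option 1 concrete, this is roughly the kind of config-level knob I 
have in mind for the HDLCD. Note that `virt_refresh_rate`, and the 
`system.hdlcd` wiring around it, are hypothetical here; treat this as a 
sketch of the idea, not an existing interface:

```python
# Sketch of option 1: if any KVM CPU is present, run the display
# controller at a reduced "virtual" refresh rate and batch the
# frame-buffer work into one event per frame, instead of the
# fine-grained per-line/per-pixel events used in timing mode.
# 'virt_refresh_rate' is a hypothetical parameter.
from m5.objects import BaseKvmCPU

if any(isinstance(cpu, BaseKvmCPU) for cpu in system.cpu):
    system.hdlcd.virt_refresh_rate = '20Hz'
```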

In this particular case, I think 2 sounds like a reasonable solution since you 
presumably want good timing fidelity for the GPU. Synchronisation is going to 
be "interesting", but the KVM CPU should be able to cope with being in its own 
thread. Communication should only really happen when handling MMIOs and 
interrupts, which already support synchronisation. I have something along these 
lines in my KVM script to map CPUs to threads:

```python
root.sim_quantum = m5.ticks.fromSeconds(options.quantum * 1E-3)

# Assign independent event queues (threads) to the KVM CPUs;
# event queue 0 is reserved for simulated devices.
for idx, cpu in enumerate(system.cpu):
    # Child objects usually inherit the parent's event
    # queue. Override that and use queue 0 instead.
    for obj in cpu.descendants():
        obj.eventq_index = 0

    cpu.eventq_index = idx + 1
```
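
For context, that snippet lives in a script that is otherwise structured 
roughly as below (a sketch; `options.quantum` is a millisecond value from my 
command line, and `system` is the usual full-system object). If I remember 
correctly, the quantum has to be set before m5.instantiate() for the 
inter-queue synchronisation to be set up properly:

```python
import m5
from m5.objects import Root

root = Root(full_system=True, system=system)

# The quantum bounds how far the per-CPU event queues may drift
# apart before they synchronise; set it before m5.instantiate().
root.sim_quantum = m5.ticks.fromSeconds(options.quantum * 1E-3)

# ... the per-CPU eventq_index assignment from above goes here ...

m5.instantiate()
event = m5.simulate()
print('Exiting @ tick %i: %s' % (m5.curTick(), event.getCause()))
```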

You might want to test the timing changes on their own in a multi-core system 
in timing_noncacheable mode to make sure that they synchronise correctly. I 
have a sneaking suspicion that they don't at the moment.


- Andreas


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3619/#review8668
-----------------------------------------------------------


On Aug. 21, 2016, 4:19 a.m., Michael LeBeane wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3619/
> -----------------------------------------------------------
> 
> (Updated Aug. 21, 2016, 4:19 a.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> -------
> 
> Changeset 11561:4595cc3848fc
> ---------------------------
> kvm: Support timing accesses for KVM cpu
> This patch enables timing accesses for the KVM CPU. A new state,
> RunningMMIOPending, is added to indicate that there are outstanding timing
> requests generated by KVM in the system. KVM's tick() is disabled, and the
> simulation does not re-enter KVM until all outstanding timing requests have
> completed. The main motivation is to allow the KVM CPU to perform MMIO in
> Ruby, since Ruby does not support atomic accesses.
> 
> 
> Diffs
> -----
> 
>   src/cpu/kvm/x86_cpu.cc 91f58918a76abf1a1dedcaa70a9b95789da7b88c 
>   src/cpu/kvm/base.hh 91f58918a76abf1a1dedcaa70a9b95789da7b88c 
>   src/cpu/kvm/base.cc 91f58918a76abf1a1dedcaa70a9b95789da7b88c 
> 
> Diff: http://reviews.gem5.org/r/3619/diff/
> 
> 
> Testing
> -------
> 
> 
> Thanks,
> 
> Michael LeBeane
> 
>
