Hi,

Does anyone know how to set up a chiplet-based simulation with Garnet on the
latest Gem5 version?

My basic requirements are: running distributed C++ applications (using
multi-threading) on the Gem5 O3CPU type, controlling the number of cores on
individual chiplet dies (even one core per die is sufficient), controlling the
die-to-die (D2D) bandwidth, and collecting various statistics, including power
consumption per die and in the interposer as well as the overall latency of
communication between dies.
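To make that concrete, below is the kind of thing I have been sketching: a
custom Garnet topology that models one core/router per die and gives the
die-to-die links a higher latency. It follows the pattern of the stock
configs/topologies/Mesh_XY.py (SimpleTopology / makeTopology) in a recent
gem5, but it is untested; the name ChipletChain, the one-router-per-die
mapping, and the 4x die-to-die latency factor are just placeholders I made up.

# ChipletChain.py -- hypothetical topology sketch, meant to live in
# configs/topologies/ and be selected with --topology=ChipletChain.
# One router == one chiplet die; dies are chained with slower D2D links.
from m5.params import *
from m5.objects import *

from topologies.BaseTopology import SimpleTopology


class ChipletChain(SimpleTopology):
    description = "ChipletChain"

    def __init__(self, controllers):
        self.nodes = controllers

    def makeTopology(self, options, network, IntLink, ExtLink, Router):
        nodes = self.nodes
        num_dies = options.num_cpus              # assumption: one core per die
        on_die_latency = options.link_latency
        d2d_latency = 4 * options.link_latency   # placeholder interposer hop cost

        # One Garnet router per die.
        routers = [Router(router_id=i, latency=options.router_latency)
                   for i in range(num_dies)]
        network.routers = routers

        # Attach every controller (L1s, directories, DMA, ...) to some die's
        # router; round-robin here just to keep the sketch short.
        link_count = 0
        ext_links = []
        for (i, n) in enumerate(nodes):
            ext_links.append(ExtLink(link_id=link_count, ext_node=n,
                                     int_node=routers[i % num_dies],
                                     latency=on_die_latency))
            link_count += 1
        network.ext_links = ext_links

        # Chain the dies with slower die-to-die links, one per direction.
        int_links = []
        for d in range(num_dies - 1):
            int_links.append(IntLink(link_id=link_count, src_node=routers[d],
                                     dst_node=routers[d + 1],
                                     latency=d2d_latency, weight=1))
            link_count += 1
            int_links.append(IntLink(link_id=link_count, src_node=routers[d + 1],
                                     dst_node=routers[d],
                                     latency=d2d_latency, weight=1))
            link_count += 1
        network.int_links = int_links

I would expect to run something along the lines of se.py --ruby
--network=garnet --topology=ChipletChain --num-cpus=4 --cpu-type=O3CPU on top
of this, but I do not know whether link latency is the recommended way to
express die-to-die bandwidth, or how to get per-die power numbers out of it.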

On the other hand, these are optional: configuring a package with different
topologies (mesh XY, butterfly, etc.), possibly different packaging
techniques (2D, 2.5D, 3D), simulating heterogeneous components (e.g., a GPU
alongside CPUs), and controlling other NoC parameters such as flit width,
interposer link width, etc.
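On the NoC-parameter side, my (possibly wrong) understanding is that things
like flit width and VC counts are plain SimObject parameters on GarnetNetwork,
so a config script could set them directly. A tiny sketch of what I mean,
where the values are arbitrary and `network` is assumed to be the
GarnetNetwork object created by the standard Ruby/Network configuration:

# Hypothetical helper for a gem5 config script; `network` is assumed to be an
# already-created GarnetNetwork. The values are arbitrary placeholders.
def tune_garnet(network, flit_bytes=16, vcs_per_vnet=4):
    network.ni_flit_size = flit_bytes       # flit (link) width in bytes
    network.vcs_per_vnet = vcs_per_vnet     # virtual channels per virtual network
    network.buffers_per_data_vc = 4         # buffer depth per data VC
    network.routing_algorithm = 1           # 0: weight-based table, 1: XY

Whether interposer width or per-link (rather than per-network) widths can be
expressed this way is exactly the part I am unsure about.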

I tried an older version of Gem5 from Dr. Tushar Krishna's group called
gem5_chips (https://github.com/GT-CHIPS/gem5_chips), but I had to fall back to
an older Python (2.7) and GCC (8) just to compile it on an Ubuntu 20.04
system. Even then, I was not able to run their benchmarks beyond a certain
instruction count (set via max-insts), because the simulator started throwing
assertion failures in the garnet2.0/OutVcState.cc code.

I would really appreciate it if someone could point me in the right direction
here.

Thank you. Best regards,
Preet.