Details are in the attached tarball.

I'm running a very simple RISC-V system configuration with only L1 data and 
instruction caches, together with a simple cache test executable that 
repeatedly reuses memory inside a 24KiB block.
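For anyone who can't grab the tarball, the setup is along the lines of the 
sketch below. This is an illustrative stand-in, not the exact attached script: 
the CPU model, cache latencies, and memory parameters are placeholders, written 
in gem5 v21-style syntax.

```python
# Minimal RISC-V SE-mode config with only L1 caches (gem5 v21-style API).
# Class names and parameter values here are illustrative placeholders.
import m5
from m5.objects import *

class L1Cache(Cache):
    size = '32kB'            # 32KiB in one run, 1KiB in the other
    assoc = 2
    tag_latency = 2
    data_latency = 2
    response_latency = 2
    mshrs = 4
    tgts_per_mshr = 20

system = System()
system.clk_domain = SrcClockDomain(clock='1GHz',
                                   voltage_domain=VoltageDomain())
system.mem_mode = 'timing'
system.mem_ranges = [AddrRange('512MB')]

system.cpu = TimingSimpleCPU()          # RISC-V build of gem5
system.membus = SystemXBar()

# One L1I and one L1D directly in front of the memory bus; no L2.
system.cpu.icache = L1Cache()
system.cpu.dcache = L1Cache()
system.cpu.icache_port = system.cpu.icache.cpu_side
system.cpu.dcache_port = system.cpu.dcache.cpu_side
system.cpu.icache.mem_side = system.membus.cpu_side_ports
system.cpu.dcache.mem_side = system.membus.cpu_side_ports
system.cpu.createInterruptController()

system.mem_ctrl = MemCtrl(dram=DDR3_1600_8x8(range=system.mem_ranges[0]))
system.mem_ctrl.port = system.membus.mem_side_ports
system.system_port = system.membus.cpu_side_ports

# Run the cache microbenchmark in syscall-emulation mode.
binary = 'ubench_cache'
system.workload = SEWorkload.init_compatible(binary)
process = Process(cmd=[binary])
system.cpu.workload = process
system.cpu.createThreads()

root = Root(full_system=False, system=system)
m5.instantiate()
event = m5.simulate()
print('Exited @ tick %d: %s' % (m5.curTick(), event.getCause()))
```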

When I run with 32KiB caches ("riscv-gem5 ./cache_bench.py"), it takes 
6971237877 ticks.
When I run with 1KiB caches ("riscv-gem5 ./cache_bench.py --no-cache"), it 
takes 7042666907 ticks.
That is only a 1.01x improvement from 32x the cache size, even though the 
working set should be just 24KiB.

I must be using gem5 incorrectly, since there is almost no improvement with 
the larger caches.
Does anyone have ideas about what I might be missing here?

The test program, "ubench_cache.c", has been used successfully in a different 
context: with a Linux kernel module on an x86 machine, I was able to get 
access to uncacheable memory:
https://github.com/lemonsqueeze/uncached-ram-lkm
On x86 hardware, the benchmark runs more than 100x faster with cacheable 
memory than with uncacheable memory.

I feel like I must be missing something about configuring gem5 correctly.
Any ideas or tips would be greatly appreciated.

Thanks,
~Aaron Vose

[Attachment removed by the mailing list: cache_bench.tar.gz, 353386 bytes]