Andreas Sandberg has submitted this change. ( https://gem5-review.googlesource.com/c/public/gem5/+/39576 )

Change subject: mem: Consistently use ISO prefixes
......................................................................

mem: Consistently use ISO prefixes

We currently use traditional SI-like prefixes to represent binary
multipliers in some contexts. This is ambiguous in many cases since it
overloads the meaning of the SI prefix.

Here are some examples of common usage in the industry:
  * Storage vendors define 1 MB as 10**6 bytes
  * Memory vendors define 1 MB as 2**20 bytes
  * Network equipment treats 1Mbit/s as 10**6 bits/s
  * Memory vendors define 1Mbit as 2**20 bits

In practice, this means that a flash chip on a storage bus uses
decimal prefixes, but the same flash chip on a memory bus uses binary
prefixes. It would also seem reasonable to assume that the contents of
a 1Mbit flash chip take 0.1s to transfer over a 10Mbit/s Ethernet
link. That is, however, not the case, because the two prefixes have
different meanings.
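
The arithmetic below (an illustrative sketch, not part of this change)
makes the roughly 5% discrepancy explicit:

    # Illustrative sketch only: transfer time of a "1Mbit" flash image
    # over a "10Mbit/s" Ethernet link under the two prefix conventions.
    flash_bits = 2**20                     # memory vendor: 2**20 bits
    link_bits_per_s = 10 * 10**6           # network vendor: 10**7 bits/s

    naive = 10**6 / link_bits_per_s        # 0.1s if both were decimal
    actual = flash_bits / link_bits_per_s  # ~0.105s, about 5% longer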

The quantity 2MX is treated differently by gem5 depending on the unit
X:

  * Physical quantities (s, Hz, V, A, J, K, C, F) use decimal prefixes.
  * Interconnect and NoC bandwidths (B/s) use binary prefixes.
  * Network bandwidths (bps) use decimal prefixes.
  * Memory sizes and storage sizes (B) use binary prefixes.

Mitigate this ambiguity by consistently using the ISO/IEC prefixes for
binary multipliers (KiB, MiB, GiB, ...) in parameters and comments
where appropriate.
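
As a rough illustration (a hedged sketch, not taken from this patch),
a config-script fragment using the new spellings could look like the
following; SimpleMemory, AddrRange and the string-valued size and
bandwidth parameters are gem5's standard config API, while the values
are arbitrary examples:

    # Sketch of a config fragment using IEC (binary) prefixes.
    from m5.objects import *

    mem = SimpleMemory()
    mem.range = AddrRange('128MiB')  # 128 * 2**20 bytes
    mem.bandwidth = '12.8GiB/s'      # binary prefix, per this change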

Change-Id: I2d24682d207830f3b7b0ad2ff82b55e082cccb32
Signed-off-by: Andreas Sandberg <andreas.sandb...@arm.com>
Reviewed-on: https://gem5-review.googlesource.com/c/public/gem5/+/39576
Reviewed-by: Richard Cooper <richard.coo...@arm.com>
Reviewed-by: Daniel Carvalho <oda...@yahoo.com.br>
Reviewed-by: Nikos Nikoleris <nikos.nikole...@arm.com>
Maintainer: Nikos Nikoleris <nikos.nikole...@arm.com>
Tested-by: kokoro <noreply+kok...@google.com>
---
M src/mem/AbstractMemory.py
M src/mem/DRAMInterface.py
M src/mem/NVMInterface.py
M src/mem/SimpleMemory.py
M src/mem/XBar.py
M src/mem/cache/prefetch/Prefetcher.py
M src/mem/cache/tags/Tags.py
7 files changed, 49 insertions(+), 48 deletions(-)

Approvals:
  Nikos Nikoleris: Looks good to me, approved; Looks good to me, approved
  Daniel Carvalho: Looks good to me, approved
  Richard Cooper: Looks good to me, but someone else must approve
  kokoro: Regressions pass



diff --git a/src/mem/AbstractMemory.py b/src/mem/AbstractMemory.py
index 4c21d52..e1941c3 100644
--- a/src/mem/AbstractMemory.py
+++ b/src/mem/AbstractMemory.py
@@ -44,9 +44,10 @@
     abstract = True
     cxx_header = "mem/abstract_mem.hh"

-    # A default memory size of 128 MB (starting at 0) is used to
+    # A default memory size of 128 MiB (starting at 0) is used to
     # simplify the regressions
-    range = Param.AddrRange('128MB', "Address range (potentially interleaved)")
+    range = Param.AddrRange('128MiB',
+                            "Address range (potentially interleaved)")
     null = Param.Bool(False, "Do not store data, always return zero")

     # All memories are passed to the global physical memory, and
diff --git a/src/mem/DRAMInterface.py b/src/mem/DRAMInterface.py
index 85a6092..4f59498 100644
--- a/src/mem/DRAMInterface.py
+++ b/src/mem/DRAMInterface.py
@@ -259,7 +259,7 @@
 # an 8x8 configuration.
 class DDR3_1600_8x8(DRAMInterface):
     # size of device in bytes
-    device_size = '512MB'
+    device_size = '512MiB'

     # 8x8 configuration, 8 devices each with an 8-bit interface
     device_bus_width = 8
@@ -268,7 +268,7 @@
     burst_length = 8

     # Each device has a page (row buffer) size of 1 Kbyte (1K columns x8)
-    device_rowbuffer_size = '1kB'
+    device_rowbuffer_size = '1KiB'

     # 8x8 configuration, so 8 devices
     devices_per_rank = 8
@@ -338,7 +338,7 @@
 # [2] High performance AXI-4.0 based interconnect for extensible smart memory
 # cubes (E. Azarkhish et. al)
 # Assumed for the HMC model is a 30 nm technology node.
-# The modelled HMC consists of 4 Gbit layers which sum up to 2GB of memory (4
+# The modelled HMC consists of 4 Gbit layers which sum up to 2GiB of memory (4
 # layers).
 # Each layer has 16 vaults and each vault consists of 2 banks per layer.
 # In order to be able to use the same controller used for 2D DRAM generations
@@ -354,8 +354,8 @@
 # of the HMC
 class HMC_2500_1x32(DDR3_1600_8x8):
     # size of device
-    # two banks per device with each bank 4MB [2]
-    device_size = '8MB'
+    # two banks per device with each bank 4MiB [2]
+    device_size = '8MiB'

     # 1x32 configuration, 1 device with 32 TSVs [2]
     device_bus_width = 32
@@ -458,11 +458,11 @@
 # A single DDR4-2400 x64 channel (one command and address bus), with
 # timings based on a DDR4-2400 8 Gbit datasheet (Micron MT40A2G4)
 # in an 16x4 configuration.
-# Total channel capacity is 32GB
-# 16 devices/rank * 2 ranks/channel * 1GB/device = 32GB/channel
+# Total channel capacity is 32GiB
+# 16 devices/rank * 2 ranks/channel * 1GiB/device = 32GiB/channel
 class DDR4_2400_16x4(DRAMInterface):
     # size of device
-    device_size = '1GB'
+    device_size = '1GiB'

     # 16x4 configuration, 16 devices each with a 4-bit interface
     device_bus_width = 4
@@ -569,14 +569,14 @@
 # A single DDR4-2400 x64 channel (one command and address bus), with
 # timings based on a DDR4-2400 8 Gbit datasheet (Micron MT40A1G8)
 # in an 8x8 configuration.
-# Total channel capacity is 16GB
-# 8 devices/rank * 2 ranks/channel * 1GB/device = 16GB/channel
+# Total channel capacity is 16GiB
+# 8 devices/rank * 2 ranks/channel * 1GiB/device = 16GiB/channel
 class DDR4_2400_8x8(DDR4_2400_16x4):
     # 8x8 configuration, 8 devices each with an 8-bit interface
     device_bus_width = 8

     # Each device has a page (row buffer) size of 1 Kbyte (1K columns x8)
-    device_rowbuffer_size = '1kB'
+    device_rowbuffer_size = '1KiB'

     # 8x8 configuration, so 8 devices
     devices_per_rank = 8
@@ -596,14 +596,14 @@
 # A single DDR4-2400 x64 channel (one command and address bus), with
 # timings based on a DDR4-2400 8 Gbit datasheet (Micron MT40A512M16)
 # in an 4x16 configuration.
-# Total channel capacity is 4GB
-# 4 devices/rank * 1 ranks/channel * 1GB/device = 4GB/channel
+# Total channel capacity is 4GiB
+# 4 devices/rank * 1 ranks/channel * 1GiB/device = 4GiB/channel
 class DDR4_2400_4x16(DDR4_2400_16x4):
     # 4x16 configuration, 4 devices each with an 16-bit interface
     device_bus_width = 16

     # Each device has a page (row buffer) size of 2 Kbyte (1K columns x16)
-    device_rowbuffer_size = '2kB'
+    device_rowbuffer_size = '2KiB'

     # 4x16 configuration, so 4 devices
     devices_per_rank = 4
@@ -646,7 +646,7 @@
     dll = False

     # size of device
-    device_size = '512MB'
+    device_size = '512MiB'

     # 1x32 configuration, 1 device with a 32-bit interface
     device_bus_width = 32
@@ -656,7 +656,7 @@

     # Each device has a page (row buffer) size of 1KB
     # (this depends on the memory density)
-    device_rowbuffer_size = '1kB'
+    device_rowbuffer_size = '1KiB'

     # 1x32 configuration, so 1 device
     devices_per_rank = 1
@@ -745,7 +745,7 @@
     dll = False

     # size of device
-    device_size = '1024MB'
+    device_size = '1024MiB'

     # 1x128 configuration, 1 device with a 128-bit interface
     device_bus_width = 128
@@ -755,7 +755,7 @@

     # Each device has a page (row buffer) size of 4KB
     # (this depends on the memory density)
-    device_rowbuffer_size = '4kB'
+    device_rowbuffer_size = '4KiB'

     # 1x128 configuration, so 1 device
     devices_per_rank = 1
@@ -814,7 +814,7 @@
     dll = False

     # size of device
-    device_size = '512MB'
+    device_size = '512MiB'

     # 1x32 configuration, 1 device with a 32-bit interface
     device_bus_width = 32
@@ -823,7 +823,7 @@
     burst_length = 8

     # Each device has a page (row buffer) size of 4KB
-    device_rowbuffer_size = '4kB'
+    device_rowbuffer_size = '4KiB'

     # 1x32 configuration, so 1 device
     devices_per_rank = 1
@@ -911,7 +911,7 @@
 # H5GQ1H24AFR) in a 2x32 configuration.
 class GDDR5_4000_2x32(DRAMInterface):
     # size of device
-    device_size = '128MB'
+    device_size = '128MiB'

     # 2x32 configuration, 1 device with a 32-bit interface
     device_bus_width = 32
@@ -992,7 +992,7 @@
 # ("HBM: Memory Solution for High Performance Processors", MemCon, 2014),
 # IDD measurement values, and by extrapolating data from other classes.
 # Architecture values based on published HBM spec
-# A 4H stack is defined, 2Gb per die for a total of 1GB of memory.
+# A 4H stack is defined, 2Gb per die for a total of 1GiB of memory.
 class HBM_1000_4H_1x128(DRAMInterface):
     # HBM gen1 supports up to 8 128-bit physical channels
     # Configuration defines a single channel, with the capacity
@@ -1006,11 +1006,11 @@
     # HBM supports BL4 and BL2 (legacy mode only)
     burst_length = 4

-    # size of channel in bytes, 4H stack of 2Gb dies is 1GB per stack;
-    # with 8 channels, 128MB per channel
-    device_size = '128MB'
+    # size of channel in bytes, 4H stack of 2Gb dies is 1GiB per stack;
+    # with 8 channels, 128MiB per channel
+    device_size = '128MiB'

-    device_rowbuffer_size = '2kB'
+    device_rowbuffer_size = '2KiB'

     # 1x128 configuration
     devices_per_rank = 1
@@ -1077,7 +1077,7 @@

 # A single HBM x64 interface (one command and address bus), with
 # default timings based on HBM gen1 and data publically released
-# A 4H stack is defined, 8Gb per die for a total of 4GB of memory.
+# A 4H stack is defined, 8Gb per die for a total of 4GiB of memory.
 # Note: This defines a pseudo-channel with a unique controller
 # instantiated per pseudo-channel
 # Stay at same IO rate (1Gbps) to maintain timing relationship with
@@ -1095,13 +1095,13 @@
     # HBM pseudo-channel only supports BL4
     burst_length = 4

-    # size of channel in bytes, 4H stack of 8Gb dies is 4GB per stack;
-    # with 16 channels, 256MB per channel
-    device_size = '256MB'
+    # size of channel in bytes, 4H stack of 8Gb dies is 4GiB per stack;
+    # with 16 channels, 256MiB per channel
+    device_size = '256MiB'

     # page size is halved with pseudo-channel; maintaining the same number
     # of rows per pseudo-channel with 2X banks across 2 channels
-    device_rowbuffer_size = '1kB'
+    device_rowbuffer_size = '1KiB'

     # HBM has 8 or 16 banks depending on capacity
     # Starting with 4Gb dies, 16 banks are defined
@@ -1146,10 +1146,10 @@
     burst_length = 32

     # size of device in bytes
-    device_size = '1GB'
+    device_size = '1GiB'

-    # 2kB page with BG mode
-    device_rowbuffer_size = '2kB'
+    # 2KiB page with BG mode
+    device_rowbuffer_size = '2KiB'

     # Use a 1x16 configuration
     devices_per_rank = 1
@@ -1279,8 +1279,8 @@
 # Configuring for 8-bank mode, burst of 32
 class LPDDR5_5500_1x16_8B_BL32(LPDDR5_5500_1x16_BG_BL32):

-    # 4kB page with 8B mode
-    device_rowbuffer_size = '4kB'
+    # 4KiB page with 8B mode
+    device_rowbuffer_size = '4KiB'

     # LPDDR5 supports configurable bank options
     # 8B  : BL32, all frequencies
@@ -1384,8 +1384,8 @@
 # Configuring for 8-bank mode, burst of 32
 class LPDDR5_6400_1x16_8B_BL32(LPDDR5_6400_1x16_BG_BL32):

-    # 4kB page with 8B mode
-    device_rowbuffer_size = '4kB'
+    # 4KiB page with 8B mode
+    device_rowbuffer_size = '4KiB'

     # LPDDR5 supports configurable bank options
     # 8B  : BL32, all frequencies
diff --git a/src/mem/NVMInterface.py b/src/mem/NVMInterface.py
index 3f6fbc4..20f51fc 100644
--- a/src/mem/NVMInterface.py
+++ b/src/mem/NVMInterface.py
@@ -76,7 +76,7 @@
     device_rowbuffer_size = '256B'

     # 8X capacity compared to DDR4 x4 DIMM with 8Gb devices
-    device_size = '512GB'
+    device_size = '512GiB'
     # Mimic 64-bit media agnostic DIMM interface
     device_bus_width = 64
     devices_per_rank = 1
diff --git a/src/mem/SimpleMemory.py b/src/mem/SimpleMemory.py
index 6e4b915..e8eac69 100644
--- a/src/mem/SimpleMemory.py
+++ b/src/mem/SimpleMemory.py
@@ -45,7 +45,7 @@
     port = ResponsePort("This port sends responses and receives requests")
     latency = Param.Latency('30ns', "Request to response latency")
     latency_var = Param.Latency('0ns', "Request to response latency variance")
-    # The memory bandwidth limit default is set to 12.8GB/s which is
+    # The memory bandwidth limit default is set to 12.8GiB/s which is
     # representative of a x64 DDR3-1600 channel.
-    bandwidth = Param.MemoryBandwidth('12.8GB/s',
+    bandwidth = Param.MemoryBandwidth('12.8GiB/s',
                                       "Combined read and write bandwidth")
diff --git a/src/mem/XBar.py b/src/mem/XBar.py
index c162584..2dfe7c1 100644
--- a/src/mem/XBar.py
+++ b/src/mem/XBar.py
@@ -138,7 +138,7 @@
     system = Param.System(Parent.any, "System that the crossbar belongs to.")

     # Sanity check on max capacity to track, adjust if needed.
-    max_capacity = Param.MemorySize('8MB', "Maximum capacity of snoop filter")
+    max_capacity = Param.MemorySize('8MiB', "Maximum capacity of snoop filter")

 # We use a coherent crossbar to connect multiple requestors to the L2
 # caches. Normally this crossbar would be part of the cache itself.
diff --git a/src/mem/cache/prefetch/Prefetcher.py b/src/mem/cache/prefetch/Prefetcher.py
index 758803f..0840c60 100644
--- a/src/mem/cache/prefetch/Prefetcher.py
+++ b/src/mem/cache/prefetch/Prefetcher.py
@@ -293,7 +293,7 @@
         "Limit the strides checked up to -X/X, if 0, disable the limit")
     start_degree = Param.Unsigned(4,
         "Initial degree (Maximum number of prefetches generated")
-    hot_zone_size = Param.MemorySize("2kB", "Memory covered by a hot zone")
+    hot_zone_size = Param.MemorySize("2KiB", "Memory covered by a hot zone")
     access_map_table_entries = Param.MemorySize("256",
         "Number of entries in the access map table")
     access_map_table_assoc = Param.Unsigned(8,
@@ -456,7 +456,7 @@
     cxx_class = "Prefetcher::STeMS"
     cxx_header = "mem/cache/prefetch/spatio_temporal_memory_streaming.hh"

-    spatial_region_size = Param.MemorySize("2kB",
+    spatial_region_size = Param.MemorySize("2KiB",
         "Memory covered by a hot zone")
     active_generation_table_entries = Param.MemorySize("64",
         "Number of entries in the active generation table")
diff --git a/src/mem/cache/tags/Tags.py b/src/mem/cache/tags/Tags.py
index ce086fa..1e5b355 100644
--- a/src/mem/cache/tags/Tags.py
+++ b/src/mem/cache/tags/Tags.py
@@ -119,8 +119,8 @@
     cxx_class = 'FALRU'
     cxx_header = "mem/cache/tags/fa_lru.hh"

-    min_tracked_cache_size = Param.MemorySize("128kB", "Minimum cache size for"
-                                              " which we track statistics")
+    min_tracked_cache_size = Param.MemorySize("128KiB", "Minimum cache size"
+                                              " for which we track statistics")

     # This tag uses its own embedded indexing
     indexing_policy = NULL

--
To view, visit https://gem5-review.googlesource.com/c/public/gem5/+/39576
To unsubscribe, or for help writing mail filters, visit https://gem5-review.googlesource.com/settings

Gerrit-Project: public/gem5
Gerrit-Branch: develop
Gerrit-Change-Id: I2d24682d207830f3b7b0ad2ff82b55e082cccb32
Gerrit-Change-Number: 39576
Gerrit-PatchSet: 4
Gerrit-Owner: Andreas Sandberg <andreas.sandb...@arm.com>
Gerrit-Reviewer: Andreas Sandberg <andreas.sandb...@arm.com>
Gerrit-Reviewer: Daniel Carvalho <oda...@yahoo.com.br>
Gerrit-Reviewer: Giacomo Travaglini <giacomo.travagl...@arm.com>
Gerrit-Reviewer: Nikos Nikoleris <nikos.nikole...@arm.com>
Gerrit-Reviewer: Richard Cooper <richard.coo...@arm.com>
Gerrit-Reviewer: kokoro <noreply+kok...@google.com>
Gerrit-MessageType: merged
_______________________________________________
gem5-dev mailing list -- gem5-dev@gem5.org
To unsubscribe send an email to gem5-dev-le...@gem5.org
