If you're using a recent version of DRAMSim2, try to enable the debug print
on this line: MemoryController.cpp:830 (
https://github.com/dramninjasUMD/DRAMSim2/commit/566f68b3a065a47583059bf68d8125075bb78f52#L0R830)

That will tell you how many bits DRAMsim2 thinks each field should be.
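
For reference, the widths that debug print reports are essentially just log2 of the counts from your device/system ini files. Here is a standalone sketch of the arithmetic (this is not the actual DRAMSim2 code, and the counts below are only one example 8G configuration):

    #include <cstdint>
    #include <cstdio>

    // Example counts only -- take the real ones from your ini files.
    static const uint64_t NUM_RANKS = 2;       // 1 bit
    static const uint64_t NUM_BANKS = 8;       // 3 bits
    static const uint64_t NUM_ROWS  = 65536;   // 16 bits
    static const uint64_t NUM_COLS  = 1024;    // 10 bits
    static const unsigned OFFSET_BITS = 3;     // 8-byte data bus

    // log2 for exact powers of two.
    static unsigned log2u(uint64_t n) {
        unsigned bits = 0;
        while (n > 1) { n >>= 1; ++bits; }
        return bits;
    }

    int main() {
        unsigned total = log2u(NUM_RANKS) + log2u(NUM_BANKS) +
                         log2u(NUM_ROWS)  + log2u(NUM_COLS)  + OFFSET_BITS;
        printf("rank=%u bank=%u row=%u col=%u offset=%u -> %u address bits\n",
               log2u(NUM_RANKS), log2u(NUM_BANKS), log2u(NUM_ROWS),
               log2u(NUM_COLS), OFFSET_BITS, total);
        // For this configuration that is 1+3+16+10+3 = 33 bits, i.e. 8G; any
        // incoming physical address with bits set at or above bit 33 does not
        // fit into any field of the mapping.
        return 0;
    }

If the rank really does sit in the topmost bits, then for an 8G configuration like this one it would come from bit 32; an address with bit 33 set is simply outside the configured capacity, so no field can legitimately hold it.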

On Mon, Aug 8, 2011 at 12:37 AM, Zhe Wang <[email protected]> wrote:

> Hi,
>
> I see the out-of-bounds addresses in the last-level cache. What I mean is
> that the physical address of the memory access is itself out of bounds: for
> example, we set the memory size to 8G (33-bit addresses), but some of the
> addresses we get are larger than 8G (34 bits). In the DRAMSim2 address
> mapping scheme, if we map the highest bit to the rank, should bit 32 map to
> the rank number, or bit 33? Since the physical address is out of bounds, we
> do not know which one is the highest bit.
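>
> (For concreteness: 8G = 2^33 bytes, so in-range addresses use bits 0 through
> 32; an address at or above 0x200000000 already has bit 33 set and is past the
> end of memory, which is why neither bit obviously counts as "the highest
> bit".)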
>
> Thanks
> zhe
>
>
> On Sun, Aug 7, 2011 at 11:04 PM, DRAM Ninjas <[email protected]> wrote:
>
>>
>>
>> On Sun, Aug 7, 2011 at 11:09 PM, Zhe Wang <[email protected]> wrote:
>>
>>> I can set the QEMU memory size to 8G when the host machine has only 4G of
>>> memory, but I cannot set a memory size larger than 12G, which confuses me.
>>>
>>>
>> Well, you probably have to consider the size of physical memory plus the
>> size of swap.
>>
>>
>>>
>>> It seems the out-of-bounds addresses are not MMIO addresses. When I run a
>>> simulation for 100 million instructions, almost half of the memory request
>>> addresses are out of bounds. If this is the case, the address mapping
>>> scheme in DRAMSim2 may not be accurate.
>>>
>>
>> Hm, where are you seeing the out-of-bounds addresses? The address mapping
>> scheme doesn't actually change anything about the physical address that is
>> passed into DRAMSim2, so I'm not sure what you mean.
>>
>>
>>>
>>> Thanks
>>> zhe
>>>
>>>
>>>
>>> On Sun, Aug 7, 2011 at 4:05 PM, DRAM Ninjas <[email protected]> wrote:
>>>
>>>>
>>>>
>>>> On Sun, Aug 7, 2011 at 1:31 PM, avadh patel <[email protected]> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Sat, Aug 6, 2011 at 6:47 PM, Zhe Wang <[email protected]> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I am using MARSS with DRAMSim2 to run experiments that test memory
>>>>>> performance.
>>>>>>
>>>>>> I would like to set the memory size to 16G, but I found that if I set
>>>>>> the memory size larger than 12000M, QEMU aborts. So I am wondering: is
>>>>>> there any way to set the memory size larger than 12000M?
>>>>>>
>>>>> QEMU doesn't allow the VM memory size to be greater than the host memory
>>>>> size, so you can run simulations with 16 GB of RAM only if your host has
>>>>> 16 GB or more. If that is not the case, does unmodified QEMU allow you to
>>>>> allocate more than 12 GB?
>>>>>
>>>>>
>>>>>> I also ran some tests with the memory size set to 8G. While the
>>>>>> simulator was running the SPEC2006 benchmarks, I printed out the
>>>>>> physical addresses and found that some of them are 34 bits (binary),
>>>>>> which means they are larger than 8G. This confuses me, since if the
>>>>>> memory size is 8G, all physical addresses should be within 8G (33 binary
>>>>>> bits). Could anybody give me some help with these questions?
>>>>>>
>>>>> This is interesting. Can you please give some example memory addresses?
>>>>> Even with 8GB of RAM there can be memory accesses to IO devices via MMIO,
>>>>> which can be mapped to a higher memory range. In ptl-qemu.cpp, the
>>>>> 'check_and_translate' function sets the 'mmio' flag if the requested
>>>>> address belongs to an MMIO range. You can use that function to check
>>>>> whether your virtual address is MMIO or not (in QEMU there is no way to
>>>>> check whether a physical address is MMIO or not).
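>>>>>
>>>>> Roughly this shape, as a self-contained toy (not the real ptl-qemu.cpp
>>>>> code; the real check_and_translate takes more arguments, and the MMIO
>>>>> window below is made up):
>>>>>
>>>>>     #include <cstdint>
>>>>>     #include <cstdio>
>>>>>
>>>>>     static const uint64_t RAM_SIZE   = 8ULL << 30;       // 8 GB of RAM
>>>>>     static const uint64_t MMIO_BASE  = 0x300000000ULL;   // made-up window
>>>>>     static const uint64_t MMIO_LIMIT = 0x300100000ULL;
>>>>>
>>>>>     // Stand-in for check_and_translate(): hand back the physical address
>>>>>     // and report through *mmio whether it hits the device window.
>>>>>     static uint64_t toy_translate(uint64_t vaddr, bool *mmio) {
>>>>>         uint64_t paddr = vaddr;               // identity map for the toy
>>>>>         *mmio = (paddr >= MMIO_BASE && paddr < MMIO_LIMIT);
>>>>>         return paddr;
>>>>>     }
>>>>>
>>>>>     int main() {
>>>>>         const uint64_t samples[] = { 0x1000ULL, 0x300004000ULL,
>>>>>                                      0x2FFFFFFFFULL };
>>>>>         for (int i = 0; i < 3; i++) {
>>>>>             bool mmio = false;
>>>>>             uint64_t pa = toy_translate(samples[i], &mmio);
>>>>>             printf("0x%llx -> %s\n", (unsigned long long)pa,
>>>>>                    mmio ? "MMIO" : (pa < RAM_SIZE ? "RAM" : "out of range"));
>>>>>         }
>>>>>         return 0;
>>>>>     }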
>>>>>
>>>>
>>>> Jim, Ishwar, and Mu-Tien have actually run across some out-of-bounds
>>>> addresses as well -- I didn't get a chance to check whether the MMIO
>>>> addresses actually generate cache requests. If I'm not mistaken, the MMIO
>>>> flag is a reference that gets passed back to the caller -- so in theory
>>>> the code that generates the call could propagate this information all the
>>>> way back to the core so that it never generates cache accesses for MMIO
>>>> accesses. That way, DRAMSim2 will never see MMIO addresses. Or, if
>>>> check_and_translate has no side effects, just check the address before
>>>> generating a cache request.
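>>>>
>>>> As a rough sketch of that last option (placeholder names, not real MARSS
>>>> code -- the point is only that the guard sits in front of the cache
>>>> request):
>>>>
>>>>     #include <cstdint>
>>>>     #include <cstdio>
>>>>
>>>>     static const uint64_t MEM_SIZE = 8ULL << 30;   // configured RAM size
>>>>
>>>>     static void issue_cache_access(uint64_t paddr) {
>>>>         printf("cache/DRAMSim2 request for 0x%llx\n",
>>>>                (unsigned long long)paddr);
>>>>     }
>>>>
>>>>     // 'mmio' is whatever flag the translation step handed back.  MMIO and
>>>>     // out-of-range addresses never generate a cache request, so DRAMSim2
>>>>     // never sees them.
>>>>     static void maybe_issue(uint64_t paddr, bool mmio) {
>>>>         if (mmio || paddr >= MEM_SIZE)
>>>>             return;   // route to the device model instead
>>>>         issue_cache_access(paddr);
>>>>     }
>>>>
>>>>     int main() {
>>>>         maybe_issue(0x1000ULL, false);        // normal RAM access, issued
>>>>         maybe_issue(0x300004000ULL, true);    // MMIO, dropped here
>>>>         return 0;
>>>>     }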
>>>>
>>>>
>>>>>
>>>>> - Avadh
>>>>>
>>>>>>
>>>>>> Thank you!
>>>>>> zhe
>>>>>>
>>>>>
>>>>
>>>
>>
>
_______________________________________________
http://www.marss86.org
Marss86-Devel mailing list
[email protected]
https://www.cs.binghamton.edu/mailman/listinfo/marss86-devel
