> On 25 Mar 2024, at 08:58, Michal Orzel <michal.or...@amd.com> wrote:
> 
> Hi Luca,
> 

Hi Michal,

> On 12/03/2024 14:03, Luca Fancellu wrote:
>> 
>> 
>> Currently Xen is not exporting the static shared memory regions
>> to the device tree as /memory node, this commit is fixing this
>> issue.
> Looking at the implementation, you will always call make_shm_memory_node() 
> twice. For the first
> time, to create /memory node and for the second time to create entry under 
> /reserved-memory. Also,
> you will create a separate /memory node for every single shmem region instead 
> of combining them
> in a single /memory region like make_memory_node() would do. Can't we reuse 
> this function for simplicity?

You mean using make_memory_node() to populate both /reserved-memory and /memory? I feel they are very different in terms of the properties to be created, so I don't think we should turn make_memory_node() into something that does both.

Otherwise, if you were suggesting to modify make_memory_node() only as far as the /memory nodes are concerned, it might be feasible, but some parts would need to be skipped via flags (all the code accessing the .type member). If I understand correctly, you like that function because it doesn't create one node for every bank, but rather a single node with one reg entry per bank; in that case I could modify the current make_shm_memory_node() to do the same.
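For reference, the single-node approach amounts to concatenating one (address, size) pair per bank into one "reg" property, instead of emitting a node per bank. A minimal, self-contained sketch of packing such a cell array (pack_reg and struct bank are hypothetical names of mine, not Xen helpers; assuming #address-cells = 2 and #size-cells = 2 as on arm64, with FDT cells stored big-endian):

```c
#include <assert.h>
#include <stdint.h>
#include <arpa/inet.h> /* htonl: FDT property cells are big-endian */

struct bank { uint64_t start, size; };

/* Hypothetical helper: pack one reg entry per bank into big-endian
 * 32-bit cells, as a single /memory node's "reg" property would hold
 * them (2 address cells + 2 size cells per bank). Returns the number
 * of cells written. */
static unsigned int pack_reg(const struct bank *banks, unsigned int nr,
                             uint32_t *cells)
{
    unsigned int i, n = 0;

    for ( i = 0; i < nr; i++ )
    {
        cells[n++] = htonl((uint32_t)(banks[i].start >> 32));
        cells[n++] = htonl((uint32_t)(banks[i].start & 0xffffffffU));
        cells[n++] = htonl((uint32_t)(banks[i].size >> 32));
        cells[n++] = htonl((uint32_t)(banks[i].size & 0xffffffffU));
    }

    return n;
}
```

The resulting array would then be written as one "reg" property on a single /memory node, mirroring what make_memory_node() does for RAM banks.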

> 
> Also, afaict it is not forbidden to specify shmem range (correct me if I'm 
> wrong), where guest address will be
> within with RAM allocated by Xen (e.g. GPA RAM range 0x40000000 - 0x60000000 
> and shmem is at 0x50000000). In this case,
> you would create yet another /memory node that would result in overlap (i.e. 
> more than one /memory node specifying the same range).

This is a good point I didn't think about; yes, currently the code is creating overlapping nodes in that case. Wow, so it means I need to compute the non-overlapping regions and emit a /memory node for those instead! :) ouch

Please let me know if I understood correctly your comments.

Cheers,
Luca

> 
> ~Michal
