I was unable to compile and test the diff: compile times exceeded a week
on a T1000, and then I took ill.

I now confirm that this problem is present on 7.2 and appears to affect my
T5120 as well.
I can create configs with OBSD 6.3 which are bootable on T1000, T2000 and
T5120.
All later releases, including 7.2, corrupt the device tree in a similar way.

However, I have now found a new, and possibly related, problem. A config
created with 6.3 on the T5120 works fine under 6.3, and while I was able to
install 7.2 on the primary, I was not able to boot the guests. When I do

# ldomctl start -c guestX

the process hangs. I do not know how to continue from here: I tried
several plausible escape sequences, and none worked. There may be one that
does, but I generally end up losing my ssh connection, or with no visible
change. I can log in to the primary with a second ssh session and kill the
process, at which point the original session returns to the prompt.
The config provides two vdisks to each guest. I have tried this without
valid disks on the primary, with vdisks created with ldomctl create-vdisk,
and with one of the vdisks linked to miniroot.72.img. The result is the
same in each case.

I am hoping to netboot, and have "rarpd -ad" running on both the T5120 and
a V100. Neither notices a RARP packet.
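
For reference, my understanding is that rarpd(8) only answers for hosts
that have an /etc/ethers entry with a matching /etc/hosts entry; the MAC
address and names below are placeholders, not my real values:

```
# /etc/ethers
00:14:4f:aa:bb:cc guest1

# /etc/hosts
10.0.0.11 guest1
```

I would expect something like "tcpdump -n -e -i vnet0 rarp" on the primary
(interface name is a guess) to show whether the guests' requests reach the
vnet at all.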
This hanging does not happen on the T1000 (or T2000, I think).
I can (and do) netboot the T1000, T2000 and T5120 primary from my trusty
V100.
I believe that configs created with 6.3 may not pass the RARP packets from
the guests to the primary through the vnets. Is this correct?

regards

Andrew

On Tue, 2 Nov 2021 at 14:57, Andrew Grillet <[email protected]> wrote:

> I will look at the pdf while I am away. I will not be able to test the
> diff til I get back next week.
>
>
>
>
> On Tue, 2 Nov 2021 at 14:46, Mark Kettenis <[email protected]>
> wrote:
>
>> > From: Andrew Grillet <[email protected]>
>> > Date: Tue, 2 Nov 2021 14:14:13 +0000
>> >
>> > These were attached to the email I sent last week - that is, the entire
>> > contents of the directories in which I defined the ldom.conf file and
>> > the results of compiling it. I have attached the file again.
>> >
>> > In each case, this was on a fresh install of the OS, and with a fresh
>> > copy of the factory-default config.
>> >
>> > >> > After this, my device tree is empty.
>> > >>
>> > >> I'm not sure what you mean by that.
>> > >> You mean you end up in OBP but there are no devices you can boot
>> > >> from?
>> > >>
>> > Yes, this.
>> > The PCI address appears to be wrong - it is @780, when it should be
>> > @7c0.
>> >
>> > If you have a tool to examine the binaries in the directory, I would
>> > like a copy, and any documentation of what the contents mean.
>>
>> There is sysutils/mdprint in ports.
>>
>> There is
>>
>>   https://sun4v.github.io/downloads/hypervisor-api-3.0draft7.pdf
>>
>> in particular chapter 8.  But the documentation is somewhat incomplete.
>>
>>
>> Quickly scanning your files, I found one difference related to PCIe.
>> Might be worth trying the attached diff.
>>
>> Index: usr.sbin/ldomctl/config.c
>> ===================================================================
>> RCS file: /cvs/src/usr.sbin/ldomctl/config.c,v
>> retrieving revision 1.42
>> diff -u -p -r1.42 config.c
>> --- usr.sbin/ldomctl/config.c   31 Jan 2021 05:14:24 -0000      1.42
>> +++ usr.sbin/ldomctl/config.c   2 Nov 2021 14:41:58 -0000
>> @@ -704,7 +704,8 @@ hvmd_init_device(struct md *md, struct m
>>         device = xzalloc(sizeof(*device));
>>         md_get_prop_val(md, node, "gid", &device->gid);
>>         md_get_prop_val(md, node, "cfghandle", &device->cfghandle);
>> -       md_get_prop_val(md, node, "rcid", &device->rcid);
>> +       if (!md_get_prop_val(md, node, "rcid", &device->rcid))
>> +               device->rcid = -1;
>>         device->resource_id = resource_id;
>>         if (strcmp(node->name->str, "pcie_bus") == 0)
>>                 pcie_busses[resource_id] = device;
>> @@ -1072,7 +1073,8 @@ hvmd_finalize_device(struct md *md, stru
>>         md_add_prop_val(md, node, "resource_id", device->resource_id);
>>         md_add_prop_val(md, node, "cfghandle", device->cfghandle);
>>         md_add_prop_val(md, node, "gid", device->gid);
>> -       md_add_prop_val(md, node, "rcid", device->rcid);
>> +       if (device->rcid != -1)
>> +               md_add_prop_val(md, node, "rcid", device->rcid);
>>         device->hv_node = node;
>>  }
>>
>>
