Hi,
After trying the patch, it seems I need to modify more of the Python code on my OpenStack version: not only the patch from that site, but also nova-conductor and oslo_versionedobjects, because the object version check fails; pci_device.py only supports up to version 1.5 on my release.
Oops, I just reported this issue on Launchpad at the last moment.
Thanks Moshe, I'll try this commit.
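(For anyone following along, a minimal sketch of why the version check bites, assuming the standard oslo.versionedobjects pattern; the field below is illustrative, not the exact nova code:)

    from oslo_versionedobjects import base as ovo_base
    from oslo_versionedobjects import fields

    @ovo_base.VersionedObjectRegistry.register
    class PciDevice(ovo_base.VersionedObject):
        # If the local pci_device.py declares VERSION = '1.5' but a
        # patched service sends a 1.6 primitive, deserialization fails
        # with IncompatibleObjectVersion, so every service that handles
        # the object (e.g. nova-conductor) must learn the new version.
        VERSION = '1.5'
        fields = {
            'address': fields.StringField(),  # illustrative field only
        }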
2017-07-11 9:13 GMT+08:00 Moshe Levi :
Hi Eddie,
Looking at your nova database after the delete, it looks correct to me.
| created_at          | updated_at          | deleted_at | deleted | id |
| 2017-06-21 00:56:06 | 2017-07-07 02:27:16 | NULL       | 0       | 2  |
| 2017-07-07 01:42:48 | 2017-07-07 02:13:14 |
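(A query along these lines reproduces the rows above; assuming the default nova database schema, this is a sketch, not necessarily the exact command used:)

    SELECT created_at, updated_at, deleted_at, deleted, id
    FROM pci_devices;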
Roger that,
I'm going to report this bug on the OpenStack Compute (Nova) Launchpad to see what happens.
Anyway, thanks for your help, really appreciated.
Eddie.
2017-07-11 8:12 GMT+08:00 Jay Pipes :
Unfortunately, Eddie, I'm not entirely sure what is going on with your
situation. According to the code, the non-existing PCI device should be
removed from the pci_devices table when the PCI manager notices the PCI
device is no longer on the local host...
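(A simplified sketch of that cleanup idea; the helper below is illustrative, not the actual nova/pci/manager.py code:)

    def sync_hvdevs(tracked_devs, hv_addresses):
        """Drop tracked PCI devices the hypervisor no longer reports.

        tracked_devs: dict mapping PCI address -> device record
        hv_addresses: set of addresses currently present on the host
        """
        for addr in set(tracked_devs) - set(hv_addresses):
            # The real code also handles devices still allocated to
            # instances, which cannot simply be dropped.
            stale = tracked_devs.pop(addr)
            print('removing stale PCI device %s: %s' % (addr, stale))
        return tracked_devs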
On 07/09/2017 08:36 PM, Eddie Yen wrote:
Hi there,
Is the information already enough, or do you need additional items?
Thanks,
Eddie.
2017-07-07 10:49 GMT+08:00 Eddie Yen :
Sorry,
here is the renewed nova-compute log after removing "1002:68c8" and restarting nova-compute:
http://paste.openstack.org/show/qUCOX09jyeMydoYHc8Oz/
2017-07-07 10:37 GMT+08:00 Eddie Yen :
Hi Jay,
Below are a few logs and some information you may want to check.
I wrote the GPU information into nova.conf like this:
pci_passthrough_whitelist = [{ "product_id":"0ff3", "vendor_id":"10de" }, { "product_id":"68c8", "vendor_id":"1002" }]
pci_alias = [{ "product_id":"0ff3", "vendor_id":"10de",
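(For readers: the pci_alias line above is cut off; a complete alias entry generally also carries a "name" that flavors reference. The alias names below are hypothetical placeholders, not Eddie's actual values:)

    pci_alias = { "product_id":"0ff3", "vendor_id":"10de", "name":"gpu-nvidia" }
    pci_alias = { "product_id":"68c8", "vendor_id":"1002", "name":"gpu-amd" }

A flavor would then request a device with an extra spec such as "pci_passthrough:alias"="gpu-nvidia:1".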
Hmm, very odd indeed. Any way you can save the nova-compute logs from
when you removed the GPU and restarted the nova-compute service and
paste those logs to paste.openstack.org? Would be useful in tracking
down this buggy behaviour...
Best,
-jay
On 07/06/2017 08:54 PM, Eddie Yen wrote:
Uh, wait.
Is it possible that it still shows available if a PCI device still exists at the same address?
Because when I removed the GPU card, I replaced it with an SFP+ network card in the same slot.
So when I type lspci, the SFP+ card sits at the same address.
But it still doesn't make any sense because
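(If the tracker keyed devices purely by PCI address, a different card at the same address could indeed be mistaken for the old one. A toy illustration of that failure mode, not nova's actual matching logic; the device IDs are examples:)

    # The old GPU and the new NIC share the address 0000:03:00.0.
    tracked = {'0000:03:00.0': {'vendor_id': '1002', 'product_id': '68c8'}}  # old GPU
    present = {'0000:03:00.0': {'vendor_id': '8086', 'product_id': '10fb'}}  # SFP+ NIC

    for addr in tracked:
        if addr in present:
            # Address-only matching keeps the stale GPU record alive.
            print('%s still present, record kept as-is' % addr)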
Hi Jay,
The status of the "removed" GPU still shows as "Available" in the pci_devices table.
2017-07-07 8:34 GMT+08:00 Jay Pipes :
Hi again, Eddie :) Answer inline...
On 07/06/2017 08:14 PM, Eddie Yen wrote:
Hi everyone,
I'm using the OpenStack Mitaka version (deployed from Fuel 9.2).
At present, I have installed two different models of GPU card,
and wrote their information into pci_alias and pci_passthrough_whitelist in nova.conf on the Controller and the Compute node (the node with the GPUs installed).
Then restarted the nova services.