Re: [openstack-dev] [nova] Is the Intel SRIOV CI running and if so, what does it test?

2016-03-30 Thread yongli he

Hi, mriedem

Shaohe is on vacation. The Intel SRIOV CI currently comments on Neutron changes; it runs
the macvtap vnic SRIOV tests plus the required Neutron smoke tests.


[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI

Regards
Yongli He




On 2016-03-30 23:21, Matt Riedemann wrote:

Intel has a few third party CIs in the third party systems wiki [1].

I was talking with Moshe Levi today about expanding coverage for 
mellanox CI in nova, today they run an SRIOV CI for vnic type 
'direct'. I'd like them to also start running their 'macvtap' CI on 
the same nova changes (that job only runs in neutron today I think).


I'm trying to see what we have for coverage on these different NFV 
configurations, and because of limited resources to run NFV CI, don't 
want to duplicate work here.


So I'm wondering what the various Intel NFV CI jobs run, specifically 
the Intel Networking CI [2], Intel NFV CI [3] and Intel SRIOV CI [4].


From the wiki it looks like the Intel Networking CI tests ovs-dpdk but 
only for Neutron. Could that be expanded to also test on Nova changes 
that hit a sub-set of the nova tree?


I really don't know what the latter two jobs test as far as 
configuration is concerned, the descriptions in the wikis are pretty 
empty (please update those to be more specific).


Please also include in the wiki the recheck method for each CI so I 
don't have to dig through Gerrit comments to find one.


[1] https://wiki.openstack.org/wiki/ThirdPartySystems
[2] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-Networking-CI
[3] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-NFV-CI
[4] https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-SRIOV-CI




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][pci] What is the point of the ALLOCATED vs. CLAIMED device status?

2016-03-09 Thread yongli he

Hi, Jay

Here is a rough summary of the CLAIMED state:

When resizing a VM to the same host, suppose the instance currently has PCI device A and
will be given new devices B1 and B2. Before the whole resize finishes, the user is given
the chance to revert the resize, and in that case it is better to revert to the original
device A rather than to new PCI devices. The CLAIMED status helps record which devices
have been reserved for the resize but are not yet supposed to be assigned to the current VM.


This whole logic is missing in Nova right now. I had posted some patches for this, but
they need to be refreshed (they are now abandoned):

https://review.openstack.org/#/q/topic:pci_resize
(ignore the first patch)
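
To illustrate the distinction being discussed, here is a rough, hypothetical Python sketch
of the two-step claim/allocate life cycle during a same-host resize; the class and method
names are made up for illustration and are not Nova's actual PciDevice implementation:

    # Hypothetical illustration of CLAIMED vs. ALLOCATED during resize-revert.
    class TrackedDevice(object):
        def __init__(self, address):
            self.address = address
            self.status = 'available'   # available -> claimed -> allocated

        def claim(self):
            # Reserve the device for a pending operation (e.g. a resize); it must
            # not be handed to another instance, but it is also not yet attached
            # to the resizing instance.
            assert self.status == 'available'
            self.status = 'claimed'

        def allocate(self):
            # The operation was confirmed: the device now belongs to the instance.
            assert self.status == 'claimed'
            self.status = 'allocated'

        def free(self):
            # The operation was reverted: release the claim so the instance can
            # keep its original device instead of the newly claimed one.
            self.status = 'available'

Reverting the resize would call free() on the newly claimed devices (B1, B2) and keep
device A allocated; confirming it would call allocate() on B1/B2 and free() on A.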

Regards
Yongli He



On 2016-03-08 02:23, Jay Pipes wrote:

Subject says it all.

I've been trying to fix this bug:

https://bugs.launchpad.net/nova/+bug/1549984

and just shake my head every time I look at the PCI handling code in 
nova/pci/manager.py and nova/pci/stats.py.


Why do we have a CLAIMED state as well as an ALLOCATED state?

Best,
-jay




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] pci_alias

2016-02-29 Thread yongli he

Hi, Beliveau, Ludovic

Currently the alias is defined as a multi-string config option. This makes the option
look like an array in the code, but the user has to define it on multiple lines (one
alias per line) instead of supplying a single JSON array:

pci_alias_opt = cfg.MultiStrOpt
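
A minimal sketch of that behaviour, assuming oslo.config and jsonschema (the helper name
parse_aliases and the reduced schema are made up for illustration, not Nova's actual code):

    import json

    import jsonschema
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        # Each "pci_alias = {...}" line in nova.conf becomes one string in a list.
        cfg.MultiStrOpt('pci_alias', default=[],
                        help='JSON dict describing one PCI alias; repeat the '
                             'option once per alias.'),
    ])

    # Heavily reduced version of the alias schema, for illustration only.
    _ALIAS_SCHEMA = {'type': 'object', 'required': ['name']}

    def parse_aliases(conf):
        aliases = []
        for raw in conf.pci_alias:
            obj = json.loads(raw)
            # A JSON array on one line fails here with "is not of type 'object'",
            # which is exactly the error shown in the report below.
            jsonschema.validate(obj, _ALIAS_SCHEMA)
            aliases.append(obj)
        return aliases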

Yongli He


On 2015-10-27 23:44, Beliveau, Ludovic wrote:

Hi,

I'm configuring multiple pci_alias like so:

pci_alias=[{"vendor_id":"8086", "product_id":"0443", "name":"a1"}, 
{"vendor_id":"8086", "product_id":"0443", "name":"a2"}]


But I'm getting the following error when booting an instance:
ERROR (BadRequest): Invalid PCI alias definition: [{u'vendor_id': 
u'8086', u'product_id': u'0443', u'name': u'a1'}, {u'vendor_id': 
u'8086', u'product_id': u'0443', u'name': u'a2'}] is not of type 
'object' Failed validating 'type' in schema: {'additionalProperties': 
False, 'properties': {'capability_type': {'enum': ['pci'], 'type': 
'string'}, 'device_type': {'enum': ['NIC', 'ACCEL', 'GPU'], 'type': 
'string'}, 'name': {'maxLength': 256, 'minLength': 1, 'type': 
'string'}, 'product_id': {'pattern': '^([\\da-fA-F]{4})$', 'type': 
'string'}, 'vendor_id': {'pattern': '^([\\da-fA-F]{4})$', 'type': 
'string'}}, 'required': ['name'], 'type': 'object'} On instance: 
[{u'name': u'a1', u'product_id': u'0443', u'vendor_id': u'8086'}, 
{u'name': u'a2', u'product_id': u'0443', u'vendor_id': u'8086'}] (HTTP 
400) (Request-ID: req-3fe994bc-6a99-4c0c-be98-1a22703c58ee)


Based on the code, the default value for the pci_alias is an array.
So I'm expecting that defining multiple pci_alias within an array
would be supported.  Or am I missing something?


The workaround to this issue would be to declare each pci_alias in a 
separate line in nova.conf:


pci_alias={"vendor_id":"8086", "product_id":"0443", "name":"a1"}
pci_alias={"vendor_id":"8086", "product_id":"0443", "name":"a2"}

This format is valid for a pci_passthrough_whitelist, I think for 
clarity and consistency they should align.


Furthermore, the nova puppet module 
(puppet/modules/nova/manifests/api.pp) is also expecting the pci_alias 
to be defined as a list.


Thanks,
/ludovic




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Intel PCI CI appears lost in the weeds

2015-10-07 Thread yongli he

Hi, mriedem and all

Sorry for the CI problem. We are back from holiday now, have found the problem, and have
a solution. The CI will be back soon.


Summary:
the log server connection was lost, so the test results failed to upload.

Yongli He



On 2015-10-08 07:03, Matt Riedemann wrote:
Was seeing immediate posts on changes which I knew were bogus, and
getting 404s on the logs:


http://52.27.155.124/232252/1

Anyone know what's going on?




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][infra][third-party] Intel actively seeking solution to CI issue and getting close to a solution

2015-07-15 Thread yongli he

Hello OpenStackers!

The Intel PCI/SRIOV/NGFW/PTAS CI, located in China, has lost connectivity to the Jenkins
servers due to reasons beyond our control. Although the CI system is working fine, we
haven't been able to report results back for about a month now.


We are actively looking for a solution to this problem.

Currently we are seeking a VM in AWS or a similar public cloud to hold our CI logs, which
will quickly give us a short-term solution. For a longer-term solution we are exploring
moving to machines located in the US.


Sorry for the inconvenience, and thank you for your patience.

Regards
Yongli

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: PCI passthrough of 40G ethernet interface

2015-03-26 Thread yongli he

On 2015-03-11 22:15, jacob jacob wrote:
Hi, jacob

We have now found that przemyslaw.czesnowicz has the same NIC; hopefully that will help a
little bit.


Yongli He


-- Forwarded message --
From: jacob jacob <opstk...@gmail.com>
Date: Tue, Mar 10, 2015 at 6:00 PM
Subject: PCI passthrough of 40G ethernet interface
To: openst...@lists.openstack.org



Hi,
I'm interested in finding out if anyone has successfully tested PCI 
passthrough functionality for 40G interfaces in Openstack(KVM hypervisor).


I am trying this out on a Fedora 21 host  with Fedora 21 VM 
image.(3.18.7-200.fc21.x86_64)


Was able to successfully test PCI passthrough of 10 G interfaces:
  Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
Network Connection (rev 01)


With 40G interface testing, the PCI device is passed through to the VM 
but data transfer is failing.
0a:00.1 Ethernet controller: Intel Corporation Ethernet Controller 
XL710 for 40GbE QSFP+ (rev 01)


Tried this with both the i40e driver and latest dpdk driver but no 
luck so far.


With the i40e driver, the data transfer fails.
 Relevant dmesg output:
 [   11.544088] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full 
Duplex, Flow Control: None
[   11.689178] i40e :00:06.0 eth2: NIC Link is Up 40 Gbps Full 
Duplex, Flow Control: None

[   16.704071] [ cut here ]
[   16.705053] WARNING: CPU: 1 PID: 0 at net/sched/sch_generic.c:303 
dev_watchdog+0x23e/0x250()

[   16.705053] NETDEV WATCHDOG: eth1 (i40e): transmit queue 1 timed out
[   16.705053] Modules linked in: cirrus ttm drm_kms_helper i40e drm 
ppdev serio_raw i2c_piix4 virtio_net parport_pc ptp virtio_balloon 
crct10dif_pclmul pps_core parport pvpanic crc32_pclmul 
ghash_clmulni_intel virtio_blk crc32c_intel virtio_pci virtio_ring 
virtio ata_generic pata_acpi
[   16.705053] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 
3.18.7-200.fc21.x86_64 #1
[   16.705053] Hardware name: Fedora Project OpenStack Nova, BIOS 
1.7.5-20140709_153950- 04/01/2014
[   16.705053]   2e5932b294d0c473 88043fc83d48 
8175e686
[   16.705053]   88043fc83da0 88043fc83d88 
810991d1
[   16.705053]  88042958f5c0 0001 88042865f000 
0001

[   16.705053] Call Trace:
[   16.705053][] dump_stack+0x46/0x58
[   16.705053]  [] warn_slowpath_common+0x81/0xa0
[   16.705053]  [] warn_slowpath_fmt+0x55/0x70
[   16.705053]  [] dev_watchdog+0x23e/0x250
[   16.705053]  [] ? dev_graft_qdisc+0x80/0x80
[   16.705053]  [] call_timer_fn+0x3a/0x120
[   16.705053]  [] ? dev_graft_qdisc+0x80/0x80
[   16.705053]  [] run_timer_softirq+0x212/0x2f0
[   16.705053]  [] __do_softirq+0x124/0x2d0
[   16.705053]  [] irq_exit+0x125/0x130
[   16.705053]  [] smp_apic_timer_interrupt+0x48/0x60
[   16.705053]  [] apic_timer_interrupt+0x6d/0x80
[   16.705053][] ? hrtimer_start+0x18/0x20
[   16.705053]  [] ? native_safe_halt+0x6/0x10
[   16.705053]  [] ? rcu_eqs_enter+0xa3/0xb0
[   16.705053]  [] default_idle+0x1f/0xc0
[   16.705053]  [] arch_cpu_idle+0xf/0x20
[   16.705053]  [] cpu_startup_entry+0x3c5/0x410
[   16.705053]  [] start_secondary+0x1af/0x1f0
[   16.705053] ---[ end trace 7bda53aeda558267 ]---
[   16.705053] i40e :00:05.0 eth1: tx_timeout recovery level 1
[   16.705053] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx 
ring 0 disable timeout
[   16.744198] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx 
ring 64 disable timeout

[   16.779322] i40e :00:05.0: i40e_ptp_init: added PHC on eth1
[   16.791819] i40e :00:05.0: PF 40 attempted to control timestamp 
mode on port 1, which is owned by PF 1
[   16.933869] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full 
Duplex, Flow Control: None
[   18.853624] SELinux: initialized (dev tmpfs, type tmpfs), uses 
transition SIDs

[   22.720083] i40e :00:05.0 eth1: tx_timeout recovery level 2
[   22.826993] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx 
ring 0 disable timeout
[   22.935288] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx 
ring 64 disable timeout

[   23.669555] i40e :00:05.0: i40e_ptp_init: added PHC on eth1
[   23.682067] i40e :00:05.0: PF 40 attempted to control timestamp 
mode on port 1, which is owned by PF 1
[   23.722423] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full 
Duplex, Flow Control: None

[   23.800206] i40e :00:06.0: i40e_ptp_init: added PHC on eth2
[   23.813804] i40e :00:06.0: PF 48 attempted to control timestamp 
mode on port 0, which is owned by PF 0
[   23.855275] i40e :00:06.0 eth2: NIC Link is Up 40 Gbps Full 
Duplex, Flow Control: None

[   38.720091] i40e :00:05.0 eth1: tx_timeout recovery level 3
[   38.725844] random: nonblocking pool is initialized
[   38.729874] i40e :00:06.0: HMC error interrupt
[   38.733425] i40e :00:06.0: i40e_vsi_control_tx: VSI seid 518 Tx 
ring 0 disable timeout
[   38.738886] 

Re: [openstack-dev] Fwd: PCI passthrough of 40G ethernet interface

2015-03-24 Thread yongli he

On 2015-03-11 22:15, jacob jacob wrote:
Hi, jacob

I'm trying to find someone to check it; if there is any feedback, I will update you.


Yongli He


-- Forwarded message --
From: jacob jacob <opstk...@gmail.com>
Date: Tue, Mar 10, 2015 at 6:00 PM
Subject: PCI passthrough of 40G ethernet interface
To: openst...@lists.openstack.org



Hi,
I'm interested in finding out if anyone has successfully tested PCI 
passthrough functionality for 40G interfaces in Openstack(KVM hypervisor).


I am trying this out on a Fedora 21 host  with Fedora 21 VM 
image.(3.18.7-200.fc21.x86_64)


Was able to successfully test PCI passthrough of 10 G interfaces:
  Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
Network Connection (rev 01)


With 40G interface testing, the PCI device is passed through to the VM 
but data transfer is failing.
0a:00.1 Ethernet controller: Intel Corporation Ethernet Controller 
XL710 for 40GbE QSFP+ (rev 01)


Tried this with both the i40e driver and latest dpdk driver but no 
luck so far.


With the i40e driver, the data transfer fails.
 Relevant dmesg output:
 [   11.544088] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full 
Duplex, Flow Control: None
[   11.689178] i40e :00:06.0 eth2: NIC Link is Up 40 Gbps Full 
Duplex, Flow Control: None

[   16.704071] [ cut here ]
[   16.705053] WARNING: CPU: 1 PID: 0 at net/sched/sch_generic.c:303 
dev_watchdog+0x23e/0x250()

[   16.705053] NETDEV WATCHDOG: eth1 (i40e): transmit queue 1 timed out
[   16.705053] Modules linked in: cirrus ttm drm_kms_helper i40e drm 
ppdev serio_raw i2c_piix4 virtio_net parport_pc ptp virtio_balloon 
crct10dif_pclmul pps_core parport pvpanic crc32_pclmul 
ghash_clmulni_intel virtio_blk crc32c_intel virtio_pci virtio_ring 
virtio ata_generic pata_acpi
[   16.705053] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 
3.18.7-200.fc21.x86_64 #1
[   16.705053] Hardware name: Fedora Project OpenStack Nova, BIOS 
1.7.5-20140709_153950- 04/01/2014
[   16.705053]   2e5932b294d0c473 88043fc83d48 
8175e686
[   16.705053]   88043fc83da0 88043fc83d88 
810991d1
[   16.705053]  88042958f5c0 0001 88042865f000 
0001

[   16.705053] Call Trace:
[   16.705053][] dump_stack+0x46/0x58
[   16.705053]  [] warn_slowpath_common+0x81/0xa0
[   16.705053]  [] warn_slowpath_fmt+0x55/0x70
[   16.705053]  [] dev_watchdog+0x23e/0x250
[   16.705053]  [] ? dev_graft_qdisc+0x80/0x80
[   16.705053]  [] call_timer_fn+0x3a/0x120
[   16.705053]  [] ? dev_graft_qdisc+0x80/0x80
[   16.705053]  [] run_timer_softirq+0x212/0x2f0
[   16.705053]  [] __do_softirq+0x124/0x2d0
[   16.705053]  [] irq_exit+0x125/0x130
[   16.705053]  [] smp_apic_timer_interrupt+0x48/0x60
[   16.705053]  [] apic_timer_interrupt+0x6d/0x80
[   16.705053][] ? hrtimer_start+0x18/0x20
[   16.705053]  [] ? native_safe_halt+0x6/0x10
[   16.705053]  [] ? rcu_eqs_enter+0xa3/0xb0
[   16.705053]  [] default_idle+0x1f/0xc0
[   16.705053]  [] arch_cpu_idle+0xf/0x20
[   16.705053]  [] cpu_startup_entry+0x3c5/0x410
[   16.705053]  [] start_secondary+0x1af/0x1f0
[   16.705053] ---[ end trace 7bda53aeda558267 ]---
[   16.705053] i40e :00:05.0 eth1: tx_timeout recovery level 1
[   16.705053] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx 
ring 0 disable timeout
[   16.744198] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx 
ring 64 disable timeout

[   16.779322] i40e :00:05.0: i40e_ptp_init: added PHC on eth1
[   16.791819] i40e :00:05.0: PF 40 attempted to control timestamp 
mode on port 1, which is owned by PF 1
[   16.933869] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full 
Duplex, Flow Control: None
[   18.853624] SELinux: initialized (dev tmpfs, type tmpfs), uses 
transition SIDs

[   22.720083] i40e :00:05.0 eth1: tx_timeout recovery level 2
[   22.826993] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx 
ring 0 disable timeout
[   22.935288] i40e :00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx 
ring 64 disable timeout

[   23.669555] i40e :00:05.0: i40e_ptp_init: added PHC on eth1
[   23.682067] i40e :00:05.0: PF 40 attempted to control timestamp 
mode on port 1, which is owned by PF 1
[   23.722423] i40e :00:05.0 eth1: NIC Link is Up 40 Gbps Full 
Duplex, Flow Control: None

[   23.800206] i40e :00:06.0: i40e_ptp_init: added PHC on eth2
[   23.813804] i40e :00:06.0: PF 48 attempted to control timestamp 
mode on port 0, which is owned by PF 0
[   23.855275] i40e :00:06.0 eth2: NIC Link is Up 40 Gbps Full 
Duplex, Flow Control: None

[   38.720091] i40e :00:05.0 eth1: tx_timeout recovery level 3
[   38.725844] random: nonblocking pool is initialized
[   38.729874] i40e :00:06.0: HMC error interrupt
[   38.733425] i40e :00:06.0: i40e_vsi_control_tx: VSI seid 518 Tx 
ring 0 disable timeout
[   38.738

Re: [openstack-dev] [nova][ThirdPartyCI][PCI] Intel Third party Hardware based CI for PCI

2015-03-24 Thread yongli he

On 2015-03-07 04:59, Chris Friesen wrote:
Hi...it would be good to test a bunch of the 
hugepages/pinning/multi-numa-node-guests/etc. features with real 
hardware.  The normal testing doesn't cover much of that since it's 
hardware-agnostic.



We had another team build up a Networking CI for this purpose.

Networking CI (commenting on Neutron) -
https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-Networking-CI

Yongli He



Chris


On 01/07/2015 08:31 PM, yongli he wrote:

Hi,

Intel has set up a hardware-based third-party CI. It has already been running sets of PCI
test cases for several weeks (not sending out comments, just logging the results), and the
log server and these test cases seem fairly stable now. To begin posting comments to the
nova repository, what other necessary work needs to be addressed?

Details:
1. ThirdPartySystems (https://wiki.openstack.org/wiki/ThirdPartySystems) information:

https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-PCI-CI

2. Sample logs:
http://192.55.68.190/143614/6/

http://192.55.68.190/139900/4

http://192.55.68.190/143372/3/

http://192.55.68.190/141995/6/

http://192.55.68.190/137715/13/

http://192.55.68.190/133269/14/

3. Test cases on github:
https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases 





Thanks
Yongli He







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ThirdPartyCI][PCI] Intel Third party Hardware based CI for PCI

2015-01-15 Thread yongli he

On 2015-01-13 03:12, Kurt Taylor wrote:
The public link for your test logs should really be a host name 
instead of an IP address. That way if you have to change it again in 
the future, you won't have dead links in old comments. You may already 
know, but all of the requirements and recommendations are here: 
http://git.openstack.org/cgit/openstack-infra/system-config/tree/doc/source/third_party.rst

Thanks very much; we are considering changing to a DNS name.

Yongli He


Kurt Taylor (krtaylor)

On Sun, Jan 11, 2015 at 11:18 PM, yongli he <yongli...@intel.com> wrote:


On 2015-01-08 10:31, yongli he wrote:
To make the service more stable we upgraded the networking device, so the
log server address changed to a new IP address: 198.175.100.33.

The sample logs therefore change to (replace 192.55.68.190 with the new
address):


http://198.175.100.33/143614/6/
http://198.175.100.33/139900/4
http://198.175.100.33/143372/3/
http://198.175.100.33/141995/6/
http://198.175.100.33/137715/13/
http://198.175.100.33/133269/14/

Yongli He



Hi,

Intel has set up a hardware-based third-party CI. It has already been
running sets of PCI test cases for several weeks (not sending out
comments, just logging the results), and the log server and these test
cases seem fairly stable now. To begin posting comments to the nova
repository, what other necessary work needs to be addressed?

Details:
1. ThirdPartySystems (https://wiki.openstack.org/wiki/ThirdPartySystems) information:
https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-PCI-CI

2. A sample log:
http://192.55.68.190/138795/6/

3. Test cases on github:

https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases



Thanks
Yongli He






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][qa][pci] Intel PCI CI , testing method/env and test cases

2015-01-15 Thread yongli he

Hi, all

The Intel PCI CI uses hardware machines to test PCI; there are some changes to devstack
and tempest, and Jenkins is used to dispatch tasks.

Basic information:
* the topology:
    log server <---> Jenkins server <---> node pool
* devstack is used to deploy the testing environment.

The PCI CI's main problems are how to know the PCI information and how to configure
nova/tempest:
 1)  the test cases should know the machine's PCI device information (to check that the
     allocated PCI device was actually passed through to the VM)
 2)  how Jenkins delivers this PCI information to nova/devstack/tempest


How the Intel CI solves these problems:
  1)  Q: how do the test cases know the allocated machine's PCI device information?
      A: each node might have different PCI hardware and a different number of PCI
         devices, so there is a config file on each node storing that node's PCI
         information.

         * test node local config file: pci.conf
           pci_info=name:PCI_network_card,vendor_id:8086,product_id:1520,count:20; ...

  2)  Q: how does Jenkins deliver the PCI information to nova/devstack/tempest?
      A: Jenkins allocates a node for a patch set, then sends a script to:

         * configure devstack; devstack then configures the nova
           pci_passthrough_whitelist and alias
         * export an environment variable storing the pci_info for the tempest PCI
           test cases (this needs improvement, of course)

      Now the test cases know everything needed to create a VM with PCI devices. A small
      sketch of how such a pci_info line could be parsed follows below.
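
As an illustration only (the pci_info format above is the CI's own convention, and this
parser is a hypothetical sketch, not code from the CI), a pci_info line like the one shown
could be turned into a list of dicts roughly like this:

    def parse_pci_info(line):
        """Parse 'name:X,vendor_id:8086,product_id:1520,count:20; ...' into dicts."""
        devices = []
        for entry in line.split(';'):
            entry = entry.strip()
            if not entry:
                continue
            fields = dict(item.split(':', 1) for item in entry.split(','))
            if 'count' in fields:
                fields['count'] = int(fields['count'])
            devices.append(fields)
        return devices

    # Example:
    # parse_pci_info('name:PCI_network_card,vendor_id:8086,product_id:1520,count:20')
    # -> [{'name': 'PCI_network_card', 'vendor_id': '8086',
    #      'product_id': '1520', 'count': 20}]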


  What was changed in devstack/tempest:
* devstack
    adding "insert_pci" to devstack/functions-common
* tempest
    adding Linux utils to get PCI device information from the VM
    adding routines to create a PCI flavor
https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/blob/master/pci_tempest_patch/0001-Add-Intel-PCI-functions.patch

* test cases
https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases

What could be improved, and what is common (maybe worth moving into devstack/tempest)?
   * tempest: using an environment variable to deliver information to the tempest cases
     needs improvement; it could perhaps become a config option
   * the PCI information could be simplified by using the interface name instead of
     vendor_id/product_id

   * "init_pci" might be valuable to devstack
   * test cases:
     landing the test cases in nova functional testing (pending):
         https://review.openstack.org/#/c/141270/
     trying to put some improved test cases into tempest (rejected):
         https://review.openstack.org/#/c/139000/

Regards
Yongli h...@intel.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][ThirdPartyCI][PCI] Intel Third party Hardware based CI for PCI

2015-01-11 Thread yongli he

On 2015-01-08 10:31, yongli he wrote:
To make the service more stable we upgraded the networking device, so the log server
address changed to a new IP address: 198.175.100.33.

The sample logs therefore change to (replace 192.55.68.190 with the new address):


http://198.175.100.33/143614/6/
http://198.175.100.33/139900/4
http://198.175.100.33/143372/3/
http://198.175.100.33/141995/6/
http://198.175.100.33/137715/13/
http://198.175.100.33/133269/14/

Yongli He



Hi,

Intel has set up a hardware-based third-party CI. It has already been running sets of PCI
test cases for several weeks (not sending out comments, just logging the results), and the
log server and these test cases seem fairly stable now. To begin posting comments to the
nova repository, what other necessary work needs to be addressed?

Details:
1. ThirdPartySystems (https://wiki.openstack.org/wiki/ThirdPartySystems) information:

https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-PCI-CI

2. A sample log:


3. Test cases on github:
https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases



Thanks
Yongli He



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ThirdPartyCI][PCI] Intel Third party Hardware based CI for PCI

2015-01-07 Thread yongli he

Hi,

Intel has set up a hardware-based third-party CI. It has already been running sets of PCI
test cases for several weeks (not sending out comments, just logging the results), and the
log server and these test cases seem fairly stable now. To begin posting comments to the
nova repository, what other necessary work needs to be addressed?

Details:
1. ThirdPartySystems (https://wiki.openstack.org/wiki/ThirdPartySystems) information:

https://wiki.openstack.org/wiki/ThirdPartySystems/Intel-PCI-CI

2. Sample logs:
http://192.55.68.190/143614/6/

http://192.55.68.190/139900/4

http://192.55.68.190/143372/3/

http://192.55.68.190/141995/6/

http://192.55.68.190/137715/13/

http://192.55.68.190/133269/14/

3. Test cases on github:
https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases



Thanks
Yongli He

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][ThirdPartyCI][PCI CI] comments to Nova

2014-12-23 Thread yongli he

Hi, Joe Gordon and all

Intel has recently been setting up a hardware-based third-party CI. It has already been
running a set of basic PCI test cases for several weeks, but does not send out comments,
just logs the results.

The log server and these test cases seem stable. Here is one sample log:

http://192.55.68.190/138795/6/

For now, the test cases live on GitHub:
https://github.com/intel-hw-ci/Intel-Openstack-Hardware-CI/tree/master/pci_testcases

To begin posting comments to the nova repository, what other necessary work needs to be
addressed?


Some notes:
* the test cases just cover basic PCI passthrough testing.
* after it starts working, more test cases will be added, including basic SRIOV.


Thanks
Yongli He

More logs:
http://192.55.68.190/138795/6

http://192.55.68.190/74423/6

http://192.55.68.190/141115/6

http://192.55.68.190/142565/2

http://192.55.68.190/142835/3

http://192.55.68.190/74423/5

http://192.55.68.190/142835/2

http://192.55.68.190/140739/3

.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][pci] A couple of questions

2014-06-24 Thread yongli he

Hi, Robert, Irenab

Are your patches properly setting the Gerrit topic, e.g. pci-passthrough-sriov?
All SRIOV patches need this tag, I think, to help people find this set of
patches to review.


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/pci-passthrough-sriov,n,z

Yongli He

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][pci] A couple of questions

2014-06-12 Thread yongli he

On 2014-06-11 05:09, Jiang, Yunhong wrote:


Hi, Robert

 For your first question, I suspect it's something wrong and 
should be 'devi_id', which is the hypervisor's identification for the 
device. I will leave Yongli to have more comments on it.


Sorry for the late reply. This name is the device name returned by libvirt, and is used to
operate the device via libvirt later. device_id might be a good name for this.


Yongli He


 For the second one, thanks for point the issue out. Yes, I'm 
working on fixing it.


--jyh

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Tuesday, June 10, 2014 1:46 PM
To: Jiang, Yunhong; He, Yongli
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev][nova][pci] A couple of questions

Hi Yunhong & Yongli,

In the routine _prepare_pci_devices_for_use(), it's referring to 
dev['hypervisor_name']. I didn't see code that's setting it up, or the 
libvirt nodedev xml includes hypervisor_name. Is this specific to Xen?


Another question is about the issue that was raised in this review: 
https://review.openstack.org/#/c/82206/. It's about the use of node id 
or host name in the PCI device table. I'd like to know you guys' 
thoughts on that.


thanks,

Robert



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] about pci device filter

2014-05-06 Thread yongli he

On 2014-05-05 16:28, Bohai (ricky) wrote:

Hi, stackers:

Now there is a default whitelist filter for PCI devices.
But maybe it's not enough in some scenarios.

Maybe it's better if we provide a mechanism to specify a customized filter.

For example:
a user could write a special filter and then specify which filter to use in the config
files.

Any advices?


I am now working on a similar thing, not exactly what you want, but I have also put a
module whitelist on my to-do list.

https://review.openstack.org/#/c/87500/




Best regards to you.
Ricky







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][pci]PCI SR-IOV use cases initial doc

2014-04-10 Thread yongli he
On 2014-04-10 16:40, John Garbutt wrote:
> Apologies, that came out all wrong...
>
> On 10 April 2014 09:28, John Garbutt  wrote:
>> I think writing this up as a nova-spec is going to make this process
>> much easier:
>> https://wiki.openstack.org/wiki/Blueprints#Nova
>>
>> It will save you having to re-write your document once you want to
>> submit a blueprint, and we can all see each others comments in gerrit,
>> and more clearly see how things change and evolve. The way the
>> template in nova-spec works, it should also help you with structuring
>> your argument.
> Thats just want I would find easier, its just a suggestion.
>
>> Please don't design assuming a single vendor solution, that is sure to
>> get rejected (at least my me) at the blueprint review stage. You might
>> want a different vendor in each AZ to isolate you from failures due to
>> vendor bugs, if you are digging for a use case.
> I guess thats a tenant use case, I got confused reading through those.
>
>> I still can't see a clear description of the "tenant" use cases, I
>> still think thats the key to getting agreement here, and getting
>> useful feedback at the summit. Not sure I understand the tables, they
>> seem a bit confusing/distracting.
> Sorry, forgot to mention, you are making good progress here. But,
> given the loop we are going around here, I think agreeing the "ideal"
> use cases, then looking at the detail, and looping back to see if
> everything "works" is probably the right approach. Other ideas
> welcome!
>
> Once there are the use cases, given all the Config vs API debates, I
About use cases, I might have a different picture in my head; i.e., what do you think of
this one:

a tenant user wants to pick a PCI acceleration card with MD5 and RC6 encryption/hash
support.


1. Is this the use case you are looking for?
2. What other information should be added to this use case?

And any other suggestions?

Yongli he


> would look at the pure data flow, in a Config/API agnostic way.
> Agreeing the info needed from the user, then in the VIF driver, then
> in between, etc. We should be able to agree on that, before returning
> to the host aggregates API vs something new API vs more config debate.
> Right it doesn't seem to be clear what is required, so its hard to
> know what the best approach is, compared to other features we already
> have in Nova.
>
> At the moment I am struggling to see the whole picture, getting the
> general idea clear before the summit would be awesome, so we can
> discuss how to stage the implementation, deal with backwards
> compatibility, etc.
>
> Thanks,
> John
>
>> On 10 April 2014 09:14, yongli he  wrote:
>>> 于 2014年04月10日 15:59, Irena Berezovsky 写道:
>>>
>>> Hi Robert,
>>>
>>> Thanks a lot the inputs you posted in the doc.
>>>
>>> I have raised there few questions and added use case for High Availability.
>>>
>>> Another concern I have is regarding the assumption that there is no case to
>>> mix different vendor cards in the setup. I think that mixing Cisco and Intel
>>> or Mellanox cards does not make sense, but Intel and Mellanox cards can
>>> coexist. At least for my understanding, but I may be wrong, both Intel and
>>> Mellanox take HW VEB (HW embedded switch) approach.
>>>
>>> 1. open to mail list.
>>> 2. admin/usr won't mixing Intel/Cisco/Mellanox card, does not mean we
>>> should disable it, or don't give a chance.
>>> 3. i raise couple of question and questioning the aggregate solution. see
>>> inline comments.
>>>
>>> https://docs.google.com/document/d/1zgMaXqrCnad01-jQH7Mkmf6amlghw9RMScGLBrKslmw/edit
>>>
>>> Yongli He
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Irena
>>>
>>>
>>>
>>> From: Robert Li (baoli) [mailto:ba...@cisco.com]
>>> Sent: Wednesday, April 09, 2014 11:11 PM
>>> To: Irena Berezovsky; Sandhya Dasu (sadasu); Robert Kukura; He, Yongli
>>> (yongli...@intel.com); Itzik Brown; beag...@redhat.com
>>> Subject: Re: PCI SR-IOV use cases initial doc
>>>
>>>
>>>
>>> Hi,
>>>
>>>
>>>
>>> I updated the doc with some of my thoughts.
>>>
>>>
>>>
>>> Thanks,
>>>
>>> Robert
>>>
>>>
>>>
>>> On 3/24/14, 8:41 AM, "Irena Berezovsky"  wrote:
>>>
>>>
>>>
>>> Hi,
>>>
>>> I have created the initial doc to capture PCI SR-IOV networking use cases:
>>>
>>> https://docs.google.com/document/d/1zgMaXqrCnad01-jQH7Mkmf6amlghw9RMScGLBrKslmw/edit
>>>
>>>
>>>
>>> I have updated the agenda for tomorrow meeting to discuss the use cases.
>>>
>>>
>>>
>>> Please comment and update
>>>
>>>
>>>
>>> BR,
>>>
>>> Irena
>>>
>>>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][pci]PCI SR-IOV use cases initial doc

2014-04-10 Thread yongli he
On 2014-04-10 16:28, John Garbutt wrote:
> I think writing this up as a nova-spec is going to make this process
> much easier:
> https://wiki.openstack.org/wiki/Blueprints#Nova
>
> It will save you having to re-write your document once you want to
> submit a blueprint, and we can all see each others comments in gerrit,
> and more clearly see how things change and evolve. The way the
> template in nova-spec works, it should aslo help you with structuring
> your argument.
Yeah, this is worth doing now; I have begun formatting my own blueprint into a spec.
>
> Please don't design assuming a single vendor solution, that is sure to
Agreed.
> get rejected (at least my me) at the blueprint review stage. You might
> want a different vendor in each AZ to isolate you from failures due to
> vendor bugs, if you are digging for a use case.
>
> I still can't see a clear description of the "tenant" use cases, I
> still think thats the key to getting agreement here, and getting
> useful feedback at the summit. Not sure I understand the tables, they
> seem a bit confusing/distracting.
Yeah, maybe because it's hard to describe the use-case flow without dragging in a
reference design.

>
> John
>
>
> On 10 April 2014 09:14, yongli he  wrote:
>> 于 2014年04月10日 15:59, Irena Berezovsky 写道:
>>
>> Hi Robert,
>>
>> Thanks a lot the inputs you posted in the doc.
>>
>> I have raised there few questions and added use case for High Availability.
>>
>> Another concern I have is regarding the assumption that there is no case to
>> mix different vendor cards in the setup. I think that mixing Cisco and Intel
>> or Mellanox cards does not make sense, but Intel and Mellanox cards can
>> coexist. At least for my understanding, but I may be wrong, both Intel and
>> Mellanox take HW VEB (HW embedded switch) approach.
>>
>> 1. open to mail list.
>> 2. admin/usr won't mixing Intel/Cisco/Mellanox card, does not mean we
>> should disable it, or don't give a chance.
>> 3. i raise couple of question and questioning the aggregate solution. see
>> inline comments.
>>
>> https://docs.google.com/document/d/1zgMaXqrCnad01-jQH7Mkmf6amlghw9RMScGLBrKslmw/edit
>>
>> Yongli He
>>
>>
>>
>> Thanks,
>>
>> Irena
>>
>>
>>
>> From: Robert Li (baoli) [mailto:ba...@cisco.com]
>> Sent: Wednesday, April 09, 2014 11:11 PM
>> To: Irena Berezovsky; Sandhya Dasu (sadasu); Robert Kukura; He, Yongli
>> (yongli...@intel.com); Itzik Brown; beag...@redhat.com
>> Subject: Re: PCI SR-IOV use cases initial doc
>>
>>
>>
>> Hi,
>>
>>
>>
>> I updated the doc with some of my thoughts.
>>
>>
>>
>> Thanks,
>>
>> Robert
>>
>>
>>
>> On 3/24/14, 8:41 AM, "Irena Berezovsky"  wrote:
>>
>>
>>
>> Hi,
>>
>> I have created the initial doc to capture PCI SR-IOV networking use cases:
>>
>> https://docs.google.com/document/d/1zgMaXqrCnad01-jQH7Mkmf6amlghw9RMScGLBrKslmw/edit
>>
>>
>>
>> I have updated the agenda for tomorrow meeting to discuss the use cases.
>>
>>
>>
>> Please comment and update
>>
>>
>>
>> BR,
>>
>> Irena
>>
>>


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][pci]PCI SR-IOV use cases initial doc

2014-04-10 Thread yongli he

On 2014-04-10 15:59, Irena Berezovsky wrote:


Hi Robert,

Thanks a lot for the inputs you posted in the doc.

I have raised there few questions and added use case for High 
Availability.


Another concern I have is regarding the assumption that there is no 
case to mix different vendor cards in the setup. I think that mixing 
Cisco and Intel or Mellanox cards does not make sense, but Intel and 
Mellanox cards can coexist. At least for my understanding, but I may 
be wrong, both Intel and Mellanox take HW VEB (HW embedded switch) 
approach.



1. Opening this to the mailing list.
2. That an admin/user won't mix Intel/Cisco/Mellanox cards does not mean we should
   disable it, or not give them the chance.
3. I raised a couple of questions and am questioning the aggregate solution; see the
   inline comments.


https://docs.google.com/document/d/1zgMaXqrCnad01-jQH7Mkmf6amlghw9RMScGLBrKslmw/edit

Yongli He


Thanks,

Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, April 09, 2014 11:11 PM
To: Irena Berezovsky; Sandhya Dasu (sadasu); Robert Kukura; He,
Yongli (yongli...@intel.com); Itzik Brown; beag...@redhat.com

Subject: Re: PCI SR-IOV use cases initial doc

Hi,

I updated the doc with some of my thoughts.

Thanks,

Robert

On 3/24/14, 8:41 AM, "Irena Berezovsky" <ire...@mellanox.com> wrote:


Hi,

I have created the initial doc to capture PCI SR-IOV networking
use cases:


https://docs.google.com/document/d/1zgMaXqrCnad01-jQH7Mkmf6amlghw9RMScGLBrKslmw/edit

I have updated the agenda for tomorrow meeting to discuss the use
cases.

Please comment and update

BR,

Irena



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][PCI] one use case make the flavor/extra-info based solution to be right choice

2014-03-24 Thread yongli he

On 2014-03-21 03:18, Jay Pipes wrote:

On Thu, 2014-03-20 at 13:50 +, Robert Li (baoli) wrote:

Hi Yongli,

I'm very glad that you bring this up and relive our discussion on PCI
passthrough and its application on networking. The use case you brought up
is:

user wants a FASTER NIC from INTEL to join a virtual
networking.

By FASTER, I guess that you mean that the user is allowed to select a
particular vNIC card. Therefore, the above statement can be translated
into the following requests for a PCI device:
 . Intel vNIC
 . 1G or 10G or ?
 . network to join

First of all, I'm not sure in a cloud environment, a user would care about
the vendor or card type.

Correct. Nor would/should a user of the cloud know what vendor or card
type is in use on a particular compute node. At most, all a user of the
cloud would be able to select from is an instance type (flavor) that
listed some capability like "high_io_networking" or something like that,
and the mapping of what "high_io_networking" meant on the back end of
Nova would need to be done by the operator (i.e. if the tag
"high_io_networking" is on a flavor a user has asked to launch a server
with, then that tag should be translated into a set of capabilities that
is passed to the scheduler and used to determine where the instance can
be scheduled by looking at which compute nodes support that set of
capabilities.

This is what I've been babbling about with regards to "leaking
implementation through the API". What happens if, say, the operator
decides to use IBM cards (instead of or in addition to Intel ones)? If
you couple the implementation with the API, like the example above shows
("user wants a FASTER NIC from INTEL"), then you have to add more
complexity to the front-end API that a user deals with, instead of just
adding a capabilities mapping for new compute nodes that says
"high_io_networking" tag can match to these new compute nodes with IBM
cards.

Jay

Thank you, and sorry for the late reply.

In this use case the user might indeed not care about the vendor id/product id. But for a
specific image, the product's model (which is related to the vendor id/product id) might
be something the user cares about, because the image might not support a new device;
vendor_id and product_id could then be used to eliminate the unsupported device.


Anyway, even without the product/vendor id, multiple extra tags are still needed. Consider
this case: for an accelerator card for encryption/decryption/hashing there are many
supported features, and different PCI cards will most likely support different feature
sets, like MD5, DES, 3DES, AES, RSA, SHA-x, IDEA, RC4/5/6. The way to select such a device
is by its feature set rather than by one or two groups, so the extra information about a
PCI card is needed, in a flexible way. A rough sketch of what such feature-set matching
could look like follows below.
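
Purely as an illustration of the idea (this is a hypothetical sketch, not the proposed
Nova implementation or its data model), matching a request against per-device extra
information could look like:

    # Hypothetical device records carrying flexible extra attributes.
    devices = [
        {'address': '0000:06:00.1', 'vendor_id': '8086', 'product_id': '10c9',
         'features': {'md5', 'aes', 'rsa'}},
        {'address': '0000:06:00.2', 'vendor_id': '8086', 'product_id': '0443',
         'features': {'md5', 'rc6', 'sha-1'}},
    ]

    def find_devices(devices, required_features, **extra_attrs):
        """Return devices whose feature set and extra attributes satisfy the request."""
        matches = []
        for dev in devices:
            if not required_features.issubset(dev['features']):
                continue
            if any(dev.get(k) != v for k, v in extra_attrs.items()):
                continue
            matches.append(dev)
        return matches

    # e.g. "a card that supports MD5 and RC6", regardless of vendor:
    print(find_devices(devices, {'md5', 'rc6'}))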

Yongli He






Best,
-jay







___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][PCI] problem about PCI SRIOV

2014-03-23 Thread yongli he

On 2014-03-21 18:31, Gouzongmei wrote:


Hi,

I have a problem when reading the wiki below, which is based on the 
latest SRIOV design.


https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support#API_interface

My problem is about the "PCI SRIOV with tagged flavor" part.


In "pci_information =  { { 'device_id': "8086", 'vendor_id':
"000[1-2]" }, { 'e.physical_network': 'X' } }" , I'm confused
what is the "e.physical_network", if it means a network
resource, why we need to filter the assignable nics by a
network resource?

This is for the Neutron SRIOV case: a physical network is an attribute added to a PCI
device. If you want to allocate a NIC from PCI, you must ensure we get a PCI device that
is connected to the correct physical network, the same one Neutron defines.
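
For reference, this is roughly the shape such tagging takes in Nova's whitelist
configuration (the exact option syntax varies by release, so treat this as an
illustration only):

    pci_passthrough_whitelist = {"devname": "eth3", "physical_network": "physnet1"}

A device matching that entry carries the physical_network tag, and only devices tagged
with the physical network of the requested Neutron network can be allocated for the port.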



Can you please tell me more about the "physical_network" here,
thanks a lot.

In "{'e.physical_network': 'X', 'count': 1}", I think the "count" means the count of
virtual NICs an SRIOV NIC can support, is that right?



yes.


In the last step while booting a vm with a virtual nic, the command is 
"nova boot  mytest  --flavor m1.tiny --image=cirros-0.3.1-x86_64-uec  
--nic  net-id=network_X pci_flavor= '1:phyX_NIC;'".


I noticed that "pci_flavor" is proposed while there is already the m1.tiny flavor; will
"pci_flavor" be separated from the normal flavor in the next step?


Yes, but that does not mean the pci_flavor is separate from the normal flavor concept:
the pci_flavor is the original alias, now with API support. We renamed it to pci_flavor
because it is just for PCI, and the flavor naming is much more OpenStack style.


Hope this helps.

Yongli He


Thanks



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][PCI] flavor/extra-info based solution: why need vendor/product id

2014-03-20 Thread yongli he
s the usage and semantics. It's up to a well educated admin
to use it properly, and it's not easy to manage. Therefore, I believe it
requires further work.

I think that practical use cases would really help us find the right
solution, and provide the optimal interface to the admin/user. So let's
keep the discussion going.

thanks,
Robert

On 3/20/14 4:22 AM, "yongli he"  wrote:


Hi, all

With Juno now open, the PCI discussion is open again: group-based vs.
flavor/extra-information-based solutions. There is a use case which the group-based
solution cannot support well.

Please consider this, and choose the flavor/extra-information-based solution.


Problems with groups:

I: It exposes many details of the underlying grouping to the user, who carries the burden
of dealing with them; and in an OpenStack system the group names might get messy
(refer to II).
--
II: The group-based solution cannot support even a simple use case well:

a user wants a faster NIC from Intel to join a virtual network.

Suppose the tenant's physical network name is "phy1". Then the 'group' style solution
won't meet such a simple use case, because:

1) the group name must be 'phy1'; otherwise Neutron cannot fill in the PCI request, since
Neutron only has the physical network name. (This assumes "phy1" does not bother the
user; if it does, the user will see group names like "intel_phy1" or "cisco_v1_phy1".)

2) because there is only one property in the PCI stats pool, the user loses the chance to
choose the version or model of the PCI device, and so cannot request a simple thing like
an "intel-NIC" or a "1G_NIC".


Regards
Yongli He



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][PCI] one use case make the flavor/extra-info based solution to be right choice

2014-03-20 Thread yongli he
crypto, networking, storage, etc. The pci_flavor_attrs needs to be defined
on EVERY node, and has to accommodate attributes from ALL of these classes
of cards. However, an attribute for one class of cards may not be
applicable to other classes of cards. However, the stats group are keyed
on pci_flavor_attrs, and PCI flavors can be defined with any attributes

Your suggestion is basically to remove the PCI flavor attrs and hard-code them in the
Python source, with no flavor. Even if you use an identical key name for the 'group'-like
thing, you still also need a network=phy1; and the name 'group' lacks semantics if we
have to use 'group' for every other module, while networking is not so special that it
deserves its own 'network' attribute.

So I prefer the flavor/extra-info solution, treating all modules equally. I accept
defining a 'physical network' extra-info attribute for networking; just don't make
networking a special case.



from pci_flavor_attrs. Thus, it really lacks the level of abstraction that
clearly defines the usage and semantics. It's up to a well educated admin
to use it properly, and it's not easy to manage. Therefore, I believe it
requires further work.

I think that practical use cases would really help us find the right
solution, and provide the optimal interface to the admin/user. So let's
keep the discussion going.

thanks,
Robert

On 3/20/14 4:22 AM, "yongli he"  wrote:


Hi, all

With Juno now open, the PCI discussion is open again: group-based vs.
flavor/extra-information-based solutions. There is a use case which the group-based
solution cannot support well.

Please consider this, and choose the flavor/extra-information-based solution.


Problems with groups:

I: It exposes many details of the underlying grouping to the user, who carries the burden
of dealing with them; and in an OpenStack system the group names might get messy
(refer to II).
--
II: The group-based solution cannot support even a simple use case well:

a user wants a faster NIC from Intel to join a virtual network.

Suppose the tenant's physical network name is "phy1". Then the 'group' style solution
won't meet such a simple use case, because:

1) the group name must be 'phy1'; otherwise Neutron cannot fill in the PCI request, since
Neutron only has the physical network name. (This assumes "phy1" does not bother the
user; if it does, the user will see group names like "intel_phy1" or "cisco_v1_phy1".)

2) because there is only one property in the PCI stats pool, the user loses the chance to
choose the version or model of the PCI device, and so cannot request a simple thing like
an "intel-NIC" or a "1G_NIC".


Regards
Yongli He



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][PCI] one use case make the flavor/extra-info based solution to be right choice

2014-03-20 Thread yongli he
Hi, all

With Juno now open, the PCI discussion is open again: group-based vs.
flavor/extra-information-based solutions. There is a use case which the group-based
solution cannot support well.

Please consider this, and choose the flavor/extra-information-based solution.


Problems with groups:

I: It exposes many details of the underlying grouping to the user, who carries the burden
of dealing with them; and in an OpenStack system the group names might get messy
(refer to II).
--
II: The group-based solution cannot support even a simple use case well:

a user wants a faster NIC from Intel to join a virtual network.

Suppose the tenant's physical network name is "phy1". Then the 'group' style solution
won't meet such a simple use case, because:

1) the group name must be 'phy1'; otherwise Neutron cannot fill in the PCI request, since
Neutron only has the physical network name. (This assumes "phy1" does not bother the
user; if it does, the user will see group names like "intel_phy1" or "cisco_v1_phy1".)

2) because there is only one property in the PCI stats pool, the user loses the chance to
choose the version or model of the PCI device, and so cannot request a simple thing like
an "intel-NIC" or a "1G_NIC".


Regards
Yongli He

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][pci][sriov] rewriting the the common SRIOV support blue-prints :https://blueprints.launchpad.net/nova/+spec/pci-extra-info

2014-03-04 Thread yongli he

Hi, all

The SRIOV common support blueprint link:
https://blueprints.launchpad.net/nova/+spec/pci-extra-info


After a long discussion, the SRIOV design-choice discussion is done and an agreement has
been reached. I want to rewrite this blueprint, maybe using diagrams to present it
clearly. I hope that will be done in one week or a little bit longer; then I can introduce
it to the nova meeting before the design summit.

All the SRIOV work can be partitioned into 3 tasks:
* the common SRIOV support in nova (which this blueprint focuses on)
* the Nova-side NIC, VIF, and interface to the common PCI code
* the different MD drivers and the rest in neutron

This blueprint is intended to support common SRIOV on the nova side, not only for
neutron. All formal design decisions about SRIOV should be in this blueprint; other
detailed information in the meeting or on the dev mailing list is a reference for whoever
is interested.

So I will focus on this blueprint. I really want you guys to check this bp to make sure
it captures the things we had agreed on (I think so), both before and after the new bp is
done. I will update you when I have finished it.



meeting link:  https://wiki.openstack.org/wiki/Meetings/Passthrough

Regards
Yongli He





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PCI SRIOV meeting suspend?

2014-03-04 Thread yongli he
On 2014-03-04 20:45, Robert Li (baoli) wrote:
> Hi Yongli,
>
> I have been looking at your patch set. Let me look at it again if you have
> new update. 
Looking forward to that.

thanks.
>
> The meeting changed back to UTC 1300 Tuesday.
>
> thanks,
> Robert
>
> On 3/4/14 12:39 AM, "yongli he"  wrote:
>
>> On 2014年03月04日 13:33, Irena Berezovsky wrote:
>>> Hi Yongli He,
>>> The PCI SRIOV meeting switched back to weekly occurrences,.
>>> Next meeting will be today at usual time slot:
>>> https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting
>>>
>>> In coming meetings we would like to work on content to be proposed for
>>> Juno.
>>> BR,
>> thanks, Irena.
>>
>> Yongli he
>>> Irena
>>>
>>> -Original Message-
>>> From: yongli he [mailto:yongli...@intel.com]
>>> Sent: Tuesday, March 04, 2014 3:28 AM
>>> To: Robert Li (baoli); Irena Berezovsky; OpenStack Development Mailing
>>> List
>>> Subject: PCI SRIOV meeting suspend?
>>>
>>> HI, Robert
>>>
>>> does it stop for while?
>>>
>>> and if you are convenient please review this patch set , check if the
>>> interface is ok.
>>>
>>>
>>>
>>> https://review.openstack.org/#/q/status:open+project:openstack/nova+branc
>>> h:master+topic:bp/pci-extra-info,n,z
>>>
>>> Yongli He


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] PCI SRIOV meeting suspend?

2014-03-03 Thread yongli he

On 2014-03-04 13:33, Irena Berezovsky wrote:

Hi Yongli He,
The PCI SRIOV meeting has switched back to weekly occurrences.
Next meeting will be today at usual time slot:
https://wiki.openstack.org/wiki/Meetings#PCI_Passthrough_Meeting

In coming meetings we would like to work on content to be proposed for Juno.
BR,

thanks, Irena.

Yongli he

Irena

-Original Message-
From: yongli he [mailto:yongli...@intel.com]
Sent: Tuesday, March 04, 2014 3:28 AM
To: Robert Li (baoli); Irena Berezovsky; OpenStack Development Mailing List
Subject: PCI SRIOV meeting suspend?

Hi, Robert

Has the meeting stopped for a while?

And if it is convenient for you, please review this patch set and check whether the
interface is OK.


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/pci-extra-info,n,z

Yongli He



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] PCI SRIOV meeting suspend?

2014-03-03 Thread yongli he

Hi, Robert

Has the meeting stopped for a while?

And if it is convenient for you, please review this patch set and check whether the
interface is OK.



https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/pci-extra-info,n,z

Yongli He

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] A problem about pci-passthrough

2014-03-03 Thread yongli he

On 2014-03-03 22:21, Alexis Lee wrote:

Liuji (Jeremy) said on Mon, Mar 03, 2014 at 08:06:57AM +:

Test scenario:
1)There are two compute nodes in the environment named A and B. A has two NICs 
of vendor_id='8086' and product_id='105e', B has two NICs of vendor_id='8086' 
and product_id='10c9'.
2)I configured "pci_alias={"vendor_id":"8086", "product_id":"10c9", 
"name":"a1"}" in nova.conf on the controller node, and of course the pci_passthrough_whitelist on this two compute nodes 
seperately.
3)Finally, a flavor named "MyTest" with extra_specs= {u'pci_passthrough:alias': 
u'a1:1'}
4)When I create a new instance with the "MyTest" flavor, it randomly either starts or goes to the error state.

The problem is in the _schedule function of nova/scheduler/filter_scheduler.py:
 chosen_host = random.choice(
 weighed_hosts[0:scheduler_host_subset_size])
 selected_hosts.append(chosen_host)

 # Now consume the resources so the filter/weights
 # will change for the next instance.
 chosen_host.obj.consume_from_instance(instance_properties)

while "scheduler_host_subset_size" is configured to 2, the
weighed_hosts are A and B, but the chosen_host is selected randomly.
When chosen_host is B, the instance starts, but when chosen_host is A,
the instance goes to the error state. The "consume_from_instance" call will
raise an exception.

Hi Jeremy,

You didn't mention the PciPassthroughFilter, have you enabled this in
your scheduler?
   https://wiki.openstack.org/wiki/Pci_passthrough


Alexis

Definitely need this filter.

Yongli He
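A minimal nova.conf sketch for enabling the filter (illustrative only; the
exact filter list depends on the deployment):

  scheduler_available_filters=nova.scheduler.filters.all_filters
  scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter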



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] pci device hotplug

2014-02-27 Thread yongli he

On 2014-02-22 07:52, yunhong jiang wrote:

On Mon, 2014-02-17 at 06:43 +, Gouzongmei wrote:

Hello,

  


In current PCI passthrough implementation, a pci device is only
allowed to be assigned to a instance while the instance is being
created, it is not allowed to be assigned or removed from the instance
while the instance is running or stop.

Besides, I noticed that the basic ability--remove a pci device from
the instance(not by delete the flavor) has never been implemented or
prompted by anyone.

The current implementation:

https://wiki.openstack.org/wiki/Pci_passthrough

  


I have tested the nic hotplug on my experimental environment, it’s
supported by the latest libvirt and qemu.

  


My problem is, why the pci device hotplug is not proposed in openstack
until now, and is there anyone planning to do the pci device hotplug?

Agree that PCI hotplug is an important feature. The reason of no support
yet is bandwidth. The folks working on PCI spend a lot of time on SR-IOV
NIC discussion.

--jyh

Sorry for noticing this late. Sure, I also think it should support hotplug.

--Yongli He



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] does exception need localize or not?

2014-02-27 Thread yongli he

Refer to:
https://wiki.openstack.org/wiki/Translations

Now some exceptions use _() and some do not. The wiki suggests not doing
that, but I'm not sure.


What is the correct way?


F.Y.I


   What To Translate

At present the convention is to translate all user-facing strings. This
means API messages, CLI responses, documentation, help text, etc.


There has been a lack of consensus about the translation of log 
messages; the current ruling is that while it is not against policy to 
mark log messages for translation if your project feels strongly about 
it, translating log messages is not actively encouraged.


Exception text should not be marked for translation, because if an
exception occurs there is no guarantee that the translation machinery 
will be functional.
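As a minimal sketch of that convention (messages and names are illustrative,
not taken from real nova code):

  import logging
  # Sketch only; in real nova code the "_" would come from the project's
  # gettextutils module rather than stdlib gettext.
  from gettext import gettext as _

  LOG = logging.getLogger(__name__)


  def describe_flavor(flavor_id, flavors):
      """Return a user-facing description of a flavor."""
      if flavor_id not in flavors:
          # Exception text: per the wiki excerpt above, not marked.
          raise KeyError("flavor %s is not defined" % flavor_id)
      # Log message: marking is optional and not actively encouraged.
      LOG.debug("describing flavor %s", flavor_id)
      # User-facing API/CLI response: marked for translation.
      return _("Flavor %s is available.") % flavor_id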




Regards
Yongli He

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-16 Thread yongli he

On 2014-01-16 08:28, Ian Wells wrote:
To clarify a couple of Robert's points, since we had a conversation 
earlier:
On 15 January 2014 23:47, Robert Li (baoli) > wrote:


---  do we agree that BDF address (or device id, whatever you call
it), and node id shouldn't be used as attributes in defining a PCI
flavor?


Note that the current spec doesn't actually exclude it as an option.  
It's just an unwise thing to do.  In theory, you could elect to define 
your flavors using the BDF attribute but determining 'the card in this 
slot is equivalent to all the other cards in the same slot in other 
machines' is probably not the best idea...  We could lock it out as an 
option or we could just assume that administrators wouldn't be daft 
enough to try.


  * the compute node needs to know the PCI flavor. [...]
  - to support live migration, we need to use it
to create network xml


I didn't understand this at first and it took me a while to get what 
Robert meant here.


This is based on Robert's current code for macvtap based live 
migration.  The issue is that if you wish to migrate a VM and it's 
tied to a physical interface, you can't guarantee that the same 
physical interface is going to be used on the target machine, but at 
the same time you can't change the libvirt.xml as it comes over with 
the migrating machine.  The answer is to define a network and refer 
out to it from libvirt.xml.  In Robert's current code he's using the 
group name of the PCI devices to create a network containing the list 
of equivalent devices (those in the group) that can be macvtapped.  
Thus when the host migrates it will find another, equivalent, 
interface. This falls over in the use case under consideration where a
device can be mapped using more than one flavor, so we have to discard the
use case or rethink the implementation.

But with the flavor we defined, the group could be a tag for this purpose,
and all of Robert's design would still work, so it is OK, right?


There's a more complex solution - I think - where we create a 
temporary network for each macvtap interface a machine's going to use, 
with a name based on the instance UUID and port number, and containing 
the device to map. Before starting the migration we would create a 
replacement network containing only the new device on the target host; 
migration would find the network from the name in the libvirt.xml, and 
the content of that network would behave identically.  We'd be 
creating libvirt networks on the fly and a lot more of them, and we'd 
need decent cleanup code too ('when freeing a PCI device, delete any 
network it's a member of'), so it all becomes a lot more hairy.

--
Ian.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2014-01-09 Thread yongli he

On 2014-01-10 00:49, Robert Li (baoli) wrote:


Hi Folks,


Hi, all

Basically I favor the PCI-flavor style and am against mixing this into the
whitelist. Please see my inline comments.





With John joining the IRC, so far, we had a couple of productive 
meetings in an effort to come to consensus and move forward. Thanks 
John for doing that, and I appreciate everyone's effort to make it to 
the daily meeting. Let's reconvene on Monday.


But before that, and based on our today's conversation on IRC, I'd 
like to say a few things. I think that first of all, we need to get 
agreement on the terminologies that we are using so far. With the 
current nova PCI passthrough


PCI whitelist: defines all the available PCI passthrough 
devices on a compute node. pci_passthrough_whitelist=[{ 
"vendor_id":"","product_id":""}]
PCI Alias: criteria defined on the controller node with which 
requested PCI passthrough devices can be selected from all the PCI 
passthrough devices available in a cloud.
Currently it has the following format: 
pci_alias={"vendor_id":"", "product_id":"", "name":"str"}
nova flavor extra_specs: request for PCI passthrough devices 
can be specified with extra_specs in the format for 
example:"pci_passthrough:alias"="name:count"


As you can see, currently a PCI alias has a name and is defined on the 
controller. The implications for it is that when matching it against 
the PCI devices, it has to match the vendor_id and product_id against 
all the available PCI devices until one is found. The name is only 
used for reference in the extra_specs. On the other hand, the 
whitelist is basically the same as the alias without a name.


What we have discussed so far is based on something called PCI groups 
(or PCI flavors as Yongli puts it). Without introducing other 
complexities, and with a little change of the above representation, we 
will have something like:
pci_passthrough_whitelist=[{ "vendor_id":"","product_id":"", 
"name":"str"}]


By doing so, we eliminated the PCI alias. And we call the "name" in 
above as a PCI group name. You can think of it as combining the 
definitions of the existing whitelist and PCI alias. And believe it or 
not, a PCI group is actually a PCI alias.

The whitelist configuration is mostly local to a host, so keeping only
addresses in there, as in John's proposal, is good. Mixing the group into
the whitelist means we make a global thing per-host, which is probably wrong.

However, with that change of thinking, a lot of benefits can be harvested:

 * the implementation is significantly simplified

But it is more messy; please refer to my new patches already sent out.

 * provisioning is simplified by eliminating the PCI alias
A PCI alias provides a good way to define a globally referenceable name for
PCI devices; we need this. The same is true for John's pci-flavor.
 * a compute node only needs to report stats with something 
like: PCI group name:count. A compute node processes all the PCI 
passthrough devices against the whitelist, and assign a PCI group 
based on the whitelist definition.
Simplifying this seems good, but it does not really simplify things;
separating the local and global parts is the natural simplification.
 * on the controller, we may only need to define the PCI group 
names. if we use a nova api to define PCI groups (could be private or 
public, for example), one potential benefit, among other things 
(validation, etc),  they can be owned by the tenant that creates them. 
And thus a wholesale of PCI passthrough devices is also possible.
This means you would have to consult the controller to deploy your host; if
we keep the whitelist local, we simplify deployment.

 * scheduler only works with PCI group names.
 * request for PCI passthrough device is based on PCI-group
 * deployers can provision the cloud based on the PCI groups
 * Particularly for SRIOV, deployers can design SRIOV PCI 
groups based on network connectivities.


Further, to support SRIOV, we are saying that PCI group names not only 
can be used in the extra specs, it can also be used in the —nic option 
and the neutron commands. This allows the most flexibilities and 
functionalities afforded by SRIOV.

I still feel that using the alias/PCI flavor is the better solution.


Further, we are saying that we can define default PCI groups based on 
the PCI device's class.
Default grouping makes our conceptual model more messy; pre-defining a
global thing in the API and hard-coding it is not a good way, so I posted a
-2 for this.


For vnic-type (or nic-type), we are saying that it defines the link 
characteristics of the nic that is attached to a VM: a nic that's 
connected to a virtual switch, a nic that is connected to a physical 
switch, or a nic that is connected to a physical switch, but has a 
host macvtap device in between. The actual names of the choices are 
not important here, and can be debated.


I'm hoping that we can go over the above on Monday. But any c

Re: [openstack-dev] [nova] [neutron] Todays' meeting log: PCI pass-through network support

2013-12-23 Thread yongli he

On 2013-12-24 07:35, Ian Wells wrote:
On autodiscovery and configuration, we agree that each compute node 
finds out what it has based on some sort of list of match expressions; 
we just disagree on where they should live.

I think what we are talking about here is group/class auto discovery.


I know we've talked APIs for setting that matching expression, but I 
would prefer that compute nodes are responsible for their own physical 
configuration - generally this seems wiser on the grounds that 
configuring new hardware correctly is a devops problem and this pushes 
the problem into the installer, clear devops territory.  It also makes 
the (I think likely) assumption that the config may differ per compute 
node without having to add more complexity to the API with host 
aggregates and so on.  And it means that a compute node can start 
working without consulting the central database or reporting its 
entire device list back to the central controller.

Let's wait for the Nova cores' comments on this.


On PCI groups, I think it is a good idea to have them declared 
centrally (their name, not their content).  Now, I would use config to 
define them and maybe an API for the tenant to list their names, 
personally; that's simpler and easier to implement and doesn't 
preclude adding an (admin) API in the future.  But I don't imagine the 
list of groups will change frequently so any update API would be very 
infrequently used, and if someone really feels they want to implement 
it I'm not going to stop them.
If you only set up a name for the group, how about the current PCI alias?
We don't need to create new terminology for this, and an alias can be used
to specify groups. But we want to kill the alias, so it seems we come back
to our earlier discussion.


On nova boot, I completely agree that we need a new argument to --nic 
to specify the PCI group of the NIC.  The rest of the arguments - I'm 
wondering if we could perhaps do this in two stages:

agree.

1. Neutron will read those arguments (attachment type, additional 
stuff like port group where relevant) from the port during an attach 
and pass relevant information to the plugging driver in Nova
2. We add a feature to nova so that you can specify other properties 
in the --nic section line and they're passed straight to the 
port-create called from within nova.


This is not specific to passthrough at all, just a useful general 
purpose feature.  However, it would simplify both the problem and 
design here, because these parameters, whatever they are, are now 
entirely the responsibility of Neutron and Nova's simply transporting 
them into it.  A PCI aware Neutron will presumably understand the 
attachment type, the port group and so on, or will reject them if 
they're meaningless to it, and we've even got room for future 
expansion without changing Nova or Neutron, just the plugin.  We can 
propose it now and independently, put in a patch and have it ready 
before we need it.  I think anything that helps to clarify and divide 
the responsibilities of Nova and Neutron will be helpful, because then 
we don't end up with too many cross-project-interrelated patches.


I'm going to ignore the allocation problem for now.  If a single user 
can allocate all the NICs in the cluster to himself, we still have a 
more useful solution than the one now where he can't use them, so it's 
not the top of our list.



Time seems to be running out for Icehouse. We need to come to
agreement ASAP. I will be out from wednesday until after new year.
I'm thinking that to move it forward after the new year, we may
need to have the IRC meeting in a daily basis until we
reach agreement. This should be one of our new year's resolutions?


Whatever gets it done.
--
Ian.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Infra] Support for PCI Passthrough

2013-11-28 Thread yongli he

On 2013-11-27 23:43, Jeremy Stanley wrote:

On 2013-11-27 11:18:46 +0800 (+0800), yongli he wrote:
[...]

if you post a -1, you should post the testing log somewhere for people
to debug it, so can third-party testing post its testing logs to
the infra log server?

Not at the moment--the "infra log server" is just an Apache
name-based virtual host on the static.openstack.org VM using
mod_autoindex to serve log files out of the DocumentRoot (plus a
custom filter CGI Sean Dague wrote recently), and our Jenkins has a
shell account it can use to SCP files onto it. We can't really scale
that access control particularly safely to accommodate third
parties, nor do we have an unlimited amount of space on that machine
(we currently only preserve 6 months of test logs, and even
compressing the limit on how much Cinder block storage we can attach
to the VM is coming into sight).

There has been recent discussion about designing a more scalable
build/test artifact publication system backed by Swift object
storage, and suggestion that once it's working we might consider
support for handing out authorization to third-party-specific
containers for the purpose you describe. Until we have developed
something like that, however, you'll need to provide your own place

This needs approval from my supervisor or IT; I cannot do anything about this myself.
Has anyone heard of any free space that could host such a thing?

Yongli He

to publish your logs (something like we use--bog standard Apache on
a public VM--should work fine I'd think?).



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Infra] Support for PCI Passthrough

2013-11-26 Thread yongli he

On 2013-11-23 03:43, Jeremy Stanley wrote:
Hi, Jeremy

For now, we need to set this up ASAP, so third-party testing seems the
right way, but I have some concerns.


If you post a -1, you should post the testing log somewhere for people to
debug it, so can third-party testing post its testing logs to the infra
log server?


Yongli h...@intel.com

On 2013-11-22 08:59:16 + (+), Tan, Lin wrote:
[...]

Our module only works on the compute node that enables VT-d and
contains special PCIs which support the SR-IOV.

So is it possible to

1. setup compute node which enables pci passthrough.

2. modify the testing schedule logic allow the pci testing case
be scheduled to that machine

[...]

If you're asking about our official test infrastructure for the
OpenStack project, I believe this is infeasible for now. We
currently perform testing within generic virtual machines provided
by HPCloud and Rackspace, so the Nova compute nodes we build and
test are already running under virtualization and in turn manage
only paravirtualized QEMU instances.

In the near term, your best bet is to run your own test
infrastructure supporting the hardware features you require and
report advisory results back on proposed changes:

 http://ci.openstack.org/third_party.html

For a longer term solution, you may want to consult with the TripleO
project with regards to their bare-metal test plan:

 https://wiki.openstack.org/wiki/TripleO/TripleOCloud




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] PCI next step blue print

2013-11-25 Thread yongli he

Hi, John

You mentioned the summit discussion and PCI next-step work in this blueprint:
https://blueprints.launchpad.net/nova/+spec/pci-api-support

This BP provides a basic API for what we have already done:
https://wiki.openstack.org/wiki/Pci-api-support


We also proposed another BP for the PCI next step, including the whitelist API
we discussed at the summit:

https://blueprints.launchpad.net/nova/+spec/pci-extra-info

So we think the first BP can be treated separately; what do you think?


And we have set up some docs for use cases and design:
PCI next-step design:
https://wiki.openstack.org/wiki/PCI_passthrough_SRIOV_support


Use case discussion:
https://docs.google.com/document/d/1EMwDg9J8zOxzvTnQJ9HwZdiotaVstFWKIuKrPse6JOs/edit#heading=h.30de7p6sgoxp



Yongli He (Pauli He) @intel.com





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [Infra] Support for PCI Passthrough

2013-11-24 Thread yongli he

On 2013-11-25 02:13, Robert Collins wrote:

On 23 November 2013 08:43, Jeremy Stanley  wrote:

On 2013-11-22 08:59:16 + (+), Tan, Lin wrote:
[...]
In the near term, your best bet is to run your own test
infrastructure supporting the hardware features you require and
report advisory results back on proposed changes:

 http://ci.openstack.org/third_party.html

For a longer term solution, you may want to consult with the TripleO
project with regards to their bare-metal test plan:

 https://wiki.openstack.org/wiki/TripleO/TripleOCloud

I think using the donated resources to perform this sort of testing is
an ideal example of the value the TripleO cloud can bring to OpenStack
as a whole.

I don't know if we have the necessary hardware (I'm fairly sure we
have VT-d, but I'm not 100% sure we have anything setup for SR-IOV. If
we do, then cool - please come and work with us to get that testing
what you need.

A key consideration will be whether you want checking or gating. For
gating or infra run checking there need to be two regions (which the

We want both checking and gating; we definitely should put
effort into it. It seems a fairly straightforward solution for such
testing.

Yongli He(Pauli He)

TripleO cloud is aiming at) and infra running the tests; for checking
without infra running it the third-party system is a good mechanism
(and that can be run from a single TripleO region too, in principle.

-Rob




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-28 Thread yongli he

On 2013-10-27 15:48, Irena Berezovsky wrote:


Hi Robert,

Thank you very much for sharing the information regarding your 
efforts. Can you please share your idea of the end to end flow? How do 
you suggest  to bind Nova and Neutron?


The blueprints you registered make sense. On Nova side, there is a 
need to bind between requested virtual network and PCI 
device/interface to be allocated as vNIC.


On the Neutron side, there is a need to  support networking 
configuration of the vNIC. Neutron should be able to identify the PCI 
device/macvtap interface in order to apply configuration. I think it 
makes sense to provide neutron integration via dedicated Modular Layer 
2 Mechanism Driver to allow PCI pass-through vNIC support along with 
other networking technologies.


During the Havana Release, we introduced Mellanox Neutron plugin that 
enables networking via SRIOV pass-through devices or macvtap interfaces.



Hi, Irena & Robert

I'm very interested in your work on the Mellanox Neutron plugin, which
enables SR-IOV devices or macvtap interfaces. Could you provide more
information about it: BPs, patches, the current workflow, and what is
expected from Nova PCI passthrough? Then, together with Robert's
requirements and discussion, I can understand in more detail what is
expected from Nova PCI and what PCI passthrough should become next.


In the current state I see:

a) fine-grained classification of devices by auto discovery and request
   1) enable the whitelist to specify the address (a rough sketch follows
      this list)
   2) enable the whitelist to append group info (IN/OUT/... anything)
   3) enable the PCI request to append more information into the extra info
      I need input here: what is it? Even though PCI does not care about the
      extra info, being clear is better.

      i.e. Robert's:
       . direct pci-passthrough/macvtap
       . port profile

b) extra-info-aware allocation ('feature pci' by Robert)
   <https://launchpad.net/%7Ebaoli>

   1) have an API and code-level interface to access the extra info
   2) scheduler awareness of the extra info and/or device type, so vNICs can
      be differentiated
   3) boot/interface-attach APIs: an API interface to convert Neutron NIC
      info to a PCI request, i.e.:

      from binding:capabilities / binding:profile to
         PCI alias (request) /
         direct pci-passthrough/macvtap (does it need to be stored in the
         PCI device extra info?)
         port profile (does it need to be stored in the PCI device extra
         info?)
   4) scheduler enhancement to meet the NIC requirements
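A rough sketch of what such a whitelist entry could look like (the "address"
field and the "group" key inside extra_info are only illustrative proposals
here, not an existing interface):

  pci_passthrough_whitelist=[{"address":"0000:08:00.0",
                              "vendor_id":"8086", "product_id":"10c9",
                              "extra_info":{"group":"sriov_net_a"}}]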

Yongli He@intel

We want to integrate our solution with PCI pass-through Nova support. 
 I will be glad to share more details if you are interested.


The PCI pass-through networking support is planned to be discussed 
during the summit: http://summit.openstack.org/cfp/details/129. I 
think it's worth to drill down into more detailed proposal and present 
it during the summit, especially since it impacts both nova and 
neutron projects.


Would you be interested in collaboration on this effort? Would you be 
interested to exchange more emails or set an IRC/WebEx meeting during 
this week before the summit?


Regards,

Irena

*From:*Robert Li (baoli) [mailto:ba...@cisco.com]
*Sent:* Friday, October 25, 2013 11:16 PM
*To:* prashant.upadhy...@aricent.com; Irena Berezovsky; 
yunhong.ji...@intel.com; chris.frie...@windriver.com; yongli...@intel.com
*Cc:* OpenStack Development Mailing List; Brian Bowen (brbowen); Kyle 
Mestery (kmestery); Sandhya Dasu (sadasu)
*Subject:* Re: [openstack-dev] [nova] [neutron] PCI pass-through 
network support


Hi Irena,

This is Robert Li from Cisco Systems. Recently, I was tasked to 
investigate such support for Cisco's systems that support VM-FEX, 
which is a SRIOV technology supporting 802-1Qbh. I was able to bring 
up nova instances with SRIOV interfaces, and establish networking in 
between the instances that employes the SRIOV interfaces. Certainly, 
this was accomplished with hacking and some manual intervention. Based 
on this experience and my study with the two existing nova 
pci-passthrough blueprints that have been implemented and committed 
into Havana 
(https://blueprints.launchpad.net/nova/+spec/pci-passthrough-base and
https://blueprints.launchpad.net/nova/+spec/pci-passthrough-libvirt), 
 I registered a couple of blueprints (one on Nova side, the other on 
the Neutron side):


https://blueprints.launchpad.net/nova/+spec/pci-passthrough-sriov

https://blueprints.launchpad.net/neutron/+spec/pci-passthrough-sriov

in order to address SRIOV support in openstack.

Please take a look at them and see if they make sense, and let me know 
any comments and questions. We can also discuss this in the summit, I 
suppose.


I noticed that there is another thread on this topic, so copy those 
folks  from that thread as well.


thanks,

Robert

On 10/16/13 4:32 PM, "Irena Berezovsky" <mailto:ire...@mellanox.com>> wrote:


Hi,

As one of the next steps for PCI pass-through I would like to
discuss is the support for PC

Re: [openstack-dev] [nova] [neutron] PCI pass-through network support

2013-10-28 Thread yongli he

On 2013-10-29 03:22, Robert Li (baoli) wrote:

Hi Irena,

Thank you very much for your comments. See inline.

--Robert

On 10/27/13 3:48 AM, "Irena Berezovsky" > wrote:


Hi Robert,

Thank you very much for sharing the information regarding your
efforts. Can you please share your idea of the end to end flow?
How do you suggest  to bind Nova and Neutron?


The end to end flow is actually encompassed in the blueprints in a 
nutshell. I will reiterate it in below. The binding between Nova and 
Neutron occurs with the neutron v2 API that nova invokes in order to 
provision the neutron services. The vif driver is responsible for 
plugging in an instance onto the networking setup that neutron has 
created on the host.


Normally, one will invoke "nova boot" api with the —nic options to 
specify the nic with which the instance will be connected to the 
network. It currently allows net-id, fixed ip and/or port-id to be 
specified for the option. However, it doesn't allow one to specify 
special networking requirements for the instance. Thanks to the nova 
pci-passthrough work, one can specify PCI passthrough device(s) in the 
nova flavor. But it doesn't provide means to tie up these PCI devices 
in the case of ethernet adpators with networking services. Therefore 
the idea is actually simple as indicated by the blueprint titles, to 
provide means to tie up SRIOV devices with neutron services. A work 
flow would roughly look like this for 'nova boot':


  -- Specifies networking requirements in the —nic option. 
Specifically for SRIOV, allow the following to be specified in 
addition to the existing required information:

   . PCI alias
   . direct pci-passthrough/macvtap
   . port profileid that is compliant with 802.1Qbh
The above information is optional. In the absence of them, the 
existing behavior remains.
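As a sketch of how such a request might look on the command line (the option
names after net-id are purely illustrative of this proposal, not an existing
nova interface):

  nova boot --flavor m1.large --image <image-id> \
      --nic net-id=<net-uuid>,pci-alias=a1,vnic-type=direct,port-profile=my-profile \
      sriov-vm1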


 -- if special networking requirements exist, Nova api creates PCI 
requests in the nova instance type for scheduling purpose


 -- Nova scheduler schedules the instance based on the requested 
flavor plus the PCI requests that are created for networking.


 -- Nova compute invokes neutron services with PCI passthrough 
information if any


 --  Neutron performs its normal operations based on the request, 
such as allocating a port, assigning ip addresses, etc. Specific to 
SRIOV, it should validate the information such as profileid, and 
stores them in its db. It's also possible to associate a port 
profileid with a neutron network so that port profileid becomes 
optional in the —nic option. Neutron returns  nova the port 
information, especially for PCI passthrough related information in the 
port binding object. Currently, the port binding object contains the 
following information:

  binding:vif_type
  binding:host_id
  binding:profile
  binding:capabilities
(OpenStack bounced my mail for sending to too many people at one time, so I
removed the CC & To lists; I hope everyone can still see this.)


I have heard of several NIC passthrough solutions in summary, and you
mentioned this. At a high level, to implement NIC passthrough there is:

  hardware VEB (virtual Ethernet switch in the NIC)
  or the NIC needs an external switch, e.g. 802.1Qbg

So the questions are:
  where does the "which type" information live?
  does 802.1Qbg need to know which port the PF is connected to?


-- nova constructs the domain xml and plug in the instance by 
calling the vif driver. The vif driver can build up the interface xml 
based on the port binding information.
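To make the shape of that binding concrete, a hypothetical example of what a
port binding for an SR-IOV port might carry (the field names come from the
list above; all values and profile keys are illustrative only, not an agreed
format):

  port_binding = {
      "binding:vif_type": "hostdev",            # illustrative value
      "binding:host_id": "compute-node-1",
      "binding:profile": {
          "pci_slot": "0000:08:10.2",           # VF address chosen by nova
          "port_profileid": "my-8021qbh-profile",
      },
      "binding:capabilities": {"port_filter": False},
  }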




The blueprints you registered make sense. On Nova side, there is a
need to bind between requested virtual network and PCI
device/interface to be allocated as vNIC.

On the Neutron side, there is a need to  support networking
configuration of the vNIC. Neutron should be able to identify the
PCI device/macvtap interface in order to apply configuration. I
think it makes sense to provide neutron integration via dedicated
Modular Layer 2 Mechanism Driver to allow PCI pass-through vNIC
support along with other networking technologies.


I haven't sorted through this yet. A neutron port could be associated 
with a PCI device or not, which is a common feature, IMHO. However, a 
ML2 driver may be needed specific to a particular SRIOV technology.


During the Havana Release, we introduced Mellanox Neutron plugin
that enables networking via SRIOV pass-through devices or macvtap
interfaces.

We want to integrate our solution with PCI pass-through Nova
support.  I will be glad to share more details if you are interested.


Good to know that you already have a SRIOV implementation. I found out 
some information online about the mlnx plugin, but need more time to 
get to know it better. And certainly I'm interested in knowing its 
details.


The PCI pass-through networking support is planned to be discu

Re: [openstack-dev] [nova] [Pci passthrough] bug? -- 'NoneType' object has no attribute 'support_requests'

2013-09-22 Thread yongli he

On 2013-09-17 05:07, David Kang wrote:
Hi, David

This should be fixed.

  Hi,

  I'm testing PCI passthrough features on Havana (single node installation).
I've installed OpenStack on CentOS 6.4 using EPEL.
The pci_passthrough_filter doesn't seem to be able to get the object 
'host_state.pci_stats'.
Is it a bug?

  Thanks,
  David

  Here is the information of the test environment:

1. /etc/nova.conf

pci_alias={"name":"test", "product_id":"7190", "vendor_id":"8086"}
pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"7190"}]


scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_available_filters=nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,PciPassthroughFilter

2. flavor

# nova flavor-list --extra-specs
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+---------------------------------------+
| ID | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs                           |
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+---------------------------------------+
| 1  | m1.tiny | 512       | 1    | 0         |      | 1     | 1.0         | True      | {u'pci_passthrough:alias': u'test:1'} |
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+---------------------------------------+


3. RPM information of nova-scheduler:
Name: openstack-nova-scheduler
Arch: noarch
Version : 2013.2
Release : 0.19.b3.el6
Size: 2.3 k
Repo: installed
From repo   : openstack-havana


4. /var/log/nova/scheduler.log

2013-09-16 17:04:51.259 13088 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') _load_plugins 
/usr/lib/python2.6/site-packages/stevedore/extension.py:70
2013-09-16 17:04:51.259 13088 DEBUG stevedore.extension [-] found extension 
EntryPoint.parse('file = nova.image.download.file') _load_plugins 
/usr/lib/python2.6/site-packages/stevedore/extension.py:70
2013-09-16 17:04:51.259 13088 WARNING nova.scheduler.utils 
[req-267b2f38-825f-4609-82ef-6d4164e227b1 8ace6a952a0f4a9d81c435a2c8194fe9 
656fecdc92df43c2a047316e5a1e3a24] [instance: 
9a7e57e1-8c6f-4a18-94f0-c406aae99f9a] Setting instance to ERROR state.
2013-09-16 17:04:51.398 13088 ERROR nova.openstack.common.rpc.amqp 
[req-267b2f38-825f-4609-82ef-6d4164e227b1 8ace6a952a0f4a9d81c435a2c8194fe9 
656fecdc92df43c2a047316e5a1e3a24] Exception during message handling
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp Traceback 
(most recent call last):
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, 
in _process_data
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp **args)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", 
line 172, in dispatch
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp result = 
getattr(proxyobj, method)(ctxt, **kwargs)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/scheduler/manager.py", line 160, in 
run_instance
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp context, 
ex, request_spec)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/scheduler/manager.py", line 147, in 
run_instance
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp 
legacy_bdm_in_spec)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py", line 87, 
in schedule_run_instance
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp 
filter_properties, instance_uuids)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py", line 
336, in _schedule
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp 
filter_properties, index=num)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/scheduler/host_manager.py", line 397, in 
get_filtered_hosts
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp hosts, 
filter_properties, index)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/filters.py", line 82, in 
get_filtered_objects
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp 
list_objs = list(objs)
2013-09-16 17:04:51.398 13088 TRACE nova.openstack.common.rpc.amqp   File 
"/usr/lib/python2.6/site-packages/nova/filters.py", line 43, in filter_all
2013-09-16 17:04:

Re: [openstack-dev] [nova] [pci passthrough] Is "extra_info" broken?

2013-09-22 Thread yongli he

On 2013-09-21 05:05, David Kang wrote:

- Original Message -

From: "Russell Bryant" 
To: openstack-dev@lists.openstack.org
Sent: Friday, September 20, 2013 1:28:13 PM
Subject: Re: [openstack-dev] [nova] [pci passthrough] Is "extra_info" broken?
https://bugs.launchpad.net/nova/+bug/1223559

This should be already fixed. Make sure you're using a version new
enough to have the fix in it.

  I've already patched the following two bug fixes.

https://review.openstack.org/#/c/46690/
https://review.openstack.org/#/c/46464/

  And the error that I have now is different from the previous ones.
Previous bug happens when extra_info is not specified in the pci_whitelist flag
in the nova.conf file.
Now, I specified extra_info something like this (for test):

pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"100f","extra_info": 
{"path":"/dev/sda"}}]

Hi, David Kang

Thanks for using PCI passthrough. At the moment extra_info is only used to
store a VF's "PF" info.

For this version, extra_info is not fully ready for easy use.

Would you please add your 'wish list' to this bug? That helps to track
custom PCI requirements:

https://bugs.launchpad.net/nova/+bug/1222990
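For illustration, the kind of thing extra_info ends up holding for a VF is
roughly (the key name here is a guess at the shape, not a stable interface):

  {"phys_function": "0000:08:00.0"}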








Then, I got error.
Is the error due to my misuse of extra_info or a bug?

The log in /var/log/nova/compute.log says:

2013-09-20 14:00:53.203 7292 CRITICAL nova [-] Unacceptable parameters.
2013-09-20 14:00:53.203 7292 TRACE nova Traceback (most recent call last):
2013-09-20 14:00:53.203 7292 TRACE nova   File "/usr/bin/nova-compute", line 10, in 

2013-09-20 14:00:53.203 7292 TRACE nova sys.exit(main())
2013-09-20 14:00:53.203 7292 TRACE nova   File 
"/usr/lib/python2.6/site-packages/nova/cmd/compute.py", line 68, in main
2013-09-20 14:00:53.203 7292 TRACE nova db_allowed=False)
2013-09-20 14:00:53.203 7292 TRACE nova   File 
"/usr/lib/python2.6/site-packages/nova/service.py", line 257, in create
2013-09-20 14:00:53.203 7292 TRACE nova db_allowed=db_allowed)
2013-09-20 14:00:53.203 7292 TRACE nova   File 
"/usr/lib/python2.6/site-packages/nova/service.py", line 139, in __init__
2013-09-20 14:00:53.203 7292 TRACE nova self.manager = 
manager_class(host=self.host, *args, **kwargs)
2013-09-20 14:00:53.203 7292 TRACE nova   File 
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 450, in 
__init__
2013-09-20 14:00:53.203 7292 TRACE nova self.driver = 
driver.load_compute_driver(self.virtapi, compute_driver)
2013-09-20 14:00:53.203 7292 TRACE nova   File 
"/usr/lib/python2.6/site-packages/nova/virt/driver.py", line 1106, in 
load_compute_driver
2013-09-20 14:00:53.203 7292 TRACE nova virtapi)
2013-09-20 14:00:53.203 7292 TRACE nova   File 
"/usr/lib/python2.6/site-packages/nova/openstack/common/importutils.py", line 
52, in import_object_ns
2013-09-20 14:00:53.203 7292 TRACE nova return 
import_class(import_value)(*args, **kwargs)
2013-09-20 14:00:53.203 7292 TRACE nova   File 
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 337, in 
__init__
2013-09-20 14:00:53.203 7292 TRACE nova self.dev_filter = 
pci_whitelist.get_pci_devices_filter()
2013-09-20 14:00:53.203 7292 TRACE nova   File 
"/usr/lib/python2.6/site-packages/nova/pci/pci_whitelist.py", line 117, in 
get_pci_devices_filter
2013-09-20 14:00:53.203 7292 TRACE nova return 
PciHostDevicesWhiteList(CONF.pci_passthrough_whitelist)
2013-09-20 14:00:53.203 7292 TRACE nova   File 
"/usr/lib/python2.6/site-packages/nova/pci/pci_whitelist.py", line 102, in 
__init__
2013-09-20 14:00:53.203 7292 TRACE nova self.spec = 
self._parse_white_list_from_config(whitelist_spec)
2013-09-20 14:00:53.203 7292 TRACE nova   File 
"/usr/lib/python2.6/site-packages/nova/pci/pci_whitelist.py", line 84, in 
_parse_white_list_from_config
2013-09-20 14:00:53.203 7292 TRACE nova raise 
exception.PciConfigInvalidWhitelist(reason=str(e))
2013-09-20 14:00:53.203 7292 TRACE nova PciConfigInvalidWhitelist: Unacceptable 
parameters.
2013-09-20 14:00:53.203 7292 TRACE nova

  Thanks,
  David



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [pci device passthrough] fails with "NameError: global name '_' is not defined"

2013-09-13 Thread yongli he

On 2013-09-11 21:27, Henry Gessau wrote:

For the "TypeError: expected string or buffer" I have filed Bug #1223874.


On Wed, Sep 11, at 7:41 am, yongli he  wrote:


On 2013-09-11 05:38, David Kang wrote:

- Original Message -

From: "Russell Bryant" 
To: "David Kang" 
Cc: "OpenStack Development Mailing List" 
Sent: Tuesday, September 10, 2013 5:17:15 PM
Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails with "NameError: 
global name '_' is not defined"
On 09/10/2013 05:03 PM, David Kang wrote:

- Original Message -

From: "Russell Bryant" 
To: "OpenStack Development Mailing List"

Cc: "David Kang" 
Sent: Tuesday, September 10, 2013 4:42:41 PM
Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails
with "NameError: global name '_' is not defined"
On 09/10/2013 03:56 PM, David Kang wrote:

   Hi,

I'm trying to test pci device passthrough feature.
Havana3 is installed using Packstack on CentOS 6.4.
Nova-compute dies right after start with error "NameError: global
name '_' is not defined".
I'm not sure if it is due to misconfiguration of nova.conf or bug.
Any help will be appreciated.

Here is the info:

/etc/nova/nova.conf:
pci_alias={"name":"test", "product_id":"7190", "vendor_id":"8086",
"device_type":"ACCEL"}

pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"7190"}]

   With that configuration, nova-compute fails with the following
   log:

File
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py",
line 461, in _process_data
  **args)

File
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py",
line 172, in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)

File
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py",
line 567, in object_action
  result = getattr(objinst, objmethod)(context, *args, **kwargs)

File "/usr/lib/python2.6/site-packages/nova/objects/base.py",
line
141, in wrapper
  return fn(self, ctxt, *args, **kwargs)

File
"/usr/lib/python2.6/site-packages/nova/objects/pci_device.py",
line 242, in save
  self._from_db_object(context, self, db_pci)

NameError: global name '_' is not defined
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup Traceback (most recent call
last):
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py",
line 117, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup x.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py",
line 49, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.thread.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line
166, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self._exit_event.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in
wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return hubs.get_hub().switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177,
in switch
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.greenlet.switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line
192, in main
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup result = function(*args,
**kwargs)
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/service.py",
line 65, in run_service
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup service.start()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/service.py", line 164, in
start
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup self.manager.pre_start_hook()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line
805, in pre_start_hook
2013-09-10 12:52:23.774 14749 TRACE
nova.op

Re: [openstack-dev] [nova] [pci device passthrough] fails with "NameError: global name '_' is not defined"

2013-09-11 Thread yongli he

On 2013-09-11 21:27, Henry Gessau wrote:

For the "TypeError: expected string or buffer" I have filed Bug #1223874.

Got it, thanks.



On Wed, Sep 11, at 7:41 am, yongli he  wrote:


On 2013-09-11 05:38, David Kang wrote:

- Original Message -

From: "Russell Bryant" 
To: "David Kang" 
Cc: "OpenStack Development Mailing List" 
Sent: Tuesday, September 10, 2013 5:17:15 PM
Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails with "NameError: 
global name '_' is not defined"
On 09/10/2013 05:03 PM, David Kang wrote:

- Original Message -

From: "Russell Bryant" 
To: "OpenStack Development Mailing List"

Cc: "David Kang" 
Sent: Tuesday, September 10, 2013 4:42:41 PM
Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails
with "NameError: global name '_' is not defined"
On 09/10/2013 03:56 PM, David Kang wrote:

   Hi,

I'm trying to test pci device passthrough feature.
Havana3 is installed using Packstack on CentOS 6.4.
Nova-compute dies right after start with error "NameError: global
name '_' is not defined".
I'm not sure if it is due to misconfiguration of nova.conf or bug.
Any help will be appreciated.

Here is the info:

/etc/nova/nova.conf:
pci_alias={"name":"test", "product_id":"7190", "vendor_id":"8086",
"device_type":"ACCEL"}

pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"7190"}]

   With that configuration, nova-compute fails with the following
   log:

File
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py",
line 461, in _process_data
  **args)

File
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py",
line 172, in dispatch
  result = getattr(proxyobj, method)(ctxt, **kwargs)

File
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py",
line 567, in object_action
  result = getattr(objinst, objmethod)(context, *args, **kwargs)

File "/usr/lib/python2.6/site-packages/nova/objects/base.py",
line
141, in wrapper
  return fn(self, ctxt, *args, **kwargs)

File
"/usr/lib/python2.6/site-packages/nova/objects/pci_device.py",
line 242, in save
  self._from_db_object(context, self, db_pci)

NameError: global name '_' is not defined
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup Traceback (most recent call
last):
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py",
line 117, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup x.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py",
line 49, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.thread.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line
166, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self._exit_event.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in
wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return hubs.get_hub().switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177,
in switch
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.greenlet.switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line
192, in main
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup result = function(*args,
**kwargs)
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/service.py",
line 65, in run_service
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup service.start()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/service.py", line 164, in
start
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup self.manager.pre_start_hook()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line
805, in pre_start_hook
2013-09-10 12:52:23.774 14749 TRACE
nova.op

Re: [openstack-dev] [nova] [pci device passthrough] fails with "NameError: global name '_' is not defined"

2013-09-11 Thread yongli he
last):
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py",
line 461, in _process_data
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup **args)
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py",
line 172, in dispatch
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup result = getattr(proxyobj,
method)(ctxt, **kwargs)
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/conductor/manager.py", line
567, in object_action
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup result = getattr(objinst,
objmethod)(context, *args, **kwargs)
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/objects/base.py", line 141,
in wrapper
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup return fn(self, ctxt, *args,
**kwargs)
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/objects/pci_device.py", line
243, in save
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup self._from_db_object(context,
self, db_pci)
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/objects/pci_device.py", line
150, in _from_db_object
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup pci_device.extra_info =
jsonutils.loads(extra_info)
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/jsonutils.py",
line 158, in loads
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup return json.loads(s)
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup File
"/usr/lib64/python2.6/json/__init__.py", line 307, in loads
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup return _default_decoder.decode(s)
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup File
"/usr/lib64/python2.6/json/decoder.py", line 319, in decode
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup obj, end = self.raw_decode(s,
idx=_w(s, 0).end())
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup
2013-09-10 13:56:35.366 16736 TRACE
nova.openstack.common.threadgroup TypeError: expected string or
buffer

Try this:

diff --git a/nova/objects/pci_device.py b/nova/objects/pci_device.py
index a83b8f3..d0a628a 100644
--- a/nova/objects/pci_device.py
+++ b/nova/objects/pci_device.py
@@ -145,7 +145,7 @@ class PciDevice(base.NovaPersistentObject, base.NovaObject):
             if key != 'extra_info':
                 pci_device[key] = db_dev[key]
             else:
-                extra_info = db_dev.get("extra_info")
+                extra_info = db_dev.get("extra_info", '{}')
                 pci_device.extra_info = jsonutils.loads(extra_info)
         pci_device._context = context
         pci_device.obj_reset_changes()


--
Russell Bryant


  The same error happens.
The error message says "TypeError: expected string or buffer".

Hi, David

Could you paste the new trace to the bug (noting that it is with the patch
applied)? I think that is close to the fix.


thanks
Yongli he


  David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [pci device passthrough] fails with "NameError: global name '_' is not defined"

2013-09-11 Thread yongli he

On 2013-09-11 05:17, Russell Bryant wrote:

On 09/10/2013 05:03 PM, David Kang wrote:

- Original Message -

From: "Russell Bryant" 
To: "OpenStack Development Mailing List" 
Cc: "David Kang" 
Sent: Tuesday, September 10, 2013 4:42:41 PM
Subject: Re: [openstack-dev] [nova] [pci device passthrough] fails with "NameError: 
global name '_' is not defined"
On 09/10/2013 03:56 PM, David Kang wrote:

  Hi,

   I'm trying to test pci device passthrough feature.
Havana3 is installed using Packstack on CentOS 6.4.
Nova-compute dies right after start with error "NameError: global
name '_' is not defined".
I'm not sure if it is due to misconfiguration of nova.conf or bug.
Any help will be appreciated.

Here is the info:

/etc/nova/nova.conf:
pci_alias={"name":"test", "product_id":"7190", "vendor_id":"8086",
"device_type":"ACCEL"}

pci_passthrough_whitelist=[{"vendor_id":"8086","product_id":"7190"}]

  With that configuration, nova-compute fails with the following log:

   File
   "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py",
   line 461, in _process_data
 **args)

   File
   "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py",
   line 172, in dispatch
 result = getattr(proxyobj, method)(ctxt, **kwargs)

   File "/usr/lib/python2.6/site-packages/nova/conductor/manager.py",
   line 567, in object_action
 result = getattr(objinst, objmethod)(context, *args, **kwargs)

   File "/usr/lib/python2.6/site-packages/nova/objects/base.py", line
   141, in wrapper
 return fn(self, ctxt, *args, **kwargs)

   File
   "/usr/lib/python2.6/site-packages/nova/objects/pci_device.py",
   line 242, in save
 self._from_db_object(context, self, db_pci)

NameError: global name '_' is not defined
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup Traceback (most recent call last):
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py",
line 117, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup x.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/threadgroup.py",
line 49, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.thread.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line
166, in wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self._exit_event.wait()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/event.py", line 116, in
wait
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return hubs.get_hub().switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 177,
in switch
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return self.greenlet.switch()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/eventlet/greenthread.py", line
192, in main
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup result = function(*args, **kwargs)
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/service.py",
line 65, in run_service
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup service.start()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/service.py", line 164, in
start
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup self.manager.pre_start_hook()
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line
805, in pre_start_hook
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup
self.update_available_resource(nova.context.get_admin_context())
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line
4773, in update_available_resource
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup
rt.update_available_resource(context)
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py",
line 246, in inner
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup return f(*args, **kwargs)
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup File
"/usr/lib/python2.6/site-packages/nova/compute/resource_tracker.py",
line 318, in update_available_resource
2013-09-10 12:52:23.774 14749 TRACE
nova.openstack.common.threadgroup self._sync_compute