[openstack-dev] [Heat] userdata empty when using software deployment/config in Kilo

2015-10-28 Thread Gabe Black
Using my own template or the example template:
https://github.com/openstack/heat-templates/blob/master/hot/software-config/example-templates/example-deploy-sequence.yaml

results in the VM's /var/lib/cloud/instance/scripts/userdata being empty.
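For context, the wiring the linked example uses looks roughly like the following. This is a paraphrased minimal sketch, not the exact template; the image and flavor names are placeholders, and the server must use user_data_format: SOFTWARE_CONFIG for the deployment path to apply:

```yaml
heat_template_version: 2014-10-16

resources:
  config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/sh
        echo "deployed" > /tmp/deployed

  deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: config }
      server: { get_resource: server }

  server:
    type: OS::Nova::Server
    properties:
      image: my-image    # placeholder: an image built with the heat agents
      flavor: m1.small   # placeholder
      user_data_format: SOFTWARE_CONFIG
```

Note that with SOFTWARE_CONFIG the script is delivered through the in-instance agents (os-collect-config / os-refresh-config) rather than as a plain cloud-init user script, so the image needs those agents baked in.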

The only warnings during the cloud-init boot sequence are:
[   14.470601] cloud-init[775]: 2015-10-28 17:48:15,104 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/userdata [-]
[   15.051625] cloud-init[775]: 2015-10-28 17:48:15,685 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
[   15.057189] cloud-init[775]: 2015-10-28 17:48:15,690 - util.py[WARNING]: Running module scripts-user () failed

I believe those warnings appear simply because the userdata file is empty.

I googled and searched but couldn't find out why it wasn't working for me.

The nova.api logs show the transfer of the files with no problem there.  Heat really is sending empty userdata, and it thinks that is what it should be doing.

To verify, I added some debug prints in heat/engine/resources/openstack/nova/server.py:612, in the handle_create() method.  Below is the first part of the method for reference:

def handle_create(self):
    security_groups = self.properties.get(self.SECURITY_GROUPS)

    user_data_format = self.properties.get(self.USER_DATA_FORMAT)
    ud_content = self.properties.get(self.USER_DATA)  # <---

    if self.user_data_software_config() or self.user_data_raw():  # <---
        if uuidutils.is_uuid_like(ud_content):
            # attempt to load the userdata from software config
            ud_content = self.get_software_config(ud_content)  # <---

I added debug log prints after each #<--- above to see what it was getting for user_data, and it turns out it is empty (e.g. I never even reach the third debug print).  Spending more time looking through the code, it appears to me that self.properties.get(self.USER_DATA) should be returning the UUID of the software config resource associated with the deployment, but I could be wrong.  Either way, it is empty, which I think is not right.
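The reason an empty ud_content dead-ends here is the is_uuid_like() guard: with an empty string the software-config lookup branch is simply skipped. A self-contained illustration, using a simplified stand-in for oslo's uuidutils.is_uuid_like (the real helper lives in oslo_utils):

```python
import uuid

def is_uuid_like(val):
    """Simplified stand-in for oslo_utils.uuidutils.is_uuid_like."""
    try:
        # Canonicalize and compare, ignoring dashes and case.
        return str(uuid.UUID(val)).replace('-', '') == \
            str(val).replace('-', '').lower()
    except (TypeError, ValueError, AttributeError):
        return False

# In the failing case ud_content is empty, so the software-config
# branch in handle_create() is never taken.  (UUID is illustrative.)
for ud_content in ("", None, "8c4a1c5e-0d2f-4b7a-9e6d-3f1a2b3c4d5e"):
    print(repr(ud_content), '->', is_uuid_like(ud_content))
```

So the question reduces to why the USER_DATA property never got populated with the config UUID in the first place.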

Does anyone have an idea what I might be doing wrong?  I've been struggling with this for the past couple of days!  Or is software deployment just not stable in Kilo?  The documentation seems to indicate it has been supported since before Kilo.

Thanks in advance!
Gabe



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Millions of packets going to a single flow?

2015-08-27 Thread Gabe Black
Hi,
I've been running against openstack-dev (master branch) using the stackforge/networking-ovs-dpdk master branch (OVS git tag 1e77bbe565bbf5ae7f4c47f481a4097d666d3d68), with the single-node local.conf file on Ubuntu 15.04.  I've had to patch a few things to get past ERRORs during startup:
- disable/purge apparmor
- patch ovs-dpdk-init to find correct qemu group and launch ovs-dpdk with sg
- patch ovs_dvr_neutron_agent.py to change default datapath_type to be netdev
- modify ml2_conf.ini to have [ovs] datapath_type=netdev
- create a symlink between /usr/var/run/openvswitch and /var/run/openvswitch

Everything appears to be working from a horizon point of view, I can launch 
VMs, create routers/networks, etc.

However, I've been trying to get two VMs using ovs-dpdk (ovs-vswitchd --dpdk ...) to ping each other (or do anything network related, for that matter: get an IPv4 address via DHCP, ping the gateway/router, etc.), but to no avail.

I'm wondering if there is a bogus flow, because when I dump the flows:

# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=5234.334s, table=0, n_packets=0, n_bytes=0, idle_age=5234, priority=3,in_port=2,dl_vlan=2001 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=5246.680s, table=0, n_packets=0, n_bytes=0, idle_age=5246, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=5246.613s, table=0, n_packets=0, n_bytes=0, idle_age=5246, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=5246.744s, table=0, n_packets=46828985, n_bytes=2809779780, idle_age=0, priority=0 actions=NORMAL
 cookie=0x0, duration=5246.740s, table=23, n_packets=0, n_bytes=0, idle_age=5246, priority=0 actions=drop
 cookie=0x0, duration=5246.738s, table=24, n_packets=0, n_bytes=0, idle_age=5246, priority=0 actions=drop

Only one flow ever receives any packets, and apparently it gets millions of them.  Viewing the number of packets sent on each interface (via ifconfig) shows no interface with anywhere near that many packets.
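To make the mismatch concrete, the n_packets counters from dump-flows can be tallied mechanically. A small sketch over two representative lines copied from the dump above:

```python
import re

# Two lines copied from the dump-flows output above.
dump = """\
 cookie=0x0, duration=5246.744s, table=0, n_packets=46828985, n_bytes=2809779780, idle_age=0, priority=0 actions=NORMAL
 cookie=0x0, duration=5246.680s, table=0, n_packets=0, n_bytes=0, idle_age=5246, priority=2,in_port=1 actions=drop
"""

def flow_packet_counts(text):
    """Map each flow's match fields to its n_packets counter."""
    counts = {}
    for line in text.splitlines():
        m = re.search(r'n_packets=(\d+).*?(priority=\S+)', line)
        if m:
            counts[m.group(2)] = int(m.group(1))
    return counts

# The catch-all priority=0 NORMAL flow is the only one with a
# nonzero counter.
print(flow_packet_counts(dump))
```

Since priority=0 is the table-miss catch-all, all 46 million packets are hitting a flow that matches everything, which is what makes the counter so suspicious.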

Dumping ports doesn't show any stats near those values either:
===
ovs-ofctl dump-ports br-int
OFPST_PORT reply (xid=0x2): 7 ports
  port  6: rx pkts=0, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
   tx pkts=636, bytes=?, drop=0, errs=?, coll=?
  port  4: rx pkts=?, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
   tx pkts=?, bytes=?, drop=?, errs=?, coll=?
  port LOCAL: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
   tx pkts=874, bytes=98998, drop=0, errs=0, coll=0
  port  1: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
   tx pkts=874, bytes=95570, drop=0, errs=0, coll=0
  port  5: rx pkts=?, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
   tx pkts=?, bytes=?, drop=?, errs=?, coll=?
  port  2: rx pkts=0, bytes=0, drop=0, errs=0, frame=0, over=0, crc=0
   tx pkts=874, bytes=95570, drop=0, errs=0, coll=0
  port  3: rx pkts=?, bytes=?, drop=?, errs=?, frame=?, over=?, crc=?
   tx pkts=?, bytes=?, drop=?, errs=?, coll=?
===

One thing I find odd is that showing the br-int bridge (and all the other bridges, for that matter) reports PORT_DOWN for almost all the interfaces:
===
ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:9ab69b50904d n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src 
mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(int-br-em1): addr:0a:41:20:a8:6b:50
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 2(int-br-p6p1): addr:2a:5d:62:23:0a:60
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 3(tapa32182b8-ee): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 4(qr-a510b75f-f7): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 5(qr-d2f1d4a0-a9): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 6(vhucf0a0213-68): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:9a:b6:9b:50:90:4d
 config: PORT_DOWN
 state:  LINK_DOWN
 current:    10MB-FD COPPER
 speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0 
===

In fact, trying to bring them up (i.e. ovs-ofctl mod-port br-int 3 up) does not change anything for any of them.