[Yahoo-eng-team] [Bug 1918565] [NEW] quota: invalid JSON for reservation value when positive

2021-03-10 Thread Emilien Macchi
Public bug reported:

# High level description

When a resource (e.g. a port) is requested, QuotaDetail returns invalid
JSON until all resources of that type have been created (i.e. all
requested ports exist and no other port creation is in flight). Only the
"reserved" key is affected: it is returned as a string instead of an
integer.

This is incompatible with what the API is supposed to return:
https://docs.openstack.org/api-ref/network/v2/index.html?expanded=show-quota-details-for-a-tenant-detail

"The value for each resource type is itself an object (the quota set)
containing the quota’s used, limit and reserved integer values."

This is problematic for gophercloud, which expects an integer to be
returned, not a string:
https://github.com/gophercloud/gophercloud/blob/cd9c207e93f4f76af2c0a06c6d449ab342bfbe56/openstack/networking/v2/extensions/quotas/results.go#L125
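
For illustration only, a minimal Python sketch of the strict decoding a
typed client such as gophercloud performs (gophercloud itself is Go; the
payload below is a hypothetical sample mirroring the buggy response):

    import json

    # Hypothetical payload mirroring the buggy response: "reserved" is a string.
    payload = '{"quota": {"port": {"limit": 500, "used": 0, "reserved": "3"}}}'

    def parse_port_quota(raw):
        detail = json.loads(raw)["quota"]["port"]
        for key in ("limit", "used", "reserved"):
            if not isinstance(detail[key], int):
                # A typed client refuses to coerce a JSON string into an int field.
                raise TypeError("%s must be an integer, got %r" % (key, detail[key]))
        return detail

    parse_port_quota(payload)  # raises TypeError: reserved must be an integer, got '3'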

# Pre-conditions

* The Neutron QuotaDB driver is used.
* Resources of the affected type must be in the process of being created (e.g. ports).

# Step-by-step reproduction steps

* Get a token from Keystone:
$ openstack token issue

And export its ID as $token.

* Run this loop:
$ while true; do curl -H "X-Auth-Token: $token" :9696/v2.0/quotas//details.json | jq '.' |& tee -a logs; done

(A Python equivalent of this loop is sketched below, after the expected and actual output.)

* In another terminal, run this command:
$ tail -f logs|grep 'reserved": "'

* Now, create ports (e.g. 3 ports)
$ openstack port create (...)

* Expected output

{
  "quota": {
    "port": {
      "limit": 500,
      "used": 0,
      "reserved": 3
    },
    ...
  }
}

Then:

{
  "quota": {
    "port": {
      "limit": 500,
      "used": 1,
      "reserved": 2
    },
    ...
  }
}

Then:

{
  "quota": {
    "port": {
      "limit": 500,
      "used": 2,
      "reserved": 1
    },
    ...
  }
}

And then:

{
  "quota": {
    "port": {
      "limit": 500,
      "used": 3,
      "reserved": 0
    },
    ...
  }
}

* Actual output

{
  "quota": {
    "port": {
      "limit": 500,
      "used": 0,
      "reserved": "3"
    },
    ...
  }
}

Then:

{
  "quota": {
    "port": {
      "limit": 500,
      "used": 1,
      "reserved": "2"
    },
    ...
  }
}

Then:

{
  "quota": {
    "port": {
      "limit": 500,
      "used": 2,
      "reserved": "1"
    },
    ...
  }
}

And then:

{
  "quota": {
    "port": {
      "limit": 500,
      "used": 3,
      "reserved": 0  // an integer!
    },
    ...
  }
}
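
The curl loop above can also be scripted; below is a minimal sketch,
assuming the requests library, a reachable Neutron endpoint and
hypothetical OS_TOKEN / OS_PROJECT_ID environment variables (the original
command elides the host and project ID):

    import os
    import time

    import requests

    NEUTRON_URL = "http://controller:9696"    # placeholder endpoint
    TOKEN = os.environ["OS_TOKEN"]            # hypothetical: token from "openstack token issue"
    PROJECT_ID = os.environ["OS_PROJECT_ID"]  # hypothetical: project whose quota is checked

    url = "%s/v2.0/quotas/%s/details.json" % (NEUTRON_URL, PROJECT_ID)

    while True:
        resp = requests.get(url, headers={"X-Auth-Token": TOKEN})
        resp.raise_for_status()
        reserved = resp.json()["quota"]["port"]["reserved"]
        if not isinstance(reserved, int):
            print("BUG: reserved returned as %r (%s)"
                  % (reserved, type(reserved).__name__))
        time.sleep(1)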

# Version:

Master and stable branches.
RHEL 8.3
TripleO based deployment (OSP 17)

# Environment:
OSP17 deployed in standalone, OVN backend.

# Perceived severity:
A workaround is possible, but this is currently blocking gophercloud from
working with QuotaDetails.

** Affects: neutron
 Importance: Undecided
 Status: New

** Summary changed:

- quota: invalid JSON for reservation data
+ quota: invalid JSON for reservation value when positive

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1918565

Title:
  quota: invalid JSON for reservation value when positive

Status in neutron:
  New

[Yahoo-eng-team] [Bug 1622914] Re: agent traces about bridge-nf-call sysctl values missing

2018-12-29 Thread Emilien Macchi
This is an automated cleanup. This bug report has been closed because it
is older than 18 months and there is no open code change to fix it.
After this time it is unlikely that the circumstances which led to
the observed issue can be reproduced.

If you can reproduce the bug, please:
* reopen the bug report (set to status "New")
* AND add the detailed steps to reproduce the issue (if applicable)
* AND leave a comment "CONFIRMED FOR: "
  Only still supported release names are valid (FUTURE, PIKE, QUEENS, ROCKY, 
STEIN).
  Valid example: CONFIRMED FOR: FUTURE


** Changed in: tripleo
   Importance: Medium => Undecided

** Changed in: tripleo
   Status: Triaged => Expired

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1622914

Title:
  agent traces about bridge-nf-call sysctl values missing

Status in devstack:
  Fix Released
Status in neutron:
  Fix Released
Status in tripleo:
  Expired

Bug description:
  spotted in gate:

  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent Traceback (most recent call 
last):
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", 
line 450, in daemon_loop
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent sync = 
self.process_network_devices(device_info)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/usr/local/lib/python2.7/dist-packages/osprofiler/profiler.py", line 154, in 
wrapper
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent return f(*args, **kwargs)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/plugins/ml2/drivers/agent/_common_agent.py", 
line 200, in process_network_devices
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent device_info.get('updated'))
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 265, in 
setup_port_filters
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self.prepare_devices_filter(new_devices)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 130, in 
decorated_function
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent *args, **kwargs)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 138, in 
prepare_devices_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self._apply_port_filter(device_ids)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/securitygroups_rpc.py", line 163, in 
_apply_port_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self.firewall.prepare_port_filter(device)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 170, in 
prepare_port_filter
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
self._enable_netfilter_for_bridges()
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/iptables_firewall.py", line 114, in 
_enable_netfilter_for_bridges
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent run_as_root=True)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent   File 
"/opt/stack/new/neutron/neutron/agent/linux/utils.py", line 138, in execute
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent raise RuntimeError(msg)
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent RuntimeError: Exit code: 255; 
Stdin: ; Stdout: ; Stderr: sysctl: cannot stat 
/proc/sys/net/bridge/bridge-nf-call-arptables: No such file or directory
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent 
  2016-09-13 07:37:33.437 13401 ERROR 
neutron.plugins.ml2.drivers.agent._common_agent
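
Not the actual neutron fix, just a sketch of the kind of defensive check
that avoids this trace: verify that the bridge netfilter sysctl entries
exist (i.e. the br_netfilter module is loaded) before shelling out to
sysctl:

    import os
    import subprocess

    BRIDGE_NF_PATHS = (
        "/proc/sys/net/bridge/bridge-nf-call-arptables",
        "/proc/sys/net/bridge/bridge-nf-call-iptables",
        "/proc/sys/net/bridge/bridge-nf-call-ip6tables",
    )

    def enable_netfilter_for_bridges():
        for path in BRIDGE_NF_PATHS:
            if not os.path.exists(path):
                # br_netfilter is not loaded; skip instead of letting sysctl
                # fail with "No such file or directory" and raise RuntimeError.
                print("skipping %s: bridge netfilter not available" % path)
                continue
            key = path[len("/proc/sys/"):].replace("/", ".")
            # Requires root, like the original run_as_root=True call.
            subprocess.check_call(["sysctl", "-w", "%s=1" % key])

    enable_netfilter_for_bridges()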

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1622914/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp

[Yahoo-eng-team] [Bug 1715374] Re: Reloading compute with SIGHUP prevents instances from booting

2018-09-28 Thread Emilien Macchi
remaining patch : https://review.openstack.org/#/c/596275/

** Changed in: tripleo
   Status: In Progress => Fix Released

** Changed in: tripleo
   Status: Fix Released => Triaged

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1715374

Title:
  Reloading compute with SIGHUP prevents instances from booting

Status in OpenStack Compute (nova):
  Confirmed
Status in tripleo:
  Triaged

Bug description:
  When trying to boot a new instance on a compute node where nova-
  compute has received SIGHUP (SIGHUP is used as a trigger for reloading
  mutable options), the boot always fails.

  == nova/compute/manager.py ==
  def cancel_all_events(self):
      if self._events is None:
          LOG.debug('Unexpected attempt to cancel events during shutdown.')
          return
      our_events = self._events
      # NOTE(danms): Block new events
      self._events = None    <--- Set self._events to "None"
      ...
  =============================

  This causes a NovaException to be raised when
  prepare_for_instance_event() is called, which is why network
  allocation fails.

  == nova/compute/manager.py ==
  def prepare_for_instance_event(self, instance, event_name):
      ...
      if self._events is None:
          # NOTE(danms): We really should have a more specific error
          # here, but this is what we use for our default error case
          raise exception.NovaException('In shutdown, no new events '
                                        'can be scheduled')
  =============================
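
A self-contained sketch of the interplay described above (heavily
simplified from the nova code, with the surrounding machinery stubbed
out):

    class NovaException(Exception):
        pass

    class InstanceEvents(object):
        def __init__(self):
            self._events = {}

        def cancel_all_events(self):
            # The SIGHUP/reload path ends up here and blocks new events for good.
            self._events = None

        def prepare_for_instance_event(self, instance_uuid, event_name):
            if self._events is None:
                raise NovaException('In shutdown, no new events can be scheduled')
            return self._events.setdefault(instance_uuid, {}).setdefault(event_name, object())

    events = InstanceEvents()
    events.cancel_all_events()   # triggered by the SIGHUP "reload"
    try:
        # Any later boot that waits for e.g. network-vif-plugged now fails:
        events.prepare_for_instance_event('fake-uuid', 'network-vif-plugged')
    except NovaException as exc:
        print(exc)   # "In shutdown, no new events can be scheduled"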

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1715374/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1773449] Re: VM rbd backed block devices inconsistent after unexpected host outage

2018-07-03 Thread Emilien Macchi
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1773449

Title:
  VM rbd backed block devices inconsistent after unexpected host outage

Status in OpenStack ceph-mon charm:
  Fix Released
Status in charms.ceph:
  Fix Released
Status in Ubuntu Cloud Archive:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in tripleo:
  Fix Released
Status in ceph package in Ubuntu:
  Invalid
Status in nova package in Ubuntu:
  Invalid
Status in qemu package in Ubuntu:
  Invalid

Bug description:
  After rebooting a host that contains VMs with attached volumes, all of
  the VMs fail to boot. This happens with Queens on Bionic and Xenial.

  [0.00] Initializing cgroup subsys cpuset

  [0.00] Initializing cgroup subsys cpu

  [0.00] Initializing cgroup subsys cpuacct

  [0.00] Linux version 4.4.0-124-generic
  (buildd@lcy01-amd64-028) (gcc version 5.4.0 20160609 (Ubuntu
  5.4.0-6ubuntu1~16.04.9) ) #148-Ubuntu SMP Wed May 2 13:00:18 UTC 2018
  (Ubuntu 4.4.0-124.148-generic 4.4.117)

  [0.00] Command line:
  BOOT_IMAGE=/boot/vmlinuz-4.4.0-124-generic
  root=UUID=bca2de6e-f774-4203-ae05-e8deeb05f64a ro console=tty1
  console=ttyS0

  [0.00] KERNEL supported cpus:

  [0.00]   Intel GenuineIntel

  [0.00]   AMD AuthenticAMD

  [0.00]   Centaur CentaurHauls

  [0.00] x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256

  [0.00] x86/fpu: Supporting XSAVE feature 0x01: 'x87 floating
  point registers'

  [0.00] x86/fpu: Supporting XSAVE feature 0x02: 'SSE registers'

  [0.00] x86/fpu: Supporting XSAVE feature 0x04: 'AVX registers'

  [0.00] x86/fpu: Enabled xstate features 0x7, context size is
  832 bytes, using 'standard' format.

  [0.00] x86/fpu: Using 'eager' FPU context switches.

  [0.00] e820: BIOS-provided physical RAM map:

  [0.00] BIOS-e820: [mem 0x-0x0009fbff]
  usable

  [0.00] BIOS-e820: [mem 0x0009fc00-0x0009]
  reserved

  [0.00] BIOS-e820: [mem 0x000f-0x000f]
  reserved

  [0.00] BIOS-e820: [mem 0x0010-0x7ffdbfff]
  usable

  [0.00] BIOS-e820: [mem 0x7ffdc000-0x7fff]
  reserved

  [0.00] BIOS-e820: [mem 0xfeffc000-0xfeff]
  reserved

  [0.00] BIOS-e820: [mem 0xfffc-0x]
  reserved

  [0.00] NX (Execute Disable) protection: active

  [0.00] SMBIOS 2.8 present.

  [0.00] Hypervisor detected: KVM

  [0.00] e820: last_pfn = 0x7ffdc max_arch_pfn = 0x4

  [0.00] x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WC
  UC- WT

  [0.00] found SMP MP-table at [mem 0x000f6a20-0x000f6a2f]
  mapped at [880f6a20]

  [0.00] Scanning 1 areas for low memory corruption

  [0.00] Using GB pages for direct mapping

  [0.00] RAMDISK: [mem 0x361f4000-0x370f1fff]

  [0.00] ACPI: Early table checksum verification disabled

  [0.00] ACPI: RSDP 0x000F6780 14 (v00 BOCHS )

  [0.00] ACPI: RSDT 0x7FFE1649 2C (v01 BOCHS
  BXPCRSDT 0001 BXPC 0001)

  [0.00] ACPI: FACP 0x7FFE14CD 74 (v01 BOCHS
  BXPCFACP 0001 BXPC 0001)

  [0.00] ACPI: DSDT 0x7FFE0040 00148D (v01 BOCHS
  BXPCDSDT 0001 BXPC 0001)

  [0.00] ACPI: FACS 0x7FFE 40

  [0.00] ACPI: APIC 0x7FFE15C1 88 (v01 BOCHS
  BXPCAPIC 0001 BXPC 0001)

  [0.00] No NUMA configuration found

  [0.00] Faking a node at [mem
  0x-0x7ffdbfff]

  [0.00] NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdbfff]

  [0.00] kvm-clock: Using msrs 4b564d01 and 4b564d00

  [0.00] kvm-clock: cpu 0, msr 0:7ffcf001, primary cpu clock

  [0.00] kvm-clock: using sched offset of 17590935813 cycles

  [0.00] clocksource: kvm-clock: mask: 0x
  max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns

  [0.00] Zone ranges:

  [0.00]   DMA  [mem 0x1000-0x00ff]

  [0.00]   DMA32[mem 0x0100-0x7ffdbfff]

  [0.00]   Normal   empty

  [0.00]   Device   empty

  [0.00] Movable zone start for each node

  [0.00] Early memory node ranges

  [0.00]   node   0: [mem 0x1000-0x0009efff]

  [0.00]   node   0: [mem 0x0010-0x7ffdbfff]

  [0.00] Initmem setup node 0 [mem
  0x1000-0x7ffdbfff]

  [0.00] ACPI: PM-Timer IO Port: 0x608

  [0.00] ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])

  [

[Yahoo-eng-team] [Bug 1239481] Re: nova baremetal requires manual neutron setup for metadata access

2017-07-05 Thread Emilien Macchi
This bug was last updated over 180 days ago. As tripleo is a fast-moving
project and we'd like to get the tracker down to currently actionable
bugs, this is being marked as Invalid. If the issue still exists,
please feel free to reopen it.

** Changed in: tripleo
   Status: Incomplete => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1239481

Title:
  nova baremetal requires manual neutron setup for metadata access

Status in Ironic:
  Invalid
Status in neutron:
  Expired
Status in OpenStack Compute (nova):
  Won't Fix
Status in tripleo:
  Invalid

Bug description:
  A subnet set up with host routes can use a bare metal gateway as long
  as there is a metadata server on the same network:

  neutron subnet-create ... (network, dhcp settings etc) host_routes type=dict list=true destination=169.254.169.254/32,nexthop= --gateway_ip=

  But this requires manual configuration - it would be nice if nova
  could configure this as part of bringing up the network for a given
  node.
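
  For reference, a rough python-neutronclient equivalent of the command
  above; every address, UUID and credential below is a placeholder, since
  the original command elides those values:

    from neutronclient.v2_0 import client

    # Placeholder credentials and endpoint.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')

    neutron.create_subnet({'subnet': {
        'network_id': 'NETWORK_UUID',        # placeholder
        'ip_version': 4,
        'cidr': '192.0.2.0/24',              # placeholder
        'gateway_ip': '192.0.2.1',           # placeholder bare metal gateway
        'host_routes': [{'destination': '169.254.169.254/32',
                         'nexthop': '192.0.2.1'}],   # placeholder metadata next hop
    }})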

To manage notifications about this bug go to:
https://bugs.launchpad.net/ironic/+bug/1239481/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1696094] Re: CI: ovb-ha promotion job fails with 504 gateway timeout, neutron-server create-subnet timing out

2017-06-11 Thread Emilien Macchi
** Tags removed: alert gate-failure promotion-blocker

** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1696094

Title:
  CI: ovb-ha promotion job fails with 504 gateway timeout, neutron-
  server create-subnet timing out

Status in neutron:
  Confirmed
Status in tripleo:
  Fix Released

Bug description:
  http://logs.openstack.org/15/359215/106/experimental-tripleo/gate-
  tripleo-ci-centos-7-ovb-
  ha/2ea94ab/console.html#_2017-06-05_23_52_38_539282

  2017-06-05 23:50:34.148537 | 
+---+--+
  2017-06-05 23:50:35.545475 | neutron CLI is deprecated and will be removed in 
the future. Use openstack CLI instead.
  2017-06-05 23:52:38.539282 | 504 Gateway Time-out
  2017-06-05 23:52:38.539408 | The server didn't respond in time.
  2017-06-05 23:52:38.539437 | 

  It happens at the point where subnet creation should occur.
  I see an ovs-vsctl failure in the logs, but I am not sure it is not a red herring.

  http://logs.openstack.org/15/359215/106/experimental-tripleo/gate-
  tripleo-ci-centos-7-ovb-ha/2ea94ab/logs/controller-1-tripleo-
  ci-b-bar/var/log/messages

  Jun  5 23:48:22 localhost ovs-vsctl: ovs|1|vsctl|INFO|Called as 
/bin/ovs-vsctl --timeout=5 --id=@manager -- create Manager 
"target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
  Jun  5 23:48:22 localhost ovs-vsctl: ovs|2|db_ctl_base|ERR|transaction 
error: {"details":"Transaction causes multiple rows in \"Manager\" table to 
have identical values (\"ptcp:6640:127.0.0.1\") for index on column \"target\". 
 First row, with UUID 7e2b866a-40d5-4f9c-9e08-0be3bb34b199, existed in the 
database before this transaction and was not modified by the transaction.  
Second row, with UUID 49488cff-271a-457a-b1e7-e6ca3da6f069, was inserted by 
this transaction.","error":"constraint violation"}

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1696094/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1694524] Re: Neutron OVS agent fails to start when neutron-server is not available

2017-06-02 Thread Emilien Macchi
** Changed in: tripleo
   Status: Triaged => Fix Released

** Tags removed: alert ci promotion-blocker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694524

Title:
  Neutron OVS agent fails to start when neutron-server is not available

Status in neutron:
  Confirmed
Status in tripleo:
  Fix Released

Bug description:
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
[req-34115ed3-3043-4fcb-ba3f-ab0e4eb0e83c - - - - -] Agent main thread died of 
an exception
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
Traceback (most recent call last):
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 40, in agent_main_wrapper
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
ovs_agent.main(bridge_classes)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2166, in main
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
agent = OVSNeutronAgent(bridge_classes, cfg.CONF)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 180, in __init__
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.setup_rpc()
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 153, in wrapper
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return f(*args, **kwargs)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 362, in setup_rpc
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.plugin_rpc = OVSPluginApi(topics.PLUGIN)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 182, in __init__
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.remote_resource_cache = create_cache_for_l2_agent()
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 174, in 
create_cache_for_l2_agent
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
rcache.bulk_flood_cache()
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/resource_cache.py", line 55, in 
bulk_flood_cache
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
for resource in puller.bulk_pull(context, rtype):
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 48, in wrapper
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return method(*args, **kwargs)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py", 
line 109, in bulk_pull
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
version=resource_type_cls.VERSION, filter_kwargs=filter_kwargs)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 174, in call
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
time.sleep(wait)
  2017-05-30 18:58:01.947 27929 ERROR 

[Yahoo-eng-team] [Bug 1694524] Re: Neutron OVS agent fails to start when neutron-server is not available

2017-05-31 Thread Emilien Macchi
** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
Milestone: None => pike-2

** Changed in: tripleo
   Importance: Undecided => Critical

** Tags added: alert ci promotion-blocker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1694524

Title:
  Neutron OVS agent fails to start when neutron-server is not available

Status in neutron:
  Confirmed
Status in tripleo:
  Triaged

Bug description:
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
[req-34115ed3-3043-4fcb-ba3f-ab0e4eb0e83c - - - - -] Agent main thread died of 
an exception
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
Traceback (most recent call last):
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py",
 line 40, in agent_main_wrapper
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
ovs_agent.main(bridge_classes)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 2166, in main
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
agent = OVSNeutronAgent(bridge_classes, cfg.CONF)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 180, in __init__
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.setup_rpc()
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/osprofiler/profiler.py", line 153, in wrapper
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return f(*args, **kwargs)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py",
 line 362, in setup_rpc
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.plugin_rpc = OVSPluginApi(topics.PLUGIN)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 182, in __init__
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
self.remote_resource_cache = create_cache_for_l2_agent()
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 174, in 
create_cache_for_l2_agent
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
rcache.bulk_flood_cache()
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/agent/resource_cache.py", line 55, in 
bulk_flood_cache
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
for resource in puller.bulk_pull(context, rtype):
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/oslo_log/helpers.py", line 48, in wrapper
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
return method(*args, **kwargs)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py", 
line 109, in bulk_pull
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp 
version=resource_type_cls.VERSION, filter_kwargs=filter_kwargs)
  2017-05-30 18:58:01.947 27929 ERROR 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp   File 
"/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 174, in call
  2017-05-30 

[Yahoo-eng-team] [Bug 1683469] Re: InvalidRequestError: Can't attach instance

2017-04-18 Thread Emilien Macchi
** Changed in: tripleo
   Status: Triaged => Fix Released

** Tags removed: alert

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1683469

Title:
  InvalidRequestError: Can't attach instance

Status in neutron:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  Deploying TripleO undercloud under Pike packaging, Neutron with ML2
  OVS plugin.

  It fails to create network:

  neutron net-create ctlplane --provider:network_type flat 
--provider:physical_network ctlplane
  Request Failed: internal server error while processing your request.

  See the trace:
  
http://logs.openstack.org/64/457264/1/check/gate-tripleo-ci-centos-7-scenario002-multinode-oooq/56c3c3e/logs/undercloud/var/log/neutron/server.log.txt.gz#_2017-04-17_18_56_53_265

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1683469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1683469] [NEW] InvalidRequestError: Can't attach instance

2017-04-17 Thread Emilien Macchi
Public bug reported:

Deploying TripleO undercloud under Pike packaging, Neutron with ML2 OVS
plugin.

It fails to create network:

neutron net-create ctlplane --provider:network_type flat 
--provider:physical_network ctlplane
Request Failed: internal server error while processing your request.

See the trace:
http://logs.openstack.org/64/457264/1/check/gate-tripleo-ci-centos-7-scenario002-multinode-oooq/56c3c3e/logs/undercloud/var/log/neutron/server.log.txt.gz#_2017-04-17_18_56_53_265

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: In Progress

** Affects: tripleo
 Importance: Critical
 Status: Triaged


** Tags: alert ci networking

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
Milestone: None => pike-2

** Changed in: tripleo
   Status: New => Triaged

** Tags added: alert ci networking

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1683469

Title:
  InvalidRequestError: Can't attach instance

Status in neutron:
  In Progress
Status in tripleo:
  Triaged

Bug description:
  Deploying TripleO undercloud under Pike packaging, Neutron with ML2
  OVS plugin.

  It fails to create network:

  neutron net-create ctlplane --provider:network_type flat 
--provider:physical_network ctlplane
  Request Failed: internal server error while processing your request.

  See the trace:
  
http://logs.openstack.org/64/457264/1/check/gate-tripleo-ci-centos-7-scenario002-multinode-oooq/56c3c3e/logs/undercloud/var/log/neutron/server.log.txt.gz#_2017-04-17_18_56_53_265

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1683469/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1677008] [NEW] Stop using os-cloud-config

2017-03-28 Thread Emilien Macchi
Public bug reported:

os-cloud-config is deprecated in Ocata and will be removed in the future.
TripleO doesn't use it anymore. Only Neutron Client functional tests are using 
it.

** Affects: tripleo
 Importance: Medium
 Status: Triaged


** Tags: needs-attention

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Medium

** Changed in: tripleo
Milestone: None => pike-3

** Also affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1677008

Title:
  Stop using os-cloud-config

Status in tripleo:
  Triaged

Bug description:
  os-cloud-config is deprecated in Ocata and will be removed in the future.
  TripleO doesn't use it anymore. Only Neutron Client functional tests are 
using it.

To manage notifications about this bug go to:
https://bugs.launchpad.net/tripleo/+bug/1677008/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1674236] Re: CI / promotion: Nova isn't aware of the nodes that were registered with Ironic

2017-03-23 Thread Emilien Macchi
** Changed in: tripleo
   Status: In Progress => Fix Released

** Tags removed: alert ci promotion-blocker

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1674236

Title:
  CI / promotion: Nova isn't aware of the nodes that were registered
  with Ironic

Status in OpenStack Compute (nova):
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  All CI periodic jobs fail with "No valid host" error:

  
http://logs.openstack.org/periodic/periodic-tripleo-ci-centos-7-ovb-ha/6504587/
  
http://logs.openstack.org/periodic/periodic-tripleo-ci-centos-7-ovb-nonha/12d034e/

  Hosts are not deployed:
  
http://logs.openstack.org/periodic/periodic-tripleo-ci-centos-7-ovb-nonha/12d034e/logs/postci.txt.gz#_2017-03-19_07_22_10_000
  2017-03-19 07:22:10.000 | 
+--+-+++-+--+
  2017-03-19 07:22:10.000 | | ID   | Name   
 | Status | Task State | Power State | Networks |
  2017-03-19 07:22:10.000 | 
+--+-+++-+--+
  2017-03-19 07:22:10.000 | | 96e8d6bc-0ff4-46ad-a274-7bf554cdaf1a | 
overcloud-cephstorage-0 | ERROR  | -  | NOSTATE |  |
  2017-03-19 07:22:10.000 | | 56266ef5-7483-4052-8698-37efe14bc1c6 | 
overcloud-novacompute-0 | ERROR  | -  | NOSTATE |  |
  2017-03-19 07:22:10.000 | 
+--+-+++-+--+

  ironic node-list
  
+--+--+---+-++-+
  | UUID | Name | Instance UUID 
| Power State | Provisioning State | Maintenance |
  
+--+--+---+-++-+
  | b285-e40e-4068-abd8-7edeeb255cef | baremetal-periodic-0 | None  
| power off   | available  | False   |
  | 102deb76-7f12-49a1-9c3c-53472a1d0f3e | baremetal-periodic-1 | None  
| power off   | available  | False   |
  | 8afea687-4d29-4eed-97f3-57ba449eed14 | baremetal-periodic-2 | None  
| power off   | available  | False   |
  
+--+--+---+-++-+

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1674236/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1667679] Re: Setting quota fails saying admin project is not a valid project

2017-02-24 Thread Emilien Macchi
** Changed in: tripleo
   Status: Triaged => Fix Released

** Changed in: tripleo
 Assignee: (unassigned) => Emilien Macchi (emilienm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1667679

Title:
  Setting quota fails saying admin project is not a valid project

Status in OpenStack Compute (nova):
  Confirmed
Status in tripleo:
  Fix Released

Bug description:
  This is what's in the logs http://logs.openstack.org/15/359215/63
  /check-tripleo/gate-tripleo-ci-centos-7-ovb-
  ha/3465882/console.html#_2017-02-24_11_07_08_893276

  2017-02-24 11:07:08.893276 | 2017-02-24 11:07:02.000 | 2017-02-24 
11:07:02,929 INFO: + openstack quota set --cores -1 --instances -1 --ram -1 
b0fe52b0ac15450ba0a38ac9acd8fea8
  2017-02-24 11:07:08.893365 | 2017-02-24 11:07:08.000 | 2017-02-24 
11:07:08,674 INFO: Project ID b0fe52b0ac15450ba0a38ac9acd8fea8 is not a valid 
project. (HTTP 400) (Request-ID: req-9e0a00b7-75ae-41d5-aeed-705bb1a54bae)
  2017-02-24 11:07:08.893493 | 2017-02-24 11:07:08.000 | 2017-02-24 
11:07:08,758 INFO: [2017-02-24 11:07:08,757] (os-refresh-config) [ERROR] during 
post-configure phase. [Command '['dib-run-parts', 
'/usr/libexec/os-refresh-config/post-configure.d']' returned non-zero exit 
status 1]

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1667679/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661360] Re: tempest test fails with "Instance not found" error

2017-02-16 Thread Emilien Macchi
** Changed in: tripleo
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661360

Title:
  tempest test fails with "Instance not found" error

Status in OpenStack Compute (nova):
  In Progress
Status in tripleo:
  Fix Released

Bug description:
  Running OpenStack services from master, when we try to run tempest
  test
  
tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops
  (among others). It always fails with message "u'message': u'Instance
  bf33af04-6b55-4835-bb17-02484c196f13 could not be found.'" (full log
  in http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-centos-7/b29f35b/console.html)

  According to the sequence in the log, this is what happens:

  1. tempest creates an instance:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_291997

  2. nova server returns instance bf33af04-6b55-4835-bb17-02484c196f13
  so it seems it has been properly created:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292483

  3. tempest tries to get the status of the instance right after creating
  it, and nova returns 404, instance not found:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292565

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292845

  At that time following messages are found in nova log:

  2017-02-02 12:58:10.823 7439 DEBUG nova.compute.api 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] [instance: 
bf33af04-6b55-4835-bb17-02484c196f13] Fetching instance by UUID get 
/usr/lib/python2.7/site-packages/nova/compute/api.py:2312
  2017-02-02 12:58:10.879 7439 INFO nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] HTTP exception thrown: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found.
  2017-02-02 12:58:10.880 7439 DEBUG nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] Returning 404 to user: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found. __call__ 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1039

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-centos-7/b29f35b/logs/nova/nova-
  api.txt.gz#_2017-02-02_12_58_10_879

  4. Then tempest starts cleaning up the environment, deleting the
  security group, etc...

  We are hitting this with nova from commit
  f40467b0eb2b58a369d24a0e832df1ace6c400c3





  
  Tempest starts cleaning up securitygroup

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663187] Re: Nova Placement API on IPv6 unreachable from compute nodes

2017-02-13 Thread Emilien Macchi
** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1663187

Title:
  Nova Placement API on IPv6 unreachable from compute nodes

Status in OpenStack Compute (nova):
  Invalid
Status in tripleo:
  In Progress

Bug description:
  logs at

  http://logs.openstack.org/periodic/periodic-tripleo-ci-centos-7-ovb-
  updates/024c997/console.html#_2017-02-09_08_41_30_014769

  show the error
  "No valid host was found"

  The deployment completed and updated successfully

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1663187/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1663458] [NEW] brutal stop of ovs-agent doesn't kill ryu controller

2017-02-09 Thread Emilien Macchi
Public bug reported:

It seems like when we kill neutron-ovs-agent and start it again, the ryu
controller fails to start because the previous instance (in eventlet) is
still running.

(... ovs agent is failing to start and is brutally killed)

Trying to start the process 5 minutes later:
INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 
10.0.0.0rc2.dev33
INFO ryu.base.app_manager [-] loading app 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp
INFO ryu.base.app_manager [-] loading app ryu.app.ofctl.service
INFO ryu.base.app_manager [-] loading app ryu.controller.ofp_handler
INFO ryu.base.app_manager [-] instantiating app 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp of 
OVSNeutronAgentRyuApp
INFO ryu.base.app_manager [-] instantiating app ryu.controller.ofp_handler of 
OFPHandler
INFO ryu.base.app_manager [-] instantiating app ryu.app.ofctl.service of 
OfctlService
ERROR ryu.lib.hub [-] hub: uncaught exception: Traceback (most recent call 
last):
  File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 54, in _launch
return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 
97, in __call__
self.ofp_ssl_listen_port)
  File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 
120, in server_loop
datapath_connection_factory)
  File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 117, in __init__
self.server = eventlet.listen(listen_info)
  File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line 43, in 
listen
sock.bind(addr)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
error: [Errno 98] Address already in use
INFO neutron.agent.ovsdb.native.vlog [-] tcp:127.0.0.1:6640: connecting...
INFO neutron.agent.ovsdb.native.vlog [-] tcp:127.0.0.1:6640: connected
INFO neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_bridge 
[-] Bridge br-int has datapath-ID badb62a6184f
ERROR neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ofswitch 
[-] Switch connection timeout

I haven't figured out yet how the previous instance of ovs agent was
killed (my theory is that Puppet killed it but I don't have the killing
code yet, I'll update the bug asap).
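
The "Address already in use" part is easy to reproduce in isolation; a
minimal sketch with plain sockets (same errno 98 the ryu/eventlet
listener hits), using an arbitrary placeholder port:

    import socket

    def listen(port):
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind(('127.0.0.1', port))   # no SO_REUSEADDR, just like a stale listener
        sock.listen(1)
        return sock

    first = listen(16633)    # stands in for the ryu OpenFlow controller left behind
    try:
        listen(16633)        # what the restarted agent effectively attempts
    except socket.error as exc:
        print(exc)           # [Errno 98] Address already in use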

** Affects: neutron
 Importance: Undecided
 Assignee: Ihar Hrachyshka (ihar-hrachyshka)
 Status: New

** Affects: tripleo
 Importance: Critical
 Assignee: Emilien Macchi (emilienm)
 Status: Triaged


** Tags: needs-attention ovs

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
 Assignee: (unassigned) => Emilien Macchi (emilienm)

** Changed in: tripleo
Milestone: None => ocata-rc1

** Changed in: tripleo
   Importance: Undecided => Critical

** Tags added: alert ci

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1663458

Title:
  brutal stop of ovs-agent doesn't kill ryu controller

Status in neutron:
  New
Status in tripleo:
  Triaged

Bug description:
  It seems like when we kill neutron-ovs-agent and start it again, the
  ryu controller fails to start because the previous instance (in
  eventlet) is still running.

  (... ovs agent is failing to start and is brutally killed)

  Trying to start the process 5 minutes later:
  INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 
10.0.0.0rc2.dev33
  INFO ryu.base.app_manager [-] loading app 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp
  INFO ryu.base.app_manager [-] loading app ryu.app.ofctl.service
  INFO ryu.base.app_manager [-] loading app ryu.controller.ofp_handler
  INFO ryu.base.app_manager [-] instantiating app 
neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp of 
OVSNeutronAgentRyuApp
  INFO ryu.base.app_manager [-] instantiating app ryu.controller.ofp_handler of 
OFPHandler
  INFO ryu.base.app_manager [-] instantiating app ryu.app.ofctl.service of 
OfctlService
  ERROR ryu.lib.hub [-] hub: uncaught exception: Traceback (most recent call 
last):
File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 54, in _launch
  return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 
97, in __call__
  self.ofp_ssl_listen_port)
File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 
120, in server_loop
  datapath_connection_factory)
File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 117, in 
__init__
  self.server = eventlet.listen(listen_info)
File "/usr/lib/python2.7/site-packages/eventlet/convenience.py&

[Yahoo-eng-team] [Bug 1661396] Re: undercloud install fails (nova-db-sync timeout) on VM on an SATA disk hypervisor

2017-02-02 Thread Emilien Macchi
Mike: done.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661396

Title:
  undercloud install fails (nova-db-sync timeout) on VM on an SATA disk
  hypervisor

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  In Progress

Bug description:
  2017-02-01 15:24:49,084 INFO: Error: Command exceeded timeout
  2017-02-01 15:24:49,084 INFO: Error: 
/Stage[main]/Nova::Db::Sync/Exec[nova-db-sync]/returns: change from notrun to 0 
failed: Command exceeded timeout

  The nova-db-sync command is exceeding 300 seconds when installing the
  undercloud on a VM that is using SATA based storage. This seems to be
  related to switching innodb_file_per_table to ON, which has doubled
  the amount of time the db sync takes on this class of hardware.  To
  unblock folks doing Ocata testing, we need to skip doing this in Ocata
  and will need to revisit enabling it in Pike.

  See Bug 1660722 for details as to why we enabled this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661396/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1661360] Re: tempest test fails with "Instance not found" error

2017-02-02 Thread Emilien Macchi
It affects Puppet OpenStack CI but also TripleO. We can't spawn a VM
anymore.

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
Milestone: None => ocata-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1661360

Title:
  tempest test fails with "Instance not found" error

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Triaged

Bug description:
  Running OpenStack services from master, when we try to run tempest
  test
  
tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops
  (among others). It always fails with message "u'message': u'Instance
  bf33af04-6b55-4835-bb17-02484c196f13 could not be found.'" (full log
  in http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-centos-7/b29f35b/console.html)

  According to the sequence in the log, this is what happens:

  1. tempest creates an instance:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_291997

  2. nova server returns instance bf33af04-6b55-4835-bb17-02484c196f13
  so it seems it has been properly created:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292483

  3. tempest tries to get the status of the instance right after creating
  it, and nova returns 404, instance not found:

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292565

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-
  centos-7/b29f35b/console.html#_2017-02-02_13_04_48_292845

  At that time following messages are found in nova log:

  2017-02-02 12:58:10.823 7439 DEBUG nova.compute.api 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] [instance: 
bf33af04-6b55-4835-bb17-02484c196f13] Fetching instance by UUID get 
/usr/lib/python2.7/site-packages/nova/compute/api.py:2312
  2017-02-02 12:58:10.879 7439 INFO nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] HTTP exception thrown: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found.
  2017-02-02 12:58:10.880 7439 DEBUG nova.api.openstack.wsgi 
[req-eec92d3e-9f78-4915-b3b9-ca6858f8dd6a - - - - -] Returning 404 to user: 
Instance bf33af04-6b55-4835-bb17-02484c196f13 could not be found. __call__ 
/usr/lib/python2.7/site-packages/nova/api/openstack/wsgi.py:1039

  http://logs.openstack.org/15/424915/8/check/gate-puppet-openstack-
  integration-4-scenario001-tempest-centos-7/b29f35b/logs/nova/nova-
  api.txt.gz#_2017-02-02_12_58_10_879

  4. Then tempest starts cleaning up the environment, deleting the
  security group, etc...

  We are hitting this with nova from commit
  f40467b0eb2b58a369d24a0e832df1ace6c400c3





  
  Tempest starts cleaning up securitygroup

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1661360/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1656276] Re: Error running nova-manage cell_v2 simple_cell_setup when configuring nova with puppet-nova

2017-02-01 Thread Emilien Macchi
** Changed in: tripleo
   Status: Triaged => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1656276

Title:
  Error running nova-manage  cell_v2 simple_cell_setup when configuring
  nova with puppet-nova

Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) newton series:
  New
Status in Packstack:
  New
Status in puppet-nova:
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  When installing and configuring nova with puppet-nova (with either
  tripleo, packstack or puppet-openstack-integration), we are getting
  the following errors:

  Debug: Executing: '/usr/bin/nova-manage  cell_v2 simple_cell_setup 
--transport-url=rabbit://guest:guest@172.19.2.159:5672/?ssl=0'
  Debug: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Sleeping for 5 seconds between tries
  Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 Cell0 is already setup.
  Notice: 
/Stage[main]/Nova::Db::Sync_cell_v2/Exec[nova-cell_v2-simple-cell-setup]/returns:
 No hosts found to map to cell, exiting.

  The issue seems to be that it's running "nova-manage  cell_v2
  simple_cell_setup" as part of the nova database initialization when no
  compute nodes have been created but it returns 1 in that case [1].
  However, note that the previous steps (Cell0 mapping and schema
  migration) were successfully run.

  I think for nova bootstrap a reasonable orchestrated workflow would
  be:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. nova cell0 mapping and schema creation.
  4. Adding compute nodes
  5. mapping compute nodes (by running nova-manage cell_v2 discover_hosts)

  For step 3 we'd need to get simple_cell_setup to return 0 when not
  having compute nodes, or having a different command.

  With current behavior of nova-manage the only working workflow we can
  do is:

  1. Create required databases (including the one for cell0).
  2. Nova db sync
  3. Adding all compute nodes
  4. nova cell0 mapping and schema creation with "nova-manage cell_v2 
simple_cell_setup".

  Am I right?, Is there any better alternative?

  
  [1] 
https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L1112-L1114
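
  Not the actual puppet-nova change, just a sketch of how an
  orchestration step could treat the "no compute hosts yet" exit as
  non-fatal while still failing on real errors (command and transport URL
  copied from the log above):

    import subprocess

    CMD = ['/usr/bin/nova-manage', 'cell_v2', 'simple_cell_setup',
           '--transport-url=rabbit://guest:guest@172.19.2.159:5672/?ssl=0']

    proc = subprocess.Popen(CMD, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    output = proc.communicate()[0].decode()
    print(output)

    # simple_cell_setup exits 1 when cell0 is already set up but no compute
    # hosts exist yet; treat only that case as success so the bootstrap can
    # run before the compute nodes are added.
    if proc.returncode != 0 and 'No hosts found to map to cell' not in output:
        raise RuntimeError('simple_cell_setup failed (rc=%d)' % proc.returncode)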

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1656276/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1660160] [NEW] No host-to-cell mapping found for selected host

2017-01-29 Thread Emilien Macchi
Public bug reported:

This report is maybe not a bug, but I found it useful to share what
happens in TripleO since this commit:
https://review.openstack.org/#/c/319379/

We are unable to deploy the overcloud nodes anymore (in other words,
create servers with Nova / Ironic).

Nova Conductor sends this message:
"No host-to-cell mapping found for selected host"
http://logs.openstack.org/31/426231/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/915aeba/logs/undercloud/var/log/nova/nova-conductor.txt.gz#_2017-01-27_19_21_56_348

And it sounds like the compute host is not registered:
http://logs.openstack.org/31/426231/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/915aeba/logs/undercloud/var/log/nova/nova-compute.txt.gz#_2017-01-27_18_56_56_543

Nova Config is available here:
http://logs.openstack.org/31/426231/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/915aeba/logs/etc/nova/nova.conf.txt.gz

That's all the details I have for now; feel free to ask for more details if needed.
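
For anyone hitting the same symptom, the usual remediation once the compute
service has registered itself is to map the host into a cell, for example
(a general sketch, not confirmed as the fix for this particular report):

$ nova-manage cell_v2 discover_hosts --verbose
$ nova-manage cell_v2 list_cells   # sanity-check the existing cell mappings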

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: Critical
 Status: Triaged

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => Critical

** Changed in: tripleo
Milestone: None => ocata-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1660160

Title:
  No host-to-cell mapping found for selected host

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Triaged

Bug description:
  This report is maybe not a bug, but I found it useful to share what happens
  in TripleO since this commit:
  https://review.openstack.org/#/c/319379/

  We are unable to deploy the overcloud nodes anymore (in other words,
  create servers with Nova / Ironic).

  Nova Conductor sends this message:
  "No host-to-cell mapping found for selected host"
  
http://logs.openstack.org/31/426231/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/915aeba/logs/undercloud/var/log/nova/nova-conductor.txt.gz#_2017-01-27_19_21_56_348

  And it sounds like the compute host is not registered:
  
http://logs.openstack.org/31/426231/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/915aeba/logs/undercloud/var/log/nova/nova-compute.txt.gz#_2017-01-27_18_56_56_543

  Nova Config is available here:
  
http://logs.openstack.org/31/426231/1/check-tripleo/gate-tripleo-ci-centos-7-ovb-ha/915aeba/logs/etc/nova/nova.conf.txt.gz

  That's all the details I have for now; feel free to ask for more details
  if needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1660160/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657467] Re: placement: unable to refresh compute resource provider record

2017-01-18 Thread Emilien Macchi
We've found that it comes from the HAProxy configuration in TripleO. We're
working on it now.

** No longer affects: nova

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657467

Title:
  placement: unable to refresh  compute resource provider record

Status in tripleo:
  Triaged

Bug description:
  Deploying Nova Placement API used to work a few days ago in TripleO
  and Puppet OpenStack CIs but not anymore.

  "Unable to refresh my resource provider record"

  nova-compute log files:
  
http://logs.openstack.org/66/421366/1/check/gate-puppet-openstack-integration-4-scenario004-tempest-centos-7/f138bcc/logs/nova/nova-compute.txt.gz#_2017-01-17_17_32_39_092

  nova.conf:
  https://paste.fedoraproject.org/529703/14847479/

To manage notifications about this bug go to:
https://bugs.launchpad.net/tripleo/+bug/1657467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1657467] [NEW] placement: unable to refresh compute resource provider record

2017-01-18 Thread Emilien Macchi
Public bug reported:

Deploying Nova Placement API used to work a few days ago in TripleO and
Puppet OpenStack CIs but not anymore.

"Unable to refresh my resource provider record"

nova-compute log files:
http://logs.openstack.org/66/421366/1/check/gate-puppet-openstack-integration-4-scenario004-tempest-centos-7/f138bcc/logs/nova/nova-compute.txt.gz#_2017-01-17_17_32_39_092

nova.conf:
https://paste.fedoraproject.org/529703/14847479/

** Affects: nova
 Importance: Undecided
 Status: New

** Affects: tripleo
 Importance: High
 Status: Triaged

** Also affects: tripleo
   Importance: Undecided
   Status: New

** Changed in: tripleo
   Status: New => Triaged

** Changed in: tripleo
   Importance: Undecided => High

** Changed in: tripleo
Milestone: None => ocata-rc1

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1657467

Title:
  placement: unable to refresh  compute resource provider record

Status in OpenStack Compute (nova):
  New
Status in tripleo:
  Triaged

Bug description:
  Deploying Nova Placement API used to work a few days ago in TripleO
  and Puppet OpenStack CIs but not anymore.

  "Unable to refresh my resource provider record"

  nova-compute log files:
  
http://logs.openstack.org/66/421366/1/check/gate-puppet-openstack-integration-4-scenario004-tempest-centos-7/f138bcc/logs/nova/nova-compute.txt.gz#_2017-01-17_17_32_39_092

  nova.conf:
  https://paste.fedoraproject.org/529703/14847479/

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1657467/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1490990] Re: acceptance: neutron fails to start server service

2016-09-26 Thread Emilien Macchi
** Changed in: puppet-neutron
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1490990

Title:
  acceptance: neutron fails to start server service

Status in neutron:
  Fix Released
Status in oslo.config:
  Invalid
Status in puppet-neutron:
  Fix Released

Bug description:
  This is a new error that started happening very recently, using RDO Liberty
  packaging:

  With the current state of the beaker manifests, we have this error:
  No providers specified for 'LOADBALANCER' service, exiting

  Source: http://logs.openstack.org/50/216950/5/check/gate-puppet-
  neutron-puppet-beaker-rspec-dsvm-
  centos7/9e7e510/logs/neutron/server.txt.gz#_2015-09-01_12_40_22_734

  That means neutron-server can't start correctly.

  This is probably a misconfiguration in our manifests or a packaging
  issue in Neutron, because we don't have the issue in Trusty jobs.

  RDO packaging version: 7.0.0.0b3-dev606
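
  For reference, this error usually means no service_provider is configured
  for the LOADBALANCER service; a sketch of the kind of setting that resolves
  it (the exact Liberty-era driver path, the target config file and the use
  of crudini are assumptions):

  $ crudini --set /etc/neutron/neutron_lbaas.conf service_providers \
      service_provider \
      "LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default"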

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1490990/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619758] Re: Credential Encryption breaks deployments without Fernet

2016-09-12 Thread Emilien Macchi
** Changed in: tripleo
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1619758

Title:
  Credential Encryption breaks deployments without Fernet

Status in OpenStack Identity (keystone):
  Fix Released
Status in tripleo:
  Fix Released

Bug description:
  A recent change to encrypt credentials broke RDO/TripleO deployments:


  2016-09-02 17:16:55.074 17619 ERROR keystone.common.fernet_utils 
[req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 
fd71b607cfa84539bf0440915ea2d94b - default default] Either [fernet_tokens] 
key_repository does not exist or Keystone does not have sufficient permission 
to access it: /etc/keystone/credential-keys/
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi 
[req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 
fd71b607cfa84539bf0440915ea2d94b - default default] MultiFernet requires at 
least one Fernet instance
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in 
__call__
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi result = 
method(req, **params)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 164, in 
inner
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi return f(self, 
request, *args, **kwargs)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/controllers.py", line 69, 
in create_credential
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi ref = 
self.credential_api.create_credential(ref['id'], ref)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in 
wrapped
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/core.py", line 106, in 
create_credential
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi credential_copy 
= self._encrypt_credential(credential)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/core.py", line 72, in 
_encrypt_credential
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi 
json.dumps(credential['blob'])
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/providers/fernet/core.py",
 line 68, in encrypt
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi crypto, keys = 
get_multi_fernet_keys()
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/providers/fernet/core.py",
 line 49, in get_multi_fernet_keys
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi crypto = 
fernet.MultiFernet(fernet_keys)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/cryptography/fernet.py", line 128, in 
__init__
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi "MultiFernet 
requires at least one Fernet instance"
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi ValueError: 
MultiFernet requires at least one Fernet instance
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1619758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619758] Re: Credential Encryption breaks deployments without Fernet

2016-09-02 Thread Emilien Macchi
I'm adding TripleO because we need to automate the upgrade process described in:
http://docs.openstack.org/releasenotes/keystone/unreleased.html#upgrade-notes

"Keystone now supports encrypted credentials at rest. In order to
upgrade successfully to Newton, deployers must encrypt all credentials
currently stored before contracting the database. Deployers must run
keystone-manage credential_setup in order to use the credential API
within Newton, or finish the upgrade from Mitaka to Newton. This will
result in a service outage for the credential API where credentials will
be read-only for the duration of the upgrade process. Once the database
is contracted credentials will be writeable again. Database contraction
phases only apply to rolling upgrades."

So I'm going to try to make it transparent in puppet-keystone, but TripleO
will certainly have to run the command in its upgrade scripts.
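
For reference, the command in question, with the ownership flags
puppet-keystone would most likely pass (a sketch of the expected invocation,
not the final TripleO implementation):

$ keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# then restart the keystone service so the credential API is writeable again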

** Also affects: tripleo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1619758

Title:
  Credential Encryption breaks deployments without Fernet

Status in OpenStack Identity (keystone):
  New
Status in tripleo:
  New

Bug description:
  A recent change to encrypt credentials broke RDO/TripleO deployments:


  2016-09-02 17:16:55.074 17619 ERROR keystone.common.fernet_utils 
[req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 
fd71b607cfa84539bf0440915ea2d94b - default default] Either [fernet_tokens] 
key_repository does not exist or Keystone does not have sufficient permission 
to access it: /etc/keystone/credential-keys/
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi 
[req-31d60075-7e0e-401e-a93f-58297cd5439b f2caffbaf10d4e3da294c6366fe19a36 
fd71b607cfa84539bf0440915ea2d94b - default default] MultiFernet requires at 
least one Fernet instance
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/wsgi.py", line 225, in 
__call__
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi result = 
method(req, **params)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/controller.py", line 164, in 
inner
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi return f(self, 
request, *args, **kwargs)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/controllers.py", line 69, 
in create_credential
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi ref = 
self.credential_api.create_credential(ref['id'], ref)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/common/manager.py", line 124, in 
wrapped
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi __ret_val = 
__f(*args, **kwargs)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/core.py", line 106, in 
create_credential
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi credential_copy 
= self._encrypt_credential(credential)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/core.py", line 72, in 
_encrypt_credential
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi 
json.dumps(credential['blob'])
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/providers/fernet/core.py",
 line 68, in encrypt
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi crypto, keys = 
get_multi_fernet_keys()
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib/python2.7/site-packages/keystone/credential/providers/fernet/core.py",
 line 49, in get_multi_fernet_keys
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi crypto = 
fernet.MultiFernet(fernet_keys)
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi   File 
"/usr/lib64/python2.7/site-packages/cryptography/fernet.py", line 128, in 
__init__
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi "MultiFernet 
requires at least one Fernet instance"
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi ValueError: 
MultiFernet requires at least one Fernet instance
  2016-09-02 17:16:55.074 17619 ERROR keystone.common.wsgi

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1619758/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1619696] [NEW] "neutron-db-manage upgrade heads" fails with networksegments_ibfk_2

2016-09-02 Thread Emilien Macchi
Public bug reported:

Since this commit: https://review.openstack.org/#/c/293305/

Puppet OpenStack CI is failing to run db upgrades:

2016-09-02 13:41:05.973470 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.runtime.migration] Running upgrade 3b935b28e7a0, 67daae611b6e -> 
b12a3ef66e62, add standardattr to qos policies
2016-09-02 13:41:05.973831 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.runtime.migration] Running upgrade b12a3ef66e62, 89ab9a816d70 -> 
97c25b0d2353, Add Name and Description to the networksegments table
2016-09-02 13:41:05.974141 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Running upgrade 
for neutron ...
2016-09-02 13:41:05.974450 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Traceback (most 
recent call last):
2016-09-02 13:41:05.974762 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/bin/neutron-db-manage", line 10, in 
2016-09-02 13:41:05.975062 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
sys.exit(main())
2016-09-02 13:41:05.975360 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 686, in 
main
2016-09-02 13:41:05.975647 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: return_val |= 
bool(CONF.command.func(config, CONF.command.name))
2016-09-02 13:41:05.975959 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 207, in 
do_upgrade
2016-09-02 13:41:05.976238 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: desc=branch, 
sql=CONF.command.sql)
2016-09-02 13:41:05.976541 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py", line 108, in 
do_alembic_command
2016-09-02 13:41:05.976854 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
getattr(alembic_command, cmd)(config, *args, **kwargs)
2016-09-02 13:41:05.977153 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/command.py", line 174, in upgrade
2016-09-02 13:41:05.977420 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
script.run_env()
2016-09-02 13:41:05.977711 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/script/base.py", line 397, in run_env
2016-09-02 13:41:05.978016 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
util.load_python_file(self.dir, 'env.py')
2016-09-02 13:41:05.978335 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 81, in 
load_python_file
2016-09-02 13:41:05.978614 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: module = 
load_module_py(module_id, path)
2016-09-02 13:41:05.978932 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/util/compat.py", line 79, in 
load_module_py
2016-09-02 13:41:05.979212 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: mod = 
imp.load_source(module_id, path, fp)
2016-09-02 13:41:05.979568 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 120, in 
2016-09-02 13:41:05.979862 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
run_migrations_online()
2016-09-02 13:41:05.980238 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py",
 line 114, in run_migrations_online
2016-09-02 13:41:05.980519 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
context.run_migrations()
2016-09-02 13:41:05.980858 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"", line 8, in run_migrations
2016-09-02 13:41:05.981163 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/runtime/environment.py", line 797, in 
run_migrations
2016-09-02 13:41:05.981445 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
self.get_context().run_migrations(**kw)
2016-09-02 13:41:05.981744 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
"/usr/lib/python2.7/site-packages/alembic/runtime/migration.py", line 312, in 
run_migrations
2016-09-02 13:41:05.982034 | Notice: 

[Yahoo-eng-team] [Bug 1597357] Re: When kestone is slow to respond getting user fails

2016-06-29 Thread Emilien Macchi
** Also affects: puppet-keystone
   Importance: Undecided
   Status: New

** No longer affects: puppet-keystone

** Project changed: keystone => puppet-keystone

** Changed in: puppet-keystone
   Status: New => Confirmed

** Changed in: puppet-keystone
   Importance: Undecided => High

** Changed in: puppet-keystone
 Assignee: (unassigned) => Sofer Athlan-Guyot (sofer-athlan-guyot)

** Summary changed:

- When kestone is slow to respond getting user fails
+ When keystone is slow to respond: getting user fails

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1597357

Title:
  When keystone is slow to respond: getting user fails

Status in puppet-keystone:
  Confirmed
Status in tripleo:
  Confirmed

Bug description:
  To test if a user exists we check the keystone db by using

  openstack user show 'foo' ...

  If the user doesn't exist then we get an error.  The usual retry of the
  openstack lib would imply that we wait the full request_timeout to get
  this.  This is currently ~170s.  So that is 170s times the number of users
  in the catalog!

  To overcome this, the call is wrapped inside a no-retry outer
  function [1].

  The problem is that on very slow platforms legitimate timeouts can occur;
  this is especially true in CI.  Here is an example of such a failure:

  Error: /Stage[main]/Keystone::Roles::Admin/Keystone_user[admin]:
  Could not evaluate: Command: 'openstack ["user", "show", "--format",
  "shell", ["admin", "--domain", "default"]]' has been running for more
  then 20 seconds (tried 0, for a total of 0 seconds)

  From  http://logs.openstack.org/58/322858/11/check-tripleo/gate-
  tripleo-ci-centos-7-ha/7e5b0a6/logs/postci.txt.gz
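
  The failure mode can be approximated outside puppet with something like the
  following (a sketch; the 20-second budget mirrors the provider's setting):

  $ timeout 20 openstack user show admin --domain default --format shell \
      || echo "user lookup exceeded the 20 second budget"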

  
  [1] 
https://github.com/openstack/puppet-keystone/blob/master/lib/puppet/provider/keystone_user/openstack.rb#L81

To manage notifications about this bug go to:
https://bugs.launchpad.net/puppet-keystone/+bug/1597357/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585680] [NEW] neutron-lbaas doesn't have tempest plugin

2016-05-25 Thread Emilien Macchi
Public bug reported:

Puppet OpenStack CI is interested in running the neutron-lbaas Tempest tests,
but it's currently not possible because neutron-lbaas is missing a Tempest
plugin and its entry point, so test discovery does not work.

Right now, to run Tempest we need to go into the neutron-lbaas directory and
run tox there, etc.
That's not the way to go, and other projects (Neutron itself, for example)
already provide Tempest plugins.

This is an official RFE to have one in neutron-lbaas so we can run the tests
in a way consistent with other projects.
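
As an illustration, Tempest discovers plugins through the tempest.test_plugins
entry-point group; once neutron-lbaas registers one, it should show up in a
check like this (a sketch, not a neutron-lbaas-specific tool):

$ python -c "import pkg_resources as p; \
    print([e.name for e in p.iter_entry_points('tempest.test_plugins')])"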

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585680

Title:
  neutron-lbaas doesn't have tempest plugin

Status in neutron:
  New

Bug description:
  Puppet OpenStack CI is interested in running the neutron-lbaas Tempest
  tests, but it's currently not possible because neutron-lbaas is missing a
  Tempest plugin and its entry point, so test discovery does not work.

  Right now, to run Tempest we need to go into the neutron-lbaas directory
  and run tox there, etc.
  That's not the way to go, and other projects (Neutron itself, for example)
  already provide Tempest plugins.

  This is an official RFE to have one in neutron-lbaas so we can run the
  tests in a way consistent with other projects.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585680/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1581876] Re: neutron lbaas v2: update of default "device_driver" inside lbaas_agent.ini

2016-05-17 Thread Emilien Macchi
** Also affects: puppet-neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1581876

Title:
  neutron lbaas v2: update of default "device_driver" inside
  lbaas_agent.ini

Status in neutron:
  New
Status in puppet-neutron:
  New

Bug description:
  Dear,

  As of Mitaka only v2 of LBaaS is supported, so please update the default
  "device_driver" inside the config file /etc/neutron/lbaas_agent.ini from:

  device_driver =
  
neutron_lbaas.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver

  to

  device_driver =
  neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
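
  One way to apply that change on a deployed node (the use of crudini and the
  agent service name are assumptions; puppet-neutron would do the equivalent
  through its config providers):

  $ crudini --set /etc/neutron/lbaas_agent.ini DEFAULT device_driver \
      neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver
  $ systemctl restart neutron-lbaasv2-agent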

  More inside this IRC log:

  http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas
  /%23openstack-lbaas.2016-02-02.log.html

  Kind regards,
  Michal.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1581876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1582103] [NEW] nova-novncproxy fails to start: NoSuchOptError: no such option in group DEFAULT: verbose

2016-05-16 Thread Emilien Macchi
Public bug reported:

Description
===

nova-novncproxy process fails to start, because of this error:
NoSuchOptError: no such option in group DEFAULT: verbose


Steps to reproduce
==

1) deploy OpenStack Nova from master and oslo-config 3.9.0
2) do not configure verbose option in nova.conf, it's deprecated
3) start nova-novncproxy


Expected result
===
nova-novncproxy should start without error.

Actual result
=

nova-novncproxy starts with error:

CRITICAL nova [-] NoSuchOptError: no such option in group DEFAULT: verbose
ERROR nova Traceback (most recent call last):
ERROR nova   File "/usr/bin/nova-novncproxy", line 10, in 
ERROR nova sys.exit(main())
ERROR nova   File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", 
line 41, in main
ERROR nova port=CONF.vnc.novncproxy_port)
ERROR nova   File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", 
line 59, in proxy
ERROR nova verbose=CONF.verbose,
ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 
2189, in __getattr__
ERROR nova raise NoSuchOptError(name)
ERROR nova NoSuchOptError: no such option in group DEFAULT: verbose

Environment
===

Nova was deployed by Puppet OpenStack CI using RDO packaging from trunk
(current master).

List of packages:
http://logs.openstack.org/20/316520/2/check/gate-puppet-openstack-integration-3-scenario001-tempest-centos-7/f2c0699/logs/rpm-qa.txt.gz

Nova logs: http://logs.openstack.org/20/316520/2/check/gate-puppet-
openstack-integration-3-scenario001-tempest-centos-7/f2c0699/logs/nova
/nova-novncproxy.txt.gz

Nova config: http://logs.openstack.org/20/316520/2/check/gate-puppet-
openstack-integration-3-scenario001-tempest-
centos-7/f2c0699/logs/etc/nova/nova.conf.txt.gz

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1582103

Title:
  nova-novncproxy fails to start: NoSuchOptError: no such option in
  group DEFAULT: verbose

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  nova-novncproxy process fails to start, because of this error:
  NoSuchOptError: no such option in group DEFAULT: verbose

  
  Steps to reproduce
  ==

  1) deploy OpenStack Nova from master and oslo-config 3.9.0
  2) do not configure verbose option in nova.conf, it's deprecated
  3) start nova-novncproxy

  
  Expected result
  ===
  nova-novncproxy should start without error.

  Actual result
  =

  nova-novncproxy starts with error:

  CRITICAL nova [-] NoSuchOptError: no such option in group DEFAULT: verbose
  ERROR nova Traceback (most recent call last):
  ERROR nova   File "/usr/bin/nova-novncproxy", line 10, in 
  ERROR nova sys.exit(main())
  ERROR nova   File "/usr/lib/python2.7/site-packages/nova/cmd/novncproxy.py", 
line 41, in main
  ERROR nova port=CONF.vnc.novncproxy_port)
  ERROR nova   File "/usr/lib/python2.7/site-packages/nova/cmd/baseproxy.py", 
line 59, in proxy
  ERROR nova verbose=CONF.verbose,
  ERROR nova   File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 
2189, in __getattr__
  ERROR nova raise NoSuchOptError(name)
  ERROR nova NoSuchOptError: no such option in group DEFAULT: verbose

  Environment
  ===

  Nova was deployed by Puppet OpenStack CI using RDO packaging from
  trunk (current master).

  List of packages:
  
http://logs.openstack.org/20/316520/2/check/gate-puppet-openstack-integration-3-scenario001-tempest-centos-7/f2c0699/logs/rpm-qa.txt.gz

  Nova logs: http://logs.openstack.org/20/316520/2/check/gate-puppet-
  openstack-integration-3-scenario001-tempest-centos-7/f2c0699/logs/nova
  /nova-novncproxy.txt.gz

  Nova config: http://logs.openstack.org/20/316520/2/check/gate-puppet-
  openstack-integration-3-scenario001-tempest-
  centos-7/f2c0699/logs/etc/nova/nova.conf.txt.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1582103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577019] Re: BasicOperationsImagesTest tempest tests fail

2016-05-05 Thread Emilien Macchi
** Also affects: tempest
   Importance: Undecided
   Status: New

** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1577019

Title:
  BasicOperationsImagesTest tempest tests fail

Status in tempest:
  In Progress

Bug description:
  Deploying Glance from master (current Newton), 2 Tempest tests are
  failing all the time:

  
tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_register_upload_get_image_file
  tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_update_image

  See http://logs.openstack.org/86/305886/16/check/gate-puppet-
  openstack-integration-3-scenario003-tempest-
  centos-7/32f1afb/console.html#_2016-04-28_18_09_14_598

  404 in Glance API logs:
  eventlet.wsgi.server [req-9cad60d9-d466-4f32-87c2-02163a61f812 
bf80ccd1360e491b9175229579bd7265 ad807d82f64d403c9982494c1fdb09f3 - - -] ::1 - 
- [28/Apr/2016 18:09:14] "GET /v2/images/ed8958a0-5402-462a-884d-42ad8fefa889 
HTTP/1.1" 404 288 0.020958

  See http://logs.openstack.org/86/305886/16/check/gate-puppet-
  openstack-integration-3-scenario003-tempest-
  centos-7/32f1afb/logs/glance/api.txt.gz#_2016-04-28_18_09_14_260

  All Glance logs:
  
http://logs.openstack.org/86/305886/16/check/gate-puppet-openstack-integration-3-scenario003-tempest-centos-7/32f1afb/logs/glance/
  All Glance config: 
http://logs.openstack.org/86/305886/16/check/gate-puppet-openstack-integration-3-scenario003-tempest-centos-7/32f1afb/logs/etc/glance/

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1577019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1577019] [NEW] BasicOperationsImagesTest tempest tests fail

2016-04-30 Thread Emilien Macchi
Public bug reported:

Deploying Glance from master (current Newton), 2 Tempest tests are
failing all the time:

tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_register_upload_get_image_file
tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_update_image

See http://logs.openstack.org/86/305886/16/check/gate-puppet-openstack-
integration-3-scenario003-tempest-
centos-7/32f1afb/console.html#_2016-04-28_18_09_14_598

404 in Glance API logs:
eventlet.wsgi.server [req-9cad60d9-d466-4f32-87c2-02163a61f812 
bf80ccd1360e491b9175229579bd7265 ad807d82f64d403c9982494c1fdb09f3 - - -] ::1 - 
- [28/Apr/2016 18:09:14] "GET /v2/images/ed8958a0-5402-462a-884d-42ad8fefa889 
HTTP/1.1" 404 288 0.020958

See http://logs.openstack.org/86/305886/16/check/gate-puppet-openstack-
integration-3-scenario003-tempest-
centos-7/32f1afb/logs/glance/api.txt.gz#_2016-04-28_18_09_14_260

All Glance logs:
http://logs.openstack.org/86/305886/16/check/gate-puppet-openstack-integration-3-scenario003-tempest-centos-7/32f1afb/logs/glance/
All Glance config: 
http://logs.openstack.org/86/305886/16/check/gate-puppet-openstack-integration-3-scenario003-tempest-centos-7/32f1afb/logs/etc/glance/

** Affects: glance
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1577019

Title:
  BasicOperationsImagesTest tempest tests fail

Status in Glance:
  New

Bug description:
  Deploying Glance from master (current Newton), 2 Tempest tests are
  failing all the time:

  
tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_register_upload_get_image_file
  tempest.api.image.v2.test_images.BasicOperationsImagesTest.test_update_image

  See http://logs.openstack.org/86/305886/16/check/gate-puppet-
  openstack-integration-3-scenario003-tempest-
  centos-7/32f1afb/console.html#_2016-04-28_18_09_14_598

  404 in Glance API logs:
  eventlet.wsgi.server [req-9cad60d9-d466-4f32-87c2-02163a61f812 
bf80ccd1360e491b9175229579bd7265 ad807d82f64d403c9982494c1fdb09f3 - - -] ::1 - 
- [28/Apr/2016 18:09:14] "GET /v2/images/ed8958a0-5402-462a-884d-42ad8fefa889 
HTTP/1.1" 404 288 0.020958

  See http://logs.openstack.org/86/305886/16/check/gate-puppet-
  openstack-integration-3-scenario003-tempest-
  centos-7/32f1afb/logs/glance/api.txt.gz#_2016-04-28_18_09_14_260

  All Glance logs:
  
http://logs.openstack.org/86/305886/16/check/gate-puppet-openstack-integration-3-scenario003-tempest-centos-7/32f1afb/logs/glance/
  All Glance config: 
http://logs.openstack.org/86/305886/16/check/gate-puppet-openstack-integration-3-scenario003-tempest-centos-7/32f1afb/logs/etc/glance/

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1577019/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1570463] [NEW] RFE: keystone-manage CLI to allow using syslog & specific log files

2016-04-14 Thread Emilien Macchi
Public bug reported:

Currently, the keystone-manage CLI tool will by default write to
$log_dir/$log_file, which in most cases is /var/log/keystone.log.

Some actions (like fernet key generation) are dynamic, and having them
in a separate logfile would be a nice feature for operators. Also,
supporting syslog would be very helpful for production deployments.
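
For example, once implemented, an operator could do something like the
following (hypothetical invocations; which oslo.log options end up wired into
keystone-manage is exactly what this RFE is about):

$ keystone-manage --log-file /var/log/keystone/keystone-manage.log fernet_rotate
$ keystone-manage --use-syslog --syslog-log-facility LOG_LOCAL0 fernet_rotate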

** Affects: keystone
 Importance: Medium
 Assignee: Ron De Rose (ronald-de-rose)
 Status: Triaged

** Affects: keystone/newton
 Importance: Medium
 Assignee: Ron De Rose (ronald-de-rose)
 Status: Triaged


** Tags: fernet logging low-hanging-fruit

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1570463

Title:
  RFE: keystone-manage CLI to allow using syslog & specific log files

Status in OpenStack Identity (keystone):
  Triaged
Status in OpenStack Identity (keystone) newton series:
  Triaged

Bug description:
  Currently, the keystone-manage CLI tool will by default write to
  $log_dir/$log_file, which in most cases is /var/log/keystone.log.

  Some actions (like fernet key generation) are dynamic, and having
  them in a separate logfile would be a nice feature for operators.
  Also, supporting syslog would be very helpful for production
  deployments.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1570463/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557814] Re: Tempest tests fail on Glance: "Endpoint not found" when IPv6

2016-03-15 Thread Emilien Macchi
** No longer affects: glance

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1557814

Title:
  Tempest tests fail on Glance: "Endpoint not found" when IPv6

Status in tempest:
  New

Bug description:
  Deploying OpenStack Glance on an IPv6 network.

  Glance config: http://logs.openstack.org/21/287521/26/check/gate-
  puppet-openstack-integration-scenario002-tempest-dsvm-
  centos7/a2cdb66/logs/etc/glance/

  Glance logs: http://logs.openstack.org/21/287521/26/check/gate-puppet-
  openstack-integration-scenario002-tempest-dsvm-
  centos7/a2cdb66/logs/glance/

  Tempest config: http://logs.openstack.org/21/287521/26/check/gate-
  puppet-openstack-integration-scenario002-tempest-dsvm-
  centos7/a2cdb66/logs/tempest.conf.txt.gz

  Tempest errors:
  
http://logs.openstack.org/21/287521/26/check/gate-puppet-openstack-integration-scenario002-tempest-dsvm-centos7/a2cdb66/console.html#_2016-03-15_23_57_52_486

  tempest.exceptions.EndpointNotFound: Endpoint not found
  Details: Error finding address for 
https://[::1]:9292/v2/images/db7b3ba5-adda-4805-b547-6d1a4d3cd7d8/file: [Errno 
-9] Address family for hostname not supported

To manage notifications about this bug go to:
https://bugs.launchpad.net/tempest/+bug/1557814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1557814] [NEW] Tempest tests fail on Glance: "Endpoint not found" when IPv6

2016-03-15 Thread Emilien Macchi
Public bug reported:

Deploying OpenStack Glance on an IPv6 network.

Glance config: http://logs.openstack.org/21/287521/26/check/gate-puppet-
openstack-integration-scenario002-tempest-dsvm-
centos7/a2cdb66/logs/etc/glance/

Glance logs: http://logs.openstack.org/21/287521/26/check/gate-puppet-
openstack-integration-scenario002-tempest-dsvm-
centos7/a2cdb66/logs/glance/

Tempest config: http://logs.openstack.org/21/287521/26/check/gate-
puppet-openstack-integration-scenario002-tempest-dsvm-
centos7/a2cdb66/logs/tempest.conf.txt.gz

Tempest errors:
http://logs.openstack.org/21/287521/26/check/gate-puppet-openstack-integration-scenario002-tempest-dsvm-centos7/a2cdb66/console.html#_2016-03-15_23_57_52_486

tempest.exceptions.EndpointNotFound: Endpoint not found
Details: Error finding address for 
https://[::1]:9292/v2/images/db7b3ba5-adda-4805-b547-6d1a4d3cd7d8/file: [Errno 
-9] Address family for hostname not supported

** Affects: glance
 Importance: Undecided
 Status: New

** Affects: tempest
 Importance: Undecided
 Status: New

** Also affects: glance
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1557814

Title:
  Tempest tests fail on Glance: "Endpoint not found" when IPv6

Status in Glance:
  New
Status in tempest:
  New

Bug description:
  Deploying OpenStack Glance on an IPv6 network.

  Glance config: http://logs.openstack.org/21/287521/26/check/gate-
  puppet-openstack-integration-scenario002-tempest-dsvm-
  centos7/a2cdb66/logs/etc/glance/

  Glance logs: http://logs.openstack.org/21/287521/26/check/gate-puppet-
  openstack-integration-scenario002-tempest-dsvm-
  centos7/a2cdb66/logs/glance/

  Tempest config: http://logs.openstack.org/21/287521/26/check/gate-
  puppet-openstack-integration-scenario002-tempest-dsvm-
  centos7/a2cdb66/logs/tempest.conf.txt.gz

  Tempest errors:
  
http://logs.openstack.org/21/287521/26/check/gate-puppet-openstack-integration-scenario002-tempest-dsvm-centos7/a2cdb66/console.html#_2016-03-15_23_57_52_486

  tempest.exceptions.EndpointNotFound: Endpoint not found
  Details: Error finding address for 
https://[::1]:9292/v2/images/db7b3ba5-adda-4805-b547-6d1a4d3cd7d8/file: [Errno 
-9] Address family for hostname not supported

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1557814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542486] Re: nova-compute stack traces with BadRequest: Specifying 'tenant_id' other than authenticated tenant in request requires admin privileges

2016-02-16 Thread Emilien Macchi
Jamie, we fixed it in puppet-nova:
https://review.openstack.org/#/c/276932/

** Changed in: puppet-nova
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542486

Title:
  nova-compute stack traces with BadRequest: Specifying 'tenant_id'
  other than authenticated tenant in request requires admin privileges

Status in OpenStack Identity (keystone):
  Invalid
Status in keystonemiddleware:
  New
Status in OpenStack Compute (nova):
  Incomplete
Status in puppet-nova:
  Fix Released

Bug description:
  The puppet-openstack-integration tests (rebased on
  https://review.openstack.org/#/c/276773/ ) currently fail on the
  latest version of RDO Mitaka (delorean current) due to what seems to
  be a problem with the neutron configuration.

  Everything installs fine but tempest fails:
  
http://logs.openstack.org/92/276492/6/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/78b9c32/console.html#_2016-02-05_20_26_35_569

  And there are stack traces in nova-compute.log:
  
http://logs.openstack.org/92/276492/6/check/gate-puppet-openstack-integration-scenario001-tempest-dsvm-centos7/78b9c32/logs/nova/nova-compute.txt.gz#_2016-02-05_20_22_16_151

  I talked with #openstack-nova and they pointed out a difference between what 
devstack yields as a [neutron] configuration versus what puppet-nova configures:
  
  # puppet-nova via puppet-openstack-integration
  
  [neutron]
  service_metadata_proxy=True
  metadata_proxy_shared_secret =a_big_secret
  url=http://127.0.0.1:9696
  region_name=RegionOne
  ovs_bridge=br-int
  extension_sync_interval=600
  auth_url=http://127.0.0.1:35357
  password=a_big_secret
  tenant_name=services
  timeout=30
  username=neutron
  auth_plugin=password
  default_tenant_id=default

  
  # Well, it worked in devstack™
  
  [neutron]
  service_metadata_proxy = True
  url = http://127.0.0.1:9696
  region_name = RegionOne
  auth_url = http://127.0.0.1:35357/v3
  password = secretservice
  auth_strategy = keystone
  project_domain_name = Default
  project_name = service
  user_domain_name = Default
  username = neutron
  auth_plugin = v3password

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1542486/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1488740] Re: neutron dbsync fails with 44621190bc02_add_uniqueconstraint_ipavailability_ranges.py

2015-08-26 Thread Emilien Macchi
I think it might be invalid, because RDO needs an Alembic upgrade.
I'll close it, and re-open it if I still have the issue.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1488740

Title:
  neutron dbsync fails with
  44621190bc02_add_uniqueconstraint_ipavailability_ranges.py

Status in neutron:
  Invalid

Bug description:
  2015-08-26 02:50:39.383 | Debug: Executing 'neutron-db-manage
  --config-file /etc/neutron/neutron.conf --config-file
  /etc/neutron/plugin.ini upgrade head'

  2015-08-26 02:50:41.398 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: No handlers could 
be found for logger neutron.quota
  2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Context impl MySQLImpl.
  2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Will assume non-transactional DDL.
  2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Context impl MySQLImpl.
  2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Will assume non-transactional DDL.
  2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Running upgrade  -> juno, juno_initial
  2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Running upgrade juno -> 44621190bc02, 
add_uniqueconstraint_ipavailability_ranges
  2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Running upgrade 
for neutron ...
  2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Traceback (most 
recent call last):
  2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/bin/neutron-db-manage, line 10, in module
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
sys.exit(main())
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 519, in 
main
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
CONF.command.func(config, CONF.command.name)
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 152, in 
do_upgrade
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 106, in 
do_alembic_command
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
getattr(alembic_command, cmd)(config, *args, **kwargs)
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/command.py, line 165, in upgrade
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
script.run_env()
  2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/script.py, line 382, in run_env
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
util.load_python_file(self.dir, 'env.py')
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/util.py, line 242, in 
load_python_file
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: module = 
load_module_py(module_id, path)
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/compat.py, line 79, in load_module_py
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: mod = 
imp.load_source(module_id, path, fp)
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py,
 line 126, in module
  2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:  

[Yahoo-eng-team] [Bug 1488740] [NEW] neutron dbsync fails with 44621190bc02_add_uniqueconstraint_ipavailability_ranges.py

2015-08-25 Thread Emilien Macchi
Public bug reported:

2015-08-26 02:50:39.383 | Debug: Executing 'neutron-db-manage --config-
file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini
upgrade head'

2015-08-26 02:50:41.398 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: No handlers could 
be found for logger neutron.quota
2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Context impl MySQLImpl.
2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Will assume non-transactional DDL.
2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Context impl MySQLImpl.
2015-08-26 02:50:41.399 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Will assume non-transactional DDL.
2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Running upgrade  -> juno, juno_initial
2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: INFO  
[alembic.migration] Running upgrade juno -> 44621190bc02, 
add_uniqueconstraint_ipavailability_ranges
2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Running upgrade 
for neutron ...
2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: Traceback (most 
recent call last):
2015-08-26 02:50:41.401 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/bin/neutron-db-manage, line 10, in module
2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
sys.exit(main())
2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 519, in 
main
2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
CONF.command.func(config, CONF.command.name)
2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 152, in 
do_upgrade
2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/cli.py, line 106, in 
do_alembic_command
2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
getattr(alembic_command, cmd)(config, *args, **kwargs)
2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/command.py, line 165, in upgrade
2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
script.run_env()
2015-08-26 02:50:41.402 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/script.py, line 382, in run_env
2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
util.load_python_file(self.dir, 'env.py')
2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/util.py, line 242, in 
load_python_file
2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: module = 
load_module_py(module_id, path)
2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/alembic/compat.py, line 79, in load_module_py
2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: mod = 
imp.load_source(module_id, path, fp)
2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py,
 line 126, in module
2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
run_migrations_online()
2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
/usr/lib/python2.7/site-packages/neutron/db/migration/alembic_migrations/env.py,
 line 117, in run_migrations_online
2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns: 
context.run_migrations()
2015-08-26 02:50:41.403 | Notice: 
/Stage[main]/Neutron::Db::Sync/Exec[neutron-db-sync]/returns:   File 
string, line 7, in run_migrations

[Yahoo-eng-team] [Bug 1470635] Re: endpoints added with v3 are not visible with v2

2015-07-01 Thread Emilien Macchi
** Also affects: puppet-keystone
   Importance: Undecided
   Status: New

** Changed in: puppet-keystone
   Status: New => Confirmed

** Changed in: puppet-keystone
   Importance: Undecided => High

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1470635

Title:
  endpoints added with v3 are not visible with v2

Status in OpenStack Identity (Keystone):
  New
Status in Puppet module for Keystone:
  Confirmed

Bug description:
  Create an endpoint with v3::

  # openstack --os-identity-api-version 3 [--admin credentials]
  endpoint create 

  Try to list the endpoints with v2::

  # openstack --os-identity-api-version 2 [--admin credentials]
  endpoint list

  Nothing is returned.
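
  For illustration, a minimal sketch of the same sequence with concrete
  arguments (region, service name and URL are placeholders, not taken from
  this report)::

  # openstack --os-identity-api-version 3 [--admin credentials] endpoint
  create --region RegionOne identity public http://controller:5000/v3

  # openstack --os-identity-api-version 2 [--admin credentials] endpoint list

  The second command returns an empty list, even though the endpoint is
  visible when queried through the v3 API.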

  We are in the process of trying to convert puppet-keystone to v3 with
  the goal of maintaining backwards compatibility.  That means we want
  admins/operators not to have to change any existing workflow.  This
  bug causes 'openstack endpoint list' to return nothing, which breaks
  existing workflows and backwards compatibility.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1470635/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1302814] Re: nova notifications configuration takes tenant_id in config

2014-06-05 Thread Emilien Macchi
** Changed in: neutron
   Status: In Progress => Fix Committed

** Changed in: neutron
   Status: Fix Committed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302814

Title:
  nova notifications configuration takes tenant_id in config

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  Since Icehouse, Neutron is able to send notifications to the Nova API about
  some networking events.
  To make this work, you have to provide nova_admin_tenant_id in neutron.conf,
  which is painful for configuration management when deploying OpenStack in
  production.

  As in other OpenStack projects, we should have a new parameter,
  nova_admin_tenant_name, so that deployers do not have to query the
  Keystone API for the real tenant ID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1326064] [NEW] RPC error in LBaaS with unplug_vip_port method

2014-06-03 Thread Emilien Macchi
Public bug reported:

Neutron Icehouse with LBaaS enabled and RabbitMQ in HA, running these
Tempest tests:
tempest.api.network.test_load_balancer.LoadBalancerTestXML.test_list_members_with_filters[gate,smoke]
tempest.api.network.test_load_balancer.LoadBalancerTestXML.test_list_members_with_filters[gate,smoke]
 ... FAIL
tempest.api.network.test_load_balancer.LoadBalancerTestXML.test_list_pools[gate,smoke]
tempest.api.network.test_load_balancer.LoadBalancerTestXML.test_list_pools[gate,smoke]
 ... FAIL


Trace:
2014-06-03 13:41:19.354 10302 ERROR neutron.openstack.common.rpc.amqp [-] Exception during message handling
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp Traceback (most recent call last):
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py", line 462, in _process_data
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp     **args)
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/agent/agent_manager.py", line 235, in delete_vip
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp     driver.delete_vip(vip)
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 295, in delete_vip
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp     self.undeploy_instance(vip['pool_id'])
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/lockutils.py", line 249, in inner
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp     return f(*args, **kwargs)
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 126, in undeploy_instance
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp     self._unplug(namespace, self.pool_to_port_id[pool_id])
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/drivers/haproxy/namespace_driver.py", line 262, in _unplug
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp     self.plugin_rpc.unplug_vip_port(port_id)
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/services/loadbalancer/agent/agent_api.py", line 87, in unplug_vip_port
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp     topic=self.topic
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/proxy.py", line 129, in call
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp     exc.info, real_topic, msg.get('method'))
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp Timeout: Timeout while waiting on RPC response - topic: "n-lbaas-plugin", RPC method: "unplug_vip_port" info: "<unknown>"
2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.
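
A quick diagnostic sketch (not from the report; it assumes RabbitMQ and shell
access to rabbitmqctl on the broker node) to check whether anything is
consuming the n-lbaas-plugin topic that timed out above, and whether the
LBaaS service plugin is actually loaded on the server side:

$ rabbitmqctl list_queues name consumers | grep n-lbaas
$ grep service_plugins /etc/neutron/neutron.conf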

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1326064

Title:
  RPC error in LBaaS with unplug_vip_port method

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Neutron Icehouse with LBaaS enabled and RabbitMQ in HA, running these
  Tempest tests:
  
tempest.api.network.test_load_balancer.LoadBalancerTestXML.test_list_members_with_filters[gate,smoke]
  
tempest.api.network.test_load_balancer.LoadBalancerTestXML.test_list_members_with_filters[gate,smoke]
 ... FAIL
  
tempest.api.network.test_load_balancer.LoadBalancerTestXML.test_list_pools[gate,smoke]
  
tempest.api.network.test_load_balancer.LoadBalancerTestXML.test_list_pools[gate,smoke]
 ... FAIL

  
  Trace:
  2014-06-03 13:41:19.354 10302 ERROR neutron.openstack.common.rpc.amqp [-] Exception during message handling
  2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp Traceback (most recent call last):
  2014-06-03 13:41:19.354 10302 TRACE neutron.openstack.common.rpc.amqp   File "/usr/lib/python2.7/dist-packages/neutron/openstack/common/rpc/amqp.py", line 462, in _process_data
  2014-06-03 13:41:19.354 10302 TRACE 

[Yahoo-eng-team] [Bug 1320855] [NEW] sql: migration from 37 to 38 version fails

2014-05-19 Thread Emilien Macchi
Public bug reported:

Migration from Havana to Icehouse fails with a db_sync error when
migrating the SQL schema from version 37 to 38:

CRITICAL keystone [-] OperationalError: (OperationalError) (1005, "Can't create table 'keystone.assignment' (errno: 150)") \nCREATE TABLE assignment (\n\ttype ENUM('UserProject','GroupProject','UserDomain','GroupDomain') NOT NULL, \n\tactor_id VARCHAR(64) NOT NULL, \n\ttarget_id VARCHAR(64) NOT NULL, \n\trole_id VARCHAR(64) NOT NULL, \n\tinherited BOOL NOT NULL, \n\tPRIMARY KEY (type, actor_id, target_id, role_id), \n\tFOREIGN KEY(role_id) REFERENCES role (id), \n\tCHECK (inherited IN (0, 1))\n)\n\n ()
2014-05-19 09:57:51.445 40373 TRACE keystone Traceback (most recent call last):
2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/bin/keystone-manage", line 51, in <module>
2014-05-19 09:57:51.445 40373 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 191, in main
2014-05-19 09:57:51.445 40373 TRACE keystone     CONF.command.cmd_class.main()
2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 67, in main
2014-05-19 09:57:51.445 40373 TRACE keystone     migration_helpers.sync_database_to_version(extension, version)
2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration_helpers.py", line 139, in sync_database_to_version
2014-05-19 09:57:51.445 40373 TRACE keystone     migration.db_sync(sql.get_engine(), abs_path, version=version)
2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/openstack/common/db/sqlalchemy/migration.py", line 197, in db_sync
2014-05-19 09:57:51.445 40373 TRACE keystone     return versioning_api.upgrade(engine, repository, version)
2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/migrate/versioning/api.py", line 186, in upgrade
2014-05-19 09:57:51.445 40373 TRACE keystone
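
As a side note (not from the report): MySQL errno 150 means the FOREIGN KEY
on the new assignment table could not be created, which usually points at a
mismatch with the referenced role table (storage engine, charset, or column
type). A diagnostic sketch, assuming the MySQL client and a database named
keystone:

$ mysql keystone -e "SELECT TABLE_NAME, ENGINE, TABLE_COLLATION FROM information_schema.TABLES WHERE TABLE_NAME IN ('role', 'assignment');"
$ mysql keystone -e "SHOW ENGINE INNODB STATUS\G" | grep -A 20 'FOREIGN KEY ERROR'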

** Affects: keystone
 Importance: Undecided
 Assignee: Emilien Macchi (emilienm)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Emilien Macchi (emilienm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1320855

Title:
  sql: migration from 37 to 38 version fails

Status in OpenStack Identity (Keystone):
  New

Bug description:
  Migration from Havana to Icehouse fails with a db_sync error when
  migrating the SQL schema from version 37 to 38:

  CRITICAL keystone [-] OperationalError: (OperationalError) (1005, "Can't create table 'keystone.assignment' (errno: 150)") \nCREATE TABLE assignment (\n\ttype ENUM('UserProject','GroupProject','UserDomain','GroupDomain') NOT NULL, \n\tactor_id VARCHAR(64) NOT NULL, \n\ttarget_id VARCHAR(64) NOT NULL, \n\trole_id VARCHAR(64) NOT NULL, \n\tinherited BOOL NOT NULL, \n\tPRIMARY KEY (type, actor_id, target_id, role_id), \n\tFOREIGN KEY(role_id) REFERENCES role (id), \n\tCHECK (inherited IN (0, 1))\n)\n\n ()
  2014-05-19 09:57:51.445 40373 TRACE keystone Traceback (most recent call last):
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/bin/keystone-manage", line 51, in <module>
  2014-05-19 09:57:51.445 40373 TRACE keystone     cli.main(argv=sys.argv, config_files=config_files)
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 191, in main
  2014-05-19 09:57:51.445 40373 TRACE keystone     CONF.command.cmd_class.main()
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/cli.py", line 67, in main
  2014-05-19 09:57:51.445 40373 TRACE keystone     migration_helpers.sync_database_to_version(extension, version)
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/common/sql/migration_helpers.py", line 139, in sync_database_to_version
  2014-05-19 09:57:51.445 40373 TRACE keystone     migration.db_sync(sql.get_engine(), abs_path, version=version)
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/keystone/openstack/common/db/sqlalchemy/migration.py", line 197, in db_sync
  2014-05-19 09:57:51.445 40373 TRACE keystone     return versioning_api.upgrade(engine, repository, version)
  2014-05-19 09:57:51.445 40373 TRACE keystone   File "/usr/lib/python2.7/dist-packages/migrate/versioning/api.py", line 186, in upgrade
  2014-05-19 09:57:51.445 40373 TRACE keystone

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1320855/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net

[Yahoo-eng-team] [Bug 1320901] [NEW] sql: migration fails with embrane_lbaas_driver

2014-05-19 Thread Emilien Macchi
Public bug reported:

Migration from Havana to Icehouse using the HAProxy driver for the LBaaS service:

root@os-ci-test10:~# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini upgrade head
No handlers could be found for logger "neutron.common.legacy"
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade 19180cf98af6 -> 33dd0a9fa487, embrane_lbaas_driver
Traceback (most recent call last):
  File "/usr/bin/neutron-db-manage", line 10, in <module>
    sys.exit(main())
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 169, in main
    CONF.command.func(config, CONF.command.name)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 82, in do_upgrade_downgrade
    do_alembic_command(config, cmd, revision, sql=CONF.command.sql)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/cli.py", line 60, in do_alembic_command
    getattr(alembic_command, cmd)(config, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/alembic/command.py", line 124, in upgrade
    script.run_env()
  File "/usr/lib/python2.7/dist-packages/alembic/script.py", line 199, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/lib/python2.7/dist-packages/alembic/util.py", line 199, in load_python_file
    module = load_module(module_id, path)
  File "/usr/lib/python2.7/dist-packages/alembic/compat.py", line 55, in load_module
    mod = imp.load_source(module_id, path, fp)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 103, in <module>
    run_migrations_online()
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/env.py", line 87, in run_migrations_online
    options=build_options())
  File "<string>", line 7, in run_migrations
  File "/usr/lib/python2.7/dist-packages/alembic/environment.py", line 652, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/lib/python2.7/dist-packages/alembic/migration.py", line 225, in run_migrations
    change(**kw)
  File "/usr/lib/python2.7/dist-packages/neutron/db/migration/alembic_migrations/versions/33dd0a9fa487_embrane_lbaas_driver.py", line 54, in upgrade
    sa.PrimaryKeyConstraint(u'pool_id'))
  File "<string>", line 7, in create_table
  File "/usr/lib/python2.7/dist-packages/alembic/operations.py", line 647, in create_table
    self._table(name, *columns, **kw)
  File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 149, in create_table
    self._exec(schema.CreateTable(table))
  File "/usr/lib/python2.7/dist-packages/alembic/ddl/impl.py", line 76, in _exec
    conn.execute(construct, *multiparams, **params)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 662, in execute
    params)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 720, in _execute_ddl
    compiled
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 874, in _execute_context
    context)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1024, in _handle_dbapi_exception
    exc_info
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 196, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 867, in _execute_context
    context)
  File "/usr/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 324, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/cursors.py", line 174, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
    raise errorclass, errorvalue
sqlalchemy.exc.OperationalError: (OperationalError) (1005, "Can't create table 'neutron.embrane_pool_port' (errno: 150)") '\nCREATE TABLE embrane_pool_port (\n\tpool_id VARCHAR(36) NOT NULL, \n\tport_id VARCHAR(36) NOT NULL, \n\tPRIMARY KEY (pool_id), \n\tCONSTRAINT embrane_pool_port_ibfk_1 FOREIGN KEY(pool_id) REFERENCES pools (id), \n\tCONSTRAINT embrane_pool_port_ibfk_2 FOREIGN KEY(port_id) REFERENCES ports (id)\n)\n\n' ()
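
Two checks that may help narrow this down (not part of the report; they
assume the same config files as above and a MySQL database named neutron).
First, the current alembic revision before retrying the upgrade; second,
whether the referenced pools and ports tables exist and use the same storage
engine, since errno 150 is MySQL failing to create the FOREIGN KEY:

root@os-ci-test10:~# neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini current
root@os-ci-test10:~# mysql neutron -e "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_NAME IN ('pools', 'ports', 'embrane_pool_port');"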

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1320901

Title:
  sql: migration fails with embrane_lbaas_driver

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Migration from Havana to Icehouse using the HAProxy driver for the LBaaS service:

  root@os-ci-test10:~# neutron-db-manage --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini  upgrade head
  No handlers could be found for logger neutron.common.legacy
  INFO  [alembic.migration] Context impl MySQLImpl.
  INFO  [alembic.migration] Will 

[Yahoo-eng-team] [Bug 1302814] [NEW] nova notifications configuration takes tenant_id in config

2014-04-04 Thread Emilien Macchi
Public bug reported:

Since Icehouse, Neutron is able to send notifications to the Nova API about
some networking events.
To make this work, you have to provide nova_admin_tenant_id in neutron.conf,
which is painful for configuration management when deploying OpenStack in
production.

As in other OpenStack projects, we should have a new parameter,
nova_admin_tenant_name, so that deployers do not have to query the
Keystone API for the real tenant ID.
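
A minimal neutron.conf sketch of the difference (the tenant ID value is a
placeholder, and nova_admin_tenant_name is the proposed option, not one that
exists at the time of this report):

[DEFAULT]
# today: the tenant ID has to be looked up in Keystone beforehand
nova_admin_tenant_id = 4b5c6d7e8f9a4b1c8d3e2f5a6b7c8d9e
# proposed: let Neutron resolve the name itself
# nova_admin_tenant_name = services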

** Affects: neutron
 Importance: Undecided
 Assignee: Emilien Macchi (emilienm)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Emilien Macchi (emilienm)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1302814

Title:
  nova notifications configuration takes tenant_id in config

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  Since Icehouse, Neutron is able to send notifications to the Nova API about
  some networking events.
  To make this work, you have to provide nova_admin_tenant_id in neutron.conf,
  which is painful for configuration management when deploying OpenStack in
  production.

  As in other OpenStack projects, we should have a new parameter,
  nova_admin_tenant_name, so that deployers do not have to query the
  Keystone API for the real tenant ID.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1302814/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281777] Re: ml2/ovs: veth_mtu has no effect

2014-02-19 Thread Emilien Macchi
** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281777

Title:
  ml2/ovs: veth_mtu has no effect

Status in OpenStack Neutron (virtual network service):
  Invalid

Bug description:
  This bug could be invalid, but I am reporting it anyway.

  Environment: OpenStack Havana 2013.2 / Debian Wheezy / Linux kernel
  3.12 / OVS 2.0.1 / Neutron ML2 Plugin with OVS Agent

  ml2_conf.ini: http://paste.openstack.org/show/jcbw7UmsIYjrCpSLaWi3/

  When I spawn a VM, both the TAP (hosted on compute) and the eth (in
  the VM) continue to have 1500 as MTU, while I'm trying to enforce
  1495.

  The only way I found was to set the
  DEFAULT/dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf parameter
  in the DHCP agent configuration file and to create
  /etc/neutron/dnsmasq-neutron.conf with dhcp-option-force=26,1495.

  Am I missing something?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1281777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1281777] [NEW] ml2/ovs: veth_mtu has no effect

2014-02-18 Thread Emilien Macchi
Public bug reported:

This bug could be invalid, but I am reporting it anyway.

Environment: OpenStack Havana 2013.2 / Debian Wheezy / Linux kernel 3.12
/ OVS 2.0.1 / Neutron ML2 Plugin with OVS Agent

ml2_conf.ini: http://paste.openstack.org/show/jcbw7UmsIYjrCpSLaWi3/

When I spawn a VM, both the TAP (hosted on compute) and the eth (in the
VM) continue to have 1500 as MTU, while I'm trying to enforce 1495.

The only way I found was to set the
DEFAULT/dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf parameter
in the DHCP agent configuration file and to create
/etc/neutron/dnsmasq-neutron.conf with dhcp-option-force=26,1495.

Am I missing something?
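
For reference, the workaround above as a concrete sketch (dhcp_agent.ini is
assumed to be the DHCP agent configuration file; the dnsmasq path is as in
this report, and DHCP option 26 is the interface MTU):

# /etc/neutron/dhcp_agent.ini
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1495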

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1281777

Title:
  ml2/ovs: veth_mtu has no effect

Status in OpenStack Neutron (virtual network service):
  New

Bug description:
  This bug could be invalid, but I am reporting it anyway.

  Environment: OpenStack Havana 2013.2 / Debian Wheezy / Linux kernel
  3.12 / OVS 2.0.1 / Neutron ML2 Plugin with OVS Agent

  ml2_conf.ini: http://paste.openstack.org/show/jcbw7UmsIYjrCpSLaWi3/

  When I spawn a VM, both the TAP (hosted on compute) and the eth (in
  the VM) continue to have 1500 as MTU, while I'm trying to enforce
  1495.

  The only way I found was to set the
  DEFAULT/dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf parameter
  in the DHCP agent configuration file and to create
  /etc/neutron/dnsmasq-neutron.conf with dhcp-option-force=26,1495.

  Am I missing something?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1281777/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1262678] Re: Missing firewall_driver with ml2 breaks neutron securitygroups API

2013-12-19 Thread Emilien Macchi
** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
 Assignee: (unassigned) => Emilien Macchi (emilienm)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1262678

Title:
  Missing firewall_driver with ml2 breaks neutron securitygroups API

Status in OpenStack Neutron (virtual network service):
  In Progress
Status in OpenStack Manuals:
  In Progress
Status in Puppet module for Neutron:
  In Progress

Bug description:
  When using nova 'security_group_api=neutron' and neutron
  'core_plugin=neutron.plugins.ml2.plugin.Ml2Plugin' with the 'vlan'
  type_driver/tenant_network_type, no securitygroup/firewall_driver is
  set in /etc/neutron/plugins.ini (which is symlinked to
  /etc/neutron/plugins/ml2/ml2_conf.ini).  This causes the 'neutron
  security-group-list' command to return 404 Not Found.

  Adding these two lines to ml2_conf.ini and restarting neutron-server
  causes the 'neutron security-group-list' command to function properly:

  [securitygroup]
  
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

  I have NOT confirmed full functionality (firewall operation) with this
  change -- I've only tested that the API now exists.
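
  Putting the two sides together as a sketch (both values are quoted from
  this report; the ml2_conf.ini path follows the symlink mentioned above):

  # /etc/nova/nova.conf
  [DEFAULT]
  security_group_api = neutron

  # /etc/neutron/plugins/ml2/ml2_conf.ini
  [securitygroup]
  firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver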

  Environment: Using RDO Havana on CentOS 6.5 with very recent patches.
  nova-api and neutron-server on the same machine, deployed entirely via
  puppet.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1262678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp