[Yahoo-eng-team] [Bug 1477860] [NEW] TestAsyncProcess.test_async_process_respawns fails with TimeoutException

2015-07-24 Thread Ihar Hrachyshka
Public bug reported:

Logstash:
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gdGVzdF9hc3luY19wcm9jZXNzX3Jlc3Bhd25zXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mzc3MjMxNTU2ODB9

fails for both feature/qos and master:

2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.369 | Captured traceback:
2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.370 | ~~~
2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.371 | Traceback (most recent call last):
2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.372 |   File "neutron/tests/functional/agent/linux/test_async_process.py", line 70, in test_async_process_respawns
2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.373 |     proc._kill_process(proc.pid)
2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.375 |   File "neutron/agent/linux/async_process.py", line 177, in _kill_process
2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.376 |     self._process.wait()
2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.377 |   File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/green/subprocess.py", line 75, in wait
2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.378 |     eventlet.sleep(check_interval)
2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.379 |   File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/greenthread.py", line 34, in sleep
2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.380 |     hub.switch()
2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.381 |   File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 294, in switch
2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.382 |     return self.greenlet.switch()
2015-07-24 06:35:09.396 | 2015-07-24 06:35:07.383 |   File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 346, in run
2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.384 |     self.wait(sleep_time)
2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.385 |   File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/poll.py", line 85, in wait
2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.387 |     presult = self.do_poll(seconds)
2015-07-24 06:35:09.397 | 2015-07-24 06:35:07.388 |   File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/hubs/epolls.py", line 62, in do_poll
2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.389 |     return self.poll.poll(seconds)
2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.390 |   File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/fixtures/_fixtures/timeout.py", line 52, in signal_handler
2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.391 |     raise TimeoutException()
2015-07-24 06:35:09.398 | 2015-07-24 06:35:07.392 | fixtures._fixtures.timeout.TimeoutException

Example: http://logs.openstack.org/64/199164/2/check/gate-neutron-dsvm-functional/9b43ead/console.html
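
The timeout fires while _kill_process blocks in self._process.wait(). A minimal sketch of one way to bound that wait, assuming eventlet's Timeout (the helper name and the SIGKILL escalation are illustrative only, not the actual neutron fix):

    import eventlet

    def _kill_process_bounded(process, timeout=5):
        # Hypothetical helper: give the child `timeout` seconds to exit,
        # then escalate to SIGKILL instead of blocking the test forever.
        try:
            with eventlet.Timeout(timeout):
                process.wait()
        except eventlet.Timeout:
            process.kill()
            process.wait()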

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477860

Title:
  TestAsyncProcess.test_async_process_respawns fails with
  TimeoutException

Status in neutron:
  New

Bug description:
  Logstash:
  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiaW4gdGVzdF9hc3luY19wcm9jZXNzX3Jlc3Bhd25zXCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mzc3MjMxNTU2ODB9

  fails for both feature/qos and master:

  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.369 | Captured traceback:
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.370 | ~~~
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.371 | Traceback (most recent call last):
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.372 |   File "neutron/tests/functional/agent/linux/test_async_process.py", line 70, in test_async_process_respawns
  2015-07-24 06:35:09.394 | 2015-07-24 06:35:07.373 |     proc._kill_process(proc.pid)
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.375 |   File "neutron/agent/linux/async_process.py", line 177, in _kill_process
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.376 |     self._process.wait()
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.377 |   File "/opt/stack/new/neutron/.tox/dsvm-functional/local/lib/python2.7/site-packages/eventlet/green/subprocess.py", line 75, in wait
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.378 |     eventlet.sleep(check_interval)
  2015-07-24 06:35:09.395 | 2015-07-24 06:35:07.379 |   File

[Yahoo-eng-team] [Bug 1477851] [NEW] missing logging tag _LI

2015-07-24 Thread Yusuke Hayashi
Public bug reported:

I found a missing logging tag _LI at
keystone/keystone/tests/unit/ksfixtures/hacking.py

l.368
LOG.warn(_LI('this should cause an error'))
=> LOG.warn(_LW( ... ))

l.371
LOG.debug(_LI('this should cause an error'))
=> LOG.debug( ... )
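
For reference, a minimal sketch of the logging-marker convention the report assumes (hedged: keystone.i18n provides these markers; whether hacking.py itself should change is a separate question):

    from keystone.i18n import _LI, _LW

    LOG.warn(_LW('warnings are marked with _LW'))       # not _LI
    LOG.info(_LI('info messages are marked with _LI'))
    LOG.debug('debug messages are not translated')      # no marker at all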

** Affects: keystone
 Importance: Undecided
 Assignee: Yusuke Hayashi (hayashi-yusuke)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Yusuke Hayashi (hayashi-yusuke)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1477851

Title:
  missing logging tag _LI

Status in Keystone:
  New

Bug description:
  I found a missing logging tag _LI at
  keystone/keystone/tests/unit/ksfixtures/hacking.py

  l.368
  LOG.warn(_LI('this should cause an error'))
  => LOG.warn(_LW( ... ))

  l.371
  LOG.debug(_LI('this should cause an error'))
  => LOG.debug( ... )

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1477851/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477889] [NEW] fixed create subnetpool option '--pool-prefix' as an optional argument in neutronclient

2015-07-24 Thread zhaobo
Public bug reported:

neutron --version
2.6.0

The --pool-prefix option is required, but the neutron client shows it as
an optional argument. So if you just input the name, the server returns
a message like "Invalid input for prefixes. Reason: 'None' is not a
valid IP subnet."
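
For illustration, a create request that satisfies the server today has
to pass the option explicitly (the prefix value below is an arbitrary
example):

neutron subnetpool-create --pool-prefix 10.10.0.0/16 pool1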

Before:
neutron help subnetpool-create
usage: neutron subnetpool-create [-h] [-f {html,json,shell,table,value,yaml}]
 [-c COLUMN] [--max-width integer]
 [--prefix PREFIX]
 [--request-format {json,xml}]
 [--tenant-id TENANT_ID]
 [--min-prefixlen MIN_PREFIXLEN]
 [--max-prefixlen MAX_PREFIXLEN]
 [--default-prefixlen DEFAULT_PREFIXLEN]
 [--pool-prefix PREFIXES] [--shared]
 name

Create a subnetpool for a given tenant.

positional arguments:
  name  Name of subnetpool to create.

optional arguments:
  -h, --helpshow this help message and exit
  --request-format {json,xml}
The XML or JSON request format.
  --tenant-id TENANT_ID
The owner tenant ID.
  --min-prefixlen MIN_PREFIXLEN
Subnetpool minimum prefix length.
  --max-prefixlen MAX_PREFIXLEN
Subnetpool maximum prefix length.
  --default-prefixlen DEFAULT_PREFIXLEN
Subnetpool default prefix length.
  --pool-prefix PREFIXES
Subnetpool prefixes (This option can be repeated).
  --shared  Set the subnetpool as shared.

output formatters:
  output formatter options

  -f {html,json,shell,table,value,yaml}, --format 
{html,json,shell,table,value,yaml}
the output format, defaults to table
  -c COLUMN, --column COLUMN
specify the column(s) to include, can be repeated

table formatter:
  --max-width integer
Maximum display width, 0 to disable

shell formatter:
  a format a UNIX shell can parse (variable=value)

  --prefix PREFIX   add a prefix to all variable names

WISH:
usage: neutron subnetpool-create [-h] [-f {html,json,shell,table,value,yaml}]
 [-c COLUMN] [--max-width integer]
 [--prefix PREFIX]
 [--request-format {json,xml}]
 [--tenant-id TENANT_ID]
 [--min-prefixlen MIN_PREFIXLEN]
 [--max-prefixlen MAX_PREFIXLEN]
 [--default-prefixlen DEFAULT_PREFIXLEN]
 [--shared]
 name [POOL-PREFIX]

Create a subnetpool for a given tenant.

positional arguments:
  name  Name of subnetpool to create.
  POOL-PREFIX   Subnetpool prefixes (This option can be repeated).

optional arguments:
  -h, --helpshow this help message and exit
  --request-format {json,xml}
The XML or JSON request format.
  --tenant-id TENANT_ID
The owner tenant ID.
  --min-prefixlen MIN_PREFIXLEN
Subnetpool minimum prefix length.
  --max-prefixlen MAX_PREFIXLEN
Subnetpool maximum prefix length.
  --default-prefixlen DEFAULT_PREFIXLEN
Subnetpool default prefix length.
  --shared  Set the subnetpool as shared.

output formatters:
  output formatter options

  -f {html,json,shell,table,value,yaml}, --format 
{html,json,shell,table,value,yaml}
the output format, defaults to table
  -c COLUMN, --column COLUMN
specify the column(s) to include, can be repeated

table formatter:
  --max-width integer
Maximum display width, 0 to disable

shell formatter:
  a format a UNIX shell can parse (variable=value)

  --prefix PREFIX   add a prefix to all variable names

** Affects: neutron
 Importance: Undecided
 Assignee: zhaobo (zhaobo6)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => zhaobo (zhaobo6)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477889

Title:
  fixed create subnetpool option '--pool-prefix' as an optional argument
  in neutronclient

Status in neutron:
  New

Bug description:
  neutron --version
  2.6.0

  The --pool-prefix option is required, but the neutron client shows it
  as an optional argument. So if you just input the name, the server
  returns a message like "Invalid input for prefixes. Reason: 'None' is
  not a valid IP subnet."

  Before:
  neutron help subnetpool-create
  usage: neutron subnetpool-create [-h] [-f 

[Yahoo-eng-team] [Bug 1477914] [NEW] amqp error looping for reply queue not found

2015-07-24 Thread JohnsonYi
Public bug reported:

Environment:
Fuel 6.0 2014.2 based environment with fresh oslo.messaging 1.4.1 from fuel 6.1
3 controller nodes, 2 compute nodes, 3 ceph nodes

RabbitMQ went into a loop on one controller node, as shown below:
tail -f /var/log/rabbitmq/rabbitm...@node-x.log

=ERROR REPORT==== 23-Jul-2015::07:41:47 ===
connection <0.14200.0>, channel 1 - soft error:
{amqp_error,not_found,
            "no queue 'reply_2e48d0e4650e4de3a200022406c27dea' in vhost '/'",
            'basic.consume'}

=ERROR REPORT==== 23-Jul-2015::07:41:48 ===
connection <0.14200.0>, channel 1 - soft error:
{amqp_error,not_found,
            "no queue 'reply_2e48d0e4650e4de3a200022406c27dea' in vhost '/'",
            'basic.consume'}

=ERROR REPORT==== 23-Jul-2015::07:41:49 ===
connection <0.14200.0>, channel 1 - soft error:
{amqp_error,not_found,
            "no queue 'reply_2e48d0e4650e4de3a200022406c27dea' in vhost '/'",
            'basic.consume'}
^C
root@node-2:/etc# rabbitmqctl list_queues reply_2e48d0e4650e4de3a200022406c27dea
Listing queues ...
Error: {bad_argument,reply_2e48d0e4650e4de3a200022406c27dea}
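
An aside on the bad_argument error above: rabbitmqctl list_queues takes
queue info items (e.g. name, messages) as arguments, not a queue name,
so this invocation fails whether or not the queue exists. A form that
would work, for example:

root@node-2:/etc# rabbitmqctl list_queues name | grep reply_2e48d0e4650e4de3a200022406c27dea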

When I restart this node, the error switches to another controller node;
once it switches to the RabbitMQ master node, the Horizon service for
creating & deleting instances becomes unavailable.

I restored the service by command: crm resource restart p_rabbitmq-
server

** Affects: rabbitmq
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477914

Title:
  amqp error looping for reply queue not found

Status in RabbitMQ:
  New

Bug description:
  Environment:
  Fuel 6.0 2014.2 based environment with fresh oslo.messaging 1.4.1 from fuel 
6.1
  3 controller nodes, 2 compute nodes, 3 ceph nodes

  RabbitMQ went into a loop on one controller node, as shown below:
  tail -f /var/log/rabbitmq/rabbitm...@node-x.log

  =ERROR REPORT==== 23-Jul-2015::07:41:47 ===
  connection <0.14200.0>, channel 1 - soft error:
  {amqp_error,not_found,
              "no queue 'reply_2e48d0e4650e4de3a200022406c27dea' in vhost '/'",
              'basic.consume'}

  =ERROR REPORT==== 23-Jul-2015::07:41:48 ===
  connection <0.14200.0>, channel 1 - soft error:
  {amqp_error,not_found,
              "no queue 'reply_2e48d0e4650e4de3a200022406c27dea' in vhost '/'",
              'basic.consume'}

  =ERROR REPORT==== 23-Jul-2015::07:41:49 ===
  connection <0.14200.0>, channel 1 - soft error:
  {amqp_error,not_found,
              "no queue 'reply_2e48d0e4650e4de3a200022406c27dea' in vhost '/'",
              'basic.consume'}
  ^C
  root@node-2:/etc# rabbitmqctl list_queues 
reply_2e48d0e4650e4de3a200022406c27dea
  Listing queues ...
  Error: {bad_argument,reply_2e48d0e4650e4de3a200022406c27dea}

  When I restart this node, the error switches to another controller node;
  once it switches to the RabbitMQ master node, the Horizon service for
  creating & deleting instances becomes unavailable.

  I restored the service by command: crm resource restart p_rabbitmq-
  server

To manage notifications about this bug go to:
https://bugs.launchpad.net/rabbitmq/+bug/1477914/+subscriptions



[Yahoo-eng-team] [Bug 1477878] [NEW] Unable to launch instances from snapshot due to kernel and ramdisk fields in glance database

2015-07-24 Thread Vj
Public bug reported:

Hi All,

I am using OpenStack Kilo with a Ceph backend. Creating a snapshot of an
instance works fine, but launching an instance from the snapshot fails.
Corresponding nova logs:

---
2015-07-24 12:46:44.918 7176 ERROR nova.compute.manager [req-f2bfa4ae-20d4-4f10-8772-ab8b1993260a - - - - -] [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc] Instance failed to spawn
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc] Traceback (most recent call last):
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2442, in _build_resources
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     yield resources
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2314, in _build_and_run_instance
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     block_device_info=block_device_info)
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2347, in spawn
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     admin_pass=admin_password)
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2745, in _create_image
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     instance, size, fallback_from_host)
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 5875, in _try_fetch_image_cache
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     size=size)
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 231, in cache
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     *args, **kwargs)
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 727, in create_image
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     prepare_template(target=base, max_size=size, *args, **kwargs)
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 445, in inner
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     return f(*args, **kwargs)
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 221, in fetch_func_sync
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     fetch_func(target=target, *args, **kwargs)
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2737, in clone_fallback_to_fetch
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     backend.clone(context, disk_images['image_id'])
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 752, in clone
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     include_locations=True)
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/image/api.py", line 93, in get
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]     show_deleted=show_deleted)
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance: b05daf8c-818f-4018-8790-8f03d44d2fcc]   File "/usr/lib/python2.7/dist-packages/nova/image/glance.py", line 301, in show
2015-07-24 12:46:44.918 7176 TRACE nova.compute.manager [instance:

[Yahoo-eng-team] [Bug 1477851] Re: missing logging tag _LI

2015-07-24 Thread Yusuke Hayashi
** Changed in: keystone
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1477851

Title:
  missing logging tag _LI

Status in Keystone:
  Invalid

Bug description:
  I found a missing logging tag _LI at
  keystone/keystone/tests/unit/ksfixtures/hacking.py

  l.368
  LOG.warn(_LI('this should cause an error'))
  => LOG.warn(_LW( ... ))

  l.371
  LOG.debug(_LI('this should cause an error'))
  => LOG.debug( ... )

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1477851/+subscriptions



[Yahoo-eng-team] [Bug 1477110] Re: cinder delete Gluster snapshot failed

2015-07-24 Thread Deepak C Shetty
@Markus,
  The fix is in the Nova project, so the bug should be on Nova only. The
effect of this bug is seen in Cinder, as Cinder calls Nova for online
snapshot create/delete operations for the GlusterFS backend.

** Changed in: cinder
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477110

Title:
  cinder delete Gluster snapshot failed

Status in Cinder:
  Invalid
Status in OpenStack Compute (nova):
  New

Bug description:
  I have a test where Cinder uses GlusterFS as storage.
  1. create an instance
  2. create a volume
  3. attach the volume to the instance
  4. take a snapshot of the volume
  5. delete the snapshot

  It gets an error.

  OS: CentOS 7

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1477110/+subscriptions



[Yahoo-eng-team] [Bug 1477898] [NEW] Fix five typos on keystone document

2015-07-24 Thread Atsushi SAKAI
Public bug reported:

Fix four typos and add one space in the keystone documentation:

encryted => encrypted
counterintuitive => counter intuitive (space added)
infomration => information
configuraton => configuration
   http://docs.openstack.org/developer/keystone/configuration.html

Organizaion => Organization
   http://docs.openstack.org/developer/keystone/configure_federation.html

$ git diff | diffstat
 configuration.rst        | 8 ++++----
 configure_federation.rst | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

** Affects: keystone
 Importance: Undecided
 Assignee: Atsushi SAKAI (sakaia)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1477898

Title:
  Fix five typos on keystone document

Status in Keystone:
  In Progress

Bug description:
  Fix four typos and add one space in the keystone documentation:

  encryted => encrypted
  counterintuitive => counter intuitive (space added)
  infomration => information
  configuraton => configuration
     http://docs.openstack.org/developer/keystone/configuration.html

  Organizaion => Organization
     http://docs.openstack.org/developer/keystone/configure_federation.html

  $ git diff | diffstat
   configuration.rst        | 8 ++++----
   configure_federation.rst | 2 +-
   2 files changed, 5 insertions(+), 5 deletions(-)

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1477898/+subscriptions



[Yahoo-eng-team] [Bug 1447215] Re: Schema Missing kernel_id, ramdisk_id causes #1447193

2015-07-24 Thread Flavio Percoco
** Also affects: glance/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447215

Title:
  Schema Missing kernel_id, ramdisk_id causes #1447193

Status in Glance:
  Fix Committed
Status in Glance kilo series:
  New
Status in glance package in Ubuntu:
  Confirmed

Bug description:
  [Description]

  
  [Environment]

  - Ubuntu 14.04.2
  - OpenStack Kilo

  ii  glance               1:2015.1~rc1-0ubuntu2~cloud0  all  OpenStack Image Registry and Delivery Service - Daemons
  ii  glance-api           1:2015.1~rc1-0ubuntu2~cloud0  all  OpenStack Image Registry and Delivery Service - API
  ii  glance-common        1:2015.1~rc1-0ubuntu2~cloud0  all  OpenStack Image Registry and Delivery Service - Common
  ii  glance-registry      1:2015.1~rc1-0ubuntu2~cloud0  all  OpenStack Image Registry and Delivery Service - Registry
  ii  python-glance        1:2015.1~rc1-0ubuntu2~cloud0  all  OpenStack Image Registry and Delivery Service - Python library
  ii  python-glance-store  0.4.0-0ubuntu1~cloud0         all  OpenStack Image Service store library - Python 2.x
  ii  python-glanceclient  1:0.15.0-0ubuntu1~cloud0      all  Client library for Openstack glance server.

  [Steps to reproduce]

  0) Set /etc/glance/glance-api.conf to enable_v2_api=False
  1) nova boot --flavor m1.small --image base-image --key-name keypair --availability-zone nova --security-groups default snapshot-bug
  2) nova image-create snapshot-bug snapshot-bug-instance

  At this point the created image has no kernel_id (None) and ramdisk_id
  (None)

  3) Set enable_v2_api=True in glance-api.conf and restart.

  4) Run an os-image-api=2 client,

  $ glance --os-image-api-version 2 image-list

  This will fail with #1447193

  [Description]

  The schema-image.json file needs to be modified to allow null, string
  values for both attributes.
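
  A sketch of the kind of change implied, assuming JSON-Schema type
  unions (the real glance property definitions may carry additional keys
  such as description or pattern):

      "kernel_id":  {"type": ["null", "string"]},
      "ramdisk_id": {"type": ["null", "string"]}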

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447215/+subscriptions



[Yahoo-eng-team] [Bug 1477921] [NEW] Create/Update port responds 500 to the request with invalid security_groups

2015-07-24 Thread Kengo Hobo
Public bug reported:

When I request POST /v2.0/ports/ and PUT /v2.0/ports/{port_id} with an invalid value (e.g. an integer) as security_groups,
Neutron Server responds with 500 (Internal Server Error).

It should respond with a 400 error, not a 500 error.

API Result and Logs are as follows.
[API Result]
POST /v2.0/ports/
stack@ubuntu:~$ curl -gi -X POST http://192.168.122.99:9696/v2.0/ports -H "X-Auth-Token:${TOKEN}" -d '{"port": {"security_groups": 90, "network_id": "77e2f811-96a5-48d2-bd85-132e4f44bcb4"}}'
HTTP/1.1 500 Internal Server Error
Content-Type: application/json; charset=UTF-8
Content-Length: 150
X-Openstack-Request-Id: req-b1e6de3d-8cd0-4015-be65-141126e7c807
Date: Fri, 24 Jul 2015 09:06:07 GMT

{"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}}

PUT /v2.0/ports/{port_id}
stack@ubuntu:~$ curl -gi -X PUT http://192.168.122.99:9696/v2.0/ports/f95f74ff-1ede-483b-9386-c51f470500fe -H "X-Auth-Token:${TOKEN}" -d '{"port": {"security_groups": 90}}'
HTTP/1.1 500 Internal Server Error
Content-Type: application/json; charset=UTF-8
Content-Length: 150
X-Openstack-Request-Id: req-4815865b-9631-4812-9424-3b73c997e56f
Date: Fri, 24 Jul 2015 08:46:51 GMT

{"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "type": "HTTPInternalServerError", "detail": ""}}

[Neutron Server Log]
2015-07-24 17:46:41.947 DEBUG neutron.api.v2.base [req-4f4adaa0-3952-4a0f-a450-7b3594c5a11e demo 0522fc19a56b4d7ca32a9140d3d36a08] Request body: {u'port': {u'security_groups': 3}} from (pid=24319) prepare_request_body /opt/stack/neutron/neutron/api/v2/base.py:606
2015-07-24 17:46:41.949 ERROR neutron.api.v2.resource [req-4f4adaa0-3952-4a0f-a450-7b3594c5a11e demo 0522fc19a56b4d7ca32a9140d3d36a08] update failed
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource Traceback (most recent call last):
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/resource.py", line 83, in resource
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource     result = method(request=request, **args)
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 146, in wrapper
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource     ectxt.value = e.inner_exc
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 119, in __exit__
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource     six.reraise(self.type_, self.value, self.tb)
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource   File "/usr/local/lib/python2.7/dist-packages/oslo_db/api.py", line 136, in wrapper
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource     return f(*args, **kwargs)
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 525, in update
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource     allow_bulk=self._allow_bulk)
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/api/v2/base.py", line 658, in prepare_request_body
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource     res_dict[attr] = attr_vals['convert_to'](res_dict[attr])
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource   File "/opt/stack/neutron/neutron/extensions/securitygroup.py", line 177, in convert_to_uuid_list_or_none
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource     for sg_id in value_list:
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource TypeError: 'int' object is not iterable
2015-07-24 17:46:41.949 TRACE neutron.api.v2.resource
2015-07-24 17:46:41.955 INFO neutron.wsgi [req-4f4adaa0-3952-4a0f-a450-7b3594c5a11e demo 0522fc19a56b4d7ca32a9140d3d36a08] 192.168.122.99 - - [24/Jul/2015 17:46:41] "PUT /v2.0/ports/f95f74ff-1ede-483b-9386-c51f470500fe HTTP/1.1" 500 359 0.027116
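
One way to turn this TypeError into a 400, sketched against the convert_to_uuid_list_or_none helper named in the trace (illustrative only, not the actual patch; the exception and util imports are assumptions about neutron's internals):

    from oslo_utils import uuidutils

    from neutron.common import exceptions as n_exc

    def convert_to_uuid_list_or_none(value_list):
        if value_list is None:
            return
        if not isinstance(value_list, list):
            # Reject non-list input up front so the API answers 400, not 500.
            msg = "security_groups must be a list of UUIDs"
            raise n_exc.InvalidInput(error_message=msg)
        for sg_id in value_list:
            if not uuidutils.is_uuid_like(sg_id):
                raise n_exc.InvalidInput(
                    error_message="'%s' is not a valid UUID" % sg_id)
        return value_list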

** Affects: neutron
 Importance: Undecided
 Assignee: Kengo Hobo (hobo-kengo)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Kengo Hobo (hobo-kengo)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477921

Title:
  Create/Update port responds 500 to the request with invalid
  security_groups

Status in neutron:
  New

Bug description:
  When I request POST /v2.0/ports/ and PUT /v2.0/ports/{port_id} with an invalid value (e.g. an integer) as security_groups,
  Neutron Server responds with 500 (Internal Server Error).

  It should respond with a 400 error, not a 500 error.

  API Result and Logs are as follows.
  [API Result]
  POST /v2.0/ports/
  stack@ubuntu:~$ curl -gi -X POST http://192.168.122.99:9696/v2.0/ports -H "X-Auth-Token:${TOKEN}" -d '{"port": {"security_groups": 90, "network_id": "77e2f811-96a5-48d2-bd85-132e4f44bcb4"}}'
  HTTP/1.1 500 Internal Server Error
  Content-Type: application/json; charset=UTF-8
  Content-Length: 150

[Yahoo-eng-team] [Bug 1456335] Re: neutron-vpn-netns-wrapper missing in Ubuntu Package

2015-07-24 Thread James Page
** Changed in: neutron-vpnaas (Ubuntu)
   Importance: Undecided => Medium

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron-vpnaas (Ubuntu)
   Status: Confirmed => In Progress

** Changed in: neutron-vpnaas (Ubuntu)
 Assignee: (unassigned) => James Page (james-page)

** Also affects: neutron-vpnaas (Ubuntu Vivid)
   Importance: Undecided
   Status: New

** Changed in: neutron-vpnaas (Ubuntu Vivid)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1456335

Title:
  neutron-vpn-netns-wrapper missing in Ubuntu Package

Status in neutron:
  Invalid
Status in neutron-vpnaas package in Ubuntu:
  In Progress
Status in neutron-vpnaas source package in Vivid:
  New
Status in neutron-vpnaas package in Debian:
  New

Bug description:
  The executable neutron-vpn-netns-wrapper (path /usr/bin/neutron-vpn-
  netns-wrapper) in Ubuntu 14.04 packages is missing for OpenStack Kilo.

  I tried to enable VPNaaS with StrongSwan and it failed with this error message:
  2015-05-18 19:20:41.510 3254 TRACE neutron_vpnaas.services.vpn.device_drivers.ipsec Stderr: /usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec qrouter-0b4c88fa-4944-45a7-b1b3-fbee1d7fc2ac neutron-vpn-netns-wrapper --mount_paths=/etc:/var/lib/neutron/ipsec/0b4c88fa-4944-45a7-b1b3-fbee1d7fc2ac/etc,/var/run:/var/lib/neutron/ipsec/0b4c88fa-4944-45a7-b1b3-fbee1d7fc2ac/var/run --cmd=ipsec,start (no filter matched)

  After copying the content of neutron-vpn-netns-wrapper from the Fedora
  repository VPNaaS with StrongSwan worked.

  The content of the vpn-netns-wrapper:

  #!/usr/bin/python2
  # PBR Generated from u'console_scripts'

  import sys

  from neutron_vpnaas.services.vpn.common.netns_wrapper import main

  
  if __name__ == "__main__":
  sys.exit(main())

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1456335/+subscriptions



[Yahoo-eng-team] [Bug 1422376] Re: enable package test suites: dependency on generated egg from git.openstack.org

2015-07-24 Thread James Page
All packaging unit test suites are now enabled - marking 'Fix Released'

** Changed in: neutron-fwaas (Ubuntu)
   Status: New => Fix Released

** Changed in: neutron-lbaas (Ubuntu)
   Status: New => Fix Released

** Changed in: neutron
   Status: Incomplete => Fix Released

** Changed in: neutron-vpnaas (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1422376

Title:
  enable package test suites: dependency on generated egg from
  git.openstack.org

Status in neutron:
  Fix Released
Status in neutron-fwaas package in Ubuntu:
  Fix Released
Status in neutron-lbaas package in Ubuntu:
  Fix Released
Status in neutron-vpnaas package in Ubuntu:
  Fix Released

Bug description:
  The split out drivers for neutron (lbaas, vpnaas, fwaas) all rely on a
  source snapshot of neutron, which we don't currently have an
  equivalent for in Ubuntu.

  We need to resolve this situation and re-enable the upstream test
  suites as part of the package build.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1422376/+subscriptions



[Yahoo-eng-team] [Bug 1477912] [NEW] size still exist when create image with incomplete parameter in api v1

2015-07-24 Thread wangxiyuan
Public bug reported:

version:Glance master

Reproduce:
1. glance image-create --file  --name test (in my env, the image file's size is 642580480B)
   Then an error is raised: 400 Bad Request: Disk format is not specified. (HTTP 400)

2. glance image-list
list information:

ID  | Name | Disk Format | Container Format |   Size    | Status
xxx | test |             |                  | 642580480 | queued

Expected: 'size' should not be set in the list:

ID  | Name | Disk Format | Container Format |   Size    | Status
xxx | test |             |                  |           | queued

There is no bug in API v2; it only occurs in API v1.

** Affects: glance
 Importance: Undecided
 Assignee: wangxiyuan (wangxiyuan)
 Status: New

** Changed in: glance
 Assignee: (unassigned) => wangxiyuan (wangxiyuan)

** Description changed:

  (table formatting changes only; the final text is shown above)

** Description changed:

  (table formatting changes only; the final text is shown above)

[Yahoo-eng-team] [Bug 1477914] Re: amqp error looping for reply queue not found

2015-07-24 Thread JohnsonYi
** Project changed: nova => rabbitmq

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1477914

Title:
  amqp error looping for reply queue not found

Status in RabbitMQ:
  New

Bug description:
  Environment:
  Fuel 6.0 2014.2 based environment with fresh oslo.messaging 1.4.1 from fuel 
6.1
  3 controller nodes, 2 compute nodes, 3 ceph nodes

  RabbitMQ went into a loop on one controller node, as shown below:
  tail -f /var/log/rabbitmq/rabbitm...@node-x.log

  =ERROR REPORT==== 23-Jul-2015::07:41:47 ===
  connection <0.14200.0>, channel 1 - soft error:
  {amqp_error,not_found,
              "no queue 'reply_2e48d0e4650e4de3a200022406c27dea' in vhost '/'",
              'basic.consume'}

  =ERROR REPORT==== 23-Jul-2015::07:41:48 ===
  connection <0.14200.0>, channel 1 - soft error:
  {amqp_error,not_found,
              "no queue 'reply_2e48d0e4650e4de3a200022406c27dea' in vhost '/'",
              'basic.consume'}

  =ERROR REPORT==== 23-Jul-2015::07:41:49 ===
  connection <0.14200.0>, channel 1 - soft error:
  {amqp_error,not_found,
              "no queue 'reply_2e48d0e4650e4de3a200022406c27dea' in vhost '/'",
              'basic.consume'}
  ^C
  root@node-2:/etc# rabbitmqctl list_queues 
reply_2e48d0e4650e4de3a200022406c27dea
  Listing queues ...
  Error: {bad_argument,reply_2e48d0e4650e4de3a200022406c27dea}

  When I restart this node, the error switches to another controller node;
  once it switches to the RabbitMQ master node, the Horizon service for
  creating & deleting instances becomes unavailable.

  I restored the service by command: crm resource restart p_rabbitmq-
  server

To manage notifications about this bug go to:
https://bugs.launchpad.net/rabbitmq/+bug/1477914/+subscriptions



[Yahoo-eng-team] [Bug 1477944] [NEW] Azure data source should handle more than one ProvisioningSection

2015-07-24 Thread Dan Watkins
Public bug reported:

Azure have informed us that they will be introducing a second format for
metadata in ovf-env.xml, which will be done using multiple versioned
ProvisioningSections.

At the moment, cloud-init chokes if there is more than a single
ProvisioningSection. We should, instead, be looking for the version 1.0
ProvisioningSection and ignoring all others.
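
A minimal sketch of the selection logic described, assuming ovf-env.xml is parsed with xml.dom.minidom, that each ProvisioningSection carries a Version element, and that element names are looked up without their namespace prefix (this is not cloud-init's actual code):

    from xml.dom import minidom

    def find_provisioning_section(ovf_env_xml):
        # Return the version 1.0 ProvisioningSection, ignoring any others.
        dom = minidom.parseString(ovf_env_xml)
        for section in dom.getElementsByTagName('ProvisioningSection'):
            versions = section.getElementsByTagName('Version')
            if versions and versions[0].firstChild.data.strip() == '1.0':
                return section
        return None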

** Affects: cloud-init
 Importance: Undecided
 Assignee: Dan Watkins (daniel-thewatkins)
 Status: New

** Changed in: cloud-init
 Assignee: (unassigned) => Dan Watkins (daniel-thewatkins)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1477944

Title:
  Azure data source should handle more than one ProvisioningSection

Status in cloud-init:
  New

Bug description:
  Azure have informed us that they will be introducing a second format
  for metadata in ovf-env.xml, which will be done using multiple
  versioned ProvisioningSections.

  At the moment, cloud-init chokes if there is more than a single
  ProvisioningSection. We should, instead, be looking for the version
  1.0 ProvisioningSection and ignoring all others.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1477944/+subscriptions



[Yahoo-eng-team] [Bug 1443967] Re: VPNService loads names of device drivers from global CONF variable but not from self.conf

2015-07-24 Thread James Page
Actually this looks OK now:

https://github.com/openstack/neutron-vpnaas/blob/master/neutron_vpnaas/services/vpn/vpn_service.py#L45

Marking 'Fix Released'

** Also affects: neutron
   Importance: Undecided
   Status: New

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron-vpnaas (Ubuntu)
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1443967

Title:
  VPNService loads names of device drivers from global CONF variable but
  not from self.conf

Status in neutron:
  Fix Released
Status in neutron-vpnaas package in Ubuntu:
  Fix Released

Bug description:
  The VPNService.load_device_drivers method takes device driver names
  from the global oslo_config.CONF variable. But there is a self.conf
  property which is taken from l3_agent.conf.
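
  An illustration of the distinction (hedged: the option path follows
  the vpn_service.py line linked above, and cfg is oslo_config.cfg):

      # Global config object -- ignores the config files loaded for
      # this particular agent instance:
      drivers = cfg.CONF.vpnagent.vpn_device_driver

      # Per-instance config -- honours the l3_agent.conf this service
      # was constructed with:
      drivers = self.conf.vpnagent.vpn_device_driver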

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1443967/+subscriptions



[Yahoo-eng-team] [Bug 1478072] [NEW] DVR enabled on neutron jobs where DVR should be disabled

2015-07-24 Thread Sean M. Collins
Public bug reported:

Job: gate-tempest-dsvm-neutron-full

http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/devstack-gate.yaml#n448

http://logs.openstack.org/77/198877/4/gate/gate-tempest-dsvm-neutron-full/29bcccb/

Port ID: a84df1ca-0a43-4138-b60b-c4c5

2015-07-23 15:27:56.611 DEBUG neutron.db.l3_dvrscheduler_db [req-b781d84a-1079-4328-b123-f4cb311c3a4f tempest-AllowedAddressPairIpV6TestJSON-1876686975 tempest-AllowedAddressPairIpV6TestJSON-178337704] No namespaces available for this DVR port fd6aefa9-e1cb-4569-b7dc-1fb546e0d650 on host devstack-trust

db/l3_dvrscheduler_db.py:LOG.debug('No namespaces available for this DVR port %(port)s '

Maybe this check is not accurate enough?

https://github.com/openstack/neutron/blob/d266b5a90585634f91f3830b9f99af9dfaac5e31/neutron/plugins/ml2/plugin.py#L1307

** Affects: neutron
 Importance: Undecided
 Assignee: Sean M. Collins (scollins)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Sean M. Collins (scollins)

** Description changed:

+ Job: gate-tempest-dsvm-neutron-full
+ 
+ http://git.openstack.org/cgit/openstack-infra/project-
+ config/tree/jenkins/jobs/devstack-gate.yaml#n448
+ 
  http://logs.openstack.org/77/198877/4/gate/gate-tempest-dsvm-neutron-
  full/29bcccb/
  
  Port ID: a84df1ca-0a43-4138-b60b-c4c5
  
  2015-07-23 15:27:56.611 DEBUG neutron.db.l3_dvrscheduler_db [req-
  b781d84a-1079-4328-b123-f4cb311c3a4f tempest-
  AllowedAddressPairIpV6TestJSON-1876686975 tempest-
  AllowedAddressPairIpV6TestJSON-178337704] No namespaces available for
  this DVR port fd6aefa9-e1cb-4569-b7dc-1fb546e0d650 on host devstack-
  trust
  
  db/l3_dvrscheduler_db.py:LOG.debug('No namespaces available
  for this DVR port %(port)s '
  
- 
  Maybe this check is not accurate enough?
  
  
https://github.com/openstack/neutron/blob/d266b5a90585634f91f3830b9f99af9dfaac5e31/neutron/plugins/ml2/plugin.py#L1307

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478072

Title:
  DVR enabled on neutron jobs where DVR should be disabled

Status in neutron:
  New

Bug description:
  Job: gate-tempest-dsvm-neutron-full

  http://git.openstack.org/cgit/openstack-infra/project-
  config/tree/jenkins/jobs/devstack-gate.yaml#n448

  http://logs.openstack.org/77/198877/4/gate/gate-tempest-dsvm-neutron-
  full/29bcccb/

  Port ID: a84df1ca-0a43-4138-b60b-c4c5

  2015-07-23 15:27:56.611 DEBUG neutron.db.l3_dvrscheduler_db [req-b781d84a-1079-4328-b123-f4cb311c3a4f tempest-AllowedAddressPairIpV6TestJSON-1876686975 tempest-AllowedAddressPairIpV6TestJSON-178337704] No namespaces available for this DVR port fd6aefa9-e1cb-4569-b7dc-1fb546e0d650 on host devstack-trust

  db/l3_dvrscheduler_db.py:LOG.debug('No namespaces available for this DVR port %(port)s '

  Maybe this check is not accurate enough?

  
https://github.com/openstack/neutron/blob/d266b5a90585634f91f3830b9f99af9dfaac5e31/neutron/plugins/ml2/plugin.py#L1307

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478072/+subscriptions



[Yahoo-eng-team] [Bug 1478012] [NEW] VPNaaS: Support VPNaaS with L3 HA

2015-07-24 Thread venkata anil
Public bug reported:

Problem: Currently VPNaaS is not supported with L3 HA.
1) When a user tries to create an ipsec site connection, the vpn agent tries to run the ipsec process on both the HA master and backup routers. Running the ipsec process on the backup router fails, as its router interfaces will be down.

2) Running two separate ipsec processes for the same side of the
connection (East or West) is not allowed.

3) During HA router state transitions (master to backup and backup to
master), spawning and terminating the vpn process is not handled. For
example, when the master transitions to backup, that vpn connection is
lost forever (unless both agents hosting the HA routers are restarted).


Solution: When the VPN process is created for an HA router, it should run only on the HA master node. On transition from master to backup, the vpn process should be shut down on that agent (just like disabling radvd/metadata proxy). On transition from backup to master, the vpn process should be enabled and running on that agent. A sketch of this hook is shown below.


Advantages: Through this we get the advantages of the L3 HA router, i.e. no need for user intervention to reestablish the vpn connection when the router is down. When the existing master router goes down, the same vpn connection is established automatically on the new master router.
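
A hypothetical shape for that hook (all names here are invented for illustration; the real agent interface may differ):

    def ha_state_changed(self, router_id, state):
        # Toggle the ipsec process on HA transitions, the way radvd and
        # the metadata proxy are toggled for HA routers.
        process = self.processes.get(router_id)  # hypothetical registry
        if process is None:
            return
        if state == 'master':
            process.enable()
        else:
            process.disable()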

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: l3-ha rfe vpnaas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478012

Title:
   VPNaaS: Support VPNaaS with L3 HA

Status in neutron:
  New

Bug description:
  Problem: Currently VPNaaS is not supported with L3 HA.
  1) When a user tries to create an ipsec site connection, the vpn agent tries to run the ipsec process on both the HA master and backup routers. Running the ipsec process on the backup router fails, as its router interfaces will be down.

  2) Running two separate ipsec processes for the same side of the
  connection (East or West) is not allowed.

  3) During HA router state transitions (master to backup and backup to
  master), spawning and terminating the vpn process is not handled. For
  example, when the master transitions to backup, that vpn connection is
  lost forever (unless both agents hosting the HA routers are restarted).


  Solution: When the VPN process is created for an HA router, it should run only on the HA master node. On transition from master to backup, the vpn process should be shut down on that agent (just like disabling radvd/metadata proxy). On transition from backup to master, the vpn process should be enabled and running on that agent.


  Advantages: Through this we get the advantages of the L3 HA router, i.e. no need for user intervention to reestablish the vpn connection when the router is down. When the existing master router goes down, the same vpn connection is established automatically on the new master router.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478012/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478033] [NEW] neutron allows to create invalid floating IP

2015-07-24 Thread Vitalii
Public bug reported:

% neutron floatingip-create ISP_NET --floating-ip-address 31.28.168.167

But this is a broadcast IP address.

% ping 31.28.168.167
Do you want to ping broadcast? Then -b

** Affects: neutron
 Importance: Undecided
 Assignee: Vitalii (vb-d)
 Status: Invalid

** Project changed: cinder => neutron

** Changed in: neutron
   Status: New => Invalid

** Changed in: neutron
 Assignee: (unassigned) => Vitalii (vb-d)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478033

Title:
  neutron allows to create invalid floating IP

Status in neutron:
  Invalid

Bug description:
  % neutron floatingip-create ISP_NET --floating-ip-address
  31.28.168.167

  But this is a broadcast IP address.

  % ping 31.28.168.167
  Do you want to ping broadcast? Then -b

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478033/+subscriptions



[Yahoo-eng-team] [Bug 1478043] [NEW] nova.virt.block_device 'Failed to transform' error is spamming the logs since 7/19

2015-07-24 Thread Matt Riedemann
Public bug reported:

These:

http://logs.openstack.org/36/193236/10/check/gate-tempest-dsvm-full-ceph/5549dfc/logs/screen-n-cpu.txt.gz?level=TRACE

2015-07-23 17:06:29.296 ERROR nova.virt.block_device [req-148f8955-8c6d-4cd4-ba40-93734ea70349 tempest-AggregatesAdminTestJSON-483895825 tempest-AggregatesAdminTestJSON-20307] Failed to transform: source_type: image, destination_type: local

There are tons of them:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIHRyYW5zZm9ybVwiIEFORCBtb2R1bGU6XCJub3ZhLnZpcnQuYmxvY2tfZGV2aWNlXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mzc3NTA3Mjk0OTF9

3594 hits since 7/19, many different types of jobs, so it's not just
that ceph job link above.

** Affects: nova
 Importance: High
 Assignee: Matt Riedemann (mriedem)
 Status: In Progress


** Tags: volumes

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478043

Title:
  nova.virt.block_device 'Failed to transform' error is spamming the
  logs since 7/19

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  These:

  http://logs.openstack.org/36/193236/10/check/gate-tempest-dsvm-full-
  ceph/5549dfc/logs/screen-n-cpu.txt.gz?level=TRACE

  2015-07-23 17:06:29.296 ERROR nova.virt.block_device [req-148f8955
  -8c6d-4cd4-ba40-93734ea70349 tempest-AggregatesAdminTestJSON-483895825
  tempest-AggregatesAdminTestJSON-20307] Failed to transform:
  source_type: image, destination_type: local

  There are tons of them:

  
http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiRmFpbGVkIHRvIHRyYW5zZm9ybVwiIEFORCBtb2R1bGU6XCJub3ZhLnZpcnQuYmxvY2tfZGV2aWNlXCIgQU5EIHRhZ3M6XCJzY3JlZW4tbi1jcHUudHh0XCIiLCJmaWVsZHMiOltdLCJvZmZzZXQiOjAsInRpbWVmcmFtZSI6IjYwNDgwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0Mzc3NTA3Mjk0OTF9

  3594 hits since 7/19, many different types of jobs, so it's not just
  that ceph job link above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1478043/+subscriptions



[Yahoo-eng-team] [Bug 1478080] [NEW] neutron allows to create invalid floating IP

2015-07-24 Thread Vitalii
Public bug reported:

Neutron allows creating the floating IP 31.28.168.167, which is a
broadcast address.

I think it should raise an exception.
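
A sketch of the kind of validation implied, checking a requested address against its subnet with netaddr (hedged: whether 31.28.168.167 really is a broadcast address depends on ISP_NET's actual prefix, so the /29 below is only an example):

    import netaddr

    def is_valid_floating_ip(ip_str, cidr):
        # Reject the subnet's network and broadcast addresses.
        net = netaddr.IPNetwork(cidr)
        ip = netaddr.IPAddress(ip_str)
        return ip in net and ip not in (net.network, net.broadcast)

    is_valid_floating_ip('31.28.168.167', '31.28.168.160/29')  # False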

** Affects: neutron
 Importance: Undecided
 Status: New

** Project changed: nova => neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478080

Title:
  neutron allows to create invalid floating IP

Status in neutron:
  New

Bug description:
  Neutron allows creating the floating IP 31.28.168.167, which is a
  broadcast address.

  I think it should raise an exception.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478080/+subscriptions



[Yahoo-eng-team] [Bug 1478033] [NEW] neutron allows to create invalid floating IP

2015-07-24 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

% neutron floatingip-create ISP_NET --floating-ip-address 31.28.168.167

But this is a broadcast IP address.

% ping 31.28.168.167
Do you want to ping broadcast? Then -b

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
neutron allows to create invalid floating IP
https://bugs.launchpad.net/bugs/1478033
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.



[Yahoo-eng-team] [Bug 1447215] Re: Schema Missing kernel_id, ramdisk_id causes #1447193

2015-07-24 Thread Chris J Arges
** Also affects: glance (Ubuntu Vivid)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447215

Title:
  Schema Missing kernel_id, ramdisk_id causes #1447193

Status in Glance:
  Fix Released
Status in Glance kilo series:
  In Progress
Status in glance package in Ubuntu:
  Confirmed
Status in glance source package in Vivid:
  New

Bug description:
  [Description]

  
  [Environment]

  - Ubuntu 14.04.2
  - OpenStack Kilo

  ii  glance               1:2015.1~rc1-0ubuntu2~cloud0  all  OpenStack Image Registry and Delivery Service - Daemons
  ii  glance-api           1:2015.1~rc1-0ubuntu2~cloud0  all  OpenStack Image Registry and Delivery Service - API
  ii  glance-common        1:2015.1~rc1-0ubuntu2~cloud0  all  OpenStack Image Registry and Delivery Service - Common
  ii  glance-registry      1:2015.1~rc1-0ubuntu2~cloud0  all  OpenStack Image Registry and Delivery Service - Registry
  ii  python-glance        1:2015.1~rc1-0ubuntu2~cloud0  all  OpenStack Image Registry and Delivery Service - Python library
  ii  python-glance-store  0.4.0-0ubuntu1~cloud0         all  OpenStack Image Service store library - Python 2.x
  ii  python-glanceclient  1:0.15.0-0ubuntu1~cloud0      all  Client library for Openstack glance server.

  [Steps to reproduce]

  0) Set /etc/glance/glance-api.conf to enable_v2_api=False
  1) nova boot --flavor m1.small --image base-image --key-name keypair --availability-zone nova --security-groups default snapshot-bug
  2) nova image-create snapshot-bug snapshot-bug-instance

  At this point the created image has no kernel_id (None) and ramdisk_id
  (None)

  3) Set enable_v2_api=True in glance-api.conf and restart.

  4) Run an os-image-api=2 client,

  $ glance --os-image-api-version 2 image-list

  This will fail with #1447193

  [Description]

  The schema-image.json file needs to be modified to allow null, string
  values for both attributes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447215/+subscriptions



[Yahoo-eng-team] [Bug 1478065] [NEW] Block device metadata may be bogus with Ironic driver

2015-07-24 Thread Ben Nemec
Public bug reported:

This is a followup to the regression reported in
https://bugs.launchpad.net/nova/+bug/1464239  The problem there was that
Nova changed how it does block device mapping for ephemeral partitions,
and because Ironic isn't using that block device mapping the ephemeral
path returned by the metadata server became incorrect.  I'm opening this
bug because while it is possible to fix the regression, the behavior is
still bad.  The ephemeral partition metadata is only valid if Ironic
happens to assign the ephemeral partition to /dev/sda1.  This is often
the case, but there are valid situations where it is not true - consider
deploying to a vm where the ephemeral partition ends up on /dev/vda1.

Since I believe this would require a new method of synchronizing the
block device mapping between Nova and Ironic, I'm pushing a fix for the
regression to unbreak the previously working cases, and opening this bug
to document that the situation is still not right.

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478065

Title:
  Block device metadata may be bogus with Ironic driver

Status in OpenStack Compute (nova):
  New

Bug description:
  This is a followup to the regression reported in
  https://bugs.launchpad.net/nova/+bug/1464239  The problem there was
  that Nova changed how it does block device mapping for ephemeral
  partitions, and because Ironic isn't using that block device mapping
  the ephemeral path returned by the metadata server became incorrect.
  I'm opening this bug because while it is possible to fix the
  regression, the behavior is still bad.  The ephemeral partition
  metadata is only valid if Ironic happens to assign the ephemeral
  partition to /dev/sda1.  This is often the case, but there are valid
  situations where it is not true - consider deploying to a vm where the
  ephemeral partition ends up on /dev/vda1.

  Since I believe this would require a new method of synchronizing the
  block device mapping between Nova and Ironic, I'm pushing a fix for
  the regression to unbreak the previously working cases, and opening
  this bug to document that the situation is still not right.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1478065/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478143] [NEW] project param on user_create now default_project

2015-07-24 Thread David Lyle
Public bug reported:

With keystone v3, when creating a user the argument 'project' is no
longer honored, as it has been replaced by 'default_project'. Without
changing the parameter name, this error is encountered:

create takes at most 1 positional argument (2 given)
Internal Server Error: /identity/users/create/
Traceback (most recent call last):
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/django/core/handlers/base.py,
 line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File /home/david-lyle/horizon/horizon/decorators.py, line 36, in dec
return view_func(request, *args, **kwargs)
  File /home/david-lyle/horizon/horizon/decorators.py, line 52, in dec
return view_func(request, *args, **kwargs)
  File /home/david-lyle/horizon/horizon/decorators.py, line 36, in dec
return view_func(request, *args, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py,
 line 69, in view
return self.dispatch(request, *args, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/django/utils/decorators.py,
 line 29, in _wrapper
return bound_func(*args, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/django/views/decorators/debug.py,
 line 76, in sensitive_post_parameters_wrapper
return view(request, *args, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/django/utils/decorators.py,
 line 25, in bound_func
return func.__get__(self, type(self))(*args2, **kwargs2)
  File 
/home/david-lyle/horizon/openstack_dashboard/dashboards/identity/users/views.py,
 line 138, in dispatch
return super(CreateView, self).dispatch(*args, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/base.py,
 line 87, in dispatch
return handler(request, *args, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/django/views/generic/edit.py,
 line 173, in post
return self.form_valid(form)
  File /home/david-lyle/horizon/horizon/forms/views.py, line 173, in 
form_valid
exceptions.handle(self.request)
  File /home/david-lyle/horizon/horizon/exceptions.py, line 361, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File /home/david-lyle/horizon/horizon/forms/views.py, line 170, in 
form_valid
handled = form.handle(self.request, form.cleaned_data)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/django/views/decorators/debug.py,
 line 36, in sensitive_variables_wrapper
return func(*func_args, **func_kwargs)
  File 
/home/david-lyle/horizon/openstack_dashboard/dashboards/identity/users/forms.py,
 line 182, in handle
exceptions.handle(request, _('Unable to create user.'))
  File /home/david-lyle/horizon/horizon/exceptions.py, line 361, in handle
six.reraise(exc_type, exc_value, exc_traceback)
  File 
/home/david-lyle/horizon/openstack_dashboard/dashboards/identity/users/forms.py,
 line 157, in handle
domain=domain.id)
  File /home/david-lyle/horizon/openstack_dashboard/api/keystone.py, line 
324, in user_create
domain=domain, description=description)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/utils.py,
 line 336, in inner
return func(*args, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/v3/users.py,
 line 75, in create
log=not bool(password))
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py,
 line 151, in _create
return self._post(url, body, response_key, return_raw, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/base.py,
 line 165, in _post
resp, body = self.client.post(url, body=body, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/adapter.py,
 line 176, in post
return self.request(url, 'POST', **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/adapter.py,
 line 206, in request
resp = super(LegacyJsonAdapter, self).request(*args, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/adapter.py,
 line 95, in request
return self.session.request(url, method, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/utils.py,
 line 336, in inner
return func(*args, **kwargs)
  File 
/home/david-lyle/horizon/.venv/local/lib/python2.7/site-packages/keystoneclient/session.py,
 line 397, in request
raise exceptions.from_response(resp, method, url)
BadRequest: Invalid input for field 'default_project_id'. The value is ''. 
(HTTP 400) (Request-ID: req-6e6fc9f8-e723-4ad3-8d1f-c0c8b1a38218)
[24/Jul/2015 20:05:09] POST /identity/users/create/ HTTP/1.1 500 
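
A hedged sketch of the corresponding fix in horizon's keystone wrapper
(illustrative, not the exact user_create() code; it assumes
keystoneclient's v3 UserManager accepts a default_project keyword):

# openstack_dashboard/api/keystone.py (sketch)
def user_create(request, name=None, email=None, password=None,
                project=None, enabled=None, domain=None, description=None):
    manager = keystoneclient(request, admin=True).users
    return manager.create(name=name, password=password, email=email,
                          default_project=project,  # was: project=project
                          domain=domain, description=description,
                          enabled=enabled)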

[Yahoo-eng-team] [Bug 1470625] Re: Mechanism to register and run all external alembic migrations automatically

2015-07-24 Thread Henry Gessau
This is a mini-RFE since we are adding a new mechanism using entry-
points to register external alembic branches at install time.

** Tags added: rfe

** Also affects: networking-l2gw
   Importance: Undecided
   Status: New

** Also affects: networking-cisco
   Importance: Undecided
   Status: New

** Changed in: networking-cisco
 Assignee: (unassigned) => Henry Gessau (gessau)

** Changed in: networking-l2gw
 Assignee: (unassigned) => Henry Gessau (gessau)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1470625

Title:
  Mechanism to register and run all external alembic migrations
  automatically

Status in networking-cisco:
  New
Status in networking-l2gw:
  New
Status in neutron:
  In Progress

Bug description:
  For alembic migration branches that are out-of-tree, we need a
  mechanism whereby the external code can register its branches when it
  is installed, and then neutron will provide automation of running all
  installed external migration branches when neutron-db-manage is used
  for upgrading.
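
  As a hedged sketch of the mechanism (the entry-point group name below
  is an assumption based on the proposed change, not a settled API):

  import pkg_resources

  def get_external_migration_modules():
      # Each installed project registers its alembic migration module
      # under a shared entry-point group; neutron-db-manage then walks
      # every registered branch during upgrade.
      return {ep.name: ep.load()
              for ep in pkg_resources.iter_entry_points(
                  'neutron.db.alembic_migrations')}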

To manage notifications about this bug go to:
https://bugs.launchpad.net/networking-cisco/+bug/1470625/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478103] [NEW] need support for configuring syslog

2015-07-24 Thread Scott Moser
Public bug reported:

in order to instruct a host to easily log syslog information to another
system, we need to add a cloud-config format for this.

The format to use looks like this:
## syslog module allows you to configure the systems syslog.
## configuration of syslog is under the top level cloud-config 
## entry 'syslog'.
##
## remotes
##  remotes is a dictionary. items are of 'name: remote_info'
##  name is simply a name (example 'maas').  It has no importance other than
##  for cloud-init merging configs
##
##  remote_info is of the format
##* optional filter for log messages
##  default if not present: *.*
##* optional leading '@' or '@@'  (indicates udp or tcp).
##  default if not present (udp): @
##  This is rsyslog format for that.  if not present, is '@' which is udp
##* ipv4 or ipv6 or hostname
##  ipv6 addresses must be encoded in [::1] format. example: @[fd00::1]:514
##* optional port
##  port defaults to 514
##
## Example:
#cloud-config
syslog:
 remotes:
  # udp to host 'maas.mydomain' port 514
  maashost: maas.mydomain
  # udp to ipv4 host on port 514
  maas: @[10.5.1.56]:514
  # tcp to host ipv6 host on port 555
  maasipv6: *.* @@[FE80::0202:B3FF:FE1E:8329]:555
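
As a hedged illustration of the remote_info grammar above (not
cloud-init's actual implementation; the regex and function name are
assumptions), a value could be parsed roughly like this:

import re

REMOTE_RE = re.compile(
    r'^\s*(?P<filter>\S+\.\S+\s+)?'          # optional "facility.level "
    r'(?P<proto>@{1,2})?'                    # @ is udp (default), @@ is tcp
    r'(?P<addr>\[[0-9a-fA-F:]+\]|[^\s:]+)'   # [ipv6], ipv4, or hostname
    r'(?::(?P<port>\d+))?\s*$')

def parse_remote(value):
    m = REMOTE_RE.match(value)
    if not m:
        raise ValueError('invalid remote: %r' % value)
    return ((m.group('filter') or '*.*').strip(),
            m.group('proto') or '@',
            m.group('addr'),
            int(m.group('port') or 514))

# parse_remote('*.* @@[FE80::0202:B3FF:FE1E:8329]:555')
#   -> ('*.*', '@@', '[FE80::0202:B3FF:FE1E:8329]', 555)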

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Description changed:

  in order to instruct a host to easily log syslog information to another
  system, we need to add a cloud-config format for this.
  
  The format to use looks like this:
  ## syslog module allows you to configure the systems syslog.
+ ## configuration of syslog is under the top level cloud-config 
+ ## entry 'syslog'.
  ##
- ## a.) remotes
+ ## remotes
  ##  remotes is a dictionary. items are of 'name: remote_info'
  ##  name is simply a name (example 'maas').  It has no importance other than
  ##  for cloud-init merging configs
  ##
  ##  remote_info is of the format
  ##* optional filter for log messages
  ##  default if not present: *.*
  ##* optional leading '@' or '@@'  (indicates udp or tcp).
  ##  default if not present (udp): @
  ##  This is rsyslog format for that.  if not present, is '@' which is udp
  ##* ipv4 or ipv6 or hostname
  ##  ipv6 addresses must be encoded in [::1] format. example: 
@[fd00::1]:514
  ##* optional port
  ##  port defaults to 514
  ##
  ## Example:
+ #cloud-config
  syslog:
-  remotes:
-   # udp to host 'maas.mydomain' port 514
-   maashost: maas.mydomain
-   # udp to ipv4 host on port 514
-   maas: @[10.5.1.56]:514
-   # tcp to host ipv6 host on port 555
-   maasipv6: *.* @@[FE80::0202:B3FF:FE1E:8329]:555
+  remotes:
+   # udp to host 'maas.mydomain' port 514
+   maashost: maas.mydomain
+   # udp to ipv4 host on port 514
+   maas: @[10.5.1.56]:514
+   # tcp to host ipv6 host on port 555
+   maasipv6: *.* @@[FE80::0202:B3FF:FE1E:8329]:555

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1478103

Title:
  need support for configuring syslog

Status in cloud-init:
  New

Bug description:
  in order to instruct a host to easily log syslog information to
  another system, we need to add a cloud-config format for this.

  The format to use looks like this:
  ## syslog module allows you to configure the systems syslog.
  ## configuration of syslog is under the top level cloud-config 
  ## entry 'syslog'.
  ##
  ## remotes
  ##  remotes is a dictionary. items are of 'name: remote_info'
  ##  name is simply a name (example 'maas').  It has no importance other than
  ##  for cloud-init merging configs
  ##
  ##  remote_info is of the format
  ##* optional filter for log messages
  ##  default if not present: *.*
  ##* optional leading '@' or '@@'  (indicates udp or tcp).
  ##  default if not present (udp): @
  ##  This is rsyslog format for that.  if not present, is '@' which is udp
  ##* ipv4 or ipv6 or hostname
  ##  ipv6 addresses must be encoded in [::1] format. example: 
@[fd00::1]:514
  ##* optional port
  ##  port defaults to 514
  ##
  ## Example:
  #cloud-config
  syslog:
   remotes:
    # udp to host 'maas.mydomain' port 514
    maashost: maas.mydomain
    # udp to ipv4 host on port 514
    maas: @[10.5.1.56]:514
    # tcp to host ipv6 host on port 555
    maasipv6: *.* @@[FE80::0202:B3FF:FE1E:8329]:555

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1478103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478108] [NEW] Live migration should throttle itself

2015-07-24 Thread Dan Smith
Public bug reported:

Nova will accept an unbounded number of live migrations for a single
host, which will result in timeouts and failures (at least for libvirt).
Since live migrations are seriously IO intensive, allowing this to be
unlimited is just never going to be the right thing to do, especially
when we have functions in our own client to live migrate all instances
to other hosts (nova host-evacuate-live).

We recently added a build semaphore to allow capping the number of
parallel builds being attempted on a compute host for a similar reason.
This should be the same sort of thing for live migration.
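
A hedged sketch of the proposed cap, mirroring the build semaphore (the
constant, default, and helper names are assumptions, not nova's merged
code):

import eventlet

MAX_CONCURRENT_LIVE_MIGRATIONS = 1  # would come from nova.conf

_live_migration_semaphore = eventlet.semaphore.Semaphore(
    MAX_CONCURRENT_LIVE_MIGRATIONS)

def live_migration(context, instance, dest):
    # Queue excess requests behind the semaphore instead of starting an
    # unbounded number of IO-heavy migrations at once; eventlet makes
    # the wait cooperative. _do_live_migration is a placeholder for the
    # existing migration logic.
    with _live_migration_semaphore:
        _do_live_migration(context, instance, dest)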

** Affects: nova
 Importance: Low
 Status: New

** Changed in: nova
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478108

Title:
  Live migration should throttle itself

Status in OpenStack Compute (nova):
  New

Bug description:
  Nova will accept an unbounded number of live migrations for a single
  host, which will result in timeouts and failures (at least for
  libvirt). Since live migrations are seriously IO intensive, allowing
  this to be unlimited is just never going to be the right thing to do,
  especially when we have functions in our own client to live migrate
  all instances to other hosts (nova host-evacuate-live).

  We recently added a build semaphore to allow capping the number of
  parallel builds being attempted on a compute host for a similar
  reason. This should be the same sort of thing for live migration.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1478108/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478100] [NEW] DHCP agent scheduler can schedule dnsmasq to an agent without reachability to the network it's supposed to serve

2015-07-24 Thread Assaf Muller
Public bug reported:

While overlay networks are typically available on every host, flat or
VLAN provider networks are often not. It may be the case where each rack
only has access to a subset of networks defined in Neutron (Determined
by the network's physical_network tag). In these cases, you would
install a DHCP agent in every rack, but the DHCP scheduler could
schedule a network to the wrong agent, and you end up in a situation
where the dnsmasq instance is on the wrong rack and has no reachability
to its VMs.

More information may be found here:
https://etherpad.openstack.org/p/Network_Segmentation_Usecases.
Specifically, DHCP agents and metadata services are run on nodes within each 
L2. When the neutron network is created we specifically assign the DHCP agent 
in that segment to that network.
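
A hedged sketch of the kind of filtering the scheduler needs (the helper
name and the reported-configuration key are assumptions; the real patch
lives in the DHCP agent scheduler):

def reachable_agents(network, agents):
    # Overlay networks are assumed reachable from every agent; for
    # flat/VLAN networks keep only agents whose host can bridge the
    # network's physical_network.
    physnet = network.get('provider:physical_network')
    if physnet is None:
        return agents
    return [agent for agent in agents
            if physnet in agent['configurations'].get('bridge_mappings', {})]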

** Affects: neutron
 Importance: Undecided
 Assignee: Assaf Muller (amuller)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478100

Title:
  DHCP agent scheduler can schedule dnsmasq to an agent without
  reachability to the network it's supposed to serve

Status in neutron:
  In Progress

Bug description:
  While overlay networks are typically available on every host, flat or
  VLAN provider networks are often not. It may be the case where each
  rack only has access to a subset of networks defined in Neutron
  (Determined by the network's physical_network tag). In these cases,
  you would install a DHCP agent in every rack, but the DHCP scheduler
  could schedule a network to the wrong agent, and you end up in a
  situation where the dnsmasq instance is on the wrong rack and has no
  reachability to its VMs.

  More information may be found here:
  https://etherpad.openstack.org/p/Network_Segmentation_Usecases.
  Specifically, DHCP agents and metadata services are run on nodes within each 
L2. When the neutron network is created we specifically assign the DHCP agent 
in that segment to that network.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478100/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478103] Re: need support for configuring syslog

2015-07-24 Thread Andres Rodriguez
** Also affects: maas
   Importance: Undecided
   Status: New

** Changed in: maas
 Milestone: None => 1.9.0

** Changed in: maas
   Importance: Undecided => Wishlist

** Changed in: maas
   Status: New => Confirmed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1478103

Title:
  need support for configuring syslog

Status in cloud-init:
  New

Bug description:
  in order to instruct a host to easily log syslog information to
  another system, we need to add a cloud-config format for this.

  The format to use looks like this:
  ## syslog module allows you to configure the systems syslog.
  ## configuration of syslog is under the top level cloud-config 
  ## entry 'syslog'.
  ##
  ## remotes
  ##  remotes is a dictionary. items are of 'name: remote_info'
  ##  name is simply a name (example 'maas').  It has no importance other than
  ##  for cloud-init merging configs
  ##
  ##  remote_info is of the format
  ##* optional filter for log messages
  ##  default if not present: *.*
  ##* optional leading '@' or '@@'  (indicates udp or tcp).
  ##  default if not present (udp): @
  ##  This is rsyslog format for that.  if not present, is '@' which is udp
  ##* ipv4 or ipv6 or hostname
  ##  ipv6 addresses must be encoded in [::1] format. example: 
@[fd00::1]:514
  ##* optional port
  ##  port defaults to 514
  ##
  ## Example:
  #cloud-config
  syslog:
   remotes:
    # udp to host 'maas.mydomain' port 514
    maashost: maas.mydomain
    # udp to ipv4 host on port 514
    maas: @[10.5.1.56]:514
    # tcp to host ipv6 host on port 555
    maasipv6: *.* @@[FE80::0202:B3FF:FE1E:8329]:555

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1478103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478103] Re: need support for configuring syslog

2015-07-24 Thread Blake Rouse
** No longer affects: maas

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1478103

Title:
  need support for configuring syslog

Status in cloud-init:
  New

Bug description:
  in order to instruct a host to easily log syslog information to
  another system, we need to add a cloud-config format for this.

  The format to use looks like this:
  ## syslog module allows you to configure the systems syslog.
  ## configuration of syslog is under the top level cloud-config 
  ## entry 'syslog'.
  ##
  ## remotes
  ##  remotes is a dictionary. items are of 'name: remote_info'
  ##  name is simply a name (example 'maas').  It has no importance other than
  ##  for cloud-init merging configs
  ##
  ##  remote_info is of the format
  ##* optional filter for log messages
  ##  default if not present: *.*
  ##* optional leading '@' or '@@'  (indicates udp or tcp).
  ##  default if not present (udp): @
  ##  This is rsyslog format for that.  if not present, is '@' which is udp
  ##* ipv4 or ipv6 or hostname
  ##  ipv6 addresses must be encoded in [::1] format. example: 
@[fd00::1]:514
  ##* optional port
  ##  port defaults to 514
  ##
  ## Example:
  #cloud-config
  syslog:
   remotes:
    # udp to host 'maas.mydomain' port 514
    maashost: maas.mydomain
    # udp to ipv4 host on port 514
    maas: @[10.5.1.56]:514
    # tcp to host ipv6 host on port 555
    maasipv6: *.* @@[FE80::0202:B3FF:FE1E:8329]:555

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1478103/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468000] Re: Group lookup by name in LDAP via v3 fails

2015-07-24 Thread Brant Knudson
** Also affects: keystone/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1468000

Title:
  Group lookup by name in LDAP via v3 fails

Status in Keystone:
  Fix Committed
Status in Keystone kilo series:
  New

Bug description:
  This bug is similar to
  https://bugs.launchpad.net/keystone/+bug/1454309 but relates to
  groups. When issuing an "openstack group show group_name" command on
  a domain associated with LDAP, an invalid LDAP query is composed and
  Keystone returns ISE 500:

  $ openstack --os-token ADMIN --os-url http://localhost:35357/v3 
--os-identity-api-version 3 group show --domain ad 'Domain Admins'
  ERROR: openstack An unexpected error prevented the server from fulfilling 
your request: {'desc': 'Bad search filter'} (Disable debug mode to suppress 
these details.) (HTTP 500) (Request-ID: 
req-06fd5907-6ade-4872-95ab-e66f0809986a)

  Here's the log:

  2015-06-23 15:59:41.627 8571 DEBUG keystone.common.ldap.core [-] LDAP search: 
base=CN=Users,DC=dept,DC=example,DC=org scope=2 
filterstr=(&(None(sAMAccountName=Domain Admins))(objectClass=group)) 
attrs=['cn', 'sAMAccountName', 'description'] attrsonly=0 search_s 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py:933
  2015-06-23 15:59:41.628 8571 DEBUG keystone.common.ldap.core [-] LDAP unbind 
unbind_s 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py:906
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi [-] {'desc': 'Bad 
search filter'}
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi Traceback (most 
recent call last):
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/wsgi.py,
 line 240, in __call__
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi result = 
method(context, **params)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/controller.py,
 line 202, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
context, filters, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/controllers.py,
 line 310, in list_groups
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi hints=hints)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/manager.py,
 line 54, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/core.py,
 line 342, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/core.py,
 line 353, in wrapper
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return f(self, 
*args, **kwargs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/core.py,
 line 1003, in list_groups
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi ref_list = 
driver.list_groups(hints)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/backends/ldap.py,
 line 164, in list_groups
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi return 
self.group.get_all_filtered(hints)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/identity/backends/ldap.py,
 line 402, in get_all_filtered
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi for group in 
self.get_all(query)]
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py,
 line 1507, in get_all
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi for x in 
self._ldap_get_all(ldap_filter)]
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py,
 line 1469, in _ldap_get_all
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi attrs)
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi   File 
/home/vagrant/.venv/local/lib/python2.7/site-packages/keystone/common/ldap/core.py,
 line 946, in search_s
  2015-06-23 15:59:41.628 8571 ERROR keystone.common.wsgi attrlist_utf8, 
attrsonly)
  2015-06-23 

[Yahoo-eng-team] [Bug 1447215] Re: Schema Missing kernel_id, ramdisk_id causes #1447193

2015-07-24 Thread Jorge Niedbalski
** Changed in: glance
   Status: Fix Committed => Fix Released

** Changed in: glance/kilo
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1447215

Title:
  Schema Missing kernel_id, ramdisk_id causes #1447193

Status in Glance:
  Fix Released
Status in Glance kilo series:
  In Progress
Status in glance package in Ubuntu:
  Confirmed

Bug description:
  [Description]

  
  [Environment]

  - Ubuntu 14.04.2
  - OpenStack Kilo

  ii  glance   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Daemons
  ii  glance-api   1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - API
  ii  glance-common1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Common
  ii  glance-registry  1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Registry
  ii  python-glance1:2015.1~rc1-0ubuntu2~cloud0 all 
 OpenStack Image Registry and Delivery Service - Python library
  ii  python-glance-store  0.4.0-0ubuntu1~cloud0all 
 OpenStack Image Service store library - Python 2.x
  ii  python-glanceclient  1:0.15.0-0ubuntu1~cloud0 all 
 Client library for Openstack glance server.

  [Steps to reproduce]

  0) Set /etc/glance/glance-api.conf to enable_v2_api=False
  1) nova boot --flavor m1.small --image base-image --key-name keypair 
--availability-zone nova --security-groups default snapshot-bug 
  2) nova image-create snapshot-bug snapshot-bug-instance 

  At this point the created image has no kernel_id (None) and ramdisk_id
  (None)

  3) Set enable_v2_api=True in glance-api.conf and restart.

  4) Run an os-image-api=2 client:

  $ glance --os-image-api-version 2 image-list

  This will fail with #1447193

  [Description]

  The schema-image.json file needs to be modified to allow null, string
  values for both attributes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1447215/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478169] [NEW] Could not set up simple webserver successfully when creating lb with session persistence type:HTTP_COOKIE

2015-07-24 Thread Madhusudhan Kandadai
Public bug reported:

I am not sure whether this bug is a duplicate of this one:
https://bugs.launchpad.net/neutron/+bug/1477348 If it is, feel free
to mark it as a duplicate. Thanks. (However, the errors reported are
different.)

(1) neutron lbaas-loadbalancer-create --name lb1 private-subnet

neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP 
--protocol-port 80 --name listener1
neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 
--protocol HTTP --session-persistence type=HTTP_COOKIE --name pool1

# Set up two backend servers with their right IPs:
neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.4 
--protocol-port 80 pool1
neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.5 
--protocol-port 80 pool1

# setup simple webserver in each of the backend server:
MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')
while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l 
-p 80 ; done

curl http://lb_ip
curl: (7) Failed to connect to 10.0.0.5 port 80: Connection refused

(2) when I create pool without session_persistence, I am able to get 200
OK when I  curl http://lb-ip
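
Note that the nc loop above serves exactly one request per TCP
connection, which can interact badly with cookie-based persistence
testing. A hedged alternative backend (era-appropriate Python 2, not
part of the reported setup; needs root or a high port):

import socket
import BaseHTTPServer

MYIP = socket.gethostbyname(socket.gethostname())  # may need eth0's IP

class Handler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(self):
        body = 'Welcome to %s\n' % MYIP
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

BaseHTTPServer.HTTPServer(('', 80), Handler).serve_forever()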

** Affects: neutron
 Importance: Undecided
 Status: New


** Tags: lbaas

** Description changed:

  I am not sure whether this bug is duplicate of this one:
  https://bugs.launchpad.net/neutron/+bug/1477348 If it is so, feel free
- to mar it as duplicate. Thanks. (However the errors reported are
+ to mark it as duplicate. Thanks. (However the errors reported are
  different though)
  
  (1) neutron lbaas-loadbalancer-create --name lb1 private-subnet
  
  neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP 
--protocol-port 80 --name listener1
  neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 
--protocol HTTP --session-persistence type=HTTP_COOKIE --name pool1
  
  # Set up two backend servers with their right IPs:
  neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.4 
--protocol-port 80 pool1
  neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.5 
--protocol-port 80 pool1
  
  # setup simple webserver in each of the backend server:
  MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')
  while true; do echo -e HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP | sudo nc -l 
-p 80 ; done
  
  curl http://lb_ip
  curl: (7) Failed to connect to 10.0.0.5 port 80: Connection refused
  
  (2) when I create pool without session_persistence, I am able to get 200
  OK when I  curl http://lb-ip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478169

Title:
  Could not set up simple webserver successfully when creating lb with
  session persistence type:HTTP_COOKIE

Status in neutron:
  New

Bug description:
  I am not sure whether this bug is a duplicate of this one:
  https://bugs.launchpad.net/neutron/+bug/1477348 If it is, feel free
  to mark it as a duplicate. Thanks. (However, the errors reported are
  different.)

  (1) neutron lbaas-loadbalancer-create --name lb1 private-subnet

  neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP 
--protocol-port 80 --name listener1
  neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 
--protocol HTTP --session-persistence type=HTTP_COOKIE --name pool1

  # Set up two backend servers with their right IPs:
  neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.4 
--protocol-port 80 pool1
  neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.5 
--protocol-port 80 pool1

  # setup simple webserver in each of the backend server:
  MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')
  while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l 
-p 80 ; done

  curl http://lb_ip
  curl: (7) Failed to connect to 10.0.0.5 port 80: Connection refused

  (2) when I create pool without session_persistence, I am able to get
  200 OK when I  curl http://lb-ip

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478000] [NEW] VersionTestCase uses the same port for admin and public endpoints

2015-07-24 Thread Alexey Miroshkin
Public bug reported:

VersionTestCase uses the same port for admin and public endpoints:

 port = random.randint(10000, 30000)
 self.config_fixture.config(group='eventlet_server', public_port=port,
   admin_port=port)

https://github.com/openstack/keystone/blob/master/keystone/tests/unit/test_versions.py#L648

It makes the public and admin endpoints indistinguishable. As a result, bugs
like "Keystone API GET 5000/v3 returns wrong endpoint URL in response
body" (https://bugs.launchpad.net/keystone/+bug/1381961) can't be caught
by our tests (e.g. VersionTestCase.test_admin_version_v3)

In reality, the admin and public ports must be different:

 admin_port = random.randint(10000, 30000)
 public_port = random.randint(10000, 30000)
 self.config_fixture.config(group='eventlet_server', public_port=public_port,
   admin_port=admin_port)

After that VersionTestCase.test_admin_version_v3 will fail because of
bug #1381961

** Affects: keystone
 Importance: Undecided
 Assignee: Alexey Miroshkin (amirosh)
 Status: New

** Changed in: keystone
 Assignee: (unassigned) => Alexey Miroshkin (amirosh)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to Keystone.
https://bugs.launchpad.net/bugs/1478000

Title:
  VersionTestCase uses the same port for admin and public endpoints

Status in Keystone:
  New

Bug description:
  VersionTestCase uses the same port for admin and public endpoints:

   port = random.randint(10000, 30000)
   self.config_fixture.config(group='eventlet_server', public_port=port,
 admin_port=port)

  
https://github.com/openstack/keystone/blob/master/keystone/tests/unit/test_versions.py#L648

  It makes the public and admin endpoints indistinguishable. As a result, bugs
  like "Keystone API GET 5000/v3 returns wrong endpoint URL in response
  body" (https://bugs.launchpad.net/keystone/+bug/1381961) can't be
  caught by our tests (e.g. VersionTestCase.test_admin_version_v3)

  In reality, the admin and public ports must be different:

   admin_port = random.randint(10000, 30000)
   public_port = random.randint(10000, 30000)
   self.config_fixture.config(group='eventlet_server', public_port=public_port,
 admin_port=admin_port)

  After that VersionTestCase.test_admin_version_v3 will fail because of
  bug #1381961

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/1478000/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1438331] Re: Nova fails to delete rbd image, puts guest in to ERROR state

2015-07-24 Thread Matt Riedemann
** Also affects: nova/juno
   Importance: Undecided
   Status: New

** Changed in: nova/juno
   Status: New => In Progress

** Changed in: nova/juno
   Importance: Undecided => Medium

** Changed in: nova/juno
 Assignee: (unassigned) => Dan Smith (danms)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1438331

Title:
  Nova fails to delete rbd image, puts guest in to ERROR state

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) juno series:
  In Progress

Bug description:
  When removing guests  that have been booted on Ceph, Nova will
  occasionally put guests in to ERROR state with the following ...

  Reported to the controller:

  | fault| {message: error removing image, 
code: 500, details:   File 
\/usr/lib/python2.7/site-packages/nova/compute/manager.py\, line 314, in 
decorated_function |
  |  | return function(self, context, 
*args, **kwargs)
   |
  |  |   File 
\/usr/lib/python2.7/site-packages/nova/compute/manager.py\, line 2525, in 
terminate_instance |
  |  | do_terminate_instance(instance, 
bdms)   
  |
  |  |   File 
\/usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py\, line 
272, in inner|
  |  | return f(*args, **kwargs)

 |
  |  |   File 
\/usr/lib/python2.7/site-packages/nova/compute/manager.py\, line 2523, in 
do_terminate_instance  |
  |  | 
self._set_instance_error_state(context, instance)   
  |
  |  |   File 
\/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py\, line 
82, in __exit__   |
  |  | six.reraise(self.type_, 
self.value, self.tb)
  |
  |  |   File 
\/usr/lib/python2.7/site-packages/nova/compute/manager.py\, line 2513, in 
do_terminate_instance  |
  |  | self._delete_instance(context, 
instance, bdms, quotas) 
   |
  |  |   File 
\/usr/lib/python2.7/site-packages/nova/hooks.py\, line 131, in inner  
   |
  |  | rv = f(*args, **kwargs)  

 |
  |  |   File 
\/usr/lib/python2.7/site-packages/nova/compute/manager.py\, line 2482, in 
_delete_instance   |
  |  | quotas.rollback()

 |
  |  |   File 
\/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py\, line 
82, in __exit__   |
  |  | six.reraise(self.type_, 
self.value, self.tb)
  |
  |  |   File 
\/usr/lib/python2.7/site-packages/nova/compute/manager.py\, line 2459, in 
_delete_instance   |
  |  | self._shutdown_instance(context, 
instance, bdms) 
 |
  |   

[Yahoo-eng-team] [Bug 1478003] [NEW] Fix seven typos on nova documentation

2015-07-24 Thread Atsushi SAKAI
Public bug reported:

Fix seven typos on nova documentation

behaviour => behavior (4 occurrences)
poicy => policy
schedular => scheduler
environement => environment

** Affects: nova
 Importance: Undecided
 Assignee: Atsushi SAKAI (sakaia)
 Status: In Progress

** Changed in: nova
 Assignee: (unassigned) => Atsushi SAKAI (sakaia)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1478003

Title:
  Fix seven typos on nova documentation

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  Fix seven typos on nova documentation

  behaviour => behavior (4 occurrences)
  poicy => policy
  schedular => scheduler
  environement => environment

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1478003/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1468828] Re: HA router-create breaks ML2 drivers that implement create_network such as Arista

2015-07-24 Thread Kyle Mestery
** Changed in: neutron/kilo
 Milestone: None => 2015.1.1

** Changed in: neutron
 Milestone: None => liberty-2

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => Medium

** Changed in: neutron/juno
   Status: New => Fix Committed

** Changed in: neutron/juno
 Assignee: (unassigned) => Sukhdev Kapur (sukhdev-8)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1468828

Title:
  HA router-create  breaks ML2 drivers that implement create_network
  such as Arista

Status in neutron:
  Fix Committed
Status in neutron juno series:
  Fix Committed
Status in neutron kilo series:
  Fix Committed

Bug description:
  This issue was discovered with Arista ML2 driver, when an HA router
  was created. However, this will impact any ML2 driver that implements
  create_network.

  When an admin creates an HA router (neutron router-create --ha ...), the HA 
framework invokes create_network() and sets tenant-id to '' (the empty string).
  The create_network() ML2 mech driver API expects tenant-id to be set to a 
valid ID.
  Any ML2 driver that relies on tenant-id will fail/reject the create_network() 
request, resulting in router-create failing.
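
A hedged sketch of a driver-side guard (illustrative only; the actual
fix may instead make the HA framework pass a valid tenant-id):

def create_network_postcommit(self, context):
    network = context.current
    if not network.get('tenant_id'):
        # The HA framework currently creates its network with
        # tenant_id = ''; skip rather than reject, so that
        # router-create does not fail.
        return
    self._provision_network_on_backend(network)  # hypothetical helper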

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1468828/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1472458] Re: Arista ML2 VLAN driver should ignore non-VLAN network types

2015-07-24 Thread Kyle Mestery
** Changed in: neutron/kilo
   Importance: Undecided => Low

** Changed in: neutron
   Importance: Undecided => Low

** Changed in: neutron
 Milestone: None => liberty-2

** Also affects: neutron/juno
   Importance: Undecided
   Status: New

** Changed in: neutron/juno
   Importance: Undecided => Low

** Changed in: neutron/juno
 Assignee: (unassigned) => Sukhdev Kapur (sukhdev-8)

** Changed in: neutron/juno
   Status: New => Fix Committed

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1472458

Title:
  Arista ML2 VLAN driver should ignore non-VLAN network types

Status in neutron:
  Fix Committed
Status in neutron juno series:
  Fix Committed
Status in neutron kilo series:
  Fix Committed

Bug description:
  Arista ML2 VLAN driver should process only VLAN based networks. Any
  other network type (e.g. vxlan) should be ignored.
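
A hedged sketch of the guard (the ML2 constant is real driver API, but
the body and helper name are illustrative, not the merged patch):

from neutron.plugins.ml2 import driver_api as api

def create_network_precommit(self, context):
    segments = context.network_segments
    if not all(s[api.NETWORK_TYPE] == 'vlan' for s in segments):
        return  # ignore vxlan/gre/flat networks entirely
    self._queue_network_for_eos(context.current)  # hypothetical helper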

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1472458/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478126] [NEW] getting success message on duplicate object

2015-07-24 Thread Ryan Peters
Public bug reported:

Getting a success pop-up message when creating a duplicate object; the
object is not created as it already exists.

Steps to reproduce:

1. Create the container "test".
2. Under the container "test", upload an object/create a pseudo folder with 
the name "new". Observe that "new" is created and displayed in "test".
3. Again, inside "test" create a pseudo folder/object with the name "new", 
which is a duplicate.
4. Observe that a popup with a "Success" message is displayed.

Expected Result: Instead, display an appropriate error message.
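
A hedged sketch of the check horizon could perform before reporting
success (the form plumbing is illustrative; it assumes existence and
creation helpers in openstack_dashboard.api.swift):

from django.forms import ValidationError
from django.utils.translation import ugettext_lazy as _

from openstack_dashboard.api import swift

def handle(self, request, data):
    if swift.swift_object_exists(request, data['container_name'],
                                 data['name']):
        raise ValidationError(_('An object with that name already '
                                'exists in this container.'))
    return swift.swift_create_pseudo_folder(request,
                                            data['container_name'],
                                            data['name'])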

** Affects: horizon
 Importance: Low
 Assignee: Lin Hua Cheng (lin-hua-cheng)
 Status: In Progress


** Tags: swift

** Changed in: horizon
 Assignee: (unassigned) => Ryan Peters (rjpeter2)

** Changed in: horizon
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1478126

Title:
  getting success message on duplicate object

Status in OpenStack Dashboard (Horizon):
  In Progress

Bug description:
  Getting a success pop-up message when creating a duplicate object; the
  object is not created as it already exists.

  Steps to reproduce:

  1. Create the container "test".
  2. Under the container "test", upload an object/create a pseudo folder 
with the name "new". Observe that "new" is created and displayed in "test".
  3. Again, inside "test" create a pseudo folder/object with the name "new", 
which is a duplicate.
  4. Observe that a popup with a "Success" message is displayed.

  Expected Result: Instead, display an appropriate error message.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1478126/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1475356] Re: Serializer reports wrong supported version

2015-07-24 Thread Matt Riedemann
** Also affects: nova
   Importance: Undecided
   Status: New

** Also affects: nova/kilo
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1475356

Title:
  Serializer reports wrong supported version

Status in OpenStack Compute (nova):
  New
Status in OpenStack Compute (nova) kilo series:
  New
Status in oslo.versionedobjects:
  Fix Released

Bug description:
  The VersionedObjectSerializer is what calls object_backport in our
  indirection_api if we encounter an unsupported version. In order for
  this to work properly, we need to report the top-level object version
  that we're trying to deserialize, not the one we actually encountered.
  We depend on the conductor's object relationship mappings to guide us
  to a fully-supported object tree.

  Currently, the serializer is reporting the object that failed to
  deserialize, not the top.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1475356/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478072] Re: DVR enabled logic needs improvement

2015-07-24 Thread Sean M. Collins
http://eavesdrop.openstack.org/irclogs/%23openstack-neutron
/%23openstack-neutron.2015-07-24.log.html#t2015-07-24T16:21:09

** Summary changed:

- DVR enabled on neutron jobs where DVR should be disabled
+ DVR enabled logic needs improvement

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478072

Title:
  DVR enabled logic needs improvement

Status in neutron:
  Invalid

Bug description:
  Job: gate-tempest-dsvm-neutron-full

  http://git.openstack.org/cgit/openstack-infra/project-
  config/tree/jenkins/jobs/devstack-gate.yaml#n448

  http://logs.openstack.org/77/198877/4/gate/gate-tempest-dsvm-neutron-
  full/29bcccb/

  Port ID: a84df1ca-0a43-4138-b60b-c4c5

  2015-07-23 15:27:56.611 DEBUG neutron.db.l3_dvrscheduler_db [req-
  b781d84a-1079-4328-b123-f4cb311c3a4f tempest-
  AllowedAddressPairIpV6TestJSON-1876686975 tempest-
  AllowedAddressPairIpV6TestJSON-178337704] No namespaces available for
  this DVR port fd6aefa9-e1cb-4569-b7dc-1fb546e0d650 on host devstack-
  trust

  db/l3_dvrscheduler_db.py:LOG.debug('No namespaces
  available for this DVR port %(port)s '

  Maybe this check is not accurate enough?

  
https://github.com/openstack/neutron/blob/d266b5a90585634f91f3830b9f99af9dfaac5e31/neutron/plugins/ml2/plugin.py#L1307

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478072/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 968696] Re: admin-ness not properly scoped

2015-07-24 Thread Adam Young
** Also affects: glance
   Importance: Undecided
   Status: New

** Also affects: cinder
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/968696

Title:
  admin-ness not properly scoped

Status in Cinder:
  New
Status in Glance:
  New
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Keystone:
  Confirmed
Status in neutron:
  Incomplete
Status in OpenStack Compute (nova):
  Confirmed

Bug description:
  Fact: Keystone's rbac model grants roles to users on specific tenants,
  and post-keystone redux, there are no longer global roles.

  Problem: Granting a user an admin role on ANY tenant grants them
  unlimited admin-ness throughout the system because there is no
  differentiation between a scoped admin-ness and a global
  admin-ness.

  I don't have a specific solution to advocate, but being an admin on
  *any* tenant simply *cannot* allow you to administer all of keystone.
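
  For illustration only (a hedged sketch, not any project's shipped
  policy), the gap is visible in oslo policy rules: the first line below
  treats admin-ness as global, while something like the second would
  scope it to the target project:

    "admin_required": "role:admin",
    "admin_on_target": "role:admin and project_id:%(project_id)s"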

  Steps to reproduce (from Horizon, though you could do this with the
  CLI, too):

  1. User A (existing admin) creates Project B and User B.
  2. User A adds User B to Project B with the admin role on Project B.
  3. User B logs in and now has unlimited admin rights not only to view things 
in the dashboard, but to take actions like creating new projects and users, 
managing existing projects and users, etc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/968696/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478181] [NEW] there is something wrong in the note below function _validate_ip_address

2015-07-24 Thread Launchpad Bug Tracker
You have been subscribed to a public bug:

https://github.com/openstack/neutron/blob/master/neutron/api/v2/attributes.py
the note below the function _validate_ip_address
on line 199:
#netaddr.IPAddress('1' * 59)
does not give the right result
#   IPAddress('199.28.113.199')
but instead raises an error like this:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/netaddr/ip/__init__.py", line 
306, in __init__
'address from %r' % addr)
netaddr.core.AddrFormatError: failed to detect a valid IP address from 
'111'

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
there is something wrong in the note below function _validate_ip_address 
https://bugs.launchpad.net/bugs/1478181
You received this bug notification because you are a member of Yahoo! 
Engineering Team, which is subscribed to neutron.

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1478181] Re: there is something wrong in the note below function _validate_ip_address

2015-07-24 Thread Gauvain Pocentek
** Project changed: openstack-manuals = neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1478181

Title:
  there is something wrong in the note below function
  _validate_ip_address

Status in neutron:
  New

Bug description:
  https://github.com/openstack/neutron/blob/master/neutron/api/v2/attributes.py
  the note below the function _validate_ip_address
  on line 199:
  #netaddr.IPAddress('1' * 59)
  does not give the right result
  #   IPAddress('199.28.113.199')
  but instead raises an error like this:
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File "/usr/local/lib/python2.7/dist-packages/netaddr/ip/__init__.py", line 
306, in __init__
  'address from %r' % addr)
  netaddr.core.AddrFormatError: failed to detect a valid IP address from 
'111'

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1478181/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1458945] Re: Use graduated oslo.policy instead of oslo-incubator code

2015-07-24 Thread Valeriy Ponomaryov
** Changed in: manila
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1458945

Title:
  Use graduated oslo.policy instead of oslo-incubator code

Status in Barbican:
  Fix Released
Status in Ceilometer:
  Fix Released
Status in Cinder:
  In Progress
Status in congress:
  Fix Committed
Status in Glance:
  Fix Released
Status in heat:
  Fix Released
Status in OpenStack Dashboard (Horizon):
  Fix Released
Status in Ironic:
  Fix Released
Status in Keystone:
  Fix Released
Status in MagnetoDB:
  Confirmed
Status in Magnum:
  New
Status in Manila:
  Invalid
Status in Mistral:
  Invalid
Status in murano:
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  In Progress
Status in Rally:
  Invalid
Status in Sahara:
  Fix Released
Status in OpenStack Object Storage (swift):
  Invalid
Status in Trove:
  Invalid

Bug description:
  The Policy code is now be managed as a library, named oslo.policy.

  If there is a CVE level defect, deploying a fix should require
  deploying a new version of the library, not syncing each individual
  project.

  All the projects in the OpenStack ecosystem that are using the policy
  code from oslo-incubator should use the new library.

To manage notifications about this bug go to:
https://bugs.launchpad.net/barbican/+bug/1458945/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1460177] Re: Support metadata service with IPv6-only tenant network

2015-07-24 Thread James Page
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu)
   Status: New => Triaged

** Changed in: neutron (Ubuntu)
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1460177

Title:
  Support metadata service with IPv6-only tenant network

Status in neutron:
  Triaged
Status in neutron package in Ubuntu:
  Triaged

Bug description:
  The EC2 metadata service is supported by the nova metadata service that is
  running in the management network. Cloud-init running in the instance
  normally accesses the service at 169.254.169.254. Cloud-init can be
  configured with metadata_urls other than the default
  http://169.254.169.254 to access the service. But such configuration
  is not currently supported by openstack.  In order for the instance to
  access the nova metadata service, neutron provides proxy service that
  terminates http://169.254.169.254 and forwards the request to the nova
  metadata service, and responds back to the instance. Apparently, this
  works only when IPv4 is available in the tenant network. For an
  IPv6-only tenant work, to continue the support of this service, the
  instance has to access it at an IPv6 address. This requires
  enhancement in Neutron to support it.

  A few options have been discussed so far:
     -- define a well-known ipv6 link-local address to access the metadata 
service.
     -- enhance IPv6 RA to advertise the metadata service endpoint to 
instances. This would require standards work and enhance cloud-init to support 
it.
     -- define a well-known name for the metadata service and configure 
metadata_urls to use the name.  The name will be resolved to a datacenter 
specific IP address. The corresponding DNS record should be pre-provisioned in 
the datacenter DNS server for the instance to resolve the name.
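
  As a client-side sketch of the first option: the address
  fe80::a9fe:a9fe and the interface name eth0 below are placeholders
  for illustration, not an agreed-upon choice.

      import socket

      def fetch_metadata(path='/openstack/latest/meta_data.json'):
          # Link-local addresses are only reachable together with a
          # scope (interface); getaddrinfo encodes it into the sockaddr.
          addrinfo = socket.getaddrinfo('fe80::a9fe:a9fe%eth0', 80,
                                        socket.AF_INET6,
                                        socket.SOCK_STREAM)
          family, socktype, proto, _canon, sockaddr = addrinfo[0]
          sock = socket.socket(family, socktype, proto)
          sock.connect(sockaddr)
          sock.sendall('GET %s HTTP/1.0\r\nHost: metadata\r\n\r\n' % path)
          chunks = []
          while True:
              data = sock.recv(4096)
              if not data:
                  break
              chunks.append(data)
          sock.close()
          return ''.join(chunks)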

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1460177/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1236439] Re: switch to use hostnames like nova breaks upgrades of l3-agent

2015-07-24 Thread James Page
As we've had no further bug reports about this feature, marking 'Won't
Fix' for Ubuntu and the UCA.

** Changed in: neutron (Ubuntu)
   Status: Triaged = Won't Fix

** Changed in: cloud-archive
   Status: Triaged = Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1236439

Title:
  switch to use hostnames like nova breaks upgrades of l3-agent

Status in ubuntu-cloud-archive:
  Won't Fix
Status in neutron:
  Incomplete
Status in Release Notes for Ubuntu:
  Fix Released
Status in neutron package in Ubuntu:
  Won't Fix
Status in neutron source package in Saucy:
  Won't Fix

Bug description:
  Commit
  https://github.com/openstack/neutron/commit/140029ebd006c116ee684890dd70e13b7fc478ec
  switched to using socket.gethostname() for the name of neutron agents;
  this has the unfortunate side effect with the l3-agent that all router
  services are no longer scheduled on an active agent, resulting in
  floating IP and access outages.
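
  For illustration, the rename that triggers the re-registration (the
  FQDN below is taken from the agent list; actual output depends on the
  host's name configuration):

      import socket

      # On a host whose FQDN is 'cofgod.1ss.qa.lexington':
      socket.getfqdn()      # 'cofgod.1ss.qa.lexington' (old agent name)
      socket.gethostname()  # may return just 'cofgod' (new agent name)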

  Looks like this will affect upgrades from grizzly to havana as well:

  ubuntu@churel:/etc/maas$ quantum agent-list
  +--------------------------------------+--------------------+--------------------------+-------+----------------+
  | id                                   | agent_type         | host                     | alive | admin_state_up |
  +--------------------------------------+--------------------+--------------------------+-------+----------------+
  | 02ad1175-209c-4125-889a-e390a15ecd50 | Open vSwitch agent | caipora.1ss.qa.lexington | xxx   | True           |
  | 191d4757-05f6-4170-a78d-d6a3c1b9265e | Open vSwitch agent | canaima                  | :-)   | True           |
  | 306cbfbb-8879-4d64-ac26-db007f9113a9 | DHCP agent         | cofgod.1ss.qa.lexington  | xxx   | True           |
  | 32081821-1e94-4274-993b-b0bf2714e5ac | Open vSwitch agent | ciguapa.1ss.qa.lexington | xxx   | True           |
  | 5697a23a-712e-4de3-a218-2a6c177bf555 | Open vSwitch agent | chakora                  | :-)   | True           |
  | 5ea5e207-1da0-47e3-9a7e-984589b11300 | Open vSwitch agent | cuegle.1ss.qa.lexington  | xxx   | True           |
  | 71e31354-76e7-4640-9a5b-368678bc22d0 | Open vSwitch agent | canaima.1ss.qa.lexington | xxx   | True           |
  | 7267e3d2-d9bf-4e57-8d19-803aab636f36 | Open vSwitch agent | chakora.1ss.qa.lexington | xxx   | True           |
  | 75ff2563-f5a5-4df3-aa19-fe8310146c10 | Open vSwitch agent | cuegle                   | :-)   | True           |
  | 875de52e-d6c3-4e82-8cbd-269831ff00bc | Open vSwitch agent | cofgod                   | :-)   | True           |
  | 9afaf6f2-2756-4863-b5d0-7faba502e878 | L3 agent           | cofgod                   | :-)   | True           |
  | a81ac370-a318-42e4-9279-eef2b6141644 | Open vSwitch agent | cofgod.1ss.qa.lexington  | xxx   | True           |
  | d6e6332e-822a-438e-8613-16013da825e0 | L3 agent           | cofgod.1ss.qa.lexington  | xxx   | True           |
  | d9712755-03b3-4326-99c1-3bf66c878dc6 | Open vSwitch agent | ciguapa                  | :-)   | True           |
  | dadf284c-ac8f-4dc1-9ba4-73182e5f1911 | DHCP agent         | cofgod                   | :-)   | True           |
  | ed07ff1a-dcca-4bbd-b026-1296bb90f89b | Open vSwitch agent | caipora                  | :-)   | True           |
  +--------------------------------------+--------------------+--------------------------+-------+----------------+

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1236439/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477975] [NEW] too many dumps of log options values

2015-07-24 Thread Salvatore Orlando
Public bug reported:

A gate test run usually dumps the option values twice [1], and the dumps
become three if there are api workers. If rpc workers are enabled as
well, there will be four dumps.

[1] http://logs.openstack.org/08/188608/17/check/gate-tempest-dsvm-neutron-full/625c79d/logs/screen-q-svc.txt.gz

oslo_service is dumping option values already. Probably neutron does not
need to do that anymore.
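
A minimal sketch of the duplication, assuming typical oslo.config usage;
the call site is illustrative, not neutron's exact code:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    def start_worker(conf=cfg.CONF):
        # oslo_service already logs every option value when a service
        # starts, so an explicit dump like this becomes redundant and is
        # repeated once per api/rpc worker:
        conf.log_opt_values(LOG, logging.DEBUG)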

** Affects: neutron
 Importance: Medium
 Assignee: Salvatore Orlando (salvatore-orlando)
 Status: In Progress

** Changed in: neutron
 Assignee: (unassigned) = Salvatore Orlando (salvatore-orlando)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477975

Title:
  too many dumps of log options values

Status in neutron:
  In Progress

Bug description:
  A gate test run usually dumps the option values twice [1], and the dumps
  become three if there are api workers. If rpc workers are enabled as
  well, there will be four dumps.

  [1] http://logs.openstack.org/08/188608/17/check/gate-tempest-dsvm-neutron-full/625c79d/logs/screen-q-svc.txt.gz

  oslo_service is dumping option values already. Probably neutron does
  not need to do that anymore.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1477975/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1477077] Re: Exception during message handling: QueuePool limit of size 10 overflow 20 reached, connection timed out, timeout 10 stable/kilo

2015-07-24 Thread James Page
** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: neutron (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1477077

Title:
  Exception during message handling: QueuePool limit of size 10 overflow
  20 reached, connection timed out, timeout 10 stable/kilo

Status in neutron:
  New

Bug description:
  OpenStack stable/kilo release, Ubuntu 14.04 devstack setup. While
  multiple calls are generated during a tempest run (tempest.api.networks)
  in the HP networking CI setup, we see a QueuePool-limit-related
  exception which results in tempest failures.
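
  For context, a sketch of how the numbers in the error message map onto
  SQLAlchemy QueuePool parameters (oslo.db exposes them as the [database]
  max_pool_size / max_overflow / pool_timeout options; the URL below is a
  placeholder):

      from sqlalchemy import create_engine

      engine = create_engine(
          'mysql://user:password@localhost/neutron',  # placeholder URL
          pool_size=10,     # "QueuePool limit of size 10 ..."
          max_overflow=20,  # "... overflow 20 reached ..."
          pool_timeout=10,  # "... connection timed out, timeout 10"
      )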

  tempest tests that are run include
  cd /opt/stack/tempest; testr init ; ./tools/pretty_tox_serial.sh\  
tempest.api.network.test_networks tempest.api.network.test_extensions 
tempest.api.network.test_allowed_address_pair 
tempest.api.network.test_extra_dhcp_options 
tempest.api.network.test_floating_ips_negative 
tempest.api.network.test_floating_ips 
tempest.api.network.test_networks_negative tempest.api.network.test_networks 
tempest.api.network.test_ports tempest.api.network.test_routers_negative 
tempest.api.network.test_routers 
tempest.api.network.test_security_groups_negative 
tempest.api.network.test_security_groups 
tempest.api.network.test_service_type_management 
tempest.scenario.test_network_basic_ops|^test_security_groups_basic_ops|^test_server_basic_ops|^test_network_advanced_server_ops|^test_server_advanced_ops\
 


  detailed logs can be found at
  http://15.126.220.115/OVSVAPP-VLAN-KILO/602/203586/9215e7309301fd72961e906a1ae5afe1d5c8f909/

  might have been deleted concurrently.
  2015-07-21 12:27:55.595 WARNING neutron.db.agentschedulers_db [req-2585e7c4-8f33-439b-8b64-afc1522a686a None None] Removing network 17d6770a-c478-46d4-9556-ee77bbaf2d6b from agent 04fa72de-c1f9-4b51-8369-3b027aaf42d4 because the agent did not report to the server in the last 150 seconds.
  2015-07-21 12:28:15.822 ERROR oslo_messaging.rpc.dispatcher [req-99ec4c7c-3f9d-4dd7-b920-c57606a07df7 None None] Exception during message handling: QueuePool limit of size 10 overflow 20 reached, connection timed out, timeout 10
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher   File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 142, in _dispatch_and_reply
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher   File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 186, in _dispatch
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher   File /usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py, line 130, in _do_dispatch
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher   File /opt/stack/neutron/neutron/db/agents_db.py, line 281, in report_state
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher     self.plugin.create_or_update_agent(context, agent_state)
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher   File /opt/stack/neutron/neutron/db/agents_db.py, line 238, in create_or_update_agent
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher     return self._create_or_update_agent(context, agent)
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher   File /opt/stack/neutron/neutron/db/agents_db.py, line 217, in _create_or_update_agent
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher     context, agent['agent_type'], agent['host'])
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher   File /opt/stack/neutron/neutron/db/agents_db.py, line 193, in _get_agent_by_type_and_host
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher     Agent.host == host).one()
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2395, in one
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher     ret = list(self)
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2438, in __iter__
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher     return self._execute_and_instances(context)
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher   File /usr/local/lib/python2.7/dist-packages/sqlalchemy/orm/query.py, line 2451, in _execute_and_instances
  2015-07-21 12:28:15.822 TRACE oslo_messaging.rpc.dispatcher     close_with_result=True)
  2015-07-21 12:28:15.822 TRACE

[Yahoo-eng-team] [Bug 1445084] Re: neutron-dhcp-agent send 0x03 dhcp flag to a client

2015-07-24 Thread James Page
** Also affects: neutron
   Importance: Undecided
   Status: New

** No longer affects: neutron (Ubuntu)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1445084

Title:
  neutron-dhcp-agent send 0x03 dhcp flag to a client

Status in neutron:
  New

Bug description:
  If the client is a virtual Windows host joined to an AD domain, the
  client does not send DNS updates to the AD server, because
  neutron-dhcp-agent sends DHCP flag 0x03 in its response.
  Client request:

  Option: (81) Client Fully Qualified Domain Name
  Length: 23
  Flags: 0x00
  0000 .... = Reserved flags: 0x00
  .... 0... = Server DDNS: Some server updates
  .... ..0. = Server overrides: No override
  .... ...0 = Server: Client
  A-RR result: 0
  PTR-RR result: 0
  Client name: dnstest.local

  Server response:

  Option: (81) Client Fully Qualified Domain Name
  Length: 30
  Flags: 0x03
  0000 .... = Reserved flags: 0x00
  .... 0... = Server DDNS: Some server updates
  .... .0.. = Encoding: ASCII encoding
  .... ..1. = Server overrides: Override
  .... ...1 = Server: Server
  A-RR result: 255
  PTR-RR result: 255
  Client name: host-172-25-43-40.local

  Because of this, the Windows host does not send DNS update requests to
  the AD server, and after a certain timeout the server removes the
  client hostname from the DNS zone.
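
  A small helper decoding the option 81 flag byte per RFC 4702, to show
  why 0x03 suppresses client-side dynamic DNS updates (the helper is
  illustrative, not part of neutron):

      def decode_fqdn_flags(flags):
          return {
              'S (server performs the A-RR update)': bool(flags & 0x01),
              'O (server overrides the client S bit)': bool(flags & 0x02),
              'E (canonical wire-format encoding)': bool(flags & 0x04),
              'N (server performs no DNS updates)': bool(flags & 0x08),
          }

      # 0x03 sets S and O: the server claims the update and overrides the
      # client's request, so the Windows client stops updating AD DNS.
      print(decode_fqdn_flags(0x03))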

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1445084/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp