[Yahoo-eng-team] [Bug 1982945] [NEW] cannot start vm from disk with burst qos

2022-07-27 Thread Vladislav Belogrudov
Public bug reported:

When trying to start a VM from a disk associated with a burst QoS spec
(consumer=both, total_iops_sec_max=200), nova/libvirt reports:

Failed to start libvirt guest: libvirt.libvirtError: internal error:
unable to execute QEMU command 'block_set_io_throttle': bps_max/iops_max
require corresponding bps/iops values

The same happens when setting size_iops_sec:
nova.virt.libvirt.guest libvirt.libvirtError: internal error: unable to execute 
QEMU command 'block_set_io_throttle': iops size requires an iops value to be set

OpenStack Yoga, CentOS Stream 8
libvirt 8.0.0-2
qemu 6.2.0-5
cinder backend: lvm (can be any other iscsi storage as well)
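
For context, here is a minimal Python sketch (illustrative only, not nova or
QEMU code) of the invariant QEMU enforces in 'block_set_io_throttle', which
both errors above trip over:

    def validate_throttle(spec):
        # every *_max burst key needs its corresponding base key set as well
        for key, value in spec.items():
            if key.endswith('_max') and value and not spec.get(key[:-len('_max')]):
                raise ValueError('bps_max/iops_max require corresponding bps/iops values')
        # and size_iops_sec needs at least one iops value to scale
        iops_keys = ('total_iops_sec', 'read_iops_sec', 'write_iops_sec')
        if spec.get('size_iops_sec') and not any(spec.get(k) for k in iops_keys):
            raise ValueError('iops size requires an iops value to be set')

    validate_throttle({'total_iops_sec_max': 200})  # raises, as in the trace above

If that reading is right, a likely (untested) workaround is to set a base
total_iops_sec alongside total_iops_sec_max in the QoS spec.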

** Affects: nova
 Importance: Undecided
 Status: New

** Attachment added: "log.txt"
   https://bugs.launchpad.net/bugs/1982945/+attachment/5605577/+files/log.txt

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1982945

Title:
  cannot start vm from disk with burst qos

Status in OpenStack Compute (nova):
  New

Bug description:
  When trying to start a VM from a disk associated with a burst QoS spec
  (consumer=both, total_iops_sec_max=200), nova/libvirt reports:

  Failed to start libvirt guest: libvirt.libvirtError: internal error:
  unable to execute QEMU command 'block_set_io_throttle':
  bps_max/iops_max require corresponding bps/iops values

  The same happens when setting size_iops_sec:
  nova.virt.libvirt.guest libvirt.libvirtError: internal error: unable to 
execute QEMU command 'block_set_io_throttle': iops size requires an iops value 
to be set

  OpenStack Yoga, CentOS Stream 8
  libvirt 8.0.0-2
  qemu 6.2.0-5
  cinder backend: lvm (can be any other iscsi storage as well)

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1982945/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1943631] Re: Neutron with OVN fails to bind port if hostname has dots

2022-05-25 Thread Vladislav Belogrudov
I could reproduce the same issue with the latest devstack and a dotted
(fully qualified) hostname:

stack@myci:~/devstack$ hostname
myci.home.org


stack@myci:~/devstack$ sudo ovs-vsctl list open_vswitch
_uuid   : 3859f81a-2f37-456c-b3da-bb068f30310f
bridges : [06b3fc03-5783-4401-be39-2562836f2058, 
3e1363b6-fb78-4160-a41e-9c47441ca481]
cur_cfg : 2
datapath_types  : [netdev, system]
datapaths   : {}
db_version  : "8.2.0"
dpdk_initialized: false
dpdk_version: none
external_ids: {hostname=myci, ovn-bridge=br-int, 
ovn-bridge-mappings="public:br-ex", ovn-cms-options=enable-chassis-as-gw, 
ovn-encap-ip="192.168.122.20", ovn-encap-type=geneve, 
ovn-remote="tcp:192.168.122.20:6642", rundir="/var/run/openvswitch", 
system-id="cd754343-5266-4d01-8328-f462916a2a2c"}
iface_types : [erspan, geneve, gre, internal, ip6erspan, ip6gre, 
lisp, patch, stt, system, tap, vxlan]
manager_options : [a92f4ff7-f42f-4f4c-962a-647c82287cf1]
next_cfg: 2
other_config: {}
ovs_version : "2.13.5"
ssl : []
statistics  : {}
system_type : ubuntu
system_version  : "20.04"


A fix for this config:

stack@myci:~/devstack$ sudo ovs-vsctl set open_vswitch . external-ids:hostname='myci.home.org'


After that everything works, so this is rather a devstack bug.
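
For what it's worth, a minimal Python sketch (an assumption about the binding
logic, not Neutron's actual code) of why the bind fails: the chassis is looked
up by the host name the compute service reports, but it registered only the
short name:

    # chassis registry keyed by external_ids:hostname (from the output above)
    chassis_by_hostname = {'myci': 'cd754343-5266-4d01-8328-f462916a2a2c'}

    requested_host = 'myci.home.org'   # FQDN reported by the compute service
    print(chassis_by_hostname.get(requested_host))  # None -> "Failed to bind port"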


** Changed in: neutron
   Status: Expired => Confirmed

** Project changed: neutron => devstack

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1943631

Title:
  Neutron with OVN fails to bind port if hostname has dots

Status in devstack:
  Confirmed

Bug description:
  If the hostname has dots, as in for example "devstack.localdomain",
  when trying to create an instance, Neutron will fail with "Failed to
  bind port".

  Reproduced with the latest DevStack from master on top of Ubuntu
  Server 20.04.3 LTS with latest packages installed. Minimal
  installation, no additional packages installed (only removed
  python3-simplejson and python3-pyasn1-modules due to recent issues
  with those packages[1]).

  I find this weird, because afaik TripleO uses FQDNs, so in theory
  Neutron with OVN should break their CI (although I'm not sure if they
  use OVN or Open vSwitch). I'm still not sure if this is on my end or
  not, but I was able to reproduce this consistently, trying different
  hostnames, trying the most minimal local.conf possible (setting only
  passwords), so I decided to report it as a bug.

  Steps to reproduce:
  1. Set the system's hostname to something with at least one dot
  2. Deploy DevStack
  3. Create instance on any network
  4. Inspect devstack@q-svc.service to see the error

  Expected output:
  Instance is created successfully

  Actual output:
  Instance enters error state and devstack@q-svc.service reports this: 
https://paste.openstack.org/show/809315/

  Version:
  DevStack from master
  Ubuntu Server 20.04.3 LTS
  OVN mechanism driver

  Environment (local.conf): https://paste.openstack.org/show/809316/

  Perceived severity: low

  [1] https://bugs.launchpad.net/devstack/+bug/1871485

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/1943631/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1777157] Re: cold migration fails for ceph volume instances

2018-06-25 Thread Vladislav Belogrudov
Cannot reproduce it; I will reopen and recollect logs with debug enabled if
it happens again.

** Changed in: nova
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777157

Title:
  cold migration fails for ceph volume instances

Status in OpenStack Compute (nova):
  Invalid

Bug description:
  Queens release.

  An instance is running from a Ceph volume; in Horizon: 'migrate'.

  The instance ends up in ERROR state. Horizon reports:

  Error: Failed to perform requested operation on instance "instance3", the
  instance has an error status: Please try again later [Error: list index out
  of range].

  Yet another instance got:

  Error: Failed to perform requested operation on instance "instance4", the
  instance has an error status: Please try again later [Error: Conflict
  updating instance 6b837382-2a75-46a6-9a09-8c3f90f0ffd7. Expected:
  {'task_state': [u'resize_prep']}. Actual: {'task_state': None}].

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1777157] [NEW] cold migration fails for ceph volume instances

2018-06-15 Thread Vladislav Belogrudov
Public bug reported:

Queens release.

An instance is running from a Ceph volume; in Horizon: 'migrate'.

The instance ends up in ERROR state. Horizon reports:

Error: Failed to perform requested operation on instance "instance3", the
instance has an error status: Please try again later [Error: list index out
of range].

Yet another instance got:

Error: Failed to perform requested operation on instance "instance4", the
instance has an error status: Please try again later [Error: Conflict
updating instance 6b837382-2a75-46a6-9a09-8c3f90f0ffd7. Expected:
{'task_state': [u'resize_prep']}. Actual: {'task_state': None}].

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1777157

Title:
  cold migration fails for ceph volume instances

Status in OpenStack Compute (nova):
  New

Bug description:
  Queens release.

  An instance is running from a Ceph volume; in Horizon: 'migrate'.

  The instance ends up in ERROR state. Horizon reports:

  Error: Failed to perform requested operation on instance "instance3", the
  instance has an error status: Please try again later [Error: list index out
  of range].

  Yet another instance got:

  Error: Failed to perform requested operation on instance "instance4", the
  instance has an error status: Please try again later [Error: Conflict
  updating instance 6b837382-2a75-46a6-9a09-8c3f90f0ffd7. Expected:
  {'task_state': [u'resize_prep']}. Actual: {'task_state': None}].

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1777157/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1777110] [NEW] horizon incorrectly shows backup to swift

2018-06-15 Thread Vladislav Belogrudov
Public bug reported:

Queens.

When running with the NFS backup driver, Horizon still thinks it is going to
use object storage and even gives the user the ability to enter a container
name.

** Affects: horizon
 Importance: Undecided
 Status: New

** Attachment added: "backup.png"
   https://bugs.launchpad.net/bugs/1777110/+attachment/5152986/+files/backup.png

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Dashboard (Horizon).
https://bugs.launchpad.net/bugs/1777110

Title:
  horizon incorrectly shows backup to swift

Status in OpenStack Dashboard (Horizon):
  New

Bug description:
  Queens.

  When running with the NFS backup driver, Horizon still thinks it is going
  to use object storage and even gives the user the ability to enter a
  container name.

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/1777110/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1678577] Re: nova live migration failed in some case

2017-11-20 Thread Vladislav Belogrudov
Looks like this is dealt with in:

https://review.openstack.org/#/c/505771/

I could get the same error with an initial cold migration and then trying to
live-migrate the instance back with AUTO_SCHEDULE (only two computes in
my test).

** Changed in: nova
   Status: Expired => New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1678577

Title:
  nova live migration failed in some case

Status in OpenStack Compute (nova):
  New

Bug description:
  Env: nova 15.0.2 + libvirt + kvm + CentOS

  In some situations, the nova request spec becomes:

  {"nova_object.version": "1.8", "nova_object.changes":
  ["instance_uuid", "requested_destination", "retry", "num_instances",
  "pci_requests", "limits", "availability_zone", "force_nodes", "image",
  "instance_group", "force_hosts", "numa_topology", "flavor",
  "project_id", "scheduler_hints", "ignore_hosts"], "nova_object.name":
  "RequestSpec", "nova_object.data": {"requested_destination": null,
  "instance_uuid": "ca01b22b-d2d4-4291-96bd-ff6111f1f88b", "retry":
  {"nova_object.version": "1.1", "nova_object.changes": ["num_attempts",
  "hosts"], "nova_object.name": "SchedulerRetries", "nova_object.data":
  {"num_attempts": 1, "hosts": {"nova_object.version": "1.16",
  "nova_object.changes": ["objects"], "nova_object.name":
  "ComputeNodeList", "nova_object.data": {"objects":
  [{"nova_object.version": "1.16", "nova_object.changes": ["host",
  "hypervisor_hostname"], "nova_object.name": "ComputeNode",
  "nova_object.data": {"host": "control01", "hypervisor_hostname":
  "control01"}, "nova_object.namespace": "nova"}]},
  "nova_object.namespace": "nova"}}, "nova_object.namespace": "nova"},
  "num_instances": 1, "pci_requests": {"nova_object.version": "1.1",
  "nova_object.name": "InstancePCIRequests", "nova_object.data":
  {"instance_uuid": "ca01b22b-d2d4-4291-96bd-ff6111f1f88b", "requests":
  []}, "nova_object.namespace": "nova"}, "limits":
  {"nova_object.version": "1.0", "nova_object.changes": ["memory_mb",
  "vcpu", "disk_gb", "numa_topology"], "nova_object.name":
  "SchedulerLimits", "nova_object.data": {"vcpu": null, "memory_mb":
  245427, "disk_gb": 8371, "numa_topology": null},
  "nova_object.namespace": "nova"}, "availability_zone": null,
  "force_nodes": null, "image": {"nova_object.version": "1.8",
  "nova_object.changes": ["min_disk", "container_format", "min_ram",
  "disk_format", "properties"], "nova_object.name": "ImageMeta",
  "nova_object.data": {"min_disk": 1, "container_format": "bare",
  "min_ram": 0, "disk_format": "raw", "properties":
  {"nova_object.version": "1.16", "nova_object.name": "ImageMetaProps",
  "nova_object.data": {}, "nova_object.namespace": "nova"}},
  "nova_object.namespace": "nova"}, "instance_group": null,
  "force_hosts": null, "numa_topology": null, "ignore_hosts": null,
  "flavor": {"nova_object.version": "1.1", "nova_object.name": "Flavor",
  "nova_object.data": {"disabled": false, "root_gb": 1, "name":
  "m1.tiny", "flavorid": "a70249ef-5ea9-49cb-b35f-ab4732064981",
  "deleted": false, "created_at": "2017-03-22T08:13:48Z",
  "ephemeral_gb": 0, "updated_at": null, "memory_mb": 256, "vcpus": 1,
  "extra_specs": {}, "swap": 0, "rxtx_factor": 1.0, "is_public": true,
  "deleted_at": null, "vcpu_weight": 0, "id": 119},
  "nova_object.namespace": "nova"}, "project_id":
  "f3c6d500b267432c858c588800b49653", "scheduler_hints": {}},
  "nova_object.namespace": "nova"}

  
  Check the retry part:

  retry": {"nova_object.version": "1.1", "nova_object.changes":
  ["num_attempts", "hosts"], "nova_object.name": "SchedulerRetries",
  "nova_object.data": {"num_attempts": 1, "hosts":
  {"nova_object.version": "1.16", "nova_object.changes": ["objects"],
  "nova_object.name": "ComputeNodeList", "nova_object.data": {"objects":
  [{"nova_object.version": "1.16", "nova_object.changes": ["host",
  "hypervisor_hostname"], "nova_object.name": "ComputeNode",
  "nova_object.data": {"host": "control01", "hypervisor_hostname":
  "control01"}, "nova_object.namespace": "nova"}]}

  It has control01 as the host even though the instance is on control02.

  When live-migrating this VM from control02 to control01, I get an error in
  "migration-list"; checking the nova-scheduler logs showed:

  
  2017-04-02 14:01:47.010 6 DEBUG nova.filters 
[req-191c8f6e-010b-42f6-acc6-c84c689f649c 2442cfcb9d5c4daf8d90af8bcfe30df7 
8eb03bbcdfd84f68b88a7fbaa74e2327 - - -] Starting with 1 host(s) 
get_filtered_objects 
/var/lib/kolla/venv/lib/python2.7/site-packages/nova/filters.py:70
  2017-04-02 14:01:47.010 6 INFO nova.scheduler.filters.retry_filter 
[req-191c8f6e-010b-42f6-acc6-c84c689f649c 2442cfcb9d5c4daf8d90af8bcfe30df7 
8eb03bbcdfd84f68b88a7fbaa74e2327 - - -] Host [u'control01', u'control01'] 
fails.  Previously tried hosts: [[u'control01', u'control01']]

  
  I think the root cause is the retry part, but I still do not know how it
  happens.
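
A minimal Python sketch (heavily simplified; nova's RetryFilter is the
assumed model here) of why the only candidate is rejected, given the stale
retry list above:

    def host_passes(host, node, previously_tried):
        # RetryFilter-style check: skip host/node pairs that were already tried
        return [host, node] not in previously_tried

    # the stale RequestSpec already lists control01, so the target fails:
    print(host_passes('control01', 'control01', [['control01', 'control01']]))  # False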

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1678577/+subscriptions

[Yahoo-eng-team] [Bug 1727708] [NEW] resize of vm with nfs volume fails with "Directory not empty: '/var/lib/nova/instances/ddd04ae5-6e02-4634-b8b1-8fd14d22773b"

2017-10-26 Thread Vladislav Belogrudov
Public bug reported:

Nova Pike.

I have NFS for Cinder volumes and ephemeral disks; /var/lib/nova/instances is
a share mounted on the compute hosts.
I created a bootable volume and ran an instance (no config drives, etc.),
then initiated a resize from micro1 to micro2:

+--------------------------------------+-----------+-----+------+-----------+-------+-----------+
| ID                                   | Name      | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+-----+------+-----------+-------+-----------+
| 80bd2ab5-d99d-4e0f-8ae9-26ee31534210 | m1.micro1 |  64 |    1 |         0 |     1 | True      |
| 23a09d59-a182-4863-9ecc-935de6c01170 | m1.micro2 |  65 |    2 |         1 |     1 | True      |
+--------------------------------------+-----------+-----+------+-----------+-------+-----------+

Before resize:

[root@operator ~]# ls -ltrah /var/nfs/*/*
-rw-r--r-- 1 root  root    93 Oct 26 12:30 /var/nfs/nova/compute_nodes
-rw-rw-rw- 1 42427 42427 1.0G Oct 26 12:47 /var/nfs/cinder/volume-a7b6fb81-8143-4bb3-9216-eba1e5dc0433

/var/nfs/nova/locks:
total 8.0K
-rw-r--r-- 1 root root    0 Oct 25 18:35 nova-storage-registry-lock
-rw-r--r-- 1 root root    0 Oct 26 11:03 nova-2dc09a364cbe47628fc413a564bc87425971543f
-rw-r--r-- 1 root root    0 Oct 26 12:18 nova-ephemeral_1_0706d66
-rw-r--r-- 1 root root    0 Oct 26 12:18 nova-swap_1
drwxr-xr-x 2 root root 4.0K Oct 26 12:18 .
drwxrwxrwx 5 root root 4.0K Oct 26 12:47 ..

/var/nfs/nova/_base:
total 20M
-rw-r--r-- 1 42427 42427  40M Oct 26 11:49 2dc09a364cbe47628fc413a564bc87425971543f
-rw-r--r-- 1 42427 42427 1.0G Oct 26 12:18 ephemeral_1_0706d66
drwxr-xr-x 2 root  root  4.0K Oct 26 12:18 .
-rw-r--r-- 1 42427 42427 1.0M Oct 26 12:18 swap_1
drwxrwxrwx 5 root  root  4.0K Oct 26 12:47 ..

/var/nfs/nova/14dba039-bc6a-4726-a582-a1c6cb7fddf4:
total 28K
drwxrwxrwx 5 root  root  4.0K Oct 26 12:47 ..
drwxr-xr-x 2 root  root  4.0K Oct 26 12:47 .
-rw-r--r-- 1 42427 42427  17K Oct 26 12:47 console.log

After resize:

[root@operator ~]# ls -ltrah /var/nfs/*/*
-rw-r--r-- 1 root  root    93 Oct 26 12:30 /var/nfs/nova/compute_nodes
-rw-rw-rw- 1 42427 42427 1.0G Oct 26 12:50 /var/nfs/cinder/volume-a7b6fb81-8143-4bb3-9216-eba1e5dc0433

/var/nfs/nova/locks:
total 8.0K
-rw-r--r-- 1 root root    0 Oct 25 18:35 nova-storage-registry-lock
-rw-r--r-- 1 root root    0 Oct 26 11:03 nova-2dc09a364cbe47628fc413a564bc87425971543f
-rw-r--r-- 1 root root    0 Oct 26 12:18 nova-ephemeral_1_0706d66
-rw-r--r-- 1 root root    0 Oct 26 12:18 nova-swap_1
drwxr-xr-x 2 root root 4.0K Oct 26 12:18 .
drwxrwxrwx 5 root root 4.0K Oct 26 12:49 ..

/var/nfs/nova/_base:
total 20M
-rw-r--r-- 1 42427 42427  40M Oct 26 11:49 2dc09a364cbe47628fc413a564bc87425971543f
drwxr-xr-x 2 root  root  4.0K Oct 26 12:18 .
-rw-r--r-- 1 42427 42427 1.0G Oct 26 12:49 ephemeral_1_0706d66
-rw-r--r-- 1 42427 42427 1.0M Oct 26 12:49 swap_1
drwxrwxrwx 5 root  root  4.0K Oct 26 12:49 ..

/var/nfs/nova/14dba039-bc6a-4726-a582-a1c6cb7fddf4:
total 420K
-rw-r--r-- 1 42427 42427 193K Oct 26 12:49 .nfs0038000e0006
-rw-r--r-- 1 42427 42427 193K Oct 26 12:49 .nfs0038000f0005
drwxrwxrwx 5 root  root  4.0K Oct 26 12:49 ..
-rw-r--r-- 1 42427 42427  20K Oct 26 12:49 .nfs003800120004
drwxr-xr-x 2 root  root  4.0K Oct 26 12:49 .

Error: Failed to perform requested operation on instance "test1", the
instance has an error status: Please try again later [Error: [Errno 39]
Directory not empty:
'/var/lib/nova/instances/14dba039-bc6a-4726-a582-a1c6cb7fddf4'].
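
A hedged interpretation of the '.nfsXXXX' files above: NFS "silly-renames"
files that are unlinked while another process still holds them open (here
qemu presumably still had console.log and the disks open), so the instance
directory is not yet empty when the resize cleanup removes it. Roughly
(illustrative Python sketch, not nova's code):

    import errno
    import shutil

    def cleanup_instance_dir(path):
        try:
            shutil.rmtree(path)      # what confirm_migration effectively does
        except OSError as e:
            if e.errno == errno.ENOTEMPTY:
                # .nfsXXXX placeholders still present -> [Errno 39] as above;
                # retrying once the files are closed would succeed
                raise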

nova compute log on original host:

2017-10-26 12:49:50.267 1 ERROR nova.compute.manager 
[req-fff5cc2b-82a5-441b-a123-f0384f8db7e7 f19e34c10c0148509af54be1540a627a 
62743901dffd4739be97e2d3a2deead7 - default default] [instance: 
14dba039-bc6a-4726-a582-a1c6cb7fddf4] Setting instance vm_state to ERROR: 
OSError: [Errno 39] Directory not empty: 
'/var/lib/nova/instances/14dba039-bc6a-4726-a582-a1c6cb7fddf4'
2017-10-26 12:49:50.267 1 ERROR nova.compute.manager [instance: 
14dba039-bc6a-4726-a582-a1c6cb7fddf4] Traceback (most recent call last):
2017-10-26 12:49:50.267 1 ERROR nova.compute.manager [instance: 
14dba039-bc6a-4726-a582-a1c6cb7fddf4]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 
6886, in _error_out_instance_on_exception
2017-10-26 12:49:50.267 1 ERROR nova.compute.manager [instance: 
14dba039-bc6a-4726-a582-a1c6cb7fddf4] yield
2017-10-26 12:49:50.267 1 ERROR nova.compute.manager [instance: 
14dba039-bc6a-4726-a582-a1c6cb7fddf4]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/manager.py", line 
3609, in _confirm_resize
2017-10-26 12:49:50.267 1 ERROR nova.compute.manager [instance: 
14dba039-bc6a-4726-a582-a1c6cb7fddf4] network_info)
2017-10-26 12:49:50.267 1 ERROR nova.compute.manager [instance: 
14dba039-bc6a-4726-a582-a1c6cb7fddf4]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 7727, in confirm_migration
2017-10-26 12:49:50.267 1 ERROR nova.compute.manager [instance: 

[Yahoo-eng-team] [Bug 1726871] [NEW] AttributeError: 'BlockDeviceMapping' object has no attribute 'uuid'

2017-10-24 Thread Vladislav Belogrudov
Public bug reported:

Running Pike; Cinder is configured with NAS security disabled and snapshots
enabled in the NFS driver.

While trying to make a snapshot, nova-api reports:

2017-10-24 13:36:16.558 39 INFO nova.osapi_compute.wsgi.server 
[req-e6c3dec2-40ea-4858-af91-3d77395d6978 6f4347f3c1b34d69946b17592aaf5b7f 
aeb3218c5fdc4d58b1094a4d360a2a96 - default default] 
10.196.245.222,10.196.245.203 "GET /v2.1/ HTTP/1.1" status: 200 len: 763 time: 
0.0097859
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 
[req-cb2a54c0-6103-41ca-a5dc-587e301bfbb1 6f4347f3c1b34d69946b17592aaf5b7f 
aeb3218c5fdc4d58b1094a4d360a2a96 - default default] Unexpected exception in API 
method: AttributeError: 'BlockDeviceMapping' object has no attribute 'uuid'
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions Traceback (most 
recent call last):
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/openstack/extensions.py",
 line 336, in wrapped
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/validation/__init__.py",
 line 108, in wrapper
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/openstack/compute/assisted_volume_snapshots.py",
 line 52, in create
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions create_info)
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/compute/api.py", line 
4165, in volume_snapshot_create
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions return 
do_volume_snapshot_create(self, context, bdm.instance)
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/oslo_versionedobjects/base.py",
 line 67, in getter
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 
self.obj_load_attr(name)
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/objects/block_device.py", 
line 288, in obj_load_attr
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 'uuid': 
self.uuid,
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions AttributeError: 
'BlockDeviceMapping' object has no attribute 'uuid'
2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 
2017-10-24 13:36:17.317 39 INFO nova.api.openstack.wsgi 
[req-cb2a54c0-6103-41ca-a5dc-587e301bfbb1 6f4347f3c1b34d69946b17592aaf5b7f 
aeb3218c5fdc4d58b1094a4d360a2a96 - default default] HTTP exception thrown: 
Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and 
attach the Nova API log if possible.


** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1726871

Title:
  AttributeError: 'BlockDeviceMapping' object has no attribute 'uuid'

Status in OpenStack Compute (nova):
  New

Bug description:
  Running Pike; Cinder is configured with NAS security disabled and
  snapshots enabled in the NFS driver.

  While trying to make a snapshot, nova-api reports:

  2017-10-24 13:36:16.558 39 INFO nova.osapi_compute.wsgi.server 
[req-e6c3dec2-40ea-4858-af91-3d77395d6978 6f4347f3c1b34d69946b17592aaf5b7f 
aeb3218c5fdc4d58b1094a4d360a2a96 - default default] 
10.196.245.222,10.196.245.203 "GET /v2.1/ HTTP/1.1" status: 200 len: 763 time: 
0.0097859
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions 
[req-cb2a54c0-6103-41ca-a5dc-587e301bfbb1 6f4347f3c1b34d69946b17592aaf5b7f 
aeb3218c5fdc4d58b1094a4d360a2a96 - default default] Unexpected exception in API 
method: AttributeError: 'BlockDeviceMapping' object has no attribute 'uuid'
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions Traceback 
(most recent call last):
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/openstack/extensions.py",
 line 336, in wrapped
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions return 
f(*args, **kwargs)
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/api/validation/__init__.py",
 line 108, in wrapper
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions return 
func(*args, **kwargs)
  2017-10-24 13:36:17.315 39 ERROR nova.api.openstack.extensions   File 

[Yahoo-eng-team] [Bug 1721522] [NEW] encrypted volumes: Cannot format device /dev/sdb which is still in use

2017-10-05 Thread Vladislav Belogrudov
Public bug reported:

Hi,

I followed the guide here:
https://docs.openstack.org/cinder/pike/configuration/block-storage/volume-encryption.html

I also use Barbican and for that I added [barbican] auth_endpoint =
http://controller:5000 to cinder.conf and nova.conf

Creation of LUKS disks is successful. I also created normal disks and
could easily attach them to an instance.

Cinder disks are on LVM

Attaching LUKS disks fails with the following trace:

2017-10-05 11:44:57.445 1 INFO nova.compute.manager 
[req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 
6a4c6d8b18714579a6e448e754d8838f - default default] [instance: 
90376ab6-d553-477a-bda6-eeed8e70cc8d] Attaching volume 
5c03f92f-470a-4f15-aaca-49d9232512a8 to /dev/vdc
2017-10-05 11:44:57.835 1 INFO oslo.privsep.daemon 
[req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 
6a4c6d8b18714579a6e448e754d8838f - default default] Running privsep helper: 
['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', 
'--config-file', '/etc/nova/nova.conf', '--privsep_context', 
'os_brick.privileged.default', '--privsep_sock_path', 
'/tmp/tmpKwgHpn/privsep.sock']
2017-10-05 11:44:58.598 1 INFO oslo.privsep.daemon 
[req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 
6a4c6d8b18714579a6e448e754d8838f - default default] Spawned new privsep daemon 
via rootwrap
2017-10-05 11:44:58.548 80 INFO oslo.privsep.daemon [-] privsep daemon starting
2017-10-05 11:44:58.553 80 INFO oslo.privsep.daemon [-] privsep process running 
with uid/gid: 0/0
2017-10-05 11:44:58.558 80 INFO oslo.privsep.daemon [-] privsep process running 
with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2017-10-05 11:44:58.558 80 INFO oslo.privsep.daemon [-] privsep daemon running 
as pid 80
2017-10-05 11:45:01.468 1 INFO os_brick.initiator.connectors.iscsi 
[req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 
6a4c6d8b18714579a6e448e754d8838f - default default] Trying to connect to iSCSI 
portal 10.10.245.211:3260
2017-10-05 11:45:05.762 1 WARNING os_brick.encryptors 
[req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 
6a4c6d8b18714579a6e448e754d8838f - default default] Use of the in tree 
encryptor class nova.volume.encryptors.luks.LuksEncryptor by directly 
referencing the implementation class will be blocked in the Queens release of 
os-brick.
2017-10-05 11:45:07.431 1 WARNING os_brick.encryptors.luks 
[req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 
6a4c6d8b18714579a6e448e754d8838f - default default] isLuks exited abnormally 
(status 1): Device /dev/sdb is not a valid LUKS device.
Command failed with code 22: Invalid argument
: ProcessExecutionError: Unexpected error while running command.
Command: cryptsetup isLuks --verbose /dev/sdb
Exit code: 1
Stdout: u''
Stderr: u'Device /dev/sdb is not a valid LUKS device.\nCommand failed with code 
22: Invalid argument\n'
2017-10-05 11:45:07.432 1 INFO os_brick.encryptors.luks 
[req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 
6a4c6d8b18714579a6e448e754d8838f - default default] /dev/sdb is not a valid 
LUKS device; formatting device for first use
2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver 
[req-4780adcc-161e-4223-b53c-56b27a984ca1 f20698e1d05a4e2582e023778bfa5693 
6a4c6d8b18714579a6e448e754d8838f - default default] [instance: 
90376ab6-d553-477a-bda6-eeed8e70cc8d] Failed to attach volume at mountpoint: 
/dev/vdc: ProcessExecutionError: Unexpected error while running command.
Command: cryptsetup --batch-mode luksFormat --key-file=- --cipher 
aes-xts-plain64 --key-size 256 /dev/sdb
Exit code: 5
Stdout: u''
Stderr: u'Cannot format device /dev/sdb which is still in use.\n'
2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 
90376ab6-d553-477a-bda6-eeed8e70cc8d] Traceback (most recent call last):
2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 
90376ab6-d553-477a-bda6-eeed8e70cc8d]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", 
line 1250, in attach_volume
2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 
90376ab6-d553-477a-bda6-eeed8e70cc8d] encryptor.attach_volume(context, 
**encryption)
2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 
90376ab6-d553-477a-bda6-eeed8e70cc8d]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/os_brick/encryptors/luks.py", 
line 160, in attach_volume
2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 
90376ab6-d553-477a-bda6-eeed8e70cc8d] self._format_volume(passphrase, 
**kwargs)
2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 
90376ab6-d553-477a-bda6-eeed8e70cc8d]   File 
"/var/lib/kolla/venv/lib/python2.7/site-packages/os_brick/encryptors/luks.py", 
line 87, in _format_volume
2017-10-05 11:45:18.848 1 ERROR nova.virt.libvirt.driver [instance: 
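
A hedged diagnostic sketch (a generic Linux check, not part of nova or
os-brick): "Cannot format device ... still in use" usually means the kernel
still sees a holder on the block device, e.g. a leftover device-mapper crypt
mapping from a previous attach attempt; sysfs exposes the holder list:

    import os

    def holders(dev='sdb'):
        # entries here (e.g. 'dm-0') mean another layer has claimed the device
        path = '/sys/block/%s/holders' % dev
        return os.listdir(path) if os.path.isdir(path) else []

    print(holders())  # non-empty output would explain why luksFormat refuses /dev/sdb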

[Yahoo-eng-team] [Bug 1615715] Re: ip6tables-restore fails in neutron_openvswitch_agent

2017-06-14 Thread Vladislav Belogrudov
** Project changed: neutron => kolla-ansible

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615715

Title:
  ip6tables-restore fails in neutron_openvswitch_agent

Status in kolla-ansible:
  Confirmed

Bug description:
  2016-08-22 11:54:58.697 1 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-baa3335b-0013-42dd-856a-64a5c2557a01', 
'ip6tables-restore', '-n'] create_process 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
  2016-08-22 11:54:58.970 1 ERROR neutron.agent.linux.utils [-] Exit code: 2; 
Stdin: # Generated by iptables_manager

  Usage: ip6tables-restore [-b] [-c] [-v] [-t] [-h]
 [ --binary ]
 [ --counters ]
 [ --verbose ]
 [ --test ]
 [ --help ]
 [ --noflush ]
[ --modprobe=<command>]

  It seems iptables-1.4.21-16.el7.x86_64 does not support '-n' option
  used in the command above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1615715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1615715] Re: ip6tables-restore fails

2017-06-14 Thread Vladislav Belogrudov
this bug still happens. Neutron openvswitch agent container tries to run
ip6tables-restore and fails because there is no ip6table_filter module
loaded. The module normally is loaded by the command itself. But inside
the container we don't provide /lib/modules ... With proper host mount
the error is gone.

** Changed in: neutron
   Status: Invalid => Confirmed

** Changed in: neutron
 Assignee: (unassigned) => Vladislav Belogrudov (vlad-belogrudov)

** Summary changed:

- ip6tables-restore fails 
+ ip6tables-restore fails in neutron_openvswitch_agent

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1615715

Title:
  ip6tables-restore fails in neutron_openvswitch_agent

Status in neutron:
  Confirmed

Bug description:
  2016-08-22 11:54:58.697 1 DEBUG neutron.agent.linux.utils [-] Running 
command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 
'netns', 'exec', 'qrouter-baa3335b-0013-42dd-856a-64a5c2557a01', 
'ip6tables-restore', '-n'] create_process 
/var/lib/kolla/venv/lib/python2.7/site-packages/neutron/agent/linux/utils.py:83
  2016-08-22 11:54:58.970 1 ERROR neutron.agent.linux.utils [-] Exit code: 2; 
Stdin: # Generated by iptables_manager

  Usage: ip6tables-restore [-b] [-c] [-v] [-t] [-h]
 [ --binary ]
 [ --counters ]
 [ --verbose ]
 [ --test ]
 [ --help ]
 [ --noflush ]
[ --modprobe=<command>]

  It seems iptables-1.4.21-16.el7.x86_64 does not support '-n' option
  used in the command above.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1615715/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1623041] [NEW] cisco plugin db migration fails due to unknown constraint

2016-09-13 Thread Vladislav Belogrudov
Public bug reported:

When using the Cisco networking plugin in Liberty and later versions,
neutron-db-manage fails for non-InnoDB engines. This happens because the
plugin relies on the autogenerated name of a foreign key in the
cisco_router_mappings table. In InnoDB it is 'cisco_router_mappings_ibfk_2',
and the same name is used in the plugin's upgrade() to delete this
constraint. To fix the issue, the key must be named explicitly so it works
on all other database backends.
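
A sketch of the proposed fix (hypothetical alembic code and constraint name,
not the actual migration): create and drop the foreign key by an explicit
name instead of the InnoDB-autogenerated 'cisco_router_mappings_ibfk_2':

    from alembic import op

    FK_NAME = 'fk_cisco_router_mappings_router_id'  # explicit, engine-independent

    def upgrade():
        # the creating migration names the key explicitly...
        op.create_foreign_key(FK_NAME, 'cisco_router_mappings', 'routers',
                              ['router_id'], ['id'])

    def downgrade():
        # ...so it can later be dropped by the same name on any backend
        op.drop_constraint(FK_NAME, 'cisco_router_mappings', type_='foreignkey')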

** Affects: neutron
 Importance: Undecided
 Assignee: Vladislav Belogrudov (vlad-belogrudov)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Vladislav Belogrudov (vlad-belogrudov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1623041

Title:
  cisco plugin db migration fails due to unknown constraint

Status in neutron:
  New

Bug description:
  When using the Cisco networking plugin in Liberty and later versions,
  neutron-db-manage fails for non-InnoDB engines. This happens because the
  plugin relies on the autogenerated name of a foreign key in the
  cisco_router_mappings table. In InnoDB it is
  'cisco_router_mappings_ibfk_2', and the same name is used in the plugin's
  upgrade() to delete this constraint. To fix the issue, the key must be
  named explicitly so it works on all other database backends.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1623041/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1585169] [NEW] migration script for qospolicyrbacs has hardcoded InnoDB engine

2016-05-24 Thread Vladislav Belogrudov
Public bug reported:

Table creation takes mysql_engine from the config file or from the model,
whose default is "InnoDB".
neutron/db/migration/alembic_migrations/versions/mitaka/expand/15e43b934f81_rbac_qos_policy.py
still has a hard-coded engine string. We use a different engine, and this
script produces an incorrect database schema.
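
A sketch of the fix (hypothetical columns, not the real script): drop the
hard-coded kwarg so the engine coming from the config file or the model
default is used instead:

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.create_table(
            'qospolicyrbacs',
            sa.Column('id', sa.String(36), primary_key=True),
            # mysql_engine='InnoDB',  # hard-coded string the script should not carry
        )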

** Affects: neutron
 Importance: Undecided
 Assignee: Vladislav Belogrudov (vlad-belogrudov)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Vladislav Belogrudov (vlad-belogrudov)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1585169

Title:
  migration script for qospolicyrbacs has hardcoded InnoDB engine

Status in neutron:
  New

Bug description:
  Table creation takes mysql_engine from the config file or from the model,
  whose default is "InnoDB".
  neutron/db/migration/alembic_migrations/versions/mitaka/expand/15e43b934f81_rbac_qos_policy.py
  still has a hard-coded engine string. We use a different engine, and this
  script produces an incorrect database schema.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1585169/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1465678] [NEW] Table cisco_ml2_apic_contracts has wrong router_id field length: 64 vs 36 in routers.id

2015-06-16 Thread Vladislav Belogrudov
Public bug reported:

Some database engines require foreign keys to be of the same size as
referenced fields. cisco_ml2_apic_contracts.router_id is a foreign key
to routers.id - the former is varchar(64) while the latter is
varchar(36). Database migration scripts fail while running new OpenStack
installations (tested with MySQL Cluster/NDB).

This bug is very similar to
https://bugs.launchpad.net/neutron/+bug/1463806 - the same type of
changes should be applied to fix the issue.
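
For illustration, the same type of change sketched as hypothetical alembic
code (mirroring the fix referenced above, not the actual patch): shrink the
column so it matches the referenced primary key:

    import sqlalchemy as sa
    from alembic import op

    def upgrade():
        op.alter_column('cisco_ml2_apic_contracts', 'router_id',
                        existing_type=sa.String(64),
                        type_=sa.String(36))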

** Affects: neutron
 Importance: Undecided
 Assignee: Vladislav Belogrudov (vlad-belogrudov)
 Status: In Progress


** Tags: db

** Changed in: neutron
  Assignee: (unassigned) => Vladislav Belogrudov (vlad-belogrudov)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1465678

Title:
  Table cisco_ml2_apic_contracts has wrong router_id field length: 64 vs
  36 in routers.id

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  Some database engines require foreign keys to be of the same size as
  referenced fields. cisco_ml2_apic_contracts.router_id is a foreign key
  to routers.id - the former is varchar(64) while the latter is
  varchar(36). Database migration scripts fail while running new
  OpenStack installations (tested with MySQL Cluster/NDB).

  This bug is very similar to
  https://bugs.launchpad.net/neutron/+bug/1463806 - the same type of
  changes should be applied to fix the issue.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1465678/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1464291] [NEW] migration scripts ignore mysql-engine option

2015-06-11 Thread Vladislav Belogrudov
Public bug reported:

When running 'neutron-db-manage upgrade --mysql-engine SOME-ENGINE-HERE head',
migration scripts still create tables with the default engine (InnoDB).
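
For illustration, a hedged sketch (hypothetical table and option plumbing,
not neutron's actual code) of what honoring the flag inside a migration
would look like:

    import sqlalchemy as sa
    from alembic import context, op

    def upgrade():
        # use the engine the operator asked for instead of a hard-coded one
        engine = context.config.get_main_option('mysql_engine') or 'InnoDB'
        op.create_table('example_table',
                        sa.Column('id', sa.String(36), primary_key=True),
                        mysql_engine=engine)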

** Affects: neutron
 Importance: Undecided
 Assignee: Vladislav Belogrudov (vlad-belogrudov)
 Status: In Progress

** Changed in: neutron
  Assignee: (unassigned) => Vladislav Belogrudov (vlad-belogrudov)

** Changed in: neutron
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1464291

Title:
  migration scripts ignore mysql-engine option

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  When running 'neutron-db-manage upgrade --mysql-engine SOME-ENGINE-HERE
  head', migration scripts still create tables with the default engine
  (InnoDB).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1464291/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1463806] [NEW] Neutron database: cisco_csr_identifier_map.ipsec_site_conn_id has wrong length: 64 vs 36 in ipsec_site_connections.id

2015-06-10 Thread Vladislav Belogrudov
Public bug reported:

cisco_csr_identifier_map.ipsec_site_conn_id is a foreign key to
ipsec_site_connections.id. The former is varchar(64) while the latter
is varchar(36). Some database engines (e.g. NDB / MySQL Cluster) are
very strict about mismatched sizes of foreign keys, so the migration script
neutron/db/migration/alembic_migrations/versions/24c7ea5160d7_cisco_csr_vpnaas.py
fails.

** Affects: neutron
 Importance: Undecided
 Assignee: Vladislav Belogrudov (vlad-belogrudov)
 Status: In Progress


** Tags: database mysql ndb neutron

** Changed in: neutron
  Assignee: (unassigned) => Vladislav Belogrudov (vlad-belogrudov)

** Changed in: neutron
   Status: New => In Progress

** Tags added: neutron

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1463806

Title:
  Neutron database: cisco_csr_identifier_map.ipsec_site_conn_id has
  wrong length: 64 vs 36 in ipsec_site_connections.id

Status in OpenStack Neutron (virtual network service):
  In Progress

Bug description:
  cisco_csr_identifier_map.ipsec_site_conn_id is a foreign key to
  ipsec_site_connections.id. The former is varchar(64) while the latter
  is varchar(36). Some database engines (e.g. NDB / MySQL Cluster) are
  very strict about mismatched sizes of foreign keys, so the migration
  script
  neutron/db/migration/alembic_migrations/versions/24c7ea5160d7_cisco_csr_vpnaas.py
  fails.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1463806/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp