Re: [openstack-dev] [requirements][vitrage] Networkx version 2.0

2018-01-07 Thread Ian Wienand

On 12/21/2017 02:51 AM, Afek, Ifat (Nokia - IL/Kfar Sava) wrote:

There is an open bug in launchpad about the new release of Networkx
2.0, that is backward incompatible with versions 1.x [1].


From diskimage-builder's POV, we can pretty much switch whenever we're
ready; it's just a matter of merging [2] after constraints is bumped.

Supporting both versions at once in the code is kind of annoying. If we've
got changes ready to go in all the related projects in [1], bumping *should*
cause minimal disruption.
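
For context, a minimal sketch of the kind of dual-version shim that makes
this annoying (illustrative only, not diskimage-builder's actual code):

    import networkx as nx

    G = nx.Graph()
    G.add_edge("a", "b")

    # networkx 1.x returns plain lists here; 2.0 returns lazy views, so
    # materialize explicitly to get identical behaviour under both.
    nodes = list(G.nodes())
    edges = list(G.edges(data=True))

    # Node attribute access moved from G.node[n] (1.x) to G.nodes[n] (2.0).
    if nx.__version__.startswith("1."):
        attrs = G.node["a"]
    else:
        attrs = G.nodes["a"]

Every such branch has to be carried until 1.x support is dropped.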

-i


[1] https://bugs.launchpad.net/diskimage-builder/+bug/1718576

[2] https://review.openstack.org/#/c/506524/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][tempest][devstack][congress] tempest.config.CONF.service_available changed on Jan 2/3?

2018-01-07 Thread Ghanshyam Mann
On Sat, Jan 6, 2018 at 3:41 PM, Chandan kumar  wrote:
> Hello Eric,
>
> On Sat, Jan 6, 2018 at 4:46 AM, Eric K  wrote:
>> Seems that sometime between 1/2 and 1/3 this year,
>> tempest.config.CONF.service_available.aodh_plugin as well as
>> ..service_available.mistral became unavailable in congress dsvm check/gate
>> job. [1][2]
>>
>> I've checked the changes that went in to congress, tempest, devstack,
>> devstack-gate, aodh, and mistral during that period but don't see obvious
>> causes. Any suggestions on where to look next to fix the issue? Thanks
>> very much!

These config options should stay in place even after the tempest plugins
are split out to separate repos. I have checked the aodh and mistral config
options, and they are present in the new plugins' tempest config:

- https://github.com/openstack/telemetry-tempest-plugin/blob/b30a19214d0036141de75047b444d48ae0d0b656/telemetry_tempest_plugin/config.py#L27
- https://github.com/openstack/mistral-tempest-plugin/blob/63a0fe20f98e0cb8316beb81ca77249ffdda29c5/mistral_tempest_tests/config.py#L18
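
For reference, a tempest plugin registers those flags roughly like this (a
hedged sketch with illustrative names, based on the files above):

    from oslo_config import cfg
    from tempest import config
    from tempest.test_discover import plugins

    service_option = cfg.BoolOpt(
        "mistral",
        default=True,
        help="Whether or not mistral is expected to be available")

    class MyTempestPlugin(plugins.TempestPlugin):
        # (other TempestPlugin methods omitted for brevity)
        def register_opts(self, conf):
            # This is what makes CONF.service_available.mistral exist
            # for consumers such as the congress job.
            config.register_opt_group(
                conf, config.service_available_group, [service_option])

So as long as the plugin package is installed, the option stays registered.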


The issue occurred because the in-tree plugins were removed before congress
was set up to use the new repos. We should not remove an in-tree plugin
before every consumer's gate is set up to consume the new plugin.

>>
>
> The aodh tempest plugin [https://review.openstack.org/#/c/526299/] has
> moved to telemetry-tempest-plugin
> [https://github.com/openstack/telemetry-tempest-plugin].
> I have sent a patch to the Congress project to fix the issue:
> https://review.openstack.org/#/c/531534/

Thanks Chandan, this will fix the congress issue for the aodh case; we need
the same fix for the mistral case too.

>
> The mistral bundled in-tree tempest plugin
> [https://review.openstack.org/#/c/526918/] has also moved to the
> mistral-tempest-plugin repo
> [https://github.com/openstack/mistral-tempest-plugin].
>
> Tests are being moved to new repos as part of the Tempest Plugin Split goal
> [https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html].
> Feel free to consume the new tempest plugin and let me know if you
> need any more help.
>
> Thanks,
>
> Chandan Kumar
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [castellan] Transferring ownership of secrets to another user

2018-01-07 Thread Alan Bishop
On Sat, Jan 6, 2018 at 3:26 AM, Juan Antonio Osorio  wrote:
>
>
> On 4 Jan 2018 23:35, "Alan Bishop"  wrote:
>
> Has there been any previous discussion on providing a mechanism for
> transferring ownership of a secret from one user to another?
>
> For castellan there isn't a discussion AFAIK. But it sounds like something
> you can enable with Barbican's ACLs.

Conceptually, the goal is to truly transfer ownership. I considered
Barbican ACLs as a workaround, but that approach isn't sufficient.

A Barbican ACL would allow the new owner to read the secret, but it
won't take into account whether the new owner happens to be an admin.
Barbican secrets owned by an admin can be read by other admins, but an
ACL would not allow those other admins to read the secret.

The bigger problem, though, is what happens when the new owner
attempts to delete the volume. This requires deleting the secret, but
the new volume owner only has read access to the secret. Cinder blocks
attempts to delete encrypted volumes when the secret cannot be
deleted. Otherwise, deleting a volume would cause the secret to be
leaked (not exposed, but unmanaged by any owner).
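
To make the limitation concrete, the ACL workaround amounts to something
like this (a hedged sketch; the endpoint and IDs are illustrative, and the
API is documented at the link Juan gives below -- note it only defines a
"read" operation, so delete rights can never be granted this way):

    import requests

    secret_ref = "https://barbican.example.com/v1/secrets/SECRET_UUID"
    resp = requests.put(
        secret_ref + "/acl",
        headers={"X-Auth-Token": "TOKEN",
                 "Content-Type": "application/json"},
        # Grants read access only; there is no "delete" operation to grant.
        json={"read": {"users": ["NEW_OWNER_USER_ID"],
                       "project-access": False}},
    )
    resp.raise_for_status()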

> https://docs.openstack.org/barbican/latest/api/reference/acls.html
>
> You would need to leverage Barbican's API instead of castellan though.
>
>
> Cinder supports the notion of transferring volume ownership to another
> user, who may be in another tenant/project. However, if the volume is
> encrypted it's possible (even likely) that the new owner will not be
> able to access the encryption secret.
>
> The new user will have the
> encryption key ID (secret ref), but may not have permission to access
> the secret, let alone delete the secret should the volume be deleted
> later. This issue is currently flagged as a cinder bug [1].
>
> This is a use case where the ownership of the encryption secret should
> be transferred to the new volume owner.
>
> Alan
>
> [1] https://bugs.launchpad.net/cinder/+bug/1735285
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [faas] [qinling] project update - 2

2018-01-07 Thread Lingxian Kong
Hi, all

Happy new year!

This project update is posted bi-weekly, but feel free to get in touch in
#openstack-qinling anytime.

- Introduce etcd in qinling for distributed locking and storing the
resources that need to be updated frequently.
- Get function workers (admin only)
- Support to detach function from underlying orchestrator (admin only)
- Support positional args in users' functions
- More unit tests and functional tests added
- Powerful resource query filtering of qinling openstack CLI
- Conveniently delete all executions of one or more functions in CLI

You can find the previous update below.

Have a good day :-)

Cheers,
Lingxian Kong (Larry)

-- Forwarded message --
From: Lingxian Kong 
Date: Tue, Dec 12, 2017 at 10:18 PM
Subject: [openstack-dev] [qinling] [faas] project update - 1

To: OpenStack Development Mailing List 


Hi, all

If you are already interested in a FaaS implementation in OpenStack, and
have deployed other OpenStack services to integrate with (e.g. triggering a
function when an object is uploaded to Swift), Qinling is a project you
probably don't want to miss. My main motivation for creating Qinling came
from frequent requests by our public cloud customers.

For people who have not heard about Qinling before, please take a look at
my presentation in Sydney Summit:
https://youtu.be/NmCmOfRBlIU
There is also a simple demo video:
https://youtu.be/K2SiMZllN_A

As the first project update email, I will just list the features
implemented for now:

- Python runtime
- Sync/Async function execution
- Job (invoke function on schedule)
- Function defined in swift object storage service
- Function defined in docker image
- Easy to interact with openstack services in function
- Function autoscaling based on request rate
- RBAC operation
- Function resource limitation
- Simple documentation

I will keep posting the project update bi-weekly, but feel free to get in
touch in #openstack-qinling anytime.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] propose to upgrade python kubernetes (the k8s python client) to 4.0.0 which breaks oslo.service

2018-01-07 Thread Lingxian Kong
Thanks, leyal. I've already changed the service framework from oslo.service
to cotyledon https://review.openstack.org/#/c/530428/, and it works
perfectly fine.
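
For anyone weighing the same move, a minimal cotyledon service looks
roughly like this (a hedged sketch, not the actual Qinling change):

    import cotyledon

    class APIService(cotyledon.Service):
        def run(self):
            # Blocking service loop. cotyledon uses plain processes and
            # threads with no eventlet monkey-patching, so the kubernetes
            # client's ThreadPool works as-is.
            ...

    manager = cotyledon.ServiceManager()
    manager.add(APIService, workers=2)
    manager.run()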


Cheers,
Lingxian Kong (Larry)

On Mon, Jan 8, 2018 at 2:47 AM, Eyal Leshem  wrote:

> Hi Lingxian,
>
> I uploaded a patch for kuryr-kubernetes that monkey-patches the ThreadPool
> with GreenPool
> (https://review.openstack.org/#/c/530655/4/kuryr_kubernetes/thread_pool_patch.py).
>
> It supports only apply_async - but that should be enough for k8s.
>
> That can be dangerous if you use ThreadPool in other places in your code,
> but in such a case you can't run with eventlet anyway.
>
> hope that helps,
> leyal
>
>
>
>
> On 4 January 2018 at 23:45, Lingxian Kong  wrote:
>
>> On Tue, Jan 2, 2018 at 1:56 AM, Eyal Leshem  wrote:
>>
>>> Hi,
>>>
>>> According to https://github.com/eventlet/eventlet/issues/147, it
>>> looks like eventlet has an issue with "multiprocessing.pool".
>>>
>>> The ThreadPool is used in code that is auto-generated by swagger.
>>>
>>> A possible workaround is to monkey-patch the client library and
>>> replace the pool with a green pool.
>>>
>>
>> Hi, leyal, I'm not very familiar with eventlet, but how can I monkey-patch
>> the kubernetes python lib?
>> The only way I can see now is to replace oslo.service with something
>> else, e.g. cotyledon, to avoid using eventlet; that's a significant
>> change though. I also found this bug
>> https://bugs.launchpad.net/taskflow/+bug/1225275 in taskflow, where
>> they chose not to use the multiprocessing module.
>>
>> Any other suggestions are welcome!
>>
>>
>>>
>>> If someone has a better workaround, please share it with us :)
>>>
>>> btw, I don't think this should be treated as a compatibility issue
>>> in the python client, as it's an eventlet issue.
>>>
>>> Thanks,
>>> leyal
>>>
>>
>>
>>
>>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] propose to upgrade python kubernetes (the k8s python client) to 4.0.0 which breaks oslo.service

2018-01-07 Thread Eyal Leshem
Hi Lingxian,

I uploaded a patch for kuryr-kubernetes that monkey-patches the ThreadPool
with GreenPool
(https://review.openstack.org/#/c/530655/4/kuryr_kubernetes/thread_pool_patch.py).

It supports only apply_async - but that should be enough for k8s.

That can be dangerous if you use ThreadPool in other places in your code,
but in such a case you can't run with eventlet anyway.
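
A hedged sketch of the idea (illustrative only; see the actual patch above
for the real implementation):

    import multiprocessing.pool

    from eventlet import greenpool


    class GreenThreadPool(object):
        # Stand-in supporting only apply_async, as noted above.

        def __init__(self, processes=None, *args, **kwargs):
            self._pool = greenpool.GreenPool(processes or 1000)

        def apply_async(self, func, args=(), kwds=None):
            # Returns a GreenThread rather than an AsyncResult, which is
            # fine for fire-and-forget callers.
            return self._pool.spawn(func, *args, **(kwds or {}))


    def patch():
        # Must run before the kubernetes client creates its ThreadPool.
        multiprocessing.pool.ThreadPool = GreenThreadPool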

hope that helps,
leyal




On 4 January 2018 at 23:45, Lingxian Kong  wrote:

> On Tue, Jan 2, 2018 at 1:56 AM, Eyal Leshem  wrote:
>
>> Hi,
>>
>> According to https://github.com/eventlet/eventlet/issues/147, it
>> looks like eventlet has an issue with "multiprocessing.pool".
>>
>> The ThreadPool is used in code that is auto-generated by swagger.
>>
>> A possible workaround is to monkey-patch the client library and
>> replace the pool with a green pool.
>>
>
> Hi, leyal, I'm not very familiar with eventlet, but how can I monkey-patch
> the kubernetes python lib?
> The only way I can see now is to replace oslo.service with something else,
> e.g. cotyledon, to avoid using eventlet; that's a significant change though.
> I also found this bug https://bugs.launchpad.net/taskflow/+bug/1225275 in
> taskflow, where they chose not to use the multiprocessing module.
>
> Any other suggestions are welcome!
>
>
>>
>> If someone has a better workaround, please share it with us :)
>>
>> btw, I don't think this should be treated as a compatibility issue
>> in the python client, as it's an eventlet issue.
>>
>> Thanks,
>> leyal
>>
>
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

2018-01-07 Thread Tobias Urdin
Hello everyone and a happy new year!

I will follow up this thread with some information about the tempest failure
that occurs on Ubuntu. I saw it happen on my recheck tonight and took some
time now to check it out properly.

* Here is the job: 
http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/

* The following test is failing but only sometimes: 
tempest.api.compute.servers.test_create_server.ServersTestManualDisk
http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/job-output.txt.gz#_2018-01-07_01_56_31_072370

* Checking the nova API log, the request against the neutron server fails:
http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/nova/nova-api.txt.gz#_2018-01-07_01_46_47_301

So this is the call that times out: 
https://github.com/openstack/nova/blob/3800cf6ae2a1370882f39e6880b7df4ec93f4b93/nova/api/openstack/compute/attach_interfaces.py#L61

The timeout occurs at 01:46:47 but the first try was made at 01:46:17. Check
the log
http://logs.openstack.org/37/529837/1/check/puppet-openstack-integration-4-scenario003-tempest-ubuntu-xenial/84b60a7/logs/neutron/neutron-server.txt.gz
and search for "GET
/v2.0/ports?device_id=285061f8-2e8e-4163-9534-9b02900a8887"

You can see that neutron-server reports all requests as 200 OK. So what I
think is happening is that neutron-server handles the request properly, but
for some reason nova-api never gets the reply, hence the timeout.

This is where I get stuck: since I can only see the requests coming in, there
is no real way of seeing the replies. At the same time you can see that
nova-api and neutron-server are continuously handling requests, so both are
working; it's just that the reply neutron-server should send to nova-api
never arrives.

Does anybody have any clue as to why? Otherwise I guess the only way forward
is to keep running the tests on a local machine until I hit the issue, which
does not occur regularly.

Maybe loop in the neutron and/or Canonical OpenStack team on this one.

Best regards
Tobias



From: Tobias Urdin 
Sent: Friday, December 22, 2017 2:44 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [puppet] Ubuntu problems + Help needed

Follow up, have been testing some integration runs on a tmp machine.

Had to fix the following:
* Ceph repo key E84AC2C0460F3994 perhaps introduced in [0]
* Run glance-manage db_sync (have not seen in integration tests)
* Run neutron-db-manage upgrade heads (have not seen in integration tests)
* Disable l2gw because of
https://bugs.launchpad.net/ubuntu/+source/networking-l2gw/+bug/1739779
   (a temporary fix is proposed as [1] until that is resolved)

[0] https://review.openstack.org/#/c/507925/
[1] https://review.openstack.org/#/c/529830/

Best regards

On 12/22/2017 10:44 AM, Tobias Urdin wrote:
> Ignore that, it seems it's the networking-l2gw package that fails[0].
> It seems it hasn't been packaged for queens yet[1], or rather that no
> release has been cut for queens for networking-l2gw[2].
>
> Should we try to disable l2gw as was done in[3] recently for CentOS?
>
> [0]
> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_23_10_05_564
> [1]
> http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html
> [2] https://git.openstack.org/cgit/openstack/networking-l2gw/refs/
> [3] https://review.openstack.org/#/c/529711/
>
>
> On 12/22/2017 10:19 AM, Tobias Urdin wrote:
>> Following up on Alex's point[1]: the db sync upgrade for neutron fails here[0].
>>
>> [0] http://paste.openstack.org/show/629628/
>>
>> On 12/22/2017 04:57 AM, Alex Schultz wrote:
 Just a note, the queens repo is not currently synced in the infra so
 the queens repo patch is failing on Ubuntu jobs. I've proposed adding
 queens to the infra configuration to resolve this:
 https://review.openstack.org/529670

>>> As a follow-up, the mirrors have landed and two of the four scenarios
>>> now pass. Scenario001 is failing on ceilometer-api, which was removed,
>>> so I have a patch[0] to drop it. Scenario004 is having issues with
>>> neutron, and the db looks to be very unhappy[1].
>>>
>>> Thanks,
>>> -Alex
>>>
>>> [0] https://review.openstack.org/529787
>>> [1] 
>>> http://logs.openstack.org/57/529657/2/check/puppet-openstack-integration-4-scenario004-tempest-ubuntu-xenial/ce6f987/logs/neutron/neutron-server.txt.gz#_2017-12-21_22_58_37_338
>>>

[openstack-dev] [nova] about rescue instance booted from volume

2018-01-07 Thread 李杰
Hi all,


This is the change about rescuing an instance booted from volume; anyone
who is interested in boot-from-volume can help to review it. Any suggestion
is welcome.
  The link is here:
  https://review.openstack.org/#/c/531524/
  The related bp:
  https://blueprints.launchpad.net/nova/+spec/volume-backed-server-rescue

Best Regards
Lijie
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [congress] generic push driver

2018-01-07 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi Eric,

I have two questions:


1. An alarm is usually raised on a resource, and in Vitrage we can send
you the details of that resource. Is there a way in Congress for the alarm to
reference a resource that exists in another table? And what if the resource
does not exist in Congress?

2. Do you also plan to support updateRows? This can be useful for alarm
state changes.

Thanks,
Ifat


From: Eric K 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 6 January 2018 at 3:50
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [congress] generic push driver

We've been discussing generic push drivers for Congress for quite a while.
I'm finally sketching out something concrete and looking for some preliminary
feedback. Below are sample interactions with a proposed generic push driver.
A generic push driver could be used to receive push updates from vitrage,
monasca, and many other sources.

1. creating a datasource:

congress datasource create generic_push_driver vitrage --config schema='
{
  "tables":[
{
  "name":"alarms",
  "columns":[
"id",
"name",
"state",
"severity",
  ]
}
  ]
}
'

2. Update an entire table:

PUT '/v1/data-sources/vitrage/tables/alarms' with body:
{
  "rows":[
{
  "id":"1-1",
  "name":"name1",
  "state":"active",
  "severity":1
},
[
  "1-2",
  "name2",
  "active",
  2
]
  ]
}
Note that a row can be either a {} or []


3. perform differential update:

PUT '/v1/data-sources/vitrage/tables/alarms' with body:
{
  "addrows":[
{
  "id":"1-1",
  "name":"name1",
  "state":"active",
  "severity":1
},
[
  "1-2",
  "name2",
  "active",
  2
]
  ]
}

OR

{
  "deleterows":[
{
  "id":"1-1",
  "name":"name1",
  "state":"active",
  "severity":1
},
[
  "1-2",
  "name2",
  "active",
  2
]
  ]
}

Note 1: we may allow 'rows', 'addrows', and 'deleterows' to be used together
with some well-defined semantics. Alternatively we may mandate that each
request can have only one of the three pieces.

Note 2: we leave it as the responsibility of the sender to send and confirm
the requests for differential updates in the correct order. We could add
sequencing in future work.
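
To illustrate the sender's side, a hedged sketch (the endpoint, port, and
token handling are illustrative, not part of the proposal) of how a pusher
such as vitrage could drive this API:

    import requests

    rows = [{"id": "1-1", "name": "name1", "state": "active", "severity": 1}]
    resp = requests.put(
        "http://congress.example.com:1789/v1/data-sources/vitrage/tables/alarms",
        headers={"X-Auth-Token": "TOKEN"},
        # Full-table replacement; use "addrows"/"deleterows" for diffs.
        json={"rows": rows},
    )
    resp.raise_for_status()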
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev