[openstack-dev] [Nova] Question about thread safety of key-pair and security rules quota

2014-07-23 Thread Chen CH Ji

According to bug [1], there are cases where concurrent
operations on keypair/security group rules can exceed the quota.
I found that we have 3 kinds of resources in quotas.py:
ReservableResource/AbsoluteResource/CountableResource

I'm curious about CountableResource, because it can't be thread safe due to
its logic:

count = QUOTAS.count(context, 'security_group_rules', id)
try:
    projected = count + len(vals)
    QUOTAS.limit_check(context, security_group_rules=projected)
except exception.OverQuota:
    # roughly: the failure is translated into an API error, but the
    # window between count() and limit_check() is unprotected
    raise

Was it designed on purpose to be different from ReservableResource? If we set it
to ReservableResource, just like RAM/CPU, what kind of side effects might that
lead to?

Also, is it possible to consider a solution like 'hold a write lock in the db
layer, check the count of the resource, and raise an exception if it exceeds the quota'?
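To illustrate the race: the following is a minimal, self-contained Python sketch (not Nova code) of the count-then-check pattern. Two threads both read the same count before either commits, so both pass the limit check and the quota is exceeded; a lock standing in for a db-layer write lock closes the window.

```python
import threading

LIMIT = 10
rules = list(range(9))      # nine existing rules; quota limit is ten
lock = threading.Lock()
barrier = threading.Barrier(2)

def create_rule_unsafe():
    count = len(rules)              # 1. count current usage
    barrier.wait()                  # force both threads past the check together
    if count + 1 <= LIMIT:          # 2. limit check (stale by now)
        rules.append(object())      # 3. commit -- quota can be exceeded

threads = [threading.Thread(target=create_rule_unsafe) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(rules))  # 11 -- two creates raced past a limit of 10

def create_rule_safe():
    with lock:                      # count + check + commit as one critical section
        if len(rules) + 1 <= LIMIT:
            rules.append(object())

rules[:] = list(range(9))           # reset to nine rules
threads = [threading.Thread(target=create_rule_safe) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(rules))  # 10 -- the second create is rejected
```

The same idea underlies the 'write lock in the db layer' suggestion: the count, the check, and the insert must be one atomic unit.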

Thanks


[1] https://bugs.launchpad.net/nova/+bug/1301532

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting July 24 1800 UTC

2014-07-23 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting as usual in
#openstack-meeting-alt channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meeting&iso=20140724T18

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



Re: [openstack-dev] [Rally] PTL Elections

2014-07-23 Thread Sergey Lukjanov
Hey folks,

the nomination period has already ended and there is only one
candidate, so there is no need to set up voting.
The Rally PTL for Juno wiki page [0] has been updated.

So, the Rally PTL for Juno cycle is Boris Pavlovic, congratulations!

Thanks.

[0] https://wiki.openstack.org/wiki/Rally/PTL_Elections_Juno#PTL

On Thu, Jul 17, 2014 at 7:55 PM, Sergey Lukjanov  wrote:
> Hi folks,
>
> due to the requirement to have PTL for the program, we're running
> elections for the Rally PTL for Juno cycle. Schedule and policies
> are fully aligned with official OpenStack PTLs elections.
>
> You can find more info in official Juno elections wiki page [0] and
> the same page for Rally elections [1], additionally some more info
> in the past official nominations opening email [2].
>
> Timeline:
>
> till 05:59 UTC July 23, 2014: Open candidacy to PTL positions
> July 23, 2014 - 1300 UTC July 30, 2014: PTL elections
>
> To announce your candidacy please start a new openstack-dev at
> lists.openstack.org mailing list thread with the following subject:
> "[Rally] PTL Candidacy".
>
> [0] https://wiki.openstack.org/wiki/PTL_Elections_March/April_2014
> [1] https://wiki.openstack.org/wiki/Rally/PTL_Elections_Juno
> [2] http://lists.openstack.org/pipermail/openstack-dev/2014-March/031239.html
>
> Thank you.
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Sahara Technical Lead
> (OpenStack Data Processing)
> Principal Software Engineer
> Mirantis Inc.



-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.



[openstack-dev] vhost-scsi support in Nova

2014-07-23 Thread Nicholas A. Bellinger
Hi Nova folks,

Please let me address some of the outstanding technical points that have
been raised recently within the following spec [1] for supporting vhost-scsi
[2] within Nova.

Mike and Daniel have been going back and forth on various details, so I
thought it might be helpful to open the discussion to a wider audience.

First, some background.  I'm the target (LIO) subsystem maintainer for the
upstream Linux kernel, and have been one of the primary contributors in that
community for a number of years.  This includes the target-core subsystem,
the backend drivers that communicate with kernel storage subsystems, and a
number of frontend fabric protocol drivers.

vhost-scsi is one of those frontend fabric protocol drivers that has been
included upstream, that myself and others have contributed to and improved
over the past three years.  Given this experience and commitment to
supporting upstream code, I'd like to address some of the specific points
wrt vhost-scsi here.

*) vhost-scsi doesn't support migration

Since its initial merge in QEMU v1.5, vhost-scsi has had a migration blocker
set.  This is primarily due to requiring some external orchestration in
order to setup the necessary vhost-scsi endpoints on the migration
destination to match what's running on the migration source.

Here are a couple of points that Stefan detailed some time ago about what's
involved for properly supporting live migration with vhost-scsi:

(1) vhost-scsi needs to tell QEMU when it dirties memory pages, either by
DMAing to guest memory buffers or by modifying the virtio vring (which also
lives in guest memory).  This should be straightforward since the
infrastructure is already present in vhost (it's called the "log") and used
by drivers/vhost/net.c.

(2) The harder part is seamless target handover to the destination host.
vhost-scsi needs to serialize any SCSI target state from the source machine
and load it on the destination machine.  We could be in the middle of
emulating a SCSI command.

An obvious solution is to only support active-passive or active-active HA
setups where tcm already knows how to fail over.  This typically requires
shared storage and maybe some communication for the clustering mechanism.
There are more sophisticated approaches, so this straightforward one is just
an example.

That said, we do intend to support live migration for vhost-scsi using
iSCSI/iSER/FC shared storage.

*) vhost-scsi doesn't support qcow2

Given that Cinder drivers other than the NetApp and Gluster drivers do not use
QEMU qcow2 to access storage blocks, this argument is not particularly
relevant here.

However, this doesn't mean that vhost-scsi (and target-core itself) cannot
support qcow2 images.  There is currently an effort to add a userspace
backend driver for the upstream target (tcm_core_user [3]), that will allow
for supporting various disk formats in userspace.

The important part for vhost-scsi is that regardless of what type of target
backend driver is put behind the fabric LUNs (raw block devices using
IBLOCK, qcow2 images using target_core_user, etc) the changes required in
Nova and libvirt to support vhost-scsi remain the same.  They do not change
based on the backend driver.

*) vhost-scsi is not intended for production

vhost-scsi has been included in the upstream kernel since the v3.6 release, and
in QEMU since v1.5.  vhost-scsi runs unmodified out of the box on a
number of popular distributions including Fedora, Ubuntu, and openSUSE.  It
also works as a QEMU boot device with SeaBIOS, and even with the Windows
virtio-scsi miniport driver.

There is at least one vendor who has already posted libvirt patches to
support vhost-scsi, so vhost-scsi is already being pushed beyond a debugging
and development tool.

For instance, here are a few specific use cases where vhost-scsi is
currently the only option for virtio-scsi guests:

  - Low (sub-100 usec) latencies for AIO reads/writes with small-iodepth
workloads
  - 1M+ small-block IOPS workloads at low CPU utilization with large-iodepth
workloads
  - End-to-end data integrity using T10 protection information (DIF)

So vhost-scsi can and will support essential features like live migration and
qcow2, and the virtio-scsi data-plane effort should not block existing
alternatives already in upstream.

With that, we'd like to see Nova officially support vhost-scsi because of
its wide availability in the Linux ecosystem, and the considerable
performance, efficiency, and end-to-end data-integrity benefits that it
already brings to the table.

We are committed to addressing the short- and long-term items for this
driver, and making it a success in OpenStack Nova.

Thank you,

--nab

[1] 
https://review.openstack.org/#/c/103797/5/specs/juno/virtio-scsi-settings.rst
[2] 
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/drivers/vhost/scsi.c
[3] http://www.spinics.net/lists/target-devel/msg07339.html



Re: [openstack-dev] [swift] Use FQDN in Ring files instead of "ip"

2014-07-23 Thread Osanai, Hisashi

Thank you for the clarification.

I understand and agree with your thoughts; it's clear enough.
Thank you for your time and I highly appreciate your responses.

Best Regards,
Hisashi Osanai


On Thursday, July 24, 2014 2:16 PM, John Dickinson wrote:

> Oh, I totally agree with what you are saying. A DNS change may be lower
> cost than running Swift config/management commands. At the very least,
> ops already know how to do DNS updates, regardless of their "cost", whereas
> they have to learn how to do Swift management.
> 
> I was simply adding clarity to the trickiness of the situation. As I said
> originally, it's a balance of offering a feature that has a known cost
> (DNS lookups in a large cluster) vs not offering it and potentially making
> some management more difficult. I don't think either solution is all that
> great, but in the absence of a decision, we've so-far defaulted to "less
> code has less bugs" and not yet written or merged it.




Re: [openstack-dev] [swift] Use FQDN in Ring files instead of "ip"

2014-07-23 Thread John Dickinson
Oh, I totally agree with what you are saying. A DNS change may be lower cost 
than running Swift config/management commands. At the very least, ops already 
know how to do DNS updates, regardless of their "cost", whereas they have to learn 
how to do Swift management.

I was simply adding clarity to the trickiness of the situation. As I said 
originally, it's a balance of offering a feature that has a known cost (DNS 
lookups in a large cluster) vs not offering it and potentially making some 
management more difficult. I don't think either solution is all that great, but 
in the absence of a decision, we've so-far defaulted to "less code has less 
bugs" and not yet written or merged it.

--John






On Jul 23, 2014, at 10:07 PM, Osanai, Hisashi  
wrote:

> 
> Thank you for the quick response.
> 
> On Thursday, July 24, 2014 12:51 PM, John Dickinson wrote:
> 
>> you can actually do the same today
>> with the IP-based system. You can use the set_info command of
>> swift-ring-builder to change the IP for existing devices and this avoids
>> any rebalancing in the cluster.
> 
> Thanks for the info. 
> I will check the set_info command of swift-ring-builder.
> 
> My understanding now is 
> - in the FQDN case, an operator has to do DNS related operation. (no whole 
> rebalancing)
> - in the IP case, an operator has to execute swift's command. (no whole 
> rebalancing)
> 
> I think that the point of this discussion is "Swift's independence in case of 
> failure" versus "adding a lot of operational complexity and burden".
> 
> I think that the recovery procedure in the FQDN case is a common one, so it is 
> better to have the ability to use FQDNs in addition to IP addresses.
> What do you think of this?
> 
> +--+--+---+
> |  | In the FQDN case | In the IP case|
> +--+--+---+
> |Swift's independency  |completely independent|rely on DNS systems|
> +--+--+---+
> |Operational complexity| (1)  | (2)   |
> |(recovery process)| simple   | a bit complex |
> +--+--+---+
> |Operational complexity| DNS and Swift| Swift only|
> |(necessary skills)|  |   |
> +--+--+---+
> 
> (1) in the FQDN case, change DNS info for the node. (no swift related 
> operation)
> (2) in the IP case, execute the swift-ring-builder command on a node then 
> copy it to 
>all related nodes.
> 
> Best Regards,
> Hisashi Osanai
> 
> 





Re: [openstack-dev] [swift] Use FQDN in Ring files instead of "ip"

2014-07-23 Thread Osanai, Hisashi

Thank you for the quick response.

On Thursday, July 24, 2014 12:51 PM, John Dickinson wrote:

> you can actually do the same today
> with the IP-based system. You can use the set_info command of
> swift-ring-builder to change the IP for existing devices and this avoids
> any rebalancing in the cluster.

Thanks for the info. 
I will check the set_info command of swift-ring-builder.

My understanding now is:
- in the FQDN case, an operator has to do a DNS-related operation (no whole 
rebalancing)
- in the IP case, an operator has to execute a Swift command (no whole 
rebalancing)

I think that the point of this discussion is "Swift's independence in case of 
failure" versus "adding a lot of operational complexity and burden".

I think that the recovery procedure in the FQDN case is a common one, so it is 
better to have the ability to use FQDNs in addition to IP addresses.
What do you think of this?

+--+--+---+
|  | In the FQDN case | In the IP case|
+--+--+---+
|Swift's independency  |completely independent|rely on DNS systems|
+--+--+---+
|Operational complexity| (1)  | (2)   |
|(recovery process)| simple   | a bit complex |
+--+--+---+
|Operational complexity| DNS and Swift| Swift only|
|(necessary skills)|  |   |
+--+--+---+

(1) in the FQDN case, change the DNS info for the node (no Swift-related operation)
(2) in the IP case, execute the swift-ring-builder command on a node, then copy 
the ring file to all related nodes

Best Regards,
Hisashi Osanai




Re: [openstack-dev] [Fuel] Neutron ML2 Blueprints

2014-07-23 Thread Mike Scherbakov
Sorry, we did our best to get it in... The risk looks high to me. We also
update OpenStack itself from stable/icehouse, which is also risky; plus
Mellanox & NSX. To avoid a broken master, I'd go with the already-merged,
apparently working version of ML2, and merge Andrew's patchset into 6.0.

Mike Scherbakov
#mihgen
On Jul 23, 2014 6:20 PM, "Vladimir Kuklin"  wrote:

> Andrew
>
> AFAIK, extended tests on full HA envs failed due to errors in deployment
> of secondary controllers. There is new patchset on review, but I am not
> sure that this code is passing extended tests. If it does, then we can
> consider merge of your code if it is working with NSX and Mellanox code. I
> am deeply concerned about this and my opinion is that we should not do it
> because we can introduce enormous regression right after Soft Code Freeze
> and put our release under very high risk.
>
>
> Mike, Andrew, what do you think?
>
>
> On Fri, Jul 18, 2014 at 10:53 AM, Andrew Woodward 
> wrote:
>
>> All issues should be resolved, and CI is passing. Please start testing.
>>
>>
>> On Thu, Jul 17, 2014 at 4:30 AM, Vladimir Kuklin 
>> wrote:
>>
>>> Andrew, we have extended system tests passing with our current pacemaker
>>> corosync code. Either it is your environment or some bug we cannot
>>> reproduce. Also, it may be related to puppet ordering issues thus trying to
>>> start some services before some others. As [2] is the only issue you are
>>> pointing at now, let's create a bug and track it in Launchpad.
>>>
>>>
>>> On Thu, Jul 17, 2014 at 11:20 AM, Andrew Woodward 
>>> wrote:
>>>
 [2] still has no positive progress; simply making puppet stop the
 services isn't all that useful, we will need to move towards always
 using override files
 [3] is closed as it hasn't occurred in two days
 [4] may be closed as it's not occurring in CI or in my testing anymore

 [5] is closed, was due to [7]

 [7] https://bugs.launchpad.net/puppet-neutron/+bug/1343009

 CI is passing CentOS now, and only failing Ubuntu in OSTF. This
 appears to be due to services not being properly managed in
 corosync/pacemaker

 On Tue, Jul 15, 2014 at 11:24 PM, Andrew Woodward 
 wrote:
 > [2] appears to be made worse, if not caused, by neutron services
 > autostarting on Debian; no patch yet, need to add a mechanism to the ha
 > layer to generate override files.
 > [3] appears to have stopped with this mornings master
 > [4] deleting the cluster, and restarting mostly removed this, was
 > getting issue with $::osnailyfacter::swift_partition/.. not existing
 > (/var/lib/glance), but is fixed in rev 29
 >
 > [5] is still the critical issue blocking progress; I'm at a loss
 > as to why this is occurring. Changes to ordering have no effect. Next
 > steps probably involve pre-hacking keystone and neutron and
 > nova-client to be more verbose about their key usage. As a hack we
 > could simply restart neutron-server, but I'm not convinced the issue
 > can't come back, since we don't know how it started.
 >
 >
 >
 > On Tue, Jul 15, 2014 at 6:34 AM, Sergey Vasilenko
 >  wrote:
 >> [1] fixed in https://review.openstack.org/#/c/107046/
 >> Thanks for report a bug.
 >>
 >>
 >
 >
 >
 > --
 > Andrew
 > Mirantis
 > Ceph community



 --
 Andrew
 Mirantis
 Ceph community


>>>
>>>
>>>
>>> --
>>> Yours Faithfully,
>>> Vladimir Kuklin,
>>> Fuel Library Tech Lead,
>>> Mirantis, Inc.
>>> +7 (495) 640-49-04
>>> +7 (926) 702-39-68
>>> Skype kuklinvv
>>> 45bk3, Vorontsovskaya Str.
>>> Moscow, Russia,
>>> www.mirantis.com 
>>> www.mirantis.ru
>>> vkuk...@mirantis.com
>>>
>>>
>>>
>>
>>
>> --
>> Andrew
>> Mirantis
>> Ceph community
>>
>>
>>
>
>
> --
> Yours Faithfully,
> Vladimir Kuklin,
> Fuel Library Tech Lead,
> Mirantis, Inc.
> +7 (495) 640-49-04
> +7 (926) 702-39-68
> Skype kuklinvv
> 45bk3, Vorontsovskaya Str.
> Moscow, Russia,
> www.mirantis.com 
> www.mirantis.ru
> vkuk...@mirantis.com
>

[openstack-dev] [nova] Add support for hardware transports when using open-iscsi

2014-07-23 Thread Anish Bhatt
Hi,

Currently, the implementation that uses open-iscsi to log in to iSCSI targets 
does not support the use of hardware transports (currently bnx2i, cxgb3i & 
cxgb4i are supported by open-iscsi).

The only change would be adding a -I parameter to the 
standard login/discovery command when the requisite hardware is available. The 
transport iface files can be generated via iscsiadm itself. No other commands 
would change at all. The default value is -I tcp, which is the same as not 
giving the -I parameter.
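As a sketch of how small the change is (illustrative Python, not the actual nova/virt/libvirt/volume.py code; the function name and the iface names are made up for the example), the iscsiadm argument list only grows an optional '-I <iface>' pair:

```python
def iscsiadm_discovery_cmd(portal, transport_iface=None):
    """Build an iscsiadm sendtargets discovery command line.

    transport_iface is the iface file name generated by iscsiadm for a
    hardware transport (bnx2i/cxgb3i/cxgb4i); None means the default
    software initiator, equivalent to passing '-I tcp'.
    """
    cmd = ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal]
    if transport_iface is not None:
        cmd += ["-I", transport_iface]
    return cmd

# Default software transport -- identical to today's behaviour:
print(iscsiadm_discovery_cmd("10.0.0.5:3260"))
# Hardware offload -- the only difference is the trailing '-I <iface>' pair:
print(iscsiadm_discovery_cmd("10.0.0.5:3260", "cxgb4i.iface0"))
```

The login command would gain the same optional pair, which is why the change stays localized to the files listed below.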
As far as I can see, all changes would be localized to the following nova files 
:

nova/virt/libvirt/volume.py
nova/cmd/baremetal_deploy_helper.py
nova/tests/virt/libvirt/test_volume.py

Would this be a useful addition to OpenStack?

Thanks,
Anish


Re: [openstack-dev] [swift] Use FQDN in Ring files instead of "ip"

2014-07-23 Thread John Dickinson
While you're correct that a chassis replacement can avoid data rebalancing in the 
FQDN case if you update DNS, you can actually do the same today with the 
IP-based system. You can use the set_info command of swift-ring-builder to 
change the IP for existing devices, and this avoids any rebalancing in the 
cluster.
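As a rough sketch of the tradeoff being discussed (illustrative Python, not Swift code): storing an IP makes the ring self-contained, while storing an FQDN defers to DNS at connection time — a redirection the operator can change without touching the ring, at the cost of a lookup dependency on every resolution.

```python
import socket

def resolve_device_address(stored):
    """Return the address to connect to for a ring device entry.

    If the ring stores an IP, use it directly (no external dependency).
    If it stores an FQDN, every resolution is a DNS lookup -- which is
    exactly what makes a node swap a pure DNS change, and also what adds
    a load/failure dependency on DNS in a large cluster.
    """
    try:
        socket.inet_aton(stored)     # already an IPv4 address?
        return stored
    except OSError:
        return socket.gethostbyname(stored)  # FQDN: ask DNS

print(resolve_device_address("127.0.0.1"))  # IP entry: returned as-is
print(resolve_device_address("localhost"))  # FQDN entry: resolved via lookup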

--John



On Jul 23, 2014, at 6:27 PM, Osanai, Hisashi  
wrote:

> 
> I would like to discuss this topic more deeply.
> 
> I understand we need to prepare DNS systems and add a lot of operational 
> complexity and burden to use the DNS system when we use FQDN in Ring files.
> 
> However I think most datacenter have DNS systems to manage network resources 
> such as ip addresses and hostnames and it is centralized management.
> And you already pointed out that we can get benefit to use FQDN in Ring files 
> with some scenarios. 
> 
> A scenarios: Corruption of a storage node
> 
> IP case:
> One storage node corrupted when swift uses IPs in Ring files. An operator 
> removes 
> the node from swift system using ring-builder command and keeping the node 
> for 
> further investigation. Then the operator tries to add new storage node with 
> different ip address. In this case swift rebalance all objects.
> 
> FQDN case:
> One storage node corrupted when swift uses FQDN in Ring files. An operator 
> prepares 
> new storage node with difference ip address then changes info in DNS systems 
> with 
> the ip address. In this case swift copy objects that related to the node.
> 
> If above understanding is true, it is better to have ability for using FQDN 
> in Ring 
> files in addition to ip addresses. What do you think?
> 
> On Thursday, July 24, 2014 12:55 AM, John Dickinson wrote:
> 
>> However, note that until now, we've intentionally kept it as just IP
>> addresses since using hostnames adds a lot of operational complexity and
>> burden. I realize that hostnames may be preferred in some cases, but this
>> places a very large strain on DNS systems. So basically, it's a question
>> of do we add the feature, knowing that most people who use it will in
>> fact be making their lives more difficult, or do we keep it out, knowing
>> that we won't be serving those who actually require the feature.
> 
> Best Regards,
> Hisashi Osanai
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova] Re:Question about addit log in nova-compute.log

2014-07-23 Thread Chen CH Ji

Hi
 [1] asked the opinion about nova-compute log questions and
I proposed [2] as the changes for it ,
 Can someone take a look and share your opinion? Thanks a
lot

[1] http://lists.openstack.org/pipermail/openstack-dev/2014-May/034312.html
[2] https://review.openstack.org/#/c/93261

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] Juno mid-cyle meetup last minute updates

2014-07-23 Thread Arnaud Legendre
Greetings,

A couple of updates for the meetup tomorrow:

- The schedule of the meetup can be found here: 
https://wiki.openstack.org/wiki/Glance/JunoCycleMeetup
- We will start at 9:00 AM PST: 
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140725T09&p1=1240
- It will be possible to join remotely the meetup through Webex. Please ping me 
(arnaud) on IRC to get the Webex URL and passcode. Hopefully, the experience 
won’t be too bad…
- The address has been updated:
VMware, Inc
3425 Hillview Avenue - Building Hilltop A (HTA)
Palo Alto, CA 94304 USA
- Directions to VMware: 
http://www.vmware.com/files/pdf/company/vmw-directions-to-vmware.pdf

That’s pretty much it!

Best,
Arnaud


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Doug Wiegley
Though, this is probably a good time to talk requirements, and to start 
thinking about whether this is an lbaas issue, or an advanced services (*aaS) 
issue, so we can have some useful discussions at the summit, and not solve this 
scaling metrics problem 8 different ways.

Doug


From: Stephen Balukoff mailto:sbaluk...@bluebox.net>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, July 23, 2014 at 7:14 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [heat] health maintenance in autoscaling groups

It's probably worth pointing out that most of the Neutron LBaaS team are 
spending most of our time doing a major revision to Neutron LBaaS. How stats 
processing should happen has definitely been discussed but not resolved at 
present-- and in any case it was apparent to those working on the project that 
it has secondary importance compared to the revision work presently underway.

I personally would like to have queries about most objects in the stats API to 
Neutron LBaaS return a dictionary or statuses for child objects which then a UI 
or auto-scaling system can interpret however it wishes. Your points are 
certainly well made, and I agree that it might also be useful to inject status 
information externally, or have some kind of hook there to get event 
notifications when individual member statuses change. But this is really a 
discussion that needs to happen once the current code drive is near fruition 
(ie. for Kilo).

Stephen


On Wed, Jul 23, 2014 at 1:27 PM, Doug Wiegley 
mailto:do...@a10networks.com>> wrote:
Great question, and to my knowledge, not at present.  There is an ongoing 
discussion about a common usage framework for ceilometer, for all the various 
*aaS things, but status I not included (yet!).  I think that spec is in gerrit.

Thanks,
Doug


From: Mike Spreitzer mailto:mspre...@us.ibm.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, July 23, 2014 at 2:03 PM

To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [heat] health maintenance in autoscaling groups

Doug Wiegley mailto:do...@a10networks.com>> wrote on 
07/23/2014 03:43:02 PM:

> From: Doug Wiegley mailto:do...@a10networks.com>>
> ...
> The state of the world today: ‘status’ in the neutron database is
> configuration/provisioning status, not operational status.  Neutron-
> wide thing.  We were discussing adding operational status fields (or
> a neutron REST call to get the info from the backend) last month,
> but it’s something that isn’t planned for a serious conversation
> until Kilo, at present.

Thanks for the prompt response.  Let me just grasp at one last straw: is there 
any chance that Neutron will soon define and implement Ceilometer metrics that 
reveal PoolMember health?

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Doug Wiegley
Hi Mike,

> and listed the possible values of the status field, including "INACTIVE".  
> Other sources are telling me that status=INACTIVE when the health monitor 
> thinks the member is unhealthy, status!=INACTIVE when the health monitor 
> thinks the member is healthy.  What's going on here?

Indeed, the code will return a server status of INACTIVE if the lbaas agent 
marks a member ‘DOWN’.  But, nowhere can I find that it actually ever does so.

My statements about the status field for lbaas/neutron came from the author of 
the ref lbaas driver; I’ll check with him tomorrow and see if I misunderstood.

Thanks,
doug

From: Mike Spreitzer mailto:mspre...@us.ibm.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, July 23, 2014 at 9:14 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [heat] health maintenance in autoscaling groups

Stephen Balukoff mailto:sbaluk...@bluebox.net>> wrote on 
07/23/2014 09:14:35 PM:

> It's probably worth pointing out that most of the Neutron LBaaS team
> are spending most of our time doing a major revision to Neutron
> LBaaS. How stats processing should happen has definitely been
> discussed but not resolved at present-- and in any case it was
> apparent to those working on the project that it has secondary
> importance compared to the revision work presently underway.
>
> I personally would like to have queries about most objects in the
> stats API to Neutron LBaaS return a dictionary or

I presume you meant "of" rather than "or".

>   statuses for child
> objects which then a UI or auto-scaling system can interpret however
> it wishes.

That last part makes me a little nervious.  I have seen "can interpret however 
it wishes" mean "can not draw any useful inferences because there are no 
standards for that content".

I presume that as the grand and glorious future arrives, it will be with due 
respect for backwards compatibility.

In the present, I am getting what appears to be conflicting information on the 
status field of the responses of 
http://docs.openstack.org/api/openstack-network/2.0/content/GET_showMember__v2.0_pools__pool_id__members__member_id__lbaas_ext_ops_member.html

Doug Wiegely wrote
> ‘status’ in the neutron database is configuration/provisioning status, not 
> operational status
and listed the possible values of the status field, including "INACTIVE".  
Other sources are telling me that status=INACTIVE when the health monitor 
thinks the member is unhealthy, status!=INACTIVE when the health monitor thinks 
the member is healthy.  What's going on here?

Thanks,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Add static routes on neutron router to devices in the external network

2014-07-23 Thread Carl Baldwin
I wondered the same as Kevin.  Could you confirm that the vpn gateway is
directly connected to the external subnet or not?  The diagram isn't quite
clear

Assuming it is directly connected then it is probable that routes through
the external gateway are not considered, hence the error you received.  It
seems reasonable to me to consider a proposal that would allow this.  It
should be an admin only capability by default since it would be over the
external (shared) network and not a tenant network.  This seems like a new
feature rather than a bug to me.

As an alternative, could you try configuring your router with the static
route so that it would send an icmp redirect to the neutron router?

Carl
On Jul 22, 2014 11:23 AM, "Kevin Benton"  wrote:

> The issue (if I understand your diagram correctly) is that the VPN GW
> address is on the other side of your home router from the neutron router.
> The nexthop address has to be an address on one of the subnets directly
> attached to the router. In this topology, the static route should be on
> your home router.
>
> --
> Kevin Benton
>
>
> On Tue, Jul 22, 2014 at 6:55 AM, Ricardo Carrillo Cruz <
> ricardo.carrillo.c...@gmail.com> wrote:
>
>> Hello guys
>>
>> I have the following network setup at home:
>>
>> [openstack instances] -> [neutron router] -> [  [home router] [vpn gw]   ]
>>  TENANT NETWORK  EXTERNAL NETWORK
>>
>> I need my instances to connect to machines that are connected thru the
>> vpn gw server.
>> By default, all traffic that comes from openstack instances go thru the
>> neutron router, and then hop onto the home router.
>>
>> I've seen there's an extra routes extension for neutron routers that
>> would allow me to do that, but apparently I can't add extra routes to
>> destinations in the external network, only subnets known by neutron.
>> This can be seen from the neutron CLI command:
>>
>> 
>> neutron router-update  --routes type=dict list=true
>> destination=,nexthop=
>> Invalid format for routes: [{u'nexthop': u'', u'destination':
>> u''}], the nexthop is not connected with
>> router
>> 
>>
>> Is this use case not being possible to do at all?
>>
>> P.S.
>> I found Heat BP
>> https://blueprints.launchpad.net/heat/+spec/router-properties-object
>> that in the description reads this can be done on Neutron, but can't figure
>> out how.
>>
>> Regards
>>
>>
>>
>
>
> --
> Kevin Benton
>
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo.cfg] Dynamically load in options/groups values from the configuration files

2014-07-23 Thread Baohua Yang
Hi, all
 The current oslo.cfg module provides an easy way to load known
options/groups by name from the configuration files.
  I am wondering if there's a possible solution to dynamically load
them?

  For example, I do not know the group names (section name in the
configuration file), but we read the configuration file and detect the
definitions inside it.

#Configuration file:
[group1]
key1 = value1
key2 = value2

   Then I want to automatically load group1.key1 and group1.key2,
without knowing the name of group1 first.
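For what it's worth, the discovery half can be done before oslo.cfg enters the
picture; a minimal sketch using only the stdlib configparser (not oslo.cfg
itself) might look like:

```python
import configparser

# Minimal sketch: parse the file, discover the section names, then read
# every option under each section -- without knowing "group1" up front.
# (One could then register the discovered groups with oslo.cfg, but that
# step is omitted here.)
sample = """\
[group1]
key1 = value1
key2 = value2
"""

parser = configparser.ConfigParser()
parser.read_string(sample)

discovered = {section: dict(parser.items(section))
              for section in parser.sections()}
print(discovered)  # {'group1': {'key1': 'value1', 'key2': 'value2'}}
```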

Thanks a lot!

-- 
Best wishes!
Baohua


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Mike Spreitzer
Stephen Balukoff  wrote on 07/23/2014 09:14:35 PM:

> It's probably worth pointing out that most of the Neutron LBaaS team
> are spending most of our time doing a major revision to Neutron 
> LBaaS. How stats processing should happen has definitely been 
> discussed but not resolved at present-- and in any case it was 
> apparent to those working on the project that it has secondary 
> importance compared to the revision work presently underway.
> 
> I personally would like to have queries about most objects in the 
> stats API to Neutron LBaaS return a dictionary or

I presume you meant "of" rather than "or".

>   statuses for child
> objects which then a UI or auto-scaling system can interpret however
> it wishes.

That last part makes me a little nervous.  I have seen "can interpret 
however it wishes" mean "cannot draw any useful inferences because there 
are no standards for that content".

I presume that as the grand and glorious future arrives, it will be with 
due respect for backwards compatibility.

In the present, I am getting what appears to be conflicting information on 
the status field of the responses of 
http://docs.openstack.org/api/openstack-network/2.0/content/GET_showMember__v2.0_pools__pool_id__members__member_id__lbaas_ext_ops_member.html

Doug Wiegley wrote
> ‘status’ in the neutron database is configuration/provisioning status, 
not operational status
and listed the possible values of the status field, including "INACTIVE". 
Other sources are telling me that status=INACTIVE when the health monitor 
thinks the member is unhealthy, status!=INACTIVE when the health monitor 
thinks the member is healthy.  What's going on here?

Thanks,
Mike




Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon mode support

2014-07-23 Thread Carl Baldwin
+1

I think this is important for scalability.
On Jul 23, 2014 5:45 PM, "Miguel Angel Ajo Pelayo" 
wrote:

> +1
>
> Sent from my Android phone using TouchDown (www.nitrodesk.com)
>
>
> -Original Message-
> From: Yuriy Taraday [yorik@gmail.com]
> Received: Thursday, 24 Jul 2014, 0:42
> To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]
>
> Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon
>mode support
>
>
> Hello.
>
> I'd like to propose making a spec freeze exception for
> rootwrap-daemon-mode spec [1].
>
> Its goal is to save agents' execution time by using daemon mode for
> rootwrap and thus avoiding python interpreter startup time as well as sudo
> overhead for each call. Preliminary benchmark shows 10x+ speedup of the
> rootwrap interaction itself.
>
> This spec has a number of supporters from the Neutron team (Carl and Miguel
> gave it their +2 and +1) and has all code waiting for review [2], [3], [4].
> The only thing that has been blocking its progress is Mark's -2 left when
> oslo.rootwrap spec hasn't been merged yet. Now that's not the case and code
> in oslo.rootwrap is steadily getting approved [5].
>
> [1] https://review.openstack.org/93889
> [2] https://review.openstack.org/82787
> [3] https://review.openstack.org/84667
> [4] https://review.openstack.org/107386
> [5]
> https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z
>
> --
>
> Kind regards, Yuriy.
>
>
>


[openstack-dev] OpenStack Instance OS Support

2014-07-23 Thread Quang Long
Hi guys,
I have a question: if we use OpenStack Havana with the QEMU/KVM hypervisor
on Ubuntu 12.04, which guest OSes can we use when launching instances?

I found a link related to this issue, for reference:
http://www.linux-kvm.org/page/Guest_Support_Status#Windows_Family

But with Red Hat Enterprise Linux OpenStack Platform, I notice that fewer
guest OSes are supported:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Installation_and_Configuration_Guide/Supported_Virtual_Machine_Operating_Systems.html

All of your answers would be appreciated.
Many thanks.


Re: [openstack-dev] [swift] Use FQDN in Ring files instead of "ip"

2014-07-23 Thread Osanai, Hisashi

I would like to discuss this topic more deeply.

I understand that using FQDNs in Ring files requires us to prepare DNS systems 
and adds a lot of operational complexity and burden.

However, most datacenters already have DNS systems that centrally manage network 
resources such as IP addresses and hostnames.
And you already pointed out that using FQDNs in Ring files can be beneficial in 
some scenarios. 

A scenario: corruption of a storage node

IP case:
A storage node fails while Swift uses IPs in the Ring files. An operator removes 
the node from the Swift system using the ring-builder command, keeping the node 
for further investigation. The operator then adds a new storage node with a 
different IP address. In this case Swift rebalances all objects.

FQDN case:
A storage node fails while Swift uses FQDNs in the Ring files. An operator 
prepares a new storage node with a different IP address, then updates the DNS 
record to point at the new address. In this case Swift copies only the objects 
related to that node.

If the above understanding is correct, it would be better to be able to use 
FQDNs in Ring files in addition to IP addresses. What do you think?
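The operational difference comes down to when the name-to-address binding is
resolved. As a trivial illustration (using "localhost" as a stand-in for a
storage node's FQDN):

```python
import socket

# A ring entry that stores a hostname is resolved at use time, so
# repointing the DNS record moves traffic to the replacement node
# without rebuilding the ring; a stored IP pins the old device until
# the next rebalance.
node = "localhost"  # stand-in for a storage node's FQDN
ip = socket.gethostbyname(node)
print(ip)  # 127.0.0.1
```

This is also exactly the dependency John points out below: every such lookup
now leans on the DNS system being fast and correct.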

On Thursday, July 24, 2014 12:55 AM, John Dickinson wrote:

> However, note that until now, we've intentionally kept it as just IP
> addresses since using hostnames adds a lot of operational complexity and
> burden. I realize that hostnames may be preferred in some cases, but this
> places a very large strain on DNS systems. So basically, it's a question
> of do we add the feature, knowing that most people who use it will in
> fact be making their lives more difficult, or do we keep it out, knowing
> that we won't be serving those who actually require the feature.

Best Regards,
Hisashi Osanai



Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Stephen Balukoff
It's probably worth pointing out that most of the Neutron LBaaS team are
spending most of our time doing a major revision to Neutron LBaaS. How
stats processing should happen has definitely been discussed but not
resolved at present-- and in any case it was apparent to those working on
the project that it has secondary importance compared to the revision work
presently underway.

I personally would like to have queries about most objects in the stats API
to Neutron LBaaS return a dictionary or statuses for child objects which
then a UI or auto-scaling system can interpret however it wishes. Your
points are certainly well made, and I agree that it might also be useful to
inject status information externally, or have some kind of hook there to
get event notifications when individual member statuses change. But this is
really a discussion that needs to happen once the current code drive is
near fruition (ie. for Kilo).
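To make the idea concrete, such a per-child status dictionary might look
something like the sketch below; the field names and values are purely
illustrative, not the actual Neutron LBaaS schema:

```python
# Hypothetical per-child status dictionary; names are illustrative only.
pool_status = {
    "provisioning_status": "ACTIVE",  # configuration state, per Doug's note
    "members": {
        "member-1": {"operating_status": "ONLINE"},
        "member-2": {"operating_status": "OFFLINE"},
    },
}

# A UI or autoscaling system then interprets it however it wishes, e.g.
# flagging members that need replacement:
unhealthy = [name for name, status in pool_status["members"].items()
             if status["operating_status"] != "ONLINE"]
print(unhealthy)  # ['member-2']
```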

Stephen


On Wed, Jul 23, 2014 at 1:27 PM, Doug Wiegley  wrote:

>  Great question, and to my knowledge, not at present.  There is an
> ongoing discussion about a common usage framework for ceilometer, for all
> the various *aaS things, but status is not included (yet!).  I think that
> spec is in gerrit.
>
>  Thanks,
> Doug
>
>
>   From: Mike Spreitzer 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, July 23, 2014 at 2:03 PM
>
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [heat] health maintenance in autoscaling
> groups
>
>  Doug Wiegley  wrote on 07/23/2014 03:43:02 PM:
>
> > From: Doug Wiegley 
> > ...
> > The state of the world today: ‘status’ in the neutron database is
> > configuration/provisioning status, not operational status.  Neutron-
> > wide thing.  We were discussing adding operational status fields (or
> > a neutron REST call to get the info from the backend) last month,
> > but it’s something that isn’t planned for a serious conversation
> > until Kilo, at present.
>
> Thanks for the prompt response.  Let me just grasp at one last straw: is
> there any chance that Neutron will soon define and implement Ceilometer
> metrics that reveal PoolMember health?
>
> Thanks,
> Mike
>
>
>


-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - certificates data persistency

2014-07-23 Thread Stephen Balukoff
I'm willing to go with simpler code that gets us this feature faster, and
re-evaluating whether storing some extra data on certificates locally makes
significant performance gains later on.

First we need to get it working, then we need to make it work fast. :)
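The trade-off under discussion (re-fetching certificate data from Barbican on
every change vs. caching SubjectCommonName/SubjectAltName locally) can be
sketched as below; `fetch_san_from_barbican` is a stand-in, not the real common
module API:

```python
# Illustrative cache sketch; the fetch function simulates a Barbican
# REST round trip and counts how often it is called.
fetch_count = 0

def fetch_san_from_barbican(container_id):
    global fetch_count
    fetch_count += 1
    return {"cn": container_id, "san": [container_id + ".example.com"]}

cache = {}

def get_san(container_id):
    # Containers are immutable, so a cached answer never goes stale
    # for the same container id.
    if container_id not in cache:
        cache[container_id] = fetch_san_from_barbican(container_id)
    return cache[container_id]

# Repeated listener updates touching the same two containers:
for cid in ["c1", "c2", "c1", "c2"]:
    get_san(cid)
print(fetch_count)  # 2 round trips instead of 4
```

Without the cache every update re-pays the round trips; with it, correctness
rests on the immutability assumption stated in the RST.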

Stephen


On Tue, Jul 22, 2014 at 4:04 PM, Carlos Garza 
wrote:

>
> On Jul 20, 2014, at 6:32 AM, Evgeny Fedoruk  wrote:
>
> > Hi folks,
> >
> > In a current version of TLS capabilities RST certificate
> SubjectCommonName and SubjectAltName information is cached in a database.
> > This may be not necessary and here is why:
> >
> > 1.   TLS containers are immutable, meaning once a container was
> associated to a listener and was validated, it’s not necessary to validate
> the container anymore.
> > This is relevant for both, default container and containers used for SNI.
> > 2.   LBaaS front-end API can check if TLS containers ids were
> changed for a listener as part of an update operation. Validation of
> containers will be done for
> > new containers only. This is stated in “Performance Impact” section of
> the RST, excepting the last statement that proposes persistency for SCN and
> SAN.
> > 3.   Any interaction with Barbican API for getting containers data
> will be performed via a common module API only. This module’s API is
> mentioned in
> > “SNI certificates list management” section of the RST.
> > 4.   In case when driver really needs to extract certificate
> information prior to the back-end system provisioning, it will do it via
> the common module API.
> > 5.   Back-end provisioning system may cache any certificate data,
> except private key, in case of a specific need of the vendor.
> >
> > IMO, There is no real need to store certificates data in Neutron
> database and manage its life cycle.
> > Does anyone sees a reason why caching certificates’ data in Neutron
> database is critical?
>
>     It's not so much caching the certificate. Let's say an LB
> change comes into the API that wants to add an X509; then we need to parse
> the subjectNames and subjectAltNames from the previous X509s, which aren't
> available to us, so we must grab them all from Barbican over the REST
> interface. Like I said in an earlier email, it's a balancing act between
> "Single Source of Truth" and how much lag we're willing to deal with.
>
>
>
> > Thank you,
> > Evg
> >
> >
>
>
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [Neutron] Flavor Framework spec approval deadline exception

2014-07-23 Thread Stephen Balukoff
I am wholly in favor of this sentiment!


On Tue, Jul 22, 2014 at 8:16 AM, Kyle Mestery  wrote:

> On Tue, Jul 22, 2014 at 10:10 AM, Eugene Nikanorov
>  wrote:
> > Hi folks,
> >
> > I'd like to request an exception for the Flavor Framework spec:
> > https://review.openstack.org/#/c/102723/
> >
> > It already have more or less complete server-side implementation:
> > https://review.openstack.org/#/c/105982/
> >
> > CLI will be posted on review soon.
> >
> We need the flavor framework to land for Juno, as LBaaS needs it. I'm
> ok with an exception here. Can we work to close the gaps in the spec
> review in the next few days? I see a few -1s on there still.
>
> Thanks,
> Kyle
>
> > Thanks,
> > Eugene.
> >
> >
> >
> >
>
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807


Re: [openstack-dev] [Nova] [Spec Freeze Exception] [Gantt] Scheduler Isolate DB spec

2014-07-23 Thread Michael Still
In that case this exception is approved. The exception is in the form
of another week to get the spec merged, so quick iterations are the
key.

Cheers,
Michael

On Wed, Jul 23, 2014 at 5:31 PM, Sylvain Bauza  wrote:
> Le 23/07/2014 01:11, Michael Still a écrit :
>> This spec freeze exception only has one core signed up. Are there any
>> other cores interested in working with Sylvain on this one?
>>
>> Michael
>
> By looking at
> https://etherpad.openstack.org/p/nova-juno-spec-priorities, I can see
> ndipanov as volunteer for sponsoring this blueprint.
>
> -Sylvain
>
>> On Mon, Jul 21, 2014 at 7:59 PM, John Garbutt  wrote:
>>> On 18 July 2014 09:10, Sylvain Bauza  wrote:
 Hi team,

 I would like to put your attention on https://review.openstack.org/89893
 This spec targets to isolate access within the filters to only Scheduler
 bits. This one is a prerequisite for a possible split of the scheduler
 into a separate project named Gantt, as it's necessary to remove direct
 access to other Nova objects (like aggregates and instances).

 This spec is one of the oldest specs so far, but its approval has been
 delayed because there were other concerns to discuss first about how we
 split the scheduler. Now that these concerns have been addressed, it is
 time for going back to that blueprint and iterate over it.

 I understand the exception is for a window of 7 days. In my opinion,
 this objective is targetable as now all the pieces are there for making
 a consensus.

 The change by itself is only a refactoring of the existing code with no
 impact on APIs neither on DB scheme, so IMHO this blueprint is a good
 opportunity for being on track with the objective of a split by
 beginning of Kilo.

 Cores, I leave you appreciate the urgency and I'm available by IRC or
 email for answering questions.
>>> Regardless of Gantt, tidying up the data dependencies here make sense.
>>>
>>> I feel we need to consider how the above works with upgrades.
>>>
>>> I am happy to sponsor this blueprint. Although I worry we might not
>>> get agreement in time.
>>>
>>> Thanks,
>>> John
>>>
>>
>>
>



-- 
Rackspace Australia



Re: [openstack-dev] [Nova][Spec freeze exception] Controlled shutdown of GuestOS

2014-07-23 Thread Michael Still
Another core sponsor would be nice on this one. Any takers?

Michael

On Thu, Jul 24, 2014 at 4:14 AM, Daniel P. Berrange  wrote:
> On Wed, Jul 23, 2014 at 06:08:52PM +, Day, Phil wrote:
>> Hi Folks,
>>
>> I'd like to propose the following as an exception to the spec freeze, on the 
>> basis that it addresses a potential data corruption issues in the Guest.
>>
>> https://review.openstack.org/#/c/89650
>>
>> We were pretty close to getting acceptance on this before, apart from a 
>> debate over whether one additional config value could be allowed to be set 
>> via image metadata - so I've given in for now on wanting that feature from a 
>> deployer perspective, and said that we'll hard code it as requested.
>>
>> Initial parts of the implementation are here:
>> https://review.openstack.org/#/c/68942/
>> https://review.openstack.org/#/c/99916/
>
> Per my comments already, I think this is important for Juno and will
> sponsor it.
>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-manager.org :|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>



-- 
Rackspace Australia



Re: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon mode support

2014-07-23 Thread Miguel Angel Ajo Pelayo
+1

Sent from my Android phone using TouchDown (www.nitrodesk.com)


-Original Message-
From: Yuriy Taraday [yorik@gmail.com]
Received: Thursday, 24 Jul 2014, 0:42
To: OpenStack Development Mailing List [openstack-dev@lists.openstack.org]
Subject: [openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon   
mode support

Hello.

I'd like to propose making a spec freeze exception for rootwrap-daemon-mode
spec [1].

Its goal is to save agents' execution time by using daemon mode for
rootwrap and thus avoiding python interpreter startup time as well as sudo
overhead for each call. Preliminary benchmark shows 10x+ speedup of the
rootwrap interaction itself.

This spec has a number of supporters from the Neutron team (Carl and Miguel
gave it their +2 and +1) and has all code waiting for review [2], [3], [4].
The only thing that has been blocking its progress is Mark's -2 left when
oslo.rootwrap spec hasn't been merged yet. Now that's not the case and code
in oslo.rootwrap is steadily getting approved [5].

[1] https://review.openstack.org/93889
[2] https://review.openstack.org/82787
[3] https://review.openstack.org/84667
[4] https://review.openstack.org/107386
[5]
https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Carlos Garza
Yes we can discuss this during the meeting as well.  

On Jul 23, 2014, at 10:53 AM, Evgeny Fedoruk 
 wrote:

> Hi Carlos,
> 
> As I understand it, you are working on a common module for Barbican interactions.
> I will commit my code later today and would appreciate it if you and anybody
> else who is interested would review this change.
> There is one specific spot for the common Barbican interactions module API 
> integration.
> After the IRC meeting tomorrow, we can discuss the work items and decide who 
> is interested/available to do them.
> Does it make sense?
> 
> Thanks,
> Evg
> 
> -Original Message-
> From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
> Sent: Wednesday, July 23, 2014 6:15 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
> 
>Do you have any idea as to how we can split up the work?
> 
> On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk 
> wrote:
> 
>> Hi,
>> 
>> I'm working on TLS integration with loadbalancer v2 extension and db.
>> Basing on Brandon's  patches https://review.openstack.org/#/c/105609 , 
>> https://review.openstack.org/#/c/105331/  , 
>> https://review.openstack.org/#/c/105610/
>> I will abandon previous 2 patches for TLS which are 
>> https://review.openstack.org/#/c/74031/ and 
>> https://review.openstack.org/#/c/102837/ 
>> Managing to submit my change later today. It will include lbaas extension v2 
>> modification, lbaas db v2 modifications, alembic migration for schema 
>> changes and new tests in unit testing for lbaas db v2.
>> 
>> Thanks,
>> Evg
>> 
>> -Original Message-
>> From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
>> Sent: Wednesday, July 23, 2014 3:54 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
>> 
>>  Since it looks like the TLS blueprint was approved, I'm sure we're all 
>> eager to start coding, so how should we divide up the work on the source code? I 
>> have pull requests in pyopenssl 
>> "https://github.com/pyca/pyopenssl/pull/143", and a few one-liners in 
>> pyca/cryptography to expose the needed low-level calls, which I'm hoping will be 
>> added pretty soon so that PR 143's tests can pass. In case it doesn't, we will 
>> fall back to using the pyasn1_modules package, as it already has a means to 
>> fetch what we want at a lower level. 
>> I'm just hoping that we can split the work up so that we can collaborate 
>> together on this without over-serializing the work, where people become 
>> dependent on waiting for someone else to complete their work, or worse, one 
>> person ends up doing all the work.
>> 
>>  
>>  Carlos D. Garza
>> 
> 
> 
> 




Re: [openstack-dev] Mentor program?

2014-07-23 Thread Joshua Harlow
Awesome,

When I start to see emails on the ML asking whether anyone needs any help for 
XYZ ... (which is great, btw) it makes me feel like there should be a more 
appropriate avenue for those inspirational folks looking to get involved (a ML 
isn't really the best place for this kind of guidance and direction). 

And in general, mentoring will help all involved if we all do more of it :-)

Let me know if anything is needed that I can possibly help with to get more of 
it going.

-Josh

On Jul 23, 2014, at 2:44 PM, Jay Bryant  wrote:

> Great question Josh!
> 
> Have been doing a lot of mentoring within IBM for OpenStack and have now been 
> asked to formalize some of that work.  Not surprised there is an external 
> need as well.
> 
> Anne and Stefano: let me know if there is anything I can do to help.
> 
> Jay
> 
> Hi all,
> 
> I was reading over a IMHO insightful hacker news thread last night:
> 
> https://news.ycombinator.com/item?id=8068547
> 
> Labeled/titled: 'I made a patch for Mozilla, and you can do it too'
> 
> It made me wonder what kind of mentoring support are we as a community 
> offering to newbies (a random google search for 'openstack mentoring' shows 
> mentors for GSoC, mentors for interns, outreach for women... but no mention 
> of mentors as a way for everyone to get involved)?
> 
> Looking at the comments in that hacker news thread, the article itself it 
> seems like mentoring is stressed over and over as the way to get involved.
> 
> Has there been ongoing efforts to establish such a program (I know there is 
> training work that has been worked on, but that's not exactly the same).
> 
> Thoughts, comments...?
> 
> -Josh



Re: [openstack-dev] [nova] threading in nova (greenthreads, OS threads, etc.)

2014-07-23 Thread Joe Gordon
On Wed, Jul 23, 2014 at 9:41 AM, Chris Friesen 
wrote:

>
> Hi all,
>
> I was wondering if someone could point me to a doc describing the
> threading model for nova.
>
> I know that we use greenthreads to map multiple threads of execution onto
> a single native OS thread.  And the python GIL results in limitations as
> well.
>
> According to the description at "https://bugs.launchpad.net/
> tripleo/+bug/1203906" for nova-api we potentially fork off multiple
> instances because it's database-heavy and we don't want to serialize on the
> database.
>
> If that's the case, why do we only run one instance of nova-conductor on a
> single OS thread?


Nova-api and nova-conductor use the same logic to fork off multiple
workers, and we run with multiple conductor workers in the gate.

http://logs.openstack.org/62/107562/9/check/check-tempest-dsvm-full/50adcf5/logs/screen-n-cond.txt.gz#_2014-07-23_04_31_52_292


>
> And looking at nova-compute on a compute node with no instances running I
> see 22 OS threads.  Where do these come from?  Are these related to
> libvirt?  Or are they forked the way that nova-api is?
>
> Any pointers would be appreciated.
>
> Chris
>
>




[openstack-dev] [Neutron][Spec freeze exception] Rootwrap daemon mode support

2014-07-23 Thread Yuriy Taraday
Hello.

I'd like to propose making a spec freeze exception for rootwrap-daemon-mode
spec [1].

Its goal is to save agents' execution time by using daemon mode for
rootwrap and thus avoiding python interpreter startup time as well as sudo
overhead for each call. Preliminary benchmark shows 10x+ speedup of the
rootwrap interaction itself.
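The claimed speedup is easy to sanity-check with a toy model of the two modes
(this is not oslo.rootwrap code, just the underlying idea: per-call interpreter
startup vs. one long-lived helper process spoken to over a pipe):

```python
import subprocess
import sys
import time

# Hedged micro-benchmark of the underlying idea (not oslo.rootwrap code):
# mode A pays Python interpreter startup on every call; mode B keeps one
# long-lived helper process and sends it commands over a pipe.
N = 5

# Mode A: new interpreter per call.
t0 = time.time()
for _ in range(N):
    out = subprocess.check_output(
        [sys.executable, "-c", "print('ok')"]).decode().strip()
per_call = time.time() - t0

# Mode B: one persistent helper answering over stdin/stdout.
worker = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import sys\nfor line in sys.stdin:\n    print('ok')"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
t0 = time.time()
for _ in range(N):
    worker.stdin.write("run\n")
    worker.stdin.flush()
    out = worker.stdout.readline().strip()
daemon = time.time() - t0
worker.stdin.close()
worker.wait()

print(out, per_call > daemon)
```

Mode A is dominated by interpreter startup (tens of milliseconds per call),
which is the overhead the daemon mode removes before even counting sudo.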

This spec has a number of supporters from the Neutron team (Carl and Miguel
gave it their +2 and +1) and has all code waiting for review [2], [3], [4].
The only thing that has been blocking its progress is Mark's -2 left when
oslo.rootwrap spec hasn't been merged yet. Now that's not the case and code
in oslo.rootwrap is steadily getting approved [5].

[1] https://review.openstack.org/93889
[2] https://review.openstack.org/82787
[3] https://review.openstack.org/84667
[4] https://review.openstack.org/107386
[5]
https://review.openstack.org/#/q/project:openstack/oslo.rootwrap+topic:bp/rootwrap-daemon-mode,n,z

-- 

Kind regards, Yuriy.


Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

2014-07-23 Thread Alan Kavanagh
I find it really hard to comprehend the level of transparency here, or lack 
thereof. It seems to me that when we want to get features into a given release 
we are at the mercy of others, and while I do understand that the core team 
can't approve and review everything, we also cannot wait release after release 
for features that would be of importance for OpenStack in general. It's very 
discouraging for members of the community to have features which are important 
for them, but maybe not for others, demoted and pushed further out.

Also, having a core set of people vote on what is essential from one release 
to the next is not a very transparent or democratic way of working; it 
supports only those who want to guide the community one way. While I do see a 
need for ensuring priority, the setting of priority is, imho, again not 
transparent. And having folks comment really late on BPs that have been 
progressing, with people working hard to move them forward, only for them to 
be demoted at the last minute and moved to another track, is not a nice way to 
work in the community. Politics are a way of life, but if they are going to be 
used as the rule for OpenStack and its releases, and as a way for some to 
govern within the community, I find that really disappointing.

If we have more work being put on the table, then more core members would 
definitely go a long way toward helping with this; we can't use waiting for 
folks to review things as an excuse for not getting features landed in a given 
release.

I know this will strike a chord with some, but I see too much going on that 
makes me very disappointed, so I hope that by reaching out others will take 
note and help us improve this process. Perhaps this is something the OpenStack 
Board can take note of, jump in, and try to resolve.

Alan

-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: July-23-14 9:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] Specs approved for Juno-3 and exceptions

On Wed, Jul 23, 2014 at 7:28 AM, Salvatore Orlando  wrote:
> I'm sure it is not news to anyone that we have already approved too 
> many specifications for Juno-3. The PTL indeed made clear that "Low priority"
> blueprints are considered best effort.
>
> However, this already leaves us with 23 medium to high specifications 
> to merge in Juno-3. This is already quite close to what the core team 
> can handle, considering history from previous releases and the fact 
> that there are 3 very big items in the list (new LB APIs, distributed 
> router, and group policies).
>
> I've already counted at least 7 requests for spec freeze exceptions on 
> the mailing list, and it is likely more will come. In order to limit 
> oversubscribing, I would suggest excluding freeze exception requests 
> for items which do not:
> - target stability and scalability of the Neutron FOSS framework
> - have a "community" interest. By that I do not mean necessarily 
> targeting the FOSS bits, but having support and interest from a number 
> of teams of Neutron contributors.
>
> I don't want to be evil to contributors, but I think it is better to 
> be clear now rather than arriving at the end of Juno-3 and having to 
> tell contributors that unfortunately we were not able to give their 
> patches enough review cycles.
>
Thanks for sending this out Salvatore. We are way oversubscribed, and at this 
point I'm in agreement on not granting any new exceptions which do not fall 
under the above guidelines. Given how much is already packed in there, this 
makes the most sense.

Thanks,
Kyle

> Salvatore
>
>
>




Re: [openstack-dev] [Trove] Guest prepare call polling mechanism issue

2014-07-23 Thread Denis Makogon
On Thu, Jul 24, 2014 at 1:01 AM, Nikhil Manchanda 
wrote:

>
> Tim Simpson writes:
>
> > To summarize, this is a conversation about the following LaunchPad
> > bug: https://launchpad.net/bugs/1325512
> > and Gerrit review: https://review.openstack.org/#/c/97194/6
> >
> > You are saying the function "_service_is_active" in addition to
> > polling the datastore service status also polls the status of the Nova
> > resource. At first I thought this wasn't the case, however looking at
> > your pull request I was surprised to see that line 320
> > (https://review.openstack.org/#/c/97194/6/trove/taskmanager/models.py)
> > polls Nova using the "get" method (which I wish was called "refresh"
> > as to me it sounds like a lazy-loader or something despite making a
> > full GET request each time).  So moving this polling out of there into
> > the two respective "create_server" methods as you have done is not
> > only going to be useful for Heat and avoid the issue of calling Nova
> > 99 times you describe but it will actually help operations teams to
> > see more clearly that the issue was with a server that didn't
> > provision. We actually had an issue in Staging the other day that took
> > us forever to figure out because the server wasn't provisioning, but
> > before anything checked that it was ACTIVE the DNS code detected the
> > server had no ip address (never mind it was in a FAILED state) so the
> > logs surfaced this as a DNS error. This change should help us avoid
> > such issues.
> >
>
> Thanks for bringing this up, Tim / Denis.
>
> As Tim mentions, it does look like the '_service_is_active' call in
> the taskmanager also polls Nova to check whether the instance is in
> ERROR, causing some unnecessary, extra polling while figuring out the
> state of the Trove instance.
>
> Given this, it does seem reasonable to split up the polling into two
> separate methods, in a manner similar to what [1] is trying to
> accomplish. However, [1] does seem a bit rough around the edges, and
> needs a bit of cleaning up -- and I've commented on the review to this
> effect.
>
>
Of course, all comments are reasonable. Will send patchset soon.

Thanks,
Denis


> [1] https://review.openstack.org/#/c/97194
>
> Hope this helps,
>
> Thanks,
> Nikhil
>
> >
> > [...]
>
>


Re: [openstack-dev] [nova][all] Old review expiration

2014-07-23 Thread Jeremy Stanley
On 2014-07-14 17:05:32 +0100 (+0100), Daniel P. Berrange wrote:
> Indeed, I don't recall anyone telling Nova cores developers that
> we should be manually "expiring" patches, so I've not tried to
> expire any myself.

I think the point is not so much to "expire" patches since
age/staleness is only a moderately useful proxy for identifying
whether a change is still useful. Instead core reviewers can set a
change to Workflow -1 (work in progress) if it looks like it needs
more work to get back into a reasonable shape, or abandon it if it
no longer fits the direction of the project at all. This is a human
judgement call, and much more helpful than just culling changes
based on age alone.
-- 
Jeremy Stanley



Re: [openstack-dev] [Trove] Guest prepare call polling mechanism issue

2014-07-23 Thread Nikhil Manchanda

Tim Simpson writes:

> To summarize, this is a conversation about the following LaunchPad
> bug: https://launchpad.net/bugs/1325512
> and Gerrit review: https://review.openstack.org/#/c/97194/6
>
> You are saying the function "_service_is_active" in addition to
> polling the datastore service status also polls the status of the Nova
> resource. At first I thought this wasn't the case, however looking at
> your pull request I was surprised to see that line 320
> (https://review.openstack.org/#/c/97194/6/trove/taskmanager/models.py)
> polls Nova using the "get" method (which I wish was called "refresh"
> as to me it sounds like a lazy-loader or something despite making a
> full GET request each time).  So moving this polling out of there into
> the two respective "create_server" methods as you have done is not
> only going to be useful for Heat and avoid the issue of calling Nova
> 99 times you describe but it will actually help operations teams to
> see more clearly that the issue was with a server that didn't
> provision. We actually had an issue in Staging the other day that took
> us forever to figure out because the server wasn't provisioning, but
> before anything checked that it was ACTIVE the DNS code detected the
> server had no ip address (never mind it was in a FAILED state) so the
> logs surfaced this as a DNS error. This change should help us avoid
> such issues.
>

Thanks for bringing this up, Tim / Denis.

As Tim mentions, it does look like the '_service_is_active' call in
the taskmanager also polls Nova to check whether the instance is in
ERROR, causing some unnecessary, extra polling while figuring out the
state of the Trove instance.

Given this, it does seem reasonable to split up the polling into two
separate methods, in a manner similar to what [1] is trying to
accomplish. However, [1] does seem a bit rough around the edges, and
needs a bit of cleaning up -- and I've commented on the review to this
effect.
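Split into two separate methods, the polling could look roughly like the following sketch. Injected callables stand in for the Nova and guest status lookups, and `poll_until` here is only a simplified stand-in for Trove's `trove.common.utils.poll_until`, so treat all names as illustrative:

```python
import time

def poll_until(predicate, sleep_time=0.01, time_out=5.0):
    # simplified stand-in for trove.common.utils.poll_until
    deadline = time.monotonic() + time_out
    while time.monotonic() < deadline:
        if predicate():
            return
        time.sleep(sleep_time)
    raise TimeoutError("condition not met within %s seconds" % time_out)

def wait_for_server_active(get_server_status):
    """Would live in create_server(): poll only Nova, and fail fast
    if the server goes to ERROR during provisioning."""
    def check():
        status = get_server_status()
        if status == "ERROR":
            raise RuntimeError("server hit ERROR while provisioning")
        return status == "ACTIVE"
    poll_until(check)

def service_is_active(get_service_status):
    """_service_is_active reduced to its actual job: polling only the
    datastore service status, with no extra calls to Nova."""
    poll_until(lambda: get_service_status() == "running")
```

With this shape, a failed Nova boot surfaces as a provisioning error rather than as some later, misleading symptom (like the DNS error Tim describes).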

[1] https://review.openstack.org/#/c/97194

Hope this helps,

Thanks,
Nikhil

>
> [...]



Re: [openstack-dev] [Murano] [Glance] Image tagging

2014-07-23 Thread Ruslan Kamaldinov
I was going to refer to Graffiti as a longer term plan for image
tagging. This initiative seems to be like a really good fit for our
image tagging use-cases.
For a short-term solutions proposed by Steve I have a couple of comments:

> 1)  Store allowed tags in the database, and allow administrators to add
> to that list. Ordinary users would likely not be able to create tags, though
> they could use pre-defined ones for images they owned.
In some cases users might upload their own packages and they would
likely need to mark some images as compatible with those specific
packages. But I think there is a solution. Each time a new package is
uploaded, Murano could create a tag with the same name (or fqn). That
could help users to tag package-specific images.
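As a sketch of that short-term idea (all names here are hypothetical; Murano's real package and tagging APIs differ):

```python
class TagRegistry:
    """Option 1 from the thread: a stored list of allowed tags that
    administrators manage, extended automatically on package upload."""

    def __init__(self, admin_tags=()):
        self._allowed = set(admin_tags)

    def add_admin_tag(self, tag):
        # administrators can grow the allowed list directly
        self._allowed.add(tag)

    def on_package_uploaded(self, package_fqn):
        # create a tag named after the package (or its fqn) so users can
        # mark their own images as compatible with it
        self._allowed.add(package_fqn)

    def is_allowed(self, tag):
        return tag in self._allowed
```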

> 2)  Have some public tags, but also allow user-specified tags for
> private packages. I think this leads to all sorts of tricky edge cases
Agree, edge cases will bring a lot of unnecessary complexity.

> 3)  Allow freeform tags (i.e. don’t provide any hints). Since there’s no
> formal link between the tag that a package looks for and the tags currently
> defined in code, this wouldn’t make anything more susceptible to
> inaccuracies

In general, I tend to agree that option 1 (with a slight modification)
is a good fit for a short-term solution.


Thanks,
Ruslan



[openstack-dev] [Neutron] [Spec freeze exception] - Big Switch Tenant Name Tracking

2014-07-23 Thread Kevin Benton
Hello,

I would like to propose a spec freeze exception for the tenant name
handling for the Big Switch plugin defined in
https://review.openstack.org/#/c/103268/.

The code is isolated to a third party plugin and the implementation was
already completed and tested (https://review.openstack.org/#/c/103269/)
before the deadline.

It missed the deadline because there is some concern around tracking tenant
names in a DB, but once that's resolved it should be simple to merge.
Ignoring unit tests, comments and the auto-generated DB migration script,
the change affects about 35 lines of code.


Cheers
-- 
Kevin Benton


Re: [openstack-dev] Mentor program?

2014-07-23 Thread Jay Bryant
Great question Josh!

Have been doing a lot of mentoring within IBM for OpenStack and have now
been asked to formalize some of that work.  Not surprised there is an
external need as well.

Anne and Stefano.  Let me know if there is anything I can do to help.

Jay
Hi all,

I was reading over an IMHO insightful hacker news thread last night:

https://news.ycombinator.com/item?id=8068547

Labeled/titled: 'I made a patch for Mozilla, and you can do it too'

It made me wonder what kind of mentoring support we as a community are
offering to newbies (a random google search for 'openstack mentoring' shows
mentors for GSoC, mentors for interns, outreach for women... but no mention
of mentors as a way for everyone to get involved)?

Looking at the comments in that hacker news thread and the article itself, it
seems like mentoring is stressed over and over as the way to get involved.

Have there been ongoing efforts to establish such a program? (I know there is
training work that has been worked on, but that's not exactly the same.)

Thoughts, comments...?

-Josh


[openstack-dev] Thoughts on the patch test failure rate and moving forward

2014-07-23 Thread James E. Blair
OpenStack has a substantial CI system that is core to its development
process.  The goals of the system are to facilitate merging good code,
prevent regressions, and ensure that there is at least one configuration
of upstream OpenStack that we know works as a whole.  The "project
gating" technique that we use is effective at preventing many kinds of
regressions from landing, however more subtle, non-deterministic bugs
can still get through, and these are the bugs that are currently
plaguing developers with seemingly random test failures.

Most of these bugs are not failures of the test system; they are real
bugs.  Many of them have even been in OpenStack for a long time, but are
only becoming visible now due to improvements in our tests.  That's not
much help to developers whose patches are being hit with negative test
results from unrelated failures.  We need to find a way to address the
non-deterministic bugs that are lurking in OpenStack without making it
easier for new bugs to creep in.

The CI system and project infrastructure are not static.  They have
evolved with the project to get to where they are today, and the
challenge now is to continue to evolve them to address the problems
we're seeing now.  The QA and Infrastructure teams recently hosted a
sprint where we discussed some of these issues in depth.  This post from
Sean Dague goes into a bit of the background: [1].  The rest of this
email outlines the medium and long-term changes we would like to make to
address these problems.

[1] https://dague.net/2014/07/22/openstack-failures/

==Things we're already doing==

The elastic-recheck tool[2] is used to identify "random" failures in
test runs.  It tries to match failures to known bugs using signatures
created from log messages.  It helps developers prioritize bugs by how
frequently they manifest as test failures.  It also collects information
on unclassified errors -- we can see how many (and which) test runs
failed for an unknown reason and our overall progress on finding
fingerprints for random failures.

[2] http://status.openstack.org/elastic-recheck/
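In spirit, the fingerprint matching works like the sketch below; the real elastic-recheck stores per-bug Elasticsearch queries rather than local regexes, and the bug numbers here are made up for illustration:

```python
import re

# hypothetical fingerprints: bug id -> signature found in console logs
FINGERPRINTS = {
    "bug/1234567": re.compile(r"Timed out waiting for a reply to message ID"),
    "bug/7654321": re.compile(r"Connection to neutron failed"),
}

def classify_failure(log_text):
    """Return the known bugs whose signature appears in the log; an
    empty list means the failure is still unclassified."""
    return sorted(bug for bug, sig in FINGERPRINTS.items()
                  if sig.search(log_text))
```

Classified failures can then be counted per bug, which is how the tool helps prioritize the most frequently occurring gate bugs.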

We added a feature to Zuul that lets us manually "promote" changes to
the top of the Gate pipeline.  When the QA team identifies a change that
fixes a bug that is affecting overall gate stability, we can move that
change to the top of the queue so that it may merge more quickly.

We added the clean check facility in reaction to the January gate
breakdown. While it does mean that any individual patch might see more tests
run on it, it's now largely kept the gate queue at a countable number of
hours, instead of regularly growing to more than a work day in
length. It also means that a developer can Approve a code merge before
tests have returned, and not ruin it for everyone else if there turned
out to be a bug that the tests could catch.

==Future changes==

===Communication===
We used to be better at communicating about the CI system.  As it and
the project grew, we incrementally added to our institutional knowledge,
but we haven't been good about maintaining that information in a form
that new or existing contributors can consume to understand what's going
on and why.

We have started on a major effort in that direction that we call the
"infra-manual" project -- it's designed to be a comprehensive "user
manual" for the project infrastructure, including the CI process.  Even
before that project is complete, we will write a document that
summarizes the CI system and ensure it is included in new developer
documentation and linked to from test results.

There are also a number of ways for people to get involved in the CI
system, whether focused on Infrastructure or QA, but it is not always
clear how to do so.  We will improve our documentation to highlight how
to contribute.

===Fixing Faster===

We introduce bugs to OpenStack at some constant rate, which piles up
over time. Our systems currently treat all changes as equally risky and
important to the health of the system, which makes landing code changes
to fix key bugs slow when we're at a high reset rate. We've got a manual
process of promoting changes today to get around this, but that's
actually quite costly in people time, and takes getting all the right
people together at once to promote changes. You can see a number of the
changes we promoted during the gate storm in June [3], and it was no
small number of fixes to get us back to a reasonably passing gate. We
think that optimizing this system will help us land fixes to critical
bugs faster.

[3] https://etherpad.openstack.org/p/gatetriage-june2014

The basic idea is to use the data from elastic recheck to identify that
a patch is fixing a critical gate related bug. When one of these is
found in the queues it will be given higher priority, including bubbling
up to the top of the gate queue automatically. The manual promote
process should no longer be needed; instead, changes fixing elastic-recheck
tracked issues will be promoted automatically.
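The promotion logic being described could be sketched as follows (data shapes invented for illustration; Zuul's real queue handling is considerably more involved):

```python
def prioritize(queue, critical_bugs):
    """Reorder the gate queue so changes that fix tracked critical gate
    bugs bubble to the top, preserving relative order within each group."""
    fixes = [c for c in queue if c.get("closes_bug") in critical_bugs]
    others = [c for c in queue if c.get("closes_bug") not in critical_bugs]
    return fixes + others
```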

Re: [openstack-dev] [Keystone] Feasibility of adding global restrictions at trust creation time

2014-07-23 Thread Morgan Fainberg
On Wednesday, July 23, 2014, Russell Bryant  wrote:

> On 07/22/2014 11:00 PM, Nathan Kinder wrote:
> >
> >
> > On 07/22/2014 06:55 PM, Steven Hardy wrote:
> >> On Tue, Jul 22, 2014 at 05:20:44PM -0700, Nathan Kinder wrote:
> >>> Hi,
> >>>
> >>> I've had a few discussions recently related to Keystone trusts with
> >>> regards to imposing restrictions on trusts at a deployment level.
> >>> Currently, the creator of a trust is able to specify the following
> >>> restrictions on the trust at creation time:
> >>>
> >>>   - an expiration time for the trust
> >>>   - the number of times that the trust can be used to issue trust
> tokens
> >>>
> >>> If an expiration time (expires_at) is not specified by the creator of
> >>> the trust, then it never expires.  Similarly, if the number of uses
> >>> (remaining_uses) is not specified by the creator of the trust, it has
> an
> >>> unlimited number of uses.  The important thing to note is that the
> >>> restrictions are entirely in the control of the trust creator.
> >>>
> >>> There may be cases where a particular deployment wants to specify
> global
> >>> maximum values for these restrictions to prevent a trust from being
> >>> granted indefinitely.  For example, Keystone configuration could
> specify
> >>> that a trust can't be created that has >100 remaining uses or is valid
> >>> for more than 6 months.  This would certainly cause problems for some
> >>> deployments that may be relying on indefinite trusts, but it is also a
> >>> nice security control for deployments that don't want to allow
> something
> >>> so open-ended.
> >>>
> >>> I'm wondering about the feasibility of this sort of change,
> particularly
> >>> from an API compatibility perspective.  An attempt to create a trust
> >>> without an expires_at value should still be considered as an attempt to
> >>> create a trust that never expires, but Keystone could return a '403
> >>> Forbidden' response if this request violates the maximum specified in
> >>> configuration (this would be similar for remaining_uses).  The
> semantics
> >>> of the API remain the same, but the response has the potential to be
> >>> rejected for new reasons.  Is this considered as an API change, or
> would
> >>> this be considered to be OK to implement in the v3 API?  The existing
> >>> API docs [1][2] don't really go to this level of detail with regards to
> >>> when exactly a 403 will be returned for trust creation, though I know
> of
> >>> specific cases where this response is returned for the create-trust
> request.
> >>
> >> FWIW if you start enforcing either of these restrictions by default, you
> >> will break heat, and every other delegation-to-a-service use case I'm
> aware
> >> of, where you simply don't have any idea how long the lifetime of the
> thing
> >> created by the service (e.g heat stack, Solum application definition,
> >> Mistral workflow or whatever) will be.
> >>
> >> So while I can understand the desire to make this configurable for some
> >> environments, please leave the defaults as the current behavior and be
> >> aware that adding these kind of restrictions won't work for many
> existing
> >> trusts use-cases.
> >
> > I fully agree.  In no way should the default behavior change.
> >
> >>
> >> Maybe the solution would be some sort of policy defined exception to
> these
> >> limits?  E.g when delegating to a user in the service project, they do
> not
> >> apply?
> >
> > Role-based limits seem to be a natural progression of the idea, though I
> > didn't want to throw that out there from the get-go.
>
> I was concerned about this idea from an API compatibility perspective,
> but I think the way you have laid it out here makes sense.  Like both
> you and Steven said, the behavior of the API when the parameter is not
> specified should *not* change.  However, allowing deployment-specific
> policy that would reject the request seems fine.
>
> Thanks,
>
> --
> Russell Bryant
>
>
This all seems quite reasonable. And as long as the default behavior is
reasonable (doesn't change), I see this as quite doable, and it should not
have any negative impact on the API.

I can see a benefit to having this type of enforcement in some deployments.

--Morgan
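A sketch of what such deployment-level enforcement might look like (option names and shapes are invented; leaving both limits unset preserves today's unlimited behavior, as the thread insists it must):

```python
from datetime import datetime, timezone

class Forbidden(Exception):
    """Would be mapped to a 403 response by the API layer."""

def check_trust_limits(expires_at=None, remaining_uses=None,
                       max_lifetime=None, max_uses=None, now=None):
    """Reject a trust-create request that exceeds deployment maximums.

    max_lifetime (a timedelta) and max_uses are hypothetical config
    options; None means "no cap", i.e. the current default behavior."""
    now = now or datetime.now(timezone.utc)
    if max_lifetime is not None:
        # a trust that never expires violates any finite lifetime cap
        if expires_at is None or expires_at - now > max_lifetime:
            raise Forbidden("trust lifetime exceeds deployment maximum")
    if max_uses is not None:
        # likewise, unlimited remaining_uses violates any finite use cap
        if remaining_uses is None or remaining_uses > max_uses:
            raise Forbidden("remaining_uses exceeds deployment maximum")
```

A policy-based exception (e.g. for delegation to service users, as Steven suggests) would then simply skip this check for the matching rules.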


Re: [openstack-dev] [Trove] Guest prepare call polling mechanism issue

2014-07-23 Thread Denis Makogon
On Wed, Jul 23, 2014 at 7:33 PM, Tim Simpson 
wrote:

>  To summarize, this is a conversation about the following LaunchPad bug:
> https://launchpad.net/bugs/1325512
> and Gerrit review: https://review.openstack.org/#/c/97194/6
>
>  You are saying the function "_service_is_active" in addition to polling
> the datastore service status also polls the status of the Nova resource. At
> first I thought this wasn't the case, however looking at your pull request
> I was surprised to see that line 320 (
> https://review.openstack.org/#/c/97194/6/trove/taskmanager/models.py)
> polls Nova using the "get" method (which I wish was called "refresh" as to
> me it sounds like a lazy-loader or something despite making a full GET
> request each time).
> So moving this polling out of there into the two respective
> "create_server" methods as you have done is not only going to be useful for
> Heat and avoid the issue of calling Nova 99 times you describe but it will
> actually help operations teams to see more clearly that the issue was with
> a server that didn't provision. We actually had an issue in Staging the
> other day that took us forever to figure out because the
>

Agreed, I guess I would need to update the bug report to add more info about
the given issue, but I'm really glad to hear that the proposed change would be
useful. And I agree that, from the operations/support side, it would be useful
to track provisioning issues that have nothing in common with Trove but are
tied to infrastructure.


> server wasn't provisioning, but before anything checked that it was ACTIVE
> the DNS code detected the server had no ip address (never mind it was in a
> FAILED state) so the logs surfaced this as a DNS error. This change should
> help us avoid such issues.
>
>  Thanks,
>
>  Tim
>
>
>  --
> *From:* Denis Makogon [dmako...@mirantis.com]
> *Sent:* Wednesday, July 23, 2014 7:30 AM
> *To:* OpenStack Development Mailing List
> *Subject:* [openstack-dev] [Trove] Guest prepare call polling mechanism
> issue
>
>Hello, Stackers.
>
>
>  I’d like to discuss guestagent prepare call polling mechanism issue (see
> [1]).
>
>  Let me first describe why this is actually an issue and why it should be
> fixed. Those of you who are familiar with Trove know that Trove can
> provision instances through the Nova API and the Heat API (see [2] and [3]).
>
>
>
> What’s the difference between these two ways (in general)? The answer
> is simple:
>
> - The Heat-based provisioning method has a polling mechanism that verifies
> that stack provisioning completed in a successful state (see [4]), which
> means that all stack resources are in the ACTIVE state.
>
> - The Nova-based provisioning method doesn’t do any polling (which is wrong,
> since the instance can’t fail as fast as possible, because the
> Trove-taskmanager service doesn’t verify that the launched server has
> reached the ACTIVE state). That’s issue #1: the compute instance state is
> unknown, whereas Heat-delivered resources are already known to be ACTIVE.
>
>  Once one method, [2] or [3], finishes, the taskmanager tries to prepare
> data for the guest (see [5]) and then tries to send the prepare call to the
> guest (see [6]). Here comes issue #2: the polling mechanism makes at least
> 100 API calls to Nova to determine the compute instance status.
>
> Also, the taskmanager makes almost the same number of calls to the Trove
> backend to discover the guest status, which is totally normal.
>
>  So, here comes the question: why should I call Nova 99 times for
> the same value, if the value fetched the first time was completely
> acceptable?
>
>
>
> There’s only one way to fix it. Since Heat-based provisioning
> delivers an instance with a status-validation procedure, the same thing
> should be done for Nova-based provisioning: we should extract compute
> instance status polling from the guest prepare polling mechanism,
> integrate it into [2], and leave only guest status discovery in the guest
> prepare polling mechanism.
>
>
>
>
>  Benefits? The proposed fix will give the ability to fail fast on
> corrupted instances, and it will reduce the amount of redundant Nova API
> calls made while attempting to discover the guest status.
>
>
>  Proposed fix for this issue - [7].
>
>  [1] - https://launchpad.net/bugs/1325512
>
> [2] -
> https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L198-L215
>
> [3] -
> https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L190-L197
>
> [4] -
> https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L420-L429
>
> [5] -
> https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L217-L256
>
> [6] -
> https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L254-L266
>
> [7] - https://review.openstack.org/#/c/97194/
>
>
>  Thoughts?
>
>  Best regards,
>
> Denis Makogon
>

Re: [openstack-dev] [Trove] Neutron integration test job

2014-07-23 Thread Denis Makogon
On Wed, Jul 23, 2014 at 8:12 PM, Kyle Mestery  wrote:

> On Wed, Jul 23, 2014 at 7:28 AM, Denis Makogon 
> wrote:
> > Hello, Stackers.
> >
> >
> >
> > For those of you who are interested in Trove: just letting you know that,
> > for now, Trove can work with Neutron (hooray!!) instead of Nova-network;
> > see [1] and [2]. It’s a huge step forward on the road to advanced
> > OpenStack integration.
> >
> > But let’s admit it’s not the end; we should deal with:
> >
> > Add Neutron-based configuration for DevStack to let folks try it (see
> [3]).
> >
> I have some comments on this patch which I've posted in the review.


Thanks for keeping an eye on it. So, you've suggested using
PRIVATE_NETWORK_NAME and PRIVATE_SUBNET_NAME.

Correct me if I'm wrong: according to [1] and [2], when Neutron gets
deployed it uses a pre-defined network name (defined at [1]) and
sub-network name (defined at [2]).

If that's it, I'm totally fine with updating the patch set with the
suggested changes.

[1]
https://github.com/openstack-dev/devstack/blob/89a8a15ebe31f4b06e40ecadd4918e687087874c/stackrc#L418-L420
[2]
https://github.com/openstack-dev/devstack/blob/1ecd43da5434b8ef7dafb49b9b30c9c1b18afffe/lib/neutron



> > Implementing/providing a new type of testing job that will run all Trove
> > tests with Neutron enabled on a regular basis, to verify that all our
> > networking preparations for the instance are fine.
> >
> >
> > The last thing is the most interesting, and I’d like to discuss it with
> > all of you, folks.
> > So, I’ve written an initial job template taking into account the specific
> > configuration required by DevStack and trove-integration (see [4]), and
> > I’d like to receive all possible feedback as soon as possible.
> >
> This is great! I'd like to see this work land as well, thanks for
> taking this on. I'll add this to my backlog of items to review and
> provide some feedback as well.
>
Sounds amazing, thanks for keeping an eye on it. The most interesting part
for me is the job template; I'd like to hear feedback on it as well.

P.S.: sorry about putting the job template in a gist instead of sending it
for review, but I thought it would be good enough to receive feedback.

 Best regards,
Denis Makogon

> Thanks,
> Kyle
>
> >
> >
> > [1] - Trove.
> >
> https://github.com/openstack/trove/commit/c68fef2b7a61f297b9fe7764dd430eefd4d4a767
> >
> > [2] - Trove integration.
> >
> https://github.com/openstack/trove-integration/commit/9f42f5c9b1a0d8844b3e527bcf2eb9474485d23a
> >
> > [3] - DevStack patchset. https://review.openstack.org/108966
> >
> > [4] - POC.
> https://gist.github.com/denismakogon/76d9bd3181781097c39b
> >
> >
> >
> > Best regards,
> >
> > Denis Makogon
> >
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Feasibility of adding global restrictions at trust creation time

2014-07-23 Thread Russell Bryant
On 07/22/2014 11:00 PM, Nathan Kinder wrote:
> 
> 
> On 07/22/2014 06:55 PM, Steven Hardy wrote:
>> On Tue, Jul 22, 2014 at 05:20:44PM -0700, Nathan Kinder wrote:
>>> Hi,
>>>
>>> I've had a few discussions recently related to Keystone trusts with
>>> regards to imposing restrictions on trusts at a deployment level.
>>> Currently, the creator of a trust is able to specify the following
>>> restrictions on the trust at creation time:
>>>
>>>   - an expiration time for the trust
>>>   - the number of times that the trust can be used to issue trust tokens
>>>
>>> If an expiration time (expires_at) is not specified by the creator of
>>> the trust, then it never expires.  Similarly, if the number of uses
>>> (remaining_uses) is not specified by the creator of the trust, it has an
>>> unlimited number of uses.  The important thing to note is that the
>>> restrictions are entirely in the control of the trust creator.
>>>
>>> There may be cases where a particular deployment wants to specify global
>>> maximum values for these restrictions to prevent a trust from being
>>> granted indefinitely.  For example, Keystone configuration could specify
>>> that a trust can't be created that has >100 remaining uses or is valid
>>> for more than 6 months.  This would certainly cause problems for some
>>> deployments that may be relying on indefinite trusts, but it is also a
>>> nice security control for deployments that don't want to allow something
>>> so open-ended.
>>>
>>> I'm wondering about the feasibility of this sort of change, particularly
>>> from an API compatibility perspective.  An attempt to create a trust
>>> without an expires_at value should still be considered as an attempt to
>>> create a trust that never expires, but Keystone could return a '403
>>> Forbidden' response if this request violates the maximum specified in
>>> configuration (this would be similar for remaining_uses).  The semantics
>>> of the API remain the same, but the response has the potential to be
>>> rejected for new reasons.  Is this considered as an API change, or would
>>> this be considered to be OK to implement in the v3 API?  The existing
>>> API docs [1][2] don't really go to this level of detail with regards to
>>> when exactly a 403 will be returned for trust creation, though I know of
>>> specific cases where this response is returned for the create-trust request.
>>
>> FWIW if you start enforcing either of these restrictions by default, you
>> will break heat, and every other delegation-to-a-service use case I'm aware
>> of, where you simply don't have any idea how long the lifetime of the thing
>> created by the service (e.g heat stack, Solum application definition,
>> Mistral workflow or whatever) will be.
>>
>> So while I can understand the desire to make this configurable for some
>> environments, please leave the defaults as the current behavior and be
>> aware that adding these kind of restrictions won't work for many existing
>> trusts use-cases.
> 
> I fully agree.  In no way should the default behavior change.
> 
>>
>> Maybe the solution would be some sort of policy defined exception to these
>> limits?  E.g when delegating to a user in the service project, they do not
>> apply?
> 
> Role-based limits seem to be a natural progression of the idea, though I
> didn't want to throw that out there from the get-go.

I was concerned about this idea from an API compatibility perspective,
but I think the way you have laid it out here makes sense.  Like both
you and Steven said, the behavior of the API when the parameter is not
specified should *not* change.  However, allowing deployment-specific
policy that would reject the request seems fine.
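For what it's worth, the enforcement Nathan describes could be sketched as a small pre-create check. This is a hypothetical illustration only — the option names, the Forbidden type, and the module-level caps are stand-ins, not actual Keystone code; setting a cap to None preserves today's unlimited behavior:

```python
import datetime

# Hypothetical deployment-level caps; set either to None to keep
# today's unlimited behavior.
MAX_REMAINING_USES = 100                      # e.g. a [trust] max_remaining_uses option
MAX_LIFETIME = datetime.timedelta(days=180)   # e.g. a [trust] max_lifetime option


class Forbidden(Exception):
    """Maps to a 403 Forbidden response."""


def validate_trust(expires_at=None, remaining_uses=None, now=None):
    now = now or datetime.datetime.utcnow()
    # Omitting a value still *requests* an unlimited lifetime or number
    # of uses, exactly as today; a configured cap simply rejects that
    # request, so the API semantics are unchanged for deployments that
    # leave the caps unset.
    if MAX_REMAINING_USES is not None:
        if remaining_uses is None or remaining_uses > MAX_REMAINING_USES:
            raise Forbidden("remaining_uses exceeds deployment maximum")
    if MAX_LIFETIME is not None:
        if expires_at is None or expires_at - now > MAX_LIFETIME:
            raise Forbidden("trust lifetime exceeds deployment maximum")
```

A policy-based exception for service users, as Steven suggests, would slot in as an extra check before the caps are applied.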

Thanks,

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][oslo.messaging] Adding a new RPC backend for testing AMQP 1.0

2014-07-23 Thread Ken Giusti
Hi,

I'd like some help with $SUBJECT.  I've got a WIP patch up for review:

https://review.openstack.org/#/c/109118/

My goal is to have an RPC backend that I can use to test the new AMQP
1.0 oslo.messaging driver against.  I suspect this new backend would
initially only be used by tests specifically written against the
driver, but I'm hoping for wider adoption as the driver stabilizes and
AMQP 1.0 adoption increases.

As I said, this is only a WIP and doesn't completely work yet (though
it shouldn't break support for the existing backends).  I'm just
looking for some early feedback on whether or not this is the correct
approach.

thanks!

-- 
Ken Giusti  (kgiu...@gmail.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Jay Pipes

On 07/23/2014 03:04 PM, Dan Smith wrote:

FWIW, I do actually agree with not exposing plugin points to things
that are not stable APIs and if they didn't already exist, I'd not
approve adding them. I'd actually go further and say not even the
virt driver API should be a plugin point, since we arbitrarily change
it during development any time we need to. The latter is not a serious
or practical view right now though given our out of tree Docker/Ironic
drivers. I'm just concerned that we've had these various extension
points exposed for a long time and we've not clearly articulated
that they are liable to be killed off (besides marking vif_driver
as deprecated)


Yep, I think we agree. I think that as a project we've identified
exposing plug points that aren't stable (or intended to be replaceable)
as a bad thing, and thus we should be iterating on removing them.
Especially if we're generous with our deprecate-before-remove rules,
then I think that we're not likely to bite anyone suddenly with
something they're shipping while working it upstream in parallel. I
*really* thought we had called this one out on the ReleaseNotes, but
apparently that didn't happen (probably because we decided to throw in
those helper classes to avoid breaking configs). Going forward, marking
it deprecated in the code for a cycle, noting it on the release notes,
and then removing it the next cycle seems like plenty of warning.


The following are "plugin" points that I feel should be scrapped (sorry, 
I mean deprecated over a release cycle), as they really are not things 
that anyone actually provides extensions for and, IMO, they just add 
needless code abstraction, noise and indirection:


All of these are pointless:

* metadata_manager=nova.api.manager.MetadataManager
* compute_manager=nova.compute.manager.ComputeManager
* console_manager=nova.console.manager.ConsoleProxyManager
* consoleauth_manager=nova.consoleauth.manager.ConsoleAuthManager
* cert_manager=nova.cert.manager.CertManager
* scheduler_manager=nova.scheduler.manager.SchedulerManager
* db_driver=nova.db (pretty sure that ship has long since sailed)
* network_api_class=nova.network.api.API
* volume_api_class=nova.volume.cinder.API
* manager=nova.cells.manager.CellsManager
* manager=nova.conductor.manager.ConductorManager

Then there are the funnies:

This should not be a manager class at all, but rather a selector that 
switches the behaviour of the underlying network implementation -- i.e. 
it should not be swapped out by custom code but instead just have a 
switch option to indicate the type of network model in use:


* network_manager=nova.network.manager.VlanManager

Same goes for this one, which should just be selected based on the 
network model:


* l3_lib=nova.network.l3.LinuxNetL3
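The selector pattern being argued for here can be sketched roughly as follows. This is illustrative only — the stub manager classes and the loader are hypothetical stand-ins, not Nova code; the point is that a constrained choice maps to in-tree implementations, so operators pick a network model rather than plugging in arbitrary classes:

```python
# Illustrative sketch of a "selector" replacing a class-path plug point.
# The stub manager classes below are hypothetical stand-ins for the
# in-tree implementations.

class FlatManager(object):
    network_model = "flat"


class VlanManager(object):
    network_model = "vlan"


# The allowed values form a closed set, unlike a dotted class path,
# which accepts any importable custom code.
_NETWORK_MODELS = {
    "flat": FlatManager,
    "vlan": VlanManager,
}


def load_network_manager(model):
    try:
        return _NETWORK_MODELS[model]()
    except KeyError:
        raise ValueError("unknown network model: %r" % (model,))
```

An unsupported value fails loudly at startup instead of importing code that was never meant to be a stable extension point.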

These ones should similarly be selected based on the binding_type, not 
provided as a plugin point (as Ian Wells alluded to):


* vif_driver=nova.virt.libvirt.vif.LibvirtGenericVIFDriver
* vif_driver=nova.virt.xenapi.vif.XenAPIBridgeDriver

These config options should be renamed to use driver, not manager:

* floating_ip_dns_manager=nova.network.noop_dns_driver.NoopDNSDriver
* instance_dns_manager=nova.network.noop_dns_driver.NoopDNSDriver
* scheduler_host_manager=nova.scheduler.host_manager.HostManager
* power_manager=nova.virt.baremetal.ipmi.IPMI

This config option should be renamed to use driver, not api_class:

* api_class=nova.keymgr.conf_key_mgr.ConfKeyManager

This one should be renamed to use driver, not handler:

* image_upload_handler=nova.virt.xenapi.image.glance.GlanceStore

This one... who knows? There are no other schedulers for the cells 
module other than this one, and it doesn't follow the same manager -> 
driver pattern as most of Nova, so, should it be called scheduler_driver 
or just scrapped?:


* scheduler=nova.cells.scheduler.CellsScheduler

This one isn't properly set up as a driver-based system but actually 
implements an API, which you'd have to then subclass identically and 
there would be zero point in doing that since you would need to return 
the same data as is set in the Stats class' methods:


* compute_stats_class=nova.compute.stats.Stats

I think it's pretty clear there's lots of room for consistency and 
improvements.


All the best,
-jay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread Doug Hellmann

On Jul 23, 2014, at 3:49 PM, Ben Nemec  wrote:

> On 2014-07-23 13:25, gordon chung wrote:
> 
>> > I left a comment on one of the commits, but in general here are my 
>> > thoughts:
>> > 1) I would prefer not to do things like switch to oslo.i18n outside of 
>> > Gerrit.  I realize we don't have a specific existing policy for this, but 
>> > doing that significant 
>> > work outside of Gerrit is not desirable IMHO.  It needs to happen either 
>> > before graduation or after import into Gerrit.
>> > 2) I definitely don't want to be accepting "enable [hacking check]" 
>> > changes outside Gerrit.  The github graduation step is _just_ to get the 
>> > code in shape so it 
>> > can be imported with the tests passing.  It's perfectly acceptable to me 
>> > to just ignore any hacking checks during this step and fix them in Gerrit 
>> > where, again, 
>> > the changes can be reviewed.
>> > At a glance I don't see any problems with the changes that have been made, 
>> > but I haven't looked that closely and I think it brings up some topics for 
>> > clarification in the graduation process.
>> 
>> 
>> i'm ok to revert if there are concerns. i just vaguely remember a reference 
>> in another oslo lib about waiting for i18n graduation but tbh i didn't 
>> actually check back to see what the conclusion was.
>> 
>>  
>> cheers,
>> gord
> I have no specific concerns, but I don't want to set a precedent where we 
> make a bunch of changes on Github and then import that code.  The work on 
> Github should be limited to the minimum necessary to get the unit tests 
> passing (basically if it's not listed in 
> https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary#Manual_Fixes then it 
> should happen in Gerrit).  Once that happens the project can be imported and 
> any further changes made under our standard review process.  Either that or 
> changes can be made in incubator before graduation and reviewed then.
> 
> So I guess I'm a soft -1 on this for right now, but I'll defer to the other 
> Oslo cores because I don't really have time to take a more detailed look at 
> the repo and I don't want to be a blocker when I may not be around to discuss 
> it.
> 
> 

I agree with Ben on minimizing the amount of work that happens outside of the 
review process. I would have liked some discussion of the “remove stray tests”, 
for example.

Gordon, could you prepare a version of the repository that stops with the 
export and whatever changes are needed to make the test jobs for the new 
library run? If removing some of those tests is part of making the suite run, 
we can talk about that on the list here, but if you can make the job run 
without that commit we should review it in gerrit after the repository is 
imported.

Doug

> -Ben
> 
>  
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread Davanum Srinivas
I agree with Ben. (" I don't want to set a precedent where we make a
bunch of changes on Github and then import that code ")

-- dims

On Wed, Jul 23, 2014 at 3:49 PM, Ben Nemec  wrote:
> On 2014-07-23 13:25, gordon chung wrote:
>
>> I left a comment on one of the commits, but in general here are my
>> thoughts:
>> 1) I would prefer not to do things like switch to oslo.i18n outside of
>> Gerrit.  I realize we don't have a specific existing policy for this, but
>> doing that significant
>> work outside of Gerrit is not desirable IMHO.  It needs to happen either
>> before graduation or after import into Gerrit.
>> 2) I definitely don't want to be accepting "enable [hacking check]"
>> changes outside Gerrit.  The github graduation step is _just_ to get the
>> code in shape so it
>> can be imported with the tests passing.  It's perfectly acceptable to me
>> to just ignore any hacking checks during this step and fix them in Gerrit
>> where, again,
>> the changes can be reviewed.
>> At a glance I don't see any problems with the changes that have been made,
>> but I haven't looked that closely and I think it brings up some topics for
>> clarification in the graduation process.
>
>
> i'm ok to revert if there are concerns. i just vaguely remember a reference
> in another oslo lib about waiting for i18n graduation but tbh i didn't
> actually check back to see what the conclusion was.
>
>
> cheers,
> gord
>
> I have no specific concerns, but I don't want to set a precedent where we
> make a bunch of changes on Github and then import that code.  The work on
> Github should be limited to the minimum necessary to get the unit tests
> passing (basically if it's not listed in
> https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary#Manual_Fixes then
> it should happen in Gerrit).  Once that happens the project can be imported
> and any further changes made under our standard review process.  Either that
> or changes can be made in incubator before graduation and reviewed then.
>
> So I guess I'm a soft -1 on this for right now, but I'll defer to the other
> Oslo cores because I don't really have time to take a more detailed look at
> the repo and I don't want to be a blocker when I may not be around to
> discuss it.
>
> -Ben
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: http://davanum.wordpress.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Weekly IRC Agenda

2014-07-23 Thread Jorge Miramontes
Hey LBaaS folks,

This is your friendly reminder to provide any agenda items for tomorrow's weekly 
IRC meeting. The agenda currently has two items:

  *   Review Updates
  *   TLS work division

Cheers,
--Jorge

P.S. Please don't forget to update the weekly standup ==> 
https://etherpad.openstack.org/p/neutron-lbaas-weekly-standup
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Looking for Coraid cinder contact

2014-07-23 Thread Duncan Thomas
Hi

I'm looking for a maintainer email address for the cinder coraid
driver. http://stackalytics.com/report/driverlog?project_id=openstack%2Fcinder
just lists it as "Alyseo team" with no contact details.


Thanks

-- 
Duncan Thomas

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Doug Wiegley
Great question, and to my knowledge, not at present.  There is an ongoing 
discussion about a common usage framework for ceilometer, for all the various 
*aaS things, but status is not included (yet!).  I think that spec is in Gerrit.

Thanks,
Doug


From: Mike Spreitzer <mspre...@us.ibm.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, July 23, 2014 at 2:03 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [heat] health maintenance in autoscaling groups

Doug Wiegley <do...@a10networks.com> wrote on 
07/23/2014 03:43:02 PM:

> From: Doug Wiegley <do...@a10networks.com>
> ...
> The state of the world today: ‘status’ in the neutron database is
> configuration/provisioning status, not operational status.  Neutron-
> wide thing.  We were discussing adding operational status fields (or
> a neutron REST call to get the info from the backend) last month,
> but it’s something that isn’t planned for a serious conversation
> until Kilo, at present.

Thanks for the prompt response.  Let me just grasp at one last straw: is there 
any chance that Neutron will soon define and implement Ceilometer metrics that 
reveal PoolMember health?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Russell Bryant
On 07/23/2014 03:04 PM, Dan Smith wrote:
>> FWIW, I do actually agree with not exposing plugin points to
>> things that are not stable APIs and if they didn't already exist,
>> I'd not approve adding them. I'd actually go further and say not
>> even the virt driver API should be a plugin point, since we
>> arbitrarily change it during development any time we need to. The
>> latter is not a serious or practical view right now though given
>> our out of tree Docker/Ironic drivers. I'm just concerned that
>> we've had these various extension points exposed for a long time
>> and we've not clearly articulated that they are liable to be
>> killed off (besides marking vif_driver as deprecated)
> 
> Yep, I think we agree. I think that as a project we've identified 
> exposing plug points that aren't stable (or intended to be
> replaceable) as a bad thing, and thus we should be iterating on
> removing them. Especially if we're generous with our
> deprecate-before-remove rules, then I think that we're not likely
> to bite anyone suddenly with something they're shipping while
> working it upstream in parallel. I *really* thought we had called
> this one out on the ReleaseNotes, but apparently that didn't happen
> (probably because we decided to throw in those helper classes to
> avoid breaking configs). Going forward, marking it deprecated in
> the code for a cycle, noting it on the release notes, and then
> removing it the next cycle seems like plenty of warning.

+1 on this stance.  I'd like to remove all plug points that we don't
intend to be considered stable APIs with a reasonable deprecation cycle.

I personally don't consider any API in Nova except the v2 REST API to
be a stable API.

-- 
Russell Bryant

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Mike Spreitzer
Doug Wiegley  wrote on 07/23/2014 03:43:02 PM:

> From: Doug Wiegley 
> ...
> The state of the world today: ‘status’ in the neutron database is 
> configuration/provisioning status, not operational status.  Neutron-
> wide thing.  We were discussing adding operational status fields (or
> a neutron REST call to get the info from the backend) last month, 
> but it’s something that isn’t planned for a serious conversation 
> until Kilo, at present.

Thanks for the prompt response.  Let me just grasp at one last straw: is 
there any chance that Neutron will soon define and implement Ceilometer 
metrics that reveal PoolMember health?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread Ben Nemec
 

On 2014-07-23 13:25, gordon chung wrote: 

>> I left a comment on one of the commits, but in general here are my thoughts:
>> 1) I would prefer not to do things like switch to oslo.i18n outside of 
>> Gerrit. I realize we don't have a specific existing policy for this, but 
>> doing that significant 
>> work outside of Gerrit is not desirable IMHO. It needs to happen either 
>> before graduation or after import into Gerrit.
>> 2) I definitely don't want to be accepting "enable [hacking check]" changes 
>> outside Gerrit. The github graduation step is _just_ to get the code in 
>> shape so it 
>> can be imported with the tests passing. It's perfectly acceptable to me to 
>> just ignore any hacking checks during this step and fix them in Gerrit 
>> where, again, 
>> the changes can be reviewed.
>> At a glance I don't see any problems with the changes that have been made, 
>> but I haven't looked that closely and I think it brings up some topics for 
>> clarification in the graduation process.
> 
> i'm ok to revert if there are concerns. i just vaguely remember a reference 
> in another oslo lib about waiting for i18n graduation but tbh i didn't 
> actually check back to see what the conclusion was. 
> 
> cheers,
> _gord_

I have no specific concerns, but I don't want to set a precedent where
we make a bunch of changes on Github and then import that code. The work
on Github should be limited to the minimum necessary to get the unit
tests passing (basically if it's not listed in
https://wiki.openstack.org/wiki/Oslo/CreatingANewLibrary#Manual_Fixes
then it should happen in Gerrit). Once that happens the project can be
imported and any further changes made under our standard review process.
Either that or changes can be made in incubator before graduation and
reviewed then. 

So I guess I'm a soft -1 on this for right now, but I'll defer to the
other Oslo cores because I don't really have time to take a more
detailed look at the repo and I don't want to be a blocker when I may
not be around to discuss it. 

-Ben 
 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Doug Wiegley
> But there *is* a mechanism for some outside thing to query the load balancer 
> for the health of a pool member, right?  I am thinking specifically of 
> http://docs.openstack.org/api/openstack-network/2.0/content/GET_showMember__v2.0_pools__pool_id__members__member_id__lbaas_ext_ops_member.html
>  --- whose response includes a "status" field for the member.  Is there 
> documentation for what values can appear in that field, and what each value 
> means?

The state of the world today: ‘status’ in the neutron database is 
configuration/provisioning status, not operational status.  Neutron-wide thing. 
 We were discussing adding operational status fields (or a neutron REST call to 
get the info from the backend) last month, but it’s something that isn’t 
planned for a serious conversation until Kilo, at present.

The current possible lbaas values (from neutron/plugins/common/constants.py):

# Service operation status constants
ACTIVE = "ACTIVE"
DOWN = "DOWN"
PENDING_CREATE = "PENDING_CREATE"
PENDING_UPDATE = "PENDING_UPDATE"
PENDING_DELETE = "PENDING_DELETE"
INACTIVE = "INACTIVE"
ERROR = "ERROR"

… It does look like you can make a stats() call for some backends and get 
limited operational information, but it will not be uniform, nor universally 
supported.

Thanks,
doug

From: Mike Spreitzer <mspre...@us.ibm.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, July 23, 2014 at 1:27 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [heat] health maintenance in autoscaling groups

Doug Wiegley <do...@a10networks.com> wrote on 
07/16/2014 04:58:52 PM:

> You do recall correctly, and there are currently no mechanisms for
> notifying anything outside of the load balancer backend when the health
> monitor/member state changes.

But there *is* a mechanism for some outside thing to query the load balancer 
Re: [openstack-dev] [heat] health maintenance in autoscaling groups

2014-07-23 Thread Mike Spreitzer
Doug Wiegley  wrote on 07/16/2014 04:58:52 PM:

> You do recall correctly, and there are currently no mechanisms for
> notifying anything outside of the load balancer backend when the health
> monitor/member state changes.

But there *is* a mechanism for some outside thing to query the load 
balancer for the health of a pool member, right?  I am thinking 
specifically of 
http://docs.openstack.org/api/openstack-network/2.0/content/GET_showMember__v2.0_pools__pool_id__members__member_id__lbaas_ext_ops_member.html
 
--- whose response includes a "status" field for the member.  Is there 
documentation for what values can appear in that field, and what each 
value means?

Supposing we can leverage the pool member status, there remains an issue: 
establishing a link between an OS::Neutron::PoolMember and the 
corresponding scaling group member.  We could conceivably expand the 
scaling group code so that if the member type is a stack then the contents 
of the stack are searched (perhaps recursively) for resources of type 
OS::Neutron::PoolMember, but that is a tad too automatic for my taste.  It 
could pick up irrelevant PoolMembers.  And such a level of implicit 
behavior is outside our normal style of doing things.

We could follow the AWS style, by adding an optional property to the 
scaling group resource types --- where the value of that property can be 
the UUID of an OS::Neutron::LoadBalancer or an OS::Neutron::Pool.  But 
that still does not link up an individual scaling group member with its 
corresponding PoolMember.

Remember that if we are doing this at all, each scaling group member must 
be a stack.  I think the simplest way to solve this would be to define a 
way that such a stack can put in its outputs the ID of the corresponding 
PoolMember.  I would be willing to settle for simply saying that if such a 
stack has an output of type string and name "__OS_pool_member" then the 
value of that output is taken to be the ID of the corresponding 
PoolMember.  Some people do not like reserved names; if that must be 
avoided then we can expand the schema language with a way to identify 
which stack output carries the PoolMember ID.  Another alternative would 
be to add an optional scaling group property to carry the name of the 
stack output in question.
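The output-based linkage can be illustrated with a short sketch. This is not existing Heat code: the `__OS_pool_member` name is the convention proposed in this mail, and the outputs are assumed to arrive in the list-of-dicts shape Heat returns for stack outputs.

```python
# Sketch: read the proposed "__OS_pool_member" output from a member
# stack's outputs (the list-of-dicts shape Heat returns). The output
# name is the convention proposed above, not an existing API.

POOL_MEMBER_OUTPUT = '__OS_pool_member'

def pool_member_id(stack_outputs):
    """Return the PoolMember ID a member stack advertises, or None."""
    for output in stack_outputs:
        if output.get('output_key') == POOL_MEMBER_OUTPUT:
            return output.get('output_value')
    return None

outputs = [
    {'output_key': 'server_ip', 'output_value': '10.0.0.5'},
    {'output_key': '__OS_pool_member', 'output_value': 'member-a1b2'},
]
print(pool_member_id(outputs))  # member-a1b2
```

With this convention the scaling group treats a missing output as "this member has no pool member to health-check", so no recursive search of the stack contents is needed.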

> There is also currently no way for an external system to inject health
> information about an LB or its members.

I do not know that the injection has to be to the LB; in AWS the injection 
is to the scaling group.  That would be acceptable to me too.

Thoughts?

Thanks,
Mike
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Dan Smith
> FWIW, I do actually agree with not exposing plugin points to things
> that are not stable APIs and if they didn't already exist, I'd not
> approve adding them. I'd actually go further and say not even the
> virt driver API should be a plugin point, since we arbitrarily change
> it during development any time we need to. The latter is not a serious
> or practical view right now though given our out of tree Docker/Ironic
> drivers. I'm just concerned that we've had these various extension
> points exposed for a long time and we've not clearly articulated
> that they are liable to be killed off (besides marking vif_driver
> as deprecated)

Yep, I think we agree. I think that as a project we've identified
exposing plug points that aren't stable (or intended to be replaceable)
as a bad thing, and thus we should be iterating on removing them.
Especially if we're generous with our deprecate-before-remove rules,
then I think that we're not likely to bite anyone suddenly with
something they're shipping while working it upstream in parallel. I
*really* thought we had called this one out on the ReleaseNotes, but
apparently that didn't happen (probably because we decided to throw in
those helper classes to avoid breaking configs). Going forward, marking
it deprecated in the code for a cycle, noting it on the release notes,
and then removing it the next cycle seems like plenty of warning.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Ian Wells
On 23 July 2014 10:52, Dan Smith  wrote:

> > What is our story for people who are developing new network or
> > storage drivers for Neutron / Cinder and wish to test Nova ? Removing
> > vif_driver and volume_drivers config parameters would mean that they
> > would have to directly modify the existing Nova libvirt
> > vif.py/volume.py codefiles.
> >
> > This isn't necessarily bad because they'll have to do this anyway
> > if they want to actually submit it to Nova.
>
> I don't think there's any reason not to do that in nova itself, is
> there? Virt drivers are large, so maybe making an exception for that
> plug point makes sense purely for our own test efforts. However, for
> something smaller like you mention, I don't see why we need to keep
> them, especially given what it advertises (IMHO) to people.
>

We should encourage new developers to use a new binding_type, rather than
continue with vif_driver substitution.  Replacing the generic VIF driver
basically loses all the nice binding_type support implemented there, when
what we actually want to do is say 'here is another VIF type, and here is a
binding type value you will see when you should be using it'.  An argument,
I think, for coming up with a mechanism in K that allows that to happen
with a little bit of config that isn't as manky as complete vif_driver
substitution and one that doesn't require nova and neutron config to be
precisely in lockstep (which was always the problem with vif_driver and why
the generic VIF driver was developed originally).  With that in mind I
would, absolutely, agree with deprecating the vif_driver setting.
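As a rough illustration of the binding-type idea, a generic driver can keep a registry keyed on the vif_type value Neutron reports, so supporting a new binding means registering one handler rather than substituting the whole driver. This is a toy sketch, not Nova's actual code; the type names and handler signatures are invented.

```python
# Toy sketch of dispatch on Neutron's vif_type, instead of swapping the
# entire driver class via the vif_driver config option. All names here
# are illustrative, not Nova's real API.

def plug_ovs(instance, vif):
    return 'plugged %s via ovs' % vif['id']

def plug_bridge(instance, vif):
    return 'plugged %s via bridge' % vif['id']

VIF_PLUG_REGISTRY = {
    'ovs': plug_ovs,
    'bridge': plug_bridge,
}

def plug(instance, vif):
    """Dispatch to the handler registered for this VIF's binding type."""
    try:
        handler = VIF_PLUG_REGISTRY[vif['type']]
    except KeyError:
        raise NotImplementedError('unsupported vif_type: %s' % vif['type'])
    return handler(instance, vif)

print(plug('inst-1', {'id': 'vif-1', 'type': 'ovs'}))  # plugged vif-1 via ovs
```

The point of the registry shape is that Nova and Neutron only need to agree on the vif_type string, not on a matching pair of config options.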

I can't really speak for "we", but certainly _I_ don't want to support
> that model. I think it leads to people thinking they can develop drivers
> for things like this out of tree permanently, which I'd really like to
> avoid.
>

I sympathise that we shouldn't expose any more interfaces to abuse than we
have to - not least because those interfaces then become frozen and hard to
change - but I think you need a stronger argument here.  It is useful to
have out of tree drivers for this stuff while people experiment, and
perhaps also in production systems.  It's clear by the variety of drivers
and their increasing number that we are still experimenting with VIF
plugging possibilities.  There's quite a lot of ways that you can attach a
VIF to a VM, and just because we happen to support a handful doesn't mean
to say we've provided every option.
-- 
Ian.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 10:52:54AM -0700, Dan Smith wrote:
> > If we're going to do that, then we should be consistent. eg there is
> > a volume_drivers parameter that serves the same purpose as
> > vif_driver
> 
> There are lots of them. We've had a bit of a background task running to
> remove them when possible/convenient and try to avoid adding new ones.
> I'm not opposed to aggressively removing them for sure, but it wouldn't
> be super high on my priority list. However, I definitely don't want to
> slide backwards when we have one already marked for removal :)
> 
> > What is our story for people who are developing new network or
> > storage drivers for Neutron / Cinder and wish to test Nova ? Removing
> > vif_driver and volume_drivers config parameters would mean that they
> > would have to directly modify the existing Nova libvirt
> > vif.py/volume.py codefiles.
> > 
> > This isn't necessarily bad because they'll have to do this anyway
> > if they want to actually submit it to Nova.
> 
> I don't think there's any reason not to do that in nova itself, is
> there? Virt drivers are large, so maybe making an exception for that
> plug point makes sense purely for our own test efforts. However, for
> something smaller like you mention, I don't see why we need to keep
> them, especially given what it advertises (IMHO) to people.

The main reason for the plugin points I see is for vendors wishing to
ship custom out of tree extensions to their own customers/users without
sending them upstream, or before they've been released upstream. I don't
have much idea if this is a common thing vendors do though, as opposed
to just patching nova and giving their downstream consumers the entire
nova codebase instead of just a single extension file.

> > This could be a pain if they wish to provide the custom driver to
> > users/customers of the previous stable Nova release while waiting for
> > official support in next Nova release. It sounds like you're
> > explicitly saying we don't want to support that use case though.
> 
> I can't really speak for "we", but certainly _I_ don't want to support
> that model. I think it leads to people thinking they can develop drivers
> for things like this out of tree permanently, which I'd really like to
> avoid.

FWIW, I do actually agree with not exposing plugin points to things
that are not stable APIs and if they didn't already exist, I'd not
approve adding them. I'd actually go further and say not even the
virt driver API should be a plugin point, since we arbitrarily change
it during development any time we need to. The latter is not a serious
or practical view right now though given our out of tree Docker/Ironic
drivers. I'm just concerned that we've had these various extension
points exposed for a long time and we've not clearly articulated
that they are liable to be killed off (besides marking vif_driver
as deprecated)

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

2014-07-23 Thread Ian Wells
Speaking as someone who was reviewing both specs, I would personally
recommend you grant both exceptions.  The code changes are very limited in
scope - particularly the Nova one - which makes the code review simple, and
they're highly unlikely to affect anyone who isn't actually using DPDK OVS
(subject to the Neutron tests for its presence being solid), which makes
them low risk.  For even lower risk, we could have a config option to
enable the test for a CUSE-based binding (and yes, I know earlier in the
review everyone was against config items, but specifically what we didn't
want was *two* config items, one in Nova and one in Neutron, that only
worked if they were in agreement; one solely in Neutron would, I think, be
acceptable).

All this subject to Sean getting all the CRs out of his spec, and maybe we
could add a spec test for that, because it's a right pain to have specs
full of CRs if you're trying to diff them online...
-- 
Ian.



On 23 July 2014 11:10, Mooney, Sean K  wrote:

> Hi kyle
>
> Thanks for your provisional support.
> I would agree that unless the nova spec is also granted an exception, both
> specs should be moved to Kilo.
>
> I have now uploaded the most recent version of the specs.
> They are available to review here:
> https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost
> https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost
>
> regards
> sean
>
>
> -Original Message-
> From: Kyle Mestery [mailto:mest...@mestery.com]
> Sent: Tuesday, July 22, 2014 2:47 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron] [Spec freeze exception]
> ml2-use-dpdkvhost
>
> On Mon, Jul 21, 2014 at 10:04 AM, Mooney, Sean K 
> wrote:
> > Hi
> >
> > I would like to propose
> > https://review.openstack.org/#/c/107797/1/specs/juno/ml2-use-dpdkvhost
> > .rst
> > for a spec freeze exception.
> >
> >
> >
> > https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost
> >
> >
> >
> > This blueprint adds support for the Intel(R) DPDK Userspace vHost
> >
> > port binding to the Open Vswitch and Open Daylight ML2 Mechanism Drivers.
> >
> In general, I'd be ok with approving an exception for this BP.
> However, please see below.
>
> >
> >
> > This blueprint enables nova changes tracked by the following spec:
> >
> > https://review.openstack.org/#/c/95805/1/specs/juno/libvirt-ovs-use-us
> > vhost.rst
> >
> This BP appears to also require an exception from the Nova team. I think
> these both require exceptions for this work to have a shot at landing in
> Juno. Given this, I'm actually leaning to move this to Kilo. But if you can
> get a Nova freeze exception, I'd consider the same for the Neutron BP.
>
> Thanks,
> Kyle
>
> >
> >
> > regards
> >
> > sean
> >
> > --
> > Intel Shannon Limited
> > Registered in Ireland
> > Registered Office: Collinstown Industrial Park, Leixlip, County
> > Kildare Registered Number: 308263 Business address: Dromore House,
> > East Park, Shannon, Co. Clare
> >
> > This e-mail and any attachments may contain confidential material for
> > the sole use of the intended recipient(s). Any review or distribution
> > by others is strictly prohibited. If you are not the intended
> > recipient, please contact the sender and delete all copies.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> --
> Intel Shannon Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
> Business address: Dromore House, East Park, Shannon, Co. Clare
>
> This e-mail and any attachments may contain confidential material for the
> sole use of the intended recipient(s). Any review or distribution by others
> is strictly prohibited. If you are not the intended recipient, please
> contact the sender and delete all copies.
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] Closing registration for the Mid-cycle meetup

2014-07-23 Thread Devananda van der Veen
Hi all,

We have had a few last-minute registrations for the mid-cycle, and are now
up to 20 attendees. I am going to close registration at this point and look
forward to seeing you all on Monday (or Sunday, if you're getting pizza
with me)!

Cheers,
-Devananda
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] overuse of 'except Exception'

2014-07-23 Thread Doug Hellmann

On Jul 23, 2014, at 7:13 AM, Chris Dent  wrote:

> 
> I was having a bit of a browse through the ceilometer code and
> noticed there are a fair few instances (sixty-some) of
> `except Exception` scattered about.
> 
> While not as evil as a bare except, my Python elders always pointed
> out that doing `except Exception` is a bit like using a sledgehammer
> where something more akin to a gavel is what's wanted. The error
> condition is obliterated but there's no judgement on what happened
> and no apparent effort by the developer to effectively handle
> discrete cases.
> 
> A common idiom appears as:
> 
>except Exception:
>LOG.exception(_('something failed'))
>return
># or continue
> 
> There's no information here about what failed or why.

LOG.exception() logs the full traceback, with the argument as a bit of context.

> 
> That's bad enough, but much worse, this will catch all sorts of
> exceptions, even ones that are completely unexpected and ought to
> cause a more drastic (and thus immediately informative) failure
> than 'something failed'.

In most cases, we chose to handle errors this way to keep the service running 
even in the face of “bad” data, since we are trying to collect an audit stream 
and we don’t want to miss good data if we encounter bad data.
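For comparison, a narrower version of the idiom catches only the failures that bad data can actually produce, logs them with full context, and lets anything unexpected propagate. This sketch uses the stdlib logging module in place of Oslo's LOG and _() so it is self-contained; the sample shape is invented.

```python
import logging

LOG = logging.getLogger(__name__)

def parse_sample(raw):
    """Convert one raw sample dict to a float volume.

    Anticipated bad data is logged (LOG.exception includes the
    traceback) and skipped; anything unexpected still propagates
    and fails loudly.
    """
    try:
        return float(raw['counter_volume'])
    except (KeyError, TypeError, ValueError):
        LOG.exception('could not parse sample %r', raw)
        return None

print(parse_sample({'counter_volume': '4.2'}))  # 4.2
print(parse_sample({'bogus': 1}))               # None
```

The tuple of exception types is the part that needs case-by-case judgment: it should name exactly the failures the surrounding code expects from malformed input.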

> 
> So, my question: Is this something we who dig around in the ceilometer
> code ought to care about and make an effort to clean up? If so, I'm
> happy to get started.

If you would like to propose some changes for cases where more detailed 
exception handling is appropriate, we could discuss them on a case-by-case 
basis. I don’t think anyone used this exception handling style lightly, and I 
wouldn’t want to change it without due consideration.

Doug

> 
> Thanks.
> 
> -- 
> Chris Dent tw:@anticdent freenode:cdent
> https://tank.peermore.com/tanks/cdent
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mentor program?

2014-07-23 Thread Tim Freund

On 07/23/2014 02:16 PM, Cindy Pallares wrote:


On 07/23/2014 01:02 PM, Anne Gentle wrote:

On Wed, Jul 23, 2014 at 12:29 PM, Joshua Harlow 
wrote:


Hi all,

I was reading over an (IMHO) insightful Hacker News thread last night:

https://news.ycombinator.com/item?id=8068547

Labeled/titled: 'I made a patch for Mozilla, and you can do it too'

It made me wonder what kind of mentoring support are we as a community
offering to newbies (a random google search for 'openstack mentoring' shows
mentors for GSoC, mentors for interns, outreach for women... but no mention
of mentors as a way for everyone to get involved)?

Looking at the comments in that hacker news thread, the article itself it
seems like mentoring is stressed over and over as the way to get involved.

Have there been ongoing efforts to establish such a program? (I know there
is training work that has been done, but that's not exactly the same.)

Thoughts, comments...?


I'll let Stefano answer further, but yes, we've discussed a centralized
mentoring program for a year or so. I'm not sure we have enough mentors
available, there are certainly plenty of people seeking and needing
mentoring. So he can elaborate more on our current thinking of how we'd
overcome the imbalance and get more centralized coordination in this area.

Thanks,
Anne


Mozilla also has a "mentored bugs" system, which provides a mentor who
commits to helping a newbie get a single bug fixed. It would be nice to
have that in OpenStack. It would also be a great way for people to get
their feet wet in mentoring, or for those who don't want to commit
themselves too much.



I was a student in the OpenStack Upstream Training that took 
place before the Atlanta Summit.  The training was great, but the weekly 
mentoring afterward really made the experience worthwhile.  Students 
selected bugs before the class, learned about the contribution process 
during the class, and then met weekly with a mentor until their 
contribution was merged.


Thanks,

Tim


--
Tim Freund
913-207-0983 | @timfreund
http://tim.freunds.net


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Doug Wiegley
@Doug: I think if the drivers see the TERMINATED_HTTPS protocol then
they can throw an exception.  I don't think a driver interface change is
needed.

They'd have to know to throw it, which could be problematic.  But a
completely new protocol will probably result in some kind of exception, so
it's probably sufficient.
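The kind of guard being discussed could look roughly like this. It is a sketch only: the TERMINATED_HTTPS value is taken from the thread, but the class, the exception, and the dict-shaped listener are invented stand-ins for the real v2 driver interface.

```python
# Sketch: a non-TLS v2 driver rejecting the new protocol up front.
# The protocol string comes from the discussion above; the driver
# class and listener shape are illustrative, not the real interface.

PROTOCOL_TERMINATED_HTTPS = 'TERMINATED_HTTPS'

class UnsupportedProtocol(Exception):
    pass

class LegacyListenerDriver(object):
    SUPPORTED = ('HTTP', 'HTTPS', 'TCP')

    def create(self, listener):
        if listener['protocol'] == PROTOCOL_TERMINATED_HTTPS:
            raise UnsupportedProtocol('driver does not terminate TLS')
        if listener['protocol'] not in self.SUPPORTED:
            raise UnsupportedProtocol(listener['protocol'])
        return 'created %s listener' % listener['protocol']

driver = LegacyListenerDriver()
print(driver.create({'protocol': 'HTTP'}))  # created HTTP listener
```

A driver interface change would only be needed if the API layer should reject the request up front instead of relying on each driver to raise.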

doug



On 7/23/14, 12:08 PM, "Brandon Logan"  wrote:

>@Evgeny: Did you intend on adding another patchset in the reviews I've
>been working on? If so I don't really see any changes, so if there are
>some changes you needed in there, let me know.
>
>@Doug: I think if the drivers see the TERMINATED_HTTPS protocol then
>they can throw an exception.  I don't think a driver interface change is
>needed.
>
>Thanks,
>Brandon
>
>
>On Wed, 2014-07-23 at 17:02 +, Doug Wiegley wrote:
>> Do we want any driver interface changes for this?  At one level, with
>>the
>> current interface, conforming drivers could just reference
>> listener.sni_containers, with no changes.  But, do we want something in
>> place so that the API can return an unsupported error for non-TLS v2
>> drivers?  Or must all v2 drivers support TLS?
>> 
>> doug
>> 
>> 
>> 
>> On 7/23/14, 10:54 AM, "Evgeny Fedoruk"  wrote:
>> 
>> >My code is here:
>> >https://review.openstack.org/#/c/109035/1
>> >
>> >
>> >
>> >-Original Message-
>> >From: Evgeny Fedoruk
>> >Sent: Wednesday, July 23, 2014 6:54 PM
>> >To: OpenStack Development Mailing List (not for usage questions)
>> >Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work
>> >division
>> >
>> >Hi Carlos,
>> >
>> >As I understand, you are working on a common module for Barbican
>> >interactions.
>> >I will commit my code later today, and I would appreciate it if you and
>> >anybody else who is interested would review this change.
>> >There is one specific spot for the common Barbican interactions module
>> >API integration.
>> >After the IRC meeting tomorrow, we can discuss the work items and
>>decide
>> >who is interested/available to do them.
>> >Does it make sense?
>> >
>> >Thanks,
>> >Evg
>> >
>> >-Original Message-
>> >From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
>> >Sent: Wednesday, July 23, 2014 6:15 PM
>> >To: OpenStack Development Mailing List (not for usage questions)
>> >Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work
>> >division
>> >
>> >Do you have any idea as to how we can split up the work?
>> >
>> >On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk 
>> > wrote:
>> >
>> >> Hi,
>> >> 
>> >> I'm working on TLS integration with loadbalancer v2 extension and db.
>> >> Basing on Brandon's  patches https://review.openstack.org/#/c/105609
>>,
>> >>https://review.openstack.org/#/c/105331/  ,
>> >>https://review.openstack.org/#/c/105610/
>> >> I will abandon previous 2 patches for TLS which are
>> >>https://review.openstack.org/#/c/74031/ and
>> >>https://review.openstack.org/#/c/102837/
>> >> Managing to submit my change later today. It will include lbaas
>> >>extension v2 modification, lbaas db v2 modifications, alembic
>>migration
>> >>for schema changes and new tests in unit testing for lbaas db v2.
>> >> 
>> >> Thanks,
>> >> Evg
>> >> 
>> >> -Original Message-
>> >> From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
>> >> Sent: Wednesday, July 23, 2014 3:54 AM
>> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work
>>division
>> >> 
>> >>   Since it looks like the TLS blueprint was approved, I'm sure we're
>> >>all eager to start coding, so how should we divide up work on the
>> >>source code. I have pull requests in pyopenssl
>> >>"https://github.com/pyca/pyopenssl/pull/143", and a few one-liners in
>> >>pyca/cryptography to expose the needed low-level functionality that
>> >>I'm hoping will be added pretty soon so that PR 143's tests can pass.
>> >>In case it doesn't, we will fall back to using pyasn1_modules, as it
>> >>already has a means to fetch what we want at a lower level.
>> >> I'm just hoping that we can split the work up so that we can
>> >>collaborate together on this without over-serializing the work, where
>> >>people become dependent on waiting for someone else to complete their
>> >>work or, worse, one person ending up doing all the work.
>> >> 
>> >>  
>> >>   Carlos D. Garza
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >> 
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >___
>> >OpenStack-dev mailing list
>> >OpenStack-dev@lists.openstack.org
>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >___

Re: [openstack-dev] [Nova][Spec freeze exception] Controlled shutdown of GuestOS

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 06:08:52PM +, Day, Phil wrote:
> Hi Folks,
> 
> I'd like to propose the following as an exception to the spec freeze, on the 
> basis that it addresses a potential data corruption issues in the Guest.
> 
> https://review.openstack.org/#/c/89650
> 
> We were pretty close to getting acceptance on this before, apart from a 
> debate over whether one additional config value could be allowed to be set 
> via image metadata - so I've given in for now on wanting that feature from a 
> deployer perspective, and said that we'll hard code it as requested.
> 
> Initial parts of the implementation are here:
> https://review.openstack.org/#/c/68942/
> https://review.openstack.org/#/c/99916/

Per my comments already, I think this is important for Juno and will
sponsor it.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

2014-07-23 Thread Mooney, Sean K
Hi kyle

Thanks for your provisional support.
I would agree that unless the nova spec is also granted an exception, both specs 
should be moved to Kilo.

I have now uploaded the most recent version of the specs.
They are available to review here:
https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost
https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost

regards
sean


-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: Tuesday, July 22, 2014 2:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

On Mon, Jul 21, 2014 at 10:04 AM, Mooney, Sean K  
wrote:
> Hi
>
> I would like to propose
> https://review.openstack.org/#/c/107797/1/specs/juno/ml2-use-dpdkvhost
> .rst
> for a spec freeze exception.
>
>
>
> https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost
>
>
>
> This blueprint adds support for the Intel(R) DPDK Userspace vHost
>
> port binding to the Open Vswitch and Open Daylight ML2 Mechanism Drivers.
>
In general, I'd be ok with approving an exception for this BP.
However, please see below.

>
>
> This blueprint enables nova changes tracked by the following spec:
>
> https://review.openstack.org/#/c/95805/1/specs/juno/libvirt-ovs-use-us
> vhost.rst
>
This BP appears to also require an exception from the Nova team. I think these 
both require exceptions for this work to have a shot at landing in Juno. Given 
this, I'm actually leaning to move this to Kilo. But if you can get a Nova 
freeze exception, I'd consider the same for the Neutron BP.

Thanks,
Kyle

>
>
> regards
>
> sean
>
> --
> Intel Shannon Limited
> Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County 
> Kildare Registered Number: 308263 Business address: Dromore House, 
> East Park, Shannon, Co. Clare
>
> This e-mail and any attachments may contain confidential material for 
> the sole use of the intended recipient(s). Any review or distribution 
> by others is strictly prohibited. If you are not the intended 
> recipient, please contact the sender and delete all copies.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] FW: [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Brandon Logan
@Evgeny: Did you intend on adding another patchset in the reviews I've
been working on? If so I don't really see any changes, so if there are
some changes you needed in there, let me know.

@Doug: I think if the drivers see the TERMINATED_HTTPS protocol then
they can throw an exception.  I don't think a driver interface change is
needed.

Thanks,
Brandon


On Wed, 2014-07-23 at 17:02 +, Doug Wiegley wrote:
> Do we want any driver interface changes for this?  At one level, with the
> current interface, conforming drivers could just reference
> listener.sni_containers, with no changes.  But, do we want something in
> place so that the API can return an unsupported error for non-TLS v2
> drivers?  Or must all v2 drivers support TLS?
> 
> doug
> 
> 
> 
> On 7/23/14, 10:54 AM, "Evgeny Fedoruk"  wrote:
> 
> >My code is here:
> >https://review.openstack.org/#/c/109035/1
> >
> >
> >
> >-Original Message-
> >From: Evgeny Fedoruk
> >Sent: Wednesday, July 23, 2014 6:54 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work
> >division
> >
> >Hi Carlos,
> >
> >As I understand, you are working on a common module for Barbican
> >interactions.
> >I will commit my code later today, and I would appreciate it if you and
> >anybody else who is interested would review this change.
> >There is one specific spot for the common Barbican interactions module
> >API integration.
> >After the IRC meeting tomorrow, we can discuss the work items and decide
> >who is interested/available to do them.
> >Does it make sense?
> >
> >Thanks,
> >Evg
> >
> >-Original Message-
> >From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
> >Sent: Wednesday, July 23, 2014 6:15 PM
> >To: OpenStack Development Mailing List (not for usage questions)
> >Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work
> >division
> >
> >Do you have any idea as to how we can split up the work?
> >
> >On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk 
> > wrote:
> >
> >> Hi,
> >> 
> >> I'm working on TLS integration with loadbalancer v2 extension and db.
> >> Basing on Brandon's  patches https://review.openstack.org/#/c/105609 ,
> >>https://review.openstack.org/#/c/105331/  ,
> >>https://review.openstack.org/#/c/105610/
> >> I will abandon previous 2 patches for TLS which are
> >>https://review.openstack.org/#/c/74031/ and
> >>https://review.openstack.org/#/c/102837/
>> I am aiming to submit my change later today. It will include lbaas
> >>extension v2 modification, lbaas db v2 modifications, alembic migration
> >>for schema changes and new tests in unit testing for lbaas db v2.
> >> 
> >> Thanks,
> >> Evg
> >> 
> >> -Original Message-
> >> From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
> >> Sent: Wednesday, July 23, 2014 3:54 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
> >> 
>>Since it looks like the TLS blueprint was approved I'm sure we're all
>>eager to start coding, so how should we divide up work on the source code?
>>I have pull requests in pyopenssl
>>"https://github.com/pyca/pyopenssl/pull/143", and a few one-liners in
>>pyca/cryptography to expose the needed low-level code that I'm hoping will be
>>added pretty soon so that the PR 143 tests can pass. In case it doesn't, we
>>will fall back to using the pyasn1_modules as it already also has a
>>means to fetch what we want at a lower level.
>> I'm just hoping that we can split the work up so that we can
>>collaborate together on this without over-serializing the work, where
>>people become dependent on waiting for someone else to complete their
>>work or, worse, one person ending up doing all the work.
> >> 
> >> 
>>   Carlos D. Garza

Re: [openstack-dev] Mentor program?

2014-07-23 Thread Anne Gentle
On Wed, Jul 23, 2014 at 12:29 PM, Joshua Harlow 
wrote:

> Hi all,
>
> I was reading over an IMHO insightful Hacker News thread last night:
>
> https://news.ycombinator.com/item?id=8068547
>
> Labeled/titled: 'I made a patch for Mozilla, and you can do it too'
>
> It made me wonder what kind of mentoring support we as a community are
> offering to newbies (a random Google search for 'openstack mentoring' shows
> mentors for GSoC, mentors for interns, outreach for women... but no mention
> of mentors as a way for everyone to get involved)?
>
> Looking at the comments in that Hacker News thread and the article itself,
> it seems like mentoring is stressed over and over as the way to get involved.
>
> Have there been ongoing efforts to establish such a program? (I know there
> is training work that has been done, but that's not exactly the same.)
>
> Thoughts, comments...?
>

I'll let Stefano answer further, but yes, we've discussed a centralized
mentoring program for a year or so. I'm not sure we have enough mentors
available, there are certainly plenty of people seeking and needing
mentoring. So he can elaborate more on our current thinking of how we'd
overcome the imbalance and get more centralized coordination in this area.

Thanks,
Anne


>
> -Josh


Re: [openstack-dev] [nova][Spec Freeze Exception]Support dpdkvhost in ovs vif bindings

2014-07-23 Thread Mooney, Sean K
Hi
The third iteration of the specs are now available for review at the links below

https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost

https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost
Thanks for the feedback given so far.
Hopefully the current iteration addresses the issues raised.

Regards
Sean.


From: Czesnowicz, Przemyslaw
Sent: Friday, July 18, 2014 1:03 PM
To: openstack-dev@lists.openstack.org
Cc: Mooney, Sean K; Hoban, Adrian
Subject: [openstack-dev][nova][Spec Freeze Exception]Support dpdkvhost in ovs 
vif bindings

Hi Nova Cores,

I would like to ask for spec approval deadline exception for:
https://review.openstack.org/#/c/95805/2

This feature allows using a DPDK-enabled Open vSwitch with OpenStack.
This is an important feature for NFV workloads that require high performance 
network I/O.

If the spec is approved, implementation should be straightforward and should
not disrupt any other work happening in Nova.


Thanks,
Przemek


--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.



Re: [openstack-dev] Mentor program?

2014-07-23 Thread Cindy Pallares

On 07/23/2014 01:02 PM, Anne Gentle wrote:
> On Wed, Jul 23, 2014 at 12:29 PM, Joshua Harlow 
> wrote:
>
>> Hi all,
>>
>> I was reading over an IMHO insightful Hacker News thread last night:
>>
>> https://news.ycombinator.com/item?id=8068547
>>
>> Labeled/titled: 'I made a patch for Mozilla, and you can do it too'
>>
>> It made me wonder what kind of mentoring support we as a community are
>> offering to newbies (a random Google search for 'openstack mentoring' shows
>> mentors for GSoC, mentors for interns, outreach for women... but no mention
>> of mentors as a way for everyone to get involved)?
>>
>> Looking at the comments in that Hacker News thread and the article itself,
>> it seems like mentoring is stressed over and over as the way to get involved.
>>
>> Have there been ongoing efforts to establish such a program? (I know there
>> is training work that has been done, but that's not exactly the same.)
>>
>> Thoughts, comments...?
>>
> I'll let Stefano answer further, but yes, we've discussed a centralized
> mentoring program for a year or so. I'm not sure we have enough mentors
> available, there are certainly plenty of people seeking and needing
> mentoring. So he can elaborate more on our current thinking of how we'd
> overcome the imbalance and get more centralized coordination in this area.
>
> Thanks,
> Anne
>
Mozilla also has a "mentored bugs" system which provides a mentor who
commits to helping a newbie get a single bug fixed. It would be nice to
have that in OpenStack. It would also be a great way to get their feet
wet for people who are new to mentoring or who don't want to commit
themselves too much.



[openstack-dev] [Nova][Spec freeze exception] Controlled shutdown of GuestOS

2014-07-23 Thread Day, Phil
Hi Folks,

I'd like to propose the following as an exception to the spec freeze, on the
basis that it addresses a potential data corruption issue in the guest.

https://review.openstack.org/#/c/89650

We were pretty close to getting acceptance on this before, apart from a debate 
over whether one additional config value could be allowed to be set via image 
metadata - so I've given in for now on wanting that feature from a deployer 
perspective, and said that we'll hard-code it as requested.

Initial parts of the implementation are here:
https://review.openstack.org/#/c/68942/
https://review.openstack.org/#/c/99916/


Phil


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Dan Smith
> If we're going to do that, then we should be consistent. eg there is
> a volume_drivers parameter that serves the same purpose as
> vif_driver

There are lots of them. We've had a bit of a background task running to
remove them when possible/convenient and try to avoid adding new ones.
I'm not opposed to aggressively removing them for sure, but it wouldn't
be super high on my priority list. However, I definitely don't want to
slide backwards when we have one already marked for removal :)

> What is our story for people who are developing new network or
> storage drivers for Neutron / Cinder and wish to test Nova ? Removing
> vif_driver and volume_drivers config parameters would mean that they
> would have to directly modify the existing Nova libvirt
> vif.py/volume.py codefiles.
> 
> This isn't necessarily bad because they'll have to do this anyway
> if they want to actually submit it to Nova.

I don't think there's any reason not to do that in nova itself, is
there? Virt drivers are large, so maybe making an exception for that
plug point makes sense purely for our own test efforts. However, for
something smaller like you mention, I don't see why we need to keep
them, especially given what it advertises (IMHO) to people.

> This could be a pain if they wish to provide the custom driver to
> users/customers of the previous stable Nova release while waiting for
> official support in next Nova release. It sounds like you're
> explicitly saying we don't want to support that use case though.

I can't really speak for "we", but certainly _I_ don't want to support
that model. I think it leads to people thinking they can develop drivers
for things like this out of tree permanently, which I'd really like to
avoid.

--Dan





[openstack-dev] Mentor program?

2014-07-23 Thread Joshua Harlow
Hi all,

I was reading over an IMHO insightful Hacker News thread last night:

https://news.ycombinator.com/item?id=8068547

Labeled/titled: 'I made a patch for Mozilla, and you can do it too'

It made me wonder what kind of mentoring support we as a community are offering
to newbies (a random Google search for 'openstack mentoring' shows mentors for
GSoC, mentors for interns, outreach for women... but no mention of mentors as a
way for everyone to get involved)?

Looking at the comments in that Hacker News thread and the article itself, it
seems like mentoring is stressed over and over as the way to get involved.

Have there been ongoing efforts to establish such a program? (I know there is
training work that has been done, but that's not exactly the same.)

Thoughts, comments...?

-Josh


Re: [openstack-dev] Support for Django 1.7 in OpenStack

2014-07-23 Thread Thomas Goirand
On 07/23/2014 10:46 PM, Lyle, David wrote:
> Django 1.7 drops support for python 2.6 [1], so until OpenStack drops
> support for 2.6 which is slated for Kilo, Horizon is unfortunately capped
> at < 1.7.
> 
> David
> 
> [1] 
> https://docs.djangoproject.com/en/dev/releases/1.7/#python-compatibility

Having the gate put a cap on Django < 1.7 doesn't mean that nobody
can write patches to support it, just that it's going to be more
difficult to test.

Thomas Goirand (zigo)




Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 10:09:37AM -0700, Dan Smith wrote:
> > I don't see an issue with allowing people to configure 3rd party impl
> > for the VIF driver, provided we don't claim that the VIF driver API
> > contract is stable, same way we don't claim virt driver API is stable.
> > It lets users have a solution to enable custom NIC functionality while
> > waiting for Nova to officially support it. If we did remove it, then
> > users could still subclass the main libvirt driver class and make
> > it use their custom VIF driver, so they'd get to the same place just
> > with an extra inconvenient hoop to jump through. So is it worth removing
> > vif_driver ?
> 
> In my opinion, we should (continue to) remove any of those plug points
> that we don't want to actually support as plugin interfaces. The virt
> driver plug point at least serves to allow us to develop and test
> drivers outside of the tree (ironic and docker, for example) before
> merging. The vif_driver (and others) imply that it's a plugin interface,
> when we have no intention of making it one, and I think we should nuke them.

If we're going to do that, then we should be consistent. eg there is a
volume_drivers parameter that serves the same purpose as vif_driver

What is our story for people who are developing new network or storage
drivers for Neutron / Cinder and wish to test Nova ? Removing vif_driver
and volume_drivers config parameters would mean that they would have to
directly modify the existing Nova libvirt vif.py/volume.py codefiles.

This isn't necessarily bad because they'll have to do this anyway if
they want to actually submit it to Nova.

It is, however, notably different from what they can do today where they
can drop in a impl for their new Neutron/Cinder driver without having to
modify any existing Nova code directly. This could be a pain if they wish
to provide the custom driver to users/customers of the previous stable
Nova release while waiting for official support in next Nova release. It
sounds like you're explicitly saying we don't want to support that use
case though.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread gordon chung
> I left a comment on one of the commits, but in general here are my thoughts:
> 1) I would prefer not to do things like switch to oslo.i18n outside of 
> Gerrit.  I realize we don't have a specific existing policy for this, but 
> doing that significant 
> work outside of Gerrit is not desirable IMHO.  It needs to happen either 
> before graduation or after import into Gerrit.
> 2) I definitely don't want to be accepting "enable [hacking check]" changes 
> outside Gerrit.  The github graduation step is _just_ to get the code in 
> shape so it 
> can be imported with the tests passing.  It's perfectly acceptable to me to 
> just ignore any hacking checks during this step and fix them in Gerrit where, 
> again, 
> the changes can be reviewed.
> At a glance I don't see any problems with the changes that have been made, 
> but I haven't looked that closely and I think it brings up some topics for 
> clarification in the graduation process.


I'm ok to revert if there are concerns. I just vaguely remember a reference in
another oslo lib about waiting for i18n graduation, but tbh I didn't actually
check back to see what the conclusion was.

cheers,
gord


Re: [openstack-dev] [Trove] Neutron integration test job

2014-07-23 Thread Kyle Mestery
On Wed, Jul 23, 2014 at 7:28 AM, Denis Makogon  wrote:
> Hello, Stackers.
>
>
>
> For those of you who are interested in Trove, just letting you know that for
> now Trove can work with Neutron (hooray!!)
> instead of Nova-network; see [1] and [2]. It's a huge step forward on the
> road of advanced OpenStack integration.
>
> But let's admit it's not the end; we should deal with:
>
> Add a Neutron-based configuration for DevStack to let folks try it (see [3]).
>
I have some comments on this patch which I've posted in the review.

> Implementing/providing a new type of testing job that will run all Trove
> tests with Neutron enabled on a regular basis, to verify that all our
> networking preparations for the instance are fine.
>
>
> The last thing is the most interesting, and I'd like to discuss it with all
> of you, folks.
> So, I've written an initial job template taking into account the specific
> configuration required by DevStack and trove-integration (see [4]), and I'd
> like to receive all possible feedback as soon as possible.
>
This is great! I'd like to see this work land as well, thanks for
taking this on. I'll add this to my backlog of items to review and
provide some feedback as well.

Thanks,
Kyle

>
>
> [1] - Trove.
> https://github.com/openstack/trove/commit/c68fef2b7a61f297b9fe7764dd430eefd4d4a767
>
> [2] - Trove integration.
> https://github.com/openstack/trove-integration/commit/9f42f5c9b1a0d8844b3e527bcf2eb9474485d23a
>
> [3] - DevStack patchset. https://review.openstack.org/108966
>
> [4] - POC. https://gist.github.com/denismakogon/76d9bd3181781097c39b
>
>
>
> Best regards,
>
> Denis Makogon
>
>
>


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Dan Smith
> I don't see an issue with allowing people to configure 3rd party impl
> for the VIF driver, provided we don't claim that the VIF driver API
> contract is stable, same way we don't claim virt driver API is stable.
> It lets users have a solution to enable custom NIC functionality while
> waiting for Nova to officially support it. If we did remove it, then
> users could still subclass the main libvirt driver class and make
> it use their custom VIF driver, so they'd get to the same place just
> with an extra inconvenient hoop to jump through. So is it worth removing
> vif_driver ?

In my opinion, we should (continue to) remove any of those plug points
that we don't want to actually support as plugin interfaces. The virt
driver plug point at least serves to allow us to develop and test
drivers outside of the tree (ironic and docker, for example) before
merging. The vif_driver (and others) imply that it's a plugin interface,
when we have no intention of making it one, and I think we should nuke them.

--Dan





[openstack-dev] [TripleO][Tuskar] REST API spec for Juno questions

2014-07-23 Thread Petr Blaho
Hi all,

I am working on API endpoints for Tuskar according to
https://github.com/openstack/tripleo-specs/blob/master/specs/juno/tripleo-juno-tuskar-rest-api.rst
and I found some inconsistencies.

In the following lines I will present what I think are mistakes or things I do
not understand well. Please correct me if I am wrong; then I am happy to write
a patch for that spec.

1) UUID vs. id.
I can see usage of UUIDs in urls
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L107)
and UUID is referenced in condition for 404 HTTP status
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L125).
On the other hand we have id in returned json for plan
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L148).
The same applies to roles and their UUIDs or ids.
The problem I am pointing at is not the format of the value but its name.
I am convinced that these should be consistent and that we should use UUIDs.

2) Request Data when adding role to plan.
According to
https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L376
there should be name and version of the role but json example has only
id value
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L382-L384).
I understand that the JSON code is just an example, but I was confused
by the differences between the words describing the data and the example.
I can see from the JSON representation of the roles list
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L508-L527)
that a role can be identified both by UUID/id and by a combination of
name+version.
From the spec for DELETE role from plan
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L405)
I can tell that we will probably be using the name+version identifier to
know which role I want to add to the plan, so the example mentioned above is
just missing the name and version attributes.
Am I correct about this?

3) /v2/clouds in href for plan
This is probably a remnant from previous versions of the spec. We have
/v2/clouds where we probably should have /v2/plans
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L182).

4) Links to roles from plan json
We have a link for each role in a plan that points to a URL like
/v2/roles/:role_uuid
(https://github.com/openstack/tripleo-specs/blame/master/specs/juno/tripleo-juno-tuskar-rest-api.rst#L158).
But we do not have an API endpoint returning a single role.
We should either remove these links to a single role or add a GET
/v2/roles/:role_uuid endpoint and add this kind of link to the list of
roles too.
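
To make point 1 concrete, a consistent representation would use the same key name in URLs and payloads alike. This is only an illustrative sketch, not the spec's exact schema:

```python
# Hypothetical illustration of point 1: one key name ("uuid") both in the
# URL and in the returned JSON, for plans and for roles alike.

def plan_representation(plan_uuid, roles):
    return {
        'uuid': plan_uuid,  # consistently "uuid", never "id"
        'links': [{'rel': 'self', 'href': '/v2/plans/%s' % plan_uuid}],
        'roles': [
            {'uuid': r['uuid'], 'name': r['name'], 'version': r['version']}
            for r in roles
        ],
    }
```

Including both uuid and name+version on each role would also cover the add/delete cases from point 2.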

I proposed solutions to points 1, 2 and 3 in
https://review.openstack.org/#/c/109040/.

Thanks for reading this.
I am looking for your input.
-- 
Petr Blaho, pbl...@redhat.com
Software Engineer



Re: [openstack-dev] [Murano] [Glance] Image tagging

2014-07-23 Thread Tripp, Travis S
Thank you Serg,

Yes, what you are discussing in this thread is actually directly related to 
many of the original reasons we worked on the Graffiti concept POC and then 
revised into the metadata definitions catalog we are working on for Glance.
Basically, you can define objects and properties that you care about in the 
definitions catalog and then use the UI to apply metadata to things like 
images. The UI of course is pulling from a REST API, so this isn’t limited to 
UI use only.  The catalog ensures consistency of applying the metadata so that 
the metadata is useable for users as well as tool automation.  We’ve got 
multiple sets of code in progress which I’ve highlighted below and we have a 
session at the Glance mini-summit this week to talk about it further.

The below are work in progress, but you probably would be able to fetch the 
horizon ones to get an idea of where things currently are.

Glance Metadata Definitions Catalog: https://review.openstack.org/#/c/105904/
Python Glance Client support: https://review.openstack.org/#/c/105231/
Horizon Metadata Tagging Widget: https://review.openstack.org/#/c/104956/
Horizon Admin UI: https://review.openstack.org/#/c/104063/

For Juno, we’ve scaled back some of our original Graffiti concepts (which 
included inheritance, elastic search, etc) to help get things landed in this 
iteration, but then we want to build out from there and would love to work with 
you to help this meet your needs.

Thanks,
Travis

From: Serg Melikyan [mailto:smelik...@mirantis.com]
Sent: Wednesday, July 23, 2014 9:28 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Murano] Image tagging
Importance: High

I would also suggest looking at the Graffiti project. I think Graffiti is
designed to solve problems related to ours with images, however I don't know
how well it fits us. They are working very hard to make the project's
functionality available as part of Glance.

If it really can solve our problem, we can design a solution that exposes
functionality compatible in capabilities with Graffiti and have a limited
short-term implementation that eventually can be replaced by Glance [with
the Metadata Definitions Catalog feature].

On Wed, Jul 23, 2014 at 1:52 AM, Stan Lagun  wrote:
How do you like this alternate design: users can choose any image they want
(say, any Linux) but the JSON that is in the image tag has enough information
on what applications are installed on that image. And not just whether they
are installed, but the exact state at which installation was frozen (say,
binaries are deployed but config files need to be modified). The deployment
workflow can pick up that state from the image tag and continue right from the
place it was stopped last time. So if the user has chosen an image with MySQL
preinstalled the workflow will just post-configure it, while if the user chose
a clean Linux image it will do the whole deployment from scratch. Thus it
becomes only a matter of optimization, and the user will still be able to
share an instance between several applications (a good example is a Firewall
app) or deploy his app even if there is no image where it was built in.
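
A sketch of what such a state-carrying image tag could look like — the schema and field names below are invented for illustration and are not a Murano format:

```python
# Invented example of a JSON image tag carrying frozen application state.
import json

image_tag = json.dumps({
    'os': {'family': 'linux', 'distro': 'ubuntu'},
    'applications': [{
        'name': 'MySQL',
        'state': 'binaries-deployed',          # where installation was frozen
        'pending': ['write-config', 'start-service'],
    }],
})


def resume_point(tag, app_name):
    """Return the steps the deployment workflow still has to run, or None."""
    for app in json.loads(tag)['applications']:
        if app['name'] == app_name:
            return app['pending']
    return None  # app not baked into this image: deploy from scratch
```

A workflow would post-configure when resume_point returns pending steps, and fall back to a full deployment when it returns None.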

Those are only my thoughts and this needs a proper design. For now I agree that
we need to improve tagging to support your use case. But this needs to be done
in a way that both users and machines can work with. The UI at least needs
to distinguish between Linux and Windows, while for users free-form tagging may
be appropriate. Both can be stored in a single JSON tag.

So let's create a blueprint/etherpad for this and both think about the exact
format that can be implemented right now.

Sincerely yours,
Stan Lagun
Principal Software Engineer @ Mirantis

On Tue, Jul 22, 2014 at 10:08 PM, McLellan, Steven  wrote:
Thanks for the response.

Primarily I’m thinking about a situation where I have an image that has a 
specific piece of software installed (let’s say MySQL for the sake of 
argument). My application (which configures mysql) requires a glance image that 
has MySQL pre-installed, and doesn’t particularly care what OS (though again 
for the sake of argument assume it’s linux of some kind, so that configuration 
files are expected to be in the same place regardless of OS).

Currently we have a list of three hardcoded values in the UI, and none of them 
apply properly. I’m suggesting instead of that list, we allow free-form text; 
if you’re tagging glance images, you are expected to know what applications 
will be looking for. This still leaves a problem in that I can upload a package 
but I don’t necessarily have the ability to mark any images as valid for it, 
but I think that can be a later evolution; for now, I’m focusing on the 
situation where an admin is both uploading glance images and murano packages.

As a slight side note, we do have the ability to filter image sizes b

[openstack-dev] [QA] Meeting Thursday July 24th at 22:00 UTC

2014-07-23 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, July 24th at 22:00 UTC in the #openstack-meeting channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

To help people figure out what time 22:00 UTC is in other timezones tomorrow's
meeting will be at:

18:00 EDT
07:00 JST
07:30 ACST
0:00 CEST
17:00 CDT
15:00 PDT
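
For anyone double-checking, the local times above follow from 22:00 UTC with fixed summer offsets; a quick sketch (offsets hard-coded, DST edge cases ignored):

```python
# Convert the 22:00 UTC meeting time to the listed timezones using fixed
# offsets: EDT=-4, JST=+9, ACST=+9.5, CEST=+2, CDT=-5, PDT=-7.
from datetime import datetime, timedelta

utc_meeting = datetime(2014, 7, 24, 22, 0)
offsets = {'EDT': -4, 'JST': 9, 'ACST': 9.5, 'CEST': 2, 'CDT': -5, 'PDT': -7}

local_times = {
    tz: (utc_meeting + timedelta(hours=off)).strftime('%H:%M')
    for tz, off in offsets.items()
}
# JST, ACST and CEST land on the next local day (July 25).
```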

-Matt Treinish




Re: [openstack-dev] FW: [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Doug Wiegley
Do we want any driver interface changes for this?  At one level, with the
current interface, conforming drivers could just reference
listener.sni_containers, with no changes.  But, do we want something in
place so that the API can return an unsupported error for non-TLS v2
drivers?  Or must all v2 drivers support TLS?

doug



On 7/23/14, 10:54 AM, "Evgeny Fedoruk"  wrote:

>My code is here:
>https://review.openstack.org/#/c/109035/1
>
>
>
>-Original Message-
>From: Evgeny Fedoruk
>Sent: Wednesday, July 23, 2014 6:54 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work
>division
>
>Hi Carlos,
>
>As I understand, you are working on a common module for Barbican
>interactions.
>I will commit my code later today and would appreciate it if you and
>anybody else who is interested would review this change.
>There is one specific spot for the common Barbican interactions module
>API integration.
>After the IRC meeting tomorrow, we can discuss the work items and decide
>who is interested/available to do them.
>Does it make sense?
>
>Thanks,
>Evg
>
>-Original Message-
>From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
>Sent: Wednesday, July 23, 2014 6:15 PM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work
>division
>
>Do you have any idea as to how we can split up the work?
>
>On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk 
> wrote:
>
>> Hi,
>> 
>> I'm working on TLS integration with loadbalancer v2 extension and db.
>> Basing on Brandon's  patches https://review.openstack.org/#/c/105609 ,
>>https://review.openstack.org/#/c/105331/  ,
>>https://review.openstack.org/#/c/105610/
>> I will abandon previous 2 patches for TLS which are
>>https://review.openstack.org/#/c/74031/ and
>>https://review.openstack.org/#/c/102837/
>> I am aiming to submit my change later today. It will include lbaas
>>extension v2 modification, lbaas db v2 modifications, alembic migration
>>for schema changes and new tests in unit testing for lbaas db v2.
>> 
>> Thanks,
>> Evg
>> 
>> -Original Message-
>> From: Carlos Garza [mailto:carlos.ga...@rackspace.com]
>> Sent: Wednesday, July 23, 2014 3:54 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
>> 
>>  Since it looks like the TLS blueprint was approved I'm sure we're all
>>eager to start coding, so how should we divide up work on the source code?
>>I have pull requests in pyopenssl
>>"https://github.com/pyca/pyopenssl/pull/143", and a few one-liners in
>>pyca/cryptography to expose the needed low-level code that I'm hoping will be
>>added pretty soon so that the PR 143 tests can pass. In case it doesn't, we
>>will fall back to using the pyasn1_modules as it already also has a
>>means to fetch what we want at a lower level.
>> I'm just hoping that we can split the work up so that we can
>>collaborate together on this without over-serializing the work, where
>>people become dependent on waiting for someone else to complete their
>>work or, worse, one person ending up doing all the work.
>> 
>> 
>>   Carlos D. Garza


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 09:53:55AM -0700, Dan Smith wrote:
> > Hmm, that is not right. There's no intention to remove the vif_driver
> > parameter itself. We were supposed to merely deprecate the various
> > legacy VIF driver implementations in Nova, not remove the ability
> > to use 3rd party ones.
> 
> I'm pretty sure it was deprecated specifically for that reason. Once we
> stopped having the need to provide that as a way to control which
> implementation was used, we (IIRC) marked it as deprecated with the
> intention of removing it. We've been on a path to remove as many of the
> "provide your own class here" plugin points as possible in recent cycles.

I don't see an issue with allowing people to configure 3rd party impl
for the VIF driver, provided we don't claim that the VIF driver API
contract is stable, same way we don't claim virt driver API is stable.
It lets users have a solution to enable custom NIC functionality while
waiting for Nova to officially support it. If we did remove it, then
users could still subclass the main libvirt driver class and make
it use their custom VIF driver, so they'd get to the same place just
with an extra inconvenient hoop to jump through. So is it worth removing
vif_driver ?

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Dan Smith
> Hmm, that is not right. There's no intention to remove the vif_driver
> parameter itself. We were supposed to merely deprecate the various
> legacy VIF driver implementations in Nova, not remove the ability
> to use 3rd party ones.

I'm pretty sure it was deprecated specifically for that reason. Once we
stopped having the need to provide that as a way to control which
implementation was used, we (IIRC) marked it as deprecated with the
intention of removing it. We've been on a path to remove as many of the
"provide your own class here" plugin points as possible in recent cycles.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] FW: [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Evgeny Fedoruk
My code is here:
https://review.openstack.org/#/c/109035/1



-Original Message-
From: Evgeny Fedoruk 
Sent: Wednesday, July 23, 2014 6:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

Hi Carlos,

As I understand it, you are working on a common module for Barbican
interactions.
I will commit my code later today, and I would appreciate it if you, and 
anybody else who is interested, would review this change.
There is one specific spot for integrating the common Barbican interactions 
module API.
After the IRC meeting tomorrow, we can discuss the work items and decide who is 
interested/available to do them.
Does that make sense?

Thanks,
Evg

-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
Sent: Wednesday, July 23, 2014 6:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

Do you have any idea as to how we can split up the work?

On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk 
 wrote:

> Hi,
> 
> I'm working on TLS integration with loadbalancer v2 extension and db.
> Based on Brandon's patches https://review.openstack.org/#/c/105609 , 
> https://review.openstack.org/#/c/105331/  , 
> https://review.openstack.org/#/c/105610/
> I will abandon the previous 2 patches for TLS, which are 
> https://review.openstack.org/#/c/74031/ and 
> https://review.openstack.org/#/c/102837/ 
> I plan to submit my change later today. It will include lbaas extension v2 
> modification, lbaas db v2 modifications, alembic migration for schema changes 
> and new tests in unit testing for lbaas db v2.
> 
> Thanks,
> Evg
> 
> -Original Message-
> From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
> Sent: Wednesday, July 23, 2014 3:54 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
> 
>   Since it looks like the TLS blueprint was approved, I'm sure we're all 
> eager to start coding, so how should we divide up work on the source code. I 
> have pull requests in pyopenssl "https://github.com/pyca/pyopenssl/pull/143";, 
> and a few one-liners in pyca/cryptography to expose the needed low-level 
> functions that I'm hoping will be added pretty soon so that PR 143's tests 
> can pass. In case it doesn't, we will fall back to using the pyasn1_modules, 
> as they already provide a means to fetch what we want at a lower level. 
> I'm just hoping that we can split the work up so that we can collaborate 
> on this without over-serializing the work, where people become dependent on 
> waiting for someone else to complete their work or, worse, one person ending 
> up doing all the work.
> 
>   
>  Carlos D. Garza ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] graduating oslo.middleware

2014-07-23 Thread Ben Nemec
 

I left a comment on one of the commits, but in general here are my
thoughts: 

1) I would prefer not to do things like switch to oslo.i18n outside of
Gerrit. I realize we don't have a specific existing policy for this, but
doing that significant work outside of Gerrit is not desirable IMHO. It
needs to happen either before graduation or after import into Gerrit. 

2) I definitely don't want to be accepting "enable [hacking check]"
changes outside Gerrit. The github graduation step is _just_ to get the
code in shape so it can be imported with the tests passing. It's
perfectly acceptable to me to just ignore any hacking checks during this
step and fix them in Gerrit where, again, the changes can be reviewed. 

At a glance I don't see any problems with the changes that have been
made, but I haven't looked that closely and I think it brings up some
topics for clarification in the graduation process. 

Thanks. 

-Ben 

On 2014-07-22 08:44, gordon chung wrote: 

> hi, 
> 
> following the oslo graduation protocol, could the oslo team review the 
> oslo.middleware library[1] i've created and see if there are any issues. 
> 
> [1] https://github.com/chungg/oslo.middleware [2] 
> 
> cheers,
> _gord_ 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [1]

 

Links:
--
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] https://github.com/chungg/oslo.middleware
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] threading in nova (greenthreads, OS threads, etc.)

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 10:41:06AM -0600, Chris Friesen wrote:
> 
> Hi all,
> 
> I was wondering if someone could point me to a doc describing the threading
> model for nova.
> 
> I know that we use greenthreads to map multiple threads of execution onto a
> single native OS thread.  And the python GIL results in limitations as well.
> 
> According to the description at
> "https://bugs.launchpad.net/tripleo/+bug/1203906"; for nova-api we
> potentially fork off multiple instances because it's database-heavy and we
> don't want to serialize on the database.
> 
> If that's the case, why do we only run one instance of nova-conductor on a
> single OS thread?
> 
> And looking at nova-compute on a compute node with no instances running I
> see 22 OS threads.  Where do these come from?  Are these related to libvirt?
> Or are they forked the way that nova-api is?

Since native C API calls block greenthreads, nova has a native thread pool
that is used for each libvirt API call. A similar thing is done for the
libguestfs API calls, and optionally you can do it in the database driver
too. Basically, any Python module involving native C calls is a candidate
for a native thread pool.
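The pattern Daniel describes can be sketched with the standard library; nova itself uses eventlet's tpool for this, but the shape is the same: the blocking C-level call is handed to a real OS thread so the event loop is never stalled. The libvirt call below is a hypothetical stand-in, and ThreadPoolExecutor stands in for eventlet's pool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for eventlet's native thread pool. In nova, tpool.execute()
# additionally cooperates with the green hub while waiting.
_native_pool = ThreadPoolExecutor(max_workers=4)

def blocking_libvirt_call():
    # Pretend this is a libvirt C API call that blocks outside Python,
    # invisible to the greenthread scheduler.
    time.sleep(0.05)
    return ['instance-0001']

def list_domains():
    # Dispatch the blocking call to a native worker thread and wait
    # for its result instead of running it on the event loop.
    return _native_pool.submit(blocking_libvirt_call).result()
```

Run on its own, list_domains() returns the worker thread's result while the dispatching thread stays free to service other work.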

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Daniel P. Berrange
On Wed, Jul 23, 2014 at 07:38:05PM +0300, Itzik Brown wrote:
> Hi,
> 
> I see that the option to specify vif_driver in nova.conf for libvirt is
> deprecated for the Juno release.

Hmm, that is not right. There's no intention to remove the vif_driver
parameter itself. We were supposed to merely deprecate the various
legacy VIF driver implementations in Nova, not remove the ability
to use 3rd party ones.

> What is the way to use an external VIF driver (i.e. one that is out of
> tree)?

Continue using the 'vif_driver' config parameter.
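For reference, a hedged sketch of what that can look like in nova.conf; the option group reflects the Juno-era config layout and "mydriver.vif.MyCustomVIFDriver" is a hypothetical out-of-tree class, so check your release's defaults:

```ini
[libvirt]
# Default is nova.virt.libvirt.vif.LibvirtGenericVIFDriver; point it
# at an out-of-tree class to load a 3rd party VIF driver.
vif_driver = mydriver.vif.MyCustomVIFDriver
```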

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] threading in nova (greenthreads, OS threads, etc.)

2014-07-23 Thread Chris Friesen


Hi all,

I was wondering if someone could point me to a doc describing the 
threading model for nova.


I know that we use greenthreads to map multiple threads of execution 
onto a single native OS thread.  And the python GIL results in 
limitations as well.


According to the description at 
"https://bugs.launchpad.net/tripleo/+bug/1203906"; for nova-api we 
potentially fork off multiple instances because it's database-heavy and 
we don't want to serialize on the database.


If that's the case, why do we only run one instance of nova-conductor on 
a single OS thread?


And looking at nova-compute on a compute node with no instances running 
I see 22 OS threads.  Where do these come from?  Are these related to 
libvirt?  Or are they forked the way that nova-api is?


Any pointers would be appreciated.

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Trove] Guest prepare call polling mechanism issue

2014-07-23 Thread Tim Simpson
To summarize, this is a conversation about the following LaunchPad bug: 
https://launchpad.net/bugs/1325512
and Gerrit review: https://review.openstack.org/#/c/97194/6

You are saying that the function "_service_is_active", in addition to polling 
the datastore service status, also polls the status of the Nova resource. At 
first I thought this wasn't the case; however, looking at your pull request I 
was surprised to see that line 320 
(https://review.openstack.org/#/c/97194/6/trove/taskmanager/models.py) polls 
Nova using the "get" method (which I wish was called "refresh", as to me it 
sounds like a lazy loader or something, despite making a full GET request each 
time).
So moving this polling out of there into the two respective "create_server" 
methods, as you have done, is not only going to be useful for Heat and avoid 
the issue of calling Nova 99 times that you describe, but it will actually 
help operations teams see more clearly that the issue was with a server that 
didn't provision. We actually had an issue in staging the other day that took 
us forever to figure out, because the server wasn't provisioning, but before 
anything checked that it was ACTIVE, the DNS code detected that the server had 
no IP address (never mind that it was in a FAILED state), so the logs surfaced 
this as a DNS error. This change should help us avoid such issues.

Thanks,

Tim



From: Denis Makogon [dmako...@mirantis.com]
Sent: Wednesday, July 23, 2014 7:30 AM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Trove] Guest prepare call polling mechanism issue


Hello, Stackers.


I’d like to discuss guestagent prepare call polling mechanism issue (see [1]).


Let me first describe why this is actually an issue and why it should be fixed. 
Those of you who are familiar with Trove know that Trove can provision 
instances through the Nova API and the Heat API (see [2] and [3]).



What’s the difference between these two ways (in general)? The answer is 
simple:

- The Heat-based provisioning method has a polling mechanism that verifies that 
stack provisioning completed with a successful state (see [4]), which means 
that all stack resources are in the ACTIVE state.

- The Nova-based provisioning method doesn’t do any polling, which is wrong: an 
instance can’t fail as fast as possible, because the Trove taskmanager service 
doesn’t verify that the launched server reached the ACTIVE state. That’s issue 
#1 - the compute instance state is unknown, whereas with Heat all delivered 
resources are already in the ACTIVE state.


Once method [2] or [3] finishes, the taskmanager prepares data for the guest 
(see [5]) and then tries to send the prepare call to the guest (see [6]). Here 
comes issue #2 - the polling mechanism makes at least 100 API calls to Nova to 
determine the compute instance status.

The taskmanager also makes almost the same number of calls to the Trove backend 
to discover the guest status, which is totally normal.


So, here comes the question: why should I call Nova another 99 times for the 
same value if the value returned the first time was completely acceptable?



There’s only one way to fix it. Since Heat-based provisioning delivers an 
instance with a status validation procedure, the same thing should be done for 
Nova-based provisioning: we should extract the compute instance status polling 
from the guest prepare polling mechanism and integrate it into [2], leaving 
only guest status discovery in the guest prepare polling mechanism.





Benefits? The proposed fix will allow corrupted instances to fail fast, and it 
will reduce the number of redundant Nova API calls made while attempting to 
discover the guest status.



Proposed fix for this issue - [7].


[1] - https://launchpad.net/bugs/1325512

[2] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L198-L215

[3] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L190-L197

[4] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L420-L429

[5] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L217-L256

[6] - 
https://github.com/openstack/trove/blob/master/trove/taskmanager/models.py#L254-L266

[7] - https://review.openstack.org/#/c/97194/



Thoughts?


Best regards,

Denis Makogon
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Configuring libvirt VIF driver

2014-07-23 Thread Itzik Brown

Hi,

I see that the option to specify vif_driver in nova.conf for libvirt is 
deprecated for the Juno release.
What is the way to use an external VIF driver (i.e. one that is out of 
tree)?


Itzik


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] putting [tag] in LP bug titles instead of using LP tags

2014-07-23 Thread Mike Scherbakov
I'm not against creating bugs initially with such a title to make visual
search easier.
However, I think that re-titling existing bugs is not needed, as it leads to
spam.

Mike Scherbakov
#mihgen
On Jul 23, 2014 4:24 AM, "Dmitry Borodaenko" 
wrote:

> +1
>
> To provide some more context, we discussed this in the team meeting last
> week:
>
> http://eavesdrop.openstack.org/meetings/fuel/2014/fuel.2014-07-17-16.00.log.html#l-107
>
> and agreed to stop doing it until further discussion, or at all.
>
>
> On Tue, Jul 22, 2014 at 4:36 PM, Andrew Woodward  wrote:
> > There has been an increased occurrence of using [tag] in the title
> instead
> > of adding tag to the tags section of the LP bugs for Fuel.
> >
> > As we discussed in the Fuel meeting last Thursday, We should stop doing
> this
> > as it causes several issues
> > * It spams e-mail.
> > * It breaks threading that your mail client may perform as it changes the
> > subject.
> > * They aren't searchable as easily as tags
> > * They are going to look even more ugly when more tags are added or
> removed
> > from the bug.
> >
> > --
> > Andrew
> > Mirantis
> > Ceph community
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
>
> --
> Dmitry Borodaenko
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [NFV] Meeting Summary - 2014-07-23-14.00

2014-07-23 Thread Steve Gordon
Hi all,

Please find the summaries and full logs for today's NFV sub team meeting at 
these locations:

Summary (HTML): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-07-23-14.00.html
Full Log (HTML): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-07-23-14.00.log.html
Summary (TXT): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-07-23-14.00.log.txt
Full Log (TXT): 
http://eavesdrop.openstack.org/meetings/nfv/2014/nfv.2014-07-23-14.00.txt

Thanks,

Steve

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [marconi] Meeting time change

2014-07-23 Thread Kurt Griffiths
OK, I just checked and 1400 and 1500 are already taken, unless we want to
move our meetings to #openstack-meeting-3. If we want to stick with
#openstack-meeting-alt, it will have to be 1300 UTC.

On 7/22/14, 5:28 PM, "Flavio Percoco"  wrote:

>On 07/22/2014 06:08 PM, Kurt Griffiths wrote:
>> FYI, we chatted about this in #openstack-marconi today and decided to
>>try
>> 2100 UTC for tomorrow. If we would like to alternate at an earlier time
>> every other week, is 1900 UTC good, or shall we do something more like
>> 1400 UTC?
>
>
>We can keep the same time we're using, if possible. That is, 15UTC. If
>that slot is taken, then 14UTC sounds good.
>
>Cheers,
>Flavio

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] HTTPS client breaks nova

2014-07-23 Thread Rob Crittenden
Rob Crittenden wrote:
> It looks like the switch to requests in python-glanceclient
> (https://review.openstack.org/#/c/78269/) has broken nova when SSL is
> enabled.
> 
> I think it is related to the custom object that the glanceclient uses.
> If another connection gets pushed into the pool then things fail because
> the object isn't a glanceclient VerifiedHTTPSConnection object.
> 
> The error seen is:
> 
> 2014-07-22 16:20:57.571 ERROR nova.api.openstack
> req-e9a94169-9af4-45e8-ab95-1ccd3f8caf04 admin admin Caught error:
> VerifiedHTTPSConnection instance has no attribute 'insecure'
> 
> What I see is that nova works until glance is invoked.
> 
> These all work:
> 
> $ nova flavor-list
> $ glance image-list
> $ nova net-list
> 
> Now make it go boom:
> 
> $ nova image-list
> ERROR (Unauthorized): Unauthorized (HTTP 401) (Request-ID:
> req-ee964e9a-c2a9-4be9-bd52-3f42c805cf2c)
> 
> Now that a bad object is now in the pool nothing in nova works:
> 
> $ nova list
> ERROR (Unauthorized): Unauthorized (HTTP 401) (Request-ID:
> req-f670db83-c830-4e75-b29f-44f61ae161a1)
> 
> A restart of nova gets things back to normal.
> 
> I'm working on enabling SSL everywhere
> (https://bugs.launchpad.net/devstack/+bug/1328226) either directly or
> using TLS proxies (stud).
> I'd like to eventually get SSL testing done as a gate job which will
> help catch issues like this in advance.
> 
> rob

FYI, my temporary workaround is to change the queue name (scheme) so the
glance clients are handled separately:

diff --git a/glanceclient/common/https.py b/glanceclient/common/https.py
index 6416c19..72ed929 100644
--- a/glanceclient/common/https.py
+++ b/glanceclient/common/https.py
@@ -72,7 +72,7 @@ class HTTPSAdapter(adapters.HTTPAdapter):
 def __init__(self, *args, **kwargs):
 # NOTE(flaper87): This line forces poolmanager to use
 # glanceclient HTTPSConnection
-poolmanager.pool_classes_by_scheme["https"] = HTTPSConnectionPool
+poolmanager.pool_classes_by_scheme["glance_https"] = HTTPSConnectionPool
 super(HTTPSAdapter, self).__init__(*args, **kwargs)

 def cert_verify(self, conn, url, verify, cert):
@@ -92,7 +92,7 @@ class
HTTPSConnectionPool(connectionpool.HTTPSConnectionPool):
 be used just when the user sets --no-ssl-compression.
 """

-scheme = 'https'
+scheme = 'glance_https'

 def _new_conn(self):
 self.num_connections += 1

This at least lets me continue working.
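The underlying hazard is that the connection-pool registry is keyed only by URL scheme and shared across the whole process. A toy model (an illustration only, not urllib3's actual code) shows why glanceclient's override leaked into nova's other HTTPS clients, and why the private-scheme workaround isolates it:

```python
# Toy process-global registry, keyed by scheme the way urllib3's
# pool_classes_by_scheme is (illustration only, not the real code).
pool_classes_by_scheme = {'https': 'StandardHTTPSPool'}

def new_pool(scheme):
    # Every client in the process resolves pools through this one dict.
    return pool_classes_by_scheme[scheme]

# Overriding the shared 'https' key, as glanceclient did, changes what
# *every* later caller gets -- including nova's other HTTPS clients.
pool_classes_by_scheme['https'] = 'GlanceHTTPSPool'
assert new_pool('https') == 'GlanceHTTPSPool'

# The workaround: restore the default and register the custom pool
# under a private scheme, so only callers that ask for it are affected.
pool_classes_by_scheme['https'] = 'StandardHTTPSPool'
pool_classes_by_scheme['glance_https'] = 'GlanceHTTPSPool'
```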

rob

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Use FQDN in Ring files instead of "ip"

2014-07-23 Thread John Dickinson
Using hostnames instead of IPs is, as mentioned above, something under 
consideration in that patch.

However, note that until now, we've intentionally kept it as just IP addresses 
since using hostnames adds a lot of operational complexity and burden. I 
realize that hostnames may be preferred in some cases, but this places a very 
large strain on DNS systems. So basically, it's a question of do we add the 
feature, knowing that most people who use it will in fact be making their lives 
more difficult, or do we keep it out, knowing that we won't be serving those 
who actually require the feature.

--John



On Jul 23, 2014, at 2:29 AM, Matsuda, Kenichiro 
 wrote:

> Hi,
> 
> Thank you for the info.
> I was able to understand that hostname support is under developing.
> 
> Best Regards,
> Kenichiro Matsuda.
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

2014-07-23 Thread Evgeny Fedoruk
Hi Carlos,

As I understand it, you are working on a common module for Barbican
interactions.
I will commit my code later today, and I would appreciate it if you, and 
anybody else who is interested, would review this change.
There is one specific spot for integrating the common Barbican interactions 
module API.
After the IRC meeting tomorrow, we can discuss the work items and decide who is 
interested/available to do them.
Does that make sense?

Thanks,
Evg

-Original Message-
From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
Sent: Wednesday, July 23, 2014 6:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] TLS capability - work division

Do you have any idea as to how we can split up the work?

On Jul 23, 2014, at 6:01 AM, Evgeny Fedoruk 
 wrote:

> Hi,
> 
> I'm working on TLS integration with loadbalancer v2 extension and db.
> Based on Brandon's patches https://review.openstack.org/#/c/105609 , 
> https://review.openstack.org/#/c/105331/  , 
> https://review.openstack.org/#/c/105610/
> I will abandon the previous 2 patches for TLS, which are 
> https://review.openstack.org/#/c/74031/ and 
> https://review.openstack.org/#/c/102837/ 
> I plan to submit my change later today. It will include lbaas extension v2 
> modification, lbaas db v2 modifications, alembic migration for schema changes 
> and new tests in unit testing for lbaas db v2.
> 
> Thanks,
> Evg
> 
> -Original Message-
> From: Carlos Garza [mailto:carlos.ga...@rackspace.com] 
> Sent: Wednesday, July 23, 2014 3:54 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [Neutron][LBaaS] TLS capability - work division
> 
>   Since it looks like the TLS blueprint was approved, I'm sure we're all 
> eager to start coding, so how should we divide up work on the source code. I 
> have pull requests in pyopenssl "https://github.com/pyca/pyopenssl/pull/143";, 
> and a few one-liners in pyca/cryptography to expose the needed low-level 
> functions that I'm hoping will be added pretty soon so that PR 143's tests 
> can pass. In case it doesn't, we will fall back to using the pyasn1_modules, 
> as they already provide a means to fetch what we want at a lower level. 
> I'm just hoping that we can split the work up so that we can collaborate 
> on this without over-serializing the work, where people become dependent on 
> waiting for someone else to complete their work or, worse, one person ending 
> up doing all the work.
> 
>   
>  Carlos D. Garza ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Manage multiple clusters using a single nova service

2014-07-23 Thread Dan Smith
>> I just do not support the idea that Nova needs to change its 
>> fundamental design in order to support the *design* of other host 
>> management platforms.
> 
> The current implementation doesn't make nova change its design, the 
> scheduling decisions are still done by nova.

Nova's design is not just "making the scheduling decisions" but also
includes the deployment model, which is intended to be a single compute
service tied to a single hypervisor. I think that's important for scale
and failure isolation at least.

> It's only the deployment that has been changed. I agree that there are 
> no separate topic-exchange queues for each cluster.

I'm definitely with Jay here: I want to get away from hiding larger
systems behind a single compute host/service.

--Dan



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Image tagging

2014-07-23 Thread Serg Melikyan
I would also suggest looking at the Graffiti project. I think Graffiti is
designed to solve problems like ours with images, though I don't know
how well it fits us. They are working very hard to make the project's
functionality available as part of Glance.

If it really can solve our problem, we can design a solution that exposes
functionality compatible in capabilities with Graffiti, and have a limited
short-term implementation that can eventually be replaced by Glance
[with the *Metadata Definitions Catalog* feature].


On Wed, Jul 23, 2014 at 1:52 AM, Stan Lagun  wrote:

> How do you like this alternate design: users can choose any image they want
> (say, any Linux), but the JSON in the image tag has enough information about
> which applications are installed on that image. And not just whether they are
> installed, but the exact state in which the installation was frozen (say,
> binaries are deployed but config files still need to be modified). The
> deployment workflow can pick that state up from the image tag and continue
> right from the place it was stopped last time. So if the user has chosen an
> image with MySQL preinstalled, the workflow will just post-configure it,
> while if the user has chosen a clean Linux image, it will do the whole
> deployment from scratch. Thus it becomes only a matter of optimization, and
> the user will still be able to share an instance between several applications
> (a good example is a Firewall app) or deploy their app even if there is no
> image where it was built in.
>
> Those are only my thoughts, and this needs a proper design. For now I agree
> that we need to improve tagging to support your use case. But this needs to
> be done in a way that allows both a user and a machine to work with it. The
> UI at least needs to distinguish between Linux and Windows, while for the
> user free-form tagging may be appropriate. Both can be stored in a single
> JSON tag.
>
> So let's create a blueprint/etherpad for this, and both think on the exact
> format that can be implemented right now.
>
> Sincerely yours,
> Stan Lagun
> Principal Software Engineer @ Mirantis
>
>  
>
>
> On Tue, Jul 22, 2014 at 10:08 PM, McLellan, Steven 
> wrote:
>
>>  Thanks for the response.
>>
>>
>>
>> Primarily I’m thinking about a situation where I have an image that has a
>> specific piece of software installed (let’s say MySQL for the sake of
>> argument). My application (which configures MySQL) requires a glance image
>> that has MySQL pre-installed, and doesn’t particularly care what OS (though
>> again for the sake of argument assume it’s Linux of some kind, so that
>> configuration files are expected to be in the same place regardless of OS).
>>
>>
>>
>> Currently we have a list of three hardcoded values in the UI, and none of
>> them apply properly. I’m suggesting instead of that list, we allow
>> free-form text; if you’re tagging glance images, you are expected to know
>> what applications will be looking for. This still leaves a problem in that
>> I can upload a package but I don’t necessarily have the ability to mark any
>> images as valid for it, but I think that can be a later evolution; for now,
>> I’m focusing on the situation where an admin is both uploading glance
>> images and murano packages.
>>
>>
>>
>> As a slight side note, we do have the ability to filter image sizes based
>> on Glance properties (RAM, CPUs), but this is in the UI code, not enforced
>> at the contract level. I agree that moving some of this to the contract
>> level is a good goal, but it seems like that would involve major
>> reengineering of the dashboard to make it much dumber and go through the
>> Murano API for everything (which ultimately is probably a good thing).
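[Editor's sketch: a rough Python analogue of the UI-side filtering Steven describes. The image records and property names are simplified stand-ins for real Glance metadata, not Murano's actual dashboard code.]

```python
# Simplified stand-ins for Glance image records; real images carry
# min_ram/min_disk plus arbitrary custom properties.
images = [
    {"name": "ubuntu-mysql", "min_ram": 2048,
     "properties": {"os_family": "Linux"}},
    {"name": "windows-iis", "min_ram": 4096,
     "properties": {"os_family": "Windows"}},
]


def usable_images(images, flavor_ram, os_family=None):
    """Filter as the dashboard does today: enforce that the flavor has
    enough RAM, and optionally match an OS-family property, entirely
    on the client side rather than at the contract level."""
    for image in images:
        if image["min_ram"] > flavor_ram:
            continue
        if os_family and image["properties"].get("os_family") != os_family:
            continue
        yield image["name"]


print(list(usable_images(images, flavor_ram=2048, os_family="Linux")))
# ['ubuntu-mysql']
```

Pushing the same predicate down into MuranoPL contracts, as discussed above, would let the API enforce it instead of each UI reimplementing it.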
>>
>>
>>
>> *From:* Stan Lagun [mailto:sla...@mirantis.com]
>> *Sent:* Sunday, July 20, 2014 5:42 AM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [Murano] Image tagging
>>
>>
>>
>> Hi!
>>
>>
>>
>> I think it would be useful to share the original vision of tagging that
>> we had back in the 0.4 era, when it was introduced.
>>
>> Tagging was supposed to be JSON image metadata with an extensible schema.
>> Workflows should be able to both utilize that metadata and impose some
>> constraints on it. That feature was never really designed, so I cannot tell
>> exactly how this JSON should work or what it should look like. As far as I
>> see it, it could contain:
>>
>>
>>
>> 1. Operating system information. For example "os": { "family": "Linux",
>> "name": "Ubuntu", "version": "12.04", "arch": "x86_64" } (this could also be
>> encoded as a single string)
>>
>> Workflows (MuranoPL contracts) need to be able to express
>> requirements based on those attributes. For example
>>
>>
>>
>> image:
>>
>>   Contract($.class(Image).check($.family = Linux and $.arch = x86))
>>
>>
>>
>>    In the UI, only those images that match such a contract should be displayed.
>>
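[Editor's sketch: a rough Python analogue of the predicate that MuranoPL contract expresses — not the real contract engine, just an illustration of how such a check would narrow the image list shown in the UI. The candidate records are made up.]

```python
def check_image(image_meta):
    # Python analogue of the contract above:
    #   Contract($.class(Image).check($.family = Linux and $.arch = x86))
    return (image_meta.get("family") == "Linux"
            and image_meta.get("arch") == "x86")


candidates = [
    {"title": "Ubuntu Linux 12.04 x86", "family": "Linux", "arch": "x86"},
    {"title": "Windows Server 2012", "family": "Windows", "arch": "x86_64"},
]

# Only images passing the contract check would appear in the selector.
print([c["title"] for c in candidates if check_image(c)])
# ['Ubuntu Linux 12.04 x86']
```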
>>
>>
>> 2. Human readable image title "Ubuntu Linux 12.04 x86"
>>
>>
>>
>> 

[openstack-dev] [neutron] [nova] neutron / nova-network parity meeting minutes

2014-07-23 Thread Kyle Mestery
For those interested in the progress of this particular task, meeting
minutes are available at the below:

http://eavesdrop.openstack.org/meetings/neutron_nova_network_parity/2014/

Thanks to all who attended!

Kyle


