Re: [openstack-dev] [Solum] Proposed changes to solum-core

2014-01-27 Thread Rajdeep Dua
Congrats new reviewers



On Tuesday, January 28, 2014 8:54 AM, Noorul Islam Kamal Malmiyoda 
 wrote:
 
On Tue, Jan 28, 2014 at 2:30 AM, Adrian Otto  wrote:
> Solum Core Reviewers,
>
> Thanks everyone for your feedback. I have made the adjustments. Welcome to 
> the core group Angus and Noorul. Thanks again Monty.
>


Thank you all for the votes.

Regards,
Noorul



> On Jan 27, 2014, at 12:54 PM, Kurt Griffiths 
>  wrote:
>
>> +1
>>
>> On 1/27/14, 11:54 AM, "Monty Taylor"  wrote:
>>
>>> On 01/24/2014 05:32 PM, Adrian Otto wrote:
 Solum Core Reviewers,

 I propose the following changes to solum-core:

 +asalkeld
 +noorul
 -mordred

 Thanks very much to mordred for helping me to bootstrap the reviewer
 team. Please reply with your votes.
>>>
>>> +1
>>>
>>> My pleasure - you guys seem like you're off to the races - and asalkeld
>>> and noorul are both doing great.
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] Re: Proposed Logging Standards

2014-01-27 Thread Haiming Yang
I think it is also good for the general i18n effort.

-----Original Message-----
From: "Christopher Yeoh" 
Sent: 2014/1/28 11:02
To: "OpenStack Development Mailing List (not for usage questions)" 
Subject: Re: [openstack-dev] Proposed Logging Standards

On Tue, Jan 28, 2014 at 12:55 AM, Sean Dague  wrote:

On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
> Hi Sean,
>
> I'm currently working on moving away from the "built-in" logging to use 
> log_config= and the python logging framework so that we can start 
> shipping to logstash/sentry/.
>
> I'd be very interested in getting involved in this, especially from a "why do 
> we have log messages that are split across multiple lines" perspective!


Do we have many that aren't either DEBUG or TRACE? I thought we were
pretty clean there.


> Cheers,
>
> Matt
>
> P.S. FWIW, I'd also welcome details on what the "Audit" level gives us that 
> the others don't... :)


Well, as far as I can tell the AUDIT level was a prior drive-by
contribution that's not being actively maintained. Honestly, I think we
should probably rip it out, because I don't see any in-tree tooling to
use it, and it's horribly inconsistent.





For the uses I've seen of it in the nova api code INFO would be perfectly fine 
in place of AUDIT.


I'd be happy to help out with patches to cleanup the logging in n-api.


One other thing to look at: I've noticed that when something like
glanceclient code (just as an example) is called from nova, we can get
ERROR-level messages for, say, image not found, when it's actually perfectly
expected that this will occur.
I'm not sure if we should be changing the error level in glanceclient or just
forcing any error logging in glanceclient to a lower level when called from
nova, though.
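As a concrete standalone sketch of the caller-side option (NotFound here is a stand-in for glanceclient's real exception class, and the helper name is invented):

```python
import logging

LOG = logging.getLogger(__name__)


class NotFound(Exception):
    """Stand-in for glanceclient's image-not-found exception."""


def get_image_or_none(client, image_id):
    """Treat a missing image as an expected condition, not an ERROR."""
    try:
        return client.get(image_id)
    except NotFound:
        # Expected during races (e.g. image deleted while an instance
        # is booting), so log at DEBUG instead of letting an ERROR
        # line land in the nova logs.
        LOG.debug("image %s not found", image_id)
        return None
```

This keeps the decision about severity in the caller, which knows whether the miss was expected, rather than in the client library.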



Chris


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Irena Berezovsky
Hi Nrupal,
We definitely consider both these cases.
Agree with you that we should aim to support both.

BR,
Irena


From: Jani, Nrupal [mailto:nrupal.j...@intel.com]
Sent: Monday, January 27, 2014 11:17 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi,

There are two possibilities for the hybrid compute nodes:

-  In the first case, a compute node has two NICs: one SRIOV NIC and the
other NIC for VirtIO.

-  In the second case, a compute node has only one SRIOV NIC, where VFs are
used for the VMs (either macvtap or direct assignment), and the PF is used for
the uplink to the Linux bridge or OVS!!

My question to the team is whether we consider both of these deployments or not?

Thx,

Nrupal

From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, January 27, 2014 1:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking a
proper name, let's call them compute nodes with hybrid NICs support, or
simply hybrid compute nodes.

I'm not sure if it's practical to have hybrid compute nodes in a real cloud,
but it may be useful in the lab to benchmark the performance differences
between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes can actually be preferred in a real
cloud, since one can define a VM with one vNIC attached via an SR-IOV virtual
function while the other is attached via some vSwitch.
But it definitely makes sense to land a VM with 'virtio' vNICs only on a
non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.
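Purely as an illustration of that preference idea (the host flags and vnic type names below are assumptions, not nova's actual scheduler interface):

```python
def order_hosts(hosts, requested_vnic_types):
    """Order candidate hosts by preference: SRIOV-capable hosts for
    'direct'/'macvtap' vNIC requests, plain hosts for virtio-only
    requests, so SRIOV capacity stays free for the VMs that need it."""
    needs_sriov = bool({"direct", "macvtap"} & set(requested_vnic_types))

    def score(host):
        if needs_sriov:
            return 1 if host["sriov_capable"] else 0
        # virtio-only request: prefer hosts without SRIOV NICs
        return 0 if host["sriov_capable"] else 1

    return sorted(hosts, key=score, reverse=True)
```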

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image  --nic net-id= vm

doesn't require an SRIOV port. However, it's possible for the nova scheduler to
place it on a compute node that supports SRIOV ports only. Since the neutron
plugin runs on the controller, port-create would succeed unless neutron knows
the host doesn't support non-SRIOV ports. But connectivity on the node would
not be established, since no agent is running on that host to establish such
connectivity.
[IrenaB]
With the ML2 plugin as the neutron backend, port binding will fail if no agent
is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the
host to support SRIOV, would binding succeed in the ML2 plugin for the above
'nova boot' request?
[IrenaB] I think that by adding the vnic_type, as we plan, the Mechanism Driver
will bind the port only if it supports the vnic_type and there is a live agent
on this host. So it should work.

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had submitted introduced a new constraint in the existing PCI
passthrough filter.
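The shape of such a constraint, sketched standalone (nova's real PciPassthroughFilter works against PCI requests and host PCI stats; the capability flags here are invented):

```python
def host_passes(host_state, request_spec):
    """Toy version of the extra constraint: a host with only SRIOV
    ports should not receive a VM that requested no SRIOV vNICs."""
    wants_sriov = request_spec.get("pci_requests", [])
    if wants_sriov:
        # SRIOV requested: only SRIOV-capable hosts qualify
        return host_state["sriov_capable"]
    # non-SRIOV request: only land on hosts that can wire virtio ports
    return host_state["supports_virtio"]
```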

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Sounds great! Let's do it on Thursday.

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Irena Berezovsky
Hi Robert,
Thank you for raising this issue.
Neutron-side support for hybrid compute nodes is part of the mission I want to
achieve by implementing:
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type.
I think it should be allowed to run more than one agent on a certain node, and
the Mechanism Driver will bind the port if:

1. It supports the requested vnic_type

2. It is capable of managing the segment for the requested port (taking into
account physical network, network type, alive agent, ...)

I think at least for now, new agents will be added and not mixed into the
existing ones. But it may be a good idea to come up with a Modular Agent.
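Those two binding conditions can be sketched standalone (the dict-based agents and the try_bind() helper are illustrative; the real ML2 mechanism driver API differs):

```python
def try_bind(vnic_type, physical_network, agents):
    """Bind only if some live agent on the host supports the requested
    vnic_type (condition 1) and can manage the segment's physical
    network (condition 2)."""
    for agent in agents:
        if not agent["alive"]:
            continue
        if vnic_type not in agent["supported_vnic_types"]:
            continue
        if physical_network not in agent["bridge_mappings"]:
            continue
        return agent["host"]          # bound on this host
    return None                       # no suitable agent: binding fails
```

A hybrid compute node would simply report several agents here (e.g. an OVS agent plus an SRIOV agent), each matching different vnic types.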

BR,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 11:16 PM
To: Irena Berezovsky; OpenStack Development Mailing List (not for usage 
questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Ok, this is something that's going to be added in ML2. I was looking at the
bind_port() routine in mech_agent.py. The routine check_segment_for_agent()
seems to be performing a static check. So we are going to add something like
check_vnic_type_for_agent(), I guess? Is the pairing of an agent with the
mechanism driver predetermined? The routine bind_port() just throws warnings,
though.

In any case, this happens after the scheduler has decided to place the VM onto
the host.

Maybe not for now, but we need to consider how to support hybrid compute
nodes. Would an agent be able to support multiple vnic types? Or is it possible
to reuse the OVS agent while at the same time running another agent to support
SRIOV? Any thoughts?

--Robert

On 1/27/14 4:01 PM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking a
proper name, let's call them compute nodes with hybrid NICs support, or
simply hybrid compute nodes.

I'm not sure if it's practical to have hybrid compute nodes in a real cloud,
but it may be useful in the lab to benchmark the performance differences
between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes can actually be preferred in a real
cloud, since one can define a VM with one vNIC attached via an SR-IOV virtual
function while the other is attached via some vSwitch.
But it definitely makes sense to land a VM with 'virtio' vNICs only on a
non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image  --nic net-id= vm

doesn't require an SRIOV port. However, it's possible for the nova scheduler to
place it on a compute node that supports SRIOV ports only. Since the neutron
plugin runs on the controller, port-create would succeed unless neutron knows
the host doesn't support non-SRIOV ports. But connectivity on the node would
not be established, since no agent is running on that host to establish such
connectivity.
[IrenaB]
With the ML2 plugin as the neutron backend, port binding will fail if no agent
is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the
host to support SRIOV, would binding succeed in the ML2 plugin for the above
'nova boot' request?
[IrenaB] I think that by adding the vnic_type, as we plan, the Mechanism Driver
will bind the port only if it supports the vnic_type and there is a live agent
on this host. So it should work.

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had submitted introduced a new constraint in the existing PCI
passthrough filter.

The consensus seems to be having a better solution in a later release. And for
now, people can either use host aggregate or resort to their own means.

Re: [openstack-dev] extending keystone identity

2014-01-27 Thread Dolph Mathews
From your original email, it sounds like you want to extend the existing
LDAP identity driver implementation, rather than writing a custom driver
from scratch, which is what you've written. The TemplatedCatalog driver
sort of follows that pattern with the KVS catalog driver, although it's not
a spectacular example.
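A minimal standalone sketch of that pattern (LdapIdentity is a stand-in for keystone's real LDAP backend class, and the hard-coded user table is illustrative; in practice it would come from keystone.conf):

```python
SPECIAL_USERS = {"svc-monitor": "s3cret"}   # illustrative; read from keystone.conf


class LdapIdentity:
    """Stand-in for the existing LDAP identity driver."""

    def authenticate(self, user_id, password, domain_scope=None):
        # the real driver would bind against the LDAP server here
        return {"id": user_id, "source": "ldap"}


class HybridIdentity(LdapIdentity):
    """Check a small hard-coded user list first, then fall back to LDAP."""

    def authenticate(self, user_id, password, domain_scope=None):
        if user_id in SPECIAL_USERS:
            if SPECIAL_USERS[user_id] != password:
                raise ValueError("invalid credentials")
            return {"id": user_id, "source": "local"}
        return super().authenticate(user_id, password, domain_scope)
```

The point is that only authenticate() is overridden; every other identity operation is inherited from the existing driver rather than reimplemented.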


On Mon, Jan 27, 2014 at 9:11 PM, Simon Perfer wrote:

> I dug a bit more and found this in the logs:
>
> (keystone.common.wsgi): 2014-01-27 19:07:13,851 WARNING The action you
> have requested has not been implemented.
>
>
> Despite basing my (super simple) code on the SQL or LDAP backends, I must
> be doing something wrong.
>
>
> -->> I've placed my backend code in 
> /usr/share/pyshared/keystone/identity/backends/nicira.py
> or /usr/share/pyshared/keystone/common/nicira.py
>
>
> -->> I DO see the "my authenticate module loaded" in the log
>
>
> I would appreciate any help in figuring out what I'm missing. Thanks!
>
>
>
> --
> From: simon.per...@hotmail.com
> To: openstack-dev@lists.openstack.org
> Date: Mon, 27 Jan 2014 21:58:43 -0500
>
> Subject: Re: [openstack-dev] extending keystone identity
>
> Dolph, I appreciate the response and pointing me in the right direction.
>
> Here's what I have so far:
>
> 
>
> CONF = config.CONF
>
> LOG = logging.getLogger(__name__)
>
>
> class Identity(identity.Driver):
>     def __init__(self):
>         super(Identity, self).__init__()
>         LOG.debug('My authentication module loaded')
>
>     def authenticate(self, user_id, password, domain_scope=None):
>         LOG.debug('in authenticate method')
>
>
> When I request a user-list via the python-keystoneclient, we never make it
> into the authenticate method (as is evident by the missing debug log).
>
>
> Any thoughts on why I'm not hitting this method?
>
>
>
> --
> From: dolph.math...@gmail.com
> Date: Mon, 27 Jan 2014 18:14:50 -0600
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] extending keystone identity
>
> _check_password() is a private/internal API, so we make no guarantees
> about its stability. Instead, override the public authenticate() method
> with something like this:
>
> def authenticate(self, user_id, password, domain_scope=None):
>     if user_id in SPECIAL_LIST_OF_USERS:
>         # compare against value from keystone.conf
>         pass
>     else:
>         return super(CustomIdentityDriver, self).authenticate(
>             user_id, password, domain_scope)
>
> On Mon, Jan 27, 2014 at 3:27 PM, Simon Perfer wrote:
>
> I'm looking to create a simple Identity driver that will look at
> usernames. A small number of specific users should be authenticated by
> looking at a hard-coded password in keystone.conf, while any other users
> should fall back to LDAP authentication.
>
> I based my original driver on what's found here:
>
> http://waipeng.wordpress.com/2013/09/30/openstack-ldap-authentication/
>
> As can be seen in the github code (
> https://raw.github.com/waipeng/keystone/8c18917558bebbded0f9c588f08a84b0ea33d9ae/keystone/identity/backends/ldapauth.py),
> there's a _check_password() method which is supposedly called at some point.
>
> I've based my driver on this ldapauth.py file, and created an Identity
> class which subclasses sql.Identity. Here's what I have so far:
>
> CONF = config.CONF
> LOG = logging.getLogger(__name__)
>
>
> class Identity(sql.Identity):
>     def __init__(self):
>         super(Identity, self).__init__()
>         LOG.debug('My authentication module loaded')
>
>     def _check_password(self, password, user_ref):
>         LOG.debug('Authenticating via my custom hybrid authentication')
>         username = user_ref.get('name')
>         LOG.debug('Username = %s' % username)
>
>
> I can see from the syslog output that we never enter the _check_password()
> function.
>
> Can someone point me in the right direction regarding which function calls
> the identity driver? Also, what is the entry function in the identity
> drivers? Why wouldn't check_password() be called, as we see in the github /
> blog example above?
>
> THANKS!
>

[openstack-dev] [gantt] Scheduler sub-group meeting agenda 1/28

2014-01-27 Thread Dugger, Donald D
1) Memcached based scheduler updates
2) Scheduler code forklift
3) Opens

--
Don Dugger
"Censeo Toto nos in Kansa esse decisse." - D. Gale
Ph: 303/443-3786




Re: [openstack-dev] [Solum] Proposed changes to solum-core

2014-01-27 Thread Noorul Islam Kamal Malmiyoda
On Tue, Jan 28, 2014 at 2:30 AM, Adrian Otto  wrote:
> Solum Core Reviewers,
>
> Thanks everyone for your feedback. I have made the adjustments. Welcome to 
> the core group Angus and Noorul. Thanks again Monty.
>


Thank you all for the votes.

Regards,
Noorul



> On Jan 27, 2014, at 12:54 PM, Kurt Griffiths 
>  wrote:
>
>> +1
>>
>> On 1/27/14, 11:54 AM, "Monty Taylor"  wrote:
>>
>>> On 01/24/2014 05:32 PM, Adrian Otto wrote:
 Solum Core Reviewers,

 I propose the following changes to solum-core:

 +asalkeld
 +noorul
 -mordred

 Thanks very much to mordred for helping me to bootstrap the reviewer
 team. Please reply with your votes.
>>>
>>> +1
>>>
>>> My pleasure - you guys seem like you're off to the races - and asalkeld
>>> and noorul are both doing great.


Re: [openstack-dev] extending keystone identity

2014-01-27 Thread Simon Perfer
I dug a bit more and found this in the logs:

(keystone.common.wsgi): 2014-01-27 19:07:13,851 WARNING The action you have 
requested has not been implemented.
Despite basing my (super simple) code on the SQL or LDAP backends, I must be 
doing something wrong.
-->> I've placed my backend code in 
/usr/share/pyshared/keystone/identity/backends/nicira.py or 
/usr/share/pyshared/keystone/common/nicira.py
-->> I DO see the "my authenticate module loaded" in the log
I would appreciate any help in figuring out what I'm missing. Thanks!

From: simon.per...@hotmail.com
To: openstack-dev@lists.openstack.org
Date: Mon, 27 Jan 2014 21:58:43 -0500
Subject: Re: [openstack-dev] extending keystone identity




Dolph, I appreciate the response and pointing me in the right direction.
Here's what I have so far:


CONF = config.CONF
LOG = logging.getLogger(__name__)


class Identity(identity.Driver):
    def __init__(self):
        super(Identity, self).__init__()
        LOG.debug('My authentication module loaded')

    def authenticate(self, user_id, password, domain_scope=None):
        LOG.debug('in authenticate method')
When I request a user-list via the python-keystoneclient, we never make it into 
the authenticate method (as is evident by the missing debug log).
Any thoughts on why I'm not hitting this method?

From: dolph.math...@gmail.com
Date: Mon, 27 Jan 2014 18:14:50 -0600
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] extending keystone identity

_check_password() is a private/internal API, so we make no guarantees about
its stability. Instead, override the public authenticate() method with
something like this:
def authenticate(self, user_id, password, domain_scope=None):
    if user_id in SPECIAL_LIST_OF_USERS:
        # compare against value from keystone.conf
        pass
    else:
        return super(CustomIdentityDriver, self).authenticate(
            user_id, password, domain_scope)


On Mon, Jan 27, 2014 at 3:27 PM, Simon Perfer  wrote:

I'm looking to create a simple Identity driver that will look at usernames. A 
small number of specific users should be authenticated by looking at a 
hard-coded password in keystone.conf, while any other users should fall back to 
LDAP authentication.


I based my original driver on what's found here:
http://waipeng.wordpress.com/2013/09/30/openstack-ldap-authentication/


As can be seen in the github code 
(https://raw.github.com/waipeng/keystone/8c18917558bebbded0f9c588f08a84b0ea33d9ae/keystone/identity/backends/ldapauth.py),
 there's a _check_password() method which is supposedly called at some point.


I've based my driver on this ldapauth.py file, and created an Identity class 
which subclasses sql.Identity. Here's what I have so far:

CONF = config.CONF
LOG = logging.getLogger(__name__)


class Identity(sql.Identity):
    def __init__(self):
        super(Identity, self).__init__()
        LOG.debug('My authentication module loaded')

    def _check_password(self, password, user_ref):
        LOG.debug('Authenticating via my custom hybrid authentication')
        username = user_ref.get('name')
        LOG.debug('Username = %s' % username)


I can see from the syslog output that we never enter the _check_password() 
function.

Can someone point me in the right direction regarding which function calls the 
identity driver? Also, what is the entry function in the identity drivers? Why 
wouldn't check_password() be called, as we see in the github / blog example 
above?


THANKS!   



Re: [openstack-dev] extending keystone identity

2014-01-27 Thread Simon Perfer
Dolph, I appreciate the response and pointing me in the right direction.
Here's what I have so far:

CONF = config.CONF
LOG = logging.getLogger(__name__)


class Identity(identity.Driver):
    def __init__(self):
        super(Identity, self).__init__()
        LOG.debug('My authentication module loaded')

    def authenticate(self, user_id, password, domain_scope=None):
        LOG.debug('in authenticate method')
When I request a user-list via the python-keystoneclient, we never make it into 
the authenticate method (as is evident by the missing debug log).
Any thoughts on why I'm not hitting this method?

From: dolph.math...@gmail.com
Date: Mon, 27 Jan 2014 18:14:50 -0600
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] extending keystone identity

_check_password() is a private/internal API, so we make no guarantees about
its stability. Instead, override the public authenticate() method with
something like this:
def authenticate(self, user_id, password, domain_scope=None):
    if user_id in SPECIAL_LIST_OF_USERS:
        # compare against value from keystone.conf
        pass
    else:
        return super(CustomIdentityDriver, self).authenticate(
            user_id, password, domain_scope)


On Mon, Jan 27, 2014 at 3:27 PM, Simon Perfer  wrote:

I'm looking to create a simple Identity driver that will look at usernames. A 
small number of specific users should be authenticated by looking at a 
hard-coded password in keystone.conf, while any other users should fall back to 
LDAP authentication.


I based my original driver on what's found here:
http://waipeng.wordpress.com/2013/09/30/openstack-ldap-authentication/


As can be seen in the github code 
(https://raw.github.com/waipeng/keystone/8c18917558bebbded0f9c588f08a84b0ea33d9ae/keystone/identity/backends/ldapauth.py),
 there's a _check_password() method which is supposedly called at some point.


I've based my driver on this ldapauth.py file, and created an Identity class 
which subclasses sql.Identity. Here's what I have so far:


CONF = config.CONF
LOG = logging.getLogger(__name__)


class Identity(sql.Identity):
    def __init__(self):
        super(Identity, self).__init__()
        LOG.debug('My authentication module loaded')

    def _check_password(self, password, user_ref):
        LOG.debug('Authenticating via my custom hybrid authentication')
        username = user_ref.get('name')
        LOG.debug('Username = %s' % username)


I can see from the syslog output that we never enter the _check_password() 
function.

Can someone point me in the right direction regarding which function calls the 
identity driver? Also, what is the entry function in the identity drivers? Why 
wouldn't check_password() be called, as we see in the github / blog example 
above?


THANKS!   



Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread Christopher Yeoh
On Tue, Jan 28, 2014 at 12:55 AM, Sean Dague  wrote:

> On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
> > Hi Sean,
> >
> > I'm currently working on moving away from the "built-in" logging to use
> log_config= and the python logging framework so that we can start
> shipping to logstash/sentry/.
> >
> > I'd be very interested in getting involved in this, especially from a
> "why do we have log messages that are split across multiple lines"
> perspective!
>
> Do we have many that aren't either DEBUG or TRACE? I thought we were
> pretty clean there.
>
> > Cheers,
> >
> > Matt
> >
> > P.S. FWIW, I'd also welcome details on what the "Audit" level gives us
> that the others don't... :)
>
> Well, as far as I can tell the AUDIT level was a prior drive-by
> contribution that's not being actively maintained. Honestly, I think we
> should probably rip it out, because I don't see any in-tree tooling to
> use it, and it's horribly inconsistent.
>
>
For the uses I've seen of it in the nova api code INFO would be perfectly
fine in place of AUDIT.

I'd be happy to help out with patches to cleanup the logging in n-api.

One other thing to look at: I've noticed that when something like
glanceclient code (just as an example) is called from nova, we can get
ERROR-level messages for, say, image not found, when it's actually perfectly
expected that this will occur.
I'm not sure if we should be changing the error level in glanceclient or just
forcing any error logging in glanceclient to a lower level when called from
nova, though.

Chris


Re: [openstack-dev] [Ceilometer] [TripleO] adding process/service monitoring

2014-01-27 Thread Robert Collins
On 28 January 2014 14:59, Richard Su  wrote:
> Hi,
>
> I have been looking into how to add process/service monitoring to
> tripleo. Here I want to be able to detect when an openstack dependent
> component that is deployed on an instance has failed. And when a failure
> has occurred I want to be notified and eventually see it in Tuskar.

+1 on the goal.

We have two basic routes here:
 - use existing things
 - build something new

Service monitoring is a rich field, and there is lots of opportunity
to do new and interesting things. However, it is also a wicked
problem, because, well, see all the prior art.

Further to that, like with Chef/Puppet deployers will often have
existing infrastructure investment we should support.

My suggestion is that we take a modular approach - we define an
interface, supply glue code to glue e.g. the assimilation monitoring
project into the interface, and then build on the interface.
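One possible shape for such an interface, purely as a sketch (all names invented; real glue code would wrap Nagios, the Assimilation project, etc.):

```python
import abc


class MonitoringBackend(abc.ABC):
    """Narrow interface the TripleO code would program against."""

    @abc.abstractmethod
    def watch(self, host, service):
        """Start monitoring a service on a host."""

    @abc.abstractmethod
    def status(self, host, service):
        """Return 'ok', 'failed' or 'unknown'."""


class InMemoryBackend(MonitoringBackend):
    """Trivial backend for tests; a real glue layer would live elsewhere."""

    def __init__(self):
        self._state = {}

    def watch(self, host, service):
        self._state[(host, service)] = "ok"

    def status(self, host, service):
        return self._state.get((host, service), "unknown")
```

Keeping the interface this small is what makes swapping backends (or a deployer's existing monitoring investment) plausible later.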

There are some important questions - how rich is the interface, what
lives in the glue code, how many backends we'll eventually support,
but the key thing for me is that we don't go down the path of
rewriting e.g. nagios because we're afraid of the dependency: it's
optional to bring in any specific backend, and we can always write our
own later.

Another point is the crawl/walk/run cycle - let's just get the ability
to click through to a native monitoring screen to start with. That
should be about a thousand times faster to bring together than a
complete custom everything.

-Rob

-- 
Robert Collins 
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [nova]Why not allow to create a vm directly with two VIF in the same network

2014-01-27 Thread shihanzhang


Hi Paul,
I would be glad to put together the practical use cases in which the same VM
would benefit from multiple virtual connections to the same network. Whatever
it takes, I think we should at least guarantee consistency between creating
VMs with NICs and attaching NICs.




On 2014-01-24 22:33:36, "CARVER, PAUL" wrote:


I agree that I’d like to see a set of use cases for this. This is the second 
time in as many days that I’ve heard about a desire to have such a thing but I 
still don’t think I understand any use cases adequately.

 

In the physical world it makes perfect sense, LACP, MLT, 
Etherchannel/Portchannel, etc. In the virtual world I need to see a detailed 
description of one or more use cases.

 

Shihanzhang, why don’t you start up an Etherpad or something and start putting 
together a list of one or more practical use cases in which the same VM would 
benefit from multiple virtual connections to the same network. If it really 
makes sense we ought to be able to clearly describe it.

 

--

Paul Carver

VO: 732-545-7377

Cell: 908-803-1656

E: pcar...@att.com

Q Instant Message

 

From: Day, Phil [mailto:philip@hp.com]
Sent: Friday, January 24, 2014 09:11
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova]Why not allow to create a vm directly with 
two VIF in the same network

 

I agree it’s oddly inconsistent (you’ll get used to that over time ;-)  - but to 
me it feels more like the validation is missing on the attach than that the 
create should allow two VIFs on the same network.   Since these are both 
virtualised (i.e. share the same bandwidth, don’t provide any additional 
resilience, etc.) I’m curious about why you’d want two VIFs in this 
configuration ?

 

From: shihanzhang [mailto:ayshihanzh...@126.com]
Sent: 24 January 2014 03:22
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [nova]Why not allow to create a vm directly with two 
VIF in the same network

 

I am a beginner with nova, and there is a problem which has confused me. In the 
latest version, it is not allowed to create a VM directly with two VIFs in the 
same network, but it is allowed to add a VIF whose network is the same as an 
existing VIF's network. So there is a use case of a VM with two VIFs in the 
same network; why is it not allowed to create the VM directly with two VIFs in 
the same network?

 ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread laserjetyang
I think it is a good approach for the i18n team, since I don't think all logs
need to be translated, and enterprises who need i18n can take whatever
they need to translate, not all of it.


On Tue, Jan 28, 2014 at 3:02 AM, Doug Hellmann
wrote:

>
>
>
> On Mon, Jan 27, 2014 at 1:51 PM, Daniel P. Berrange 
> wrote:
>
>> On Mon, Jan 27, 2014 at 01:12:19PM -0500, Doug Hellmann wrote:
>> > On Mon, Jan 27, 2014 at 12:58 PM, Daniel P. Berrange <
>> berra...@redhat.com>wrote:
>> >
>> > > On Mon, Jan 27, 2014 at 12:42:28PM -0500, Doug Hellmann wrote:
>> > > > We have a blueprint open for separating translated log messages into
>> > > > different domains so the translation team can prioritize them
>> differently
>> > > > (focusing on errors and warnings before debug messages, for
>> example) [1].
>> > >
>> > > > Feedback?
>> > >
>> > > > [1]
>> > > >
>> > >
>> https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
>> > >
>> > > IMHO we've created ourselves a problem we don't need to have in the
>> first
>> > > place by trying to translate every single log message. It causes pain
>> for
>> > > developers & vendors because debug logs from users can be in any language
>> > > which the person receiving will often not be able to understand. It
>> creates
>> > > pain for translators by giving them an insane amount of work todo,
>> which
>> > > never ends since log message text is changed so often. Now we're
>> creating
>> > > yet more pain & complexity by trying to produce multiple log domains
>> to
>> > > solve
>> > > a problem of having so many msgs to translate. I accept that some
>> people
>> > > will
>> > > like translated log messages, but I don't think this is a net win
>> when you
>> > > look at the overall burden they're imposing.
>> > >
>> > > Shouldn't we just say no to this burden and remove translation of all
>> log
>> > > messages, except for those at WARN/ERROR level which is likely to be
>> seen
>> > > by administrators in a day-to-day basis. There's few enough of those
>> that
>> > > we wouldn't need these extra translation domains. IMHO translating
>> stuff
>> > > at DEBUG/INFO level is a waste of limited time & resources.
>> > >
>> >
>> > Thanks for raising this point, I meant to address it in my original
>> email.
>> >
>> > Many deployers do in fact want to see the log messages in their native
>> > language, either instead of or in addition to English. This change is an
>> > attempt to accommodate them, while allowing other folks that don't care
>> to
>> > continue to not care.
>>
>> The implication of splitting the log messages into a separate translation
>> domain is that translators will then prioritize translation of text from
>> API error messages. IOW this split into translation domains will quite
>> likely mean that translators just ignore translation of the ever changing
>> log messages entirely. So even if deployers want translated log messages
>> they may well find they don't get them. Which again leads me to question
>> the whether the burden of this is justified.
>>
>
> The people actually doing that work have spoken to us and asked us to make
> this change. The work is being done.
>
> Doug
>
>
>
>>
>> Regards,
>> Daniel
>> --
>> |: http://berrange.com  -o-
>> http://www.flickr.com/photos/dberrange/ :|
>> |: http://libvirt.org  -o-
>> http://virt-manager.org :|
>> |: http://autobuild.org   -o-
>> http://search.cpan.org/~danberr/ :|
>> |: http://entangle-photo.org   -o-
>> http://live.gnome.org/gtk-vnc :|
>>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ceilometer] [TripleO] adding process/service monitoring

2014-01-27 Thread Richard Su
Hi,

I have been looking into how to add process/service monitoring to
tripleo. Here I want to be able to detect when an openstack dependent
component that is deployed on an instance has failed. And when a failure
has occurred I want to be notified and eventually see it in Tuskar.

Ceilometer doesn't handle this particular use case today. So I have been
doing some research and there are many options out there that provides
process checks: nagios, sensu, zabbix, and monit. I am a bit wary of
pulling one of these options into tripleo. There is some increased
operational and maintenance costs when pulling in each of them. And
physical device monitoring is currently in the works for Ceilometer
lessening the need for some of the other abilities that an another
monitoring tool would provide.

For the particular use case of monitoring processes/services, at a high
level, I am considering writing a simple daemon to perform the check.
Checks and failures are written out as messages to the notification bus.
Interested parties like Tuskar or Ceilometer can subscribe to these
messages.
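A minimal sketch of such a daemon might look like the following. The service names, the event_type, and the use of pgrep are all assumptions; a real implementation would publish to the oslo notification bus rather than stdout.

```python
import json
import shutil
import subprocess
import time

SERVICES = ['nova-api', 'nova-compute']  # illustrative list of processes

def check_service(name):
    """Return True if a process matching `name` appears to be running."""
    if shutil.which('pgrep') is None:
        return False  # no procps available; a real daemon would require it
    rc = subprocess.call(['pgrep', '-f', name], stdout=subprocess.DEVNULL)
    return rc == 0

def make_event(name, alive):
    # Structured check result, in the shape of a notification payload.
    return {'event_type': 'service.check',
            'payload': {'service': name, 'alive': alive,
                        'timestamp': time.time()}}

for svc in SERVICES:
    # Stand-in for publishing to the notification bus.
    print(json.dumps(make_event(svc, check_service(svc))))
```

Interested parties like Tuskar or Ceilometer would then subscribe to the 'service.check' events instead of polling the daemon directly.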

In general does this sound like a reasonable approach?

There is also the question of how to configure or figure out which
processes we are interested in monitoring. I need to do more research
here but I'm considering either looking at the elements listed by
diskimage-builder or by looking at the orc post-configure.d scripts to
find service that are restarted.

I welcome your feedback and suggestions.

- Richard Su

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hacking repair scripts

2014-01-27 Thread Joshua Harlow
Hi all,

I have had in my ~/bin for a while a little script that I finally got around to 
tuning up and I thought others might be interested in it/find it useful.

The concept is similar to https://pypi.python.org/pypi/autopep8 but does a 
really simple action to start.

As many of you know, import order is a hacking rule, but it's sometimes not 
clear how to fix the order to be correct; so the tool I fixed/cleaned up 
reorganizes the imports into the right order.

I initially hooked it into the hacking codebase @ 
https://review.openstack.org/#/c/68988

It could be something that could be built on to automate 'repairing' many of 
the hacking issues that are encountered (ones that are simple are the easiest, 
like imports).
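For readers curious what such a repair looks like, here is a toy version of the import reordering: stdlib imports first, then third-party, then project imports, alphabetical within each group. The 'myproject' namespace and the grouping rules are simplified assumptions; the real hacking checks are more involved.

```python
import sys

# Known stdlib modules (3.10+ exposes the real list; small fallback otherwise).
STDLIB = getattr(sys, 'stdlib_module_names', ('os', 'sys', 'json', 're'))

def import_group(line):
    module = line.split()[1].split('.')[0]
    if module in STDLIB:
        return 0          # standard library first
    if module.startswith('myproject'):
        return 2          # project imports last
    return 1              # third-party in between

def sort_imports(lines):
    # Stable sort by (group, text) gives the hacking-style ordering.
    return sorted(lines, key=lambda line: (import_group(line), line))

block = ['import requests', 'import os',
         'import myproject.utils', 'import sys']
print(sort_imports(block))
# -> ['import os', 'import sys', 'import requests', 'import myproject.utils']
```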

Anyways,

Thought people might find it useful and it could become a part of automatic 
repairing/style adjustments in the future (similar to I guess what go has in 
`gofmt`).

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] bp proposal: quotas on users and projects per domain

2014-01-27 Thread Jamie Lennox


- Original Message -
> From: "Florent Flament" 
> To: openstack-dev@lists.openstack.org
> Sent: Friday, 24 January, 2014 8:07:28 AM
> Subject: Re: [openstack-dev] [Keystone] bp proposal: quotas on users and 
> projects per domain
> 
> I understand that not everyone may be interested in such feature.
> 
> On the other hand, some (maybe shallow) Openstack users may be
> interested in setting quotas on users or projects. Also, this feature
> wouldn't do any harm to the other users who wouldn't use it.
> 
> If some contributors are willing to spend some time in adding this
> feature to Openstack, is there any reason not to accept it ?

I have in general no problem with users/projects/domains/etc. being subject to
quotas for a business decision (and I don't work for a provider), but only as
part of a more global initiative in which all resource types in OpenStack can
have quotas, managed by some other service (which I think would be a difficult
service to write). 

I don't see the point in implementing this directly as a keystone feature.
As Dolph mentioned, these are not resource-heavy concepts that we have a 
practical 
need to limit. In most situations I imagine service providers who want this
have means to achieve it via the backend they use to store them. 

Note that the idea of storing quota data in keystone has come up before
and has generally never gained much traction. 

Jamie

> On Thu, 2014-01-23 at 14:55 -0600, Dolph Mathews wrote:
> > 
> > On Thu, Jan 23, 2014 at 9:59 AM, Florent Flament
> >  wrote:
> > Hi,
> > 
> > 
> > Although it is true that projects and users don't consume a
> > lot of resources, I think that there may be cases where
> > setting quotas (possibly large) may be useful.
> > 
> > 
> > 
> > For instance, a cloud provider may wish to prevent domain
> > administrators to mistakingly create an infinite number of
> > users and/or projects, by calling APIs in a bugging loop.
> > 
> > 
> > 
> > That sounds like it would be better solved by API rate limiting, not
> > quotas.
> >  
> > 
> > 
> > 
> > Moreover, if quotas can be disabled, I don't see any reason
> > not to allow cloud operators to set quotas on users and/or
> > projects if they wishes to do so for whatever marketing reason
> > (e.g. charging more to allow more users or projects).
> > 
> > 
> > 
> > That's the shallow business decision I was alluding to, which I don't
> > think we have any reason to support in-tree.
> >  
> > 
> > 
> > 
> > Regards,
> > 
> > Florent Flament
> > 
> > 
> > 
> > 
> > 
> > __
> > From: "Dolph Mathews" 
> > To: "OpenStack Development Mailing List (not for usage
> > questions)" 
> > Sent: Thursday, January 23, 2014 3:09:51 PM
> > Subject: Re: [openstack-dev] [Keystone] bp proposal: quotas on
> > users and projects per domain
> > 
> > 
> > 
> > ... why? It strikes me as a rather shallow business decision
> > to limit the number of users or projects in a system, as
> > neither are actually cost-consuming resources.
> > 
> > 
> > On Thu, Jan 23, 2014 at 6:43 AM, Matthieu Huin
> >  wrote:
> > Hello,
> > 
> > I'd be interested in opinions and feedback on the
> > following blueprint:
> > 
> > https://blueprints.launchpad.net/keystone/+spec/tenants-users-quotas
> > 
> > The idea is to add a mechanism preventing the creation
> > of users or projects once a quota per domain is met. I
> > believe this could be interesting for cloud providers
> > who delegate administrative rights under domains to
> > their customers.
> > 
> > I'd like to hear the community's thoughts on this,
> > especially in terms of viability.
> > 
> > Many thanks,
> > 
> > Matthieu Huin
> > 
> > m...@enovance.com
> > http://www.enovance.com
> > eNovance SaS - 10 rue de la Victoire 75009 Paris -
> > France
> > 
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 
> > 
> > ___
> > OpenStack-dev mailing list

Re: [openstack-dev] [Solum] Proposed changes to solum-core

2014-01-27 Thread Angus Salkeld

On 27/01/14 21:00 +, Adrian Otto wrote:

Solum Core Reviewers,

Thanks everyone for your feedback. I have made the adjustments. Welcome to the 
core group Angus and Noorul. Thanks again Monty.


Thank you everyone for your support!

-Angus



Regards,

Adrian

On Jan 27, 2014, at 12:54 PM, Kurt Griffiths 
wrote:


+1

On 1/27/14, 11:54 AM, "Monty Taylor"  wrote:


On 01/24/2014 05:32 PM, Adrian Otto wrote:

Solum Core Reviewers,

I propose the following changes to solum-core:

+asalkeld
+noorul
-mordred

Thanks very much to mordred for helping me to bootstrap the reviewer
team. Please reply with your votes.


+1

My pleasure - you guys seem like you're off to the races - and asalkeld
and noorul are both doing great.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][heat] Migration to keystone v3 API questions

2014-01-27 Thread Jamie Lennox


- Original Message -
> From: "Steven Hardy" 
> To: openstack-dev@lists.openstack.org
> Sent: Thursday, 23 January, 2014 9:21:47 PM
> Subject: [openstack-dev] [keystone][heat] Migration to keystone v3 API
> questions
> 
> Hi all,
> 
> I've recently been working on migrating the heat internal interfaces to use
> the keystone v3 API exclusively[1].
> 
> This work has mostly been going well, but I've hit a couple of issues which
> I wanted to discuss, so we agree the most appropriate workarounds:
> 
> 1. keystoneclient v3 functionality not accessible when catalog contains a
> v2 endppoint:
> 
> In my test environment my keystone endpoint looks like:
> 
> http://127.0.0.1:5000/v2.0
> 
> And I'd guess this is similar to the majority of real deployments atm?
> 
> So when creating a keystoneclient object I've been doing:
> 
> from keystoneclient.v3 import client as kc_v3
> v3_endpoint = self.context.auth_url.replace('v2.0', 'v3')
> client = kc_v3.Client(auth_url=v3_endpoint, ...
> 
> Which, assuming the keystone service has both v2 and v3 API's enabled
> works, but any attempt to use v3 functionality fails with 404 because
> keystoneclient falls back to using the v2.0 endpoint from the catalog.
> 
> So to work around this I do this:
> 
> client = kc_v3.Client(auth_url=v3_endpoint, endpoint=v3_endpoint, ...
> client.authenticate()
> 
> Which results in the v3 features working OK.
> 
> So my questions are:
> - Is this a reasonable workaround for production environments?
> - What is the roadmap for moving keystone endpoints to be version agnostic?
> - Is there work ongoing to make the client smarter in terms of figuring out
>   what URL to use (version negotiation or substituting the appropriate path
>   when we are in an environment with a legacy v2.0 endpoint..)

This is a known issue and something that has come up for discussion many times 
and 
we don't have a great solution for it. This problem won't be unique to
keystone; it is a side effect of having versioned endpoints in the service
catalog. 

We are slowly attempting to transition the entire service catalog over to 
unversioned endpoints. There is a lot of problems though regarding this and 
maintaining comptability with existing clients and installations. There are some
hacks we are discussing that will hopefully allow us to transition keystone and 
the other clients over - this is but one advantage of getting a more common 
client
going. 

To more directly answer the question, there is slow work ongoing in this area 
but
for the time being the best advice I have is to set 
client.management_url = v3_endpoint and it will override the service 
catalog for the lifetime of the client (endpoint= should work, but your above 
example 
will involve two authentication requests). 

If you have any ideas on how to handle this transition and provide backwards 
compat
I'd love to hear them. 

> 2. Client (CLI) support for v3 API
> 
> What is the status re porting keystoneclient to provide access to the v3
> functionality on the CLI?
> 
> In particular, Heat is moving towards using domains to encapsulate the
> in-instance users it creates[2], so administrators will require some way to
> manage users in a non-default domain, e.g to get visibility of what Heat is
> doing in that domain and debug in the event of any issues.
> 
> If anyone can provide any BP links or insight that would be much
> appreciated!

There is general consensus here that we will not be providing CLI access to
the V3 API via the keystoneclient package. This responsibility has been 
dumped on^H^H^H taken over by the common openstack client. I am not aware of 
when OSC will be considered production ready. 


Sorry i couldn't bring more positive news,

Jamie

> Thanks,
> 
> Steve
> 
> [1] https://blueprints.launchpad.net/heat/+spec/keystone-v3-only
> [2] https://wiki.openstack.org/wiki/Heat/Blueprints/InstanceUsers
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Oslo Context and SecurityContext

2014-01-27 Thread Angus Salkeld

On 27/01/14 22:53 +, Adrian Otto wrote:

On Jan 27, 2014, at 2:39 PM, Paul Montgomery 
wrote:


Solum community,

I created several different approaches for community consideration
regarding Solum context, logging and data confidentiality.  Two of these
approaches are documented here:

https://wiki.openstack.org/wiki/Solum/Logging

A) Plain Oslo Log/Config/Context is in the "Example of Oslo Log and Oslo
Context" section.

B) A hybrid Oslo Log/Config/Context but SecurityContext inherits the
RequestContext class and adds some confidentiality functions is in the
"Example of Oslo Log and Oslo Context Combined with SecurityContext"
section.

None of this code is production ready or tested by any means.  Please just
examine the general architecture before I polish too much.

I hope that this is enough information for us to agree on a path A or B.
I honestly am not tied to either path very tightly but it is time that we
reach a final decision on this topic IMO.

Thoughts?


I have a strong preference for using the SecurityContext approach. The main reason for my 
preference is outlined in the Pro/Con sections of the Wiki page. With the "A" approach, 
leakage of confidential information might happen with *any* future addition of a logging call, a 
discipline which may be forgotten, or overlooked during future code reviews. The "B" 
approach handles the classification of data not when logging, but when placing the data into the 
SecurityContext. This is much safer from a long term maintenance perspective.


I think we seperate this out into:

1) we need to be security aware whenever we log information handed to
   us by the user. (I totally agree with this general statement)

2) should we log structured data, non structured data or use the notification 
mechanism (which is structured)
   There have been some talks at summit about the potential merging of
   the logging and notification api, I honestly don't know what
   happened to that but have no problem with structured logging. We
   should use the notification system so that ceilometer can take
   advantage of the events.

3) should we use a RequestContext in the spirit of the olso-incubator
  (and inherited from it too). OR one different from all other
  projects.

  IMHO we should just use oslo-incubator RequestContext. Remember the
  context is not a generic dumping ground for "I want to log stuff so
  lets put it into the context". It is for user credentials and things
  directly associated with the request (like the request_id). I don't
  see why we need a generic dict-style approach; it is more likely
  to result in programming errors:
  
  context.set_priv('userid', bla)

  instead of:
  context.set_priv('user_id', bla)

  I think my point is: We should very quickly zero in on the
  attributes we need in the context and they will seldom change.

  As far as security goes Paul has shown a good example of how to
  change the logging_context_format_string to achieve structured and
  secure logging of the context. oslo log module does not log whatever
  is in the context but only what is configured in the solum.conf (via
  logging_context_format_string). So I don't believe that the
  new/different RequestContext provides any improved security.
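To make the point above concrete, here is an illustrative solum.conf fragment. The field names follow the common oslo defaults of the time, but treat them as assumptions: only the context fields named in the format string are emitted, so data kept in the context but not referenced here never reaches the logs.

```ini
[DEFAULT]
# Only fields named in this format string are logged; anything else
# carried in the request context stays out of the log stream.
logging_context_format_string = %(asctime)s %(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s] %(message)s
```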



-Angus




Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] extending keystone identity

2014-01-27 Thread Dolph Mathews
_check_password() is a private/internal API, so we make no guarantees about
it's stability. Instead, override the public authenticate() method with
something like this:

def authenticate(self, user_id, password, domain_scope=None):
    if user_id in SPECIAL_LIST_OF_USERS:
        # compare against value from keystone.conf
        pass
    else:
        return super(CustomIdentityDriver, self).authenticate(
            user_id, password, domain_scope)
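Since sql.Identity isn't importable outside keystone, here is a self-contained sketch of the same override pattern, with a stub base class standing in for the real driver. The special-user list and the return values are illustrative assumptions, not keystone's actual behavior.

```python
# From keystone.conf in the real driver; hard-coded here as an assumption.
SPECIAL_USERS = {'monitor': 'hardcoded-secret'}

class BaseIdentity(object):
    """Stand-in for keystone's sql.Identity backend."""
    def authenticate(self, user_id, password, domain_scope=None):
        return ('ldap', user_id)  # pretend the LDAP/SQL path succeeded

class CustomIdentityDriver(BaseIdentity):
    def authenticate(self, user_id, password, domain_scope=None):
        if user_id in SPECIAL_USERS:
            # Special users authenticate against the local password list.
            if password == SPECIAL_USERS[user_id]:
                return ('local', user_id)
            raise ValueError('invalid credentials')
        # Everyone else falls through to the parent (LDAP) driver.
        return super(CustomIdentityDriver, self).authenticate(
            user_id, password, domain_scope)

drv = CustomIdentityDriver()
print(drv.authenticate('monitor', 'hardcoded-secret'))  # ('local', 'monitor')
print(drv.authenticate('alice', 'pw'))                  # ('ldap', 'alice')
```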

On Mon, Jan 27, 2014 at 3:27 PM, Simon Perfer wrote:

> I'm looking to create a simple Identity driver that will look at
> usernames. A small number of specific users should be authenticated by
> looking at a hard-coded password in keystone.conf, while any other users
> should fall back to LDAP authentication.
>
> I based my original driver on what's found here:
>
> http://waipeng.wordpress.com/2013/09/30/openstack-ldap-authentication/
>
> As can be seen in the github code (
> https://raw.github.com/waipeng/keystone/8c18917558bebbded0f9c588f08a84b0ea33d9ae/keystone/identity/backends/ldapauth.py),
> there's a _check_password() method which is supposedly called at some point.
>
> I've based my driver on this ldapauth.py file, and created an Identity
> class which subclasses sql.Identity. Here's what I have so far:
>
> CONF = config.CONF
>
> LOG = logging.getLogger(__name__)
>
>
> class Identity(sql.Identity):
>
>     def __init__(self):
>         super(Identity, self).__init__()
>         LOG.debug('My authentication module loaded')
>
>     def _check_password(self, password, user_ref):
>         LOG.debug('Authenticating via my custom hybrid authentication')
>
>         username = user_ref.get('name')
>         LOG.debug('Username = %s' % username)
>
> I can see from the syslog output that we never enter the _check_password()
> function.
>
> Can someone point me in the right direction regarding which function calls
> the identity driver? Also, what is the entry function in the identity
> drivers? Why wouldn't check_password() be called, as we see in the github /
> blog example above?
>
> THANKS!
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] Tempest test validation

2014-01-27 Thread Christopher Yeoh

On Fri, 24 Jan 2014 11:58:42 -0800
Franck Yelles  wrote:
> Hi everyone,
> 

Hi - note that the openstack-qa mailing list is no longer being used
(we should really remove it!). I've CC'd your email along to
openstack-dev with the [qa] tag in the subject line.

> I would need some clarification of the Tempest testcases.
> I am trying to run tempest on a vanilla devstack environment.
> 
> My localrc file has the API_RATE_LIMIT set to false.
> This is the only modification that I have.
> 
> I would run ./stack.sh and then run ./run_tempest.sh and would have X
> errors. (running the  failing testcases manually works)
> Then I would unstack and stack again and run again ./run_tempest.sh
> and would have Y errors.
> 
> My VM has 2 dual quad-core CPUs and 8 GB of RAM
> 
> Why do I have this inconsistency ? Or am I doing something wrong ?

So you'll probably need to analyse the failures themselves to see what
is going on. Perhaps post some of the output to a pastebin so we can
look at it and make the log files available. Asking on #openstack-qa 
might get you some help as well.

This sort of inconsistency in test results I've found is often due to
insufficient memory (although 8 GB should be enough unless you are
running other things at the same time) and/or running too many tests in
parallel and hitting some of the resource quota limits. But there should
be some clues to this in the log files.

Regards,

Chris

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Oslo Context and SecurityContext

2014-01-27 Thread Adrian Otto
On Jan 27, 2014, at 2:39 PM, Paul Montgomery 
 wrote:

> Solum community,
> 
> I created several different approaches for community consideration
> regarding Solum context, logging and data confidentiality.  Two of these
> approaches are documented here:
> 
> https://wiki.openstack.org/wiki/Solum/Logging
> 
> A) Plain Oslo Log/Config/Context is in the "Example of Oslo Log and Oslo
> Context" section.
> 
> B) A hybrid Oslo Log/Config/Context but SecurityContext inherits the
> RequestContext class and adds some confidentiality functions is in the
> "Example of Oslo Log and Oslo Context Combined with SecurityContext"
> section.
> 
> None of this code is production ready or tested by any means.  Please just
> examine the general architecture before I polish too much.
> 
> I hope that this is enough information for us to agree on a path A or B.
> I honestly am not tied to either path very tightly but it is time that we
> reach a final decision on this topic IMO.
> 
> Thoughts?

I have a strong preference for using the SecurityContext approach. The main 
reason for my preference is outlined in the Pro/Con sections of the Wiki page. 
With the "A" approach, leakage of confidential information might happen with 
*any* future addition of a logging call, a discipline which may be forgotten, 
or overlooked during future code reviews. The "B" approach handles the 
classification of data not when logging, but when placing the data into the 
SecurityContext. This is much safer from a long term maintenance perspective.

Adrian
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Solum]

2014-01-27 Thread Paul Montgomery
Solum community,

I created several different approaches for community consideration
regarding Solum context, logging and data confidentiality.  Two of these
approaches are documented here:

https://wiki.openstack.org/wiki/Solum/Logging

A) Plain Oslo Log/Config/Context is in the "Example of Oslo Log and Oslo
Context" section.

B) A hybrid Oslo Log/Config/Context but SecurityContext inherits the
RequestContext class and adds some confidentiality functions is in the
"Example of Oslo Log and Oslo Context Combined with SecurityContext"
section.

None of this code is production ready or tested by any means.  Please just
examine the general architecture before I polish too much.

I hope that this is enough information for us to agree on a path A or B.
I honestly am not tied to either path very tightly but it is time that we
reach a final decision on this topic IMO.

Thoughts?

---paulmo

PS: Please feel free to add new pros/cons.  I just took a first stab at it
and others may have better input.


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mistral + taskflow mini-meetup

2014-01-27 Thread Joshua Harlow
Just a note also:

Taskflow is, in a way, event-driven also: a workflow goes through various 
events and those events cause further actions (state-transitions, 
notifications, forward-progress).

I fully expect the https://review.openstack.org/#/c/63155 (yes not 
oslo.messaging, but someday when that library exists) to be more in your idea 
of event-driven.

To me, you can model an event-driven system using an executor type (but not so 
much the other way around); perhaps the reverse is possible and I just can't 
think of it right now.

In fact, if you look at what Guido is doing with tulip [1] you can see a way to 
connect events to executors/futures to events (slightly similar to taskflows 
engine+futures).

I'd really like mistral to get back on using taskflow and helping converge 
instead of diverge, so lets make it happen :-)

-Josh

[1] http://www.python.org/dev/peps/pep-3156/

From: Renat Akhmerov mailto:rakhme...@mirantis.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Date: Monday, January 27, 2014 at 12:31 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] Mistral + taskflow mini-meetup

Josh, thanks for sharing this with the community. Just a couple of words as an 
addition to that..

The driver for this conversation is that TaskFlow library and Mistral service 
in many ways do similar things: task processing combined somehow (flow or 
workflow). However, there’s a number of differences in approaches that the two 
technologies follow. Initially, when Mistral’s development phase started about 
a couple of months ago the team was willing to use TaskFlow at implementation 
level. Basically, we can potentially represent Mistral tasks as TaskFlow tasks 
and use TaskFlow API to run them. One of the problems though is that TaskFlow 
tasks are basically python methods and hence run synchronously (once we get out 
of the method the task is considered finished) whereas Mistral is primarily 
designed to run asynchronous tasks (send a signal to an external system and 
start waiting for a result which may arrive minutes or hours later). Mistral is 
more like an event-driven system versus a traditional executor architecture. So 
the current Mistral PoC is not using TaskFlow, but moving forward we'd like to 
try to marry these two technologies to be more aligned in terms of APIs and 
feature sets.
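A toy contrast of the two models (plain Python, not Mistral or TaskFlow code): an executor-style task is finished when its function returns, while an event-driven task completes only when an external result arrives.

```python
from concurrent.futures import Future, ThreadPoolExecutor

def sync_task(x):
    # Executor style: the result is available as soon as the call returns.
    return x * 2

with ThreadPoolExecutor(max_workers=1) as pool:
    print(pool.submit(sync_task, 21).result())  # -> 42

# Event-driven style: the future is completed by an external event,
# not by a function return; that event may arrive minutes or hours later.
async_result = Future()

def on_external_event(value):
    async_result.set_result(value)

on_external_event('done')
print(async_result.result())  # -> done
```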


Renat Akhmerov
@ Mirantis Inc.

On 27 Jan 2014, at 13:21, Joshua Harlow 
mailto:harlo...@yahoo-inc.com>> wrote:

Hi all,

In order to encourage further discussion off IRC and more in public I'd like to 
share a etherpad that was worked on during a 'meetup' with some of the mistral 
folks and me.

https://etherpad.openstack.org/p/taskflow-mistral-jan-meetup

It was more of a (mini) in-person meetup but I thought I'd be good to gather 
some feedback there and let the more general audience see this and ask any 
questions/feedback/other...

Some of the key distinctions between taskflow/mistral we talked about and as 
well other various DSL aspects and some possible action items.

Feel free to ask questions,

-Josh
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [savanna] Savanna 2014.1.b2 (Icehouse-2) dev milestone available

2014-01-27 Thread Matthew Farrellee

On 01/23/2014 11:59 AM, Sergey Lukjanov wrote:

Hi folks,

the second development milestone of Icehouse cycle is now available for
Savanna.

Here is a list of new features and fixed bugs:

https://launchpad.net/savanna/+milestone/icehouse-2

and here you can find tarballs to download it:

http://tarballs.openstack.org/savanna/savanna-2014.1.b2.tar.gz
http://tarballs.openstack.org/savanna-dashboard/savanna-dashboard-2014.1.b2.tar.gz
http://tarballs.openstack.org/savanna-image-elements/savanna-image-elements-2014.1.b2.tar.gz
http://tarballs.openstack.org/savanna-extra/savanna-extra-2014.1.b2.tar.gz

There were 15 blueprints implemented and 37 bugs fixed during the milestone.
It includes the savanna, savanna-dashboard, savanna-image-elements and
savanna-extra sub-projects. In addition, python-savannaclient 0.4.1, which
was released earlier this week, supports all new features introduced in
this savanna release.

Please, note that the next milestone, icehouse-3, is scheduled for
March, 6th.

Thanks.

--
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


rdo packages -

el6 -
savanna - http://koji.fedoraproject.org/koji/buildinfo?buildID=494307
savanna-dashboard - 
http://koji.fedoraproject.org/koji/buildinfo?buildID=494286


f20 -
savanna - 
https://admin.fedoraproject.org/updates/openstack-savanna-2014.1.b2-3.fc20
savanna-dashboard - 
https://admin.fedoraproject.org/updates/python-django-savanna-2014.1.b2-1.fc20


notes -
 . you need paramiko >= 1.10.1 
(http://koji.fedoraproject.org/koji/buildinfo?buildID=492749)
 . you need stevedore >= 0.13 
(http://koji.fedoraproject.org/koji/buildinfo?buildID=494300) 
(https://bugs.launchpad.net/savanna/+bug/1273459)


best,


matt


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] What's Up Doc? Jan 27 2014

2014-01-27 Thread Anne Gentle
Sorry, must send a correction: Wednesday at 14:00:00 UTC is the time for
the Wednesday Doc Team meeting on IRC in #openstack-meeting-alt this week.
(Not 03:00). Thanks Matt and Summer for asking!

Anne


On Mon, Jan 27, 2014 at 9:35 AM, Anne Gentle  wrote:

> This week we have two chances to talk docs -- the IRC meeting for US and
> Europe is every other Wednesday in #openstack-meeting-alt at 03:00 UTC.
>
> We're also hosting a hangout on air on Wednesday, January 29, 2014 at
> 20:00:00 UTC. Look for a Hangouts On Air invitation on Google Plus.
>
> 1. In review and merged this past week:
>
> Last Thursday and Friday we did a mini sprint on the Operations Guide with
> the goal of addressing O'Reilly editor comments, documenting upgrades,
> vetting a new reference architecture, and getting all Havana updates
> completed. We merged about 25 changes and have about 20 more in the queue
> for review, including how to upgrade Compute from grizzly to havana.
>
> In the openstack-manuals repo, there are updates to the install guide,
> glossary edits, updates to the nova config options as well as some
> ceilometer cleanup.
>
> 2. High priority doc work:
>
> We're two months away from a March 27 release candidate. Highest priority
> is Icehouse documentation:
>
> Install Guide
>
> Config Ref
>
> Cloud Admin Guide
>
> On the API documentation side, there's going to be some API doc movement
> where the specs move into the project repositories. For example, from
> openstack/image-api to glance/doc/source.
>
> 3. Doc work going on that I know of:
>
> Shaun McCance is working on the configuration automation and reaching out
> to Oslo devs to ensure accuracy for incoming options. With 2400 options
> across OpenStack projects there's plenty to document.
>
> Diane and Andreas have been diligently getting the database-api samples
> tested and doc build working. Thanks for that. The Database project Trove
> does enter integration with the Icehouse release.
>
> 4. New incoming doc requests:
>
> Nick Chase is holding meetings about a new Networking Guide that would
> give the basic concepts for Neutron and software-defined networking in
> OpenStack.
>
> 5. Doc tools updates:
>
> Today I'll release 0.4 of the openstack-doc-tools repo which includes the
> ability to ignore sets of files, also greatly improves the options output,
> and offers the ability to auto-document the Command Line Interface help to
> output in a CLI reference. Nice work Andreas!
>
> For clouddocs-maven-plugin, the 1.13.0 release came out January 23 which
> now supports parts for the Operations Guide. Read all about it in the
> release notes
> https://github.com/stackforge/clouddocs-maven-plugin#release-notes.
>
> 6. Other doc news:
>
> I think that's enough excitement for this week! Carry on.
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MariaDB support

2014-01-27 Thread Vipul Sabhaya
On Mon, Jan 27, 2014 at 1:43 PM, Don Kehn  wrote:

> check with the trove folks they might be testing percona.
>
>
We don’t test Nova or other OpenStack pieces against Percona. We do test
Percona as an underlying datastore within Trove, though.


>
> On Mon, Jan 27, 2014 at 2:34 PM, Michael Still  wrote:
>
>> On Sat, Jan 25, 2014 at 5:32 AM, Tim Bell  wrote:
>>
>>>
>>>
>>> We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to
>>> have MariaDB as the default MySQL-like DB.
>>>
>>>
>>>
>>> Can someone summarise the status of the OpenStack in terms of
>>>
>>>
>>>
>>> -What MySQL-flavor is/are currently tested in the gate ?
>>>
>> Turbo Hipster currently tests mysql and percona _upgrades_ for every
>> commit in nova. We have noted no percona specific problems, except for
>> percona being a little bit faster than mysql 5.5. It wouldn't be hard to
>> add mariadb to the upgrade cycle if people were interested in that.
>>
>> However, we're not currently testing devstack with percona anywhere that
>> I am aware of.
>>
>> Michael
>>
>>> --
>> Rackspace Australia
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> 
> Don Kehn
> 303-442-0060
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread yunhong jiang
On Mon, 2014-01-27 at 21:14 +, Jani, Nrupal wrote:
> Hi,
> 
>  
> 
> There are two possibilities for the hybrid compute nodes
> 
> - In the first case, a compute node has two NICs,  one SRIOV
> NIC & the other NIC for the VirtIO
> 
> - In the 2nd case, Compute node has only one SRIOV NIC, where
> VFs are used for the VMs, either macvtap or direct assignment.  And
> the PF is used for the uplink to the linux bridge or OVS!!
> 
>  
> 
> My question to the team is whether we consider both of these
> deployments or not?
> 
Nrupal, good question. I assume that if a NIC is used for the virtio vNIC
type, it will not be reported by the hypervisor as an assignable PCI device,
since the host owns it and OVS is set up on top of it.

Irena/Ian, please correct me. At least this is the assumption in the nova PCI
code, I think.

Thanks
--jyh 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread yunhong jiang
On Mon, 2014-01-27 at 22:31 +0100, Ian Wells wrote:
> In any case, as discussed in the meeting, this is an optimisation and
> not something we have to solve in the initial release, because:
>   

+1 for this. We should keep it as a future enhancement effort.

--jyh


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MariaDB support

2014-01-27 Thread Don Kehn
check with the trove folks they might be testing percona.


On Mon, Jan 27, 2014 at 2:34 PM, Michael Still  wrote:

> On Sat, Jan 25, 2014 at 5:32 AM, Tim Bell  wrote:
>
>>
>>
>> We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to
>> have MariaDB as the default MySQL-like DB.
>>
>>
>>
>> Can someone summarise the status of the OpenStack in terms of
>>
>>
>>
>> -What MySQL-flavor is/are currently tested in the gate ?
>>
> Turbo Hipster currently tests mysql and percona _upgrades_ for every
> commit in nova. We have noted no percona specific problems, except for
> percona being a little bit faster than mysql 5.5. It wouldn't be hard to
> add mariadb to the upgrade cycle if people were interested in that.
>
> However, we're not currently testing devstack with percona anywhere that I
> am aware of.
>
> Michael
>
>> --
> Rackspace Australia
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Don Kehn
303-442-0060
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer] Issues with transformers

2014-01-27 Thread Adrian Turjak


On 27/01/14 23:41, Julien Danjou wrote:

On Mon, Jan 27 2014, Adrian Turjak wrote:


I created a gauge metric that is updated via notifications, and a pollster.
The data from both of those needs to be transformed into a cumulative
metric. The transformer object works as intended, but the issue is that
while my pollster/notifications combo does create samples for the same named
metric, two different transformer objects are being created, one for each data
input for the metric.

If I understand correctly what you're trying to do, it won't work
unfortunately. Transformers are run _before_ publishing to a collector,
and are local to a pipeline inside a Ceilometer daemon. The daemons
handling the pollsters and the notifications are two different programs,
and therefore don't communicate.
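For reference, that per-pipeline scoping is visible in ceilometer's pipeline.yaml, where transformers are declared per pipeline; each daemon that loads the pipeline builds its own transformer instances. A sketch of a Havana-era entry follows (the exact schema varies by release, and the transformer name and parameters here are illustrative):

```yaml
# Illustrative pipeline.yaml entry; "uptime_calculator" is a made-up
# transformer name, not one that ships with ceilometer.
-
    name: state_pipeline
    interval: 600
    counters:
        - "state"
    transformers:
        - name: "uptime_calculator"   # instantiated separately by the
          parameters: {}              # polling and notification daemons
    publishers:
        - rpc://
```

Since both the central/compute agent and the notification agent read this same file, each ends up with its own independent `uptime_calculator` object, which is the behavior described above.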



Is there a way around this, or some other approach? Unless I can get both 
sets of data, the final calculations are not useful. I could perhaps force a 
notification on an interval... but that is likely impossible, or requires 
work in Nova.


What I've made is a gauge metric that represents the state of a VM at a 
given time so that I can then do accurate uptime calculations taking 
into account suspended states and shutdowns. The state metric I've made 
is in a blueprint here (and there is a link to my code):

https://blueprints.launchpad.net/ceilometer/+spec/state-meter

Since I need consistent state data as well as accurate transitional data 
it needs to be both notification and pollster. What my transformer then 
does is build a cumulative value of uptime based on the state 
information, and allows a user to define what states they care about for 
billing via the pipeline parameters.


Is there a way to transform data once it reaches the collector? Or would 
an approach be to build a separate agent to transform this, and likely 
other data, into usable billing data to post to the collector?



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 28th

2014-01-27 Thread Ian Wells
Live migration for the first release is intended to be covered by macvtap,
in my mind - direct-mapped devices have limited support in hypervisors,
AIUI. It seemed we had a working theory for that, which we can test out to
see if it's going to work.
-- 
Ian.


On 27 January 2014 21:38, Robert Li (baoli)  wrote:

>  Hi Folks,
>
>  Check out the Agenda for Jan 28th, 2014.
> Please update if I have missed anything. Let's finalize who's doing what
> tomorrow.
>
>  I'm thinking of working on the nova SRIOV items, but live migration may
> be a stretch for the initial release.
>
>  thanks,
> Robert
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] MariaDB support

2014-01-27 Thread Michael Still
On Sat, Jan 25, 2014 at 5:32 AM, Tim Bell  wrote:

>
>
> We are reviewing options between MySQL and MariaDB. RHEL 7 beta seems to
> have MariaDB as the default MySQL-like DB.
>
>
>
> Can someone summarise the status of the OpenStack in terms of
>
>
>
> -What MySQL-flavor is/are currently tested in the gate ?
>
Turbo Hipster currently tests mysql and percona _upgrades_ for every commit
in nova. We have noted no percona specific problems, except for percona
being a little bit faster than mysql 5.5. It wouldn't be hard to add
mariadb to the upgrade cycle if people were interested in that.

However, we're not currently testing devstack with percona anywhere that I
am aware of.

Michael

> --
Rackspace Australia
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Ian Wells
On 27 January 2014 15:58, Robert Li (baoli)  wrote:

>  Hi Folks,
>
>  In today's meeting, we discussed a scheduler issue for SRIOV. The basic
> requirement is for coexistence of the following compute nodes in a cloud:
>   -- SRIOV only compute nodes
>   -- non-SRIOV only compute nodes
>   -- Compute nodes that can support both SRIOV and non-SRIOV ports.
> Lacking a proper name, let's call them compute nodes with hybrid NICs
> support, or simply hybrid compute nodes.
>
>  I'm not sure it's practical to have hybrid compute nodes in a real
> cloud. But it may be useful in the lab to benchmark the performance
> differences between SRIOV, non-SRIOV, and coexistence of both.
>

I think in fact hybrid nodes would be the common case  - there's nothing
wrong with mixing virtual and physical NICs in a VM and it's been the
general case we've been discussing till now.  VMs that *only* support SRIOV
and have no soft switch sound like a complete outlier to me.  I'm assuming
that passthrough devices are a scarce resource and you wouldn't want to
waste them on a low traffic control connection, so you would always have a
softswitch on the host to take care of such cases.

I believe there *is* a use case here when  you have some, but not all,
machines that have SRIOV devices.  They will also have a softswitch of some
sort and are therefore not only 'SRIOV only' in that sense.  But the point
is that if you have a limited SRIOV resource you may want to preserve these
machines for VMs that have SRIOV requirements, and avoid mapping general
VMs with no SRIOV requirements onto them.

You can expand the problem further and avoid loading up machines with
specific PCI devices of any sort if you have a VM that doesn't need a
device of that sort, which comes down to prioritising your machines at
schedule time based on whether they're a good fit for the VM you intend to
schedule.

In any case, as discussed in the meeting, this is an optimisation and not
something we have to solve in the initial release, because:


>
> Irena brought up the idea of using host aggregate. This requires creation
> of a non-SRIOV host aggregate, and use that in the above 'nova boot'
> command. It should work.
>
>
So, while it's not the greatest solution, there's at least a way of
achieving it right now.
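For the record, the host-aggregate workaround mentioned above looks roughly like this with the nova CLI of that era (aggregate, availability-zone, host, and placeholder image/net values are examples, not real names):

```shell
# Group the non-SRIOV hosts into an aggregate exposed as an availability zone.
nova aggregate-create non-sriov-hosts non-sriov-az
nova aggregate-add-host non-sriov-hosts compute-virtio-01  # or the aggregate ID,
                                                           # depending on client version

# Boot the virtio-only VM constrained to that zone.
nova boot --flavor m1.large --image <image-id> \
    --availability-zone non-sriov-az \
    --nic net-id=<net-id> my-vm
```

This keeps virtio-only VMs off the SRIOV-only hosts without any scheduler changes, at the cost of maintaining the aggregate membership by hand.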
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] extending keystone identity

2014-01-27 Thread Simon Perfer
I'm looking to create a simple Identity driver that will look at usernames. A 
small number of specific users should be authenticated by looking at a 
hard-coded password in keystone.conf, while any other users should fall back to 
LDAP authentication.
I based my original driver on what's found here:
http://waipeng.wordpress.com/2013/09/30/openstack-ldap-authentication/
As can be seen in the github code 
(https://raw.github.com/waipeng/keystone/8c18917558bebbded0f9c588f08a84b0ea33d9ae/keystone/identity/backends/ldapauth.py),
 there's a _check_password() method which is supposedly called at some point.
I've based my driver on this ldapauth.py file, and created an Identity class 
which subclasses sql.Identity. Here's what I have so far:








CONF = config.CONF
LOG = logging.getLogger(__name__)

class Identity(sql.Identity):
    def __init__(self):
        super(Identity, self).__init__()
        LOG.debug('My authentication module loaded')

    def _check_password(self, password, user_ref):
        LOG.debug('Authenticating via my custom hybrid authentication')
        username = user_ref.get('name')
        LOG.debug('Username = %s' % username)
I can see from the syslog output that we never enter the _check_password() 
function.
Can someone point me in the right direction regarding which function calls the 
identity driver? Also, what is the entry function in the identity drivers? Why 
wouldn't _check_password() be called, as we see in the github / blog example 
above?
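For what it's worth, the hybrid check itself can be sketched in isolation from keystone's call path. Everything below is hypothetical: STATIC_USERS and ldap_check() are stand-ins for the keystone.conf entries and the LDAP fallback, not real keystone configuration or API:

```python
# Hypothetical sketch of the hybrid password check described above.
# STATIC_USERS stands in for users/passwords read from keystone.conf;
# ldap_check() stands in for the LDAP bind/search fallback.
STATIC_USERS = {'admin': 'secret-from-keystone-conf'}


def ldap_check(username, password):
    # Placeholder: a real driver would attempt an LDAP bind here.
    return False


def check_password(username, password):
    # Specific users are matched against the hard-coded passwords first;
    # everyone else falls through to LDAP.
    if username in STATIC_USERS:
        return STATIC_USERS[username] == password
    return ldap_check(username, password)


print(check_password('admin', 'secret-from-keystone-conf'))  # True
print(check_password('bob', 'whatever'))                     # False (LDAP stub)
```

Whether this logic lives in `_check_password()` or in the driver's top-level authenticate entry point depends on which method keystone actually invokes for the sql backend, which is exactly the question being asked.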
THANKS!
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Jani, Nrupal
Hi,

There are two possibilities for the hybrid compute nodes

-  In the first case, a compute node has two NICs,  one SRIOV NIC & the 
other NIC for the VirtIO

-  In the 2nd case, Compute node has only one SRIOV NIC, where VFs are 
used for the VMs, either macvtap or direct assignment.  And the PF is used for 
the uplink to the linux bridge or OVS!!

My question to the team is whether we consider both of these deployments or not?

Thx,

Nrupal

From: Irena Berezovsky [mailto:ire...@mellanox.com]
Sent: Monday, January 27, 2014 1:01 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking 
a proper name, let's call them compute nodes with hybrid NICs support, or 
simply hybrid compute nodes.

I'm not sure it's practical to have hybrid compute nodes in a real cloud. 
But it may be useful in the lab to benchmark the performance differences 
between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes can actually be preferred in a real 
cloud, since one can define a VM with one vNIC attached via an SR-IOV virtual 
function and another via some vSwitch.
But it definitely makes sense to land a VM with only 'virtio' vNICs on a 
non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image  --nic net-id= vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] Having the ML2 plugin as the neutron backend, it will fail to bind 
the port if no agent is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?
[IrenaB] I think by adding the vnic_type as we plan, the Mechanism Driver will 
bind the port only if it supports the vnic_type and there is a live agent on 
this host. So it should work.

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion to the next week.
Let's try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined,

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Robert Li (baoli)
Ok, this is something that's going to be added in ml2. I was looking at the 
bind_port() routine in mech_agent.py. The routine check_segment_for_agent() 
seems to perform a static check. So we are going to add something like 
check_vnic_type_for_agent(), I guess? Is the pairing of an agent with the mech 
driver predetermined? The routine bind_port() just logs warnings, though.

In any case, this happens after the scheduler has already decided to place the 
VM on the host.

Maybe not for now, but we need to consider how to support the hybrid compute 
nodes. Would an agent be able to support multiple vnic types? Or is it possible 
to reuse the OVS agent while at the same time running another agent to support 
SRIOV? Any thoughts?

--Robert

On 1/27/14 4:01 PM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking 
a proper name, let's call them compute nodes with hybrid NICs support, or 
simply hybrid compute nodes.

I'm not sure it's practical to have hybrid compute nodes in a real cloud. 
But it may be useful in the lab to benchmark the performance differences 
between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes can actually be preferred in a real 
cloud, since one can define a VM with one vNIC attached via an SR-IOV virtual 
function and another via some vSwitch.
But it definitely makes sense to land a VM with only 'virtio' vNICs on a 
non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image  --nic net-id= vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] Having the ML2 plugin as the neutron backend, it will fail to bind 
the port if no agent is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?
[IrenaB] I think by adding the vnic_type as we plan, the Mechanism Driver will 
bind the port only if it supports the vnic_type and there is a live agent on 
this host. So it should work.

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion to the next week.
Let’s try to cover the SRIOV side an

Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread yunhong jiang
On Mon, 2014-01-27 at 14:58 +, Robert Li (baoli) wrote:
> Hi Folks,
> 
> 
> In today's meeting, we discussed a scheduler issue for SRIOV. The
> basic requirement is for coexistence of the following compute nodes in
> a cloud:
>   -- SRIOV only compute nodes
>   -- non-SRIOV only compute nodes
>   -- Compute nodes that can support both SRIOV and non-SRIOV
> ports. Lacking a proper name, let's call them compute nodes with
> hybrid NICs support, or simply hybrid compute nodes.
> 
> 
> I'm not sure if it's practical in having hybrid compute nodes in a
> real cloud. But it may be useful in the lab to bench mark the
> performance differences between SRIOV, non-SRIOV, and coexistence of
> both.
> 
> 
> In a cloud that supports SRIOV in some of the compute nodes, a request
> such as:
> 
> 
>  nova boot --flavor m1.large --image  --nic
> net-id= vm
> 
> 
> doesn't require a SRIOV port. However, it's possible for the nova
> scheduler to place it on a compute node that supports sriov port only.
> Since neutron plugin runs on the controller, port-create would succeed
> unless neutron knows the host doesn't support non-sriov port. But
> connectivity on the node would not be established since no agent is
> running on that host to establish such connectivity. 
> 
> 
> Irena brought up the idea of using host aggregate. This requires
> creation of a non-SRIOV host aggregate, and use that in the above
> 'nova boot' command. It should work.
> 
> 
> The patch I had introduced a new constraint in the existing PCI
> passthrough filter. 
> 
> 
> The consensus seems to be having a better solution in a later release.
> And for now, people can either use host aggregate or resort to their
> own means.
> 
> 
> Let's keep the discussion going on this. 
> 
> 
> Thanks,
> Robert
> 
> 
>  
> 
> 
> 
> 
> 
> 
> On 1/24/14 4:50 PM, "Robert Li (baoli)"  wrote:
> 
> 
> Hi Folks,
> 
> 
> Based on Thursday's discussion and a chat with Irena, I took
> the liberty to add a summary and discussion points for SRIOV
> on Monday and onwards. Check it
> out https://wiki.openstack.org/wiki/Meetings/Passthrough.
> Please feel free to update it. Let's try to finalize it next
> week. The goal is to determine the BPs that need to get
> approved, and to start coding. 
> 
> 
> thanks,
> Robert
> 
> 
> 
> 
> On 1/22/14 8:03 AM, "Robert Li (baoli)" 
> wrote:
> 
> 
> Sounds great! Let's do it on Thursday.
> 
> 
> --Robert
> 
> 
> On 1/22/14 12:46 AM, "Irena Berezovsky"
>  wrote:
> 
> 
> Hi Robert, all,
> 
> I would suggest not to delay the SR-IOV
> discussion to the next week.
> 
> Let’s try to cover the SRIOV side and
> especially the nova-neutron interaction points
> and interfaces this Thursday.
> 
> Once we have the interaction points well
> defined, we can run parallel patches to cover
> the full story.
> 
>  
> 
> Thanks a lot,
> 
> Irena 
> 
>  
> 
> From: Robert Li (baoli)
> [mailto:ba...@cisco.com] 
> Sent: Wednesday, January 22, 2014 12:02 AM
> To: OpenStack Development Mailing List (not
> for usage questions)
> Subject: [openstack-dev] [nova][neutron] PCI
> passthrough SRIOV
> 
> 
>  
> 
> Hi Folks,
> 
> 
>  
> 
> 
> As the debate about PCI flavor versus host
> aggregate goes on, I'd like to move forward
> with the SRIOV side of things in the same
> time. I know that tomorrow's IRC will be
> focusing on the BP review, and it may well
> continue into Thursday. Therefore, let's start
> discussing SRIOV side of things on Monday. 
> 
> 
>  
> 
>  

Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread Jay S Bryant
Sean and John,

I would be happy to help out with this for Cinder.

Let me know how I can help.


Jay S. Bryant
   IBM Cinder Subject Matter Expert  &  Cinder Core Member
Department 7YLA, Building 015-2, Office E125, Rochester, MN
Telephone: (507) 253-4270, FAX (507) 253-6410
TIE Line: 553-4270
E-Mail:  jsbry...@us.ibm.com

 All the world's a stage and most of us are desperately unrehearsed.
   -- Sean O'Casey




From:   John Griffith 
To: "OpenStack Development Mailing List (not for usage questions)" 
, 
Date:   01/27/2014 02:52 PM
Subject:Re: [openstack-dev] Proposed Logging Standards



On Mon, Jan 27, 2014 at 6:07 AM, Sean Dague  wrote:
> Back at the beginning of the cycle, I pushed for the idea of doing some
> log harmonization, so that the OpenStack logs, across services, made
> sense. I've pushed a proposed changes to Nova and Keystone over the past
> couple of days.
>
> This is going to be a long process, so right now I want to just focus on
> making INFO level sane, because as someone that spends a lot of time
> staring at logs in test failures, I can tell you it currently isn't.
>
> https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
> written down so far, comments welcomed.
>
> We kind of need to solve this set of recommendations once and for all up
> front, because negotiating each change, with each project, isn't going
> to work (e.g - https://review.openstack.org/#/c/69218/)
>
> What I'd like to find out now:
>
> 1) who's interested in this topic?
> 2) who's interested in helping flesh out the guidelines for various log
> levels?
> 3) who's interested in helping get these kinds of patches into various
> projects in OpenStack?
> 4) which projects are interested in participating (i.e. interested in
> prioritizing landing these kinds of UX improvements)
>
> This is going to be progressive and iterative. And will require lots of
> folks involved.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Very interested in all of the above.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Irena Berezovsky
Hi Robert,
Please see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 10:29 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking 
a proper name, let's call them compute nodes with hybrid NICs support, or 
simply hybrid compute nodes.

I'm not sure if it's practical to have hybrid compute nodes in a real cloud. 
But it may be useful in the lab to benchmark the performance differences 
between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, the hybrid compute nodes actually can be preferred in the real 
cloud, since one can define VM with one vNIC attached via SR-IOV virtual 
function while the other via some vSwitch.
But it definitely makes sense to land a VM with 'virtio' vNICs only on a 
non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot -flavor m1.large -image  --nic net-id= vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] Having the ML2 plugin as the neutron backend, port binding will fail 
if no agent is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?
[IrenaB] I think by adding the vnic_type as we plan, the Mechanism Driver will 
bind the port only if it supports the vnic_type and there is a live agent on 
this host. So it should work.

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion to the next week.
Let's try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move 
forward with the SRIOV side of things in the same time. I know that tomorrow's 
IRC will be focusing on the BP review, and it may well continue into Thursday. 
Therefore, let's start discussing SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless it's PC

Re: [openstack-dev] [savanna] savannaclient v2 api

2014-01-27 Thread Trevor McKay
We should consider turning "mains" into a string instead of a list for
v2.

Hive and Pig Oozie actions use mains, and each may only specify a single

Re: [openstack-dev] [Solum] Proposed changes to solum-core

2014-01-27 Thread Adrian Otto
Solum Core Reviewers,

Thanks everyone for your feedback. I have made the adjustments. Welcome to the 
core group Angus and Noorul. Thanks again Monty.

Regards,

Adrian

On Jan 27, 2014, at 12:54 PM, Kurt Griffiths 
 wrote:

> +1
> 
> On 1/27/14, 11:54 AM, "Monty Taylor"  wrote:
> 
>> On 01/24/2014 05:32 PM, Adrian Otto wrote:
>>> Solum Core Reviewers,
>>> 
>>> I propose the following changes to solum-core:
>>> 
>>> +asalkeld
>>> +noorul
>>> -mordred
>>> 
>>> Thanks very much to mordred for helping me to bootstrap the reviewer
>>> team. Please reply with your votes.
>> 
>> +1
>> 
>> My pleasure - you guys seem like you're off to the races - and asalkeld
>> and noorul are both doing great.
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Solum] Proposed changes to solum-core

2014-01-27 Thread Kurt Griffiths
+1

On 1/27/14, 11:54 AM, "Monty Taylor"  wrote:

>On 01/24/2014 05:32 PM, Adrian Otto wrote:
>> Solum Core Reviewers,
>>
>> I propose the following changes to solum-core:
>>
>> +asalkeld
>> +noorul
>> -mordred
>>
>> Thanks very much to mordred for helping me to bootstrap the reviewer
>>team. Please reply with your votes.
>
>+1
>
>My pleasure - you guys seem like you're off to the races - and asalkeld
>and noorul are both doing great.
>
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread John Griffith
On Mon, Jan 27, 2014 at 6:07 AM, Sean Dague  wrote:
> Back at the beginning of the cycle, I pushed for the idea of doing some
> log harmonization, so that the OpenStack logs, across services, made
> sense. I've pushed a proposed changes to Nova and Keystone over the past
> couple of days.
>
> This is going to be a long process, so right now I want to just focus on
> making INFO level sane, because as someone that spends a lot of time
> staring at logs in test failures, I can tell you it currently isn't.
>
> https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
> written down so far, comments welcomed.
>
> We kind of need to solve this set of recommendations once and for all up
> front, because negotiating each change, with each project, isn't going
> to work (e.g - https://review.openstack.org/#/c/69218/)
>
> What I'd like to find out now:
>
> 1) who's interested in this topic?
> 2) who's interested in helping flesh out the guidelines for various log
> levels?
> 3) who's interested in helping get these kinds of patches into various
> projects in OpenStack?
> 4) which projects are interested in participating (i.e. interested in
> prioritizing landing these kinds of UX improvements)
>
> This is going to be progressive and iterative. And will require lots of
> folks involved.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Very interested in all of the above.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] stevedore 0.14

2014-01-27 Thread Doug Hellmann
I have just released a new version of stevedore, 0.14, which includes a
change to stop checking version numbers of dependencies for plugins. This
should eliminate one class of problems we've seen where we get conflicting
requirements to install, and the libraries are compatible, but the way
stevedore was using pkg_resources was causing errors when the plugins were
loaded.

Doug
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 28th

2014-01-27 Thread Robert Li (baoli)
Hi Folks,

Check out the Agenda for Jan 28th, 2014. Please update if I have missed 
anything. Let's finalize who's doing what tomorrow.

I'm planning to work on the nova SRIOV items, but live migration may be a 
stretch for the initial release.

thanks,
Robert
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Neutron: Seeking suggestions about vendor service driver persisting info...

2014-01-27 Thread Paul Michali
Hi,

I'm working on a vendor driver for VPNaaS. The device I'm using has different 
'id' requirements for some of the resources related to VPN. For example, the 
IPSec policy information can be identified by an ID up to 31 characters long, 
whereas with OpenStack the ID will be a 36 character UUID.

So, my plans were to generate unique IDs for items like this, and then persist 
them, with a mapping to the UUID. Questions…

Should I do this generation/mapping/persisting in the service driver (plugin 
side) and then pass the generated ID down to the device driver (agent side), or 
should this all be done in the device driver?

I was thinking the latter, as it would allow other devices, which may have 
different requirements, to be handled in the device driver. On the other hand, 
I didn't know if there is any precedent for doing these actions (especially 
persisting) in the service driver.

I take it I would create a new vendor-specific VPN database to hold this 
information?

Can anyone point me to examples of where this is done for service plugins? It 
would be nice to see some other examples as a reference, as I haven't done this 
before in OpenStack.
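For what it's worth, the generation/mapping step itself is small wherever it 
lives. A rough sketch of what I had in mind (the helper name, the SHA-1 
choice, and the in-memory dict are all mine, not from the blueprint — a real 
driver would persist the mapping in a vendor-specific table):

```python
import base64
import hashlib
import uuid

def short_id(resource_uuid, max_len=31):
    """Derive a deterministic ID of at most max_len chars from a UUID.

    Hypothetical helper: the 31-char limit comes from the device
    constraint described above, not from any OpenStack API.
    """
    digest = hashlib.sha1(str(resource_uuid).encode("utf-8")).digest()
    # URL-safe base64 of a 160-bit digest is 27 chars once padding is
    # stripped, comfortably under the 31-char device limit.
    return base64.urlsafe_b64encode(digest).decode("ascii").rstrip("=")[:max_len]

# Keep a mapping so the UUID can be recovered later; in a real driver
# this dict would be a database table, not process memory.
id_map = {}
u = uuid.uuid4()
sid = short_id(u)
id_map[sid] = u
```

Being a hash of the UUID, the short ID is reproducible, which helps if the 
mapping row and the device ever get out of sync.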


Thanks in advance!

PCM (Paul Michali)

MAIL  p...@cisco.com
IRCpcm_  (irc.freenode.net)
TW@pmichali
GPG key4525ECC253E31A83
Fingerprint 307A 96BB 1A4C D2C7 931D 8D2D 4525 ECC2 53E3 1A83



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread Jay Pipes
On Mon, 2014-01-27 at 08:07 -0500, Sean Dague wrote:
> Back at the beginning of the cycle, I pushed for the idea of doing some
> log harmonization, so that the OpenStack logs, across services, made
> sense. I've pushed a proposed changes to Nova and Keystone over the past
> couple of days.
> 
> This is going to be a long process, so right now I want to just focus on
> making INFO level sane, because as someone that spends a lot of time
> staring at logs in test failures, I can tell you it currently isn't.
> 
> https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
> written down so far, comments welcomed.
> 
> We kind of need to solve this set of recommendations once and for all up
> front, because negotiating each change, with each project, isn't going
> to work (e.g - https://review.openstack.org/#/c/69218/)
> 
> What I'd like to find out now:
> 
> 1) who's interested in this topic?
> 2) who's interested in helping flesh out the guidelines for various log
> levels?
> 3) who's interested in helping get these kinds of patches into various
> projects in OpenStack?
> 4) which projects are interested in participating (i.e. interested in
> prioritizing landing these kinds of UX improvements)
> 
> This is going to be progressive and iterative. And will require lots of
> folks involved.

I'm interested, can contribute patches (feel free to assign me) and can
do reviews.

Best,
-jay


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Mistral + taskflow mini-meetup

2014-01-27 Thread Renat Akhmerov
Josh, thanks for sharing this with the community. Just a couple of words as an 
addition to that..

The driver for this conversation is that the TaskFlow library and the Mistral 
service in many ways do similar things: task processing combined somehow (flow 
or workflow). However, there's a number of differences in the approaches that 
the two technologies follow. Initially, when Mistral's development phase 
started about a couple of months ago, the team was willing to use TaskFlow at 
the implementation level. Basically, we can potentially represent Mistral tasks 
as TaskFlow tasks and use the TaskFlow API to run them. One of the problems 
though is that TaskFlow tasks are basically Python methods and hence run 
synchronously (once we get out of the method the task is considered finished), 
whereas Mistral is primarily designed to run asynchronous tasks (send a signal 
to an external system and start waiting for a result which may arrive minutes 
or hours later). Mistral is more like an event-driven system versus a 
traditional executor architecture. So now the Mistral PoC is not using 
TaskFlow, but moving forward we'd like to try to marry these two technologies 
to be more aligned in terms of APIs and feature sets. 


Renat Akhmerov
@ Mirantis Inc.

On 27 Jan 2014, at 13:21, Joshua Harlow  wrote:

> Hi all,
> 
> In order to encourage further discussion off IRC and more in public I'd like 
> to share a etherpad that was worked on during a 'meetup' with some of the 
> mistral folks and me.
> 
> https://etherpad.openstack.org/p/taskflow-mistral-jan-meetup
> 
> It was more of a (mini) in-person meetup but I thought I'd be good to gather 
> some feedback there and let the more general audience see this and ask any 
> questions/feedback/other...
> 
> Some of the key distinctions between taskflow/mistral we talked about and as 
> well other various DSL aspects and some possible action items.
> 
> Feel free to ask questions,
> 
> -Josh
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Robert Li (baoli)
Hi Irena,

I agree on your first comment.

see inline as well.

thanks,
Robert

On 1/27/14 10:54 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking 
a proper name, let's call them compute nodes with hybrid NICs support, or 
simply hybrid compute nodes.

I'm not sure if it's practical to have hybrid compute nodes in a real cloud. 
But it may be useful in the lab to benchmark the performance differences 
between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, the hybrid compute nodes actually can be preferred in the real 
cloud, since one can define VM with one vNIC attached via SR-IOV virtual 
function while the other via some vSwitch.
But it definitely makes sense to land a VM with ‘virtio’ vNICs only on a 
non-SRIOV compute node.

Maybe there should be some sort of preference order of suitable nodes in 
scheduler choice, based on vnic types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot —flavor m1.large —image  --nic net-id= vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] Having the ML2 plugin as the neutron backend, port binding will fail 
if no agent is running on the host.

[ROBERT] If a host supports SRIOV only, and there is an agent running on the 
host to support SRIOV, would binding succeed in ML2 plugin for the above 'nova 
boot' request?

On a hybrid compute node, can we run multiple neutron L2 agents on a single 
host? It seems possible.

Irena brought up the idea of using host aggregate. This requires creation of a 
non-SRIOV host aggregate, and use that in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion to the next week.
Let’s try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move 
forward with the SRIOV side of things in the same time. I know that tomorrow's 
IRC will be focusing on the BP review, and it may well continue into Thursday. 
Therefore, let's start discussing SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless of whether it's PCI flavor or host aggregate or something 
else, how to use it to specify a SRIOV port.
-- new parameters for —nic
-- new parameters for neutron net-create/neutron port-create
-- interface between nova and neutron
-- nova side of work
-- neutron side of work

We should start coding ASAP.

Thanks,
Robert


___
OpenStack-dev mailing list
OpenStack-

Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread CARVER, PAUL
Jay Pipes wrote:

>Have you ever tried using Google Translate for anything more than very
>simple phrases?

>The results can be... well, interesting ;) And given the amount of
>technical terms used in these messages, I doubt GT or any automated
>translating service would provide a whole lot of value...

Exactly what I wasn't suggesting and why I wasn't suggesting it. I meant
an OpenStack specific translation service taking advantage of the work
that the translators have already done and any work they do in the future.

I haven't looked at any of the current translation code in any OpenStack
project, but I presume there's basically a one to one mapping of English
messages to each other available language (maybe with rearrangement
of parameters to account for differences in grammar?)

I'd be surprised and impressed if the translators are applying some sort
of context sensitivity such that a particular English string could end up
getting translated to multiple different strings depending on something
that isn't captured in the English log message.

So basically instead of doing the "search and replace" of the static text
of each message before writing to the logfile, write the message to
the log in English and then have a separate process (I proposed web
based, but it could be as simple as a CLI script) to "search and replace"
the English with the desired target language after the fact.

If there's still a concern about ambiguity where you couldn't identify the
correct translation based only on knowing the original English static
text, then maybe it would be worth assigning unique ID numbers
to every translatable message so that it can be mapped uniquely
to the corresponding message in the target language.
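To make the unique-ID idea concrete, here is a minimal sketch of what I mean 
(the message ID, catalog entries, and the German string are all made up for 
illustration; a real catalog would be generated from the existing translation 
files, not written by hand):

```python
# Hypothetical catalog keyed by a stable message ID. Logs would be written
# in English with the msgid and parameters preserved, so a separate tool
# can re-render any line in another language after the fact.
CATALOG = {
    "NOVA0001": {
        "en": "Instance %(id)s failed to spawn",
        "de": "Instanz %(id)s konnte nicht gestartet werden",
    },
}

def render(msgid, params, lang="en"):
    # Look up the template for the requested language and substitute the
    # original parameters -- the post-hoc "search and replace" step.
    return CATALOG[msgid][lang] % params

record = {"msgid": "NOVA0001", "params": {"id": 42}}
print(render(record["msgid"], record["params"], "de"))
```

The msgid removes any ambiguity about which translation applies, since the 
lookup no longer depends on matching the English static text.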


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Representing PEM Format file as string

2014-01-27 Thread Nachi Ueno
Hi Rajesh

Yes. Please take a look at the nova keypair-add implementation:
https://github.com/openstack/python-novaclient/blob/e3d686f39ad9787a70894dff3db9352be6b3f0dd/novaclient/v1_1/shell.py#L2372
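As a rough illustration of why no custom single-line encoding is needed when 
the client reads the file (the field name "ca_cert" is just an example, not 
the actual API schema): the client ships the PEM text as a JSON string, and 
JSON escapes the newlines on the wire.

```python
import json

# A toy PEM body (real certificates have many more base64 lines).
pem = (
    "-----BEGIN CERTIFICATE-----\n"
    "MIID3TCCA0agAwIBAgIJAKRWnul3NJnr\n"
    "-----END CERTIFICATE-----\n"
)

# json.dumps escapes the newlines, so the request body is a single line
# and the server gets the original multi-line PEM back after decoding.
wire = json.dumps({"ca_cert": pem})
assert json.loads(wire)["ca_cert"] == pem
```

The server-side driver can then write the decoded string straight to the 
device configuration file without any rebuilding step.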

2014-01-27 Rajesh Mohan :
> Nachi,
>
> I did not know that we could give files names. Since we had String in the
> database, I assumed we need to give string as input.
>
> I guess, the neutron client will convert the file to string and then call
> the API. That should work. Thanks for the clarification.
>
>
>
>
> On Mon, Jan 27, 2014 at 10:49 AM, Nachi Ueno  wrote:
>>
>> Hi Rajesh
>>
>> May I ask why we need single line representation of PEM format?
>> For CLI, we will use file_name as same as nova keypair-add.
>> We won't specify PEM on the URL.
>>
>>
>>
>>
>> 2014-01-27 Rajesh Mohan :
>> > Thanks John.
>> >
>> > My initial approach is similar to Keystone's. This is mainly to unblock
>> > me
>> > from making progress on the driver. Nachi is doing the API part. I will
>> > discuss with him to explore other options.
>> >
>> > Can you send us the link to your review?
>> >
>> > Thanks,
>> > -Rajesh Mohan
>> >
>> >
>> >
>> >
>> > On Mon, Jan 27, 2014 at 6:00 AM, John Dennis  wrote:
>> >>
>> >> On 01/26/2014 05:36 PM, rajesh_moh...@dell.com wrote:
>> >> > I am working on SSL VPN BP.
>> >> >
>> >> > CA certificate is one of the resources. We decided to use PEM
>> >> > formatted
>> >> > certificates. It is multi-line string
>> >> >
>> >> >   1 -BEGIN CERTIFICATE-
>> >> >   2 MIID3TCCA0agAwIBAgIJAKRWnul3NJnrMA0GCSqGSIb3DQEBBQUAMIGmMQswCQYD
>> >> >  
>> >> >  21 0vO728pEcn6QtOpU7ZjEv8JLKRHwyq8kwd8gKMflWZRng4R2dj3cdd24oYJxn5HW
>> >> >  22 atXnq+N9H9dFgMfw5NNefwJrZ3zAE6mu0bAIoXVsKT2S
>> >> >  23 -END CERTIFICATE-
>> >> >
>> >> > Is there a standard way to represent this as single line string?
>> >> > Maybe
>> >> > there is some other project that passes certificates on command
>> >> > line/url.
>> >> >
>> >> > I am looking for some accepted way to represent PEM formatted file on
>> >> > command line.
>> >> >
>> >> > I am thinking of concatenating all lines into single string and
>> >> > rebuilding the file when the configuration file is generated. Will we hit
>> >> > any CLI
>> >> > size limitations if we pass long strings.
>> >>
>> >> In general PEM formatted certificates and other X509 binary data
>> >> objects
>> >> should be exchanged in the original PEM format for interoperability
>> >> purposes. For command line tools it's best to pass PEM objects via a
>> >> filename.
>> >>
>> >> However, having said that there is at least one place in Openstack
>> >> which
>> >> passes PEM data via a HTTP header and/or URL, it's the Keystone token
>> >> id
>> >> which is a binary CMS object normally exchanged in PEM format. Keystone
>> >> strips the PEM header and footer, strips line endings and modifies one
>> >> of the base64 alphabet characters which was incompatible with HTTP and
>> >> URL encoding. However what keystone was doing was not correct and in
>> >> fact did not follow an existing RFC (e.g. URL safe base64).
>> >>
>> >> I fixed these problems and in the process wrote two small Python
>> >> modules
>> >> base64utils and pemutils to do PEM transformations correctly (plus
>> >> general utilities for working with base64 and PEM data). These were
>> >> submitted to both keystone and oslo, Oslo on the assumption they should
>> >> be general purpose utilities available to all of openstack. I believe
>> >> these have languished in review purgatory, because I was pulled off to
>> >> work on other issues I haven't had the time to babysit the review.
>> >>
>> >>
>> >> --
>> >> John
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-dev@lists.openstack.org
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] team meeting Friday 31 Jan 1400 UTC

2014-01-27 Thread Doug Hellmann
The Oslo team has a few items we need to discuss, so I'm calling a meeting
for this Friday, 31 Jan. Our normal slot is 1400 UTC Friday in
#openstack-meeting.

The agenda [1] includes 2 items (so far):

1. log translations (see the other thread started today)
2. parallelizing our tests

If you have anything else you would like to discuss, please add it to the
agenda.

See you Friday!
Doug


[1] https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Reviewday data download problem

2014-01-27 Thread Brant Knudson
A few days ago, a change I submitted to reviewday to generate JSON results
for easy consumption by an application merged. I was hoping that this could
be used with next-review to help me prioritize reviews.

So I was expecting to now be able to go to the URL and get the .json file,
like this:
 curl http://status.openstack.org/reviews/reviewday.json

Unfortunately I'm getting a 403 Forbidden error saying I don't have
permission. I think the script is working and generating reviewday.json
because if I use a different filename I get 404 Not Found instead.

I probably made some assumptions that I shouldn't have about how the
status.openstack.org web site works. Is there something else I can change
myself to open up access to the file, or someone I can contact that can
update the config?

- Brant
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Representing PEM Format file as string

2014-01-27 Thread Rajesh Mohan
Nachi,

I did not know that we could give files names. Since we had String in the
database, I assumed we need to give string as input.

I guess, the neutron client will convert the file to string and then call
the API. That should work. Thanks for the clarification.




On Mon, Jan 27, 2014 at 10:49 AM, Nachi Ueno  wrote:

> Hi Rajesh
>
> May I ask why we need single line representation of PEM format?
> For CLI, we will use file_name as same as nova keypair-add.
> We won't specify PEM on the URL.
>
>
>
>
> 2014-01-27 Rajesh Mohan :
> > Thanks John.
> >
> > My initial approach is similar to Keystone's. This is mainly to unblock
> me
> > from making progress on the driver. Nachi is doing the API part. I will
> > discuss with him to explore other options.
> >
> > Can you send us the link to your review?
> >
> > Thanks,
> > -Rajesh Mohan
> >
> >
> >
> >
> > On Mon, Jan 27, 2014 at 6:00 AM, John Dennis  wrote:
> >>
> >> On 01/26/2014 05:36 PM, rajesh_moh...@dell.com wrote:
> >> > I am working on SSL VPN BP.
> >> >
> >> > CA certificate is one of the resources. We decided to use PEM
> formatted
> >> > certificates. It is a multi-line string:
> >> >
> >> >   1 -BEGIN CERTIFICATE-
> >> >   2 MIID3TCCA0agAwIBAgIJAKRWnul3NJnrMA0GCSqGSIb3DQEBBQUAMIGmMQswCQYD
> >> >  
> >> >  21 0vO728pEcn6QtOpU7ZjEv8JLKRHwyq8kwd8gKMflWZRng4R2dj3cdd24oYJxn5HW
> >> >  22 atXnq+N9H9dFgMfw5NNefwJrZ3zAE6mu0bAIoXVsKT2S
> >> >  23 -END CERTIFICATE-
> >> >
> >> > Is there a standard way to represent this as a single-line string? Maybe
> >> > there is some other project that passes certificates on command
> line/url.
> >> >
> >> > I am looking for some accepted way to represent a PEM formatted file on
> >> > command line.
> >> >
> >> > I am thinking of concatenating all lines into a single string and
> >> > rebuilding the file when the configuration file is generated. Will we hit
> any CLI
> >> > size limitations if we pass long strings.
> >>
> >> In general PEM formatted certificates and other X509 binary data objects
> >> should be exchanged in the original PEM format for interoperability
> >> purposes. For command line tools it's best to pass PEM objects via a
> >> filename.
> >>
> >> However, having said that there is at least one place in Openstack which
> >> passes PEM data via a HTTP header and/or URL, it's the Keystone token id
> >> which is a binary CMS object normally exchanged in PEM format. Keystone
> >> strips the PEM header and footer, strips line endings and modifies one
> >> of the base64 alphabet characters which was incompatible with HTTP and
> >> URL encoding. However what keystone was doing was not correct and in
> >> fact did not follow an existing RFC (e.g. URL safe base64).
> >>
> >> I fixed these problems and in the process wrote two small Python modules
> >> base64utils and pemutils to do PEM transformations correctly (plus
> >> general utilities for working with base64 and PEM data). These were
> >> submitted to both keystone and oslo, Oslo on the assumption they should
> >> be general purpose utilities available to all of openstack. I believe
> >> these have languished in review purgatory, because I was pulled off to
> >> work on other issues I haven't had the time to babysit the review.
> >>
> >>
> >> --
> >> John
> >>
> >
> >
> >
> >
>
>


Re: [openstack-dev] [savanna] paramiko requirement of >= 1.9.0?

2014-01-27 Thread Sergey Lukjanov
Currently we have paramiko >= 1.9.0 in both global-requirements and savanna:

https://review.openstack.org/#/c/68088/
https://review.openstack.org/#/c/69045/
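For illustration, a minimal sketch of the version comparison behind the ">= 1.9.0" bump. Real requirement handling goes through pip/pkg_resources rather than hand-rolled tuples; this just shows why 1.8.0 fails the check while 1.9.0 and later pass:

```python
# Naive version gate illustrating the ">= 1.9.0" requirement. The sock=
# keyword to SSHClient.connect() first appeared in paramiko 1.9.0, so
# anything older cannot run the neutron-private-net-provisioning code path.
def as_tuple(version: str) -> tuple:
    """Split a dotted version string into a comparable integer tuple."""
    return tuple(int(part) for part in version.split("."))


def satisfies(installed: str, minimum: str = "1.9.0") -> bool:
    """True if the installed version meets the minimum requirement."""
    return as_tuple(installed) >= as_tuple(minimum)


assert satisfies("1.9.0")      # sock= keyword added in this release
assert satisfies("1.12.1")
assert not satisfies("1.8.0")  # predates ssh.connect(..., sock=...)
```

In practice the fix is simply bumping the requirements line, as the reviews above do; the sketch only makes the boundary explicit.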


On Tue, Jan 21, 2014 at 3:41 PM, Sergey Lukjanov wrote:

> Here is a change for global-requirements
> https://review.openstack.org/#/c/68088/
>
>
> On Tue, Jan 21, 2014 at 3:30 PM, Sergey Lukjanov 
> wrote:
>
>> Hey Matt,
>>
>> that's correct, we should bump paramiko version to >= 1.9.0. It was
>> released more than a year ago, so all of us use paramiko >= 1.9.0
>>
>> Thanks for catching this.
>>
>>
>>
>> On Sun, Jan 19, 2014 at 7:44 AM, Matthew Farrellee wrote:
>>
>>> jon,
>>>
>>> please confirm a suspicion of mine.
>>>
>>> the neutron-private-net-provisioning bp impl added a sock= parameter to
>>> the ssh.connect call in remote.py (https://github.com/openstack/savanna/commit/9afb5f60).
>>>
>>> we currently require paramiko >= 1.8.0, but it looks like the sock param
>>> was only added to paramiko 1.9.0 (https://github.com/paramiko/paramiko/commit/31ea4f0734a086f2345aaea57fd6fc1c3ea4a87e)
>>>
>>> do we need paramiko >= 1.9.0 as our requirement?
>>>
>>> also, what version are you using in your installation?
>>>
>>> best,
>>>
>>>
>>> matt
>>>
>>>
>>
>>
>>
>> --
>> Sincerely yours,
>> Sergey Lukjanov
>> Savanna Technical Lead
>> Mirantis Inc.
>>
>
>
>
> --
> Sincerely yours,
> Sergey Lukjanov
> Savanna Technical Lead
> Mirantis Inc.
>



-- 
Sincerely yours,
Sergey Lukjanov
Savanna Technical Lead
Mirantis Inc.


Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread Jay Pipes
On Mon, 2014-01-27 at 19:06 +, CARVER, PAUL wrote:
> Joshua Harlow wrote:
> >From what I know, almost all (correct me if I am wrong) open source projects
> >don't translate log messages; so it seems odd to be the special snowflake
> >project/s.
> 
> >Do people find this type of translation useful?
> 
> >It'd be nice to know how many people really do, so the benefits/drawbacks of
> >doing it can be evaluated by real usage data.
> 
> This is just a wild idea off the top of my head, but what about creating a log
> translation service completely independent of running systems? Basically
> I'm thinking of a web based UI hosted on openstack.org where you can
> upload a logfile or copy/paste log lines and receive it back in the language
> of your choice.
> 
> When googling for an error message it would definitely be better for
> everyone to be using the same language because otherwise you'll
> only find forum posts and so forth in your own language and probably
> miss a solution posted in another language.
> 
> But I can certainly see that some people might actually be able to figure
> out problems for themselves if they could see the error in their native
> language.
> 
> So my idea is to log messages in English but create a standard way to
> get a translated version.
> 
> As a side effect, the usage of the web based translator server would
> give a way to answer your question about how many people use it.
> If it doesn't get any usage then people can stop investing the time in
> creating the translations.

Have you ever tried using Google Translate for anything more than very
simple phrases?

The results can be... well, interesting ;) And given the amount of
technical terms used in these messages, I doubt GT or any automated
translating service would provide a whole lot of value...

Best,
-jay




Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread CARVER, PAUL
Joshua Harlow wrote:
>From what I know, almost all (correct me if I am wrong) open source projects
>don't translate log messages; so it seems odd to be the special snowflake
>project/s.

>Do people find this type of translation useful?

>It'd be nice to know how many people really do, so the benefits/drawbacks of
>doing it can be evaluated by real usage data.

This is just a wild idea off the top of my head, but what about creating a log
translation service completely independent of running systems? Basically
I'm thinking of a web based UI hosted on openstack.org where you can
upload a logfile or copy/paste log lines and receive it back in the language
of your choice.

When googling for an error message it would definitely be better for
everyone to be using the same language because otherwise you'll
only find forum posts and so forth in your own language and probably
miss a solution posted in another language.

But I can certainly see that some people might actually be able to figure
out problems for themselves if they could see the error in their native
language.

So my idea is to log messages in English but create a standard way to
get a translated version.

As a side effect, the usage of the web based translator server would
give a way to answer your question about how many people use it.
If it doesn't get any usage then people can stop investing the time in
creating the translations.



Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread Doug Hellmann
On Mon, Jan 27, 2014 at 1:51 PM, Daniel P. Berrange wrote:

> On Mon, Jan 27, 2014 at 01:12:19PM -0500, Doug Hellmann wrote:
> > On Mon, Jan 27, 2014 at 12:58 PM, Daniel P. Berrange <
> berra...@redhat.com>wrote:
> >
> > > On Mon, Jan 27, 2014 at 12:42:28PM -0500, Doug Hellmann wrote:
> > > > We have a blueprint open for separating translated log messages into
> > > > different domains so the translation team can prioritize them
> differently
> > > > (focusing on errors and warnings before debug messages, for example)
> [1].
> > >
> > > > Feedback?
> > >
> > > > [1]
> > > >
> > >
> https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
> > >
> > > IMHO we've created ourselves a problem we don't need to have in the
> first
> > > place by trying to translate every single log message. It causes pain
> for
> > > developers & vendors because debug logs from users can be in any language
> > > which the person receiving will often not be able to understand. It
> creates
> > > pain for translators by giving them an insane amount of work to do,
> which
> > > never ends since log message text is changed so often. Now we're
> creating
> > > yet more pain & complexity by trying to produce multiple log domains to
> > > solve
> > > a problem of having so many msgs to translate. I accept that some
> people
> > > will
> > > like translated log messages, but I don't think this is a net win when
> you
> > > look at the overall burden they're imposing.
> > >
> > > Shouldn't we just say no to this burden and remove translation of all
> log
> > > messages, except for those at WARN/ERROR level which are likely to be
> seen
> > > by administrators on a day-to-day basis. There are few enough of those
> that
> > > we wouldn't need these extra translation domains. IMHO translating
> stuff
> > > at DEBUG/INFO level is a waste of limited time & resources.
> > >
> >
> > Thanks for raising this point, I meant to address it in my original
> email.
> >
> > Many deployers do in fact want to see the log messages in their native
> > language, either instead of or in addition to English. This change is an
> > attempt to accommodate them, while allowing other folks that don't care
> to
> > continue to not care.
>
> The implication of splitting the log messages into a separate translation
> domain is that translators will then prioritize translation of text from
> API error messages. IOW this split into translation domains will quite
> likely mean that translators just ignore translation of the ever-changing
> log messages entirely. So even if deployers want translated log messages
> they may well find they don't get them. Which again leads me to question
> whether the burden of this is justified.
>

The people actually doing that work have spoken to us and asked us to make
this change. The work is being done.

Doug



>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/:|
> |: http://libvirt.org  -o- http://virt-manager.org:|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/:|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc:|
>


[openstack-dev] [infra] Meeting Tuesday January 28th at 19:00 UTC

2014-01-27 Thread Elizabeth Krumbach Joseph
The OpenStack Infrastructure (Infra) team is hosting our weekly
meeting tomorrow, Tuesday January 28th, at 19:00 UTC in
#openstack-meeting

Meeting agenda available here:
https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting (anyone is
welcome to add agenda items)

Everyone interested in infrastructure and process surrounding
automated testing and deployment is encouraged to attend.

-- 
Elizabeth Krumbach Joseph || Lyz || pleia2
http://www.princessleia.com



Re: [openstack-dev] [Neutron][IPv6] A pair of mode keywords

2014-01-27 Thread Shixiong Shang
+1 for the ones with the “ipv6_” prefix.





On Jan 27, 2014, at 1:15 PM, Veiga, Anthony  
wrote:

> I vote to prefix them (ipv6_).  There's no guarantee of forward
> compatibility with a new protocol and this way it can't be confused with a
> (non-existant) selection method for IPv4, either.  Also, future updates of
> other protocols would require a new attribute and break the API less.
> -Anthony
> 
> 
>> OK - any suggestions for the names of API attributes?
>> 
>> The PDF[0] shared does not specify the names of the attributes, so I had
>> two ideas for the names of the two new attributes being added to the
>> Subnet resource:
>> 
>> Either prefix them with "ipv6"
>> 
>> * ipv6_ra_mode
>> * ipv6_address_mode
>> 
>> Or don't prefix them:
>> 
>> * ra_mode
>> * address_mode
>> 
>> Thoughts?
>> 
>> [0]: 
>> https://www.dropbox.com/s/rq8xmbruqthef38/IPv6%20Two%20Modes%20v2.0.pdf
>> 
>> -- 
>> Sean M. Collins
> 
> 




Re: [openstack-dev] [Neutron] Representing PEM Format file as string

2014-01-27 Thread Jay Pipes
On Mon, 2014-01-27 at 10:49 -0800, Nachi Ueno wrote:
> Hi Rajesh
> 
> May I ask why we need single line representation of PEM format?
> For CLI, we will use file_name as same as nova keypair-add.
> We won't specify PEM on the URL.

++

-jay




Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread Daniel P. Berrange
On Mon, Jan 27, 2014 at 01:12:19PM -0500, Doug Hellmann wrote:
> On Mon, Jan 27, 2014 at 12:58 PM, Daniel P. Berrange 
> wrote:
> 
> > On Mon, Jan 27, 2014 at 12:42:28PM -0500, Doug Hellmann wrote:
> > > We have a blueprint open for separating translated log messages into
> > > different domains so the translation team can prioritize them differently
> > > (focusing on errors and warnings before debug messages, for example) [1].
> >
> > > Feedback?
> >
> > > [1]
> > >
> > https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
> >
> > IMHO we've created ourselves a problem we don't need to have in the first
> > place by trying to translate every single log message. It causes pain for
> > developers & vendors because debug logs from users can be in any language
> > which the person receiving will often not be able to understand. It creates
> > pain for translators by giving them an insane amount of work to do, which
> > never ends since log message text is changed so often. Now we're creating
> > yet more pain & complexity by trying to produce multiple log domains to
> > solve
> > a problem of having so many msgs to translate. I accept that some people
> > will
> > like translated log messages, but I don't think this is a net win when you
> > look at the overall burden they're imposing.
> >
> > Shouldn't we just say no to this burden and remove translation of all log
> > messages, except for those at WARN/ERROR level which are likely to be seen
> > by administrators on a day-to-day basis. There are few enough of those that
> > we wouldn't need these extra translation domains. IMHO translating stuff
> > at DEBUG/INFO level is a waste of limited time & resources.
> >
> 
> Thanks for raising this point, I meant to address it in my original email.
> 
> Many deployers do in fact want to see the log messages in their native
> language, either instead of or in addition to English. This change is an
> attempt to accommodate them, while allowing other folks that don't care to
> continue to not care.

The implication of splitting the log messages into a separate translation
domain is that translators will then prioritize translation of text from
API error messages. IOW this split into translation domains will quite
likely mean that translators just ignore translation of the ever-changing
log messages entirely. So even if deployers want translated log messages
they may well find they don't get them. Which again leads me to question
whether the burden of this is justified.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Neutron] Representing PEM Format file as string

2014-01-27 Thread Nachi Ueno
Hi Rajesh

May I ask why we need single line representation of PEM format?
For CLI, we will use file_name as same as nova keypair-add.
We won't specify PEM on the URL.




2014-01-27 Rajesh Mohan :
> Thanks John.
>
> My initial approach is similar to Keystone's. This is mainly to unblock me
> from making progress on the driver. Nachi is doing the API part. I will
> discuss with him to explore other options.
>
> Can you send us the link to your review?
>
> Thanks,
> -Rajesh Mohan
>
>
>
>
> On Mon, Jan 27, 2014 at 6:00 AM, John Dennis  wrote:
>>
>> On 01/26/2014 05:36 PM, rajesh_moh...@dell.com wrote:
>> > I am working on SSL VPN BP.
>> >
>> > CA certificate is one of the resources. We decided to use PEM formatted
>> > certificates. It is a multi-line string:
>> >
>> >   1 -BEGIN CERTIFICATE-
>> >   2 MIID3TCCA0agAwIBAgIJAKRWnul3NJnrMA0GCSqGSIb3DQEBBQUAMIGmMQswCQYD
>> >  
>> >  21 0vO728pEcn6QtOpU7ZjEv8JLKRHwyq8kwd8gKMflWZRng4R2dj3cdd24oYJxn5HW
>> >  22 atXnq+N9H9dFgMfw5NNefwJrZ3zAE6mu0bAIoXVsKT2S
>> >  23 -END CERTIFICATE-
>> >
>> > Is there a standard way to represent this as a single-line string? Maybe
>> > there is some other project that passes certificates on command line/url.
>> >
>> > I am looking for some accepted way to represent a PEM formatted file on
>> > command line.
>> >
>> > I am thinking of concatenating all lines into a single string and
>> > rebuilding the file when the configuration file is generated. Will we hit any
>> > CLI
>> > size limitations if we pass long strings.
>>
>> In general PEM formatted certificates and other X509 binary data objects
>> should be exchanged in the original PEM format for interoperability
>> purposes. For command line tools it's best to pass PEM objects via a
>> filename.
>>
>> However, having said that there is at least one place in Openstack which
>> passes PEM data via a HTTP header and/or URL, it's the Keystone token id
>> which is a binary CMS object normally exchanged in PEM format. Keystone
>> strips the PEM header and footer, strips line endings and modifies one
>> of the base64 alphabet characters which was incompatible with HTTP and
>> URL encoding. However what keystone was doing was not correct and in
>> fact did not follow an existing RFC (e.g. URL safe base64).
>>
>> I fixed these problems and in the process wrote two small Python modules
>> base64utils and pemutils to do PEM transformations correctly (plus
>> general utilities for working with base64 and PEM data). These were
>> submitted to both keystone and oslo, Oslo on the assumption they should
>> be general purpose utilities available to all of openstack. I believe
>> these have languished in review purgatory, because I was pulled off to
>> work on other issues I haven't had the time to babysit the review.
>>
>>
>> --
>> John
>>
>
>
>
>



Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread Joshua Harlow
+1. I've never understood this either, personally.

From what I know, almost all (correct me if I am wrong) open source projects
don't translate log messages; so it seems odd to be the special snowflake
project/s.

Do people find this type of translation useful?

It'd be nice to know how many people really do, so the benefits/drawbacks of
doing it can be evaluated by real usage data.

-Josh

On 1/27/14, 9:58 AM, "Daniel P. Berrange"  wrote:

>On Mon, Jan 27, 2014 at 12:42:28PM -0500, Doug Hellmann wrote:
>> We have a blueprint open for separating translated log messages into
>> different domains so the translation team can prioritize them
>>differently
>> (focusing on errors and warnings before debug messages, for example)
>>[1].
>
>> Feedback?
>
>> [1]
>> 
>>https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
>
>IMHO we've created ourselves a problem we don't need to have in the first
>place by trying to translate every single log message. It causes pain for
>developers & vendors because debug logs from users can be in any language
>which the person receiving will often not be able to understand. It
>creates
>pain for translators by giving them an insane amount of work to do, which
>never ends since log message text is changed so often. Now we're creating
>yet more pain & complexity by trying to produce multiple log domains to
>solve
>a problem of having so many msgs to translate. I accept that some people
>will
>like translated log messages, but I don't think this is a net win when you
>look at the overall burden they're imposing.
>
>Shouldn't we just say no to this burden and remove translation of all log
>messages, except for those at WARN/ERROR level which are likely to be seen
>by administrators on a day-to-day basis. There are few enough of those that
>we wouldn't need these extra translation domains. IMHO translating stuff
>at DEBUG/INFO level is a waste of limited time & resources.
>
>Regards,
>Daniel
>-- 
>|: http://berrange.com  -o-
>http://www.flickr.com/photos/dberrange/ :|
>|: http://libvirt.org  -o-
>http://virt-manager.org :|
>|: http://autobuild.org   -o-
>http://search.cpan.org/~danberr/ :|
>|: http://entangle-photo.org   -o-
>http://live.gnome.org/gtk-vnc :|
>




Re: [openstack-dev] [Neutron] Representing PEM Format file as string

2014-01-27 Thread Rajesh Mohan
Thanks John.

My initial approach is similar to Keystone's. This is mainly to unblock me
from making progress on the driver. Nachi is doing the API part. I will
discuss with him to explore other options.

Can you send us the link to your review?

Thanks,
-Rajesh Mohan




On Mon, Jan 27, 2014 at 6:00 AM, John Dennis  wrote:

> On 01/26/2014 05:36 PM, rajesh_moh...@dell.com wrote:
> > I am working on SSL VPN BP.
> >
> > CA certificate is one of the resources. We decided to use PEM formatted
> certificates. It is a multi-line string:
> >
> >   1 -BEGIN CERTIFICATE-
> >   2 MIID3TCCA0agAwIBAgIJAKRWnul3NJnrMA0GCSqGSIb3DQEBBQUAMIGmMQswCQYD
> >  
> >  21 0vO728pEcn6QtOpU7ZjEv8JLKRHwyq8kwd8gKMflWZRng4R2dj3cdd24oYJxn5HW
> >  22 atXnq+N9H9dFgMfw5NNefwJrZ3zAE6mu0bAIoXVsKT2S
> >  23 -END CERTIFICATE-
> >
> > Is there a standard way to represent this as a single-line string? Maybe
> there is some other project that passes certificates on command line/url.
> >
> > I am looking for some accepted way to represent a PEM formatted file on
> command line.
> >
> > I am thinking of concatenating all lines into a single string and
> rebuilding the file when the configuration file is generated. Will we hit any
> CLI size limitations if we pass long strings.
>
> In general PEM formatted certificates and other X509 binary data objects
> should be exchanged in the original PEM format for interoperability
> purposes. For command line tools it's best to pass PEM objects via a
> filename.
>
> However, having said that there is at least one place in Openstack which
> passes PEM data via a HTTP header and/or URL, it's the Keystone token id
> which is a binary CMS object normally exchanged in PEM format. Keystone
> strips the PEM header and footer, strips line endings and modifies one
> of the base64 alphabet characters which was incompatible with HTTP and
> URL encoding. However what keystone was doing was not correct and in
> fact did not follow an existing RFC (e.g. URL safe base64).
>
> I fixed these problems and in the process wrote two small Python modules
> base64utils and pemutils to do PEM transformations correctly (plus
> general utilities for working with base64 and PEM data). These were
> submitted to both keystone and oslo, Oslo on the assumption they should
> be general purpose utilities available to all of openstack. I believe
> these have languished in review purgatory, because I was pulled off to
> work on other issues I haven't had the time to babysit the review.
>
>
> --
> John
>
>


[openstack-dev] Mistral + taskflow mini-meetup

2014-01-27 Thread Joshua Harlow
Hi all,

In order to encourage further discussion off IRC and more in public, I'd like to 
share an etherpad that was worked on during a 'meetup' with some of the mistral 
folks and me.

https://etherpad.openstack.org/p/taskflow-mistral-jan-meetup

It was more of a (mini) in-person meetup, but I thought it'd be good to gather 
some feedback there and let the more general audience see this and ask any 
questions/feedback/other...

We talked about some of the key distinctions between taskflow/mistral, as well 
as various other DSL aspects and some possible action items.

Feel free to ask questions,

-Josh


Re: [openstack-dev] [Neutron][IPv6] A pair of mode keywords

2014-01-27 Thread Veiga, Anthony
I vote to prefix them (ipv6_).  There's no guarantee of forward
compatibility with a new protocol and this way it can't be confused with a
(non-existant) selection method for IPv4, either.  Also, future updates of
other protocols would require a new attribute and break the API less.
-Anthony
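For illustration, a sketch of what the prefixed attributes might look like on a Subnet payload. The mode values ("slaac", "dhcpv6-stateful", "dhcpv6-stateless") are placeholders drawn from the general IPv6 discussion, not a finalized API:

```python
# Hypothetical Subnet resource carrying the two proposed attributes,
# plus a toy validator. Attribute names follow the "ipv6_" vote above;
# the allowed mode values are illustrative assumptions.
VALID_MODES = {"slaac", "dhcpv6-stateful", "dhcpv6-stateless"}


def validate_subnet(subnet: dict) -> dict:
    """Reject unknown values for the two ipv6_* mode attributes."""
    for attr in ("ipv6_ra_mode", "ipv6_address_mode"):
        mode = subnet.get(attr)
        if mode is not None and mode not in VALID_MODES:
            raise ValueError("%s: unknown mode %r" % (attr, mode))
    return subnet


subnet = {
    "cidr": "2001:db8::/64",
    "ip_version": 6,
    "ipv6_ra_mode": "slaac",
    "ipv6_address_mode": "slaac",
}
validate_subnet(subnet)
```

The prefix keeps the attributes unambiguous next to any future v4-only or protocol-agnostic fields, which is the forward-compatibility point made above.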


>OK - any suggestions for the names of API attributes?
>
>The PDF[0] shared does not specify the names of the attributes, so I had
>two ideas for the names of the two new attributes being added to the
>Subnet resource:
>
>Either prefix them with "ipv6"
>
>* ipv6_ra_mode
>* ipv6_address_mode
>
>Or don't prefix them:
>
>* ra_mode
>* address_mode
>
>Thoughts?
>
>[0]: 
>https://www.dropbox.com/s/rq8xmbruqthef38/IPv6%20Two%20Modes%20v2.0.pdf
>
>-- 
>Sean M. Collins




Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread Doug Hellmann
On Mon, Jan 27, 2014 at 12:58 PM, Daniel P. Berrange wrote:

> On Mon, Jan 27, 2014 at 12:42:28PM -0500, Doug Hellmann wrote:
> > We have a blueprint open for separating translated log messages into
> > different domains so the translation team can prioritize them differently
> > (focusing on errors and warnings before debug messages, for example) [1].
>
> > Feedback?
>
> > [1]
> >
> https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain
>
> IMHO we've created ourselves a problem we don't need to have in the first
> place by trying to translate every single log message. It causes pain for
> developers & vendors because debug logs from users can be in any language
> which the person receiving will often not be able to understand. It creates
> pain for translators by giving them an insane amount of work to do, which
> never ends since log message text is changed so often. Now we're creating
> yet more pain & complexity by trying to produce multiple log domains to
> solve
> a problem of having so many msgs to translate. I accept that some people
> will
> like translated log messages, but I don't think this is a net win when you
> look at the overall burden they're imposing.
>
> Shouldn't we just say no to this burden and remove translation of all log
> messages, except for those at WARN/ERROR level which are likely to be seen
> by administrators on a day-to-day basis. There are few enough of those that
> we wouldn't need these extra translation domains. IMHO translating stuff
> at DEBUG/INFO level is a waste of limited time & resources.
>

Thanks for raising this point, I meant to address it in my original email.

Many deployers do in fact want to see the log messages in their native
language, either instead of or in addition to English. This change is an
attempt to accommodate them, while allowing other folks that don't care to
continue to not care.

Doug



>
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/:|
> |: http://libvirt.org  -o- http://virt-manager.org:|
> |: http://autobuild.org   -o- http://search.cpan.org/~danberr/:|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc:|
>
>


Re: [openstack-dev] [TripleO] State preserving upgrades working, next MVP selection?

2014-01-27 Thread Dan Prince


- Original Message -
> From: "Clint Byrum" 
> To: "openstack-dev" 
> Sent: Monday, January 27, 2014 12:48:23 PM
> Subject: Re: [openstack-dev] [TripleO] State preserving upgrades working, 
> next MVP selection?
> 
> Excerpts from Dan Prince's message of 2014-01-27 09:22:21 -0800:
> > 
> > - Original Message -
> > > From: "Robert Collins" 
> > > To: "OpenStack Development Mailing List"
> > > 
> > > Sent: Sunday, January 26, 2014 3:30:22 PM
> > > Subject: [openstack-dev] [TripleO] State preserving upgrades working,
> > > next MVP selection?
> > > 
> > > So great news - we've now got state preserving upgrades actually
> > > working - we can now upgrade a deployed cloud (with downtime) without
> > > tossing away all the users' valuable data. Yay. This isn't entirely
> > > done as we have a couple of outstanding patches we're running early
> > > versions of, but - still, it's time to pick a new focus.
> > > 
> > > So we need to pick the next step to focus on. In our roadmap we have:
> > > 
> > > MVP4: Keep VMs running during deploys.
> > > 
> > > This to my mind means two things:
> > >  - VMs need to not be interrupted
> > >  - network traffic needs to be uninterrupted
> > > 
> > > Now, as the kernel can be upgraded in a deploy, this still requires us
> > > to presume that we may have to reboot a machine - so we're not at the
> > > point of focusing on high performance updates yet.
> > > 
> > > Further consequences - we'll need two network nodes and two
> > > hypervisors, and live migration. 10m (the times-two reboot time for a
> > > server) is too long for the central DB to be down if we want
> > > neutron-agents not to get unhappy as well, so we'll really need two
> > > control plane nodes.
> > > 
> > > So I think the next MVP needs the following cards:
> > >  - HA DB
> > >  - HA APIs
> > >  - rolling upgrades
> > >  - nova live migration
> > 
> > It seems a bit fuzzy whether live migration violates the rules above
> > (no VM interruption, no network disruption). Live migration is certainly a
> > good feature to have in general... but wiring it into our upgrade strategy
> > seems like a bad idea. I would much rather see us put the effort into
> > an upgrade path which allows VMs to persist on the compute host machine
> > (uninterrupted) while the upgrade takes place. Live migrating things
> > back and forth all the time just seems like a thrashing, cool for a demo,
> > but bad idea in production sort of thing to me.
> > 
> 
> I'm not sure I understand. You must terminate the VMs when you update
> the kernel. Are you saying we should not handle kernel upgrades, or that
> we should just focus on evacuation for kernel upgrades?

I missed that the focus here was only on kernel upgrades. I thought this was 
just a general upgrades thread with kernel upgrades being optional.

So.. if the subject here is really just "State preserving kernel upgrades" then 
carry on I guess...

> 
> Evacuation has the same problem, but with VM interruption added in. I
> think we should actually offer either as an option, but we have to offer
> one of them, or we'll have no way to update compute node kernels.
> 
> The non-kernel upgrade path is an order of magnitude easier, and we have
> discussed optimizations for it quite a bit already in other threads. We'll
> get to it. But leaving people without a way to upgrade the kernel on
> their compute nodes is not something I think we want to do.

If it is easier I would say let's go on and do it first then. From a priority 
standpoint an application redeployment of just OpenStack (without a kernel 
upgrade) is certainly going to be more useful on a day-to-day basis. Some shops 
may already have ways of performing hand-cut, in-place kernel upgrades anyway, 
so while an automated approach is valuable I'm not sure it is the most useful 
first order of business.

> 
> > 
> > >  - neutron agent migration *or* neutron distributed-HA setup
> > >  - scale the heat template to have 2 control plane nodes
> > >  - scale the heat template to have 2 hypervisor nodes
> > 
> > This is cool, especially for bare metal sorts of setups. For developers
> > though I would sort of like to consider a hybrid approach where we
> > still support a single control plane and compute (hypervisor) node for
> > the devtest scripts. Resources are just too limited to force everyone to
> > use HA setups by default, always. While HA is certainly important it is
> > only part of TripleO and there are many things you might want to work
> > on without using it. So let's keep this as an optional, production-focused
> > sort of component.
> > 
> 
> I think that goes without saying. We can just develop in degraded mode. :)
> 
> > > 
> > > as a minimum - are these too granular or about right? I broke the heat
> > > template change into two because we can scale hypervisors right now,
> > > whereas control plane scaling will need changes and testing so that we
> > > only have one HA database created, not two non-HA setups in parallel :).

Re: [openstack-dev] [oslo] log message translations

2014-01-27 Thread Daniel P. Berrange
On Mon, Jan 27, 2014 at 12:42:28PM -0500, Doug Hellmann wrote:
> We have a blueprint open for separating translated log messages into
> different domains so the translation team can prioritize them differently
> (focusing on errors and warnings before debug messages, for example) [1].

> Feedback?

> [1]
> https://blueprints.launchpad.net/oslo/+spec/log-messages-translation-domain

IMHO we've created ourselves a problem we don't need to have in the first
place by trying to translate every single log message. It causes pain for
developers & vendors because debug logs from users can be in any language,
which the person receiving will often not be able to understand. It creates
pain for translators by giving them an insane amount of work to do, which
never ends since log message text is changed so often. Now we're creating
yet more pain & complexity by trying to produce multiple log domains to solve
a problem of having so many messages to translate. I accept that some people will
like translated log messages, but I don't think this is a net win when you
look at the overall burden they're imposing.

Shouldn't we just say no to this burden and remove translation of all log
messages, except for those at WARN/ERROR level which is likely to be seen
by administrators on a day-to-day basis. There are few enough of those that
we wouldn't need these extra translation domains. IMHO translating stuff
at DEBUG/INFO level is a waste of limited time & resources.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [Solum] Proposed changes to solum-core

2014-01-27 Thread Monty Taylor

On 01/24/2014 05:32 PM, Adrian Otto wrote:

Solum Core Reviewers,

I propose the following changes to solum-core:

+asalkeld
+noorul
-mordred

Thanks very much to mordred for helping me to bootstrap the reviewer team. 
Please reply with your votes.


+1

My pleasure - you guys seem like you're off to the races - and asalkeld 
and noorul are both doing great.





Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread Susanne Balle
What I'd like to find out now:

1) who's interested in this topic?

Please include me.

2) who's interested in helping flesh out the guidelines for various log
levels?

Please include me.

3) who's interested in helping get these kinds of patches into various
projects in OpenStack?
4) which projects are interested in participating (i.e. interested in
prioritizing landing these kinds of UX improvements)

This is going to be progressive and iterative, and will require lots of
folks involved.

Regards Susanne

>
>


Re: [openstack-dev] [TripleO] State preserving upgrades working, next MVP selection?

2014-01-27 Thread Clint Byrum
Excerpts from Dan Prince's message of 2014-01-27 09:22:21 -0800:
> 
> - Original Message -
> > From: "Robert Collins" 
> > To: "OpenStack Development Mailing List" 
> > Sent: Sunday, January 26, 2014 3:30:22 PM
> > Subject: [openstack-dev] [TripleO] State preserving upgrades working,
> > next MVP selection?
> > 
> > So great news - we've now got state preserving upgrades actually
> > working - we can now upgrade a deployed cloud (with downtime) without
> > tossing away all the users' valuable data. Yay. This isn't entirely
> > done as we have a couple of outstanding patches we're running early
> > versions of, but - still, it's time to pick a new focus.
> > 
> > So we need to pick the next step to focus on. In our roadmap we have:
> > 
> > MVP4: Keep VMs running during deploys.
> > 
> > This to my mind means two things:
> >  - VMs need to not be interrupted
> >  - network traffic needs to be uninterrupted
> > 
> > Now, as the kernel can be upgraded in a deploy, this still requires us
> > to presume that we may have to reboot a machine - so we're not at the
> > point of focusing on high performance updates yet.
> > 
> > Further consequences - we'll need two network nodes and two
> > hypervisors, and live migration. 10m (the times-two reboot time for a
> > server) is too long for the central DB to be down if we want
> > neutron-agents not to get unhappy as well, so we'll really need two
> > control plane nodes.
> > 
> > So I think the next MVP needs the following cards:
> >  - HA DB
> >  - HA APIs
> >  - rolling upgrades
> >  - nova live migration
> 
> It seems a bit fuzzy whether live migration violates the rules above
> (no VM interruption, no network disruption). Live migration is certainly a
> good feature to have in general... but wiring it into our upgrade strategy
> seems like a bad idea. I would much rather see us put the effort into
> an upgrade path which allows VMs to persist on the compute host machine
> (uninterrupted) while the upgrade takes place. Live migrating things
> back and forth all the time just seems like a thrashing, cool for a demo,
> but bad idea in production sort of thing to me.
> 

I'm not sure I understand. You must terminate the VMs when you update
the kernel. Are you saying we should not handle kernel upgrades, or that
we should just focus on evacuation for kernel upgrades?

Evacuation has the same problem, but with VM interruption added in. I
think we should actually offer either as an option, but we have to offer
one of them, or we'll have no way to update compute node kernels.

The non-kernel upgrade path is an order of magnitude easier, and we have
discussed optimizations for it quite a bit already in other threads. We'll
get to it. But leaving people without a way to upgrade the kernel on
their compute nodes is not something I think we want to do.

> 
> >  - neutron agent migration *or* neutron distributed-HA setup
> >  - scale the heat template to have 2 control plane nodes
> >  - scale the heat template to have 2 hypervisor nodes
> 
> This is cool, especially for bare metal sorts of setups. For developers
> though I would sort of like to consider a hybrid approach where we
> > still support a single control plane and compute (hypervisor) node for
> > the devtest scripts. Resources are just too limited to force everyone to
> > use HA setups by default, always. While HA is certainly important it is
> > only part of TripleO and there are many things you might want to work
> > on without using it. So let's keep this as an optional, production-focused
> sort of component.
> 

I think that goes without saying. We can just develop in degraded mode. :)

> > 
> > as a minimum - are these too granular or about right? I broke the heat
> > template change into two because we can scale hypervisors right now,
> > whereas control plane scaling will need changes and testing so that we
> > only have one HA database created, not two non-HA setups in parallel
> > :).
> > 
> > I'm going to put this into trello now, and will adjust as we discuss
> > 
> > -Rob
> > 
> > --
> > Robert Collins 
> > Distinguished Technologist
> > HP Converged Cloud
> > 
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [oslo] log message translations

2014-01-27 Thread Doug Hellmann
We have a blueprint open for separating translated log messages into
different domains so the translation team can prioritize them differently
(focusing on errors and warnings before debug messages, for example) [1].
Some concerns were raised related to the review [2], and I would like to
address those in this thread and see if we can reach consensus about how to
proceed.

The implementation in [2] provides a set of new marker functions similar to
_(), one for each log level (we have _LE, _LW, _LI, _LD, etc.). These would
be used in conjunction with _(), and reserved for log messages. Exceptions,
API messages, and other user-facing messages all would still be marked for
translation with _() and would (I assume) receive the highest priority work
from the translation team.
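
For readers unfamiliar with the marker-function approach, here is a rough
sketch of how such level-specific markers could be wired up. The catalog
domain names and the trivial gettext fallback are illustrative assumptions,
not the actual oslo implementation:

```python
import gettext

# One message catalog ("domain") per log level, plus the main one.
# The domain names below are illustrative assumptions only.
_t_main = gettext.translation('myapp', fallback=True)
_t_info = gettext.translation('myapp-log-info', fallback=True)
_t_warning = gettext.translation('myapp-log-warning', fallback=True)
_t_error = gettext.translation('myapp-log-error', fallback=True)

_ = _t_main.gettext        # user-facing messages (highest priority catalog)
_LI = _t_info.gettext      # LOG.info messages
_LW = _t_warning.gettext   # LOG.warning messages
_LE = _t_error.gettext     # LOG.error messages


def process(count):
    # Exceptions stay with _(); log messages use the level-specific marker.
    if count < 0:
        raise ValueError(_("count must be non-negative"))
    return _LI("processed %d items") % count
```

With no catalogs installed, the fallback returns the original English text,
so the markers are safe no-ops for deployers who do not install translations.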

When the string extraction CI job is updated, we will have one "main"
catalog for each app or library, and additional catalogs for the log
levels. Those show up in transifex separately, but will be named in a way
that they are obviously related. Each translation team will be able to
decide, based on the requirements of their users, how to set priorities for
translating the different catalogs.

Existing strings being sent to the log and marked with _() will be removed
from the main catalog and moved to the appropriate log-level-specific
catalog when their marker function is changed. My understanding is that
transifex is smart enough to recognize the same string from more than one
source, and to suggest previous translations when it sees the same text.
This should make it easier for the translation teams to "catch up" by
reusing the translations they have already done, in the new catalogs.

One concern that was raised was the need to mark all of the log messages by
hand. I investigated using extraction patterns like "LOG.debug(" and
"LOG.info(", but because of the way the translation actually works
internally we cannot do that. There are a few related reasons.

In other applications, the function _() translates a string at the point
where it is invoked, and returns a new string object. OpenStack has a
requirement that messages be translated multiple times, whether in the API
or the LOG (there is already support for logging in more than one language,
to different log files). This requirement means we delay the translation
operation until right before the string is output, at which time we know
the target language. We could update the log functions to create Message
objects dynamically, except...
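
The delayed-translation behavior described above can be sketched with a toy
Message class (a simplification for illustration only, not the
oslo-incubator code):

```python
class Message(str):
    """A string that remembers its msgid and parameters so it can be
    translated later, once per target language (simplified sketch)."""

    def __new__(cls, msgid):
        obj = super().__new__(cls, msgid)
        obj.msgid = msgid
        obj.params = None
        return obj

    def __mod__(self, params):
        # Defer interpolation: keep the parameters alongside the msgid
        # so the *translated* format string can be interpolated later.
        result = Message(self.msgid)
        result.params = params
        return result

    def translate(self, catalog):
        # 'catalog' maps msgid -> translated format string for one language.
        text = catalog.get(self.msgid, self.msgid)
        return text % self.params if self.params is not None else text


msg = Message("compute node %s is down") % "node-1"
english = msg.translate({})  # no catalog: fall back to the msgid
french = msg.translate(
    {"compute node %s is down": "le noeud compute %s est en panne"})
```

The same Message instance can thus be rendered into several log files in
different languages at output time, which is why a simple extraction pattern
on LOG.debug( would not be sufficient.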

Each app or library that uses the translation code will need its own
"domain" for the message catalogs. We get around that right now by not
translating many messages from the libraries, but that's obviously not what
we want long term (we at least want exceptions translated). If we had a
special version of a logger in oslo.log that knew how to create Message
objects for the format strings used in logging (the first argument to
LOG.debug for example), it would also have to know what translation domain
to use so the proper catalog could be loaded. The wrapper functions defined
in the patch [2] include this information, and can be updated to be
application or library specific when oslo.log eventually becomes its own
library.

Further, as part of moving the logging code from oslo-incubator to
oslo.log, and making our logging something we can use from other OpenStack
libraries, we are trying to change the implementation of the logging code
so it is no longer necessary to create loggers with our special wrapper
function. That would mean that oslo.log will be a library for *configuring*
logging, but the actual log calls can be handled with Python's standard
library, eliminating a dependency between new libraries and oslo.log. (This
is a longer, and separate, discussion, but I mention it here as background.
We don't want to change the API of the logger in oslo.log because we don't
want to be using it directly in the first place.)

Another concern raised was the use of a prefix _L for these functions,
since it ties the priority definitions to "logs." I chose that prefix as an
explicit indication that these *are* just for logs. I am not associating any
actual priority with them. The translators want us to move the log messages
out of the main catalog. Having them all in separate catalogs is a
refinement that gives them what they want -- some translators don't care
about log messages at all, some only care about errors, etc. We decided
that the translators should set priorities, and we would make that possible
by separating the catalogs into logical groups. Everything marked with _()
will still go into the main catalog, but beyond that it isn't up to the
developers to indicate "priority" for translations.

The alternative approach of using babel translator comments would, under
other circumstances, help because each message could have some indication
of its relative importance. However, it does not meet the requirement that
the translators (and not the developers) set the priorities.

Re: [openstack-dev] [Neutron][IPv6] A pair of mode keywords

2014-01-27 Thread Collins, Sean
OK - any suggestions for the names of API attributes?

The PDF[0] shared does not specify the names of the attributes, so I had
two ideas for the names of the two new attributes being added to the
Subnet resource:

Either prefix them with "ipv6"

* ipv6_ra_mode
* ipv6_address_mode

Or don't prefix them:

* ra_mode
* address_mode
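
For concreteness, a subnet request body using the prefixed variant might
look like the following; all values are hypothetical and neither naming has
been decided:

```python
# Hypothetical request body for creating an IPv6 subnet with the
# "ipv6_"-prefixed attribute names; values are examples only.
subnet_request = {
    "subnet": {
        "network_id": "NETWORK_UUID",      # placeholder
        "ip_version": 6,
        "cidr": "2001:db8::/64",
        "ipv6_ra_mode": "slaac",           # how Router Advertisements behave
        "ipv6_address_mode": "slaac",      # how addresses are assigned
    }
}
```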

Thoughts?

[0]: https://www.dropbox.com/s/rq8xmbruqthef38/IPv6%20Two%20Modes%20v2.0.pdf

-- 
Sean M. Collins


Re: [openstack-dev] [TripleO] State preserving upgrades working, next MVP selection?

2014-01-27 Thread Dan Prince


- Original Message -
> From: "Robert Collins" 
> To: "OpenStack Development Mailing List" 
> Sent: Sunday, January 26, 2014 3:30:22 PM
> Subject: [openstack-dev] [TripleO] State preserving upgrades working, next 
> MVP selection?
> 
> So great news - we've now got state preserving upgrades actually
> working - we can now upgrade a deployed cloud (with downtime) without
> tossing away all the users' valuable data. Yay. This isn't entirely
> done as we have a couple of outstanding patches we're running early
> versions of, but - still, it's time to pick a new focus.
> 
> So we need to pick the next step to focus on. In our roadmap we have:
> 
> MVP4: Keep VMs running during deploys.
> 
> This to my mind means two things:
>  - VMs need to not be interrupted
>  - network traffic needs to be uninterrupted
> 
> Now, as the kernel can be upgraded in a deploy, this still requires us
> to presume that we may have to reboot a machine - so we're not at the
> point of focusing on high performance updates yet.
> 
> Further consequences - we'll need two network nodes and two
> hypervisors, and live migration. 10m (the times-two reboot time for a
> server) is too long for the central DB to be down if we want
> neutron-agents not to get unhappy as well, so we'll really need two
> control plane nodes.
> 
> So I think the next MVP needs the following cards:
>  - HA DB
>  - HA APIs
>  - rolling upgrades
>  - nova live migration

It seems a bit fuzzy whether live migration violates the rules above (no VM 
interruption, no network disruption). Live migration is certainly a good 
feature to have in general... but wiring it into our upgrade strategy seems 
like a bad idea. I would much rather see us put the effort into an upgrade path 
which allows VMs to persist on the compute host machine (uninterrupted) while 
the upgrade takes place. Live migrating things back and forth all the time just 
seems like a thrashing, cool for a demo, but bad idea in production sort of 
thing to me.


>  - neutron agent migration *or* neutron distributed-HA setup
>  - scale the heat template to have 2 control plane nodes
>  - scale the heat template to have 2 hypervisor nodes

This is cool, especially for bare metal sorts of setups. For developers though 
I would sort of like to consider a hybrid approach where we still support a 
single control plane and compute (hypervisor) node for the devtest scripts. 
Resources are just too limited to force everyone to use HA setups by default, 
always. While HA is certainly important it is only part of TripleO and there 
are many things you might want to work on without using it. So let's keep this 
as an optional, production-focused sort of component.

> 
> as a minimum - are these too granular or about right? I broke the heat
> template change into two because we can scale hypervisors right now,
> whereas control plane scaling will need changes and testing so that we
> only have one HA database created, not two non-HA setups in parallel
> :).
> 
> I'm going to put this into trello now, and will adjust as we discuss
> 
> -Rob
> 
> --
> Robert Collins 
> Distinguished Technologist
> HP Converged Cloud
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



Re: [openstack-dev] [Climate] [Ceilometer] Integration

2014-01-27 Thread Sylvain Bauza
Hi Julien,




2014/1/27 Julien Danjou 

> Hi,
>
> I've created a blueprint for Climate support in Ceilometer:
>
>   https://blueprints.launchpad.net/ceilometer/+spec/climate-support
>
> I've added a list of resources that I think should be metered. I'm not
> sure about the other ones that I encountered in Climate; feel free to
> amend that list in the whiteboard if you think some are missing, or if
> you have further details.
>

Great! Thanks for your support. Supporting physical host reservations
also means we need to "dedicate" hosts into Climate (i.e. putting these
hosts into a specific aggregate, thanks to a dedicated Climate API). Even
if this dedication is admin-only, I think it would be nice to get the
events related to it in Ceilometer, so we could in the future plan to have
a kind of autoscaling thanks to Heat.
At least, having these hosts in Ceilometer is good for scalability purposes.



>
> I didn't create specifically a blueprint in Climate as I found this one:
>
>   https://blueprints.launchpad.net/climate/+spec/notifications
>
> which should cover what's needed by Ceilometer. The implementation of
> this one will depend on the oslo-messaging blueprint, so I've added it
> as a dependency.
>

Well, that's a really good question. AFAIK, the 'notifications' BP was
related to sending emails for being notified outside of OpenStack, but that's
something which needs to be discussed to see how we can leverage all those
Keystone/Marconi/Ceilometer concerns. See the etherpad Dina provided for
putting your comments on the notifications part, so we could discuss it on
a next weekly meeting.

Thanks,
-Sylvain


Re: [openstack-dev] [qa] [Tempest - Stress Test] : some improvements / issues on the stress test framework

2014-01-27 Thread Koderer, Marc
Hi Julien,

please don't forget the [qa] tag - otherwise you're lost in the ML noise ;)

OK, thanks for the bug reports. I confirmed 1273245 and 1273254, but I am not 
totally sure about 1273186.
Could you give some more details on how the CLI interface will look? Or 
simply propose a patch.
It could end up being a quite confusing interface if you allow kwargs for such a 
generic case.

Are you already working on those bugs? If yes, could you assign them to you?

Regards,
Marc

> 
> From: LELOUP Julien [julien.lel...@3ds.com]
> Sent: Monday, January 27, 2014 4:01 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [Tempest - Stress Test] : some improvements / issues 
> on the stress test framework
>
> Hi everyone,
>
> I would like to discuss some ideas / behavior that seems broken in the stress 
> test part of Tempest.
>
> I opened some tickets in Launchpad and I would like to get the feedback of 
> the community on these ideas / issues :
>
> - Provide kwargs from UnitTest to stress test scenarios 
> (https://bugs.launchpad.net/tempest/+bug/1273186 )
>
> - Stress Test - tearDown() not called if an exception occurred  
> (https://bugs.launchpad.net/tempest/+bug/1273245 )
>
> - Stress Test - cleanUp() removing all test resources as an admin 
> (https://bugs.launchpad.net/tempest/+bug/1273254 )
>
> Best Regards,
>
> Julien LELOUP
> julien.lel...@3ds.com
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Keystone] - Cloud federation on top of the Apache

2014-01-27 Thread Marek Denis

Dear all,

We have Identity Provider and mapping CRUD operations already merged, so 
it's a good point to prepare Keystone and Apache to handle SAML (as a 
starter) requests/responses.
For the next OpenStack release it'd be Apache that handles SAML 
communication. In order to force SAML authentication, an admin defines 
so-called 'protected resources', hidden behind certain URLs. In order to 
get access to aforementioned resources a SAML authenticated session is 
required. In terms of Keystone and federation this 'resource' would be 
just a token, ready to be later used against other OpenStack services. 
For obvious reasons we cannot make mod_shib watch all the Keystone URLs 
clients can access, so I think a dedicated URL should be used. That's 
right, a client who'd want to grab a token upon SAML authn would need to 
hit 
https://keystone:5000/v3/OS-FEDERATION/tokens/identity_provider//protocol/ .
Such a URL would also be kind of dynamic, because this would later let 
Keystone distinguish* what (already registered) IdP and federation 
protocol (SAML, OpenID, etc.) is going to be used.


A simplified workflow could look like this:


Pre-req: Apache frontend is configured to protect URLs matching the regex 
/OS-FEDERATION/tokens/identity_provider/(.*?)/protocol/(.*?)


1) In order to get a valid token upon federated authentication a client 
enters protected resource, for instance 
https://keystone:5000/v3/OS-FEDERATION/tokens/identity_provider/{idp}/protocol/{protocol}
2) After the client is authenticated (with ECP/similar extension) the 
request enters Keystone public pipeline.
3) Apache modules store parsed parameters from a SAML assertion in a 
wsgi environment,
4) A class inheriting from wsgi.Middleware checks whether the 
REQUEST_URL (or similar) environment variable matches the aforementioned 
regexp (e.g. /OS-FEDERATION/tokens/identity_provider/.*?/protocol/.*?) 
and, if the match is positive, fetches env parameters starting with a 
certain value (a prefix configurable in keystone.conf, say 'ADFS_' 
). The parameters are stored as a dictionary and passed in a structure, 
later available to other filters/middleware objects in the pipeline (TO 
BE CONFIRMED, MAYBE REWRITING PARAMS IS NOT REQUIRED).
5) keystone/contrib/federation/routers.py has defined URL routes and 
fires keystone/contrib/federation/controllers.py methods that fetch IdP, 
protocol entities as well as the corresponding mapping entity with the 
mapping rules included. The rules are applied to the assertion 
parameters and a list of local users/groups is issued. The OpenStack token 
is generated, stored in the DB and returned to the user (formed as a 
valid JSON response).

6) The token can now be used for next operations on the OpenStack cloud.
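
A minimal sketch of the middleware check in step 4 might look like the
following; the regex, the 'ADFS_' prefix, and the returned structure are all
assumptions taken from the description above, not settled interfaces:

```python
import re

FEDERATION_URL = re.compile(
    r'/OS-FEDERATION/tokens/identity_provider/(.+?)/protocol/(.+?)$')
ASSERTION_PREFIX = 'ADFS_'  # would be configurable in keystone.conf


def extract_assertion(environ):
    """Collect SAML assertion attributes that the Apache modules stored
    in the wsgi environ, but only for federated-auth URLs."""
    match = FEDERATION_URL.search(environ.get('PATH_INFO', ''))
    if not match:
        return None
    idp, protocol = match.groups()
    attributes = {key[len(ASSERTION_PREFIX):]: value
                  for key, value in environ.items()
                  if key.startswith(ASSERTION_PREFIX)}
    return {'identity_provider': idp,
            'protocol': protocol,
            'assertion': attributes}


environ = {
    'PATH_INFO': '/v3/OS-FEDERATION/tokens/identity_provider/idp1/protocol/saml2',
    'ADFS_EMAIL': 'user@example.org',
}
info = extract_assertion(environ)
```

A real implementation would sit in the keystone public pipeline and hand the
collected dictionary on to the mapping engine rather than return it directly.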


*)
At first I thought the dynamic URLs 
(OS-FEDERATION/tokens/identity_provider/(.*?)/protocol/(.*?)) could be 
replaced with one static URL, and information about the IdP and protocol 
could be sent as HTTP POST input, but from what I have already noticed, 
after the client is redirected to the IdP (and to the SP again) the 
initial input is lost.



I am looking forward to hear feedback from you.

Thanks,

--
Marek Denis
[marek.de...@cern.ch]



Re: [openstack-dev] [Climate] [Ceilometer] Integration

2014-01-27 Thread Dina Belova
Julien, great :)

Thanks for starting this initiative, it's really interesting and we want to
include that functionality to 0.2 Climate - as discussed in
https://etherpad.openstack.org/p/climate-0.2

Thank you
Dina


On Mon, Jan 27, 2014 at 8:12 PM, Julien Danjou  wrote:

> Hi,
>
> I've created a blueprint for Climate support in Ceilometer:
>
>   https://blueprints.launchpad.net/ceilometer/+spec/climate-support
>
> I've added a list of resources that I think should be metered. I'm not
> sure about the other ones that I encountered in Climate; feel free to
> amend that list in the whiteboard if you think some are missing, or if
> you have further details.
>
> I didn't create specifically a blueprint in Climate as I found this one:
>
>   https://blueprints.launchpad.net/climate/+spec/notifications
>
> which should cover what's needed by Ceilometer. The implementation of
> this one will depend on the oslo-messaging blueprint, so I've added it
> as a dependency.
>
> Cheers,
> --
> Julien Danjou
> -- Free Software hacker - independent consultant
> -- http://julien.danjou.info
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.


Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-27 Thread Tzu-Mainn Chen
I'd argue that we should call it 'overcloud role' - at least from the modeling
point of view - since the tuskar-api calls a deployment an overcloud.

But I like the general direction of the term-renaming!

Mainn

- Original Message -
> Based on this thread which didn't seem to get clear outcome, I have one
> last suggestion:
> 
> * Deployment Role
> 
> It looks that it might satisfy participants of this discussion. When I
> internally talked to people it got the best reactions from already
> suggested terms.
> 
> Depending on your reactions for this suggestion, if we don't get to
> agreement of majority by the end of the week, I would call for voting
> starting next week.
> 
> Thanks
> -- Jarda
> 
> On 2014/21/01 15:19, Jaromir Coufal wrote:
> > Hi folks,
> >
> > when I was getting feedback on wireframes and we talked about Roles,
> > there were various objections and not much suggestions. I would love to
> > call for action and think a bit about the term for concept currently
> > known as Role (= Resource Category).
> >
> > Definition:
> > Role is a representation of a group of nodes, with specific behavior.
> > Each role contains (or will contain):
> > * one or more Node Profiles (which specify HW which is going in)
> > * association with image (which will be provisioned on new coming nodes)
> > * specific service settings
> >
> > So far suggested terms:
> > * Role *
> >- short name - plus points
> >- quite overloaded term (user role, etc)
> >
> > * Resource Category *
> >- pretty long (devs already shorten it - confusing)
> >- Heat specific term
> >
> > * Resource Class *
> >- older term
> >
> > Are there any other suggestions (ideally something short and accurate)?
> > Or do you prefer any of already suggested terms?
> >
> > Any ideas are welcome - we are not very good in finding the best match
> > for this particular term.
> >
> > Thanks
> > -- Jarda
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 



[openstack-dev] [Climate] [Ceilometer] Integration

2014-01-27 Thread Julien Danjou
Hi,

I've created a blueprint for Climate support in Ceilometer:

  https://blueprints.launchpad.net/ceilometer/+spec/climate-support

I've added a list of resources that I think should be metered. I'm not
sure about the other ones that I encountered in Climate; feel free to
amend that list in the whiteboard if you think some are missing, or if
you have further details.

I didn't create specifically a blueprint in Climate as I found this one:

  https://blueprints.launchpad.net/climate/+spec/notifications

which should cover what's needed by Ceilometer. The implementation of
this one will depend on the oslo-messaging blueprint, so I've added it
as a dependency.

Cheers,
-- 
Julien Danjou
-- Free Software hacker - independent consultant
-- http://julien.danjou.info




Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread Macdonald-Wallace, Matthew
> > I've also noticed just now that we appear to be "re-inventing" some parts of
> the logging framework (openstack.common.log.WriteableLogger for example
> appears to be a "catchall" when we should just be handing off to the default
> logger and letting the python logging framework decide what to do IMHO).
> 
> WriteableLogger exists for a very specific reason: eventlet. Eventlet assumes 
> a
> file object for logging, not a python logger.
> 
> I've proposed a change for that -
> https://github.com/eventlet/eventlet/pull/75 - but it's not yet upstream.

Thanks for clearing that up, makes a lot more sense now!

So when the change is merged upstream we can get rid of that in our code as 
well?

Matt


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Irena Berezovsky
Hi Robert, all,
My comments inline

Regards,
Irena
From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Monday, January 27, 2014 5:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking 
a proper name, let's call them compute nodes with hybrid NIC support, or 
simply hybrid compute nodes.

I'm not sure whether it's practical to have hybrid compute nodes in a real 
cloud, but they may be useful in the lab to benchmark the performance 
differences between SRIOV, non-SRIOV, and coexistence of both.
[IrenaB]
I would like to clarify a bit on the requirements you stated below.
As I see it, hybrid compute nodes may actually be preferred in a real 
cloud, since one can define a VM with one vNIC attached via an SR-IOV 
virtual function and another attached via some vSwitch.
But it definitely makes sense to land a VM with only 'virtio' vNICs on a 
non-SRIOV compute node.

Maybe there should be some sort of preference order among suitable nodes in 
the scheduler's choice, based on the vNIC types required for the VM.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image  --nic net-id= vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.
[IrenaB] With the ML2 plugin as the neutron backend, the port binding will 
fail if no agent is running on the host.

Irena brought up the idea of using a host aggregate. This requires creating a 
non-SRIOV host aggregate and using it in the above 'nova boot' command. It 
should work.

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion to the next week.
Let's try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move 
forward with the SRIOV side of things in the same time. I know that tomorrow's 
IRC will be focusing on the BP review, and it may well continue into Thursday. 
Therefore, let's start discussing SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless it's PCI flavor or host aggregate or something else, how 
to use it to specify a SRIOV port.
-- new parameters for --nic
-- new parameters for neutron net-create/neutron port-create
-- interface between nova and neutron
-- nova side of work
-- neutron side of work

We should start coding ASAP.

Thanks,
Robert




Re: [openstack-dev] [TripleO] [Tuskar] Terminology Revival #1 - Roles

2014-01-27 Thread Jaromir Coufal
Based on this thread, which didn't seem to reach a clear outcome, I have one 
last suggestion:


* Deployment Role

It looks like it might satisfy the participants of this discussion. When I 
talked to people internally, it got the best reactions of the already 
suggested terms.


Depending on your reactions to this suggestion, if we don't reach majority 
agreement by the end of the week, I will call for a vote starting 
next week.


Thanks
-- Jarda

On 2014/21/01 15:19, Jaromir Coufal wrote:

Hi folks,

when I was getting feedback on wireframes and we talked about Roles,
there were various objections but not many suggestions. I would love to
call for action and think a bit about the term for the concept currently
known as Role (= Resource Category).

Definition:
Role is a representation of a group of nodes, with specific behavior.
Each role contains (or will contain):
* one or more Node Profiles (which specify HW which is going in)
* association with image (which will be provisioned on new coming nodes)
* specific service settings

So far suggested terms:
* Role *
   - short name - plus points
   - quite overloaded term (user role, etc)

* Resource Category *
   - pretty long (devs already shorten it - confusing)
   - Heat specific term

* Resource Class *
   - older term

Are there any other suggestions (ideally something short and accurate)?
Or do you prefer any of already suggested terms?

Any ideas are welcome - we are not very good at finding the best match
for this particular term.

Thanks
-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] What's Up Doc? Jan 27 2014

2014-01-27 Thread Anne Gentle
This week we have two chances to talk docs -- the IRC meeting for US and
Europe is every other Wednesday in #openstack-meeting-alt at 03:00 UTC.

We're also hosting a hangout on air on Wednesday, January 29, 2014 at
20:00:00 UTC. Look for a Hangouts On Air invitation on Google Plus.

1. In review and merged this past week:

Last Thursday and Friday we did a mini sprint on the Operations Guide with
the goal of addressing O'Reilly editor comments, documenting upgrades,
vetting a new reference architecture, and getting all Havana updates
completed. We merged about 25 changes and have about 20 more in the queue
for review, including how to upgrade Compute from grizzly to havana.

In the openstack-manuals repo, there are updates to the install guide,
glossary edits, updates to the nova config options as well as some
ceilometer cleanup.

2. High priority doc work:

We're two months away from a March 27 release candidate. Highest priority
is Icehouse documentation:

Install Guide

Config Ref

Cloud Admin Guide

On the API documentation side, there's going to be some API doc movement
where the specs move into the project repositories. For example, from
openstack/image-api to glance/doc/source.

3. Doc work going on that I know of:

Shaun McCance is working on the configuration automation and reaching out
to Oslo devs to ensure accuracy for incoming options. With 2400 options
across OpenStack projects there's plenty to document.

Diane and Andreas have been diligently getting the database-api samples
tested and doc build working. Thanks for that. The Database project Trove
does enter integration with the Icehouse release.

4. New incoming doc requests:

Nick Chase is holding meetings about a new Networking Guide that would give
the basic concepts for Neutron and software-defined networking in
OpenStack.

5. Doc tools updates:

Today I'll release 0.4 of the openstack-doc-tools repo which includes the
ability to ignore sets of files, also greatly improves the options output,
and offers the ability to auto-document the Command Line Interface help to
output in a CLI reference. Nice work Andreas!

For clouddocs-maven-plugin, the 1.13.0 release came out January 23 which
now supports parts for the Operations Guide. Read all about it in the
release notes
https://github.com/stackforge/clouddocs-maven-plugin#release-notes.

6. Other doc news:

I think that's enough excitement for this week! Carry on.


Re: [openstack-dev] [Solum] Proposed changes to solum-core

2014-01-27 Thread Russell Bryant
On 01/24/2014 08:32 PM, Adrian Otto wrote:
> Solum Core Reviewers,
> 
> I propose the following changes to solum-core:
> 
> +asalkeld
> +noorul
> -mordred
> 
> Thanks very much to mordred for helping me to bootstrap the reviewer team. 
> Please reply with your votes.

+1

-- 
Russell Bryant



Re: [openstack-dev] [Solum] Proposed changes to solum-core

2014-01-27 Thread devdatta kulkarni
+1


-Original Message-
From: "Murali Allada" 
Sent: Sunday, January 26, 2014 8:19pm
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [Solum] Proposed changes to solum-core

+1 



> On Jan 25, 2014, at 9:04 AM, "Roshan Agrawal"  
> wrote:
> 
> +1
> 
> From: Rajesh Ramchandani [rajesh.ramchand...@cumulogic.com]
> Sent: Friday, January 24, 2014 9:35 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: OpenStack Development Mailing List
> Subject: Re: [openstack-dev] [Solum] Proposed changes to solum-core
> 
> +1
> 
>> On Jan 24, 2014, at 5:35 PM, "Adrian Otto"  wrote:
>> 
>> Solum Core Reviewers,
>> 
>> I propose the following changes to solum-core:
>> 
>> +asalkeld
>> +noorul
>> -mordred
>> 
>> Thanks very much to mordred for helping me to bootstrap the reviewer team. 
>> Please reply with your votes.
>> 
>> Thanks,
>> 
>> Adrian
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



[openstack-dev] [Tempest - Stress Test] : some improvements / issues on the stress test framework

2014-01-27 Thread LELOUP Julien
Hi everyone,

I would like to discuss some ideas / behaviors that seem broken in the stress 
test part of Tempest.

I opened some tickets in Launchpad and I would like to get the community's 
feedback on these ideas / issues:

- Provide kwargs from UnitTest to stress test scenarios 
(https://bugs.launchpad.net/tempest/+bug/1273186 )

- Stress Test - tearDown() not called if an exception occurred  
(https://bugs.launchpad.net/tempest/+bug/1273245 )

- Stress Test - cleanUp() removing all test resources as an admin 
(https://bugs.launchpad.net/tempest/+bug/1273254 )

Best Regards,

Julien LELOUP
julien.lel...@3ds.com


This email and any attachments are intended solely for the use of the 
individual or entity to whom it is addressed and may be confidential and/or 
privileged.

If you are not one of the named recipients or have received this email in error,

(i) you should not read, disclose, or copy it,

(ii) please notify sender of your receipt by reply email and delete this email 
and all attachments,

(iii) Dassault Systemes does not accept or assume any liability or 
responsibility for any use of or reliance on this email.

For other languages, go to http://www.3ds.com/terms/email-disclaimer



Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread Sean Dague
On 01/27/2014 09:44 AM, Macdonald-Wallace, Matthew wrote:
>> -Original Message-
>> From: Sean Dague [mailto:s...@dague.net]
>> Sent: 27 January 2014 14:26
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] Proposed Logging Standards
>>
>> On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
>>> Hi Sean,
>>>
>>> I'm currently working on moving away from the "built-in" logging to use
>> log_config= and the python logging framework so that we can start
>> shipping to logstash/sentry/.
>>>
>>> I'd be very interested in getting involved in this, especially from a "why 
>>> do we
>> have log messages that are split across multiple lines" perspective!
>>
>> Do we have many that aren't either DEBUG or TRACE? I thought we were pretty
>> clean there.
> 
> True, most (all?!!) are DEBUG/TRACE and mainly from calling out to other 
> clients (Neutron/Glance/Cinder/etc), but if you're sending DEBUG somewhere 
> useful for future processing then trying to glue the split-lines back 
> together again can be "interesting".
> 
> At the moment we are assuming that anything that doesn't start with a 
> date-stamp is associated with the line above it.  This is probably OK for 
> now, however if anything changes in future that negates this rule we won't 
> catch it until it's too late!
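For what it's worth, the date-stamp heuristic described above can be sketched
roughly like this (the timestamp pattern is an assumption based on the default
OpenStack log format, not taken from any shipped tool):

```python
import re

# Assumed prefix of a new record, e.g. "2014-01-27 14:26:01.123 ERROR ..."
# -- this pattern is an assumption based on the default OpenStack log format.
TIMESTAMP = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def glue_multiline(lines):
    """Group raw log lines into records: any line that does not start
    with a date-stamp is appended to the record above it."""
    records = []
    for line in lines:
        if TIMESTAMP.match(line) or not records:
            records.append(line)
        else:
            records[-1] += "\n" + line
    return records
```

The obvious failure mode is exactly the one noted above: if the log format
ever changes, continuation lines silently merge into the wrong record.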
> 
>>>
>>> P.S. FWIW, I'd also welcome details on what the "Audit" level gives us
>>> that the others don't... :)
>>
>> Well as far as I can tell the AUDIT level was a prior drive by contribution 
>> that's
>> not being actively maintained. Honestly, I think we should probably rip it 
>> out,
>> because I don't see any in tree tooling to use it, and it's horribly 
>> inconsistent.
> 
> +1 for this, I wondered if it was something to do with Ceilometer but I'm 
> guessing probably not from your comment here.
> 
> I've also noticed just now that we appear to be "re-inventing" some parts of 
> the logging framework (openstack.common.log.WriteableLogger for example 
> appears to be a "catchall" when we should just be handing off to the default 
> logger and letting the python logging framework decide what to do IMHO).

WriteableLogger exists for a very specific reason: eventlet. Eventlet
assumes a file object for logging, not a python logger.

I've proposed a change for that -
https://github.com/eventlet/eventlet/pull/75 - but it's not yet upstream.
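For anyone following along, the adapter involved is tiny: eventlet wants a
file-like object with a write() method, so the wrapper just forwards each
written line to a real Python logger. A minimal sketch (illustrative names,
not the actual oslo implementation):

```python
import logging

class WritableLoggerAdapter(object):
    """Minimal file-like wrapper so code that expects a file object
    (e.g. eventlet.wsgi's log parameter) can write to a python logger."""

    def __init__(self, logger, level=logging.INFO):
        self.logger = logger
        self.level = level

    def write(self, msg):
        # eventlet writes complete lines ending in '\n'; strip the newline
        # so the logging formatter does not emit blank lines.
        msg = msg.rstrip('\n')
        if msg:
            self.logger.log(self.level, msg)
```

Once eventlet accepts a logger directly, a shim like this becomes unnecessary.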

-Sean

-- 
Sean Dague
Samsung Research America
s...@dague.net / sean.da...@samsung.com
http://dague.net





Re: [openstack-dev] [Neutron][LBaaS] Securing RPC channel between the server and the agent

2014-01-27 Thread Russell Bryant
On 01/27/2014 09:37 AM, Eugene Nikanorov wrote:
> Hi folks,
> 
> As we are going to add ssl implementation to lbaas which would be based
> on well-known haproxy+stunnel combination, there is one problem that we
> need to solve: securing communication channel between neutron-server and
> the agent.
> 
> I see several approaches here:
> 1) Rely on secure messaging as described here:
> http://docs.openstack.org/security-guide/content/ch038_transport-security.html
> 
> pros: no or minor additional things to care of on neutron-server side
> and client side
> cons: might be more complex to test. Also I'm not sure testing
> infrastructure uses that.
> We'll need to state that lbaas ssl is only secure when transpost
> security is enabled.
> 
> 2) Provide neutron server/agent with certificate for encrypting
> keys/certificates that are dedicated to loadbalancers.
> 
> pros: doesn't depend on cloud-wide messaging security. We can say that
> 'ssl works' in any case.
> cons: more to implement, more complex deployment.
> 
> Unless I've missed some other obvious solution what do you think is the
> best approach here?
> (I'm not considering the usage of external secure store like barbican at
> this point)
> 
> What do you think?

Using existing available transport security is a good start (SSL to your
amqp broker).

For a step beyond that, we really need to look at a solution that
applies across all of OpenStack, as this is a very general problem that
needs to be solved across many components.

There was a proposal a while back:

https://wiki.openstack.org/wiki/MessageSecurity

This has since been moving forward.  Utilizing it has been blocked on
getting KDS in Keystone.  IIRC, KDS should be implemented in Icehouse,
so we can start utilizing it in other services in the Juno cycle.

-- 
Russell Bryant



Re: [openstack-dev] [Tempest - Stress test] : cleanup() removing resources for all tenants with an admin_manager

2014-01-27 Thread LELOUP Julien
Hello Boris,

Rally seems to be a really interesting tool and I will eventually use it for 
full-scale benchmarking.

However, I will focus on Tempest-based tests for the time being, so that 
these tests can be replayed regularly during the integration process.
I believe that my needs are covered by both Tempest and Rally, and if time 
allows me to do so, I will submit the same kind of scenario for full-scale 
benchmarking.

Best Regards,

Julien LELOUP
julien.lel...@3ds.com

From: LELOUP Julien [mailto:julien.lel...@3ds.com]
Sent: Tuesday, January 21, 2014 11:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Tempest - Stress test] : cleanup() removing 
resources for all tenants with an admin_manager

Hello Boris,

I'll check Rally in order to see what tool is the best for my tests.

Best Regards,

Julien LELOUP
julien.lel...@3ds.com


From: Boris Pavlovic [mailto:bpavlo...@mirantis.com]
Sent: Monday, January 20, 2014 5:52 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Tempest - Stress test] : cleanup() removing 
resources for all tenants with an admin_manager

Julien,

Probably you should try to use Rally for benchmarking.
https://wiki.openstack.org/wiki/Rally

There is already working generic cleanup...

There is already implemented framework that allows parametrized benchmarks:
https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L32-L39

Simple way to configure load using json (load will be created from real users, 
no admin, that will be pre created for each benchmark):
https://github.com/stackforge/rally/blob/master/doc/samples/tasks/nova/boot-and-delete.json


And simple CLI interface (now we are working around Web UI)


Best regards,
Boris Pavlovic

On Mon, Jan 20, 2014 at 8:32 PM, LELOUP Julien 
mailto:julien.lel...@3ds.com>> wrote:
Hi everyone,

I'm forwarding my own email previously posted on the QA list.

I would like to discuss about the cleanup() process used right after a stress 
test run in Tempest.

From what I can see by using it and by reading the code, cleanup() seems a 
bit rough since it uses an "admin_manager" in order to get all kinds of test 
resources available: servers, key pairs, volumes, etc.
More precisely, when it comes to cleaning servers, it searches for servers 
across all tenants. I find this behavior a little rough since it will blow 
away all objects on the target OpenStack, even objects unrelated to the 
stress tests that just ran.

In fact, before reading cleanup() I had a problem where one of my stress 
tests erased all the servers and volumes in another tenant, which impaired 
other people working on our OpenStack.

I can imagine that for some scenarios, using an admin user to deeply clean an 
OpenStack is required, but I believe that most of the time the cleanup() 
process should focus only on the tenant used during the stress test and leave 
the other tenants alone.

Am I doing something wrong ? Is there a way to restrain the cleanup() process ?

If no parameter or configuration option allows me to do so, should I improve 
the cleanup() code so that it removes only the test resources created for 
the test?
I do not wish to write this kind of code if the OpenStack community believes 
that the present behavior is intended and should not be modified.
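To illustrate the suggestion, a tenant-scoped cleanup could be as simple as
filtering the admin listing before deleting anything (the resource shape and
field names here are hypothetical, not Tempest's actual API):

```python
def scoped_cleanup(resources, tenant_id, delete):
    """Delete only the resources belonging to the stress test's tenant,
    leaving other tenants' objects untouched.

    'resources' is assumed to be a list of dicts carrying a 'tenant_id'
    key, as an admin client listing with all_tenants=1 might return;
    'delete' is the per-resource deletion callable.
    """
    for res in resources:
        if res.get("tenant_id") == tenant_id:
            delete(res["id"])
```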


Best Regards,

Julien LELOUP
julien.lel...@3ds.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV

2014-01-27 Thread Robert Li (baoli)
Hi Folks,

In today's meeting, we discussed a scheduler issue for SRIOV. The basic 
requirement is for coexistence of the following compute nodes in a cloud:
  -- SRIOV only compute nodes
  -- non-SRIOV only compute nodes
  -- Compute nodes that can support both SRIOV and non-SRIOV ports. Lacking 
a proper name, let's call them compute nodes with hybrid NIC support, or 
simply hybrid compute nodes.

I'm not sure whether it's practical to have hybrid compute nodes in a real 
cloud, but they may be useful in the lab to benchmark the performance 
differences between SRIOV, non-SRIOV, and coexistence of both.

In a cloud that supports SRIOV in some of the compute nodes, a request such as:

 nova boot --flavor m1.large --image  --nic net-id= vm

doesn't require a SRIOV port. However, it's possible for the nova scheduler to 
place it on a compute node that supports sriov port only. Since neutron plugin 
runs on the controller, port-create would succeed unless neutron knows the host 
doesn't support non-sriov port. But connectivity on the node would not be 
established since no agent is running on that host to establish such 
connectivity.

Irena brought up the idea of using a host aggregate. This requires creating a 
non-SRIOV host aggregate and using it in the above 'nova boot' command. It 
should work.
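For concreteness, the aggregate-based workaround boils down to the scheduler
matching aggregate metadata against flavor extra specs, roughly like this
(a simplified sketch; the keys and values are illustrative, not the actual
filter code):

```python
def host_passes(aggregate_metadata, flavor_extra_specs):
    """Simplified aggregate filtering: a host passes only if every
    extra spec requested by the flavor is matched by the metadata of
    the host's aggregate (e.g. {'network': 'non-sriov'})."""
    for key, wanted in flavor_extra_specs.items():
        if aggregate_metadata.get(key) != wanted:
            return False
    return True
```

A 'nova boot' with a flavor carrying {'network': 'non-sriov'} would then only
land on hosts in the matching aggregate.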

The patch I had introduced a new constraint in the existing PCI passthrough 
filter.

The consensus seems to be having a better solution in a later release. And for 
now, people can either use host aggregate or resort to their own means.

Let's keep the discussion going on this.

Thanks,
Robert





On 1/24/14 4:50 PM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Hi Folks,

Based on Thursday's discussion and a chat with Irena, I took the liberty to add 
a summary and discussion points for SRIOV on Monday and onwards. Check it out 
https://wiki.openstack.org/wiki/Meetings/Passthrough. Please feel free to 
update it. Let's try to finalize it next week. The goal is to determine the BPs 
that need to get approved, and to start coding.

thanks,
Robert


On 1/22/14 8:03 AM, "Robert Li (baoli)" 
mailto:ba...@cisco.com>> wrote:

Sounds great! Let's do it on Thursday.

--Robert

On 1/22/14 12:46 AM, "Irena Berezovsky" 
mailto:ire...@mellanox.com>> wrote:

Hi Robert, all,
I would suggest not to delay the SR-IOV discussion to the next week.
Let’s try to cover the SRIOV side and especially the nova-neutron interaction 
points and interfaces this Thursday.
Once we have the interaction points well defined, we can run parallel patches 
to cover the full story.

Thanks a lot,
Irena

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Wednesday, January 22, 2014 12:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova][neutron] PCI passthrough SRIOV

Hi Folks,

As the debate about PCI flavor versus host aggregate goes on, I'd like to move 
forward with the SRIOV side of things in the same time. I know that tomorrow's 
IRC will be focusing on the BP review, and it may well continue into Thursday. 
Therefore, let's start discussing SRIOV side of things on Monday.

Basically, we need to work out the details on:
-- regardless it's PCI flavor or host aggregate or something else, how 
to use it to specify a SRIOV port.
-- new parameters for --nic
-- new parameters for neutron net-create/neutron port-create
-- interface between nova and neutron
-- nova side of work
-- neutron side of work

We should start coding ASAP.

Thanks,
Robert




Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread Macdonald-Wallace, Matthew
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 27 January 2014 14:26
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Proposed Logging Standards
> 
> On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
> > Hi Sean,
> >
> > I'm currently working on moving away from the "built-in" logging to use
> log_config= and the python logging framework so that we can start
> shipping to logstash/sentry/.
> >
> > I'd be very interested in getting involved in this, especially from a "why 
> > do we
> have log messages that are split across multiple lines" perspective!
> 
> Do we have many that aren't either DEBUG or TRACE? I thought we were pretty
> clean there.
> 
> > Cheers,
> >
> > Matt
> >
> > P.S. FWIW, I'd also welcome details on what the "Audit" level gives us
> > that the others don't... :)
> 
> Well as far as I can tell the AUDIT level was a prior drive by contribution 
> that's
> not being actively maintained. Honestly, I think we should probably rip it 
> out,
> because I don't see any in tree tooling to use it, and it's horribly 
> inconsistent.
> 
>   -Sean

Just as an aside, AUDIT was introduced to the Nova code base as part of 
05ccbb75c45aa3c348162043495e1a3d279e5b06; however, a "grep -r AUDIT *" (yes, 
I know, crude but it does work! :P) across the nova codebase only returns the 
openstack.common.log libraries as having it listed in the code.

I don't know if other projects are making use of it, but if not then I agree 
that it should probably be removed from Oslo
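For reference, wiring a custom level like AUDIT into the stdlib logging
module only takes a couple of lines, which is roughly the mechanism the
common log library uses (a sketch of the general idea, not the exact oslo
code):

```python
import logging

# As I recall, oslo placed AUDIT between INFO (20) and WARNING (30).
AUDIT = logging.INFO + 1

logging.addLevelName(AUDIT, "AUDIT")

def audit(logger, msg, *args, **kwargs):
    """Log 'msg' at the custom AUDIT level on the given logger."""
    if logger.isEnabledFor(AUDIT):
        logger.log(AUDIT, msg, *args, **kwargs)
```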

Matt


Re: [openstack-dev] [Nova] bp proposal: discovery of peer instances through metadata service

2014-01-27 Thread Justin Santa Barbara
Day, Phil wrote:

>
> >> We already have a mechanism now where an instance can push metadata as
> >> a way of Windows instances sharing their passwords - so maybe this could
> >> build on that somehow - for example each instance pushes the data its
> >> willing to share with other instances owned by the same tenant ?
> >
> > I do like that and think it would be very cool, but it is much more
> complex to
> > implement I think.
>
> I don't think its that complicated - just needs one extra attribute stored
> per instance (for example into instance_system_metadata) which allows the
> instance to be included in the list
>

Ah - OK, I think I better understand what you're proposing, and I do like
it.  The hardest bit of having the metadata store be full read/write would
be defining what is and is not allowed (rate-limits, size-limits, etc).  I
worry that you end up with a new key-value store, and with per-instance
credentials.  That would be a separate discussion: this blueprint is trying
to provide a focused replacement for multicast discovery for the cloud.

But: thank you for reminding me about the Windows password though...  It
may provide a reasonable model:

We would have a new endpoint, say 'discovery'.  An instance can POST a
single string value to the endpoint.  A GET on the endpoint will return any
values posted by all instances in the same project.

One key only; name not publicly exposed ('discovery_datum'?); 255 bytes of
value only.

I expect most instances will just post their IPs, but I expect other uses
will be found.
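
To make the contract concrete, a guest-side sketch might look like the
following. The link-local metadata address, the 'discovery' path, and the
one-value-per-line response format are all assumptions for illustration;
only the POST-one-value / GET-all-values behaviour comes from the proposal
above:

```python
import urllib.request

# The link-local address is the usual metadata endpoint; the
# 'discovery' path is hypothetical, per the proposal above.
METADATA_URL = "http://169.254.169.254/openstack/latest/discovery"

def register(value):
    """POST this instance's single discovery datum (e.g. its IP)."""
    req = urllib.request.Request(METADATA_URL,
                                 data=value.encode("utf-8"),
                                 method="POST")
    urllib.request.urlopen(req)

def parse_discovery(body):
    """Assume one value per line, as with other metadata listings."""
    return [line for line in body.splitlines() if line]

def discover():
    """GET every value posted by instances in the same project."""
    with urllib.request.urlopen(METADATA_URL) as resp:
        return parse_discovery(resp.read().decode("utf-8"))
```

A guest would typically call register() with its IP once at boot and poll
discover() to build its peer list.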

If I provided a patch that worked in this way, would you/others be on-board?


Justin
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread Macdonald-Wallace, Matthew
> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: 27 January 2014 14:26
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Proposed Logging Standards
> 
> On 01/27/2014 09:07 AM, Macdonald-Wallace, Matthew wrote:
> > Hi Sean,
> >
> > I'm currently working on moving away from the "built-in" logging to use
> log_config= and the python logging framework so that we can start
> shipping to logstash/sentry/.
> >
> > I'd be very interested in getting involved in this, especially from a "why 
> > do we
> have log messages that are split across multiple lines" perspective!
> 
> Do we have many that aren't either DEBUG or TRACE? I thought we were pretty
> clean there.

True, most (all?!!) are DEBUG/TRACE and mainly from calling out to other 
clients (Neutron/Glance/Cinder/etc), but if you're sending DEBUG somewhere 
useful for future processing then trying to glue the split-lines back together 
again can be "interesting".

At the moment we are assuming that anything that doesn't start with a 
date-stamp is associated with the line above it.  This is probably OK for now; 
however, if anything changes in future that negates this rule, we won't catch 
it until it's too late!
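
For the curious, the heuristic amounts to something like this (a sketch
only; the timestamp pattern assumes the default openstack.common.log
format):

```python
import re

# Lines that begin a new record start with a timestamp such as
# "2014-01-27 14:26:03.123" (the default openstack.common.log format).
TIMESTAMP_RE = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}")

def join_multiline(lines):
    """Glue continuation lines (no leading timestamp) onto the
    record that precedes them."""
    records = []
    for line in lines:
        if TIMESTAMP_RE.match(line) or not records:
            records.append(line)
        else:
            records[-1] += "\n" + line
    return records
```

Tracebacks and other multi-line DEBUG output then travel as a single
record, which is exactly what breaks when the date-stamp assumption stops
holding.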

> >
> > P.S. FWIW, I'd also welcome details on what the "Audit" level gives us
> > that the others don't... :)
> 
> Well as far as I can tell the AUDIT level was a prior drive-by contribution
> that's not being actively maintained. Honestly, I think we should probably
> rip it out, because I don't see any in-tree tooling to use it, and it's
> horribly inconsistent.

+1 for this. I wondered if it was something to do with Ceilometer, but from 
your comment here I'm guessing probably not.

I've also noticed just now that we appear to be "re-inventing" some parts of 
the logging framework (openstack.common.log.WriteableLogger for example appears 
to be a "catchall" when we should just be handing off to the default logger and 
letting the python logging framework decide what to do IMHO).
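
For anyone unfamiliar with the pattern, the class in question is essentially
a file-like shim that forwards write() calls into a logger (eventlet's wsgi
server, for instance, expects a stream to write its access log to). A
minimal sketch of the idea, not the actual implementation:

```python
import logging

class WriteableLogger(object):
    """Minimal file-like shim that forwards write() calls to a logger.

    A sketch of the pattern only -- the real class lives in
    openstack.common.log and has more to it.
    """

    def __init__(self, logger, level=logging.INFO):
        self.logger = logger
        self.level = level

    def write(self, msg):
        # Streams are written line-by-line with trailing newlines;
        # strip them so the log record is clean, and drop blank writes.
        msg = msg.rstrip()
        if msg:
            self.logger.log(self.level, msg)
```

Anything that insists on a stream can then be pointed at
WriteableLogger(LOG), and its output lands in the normal logging pipeline
rather than bypassing it.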

Cheers,

Matt
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposed Logging Standards

2014-01-27 Thread Peter Portante
On Mon, Jan 27, 2014 at 8:07 AM, Sean Dague  wrote:

> Back at the beginning of the cycle, I pushed for the idea of doing some
> log harmonization, so that the OpenStack logs, across services, made
> sense. I've pushed a proposed changes to Nova and Keystone over the past
> couple of days.
>
> This is going to be a long process, so right now I want to just focus on
> making INFO level sane, because as someone that spends a lot of time
> staring at logs in test failures, I can tell you it currently isn't.
>
> https://wiki.openstack.org/wiki/LoggingStandards is a few things I've
> written down so far, comments welcomed.
>
> We kind of need to solve this set of recommendations once and for all up
> front, because negotiating each change, with each project, isn't going
> to work (e.g - https://review.openstack.org/#/c/69218/)
>
> What I'd like to find out now:
>
> 1) who's interested in this topic?
>

Interested.


> 2) who's interested in helping flesh out the guidelines for various log
> levels?
>

Interested.


> 3) who's interested in helping get these kinds of patches into various
> projects in OpenStack?
>

Interested, but too much already on my plate, so I can't guarantee consistent
help.


> 4) which projects are interested in participating (i.e. interested in
> prioritizing landing these kinds of UX improvements)
>
> This is going to be progressive and iterative. And will require lots of
> folks involved.
>
> -Sean
>
> --
> Sean Dague
> Samsung Research America
> s...@dague.net / sean.da...@samsung.com
> http://dague.net
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Proposal to add Matt Riedemann to nova-core

2014-01-27 Thread Dan Genin

As a reviewee of Matt's, I vote

+1

On 11/23/2013 10:17 AM, Gary Kotton wrote:
This message has been archived. View the original item 


+1

On 11/23/13 4:53 PM, "Sean Dague"  wrote:

>+1 would be happy to have Matt on the team
>
>On Fri, Nov 22, 2013 at 8:23 PM, Brian Elliott 
>wrote:
>> +1
>>
>> Solid reviewer!
>>
>> Sent from my iPad
>>
>>> On Nov 22, 2013, at 2:53 PM, Russell Bryant  wrote:

>>>
>>> Greetings,
>>>
>>> I would like to propose adding Matt Riedemann to the nova-core review
>>>team.
>>>
>>> Matt has been involved with nova for a long time, taking on a wide
>>>range
>>> of tasks.  He writes good code.  He's very engaged with the development
>>> community.  Most importantly, he provides good code reviews and has
>>> earned the trust of other members of the review team.
>>>
>>>
>>> https://review.openstack.org/#/dashboard/6873
>>>
>>> https://review.openstack.org/#/q/owner:6873,n,z
>>>
>>> https://review.openstack.org/#/q/reviewer:6873,n,z
>>>
>>> Please respond with +1/-1, or any further comments.
>>>
>>> Thanks,
>>>
>>> --
>>> Russell Bryant
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.opens





___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][LBaaS] Securing RPC channel between the server and the agent

2014-01-27 Thread Eugene Nikanorov
Hi folks,

As we are going to add ssl implementation to lbaas which would be based on
well-known haproxy+stunnel combination, there is one problem that we need
to solve: securing the communication channel between neutron-server and the
agent.

I see several approaches here:
1) Rely on secure messaging as described here:
http://docs.openstack.org/security-guide/content/ch038_transport-security.html

pros: no or minor additional things to take care of on the neutron-server
side and the client side
cons: might be more complex to test. Also, I'm not sure the testing
infrastructure uses that.
We'll need to state that lbaas ssl is only secure when transport security
is enabled.
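
For what it's worth, option 1 mostly reduces to operators enabling SSL on
the RPC transport, e.g. something along these lines in the service
configuration (option names as per the kombu/rabbit driver; the certificate
paths are made up for illustration):

```ini
[DEFAULT]
# Illustrative only -- certificate paths are deployment-specific.
rabbit_use_ssl = True
kombu_ssl_ca_certs = /etc/ssl/certs/rabbit-ca.pem
kombu_ssl_certfile = /etc/ssl/certs/neutron-amqp.pem
kombu_ssl_keyfile = /etc/ssl/private/neutron-amqp.key
```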

2) Provide the neutron server/agent with a certificate for encrypting the
keys/certificates that are dedicated to loadbalancers.

pros: doesn't depend on cloud-wide messaging security. We can say that 'ssl
works' in any case.
cons: more to implement, more complex deployment.

Unless I've missed some other obvious solution, what do you think is the
best approach here?
(I'm not considering the usage of external secure store like barbican at
this point)

What do you think?

Thanks,
Eugene.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


  1   2   >