Re: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-24 Thread Anastasia Kuznetsova
Winson, Renat,

I have a few questions, because some aspects aren't clear to me.

1) How will the end user pass env variables to a workflow? Will you add
one more optional parameter to the execution-create command?
mistral execution-create wf wf_input wf_params wf_env
If yes, then what will wf_env be, a json file?
2) Returning to the first example:
...
 action: std.sql conn_str={$.env.conn_str} query={$.query}
...
$.env - is it the name of an environment, or will it be registered syntax
for accessing values from the env?
3) Can a user have multiple environments?

On Wed, Dec 24, 2014 at 8:20 AM, Renat Akhmerov rakhme...@mirantis.com
wrote:

 Thanks Winson,

 Since we discussed all this already, I just want to confirm that I fully
 support this model; it will significantly help us make much more concise,
 readable and maintainable workflows. I spent a lot of time thinking about
 it and don’t see any problems with it. Nice job!

 However, all additional comments and questions are more than welcomed!


 Renat Akhmerov
 @ Mirantis Inc.



 On 24 Dec 2014, at 04:32, W Chan m4d.co...@gmail.com wrote:

 After some online discussions with Renat, the following is a revision of
 the proposal to address the following related blueprints.
 *
 https://blueprints.launchpad.net/mistral/+spec/mistral-execution-environment
 * https://blueprints.launchpad.net/mistral/+spec/mistral-global-context
 *
 https://blueprints.launchpad.net/mistral/+spec/mistral-default-input-values
 * https://blueprints.launchpad.net/mistral/+spec/mistral-runtime-context

 Please refer to the following threads for backgrounds.
 *
 http://lists.openstack.org/pipermail/openstack-dev/2014-December/052643.html
 *
 http://lists.openstack.org/pipermail/openstack-dev/2014-December/052960.html
 *
 http://lists.openstack.org/pipermail/openstack-dev/2014-December/052824.html


 *Workflow Context Scope*
 1. the workflow context is passed to all of its subflows and subtasks/actions
 (aka children) only explicitly, via inputs
 2. the context is passed to children by value (copy.deepcopy)
 3. a change to the context is passed to the parent only when it's explicitly
 published at the end of the child execution
 4. a change to the context at the parent (after a publish from a child) is
 passed to subsequent children
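 As a rough illustration of these scoping rules, here is a minimal sketch
 (not Mistral's actual implementation; the class and variable names are
 invented for clarity):

 import copy

 class ChildExecution(object):
     def __init__(self, parent_ctx, publish_keys):
         # Rule 2: the child receives the context by value.
         self.ctx = copy.deepcopy(parent_ctx)
         self.publish_keys = publish_keys

     def publish(self):
         # Rule 3: only explicitly published keys flow back to the parent.
         return dict((k, self.ctx[k]) for k in self.publish_keys)

 parent_ctx = {'query': 'SELECT 1'}
 child = ChildExecution(parent_ctx, publish_keys=['records'])
 child.ctx['records'] = ['row1']     # child-local change, published below
 child.ctx['query'] = 'SELECT 2'     # never published, invisible to parent
 parent_ctx.update(child.publish())  # Rule 4: the published change now
                                     # reaches subsequent children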

 *Environment Variables*
 Solves the problem of quickly passing pre-defined inputs to a WF
 execution.  In the WF spec, environment variables are referenced as
 $.env.var1, $.env.var2, etc.  We should implement an API and DB model
 where users can pre-define different environments with their own sets of
 variables.  An environment can be passed either by name (looked up in the
 DB) or ad hoc as a dict in start_workflow.  On workflow execution, a copy
 of the environment is saved with the execution object.  Action inputs are
 still declared explicitly in the WF spec.  This alone does not solve the
 problem of common inputs being specified over and over again: if there are
 multiple SQL tasks in the WF, the WF author still needs to supply the
 conn_str explicitly for each task.  In the example below, let's say we
 have a SQL Query Action that takes a connection string and a query
 statement as inputs.  The WF author can specify that the conn_str input is
 supplied from $.env.conn_str.

 *Example:*

 # Assume this SqlAction is registered as std.sql in Mistral's Action table.
 class SqlAction(object):
     def __init__(self, conn_str, query):
         ...

 ...

 version: 2.0
 workflows:
   demo:
     type: direct
     input:
       - query
     output:
       - records
     tasks:
       query:
         action: std.sql conn_str={$.env.conn_str} query={$.query}
         publish:
           records: $

 ...

 my_adhoc_env = {
     'conn_str': 'mysql://admin:secrete@localhost/test'
 }

 ...

 # adhoc by dict
 start_workflow(wf_name, wf_inputs, env=my_adhoc_env)

 OR

 # lookup by name from DB model
 start_workflow(wf_name, wf_inputs, env=my_lab_env)


 *Define Default Action Inputs as Environment Variables*
 Solves the problem of specifying the same inputs to subflows and
 subtasks/actions over and over again.  On command execution, if action
 inputs are not explicitly supplied, defaults will be looked up from the
 environment.

 *Example:*
 Using the same example from above, the WF author can still supply both
 conn_str and query inputs in the WF spec.  However, the author also has
 the option to supply them as default action inputs.  An example
 environment structure is below.  __actions should be reserved and
 immutable.  Users can specify one or more default inputs for the sql
 action as a nested dict under __actions.  Recursive YAQL eval should be
 supported in the env variables.

 version: 2.0
 workflows:
   demo:
     type: direct
     input:
       - query
     output:
       - records
     tasks:
       query:
         action: std.sql query={$.query}
         publish:
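 For illustration, such an environment structure might look like the
 following (a sketch only, assuming the reserved __actions key and the
 recursive YAQL eval described above):

 my_lab_env = {
     # Plain variables, referenced as $.env.conn_str in the WF spec.
     'conn_str': 'mysql://admin:secrete@localhost/test',
     # Reserved, immutable section holding default action inputs.
     '__actions': {
         'std.sql': {
             'conn_str': '{$.env.conn_str}'  # resolved by recursive YAQL eval
         }
     }
 }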

Re: [openstack-dev] [infra] [storyboard] Nominating Yolanda Robla for StoryBoard Core

2014-12-24 Thread Ricardo Carrillo Cruz
Big +1 from me :-).

Yolanda is an amazing engineer, both frontend and backend.
As Michael said, she's not only been doing Storyboard but a bunch of other
infra stuff that will be beneficial for the project.

Regards!

2014-12-24 0:38 GMT+01:00 Zaro zaro0...@gmail.com:

 +1

 On Tue, Dec 23, 2014 at 2:34 PM, Michael Krotscheck krotsch...@gmail.com
 wrote:

 Hello everyone!

 StoryBoard is the much anticipated successor to Launchpad, and is a
 component of the Infrastructure Program. The storyboard-core group is
 intended to be a superset of the infra-core group, with additional
 reviewers who specialize in the field.

 Yolanda has been working on StoryBoard ever since the Atlanta Summit, and
 has provided a diligent and cautious voice to our development effort. She
 has consistently provided feedback on our reviews, and is neither afraid of
 asking for clarification, nor of providing constructive criticism. In
 return, she has been nothing but gracious and responsive when improvements
 were suggested to her own submissions.

 Furthermore, Yolanda has been quite active in the infrastructure team as
 a whole, and provides valuable context for us in the greater realm of infra.

 Please respond within this thread with either supporting commentary, or
 concerns about her promotion. Since many western countries are currently
 celebrating holidays, the review period will remain open until January 9th.
 If the consensus is positive, we will promote her then!

 Thanks,

 Michael


 References:

 https://review.openstack.org/#/q/reviewer:%22yolanda.robla+%253Cinfo%2540ysoft.biz%253E%22,n,z

 http://stackalytics.com/?user_id=yolanda.robla&metric=marks




Re: [openstack-dev] [Openstack-operators] The state of nova-network to neutron migration

2014-12-24 Thread Oleg Bondarev
On Mon, Dec 22, 2014 at 10:08 PM, Anita Kuno ante...@anteaya.info wrote:

 On 12/22/2014 01:32 PM, Joe Gordon wrote:
  On Fri, Dec 19, 2014 at 9:28 AM, Kyle Mestery mest...@mestery.com
 wrote:
 
  On Fri, Dec 19, 2014 at 10:59 AM, Anita Kuno ante...@anteaya.info
 wrote:
 
  Rather than waste your time making excuses let me state where we are
 and
  where I would like to get to, also sharing my thoughts about how you
 can
  get involved if you want to see this happen as badly as I have been
 told
  you do.
 
  Where we are:
  * a great deal of foundation work has been accomplished to achieve
  parity with nova-network and neutron to the extent that those involved
  are ready for migration plans to be formulated and be put in place
  * a summit session happened with notes and intentions[0]
  * people took responsibility and promptly got swamped with other
  responsibilities
  * spec deadlines arose and in neutron's case have passed
  * currently a neutron spec [1] is a work in progress (and it needs
  significant work still) and a nova spec is required and doesn't have a
  first draft or a champion
 
  Where I would like to get to:
  * I need people in addition to Oleg Bondarev to be available to
 help
  come up with ideas and words to describe them to create the specs in a
  very short amount of time (Oleg is doing great work and is a fabulous
  person, yay Oleg, he just can't do this alone)
  * specifically I need a contact on the nova side of this complex
  problem, similar to Oleg on the neutron side
  * we need to have a way for people involved with this effort to
 find
  each other, talk to each other and track progress
  * we need to have representation at both nova and neutron weekly
  meetings to communicate status and needs
 
  We are at K-2 and our current status is insufficient to expect this
 work
  will be accomplished by the end of K-3. I will be championing this
 work,
  in whatever state, so at least it doesn't fall off the map. If you
 would
  like to help this effort please get in contact. I will be thinking of
  ways to further this work and will be communicating to those who
  identify as affected by these decisions in the most effective methods
 of
  which I am capable.
 
  Thank you to all who have gotten us as far as we have gotten in this
  effort, it has been a long haul and you have all done great work. Let's
  keep going and finish this.
 
  Thank you,
  Anita.
 
  Thank you for volunteering to drive this effort Anita, I am very happy
  about this. I support you 100%.
 
  I'd like to point out that we really need a point of contact on the nova
  side, similar to Oleg on the Neutron side. IMHO, this is step 1 here to
  continue moving this forward.
 
 
  At the summit the nova team marked the nova-network to neutron migration
 as
  a priority [0], so we are collectively interested in seeing this happen
 and
  want to help in any way possible.   With regard to a nova point of
 contact,
  anyone in nova-specs-core should work, that way we can cover more time
  zones.
 
  From what I can gather the first step is to finish fleshing out the first
  spec [1], and it sounds like it would be good to get a few nova-cores
  reviewing it as well.
 
 
 
 
  [0]
 
 http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html
  [1] https://review.openstack.org/#/c/142456/
 
 
 Wonderful, thank you for the support Joe.

 It appears that we need to have a regular weekly meeting to track
 progress in an archived manner.

 I know there was one meeting in November, but I don't know what it was
 called, so so far I can't find the logs for it.


It wasn't official, we just gathered together on #novamigration. Attaching
the log here.


 So if those affected by this issue can identify what time (UTC please;
 don't tell me what time zone you are in, it is too hard to guess what UTC
 time you are available) and day of the week you are available for a
 meeting, I'll create one and we can start talking to each other.

 I need to avoid Monday 1500 and 2100 UTC, Tuesday 0800 UTC, 1400 UTC and
 1900 - 2200 UTC, Wednesdays 1500 - 1700 UTC, Thursdays 1400 and 2100 UTC.


I'm available each weekday 0700-1600 UTC, 1700-1800 UTC is also acceptable.

Thanks,
Oleg



 Thanks,
 Anita.

 
  Thanks,
  Kyle
 
 
  [0] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron
  [1] https://review.openstack.org/#/c/142456/
 
 
 
 



Re: [openstack-dev] [Keystone] Bug in federation

2014-12-24 Thread David Chadwick


On 23/12/2014 21:56, Morgan Fainberg wrote:
 
  On Dec 23, 2014, at 1:08 PM, Dolph Mathews dolph.math...@gmail.com wrote:


  On Tue, Dec 23, 2014 at 1:33 PM, David Chadwick d.w.chadw...@kent.ac.uk wrote:

 Hi Adam

 On 23/12/2014 17:34, Adam Young wrote:
  On 12/23/2014 11:34 AM, David Chadwick wrote:
  Hi guys
 
  we now have the ABFAB federation protocol working with Keystone, 
 using a
  modified mod_auth_kerb plugin for Apache (available from the project
  Moonshot web site). However, we did not change Keystone configuration
  from its original SAML federation configuration, when it was talking 
 to
  SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone 
 code
  (which I believe had to be done for OpenID connect.) We simply 
 replaced
  mod_shibboleth with mod_auth_kerb and talked to a completely different
  IDP with a different protocol. And everything worked just fine.
 
  Consequently Keystone is broken, since you can configure it to trust a
  particular IDP, talking a particular protocol, but Apache will happily
  talk to another IDP, using a different protocol, and Keystone cannot
  tell the difference and will happily accept the authenticated user.
  Keystone should reject any authenticated user who does not come from 
 the
  trusted IDP talking the correct protocol. Otherwise there is no point 
 in
  configuring Keystone with this information, if it is ignored by 
 Keystone.
  The IDP and the Protocol should be passed from HTTPD in env vars. Can
  you confirm/deny that this is the case now?

 What is passed from Apache is the 'PATH_INFO' variable, and it is
 set to
 the URL of Keystone that is being protected, which in our case is
 /OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth

 There are also the following arguments passed to Keystone
 'wsgiorg.routing_args': (<routes.util.URLGenerator object at
 0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol':
 u'saml2'})

 and

 'PATH_TRANSLATED':
 
 '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth'

 So Apache is telling Keystone that it has protected the URL that
 Keystone has configured to be protected.

 However, Apache has been configured to protect this URL with the ABFAB
 protocol and the local Radius server, rather than the KentProxy
 IdP and
 the SAML2 protocol. So we could say that Apache is lying to Keystone,
 and because Keystone trusts Apache, then Keystone trusts Apache's lies
 and wrongly thinks that the correct IDP and protocol were used.

 The only sure way to protect Keystone from a wrongly or mal-configured
 Apache is to have end to end security, where Keystone gets a token
 from
 the IDP that it can validate, to prove that it is the trusted IDP that
 it is talking to. In other words, if Keystone is given the original
 signed SAML assertion from the IDP, it will know for definite that the
 user was authenticated by the trusted IDP using the trusted protocol


  So the bug is a misconfiguration, not an actual bug. The goal was to
  trust and leverage httpd, not reimplement it and all its extensions.
 
 Fixing this “bug” would be moving towards Keystone needing to implement
 all of the various protocols to avoid “misconfigurations”. There are
 probably some more values that can be passed down from the Apache layer
 to help provide more confidence in the IDP that is being used. I don’t
 see a real tangible benefit to moving away from leveraging HTTPD for
 handling the heavy lifting when handling federated Identity. 

It's not as heavy as you suggest. Apache would still do all the protocol
negotiation and validation. Keystone would only need to verify the
signature of the incoming SAML assertion in order to validate who the
IDP was, and that it was SAML. (Remember that Keystone already
implements SAML for sending out SAML assertions, which is much more
heavyweight.) ABFAB sends an unsigned SAML assertion embedded in a
Radius attribute, so obtaining this and doing a minimum of field
checking would be sufficient. There will be something similar that can
be done for OpenID Connect.

So we are not talking about redoing all the protocol handling, simply
checking that the trust rules that have already been configured into
Keystone are actually being followed by Apache. Trust but verify, in
the words of Ronald Reagan.

regards

David

 
 —Morgan
 

 regards

 David

 
  On the Apache side we are looking to expand the set of variables set.
  
 http://www.freeipa.org/page/Environment_Variables#Proposed_Additional_Variables
 
 

 The original SAML assertion
 
  mod_shib does support Shib-Identity-Provider :
 
 
  
 

[openstack-dev] [Heat][oslo-incubator][oslo-log] Logging Unicode characters

2014-12-24 Thread Qiming Teng
Hi,

When trying to enable Heat stack names to use unicode strings, I am
stuck on a weird behavior of logging.

Suppose I have a stack name assigned some non-ASCII string, then when
stack tries to log something here:

heat/engine/stack.py:

 536 LOG.info(_LI('Stack %(action)s %(status)s (%(name)s): '
 537  '%(reason)s'),
 538  {'action': action,
 539   'status': status,
 540   'name': self.name,   # type(self.name)==unicode here
 541   'reason': reason})

I'm seeing the following errors from h-eng session:

Traceback (most recent call last):
  File "/usr/lib64/python2.6/logging/__init__.py", line 799, in emit
    stream.write(fs % msg.decode('utf-8'))
  File "/usr/lib64/python2.6/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode characters in position
114-115: ordinal not in range(128)

Does this mean logging cannot handle Unicode correctly?  No.  I did the
following experiments:

$ cat logtest

#!/usr/bin/env python

import sys

from oslo.utils import encodeutils
from oslo import i18n

from heat.common.i18n import _LI
from heat.openstack.common import log as logging

i18n.enable_lazy()

LOG = logging.getLogger('logtest')
logging.setup('heat')

print('sys.stdin.encoding: %s' % sys.stdin.encoding)
print('sys.getdefaultencoding: %s' % sys.getdefaultencoding())

s = sys.argv[1]
print('s is: %s' % type(s))

stack_name = encodeutils.safe_decode(unis)
print('stack_name is: %s' % type(stack_name))

# stack_name is unicode here
LOG.error(_LI('stack name: %(name)s') % {'name': stack_name})

$ ./logtest some Chinese here

[tengqm@node1 heat]$ ./logtest 中文
sys.stdin.encoding: UTF-8
sys.getdefaultencoding: ascii
s is: type 'str'
stack_name is: type 'unicode'
2014-12-24 17:51:13.799 29194 ERROR logtest [-] stack name: 中文

It worked.  

After spending more than one day on this, I'm seeking help from people
here.  What's wrong with Unicode stack names here?

Any hints are appreciated.

Regards,
  - Qiming




Re: [openstack-dev] [heat] Application level HA via Heat

2014-12-24 Thread Steven Hardy
On Mon, Dec 22, 2014 at 03:42:37PM -0500, Zane Bitter wrote:
 On 22/12/14 13:21, Steven Hardy wrote:
 Hi all,
 
 So, lately I've been having various discussions around $subject, and I know
 it's something several folks in our community are interested in, so I
 wanted to get some ideas I've been pondering out there for discussion.
 
 I'll start with a proposal of how we might replace HARestarter with
 AutoScaling group, then give some initial ideas of how we might evolve that
 into something capable of a sort-of active/active failover.
 
 1. HARestarter replacement.
 
 My position on HARestarter has long been that equivalent functionality
 should be available via AutoScalingGroups of size 1.  Turns out that
 shouldn't be too hard to do:
 
    resources:
      server_group:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 1
          max_size: 1
          resource:
            type: ha_server.yaml
 
      server_replacement_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          # FIXME: this adjustment_type doesn't exist yet
          adjustment_type: replace_oldest
          auto_scaling_group_id: {get_resource: server_group}
          scaling_adjustment: 1
 
 One potential issue with this is that it is a little bit _too_ equivalent to
 HARestarter - it will replace your whole scaled unit (ha_server.yaml in this
 case) rather than just the failed resource inside.

Personally I don't see that as a problem, because the interface makes that
explicit - if you put a resource in an AutoScalingGroup, you expect it to
get created/deleted on group adjustment, so anything you don't want
replaced stays outside the group.

Happy to consider other alternatives which do less destructive replacement,
but to me this seems like the simplest possible way to replace HARestarter
with something we can actually support long term.

Even if "just replace failed resource" is somehow made available later,
we'll still want to support AutoScalingGroup, and replace_oldest is
likely to be useful in other situations, not just this use-case.

Do you have specific ideas of how the just-replace-failed-resource feature
might be implemented?  A way for a signal to declare a resource failed so
convergence auto-healing does a less destructive replacement?

 So, currently our ScalingPolicy resource can only support three adjustment
 types, all of which change the group capacity.  AutoScalingGroup already
 supports batched replacements for rolling updates, so if we modify the
 interface to allow a signal to trigger replacement of a group member, then
 the snippet above should be logically equivalent to HARestarter AFAICT.
 
 The steps to do this should be:
 
   - Standardize the ScalingPolicy-AutoScaling group interface, so
 aynchronous adjustments (e.g signals) between the two resources don't use
 the adjust method.
 
   - Add an option to replace a member to the signal interface of
 AutoScalingGroup
 
   - Add the new replace adjustment type to ScalingPolicy
 
 I think I am broadly in favour of this.

Ok, great - I think we'll probably want replace_oldest, replace_newest, and
replace_specific, such that both alarm- and operator-driven replacement have
flexibility over which member is replaced.

 I posted a patch which implements the first step, and the second will be
 required for TripleO, e.g we should be doing it soon.
 
 https://review.openstack.org/#/c/143496/
 https://review.openstack.org/#/c/140781/
 
 2. A possible next step towards active/active HA failover
 
 The next part is the ability to notify before replacement that a scaling
 action is about to happen (just like we do for LoadBalancer resources
 already) and orchestrate some or all of the following:
 
 - Attempt to quiesce the currently active node (may be impossible if it's
in a bad state)
 
 - Detach resources (e.g volumes primarily?) from the current active node,
and attach them to the new active node
 
 - Run some config action to activate the new node (e.g run some config
script to fsck and mount a volume, then start some application).
 
 The first step is possible by putting a SoftwareConfig/SoftwareDeployment
 resource inside ha_server.yaml (using NO_SIGNAL so we don't fail if the
 node is too bricked to respond and specifying DELETE action so it only runs
 when we replace the resource).
 
 The third step is possible either via a script inside the box which polls
 for the volume attachment, or possibly via an update-only software config.
 
 The second step is the missing piece AFAICS.
 
 I've been wondering if we can do something inside a new heat resource,
 which knows what the current active member of an ASG is, and gets
 triggered on a replace signal to orchestrate e.g deleting and creating a
 VolumeAttachment resource to move a volume between servers.
 
 Something like:
 
    resources:
      server_group:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 2
          max_size: 2
          resource:
            type: 

Re: [openstack-dev] [Keystone] Bug in federation

2014-12-24 Thread Marco Fargetta
Hi All,

this bug was already reported and fixed in two steps:

https://bugs.launchpad.net/ossn/+bug/1390124


The first step is in the documentation. There should also be an OSSN advisory
for previous versions of OpenStack. The solution consists of configuring
shibboleth to use different IdPs for different URLs.

The second step, still in progress, is to include an ID in the IdP 
configuration. My patch is under review here:

https://review.openstack.org/#/c/142743/

Let me know if it is enough to solve the issue in your case.

Marco

 On 24 Dec 2014, at 10:08, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 
 
 
 On 23/12/2014 21:56, Morgan Fainberg wrote:
 
 On Dec 23, 2014, at 1:08 PM, Dolph Mathews dolph.math...@gmail.com wrote:
 
 
 On Tue, Dec 23, 2014 at 1:33 PM, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 
Hi Adam
 
On 23/12/2014 17:34, Adam Young wrote:
 On 12/23/2014 11:34 AM, David Chadwick wrote:
 Hi guys
 
 we now have the ABFAB federation protocol working with Keystone, using a
 modified mod_auth_kerb plugin for Apache (available from the project
 Moonshot web site). However, we did not change Keystone configuration
 from its original SAML federation configuration, when it was talking to
 SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code
 (which I believe had to be done for OpenID connect.) We simply replaced
 mod_shibboleth with mod_auth_kerb and talked to a completely different
 IDP with a different protocol. And everything worked just fine.
 
 Consequently Keystone is broken, since you can configure it to trust a
 particular IDP, talking a particular protocol, but Apache will happily
 talk to another IDP, using a different protocol, and Keystone cannot
 tell the difference and will happily accept the authenticated user.
 Keystone should reject any authenticated user who does not come from the
 trusted IDP talking the correct protocol. Otherwise there is no point in
 configuring Keystone with this information, if it is ignored by Keystone.
 The IDP and the Protocol should be passed from HTTPD in env vars. Can
 you confirm/deny that this is the case now?
 
What is passed from Apache is the 'PATH_INFO' variable, and it is
set to
the URL of Keystone that is being protected, which in our case is
/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth
 
There are also the following arguments passed to Keystone
'wsgiorg.routing_args': (<routes.util.URLGenerator object at
0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol':
u'saml2'})
 
and
 
'PATH_TRANSLATED':

 '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth'
 
So Apache is telling Keystone that it has protected the URL that
Keystone has configured to be protected.
 
However, Apache has been configured to protect this URL with the ABFAB
protocol and the local Radius server, rather than the KentProxy
IdP and
the SAML2 protocol. So we could say that Apache is lying to Keystone,
and because Keystone trusts Apache, then Keystone trusts Apache's lies
and wrongly thinks that the correct IDP and protocol were used.
 
The only sure way to protect Keystone from a wrongly or mal-configured
Apache is to have end to end security, where Keystone gets a token
from
the IDP that it can validate, to prove that it is the trusted IDP that
it is talking to. In other words, if Keystone is given the original
signed SAML assertion from the IDP, it will know for definite that the
user was authenticated by the trusted IDP using the trusted protocol
 
 
  So the bug is a misconfiguration, not an actual bug. The goal was to
  trust and leverage httpd, not reimplement it and all its extensions.
 
 Fixing this “bug” would be moving towards Keystone needing to implement
 all of the various protocols to avoid “misconfigurations”. There are
 probably some more values that can be passed down from the Apache layer
 to help provide more confidence in the IDP that is being used. I don’t
 see a real tangible benefit to moving away from leveraging HTTPD for
 handling the heavy lifting when handling federated Identity. 
 
  It's not as heavy as you suggest. Apache would still do all the protocol
 negotiation and validation. Keystone would only need to verify the
 signature of the incoming SAML assertion in order to validate who the
 IDP was, and that it was SAML. (Remember that Keystone already
 implements SAML for sending out SAML assertions, which is much more
 heavyweight.) ABFAB sends an unsigned SAML assertion embedded in a
 Radius attribute, so obtaining this and doing a minimum of field
 checking would be sufficient. There will be something similar that can
 be done for OpenID Connect.
 
 So we are not talking about redoing all the protocol handling, simply
 checking that the trust rules that have already been configured 

Re: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-24 Thread Renat Akhmerov

 On 24 Dec 2014, at 14:06, Anastasia Kuznetsova akuznets...@mirantis.com 
 wrote:
 
  1) How will the end user pass env variables to a workflow? Will you add one
  more optional parameter to the execution-create command?
  mistral execution-create wf wf_input wf_params wf_env
  If yes, then what will wf_env be, a json file?

Yes. IMO it should be possible to specify either a string (the name of a
previously stored environment) or a json file (a so-called ad-hoc environment).

  2) Returning to the first example:
 ...
  action: std.sql conn_str={$.env.conn_str} query={$.query}
 ...
  $.env - is it the name of an environment, or will it be registered syntax
  for accessing values from the env?

So far we agreed that 'env' should not be a registered key. An environment
(optionally specified) is just another storage of variables coming after the
workflow context in a lookup chain. So if somewhere in a wf we have an
expression $.something, then this "something" will be looked up first in the
workflow context and, if it doesn't exist there, then looked up in the
specified environment.
But if we want to explicitly group a set of variables, we can use any key
(except reserved ones such as __actions), for example "env".
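A toy model of that lookup chain (a sketch of the proposed semantics only,
not Mistral code):

def lookup(key, wf_context, environment=None):
    # Proposed resolution order: the workflow context first, then the
    # optionally specified environment.
    if key in wf_context:
        return wf_context[key]
    if environment and key in environment:
        return environment[key]
    raise KeyError(key)

lookup('conn_str', {'query': 'SELECT 1'},
       {'conn_str': 'mysql://admin:secrete@localhost/test'})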

  3) Can a user have multiple environments?

Yes. That's one of the goals of introducing the concept of an environment: so
that the same workflows can run in different environments (e.g. with different
email settings, any kinds of passwords, etc.).


Renat Akhmerov
@ Mirantis Inc.


Re: [openstack-dev] [heat] Application level HA via Heat

2014-12-24 Thread Renat Akhmerov
Hi

 Ok, I'm quite happy to accept this may be a better long-term solution, but
 can anyone comment on the current maturity level of Mistral?  Questions
 which spring to mind are:
 
 - Is the DSL stable now?

You can think “yes”: although we keep adding new features, we do it in a
backwards-compatible manner. I personally try to be very cautious about this.

 - What's the roadmap re incubation (there are a lot of TBD's here:
https://wiki.openstack.org/wiki/Mistral/Incubation)

Ooh yeah, this page is very, very obsolete, which is actually my fault because I
didn't pay a lot of attention to it after I heard all these rumors about the TC
changing the whole approach around getting projects incubated/integrated.

I think incubation readiness from a technical perspective is good (various
style checks, procedures etc.); even if there's still something that we need to
adjust, it must not be difficult or time consuming. The main question for the
last half a year has been “What OpenStack program best fits Mistral?”. So far
we've had two candidates: Orchestration and some new program (e.g. Workflow
Service). However, nothing is decided yet on that.

 - How does deferred authentication work for alarm triggered workflows, e.g
  if a ceilometer alarm (which authenticates as a stack domain user) needs
  to signal Mistral to start a workflow?

It works via Keystone trusts. It works, but there's still an issue that we have
to fix: if we authenticate with a previously created trust and try to call Nova,
it fails with an authentication error. I know it's been solved in other
projects (e.g. Heat), so we need to look at it.

 I guess a first step is creating a contrib Mistral resource and
 investigating it, but it would be great if anyone has first-hand
 experiences they can share before we burn too much time digging into it.

Yes, we already started discussing how we can create a Mistral resource for Heat.
It looks like there are a couple of volunteers who can do that. Anyway, I'm
totally for it, and any help from our side can be provided (including the
implementation itself).



Renat Akhmerov
@ Mirantis Inc.




[openstack-dev] [sahara] no meetings next 2 weeks

2014-12-24 Thread Sergey Lukjanov
Hi sahara folks,

Let's cancel the next two weekly meetings because of Christmas and New Year's
Day.

Thanks

P.S. Happy holidays!

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.


Re: [openstack-dev] [Heat][oslo-incubator][oslo-log] Logging Unicode characters

2014-12-24 Thread Qiming Teng
It seems that the reason is that in devstack 'screen' is not started with
Unicode support.  Still checking ...

Regards,
  Qiming

On Wed, Dec 24, 2014 at 05:48:56PM +0800, Qiming Teng wrote:
 Hi,
 
 When trying to enable Heat stack names to use unicode strings, I am
 stuck on a weird behavior of logging.
 
 Suppose I have a stack name assigned some non-ASCII string, then when
 stack tries to log something here:
 
 heat/engine/stack.py:
 
  536 LOG.info(_LI('Stack %(action)s %(status)s (%(name)s): '
  537  '%(reason)s'),
  538  {'action': action,
  539   'status': status,
  540   'name': self.name,   # type(self.name)==unicode here
  541   'reason': reason})
 
 I'm seeing the following errors from h-eng session:
 
 Traceback (most recent call last):
   File "/usr/lib64/python2.6/logging/__init__.py", line 799, in emit
     stream.write(fs % msg.decode('utf-8'))
   File "/usr/lib64/python2.6/encodings/utf_8.py", line 16, in decode
     return codecs.utf_8_decode(input, errors, True)
 UnicodeEncodeError: 'ascii' codec can't encode characters in position
 114-115: ordinal not in range(128)
 
 Does this mean logging cannot handle Unicode correctly?  No.  I did the
 following experiments:
 
 $ cat logtest
 
 #!/usr/bin/env python
 
 import sys
 
 from oslo.utils import encodeutils
 from oslo import i18n
 
 from heat.common.i18n import _LI
 from heat.openstack.common import log as logging
 
 i18n.enable_lazy()
 
 LOG = logging.getLogger('logtest')
 logging.setup('heat')
 
 print('sys.stdin.encoding: %s' % sys.stdin.encoding)
 print('sys.getdefaultencoding: %s' % sys.getdefaultencoding())
 
 s = sys.argv[1]
 print('s is: %s' % type(s))
 
 stack_name = encodeutils.safe_decode(unis)
 print('stack_name is: %s' % type(stack_name))
 
 # stack_name is unicode here
 LOG.error(_LI('stack name: %(name)s') % {'name': stack_name})
 
 $ ./logtest some Chinese here
 
 [tengqm@node1 heat]$ ./logtest 中文
 sys.stdin.encoding: UTF-8
 sys.getdefaultencoding: ascii
 s is: type 'str'
 stack_name is: type 'unicode'
 2014-12-24 17:51:13.799 29194 ERROR logtest [-] stack name: 中文
 
 It worked.  
 
 After spending more than one day on this, I'm seeking help from people
 here.  What's wrong with Unicode stack names here?
 
 Any hints are appreciated.
 
 Regards,
   - Qiming
 
 


[openstack-dev] [Fuel][Plugins] UI workflow, plugins enabling/disabling

2014-12-24 Thread Evgeniy L
Hi,

Recently we've discussed what plugins should look like from the user's point of
view [1].
At one of the meetings it was decided to have the following flow:

1. the user installs a fuel plugin (as usual, with `fuel plugins install
fuel-plugin-name-1.0.0.fp`)
2. after that the plugin can be seen on the Plugins page; the button for this
page will be placed somewhere between the Environments and Releases buttons
3. each plugin on the page has a checkbox; the checkbox represents the default
state of the plugin for new environments. If the checkbox is checked, when the
user creates an environment he can see all of the buttons which are related to
the plugin, e.g. in the case of Contrail he can see a new option in the list of
network providers on the Network tab in the wizard
4. during the environment configuration the user should select the options
which are related to the plugin; the information about the list of options and
where they should be placed is described by the plugin developer
5. when the user starts deployment, Nailgun parses the tasks and, depending on
their conditions, sends them to Astute. The conditions are described for each
task by the plugin developer, e.g. cluster:net_provider != 'contrail'; if a
task doesn't have conditions, we always execute it (see the sketch below)
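
A minimal sketch of how such per-task conditions might be evaluated (the
function and the condition representation here are assumptions for
illustration, not Nailgun's actual code):

def tasks_to_send(tasks, cluster):
    # A task without a condition is always executed; otherwise it is
    # sent to Astute only when its condition holds for the cluster.
    selected = []
    for task in tasks:
        condition = task.get('condition')
        if condition is None or condition(cluster):
            selected.append(task)
    return selected

tasks = [
    {'name': 'configure-contrail',
     'condition': lambda cluster: cluster['net_provider'] == 'contrail'},
    {'name': 'deploy-base'},  # no condition: always executed
]
print(tasks_to_send(tasks, {'net_provider': 'neutron'}))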

Any comments on that?

Thanks,

[1]
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg40878.html


Re: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-24 Thread Anastasia Kuznetsova
Renat,

thanks for the response!
One more question:

 So that same workflows could be running in different environments


When asking about using a few environments, I meant within one workflow. For
example, I need to work with two DBs and I have two environments: env1 =
{'conn_str': 'ip', 'user': 'user', 'password': 'passwd'} and env2 =
{'conn_str': 'ip2', 'user': 'user2', 'password': 'passwd2'}. Will it be
possible to do something like this:

tasks:
  connect_first_db:
    action: std.sql conn_str={$.env1.conn_str} query={$.query}
    publish:
      records: $
  connect_second_db:
    action: std.sql conn_str={$.env2.conn_str} query={$.query}
    publish:
      records: $


Thanks,
Anastasia Kuznetsova

On Wed, Dec 24, 2014 at 2:19 PM, Renat Akhmerov rakhme...@mirantis.com
wrote:


 On 24 Dec 2014, at 14:06, Anastasia Kuznetsova akuznets...@mirantis.com
 wrote:

  1) How will the end user pass env variables to a workflow? Will you add
  one more optional parameter to the execution-create command?
  mistral execution-create wf wf_input wf_params wf_env
  If yes, then what will wf_env be, a json file?


  Yes. IMO it should be possible to specify either a string (the name of a
  previously stored environment) or a json file (a so-called ad-hoc
  environment).

  2) Returning to the first example:
 ...
  action: std.sql conn_str={$.env.conn_str} query={$.query}
 ...
  $.env - is it the name of an environment, or will it be registered syntax
  for accessing values from the env?


  So far we agreed that 'env' should not be a registered key. An environment
  (optionally specified) is just another storage of variables coming after the
  workflow context in a lookup chain. So if somewhere in a wf we have an
  expression $.something, then this "something" will be looked up first in
  the workflow context and, if it doesn't exist there, then looked up in the
  specified environment.
  But if we want to explicitly group a set of variables, we can use any key
  (except reserved ones such as __actions), for example "env".

  3) Can a user have multiple environments?


  Yes. That's one of the goals of introducing the concept of an environment:
  so that the same workflows can run in different environments (e.g. with
  different email settings, any kinds of passwords, etc.).


 Renat Akhmerov
 @ Mirantis Inc.



[openstack-dev] [Murano] Murano CI maintenance

2014-12-24 Thread Dmitry Teselkin
Hi,

I'm going to update devstack on the Murano CI server. This should take approx.
2-3 hrs if there are no obstacles.
During that period CI jobs will be disabled to avoid -1 scores with
NOT_REGISTERED status.

-- 
Thanks,
Dmitry Teselkin
Deployment Engineer
Mirantis
http://www.mirantis.com


Re: [openstack-dev] [Keystone] Bug in federation

2014-12-24 Thread John Dennis
Can't this be solved with a couple of environment variables? The two
key pieces of information needed are:

1) who authenticated the subject?

2) what authentication method was used?

There is already precedent for AUTH_TYPE; it's used in AJP to
initialize the authType property in a Java Servlet. AUTH_TYPE would
cover item 2. Numerous places in Apache already set AUTH_TYPE. Perhaps
there could be a convention that AUTH_TYPE could carry extra qualifying
parameters, much like HTTP headers do. The first token would be the
primary mechanism, e.g. saml, negotiate, x509, etc. For authentication
types that support multiple mechanisms (e.g. EAP, SAML, etc.) an extra
parameter would qualify the actual mechanism used. For SAML, that
qualifying extra parameter could be the value from AuthnContextClassRef.

Item 1 could be covered by a new environment variable AUTH_AUTHORITY.

If AUTH_TYPE is negotiate (i.e. kerberos) then the AUTH_AUTHORITY would
be the KDC. For SAML it would probably be taken from the
AuthenticatingAuthority element or the IdP entityID.

I'm not sure I see the need for other layers to receive the full SAML
assertion and validate the signature. One has to trust the server you're
running in. It's the same concept as trusting REMOTE_USER.
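
For instance, a consumer could check those two variables against its trust
configuration with something like the following (a sketch assuming the
proposed AUTH_AUTHORITY variable, which does not exist today, and the
AUTH_TYPE convention described above):

def check_federation_source(environ, trusted_sources):
    # AUTH_TYPE may carry qualifying parameters, header-style; the first
    # token is the primary mechanism (e.g. 'saml', 'negotiate', 'x509').
    mechanism = environ.get('AUTH_TYPE', '').split(';')[0].strip().lower()
    authority = environ.get('AUTH_AUTHORITY')  # hypothetical variable
    if (authority, mechanism) not in trusted_sources:
        raise Exception('untrusted %r via %r' % (authority, mechanism))

check_federation_source(
    {'AUTH_TYPE': 'saml; class=PasswordProtectedTransport',
     'AUTH_AUTHORITY': 'https://idp.example.org/idp'},
    trusted_sources={('https://idp.example.org/idp', 'saml')})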

-- 
John



Re: [openstack-dev] [Keystone] Bug in federation

2014-12-24 Thread David Chadwick
HI John

On 24/12/2014 14:15, John Dennis wrote:
 Can't this be solved with a couple of environment variables? The two
 key pieces of information needed are:
 
 1) who authenticated the subject?

AUTH_AUTHORITY or similar would stop wrong configuration of Apache if it
was set by the protocol plugin module from the protocol messages it
received. But it may take time for all plugin suppliers to adopt this
and implement it.

 
 2) what authentication method was used?

It's not the authentication method that is being questioned (it could be
un/pw, two-factor or any other method), but rather the federation
protocol that was used. So I don't think AUTH_TYPE is the right parameter
for what is required.

 
 There is already precedent for AUTH_TYPE; it's used in AJP to
 initialize the authType property in a Java Servlet. AUTH_TYPE would
 cover item 2. Numerous places in Apache already set AUTH_TYPE. Perhaps
 there could be a convention that AUTH_TYPE could carry extra qualifying
 parameters, much like HTTP headers do. The first token would be the
 primary mechanism, e.g. saml, negotiate, x509, etc. For authentication
 types that support multiple mechanisms (e.g. EAP, SAML, etc.) an extra
 parameter would qualify the actual mechanism used. For SAML, that
 qualifying extra parameter could be the value from AuthnContextClassRef.
 
 Item 1 could be covered by a new environment variable AUTH_AUTHORITY.
 
 If AUTH_TYPE is negotiate (i.e. kerberos) then the AUTH_AUTHORITY would
 be the KDC. For SAML it would probably be taken from the
 AuthenticatingAuthority element or the IdP entityID.
 
 I'm not sure I see the need for other layers to receive the full SAML
 assertion and validate the signature. One has to trust the server you're
 running in. It's the same concept as trusting REMOTE_USER.
 

Not quite. A badly configured Apache would not (should not) affect
REMOTE_USER, as this should be set by the authn plugin. Currently we have
nothing to check that Apache was correctly configured.

regards

David



Re: [openstack-dev] [cinder] ratio: created to attached

2014-12-24 Thread Tom Barron
On 12/22/14 4:48 PM, John Griffith wrote:
 On Sat, Dec 20, 2014 at 4:56 PM, Tom Barron t...@dyncloud.net wrote:
 Does anyone have real world experience, even data, to speak to the
 question: in an OpenStack cloud, what is the likely ratio of (created)
 cinder volumes to attached cinder volumes?
 
 Thanks,
 
 Tom Barron

 
 Honestly, I think the assumption is, and should be, 1:1 - perhaps not a 100%
 duty cycle, but certainly with periods of time when there is a 100% attach
 rate.
 

Certainly peak usage would be 1:1.  But that still allows for lots of
distributions - e.g. 1:1 2% of the time, 10:9 80%, 10:7 95% vs 1:1 90%, etc.

Some of the devs on this list also run clouds, so I'm curious if there
are data available indicating what kind of distribution of this ratio
they see in practice.






Re: [openstack-dev] Hierarchical Multitenancy

2014-12-24 Thread Deepak Shetty
Raildo,
   Thanks for putting up the blog; I really liked it, as it helps in
understanding how HMT works. I am interested to know more about how HMT can be
exploited by other OpenStack projects... esp. Cinder and Manila.
On Dec 23, 2014 5:55 AM, Morgan Fainberg morgan.fainb...@gmail.com
wrote:

 Hi Raildo,

 Thanks for putting this post together. I really appreciate all the work
 you guys have done (and continue to do) to get the Hierarchical
 Mulittenancy code into Keystone. It’s great to have the base implementation
 merged into Keystone for the K1 milestone. I look forward to seeing the
 rest of the development land during the rest of this cycle and what the
 other OpenStack projects build around the HMT functionality.

 Cheers,
 Morgan



 On Dec 22, 2014, at 1:49 PM, Raildo Mascena rail...@gmail.com wrote:

 Hello folks, my team and I developed the Hierarchical Multitenancy concept
 for Keystone in Kilo-1. But what is Hierarchical Multitenancy? What have we
 implemented? What are the next steps for Kilo?
 To answer these questions, I created a blog post:
 http://raildo.me/hierarchical-multitenancy-in-openstack/

 Any question, I'm available.

 --
 Raildo Mascena
 Software Engineer.
 Bachelor of Computer Science.
 Distributed Systems Laboratory
 Federal University of Campina Grande
 Campina Grande, PB - Brazil





Re: [openstack-dev] [Keystone] Bug in federation

2014-12-24 Thread David Chadwick
If I understand the bug fix correctly, it is firmly tying the URL to the
IDP to the mapping rule. But I think this is going in the wrong
direction for several reasons:

1. With Shibboleth, if you use a WAYF service, then any one of hundreds
of different federated IDPs may end up being used to authenticate the
user who is accessing OpenStack/Keystone. We don't want to have hundreds
of URLs; one is sufficient. Plus we don't know which IDP the user will
eventually choose, as this is decided by the WAYF service. So the
correct URL cannot be pre-chosen by the user.

2. With ABFAB, the IDP to be used is not known by the SP (Keystone)
until after authentication. This is because the realm is incorporated in
the user's ID (u...@real.com) and this is not visible to Keystone. So it
is not possible to have different URLs for different IDPs. They all have
to use the same URL.

So there should be one URL protecting Keystone, and when the response
comes from Apache, Keystone needs to be able to reliably determine

a) which IDP was used by the user
b) which protocol was used

and from this, choose which mapping rule to use

regards

david


On 24/12/2014 10:19, Marco Fargetta wrote:
 Hi All,
 
 this bug was already reported and fixed in two steps:
 
 https://bugs.launchpad.net/ossn/+bug/1390124
 
 
  The first step is in the documentation. There should also be an OSSN
  advisory for previous versions of OpenStack. The solution consists of
  configuring shibboleth to use different IdPs for different URLs.
 
 The second step, still in progress, is to include an ID in the IdP 
 configuration. My patch is under review here:
 
 https://review.openstack.org/#/c/142743/
 
 Let me know if it is enough to solve the issue in your case.
 
 Marco
 
 On 24 Dec 2014, at 10:08, David Chadwick d.w.chadw...@kent.ac.uk wrote:



 On 23/12/2014 21:56, Morgan Fainberg wrote:

 On Dec 23, 2014, at 1:08 PM, Dolph Mathews dolph.math...@gmail.com wrote:


 On Tue, Dec 23, 2014 at 1:33 PM, David Chadwick d.w.chadw...@kent.ac.uk wrote:

Hi Adam

On 23/12/2014 17:34, Adam Young wrote:
 On 12/23/2014 11:34 AM, David Chadwick wrote:
 Hi guys

 we now have the ABFAB federation protocol working with Keystone, using a
 modified mod_auth_kerb plugin for Apache (available from the project
 Moonshot web site). However, we did not change Keystone configuration
 from its original SAML federation configuration, when it was talking to
 SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code
 (which I believe had to be done for OpenID connect.) We simply replaced
 mod_shibboleth with mod_auth_kerb and talked to a completely different
 IDP with a different protocol. And everything worked just fine.

 Consequently Keystone is broken, since you can configure it to trust a
 particular IDP, talking a particular protocol, but Apache will happily
 talk to another IDP, using a different protocol, and Keystone cannot
 tell the difference and will happily accept the authenticated user.
 Keystone should reject any authenticated user who does not come from the
 trusted IDP talking the correct protocol. Otherwise there is no point in
 configuring Keystone with this information, if it is ignored by Keystone.
 The IDP and the Protocol should be passed from HTTPD in env vars. Can
 you confirm/deny that this is the case now?

What is passed from Apache is the 'PATH_INFO' variable, and it is
set to
the URL of Keystone that is being protected, which in our case is
/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth

There are also the following arguments passed to Keystone
'wsgiorg.routing_args': (<routes.util.URLGenerator object at
0x7ffaba339190>, {'identity_provider': u'KentProxy', 'protocol':
u'saml2'})

and

'PATH_TRANSLATED':

 '/var/www/keystone/main/v3/OS-FEDERATION/identity_providers/KentProxy/protocols/saml2/auth'

So Apache is telling Keystone that it has protected the URL that
Keystone has configured to be protected.

However, Apache has been configured to protect this URL with the ABFAB
protocol and the local Radius server, rather than the KentProxy
IdP and
the SAML2 protocol. So we could say that Apache is lying to Keystone,
and because Keystone trusts Apache, then Keystone trusts Apache's lies
and wrongly thinks that the correct IDP and protocol were used.

The only sure way to protect Keystone from a wrongly or mal-configured
Apache is to have end to end security, where Keystone gets a token
from
the IDP that it can validate, to prove that it is the trusted IDP that
it is talking to. In other words, if Keystone is given the original
signed SAML assertion from the IDP, it will know for definite that the
user was authenticated by the trusted IDP using the trusted protocol


 So the bug is a misconfiguration, not an actual bug. The goal was to
 trust and leverage httpd, 

Re: [openstack-dev] [Heat][oslo-incubator][oslo-log] Logging Unicode characters

2014-12-24 Thread Ben Nemec
On 12/24/2014 03:48 AM, Qiming Teng wrote:
 Hi,
 
  When trying to enable Heat stack names to use unicode strings, I am
  stuck on a weird behavior of logging.
 
 Suppose I have a stack name assigned some non-ASCII string, then when
 stack tries to log something here:
 
 heat/engine/stack.py:
 
  536 LOG.info(_LI('Stack %(action)s %(status)s (%(name)s): '
  537  '%(reason)s'),
  538  {'action': action,
  539   'status': status,
  540   'name': self.name,   # type(self.name)==unicode here
  541   'reason': reason})
 
 I'm seeing the following errors from h-eng session:
 
  Traceback (most recent call last):
    File "/usr/lib64/python2.6/logging/__init__.py", line 799, in emit
      stream.write(fs % msg.decode('utf-8'))
    File "/usr/lib64/python2.6/encodings/utf_8.py", line 16, in decode
      return codecs.utf_8_decode(input, errors, True)
  UnicodeEncodeError: 'ascii' codec can't encode characters in position
  114-115: ordinal not in range(128)
 
  Does this mean logging cannot handle Unicode correctly?  No.  I did the
  following experiments:
 
 $ cat logtest
 
 #!/usr/bin/env python
 
 import sys
 
 from oslo.utils import encodeutils
 from oslo import i18n
 
 from heat.common.i18n import _LI
 from heat.openstack.common import log as logging
 
 i18n.enable_lazy()
 
 LOG = logging.getLogger('logtest')
 logging.setup('heat')
 
 print('sys.stdin.encoding: %s' % sys.stdin.encoding)
 print('sys.getdefaultencoding: %s' % sys.getdefaultencoding())
 
 s = sys.argv[1]
 print('s is: %s' % type(s))
 
 stack_name = encodeutils.safe_decode(unis)

I think you may have a typo in your sample here because unis isn't
defined as far as I can tell.

In any case, I suspect this line is why your example works and Heat
doesn't.  I can reproduce the same error if I stuff some unicode data
into a unicode string without decoding it first:

>>> test = u'\xe2\x82\xac'
>>> test.decode('utf8')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'ascii' codec can't encode characters in position
0-2: ordinal not in range(128)
>>> test = '\xe2\x82\xac'
>>> test.decode('utf8')
u'\u20ac'

Whether that's what is going on here I can't say for sure though.
Trying to figure out unicode in Python usually gives me a headache. :-)
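
FWIW, a sketch of the safe pattern, using oslo's encodeutils as the logtest
script above already does (and assuming a UTF-8 terminal for the print):

from oslo.utils import encodeutils

raw = '\xe2\x82\xac'                 # utf-8 bytes (a Python 2 str)
text = encodeutils.safe_decode(raw)  # u'\u20ac', a real unicode object
print(u'stack name: %s' % text)      # now safe to interpolate and log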

 print('stack_name is: %s' % type(stack_name))
 
 # stack_name is unicode here
 LOG.error(_LI('stack name: %(name)s') % {'name': stack_name})
 
 $ ./logtest some Chinese here
 
 [tengqm@node1 heat]$ ./logtest 中文
 sys.stdin.encoding: UTF-8
 sys.getdefaultencoding: ascii
 s is: type 'str'
 stack_name is: type 'unicode'
 2014-12-24 17:51:13.799 29194 ERROR logtest [-] stack name: 中文
 
 It worked.  
 
 After spending more than one day on this, I'm seeking help from people
 here.  What's wrong with Unicode stack names here?
 
 Any hints are appreciated.
 
 Regards,
   - Qiming
 
 


Re: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-24 Thread W Chan
Trying to clarify a few things...

 2) Returning to the first example:
 ...
  action: std.sql conn_str={$.env.conn_str} query={$.query}
 ...
 $.env - is it the name of an environment, or will it be registered
 syntax for accessing values from the env?

I was actually thinking the environment will use the reserved word
env in the WF context.  The value for the env key will be the dict
supplied either by DB lookup by name, as a dict, or as JSON from the CLI.

The nested dict for __actions (and all other keys with double
underscores) is for special system purposes, in this case declaring
defaults for action inputs.  Similarly, __execution is for containing
runtime data for the WF execution.

 3) Can a user have multiple environments?

I don't think we intend to mix one or more environments in a WF
execution.  The key point was to be able to supply any named environment
at WF execution time.  So the WF author only needs to know the variables
will be under $.env.  If we allowed more than one environment in a WF
execution, each environment would need to be referred to by name
(i.e. in your example, env1 and env2).  We would then lose the ability
to swap in any named environment for different executions of the same
WF.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Bug in federation

2014-12-24 Thread Marco Fargetta

 On 24 Dec 2014, at 17:34, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 
 If I understand the bug fix correctly, it is firmly tying the URL, the
 IdP, and the mapping rule together. But I think this is going in the wrong
 direction for several reasons:
 
 1. With Shibboleth, if you use a WAYF service, then anyone from hundreds
 of different federated IDPs may end up being used to authenticate the
 user who is accessing OpenStack/Keystone. We don't want to have hundreds
 of URLs; one is sufficient. Plus we don't know which IDP the user will
 eventually choose, as this is decided by the WAYF service. So the
 correct URL cannot be pre-chosen by the user.
 


With the proposed configuration of Shibboleth, when you access the URL
you are redirected only to the IdP configured for that URL. Since a URL
is tied to only one IdP, there is no need for a WAYF.

Anyway, this is a change only in the documentation, and it was the first
fix because there was an agreement to provide a solution for Juno as
well with minimal change to the code.

The other fix I proposed, which is under review, requires an additional
parameter when you configure the IdP in OS-Federation. This accepts one
or more EntityIDs, so you can map the entities to the URL. It also
requires specifying the HTTP variable where you can get the entityID
(this is a parameter so it can be compatible with different SAML
plug-ins). If you do not specify these values, the behaviour is like the
current implementation; otherwise, given the list of entities and the
parameter, access to the URL is allowed only to the IdPs included in the
list and the others are rejected.

I tried to be as compatible with the current implementation as possible.

Is this the right direction? Could you comment on the review page? It
will be easier to understand there whether the patch needs extra work.
The link is:
https://review.openstack.org/#/c/142743/

 2. With ABFAB, the IDP to be used is not known by the SP (Keystone)
 until after authentication. This is because the realm is incorporated in
 the user's ID (u...@real.com) and this is not visible to Keystone. So it
 is not possible to have different URLs for different IDPs. They all have
 to use the same URL.
 
 So there should be one URL protecting Keystone, and when the response
 comes from Apache, Keystone needs to be able to reliably determine
 
 a) which IDP was used by the user
 b) which protocol was used
 
 and from this, choose which mapping rule to use
 


This would require a new design of OS-Federation, and you have proposed
several specs I was agreeing with. Nevertheless, it seems there was no
consensus in the community, so I think you have to find a way to
integrate ABFAB with the current model.

Is it possible to have a single mapping with many rules, with Keystone
choosing according to the information coming in after the
authentication? Maybe this requires some work on the mapping, but it
does not require changes in the overall architecture; just an idea.
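
To make that concrete, a single mapping whose rules key off a remote
attribute might look roughly like this (an illustrative sketch; the
Shib-Identity-Provider attribute name, the entityIDs, and the rule
layout should be checked against the actual deployment and mapping
engine):

# Sketch of one mapping with per-IdP rules; all values are
# illustrative assumptions, not a tested Keystone mapping.
mapping = {
    "rules": [
        {
            "remote": [
                {"type": "REMOTE_USER"},
                {"type": "Shib-Identity-Provider",
                 "any_one_of": ["https://idp1.example.org/idp"]},
            ],
            "local": [{"user": {"name": "{0}"}}],
        },
        {
            "remote": [
                {"type": "REMOTE_USER"},
                {"type": "Shib-Identity-Provider",
                 "any_one_of": ["https://idp2.example.org/idp"]},
            ],
            "local": [{"user": {"name": "{0}"}}],
        },
    ]
}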




 regards
 
 david
 
 
 On 24/12/2014 10:19, Marco Fargetta wrote:
 Hi All,
 
 this bug was already reported and fixed in two steps:
 
 https://bugs.launchpad.net/ossn/+bug/1390124
 
 
 The first step is in the documentation. There should also be an OSSN
 advisory for previous versions of OpenStack. The solution consists of
 configuring Shibboleth to use different IdPs for different URLs.
 
 The second step, still in progress, is to include an ID in the IdP 
 configuration. My patch is under review here:
 
 https://review.openstack.org/#/c/142743/
 
 Let me know if it is enough to solve the issue in your case.
 
 Marco
 
 On 24 Dec 2014, at 10:08, David Chadwick d.w.chadw...@kent.ac.uk wrote:
 
 
 
 On 23/12/2014 21:56, Morgan Fainberg wrote:
 
 On Dec 23, 2014, at 1:08 PM, Dolph Mathews dolph.math...@gmail.com
 mailto:dolph.math...@gmail.com wrote:
 
 
 On Tue, Dec 23, 2014 at 1:33 PM, David
 Chadwick d.w.chadw...@kent.ac.uk mailto:d.w.chadw...@kent.ac.uk wrote:
 
   Hi Adam
 
   On 23/12/2014 17:34, Adam Young wrote:
 On 12/23/2014 11:34 AM, David Chadwick wrote:
 Hi guys
 
 we now have the ABFAB federation protocol working with Keystone, using a
 modified mod_auth_kerb plugin for Apache (available from the project
 Moonshot web site). However, we did not change Keystone configuration
 from its original SAML federation configuration, when it was talking to
 SAML IDPs, using mod_shibboleth. Neither did we modify the Keystone code
 (which I believe had to be done for OpenID connect.) We simply replaced
 mod_shibboleth with mod_auth_kerb and talked to a completely different
 IDP with a different protocol. And everything worked just fine.
 
 Consequently Keystone is broken, since you can configure it to trust a
 particular IDP, talking a particular protocol, but Apache will happily
 talk to another IDP, using a different protocol, and Keystone cannot
 tell the difference and will happily accept the authenticated user.
 Keystone should reject any authenticated user who 

Re: [openstack-dev] [Mistral] Plans to load and performance testing

2014-12-24 Thread Boris Pavlovic
Guys,

I added a patch to infra:
https://review.openstack.org/#/c/143879/

It allows running Rally against Mistral in the gates.

Best regards,
Boris Pavlovic

On Mon, Dec 22, 2014 at 4:25 PM, Anastasia Kuznetsova 
akuznets...@mirantis.com wrote:

 Dmitry,

 Now I see that my comments were not so informative; I will try to describe
 the environment and scenarios in more detail.

 1) *1 api 1 engine 1 executor* means that there were 3 Mistral
 processes running on the same box
 2) the list-workbooks scenario was run when there were no workflow executions
 going on at the same time; I will take your comment into account and measure
 the time in that situation too, but I guess it will take more time; the
 question is how much more.
 3) 60% success means that only 60% of the runs of the 'list-workbooks'
 scenario were successful; at the moment I have observed
 only one type of error:
 error connecting to Rabbit: Error ConnectionError: ('Connection
 aborted.', error(104, 'Connection reset by peer'))
 4) we don't know the durability criteria of Mistral and under what load
 Mistral will 'die'; we want to define the threshold.

 P.S. Dmitry, if you have any ideas/scenarios which you want to test,
 please share them.

 On Sat, Dec 20, 2014 at 9:35 AM, Dmitri Zimine dzim...@stackstorm.com
 wrote:

 Anastasia, any start is a good start.

 *1 api 1 engine 1 executor, list-workbooks*

 What exactly does it mean: 1) is Mistral deployed on 3 boxes with one
 component per box, or are all three processes on the same box? 2) is the
 list-workbooks test running while workflow executions are going on? How many?
 What's the character of the load? 3) when it says 60% success, what exactly
 does that mean, what kind of failures? 4) what is the durability criterion,
 how long do we expect Mistral to withstand the load?

 Let's discuss this in detail at the next IRC meeting?

 Thanks again for getting this started.

 DZ.


 On Dec 19, 2014, at 7:44 AM, Anastasia Kuznetsova 
 akuznets...@mirantis.com wrote:

 Boris,

 Thanks for the feedback!

  But I believe that you should put a bigger load here:
  https://etherpad.openstack.org/p/mistral-rally-testing-results

 As I said, it is only the beginning, and I will increase the load and change
 its type.

  Also, concurrency should be at least 2-3 times bigger than 'times',
  otherwise it won't generate a proper load and you won't collect enough
  data for statistical analysis.

  Also, use the rps runner, which generates a more real-life load.
  Plus it would be nice to share the output of the 'rally task report'
  command as well.

 Thanks for the advice; I will consider it in further testing and
 reporting.

 Answering your question about using Rally for integration testing: as
 I mentioned in our load testing plan published on the wiki page, one of our
 final goals is to have a Rally gate in one of the Mistral repositories, so we
 are interested in it and I am already preparing the first commits to Rally.

 Thanks,
 Anastasia Kuznetsova

 On Fri, Dec 19, 2014 at 4:51 PM, Boris Pavlovic bpavlo...@mirantis.com
 wrote:

 Anastasia,

 Nice work on this. But I believe that you should put a bigger load here:
 https://etherpad.openstack.org/p/mistral-rally-testing-results

 Also, concurrency should be at least 2-3 times bigger than 'times',
 otherwise it won't generate a proper load and you won't collect enough
 data for statistical analysis.

 Also, use the rps runner, which generates a more real-life load.
 Plus it would be nice to share the output of the 'rally task report'
 command as well.
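
 For example, a task file along these lines would exercise the rps
 runner (a sketch only; the scenario name and the numbers are
 illustrative and should be checked against the actual Rally plugin):

     {
         "MistralWorkbooks.list_workbooks": [
             {
                 "runner": {"type": "rps", "times": 500, "rps": 10},
                 "context": {"users": {"tenants": 1, "users_per_tenant": 1}}
             }
         ]
     }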


 By the way what do you think about using Rally scenarios (that you
 already wrote) for integration testing as well?


 Best regards,
 Boris Pavlovic

 On Fri, Dec 19, 2014 at 2:39 PM, Anastasia Kuznetsova 
 akuznets...@mirantis.com wrote:

 Hello everyone,

 I want to announce that the Mistral team has started working on load and
 performance testing in this release cycle.

 Brief information about scope of our work can be found here:

 https://wiki.openstack.org/wiki/Mistral/Testing#Load_and_Performance_Testing

 First results are published here:
 https://etherpad.openstack.org/p/mistral-rally-testing-results

 Thanks,
 Anastasia Kuznetsova
 @ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Keystone] Bug in federation

2014-12-24 Thread Marco Fargetta
Hi John,
the problem is not to establish which variable has the correct information but
the association between IdP and URL. In OS-Federation you define an
authentication URL per IdP and protocol, and it is supposed to use the
specified IdP and protocol for authentication. Nevertheless, during the
authentication there is no code to check whether the IdP and protocol are the
ones specified for the URL, and in the Apache configuration for Juno there was
nothing on the Apache side to bind the IdP to the URL.

Therefore, you need to add something to OS-Federation to perform this control,
using the variables you are proposing or others.
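
As a rough illustration of that control, the federation auth path could
compare what Apache reports against what is registered for the endpoint
(a sketch only; AUTH_TYPE and AUTH_AUTHORITY follow John's proposal
below and are not an existing Keystone contract):

class Unauthorized(Exception):
    pass

def validate_authority(environ, expected_idp, expected_protocol):
    # Hypothetical check: both environment variable names are
    # assumptions from this thread, not implemented Keystone code.
    auth_type = environ.get('AUTH_TYPE', '').split(';')[0].strip()
    authority = environ.get('AUTH_AUTHORITY')
    if auth_type != expected_protocol or authority != expected_idp:
        raise Unauthorized('IdP/protocol do not match this endpoint')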

Marco

 On 24 Dec 2014, at 15:15, John Dennis jden...@redhat.com wrote:
 
 Can't this be solved with a couple of environment variables? The two
 key pieces of information needed are:
 
 1) who authenticated the subject?
 
 2) what authentication method was used?
 
 There is already precedent for AUTH_TYPE; it's used in AJP to
 initialize the authType property in a Java Servlet. AUTH_TYPE would
 cover item 2. Numerous places in Apache already set AUTH_TYPE. Perhaps
 there could be a convention that AUTH_TYPE could carry extra qualifying
 parameters, much like HTTP headers do. The first token would be the
 primary mechanism, e.g. saml, negotiate, x509, etc. For authentication
 types that support multiple mechanisms (e.g. EAP, SAML, etc.) an extra
 parameter would qualify the actual mechanism used. For SAML that
 qualifying extra parameter could be the value from AuthnContextClassRef.
 
 Item 1 could be covered by a new environment variable AUTH_AUTHORITY.
 
 If AUTH_TYPE is negotiate (i.e. kerberos) then the AUTH_AUTHORITY would
 be the KDC. For SAML it would probably be taken from the
 AuthenticatingAuthority element or the IdP entityID.
 
 I'm not sure I see the need for other layers to receive the full SAML
 assertion and validate the signature. One has to trust the server one is
 running in. It's the same concept as trusting REMOTE_USER.
 
 -- 
 John
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Eng. Marco Fargetta, PhD

Istituto Nazionale di Fisica Nucleare (INFN)
Catania, Italy

EMail: marco.farge...@ct.infn.it




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Application level HA via Heat

2014-12-24 Thread Clint Byrum
Excerpts from Renat Akhmerov's message of 2014-12-24 03:40:22 -0800:
 Hi
 
  Ok, I'm quite happy to accept this may be a better long-term solution, but
  can anyone comment on the current maturity level of Mistral?  Questions
  which spring to mind are:
  
  - Is the DSL stable now?
 
 You can think “yes”: although we keep adding new features, we do it in
 a backwards-compatible manner. I personally try to be very cautious about
 this.
 
  - What's the roadmap re incubation (there are a lot of TBD's here:
 https://wiki.openstack.org/wiki/Mistral/Incubation)
 
 Ooh yeah, this page is very, very obsolete, which is actually my fault
 because I didn't pay much attention to it after I heard all these rumors
 about the TC changing the whole approach around getting projects
 incubated/integrated.
 
 I think incubation readiness from a technical perspective is good (various
 style checks, procedures etc.); even if there's still something that we need
 to adjust, it should not be difficult or time consuming. The main question
 for the last half a year has been: what OpenStack program best fits Mistral?
 So far we've had two candidates: Orchestration and some new program (e.g.
 Workflow Service). However, nothing is decided yet on that.
 

It's probably worth re-thinking the discussion above given the governance
changes that are being worked on:

http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [driver] DB operations

2014-12-24 Thread Mike Perez
On 06:05 Sat 20 Dec, Duncan Thomas wrote:
 No, I mean that if drivers are going to access database, then they should
 do it via a defined interface that limits what they can do to a sane set of
 operations. I'd still prefer that they didn't need extra access beyond the
 model update, but I don't know if that is possible.
 
 Duncan Thomas
 On Dec 19, 2014 6:43 PM, Amit Das amit@cloudbyte.com wrote:
 
  Thanks Duncan.
  Do you mean helper methods in the specific driver class?
  On 19 Dec 2014 14:51, Duncan Thomas duncan.tho...@gmail.com wrote:
 
  So our general advice has historically been 'drivers should not be
  accessing the db directly'. I haven't had a chance to look at your driver
  code yet, I've been on vacation, but my suggestion is that if you
  absolutely must store something in the admin metadata rather than somewhere
  that is covered by the model update (generally provider location and
  provider auth), then writing some helper methods that wrap the context bump
  and db call would be better than accessing it directly from the driver.
 
  Duncan Thomas
  On Dec 18, 2014 11:41 PM, Amit Das amit@cloudbyte.com wrote:

I've expressed in past reviews that we should have an interface that limits
drivers' access to the database [1], but received quite a bit of push
back in Cinder. I recommend we stick to what has been decided; otherwise,
Amit, you should spend some time reading the history of this issue [2] from
previous meetings and restart the discussion in the next meeting [3]. I'm not
discouraging it, but this has been brought up at least a couple of
times now and it ends up with the same answer from the community.

[1] - https://review.openstack.org/#/c/107693/14
[2] - 
http://eavesdrop.openstack.org/meetings/cinder/2014/cinder.2014-10-15-16.00.log.html#l-186
[3] - https://wiki.openstack.org/wiki/CinderMeetings
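
For reference, the kind of wrapper Duncan describes might look something
like this (a hedged sketch; Cinder exposes no such blessed interface
today, which is exactly the point under discussion):

# Hypothetical helper, not an existing Cinder API: it wraps the context
# elevation (the "context bump") and the db call so a driver never
# touches the db module directly.
from cinder import db

def update_volume_admin_metadata(context, volume_id, metadata):
    admin_context = context.elevated()  # drivers get a plain context
    return db.volume_admin_metadata_update(admin_context, volume_id,
                                           metadata, delete=False)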

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Docs] Move fuel-web/docs to fuel-docs

2014-12-24 Thread Christopher Aedo
I think it's worth pursuing these efforts to include the
auto-generated doc components in the fuel-docs build process.  The
additional dependencies required to build nailgun are not so
unreasonable, and the preparation of the build environment has already
been put into a single script.

From the CI perspective, where do the CI slaves start, so to speak?
I do not think we are starting from a bare machine, installing the OS,
installing dependencies, and then attempting a build of fuel-docs,
right?  I'm wondering here whether it's reasonable to start from a
snapshot where the machine has the necessary dependencies (it must
start from some snapshotted point, otherwise just testing a change to a
single line would take unreasonably long).

From your steps:
 6) Implement additional make target in fuel-docs
 to download and build autodocs from fuel-web
 repo as a separate chapter.

If we add this as an additional make target, then the environment
would have to support the necessary dependencies anyway, right?  If
that's the case, then this would have to be testable no matter what,
right?  Or is it suggested that this step would not be tested, and
would essentially stand on its own?

-Christopher

On Tue, Dec 23, 2014 at 9:20 AM, Aleksandra Fedorova
afedor...@mirantis.com wrote:
 Blueprint
 https://blueprints.launchpad.net/fuel/+spec/fuel-dev-docs-merge-fuel-docs
 suggests moving all documentation from fuel-web to the fuel-docs
 repository.

 While I agree that moving Developer Guide to fuel-docs is a good idea,
 there is an issue with autodocs which currently blocks the whole
 process.

 If we move the dev docs to fuel-docs as suggested by Christopher in [1], we
 will make it impossible to build fuel-docs without cloning the fuel-web
 repository and installing all nailgun dependencies into the current
 environment. And this is bad from both the CI and the user point of view.

 I think we should keep the fuel-docs repository self-contained, i.e. one
 should be able to build the docs without any external code. We can add a
 switch or a separate make target to build 'addons' to this documentation
 when explicitly requested, but it shouldn't be the default behaviour.

 Thus I think we need to split the documentation in the fuel-web repository
 and move the static part to fuel-docs, but keep the dynamic,
 auto-generated part in the fuel-web repo. See patch [2].

 Then to move the docs from fuel-web to fuel-docs we need to perform the
 following steps:

 1) Merge/abandon all docs-related patches to fuel-web, see full list [3]
 2) Merge updated patch [2] which removes docs from fuel-web repo,
 leaving autogenerated api docs only.
 3) Disable docs CI for fuel-web
 4) Add building of api docs to fuel-web/run_tests.sh.
 5) Update fuel-docs repository with new data as in patch [4] but
 excluding anything related to autodocs.
 6) Implement additional make target in fuel-docs to download and build
 autodocs from fuel-web repo as a separate chapter.
 7) Add this make target in fuel-docs CI.


 [1] https://review.openstack.org/#/c/124551/
 [2] https://review.openstack.org/#/c/143679/
 [3] 
 https://review.openstack.org/#/q/project:stackforge/fuel-web+status:open+file:%255Edoc.*,n,z
 [4] https://review.openstack.org/#/c/125234/

 --
 Aleksandra Fedorova
 Fuel Devops Engineer
 bookwar

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][VMware NSX CI] VMware NSX CI fails and don't recheck

2014-12-24 Thread Hirofumi Ichihara
Hi,

My patch (https://review.openstack.org/#/c/124011/) received a Verified -1
from VMware NSX CI.
But my patch isn't related to VMware, so I added the comment
“vmware-recheck-patch” as instructed by the VMware NSX CI comment.
However, VMware NSX CI doesn't recheck.

I don't know whether the recheck keyword was wrong or the CI is broken.

Could someone help me?

Thanks,
Hirofumi
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Converging discussions on WF context and WF/task/action inputs

2014-12-24 Thread Renat Akhmerov

 On 24 Dec 2014, at 23:37, W Chan m4d.co...@gmail.com wrote:
  2) Returning to the first example:
  ...
   action: std.sql conn_str={$.env.conn_str} query={$.query}
  ...
  $.env - is it a name of environment or is it a registered syntax for
  getting access to values from env?
 I was actually thinking the environment will use the reserved word env in
 the WF context.  The value for the env key will be the dict supplied either
 by DB lookup by name, directly as a dict, or as JSON from the CLI.
Ok, probably here's the place where I didn't understand you before. I thought
“env” here was just an arbitrary key that users themselves may want to have to
just group some variables under a single umbrella. What you're saying is that
whatever is under “$.env” is the exact same environment that we passed
when we started the workflow? If yes, then it definitely makes sense to me (it
just allows explicit access to the environment, rather than the implicit
variable lookup). Please confirm.

One thing that I strongly suggest is that we clearly define all reserved keys
like “env”, “__actions” etc. I think it'd be better if they all started with
the same prefix, for example, a double underscore.
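
To illustrate the shape being discussed, the WF context would then carry
its reserved keys alongside user data roughly like this (a sketch of the
proposal, not settled Mistral behaviour; the '__env' spelling assumes the
double-underscore convention suggested above is adopted):

# Hypothetical context layout following this thread's proposal.
context = {
    'query': 'SELECT 1',                   # regular user variable
    '__env': {'conn_str': 'mysql://...'},  # snapshot of the execution
                                           # environment
    '__actions': {                         # system: action input defaults
        'std.sql': {'conn_str': 'mysql://...'},
    },
    '__execution': {'id': 'abc123'},       # system: runtime data
}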

 The nested dict for __actions (and all other keys with a double underscore)
 is for special system purposes, in this case declaring defaults for action
 inputs.  It is similar to __execution, which contains runtime data for the
 WF execution.

Yes, that's clear.


Renat Akhmerov
@ Mirantis Inc.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] Plans to load and performance testing

2014-12-24 Thread Renat Akhmerov
Thanks Boris!

Renat Akhmerov
@ Mirantis Inc.


 On 24 Dec 2014, at 23:54, Boris Pavlovic bpavlo...@mirantis.com wrote:
 
 Guys, 
 
 I added a patch to infra:
 https://review.openstack.org/#/c/143879/
 
 It allows running Rally against Mistral in the gates.
 
 Best regards,
 Boris Pavlovic 
 
 [...]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [heat] Application level HA via Heat

2014-12-24 Thread Renat Akhmerov
Thanks Clint,

I actually didn't see this before (like I said, just rumors), so I need to
read it carefully.

Renat Akhmerov
@ Mirantis Inc.



 On 25 Dec 2014, at 00:18, Clint Byrum cl...@fewbar.com wrote:
 
 Excerpts from Renat Akhmerov's message of 2014-12-24 03:40:22 -0800:
  [...]
 
 It's probably worth re-thinking the discussion above given the governance
 changes that are being worked on:
 
 http://governance.openstack.org/resolutions/20141202-project-structure-reform-spec.html
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][VMware NSX CI] VMware NSX CI fails and don't recheck

2014-12-24 Thread Gary Kotton
Hi,
We have a few CI issues. We are working on them at the moment. I hope that we 
get to the bottom of this soon.
Thanks
Gary

From: Hirofumi Ichihara ichihara.hirof...@lab.ntt.co.jp
Reply-To: OpenStack List openstack-dev@lists.openstack.org
Date: Thursday, December 25, 2014 at 5:41 AM
To: OpenStack List openstack-dev@lists.openstack.org
Subject: [openstack-dev] [devstack][VMware NSX CI] VMware NSX CI fails and
don't recheck

[...]
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Nominating Kate Chernova for murano-core

2014-12-24 Thread Serg Melikyan
I'd like to propose that we add Kate Chernova to murano-core.

Kate has been an active member of our community for more than a year; she is
a regular participant in our IRC meetings and maintains a good score as a
contributor:

http://stackalytics.com/report/users/efedorova

Please vote by replying to this thread. As a reminder of your options, +1
votes from 5 cores are sufficient; a -1 is a veto.
-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][VMware NSX CI] VMware NSX CI fails and don't recheck

2014-12-24 Thread Hirofumi Ichihara
Hi Gary,

Thank you for your response.
I understand. I’m expecting good news.

Thanks,
Hirofumi

On 25 Dec 2014, at 15:57, Gary Kotton gkot...@vmware.com wrote:

 Hi,
 We have a few CI issues. We are working on them at the moment. I hope that we 
 get to the bottom of this soon.
 Thanks
 Gary
 
  [...]

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev