Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2014-12-17 Thread Eduard Matei
Thanks, i'll have a look.

Eduard

On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy ramy.asse...@hp.com wrote:

  Manually running the script requires a few environment settings. Take a
 look at the README here:

 https://github.com/openstack-infra/devstack-gate



 Regarding cinder, I’m using this repo to run our cinder jobs (fork from
 jaypipes).

 https://github.com/rasselin/os-ext-testing



 Note that this solution doesn’t use the Jenkins gerrit trigger plugin,
 but zuul.



 There’s a sample job for cinder here. It’s in Jenkins Job Builder format.


 https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample



 You can ask more questions on freenode IRC in #openstack-cinder (IRC nick:
 asselin).



 Ramy



 *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
 *Sent:* Tuesday, December 16, 2014 12:41 AM
 *To:* Bailey, Darragh
 *Cc:* OpenStack Development Mailing List (not for usage questions);
 OpenStack
 *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
 setting up CI



 Hi,



 Can someone point me to some working documentation on how to setup third
 party CI? (joinfu's instructions don't seem to work, and manually running
 devstack-gate scripts fails:

 Running gate_hook

 Job timeout set to: 163 minutes

 timeout: failed to run command 
 ‘/opt/stack/new/devstack-gate/devstack-vm-gate.sh’: No such file or directory

 ERROR: the main setup script run by this job failed - exit code: 127

 please look at the relevant log files to determine the root cause

 Cleaning up host

 ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)

  Build step 'Execute shell' marked build as failure.



 I have a working Jenkins slave with devstack and our internal libraries, and I
 have the Gerrit Trigger Plugin working and triggering on patchset creation; I
 just need the actual job contents so that it can comment with the
 test results.



 Thanks,



 Eduard



 On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei 
 eduard.ma...@cloudfounders.com wrote:

  Hi Darragh, thanks for your input



 I double checked the job settings and fixed it:

 - build triggers is set to Gerrit event

 - Gerrit trigger server is Gerrit (configured from Gerrit Trigger Plugin
 and tested separately)

 - Trigger on: Patchset Created

 - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches:
 Type: Path, Pattern: ** (was Type Plain on both)

 Now the job is triggered by commit on openstack-dev/sandbox :)



 Regarding the Query and Trigger Gerrit Patches page, I found my patch using the
 query: status:open project:openstack-dev/sandbox change:139585, and I can
 trigger it manually and it executes the job.



 But I still have the problem: what should the job do? It doesn't actually
 do anything; it doesn't run tests or comment on the patch.

 Do you have an example of job?



 Thanks,

 Eduard



 On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh dbai...@hp.com wrote:

 Hi Eduard,


I would check the trigger settings in the job, particularly which type
of pattern matching is being used for the branches. I've found it tends to be
the spot that catches most people out when configuring jobs with the
Gerrit Trigger plugin. If you're looking to trigger against all branches
then you want Type: Path and Pattern: ** appearing in the UI.

If you have sufficient access, the 'Query and Trigger Gerrit
Patches' page, accessible from the main view, will make it easier to
confirm that your Jenkins instance can actually see changes in gerrit
for the given project (which should mean that it can see the
corresponding events as well). You can also use the same page to
re-trigger PatchsetCreated events to see if you've set the patterns
on the job correctly.
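As a rough illustration of the distinction Darragh describes (the real plugin uses Ant-style path matching; `matches` and the fnmatch approximation below are assumptions for this sketch, not the plugin's code):

```python
from fnmatch import fnmatch

def matches(pattern_type, pattern, value):
    if pattern_type == "Plain":
        return pattern == value              # exact string comparison
    if pattern_type == "Path":
        # Crude stand-in for Ant-style '**' matching.
        return fnmatch(value, pattern.replace("**", "*"))
    raise ValueError(pattern_type)

# With Type: Plain, a pattern of '**' never matches a real branch name,
# which is the usual configuration mistake:
print(matches("Plain", "**", "master"))  # False
print(matches("Path", "**", "master"))   # True
```

This is why switching the branch pattern from Type: Plain to Type: Path made Eduard's job start triggering.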

 Regards,
 Darragh Bailey

 Nothing is foolproof to a sufficiently talented fool - Unknown

 On 08/12/14 14:33, Eduard Matei wrote:
  Resending this to dev ML as it seems i get quicker response :)
 
  I created a job in Jenkins, added as Build Trigger: Gerrit Event:
  Patchset Created, chose as server the configured Gerrit server that
  was previously tested, then added the project openstack-dev/sandbox
  and saved.
  I made a change on dev sandbox repo but couldn't trigger my job.
 
  Any ideas?
 
  Thanks,
  Eduard
 
  On Fri, Dec 5, 2014 at 10:32 AM, Eduard Matei
  eduard.ma...@cloudfounders.com
  mailto:eduard.ma...@cloudfounders.com wrote:
 
  Hello everyone,
 
  Thanks to the latest changes to the creation of service accounts
  process we're one step closer to setting up our own CI platform
  for Cinder.
 
  So far we've got:
  - Jenkins master (with Gerrit plugin) and slave (with DevStack and
  our storage solution)
  - Service account configured and tested (can manually connect to
  review.openstack.org http://review.openstack.org and get events
  and publish comments)
 
  Next step would be to set up a job to do the actual testing, 

[openstack-dev] [Neutron][Nova][DHCP] Use case of enable_dhcp option when creating a network

2014-12-17 Thread Padmanabhan Krishnan
  Hello,
I have a question regarding the enable_dhcp option when creating a network.

When a VM is attached to a network where enable_dhcp is False, I understand 
that the DHCP namespace is not created for the network and the VM does not get 
any IP address after it boots up and sends a DHCP Discover. But I also see that 
the Neutron port is filled with a fixed IP value from the network pool even 
though there's no DHCP associated with the subnet. So, for such VMs, does one 
need to statically configure the IP address with whatever Neutron has allocated 
from the pool?
What exactly is the use case of the above?
I do understand that for providing public network access to VMs, an external 
network is generally created with the enable_dhcp option set to False. Is it 
only for this purpose?
I was thinking of the case of external/provider DHCP servers from which VMs can 
get their IP addresses, when one does not want to use the L3 agent/DVR. In such 
cases, one may want to disable DHCP when creating networks. Isn't this a 
use-case?
Appreciate any response or corrections with my above understanding.

Thanks,
Paddu
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Nova][DHCP] Use case of enable_dhcp option when creating a network

2014-12-17 Thread Pasquale Porreca
Just yesterday I asked a similar question on the ML; this is the answer I got:

In Neutron, IP address management and distribution are separate concepts.
IP addresses are assigned to ports even when DHCP is disabled. That IP
address is indeed used to configure anti-spoofing rules and security groups.

http://lists.openstack.org/pipermail/openstack-dev/2014-December/053069.html

On 12/17/14 09:15, Padmanabhan Krishnan wrote:
 Hello,
 I have a question regarding the enable_dhcp option when creating a
 network.

 When a VM is attached to  a network where enable_dhcp is False, I
 understand that the DHCP namespace is not created for the network and
 the VM does not get any IP address after it boots up and sends a DHCP
 Discover.
 But, I also see that the Neutron port is filled with the fixed IP
 value from the network pool even though there's no DHCP associated
 with the subnet. 
 So, for such VM's, does one need to statically configure the IP
 address with whatever Neutron has allocated from the pool?

 What exactly is the use case of the above? 

 I do understand that for providing public network access to VM's, an
 external network is generally created with enable-dhcp option set to
 False. Is it only for this purpose?

 I was thinking of a case of external/provider DHCP servers from where
 VM's can get their IP addresses and when one does not want to use L3
 agent/DVR. In such cases, one may want to disable DHCP when creating
 networks.  Isn't this a use-case?

 Appreciate any response or corrections with my above understanding.

 Thanks,
 Paddu 




-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr



Re: [openstack-dev] [nova] How should libvirt pools work with distributed storage drivers?

2014-12-17 Thread Peter Penchev
On Mon, Dec 1, 2014 at 9:52 PM, Solly Ross sr...@redhat.com wrote:
 Hi Peter,

 Right.  So just one more question now - seeing as the plan is to
 deprecate non-libvirt-pool drivers in Kilo and then drop them entirely
 in L, would it still make sense for me to submit a spec today for a
 driver that would keep the images in our own proprietary distributed
 storage format?  It would certainly seem to make sense for us and for
 our customers right now and in the upcoming months - a bird in the
 hand and so on; and we would certainly prefer it to be upstreamed in
 OpenStack, since subclassing imagebackend.Backend is a bit difficult
 right now without modifying the installed imagebackend.py (and of
 course I meant Backend when I spoke about subclassing DiskImage in my
 previous message).  So is there any chance that such a spec would be
 accepted for Kilo?

 It doesn't hurt to try submitting a spec.  On the one hand, the driver
 would come into life (so to speak) as deprecated, which seems kind
 of silly (if there's no libvirt support at all for your driver, you
 couldn't just subclass the libvirt storage pool backend).  On the
 other hand, it's preferable to have code be upstream, and since you
 don't have a libvirt storage driver yet, the only way to have support
 is to use a legacy-style driver.

Thanks for the understanding!

 Personally, I wouldn't mind having a new legacy driver as long as
 you're committed to getting your storage driver into libvirt, so that
 we don't have to do extra work when the time comes to remove the legacy
 drivers.

Yes, that's very reasonable, and we are indeed committed to getting
our work into libvirt proper.

 If you do end up submitting a spec, keep in mind that, for ease of
 migration to the libvirt storage pool driver, you should have volume names of
 '{instance_uuid}_{disk_name}' (similarly to the way that LVM does it).

 If you have a spec or some code, I'd be happy to give some feedback,
 if you'd like (post it on Gerrit as WIP, or something like that).

Well, I might have mentioned this earlier, seeing as the Kilo-1 spec
deadline is almost upon us, but the spec itself is at
https://review.openstack.org/137830/ - it would be great if you could
spare a minute to look at it.  Thanks in advance!

G'luck,
Peter



Re: [openstack-dev] Not able to locate tests for glanceclient

2014-12-17 Thread Flavio Percoco

On 17/12/14 10:47 +0530, Abhishek Talwar/HYD/TCS wrote:

Hi All,

I am currently working on a fix for glanceclient on the stable Juno release,
but I am not able to locate the tests in glanceclient. Could you help me locate
them, as I need to add a unit test?
The current path for glanceclient is /usr/local/lib/python2.7/dist-packages/
glanceclient.


This is because glanceclient tests live outside the glanceclient
package.

https://github.com/openstack/python-glanceclient/tree/0cdc947bf998c7f00a23c11bf1be4bc5929b7803/tests

Cheers,
Flavio




Thanks and Regards
Abhishek Talwar








--
@flaper87
Flavio Percoco




Re: [openstack-dev] [Mistral] RFC - Action spec CLI

2014-12-17 Thread Dmitri Zimine
The problem with the existing syntax is that it is not defined: there are no 
docs on inlining complex variables [*], and we haven’t tested it for anything 
more than the simplest cases: 
https://github.com/stackforge/mistral/blob/master/mistral/tests/unit/workbook/v2/test_dsl_specs_v2.py#L114.
 I will be surprised if anyone has figured out how to provide a complex object 
as an inline parameter.

Do you think regex is the right approach for parsing arbitrary key-values where 
the values are arbitrary JSON structures? Will it work with something like 
workflow: wf2 object_list=[{"url": 
"http://{$hostname}.example.com:8080?x=a&y={$.b}"}, 33, null, {{$.key}: 
[{$.value1}, {$.value2}]}]
How many tests should we write to be confident we covered all cases? I share 
Lakshmi’s concern that it is fragile and maintaining it reliably is difficult. 

But back to the original question; it’s about requirements, not implementation. 
My preference is “option 3”, “make it work as it is now”. But if that’s too 
hard I am ok to compromise. 
Then option 2, as it resembles option 3 and the YAML/JSON conversion makes 
complete sense, at the expense of quoting the objects. A slight change, not 
significant. 
Option 1 introduces a new syntax; although familiar to CLI users, I think it’s 
a bit out of place in a YAML definition. 
Option 4 is a no go :)

DZ. 

[*] “there are no docs on this” - I volunteer to fix this.

On Dec 16, 2014, at 9:48 PM, Renat Akhmerov rakhme...@mirantis.com wrote:

 Ok, I would prefer to spend some time and think how to improve the existing 
 reg exp that we use to parse key-value pairs. We definitely can’t just drop 
 support of this syntax and can’t even change it significantly since people 
 already use it.
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 17 Dec 2014, at 07:28, Lakshmi Kannan laks...@lakshmikannan.me wrote:
 
 Apologies for the long email. If this fancy email doesn’t render correctly 
 for you, please read it here: 
 https://gist.github.com/lakshmi-kannan/cf953f66a397b153254a
 
 I was looking into fixing bug: 
 https://bugs.launchpad.net/mistral/+bug/1401039. My idea was to use shlex to 
 parse the string. This actually would work for anything that is supplied in 
 the linux shell syntax. Problem is this craps out when we want to support 
 complex data structures such as arrays and dicts as arguments. I did not 
 think we supported a syntax to take in complex data structures in a one line 
 format. Consider for example:
 
   task7:
 for-each:
   vm_info: $.vms
 workflow: wf2 is_true=true object_list=[1, null, str]
 on-complete:
   - task9
   - task10
 Specifically
 
 wf2 is_true=true object_list=[1, null, str]
 shlex will not handle this correctly because object_list is an array. Same 
 problem with dict.
 
 There are 3 potential options here:
 
 Option 1
 
 1) Provide a spec for specifying lists and dicts like so:
 list_arg=1,2,3 and dict_arg=h1:h2,h3:h4,h5:h6
 
 shlex will handle this fine, but there needs to be code that converts the 
 argument values to appropriate data types based on a schema (ActionSpec 
 should probably have a parameter schema in jsonschema). This is doable.
 
 wf2 is_true=true object_list=1,null,str
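A minimal sketch of what Option 1's parsing could look like (illustrative only; `coerce`, `parse_action_line`, and the schema keys are assumptions, not Mistral code):

```python
import shlex

def coerce(value, kind):
    """Convert a raw string using the declared (assumed) parameter schema."""
    if kind == "bool":
        return value == "true"
    if kind == "list":
        # Option 1 encodes lists as comma-separated scalars.
        return [None if item == "null" else item for item in value.split(",")]
    return value  # default: plain string

def parse_action_line(line, schema):
    tokens = shlex.split(line)
    name, params = tokens[0], {}
    for token in tokens[1:]:
        key, _, raw = token.partition("=")
        params[key] = coerce(raw, schema.get(key, "str"))
    return name, params

schema = {"is_true": "bool", "object_list": "list"}
print(parse_action_line("wf2 is_true=true object_list=1,null,str", schema))
# ('wf2', {'is_true': True, 'object_list': ['1', None, 'str']})
```

Note the schema is what makes the flat comma syntax unambiguous; without it every value would stay a string.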
 Option 2
 
 2) Allow JSON strings to be used as arguments so we can json.loads them (if 
 it fails, use them as simple string). For example, with this approach, the 
 line becomes
 
 wf2 is_true=true object_list=[1, null, str]
 This would pretty much resemble 
 http://stackoverflow.com/questions/7625786/type-dict-in-argparse-add-argument
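A sketch of Option 2 under the same caveats (helper names are hypothetical; complex values are shell-quoted so shlex keeps each key=value pair as one token):

```python
import json
import shlex

def parse_value(raw):
    """Try JSON first; fall back to treating the value as a plain string."""
    try:
        return json.loads(raw)
    except ValueError:
        return raw

def parse_action_line(line):
    tokens = shlex.split(line)
    name, params = tokens[0], {}
    for token in tokens[1:]:
        key, _, raw = token.partition("=")
        params[key] = parse_value(raw)
    return name, params

line = 'wf2 is_true=true object_list=\'[1, null, "str"]\''
print(parse_action_line(line))
# ('wf2', {'is_true': True, 'object_list': [1, None, 'str']})
```

The cost mentioned above is visible here: the list value has to be quoted to survive shlex tokenization.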
 
 Option 3
 
 3) Keep the spec as is and try to parse it. I have no idea how we can do 
 this reliably. We need a more rigorous lexer. This syntax doesn’t translate 
 well when we want to build a CLI. Linux shells cannot support this syntax 
 natively. This means people would have to use shlex syntax and a translation 
 would need to happen in the CLI layer. This will lead to inconsistency: the 
 CLI would use one syntax and the action input line in the workflow definition 
 would use another. We should try and avoid this.
 
 Option 4
 
 4) Completely drop support for this fancy one line syntax in workflow. This 
 is probably the least desired option.
 
 My preference
 
 Looking at the options, I like option 2 / option 1 / option 4 / option 3, in 
 order of preference.
 
 With some documentation, we can tell people why this is hard. People will 
 also grok it because they are already familiar with CLI limitations in Linux.
 
 Thoughts?
 

Re: [openstack-dev] [Nova] RemoteError: Remote error: OperationalError (OperationalError) (1048, Column 'instance_uuid' cannot be null)

2014-12-17 Thread Accela Zhao (bigzhao)

I have formatted the messy clutter in the middle of your trace log.

Traceback (most recent call last):
  File /usr/lib/python2.7/site-packages/nova/conductor/manager.py, line
400, in _object_dispatch
return getattr(target, method)(context, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/nova/objects/base.py, line 204,
in wrapper
return fn(self, ctxt, *args, **kwargs)
  File /usr/lib/python2.7/site-packages/nova/objects/instance.py, line
500, in save
columns_to_join=_expected_cols(expected_attrs))
  File /usr/lib/python2.7/site-packages/nova/db/api.py, line 746, in
instance_update_and_get_original
columns_to_join=columns_to_join)
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line
143, in wrapper
return f(*args, **kwargs)
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line
2289, in instance_update_and_get_original
columns_to_join=columns_to_join)
  File /usr/lib/python2.7/site-packages/nova/db/sqlalchemy/api.py, line
2380, in _instance_update
session.add(instance_ref)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py,
line 470, in __exit__
self.rollback()
  File 
/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py, line
60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py,
line 467, in __exit__
self.commit()
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py,
line 377, in commit
self._prepare_impl()
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py,
line 357, in _prepare_impl
self.session.flush()
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py,
line 1919, in flush
self._flush(objects)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py,
line 2037, in _flush
transaction.rollback(_capture_exception=True)
  File 
/usr/lib64/python2.7/site-packages/sqlalchemy/util/langhelpers.py, line
60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/session.py,
line 2001, in _flush
flush_context.execute()
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py,
line 372, in execute
rec.execute(self)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/unitofwork.py,
line 526, in execute
uow
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py,
line 60, in save_obj
mapper, table, update)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/orm/persistence.py,
line 518, in _emit_update_statements
execute(statement, params)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py,
line 729, in execute
return meth(self, multiparams, params)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/sql/elements.py,
line 321, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py,
line 826, in _execute_clauseelement
compiled_sql, distilled_params
  File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py,
line 958, in _execute_context
context)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py,
line 1156, in _handle_dbapi_exception
util.raise_from_cause(newraise, exc_info)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/util/compat.py,
line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/base.py,
line 951, in _execute_context
context)
  File /usr/lib64/python2.7/site-packages/sqlalchemy/engine/default.py,
line 436, in do_execute
cursor.execute(statement, parameters)
  File /usr/lib64/python2.7/site-packages/MySQLdb/cursors.py, line 174,
in execute
self.errorhandler(self, exc, value)
  File /usr/lib64/python2.7/site-packages/MySQLdb/connections.py, line
36, in defaulterrorhandler
raise errorclass, errorvalue
OperationalError: (OperationalError) (1048, Column 'instance_uuid' cannot
be null) 'UPDATE instance_extra SET updated_at=%s, instance_uuid=%s WHERE
instance_extra.id = %s' (datetime.datetime(2014, 12, 12, 9, 16, 52,
434376), None, 5L)

Looks like your new instance doesn't have a uuid, which causes
_allocate_network to fail. The instance uuid should have been allocated in
nova/compute/api.py::_provision_instances by default.

Thanks  Regards,
--
Accela Zhao



From:  joejiang ifz...@126.com
Reply-To:  OpenStack Development Mailing List (not for usage questions)
openstack-dev@lists.openstack.org
Date:  Friday, December 12, 2014 at 5:36 PM
To:  openstack-dev@lists.openstack.org
openstack-dev@lists.openstack.org, openst...@lists.openstack.org
openst...@lists.openstack.org
Subject:  [openstack-dev] [Nova] RemoteError: Remote error:
OperationalError (OperationalError) (1048, Column 'instance_uuid' cannot
be null)


Hi folks,
when i launch instance use cirros image in the 

Re: [openstack-dev] [Fuel] Image based provisioning

2014-12-17 Thread Vladimir Kozhukalov
In the image based case we need either to update the image or to run yum
update/apt-get upgrade right after first boot (the second option partly
devalues the advantages of the image based scheme). Besides, we are planning to
re-implement the image build script so as to be able to build images on the
master node (but unfortunately 6.1 is not a realistic estimate for that).

Vladimir Kozhukalov

On Wed, Dec 17, 2014 at 5:03 AM, Mike Scherbakov mscherba...@mirantis.com
wrote:

 Dmitry,
 as part of 6.1 roadmap, we are going to work on patching feature.
 There are two types of workflow to consider:
 - patch existing environment (already deployed nodes, aka target nodes)
 - ensure that new nodes, added to the existing and already patched envs,
 will install updated packages too.

 In the case of an anaconda/preseed install, we can simply update the repo on
 the master node and run createrepo/etc. What do we do in the case of an image?
 Will we need a separate repo alongside the main one, an updates repo - and do
 a post-provisioning yum update to fetch all patched packages?

 On Tue, Dec 16, 2014 at 11:09 PM, Andrey Danin ada...@mirantis.com
 wrote:

 Adding Mellanox team explicitly.

 Gil, Nurit, Aviram, can you confirm that you tested that feature? It can
 be enabled on every fresh ISO. You just need to enable the Experimental
 mode (please, see the documentation for instructions).

 On Tuesday, December 16, 2014, Dmitry Pyzhov dpyz...@mirantis.com
 wrote:

 Guys,

 we are about to enable image based provisioning in our master by
 default. I'm trying to figure out requirement for this change. As far as I
 know, it was not tested on scale lab. Is it true? Have we ever run full
 system tests cycle with this option?

 Do we have any other pre-requirements?



 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake





 --
 Mike Scherbakov
 #mihgen






Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-17 Thread Sean Dague
On 12/16/2014 06:22 PM, Ben Nemec wrote:
 Some thoughts inline.  I'll go ahead and push a change to remove the
 things everyone seems to agree on.
 
 On 12/09/2014 09:05 AM, Sean Dague wrote:
 On 12/09/2014 09:11 AM, Doug Hellmann wrote:

 On Dec 9, 2014, at 6:39 AM, Sean Dague s...@dague.net wrote:

 I'd like to propose that for hacking 1.0 we drop 2 groups of rules 
 entirely.

 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).

 2 - the entire H3* group - because of this -
 https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm

 A look at the H3* code shows that it's terribly complicated, and is
 often full of bugs (a few bit us last week). I'd rather just delete it
 and move on.

 I don’t have the hacking rules memorized. Could you describe them briefly?

 Sure, the H8* group is git commit messages. It's checking for line
 length in the commit message.

  - [H802] First, provide a brief summary of 50 characters or less.  Summaries
    of greater than 72 characters will be rejected by the gate.

 - [H801] The first line of the commit message should provide an accurate
   description of the change, not just a reference to a bug or
   blueprint.


 H802 is mechanically enforced (though not the 50 characters part, so the
 code isn't the same as the rule).

 H801 is enforced by a regex that looks to see if the first line is a
 launchpad bug and fails on it. You can't mechanically enforce that
 english provides an accurate description.
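To make the mechanical/non-mechanical distinction concrete, here is a rough sketch of the kind of check H801/H802 perform (illustrative only; the function name and regex are assumptions, and the real implementation lives in the hacking package):

```python
import re

def check_commit_summary(message):
    """Yield (code, reason) pairs for a commit message's summary line."""
    summary = message.splitlines()[0]
    if len(summary) > 72:
        yield ("H802", "summary longer than 72 characters")
    # H801-style regex: the summary must not be just a bug reference.
    if re.match(r"^\s*(fixes\s+)?(closes-|partial-|related-)?bug[:\s]*#?\d+\s*$",
                summary, re.IGNORECASE):
        yield ("H801", "summary is only a bug reference")

print(list(check_commit_summary("Fixes bug 1401039")))
# [('H801', 'summary is only a bug reference')]
```

Both checks operate on the commit message rather than on python code, which is why they are hard to run locally from flake8.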
 
 +1.  It would be nice to provide automatic notification to people if
 they submit something with an absurdly long commit message, but I agree
 that hacking isn't the place to do that.
 


 H3* are all the module import rules:

 Imports
 ---
 - [H302] Do not import objects, only modules (*)
 - [H301] Do not import more than one module per line (*)
 - [H303] Do not use wildcard ``*`` import (*)
 - [H304] Do not make relative imports
 - Order your imports by the full module path
 - [H305 H306 H307] Organize your imports according to the `Import order
   template`_ and `Real-world Import Order Examples`_ below.
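As a purely text-based illustration of an H306-style check (not the hacking implementation; it ignores import groups and `from` imports), alphabetical ordering can be verified without importing anything:

```python
def h306_violations(lines):
    """Return adjacent import lines that are out of alphabetical order."""
    imports = [line for line in lines if line.startswith("import ")]
    return [(a, b) for a, b in zip(imports, imports[1:]) if a > b]

good = ["import nova.tests.fixtures", "import nova.tests.unit.conf_fixture"]
print(h306_violations(good))                  # []
print(h306_violations(list(reversed(good))))  # the out-of-order pair
```

This is the sense in which H306 differs from H302: it needs only the text of the file, not the installed modules.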

 I think these remain reasonable guidelines, but H302 is exceptionally
 tricky to get right, and we keep not getting it right.

 H305-307 are actually impossible to get right. Things come in and out of
 stdlib in python all the time.
 
  tl;dr: I'd like to remove H302, H305, and H307 and leave the rest.
  Reasons below.
 
 +1 to H305 and H307.  I'm going to have to admit defeat and accept that
 I can't make them work in a sane fashion.
 
 H306 is different though - that one is only checking alphabetical order
 and only works on the text of the import so it doesn't have the issues
 around having modules installed or mis-categorizing.  AFAIK it has never
 actually caused any problems either (the H306 failure in
 https://review.openstack.org/#/c/140168/2/nova/tests/unit/test_fixtures.py
 is correct - nova.tests.fixtures should come before
 nova.tests.unit.conf_fixture).

The issue I originally had was in nova.tests.fixtures, where it resolved
fixtures as a relative import instead of an absolute one, and exploded.
It's not reproducing now though.

 As far as 301-304, only 302 actually depends on the is_module stuff.
 The others are all text-based too so I think we should leave them.  H302
  I'm kind of indifferent on - we hit an edge case with the oslo namespace
  thing which is now fixed, but if removing that allows us to not install
  requirements.txt to run pep8 I think I'm onboard with removing it too.

H304 needs is_import_exception.

is_module and is_import_exception mean we have to import all the code,
which means the dependencies for pep8 are *all* of requirements.txt, all of
test-requirements.txt, and all optional requirements (not listed in those
files). If the content isn't in the venv, the check passes. So
adding / removing an optional requirement can change the flake8 test
results.

Evaluating the code is something that we should avoid.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-12-17 Thread Evgeniy L
Vitaly, what do you think about that?

On Fri, Dec 12, 2014 at 5:58 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 I don't agree with many of your statements, but I would like to
 continue the discussion about a really important topic, i.e. the UI flow. My
 suggestion was to add groups: in metadata.yaml the plugin
 developer can describe the groups which the plugin belongs to:
 groups:
   - id: storage
 subgroup:
   - id: cinder

 With this information we can show a new option in the UI (wizard).
 If the option is selected, it means that the plugin is enabled; if the plugin
 belongs to several groups, we can use an OR statement.

 The main point is that for environment creation we must specify the
 ids of plugins. Yet another reason for that is plugin multiversioning:
 we must know exactly which plugin and which version
 is used for an environment, and I don't see how conditions can help
 us with that.

 Thanks,




 On Wed, Dec 10, 2014 at 8:23 PM, Vitaly Kramskikh vkramsk...@mirantis.com
  wrote:



 2014-12-10 19:31 GMT+03:00 Evgeniy L e...@mirantis.com:



 On Wed, Dec 10, 2014 at 6:50 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:



 2014-12-10 16:57 GMT+03:00 Evgeniy L e...@mirantis.com:

 Hi,

  First let me describe our plans for the nearest release. We want
  to deliver a role as a simple plugin; this means that a plugin developer can
  define his own role
  with yaml, and it should also work fine with our current approach where a
  user can
  define several fields on the settings tab.

  Also I would like to mention another thing which we should probably
  discuss
  in a separate thread: how plugins should be implemented. We have two
  types
  of plugins, simple and complicated. The definition of simple: I can
  do everything
  I need with yaml. The definition of complicated: probably I have to
  write some
  python code. It doesn't mean that this python code should do absolutely
  everything it wants, but it means we should implement a stable,
  documented
  interface where the plugin is connected to the core.

  Now let's talk about the UI flow. Our current problem is how to get the
  information
  on whether a plugin is used in the environment or not. This information is
  required by the
  backend which generates the appropriate tasks for the task executor, and this
  information can also be used in the future if we decide to implement a
  plugin deletion
  mechanism.

  I didn't come up with a new solution; as before, we have two
  options to
  solve the problem:

 # 1

 Use conditional language which is currently used on UI, it will look
 like
 Vitaly described in the example [1].
 Plugin developer should:

 1. describe at least one element for UI, which he will be able to use
 in task

 2. add condition which is written in our own programming language

 Example of the condition for LBaaS plugin:

 condition: settings:lbaas.metadata.enabled == true

 3. add to metadata.yaml a condition which defines whether the plugin
 is enabled

 is_enabled: settings:lbaas.metadata.enabled == true

 This approach has good flexibility, but also it has problems:

 a. It's complicated and not intuitive for plugin developer.

 It is less complicated than python code


 I'm not sure why you are talking about python code here; my point
 is that we should not force the developer to use these conditions in
 any language.

 But that's how current plugin-like stuff works. There are various tasks
 which are run only if some checkboxes are set, so stuff like Ceph and
 vCenter will need conditions to describe tasks.

 Anyway, I don't agree with the statement that there are more people
 who know python than the fuel ui conditional language.


 b. It doesn't cover the case when the user installs a 3rd-party
 plugin which doesn't have any conditions (because of #a), and the
 user doesn't have a way to disable it for the environment if it
 breaks his configuration.

 If plugin doesn't have conditions for tasks, then it has invalid
 metadata.


 Yep, and it's a problem of the platform, which provides a bad interface.

 Why is it bad? If the plugin writer doesn't provide a plugin name or
 version, then the metadata is invalid as well. It is the plugin
 writer's fault that he didn't write the metadata properly.




 # 2

 As we discussed from the very beginning after user selects a release
 he can
 choose a set of plugins which he wants to be enabled for environment.
 After that we can say that plugin is enabled for the environment and
 we send
 tasks related to this plugin to task executor.

  My approach also allows us to eliminate the enableness of plugins,
 which causes UX issues and issues like you described above. vCenter
 and Ceph also don't have an enabled state: vCenter has hypervisor and
 storage, Ceph provides backends for Cinder and Glance, which can be
 used simultaneously or only one of them can be used.

 Both of described plugins have enabled/disabled state, vCenter is
 enabled
 when vCenter is selected as hypervisor. Ceph is enabled when it's
 selected
 as a backend for Cinder or Glance.

 Nope, Ceph for Volumes can be used 

[openstack-dev] [cinder] Anyone knows whether there is freezing day of spec approval?

2014-12-17 Thread Jay Bryant
 Dave,

My apologies.  We have not yet set a day that we are freezing BP/Spec
approval for Cinder.

We had a deadline in November for new drivers being proposed but haven't
frozen other proposals yet.  I mixed things up with Nova's 12/18 cutoff.

Not sure when we will be cutting off BPs for Cinder.  The goal is to spend
as much of K-2 and K-3 as possible on Cinder clean-up.  So, I wouldn't let
anything you want considered linger too long.

Thanks,
Jay

On 12/15/2014 09:16 PM, Chen, Wei D wrote:

Hi,

I know Nova has such a day around Dec. 18; is there a similar day in the
Cinder project? Thanks!

Best Regards,
Dave Chen









-- 
jsbry...@electronicjungle.net
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno

2014-12-17 Thread Thomas Goirand
On 12/16/2014 04:21 AM, Doug Hellmann wrote:
 The issue with stable/juno jobs failing because of the difference in the
 SQLAlchemy requirements between the older applications and the newer
 oslo.db is being addressed with a new release of the 1.2.x series. We 
 will then cap the requirements for stable/juno to 1.2.1. We decided we
 did not need to raise the minimum version of oslo.db allowed in kilo,
 because the old versions of the library do work, if they are installed
 from packages and not through setuptools.
 
 Jeremy created a feature/1.2 branch for us, and I have 2 patches up
 [1][2] to apply the requirements fix. The change to the oslo.db version
 in stable/juno is [3].
 
 After the changes in oslo.db merge, I will tag 1.2.1.
 
 Doug
 
 [1] https://review.openstack.org/#/c/141893/
 [2] https://review.openstack.org/#/c/141894/
 [3] https://review.openstack.org/#/c/141896/

Doug,

I'm not sure I get it. Is this related to newer versions of SQLAlchemy?
If so, then from my package maintainer point of view, keeping an older
version of SQLA (eg: 0.9.8) and oslo.db 1.0.2 for Juno is ok, right?

Will Kilo require a newer version of SQLA?

Cheers,

Thomas




[openstack-dev] [QA] Meeting Thursday December 18th at 17:00 UTC

2014-12-17 Thread Matthew Treinish
Hi everyone,

Just a quick reminder that the weekly OpenStack QA team IRC meeting will be
tomorrow Thursday, December 18th at 17:00 UTC in the #openstack-meeting
channel.

The agenda for tomorrow's meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting
Anyone is welcome to add an item to the agenda.

It's also worth noting that several weeks ago we started having a regular
dedicated Devstack topic during the meetings. So if anyone is interested in
Devstack development please join the meetings to be a part of the discussion.

To help people figure out what time 17:00 UTC is in other timezones,
tomorrow's meeting will be at:

12:00 EST
02:00 JST
03:30 ACDT
18:00 CET
11:00 CST
9:00 PST

-Matt Treinish




Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?

2014-12-17 Thread Belmiro Moreira
Hi Vish,
do you have more info about the libvirt deadlocks that you observed?
Maybe I'm observing the same on SLC6 where I can't even kill libvirtd
process.

Belmiro

On Tue, Dec 16, 2014 at 12:01 AM, Vishvananda Ishaya vishvana...@gmail.com
wrote:

 I have seen deadlocks in libvirt that could cause this. When you are in
 this state, check to see if you can do a virsh list on the node. If not,
 libvirt is deadlocked, and ubuntu may need to pull in a fix/newer version.

 Vish

 On Dec 12, 2014, at 2:12 PM, pcrews glee...@gmail.com wrote:

  On 12/09/2014 03:54 PM, Ken'ichi Ohmichi wrote:
  Hi,
 
  This case is always tested by Tempest on the gate.
 
 
 https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_delete_server.py#L152
 
  So I guess this problem wouldn't happen on the latest version at least.
 
  Thanks
  Ken'ichi Ohmichi
 
  ---
 
  2014-12-10 6:32 GMT+09:00 Joe Gordon joe.gord...@gmail.com:
 
 
  On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) 
 dannc...@cisco.com
  wrote:
 
  Hi,
 
  I have a VM which is in ERROR state.
 
 
 
 +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
 | ID                                   | Name                                         | Status | Task State | Power State | Networks |
 +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
 | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -          | NOSTATE     |          |
 +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
 
 
  I tried in both CLI “nova delete” and Horizon “terminate instance”.
  Both accepted the delete command without any error.
  However, the VM never got deleted.
 
  Is there a way to remove the VM?
 
 
  What version of nova are you using? This is definitely a serious bug,
 you
  should be able to delete an instance in error state. Can you file a
 bug that
  includes steps on how to reproduce the bug along with all relevant
 logs.
 
  bugs.launchpad.net/nova
 
 
 
  Thanks,
  Danny
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  Hi,
 
  I've encountered this in my own testing and have found that it appears
 to be tied to libvirt.
 
  When I hit this, reset-state as the admin user reports success (and
 state is set), *but* things aren't really working as advertised and
 subsequent attempts to do anything with the errant vm's will send them
 right back into 'FLAIL' / can't delete / endless DELETING mode.
 
  restarting libvirt-bin on my machine fixes this - after restart, the
 deleting vm's are properly wiped without any further user input to
 nova/horizon and all seems right in the world.
 
  using:
  devstack
  ubuntu 14.04
  libvirtd (libvirt) 1.2.2
 
  triggered via:
  lots of random create/reboot/resize/delete requests of varying validity
 and sanity.
 
  Am in the process of cleaning up my test code so as not to hurt anyone's
 brain with the ugly and will file a bug once done, but thought this worth
 sharing.
 
  Thanks,
  Patrick
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-17 Thread Chris St. Pierre
That's unfortunately too simple. You run into one of two cases:

1. If the job automatically removes the protected attribute when an image
is no longer in use, then you lose the ability to use protected on images
that are not in use. I.e., there's no way to say, "nothing is currently
using this image, but please keep it around." (This seems particularly
useful for snapshots, for instance.)

2. If the job does not automatically remove the protected attribute, then
an image would be protected if it had ever been in use; to delete an image,
you'd have to manually un-protect it, which is a workflow that quite
explicitly defeats the whole purpose of flagging images as protected when
they're in use.

It seems like flagging an image as *not* in use is actually a fairly
difficult problem, since it requires consensus among all components that
might be using images.

The only solution that readily occurs to me would be to add something like
a filesystem link count to images in Glance. Then when Nova spawns an
instance, it increments the usage count; when the instance is destroyed,
the usage count is decremented. And similarly with other components that
use images. An image could only be deleted when its usage count was zero.

There are ample opportunities to get out of sync there, but it's at least a
sketch of something that might work, and isn't *too* horribly hackish.
Thoughts?
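The "link count" idea above can be sketched in a few lines. This is purely illustrative: none of these names exist in Glance or Nova today, and a real implementation would need atomic, persistent counters.

```python
# Hypothetical sketch of the "usage count" idea: an image may only be
# deleted once every consumer that incremented its count has released it.
# All names here are invented for illustration, not Glance API.

class ImageInUse(Exception):
    """Raised when deletion is attempted on an image still in use."""


class ImageUsageRegistry:
    def __init__(self):
        self._counts = {}

    def acquire(self, image_id):
        """Called when e.g. an instance is spawned from the image."""
        self._counts[image_id] = self._counts.get(image_id, 0) + 1

    def release(self, image_id):
        """Called when the consumer (instance, volume, ...) goes away."""
        count = self._counts.get(image_id, 0)
        if count > 0:
            self._counts[image_id] = count - 1

    def can_delete(self, image_id):
        return self._counts.get(image_id, 0) == 0

    def delete(self, image_id, delete_fn):
        """Run delete_fn only if nothing currently references the image."""
        if not self.can_delete(image_id):
            raise ImageInUse(image_id)
        delete_fn(image_id)
```

As the thread notes, the hard part is not this bookkeeping but keeping the counts in sync across services.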

On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya vishvana...@gmail.com
wrote:

 A simple solution that wouldn’t require modification of glance would be a
 cron job
 that lists images and snapshots and marks them protected while they are in
 use.

 Vish

 On Dec 16, 2014, at 3:19 PM, Collins, Sean 
 sean_colli...@cable.comcast.com wrote:

  On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote:
  No, I'm looking to prevent images that are in use from being deleted.
 In
  use and protected are disjoint sets.
 
  I have seen multiple cases of images (and snapshots) being deleted while
  still in use in Nova, which leads to some very, shall we say,
  interesting bugs and support problems.
 
  I do think that we should try and determine a way forward on this, they
  are indeed disjoint sets. Setting an image as protected is a proactive
  measure, we should try and figure out a way to keep tenants from
  shooting themselves in the foot if possible.
 
  --
  Sean M. Collins
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Chris St. Pierre


[openstack-dev] [all][stable] fixing SQLAlchemy and oslo.db requirements in stable/juno

2014-12-17 Thread Doug Hellmann
Now that yesterday’s patch to cap the version of oslo.db used in stable/juno
to <1.1 merged, we have a bunch of updates pending in projects that use oslo.db or
SQLAlchemy to fix the in-project requirement specifications [1]. Having the 
global requirements list updated takes care of our CI environment, but we 
should prioritize those reviews so we can get our stable branches into a good 
state for sites doing CD from stable branches.

Thanks,
Doug

[1] 
https://review.openstack.org/#/q/branch:stable/juno++is:open+owner:%22openstack+proposal+bot%22,n,z


Re: [openstack-dev] [Fuel] [Plugins] Further development of plugin metadata format

2014-12-17 Thread Vitaly Kramskikh
As I said, it is not flexible and it is restrictive. What if some other
backends for anything appear? What should I do if I want to write a plugin
that just adds some extra styles to the UI? Invent new structures/flags on
demand? That's not viable.

I still think enableness of plugin is the root of all issues with your
approach. With your approach we lose the single source of truth (cluster
attributes/settings tab) and we'll need to search for strange solutions
like these groups/flags.

2014-12-17 12:33 GMT+01:00 Evgeniy L e...@mirantis.com:

 Vitaly, what do you think about that?

 On Fri, Dec 12, 2014 at 5:58 PM, Evgeniy L e...@mirantis.com wrote:

 Hi,

 I don't agree with many of your statements but, I would like to
 continue discussion about really important topic i.e. UI flow, my
 suggestion was to add groups, for plugin in metadata.yaml plugin
 developer can have description of the groups which it belongs to:

 groups:
   - id: storage
     subgroup:
       - id: cinder

 With this information we can show a new option on UI (wizard);
 if the option is selected, it means that the plugin is enabled. If the
 plugin belongs to several groups, we can use an OR statement.
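The OR semantics proposed above could be sketched like this (illustrative only, not actual Fuel code; the function name and arguments are invented):

```python
# A plugin declaring several groups is considered enabled if ANY of
# them (OR semantics) was selected in the environment creation wizard.

def plugin_enabled(plugin_groups, selected_groups):
    """plugin_groups / selected_groups: iterables of group ids."""
    return bool(set(plugin_groups) & set(selected_groups))
```

E.g. a plugin declaring groups `["storage", "network"]` would be enabled as soon as the user picks either group in the wizard.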

 The main point is, for environment creation we must specify
 ids of plugins. Yet another reason for that is plugins multiversioning,
 we must know exactly which plugin with which version
 is used for environment, and I don't see how conditions can help
 us with it.

 Thanks,




 On Wed, Dec 10, 2014 at 8:23 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:



 2014-12-10 19:31 GMT+03:00 Evgeniy L e...@mirantis.com:



 On Wed, Dec 10, 2014 at 6:50 PM, Vitaly Kramskikh 
 vkramsk...@mirantis.com wrote:



 2014-12-10 16:57 GMT+03:00 Evgeniy L e...@mirantis.com:

 Hi,

 First let me describe what our plans are for the nearest release. We
 want to deliver roles as simple plugins: a plugin developer can define
 his own role with yaml, and it should also work fine with our current
 approach where the user can define several fields on the settings tab.

 Also I would like to mention another thing which we should probably
 discuss
 in separate thread, how plugins should be implemented. We have two
 types
 of plugins, simple and complicated, the definition of simple - I can
 do everything
 I need with yaml, the definition of complicated - probably I have to
 write some
 python code. It doesn't mean that this python code should do
 absolutely
 everything it wants, but it means we should implement stable,
 documented
 interface where plugin is connected to the core.

 Now let's talk about UI flow. Our current problem is how to get the
 information whether a plugin is used in the environment or not; this
 information is required for the backend which generates appropriate
 tasks for the task executor, and it can also be used in the future if
 we decide to implement a plugin deletion mechanism.

 I didn't come up with any new solution; as before, we have two options
 to solve the problem:

 # 1

 Use conditional language which is currently used on UI, it will look
 like
 Vitaly described in the example [1].
 Plugin developer should:

 1. describe at least one element for UI, which he will be able to use
 in task

 2. add condition which is written in our own programming language

 Example of the condition for LBaaS plugin:

 condition: settings:lbaas.metadata.enabled == true

 3. add to metadata.yaml a condition which defines whether the plugin
 is enabled

 is_enabled: settings:lbaas.metadata.enabled == true

 This approach has good flexibility, but also it has problems:

 a. It's complicated and not intuitive for plugin developer.

 It is less complicated than python code


 I'm not sure why you are talking about python code here; my point
 is that we should not force the developer to use these conditions in
 any language.

 But that's how current plugin-like stuff works. There are various tasks
 which are run only if some checkboxes are set, so stuff like Ceph and
 vCenter will need conditions to describe tasks.

 Anyway, I don't agree with the statement that there are more people
 who know python than the fuel ui conditional language.


 b. It doesn't cover the case when the user installs a 3rd-party
 plugin which doesn't have any conditions (because of #a), and the
 user doesn't have a way to disable it for the environment if it
 breaks his configuration.

 If plugin doesn't have conditions for tasks, then it has invalid
 metadata.


 Yep, and it's a problem of the platform, which provides a bad interface.

 Why is it bad? If the plugin writer doesn't provide a plugin name or
 version, then the metadata is invalid as well. It is the plugin
 writer's fault that he didn't write the metadata properly.




 # 2

 As we discussed from the very beginning after user selects a release
 he can
 choose a set of plugins which he wants to be enabled for environment.
 After that we can say that plugin is enabled for the environment and
 we send
 tasks related to this plugin to task executor.

  My 

Re: [openstack-dev] oslo.db 1.2.1 release coming to fix stable/juno

2014-12-17 Thread Jeremy Stanley
On 2014-12-17 22:02:26 +0800 (+0800), Thomas Goirand wrote:
 I'm not sure I get it. Is this related to newer versions of SQLAlchemy?

It's related to how Setuptools 8 failed to parse our requirements
line for SQLAlchemy because it contained multiple version ranges.
That was fixed by converting it to a single range with a list of
excluded versions instead, but we still needed to backport that
requirements.txt entry to a version of oslo.db for stable/juno.
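As an illustration of why the single-range-with-exclusions form is easier to evaluate, here is a simplified, hypothetical checker (real requirement parsing is done by setuptools/pip, not code like this; versions are naively compared as integer tuples, which suffices for X.Y.Z release numbers):

```python
# Naive illustration of "one range plus an exclusion list", the form the
# SQLAlchemy requirement was converted to so Setuptools 8 could parse it.

def parse(version):
    """'0.9.8' -> (0, 9, 8); integer tuples compare element-wise."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version, minimum, maximum, excluded=()):
    """True if minimum <= version <= maximum and not explicitly excluded."""
    if not (parse(minimum) <= parse(version) <= parse(maximum)):
        return False
    return version not in excluded
```

E.g. a line resembling `SQLAlchemy>=0.8.4,<=0.9.99,!=0.9.0,!=0.9.1` maps onto `satisfies(v, "0.8.4", "0.9.99", excluded=("0.9.0", "0.9.1"))`, whereas the old two-disjoint-ranges form cannot be expressed as a single call.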

 If so, then from my package maintainer point of view, keeping an older
 version of SQLA (eg: 0.9.8) and oslo.db 1.0.2 for Juno is ok, right?

In the end we wound up with a 1.0.3 release of oslo.db and pinned
stable/juno requirements to oslo.db<1.1 rather than the open-ended
maximum it previously had (which was including 1.1.x and 1.2.x
versions).

 Will Kilo require a newer version of SQLA?

In this case older SQLAlchemy is 0.8.x (we're listing supported
0.8.x and 0.9.x versions for stable/juno still), while Kilo may
release with only a supported range of SQLAlchemy 0.9.x versions.
-- 
Jeremy Stanley



Re: [openstack-dev] [Neutron][Nova][DHCP] Use case of enable_dhcp option when creating a network

2014-12-17 Thread John Kasperski
When enable_dhcp is False, a config drive or the metadata service can be
used to assign static IP addresses to the deployed VM, provided the image
has cloud-init or something equivalent.
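For illustration, the guest (via cloud-init or similar) ultimately has to render the Neutron-allocated fixed IP into its own network configuration. A hedged sketch, assuming a Debian-style interfaces file; the helper name and parameters are invented, not part of cloud-init:

```python
# Render a static /etc/network/interfaces stanza from the fixed IP that
# Neutron allocated to the port. Illustrative only.

def interfaces_stanza(iface, ip_address, netmask, gateway):
    return "\n".join([
        "auto %s" % iface,
        "iface %s inet static" % iface,
        "    address %s" % ip_address,
        "    netmask %s" % netmask,
        "    gateway %s" % gateway,
    ])
```

So even with DHCP disabled, the address the guest configures should be the one recorded on the Neutron port, or anti-spoofing rules will drop its traffic.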


On 12/17/2014 2:15 AM, Padmanabhan Krishnan wrote:

Hello,
I have a question regarding the enable_dhcp option when creating a 
network.


When a VM is attached to  a network where enable_dhcp is False, I 
understand that the DHCP namespace is not created for the network and 
the VM does not get any IP address after it boots up and sends a DHCP 
Discover.
But, I also see that the Neutron port is filled with the fixed IP 
value from the network pool even though there's no DHCP associated 
with the subnet.
So, for such VM's, does one need to statically configure the IP 
address with whatever Neutron has allocated from the pool?


What exactly is the use case of the above?

I do understand that for providing public network access to VM's, an 
external network is generally created with enable-dhcp option set to 
False. Is it only for this purpose?


I was thinking of a case of external/provider DHCP servers from where 
VM's can get their IP addresses and when one does not want to use L3 
agent/DVR. In such cases, one may want to disable DHCP when creating 
networks.  Isn't this a use-case?


Appreciate any response or corrections with my above understanding.

Thanks,
Paddu





--
John Kasperski



Re: [openstack-dev] [cinder driver] A question about Kilo merge point

2014-12-17 Thread Erlon Cruz
Hi,

Yes, I think that the chances of being merged after K-1 are slim. Check this
doc with the priority list that the core team is working on:
https://etherpad.openstack.org/p/cinder-kilo-priorities

Erlon

On Tue, Dec 16, 2014 at 9:40 AM, liuxinguo liuxin...@huawei.com wrote:

  If a cinder driver cannot be merged into Kilo before Kilo-1, does it
 mean that this driver will have very little chance to be merged into Kilo?

 And what percentage of drivers will be merged before Kilo-1, relative to
 all the drivers that will be merged into Kilo in the end?



 Thanks!

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




[openstack-dev] [Ironic] Weekly subteam status report

2014-12-17 Thread Jim Rollenhagen
Hi all,

Following is the subteam report for Ironic. As usual, this is pulled
directly from the Ironic whiteboard[0] and formatted.

Testing (adam_g)

Bugs (dtantsur)
(as of Mon, 8 Dec 17:00 UTC)
Open: 108 (+6). 5 new (+1), 26 in progress (+1),
0 critical, 12 high (+2) and 3 incomplete

Drivers:
IPA (jroll/JayF/JoshNang)
(update by JayF)
check-tempest-dsvm-ironic-agent_ssh-src is now voting on all
  ironic-python-agent changes
check-tempest-dsvm-ironic-agent_ssh-nv is still expected to pass on all
  Ironic changes, even though it doesn't vote, barring bugs #1398128
  and #139770 (existing gate bugs)

DRAC (ifarkas/lucas)
nothing new // lucasagomes

iLO (wanyen)
As of 12/15/14
Submitted full spec of partition image support for agent driver
  https://review.openstack.org/#/c/137363/.
  This spec needs input regarding what's the best way to refactor
  partition code from Ironic to a common library for IPA and Ironic 
code.
  Please review the spec and provide input.
Submitted code for two Nova specs
  https://review.openstack.org/#/c/141010/1
  https://review.openstack.org/#/c/141012/
  The 141012 changes Nova ironic driver code so please review.
Setting up 3rd-party CI

iRMC (naohirot)
[power driver] merged the spec and started implementation towards kilo-2
  https://review.openstack.org/#/c/134487/
[virtual media deploy driver] updated the spec to the patch set 7 for 
review
  https://review.openstack.org/#/c/134865/
[management driver] updated the spec to the patch set 5 for review
  https://review.openstack.org/#/c/136020/

AMT (lintan)
Proposed a patch to support the workflow of deploy on AMT/vPro PC
  https://review.openstack.org/#/c/135184/
AMT driver proposal now to use wsman instead of amttools

Oslo (GheRivero)
oslo.config
  https://review.openstack.org/#/c/137447/
More intrusive
All config options in the same file - less error prone
  https://review.openstack.org/#/c/128005/
Less intrusive
More error prone
Same approach than other projects
oslo.policy - WIP - https://review.openstack.org/#/c/126265/
Need to update to new oslo namespace and sync with oslo.incubator
  Waiting for oslo.config and oslo.policy patches to land
oslo.context go be graduated
  
https://blueprints.launchpad.net/oslo-incubator/+spec/graduate-oslo-context
context and logging to be removed from incubator soon
New setuptools causing problems with sqlalchemy and oslo.db
  New oslo.db release soon to fix this
Future libraries: memcached, tooz?

[0] https://etherpad.openstack.org/p/IronicWhiteBoard

// jim



[openstack-dev] [python-*client] py33 jobs seem to be failing

2014-12-17 Thread Steve Martinelli
Wondering if anyone can shed some light on this, it seems like a few of 
the clients have been unable to build py33 environments lately:

http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTmFtZUVycm9yOiBuYW1lICdTdGFuZGFyZEVycm9yJyBpcyBub3QgZGVmaW5lZFwiIGJ1aWxkX3N0YXR1czonRkFJTFVSRSciLCJmaWVsZHMiOlsibWVzc2FnZSIsImJ1aWxkX25hbWUiLCJidWlsZF9zdGF0dXMiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTg4MzIzNzM5ODl9

If you want to see additional logs I went ahead and opened a bug against 
python-openstackclient since that's where I saw it first: 
https://bugs.launchpad.net/python-openstackclient/+bug/1403557

Though it seems at least glanceclient/neutronclient/keystoneclient are 
affected as well.

The stack trace leads me to believe that docutils or sphinx is the 
culprit, but neither has released a new version in the time the bug has 
been around, so I'm not sure what the root cause of the problem is.

Steve


[openstack-dev] [OSSN 0042] Keystone token scoping provides no security benefit

2014-12-17 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Keystone token scoping provides no security benefit
- ---

### Summary ###
Keystone provides scoped tokens that are constrained to use by a
single project. A user may expect that their scoped token can only be
used to perform operations for the project it is scoped to, which is not
the case. A service or other party who obtains the scoped token can use
it to obtain a token for a different authorized scope, which may be
considered a privilege escalation.

### Affected Services / Software ###
Keystone, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, Juno, Kilo

### Discussion ###
This is not a bug in keystone, it's a design feature that some users may
expect to bring security enhancement when it does not. The OSSG is
issuing this security note to highlight the issue.

Many operations in OpenStack will take a token from the user and pass it
to another service to perform some portion of the intended operation.
This token is very powerful and can be used to perform many actions for
the user. Scoped tokens appear to limit their use to the project and
roles they were granted for but can also be used to request tokens with
other scopes. It's important to note that this only works with currently
valid tokens. Once a token expires it cannot be used to gain a new
token.

Token scoping helps avoid accidental leakage of tokens because using
tokens with other services requires the extra step of requesting a new
re-scoped token from keystone. Scoping can help with audit trails and
promote good code practices. There's currently no way to create a
tightly scoped token that cannot be used to request a re-scoped token. A
scoped token cannot be relied upon to restrict actions to only that
scope.

### Recommended Action ###
Users and deployers of OpenStack must not rely on the scope of tokens
to limit what actions can be performed using them.

Concerned users are encouraged to read (OSSG member) Nathan Kinder's
blog post on this issue and some of the potential future solutions.

### Contacts / References ###
Nathan Kinder on Token Scoping : https://blog-nkinder.rhcloud.com/?p=101
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0042
Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1341816
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJUkazTAAoJEJa+6E7Ri+EVnj0H/jQWtbkVN+na2GzI3VbNSLsF
MPnGqO6tMcblKvI0m8okbyzhtpSDVAjPTCeoGY4PB5/AE31j1CDrlMT+bnm/Zk+O
rAXeYgBvyjw9FbP9/UeNZPjQPByWaxGr8L90kuSGiL7rBvgf8KoxFJ2Kb9zNDWLJ
bBAJ0A7QjOAri4RnyXoSINzKKawEJzM8va6R3iFtn6yF8Q/1ta3NBB5uWbgkS26M
jtIvTNU/wGxX4b2mQ6gOno/4TwTZIqX+qTdDRXE811NZqSwdNfGRTD1hUQPYYtRq
ioNBDrH/gXsmI4Lr/gXxki1zjPiSzWcbWOVu1PsnJTmFpYrI0FafguKwya4+YhI=
=w/r8
-END PGP SIGNATURE-
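For illustration, the re-scoping the note describes amounts to posting an existing token back to Keystone with a different tenant (Identity v2 style). The helper below and the values are hypothetical; it only builds the request body:

```python
# Build the body of a v2.0 re-scoping request: any holder of a valid
# scoped token can POST it to /v2.0/tokens with another tenant name and
# receive a token scoped to that tenant. Values here are dummies.
import json

def rescope_request_body(token_id, tenant_name):
    return {"auth": {"token": {"id": token_id},
                     "tenantName": tenant_name}}

body = json.dumps(rescope_request_body("0123deadbeef", "other-project"))
```

This is exactly why a stolen scoped token cannot be assumed to be confined to the project it was issued for.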



[openstack-dev] [nova] Complexity check and v2 API

2014-12-17 Thread Pasquale Porreca
Hello

I am working on an API extension that adds a parameter to the create
server call; to implement the v2 API I added a few lines of code to
nova/api/openstack/compute/servers.py

In particular just adding something like

    new_param = None
    if self.ext_mgr.is_loaded('os-new-param'):
        new_param = server_dict.get('new_param')

leads to a pep8 failure with the message 'Controller.create' is too complex
(47). (Note that in tox.ini the max complexity is fixed to 47, and there is
a note specifying that 46 is the max complexity present at the moment.)

It is quite easy to make this test pass by creating a new method just to
execute these lines of code; however, all the other extensions are handled
inline, and one of the most important style rules states to be consistent
with surrounding code, so I don't think a separate function is the way
to go (unless it implies a change in how all the other extensions are
handled too).

My thoughts on this situation:

1) New extensions should not consider v2 but only v2.1, so that file
should not be touched
2) Ignore this error and go on: if and when the extension will be merged
the complexity in tox.ini will be changed too
3) The complexity in tox.ini should be raised to allow new v2 extensions
4) The code of that module should be refactored to lower the complexity
(i.e. move the load of each extension in a separate function)

I would like to know if any of my points is close to the correct solution.
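For what it's worth, option 4 could look roughly like the hedged sketch below: a tiny helper (the name is invented, not existing Nova code) that each extension calls, so Controller.create gains one call instead of another branch and its cyclomatic complexity stops growing:

```python
# Hypothetical refactoring for option 4: hoist the per-extension
# parameter lookup out of Controller.create. Each "if is_loaded" branch
# in create() contributes to its complexity score; a shared helper does
# not, because the conditional lives here instead.

def optional_ext_param(ext_mgr, server_dict, ext_name, key):
    """Return server_dict[key] only if the named extension is loaded."""
    if ext_mgr.is_loaded(ext_name):
        return server_dict.get(key)
    return None

# In Controller.create this becomes a single straight-line statement:
#     new_param = optional_ext_param(
#         self.ext_mgr, server_dict, 'os-new-param', 'new_param')
```

Whether such a refactor is acceptable for v2 (versus doing new work only in v2.1) is exactly the policy question the mail raises.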

-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr



Re: [openstack-dev] [Neutron] Looking for feedback: spec for allowing additional IPs to be shared

2014-12-17 Thread Carl Baldwin
On Tue, Dec 16, 2014 at 10:32 AM, Thomas Maddox
thomas.mad...@rackspace.com wrote:
 Hey all,

 It seems I missed the Kilo proposal deadline for Neutron, unfortunately, but
 I still wanted to propose this spec for Neutron and get feedback/approval,
 sooner rather than later, so I can begin working on an implementation, even
 if it can't land in Kilo. I opted to put this in an etherpad for now for
 collaboration due to missing the Kilo proposal deadline.

 Spec markdown in etherpad:
 https://etherpad.openstack.org/p/allow-sharing-additional-ips

Thomas,

I did a quick look over and made a few comments because this looked
similar to other stuff that I've looked at recently.  I'd rather read
and comment on this proposal in gerrit where all other specs are
proposed.

Carl

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] py33 jobs seem to be failing

2014-12-17 Thread James Polley
Tweaking subject as this seems to be broader than just the clients

It's been seen on os-apply-config as well; I've marked 1403557 as a dupe
1403510. It's also been reported on stackforge/yaql as well as
python-*client

There's been some discussion of this in #openstack-infra and it seems
dstufft has identified the cause; I'll update 1403510 once we have
confirmation that we know what the problem is.

On Wed, Dec 17, 2014 at 5:09 PM, Steve Martinelli steve...@ca.ibm.com
wrote:

 Wondering if anyone can shed some light on this, it seems like a few of
 the clients have been unable to build py33 environments lately:


 http://logstash.openstack.org/#eyJzZWFyY2giOiJtZXNzYWdlOlwiTmFtZUVycm9yOiBuYW1lICdTdGFuZGFyZEVycm9yJyBpcyBub3QgZGVmaW5lZFwiIGJ1aWxkX3N0YXR1czonRkFJTFVSRSciLCJmaWVsZHMiOlsibWVzc2FnZSIsImJ1aWxkX25hbWUiLCJidWlsZF9zdGF0dXMiXSwib2Zmc2V0IjowLCJ0aW1lZnJhbWUiOiI0MzIwMCIsImdyYXBobW9kZSI6ImNvdW50IiwidGltZSI6eyJ1c2VyX2ludGVydmFsIjowfSwic3RhbXAiOjE0MTg4MzIzNzM5ODl9

 If you want to see additional logs I went ahead and opened a bug against
 python-openstackclient since that's where I saw it first:
 https://bugs.launchpad.net/python-openstackclient/+bug/1403557

 Though it seems at least glanceclient/neutronclient/keystoneclient are
 affected as well.

 The stack trace leads me to believe that docutils or sphinx is the
 culprit, but neither has released a new version in the time the bug has
 been around, so I'm not sure what the root cause of the problem is.

 Steve
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][Nova][DHCP] Use case of enable_dhcp option when creating a network

2014-12-17 Thread Padmanabhan Krishnan
 
Thanks for the response, I saw the other thread in the morning. Will use that 
thread if I have further questions.
-Paddu


 
From: Pasquale Porreca pasquale.porr...@dektech.com.au
Reply-To: OpenStack Development Mailing List (not for usage questions) 
openstack-dev@lists.openstack.org
Date: Wednesday, December 17, 2014 12:37 AM
To: openstack-dev@lists.openstack.org openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][Nova][DHCP] Use case of enable_dhcp 
option when creating a network

Just yesterday I asked a similar question on ML, this is the answer I got:

In Neutron, IP address management and distribution are separate concepts. IP 
addresses are assigned to ports even when DHCP is disabled. That IP address is 
indeed used to configure anti-spoofing rules and security groups.
http://lists.openstack.org/pipermail/openstack-dev/2014-December/053069.html
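A toy illustration of that separation (pure Python, assuming nothing about Neutron's actual IPAM code): the port gets an address from the subnet's allocation pool regardless of the enable_dhcp flag, and the guest must then configure it statically or via config drive/metadata.

```python
# Illustrative only: allocation from a subnet pool is independent of DHCP.
import ipaddress

def allocate_ip(pool, allocated):
    """Hand out the first free address from the pool."""
    for ip in pool:
        if ip not in allocated:
            allocated.add(ip)
            return ip

subnet = ipaddress.ip_network('10.0.0.0/29')
pool = [str(host) for host in subnet.hosts()]
allocated = set()

# The port still gets a fixed IP even though DHCP is disabled on the subnet;
# the VM simply never learns it via DHCP Discover.
port = {'fixed_ip': allocate_ip(pool, allocated), 'enable_dhcp': False}
print(port['fixed_ip'])  # 10.0.0.1
```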

On 12/17/14 09:15, Padmanabhan Krishnan wrote:

Hello,

I have a question regarding the enable_dhcp option when creating a network.

When a VM is attached to a network where enable_dhcp is False, I understand 
that the DHCP namespace is not created for the network and the VM does not get 
any IP address after it boots up and sends a DHCP Discover. But I also see that 
the Neutron port is filled with a fixed IP value from the network pool even 
though there's no DHCP associated with the subnet. So, for such VMs, does one 
need to statically configure the IP address with whatever Neutron has allocated 
from the pool?
What exactly is the use case of the above? 
I do understand that for providing public network access to VMs, an external 
network is generally created with the enable_dhcp option set to False. Is it only 
for this purpose?
I was thinking of a case of external/provider DHCP servers from which VMs can 
get their IP addresses, when one does not want to use the L3 agent/DVR. In such 
cases, one may want to disable DHCP when creating networks. Isn't this a 
use case?
Appreciate any response or corrections with my above understanding.

Thanks,
Paddu 

 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr



   ___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Complexity check and v2 API

2014-12-17 Thread Matthew Gilliard
Hello Pasquale

  The problem is that you are trying to add a new if/else branch into
a method which is already ~250 lines long, and has the highest
complexity of any function in the nova codebase. I assume that you
didn't contribute much to that complexity, but we've recently added a
limit to stop it getting any worse. So, regarding your 4 suggestions:

1/ As I understand it, v2.1 should be the same as v2 at the
moment, so they need to be kept the same
2/ You can't ignore it - it will fail CI
3/ No thank you. This limit should only ever be lowered :-)
4/ This is 'the right way'. Your suggestion for the refactor does
sound good.

I suggest a single patch that refactors and lowers the limit in
tox.ini.  Once you've done that then you can add the new parameter in
a following patch. Please feel free to add me to any patches you
create.

Matthew



On Wed, Dec 17, 2014 at 4:18 PM, Pasquale Porreca
pasquale.porr...@dektech.com.au wrote:
 Hello

 I am working on an API extension that adds a parameter on create server
 call; to implement the v2 API I added few lines of code to
 nova/api/openstack/compute/servers.py

 In particular just adding something like

 new_param = None
 if self.ext_mgr.is_loaded('os-new-param'):
 new_param = server_dict.get('new_param')

 leads to a pep8 fail with message 'Controller.create' is too complex (47)
 (Note that in tox.ini the max complexity is fixed to 47 and there is a note
 specifying 46 is the max complexity present at the moment).

 It is quite easy to make this test pass creating a new method just to
 execute these lines of code, anyway all other extensions are handled in that
 way and one of most important stylish rule states to be consistent with
 surrounding code, so I don't think a separate function is the way to go
 (unless it implies a change in how all other extensions are handled too).

 My thoughts on this situation:

 1) New extensions should not consider v2 but only v2.1, so that file should
 not be touched
 2) Ignore this error and go on: if and when the extension will be merged the
 complexity in tox.ini will be changed too
 3) The complexity in tox.ini should be raised to allow new v2 extensions
 4) The code of that module should be refactored to lower the complexity
 (i.e. move the load of each extension in a separate function)

 I would like to know if any of my point is close to the correct solution.

 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Complexity check and v2 API

2014-12-17 Thread Pasquale Porreca
Thank you for the answer.

My API proposal won't be merged in the Kilo release, since the deadline for
approval is tomorrow, so I may propose the fix to lower the complexity
in another way. What do you think about a bug fix?

On 12/17/14 18:05, Matthew Gilliard wrote:
 Hello Pasquale

   The problem is that you are trying to add a new if/else branch into
 a method which is already ~250 lines long, and has the highest
 complexity of any function in the nova codebase. I assume that you
 didn't contribute much to that complexity, but we've recently added a
 limit to stop it getting any worse. So, regarding your 4 suggestions:

 1/ As I understand it, v2.1 should be the same as v2 at the
 moment, so they need to be kept the same
 2/ You can't ignore it - it will fail CI
 3/ No thank you. This limit should only ever be lowered :-)
 4/ This is 'the right way'. Your suggestion for the refactor does
 sound good.

 I suggest a single patch that refactors and lowers the limit in
 tox.ini.  Once you've done that then you can add the new parameter in
 a following patch. Please feel free to add me to any patches you
 create.

 Matthew



 On Wed, Dec 17, 2014 at 4:18 PM, Pasquale Porreca
 pasquale.porr...@dektech.com.au wrote:
 Hello

 I am working on an API extension that adds a parameter on create server
 call; to implement the v2 API I added few lines of code to
 nova/api/openstack/compute/servers.py

 In particular just adding something like

 new_param = None
 if self.ext_mgr.is_loaded('os-new-param'):
 new_param = server_dict.get('new_param')

 leads to a pep8 fail with message 'Controller.create' is too complex (47)
 (Note that in tox.ini the max complexity is fixed to 47 and there is a note
 specifying 46 is the max complexity present at the moment).

 It is quite easy to make this test pass creating a new method just to
 execute these lines of code, anyway all other extensions are handled in that
 way and one of most important stylish rule states to be consistent with
 surrounding code, so I don't think a separate function is the way to go
 (unless it implies a change in how all other extensions are handled too).

 My thoughts on this situation:

 1) New extensions should not consider v2 but only v2.1, so that file should
 not be touched
 2) Ignore this error and go on: if and when the extension will be merged the
 complexity in tox.ini will be changed too
 3) The complexity in tox.ini should be raised to allow new v2 extensions
 4) The code of that module should be refactored to lower the complexity
 (i.e. move the load of each extension in a separate function)

 I would like to know if any of my point is close to the correct solution.

 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Topic: Reschedule Router to a different agent with multiple external networks.

2014-12-17 Thread Vasudevan, Swaminathan (PNB Roseville)
Hi Folks,

Reschedule router if new external gateway is on other network
An L3 agent may be associated with just one external network.
If router's new external gateway is on other network then the router
needs to be rescheduled to the proper l3 agent

This patch was introduced when there was no support for L3-agent to handle 
multiple external networks.

Do we think we should still retain this original behavior even if we have 
support for multiple external networks by a single L3 agent?

Can anyone comment on this?

Thanks

Swaminathan Vasudevan
Systems Software Engineer (TC)


HP Networking
Hewlett-Packard
8000 Foothills Blvd
M/S 5541
Roseville, CA - 95747
tel: 916.785.0937
fax: 916.785.1815
email: swaminathan.vasude...@hp.com


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-17 Thread Nikhil Komawar
That looks like a decent alternative if it works. However, it would be too racy 
unless we implement a test-and-set for such properties, or there is a 
different job which queues up these requests and performs them sequentially for 
each tenant.

Thanks,
-Nikhil

From: Chris St. Pierre [chris.a.st.pie...@gmail.com]
Sent: Wednesday, December 17, 2014 10:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?

That's unfortunately too simple. You run into one of two cases:

1. If the job automatically removes the protected attribute when an image is no 
longer in use, then you lose the ability to use protected on images that are 
not in use. I.e., there's no way to say "nothing is currently using this 
image, but please keep it around." (This seems particularly useful for 
snapshots, for instance.)

2. If the job does not automatically remove the protected attribute, then an 
image would be protected if it had ever been in use; to delete an image, you'd 
have to manually un-protect it, which is a workflow that quite explicitly 
defeats the whole purpose of flagging images as protected when they're in use.

It seems like flagging an image as *not* in use is actually a fairly difficult 
problem, since it requires consensus among all components that might be using 
images.

The only solution that readily occurs to me would be to add something like a 
filesystem link count to images in Glance. Then when Nova spawns an instance, 
it increments the usage count; when the instance is destroyed, the usage count 
is decremented. And similarly with other components that use images. An image 
could only be deleted when its usage count was zero.

There are ample opportunities to get out of sync there, but it's at least a 
sketch of something that might work, and isn't *too* horribly hackish. Thoughts?
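A minimal sketch of that link-count idea (hypothetical, not Glance code; the names and data structures here are invented for illustration): deletion is refused while an image's usage count is non-zero.

```python
# Toy model of the proposed usage count: components increment it when they
# start using an image, decrement it when done, and delete only at zero.

class ImageInUse(Exception):
    pass

def delete_image(images, image_id):
    """Delete an image, refusing if anything still uses it."""
    if images[image_id]['usage_count'] > 0:
        raise ImageInUse('image %s is still in use' % image_id)
    del images[image_id]

images = {'img-1': {'usage_count': 1},   # e.g. one running instance
          'img-2': {'usage_count': 0}}

delete_image(images, 'img-2')            # succeeds: count is zero
try:
    delete_image(images, 'img-1')        # refused: still in use
except ImageInUse as exc:
    print(exc)
```

The hard part the thread identifies remains: keeping the count in sync across every component that consumes images.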

On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya 
vishvana...@gmail.com wrote:
A simple solution that wouldn’t require modification of glance would be a cron 
job
that lists images and snapshots and marks them protected while they are in use.

Vish

On Dec 16, 2014, at 3:19 PM, Collins, Sean 
sean_colli...@cable.comcast.com wrote:

 On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote:
 No, I'm looking to prevent images that are in use from being deleted. In
 use and protected are disjoint sets.

 I have seen multiple cases of images (and snapshots) being deleted while
 still in use in Nova, which leads to some very, shall we say,
 interesting bugs and support problems.

 I do think that we should try and determine a way forward on this, they
 are indeed disjoint sets. Setting an image as protected is a proactive
 measure, we should try and figure out a way to keep tenants from
 shooting themselves in the foot if possible.

 --
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Chris St. Pierre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-17 Thread Gurjar, Unmesh
 -Original Message-
 From: Zane Bitter [mailto:zbit...@redhat.com]
 Sent: Tuesday, December 16, 2014 9:43 AM
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
 showdown
 
 On 15/12/14 07:47, Murugan, Visnusaran wrote:
  Hi Zane,
 
  We have been going through this chain for quite some time now and we
 still feel a disconnect in our understanding.
 
 Yes, I thought last week that we were on the same page, but now it looks like
 we're drifting off again :(
 
  Can you put up a etherpad where we can understand your approach.
 
 Maybe you could put up an etherpad with your questions. Practically all of
 the relevant code is in Stack._create_or_update, Stack._dependencies and
 Converger.check_resource. That's 134 lines of code by my count.
 There's not a lot more I can usefully say about it without knowing which parts
 exactly you're stuck on, but I can definitely answer specific questions.
 
  For example: for storing resource dependencies, Are you storing its
  name, version tuple or just its ID.
 
 I'm storing a tuple of its name and database ID. The data structure is
 resource.GraphKey. I was originally using the name for something, but I
 suspect I could probably drop it now and just store the database ID, but I
 haven't tried it yet. (Having the name in there definitely makes debugging
 more pleasant though ;)
 

I agree, having the name might come in handy while debugging!

 When I build the traversal graph each node is a tuple of the GraphKey and a
 boolean to indicate whether it corresponds to an update or a cleanup
 operation (both can appear for a single resource in the same graph).

Just to confirm my understanding, the cleanup operation takes care of both:
1. resources which are deleted as part of the update, and
2. the previous versioned resource which was updated by being replaced with a new 
resource (the UpdateReplace scenario).
Also, the cleanup operation is performed after the update completes 
successfully. 
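Under those assumptions, the node scheme could be sketched like this. Only the `resource.GraphKey` name and the (key, update/cleanup) tuple come from the thread; everything else is illustrative, not Heat code.

```python
import collections

# Node scheme from the thread: a GraphKey (name, database ID) plus a boolean
# marking update vs. cleanup.
GraphKey = collections.namedtuple('GraphKey', ['name', 'db_id'])

# An UpdateReplace'd resource appears twice in the same traversal graph:
# an update node for the new version and a cleanup node for the old one.
old = GraphKey('my_server', 7)
new = GraphKey('my_server', 12)
nodes = [(new, True), (old, False)]  # (key, is_update)

update_keys = [key for key, is_update in nodes if is_update]
cleanup_keys = [key for key, is_update in nodes if not is_update]
print(update_keys[0].db_id, cleanup_keys[0].db_id)  # 12 7
```

Keeping the name in the key costs nothing at this scale and, as noted above, makes debugging output readable.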

 
  If I am correct, you are updating all resources on update regardless
  of their change which will be inefficient if stack contains a million 
  resource.
 
 I'm calling update() on all resources regardless of change, but update() will
 only call handle_update() if something has changed (unless the plugin has
 overridden Resource._needs_update()).
 
 There's no way to know whether a resource needs to be updated before
 you're ready to update it, so I don't think of this as 'inefficient', just 
 'correct'.
 
  We have similar questions regarding other areas in your
  implementation, which we believe if we understand the outline of your
  implementation. It is difficult to get a hold on your approach just by 
  looking
 at code. Docs strings / Etherpad will help.
 
 
  About streams, Yes in a million resource stack, the data will be huge, but
 less than template.
 
 No way, it's O(n^3) (cubed!) in the worst case to store streams for each
 resource.
 
  Also this stream is stored
  only In IN_PROGRESS resources.
 
 Now I'm really confused. Where does it come from if the resource doesn't
 get it until it's already in progress? And how will that information help it?
 

When an operation on a stack is initiated, the stream will be identified. To begin
the operation, the action is initiated on the leaf (or root) resource(s), and the
stream is stored (only) in these IN_PROGRESS resource(s).
The stream then keeps getting passed to the next/previous level of 
resource(s) as and when the dependencies for that level of resource(s) are met.
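Reading this description, a toy model of the hand-off might look as follows. All names are invented and this is not Heat code; it only shows "a resource starts once all of its dependencies have completed, and the stream travels with the in-progress work".

```python
# The "stream" (full dependency list) accompanies the operation; a resource
# becomes IN_PROGRESS only when all of its dependencies are COMPLETE.
deps = {'leaf': [], 'mid': ['leaf'], 'root': ['mid']}
stream = list(deps)          # the dependency list carried with the operation
state = {}

def ready(res):
    return all(state.get(d) == 'COMPLETE' for d in deps[res])

order = []
while len(order) < len(deps):
    for res in stream:
        if res not in state and ready(res):
            state[res] = 'IN_PROGRESS'   # stream held by in-progress resources
            # ... the actual create/update work would happen here ...
            state[res] = 'COMPLETE'
            order.append(res)
print(order)  # ['leaf', 'mid', 'root']
```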

  The reason to have entire dependency list to reduce DB queries while a
 stack update.
 
 But we never need to know that. We only need to know what just happened
 and what to do next.
 

As mentioned earlier, each level of resources in the graph passes on the dependency
list/stream to the next/previous level of resources. This information 
should then be used to determine what is to be done next and drive the operation to 
completion.

  When you have a singular dependency on each resource, similar to your
 implementation, then we will end up loading dependencies one at a time and
 altering almost all resources' dependencies regardless of their change.
 
  Regarding a 2-template approach for delete, it is not actually 2
  different templates. It's just that we have a delete stream to be taken up
 post-update.
 
 That would be a regression from Heat's current behaviour, where we start
 cleaning up resources as soon as they have nothing depending on them.
 There's not even a reason to make it worse than what we already have,
 because it's actually a lot _easier_ to treat update and clean up as the same
 kind of operation and throw both into the same big graph. The dual
 implementations and all of the edge cases go away and you can just trust in
 the graph traversal to do the Right Thing in the most parallel way possible.
 
  (Any post operation will be handled as an update) This approach is
  True 

Re: [openstack-dev] [qa] How to delete a VM which is in ERROR state?

2014-12-17 Thread Vishvananda Ishaya
There have been a few, but we were specifically hitting this one:

https://www.redhat.com/archives/libvir-list/2014-March/msg00501.html

Vish

On Dec 17, 2014, at 7:03 AM, Belmiro Moreira 
moreira.belmiro.email.li...@gmail.com wrote:

 Hi Vish,
 do you have more info about the libvirt deadlocks that you observed?
 Maybe I'm observing the same on SLC6 where I can't even kill libvirtd 
 process.
 
 Belmiro
 
 On Tue, Dec 16, 2014 at 12:01 AM, Vishvananda Ishaya vishvana...@gmail.com 
 wrote:
 I have seen deadlocks in libvirt that could cause this. When you are in this 
 state, check to see if you can do a virsh list on the node. If not, libvirt 
 is deadlocked, and ubuntu may need to pull in a fix/newer version.
 
 Vish
 
 On Dec 12, 2014, at 2:12 PM, pcrews glee...@gmail.com wrote:
 
  On 12/09/2014 03:54 PM, Ken'ichi Ohmichi wrote:
  Hi,
 
  This case is always tested by Tempest on the gate.
 
  https://github.com/openstack/tempest/blob/master/tempest/api/compute/servers/test_delete_server.py#L152
 
  So I guess this problem wouldn't happen on the latest version at least.
 
  Thanks
  Ken'ichi Ohmichi
 
  ---
 
  2014-12-10 6:32 GMT+09:00 Joe Gordon joe.gord...@gmail.com:
 
 
  On Sat, Dec 6, 2014 at 5:08 PM, Danny Choi (dannchoi) dannc...@cisco.com
  wrote:
 
  Hi,
 
  I have a VM which is in ERROR state.
 
 
   +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
   | ID                                   | Name                                         | Status | Task State | Power State | Networks |
   +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
   | 1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | cirros--1cb5bf96-619c-4174-baae-dd0d8c3d40c5 | ERROR  | -          | NOSTATE     |          |
   +--------------------------------------+----------------------------------------------+--------+------------+-------------+----------+
 
 
  I tried in both CLI “nova delete” and Horizon “terminate instance”.
  Both accepted the delete command without any error.
  However, the VM never got deleted.
 
  Is there a way to remove the VM?
 
 
  What version of nova are you using? This is definitely a serious bug, you
  should be able to delete an instance in error state. Can you file a bug 
  that
  includes steps on how to reproduce the bug along with all relevant logs.
 
  bugs.launchpad.net/nova
 
 
 
  Thanks,
  Danny
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
  Hi,
 
  I've encountered this in my own testing and have found that it appears to 
  be tied to libvirt.
 
  When I hit this, reset-state as the admin user reports success (and state 
  is set), *but* things aren't really working as advertised and subsequent 
  attempts to do anything with the errant vm's will send them right back into 
  'FLAIL' / can't delete / endless DELETING mode.
 
  restarting libvirt-bin on my machine fixes this - after restart, the 
  deleting vm's are properly wiped without any further user input to 
  nova/horizon and all seems right in the world.
 
  using:
  devstack
  ubuntu 14.04
  libvirtd (libvirt) 1.2.2
 
  triggered via:
  lots of random create/reboot/resize/delete requests of varying validity and 
  sanity.
 
  Am in the process of cleaning up my test code so as not to hurt anyone's 
  brain with the ugly and will file a bug once done, but thought this worth 
  sharing.
 
  Thanks,
  Patrick
 
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-17 Thread Chris St. Pierre
I was assuming atomic increment/decrement operations, in which case I'm not
sure I see the race conditions. Or is atomicity assuming too much?
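For illustration, atomic increment/decrement could look like the following, with an in-process lock standing in for a database-level test-and-set (a compare-and-swap UPDATE). This is pure assumption, with no Glance internals involved.

```python
# Sketch only: make the count update a single atomic read-modify-write,
# instead of a separate read followed by a write (which is where races live).
import threading

class RefCounts:
    def __init__(self):
        self._lock = threading.Lock()
        self._counts = {}

    def incr(self, image_id):
        with self._lock:                  # atomic read-modify-write
            self._counts[image_id] = self._counts.get(image_id, 0) + 1
            return self._counts[image_id]

    def decr(self, image_id):
        with self._lock:
            new = self._counts.get(image_id, 0) - 1
            if new < 0:
                raise ValueError('usage count would go negative')
            self._counts[image_id] = new
            return new

counts = RefCounts()
counts.incr('img-1')
counts.incr('img-1')
print(counts.decr('img-1'))  # 1
```

Across services a lock won't do, so the real equivalent would be something like `UPDATE images SET count = count + 1 WHERE id = ?` in one statement, which is essentially the test-and-set Nikhil mentions.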

On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar 
nikhil.koma...@rackspace.com wrote:

  That looks like a decent alternative if it works. However, it would be
 too racy unless we we implement a test-and-set for such properties or there
 is a different job which queues up these requests and perform sequentially
 for each tenant.

 Thanks,
 -Nikhil
   --
 *From:* Chris St. Pierre [chris.a.st.pie...@gmail.com]
 *Sent:* Wednesday, December 17, 2014 10:23 AM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in
 use?

   That's unfortunately too simple. You run into one of two cases:

  1. If the job automatically removes the protected attribute when an
 image is no longer in use, then you lose the ability to use protected on
 images that are not in use. I.e., there's no way to say, nothing is
 currently using this image, but please keep it around. (This seems
 particularly useful for snapshots, for instance.)

  2. If the job does not automatically remove the protected attribute,
 then an image would be protected if it had ever been in use; to delete an
 image, you'd have to manually un-protect it, which is a workflow that quite
 explicitly defeats the whole purpose of flagging images as protected when
 they're in use.

  It seems like flagging an image as *not* in use is actually a fairly
 difficult problem, since it requires consensus among all components that
 might be using images.

  The only solution that readily occurs to me would be to add something
 like a filesystem link count to images in Glance. Then when Nova spawns an
 instance, it increments the usage count; when the instance is destroyed,
 the usage count is decremented. And similarly with other components that
 use images. An image could only be deleted when its usage count was zero.

  There are ample opportunities to get out of sync there, but it's at
 least a sketch of something that might work, and isn't *too* horribly
 hackish. Thoughts?

 On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya vishvana...@gmail.com
  wrote:

 A simple solution that wouldn’t require modification of glance would be a
 cron job
 that lists images and snapshots and marks them protected while they are
 in use.

 Vish

 On Dec 16, 2014, at 3:19 PM, Collins, Sean 
 sean_colli...@cable.comcast.com wrote:

  On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote:
  No, I'm looking to prevent images that are in use from being deleted.
 In
  use and protected are disjoint sets.
 
  I have seen multiple cases of images (and snapshots) being deleted while
  still in use in Nova, which leads to some very, shall we say,
  interesting bugs and support problems.
 
  I do think that we should try and determine a way forward on this, they
  are indeed disjoint sets. Setting an image as protected is a proactive
  measure, we should try and figure out a way to keep tenants from
  shooting themselves in the foot if possible.
 
  --
  Sean M. Collins
  ___
  OpenStack-dev mailing list
  OpenStack-dev@lists.openstack.org
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




  --
 Chris St. Pierre

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




-- 
Chris St. Pierre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Complexity check and v2 API

2014-12-17 Thread Christopher Yeoh
Hi,

Given the timing (no spec approved) it sounds like a v2.1 plus
microversions (just merging) with no v2 changes at all.

The v2.1 framework is more flexible and you should need no changes to
servers.py at all as there are hooks for adding extra parameters in
separate plugins. There are examples of this in the v3 directory which is
really v2.1 now.

Chris
On Thu, 18 Dec 2014 at 3:49 am, Pasquale Porreca 
pasquale.porr...@dektech.com.au wrote:

 Thank you for the answer.

 my API proposal won't be merged in kilo release since the deadline for
 approval is tomorrow, so I may propose the fix to lower the complexity
 in another way, what do you think about a bug fix?

 On 12/17/14 18:05, Matthew Gilliard wrote:
  Hello Pasquale
 
The problem is that you are trying to add a new if/else branch into
  a method which is already ~250 lines long, and has the highest
  complexity of any function in the nova codebase. I assume that you
  didn't contribute much to that complexity, but we've recently added a
  limit to stop it getting any worse. So, regarding your 4 suggestions:
 
  1/ As I understand it, v2.1 should be the same as v2 at the
  moment, so they need to be kept the same
  2/ You can't ignore it - it will fail CI
  3/ No thank you. This limit should only ever be lowered :-)
  4/ This is 'the right way'. Your suggestion for the refactor does
  sound good.
 
  I suggest a single patch that refactors and lowers the limit in
  tox.ini.  Once you've done that then you can add the new parameter in
  a following patch. Please feel free to add me to any patches you
  create.
 
  Matthew
 
 
 
  On Wed, Dec 17, 2014 at 4:18 PM, Pasquale Porreca
  pasquale.porr...@dektech.com.au wrote:
  Hello
 
  I am working on an API extension that adds a parameter on create server
  call; to implement the v2 API I added few lines of code to
  nova/api/openstack/compute/servers.py
 
  In particular just adding something like
 
  new_param = None
  if self.ext_mgr.is_loaded('os-new-param'):
  new_param = server_dict.get('new_param')
 
  leads to a pep8 fail with message 'Controller.create' is too complex
 (47)
  (Note that in tox.ini the max complexity is fixed to 47 and there is a
 note
  specifying 46 is the max complexity present at the moment).
 
  It is quite easy to make this test pass creating a new method just to
  execute these lines of code, anyway all other extensions are handled in
 that
  way and one of most important stylish rule states to be consistent with
  surrounding code, so I don't think a separate function is the way to go
  (unless it implies a change in how all other extensions are handled
 too).
 
  My thoughts on this situation:
 
  1) New extensions should not consider v2 but only v2.1, so that file
 should
  not be touched
  2) Ignore this error and go on: if and when the extension will be
 merged the
  complexity in tox.ini will be changed too
  3) The complexity in tox.ini should be raised to allow new v2 extensions
  4) The code of that module should be refactored to lower the complexity
  (i.e. move the load of each extension in a separate function)
 
  I would like to know if any of my point is close to the correct
 solution.
 
  --
  Pasquale Porreca
 
  DEK Technologies
  Via dei Castelli Romani, 22
  00040 Pomezia (Roma)
 
  Mobile +39 3394823805
  Skype paskporr
 
 

 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-cinderclient] Return request ID to caller

2014-12-17 Thread Mike Perez
On 05:54 Fri 12 Dec , Malawade, Abhijeet wrote:
 HI,
 
 I want your thoughts on blueprint 'Log Request ID Mappings' for cross 
 projects.
 BP: https://blueprints.launchpad.net/nova/+spec/log-request-id-mappings
 It will enable operators to get request id's mappings easily and will be 
 useful in analysing logs effectively.

I've weighed on this question a couple of times now and recently from the
Cinder meeting. Solution 1 please.

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Looking for feedback: spec for allowing additional IPs to be shared

2014-12-17 Thread Thomas Maddox
Sounds great. I went ahead and set up a Gerrit review here:
https://review.openstack.org/#/c/142566/.

Thanks for the feedback and your time!

-Thomas

On 12/17/14, 10:41 AM, Carl Baldwin c...@ecbaldwin.net wrote:

On Tue, Dec 16, 2014 at 10:32 AM, Thomas Maddox
thomas.mad...@rackspace.com wrote:
 Hey all,

 It seems I missed the Kilo proposal deadline for Neutron,
unfortunately, but
 I still wanted to propose this spec for Neutron and get
feedback/approval,
 sooner rather than later, so I can begin working on an implementation,
even
 if it can't land in Kilo. I opted to put this in an etherpad for now for
 collaboration due to missing the Kilo proposal deadline.

 Spec markdown in etherpad:
 https://etherpad.openstack.org/p/allow-sharing-additional-ips

Thomas,

I did a quick look over and made a few comments because this looked
similar to other stuff that I've looked at recently.  I'd rather read
and comment on this proposal in gerrit where all other specs are
proposed.

Carl



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder driver] A question about Kilo merge point

2014-12-17 Thread Mike Perez
On 11:40 Tue 16 Dec , liuxinguo wrote:
 If a cinder driver cannot be merged into Kilo before Kilo-1, does that mean 
 this driver has very little chance of being merged into Kilo?
 And what percentage of drivers will be merged before Kilo-1, relative to all 
 the drivers that will ultimately be merged into Kilo?
 
 Thanks!

All the details for this are here:

http://lists.openstack.org/pipermail/openstack-dev/2014-October/049512.html

-- 
Mike Perez

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] Please do not merge neutron test changes until client returns one value is merged

2014-12-17 Thread David Kranz
This https://review.openstack.org/#/c/141152/ gets rid of the useless second 
return value from neutron client methods according to this spec: 
https://github.com/openstack/qa-specs/blob/master/specs/clients-return-one-value.rst.

Because the client and test changes have to be in the same patch, this one is 
very large. So please let it merge before any other neutron stuff. 
Any neutron patches will require the simple change of removing the unused first 
return value from neutron client methods. Thanks!

 -David

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][gerrit] Showing all inline comments from all patch sets

2014-12-17 Thread James Polley
But equally I think finding out why the New Screen still doesn't do what
you want is valuable - it's likely other people want something similar to
what you want, so this kind of feedback can be used to decide on future
features

On Wed, Dec 17, 2014 at 8:38 AM, Radoslav Gerganov rgerga...@vmware.com
wrote:

 I am aware of this New Screen but it is not useful to me.  I'd like to
 see comments grouped by patchset, file and commented line rather than a
 flat view mixed with everything else.  Anyway, I guess there is no
 one-size-fits-all solution for this and everyone has different preferences
 which is cool.

 -Rado

 On 12/17/14, 8:58 AM, James Polley wrote:

 I was looking at the new change screen on https://review.openstack.org
 today[1] and it seems to do something vaguely similar.

 Rather than saying James Polley made 4 inline comments, the contents
 of the comments are shown, along with a link to the file so you can see
 the context.

 Have you seen this? It seems fairly similar to what you're wanting.

 [1] To activate it, go to
 https://review.openstack.org/#/settings/preferences and set Change
 view to New Screen, then look at a change screen (such as
 https://review.openstack.org/#/c/127283/)

 On Tue, Dec 16, 2014 at 4:45 PM, Jeremy Stanley fu...@yuggoth.org wrote:

 On 2014-12-16 17:19:55 +0200 (+0200), Radoslav Gerganov wrote:
  We don't need GoogleAppEngine if we decide that this is useful. We
  simply need to put the html page which renders the view on
 https://review.openstack.org. It is all javascript which talks
  asynchronously to the Gerrit backend.
 
  I am using GAE to simply illustrate the idea without having to
  spin up an entire Gerrit server.

 That makes a lot more sense--thanks for the clarification!

  I guess I can also submit a patch to the infra project and see how
  this works on https://review-dev.openstack.org if you want.

 If there's a general desire from the developer community for it,
 then that's probably the next step. However, ultimately this seems
 like something better suited as an upstream feature request for
 Gerrit (there may even already be thread-oriented improvements in
 the works for the new change screen--I haven't kept up with their
 progress lately).
 --
 Jeremy Stanley


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-17 Thread Nikhil Komawar
Guess that's an implementation detail. Depends on the way you go about using 
what's available now, I suppose.

Thanks,
-Nikhil

From: Chris St. Pierre [chris.a.st.pie...@gmail.com]
Sent: Wednesday, December 17, 2014 2:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?

I was assuming atomic increment/decrement operations, in which case I'm not 
sure I see the race conditions. Or is assuming atomicity too much?

On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar 
nikhil.koma...@rackspace.com wrote:
That looks like a decent alternative if it works. However, it would be too racy 
unless we implement test-and-set for such properties, or there is a 
separate job which queues up these requests and performs them sequentially for 
each tenant.

Thanks,
-Nikhil

From: Chris St. Pierre [chris.a.st.pie...@gmail.com]
Sent: Wednesday, December 17, 2014 10:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?

That's unfortunately too simple. You run into one of two cases:

1. If the job automatically removes the protected attribute when an image is no 
longer in use, then you lose the ability to use protected on images that are 
not in use. I.e., there's no way to say, nothing is currently using this 
image, but please keep it around. (This seems particularly useful for 
snapshots, for instance.)

2. If the job does not automatically remove the protected attribute, then an 
image would be protected if it had ever been in use; to delete an image, you'd 
have to manually un-protect it, which is a workflow that quite explicitly 
defeats the whole purpose of flagging images as protected when they're in use.

It seems like flagging an image as *not* in use is actually a fairly difficult 
problem, since it requires consensus among all components that might be using 
images.

The only solution that readily occurs to me would be to add something like a 
filesystem link count to images in Glance. Then when Nova spawns an instance, 
it increments the usage count; when the instance is destroyed, the usage count 
is decremented. And similarly with other components that use images. An image 
could only be deleted when its usage count was zero.

There are ample opportunities to get out of sync there, but it's at least a 
sketch of something that might work, and isn't *too* horribly hackish. Thoughts?
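The link-count idea above can be sketched as follows, assuming an atomic test-and-set is available (here simulated with a single in-process lock; Glance offers no such counter today, so every name below is hypothetical):

```python
import threading


class ImageUsageCounter(object):
    """Sketch of a link-count guard for image deletion (hypothetical)."""
    def __init__(self):
        self._counts = {}
        self._lock = threading.Lock()  # stands in for an atomic DB update

    def acquire(self, image_id):
        """A consumer (e.g. an instance boot) takes a reference."""
        with self._lock:
            self._counts[image_id] = self._counts.get(image_id, 0) + 1

    def release(self, image_id):
        """A consumer is destroyed; drop its reference."""
        with self._lock:
            self._counts[image_id] = max(0, self._counts.get(image_id, 0) - 1)

    def try_delete(self, image_id):
        """Delete only when no consumer holds a reference (test-and-set)."""
        with self._lock:
            if self._counts.get(image_id, 0) == 0:
                self._counts.pop(image_id, None)
                return True
            return False


counter = ImageUsageCounter()
counter.acquire('img-1')            # e.g. nova boots an instance
print(counter.try_delete('img-1'))  # False: still in use
counter.release('img-1')            # instance destroyed
print(counter.try_delete('img-1'))  # True
```

The check-and-delete happens under the same lock as the increments, which is what keeps it from racing; a real implementation would need the equivalent atomicity in the database layer, across all the components that consume images.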

On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya 
vishvana...@gmail.com wrote:
A simple solution that wouldn’t require modification of glance would be a cron 
job
that lists images and snapshots and marks them protected while they are in use.

Vish

On Dec 16, 2014, at 3:19 PM, Collins, Sean 
sean_colli...@cable.comcast.com wrote:

 On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote:
 No, I'm looking to prevent images that are in use from being deleted. In
 use and protected are disjoint sets.

 I have seen multiple cases of images (and snapshots) being deleted while
 still in use in Nova, which leads to some very, shall we say,
 interesting bugs and support problems.

 I do think that we should try and determine a way forward on this, they
 are indeed disjoint sets. Setting an image as protected is a proactive
 measure, we should try and figure out a way to keep tenants from
 shooting themselves in the foot if possible.

 --
 Sean M. Collins



--
Chris St. Pierre





--
Chris St. Pierre
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-*client] py33 jobs seem to be failing

2014-12-17 Thread Jeremy Stanley
On 2014-12-17 11:09:59 -0500 (-0500), Steve Martinelli wrote:
[...]
 The stack trace leads me to believe that docutils or sphinx is the
 culprit, but neither has released a new version in the time the
 bug has been around, so I'm not sure what the root cause of the
 problem is.

It's an unforeseen interaction between new PBR changes to support
Setuptools 8 and the way docutils supports Py3K by running 2to3
during installation (entrypoint scanning causes pre-translated
docutils to be loaded into the execution space through the egg-info
writer PBR grew to be able to record Git SHA details outside of
version strings). A solution is currently being developed.
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Horizon] [UX] Curvature interactive virtual network design

2014-12-17 Thread Jesse Pretorius
Yes please. :)
On Fri, 7 Nov 2014 at 16:19 John Davidge (jodavidg) jodav...@cisco.com
wrote:

   As discussed in the Horizon contributor meet up, here at Cisco we’re
 interested in upstreaming our work on the Curvature dashboard into Horizon.
 We think that it can solve a lot of issues around guidance for new users
 and generally improving the experience of interacting with Neutron.
 Possibly an alternative persona for novice users?

  For reference, see:

1. http://youtu.be/oFTmHHCn2-g – Video Demo
2. https://www.openstack.org/summit/portland-2013/session-videos/presentation/interactive-visual-orchestration-with-curvature-and-donabe – Portland presentation
3. https://github.com/CiscoSystems/curvature – original (Rails based) code

  We’d like to gauge interest from the community on whether this is
 something people want.

  Thanks,

  John, Brad & Sam

   ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-cinderclient] Supported client-side sort keys

2014-12-17 Thread Steven Kaufer

The cinder client supports passing a sort key via the --sort_key argument.
The client restricts the sort keys that the user can supply to the
following:
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/v2/volumes.py#L28-L29

This list of sort keys is not complete.  As far as I know, all attributes on
this class are valid:
https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/models.py#L104

I noticed that the 'name' key is incorrect and it should instead be
'display_name'.  Before I create a bug/fix to address this, I have the
following questions:

Does anyone know the rationale behind the client restricting the possible
sort keys?
Why not allow the user to supply any sort key (assuming that invalid keys
are gracefully handled)?

Note, if you try this out at home, you'll notice that the client table is
not actually sorted, fixed under:  https://review.openstack.org/#/c/141964/
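One hedged sketch of the graceful-handling idea: translate known aliases such as 'name' to 'display_name' and validate the rest before sending the request. The whitelist contents below are illustrative, not the actual cinderclient list:

```python
# Aliases translating user-facing keys to model attributes (illustrative).
SORT_KEY_ALIASES = {'name': 'display_name'}

# Example whitelist; a client might instead accept any key and rely on the
# server to reject invalid ones gracefully.
VALID_SORT_KEYS = {'id', 'status', 'size', 'display_name', 'created_at'}


def resolve_sort_key(key):
    """Map aliases and reject unknown sort keys with a clear error."""
    key = SORT_KEY_ALIASES.get(key, key)
    if key not in VALID_SORT_KEYS:
        raise ValueError("Invalid sort key: %s" % key)
    return key


print(resolve_sort_key('name'))    # display_name
print(resolve_sort_key('status'))  # status
```

With this shape, adding a new sortable attribute is a one-line whitelist change, and the 'name' vs 'display_name' mismatch described above becomes an alias rather than a bug.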

Thanks,
Steven Kaufer
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara] team meeting Dec 18 1400 UTC

2014-12-17 Thread Sergey Lukjanov
Hi folks,

We'll be having the Sahara team meeting in
#openstack-meeting-3 channel.

Agenda: https://wiki.openstack.org/wiki/Meetings/SaharaAgenda#Next_meetings

http://www.timeanddate.com/worldclock/fixedtime.html?msg=Sahara+Meetingiso=20141218T14

NOTE: It's a new alternate time slot.

-- 
Sincerely yours,
Sergey Lukjanov
Sahara Technical Lead
(OpenStack Data Processing)
Principal Software Engineer
Mirantis Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [hacking] proposed rules drop for 1.0

2014-12-17 Thread Ben Nemec
For anyone who's interested, the final removals are in a series starting
here: https://review.openstack.org/#/c/142585/

On 12/09/2014 05:39 AM, Sean Dague wrote:
 I'd like to propose that for hacking 1.0 we drop 2 groups of rules entirely.
 
 1 - the entire H8* group. This doesn't function on python code, it
 functions on git commit message, which makes it tough to run locally. It
 also would be a reason to prevent us from not rerunning tests on commit
 message changes (something we could do after the next gerrit update).
 
 2 - the entire H3* group - because of this -
 https://review.openstack.org/#/c/140168/2/nova/tests/fixtures.py,cm
 
 A look at the H3* code shows that it's terribly complicated, and is
 often full of bugs (a few bit us last week). I'd rather just delete it
 and move on.
 
   -Sean
 


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] HTTPS for spice console

2014-12-17 Thread Akshik DBK
Is there a recommended approach to configuring the SPICE console proxy securely 
over HTTPS? I could not find proper documentation for this. Can someone point me 
in the right direction?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled

2014-12-17 Thread Padmanabhan Krishnan
This means whatever tools the operators are using, they need to make sure the IP 
address assigned inside the VM matches what OpenStack has assigned to the port.

Bringing up the question that I had in another thread on the same topic:
If one wants to use the provider DHCP server and not have OpenStack's DHCP or 
L3 agent/DVR, it may not be possible to do so even with DHCP disabled in the 
OpenStack network. Even if the provider DHCP server is configured with the same 
start/end range in the same subnet, there's no guarantee that it will match the 
OpenStack-assigned IP address for bulk VM launches or when there's a failure. 
So, how does one deploy external DHCP with OpenStack?

If OpenStack hasn't assigned an IP address when DHCP is disabled for a network, 
can't port_update be done with the provider-DHCP-specified IP address to put 
the anti-spoofing and security rules in place? With an OpenStack-assigned IP 
address, port_update cannot be done since the IP addresses aren't in sync and 
can overlap.

Thanks,
Paddu



On 12/16/14 4:30 AM, Pasquale Porreca pasquale.porr...@dektech.com.au
wrote:

I understood and I agree that assigning the ip address to the port is
not a bug, however showing it to the user, at least in Horizon dashboard
where it pops up in the main instance screen without a specific search,
can be very confusing.

On 12/16/14 12:25, Salvatore Orlando wrote:
 In Neutron, IP address management and distribution are separate concepts.
 IP addresses are assigned to ports even when DHCP is disabled. That IP
 address is indeed used to configure anti-spoofing rules and security groups.
 
 It is however understandable that one wonders why an IP address is assigned
 to a port if there is no DHCP server to communicate that address. Operators
 might decide to use different tools to ensure the IP address is then
 assigned to the instance's ports. On XenServer for instance one could use a
 guest agent reading network configuration from XenStore; as another
 example, older versions of OpenStack used to inject network configuration
 into the instance file system; I reckon that today's configdrive might also
 be used to configure the instance's networking.
 
 Summarising, I don't think this is a bug. Nevertheless if you have any idea
 regarding improvements on the API UX feel free to file a bug report.
 
 Salvatore
 
 On 16 December 2014 at 10:41, Pasquale Porreca 
 pasquale.porr...@dektech.com.au wrote:

  Is there a specific reason for which a fixed ip is bound to a port on a
  subnet where dhcp is disabled? It is confusing to have this info shown
  when the instance doesn't actually have an ip on that port.
  Should I file a bug report, or is this a wanted behavior?

 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr

 

-- 
Pasquale Porreca

DEK Technologies
Via dei Castelli Romani, 22
00040 Pomezia (Roma)

Mobile +39 3394823805
Skype paskporr



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python-cinderclient] Return request ID to caller

2014-12-17 Thread Jamie Lennox


- Original Message -
 From: Abhijeet Malawade abhijeet.malaw...@nttdata.com
 To: openstack-dev@lists.openstack.org
 Sent: Friday, 12 December, 2014 3:54:04 PM
 Subject: [openstack-dev] [python-cinderclient] Return request ID to caller
 
 
 
 HI,
 
 
 
 I want your thoughts on blueprint 'Log Request ID Mappings’ for cross
 projects.
 
 BP: https://blueprints.launchpad.net/nova/+spec/log-request-id-mappings
 
 It will enable operators to get request id's mappings easily and will be
 useful in analysing logs effectively.
 
 
 
 For logging 'Request ID Mappings', client needs to return
 'x-openstack-request-id' to the caller.
 
 Currently python-cinderclient do not return 'x-openstack-request-id' back to
 the caller.
 
 
 
 As of now, I could think of below two solutions to return 'request-id' back
 from cinder-client to the caller.
 
 
 
 1. Return tuple containing response header and response body from all
 cinder-client methods.
 
 (response header contains 'x-openstack-request-id').
 
 
 
 Advantages:
 
 A. In future, if the response headers are modified then it will be available
 to the caller without making any changes to the python-cinderclient code.
 
 
 
 Disadvantages:
 
 A. Affects all services using python-cinderclient library as the return type
 of each method is changed to tuple.
 
 B. Need to refactor all methods exposed by the python-cinderclient library.
 Also requires changes in the cross projects wherever python-cinderclient
 calls are being made.
 
 
 
 Ex. :-
 
 From Nova, you will need to call cinder-client 'get' method like below :-
 
 resp_header, volume = cinderclient(context).volumes.get(volume_id)
 
 
 
 x-openstack-request-id = resp_header.get('x-openstack-request-id', None)
 
 
 
 Here cinder-client will return both response header and volume. From response
 header, you can get 'x-openstack-request-id'.
 
 
 
 2. The optional parameter 'return_req_id' of type list will be passed to each
 of the cinder-client method. If this parameter is passed then cinder-client
 will append ‘'x-openstack-request-id' received from cinder api to this list.
 
 
 
 This is already implemented in glance-client (for V1 api only)
 
 Blueprint :
 https://blueprints.launchpad.net/python-glanceclient/+spec/return-req-id
 
 Review link : https://review.openstack.org/#/c/68524/7
 
 
 
 Advantages:
 
 A. Requires changes in the cross projects only at places wherever
 python-cinderclient calls are being made requiring 'x-openstack-request-id’.
 
 
 
 Dis-advantages:
 
 A. Need to refactor all methods exposed by the python-cinderclient library.
 
 
 
 Ex. :-
 
 From Nova, you will need to pass return_req_id parameter as a list.
 
 kwargs['return_req_id'] = []
 
 item = cinderclient(context).volumes.get(volume_id, **kwargs)
 
 
 
 if kwargs.get('return_req_id'):
 
 x-openstack-request-id = kwargs['return_req_id'].pop()
 
 
 
 python-cinderclient will add 'x-openstack-request-id' to the 'return_req_id'
 list if it is provided in kwargs.
 
 
 
 IMO, solution #2 is better than #1 for the reasons quoted above.
 
 Takashi NATSUME has already proposed a patch for solution #2. Please review
 patch https://review.openstack.org/#/c/104482/.
 
 Would appreciate if you can think of any other better solution than #2.
 
 
 
 Thank you.
 
 

Abhijeet
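A minimal sketch of the option-2 pattern described above: the caller passes a return_req_id list and the client appends the x-openstack-request-id header to it. The FakeVolumeManager and the hard-coded header are stand-ins, not real cinderclient code:

```python
class FakeVolumeManager(object):
    """Hypothetical client manager sketching the return_req_id pattern."""
    def get(self, volume_id, **kwargs):
        # Pretend we performed the HTTP request and got these back.
        resp_headers = {'x-openstack-request-id': 'req-12345'}
        body = {'id': volume_id}
        # Append the request id to the caller-supplied list, if given;
        # callers that don't pass return_req_id are unaffected.
        if isinstance(kwargs.get('return_req_id'), list):
            kwargs['return_req_id'].append(
                resp_headers['x-openstack-request-id'])
        return body


mgr = FakeVolumeManager()
req_ids = []
volume = mgr.get('vol-1', return_req_id=req_ids)
print(volume['id'])   # vol-1
print(req_ids.pop())  # req-12345
```

Because the return value is unchanged, existing callers keep working; only callers that want the request id opt in, which is what makes option 2 backward compatible where option 1 is not.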

So option 1 is a massive compatibility break. There's no way you can pull off a 
change in the return value like that without a new major version and everyone 
getting annoyed. 

My question is why does it need to be returned to the caller? What is the 
caller going to do with it other than send it to the debug log? It's an admin 
who is trying to figure out those logs later that wants the request-id included 
in that information, not the application at run time. 

Why not just have cinderclient log it as part of the standard request logging: 
https://github.com/openstack/python-cinderclient/blob/master/cinderclient/client.py#L170



Jamie

 __
 Disclaimer: This email and any attachments are sent in strictest confidence
 for the sole use of the addressee and may contain legally privileged,
 confidential, and proprietary data. If you are not the intended recipient,
 please advise the sender by replying promptly to this email and then delete
 and destroy this email and any attachments without any further use, copying
 or forwarding.
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Question about Murano installation

2014-12-17 Thread Serg Melikyan
Hi, Raghavendra

Given the screenshots that you have sent, you are using an extremely outdated
version of murano-dashboard (and probably outdated versions of all other
components). That is why you may experience issues with the manual at
http://murano.readthedocs.org/ written for the Juno version of Murano.

I encourage you to use the Juno version of Murano; you can obtain it by
checking out the sources, e.g. for murano-dashboard:

git clone https://github.com/stackforge/murano-dashboard
git checkout 2014.2

You also can download tarballs from http://tarballs.openstack.org/:

   - murano-2014.2.tar.gz:
     http://tarballs.openstack.org/murano/murano-2014.2.tar.gz
     (wheel: http://tarballs.openstack.org/murano/murano-2014.2-py2-none-any.whl)
   - murano-dashboard-2014.2.tar.gz:
     http://tarballs.openstack.org/murano-dashboard/murano-dashboard-2014.2.tar.gz
     (wheel: http://tarballs.openstack.org/murano-dashboard/murano_dashboard-2014.2-py2-none-any.whl)


Please find answers to your questions below:

should I use for mysql and the [murano] the localhost:8082, please clarify.

Option url in the murano section should point to the address where murano-api is
running. Option connection in the database section should point to the address
where MySQL is running. Unfortunately I don't know your OpenStack
deployment scheme, so I can't answer more accurately.

How can I install Murano Agent please provide details?

murano-agent is an agent which runs on guest VMs and is responsible for
provisioning applications on the VM. It is not required, but many existing
applications use murano-agent for application provisioning/configuration.

We use the Disk Image Builder project
(https://github.com/openstack/diskimage-builder) to build images with
murano-agent installed; please refer to the murano-agent ReadMe
(https://github.com/stackforge/murano-agent#image-building-using-dib) for
details about how to build an image with murano-agent.

We also have pre-built images with murano-agent for Fedora 17 and Ubuntu
14.04:

   - http://murano-files.mirantis.com/F17-x86_64-cfntools.qcow2
   -
   http://murano-files.mirantis.com/ubuntu_14_04-murano-agent_stable_juno.qcow2

How to install dashboard can I follow doc and install using tox?
Where to update the murano_metadata url details?

To install murano-dashboard correctly you need just to follow manual and
use Murano 2014.2 version. There is no option MURANO_METADATA_URL anymore
in murano-dashboard.

On Tue, Dec 16, 2014 at 9:13 PM, raghavendra@accenture.com wrote:

  Hi Serg,



 Thank you for your response.



 I have the Openstack Ubuntu Juno version on 14.04 LTS.



 I am following the below link



 https://murano.readthedocs.org/en/latest/install/manual.html



 I have attached the error messages with this email.



 I am unable to see the Application menu on the Murano dashboard.

 I am unable to install Applications for the Murano. (Please let me know
 how we can install the Application packages)



 I would like to know about Murano Agent.



 Can we spin up a  VM from Openstack and install the Murano Agent
 components for creating image ?

 If not please provide details.





 Incase of any doubts please let me know.



 Regards,

 Raghavendra Lad

 Mobile: +9198800 40919



 *From:* Serg Melikyan [mailto:smelik...@mirantis.com]
 *Sent:* Wednesday, December 17, 2014 12:58 AM
 *To:* Lad, Raghavendra
 *Subject:* Murano Mailing-List



 Hi, Raghavendra



 I would like to mention that we don't use mailing-list on launchpad
 anymore, there is no reason to duplicate messages sent to openstack-dev@
 to the murano-...@lists.launchpad.net.



 You can also reach team working on Murano using IRC on #murano at FreeNode



 --

 Serg Melikyan, Senior Software Engineer at Mirantis, Inc.

 http://mirantis.com | smelik...@mirantis.com


 +7 (495) 640-4904, 0261

 +7 (903) 156-0836

 --

 This message is for the designated recipient only and may contain
 privileged, proprietary, or otherwise confidential information. If you have
 received it in error, please notify the sender immediately and delete the
 original. Any other use of the e-mail by you is prohibited. Where allowed
 by local law, electronic communications with Accenture and its affiliates,
 including e-mail and instant messaging (including content), may be scanned
 by our systems for the purposes of information security and assessment of
 internal compliance with Accenture policy.

 __

 www.accenture.com



-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] confused about trove-guestagent need nova's auth info

2014-12-17 Thread 乔建
When using trove, we need to configure nova's user information in the 
configuration file of trove-guestagent, such as:

- nova_proxy_admin_user
- nova_proxy_admin_pass
- nova_proxy_admin_tenant_name

Is this necessary? In a public cloud environment, it will lead to serious 
security risks.

I traced the code, and noticed that the auth data mentioned above is packaged 
in a context object, then passed to the trove-conductor via the message queue.

Would it be more suitable for trove-conductor to get the corresponding 
information from its own conf file?
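For illustration only, a minimal trove-guestagent.conf fragment showing the options in question; the values are placeholders, not recommendations:

```ini
[DEFAULT]
# Credentials the guestagent currently needs to reach nova (placeholders).
nova_proxy_admin_user = admin
nova_proxy_admin_pass = CHANGE_ME
nova_proxy_admin_tenant_name = service
```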
 
 
 
Thanks!
 
qiao
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-17 Thread Zane Bitter

On 17/12/14 13:05, Gurjar, Unmesh wrote:

I'm storing a tuple of its name and database ID. The data structure is
resource.GraphKey. I was originally using the name for something, but I
suspect I could probably drop it now and just store the database ID, but I
haven't tried it yet. (Having the name in there definitely makes debugging
more pleasant though ;)



I agree, having name might come in handy while debugging!


When I build the traversal graph each node is a tuple of the GraphKey and a
boolean to indicate whether it corresponds to an update or a cleanup
operation (both can appear for a single resource in the same graph).


Just to confirm my understanding, cleanup operation takes care of both:
1. resources which are deleted as a part of update and
2. previous versioned resource which was updated by replacing with a new
resource (UpdateReplace scenario)


Yes, correct. Also:

3. resource versions which failed to delete for whatever reason on a 
previous traversal



Also, the cleanup operation is performed after the update completes 
successfully.


NO! They are not separate things!

https://github.com/openstack/heat/blob/stable/juno/heat/engine/update.py#L177-L198


If I am correct, you are updating all resources on update regardless
of their change which will be inefficient if stack contains a million resource.


I'm calling update() on all resources regardless of change, but update() will
only call handle_update() if something has changed (unless the plugin has
overridden Resource._needs_update()).

There's no way to know whether a resource needs to be updated before
you're ready to update it, so I don't think of this as 'inefficient', just 
'correct'.
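A toy illustration of that control flow (simplified and hypothetical; not the actual Resource class):

```python
class Resource:
    """Toy model: update() is always called, but handle_update() only
    runs when _needs_update() detects a change."""

    def __init__(self, props):
        self.props = props
        self.updates_handled = 0

    def _needs_update(self, new_props):
        return new_props != self.props

    def update(self, new_props):
        if not self._needs_update(new_props):
            return False            # cheap no-op for unchanged resources
        self.handle_update(new_props)
        self.props = new_props
        return True

    def handle_update(self, new_props):
        self.updates_handled += 1   # plugin-specific work would go here
```

So calling update() on a million unchanged resources costs a million cheap comparisons, not a million real updates.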


We have similar questions regarding other areas in your implementation,
which we believe will be resolved once we understand the outline of your
implementation. It is difficult to get a handle on your approach just by
looking at the code. Docstrings / an Etherpad will help.



About streams: yes, in a million-resource stack the data will be huge, but

still less than the template.

No way, it's O(n^3) (cubed!) in the worst case to store streams for each
resource.


Also this stream is stored
only in IN_PROGRESS resources.


Now I'm really confused. Where does it come from if the resource doesn't
get it until it's already in progress? And how will that information help it?



When an operation on a stack is initiated, the stream will be identified.


OK, this may be one of the things I was getting confused about - I 
thought a 'stream' belonged to one particular resource and just contained 
all of the paths to reaching that resource. But here it seems like 
you're saying that a 'stream' is a representation of the entire graph? 
So it's essentially just a gratuitously bloated NIH serialisation of the 
Dependencies graph?



To begin
the operation, the action is initiated on the leaf (or root) resource(s) and the
stream is stored (only) in this/these IN_PROGRESS resource(s).


How does that work? Does it get deleted again when the resource moves to 
COMPLETE?



The stream should then keep getting passed to the next/previous level of 
resource(s) as
and when the dependencies for the next/previous level of resource(s) are met.


That sounds... identical to the way it's implemented in my prototype 
(passing a serialisation of the graph down through the notification 
triggers), except for the part about storing it in the Resource table. 
Why would we persist to the database data that we only need for the 
duration that we already have it in memory anyway?


If we're going to persist it we should do so once, in the Stack table, 
at the time that we're preparing to start the traversal.



The reason to have the entire dependency list is to reduce DB queries

during a stack update.

But we never need to know that. We only need to know what just happened
and what to do next.



As mentioned earlier, each level of resources in the graph passes on the
dependency list/stream to the next/previous level of resources. This
information should then be used to determine what is to be done next and
drive the operation to completion.


Why would we store *and* forward?


When you have a singular dependency on each resource, similar to your
implementation, then we will end up loading Dependencies one at a time and

altering almost all resources' dependencies regardless of whether they changed.


Regarding the 2-template approach for delete: it is not actually 2
different templates. It's just that we have a delete stream to be taken up

post-update.

That would be a regression from Heat's current behaviour, where we start
cleaning up resources as soon as they have nothing depending on them.
There's not even a reason to make it worse than what we already have,
because it's actually a lot _easier_ to treat update and clean up as the same
kind of operation and throw both into the same big graph. The dual
implementations and all of the edge cases go away and you can just trust in
the graph traversal to do the Right Thing in 

Re: [openstack-dev] [oslo] [taskflow] sprint review day

2014-12-17 Thread Joshua Harlow
Thanks to all those who showed up and helped in the sprint (even those there 
in spirit, due to the setuptools issues happening this week)!


We knocked out a good number of reviews and hopefully can keep knocking 
them out as time goes on...


Etherpad for those interested:

https://etherpad.openstack.org/p/taskflow-kilo-sprint

Feel free to keep on helping (it's always appreciated).

Thanks again!

-Josh

Doug Hellmann wrote:

On Dec 10, 2014, at 2:12 PM, Joshua Harlowharlo...@outlook.com  wrote:


Hi everyone,

The OpenStack oslo team will be hosting a virtual sprint in the
Freenode IRC channel #openstack-oslo for the taskflow subproject on
Wednesday 12-17-2014 starting at 16:00 UTC and going for ~8 hours.

The goal of this sprint is to work on any open reviews, documentation
or any other integration questions, development and so-on, so that we
can help progress the taskflow subproject forward at a good rate.

Live version of the current documentation is available here:

http://docs.openstack.org/developer/taskflow/

The code itself lives in the openstack/taskflow repository.

http://git.openstack.org/cgit/openstack/taskflow/tree

Please feel free to join if interested, curious, or able.

Much appreciated,

Joshua Harlow



Thanks for setting this up, Josh!

This day works for me. We need to make sure a couple of other Oslo cores can 
make it that day for the sprint to really be useful, so everyone please let us 
know if you can make it.

Doug




[openstack-dev] [oslo] oslo.service graduating - Primary Maintainers

2014-12-17 Thread Sachi King
Hi,

Oslo service is graduating and is looking for a primary maintainer.

The following are the listed maintainers for the submodules that are not 
orphans.
service - Michael Still
periodic_task - Michael Still
request_utils - Sandy Walsh
systemd - Alan Pevec

Would any of you like to take up being the primary maintainer for oslo.service?
Additionally, do you have any pending work that we should delay graduation for?

Further details can be found in the in-progress spec.
https://review.openstack.org/#/c/142659/

Cheers,
Sachi



[openstack-dev] [nova] Setting MTU size for tap device

2014-12-17 Thread Ryu Ishimoto
Hi All,

I noticed that in linux_net.py, the method to create a tap interface[1]
does not let you set the MTU size.  In other places, I see calls made to
set the MTU of the device [2].

I'm wondering if there is any technical reason why we can't also set
the MTU size when creating tap interfaces in the general case.  In certain
overlay solutions, this would come in handy.  If there isn't any, I would
love to submit a patch to accomplish this.

Thanks in advance!

Ryu

[1]
https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1374
[2]
https://github.com/openstack/nova/blob/master/nova/network/linux_net.py#L1309
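A rough sketch of what such a patch might do, reduced to command construction (the function name and ordering are illustrative, not nova's actual linux_net.py code):

```python
def tap_dev_commands(dev, mac_address=None, mtu=None):
    """Build the ip(8) invocations for creating and configuring a tap
    device, optionally setting the MTU before bringing it up."""
    cmds = [['ip', 'tuntap', 'add', dev, 'mode', 'tap']]
    if mac_address:
        cmds.append(['ip', 'link', 'set', dev, 'address', mac_address])
    if mtu:
        # set the MTU the same way it is done for other device types [2]
        cmds.append(['ip', 'link', 'set', dev, 'mtu', str(mtu)])
    cmds.append(['ip', 'link', 'set', dev, 'up'])
    return cmds
```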


Re: [openstack-dev] [Mistral] ActionProvider

2014-12-17 Thread Renat Akhmerov
Winson,

The idea itself makes a lot of sense to me because we’ve had a number of 
discussions about how we could make the action subsystem even more pluggable 
and flexible. One of the problems we’d like to solve is being able to add 
actions “on the fly” while staying safe. I think this whole thing comes down 
to specific technical details, so I would like to see more of them. Generally 
speaking, you’re right about actions residing in a database; about 3 months 
ago we made this refactoring and put all actions into the DB, but it may not 
be 100% necessary. Btw, we already have a concept of an action generator that 
we use to automatically build OpenStack actions, so you can take a look at how 
they work. Long story short: we’ve already made some steps towards being more 
flexible and have some facilities that could be further improved.

Again, the idea is very interesting to me (and not only to me). Please share 
the details.

Thanks

Renat Akhmerov
@ Mirantis Inc.



 On 17 Dec 2014, at 13:22, W Chan m4d.co...@gmail.com wrote:
 
 Renat,
 
 We want to introduce the concept of an ActionProvider to Mistral.  We are 
 thinking that with an ActionProvider, a third party system can extend Mistral 
 with its own action catalog and set of dedicated and specialized action 
 executors.  The ActionProvider will return its own list of actions via an 
 abstract interface.  This minimizes the complexity and latency in managing 
 and sync'ing the Action table.  In the DSL, we can define provider-specific 
 context/configuration separately and apply it to all provider-specific actions 
 without explicitly passing it as inputs.  WDYT?
 
 Winson
   
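The abstract interface Winson describes might look roughly like this (a sketch under assumptions; the class and method names are invented, not Mistral's API):

```python
import abc

class ActionProvider(abc.ABC):
    """Hypothetical pluggable catalog: a third-party system implements
    this to expose its own actions without syncing the Action table."""

    @abc.abstractmethod
    def list_actions(self):
        """Return descriptors for the actions this provider offers."""

class EchoProvider(ActionProvider):
    """Toy provider exposing a single illustrative action."""
    def list_actions(self):
        return [{'name': 'echo.say', 'input': ['message']}]
```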




[openstack-dev] [OSSN 0038] Suds client subject to cache poisoning by local attacker

2014-12-17 Thread Nathan Kinder
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Suds client subject to cache poisoning by local attacker
- ---

### Summary ###
Suds is a Python SOAP client for consuming Web Services. Its default
cache implementation stores pickled objects to a predictable path in
/tmp. This can be used by a local attacker to redirect SOAP requests via
symlinks or run a privilege escalation or code execution attack.

### Affected Services / Software ###
Cinder, Nova, Grizzly, Havana, Icehouse

### Discussion ###
The Python 'suds' package is used by oslo.vmware to interface with SOAP
service APIs and both Cinder and Nova have dependencies on oslo.vmware
when using VMware drivers. By default suds uses an on-disk cache that
places pickle files, serialised Python objects, into a known location
'/tmp/suds'. A local attacker could use symlinks or place crafted files
into this location that will later be deserialised by suds.

By manipulating the content of the cached pickle files, an attacker can
redirect or modify SOAP requests. Alternatively, pickle may be used to
run injected Python code during the deserialisation process. This can
allow the spawning of a shell to execute arbitrary OS level commands
with the permissions of the service using suds, thus leading to possible
privilege escalation.

At the time of writing, the suds package appears largely unmaintained
upstream. However, vendors have released patched versions that do not
suffer from the predictable cache path problem. Ubuntu is known to offer
one such patched version (python-suds_0.4.1-2ubuntu1.1).

### Recommended Actions ###
The recommended solution to this issue is to disable cache usage in the
configuration as shown:

  'client.set_options(cache=None)'

A fix has been released to oslo.vmware (0.6.0) that disables the use of
the disk cache by default. Cinder and Nova have both adjusted their
requirements to include this fixed version. Deployers wishing to
re-enable the cache should ascertain whether or not their vendor
shipped suds package is susceptible and consider the above advice.

### Contacts / References ###
This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0038
Original Launchpad Bug : https://bugs.launchpad.net/ossn/+bug/1341954
OpenStack Security ML : openstack-secur...@lists.openstack.org
OpenStack Security Group : https://launchpad.net/~openstack-ossg
Suds: https://pypi.python.org/pypi/suds
CVE: CVE-2013-2217
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iQEcBAEBAgAGBQJUkncTAAoJEJa+6E7Ri+EV4sQH/RUgDVqGRs5tdBGApTd3ljq0
ThqY8+5/3dqOYJ767/tTQ7WghGcPouFV8hXeco2ZS7oYS41kcBwQnvTRCol6bRqH
ayKjQIiNvaonHsSSwyhB1fMuUTjMzbTDg6w94xfy2Ibl+0XTskXkhQ2qqLB7yG4H
4sDWZNykE5sGcpn7zB2Xr+6IkODqNlPI5AAGmLBM9N1XB/Y98tQ+k8V7T3cvuF6+
77/o6WiyD5Q5g5s2/yaOuvOhZu4W3bxAXwKskYBvVIoxA90SPu66hQ2BQHPIzSIX
pZG0efK25s1slgY8yL8uNAG2GLIhhgvDk8aW5GkV9XJQ4jIh+15TILNmazSq9Q0=
=hEO/
-END PGP SIGNATURE-



[openstack-dev] ask for usage of quota reserve

2014-12-17 Thread Eli Qiao(Li Yong Qiao)
hi all,
can anyone tell me what will happen if we call quotas.reserve() but never
call quotas.commit() or quotas.rollback()?

for example:

 1. when doing a resize, we call quotas.reserve() to reserve a delta
    quota (new_flavor - old_flavor)
 2. for some reason, nova-compute crashed and had no chance to call
    quotas.commit() or quotas.rollback() (called by finish_resize in
    nova/compute/manager.py)
 3. the next time the nova-compute server restarts, is the delta quota still
    reserved, or do we need any other operation on the quotas?
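A toy model of the reserve/commit/rollback lifecycle (purely illustrative; nova's real implementation stores reservations in the DB and, if I recall correctly, eventually expires unclaimed ones via a configurable timeout):

```python
class Quotas:
    """Toy in-memory model: reserve() holds a delta until either
    commit() applies it or rollback() releases it."""

    def __init__(self, in_use=0):
        self.in_use = in_use
        self.reserved = 0

    def reserve(self, delta):
        self.reserved += delta

    def commit(self):
        self.in_use += self.reserved
        self.reserved = 0

    def rollback(self):
        self.reserved = 0

# If the process dies between reserve() and commit()/rollback(), the
# delta stays in 'reserved' -- counted against the quota but never used.
```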

Thanks in advance
-Eli.

ps: this is related to the patch: Handle RESIZE_PREP status when nova-compute
does init_instance (https://review.openstack.org/#/c/132827/)

-- 
Thanks Eli Qiao(qia...@cn.ibm.com)



Re: [openstack-dev] [Mistral] RFC - Action spec CLI

2014-12-17 Thread Renat Akhmerov
Dmitri,

Yes, it would be really cool if you could help with the documentation. Btw, 
while doing it you could also think about recommendations for other tests that 
should be added to make sure they provide enough coverage for the needed cases.

Thanks

Renat Akhmerov
@ Mirantis Inc.



 On 17 Dec 2014, at 15:56, Dmitri Zimine dzim...@stackstorm.com wrote:
 
 The problem with the existing syntax is that it is not defined: there are no 
 docs on inlining complex variables [*], and we haven’t tested it for anything 
 more than the simplest cases: 
 https://github.com/stackforge/mistral/blob/master/mistral/tests/unit/workbook/v2/test_dsl_specs_v2.py#L114
 I will be surprised if anyone figured out how to provide a complex object as 
 an inline parameter.
 
 Do you think regex is the right approach for parsing arbitrary key-values 
 where values is arbitrary json structures? Will it work with something like 
   workflow: wf2 object_list=[ {“url”: “http://{$hostname}.example.com 
 http://example.com/:8080?x=ay={$.b}}, 33, null, {{$.key}, [{$.value1}, 
 {$.value2}]}
 How much tests should we write to be confident we covered all cases? I share 
 Lakshmi’s concern it is fragile and maintaining it reliably is difficult. 
 
 But back to the original question: it’s about requirements, not 
 implementation. 
 My preference is “option 3”, “make it work as is now”. But if it’s too hard I 
 am ok to compromise. 
 Then option 2, as it resembles option 3 and the YAML/JSON conversion makes 
 complete sense, at the expense of quoting the objects. A slight change, not 
 significant. 
 Option 1 introduces a new syntax; although familiar to CLI users, I think 
 it’s a bit out of place in a YAML definition. 
 Option 4 is a no go :)
 
 DZ. 
 
 [*] “there are no docs for this” - I volunteer to fix this.
 
 On Dec 16, 2014, at 9:48 PM, Renat Akhmerov rakhme...@mirantis.com 
 mailto:rakhme...@mirantis.com wrote:
 
 Ok, I would prefer to spend some time and think how to improve the existing 
 reg exp that we use to parse key-value pairs. We definitely can’t just drop 
 support of this syntax and can’t even change it significantly since people 
 already use it.
 
 Renat Akhmerov
 @ Mirantis Inc.
 
 
 
 On 17 Dec 2014, at 07:28, Lakshmi Kannan laks...@lakshmikannan.me 
 mailto:laks...@lakshmikannan.me wrote:
 
 Apologies for the long email. If this fancy email doesn’t render correctly 
 for you, please read it here: 
 https://gist.github.com/lakshmi-kannan/cf953f66a397b153254a
 I was looking into fixing bug 
 https://bugs.launchpad.net/mistral/+bug/1401039. My idea was to use shlex 
 to parse the string. This actually would work for anything that is supplied 
 in the linux shell syntax. Problem is this craps out when we want to 
 support complex data structures such as arrays and dicts as arguments. I 
 did not think we supported a syntax to take in complex data structures in a 
 one line format. Consider for example:
 
   task7:
 for-each:
   vm_info: $.vms
 workflow: wf2 is_true=true object_list=[1, null, str]
 on-complete:
   - task9
   - task10
 Specifically
 
 wf2 is_true=true object_list=[1, null, str]
 shlex will not handle this correctly because object_list is an array. Same 
 problem with dict.
 
 There are 3 potential options here:
 
 Option 1
 
 1) Provide a spec for specifying lists and dicts like so:
 list_arg=1,2,3 and dict_arg=h1:h2,h3:h4,h5:h6
 
 shlex will handle this fine, but there needs to be code that converts the 
 argument values to appropriate data types based on a schema. (ActionSpec 
 should probably have a parameter schema in jsonschema.) This is doable.
 
 wf2 is_true=true object_list=1,null,str
 Option 2
 
 2) Allow JSON strings to be used as arguments so we can json.loads them (if 
 that fails, use them as a simple string). For example, with this approach, 
 the line becomes
 
 wf2 is_true=true object_list=[1, null, str]
 This would pretty much resemble 
 http://stackoverflow.com/questions/7625786/type-dict-in-argparse-add-argument
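Option 2 could be sketched like this (illustrative only; the function name is invented):

```python
import json
import shlex

def parse_action_spec(line):
    """Split a one-line action spec with shlex, then try json.loads on
    each value, falling back to the raw string (option 2)."""
    parts = shlex.split(line)
    name, params = parts[0], {}
    for token in parts[1:]:
        key, _, raw = token.partition('=')
        try:
            params[key] = json.loads(raw)
        except ValueError:
            params[key] = raw  # not JSON: keep as a plain string
    return name, params
```

Note that complex values must be shell-quoted (e.g. `object_list="[1, null, 2]"`) so shlex keeps them as one token.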
 Option 3
 
 3) Keep the spec as such and try to parse it. I have no idea how we can do 
 this reliably. We need a more rigorous lexer. This syntax doesn’t translate 
 well when we want to build a CLI. Linux shells cannot support this syntax 
 natively. This means people would have to use shlex syntax and a 
 translation would need to happen in the CLI layer. This will lead to 
 inconsistency: the CLI would use one syntax and the action input line in the 
 workflow definition another. We should try to avoid this.
 
 Option 4
 
 4) Completely drop support for this fancy one line syntax in workflow. This 
 is probably the least desired option.
 
 My preference
 
 Looking the 

Re: [openstack-dev] [pecan] [WSME] Different content-type in request and response

2014-12-17 Thread Renat Akhmerov
Doug,

Sorry for resurrecting this thread again; it seems to be pretty important 
for us. Do you have any comments on this? If you need more context, please 
let us know.

Thanks

Renat Akhmerov
@ Mirantis Inc.



 On 27 Nov 2014, at 17:43, Renat Akhmerov rakhme...@mirantis.com wrote:
 
 Doug, thanks for your answer! 
 
 My explanations below..
 
 
 On 26 Nov 2014, at 21:18, Doug Hellmann d...@doughellmann.com 
 mailto:d...@doughellmann.com wrote:
 
 
 On Nov 26, 2014, at 3:49 AM, Renat Akhmerov rakhme...@mirantis.com 
 mailto:rakhme...@mirantis.com wrote:
 
 Hi,
 
 I traced the WSME code and found a place [0] where it tries to get 
 arguments from request body based on different mimetype. So looks like WSME 
 supports only json, xml and “application/x-www-form-urlencoded”.
 
 So my question is: Can we fix WSME to also support “text/plain” mimetype? I 
 think the first snippet that Nikolay provided is valid from WSME standpoint.
 
 WSME is intended for building APIs with structured arguments. It seems like 
 the case of wanting to use text/plain for a single input string argument 
 just hasn’t come up before, so this may be a new feature.
 
 How many different API calls do you have that will look like this? Would 
 this be the only one in the API? Would it make sense to consistently use 
 JSON, even though you only need a single string argument in this case?
 
 We have 5-6 API calls where we need it.
 
 And let me briefly explain the context. In Mistral we have a language (we 
 call it DSL) to describe different object types: workflows, workbooks, 
 actions. So currently when we upload say a workbook we run in a command line:
 
 mistral workbook-create my_wb.yaml
 
 where my_wb.yaml contains that DSL. The result is a table representation of 
 the actually created server-side workbook. From a technical perspective we 
 now have:
 
 Request:
 
 POST /mistral_url/workbooks
 
 {
   “definition”: “escaped content of my_wb.yaml”
 }
 
 Response:
 
 {
   “id”: “1-2-3-4”,
   “name”: “my_wb_name”,
   “description”: “my workbook”,
   ...
 }
 
 The point is that if we use, for example, something like “curl”, we have to 
 obtain that “escaped content of my_wb.yaml” every time and create that, in 
 fact, synthetic JSON to be able to send it to the server side.
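The awkwardness can be shown in a few lines (illustrative only; not Mistral's client code, and the DSL snippet is a placeholder):

```python
import json

# The DSL as it lives on disk
yaml_text = 'version: "2.0"\nmy_wb:\n  tasks: {}\n'

# What the client must POST today: the YAML escaped into a JSON wrapper
payload = json.dumps({'definition': yaml_text})

# With text/plain support, the request body could simply be yaml_text itself.
```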
 
 So for us it would be much more convenient if we could just send plain text 
 but still receive JSON as the response. I personally don’t want to use some 
 other technology because generally WSME does its job and I like this 
 concept of REST resources defined as classes. If it supported text/plain it 
 would be just the best fit for us.
 
 
 Or if we don’t understand something in the WSME philosophy then it’d be nice 
 to hear some explanations from the WSME team. We will appreciate that.
 
 
 Another issue that previously came up is that if we use WSME then we 
 can’t pass an arbitrary set of parameters in a URL query string; as I 
 understand it, they should always correspond to the WSME resource structure. 
 So, in fact, we can’t have any dynamic parameters. In our particular use 
 case this is very inconvenient. Hoping you could also provide some info 
 about that: how it can be achieved, or whether we can just fix it.
 
 Ceilometer uses an array of query arguments to allow an arbitrary number.
 
 On the other hand, it sounds like perhaps your desired API may be easier to 
 implement using some of the other tools being used, such as JSONSchema. Are 
 you extending an existing API or building something completely new?
 
 We want to improve our existing Mistral API. Basically, the idea is to be 
 able to apply dynamic filters when we’re requesting a collection of objects 
 using the URL query string. Yes, we could use JSONSchema; if you say this is 
 absolutely impossible to do and doesn’t follow WSME concepts, that’s fine. 
 But like I said, generally I like the approach that WSME takes and don’t feel 
 like jumping to another technology just because of this issue.
 
 Thanks for mentioning Ceilometer, we’ll look at it and see if that works for 
 us.
 
 Renat



Re: [openstack-dev] [Mistral] RFC - Action spec CLI

2014-12-17 Thread Renat Akhmerov
Hi,


 The problem with the existing syntax is that it is not defined: there are no 
 docs on inlining complex variables [*], and we haven’t tested it for anything 
 more than the simplest cases: 
 https://github.com/stackforge/mistral/blob/master/mistral/tests/unit/workbook/v2/test_dsl_specs_v2.py#L114
 I will be surprised if anyone figured out how to provide a complex object as 
 an inline parameter.

Documentation is really not complete. It’s one of the major problems we’re 
going to fix. It’s just a matter of resources and priorities, as always.

Disagree on testing. We tested it for the cases we were interested in. The 
test you pointed to is not the only one. General comment: if we find that our 
tests are insufficient, let’s just go ahead and improve them.

 Do you think regex is the right approach for parsing arbitrary key-values 
 where values is arbitrary json structures? Will it work with something like 
   workflow: wf2 object_list=[ {“url”: “http://{$hostname}.example.com 
 http://example.com/:8080?x=ay={$.b}}, 33, null, {{$.key}, [{$.value1}, 
 {$.value2}]}

With regular expressions it just works; as it turns out, shlex doesn’t. What 
else? The example you provided is a question of the limitations that every 
convenient thing has. These limitations should be recognized and well 
documented.

 How much tests should we write to be confident we covered all cases? I share 
 Lakshmi’s concern it is fragile and maintaining it reliably is difficult. 

Again, proper documentation and recognition of limitations.

 My preference is “option 3”, “make it work as is now”. But if it’s too hard I 
 am ok to compromise. 

https://review.openstack.org/#/c/142452/. It took Nikolay a fairly reasonable 
amount of time to fix it. 

 Option 1 introduce a new syntax; although familiar to CLI users, I think it’s 
 a bit out of place in YAML definition. 

Yes, agree.

 Option 4 is no go :)

Out of discussion.


Renat Akhmerov
@ Mirantis Inc.

