[openstack-dev] [cinder] [driver] DB operations

2014-12-18 Thread Amit Das
Hi Stackers,

I have been developing a Cinder driver for CloudByte storage and have come
across some scenarios where the driver needs to do create, read & update
operations on the Cinder database (the volume_admin_metadata table). This is
required to establish a mapping between OpenStack IDs and the backend
storage IDs.

Now, I have got some review comments regarding the usage of DB-related
operations, especially regarding elevating the context to admin.

In short, it has been advised not to use "context.get_admin_context()".

https://review.openstack.org/#/c/102511/15/cinder/volume/drivers/cloudbyte/cloudbyte.py

However, I get errors trying to use the default context, as shown below:

2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher   File
"/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 103, in is_admin_context
2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher     return
context.is_admin
2014-12-19 12:18:17.880 TRACE oslo.messaging.rpc.dispatcher
AttributeError: 'module' object has no attribute 'is_admin'
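The AttributeError suggests the cinder.context *module* was passed where a
RequestContext instance was expected. A minimal sketch of what the DB API
needs (hedged: the helper name is illustrative, and the
db.volume_admin_metadata_update signature is assumed from Cinder of this era):

    from cinder import context as cinder_context
    from cinder import db

    def update_backend_mapping(volume_id, mapping):
        # A RequestContext instance must reach the DB API; passing the
        # cinder.context module itself raises the AttributeError above.
        # Elevating to admin is the pattern the reviewers advised against.
        admin_ctxt = cinder_context.get_admin_context()
        db.volume_admin_metadata_update(admin_ctxt, volume_id,
                                        mapping, False)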

So what is the proper way to run these DB operations from within a driver ?


Regards,
Amit
*CloudByte Inc.* 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tox] Connection to pypi.python.org timed out when installing markupsafe in tox

2014-12-18 Thread Tran, Steven
Hi,
   I'm new to OpenStack & tox, and I've run into this issue with tox, so hopefully 
someone can point me in a direction on how to resolve it.
   I try to run tox and, out of many packages, tox times out installing 
"markupsafe" but not the packages before it.  In fact, the failure is on oslo.db, and 
markupsafe is one of its dependencies.  I put "markupsafe" in a separate 
requirements.txt and tried to install markupsafe before oslo.db, and I still hit 
the timeout installing "markupsafe".  However, if I run the pip install 
manually with that same requirements.txt that contains "markupsafe", the install 
is successful.
   I suspect it's a proxy issue under tox.  But how do I include the proxy in 
tox?  I have the environment variables $http_proxy & $https_proxy set up properly.   I tried to 
add the proxy to pip: "install_command = pip install -U --proxy  {opts} 
{packages}" under tox.ini, but it doesn't help.  I also increased the timeout for 
pip and tried an older version of pip (1.4) as someone suggested, but that doesn't 
help either.
   I'm running Ubuntu 14.04.1.  pip 1.5.6.
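For what it's worth, tox builds its own virtualenv and may not pass proxy
environment variables through to the pip it runs. A hedged tox.ini sketch
(the proxy host is a placeholder; setenv should be available in this tox
version, while passenv only arrived in later tox releases):

    [testenv]
    setenv =
        http_proxy = http://proxy.example.com:8080
        https_proxy = http://proxy.example.com:8080
        no_proxy = localhost,127.0.0.1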


stack@tv-156:/opt/stack/congress$ tox -e pep8 -v
using tox.ini: /opt/stack/congress/tox.ini
using tox-1.6.1 from /usr/local/lib/python2.7/dist-packages/tox/__init__.pyc
GLOB sdist-make: /opt/stack/congress/setup.py
  /opt/stack/congress$ /usr/bin/python /opt/stack/congress/setup.py sdist 
--formats=zip --dist-dir /opt/stack/congress/.tox/dist 
>/opt/stack/congress/.tox/log/tox-0.log
pep8 create: /opt/stack/congress/.tox/pep8
  /opt/stack/congress/.tox$ /usr/bin/python 
/usr/lib/python2.7/dist-packages/virtualenv.py --setuptools --python 
/usr/bin/python pep8 >/opt/stack/congress/.tox/pep8/log/pep8-0.log
pep8 installdeps: -r/opt/stack/congress/requirements.txt, 
-r/opt/stack/congress/requirements2.txt, 
-r/opt/stack/congress/requirements3.txt, 
-r/opt/stack/congress/test-requirements.txt
  /opt/stack/congress$ /opt/stack/congress/.tox/pep8/bin/pip install -U 
-r/opt/stack/congress/requirements.txt -r/opt/stack/congress/requirements2.txt 
-r/opt/stack/congress/requirements3.txt 
-r/opt/stack/congress/test-requirements.txt 
>/opt/stack/congress/.tox/pep8/log/pep8-1.log
ERROR: invocation failed, logfile: /opt/stack/congress/.tox/pep8/log/pep8-1.log
ERROR: actionid=pep8
msg=getenv
cmdargs=[local('/opt/stack/congress/.tox/pep8/bin/pip'), 'install', '-U', 
'-r/opt/stack/congress/requirements.txt', 
'-r/opt/stack/congress/requirements2.txt', 
'-r/opt/stack/congress/requirements3.txt', 
'-r/opt/stack/congress/test-requirements.txt']
env={'PYTHONIOENCODING': 'utf_8', 'NO_PROXY': 
'localhost,127.0.0.1,localaddress,.localdomain.com,192.168.178.88', 
'http_proxy': 'http://proxy.houston.hp.com:8080', 'FTP_PROXY': 
'http://proxy.houston.hp.com:8080', 'LESSOPEN': '| /usr/bin/lesspipe %s', 
'SSH_CLIENT': '192.168.2.49 62999 22', 'LOGNAME': 'stack', 'USER': 'stack', 
'PATH': 
'/opt/stack/congress/.tox/pep8/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games',
 'HOME': '/home/stack', 'LANG': 'en_US.UTF-8', 'TERM': 'xterm', 'SHELL': 
'/bin/bash', 'LANGUAGE': 'en_US:en', 'HTTPS_PROXY': 
'https://proxy.houston.hp.com:8080', 'SHLVL': '1', 'https_proxy': 
'https://proxy.houston.hp.com:8080', 'XDG_RUNTIME_DIR': '/run/user/1000', 
'VIRTUAL_ENV': '/opt/stack/congress/.tox/pep8', 'ftp_proxy': 
'http://proxy.houston.hp.com:8080', 'LC_ALL': 'C', 'XDG_SESSION_ID': '16', '_': 
'/usr/local/bin/tox',
 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'SSH_TTY': '/dev/pts/19', 'OLDPWD': 
'/home/stack', 'HTTP_PROXY': 'http://proxy.houston.hp.com:8080', 'no_proxy': 
'localhost,127.0.0.1,localaddress,.localdomain.com,192.168.178.88', 'PWD': 
'/opt/stack/congress', 'MAIL': '/

Re: [openstack-dev] [Mistral] For-each

2014-12-18 Thread Angus Salkeld
On Mon, Dec 15, 2014 at 8:00 PM, Nikolay Makhotkin 
wrote:
>
> Hi,
>
> Here is the doc with suggestions on specification for for-each feature.
>
> You are free to comment and ask questions.
>
>
> https://docs.google.com/document/d/1iw0OgQcU0LV_i3Lnbax9NqAJ397zSYA3PMvl6F_uqm0/edit?usp=sharing
>
>
>
Just as a drive-by comment, there is a Heat spec for a "for-each":
https://review.openstack.org/#/c/140849/
(there hasn't been a lot of feedback on it yet, though)

It would be nice to have these somewhat consistent.

-Angus


>
> --
> Best Regards,
> Nikolay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] Analysis of current API design

2014-12-18 Thread Everett Toews
Hi All,

At the recent API WG meeting [1] we discussed the need for more analysis of 
current API design.

We need to get better at doing analysis of current API design as part of our 
guideline proposals. We are not creating these guidelines in a vacuum. The 
current design should be analyzed and taken into account.

Naturally the type of analysis will vary from guideline to guideline but 
backing your proposals with some kind of analysis will only make them better. 
Let’s take some examples.

1. Anne Gentle and I want to improve the consistency of service catalogs across 
cloud providers, both public and private. This is going to require the analysis 
of many providers, and we've got a start on it here [2]. Hopefully a guideline 
for the service catalog will fall out of the analysis of the many providers.

2. There’s a guideline for metadata up for review [3]. I wasn’t aware of all of 
the places where the concept of metadata is used in OpenStack so I did some 
analysis [4]. I found that the representation was pretty consistent but how 
metadata was CRUDed wasn’t as consistent. I hope that information can help the 
review along.

3. This Guideline for collection resources' representation structures [5] 
basically codifies in a guideline what was found in the analysis. Good stuff 
and it has definitely helped the review along.

For more information about analysis of current API design see #1 of How to 
Contribute [6].

Any thoughts or feedback on the above?

Thanks,
Everett

[1] 
http://eavesdrop.openstack.org/meetings/api_wg/2014/api_wg.2014-12-18-16.00.log.html
[2] 
https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog
[3] https://review.openstack.org/#/c/141229/
[4] https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Metadata
[5] https://review.openstack.org/#/c/133660/
[6] https://wiki.openstack.org/wiki/API_Working_Group#How_to_Contribute
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Volunteer for BP 'Improve Nova KVM IO support'

2014-12-18 Thread Rui Chen
Hi,

Is anybody still working on this nova BP 'Improve Nova KVM IO support'?
https://blueprints.launchpad.net/nova/+spec/improve-nova-kvm-io-support

I am willing to complete the nova-spec and implement this feature in Kilo or
subsequent versions.

Feel free to assign this BP to me, thanks :)

Best Regards.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-18 Thread Zane Bitter

On 15/12/14 07:47, Murugan, Visnusaran wrote:

We have similar questions regarding other
areas in your implementation, which we believe will be answered once we
understand the outline of your implementation. It is difficult to get
a hold on your approach just by looking at the code. Docstrings / an
Etherpad will help.


I added a bunch of extra docstrings and comments:

https://github.com/zaneb/heat-convergence-prototype/commit/5d79e009196dc224bd588e19edef5f0939b04607

I also implemented a --pdb option that will automatically set 
breakpoints at the beginning of all of the asynchronous events, so that 
you'll be dropped into the debugger and can single-step through the 
code, look at variables and so on:


https://github.com/zaneb/heat-convergence-prototype/commit/2a7a56dde21cad979fae25acc9fb01c6b4d9c6f7
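In essence the option just guards a pdb.set_trace() call; a minimal sketch of
the idea (names are illustrative, not the prototype's actual code):

    import pdb

    def run_async_event(event, use_pdb=False):
        # With --pdb enabled, drop into the debugger at the start of
        # each asynchronous event; single-step and inspect from there.
        if use_pdb:
            pdb.set_trace()
        event()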

I hope that helps. If you have more questions, please feel free to ask.

cheers,
Zane.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] taskflow 0.6.0 released

2014-12-18 Thread Joshua Harlow

The Oslo team is pleased to announce the release of:

taskflow 0.6.0: Taskflow structured state management library.

For more details, please see the git log history below and:

http://launchpad.net/taskflow/+milestone/0.6.0

Please report issues through launchpad:

http://bugs.launchpad.net/taskflow/

Notable changes


* Dialects are now correctly supported in the persistence
  backends (mainly the ability to use sqlalchemy dialects; for
  example 'mysql+pymysql').
* Task *local* notification mechanism (autobind, trigger,
  bind, unbind, listeners_iter methods) have been replaced with direct
  usage of the notification type. A transition will be needed (if
  you are currently using these functions) to the newer property and
  equivalent methods/functions that the notification type
  provides.

  To help perform this transition the *nearly* equivalent method
  mapping is the following (see also the sketch after this list):

  ===================  ================================
  Old method           New property/method/function
  ===================  ================================
  task.autobind        notifier.register_deregister
  task.bind            task.notifier.register
  task.unbind          task.notifier.deregister
  task.trigger         task.notifier.notify
  task.listeners_iter  task.notifier.listeners_iter
  ===================  ================================

* The 'EngineBase' and 'ListenerBase' classes have been renamed to 'Engine'
  and 'Listener' (removal of the 'Base' suffix); the existing
  classes (with the 'Base' suffix) are marked as deprecated and
  will be subject to removal in a future version.
* The existing listeners in taskflow/listeners now take a new
  constructor parameter, 'retry_listen_for', which specifies what
  notifications to listen for/receive (by default ANY or '*');
  existing derivatives of this class have been updated to pass this
  along (subclasses that have been created can omit it if they
  choose to).
* A new 'blather()/BLATHER' logging level has been added and is used for
  the low-level scope and runtime information that is emitted
  from taskflow during compilation and at engine runtime. This
  should reduce the amount of noise that is generated (and that is
  really only useful to taskflow developers).

  * This log level is currently set at number 5 (all log levels have
equivalent numbers) which appears to be a common pattern shared
by the multiprocessing logger and kazoo (the library) which use it
for a similar purpose.

* The engine helper run/load 'engine_conf' dictionary keyword argument
  has been marked as deprecated and should now be replaced with usage
  of the 'engine=' format (where the URI contains the engine type
  to use and any specific engine options in the URI's parameters). The
  'engine_conf' keyword argument will be subject to removal in a
  future version (see also the sketch after this list).
* Tasks can now be copied via a copy() method (this will be useful in an
  upcoming ability for an engine to run tasks using the multiprocessing
  library, creating a nice middle ground between thread-based engines
  and remote-worker-based engines).
* A claims listener has been added that can be used to connect jobboards to
  engines, as well as a dynamic/useful logging listener that can adjust the
  logging levels based on notifications received.
* The engine 'task_notifier' has been renamed to its more general
  name of 'atom_notifier' and 'task_notifier' has been marked as
  deprecated and it will be subject to removal in a future version.
* The greenthread executor (previously in a utility module) has been
  moved to the types/futures.py module where it should now be acceptable
  to use it as a first-class type (previously it was not acceptable
  to use internal utility classes/modules externally).
* A new types folder + useful helper modules aid taskflow (and likely
  can aid other projects as well); some of these modules are splitting
  off into their own projects for more general usage (this is
  happening on an as-needed/ongoing basis).
* Storage methods 'ensure_retry' and 'ensure_task' have been replaced
  with the type agnostic 'ensure_atom' (which is now the internally
  supported and used API for ensuring the storage unit has allocated
  the details about a given atom, retry or task...).
* New and improved symbol scoping/finding support!
* New and improved documentation and examples!
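A hedged before/after sketch of the notification and engine-selection
transitions above (the flow/handler names are illustrative, and the exact
signatures are assumptions based on taskflow of this era):

    from taskflow import engines, task
    from taskflow.patterns import linear_flow

    class Noop(task.Task):
        def execute(self):
            pass

    def on_progress(event_type, details):
        print(event_type, details)

    t = Noop()
    flow = linear_flow.Flow('demo').add(t)

    # Old local notification method (now deprecated):
    #   t.bind('update_progress', on_progress)
    # New notifier type:
    t.notifier.register('update_progress', on_progress)

    # Old engine selection (now deprecated):
    #   engine = engines.load(flow, engine_conf={'engine': 'serial'})
    # New URI-style selection, with options riding in URI parameters:
    engine = engines.load(flow, engine='serial')
    engine.run()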

Changes in /homes/harlowja/dev/os/taskflow 0.5.0..0.6.0
---

NOTE: Skipping requirement commits...

4e514f4 Move over to using oslo.utils [reflection, uuidutils]
6520b9c Add a basic map/reduce example to show how this can be done
cafa3b2 Add a parallel table mutation example
1b06183 Add a 'can_be_registered' method that checks before notifying
97b4e18 Base task executor should provide 'wait_for_any'
e9ecdc7 Replace autobind with a notifier module helper function
aa8d55d Cleanup some doc warnings/bad/broken links
1f4dd72 Use the notifier type in the task class/modul

Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-18 Thread Chris St. Pierre
Presumably to prevent images from being deleted for arbitrary reasons that
are left to the administrator(s) of each individual implementation of
OpenStack, though. Using the protected flag to prevent images that are in
use from being deleted obviates the ability to use it for arbitrary
protection. That is, it can either be used as a general purpose flag to
prevent deletion of an image; or it can be used as a flag for images that
are in use and thus must not be deleted; but it cannot be used for both.
(At least, not without a wild and woolly network of hacks to ensure that it
can serve both purposes.)

Given the general-purpose nature of the flag, I don't think that's something
that should be taken away from the administrators. And yet, it's very
desirable to prevent deletion of images that are in use. I think both of
these things should be supported, at the same time on the same
installation. Taking the "protected" flag away from arbitrary, bespoke use
is not, to my mind, a solution to the problem that in-use images can be
deleted.
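For illustration, a minimal sketch of the use-count idea (purely
hypothetical; not Glance's actual schema or API): a counter column
maintained with single UPDATE statements, which are atomic at the database
level and so avoid the read-modify-write races discussed further down the
thread.

    from sqlalchemy import create_engine, text

    engine = create_engine('sqlite:///:memory:')
    with engine.begin() as conn:
        # Hypothetical images table with a use_count column.
        conn.execute(text(
            "CREATE TABLE images (id TEXT PRIMARY KEY, "
            "use_count INTEGER NOT NULL DEFAULT 0)"))
        conn.execute(text("INSERT INTO images (id) VALUES ('img-1')"))

    def adjust_use_count(image_id, delta):
        # One UPDATE per adjustment: concurrent increments/decrements
        # cannot lose updates. Deletion would be refused while
        # use_count > 0.
        with engine.begin() as conn:
            conn.execute(text(
                "UPDATE images SET use_count = use_count + :d "
                "WHERE id = :id"), {"d": delta, "id": image_id})

    adjust_use_count('img-1', +1)   # image attached somewhere
    adjust_use_count('img-1', -1)   # image released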

On Thu, Dec 18, 2014 at 6:44 PM, Jay Pipes  wrote:

> On 12/18/2014 02:08 PM, Chris St. Pierre wrote:
>
>> I wasn't suggesting that we *actually* use filesystem link count, and
>> make hard links or whatever for every time the image is used. That's
>> prima facie absurd, for many more reasons than you point out. I was
>> suggesting a new database field that tracks the number of times an image
>> is in use, by *analogy* with filesystem link counts. (If I wanted to be
>> unnecessarily abrasive I might say, "This is a textbook example of
>> something called an analogy," but I'm not interested in being
>> unnecessarily abrasive.)
>>
>> Overloading the protected flag is *still* a terrible hack. Even if we
>> tracked the initial state of "protected" and restored that state when an
>> image went out of use, that would negate the ability to make an image
>>
>
> I guess I don't understand what you consider to be overloading of the
> protected flag. The original purpose of the protected flag was to protect
> images from being deleted.
>
> Best,
> -jay
>
>  protected while it was in use and expect that change to remain in place.
>> So that just violates the principle of least surprise. Of course, we
>> could have glance modify the "original_protected_state" flag when that
>> flag is non-null and the user changes the actual "protected" flag, but
>> this is just layering hacks upon hacks. By actually tracking the number
>> of times an image is in use, we can preserve the ability to protect
>> images *and* avoid deleting images in use.
>>
>> On Thu, Dec 18, 2014 at 5:27 AM, Kuvaja, Erno > > wrote:
>>
>> I think that's a horrible idea. How do we do that store-independent
>> with the linking dependencies?
>>
>> We should not make a universal use case like this depend on a limited
>> subset of backends, especially non-OpenStack ones. Neither Glance nor
>> Nova should ever depend on having direct access to the actual medium
>> where the images are stored. I think this is a schoolbook example of
>> something called a database. It is arguable whether this should be
>> tracked in Glance or Nova, but it should definitely not be a dirty
>> hack expecting specific backend characteristics.
>>
>> As mentioned before, the protected image property is there to ensure
>> that the image does not get deleted, and that is also easy to track
>> when the images are queried. Perhaps the record needs to track the
>> original state of the protected flag, the image id and a use count: a
>> 3-column table and a couple of API calls. Let's at least not make it
>> any more complicated than it needs to be, if such functionality is
>> desired.
>>
>> - Erno
>>
>> *From:* Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
>> *Sent:* 17 December 2014 20:34
>>
>>
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [glance] Option to skip deleting
>> images in use?
>>
>>
>> Guess that's an implementation detail. Depends on the way you go
>> about using what's available now, I suppose.
>>
>>
>> Thanks,
>> -Nikhil
>>
>> 
>> 
>>
>> *From:* Chris St. Pierre [chris.a.st.pie...@gmail.com]
>> *Sent:* Wednesday, December 17, 2014 2:07 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [glance] Option to skip deleting
>> images in use?
>>
>> I was assuming atomic increment/decrement operations, in which case
>> I'm not sure I see the race conditions. Or is atomicity assuming too
>> much?
>>
>>
>> On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar

Re: [openstack-dev] [nova][sriov] SRIOV related specs pending for approval

2014-12-18 Thread Joe Gordon
On Thu, Dec 18, 2014 at 2:18 PM, Robert Li (baoli)  wrote:

>  Hi,
>
>  During the Kilo summit, the folks in the pci passthrough and SR-IOV
> groups discussed what we’d like to achieve in this cycle, and the result
> was documented in this Etherpad:
> https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough
>
>  To get the work going, we’ve submitted a few design specs:
>
>  Nova: Live migration with macvtap SR-IOV
> https://blueprints.launchpad.net/nova/+spec/sriov-live-migration
>
>  Nova: sriov interface attach/detach
> https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach
>
>   Nova: Api specify vnic_type
> https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type
>
>  Neutron-Network settings support for vnic-type
>
> https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type
>
>  Nova: SRIOV scheduling with stateless offloads
>
> https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads
>
>  Now that the specs deadline is approaching, I'd like to bring them up
> here for exception consideration. A lot of work has gone into them.
> And we'd like to see them get through for Kilo.
>

We haven't started the spec exception process yet.


>
>  Regarding CI for PCI passthrough and SR-IOV, see the attached thread.
>

Can you share this via a link to something on
http://lists.openstack.org/pipermail/openstack-dev/ ?


>
>  thanks,
> Robert
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] vm can't get ipv6 address in ra mode:slaac + address mode: slaac

2014-12-18 Thread Jerry Xinyu Zhao
I also saw that Bugzilla bug report, but my VM is Ubuntu 14.04, and I have
also tried to run the rootwrap command manually with sudo, but to no avail.

On Thu, Dec 18, 2014 at 7:03 AM, Ihar Hrachyshka 
wrote:
>
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> I suspect that's some Red Hat distro, and radvd lacks SELinux context
> set to allow neutron l3 agent to spawn it.
>
> On 18/12/14 15:50, Jerry Zhao wrote:
> > It seems that radvd was not spawned successfully in l3-agent log:
> >
> > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3
> > neutron-l3-agent: Stderr: '/usr/bin/neutron-rootwrap: Unauthorized
> > command: ip netns exec qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7
> > radvd -C
> > /var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf
> > -p
> >
> /var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd
> >
> >
> (no filter matched)\n'
> > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3
> > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE
> > neutron.agent.l3_agent Traceback (most recent call last): Dec 18
> > 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent:
> > 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File
> >
> "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/utils.py",
> >
> >
> line 341, in call
> > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3
> > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE
> > neutron.agent.l3_agent return func(*args, **kwargs) Dec 18
> > 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent:
> > 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File
> >
> "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py",
> >
> >
> line 902, in process_router
> > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3
> > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE
> > neutron.agent.l3_agent self.root_helper) Dec 18 11:23:34
> > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18
> > 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File
> >
> "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py",
> >
> >
> line 111, in enable_ipv6_ra
> > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3
> > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE
> > neutron.agent.l3_agent _spawn_radvd(router_id, radvd_conf,
> > router_ns, root_helper) Dec 18 11:23:34
> > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18
> > 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File
> >
> "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py",
> >
> >
> line 95, in _spawn_radvd
> > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3
> > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE
> > neutron.agent.l3_agent radvd.enable(callback, True) Dec 18 11:23:34
> > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18
> > 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File
> >
> "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/external_process.py",
> >
> >
> line 77, in enable
> > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3
> > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE
> > neutron.agent.l3_agent ip_wrapper.netns.execute(cmd,
> > addl_env=self.cmd_addl_env) Dec 18 11:23:34
> > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18
> > 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File
> >
> "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py",
> >
> >
> line 554, in execute
> > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3
> > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE
> > neutron.agent.l3_agent check_exit_code=check_exit_code,
> > extra_ok_codes=extra_ok_codes) Dec 18 11:23:34
> > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18
> > 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File
> >
> "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py",
> >
> >
> line 82, in execute
> > Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3
> > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE
> > neutron.agent.l3_agent raise RuntimeError(m) Dec 18 11:23:34
> > ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18
> > 11:23:34.611 18015 TRACE neutron.agent.l3_agent RuntimeError: Dec
> > 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3
> > neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE
> > neutron.agent.l3_agent Command: ['sudo',
> > '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip',
> > 'netns', 'exec', 'qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7',
> > 'radvd', '-C',
> > '/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf',
> >
> >
> '-p',
> >
> '/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd']
> >
> >  Dec 18 11:23:34
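For reference, "Unauthorized command ... (no filter matched)" means
neutron-rootwrap found no filter that allows spawning radvd. A hedged sketch
of the kind of rootwrap filter entry that would permit it (the exact filter
file name varies by deployment; this is illustrative, not a verified fix for
the setup above):

    [Filters]
    # Allow the l3 agent to run radvd as root.
    radvd: CommandFilter, radvd, root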

Re: [openstack-dev] [rally] Rally scenario for network scale with VMs

2014-12-18 Thread Ajay Kalambur (akalambu)
OK, I'll get them out right away then. You should see some reviews coming in
next week.
I will try the admin=True option and see if that works for me.


Ajay


From: Boris Pavlovic <bpavlo...@mirantis.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Thursday, December 18, 2014 at 4:40 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [rally] Rally scenario for network scale with VMs

Ajay,

Oh, it looks like you are working hard on Rally. I am excited about your patches.

By the way, one small piece of advice: share the results and ideas that you have
with the community ASAP.
It's perfectly fine to publish "not ideal" patches for review even if they don't
have unit tests and don't work at all,
because the community can help with advice and save a lot of your time.


2. Scenario for booting VMs on every compute node in the cloud….This has a
dependency right now: admin access. So for this I need item 3
3. Provide an ability to make rally-created users have admin access for things
like forced host scheduling. Planning to add this to the user context


Not quite sure that you need point 3. If you put
@validation.required_openstack(admin=True, users=True) on the scenario,
you'll have admin access in the scenario, so you can execute commands as admin.
E.g.:

self.admin_clients("neutron") (it's similar to self.clients but with admin
access)

That's how live_migrate benchmark is implemented:
https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L253

 Does this make sense?


4. Iperf scenario we discussed

Nice!


Best regards,
Boris Pavlovic






On Fri, Dec 19, 2014 at 4:06 AM, Ajay Kalambur (akalambu)
<akala...@cisco.com> wrote:
I created a new scenario which scales on networks and VMs at the same time.
If you have no objection I would like to send out a review this week.
I actually have the following reviews to do next week:

  1.  Scenario for network stress, i.e. VM + network with subnet with unique CIDR. 
So in future we can add a router to this and do apt-get update from all VMs and 
test network scale
  2.  Scenario for booting VMs on every compute node in the cloud….This has a 
dependency right now: admin access. So for this I need item 3
  3.  Provide an ability to make rally-created users have admin access for 
things like forced host scheduling. Planning to add this to the user context
  4.  Iperf scenario we discussed

If you have no objection to these I can submit reviews for them. I have the 
code; I need to write unit tests for the scenarios, since looking at other 
reviews that seems to be the case

Ajay


From: Boris Pavlovic <bo...@pavlovic.me>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Thursday, December 18, 2014 at 2:16 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [rally] Rally scenario for network scale with VMs

Ajay,

Sorry for the long reply.

At this point Rally supports only benchmarking from temporarily created users
and tenants.

Fortunately, today we merged this Network context class:
https://review.openstack.org/#/c/103306/96
It creates any number of networks for each Rally temporary tenant.

So basically you can use it and extend the current benchmark scenarios in
rally/benchmark/scenarios/nova/ or add a new one that will attach N networks to
the created VM (which is just a few lines of code). So the task is quite easily
resolvable now.


Best regards,
Boris Pavlovic


On Wed, Nov 26, 2014 at 9:54 PM, Ajay Kalambur (akalambu)
<akala...@cisco.com> wrote:
Hi
Is there a Rally scenario in the works where we create N networks and associate
N VMs with each network?
This would be a decent stress test of Neutron.
Is there any such scale scenario in the works?
I see scenarios for N networks + subnet creation and a separate one for N VM
boot-ups.
I am looking for an integration of these two.



Ajay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-18 Thread Jay Pipes

On 12/18/2014 02:08 PM, Chris St. Pierre wrote:

I wasn't suggesting that we *actually* use filesystem link count, and
make hard links or whatever for every time the image is used. That's
prima facie absurd, for many more reasons than you point out. I was
suggesting a new database field that tracks the number of times an image
is in use, by *analogy* with filesystem link counts. (If I wanted to be
unnecessarily abrasive I might say, "This is a textbook example of
something called an analogy," but I'm not interested in being
unnecessarily abrasive.)

Overloading the protected flag is *still* a terrible hack. Even if we
tracked the initial state of "protected" and restored that state when an
image went out of use, that would negate the ability to make an image


I guess I don't understand what you consider to be overloading of the 
protected flag. The original purpose of the protected flag was to 
protect images from being deleted.


Best,
-jay


protected while it was in use and expect that change to remain in place.
So that just violates the principle of least surprise. Of course, we
could have glance modify the "original_protected_state" flag when that
flag is non-null and the user changes the actual "protected" flag, but
this is just layering hacks upon hacks. By actually tracking the number
of times an image is in use, we can preserve the ability to protect
images *and* avoid deleting images in use.

On Thu, Dec 18, 2014 at 5:27 AM, Kuvaja, Erno <kuv...@hp.com> wrote:

I think that's a horrible idea. How do we do that store-independent
with the linking dependencies?

We should not make a universal use case like this depend on a limited
subset of backends, especially non-OpenStack ones. Neither Glance nor
Nova should ever depend on having direct access to the actual medium
where the images are stored. I think this is a schoolbook example of
something called a database. It is arguable whether this should be
tracked in Glance or Nova, but it should definitely not be a dirty hack
expecting specific backend characteristics.

As mentioned before, the protected image property is there to ensure that
the image does not get deleted, and that is also easy to track when the
images are queried. Perhaps the record needs to track the original state
of the protected flag, the image id and a use count: a 3-column table and
a couple of API calls. Let's at least not make it any more complicated
than it needs to be, if such functionality is desired.

- Erno

*From:* Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
*Sent:* 17 December 2014 20:34
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [glance] Option to skip deleting
images in use?

Guess that's an implementation detail. Depends on the way you go
about using what's available now, I suppose.

Thanks,
-Nikhil



*From:* Chris St. Pierre [chris.a.st.pie...@gmail.com]
*Sent:* Wednesday, December 17, 2014 2:07 PM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [glance] Option to skip deleting
images in use?

I was assuming atomic increment/decrement operations, in which case
I'm not sure I see the race conditions. Or is atomicity assuming too
much?

On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar
mailto:nikhil.koma...@rackspace.com>>
wrote:

That looks like a decent alternative if it works. However, it would
be too racy unless we implement a test-and-set for such properties
or there is a different job which queues up these requests and
performs them sequentially for each tenant.

Thanks,
-Nikhil



*From:* Chris St. Pierre [chris.a.st.pie...@gmail.com]
*Sent:* Wednesday, December 17, 2014 10:23 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* Re: [openstack-dev] [glance] Option to skip deleting
images in use?

That's unfortunately too simple. You run into one of two cases:

1. If the job automatically removes the protected attribute when
an image is no longer in use, then you lose the ability to use
"protected" on images that are not in use. I.e., there's no way
to say, "nothing is currently using this image, but please keep
it around." (This seems particularly useful for snapshots, for
instance.)


2. If the job does not automatically remove the pro

Re: [openstack-dev] [rally] Rally scenario for network scale with VMs

2014-12-18 Thread Boris Pavlovic
Ajay,

Oh, it looks like you are working hard on Rally. I am excited about your patches.

By the way, one small piece of advice: share the results and ideas that you have
with the community ASAP.
It's perfectly fine to publish "not ideal" patches for review even if they don't
have unit tests and don't work at all,
because the community can help with advice and save a lot of your time.


2. Scenario for booting VMs on every compute node in the cloud….This has a
> dependency right now: admin access. So for this I need item 3
> 3. Provide an ability to make rally-created users have admin access for
> things like forced host scheduling. Planning to add this to the user context



Not quite sure that you need point 3. If you put
@validation.required_openstack(admin=True, users=True) on the scenario,
you'll have admin access in the scenario, so you can execute commands as admin.
E.g.:

self.admin_clients("neutron") (it's similar to self.clients but with
admin access)

That's how live_migrate benchmark is implemented:
https://github.com/stackforge/rally/blob/master/rally/benchmark/scenarios/nova/servers.py#L253
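For illustration, a minimal hypothetical sketch of a scenario using admin
clients (module paths, decorators and client calls are assumptions based on
Rally of this era, not verified code):

    from rally.benchmark.scenarios import base
    from rally.benchmark import validation

    class NovaAdminServers(base.Scenario):

        @validation.required_openstack(admin=True, users=True)
        @base.scenario()
        def boot_server_on_host(self, image, flavor, host):
            # admin_clients() returns an admin-scoped client, which is
            # what forced host scheduling ("nova:<host>") requires.
            admin_nova = self.admin_clients("nova")
            admin_nova.servers.create("forced-vm", image, flavor,
                                      availability_zone="nova:%s" % host)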

 Does this make sense?


4. Iperf scenario we discussed


Nice!


Best regards,
Boris Pavlovic






On Fri, Dec 19, 2014 at 4:06 AM, Ajay Kalambur (akalambu) <
akala...@cisco.com> wrote:
>
>  I created a new scenario which scales on networks and VMs at the same time.
> If you have no objection I would like to send out a review this week.
> I actually have the following reviews to do next week:
>
>    1. Scenario for network stress, i.e. VM + network with subnet with unique
>    CIDR. So in future we can add a router to this and do apt-get update from
>    all VMs and test network scale
>    2. Scenario for booting VMs on every compute node in the cloud….This
>    has a dependency right now: admin access. So for this I need item 3
>    3. Provide an ability to make rally-created users have admin access
>    for things like forced host scheduling. Planning to add this to the user
>    context
>    4. Iperf scenario we discussed
>
> If you have no objection to these I can submit reviews for them. I have the
> code; I need to write unit tests for the scenarios, since looking at other
> reviews that seems to be the case
>
>  Ajay
>
>
>   From: Boris Pavlovic 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, December 18, 2014 at 2:16 PM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [rally] Rally scenario for network scale
> with VMs
>
>   Ajay,
>
>  Sorry for the long reply.
>
>  At this point Rally supports only benchmarking from temporarily created
> users and tenants.
>
>  Fortunately, today we merged this Network context class:
> https://review.openstack.org/#/c/103306/96
> It creates any number of networks for each Rally temporary tenant.
>
>  So basically you can use it and extend the current benchmark scenarios in
> rally/benchmark/scenarios/nova/ or add a new one that will attach N networks
> to the created VM (which is just a few lines of code). So the task is quite
> easily resolvable now.
>
>
>  Best regards,
> Boris Pavlovic
>
>
> On Wed, Nov 26, 2014 at 9:54 PM, Ajay Kalambur (akalambu) <
> akala...@cisco.com> wrote:
>>
>>  Hi
>> Is there a Rally scenario in the works where we create N networks and
>> associate N VMs with each network?
>> This would be a decent stress test of Neutron.
>> Is there any such scale scenario in the works?
>> I see scenarios for N networks + subnet creation and a separate one for N
>> VM boot-ups.
>> I am looking for an integration of these two.
>>
>>
>>
>>  Ajay
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Rally scenario for network scale with VMs

2014-12-18 Thread Ajay Kalambur (akalambu)
I created a new scenario which scales on networks and VMs at the same time.
If you have no objection I would like to send out a review this week.
I actually have the following reviews to do next week:

  1.  Scenario for network stress, i.e. VM + network with subnet with unique CIDR. 
So in future we can add a router to this and do apt-get update from all VMs and 
test network scale
  2.  Scenario for booting VMs on every compute node in the cloud….This has a 
dependency right now: admin access. So for this I need item 3
  3.  Provide an ability to make rally-created users have admin access for 
things like forced host scheduling. Planning to add this to the user context
  4.  Iperf scenario we discussed

If you have no objection to these I can submit reviews for them. I have the 
code; I need to write unit tests for the scenarios, since looking at other 
reviews that seems to be the case

Ajay


From: Boris Pavlovic <bo...@pavlovic.me>
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Date: Thursday, December 18, 2014 at 2:16 PM
To: "OpenStack Development Mailing List (not for usage questions)"
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [rally] Rally scenario for network scale with VMs

Ajay,

Sorry for the long reply.

At this point Rally supports only benchmarking from temporarily created users
and tenants.

Fortunately, today we merged this Network context class:
https://review.openstack.org/#/c/103306/96
It creates any number of networks for each Rally temporary tenant.

So basically you can use it and extend the current benchmark scenarios in
rally/benchmark/scenarios/nova/ or add a new one that will attach N networks to
the created VM (which is just a few lines of code). So the task is quite easily
resolvable now.


Best regards,
Boris Pavlovic


On Wed, Nov 26, 2014 at 9:54 PM, Ajay Kalambur (akalambu)
<akala...@cisco.com> wrote:
Hi
Is there a Rally scenario in the works where we create N networks and associate
N VMs with each network?
This would be a decent stress test of Neutron.
Is there any such scale scenario in the works?
I see scenarios for N networks + subnet creation and a separate one for N VM
boot-ups.
I am looking for an integration of these two.



Ajay

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Mistral] For-each

2014-12-18 Thread Dmitri Zimine
Based on the feedback so far, I updated the document and added some more
details from the comments and discussions.

We still think for-each as a keyword confuses people by setting up certain
behavior expectations (e.g., that it will run sequentially, that you can work
with data inside the loop, that you can 'nest' for-each loops), while it's not
a loop at all, just a way to run actions which do not accept arrays of data,
with arrays of data.

But no better idea on the keyword just yet.
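For concreteness, a purely illustrative sketch of the semantics under
discussion (the keywords and expressions are assumptions based on the
proposal doc, not an approved syntax): the action runs once per item of the
input array, and 'publish' is applied once over the array of results.

    send_emails:
      action: send_email        # accepts one address, not an array
      for-each:
        address: $.addresses    # one action execution per array item
      publish:
        results: $              # '$' here is the array of all results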

DZ. 


On Dec 15, 2014, at 10:53 PM, Renat Akhmerov  wrote:

> Thanks Nikolay,
> 
> I also left my comments and tend to like Alt2 better than the others. I agree
> with Dmitri that the "all-permutations" thing can just be a different construct
> in the language, and "concurrency" should rather be a policy than a property of
> "for-each", because it doesn't have any impact on the workflow logic itself; it
> only influences the way the engine runs a task. So again, policies are engine
> capabilities, not workflow ones.
> 
> One tricky question that’s still in the air is how to deal with publishing. I 
> mean in terms of requirements it’s pretty clear: we need to apply “publish” 
> once after all iterations and be able to access an array of iteration results 
> as $. But technically, it may be a problem to implement such behavior, need 
> to think about it more.
> 
> Renat Akhmerov
> @ Mirantis Inc.
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] 12/18 nova meeting and k-1 milestone target recap highlights (and low-lights)

2014-12-18 Thread Kyle Mestery
On Thu, Dec 18, 2014 at 4:18 PM, Matt Riedemann 
wrote:
>
> In the Nova meeting today [1] we went over the k-1 milestone targets [2]
> in open discussion.  I updated the etherpad with my notes.
>
> For the most part things are progressing nicely, the only thing that
> probably needs to be mentioned here is there was apparently some disconnect
> between the summit and now about who was doing what for the nova-network ->
> neutron migration, or how, i.e. was the spec supposed to be in neutron or
> nova?  The summit etherpad is here [3].
>
> Kyle Mestery said he'd figure out what's going on there and get some
> information back to the mailing list. This is listed as a project priority
> for Kilo [4] and if nova needed a spec the approval deadline was today, so
> we'd have to talk about exceptions in k-2 for this.
>
We have a first cut spec for this in neutron now [6]. I'd encourage all
nova folks to review this one and provide comments there. It's a WIP now,
we'll work to get it in shape. It's unclear if we'll need a nova side spec,
we'll get that sorted by tomorrow.

Thanks,
Kyle

[6] https://review.openstack.org/#/c/142456/2


> Speaking of, the k-2 spec exception process is going to be discussed the
> first week of January, then expect the details in the mailing list after
> that.  If you feel there is nothing to do between now and then, remember we
> have an etherpad with review priorities [5]. Or, you know, close your
> laptop and spend time with friends and family over the break that at least
> some of us should be taking. :)
>
[1] http://eavesdrop.openstack.org/meetings/nova/2014/nova.2014-12-18-21.00.log.html
> [2] https://etherpad.openstack.org/p/kilo-nova-milestone-targets
> [3] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron
[4] http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html#nova-network-neutron-migration
> [5] https://etherpad.openstack.org/p/kilo-nova-priorities-tracking
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [rally] Rally scenario for network scale with VMs

2014-12-18 Thread Boris Pavlovic
Ajay,

Sorry for the long reply.

At this point Rally supports only benchmarking from temporarily created users
and tenants.

Fortunately, today we merged this Network context class:
https://review.openstack.org/#/c/103306/96
It creates any number of networks for each Rally temporary tenant.

So basically you can use it and extend the current benchmark scenarios in
rally/benchmark/scenarios/nova/ or add a new one that will attach N networks
to the created VM (which is just a few lines of code). So the task is quite
easily resolvable now.


Best regards,
Boris Pavlovic


On Wed, Nov 26, 2014 at 9:54 PM, Ajay Kalambur (akalambu) <
akala...@cisco.com> wrote:
>
>  Hi
> Is there a Rally scenario in the works where we create N networks and
> associate N VMs with each network?
> This would be a decent stress test of Neutron.
> Is there any such scale scenario in the works?
> I see scenarios for N networks + subnet creation and a separate one for N VM
> boot-ups.
> I am looking for an integration of these two.
>
>
>
>  Ajay
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][sriov] SRIOV related specs pending for approval

2014-12-18 Thread Robert Li (baoli)
Hi,

During the Kilo summit, the folks in the pci passthrough and SR-IOV groups 
discussed what we’d like to achieve in this cycle, and the result was 
documented in this Etherpad:
https://etherpad.openstack.org/p/kilo_sriov_pci_passthrough

To get the work going, we’ve submitted a few design specs:

Nova: Live migration with macvtap SR-IOV
https://blueprints.launchpad.net/nova/+spec/sriov-live-migration

Nova: sriov interface attach/detach
https://blueprints.launchpad.net/nova/+spec/sriov-interface-attach-detach

 Nova: Api specify vnic_type
https://blueprints.launchpad.net/neutron/+spec/api-specify-vnic-type

Neutron-Network settings support for vnic-type
https://blueprints.launchpad.net/neutron/+spec/network-settings-support-vnic-type

Nova: SRIOV scheduling with stateless offloads
https://blueprints.launchpad.net/nova/+spec/sriov-sched-with-stateless-offloads

Now that the specs deadline is approaching, I'd like to bring them up here 
for exception consideration. A lot of work has gone into them, and we'd 
like to see them get through for Kilo.

Regarding CI for PCI passthrough and SR-IOV, see the attached thread.

thanks,
Robert

--- Begin Message ---
Hi Steve,
Regarding SR-IOV testing, at Mellanox we have a CI job running on a bare-metal
node with a Mellanox SR-IOV NIC.  This job is reporting on Neutron patches.
Currently API tests are executed.
The contact person for SRIOV CI job is listed at driverlog:
https://github.com/stackforge/driverlog/blob/master/etc/default_data.json#L
1439

The following items are in progress:
 - SR-IOV functional testing
 - Reporting CI job on nova patches
 - Multi-node setup
It is worth mentioning that we want to start collaborating on the SR-IOV
testing effort as part of the pci pass-through subteam activity.
Please join the weekly meeting if you want to collaborate or have some
inputs: https://wiki.openstack.org/wiki/Meetings/Passthrough

BR,
Irena

-Original Message-
From: Steve Gordon [mailto:sgor...@redhat.com]
Sent: Wednesday, November 12, 2014 9:11 PM
To: itai mendelsohn; Adrian Hoban; Russell Bryant; Ian Wells (iawells);
Irena Berezovsky; ba...@cisco.com
Cc: Nikola Đipanov; Russell Bryant; OpenStack Development Mailing List (not
for usage questions)
Subject: [Nova][Neutron][NFV][Third-party] CI for NUMA, SR-IOV, and other
features that can't be tested on current infra.

Hi all,

We had some discussions last week - particularly in the Nova NFV design
session [1] - on the subject of ensuring that telecommunications and
NFV-related functionality has adequate continuous integration testing. In
particular the focus here is on functionality that can't easily be tested on
the public clouds that back the gate, including:

- NUMA (vCPU pinning, vCPU layout, vRAM layout, huge pages, I/O device
locality)
- SR-IOV with Intel, Cisco, and Mellanox devices (possibly others)
  
In each case we need to confirm where we are at, and the plan going
forward, with regards to having:

1) Hardware to run the CI on.
2) Tests that actively exercise the functionality (if not already in
existence).
3) Point person for each setup to maintain it and report into the
third-party meeting [2].
4) Getting the jobs operational and reporting [3][4][5][6].

In the Nova session we discussed a goal of having the hardware by K-1 (Dec
18) and having it reporting at least periodically by K-2 (Feb 5). I'm not
sure if similar discussions occurred on the Neutron side of the design
summit.

SR-IOV
==

Adrian and Irena mentioned they were already in the process of getting up
to speed with third party CI for their respective SR-IOV configurations.
Robert are you attempting similar with regards to Cisco devices? What is the
status of each of these efforts versus the four items I lifted above and
what do you need assistance with?

NUMA


We still need to identify some hardware to run third party CI for the
NUMA-related work, and no doubt other things that will come up. It's
expected that this will be an interim solution until OPNFV resources can be
used (note cdub jokingly replied 1-2 years when asked for a "rough" estimate
- I mention this because based on a later discussion some people took this
as a serious estimate).

Ian did you have any luck kicking this off? Russell and I are also
endeavouring to see what we can do on our side w.r.t. this short term
approach - in particular if you find hardware we still need to find an owner
to actually setup and manage it as discussed.

In theory to get started we need a physical multi-socket box and a virtual
machine somewhere on the same network to handle job control etc. I believe
the tests themselves can be run in VMs (just not those exposed by existing
public clouds) assuming a recent Libvirt and an appropriately crafted
Libvirt XML that ensures the VM gets a multi-socket topology etc. (we can
assist with this).
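For reference, a hedged sketch of the kind of libvirt <cpu> element that
gives a guest a multi-socket NUMA topology (values are illustrative and
depend on the libvirt version):

    <cpu mode='host-passthrough'>
      <topology sockets='2' cores='4' threads='1'/>
      <numa>
        <cell cpus='0-3' memory='4194304'/>
        <cell cpus='4-7' memory='4194304'/>
      </numa>
    </cpu>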

Thanks,

Steve

[1] https://etherpad.openstack.org/p/kilo-nova-nfv
[2] https://wiki.openstack.org/wiki/Meetings/ThirdParty
[3] http://ci.

[openstack-dev] [nova][neutron] 12/18 nova meeting and k-1 milestone target recap highlights (and low-lights)

2014-12-18 Thread Matt Riedemann
In the Nova meeting today [1] we went over the k-1 milestone targets [2] 
in open discussion.  I updated the etherpad with my notes.


For the most part things are progressing nicely, the only thing that 
probably needs to be mentioned here is there was apparently some 
disconnect between the summit and now about who was doing what for the 
nova-network -> neutron migration, or how, i.e. was the spec supposed to 
be in neutron or nova?  The summit etherpad is here [3].


Kyle Mestery said he'd figure out what's going on there and get some 
information back to the mailing list. This is listed as a project 
priority for Kilo [4] and if nova needed a spec the approval deadline 
was today, so we'd have to talk about exceptions in k-2 for this.


Speaking of, the k-2 spec exception process is going to be discussed the 
first week of January, then expect the details in the mailing list after 
that.  If you feel there is nothing to do between now and then, remember 
we have an etherpad with review priorities [5]. Or, you know, close your 
laptop and spend time with friends and family over the break that at 
least some of us should be taking. :)


[1] 
http://eavesdrop.openstack.org/meetings/nova/2014/nova.2014-12-18-21.00.log.html

[2] https://etherpad.openstack.org/p/kilo-nova-milestone-targets
[3] https://etherpad.openstack.org/p/kilo-nova-nova-network-to-neutron
[4] 
http://specs.openstack.org/openstack/nova-specs/priorities/kilo-priorities.html#nova-network-neutron-migration

[5] https://etherpad.openstack.org/p/kilo-nova-priorities-tracking

--

Thanks,

Matt Riedemann


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] Kilo-1 Dev Milestone Released!

2014-12-18 Thread Serg Melikyan
I forgot to mention that we also released a handful of well-tested Murano
applications that can be used both as examples and as ready-to-use
applications for your cloud. The applications are available in our new
repository:
https://github.com/stackforge/murano-apps

We released following applications in this milestone:

   - WordPress
   - Zabbix Monitoring
   - Apache HttpServer
   - Apache Tomcat
   - MySql
   - PostgreSql


On Thu, Dec 18, 2014 at 12:56 PM, Serg Melikyan 
wrote:
>
> Hi folks,
>
> I am happy to announce that the first Kilo milestone is now available. You can
> download the kilo-1 release and review the changes here:
> https://launchpad.net/murano/kilo/kilo-1
>
> With this milestone we release several new and important features, that I
> would like to kindly ask to try and play with:
>
>- Handle auth expiration for long-running deployments
>- Per-class configuration files
>
> We added support for long-running deployments in Murano. Previously,
> deployment time was limited by the token expiration time: if a user started
> a deployment close to token expiration, the deployment would fail on Heat
> stack creation.
>
> We've also implemented support for per-class configuration files during
> this milestone. Murano may be easily extended, for example with support for
> different third-party services like monitoring or firewalls. You can find a
> demo example of such an extension here:
> https://github.com/sergmelikyan/murano/tree/third-party
>
> In this example we add a ZabbixApi class that handles interaction with a
> Zabbix monitoring system installed outside of the cloud and exposes an API
> to all the applications in the catalog, giving them the ability to
> configure monitoring for themselves:
> https://github.com/sergmelikyan/murano/blob/third-party/murano/engine/contrib/zabbix.py
>
> Obviously we need to store credentials for Zabbix somewhere; previously
> this was done in the main Murano configuration file. Now each class may
> have its own configuration file, with the ability to automatically fill
> class properties from configuration values.
>
> Unfortunately these features are not yet documented; please refer to the
> commit messages and implementation for details. We would be happy to
> receive any contribution to Murano, and especially contributions to our
> documentation.
>
>   * https://review.openstack.org/134183
>   * https://review.openstack.org/119042
>
> Thank you!
>
> --
> Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
> http://mirantis.com | smelik...@mirantis.com
>
> +7 (495) 640-4904, 0261
> +7 (903) 156-0836
>


-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Murano] Kilo-1 Dev Milestone Released!

2014-12-18 Thread Serg Melikyan
Hi folks,

I am happy to announce that first Kilo milestone is now available. You can
download the kilo-1 release and review the changes here:
https://launchpad.net/murano/kilo/kilo-1

With this milestone we release several new and important features, that I
would like to kindly ask to try and play with:

   - Handle auth expiration for long-running deployments
   - Per-class configuration files

We added support for long-running deployments in Murano. Previously,
deployment time was limited by the token expiration time: if a user started a
deployment close to token expiration, the deployment would fail on Heat stack
creation.

We've also implemented support for per-class configuration files during this
milestone. Murano may be easily extended, for example with support for
different third-party services like monitoring or firewalls. You can find a
demo example of such an extension here:
https://github.com/sergmelikyan/murano/tree/third-party

In this example we add a ZabbixApi class that handles interaction with a
Zabbix monitoring system installed outside of the cloud and exposes an API to
all the applications in the catalog, giving them the ability to configure
monitoring for themselves:
https://github.com/sergmelikyan/murano/blob/third-party/murano/engine/contrib/zabbix.py

Obviously we need to store credentials for Zabbix somewhere; previously this
was done in the main Murano configuration file. Now each class may have its
own configuration file, with the ability to automatically fill class
properties from configuration values.
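
As a flavor of the idea, here is a minimal sketch (the option names and the
oslo.config-based loading are illustrative assumptions, not the actual Murano
implementation):

# Hypothetical per-class config for the ZabbixApi extension; option names
# and the use of oslo.config are assumptions for illustration only.
from oslo.config import cfg

zabbix_opts = [
    cfg.StrOpt('host', default='localhost', help='Zabbix server host'),
    cfg.StrOpt('username', help='Zabbix API user'),
    cfg.StrOpt('password', help='Zabbix API password', secret=True),
]

conf = cfg.ConfigOpts()
conf.register_opts(zabbix_opts, group='zabbix')
# The class reads its own file, e.g. zabbix.conf, instead of murano.conf.
conf(args=[], default_config_files=['zabbix.conf'])

print(conf.zabbix.host)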

Unfortunately these features are not yet documented; please refer to the
commit messages and implementation for details. We would be happy to receive
any contribution to Murano, and especially contributions to our
documentation.

  * https://review.openstack.org/134183
  * https://review.openstack.org/119042

Thank you!

-- 
Serg Melikyan, Senior Software Engineer at Mirantis, Inc.
http://mirantis.com | smelik...@mirantis.com

+7 (495) 640-4904, 0261
+7 (903) 156-0836
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Adding code to add node to fuel UI

2014-12-18 Thread Andrey Danin
It's enough for you to just create a new role in openstack.yaml and maybe
add some descriptions in the UI components.

Then you should capture this role in Puppet manifests. Look at the 'case'
operator [1]. Just add a new case for your role and call your 'vim' class
here.

[1]
https://github.com/stackforge/fuel-library/blob/master/deployment/puppet/osnailyfacter/manifests/cluster_simple.pp#L227

On Thu, Dec 18, 2014 at 10:03 PM, Satyasanjibani Rautaray <
engg.s...@gmail.com> wrote:
>
> Hi Mike,
>
> This document helped a lot.
>
> I may be missing something, so below are the details for which I need some
> help.
>
> I have a vim.pp file for testing which will install vim on a particular
> node that is not part of the controller, compute, or any other OpenStack
> component nodes.
>
> The current zabbix-server code in manager.py does something like this:
>
> from nailgun.utils.zabbix import ZabbixManager
>
> @classmethod
> def get_zabbix_url(cls, cluster):
>     zabbix_node = ZabbixManager.get_zabbix_node(cluster)
>     if zabbix_node is None:
>         return None
>     ip_cidr = cls.get_node_network_by_netname(
>         zabbix_node, 'public'
>     )['ip']
>     ip = ip_cidr.split('/')[0]
>     return 'http://{0}/zabbix'.format(ip)
>
>
>
> at receiver.py:
>
> zabbix_url = objects.Cluster.get_network_manager(
>     task.cluster
> ).get_zabbix_url(task.cluster)
>
> if zabbix_url:
>     zabbix_suffix = " Access Zabbix dashboard at {0}".format(
>         zabbix_url
>     )
>     message += zabbix_suffix
>
>
>
> at task.py:
>
> from nailgun.utils.zabbix import ZabbixManager
>
> # check if there's a zabbix server in an environment
> # and if there is, remove hosts
> if ZabbixManager.get_zabbix_node(task.cluster):
>     zabbix_credentials = ZabbixManager.get_zabbix_credentials(
>         task.cluster
>     )
>     logger.debug("Removing nodes %s from zabbix" % (nodes_to_delete))
>     try:
>         ZabbixManager.remove_from_zabbix(
>             zabbix_credentials, nodes_to_delete
>         )
>     except (errors.CannotMakeZabbixRequest,
>             errors.ZabbixRequestError) as e:
>         logger.warning("%s, skipping removing nodes from Zabbix", e)
>
>
>
> and
>
> https://review.openstack.org/#/c/84408/39/nailgun/nailgun/utils/zabbix.py
>
>
> I am not able to figure out how I can connect this to the vim.pp file.
>
> Thanks
> Satya
>
>
> On Wed, Dec 17, 2014 at 7:27 AM, Mike Scherbakov  > wrote:
>>
>> Hi,
>> did you come across
>> http://docs.mirantis.com/fuel-dev/develop/addition_examples.html ?
>>
>> I believe it should cover your use case.
>>
>> Thanks,
>>
>> On Tue, Dec 16, 2014 at 11:43 PM, Satyasanjibani Rautaray <
>> engg.s...@gmail.com> wrote:
>>>
>>> I just need to deploy the node and install my required packages.
>>> On 17-Dec-2014 1:31 am, "Andrey Danin"  wrote:
>>>
 Hello.

 What version of Fuel do you use? Did you reupload openstack.yaml into
 Nailgun? Do you want just to deploy an operating system and configure a
 network on a new node?

 I would really appreciate if you use a period at the end of sentences.

 On Tuesday, December 16, 2014, Satyasanjibani Rautaray <
 engg.s...@gmail.com> wrote:

> Hi,
>
> I am in the process of creating an additional node by editing the code,
> where the new node will serve a different purpose than installing OpenStack
> components. Just for testing, currently the new node will install vim for
> me. Please help me with what else I need to look into to create the
> complete setup and deploy with Fuel. I have edited openstack.yaml at
> /root/fuel-web/nailgun/nailgun/fixtures: http://pastebin.com/P1MmDBzP
> --
> Thanks
> Satya
> Mob:9844101001
>
> No one is the best by birth, Its his brain/ knowledge which make him
> the best.
>


 --
 Andrey Danin
 ada...@mirantis.com
 skype: gcon.monolake


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> --
>> Mike Scherbakov
>> #mihgen
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> --
> Thanks
> Satya
> Mob:9844101001
>
> No one is the best by birth, Its his brain/ knowledge which make him the
> best.
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

-- 
Andrey Danin
ada...@mirantis.com
skype: gcon.monolake
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [keystonemiddleware] Keystone Middleware 1.3.0 release

2014-12-18 Thread Morgan Fainberg
The Keystone development community would like to announce the 1.3.0 release of 
the keystone middleware package. 

This release can be installed from the following locations:
* http://tarballs.openstack.org/keystonemiddleware
* https://pypi.python.org/pypi/keystonemiddleware

1.3.0
---
* http_connect_timeout option is now an integer instead of a boolean.
* The service user for auth_token middleware can now be in a domain other than 
the default domain.

Detailed changes in this release beyond what is listed above:
https://launchpad.net/keystonemiddleware/+milestone/1.3.0
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [python-keystoneclient] python-keystoneclient 1.0.0 release

2014-12-18 Thread Morgan Fainberg
The Keystone development community would like to announce the release of 
python-keystoneclient 1.0.0. The move to the 1.x.x development branch was made 
to match the perception that the library has long been considered stable. 
Beyond the move to the stable release version, this release is no different 
than any other python-keystoneclient release.

This release can be installed from the following locations:
* http://tarballs.openstack.org/python-keystoneclient 

* https://pypi.python.org/pypi/python-keystoneclient 


1.0.0
---
* Registered CLI options will no longer use default values in place of the 
ENV variables (when present)
* The `curl` examples in the debug output now include `--globoff` for IPv6 
URLs
* HTTPClient will no longer incorrectly raise AttributeError when checking 
`.has_service_catalog` if authentication has not occurred

Detailed changes in this release beyond what is listed above:
https://launchpad.net/python-keystoneclient/+milestone/1.0.0
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-18 Thread Chris St. Pierre
I wasn't suggesting that we *actually* use filesystem link count, and make
hard links or whatever for every time the image is used. That's prima facie
absurd, for a lot more reasons than the ones you point out. I was suggesting a new
database field that tracks the number of times an image is in use, by
*analogy* with filesystem link counts. (If I wanted to be unnecessarily
abrasive I might say, "This is a textbook example of something called an
analogy," but I'm not interested in being unnecessarily abrasive.)

Overloading the protected flag is *still* a terrible hack. Even if we
tracked the initial state of "protected" and restored that state when an
image went out of use, that would negate the ability to make an image
protected while it was in use and expect that change to remain in place. So
that just violates the principle of least surprise. Of course, we could
have glance modify the "original_protected_state" flag when that flag is
non-null and the user changes the actual "protected" flag, but this is just
layering hacks upon hacks. By actually tracking the number of times an
image is in use, we can preserve the ability to protect images *and* avoid
deleting images in use.
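
For concreteness, a minimal sketch of what such a usage count could look like
at the database level (hypothetical schema and names, not proposed Glance
code), using guarded single-statement UPDATEs so concurrent increments and
decrements stay atomic:

# Illustrative only: a "link count" column maintained with atomic UPDATEs.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE images (id TEXT PRIMARY KEY, '
             'protected INTEGER, usage_count INTEGER)')
conn.execute("INSERT INTO images VALUES ('img-1', 0, 0)")

def increment_usage(image_id):
    # One UPDATE statement: atomic at the DB level, no read-modify-write.
    conn.execute('UPDATE images SET usage_count = usage_count + 1 '
                 'WHERE id = ?', (image_id,))

def decrement_usage(image_id):
    conn.execute('UPDATE images SET usage_count = usage_count - 1 '
                 'WHERE id = ? AND usage_count > 0', (image_id,))

def can_delete(image_id):
    row = conn.execute('SELECT protected, usage_count FROM images '
                       'WHERE id = ?', (image_id,)).fetchone()
    # Deletable only when not protected and no longer in use.
    return row is not None and row[0] == 0 and row[1] == 0

increment_usage('img-1')    # e.g. Nova spawns an instance from the image
print(can_delete('img-1'))  # False
decrement_usage('img-1')    # the instance is destroyed
print(can_delete('img-1'))  # True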

On Thu, Dec 18, 2014 at 5:27 AM, Kuvaja, Erno  wrote:

>  I think that's a horrible idea. How do we make that store-independent with
> the linking dependencies?
>
>
>
> We should not make a universal use case like this depend on a limited subset
> of backends, especially non-OpenStack ones. Neither Glance nor Nova should
> ever depend on having direct access to the actual medium where the images
> are stored. I think this is a schoolbook example of something that calls for
> a database. It is arguable whether this should be tracked by Glance or Nova,
> but it should definitely not be a dirty hack expecting specific backend
> characteristics.
>
>
>
> As mentioned before, the protected image property is there to ensure that
> the image does not get deleted, and it is also easy to track when the images
> are queried. Perhaps the record needs to track the original state of the
> protected flag, the image id, and a use count: a 3-column table and a couple
> of API calls. Let's at least not make it any more complicated than it needs
> to be, if such functionality is desired.
>
>
>
> -  Erno
>
>
>
> *From:* Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
> *Sent:* 17 December 2014 20:34
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in
> use?
>
>
>
> Guess that's a implementation detail. Depends on the way you go about
> using what's available now, I suppose.
>
>
>
> Thanks,
> -Nikhil
>   --
>
> *From:* Chris St. Pierre [chris.a.st.pie...@gmail.com]
> *Sent:* Wednesday, December 17, 2014 2:07 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in
> use?
>
> I was assuming atomic increment/decrement operations, in which case I'm
> not sure I see the race conditions. Or is atomism assuming too much?
>
>
>
> On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar <
> nikhil.koma...@rackspace.com> wrote:
>
>  That looks like a decent alternative if it works. However, it would be
> too racy unless we we implement a test-and-set for such properties or there
> is a different job which queues up these requests and perform sequentially
> for each tenant.
>
>
>
> Thanks,
> -Nikhil
>   --
>
> *From:* Chris St. Pierre [chris.a.st.pie...@gmail.com]
> *Sent:* Wednesday, December 17, 2014 10:23 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [glance] Option to skip deleting images in
> use?
>
> That's unfortunately too simple. You run into one of two cases:
>
>
>
> 1. If the job automatically removes the protected attribute when an image
> is no longer in use, then you lose the ability to use "protected" on images
> that are not in use. I.e., there's no way to say, "nothing is currently
> using this image, but please keep it around." (This seems particularly
> useful for snapshots, for instance.)
>
>
>
> 2. If the job does not automatically remove the protected attribute, then
> an image would be protected if it had ever been in use; to delete an image,
> you'd have to manually un-protect it, which is a workflow that quite
> explicitly defeats the whole purpose of flagging images as protected when
> they're in use.
>
>
>
> It seems like flagging an image as *not* in use is actually a fairly
> difficult problem, since it requires consensus among all components that
> might be using images.
>
>
>
> The only solution that readily occurs to me would be to add something like
> a filesystem link count to images in Glance. Then when Nova spawns an
> instance, it increments the usage count; when the instance is destroyed,
> the usage count is decremented. And similarly with other components that
> use images. An image could only be deleted when

Re: [openstack-dev] [Fuel] Adding code to add node to fuel UI

2014-12-18 Thread Satyasanjibani Rautaray
Hi Mike,

This document helped a lot.

I may be missing something, so below are the details for which I need some
help.

I have a vim.pp file for testing which will install vim on a particular
node that is not part of the controller, compute, or any other OpenStack
component nodes.

The current zabbix-server code in manager.py does something like this:

from nailgun.utils.zabbix import ZabbixManager

@classmethod
def get_zabbix_url(cls, cluster):
    zabbix_node = ZabbixManager.get_zabbix_node(cluster)
    if zabbix_node is None:
        return None
    ip_cidr = cls.get_node_network_by_netname(
        zabbix_node, 'public'
    )['ip']
    ip = ip_cidr.split('/')[0]
    return 'http://{0}/zabbix'.format(ip)



at receiver.py:

zabbix_url = objects.Cluster.get_network_manager(
    task.cluster
).get_zabbix_url(task.cluster)

if zabbix_url:
    zabbix_suffix = " Access Zabbix dashboard at {0}".format(
        zabbix_url
    )
    message += zabbix_suffix



at task.py:

from nailgun.utils.zabbix import ZabbixManager

# check if there's a zabbix server in an environment
# and if there is, remove hosts
if ZabbixManager.get_zabbix_node(task.cluster):
    zabbix_credentials = ZabbixManager.get_zabbix_credentials(
        task.cluster
    )
    logger.debug("Removing nodes %s from zabbix" % (nodes_to_delete))
    try:
        ZabbixManager.remove_from_zabbix(
            zabbix_credentials, nodes_to_delete
        )
    except (errors.CannotMakeZabbixRequest,
            errors.ZabbixRequestError) as e:
        logger.warning("%s, skipping removing nodes from Zabbix", e)



and

https://review.openstack.org/#/c/84408/39/nailgun/nailgun/utils/zabbix.py


I am not able to figure out how I can connect this to the vim.pp file.

Thanks
Satya


On Wed, Dec 17, 2014 at 7:27 AM, Mike Scherbakov 
wrote:
>
> Hi,
> did you come across
> http://docs.mirantis.com/fuel-dev/develop/addition_examples.html ?
>
> I believe it should cover your use case.
>
> Thanks,
>
> On Tue, Dec 16, 2014 at 11:43 PM, Satyasanjibani Rautaray <
> engg.s...@gmail.com> wrote:
>>
>> I just need to deploy the node and install my required packages.
>> On 17-Dec-2014 1:31 am, "Andrey Danin"  wrote:
>>
>>> Hello.
>>>
>>> What version of Fuel do you use? Did you reupload openstack.yaml into
>>> Nailgun? Do you want just to deploy an operating system and configure a
>>> network on a new node?
>>>
>>> I would really appreciate if you use a period at the end of sentences.
>>>
>>> On Tuesday, December 16, 2014, Satyasanjibani Rautaray <
>>> engg.s...@gmail.com> wrote:
>>>
 Hi,

 I am in the process of creating an additional node by editing the code,
 where the new node will serve a different purpose than installing OpenStack
 components. Just for testing, currently the new node will install vim for
 me. Please help me with what else I need to look into to create the complete
 setup and deploy with Fuel. I have edited openstack.yaml at
 /root/fuel-web/nailgun/nailgun/fixtures: http://pastebin.com/P1MmDBzP
 --
 Thanks
 Satya
 Mob:9844101001

 No one is the best by birth, Its his brain/ knowledge which make him
 the best.

>>>
>>>
>>> --
>>> Andrey Danin
>>> ada...@mirantis.com
>>> skype: gcon.monolake
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> --
> Mike Scherbakov
> #mihgen
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

-- 
Thanks
Satya
Mob:9844101001

No one is the best by birth, Its his brain/ knowledge which make him the
best.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-dev][Cinder] Driver stats return value: infinite or unavailable

2014-12-18 Thread Eduard Matei
Thanks John,
I updated to unknown.

Eduard

On Thu, Dec 18, 2014 at 8:09 PM, John Griffith 
wrote:

> On Thu, Dec 18, 2014 at 1:56 AM, Eduard Matei
>  wrote:
> > Hi everyone,
> >
> > We're in a bit of a predicament regarding review:
> > https://review.openstack.org/#/c/130733/
> >
> > Two days ago it got a -1 from John G asking to change infinite to
> > unavailable although the docs clearly say that "If the driver is unable
> to
> > provide a value for free_capacity_gb or total_capacity_gb, keywords can
> be
> > provided instead. Please use ‘unknown’ if the array cannot report the
> value
> > or ‘infinite’ if the array has no upper limit."
> > (http://docs.openstack.org/developer/cinder/devref/drivers.html)
> >
> > After I changed it, Walter A. Boring IV came along and gave another -1
> > saying we should return infinite.
> >
> > Since we use S3 as a backend and it has no upper limit (technically
> there is
> > a limit but for the purposes of our driver there's no limit as the
> backend
> > is "elastic") we could return infinite.
> >
> > Anyway, the problem is that now we missed the K-1 merge window although
> the
> > driver passed all tests (including cert tests).
> >
> > So please can someone decide which is the correct value, so we can use
> > that and get the patch approved (unless there are other issues).
> >
> > Thanks,
> > Eduard
> > --
> >
> > Eduard Biceri Matei, Senior Software Developer
> > www.cloudfounders.com
> >  | eduard.ma...@cloudfounders.com
> >
> >
> >
> > CloudFounders, The Private Cloud Software Company
> >
> > Disclaimer:
> > This email and any files transmitted with it are confidential and
> intended
> > solely for the use of the individual or entity to whom they are
> addressed.
> > If you are not the named addressee or an employee or agent responsible
> for
> > delivering this message to the named addressee, you are hereby notified
> that
> > you are not authorized to read, print, retain, copy or disseminate this
> > message or any part of it. If you have received this email in error we
> > request you to notify us by reply e-mail and to delete all electronic
> files
> > of the message. If you are not the intended recipient you are notified
> that
> > disclosing, copying, distributing or taking any action in reliance on the
> > contents of this information is strictly prohibited.
> > E-mail transmission cannot be guaranteed to be secure or error free as
> > information could be intercepted, corrupted, lost, destroyed, arrive
> late or
> > incomplete, or contain viruses. The sender therefore does not accept
> > liability for any errors or omissions in the content of this message, and
> > shall have no liability for any loss or damage suffered by the user,
> which
> > arise as a result of e-mail transmission.
> >
> >
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> Hi Eduard,
>
> First, I owe you an apology; I stated "unavailable" in the review, but
> should've stated "unknown".  We're in the process of trying to
> eliminate the reporting of infinite as it screwed up the weighing
> scheduler.
>
> Note that Zhiteng adjusted the scheduler so this isn't such a big deal
> anymore by down-grading the handling of infinite and unknown [1].
>
> Anyway, my suggestion not to use infinite is because in the coming
> weeks I'd like to remove infinite from the stats reporting altogether,
> and have those backends that for whatever reason don't know how much
> capacity they have use the more accurate report of "unknown".
>
> Sorry for the confusion, I think the comments on your review have been
> updated to reflect this, if not I'll do that next.
>
> Thanks,
> John
>
> [1]:
> https://github.com/openstack/cinder/commit/ee9d30a73a74a2e1905eacc561c1b5188b62ca75
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*

Disclaimer:
This email and any files transmitted with it are confidential and
intended solely for the use of the individual or entity to whom they
are addressed.
If you are not the named addressee or an employee or agent responsible
for delivering this message to the named addressee, you are hereby
notified that you are not authorized to read, print, retain, copy or
disseminate this message or any part of it. If you have received this
email in error we request you to notify us by reply e-mail and to
delete all electronic files of the message. If you are not the
intended recipient you are notified that disclosing, copying,
distributing or taking any action in reliance on the contents of this
information is strictly prohibited.
E-mail transmission cannot 

Re: [openstack-dev] [OpenStack-dev][Cinder] Driver stats return value: infinite or unavailable

2014-12-18 Thread John Griffith
On Thu, Dec 18, 2014 at 1:56 AM, Eduard Matei
 wrote:
> Hi everyone,
>
> We're in a bit of a predicament regarding review:
> https://review.openstack.org/#/c/130733/
>
> Two days ago it got a -1 from John G asking to change infinite to
> unavailable although the docs clearly say that "If the driver is unable to
> provide a value for free_capacity_gb or total_capacity_gb, keywords can be
> provided instead. Please use ‘unknown’ if the array cannot report the value
> or ‘infinite’ if the array has no upper limit."
> (http://docs.openstack.org/developer/cinder/devref/drivers.html)
>
> After I changed it, Walter A. Boring IV came along and gave another -1
> saying we should return infinite.
>
> Since we use S3 as a backend and it has no upper limit (technically there is
> a limit but for the purposes of our driver there's no limit as the backend
> is "elastic") we could return infinite.
>
> Anyway, the problem is that now we missed the K-1 merge window although the
> driver passed all tests (including cert tests).
>
> So please can someone decide which is the correct value, so we can use that
> and get the patch approved (unless there are other issues).
>
> Thanks,
> Eduard
> --
>
> Eduard Biceri Matei, Senior Software Developer
> www.cloudfounders.com
>  | eduard.ma...@cloudfounders.com
>
>
>
> CloudFounders, The Private Cloud Software Company
>
> Disclaimer:
> This email and any files transmitted with it are confidential and intended
> solely for the use of the individual or entity to whom they are addressed.
> If you are not the named addressee or an employee or agent responsible for
> delivering this message to the named addressee, you are hereby notified that
> you are not authorized to read, print, retain, copy or disseminate this
> message or any part of it. If you have received this email in error we
> request you to notify us by reply e-mail and to delete all electronic files
> of the message. If you are not the intended recipient you are notified that
> disclosing, copying, distributing or taking any action in reliance on the
> contents of this information is strictly prohibited.
> E-mail transmission cannot be guaranteed to be secure or error free as
> information could be intercepted, corrupted, lost, destroyed, arrive late or
> incomplete, or contain viruses. The sender therefore does not accept
> liability for any errors or omissions in the content of this message, and
> shall have no liability for any loss or damage suffered by the user, which
> arise as a result of e-mail transmission.
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
Hi Eduard,

First, I owe you an apology; I stated "unavailable" in the review, but
should've stated "unknown".  We're in the process of trying to
eliminate the reporting of infinite as it screwed up the weighing
scheduler.

Note that Zhiteng adjusted the scheduler so this isn't such a big deal
anymore by down-grading the handling of infinite and unknown [1].

Anyway, my suggestion not to use infinite is because in the coming
weeks I'd like to remove infinite from the stats reporting altogether,
and have those backends that for whatever reason don't know how much
capacity they have use the more accurate report of "unknown".
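
For anyone following along, that convention looks roughly like this in a
driver's stats reporting (a sketch only; the class, backend, and vendor
names are placeholders, not the driver under review):

# Sketch of a driver reporting capacity it cannot measure: per the devref
# keywords, 'unknown' is preferred over 'infinite' here.
class ExampleISCSIDriver(object):
    def _update_volume_stats(self):
        self._stats = {
            'volume_backend_name': 'example_backend',
            'vendor_name': 'Example Vendor',
            'driver_version': '1.0.0',
            'storage_protocol': 'iSCSI',
            # The elastic backend cannot report totals, so report
            # 'unknown' rather than 'infinite'.
            'total_capacity_gb': 'unknown',
            'free_capacity_gb': 'unknown',
            'reserved_percentage': 0,
        }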

Sorry for the confusion, I think the comments on your review have been
updated to reflect this, if not I'll do that next.

Thanks,
John

[1]: 
https://github.com/openstack/cinder/commit/ee9d30a73a74a2e1905eacc561c1b5188b62ca75

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2014-12-18 Thread Eduard Matei
Thanks for the input.

I managed to get another master working (on Ubuntu 13.10), again with some
issues since it was already set up.
I'm now working towards setting up the slave.

Will add comments to those reviews.

Thanks,
Eduard

On Thu, Dec 18, 2014 at 7:42 PM, Asselin, Ramy  wrote:

>  Yes, Ubuntu 12.04 is tested as mentioned in the readme [1]. Note that
> the referenced script is just a wrapper that pulls all the latest from
> various locations in openstack-infra, e.g. [2].
>
> Ubuntu 14.04 support is WIP [3]
>
> FYI, there’s a spec to get an in-tree 3rd party ci solution [4]. Please
> add your comments if this interests you.
>
>
>
> [1] https://github.com/rasselin/os-ext-testing/blob/master/README.md
>
> [2]
> https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29
>
> [3] https://review.openstack.org/#/c/141518/
>
> [4] https://review.openstack.org/#/c/139745/
>
>
>
>
>
> *From:* Punith S [mailto:punit...@cloudbyte.com]
> *Sent:* Thursday, December 18, 2014 3:12 AM
> *To:* OpenStack Development Mailing List (not for usage questions);
> Eduard Matei
>
> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
> setting up CI
>
>
>
> Hi Eduard
>
>
>
> we tried running
> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
>
> on ubuntu master 12.04, and it appears to be working fine on 12.04.
>
>
>
> thanks
>
>
>
> On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei <
> eduard.ma...@cloudfounders.com> wrote:
>
>  Hi,
>
> Seems I can't install using Puppet on the Jenkins master using
> install_master.sh from
> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
> because it's running Ubuntu 11.10, which appears to be unsupported.
>
> I managed to install Puppet manually on the master, but everything else
> fails.
>
> So I'm trying to manually install Zuul, Nodepool, and Jenkins Job Builder,
> and see where I end up.
>
>
>
> The slave looks complete, got some errors on running install_slave so i
> ran parts of the script manually, changing some params and it appears
> installed but no way to test it without the master.
>
>
>
> Any ideas welcome.
>
>
>
> Thanks,
>
>
>
> Eduard
>
>
>
> On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy 
> wrote:
>
>   Manually running the script requires a few environment settings. Take a
> look at the README here:
>
> https://github.com/openstack-infra/devstack-gate
>
>
>
> Regarding cinder, I’m using this repo to run our cinder jobs (fork from
> jaypipes).
>
> https://github.com/rasselin/os-ext-testing
>
>
>
> Note that this solution doesn’t use the Jenkins gerrit trigger pluggin,
> but zuul.
>
>
>
> There’s a sample job for cinder here. It’s in Jenkins Job Builder format.
>
>
> https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample
>
>
>
> You can ask more questions in IRC freenode #openstack-cinder. (irc#
> asselin)
>
>
>
> Ramy
>
>
>
> *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
> *Sent:* Tuesday, December 16, 2014 12:41 AM
> *To:* Bailey, Darragh
> *Cc:* OpenStack Development Mailing List (not for usage questions);
> OpenStack
> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
> setting up CI
>
>
>
> Hi,
>
>
>
> Can someone point me to some working documentation on how to setup third
> party CI? (joinfu's instructions don't seem to work, and manually running
> devstack-gate scripts fails:
>
> Running gate_hook
>
> Job timeout set to: 163 minutes
>
> timeout: failed to run command 
> ‘/opt/stack/new/devstack-gate/devstack-vm-gate.sh’: No such file or directory
>
> ERROR: the main setup script run by this job failed - exit code: 127
>
> please look at the relevant log files to determine the root cause
>
> Cleaning up host
>
> ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
>
>  Build step 'Execute shell' marked build as failure.
>
>
>
> I have a working Jenkins slave with devstack and our internal libraries, i
> have Gerrit Trigger Plugin working and triggering on patches created, i
> just need the actual job contents so that it can get to comment with the
> test results.
>
>
>
> Thanks,
>
>
>
> Eduard
>
>
>
> On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei <
> eduard.ma...@cloudfounders.com> wrote:
>
>  Hi Darragh, thanks for your input
>
>
>
> I double checked the job settings and fixed it:
>
> - build triggers is set to Gerrit event
>
> - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin
> and tested separately)
>
> - Trigger on: Patchset Created
>
> - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches:
> Type: Path, Pattern: ** (was Type Plain on both)
>
> Now the job is triggered by commit on openstack-dev/sandbox :)
>
>
>
> Regarding the Query and Trigger Gerrit Patches, i found my patch using
> query: status:open project:openstack-dev/sandbox change:139585 and i can
> trigger it manually and it exe

Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2014-12-18 Thread Asselin, Ramy
Yes, Ubuntu 12.04 is tested as mentioned in the readme [1]. Note that the 
referenced script is just a wrapper that pulls all the latest from various 
locations in openstack-infra, e.g. [2].
Ubuntu 14.04 support is WIP [3]
FYI, there’s a spec to get an in-tree 3rd party ci solution [4]. Please add 
your comments if this interests you.

[1] https://github.com/rasselin/os-ext-testing/blob/master/README.md
[2] 
https://github.com/rasselin/os-ext-testing/blob/master/puppet/install_master.sh#L29
[3] https://review.openstack.org/#/c/141518/
[4] https://review.openstack.org/#/c/139745/


From: Punith S [mailto:punit...@cloudbyte.com]
Sent: Thursday, December 18, 2014 3:12 AM
To: OpenStack Development Mailing List (not for usage questions); Eduard Matei
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting 
up CI

Hi Eduard

we tried running 
https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
on ubuntu master 12.04, and it appears to be working fine on 12.04.

thanks

On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei
<eduard.ma...@cloudfounders.com> wrote:
Hi,
Seems I can't install using Puppet on the Jenkins master using
install_master.sh from
https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
because it's running Ubuntu 11.10, which appears to be unsupported.
I managed to install Puppet manually on the master, but everything else fails.
So I'm trying to manually install Zuul, Nodepool, and Jenkins Job Builder,
and see where I end up.

The slave looks complete, got some errors on running install_slave so i ran 
parts of the script manually, changing some params and it appears installed but 
no way to test it without the master.

Any ideas welcome.

Thanks,

Eduard

On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy
<ramy.asse...@hp.com> wrote:
Manually running the script requires a few environment settings. Take a look at 
the README here:
https://github.com/openstack-infra/devstack-gate

Regarding cinder, I’m using this repo to run our cinder jobs (fork from 
jaypipes).
https://github.com/rasselin/os-ext-testing

Note that this solution doesn’t use the Jenkins gerrit trigger pluggin, but 
zuul.

There’s a sample job for cinder here. It’s in Jenkins Job Builder format.
https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample

You can ask more questions in IRC freenode #openstack-cinder. (irc# asselin)

Ramy

From: Eduard Matei 
[mailto:eduard.ma...@cloudfounders.com]
Sent: Tuesday, December 16, 2014 12:41 AM
To: Bailey, Darragh
Cc: OpenStack Development Mailing List (not for usage questions); OpenStack
Subject: Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting 
up CI

Hi,

Can someone point me to some working documentation on how to setup third party 
CI? (joinfu's instructions don't seem to work, and manually running 
devstack-gate scripts fails:

Running gate_hook

Job timeout set to: 163 minutes

timeout: failed to run command 
‘/opt/stack/new/devstack-gate/devstack-vm-gate.sh’: No such file or directory

ERROR: the main setup script run by this job failed - exit code: 127

please look at the relevant log files to determine the root cause

Cleaning up host

... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
Build step 'Execute shell' marked build as failure.

I have a working Jenkins slave with devstack and our internal libraries, i have 
Gerrit Trigger Plugin working and triggering on patches created, i just need 
the actual job contents so that it can get to comment with the test results.

Thanks,

Eduard

On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei
<eduard.ma...@cloudfounders.com> wrote:
Hi Darragh, thanks for your input

I double checked the job settings and fixed it:
- build triggers is set to Gerrit event
- Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin and 
tested separately)
- Trigger on: Patchset Created
- Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches: Type: 
Path, Pattern: ** (was Type Plain on both)
Now the job is triggered by commit on openstack-dev/sandbox :)

Regarding the Query and Trigger Gerrit Patches, i found my patch using query: 
status:open project:openstack-dev/sandbox change:139585 and i can trigger it 
manually and it executes the job.

But i still have the problem: what should the job do? It doesn't actually do 
anything, it doesn't run tests or comment on the patch.
Do you have an example of job?

Thanks,
Eduard

On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh
<dbai...@hp.com> wrote:
Hi Eduard,


I would check the trigger settings in the job, particularly which "type"
of pattern matching is being used for the branches. Found it tends to be
the spot that catches most people out when configuring jobs with the
Gerrit Trigger plugin. If you're looking to trigger against all bra

Re: [openstack-dev] [pecan] [WSME] Different content-type in request and response

2014-12-18 Thread Renat Akhmerov
Ok, Doug, we’ll look into it.

Thanks

Renat Akhmerov
@ Mirantis Inc.



> On 18 Dec 2014, at 22:59, Doug Hellmann  wrote:
> 
> 
> On Dec 18, 2014, at 2:53 AM, Renat Akhmerov  > wrote:
> 
>> Doug,
>> 
>> Sorry for trying to resurrect this thread again. It seems to be pretty 
>> important for us. Do you have some comments on that? Or if you need more 
>> context please also let us know.
> 
> WSME has separate handlers for JSON and XML now. You could look into adding 
> one for YAML. I think you’d want to start looking in 
> http://git.openstack.org/cgit/stackforge/wsme/tree/wsme/rest 
> 
> 
> By default WSME is going to want to encode the response in the same format as 
> the inputs, because it’s going to expect the clients to want that. I’m not 
> sure how hard it would be to change that assumption, or whether the other 
> WSME developers would really think it’s a good idea.
> 
> Doug
> 
>> 
>> Thanks
>> 
>> Renat Akhmerov
>> @ Mirantis Inc.
>> 
>> 
>> 
>>> On 27 Nov 2014, at 17:43, Renat Akhmerov  wrote:
>>> 
>>> Doug, thanks for your answer! 
>>> 
>>> My explanations below..
>>> 
>>> 
 On 26 Nov 2014, at 21:18, Doug Hellmann  wrote:
 
 
 On Nov 26, 2014, at 3:49 AM, Renat Akhmerov  wrote:
 
> Hi,
> 
> I traced the WSME code and found a place [0] where it tries to get 
> arguments from request body based on different mimetype. So looks like 
> WSME supports only json, xml and “application/x-www-form-urlencoded”.
> 
> So my question is: Can we fix WSME to also support “text/plain” mimetype? 
> I think the first snippet that Nikolay provided is valid from WSME 
> standpoint.
 
 WSME is intended for building APIs with structured arguments. It seems 
 like the case of wanting to use text/plain for a single input string 
 argument just hasn’t come up before, so this may be a new feature.
 
 How many different API calls do you have that will look like this? Would 
 this be the only one in the API? Would it make sense to consistently use 
 JSON, even though you only need a single string argument in this case?
>>> 
>>> We have 5-6 API calls where we need it.
>>> 
>>> And let me briefly explain the context. In Mistral we have a language (we 
>>> call it DSL) to describe different object types: workflows, workbooks, 
>>> actions. So currently when we upload say a workbook we run in a command 
>>> line:
>>> 
>>> mistral workbook-create my_wb.yaml
>>> 
>>> where my_wb.yaml contains that DSL. The result is a table representation of 
>>> actually create server side workbook. From technical perspective we now 
>>> have:
>>> 
>>> Request:
>>> 
>>> POST /mistral_url/workbooks
>>> 
>>> {
>>>  “definition”: “escaped content of my_wb.yaml"
>>> }
>>> 
>>> Response:
>>> 
>>> {
>>>  “id”: “1-2-3-4”,
>>>  “name”: “my_wb_name”,
>>>  “description”: “my workbook”,
>>>  ...
>>> }
>>> 
>>> The point is that if we use, for example, something like “curl”, we have
>>> to obtain that “escaped content of my_wb.yaml” every time and create that,
>>> in fact, synthetic JSON to be able to send it to the server side.
>>> 
>>> So for us it would be much more convenient if we could just send a plain 
>>> text but still be able to receive a JSON as response. I personally don’t 
>>> want to use some other technology because generally WSME does it job and I 
>>> like this concept of rest resources defined as classes. If it supported 
>>> text/plain it would be just the best fit for us.
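
For illustration, the client-side shape of that workaround (a sketch; the
endpoint path is hypothetical and the python-requests library is assumed):

# Wrap the raw YAML definition in a JSON body before POSTing.
import json
import requests

with open('my_wb.yaml') as f:
    definition = f.read()

resp = requests.post(
    'http://mistral-host:8989/v1/workbooks',  # hypothetical endpoint
    data=json.dumps({'definition': definition}),
    headers={'Content-Type': 'application/json'},
)
print(resp.json())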
>>> 
> 
> Or if we don’t understand something in WSME philosophy then it’d nice to 
> hear some explanations from WSME team. Will appreciate that.
> 
> 
> Another issue that previously came across is that if we use WSME then we 
> can’t pass arbitrary set of parameters in a url query string, as I 
> understand they should always correspond to WSME resource structure. So, 
> in fact, we can’t have any dynamic parameters. In our particular use case 
> it’s very inconvenient. Hoping you could also provide some info about 
> that: how it can be achieved or if we can just fix it.
 
 Ceilometer uses an array of query arguments to allow an arbitrary number.
 
 On the other hand, it sounds like perhaps your desired API may be easier 
 to implement using some of the other tools being used, such as JSONSchema. 
 Are you extending an existing API or building something completely new?
>>> 
>>> We want to improve our existing Mistral API. Basically, the idea is to be 
>>> able to apply dynamic filters when we’re requesting a collection of objects 
>>> using url query string. Yes, we could use JSONSchema if you say it’s 
>>> absolutely impossible to do and doesn’t follow WSME concepts, that’s fine. 
>>> But like I said generally I like the approach that WSME takes and don’t 
>>> feel like jumping to another technology just because of t

Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-18 Thread Clint Byrum
Excerpts from Anant Patil's message of 2014-12-16 07:36:58 -0800:
> On 16-Dec-14 00:59, Clint Byrum wrote:
> > Excerpts from Anant Patil's message of 2014-12-15 07:15:30 -0800:
> >> On 13-Dec-14 05:42, Zane Bitter wrote:
> >>> On 12/12/14 05:29, Murugan, Visnusaran wrote:
> 
> 
> > -Original Message-
> > From: Zane Bitter [mailto:zbit...@redhat.com]
> > Sent: Friday, December 12, 2014 6:37 AM
> > To: openstack-dev@lists.openstack.org
> > Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
> > showdown
> >
> > On 11/12/14 08:26, Murugan, Visnusaran wrote:
>  [Murugan, Visnusaran]
>  In case of rollback where we have to cleanup earlier version of
>  resources,
> >>> we could get the order from old template. We'd prefer not to have a
> >>> graph table.
> >>>
> >>> In theory you could get it by keeping old templates around. But that
> >>> means keeping a lot of templates, and it will be hard to keep track
> >>> of when you want to delete them. It also means that when starting an
> >>> update you'll need to load every existing previous version of the
> >>> template in order to calculate the dependencies. It also leaves the
> >>> dependencies in an ambiguous state when a resource fails, and
> >>> although that can be worked around it will be a giant pain to 
> >>> implement.
> >>>
> >>
> >> Agree that looking to all templates for a delete is not good. But
> >> baring Complexity, we feel we could achieve it by way of having an
> >> update and a delete stream for a stack update operation. I will
> >> elaborate in detail in the etherpad sometime tomorrow :)
> >>
> >>> I agree that I'd prefer not to have a graph table. After trying a
> >>> couple of different things I decided to store the dependencies in the
> >>> Resource table, where we can read or write them virtually for free
> >>> because it turns out that we are always reading or updating the
> >>> Resource itself at exactly the same time anyway.
> >>>
> >>
> >> Not sure how this will work in an update scenario when a resource does
> >> not change and its dependencies do.
> >
> > We'll always update the requirements, even when the properties don't
> > change.
> >
> 
>  Can you elaborate a bit on rollback.
> >>>
> >>> I didn't do anything special to handle rollback. It's possible that we 
> >>> need to - obviously the difference in the UpdateReplace + rollback case 
> >>> is that the replaced resource is now the one we want to keep, and yet 
> >>> the replaced_by/replaces dependency will force the newer (replacement) 
> >>> resource to be checked for deletion first, which is an inversion of the 
> >>> usual order.
> >>>
> >>
> >> This is where the version is so handy! For UpdateReplaced ones, there is
> >> an older version to go back to. This version could just be template ID,
> >> as I mentioned in another e-mail. All resources are at the current
> >> template ID if they are found in the current template, even if they is
> >> no need to update them. Otherwise, they need to be cleaned-up in the
> >> order given in the previous templates.
> >>
> >> I think the template ID is used as version as far as I can see in Zane's
> >> PoC. If the resource template key doesn't match the current template
> >> key, the resource is deleted. The version is misnomer here, but that
> >> field (template id) is used as though we had versions of resources.
> >>
> >>> However, I tried to think of a scenario where that would cause problems 
> >>> and I couldn't come up with one. Provided we know the actual, real-world 
> >>> dependencies of each resource I don't think the ordering of those two 
> >>> checks matters.
> >>>
> >>> In fact, I currently can't think of a case where the dependency order 
> >>> between replacement and replaced resources matters at all. It matters in 
> >>> the current Heat implementation because resources are artificially 
> >>> segmented into the current and backup stacks, but with a holistic view 
> >>> of dependencies that may well not be required. I tried taking that line 
> >>> out of the simulator code and all the tests still passed. If anybody can 
> >>> think of a scenario in which it would make a difference, I would be very 
> >>> interested to hear it.
> >>>
> >>> In any event though, it should be no problem to reverse the direction of 
> >>> that one edge in these particular circumstances if it does turn out to 
> >>> be a problem.
> >>>
>  We had an approach with depends_on
>  and needed_by columns in ResourceTable. But dropped it when we figured 
>  out
>  we had too many DB operations for Update.
> >>>
> >>> Yeah, I initially ran into this problem too - you have a bunch of nodes 
> >>> that are waiting on the current node, and now you have to go look them 
> >>> all up in the database to see what else they're waitin

Re: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy

2014-12-18 Thread Tim Hinrichs
Hi Yathi,

Thanks for the reminder about the nova solver scheduler.  It’s definitely a 
cool idea to look at integrating the two systems!

Ramki is definitely involved in this discussion.  We thought placement was a 
good first example of a broad class of problems that a linear solver could help 
address, esp. in the NFV context.  I like the idea of integrating Congress and 
the Nova solver scheduler and then generalizing what we learned to handle other 
kinds of optimization problems.  So that’s what I’m thinking long term.

Tim





On Dec 16, 2014, at 11:28 AM, Yathiraj Udupi (yudupi)
<yud...@cisco.com> wrote:

To add to what I mentioned below… We from the Solver Scheduler team are a small
team here at Cisco, trying to drive this project and slowly adding more complex
use cases for scheduling and policy-driven placements. We would really love to
have some real contributions from everyone in the community and build this the
right way.
If it is of interest, some scheduler use cases from one of our community
meetings in IRC are here: https://etherpad.openstack.org/p/SchedulerUseCases
This could apply to Congress driving some of this too.

I am leading the effort for the Solver Scheduler project
(https://github.com/stackforge/nova-solver-scheduler), and if any of you are
willing to contribute code, APIs, benchmarks, or integration work, my team and
I can help guide you through this. We would be following the same processes
under Stackforge at the moment.

Thanks,
Yathi.





On 12/16/14, 11:14 AM, "Yathiraj Udupi (yudupi)"
<yud...@cisco.com> wrote:

Tim,

I read the conversation thread below and it got me interested, as it relates
to the discussion we had at the Policy Summit (mid-cycle meet-up) held in Palo
Alto a few months ago.

This relates to our project – Nova Solver Scheduler, which I had talked about 
at the Policy summit.   Please see this - 
https://github.com/stackforge/nova-solver-scheduler

We already have a working constraints-based solver framework/engine that 
handles Nova placement, and we are currently active in Stackforge, and aim to 
get this integrated into the Gantt project 
(https://blueprints.launchpad.net/nova/+spec/solver-scheduler), based on our 
discussions in the Nova scheduler sub group.

When I saw discussions around using linear programming (LP) solvers, PuLP,
etc., I thought of pitching in here to say that we have already demonstrated
integrating an LP-based solver for Nova compute placements. Please see:
https://www.youtube.com/watch?v=7QzDbhkk-BI#t=942
for a demo of this (from our talk at the Atlanta OpenStack summit).
Based on this email thread, I believe Ramki, one of our early collaborators,
is driving a similar solution in the NFV ETSI research group. Glad to know our
Solver Scheduler project is getting interest now.
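
To give a flavor of what an LP/ILP formulation of placement looks like, here
is a toy sketch assuming the PuLP library (not the Solver Scheduler's actual
model):

# Toy binary-assignment model: place 3 VMs on 2 hosts subject to vCPU
# capacity, with a toy objective that packs onto host1 first.
import pulp

vms = ['vm1', 'vm2', 'vm3']
hosts = ['host1', 'host2']
demand = {'vm1': 2, 'vm2': 4, 'vm3': 1}   # requested vCPUs
capacity = {'host1': 4, 'host2': 4}       # available vCPUs

prob = pulp.LpProblem('placement', pulp.LpMinimize)
x = pulp.LpVariable.dicts('x', (vms, hosts), cat='Binary')

for v in vms:     # each VM is placed on exactly one host
    prob += pulp.lpSum(x[v][h] for h in hosts) == 1
for h in hosts:   # respect host capacity
    prob += pulp.lpSum(demand[v] * x[v][h] for v in vms) <= capacity[h]

prob += pulp.lpSum(x[v]['host2'] for v in vms)  # minimize use of host2

prob.solve()
for v in vms:
    for h in hosts:
        if pulp.value(x[v][h]) == 1:
            print('%s -> %s' % (v, h))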

As part of Congress integration, at the policy summit I had suggested that we
can try to translate a Congress policy into our Solver Scheduler's
constraints, and use this to enforce Nova placement policies.
We can already demonstrate policy-driven Nova placements using our pluggable
constraints model, so it should be easy to integrate with Congress.

The Nova solver scheduler team would be glad to help with any efforts wrt to 
trying out a Congress integration for Nova placements.

Thanks,
Yathi.



On 12/16/14, 10:24 AM, "Tim Hinrichs"
<thinri...@vmware.com> wrote:

[Adding openstack-dev to this thread.  For those of you just joining… We 
started kicking around ideas for how we might integrate a special-purpose VM 
placement engine into Congress.]

Kudva: responses inline.


On Dec 16, 2014, at 6:25 AM, Prabhakar Kudva
<ku...@us.ibm.com> wrote:

Hi,

I am very interested in this.

So, it looks like there are two parts to this:
1. Policy analysis when there are a significant mix of logical and builtin 
predicates (i.e.,
runtime should identify a solution space when there are arithmetic operators). 
This will
require linear programming/ILP type solvers.  There might be a need to have a 
fu

Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution

2014-12-18 Thread Mike Kolesnik
Hi Mathieu,

Thanks for the quick reply; some comments inline.

Regards,
Mike

- Original Message -
> Hi mike,
> 
> thanks for working on this bug :
> 
> On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton  wrote:
> >
> >
> > On 12/18/14, 2:06 PM, "Mike Kolesnik"  wrote:
> >
> >>Hi Neutron community members.
> >>
> >>I wanted to query the community about a proposal of how to fix HA routers
> >>not
> >>working with L2Population (bug 1365476[1]).
> >>This bug is important to fix especially if we want to have HA routers and
> >>DVR
> >>routers working together.
> >>
> >>[1] https://bugs.launchpad.net/neutron/+bug/1365476
> >>
> >>What's happening now?
> >>* HA routers use distributed ports, i.e. the port with the same IP & MAC
> >>  details is applied on all nodes where an L3 agent is hosting this
> >>router.
> >>* Currently, the port details have a binding pointing to an arbitrary node
> >>  and this is not updated.
> >>* L2pop takes this "potentially stale" information and uses it to create:
> >>  1. A tunnel to the node.
> >>  2. An FDB entry that directs traffic for that port to that node.
> >>  3. If ARP responder is on, ARP requests will not traverse the network.
> >>* Problem is, the master router wouldn't necessarily be running on the
> >>  reported agent.
> >>  This means that traffic would not reach the master node but some
> >>arbitrary
> >>  node where the router master might be running, but might be in another
> >>  state (standby, fail).
> >>
> >>What is proposed?
> >>Basically the idea is not to do L2Pop for HA router ports that reside on
> >>the
> >>tenant network.
> >>Instead, we would create a tunnel to each node hosting the HA router so
> >>that
> >>the normal learning switch functionality would take care of switching the
> >>traffic to the master router.
> >
> > In Neutron we just ensure that the MAC address is unique per network.
> > Could a duplicate MAC address cause problems here?
> 
> gary, AFAIU, from a Neutron POV, there is only one port, which is the
> router Port, which is plugged twice. One time per port.
> I think that the capacity to bind a port to several host is also a
> prerequisite for a clean solution here. This will be provided by
> patches to this bug :
> https://bugs.launchpad.net/neutron/+bug/1367391
> 
> 
> >>This way no matter where the master router is currently running, the data
> >>plane would know how to forward traffic to it.
> >>This solution requires changes on the controller only.
> >>
> >>What's to gain?
> >>* Data plane only solution, independent of the control plane.
> >>* Lowest failover time (same as HA routers today).
> >>* High backport potential:
> >>  * No APIs changed/added.
> >>  * No configuration changes.
> >>  * No DB changes.
> >>  * Changes localized to a single file and limited in scope.
> >>
> >>What's the alternative?
> >>An alternative solution would be to have the controller update the port
> >>binding
> >>on the single port so that the plain old L2Pop happens and notifies about
> >>the
> >>location of the master router.
> >>This basically negates all the benefits of the proposed solution, but is
> >>wider.
> >>This solution depends on the report-ha-router-master spec which is
> >>currently in
> >>the implementation phase.
> >>
> >>It's important to note that these two solutions don't collide and could
> >>be done
> >>independently. The one I'm proposing just makes more sense from an HA
> >>viewpoint
> >>because of it's benefits which fit the HA methodology of being fast &
> >>having as
> >>little outside dependency as possible.
> >>It could be done as an initial solution which solves the bug for mechanism
> >>drivers that support normal learning switch (OVS), and later kept as an
> >>optimization to the more general, controller based, solution which will
> >>solve
> >>the issue for any mechanism driver working with L2Pop (Linux Bridge,
> >>possibly
> >>others).
> >>
> >>Would love to hear your thoughts on the subject.
> 
> You will have to clearly update the doc to mention that deployments
> with Linuxbridge+l2pop are not compatible with HA.

Yes this should be added and this is already the situation right now.
However if anyone would like to work on a LB fix (the general one or some
specific one) I would gladly help with reviewing it.

> 
> Moreover, this solution is downgrading the l2pop solution, by
> disabling the ARP-responder when VMs want to talk to a HA router.
> This means that ARP requests will be duplicated to every overlay
> tunnel to feed the OVS Mac learning table.
> This is something that we were trying to avoid with l2pop. But maybe
> this is acceptable.

Yes basically you're correct, however this would be only limited to those
tunnels that connect to the nodes where the HA router is hosted, so we
would still limit the amount of traffic that is sent across the underlay.

Also bear in mind that ARP is actually good (at least in the OVS case) since
it helps the VM locate which tunnel the master is on, so once it receives
the 

Re: [openstack-dev] [Congress] Re: Placement and Scheduling via Policy

2014-12-18 Thread Tim Hinrichs
Hi all,

Responses inline.

On Dec 16, 2014, at 10:57 PM, <ruby.krishnasw...@orange.com> wrote:

Hi Tim & All

@Tim: I did not reply to openstack-dev. Do you think we could have an openstack 
list specific for “congress” to which anybody may subscribe?

Sending to openstack-dev is the right thing, as long as we put [Congress] in 
the subject.  Everyone I know sets up filters on openstack-dev so they only get 
the mail they care about.  I think you’re the only one in the group who isn’t 
subscribed to that list.



1) Enforcement:
By this we mean “how will the actions computed by the policy engine
be executed by the concerned OpenStack functional module”.

In this case, it is better to first work this out for a “simpler” case,
e.g. your running example concerning the network/groups.
Note: some actions concern only some database (e.g. inserting the
user within some group).



2)  From Prabhakar’s mail

“Enforcement. That is, with a large number of constraints in place for
placement and scheduling, how does the policy engine communicate and enforce
the placement constraints to the nova scheduler.”

Nova scheduler (current): It assigns VMs to servers based on the
policy set by the administrator (through filters and host aggregates).

The administrator also configures a scheduling heuristic (implemented as a
driver), for example a “round-robin” driver.
Then the computed assignment is sent back to the requestor (the API server),
which interacts with nova-compute to provision the VM.
The current nova-scheduler has another function: it updates the allocation
status of each compute node in the DB (through another indirection called
nova-conductor).

So it is correct to re-interpret your statement as follows:

- What is the entity with which the policy engine interacts for either
proactive or reactive placement management?

- How will the output from the policy engine (for example the placement
matrix) be communicated back?

  o Proactive: this gives the mapping of VMs to hosts

  o Reactive: this gives the new mapping of running VMs to hosts

- How, starting from the placement matrix, will the correct migration plan
be executed? (for the reactive case)



3) Currently OpenStack does not have “automated management of reactive
placement”. Hence if the policy engine is used for reactive placement, then
there is a need for another “orchestrator” that can interpret the new proposed
placement configuration (mapping of VMs to servers) and execute the
reconfiguration workflow.


4) So with a policy-based “placement engine” that is integrated with
external solvers, will this engine replace nova-scheduler?

Could we converge on this?



The notes from Yathiraj say that there is already a policy-based Nova scheduler 
we can use.  I suggest we look into that.  It could potentially simplify our 
problem to the point where we need only figure out how to convert a fragment of 
the Congress policy language into their policy language.  But those of you who 
are experts in placement will know better.

   
https://github.com/stackforge/nova-solver-scheduler
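
To make the delegation idea concrete, here is a toy sketch of posing VM
placement as an ILP. It assumes the PuLP library and made-up demand/capacity
data; a real engine would generate such constraints from the relevant
Congress policy fragment:

    from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

    vms = {'vm1': 2, 'vm2': 4}    # vCPU demand per VM (made up)
    hosts = {'h1': 4, 'h2': 8}    # vCPU capacity per host (made up)

    prob = LpProblem('vm_placement', LpMinimize)
    x = LpVariable.dicts('place', [(v, h) for v in vms for h in hosts],
                         cat=LpBinary)
    used = LpVariable.dicts('used', list(hosts), cat=LpBinary)

    # Objective: consolidate VMs onto as few hosts as possible.
    prob += lpSum(used[h] for h in hosts)

    # Every VM lands on exactly one host.
    for v in vms:
        prob += lpSum(x[(v, h)] for h in hosts) == 1
    # Respect host capacity, and mark a host 'used' if anything lands on it.
    for h in hosts:
        prob += lpSum(vms[v] * x[(v, h)] for v in vms) <= hosts[h] * used[h]

    prob.solve()
    placement = {v: h for v in vms for h in hosts if x[(v, h)].value() == 1}
    print(placement)  # the "placement matrix", e.g. {'vm1': 'h2', 'vm2': 'h2'}
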

Tim


Regards
Ruby

De : Tim Hinrichs [mailto:thinri...@vmware.com]
Envoyé : mardi 16 décembre 2014 19:25
À : Prabhakar Kudva
Cc : KRISHNASWAMY Ruby IMT/OLPS; Ramki Krishnan 
(r...@brocade.com); Gokul B Kandiraju; openstack-dev
Objet : [Congress] Re: Placement and Scheduling via Policy

[Adding openstack-dev to this thread.  For those of you just joining… We 
started kicking around ideas for how we might integrate a special-purpose VM 
placement engine into Congress.]

Kudva: responses inline.


On Dec 16, 2014, at 6:25 AM, Prabhakar Kudva <ku...@us.ibm.com> wrote:


Hi,

I am very interested in this.

So, it looks like there are two parts to this:
1. Policy analysis when there is a significant mix of logical and builtin
predicates (i.e., the runtime should identify a solution space when there are
arithmetic operators). This will require linear programming/ILP type solvers.
There might be a need to have a function in runtime.py that specifically
deals with this (Tim?)

I think it’s right that we expect there to be a mix of builtins and standard 
predicates.  But what we’re considering here is having the linear solver be 
treated as if it were a domain-specific policy engine.  So that solver wouldn’t 
be embedded into the runtime.py necessarily.  Rather, we’d delegate part of the 
policy to that domain-specific policy eng

Re: [openstack-dev] #PERSONAL# : Git checkout command for Blueprints submission

2014-12-18 Thread Edgar Magana
It is git checkout -b bp/<blueprint-name>

Edgar

From: Swati Shukla1 <swati.shuk...@tcs.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Tuesday, December 16, 2014 at 10:53 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] #PERSONAL# : Git checkout command for Blueprints 
submission


Hi All,

Generally, for bug submissions, we use "git checkout -b bug/<bug-number>"

What is the similar 'git checkout' command for blueprints submission?

Swati Shukla
Tata Consultancy Services
Mailto: swati.shuk...@tcs.com
Website: http://www.tcs.com

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [pecan] [WSME] Different content-type in request and response

2014-12-18 Thread Doug Hellmann

On Dec 18, 2014, at 2:53 AM, Renat Akhmerov  wrote:

> Doug,
> 
> Sorry for trying to resurrect this thread again. It seems to be pretty 
> important for us. Do you have some comments on that? Or if you need more 
> context please also let us know.

WSME has separate handlers for JSON and XML now. You could look into adding one 
for YAML. I think you’d want to start looking in 
http://git.openstack.org/cgit/stackforge/wsme/tree/wsme/rest

By default WSME is going to want to encode the response in the same format as 
the inputs, because it’s going to expect the clients to want that. I’m not sure 
how hard it would be to change that assumption, or whether the other WSME 
developers would really think it’s a good idea.

Doug

> 
> Thanks
> 
> Renat Akhmerov
> @ Mirantis Inc.
> 
> 
> 
>> On 27 Nov 2014, at 17:43, Renat Akhmerov  wrote:
>> 
>> Doug, thanks for your answer! 
>> 
>> My explanations below..
>> 
>> 
>>> On 26 Nov 2014, at 21:18, Doug Hellmann  wrote:
>>> 
>>> 
>>> On Nov 26, 2014, at 3:49 AM, Renat Akhmerov  wrote:
>>> 
 Hi,
 
 I traced the WSME code and found a place [0] where it tries to get 
 arguments from request body based on different mimetype. So looks like 
 WSME supports only json, xml and “application/x-www-form-urlencoded”.
 
 So my question is: Can we fix WSME to also support “text/plain” mimetype? 
 I think the first snippet that Nikolay provided is valid from WSME 
 standpoint.
>>> 
>>> WSME is intended for building APIs with structured arguments. It seems like 
>>> the case of wanting to use text/plain for a single input string argument 
>>> just hasn’t come up before, so this may be a new feature.
>>> 
>>> How many different API calls do you have that will look like this? Would 
>>> this be the only one in the API? Would it make sense to consistently use 
>>> JSON, even though you only need a single string argument in this case?
>> 
>> We have 5-6 API calls where we need it.
>> 
>> And let me briefly explain the context. In Mistral we have a language (we 
>> call it DSL) to describe different object types: workflows, workbooks, 
>> actions. So currently when we upload say a workbook we run in a command line:
>> 
>> mistral workbook-create my_wb.yaml
>> 
>> where my_wb.yaml contains that DSL. The result is a table representation of
>> the actually created server-side workbook. From a technical perspective we now have:
>> 
>> Request:
>> 
>> POST /mistral_url/workbooks
>> 
>> {
>>   “definition”: “escaped content of my_wb.yaml"
>> }
>> 
>> Response:
>> 
>> {
>>   “id”: “1-2-3-4”,
>>   “name”: “my_wb_name”,
>>   “description”: “my workbook”,
>>   ...
>> }
>> 
>> The point is that if we use, for example, something like "curl", we have to
>> obtain that "escaped content of my_wb.yaml" every time and create that, in
>> fact, synthetic JSON to be able to send it to the server side.
>> 
>> So for us it would be much more convenient if we could just send plain
>> text but still be able to receive JSON as the response. I personally don't
>> want to use some other technology because generally WSME does its job and I
>> like this concept of REST resources defined as classes. If it supported
>> text/plain it would be just the best fit for us.
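
Concretely, the asymmetry described above looks like this from a client's
point of view (a sketch using the python-requests library; the endpoint URL
is made up):

    import json
    import requests

    base = 'http://mistral-host:8989/v2'  # hypothetical endpoint
    dsl = open('my_wb.yaml').read()

    # Today: the DSL must be escaped into a synthetic JSON envelope.
    requests.post(base + '/workbooks',
                  headers={'Content-Type': 'application/json'},
                  data=json.dumps({'definition': dsl}))

    # Desired: send the DSL as-is, but still get the JSON resource back.
    resp = requests.post(base + '/workbooks',
                         headers={'Content-Type': 'text/plain'},
                         data=dsl)
    print(resp.json())  # {"id": "1-2-3-4", "name": "my_wb_name", ...}
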
>> 
 
 Or if we don’t understand something in WSME philosophy then it’d nice to 
 hear some explanations from WSME team. Will appreciate that.
 
 
 Another issue that previously came across is that if we use WSME then we 
 can’t pass arbitrary set of parameters in a url query string, as I 
 understand they should always correspond to WSME resource structure. So, 
 in fact, we can’t have any dynamic parameters. In our particular use case 
 it’s very inconvenient. Hoping you could also provide some info about 
 that: how it can be achieved or if we can just fix it.
>>> 
>>> Ceilometer uses an array of query arguments to allow an arbitrary number.
>>> 
>>> On the other hand, it sounds like perhaps your desired API may be easier to 
>>> implement using some of the other tools being used, such as JSONSchema. Are 
>>> you extending an existing API or building something completely new?
>> 
>> We want to improve our existing Mistral API. Basically, the idea is to be
>> able to apply dynamic filters when we're requesting a collection of objects
>> using the url query string. Yes, if you say it's absolutely impossible to do
>> or doesn't follow WSME concepts, we could use JSONSchema; that's fine.
>> But like I said, generally I like the approach that WSME takes and don't feel
>> like jumping to another technology just because of this issue.
>> 
>> Thanks for mentioning Ceilometer, we’ll look at it and see if that works for 
>> us.
>> 
>> Renat
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev

Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution

2014-12-18 Thread Mathieu Rohon
Hi mike,

thanks for working on this bug :

On Thu, Dec 18, 2014 at 1:47 PM, Gary Kotton  wrote:
>
>
> On 12/18/14, 2:06 PM, "Mike Kolesnik"  wrote:
>
>>Hi Neutron community members.
>>
>>I wanted to query the community about a proposal of how to fix HA routers
>>not
>>working with L2Population (bug 1365476[1]).
>>This bug is important to fix especially if we want to have HA routers and
>>DVR
>>routers working together.
>>
>>[1] https://bugs.launchpad.net/neutron/+bug/1365476
>>
>>What's happening now?
>>* HA routers use distributed ports, i.e. the port with the same IP & MAC
>>  details is applied on all nodes where an L3 agent is hosting this
>>router.
>>* Currently, the port details have a binding pointing to an arbitrary node
>>  and this is not updated.
>>* L2pop takes this "potentially stale" information and uses it to create:
>>  1. A tunnel to the node.
>>  2. An FDB entry that directs traffic for that port to that node.
>>  3. If ARP responder is on, ARP requests will not traverse the network.
>>* Problem is, the master router wouldn't necessarily be running on the
>>  reported agent.
>>  This means that traffic would not reach the master node but some
>>arbitrary
>>  node where the router master might be running, but might be in another
>>  state (standby, fail).
>>
>>What is proposed?
>>Basically the idea is not to do L2Pop for HA router ports that reside on
>>the
>>tenant network.
>>Instead, we would create a tunnel to each node hosting the HA router so
>>that
>>the normal learning switch functionality would take care of switching the
>>traffic to the master router.
>
> In Neutron we just ensure that the MAC address is unique per network.
> Could a duplicate MAC address cause problems here?

gary, AFAIU, from a Neutron POV, there is only one port, which is the
router port, which is plugged twice: once per node.
I think that the capacity to bind a port to several hosts is also a
prerequisite for a clean solution here. This will be provided by
patches to this bug:
https://bugs.launchpad.net/neutron/+bug/1367391


>>This way no matter where the master router is currently running, the data
>>plane would know how to forward traffic to it.
>>This solution requires changes on the controller only.
>>
>>What's to gain?
>>* Data plane only solution, independent of the control plane.
>>* Lowest failover time (same as HA routers today).
>>* High backport potential:
>>  * No APIs changed/added.
>>  * No configuration changes.
>>  * No DB changes.
>>  * Changes localized to a single file and limited in scope.
>>
>>What's the alternative?
>>An alternative solution would be to have the controller update the port
>>binding
>>on the single port so that the plain old L2Pop happens and notifies about
>>the
>>location of the master router.
>>This basically negates all the benefits of the proposed solution, but is
>>wider.
>>This solution depends on the report-ha-router-master spec which is
>>currently in
>>the implementation phase.
>>
>>It's important to note that these two solutions don't collide and could
>>be done
>>independently. The one I'm proposing just makes more sense from an HA
>>viewpoint
>>because of its benefits which fit the HA methodology of being fast &
>>having as
>>little outside dependency as possible.
>>It could be done as an initial solution which solves the bug for mechanism
>>drivers that support normal learning switch (OVS), and later kept as an
>>optimization to the more general, controller based, solution which will
>>solve
>>the issue for any mechanism driver working with L2Pop (Linux Bridge,
>>possibly
>>others).
>>
>>Would love to hear your thoughts on the subject.
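
A rough sketch of what the proposed special-casing could look like in the
l2pop driver (the helper name, return format and device_owner marker are
assumptions for illustration, not the actual patch):

    HA_PORT_OWNER = 'network:router_ha_interface'  # assumed marker

    def fdb_info_for_port(port, hosting_agent_ips, bound_agent_ip):
        if port['device_owner'] == HA_PORT_OWNER:
            # The recorded binding may point at a standby node, so don't
            # advertise the port's MAC/IP. Just return tunnel endpoints for
            # every hosting node and let MAC learning find the master.
            return {'tunnels': hosting_agent_ips, 'fdb': [], 'arp': []}
        # Normal ports keep the regular l2pop behaviour: a tunnel to the
        # bound host, an FDB entry, and an ARP-responder entry.
        return {'tunnels': [bound_agent_ip],
                'fdb': [(port['mac_address'], bound_agent_ip)],
                'arp': [(port['mac_address'], ip['ip_address'])
                        for ip in port['fixed_ips']]}
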

You will have to clearly update the doc to mention that deployments
with Linuxbridge+l2pop are not compatible with HA.

Moreover, this solution is downgrading the l2pop solution, by
disabling the ARP-responder when VMs want to talk to a HA router.
This means that ARP requests will be duplicated to every overlay
tunnel to feed the OVS Mac learning table.
This is something that we were trying to avoid with l2pop. But maybe
this is acceptable.

I know that ofagent is also using l2pop; I would like to know if
ofagent deployments will be compatible with the workaround that you are
proposing.

My concern is that, with DVR, there are at least two major features
that are not compatible with Linuxbridge.
Linuxbridge is not running in the gate. I don't know if anybody is
running a 3rd party testing with Linuxbridge deployments. If anybody
does, it would be great to have it voting on gerrit!

But I really wonder what the future of linuxbridge compatibility is.
Should we keep on improving the OVS solution without taking the
linuxbridge implementation into account?

Regards,

Mathieu

>>
>>Regards,
>>Mike
>>
>>___
>>OpenStack-dev mailing list
>>OpenStack-dev@lists.openstack.org
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> 

Re: [openstack-dev] ask for usage of quota reserve

2014-12-18 Thread Kevin L. Mitchell
On Thu, 2014-12-18 at 15:34 +0800, Eli Qiao(Li Yong Qiao) wrote:
> can anyone tell, if we call quotas.reserve() but never call
> quotas.commit() or quotas.rollback(),
> what will happen?

A reservation is always created with an expiration time; by default,
this expiration time is 86400 seconds (1 day) after the time at which
the reservation is created.  Expired reservations are deleted by the
_expire_reservations() periodic task, which is defined on the scheduler.
Thus, if a resource is reserved, but never committed or rolled back, it
should continue to affect quota requests for approximately one day, then
be automatically rolled back by the scheduler.
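
In code, the contract looks roughly like this (a minimal sketch based on
nova's quota API of this era; details hedged):

    from nova import quota

    QUOTAS = quota.QUOTAS

    def create_instance_with_quota(context, do_create):
        # reserve() returns a list of reservation UUIDs; each reservation
        # carries an expiration (86400s by default) in case it is never
        # resolved explicitly.
        reservations = QUOTAS.reserve(context, instances=1, cores=2, ram=2048)
        try:
            instance = do_create()
        except Exception:
            # Explicit rollback releases the reserved quota immediately
            # instead of waiting for _expire_reservations() to reap it.
            QUOTAS.rollback(context, reservations)
            raise
        QUOTAS.commit(context, reservations)
        return instance
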
-- 
Kevin L. Mitchell 
Rackspace


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled

2014-12-18 Thread Padmanabhan Krishnan
Hi John,
Thanks for the pointers. I shall take a look and get back.
Regards,
Paddu
 

 On Thursday, December 18, 2014 6:23 AM, John Belamaric 
 wrote:
   

 Hi Paddu,
Take a look at what we are working on in Kilo [1] for external IPAM. While this 
does not address DHCP specifically, it does allow you to use an external source 
to allocate the IP that OpenStack uses, which may solve your problem.
Another solution to your question is to invert the logic - you need to take the 
IP allocated by OpenStack and program the DHCP server to provide a fixed IP for 
that MAC.
You may be interested in looking at this Etherpad [2] that Don Kehn put 
together gathering all the various DHCP blueprints and related info, and also 
at this BP [3] for including a DHCP relay so we can utilize external DHCP more 
easily.
[1] https://blueprints.launchpad.net/neutron/+spec/neutron-ipam
[2] https://etherpad.openstack.org/p/neutron-dhcp-org
[3] https://blueprints.launchpad.net/neutron/+spec/dhcp-relay
John
From: Padmanabhan Krishnan 
Reply-To: Padmanabhan Krishnan , "OpenStack Development 
Mailing List (not for usage questions)" 
Date: Wednesday, December 17, 2014 at 6:06 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even when 
dhcp is disabled

This means whatever tools the operators are using, they need to make sure the
IP address assigned inside the VM matches what OpenStack has assigned to the
port. Bringing up the question that I had in another thread on the same topic:

If one wants to use the provider DHCP server and not have OpenStack's DHCP or
L3 agent/DVR, it may not be possible to do so even with DHCP disabled in the
OpenStack network. Even if the provider DHCP server is configured with the same
start/end range in the same subnet, there's no guarantee that it will match
the OpenStack-assigned IP address for bulk VM launches or when there's a
failure case. So, how does one deploy external DHCP with OpenStack?

If OpenStack hasn't assigned an IP address when DHCP is disabled for a network,
can't port_update be done with the provider-DHCP-specified IP address to put
the anti-spoofing and security rules in place? With an OpenStack-assigned IP
address, port_update cannot be done since the IP addresses aren't in sync and
can overlap.

Thanks,
Paddu



On 12/16/14 4:30 AM, "Pasquale Porreca" 
wrote:

>I understood and I agree that assigning the ip address to the port is
>not a bug, however showing it to the user, at least in Horizon dashboard
>where it pops up in the main instance screen without a specific search,
>can be very confusing.
>
>On 12/16/14 12:25, Salvatore Orlando wrote:
>> In Neutron IP address management and distribution are separated
>>concepts.
>> IP addresses are assigned to ports even when DHCP is disabled. That IP
>> address is indeed used to configure anti-spoofing rules and security
>>groups.
>> 
>> It is however understandable that one wonders why an IP address is
>>assigned
>> to a port if there is no DHCP server to communicate that address.
>>Operators
>> might decide to use different tools to ensure the IP address is then
>> assigned to the instance's ports. On XenServer for instance one could
>>use a
>> guest agent reading network configuration from XenStore; as another
>> example, older versions of Openstack used to inject network
>>configuration
>> into the instance file system; I reckon that today's configdrive might
>>also
>> be used to configure instance's networking.
>> 
>> Summarising I don't think this is a bug. Nevertheless if you have any
>>idea
>> regarding improvements on the API UX feel free to file a bug report.
>> 
>> Salvatore
>> 
>> On 16 December 2014 at 10:41, Pasquale Porreca <
>> pasquale.porr...@dektech.com.au> wrote:
>>>
>>> Is there a specific reason for which a fixed ip is bound to a port on a
>>> subnet where dhcp is disabled? it is confusing to have this info shown
>>> when the instance doesn't have actually an ip on that port.
>>> Should I fill a bug report, or is this a wanted behavior?
>>>
>>> --
>>> Pasquale Porreca
>>>
>>> DEK Technologies
>>> Via dei Castelli Romani, 22
>>> 00040 Pomezia (Roma)
>>>
>>> Mobile +39 3394823805
>>> Skype paskporr
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>
>-- 
>Pasquale Porreca
>
>DEK Technologies
>Via dei Castelli Romani, 22
>00040 Pomezia (Roma)
>
>Mobile +39 3394823805
>Skype paskporr
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




   ___
OpenStack-dev mailing 

Re: [openstack-dev] [Heat] Convergence proof-of-concept showdown

2014-12-18 Thread Gurjar, Unmesh

> -Original Message-
> From: Zane Bitter [mailto:zbit...@redhat.com]
> Sent: Thursday, December 18, 2014 7:42 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Heat] Convergence proof-of-concept
> showdown
> 
> On 17/12/14 13:05, Gurjar, Unmesh wrote:
> >> I'm storing a tuple of its name and database ID. The data structure
> >> is resource.GraphKey. I was originally using the name for something,
> >> but I suspect I could probably drop it now and just store the
> >> database ID, but I haven't tried it yet. (Having the name in there
> >> definitely makes debugging more pleasant though ;)
> >>
> >
> > I agree, having name might come in handy while debugging!
> >
> >> When I build the traversal graph each node is a tuple of the GraphKey
> >> and a boolean to indicate whether it corresponds to an update or a
> >> cleanup operation (both can appear for a single resource in the same
> >> graph).
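
(For reference, a hedged paraphrase of those structures; the real definitions
live in the convergence prototype, e.g. resource.GraphKey:)

    from collections import namedtuple

    GraphKey = namedtuple('GraphKey', ['name', 'database_id'])

    server = GraphKey('my_server', 42)

    # Traversal-graph nodes are (key, is_update) pairs; the update node
    # and the cleanup node for one resource can appear in the same graph.
    nodes = {
        (server, True),    # update work for this resource
        (server, False),   # cleanup work (deleted or replaced versions)
    }
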
> >
> > Just to confirm my understanding, the cleanup operation takes care of both:
> > 1. resources which are deleted as part of an update, and 2. the previous
> > version of a resource which was replaced by a new resource
> > (the UpdateReplace scenario)
> 
> Yes, correct. Also:
> 
> 3. resource versions which failed to delete for whatever reason on a previous
> traversal
> 
> > Also, the cleanup operation is performed after the update completes
> successfully.
> 
> NO! They are not separate things!
> 
> https://github.com/openstack/heat/blob/stable/juno/heat/engine/update.
> py#L177-L198
> 
> >>> If I am correct, you are updating all resources on update regardless
> >>> of their change which will be inefficient if stack contains a million
> resource.
> >>
> >> I'm calling update() on all resources regardless of change, but
> >> update() will only call handle_update() if something has changed
> >> (unless the plugin has overridden Resource._needs_update()).
> >>
> >> There's no way to know whether a resource needs to be updated before
> >> you're ready to update it, so I don't think of this as 'inefficient', just
> 'correct'.
> >>
> >>> We have similar questions regarding other areas in your
> >>> implementation, which we believe if we understand the outline of
> >>> your implementation. It is difficult to get a hold on your approach
> >>> just by looking
> >> at code. Docs strings / Etherpad will help.
> >>>
> >>>
> >>> About streams, Yes in a million resource stack, the data will be
> >>> huge, but
> >> less than template.
> >>
> >> No way, it's O(n^3) (cubed!) in the worst case to store streams for
> >> each resource.
> >>
> >>> Also this stream is stored
> >>> only In IN_PROGRESS resources.
> >>
> >> Now I'm really confused. Where does it come from if the resource
> >> doesn't get it until it's already in progress? And how will that 
> >> information
> help it?
> >>
> >
> > When an operation on stack is initiated, the stream will be identified.
> 
> OK, this may be one of the things I was getting confused about - I thought a
> 'stream' belonged to one particular resource and just contained all of the
> paths to reaching that resource. But here it seems like you're saying that a
> 'stream' is a representation of the entire graph?
> So it's essentially just a gratuitously bloated NIH serialisation of the
> Dependencies graph?
> 
> > To begin
> > the operation, the action is initiated on the leaf (or root)
> > resource(s) and the stream is stored (only) in this/these IN_PROGRESS
> > resource(s).
> 
> How does that work? Does it get deleted again when the resource moves to
> COMPLETE?
> 

Yes, IMO, upon resource completion the stream can be deleted. I do not
foresee any situation in which storing the stream is required, since when
another operation is initiated on the stack, that template should be parsed
and the new stream should be identified and used.

> > The stream should then keep getting passed to the next/previous level
> > of resource(s) as and when the dependencies for the next/previous level
> > of resource(s) are met.
> 
> That sounds... identical to the way it's implemented in my prototype (passing
> a serialisation of the graph down through the notification triggers), except 
> for
> the part about storing it in the Resource table.
> Why would we persist to the database data that we only need for the
> duration that we already have it in memory anyway?
> 

Earlier we thought of passing it along while initiating the next level of
resource(s).
However, for a million-resource stack, it will be quite large and passing it
around will be inefficient. So, we intended to have it stored in the database.

Also, it can be used when the processing engine goes down and another engine
has to resume that stack operation.

> If we're going to persist it we should do so once, in the Stack table, at the
> time that we're preparing to start the traversal.
> 
> >>> The reason to have entire dependency list to reduce DB queries while
> >>> a
> >> stack update.
> >>
>

[openstack-dev] [cinder] Multiple Backend for different volume_types

2014-12-18 Thread Chhavi Agarwal

Hi All,

As per the below link multi-backend support :-
https://wiki.openstack.org/wiki/Cinder-multi-backend

It's mentioned that we currently only support passing a multi-backend
provider per volume_type: "There can be > 1 backend per volume_type, and
the capacity scheduler kicks in and keeps the backends of a particular
volume_type "

Is there a way to support multiple backend providers across different
volume_types? For example, if I want my volume_type to have both the SVC and
LVM drivers passed as my backend providers.
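
For reference, the mechanism quoted above is normally wired up by giving
several backends (even different drivers) the same volume_backend_name in
cinder.conf and pointing the volume_type's extra spec at that shared name. A
hedged sketch with python-cinderclient (credentials and names are made up):

    # cinder.conf (sketch):
    #   [svc-1]
    #   volume_backend_name = mixed_pool
    #   [lvm-1]
    #   volume_backend_name = mixed_pool
    from cinderclient.v2 import client

    c = client.Client('admin', 'secret', 'admin',
                      'http://keystone:5000/v2.0')
    vtype = c.volume_types.create('mixed')
    vtype.set_keys({'volume_backend_name': 'mixed_pool'})
    # The capacity scheduler then chooses among all backends reporting
    # 'mixed_pool' whenever a volume of type 'mixed' is created.
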

Thanks & Regards,
Chhavi Agarwal
Cloud System Software Group.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

2014-12-18 Thread A, Keshava
Hi Yuriy Babenko,

I am a little worried about the direction we need to take in thinking about
Service Chaining.

Will OpenStack focus on Service Chaining of its own internal features (like
FWaaS, LBaaS, VPNaaS, L2 Gateway aaS ...?)
OR
will it consider Service Chaining of ‘Service-VMs’ also?

A. If we are considering ‘Service-VM’ service chaining, I have the below
points to mention:

1. Does OpenStack need to worry about Service-VM capabilities?

2. Does OpenStack care whether a Service-VM also has OVS in it or not?

3. Does OpenStack care whether a Service-VM has its own routing instance
running in it, which can reconfigure the OVS flows?

4. Can a Service-VM configure the OpenStack infrastructure OVS?

5. Can a Service-VM have multiple features in it? (Example: DPI + FW + NAT) …

Is a Service-VM equal to a vNFVC?

B. If we are thinking of service chaining of ‘OpenStack-only services’, then
I have the below points.

For a tenant:

1. Can services be bound to a particular compute node (CN)?

2. A tenant may not want to run/enable all the services on all CNs.

A tenant may want to run FWaaS and VPNaaS on different CNs so that the tenant
gets better infrastructure performance.

Then are we considering chaining of services per tenant?

3. If so, how do we control this? (Please consider that a tenant's VMs can
get migrated to different CNs.)

Let me know others' opinions.

keshava

From: yuriy.babe...@telekom.de [mailto:yuriy.babe...@telekom.de]
Sent: Thursday, December 18, 2014 7:35 PM
To: openstack-dev@lists.openstack.org; stephen.kf.w...@gmail.com
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi,
in the IRC meeting yesterday we agreed to work on the use-case for service 
function chaining as it seems to be important for a lot of participants [1].
We will prepare the first draft and share it in the TelcoWG Wiki for discussion.

There is one blueprint in openstack on that in [2]


[1] 
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt
[2] 
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

Kind regards/Mit freundlichen Grüßen
Yuriy Babenko

From: A, Keshava [mailto:keshav...@hp.com]
Sent: Wednesday, 10 December 2014 19:06
To: stephen.kf.w...@gmail.com; OpenStack
Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There are many unknowns w.r.t. ‘Service-VM’ and how it should look from an
NFV perspective.
In my opinion it has not been decided what the Service-VM framework should
look like.
Depending on this, we at OpenStack will also see an impact on ‘Service Chaining’.
Please find the mail attached w.r.t. that discussion with NFV about ‘Service-VM +
OpenStack OVS’.


Regards,
keshava

From: Stephen Wong [mailto:stephen.kf.w...@gmail.com]
Sent: Wednesday, December 10, 2014 10:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There is already a ServiceVM project (Tacker), currently under development 
on stackforge:

https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki page 
above and see if the project's goals align with yours. If so, you are certainly 
welcome to join the IRC meeting and start to contribute to the project's 
direction and design.

Thanks,
- Stephen


On Wed, Dec 10, 2014 at 7:01 AM, Murali B <mbi...@gmail.com> wrote:
Hi keshava,

We would like to contribute towards service chaining and NFV.

Could you please share any document you have related to service VMs?

The service chain can be achieved if we are able to redirect the traffic to
the service VM using OVS flows;

in this case we do not need routing enabled on the service VM (traffic is
redirected at L2).

All the tenant VMs in the cloud could use this service VM's services by
adding the OVS rules in OVS.


Thanks
-Murali
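
A toy sketch of that L2 redirect (the port numbers and bridge name are made
up; run on the compute node hosting the VMs):

    import subprocess

    BRIDGE = 'br-int'
    TENANT_PORT = 5    # ofport of the tenant VM's tap device (hypothetical)
    SERVICE_PORT = 9   # ofport of the service VM's tap device (hypothetical)

    flows = [
        # Anything the tenant VM sends goes to the service VM first.
        'priority=100,in_port=%d,actions=output:%d' % (TENANT_PORT,
                                                       SERVICE_PORT),
        # Traffic the service VM hands back re-enters normal switching.
        'priority=100,in_port=%d,actions=NORMAL' % SERVICE_PORT,
    ]
    for flow in flows:
        subprocess.check_call(['ovs-ofctl', 'add-flow', BRIDGE, flow])
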




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Who needs a pair of hands to write tons of python code

2014-12-18 Thread Michael Krotscheck
StoryBoard is always looking for help, and we've got a nice roadmap that
you can pull a feature from if you're so inclined:
https://wiki.openstack.org/wiki/StoryBoard/Roadmap

Come hang out on #storyboard and #openstack-infra :)

Michael

On Thu Dec 18 2014 at 6:52:28 AM Michael  wrote:

> Hi all,
>
> I am looking to write tons of code in python and looking for guidance.
> There are a lot of projects in openstack but it is hard to choose one. It
> also becomes harder when some of the components are aiming to become more
> stable instead of adding new features.
>
> Regards,
> Michael
>  ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [All] Who needs a pair of hands to write tons of python code

2014-12-18 Thread Jeremy Stanley
On 2014-12-18 20:19:31 +0530 (+0530), Michael wrote:
> I am looking to write tons of code in python and looking for
> guidance. There are a lot of projects in openstack but it is hard
> to choose one. It also becomes harder when some of the components
> are aiming to become more stable instead of adding new features.

As you've noticed, OpenStack already has "tons of code in Python"
and what we really need is help refining/fixing it rather than
piling more on top of it. Where most projects could _actually_
benefit from help is to investigate open bugs and review proposed
changes. Also here's a link to our Developer's Guide to get you
started:

http://docs.openstack.org/infra/manual/developers.html

Hope that helps, and welcome aboard!
-- 
Jeremy Stanley

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] vm can't get ipv6 address in ra mode:slaac + address mode: slaac

2014-12-18 Thread Ihar Hrachyshka

I suspect that's some Red Hat distro, and radvd lacks SELinux context
set to allow neutron l3 agent to spawn it.

On 18/12/14 15:50, Jerry Zhao wrote:
> It seems that radvd was not spawned successfully in l3-agent log:
> 
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: Stderr: '/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd -C /var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf -p /var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd (no filter matched)\n'
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Traceback (most recent call last):
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/utils.py", line 341, in call
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent return func(*args, **kwargs)
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py", line 902, in process_router
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent self.root_helper)
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", line 111, in enable_ipv6_ra
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent _spawn_radvd(router_id, radvd_conf, router_ns, root_helper)
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", line 95, in _spawn_radvd
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent radvd.enable(callback, True)
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", line 77, in enable
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent ip_wrapper.netns.execute(cmd, addl_env=self.cmd_addl_env)
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", line 554, in execute
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes)
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File "/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py", line 82, in execute
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent raise RuntimeError(m)
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent RuntimeError:
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7', 'radvd', '-C', '/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf', '-p', '/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd']
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Exit code: 99
> Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l

Re: [openstack-dev] [Neutron] vm can't get ipv6 address in ra mode:slaac + address mode: slaac

2014-12-18 Thread Jerry Zhao

I couldn't see anything wrong in my l3.filters:

[Filters]

# arping
arping: CommandFilter, arping, root

# l3_agent
sysctl: CommandFilter, sysctl, root
route: CommandFilter, route, root
radvd: CommandFilter, radvd, root

# metadata proxy
metadata_proxy: CommandFilter, neutron-ns-metadata-proxy, root
# If installed from source (say, by devstack), the prefix will be
# /usr/local instead of /usr/bin.
metadata_proxy_local: CommandFilter, /usr/local/bin/neutron-ns-metadata-proxy, root
# RHEL invocation of the metadata proxy will report /opt/stack/venvs/openstack/bin/python

kill_metadata: KillFilter, root, python, -9
kill_metadata7: KillFilter, root, python2.7, -9
kill_radvd_usr: KillFilter, root, /usr/sbin/radvd, -9, -HUP
kill_radvd: KillFilter, root, /sbin/radvd, -9, -HUP

# ip_lib
ip: IpFilter, ip, root
ip_exec: IpNetnsExecFilter, ip, root


On 12/18/2014 06:50 AM, Jerry Zhao wrote:

It seems that radvd was not spawned successfully
in l3-agent log:

Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: Stderr: '/usr/bin/neutron-rootwrap: Unauthorized 
command: ip netns exec qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 
radvd -C 
/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf -p 
/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd 
(no filter matched)\n'
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent Traceback (most recent call last):
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/utils.py", 
line 341, in call
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent return func(*args, **kwargs)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py", 
line 902, in process_router
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent self.root_helper)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", 
line 111, in enable_ipv6_ra
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent _spawn_radvd(router_id, radvd_conf, 
router_ns, root_helper)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", 
line 95, in _spawn_radvd
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent radvd.enable(callback, True)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", 
line 77, in enable
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent ip_wrapper.netns.execute(cmd, 
addl_env=self.cmd_addl_env)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", 
line 554, in execute
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent check_exit_code=check_exit_code, 
extra_ok_codes=extra_ok_codes)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py", 
line 82, in execute
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent raise RuntimeError(m)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent RuntimeError:
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 
neutron-l3-agent: 2014-12-18 11:23:34.611 18015 TRACE 
neutron.agent.l3_agent Command: ['sudo', '/usr/bin/neutron-rootwrap', 
'/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 
'qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7', 'radvd', '-C',

Re: [openstack-dev] [All] Who needs a pair of hands to write tons of python code

2014-12-18 Thread Boris Pavlovic
Michael,


The Rally project (https://github.com/stackforge/rally) needs hands!
We have billions of interesting, simple and complex tasks.

Please join us at #openstack-rally IRC chat

Thanks!

Best regards,
Boris Pavlovic

On Thu, Dec 18, 2014 at 6:49 PM, Michael  wrote:
>
> Hi all,
>
> I am looking to write tons of code in python and looking for guidance.
> There are a lot of projects in openstack but it is hard to choose one. It
> also becomes harder when some of the components are aiming to become more
> stable instead of adding new features.
>
> Regards,
> Michael
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] vm can't get ipv6 address in ra mode:slaac + address mode: slaac

2014-12-18 Thread Jerry Zhao

It seems that radvd was not spawned successfully
in l3-agent log:

Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
Stderr: '/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec 
qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd -C 
/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf -p 
/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd 
(no filter matched)\n'
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Traceback 
(most recent call last):
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/common/utils.py", 
line 341, in call
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent return 
func(*args, **kwargs)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/l3_agent.py", 
line 902, in process_router
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent 
self.root_helper)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", 
line 111, in enable_ipv6_ra
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent 
_spawn_radvd(router_id, radvd_conf, router_ns, root_helper)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ra.py", 
line 95, in _spawn_radvd
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent 
radvd.enable(callback, True)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/external_process.py", 
line 77, in enable
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent 
ip_wrapper.netns.execute(cmd, addl_env=self.cmd_addl_env)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/ip_lib.py", 
line 554, in execute
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent 
check_exit_code=check_exit_code, extra_ok_codes=extra_ok_codes)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent   File 
"/opt/stack/venvs/openstack/local/lib/python2.7/site-packages/neutron/agent/linux/utils.py", 
line 82, in execute
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent raise 
RuntimeError(m)
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent RuntimeError:
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Command: 
['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 
'ip', 'netns', 'exec', 'qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7', 
'radvd', '-C', 
'/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf', 
'-p', 
'/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd']
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Exit code: 99
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Stdout: ''
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent: 
2014-12-18 11:23:34.611 18015 TRACE neutron.agent.l3_agent Stderr: 
'/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec 
qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd -C 
/var/run/neutron/ra/6066faaa-0e35-4e7b-8988-7337c493bad7.radvd.conf -p 
/var/run/neutron/external/pids/6066faaa-0e35-4e7b-8988-7337c493bad7.pid.radvd 
(no filter matched)\n'



On 12/18/2014 04:50 AM

[openstack-dev] [All] Who needs a pair of hands to write tons of python code

2014-12-18 Thread Michael
Hi all,

I am looking to write tons of code in python and looking for guidance.
There are a lot of projects in openstack but it is hard to choose one. It
also becomes harder when some of the components are aiming to become more
stable instead of adding new features.

Regards,
Michael
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [third party][neutron] - OpenDaylight CI failing for past 6 days

2014-12-18 Thread Kyle Mestery
On Thu, Dec 18, 2014 at 4:44 AM, Anil Venkata 
wrote:
>
> Hi All
>
> Last successful build on OpenDaylight CI(
> https://jenkins.opendaylight.org/ovsdb/job/openstack-gerrit/ ) was 6 days
> back.
> After that, OpenDaylight CI Jenkins job is failing for all the patches.
>
> Can we remove the voting rights for the OpenDaylight CI until it is fixed?
>
I am working to disable this now. The OpenDaylight team has been working
to get this running but I think they need a few more days.

Thanks,
Kyle


> Thanks
> Anil.Venakata
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] fixed ip info shown for port even when dhcp is disabled

2014-12-18 Thread John Belamaric
Hi Paddu,

Take a look at what we are working on in Kilo [1] for external IPAM. While this 
does not address DHCP specifically, it does allow you to use an external source 
to allocate the IP that OpenStack uses, which may solve your problem.

Another solution to your question is to invert the logic - you need to take the 
IP allocated by OpenStack and program the DHCP server to provide a fixed IP for 
that MAC.

You may be interested in looking at this Etherpad [2] that Don Kehn put 
together gathering all the various DHCP blueprints and related info, and also 
at this BP [3] for including a DHCP relay so we can utilize external DHCP more 
easily.

[1] https://blueprints.launchpad.net/neutron/+spec/neutron-ipam
[2] https://etherpad.openstack.org/p/neutron-dhcp-org
[3] https://blueprints.launchpad.net/neutron/+spec/dhcp-relay
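
The "invert the logic" suggestion can be sketched like this: read each port's
OpenStack-allocated MAC/IP pair and emit dhcp-host entries for an external
dnsmasq (assumes python-neutronclient; the credentials and network ID are
made up):

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://keystone:5000/v2.0')

    for port in neutron.list_ports(network_id='NET-UUID')['ports']:
        for fixed_ip in port['fixed_ips']:
            # e.g. dhcp-host=fa:16:3e:aa:bb:cc,10.0.0.5
            print('dhcp-host=%s,%s' % (port['mac_address'],
                                       fixed_ip['ip_address']))
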

John

From: Padmanabhan Krishnan <kpr...@yahoo.com>
Reply-To: Padmanabhan Krishnan <kpr...@yahoo.com>, "OpenStack Development
Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Wednesday, December 17, 2014 at 6:06 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron] fixed ip info shown for port even when 
dhcp is disabled

This means whatever tools the operators are using, they need to make sure the
IP address assigned inside the VM matches what OpenStack has assigned to the
port.
Bringing up the question that I had in another thread on the same topic:

If one wants to use the provider DHCP server and not have OpenStack's DHCP or
L3 agent/DVR, it may not be possible to do so even with DHCP disabled in the
OpenStack network. Even if the provider DHCP server is configured with the same
start/end range in the same subnet, there's no guarantee that it will match
the OpenStack-assigned IP address for bulk VM launches or when there's a
failure case.
So, how does one deploy external DHCP with OpenStack?

If OpenStack hasn't assigned an IP address when DHCP is disabled for a network,
can't port_update be done with the provider-DHCP-specified IP address to put
the anti-spoofing and security rules in place?
With an OpenStack-assigned IP address, port_update cannot be done since the IP
addresses aren't in sync and can overlap.

Thanks,
Paddu



On 12/16/14 4:30 AM, "Pasquale Porreca" <pasquale.porr...@dektech.com.au> wrote:

>I understood and I agree that assigning the ip address to the port is
>not a bug, however showing it to the user, at least in Horizon dashboard
>where it pops up in the main instance screen without a specific search,
>can be very confusing.
>
>On 12/16/14 12:25, Salvatore Orlando wrote:
>> In Neutron IP address management and distribution are separated
>>concepts.
>> IP addresses are assigned to ports even when DHCP is disabled. That IP
>> address is indeed used to configure anti-spoofing rules and security
>>groups.
>>
>> It is however understandable that one wonders why an IP address is
>>assigned
>> to a port if there is no DHCP server to communicate that address.
>>Operators
>> might decide to use different tools to ensure the IP address is then
>> assigned to the instance's ports. On XenServer for instance one could
>>use a
>> guest agent reading network configuration from XenStore; as another
>> example, older versions of Openstack used to inject network
>>configuration
>> into the instance file system; I reckon that today's configdrive might
>>also
>> be used to configure instance's networking.
>>
>> Summarising I don't think this is a bug. Nevertheless if you have any
>>idea
>> regarding improvements on the API UX feel free to file a bug report.
>>
>> Salvatore
>>
>> On 16 December 2014 at 10:41, Pasquale Porreca <
>> pasquale.porr...@dektech.com.au> 
>> wrote:
>>>
>>> Is there a specific reason for which a fixed ip is bound to a port on a
>>> subnet where dhcp is disabled? It is confusing to have this info shown
>>> when the instance doesn't actually have an IP on that port.
>>> Should I file a bug report, or is this intended behavior?
>>>
>>> --
>>> Pasquale Porreca
>>>
>>> DEK Technologies
>>> Via dei Castelli Romani, 22
>>> 00040 Pomezia (Roma)
>>>
>>> Mobile +39 3394823805
>>> Skype paskporr
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>--
>Pasquale Porreca
>
>DEK Technologies
>Via dei Castelli Romani, 22
>00040 Pomezia (Roma)
>
>Mobile +39 3394823805
>Skype paskporr
>
>___
>OpenStack-dev 

Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

2014-12-18 Thread Yuriy.Babenko
Hi,
in the IRC meeting yesterday we agreed to work on the use-case for service 
function chaining as it seems to be important for a lot of participants [1].
We will prepare the first draft and share it in the TelcoWG Wiki for discussion.

There is one blueprint in openstack on that in [2]


[1] 
http://eavesdrop.openstack.org/meetings/telcowg/2014/telcowg.2014-12-17-14.01.txt
[2] 
https://blueprints.launchpad.net/group-based-policy/+spec/group-based-policy-service-chaining

Kind regards
Yuriy Babenko

From: A, Keshava [mailto:keshav...@hp.com]
Sent: Wednesday, December 10, 2014 7:06 PM
To: stephen.kf.w...@gmail.com; OpenStack Development Mailing List (not for 
usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There are many unknowns w.r.t. ‘Service-VM’ and how it should work from an NFV 
perspective.
In my opinion it has not been decided what the Service-VM framework should look like.
Depending on this, we at OpenStack will also see an impact on ‘Service Chaining’.
Please find attached the mail w.r.t. that discussion with NFV on ‘Service-VM + 
OpenStack OVS related discussion’.


Regards,
keshava

From: Stephen Wong [mailto:stephen.kf.w...@gmail.com]
Sent: Wednesday, December 10, 2014 10:03 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [NFV][Telco] Service VM v/s its basic framework

Hi Murali,

There is already a ServiceVM project (Tacker), currently under development 
on stackforge:

https://wiki.openstack.org/wiki/ServiceVM

If you are interested in this topic, please take a look at the wiki page 
above and see if the project's goals align with yours. If so, you are certainly 
welcome to join the IRC meeting and start to contribute to the project's 
direction and design.

Thanks,
- Stephen


On Wed, Dec 10, 2014 at 7:01 AM, Murali B 
mailto:mbi...@gmail.com>> wrote:
Hi keshava,

We would like to contribute towards service chaining and NFV.

Could you please share any documents you have related to the service VM.

The service chain can be achieved if we are able to redirect the traffic to the
service VM using OVS flows.

In this case we don't need to have routing enabled on the service VM (traffic is 
redirected at L2).

All the tenant VMs in the cloud could use this service VM's services by adding the 
OVS rules in OVS.
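As a rough illustration of the L2 redirect idea -- a hedged sketch with a
made-up bridge name and port numbers, shelling out to ovs-ofctl:

import subprocess

def redirect_to_service_vm(tenant_ofport, service_ofport, bridge='br-int'):
    # Steer traffic entering from the tenant VM's OVS port straight out
    # the service VM's port, bypassing routing entirely (pure L2 redirect).
    flow = ('priority=100,in_port=%d,actions=output:%d'
            % (tenant_ofport, service_ofport))
    subprocess.check_call(['ovs-ofctl', 'add-flow', bridge, flow])

redirect_to_service_vm(tenant_ofport=5, service_ofport=7)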


Thanks
-Murali




___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] static files handling, bower/

2014-12-18 Thread Radomir Dopieralski
Hello,

revisiting the package management for Horizon's static files again,
I would like to propose a particular solution. Hopefully it will allow
us to both simplify the whole setup and use the popular tools for the
job, without losing too many of the benefits of our current process.

The changes we would need to make are as follows:

* get rid of XStatic entirely;
* add to the repository a configuration file for Bower, with all the
required bower packages listed and their versions specified;
* add to the repository a static_settings.py file, with a single
variable defined, STATICFILES_DIRS. That variable would be initialized
to a list of pairs mapping filesystem directories to URLs within the
/static tree. By default it would only have a single mapping, pointing
to where Bower installs all the stuff by default (see the sketch after
this list);
* add a line "from static_settings import STATICFILES_DIRS" to the
settings.py file;
* add jobs both to run_tests.sh and any gate scripts, that would run Bower;
* add a check on the gate that makes sure that all direct and indirect
dependencies of all required Bower packages are listed in its
configuration files (pretty much what we have for requirements.txt now);
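A minimal sketch of that static_settings.py file, assuming Bower's default
bower_components install directory and Django's (URL prefix, directory)
tuple convention for STATICFILES_DIRS -- the prefix and path are
illustrative only:

import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

STATICFILES_DIRS = [
    # (URL prefix under /static, filesystem directory to serve from)
    ('lib', os.path.join(BASE_DIR, 'bower_components')),
]

Packagers would then replace this file wholesale, as described below.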

That's all. Now, how that would be used.

1. The developers will just use Bower the way they would normally use
it, being able to install and test any of the libraries in any versions
they like. The only additional thing is that they would need to add any
additional libraries or changed versions to the Bower configuration file
before they push their patch for review and merge.

2. The packagers can read the list of all required packages from the
Bower configuration file, and make sure they have all the required
libraries packages in the required versions.

Next, they replace the static_settings.py file with one they have
prepared manually or automatically. The file lists the locations of all
the library directories, and, in the case when the directory structure
differs from what Bower provides, even mappings between subdirectories
and individual files.

3. Security patches need to go into the Bower packages directly, which
is good for the whole community.

4. If we ever need a library that is not packaged for Bower, we will
package it just as we did with the XStatic packages, only for Bower,
which has a much larger user base and a better chance of other projects
also using that package and helping with its testing.

What do you think? Do you see any disastrous problems with this system?
-- 
Radomir Dopieralski

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Complexity check and v2 API

2014-12-18 Thread Pasquale Porreca
I created a bug report and proposed a fix for this issue:

https://bugs.launchpad.net/nova/+bug/1403586

@Matthew Gilliard: I added you as reviewer for my patch, since you asked
for it.

Thanks to anyone who wants to review the bug report and the patch.
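For reference, the refactoring direction discussed in the quoted thread
below -- pulling each extension's parameter handling out of
Controller.create() into a small helper -- would look roughly like this
hypothetical sketch (the 'os-new-param' alias and new_param name come from
the example below, not from real nova code):

def _extract_new_param(self, server_dict):
    if self.ext_mgr.is_loaded('os-new-param'):
        return server_dict.get('new_param')
    return None

# ...and inside Controller.create():
#     new_param = self._extract_new_param(server_dict)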

On 12/18/14 09:33, Pasquale Porreca wrote:
> Yes, for v2.1 there is not this problem; moreover the corresponding v2.1
> servers.py has much lower complexity than the v2 one.
>
> On 12/17/14 20:10, Christopher Yeoh wrote:
>> Hi,
>>
>> Given the timing (no spec approved) it sounds like a v2.1 plus
>> microversions (just merging) with no v2 changes at all.
>>
>> The v2.1 framework is more flexible and you should need no changes to
>> servers.py at all as there are hooks for adding extra parameters in
>> separate plugins. There are examples of this in the v3 directory
>> which is really v2.1 now.
>>
>> Chris
>> On Thu, 18 Dec 2014 at 3:49 am, Pasquale Porreca
>> > > wrote:
>>
>> Thank you for the answer.
>>
>> my API proposal won't be merged in kilo release since the
>> deadline for
>> approval is tomorrow, so I may propose the fix to lower the
>> complexity
>> in another way, what do you think about a bug fix?
>>
>> On 12/17/14 18:05, Matthew Gilliard wrote:
>> > Hello Pasquale
>> >
>> >   The problem is that you are trying to add a new if/else
>> branch into
>> > a method which is already ~250 lines long, and has the highest
>> > complexity of any function in the nova codebase. I assume that you
>> > didn't contribute much to that complexity, but we've recently
>> added a
>> > limit to stop it getting any worse. So, regarding your 4
>> suggestions:
>> >
>> > 1/ As I understand it, v2.1 should be the same as v2 at the
>> > moment, so they need to be kept the same
>> > 2/ You can't ignore it - it will fail CI
>> > 3/ No thank you. This limit should only ever be lowered :-)
>> > 4/ This is 'the right way'. Your suggestion for the
>> refactor does
>> > sound good.
>> >
>> > I suggest a single patch that refactors and lowers the limit in
>> > tox.ini.  Once you've done that then you can add the new
>> parameter in
>> > a following patch. Please feel free to add me to any patches you
>> > create.
>> >
>> > Matthew
>> >
>> >
>> >
>> > On Wed, Dec 17, 2014 at 4:18 PM, Pasquale Porreca
>> > > > wrote:
>> >> Hello
>> >>
>> >> I am working on an API extension that adds a parameter on
>> create server
>> >> call; to implement the v2 API I added few lines of code to
>> >> nova/api/openstack/compute/servers.py
>> >>
>> >> In particular just adding something like
>> >>
>> >> new_param = None
>> >> if self.ext_mgr.is_loaded('os-new-param'):
>> >> new_param = server_dict.get('new_param')
>> >>
>> >> leads to a pep8 fail with message 'Controller.create' is too
>> complex (47)
>> >> (Note that in tox.ini the max complexity is fixed to 47 and
>> there is a note
>> >> specifying 46 is the max complexity present at the moment).
>> >>
>> >> It is quite easy to make this test pass creating a new method
>> just to
>> >> execute these lines of code, anyway all other extensions are
>> handled in that
>> >> way and one of the most important style rules states to be
>> consistent with
>> >> surrounding code, so I don't think a separate function is the
>> way to go
>> >> (unless it implies a change in how all other extensions are
>> handled too).
>> >>
>> >> My thoughts on this situation:
>> >>
>> >> 1) New extensions should not consider v2 but only v2.1, so
>> that file should
>> >> not be touched
>> >> 2) Ignore this error and go on: if and when the extension will
>> be merged the
>> >> complexity in tox.ini will be changed too
>> >> 3) The complexity in tox.ini should be raised to allow new v2
>> extensions
>> >> 4) The code of that module should be refactored to lower the
>> complexity
>> >> (i.e. move the load of each extension in a separate function)
>> >>
>> >> I would like to know if any of my points is close to the
>> correct solution.
>> >>
>> >> --
>> >> Pasquale Porreca
>> >>
>> >> DEK Technologies
>> >> Via dei Castelli Romani, 22
>> >> 00040 Pomezia (Roma)
>> >>
>> >> Mobile +39 3394823805
>> >> Skype paskporr
>> >>
>> >>
>> >> ___
>> >> OpenStack-dev mailing list
>> >> OpenStack-dev@lists.openstack.org
>> 
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> > ___
>> > OpenStack-dev mailing list
>> > OpenStack-

Re: [openstack-dev] [Fuel] Support of warnings in Fuel UI

2014-12-18 Thread Vitaly Kramskikh
I also want to add that there is also a short form for this:

restrictions:
  - "settings:common.libvirt_type.value != 'kvm'": "KVM only is supported"

There are also a few restrictions in existing openstack.yaml like this:

volumes_lvm:
  label: "Cinder LVM over iSCSI for volumes"
  restrictions:
    - "settings:storage.volumes_ceph.value == true or settings:common.libvirt_type.value == 'vcenter'"

The restriction above is actually 2 restrictions for 2 unrelated things and
it should be separated like this:

restrictions:
  - "settings:storage.volumes_ceph.value == true": "This stuff cannot be used with Ceph"
  - "settings:common.libvirt_type.value == 'vcenter'": "This stuff cannot be used with vCenter"

So please add these messages for your features to improve Fuel UX.

2014-12-18 10:56 GMT+01:00 Julia Aranovich :
>
> Hi All,
>
> First of all, I would like to inform you that support of warnings was
> added on Settings tab in Fuel UI.
> Now you can add a 'message' attribute to a setting restriction and it will be
> displayed as a tooltip on the tab if the restriction condition is satisfied.
>
> So, a setting restriction should have the following format in the
> openstack.yaml file:
>
> restrictions:
>   - condition: "settings:common.libvirt_type.value != 'kvm'"
>     message: "KVM only is supported"
>
> This format is also eligible for setting group restrictions and
> restrictions of setting values (for setting with 'radio' type).
>
> Please also note that message attribute can be also added to role
> restrictions and will be displayed as a tooltip on Add Nodes screen.
>
>
>
> And the second goal of my letter is to ask you to go through the
> openstack.yaml file
> and add appropriate messages for restrictions. It will make the Fuel UI more
> clear and informative.
>
> Thank you in advance!
>
> Julia
>
> --
> Kind Regards,
> Julia Aranovich,
> Software Engineer,
> Mirantis, Inc
> +7 (905) 388-82-61 (cell)
> Skype: juliakirnosova
> www.mirantis.ru
> jaranov...@mirantis.com 
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

-- 
Vitaly Kramskikh,
Software Engineer,
Mirantis, Inc.
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution

2014-12-18 Thread Gary Kotton


On 12/18/14, 2:06 PM, "Mike Kolesnik"  wrote:

>Hi Neutron community members.
>
>I wanted to query the community about a proposal of how to fix HA routers
>not 
>working with L2Population (bug 1365476[1]).
>This bug is important to fix especially if we want to have HA routers and
>DVR
>routers working together.
>
>[1] https://bugs.launchpad.net/neutron/+bug/1365476
>
>What's happening now?
>* HA routers use distributed ports, i.e. the port with the same IP & MAC
>  details is applied on all nodes where an L3 agent is hosting this
>router.
>* Currently, the port details have a binding pointing to an arbitrary node
>  and this is not updated.
>* L2pop takes this "potentially stale" information and uses it to create:
>  1. A tunnel to the node.
>  2. An FDB entry that directs traffic for that port to that node.
>  3. If ARP responder is on, ARP requests will not traverse the network.
>* Problem is, the master router wouldn't necessarily be running on the
>  reported agent.
>  This means that traffic would not reach the master node but some
>arbitrary
>  node where the router master might be running, but might be in another
>  state (standby, fail).
>
>What is proposed?
>Basically the idea is not to do L2Pop for HA router ports that reside on
>the
>tenant network.
>Instead, we would create a tunnel to each node hosting the HA router so
>that
>the normal learning switch functionality would take care of switching the
>traffic to the master router.

In Neutron we just ensure that the MAC address is unique per network.
Could a duplicate MAC address cause problems here?

>This way no matter where the master router is currently running, the data
>plane would know how to forward traffic to it.
>This solution requires changes on the controller only.
>
>What's to gain?
>* Data plane only solution, independent of the control plane.
>* Lowest failover time (same as HA routers today).
>* High backport potential:
>  * No APIs changed/added.
>  * No configuration changes.
>  * No DB changes.
>  * Changes localized to a single file and limited in scope.
>
>What's the alternative?
>An alternative solution would be to have the controller update the port
>binding
>on the single port so that the plain old L2Pop happens and notifies about
>the
>location of the master router.
>This basically negates all the benefits of the proposed solution, but is
>wider.
>This solution depends on the report-ha-router-master spec which is
>currently in
>the implementation phase.
>
>It's important to note that these two solutions don't collide and could
>be done
>independently. The one I'm proposing just makes more sense from an HA
>viewpoint
>because of it's benefits which fit the HA methodology of being fast &
>having as
>little outside dependency as possible.
>It could be done as an initial solution which solves the bug for mechanism
>drivers that support normal learning switch (OVS), and later kept as an
>optimization to the more general, controller based, solution which will
>solve
>the issue for any mechanism driver working with L2Pop (Linux Bridge,
>possibly
>others).
>
>Would love to hear your thoughts on the subject.
>
>Regards,
>Mike
>
>___
>OpenStack-dev mailing list
>OpenStack-dev@lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] How do the CI clouds work?

2014-12-18 Thread Derek Higgins
On 18/12/14 08:48, Steve Kowalik wrote:
> Hai,
> 
>   I am finding myself at a loss at explaining how the CI clouds that run
> the tripleo jobs work from end-to-end. I am clear that we have a tripleo
> deployment running on those racks, with a seed, a HA undercloud and
> overcloud, but then I'm left with a number of questions, such as:
Yup, this is correct. From a CI point of view all that is relevant is
the overcloud and a set of baremetal testenv hosts. The seed and
undercloud are there because we used tripleo to deploy the thing in the
first place.

> 
>   How do we run the testenv images on the overcloud?
nodepool talks to our overcloud to create an instance where the jenkins
jobs run. This "jenkins node" is where we build the images; jenkins
doesn't manage and isn't aware of the testenv hosts.

The entry point for jenkins to run tripleo ci is toci_gate_test.sh, at
the end of this script you'll see a call to testenv-client[1]

testenv-client talks to gearman (an instance on our overcloud, a
different gearman instance from the one infra runs), and gearman responds
with a json file representing one of the testenvs that have been
registered with it.

testenv-client then runs the command "./toci_devtest.sh" and passes in
the json file (via $TE_DATAFILE). To prevent 2 CI jobs from using the same
testenv, the testenv is "locked" until toci_devtest exits. The
jenkins node now has all the relevant IPs and MAC addresses to talk to
the testenv.
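(Purely to illustrate the hand-over -- a hypothetical sketch of how a job
script might consume $TE_DATAFILE; the key name is invented, not the actual
testenv schema:)

import json
import os

with open(os.environ['TE_DATAFILE']) as f:
    testenv = json.load(f)

seed_ip = testenv.get('seed-ip')  # invented key, for illustration only
print('talking to testenv seed at %s' % seed_ip)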

> 
>   How do the testenv images interact with the nova-compute machines in
> the overcloud?
The images are built on instances in this cloud. The MAC address of eth1
on the seed for the testenv has been registered with neutron on the
overcloud, so its IP is known (it's in the json file we got in
$TE_DATAFILE). All traffic to the other instances in the CI testenv is
routed through the seed: its eth2 shares an OVS bridge with eth1 of the
other VMs in the same testenv.

> 
>   Are the machines running the testenv images meant to be long-running,
> or are they recycled after n number of runs?
They are long running and in theory shouldn't need to be recycled; in
practice they get recycled sometimes for one of 2 reasons:
1. The image needs to be updated (e.g. to increase the amount of RAM on
the libvirt domains they host)
2. If one is experiencing a problem, I usually do a "nova rebuild" on
it. This doesn't happen very frequently; we currently have 15 TE hosts
on rh1, 7 of which have an uptime over 80 days, while the others are new
HW that was added last week. But problems we have encountered in the past
causing a rebuild include a TE host losing its IP, or
https://bugs.launchpad.net/tripleo/+bug/1335926
https://bugs.launchpad.net/tripleo/+bug/1314709

> 
> Cheers,
No problem. I tried to document this at one stage here [2], but feel free
to add more, point out where it's lacking, or ask questions here and
I'll attempt to answer.

thanks,
Derek.


[1]
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_gate_test.sh?id=3d86dd4c885a68eabddb7f73a6dbe6f3e75fde64#n69
[2]
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/docs/TripleO-ci.rst

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron][L2Pop][HA Routers] Request for comments for a possible solution

2014-12-18 Thread Mike Kolesnik
Hi Neutron community members.

I wanted to query the community about a proposal of how to fix HA routers not 
working with L2Population (bug 1365476[1]).
This bug is important to fix especially if we want to have HA routers and DVR
routers working together.

[1] https://bugs.launchpad.net/neutron/+bug/1365476

What's happening now?
* HA routers use distributed ports, i.e. the port with the same IP & MAC
  details is applied on all nodes where an L3 agent is hosting this router.
* Currently, the port details have a binding pointing to an arbitrary node
  and this is not updated.
* L2pop takes this "potentially stale" information and uses it to create: 
  1. A tunnel to the node.
  2. An FDB entry that directs traffic for that port to that node.
  3. If ARP responder is on, ARP requests will not traverse the network.
* Problem is, the master router wouldn't necessarily be running on the
  reported agent.
  This means that traffic would not reach the master node but some arbitrary
  node where the router master might be running, but might be in another
  state (standby, fail).

What is proposed?
Basically the idea is not to do L2Pop for HA router ports that reside on the
tenant network.
Instead, we would create a tunnel to each node hosting the HA router so that
the normal learning switch functionality would take care of switching the
traffic to the master router.
This way no matter where the master router is currently running, the data
plane would know how to forward traffic to it.
This solution requires changes on the controller only.

What's to gain?
* Data plane only solution, independent of the control plane.
* Lowest failover time (same as HA routers today).
* High backport potential:
  * No APIs changed/added.
  * No configuration changes.
  * No DB changes.
  * Changes localized to a single file and limited in scope.

What's the alternative?
An alternative solution would be to have the controller update the port binding
on the single port so that the plain old L2Pop happens and notifies about the
location of the master router.
This basically negates all the benefits of the proposed solution, but is wider.
This solution depends on the report-ha-router-master spec which is currently in
the implementation phase.

It's important to note that these two solutions don't collide and could be done
independently. The one I'm proposing just makes more sense from an HA viewpoint
because of its benefits which fit the HA methodology of being fast & having as
little outside dependency as possible.
It could be done as an initial solution which solves the bug for mechanism
drivers that support normal learning switch (OVS), and later kept as an
optimization to the more general, controller based, solution which will solve
the issue for any mechanism driver working with L2Pop (Linux Bridge, possibly
others).

Would love to hear your thoughts on the subject.

Regards,
Mike

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Option to skip deleting images in use?

2014-12-18 Thread Kuvaja, Erno
I think that's a horrible idea. How do we do that in a store-independent way with 
the linking dependencies?

We should not make a universal use case like this depend on a limited subset of 
backends, especially non-OpenStack ones. Neither Glance nor Nova should ever depend 
on having direct access to the actual medium where the images are stored. I think 
this is a schoolbook example of something called a database. It's arguable whether 
this should be tracked in Glance or Nova, but it should definitely not be a dirty 
hack expecting specific backend characteristics.

As mentioned before, the protected image property is there to ensure that the image 
does not get deleted, and it is also easy to track when the images are queried. 
Perhaps the record needs to track the original state of the protected flag, the 
image id and a use count: a 3-column table and a couple of API calls. Let's at 
least not make it any more complicated than it needs to be if such functionality 
is desired.


-  Erno

From: Nikhil Komawar [mailto:nikhil.koma...@rackspace.com]
Sent: 17 December 2014 20:34
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?

Guess that's a implementation detail. Depends on the way you go about using 
what's available now, I suppose.

Thanks,
-Nikhil

From: Chris St. Pierre [chris.a.st.pie...@gmail.com]
Sent: Wednesday, December 17, 2014 2:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?
I was assuming atomic increment/decrement operations, in which case I'm not 
sure I see the race conditions. Or is atomicity assuming too much?

On Wed, Dec 17, 2014 at 11:59 AM, Nikhil Komawar 
mailto:nikhil.koma...@rackspace.com>> wrote:
That looks like a decent alternative if it works. However, it would be too racy 
unless we implement a test-and-set for such properties, or there is a 
separate job which queues up these requests and performs them sequentially for 
each tenant.

Thanks,
-Nikhil

From: Chris St. Pierre 
[chris.a.st.pie...@gmail.com]
Sent: Wednesday, December 17, 2014 10:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [glance] Option to skip deleting images in use?
That's unfortunately too simple. You run into one of two cases:

1. If the job automatically removes the protected attribute when an image is no 
longer in use, then you lose the ability to use "protected" on images that are 
not in use. I.e., there's no way to say, "nothing is currently using this 
image, but please keep it around." (This seems particularly useful for 
snapshots, for instance.)

2. If the job does not automatically remove the protected attribute, then an 
image would be protected if it had ever been in use; to delete an image, you'd 
have to manually un-protect it, which is a workflow that quite explicitly 
defeats the whole purpose of flagging images as protected when they're in use.

It seems like flagging an image as *not* in use is actually a fairly difficult 
problem, since it requires consensus among all components that might be using 
images.

The only solution that readily occurs to me would be to add something like a 
filesystem link count to images in Glance. Then when Nova spawns an instance, 
it increments the usage count; when the instance is destroyed, the usage count 
is decremented. And similarly with other components that use images. An image 
could only be deleted when its usage count was zero.

There are ample opportunities to get out of sync there, but it's at least a 
sketch of something that might work, and isn't *too* horribly hackish. Thoughts?
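A hedged sketch of what that usage count might look like, assuming
single-statement SQL updates as the atomicity mechanism; the table and
column names are invented, not part of Glance's actual schema:

import sqlalchemy as sa

def increment_usage(conn, image_id):
    # Atomic in the database: no read-modify-write in Python.
    conn.execute(sa.text(
        "UPDATE image_usage SET use_count = use_count + 1 "
        "WHERE image_id = :id"), {"id": image_id})

def decrement_usage(conn, image_id):
    conn.execute(sa.text(
        "UPDATE image_usage SET use_count = use_count - 1 "
        "WHERE image_id = :id AND use_count > 0"), {"id": image_id})

def can_delete(conn, image_id):
    row = conn.execute(sa.text(
        "SELECT use_count FROM image_usage WHERE image_id = :id"),
        {"id": image_id}).fetchone()
    return row is None or row[0] == 0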

On Tue, Dec 16, 2014 at 6:11 PM, Vishvananda Ishaya 
mailto:vishvana...@gmail.com>> wrote:
A simple solution that wouldn't require modification of glance would be a cron 
job
that lists images and snapshots and marks them protected while they are in use.

Vish

On Dec 16, 2014, at 3:19 PM, Collins, Sean 
mailto:sean_colli...@cable.comcast.com>> wrote:

> On Tue, Dec 16, 2014 at 05:12:31PM EST, Chris St. Pierre wrote:
>> No, I'm looking to prevent images that are in use from being deleted. "In
>> use" and "protected" are disjoint sets.
>
> I have seen multiple cases of images (and snapshots) being deleted while
> still in use in Nova, which leads to some very, shall we say,
> interesting bugs and support problems.
>
> I do think that we should try and determine a way forward on this, they
> are indeed disjoint sets. Setting an image as protected is a proactive
> measure, we should try and figure out a way to keep tenants from
> shooting themselves in the foot if possible.
>
> --
> Sean M. Collins
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailma

[openstack-dev] [TripleO] Bug squashing followup

2014-12-18 Thread Derek Higgins
While bug squashing yesterday, I went through quite a lot of bugs,
closing around 40 that were already fixed or no longer relevant.
I eventually ran out of time, but I'm pretty sure if we
split the task up between us we could weed out a lot more.

What I'd like to do is, as a one-off, randomly split up all the bugs among
a group of volunteers (hopefully a large number of people). Each person
gets assigned X number of bugs and is then responsible for just deciding
if each is still a relevant bug (or finding somebody who can help decide)
and closing it if necessary. Nothing needs to get fixed here; we just need
to make sure people have an up-to-date list of relevant bugs.

So who wants to volunteer? We probably need about 15+ people for this to
be split into manageable chunks. If you're willing to help out just add
your name to this list:
https://etherpad.openstack.org/p/tripleo-bug-weeding

If we get enough people I'll follow up by splitting out the load and
assigning to people.

The bug squashing day yesterday put a big dent in these, but wasn't
entirely focused on weeding out stale bugs; some people probably got
caught up fixing individual bugs, and it wasn't helped by a temporary
failure of our CI jobs (provoked by a pbr update, while we were building
pbr when we didn't need to be).

thanks,
Derek.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2014-12-18 Thread Punith S
Hi Eduard

we tried running
https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
on an Ubuntu 12.04 master, and it appears to be working fine on 12.04.

thanks

On Thu, Dec 18, 2014 at 1:57 PM, Eduard Matei <
eduard.ma...@cloudfounders.com> wrote:
>
> Hi,
> Seems i can't install using puppet on the jenkins master using
> install_master.sh from
> https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
> because it's running Ubuntu 11.10 and it appears unsupported.
> I managed to install puppet manually on the master, but everything else fails,
> so I'm trying to manually install zuul, nodepool and jenkins job
> builder, and see where I end up.
>
> The slave looks complete; I got some errors running install_slave, so I
> ran parts of the script manually, changing some params, and it appears
> installed, but there's no way to test it without the master.
>
> Any ideas welcome.
>
> Thanks,
>
> Eduard
>
> On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy 
> wrote:
>
>>  Manually running the script requires a few environment settings. Take a
>> look at the README here:
>>
>> https://github.com/openstack-infra/devstack-gate
>>
>>
>>
>> Regarding cinder, I’m using this repo to run our cinder jobs (fork from
>> jaypipes).
>>
>> https://github.com/rasselin/os-ext-testing
>>
>>
>>
>> Note that this solution doesn’t use the Jenkins gerrit trigger pluggin,
>> but zuul.
>>
>>
>>
>> There’s a sample job for cinder here. It’s in Jenkins Job Builder format.
>>
>>
>> https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample
>>
>>
>>
>> You can ask more questions in IRC freenode #openstack-cinder. (irc#
>> asselin)
>>
>>
>>
>> Ramy
>>
>>
>>
>> *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
>> *Sent:* Tuesday, December 16, 2014 12:41 AM
>> *To:* Bailey, Darragh
>> *Cc:* OpenStack Development Mailing List (not for usage questions);
>> OpenStack
>> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need
>> help setting up CI
>>
>>
>>
>> Hi,
>>
>>
>>
>> Can someone point me to some working documentation on how to setup third
>> party CI? (joinfu's instructions don't seem to work, and manually running
>> devstack-gate scripts fails:
>>
>> Running gate_hook
>>
>> Job timeout set to: 163 minutes
>>
>> timeout: failed to run command 
>> ‘/opt/stack/new/devstack-gate/devstack-vm-gate.sh’: No such file or directory
>>
>> ERROR: the main setup script run by this job failed - exit code: 127
>>
>> please look at the relevant log files to determine the root cause
>>
>> Cleaning up host
>>
>> ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
>>
>>  Build step 'Execute shell' marked build as failure.
>>
>>
>>
>> I have a working Jenkins slave with devstack and our internal libraries,
>> i have Gerrit Trigger Plugin working and triggering on patches created, i
>> just need the actual job contents so that it can get to comment with the
>> test results.
>>
>>
>>
>> Thanks,
>>
>>
>>
>> Eduard
>>
>>
>>
>> On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei <
>> eduard.ma...@cloudfounders.com> wrote:
>>
>>  Hi Darragh, thanks for your input
>>
>>
>>
>> I double checked the job settings and fixed it:
>>
>> - build triggers is set to Gerrit event
>>
>> - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger
>> Plugin and tested separately)
>>
>> - Trigger on: Patchset Created
>>
>> - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches:
>> Type: Path, Pattern: ** (was Type Plain on both)
>>
>> Now the job is triggered by commit on openstack-dev/sandbox :)
>>
>>
>>
>> Regarding the Query and Trigger Gerrit Patches, i found my patch using
>> query: status:open project:openstack-dev/sandbox change:139585 and i can
>> trigger it manually and it executes the job.
>>
>>
>>
>> But i still have the problem: what should the job do? It doesn't actually
>> do anything, it doesn't run tests or comment on the patch.
>>
>> Do you have an example of job?
>>
>>
>>
>> Thanks,
>>
>> Eduard
>>
>>
>>
>> On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh  wrote:
>>
>> Hi Eduard,
>>
>>
>> I would check the trigger settings in the job, particularly which "type"
>> of pattern matching is being used for the branches. Found it tends to be
>> the spot that catches most people out when configuring jobs with the
>> Gerrit Trigger plugin. If you're looking to trigger against all branches
>> then you would want "Type: Path" and "Pattern: **" appearing in the UI.
>>
>> If you have sufficient access using the 'Query and Trigger Gerrit
>> Patches' page accessible from the main view will make it easier to
>> confirm that your Jenkins instance can actually see changes in gerrit
>> for the given project (which should mean that it can see the
>> corresponding events as well). Can also use the same page to re-trigger
>> for PatchsetCreated events to see if you've set the patterns on the job
>> correctly.
>>
>>

[openstack-dev] [third party][neutron] - OpenDaylight CI failing for past 6 days

2014-12-18 Thread Anil Venkata
Hi All

The last successful build on the OpenDaylight CI ( 
https://jenkins.opendaylight.org/ovsdb/job/openstack-gerrit/ ) was 6 days ago.
Since then, the OpenDaylight CI Jenkins job has been failing for all patches.

Can we remove the voting rights for the OpenDaylight CI until it is fixed?

Thanks
Anil.Venakata

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel] Support of warnings in Fuel UI

2014-12-18 Thread Julia Aranovich
Hi All,

First of all, I would like to inform you that support for warnings was added
on the Settings tab in Fuel UI.
Now you can add a 'message' attribute to a setting restriction and it will be
displayed as a tooltip on the tab if the restriction condition is satisfied.

So, a setting restriction should have the following format in the
openstack.yaml file:

restrictions:
  - condition: "settings:common.libvirt_type.value != 'kvm'"
    message: "KVM only is supported"

This format is also eligible for setting group restrictions and
restrictions of setting values (for setting with 'radio' type).

Please also note that message attribute can be also added to role
restrictions and will be displayed as a tooltip on Add Nodes screen.



And the second goal of my letter is to ask you to go through the
openstack.yaml file
and add appropriate messages for restrictions. It will make the Fuel UI more
clear and informative.

Thank you in advance!

Julia

-- 
Kind Regards,
Julia Aranovich,
Software Engineer,
Mirantis, Inc
+7 (905) 388-82-61 (cell)
Skype: juliakirnosova
www.mirantis.ru
jaranov...@mirantis.com 
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][resend with correct subject prefix] ask for usage of quota reserve

2014-12-18 Thread Chen CH Ji
AFAIK, quota reservations will expire in 24 hours:

cfg.IntOpt('reservation_expire',
           default=86400,
           help='Number of seconds until a reservation expires'),

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82454158
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   "Eli Qiao(Li Yong Qiao)" 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   12/18/2014 04:34 PM
Subject:[openstack-dev] [nova][resend with correct subject prefix] ask
for usage of quota reserve



hi all,
can anyone tell me what will happen if we call quotas.reserve() but never call
quotas.commit() or quotas.rollback()?

for example:

   1. when doing a resize, we call quotas.reserve() to reserve a delta
      quota (new_flavor - old_flavor)
   2. for some reason, nova-compute crashes, with no chance to call
      quotas.commit() or quotas.rollback() (called by finish_resize in
      nova/compute/manager.py)
   3. the next time the nova-compute server restarts, is the delta quota
      still reserved, or do we need any other operation on the quotas?
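(For reference, the reserve/commit/rollback calling pattern in question --
a hedged sketch, where 'quotas' stands in for nova's quota engine and the
deltas and worker function are placeholders:)

def resize_with_quota(quotas, context, delta_cores, delta_ram, do_resize):
    reservations = quotas.reserve(context, cores=delta_cores, ram=delta_ram)
    try:
        do_resize()  # placeholder for the actual resize work
    except Exception:
        quotas.rollback(context, reservations)
        raise
    else:
        quotas.commit(context, reservations)

# If neither commit() nor rollback() is ever called (e.g. the process
# crashes), the reservation expires after reservation_expire seconds.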


Thanks in advance
-Eli.


ps: this is related to patch: Handle RESIZE_PREP status when nova compute
do init_instance (https://review.openstack.org/#/c/132827/)


--
Thanks Eli Qiao(qia...@cn.ibm.com)
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and collaboration

2014-12-18 Thread A, Keshava
Hi Thomas,

Basically, as per your thought, extend the 'vpn-label' to OVS itself,
so that when an MPLS-over-GRE packet comes from OVS, that incoming label is used 
to index the respective VPN table on the DC-Edge side?

Questions:
1. Who tells OVS which label to use?
Are you thinking of having a BGP-VPN session between the DC-Edge and the
Compute Node (OVS), so that it can look at the BGP-VPN table there itself
and, based on the destination, add that VPN label as the MPLS label in OVS?
OR
Will an ODL or OpenStack controller dictate which VPN label to use to
both the DC-Edge and the CN (OVS)?

2. How much gain/advantage would there be in generating the MPLS from OVS
(compared to terminating VXLAN on the DC-Edge and then originating the MPLS
from there)?


keshava

-Original Message-
From: Thomas Morin [mailto:thomas.mo...@orange.com] 
Sent: Tuesday, December 16, 2014 7:10 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron] [RFC] Floating IP idea solicitation and 
collaboration

Hi Keshava,

2014-12-15 11:52, A, Keshava :
>   I have been thinking of "Starting MPLS right from CN" for L2VPN/EVPN 
> scenario also.
>
>   Below are my queries w.r.t supporting MPLS from OVS :
>   1. MPLS will be used even for VM-VM traffic across CNs 
> generated by OVS  ?

If E-VPN is used only to interconnect outside of a Neutron domain, then MPLS 
does not have to be used for traffic between VMs.

If E-VPN is used inside one DC for VM-VM traffic, then MPLS is only *one* of the 
possible encapsulations: E-VPN specs have been defined to use VXLAN (handy 
because there is native kernel support); MPLS/GRE or MPLS/UDP are other 
possibilities.

>   2. MPLS will be originated right from OVS and will be mapped at 
> Gateway (it may be NN/Hardware router ) to SP network ?
>   So MPLS will carry 2 Labels ? (one for hop-by-hop, and 
> other one 
> for end to identify network ?)

On "will carry 2 Labels ?" : this would be one possibility, but not the one we 
target.
We would actually favor MPLS/GRE (GRE used instead of what you call the MPLS 
"hop-by-hop" label) inside the DC -- this requires only one label.
At the DC edge gateway, depending on the interconnection techniques to connect 
the WAN, different options can be used (RFC4364 section 10): 
Option A with back-to-back VRFs (no MPLS label, but typically VLANs), or option 
B (with one MPLS label), a mix of A/B is also possible and sometimes called 
option D (one label) ;  option C also exists, but is not a good fit here.

Inside one DC, if vswitches see each other across an Ethernet segment, we can 
also use MPLS with just one label (the VPN label) without a GRE encap.

In a way, you can say that in Option B, the labels are "mapped" at the DC/WAN 
gateway(s), but this is really just MPLS label swapping, not to be misunderstood 
as mapping a DC label space to a WAN label space (see below, the label space is 
local to each device).


>   3. MPLS will go over even the "network physical infrastructure" 
>  also ?

The use of MPLS/GRE means we are doing an overlay, just like your typical 
VXLAN-based solution, and the network physical infrastructure does not need to 
be MPLS-aware (it just needs to be able to carry IP
traffic)

>   4. How the Labels will be mapped a/c virtual and physical world 
> ?

(I don't get the question, I'm not sure what you mean by "mapping labels")

>   5. Who manages the label space  ? Virtual world or physical 
> world or 
> both ? (OpenStack +  ODL ?)

In MPLS*, the label space is local to each device: a label is 
"downstream-assigned", i.e. allocated by the receiving device for a specific 
purpose (e.g. forwarding in a VRF). It is then (typically) advertised in a 
routing protocol; the sender device will use this label to send traffic to the 
receiving device for this specific purpose.  As a result a sender device may 
then use label 42 to forward traffic in the context of VPN X to a receiving 
device A, and the same label 42 to forward traffic in the context of another 
VPN Y to another receiving device B, and locally use label 42 to receive 
traffic for VPN Z.  There is no global label space to manage.
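(To make "downstream-assigned" concrete -- a toy illustration only, not
real BGP/MPLS code:)

class Device(object):
    def __init__(self):
        self._next = 16      # labels 0-15 are reserved in MPLS
        self.table = {}      # local label -> forwarding context (e.g. a VRF)

    def allocate_label(self, context):
        # Each device allocates from its own local label space.
        label = self._next
        self._next += 1
        self.table[label] = context
        return label

a, b = Device(), Device()
# A advertises: "send me VPN X traffic with label 16".
assert a.allocate_label('VRF for VPN X') == 16
# B independently hands out 16 for a different VPN -- no clash, since a
# label only has meaning on the device that allocated it.
assert b.allocate_label('VRF for VPN Y') == 16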

So, while you can design a solution where the label space is managed in a 
centralized fashion, this is not required.

You could design an SDN controller solution where the controller would manage 
one label space common to all nodes, or all the label spaces of all forwarding 
devices, but I think it's hard to derive any interesting property from such a 
design choice.

In our BaGPipe distributed design (and this is also true in OpenContrail for 
instance) the label space is managed locally on each compute node (or network 
node if the BGP speaker is on a network node), more precisely in the VPN 
implementation.

If you take a step back, the only naming space that has to be "managed" 
in BGP VPNs is the Route Target space. This is

Re: [openstack-dev] HTTPS for spice console

2014-12-18 Thread Jordan Pittier
Hi,
You'll need a recent version of spice-html5, because this commit
http://cgit.freedesktop.org/spice/spice-html5/commit/?id=293d405e15a4499219fe81e830862cc2b1518e3e
is recent.

Jordan

On Wed, Dec 17, 2014 at 11:29 PM, Akshik DBK  wrote:
>
> Is there any recommended approach to configure the spice console proxy over
> secure [https]? I could not find proper documentation for the same.
>
> Can someone point me in the right direction?
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-dev][Cinder] Driver stats return value: infinite or unavailable

2014-12-18 Thread Eduard Matei
Hi everyone,

We're in a bit of a predicament regarding review:
https://review.openstack.org/#/c/130733/

Two days ago it got a -1 from John G asking to change infinite to
unavailable although the docs clearly say that "If the driver is unable to
provide a value for free_capacity_gb or total_capacity_gb, keywords can be
provided instead. Please use ‘unknown’ if the array cannot report the value
or ‘infinite’ if the array has no upper limit." (
http://docs.openstack.org/developer/cinder/devref/drivers.html)

After I changed it, Walter A. Boring IV came and gave another -1 saying we
should return 'infinite'.

Since we use S3 as a backend and it has no upper limit (technically there
is a limit, but for the purposes of our driver there's no limit as the
backend is "elastic"), we could return 'infinite'.

Anyway, the problem is that now we missed the K-1 merge window although the
driver passed all tests (including cert tests).

So please, can someone decide which is the correct value, so we can use that
and get the patch approved (unless there are other issues).

Thanks,
Eduard
-- 

*Eduard Biceri Matei, Senior Software Developer*
www.cloudfounders.com
 | eduard.ma...@cloudfounders.com



*CloudFounders, The Private Cloud Software Company*

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] How do the CI clouds work?

2014-12-18 Thread Steve Kowalik
Hai,

I am finding myself at a loss at explaining how the CI clouds that run
the tripleo jobs work from end-to-end. I am clear that we have a tripleo
deployment running on those racks, with a seed, a HA undercloud and
overcloud, but then I'm left with a number of questions, such as:

How do we run the testenv images on the overcloud?

How do the testenv images interact with the nova-compute machines in
the overcloud?

Are the machines running the testenv images meant to be long-running,
or are they recycled after n number of runs?

Cheers,
-- 
Steve
In the beginning was the word, and the word was content-type: text/plain

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Complexity check and v2 API

2014-12-18 Thread Pasquale Porreca
Yes, for v2.1 there is not this problem; moreover the corresponding v2.1
servers.py has much lower complexity than the v2 one.

On 12/17/14 20:10, Christopher Yeoh wrote:
> Hi,
>
> Given the timing (no spec approved) it sounds like a v2.1 plus
> microversions (just merging) with no v2 changes at all.
>
> The v2.1 framework is more flexible and you should need no changes to
> servers.py at all as there are hooks for adding extra parameters in
> separate plugins. There are examples of this in the v3 directory which
> is really v2.1 now.
>
> Chris
> On Thu, 18 Dec 2014 at 3:49 am, Pasquale Porreca
>  > wrote:
>
> Thank you for the answer.
>
> my API proposal won't be merged in kilo release since the deadline for
> approval is tomorrow, so I may propose the fix to lower the complexity
> in another way, what do you think about a bug fix?
>
> On 12/17/14 18:05, Matthew Gilliard wrote:
> > Hello Pasquale
> >
> >   The problem is that you are trying to add a new if/else branch
> into
> > a method which is already ~250 lines long, and has the highest
> > complexity of any function in the nova codebase. I assume that you
> > didn't contribute much to that complexity, but we've recently
> added a
> > limit to stop it getting any worse. So, regarding your 4
> suggestions:
> >
> > 1/ As I understand it, v2.1 should be the same as v2 at the
> > moment, so they need to be kept the same
> > 2/ You can't ignore it - it will fail CI
> > 3/ No thank you. This limit should only ever be lowered :-)
> > 4/ This is 'the right way'. Your suggestion for the refactor
> does
> > sound good.
> >
> > I suggest a single patch that refactors and lowers the limit in
> > tox.ini.  Once you've done that then you can add the new
> parameter in
> > a following patch. Please feel free to add me to any patches you
> > create.
> >
> > Matthew
> >
> >
> >
> > On Wed, Dec 17, 2014 at 4:18 PM, Pasquale Porreca
> >  > wrote:
> >> Hello
> >>
> >> I am working on an API extension that adds a parameter on
> create server
> >> call; to implement the v2 API I added few lines of code to
> >> nova/api/openstack/compute/servers.py
> >>
> >> In particular just adding something like
> >>
> >> new_param = None
> >> if self.ext_mgr.is_loaded('os-new-param'):
> >> new_param = server_dict.get('new_param')
> >>
> >> leads to a pep8 fail with message 'Controller.create' is too
> complex (47)
> >> (Note that in tox.ini the max complexity is fixed to 47 and
> there is a note
> >> specifying 46 is the max complexity present at the moment).
> >>
> >> It is quite easy to make this test pass creating a new method
> just to
> >> execute these lines of code, anyway all other extensions are
> handled in that
> >> way and one of the most important style rules states to be
> consistent with
> >> surrounding code, so I don't think a separate function is the
> way to go
> >> (unless it implies a change in how all other extensions are
> handled too).
> >>
> >> My thoughts on this situation:
> >>
> >> 1) New extensions should not consider v2 but only v2.1, so that
> file should
> >> not be touched
> >> 2) Ignore this error and go on: if and when the extension will
> be merged the
> >> complexity in tox.ini will be changed too
> >> 3) The complexity in tox.ini should be raised to allow new v2
> extensions
> >> 4) The code of that module should be refactored to lower the
> complexity
> >> (i.e. move the load of each extension in a separate function)
> >>
> >> I would like to know if any of my points is close to the correct
> solution.
> >>
> >> --
> >> Pasquale Porreca
> >>
> >> DEK Technologies
> >> Via dei Castelli Romani, 22
> >> 00040 Pomezia (Roma)
> >>
> >> Mobile +39 3394823805
> >> Skype paskporr
> >>
> >>
> >> ___
> >> OpenStack-dev mailing list
> >> OpenStack-dev@lists.openstack.org
> 
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> > ___
> > OpenStack-dev mailing list
> > OpenStack-dev@lists.openstack.org
> 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Pasquale Porreca
>
> DEK Technologies
> Via dei Castelli Romani, 22
> 00040 Pomezia (Roma)
>
> Mobile +39 3394823805
> Skype paskporr
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.o

[openstack-dev] [nova][resend with correct subject prefix] ask for usage of quota reserve

2014-12-18 Thread Eli Qiao(Li Yong Qiao)
hi all,
can anyone tell me what will happen if we call quotas.reserve() but never
call quotas.commit() or quotas.rollback()?

for example:

 1. when doing a resize, we call quotas.reserve() to reserve a delta
quota (new_flavor - old_flavor)
 2. for some reason, nova-compute crashes, with no chance to call
quotas.commit() or quotas.rollback() (called by finish_resize in
nova/compute/manager.py)
 3. the next time the nova-compute server restarts, is the delta quota still
reserved, or do we need any other operation on the quotas?

Thanks in advance
-Eli.

ps: this is related to patch: Handle RESIZE_PREP status when nova
compute do init_instance (https://review.openstack.org/#/c/132827/)


-- 
Thanks Eli Qiao(qia...@cn.ibm.com)

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help setting up CI

2014-12-18 Thread Eduard Matei
Hi,
Seems i can't install using puppet on the jenkins master using
install_master.sh from
https://raw.githubusercontent.com/rasselin/os-ext-testing/master/puppet/install_master.sh
because it's running Ubuntu 11.10 and it appears unsupported.
I managed to install puppet manually on the master, but everything else fails,
so I'm trying to manually install zuul, nodepool and jenkins job
builder, and see where I end up.

The slave looks complete; I got some errors running install_slave, so I ran
parts of the script manually, changing some params, and it appears installed,
but there's no way to test it without the master.

Any ideas welcome.

Thanks,

Eduard

On Wed, Dec 17, 2014 at 3:37 AM, Asselin, Ramy  wrote:

>  Manually running the script requires a few environment settings. Take a
> look at the README here:
>
> https://github.com/openstack-infra/devstack-gate
>
>
>
> Regarding cinder, I’m using this repo to run our cinder jobs (fork from
> jaypipes).
>
> https://github.com/rasselin/os-ext-testing
>
>
>
> Note that this solution doesn’t use the Jenkins gerrit trigger pluggin,
> but zuul.
>
>
>
> There’s a sample job for cinder here. It’s in Jenkins Job Builder format.
>
>
> https://github.com/rasselin/os-ext-testing-data/blob/master/etc/jenkins_jobs/config/dsvm-cinder-driver.yaml.sample
>
>
>
> You can ask more questions in IRC freenode #openstack-cinder. (irc#
> asselin)
>
>
>
> Ramy
>
>
>
> *From:* Eduard Matei [mailto:eduard.ma...@cloudfounders.com]
> *Sent:* Tuesday, December 16, 2014 12:41 AM
> *To:* Bailey, Darragh
> *Cc:* OpenStack Development Mailing List (not for usage questions);
> OpenStack
> *Subject:* Re: [openstack-dev] [OpenStack-Infra] [ThirdPartyCI] Need help
> setting up CI
>
>
>
> Hi,
>
>
>
> Can someone point me to some working documentation on how to setup third
> party CI? (joinfu's instructions don't seem to work, and manually running
> devstack-gate scripts fails:
>
> Running gate_hook
>
> Job timeout set to: 163 minutes
>
> timeout: failed to run command 
> ‘/opt/stack/new/devstack-gate/devstack-vm-gate.sh’: No such file or directory
>
> ERROR: the main setup script run by this job failed - exit code: 127
>
> please look at the relevant log files to determine the root cause
>
> Cleaning up host
>
> ... this takes 3 - 4 minutes (logs at logs/devstack-gate-cleanup-host.txt.gz)
>
>  Build step 'Execute shell' marked build as failure.
>
>
>
> I have a working Jenkins slave with devstack and our internal libraries, i
> have Gerrit Trigger Plugin working and triggering on patches created, i
> just need the actual job contents so that it can get to comment with the
> test results.
>
>
>
> Thanks,
>
>
>
> Eduard
>
>
>
> On Tue, Dec 9, 2014 at 1:59 PM, Eduard Matei <
> eduard.ma...@cloudfounders.com> wrote:
>
>  Hi Darragh, thanks for your input
>
>
>
> I double checked the job settings and fixed it:
>
> - build triggers is set to Gerrit event
>
> - Gerrit trigger server is "Gerrit" (configured from Gerrit Trigger Plugin
> and tested separately)
>
> - Trigger on: Patchset Created
>
> - Gerrit Project: Type: Path, Pattern openstack-dev/sandbox, Branches:
> Type: Path, Pattern: ** (was Type Plain on both)
>
> Now the job is triggered by commit on openstack-dev/sandbox :)
>
>
>
> Regarding the Query and Trigger Gerrit Patches, i found my patch using
> query: status:open project:openstack-dev/sandbox change:139585 and i can
> trigger it manually and it executes the job.
>
>
>
> But i still have the problem: what should the job do? It doesn't actually
> do anything, it doesn't run tests or comment on the patch.
>
> Do you have an example of job?
>
>
>
> Thanks,
>
> Eduard
>
>
>
> On Tue, Dec 9, 2014 at 1:13 PM, Bailey, Darragh  wrote:
>
> Hi Eduard,
>
>
> I would check the trigger settings in the job, particularly which "type"
> of pattern matching is being used for the branches. I've found it tends to be
> the spot that catches most people out when configuring jobs with the
> Gerrit Trigger plugin. If you're looking to trigger against all branches
> then you would want "Type: Path" and "Pattern: **" appearing in the UI.
>
> If you have sufficient access, the 'Query and Trigger Gerrit
> Patches' page accessible from the main view will make it easier to
> confirm that your Jenkins instance can actually see changes in gerrit
> for the given project (which should mean that it can see the
> corresponding events as well). You can also use the same page to re-trigger
> for PatchsetCreated events to see if you've set the patterns on the job
> correctly.
>
> Regards,
> Darragh Bailey
>
> "Nothing is foolproof to a sufficiently talented fool" - Unknown
>
> On 08/12/14 14:33, Eduard Matei wrote:
> > Resending this to the dev ML as it seems I get a quicker response :)
> >
> > I created a job in Jenkins, added as Build Trigger: "Gerrit Event:
> > Patchset Created", chose as server the configured Gerrit server that
> > was previously tested, then added the project openstack-dev/sandbox
> > and saved.
> > I made a change on dev san

Re: [openstack-dev] Topic: Reschedule Router to a different agent with multiple external networks.

2014-12-18 Thread Oleg Bondarev
Hi Swaminathan Vasudevan,

please check the following docstring
of L3_NAT_dbonly_mixin._check_router_needs_rescheduling:


    def _check_router_needs_rescheduling(self, context, router_id, gw_info):
        """Checks whether router's l3 agent can handle the given network

        When external_network_bridge is set, each L3 agent can be associated
        with at most one external network. If router's new external gateway
        is on other network then the router needs to be rescheduled to the
        proper l3 agent.
        If external_network_bridge is not set then the agent
        can support multiple external networks and rescheduling is not needed
        """

So there can still be agents which can handle only one external network;
for such agents rescheduling is needed.
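
For reference, the option in question lives in the l3 agent config. A
sketch of the two modes (br-ex is just the conventional default bridge
name, nothing specific to this discussion):

    # /etc/neutron/l3_agent.ini
    # Set: the agent is tied to one external network, so a router whose
    # gateway moves to a different external network must be rescheduled.
    external_network_bridge = br-ex

    # Unset/empty: the agent can serve multiple external networks and
    # no rescheduling is needed when the gateway network changes.
    external_network_bridge =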

Thanks,
Oleg

On Wed, Dec 17, 2014 at 8:56 PM, Vasudevan, Swaminathan (PNB Roseville) <
swaminathan.vasude...@hp.com> wrote:
>
>  Hi Folks,
>
>
>
> Reschedule router if new external gateway is on other network
>
> An L3 agent may be associated with just one external network.
>
> If router's new external gateway is on other network then the router
>
> needs to be rescheduled to the proper l3 agent
>
>
>
> This patch was introduced when there was no support for an L3 agent
> handling multiple external networks.
>
>
>
> Do we think we should still retain this original behavior now that a
> single L3 agent can support multiple external networks?
>
>
>
> Can anyone comment on this?
>
>
>
> Thanks
>
>
>
> Swaminathan Vasudevan
>
> Systems Software Engineer (TC)
>
>
>
>
>
> HP Networking
>
> Hewlett-Packard
>
> 8000 Foothills Blvd
>
> M/S 5541
>
> Roseville, CA - 95747
>
> tel: 916.785.0937
>
> fax: 916.785.1815
>
> email: swaminathan.vasude...@hp.com
>
>
>
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Bug Squashing Day

2014-12-18 Thread Gregory Haynes
Excerpts from Gregory Haynes's message of 2014-12-16 19:47:54 +:
> > > On Wed, Dec 10, 2014 at 10:36 PM, Gregory Haynes 
> > > wrote:
> > >
> > >> A couple weeks ago we discussed having a bug squash day. AFAICT we all
> > >> forgot, and we still have a huge bug backlog. I'd like to propose we
> > >> make next Wed. (12/17, in whatever 24-hour window is Wed. in your time zone)
> > >> a bug squashing day. Hopefully we can add this as an item to our weekly
> > >> meeting on Tues. to help remind everyone the day before.
> 
> Friendly Reminder that tomorrow (or today for some time zones) is our
> bug squash day! I hope to see you all in IRC squashing some of our
> (least) favorite bugs.
> 
> Random Factoid: We currently have 299 open bugs.

Thanks to everyone who participated in our bug squash day! We are now
down to 264 open bugs (down from 299). There were also a fair number of
bugs filed today as part of our (anti) bug squashing efforts, bringing
our total bugs operated on today to >50.

Thanks, again!

Cheers,
Greg

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev