[openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-27 Thread Silvan Kaiser
Hello!
Do dependencies required only in some contexts belong into requirements.txt?

Yesterday we had a short discussion on #openstack-nova regarding how to
handle optional requirements. This was triggered by our quobyte nova driver
(https://review.openstack.org/#/c/110722/18), which requires xattr; we
therefore added xattr to requirements.txt (it is provided by the
requirements project).

Points from the discussion:
- If we add this we will end up adding every requirement of every component
---> the file becomes too big.
- Remove this requirement and keep optional entries out of requirements.txt;
a 'deployer' has to know which dependencies the components he wants to use
have
---> Usually he does not know, and installation becomes more error-prone
- Other (in-between) ideas? (one possible sketch below)
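
One in-between idea I could imagine (just a sketch; the package names and
version pin are purely illustrative, and I don't know how well this would
play with the global requirements sync) is plain setuptools extras, so the
dependency is declared but only pulled in on demand:

    # setup.py (illustrative sketch only)
    from setuptools import setup

    setup(
        name='nova',
        # ...
        extras_require={
            # only needed when the Quobyte driver is actually used
            'quobyte': ['xattr>=0.4'],
        },
    )

A deployer would then opt in explicitly with something like
"pip install nova[quobyte]".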

Please note that this has some urgency: the change set referenced above has
been in review for months and I'm trying to react to comments ASAP, but the
deadline is approaching (next week), and if I have to make bigger changes I'd
like to know as soon as possible...

Best regards
Silvan Kaiser

-- 

--
*Quobyte* GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-27 Thread Vladimir Kuklin
Dmitry

This is an interesting topic. As per our earlier discussions, I suggest that
in the future we move to separate serializers for each granule of our
deployment, so that we do not need to drag a lot of senseless data into the
particular task being executed. Say we have a fencing task, which has a
serializer module written in Python. This module is imported by Nailgun, and
what it actually does is execute specific Nailgun core methods that access
the database or other sources of information and retrieve the data in the
way this task wants it, instead of adjusting the task to the
one-size-fits-all 'astute.yaml'.
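
A very rough sketch of what I mean (all names here are hypothetical; this is
not existing Nailgun code):

    # fencing task serializer, imported by Nailgun (illustrative only)
    class FencingTaskSerializer(object):
        """Builds the data for one deployment granule (the fencing task)."""

        def __init__(self, db_session):
            # db_session is whatever gives access to Nailgun's DB layer
            self.db = db_session

        def serialize(self, cluster_id, node_id):
            # hypothetical helper: fetch only the attributes this task needs
            # instead of dumping the whole astute.yaml onto the node
            fencing_attrs = self.db.get_cluster_attribute(cluster_id, 'fencing')
            return {'fencing': fencing_attrs or {}, 'node_uid': node_id}

That way each task declares and fetches exactly the data it needs.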

On Thu, Jan 22, 2015 at 8:59 PM, Evgeniy L  wrote:

> Hi Dmitry,
>
> The problem with merging is that usually it's not clear how the system
> performs the merge. For example, you have the hash
> {'list': [{'k': 1}, {'k': 2}, {'k': 3}]} and I want {'list': [{'k': 4}]}
> to be merged in -- what should the system do? Replace the list or append
> {'k': 4}? Both cases should be covered.
>
> Most users don't remember all of the keys; usually a user takes the
> defaults and changes some values in place, in which case we would have to
> ask the user to remove the rest of the fields.
>
> The only solution I see is to separate the data from the graph and not
> send this information to the user.
>
> Thanks,
>
> On Thu, Jan 22, 2015 at 5:18 PM, Dmitriy Shulyak 
> wrote:
>
>> Hi guys,
>>
>> I want to discuss the way we work with deployment configuration that was
>> redefined for a cluster.
>>
>> In case it was redefined via the API, we use that information instead of
>> the generated one. With one exception: we will generate new repo sources
>> and the path to the manifest if we are using update (the patching feature
>> in 6.0).
>>
>> Starting from 6.1 this configuration will be populated by tasks, which are
>> part of the granular deployment workflow, and replacing the configuration
>> will make it impossible to use the partial graph execution API.
>> Of course it is possible to hack around and make it work, but IMO we need
>> a generic solution.
>>
>> Next problem: if a user uploads replaced information, changes to cluster
>> attributes or networks won't be reflected in the deployment anymore, and
>> this constantly leads to problems for deployment engineers who are using
>> Fuel.
>>
>> What if a user wants to add data, but keep the generated networks,
>> attributes, etc.?
>> - it may be required as part of a manual plugin installation (ha_fencing
>> requires a lot of configuration to be added into astute.yaml),
>> - or you need to substitute networking data, e.g. add specific parameters
>> for Linux bridges
>>
>> So given all this, I think that we should not substitute all of the
>> information, but only the part that is present in the redefined info; any
>> additional parameters will simply be merged into the generated info.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Yours Faithfully,
Vladimir Kuklin,
Fuel Library Tech Lead,
Mirantis, Inc.
+7 (495) 640-49-04
+7 (926) 702-39-68
Skype kuklinvv
45bk3, Vorontsovskaya Str.
Moscow, Russia,
www.mirantis.com 
www.mirantis.ru
vkuk...@mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][hot]

2015-01-27 Thread Dmitry
I have another question: is it possible to get the stack name in the HOT
template?
E.g.
params:
 $stack_name: {get_global_variable: $stack.name}

On Tue, Jan 27, 2015 at 3:53 AM, Qiming Teng 
wrote:

> On Mon, Jan 26, 2015 at 07:44:25PM +0200, Dmitry wrote:
> > thanks, exactly what I was looking for:
> > curl http://169.254.169.254/1.0/meta-data/instance-id
>
> or, /var/lib/cloud/data/instance-id, if cloud-init is there.
>
> Regards,
>   Qiming
>
> > On Mon, Jan 26, 2015 at 7:31 PM, Zane Bitter  wrote:
> >
> > > On 25/01/15 10:41, Dmitry wrote:
> > >
> > >> Hello,
> > >> I need to receive instance id as part of the instance installation
> script.
> > >> Something like:
> > >> params:
> > >>$current_id: {get_param: $this.id }
> > >>
> > >
> > > I have no idea what this is supposed to mean, sorry.
> > >
> > >  Is it possible?
> > >>
> > >
> > > The get_resource function will return the server UUID for a server
> > > resource, but you can't use it from within that resource itself (it
> would
> > > be a circular reference).
> > >
> > > The UUID of a server is provided to the server through the Nova
> metadata;
> > > you should retrieve it from there in your user_data script.
> > >
> > > cheers,
> > > Zane.
> > >
> > >
> > >
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
>
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] [UI] Deploy Changes dialog redesign

2015-01-27 Thread Nikolay Markov
Guys,

I'm here now, and I don't agree that we need to remove the "changes"
attribute. On the contrary, I think this is the only attribute which should
be looked at in the UI and backend, and all these "pending_addition" and
"pending_someotherstuff" flags are obsolete and needless.

Just assume that we'll soon have some plugin, or just some tech, which
allows us to modify settings in the UI after the environment is deployed and
somehow apply them onto nodes (for example, we're planning such a thing for
VMware). In this case there is no "pending_addition" or other such flag;
these are just changes to apply on a node somehow, maybe by executing some
script on it. And the same goes for a lot of cases with plugins which make
some services on target nodes configurable.

"Pending_addition" flag, on the other hand, is useless, because all
changes we should apply on node are already listed in "changes"
attribute. We can even probably add "provisioning" and "deployment"
into these pending changes do avoid logic duplication. But still, as
for me, this is the only working mechanism we should consider and
which will really help us to cver complex cases in the future.

On Tue, Jan 27, 2015 at 10:52 AM, Mike Scherbakov
 wrote:
> +1, I do not think it's usable as it is now. Let's think, though, about
> whether we can come up with a better idea of how to show what has been
> changed (or even the opposite: what was not touched - and so might bring a
> surprise later). We might want to think about it after the wizard-like UI
> is implemented.
>
> On Mon, Jan 26, 2015 at 8:26 PM, Igor Kalnitsky 
> wrote:
>>
>> +1 for removing attribute.
>>
>> @Evgeniy, I'm not sure that this attribute really shows all changes
>> that's going to be done.
>>
>> On Mon, Jan 26, 2015 at 7:11 PM, Evgeniy L  wrote:
>> > To be more specific, +1 for removing this information from UI, not from
>> > backend.
>> >
>> > On Mon, Jan 26, 2015 at 7:46 PM, Evgeniy L  wrote:
>> >>
>> >> Hi,
>> >>
>> >> I agree that this information is useless, but it's not really clear
>> >> what
>> >> you are going
>> >> to show instead, will you completely remove the information about nodes
>> >> for deployment?
>> >> I think the list of nodes for deployment (without detailed list of
>> >> changes) can be useful
>> >> for the user.
>> >>
>> >> Thanks,
>> >>
>> >> On Mon, Jan 26, 2015 at 7:23 PM, Vitaly Kramskikh
>> >>  wrote:
>> >>>
>> >>> +1 for removing "changes" attribute. It's useless now. If there are no
>> >>> plans to add something else there, let's remove it.
>> >>>
>> >>> 2015-01-26 11:39 GMT+03:00 Julia Aranovich :
>> 
>>  Hi All,
>> 
>>  Since we changed the Deploy Changes pop-up and added processing of role
>>  limits and restrictions, I would like to raise the question of its
>>  subsequent refactoring.
>> 
>>  In particular, I mean 'changes' attribute of cluster model. It's
>>  displayed in Deploy Changes dialog in the following format:
>> 
>>  Changed disks configuration on the following nodes:
>> 
>>  
>> 
>>  Changed interfaces configuration on the following nodes:
>> 
>>  
>> 
>>  Changed network settings
>>  Changed OpenStack settings
>> 
>>  This list looks absolutely useless.
>> 
>>  It doesn't make any sense to display lists of new, not yet deployed
>>  nodes with changed disks/interfaces; it's obvious, I think, that new
>>  nodes' attributes await deployment. At the same time the user isn't able
>>  to change disks/interfaces on deployed nodes (at least in the UI). So
>>  such node name lists are definitely redundant.
>>  Networks and settings are also locked after deployment finished.
>> 
>> 
>>  I'm inclined to get rid of the cluster model's 'changes' attribute
>>  altogether.
>> 
>>  It is important for me to know your opinion, to make a final
>>  decision.
>>  Please feel free and share your ideas and concerns if any.
>> 
>> 
>>  Regards,
>>  Julia
>> 
>>  --
>>  Kind Regards,
>>  Julia Aranovich,
>>  Software Engineer,
>>  Mirantis, Inc
>>  +7 (905) 388-82-61 (cell)
>>  Skype: juliakirnosova
>>  www.mirantis.ru
>>  jaranov...@mirantis.com
>> 
>> 
>> 
>>  __
>>  OpenStack Development Mailing List (not for usage questions)
>>  Unsubscribe:
>>  openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Vitaly Kramskikh,
>> >>> Fuel UI Tech Lead,
>> >>> Mirantis, Inc.
>> >>>
>> >>>
>> >>>
>> >>> __
>> >>> OpenStack Development Mailing List (not for usage questions)
>> >>> Unsubscribe:
>> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/op

Re: [openstack-dev] [keystone] Flush expired tokens automatically ?

2015-01-27 Thread Thierry Carrez
Updating subject line to attract keystone devs

Daniel Comnea wrote:
> +100
> 
> Dani
> 
> On Mon, Jan 26, 2015 at 1:10 AM, Tim Bell  > wrote:
> 
> This is often mentioned as one of those items which catches every
> OpenStack cloud operator at some time. It’s not clear to me that
> there could not be a scheduled job built into the system with a
> default frequency (configurable, ideally).
> 
> 
> If we are all configuring this as a cron job, is there a reason that
> it could not be built into the code?
> 
> 
> Tim
> 
> 
> *From:* Mike Smith [mailto:mism...@overstock.com]
> *Sent:* 24 January 2015 18:08
> *To:* Daniel Comnea
> *Cc:* OpenStack Development Mailing List (not for usage questions);
> openstack-operat...@lists.openstack.org
> 
> *Subject:* Re: [Openstack-operators]
> [openstack-dev][openstack-operators]flush expired tokens and moves
> deleted instance
> 
> 
> It is still mentioned in the Juno installation docs: 
> 
> 
> By default, the Identity service stores expired tokens in the database
> indefinitely. The accumulation of expired tokens considerably increases
> the database size and might degrade service performance, particularly in
> environments with limited resources. We recommend that you use cron to
> configure a periodic task that purges expired tokens hourly:
> 
> # (crontab -l -u keystone 2>&1 | grep -q token_flush) || \
>   echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' \
>   >> /var/spool/cron/keystone
> 
> 
> 
> Mike Smith
> Principal Engineer, Website Systems
> Overstock.com 
> 
> 
> 
> 
> On Jan 24, 2015, at 10:03 AM, Daniel Comnea  wrote:
> 
> 
> Hi all,
> 
> 
> 
> I just bumped into Sebastien's blog where he suggested a cron
> job should run in production to tidy up expired tokens - see
> blog[1]
> 
> Could you please remind me if this is still required in
> IceHouse/ Juno? (i kind of remember i've seen some work being
> done in this direction but i can't find the emails)
> 
> 
> 
> Thanks,
> Dani
> 
> [1]
> 
> http://www.sebastien-han.fr/blog/2014/08/18/a-must-have-cron-job-on-your-openstack-cloud/
> 
> 
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> 
> 
> 
> 
> CONFIDENTIALITY NOTICE: This message is intended only for the use
> and review of the individual or entity to which it is addressed and
> may contain information that is privileged and confidential. If the
> reader of this message is not the intended recipient, or the
> employee or agent responsible for delivering the message solely to
> the intended recipient, you are hereby notified that any
> dissemination, distribution or copying of this communication is
> strictly prohibited. If you have received this communication in
> error, please notify sender immediately by telephone or return
> email. Thank you.
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Agent] Moving Fuel Agent to a separate repo

2015-01-27 Thread Vladimir Kozhukalov
Mike,

You are absolutely right about our current priorities for 6.1 and this
thread is not about immediate action.

But just to be fair, moving Fuel Client to a separate repo was a priori a
much more complicated procedure, because it is tested together with Nailgun.
For Fuel Agent we just need to create a separate repo (30 minutes) and make
changes to fuel-main (30 minutes). There is nothing to worry about because
it is completely independent. And taking into account our recent activities
around integrating it with Ironic, I would say there is no reason to
postpone this until 7.0.

Alexander,

As for fuel_agent_ci, it is currently not used on a regular basis, so I
think it should be moved to the same separate repo; then one day in the
future maybe we will use it for functional testing.



Vladimir Kozhukalov

On Tue, Jan 27, 2015 at 1:39 AM, Roman Prykhodchenko  wrote:

> I think the idea is not to work on it right at this moment but to accept
> the general idea of fuel-agent being moved somewhere it can live on its
> own. I'm not sure there is one single approach for separating a component
> from the common repository, because each of them has its own use cases and
> requirements, so for every single one of them the same job we've done for
> Fuel Client needs to be repeated.
>
> That said, I'd like to note that only by having a clear specification of
> how work-, data- and test-flows have to change after the component is put
> into its own repository will it be possible to judge the time frame and
> the number of resources required to accomplish this task.
>
>
> - romcheg
>
> > 26 січ. 2015 о 20:39 Mike Scherbakov 
> написав(ла):
> >
> > -1 to make changes now
> > +1 to Alexandra
> >
> > Let's finish fuel-client first. Also, it is about prioritization. We
> have many things to be resolved in 6.1 (e.g. package the rest of the stuff
> which not yet packaged into RPM/DEB; split repos openstack/fuel/linux,
> etc.), and fuel agent in particular has pretty low priority to me in 6.1.
> >
> > In examples I have provided, which are essential for 6.1, we are
> experiencing lack of hands. Let's see if we can focus our work on those
> items and many other essential things, and come to this question later.
> >
> > On Mon, Jan 26, 2015 at 10:29 PM, Aleksandra Fedorova <
> afedor...@mirantis.com> wrote:
> > It seems that we have general agreement about the idea, but to make it
> happen we need much more detailed proposal.
> >
> > Even with python-fuelclient it is not quite clear right now, which
> version of nailgun should be used to test it, and the opposite: which
> version of fuelclient we have to use in iso builds. We also don't handle it
> in the build system very well right now, as we use git hashes, and not
> fixed versions, or packages.
> >
> > Maybe we should complete the python-fuelclient transformation first and
> see how it is going to work for us?
> >
> > On Jan 26, 2015 8:59 PM, "Roman Prykhodchenko"  wrote:
> > Vladimir,
> >
> > As a fuel-separatist I give this initiative a big +1 because of the
> following advantages I can see:
> >
> >  - Git is designed for keeping smaller single-component repos; keeping
> everything in one repo is a discouraged pattern
> >  - Having a separate -core group that will only contain active core
> reviewers for the fuel-agent project, so getting core reviews will be easier.
> >  - It makes possible to re-use some of the existing jobs in OpenStack CI
> >  - Making independent releases becomes possible
> >
> > AFAIK fuel-agent is positioned as an independent provisioning tool which
> will not be exclusively used by Fuel. There is a work in progress to
> integrate it with Ironic. Integrating it to any other provisioning system
> should also be possible then. From that perspective putting it into its own
> repo also brings the following advantages:
> >
> >  - Connecting 3rd party CI will be possible
> >  - Getting involved for the new folks will be much easier
> >
> >
> > - romcheg
> >
> >
> > > 26 січ. 2015 о 17:42 Vladimir Kozhukalov 
> написав(ла):
> > >
> > > Fuelers,
> > >
> > > As most of you might know we have a bunch of projects inside fuel-web
> repo which are not directly related to Fuel Web application. Some of them
> are tested together and it seemed we could end up with a set of
> incompatibility issues if we separated them and stopped tracking their
> versions on the git level (instead of release level).
> > >
> > > Recent activities about separating Fuel Client from Nailgun (api) make
> me think we are mature enough to move all other unrelated projects out of
> the fuel-web repo and bring them together no earlier than at the stage of
> system/functional testing.
> > >
> > > Next step would be moving out Fuel Agent project. The reason is that
> it is independent and potentially could be used even out of Fuel because
> its data parsing mechanism is implemented so as to be agnostic to the data
> format. Some people could be potentially interested in using it
> independe

[openstack-dev] Cross-Project meeting, Tue January 27th, 21:00 UTC

2015-01-27 Thread Thierry Carrez
Dear PTLs, cross-project liaisons and anyone else interested,

We'll have a cross-project meeting today at 21:00 UTC, with the
following agenda:

* Cross-project DevRef akin to Nova's ([1]) [2] (@sigmavirus24)
* Avoiding private symbols in Oslo libraries [3] (dhellmann)
* Discuss the importance of getting cross-project reviews of guidelines
(e.g. [4]) and how to best go about getting those reviews for the API
working group [5] (etoews)
* Progress (or lack thereof) on providing default config files
* openstack-specs discussion
  * CLI Sorting Argument Guidelines [6] -- ready to move to TC final
approval ?
  * Add TRACE definition to log guidelines [7]
* Open discussion & announcements

[1]
http://docs.openstack.org/developer/nova/devref/development.environment.html
[2]
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054837.html
[3]
http://lists.openstack.org/pipermail/openstack-dev/2015-January/054810.html
[4] https://review.openstack.org/#/c/145579/
[5] https://wiki.openstack.org/wiki/API_Working_Group
[6] https://review.openstack.org/#/c/145544/
[7] https://review.openstack.org/#/c/145245/

See you there!

For more details on this meeting, please see:
https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Murano] SQLite support - drop or not?

2015-01-27 Thread Andrew Pashkin
On 26.01.2015 18:34, Ruslan Kamaldinov wrote:
> On Mon, Jan 26, 2015 at 6:12 PM, Andrew Pashkin  wrote:
>> On 26.01.2015 18:05, Ruslan Kamaldinov wrote:
>>
>> I think it's still important to perform migration-specific checks. We want
>> to make sure that the DB is in the expected state after each specific
>> migration.
>>
>> Why?
> 
> 1. It's not just the schema we care about. It's the effect of a
> particular DB migration script on the data stored in the DB. We need to
> make sure that data is not corrupted or lost in any way.
> 2. Some migrations add or remove indexes and constraints; we might
> want to test that.
Ok, I got it =)


-- 
With kind regards, Andrew Pashkin.
cell phone - +7 (985) 898 57 59
Skype - waves_in_fluids
e-mail - apash...@mirantis.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] fuel master monitoring

2015-01-27 Thread Fabrizio Soppelsa
The proposed monitoring options make sense, since the Fuel master has
nothing yet, but I would like to resurrect this thread to see if we can
discuss some strategies for avoiding the /var/log fill-up and the consequent
corruption of the Docker containers.

Right now a customer facing this corruption can recover the master, as
described in our documentation [1]. Of course, if we could avoid the problem
altogether, it would be even better. So, do we have a strategy in progress
for this?

As a first attempt, I filed a blueprint proposing to isolate /var/log [2].
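
Just to make it concrete: if /var/log does become its own mount point as
proposed in [2], even a tiny Monit rule along these lines (purely
illustrative, thresholds made up) would already give us early warning:

    check filesystem var_log with path /var/log
        if space usage > 80% then alert

I'm not advocating Monit over Sensu here, just illustrating how small the
check itself is.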

Thanks,
Fabrizio.


[1] http://docs.mirantis.com/openstack/fuel/fuel-5.1/operations.html#fuel-master-and-docker-disk-space-troubleshooting
[2] https://blueprints.launchpad.net/fuel/+spec/isolate-var-log-on-master


> On 21 Nov 2014, at 12:01, Matthew Mosesohn  wrote:
> 
> I'm okay with Sensu or Monit, just as long as the results of
> monitoring can be represented in a web UI and has a configurable
> option for email alerting. Tight integration with Fuel Web is a
> nice-to-have (via AMQP perhaps), but anything that can solve our
> out-of-disk scenario is ideal. I did my best to tune our logging and
> logs rotation, but monitoring is the most sensible approach.
> 
> -Matthew
> 
> On Fri, Nov 21, 2014 at 12:21 PM, Przemyslaw Kaminski
>  wrote:
>> BTW, there's also Monit
>> 
>> http://mmonit.com/monit/
>> 
>> (though it's in C) that looks quite nice. Some config examples:
>> 
>> http://omgitsmgp.com/2013/09/07/a-monit-primer/
>> 
>> P.
>> 
>> On 11/20/2014 09:13 PM, Dmitriy Shulyak wrote:
>> 
>> Guys, maybe we can use existing software, for example Sensu [1]?
>> Maybe i am wrong, but i dont like the idea to start writing our "small"
>> monitoring applications..
>> Also something well designed and extendable can be reused for statistic
>> collector
>> 
>> 
>> 1. https://github.com/sensu
>> 
>> On Wed, Nov 12, 2014 at 12:47 PM, Tomasz Napierala 
>> wrote:
>>> 
>>> 
>>> On 06 Nov 2014, at 12:20, Przemyslaw Kaminski 
>>> wrote:
>>> 
 I didn't mean a robust monitoring system, just something simpler.
 Notifications is a good idea for FuelWeb.
>>> 
>>> I’m all for that, but if we add it, we need to document ways to clean up
>>> space.
>>> We could also add some kind of simple job to remove rotated logs, obsolete
>>> snapshots, etc., but this is out of scope for 6.0 I guess.
>>> 
>>> Regards,
>>> --
>>> Tomasz 'Zen' Napierala
>>> Sr. OpenStack Engineer
>>> tnapier...@mirantis.com
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
>> 
>> 
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Getting rid of kickstart/preseed for all NEW releases

2015-01-27 Thread Vladimir Kozhukalov
Guys,

First, we are not talking about deliberate disabling preseed based approach
just because we so crazy. The question is "What is the best way to achieve
our 6.1 goals?" We definitely need to be able to install two versions of
Ubuntu 12.04 and 14.04. Those versions have different sets of packages (for
example ntp related ones) and we install some of those packages during
provisioning (we point out which packages we need with their versions). To
make this working with preseed based approach we need either to insert some
"IF release==6.1" conditional lines into preseed (not very beautiful, isn't
it?) or to create different Distros and Profiles for different releases.
Second is not a problem for Cobbler but it is for nailgun/astute because we
do not deal with that stuff and it looks that we cannot implement this
easily.
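
Just to illustrate what those conditionals would look like (purely
illustrative; the package lists are made up, and Cobbler renders these
files as Cheetah templates):

    #if $release == "6.1"
    d-i pkgsel/include string ntp
    #else
    d-i pkgsel/include string ntp ntpdate
    #end if

Multiply this by every release-specific difference and the preseed quickly
becomes unreadable.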

IMO, the only options we have are to insert "IFs" into the preseed (I would
say that is no more reliable than IBP) or to drop the preseed approach for
ONLY NEW UPCOMING releases. You can call that "crazy", but for me, a set of
"IFs" together with pmanager.py, which are absolutely difficult to maintain,
is what is crazy.



Vladimir Kozhukalov

On Tue, Jan 27, 2015 at 3:03 AM, Andrew Woodward  wrote:

> On Mon, Jan 26, 2015 at 10:47 AM, Sergii Golovatiuk
>  wrote:
> > Until we are sure IBP solves operation phase where we need to deliver
> > updated packages so client will be able to provision new machines with
> these
> > fixed packages, I would leave backward compatibility with normal
> provision.
> > ... Just in case.
>
> doesn't running 'apt-get upgrade' or 'yum update' after laying out the
> FS image resolve the gap until we can rebuild the images on the fly?
> >
> >
> >
> > --
> > Best regards,
> > Sergii Golovatiuk,
> > Skype #golserge
> > IRC #holser
> >
> > On Mon, Jan 26, 2015 at 4:56 PM, Vladimir Kozhukalov
> >  wrote:
> >>
> >> My suggestion is to make IBP the only option available for all upcoming
> >> OpenStack releases which are defined in openstack.yaml. It is to be
> possible
> >> to install OS using kickstart for all currently available OpenStack
> >> releases.
> >>
> >> Vladimir Kozhukalov
> >>
> >> On Mon, Jan 26, 2015 at 6:22 PM, Igor Kalnitsky <
> ikalnit...@mirantis.com>
> >> wrote:
> >>>
> >>> Just want to be sure I understand you correctly: do you propose to
> >>> FORBID kickstart/preseed installation way in upcoming release at all?
> >>>
> >>> On Mon, Jan 26, 2015 at 3:59 PM, Vladimir Kozhukalov
> >>>  wrote:
> >>> > Subject is changed.
> >>> >
> >>> > Vladimir Kozhukalov
> >>> >
> >>> > On Mon, Jan 26, 2015 at 4:55 PM, Vladimir Kozhukalov
> >>> >  wrote:
> >>> >>
> >>> >> Dear Fuelers,
> >>> >>
> >>> >> As you might know we need it to be possible to install several
> >>> >> versions of
> >>> >> a particular OS (Ubuntu and Centos) by 6.1  As far as having
> different
> >>> >> OS
> >>> >> versions also means having different sets of packages and some of
> the
> >>> >> packages are installed and configured during provisioning stage, we
> >>> >> need to
> >>> >> have a kind of kickstart/preseed version mechanism.
> >>> >>
> >>> >> Cobbler is exactly such a mechanism. It allows us to have several
> >>> >> Distros
> >>> >> (installer images) and profiles (kickstart/preseed files). But
> >>> >> unfortunately, for some reasons we have not been using those
> Cobbler's
> >>> >> capabilities since the beginning of Fuel and it doesn't seem to be
> >>> >> easily
> >>> >> introduced into Nailgun to deal with the whole Cobbler life cycle.
> >>> >>
> >>> >> Anyway, we are moving towards IBP (image based provisioning) and we
> >>> >> already have different images connected to different OpenStack
> >>> >> releases
> >>> >> (openstack.yaml) and everything else which is necessary for initial
> >>> >> node
> >>> >> configuration is serialized inside provision data (including profile
> >>> >> name
> >>> >> like 'ubuntu_1204' or 'ubuntu_1404') and we are able to choose
> >>> >> cloud-init
> >>> >> template by this profile name.
> >>> >>
> >>> >> And taking into account what it is written above, the suggestion is
> to
> >>> >> completely avoid using kickstart/preseed based way of OS
> provisioning
> >>> >> by 6.1
> >>> >> for all new releases allowing ONLY old ones to use this way.
> >>> >>
> >>> >> Any opinions about that stuff are welcome.
> >>> >>
> >>> >> Vladimir Kozhukalov
> >>> >
> >>> >
> >>> >
> >>> >
> >>> >
> __
> >>> > OpenStack Development Mailing List (not for usage questions)
> >>> > Unsubscribe:
> >>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>> >
> >>>
> >>>
> >>>
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe:
> >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >>> http://lists.openstack.org

Re: [openstack-dev] [Fuel] Getting rid of kickstart/preseed for all NEW releases

2015-01-27 Thread Vladimir Kozhukalov
Andrew is right about our ability to upgrade packages on a system using
"yum update" or "apt-get upgrade", because IBP installs a standalone OS
(unlike the cloud case). Even more, we'll build Ubuntu images on the master
node by 6.1, and of course we'll be able to use the actual repo for that.

Vladimir Kozhukalov

On Tue, Jan 27, 2015 at 1:23 PM, Vladimir Kozhukalov <
vkozhuka...@mirantis.com> wrote:

> Guys,
>
> First, we are not talking about deliberate disabling preseed based
> approach just because we so crazy. The question is "What is the best way to
> achieve our 6.1 goals?" We definitely need to be able to install two
> versions of Ubuntu 12.04 and 14.04. Those versions have different sets of
> packages (for example ntp related ones) and we install some of those
> packages during provisioning (we point out which packages we need with
> their versions). To make this working with preseed based approach we need
> either to insert some "IF release==6.1" conditional lines into preseed (not
> very beautiful, isn't it?) or to create different Distros and Profiles for
> different releases. Second is not a problem for Cobbler but it is for
> nailgun/astute because we do not deal with that stuff and it looks that we
> cannot implement this easily.
>
> IMO, the only options we have are to insert "IFs" into preseed (I would
> say it is not more reliable than IBP) or to refuse preseed approach for
> ONLY NEW UPCOMING releases. You can call "crazy" but for me having a set
> "IFs" together with pmanager.py which are absolutely difficult to maintain
> is crazy.
>
>
>
> Vladimir Kozhukalov
>
> On Tue, Jan 27, 2015 at 3:03 AM, Andrew Woodward  wrote:
>
>> On Mon, Jan 26, 2015 at 10:47 AM, Sergii Golovatiuk
>>  wrote:
>> > Until we are sure IBP solves operation phase where we need to deliver
>> > updated packages so client will be able to provision new machines with
>> these
>> > fixed packages, I would leave backward compatibility with normal
>> provision.
>> > ... Just in case.
>>
>> doesn't running 'apt-get upgrade' or 'yum update' after laying out the
>> FS image resolve the gap until we can rebuild the images on the fly?
>> >
>> >
>> >
>> > --
>> > Best regards,
>> > Sergii Golovatiuk,
>> > Skype #golserge
>> > IRC #holser
>> >
>> > On Mon, Jan 26, 2015 at 4:56 PM, Vladimir Kozhukalov
>> >  wrote:
>> >>
>> >> My suggestion is to make IBP the only option available for all upcoming
>> >> OpenStack releases which are defined in openstack.yaml. It is to be
>> possible
>> >> to install OS using kickstart for all currently available OpenStack
>> >> releases.
>> >>
>> >> Vladimir Kozhukalov
>> >>
>> >> On Mon, Jan 26, 2015 at 6:22 PM, Igor Kalnitsky <
>> ikalnit...@mirantis.com>
>> >> wrote:
>> >>>
>> >>> Just want to be sure I understand you correctly: do you propose to
>> >>> FORBID kickstart/preseed installation way in upcoming release at all?
>> >>>
>> >>> On Mon, Jan 26, 2015 at 3:59 PM, Vladimir Kozhukalov
>> >>>  wrote:
>> >>> > Subject is changed.
>> >>> >
>> >>> > Vladimir Kozhukalov
>> >>> >
>> >>> > On Mon, Jan 26, 2015 at 4:55 PM, Vladimir Kozhukalov
>> >>> >  wrote:
>> >>> >>
>> >>> >> Dear Fuelers,
>> >>> >>
>> >>> >> As you might know we need it to be possible to install several
>> >>> >> versions of
>> >>> >> a particular OS (Ubuntu and Centos) by 6.1  As far as having
>> different
>> >>> >> OS
>> >>> >> versions also means having different sets of packages and some of
>> the
>> >>> >> packages are installed and configured during provisioning stage, we
>> >>> >> need to
>> >>> >> have a kind of kickstart/preseed version mechanism.
>> >>> >>
>> >>> >> Cobbler is exactly such a mechanism. It allows us to have several
>> >>> >> Distros
>> >>> >> (installer images) and profiles (kickstart/preseed files). But
>> >>> >> unfortunately, for some reasons we have not been using those
>> Cobbler's
>> >>> >> capabilities since the beginning of Fuel and it doesn't seem to be
>> >>> >> easily
>> >>> >> introduced into Nailgun to deal with the whole Cobbler life cycle.
>> >>> >>
>> >>> >> Anyway, we are moving towards IBP (image based provisioning) and we
>> >>> >> already have different images connected to different OpenStack
>> >>> >> releases
>> >>> >> (openstack.yaml) and everything else which is necessary for initial
>> >>> >> node
>> >>> >> configuration is serialized inside provision data (including
>> profile
>> >>> >> name
>> >>> >> like 'ubuntu_1204' or 'ubuntu_1404') and we are able to choose
>> >>> >> cloud-init
>> >>> >> template by this profile name.
>> >>> >>
>> >>> >> And taking into account what it is written above, the suggestion
>> is to
>> >>> >> completely avoid using kickstart/preseed based way of OS
>> provisioning
>> >>> >> by 6.1
>> >>> >> for all new releases allowing ONLY old ones to use this way.
>> >>> >>
>> >>> >> Any opinions about that stuff are welcome.
>> >>> >>
>> >>> >> Vladimir Kozhukalov
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> _

Re: [openstack-dev] [Telco][NFV] Meeting facilitator for January 28th

2015-01-27 Thread Marc Koderer
Hi Steve,

I can host it.

Regards
Marc

Am 27.01.2015 um 08:40 schrieb Steve Gordon :

> Hi all,
> 
> As mentioned in the notes from last week's meeting I am going to be in 
> transit during our 1400 UTC meeting this Wednesday (28th) [1]. Is anyone else 
> willing and able to facilitate in my absence?
> 
> Thanks,
> 
> Steve
> 
> [1] https://wiki.openstack.org/wiki/TelcoWorkingGroup#Technical_Team_Meetings
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] do we really need project tags in the governance repository?

2015-01-27 Thread Thierry Carrez
Doug Hellmann wrote:
> On Mon, Jan 26, 2015, at 12:02 PM, Thierry Carrez wrote:
> [...]
>> I'm open to alternative suggestions on where the list of tags, their
>> definition and the list projects they apply to should live. If you don't
>> like that being in the governance repository, what would have your
>> preference ?
> 
> From the very beginning I have taken the position that tags are by
> themselves not sufficiently useful for evaluating projects. If someone
> wants to choose between Ceilometer, Monasca, or StackTach, we're
> unlikely to come up with tags that will let them do that. They need
> in-depth discussions of deployment options, performance characteristics,
> and feature trade-offs.

They are still useful to give people a chance to discover that those 3
are competing in the same space, and potentially get an idea of which
one (if any) is deployed on more than one public cloud, better
documented, or security-supported. I agree with you that an
(opinionated) article comparing those 3 solutions would be a nice thing
to have, but I'm just saying that basic, clearly-defined reference
project metadata still has a lot of value, especially as we grow the
number of projects.

>> That said, I object to only saying "this is all information that can be
>> found elsewhere or should live elsewhere", because that is just keeping
>> the current situation -- where that information exists somewhere but
>> can't be efficiently found by our downstream consumers. We need a
>> taxonomy and clear definitions for tags, so that our users can easily
>> find, understand and navigate such project metadata.
> 
> As someone new to the project, I would not think to look in the
> governance documents for "state" information about a project. I would
> search for things like "install guide openstack" or "component list
> openstack" and expect to find them in the documentation. So I think
> putting the information in those (or similar) places will actually make
> it easier to find for someone that hasn't been involved in the
> discussion of tags and the governance repository.

The idea here is to have the reference information in some
Gerrit-controlled repository (currently openstack/governance, but I'm
open to moving this elsewhere), and have that reference information
consumed by the openstack.org website when you navigate to the
"Software" section, to present a browseable/searchable list of projects
with project metadata. I don't expect anyone to read the YAML file from
the governance repository. On the other hand, the software section of
the openstack.org website is by far the most visited page of all our web
properties, so I expect most people to see that.
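
To make that concrete, the kind of entry I have in mind is really simple
(purely illustrative, not a final schema):

  nova:
    tags:
      - integrated-release
      - security-supported

The value is in the definitions behind each tag, not in the file format.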

> If we need a component list with descriptions, let's build that. It can
> be managed by a team of interested parties -- perhaps some of the
> operators or deployers, for example. I don't know if we have an existing
> place where it would make sense to put it, or if we need a new
> repository.
> 
> We've been applying DRY to the existing projects/programs
> list and saying that because we already have a list in the governance
> repository we shouldn't repeat that information elsewhere, but we're
> also starting to go to a lot of lengths to define a format to hold
> information (tags, with metadata, a taxonomy, etc.) that isn't needed
> for project governance. That makes me think we're trying to force-fit
> this idea into a single list.

If I understand you correctly, you'd like to have the project teams list
(previously known as programs) in the governance repository, together
with the list of their associated code repositories. Then you would have
a duplicate list of code repositories, with their associated tag
metadata, in some other repository. I understand the limits of DRY, but
that duplication still sounds like a maintenance nightmare (especially
given how often the repository list is updated)... How do you make sure
that repositories in A are in B? Some check test at the gate?

Alternatively we could have the project teams / code repositories
association live in the "other repository" and just duplicate the
project teams list, which arguably should be smaller. That means we
would also delegate the repository scope sanity-check to the "other
repository" maintainers, but I'm fine with that. We could have one file
per project team and a check test that validates the project team exists
in the governance repository. The only (small) issue with that option is
that code repositories translate into ATCs, which translate into TC
voters, so this is arguably a governance thing.

>> The tagging proposal (only one-month old) has so far received a pretty
>> good reception from operators and other downstream users, who see it as
>> a way to explain and contribute what type of information matters to
>> them. The Technical Committee members are not the only people who can
>> propose tags.
> 
> I agree that a product matrix with some basic information will be useful
> for depl

Re: [openstack-dev] [Fuel] Proposal for nominating people to python-fuelclient-core

2015-01-27 Thread Roman Prykhodchenko
I think consensus has been reached and the resolution is positive.

> 26 січ. 2015 о 14:37 Tomasz Napierala  написав(ла):
> 
> +1
> 
> 
>> On 26 Jan 2015, at 11:33, Roman Prykhodchenko  wrote:
>> 
>> Hi Guys,
>> 
>> According to our previous thread [1] and the decision made there I’d like to 
>> initiate separation of the original fuel-core group.
>> 
>> As the first step I propose the following Python guys from the original
>> fuel-core group to be nominated to the python-fuelclient-core group.
>> 
>> Aleksey Kasatkin — akasatkin
>> Dmitry Pyzhov — dpyzhov
>> Evgeniy L — evgeniyl___
>> Igor Kalnitsky — ikalnitsky
>> 
>> The decision will be made based on lazy consensus. Since I was in 
>> python-fuelclient-core group only for technical reasons, I will remove 
>> myself from there as soon as it’s populated.
>> All following nominations and de-nominations will be done according to the 
>> approved core-dev process [2].
>> 
>> 
>> References:
>> 
>> 1. 
>> http://lists.openstack.org/pipermail/openstack-dev/2015-January/055038.html
>> 2. https://wiki.openstack.org/wiki/Governance/Approved/CoreDevProcess
>> 
>> 
>> - romcheg
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> --
> Tomasz 'Zen' Napierala
> Sr. OpenStack Engineer
> tnapier...@mirantis.com
> 
> 
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat][hot]

2015-01-27 Thread Angus Salkeld
On Tue, Jan 27, 2015 at 7:00 PM, Dmitry  wrote:

> I have another question, is it possible to get the stack name in the hot
> script?
> E.g.
> params:
>  $stack_name: {get_global_variable: $stack.name}
>

See:
http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#pseudo-parameters
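
e.g. something along these lines (untested):

  params:
    $stack_name: {get_param: "OS::stack_name"}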

Regards
Angus


>
> On Tue, Jan 27, 2015 at 3:53 AM, Qiming Teng 
> wrote:
>
>> On Mon, Jan 26, 2015 at 07:44:25PM +0200, Dmitry wrote:
>> > thanks, exactly what I was looking for:
>> > curl http://169.254.169.254/1.0/meta-data/instance-id
>>
>> or, /var/lib/cloud/data/instance-id, if cloud-init is there.
>>
>> Regards,
>>   Qiming
>>
>> > On Mon, Jan 26, 2015 at 7:31 PM, Zane Bitter 
>> wrote:
>> >
>> > > On 25/01/15 10:41, Dmitry wrote:
>> > >
>> > >> Hello,
>> > >> I need to receive instance id as part of the instance installation
>> script.
>> > >> Something like:
>> > >> params:
>> > >>$current_id: {get_param: $this.id }
>> > >>
>> > >
>> > > I have no idea what this is supposed to mean, sorry.
>> > >
>> > >  Is it possible?
>> > >>
>> > >
>> > > The get_resource function will return the server UUID for a server
>> > > resource, but you can't use it from within that resource itself (it
>> would
>> > > be a circular reference).
>> > >
>> > > The UUID of a server is provided to the server through the Nova
>> metadata;
>> > > you should retrieve it from there in your user_data script.
>> > >
>> > > cheers,
>> > > Zane.
>> > >
>> > >
>> > >
>> __
>> > > OpenStack Development Mailing List (not for usage questions)
>> > > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > >
>>
>> >
>> __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][infra] Zuul-Merger Error in CI

2015-01-27 Thread Punith S
Hi, the merge failure that we reported on Ubuntu 12.04 has been resolved in
the new Ubuntu 14.04 CI master setup.

Many thanks to Ramy for updating the repo :)
https://github.com/rasselin/os-ext-testing

Cheers!

On Thu, Jan 8, 2015 at 7:16 PM, Punith S  wrote:

> hi,
>
> I'm running CI for OpenStack Cinder patches; when Zuul reads a new patch
> from Gerrit, it tries to merge the patch into its own cinder repo
> @/var/lib/zuul/git/openstack/cinder
>
> and returns the comment to gerrit saying
>
>
> hence it is failing to issue the dsvm-tempest-job in jenkins via gearman
> !!!
> ​
> the zuul/merger-debug log snapshot
>
>
> 2015-01-08 19:03:43,320 DEBUG zuul.MergeServer: Got merge job.
> 2015-01-08 19:03:43,321 DEBUG zuul.Merger: Merging for change 145778,1.
> 2015-01-08 19:03:43,321 DEBUG zuul.Merger: Processing refspec
> refs/changes/78/145778/1 for project openstack/cinder / master ref
> Zbd4a4ad6ff3741c68ce382afa6d8df84
> 2015-01-08 19:03:43,383 DEBUG zuul.Merger: Unable to find commit for ref
> master/Zbd4a4ad6ff3741c68ce382afa6d8df84
> 2015-01-08 19:03:43,384 DEBUG zuul.Merger: No base commit found for
> (u'openstack/cinder', u'master')
> 2015-01-08 19:03:43,384 DEBUG zuul.Repo: Resetting repository
> /var/lib/zuul/git/openstack/cinder
> 2015-01-08 19:03:43,385 DEBUG zuul.Repo: Updating repository
> /var/lib/zuul/git/openstack/cinder
> 2015-01-08 19:03:54,507 DEBUG zuul.Repo: Checking out
> 5993660498f44e96b0f35ccc0f4d4a4c7b430363
> 2015-01-08 19:04:02,685 ERROR zuul.Merger: Exception while merging a
> change:
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/zuul/merger/merger.py",
> line 234, in _mergeChange
> commit = repo.merge(item['refspec'], 'resolve')
>   File "/usr/local/lib/python2.7/dist-packages/zuul/merger/merger.py",
> line 132, in merge
> self.fetch(ref)
>   File "/usr/local/lib/python2.7/dist-packages/zuul/merger/merger.py",
> line 145, in fetch
> origin.fetch(ref)
>   File "/usr/local/lib/python2.7/dist-packages/git/remote.py", line 598,
> in fetch
> return self._get_fetch_info_from_stderr(proc, progress or
> RemoteProgress())
>   File "/usr/local/lib/python2.7/dist-packages/git/remote.py", line 540,
> in _get_fetch_info_from_stderr
> for err_line, fetch_line in zip(fetch_info_lines, fetch_head_info))
>   File "/usr/local/lib/python2.7/dist-packages/git/remote.py", line 540,
> in 
> for err_line, fetch_line in zip(fetch_info_lines, fetch_head_info))
>   File "/usr/local/lib/python2.7/dist-packages/git/remote.py", line 252,
> in _from_line
> raise ValueError("Failed to parse line: %r" % line)
> ValueError: Failed to parse line: 'Total 7 (delta 5), reused 7 (delta 5)'
>
>
> is this a problem of python git ?
>
> i'm using ubuntu 12.04 and git 1.7
>
>
> thanks in advance
> --
> regards,
>
> punith s
> cloudbyte.com
>



-- 
regards,

punith s
cloudbyte.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][dev][Zuul] Merge Failed Error in CI - GitPython Error

2015-01-27 Thread Punith S
Hi, the merge failure that we reported on Ubuntu 12.04 has been resolved in
the new Ubuntu 14.04 CI master setup.

Many thanks to Ramy for updating the repo :)
https://github.com/rasselin/os-ext-testing

Cheers!

On Fri, Jan 16, 2015 at 4:58 PM, Punith S  wrote:

>
> Hi stackers,
>
> I'm running CI for OpenStack Cinder patches; when Zuul reads a new patch
> from Gerrit, it tries to merge the patch into its own cinder repo
> @/var/lib/zuul/git/openstack/cinder
>
> and returns the comment to gerrit saying
>
>
> hence it is failing to issue the tempest-job in jenkins via gearman !!!
>
> on checking the trace the error seems to come from GitPython 0.3.2.1
> ​
> the zuul/merger-debug log snapshot ref -
> http://paste.openstack.org/show/158218/
>
> *raise ValueError("Failed to parse line: %r" % line) *
>
> *ValueError: Failed to parse line: 'Total 9 (delta 7), reused 9 (delta 7)'*
> '
>
>
> is this a problem of python git ?
>
> i'm using ubuntu 12.04 and git 1.7
>
>
> thanks
>
> regards,
>
> punith s
> cloudbyte.com
>
>
>
> --
> regards,
>
> punith s
> cloudbyte.com
>



-- 
regards,

punith s
cloudbyte.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-01-27 Thread Aleksandr Didenko
Hi,

After starting to implement granular deployment we've faced a bunch of
issues that would make further development of this feature much more
complicated if we had to support both Simple and HA deployment modes. For
example, Simple mode does not require a cluster (corosync, pacemaker, VIPs,
etc.), so we would have to skip this task for Simple mode somehow - we could
use conditional tasks, or conditional manifests inside our tasks, or create
separate task graphs for different deployment modes, etc. - either way it
pretty much doubles the amount of work for some parts of Fuel and our
development cycle.

At the moment, CI blocks us from further development of fuel-library
modularization BP [2] because we still use Simple mode in CI. So in order
to proceed with this BP we have two options:

1) remove Simple mode from CI/QA and thus drop it completely from Fuel
2) double our efforts to support both Simple and HA modes in granular
deployment

We have a BP about single-controller HA [1]. HA with a single controller
works just fine at the moment. So if you want to test Fuel on a minimum set
of nodes, you can do this on 3 nodes (Fuel master, controller, compute),
just like with Simple mode before. I suppose it's time to finally drop
support for Simple mode in Fuel :)

[1] https://blueprints.launchpad.net/fuel/+spec/single-controller-ha
[2] https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization

--
Regards,
Aleksandr Didenko


On Tue, Aug 26, 2014 at 9:25 AM, Mike Scherbakov 
wrote:

> Definitely fuel spec is needed :)
>
>
> On Mon, Aug 25, 2014 at 8:45 PM, Evgeniy L  wrote:
>
>> Hi Andrew,
>>
>> I have some comments regarding your action items
>>
>> >> 2) Removing simple mode from the ui and tests
>> >> 3) Removing simple mode support from nailgun (maybe we leave it) and
>> cli
>>
>> We shouldn't do it, because Nailgun has to handle both versions of a
>> cluster. What we have to do here is use openstack.yaml to keep all
>> possible modes. For the new release there will be only ha; to manage
>> previous releases we have to create data migrations in Nailgun that
>> create the field with modes, i.e. multinode and ha.
>>
>> Also fixes for the UI are required; I think it's mostly related to the
>> wizard, the 'mode' tab where a user can choose an HA or non-HA cluster.
>> In the case of a new release there should be only ha, and in the case of
>> old releases there should be both ha and multinode.
>>
>> Thanks,
>>
>>
>>
>>  On Mon, Aug 25, 2014 at 8:19 PM, Andrew Woodward 
>> wrote:
>>
>>>  Started a new thread so that we don't hijack the older thread.
>>>  as
>>>
>>>
 Andrew, will you work on it in 6.0? What are remaining items there?
 Also, it might affect our tests - simple mode runs faster so we use it for
 smoke ISO test. Anastasia, please confirm that we can switch smoke to
 one-ha-controller model, or even drop smoke at all and use BVT only
 (running CentOS 3 HA controllers and same with Ubuntu).

>>>
>>> The primary reason that we haven't disabled single yet is was due to [0]
>>> where we where having problems adding additional controllers. With the
>>> changes to galera and rabbit clustering it appears that we ended up fixing
>>> it already.
>>>
>>> The remaining issues are:
>>> 1) Ensuring we have good test coverage for the cases we expect to
>>> support [1]
>>> 2) Removing simple mode from the ui and tests
>>> 3) Removing simple mode support from nailgun (maybe we leave it) and cli
>>> 4) Updating documentation
>>>
>>> [0] https://bugs.launchpad.net/fuel/+bug/1350266
>>> [1] https://bugs.launchpad.net/fuel/+bug/1350266/comments/7
>>>
>>> --
>>> Andrew
>>> Mirantis
>>> Ceph community
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Mike Scherbakov
> #mihgen
>
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel][Neutron ML2][VMWare]NetworkNotFoundForBridge: Network could not be found for bridge br-int

2015-01-27 Thread Foss Geek
Hi Xarses,

Actually it is a multi-hypervisor environment. The error in nova.log is:

 NetworkNotFoundForBridge: Network could not be found for bridge br-int

The above error disappears after changing the mechanism driver order in the
/etc/neutron/plugins/ml2/ml2_conf.ini file from mechanism_drivers =
openvswitch,dvs to mechanism_drivers = dvs,openvswitch.
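For reference, the working configuration ends up looking roughly like this
(the mechanism_drivers option lives in the [ml2] section of ml2_conf.ini;
everything else in the file stays as deployed):

    [ml2]
    mechanism_drivers = dvs,openvswitch

Presumably this helps because ML2 attempts port binding with the mechanism
drivers in the configured order, so the dvs driver gets a chance to bind the
vCenter-managed ports before openvswitch does.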



On Mon, Jan 19, 2015 at 12:31 PM, Foss Geek  wrote:

> Hi Xarses,
>
> Thanks for your time!
>
> I was not able to check my mail yesterday. Sorry for the delay.
>
> One of my colleagues fixed this issue yesterday. Once I understand the
> issue, I will update this thread.
>
> --
> Thanks & Regards
> E-Mail: thefossg...@gmail.com
> IRC: neophy
> Blog : http://lmohanphy.livejournal.com/
>
>
>
> On Sat, Jan 17, 2015 at 1:17 AM, Andrew Woodward  wrote:
>
>> neophy,
>>
>> It seems like there are leftovers that Fuel was using in the config
>> that would not be present if you had installed Neutron fresh. I'd
>> compare the config files and start backing out bits you don't need. I'd
>> start with the lines referencing br-int; you don't need them on nodes
>> that aren't using the OVS agent.
>>
>> Poke me on IRC if you need more help
>>
>> Xarses (GMT-8)
>>
>> On Fri, Jan 9, 2015 at 1:08 PM, Foss Geek  wrote:
>> > Dear All,
>> >
>> > I am trying to integrate Openstack + vCenter + Neutron + VMware
>> dvSwitch ML2
>> > Mechanism driver.
>> >
>> > I deployed a two node openstack environment (controller + compute with
>> KVM)
>> > with Neutron VLAN + KVM using fuel 5.1. Again I installed nova-compute
>> using
>> > yum in controller node and configured nova-compute in controller to
>> point
>> > vCenter. I am also using Neutron VLAN with VMware dvSwitch ML2 Mechanism
>> > driver. My vCenter is properly configured as suggested by the doc:
>> >
>> https://www.mirantis.com/blog/managing-vmware-vcenter-resources-mirantis-openstack-5-0-part-1-create-vsphere-cluster/
>> >
>> > I am able to create network from Horizon and I can see the same network
>> > created in vCenter. When I try to create a VM I am getting the below
>> error
>> > in Horizon.
>> >
>> > Error: Failed to launch instance "test-01": Please try again later
>> [Error:
>> > No valid host was found. ].
>> >
>> > Here is the error message from Instance Overview tab:
>> >
>> > Instance Overview
>> > Info
>> > Name
>> > test-01
>> > ID
>> > 309a1f47-83b6-4ab4-9d71-642a2000c8a1
>> > Status
>> > Error
>> > Availability Zone
>> > nova
>> > Created
>> > Jan. 9, 2015, 8:16 p.m.
>> > Uptime
>> > 0 minutes
>> > Fault
>> > Message
>> > No valid host was found.
>> > Code
>> > 500
>> > Details
>> > File
>> "/usr/lib/python2.6/site-packages/nova/scheduler/filter_scheduler.py",
>> > line 108, in schedule_run_instance raise
>> exception.NoValidHost(reason="")
>> > Created
>> > Jan. 9, 2015, 8:16 p.m
>> >
>> > Getting the below error in nova-all.log:
>> >
>> >
>> > <183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.135 31870
>> DEBUG
>> > keystoneclient.middleware.auth_token
>> > [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Authenticating user token
>> > __call__
>> >
>> /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:676
>> > <183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.136 31870
>> DEBUG
>> > keystoneclient.middleware.auth_token
>> > [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Removing headers from
>> request
>> > environment:
>> >
>> X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role
>> > _remove_auth_headers
>> >
>> /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:733
>> > <183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.137 31870
>> DEBUG
>> > keystoneclient.middleware.auth_token
>> > [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Returning cached token
>> > _cache_get
>> >
>> /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1545
>> > <183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.138 31870
>> DEBUG
>> > keystoneclient.middleware.auth_token
>> > [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Storing token in cache store
>> >
>> /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1460
>> > <183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.139 31870
>> DEBUG
>> > keystoneclient.middleware.auth_token
>> > [req-c9ec0973-ff63-4ac3-a0f7-1d2d7b7aa470 ] Received request from user:
>> > 4564fea80fa14e1daed160afa074d389 with project_id :
>> > dd32714d9009495bb51276e284380d6a and roles: admin,_member_
>> > _build_user_headers
>> >
>> /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:996
>> > <183>Jan  9 20:16:23 node-18 nova-api 2015-01-09 20:16:23.141 31870
>> DEBUG
>> > routes.middleware [req-05089e83-e4c1-4d90-b7c5-065226e55d91 ] Matched
>> GET
>> >
>> /dd32714d9009495bb51276e284380d6a/servers/309a1f47-83

[openstack-dev] oslo.i18n 1.3.1 released

2015-01-27 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslo.i18n 1.3.1: oslo.i18n library

The primary reason for this release is a packaging issue
reported and fixed by Dan Smith.

For more details, please see the git log history below and:

http://launchpad.net/oslo.i18n/+milestone/1.3.1

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.i18n
Changes in /home/dhellmann/repos/openstack/oslo.i18n 1.3.0..1.3.1
-

d9b3ca6 Clear global cache in test_get_available_languages
e934009 Make setup.cfg packages include oslo.i18n

Diffstat (except docs and test files)
-

setup.cfg| 1 +
2 files changed, 4 insertions(+)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-01-27 Thread Stanislaw Bogatkin
+1

On Tue, Jan 27, 2015 at 4:05 PM, Aleksandr Didenko 
wrote:

> Hi,
>
> After starting implementing granular deployment we've faced a bunch of
> issues that would make further development of this feature much more
> complicated if we have to support both Simple and HA deployment modes. For
> example: simple mode does not require cluster (corosync, pacemaker, vips,
> etc), so we had to skip this task for Simple mode somehow - we can use
> conditional tasks, or conditional manifests in our tasks, or create
> separate task graphs for different deployment modes, etc - either way it's
> pretty much doubling the amount of work for some parts of Fuel and our
> development cycle.
>
> At the moment, CI blocks us from further development of fuel-library
> modularization BP [2] because we still use Simple mode in CI. So in order
> to proceed with this BP we have two options:
>
> 1) remove Simple mode from CI/QA and thus drop it completely from Fuel
> 2) double our efforts to support both Simple and HA modes in granular
> deployment
>
> We have a BP about single-controller HA [1]. HA with single controller
> works just fine at the moment. So if you want to test Fuel on a minimum set
> of nodes, you can do this on 3 nodes (Fuel master, controller, compute),
> just like with Simple mode before. I suppose, it's time to finally drop
> support for Simple mode in Fuel :)
>
> [1] https://blueprints.launchpad.net/fuel/+spec/single-controller-ha
> [2]
> https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
>
> --
> Regards,
> Aleksandr Didenko
>
>
> On Tue, Aug 26, 2014 at 9:25 AM, Mike Scherbakov  > wrote:
>
>> Definitely fuel spec is needed :)
>>
>>
>> On Mon, Aug 25, 2014 at 8:45 PM, Evgeniy L  wrote:
>>
>>> Hi Andrew,
>>>
>>> I have some comments regarding to you action items
>>>
>>> >> 2) Removing simple mode from the ui and tests
>>> >> 3) Removing simple mode support from nailgun (maybe we leave it) and
>>> cli
>>>
>>> We shouldn't do it, because nailgun should handle both versions of
>>> cluster.
>>> What we have to do here is to use openstack.yaml to keep all possible
>>> modes.
>>> For new release there will be only ha, to manage previous releases we
>>> have
>>> to create data migrations in nailgun to create the filed with modes i.e.
>>> multinode
>>> and ha.
>>>
>>> Also fixes for ui are required too, I think it mostly related to wizard,
>>> 'mode' tab
>>> where use can chose ha or non ha cluster in case of new release there
>>> should
>>> be only ha, and in case of old releases there should be ha and multinode.
>>>
>>> Thanks,
>>>
>>>
>>>
>>>  On Mon, Aug 25, 2014 at 8:19 PM, Andrew Woodward 
>>> wrote:
>>>
  Started a new thread so that we don't hijack the older thread.
  as


> Andrew, will you work on it in 6.0? What are remaining items there?
> Also, it might affect our tests - simple mode runs faster so we use it for
> smoke ISO test. Anastasia, please confirm that we can switch smoke to
> one-ha-controller model, or even drop smoke at all and use BVT only
> (running CentOS 3 HA controllers and same with Ubuntu).
>

 The primary reason that we haven't disabled single yet is was due to
 [0] where we where having problems adding additional controllers. With the
 changes to galera and rabbit clustering it appears that we ended up fixing
 it already.

 The remaining issues are:
 1) Ensuring we have good test coverage for the cases we expect to
 support [1]
 2) Removing simple mode from the ui and tests
 3) Removing simple mode support from nailgun (maybe we leave it) and cli
 4) Updating documentation

 [0] https://bugs.launchpad.net/fuel/+bug/1350266
 [1] https://bugs.launchpad.net/fuel/+bug/1350266/comments/7

 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Mike Scherbakov
>> #mihgen
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu

Re: [openstack-dev] [heat][hot]

2015-01-27 Thread Dmitry
Thank you very much
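For reference, the stack name is exposed through the OS::stack_name
pseudo-parameter. A minimal sketch of using it (the surrounding user_data
script is made up for illustration):

    user_data:
      str_replace:
        template: |
          #!/bin/sh
          echo "deployed by stack $stack_name"
        params:
          $stack_name: {get_param: "OS::stack_name"}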

On Tue, Jan 27, 2015 at 1:06 PM, Angus Salkeld 
wrote:

> On Tue, Jan 27, 2015 at 7:00 PM, Dmitry  wrote:
>
>> I have another question, is it possible to get the stack name in the hot
>> script?
>> E.g.
>> params:
>>  $stack_name: {get_global_variable: $stack.name}
>>
>
> See:
> http://docs.openstack.org/developer/heat/template_guide/hot_spec.html#pseudo-parameters
>
> Regards
> Angus
>
>
>>
>> On Tue, Jan 27, 2015 at 3:53 AM, Qiming Teng 
>> wrote:
>>
>>> On Mon, Jan 26, 2015 at 07:44:25PM +0200, Dmitry wrote:
>>> > thanks, exactly what I was looking for:
>>> > curl http://169.254.169.254/1.0/meta-data/instance-id
>>>
>>> or, /var/lib/cloud/data/instance-id, if cloud-init is there.
>>>
>>> Regards,
>>>   Qiming
>>>
>>> > On Mon, Jan 26, 2015 at 7:31 PM, Zane Bitter 
>>> wrote:
>>> >
>>> > > On 25/01/15 10:41, Dmitry wrote:
>>> > >
>>> > >> Hello,
>>> > >> I need to receive instance id as part of the instance installation
>>> script.
>>> > >> Something like:
>>> > >> params:
>>> > >>$current_id: {get_param: $this.id }
>>> > >>
>>> > >
>>> > > I have no idea what this is supposed to mean, sorry.
>>> > >
>>> > >  Is it possible?
>>> > >>
>>> > >
>>> > > The get_resource function will return the server UUID for a server
>>> > > resource, but you can't use it from within that resource itself (it
>>> would
>>> > > be a circular reference).
>>> > >
>>> > > The UUID of a server is provided to the server through the Nova
>>> metadata;
>>> > > you should retrieve it from there in your user_data script.
>>> > >
>>> > > cheers,
>>> > > Zane.
>>> > >
>>> > >
>>> > >
>>> __
>>> > > OpenStack Development Mailing List (not for usage questions)
>>> > > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> > >
>>>
>>> >
>>> __
>>> > OpenStack Development Mailing List (not for usage questions)
>>> > Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] need help with my first commit: The branch 'master' does not exist on the given remote 'gerrit'

2015-01-27 Thread Zaro
Hello.  I think there might be something wrong with git-review in your
environment.  You might want to try in another environment, preferably Linux
if it's handy.  I noticed your origin points at GitHub; you should clone from
here instead:
http://git.openstack.org/cgit/stackforge/congress

Here's what you should get when you run the commands to setup the remotes:

~/temp/congress$ git review -s
Creating a git remote called "gerrit" that maps to:
ssh://zaro0...@review.openstack.org:29418/stackforge/congress.git
~/temp/congress$ git remote -v
gerrit ssh://zaro0...@review.openstack.org:29418/stackforge/congress.git
(fetch)
gerrit ssh://zaro0...@review.openstack.org:29418/stackforge/congress.git
(push)
origin https://git.openstack.org/stackforge/congress (fetch)
origin https://git.openstack.org/stackforge/congress (push)
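If the clone already exists, one way to repoint it (assuming there is no
local setup worth preserving) is roughly:

    git remote set-url origin https://git.openstack.org/stackforge/congress
    git remote rm gerrit        # let "git review -s" recreate it
    git review -s
    git review                  # run from the topic branch once -s succeeds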



On Mon, Jan 26, 2015 at 9:09 PM, Tran, Steven  wrote:

>  Hi,
>
>Can someone point out what I miss that results in the following warning?
>
>
>
> $ git review
>
> The branch 'master' does not exist on the given remote 'gerrit'. If
>
> these changes are intended to start a new branch, re-run with the '-R'
>
> option enabled.
>
>
>
>
>
> I believe I shouldn’t do “git review -R” because it prompts me to submit
> multiple commits.
>
> I can do “git review -s” without any errors.
>
> I follow the steps at
> http://docs.openstack.org/infra/manual/developers.html#starting-a-change
>
> I’m using Cygwin.
>
>
>
> $ git status
>
> On branch bp/murano-driver
>
> nothing to commit, working directory clean
>
>
>
> $ git branch
>
> * bp/murano-driver
>
>   master
>
>
>
> $ git config --list
>
> core.repositoryformatversion=0
>
> core.filemode=true
>
> core.bare=false
>
> core.logallrefupdates=true
>
> core.ignorecase=true
>
> remote.origin.url=https://github.com/stackforge/congress.git
>
> remote.origin.fetch=+refs/heads/*:refs/remotes/origin/*
>
> branch.master.remote=origin
>
> branch.master.merge=refs/heads/master
>
> remote.gerrit.url=ssh://
> steven...@review.openstack.org:29418/stackforge/congress.git
>
> user.name=Steven Tran
>
> user.email=steven.tr...@hp.com
>
> gitreview.username=stevenldt
>
>
>
> $ git log
>
> commit 5567b6e4622246ad86cdb9d6c0643427573e8d13
>
> Author: Steven Tran 
>
> Date:   Mon Jan 26 17:19:38 2015 -0800
>
>
>
> Change-Id: Idc4abf32a7aa25231a7a6df511df998a0a3a50ad
>
> Implements: blueprint murano-driver
>
>
>
> Thanks,
>
> -Steven
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Hyper-V meeting

2015-01-27 Thread Peter Pouliot
Hi All,

Due to weather conditions and others traveling we will need to postpone the 
Hyper-V meeting until next week.
For issues or questions please email directly or contact one of us on the IRC 
channel.
We will resume next week at the usual time.

p

Peter J. Pouliot CISSP
Microsoft Cloud+Enterprise Solutions
C:\OpenStack
New England Research & Development Center
1 Memorial Drive
Cambridge, MA 02142
P: 1.(857).4536436
E: ppoul...@microsoft.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] How to pass through devstack config

2015-01-27 Thread Bharat Kumar

Hi,

I have seen Sean Dague's patch [1]; if I understood correctly, with this
patch we can reduce the number of DEVSTACK_GATE variables that we need.
I am trying to follow this patch to configure my gate job
"DEVSTACK_GATE_GLUSTERFS" [2].


I am not able to figure out how to use this patch [1].
Please suggest how I can use it to configure my gate job [2].

[1] https://review.openstack.org/#/c/145321/
[2] https://review.openstack.org/#/c/143308/7/devstack-vm-gate.sh
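If that patch exposes a generic local.conf pass-through (something along the
lines of a DEVSTACK_LOCAL_CONFIG variable -- treat the name as an assumption
and check the patch itself), the job could append the needed settings
directly instead of defining a new feature flag, e.g.:

    # Hypothetical job snippet; the variable name and the setting are
    # assumptions used only to illustrate the idea:
    export DEVSTACK_LOCAL_CONFIG="CINDER_ENABLED_BACKENDS=glusterfs:glusterfs"

but someone familiar with the patch should confirm the exact mechanism.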

--
Warm Regards,
Bharat Kumar Kobagana
Software Engineer
OpenStack Storage – RedHat India

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-01-27 Thread Sergii Golovatiuk
+1

--
Best regards,
Sergii Golovatiuk,
Skype #golserge
IRC #holser

On Tue, Jan 27, 2015 at 2:44 PM, Stanislaw Bogatkin 
wrote:

> +1
>
> On Tue, Jan 27, 2015 at 4:05 PM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> After starting implementing granular deployment we've faced a bunch of
>> issues that would make further development of this feature much more
>> complicated if we have to support both Simple and HA deployment modes. For
>> example: simple mode does not require cluster (corosync, pacemaker, vips,
>> etc), so we had to skip this task for Simple mode somehow - we can use
>> conditional tasks, or conditional manifests in our tasks, or create
>> separate task graphs for different deployment modes, etc - either way it's
>> pretty much doubling the amount of work for some parts of Fuel and our
>> development cycle.
>>
>> At the moment, CI blocks us from further development of fuel-library
>> modularization BP [2] because we still use Simple mode in CI. So in order
>> to proceed with this BP we have two options:
>>
>> 1) remove Simple mode from CI/QA and thus drop it completely from Fuel
>> 2) double our efforts to support both Simple and HA modes in granular
>> deployment
>>
>> We have a BP about single-controller HA [1]. HA with single controller
>> works just fine at the moment. So if you want to test Fuel on a minimum set
>> of nodes, you can do this on 3 nodes (Fuel master, controller, compute),
>> just like with Simple mode before. I suppose, it's time to finally drop
>> support for Simple mode in Fuel :)
>>
>> [1] https://blueprints.launchpad.net/fuel/+spec/single-controller-ha
>> [2]
>> https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
>>
>> --
>> Regards,
>> Aleksandr Didenko
>>
>>
>> On Tue, Aug 26, 2014 at 9:25 AM, Mike Scherbakov <
>> mscherba...@mirantis.com> wrote:
>>
>>> Definitely fuel spec is needed :)
>>>
>>>
>>> On Mon, Aug 25, 2014 at 8:45 PM, Evgeniy L  wrote:
>>>
 Hi Andrew,

 I have some comments regarding to you action items

 >> 2) Removing simple mode from the ui and tests
 >> 3) Removing simple mode support from nailgun (maybe we leave it) and
 cli

 We shouldn't do it, because nailgun should handle both versions of
 cluster.
 What we have to do here is to use openstack.yaml to keep all possible
 modes.
 For new release there will be only ha, to manage previous releases we
 have
 to create data migrations in nailgun to create the filed with modes
 i.e. multinode
 and ha.

 Also fixes for ui are required too, I think it mostly related to
 wizard, 'mode' tab
 where use can chose ha or non ha cluster in case of new release there
 should
 be only ha, and in case of old releases there should be ha and
 multinode.

 Thanks,



  On Mon, Aug 25, 2014 at 8:19 PM, Andrew Woodward 
 wrote:

>  Started a new thread so that we don't hijack the older thread.
>  as
>
>
>> Andrew, will you work on it in 6.0? What are remaining items there?
>> Also, it might affect our tests - simple mode runs faster so we use it 
>> for
>> smoke ISO test. Anastasia, please confirm that we can switch smoke to
>> one-ha-controller model, or even drop smoke at all and use BVT only
>> (running CentOS 3 HA controllers and same with Ubuntu).
>>
>
> The primary reason that we haven't disabled single yet is was due to
> [0] where we where having problems adding additional controllers. With the
> changes to galera and rabbit clustering it appears that we ended up fixing
> it already.
>
> The remaining issues are:
> 1) Ensuring we have good test coverage for the cases we expect to
> support [1]
> 2) Removing simple mode from the ui and tests
> 3) Removing simple mode support from nailgun (maybe we leave it) and
> cli
> 4) Updating documentation
>
> [0] https://bugs.launchpad.net/fuel/+bug/1350266
> [1] https://bugs.launchpad.net/fuel/+bug/1350266/comments/7
>
> --
> Andrew
> Mirantis
> Ceph community
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Mike Scherbakov
>>> #mihgen
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-r

Re: [openstack-dev] [Fuel] removing single mode

2015-01-27 Thread Vladimir Kuklin
+1 to simple mode removal

On Tue, Jan 27, 2015 at 4:44 PM, Stanislaw Bogatkin 
wrote:

> +1
>
> On Tue, Jan 27, 2015 at 4:05 PM, Aleksandr Didenko 
> wrote:
>
>> Hi,
>>
>> After starting implementing granular deployment we've faced a bunch of
>> issues that would make further development of this feature much more
>> complicated if we have to support both Simple and HA deployment modes. For
>> example: simple mode does not require cluster (corosync, pacemaker, vips,
>> etc), so we had to skip this task for Simple mode somehow - we can use
>> conditional tasks, or conditional manifests in our tasks, or create
>> separate task graphs for different deployment modes, etc - either way it's
>> pretty much doubling the amount of work for some parts of Fuel and our
>> development cycle.
>>
>> At the moment, CI blocks us from further development of fuel-library
>> modularization BP [2] because we still use Simple mode in CI. So in order
>> to proceed with this BP we have two options:
>>
>> 1) remove Simple mode from CI/QA and thus drop it completely from Fuel
>> 2) double our efforts to support both Simple and HA modes in granular
>> deployment
>>
>> We have a BP about single-controller HA [1]. HA with single controller
>> works just fine at the moment. So if you want to test Fuel on a minimum set
>> of nodes, you can do this on 3 nodes (Fuel master, controller, compute),
>> just like with Simple mode before. I suppose, it's time to finally drop
>> support for Simple mode in Fuel :)
>>
>> [1] https://blueprints.launchpad.net/fuel/+spec/single-controller-ha
>> [2]
>> https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
>>
>> --
>> Regards,
>> Aleksandr Didenko
>>
>>
>> On Tue, Aug 26, 2014 at 9:25 AM, Mike Scherbakov <
>> mscherba...@mirantis.com> wrote:
>>
>>> Definitely fuel spec is needed :)
>>>
>>>
>>> On Mon, Aug 25, 2014 at 8:45 PM, Evgeniy L  wrote:
>>>
 Hi Andrew,

 I have some comments regarding to you action items

 >> 2) Removing simple mode from the ui and tests
 >> 3) Removing simple mode support from nailgun (maybe we leave it) and
 cli

 We shouldn't do it, because nailgun should handle both versions of
 cluster.
 What we have to do here is to use openstack.yaml to keep all possible
 modes.
 For new release there will be only ha, to manage previous releases we
 have
 to create data migrations in nailgun to create the filed with modes
 i.e. multinode
 and ha.

 Also fixes for ui are required too, I think it mostly related to
 wizard, 'mode' tab
 where use can chose ha or non ha cluster in case of new release there
 should
 be only ha, and in case of old releases there should be ha and
 multinode.

 Thanks,



  On Mon, Aug 25, 2014 at 8:19 PM, Andrew Woodward 
 wrote:

>  Started a new thread so that we don't hijack the older thread.
>  as
>
>
>> Andrew, will you work on it in 6.0? What are remaining items there?
>> Also, it might affect our tests - simple mode runs faster so we use it 
>> for
>> smoke ISO test. Anastasia, please confirm that we can switch smoke to
>> one-ha-controller model, or even drop smoke at all and use BVT only
>> (running CentOS 3 HA controllers and same with Ubuntu).
>>
>
> The primary reason that we haven't disabled single yet is was due to
> [0] where we where having problems adding additional controllers. With the
> changes to galera and rabbit clustering it appears that we ended up fixing
> it already.
>
> The remaining issues are:
> 1) Ensuring we have good test coverage for the cases we expect to
> support [1]
> 2) Removing simple mode from the ui and tests
> 3) Removing simple mode support from nailgun (maybe we leave it) and
> cli
> 4) Updating documentation
>
> [0] https://bugs.launchpad.net/fuel/+bug/1350266
> [1] https://bugs.launchpad.net/fuel/+bug/1350266/comments/7
>
> --
> Andrew
> Mirantis
> Ceph community
>
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>>
>>> --
>>> Mike Scherbakov
>>> #mihgen
>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubsc

Re: [openstack-dev] [nova][NFV][qa] Testing NUMA, CPU pinning and large pages

2015-01-27 Thread Steve Gordon
- Original Message -
> From: "Vladik Romanovsky" 
> To: openstack-dev@lists.openstack.org
> 
> Hi everyone,
> 
> Following Steve Gordon's email [1], regarding CI for NUMA, SR-IOV, and other
> features, I'd like to start a discussion about the NUMA testing in
> particular.
> 
> Recently we have started a work to test some of these features.
> The current plan is to use the functional tests, in the Nova tree, to
> exercise
> the code paths for NFV use cases. In general, these will contain tests
> to cover various scenarios regarding NUMA, CPU pinning, large pages and
> validate a correct placement/scheduling.

Hi Vladik,

There was some discussion of the above at the Nova mid-cycle yesterday; are
you able to give a quick update on any progress with regard to the creation
of the above functional tests?

> In addition to the functional tests in Nova, we have also proposed two basic
> scenarios in Tempest [2][3]. One to make sure that an instance can boot
> with a
> minimal NUMA configuration (a topology that every host should have) and
> one that would request an "impossible" topology and fail with an expected
> exception.

We also discussed the above Tempest changes, and they will likely receive
some more review cycles as a result of this discussion, but it looks like
there is already some feedback from Nikola that needs to be addressed. More
broadly for the list, it looks like we need to determine whether adding a
negative test in this case is a valid/desirable use of Tempest.
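For context, the guest configuration these scenarios drive is expressed
through flavor extra specs; a rough sketch (flavor name made up, extra-spec
keys per the Kilo NUMA/CPU-pinning/large-pages work):

    nova flavor-create m1.numa auto 1024 10 2
    nova flavor-key m1.numa set hw:numa_nodes=1
    nova flavor-key m1.numa set hw:cpu_policy=dedicated
    nova flavor-key m1.numa set hw:mem_page_size=large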

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-27 Thread Sean Dague
On 01/27/2015 12:18 AM, Silvan Kaiser wrote:
> Hello!
> Do dependencies required only in some contexts belong into requirements.txt?
> 
> Yesterday we had a short discussion on #openstack-nova regarding how to
> handle optional requirements. This was triggered by our quobyte nova
> driver (https://review.openstack.org/#/c/110722/18), who requires xattr,
> which we therefore added to requirements.txt (as it is provided by the
> requirements project).
> 
> Points from the discussion:
> - If we add this we will be adding every requirement for every component
> ---> this becomes to big.
> - Remove this requirement, no optional entries in requirements.txt, a
> 'deployer' has to know what dependencies the components he wants to use have
> ---> Usually he does not know and installation becomes more issue prone
> - Other (in between) ideas???
> 
> Please note that this has some urgency, the change set referenced above
> has been in review for months and i'm trying to react asap on comments
> but the deadline is approaching (next week) and if i have to do bigger
> changes I'd like to know as fast as possible...

Typically the answer is no. The libvirt volume driver architecture is
kind of weird in that it doesn't actually split its drivers out into
separate files, which would otherwise provide a file-level boundary
(including loading) for these things.

That being said, in general, optional things are not in
requirements.txt. You will notice libvirt isn't even in
requirements.txt, because it's not a nova requirement.

Optional things should be handled in documentation at this point, and
the code should be structured to not fail when that's not installed if
it's not needed.

But also, staring at the code, I'm confused that xattr is used in only
one place, which is a fallback failure path. It seems like the code could
easily be redone to not need it and get the info from the mount table
instead, no?
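A rough sketch of that alternative -- purely illustrative, the helper name
and the filesystem-type check are assumptions rather than the driver's
actual code:

    def _is_mounted(mount_path, expected_fs_prefix='fuse.quobyte'):
        """Check /proc/mounts instead of relying on xattr.

        The filesystem type string is an assumption about how the
        Quobyte client registers its FUSE mounts.
        """
        with open('/proc/mounts') as mounts:
            for line in mounts:
                fields = line.split()
                # fields: device, mountpoint, fstype, options, dump, pass
                if len(fields) >= 3 and fields[1] == mount_path:
                    return fields[2].startswith(expected_fs_prefix)
        return False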

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-27 Thread Matt Riedemann



On 1/27/2015 2:18 AM, Silvan Kaiser wrote:

Hello!
Do dependencies required only in some contexts belong into requirements.txt?

Yesterday we had a short discussion on #openstack-nova regarding how to
handle optional requirements. This was triggered by our quobyte nova
driver (https://review.openstack.org/#/c/110722/18), who requires xattr,
which we therefore added to requirements.txt (as it is provided by the
requirements project).

Points from the discussion:
- If we add this we will be adding every requirement for every component
---> this becomes to big.
- Remove this requirement, no optional entries in requirements.txt, a
'deployer' has to know what dependencies the components he wants to use have
---> Usually he does not know and installation becomes more issue prone
- Other (in between) ideas???

Please note that this has some urgency, the change set referenced above
has been in review for months and i'm trying to react asap on comments
but the deadline is approaching (next week) and if i have to do bigger
changes I'd like to know as fast as possible...

Best regards
SIlvan Kaiser


--
*Quobyte* GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com 
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



In my opinion the volume driver is optional and therefore the dependency
is optional; it all depends on the configuration, the same as which DB or
RPC backend you use, which is why those dependencies are in
test-requirements.txt.


While it's more obvious to a deployer that if you're going to configure 
Nova to use MySQL you need some MySQL packages to make it work, any 
deployer that's adding support for this volume driver should also 
probably be testing their deployment scripts, e.g. chef 
cookbooks/recipes, and if they haven't written their script correctly 
they'll find out that it blows up with an ImportError because of a 
missing xattr.  Otherwise, [1].


[1] 
http://troll.me/images/the-most-interesting-man-in-the-world/i-dont-always-test-my-code-but-when-i-do-i-do-it-in-production.jpg


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][dev][Zuul] Merge Failed Error in CI - GitPython Error

2015-01-27 Thread Asselin, Ramy
Punith,

I think it’s because when you installed fresh again on 14.04, the zuul 
GitPython dependency was updated to the latest.

I proposed this zuul patch to fix it: https://review.openstack.org/#/c/149336/

Ramy

From: Punith S [mailto:punit...@cloudbyte.com]
Sent: Tuesday, January 27, 2015 4:33 AM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Openstack Team
Subject: Re: [openstack-dev] [Openstack][dev][Zuul] Merge Failed Error in CI - 
GitPython Error

Hi, the merge failure that we reported on Ubuntu 12.04 has been solved in
the new Ubuntu 14.04 CI master setup.

Many thanks to Ramy for the update of the repo :)
https://github.com/rasselin/os-ext-testing

cheers!

On Fri, Jan 16, 2015 at 4:58 PM, Punith S 
mailto:punit...@cloudbyte.com>> wrote:

Hi stackers,

I'm running CI for OpenStack Cinder patches. When Zuul reads a new patch
from Gerrit, it tries to merge the patch into its own Cinder repo at
/var/lib/zuul/git/openstack/cinder

and returns the following comment to Gerrit:

[inline screenshot: Gerrit "Merge Failed" comment]

Hence it is failing to trigger the Tempest job in Jenkins via Gearman.

On checking the trace, the error seems to come from GitPython 0.3.2.1.

The zuul/merger-debug log snapshot: http://paste.openstack.org/show/158218/


raise ValueError("Failed to parse line: %r" % line)
ValueError: Failed to parse line: 'Total 9 (delta 7), reused 9 (delta 7)''


Is this a problem with GitPython?

I'm using Ubuntu 12.04 and git 1.7.


thanks

regards,

punith s
cloudbyte.com



--
regards,

punith s
cloudbyte.com



--
regards,

punith s
cloudbyte.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack][infra] Zuul-Merger Error in CI

2015-01-27 Thread Asselin, Ramy
FYI, it wasn’t a change to my repo, but to openstack upstream that solved the 
12.04 --> 14.04 issue: https://review.openstack.org/#/c/141518/
So this will help anyone using the Openstack-Infra zuul puppet modules!
Ramy

From: Punith S [mailto:punit...@cloudbyte.com]
Sent: Tuesday, January 27, 2015 4:32 AM
To: openstack-in...@lists.openstack.org; OpenStack Development Mailing List 
(not for usage questions)
Cc: Asselin, Ramy
Subject: Re: [Openstack][infra] Zuul-Merger Error in CI

Hi, the merge failure that we reported on Ubuntu 12.04 has been solved in
the new Ubuntu 14.04 CI master setup.

Many thanks to Ramy for the update of the repo :)
https://github.com/rasselin/os-ext-testing

cheers!

On Thu, Jan 8, 2015 at 7:16 PM, Punith S 
mailto:punit...@cloudbyte.com>> wrote:
hi,

I'm running CI for OpenStack Cinder patches. When Zuul reads a new patch
from Gerrit, it tries to merge the patch into its own Cinder repo at
/var/lib/zuul/git/openstack/cinder

and returns the following comment to Gerrit:

[inline screenshot: Gerrit "Merge Failed" comment]

Hence it is failing to trigger the dsvm-tempest job in Jenkins via Gearman.

The zuul/merger-debug log snapshot:

2015-01-08 19:03:43,320 DEBUG zuul.MergeServer: Got merge job.
2015-01-08 19:03:43,321 DEBUG zuul.Merger: Merging for change 145778,1.
2015-01-08 19:03:43,321 DEBUG zuul.Merger: Processing refspec 
refs/changes/78/145778/1 for project openstack/cinder / master ref 
Zbd4a4ad6ff3741c68ce382afa6d8df84
2015-01-08 19:03:43,383 DEBUG zuul.Merger: Unable to find commit for ref 
master/Zbd4a4ad6ff3741c68ce382afa6d8df84
2015-01-08 19:03:43,384 DEBUG zuul.Merger: No base commit found for 
(u'openstack/cinder', u'master')
2015-01-08 19:03:43,384 DEBUG zuul.Repo: Resetting repository 
/var/lib/zuul/git/openstack/cinder
2015-01-08 19:03:43,385 DEBUG zuul.Repo: Updating repository 
/var/lib/zuul/git/openstack/cinder
2015-01-08 19:03:54,507 DEBUG zuul.Repo: Checking out 
5993660498f44e96b0f35ccc0f4d4a4c7b430363
2015-01-08 19:04:02,685 ERROR zuul.Merger: Exception while merging a change:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/zuul/merger/merger.py", line 
234, in _mergeChange
commit = repo.merge(item['refspec'], 'resolve')
  File "/usr/local/lib/python2.7/dist-packages/zuul/merger/merger.py", line 
132, in merge
self.fetch(ref)
  File "/usr/local/lib/python2.7/dist-packages/zuul/merger/merger.py", line 
145, in fetch
origin.fetch(ref)
  File "/usr/local/lib/python2.7/dist-packages/git/remote.py", line 598, in 
fetch
return self._get_fetch_info_from_stderr(proc, progress or RemoteProgress())
  File "/usr/local/lib/python2.7/dist-packages/git/remote.py", line 540, in 
_get_fetch_info_from_stderr
for err_line, fetch_line in zip(fetch_info_lines, fetch_head_info))
  File "/usr/local/lib/python2.7/dist-packages/git/remote.py", line 540, in 

for err_line, fetch_line in zip(fetch_info_lines, fetch_head_info))
  File "/usr/local/lib/python2.7/dist-packages/git/remote.py", line 252, in 
_from_line
raise ValueError("Failed to parse line: %r" % line)
ValueError: Failed to parse line: 'Total 7 (delta 5), reused 7 (delta 5)'


Is this a problem with GitPython?

I'm using Ubuntu 12.04 and git 1.7.


thanks in advance
--
regards,

punith s
cloudbyte.com



--
regards,

punith s
cloudbyte.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-27 Thread Jay Pipes

On 01/27/2015 07:14 AM, Matt Riedemann wrote:

On 1/27/2015 2:18 AM, Silvan Kaiser wrote:

Hello!
Do dependencies required only in some contexts belong into
requirements.txt?

Yesterday we had a short discussion on #openstack-nova regarding how to
handle optional requirements. This was triggered by our quobyte nova
driver (https://review.openstack.org/#/c/110722/18), who requires xattr,
which we therefore added to requirements.txt (as it is provided by the
requirements project).

Points from the discussion:
- If we add this we will be adding every requirement for every component
---> this becomes to big.
- Remove this requirement, no optional entries in requirements.txt, a
'deployer' has to know what dependencies the components he wants to
use have
---> Usually he does not know and installation becomes more issue prone
- Other (in between) ideas???

Please note that this has some urgency, the change set referenced above
has been in review for months and i'm trying to react asap on comments
but the deadline is approaching (next week) and if i have to do bigger
changes I'd like to know as fast as possible...

Best regards
SIlvan Kaiser


--
*Quobyte* GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com 
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender


__

OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



In my opinion the volume driver is optional and therefore the dependency
is optional, it's all based on what the configuration is, the same as
which DB or RPC backend you use, which is why those dependencies are in
test-requirements.txt.

While it's more obvious to a deployer that if you're going to configure
Nova to use MySQL you need some MySQL packages to make it work, any
deployer that's adding support for this volume driver should also
probably be testing their deployment scripts, e.g. chef
cookbooks/recipes, and if they haven't written their script correctly
they'll find out that it blows up with an ImportError because of a
missing xattr.  Otherwise, [1].


Couple things...

a) I agree with Sean and Matt here that this is an optional dependency 
and belongs in the deployment documentation and configuration management 
manifests.


b) The Glance API image cache can use xattr if SQLite is not desired 
[1], and Glance does *not* list xattr as a dependency in 
requirements.txt. Swift also has a dependency on python-xattr [2]. So, 
this particular Python library is not an unknown by any means.


c) Remember that even if you install python-xattr, that still doesn't 
mean it will automatically work. You still need to enable a filesystem 
that supports atime (i.e. noatime must not be set in fstab for the 
filesystem) [3]. Just an FYI.


Best,
-jay

[1] 
https://github.com/openstack/glance/blob/master/glance/image_cache/drivers/xattr.py

[2] https://github.com/openstack/swift/blob/master/requirements.txt#L11
[3] 
https://github.com/openstack/glance/blob/master/glance/image_cache/drivers/xattr.py#L23-L24


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Denis Makogon
On Thu, Jan 15, 2015 at 8:56 PM, Doug Hellmann 
wrote:

>
> > On Jan 15, 2015, at 1:30 PM, Denis Makogon 
> wrote:
> >
> > Good day to All,
> >
> > The question that i’d like to raise here is not simple one, so i’d like
> to involve as much readers as i can. I’d like to speak about oslo.messaging
> performance testing. As community we’ve put lots of efforts in making
> oslo.messaging widely used drivers stable as much as possible. Stability is
> a good thing, but is it enough for saying “works well”? I’d say that it’s
> not.
> > Since oslo.messaging uses driver-based messaging workflow, it makes
> sense to dig into each driver and collect all required/possible performance
> metrics.
> > First of all, it does make sense to figure out how to perform
> performance testing, first that came into my mind is to simulate high load
> on one of corresponding drivers. Here comes the question of how it can be
> accomplished withing available oslo.messaging tools - high load on any
> driver can perform an application that:
> >   • can populate multiple emitters(rpc clients) and consumers (rpc
> servers).
> >   • can force clients to send messages of pre-defined number of
> messages of any length.
>
> That makes sense.
>
> > Another thing is why do we need such thing. Profiling, performance
> testing can improve the way in which our drivers were implemented. It can
> show us actual “bottlenecks” in messaging process, in general. In some
> cases it does make sense to figure out where problem takes its place -
> whether AMQP causes messaging problems or certain driver that speaks to
> AMQP fails.
> > Next thing that i want to discuss the architecture of
> profiling/performance testing. As i can see it seemed to be a “good” way to
> add profiling code to each driver. If there’s any objection or better
> solution, please bring them to the light.
>
> What sort of extra profiling code do you anticipate needing?
>
>
As far as I can foresee (taking [1] into account), a couple of decorators,
possibly one that handles the metering process. The biggest part of the code
will be the high-load tool that will become part of messaging. Another open
question is adding the required dependencies to the project.
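As a sketch of what such a decorator could look like (the names and the
collector interface are placeholders, not a proposed oslo.messaging API):

    import functools
    import time

    def timed(metric_name, collector):
        """Record the wall-clock duration of each call into 'collector'."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.time()
                try:
                    return func(*args, **kwargs)
                finally:
                    collector.setdefault(metric_name, []).append(
                        time.time() - start)
            return wrapper
        return decorator

    # Usage sketch:
    #   timings = {}
    #   @timed('driver.send', timings)
    #   def send(target, message): ...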


> > Once we’d have final design for profiling we would need to figure out
> tools for profiling. After searching over the web, i found pretty
> interesting topic related to python profiling [1]. After certain
> investigations it does makes sense discuss next profiling options(apply one
> or both):
> >   • Line-by-line timing and execution frequency with a profiler
> (there are possible Pros and Cons, but i would say the per-line statistics
> is more than appreciable at initial performance testing steps)
> >   • Memory/CPU consumption
> > Metrics. The most useful metric for us is time, any time-based metric,
> since it is very useful to know at which step or/and by whom delay/timeout
> caused, for example, so as it said, we would be able to figure out whether
> AMQP or driver fails to do what it was designed for.
> > Before proposing spec i’d like to figure out any other requirements, use
> cases and restrictions for messaging performance testing. Also, if there
> any stories of success in boosting python performance - feel free to share
> it.
>
> The metrics to measure depend on the goal. Do we think the messaging code
> is using too much memory? Is it too slow? Or is there something else
> causing concern?
>
It does make sense to have profiling when trying to scale up a cluster, and
it would be good to be able to figure out whether the scaled AMQP service
has its best configuration (I guess here comes the question of doing
performance testing with well-known tools). The most interesting question is
how much the messaging driver reduces (or leaves untouched) the throughput
between the RPC client and server. These metering results can then be
compared against tools designed for performance testing. That's why adding
profiling/performance testing based on a high-load technique would be a good
step forward.


> >
> >
> >
> > [1] http://www.huyng.com/posts/python-performance-analysis/
> >
> > Kind regards,
> > Denis Makogon
> > IRC: denis_makogon
> > dmako...@mirantis.com
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Kind regards,
Denis M.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-re

Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-27 Thread Ben Nemec
On 01/27/2015 02:18 AM, Silvan Kaiser wrote:
> Hello!
> Do dependencies required only in some contexts belong into requirements.txt?
> 
> Yesterday we had a short discussion on #openstack-nova regarding how to
> handle optional requirements. This was triggered by our quobyte nova driver
> (https://review.openstack.org/#/c/110722/18), who requires xattr, which we
> therefore added to requirements.txt (as it is provided by the requirements
> project).
> 
> Points from the discussion:
> - If we add this we will be adding every requirement for every component
> ---> this becomes to big.
> - Remove this requirement, no optional entries in requirements.txt, a
> 'deployer' has to know what dependencies the components he wants to use have
> ---> Usually he does not know and installation becomes more issue prone
> - Other (in between) ideas???
> 
> Please note that this has some urgency, the change set referenced above has
> been in review for months and i'm trying to react asap on comments but the
> deadline is approaching (next week) and if i have to do bigger changes I'd
> like to know as fast as possible...
> 
> Best regards
> SIlvan Kaiser

I will just put in another plug for some work I had started around this
(and never finished :-( ).

https://review.openstack.org/#/c/83150/ (despite the Jenkins results,
I'm pretty sure that was working locally for me)

Also the discussion at
http://lists.openstack.org/pipermail/openstack-dev/2014-February/026976.html

There was general consensus on the approach and support has been in pbr
for a long time now, but I've never had time to figure out what would be
needed to make it play nicely with global requirements.  I _think_
that's the only major blocker preventing this from being useful, so if
someone wanted to pick it up and run with it that would be awesome.
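If I recall correctly, the pbr mechanism in question is the [extras]
section of setup.cfg; a sketch (the group name and version pin are
illustrative only):

    [extras]
    quobyte =
        xattr>=0.4

which would then be installed on demand with "pip install .[quobyte]",
letting optional backends declare their extra dependencies without forcing
them on every deployment.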

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] required libvirtd/qemu versions for numa support?

2015-01-27 Thread Kashyap Chamarthy
On Mon, Jan 26, 2015 at 03:37:48PM -0800, Jay Pipes wrote:
> On 01/26/2015 07:33 AM, Chris Friesen wrote:
> >Hi,
> >
> >I'm interested in the recent work around NUMA support for guest
> >instances
> >(https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement), but
> >I'm having some difficulty figuring out what versions of libvirt and
> >qemu are required.
> >
> > From the research that I've done it seems like qemu 2.1 might be
> >required, but I've been unable to find a specific version listed in the
> >nova requirements or in the openstack global requirements.  Is it there
> >and I just can't find it?

Although the MIN_LIBVIRT_NUMA_TOPOLOGY_VERSION constant points to libvirt
1.0.4, I think the newer the QEMU/libvirt, the better off you'll be.
Specifically, upstream libvirt has had some fixes in the NUMA placement/vCPU
pinning area -- so maybe you can pick the newest upstream release, 1.2.12,
which was released today. (If you use Fedora, it's already available in the
Fedora Rawhide distribution.)
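To check what a given host actually provides, something like this is
usually enough:

    $ libvirtd --version
    $ virsh version
    $ qemu-system-x86_64 --version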

> >If it's not specified, and yet openstack relies on it, perhaps it should
> >be added.  (Or at least documented somewhere.)
> 
> Hi Chris,
> 
> The constants starting here:
> 
> http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n340
> 
> should answer your questions.
> 
> All the best,
> -jay
 

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-27 Thread Daniel P. Berrange
On Tue, Jan 27, 2015 at 07:12:29AM -0800, Sean Dague wrote:
> On 01/27/2015 12:18 AM, Silvan Kaiser wrote:
> > Hello!
> > Do dependencies required only in some contexts belong into requirements.txt?
> > 
> > Yesterday we had a short discussion on #openstack-nova regarding how to
> > handle optional requirements. This was triggered by our quobyte nova
> > driver (https://review.openstack.org/#/c/110722/18), who requires xattr,
> > which we therefore added to requirements.txt (as it is provided by the
> > requirements project).
> > 
> > Points from the discussion:
> > - If we add this we will be adding every requirement for every component
> > ---> this becomes to big.
> > - Remove this requirement, no optional entries in requirements.txt, a
> > 'deployer' has to know what dependencies the components he wants to use have
> > ---> Usually he does not know and installation becomes more issue prone
> > - Other (in between) ideas???
> > 
> > Please note that this has some urgency, the change set referenced above
> > has been in review for months and i'm trying to react asap on comments
> > but the deadline is approaching (next week) and if i have to do bigger
> > changes I'd like to know as fast as possible...
> 
> Typically the answer is no. The libvirt volume driver architecture is
> kind of weird in the fact that it doesn't actually split it's drivers
> out into separate files, so there can be a file level boundry (including
> loading) for these things.

Not immediately relevant, but in the L cycle, I'm hoping to finally
split the volume.py up into a subdirectory of files. The nova.conf
'volume_drivers' config will be removed by thenm, avoiding the back
compat class naming problem we'd have if we split it in Kilo

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Bulk network operations

2015-01-27 Thread Mohammad Banikazemi
Saksham, The Neutron bulk operations are by definition atomic [1]: "Bulk
operations are always performed atomically, meaning that either all or none
of the objects in the request body are created."

Best,

Mohammad

[1] https://wiki.openstack.org/wiki/Neutron/APIv2-specification
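For illustration, a bulk request is just the plural collection in a single
body (names made up):

    POST /v2.0/networks
    {
        "networks": [
            {"name": "net-a", "admin_state_up": true},
            {"name": "net-b", "admin_state_up": true}
        ]
    }

If creating "net-b" fails, "net-a" must not be left behind either.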



From:   "Saksham Varma (sakvarma)" 
To: "OpenStack Development Mailing List (not for usage questions)"

Date:   01/26/2015 08:01 PM
Subject:Re: [openstack-dev] [Neutron] Bulk network operations



Modifying the subject line.

From: Saksham Varma 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
openstack-dev@lists.openstack.org>
Date: Tuesday, January 27, 2015 at 5:21 AM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] Bulk network operations

Hi,

I was working on bulk network operations for one of Cisco's plugins. I was
wondering how other plugins handle failures in bulk requests. Are all the
requests in the bulk payload rolled back if any one of them fails (an atomic
strategy)? Or is it best effort, which is not necessarily atomic?

Thanks,
Saksham
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Policy][Group-based-policy] Policy violations investigation

2015-01-27 Thread Ariel Zeitlin
Hi,
I want to propose the idea of investigating policy violations (for
white-list policies defined by GBP) by, for instance, redirecting the
violating sessions to a honeypot.
Meaning that if the only communication allowed between Group A and Group B
is on port 80 (as described in the GBP), then an access attempt on port 22
from Group A to Group B will be redirected to and answered by a honeypot
that will investigate the real reason for the policy violation, or simply
log and drop the violating connection attempt.

In the tightly defined policy world achieved through GBP, an attacker
trying to propagate inside the network is more likely to hit a wall and
thereby create a "golden lead" for detection.

Do you think this concept can/should be part of GBP, and what would be the
best way to promote it? (Sorry, I am pretty new to OpenStack and to GBP
specifically.)

Thanks,
Ariel
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Running tox on Centos 6.5 with Python26

2015-01-27 Thread Matt Riedemann



On 1/15/2015 7:49 PM, Joe Gordon wrote:



On Fri, Jan 16, 2015 at 12:13 PM, John Warren
mailto:jswar...@linux.vnet.ibm.com>> wrote:


Can someone tell me or point me to documentation about what the
required yum and pip packages are to be able to run tox for glance,
keystone, neutron, cinder and nova on Centos 6.5 with Python 2.6?  I
looked at the openstack-infra/puppet-jenkins project and thought I
installed everything in the slave.pp manifest, which presumably is
sufficient to be able to run tox for all openstack projects, but
when I try to run tox on nova (tox --recreate -epy26) I just get an
error that indicates that no test cases were found.


trunk OpenStack services (nova, neutron, keystone, cinder etc.)  no
longer support python 26

http://lists.openstack.org/pipermail/openstack-dev/2014-November/051551.html
https://review.openstack.org/#/c/128736/


Any help would be greatly appreciated.

John



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
OpenStack-dev-request@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



What Joe said, python 2.6 isn't supported starting in Kilo for the 
server projects.


If you want to do this for < Kilo, the nova native package dependencies 
are here:


http://docs.openstack.org/developer/nova/devref/development.environment.html#linux-systems

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-01-27 Thread Tomasz Napierala
+1, long awaited


> On 27 Jan 2015, at 14:05, Aleksandr Didenko  wrote:
> 
> Hi,
> 
> After starting implementing granular deployment we've faced a bunch of issues 
> that would make further development of this feature much more complicated if 
> we have to support both Simple and HA deployment modes. For example: simple 
> mode does not require cluster (corosync, pacemaker, vips, etc), so we had to 
> skip this task for Simple mode somehow - we can use conditional tasks, or 
> conditional manifests in our tasks, or create separate task graphs for 
> different deployment modes, etc - either way it's pretty much doubling the 
> amount of work for some parts of Fuel and our development cycle.
> 
> At the moment, CI blocks us from further development of fuel-library 
> modularization BP [2] because we still use Simple mode in CI. So in order to 
> proceed with this BP we have two options:
> 
> 1) remove Simple mode from CI/QA and thus drop it completely from Fuel
> 2) double our efforts to support both Simple and HA modes in granular 
> deployment
> 
> We have a BP about single-controller HA [1]. HA with single controller works 
> just fine at the moment. So if you want to test Fuel on a minimum set of 
> nodes, you can do this on 3 nodes (Fuel master, controller, compute), just 
> like with Simple mode before. I suppose, it's time to finally drop support 
> for Simple mode in Fuel :)
> 
> [1] https://blueprints.launchpad.net/fuel/+spec/single-controller-ha
> [2] https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
> 
> --
> Regards,
> Aleksandr Didenko
> 
> 
> On Tue, Aug 26, 2014 at 9:25 AM, Mike Scherbakov  
> wrote:
> Definitely fuel spec is needed :)
> 
> 
> On Mon, Aug 25, 2014 at 8:45 PM, Evgeniy L  wrote:
> Hi Andrew, 
> 
> I have some comments regarding your action items
> 
> >> 2) Removing simple mode from the ui and tests
> >> 3) Removing simple mode support from nailgun (maybe we leave it) and cli
> 
> We shouldn't do it, because nailgun should handle both versions of cluster.
> What we have to do here is to use openstack.yaml to keep all possible modes.
> For new release there will be only ha, to manage previous releases we have
> to create data migrations in nailgun to create the field with modes, i.e. 
> multinode
> and ha.
> 
> Also fixes for the ui are required too; I think it's mostly related to the
> wizard's 'mode' tab,
> where the user can choose an ha or non-ha cluster. In case of a new release
> there should
> be only ha, and in case of old releases there should be ha and multinode.
> 
> Thanks,
> 
> 
> 
> On Mon, Aug 25, 2014 at 8:19 PM, Andrew Woodward  wrote:
> Started a new thread so that we don't hijack the older thread.
> 
> Andrew, will you work on it in 6.0? What are remaining items there? Also, it 
> might affect our tests - simple mode runs faster so we use it for smoke ISO 
> test. Anastasia, please confirm that we can switch smoke to one-ha-controller 
> model, or even drop smoke at all and use BVT only (running CentOS 3 HA 
> controllers and same with Ubuntu).
> 
> The primary reason that we haven't disabled single yet was due to [0], 
> where we were having problems adding additional controllers. With the 
> changes to galera and rabbit clustering it appears that we ended up fixing it 
> already.
> 
> The remaining issues are:
> 1) Ensuring we have good test coverage for the cases we expect to support [1]
> 2) Removing simple mode from the ui and tests
> 3) Removing simple mode support from nailgun (maybe we leave it) and cli
> 4) Updating documentation
> 
> [0] https://bugs.launchpad.net/fuel/+bug/1350266
> [1] https://bugs.launchpad.net/fuel/+bug/1350266/comments/7
> 
> -- 
> Andrew
> Mirantis
> Ceph community
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> -- 
> Mike Scherbakov
> #mihgen
> 
> 
> ___
> OpenStack-dev mailing list
> OpenStack-dev@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Tomasz 'Zen' Napierala
Sr. OpenStack Engineer
tnapier...@mirantis.com







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-27 Thread Doug Hellmann
The infra team has been working hard to update our Python 3 testing for all 
projects to run on 3.4 instead of 3.3. Two of the last projects to be able to 
shift are oslo.messaging and oslo.rootwrap. The test suites for both projects 
trigger a segfault bug in the 3.4 interpreter as it is shipped on Ubuntu 
Trusty. The fix for the segfault is already available upstream, and the team at 
Canonical is working on packaging a new release, but our schedules are out of 
sync. Maintaining a separate image and pool of testing nodes for 3.3 testing of 
just these two projects is going to be a bit of a burden, and so the infra team 
has asked if we’re willing to turn off the 3.3 jobs for the two projects, 
leaving us without 3.x testing in the gate until the 3.4 interpreter on Trusty 
is updated.

The latest word from Canonical is that they plan to package Python 3.4.3, due 
to be released in about a month. It will take some additional time to put it 
through their release process, and so there’s some uncertainty about how long 
we would be without 3.x gate jobs, but it doesn’t look like it will be 
indefinitely.

To mitigate that risk, fungi has suggested starting to work on Debian Jessie 
worker images, which would include a version of Python 3.4 that doesn’t have 
the segfault issue. His goal is to have something working by around the end of 
March. That gives Canonical up to a month to release the 3.4.3 package before 
we would definitely move those tests to Debian. Whether we move any of the 
other projects, or would move anyway if fungi gets Debian working more quickly 
than he expects, would remain to be seen.

Although we do have some risk of introducing Python 3 regressions into the two 
libraries, I am inclined to go along with the infra team’s request and disable 
the tests for a short period of time. The rootwrap library doesn’t see a lot of 
changes, and we can rely on the messaging lib devs to run tests locally for a 
little while.

Before I give the go-ahead, I want to hear concerns from the rest of the team. 
Let’s try to have an answer by the 29th (Thursday).

Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][heat] potentially breaking release of oslo.messaging Tuesday 27th

2015-01-27 Thread Doug Hellmann


On Mon, Jan 26, 2015, at 06:57 PM, Angus Salkeld wrote:
> On Tue, Jan 27, 2015 at 8:05 AM, Doug Hellmann 
> wrote:
> 
> > We’ve held up the oslo.messaging release with the namespace package work
> > for a while now while we work with the nova, designate, and heat teams to
> > fix things up so their tests won’t break. We think the one remaining issue
> > is in heat, where some tests are mocking private parts of oslo.messaging.
> >
> > There’s a bug filed at https://bugs.launchpad.net/heat/+bug/1412836
> >
> > I think asalkeld fixed similar tests in
> > https://review.openstack.org/#/c/145094/1/heat/tests/test_stack_lock.py
> > but missed these at the time.
> >
> > I would propose a fix, but I really don’t understand the test suite or
> > what’s going on there. If someone else does propose a fix, please ping me
> > in #openstack-oslo on IRC and I’ll test the pre-release against it to see
> > if the issue is resolved.
> >
> >
> This should sort it out: https://review.openstack.org/150185
> 
> -Angus

Thanks!

Doug

> 
> 
> > I plan to release the new version of oslo.messaging around 15:00 UTC on 27
> > Jan, but I will wait if there’s a fix in heat’s merge queue.
> >
> > Doug
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-27 Thread Mike Bayer


Doug Hellmann  wrote:

> The infra team has been working hard to update our Python 3 testing for all 
> projects to run on 3.4 instead of 3.3. Two of the last projects to be able to 
> shift are oslo.messaging and oslo.rootwrap. The test suites for both projects 
> trigger a segfault bug in the 3.4 interpreter as it is shipped on Ubuntu 
> Trusty. The fix for the segfault is already available upstream, and the team 
> at Canonical is working on packaging a new release, but our schedules are out 
> of sync. Maintaining a separate image and pool of testing nodes for 3.3 
> testing of just these two projects is going to be a bit of a burden, and so 
> the infra team has asked if we’re willing to turn off the 3.3 jobs for the 
> two projects, leaving us without 3.x testing in the gate until the 3.4 
> interpreter on Trusty is updated.
> 
> The latest word from Canonical is that they plan to package Python 3.4.3, due 
> to be released in about a month. It will take some additional time to put it 
> through their release process, and so there’s some uncertainty about how long 
> we would be without 3.x gate jobs, but it doesn’t look like it will be 
> indefinitely.
> 
> To mitigate that risk, fungi has suggested starting to work on Debian Jessie 
> worker images, which would include a version of Python 3.4 that doesn’t have 
> the segfault issue. His goal is to have something working by around the end 
> of March. That gives Canonical up to a month to release the 3.4.3 package 
> before we would definitely move those tests to Debian. Whether we move any of 
> the other projects, or would move anyway if fungi gets Debian working more 
> quickly than he expects, would remain to be seen.
> 
> Although we do have some risk of introducing Python 3 regressions into the 
> two libraries, I am inclined to go along with the infra team’s request and 
> disable the tests for a short period of time. The rootwrap library doesn’t 
> see a lot of changes, and we can rely on the messaging lib devs to run tests 
> locally for a little while.
> 
> Before I give the go-ahead, I want to hear concerns from the rest of the 
> team. Let’s try to have an answer by the 29th (Thursday).

I’m not involved with those two subprojects but as an Oslo member I’d +1 
disabling python3.x for now.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-27 Thread victor stinner
Hi,

What is the Python bug? Do you have a reference to the bug report and the patch?

Python 3.4.3 release schedule:
"3.4.3rc1 will be tagged Saturday February 7 and released Sunday February 8.  
3.4.3 final will follow two weeks later, tagged Saturday February 21 and 
released Sunday February 22."
https://mail.python.org/pipermail/python-dev/2015-January/137773.html

Victor

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Mumble server for the mid-cycle meetup

2015-01-27 Thread Michael Still
Today's hangout:

https://plus.google.com/hangouts/_/gwjbog3l3omtk2f4tt5s5v4hn4a

Michael

On Tue, Jan 27, 2015 at 9:11 AM, Michael Still  wrote:
> For reference, that's because we were having lunch. It seems to be
> working well again.
>
> Michael
>
> On Tue, Jan 27, 2015 at 7:24 AM, Robert Collins
>  wrote:
>> every participant is muted... so there's no sound :(
>>
>> On 27 January 2015 at 07:42, Michael Still  wrote:
>>> Sigh, we had troubles with mumble being unreliable, so now we're
>>> playing with google hangouts:
>>>
>>> https://plus.google.com/hangouts/_/gvieuyvxsvpvvsgs2vdmtqtbbea
>>>
>>> Michael
>>>
>>> On Tue, Jan 27, 2015 at 4:18 AM, Michael Still  wrote:
 As an experiment, I've put the meetup onto a mumble server at
 nova.rcbops.com. The server password is "midcycle".

 At the least this should let people listen in and hopefully comment if
 they need to.

 Michael

 --
 Rackspace Australia
>>>
>>>
>>>
>>> --
>>> Rackspace Australia
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Robert Collins 
>> Distinguished Technologist
>> HP Converged Cloud
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Rackspace Australia



-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-27 Thread Julien Danjou
On Tue, Jan 27 2015, Doug Hellmann wrote:

> The infra team has been working hard to update our Python 3 testing for all
> projects to run on 3.4 instead of 3.3. Two of the last projects to be able to
> shift are oslo.messaging and oslo.rootwrap. The test suites for both projects
> trigger a segfault bug in the 3.4 interpreter as it is shipped on Ubuntu
> Trusty. The fix for the segfault is already available upstream, and the team 
> at
> Canonical is working on packaging a new release, but our schedules are out of
> sync. Maintaining a separate image and pool of testing nodes for 3.3 testing 
> of
> just these two projects is going to be a bit of a burden, and so the infra 
> team
> has asked if we’re willing to turn off the 3.3 jobs for the two projects,
> leaving us without 3.x testing in the gate until the 3.4 interpreter on Trusty
> is updated.

Isn't there any way to have an ugly work-around in those libs that
wouldn't trigger the Python 3.4 segfault or at least disable the
responsible tests until Python 3.4 gets fixed?

-- 
Julien Danjou
# Free Software hacker
# http://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] do we really need project tags in the governance repository?

2015-01-27 Thread Doug Hellmann


On Tue, Jan 27, 2015, at 05:46 AM, Thierry Carrez wrote:
> Doug Hellmann wrote:
> > On Mon, Jan 26, 2015, at 12:02 PM, Thierry Carrez wrote:
> > [...]
> >> I'm open to alternative suggestions on where the list of tags, their
> >> definition and the list projects they apply to should live. If you don't
> >> like that being in the governance repository, what would have your
> >> preference ?
> > 
> > From the very beginning I have taken the position that tags are by
> > themselves not sufficiently useful for evaluating projects. If someone
> > wants to choose between Ceilometer, Monasca, or StackTach, we're
> > unlikely to come up with tags that will let them do that. They need
> > in-depth discussions of deployment options, performance characteristics,
> > and feature trade-offs.
> 
> They are still useful to give people a chance to discover that those 3
> are competing in the same space, and potentially get an idea of which
> one (if any) is deployed on more than one public cloud, better
> documented, or security-supported. I agree with you that an
> (opinionated) article comparing those 3 solutions would be a nice thing
> to have, but I'm just saying that basic, clearly-defined reference
> project metadata still has a lot of value, especially as we grow the
> number of projects.

Right. My main argument is that this isn't something for the TC to do,
not that it shouldn't be done. I'm not convinced it's that useful, but I
don't have a problem if someone else does it.

> 
> >> That said, I object to only saying "this is all information that can be
> >> found elsewhere or should live elsewhere", because that is just keeping
> >> the current situation -- where that information exists somewhere but
> >> can't be efficiently found by our downstream consumers. We need a
> >> taxonomy and clear definitions for tags, so that our users can easily
> >> find, understand and navigate such project metadata.
> > 
> > As someone new to the project, I would not think to look in the
> > governance documents for "state" information about a project. I would
> > search for things like "install guide openstack" or "component list
> > openstack" and expect to find them in the documentation. So I think
> > putting the information in those (or similar) places will actually make
> > it easier to find for someone that hasn't been involved in the
> > discussion of tags and the governance repository.
> 
> The idea here is to have the reference information in some
> Gerrit-controlled repository (currently openstack/governance, but I'm
> open to moving this elsewhere), and have that reference information
> consumed by the openstack.org website when you navigate to the
> "Software" section, to present a browseable/searchable list of projects
> with project metadata. I don't expect anyone to read the YAML file from
> the governance repository. On the other hand, the software section of
> the openstack.org website is by far the most visited page of all our web
> properties, so I expect most people to see that.

Right, I didn't think anyone would be reading the YAML file either. I
didn't realize we were planning to publish the information anywhere
other than the published version of the governance docs. That's not
really the crux of my argument, though.

> 
> > If we need a component list with descriptions, let's build that. It can
> > be managed by a team of interested parties -- perhaps some of the
> > operators or deployers, for example. I don't know if we have an existing
> > place where it would make sense to put it, or if we need a new
> > repository.
> > 
> > We've been applying DRY to the existing projects/programs
> > list and saying that because we already have a list in the governance
> > repository we shouldn't repeat that information elsewhere, but we're
> > also starting to go to a lot of lengths to define a format to hold
> > information (tags, with metadata, a taxonomy, etc.) that isn't needed
> > for project governance. That makes me think we're trying to force-fit
> > this idea into a single list.
> 
> If I understand you correctly, you'd like to have the project teams list
> (previously known as programs) in the governance repository, together
> with the list of their associated code repositories. Then you would have
> a duplicate list of code repositories, with their associated tag
> metadata, in some other repository. I understand the limits of DRY, but

No. I don't think a product list needs to have a list of repositories.
It only needs a list of products. In some cases those map 1:1, but not
in all cases. In any case, I would expect the Nova team to want their
product called "Nova" and not "openstack/nova". For teams with more than
one product, or more than one repository creating a single product, we
need the product names somewhere anyway. So let's just make a list of
product names somewhere. We can use tags or sentences or whatever to
describe them. But doing that is just writing product documentation. It
isn't 

Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-27 Thread Silvan Kaiser

> Am 27.01.2015 um 16:51 schrieb Jay Pipes :
> 

[…snip...]

> a) I agree with Sean and Matt here that this is an optional dependency and 
> belongs in the deployment documentation and configuration management manifests
> 
> b) The Glance API image cache can use xattr if SQLite is not desired [1], and 
> Glance does *not* list xattr as a dependency in requirements.txt. Swift also 
> has a dependency on python-xattr [2]. So, this particular Python library is 
> not an unknown by any means.
Do you happen to know how Glance handles this if the dep. is not handled in 
requirements.txt?

> 
> c) Remember that even if you install python-xattr, that still doesn't mean it 
> will automatically work. You still need to enable a filesystem that supports 
> atime (i.e. noatime must not be set in fstab for the filesystem) [3]. Just an 
> FYI.
In our case the Quobyte file system does support that metadata and we use it to 
verify that a QB volume was mounted.
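
A minimal sketch of such a mount check (the attribute name below is invented
for illustration, not the one the driver actually uses; it assumes the
optional xattr package is installed):

    import xattr

    def validate_volume(mount_point):
        try:
            xattr.getxattr(mount_point, "user.example.volume_ok")
        except (IOError, OSError):
            raise RuntimeError("%s does not look like a mounted volume"
                               % mount_point)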

Best regards
Silvan


> 
> Best,
> -jay
> 
> [1] 
> https://github.com/openstack/glance/blob/master/glance/image_cache/drivers/xattr.py
> [2] https://github.com/openstack/swift/blob/master/requirements.txt#L11
> [3] 
> https://github.com/openstack/glance/blob/master/glance/image_cache/drivers/xattr.py#L23-L24
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 

--
*Quobyte* GmbH
Boyenstr. 41 - 10115 Berlin-Mitte - Germany
+49-30-814 591 800 - www.quobyte.com
Amtsgericht Berlin-Charlottenburg, HRB 149012B
management board: Dr. Felix Hupfeld, Dr. Björn Kolbeck, Dr. Jan Stender

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Doug Hellmann


On Tue, Jan 27, 2015, at 10:56 AM, Denis Makogon wrote:
> On Thu, Jan 15, 2015 at 8:56 PM, Doug Hellmann 
> wrote:
> 
> >
> > > On Jan 15, 2015, at 1:30 PM, Denis Makogon 
> > wrote:
> > >
> > > Good day to All,
> > >
> > > The question that i’d like to raise here is not simple one, so i’d like
> > to involve as much readers as i can. I’d like to speak about oslo.messaging
> > performance testing. As community we’ve put lots of efforts in making
> > oslo.messaging widely used drivers stable as much as possible. Stability is
> > a good thing, but is it enough for saying “works well”? I’d say that it’s
> > not.
> > > Since oslo.messaging uses driver-based messaging workflow, it makes
> > sense to dig into each driver and collect all required/possible performance
> > metrics.
> > > First of all, it does make sense to figure out how to perform
> > performance testing, first that came into my mind is to simulate high load
> > on one of corresponding drivers. Here comes the question of how it can be
> > accomplished withing available oslo.messaging tools - high load on any
> > driver can perform an application that:
> > >   • can populate multiple emitters(rpc clients) and consumers (rpc
> > servers).
> > >   • can force clients to send messages of pre-defined number of
> > messages of any length.
> >
> > That makes sense.
> >
> > > Another thing is why do we need such thing. Profiling, performance
> > testing can improve the way in which our drivers were implemented. It can
> > show us actual “bottlenecks” in messaging process, in general. In some
> > cases it does make sense to figure out where problem takes its place -
> > whether AMQP causes messaging problems or certain driver that speaks to
> > AMQP fails.
> > > Next thing that i want to discuss the architecture of
> > profiling/performance testing. As i can see it seemed to be a “good” way to
> > add profiling code to each driver. If there’s any objection or better
> > solution, please bring them to the light.
> >
> > What sort of extra profiling code do you anticipate needing?
> >
> >
> As i can foresee (taking into account [1]), a couple of decorators, possibly
> one
> that handles the metering process. The biggest part of the code will be the
> high-load
> tool that'll be a part of messaging. But another open question is adding
> certain dependencies to the project.
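
For illustration, the kind of per-call metering decorator described above can
be quite small. A hypothetical sketch with invented names (this is not
oslo.messaging code):

    import functools
    import time

    def timed(metrics):
        """Record the wall-clock duration of each call into `metrics`."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.time()
                try:
                    return func(*args, **kwargs)
                finally:
                    metrics.append((func.__name__, time.time() - start))
            return wrapper
        return decorator

    call_times = []

    @timed(call_times)
    def send(msg):
        time.sleep(0.01)   # stand-in for a driver's send()
        return msg

    send("hello")
    print(call_times)      # e.g. [('send', 0.010...)]

The open question is then where such hooks would live and how the collected
timings get reported.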
> 
> 
> > > Once we’d have final design for profiling we would need to figure out
> > tools for profiling. After searching over the web, i found pretty
> > interesting topic related to python profiling [1]. After certain
> > investigations it does makes sense discuss next profiling options(apply one
> > or both):
> > >   • Line-by-line timing and execution frequency with a profiler
> > (there are possible Pros and Cons, but i would say the per-line statistics
> > is more than appreciable at initial performance testing steps)
> > >   • Memory/CPU consumption
> > > Metrics. The most useful metric for us is time, any time-based metric,
> > since it is very useful to know at which step or/and by whom delay/timeout
> > caused, for example, so as it said, we would be able to figure out whether
> > AMQP or driver fails to do what it was designed for.
> > > Before proposing spec i’d like to figure out any other requirements, use
> > cases and restrictions for messaging performance testing. Also, if there
> > any stories of success in boosting python performance - feel free to share
> > it.
> >
> > The metrics to measure depend on the goal. Do we think the messaging code
> > is using too much memory? Is it too slow? Or is there something else
> > causing concern?
> >
> > It does make sense to have profiling for cases when trying to upscale a
> cluster, and it'll be a good thing to have an ability to figure out if a
> scaled AMQP service has its best configuration (i guess here would come
> the question about doing performance testing using well-known tools), and
> the most interesting question is about how the messaging driver decreases (or
> leaves untouched) throughput between RPC client and server. These metering
> results can be compared to those from tools that were designed for performance
> testing. And that's why it'll be a good step forward to have
> profiling/performance testing using a high-load technique.

That makes it sound like you want to build performance testing tools for
the infrastructure oslo.messaging is using, and not for oslo.messaging
itself. Is that right?

Doug

> 
> 
> > >
> > >
> > >
> > > [1] http://www.huyng.com/posts/python-performance-analysis/
> > >
> > > Kind regards,
> > > Denis Makogon
> > > IRC: denis_makogon
> > > dmako...@mirantis.com
> > >
> > >
> > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > _

Re: [openstack-dev] [Openstack-operators] [openstack-operators]flush expired tokens and moves deleted instance

2015-01-27 Thread gustavo panizzo (gfa)


On 01/28/2015 01:13 AM, Fischer, Matt wrote:
> Our keystone database is clustered across regions, so we have this job
> running on node1 in each site on alternating hours. I don’t think you’d
> want a bunch of cron jobs firing off all at once to cleanup tokens on
> multiple clustered nodes. That’s one reason I know not to put this in
> the code.

i prefer a cron job to something in the code that i have to test,
configure and possibly troubleshoot

besides, i think it is well documented. i don't see a problem there.


maybe distributions could ship the script into /etc/cron.daily by
default? i would remove it in my case but it is a good default for simple
openstack installs
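
such a cron.daily job could be as small as the sketch below (assuming
keystone-manage is available on the node; this is an illustration, not an
existing packaged script):

    #!/usr/bin/env python
    # Hypothetical /etc/cron.daily job that flushes expired keystone tokens.
    import subprocess
    import sys

    try:
        subprocess.check_call(["keystone-manage", "token_flush"])
    except subprocess.CalledProcessError as exc:
        sys.exit("token_flush failed with exit code %d" % exc.returncode)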

-- 
1AE0 322E B8F7 4717 BDEA BF1D 44BB 1BA7 9F6C 6333


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-27 Thread Clark Boylan


On Tue, Jan 27, 2015, at 09:06 AM, victor stinner wrote:
> Hi,
> 
> What is the Python bug? Do you have a reference to the bug report and the
> patch?
> 
https://bugs.launchpad.net/ubuntu/trusty/+source/python3.4/+bug/1367907
Is the bug I filed with the ubuntu package and it has links back to the
upstream python bug.

https://bugs.launchpad.net/ubuntu/trusty/+source/python3.4/+bug/1382607
is less problematic but also an issue for rootwrap.
> Python 3.4.3 release schedule:
> "3.4.3rc1 will be tagged Saturday February 7 and released Sunday February
> 8.  3.4.3 final will follow two weeks later, tagged Saturday February 21
> and released Sunday February 22."
> https://mail.python.org/pipermail/python-dev/2015-January/137773.html
> 
> Victor
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Denis Makogon
On Tue, Jan 27, 2015 at 7:15 PM, Doug Hellmann 
wrote:

>
>
> On Tue, Jan 27, 2015, at 10:56 AM, Denis Makogon wrote:
> > On Thu, Jan 15, 2015 at 8:56 PM, Doug Hellmann 
> > wrote:
> >
> > >
> > > > On Jan 15, 2015, at 1:30 PM, Denis Makogon 
> > > wrote:
> > > >
> > > > Good day to All,
> > > >
> > > > The question that i’d like to raise here is not simple one, so i’d
> like
> > > to involve as much readers as i can. I’d like to speak about
> oslo.messaging
> > > performance testing. As community we’ve put lots of efforts in making
> > > oslo.messaging widely used drivers stable as much as possible.
> Stability is
> > > a good thing, but is it enough for saying “works well”? I’d say that
> it’s
> > > not.
> > > > Since oslo.messaging uses driver-based messaging workflow, it makes
> > > sense to dig into each driver and collect all required/possible
> performance
> > > metrics.
> > > > First of all, it does make sense to figure out how to perform
> > > performance testing, first that came into my mind is to simulate high
> load
> > > on one of corresponding drivers. Here comes the question of how it can
> be
> > > accomplished withing available oslo.messaging tools - high load on any
> > > driver can perform an application that:
> > > >   • can populate multiple emitters(rpc clients) and consumers
> (rpc
> > > servers).
> > > >   • can force clients to send messages of pre-defined number of
> > > messages of any length.
> > >
> > > That makes sense.
> > >
> > > > Another thing is why do we need such thing. Profiling, performance
> > > testing can improve the way in which our drivers were implemented. It
> can
> > > show us actual “bottlenecks” in messaging process, in general. In some
> > > cases it does make sense to figure out where problem takes its place -
> > > whether AMQP causes messaging problems or certain driver that speaks to
> > > AMQP fails.
> > > > Next thing that i want to discuss the architecture of
> > > profiling/performance testing. As i can see it seemed to be a “good”
> way to
> > > add profiling code to each driver. If there’s any objection or better
> > > solution, please bring them to the light.
> > >
> > > What sort of extra profiling code do you anticipate needing?
> > >
> > >
> > As i can foresee (taking into account [1]) couple decorators, possibly
> > one
> > that handles metering process. The biggest part of code will take
> > highload
> > tool that'll be a part of messaging. But another question adding certain
> > dependecies to the project.
> >
> >
> > > > Once we’d have final design for profiling we would need to figure out
> > > tools for profiling. After searching over the web, i found pretty
> > > interesting topic related to python profiling [1]. After certain
> > > investigations it does makes sense discuss next profiling
> options(apply one
> > > or both):
> > > >   • Line-by-line timing and execution frequency with a profiler
> > > (there are possible Pros and Cons, but i would say the per-line
> statistics
> > > is more than appreciable at initial performance testing steps)
> > > >   • Memory/CPU consumption
> > > > Metrics. The most useful metric for us is time, any time-based
> metric,
> > > since it is very useful to know at which step or/and by whom
> delay/timeout
> > > caused, for example, so as it said, we would be able to figure out
> whether
> > > AMQP or driver fails to do what it was designed for.
> > > > Before proposing spec i’d like to figure out any other requirements,
> use
> > > cases and restrictions for messaging performance testing. Also, if
> there
> > > any stories of success in boosting python performance - feel free to
> share
> > > it.
> > >
> > > The metrics to measure depend on the goal. Do we think the messaging
> code
> > > is using too much memory? Is it too slow? Or is there something else
> > > causing concern?
> > >
> > > It does make sense to have profiling for cases when trying to upscale
> > cluster and it'll be a good thing to have an ability to figure out if
> > scaled AMQP service has it's best configuration (i guess here would come
> > the question about doing performance testing using well-known tools), and
> > the most interesting question is about how messaging driver decreases (or
> > leaves untouched) throughput between RPC client and server. This metering
> > results can be compared to those tools that were designed for performance
> > testing. And that's why it'll be good step forward having
> > profiling/performance testing using high load technic.
>
> That makes it sound like you want to build performance testing tools for
> the infrastructure oslo.messaging is using, and not for oslo.messaging
> itself. Is that right?
>
I'd like to build a tool that would be able to profile messaging over
various deployments. This "tool" would give me an ability to compare the
results of performance testing produced by native tools and by an
oslo.messaging-based tool; eventually it would lead us into digging into
the code and tryin

Re: [openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-27 Thread Clark Boylan


On Tue, Jan 27, 2015, at 09:13 AM, Julien Danjou wrote:
> On Tue, Jan 27 2015, Doug Hellmann wrote:
> 
> > The infra team has been working hard to update our Python 3 testing for all
> > projects to run on 3.4 instead of 3.3. Two of the last projects to be able 
> > to
> > shift are oslo.messaging and oslo.rootwrap. The test suites for both 
> > projects
> > trigger a segfault bug in the 3.4 interpreter as it is shipped on Ubuntu
> > Trusty. The fix for the segfault is already available upstream, and the 
> > team at
> > Canonical is working on packaging a new release, but our schedules are out 
> > of
> > sync. Maintaining a separate image and pool of testing nodes for 3.3 
> > testing of
> > just these two projects is going to be a bit of a burden, and so the infra 
> > team
> > has asked if we’re willing to turn off the 3.3 jobs for the two projects,
> > leaving us without 3.x testing in the gate until the 3.4 interpreter on 
> > Trusty
> > is updated.
> 
> Isn't there any way to have an ugly work-around in those libs that
> wouldn't trigger the Python 3.4 segfault or at least disable the
> responsible tests until Python 3.4 gets fixed?
> 
So the issue is that the garbage collector segfaults on null objects in
the to be garbage collected list. Which means that by the time garbage
collection breaks you don't have the info you need to know what
references lead to the segfault. I spent a bit of time in gdb debugging
this and narrowed it down enough to realize what the bug was and find it
was fixed in later python releases but didn't have the time to sort out
how to figure out specifically which references in oslo.messaging caused
the garbage collector to fall over.

For oslo.rootwrap it affected any tests that ran with logging due to the
way rootwrap does logging iirc.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-27 Thread Jay Pipes

On 01/27/2015 09:13 AM, Silvan Kaiser wrote:

Am 27.01.2015 um 16:51 schrieb Jay Pipes :
b) The Glance API image cache can use xattr if SQLite is not
desired [1], and Glance does *not* list xattr as a dependency in
requirements.txt. Swift also has a dependency on python-xattr [2].
So, this particular Python library is not an unknown by any means.

Do you happen to know how Glance handles this if the dep. is not
handled in requirements.txt?


Yep, it's considered a documentation thing and handled in configuration 
management manifests...
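
In code, the usual soft-dependency pattern looks roughly like the sketch
below (illustrative only, not Glance's actual implementation): the optional
library is imported defensively, and only the driver that needs it complains
when it is missing.

    try:
        import xattr
    except ImportError:
        xattr = None

    class XattrBackedDriver(object):
        def __init__(self):
            if xattr is None:
                raise RuntimeError(
                    "This driver requires the 'xattr' package; install it "
                    "or configure a different driver.")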


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] Requirements.txt and optional requirements

2015-01-27 Thread Sean Dague
On 01/27/2015 08:14 AM, Daniel P. Berrange wrote:
> On Tue, Jan 27, 2015 at 07:12:29AM -0800, Sean Dague wrote:
>> On 01/27/2015 12:18 AM, Silvan Kaiser wrote:
>>> Hello!
>>> Do dependencies required only in some contexts belong into requirements.txt?
>>>
>>> Yesterday we had a short discussion on #openstack-nova regarding how to
>>> handle optional requirements. This was triggered by our quobyte nova
>>> driver (https://review.openstack.org/#/c/110722/18), who requires xattr,
>>> which we therefore added to requirements.txt (as it is provided by the
>>> requirements project).
>>>
>>> Points from the discussion:
>>> - If we add this we will be adding every requirement for every component
>>> ---> this becomes to big.
>>> - Remove this requirement, no optional entries in requirements.txt, a
>>> 'deployer' has to know what dependencies the components he wants to use have
>>> ---> Usually he does not know and installation becomes more issue prone
>>> - Other (in between) ideas???
>>>
>>> Please note that this has some urgency, the change set referenced above
>>> has been in review for months and i'm trying to react asap on comments
>>> but the deadline is approaching (next week) and if i have to do bigger
>>> changes I'd like to know as fast as possible...
>>
>> Typically the answer is no. The libvirt volume driver architecture is
>> kind of weird in the fact that it doesn't actually split its drivers
>> out into separate files, so there can't be a file-level boundary (including
>> loading) for these things.
> 
> Not immediately relevant, but in the L cycle, I'm hoping to finally
> split the volume.py up into a subdirectory of files. The nova.conf
> 'volume_drivers' config will be removed by then, avoiding the back-compat
> class naming problem we'd have if we split it in Kilo.

Woot! awesome.

-Sean

-- 
Sean Dague
http://dague.net



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Requesting FFE for opencontrail-nova-vif-driver-plugin

2015-01-27 Thread John Garbutt
Hi,

Apologies, we can re-approve that because your code was up before the deadline.

I was unable to do that yesterday as the code was not linked in the
blueprint whiteboard at that time, so it looked like there was no code
up for review. Apologies, it's a gap in the tooling. Please try to make
sure the code is linked on the blueprint, and that it's marked as
NeedsCodeReview when all your code is up for review. Hopefully that
should help get your code reviewed quicker.

Thanks,
John


On 27 January 2015 at 00:58, Nati Ueno  wrote:
> Hi nova folks
>
> May I request FFE for vif driver for contrail?
> Spec was already approved, and 1st code review pushed Jan21, and it's
> just 95 lines code.
>
> BP
> https://blueprints.launchpad.net/nova/+spec/opencontrail-nova-vif-driver-plugin
>
> Code
> https://review.openstack.org/#/c/148805/1
>
> Best
> Nachi
>
> --
> Nachi Ueno
> email:nati.u...@gmail.com
> twitter:http://twitter.com/nati

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-27 Thread Julien Danjou
On Tue, Jan 27 2015, Clark Boylan wrote:

> So the issue is that the garbage collector segfaults on null objects in
> the to be garbage collected list. Which means that by the time garbage
> collection breaks you don't have the info you need to know what
> references lead to the segfault. I spent a bit of time in gdb debugging
> this and narrowed it down enough to realize what the bug was and find it
> was fixed in later python releases but didn't have the time to sort out
> how to figure out specifically which references in oslo.messaging caused
> the garbage collector to fall over.

╯‵Д′)╯彡┻━┻

Ok, then let's disable it I guess. If there's a chance to keep something
as even a non-voting job, that'd be cool, but I'm not even sure that's
an option if it just doesn't work and we can't keep py33.

-- 
Julien Danjou
;; Free Software hacker
;; http://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][lbaas] meeting canceled next week (2/3)

2015-01-27 Thread Doug Wiegley
Since most of us will be at the lbaas mid-cycle, next week’s meeting is
canceled.

Thanks,
Doug


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] do we really need project tags in the governance repository?

2015-01-27 Thread Clint Byrum
Excerpts from Thierry Carrez's message of 2015-01-27 02:46:03 -0800:
> Doug Hellmann wrote:
> > On Mon, Jan 26, 2015, at 12:02 PM, Thierry Carrez wrote:
> > [...]
> >> I'm open to alternative suggestions on where the list of tags, their
> >> definition and the list projects they apply to should live. If you don't
> >> like that being in the governance repository, what would have your
> >> preference ?
> > 
> > From the very beginning I have taken the position that tags are by
> > themselves not sufficiently useful for evaluating projects. If someone
> > wants to choose between Ceilometer, Monasca, or StackTach, we're
> > unlikely to come up with tags that will let them do that. They need
> > in-depth discussions of deployment options, performance characteristics,
> > and feature trade-offs.
> 
> They are still useful to give people a chance to discover that those 3
> are competing in the same space, and potentially get an idea of which
> one (if any) is deployed on more than one public cloud, better
> documented, or security-supported. I agree with you that an
> (opinionated) article comparing those 3 solutions would be a nice thing
> to have, but I'm just saying that basic, clearly-defined reference
> project metadata still has a lot of value, especially as we grow the
> number of projects.
> 

I agree with your statement that summary reference metadata is useful. I
agree with Doug that it is inappropriate for the TC to assign it.

> >> That said, I object to only saying "this is all information that can be
> >> found elsewhere or should live elsewhere", because that is just keeping
> >> the current situation -- where that information exists somewhere but
> >> can't be efficiently found by our downstream consumers. We need a
> >> taxonomy and clear definitions for tags, so that our users can easily
> >> find, understand and navigate such project metadata.
> > 
> > As someone new to the project, I would not think to look in the
> > governance documents for "state" information about a project. I would
> > search for things like "install guide openstack" or "component list
> > openstack" and expect to find them in the documentation. So I think
> > putting the information in those (or similar) places will actually make
> > it easier to find for someone that hasn't been involved in the
> > discussion of tags and the governance repository.
> 
> The idea here is to have the reference information in some
> Gerrit-controlled repository (currently openstack/governance, but I'm
> open to moving this elsewhere), and have that reference information
> consumed by the openstack.org website when you navigate to the
> "Software" section, to present a browseable/searchable list of projects
> with project metadata. I don't expect anyone to read the YAML file from
> the governance repository. On the other hand, the software section of
> the openstack.org website is by far the most visited page of all our web
> properties, so I expect most people to see that.
> 

Just like we gather docs and specs into single websites, we could also
gather project metadata. Let the projects set their tags. One thing
that might make sense for the TC to do is to elevate certain tags to
a more important status, ones that they _will_ provide guidance on when
to use. However, the actual project-to-tag mapping would work quite well
as a single file in whatever repository the project team thinks would
be the best starting point for a new user.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Telco][NFV] Meeting facilitator for January 28th

2015-01-27 Thread Steve Gordon
- Original Message -
> From: "Marc Koderer" 
> To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> 
> Hi Steve,
> 
> I can host it.
> 
> Regards
> Marc

Thanks Marc!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-27 Thread Doug Hellmann


On Tue, Jan 27, 2015, at 12:06 PM, victor stinner wrote:
> Hi,
> 
> What is the Python bug? Do you have a reference to the bug report and the
> patch?

http://bugs.python.org/issue21435

> 
> Python 3.4.3 release schedule:
> "3.4.3rc1 will be tagged Saturday February 7 and released Sunday February
> 8.  3.4.3 final will follow two weeks later, tagged Saturday February 21
> and released Sunday February 22."
> https://mail.python.org/pipermail/python-dev/2015-January/137773.html
> 
> Victor
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo] temporarily disabling python 3.x testing for oslo.messaging and oslo.rootwrap

2015-01-27 Thread Doug Hellmann


On Tue, Jan 27, 2015, at 12:44 PM, Julien Danjou wrote:
> On Tue, Jan 27 2015, Clark Boylan wrote:
> 
> > So the issue is that the garbage collector segfaults on null objects in
> > the to be garbage collected list. Which means that by the time garbage
> > collection breaks you don't have the info you need to know what
> > references lead to the segfault. I spent a bit of time in gdb debugging
> > this and narrowed it down enough to realize what the bug was and find it
> > was fixed in later python releases but didn't have the time to sort out
> > how to figure out specifically which references in oslo.messaging caused
> > the garbage collector to fall over.
> 
> ╯‵Д′)╯彡┻━┻
> 
> Ok, then let's disable it I guess. If there's a chance to keep something
> as even a non-voting job, that'd be cool, but I'm not even sure that's
> an option if it just doesn't work and we can't keep py33.

I did think about a non-voting job, but there's not much point. We
expect it to fail with the segfault, so we would just be wasting
resources. :-/

> 
> -- 
> Julien Danjou
> ;; Free Software hacker
> ;; http://julien.danjou.info
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Email had 1 attachment:
> + signature.asc
>   1k (application/pgp-signature)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators] [Keystone] flush expired tokens and moves deleted instance

2015-01-27 Thread Clint Byrum

Excerpts from Tim Bell's message of 2015-01-25 22:10:10 -0800:
> This is often mentioned as one of those items which catches every OpenStack 
> cloud operator at some time. It's not clear to me that there could not be a 
> scheduled job built into the system with a default frequency (configurable, 
> ideally).
> 
> If we are all configuring this as a cron job, is there a reason that it could 
> not be built into the code ?
> 
It has come up before.

The main reason not to build it into the code is that it's even better to
just _never store tokens_:

https://blueprints.launchpad.net/keystone/+spec/non-persistent-tokens
http://git.openstack.org/cgit/openstack/keystone-specs/plain/specs/juno/non-persistent-tokens.rst

or just use certs:

https://blueprints.launchpad.net/keystone/+spec/keystone-tokenless-authz-with-x509-ssl-client-cert

The general thought is that putting lots of things in the database that
don't need to be stored anywhere is a bad idea. The need for the cron
job is just a symptom of that bug.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Doug Hellmann


On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:
> On Tue, Jan 27, 2015 at 7:15 PM, Doug Hellmann 
> wrote:
> 
> >
> >
> > On Tue, Jan 27, 2015, at 10:56 AM, Denis Makogon wrote:
> > > On Thu, Jan 15, 2015 at 8:56 PM, Doug Hellmann 
> > > wrote:
> > >
> > > >
> > > > > On Jan 15, 2015, at 1:30 PM, Denis Makogon 
> > > > wrote:
> > > > >
> > > > > Good day to All,
> > > > >
> > > > > The question that i’d like to raise here is not simple one, so i’d
> > like
> > > > to involve as much readers as i can. I’d like to speak about
> > oslo.messaging
> > > > performance testing. As community we’ve put lots of efforts in making
> > > > oslo.messaging widely used drivers stable as much as possible.
> > Stability is
> > > > a good thing, but is it enough for saying “works well”? I’d say that
> > it’s
> > > > not.
> > > > > Since oslo.messaging uses driver-based messaging workflow, it makes
> > > > sense to dig into each driver and collect all required/possible
> > performance
> > > > metrics.
> > > > > First of all, it does make sense to figure out how to perform
> > > > performance testing, first that came into my mind is to simulate high
> > load
> > > > on one of corresponding drivers. Here comes the question of how it can
> > be
> > > > accomplished withing available oslo.messaging tools - high load on any
> > > > driver can perform an application that:
> > > > >   • can populate multiple emitters(rpc clients) and consumers
> > (rpc
> > > > servers).
> > > > >   • can force clients to send messages of pre-defined number of
> > > > messages of any length.
> > > >
> > > > That makes sense.
> > > >
> > > > > Another thing is why do we need such thing. Profiling, performance
> > > > testing can improve the way in which our drivers were implemented. It
> > can
> > > > show us actual “bottlenecks” in messaging process, in general. In some
> > > > cases it does make sense to figure out where problem takes its place -
> > > > whether AMQP causes messaging problems or certain driver that speaks to
> > > > AMQP fails.
> > > > > Next thing that i want to discuss the architecture of
> > > > profiling/performance testing. As i can see it seemed to be a “good”
> > way to
> > > > add profiling code to each driver. If there’s any objection or better
> > > > solution, please bring them to the light.
> > > >
> > > > What sort of extra profiling code do you anticipate needing?
> > > >
> > > >
> > > As I can foresee (taking into account [1]), a couple of decorators, possibly
> > > one that handles the metering process. The biggest part of the code will be the
> > > high-load tool that'll be a part of messaging. Another open question is adding
> > > certain dependencies to the project.
> > >
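
As a rough illustration of the decorator idea above (purely a sketch; the names
and the in-memory reporting are assumptions, not an existing oslo.messaging API),
something along these lines could wrap driver methods and record wall-clock time
per call:

    import functools
    import time

    # Hypothetical accumulator; a real tool would publish to a log/statsd sink.
    _timings = {}

    def record_timing(func):
        """Record the wall-clock duration of each call to *func*."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                _timings.setdefault(func.__name__, []).append(time.time() - start)
        return wrapper

    # Usage on a hypothetical driver method:
    #
    #     class SomeDriver(object):
    #         @record_timing
    #         def send(self, target, message):
    #             ...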
> > >
> > > > > Once we’d have final design for profiling we would need to figure out
> > > > tools for profiling. After searching over the web, i found pretty
> > > > interesting topic related to python profiling [1]. After certain
> > > > investigations it does makes sense discuss next profiling
> > options(apply one
> > > > or both):
> > > > >   • Line-by-line timing and execution frequency with a profiler
> > > > (there are possible Pros and Cons, but i would say the per-line
> > statistics
> > > > is more than appreciable at initial performance testing steps)
> > > > >   • Memory/CPU consumption
> > > > > Metrics. The most useful metric for us is time, any time-based
> > metric,
> > > > since it is very useful to know at which step or/and by whom
> > delay/timeout
> > > > caused, for example, so as it said, we would be able to figure out
> > whether
> > > > AMQP or driver fails to do what it was designed for.
> > > > > Before proposing spec i’d like to figure out any other requirements,
> > use
> > > > cases and restrictions for messaging performance testing. Also, if
> > there
> > > > any stories of success in boosting python performance - feel free to
> > share
> > > > it.
> > > >
> > > > The metrics to measure depend on the goal. Do we think the messaging
> > code
> > > > is using too much memory? Is it too slow? Or is there something else
> > > > causing concern?
> > > >
> > > It does make sense to have profiling for cases where we are trying to scale up a
> > > cluster, and it would be good to have the ability to figure out whether a scaled
> > > AMQP service has its best configuration (I guess here the question about doing
> > > performance testing using well-known tools would come up), and the most
> > > interesting question is how much the messaging driver decreases (or leaves
> > > untouched) the throughput between RPC client and server. These metering results
> > > can be compared to those from tools that were designed for performance testing.
> > > And that's why it would be a good step forward to have profiling/performance
> > > testing using a high-load technique.
> >
> > That makes it sound like you want to build performance testing tools for
> > the infrastructure oslo.messaging is using, and not for oslo.messaging
> > itself. Is that right?
> >
> 

Re: [openstack-dev] [Openstack-operators] [openstack-operators] [Keystone] flush expired tokens and moves deleted instance

2015-01-27 Thread John Dewey
This is one reason to use the memcached backend. Why replicate these tokens in 
the first place. 


On Tuesday, January 27, 2015 at 10:21 AM, Clint Byrum wrote:

> 
> Excerpts from Tim Bell's message of 2015-01-25 22:10:10 -0800:
> > This is often mentioned as one of those items which catches every OpenStack 
> > cloud operator at some time. It's not clear to me that there could not be a 
> > scheduled job built into the system with a default frequency (configurable, 
> > ideally).
> > 
> > If we are all configuring this as a cron job, is there a reason that it 
> > could not be built into the code ?
> It has come up before.
> 
> The main reason not to build it into the code as it's even better to
> just _never store tokens_:
> 
> https://blueprints.launchpad.net/keystone/+spec/non-persistent-tokens
> http://git.openstack.org/cgit/openstack/keystone-specs/plain/specs/juno/non-persistent-tokens.rst
> 
> or just use certs:
> 
> https://blueprints.launchpad.net/keystone/+spec/keystone-tokenless-authz-with-x509-ssl-client-cert
> 
> The general thought is that putting lots of things in the database that
> don't need to be stored anywhere is a bad idea. The need for the cron
> job is just a symptom of that bug.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Flush expired tokens automatically ?

2015-01-27 Thread Adam Young

Short term answers:

The amount of infrastructure we would have to build to replicate CRON is 
not worth it.


Figuring out a CRON strategy for nontrivial deployment is part of a 
larger data management scheme.



Long term answers:

Tokens should not be persisted.  We have been working toward ephemeral 
tokens for a long time, but the vision of how to get there is not 
uniformly shared among the team.  We spent a lot of time arguing about 
AE tokens, which looked promising, but do not support federation.


Where we are headed is a split of the data in the token into an 
ephemeral portion and a persisted portion.  The persisted portion would 
be reused, and would represent the delegation of authority. The 
ephemeral portion will represent the time aspects of the token: when 
issued, when expired, etc.  The ephemeral portion would refer to the 
persisted portion.


The revocation events code  is necessary for PKI tokens, and might be 
required depending on how we do the ephemeral/persisted split. With AE 
tokens it would have been necessary, but with a unified delegation 
mechanism, it would be less so.


If anyone feels the need for ephemeral tokens strongly enough to 
contribute, please let me know.  We've put a lot of design into where we 
are today, and I would encourage you to learn the issues before jumping 
in to the solutions.  I'm more than willing to guide any new development 
along these lines.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Policy][Group-based-policy] Policy violations investigation

2015-01-27 Thread Sumit Naiksatam
Hi Ariel,

This is indeed one of the use cases that is very relevant to, and can
be supported with, the GBP model. The GBP policy actions provide a way
to “redirect” to a service-instance/chain on matching a traffic
classifier. If you are able to represent the “honeypot” functionality
as a Neutron advanced service, or wrap it in an implemented service,
then you can integrate it with today’s implementation. The GBP team
will be happy to provide you with more information on how you can
propose and implement any changes that you may need to make for this
integration. Also, feel free to catch us in #openstack-gbp and/or
during the GBP weekly IRC meeting [1].

Thanks,
~Sumit.

[1] https://wiki.openstack.org/wiki/Meetings/Neutron_Group_Policy
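
Purely as an illustration of the idea (this is conceptual data, not the actual
GBP client API; the field names and the "honeypot-service-chain" target are
assumptions), the white-list-plus-honeypot combination could be thought of as
two rules:

    # Illustrative only: conceptual GBP-style rules for the honeypot use case.
    allow_web = {
        'classifier': {'protocol': 'tcp', 'port_range': '80', 'direction': 'in'},
        'action': {'type': 'allow'},
    }
    # Traffic outside the white-list (e.g. SSH) is redirected to an investigation
    # service chain instead of being silently dropped.
    redirect_unexpected = {
        'classifier': {'protocol': 'tcp', 'port_range': '22', 'direction': 'in'},
        'action': {'type': 'redirect', 'target': 'honeypot-service-chain'},
    }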

On Tue, Jan 27, 2015 at 8:19 AM, Ariel Zeitlin  wrote:
> Hi,
> I want to propose an idea of investigation of policy violations (for
> white-list policies defined by GBP) by, for instance, redirecting the
> violating sessions to a HoneyPot.
> Meaning, that if the only communication between Group A and Group B is by
> port 80 (as described in the GBP) then an access to port 22 from Group A to
> Group B will be redirected to and answered by a HoneyPot that will
> investigate the real reason for policy violation, or simply log and drop the
> violating connection attempt.
>
> In tightly defined policies world as achieved through GBP an attacker trying
> to propagate inside the network is more likely to hit a wall and then
> actually create a "golden lead" for his detection.
>
> Do you think this concept can/should to be part of GBP and what would be the
> best way to promote it (sorry, I am pretty new to OpenStack and GBP
> specifically).
>
> Thanks,
> Ariel
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] oslo.messaging 1.6.0 released

2015-01-27 Thread Doug Hellmann
The Oslo team is pleased to announce the release of:

oslo.messaging 1.6.0: Oslo Messaging API

The primary reason for this release is to move the code
out of the oslo namespace package as part of
https://blueprints.launchpad.net/oslo-incubator/+spec/drop-namespace-packages

This release also includes requirements updates, and several months worth
of bug fixes.

For more details, please see the git log history below and:

http://launchpad.net/oslo.messaging/+milestone/1.6.0

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging
Changes in /home/dhellmann/repos/openstack/oslo.messaging 1.5.1..1.6.0
--

bfb8c97 Updated from global requirements
eb92511 Expose _impl_test for designate
ee31a84 Update Oslo imports to remove namespace package
563376c Speedup the rabbit tests
f286ef1 Fix functionnal tests
db7371c Fixed docstring for Notifier
386f5da zmq: Refactor test case shared code
7680897 Add more private symbols to the old namespace package
2832051 Updated from global requirements
b888ee3 Fixes test_two_pools_three_listener
0c49f0d Add TimerTestCase missing tests case
be9fca7 fix qpid test issue with eventlet monkey patching
0ca1b1e Make setup.cfg packages include oslo.messaging
408d0da Upgrade to hacking 0.10
a6d068a Add oslo.messaging._drivers.common for heat tests
1fa0e6a Port zmq driver to Python 3
bc8675a fix qpid test issue with eventlet monkey patching
e55a83e Move files out of the namespace package
31a149a Add a info log when a reconnection occurs
44132d4 rabbit: fix timeout timer when duration is None
c18f9f7 Don't log each received messages
3e2d142 Fix some comments in a backporting review session
c40ba04 Enable IPv6-support in libzmq by default
372bc49 Add a thread + futures executor based executor
56a9c55 safe_log Sanitize Passwords in List of Dicts
709c401 Updated from global requirements
98bfdd1 rabbit: add some tests when rpc_backend is set
d3e6ea1 Warns user if thread monkeypatch is not done
cd71c47 Add functional and unit 0mq driver tests
15aa5cb The executor doesn't need to set the timeout
43a9dc1 qpid: honor iterconsume timeout
023b7f4 rabbit: more precise iterconsume timeout
737afde Workflow documentation is now in infra-manual
66db2b3 Touch up grammar in warning messages
4e6dabb Make the RPCVersionCapError message clearer
254405d Doc: 'wait' releases driver connection, not 'stop'
09cd9c0 Don't allow call with fanout target
0844037 Add an optional executor callback to dispatcher
eb21f6b Warn user if needed when the process is forked
7ad0d7e Fix reconnect race condition with RabbitMQ cluster
1624793 Add more TLS protocols to rabbit impl
6987b8a Fix incorrect attribute name in matchmaker_redis

Diffstat (except docs and test files)
-

CONTRIBUTING.rst   |   7 +-
oslo/messaging/__init__.py |  15 +
oslo/messaging/_cmd/__init__.py|   1 -
oslo/messaging/_cmd/zmq_receiver.py|  39 -
oslo/messaging/_drivers/__init__.py|   1 -
oslo/messaging/_drivers/amqp.py| 222 -
oslo/messaging/_drivers/amqpdriver.py  | 472 --
oslo/messaging/_drivers/base.py| 108 ---
oslo/messaging/_drivers/common.py  | 343 +---
oslo/messaging/_drivers/impl_fake.py   | 233 -
oslo/messaging/_drivers/impl_qpid.py   | 731 
oslo/messaging/_drivers/impl_rabbit.py | 783 -
oslo/messaging/_drivers/impl_zmq.py| 941 
oslo/messaging/_drivers/matchmaker.py  | 321 ---
oslo/messaging/_drivers/matchmaker_redis.py| 139 ---
oslo/messaging/_drivers/matchmaker_ring.py | 104 ---
oslo/messaging/_drivers/pool.py|  88 --
oslo/messaging/_drivers/protocols/__init__.py  |   0
oslo/messaging/_drivers/protocols/amqp/__init__.py |   0
.../_drivers/protocols/amqp/controller.py  | 589 -
oslo/messaging/_drivers/protocols/amqp/driver.py   | 295 ---
.../messaging/_drivers/protocols/amqp/eventloop.py | 339 ---
oslo/messaging/_drivers/protocols/amqp/opts.py |  73 --
oslo/messaging/_executors/base.py  |  33 +-
oslo/messaging/_executors/impl_blocking.py |  56 --
oslo/messaging/_executors/impl_eventlet.py | 112 ---
oslo/messaging/_i18n.py|  35 -
oslo/messaging/_utils.py   |  41 -
oslo/messaging/conffixture.py  |  67 +-
oslo/messaging/exceptions.py   |  29 +-
oslo/messaging/localcontext.py |  44 +-
oslo/messaging/notify/__init__.py  |   1 +
oslo/messaging/notify/_impl_log.py |  35 -
oslo/messaging/notify/_impl_messaging.py   |  60 --
oslo/messaging/notify/_impl_noop.py   

Re: [openstack-dev] [keystone] Flush expired tokens automatically ?

2015-01-27 Thread Daniel Comnea
Thanks Adam, Thierry!

Dani

On Tue, Jan 27, 2015 at 1:43 PM, Adam Young  wrote:

> Short term answers:
>
> The amount of infrastructure we would have to build to replicate CRON is
> not worth it.
>
> Figuring out a CRON strategy for nontrivial deployment is part of a larger
> data management scheme.
>
>
> Long term answers:
>
> Tokens should not be persisted.  We have been working toward ephemeral
> tokens for a long time, but the vision of how to get there is not uniformly
> shared among the team.  We spent a lot of time arguing about AE tokens,
> which looked promising, but do not support federation.
>
> Where we are headed is a split of the data in the token into an ephemeral
> portion and a persisted portion.  The persisted portion would be reused,
> and would represent the delegation of authority. The epehmeral portion will
> represent the time aspects of the token: when issued, when expired, etc.
> The ephemeral portion would refer to the persisted portion.
>
> The revocation events code  is necessary for PKI tokens, and might be
> required depending on how we do the ephemeral/persisted split. With AE
> tokens it would have been necessary, but with a unified delegation
> mechanism, it would be less so.
>
> If anyone feels the need for ephemeral tokens strongly enough to
> contribute, please let me know.  We've put a lot of design into where we
> are today, and I would encourage you to learn the issues before jumping in
> to the solutions.  I'm more than willing to guide any new development along
> these lines.
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-01-27 Thread Andrew Woodward
Not to prolong single mode, I'd like to see it die. However, we will
need to be able to add, change, remove, or no-op portions of the task
graph in the future. Many of the plugins that can't currently be built
would rely on being able to sub out parts of the graph. How is that
going to factor into granular deployments?

On Tue, Jan 27, 2015 at 5:05 AM, Aleksandr Didenko
 wrote:
> Hi,
>
> After starting implementing granular deployment we've faced a bunch of
> issues that would make further development of this feature much more
> complicated if we have to support both Simple and HA deployment modes. For
> example: simple mode does not require cluster (corosync, pacemaker, vips,
> etc), so we had to skip this task for Simple mode somehow - we can use
> conditional tasks, or conditional manifests in our tasks, or create separate
> task graphs for different deployment modes, etc - either way it's pretty
> much doubling the amount of work for some parts of Fuel and our development
> cycle.
>
> At the moment, CI blocks us from further development of fuel-library
> modularization BP [2] because we still use Simple mode in CI. So in order to
> proceed with this BP we have two options:
>
> 1) remove Simple mode from CI/QA and thus drop it completely from Fuel
> 2) double our efforts to support both Simple and HA modes in granular
> deployment
>
> We have a BP about single-controller HA [1]. HA with single controller works
> just fine at the moment. So if you want to test Fuel on a minimum set of
> nodes, you can do this on 3 nodes (Fuel master, controller, compute), just
> like with Simple mode before. I suppose, it's time to finally drop support
> for Simple mode in Fuel :)
>
> [1] https://blueprints.launchpad.net/fuel/+spec/single-controller-ha
> [2] https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization
>
> --
> Regards,
> Aleksandr Didenko
>
>
> On Tue, Aug 26, 2014 at 9:25 AM, Mike Scherbakov 
> wrote:
>>
>> Definitely fuel spec is needed :)
>>
>>
>> On Mon, Aug 25, 2014 at 8:45 PM, Evgeniy L  wrote:
>>>
>>> Hi Andrew,
>>>
>>> I have some comments regarding to you action items
>>>
>>> >> 2) Removing simple mode from the ui and tests
>>> >> 3) Removing simple mode support from nailgun (maybe we leave it) and
>>> >> cli
>>>
>>> We shouldn't do it, because nailgun should handle both versions of
>>> cluster.
>>> What we have to do here is to use openstack.yaml to keep all possible
>>> modes.
>>> For the new release there will be only ha; to manage previous releases we have
>>> to create data migrations in nailgun to create the field with modes, i.e.
>>> multinode and ha.
>>>
>>> Also, fixes for the UI are required too; I think it is mostly related to the
>>> wizard's 'mode' tab, where the user can choose an ha or non-ha cluster. In the
>>> case of a new release there should be only ha, and in the case of old releases
>>> there should be ha and multinode.
>>>
>>> Thanks,
>>>
>>>
>>>
>>> On Mon, Aug 25, 2014 at 8:19 PM, Andrew Woodward 
>>> wrote:

 Started a new thread so that we don't hijack the older thread.
  as

>
> Andrew, will you work on it in 6.0? What are remaining items there?
> Also, it might affect our tests - simple mode runs faster so we use it for
> smoke ISO test. Anastasia, please confirm that we can switch smoke to
> one-ha-controller model, or even drop smoke at all and use BVT only 
> (running
> CentOS 3 HA controllers and same with Ubuntu).


 The primary reason that we haven't disabled single yet was due to [0],
 where we were having problems adding additional controllers. With the
 changes to galera and rabbit clustering it appears that we ended up fixing
 it already.

 The remaining issues are:
 1) Ensuring we have good test coverage for the cases we expect to
 support [1]
 2) Removing simple mode from the ui and tests
 3) Removing simple mode support from nailgun (maybe we leave it) and cli
 4) Updating documentation

 [0] https://bugs.launchpad.net/fuel/+bug/1350266
 [1] https://bugs.launchpad.net/fuel/+bug/1350266/comments/7

 --
 Andrew
 Mirantis
 Ceph community

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>>
>>> ___
>>> OpenStack-dev mailing list
>>> OpenStack-dev@lists.openstack.org
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Mike Scherbakov
>> #mihgen
>>
>>
>> ___
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-re

Re: [openstack-dev] [Fuel] Getting rid of kickstart/preseed for all NEW releases

2015-01-27 Thread Andrew Woodward
I don't see this as crazy; it's not a feature of the cloud, it's a
mechanism to get us there. It's not even something that most of the
time anyone sees. Continuing to waste time supporting something we are
ready to replace, and have been testing for a release already, is
crazy. It adds to the technical debt around provisioning that is
broken a lot of the time. We spend around 11% of all fuel-library
commits on cobbler, templates, pmanager, etc.

It's also not removing it; it will continue to be present to support
prior releases, so it's even still available if we can't make IBP work
the way we need to.

On Tue, Jan 27, 2015 at 2:23 AM, Vladimir Kozhukalov
 wrote:
> Guys,
>
> First, we are not talking about deliberately disabling the preseed-based approach
> just because we are crazy. The question is "What is the best way to achieve
> our 6.1 goals?" We definitely need to be able to install two versions of
> Ubuntu 12.04 and 14.04. Those versions have different sets of packages (for
> example ntp related ones) and we install some of those packages during
> provisioning (we point out which packages we need with their versions). To
> make this working with preseed based approach we need either to insert some
> "IF release==6.1" conditional lines into preseed (not very beautiful, isn't
> it?) or to create different Distros and Profiles for different releases.
> Second is not a problem for Cobbler but it is for nailgun/astute because we
> do not deal with that stuff and it looks that we cannot implement this
> easily.
>
> IMO, the only options we have are to insert "IFs" into preseed (I would say
> it is not more reliable than IBP) or to refuse preseed approach for ONLY NEW
> UPCOMING releases. You can call that "crazy", but for me having a set of "IFs"
> together with pmanager.py, which are absolutely difficult to maintain, is
> crazy.
>
>
>
> Vladimir Kozhukalov
>
> On Tue, Jan 27, 2015 at 3:03 AM, Andrew Woodward  wrote:
>>
>> On Mon, Jan 26, 2015 at 10:47 AM, Sergii Golovatiuk
>>  wrote:
>> > Until we are sure IBP solves operation phase where we need to deliver
>> > updated packages so client will be able to provision new machines with
>> > these
>> > fixed packages, I would leave backward compatibility with normal
>> > provision.
>> > ... Just in case.
>>
>> doesn't running 'apt-get upgrade' or 'yum update' after laying out the
>> FS image resolve the gap until we can rebuild the images on the fly?
>> >
>> >
>> >
>> > --
>> > Best regards,
>> > Sergii Golovatiuk,
>> > Skype #golserge
>> > IRC #holser
>> >
>> > On Mon, Jan 26, 2015 at 4:56 PM, Vladimir Kozhukalov
>> >  wrote:
>> >>
>> >> My suggestion is to make IBP the only option available for all upcoming
>> >> OpenStack releases which are defined in openstack.yaml. It will still be
>> >> possible to install the OS using kickstart for all currently available
>> >> OpenStack releases.
>> >>
>> >> Vladimir Kozhukalov
>> >>
>> >> On Mon, Jan 26, 2015 at 6:22 PM, Igor Kalnitsky
>> >> 
>> >> wrote:
>> >>>
>> >>> Just want to be sure I understand you correctly: do you propose to
>> >>> FORBID kickstart/preseed installation way in upcoming release at all?
>> >>>
>> >>> On Mon, Jan 26, 2015 at 3:59 PM, Vladimir Kozhukalov
>> >>>  wrote:
>> >>> > Subject is changed.
>> >>> >
>> >>> > Vladimir Kozhukalov
>> >>> >
>> >>> > On Mon, Jan 26, 2015 at 4:55 PM, Vladimir Kozhukalov
>> >>> >  wrote:
>> >>> >>
>> >>> >> Dear Fuelers,
>> >>> >>
>> >>> >> As you might know we need it to be possible to install several
>> >>> >> versions of
>> >>> >> a particular OS (Ubuntu and Centos) by 6.1. Since having
>> >>> >> different
>> >>> >> OS
>> >>> >> versions also means having different sets of packages and some of
>> >>> >> the
>> >>> >> packages are installed and configured during provisioning stage, we
>> >>> >> need to
>> >>> >> have a kind of kickstart/preseed version mechanism.
>> >>> >>
>> >>> >> Cobbler is exactly such a mechanism. It allows us to have several
>> >>> >> Distros
>> >>> >> (installer images) and profiles (kickstart/preseed files). But
>> >>> >> unfortunately, for some reasons we have not been using those
>> >>> >> Cobbler's
>> >>> >> capabilities since the beginning of Fuel and it doesn't seem to be
>> >>> >> easily
>> >>> >> introduced into Nailgun to deal with the whole Cobbler life cycle.
>> >>> >>
>> >>> >> Anyway, we are moving towards IBP (image based provisioning) and we
>> >>> >> already have different images connected to different OpenStack
>> >>> >> releases
>> >>> >> (openstack.yaml) and everything else which is necessary for initial
>> >>> >> node
>> >>> >> configuration is serialized inside provision data (including
>> >>> >> profile
>> >>> >> name
>> >>> >> like 'ubuntu_1204' or 'ubuntu_1404') and we are able to choose
>> >>> >> cloud-init
>> >>> >> template by this profile name.
>> >>> >>
>> >>> >> And taking into account what it is written above, the suggestion is
>> >>> >> to
>> >>> >> completely avoid using kickstart/preseed based way of OS
>> >>> >> provi

Re: [openstack-dev] [cinder] [nova] [scheduler] Nova node name passed to Cinder

2015-01-27 Thread Vishvananda Ishaya

On Jan 26, 2015, at 10:16 PM, Philipp Marek  wrote:

> Hello Vish,
> 
>> Nova passes ip, iqn, and hostname into initialize_connection. That should 
>> give you the info you need.
> thank you, but that is on the _Nova_ side.
> 
> I need to know that on the Cinder node already:
> 
>>> For that the cinder volume driver needs to know at
> ...
>>> time which Nova host will be used to access the data.
> 
> but it's not passed in there:
> 
>>> The arguments passed to this functions already include an
>>> "attached_host" value, sadly it's currently given as "None"...
> 
> 
> Therefore my question is where/when that value is calculated…

Initialize connection passes that data to cinder in the call. The connector
dictionary in the call should contain the info from nova:

https://github.com/openstack/cinder/blob/master/cinder/volume/driver.py#L1051
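
For illustration, the connector dict a volume driver receives usually carries the
initiator-side details along these lines (keys vary by hypervisor and driver, and
the values here are made up):

    # Rough shape of the connector dictionary (illustrative values only):
    connector = {
        'ip': '10.0.0.5',                                # host that will attach the volume
        'initiator': 'iqn.1994-05.com.example:compute1', # iSCSI initiator name (IQN)
        'host': 'compute-01',                            # Nova host name
    }

    # Sketch of how a driver could use it:
    #
    #     def initialize_connection(self, volume, connector):
    #         attaching_host = connector['host']
    #         ...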


> 
> 
> Regards,
> 
> Phil
> 
> -- 
> : Ing. Philipp Marek
> : LINBIT | Your Way to High Availability
> : DRBD/HA support and consulting http://www.linbit.com :
> 
> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Telco][NFV] Meeting reminder - Wednesday 28th @ 1400 UTC in #openstack-meeting-alt

2015-01-27 Thread Steve Gordon
Hi all,

Just a friendly reminder that this week's OpenStack Telco Working Group meeting 
is tomorrow, Wednesday the 28th, at 1400 UTC in #openstack-meeting-alt. Please 
add any items you wish to discuss to the agenda at:

https://etherpad.openstack.org/p/nfv-meeting-agenda

Marc Koderer has kindly stepped up to run the meeting in my absence.

Thanks,

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.messaging] Performance testing. Initial steps.

2015-01-27 Thread Gordon Sim

On 01/27/2015 06:31 PM, Doug Hellmann wrote:

On Tue, Jan 27, 2015, at 12:28 PM, Denis Makogon wrote:

I'd like to build tool that would be able to profile messaging over
various deployments. This "tool" would give me an ability to compare
results of performance testing produced by native tools and
oslo.messaging-based tool, eventually it would lead us into digging into
code and trying to figure out where "bad things" are happening (that's
the
actual place where we would need to profile messaging code). Correct me
if
i'm wrong.


It would be interesting to have recommendations for deployment of rabbit
or qpid based on performance testing with oslo.messaging. It would also
be interesting to have recommendations for changes to the implementation
of oslo.messaging based on performance testing. I'm not sure you want to
do full-stack testing for the latter, though.

Either way, I think you would be able to start the testing without any
changes in oslo.messaging.


I agree. I think the first step is to define what to measure and then 
construct an application using oslo.messaging that allows the data of 
interest to be captured using different drivers and indeed different 
configurations of a given driver.


I wrote a very simple test application to test one aspect that I felt 
was important, namely the scalability of the RPC mechanism as you 
increase the number of clients and servers involved. The code I used is 
https://github.com/grs/ombt; it's probably stale at the moment, I only 
link to it as an example of the approach.


Using that test code I was then able to compare performance in this one 
aspect across drivers (the 'rabbit', 'qpid' and new amqp 1.0 based 
drivers - I wanted to try zmq, but couldn't figure out how to get it 
working at the time), and for different deployment options using a given 
driver (amqp 1.0 using qpidd or qpid dispatch router in either 
standalone or with multiple connected routers).


There are of course several other aspects that I think would be 
important to explore: notifications, more specific variations in the RPC 
'topology' (i.e. number of clients on a given server, number of servers in 
a single group, etc.), and a better tool (or set of tools) would allow all of 
these to be explored.


From my experimentation, I believe the biggest differences in 
scalability are going to come not from optimising the code in 
oslo.messaging so much as choosing different patterns for communication. 
Those choices may be constrained by other aspects as well of course, 
notably approach to reliability.
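
For anyone wanting to reproduce that kind of measurement, a very small sketch of an
oslo.messaging RPC echo server plus a timed client is shown below (the transport URL
is an assumption to point at the broker under test; real load testing would need many
concurrent clients/servers and proper error handling):

    import time

    from oslo_config import cfg
    import oslo_messaging

    TRANSPORT_URL = 'rabbit://guest:guest@localhost:5672/'  # assumed broker URL

    class TestEndpoint(object):
        def echo(self, ctxt, payload):
            return payload

    def run_server():
        transport = oslo_messaging.get_transport(cfg.CONF, TRANSPORT_URL)
        target = oslo_messaging.Target(topic='perf_test', server='server-1')
        server = oslo_messaging.get_rpc_server(transport, target, [TestEndpoint()],
                                               executor='blocking')
        server.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            server.stop()
            server.wait()

    def time_calls(n=1000):
        transport = oslo_messaging.get_transport(cfg.CONF, TRANSPORT_URL)
        client = oslo_messaging.RPCClient(transport,
                                          oslo_messaging.Target(topic='perf_test'))
        start = time.time()
        for _ in range(n):
            client.call({}, 'echo', payload='x' * 1024)
        return n / (time.time() - start)  # synchronous calls per second

Run run_server() in one process and time_calls() in another; varying the number of
client and server processes is what exposes the scalability differences discussed
above.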


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] required libvirtd/qemu versions for numa support?

2015-01-27 Thread Chris Friesen

On 01/26/2015 05:37 PM, Jay Pipes wrote:

On 01/26/2015 07:33 AM, Chris Friesen wrote:

Hi,

I'm interested in the recent work around NUMA support for guest
instances
(https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement), but
I'm having some difficulty figuring out what versions of libvirt and
qemu are required.

 From the research that I've done it seems like qemu 2.1 might be
required, but I've been unable to find a specific version listed in the
nova requirements or in the openstack global requirements.  Is it there
and I just can't find it?

If it's not specified, and yet openstack relies on it, perhaps it should
be added.  (Or at least documented somewhere.)


Hi Chris,

The constants starting here:

http://git.openstack.org/cgit/openstack/nova/tree/nova/virt/libvirt/driver.py#n340

should answer your questions.


Thanks, that's useful.

Looking at the history of that file, I see git commit d927635 adding in support 
for NUMA memory allocation policy using the libvirt numatune option, but it 
doesn't modify the libvirt or qemu version requirements.


When I asked on the libvirt list they said that qemu 2.1 was needed to support 
pinning memory on host NUMA nodes.  Do we get around that somehow?


Also, the _get_host_numa_topology() code uses pages.size and pages.total which 
would seem to depend on hugepages support in libvirt, but that was only added in 
1.2.6.  I don't see any hugepage-related versions listed here for libvirt.  (I 
actually ran into a problem here before upgrading libvirt, it threw an exception 
in _get_host_numa_topology().  If I recall it was because cell.mempages was 
empty since libvirt was too old.)
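
As a quick sanity check of what a given host actually provides before relying on 
those code paths, something like the following can be used (libvirt Python bindings; 
the minimum versions are simply the ones mentioned in this thread, so treat them as 
assumptions rather than authoritative requirements):

    import libvirt

    # libvirt encodes versions as major * 1000000 + minor * 1000 + release.
    MIN_LIBVIRT_HUGEPAGE_INFO = 1002006  # 1.2.6, per the discussion above
    MIN_QEMU_NUMA_MEM_PIN = 2001000      # 2.1.0, per the discussion above

    conn = libvirt.openReadOnly('qemu:///system')
    libvirt_version = conn.getLibVersion()  # e.g. 1002006 for libvirt 1.2.6
    qemu_version = conn.getVersion()        # hypervisor (QEMU) version, same encoding

    print('libvirt >= 1.2.6 (per-cell hugepage info):',
          libvirt_version >= MIN_LIBVIRT_HUGEPAGE_INFO)
    print('qemu >= 2.1.0 (NUMA memory pinning):',
          qemu_version >= MIN_QEMU_NUMA_MEM_PIN)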


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] Swift GUI (free or open source)?

2015-01-27 Thread Clay Gerrard
https://github.com/cschwede/django-swiftbrowser is done by a swift core dev

You should browse:

http://docs.openstack.org/developer/swift/associated_projects.html#associated-projects

On Mon, Jan 26, 2015 at 11:50 AM, Adam Lawson  wrote:

> I'm looking for a web-based visualization that simply displays
> OpenStack Swift and/or node status, cluster health, etc. in some manner.
> Being able to run a command would be cool but is a little more than I need.
> Does such a thing currently exist? I know about SwiftStack but I'm
> wondering if there are other efforts that have produced a way to visualize
> Swift telemetry.
>
> Has anyone run across such a thing?
>
>
> *Adam Lawson*
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [openstack-operators] [Keystone] flush expired tokens and moves deleted instance

2015-01-27 Thread Clint Byrum
The problem with running in memcached is that you now have to keep _EVERY_
token in RAM. This is not any cheaper than cleaning out a giant on-disk
table.

Also worth noting is that memcached can produce frustrating results unless
you run it with -M. That is because without -M, your tokens may be removed
well before their expiration and well before memcached fills up if the
slabs that are allocated in the early days of running are filled up.
Also single users that have many tokens will overrun the per-item limit
in memcached with the size of the token ID list.

There's no magic bullet.. just trade-offs that may or may not work well
for your site.

Excerpts from John Dewey's message of 2015-01-27 10:41:33 -0800:
> This is one reason to use the memcached backend. Why replicate these tokens 
> in the first place. 
> 
> On Tuesday, January 27, 2015 at 10:21 AM, Clint Byrum wrote:
> 
> > 
> > Excerpts from Tim Bell's message of 2015-01-25 22:10:10 -0800:
> > > This is often mentioned as one of those items which catches every 
> > > OpenStack cloud operator at some time. It's not clear to me that there 
> > > could not be a scheduled job built into the system with a default 
> > > frequency (configurable, ideally).
> > > 
> > > If we are all configuring this as a cron job, is there a reason that it 
> > > could not be built into the code ?
> > It has come up before.
> > 
> > The main reason not to build it into the code as it's even better to
> > just _never store tokens_:
> > 
> > https://blueprints.launchpad.net/keystone/+spec/non-persistent-tokens
> > http://git.openstack.org/cgit/openstack/keystone-specs/plain/specs/juno/non-persistent-tokens.rst
> > 
> > or just use certs:
> > 
> > https://blueprints.launchpad.net/keystone/+spec/keystone-tokenless-authz-with-x509-ssl-client-cert
> > 
> > The general thought is that putting lots of things in the database that
> > don't need to be stored anywhere is a bad idea. The need for the cron
> > job is just a symptom of that bug.
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> > (mailto:openstack-dev-requ...@lists.openstack.org?subject:unsubscribe)
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> > 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Take back the naming process

2015-01-27 Thread Monty Taylor
I do not like how we are selecting names for our releases right now.
The current process is autocratic and opaque and not fun - which is the
exact opposite of what a community selected name should be.

I propose:

* As soon as development starts on release X, we open the voting for the
name of release X+1 (we're working on Kilo now, we should have known the
name of L at the K summit)

* Anyone can nominate a name - although we do suggest that something at
least related to the location of the associated summit would be nice

* We condorcet vote on the entire list of nominated names

* After we have the winning list, the foundation trademark checks the name

* If there is a trademark issue (and only a trademark issue - not a
"marketing doesn't like the name" issue) we'll move down to the next
name on the list

If we cannot have this process be completely open and democratic, then
what the heck is the point of having our massive meritocracy in the
first place? There's a lot of overhead we deal with by being a
leaderless collective you know - we should occasionally get to have fun
with it.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [api] Get servers with limit and IP address filter

2015-01-27 Thread Steven Kaufer


Hello,

When applying an IP address filter to a paginated servers query (eg,
supplying servers/detail?ip=192.168&limit=100), the IP address filtering is
only being applied against the non-filtered page of servers that were
retrieved from the DB; see [1].

I believe that the IP address filtering should be done before the limit is
applied, returning up to 'limit' servers that match the IP address filter.
Currently, if the servers in the page of data returned from the DB do not
happen to match the IP address filter (applied in the compute API), then no
servers will be returned by the REST API (even if there are servers that
match the IP address filter).

This seems like a bug to me, shouldn't all filtering be done at the DB
layer?

[1]:
https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2037-L2042
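
To make the ordering problem concrete, here is a simplified sketch of today's
behaviour versus the suggested one (the function and key names are illustrative,
not the actual nova code):

    import re

    def current_behaviour(db_fetch_page, ip_filter, limit):
        # Today: the DB applies the limit first, then the compute API filters
        # that single page by IP, so matches outside the page are silently lost.
        page = db_fetch_page(limit=limit)   # at most `limit` rows, not IP-filtered
        pattern = re.compile(ip_filter)
        return [s for s in page
                if any(pattern.match(addr) for addr in s['ips'])]

    def suggested_behaviour(db_fetch_matching, ip_filter, limit):
        # Suggested: filter (ideally in the DB layer) before applying the limit,
        # so up to `limit` servers matching the IP filter are returned.
        return db_fetch_matching(ip_filter)[:limit]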

Thanks,
Steven Kaufer
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-27 Thread Kyle Mestery
On Tue, Jan 27, 2015 at 1:50 PM, Monty Taylor  wrote:

> I do not like how we are selecting names for our releases right now.
> The current process is autocratic and opaque and not fun - which is the
> exact opposite of what a community selected name should be.
>
> ++


> I propose:
>
> * As soon as development starts on release X, we open the voting for the
> name of release X+1 (we're working on Kilo now, we should have known the
> name of L at the K summit)
>
> * Anyone can nominate a name - although we do suggest that something at
> least related to the location of the associated summit would be nice
>
> * We condorcet vote on the entire list of nominated names
>
> * After we have the winning list, the foundation trademark checks the name
>
> * If there is a trademark issue (and only a trademark issue - not a
> "marketing doesn't like the name" issue) we'll move down to the next
> name on the list
>
> Huge +1 here.


> If we cannot have this process be completely open and democratic, then
> what the heck is the point of having our massive meritocracy in the
> first place? There's a lot of overhead we deal with by being a
> leaderless collective you know - we should occasionally get to have fun
> with it.
>
> Agree with all your points Monty. This puts naming into the hands of the
individual foundation members. Seems like it should be there.


> Monty
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] removing single mode

2015-01-27 Thread Dmitriy Shulyak
> not to prolong single mode, I'd like to see it die. However we will
> need to be able to add, change, remove, or noop portions of the tasks
> graph in the future. Many of the plugins that cant currently be built
> would rely on being able to sub out parts of the graph. How is that
> going to factor into granular deployments?
>

There are several ways to achieve a "noop" task:

1. By a condition on the task itself (the same expression parser that is used for
UI validation); a rough sketch of what that could look like follows below.
Right now we are able to add a condition like cluster:mode != multinode,
but the problem is the additional complexity to support different chains of
tasks, and additional refactoring in the library.
2. Skip a particular task in the deployment API call
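
A rough illustration of option 1, shown as plain data rather than the exact Fuel
task schema (the field names and condition syntax are assumptions):

    # Illustrative only: a task that becomes a noop unless the condition holds.
    task = {
        'id': 'cluster-haproxy',
        'type': 'puppet',
        'role': ['primary-controller', 'controller'],
        'condition': "cluster:mode != 'multinode'",
    }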

As for plugins and add/stub-out/change - all of this is possible. There is
no plugin API for that stuff yet, and we will need to think about what exactly
we want to expose, but from the granular deployment perspective it is just a
matter of changing the data for a particular task in the graph.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-27 Thread Jim Meyer
+1 all the way down.

More fun double-plus-good.

—j

> On Jan 27, 2015, at 1:50 PM, Monty Taylor  wrote:
> 
> I do not like how we are selecting names for our releases right now.
> The current process is autocratic and opaque and not fun - which is the
> exact opposite of what a community selected name should be.
> 
> I propose:
> 
> * As soon as development starts on release X, we open the voting for the
> name of release X+1 (we're working on Kilo now, we should have known the
> name of L at the K summit)
> 
> * Anyone can nominate a name - although we do suggest that something at
> least related to the location of the associated summit would be nice
> 
> * We condorcet vote on the entire list of nominated names
> 
> * After we have the winning list, the foundation trademark checks the name
> 
> * If there is a trademark issue (and only a trademark issue - not a
> "marketing doesn't like the name" issue) we'll move down to the next
> name on the list
> 
> If we cannot have this process be completely open and democratic, then
> what the heck is the point of having our massive meritocracy in the
> first place? There's a lot of overhead we deal with by being a
> leaderless collective you know - we should occasionally get to have fun
> with it.
> 
> Monty
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] oslo.messaging 1.6.0 released

2015-01-27 Thread Doug Hellmann
There were some issues with the build job, so this release has just gone
live. I apologize for the delay.

Doug

On Tue, Jan 27, 2015, at 02:05 PM, Doug Hellmann wrote:
> The Oslo team is pleased to announce the release of:
> 
> oslo.messaging 1.6.0: Oslo Messaging API
> 
> The primary reason for this release is to move the code
> out of the oslo namespace package as part of
> https://blueprints.launchpad.net/oslo-incubator/+spec/drop-namespace-packages
> 
> This release also includes requirements updates, and several months worth
> of bug fixes.
> 
> For more details, please see the git log history below and:
> 
> http://launchpad.net/oslo.messaging/+milestone/1.6.0
> 
> Please report issues through launchpad:
> 
> http://bugs.launchpad.net/oslo.messaging
> Changes in /home/dhellmann/repos/openstack/oslo.messaging 1.5.1..1.6.0
> --
> 
> bfb8c97 Updated from global requirements
> eb92511 Expose _impl_test for designate
> ee31a84 Update Oslo imports to remove namespace package
> 563376c Speedup the rabbit tests
> f286ef1 Fix functionnal tests
> db7371c Fixed docstring for Notifier
> 386f5da zmq: Refactor test case shared code
> 7680897 Add more private symbols to the old namespace package
> 2832051 Updated from global requirements
> b888ee3 Fixes test_two_pools_three_listener
> 0c49f0d Add TimerTestCase missing tests case
> be9fca7 fix qpid test issue with eventlet monkey patching
> 0ca1b1e Make setup.cfg packages include oslo.messaging
> 408d0da Upgrade to hacking 0.10
> a6d068a Add oslo.messaging._drivers.common for heat tests
> 1fa0e6a Port zmq driver to Python 3
> bc8675a fix qpid test issue with eventlet monkey patching
> e55a83e Move files out of the namespace package
> 31a149a Add a info log when a reconnection occurs
> 44132d4 rabbit: fix timeout timer when duration is None
> c18f9f7 Don't log each received messages
> 3e2d142 Fix some comments in a backporting review session
> c40ba04 Enable IPv6-support in libzmq by default
> 372bc49 Add a thread + futures executor based executor
> 56a9c55 safe_log Sanitize Passwords in List of Dicts
> 709c401 Updated from global requirements
> 98bfdd1 rabbit: add some tests when rpc_backend is set
> d3e6ea1 Warns user if thread monkeypatch is not done
> cd71c47 Add functional and unit 0mq driver tests
> 15aa5cb The executor doesn't need to set the timeout
> 43a9dc1 qpid: honor iterconsume timeout
> 023b7f4 rabbit: more precise iterconsume timeout
> 737afde Workflow documentation is now in infra-manual
> 66db2b3 Touch up grammar in warning messages
> 4e6dabb Make the RPCVersionCapError message clearer
> 254405d Doc: 'wait' releases driver connection, not 'stop'
> 09cd9c0 Don't allow call with fanout target
> 0844037 Add an optional executor callback to dispatcher
> eb21f6b Warn user if needed when the process is forked
> 7ad0d7e Fix reconnect race condition with RabbitMQ cluster
> 1624793 Add more TLS protocols to rabbit impl
> 6987b8a Fix incorrect attribute name in matchmaker_redis
> 
> Diffstat (except docs and test files)
> -
> 
> CONTRIBUTING.rst   |   7 +-
> oslo/messaging/__init__.py |  15 +
> oslo/messaging/_cmd/__init__.py|   1 -
> oslo/messaging/_cmd/zmq_receiver.py|  39 -
> oslo/messaging/_drivers/__init__.py|   1 -
> oslo/messaging/_drivers/amqp.py| 222 -
> oslo/messaging/_drivers/amqpdriver.py  | 472 --
> oslo/messaging/_drivers/base.py| 108 ---
> oslo/messaging/_drivers/common.py  | 343 +---
> oslo/messaging/_drivers/impl_fake.py   | 233 -
> oslo/messaging/_drivers/impl_qpid.py   | 731 
> oslo/messaging/_drivers/impl_rabbit.py | 783
> -
> oslo/messaging/_drivers/impl_zmq.py| 941
> 
> oslo/messaging/_drivers/matchmaker.py  | 321 ---
> oslo/messaging/_drivers/matchmaker_redis.py| 139 ---
> oslo/messaging/_drivers/matchmaker_ring.py | 104 ---
> oslo/messaging/_drivers/pool.py|  88 --
> oslo/messaging/_drivers/protocols/__init__.py  |   0
> oslo/messaging/_drivers/protocols/amqp/__init__.py |   0
> .../_drivers/protocols/amqp/controller.py  | 589 -
> oslo/messaging/_drivers/protocols/amqp/driver.py   | 295 ---
> .../messaging/_drivers/protocols/amqp/eventloop.py | 339 ---
> oslo/messaging/_drivers/protocols/amqp/opts.py |  73 --
> oslo/messaging/_executors/base.py  |  33 +-
> oslo/messaging/_executors/impl_blocking.py |  56 --
> oslo/messaging/_executors/impl_eventlet.py | 112 ---
> oslo/messaging/_i18n.py|  35 -
> oslo/messaging/_utils.py   |  41 -
> oslo/messaging/conffixture.py  

Re: [openstack-dev] [tc] Take back the naming process

2015-01-27 Thread Morgan Fainberg
++ absolutely! 

Sent via mobile

> On Jan 27, 2015, at 14:19, Jim Meyer  wrote:
> 
> +1 all the way down.
> 
> More fun double-plus-good.
> 
> —j
> 
>> On Jan 27, 2015, at 1:50 PM, Monty Taylor  wrote:
>> 
>> I do not like how we are selecting names for our releases right now.
>> The current process is autocratic and opaque and not fun - which is the
>> exact opposite of what a community selected name should be.
>> 
>> I propose:
>> 
>> * As soon as development starts on release X, we open the voting for the
>> name of release X+1 (we're working on Kilo now, we should have known the
>> name of L at the K summit)
>> 
>> * Anyone can nominate a name - although we do suggest that something at
>> least related to the location of the associated summit would be nice
>> 
>> * We condorcet vote on the entire list of nominated names
>> 
>> * After we have the winning list, the foundation trademark checks the name
>> 
>> * If there is a trademark issue (and only a trademark issue - not a
>> "marketing doesn't like the name" issue) we'll move down to the next
>> name on the list
>> 
>> If we cannot have this process be completely open and democratic, then
>> what the heck is the point of having our massive meritocracy in the
>> first place? There's a lot of overhead we deal with by being a
>> leaderless collective you know - we should occasionally get to have fun
>> with it.
>> 
>> Monty
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-27 Thread Adam Young

On 01/27/2015 05:19 PM, Jim Meyer wrote:

+1 all the way down.

More fun double-plus-good.

—j


On Jan 27, 2015, at 1:50 PM, Monty Taylor  wrote:

I do not like how we are selecting names for our releases right now.
The current process is autocratic and opaque and not fun - which is the
exact opposite of what a community selected name should be.

I propose:

* As soon as development starts on release X, we open the voting for the
name of release X+1 (we're working on Kilo now, we should have known the
name of L at the K summit)

* Anyone can nominate a name - although we do suggest that something at
least related to the location of the associated summit would be nice

* We condorcet vote on the entire list of nominated names

* After we have the winning list, the foundation trademark checks the name

* If there is a trademark issue (and only a trademark issue - not a
"marketing doesn't like the name" issue) we'll move down to the next
name on the list

If we cannot have this process be completely open and democratic, then
what the heck is the point of having our massive meritocracy in the
first place? There's a lot of overhead we deal with by being a
leaderless collective you know - we should occasionally get to have fun
with it.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I nominate Ladysmith and Langley as the two obvious L-named locations 
closest to Vancouver.


Oh, and I think Monty is spot on




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [api] Get servers with limit and IP address filter

2015-01-27 Thread Vishvananda Ishaya
The network info for an instance is cached as a blob of data (neutron has the 
canonical version in most installs), so it isn’t particularly easy to do at the 
database layer. You would likely need a pretty complex stored procedure to do 
it accurately.

Vish

On Jan 27, 2015, at 2:00 PM, Steven Kaufer  wrote:

> Hello,
> 
> When applying an IP address filter to a paginated servers query (eg, 
> supplying servers/detail?ip=192.168&limit=100), the IP address filtering is 
> only being applied against the non-filtered page of servers that were 
> retrieved from the DB; see [1].
> 
> I believe that the IP address filtering should be done before the limit is 
> applied, returning up to  servers that match the IP address filter.  
> Currently, if the servers in the page of data returned from the DB do not 
> happen to match the IP address filter (applied in the compute API), then no 
> servers will be returned by the REST API (even if there are servers that 
> match the IP address filter).
> 
> This seems like a bug to me, shouldn't all filtering be done at the DB layer?
> 
> [1]: 
> https://github.com/openstack/nova/blob/master/nova/compute/api.py#L2037-L2042
> 
> Thanks,
> Steven Kaufer
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Plugins][Orchestration] Unclear handling of primary-controler and controller roles

2015-01-27 Thread Dmitriy Shulyak
Hello all,

You may know that for deployment configuration we are serializing an
additional prefix for the controller role (primary), with the goal of
deployment order control (the primary-controller always should be deployed
before secondaries) and some conditions in fuel-library code.

However, we cannot guarantee that the primary controller will always be the
same node, because it is not nailgun's business to control the election of the
primary. Essentially the user should not rely on nailgun information to find
the primary, but we need to persist the node elected as primary in the first
deployment to resolve orchestration issues (when a new node is added to the
cluster we should not mark it as primary).

So we called primary-controller an "internal" role, which means that it is
not exposed to users (or external developers).
But with the introduction of plugins and granular deployment, in my opinion, we
need to be able to specify that a task should run specifically on the primary,
or on secondaries. The alternative to this approach would be to always run the
task on all controllers, and let the task itself verify whether it is executed
on the primary or not.

Is it possible to have significantly different sets of tasks for controller
and primary-controller?
The same goes for mongo, and I think we had a primary for swift also.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nominating Melanie Witt for python-novaclient-core

2015-01-27 Thread Michael Still
Greetings,

I would like to nominate Melanie Witt for the python-novaclient-core team.

(What is python-novaclient-core? Its a new group which will contain
all of nova-core as well as anyone else we think should have core
reviewer powers on just the python-novaclient code).

Melanie has been involved with nova for a long time now. She does
solid reviews in python-novaclient, and at least two current
nova-cores have suggested her as ready for core review powers on that
repository.

Please respond with +1s or any concerns.

References:


https://review.openstack.org/#/q/project:openstack/python-novaclient+reviewer:%22melanie+witt+%253Cmelwitt%2540yahoo-inc.com%253E%22,n,z

As a reminder, we use the voting process outlined at
https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
core team.

Thanks,
Michael

-- 
Rackspace Australia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-27 Thread James E. Blair
Monty Taylor  writes:

> I do not like how we are selecting names for our releases right now.
> The current process is autocratic and opaque and not fun - which is the
> exact opposite of what a community selected name should be.
>
> I propose:
>
> * As soon as development starts on release X, we open the voting for the
> name of release X+1 (we're working on Kilo now, we should have known the
> name of L at the K summit)
>
> * Anyone can nominate a name - although we do suggest that something at
> least related to the location of the associated summit would be nice
>
> * We condorcet vote on the entire list of nominated names
>
> * After we have the winning list, the foundation trademark checks the name
>
> * If there is a trademark issue (and only a trademark issue - not a
> "marketing doesn't like the name" issue) we'll move down to the next
> name on the list

Thank you, I agree!  I have proposed a change[1] to the governance repo
that I believe implements the suggested process.  Note that I kept the
existing rules about the locality of the name, since I think that's a
good part of the fun (anyone can come up with a cool "L" word, but
finding one near the summit is a challenge).  Of course it is easy to
modify the naming rules without changing the process if we desire, and I
made explicit the process for overriding the rules for names that sound
really cool.

Note that if this is approved, I would expect it to be used for the
Miyazaki release, but not before.

[1] https://review.openstack.org/150604

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-27 Thread Dmitriy Shulyak
On Thu, Jan 22, 2015 at 7:59 PM, Evgeniy L  wrote:

> The problem with merging is usually it's not clear how system performs
> merging.
> For example you have the next hash {'list': [{'k': 1}, {'k': 2}, {'k':
> 3}]}, and I want
> {'list': [{'k': 4}]} to be merged, what system should do? Replace the list
> or add {'k': 4}?
> Both cases should be covered.
>
What if we replace based on the root level? It feels like enough to me.
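
To make the difference concrete, a small sketch (plain Python, reusing the
hash from your example) of root-level replacement versus appending into
the nested list:

    # Defaults generated by the system and a user-supplied override.
    defaults = {'list': [{'k': 1}, {'k': 2}, {'k': 3}]}
    override = {'list': [{'k': 4}]}

    # Root-level replacement: any top-level key the user provides wins whole.
    replaced = dict(defaults)
    replaced.update(override)
    print(replaced)   # {'list': [{'k': 4}]}

    # A deep merge that appends into lists - an equally plausible reading.
    appended = {'list': defaults['list'] + override['list']}
    print(appended)   # {'list': [{'k': 1}, {'k': 2}, {'k': 3}, {'k': 4}]}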

> Most of the users don't remember all of the keys, usually user gets the
> defaults, and
> changes some values in place, in this case we should ask user to remove
> the rest
> of the fields.
>
And we are not going to force them to delete anything - if all the
information is present then it is what the user actually wants.

> The only solution which I see is to separate the data from the graph, not
> to send
> this information to user.
>
Probably I will follow the same approach that is used for repo generation,
mainly because it is quite useful for debugging - to see how tasks are
generated - but it doesn't solve two additional points:
1. Some data in Nailgun constantly becomes invalid just because we are
asking the user to overwrite everything (the most common case is allocated
IP addresses).
2. What if you only need to add some data, as in the fencing plugin? That
would mean such a cluster is not going to be supportable - what if we want
to upgrade that cluster and a new serializer should be used? I think there
is even a warning about this on the UI.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Fuel] Cluster replaced deployment of provisioning information

2015-01-27 Thread Dmitriy Shulyak
On Tue, Jan 27, 2015 at 10:47 AM, Vladimir Kuklin 
wrote:

> This is an interesting topic. As per our discussions earlier, I suggest
> that in the future we move to different serializers for each granule of our
> deployment, so that we do not need to drag a lot of senseless data into
> particular task being executed. Say, we have a fencing task, which has a
> serializer module written in python. This module is imported by Nailgun and
> what it actually does, it executes specific Nailgun core methods that
> access database or other sources of information and retrieve data in the
> way this task wants it instead of adjusting the task to the only
> 'astute.yaml'.


I like this idea, and to make things easier we could provide read-only
access for plugins, but I am not sure that everyone will agree to expose
the database to distributed task serializers. It could be quite fragile,
and we won't be able to change anything internally - consider a
refactoring of volumes or networks.

On the other hand, we could build a single public interface for the
inventory (this is how I am calling the part of Nailgun that is
responsible for cluster information storage) and use that interface
(through the REST API?) in the component that will be responsible for
deployment serialization and execution.

Basically, what I am saying is that we need to split Nailgun into
microservices, and then reuse that API in plugins or in config generators
right in the library.
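
To illustrate what such a per-task serializer could look like on top of a
read-only inventory API, here is a rough sketch - the endpoint path, the
node keys and the FencingSerializer interface are purely hypothetical, not
an existing Nailgun interface:

    import requests

    INVENTORY_URL = 'http://nailgun:8000/api/v1'  # assumption, for illustration


    class FencingSerializer(object):
        """Builds only the data the fencing task needs, instead of
        dragging along the whole astute.yaml blob."""

        def __init__(self, cluster_id):
            self.cluster_id = cluster_id

        def serialize(self):
            nodes = requests.get('%s/clusters/%s/nodes'
                                 % (INVENTORY_URL, self.cluster_id)).json()
            return {'fence_agents': [{'node': n['fqdn'],
                                      'ipmi_address': n.get('ipmi_address')}
                                     for n in nodes]}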
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Take back the naming process

2015-01-27 Thread Jonathan Bryce

> On Jan 27, 2015, at 3:50 PM, Monty Taylor  wrote:
> 
> I do not like how we are selecting names for our releases right now.
> The current process is autocratic and opaque and not fun - which is the
> exact opposite of what a community selected name should be.

Autocratic? Could you elaborate?


> I propose:
> 
> * As soon as development starts on release X, we open the voting for the
> name of release X+1 (we're working on Kilo now, we should have known the
> name of L at the K summit)
> 
> * Anyone can nominate a name - although we do suggest that something at
> least related to the location of the associated summit would be nice
> 
> * We condorcet vote on the entire list of nominated names
> 
> * After we have the winning list, the foundation trademark checks the name
> 
> * If there is a trademark issue (and only a trademark issue - not a
> "marketing doesn't like the name" issue) we'll move down to the next
> name on the list
> 
> If we cannot have this process be completely open and democratic, then
> what the heck is the point of having our massive meritocracy in the
> first place? There's a lot of overhead we deal with by being a
> leaderless collective you know - we should occasionally get to have fun
> with it.


If your goal is to actually involve our massive meritocracy, I’d suggest 
expanding this thread to include at least the community marketing mailing list 
rather than just the -dev mailing list (possibly also the Foundation mailing 
list?). The release names are some of our most prominent brands, meaning 
choosing them is by definition a marketing activity. Not including the part of 
our meritocracy with experience in branding and marketing feels 
counterintuitive to me (again if the goal is actually to be meritocratic).

Jonathan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Melanie Witt for python-novaclient-core

2015-01-27 Thread Christopher Yeoh
On Wed, Jan 28, 2015 at 9:11 AM, Michael Still  wrote:

> Greetings,
>
> I would like to nominate Melanie Witt for the python-novaclient-core team.
>
> (What is python-novaclient-core? Its a new group which will contain
> all of nova-core as well as anyone else we think should have core
> reviewer powers on just the python-novaclient code).
>
> Melanie has been involved with nova for a long time now. She does
> solid reviews in python-novaclient, and at least two current
> nova-cores have suggested her as ready for core review powers on that
> repository.
>
> Please respond with +1s or any concerns.
>
>
+1
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Melanie Witt for python-novaclient-core

2015-01-27 Thread Dan Smith
> Please respond with +1s or any concerns.

+1

--Dan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Melanie Witt for python-novaclient-core

2015-01-27 Thread Bhandaru, Malini K
+1

-Original Message-
From: Dan Smith [mailto:d...@danplanet.com] 
Sent: Tuesday, January 27, 2015 3:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] Nominating Melanie Witt for 
python-novaclient-core

> Please respond with +1s or any concerns.

+1

--Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Melanie Witt for python-novaclient-core

2015-01-27 Thread Sean Dague
On 01/27/2015 02:41 PM, Michael Still wrote:
> Greetings,
> 
> I would like to nominate Melanie Witt for the python-novaclient-core team.
> 
> (What is python-novaclient-core? Its a new group which will contain
> all of nova-core as well as anyone else we think should have core
> reviewer powers on just the python-novaclient code).
> 
> Melanie has been involved with nova for a long time now. She does
> solid reviews in python-novaclient, and at least two current
> nova-cores have suggested her as ready for core review powers on that
> repository.
> 
> Please respond with +1s or any concerns.
> 
> References:
> 
> 
> https://review.openstack.org/#/q/project:openstack/python-novaclient+reviewer:%22melanie+witt+%253Cmelwitt%2540yahoo-inc.com%253E%22,n,z
> 
> As a reminder, we use the voting process outlined at
> https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
> core team.
> 
> Thanks,
> Michael
> 

+1

-- 
Sean Dague
http://dague.net



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican] python-barbicanclient 3.0.2 released

2015-01-27 Thread Douglas Mendizabal
Hi openstack-dev,

The barbican team would like to announce the release of python-barbicanclient 
3.0.2.  This is a minor release that fixes a bug in the pbr versioning that was 
preventing the client from working correctly.

The release is available on PyPI

https://pypi.python.org/pypi/python-barbicanclient/3.0.2 


Thanks,
- Doug Mendizábal


Douglas Mendizábal
IRC: redrobot
PGP Key: 245C 7B6F 70E9 D8F3 F5D5  0CC9 AD14 1F30 2D58 923C



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] novaclient support for V2.1 micro versions

2015-01-27 Thread Matthew Gilliard
As I understand it, novaclient would by default not pass the microversion
HTTP header (X-OpenStack-Compute-API-Version), so it would get the
server's MIN_VERSION, i.e. 2.1 for the foreseeable future.  It would be
straightforward to add something like an --api-version=xx command line
argument, or it might be useful to read a value from a local config file.
I don't think either of those has been done yet, though.
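
For testing in devstack before client support lands, calling the API
directly with the header set should work - a rough sketch (the endpoint,
port and token handling here are assumptions, for illustration only):

    import os
    import requests

    # Ask the v2.1 API for a specific microversion, which is roughly what
    # a future --api-version flag in novaclient would end up doing.
    headers = {
        'X-Auth-Token': os.environ['OS_TOKEN'],
        'X-OpenStack-Compute-API-Version': '2.2',
    }
    resp = requests.get('http://127.0.0.1:8774/v2.1/servers', headers=headers)
    print(resp.status_code)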

gilliard

On Fri, Jan 23, 2015 at 1:05 PM, Chen CH Ji  wrote:

> No, AFAICT it's not supported because the v2.1 microversion and related bp
> are still under implementation ,there is no change on novaclient now ...
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
> Phone: +86-10-82454158
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>
>
> From: "Day, Phil" 
> To: "OpenStack Development Mailing List (openstack-dev@lists.openstack.org)"
> 
> Date: 01/23/2015 05:56 PM
> Subject: [openstack-dev] [nova] novaclient support for V2.1 micro versions
> --
>
>
>
> Hi Folks,
>
> Is there any support yet in novaclient for requesting a specific
> microversion ?   (looking at the final leg of extending clean-shutdown to
> the API, and wondering how to test this in devstack via the novaclient)
>
> Phil
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Nominating Melanie Witt for python-novaclient-core

2015-01-27 Thread Joe Gordon
On Tue, Jan 27, 2015 at 3:52 PM, Sean Dague  wrote:

> On 01/27/2015 02:41 PM, Michael Still wrote:
> > Greetings,
> >
> > I would like to nominate Melanie Witt for the python-novaclient-core
> team.
> >
> > (What is python-novaclient-core? Its a new group which will contain
> > all of nova-core as well as anyone else we think should have core
> > reviewer powers on just the python-novaclient code).
> >
> > Melanie has been involved with nova for a long time now. She does
> > solid reviews in python-novaclient, and at least two current
> > nova-cores have suggested her as ready for core review powers on that
> > repository.
> >
> > Please respond with +1s or any concerns.
> >
> > References:
> >
> >
> https://review.openstack.org/#/q/project:openstack/python-novaclient+reviewer:%22melanie+witt+%253Cmelwitt%2540yahoo-inc.com%253E%22,n,z
> >
> > As a reminder, we use the voting process outlined at
> > https://wiki.openstack.org/wiki/Nova/CoreTeam to add members to our
> > core team.
> >
> > Thanks,
> > Michael
> >
>
> +1
>
>
+1



> --
> Sean Dague
> http://dague.net
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

