Re: [Openstack-operators] [openstack-dev] Open letter/request to TC candidates (and existing elected officials)

2018-09-12 Thread Davanum Srinivas
On Wed, Sep 12, 2018 at 3:30 PM Dan Smith  wrote:

> > I'm just a bit worried to limit that role to the elected TC members. If
> > we say "it's the role of the TC to do cross-project PM in OpenStack"
> > then we artificially limit the number of people who would sign up to do
> > that kind of work. You mention Ildiko and Lance: they did that line of
> > work without being elected.
>
> Why would saying that we _expect_ the TC members to do that work limit
> such activities only to those that are on the TC? I would expect the TC
> to take on the less-fun or often-neglected efforts that we all know are
> needed but don't have an obvious champion or sponsor.
>
> I think we expect some amount of widely-focused technical or project
> leadership from TC members, and certainly that expectation doesn't
> prevent others from leading efforts (even in the areas of proposing TC
> resolutions, etc) right?
>

+1 Dan!


> --Dan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


-- 
Davanum Srinivas :: https://twitter.com/dims
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Openstack] Certifying SDKs

2017-12-15 Thread Davanum Srinivas
Joe,

+1 to edit the sheet directly.

Thanks,
Dims

On Fri, Dec 15, 2017 at 2:45 PM, Joe Topjian <j...@topjian.net> wrote:
> Hi all,
>
> I've been meaning to reply to this thread. Volodymyr, your reply reminded me
> :)
>
> I agree with what you said that the SDK should support everything that the
> API supports. In that way, one could simply review the API reference docs
> and create a checklist for each possible action. I've often thought about
> doing this for Gophercloud so devs/users can see its current state of what's
> supported and what's missing.
>
> But Melvin highlighted the word "guaranteed", so I think he's looking for
> the most common scenarios/actions rather than an exhaustive list. For that,
> I can recommend the suite of Terraform acceptance tests. I've added a test
> each time a user has either reported a bug or requested a feature, so
> they're scenarios that I know are being used "in the wild".
>
> You can find these tests here:
> https://github.com/terraform-providers/terraform-provider-openstack/tree/master/openstack
>
> Each file that begins with "resource" and ends in "_test.go" will contain
> various scenarios at the bottom. For example, compute instances:
> https://github.com/terraform-providers/terraform-provider-openstack/blob/master/openstack/resource_openstack_compute_instance_v2_test.go#L637-L1134
>
> This contains tests for:
>
> * Basic launch of an instance
> * Able to add and remove security groups from an existing instance
> * Able to boot from a new volume or an existing volume
> * Able to edit metadata of an instance.
> * Able to create an instance with multiple ephemeral disks
> * Able to create an instance with multiple NICs, some of which are on the
> same network, some of which are defined as ports.
>
> Terraform is not an SDK, but it's a direct consumer of Gophercloud and is
> more user-facing, so I think it's quite applicable here. The caveat being
> that if Terraform or Gophercloud does not support something, it's not
> available as a test. :)
>
> Melvin, if this is of interest, I can either post a raw list of these
> tests/scenarios here or edit the sheet directly.
>
> Thanks,
> Joe
>
>
> On Fri, Dec 15, 2017 at 12:43 AM, Volodymyr Litovka <doka...@gmx.com> wrote:
>>
>> Hi Melvin,
>>
>> isn't an SDK the same as the OpenStack REST API? In my opinion (which can
>> be erroneous, though), an SDK should just support everything that the API
>> supports, providing some basic checks of parameters (e.g. verifying that a
>> passed parameter complies with IP address format, etc.) before calling the
>> API (in order to decrease the load on OpenStack by eliminating obviously
>> broken requests).
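The client-side pre-validation described above can be sketched in a few lines of Python; this is an illustration only, not code from any particular SDK, and `create_port` in the comment is a hypothetical call:

```python
import ipaddress

def validate_fixed_ip(value):
    """Reject obviously broken input before it ever reaches the cloud API."""
    try:
        ipaddress.ip_address(value)
    except ValueError:
        raise ValueError("not a valid IP address: %r" % (value,))
    return value

# A hypothetical SDK call would validate first, then issue the REST request:
# port = client.create_port(fixed_ip=validate_fixed_ip("10.0.0.7"))
```

Passing "10.0.0.999" raises ValueError locally, sparing the cloud a request that is certain to fail.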
>>
>> Thanks.
>>
>>
>> On 12/11/17 8:35 AM, Melvin Hillsman wrote:
>>
>> Hey everyone,
>>
>> On the path to potentially certifying SDKs we would like to gather a list
>> of scenarios folks would like to see "guaranteed" by an SDK.
>>
>> Some examples - boot instance from image, boot instance from volume,
>> attach volume to instance, reboot instance; very much like InterOp works to
>> ensure OpenStack clouds provide specific functionality.
>>
>> Here is a document we can share to do this -
>> https://docs.google.com/spreadsheets/d/1cdzFeV5I4Wk9FK57yqQmp5JJdGfKzEOdB3Vtt9vnVJM/edit#gid=0
>>
>> --
>> Kind regards,
>>
>> Melvin Hillsman
>> mrhills...@gmail.com
>> mobile: (832) 264-2646
>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openst...@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>
>>
>> --
>> Volodymyr Litovka
>>   "Vision without Execution is Hallucination." -- Thomas Edison
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Davanum Srinivas
Blair,

Please add #2 as a line proposal in:
https://etherpad.openstack.org/p/LTS-proposal

So far it's focused on #1

Thanks,
Dims

On Wed, Nov 15, 2017 at 3:30 AM, Blair Bethwaite
<blair.bethwa...@gmail.com> wrote:
> Hi all - please note this conversation has been split variously across
> -dev and -operators.
>
> One small observation from the discussion so far is that it seems as
> though there are two issues being discussed under the one banner:
> 1) maintain old releases for longer
> 2) do stable releases less frequently
>
> It would be interesting to understand if the people who want longer
> maintenance windows would be helped by #2.
>
> On 14 November 2017 at 09:25, Doug Hellmann <d...@doughellmann.com> wrote:
>> Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>>> >> The concept, in general, is to create a new set of cores from these
>>> >> groups, and use 3rd party CI to validate patches. There are lots of
>>> >> details to be worked out yet, but our amazing UC (User Committee) will
>>> >> begin working out the details.
>>> >
>>> > What is most worrying is the exact "take over" process. Does it mean
>>> > that the teams will give away the +2 power to a different team? Or will
>>> > our (small) stable teams still be responsible for landing changes? If
>>> > so, will they have to learn how to debug 3rd party CI jobs?
>>> >
>>> > Generally, I'm scared of both overloading the teams and losing control
>>> > over quality at the same time :) Probably the final proposal will
>>> > clarify it...
>>>
>>> The quality of backported fixes is expected to be a direct (and only?)
>>> interest of those new teams of new cores, coming from users and
>>> operators and vendors. The more parties establish their 3rd party
>>
>> We have an unhealthy focus on "3rd party" jobs in this discussion. We
>> should not assume that they are needed or will be present. They may be,
>> but we shouldn't build policy around the assumption that they will. Why
>> would we have third-party jobs on an old branch that we don't have on
>> master, for instance?
>>
>>> checking jobs, the better proposed changes are communicated, which
>>> directly affects the quality in the end. I also suppose contributors from
>>> the ops world will mostly be pushing to see things get fixed, rather than
>>> new features adopted by the legacy deployments they're used to maintain.
>>> So in theory this works, and as a mainstream developer and maintainer,
>>> you need not fear losing control over LTS code :)
>>>
>>> Another question is how not to block everyone on each other, and how not
>>> to push contributors away when things go awry, jobs fail and merging is
>>> blocked for a long time, or no consensus is reached in a code review. I
>>> propose that the LTS policy enforce CI jobs be non-voting as a first step
>>> on that way, and maybe give every LTS team member core rights? Not sure
>>> if that works though.
>>
>> I'm not sure what change you're proposing for CI jobs and their voting
>> status. Do you mean we should make the jobs non-voting as soon as the
>> branch passes out of the stable support period?
>>
>> Regarding the review team, anyone on the review team for a branch
>> that goes out of stable support will need to have +2 rights in that
>> branch. Otherwise there's no point in saying that they're maintaining
>> the branch.
>>
>> Doug
>>
>> ______
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Cheers,
> ~Blairo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] LTS pragmatic example

2017-11-14 Thread Davanum Srinivas
Flavio, Saverio,

Agree, that review may be a good example of what could be done. More info below.

Saverio said - "with the old Stable Release thinking this patch would
not be accepted on old stable branches."
My response - "Those branches are still under stable policy. That has
not changed just because of an email thread or a discussion in Forum"

Saverio said - "Let's see if this gets accepted back to stable/newton"
My response - "The branch the review is against is still under stable
policy. So things that will or will not be backported will not change"

Saverio said - "Please note that a developers/operators that make the
effort of fixing this in master, should do also all the cherry-pickes
back. We dont have any automatic procudure for this."
My response - "How the cherry-picks are done for stable branches will
not change. This is a stable branch, so there is no automatic
procedure for backporting"

I really want folks to help with stable first, learn how things are
done, and then propose changes to stable branch policies and help
execute them.

If folks want to chase LTS, then we are outlining a procedure/process
that is a first step towards LTS eventually.

Thanks,
Dims

On Wed, Nov 15, 2017 at 2:46 AM, Flavio Percoco <fla...@redhat.com> wrote:
> On 14/11/17 22:33 +1100, Davanum Srinivas wrote:
>>
>> Saverio,
>>
>> This is still under the stable team reviews... NOT LTS.
>>
Your contacts for the Nova Stable team are ...
>> https://review.openstack.org/#/admin/groups/540,members
>>
Let's please be clear: we need new people to help with LTS plans.
Current teams can't scale; they should not have to, and it's totally
unfair to expect them to do so.
>
>
> I think you may have misunderstood Saverio's email. IIUC, what he was
> trying to do was provide an example in favor of the LTS branches as
> discussed in Sydney, rather than requesting reviews or suggesting the
> stable team should do LTS.
>
> Flavio
>
>> On Tue, Nov 14, 2017 at 8:02 PM, Saverio Proto <ziopr...@gmail.com> wrote:
>>>
>>> Hello,
>>>
>>> here is an example of a trivial patch that is important for people who
>>> do operations and have to troubleshoot things.
>>>
>>> With the old Stable Release thinking, this patch would not be accepted
>>> on old stable branches.
>>>
>>> Let's see if this gets accepted back to stable/newton
>>>
>>>
>>> https://review.openstack.org/#/q/If525313c63c4553abe8bea6f2bfaf75431ed18ea
>>>
>>> Please note that the developers/operators who make the effort of fixing
>>> this in master should also do all the cherry-picks back. We don't
>>> have any automatic procedure for this.
>>>
>>> thank you
>>>
>>> Saverio
>
>
> --
> @flaper87
> Flavio Percoco



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] LTS pragmatic example

2017-11-14 Thread Davanum Srinivas
Saverio,

This is still under the stable team reviews... NOT LTS.

Your contacts for the Nova Stable team are ...
https://review.openstack.org/#/admin/groups/540,members

Let's please be clear: we need new people to help with LTS plans.
Current teams can't scale; they should not have to, and it's totally
unfair to expect them to do so.

Thanks,
Dims

On Tue, Nov 14, 2017 at 8:02 PM, Saverio Proto <ziopr...@gmail.com> wrote:
> Hello,
>
> here is an example of a trivial patch that is important for people who
> do operations and have to troubleshoot things.
>
> With the old Stable Release thinking, this patch would not be accepted
> on old stable branches.
>
> Let's see if this gets accepted back to stable/newton
>
> https://review.openstack.org/#/q/If525313c63c4553abe8bea6f2bfaf75431ed18ea
>
> Please note that the developers/operators who make the effort of fixing
> this in master should also do all the cherry-picks back. We don't
> have any automatic procedure for this.
>
> thank you
>
> Saverio
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-11 Thread Davanum Srinivas
 the people actually doing the work, along with the folks
> maintaining the tools used (in particular, the infra and QA teams).
>
> In preparation for the summit I went back through all of the notes
> I could find about stable branches from previous summits. The first
> mention of stable branches I found was at the Folsom summit, and
> that was a discussion of doing stable releases "more often," which
> implies that we had the stable branches at least as early as Essex.
> I wasn't around before Folsom, so I'm not sure when they actually
> started.  The first mention of an LTS release I found was the Juno
> summit, which was later than I expected.
>
> No one in the room disputed the assertion that what we're doing for
> stable releases is insufficient.  We are all trying to listen to
> users' needs.  Continuing to just say "we should do LTS releases"
> however doesn't acknowledge the other long standing fact, which is
> that over all of the time we have talked about it we have had *no
> contributors willing to actually support an LTS release model
> upstream.*
>
> We act like people have been saying "no, you are not allowed to
> maintain branches for longer than the stable team says is OK," or
> "no, we'll never provide an LTS release," but that's not how our
> community works. The policies are set by the contributors. If there
> are no contributors for an LTS release, there will be no LTS release.
> If there *are* contributors, then we'll find a way to make some
> sort of LTS model work within the other constraints we have.
>
> It seems now that we have people saying they would do some amount
> of maintenance work (probably less than we try to do for stable
> branches under our current model), if they could. The first change,
> what has been proposed, is to give them a place to do the work.
> Then we'll see if anyone actually does it, and if so we can plan
> further improvements.
>
>> Then we can ask the operators if they would prefer that we stop EOL'ing
>> things out from under them. We can make it a community goal to have a
>> "feature" that is "you can upgrade from the last one" and have "the last
>> one" be something older than 6 months, maybe even older than 1 year.
>>
>> [1] https://www.openstack.org/assets/survey/April2017SurveyReport.pdf
>>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Dealing with ITAR in OpenStack private clouds

2017-03-21 Thread Davanum Srinivas
Oops, hit send before I finished.

https://info.massopencloud.org/wp-content/uploads/2016/03/Workshop-Resource-Federation-in-a-Multi-Landlord-Cloud.pdf
https://git.openstack.org/cgit/openstack/mixmatch

Essentially, you can run a single Cinder proxy that works with
multiple Cinder backends (one use case).

Thanks,
Dims

On Tue, Mar 21, 2017 at 8:59 PM, Davanum Srinivas <dava...@gmail.com> wrote:
> Jonathan,
>
> The folks from Boston University have done some work around this idea:
>
> https://github.com/openstack/mixmatch/blob/master/doc/source/architecture.rst
>
>
> On Tue, Mar 21, 2017 at 7:33 PM, Jonathan Mills <jonmi...@gmail.com> wrote:
>> Friends,
>>
>> I’m reaching out for assistance from anyone who may have confronted the
>> issue of dealing with ITAR data in an OpenStack cloud being used in some
>> department of the Federal Gov.
>>
>> ITAR (https://www.pmddtc.state.gov/regulations_laws/itar.html) is a less
>> restrictive level of security than classified data, but it has some thorny
>> aspects to it, particularly where media is concerned:
>>
>> * you cannot co-mingle ITAR and non-ITAR data on the same physical hard
>> drives, and any drive, once it has been “tainted” with any ITAR data, is now
>> an ITAR drive
>>
>> * when ITAR data is destroyed, a DBAN is insufficient — instead, you
>> physically shred the drive.  No need to elaborate on how destructive this
>> can get if you accidentally mingle ITAR with non-ITAR
>>
>> Certainly the multi-tenant model of OpenStack holds great promise in Federal
>> agencies for supporting both ITAR and non-ITAR worlds, but great care must
>> be taken that *somehow* things like Glance and Cinder don’t get mixed up.
>> One must ensure that the ITAR tenants can only access Glance/Cinder in ways
>> such that their backend storage is physically separate from any non-ITAR
>> tenants.  Certainly I understand that Glance/Cinder can support multiple
>> storage backend types, such as File & Ceph, and maybe that is an avenue to
>> explore for achieving the physical separation.  But what if you want to have
>> multiple different File backends?
>>
>> Do the ACLs exist to ensure that non-ITAR tenants can’t access ITAR
>> Glance/Cinder backends, and vice versa?
>>
>> Or…is it simpler to just build two OpenStack clouds….?
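On the "multiple different File backends" question: Cinder's multi-backend support does allow several file-based backends side by side. The fragment below is an illustrative sketch (the backend names and share-file paths are invented), and by itself it provides physical separation, not access control:

```ini
# Illustrative cinder.conf fragment: two physically separate NFS backends.
[DEFAULT]
enabled_backends = file-itar,file-general

[file-itar]
volume_backend_name = ITAR
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/itar_shares.conf

[file-general]
volume_backend_name = GENERAL
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/general_shares.conf
```

Each backend is then selected via a volume type whose extra spec matches volume_backend_name; making the ITAR volume type private to the ITAR projects is one way to keep other tenants from choosing it, though that should be verified against your release.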
>>
>> Your thoughts will be most appreciated,
>>
>>
>> Jonathan Mills
>>
>> NASA Goddard Space Flight Center
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
>
> --
> Davanum Srinivas :: https://twitter.com/dims



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Dealing with ITAR in OpenStack private clouds

2017-03-21 Thread Davanum Srinivas
Jonathan,

The folks from Boston University have done some work around this idea:

https://github.com/openstack/mixmatch/blob/master/doc/source/architecture.rst


On Tue, Mar 21, 2017 at 7:33 PM, Jonathan Mills <jonmi...@gmail.com> wrote:
> Friends,
>
> I’m reaching out for assistance from anyone who may have confronted the
> issue of dealing with ITAR data in an OpenStack cloud being used in some
> department of the Federal Gov.
>
> ITAR (https://www.pmddtc.state.gov/regulations_laws/itar.html) is a less
> restrictive level of security than classified data, but it has some thorny
> aspects to it, particularly where media is concerned:
>
> * you cannot co-mingle ITAR and non-ITAR data on the same physical hard
> drives, and any drive, once it has been “tainted” with any ITAR data, is now
> an ITAR drive
>
> * when ITAR data is destroyed, a DBAN is insufficient — instead, you
> physically shred the drive.  No need to elaborate on how destructive this
> can get if you accidentally mingle ITAR with non-ITAR
>
> Certainly the multi-tenant model of OpenStack holds great promise in Federal
> agencies for supporting both ITAR and non-ITAR worlds, but great care must
> be taken that *somehow* things like Glance and Cinder don’t get mixed up.
> One must ensure that the ITAR tenants can only access Glance/Cinder in ways
> such that their backend storage is physically separate from any non-ITAR
> tenants.  Certainly I understand that Glance/Cinder can support multiple
> storage backend types, such as File & Ceph, and maybe that is an avenue to
> explore to achieving the physical separation.  But what if you want to have
> multiple different File backends?
>
> Do the ACLs exist to ensure that non-ITAR tenants can’t access ITAR
> Glance/Cinder backends, and vice versa?
>
> Or…is it simpler to just build two OpenStack clouds….?
>
> Your thoughts will be most appreciated,
>
>
> Jonathan Mills
>
> NASA Goddard Space Flight Center
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [oslo] RabbitMQ queue TTL issues moving to Liberty

2016-07-28 Thread Davanum Srinivas
>>>>
>>>> Every time you restart something that has a fanout queue (e.g.
>>>> cinder-scheduler or the neutron agents), you will have
>>>> a queue in RabbitMQ that is still bound to the exchange (and so
>>>> still receiving messages) but has no consumers.
>>>>
>>>> The messages in these queues are basically rubbish and don’t need to
>>>> exist. Rabbit will delete these queues after 10 mins (although the default
>>>> in master is now changed to 30 mins).
>>>>
>>>> During this time the queue will grow and grow with messages. This sets
>>>> off our Nagios alerts, and our ops guys have to deal with something that
>>>> isn’t really an issue. They basically delete the queue.
>>>>
>>>> A bad scenario is when you make a change to your cloud that means all
>>>> your 1000 neutron agents are restarted; this causes a couple of dead
>>>> queues per agent (port updates and security group updates) to hang
>>>> around. We get around 25 messages/second on these queues, so you can see
>>>> that after 10 minutes we have a ton of messages in these queues.
>>>>
>>>> 1000 x 2 x 25 x 600 = 30,000,000 messages in 10 minutes to be precise.
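Sam's arithmetic checks out; as a quick sanity check (numbers taken from the message above, not fresh measurements):

```python
# Estimated backlog in dead fanout queues after a fleet-wide agent restart.
agents = 1000          # neutron agents restarted
queues_per_agent = 2   # port-update + security-group-update fanout queues
msgs_per_sec = 25      # observed message rate on these queues
ttl_seconds = 600      # RabbitMQ only expires the unused queues after 10 min

backlog = agents * queues_per_agent * msgs_per_sec * ttl_seconds
print(backlog)  # 30000000 -- thirty million messages before the TTL fires
```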
>>>>
>>>> Has anyone else been suffering from this, before I raise a bug?
>>>>
>>>> Cheers,
>>>> Sam
>>>>
>>>>
>>>> ___
>>>> OpenStack-operators mailing list
>>>> OpenStack-operators@lists.openstack.org
>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>
>>>
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova docker]need help for nova-docker

2015-11-02 Thread Davanum Srinivas
hittang,

Please see the note about "long lived process" in the nova-docker README file.
This happens when the process started in the container exits before the network
is set up. You should also check the Docker logs to see if there are any errors.
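To illustrate the "long lived process" point: nova-docker needs the container's first process to still be running when it wires up networking. A minimal Dockerfile sketch (the sshd choice is just an example of a long-lived foreground process, not taken from the thread):

```
# Bad: the main process exits immediately, so there is no PID left
# when nova-docker tries to set up the network.
# CMD ["echo", "hello"]

# Good: a foreground daemon keeps the container and its network
# namespace alive.
CMD ["/usr/sbin/sshd", "-D"]
```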

Thanks,
Dims


On Mon, Nov 2, 2015 at 1:30 AM, hittang <hit...@163.com> wrote:

> Hi, everyone.
>  I installed the nova-docker driver following https://wiki.openstack.org/wiki/Docker ,
> but when I create a Docker instance it doesn't work. The error is "Cannot
> setup network: Cannot find any PID under container". Can anybody help me?
> setup network: Cannot find any PID under container" Can anybody help me ?
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>


-- 
Davanum Srinivas :: https://twitter.com/dims
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [magnum] Trying to use Magnum with Kilo

2015-08-18 Thread Davanum Srinivas
Mike,

Probably the wrong mailing list for RDO. I did find some info that may be
useful to you, though, at:
https://www.rdoproject.org/packaging/rdo-packaging.html#_how_to_add_a_new_package_to_rdo_master_packaging
https://www.rdoproject.org/packaging/

-- Dims

On Tue, Aug 18, 2015 at 5:09 PM, Mike Smith mism...@overstock.com wrote:

 Thanks.  I’ll check it out.

 Can anyone out there tell me how projects like python-magnumclient and the
 openstack-magnum software itself get picked up by the RDO folks?  I’d like
 to see those be picked up in their distro but I’m not sure where that work
 takes place.  Do project developers typically package up their projects and
 make them available to the RDO maintainers or do RedHat folks pick up
 sources from the projects, do the packaging, and make those packages
 available?

 We can start building our own packages for this of course, but as
 operators we prefer not to because of all the dependency overhead.   Unless
 it’s something we can do to help get the packages into the RDO repos (i.e.
 become a package maintainer as a way of contributing).

 Mike Smith
 Principal Engineer / Cloud Team Lead
 Overstock.com



 On Aug 18, 2015, at 2:47 PM, David Medberry openst...@medberry.net
 wrote:

 http://git.openstack.org/cgit/openstack/python-magnumclient

 On Tue, Aug 18, 2015 at 12:21 PM, Mike Smith mism...@overstock.com
 wrote:

 I’m trying to use Magnum on our OpenStack Kilo cloud, which runs CentOS 7
 and RDO.   Since the Magnum RPMs aren’t present in RDO, I’m using RPMs built
 by one of the Magnum developers (available at
 https://copr-be.cloud.fedoraproject.org/results/sdake/openstack-magnum/)

 Once I got rid of a conflicting UID that it tries to use for the magnum
 user, I’m able to start up the services.   However, following along with
 the Magnum documentation that exists (
 http://docs.openstack.org/developer/magnum/), the next step is to use
 the “magnum” command to define things like bay models and bays.

 However, the “magnum” command doesn’t seem to exist.  I’m not sure if
 it’s supposed to exist as a symlink to something else?

 Is anyone else out there using Magnum with RDO Kilo?  I’d love to chat
 with someone else that has worked through these issues.

 Thanks,
 Mike

 Mike Smith
 Principal Engineer / Cloud Team Lead
 Overstock.com http://overstock.com/




 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




-- 
Davanum Srinivas :: https://twitter.com/dims
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] FYI: Rabbit Heartbeat Patch Landed

2015-05-03 Thread Davanum Srinivas
Sam,

1. Weird, did you pick it up from the Ubuntu Cloud Archive? We could raise
a bug against them.
2. Yes, not much we can do about that now, I guess.
3. Yes. Can you please log a bug for this?

Thanks!

On Sun, May 3, 2015 at 9:45 PM, Sam Morrison sorri...@gmail.com wrote:
 I’ve found a couple of issues with this:

 1. Upgrading the packages in Ubuntu doesn’t seem to work; you need to remove
 them all and then install fresh. There are some conflicts with file paths, etc.
 2. With Juno Heat, the requirements.txt has upper limits on the versions of
 the oslo deps. I just removed these and it seems to work fine.
 3. When using amqp_durable_queues, it will no longer declare the exchanges
 with this argument set, so this will give errors when declaring the exchange.
 (I think this is a bug, at least an upgrade bug, as this will affect people
 moving juno -> kilo.)
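For anyone hitting issue 3: amqp_durable_queues is the oslo.messaging option in question, and RabbitMQ rejects redeclaring an existing exchange with different durability arguments. The placement below is for the Juno era; later releases moved this option to the [oslo_messaging_rabbit] section, so verify against your release:

```ini
# Juno-era oslo.messaging configuration.
[DEFAULT]
amqp_durable_queues = true
```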




 On 4 May 2015, at 9:08 am, Sam Morrison sorri...@gmail.com wrote:

 We’re running:

 kombu: 3.0.7
 amqp: 1.4.5
 rabbitmq, 3.3.5
 erlang: R14B04


 On 2 May 2015, at 1:51 am, Kris G. Lindgren klindg...@godaddy.com wrote:

 We are running:
 kombu 3.0.24
 amqp 1.4.6
 rabbitmq 3.4.0
 erlang R16B-03.10
 

 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.



 On 5/1/15, 9:41 AM, Davanum Srinivas dava...@gmail.com wrote:

 may I request folks post the versions of RabbitMQ and the pip versions of
 the kombu and amqp libraries?

 thanks,
 dims

 On Fri, May 1, 2015 at 11:29 AM, Mike Dorman mdor...@godaddy.com wrote:
 We’ve been running the new oslo.messaging under Juno for about the last
 month, and we’ve seen success with it, too.

 From: Sam Morrison
 Date: Thursday, April 30, 2015 at 11:02 PM
 To: David Medberry
 Cc: OpenStack Operators
 Subject: Re: [Openstack-operators] FYI: Rabbit Heartbeat Patch Landed

 Great, let me know how you get on.


 On 1 May 2015, at 12:21 pm, David Medberry openst...@medberry.net
 wrote:

 Great news Sam. I'll pull those packages into my Juno devel environment
 and see if it makes any difference.
 Much appreciated for the rebuilds/links.

 Also, good to connect with you at ... Connect AU.

 On Thu, Apr 30, 2015 at 7:30 PM, Sam Morrison sorri...@gmail.com
 wrote:

 I managed to get a juno environment with oslo.messaging 1.8.1 working
 in ubuntu 14.04

 I have a debian repo with all the required dependancies at:

 deb http://download.rc.nectar.org.au/nectar-ubuntu
 trusty-juno-testing-oslo main

 All it includes is ubuntu official packages from vivid.

 Have installed in our test environment and all looking good so far
 although haven’t done much testing yet.

 Sam



 On 21 Mar 2015, at 2:35 am, David Medberry openst...@medberry.net
 wrote:

 Hi Sam,

 I started down the same path yesterday. If I have any success today,
 I'll
 post to this list.

 I'm also going to reach out to the Ubuntu Server (aka Cloud) team and
 see if they can throw up a PPA with this for Juno quickly (which they
 will likely NOT do, but it doesn't hurt to ask.) We need to get the
 stable/juno team on board with this backport/regression.

 On Fri, Mar 20, 2015 at 4:14 AM, Sam Morrison sorri...@gmail.com
 wrote:

 I’ve been trying to build a ubuntu deb of this in a juno environment.
 It’s a bit of a nightmare as they have changed all the module names
 from oslo.XXX to oslo_XXX

 Have fixed those up with a few sed replaces and had to remove support
 for aioeventlet as the dependencies aren’t in the ubuntu cloud archive
 juno.

 Still have a couple of tests failing but I think it *should* work in
 on
 our juno hosts.

 I have a branch of the 1.8.0 release that I¹m trying to build against
 Juno here [1] and I¹m hoping that it will be easy to integrate the
 heartbeat
 code.
 I¹m sure there is lots of people that would be keen to get a latest
 version of oslo.messaging working against a juno environment. What is
 the
 best way to make that happen though?

 Cheers,
 Sam

 [1] https://github.com/NeCTAR-RC/oslo.messaging/commits/nectar/1.8.0



 On 20 Mar 2015, at 8:59 am, Davanum Srinivas dava...@gmail.com
 wrote:

 So, talking about experiments, here's one:
 https://review.openstack.org/#/c/165981/

 Trying to run oslo.messaging trunk against stable/juno of the rest
 of
 the components.

 -- dims

 On Thu, Mar 19, 2015 at 5:10 PM, Matt Fischer m...@mattfischer.com
 wrote:
 I think everyone is highly interested in running this change or a
 newer OSLO
 messaging in general + this change in Juno rather than waiting for
 Kilo.
 Hopefully everyone could provide updates as they do experiments.


 On Thu, Mar 19, 2015 at 1:22 PM, Kevin Bringard (kevinbri)
 kevin...@cisco.com wrote:

 Can't speak to that concept, but I did try cherry picking the
 commit
 into
 the stable/juno branch of oslo.messaging and there'd definitely be
 some work
 to be done there. I fear that could mean havoc for trying to just
 use
 master
 oslo as well, but a good idea to try for sure.

 -- Kevin

 On Mar 19, 2015, at 1:13 PM, Jesse Keating j...@bluebox.net

Re: [Openstack-operators] FYI: Rabbit Heartbeat Patch Landed

2015-05-01 Thread Davanum Srinivas
may i request folks post the versions of rabbitmq and pip versions of
kombu and amqp libraries?

thanks,
dims
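For gathering the requested versions, `pip freeze | egrep 'kombu|amqp'` on each node covers the pip half, and `rabbitmqctl status` on the broker reports the server version. As a small scripted sketch in modern Python (importlib.metadata needs 3.8+, so this is illustrative rather than what a 2015-era Juno box would have run; the package list is an assumption):

```python
# Report installed versions of the libraries dims asked about; any
# distribution that isn't installed is flagged rather than raising.
from importlib.metadata import version, PackageNotFoundError

def report(packages):
    """Map each distribution name to its installed version string."""
    out = {}
    for name in packages:
        try:
            out[name] = version(name)
        except PackageNotFoundError:
            out[name] = "not installed"
    return out

if __name__ == "__main__":
    print(report(["kombu", "amqp", "oslo.messaging"]))
```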

On Fri, May 1, 2015 at 11:29 AM, Mike Dorman mdor...@godaddy.com wrote:
 We’ve been running the new oslo.messaging under Juno for about the last
 month, and we’ve seen success with it, too.

 From: Sam Morrison
 Date: Thursday, April 30, 2015 at 11:02 PM
 To: David Medberry
 Cc: OpenStack Operators
 Subject: Re: [Openstack-operators] FYI: Rabbit Heartbeat Patch Landed

 Great, let me know how you get on.


 On 1 May 2015, at 12:21 pm, David Medberry openst...@medberry.net wrote:

 Great news Sam. I'll pull those packages into my Juno devel environment and
 see if it makes any difference.
 Much appreciated for the rebuilds/links.

 Also, good to connect with you at ... Connect AU.

 On Thu, Apr 30, 2015 at 7:30 PM, Sam Morrison sorri...@gmail.com wrote:

 I managed to get a juno environment with oslo.messaging 1.8.1 working in
 ubuntu 14.04

  I have a debian repo with all the required dependencies at:

 deb http://download.rc.nectar.org.au/nectar-ubuntu
 trusty-juno-testing-oslo main

 All it includes is ubuntu official packages from vivid.

 Have installed in our test environment and all looking good so far
 although haven’t done much testing yet.

 Sam



 On 21 Mar 2015, at 2:35 am, David Medberry openst...@medberry.net wrote:

 Hi Sam,

 I started down the same path yesterday. If I have any success today, I'll
 post to this list.

  I'm also going to reach out to the Ubuntu Server (aka Cloud) team and see
 if they can throw up a PPA with this for Juno quickly (which they will
 likely NOT do but it doesn't hurt to ask.) We need to get the stable/juno
 team on board with this backport/regression.

 On Fri, Mar 20, 2015 at 4:14 AM, Sam Morrison sorri...@gmail.com wrote:

 I’ve been trying to build a ubuntu deb of this in a juno environment.
 It’s a bit of a nightmare as they have changed all the module names from
 oslo.XXX to oslo_XXX

 Have fixed those up with a few sed replaces and had to remove support for
 aioeventlet as the dependencies aren’t in the ubuntu cloud archive juno.
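Those "few sed replaces" amount to one mechanical rewrite; a sketch of the same transform in Python (the direction shown, dotted oslo.* imports rewritten to the underscored oslo_* layout, is an assumption -- flip the pattern if your installed dependencies provide the dotted namespace instead):

```python
# Rewrite old dotted oslo namespace imports (from oslo.config ...) to
# the underscored layout (from oslo_config ...), as done here with sed.
import re

PATTERN = re.compile(r"from oslo\.([a-z]+)")

def fix_imports(source: str) -> str:
    """Rewrite 'from oslo.X' imports to 'from oslo_X' in a source blob."""
    return PATTERN.sub(r"from oslo_\1", source)

before = "from oslo.config import cfg\nfrom oslo.messaging import transport\n"
print(fix_imports(before))
```

The shell equivalent would be a `sed -i 's/from oslo\.\([a-z]*\)/from oslo_\1/g'` over the tree.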

  Still have a couple of tests failing but I think it *should* work on
 our juno hosts.

 I have a branch of the 1.8.0 release that I’m trying to build against
 Juno here [1] and I’m hoping that it will be easy to integrate the heartbeat
 code.
  I’m sure there are lots of people that would be keen to get the latest
  version of oslo.messaging working against a juno environment. What is the
  best way to make that happen though?

 Cheers,
 Sam

 [1] https://github.com/NeCTAR-RC/oslo.messaging/commits/nectar/1.8.0



  On 20 Mar 2015, at 8:59 am, Davanum Srinivas dava...@gmail.com wrote:
 
  So, talking about experiments, here's one:
  https://review.openstack.org/#/c/165981/
 
  Trying to run oslo.messaging trunk against stable/juno of the rest of
  the components.
 
  -- dims
 
  On Thu, Mar 19, 2015 at 5:10 PM, Matt Fischer m...@mattfischer.com
  wrote:
  I think everyone is highly interested in running this change or a
  newer OSLO
  messaging in general + this change in Juno rather than waiting for
  Kilo.
  Hopefully everyone could provide updates as they do experiments.
 
 
  On Thu, Mar 19, 2015 at 1:22 PM, Kevin Bringard (kevinbri)
  kevin...@cisco.com wrote:
 
  Can't speak to that concept, but I did try cherry picking the commit
  into
  the stable/juno branch of oslo.messaging and there'd definitely be
  some work
  to be done there. I fear that could mean havoc for trying to just use
  master
  oslo as well, but a good idea to try for sure.
 
  -- Kevin
 
  On Mar 19, 2015, at 1:13 PM, Jesse Keating j...@bluebox.net wrote:
 
  On 3/19/15 10:15 AM, Davanum Srinivas wrote:
  Apologies. i was waiting for one more changeset to merge.
 
  Please try oslo.messaging master branch
  https://github.com/openstack/oslo.messaging/commits/master/
 
  (you need at least till Change-Id:
  I4b729ed1a6ddad2a0e48102852b2ce7d66423eaa - change id is in the
  commit
  message)
 
  Please note that these changes are NOT in the kilo branch that has
  been
  cut already
  https://github.com/openstack/oslo.messaging/commits/stable/kilo
 
  So we need your help with testing to promote it to kilo for you all
  to
  use it in Kilo :)
 
  Please file reviews or bugs or hop onto #openstack-oslo if you see
  issues etc.
 
  Many thanks to Kris Lindgren to help shake out some issues in his
  environment.
 
  How bad of an idea would it be to run master of oslo.messaging with
  juno
  code base? Explosions all over the place?
 
  --
  -jlk
 
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org
 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 
 
  ___
  OpenStack-operators mailing list
  OpenStack-operators@lists.openstack.org
 
  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

[Openstack-operators] Fwd: [openstack-dev] [release] oslo.messaging 1.8.1

2015-03-25 Thread Davanum Srinivas
FYI. for those waiting to try oslo.messaging rabbitmq heartbeat support.

-- dims

-- Forwarded message --
From: Doug Hellmann d...@doughellmann.com
Date: Wed, Mar 25, 2015 at 10:13 AM
Subject: [openstack-dev] [release] oslo.messaging 1.8.1
To: OpenStack Development Mailing List (not for usage questions)
openstack-...@lists.openstack.org


We are pleased to announce the release of:

oslo.messaging 1.8.1: Oslo Messaging API

This is a Kilo-series patch release, fixing several bugs.

For more details, please see the git log history below and:

http://launchpad.net/oslo.messaging/+milestone/1.8.1

Please report issues through launchpad:

http://bugs.launchpad.net/oslo.messaging

Changes in oslo.messaging 1.8.0..1.8.1
--

57fad97 Publish tracebacks only on debug level
b5f91b2 Reconnect on connection lost in heartbeat thread
ac8bdb6 cleanup connection pool return
ee18dc5 rabbit: Improves logging
db99154 fix up verb tense in log message
64bdd80 rabbit: heartbeat implementation
9b14d1a Add support for multiple namespaces in Targets

Diffstat (except docs and test files)
-

oslo_messaging/_drivers/amqp.py  |  44 ++-
oslo_messaging/_drivers/amqpdriver.py|  15 +-
oslo_messaging/_drivers/impl_qpid.py |   2 +-
oslo_messaging/_drivers/impl_rabbit.py   | 346 ---
oslo_messaging/rpc/dispatcher.py |   2 +-
oslo_messaging/target.py |   9 +-
11 files changed, 541 insertions(+), 70 deletions(-)
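For anyone wiring this release in, the heartbeat from commit 64bdd80 is controlled by new options in the rabbit driver. A minimal sketch of a service config (option names and defaults as commonly documented for this series; confirm against the sample config generated from your installed version):

```ini
[oslo_messaging_rabbit]
# Seconds before an unacknowledged heartbeat marks the connection as
# dead; 0 disables the heartbeat thread entirely.
heartbeat_timeout_threshold = 60
# Heartbeats sent per timeout interval.
heartbeat_rate = 2
```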
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] FYI: Rabbit Heartbeat Patch Landed

2015-03-20 Thread Davanum Srinivas
Good point about the hosts, i'd agree with John and Abel.

fyi, some good news in CI testing of oslo.messaging trunk with rest of
components from stable/juno.
https://review.openstack.org/#/c/165981/

NOTE: CI testing does not exercise rabbitmq going up and down or
multiple rabbitmq hosts; it just checks that the happy code path
against a single rabbitmq host works OK.

Still need people to try oslo.messaging trunk in their juno
environments and let us know, so we can promote the code from
oslo.messaging trunk to stable/kilo and release it with kilo.

thanks,
-- dims

On Fri, Mar 20, 2015 at 2:22 AM, Abel Lopez alopg...@gmail.com wrote:
 I tried that once as a test, it was pretty much a major fail. This was
 behind an F5 too. Just leaving the hosts in a list works better.


 On Thursday, March 19, 2015, John Dewey j...@dewey.ws wrote:

  Why would anyone want to run rabbit behind haproxy?  I get that people did it
  before the 'rabbit_servers' flag.  Allowing the client to detect, handle, and
  retry is a far better alternative than load balancer health check intervals.

 On Thursday, March 19, 2015 at 9:42 AM, Kris G. Lindgren wrote:

  I have been working with dims and sileht on testing this patch in one of
 our pre-prod environments. There are still issues with rabbitmq behind
 haproxy that we are working through. However, in testing if you are using
 a list of hosts you should see significantly better catching/fixing of
 faults.

  If you are using cells, don't forget to also apply:
 https://review.openstack.org/#/c/152667/
 
 Kris Lindgren
 Senior Linux Systems Engineer
 GoDaddy, LLC.



 On 3/19/15, 10:22 AM, Mark Voelker mvoel...@vmware.com wrote:

  At the Operator's midcycle meetup in Philadelphia recently there was a
 lot of operator interest[1] in the idea behind this patch:

 https://review.openstack.org/#/c/146047/

 Operators may want to take note that it merged yesterday. Happy testing!


 [1] See bottom of https://etherpad.openstack.org/p/PHL-ops-rabbit-queue

 At Your Service,

 Mark T. Voelker
 OpenStack Architect


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] FYI: Rabbit Heartbeat Patch Landed

2015-03-19 Thread Davanum Srinivas
Apologies. i was waiting for one more changeset to merge.

Please try oslo.messaging master branch
https://github.com/openstack/oslo.messaging/commits/master/

(you need at least till Change-Id: I4b729ed1a6ddad2a0e48102852b2ce7d66423eaa - 
change id is in the commit message) 

Please note that these changes are NOT in the kilo branch that has been cut 
already
https://github.com/openstack/oslo.messaging/commits/stable/kilo

So we need your help with testing to promote it to kilo for you all to use it 
in Kilo :)

Please file reviews or bugs or hop onto #openstack-oslo if you see issues etc.

Many thanks to Kris Lindgren for helping to shake out some issues in his environment.

thanks,
dims


 On Mar 19, 2015, at 12:22 PM, Mark Voelker mvoel...@vmware.com wrote:
 
 At the Operator’s midcycle meetup in Philadelphia recently there was a lot of 
 operator interest[1] in the idea behind this patch:
 
 https://review.openstack.org/#/c/146047/
 
 Operators may want to take note that it merged yesterday.  Happy testing!
 
 
 [1] See bottom of https://etherpad.openstack.org/p/PHL-ops-rabbit-queue
 
 At Your Service,
 
 Mark T. Voelker
 OpenStack Architect
 
 
 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev][all][qa][gabbi][rally][tempest] Extend rally verfiy to unify work with Gabbi, Tempest and all in-tree functional tests

2015-03-09 Thread Davanum Srinivas
Boris,

1. Suppose a project, say Nova, wants to enable this Rally integration
for its functional tests: what does that project have to do (other
than the existing well-defined tox targets)?
2. Is there a test project with Gabbi-based tests that you know of?
3. What changes, if any, are needed in Gabbi to make this happen?

I'm guessing that going forward we can set up weekly Rally jobs against
different projects so we can compare performance over time, etc.?

thanks,
dims


On Fri, Mar 6, 2015 at 6:47 PM, Boris Pavlovic bo...@pavlovic.me wrote:
 Hi stackers,

 Intro (Gabbi)
 -

 Gabbi is an amazing tool that allows you to describe, in a human-readable way,
 which API requests to execute and what you expect as a result. It
 simplifies API testing a lot.

 It's based on unittest so it can be easily run using tox/testr/nose and so
 on.
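As a concrete illustration of that declarative style, a hypothetical gabbi suite might look like this (the resource paths and names are made up, but the keys follow gabbi's documented YAML format):

```yaml
# servers.yaml -- a gabbi suite: each entry is one HTTP request plus
# assertions on the response.
tests:
  - name: list servers when none exist
    GET: /servers
    status: 200
    response_json_paths:
      $.servers: []

  - name: create a server
    POST: /servers
    request_headers:
      content-type: application/json
    data:
      name: test-server
    status: 201
```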


 Intro (Functional in-tree tests)
 ---

 Keeping all tests in one project like Tempest, maintained by one
 team, was not a scalable enough approach. To scale things, projects started
 maintaining their own functional tests in their own trees. This resolves the
 scaling issues, and now new features can be merged along with their functional tests.


 The Problem
 -

 As you know, there are a lot of OpenStack projects with their own
 functional tests / gabbi tests in tree. It becomes hard for developers,
 devops, and operators to work with them (much like it's hard to install
 OpenStack by hand without DevStack).

 Usually, end users choose one of two approaches:
 1) Write their own tests
 2) Write scripts that somehow run all these tests


 Small Intro (Rally)
 -------------------

 The idea of Rally is to make a tool that simplifies all kinds of testing of
 multiple OpenStack clouds.
 It should be friendly for humans as well as simple to integrate into a CI/CD
 process.

 Rally automates the whole testing process (managing test systems / running
 tests / storing results / working with results).

 At this moment there are 3 major parts:
 *) deployment - manages OpenStack deployments (creates new or uses existing)
 *) verify - fully manages Tempest (installation/configuration/running/parsing
 output/storing results/working with results)
 *) task - Rally's own testing framework that allows you to do all kinds of
 testing: functional/load/performance/scale/volume and others.

 I can say that the rally verify command, which automates work with Tempest,
 is very popular. More details here:
 https://www.mirantis.com/blog/rally-openstack-tempest-testing-made-simpler/


 Proposal to make life better
 --

 Recently Yair Fried and Prasanth Anbalagan proposed a great idea: extend the
 rally verify command with the ability to run in-tree functional tests in the
 same way as Tempest.

 In other words, to have the following syntax: rally verify project command

 Something like this:

   rally verify swift start                 # 1. Check whether swift is installed
                                            #    for the active rally deployment.
                                            #    IF NOT:
                                            #      Download swift from the default
                                            #      (or specified) place
                                            #      Switch to master or a specified tag
                                            #      Install swift in a venv
                                            #      Configure the swift functional
                                            #      test config for the active
                                            #      deployment
                                            # 2. Run the swift functional tests
                                            # 3. Parse the subunit output and store
                                            #    it in the Rally DB (for future work)

   rally verify swift list                  # List all swift verification runs
   rally verify swift show UUID             # Show results
   rally verify swift compare UUID1 UUID2   # Compare results of two runs


 Why it makes sense
 ------------------

 1) Unification of testing process.

 There is a simple-to-learn set of commands, rally verify project cmd,
 that works for all projects in the same way.  End users like such things =)

 2) Simplification of testing process.

 rally verify project start will automate all the steps, so you won't need
 to install the project manually, configure the functional tests, or collect
 and store the results somewhere.

 3) Avoiding duplication of effort

 We don't need to implement part of the rally verify functionality in every
 project.
 It is better to implement it in one place with plugin support. Adding a new
 project means implementing a new plugin (in most cases it will be just
 functional test config generation).

 4) Reusing already existing code

 Most of the code that we need is already implemented in Rally;
 it just requires small refactoring and generalization.


 Thoughts?


 Best regards,
 Boris Pavlovic


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




-- 
Davanum Srinivas :: https://twitter.com/dims

Re: [Openstack-operators] Operators Summit: RabbitMQ

2015-03-05 Thread Davanum Srinivas
Mike,

I added a couple more reviews.

-- dims

On Thu, Mar 5, 2015 at 1:43 PM, Mike Dorman mdor...@godaddy.com wrote:
 I’ll follow the other posts today with some info on the RabbitMQ session for
 Tuesday morning.

 I’d like to start by quickly going over the different RMQ architectures that
 people run, and how you manage and maintain those.

 Then I think it’ll be helpful to classify the general issues people
 typically have, which I think are fairly well known at this point, but could
 use a recap.  Then we can dive deeper into how folks are handling and
 working around them.  I think this will be the most practical and useful
 topic for people.

 As always, please add details/comments/other topics to the etherpad:
 https://etherpad.openstack.org/p/PHL-ops-rabbit-queue

 If there are any reviews in flight around RMQ performance that we as
 operators can help comment on, please add those to the etherpad as well.  I
 only found one that seemed very relevant.

 Thanks!
 Mike


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] EXT4 as ephemeral disk

2015-01-14 Thread Davanum Srinivas
Nathanael, Abel,

"Disable the automatic formatting" sounds like a good feature. Do you
mind helping us add it? (log a review? post a diff? create a
blueprint?).

thanks,
dims

On Wed, Jan 14, 2015 at 9:35 AM, Nathanael Burton
nathanael.i.bur...@gmail.com wrote:
 We just rely on the tenants to format the disks themselves. Whatever fs type
 we choose would inevitably be wrong for a large percentage of the users, so
 why bother.
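Since the disks then arrive raw, tenants typically format them at boot; one common approach is cloud-init's fs_setup/mounts modules. A sketch (the ephemeral device name, /dev/vdb here, is an assumption that varies by hypervisor and flavor):

```yaml
#cloud-config
# Format the raw ephemeral disk at first boot and mount it.
fs_setup:
  - label: ephemeral0
    filesystem: ext4
    device: /dev/vdb
mounts:
  # nofail keeps boot from hanging on flavors without an ephemeral disk
  - [ /dev/vdb, /mnt, ext4, "defaults,nofail", "0", "2" ]
```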

 On Jan 13, 2015 7:37 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 01/13/2015 07:14 PM, Nathanael Burton wrote:

 We actually modified the code to disable the automatic formatting of
 ephemeral disks.  This was especially problematic with flavors that had
 larger sized ephemeral disks as it would slow the nova boot time.


 Hi Nate!

 So, do you have some sort of agent that formats the larger ephemeral disks
 post-boot? Or do you just rely on the tenant formatting the raw disks
 themselves?

 Best,
 -jay

 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


 ___
 OpenStack-operators mailing list
 OpenStack-operators@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




-- 
Davanum Srinivas :: https://twitter.com/dims

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators