Re: [openstack-dev] [tc][all] A culture change (nitpicking)
On Tue, May 29, 2018 at 10:52:04AM -0400, Mohammed Naser wrote:
:On Tue, May 29, 2018 at 10:43 AM, Artom Lifshitz wrote:
:> One idea would be that, once the meat of the patch has passed multiple
:> rounds of reviews and looks good, and what remains is only nits, the
:> reviewer themselves take on the responsibility of pushing a new patch
:> that fixes the nits that they found.

Doesn't the above suggestion sufficiently address the concern below?

:I'd just like to point out that what you perceive as a 'finished
:product that looks unprofessional' might be already hard enough for a
:contributor to achieve. We have a lot of new contributors coming from
:all over the world and it is very discouraging for them to have their
:technical knowledge and work be categorized as 'unprofessional'
:because of the language barrier.
:
:git-nit and a few minutes of your time will go a long way, IMHO.

As a very intermittent contributor and native English speaker with relatively poor spelling and typing, I'd be much happier with a reviewer pushing a patch that fixes nits rather than leaving a ton of inline comments that point them out.

Maybe we're all saying the same thing here?

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
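For reviewers who haven't done it before, pushing the nit fixes yourself is a short workflow with the git-review tool. A sketch (the change number is made up for illustration):

```shell
# Check out the change under review locally (change number is illustrative)
git review -d 567890

# Fix the nits in your editor, then amend in place -- this keeps the
# original author and Change-Id, so Gerrit records it as a new patch
# set on the same change, not a new change
git commit -a --amend --no-edit

# Push the updated patch set back up for review
git review
```

The original contributor stays the author of the commit; the reviewer's fixes show up as just another patch set.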
Re: [openstack-dev] [nova] Default scheduler filters survey
On Wed, Apr 18, 2018 at 05:20:13PM +, Tim Bell wrote:
:I'd suggest asking on the openstack-operators list since there is only a subset of operators who follow openstack-dev.

I'd second that. While I'm (obviously) subscribed to both, I do pay more attention to operators, and almost missed this ask. But here's mine:

scheduler_default_filters=ComputeFilter,AggregateInstanceExtraSpecsFilter,AggregateCoreFilter,AggregateRamFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,ImagePropertiesFilter,PciPassthroughFilter

:Tim
:
:-Original Message-
:From: Chris Friesen
:Reply-To: "OpenStack Development Mailing List (not for usage questions)"
:Date: Wednesday, 18 April 2018 at 18:34
:To: "openstack-dev@lists.openstack.org"
:Subject: Re: [openstack-dev] [nova] Default scheduler filters survey
:
:On 04/18/2018 09:17 AM, Artom Lifshitz wrote:
:
:> To that end, we'd like to know what filters operators are enabling in
:> their deployment. If you can, please reply to this email with your
:> [filter_scheduler]/enabled_filters (or
:> [DEFAULT]/scheduler_default_filters if you're using an older version)
:> option from nova.conf.
Any other comments are welcome as well :)
:
:RetryFilter
:ComputeFilter
:AvailabilityZoneFilter
:AggregateInstanceExtraSpecsFilter
:ComputeCapabilitiesFilter
:ImagePropertiesFilter
:NUMATopologyFilter
:ServerGroupAffinityFilter
:ServerGroupAntiAffinityFilter
:PciPassthroughFilter
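For anyone mapping the older option name onto a current nova.conf, the same list goes under the newer section mentioned in the survey request. A sketch using the filter list from the reply above as an example, not a recommendation:

```ini
[filter_scheduler]
# Newer equivalent of the older [DEFAULT]/scheduler_default_filters option
enabled_filters = RetryFilter,ComputeFilter,AvailabilityZoneFilter,AggregateInstanceExtraSpecsFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,NUMATopologyFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter,PciPassthroughFilter
```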
Re: [openstack-dev] [ptg] Simplification in OpenStack
On Sat, Sep 23, 2017 at 12:05:38AM -0700, Adam Lawson wrote:
:Lastly, I do think GUIs make deployments easier and because of that, I
:feel they're critical. There is more than one vendor who has built and
:distributes a free GUI to ease OpenStack deployment and management. That's
:a good start but those are the opinions of a specific vendor - not the OS
:community. I have always been a big believer in a default cloud
:configuration to ease the shock of having so many options for everything. I
:have a feeling however our commercial community will struggle with
:accepting any method/project other than their own as being part of a default
:config. That will be a tough one to crack.

Different people have different needs, so this is not meant to contradict Adam. But :)

Any unique deployment tool would be of no value to me, as OpenStack (or any other infrastructure component) needs to fit into my environment. I'm not going to adopt something new that requires a new management tool parallel to what I already use. I think focusing on the existing configuration management projects is the way to go. Getting Ansible/Puppet/Chef/etc. to support a well-known set of "constellations" in an opinionated way would make deployment easy (for most people, who are using one of those already) and, assuming the opinions are the same :), make consumption easier as well.

As an example, when I started using OpenStack (Essex) we had recently switched to Ubuntu as our Linux platform and Puppet as our config management. Ubuntu had a "one click MAAS install of OpenStack" which was impossible for us, as it made all sorts of assumptions about our environment and wanted control of most of it so it could provide a full deployment solution. Puppet had a good integrated example config where I plugged in some local choices and used existing deploy methodologies. I fought with MAAS's "simple" install for a week. When I gave up and went with Puppet, I had live users on a substantial (for the time) cloud in less than 2 days.
I don't think this has to do with the relative value of MAAS and Puppet at the time, but rather with what fit my existing deploy workflows. Supporting multiple config tools may not be simple from an upstream perspective, but we do already have these projects, and it is simpler to consume for brownfield deployers at least.

-Jon

:That's what I got tonight. Have a great weekend.
:
://adam
:
:
:*Adam Lawson*
:
:Principal Architect
:Office: +1-916-794-5706
:
:On Thu, Sep 21, 2017 at 11:23 AM, Clint Byrum wrote:
:
:> Excerpts from Jeremy Stanley's message of 2017-09-21 16:17:00 +:
:> > On 2017-09-20 17:39:38 -0700 (-0700), Clint Byrum wrote:
:> > [...]
:> > > Something about common use cases and the exact mix of
:> > > projects + configuration to get there, and testing it? Help?
:> > [...]
:> >
:> > Maybe you're thinking of the "constellations" suggestion? It found
:> > its way into the TC vision statement, though the earliest mention I
:> > can locate is in John's post to this ML thread:
:> >
:> > http://lists.openstack.org/pipermail/openstack-dev/2017-April/115319.html
:>
:> Yes, constellations. Thanks!
Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project
I don't have a strong opinion on the split vs. stay discussion. It does seem there have been sustained if ineffective attempts to keep this together, so I lean toward supporting the divorce. But let's not pretend there are no costs for this.

On Thu, Aug 04, 2016 at 07:02:48PM -0400, Jay Pipes wrote:
:On 08/04/2016 06:40 PM, Clint Byrum wrote:
:>But, if I look at this from a user perspective, if I do want to use
:>anything other than images as cloud artifacts, the story is pretty
:>confusing.
:
:Actually, I beg to differ. A unified OpenStack Artifacts API,
:long-term, will be more user-friendly and less confusing since a
:single API can be used for various kinds of similar artifacts --
:images, Heat templates, Tosca flows, Murano app manifests, maybe
:Solum things, maybe eventually Nova flavor-like things, etc.

The confusion is the current state of two APIs, not the prospect of a future integrated API. Remember how well that served us with nova-network and neutron (né quantum).

I also agree with Tim's point. Yes, if a new project is fully documented and integrated well into packaging and config management, implementing it is trivial, but history again teaches this is a long road. It also means extra dev overhead to create and manage these supporting structures to hide the complexity from end users. Now if the two projects are sufficiently different, this may not be a significant delta, as the new docs and config management code would be needed in the old project if the new service stayed there.

-Jon
Re: [openstack-dev] Timeframe for naming the P release?
On Tue, May 03, 2016 at 12:09:40AM -0400, Adam Young wrote:
:On 05/02/2016 08:07 PM, Rochelle Grober wrote:
:>But, the original spelling of the landing site is Plimoth Rock.

There were still highway signs up in the 70's directing folks to "Plimoth Rock". There are still signs with both spellings, presumably for slightly different contexts. Even having lived here basically my whole life, I'm not sure there's a consistent distinction, except that the legal entity that is the town is always spelled with a y.

:>
:>--Rocky
:>Who should know about rocks ;-)

:And Providence is, I think, close enough for inclusion as well. And
:that is just the towns.
:
:
:Plymouth is the only county in Mass with a P name, but Penobscot ME
:used to be part of MA, and should probably be in the running as well.

I'd second Providence and Penobscot as 'close enough'. I'm actually partial to Providence...

-Jon
Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?
On Wed, Apr 13, 2016 at 01:52:38PM -0400, Jonathan D. Proulx wrote:
:I've not been following this thread at all so apologies if I'm
:confused.

Reading follow-up emails relating to the timing of various submissions, I back away slowly, clearly not having all the context on this one.

-Jon
Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore - What about big tent?
On Wed, Apr 13, 2016 at 05:17:18PM +, Amrith Kumar wrote:
:Today I was informed that after a lot of effort and testing, the installation guide for Trove/Mitaka which is ready and up for review[1] has been placed on hold pending the outcome of your discussions in Austin.

I've not been following this thread at all, so apologies if I'm confused.

As an operator and a (former? it's been a while) docs contributor, it seems to me that the Newton proposal makes sense (given my brief reading). I don't see why Mitaka docs that are already written and tested should be held up on that, though; it seems the point of coordinated releases is so others can rely on major structures being stable through them.

My $0.02,
-Jon

:
:The documentation that is now available and ready for review is for the Mitaka series and should not, I believe, be held up because there is now a proposal afoot to put non-core project installation guides somewhere else. If we choose to do that, that's a conversation for Newton, I believe, and I believe that the Trove installation guide for Mitaka should be considered for inclusion along with the other Mitaka documentation.
:
:The lack of installation guides for a project is a serious challenge for deployers and users, and much work has been expended getting the Trove documentation ready and thoroughly tested on Ubuntu, RDO and SUSE.
:
:I'm therefore requesting that the doc team consider this set of documentation for the Mitaka series and make it available with the other install guides for other projects after it has been reviewed, and not hold it subject to the outcome of some Newton focused discussion that is to happen in Austin.
:
:Thanks,
:
:-amrith
:
:
:[1] https://review.openstack.org/#/c/298929/
:
:> -Original Message-
:> From: Andreas Jaeger [mailto:a...@suse.com]
:> Sent: Monday, April 04, 2016 2:42 PM
:> To: OpenStack Development Mailing List (not for usage questions)
:>
:> Subject: Re: [openstack-dev] [docs] Our Install Guides Only Cover Defcore
:> - What about big tent?
:>
:> On 04/04/2016 12:12 PM, Thierry Carrez wrote:
:> > Doug Hellmann wrote:
:> >>> [...]
:> >>> We would love to add all sufficiently mature projects to the
:> >>> installation guide because it increases visibility and adoption by
:> >>> operators, but we lack resources to develop a source installation
:> >>> mechanism that retains as much simplicity as possible for our
:> >>> audience.
:> >>
:> >> I think it would be a big mistake to try to create one guide for
:> >> installing all OpenStack projects. As you say, testing what we have
:> >> now is already a monumental task and impedes your ability to make
:> >> changes. Adding more projects, with ever more dependencies and
:> >> configuration issues to the work the same team is doing would bury
:> >> the current documentation team. So I think focusing on the DefCore
:> >> list, or even a smaller list of projects with tight installation
:> >> integration requirements, makes sense for the team currently
:> >> producing the installation guide.
:> >
:> > Yes, the base install guide should ideally serve as a reference to
:> > reach that first step where you have all the underlying services
:> > (MySQL, Rabbit) and a base set of functionality (starterkit:compute ?)
:> > installed and working. That is where we need high-quality,
:> > proactively-checked, easy to understand content.
:> >
:> > Then additional guides (ideally produced by each project team with
:> > tooling and mentoring from the docs team) can pick up from that base
:> > first step, assuming their users have completed that first step
:> > successfully.
:> >
:>
:> Fully agreed.
:>
:> I just wrote a first draft spec for all of this and look forward to
:> reviews.
:>
:> I'll enhance some more tomorrow, might copy a bit from above (saw this too
:> late).
:>
:> https://review.openstack.org/301284
:>
:> Andreas
:> --
:> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
:> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
:> GF: Felix Imendörffer, Jane Smithard, Graham Norton,
:> HRB 21284 (AG Nürnberg)
:> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126
Re: [openstack-dev] [all] A proposal to separate the design summit
On Wed, Mar 02, 2016 at 04:11:48PM +, Alexis Lee wrote:
:Walter A. Boring IV said on Mon, Feb 22, 2016 at 11:47:16AM -0800:
:> I'm trying to follow this here. If we want all of the projects in
:> the same location to hold a design summit, then all of the
:> contributors are still going to have to do international travel,
:> which is the primary cost for attendees.
:
:My understanding is that hotel cost tends to dwarf flight cost. Capital
:city hotels tend to be (much) more expensive than provincial ones, so
:moving to less glamorous locations could noticeably reduce the total
:outlay.
:
:EG 750 flight + 300/night hotel * 5 nights = 2050
:   750 flight + 100/night hotel * 5 nights = 1250
:
:(figures are approx)

I'm not sure how much cost optimization is reasonable to attempt. It is true that hotel costs in the current arrangement have been a multiple of flight costs, for me at least. It's also true that hotels in secondary cities tend to be cheaper (though I'm not sure they're a third of the price). If we are going to consider detailed costs, we should also consider that flights to secondary cities are more expensive.

A semi-random pricing comparison of Manchester vs. London from Boston, USA (I picked the dates of the Austin summit, since that's about how far ahead I book things):

BOS->LHR $900 (nonstop) + London hotel ($250 * 5) = $2150
BOS->MAN $1200 (1 stop) + Manchester hotel ($120 * 5) = $1800

So it is cheaper, but $350 on a week's travel isn't a stay-or-go choice here. For different city pairs and times this will all move around, so without more detailed comparisons this is still pretty sloppy, but I don't think the cost differs enough to be significant. I think available facilities and the local OpenStack community should be larger factors in location selection than this level of travel cost optimization.
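The back-of-the-envelope comparison above is just flight plus five hotel nights. As a sketch (the figures are the approximate ones quoted in this message, not real fares):

```python
def trip_cost(flight, hotel_per_night, nights=5):
    """Total cost of a summit trip: one flight plus a hotel stay."""
    return flight + hotel_per_night * nights

# Approximate figures from the comparison above (Austin-summit dates)
london = trip_cost(flight=900, hotel_per_night=250)       # BOS->LHR, nonstop
manchester = trip_cost(flight=1200, hotel_per_night=120)  # BOS->MAN, 1 stop
print(london, manchester, london - manchester)  # 2150 1800 350
```

The pricier flight to the secondary city eats most of the hotel savings, which is the point being made.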
-Jon
Re: [openstack-dev] [Nova] Migration state machine proposal.
On Thu, Nov 05, 2015 at 09:33:37AM -0500, Andrew Laski wrote:
:Can you be a little more specific on what API difference is important
:to you? There are two differences currently between migrate and
:resize in the API:
:
:1. There is a different policy check, but this only really protects
:the next bit.
:
:2. Resize passes in a new flavor and migration does not.
:
:Both actions result in an instance being scheduled to a new host. If
:they were consolidated into a single action with a policy check to
:enforce that users specified a new flavor and admins could leave that
:off would that be problematic for you?

My typical use is live migration (perhaps that is yet another code path?), which involves:

3. Specifying the host to migrate to.

This is what I really want to protect.

My use case, if it helps: the reason I want to specify the host (or, even better, a host aggregate if I could) is that I use 'cpu_mode=host-passthrough' and have a few generations of hardware, and my instance types are not constrained to a particular generation (which I realize is an option, as we do that for other purposes). So, left to itself, the scheduler might try to live-migrate to an older CPU generation, which would fail; we're currently using human intelligence to try to migrate to the same generation and, if that's full, move newer. This is an uncommon but important procedure, mostly used for updates that require a hypervisor reboot, in which we roll everything from node-0 to node-N: update node-0, then roll node-1 onto node-0, etc.

If I could constrain migration by host aggregate in ways that didn't map to instance-type metadata constraints, that would simplify this, but the current situation is adequate for me. This isn't an issue with non-live migration or resize, neither of which requires CPU consistency.

-Jon
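For concreteness, the host-targeted operation described above is the optional target-host argument to live migration. A sketch with made-up names:

```shell
# Move a running instance to an explicitly chosen host of the same
# CPU generation (instance and host names are placeholders)
nova live-migration my-instance compute-07
```

Omitting the host argument leaves the destination choice to the scheduler, which is exactly what the host-passthrough use case above is trying to avoid.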
Re: [openstack-dev] [Nova] Migration state machine proposal.
On Wed, Nov 04, 2015 at 06:17:17PM +, Murray, Paul (HP Cloud) wrote:
:> From: Jay Pipes [mailto:jaypi...@gmail.com]
:> A fair point. However, I think that a generic update VM API, which would
:> allow changes to the resources consumed by the VM along with capabilities
:> like CPU model or local disk performance (SSD) is a better way to handle this
:> than a resize-specific API.
:
:
:Sorry I am so late to this - but this stuck out for me.
:
:Resize is an operation that a cloud user would do to his VM. Usually the
:cloud user does not know what host the VM is running on so a resize does
:not appear to be a move at all.
:
:Migrate is an operation that a cloud operator does to a VM that is not normally
:available to a cloud user. A cloud operator does not change the VM because
:the operator just provides what the user asked for. He only choses where he is
:going to put it.
:
:It seems clear to me that resize and migrate are very definitely different things,
:even if they are implemented using the same code path internally for convenience.
:At the very least I believe they need to be kept separate at the API so we can apply
:different policy to control access to them.

As an operator, I'm with Paul on this. By all means use the same code path, because behind the scenes it *is* the same thing. BUT, at the API level we do need the distinction, particularly for access control policy. The UX 'findability' is important too, but if that were the only issue, a bit of syntactic sugar in the UI could take care of it.

-Jon
Re: [openstack-dev] [all] service catalog: TNG
On Fri, Oct 09, 2015 at 02:17:26PM -0400, Monty Taylor wrote:
:On 10/09/2015 01:39 PM, David Stanek wrote:
:>
:>On Fri, Oct 9, 2015 at 1:28 PM, Jonathan D. Proulx <j...@csail.mit.edu> wrote:
:>As an operator I'd be happy to use SRV records to define endpoints,
:>though multiple regions could make that messy.
:>
:>would we make subdomains per region or include region name in the
:>service name?
:>
:>_compute-regionone._tcp.example.com
:>-vs-
:>_compute._tcp.regionone.example.com
:>
:>Also not all operators can control their DNS to this level so it
:>couldn't be the only option.
:
:SO - XMPP does this. The way it works is that if your XMPP provider
:has put the appropriate records in DNS, then everything Just Works. If
:not, then you, as a consumer, have several pieces of information you
:need to provide by hand.
:
:Of course, there are already several pieces of information you have
:to provide by hand to connect to OpenStack, so needing to download a
:manifest file or something like that to talk to a cloud in an
:environment where the people running a cloud do not have the ability
:to add information to DNS (boggles) shouldn't be that terrible.

Yes, but XMPP requires 2 (maybe 3) SRV records, so an equivalent number of local config options is manageable. A cloud with X endpoints and Y regions is significantly more. Not to say this couldn't be done by packing more stuff into the openrc or equivalent so users don't need to directly enter all that, but that would be a significant change and one I think would be more difficult for smaller operations.

:One could also imagine an in-between option where OpenStack could run
:an _optional_ DNS for this purpose - and then the only 'by-hand'
:you'd need for clouds with no real DNS is the location of the
:discovery DNS.

Yes, a special-purpose DNS (à la DNSBL) might be preferable to pushing around static configs.
-Jon
Re: [openstack-dev] [all] service catalog: TNG
On Fri, Oct 09, 2015 at 01:01:20PM -0400, Shamail wrote:
:> On Oct 9, 2015, at 12:28 PM, Monty Taylor wrote:
:>
:>> On 10/09/2015 11:21 AM, Shamail wrote:
:>>
:>>
:>>> On Oct 9, 2015, at 10:39 AM, Sean Dague wrote:
:>>>
:>>> It looks like some great conversation got going on the service catalog
:>>> standardization spec / discussion at the last cross project meeting.
:>>> Sorry I wasn't there to participate.
:>>
:>> Apologies if this is a question that has already been addressed, but why can't we just leverage something like consul.io?
:>
:> It's a good question and there have actually been some discussions about leveraging it on the backend. However, even if we did, we'd still need keystone to provide the multi-tenancy view on the subject. consul wasn't designed (quite correctly I think) to be a user-facing service for 50k users.
:>
:> I think it would be an excellent backend.
:
:Thanks, that makes sense. I agree that it might be a good backend but not the overall solution... I was bringing it up to ensure we consider existing options (where possible) and spend cycles on the unsolved bits.

As an operator I'd be happy to use SRV records to define endpoints, though multiple regions could make that messy. Would we make subdomains per region or include the region name in the service name?

_compute-regionone._tcp.example.com
-vs-
_compute._tcp.regionone.example.com

Also, not all operators can control their DNS to this level, so it couldn't be the only option. Or are you talking about using an internal DNS implementation private to the OpenStack deployment? I'm actually a bit less happy with that idea.

-Jon
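To make the two naming options concrete, here is roughly what the records might look like in a zone file (a sketch; the priority, weight, port, and target host are invented for illustration):

```
; Option A: region encoded in the service label
_compute-regionone._tcp.example.com.    IN SRV 10 5 8774 api1.example.com.

; Option B: region as a subdomain
_compute._tcp.regionone.example.com.    IN SRV 10 5 8774 api1.example.com.
```

Either way, each service/region pair needs its own record, which is where the "X endpoints times Y regions" bookkeeping concern comes from.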