On Mon, Sep 19, 2016 at 11:21 AM, Steven Hardy <sha...@redhat.com> wrote:

> Hi Alex,
>
> Firstly, thanks for this detailed feedback - it's very helpful to have
> someone with a fresh perspective look at the day-1 experience for TripleO,
> and while some of what follows are "known issues", it's great to get some
> perspective on them, as well as ideas re how we might improve things.
>
> On Thu, Sep 15, 2016 at 09:09:24AM -0600, Alex Schultz wrote:
> > Hi all,
> >
> > I've recently started looking at the various methods for deploying and
> > developing tripleo.  What I would like to bring up is the current
> > combination of the tooling for managing the VM instances and the
> > actual deployment method to launch the undercloud/overcloud
> > installation.  While running through the various methods and reading
> > up on the documentation, I'm concerned that they are not currently
> > flexible enough for a developer (or operator for that matter) to be
> > able to setup the various environment configurations for testing
> > deployments and doing development.  Additionally I ran into issues
> > just trying to get them working at all, so this probably doesn't help when
> > trying to attract new contributors as well.  The focus of this email
> > and of my experience seems to relate to the workflow-simplification
> > spec[0].  I would like to share my experiences with the various
> > tooling available and raise some ideas.
> >
> > Example Situation:
> >
> > For example, I have a laptop with 16G of RAM and an SSD and I'd like
> > to get started with tripleo.  How can I deploy tripleo?
>
> So, this is probably problem #1, because while I have managed to deploy a
> minimal TripleO environment on a laptop with 16G of RAM, I think it's
> pretty widely known that it's not really enough (certainly with our default
> configuration, which has unfortunately grown over time as more and more
> things got integrated).
>
> I see two options here:
>
> 1. Document the reality (which is really you need a physical machine with
> at least 32G RAM unless you're prepared to deal with swapping).
>
> 2. Look at providing a "TripleO lite" install option, which disables some
> services (both on the undercloud and default overcloud install).
>
> Either of these is definitely possible, but (2) seems like the best
> long-term solution (although it probably means another CI job).
>
>
Yeah, I think 1 is an OK short-term fix and 2 is the ideal solution.  I think
we really need to do a full evaluation of what services run and where, and
make sure they are also properly tunable.  I think that might help with some
of this memory utilization as well.
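
As a starting point for that evaluation, even something as simple as the
following on the undercloud (purely an illustrative sketch, not a proposed
tool) shows which services are the big memory consumers:

# top memory consumers by process (RSS in KiB)
ps -eo rss,comm --sort=-rss | head -n 20

# or a per-service view via systemd cgroups
systemd-cgtop -b -m -n 1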


> > Tools:
> >
> > instack:
> >
> > I started with the tripleo docs[1] that reference using the instack
> > tools for virtual environment creation while deploying tripleo.   The
> > docs say you need at least 12G of RAM[2].  The docs lie (step 7[3]).
> > So after basically shutting everything down and letting it deploy with
> > all my RAM, the deployment fails because the undercloud runs out of
> > RAM and OOM killer kills off heat.  This was not because I had reduced
> > the amount of ram for the undercloud node or anything.  It was because
> > by default, 6GB of RAM with no swap is configured for the undercloud
> > (not sure if this is a bug?).  So I added a swap file to the
> > undercloud and continued. My next adventure was having the overcloud
> > deployment fail for lack of memory, as puppet fails trying to spawn
> > a process and gets denied.  The instack method does not configure swap
> > for the VMs that are deployed and the deployment did not work with 5GB
> > RAM for each node.  So for a full 16GB I was unable to follow the
> > documentation and use instack to successfully deploy.  At this point I
> > switched over to trying to use tripleo-quickstart.  Eventually I was
> > able to get a configuration with instack to deploy, once I figured out
> > how to enable swap for the overcloud deployment.
>
> Yeah, so this definitely exposes that we need to update the docs, and also
> provide an easy install-time option to enable swap on all-the-things for
> memory constrained environments.
>

Even for environments that aren't memory constrained, having some swap would
help alleviate some of these random errors due to memory issues.
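
For reference, what I did on the undercloud was basically the standard swap
file dance (the size here is just illustrative):

sudo dd if=/dev/zero of=/swapfile bs=1M count=4096   # 4G swap file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# persist across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab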


>
> > tripleo-quickstart:
> >
> > The next thing I attempted to use was the tripleo-quickstart[4].
> > Following the directions I attempted to deploy against my localhost.
> > It turns out that doesn't work as expected since ansible likes to do
> > magic when dealing with localhost[5].  Ultimately I was unable to get
> > it working against my laptop locally because I ran into some libvirt
> > issues.  But I was able to get it to work when I pointed it at a
> > separate machine.  It should be noted that tripleo-quickstart creates
> > an undercloud with swap, which was nice because then it actually worked,
> > but it makes for an inconsistent experience depending on which tool you
> > use for your deployment.
>
> Yeah, so while a lot of folks have good luck with tripleo-quickstart, it
> has the disadvantage of not currently being the tool used in upstream
> TripleO CI (which folks have looked at fixing, but it's not yet happened).
>
> The original plan was for tripleo-quickstart to completely replace the
> instack-virt-setup workflow:
>
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-quickstart
>
> But for a variety of reasons, we never quite got to that - we may need a
> summit discussion on the path forward here.
>
> For me (as an upstream developer) it really boils down to the CI usage
> issue - at all times I want to use the tool which gets me closest to what
> runs in upstream CI (which although we actually use instack-virt-setup, we
> otherwise follow the tripleo-docs procedure pretty closely, using a helper
> script called tripleo.sh, which you can run locally):
>
> http://paste.fedoraproject.org/431073/30480114


I think this is another developer vs. operator issue. Ideally a happy medium
is a single tool that serves both. I think it's unrealistic to assume that
the CI setup is going to work for an operator and vice versa.  I'd just like
to make sure people keep in mind, when developing these things, that having
some flexibility can go a long way.  Also, we really should include
tripleo.sh as part of quickstart.  That paste provides an excellent condensed
example for getting started.
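
For anyone following along, the condensed flow is roughly the one below (the
hostname is made up, the tripleo.sh flags are from memory, and the paste and
the script's --help are the authoritative references):

# virt setup + undercloud VM via tripleo-quickstart on a remote virt host
bash quickstart.sh virthost.example.com

# then, on the undercloud, tripleo.sh walks the documented steps
./tripleo.sh --repo-setup
./tripleo.sh --undercloud
./tripleo.sh --overcloud-images
./tripleo.sh --register-nodes
./tripleo.sh --introspect-nodes
./tripleo.sh --overcloud-deploy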


>
>
> > Thoughts:
> >
> > What these two methods showed me is that the deployment of tripleo is
> > not exactly a foolproof thing and that there are a lot of assumptions
> > that are being handled by both of these tools.  My initial goal in
> > starting this conversation around tooling and workflows was to raise the
> > idea of separating the (virtual) environment configuration from the
> > actual deployment of tripleo as well as identifying places for
> > improvement as a way to speed up development and deployment testing.
> > I believe there are a few reasons why this can be beneficial.
>
> Yep, I think this goal is uncontentious, and it's pretty much the original
> aim of tripleo-quickstart.
>
> > The first reason is that as a developer, I would like to simplify the
> > development environment creation process and be able to draw the line
> > between environment and actual deployment tool.  By developing and
> > documenting a working development/deployment workflow, we can simplify
> > the onboarding experience as well as possibly accelerating the
> > existing development processes by reducing the time spent messing with
> > creating environments.  Does tripleo need to manage creation of VMs to
> > deploy on? The answer is probably no.  As the end user will want to
> > deploy tripleo on his or her gear, the focus for tripleo probably
> > should be on improving that process.  Now this doesn't mean that we
> > can't write stuff to do this, as it's important for development and
> > testing.  I'm not sure this is a core part of what should be
> > 'tripleo'.
>
> Yeah, agreed - the automation around setting up the VMs is really just a
> convenience, and it's not really a core part of TripleO - any tool could be
> used provided the VMs end up configured in the way we require.
>
> > Another reason why I think this is important is as we talk about
> > creating different scenarios for CI[6] to improve testing, it would
> > also be useful for a developer or qa engineer to be able to test
> > different environmental configurations that would be more realistic of
> > actual deployment scenarios without having to hunt down multiple
> > machines or configure actual hardware networking.  For example,
> > creating environments using multiple networks, changing NICs,
> > providing different sized nodes based on roles, etc can all be done
> > virtually.  While tripleo-quickstart has some of these options, it is
> > mixed in with the tripleo deployment process and does not seem to
> > align with being able to deploy tripleo in more real world networking
> > or environmental scenarios.
>
> Yeah, so I think this is one reason why the tripleo-quickstart discussion
> has sometimes proven tricky - the original spec was about replacing only
> the virt-setup pieces, but there was subsequently some scope-creep.  I
> think this is being addressed, but it'd be good to have folks working on
> that chime in here.
>
> > Since there are a bunch of assumptions baked into the existing
> > development scripts, I would say the current approach is more 'it
> > works in devstack' than 'it works for the end user'.  This is not to
> > say the current tools don't have their uses as they currently work
> > for the existing CI setup and for many developers today.  I think we
> > can do better if we draw clearer lines between what is tripleo and
> > what is something that is environmental and a supporting tool.
>
> I'm not sure I agree here - after you have your virt stuff setup, the
> TripleO pieces which do the deployment are identical to those used by the
> end user (unlike Devstack where it's likely the entire environment has been
> configured using a different tool to production deployments).
>
>
For me, the combination of the two tools hides what's actually occurring
under the covers.  This is more a commentary on the quickstart tooling: many
of the steps are done for you without your knowing what's actually going on.
Additionally, some of the flexibility/configurability gets lost because the
configuration method changes from configuring tripleo to configuring
quickstart.  Personally I prefer keeping things simple and bubbling up the
configuration of the underlying tool rather than writing additional
complexity on top of something to try to hide the warts or add functionality
that was not intended.  But that's my personal preference.


> > Ideas:
> >
> > As part of bringing something to get the conversation started and to
> > better understand how things work, I spent about two days coming up
> > with a PoC[7] for a workflow that splits the environment creation,
> > configuration, and management out from the actual deployment of the
> > undercloud/overcloud.  Having previously used other tools for managing
> > environments for deploying openstack, I thought I'd try deploying
> > tripleo using something I was familiar with, fuel-devops[8].  The
> > whole point of fuel-devops is to be able to create and manage entire
> > virtual environments (and their networking configuration) on a given
> > host.  With this, I was able to create my environment setup in a yaml
> > file[9] which would then be able to be reproduced.  So with this tool,
> > I'm able to create a number of nodes with a given memory, disk, and network
> > configuration as part of an 'environment'.  This environment is
> > completely separated from another environment which means given a
> > large enough virtual host, I could have multiple tripleo deployments
> > occurring simultaneously on their own networks.  This is a nice
> > feature, but just an added bonus to the tool (along with snapshotting
> > and a few other nifty things).  The bigger feature here is that this
> > is more representative of what someone using tripleo is going to
> > experience. They are going to have their environment already
> > configured and would like to deploy tripleo on it.  Once the
> > environment was created, I started to understand what it would be like
> > for an end user to take an undercloud image and deploy it.
> > Fortunately because we're still dealing with VMs, you can just point
> > the undercloud node at the undercloud image itself[10] for testing
> > purposes.
>
> This looks very interesting, thanks for sharing! :)
>
> That said, my main concern if we go this way is we'd end up with three ways
> to do a virt setup vs the current two ;)
>
> Definitely worthy of broader discussion though, particularly in the context
> of this vs the ansible based tripleo-quickstart.
>

I don't think we should have 3 ways. Ideally 1, but 2 is probably more
realistic.  I would say that we should probably just kill off
instack-virt-setup and push for quickstart/tripleo.sh.  But I think there
would be room for something like OVB/fuel-devops to come in as an alternative
to quickstart/tripleo.sh for when we need better deployment testing and
environments.  I guess for me quickstart/tripleo.sh == devstack and
OVB/fuel-devops == more real-world scenarios.


>
> > Once the environment exists, it starts exposing what exactly it means
> > to deploy a tripleo undercloud/overcloud.  The majority of the effort
> > I had to expend for this PoC was actually related to the construction
> > of the instackenv.json to load the overcloud nodes into ironic.  As
> > mentioned in the workflow-simplification spec[0], this is a known
> > limitation with possible solutions, and I think this is important to
> > the end user experience when trying to work with tripleo.
> > It should be noted that I managed to get the undercloud and
> > controller/compute deployed (but eating into VM swap space) in 12GB on
> > my laptop.  This was something I was unable to do with either instack
> > or tripleo-quickstart.
>
> So, I'm a little unclear here, presumably the actual RAM usage was the
> same, so is this just because you were able to easily configure swap on all
> the VMs?
>

It was a little of both.  It turns out the base image we create for the
undercloud actually has swap configured in it, but I think something in the
instack virt setup breaks this.  The other part, for the overcloud, is that I
was able to configure swap for the overcloud nodes as part of the script I
was using to deploy.  Being able to tune the flavors for the overcloud
doesn't seem to be available in either instack or quickstart.  I think this
goes back to my earlier point that some of the configurability is being lost
in the tooling and people aren't realizing it because it 'works for them'.
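
To be concrete about the flavor tuning, I mean being able to do something
like this on the undercloud, per role (the values are only examples for a
memory-constrained virt setup):

source ~/stackrc
openstack flavor create --id auto --ram 4096 --disk 40 --vcpus 1 control
openstack flavor set --property "cpu_arch"="x86_64" \
  --property "capabilities:boot_option"="local" \
  --property "capabilities:profile"="control" control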



> > There are some shortcomings with this particular tool choice. My
> > understanding is that fuel-devops is still limited to managing a
> > single host.  So you can't use it against remote nodes, but it is good
> > if you have a decently sized physical machine or want to work locally.
> > I ran into issues with network configurations and pxe booting, but I
> > have a feeling that's more of a bug in libvirt and my lack of time to
> > devote to undercloud setup.  So it's not perfect, but it does show off
> > the basics of the concept.  Overall I think clearly defining the
> > tripleo installation process from the environment configuration is an
> > important step for end user usability and even developer workflows.
>
> I think the multi-node use-case is mostly handled via OVB[1] now which is
> where you basically use an OpenStack cloud to host the VMs used for a
> TripleO deployment (yes, that is OpenStack on OpenStack on OpenStack).
>
>
Yes, I'm trying out OVB and it seems to be an improvement, but it still lacks
two of my favorite features from fuel-devops for development purposes.  The
first is snapshotting of an environment so I can revert back and iterate;
that falls under really nice to have but not essential.  The second, which I
think is even more important, is being able to describe an environment in
yaml (or some other format as desired) and have it automatically created for
you.  Having the number of nodes/nics/networks/etc all preconfigurable would
make development and testing faster and help identify issues with specific
real-world scenarios.  It's kind of there in OVB, but not in as configurable
a way as it is in fuel-devops.  I think there's a start in OVB, and being
able to abstract that out further would be useful.  These would be great
feature improvements to OVB (and/or quickstart).
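
To illustrate the kind of declarative description I mean (this is a made-up
schema written out as a shell heredoc, loosely modeled on what fuel-devops
lets you express; it is not something OVB or quickstart accepts today):

cat > env.yaml <<'EOF'
environment:
  name: tripleo-dev
  networks:
    - { name: ctlplane, cidr: 192.0.2.0/24 }
    - { name: external, cidr: 10.0.0.0/24 }
  nodes:
    - { name: undercloud, memory: 8192, vcpus: 2, nics: [ctlplane, external] }
    - { name: controller, count: 1, memory: 6144, vcpus: 2, nics: [ctlplane] }
    - { name: compute,    count: 1, memory: 4096, vcpus: 2, nics: [ctlplane] }
EOF
# a tool (fuel-devops today, or OVB/quickstart some day) would then consume
# this file and create the VMs, NICs and networks exactly as described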


> We're using that in CI and it works pretty well, so I think the main gap is
> a super-easy day-1 workflow that allows users/developers to get up and
> running easily on a single node (as mentioned above tho, quickstart was
> aimed at closing this gap and has been working well for a lot of folks).
>
> Thanks for the feedback - definitely more here we can discuss and hopefully
> refine into actionable bugs/specs/patches! :)
>
>
One other item I've noticed is that much of the complexity in just deploying
what's available seems to be centered around repo management and the whole
image-building process.  If you look at the readme from my PoC, the actual
deployment of pre-baked images (excluding the items for swap configuration)
is really only 7 commands once you have an environment available.  That's
really not that bad.  The problem seems to be that these core 7 commands get
buried in repo configuration, image building, and the actual compute/network
environment setup.
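
For context, that core (once repos, images and the environment are sorted) is
essentially the standard documented sequence, roughly the following; the
readme has the exact list:

openstack undercloud install                        # on the undercloud node
source ~/stackrc
openstack overcloud image upload                    # pre-baked images already on disk
openstack baremetal import --json instackenv.json   # register the (virtual) nodes
openstack baremetal configure boot
openstack overcloud deploy --templates              # plus any -e environment files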

Thanks,
-Alex


> Steve
>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
