On 02/08/18 13:03, Alex Schultz wrote:
On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya <bdobr...@redhat.com> wrote:
On 7/6/18 7:02 PM, Ben Nemec wrote:


On 07/05/2018 01:23 PM, Dan Prince wrote:
On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:

I would almost rather see us organize the directories by service
name/project instead of implementation.

Instead of:

puppet/services/nova-api.yaml
puppet/services/nova-conductor.yaml
docker/services/nova-api.yaml
docker/services/nova-conductor.yaml

We'd have:

services/nova/nova-api-puppet.yaml
services/nova/nova-conductor-puppet.yaml
services/nova/nova-api-docker.yaml
services/nova/nova-conductor-docker.yaml

(or perhaps even another level of directories to indicate
puppet/docker/ansible?)

I'd be open to this, but changes on this scale have a much larger
developer and user impact than what I was thinking we would be willing
to entertain for the issue that caused me to bring this up (i.e. how to
identify services which get configured by Ansible).

It's also worth noting that many projects keep these sorts of things in
different repos too. Like Kolla fully separates kolla-ansible and
kolla-kubernetes as they are quite divergent. We have been able to
preserve some of our common service architectures, but as things move
towards kubernetes we may wish to change things structurally a bit
too.

True, but the current directory layout dates from back when we intended to
support multiple deployment tools in parallel (originally
tripleo-image-elements and puppet).  Since I think it has become clear that
it's impractical to maintain two different technologies to do essentially
the same thing, I'm not sure there's a need for it now.  It's also worth
noting that kolla-kubernetes basically died because there weren't enough
people to maintain both deployment methods, so we're not the only ones who
have found that to be true.  If/when we move to kubernetes, I would
anticipate it going like the initial containers work did - development for a
couple of cycles, then a switch to the new thing and deprecation of the old
thing, then removal of support for the old thing.

That being said, since the service yamls are
essentially an API for TripleO because they're referenced in user

this ^^

resource registries, I'm not sure it's worth the churn to move everything
either.  I think that's going to be an issue either way though; it's just a
question of the scope.  _Something_ is going to move around no matter how we
reorganize, so it's a problem that needs to be addressed anyway.
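
To illustrate why those paths are effectively an API: a custom user
environment typically overrides a service straight by template path,
e.g. (a minimal sketch using the nova-api default mentioned above; the
install prefix will vary per deployment):

  resource_registry:
    # swap in a different implementation for the nova-api service
    OS::TripleO::Services::NovaApi: /usr/share/openstack-tripleo-heat-templates/docker/services/nova-api.yaml

Renaming or moving that file breaks every environment that pins the old
path until the user updates it.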

[tl;dr] I can foresee that reorganizing that API would become a nightmare for
maintainers doing backports for Queens (and the LTS downstream release based
on it). Now imagine kubernetes support arriving within the next few years,
before we can let the old API go...

I have an example [0] to share of all the pain brought by a simple move of
the 'API defaults' from environments/services-docker to environments/services
plus environments/services-baremetal. Each time a file's contents changed at
its old location, like here [1], I had to run a lot of sanity checks to
rebase it properly: checking that the updated paths in resource
registries are still valid or have been moved as well, then picking the
source of truth for the diverged old vs. new locations - all that to lose
nothing important in the process.

So I'd say please let's *not* change services' paths/namespaces in the t-h-t
"API" without a real need to do so, i.e. only when there are no alternatives
left.
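
For those who haven't hit this: the files in question are tiny
resource_registry snippets, roughly of the form below (a generic sketch -
the actual service key and relative path differ per file):

  # old location: environments/services-docker/<service>.yaml
  resource_registry:
    OS::TripleO::Services::<Service>: ../../docker/services/<service>.yaml

After the reshuffle the same entry lives under
environments/services/<service>.yaml (with the baremetal variant under
environments/services-baremetal/), so every backport touching one of these
has to be checked against both the old and new locations, and against the
relative paths inside them.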

Ok, so it's time to dig this thread back up. I'm currently looking at
the chrony support, which will require a new service[0][1]. Rather than
add it under puppet, we'll likely want to leverage ansible. So I guess
the question is: where do we put services going forward?  Additionally,
as we look toward truly removing the baremetal deployment options and
puppet service deployment, it seems like we need to consolidate under
a single structure.  Given that we don't want to force too much churn,
does this mean that we should align on the docker/services/*.yaml
structure, or should we be proposing a new structure that we can try to
align on?

There is outstanding tech debt around the nested stacks and references
within these services from when we added the container deployments, so it's
something that would be beneficial to start tackling sooner rather
than later.  Personally I think we're always going to have an issue
when we rename files that could have been referenced by custom
templates, but I don't think we can continue to carry the outstanding
tech debt around these static locations.  Should we be investing in
coming up with some sort of mapping that we can use to warn a user
when we move files?
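
To make that question a bit more concrete, the mapping could be as simple
as an old-path to new-path map shipped with the templates that tooling
checks user resource_registry entries against (purely a hypothetical
sketch - neither the file name nor the entries exist today, and the new
paths just reuse James' earlier naming):

  # hypothetical: deprecated_template_paths.yaml
  puppet/services/nova-api.yaml: services/nova/nova-api-puppet.yaml
  docker/services/nova-api.yaml: services/nova/nova-api-docker.yaml

Anything in a user environment that still references a path on the left
could then produce a deprecation warning (or be rewritten) instead of
failing later with a missing template.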

When Stein development starts, the puppet services will have been deprecated for an entire cycle. Can I suggest we use this reorganization as the point at which we delete the puppet service files? This would relieve us of the burden of maintaining a deployment method that we no longer use. Also, we'll gain a deployment speedup by removing a nested stack for each docker-based service.
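
For context on the nested-stack overhead: today each docker service
template wraps its puppet counterpart as a nested stack just to reuse its
config_settings, roughly like this (paraphrased from memory and heavily
trimmed, not an exact copy of the template):

  # docker/services/nova-api.yaml (simplified)
  resources:
    NovaApiBase:
      # nested stack pulling in the puppet service definition
      type: ../../puppet/services/nova-api.yaml
      properties:
        EndpointMap: {get_param: EndpointMap}
        ServiceNetMap: {get_param: ServiceNetMap}
  outputs:
    role_data:
      value:
        service_name: nova_api
        config_settings: {get_attr: [NovaApiBase, role_data, config_settings]}

Folding the puppet bits directly into the docker template drops one Heat
stack per service per role, which is where the speedup comes from.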

Then I'd suggest doing an "mv docker/services services" and moving any remaining files in the puppet directory into that. This is basically the naming that James suggested, except we wouldn't have to suffix the files with -puppet.yaml, -docker.yaml unless we still had more than one deployment method for that service.

Finally, we could consider symlinking docker/services to services for a cycle. I'm not sure how a swift-stored plan would handle this, but this would be a great reason to land Ian's plan speedup patch[1] which stores tripleo-heat-templates in a tarball :)

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-August/132768.html

Thanks,
-Alex

[0] https://review.openstack.org/#/c/586679/
[1] https://review.openstack.org/#/c/588111/
