Re: What is our technical debt?

2020-09-21 Thread Pierre-Yves Chibon
On Thu, Jun 25, 2020 at 09:27:24PM +0200, Pierre-Yves Chibon wrote:
> Good Morning Everyone,
> 
> Just like every team we have technical debt in our work.
> I would like your help to try to define what it is for us.

Good Morning Everyone,

I've been quite late in sending this email, and you have my apologies for that.

Basically, following on from our discussions, I've looked at all the applications we
maintain and/or run and built a matrix of our technical debt.
A brief overview:
  - 56 apps are listed in the matrix
  - 42 apps support python3 (or are not concerned by python)
  - 39 apps support fedora-messaging
  - 26 apps have fedora-messaging schemas
  - 18 apps have clear documentation identified
  - 31 apps have clear unit-tests identified
  - 20 apps are clearly using pytest for their tests (the others could be either
unknown or using the deprecated nosetests)
  - 45 apps have support for OIDC (does not mean they are all currently using it
though!)
  - 43 apps have a clear primary point of contact
  - 16 apps have a clear secondary point of contact

So as you can see, there is not a single criterion by which we are OK across
all our apps.

It is also worth noting that we lack a primary point of contact for 6 of our
applications that are considered "critical path" (i.e. required to build Fedora
and ship it to our users).

The overall matrix is available at:
https://pingou.fedorapeople.org/fedora_tech_debt_matrix/Applications.html

We have also identified a few infrastructure technical debt items that we have
recorded in a second sheet:
https://pingou.fedorapeople.org/fedora_tech_debt_matrix/Infrastructure.html
Some of these have already been submitted as initiative briefs and so likely won't
be covered here, but they still need to be addressed at some point.

As you can see, we do not lack items in our backlog...


Pierre
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt? (fedora integration service)

2020-07-16 Thread Tomas Tomecek
On Thu, Jul 9, 2020 at 6:03 PM Pavel Raiskup  wrote:
>
> On Thursday, July 9, 2020 4:51:29 PM CEST Tomas Tomecek wrote:
> > The biggest problem is that on github or gitlab, one needs to explicitly
> > install the app or integration so the downstream service can receive the
> > events. Exactly what github2fedmsg is supposed to do.
>
> Exactly :-/ What I think would be nice is to have one common integration in
> each forge at a common "Fedora level", so we don't have to have
> team-specific integration apps (I'm just referring to the component graph I
> cited in the original mail).

But the problem here is if the specific (CI) system wants to take
advantage of the forge app identity - e.g. set a commit status on a PR
identified by the "Fedora app". We'd need to figure out identity
federation, somehow.

> > Our team should have quarterly planning in a month. Pavel, if this is
> > important to you, we can bring this up on the planning session and start
> > with the refactor/unification so that we can at least share the code for
> > parsing and processing of the event payloads.
>
> Sure, I mean .. it depends on how you found this useful for Packit service
> itself, if you think that it makes sense to define the unified event
> format - and if you could consume it?
>
> > Or we can set up a call and discuss directly so we all would understand
> > the requirements.
>
> I'll try to ping you on irc, thanks.

I'm not using IRC much lately as you found out :D

For anyone interested in this, we've set up a meeting to discuss this
on Mon, July 27th - ping me if you wanna join.

As for the usefulness - I personally don't see much use for such a
service for us. My only point here is to share the code, expertise and
experience we've gained so far in this field.


Tomas


Re: What is our technical debt? (fedora integration service)

2020-07-09 Thread Pavel Raiskup
On Thursday, July 9, 2020 4:51:29 PM CEST Tomas Tomecek wrote:
> On Thu, Jul 9, 2020 at 4:21 PM Pavel Raiskup  wrote:
> >
> > On Thursday, July 9, 2020 3:13:47 PM CEST Tomas Tomecek wrote:
> > Do you expose some library which makes the conversion of various formats
> > of events from various forges into uniformly looking events, like:
> >
> > event_type: source_change
> > change:
> >   old_code:
> > code_location: git
> > clone_url: 
> > committish: 
> >   new_code:
> > code_location: git
> > clone_url: 
> > committish: 
> >
> > The format is to be discussed (with potential consumers, or anyone
> > interested), but this is what we need on Copr side basically and I suppose
> > that's something everyone else needs who wants to process the code change.
> >
> > I think we need a precisely defined set of events which is able to handle
> > not only git locations, changes, change requests (it should be flexible
> > enough to reference e.g. tarballs + patches, when we needed it in future).
> >
> > This is basically what we need on the "event reader" side.
> 
> This is really interesting. So far we've been only discussing
> unification of webhook or message payloads on our side. All of this is
> baked directly in packit-service's codebase [1] right now.

Yeah, the very same situation is on our side.

> The biggest problem is that on github or gitlab, one needs to explicitly
> install the app or integration so the downstream service can receive the
> events. Exactly what github2fedmsg is supposed to do.

Exactly :-/ What I think would be nice is to have one common integration in
each forge at a common "Fedora level", so we don't have to have
team-specific integration apps (I'm just referring to the component graph I
cited in the original mail).

> Our team should have quarterly planning in a month. Pavel, if this is
> important to you, we can bring this up on the planning session and start
> with the refactor/unification so that we can at least share the code for
> parsing and processing of the event payloads.

Sure, I mean... it depends on whether you find this useful for the Packit service
itself, whether you think it makes sense to define the unified event
format - and whether you could consume it?

> Or we can set up a call and discuss directly so we all would understand
> the requirements.

I'll try to ping you on irc, thanks.

Pavel

> [1] 
> https://github.com/packit-service/packit-service/blob/master/packit_service/service/events.py




Re: What is our technical debt? (fedora integration service)

2020-07-09 Thread Tomas Tomecek
On Thu, Jul 9, 2020 at 4:21 PM Pavel Raiskup  wrote:
>
> On Thursday, July 9, 2020 3:13:47 PM CEST Tomas Tomecek wrote:
> Do you expose some library which makes the conversion of various formats
> of events from various forges into uniformly looking events, like:
>
> event_type: source_change
> change:
>   old_code:
> code_location: git
> clone_url: 
> committish: 
>   new_code:
> code_location: git
> clone_url: 
> committish: 
>
> The format is to be discussed (with potential consumers, or anyone
> interested), but this is what we need on Copr side basically and I suppose
> that's something everyone else needs who wants to process the code change.
>
> I think we need a precisely defined set of events which is able to handle
> not only git locations, changes, change requests (it should be flexible
> enough to reference e.g. tarballs + patches, when we needed it in future).
>
> This is basically what we need on the "event reader" side.

This is really interesting. So far we've only been discussing
unification of webhook or message payloads on our side. All of this is
baked directly in packit-service's codebase [1] right now.

The biggest problem is that on github or gitlab, one needs to
explicitly install the app or integration so the downstream service
can receive the events. Exactly what github2fedmsg is supposed to do.

Our team should have quarterly planning in a month. Pavel, if this is
important to you, we can bring this up on the planning session and
start with the refactor/unification so that we can at least share the
code for parsing and processing of the event payloads.

Or we can set up a call and discuss directly so we all would
understand the requirements.

[1] 
https://github.com/packit-service/packit-service/blob/master/packit_service/service/events.py


Cheers,
Tomas


Re: What is our technical debt? (fedora integration service)

2020-07-09 Thread Pavel Raiskup
On Thursday, July 9, 2020 3:13:47 PM CEST Tomas Tomecek wrote:
> Pavel, thanks for bringing this up!
> 
> Funny thing is that you just described a lot of functionality of packit as a
> service :)

Yes, I believe so.  Same as github2fedmsg, which is somewhat close as
well; I even mentioned you in the original email ;)

> getting events from multiple sources (fedora-messaging, CentOS'
> mqtt, GitHub webhooks, GitLab webhooks, prod/stg) and then have a mechanism to
> process those and provide updates. Big heads-up to everyone - it took us year+
> to get such functionality, polish it, make it secure, scalable, auditable,
> maintainable. It's a ton of work.
>
> If there is anything we can do to help, please let us know.

Do you expose some library which converts the various formats of events
from the various forges into uniform-looking events, like:

event_type: source_change
change:
  old_code:
code_location: git
clone_url: 
committish: 
  new_code:
code_location: git
clone_url: 
committish: 

The format is to be discussed (with potential consumers, or anyone
interested), but this is basically what we need on the Copr side, and I suppose
it's something everyone else who wants to process a code change needs as well.

I think we need a precisely defined set of events which is able to handle
not only git locations, changes, and change requests (it should be flexible
enough to reference e.g. tarballs + patches, when we need that in the future).

This is basically what we need on the "event reader" side.
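
To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of
such a translation step -- not Copr's or Packit's actual code; the field names
simply mirror the example format above, and only the standard keys of a GitHub
"push" webhook payload are assumed:

    # Hypothetical sketch: translate a GitHub "push" webhook payload
    # into the uniform "source_change" event described above.
    def github_push_to_source_change(payload: dict) -> dict:
        repo = payload["repository"]
        return {
            "event_type": "source_change",
            "change": {
                "old_code": {
                    "code_location": "git",
                    "clone_url": repo["clone_url"],
                    "committish": payload["before"],  # SHA before the push
                },
                "new_code": {
                    "code_location": "git",
                    "clone_url": repo["clone_url"],
                    "committish": payload["after"],   # SHA after the push
                },
            },
        }

Each forge (GitLab, Pagure, BitBucket, ...) would get its own small translator
producing the same structure, and that is the part worth sharing as a library.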

> (one of the core components of packit's architecture is our library ogr
> [1], which serves as an abstraction layer on top of gitforge APIs -
> pagure, github, gitlab)

I've seen that one...  That sounds like a library for actively communicating
with forges over their APIs.  That's surely one part needed for sending back the
reaction results.

Pavel

> [1] https://github.com/packit-service/ogr/





Re: What is our technical debt? (fedora integration service)

2020-07-09 Thread Tomas Tomecek
Pavel, thanks for bringing this up!

The funny thing is that you just described a lot of the functionality of packit as a
service :) getting events from multiple sources (fedora-messaging, CentOS'
MQTT, GitHub webhooks, GitLab webhooks, prod/stg) and then having a mechanism to
process those and provide updates. Big heads-up to everyone - it took us a year+
to get such functionality, polish it, and make it secure, scalable, auditable, and
maintainable. It's a ton of work.

If there is anything we can do to help, please let us know.

(one of the core components of packit's architecture is our library ogr [1], 
which serves as an abstraction layer on top of gitforge APIs - pagure, github, 
gitlab)

[1] https://github.com/packit-service/ogr/


Re: What is our technical debt?

2020-07-08 Thread Clement Verna
On Tue, 7 Jul 2020 at 02:28, Ken Dreyer  wrote:

> On Wed, Jul 1, 2020 at 12:57 PM Stephen John Smoogen 
> wrote:
> >
> > On Wed, 1 Jul 2020 at 14:46, Miroslav Suchý  wrote:
> > >
> > > Dne 30. 06. 20 v 9:44 Pierre-Yves Chibon napsal(a):
> > > > What are you talking about here? The Fedora release process? The
> mass-branching
> > > > in dist-git?
> > >
> > > This. And creating new gpg keys, new mock configs, new tags in Koji,
> add release to retrace.f.o, Copr, ... I have a
> > > dream where you just bump up number in - let say - PDC and everything
> else will happen automagically. At right time.
> > >
> >
> > I think choosing the one tool which is so end of life to do it in.. is
> > a sign of why we can't do this. Every release some set of tools in
> > Fedora get added by some team who have been working on their own
> > schedules and their own API without any idea of any other teams
> > working on.
> > We then have to do a lot of integration and make it work
> > before the release deadline to make it work. Then usually after 1 or 2
> > releases that software team is no longer in existence and we have to
> > continue with it waiting for the promised replacement which will do
> > all those things you list above.. but instead get some other tool
> > which has to be shoved in.
>
> This is an excellent summary of the problem over the past couple of years.
>
> I think one of the problems with PDC was that so many teams had to
> adapt all their *giant* tools to it, and this integration effort was
> unsustainable when we have natural contributor turnover. Everything
> ended up tightly coupled together so that it was really difficult to
> remain agile as tools and business requirements (naturally) changed.
> Also we never drew the line and wrote a list of things that PDC was
> > *not* going to do, so the Second System Syndrome effect just kept growing
> until it collapsed.
>
> Instead of having everything having to talk to PDC to determine its
> configuration, I'm approaching this problem from the other end -
> making Ansible talk to all the services according to each service's
> API. Here are some things I like about this approach:
>
> 1. Ansible is really simple and well-documented.
>
> 2. It's easy to start small and get value incrementally.
>
> 3. The playbooks can (and do) change independently from the APIs. This
>kind of agility is essential because SOPs must be able to change over
>time.
>
> 4. If we haven't completely implemented something in Ansible yet, the
>service itself is not completely broken. The workaround does not
>require multiple teams of developers (like with PDC). The workaround
>is that the administrator simply does the thing they used to do
>manually (and file tickets for the missing RFEs in the Ansible modules)
>
> For the koji-ansible project, we're using it to configure some large
> products internally. Now we have to expand this concept to the rest of
> the pipeline (Bodhi, Pungi configuration, etc.)
>
> Clement started on https://github.com/cverna/bodhi-ansible but I think
> that's abandoned at this point.


Yeah, this was more of a proof of concept, but I think it would be
interesting to continue working on it. Being able to manage the Bodhi
releases (create, update status, update tags, etc.) in Ansible would be
quite cool. We also have an example of a playbook that uses koji-ansible
[0]; I only played with it in staging before the data center move, but it
would be cool to actually use this playbook when we branch F33.


[0] -
https://pagure.io/fedora-infra/ansible/blob/master/f/playbooks/manual/releng/koji-release-tags.yml


> I do think this is the way to go,
> though. In some ways this is the point - it should be possible for
> these Ansible modules to be isolated so that when contributor turnover
> happens, the whole system does not fall apart.
>

> - Ken


Re: What is our technical debt?

2020-07-07 Thread Ken Dreyer
On Mon, Jul 6, 2020 at 7:11 PM Neal Gompa  wrote:
> This sort of worries me though: abusing Ansible to do this sort of
> thing is not what it was made for. It also makes mentally modeling how
> everything works so much harder because the sequencing (or execution
> flow) of actions is non-obvious.

You're right that Ansible originally only operated on systems with
SSH. However, there are a lot of modules and plugins now that do many
more things.

I completely agree that it's possible to make a mess and make Ansible
non-obvious. The sequencing and execution flow must be obvious or
we're not making the world better. I've found that when I implement
loops or complex conditionals in Ansible playbooks, that does get hard
to read and understand. Ansible shines when the modules are easy to
use for non-programmers. The essential complexity should live inside
the modules or plugins.

At a high level, we're talking about "configuration management", and
whether it's SSH or other systems, I've found Ansible is a good fit.

This script reads and executes top-to-bottom:
https://pagure.io/releng/blob/master/f/scripts/branching/make-koji-release-tags
, and a corresponding Ansible playbook would also read and execute
top-to-bottom.

> And Ansible's own APIs are horrifically unstable, to the point that
> I've had *bar conversations* about how people have to pin to specific
> Ansible releases because all the crap they build on top of it to bend
> Ansible to their needs relies on the part of Ansible that's
> deliberately *not* stable: the Python module extension interface.

I can't speak to or defend the decisions Ansible's made in the past,
and I can't say that they'll never break us again. There are two
things to consider though:

1. The Ansible core engineers are more incentivized than ever before
to make the Python API for out-of-tree modules stable, because they've
pulled almost all of the modules out of the ansible monorepo in 2.10.
https://www.ansible.com/resources/webinars-training/collections-future-of-how-ansible-content-is-handled
explains more about where RH's Ansible product managers intend to go
strategically here.

2. In the spirit of "trust-but-verify", we also test against multiple
versions of Ansible (and Python, etc). The CI system for koji-ansible
runs unit tests and integration tests for all the Ansible modules on
every pull request. The integration tests are a series of playbooks
that we run against ephemeral Koji hubs and assert that the hub's data
looks the way we expect. Today we run the integration tests against
the latest GA version of Ansible on PyPI (2.9) and the latest
pre-release on PyPI (2.10). It's important to do this because the
blast radius for bugs is pretty high. The Ansible pre-release testing
in particular gives us an early heads-up if something problematic is
coming.

- Ken


Re: What is our technical debt?

2020-07-07 Thread Brendan Early
> Maybe I'm far out, but it's interesting to me that nobody has said
> ELK+Kibana+Logstash for monitoring. AFAIK there are ready-to-use
> containers for it, the configuration is easy to do, and the remaining
> work is mostly the dashboards you want to monitor.
I've set up a small ELK stack before. It's really nice once you get it
going, but it takes a lot of time and work to set up Logstash for all
your services.


Re: What is our technical debt?

2020-07-07 Thread Stephen John Smoogen
On Tue, 7 Jul 2020 at 08:51, Eduard Lucena  wrote:
>
> Maybe I'm far out, but it's interesting to me that nobody has said
> ELK+Kibana+Logstash for monitoring. AFAIK there are ready-to-use containers
> for it, the configuration is easy to do, and the remaining work is mostly
> the dashboards you want to monitor.
>
> Sorry if the suggestion is silly.
>
It isn't silly.. it is just a lot more complicated than people realize.

The front-end setup is easy.. tuning it to be what you want and
setting up the backend is not. There is a reason why ELK data
analysts are full-time jobs and the backend usually requires a
multitude of servers. A lot of the setups are built around data that
is already in the cloud, so you spin up more Elastic backends to
spread out the load, but our data is in a centralized area. Shipping
our data into the Amazon/MS/etc. cloud is a legal black hole due to GDPR and
other regulations.


> Br,
> x3mboy



-- 
Stephen J Smoogen.


Re: What is our technical debt? (fedora integration service)

2020-07-07 Thread Stephen John Smoogen
On Tue, 7 Jul 2020 at 05:23, Pavel Raiskup  wrote:
>
> Hi all,
>
> as a Copr contributor, I am missing a _standard_ design for integrating
> our cool infrastructure into the _upstream_ work-flows.  We have a lot of
> teams trying to implement the same thing:
>

I think we need to work out what technical debt means. When I think of
technical debt, I am thinking of:

1. All our infrastructure relies on PDC which has a dead upstream, no
working replacement and more stuff needing to work from it.
2. Our mailing lists run on a beta of mailman3 and the current tools
are not packaged completely
3. mailman3 vm has possible disk issues
4. We have other servers that we found we could not upgrade to newer
versions and so have to run on dead OS versions
5. Our account system, FAS2, runs on RHEL-6 (but is happier on RHEL-5)
6. Our openshift is running on an older version but the newer version
needs a lot of planning of what hardware is going where.
7. ... etc etc etc

I expect there are other items of technical debt, but a lot of these
take up most of my 50-to-60-hour weeks, so they are what I think of, versus
new workflows or other items.


-- 
Stephen J Smoogen.


Re: What is our technical debt?

2020-07-07 Thread Eduard Lucena
Maybe I'm far out, but it's interesting to me that nobody has said
ELK+Kibana+Logstash for monitoring. AFAIK there are ready-to-use containers
for it, the configuration is easy to do, and the remaining work is mostly
the dashboards you want to monitor.

Sorry if the suggestion is silly.

Br,
x3mboy


Vagrant (Was: Re: What is our technical debt?)

2020-07-07 Thread Pavel Valena
>  Forwarded message 
> Subject: Re: What is our technical debt?
> Date: Thu, 25 Jun 2020 21:59:37 +0200
> From: Pierre-Yves Chibon
> Reply-To: Fedora Infrastructure
> To: Fedora Infrastructure
>
> On Thu, Jun 25, 2020 at 03:51:42PM -0400, Neal Gompa wrote:
> > On Thu, Jun 25, 2020 at 3:27 PM Pierre-Yves Chibon wrote:
> > >
> > > Good Morning Everyone,
> > >
> > > Just like every team we have technical debt in our work.
> > > I would like your help to try to define what it is for us.
> > >
> > > So far, I've come up with the following:
> > > - python3 support/migration
> > > - fedora-messaging
> > > - fedora-messaging schema
> > > - documentation
> > > - (unit-)tests
> > > - OpenID Connect
> > >
> > > What else would we want in there?
> > >
> > These are all good things, especially the documentation one. I'd like
> > to zero in on a particular aspect of documentation, though: getting to
> > hack on it. A lot of our projects are surprisingly difficult to get up
> > and running for someone to play with and hack on, and this is
> > increasingly true as we adopt OpenShift-style deployments. One way we
> > solved this in Pagure is by providing some quick start processes in
> > the documentation and a fully working Vagrant based process to boot up
> > and have a working environment to hack on the code.
> >
> > I'm not necessarily going to specify it needs to be Vagrant for
> > everything, but I think this is something we should have for all of
> > our projects, so that people *can* easily get going to use and
> > contribute.
>
> I've recently had quite some pain with vagrant (just today, I've tried
> several times to start my bodhi vagrant box and lost my morning w/o success).

Hello, 

sorry I'm late to the party; I've heard of your Vagrant issues.

I'm a Vagrant maintainer, and I'll gladly help if you encounter any Vagrant
issues; feel free to ping me on IRC (`pvalena`).
I can also review your Vagrantfiles and run some tests, if you like
(to prevent instabilities etc.).

One good thing I always advise is to try the latest stable (=rawhide) Vagrant
builds from my COPR: 
https://copr.fedorainfracloud.org/coprs/pvalena/vagrant/ 

(Built for all Fedoras and CentOS 7+8, although CentOS 7 currently needs manual 
workarounds.) 

Pavel 

> I guess it may be nice to see if there is something else out there that we
> could leverage.
> If we could adopt one and try to have it on most of our apps this may be a
> nice goal for us to work towards.

> Pierre


Re: What is our technical debt? (fedora integration service)

2020-07-07 Thread Pavel Raiskup
Hi all,

as a Copr contributor, I am missing a _standard_ design for integrating
our cool infrastructure into the _upstream_ work-flows.  We have a lot of
teams trying to implement the same thing:

1. Catch some upstream event (like push, pull-request, release, tag, ...)
2. React to the event.  Do something with this event (e.g. a CI build, a
   Copr build, a scratch build using Jenkins, etc.)
3. Ideally report back upstream, somehow, the results.

By _standard_ I mean that (a) it should be absolutely trivial to start the
Fedora Infra <-> Upstream interaction (to integrate) from the package
contributor POV, and (b) that it should be _fairly simple_ to implement
the reaction (points 2 and 3) inside our infra.

Namely, consider that we in Copr want to trigger builds by the events from
GitHub (PRs, tags, pushes, ...), GitLab (MRs, ...), Pagure, BitBucket,
etc.  We are sort of able to do so (it is far from complete support!), but
for this to happen we had to implement a pretty complicated webhook
interface on the Copr side for all the services, and for Pagure we have to
listen on the fedora-messaging bus.

The real challenge though is _how to notify_ upstream projects that
something inside our infra (e.g. Copr) happened.

Nothing dramatic has happened in Copr so far.  We "only" have support for
notifying Pagure ATM (we are able to set a "commit status" for PRs and
pushes).  To be able to communicate with other forges, it would be an
equivalent amount of repetitive work (we'd have to store the
access "tokens" somewhere, know how/where to contact the upstream, etc.).  Quite
some work... worth doing only once in the whole of Fedora?

What I think we could have is something like a central gateway between
Fedora and Upstream, let's say a "Fedora Integrator"?  That thing would carve
out and provide the part of the Copr code which does these integration steps:

- It should "listen" on several forges (webhooks, on our bus, atm.),
  and translate the variously formatted event payloads into well-defined,
  uniform events.  Say "change", "change request", etc.

- Such an event would be dumped (back) on the fedora-messaging bus,
  through well-defined (schema) message.  The bus part is important because
  it is (a) asynchronous and (b) even services behind corporate VPN can
  react on them.

- Each such event would have its own ID, so **any** Fedora service can react
  on it by posting a (again well-defined) ReactsOn=EventID "reaction" on the
  fedora-messaging bus.

- The Integrator service would be able to translate the "reactions" on the
  bus into forge responses -- e.g., when CI passes, notify the
  Pagure/GitHub/GitLab/... commit status, send an email to upstream, etc.

I sketched a simple component graph of how that could look [1].  Feel
free to comment here or there...  The webhook/message payload translation
mechanism should be wrapped into a library somehow, so we could still get
the unified format of events without the integrator service (if needed).

This way we would build a single place for collaboration on such things
(parsing the webhooks, forge interaction, etc.), and we would have a
single orientation point for people who use our services for
CI/CD (a single "Fedora Integrator" GitHub app, a GitLab.com
"Fedora Integration", etc.).
Not only would that be super easy to find -- such forge apps are also
pretty privileged things (they can do various stuff with the
repositories), so people might not be very confident about enabling
them.  If we had one blessed Fedora place to enable those things per
forge, I think it would be more "trusted".
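
To give a rough idea of the "well-defined (schema) message" part, here is a
minimal, hypothetical sketch using fedora-messaging's schema mechanism -- the
topic and body layout below are placeholders I made up for illustration, not an
agreed-upon format:

    from fedora_messaging import message

    class SourceChangeV1(message.Message):
        """Hypothetical uniform 'source_change' event re-published on the bus."""

        topic = "org.fedoraproject.prod.integrator.source_change.v1"
        body_schema = {
            "id": "http://fedoraproject.org/message-schema/integrator#source_change",
            "$schema": "http://json-schema.org/draft-04/schema#",
            "type": "object",
            "required": ["event_id", "forge", "clone_url", "committish"],
            "properties": {
                "event_id": {"type": "string"},   # the ID a ReactsOn= reaction refers to
                "forge": {"type": "string"},      # e.g. github, gitlab, pagure
                "clone_url": {"type": "string"},
                "committish": {"type": "string"},
            },
        }

        def __str__(self):
            return "source_change {0} on {1}".format(
                self.body["committish"], self.body["forge"])

Consumers (CI systems, Copr, etc.) would then depend only on such a schema
package rather than on each forge's webhook format.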

Where to start?  E.g. by reusing parts of github2fedmsg, Copr, and the Packit
service?

We need to find a way to get the events on our bus easily, so I e.g. asked
the GitLab people [2] how to easily implement this.

Do you think such initiative makes sense?

[1] https://github.com/praiskup/fedora-integration-service
[2] https://gitlab.com/gitlab-org/gitlab/-/issues/225151

Pavel



On Thursday, June 25, 2020 9:27:24 PM CEST Pierre-Yves Chibon wrote:
> Good Morning Everyone,
> 
> Just like every team we have technical debt in our work.
> I would like your help to try to define what it is for us.
> 
> So far, I've come up with the following:
> - python3 support/migration
> - fedora-messaging
> - fedora-messaging schema
> - documentation
> - (unit-)tests
> - OpenID Connect
> 
> What else would we want in there?
> 
> 
> Looking forward to your thoughts,
> Pierre

Re: What is our technical debt?

2020-07-06 Thread Neal Gompa
On Mon, Jul 6, 2020 at 8:22 PM Ken Dreyer  wrote:
>
> On Wed, Jul 1, 2020 at 12:57 PM Stephen John Smoogen  wrote:
> >
> > On Wed, 1 Jul 2020 at 14:46, Miroslav Suchý  wrote:
> > >
> > > Dne 30. 06. 20 v 9:44 Pierre-Yves Chibon napsal(a):
> > > > What are you talking about here? The Fedora release process? The 
> > > > mass-branching
> > > > in dist-git?
> > >
> > > This. And creating new gpg keys, new mock configs, new tags in Koji, add 
> > > release to retrace.f.o, Copr, ... I have a
> > > dream where you just bump up number in - let say - PDC and everything 
> > > else will happen automagically. At right time.
> > >
> >
> > I think choosing the one tool which is so end of life to do it in.. is
> > a sign of why we can't do this. Every release some set of tools in
> > Fedora get added by some team who have been working on their own
> > schedules and their own API without any idea of any other teams
> > working on.
> > We then have to do a lot of integration and make it work
> > before the release deadline to make it work. Then usually after 1 or 2
> > releases that software team is no longer in existence and we have to
> > continue with it waiting for the promised replacement which will do
> > all those things you list above.. but instead get some other tool
> > which has to be shoved in.
>
> This is an excellent summary of the problem over the past couple of years.
>
> I think one of the problems with PDC was that so many teams had to
> adapt all their *giant* tools to it, and this integration effort was
> unsustainable when we have natural contributor turnover. Everything
> ended up tightly coupled together so that it was really difficult to
> remain agile as tools and business requirements (naturally) changed.
> Also we never drew the line and wrote a list of things that PDC was
> *not* going to do, so the Second System Syndrome effect just kept growing
> until it collapsed.
>
> Instead of having everything having to talk to PDC to determine its
> configuration, I'm approaching this problem from the other end -
> making Ansible talk to all the services according to each service's
> API. Here are some things I like about this approach:
>
> 1. Ansible is really simple and well-documented.
>
> 2. It's easy to start small and get value incrementally.
>
> 3. The playbooks can (and do) change independently from the APIs. This
>kind of agility is essential because SOPs must be able to change over
>time.
>
> 4. If we haven't completely implemented something in Ansible yet, the
>service itself is not completely broken. The workaround does not
>require multiple teams of developers (like with PDC). The workaround
>is that the administrator simply does the thing they used to do
>manually (and file tickets for the missing RFEs in the Ansible modules)
>
> For the koji-ansible project, we're using it to configure some large
> products internally. Now we have to expand this concept to the rest of
> the pipeline (Bodhi, Pungi configuration, etc.)
>
> Clement started on https://github.com/cverna/bodhi-ansible but I think
> that's abandoned at this point. I do think this is the way to go,
> though. In some ways this is the point - it should be possible for
> these Ansible modules to be isolated so that when contributor turnover
> happens, the whole system does not fall apart.
>

This sort of worries me though: abusing Ansible to do this sort of
thing is not what it was made for. It also makes mentally modeling how
everything works so much harder because the sequencing (or execution
flow) of actions is non-obvious.

And Ansible's own APIs are horrifically unstable, to the point that
I've had *bar conversations* about how people have to pin to specific
Ansible releases because all the crap they build on top of it to bend
Ansible to their needs relies on the part of Ansible that's
deliberately *not* stable: the Python module extension interface.

We're potentially trading one kind of technical debt for one that's
arguably worse: debt we can't fix because we're eating our own tails.


-- 
真実はいつも一つ!/ Always, there's only one truth!


Re: What is our technical debt?

2020-07-06 Thread Ken Dreyer
On Wed, Jul 1, 2020 at 12:57 PM Stephen John Smoogen  wrote:
>
> On Wed, 1 Jul 2020 at 14:46, Miroslav Suchý  wrote:
> >
> > Dne 30. 06. 20 v 9:44 Pierre-Yves Chibon napsal(a):
> > > What are you talking about here? The Fedora release process? The 
> > > mass-branching
> > > in dist-git?
> >
> > This. And creating new gpg keys, new mock configs, new tags in Koji, add 
> > release to retrace.f.o, Copr, ... I have a
> > dream where you just bump up number in - let say - PDC and everything else 
> > will happen automagically. At right time.
> >
>
> I think choosing the one tool which is so end of life to do it in.. is
> a sign of why we can't do this. Every release some set of tools in
> Fedora get added by some team who have been working on their own
> schedules and their own API without any idea of any other teams
> working on.
> We then have to do a lot of integration and make it work
> before the release deadline to make it work. Then usually after 1 or 2
> releases that software team is no longer in existence and we have to
> continue with it waiting for the promised replacement which will do
> all those things you list above.. but instead get some other tool
> which has to be shoved in.

This is an excellent summary of the problem over the past couple of years.

I think one of the problems with PDC was that so many teams had to
adapt all their *giant* tools to it, and this integration effort was
unsustainable when we have natural contributor turnover. Everything
ended up tightly coupled together so that it was really difficult to
remain agile as tools and business requirements (naturally) changed.
Also we never drew the line and wrote a list of things that PDC was
*not* going to do, so the Second System Syndrome effect just kept growing
until it collapsed.

Instead of having everything having to talk to PDC to determine its
configuration, I'm approaching this problem from the other end -
making Ansible talk to all the services according to each service's
API. Here are some things I like about this approach:

1. Ansible is really simple and well-documented.

2. It's easy to start small and get value incrementally.

3. The playbooks can (and do) change independently from the APIs. This
   kind of agility is essential because SOPs must be able to change over
   time.

4. If we haven't completely implemented something in Ansible yet, the
   service itself is not completely broken. The workaround does not
   require multiple teams of developers (like with PDC). The workaround
   is that the administrator simply does the thing they used to do
   manually (and file tickets for the missing RFEs in the Ansible modules)

For the koji-ansible project, we're using it to configure some large
products internally. Now we have to expand this concept to the rest of
the pipeline (Bodhi, Pungi configuration, etc.)

Clement started on https://github.com/cverna/bodhi-ansible but I think
that's abandoned at this point. I do think this is the way to go,
though. In some ways this is the point - it should be possible for
these Ansible modules to be isolated so that when contributor turnover
happens, the whole system does not fall apart.

- Ken


Re: What is our technical debt?

2020-07-06 Thread David Kirwan
Sure, I'm not an expert by any means, but I'm happy to share what little I
do know! :P Wanna hop on a Hangouts/Bluejeans call some day?

I have the POC example currently running on ocp.stg.ci.centos.org; I can
use that to give a quick demo.

On Sat, 4 Jul 2020 at 19:55, Kevin Fenzi  wrote:

> On Sat, Jul 04, 2020 at 01:14:42AM +0100, David Kirwan wrote:
> > Yeah makes sense Kevin,
> >
> > Hmm just threw a little POC together to show some of the basics of the
> > Openshift monitoring stack.
> >
> > - Sample configuration for the User Workload monitoring stack which is in
> > tech preview, eg data retention, and persistent storage claim size etc.
> > - small ruby app that has a /metrics endpoint, and 2 gauge metrics being
> > exported
> > - Prometheus ServiceMonitor to monitor the service
> > - Prometheus PrometheusRule to fire based on those alerts
> > - WIP, but I'll add example Grafana GrafanaDashboards which graph the
> > metrics at some future point
> >
> > https://github.com/davidkirwan/crypto_monitoring
>
> Cool!
>
> I'd love to have you go through this and describe how it works/looks?
>
> I'm sure others would be interested as well...
>
> kevin


-- 
David Kirwan
Software Engineer

Community Platform Engineering @ Red Hat

T: +(353) 86-8624108 IM: @dkirwan


Re: What is our technical debt?

2020-07-04 Thread Kevin Fenzi
On Sat, Jul 04, 2020 at 01:14:42AM +0100, David Kirwan wrote:
> Yeah makes sense Kevin,
> 
> Hmm just threw a little POC together to show some of the basics of the
> Openshift monitoring stack.
> 
> - Sample configuration for the User Workload monitoring stack which is in
> tech preview, eg data retention, and persistent storage claim size etc.
> - small ruby app that has a /metrics endpoint, and 2 gauge metrics being
> exported
> - Prometheus ServiceMonitor to monitor the service
> - Prometheus PrometheusRule to fire based on those alerts
> - WIP, but I'll add example Grafana GrafanaDashboards which graph the
> metrics at some future point
> 
> https://github.com/davidkirwan/crypto_monitoring

Cool! 

I'd love to have you go through this and describe how it works/looks?

I'm sure others would be interested as well... 

kevin




Re: What is our technical debt?

2020-07-04 Thread Kevin Fenzi
On Wed, Jul 01, 2020 at 08:46:27PM +0200, Miroslav Suchý wrote:
> Dne 30. 06. 20 v 9:44 Pierre-Yves Chibon napsal(a):
> > What are you talking about here? The Fedora release process? The 
> > mass-branching
> > in dist-git?
> 
> This. And creating new gpg keys, new mock configs, new tags in Koji, add 
> release to retrace.f.o, Copr, ... I have a
> dream where you just bump up number in - let say - PDC and everything else 
> will happen automagically. At right time.

Yeah, we have sort of been working toward that by using variables in
Ansible.

i.e., change vars/all/00-FedoraCycleNumber.yaml from 32 to 33, run the
playbooks, and boom, everything that needs doing for the release happens. :)

But no matter how we do it, I share your dream... 

kevin




Re: What is our technical debt?

2020-07-03 Thread David Kirwan
Yeah makes sense Kevin,

Hmm just threw a little POC together to show some of the basics of the
Openshift monitoring stack.

- Sample configuration for the User Workload monitoring stack, which is in
tech preview, e.g. data retention, persistent storage claim size, etc.
- A small Ruby app that has a /metrics endpoint and 2 gauge metrics being
exported
- A Prometheus ServiceMonitor to monitor the service
- A Prometheus PrometheusRule to fire alerts based on those metrics
- WIP, but I'll add example Grafana GrafanaDashboards which graph the
metrics at some future point

https://github.com/davidkirwan/crypto_monitoring
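
For anyone who wants to see the app side without reading the Ruby code, here is
a minimal sketch of the same idea in Python -- purely illustrative, assuming the
prometheus_client library, with made-up metric names:

    import random
    import time

    from prometheus_client import Gauge, start_http_server

    # Hypothetical gauges, for illustration only.
    price_gauge = Gauge("demo_asset_price", "Current asset price")
    volume_gauge = Gauge("demo_asset_volume", "Current trade volume")

    if __name__ == "__main__":
        # Serve /metrics on port 8000 so a Prometheus ServiceMonitor can scrape it.
        start_http_server(8000)
        while True:
            price_gauge.set(random.uniform(100, 200))
            volume_gauge.set(random.uniform(0, 50))
            time.sleep(15)

The ServiceMonitor then just points Prometheus at that /metrics endpoint, and
the PrometheusRule defines the alerting rules on top of the scraped metrics.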



On Wed, 1 Jul 2020 at 17:13, Kevin Fenzi  wrote:

> On Sun, Jun 28, 2020 at 01:01:31AM +0100, David Kirwan wrote:
> >
> > Hmm the (prometheus, grafana, alertmanager) stack itself is pretty
> simple I
> > would have said, but I agree it is certainly complex when
> > installed/integrated on Openshift.. (most things are needlessly complex
> on
> > Openshift tbh, and its an order of magnitude worse on Openshift 4 with
> > these operators added to the mix).
>
> Well, they may not be that complex... like I said, I haven't used them
> much, so I might be missing how they work.
>
> > It would be the obvious choice for me anyway considering this stack is
> > available by default on a fresh Openshift install. We could make use of
> > this cluster monitoring stack, especially if we're also deploying our
> > services on Openshift. I might throw a POC/demo together to show how
> "easy"
> > it is to get your app hooked into the Openshift cluster monitoring stack,
> > or the UserWorkload  tech preview monitoring stack[1].
>
> I agree it makes sense to use this for openshift apps.
> I am not sure at all we should use it for non openshift apps.
>
> > If we did use this stack it would add a little extra pain with regards to
> > monitoring storage maintenance/pruning. But maybe far less than
> > running/maintaining a whole separate monitoring stack outside the
> Openshift
> > cluster. There are also efficiencies to be made when developers are
> already
> > in the Openshift/Kubernetes mindset, creating an extra Service and
> > ServiceMonitor is a minor thing etc.
>
> Sure, but we have a lot of legacy stuff we want to monitor/review logs
> for too.
>
> The right answer might be to just separate those two use cases with
> different solutions, but then we have 2 things to maintain.
> It's probably going to take some investigation and some proof-of-concept
> work.
>
> kevin


-- 
David Kirwan
Software Engineer

Community Platform Engineering @ Red Hat

T: +(353) 86-8624108 IM: @dkirwan


Re: What is our technical debt?

2020-07-01 Thread Stephen John Smoogen
On Wed, 1 Jul 2020 at 14:46, Miroslav Suchý  wrote:
>
> Dne 30. 06. 20 v 9:44 Pierre-Yves Chibon napsal(a):
> > What are you talking about here? The Fedora release process? The 
> > mass-branching
> > in dist-git?
>
> This. And creating new gpg keys, new mock configs, new tags in Koji, add 
> release to retrace.f.o, Copr, ... I have a
> dream where you just bump up number in - let say - PDC and everything else 
> will happen automagically. At right time.
>

I think choosing the one tool which is so end-of-life to do it in.. is
a sign of why we can't do this. Every release, some set of tools in
Fedora gets added by some team who have been working on their own
schedules and their own API without any idea of what other teams are
working on. We then have to do a lot of integration to make it work
before the release deadline. Then usually after 1 or 2
releases that software team is no longer in existence, and we have to
continue with it, waiting for the promised replacement which will do
all those things you list above.. but instead we get some other tool
which has to be shoved in.

Every attempt to stop this has been met with a great 'that sounds great,
but you have to wait until we land OUR important tool, which won't
actually meet any of those needs but has to be in place.' I think for
this to actually work, we need to redesign our entire application flow
and product lines.. which 20 years of Stockholm syndrome makes it hard
for me to see ever happening.

> --
> Miroslav Suchy, RHCA
> Red Hat, Associate Manager ABRT/Copr, #brno, #fedora-buildsys



-- 
Stephen J Smoogen.


Re: What is our technical debt?

2020-07-01 Thread Miroslav Suchý
Dne 30. 06. 20 v 9:44 Pierre-Yves Chibon napsal(a):
> What are you talking about here? The Fedora release process? The 
> mass-branching
> in dist-git?

This. And creating new GPG keys, new mock configs, new tags in Koji, adding the
release to retrace.f.o, Copr, ... I have a
dream where you just bump a number in - let's say - PDC and everything else will
happen automagically. At the right time.

-- 
Miroslav Suchy, RHCA
Red Hat, Associate Manager ABRT/Copr, #brno, #fedora-buildsys


Re: What is our technical debt?

2020-07-01 Thread Kevin Fenzi
On Sun, Jun 28, 2020 at 01:01:31AM +0100, David Kirwan wrote:
> 
> Hmm the (prometheus, grafana, alertmanager) stack itself is pretty simple I
> would have said, but I agree it is certainly complex when
> installed/integrated on Openshift.. (most things are needlessly complex on
> Openshift tbh, and it's an order of magnitude worse on Openshift 4 with
> these operators added to the mix).

Well, they may not be that complex... like I said, I haven't used them
much, so I might be missing how they work. 

> It would be the obvious choice for me anyway considering this stack is
> available by default on a fresh Openshift install. We could make use of
> this cluster monitoring stack, especially if we're also deploying our
> services on Openshift. I might throw a POC/demo together to show how "easy"
> it is to get your app hooked into the Openshift cluster monitoring stack,
> or the UserWorkload  tech preview monitoring stack[1].

I agree it makes sense to use this for openshift apps. 
I am not sure at all we should use it for non openshift apps. 

> If we did use this stack it would add a little extra pain with regards to
> monitoring storage maintenance/pruning. But maybe far less than
> running/maintaining a whole separate monitoring stack outside the Openshift
> cluster. There are also efficiencies to be made when developers are already
> in the Openshift/Kubernetes mindset, creating an extra Service and
> ServiceMonitor is a minor thing etc.

Sure, but we have a lot of legacy stuff we want to monitor/review logs
for too. 

The right answer might be to just separate those two use cases with
different solutions, but then we have 2 things to maintain.
It's probably going to take some investigation and some proof-of-concept
work.

kevin


___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-30 Thread Clement Verna
On Thu, 25 Jun 2020 at 21:35, Pierre-Yves Chibon 
wrote:

> Good Morning Everyone,
>
> Just like every team we have technical debt in our work.
> I would like your help to try to define what it is for us.
>
> So far, I've come up with the following:
> - python3 support/migration
> - fedora-messaging
> - fedora-messaging schema
> - documentation
> - (unit-)tests
> - OpenID Connect
>
> What else would we want in there?
>

In my opinion the biggest struggle we have is too many code bases and we
don't have the time or interest to make sure that they are all in good
shape. I think that even if we were to spend the next 3 months just
focusing on paying back that debt (updating documentation, dependencies,
tests, etc.) we would be back to our current situation in a year or so,
because we just can't keep up.
In my opinion it would be really good to spend some time looking at all the
applications' interactions and look for opportunities to reduce these
interactions and consolidate features into fewer applications. (This is
something that I started when looking at PDC, and I still think that ideally
we should try to not replace PDC but enhance existing services to provide
the features we need.)
If anyone can draw a diagram of all the services we have and how they
interact with each other I would be super interested to see that and I
think that would be a great start to look at reducing our technical debt.
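
As a starting point for such a diagram, even something this small would do: a
hedged sketch in which the service list and the edges are just examples I made
up, rendered with the graphviz Python bindings so the map can live in git and
be regenerated as it gets corrected.

    # service_map.py -- illustrative only; the services and edges are examples
    import graphviz  # also needs the graphviz (dot) binaries installed

    # "a": ["b"] means a calls, or sends messages consumed by, b
    interactions = {
        "bodhi": ["koji", "fedora-messaging"],
        "koji": ["fedora-messaging"],
        "fedora-messaging": ["datagrepper", "fmn"],
        "pdc": ["bodhi"],
    }

    dot = graphviz.Digraph("fedora-apps", format="svg")
    for service, targets in interactions.items():
        dot.node(service)
        for target in targets:
            dot.edge(service, target)

    dot.render("fedora-apps")  # writes fedora-apps (DOT source) and fedora-apps.svg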


>
>
> Looking forward to your thoughts,
> Pierre
> ___
> infrastructure mailing list -- infrastructure@lists.fedoraproject.org
> To unsubscribe send an email to
> infrastructure-le...@lists.fedoraproject.org
> Fedora Code of Conduct:
> https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives:
> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org
>
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-30 Thread Michal Konecny



On 30/06/2020 12:02, Ankur Sinha wrote:
> On Tue, Jun 30, 2020 02:23:00 -0400, Neal Gompa wrote:
> > On Tue, Jun 30, 2020 at 2:22 AM Julen Landa Alustiza
> >  wrote:
> > > 20/6/26 11:32(e)an, David Kirwan igorleak idatzi zuen:
> > > > Hi all,
> > > >
> > > > If we are moving towards openshift/kubernetes backed services, we should
> > > > probably be sticking with containers rather than Vagrant. We can use CRC
> > > > [1] (Code Ready Containers) or minikube [2] for most local dev work.
> > >
> > > In my experience CRC is a pita for occasional contribution on different
> > > projects due to it's one month lifecycle. It's fine if you are using it
> > > daily but if you are using for just some few projects that only need
> > > some ocassional coding having to redeploy the minicluster every time is
> > > hard and time consuming.
> >
> > I cannot run CRC on my laptop, full stop. I don't have enough RAM for it.
>
> A note: running minikube on Fedora wasn't trivial the last time I'd
> tried it either, on account of us aggressively pushing Podman to our
> users, the minikube driver for which is still "experimental".
> https://minikube.sigs.k8s.io/docs/drivers/podman/

I can confirm this. I was trying to run a vagrant machine with minikube
and ended up using F30 to avoid some issues in newer Fedoras.


Michal

> The Kubernetes stuff we use at work on our Google Cloud instance all
> relies on Docker, not Podman, so I had to figure out how to install
> Docker on Fedora 32 which was another PITA. A recent fedora magazine
> post helps now:
> https://fedoramagazine.org/docker-and-fedora-32/
>
> I'm still developing using Podman myself. I just haven't had the time to
> get minikube + Docker working again. If folks do use it, please document
> the setup somewhere. I'm sure a lot of Fedora 32 users would really
> appreciate it :)


___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


--
Role: Fedora CPE Team - Software Engineer
IRC: mkonecny
FAS: zlopez

___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-30 Thread Ankur Sinha
On Tue, Jun 30, 2020 02:23:00 -0400, Neal Gompa wrote:
> On Tue, Jun 30, 2020 at 2:22 AM Julen Landa Alustiza
>  wrote:
> > 20/6/26 11:32(e)an, David Kirwan igorleak idatzi zuen:
> > > Hi all,
> > >
> > > If we are moving towards openshift/kubernetes backed services, we should
> > > probably be sticking with containers rather than Vagrant. We can use CRC
> > > [1] (Code Ready Containers) or minikube [2] for most local dev work.
> >
> > In my experience CRC is a pita for occasional contribution on different
> > projects due to it's one month lifecycle. It's fine if you are using it
> > daily but if you are using for just some few projects that only need
> > some ocassional coding having to redeploy the minicluster every time is
> > hard and time consuming.
> >
> 
> I cannot run CRC on my laptop, full stop. I don't have enough RAM for it.

A note: running minikube on Fedora wasn't trivial the last time I'd
tried it either, on account of us aggressively pushing Podman to our
users, the minikube driver for which is still "experimental".
https://minikube.sigs.k8s.io/docs/drivers/podman/

The Kubernetes stuff we use at work on our Google Cloud instance all
relies on Docker, not Podman, so I had to figure out how to install
Docker on Fedora 32 which was another PITA. A recent fedora magazine
post helps now:
https://fedoramagazine.org/docker-and-fedora-32/

I'm still developing using Podman myself. I just haven't had the time to
get minikube + Docker working again. If folks do use it, please document
the setup somewhere. I'm sure a lot of Fedora 32 users would really
appreciate it :)

-- 
Thanks,
Regards,
Ankur Sinha "FranciscoD" (He / Him / His) | 
https://fedoraproject.org/wiki/User:Ankursinha
Time zone: Europe/London


___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-30 Thread Pierre-Yves Chibon
On Mon, Jun 29, 2020 at 11:43:53PM +0200, Miroslav Suchý wrote:
> Dne 25. 06. 20 v 21:27 Pierre-Yves Chibon napsal(a):
> > What else would we want in there?
> 
> Automate branching.
> Or to be more general - automate new release. The whole process include way 
> too many manual steps.

What are you talking about here? The Fedora release process? The mass-branching
in dist-git?
Or do you mean our individual apps?

> Allow people to create groups in FAS without the need to create infra issue.

I don't know how much self-service we want to be for groups. There are multiple
groups and it's already confusing to some people; I fear that if we open this up
we may end up in a weird situation.
We're also using FPCA+1 as a way to distinguish "active" contributors, so if
creating groups becomes self-service, we may end up opening the doors to spam
accounts creating their own group to become FPCA+1, giving them access to the
@fedoraproject.org email alias as well as edit access to the wiki (which they
have messed with a few times already in the past).


Pierre
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-29 Thread Neal Gompa
On Tue, Jun 30, 2020 at 2:22 AM Julen Landa Alustiza
 wrote:
>
>
>
> 20/6/26 11:32(e)an, David Kirwan igorleak idatzi zuen:
> > Hi all,
> >
> > If we are moving towards openshift/kubernetes backed services, we should
> > probably be sticking with containers rather than Vagrant. We can use CRC
> > [1] (Code Ready Containers) or minikube [2] for most local dev work.
>
> In my experience CRC is a pita for occasional contribution on different
> projects due to it's one month lifecycle. It's fine if you are using it
> daily but if you are using for just some few projects that only need
> some ocassional coding having to redeploy the minicluster every time is
> hard and time consuming.
>

I cannot run CRC on my laptop, full stop. I don't have enough RAM for it.



-- 
真実はいつも一つ!/ Always, there's only one truth!
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-29 Thread Julen Landa Alustiza


20/6/26 11:32(e)an, David Kirwan igorleak idatzi zuen:
> Hi all,
> 
> If we are moving towards openshift/kubernetes backed services, we should
> probably be sticking with containers rather than Vagrant. We can use CRC
> [1] (Code Ready Containers) or minikube [2] for most local dev work.

In my experience CRC is a pita for occasional contribution on different
projects due to its one month lifecycle. It's fine if you are using it
daily, but if you are using it for just a few projects that only need
some occasional coding, having to redeploy the minicluster every time is
hard and time consuming.

-- 
Julen Landa Alustiza 
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-29 Thread Miroslav Suchý
Dne 25. 06. 20 v 21:27 Pierre-Yves Chibon napsal(a):
> What else would we want in there?

Automate branching.
Or to be more general - automate new release. The whole process include way too 
many manual steps.

Allow people to create groups in FAS without the need to create infra issue.

-- 
Miroslav Suchy, RHCA
Red Hat, Associate Manager ABRT/Copr, #brno, #fedora-buildsys
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-29 Thread Aurelien Bompard
>   It doesn't? What about https://github.com/freeipa/freeipa-container ?
>
> My understanding is that it is an experimental implementation
> currently. FreeIPA does not necessarily work very well broken up into
> containers right now.
>

Yes, and running FreeIPA in a container requires the container to run as
root, which is not allowed in our Openshift.
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-28 Thread Stephen John Smoogen
On Fri, 26 Jun 2020 at 07:51, Stephen John Smoogen  wrote:
>
> On Thu, 25 Jun 2020 at 15:27, Pierre-Yves Chibon  wrote:
> >
> > Good Morning Everyone,
> >
> > Just like every team we have technical debt in our work.
> > I would like your help to try to define what it is for us.
> >
> > So far, I've come up with the following:
> > - python3 support/migration
> > - fedora-messaging
> > - fedora-messaging schema
> > - documentation
> > - (unit-)tests
> > - OpenID Connect
> >
> > What else would we want in there?
> >
>
> 1. mailman3. currently running in a broken vm which was transported from PHX2.
> 2. OpenShift is currently running in openshift 3 and may need to move
> to OS4 (I do not know eol for OS3)
> 3. PDC is EOL software with a replacement needing to be dealt with
> 4. Our website setup and running is a multi hour ansible run mess
> 5. Our docs on website setup is a multi-hour mess
> 6. We have NO working monitoring. It is going to take me a week to get
> it working and several months to replace it with something else
> 7. Any other vm's we shifted over from PHX2 to IAD2 versus rebuild
> from scratch should be considered unmaintained debt
> 8. Our staging needs to be designed from scratch and put in place with
> a rollout plan to replicate it in prod
> 9. OpenQA that Adam needs some specing out and work on it. It
> currently requires running on a 10.0.0.0/16 network.. The problem is
> that those IPs are also our running networks. This is causing leaks
> which are causing problems with our switch and routers.
> 10. Our deployment infrastructure of kickstarts/pxe/tftp falls under
> technical debt. It is based off of what we have been doing for 10+
> years and it has broken a lot in this transition. When it works its
> fine, and when it doesn't nothing works.

11. monitoring... which should have been higher. Our monitoring is
currently very broken. It is a set of jinja nagios templates I wrote
while trying to do 2 other things and not knowing jinja very well.
Updating it to work with nagios and our current infrastructure is
going to be a big job... moving to a different monitoring system is going
to be a big job. Having it not just be me who knows how it 'works'
(insert insane laughter) would be a great idea.

-- 
Stephen J Smoogen.
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-27 Thread David Kirwan
On Fri, 26 Jun 2020 at 17:53, Kevin Fenzi  wrote:

> On Fri, Jun 26, 2020 at 10:32:14AM +0100, David Kirwan wrote:
> > Hi all,
> >
> > If we are moving towards openshift/kubernetes backed services, we should
> > probably be sticking with containers rather than Vagrant. We can use CRC
> > [1] (Code Ready Containers) or minikube [2] for most local dev work.
> >
> > I'd be very much in favour of having an Infra managed Prometheus instance
> > (+ grafana and alertmanager on Openshift), its something I hoped to work
> on
> > within CPE sustaining infact.
>
> You know, I'm not in love with that stack. It could well be that I just
> haven't used it enough or know enough about it, but it seems just
> needlessly complex. ;(
>

Hmm the (prometheus, grafana, alertmanager) stack itself is pretty simple I
would have said, but I agree it is certainly complex when
installed/integrated on Openshift.. (most things are needlessly complex on
Openshift tbh, and its an order of magnitude worse on Openshift 4 with
these operators added to the mix).

It would be the obvious choice for me anyway considering this stack is
available by default on a fresh Openshift install. We could make use of
this cluster monitoring stack, especially if we're also deploying our
services on Openshift. I might throw a POC/demo together to show how "easy"
it is to get your app hooked into the Openshift cluster monitoring stack,
or the UserWorkload  tech preview monitoring stack[1].

If we did use this stack it would add a little extra pain with regards to
monitoring storage maintenance/pruning. But maybe far less than
running/maintaining a whole separate monitoring stack outside the Openshift
cluster. There are also efficiencies to be made when developers are already
in the Openshift/Kubernetes mindset, creating an extra Service and
ServiceMonitor is a minor thing etc.
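
For what it's worth, the application side of "hooking in" really is small. A
minimal, hypothetical sketch with the prometheus_client library (the metric
names and port are made up, this isn't lifted from any of our apps):

    # metrics_demo.py -- sketch of exposing app metrics for Prometheus to scrape
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("myapp_requests_total", "Requests handled")           # invented metric
    LATENCY = Histogram("myapp_request_seconds", "Request processing time")  # invented metric

    def handle_request():
        with LATENCY.time():
            time.sleep(random.random() / 10)  # stand-in for real work
        REQUESTS.inc()

    if __name__ == "__main__":
        start_http_server(8000)  # /metrics is now served on :8000
        while True:
            handle_request()

On the cluster side that leaves a Service pointing at port 8000 and a
ServiceMonitor selecting it, which is roughly what the doc in [1] below walks
through.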


- [1]
https://docs.openshift.com/container-platform/4.4/monitoring/monitoring-your-own-services.html

-- 
David Kirwan
Software Engineer

Community Platform Engineering @ Red Hat

T: +(353) 86-8624108 IM: @dkirwan
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-27 Thread Kevin Fenzi
On Sat, Jun 27, 2020 at 06:38:43PM -0400, Stephen John Smoogen wrote:
> On Sat, 27 Jun 2020 at 08:05, Peter Robinson  wrote:
...snip...
> > From a VM PoV it should "just work" for VMs that use tianocore/UEFI on
> > x86, not sure what the default is for the infra VMs, but I would
> > suggest that any VMs that currently use the old "BIOS" firmware be
> > moved over to UEFI as they're rebuilt as in the general industry UEFI
> > is now the default, some cloud providers aside, and it's certainly the
> > case for x86/aarch64 HW.

Yeah, our vm's are all default/normal/bios boot...
we can look at moving them to uefi at some point. I assume that's just
making the firmware available to libvirt/qemu?

> >
> > Not sure what the status is for Power/Z-series in this context.
> >
> > Also does the new DC support IPv6 for external services now?
> >
> 
> It does, but our services do not so they would sometimes talk back
> over ipv6 and sometimes over ipv4 to the same system and things
> wouldn't work. We turned it off until we could get our basic
> infrastructure in place so we were not debugging yet another thing
> that was not working. We expect to turn it back on in August.

My understanding/what I saw was that we were getting ipv6 addresses on
everything, but the routing was not working at all, so things would try
and use it and fail. 

But yeah, we should at least try and enable it again for the edge when
things are calmed down. 

kevin


___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-27 Thread Stephen John Smoogen
On Sat, 27 Jun 2020 at 08:05, Peter Robinson  wrote:
>
> > > 10. Our deployment infrastructure of kickstarts/pxe/tftp falls under
> > > technical debt. It is based off of what we have been doing for 10+
> > > years and it has broken a lot in this transition. When it works its
> > > fine, and when it doesn't nothing works.
> >
> > I'm not sure any more 'modern' thing here would be much better on the
> > hardware level. For vm's, yeah, there's some annoyances with
> > virt-installs which we should either track down and fix, or just go to
> > the 'use a cloud image and adjust it' mode.
>
> HTTP Boot would be the "new" replacement for PXE/tftp in this context.
> Most modern HW should support it, whether it supports HTTPS is less
> sure, in the IoT gateway space we've had some rather dubious options,
> but HTTP worked. Over all it's more secure and more straightforward
> for firewalls etc as HTTP(S) is generally allowed.
>

The only thing I have found which supports it in our modern HW is our
Power systems which do it via petitboot. Everything else (even stuff
bought 3 months ago) has needed to get enough over pxe/tftp so that it
could do the http after. It may need some finagling somewhere in the
systems but it is buried or not clearly labeled in the Lenovo EMAGs or
Dell boxes. I spent a couple of hours trying to find it on these and
ended up going with what I knew worked. If someone can help me on this
I would appreciate it.

> From a VM PoV it should "just work" for VMs that use tianocore/UEFI on
> x86, not sure what the default is for the infra VMs, but I would
> suggest that any VMs that currently use the old "BIOS" firmware be
> moved over to UEFI as they're rebuilt as in the general industry UEFI
> is now the default, some cloud providers aside, and it's certainly the
> case for x86/aarch64 HW.
>
> Not sure what the status is for Power/Z-series in this context.
>
> Also does the new DC support IPv6 for external services now?
>

It does, but our services do not so they would sometimes talk back
over ipv6 and sometimes over ipv4 to the same system and things
wouldn't work. We turned it off until we could get our basic
infrastructure in place so we were not debugging yet another thing
that was not working. We expect to turn it back on in August.


> Peter
> ___
> infrastructure mailing list -- infrastructure@lists.fedoraproject.org
> To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
> Fedora Code of Conduct: 
> https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives: 
> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org



-- 
Stephen J Smoogen.
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-27 Thread Peter Robinson
> > 10. Our deployment infrastructure of kickstarts/pxe/tftp falls under
> > technical debt. It is based off of what we have been doing for 10+
> > years and it has broken a lot in this transition. When it works its
> > fine, and when it doesn't nothing works.
>
> I'm not sure any more 'modern' thing here would be much better on the
> hardware level. For vm's, yeah, there's some annoyances with
> virt-installs which we should either track down and fix, or just go to
> the 'use a cloud image and adjust it' mode.

HTTP Boot would be the "new" replacement for PXE/tftp in this context.
Most modern HW should support it; whether it supports HTTPS is less
sure (in the IoT gateway space we've had some rather dubious options,
but HTTP worked). Overall it's more secure and more straightforward
for firewalls etc., as HTTP(S) is generally allowed.
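
(As a toy illustration of the "plain HTTP" half, and not a proposal for how the
provisioning setup should actually be built: serving kickstarts or an install
tree over HTTP needs nothing more than something like the snippet below, where
the directory and port are placeholders.)

    # serve_boot.py -- toy sketch; "boot/" and the port are placeholders
    from functools import partial
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    handler = partial(SimpleHTTPRequestHandler, directory="boot")
    HTTPServer(("0.0.0.0", 8080), handler).serve_forever()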

From a VM PoV it should "just work" for VMs that use tianocore/UEFI on
x86. I'm not sure what the default is for the infra VMs, but I would
suggest that any VMs that currently use the old "BIOS" firmware be
moved over to UEFI as they're rebuilt, since in the general industry UEFI
is now the default (some cloud providers aside), and it's certainly the
case for x86/aarch64 HW.

Not sure what the status is for Power/Z-series in this context.

Also does the new DC support IPv6 for external services now?

Peter
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Kevin Fenzi
On Fri, Jun 26, 2020 at 10:32:14AM +0100, David Kirwan wrote:
> Hi all,
> 
> If we are moving towards openshift/kubernetes backed services, we should
> probably be sticking with containers rather than Vagrant. We can use CRC
> [1] (Code Ready Containers) or minikube [2] for most local dev work.
> 
> I'd be very much in favour of having an Infra managed Prometheus instance
> (+ grafana and alertmanager on Openshift), its something I hoped to work on
> within CPE sustaining infact.

You know, I'm not in love with that stack. It could well be that I just
haven't used it enough or know enough about it, but it seems just
needlessly complex. ;( 

I'd prefer we start out at a lower level... what are our requirements?
Then, see how we can set up something to meet those.

Off the top of my head (I'm sure I can think of more): 

* Ability to collect/gather rsyslog output from all our machines. 
* Ability to generate reports of 'variances' from all that (ie, what odd
messages should a human look at?)
* Handle all the logs from openshift, possibly multiple clusters?
* Ability to easily drill down and look at some specific historical logs
(ie, show me the logs for the bodhi-web pods from last week when there
was an issue). 

Perhaps prometheus/grafana/alertmanager is the solution, but there are
also tons of other open source projects out there that we might look
into. 
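
To make the 'variances' requirement above a bit more concrete, here's a
deliberately naive sketch (the ignore patterns are placeholders) of the kind of
report that requirement implies, whatever tooling ends up generating it:

    # log_variances.py -- naive sketch: list syslog lines that don't match known noise
    import re
    import sys
    from collections import Counter

    IGNORE = [  # placeholder patterns for "expected" messages
        re.compile(r"CRON\[\d+\]"),
        re.compile(r"Accepted publickey for"),
        re.compile(r"systemd\[1\]: (Started|Stopped)"),
    ]

    def is_noise(line):
        return any(p.search(line) for p in IGNORE)

    def main(path):
        odd = Counter()
        with open(path, errors="replace") as fh:
            for line in fh:
                if not is_noise(line):
                    # collapse digits so similar messages group together
                    odd[re.sub(r"\d+", "N", line.strip())] += 1
        for msg, count in odd.most_common(50):
            print(f"{count:6d}  {msg}")

    if __name__ == "__main__":
        main(sys.argv[1])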

kevin
--
> 
> 
> - [1] https://github.com/code-ready/crc
> - [2] https://minikube.sigs.k8s.io/docs/
> 
> 
> 
> On Fri, 26 Jun 2020 at 10:23, Luca BRUNO  wrote:
> 
> > On Thu, 25 Jun 2020 15:59:44 -0700
> > Kevin Fenzi  wrote:
> >
> > > > What else would we want in there?
> > >
> > > Monitoring - we will likely get our nagios setup again soon just
> > > because it's mostly easy, but it's also not ideal.
> >
> > On this one (or more broadly "observability") I'd still like to see an
> > infra-managed Prometheus to internally cover and sanity-check the
> > "openshift-apps" services.
> > I remember this was on the "backlog" dashboard at Flock'19 but I don't
> > know if it got translated to an actual action item/ticket in the end.
> >
> > Ciao, Luca
> > ___
> > infrastructure mailing list -- infrastructure@lists.fedoraproject.org
> > To unsubscribe send an email to
> > infrastructure-le...@lists.fedoraproject.org
> > Fedora Code of Conduct:
> > https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> > List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> > List Archives:
> > https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org
> >
> 
> 
> -- 
> David Kirwan
> Software Engineer
> 
> Community Platform Engineering @ Red Hat
> 
> T: +(353) 86-8624108 IM: @dkirwan

> ___
> infrastructure mailing list -- infrastructure@lists.fedoraproject.org
> To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
> Fedora Code of Conduct: 
> https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives: 
> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org



___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Kevin Fenzi
On Fri, Jun 26, 2020 at 07:51:25AM -0400, Stephen John Smoogen wrote:
> On Thu, 25 Jun 2020 at 15:27, Pierre-Yves Chibon  wrote:
> >
> > Good Morning Everyone,
> >
> > Just like every team we have technical debt in our work.
> > I would like your help to try to define what it is for us.
> >
> > So far, I've come up with the following:
> > - python3 support/migration
> > - fedora-messaging
> > - fedora-messaging schema
> > - documentation
> > - (unit-)tests
> > - OpenID Connect
> >
> > What else would we want in there?
> >
> 
> 1. mailman3. currently running in a broken vm which was transported from PHX2.
> 2. OpenShift is currently running in openshift 3 and may need to move
> to OS4 (I do not know eol for OS3)

June 2022... so we have time. 
https://access.redhat.com/support/policy/updates/openshift_noncurrent

> 3. PDC is EOL software with a replacement needing to be dealt with
> 4. Our website setup and running is a multi hour ansible run mess
> 5. Our docs on website setup is a multi-hour mess

This should now be fixed. 

> 6. We have NO working monitoring. It is going to take me a week to get
> it working and several months to replace it with something else
> 7. Any other vm's we shifted over from PHX2 to IAD2 versus rebuild
> from scratch should be considered unmaintained debt

Good reminder. In addition to mailman, notifs/FMN is in this boat. 

> 8. Our staging needs to be designed from scratch and put in place with
> a rollout plan to replicate it in prod
> 9. OpenQA that Adam needs some specing out and work on it. It
> currently requires running on a 10.0.0.0/16 network.. The problem is
> that those IPs are also our running networks. This is causing leaks
> which are causing problems with our switch and routers.
> 10. Our deployment infrastructure of kickstarts/pxe/tftp falls under
> technical debt. It is based off of what we have been doing for 10+
> years and it has broken a lot in this transition. When it works its
> fine, and when it doesn't nothing works.

I'm not sure any more 'modern' thing here would be much better on the
hardware level. For vm's, yeah, there are some annoyances with
virt-installs which we should either track down and fix, or just go to
the 'use a cloud image and adjust it' mode. 

I'll also add: 

* datagrepper needs work. It's growing without bound and it's just a big
pile of messages. We could/should partition it at least and possibly
save its data in a more clever way.
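
On the partitioning idea, a hedged sketch of what that could look like with
PostgreSQL 10+ declarative partitioning, run through SQLAlchemy; the table and
column names below are guesses for illustration, not datagrepper's actual
schema:

    # partition_sketch.py -- illustrative only; names and DSN are assumptions
    import sqlalchemy

    DDL = """
    -- range-partition the message store by month so old partitions can be
    -- detached or dropped instead of bloating one giant table
    CREATE TABLE messages_partitioned (
        id BIGSERIAL,
        topic TEXT NOT NULL,
        sent_at TIMESTAMPTZ NOT NULL,
        body JSONB
    ) PARTITION BY RANGE (sent_at);

    CREATE TABLE messages_2020_06 PARTITION OF messages_partitioned
        FOR VALUES FROM ('2020-06-01') TO ('2020-07-01');
    """

    engine = sqlalchemy.create_engine("postgresql:///datagrepper")  # placeholder DSN
    with engine.begin() as conn:
        conn.execute(sqlalchemy.text(DDL))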

kevin


___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


[OT] Re: What is our technical debt?

2020-06-26 Thread Tomasz Torcz
On Fri, Jun 26, 2020 at 07:07:51AM -0400, Neal Gompa wrote:
> 
> > > Everything will run in a virtual machine given that
> > > enough care has been put into creating the VM. I don't think the same
> > > can be said for containers.
> >
> >   I think in todays world we should develop for containers first.
> > Especially when k8s abstracts many things and provides useful
> > infrastructure for application.  A bit like systemd a decade ago, by
> > providing useful APIs like socket-activation, watchdog, restarts,
> > parallel invocations locks, applications do not have to care about
> > re-implementing boring stuff over and over again.
> >
> 
> The difference is that it's actually a huge pain for people to run
> containers on Kubernetes. All these things you described can be done
> with systemd units in regular RPMs. In fact, for the AAA solution, I
> *already* did that so that we can reuse it for the Fedora and openSUSE
> deployments[1].
> 
> While I think it'd be valuable to figure out the container workflow
> for apps deployed in containers, let's not forget all that stuff in
> our infrastructure requires OpenShift, and I don't know about most of
> you, but I'm fresh out of OpenShift at home to be able to do this sort
> of thing.

  Actually, I have a 5-node 3.11 cluster at home now (3 VMs and 2 old laptops).
Over the summer I'm looking at installing OKD 4.4 on the laptops + 1 VM;
apparently 3-node setups can be done with "compact clusters".

https://docs.okd.io/latest/installing/installing_bare_metal/installing-bare-metal.html#installation-three-node-cluster_installing-bare-metal
https://github.com/openshift/enhancements/blob/master/enhancements/compact-clusters.md

  But I digress…

-- 
Tomasz Torcz   72->|   80->|
to...@pipebreaker.pl   72->|   80->|
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Stephen John Smoogen
On Thu, 25 Jun 2020 at 15:27, Pierre-Yves Chibon  wrote:
>
> Good Morning Everyone,
>
> Just like every team we have technical debt in our work.
> I would like your help to try to define what it is for us.
>
> So far, I've come up with the following:
> - python3 support/migration
> - fedora-messaging
> - fedora-messaging schema
> - documentation
> - (unit-)tests
> - OpenID Connect
>
> What else would we want in there?
>

1. mailman3. currently running in a broken vm which was transported from PHX2.
2. OpenShift is currently running in openshift 3 and may need to move
to OS4 (I do not know the EOL for OS3)
3. PDC is EOL software with a replacement needing to be dealt with
4. Our website setup and running is a multi-hour ansible run mess
5. Our docs on website setup are a multi-hour mess
6. We have NO working monitoring. It is going to take me a week to get
it working and several months to replace it with something else
7. Any other vm's we shifted over from PHX2 to IAD2 versus rebuilding
from scratch should be considered unmaintained debt
8. Our staging needs to be designed from scratch and put in place with
a rollout plan to replicate it in prod
9. OpenQA, which Adam runs, needs some specing out and work. It
currently requires running on a 10.0.0.0/16 network. The problem is
that those IPs are also our running networks. This is causing leaks
which are causing problems with our switches and routers.
10. Our deployment infrastructure of kickstarts/pxe/tftp falls under
technical debt. It is based off of what we have been doing for 10+
years and it has broken a lot in this transition. When it works it's
fine, and when it doesn't nothing works.




>
> Looking forward to your thoughts,
> Pierre
> ___
> infrastructure mailing list -- infrastructure@lists.fedoraproject.org
> To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
> Fedora Code of Conduct: 
> https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives: 
> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org



-- 
Stephen J Smoogen.
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Neal Gompa
On Fri, Jun 26, 2020 at 6:15 AM Tomasz Torcz  wrote:
>
> On Fri, Jun 26, 2020 at 10:50:47AM +0100, Stephen Coady wrote:
> > On Fri, 26 Jun 2020 at 10:34, David Kirwan  wrote:
> > >
> > > Hi all,
> > >
> > > If we are moving towards openshift/kubernetes backed services, we should 
> > > probably be sticking with containers rather than Vagrant. We can use CRC 
> > > [1] (Code Ready Containers) or minikube [2] for most local dev work.
> > >
> >
> > The only problem with that is not everything runs in containers. For
> > example the new AAA service is backed by FreeIPA and that does not run
> > in a container.
>
>   It doesn't? What about https://github.com/freeipa/freeipa-container ?
>

My understanding is that it is an experimental implementation
currently. FreeIPA does not necessarily work very well broken up into
containers right now.

> > Everything will run in a virtual machine given that
> > enough care has been put into creating the VM. I don't think the same
> > can be said for containers.
>
>   I think in todays world we should develop for containers first.
> Especially when k8s abstracts many things and provides useful
> infrastructure for application.  A bit like systemd a decade ago, by
> providing useful APIs like socket-activation, watchdog, restarts,
> parallel invocations locks, applications do not have to care about
> re-implementing boring stuff over and over again.
>

The difference is that it's actually a huge pain for people to run
containers on Kubernetes. All these things you described can be done
with systemd units in regular RPMs. In fact, for the AAA solution, I
*already* did that so that we can reuse it for the Fedora and openSUSE
deployments[1].

While I think it'd be valuable to figure out the container workflow
for apps deployed in containers, let's not forget all that stuff in
our infrastructure requires OpenShift, and I don't know about most of
you, but I'm fresh out of OpenShift at home to be able to do this sort
of thing.

I have made something really simple that kind of works for OKD 3.x[2],
but no such equivalent exists for OKD 4.x, so that's been out of reach
for me for a while. Plain Kubernetes literally does not work. Aside
from plain Kubernetes being a pain to actually get working enough to
run applications, we actually use OpenShift features that do not exist
in Kubernetes.

So I would caution all of this by stating that at least for me as an
external no-name plain contributor, I'm more or less locked out of
contributing to apps that are deployed exclusively through OpenShift.

[1]: https://copr.fedorainfracloud.org/coprs/ngompa/fedora-aaa/
[2]: https://pagure.io/openshift-allinone-deployment-configuration

-- 
真実はいつも一つ!/ Always, there's only one truth!
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Michal Konecny



On 26/06/2020 12:14, Tomasz Torcz wrote:
> On Fri, Jun 26, 2020 at 10:50:47AM +0100, Stephen Coady wrote:
> > On Fri, 26 Jun 2020 at 10:34, David Kirwan  wrote:
> > >
> > > Hi all,
> > >
> > > If we are moving towards openshift/kubernetes backed services, we should
> > > probably be sticking with containers rather than Vagrant. We can use CRC [1]
> > > (Code Ready Containers) or minikube [2] for most local dev work.
> >
> > The only problem with that is not everything runs in containers. For
> > example the new AAA service is backed by FreeIPA and that does not run
> > in a container.
>
>   It doesn't? What about https://github.com/freeipa/freeipa-container ?
>
> > Everything will run in a virtual machine given that
> > enough care has been put into creating the VM. I don't think the same
> > can be said for containers.
>
>   I think in todays world we should develop for containers first.
> Especially when k8s abstracts many things and provides useful
> infrastructure for application.  A bit like systemd a decade ago, by
> providing useful APIs like socket-activation, watchdog, restarts,
> parallel invocations locks, applications do not have to care about
> re-implementing boring stuff over and over again.

It will also make integration testing with other containerized apps much
easier.



--
Role: Fedora CPE Team - Software Engineer
IRC: mkonecny
FAS: zlopez
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Michal Konecny
I'm also playing with the idea of providing a containerized environment for
the apps I'm working on. Something that would be more similar to the
production deployment in Openshift, but easy to use. Using the same for
CI and integration testing would be a big plus.


Michal

On 25/06/2020 22:31, Till Maas wrote:
> On Thu, Jun 25, 2020 at 09:59:37PM +0200, Pierre-Yves Chibon wrote:
> > I've recently had quite some pain with vagrant (just today, I've tried several
> > time to start my bodhi vagrant box and lost my morning w/o success).
> >
> > I guess it may be nice to see if there is something else out there that we could
> > leverage.
> > If we could adopt one and try to get have it on most of our apps this may be a
> > nice goal for us to work towards.
>
> Containers instead of VMs might be the next step or Vagrant with podman
> (if this is supported). Also, having the test environment as part of the
> CI might be nice.
>
> Regards
> Till
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


--
Role: Fedora CPE Team - Software Engineer
IRC: mkonecny
FAS: zlopez
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Tomasz Torcz
On Fri, Jun 26, 2020 at 10:50:47AM +0100, Stephen Coady wrote:
> On Fri, 26 Jun 2020 at 10:34, David Kirwan  wrote:
> >
> > Hi all,
> >
> > If we are moving towards openshift/kubernetes backed services, we should 
> > probably be sticking with containers rather than Vagrant. We can use CRC 
> > [1] (Code Ready Containers) or minikube [2] for most local dev work.
> >
> 
> The only problem with that is not everything runs in containers. For
> example the new AAA service is backed by FreeIPA and that does not run
> in a container.

  It doesn't? What about https://github.com/freeipa/freeipa-container ?

> Everything will run in a virtual machine given that
> enough care has been put into creating the VM. I don't think the same
> can be said for containers.

  I think in today's world we should develop for containers first.
Especially when k8s abstracts many things and provides useful
infrastructure for applications.  A bit like systemd a decade ago: by
providing useful APIs like socket activation, watchdog, restarts, and
parallel invocation locks, applications do not have to care about
re-implementing boring stuff over and over again.
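
A hedged sketch of what those conveniences look like from the application side,
using the python-systemd bindings (the matching .socket/.service units, the
port and the watchdog timing are assumed, not shown):

    # sd_demo.py -- sketch of socket activation + watchdog with python-systemd
    import os
    import socket

    from systemd import daemon

    fds = daemon.listen_fds()
    if fds:
        # systemd passed us an already-listening socket (socket activation)
        sock = socket.socket(fileno=fds[0])
    else:
        # fallback when run by hand
        sock = socket.socket()
        sock.bind(("0.0.0.0", 9000))
        sock.listen()

    daemon.notify("READY=1")  # for Type=notify services
    watchdog_usec = int(os.environ.get("WATCHDOG_USEC", "0"))

    sock.settimeout(1.0)  # wake up regularly so the watchdog gets petted
    while True:
        try:
            conn, _ = sock.accept()
            conn.sendall(b"hello\n")
            conn.close()
        except socket.timeout:
            pass
        if watchdog_usec:
            daemon.notify("WATCHDOG=1")  # assumes WatchdogSec is comfortably > 1s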
  

-- 
Tomasz Torcz   72->|   80->|
to...@pipebreaker.pl   72->|   80->|
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Stephen Coady
On Fri, 26 Jun 2020 at 10:34, David Kirwan  wrote:
>
> Hi all,
>
> If we are moving towards openshift/kubernetes backed services, we should 
> probably be sticking with containers rather than Vagrant. We can use CRC [1] 
> (Code Ready Containers) or minikube [2] for most local dev work.
>

The only problem with that is not everything runs in containers. For
example the new AAA service is backed by FreeIPA and that does not run
in a container. Everything will run in a virtual machine given that
enough care has been put into creating the VM. I don't think the same
can be said for containers.

> I'd be very much in favour of having an Infra managed Prometheus instance (+ 
> grafana and alertmanager on Openshift), its something I hoped to work on 
> within CPE sustaining infact.
>
>
> - [1] https://github.com/code-ready/crc
> - [2] https://minikube.sigs.k8s.io/docs/
>
>
>
> On Fri, 26 Jun 2020 at 10:23, Luca BRUNO  wrote:
>>
>> On Thu, 25 Jun 2020 15:59:44 -0700
>> Kevin Fenzi  wrote:
>>
>> > > What else would we want in there?
>> >
>> > Monitoring - we will likely get our nagios setup again soon just
>> > because it's mostly easy, but it's also not ideal.
>>
>> On this one (or more broadly "observability") I'd still like to see an
>> infra-managed Prometheus to internally cover and sanity-check the
>> "openshift-apps" services.
>> I remember this was on the "backlog" dashboard at Flock'19 but I don't
>> know if it got translated to an actual action item/ticket in the end.
>>
>> Ciao, Luca
>> ___
>> infrastructure mailing list -- infrastructure@lists.fedoraproject.org
>> To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
>> Fedora Code of Conduct: 
>> https://docs.fedoraproject.org/en-US/project/code-of-conduct/
>> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
>> List Archives: 
>> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org
>
>
>
> --
> David Kirwan
> Software Engineer
>
> Community Platform Engineering @ Red Hat
>
> T: +(353) 86-8624108 IM: @dkirwan
>
> ___
> infrastructure mailing list -- infrastructure@lists.fedoraproject.org
> To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
> Fedora Code of Conduct: 
> https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives: 
> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org



-- 
Stephen Coady
Software Engineer
Red Hat
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread David Kirwan
Hi all,

If we are moving towards openshift/kubernetes backed services, we should
probably be sticking with containers rather than Vagrant. We can use CRC
[1] (Code Ready Containers) or minikube [2] for most local dev work.

I'd be very much in favour of having an Infra managed Prometheus instance
(+ grafana and alertmanager on Openshift), its something I hoped to work on
within CPE sustaining infact.


- [1] https://github.com/code-ready/crc
- [2] https://minikube.sigs.k8s.io/docs/



On Fri, 26 Jun 2020 at 10:23, Luca BRUNO  wrote:

> On Thu, 25 Jun 2020 15:59:44 -0700
> Kevin Fenzi  wrote:
>
> > > What else would we want in there?
> >
> > Monitoring - we will likely get our nagios setup again soon just
> > because it's mostly easy, but it's also not ideal.
>
> On this one (or more broadly "observability") I'd still like to see an
> infra-managed Prometheus to internally cover and sanity-check the
> "openshift-apps" services.
> I remember this was on the "backlog" dashboard at Flock'19 but I don't
> know if it got translated to an actual action item/ticket in the end.
>
> Ciao, Luca
> ___
> infrastructure mailing list -- infrastructure@lists.fedoraproject.org
> To unsubscribe send an email to
> infrastructure-le...@lists.fedoraproject.org
> Fedora Code of Conduct:
> https://docs.fedoraproject.org/en-US/project/code-of-conduct/
> List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
> List Archives:
> https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org
>


-- 
David Kirwan
Software Engineer

Community Platform Engineering @ Red Hat

T: +(353) 86-8624108 IM: @dkirwan
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Luca BRUNO
On Thu, 25 Jun 2020 15:59:44 -0700
Kevin Fenzi  wrote:

> > What else would we want in there?
> 
> Monitoring - we will likely get our nagios setup again soon just
> because it's mostly easy, but it's also not ideal. 

On this one (or more broadly "observability") I'd still like to see an
infra-managed Prometheus to internally cover and sanity-check the
"openshift-apps" services.
I remember this was on the "backlog" dashboard at Flock'19 but I don't
know if it got translated to an actual action item/ticket in the end.

Ciao, Luca
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Stephen Coady
On Thu, 25 Jun 2020 at 21:00, Pierre-Yves Chibon  wrote:
>
> I've recently had quite some pain with vagrant (just today, I've tried several
> time to start my bodhi vagrant box and lost my morning w/o success).
>
> I guess it may be nice to see if there is something else out there that we 
> could
> leverage.
> If we could adopt one and try to get have it on most of our apps this may be a
> nice goal for us to work towards.
>
>
> Pierre

I've experienced the same, and since I don't have any Ruby experience
it is very difficult to debug when something goes wrong. Vagrant is
far from perfect but I do think it is much, much better than the
alternatives.

One big problem though is that even these boxes can quickly fall out
of date and stop working when dependencies are no longer happy running
inside the VM. In a project with a very occasional pull request this
is a problem. One possible solution here is to run nightly builds on
these projects. This would make it easier to see something that needs
to be updated before it gets out of hand and becomes a huge chunk of
work.

-- 
Stephen Coady
___
infrastructure mailing list -- infrastructure@lists.fedoraproject.org
To unsubscribe send an email to infrastructure-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/infrastructure@lists.fedoraproject.org


Re: What is our technical debt?

2020-06-26 Thread Pierre-Yves Chibon
On Thu, Jun 25, 2020 at 04:03:35PM -0700, Adam Williamson wrote:
> On Thu, 2020-06-25 at 21:59 +0200, Pierre-Yves Chibon wrote:
> > On Thu, Jun 25, 2020 at 03:51:42PM -0400, Neal Gompa wrote:
> > > On Thu, Jun 25, 2020 at 3:27 PM Pierre-Yves Chibon  
> > > wrote:
> > > > 
> > > > Good Morning Everyone,
> > > > 
> > > > Just like every team we have technical debt in our work.
> > > > I would like your help to try to define what it is for us.
> > > > 
> > > > So far, I've come up with the following:
> > > > - python3 support/migration
> > > > - fedora-messaging
> > > > - fedora-messaging schema
> > > > - documentation
> > > > - (unit-)tests
> > > > - OpenID Connect
> > > > 
> > > > What else would we want in there?
> > > > 
> > > 
> > > These are all good things, especially the documentation one. I'd like
> > > to zero in on a particular aspect of documentation, though: getting to
> > > hack on it. A lot of our projects are surprisingly difficult to get up
> > > and running for someone to play with and hack on, and this is
> > > increasingly true as we adopt OpenShift-style deployments. One way we
> > > solved this in Pagure is by providing some quick start processes in
> > > the documentation and a fully working Vagrant based process to boot up
> > > and have a working environment to hack on the code.
> > > 
> > > I'm not necessarily going to specify it needs to be Vagrant for
> > > everything, but I think this is something we should have for all of
> > > our projects, so that people *can* easily get going to use and
> > > contribute.
> > 
> > I've recently had quite some pain with vagrant (just today, I've tried 
> > several
> > time to start my bodhi vagrant box and lost my morning w/o success).
> > 
> > I guess it may be nice to see if there is something else out there that we 
> > could
> > leverage.
> > If we could adopt one and try to get have it on most of our apps this may 
> > be a
> > nice goal for us to work towards.
> 
> The thing is, even if vagrant *itself* is shonky as hell (I agree), if
> you vagrant-ify a project, there is at least a recipe I can relatively
> easily follow in a manually set-up VM or mock root or whatever I like.
> The fact that the recipe is designed for vagrant is almost incidental;
> the key thing is that there's an 'official' "here's how to set up a dev
> env from scratch" recipe.

Agreed, the basic ansible attached to the project is helpful to get an idea of
how to get it to work, with some caveats: the vagrant box is more likely
configured for dev than for prod (ie: running from the git checkout mounted
within the vm rather than via wsgi/gunicorn from a properly installed app).
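
(Purely as an illustration of that dev vs prod split, and assuming a
Flask-style WSGI app, here is roughly what the "properly installed"
mode looks like; the module and factory names below are made up, not
taken from any of our projects.)

    # wsgi.py -- hypothetical entry point for an installed app
    from myapp.app import create_app   # assumed application factory

    application = create_app()

    if __name__ == "__main__":
        # dev mode: run straight from the git checkout
        application.run(debug=True)

    # prod mode, which the vagrant boxes usually skip:
    #   gunicorn --workers 4 wsgi:application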

I wonder if https://github.com/karmab/kcli may be a suitable replacement for
vagrant.


Pierre


Re: What is our technical debt?

2020-06-25 Thread Adam Williamson
On Thu, 2020-06-25 at 21:59 +0200, Pierre-Yves Chibon wrote:
> On Thu, Jun 25, 2020 at 03:51:42PM -0400, Neal Gompa wrote:
> > On Thu, Jun 25, 2020 at 3:27 PM Pierre-Yves Chibon  
> > wrote:
> > > 
> > > Good Morning Everyone,
> > > 
> > > Just like every team we have technical debt in our work.
> > > I would like your help to try to define what it is for us.
> > > 
> > > So far, I've come up with the following:
> > > - python3 support/migration
> > > - fedora-messaging
> > > - fedora-messaging schema
> > > - documentation
> > > - (unit-)tests
> > > - OpenID Connect
> > > 
> > > What else would we want in there?
> > > 
> > 
> > These are all good things, especially the documentation one. I'd like
> > to zero in on a particular aspect of documentation, though: getting to
> > hack on it. A lot of our projects are surprisingly difficult to get up
> > and running for someone to play with and hack on, and this is
> > increasingly true as we adopt OpenShift-style deployments. One way we
> > solved this in Pagure is by providing some quick start processes in
> > the documentation and a fully working Vagrant based process to boot up
> > and have a working environment to hack on the code.
> > 
> > I'm not necessarily going to specify it needs to be Vagrant for
> > everything, but I think this is something we should have for all of
> > our projects, so that people *can* easily get going to use and
> > contribute.
> 
> I've recently had quite some pain with vagrant (just today, I've tried several
> times to start my bodhi vagrant box and lost my morning w/o success).
> 
> I guess it may be nice to see if there is something else out there that we
> could leverage.
> If we could adopt one and try to have it on most of our apps, this may be a
> nice goal for us to work towards.

The thing is, even if vagrant *itself* is shonky as hell (I agree), if
you vagrant-ify a project, there is at least a recipe I can relatively
easily follow in a manually set-up VM or mock root or whatever I like.
The fact that the recipe is designed for vagrant is almost incidental;
the key thing is that there's an 'official' "here's how to set up a dev
env from scratch" recipe.

Of course, it needs to be kept up to date and working...
-- 
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | XMPP: adamw AT happyassassin . net
http://www.happyassassin.net


Re: What is our technical debt?

2020-06-25 Thread Kevin Fenzi
On Thu, Jun 25, 2020 at 09:27:24PM +0200, Pierre-Yves Chibon wrote:
> Good Morning Everyone,
> 
> Just like every team we have technical debt in our work.
> I would like your help to try to define what it is for us.
> 
> So far, I've come up with the following:
> - python3 support/migration
> - fedora-messaging
> - fedora-messaging schema
> - documentation
> - (unit-)tests
> - OpenID Connect
> 
> What else would we want in there?

fedmsg? We still have things using it... we need to make an effort to
get everything on fedora-messaging. ;) 
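
(For anyone who hasn't made the jump yet, the fedora-messaging API is
pretty small. A minimal sketch with a made-up topic and body, so purely
illustrative; apps that define their own schemas would subclass
fedora_messaging.message.Message rather than publish the base class.)

    # illustrative publish with fedora-messaging; topic/body are placeholders
    from fedora_messaging import api, message

    msg = message.Message(
        topic="myapp.thing.updated",                    # assumed topic
        body={"thing": "example", "status": "updated"},
    )
    api.publish(msg)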

Monitoring - we will likely get our nagios set up again soon just because
it's mostly easy, but it's also not ideal. 

central logging - we have log01 running rsyslog and aggregating logs
from real stuff. We used to have epylog comb through this and send a
report every 8 hours (which I suspect I was the only one who ever
read), but it's gone now. And we have kibana/es/etc in openshift, which
is... not great. It would be great to get everything logging in one
place and have a process to show us 'unusual' messages and a way to
search the rest. 
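
(From the application side, the 'one place' part doesn't need anything
fancy; a minimal sketch using only the stdlib, where the hostname and
port are assumptions for the example, not our actual setup:)

    # illustrative only: forward an app's logs to a central rsyslog host
    import logging
    import logging.handlers

    handler = logging.handlers.SysLogHandler(
        address=("log01.example.org", 514),   # assumed host/port
    )
    handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

    logger = logging.getLogger("myapp")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    logger.info("this line ends up on the central log host")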

ansible - our ansible repo is now almost 8 years old. It works pretty
well in practice, but there are a lot of different styles in it, lots
of stuff that's been deprecated or now has newer, better ways of doing
things, etc. I don't think we should drop it entirely and re-write it,
but I think we should clean things up as we go. 

I'm sure there's more... :) 

kevin




Re: What is our technical debt?

2020-06-25 Thread Till Maas
On Thu, Jun 25, 2020 at 09:59:37PM +0200, Pierre-Yves Chibon wrote:
> I've recently had quite some pain with vagrant (just today, I've tried several
> times to start my bodhi vagrant box and lost my morning w/o success).
> 
> I guess it may be nice to see if there is something else out there that we
> could leverage.
> If we could adopt one and try to have it on most of our apps, this may be a
> nice goal for us to work towards.

Containers instead of VMs might be the next step, or Vagrant with podman
(if that is supported). Also, having the test environment as part of the
CI might be nice.

Regards
Till


Re: What is our technical debt?

2020-06-25 Thread Pierre-Yves Chibon
On Thu, Jun 25, 2020 at 03:51:42PM -0400, Neal Gompa wrote:
> On Thu, Jun 25, 2020 at 3:27 PM Pierre-Yves Chibon  
> wrote:
> >
> > Good Morning Everyone,
> >
> > Just like every team we have technical debt in our work.
> > I would like your help to try to define what it is for us.
> >
> > So far, I've come up with the following:
> > - python3 support/migration
> > - fedora-messaging
> > - fedora-messaging schema
> > - documentation
> > - (unit-)tests
> > - OpenID Connect
> >
> > What else would we want in there?
> >
> 
> These are all good things, especially the documentation one. I'd like
> to zero in on a particular aspect of documentation, though: getting to
> hack on it. A lot of our projects are surprisingly difficult to get up
> and running for someone to play with and hack on, and this is
> increasingly true as we adopt OpenShift-style deployments. One way we
> solved this in Pagure is by providing some quick start processes in
> the documentation and a fully working Vagrant based process to boot up
> and have a working environment to hack on the code.
> 
> I'm not necessarily going to specify it needs to be Vagrant for
> everything, but I think this is something we should have for all of
> our projects, so that people *can* easily get going to use and
> contribute.

I've recently had quite some pain with vagrant (just today, I've tried several
times to start my bodhi vagrant box and lost my morning w/o success).

I guess it may be nice to see if there is something else out there that we could
leverage.
If we could adopt one and try to have it on most of our apps, this may be a
nice goal for us to work towards.


Pierre


Re: What is our technical debt?

2020-06-25 Thread Neal Gompa
On Thu, Jun 25, 2020 at 3:27 PM Pierre-Yves Chibon  wrote:
>
> Good Morning Everyone,
>
> Just like every team we have technical debt in our work.
> I would like your help to try to define what it is for us.
>
> So far, I've come up with the following:
> - python3 support/migration
> - fedora-messaging
> - fedora-messaging schema
> - documentation
> - (unit-)tests
> - OpenID Connect
>
> What else would we want in there?
>

These are all good things, especially the documentation one. I'd like
to zero in on a particular aspect of documentation, though: getting to
hack on it. A lot of our projects are surprisingly difficult to get up
and running for someone to play with and hack on, and this is
increasingly true as we adopt OpenShift-style deployments. One way we
solved this in Pagure is by providing some quick start processes in
the documentation and a fully working Vagrant based process to boot up
and have a working environment to hack on the code.

I'm not necessarily going to specify it needs to be Vagrant for
everything, but I think this is something we should have for all of
our projects, so that people *can* easily get going to use and
contribute.


-- 
真実はいつも一つ!/ Always, there's only one truth!


What is our technical debt?

2020-06-25 Thread Pierre-Yves Chibon
Good Morning Everyone,

Just like every team we have technical debt in our work.
I would like your help to try to define what it is for us.

So far, I've come up with the following:
- python3 support/migration
- fedora-messaging
- fedora-messaging schema
- documentation
- (unit-)tests
- OpenID Connect

What else would we want in there?


Looking forward to your thoughts,
Pierre