[Pulp-dev] Pulp 3 plugin for Chef cookbooks

2018-05-15 Thread Simon Baatz
I created the beginnings of a Pulp 3 plugin to manage Chef cookbooks
[1].  Currently, it supports creating repos, creating cookbook content
units, and publishing repos.  A published & distributed repo will offer
a "universe" API endpoint for tools like Berkshelf.

I have not implemented sync yet; I am waiting for "PendingVersion" to become
available first.

I ran into a couple of problems/uncertainties described below (sorry for the
lengthy mail). I am new to Django, DRF, and, obviously, Pulp 3, so any remarks
or suggestions are welcome:

- Create Content: The plugin reads as much metadata as possible from the actual
  cookbook Artifact when creating a content unit. The motivation for this is:

  - One doesn't need a special tool to upload content, which makes uploading by,
    e.g., a CI job easier.
  - It ensures consistency between metadata stored in Pulp and the actual
metadata in the cookbook.

  However, this requires extracting a metadata file from a gzipped tar archive.
  Content unit creation is synchronous, and doing this work in a synchronous call
  might not be optimal (we already had a discussion on this topic on the
  pulp-dev mailing list).
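  For illustration, reading the metadata out of the gzipped tar archive might
  look roughly like this (a sketch only; the in-archive layout and the helper
  name are assumptions, not the plugin's actual code):

```python
import json
import tarfile

def read_cookbook_metadata(path):
    """Extract and parse metadata.json from a cookbook .tar.gz.

    Assumes the archive contains a single top-level directory,
    e.g. 'mycookbook/metadata.json' (an assumption for this sketch).
    """
    with tarfile.open(path, "r:gz") as tar:
        for member in tar.getmembers():
            parts = member.name.split("/")
            if len(parts) == 2 and parts[1] == "metadata.json":
                # json.load accepts the binary file object returned here
                return json.load(tar.extractfile(member))
    raise ValueError("no metadata.json found in %s" % path)
```

  Doing this inline in a synchronous create call is exactly the cost discussed
  above: the whole archive has to be scanned just to locate the metadata member.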

- Publication/Distribution: The metadata file ("universe") for a published
  cookbook repository contains absolute URLs for download (i.e. these point
  to published artifacts in a distribution).

  The current publication/distribution concept seems to have the underlying
  assumption that a Publication is fully relocatable: PublishedMetadata
  artifacts are created by the publishing task and creating a Distribution is a
  synchronous call that determines the base path of the published artifacts.

  This causes a problem with said "universe" file. Ideally, it could be
  pre-computed (it lists all artifacts in the repo).  However, this can't be
  done AFAIK since the base path is unknown at publication time and one can't
  generate additional metadata artifacts for a specific distribution later.

  The best solution I came up with was to implement a dynamic API. To reduce the
  amount of work to be done, the API does a simple string replacement: During
  publication, the universe file is pre-computed using placeholders. In the
  dynamic API these placeholders are replaced with the actual base URL of the
  distribution.

  However, I would prefer not to be forced to implement a dynamic API for static
  information. Is there a way to solve this differently?
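
  The string-replacement trick can be sketched in a few lines (the placeholder
  value and function name are illustrative, not the plugin's actual
  implementation):

```python
# At publish time the universe file is rendered once with a placeholder
# instead of the distribution's base URL (placeholder value is made up).
BASE_URL_PLACEHOLDER = "@@BASE_URL@@"

def render_universe(precomputed_universe: str, base_url: str) -> str:
    """Substitute the distribution's base URL at request time.

    `precomputed_universe` is the JSON text written during publication
    with BASE_URL_PLACEHOLDER inside every download_url.
    """
    return precomputed_universe.replace(BASE_URL_PLACEHOLDER, base_url)
```

  The dynamic view then only has to look up the distribution and run this one
  replacement, instead of recomputing the whole universe document per request.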

- Content-Type Header: The "universe" file is JSON and must have a corresponding
  "Content-Type" HTTP header.

  However, the content type of the development server seems to be "text/html" by
  default for all artifacts. Apparently, I can't set the content type of a
  (metadata) artifact(?)

- Getting the base URL of a distribution in the dynamic API is surprisingly
  complicated and depends on the inner structure of pulp core (I took the
  implementation from 'pulp_ansible'). IMHO, a well-defined way to obtain it
  should be part of the plugin API.

- "Content" class: The way to use only a single artifact in Content (as done
  in pulp_file) seems to require in-depth knowledge of the
  Content/ContentSerializer class and its inner workings.

  The downside of this can already be seen in the "pulp_file" plugin: the
  fields "id" and "created" are missing, since the implementation there just
  overrides 'fields' in the serializer.

  I think two Content types should be part of the plugin API: one with
  multiple artifacts, and a simpler one with a single artifact.

- Uploading an Artifact that already exists returns an error, which is
  annoying if you use http/curl to import artifacts. Suppose some other user
  uploaded an artifact in the past. You won't get useful
  information from the POST request uploading the same artifact:

  HTTP/1.1 400 Bad Request
  Allow: GET, POST, HEAD, OPTIONS
  Content-Type: application/json
  Date: Sat, 12 May 2018 17:50:54 GMT
  Server: WSGIServer/0.2 CPython/3.6.2
  Vary: Accept

  {
      "non_field_errors": [
          "sha512 checksum must be unique."
      ]
  }

  This forced me to do something like:

...
sha256=$(sha256sum "$targz" | awk '{print $1}')
ARTIFACT_HREF=$(http :8000/pulp/api/v3/artifacts/?sha256=$sha256 | jq -r '.results[0]._href')
if [[ $ARTIFACT_HREF == "null" ]]; then
    echo "uploading artifact $cookbook_name sha256: $sha256"
    http --form POST http://localhost:8000/pulp/api/v3/artifacts/ file@"$targz"
    ARTIFACT_HREF=$(http :8000/pulp/api/v3/artifacts/?sha256=$sha256 | jq -r '.results[0]._href')
...

  Perhaps a "303 See Other" to the existing artifact would help here.



[1]: https://github.com/gmbnomis/pulp_cookbook

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Milan Kovacik
On Tue, May 15, 2018 at 4:48 PM, Jeff Ortel  wrote:
>
>
> On 05/15/2018 09:29 AM, Milan Kovacik wrote:
>>
>> Hi,
>>
>> On Tue, May 15, 2018 at 3:22 PM, Dennis Kliban  wrote:
>>>
>>> On Mon, May 14, 2018 at 3:44 PM, Jeff Ortel  wrote:

 Let's brainstorm on something.

 Pulp needs to deal with remote repositories that are composed of
 multiple
 content types which may span the domain of a single plugin.  Here are a
 few
 examples.  Some Red Hat RPM repositories are composed of: RPMs, DRPMs,
 ISOs and Kickstart Trees.  Some OSTree repositories are composed of
 OSTrees
 & Kickstart Trees. This raises a question:

 How can pulp3 best support syncing with remote repositories that are
 composed of multiple (unrelated) content types in a way that doesn't
 result
 in plugins duplicating support for content types?

 Few approaches come to mind:

 1. Multiple plugins (Remotes) participate in the sync flow to produce a
 new repository version.
 2. Multiple plugins (Remotes) are sync'd successively each producing a
 new
 version of a repository.  Only the last version contains the fully
 sync'd
 composition.
 3. Plugins share code.
 4. Other?


 Option #1: Sync would be orchestrated by core or the user so that
 multiple
 plugins (Remotes) participate in populating a new repository version.
 For
 example: the RPM plugin (Remote) and the Kickstart Tree plugin (Remote)
 would both be sync'd against the same remote repository that is composed
 of
 both types.  The new repository version would be composed of the result
 of
 both plugin (Remote) syncs.  To support this, we'd need to provide a way
 for
 each plugin to operate seamlessly on the same (new) repository version.
 Perhaps something internal to the RepositoryVersion.  The repository
 version
 would not be marked "complete" until the last plugin (Remote) sync has
 succeeded.  More complicated than #2 but results in only creating truly
 complete versions or nothing.  No idea how this would work with current
 REST
 API whereby plugins provide sync endpoints.

>>> I like this approach because it allows the user to perform a single call
>>> to
>>> the REST API and specify multiple "sync methods" to use to create a
>>> single
>>> new repository version.
>>
>> Same here, esp. if the goal is an all-or-nothing behavior w.r.t. the
>> mixed-in remotes, i.e. an atomic sync.
>> This has the benefit of a clear start and end of the sync procedure
>> that the user might want to refer to.
>>
 Option #2: Sync would be orchestrated by core or the user so that
 multiple
 plugins (Remotes) create successive repository versions.  For example:
 the
 RPM plugin (Remote) and the Kickstart Tree plugin (Remote) would both be
 sync'd against the same remote repository that is a composition
 including
 both types.  The intermediate versions would be incomplete.  Only the
 last
 version contains the fully sync'd composition.  This approach can be
 supported by core today :) but will produce incomplete repository
 versions
 that are marked complete=True.  This /seems/ undesirable, right?  This
 may
 not be a problem for distribution since I would imagine that only the
 last
 (fully composed) version would be published.  But what about other
 usages of
 the repository's "latest" version?
>>
>> I'm afraid I don't see a use for an intermediate version, esp. in case of
>> failures; e.g. ostree failed to sync while rpm and kickstart
>> succeeded; is the sync OK as a whole? What to do with the versions
>> created? Should I merge the successes into one and retry the failure?
>> How many versions would this introduce?
>
>
> (option 2) The partial versions would be created in both normal and failure
> scenarios.  In the normal scenario, each plugin (Remote)
> creates a new version and only the last one is completed; the intermediate
> versions are always partial.

right, but is there a legitimate use of the intermediate versions?
if not, maybe Option #1 (atomic) is better

>
>>
 Option #3: requires a plugin to be aware of specific repository
 composition(s) and of other plugins, and creates a code dependency between
 plugins.  For example, the RPM plugin could delegate ISOs to the File plugin
 and Kickstart Trees to the KickStart Tree plugin.
>>
>> Do you mean that the RPM plug-in would directly call into the File
>> plug-in?
>> If that's the case then I don't like it much, would be a pain every
>> time a new plug-in would be introduced (O(len(plugin)^2) of updates)
>> or if the API of a plug-in changed (O(len(plugin)) updates).
>> Esp. keeping the plugin code aware of other plugin updates would be ugly.
>
>
> Agreed.  The plugins could install libs into site-packages which would at
> least mitigate the complexity 

Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Brian Bouterse
I agree these are specific cases of a few content types that are used by
multiple plugins. I think the most productive thing would be for us to talk
specifically about kickstart trees being shared between RPM and ostree.
It would be much easier to generalize after building something specific
once (I think).

A mentor I had once told me that all software that lives long enough goes
through 3 phases: (1) a concrete implementation, (2) generalizing that
implementation, and then (3) rewriting that implementation because of
everything you didn't know before. I'm advocating for us to think about the
problem as a specific plugin problem (step 1) and then, after that is done,
to look at generalizing it (step 2).

On Tue, May 15, 2018 at 11:27 AM, Bryan Kearney  wrote:

> On 05/14/2018 03:44 PM, Jeff Ortel wrote:
> > Let's brainstorm on something.
> >
> > Pulp needs to deal with remote repositories that are composed of
> > multiple content types which may span the domain of a single plugin.
> > Here are a few examples.  Some Red Hat RPM repositories are composed of:
> > RPMs, DRPMs, ISOs and Kickstart Trees.  Some OSTree repositories are
> > composed of OSTrees & Kickstart Trees. This raises a question:
> >
> > How can pulp3 best support syncing with remote repositories that are
> > composed of multiple (unrelated) content types in a way that doesn't
> > result in plugins duplicating support for content types?
> >
>
>
> Both these examples are cases of RPM repos, yes? If so, does this
> require a general purpose solution?
>
> -- bk
>
>
>


Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Jeff Ortel



On 05/15/2018 10:41 AM, Jeff Ortel wrote:



On 05/15/2018 10:27 AM, Bryan Kearney wrote:

On 05/14/2018 03:44 PM, Jeff Ortel wrote:

Let's brainstorm on something.

Pulp needs to deal with remote repositories that are composed of
multiple content types which may span the domain of a single plugin.
Here are a few examples.  Some Red Hat RPM repositories are composed of:
RPMs, DRPMs, ISOs and Kickstart Trees.  Some OSTree repositories are
composed of OSTrees & Kickstart Trees. This raises a question:

How can pulp3 best support syncing with remote repositories that are
composed of multiple (unrelated) content types in a way that doesn't
result in plugins duplicating support for content types?



Both these examples are cases of RPM repos, yes? If so, does this
require a general purpose solution?


The example in the thread is mainly RPM, but there are other 
repositories with shared content types.  E.g.: OSTree repositories also 
containing Kickstart Trees.


I also think there is value in not having the RPM plugin be a /mega/ 
plugin that knows how to deal with several complicated types of content 
(like in pulp2).  Making each plugin responsible for specific closely 
related types of content would make them more maintainable.






-- bk








Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Jeff Ortel



On 05/15/2018 10:27 AM, Bryan Kearney wrote:

On 05/14/2018 03:44 PM, Jeff Ortel wrote:

Let's brainstorm on something.

Pulp needs to deal with remote repositories that are composed of
multiple content types which may span the domain of a single plugin.
Here are a few examples.  Some Red Hat RPM repositories are composed of:
RPMs, DRPMs, ISOs and Kickstart Trees.  Some OSTree repositories are
composed of OSTrees & Kickstart Trees. This raises a question:

How can pulp3 best support syncing with remote repositories that are
composed of multiple (unrelated) content types in a way that doesn't
result in plugins duplicating support for content types?



Both these examples are cases of RPM repos, yes? If so, does this
require a general purpose solution?


The example in the thread is mainly RPM, but there are other repositories 
with shared content types.  E.g.: OSTree repositories also containing 
Kickstart Trees.




-- bk






Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Bryan Kearney
On 05/14/2018 03:44 PM, Jeff Ortel wrote:
> Let's brainstorm on something.
> 
> Pulp needs to deal with remote repositories that are composed of
> multiple content types which may span the domain of a single plugin. 
> Here are a few examples.  Some Red Hat RPM repositories are composed of:
> RPMs, DRPMs, ISOs and Kickstart Trees.  Some OSTree repositories are
> composed of OSTrees & Kickstart Trees. This raises a question: 
> 
> How can pulp3 best support syncing with remote repositories that are
> composed of multiple (unrelated) content types in a way that doesn't
> result in plugins duplicating support for content types?
> 


Both these examples are cases of RPM repos, yes? If so, does this
require a general purpose solution?

-- bk






Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Jeff Ortel



On 05/15/2018 09:29 AM, Milan Kovacik wrote:

Hi,

On Tue, May 15, 2018 at 3:22 PM, Dennis Kliban  wrote:

On Mon, May 14, 2018 at 3:44 PM, Jeff Ortel  wrote:

Let's brainstorm on something.

Pulp needs to deal with remote repositories that are composed of multiple
content types which may span the domain of a single plugin.  Here are a few
examples.  Some Red Hat RPM repositories are composed of: RPMs, DRPMs,
ISOs and Kickstart Trees.  Some OSTree repositories are composed of OSTrees
& Kickstart Trees. This raises a question:

How can pulp3 best support syncing with remote repositories that are
composed of multiple (unrelated) content types in a way that doesn't result
in plugins duplicating support for content types?

Few approaches come to mind:

1. Multiple plugins (Remotes) participate in the sync flow to produce a
new repository version.
2. Multiple plugins (Remotes) are sync'd successively each producing a new
version of a repository.  Only the last version contains the fully sync'd
composition.
3. Plugins share code.
4. Other?


Option #1: Sync would be orchestrated by core or the user so that multiple
plugins (Remotes) participate in populating a new repository version.  For
example: the RPM plugin (Remote) and the Kickstart Tree plugin (Remote)
would both be sync'd against the same remote repository that is composed of
both types.  The new repository version would be composed of the result of
both plugin (Remote) syncs.  To support this, we'd need to provide a way for
each plugin to operate seamlessly on the same (new) repository version.
Perhaps something internal to the RepositoryVersion.  The repository version
would not be marked "complete" until the last plugin (Remote) sync has
succeeded.  More complicated than #2 but results in only creating truly
complete versions or nothing.  No idea how this would work with current REST
API whereby plugins provide sync endpoints.


I like this approach because it allows the user to perform a single call to
the REST API and specify multiple "sync methods" to use to create a single
new repository version.

Same here, esp. if the goal is an all-or-nothing behavior w.r.t. the
mixed-in remotes, i.e. an atomic sync.
This has the benefit of a clear start and end of the sync procedure
that the user might want to refer to.


Option #2: Sync would be orchestrated by core or the user so that multiple
plugins (Remotes) create successive repository versions.  For example: the
RPM plugin (Remote) and the Kickstart Tree plugin (Remote) would both be
sync'd against the same remote repository that is a composition including
both types.  The intermediate versions would be incomplete.  Only the last
version contains the fully sync'd composition.  This approach can be
supported by core today :) but will produce incomplete repository versions
that are marked complete=True.  This /seems/ undesirable, right?  This may
not be a problem for distribution since I would imagine that only the last
(fully composed) version would be published.  But what about other usages of
the repository's "latest" version?

I'm afraid I don't see a use for an intermediate version, esp. in case of
failures; e.g. ostree failed to sync while rpm and kickstart
succeeded; is the sync OK as a whole? What to do with the versions
created? Should I merge the successes into one and retry the failure?
How many versions would this introduce?


(option 2) The partial versions would be created in both normal and 
failure scenarios.  In the normal scenario, each plugin 
(Remote) creates a new version and only the last one is completed; the 
intermediate versions are always partial.





Option #3: requires a plugin to be aware of specific repository
composition(s) and of other plugins, and creates a code dependency between
plugins.  For example, the RPM plugin could delegate ISOs to the File plugin
and Kickstart Trees to the KickStart Tree plugin.

Do you mean that the RPM plug-in would directly call into the File plug-in?
If that's the case then I don't like it much, would be a pain every
time a new plug-in would be introduced (O(len(plugin)^2) of updates)
or if the API of a plug-in changed (O(len(plugin)) updates).
Esp. keeping the plugin code aware of other plugin updates would be ugly.


Agreed.  The plugins could install libs into site-packages, which would 
at least mitigate the complexity of calling into each other through the 
pulp plugin framework, but I don't think it helps much. Even the rpm 
dependency is undesirable.





For all options, plugins (Remotes) need to limit sync to affect only those
content types within their domain.  For example, the RPM (Remote) sync
cannot add/remove ISO or KS Trees.

I am an advocate of some form of options #1 or #2.  Combining plugins
(Remotes) as needed to deal with arbitrary combinations within remote
repositories seems very powerful; does not impose complexity on plugin
writers; and does not introduce code dependencies between plugins.

Thoughts?
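
To make the contrast concrete, here is a toy model of the two orchestration
options (a hypothetical sketch; Repository, RepositoryVersion, and the remote
callables are illustrative stand-ins, not Pulp's actual plugin API):

```python
# Toy model contrasting Options #1 and #2; all names are illustrative.

class RepositoryVersion:
    def __init__(self, number):
        self.number = number
        self.complete = False
        self.content = set()

class Repository:
    def __init__(self):
        self.versions = []

    def new_version(self):
        version = RepositoryVersion(len(self.versions) + 1)
        self.versions.append(version)
        return version

def sync_option_1(repository, remotes):
    """Option #1: all remotes populate one new version; it is marked
    complete only after every plugin sync succeeds (all-or-nothing)."""
    version = repository.new_version()
    try:
        for remote in remotes:
            # each plugin adds only content types within its own domain
            version.content.update(remote())
    except Exception:
        repository.versions.remove(version)  # nothing is left behind
        raise
    version.complete = True
    return version

def sync_option_2(repository, remotes):
    """Option #2: each remote creates its own successive version; the
    intermediate versions are partial but still marked complete."""
    version = None
    content = set()
    for remote in remotes:
        version = repository.new_version()
        content.update(remote())
        version.content = set(content)
        version.complete = True
    return version
```

In the toy model, option #1 leaves either one complete version or nothing,
while option #2 leaves intermediate versions whose content is partial even
though they are marked complete.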


Re: [Pulp-dev] PUP5 -- Adopting the "Common Cure Rights Commitment" for Pulp Core

2018-05-15 Thread Brian Bouterse
@ipanova, I think of the core team as only maintaining pulp/pulp and
pulp/devel, so I limit the scope of this to those repos only. I think
pulp_rpm (or any plugin) could adopt the CCRC without a PUP by following
the "Displaying the CCRC" section in their own repo.

@dawalker, relicensing to GPLv3 is an alternative. It's not a bad option,
but it would be more complicated. Since every committer with even a single
line of current code is a copyright holder of the codebase, relicensing
would require 100% signoff from all copyright holders, which in practice
can be difficult. Also, someone may not use their email address anymore, so
it may not even be possible. I haven't assessed how many Pulp3 committers we
currently have for the Pulp3 codebase.

I was recently part of a relicensing which failed, but it shows what the
process looks like:
https://github.com/python-bugzilla/python-bugzilla/issues/25 If someone
wants to champion switching to GPLv3, create an issue like that, and get
all the signoffs, I'm not opposed to relicensing to GPLv3 instead of
adopting the CCRC.

On Mon, May 14, 2018 at 1:34 PM, Dana Walker  wrote:

> Other than the noted point that it takes time, is there any reason why
> Pulp should stay on the current license instead of moving to GPLv3 (one of
> the stated alternatives in this PUP)?  I don't know much about the
> differences currently, but it strikes me that our new Pulp 3 using Python 3
> would be a good fit for moving to a new license as well that has taken
> various things such as this enforcement issue into account and evolved over
> time.
>
> Thoughts?
>
> --Dana
>
> Dana Walker
>
> Associate Software Engineer
>
> Red Hat
>
>
> On Mon, May 14, 2018 at 6:28 AM, Ina Panova  wrote:
>
>> *understanding
>>
>>
>>
>> 
>> Regards,
>>
>> Ina Panova
>> Software Engineer| Pulp| Red Hat Inc.
>>
>> "Do not go where the path may lead,
>>  go instead where there is no path and leave a trail."
>>
>> On Mon, May 14, 2018 at 12:27 PM, Ina Panova  wrote:
>>
>>> To make a concrete example to prove my understating:
>>>
>>> Since pulp_rpm is maintained by core team we could adopt this change,
>>> meanwhile pulp_deb is beyond our control and we( core team) cannot enforce
>>> or influence this change.
>>> Yes?
>>>
>>>
>>>
>>> 
>>> Regards,
>>>
>>> Ina Panova
>>> Software Engineer| Pulp| Red Hat Inc.
>>>
>>> "Do not go where the path may lead,
>>>  go instead where there is no path and leave a trail."
>>>
>>> On Tue, May 8, 2018 at 5:55 PM, Brian Bouterse 
>>> wrote:
>>>
 A Pulp Update Proposal (PUP) pull request has been opened by the
 go-to-lawyer for the Pulp community, Richard Fontana. The PUP is PUP5 [0].
 I don't want to paraphrase it here, so please read it [0] if you are
 interested to understand what it does.

 I am proposing a period of questions/discussion via the list/PR and
 then a call for a vote according to the process. All questions are welcome,
 please ask.


 # Timeline

 Today - May 18th mailing list and PR discussion
 May 18th - formally call for a vote which would end 12 calendar days
 from then May 30th
 May 30th - Merge or reject


 # FAQs

 Is this relicensing Pulp?
 No. It's still GPLv2. This adopts a procedural enforcement approach
 within the existing license. See @rfontana's response here:
 https://github.com/pulp/pups/pull/9#issuecomment-384523020

 Do all prior contributors need to sign off on this change?
 No, because it's not a relicensing.

 Does this affect core, plugins, or both?
 This PUP is only scoped to affect the GPLv2 codebases maintained by the
 core team. Plugins make their own decisions without PUPs. Initially this
 would be pulp/pulp, and as other GPLv2 repositories come to be maintained
 by the core team, it would apply to them in the future as well.


 [0]: https://github.com/pulp/pups/pull/9/files

 Thanks,
 Brian



>>>
>>
>>
>


Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Jeff Ortel



On 05/15/2018 05:58 AM, Austin Macdonald wrote:

Here's another complexity: how do 2 plugins create a single publication?


The plugin API could make this seamless.

We basically have the same problem of 2 parallel operations creating 
content from a single source.


I don't think so.  Plugins should not manipulate content outside of 
their domain (other plugins' content), so either serial or parallel should 
be safe.




On Tue, May 15, 2018, 06:27 Ina Panova wrote:


+1 on not introducing dependencies between plugins.

What will be the behavior in case there is a composed repo of rpm
and ks trees but just the rpm plugin is installed?

Do we fail and say we cannot sync this repo at all or we just sync
the rpm part?


Assuming plugins do not depend on each other, I think that when each 
plugin looks at the upstream repo, they will only "see" the content of 
that type. Conceptually, we will have 2 remotes, so it will feel like 
we are syncing from 2 totally distinct repositories.


The solution I've been imagining is a lot like 2. Each plugin would 
sync to a *separate repository.* These separate repositories are then 
published creating *separate publications*. This approach allows the 
plugins to live completely in ignorance of each other.


The final step is to associate *both publications to one 
distribution*, which composes the publications as they are served.
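
A toy model of that composition step (illustrative only; in this sketch a
publication is simply a mapping from relative path to artifact, which is not
how Pulp represents publications):

```python
# Illustrative sketch: one distribution serving paths out of several
# publications, first match wins. Names are made up for this example.
class ComposedDistribution:
    def __init__(self, base_path, publications):
        self.base_path = base_path
        self.publications = publications  # e.g. [rpm_pub, kstree_pub]

    def resolve(self, rel_path):
        """Look up rel_path across all associated publications."""
        for publication in self.publications:
            if rel_path in publication:
                return publication[rel_path]
        raise KeyError(rel_path)
```

Serving-time composition like this is what lets each plugin publish in
ignorance of the other, at the cost of the publications not being locked
together as one unit.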


The downside is that we have to sync and publish twice, and that the 
resulting versions and publications aren't locked together. But I 
think this is better than leaving versions and publications unfinished 
with the assumption that another plugin will finish the job. Maybe 
linking them together could be a good use of the notes field.


Pulp should support repositories with composed (mixed) content for the 
same reason RH does.  The repository is a collection of content that 
users want to manage together.  Consider the promotion cases: dev, test, 
prod.





Depends how we plan this ^ i guess we'll decide which option 1 or
2 fits better.

Don't want to go wild, but what if the notion of composed repos becomes
so popular in the future that their number increases? I think we do
want to be able to at least partially sync them rather than take an
all-or-nothing approach?

#2 speaks to me more for now.





Regards,

Ina Panova
Software Engineer| Pulp| Red Hat Inc.

"Do not go where the path may lead,
 go instead where there is no path and leave a trail."

On Mon, May 14, 2018 at 9:44 PM, Jeff Ortel <jor...@redhat.com> wrote:

Let's brainstorm on something.

Pulp needs to deal with remote repositories that are composed
of multiple content types which may span the domain of a
single plugin.  Here are a few examples.  Some Red Hat RPM
repositories are composed of: RPMs, DRPMs, ISOs and
Kickstart Trees.  Some OSTree repositories are composed of
OSTrees & Kickstart Trees. This raises a question:

How can pulp3 best support syncing with remote repositories
that are composed of multiple (unrelated) content types in a
way that doesn't result in plugins duplicating support for
content types?

Few approaches come to mind:

1. Multiple plugins (Remotes) participate in the sync flow to
produce a new repository version.
2. Multiple plugins (Remotes) are sync'd successively each
producing a new version of a repository.  Only the last
version contains the fully sync'd composition.
3. Plugins share code.
4. Other?


Option #1: Sync would be orchestrated by core or the user so
that multiple plugins (Remotes) participate in populating a
new repository version.  For example: the RPM plugin (Remote)
and the Kickstart Tree plugin (Remote) would both be sync'd
against the same remote repository that is composed of both
types.  The new repository version would be composed of the
result of both plugin (Remote) syncs.  To support this, we'd
need to provide a way for each plugin to operate seamlessly on
the same (new) repository version.  Perhaps something internal
to the RepositoryVersion.  The repository version would not be
marked "complete" until the last plugin (Remote) sync has
succeeded.  More complicated than #2 but results in only
creating truly complete versions or nothing.  No idea how this
would work with current REST API whereby plugins provide sync
endpoints.

Option #2: Sync would be orchestrated by core or the user so
that multiple plugins (Remotes) create successive repository
versions.  For example: the RPM plugin (Remote) and the
Kickstart Tree plugin (Remote) would both be sync'd against
the same remote repository that is a c

Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Jeff Ortel



On 05/15/2018 05:26 AM, Ina Panova wrote:

+1 on not introducing dependencies between plugins.

What will be the behavior in case there is a composed repo of rpm and 
ks trees but just the rpm plugin is installed?


I would expect the result would be to only sync the rpm content into the 
pulp repository.


Do we fail and say we cannot sync this repo at all or we just sync the 
rpm part?


No, I think it would be expected to succeed since the user has only 
installed the rpm plugin and requested that only rpm content be sync'd.  
The remote repository is composed of multiple content types out of 
convenience for managing the content.  Pulp should not be bound to the 
organization of remote repositories.




Depends how we plan this ^ i guess we'll decide which option 1 or 2 
fits better.


Don't want to go wild, but what if the notion of composed repos becomes 
so popular in the future that their number increases? I think we do 
want to be able to at least partially sync them rather than take an 
all-or-nothing approach?


#2 speaks to me more for now.


#2 will create repository versions with partial content which are 
complete=True.  Given users can choose which version to publish, do you 
see this as a problem?  What about cases where the "latest" version is, 
at times, partial?








Regards,

Ina Panova
Software Engineer| Pulp| Red Hat Inc.

"Do not go where the path may lead,
 go instead where there is no path and leave a trail."

On Mon, May 14, 2018 at 9:44 PM, Jeff Ortel wrote:


Let's brainstorm on something.

Pulp needs to deal with remote repositories that are composed of
multiple content types which may span the domain of a single
plugin.  Here are a few examples. Some Red Hat RPM repositories
are composed of: RPMs, DRPMs, ISOs and Kickstart Trees.  Some
OSTree repositories are composed of OSTrees & Kickstart Trees.
This raises a question:

How can pulp3 best support syncing with remote repositories that
are composed of multiple (unrelated) content types in a way that
doesn't result in plugins duplicating support for content types?

Few approaches come to mind:

1. Multiple plugins (Remotes) participate in the sync flow to
produce a new repository version.
2. Multiple plugins (Remotes) are sync'd successively each
producing a new version of a repository.  Only the last version
contains the fully sync'd composition.
3. Plugins share code.
4. Other?


Option #1: Sync would be orchestrated by core or the user so that
multiple plugins (Remotes) participate in populating a new
repository version.  For example: the RPM plugin (Remote) and the
Kickstart Tree plugin (Remote) would both be sync'd against the
same remote repository that is composed of both types.  The new
repository version would be composed of the result of both plugin
(Remote) syncs.  To support this, we'd need to provide a way for
each plugin to operate seamlessly on the same (new) repository
version.  Perhaps something internal to the RepositoryVersion. 
The repository version would not be marked "complete" until the
last plugin (Remote) sync has succeeded.  More complicated than #2
but results in only creating truly complete versions or nothing. 
No idea how this would work with current REST API whereby plugins
provide sync endpoints.

Option #2: Sync would be orchestrated by core or the user so that
multiple plugins (Remotes) create successive repository versions. 
For example: the RPM plugin (Remote) and the Kickstart Tree plugin
(Remote) would both be sync'd against the same remote repository
that is a composition including both types.  The intermediate
versions would be incomplete. Only the last version contains the
fully sync'd composition.  This approach can be supported by core
today :) but will produce incomplete repository versions that are
marked complete=True.  This /seems/ undesirable, right? This may
not be a problem for distribution since I would imagine that only
the last (fully composed) version would be published.  But what
about other usages of the repository's "latest" version?

Option #3: requires a plugin to be aware of specific repository
composition(s) and of other plugins, and creates a code dependency
between plugins.  For example, the RPM plugin could delegate ISOs
to the File plugin and Kickstart Trees to the Kickstart Tree plugin.

For all options, plugins (Remotes) need to limit sync to affect
only those content types within their domain. For example, the RPM
(Remote) sync cannot add/remove ISO or KS Trees.

I am an advocate of some form of options #1 or #2. Combining
plugins (Remotes) as needed to deal with arbitrary combinations
within remote repositories seems very powerful; does not impose
complexity on plugin writers; and does not introduce code
dependencies between plugins.
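For illustration, the complete/incomplete semantics that distinguish options #1 and #2 can be modeled in a few lines of Python. This is a toy sketch, not the real pulpcore API; all class and attribute names here are invented for the example:

```python
# Toy model (not the real Pulp API) contrasting Option #1 and Option #2.

class RepositoryVersion:
    def __init__(self, number, content=None):
        self.number = number
        self.content = set(content or ())
        self.complete = False

class Repository:
    def __init__(self):
        self.versions = []

    @property
    def latest(self):
        # "latest" as a user would see it: the newest complete version
        complete = [v for v in self.versions if v.complete]
        return complete[-1] if complete else None

    def new_version(self):
        base = self.versions[-1].content if self.versions else set()
        version = RepositoryVersion(len(self.versions) + 1, base)
        self.versions.append(version)
        return version

# Option #1: both remotes populate ONE pending version; it is only
# marked complete after the last sync succeeds, so users never see
# a partial "latest" version.
repo1 = Repository()
pending = repo1.new_version()
pending.content |= {"pkg.rpm"}   # RPM remote sync
pending.content |= {"ks-tree"}   # Kickstart Tree remote sync
pending.complete = True          # core marks completion at the very end
assert repo1.latest.content == {"pkg.rpm", "ks-tree"}

# Option #2: each remote creates its own version, each complete=True,
# so intermediate version 1 is "complete" yet only partially synced.
repo2 = Repository()
v1 = repo2.new_version(); v1.content |= {"pkg.rpm"}; v1.complete = True
v2 = repo2.new_version(); v2.content |= {"ks-tree"}; v2.complete = True
assert repo2.versions[0].complete and repo2.versions[0].content == {"pkg.rpm"}
assert repo2.latest.content == {"pkg.rpm", "ks-tree"}
```

The asserts make the trade-off concrete: in the second model, anything that consumes the repository's "latest" version between the two syncs would see a version that claims to be complete but lists only the RPM content.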

Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Milan Kovacik
Hi,

On Tue, May 15, 2018 at 3:22 PM, Dennis Kliban  wrote:
> On Mon, May 14, 2018 at 3:44 PM, Jeff Ortel  wrote:
>>
>> Let's brainstorm on something.
>>
>> Pulp needs to deal with remote repositories that are composed of multiple
>> content types which may span the domain of a single plugin.  Here are a few
>> examples.  Some Red Hat RPM repositories are composed of: RPMs, DRPMs,
>> ISOs and Kickstart Trees.  Some OSTree repositories are composed of OSTrees
>> & Kickstart Trees. This raises a question:
>>
>> How can pulp3 best support syncing with remote repositories that are
>> composed of multiple (unrelated) content types in a way that doesn't result
>> in plugins duplicating support for content types?
>>
>> Few approaches come to mind:
>>
>> 1. Multiple plugins (Remotes) participate in the sync flow to produce a
>> new repository version.
>> 2. Multiple plugins (Remotes) are sync'd successively each producing a new
>> version of a repository.  Only the last version contains the fully sync'd
>> composition.
>> 3. Plugins share code.
>> 4. Other?
>>
>>
>> Option #1: Sync would be orchestrated by core or the user so that multiple
>> plugins (Remotes) participate in populating a new repository version.  For
>> example: the RPM plugin (Remote) and the Kickstart Tree plugin (Remote)
>> would both be sync'd against the same remote repository that is composed of
>> both types.  The new repository version would be composed of the result of
>> both plugin (Remote) syncs.  To support this, we'd need to provide a way for
>> each plugin to operate seamlessly on the same (new) repository version.
>> Perhaps something internal to the RepositoryVersion.  The repository version
>> would not be marked "complete" until the last plugin (Remote) sync has
>> succeeded.  More complicated than #2 but results in only creating truly
>> complete versions or nothing.  No idea how this would work with current REST
>> API whereby plugins provide sync endpoints.
>>
>
> I like this approach because it allows the user to perform a single call to
> the REST API and specify multiple "sync methods" to use to create a single
> new repository version.

Same here, esp. if the goal is an all-or-nothing behavior w.r.t. the
mixed-in remotes, i.e. an atomic sync.
This has the benefit of a clear start and end of the sync procedure
that the user might want to refer to.

>
>>
>> Option #2: Sync would be orchestrated by core or the user so that multiple
>> plugins (Remotes) create successive repository versions.  For example: the
>> RPM plugin (Remote) and the Kickstart Tree plugin (Remote) would both be
>> sync'd against the same remote repository that is a composition including
>> both types.  The intermediate versions would be incomplete.  Only the last
>> version contains the fully sync'd composition.  This approach can be
>> supported by core today :) but will produce incomplete repository versions
>> that are marked complete=True.  This /seems/ undesirable, right?  This may
>> not be a problem for distribution since I would imagine that only the last
>> (fully composed) version would be published.  But what about other usages of
>> the repository's "latest" version?

I'm afraid I don't see the use of an intermediate version, esp. in
case of failures; e.g. OSTree failed to sync while RPM and Kickstart
succeeded. Is the sync OK as a whole? What to do with the versions
created? Should I merge the successes into one and retry the failure?
How many versions would this introduce?

>>
>> Option #3: requires a plugin to be aware of specific repository
>> composition(s); other plugins and creates a code dependency between plugins.
>> For example, the RPM plugin could delegate ISOs to the File plugin and
>> Kickstart Trees to the KickStart Tree plugin.

Do you mean that the RPM plug-in would directly call into the File plug-in?
If that's the case, I don't like it much; it would be a pain every
time a new plug-in is introduced (O(len(plugins)^2) updates)
or the API of a plug-in changes (O(len(plugins)) updates).
Especially keeping plugin code aware of other plugins' updates would be ugly.

>>
>> For all options, plugins (Remotes) need to limit sync to affect only those
>> content types within their domain.  For example, the RPM (Remote) sync
>> cannot add/remove ISO or KS Trees.
>>
>> I am an advocate of some form of options #1 or #2.  Combining plugins
>> (Remotes) as needed to deal with arbitrary combinations within remote
>> repositories seems very powerful; does not impose complexity on plugin
>> writers; and does not introduce code dependencies between plugins.
>>
>> Thoughts?
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>

Cheers,
milan

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev

Re: [Pulp-dev] MODIFIED redmine issues for 3.0 beta

2018-05-15 Thread Robin Chan
A few use cases where it would be useful for Pulp 3 issues released in
a beta to be considered closed:
1. Sprint content.
2. Your own queries on what is open & assigned to you.
 - The query can just be changed.
3. A user seeing an issue in Pulp 3 (or Pulp 2) can easily find which
functionality is available to them and which is not yet available.

As I understand it, this is only a change for Pulp 3 items? It may be
useful to keep the behaviour consistent in redmine.

On Tue, May 15, 2018 at 6:03 AM, Ina Panova  wrote:
> If i am not mistaken as of now the person who takes care of a release needs
> to manually change the issue status.
> There is a possibility to select all the issues and actually move the status
> in 1 click. Do you think automating this will pay off?
>
> I am fine to close issues as current release in case we set the target
> release specifically Beta, otherwise it sounds like it brings some
> confusion.
> There is a big period between Beta and GA and you never know what can happen
> to those set issues especially if they are targeted as GA.
>
>
>
> 
> Regards,
>
> Ina Panova
> Software Engineer| Pulp| Red Hat Inc.
>
> "Do not go where the path may lead,
>  go instead where there is no path and leave a trail."
>
> On Mon, May 14, 2018 at 7:03 PM, Dennis Kliban  wrote:
>>
>> Historically we would transition issues in Pulp's issue tracker from
>> MODIFIED to CLOSED - CURRENTRELEASE when a GA build went out the door. The
>> same approach is working against us for the duration of the Pulp 3.0 beta.
>> We have a large number of issues in MODIFIED state, but they are considered
>> released.
>>
>> I propose that we transition issues to CLOSED - CURRENTRELEASE when they
>> have been shipped with a 3.0 beta.
>>
>> Could we add automation to do this at release time?
>>
>> What do you all think?
>>
>>
>> Thanks,
>> Dennis
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] MODIFIED redmine issues for 3.0 beta

2018-05-15 Thread Dennis Kliban
On Tue, May 15, 2018 at 6:03 AM, Ina Panova  wrote:

> If i am not mistaken as of now the person who takes care of a release
> needs to manually change the issue status.
> There is a possibility to select all the issues and actually move the
> status in 1 click. Do you think automating this will pay off?
>
>
We are currently automating the releasing of pulpcore and pulp_file using
Travis. I was hoping to add this issue update step to that automation.


> I am fine to close issues as current release in case we set the target
> release specifically Beta, otherwise it sounds like it brings some
> confusion.
> There is a big period between Beta and GA and you never know what can
> happen to those set issues especially if they are targeted as GA.
>

I don't think we've been setting the target release field for the pulp 3
work. We could start doing that at release time. The value would be the
version of the beta release. That should reduce confusion.
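If the issue transition were automated in the Travis release job, it could be a small script against Redmine's REST API (Redmine does support `GET /issues.json` and `PUT /issues/<id>.json`). A rough sketch; the status ids and the target-version id below are placeholders that would have to be looked up in the actual tracker configuration:

```python
import json
import urllib.request

REDMINE = "https://pulp.plan.io"   # Pulp's Redmine instance
CLOSED_CURRENTRELEASE = 8          # status ids are per-instance; this
                                   # value is a placeholder, not verified

def close_payload(beta_version_id):
    """JSON body for Redmine's PUT /issues/<id>.json transition call.

    fixed_version_id is Redmine's "Target version" field, which would
    carry the beta release number as discussed above.
    """
    return {"issue": {"status_id": CLOSED_CURRENTRELEASE,
                      "fixed_version_id": beta_version_id}}

def close_issue(api_key, issue_id, beta_version_id):
    """Move one MODIFIED issue to CLOSED - CURRENTRELEASE."""
    req = urllib.request.Request(
        f"{REDMINE}/issues/{issue_id}.json",
        data=json.dumps(close_payload(beta_version_id)).encode(),
        method="PUT",
        headers={"Content-Type": "application/json",
                 "X-Redmine-API-Key": api_key},
    )
    urllib.request.urlopen(req)  # raises on HTTP errors
```

The release job would first query the MODIFIED issues for the project and then call `close_issue` for each, so the status change and the target-release stamp happen in the same automated step.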



>
>
>
> 
> Regards,
>
> Ina Panova
> Software Engineer| Pulp| Red Hat Inc.
>
> "Do not go where the path may lead,
>  go instead where there is no path and leave a trail."
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Dennis Kliban
On Mon, May 14, 2018 at 3:44 PM, Jeff Ortel  wrote:

> Let's brainstorm on something.
>
> Pulp needs to deal with remote repositories that are composed of multiple
> content types which may span the domain of a single plugin.  Here are a few
> examples.  Some Red Hat RPM repositories are composed of: RPMs, DRPMs,
> ISOs and Kickstart Trees.  Some OSTree repositories are composed of OSTrees
> & Kickstart Trees. This raises a question:
>
> How can pulp3 best support syncing with remote repositories that are
> composed of multiple (unrelated) content types in a way that doesn't result
> in plugins duplicating support for content types?
>
> Few approaches come to mind:
>
> 1. Multiple plugins (Remotes) participate in the sync flow to produce a
> new repository version.
> 2. Multiple plugins (Remotes) are sync'd successively each producing a new
> version of a repository.  Only the last version contains the fully sync'd
> composition.
> 3. Plugins share code.
> 4. Other?
>
>
> Option #1: Sync would be orchestrated by core or the user so that multiple
> plugins (Remotes) participate in populating a new repository version.  For
> example: the RPM plugin (Remote) and the Kickstart Tree plugin (Remote)
> would both be sync'd against the same remote repository that is composed of
> both types.  The new repository version would be composed of the result of
> both plugin (Remote) syncs.  To support this, we'd need to provide a way
> for each plugin to operate seamlessly on the same (new) repository
> version.  Perhaps something internal to the RepositoryVersion.  The
> repository version would not be marked "complete" until the last plugin
> (Remote) sync has succeeded.  More complicated than #2 but results in only
> creating truly complete versions or nothing.  No idea how this would work
> with current REST API whereby plugins provide sync endpoints.
>
>
I like this approach because it allows the user to perform a single call to
the REST API and specify multiple "sync methods" to use to create a single
new repository version.


___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Austin Macdonald
Here's another complexity: how do 2 plugins create a single publication? We
basically have the same problem of 2 parallel operations creating content
from a single source.

On Tue, May 15, 2018, 06:27 Ina Panova  wrote:

> +1 on not introducing dependencies between plugins.
>
> What will be the behavior in case there is a composed repo of rpm and ks
> trees but just the rpm plugin is installed?
>
> Do we fail and say we cannot sync this repo at all or we just sync the rpm
> part?
>

Assuming plugins do not depend on each other, I think that when each plugin
looks at the upstream repo, it will only "see" the content of its own type.
Conceptually, we will have 2 remotes, so it will feel like we are syncing
from 2 totally distinct repositories.

The solution I've been imagining is a lot like 2. Each plugin would sync to
a *separate repository.* These separate repositories are then published
creating *separate publications*. This approach allows the plugins to live
completely in ignorance of each other.

The final step is to associate *both publications with one distribution*,
which composes the publications as they are served.

The downside is that we have to sync and publish twice, and that the
resulting versions and publications aren't locked together. But I think
this is better than leaving versions and publications unfinished with the
assumption that another plugin will finish the job. Maybe linking them
together could be a good use of the notes field.
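A toy sketch of what "composing publications at the distribution" could mean when content is served. None of these names are real Pulp 3 API; this only models the path lookup, with each publication reduced to a map from relative path to artifact:

```python
# Hypothetical model: two independent publications (one per plugin),
# combined only at the distribution that serves them. Paths and
# artifact ids below are made up for the example.

rpm_publication = {"Packages/foo-1.0.rpm": "artifact-rpm-123"}
ks_publication = {"images/pxeboot/vmlinuz": "artifact-ks-456"}

class Distribution:
    def __init__(self, base_path, publications):
        self.base_path = base_path
        self.publications = publications  # consulted in order

    def resolve(self, rel_path):
        # The first publication that knows the path wins; later ones
        # could shadow earlier ones, which is a design decision to make
        # when two plugins publish overlapping paths.
        for pub in self.publications:
            if rel_path in pub:
                return pub[rel_path]
        raise KeyError(rel_path)

dist = Distribution("el7/composed", [rpm_publication, ks_publication])
assert dist.resolve("Packages/foo-1.0.rpm") == "artifact-rpm-123"
assert dist.resolve("images/pxeboot/vmlinuz") == "artifact-ks-456"
```

This keeps the plugins fully ignorant of each other, at the cost of defining precedence rules for overlapping paths and any shared metadata files.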


> Depends how we plan this ^ i guess we'll decide which option 1 or 2 fits
> better.
>
> Don't want to go wild, but what if the notion of composed repos becomes
> so popular in the future that their number increases? I think we do want
> to be able to at least partially sync them rather than take an
> all-or-nothing approach.
>
> #2 speaks to me more for now.
>
>
>
>
> 
> Regards,
>
> Ina Panova
> Software Engineer| Pulp| Red Hat Inc.
>
> "Do not go where the path may lead,
>  go instead where there is no path and leave a trail."
>

Re: [Pulp-dev] Composed Repositories

2018-05-15 Thread Ina Panova
+1 on not introducing dependencies between plugins.

What will be the behavior in case there is a composed repo of rpm and ks
trees but just the rpm plugin is installed?
Do we fail and say we cannot sync this repo at all, or do we just sync
the rpm part?

Depending on how we plan this ^, I guess we'll decide which option, 1 or
2, fits better.

Don't want to go wild, but what if the notion of composed repos becomes
so popular in the future that their number increases? I think we do want
to be able to at least partially sync them rather than take an
all-or-nothing approach.

#2 speaks to me more for now.





Regards,

Ina Panova
Software Engineer| Pulp| Red Hat Inc.

"Do not go where the path may lead,
 go instead where there is no path and leave a trail."

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] MODIFIED redmine issues for 3.0 beta

2018-05-15 Thread Ina Panova
If I am not mistaken, as of now the person who takes care of a release
needs to change the issue status manually.
There is a possibility to select all the issues and move the status in
one click. Do you think automating this will pay off?

I am fine with closing issues as CURRENTRELEASE in case we set the target
release specifically to the beta; otherwise it sounds like it brings some
confusion.
There is a big period between beta and GA, and you never know what can
happen to those issues, especially if they are targeted at GA.




Regards,

Ina Panova
Software Engineer| Pulp| Red Hat Inc.

"Do not go where the path may lead,
 go instead where there is no path and leave a trail."

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev