Re: [Pulp-dev] 2.10.0 is blocked

2016-08-24 Thread Dennis Kliban
It's still a bug. The fix was incomplete. We are in the same position we were 
in yesterday. If we want a beta sooner than next week, then I can work on this 
first thing in the morning. QE should be able to test it in the afternoon. 

-Dennis

- Original Message -
> I think pushing out a beta until next week would be challenging. Since
> it sounds like this is a feature that needs additional development work,
> does this no longer represent a blocker, but instead a feature change that
> should be included in the next release?
> 
> ~ Jen
> 
> 
> > On Aug 24, 2016, at 4:24 PM, Brian Bouterse  wrote:
> >
> > The original problem reported was fixed, but there were additional SELinux
> > denials after that one. @dkliban and I have a plan [0]. He is able to do
> > it, or I am happy to do it when I return on Tuesday.
> >
> > [0]: https://pulp.plan.io/issues/2199#note-4
> >
> > -Brian
> >
> >
> >> On 08/24/2016 02:51 PM, Elyezer Rezende wrote:
> >> The new build was built and tested but the issue continues.
> >>
> >> Jeremy is following that closely and provided [1] more information about
> >> what he found.
> >>
> >> Brian is taking a look and Dennis will take it over if Brian can't get
> >> it fixed before his PTO (Thursday and Friday).
> >>
> >> [1] https://pulp.plan.io/issues/2199#note-3
> >>
> >> On Tue, Aug 23, 2016 at 7:01 PM, Michael Hrivnak wrote:
> >>
> >>Outcomes from a meeting today about #2199:
> >>
> >>2.10.0 is blocked on https://pulp.plan.io/issues/2199
> >>
> >>
> >>The fix was merged to 2.10-dev yesterday.
> >>
> >>Next steps:
> >>1. Dennis and Elyezer will get a nightly build tested with smash
> >>2. either Dennis or Sean will build a beta 3
> >>3. The beta cycle starts over with beta 3
> >>
> >>Michael
> >>
> >>___
> >>Pulp-dev mailing list
> >>Pulp-dev@redhat.com 
> >>https://www.redhat.com/mailman/listinfo/pulp-dev
> >>
> >>
> >>
> >>
> >>
> >> --
> >> Elyézer Rezende
> >> Senior Quality Engineer
> >> irc: elyezer
> >>
> >>
> >> ___
> >> Pulp-dev mailing list
> >> Pulp-dev@redhat.com
> >> https://www.redhat.com/mailman/listinfo/pulp-dev
> >
> > ___
> > Pulp-dev mailing list
> > Pulp-dev@redhat.com
> > https://www.redhat.com/mailman/listinfo/pulp-dev
> 
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
> 

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] pulp3: task - worker relationship

2016-11-17 Thread Dennis Kliban
- Original Message -
> +1 to taking an action on this. The SET_NULL approach sounds fine to me for
> now. It is so simple. It does not help with the later log analysis though
> which I do think is useful, but maybe not something we need to facilitate
> with the MVP.
> 
> To brainstorm another idea, what if instead of deleting workers, we keep
> those records for much longer. With the same reasoning as Task, it would be
> useful to post-mortem analyze when workers come online and go offline, for
> example. FYI, the Worker table is exclusively managed by pulp_celerybeat. We
> could introduce an online boolean to the Worker model and update
> pulp_celerybeat to mark workers as online/offline instead of deleting them.
> I don't think this would be difficult to do or get right. It would solve the
> issue of the cascading deletes, provide the Task analysis use case, and
> provide the Worker analysis use case too. I would rather do this than add an
> additional field to Task.
> 
> I would be fine with either of ^ approaches, but I hope we don't add an
> additional field to Task. We could use SET_NULL for the 3.0 MVP and save
> this as a future refactor/bugfix. It's probably a bug for that field to
> become NULL when a worker is deleted. What do others think about this?

I would prefer that we add a boolean to the Worker model that indicates the 
online/offline state. 
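The trade-off between the two options can be sketched in plain Python (in the actual Django models this would be `on_delete=models.SET_NULL` versus a `BooleanField`; all names and structures below are illustrative, not Pulp's real schema):

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Worker:
    name: str
    online: bool = True  # proposed flag, maintained by pulp_celerybeat


@dataclass
class Task:
    id: int
    worker: Optional[str]  # which worker the task ran on (FK in the real model)


def delete_worker_set_null(workers: Dict[str, Worker],
                           tasks: List[Task], name: str) -> None:
    """Option 1: delete the Worker row; SET_NULL nulls the task's reference,
    so the 'which worker ran this task?' history is lost."""
    workers.pop(name, None)
    for task in tasks:
        if task.worker == name:
            task.worker = None


def mark_worker_offline(workers: Dict[str, Worker], name: str) -> None:
    """Option 2: keep the row and flip the boolean, preserving task history
    and enabling post-mortem analysis of worker availability."""
    if name in workers:
        workers[name].online = False
```

With option 1 the task row survives but its worker reference is gone; with option 2 both the task history and the worker on/offline history remain queryable.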

> 
> Thanks for bringing this up; we need to take some action.
> 
> -Brian
> 
> 
> 
> On Thu, Nov 17, 2016 at 10:03 AM, Jeff Ortel < jor...@redhat.com > wrote:
> 
> 
> 
> 
> On 11/16/2016 05:27 PM, Sean Myers wrote:
> > On 11/16/2016 05:28 PM, Michael Hrivnak wrote:
> >> Options:
> >> - We could set the policy to SET_NULL. When the worker entry gets deleted,
> >> the task would simply lose its record of which worker it ran on.
> > 
> > +1 to this.
> +1
> 
> > 
> > Since the worker no longer exists in that scenario, I don't think we lose
> > any
> > data there, right? A reference to a nonexistent worker is as good as NULL.
> > Do
> > we need to add a task scrubber to find tasks with NULL workers and make
> > sure
> > they get reassigned? We could also use SET() here, and pass it a callable
> > that
> > sets it to an extant worker pk, but at the moment I think I prefer
> > SET_NULL.
> > 
> > ___
> > Pulp-dev mailing list
> > Pulp-dev@redhat.com
> > https://www.redhat.com/mailman/listinfo/pulp-dev
> > 
> 
> 
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
> 
> 
> 
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
> 

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] pulp-manage-db bug blocking 2.11.0

2016-12-07 Thread Dennis Kliban
I agree that we should remove this feature for the 2.11.0 release. 
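For reference, the check being discussed amounts to something like the sketch below (the `scheduler@` prefix and record shape are assumptions for illustration; Pulp's actual worker records differ):

```python
CELERYBEAT_PREFIX = "scheduler@"  # assumed naming for the celerybeat record


def blocking_workers(worker_names):
    """Return worker records that should halt pulp-manage-db.

    Excludes the celerybeat scheduler's own record, since stopping
    pulp_celerybeat before pulp_workers leaves stale rows behind that
    would otherwise trigger a false positive.
    """
    return [n for n in worker_names if not n.startswith(CELERYBEAT_PREFIX)]


def check_or_exit(worker_names, ignore_running_workers=False):
    """Error out (no interactive prompt) when workers appear to be running."""
    running = blocking_workers(worker_names)
    if running and not ignore_running_workers:
        raise SystemExit(
            "Migration halted: workers still running: %s. Stop all workers "
            "or re-run with --ignore-running-workers." % ", ".join(running))
```

This matches the proposal later in the thread: skip celerybeat's record, and replace the y/N prompt with a hard error plus the `--ignore-running-workers` escape hatch.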

- Original Message -
> Our initial assumption of pulp_workers records being cleaned up when
> pulp_celerybeat is down is false.
> The pulp_workers clean up [0] is being done in celerybeat [1] and not with a
> SIGTERM handler.
> 
> What this means is that if `systemctl stop pulp_celerybeat` is run before
> `systemctl stop pulp_workers` our current pulp-manage-db logic will
> erroneously display the user prompt.
> Since this is the case I think we should remove the pulp-manage-db running
> worker detection feature for this release (but keep the celerybeat cleanup)
> and look into other solutions.
> 
> 
> [0]
> https://github.com/pulp/pulp/blob/master/server/pulp/server/async/worker_watcher.py#L85-L105
> [1]
> https://github.com/pulp/pulp/blob/master/server/pulp/server/async/scheduler.py#L75
> 
> On Wed, Dec 7, 2016 at 11:20 AM, Brian Bouterse < bbout...@redhat.com >
> wrote:
> 
> 
> 
> +1 to reopening 2468 and excluding pulp_celerybeat records from the check,
> and holding 2.11 until this is resolved
> 
> Note that ^ would allow us to remove the known issues problem from the
> release notes which should also be done[0]
> 
> +1 to removing the y/N interactive prompt which would also allow us to close
> this PR [1].
> 
> Also, we should close 2472 as NOTABUG or WORKSFORME as I commented on here
> [2].
> 
> [0]:
> https://github.com/pulp/pulp/pull/2878/files#diff-6852a97801e832e280bae8ad6507338aR34
> [1]: https://github.com/pulp/pulp/pull/2874
> [2]: https://pulp.plan.io/issues/2472#note-8
> 
> On Wed, Dec 7, 2016 at 10:02 AM, Michael Hrivnak < mhriv...@redhat.com >
> wrote:
> 
> 
> 
> We've re-opened issue #2468, and Bihan is going to make the PR that
> implements this change. If there are any additional questions or concerns,
> please bring them up ASAP.
> 
> https://pulp.plan.io/issues/2468
> 
> Thanks!
> Michael
> 
> On Wed, Dec 7, 2016 at 9:51 AM, Sean Myers < sean.my...@redhat.com > wrote:
> 
> 
> On 12/07/2016 08:59 AM, Bihan Zhang wrote:
> > +1 excluding pulp_celerybeat
> > 
> > Also, since we have the --ignore-running-workers flag and are ignoring
> > celerybeat, I would like to propose that we stop prompting the user to
> > continue and instead just display an error message when we detect running
> > workers:
> > 'Migration halted because there are still running workers, please stop all
> > workers before re-running this command. If you believe this message was
> > given in error please re-run the command with the --ignore-running-workers
> > flag'
> 
> I think doing what's proposed would fix #2472. Add that to the fixes
> from #2768 and #2769 and this should be good to ship another RC.
> 
> 
> 
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
> 
> 
> 
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
> 
> 
> 
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
> 

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] dropping i386 support in Pulp

2016-12-08 Thread Dennis Kliban
As I was adding Fedora 25 to Koji, I noticed that our Fedora 24 packages were 
not being built for i386. Even though most of the packages in the Pulp repo are 
'noarch', the pymongo-related packages need to be compiled specifically for 
i386. Given that, I don't think we should support the i386 architecture. 
We should make a formal announcement about this. What are your thoughts?

-Dennis

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] How does the Developer Ansible Playbook Identify Dependencies?

2016-12-13 Thread Dennis Kliban
- Original Message -
> I was confused about how this works recently, so I dug around to get a better
> idea. I learned some things that might be helpful to you:
> 
> 
> 
> * The normal repo file is not created for some reason for f24:
> 
> https://github.com/pulp/devel/blob/master/ansible/roles/dev/tasks/main.yml#L15

This is a bug. I just filed it[0] in our issue tracker.

[0] https://pulp.plan.io/issues/2489 
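For context, the dependency scraping in pulp_facts.py boils down to pulling `Requires:` lines out of the spec files. A minimal stdlib sketch of that idea (the real code shells out to rpmspec, which also expands macros and conditionals this version ignores):

```python
import re


def spec_requires(spec_text):
    """Collect package names from 'Requires:' lines in a spec file.

    Simplified stand-in for the rpmspec-based scraping: version
    constraints after the name are dropped, and macros/conditionals
    are not expanded.
    """
    names = []
    for line in spec_text.splitlines():
        match = re.match(r"^Requires:\s*([A-Za-z0-9_.+-]+)", line)
        if match:
            names.append(match.group(1))
    return names
```

The resulting list is what later gets handed to the dnf install step in the dev role.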

> * This is where we use rpmspec to look for Requires:
> 
> https://github.com/pulp/devel/blob/ef8a10122eb7d64b20d6410ef32f768def147a26/ansible/library/pulp_facts.py#L42
> * This is where those deps are installed.
> 
> https://github.com/pulp/devel/blob/master/ansible/roles/dev/tasks/main.yml#L88
> 
> 
> 
> Also, I do expect the dev role to install pulp-server dependencies, but I do
> not expect the pulp_server role to run on a dev system.
> 
> On Sat, Dec 10, 2016 at 1:47 PM, Brian Bouterse < bbout...@redhat.com >
> wrote:
> 
> 
> 
> I ran into this issue [0] where vagrant up (master of pulp/devel) was not
> installing Kobo. Kobo is listed[1] as a dependency in the spec file on
> 2.10-dev, and I thought the Ansible dev playbook inspected the spec file for
> each plugin and platform and dnf installed those items. I was surprised when
> it was not installed by the Ansible playbook.
> 
> I thought this line [2], which installs pulp-server via rpm, would do it and
> bring in kobo as a dependency.
> 
> * Do others expect ansible dev to install pulp-server dependencies listed in
> the pulp.spec
> * Any insight into why this didn't happen?
> * Can someone explain to me how our dependencies are supposed to be treated
> by the dev Ansible playbook?
> 
> [0]: https://pulp.plan.io/issues/2481
> [1]:
> https://github.com/pulp/pulp/blob/60e85bf4383a8c0bc5ef054c42bfc3928777fd80/pulp.spec#L382
> [2]:
> https://github.com/pulp/devel/blob/8604223e2e23e208aa69e40d26e43c3c1f7c2c84/ansible/roles/pulp/tasks/pulp_server.yaml#L48
> 
> Thank you!
> Brian
> 
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
> 
> 
> 
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
> 

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] dropping i386 support in Pulp

2016-12-14 Thread Dennis Kliban
It's been almost a week and I have not heard from anyone on this topic. Does 
that mean we all agree that Pulp should drop support for i386?

-Dennis

- Original Message -
> As I was adding Fedora 25 to Koji, I noticed that our Fedora 24 packages were
> not being built for i386. Even though most of the packages in the Pulp repo
> are 'noarch', the pymongo-related packages need to be compiled specifically
> for i386. Given that, I don't think we should support the i386
> architecture. We should make a formal announcement about this. What are your
> thoughts?
> 
> -Dennis
> 
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
> 

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] dropping i386 support in Pulp

2016-12-16 Thread Dennis Kliban
- Original Message -
> Would this have any effect on katello-agent for i386 arch clients?

The spec file[0] for katello-agent says that it depends on the following:

Requires: gofer >= 2.5
Requires: python-gofer-proton >= 2.5
Requires: python-pulp-agent-lib >= 2.6
Requires: pulp-rpm-handlers >= 2.6

It looks like all of the above packages are noarch[1]. So theoretically you 
should still be able to use them on i386. Is my understanding correct here?

[0] 
https://github.com/Katello/katello-agent/blob/KATELLO-2.3/katello-agent.spec#L14-L17
[1] https://repos.fedorapeople.org/pulp/pulp/stable/2/fedora-24/i386/

 - Dennis


> 
> On Wed, Dec 14, 2016 at 2:34 PM, Brian Bouterse  wrote:
> 
> > Thank you for following up. +1 to dropping i386 support because**.
> > However, I think we should adjust the statement to be that Pulp only
> > supports X86_64 at this time until we can develop a plan to bring in more
> > architectures. One outcome of that is that i386 would be dropped. If there
> > is no one opposed, making 2 tickets on it would be a good next step. One
> > ticket to issue the statement via blog post and pulp-list, and another to
> > update the build machinery to stop publishing i386.
> >
> > **: (a) we provide an incomplete set of i386 packages today so it doesn't
> > actually work well, (b) we never QE on i386, and (c) I don't think anyone
> > is using them, but I have no evidence for that
> >
> > -Brian
> >
> >
> > On Wed, Dec 14, 2016 at 2:27 PM, Dennis Kliban  wrote:
> >
> >> It's been almost a week and I have not heard from anyone on this topic.
> >> Does that mean we all agree that Pulp should drop support for i386?
> >>
> >> -Dennis
> >>
> >> - Original Message -
> >> > As I was adding Fedora 25 to Koji, I noticed that our Fedora 24
> >> packages were
> >> > not being built for i386. Even though most of the packages in the Pulp
> >> repo
> >> > are 'noarch', the pymongo related packages need to be compiled
> >> specifically
> >> > for i386. Given that, I don't think we should support i386
> >> > architecture. We should make a formal announcement about this. What are
> >> your
> >> > thoughts?
> >> >
> >> > -Dennis
> >> >
> >> > ___
> >> > Pulp-dev mailing list
> >> > Pulp-dev@redhat.com
> >> > https://www.redhat.com/mailman/listinfo/pulp-dev
> >> >
> >>
> >> ___
> >> Pulp-dev mailing list
> >> Pulp-dev@redhat.com
> >> https://www.redhat.com/mailman/listinfo/pulp-dev
> >>
> >
> >
> > ___
> > Pulp-dev mailing list
> > Pulp-dev@redhat.com
> > https://www.redhat.com/mailman/listinfo/pulp-dev
> >
> >
> 
> 
> --
> Og Maciel
> 
> Manager Quality Engineering
> Red Hat, Inc.
> irc: omaciel
> 

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] docs-builder-2.10-build and docs-builder-2.11-build jenkins jobs

2016-12-17 Thread Dennis Kliban
These two Jenkins jobs are currently failing and I am not sure where they are 
supposed to publish docs to. Can someone explain what the purpose of these jobs 
is?  I'd like to fix them.

-Dennis

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] vagrant up on master is broken by python-lxml dep solving

2016-12-21 Thread Dennis Kliban
- Original Message -
> When I try to "vagrant up" with all git repos on "master", I get an exciting
> explosion with this error at the core:
> 
> raise exc
> dnf.exceptions.DepsolveError: installed package
> python2-lxml-3.7.0-1.fc24.x86_64 obsoletes python-lxml < 3.7.0-1.fc24
> provided by python-lxml-3.4.4-4.fc24.x86_64
> 
> The steps leading up to this are:
> 
> - scrapes pulp spec files for Requires statements
> - finds one for "python-lxml" in the pulp_rpm spec file
> - tries to use dnf to install python-lxml, among many other dependencies
> 
> dnf doesn't like this one bit. I think it comes down to ambiguity over these
> two points:
> 
> python2-lxml is installed on the system and "Provides" python-lxml
> python-lxml is also an available RPM, but it's obsoleted by python2-lxml
> 
> When I run "dnf install python-lxml", it matches the RPM with that exact
> name, not the already-installed RPM that merely "Provides" that name, and
> then complains.
> 
> But if I install something that "Requires: python-lxml", like
> pulp-rpm-plugins, dnf happily resolves that as you might expect.
> 
> Reading the man page for dnf, that behavior matches what is described in the
> "SPECIFYING PACKAGES" section:
> 
> "Failing to match the input argument to an existing package name based on the
> patterns above, DNF tries to see if the argument matches an existing
> provide."
> 
> dnf does match the argument to a package name, and when it encounters a dep
> solving error, it does not go back and continue its matching algorithm. It
> never gets to the point of trying to match the argument to a "provide".
> 
> So what should we do? This seems to be a quirk specific to F24.
> 
> - We could change our spec file for F24+ to "Requires: python2-lxml"
> - We could handle this as a special case in the ansible facts, and modify the
> value before trying to use it with dnf.

Since Pulp installs properly on Fedora 24, it sounds like we have a problem 
with how we provision our development environment. Modifying the ansible facts 
is the right approach for resolving this problem. Did you file an issue in 
Redmine for this?
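A special-case remap in the fact gathering could look roughly like the sketch below (the map and function names are illustrative, not the actual pulp_facts.py code):

```python
# Plain names that dnf on F24+ refuses to install because a python2-
# prefixed package obsoletes them (illustrative; extend as needed).
F24_NAME_REMAP = {
    "python-lxml": "python2-lxml",
}


def remap_for_dnf(requires, fedora_version):
    """Rewrite scraped Requires before passing them to the dnf module.

    Works around dnf matching the exact (obsoleted) package name
    instead of falling back to the package that Provides it.
    """
    if fedora_version < 24:
        return list(requires)
    return [F24_NAME_REMAP.get(name, name) for name in requires]
```

The spec files stay untouched this way; only the provisioning layer translates names for the affected Fedora releases.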

> - $YOUR_IDEA_HERE
> 
> What do you all think?
> 
> Michael
> 
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
> 

___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] RFC process

2017-02-24 Thread Dennis Kliban
The things that I like about this proposal:

- The proposals are always merged so the community can reference them in
the future even if the proposal is not adopted. I like learning from
history.
- Revisions to the proposal are additional commits stored in git. Having a
record of changes can be valuable as the proposal lives on and evolves.
- All the proposals are available on pulpproject.org or some other website
for anyone to see - including search engines.

I am not too thrilled about the discussion living separate from the
proposal, but I am a fan of our mailing lists. I would be happy with this
proposal being merged as is so we can announce it for voting and/or further
discussion on pulp-dev list.

-Dennis

On Fri, Feb 24, 2017 at 5:34 PM, Brian Bouterse  wrote:

> I pushed a new version based on feedback on the PR. It outlines several
> alternatives that we should consider along with downsides.
>
> What about leaving it as a pull request for longer?
> 
> What about using Github for discussion?
> 
> We could store the PEPs in Redmine. Why aren't we using Redmine?
> 
>
> There are also Downsides
> 
> of this proposal.
>
> My opinion is a +1 to leaving it as a pull request for longer to allow
> more autonomy for creation and revision of proposals. Also, we can use
> Github for feedback, but email-threaded discussion should be allowed too, to
> include broader input from users who don't live in Github like most of us
> do. Both of these not-yet-done simple rewrites
> would remove these from the list of alternatives and incorporate them into
> the proposal itself. I'm happy to make such changes with input from others.
>
> Also in the Unresolved Questions
> 
> section having an acronym or initialization of a name for these would be
> nice.
>
> It would be great to get feedback by 00:00 UTC on Tuesday the 28th (that's
> 7pm on Feb 28th). I'll try to be more responsive with the edits also.
>
> Thanks for the input so far.
>
> -Brian
>
> On Mon, Feb 13, 2017 at 5:09 PM, Elyezer Rezende 
> wrote:
>
>> I would like to comment about the C4 [1] which is "the Collective Code
>> Construction Contract (C4), [...], aimed at providing an optimal
>> collaboration model for free software projects".
>>
>> It does not mention about creating RFCs specifically but provides some
>> guidelines that may help when implementing them.
>>
>> [1] https://rfc.zeromq.org/spec:42/C4/
>>
>> On Mon, Feb 13, 2017 at 5:45 PM, Brian Bouterse 
>> wrote:
>>
>>> I want to share some ideas on a possible proposal process. It's inspired
>>> by processes in the Foreman, Python, and Django communities along with
>>> several discussions I've had with core and community users. This is written
>>> as a concrete proposal, but it is 100% changeable.
>>>
>>> I'm doing the meta thing and using the process I'm proposing to propose
>>> the process. The proposal is here [0]. It's unmerged (not the process)
>>> because I suspect we'll want a dedicated repo. This proposal, if adopted,
>>> is still a living document (like Python PEP 0001) so even if its approved
>>> it would still be an evolving document.
>>>
>>> Feedback and collaboration is welcome!
>>>
>>> [0]: https://github.com/pulp/pulpproject.org/pull/50
>>>
>>> All the best,
>>> Brian
>>>
>>> On Fri, Feb 10, 2017 at 6:01 PM, David Davis 
>>> wrote:
>>>
 I also like the idea of using plan.io for our RFCs. The only thing
 that github or etherpad offers over plan.io is the ability to
 edit/update the RFC. If the RFC is in the body of the story/task in
 Redmine, then I think it can only be edited by admins. Maybe we can use the
 comments or not worry about editing the RFC though.

 There were also some other points brought up this past week about
 RFCs—mostly around workflows. One important thing I forgot to consider is
 how to accept RFCs. Should we vote on them? Or perhaps try to arrive at
 some sort of consensus?


 David

 On Mon, Feb 6, 2017 at 12:31 PM, Ina Panova  wrote:

> I think all mentioned options could be used, but we need to have a
> starting point. Something that would track a discussion for a long time.
> And i lean towards ---> open a story/task (as a starting point).
> Having a story/task opened we can always reference it in mail
> discussion or etherpad.
> Why i prefer to have all/most of the discussion happen on the
> story/task?
> Because i cannot guarantee that i will not miss somehow the emai

Re: [Pulp-dev] PUPs Process Approved

2017-03-22 Thread Dennis Kliban
This is so great!

On Wed, Mar 22, 2017 at 9:59 AM, Brian Bouterse  wrote:

> The Pulp Update Process (an RFC process) has been approved and merged. A
> dedicated repo[0] will house the PUPs. The process for submitting future
> pups is outlined as pup1 [1].
>
> This is designed to be a living document, and the in-place process can
> also be used to modify/refine the process over time.
>
> Thank you to everyone who gave ideas, contributions, and feedback during
> this process. I will highlight this at tomorrow's sprint demo as part of
> the community update.
>
> [0]: https://github.com/pulp/pups
> [1]: https://github.com/pulp/pups/blob/master/pup-0001.md
>
> -Brian
>
>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] We need a new "closed" state in redmine.

2017-03-30 Thread Dennis Kliban
Let's add CLOSED - COMPLETE. Let's add this state today so we can close out
some issues.

On Tue, Mar 28, 2017 at 3:47 PM, Michael Hrivnak 
wrote:

> In redmine, we do not currently have a reasonable state for a "Task" to be
> in once it is complete. A task could be something like:
>
> - update the Pulp website
> - perform a packaging activity related to Pulp in Fedora
> - create a detailed plan for how to implement something
>
> As a reminder, here are all of the current "closed" states:
>
> CLOSED - CURRENTRELEASE
> CLOSED - DUPLICATE
> CLOSED - NOTABUG
> CLOSED - WONTFIX
> CLOSED - WORKSFORME
>
> The output of such tasks does not become part of a Pulp release, so the
> state "closed - currentrelease" does not make sense. Neither do any of the
> other "closed - " states we have right now. Here are two options to
> consider adding:
>
> "CLOSED" - It is simple and covers all potential reasons not listed as an
> explicit option. The downside is that it provides no information about
> whether the work got done. Did you close it because you finished the work?
> Or because the task became irrelevant? Or some other reason?
>
> "CLOSED - COMPLETE" or "CLOSED - DONE" - Something like that has the
> advantage of being clear that the reason for closing it is that the work
> got done.
>
> I lean toward the more explicit option that clearly represents the work as
> having been done, but I also see a case for the general option. Or maybe
> both?
>
> What do you all think?
>
> Michael
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] PyPI names for Pulp3

2017-04-07 Thread Dennis Kliban
This should definitely be a PUP. I like the pulpproj prefix.



On Fri, Apr 7, 2017 at 2:54 PM, Brian Bouterse  wrote:

> Pulp3 can't use the 'pulp' Python namespace like we did on Pulp2 because
> it's already taken on PyPI and we don't want to conflict. We need to decide
> on some new Python package names.
>
> I've updated a previous write-up[0] with options we have in this area. It
> talks about package name options for pip installing purposes, and it
> discusses how we will lay out the packages within site-packages.
>
> I prefer the prefix of 'pulpproj' with "idea 2". I also prefer all
> packages will install under a top level dir. So that would cause platform
> to pip install with:
>
> pip install pulpproj
> pip install pulpproj_cli
> pip install pulpproj_streamer
>
> All of ^ packages would be laid out on the filesystem as:
>
> /usr/lib/python3.5/site-packages/pulpproj/
> ├── cli
> ├── common
> ├── platform
> └── streamer
>
> What are your thoughts and ideas? What do you prefer? Also should this
> become a PUP?
>
> [0]: https://pulp.plan.io/issues/2444#note-7
>
> -Brian
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] pull request test results are now publicly available

2017-04-11 Thread Dennis Kliban
The Jenkins pull request test jobs are now configured to publish test
results to a publicly accessible web server. Each pull request that has
been tested by Jenkins will include a link to the unit test results. The
link points to a directory on pulpadmin.fedorapeople.org that contains logs
from all the platforms on which the tests were run.
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] PyPI names for Pulp3

2017-04-11 Thread Dennis Kliban
I like using the pulp3 namespace.

On Tue, Apr 11, 2017 at 9:13 AM, Bihan Zhang  wrote:

> What about pulp3 as a potential namespace? With this naming we can
> communicate that this PyPI package is Pulp3 (not Pulp2), and that it is
> Python3 compatible.
>
> There are plenty of PyPI packages that utilize the package3 naming strategy
> to show python3 compatibility.
> And since PuLP (the other pulp) is already py3 compatible I don't see them
> wanting the pulp3 namespace.
>
> If we use this prefix, length won't be a problem:
>   pip3 install pulp3
>   pip3 install pulp3_rpm_extensions
>   pip3 install pulp3_streamer
>
>
>
> On Tue, Apr 11, 2017 at 8:20 AM, Patrick Creech 
> wrote:
>
>> After spending the majority of the day hunting down the fine details of
>> this plan, I'm in agreement
>> with Michael that it isn't the best option here.  While it seemed
>> interesting on the surface, the
>> devil is in the details, as they say.  And this just appears to be a
>> little too non-standard for us.
>>
>> Patrick
>>
>> On Mon, 2017-04-10 at 16:49 -0400, Michael Hrivnak wrote:
>> > The "datadir" idea is a good option to have, and I can see how it could
>> work. That said, it has a
>> > couple of drawbacks worth considering.
>> >
>> > 1) I regularly think about the Principle of Least Surprise, and it
>> applies well here. Python devs
>> > know that python code usually goes in site-packages. Not finding Pulp
>> code there would be
>> > surprising in most cases. It may work great and be completely valid,
>> but I think we should have a
>> > very good reason before straying from such a convention. Python
>> packaging is a complicated enough
>> > topic as it is (see - vs _, setuptools vs. distutil vs distribute,
>> package name vs. python
>> > namespace, etc), that I think we will benefit from sticking to defaults
>> when possible and
>> > reasonable.
>> >
>> > This aspect is definitely not a deal-breaker. I'm sure other apps do
>> this successfully. It's just
>> > a factor that makes me lean another direction.
>> >
>> > 2) This would not entirely eliminate the namespace collision, if we
>> continued using the "pulp"
>> > namespace in python. Keep in mind that we're not just worried about a
>> collision in site-packages;
>> > we're worried about a collision at runtime in the interpreter's global
>> namespace. If we add a new
>> > location to PYTHONPATH, but the "pulp" namespace is used in the new
>> location AND in site-packages,
>> > that's asking for trouble. Maybe it would work ok by completely
>> overshadowing the "pulp" in site-
>> > packages (I'm not sure if it would), but it seems safer to just use a
>> different namespace than
>> > "pulp".
>> >
>> > And if we use a different namespace than "pulp", I don't think we gain
>> anything from installing to
>> > a separate location.
>> >
>> > This also may not be a deal-breaker, but it nudges me in the direction
>> of just using a non-"pulp"
>> > name in the standard location.
>> >
>> > Thanks Patrick for raising this as an option.
>> >
>> > Michael
>> >
>> > --
>> > Michael Hrivnak
>> > Principal Software Engineer, RHCE
>> > Red Hat
>> > ___
>> > Pulp-dev mailing list
>> > Pulp-dev@redhat.com
>> > https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] migration tool for Pulp 3

2017-04-18 Thread Dennis Kliban
Do we want to provide a tool for migrating from Pulp 2 to 3? If yes, then
...

Would the tool be able to migrate repository definitions and require the
user to sync and upload content to restore /var/lib/pulp/content?

Would this tool support installing Pulp 3 alongside Pulp 2 and performing
a migration of database and /var/lib/pulp/content?

Would this tool be able to accept a mongodump of Pulp 2 MongoDB and a path
to a copy of Pulp 2's /var/lib/pulp directory and use that information to
populate Pulp 3?
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] PyPI names for Pulp3

2017-04-19 Thread Dennis Kliban
+1 to pulpproj

On Wed, Apr 19, 2017 at 12:59 PM, Sean Myers  wrote:

> On 04/19/2017 12:02 PM, Brian Bouterse wrote:
> > Two fyi's relating to the names. (1) pulpproj is our twitter handle. Both
> > pulp and pulpproject were already taken. (2) I agree that pulp3 could be
> a
> > headache down the road regardless of if the 3 is for Pulp3 or Python3.
>
> Yeah, I was only kidding about the python3 thing, it's too ambiguous.
>
> I'm still +1 pulpproj
>
>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] versioned repositories

2017-05-24 Thread Dennis Kliban
I noticed that the REST API examples don't mention anything about deleting
a particular version of a repository. This is a use case that we need to
support.

-Dennis

On Wed, May 17, 2017 at 10:03 PM, Michael Hrivnak 
wrote:

> We've discussed versioned repositories and their merits in the past, but
> I'd like to propose a specific direction, and inclusion in 3.0. As a recap
> of goals, versions can help us answer two important questions about the
> history of a repository:
>
> 1) What set of content is in a specific version of a repository?
> 2) What changed between two arbitrary versions of a repository?
>
> I am proposing a model where Pulp creates a new version of a repository
> for every operation that changes that repo's content. For example, a sync
> task would create a single new version.
>
> Basic Example
> ---
>
> - You create repository "foo".
> - You sync repository "foo", which produces version 1 of that repo.
> - You sync once per day for some period of time, automatically creating a
> new version each time.
> - You publish repo "foo", which defaults to publishing the most recent
> version.
> - You don't like something that's new in the repo, so you roll back by
> publishing a previous version.
>
> Data Model Basics
> ---
>
> In the past we've stored the relationship between a content unit and a
> repo as a standard many-to-many through table. There's a reference to a
> unit, and a reference to a repo.
>
> The version scheme I'm pitching adds two new fields to that through table:
>
> vadded - a foreign key to the repo version in which this content unit was
> added
> vremoved - a foreign key to the repo version in which this content unit
> was removed. This can be null.
>
> Multiple entries can exist for the same content unit and repo, so long as
> a new one is not added until the previous one's "vremoved" field is set.
>
> With this structure, it is easy to query the database to answer both
> questions we started with.
>
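A plain-Python sketch of the membership query this model enables may help. Dicts stand in for the real Django through-table rows; the field names (vadded, vremoved) follow the email, everything else is illustrative:

```python
# Plain-Python sketch of the vadded/vremoved membership query described
# above. Dicts stand in for the real Django through-table rows.

def content_in_version(entries, version):
    """Units present in a repo version: added at or before it, and either
    never removed or removed in a later version."""
    return {
        e["unit"]
        for e in entries
        if e["vadded"] <= version
        and (e["vremoved"] is None or e["vremoved"] > version)
    }

def diff_versions(entries, v1, v2):
    """What changed between two arbitrary versions (question 2 above)."""
    before = content_in_version(entries, v1)
    after = content_in_version(entries, v2)
    return {"added": after - before, "removed": before - after}
```

This assumes a unit removed in version V is absent from V itself (the removal is what created that version).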
> REST API
> --
>
> Some endpoint will be made that gives access to the versions of a specific
> repository. Ideally we would have a nested endpoint like this:
>
> /api/v3/repositories/foo/versions/
>
> But nested views have been a problem for us with DRF (django rest
> framework). If we aren't able to make that happen, I've gotten this to work
> in my PoC branch:
>
> /api/v3/repositoryversions/?repository=foo
>
> It's not yet clear how best to represent content through the REST API. A
> nested endpoint within the repo version object would be ideal.
>
> /api/v3/repositories/foo/versions/4/content/
>
> Operations on a repo where a version could be chosen, such as a publish,
> should default to the latest version. It's an open question how best to
> represent that, and perhaps it takes the form of two endpoints:
>
> default to latest: POST /api/v3/repositories/foo/distributors/bar/publish
>
> specify a version: POST /api/v3/repositories/foo/versions/4/publish
>
> But that's just one idea. Much about our REST API layout has yet to be
> written in stone, and we have flexibility.
>
> Orphans
> -
>
> Notice that this changes the orphan workflow. Removing a content unit from
> a repo doesn't make it an orphan. This helps reduce the need to run an
> orphan cleanup task, which in turn helps avoid the inherent race condition
> that task can introduce.
>
> Trim History
> -
>
> But you may not want to keep history forever, so a valuable feature will
> be the ability to trim history. I think this would just be an operation
> that squashes a bunch of versions together, and it could optionally take
> that opportunity to immediately delete a content unit that becomes an
> orphan.
>
> Illustrating the workflow, if you wanted to squash history prior to
> version 10, the task would:
>
> - delete all of a repo's relationships in the through table where vremoved
> is a version <= 10
> - optionally check if each content unit is now an orphan and remove if so
> - update all remaining entries where vadded < 10 by setting vadded to 10
>
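The three squash steps above can be sketched on the same plain-Python through-table rows; this is not real Pulp code, and the names are invented for the example:

```python
# Illustrative sketch of the "trim history" squash steps listed above.

def squash_before(entries, version):
    """Squash all history prior to `version`.

    Returns (kept_entries, orphan_candidates): entries whose vremoved is
    <= version are deleted, remaining vadded values older than `version`
    are folded into it, and units left with no entries at all are
    reported as possible orphans.
    """
    kept, dropped_units = [], set()
    for e in entries:
        if e["vremoved"] is not None and e["vremoved"] <= version:
            # relationship ended at or before the squash point: drop it
            dropped_units.add(e["unit"])
            continue
        if e["vadded"] < version:
            e = dict(e, vadded=version)  # fold older adds into `version`
        kept.append(e)
    orphan_candidates = dropped_units - {e["unit"] for e in kept}
    return kept, orphan_candidates
```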
> PoC
> 
>
> I have a branch with proof-of-concept code here:
>
> https://github.com/pulp/pulp/compare/3.0-dev...mhrivnak:vers
> ioned-repos?expand=1
>
> The models are the most interesting place to look. In particular, I'm very
> pleased with how simple the "content()" method is, which returns a QuerySet
> matching all the content in a given version.
>
> The rest is REST ;) API stuff mostly, which isn't all that interesting
> except to demonstrate how the data could potentially be exposed. You can
> run the included tests (which I made just for dev purposes- not sure if
> they deserve a long-term home) which are found in the root of the git repo,
> and that loads some data into the database. Then you can hit this endpoint
> as an example:
>
> http://yourhost:8000/api/v3/repositoryversions/?repository=r1
>
> Obviously this code is rough, so please consider it for directional and
> conceptual purposes only. Assume major a

[Pulp-dev] [pulp 3] cast() method for casting from a Master to Detail model instance

2017-05-26 Thread Dennis Kliban
Looking at the cast() method[0] it looks like it's possible to call cast()
on a detail model. I would like to figure out when we expect to call cast()
on a detail model. Without fully knowing the motivation for this
implementation, I am inclined to raise an exception when the code reaches
line 113. The exception would inform the developer that calling cast() is
only appropriate on a master model. What are your thoughts?


[0]
https://github.com/pulp/pulp/blob/3.0-dev/platform/pulp/app/models/base.py#L113


-Dennis
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp 3] cast() method for casting from a Master to Detail model instance

2017-05-26 Thread Dennis Kliban
I agree with everything you said.

The problem occurs when a worker receives a task to perform a publish, but
it doesn't have the plugin installed. As a result the call to
publisher.cast() returns the master model. The publish task then tries to
call a method on the master model that does not exist. The user then sees
an attribute not found exception in the logs. It would be better to raise a
more useful exception in the cast() method itself when it is not able to
cast to anything.

One suggestion was to add an optional call_count keyword argument, i.e.
cast(self, call_count=0). The argument would only be passed as
cast(call_count=call_count + 1) when cast() calls itself recursively. Then we
could check whether line 113 is reached with call_count == 0 and raise an
exception saying that the master model could not be cast to a detail model.

Thoughts?
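A toy sketch of that call_count idea, with stand-in classes instead of the real Master/Detail Django models (all names here are invented; a real implementation would also need to recognize a detail instance whose cast() is called directly):

```python
# Toy sketch of the call_count suggestion. Only the recursion/raising
# logic is the point.

class MasterCastError(Exception):
    """Raised when a top-level cast() finds no detail model to cast to."""

class Master:
    def downcast(self):
        """Return the more-detailed instance, or None (e.g. plugin not installed)."""
        return None

    def cast(self, call_count=0):
        detail = self.downcast()
        if detail is not None:
            # keep casting toward the most-detailed model
            return detail.cast(call_count=call_count + 1)
        if call_count == 0:
            # top-level call fell through: the master could not be cast
            raise MasterCastError("master model could not be cast to a detail model")
        return self  # reached recursively: most-detailed model returns itself

class Detail(Master):
    pass  # most-detailed model: downcast() stays None

class WithDetail(Master):
    def __init__(self):
        self.detail = Detail()

    def downcast(self):
        return self.detail
```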

On Fri, May 26, 2017 at 1:23 PM, Michael Hrivnak 
wrote:

> Interesting question. It looks like in this implementation, even if you
> call cast() on a master model, the method itself will kind-of-recursively
> call cast() on detail models until it gets to the most detailed one, which
> will return itself. So every time cast() is called, eventually the most
> detailed model is expected to have its cast() method called and must return
> itself.
>
> We could add a special case where the last one raises an exception, and
> the next-to-last one catches it, but I'm not sure that extra complication
> would be worth it. We'd be making the most common case "exceptional".
>
> Having the call be idempotent is also potentially a perk, depending on how
> you look at it. Based on all that, plus the doc block confirming that the
> behavior is intentional, I don't see a problem with the current behavior.
>
> On Fri, May 26, 2017 at 11:41 AM, Dennis Kliban 
> wrote:
>
>> Looking at the cast() method[0] it looks like it's possible to call
>> cast() on a detail model. I would like to figure out when we expect to call
>> cast() on a detail model. Without fully knowing the motivation for this
>> implementation, I am inclined to raise an exception when the code reaches
>> line 113. The exception would inform the developer that calling cast() is
>> only appropriate on a master model. What are your thoughts?
>>
>>
>> [0] https://github.com/pulp/pulp/blob/3.0-dev/platform/pulp/app/
>> models/base.py#L113
>>
>>
>> -Dennis
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
>
> --
>
> Michael Hrivnak
>
> Principal Software Engineer, RHCE
>
> Red Hat
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] JWT Use Case Revisions for Pulp3

2017-05-31 Thread Dennis Kliban
We had a chance to discuss some of these use cases during our MVP call
yesterday. Here is the updated list of uses cases:

* As an administrator, I can disable JWT token expiration. This
configuration is in the settings file and is system-wide.
* As an administrator, I can configure the JWT tokens to expire after a
configurable amount of time. This configuration is in the settings file
and is system-wide.
* The JWT shall have a username identifier.
* As an API user, I can authenticate any API call (except to request a JWT)
with a JWT.
* As an API user, I can invalidate all existing JWT tokens for a given user.
* As an authenticated user, when deleting a user 'foo', all of user 'foo's
existing JWTs are invalidated.
* As an authenticated user, I can invalidate a user's JWTs in the same
operation as updating the password.
* As an un-authenticated user, I can obtain a JWT token by using a username
and password.
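For illustration, the token shape these use cases imply can be sketched with a hand-rolled, stdlib-only HS256 JWT carrying a username claim and an optional "exp" (omitted entirely when the administrator disables expiration). A real deployment would use a vetted library, not hand-rolled signing:

```python
# Stdlib-only sketch of a JWT with a username claim and optional expiry.
import base64
import hashlib
import hmac
import json
import time

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_jwt(username, secret, expires_in=None):
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {"username": username}
    if expires_in is not None:
        # system-wide expiration window, configured in the settings file
        payload["exp"] = int(time.time()) + expires_in
    signing_input = _b64(json.dumps(header).encode()) + "." + _b64(json.dumps(payload).encode())
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64(signature)

def decode_claims(token):
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```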

Let's polish them up on this email thread and then update the MVP wiki
page.

-Dennis

On Mon, May 29, 2017 at 1:57 PM, Brian Bouterse  wrote:

> We had a use case call which produced these use cases [0]. Then @fdobrovo
> investigated using the django-rest-framework-jwt [1] to fulfil those use
> cases. There were some small gaps; to fulfil the use cases as written he
> had to write a good amount of code, of which maybe only 50 or 100 lines
> actually came from django-rest-framework-jwt.
>
> Through a lot of back and forth on the issue [2], we did a gap analysis
> and considered different ways the use cases could be aligned with the
> functionality provided by the django-rest-framework. We came up with the
> following revised use cases related to JWT that are effectively the same
> and would allow the plugin code to be used mostly as-is:
>
> * As an administrator, I can disable JWT token expiration.  This
> configuration is in the settings file and is system-wide.
> * As an administrator, I can configure the JWT tokens to expire after a
> configurable amount of time. This configuration is in the settings file and
> is system-wide.
> * The JWT shall have a username identifier
> * As an API user, I can authenticate any API call (except to request a
> JWT) with a JWT.
> * As an API user, I can invalidate all JWT tokens for a given user
> * As an authenticated user, when deleting a user 'foo', all of user 'foo's
> JWTs are invalidated.
> * As an un-authenticated user, I can obtain a JWT token, by passing a
> username and password via POST
>
> Comments and questions are welcome here. I also hope to append this topic
> onto one of the upcoming, Tuesday use case calls. The next call May 30th is
> on the Status API and Alternate Content Sources so hopefully there will be
> enough time to revisit the JWT use cases then too or on a following call.
>
> [0]: https://pulp.plan.io/projects/pulp/wiki/Pulp_3_Minimum_
> Viable_Product#Authentication
> [1]: http://getblimp.github.io/django-rest-framework-jwt/
> [2]: https://pulp.plan.io/issues/2359
>
> -Brian
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] Pull Request builder job changes for plugins

2017-06-02 Thread Dennis Kliban
In an effort to resolve issue 2751[0], I updated the PR builder job for
plugins. Each PR for a plugin will now be tested against the latest stable
release of the core found here[1]. This will ensure that the plugin is
maintaining compatibility with the latest stable core and that we are only
testing one change at a time.


[0] https://pulp.plan.io/issues/2751
[1] https://repos.fedorapeople.org/pulp/pulp/stable/latest/


-Dennis
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Pull Request builder job changes for plugins

2017-06-05 Thread Dennis Kliban
What if we ran our plugin unit tests against both the latest GA build and
nightly build of core?

If the tests pass with the GA version, the job is marked as successful. If
not, core packages are upgraded to the latest nightly and unit tests are
run again. If the unit tests fail again, the job is marked as failed. If
the unit tests pass with the latest nightly, the job is marked as
successful.
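The fallback described above can be sketched as follows; `run_tests_against` stands in for "install that core and run the plugin's unit tests", and the names are illustrative only:

```python
# Sketch of the proposed PR-builder fallback: test against the latest GA
# core first, and only fall back to the nightly build if the GA run fails.

def pr_job_status(run_tests_against):
    if run_tests_against("ga"):
        return "SUCCESS"  # plugin is compatible with the latest stable core
    if run_tests_against("nightly"):
        return "SUCCESS"  # plugin needs unreleased core changes
    return "FAILURE"      # broken against both GA and nightly
```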

On Sun, Jun 4, 2017 at 12:03 PM, Brian Bouterse  wrote:

> After thinking about this more, I realized that for the remainder of Pulp2
> at least, we need to have the plugin unittest runner test against the
> nightly version of core and not the latest GA. Using the GAs won't work
> because not only is the 'I need unreleased code from platform' a problem
> with the PR that needs it but also a problem for all subsequent PRs after
> it's merged. That second part makes using GA core as the basis for plugin
> testing probably a non-starter. Assuming that, the next step for 2751 is to
> update the GA urls to be the "stable nightly" URLs.
>
> We also need to look into the nightlies to check on their reliability. In
> theory, each night an "unstable nightly" of core gets built in
> Jenkins, tested with pulp-smash, and if all tests pass it gets "promoted"
> to a separate URL for "stable nightlies". Let me know if we should move
> this to another thread, but I've got these questions about nightlies.
>
> 1. Who investigates when the "unstable nightly" fails to build?
> 2. Who investigates when a "unstable nightly" fails to be promoted to a
> "stable nightly" due to pulp-smash failures?
> 3. Who is in charge of maintaining these Jenkins jobs over time and are
> they currently maintained?
> 4. Who is in charge of managing the directory structure on
> repos.fedorapeople.org?
> 5. Where are the docs on ^?
>
> With Pulp3 I think we can switch to using the latest GA as the basis for
> plugin testing which would be better in several ways.
>
> -Brian
>
> On Fri, Jun 2, 2017 at 5:06 PM, Brian Bouterse 
> wrote:
>
>> That is a good point, and one we are giving some thought to through convo
>> on #pulp-dev and the issue [0]. The case of a plugin needing an unreleased
>> change from core would fail with this change. It's a tradeoff though
>> because if we go with nightlies as the version of core that is used,
>> whenever the nightlies break, the unittest PR runners also will, which has
>> been a reliability issue with the plugin unittest runner for a while.
>>
>> I wrote some on the issue about it, but I see the 'plugin needs
>> unreleased code from core' as a special case, not a normal case. It used to
>> be common, but it's getting less common, which is good, because
>> contributing to a plugin should not involve changes to the core as the
>> norm. It will happen from time to time, so we can handle the special case,
>> specially by running the unittests locally with the necessary unreleased
>> version of platform and posting the results as evidence that its safe to
>> merge.
>>
>> [0]: https://pulp.plan.io/issues/2751
>>
>> On Fri, Jun 2, 2017 at 4:43 PM, Michael Hrivnak 
>> wrote:
>>
>>> What about cases where a plugin wants to use something that's new in the
>>> unreleased core? The master branch of a plugin will usually be released
>>> with the master branch of the core in the next 2.y release for example.
>>> That seems like a normal scenario; is it facilitated somehow with this
>>> testing change?
>>>
>>> On Fri, Jun 2, 2017 at 4:33 PM, Dennis Kliban 
>>> wrote:
>>>
>>>> In an effort to resolve issue 2751[0], I updated the PR builder job for
>>>> plugins. Each PR for a plugin will now be tested against the latest stable
>>>> release of the core found here[1]. This will ensure that the plugin is
>>>> maintaining compatibility with the latest stable core and that we are only
>>>> testing one change at a time.
>>>>
>>>>
>>>> [0] https://pulp.plan.io/issues/2751
>>>> [1] https://repos.fedorapeople.org/pulp/pulp/stable/latest/
>>>>
>>>>
>>>> -Dennis
>>>>
>>>> ___
>>>> Pulp-dev mailing list
>>>> Pulp-dev@redhat.com
>>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>>
>>>>
>>>
>>>
>>> --
>>>
>>> Michael Hrivnak
>>>
>>> Principal Software Engineer, RHCE
>>>
>>> Red Hat
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp-dev] Pulp vagrant is temporarily broken

2017-06-21 Thread Dennis Kliban
The problem has been resolved. I was able to vagrant up a few minutes ago.

On Wed, Jun 21, 2017 at 12:13 PM, Bihan Zhang  wrote:

> Due to an issue with dnf [0], pulp vagrant up is temporarily not working.
>
> The potential workarounds listed on the bugzilla can be tried by waiting
> for vagrant up to error,
> vagrant ssh
> # apply workarounds
> vagrant provision
>
> This dnf bug is being worked on[1] and will hopefully be resolved soon.
>
> [0] https://bugzilla.redhat.com/show_bug.cgi?id=1463561
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1463561#c33
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] proposing changes to pulp 3 upload API

2017-06-27 Thread Dennis Kliban
My motivations for writing this email include: recent discussion about pulp
2 upload API in #pulp and django's documentation on file uploads.

Files uploaded to Django are initially stored in memory (if under 2.5 MB);
otherwise Python's tempfile module is used to write them to the /tmp
directory. The file created in /tmp is deleted when and if the last file
handle is closed.

If we implement the upload API as described in the MVP doc[0], then
according to Django docs[1] we will be performing a write to disk 2 or 3
times for each upload. In cases where a file is bigger than 2.5 MB in size,
it will be first written to /tmp. The same file will then be written to
/var/lib/pulp/uploads (or similar location) when the FileUpload model is
saved. A third write will occur when an artifact is created using the
FileUpload. This third write will likely be a move though.

I propose that we eliminate writing the uploaded file to
/var/lib/pulp/upload and go directly to creating an artifact. The use cases
can then be rewritten as the following:


   - As an authenticated user, I can upload a file with an optional chunk
   size and an optional offset. At the end of the upload, the server
   returns the JSON representation of the artifact.



   - As an authenticated user, I can create a new artifact by specifying an
   existing artifact id.



   - As an authenticated user, I can create a content unit by providing the
   content type, its Artifacts (using IDs for each Artifact), and the metadata
   supplied in the POST body. This call is atomic: the content unit is created
   in the database and on the filesystem, or not at all.
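A sketch of what the proposed single-call artifact upload might send. The /api/v3/artifacts/ endpoint and the parameter names are purely illustrative; only the idea (file body plus size/checksum parameters in one request) comes from this proposal:

```python
# Build the pieces of a hypothetical single-call artifact upload.
import hashlib

def artifact_upload_request(path, base_url="http://localhost:8000"):
    with open(path, "rb") as f:
        data = f.read()
    params = {
        "size": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    # the caller would POST `data` to the returned URL with `params`
    # in the query string
    return base_url + "/api/v3/artifacts/", params, data
```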




[0]
https://pulp.plan.io/projects/pulp/wiki/Pulp_3_Minimum_Viable_Product#Upload-amp-Copy
[1]
https://docs.djangoproject.com/en/1.9/topics/http/file-uploads/#handling-uploaded-files-with-a-model
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] proposing changes to pulp 3 upload API

2017-06-27 Thread Dennis Kliban
On Tue, Jun 27, 2017 at 1:24 PM, Michael Hrivnak 
wrote:

>
> On Tue, Jun 27, 2017 at 11:27 AM, Jeff Ortel  wrote:
>
>>
>> - The artifact FK to a content unit would need to become optional.
>>
>> - Need to add use cases for cleaning up artifacts not associated with a
>> content unit.
>>
>> - The upload API would need additional information needed to create an
>> artifact.  Like relative path, size,
>> checksums etc.
>>
>> - Since (I assume) you are proposing uploading/writing directly to
>> artifact storage (not staging in a working
>> dir), the flow would need to involve (optional) validation.  If
>> validation fails, the artifact must not be
>> inserted into the DB.
>
>
> Perhaps a decent middle ground would be to stick with the plan of keeping
> uploaded (or partially uploaded) files as a separate model until they are
> ready to be turned into a Content instance plus artifacts, and save their
> file data directly to somewhere within /var/lib/pulp/. It would be some
> path distinct from where Artifacts are stored. That's what I had imagined
> we would do anyway. Then as Dennis pointed out, turning that into an
> Artifact would only require a move operation on the same filesystem, which
> is super-cheap.
>
>
Would that address all the concerns? We'd write the data just once, and
> then move it once on the same filesystem. I haven't looked at django's
> support for this recently, but it seems like it should be doable.
>
> I was just looking at the dropbox API and noticed that they provide two
separate API endpoints for regular file uploads[0] (< 150mb) and large file
uploads[1]. It is the latter that supports chunking and requires using an
upload id. For the most common case they support uploading a file with one
API call. Our original proposal requires 2 for the same use case. Pulp API
users would appreciate only having to make one API call to upload a file.

[0] https://www.dropbox.com/developers-v1/core/docs#files_put
[1] https://www.dropbox.com/developers-v1/core/docs#chunked-upload



> --
>
> Michael Hrivnak
>
> Principal Software Engineer, RHCE
>
> Red Hat
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] proposing changes to pulp 3 upload API

2017-06-27 Thread Dennis Kliban
On Tue, Jun 27, 2017 at 2:56 PM, Brian Bouterse  wrote:

> Picking up from @jortel's observations...
>
> +1 to allowing Artifacts to have an optional FK.
>
> If we have an Artifacts endpoint then we can allow for the deleting of a
> single artifact if it has no FK. I think we want to disallow the removal of
> an Artifact that has a foreign key. Also filtering should allow a single
> operation to clean up all unassociated artifacts by searching for FK=None
> or similar.
>
> Yes, we will need to allow the single call delivering a file to also
> specify the relative path, size, checksums etc. Since the POST body
> contains binary data we either need to accept this data as GET style params
> or use a multi-part MIME upload [0]. Note that this creation of an Artifact
> does not change the repository contents and therefore can be handled
> synchronously outside of the tasking system.
>
> +1 to the saving of an Artifact to perform validation
>
> [0]: https://www.w3.org/Protocols/rfc1341/7_2_Multipart.html
>
>

> -Brian
>

I also support this optional FK for Artifacts and validation on save.  We
should probably stick with accepting GET parameters for the MVP. Though
multi-part MIME support would be good to consider for 3.1+.


>
> On Tue, Jun 27, 2017 at 2:44 PM, Dennis Kliban  wrote:
>
>> On Tue, Jun 27, 2017 at 1:24 PM, Michael Hrivnak 
>> wrote:
>>
>>>
>>> On Tue, Jun 27, 2017 at 11:27 AM, Jeff Ortel  wrote:
>>>
>>>>
>>>> - The artifact FK to a content unit would need to become optional.
>>>>
>>>> - Need to add use cases for cleaning up artifacts not associated with a
>>>> content unit.
>>>>
>>>> - The upload API would need additional information needed to create an
>>>> artifact.  Like relative path, size,
>>>> checksums etc.
>>>>
>>>> - Since (I assume) you are proposing uploading/writing directly to
>>>> artifact storage (not staging in a working
>>>> dir), the flow would need to involve (optional) validation.  If
>>>> validation fails, the artifact must not be
>>>> inserted into the DB.
>>>
>>>
>>> Perhaps a decent middle ground would be to stick with the plan of
>>> keeping uploaded (or partially uploaded) files as a separate model until
>>> they are ready to be turned into a Content instance plus artifacts, and
>>> save their file data directly to somewhere within /var/lib/pulp/. It would
>>> be some path distinct from where Artifacts are stored. That's what I had
>>> imagined we would do anyway. Then as Dennis pointed out, turning that into
>>> an Artifact would only require a move operation on the same filesystem,
>>> which is super-cheap.
>>>
>>>
>> Would that address all the concerns? We'd write the data just once, and
>>> then move it once on the same filesystem. I haven't looked at django's
>>> support for this recently, but it seems like it should be doable.
>>>
>>> I was just looking at the dropbox API and noticed that they provide two
>> separate API endpoints for regular file uploads[0] (< 150mb) and large file
>> uploads[1]. It is the latter that supports chunking and requires using an
>> upload id. For the most common case they support uploading a file with one
>> API call. Our original proposal requires 2 for the same use case. Pulp API
>> users would appreciate having to only make one API call to upload a file.
>>
>> [0] https://www.dropbox.com/developers-v1/core/docs#files_put
>> [1] https://www.dropbox.com/developers-v1/core/docs#chunked-upload
>>
>>
>>
>>> --
>>>
>>> Michael Hrivnak
>>>
>>> Principal Software Engineer, RHCE
>>>
>>> Red Hat
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] proposing changes to pulp 3 upload API

2017-06-27 Thread Dennis Kliban
On Tue, Jun 27, 2017 at 3:31 PM, Michael Hrivnak 
wrote:

> Could you re-summarize what problem would be solved by not having a
> FileUpload model, and giving the Artifact model the ability to have partial
> data and no Content foreign key?
>
> I understand the concern about where on the filesystem the data gets
> written and how many times, but I'm not seeing how that's related to
> whether we have a FileUpload model or not. Are we discussing two separate
> issues? 1) filesystem locations and copy efficiency, and 2) API design? Or
> is this discussion trying to connect them in a way I'm not seeing?
>

There were two concerns: 1) Filesystem  location and copy efficiency 2) API
design

The first one has been addressed. Thank you for pointing out that a second
write will be a move operation.

However, I am still concerned about the complexity of the API. A relatively
small file should not require an upload session to be uploaded. A single
API call to the Artifacts API should be enough to upload a file and create
an Artifact from it. In Pulp 3.1+ we can introduce the FileUpload model to
support chunked uploads. At the same time we would extend the Artifact API
to accept a FileUpload id for creating an Artifact.


> On Tue, Jun 27, 2017 at 3:20 PM, Dennis Kliban  wrote:
>
>> On Tue, Jun 27, 2017 at 2:56 PM, Brian Bouterse 
>> wrote:
>>
>>> Picking up from @jortel's observations...
>>>
>>> +1 to allowing Artifacts to have an optional FK.
>>>
>>> If we have an Artifacts endpoint then we can allow for the deleting of a
>>> single artifact if it has no FK. I think we want to disallow the removal of
>>> an Artifact that has a foreign key. Also filtering should allow a single
>>> operation to clean up all unassociated artifacts by searching for FK=None
>>> or similar.
>>>
>>> Yes, we will need to allow the single call delivering a file to also
>>> specify the relative path, size, checksums etc. Since the POST body
>>> contains binary data we either need to accept this data as GET style params
>>> or use a multi-part MIME upload [0]. Note that this creation of an Artifact
>>> does not change the repository contents and therefore can be handled
>>> synchronously outside of the tasking system.
>>>
>>> +1 to the saving of an Artifact to perform validation
>>>
>>> [0]: https://www.w3.org/Protocols/rfc1341/7_2_Multipart.html
>>>
>>>
>>
>>> -Brian
>>>
>>
>> I also support this optional FK for Artifacts and validation on save.  We
>> should probably stick with accepting GET parameters for the MVP. Though
>> multi-part MIME support would be good to consider for 3.1+.
>>
>>
>>>
>>> On Tue, Jun 27, 2017 at 2:44 PM, Dennis Kliban 
>>> wrote:
>>>
>>>> On Tue, Jun 27, 2017 at 1:24 PM, Michael Hrivnak 
>>>> wrote:
>>>>
>>>>>
>>>>> On Tue, Jun 27, 2017 at 11:27 AM, Jeff Ortel 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>> - The artifact FK to a content unit would need to become optional.
>>>>>>
>>>>>> - Need to add use cases for cleaning up artifacts not associated with
>>>>>> a content unit.
>>>>>>
>>>>>> - The upload API would need additional information needed to create
>>>>>> an artifact.  Like relative path, size,
>>>>>> checksums etc.
>>>>>>
>>>>>> - Since (I assume) you are proposing uploading/writing directly to
>>>>>> artifact storage (not staging in a working
>>>>>> dir), the flow would need to involve (optional) validation.  If
>>>>>> validation fails, the artifact must not be
>>>>>> inserted into the DB.
>>>>>
>>>>>
>>>>> Perhaps a decent middle ground would be to stick with the plan of
>>>>> keeping uploaded (or partially uploaded) files as a separate model until
>>>>> they are ready to be turned into a Content instance plus artifacts, and
>>>>> save their file data directly to somewhere within /var/lib/pulp/. It would
>>>>> be some path distinct from where Artifacts are stored. That's what I had
>>>>> imagined we would do anyway. Then as Dennis pointed out, turning that into
>>>>> an Artifact would only require a move operation on the same filesystem,
>>>>> which is super-cheap.
>>>>>
>>>>>
>>>> Would that address 

[Pulp-dev] Pulp 3.0 support for puppet client < 3.3

2017-06-28 Thread Dennis Kliban
Do we want to continue supporting puppet client versions prior to 3.3? More
information about what that means can be found in the user guide[0] and the
technical guide[1] for pulp_puppet.

EPEL has Puppet 3.6[2] for EL7 and nothing for EL6.

I propose that the 3.0 release of the Puppet plugin does not support the
older client. We can add it in 3.1+ if we see value in it at the time.


[0]
http://docs.pulpproject.org/plugins/pulp_puppet/user-guide/recipes.html#id1
[1]
http://docs.pulpproject.org/plugins/pulp_puppet/tech-reference/forge_api.html#basic-auth
[2] https://apps.fedoraproject.org/packages/puppet
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] proposing changes to pulp 3 upload API

2017-06-28 Thread Dennis Kliban
On Wed, Jun 28, 2017 at 12:44 PM, Brian Bouterse 
wrote:

> For a file to be received and saved in the right place once, we need the
> view saving the file to have all the info to form the complete path. After
> talking w/ @jortel, I think we should store Artifacts at the following path:
>
> MEDIA_ROOT/content/units/digest[0:2]/digest[2:]/
>
> Note that digest is the Artifact's sha256 digest. This is different from
> pulp2 which used the digest of the content unit. Note that  would
> be provided by the user along with  and/or .
>
> Note that this will cause an Artifact to live in exactly one place which
> means Artifacts are now unique by digest and would need to be able to be
> associated with multiple content units. I'm not sure why we didn't do this
> before, so I'm interested in exploring issues associated with this.
>

If my memory serves me correctly, we wanted to be able to have multiple
copies of an Artifact when that Artifact can be a Content Unit by itself
and also be one part of another unit, e.g. an RPM that belongs to a distribution.
I am not sure what benefit we would derive from this, but I was hoping to
jog someone's memory.
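The digest-sharded layout quoted above can be sketched as a small helper. The `MEDIA_ROOT` value here is an assumption for illustration; the actual setting would come from Django configuration:

```python
import hashlib

MEDIA_ROOT = "/var/lib/pulp"  # assumed value, for illustration only

def artifact_path(digest: str) -> str:
    # MEDIA_ROOT/content/units/digest[0:2]/digest[2:] from the proposal above
    return "/".join([MEDIA_ROOT, "content", "units", digest[:2], digest[2:]])

digest = hashlib.sha256(b"example artifact data").hexdigest()
print(artifact_path(digest))
```

Because every Artifact with the same sha256 maps to the same path, one-file-per-digest deduplication falls out of the layout.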


> It would be a good workflow. For a single file content unit (e.g.) rpm
> upload would be a two step process.
>
> 1. POST/PUT the file's binary data and the  and 
> and/or  as GET parameters
> 2. Create a content unit with the unit metadata, and 0 .. n Artifacts
> referred to by ID. This could optionally associate the new unit with one
> repository as part of the atomic unit creation.
>
> Thoughts/Ideas?
>
>
If we provide an option to combine content unit creation with repo
association, this option should allow specifying multiple repositories.
For the MVP, though, I think we should support neither. Uploading a content
unit to a particular repository would involve 3 steps.

1. POST to Artifact API endpoint with  and  and/or
 as GET parameters
2. POST to Content Unit API endpoint with the unit metadata, and 0 .. n
Artifacts referred to by ID.
3. POST to the Repository Content Unit  API endpoint to associate the unit
with the repository.

Step 3 would be repeated for each repository the content unit should belong
to.
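The three steps above can be sketched as a request plan. The endpoint paths and parameter names below are illustrative guesses, not the final Pulp 3 API:

```python
def plan_upload(sha256, size, metadata, repo_hrefs):
    """Build the (method, url, body) sequence for the proposed 3-step workflow."""
    calls = [
        # 1. upload the file; size and digest passed as GET parameters
        ("POST", f"/api/v3/artifacts/?size={size}&sha256={sha256}", "<binary data>"),
        # 2. create the content unit, referring to Artifacts by ID
        ("POST", "/api/v3/content/", {"artifacts": [sha256], **metadata}),
    ]
    # 3. one association call per target repository
    calls += [("POST", f"{href}content/", {"content": sha256}) for href in repo_hrefs]
    return calls

calls = plan_upload("abc123", 42, {"name": "foo"}, ["/api/v3/repositories/1/"])
```

Note that the repository-association step grows linearly with the number of target repositories, which is the cost of leaving association out of unit creation.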



> -Brian
>
>
> On Tue, Jun 27, 2017 at 4:16 PM, Dennis Kliban  wrote:
>
>> On Tue, Jun 27, 2017 at 3:31 PM, Michael Hrivnak 
>> wrote:
>>
>>> Could you re-summarize what problem would be solved by not having a
>>> FileUpload model, and giving the Artifact model the ability to have partial
>>> data and no Content foreign key?
>>>
>>> I understand the concern about where on the filesystem the data gets
>>> written and how many times, but I'm not seeing how that's related to
>>> whether we have a FileUpload model or not. Are we discussing two separate
>>> issues? 1) filesystem locations and copy efficiency, and 2) API design? Or
>>> is this discussion trying to connect them in a way I'm not seeing?
>>>
>>
>> There were two concerns: 1) filesystem location and copy efficiency, and
>> 2) API design.
>>
>> The first one has been addressed. Thank you for pointing out that a
>> second write will be a move operation.
>>
>> However, I am still concerned about the complexity of the API. A
>> relatively small file should not require an upload session to be uploaded.
>> A single API call to the Artifacts API should be enough to upload a file
>> and create an Artifact from it. In Pulp 3.1+ we can introduce the
>> FileUpload model to support chunked uploads. At the same time we would
>> extend the Artifact API to accept a FileUpload id for creating an Artifact.
>>
>>
>>> On Tue, Jun 27, 2017 at 3:20 PM, Dennis Kliban 
>>> wrote:
>>>
>>>> On Tue, Jun 27, 2017 at 2:56 PM, Brian Bouterse 
>>>> wrote:
>>>>
>>>>> Picking up from @jortel's observations...
>>>>>
>>>>> +1 to allowing Artifacts to have an optional FK.
>>>>>
>>>>> If we have an Artifacts endpoint then we can allow for the deleting of
>>>>> a single artifact if it has no FK. I think we want to disallow the removal
>>>>> of an Artifact that has a foreign key. Also filtering should allow a 
>>>>> single
>>>>> operation to clean up all unassociated artifacts by searching for FK=None
>>>>> or similar.
>>>>>
>>>>> Yes, we will need to allow the single call delivering a file to also
>>>>> specify the relative path, size, checksums etc. Since the POST body
>>>>> contains binary data we either need to accept this data as GE

Re: [Pulp-dev] proposing changes to pulp 3 upload API

2017-06-28 Thread Dennis Kliban
On Wed, Jun 28, 2017 at 1:10 PM, Jeff Ortel  wrote:

>
>
> On 06/28/2017 11:44 AM, Brian Bouterse wrote:
> > For a file to be received and saved in the right place once, we need the
> view saving the file to have all the
> > info to form the complete path. After talking w/ @jortel, I think we
> should store Artifacts at the following path:
> >
> > MEDIA_ROOT/content/units/digest[0:2]/digest[2:]/
>
> Consider:
> MEDIA_ROOT/artifact/digest[0:2]/digest[2:]/
>
> Since artifact would have an optional association with content.  And,
> given the many-to-many relationship, the
> content_id FK would no longer exist in the Artifact table.  Also, I have
> more plans for Artifacts in a
> "Publishing" proposal I'm writing to pulp-dev (spoiler alert).
>
> We would also want to enforce the same CAS (content addressed storage)
> uniqueness in the DB using a unique
> constraint on the Artifact.  Eg: unique (sha256, rel_path).  This ensures
> that each unique artifact (file) has
> exactly 1 DB record.
>
>
I don't think it makes sense for an Artifact to have a rel_path. It is just
a file. A ContentUnit should have the rel_path that will be used at publish
time to make the file backing the Artifact available at that rel_path. Is
my understanding of the rel_path correct? In that case the only thing that
should be unique is the sha256 digest. I've written a story that outlines
this use case: https://pulp.plan.io/issues/2843





> >
> > Note that digest is the Artifact's sha256 digest. This is different from
> pulp2 which used the digest of the
> > content unit. Note that  would be provided by the user along
> with  and/or .
> >
> > Note that this will cause an Artifact to live in exactly one place which
> means Artifacts are now unique by
> > digest and would need to be able to be associated with multiple content
> units. I'm not sure why we didn't do
> > this before, so I'm interested in exploring issues associated with this.
> >
> > It would be a good workflow. For a single file content unit (e.g.) rpm
> upload would be a two step process.
> >
> > 1. POST/PUT the file's binary data and the  and 
> and/or  as GET parameters
> > 2. Create a content unit with the unit metadata, and 0 .. n Artifacts
> referred to by ID. This could optionally
> > associate the new unit with one repository as part of the atomic unit
> creation.
> >
> > Thoughts/Ideas?
> >
> > -Brian
> >
> >
> > On Tue, Jun 27, 2017 at 4:16 PM, Dennis Kliban  <mailto:dkli...@redhat.com>> wrote:
> >
> > On Tue, Jun 27, 2017 at 3:31 PM, Michael Hrivnak <
> mhriv...@redhat.com <mailto:mhriv...@redhat.com>> wrote:
> >
> > Could you re-summarize what problem would be solved by not
> having a FileUpload model, and giving the
> > Artifact model the ability to have partial data and no Content
> foreign key?
> >
> > I understand the concern about where on the filesystem the data
> gets written and how many times, but
> > I'm not seeing how that's related to whether we have a
> FileUpload model or not. Are we discussing two
> > separate issues? 1) filesystem locations and copy efficiency,
> and 2) API design? Or is this discussion
> > trying to connect them in a way I'm not seeing?
> >
> >
> > There were two concerns: 1) filesystem location and copy efficiency,
> 2) API design
> >
> > The first one has been addressed. Thank you for pointing out that a
> second write will be a move operation.
> >
> > However, I am still concerned about the complexity of the API. A
> relatively small file should not require
> > an upload session to be uploaded. A single API call to the Artifacts
> API should be enough to upload a file
> > and create an Artifact from it. In Pulp 3.1+ we can introduce the
> FileUpload model to support chunked
> > uploads. At the same time we would extend the Artifact API to accept
> a FileUpload id for creating an
> > Artifact.
> >
> >
> > On Tue, Jun 27, 2017 at 3:20 PM, Dennis Kliban <
> dkli...@redhat.com <mailto:dkli...@redhat.com>> wrote:
> >
> > On Tue, Jun 27, 2017 at 2:56 PM, Brian Bouterse <
> bbout...@redhat.com <mailto:bbout...@redhat.com>>
> > wrote:
> >
> > Picking up from @jortel's observations...
> >
> > +1 to allowing Artifacts to have an optional FK.
> >
> > If we have an Artifacts endpoint then we can allow f

Re: [Pulp-dev] proposing changes to pulp 3 upload API

2017-06-29 Thread Dennis Kliban
On Wed, Jun 28, 2017 at 11:55 PM, Michael Hrivnak 
wrote:

> For a unit like a Distribution, the relative path of the file does matter
> with respect to other files associated with the same content unit and needs
> to be preserved. That content type consists of a collection of arbitrary
> files in an arbitrary directory structure that we have to just preserve. I
> suppose it could be a field on the many-to-many table between an artifact
> and its content. If it were a field on the artifact, and it were part of
> the uniqueness constraint together with the checksum, that could also work.
> But it would oppose the goal of deduplicating files.
>

I definitely misspoke in my last email about the relative path belonging to
the content unit. As you pointed out, it belongs with the ContentArtifact,
which represents the relationship between an artifact and a content unit.


>
> Speaking of, is that the goal of making artifacts and content a
> many-to-many relationship? Otherwise could someone re-summarize why that's
> being proposed?
>


The many-to-many relationship is between Artifact and ContentArtifact. This
allows a content unit to have multiple Artifacts associated with it.


>
> On Wed, Jun 28, 2017 at 6:38 PM, Dennis Kliban  wrote:
>
>> On Wed, Jun 28, 2017 at 1:10 PM, Jeff Ortel  wrote:
>>
>>>
>>>
>>> On 06/28/2017 11:44 AM, Brian Bouterse wrote:
>>> > For a file to be received and saved in the right place once, we need
>>> the view saving the file to have all the
>>> > info to form the complete path. After talking w/ @jortel, I think we
>>> should store Artifacts at the following path:
>>> >
>>> > MEDIA_ROOT/content/units/digest[0:2]/digest[2:]/
>>>
>>> Consider:
>>> MEDIA_ROOT/artifact/digest[0:2]/digest[2:]/
>>>
>>> Since artifact would have an optional association with content.  And,
>>> given the many-to-many relationship, the
>>> content_id FK would no longer exist in the Artifact table.  Also, I
>>> have more plans for Artifacts in a
>>> "Publishing" proposal I'm writing to pulp-dev (spoiler alert).
>>>
>>> We would also want to enforce the same CAS (content addressed storage)
>>> uniqueness in the DB using a unique
>>> constraint on the Artifact.  Eg: unique (sha256, rel_path).  This ensures
>>> that each unique artifact (file) has
>>> exactly 1 DB record.
>>>
>>>
>> I don't think it makes sense for an Artifact to have a rel_path. It is
>> just a file. A ContentUnit should have the rel_path that will be used at
>> publish time to make the file backing the Artifact available at that
>> rel_path. Is my understanding of the rel_path correct? In that case the
>> only thing that should be unique is the sha256 digest. I've written a story
>> that outlines this use case: https://pulp.plan.io/issues/2843
>>
>>
>>
>>
>>
>>> >
>>> > Note that digest is the Artifact's sha256 digest. This is different
>>> from pulp2 which used the digest of the
>>> > content unit. Note that  would be provided by the user along
>>> with  and/or .
>>> >
>>> > Note that this will cause an Artifact to live in exactly one place
>>> which means Artifacts are now unique by
>>> > digest and would need to be able to be associated with multiple
>>> content units. I'm not sure why we didn't do
>>> > this before, so I'm interested in exploring issues associated with
>>> this.
>>> >
>>> > It would be a good workflow. For a single file content unit (e.g.) rpm
>>> upload would be a two step process.
>>> >
>>> > 1. POST/PUT the file's binary data and the  and 
>>> and/or  as GET parameters
>>> > 2. Create a content unit with the unit metadata, and 0 .. n Artifacts
>>> referred to by ID. This could optionally
>>> > associate the new unit with one repository as part of the atomic unit
>>> creation.
>>> >
>>> > Thoughts/Ideas?
>>> >
>>> > -Brian
>>> >
>>> >
>>> > On Tue, Jun 27, 2017 at 4:16 PM, Dennis Kliban >> <mailto:dkli...@redhat.com>> wrote:
>>> >
>>> > On Tue, Jun 27, 2017 at 3:31 PM, Michael Hrivnak <
>>> mhriv...@redhat.com <mailto:mhriv...@redhat.com>> wrote:
>>> >
>>> > Could you re-summarize what problem would be solved by not
>>> having a FileUpload model, and giving the
>>>

Re: [Pulp-dev] proposing changes to pulp 3 upload API

2017-06-29 Thread Dennis Kliban
On Thu, Jun 29, 2017 at 7:40 AM, Michael Hrivnak 
wrote:

>
> On Thu, Jun 29, 2017 at 7:22 AM, Dennis Kliban  wrote:
>
>>
>> The many to many relationship is between Artifact and ContentArtifact.
>> This allows a content unit to have multiple Artifacts associated with it.
>>
>
> Could you elaborate on this? A content unit can have multiple artifacts
> just by artifact having a foreign key to a content unit. That's the
> one-to-many relationship we have on the model now in 3.0-dev.
>
> Also, what is a ContentArtifact?
>
>
Here are some definitions for the new proposal:

   - Artifact - a file stored in pulp
   - Content - a named collection of 0 or more Artifacts that can be
   associated with a repository as a single unit
   - ContentArtifact - a relationship between an Artifact and Content.
   There are 0 or more ContentArtifacts for each Content.
   - Repository - A named collection of content.
   - RepositoryContent - a relationship between Content and Repository.
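The glossary above can be sketched with plain data structures (Django model fields and keys omitted; the `relative_path` field anticipates the through-table discussion later in the thread):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Artifact:
    sha256: str          # a file stored in pulp, unique by digest

@dataclass(frozen=True)
class Content:
    name: str            # a named collection of 0 or more Artifacts

@dataclass(frozen=True)
class ContentArtifact:
    content: Content     # the relationship between an Artifact and Content
    artifact: Artifact
    relative_path: str   # where the file sits inside the unit

# One Artifact shared by two Content units -- the deduplication goal.
shared = Artifact(sha256="abc123")
rows = {
    ContentArtifact(Content("some-distribution"), shared, "images/foo"),
    ContentArtifact(Content("foo-rpm"), shared, "foo"),
}
```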


In the current MVP proposal we have the following:

   - FileUpload - Uploaded file that is used to create Artifacts and is
   then removed (definition for this is not present in the glossary of MVP)
   - Artifact - A file associated with one content (unit). Artifacts are
   not shared between content (units). Create a content unit using an uploaded
   file ID as the source for its metadata. Create Artifacts associated with
   the content unit using an uploaded file ID for each; commit as a single
   transaction.
   - Content (unit) - A single piece of content managed by Pulp. Each file
   associated with a content (unit) is called an Artifact. Each content (unit)
   may have zero or many Artifacts.
   - Repository - A named collection of content.
   - RepositoryContent - a relationship between Content and Repository
   (also not in the glossary of the MVP)

In the MVP in order to add a unit to a repository, a user would:

   1. Create a FileUpload by uploading a file
   2. Create an Artifact and a Content with one API call
   3. Associate a Content with a Repository
   4. Delete the FileUpload (or some cleanup job would do that for the user)

The newly proposed workflow:

   1. Create an Artifact by uploading a file
   2. Create a Content by specifying which Artifact(s) belongs to the
   Content and their relative paths inside the unit. This creates
   ContentArtifacts for each relationship.
   3. Associate a Content with a repository.

In the MVP workflow, once a FileUpload is deleted, it's hard to create
another Content from that file. I am sure we can come up with a way to do
it, but it won't be as straightforward as the above workflow.



>
> --
>
> Michael Hrivnak
>
> Principal Software Engineer, RHCE
>
> Red Hat
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] proposing changes to pulp 3 upload API

2017-06-30 Thread Dennis Kliban
On Fri, Jun 30, 2017 at 12:00 PM, Jeff Ortel  wrote:

> Ah, I missed adding the relative path to the join table.  This is a fine
> idea as well.
>
> On 06/30/2017 10:15 AM, Michael Hrivnak wrote:
> >
> > Jeff, earlier in the thread we talked about using the through table to
> hold the path. I think that's the right
> > place, because the path would be a property of the relationship between
> an artifact and a content unit. It
> > also occurred to me that the file name could be different for different
> content, so maybe the path would need
> > to include the filename. That seems a bit weird, but I think it has to
> be the case if we use a many-to-many
> > relationship.
>
>
I imagined that the relative path would include the name of the
Artifact as it should appear in the Content. So for an artifact that is
nested inside a directory, the relative path might be 'image/foo'. For a
unit that is a single Artifact, the relative path may be 'foo'. The
combination of Content id, Artifact id, and relative path would be the
uniqueness constraint for the through table.



___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Flake8 in Pulp 3

2017-07-06 Thread Dennis Kliban
+1  https://pulp.plan.io/issues/2870

On Mon, Jul 3, 2017 at 2:08 PM, David Davis  wrote:

> +1 from me.
>
> At the very least, we could try it out for a while and see how it goes.
>
>
> David
>
> On Mon, Jul 3, 2017 at 1:05 PM, Brian Bouterse 
> wrote:
>
>> @daviddavis thanks for the flake8 fixes.
>>
>> I was thinking we should enable pep8speaks [1] for all of our branches
>> and repos. We can still have Travis or other places run flake8 but that
>> would put it right in the PR comments.
>>
>> [1]: https://github.com/OrkoHunter/pep8speaks
>>
>> On Mon, Jul 3, 2017 at 12:03 PM, David Davis 
>> wrote:
>>
>>> I just merged a PR to fix flake8 on the 3.0-dev branch of Pulp[0]. It
>>> also fixes the flake8 warnings that came up while flake8 was not running in
>>> Travis.
>>>
>>> If you’re developing Pulp 3, please rebase your branch with 3.0-dev from
>>> upstream. Also for bonus points, check the Travis logs for your PRs to make
>>> sure flake8 is running on the entire codebase and not via “git diff…”.
>>>
>>> Let me know if you need any help.
>>>
>>> Thanks!
>>>
>>> [0] https://github.com/pulp/pulp/pull/3019
>>>
>>> David
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] proposing changes to pulp 3 upload API

2017-07-06 Thread Dennis Kliban
I added 2 more stories based on the discussion we've had so far. Please
provide feedback here or on the tickets.

https://pulp.plan.io/issues/2872
https://pulp.plan.io/issues/2873

On Fri, Jun 30, 2017 at 2:23 PM, Brian Bouterse  wrote:

> @jortel I think what you've written is what we should do. I think we can
> get a race-condition free implementation with this many-to-many table with
> the database transaction including the filesystem operations. +1 to
> adding the relative path to the join table also.
>
> I'm also not sure about including <name> in File.path <--- FileField:
> MEDIA_ROOT/files/digest[0:2]/digest[2:]/<name>  If there were collisions,
> having <name> would allow you to store two Files with different contents
> but the same hash. It's highly, highly improbable even if Pulp is storing
> billions of files, but it could happen. A content addressable store (not
> including <name>) would never be able to handle this case.
>
> Even still, I think we should disinclude the name from the path to the
> Artifact and just have it be the digest. Having a fully addressable File
> storage will be awesome, and a bit more complex code in core is worth it (I
> think). FWIW, I also don't think it will be that hard to get right.
>
> As a side point, I think when a file with the same sha256 is uploaded a
> second time it should be rejected rather than silently accepted.
>
> On Fri, Jun 30, 2017 at 12:00 PM, Jeff Ortel  wrote:
>
>> Ah, I missed adding the relative path to the join table.  This is a fine
>> idea as well.
>>
>> On 06/30/2017 10:15 AM, Michael Hrivnak wrote:
>> >
>> > Jeff, earlier in the thread we talked about using the through table to
>> hold the path. I think that's the right
>> > place, because the path would be a property of the relationship between
>> an artifact and a content unit. It
>> > also occurred to me that the file name could be different for different
>> content, so maybe the path would need
>> > to include the filename. That seems a bit weird, but I think it has to
>> be the case if we use a many-to-many
>> > relationship.
>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] pulp 3 upload API validation

2017-07-10 Thread Dennis Kliban
The upload API for Artifacts is going to allow users to specify the
artifact size and a digest. The Artifact model currently supports 'md5',
'sha1', 'sha224', 'sha256', 'sha384', and 'sha512' digests.

Do we want to let users specify more than one digest per upload? e.g. md5
and sha256?

Do we want to store all 6 digests for each Artifact?
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] pulp 3 upload API validation

2017-07-10 Thread Dennis Kliban
On Mon, Jul 10, 2017 at 3:26 PM, Michael Hrivnak 
wrote:

>
>
> On Mon, Jul 10, 2017 at 3:06 PM, Dennis Kliban  wrote:
>
>> The upload API for Artifacts is going to allow users to specify the
>> artifact size and a digest. The Artifact model currently supports  'md5',
>> 'sha1', 'sha224', 'sha256', 'sha384', and 'sha512' digests.
>>
>> Do we want to let users specify more than one digest per upload? e.g. md5
>> and sha256?
>>
>
> There may be no harm in this, but it would add complexity to the
> verification and not add much value. I'd stick with just one unless there's
> a compelling reason for multiple.
>

I agree. The API is going to raise a validation exception when more than 1
digest is provided.


>
>
>>
>> Do we want to store all 6 digests for each Artifact?
>>
>
> The expensive part of calculating the digests is reading the file. As long
> as you're already reading the entire file, which we will during
> verification, you may as well stuff the bits through multiple hashers
> (digesters?) and get all the digests. Pulp 2 has a function that does this:
>
> https://github.com/pulp/pulp/blob/2.13-release/server/pulp/
> server/util.py#L327-L353
>
> But we can't always guarantee that we'll have all the checksums available,
> for at least two reasons. 1) If in the future we want to use yet another
> algorithm, we probably won't want to run a migration that re-reads every
> file and calculates the additional digest. 2) For on-demand content, we
> don't have it locally, so we can't calculate any additional checksums until
> it gets fetched.
>
> So this may be one of those times where we use a good-ole-fashioned getter
> method that returns the requested digest if it's on the artifact,
> calculates it if not, or raises an exception if the value isn't available
> and can't be calculated.
>

For uploaded Artifacts, all of the digests will be calculated as the file
is being processed during the upload. So I don't think calculating all of
them should incur significantly more cost than just one. The code snippet
from Pulp 2 looks similar to what I am doing.
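The one-pass approach described above can be sketched as follows: read the stream once and feed each chunk to all six hashers (a sketch, not Pulp's actual upload code):

```python
import hashlib
import io

ALGORITHMS = ("md5", "sha1", "sha224", "sha256", "sha384", "sha512")

def compute_digests(fileobj, chunk_size=1024 * 1024):
    """Read the file object once, updating every hasher with each chunk."""
    hashers = {name: hashlib.new(name) for name in ALGORITHMS}
    while chunk := fileobj.read(chunk_size):
        for hasher in hashers.values():
            hasher.update(chunk)
    return {name: hasher.hexdigest() for name, hasher in hashers.items()}

digests = compute_digests(io.BytesIO(b"example artifact data"))
```

The extra hashers add CPU per chunk but no additional I/O, which is why computing all six during upload is cheap relative to computing one.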

I haven't given much thought to the getter, but your idea sounds fine to
me.

Thanks,
Dennis



>
> --
>
> Michael Hrivnak
>
> Principal Software Engineer, RHCE
>
> Red Hat
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] pulp 3 upload API validation

2017-07-11 Thread Dennis Kliban
On Tue, Jul 11, 2017 at 1:20 PM, Brian Bouterse  wrote:

> We should not raise a validation exception because, due to semver, we
> couldn't stop raising that exception later, specifically if we ever want
> to allow a double checksum to be specified in the future.
>
> For the MVP, I think the choices are: only respect sha256 and ignore the
> rest, OR run as many standard validators as the user specifies. I'm OK with
> either of these, with a slight preference for validating as many of them as
> are specified. Since we're already handling all of the data at save time,
> pushing it through additional digest validators will cost a bit of CPU but
> not much additional I/O. Also, if the user asked us to do that validation,
> then having it demand some additional resources is OK. And having feature
> parity with the changesets is good. The changesets can run all the standard
> digest validators, so having uploads do the same would be consistent.
>

I am reversing what I had previously said, and I agree that we should not
raise an exception if a user provides more than one checksum at upload
time. My current implementation checks every checksum specified by the
user.
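Checking every user-supplied checksum then reduces to comparing dictionaries (a sketch; the error format is an assumption):

```python
def digest_mismatches(computed, provided):
    """Return {algorithm: supplied_value} for every digest that fails to match."""
    return {name: value for name, value in provided.items()
            if computed.get(name) != value}

computed = {"md5": "aaa", "sha256": "bbb"}
errors = digest_mismatches(computed, {"md5": "aaa", "sha256": "wrong"})
```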




> On Mon, Jul 10, 2017 at 5:11 PM, Jeff Ortel  wrote:
>
>>
>>
>> On 07/10/2017 02:36 PM, Dennis Kliban wrote:
>> > On Mon, Jul 10, 2017 at 3:26 PM, Michael Hrivnak > <mailto:mhriv...@redhat.com>> wrote:
>> >
>> >
>> >
>> > On Mon, Jul 10, 2017 at 3:06 PM, Dennis Kliban > <mailto:dkli...@redhat.com>> wrote:
>> >
>> > The upload API for Artifacts is going to allow users to specify
>> the artifact size and a digest. The
>> > Artifact model currently supports  'md5', 'sha1', 'sha224',
>> 'sha256', 'sha384', and 'sha512' digests.
>> >
>> > Do we want to let users specify more than one digest per
>> upload? e.g. md5 and sha256?
>> >
>> >
>> > There may be no harm in this, but it would add complexity to the
>> verification and not add much value. I'd
>> > stick with just one unless there's a compelling reason for multiple.
>> >
>> >
>> > I agree. The API is going to raise a validation exception when more
>> than 1 digest is provided.
>>
>> +1
>>
>> >
>> >
>> >
>> >
>> >
>> > Do we want to store all 6 digests for each Artifact?
>> >
>> >
>> > The expensive part of calculating the digests is reading the file.
>> As long as you're already reading the
>> > entire file, which we will during verification, you may as well
>> stuff the bits through multiple hashers
>> > (digesters?) and get all the digests. Pulp 2 has a function that
>> does this:
>> >
>> > https://github.com/pulp/pulp/blob/2.13-release/server/pulp/
>> server/util.py#L327-L353
>> > <https://github.com/pulp/pulp/blob/2.13-release/server/
>> pulp/server/util.py#L327-L353>
>> >
>> > But we can't always guarantee that we'll have all the checksums
>> available, for at least two reasons. 1) If
>> > in the future we want to use yet another algorithm, we probably
>> won't want to run a migration that
>> > re-reads every file and calculates the additional digest. 2) For
>> on-demand content, we don't have it
>> > locally, so we can't calculate any additional checksums until it
>> gets fetched.
>> >
>> > So this may be one of those times where we use a good-ole-fashioned
>> getter method that returns the
>> > requested digest if it's on the artifact, calculates it if not, or
>> raises an exception if the value isn't
>> > available and can't be calculated.
>> >
>> >
>> > For uploaded Artifacts, all of the digests will be calculated as the
>> file is being processed during the
>> > upload. So I don't think calculating all of them should incur
>> significantly more cost than just one. The code
>> > snippet from Pulp 2 looks similar to what I am doing.
>>
>> This functionality should be a method on the Artifact and not a util
>> function somewhere.
>>
>> >
>> > I haven't given much thought to the getter, but your idea sounds fine
>> to me.
>> > Thanks,
>> > Dennis
>> >
>> >
>> >
>> >
>> > --
>> >
>> > Michael Hrivnak
>> >
>> > Principal Software Engineer, RHCE
>> >
>> > Red Hat
>> >
>> >
>> >
>> >
>> > ___
>> > Pulp-dev mailing list
>> > Pulp-dev@redhat.com
>> > https://www.redhat.com/mailman/listinfo/pulp-dev
>> >
>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] proposing changes to pulp 3 upload API

2017-07-12 Thread Dennis Kliban
I just submitted a PR[0] for uploading Artifacts. The API supports creating
Artifacts, listing all Artifacts, and viewing details of an Artifact.
Updates of Artifacts via REST API are disabled. On upload of a file, all
possible checksums are calculated and compared to any checksums provided by
the user. Any inconsistencies are raised as validation errors.

I am now starting to work on the story[1] #2872 for creating Content units
from Artifacts.


[0] https://github.com/pulp/pulp/pull/3080
[1] https://pulp.plan.io/issues/2872

On Thu, Jul 6, 2017 at 6:23 PM, Dennis Kliban  wrote:

> I added 2 more stories based on the discussion we've had so far. Please
> provide feedback here or on the tickets.
>
> https://pulp.plan.io/issues/2872
> https://pulp.plan.io/issues/2873
>
> On Fri, Jun 30, 2017 at 2:23 PM, Brian Bouterse 
> wrote:
>
>> @jortel I think what you've written is what we should do. I think we can
>> get a race-condition free implementation with this many-to-many table with
>> the database transaction including the filesystem operations. +1 to
>> adding the relative path to the join table also.
>>
>> I'm also not sure about including <name> in File.path <--- FileField:
>> MEDIA_ROOT/files/digest[0:2]/digest[2:]/<name>  If there were
>> collisions, having <name> would allow you to store two Files with different
>> contents but the same hash. It's highly, highly improbable even if Pulp is
>> storing billions of files, but it could happen. A content addressable store
>> (not including <name>) would never be able to handle this case.
>>
>> Even still, I think we should disinclude the name from the path to the
>> Artifact and just have it be the digest. Having a fully addressable File
>> storage will be awesome, and a bit more complex code in core is worth it (I
>> think). FWIW, I also don't think it will be that hard to get right.
>>
>> As a side point, I think when a file with the same sha256 is uploaded a
>> second time it should be rejected rather than silently accepted.
>>
>> On Fri, Jun 30, 2017 at 12:00 PM, Jeff Ortel  wrote:
>>
>>> Ah, I missed adding the relative path to the join table.  This is a fine
>>> idea as well.
>>>
>>> On 06/30/2017 10:15 AM, Michael Hrivnak wrote:
>>> >
>>> > Jeff, earlier in the thread we talked about using the through table to
>>> hold the path. I think that's the right
>>> > place, because the path would be a property of the relationship
>>> between an artifact and a content unit. It
>>> > also occurred to me that the file name could be different for
>>> different content, so maybe the path would need
>>> > to include the filename. That seems a bit weird, but I think it has to
>>> be the case if we use a many-to-many
>>> > relationship.
>>>
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Pulp3 first released - User stories ready to be tested

2017-07-18 Thread Dennis Kliban
I couldn't log in. Can you make it public?

On Tue, Jul 18, 2017 at 4:14 PM, Kersom Moura Oliveira 
wrote:

> Hi,
>
> I have been working on a survey of new features of Pulp3 and on creating
> GitHub issues.
>
> I read the user stories and the Pulp3 MVP, and I would like to ask
> for your help in identifying the user stories that will be ready to be
> tested for the first Pulp3 release. Please add the user stories to the file below.
>
> http://pulp-qe.etherpad.corp.redhat.com/145
>
> Based on this, I will create Pulp-Smash GitHub issues, which will give us
> a better overview of how to approach this.
>
> Thanks,
>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] [pulp 3] mutable Artifacts are too complicated

2017-07-20 Thread Dennis Kliban
As I was writing validation code for Artifacts, I realized how complex the
validation gets when we allow users to create an Artifact and then update
it later, but only once.

The upload REST API also becomes awkward:

1. Artifact with sha256 hash 123456 is created without a file (deferred
download)
2. User performs a POST to /api/v3/artifacts/ with an Artifact that has
sha256 hash 123456 and gets a 400 response saying that the artifact already
exists and can be updated by performing a PUT on the Artifact resource.
3. User performs PUT on Artifact with sha256 123456 and includes a file.
The file is uploaded and Artifact is now storing a path to the file.
4. User performs another PUT on the file to update it again and receives a
400 response saying the Artifact cannot be modified.

or

1. Artifact with sha256 hash 123456 is created without a file (deferred
download)
2. User performs a POST to /api/v3/artifacts/ with an Artifact that has
sha256 hash 123456 and gets a 201 response saying the Artifact was created.
  - This is very misleading because the Artifact already existed


Neither of these scenarios is appealing to me. The REST API would be
simplified if we made Artifacts immutable. Either a plugin creates an
Artifact once or an Artifact is created using the REST API. After that it
can only be retrieved or deleted.

Any information, such as expected size and checksum, should be stored in a
separate table, together with the URL of the remote artifact. We could
call it RemoteArtifact.
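A minimal sketch of the proposed immutable-Artifact semantics (hypothetical names; this is not the actual Pulp 3 code): once an Artifact exists for a digest, a second create is rejected rather than turned into an update, and the only other operations are retrieve and delete.

```python
class ArtifactExists(Exception):
    """Raised when a create targets a digest that is already stored."""


class ArtifactStore:
    def __init__(self):
        self._by_digest = {}

    def create(self, sha256, file_path):
        # POST /api/v3/artifacts/: succeeds on first create, rejected after
        if sha256 in self._by_digest:
            raise ArtifactExists(sha256)
        self._by_digest[sha256] = file_path
        return file_path

    def get(self, sha256):
        return self._by_digest[sha256]

    def delete(self, sha256):
        self._by_digest.pop(sha256, None)
```

With this shape there is no PUT at all, so none of the one-time-update states above need to be validated.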
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp 3] mutable Artifacts are too complicated

2017-07-20 Thread Dennis Kliban
On Thu, Jul 20, 2017 at 11:21 AM, Jeff Ortel  wrote:

>
>
> On 07/20/2017 09:48 AM, Dennis Kliban wrote:
> > Neither of these scenarios is appealing to me. The REST API would be
> simplified if we made Artifacts
> > immutable. Either a plugin creates an Artifact once or an Artifact is
> created using the REST API. After that
> > it can only be retrieved or deleted.
> >
> > Any information, such as expected size and checksum should be stored in
> a separate table - together with the
> > URL of the remote artifact. We could call it RemoteArtifact.
>
> Just to clarify based on our discussions: The suggestion here is to
> broaden the scope of DownloadCatalog and
> rename to RemoteArtifact.  Correct?
>
>
That is correct.


>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] PUP Process: "obvious consensus"

2017-08-10 Thread Dennis Kliban
+1

On Thu, Aug 10, 2017 at 9:21 AM, David Davis  wrote:

> +1. I think this is worth trying out.
>
>
> David
>
> On Thu, Aug 10, 2017 at 8:54 AM, Austin Macdonald 
> wrote:
>
>> +1
>>
>> Thank you Brian!
>>
>> On Thu, Aug 10, 2017 at 5:33 AM, Brian Bouterse 
>> wrote:
>>
>>> A small language clarification was pushed based on feedback via
>>> comment:  https://github.com/bmbouter/pups/commit/f5b7282b2d2e369b90f1
>>> 49e4cc25226bb093171b
>>>
>>> Voting is open for the PUP1 revisions. Normally the voting window is
>>> longer, but this topic has been discussed for a long time. The core team
>>> earlier this week decided a shorter voting window was appropriate in this
>>> case. Voting will close at midnight UTC on Friday Aug 11th. Please raise
>>> any concerns around this process. Otherwise, please send in votes via this
>>> thread. I'll cast mine now.
>>>
>>> +1 to passing the pup1 revisions.
>>>
>>> Thanks to everyone who has contributed comments and energy into this
>>> topic.
>>>
>>> -Brian
>>>
>>>
>>> On Mon, Aug 7, 2017 at 10:15 AM, Brian Bouterse 
>>> wrote:
>>>
 After some in-person convo, the core team wants to open PUP1 revision
 voting on Wednesday and close it at midnight UTC on Friday Aug 11th. We
 will pass/not-pass according to the voting outlined in PUP1 itself (a
 variation on self-hosting [0]). We also want to ask that any comments on
 the PUP1 revisions be posted before midnight UTC tomorrow Aug 8th.

 [0]: https://en.wikipedia.org/wiki/Self-hosting

 -Brian



 On Mon, Jul 31, 2017 at 9:24 AM, Brian Bouterse 
 wrote:

> I've pushed a new commit [3] to the PR. It includes the following
> changes. Please review and comment. If there are any major/blocking
> concerns about adopting this please raise them. Once the PUP1 revisions 
> are
> resolved, PUP2 can also be accepted based on the votes it had previously.
>
> * Adjusts the +1 approvals to come from anywhere, not just core devs
> * Explicitly allows for votes to be recast
> * Explains two examples where votes are recast. One is based on many
> other -1 votes being cast. The other is when concerns are addressed and a
> -1 vote is recast.
>
> [3]: https://github.com/pulp/pups/pull/5/commits/959c67f5a4d16a26
> e1d97ea6fe4aa570066db768
>
> -Brian
>
>
> On Tue, Jun 27, 2017 at 3:33 PM, Brian Bouterse 
> wrote:
>
>> From the discussion on the call last week, I've made some revisions
>> [2] to explore the idea of having a lazy consensus model. Comments, 
>> ideas,
>> concerns are welcome either on the PR or via this thread.
>>
>> As @mhrivnak pointed out, the adoption of a lazy consensus model is
>> meaningfully different than the language we have in pup1 today which uses
>> "obvious consensus". I want to be up front about that change [2]. If 
>> anyone
>> significantly disagrees with this direction, or has concerns, please 
>> raise
>> them.
>>
>> [2]: https://github.com/pulp/pups/pull/5/
>>
>> -Brian
>>
>> On Mon, Jun 19, 2017 at 1:48 PM, Brian Bouterse 
>> wrote:
>>
>>> After some in-person discussion, we will have a call to discuss
>>> ideas and options regarding the pup1 process. We will use this etherpad 
>>> [0]
>>> for notes, and we will recap the information to the list also. In
>>> preparation, please continue to share ideas, perspectives and concerns 
>>> via
>>> this list.
>>>
>>> When: June 22nd, 1pm UTC. See this in your local timezone here [1].
>>> The call will last no longer than 1 hour.
>>>
>>> How to connect:
>>> video chat: https://bluejeans.com/697488960
>>> phone only: + 800 451 8679   Enter Meeting ID: 697488960
>>>
>>> [0]: http://pad-katello.rhcloud.com/p/Pulp_PUP_Process_Revisited
>>> [1]: http://bit.ly/2rJqegX
>>>
>>> -Brian
>>>
>>>
>>> On Mon, Jun 19, 2017 at 9:23 AM, Michael Hrivnak <
>>> mhriv...@redhat.com> wrote:
>>>
 Back to where we started, having digested the discussion here and
 references cited, it seems clear that we have a system based on 
 consensus,
 and that there is strong desire for decisions about process to continue
 being made with consensus. In terms of "obvious consensus", I'll 
 propose
 that if any core member thinks it has not been reached, it has 
 (perhaps by
 definition) not been reached.

 PUP0001 simply states in that case, "If obvious consensus is not
 reached, then the core devs decide." We don't need to over-complicate 
 this.
 We've had reasonable success for many years at making process changes 
 and
 agreeing on them. The PUP system should be a tool that helps us define 
 a
 proposal as best we can, while providing a focal 

[Pulp-dev] Call for Presenters: Community Demo, Thursday September 7th

2017-09-01 Thread Dennis Kliban
The next community demo is scheduled for Thursday, September 7th at 14:00
UTC [0]. If you've done any of the following as part of this sprint [1],
please consider signing up as a presenter [2].

- Notable feature
- Notable bugfix
- QE update/activity


[0] http://bit.ly/2vQ2IvO
[1] https://pulp.plan.io/issues?query_id=93
[2] http://pad-katello.rhcloud.com/p/Pulp_Sprint_Demo_Agenda

Thanks,
Dennis
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] pulp3 exception logging

2017-09-13 Thread Dennis Kliban
The tasking system in Pulp 3 supports recording non-fatal exceptions in the
database. Unhandled (fatal) exceptions are also recorded in the database.
Both types of exceptions also appear in the logs. Is this behavior
intentional, or do we want to store the exceptions only in the database? I
like the current behavior, but I wanted to get some input from others.

-Dennis
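An illustrative sketch of the dual recording described above: a non-fatal exception is appended to the task's record and also written to the log. The names here (`record_nonfatal`, the record layout, the logger name) are assumptions for illustration, not Pulp's actual code.

```python
import logging
import traceback

log = logging.getLogger("pulp.tasking")


def record_nonfatal(task_record, exc):
    # store a structured entry on the task's database record...
    entry = {
        "type": type(exc).__name__,
        "description": "".join(
            traceback.format_exception_only(type(exc), exc)).strip(),
    }
    task_record.setdefault("non_fatal_errors", []).append(entry)
    # ...and also emit it to the logs (the behavior under discussion)
    log.warning("non-fatal error in task: %s", entry["description"])
    return task_record
```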
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] natural key fields (task #3025)

2017-09-26 Thread Dennis Kliban
The Content model in pulpcore defines a 'natural_key_fields' tuple that
models inheriting from it need to populate with the field names that make
that content type unique. At the same time, each model defines database
uniqueness constraints in its Meta class.

In pulp_example[0] I've demonstrated how the database uniqueness constraint
can be used to get a list of all of the unique fields for content. As part
of this task I'd like to move this code out of pulp_example and into
pulpcore so all plugins can use it. I will also remove the
'natural_key_fields' tuple.
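A hedged sketch of the idea: derive a content type's natural key from its declared uniqueness constraints instead of maintaining a separate 'natural_key_fields' tuple. `FileMeta` and `unique_together` mirror Django's Meta convention here for illustration; the real pulpcore code differs.

```python
class FileMeta:
    # e.g. a file content type unique on (relative_path, digest)
    unique_together = (("relative_path", "digest"),)


def natural_key_fields(meta):
    """Flatten unique_together into an ordered, de-duplicated tuple."""
    fields = []
    for constraint in meta.unique_together:
        for name in constraint:
            if name not in fields:
                fields.append(name)
    return tuple(fields)
```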

Thoughts? Objections?



[0]
https://github.com/pulp/pulp_example/blob/master/pulp_example/app/models.py#L111
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp-internal] Recommend #2950 be re-prioritized.

2017-09-28 Thread Dennis Kliban
On Tue, Sep 26, 2017 at 11:14 AM, Jeff Ortel  wrote:

> Team,
>
> I am fine with revisiting storage at some point but disagree that #2950
> should be *high* priority (higher than
> most other tasks) and should not be aligned with sprint 26.  As noted in
> redmine, Our FileStorage implementation
> conforms to the django storage interface, is simple and tested.  The
> django provided FileSystemStorage has
> concerning code quality and is completely undocumented.  To safely
> subclass it will require inspecting the
> code line-by-line to ensure predictable behavior when overriding any of
> its methods.  As you all know,
> reliable storage is a critical part of Pulp.
>

We use the rest of Django without inspecting every line of code, so I don't
see a reason to treat the FileSystem storage backend any different. We are
using Django so we can reduce the amount of code we are maintaining
ourselves. Completely reimplementing the storage backend goes against that
goal. I plan to work on this issue today.

-Dennis

>
> As I said, it's a fine idea to revisit this.  But, looking at the other
> tasks aligned to sprint 26 (and, all
> the work left to do for the MVP), this is not higher priority.
>
> -jeff
>
>
> https://pulp.plan.io/issues/2950
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp-internal] Recommend #2950 be re-prioritized.

2017-09-28 Thread Dennis Kliban
I've posted a PR[0] that resolves this issue. I tested that I can upload
content, publish it, and download it from a distribution. I also tested
that I can sync, publish, and download from the distribution. The file path
is also correct when requesting an artifact via REST API.

[0] https://github.com/pulp/pulp/pull/3178
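The path behavior at stake in this thread is content-addressable storage: the artifact's location is derived from its digest, so the same bytes always map to the same path. The sharding layout and root below are assumptions for illustration, not necessarily Pulp's actual scheme.

```python
import posixpath


def storage_path(sha256, root="/var/lib/pulp/artifact"):
    # shard by the leading hex pairs so no single directory grows too large
    return posixpath.join(root, sha256[:2], sha256[2:4], sha256)
```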

On Thu, Sep 28, 2017 at 1:27 PM, Jeff Ortel  wrote:

> On 09/28/2017 10:39 AM, Brian Bouterse wrote:
> > One of the things I heard was that we aren't sure why we have this
> > custom storage backend. I was very surprised to hear that because it was
> > developed and merged.
>
> Yes, implementing a custom storage back-end for no good reason would be
> surprising.  But, that's not what
> happened.
>
> The detailed reasons were very well understood at the time.  Basically,
> the behavior of the FileSystemStorage
> with regard to the way it calculated actual storage paths and how it
> stored files was incompatible with our
> needs.  I should have documented those exact details but didn't.  I was
> more interested in getting pulp3
> storage working.  Given the unexpected (and undocumented) behavior and
> code complexity in the
> FileSystemStorage, it seemed safer and easier to implement a few extra
> methods than to extend.
>
> > I want to make sure we understand custom code we've written before we go
> > too much further. I think that is why we put it on the sprint currently.
>
> If the purpose is to document requirements, do a gap analysis and
> re-design a solution, this task should not
> yet be groomed and aligned to a sprint.
>
> >
> > On Thu, Sep 28, 2017 at 11:06 AM, Jeff Ortel wrote:
> >
> >
> >
> > On 09/28/2017 08:56 AM, Dennis Kliban wrote:
> > > On Tue, Sep 26, 2017 at 11:14 AM, Jeff Ortel wrote:
> > >
> > > Team,
> > >
> > > I am fine with revisiting storage as some point but disagree
> that #2950 should be *high* priority (higher than
> > > most other tasks) and should not aligned with sprint 26.  As
> noted in redmine, Our FileStorage implementation
> > > conforms to the django storage interface, is simple and
> tested.  The django provided FileSystemStorage has
> > > concerning code quality and is completely undocumented.  To
> safely subclass it will require inspecting the
> > > code line-by-line to ensure predictable behavior when
> overriding any of it's methods.  As you all know,
> > > reliable storage is a critical part of Pulp.
> > >
> > >
> > > We use the rest of Django without inspecting every line of code,
> so I don't see a reason to treat the
> > > FileSystem storage backend any different. We are using Django so
> we can reduce the amount of code we are
> > > maintaining ourselves. Completely reimplementing the storage
> backend goes against that goal. I plan to work on
> > > this issue today.
> >
> > The rest of django is documented.  The FileSystemStorage class is
> not.  Not even docstrings.  It has
> > undocumented behaviors and the only way to understand them is to
> read the code.
> >
> > I just have a hard time understanding why this is higher priority
> than these other sprint tasks like:
> >
> > 3024content creation API does not validate the hostname portion
> of the URL.
> > 3021Database writes are not all recorded in DB
> > 2994Erratum not updated after upstream change
> > 2988Exception when raising a user-Defined that has a custom
> __init__.
> > 2373Planning on how to support global importer
> >
> > And ... everything else left to do for the MVP.
> >
> > >
> > > -Dennis
> > >
> > >
> > > As I said, it's a fine idea to revisit this.  But, looking at
> the other tasks aligned to sprint 26 (and, all
> > > the work left to do for the MVP), this is not higher priority.
> > >
> > > -jeff
> > >
> > >
> > > https://pulp.plan.io/issues/2950
> > >
> > >
> >
> >
> > ___
> > Pulp-dev mailing list
> > Pulp-dev@redhat.com
> > https://www.redhat.com/mailman/listinfo/pulp-dev
> >
> >
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Reconsidering PUP-3

2017-09-29 Thread Dennis Kliban
+1

On Fri, Sep 29, 2017 at 9:17 AM, David Davis  wrote:

> I went back and looked at PUP-3 and it does lay out some of the items
> @pcreech mentions although at a higher, more general level. I’ll leave the
> document as is unless someone disagrees.
>
> With that in mind, let's go ahead and vote on PUP-3. We’ll end the voting
> on October 8th which is about 10 days away.
>
> To refresh everyone’s memory, voting is outlined in PUP-1:
>
> https://github.com/pulp/pups/blob/master/pup-0001.md#voting
>
> And here’s the PUP in question:
>
> https://github.com/daviddavis/pups/blob/pup3/pup-0003.md
>
> Please respond to this thread with your vote or any comments/questions.
>
>
> David
>
> On Thu, Sep 28, 2017 at 12:15 PM, Brian Bouterse 
> wrote:
>
>> Thanks @pcreech for all the comments. I also believe that switching to a
>> cherry-picking model will provide many benefits.
>>
>> As a general FYI, the way PUP-3 is written, it allows us to adopt it
>> (assuming it passes at vote) and then figure out how to roll it out later
>> in coordination w/ release engineering.
>>
>> @daviddavis, should we start casting votes or should we wait for you to
>> declare it open after maybe pushing an update?
>>
>> Thanks!
>> Brian
>>
>> On Mon, Sep 25, 2017 at 1:38 PM, David Davis 
>> wrote:
>>
>>> Patrick,
>>>
>>> Thanks for the feedback. I’d like to update PUP-3 in the next couple
>>> days with the pain points you mention.
>>>
>>> Also, I’d love the idea of having some tooling that tells us exactly
>>> which commits to cherry pick into which release branch. I think we should
>>> have this in place before we switch to cherry-picking if we decide to go
>>> that route.
>>>
>>>
>>> David
>>>
>>> On Fri, Sep 22, 2017 at 1:56 PM, Patrick Creech 
>>> wrote:
>>>
 Since I was one of the early voices against cherrypicking during the
 initial vote, I figured I'd send this e-mail along with some points that
 have helped me be in favor of cherry picking before voting
 starts.

 In taking over the release engineering process, I have gained some
 perspective on our current situation and have found Cherrypicking to be an
 enticing concept for pulp.  Most notably, these are the
 things I ran into during the release process for 2.13.4 that caused
 some headaches and frustrations.

 Firstly, we had an issue come up with the Pulp Docker 2 line that does
 not exist with the new Pulp Docker 3 line.  Dockerhub V2 Schema2 has some
 manifest issues that cause syncs in the Pulp Docker 2
 line to fail.  A change specific to this issue was created and merged
 to the 2.4-dev branch.  Its only application is the 2 line, but to satisfy
 our current tooling and policy, this change had to be
 merged forward through 3.0-dev and to Master, where it no longer
 applies and the code no longer exists in this form.  I took great care to
 verify that no code changes happened on 3.0-dev and master,
 but there is the window open for issues here.

 Another issue happened when changes that were merged to a -dev branch
 weren't merged forward.  In this case, two issues that landed on the most
 recent -dev branch weren't merged forward to master before a helper script
 was run.  When this helper script ran, it was run with the merge strategy
 of "ours" to ensure its changes don't persist forward.  When "ours" is
 used, conflicting
 changes are automatically dropped from the source branch to the
 destination branch.  This caused the code for these two changes to
 disappear on the master branch, while their commit hashes were there
 in the history.  I had to cherry-pick these changes forward to master
 from the branch they landed on to ensure the modified code exists.

 And lastly, since 2.13.4 was a 2.13.z release that was done after
 2.14.0 went out, changes had to be cherry-picked back from 2.14-dev to
 2.13-dev.  Since the hash changed, these changes yet again had
 to be merged forward to 2.14-dev and then Master, even though they
 already existed in these branches, thus helping to pollute the repo history
 further with more duplication.

 While a large portion of these issues can be attributed to the merge
 forward everything policy, I have been in talks with other teams that
 follow a cherrypicking strategy about their workflow since
 I'm in the process of revamping pulp's release engineering process.
 Something that caught my attention as beneficial is a team's strategy that
 everything goes on master, and with some automated
 tooling and bookeeping in their issue tracker they can identify what
 cherrypicks need to be pulled back to the release branch and spit out a
 command for the release engineer to run to do the
 cherrypicks.  The release engineer resolves any conflicts, and then
 puts up a PR to merge into the release branch so the work goes throu

Re: [Pulp-dev] DRF 3.7.0 Issues

2017-10-06 Thread Dennis Kliban
I put in a PR[0] to fix the actual incompatibility.

[0] https://github.com/pulp/pulp/pull/3186

On Fri, Oct 6, 2017 at 2:40 PM, Brian Bouterse  wrote:

> 2.6.4  should have read   3.6.4
>
> On Fri, Oct 6, 2017 at 2:37 PM, Brian Bouterse 
> wrote:
>
> >> Today DRF pushed 3.7.0 to PyPI which breaks Pulp, the docs builders,
>> and vagrant environments. This issue is tracked here [0] along with a
>> workaround for your vagrant environment.
>>
>> I think we should triage [0] onto the sprint w/ high prio. Should we?
>>
>> The resolution of [0] is to update Pulp to be 3.7.0 compatible right?
>>
>> In the meantime @daviddavis and I fixed the docs builders by pinning them
>> to 2.6.4 and pushing the config using JJB. There is still this PR [2] which
>> pins core to 2.6.4 also, so we should merge that too right?
>>
>> Even if we do merge [2] the ansible is still broken if someone knows the
>> answer to this [3].
>>
>> [0]: https://pulp.plan.io/issues/3057
>> [1]: https://github.com/pulp/pulp_packaging/pull/436/files
>> [2]: https://github.com/pulp/pulp/pull/3185/files
>> [3]: https://pulp.plan.io/issues/3057#note-3
>>
>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] 'Pulp 3 installer' tag added to pulp.plan.io

2017-10-17 Thread Dennis Kliban
This tag should be used when reporting issues related to the Ansible roles
used for installing Pulp 3.
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] 'Pulp 3 installer' tag added to pulp.plan.io

2017-10-17 Thread Dennis Kliban
We should tag all these as 'Pulp 3 installer'. If you have some issues in
mind, please add that tag.

On Tue, Oct 17, 2017 at 3:02 PM, Austin Macdonald 
wrote:

> +1, thanks Dennis. I see some issues on the "external" tracker, and some
> in the "pulp" tracker. Is the plan to move them all to the "pulp" tracker
> and add the installer tag?
>
> On Tue, Oct 17, 2017 at 2:53 PM, Dennis Kliban  wrote:
>
>> This tag should be used when reporting issues related to the Ansible
>> roles used for installing Pulp 3.
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Many-to-many joins in the API

2017-10-18 Thread Dennis Kliban
Exposing the RepoContent model via REST API leaves us with the most
flexibility in the future. We decided on this design in issue 2873[0].

[0] https://pulp.plan.io/issues/2873
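A toy model of exposing the join table directly, as decided in #2873 (names here are illustrative, not the real endpoints' implementation): each repository-content association is its own record with its own id, so the REST layer can create and delete associations, and the relationship can later carry extra fields without changing the repository or content resources.

```python
import itertools


class RepositoryContentApi:
    def __init__(self):
        self._ids = itertools.count(1)
        self._assocs = {}  # assoc_id -> (repo_id, content_id)

    def create(self, repo_id, content_id):
        # POST /api/v3/repositorycontents/
        assoc_id = next(self._ids)
        self._assocs[assoc_id] = (repo_id, content_id)
        return assoc_id

    def delete(self, assoc_id):
        # DELETE /api/v3/repositorycontents/<assoc_id>/
        del self._assocs[assoc_id]

    def repo_content(self, repo_id):
        # read-only listing, like /api/v3/repositories/<repo_id>/content/
        return [c for r, c in self._assocs.values() if r == repo_id]
```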

On Wed, Oct 18, 2017 at 1:38 PM, Michael Hrivnak 
wrote:

> Whenever we get to versioning repositories, that will have a big impact on
> the issues you're raising. Regardless of how exactly a user causes content
> to be added or removed from a repo, the main result of the operation will
> be the creation of a new repo version. For example, a sync task would
> create a new version as its output, and that version would reference the
> creation and removal of relationships to content.
>
> From a REST standpoint, that changes what expectations we need to meet.
> For one, the RepositoryContent model will change, and we probably won't
> want to allow users to directly create/update/delete those entries, on the
> principle that a repo version is immutable. If the user wants to add or
> remove content from a repo, the output must always be a new version.
>
> POSTing and DELETing to either a relationship endpoint or a content
> endpoint isn't a great fit either. If we had a path "
> /api/v3/repositories//content/", it would only be a shortcut to
> the set of content in the repo's current version. A user should not expect
> to be able to change a content set directly, so giving them write access
> there could be misleading.
>
> A simple option could be to allow a POST to a repo version endpoint. If a
> user did a GET on "/api/v3/repositories//versions//", they
> should probably get a representation that includes what content was added
> and removed by that version. We could allow them to POST a similar
> representation to  "/api/v3/repositories//versions/", which
> would create a new version that makes the desired additions and removals.
> From a REST standpoint, I think that's the most natural way to facilitate
> arbitrary adding and removing of content given a model that includes repo
> versions.
>
> That approach could also solve our upload problem, where we don't want 100
> uploads to create 100 repo versions. A user could create 100 new content
> units via upload, and then associate them to a repo in one operation.
>
> Thoughts on that?
>
> On Wed, Oct 18, 2017 at 12:45 PM, David Davis 
> wrote:
>
>> Working on issue #3073 [1], there was a discussion that came up about how
>> to best handle updating many-to-many joins in the API. We currently have a
>> many-to-many relationship between repositories and contents which is
>> handled by a RepositoryContent model. The api for this model is at
>> /api/v3/repositorycontents/ (more info here [2]). But we also have an API
>> already at /api/v3/repository//content as well but it currently
>> only lists the contents for a repository.
>>
>> I think there are two options for supporting many-to-many joins in the
>> API. First, we could continue to expose the join model and have routes like:
>>
>> POST /api/v3/repositorycontents/ (which takes a repo id and content id)
>> DELETE /api/v3/repositorycontents//
>>
>> We would have to start exposing the repo content id in the api to get
>> this second link working.
>>
>> However, alternatively, we could use a nested route to handle
>> adding/removing repo contents:
>>
>> POST /api/v3/repositories//content/ (which takes a content id)
>> DELETE /api/v3/repositories//content//
>>
>> This second scheme would essentially hide the RepositoryContent model
>> from API users. I am not sure if that’s a good thing or a bad thing.
>>
>> Thoughts?
>>
>> [1] https://pulp.plan.io/issues/3073
>> [2] https://pulp.plan.io/issues/2873
>>
>> David
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
>
> --
>
> Michael Hrivnak
>
> Principal Software Engineer, RHCE
>
> Red Hat
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] [pulp 3] proposed change to publishing REST api

2017-10-19 Thread Dennis Kliban
@jortel and I have been discussing[0] how a user should find out what
publication was created after a request is made to
http://localhost:8000/api/v3/repositories/foo/publishers/example/bar/publish/

I propose that we get rid of the above URL from our REST API and add
ability to POST to
http://localhost:8000/api/v3/repositories/foo/publishers/example/bar/publications/
instead. The response would be a 201. Each publication would have a task
associated with it.

This work would probably be done by whoever picks up issue 3033[1].

[0] https://pulp.plan.io/issues/3035
[1] https://pulp.plan.io/issues/3033
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp 3] proposed change to publishing REST api

2017-10-23 Thread Dennis Kliban
On Mon, Oct 23, 2017 at 10:40 AM, Jeremy Audet  wrote:

> > http://localhost:8000/api/v3/repositories/foo/publishers/exa
> mple/bar/publications/
>
>
> 
> AFAIK, the form of this URL is: scheme://netloc/api/v3/
> repositories/{repository_id}/publishers/example/{publisher_id}/publications/.
> Is "example" supposed to be in this URL? I don't see what purpose it
> serves.
>

'example' is an identifier for the publisher type.
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp 3] proposed change to publishing REST api

2017-10-23 Thread Dennis Kliban
On Mon, Oct 23, 2017 at 10:55 AM, Michael Hrivnak 
wrote:

> Unless the publication can be created before the response is returned, the
> response code will need to still be 202.
>
> As for the path, either way seems workable, although I have two
> hesitations about POSTing to publications/.
>
> 1) Normally in REST when a user creates a resource via POST to a
> collection endpoint, they are expected to provide a representation of the
> new resource, even if it is only partial. In the case of initiating a
> publish task, we do not want the user to provide any part of the new
> publication's state. We only want the user to optionally provide a bit of
> information about *how* to create a new publication. Should the publication
> be incremental or not? Which repo version should be published? etc. The
> difference may seem subtle, but I think it's important.
>
> 2) The act of creating a publication may also change state of other
> resources, and not only subordinate resources such as a publication
> artifact. For example, if there is a Distribution with auto_update set to
> True, its state will be changed by a publish task. That could be seen as an
> unexpected side effect when merely POSTing to a publications/ endpoint.
> When an operation affects state across multiple resources and resource
> types, that's usually a good time to use a "controller" type endpoint that
> is specific to the operation.
>

We should probably reevaluate the value of 'auto_update' on a Distribution.
Information about distributions that need to be updated can be passed in
the body of the POST to publications/. This way the user explicitly
instructs Pulp to perform the update.
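A hypothetical request body for the proposed POST to publications/, showing the caller naming the distributions to update explicitly instead of relying on an auto_update flag. The field names and hrefs are assumptions for illustration only.

```python
import json

# body of POST .../publications/ (illustrative fields)
body = {
    "repository_version": "/api/v3/repositories/foo/versions/3/",
    "distributions": ["/api/v3/distributions/pypi-live/"],
}
payload = json.dumps(body)
```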


>
> Our asynchronous tasks will often need to create one or more resources. A
> publish task creates a publication. An upload-related task may create one
> or more content units. A sync/associate/unassociate task will create a new
> repository version. New resources are the output of those tasks. However
> each of those tasks will sometimes not create any resources, such as when
> an equivalent resource already exists. Creating resources is a common
> characteristic of tasks, so it would make sense to report that in a
> standard part of the task status.
>

A repository version would probably be its own REST resource. So you would
perform a POST to repositories/foo/versions/ with information about what
should be done to create a new version: sync with a particular importer or
associate/unassociate content.



> A task status should not include an exhaustive list of every resource
> created. For example, a publish task should not include a reference to
> every metadata artifact it made. It would be sufficient to include a
> reference to the publication, the task's primary output, which then can be
> used to reference subordinate resources.
>
> On a task status representation, this could be included in a field called
> "created_resources", "output", "return_value", or similar.
>
> Thoughts on that idea?
>
> On Fri, Oct 20, 2017 at 11:27 AM, Mihai Ibanescu wrote:
>
>> That seems sensible, and in line with REST's mantra of "nouns in resource
>> URLs, not verbs".
>>
>> On Thu, Oct 19, 2017 at 3:27 PM, Dennis Kliban 
>> wrote:
>>
>>> @jortel and I have been discussing[0] how a user should find out what
>>> publication was created after a request is made to
>>> http://localhost:8000/api/v3/repositories/foo/publishers/exa
>>> mple/bar/publish/
>>>
>>> I propose that we get rid of the above URL from our REST API and add
>>> ability to POST to http://localhost:8000/api/v3/r
>>> epositories/foo/publishers/example/bar/publications/ instead. The
>>> response would be a 201. Each publication would have a task associated with
>>> it.
>>>
>>> This work would probably be done by whoever picks up issue 3033[1].
>>>
>>> [0] https://pulp.plan.io/issues/3035
>>> [1] https://pulp.plan.io/issues/3033
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
>
> --
>
> Michael Hrivnak
>
> Principal Software Engineer, RHCE
>
> Red Hat
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp 3] proposed change to publishing REST api

2017-10-23 Thread Dennis Kliban
On Mon, Oct 23, 2017 at 10:56 AM, Jeff Ortel  wrote:

> This is interesting.
>
> Some thoughts:
>
> If adopted, I propose the publication task create the publication and pass
> to the publisher which would
> require a change in the plugin API - Publisher.publish(publication).  If
> the publisher fails, I think the
> publication should be deleted.
>

The ViewSet would create the publication, dispatch a publish task with the
publication id as an argument, update the publication with the task id,
return a serialized Publication to the API user. The user is responsible
for deleting any publication that is not created successfully.
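The flow described above can be sketched in plain Python, with stub functions standing in for the real ViewSet, task queue, and ORM (every name here is hypothetical):

```python
import uuid

def dispatch_publish_task(publication_href):
    """Stand-in for dispatching an asynchronous publish task; returns a task id."""
    return uuid.uuid4()

def create_publication(publisher_href):
    # 1. create the publication record
    publication = {
        "_href": "%spublications/%s/" % (publisher_href, uuid.uuid4()),
        "publisher": publisher_href,
        "task": None,
    }
    # 2. dispatch a publish task with the publication id as an argument
    task_id = dispatch_publish_task(publication["_href"])
    # 3. update the publication with the task id
    publication["task"] = "/api/v3/tasks/%s/" % task_id
    # 4. return the serialized Publication to the API user
    return publication

pub = create_publication("/api/v3/repositories/foo/publishers/example/bar/")
print(pub["task"])
```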


>
> On 10/19/2017 02:27 PM, Dennis Kliban wrote:
> > @jortel and I have been discussing[0] how a user should find out what
> publication was created after a request
> > is made to http://localhost:8000/api/v3/repositories/foo/publishers/exa
> mple/bar/publish/
> >
> > I propose that we get rid of the above URL from our REST API and add
> ability to POST to
> > http://localhost:8000/api/v3/repositories/foo/publishers/exa
> mple/bar/publications/ instead. The response would
> > be a 201. Each publication would have a task associated with it.
>
> Associated how?  I hope you are not suggesting adding Publication.task_id
> (FK).  I don't think that would be a
> good idea.
>

That is exactly what I had in mind. Though the field can be NULL if the
task has been removed from the database already. This way a serialized
version of a Publication would provide a reference to a task that can be
tracked to see if the publication was successfully created. If a failure
occurs, the user can choose to delete the publication. Why do you think
it's not a good idea to add this association?


> I still like the idea of adding Publication.name as a natural key that can
> be specified by the user.  It can
> default to the task ID when not specified.  This gives users something
> meaningful to use when selecting a
> publication for association to a Distribution or when deleting.
>

I also think it's valuable to let users name their publications. However,
we should avoid forcing users to form URLs to resources on their own.
Jeremy put it well in his response.


>
> >
> > This work would probably be done by whoever picks up issue 3033[1].
>

Since 3033 is well under way, this work would be done as issue 3035.



> >
> > [0] https://pulp.plan.io/issues/3035
> > [1] https://pulp.plan.io/issues/3033
> >
> >
> > ___
> > Pulp-dev mailing list
> > Pulp-dev@redhat.com
> > https://www.redhat.com/mailman/listinfo/pulp-dev
> >
>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp 3] proposed change to publishing REST api

2017-10-23 Thread Dennis Kliban
On Mon, Oct 23, 2017 at 3:20 PM, Michael Hrivnak 
wrote:

>
>
> On Mon, Oct 23, 2017 at 12:30 PM, Dennis Kliban 
> wrote:
>
>> On Mon, Oct 23, 2017 at 10:56 AM, Jeff Ortel  wrote:
>>
>>> This is interesting.
>>>
>>> Some thoughts:
>>>
>>> If adopted, I propose the publication task create the publication and
>>> pass to the publisher which would
>>> require a change in the plugin API - Publisher.publish(publication).
>>> If the publisher fails, I think the
>>> publication should be deleted.
>>>
>>
>> The ViewSet would create the publication, dispatch a publish task with
>> the publication id as an argument, update the publication with the task id,
>> return a serialized Publication to the API user. The user is responsible
>> for deleting any publication that is not created successfully.
>>
>
> For me, your wording illustrates the problem well. Why should a user have
> to delete a resource that was never created?
>
> This sounds like we'd be introducing a partially-created state for
> publications. There would be some kind of placeholder representation that
> could be referenced as a location where a real publication *might or might
> not* eventually appear. And this representation would live side-by-side in
> a "publications/" endpoint with representations of actual publications? How
> would a user know which are which? It seems like this just shifts the async
> problem onto the publication model.
>
> I go back to this: When creation of a resource is requested, the response
> should either be 201 if the resource was created, or 202 if creation is
> deferred. We should not attempt partial creation.
>


> It's easy to lose sight of this, so maybe it's worth also observing that a
> resource is not just a DB record or some JSON. The existence of a resource
> representation requires that the resource itself exists in every way that
> is necessary for it to make sense. We should be careful not to misrepresent
> the existence of a publication.
>

The description of issue 3033[0] does not clearly establish what a
serialized version of a Publication looks like. In our current design, I
imagine that it will contain three fields: _href, created, and publisher.
@jortel, do you have the same vision?

If we start associating tasks with Publications, then the serialized
publication would have four fields: _href, created, publisher, and task. The
API would then allow filtering based on the status of the associated task,
e.g. publications/?task__status=successful to retrieve all publications that
were successfully created.

We could also add validation on the Distribution that will check whether
the publication being associated with the Distribution has a task
associated with it, and if so that it successfully completed.
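That validation might look roughly like this; the dict shape and function name are assumptions for illustration:

```python
def validate_distribution_publication(publication):
    """Reject associating a publication whose task did not succeed.

    A publication with no associated task (e.g. the task record was
    already cleaned up) is accepted as-is.
    """
    task = publication.get("task")
    if task is not None and task["state"] != "successful":
        raise ValueError("publication's task did not complete successfully")

# Accepted: the task succeeded, or no task is associated.
validate_distribution_publication({"task": {"state": "successful"}})
validate_distribution_publication({"task": None})
```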

A POST to /publications/ could return a 202 and a serialized version of the
publication. This lets the user know that the task of creating a
publication was accepted. Any GET request to /publications/ would return 202
until the publication task has completed. Once the publication task is
complete, a GET request to /publications/ would return 200 if the task
finished successfully or 410 (Gone) if it did not.
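The proposed GET semantics reduce to a small mapping from task state to HTTP status; a sketch, assuming the task exposes a simple state string (the state names are illustrative):

```python
def publication_status_code(task_state):
    # Creation still in progress
    if task_state in ("waiting", "running"):
        return 202
    # Publication exists and is complete
    if task_state == "successful":
        return 200
    # Task failed: the publication will never materialize
    return 410

print(publication_status_code("running"))     # 202 while the task runs
```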


[0] https://pulp.plan.io/issues/3033

>
> --
>
> Michael Hrivnak
>
> Principal Software Engineer, RHCE
>
> Red Hat
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp_python] Roadmap and wishlist for future versions of the Pulp Python plugin

2017-10-24 Thread Dennis Kliban
Great job on putting this wiki page together! I really like the use cases
presented. Since this document is meant to be shared with Pulp users and
most of our users are probably not subscribed to pulp-dev list, we should
probably send out this email to the pulp-list also. What do you think?

-Dennis

On Mon, Oct 23, 2017 at 5:11 PM, Daniel Alley  wrote:

> Pulp 3 development is in full swing, and we've begun thinking about what
> we may want out of future versions of the Python plugin.  We would love
> your input, too!
>
> We've created a wiki page on pulp.plan.io detailing our initial thoughts
> on what the Pulp Python plugin should look like, and how the work should be
> prioritized.
>
> https://pulp.plan.io/projects/pulp/wiki/Pulp_Python_Roadmap
>
> Please feel free to put forward any suggestions you may have for the
> future of the Python plugin.  If you have a comment on what you would like
> to see from the roadmaps generally, please  participate in the discussion
> started by Robin on this list!
>
> Thank you,
> Daniel
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp 3] proposed change to publishing REST api

2017-10-25 Thread Dennis Kliban
On Wed, Oct 25, 2017 at 11:24 AM, David Davis  wrote:

> I don’t know that the ambiguity around whether a task has a publication or
> not is a big deal. If I call the publication endpoint, I’d expect a
> publication task which either has 1 publication or 0 (if the publication
> failed) attached to it.
>
> In terms of ambiguity, I see a worse problem around adding a task_id field
> to publications. As a user, I don’t know if a publication failed or not
> when I get back a publication object. Instead, I have to look up the task
> to see if it is a real (or successful) publication. Moreover, since we
> allow users to remove/clean up tasks, that task may not even exist anymore.
>
>
I agree that the ephemeral nature of tasks makes the originally proposed
solution non-deterministic. I am open to associating 'resources created'
with a task instead.

However, I still think there is value in changing the REST API endpoint for
starting a publish task to POST
/api/v3/repositories//publishers///publications/.
I will start a separate thread for that discussion.

 - Dennis


>
> David
>
> On Wed, Oct 25, 2017 at 11:03 AM, Brian Bouterse 
> wrote:
>
>>
>>
>> On Tue, Oct 24, 2017 at 10:00 PM, Michael Hrivnak 
>> wrote:
>>
>>>
>>>
>>> On Tue, Oct 24, 2017 at 2:11 PM, Brian Bouterse 
>>> wrote:
>>>
 Thanks everyone for all the discussion! I'll try to recap the problem
 and some of the solutions I've heard. I'll also share some of my
 perspective on them too.

 What problem are we solving?
 When a user calls "publish" (the action API endpoint) they get a 202 w/
 a link to the task. That task will produce a publication. How can the user
 find the publication that was produced by the task? How can the user be
 sure the publication is fully complete?


 What are our options?
 1) Start linking to created objects from task status. I believe its
 been clearly stated about why we can't do this. If it's not clear, or if
 there are other things we should consider, let's talk about it.
 Acknowledging or establishing agreement on this is crucial because a change
 like this would bring back a lot of the user pain from pulp2. I believe the
 HAL suggestion falls into this area.

>>>
>>> I may have missed something, but I do not think this is clear. I know
>>> that Pulp 2's API included a lot of unstructured data, but that is not at
>>> all what I'm suggesting here.
>>>
>>> It is standard and recommended practice for REST API responses to
>>> include links to resources along with information about what type of
>>> resource each link references. We could include a reference to the created
>>> resource and an identifier for what type of resource it is, and that would
>>> be well within the bounds of good REST API design. HAL is just one of
>>> several ways to accomplish that, and I'm not pitching any particular
>>> solution there. In any case, I'm not sure what the problem would be with
>>> this approach.
>>>
>>
>> I agree it is a standard practice for a resource to include links to
>> other resources, but the proposal to include "generic" links is
>> different and creates a different user experience. I believe referencing
>> the task from the publication will be easier for users and clients. When a
>> user looks up a publication, they will always know they'll get between 0
>> and 1 links to a task. You can use that to check the state of the
>> publication. If we link to "generic" resources (like a publication) from a
>> task, then if I ask a user "do you expect task
>> ede3af3e-d5cf-4e18-8c57-69ac4d4e4de6 to contain a link to a publication
>> or not?" you can't know until you query it. I think that ambiguity was a
>> pain point in Pulp2. I don't totally reject this solution, but this is an
>> undesirable property (I think).
>>
>>
>>>

 2) Have the user find the publication via query that sorts on time and
 filters only for a specific publisher. This could be fragile because with a
 multi-user system and no hard references between publications and tasks,
 answering the question "which is the publication for me" is hard because
 another user could have submitted a publish too. While not totally perfect,
 this could work.

>>>
>>> In theory if a user queried for a publication from a specific publisher
>>> that was created between the start and end times of the task, that should
>>> unambiguously identify the correct publication. But depending on timestamps
>>> is not a particularly robust nor confidence-inspiring way to reference a
>>> resource.
>>>
>> Agreed and Agreed
>>
>>
>>>

 3) Have the user create a publication directly like any other REST
 resource, and help the user understand the state of that resource over
 time. I believe the proposal at the start of this thread is recommending
 this solution. I'm also +1 on this solution.

>>>
>>> I think the problem with this is that a user cannot create

Re: [Pulp-dev] Pulp3 - JWT Authorization Header

2017-10-30 Thread Dennis Kliban
On Mon, Oct 30, 2017 at 10:55 AM, Brian Bouterse 
wrote:

> I think it would be ideal if we used 'Bearer: ' instead of 'JWT: '. If you
> use our docs, you'll be able to submit your JWT correctly. But if you say 'oh
> I see Pulp uses JWT' and follow the example on the official (I think?) JWT
> site [0], the token you submit to Pulp won't work.
> This is also a problem in practice; I've heard of two separate occasions
> where JWT was thought to be broken because it was submitted with 'Bearer: '
> while Pulp wants 'JWT: '.
>
> The reasoning for the plugin to choose JWT over Bearer has to do with
> their goals of being able to be used side-by-side a OAuth2 *and* allow your
> auth types to be in any order. I don't think this affects Pulp because Pulp
> isn't supporting OAuth2 anytime soon if ever, and even if we do, I don't
> think that's a good reason to invent a new way to submit a JWT (which they
> did).
>
> I'm +1 to filing a story against Pulp to configure our usage of the plugin
> to have the JWT be submitted using 'Bearer: ' instead of 'JWT: '. Shall I
> file this? What do you all think?
>
>
+1 to this as well.
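If adopted, the change should amount to one setting for the django-rest-framework-jwt package; JWT_AUTH_HEADER_PREFIX is that package's documented knob, though where it lands in Pulp's settings is an open question:

```python
# settings.py fragment: make django-rest-framework-jwt accept
# 'Authorization: Bearer <token>' instead of 'Authorization: JWT <token>'.
JWT_AUTH = {
    "JWT_AUTH_HEADER_PREFIX": "Bearer",
}
```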



> [0]: https://jwt.io/introduction/
>
> -Brian
>
>
> On Fri, Oct 27, 2017 at 9:03 AM, David Davis 
> wrote:
>
>> There was some discussion on the PR about this:
>>
>> https://github.com/pulp/pulp/pull/3109#discussion_r138202256
>>
>> Basically the package we’re using decided on JWT. See their reasoning
>> here:
>>
>> https://github.com/GetBlimp/django-rest-framework-jwt/pull/4
>>
>>
>> David
>>
>> On Fri, Oct 27, 2017 at 8:26 AM, Kersom Moura Oliveira > > wrote:
>>
>>> Hi,
>>>
>>> I noticed that JWT authorization header was adopted as the default one
>>> for Pulp3. [0]
>>>
>>> Also I read in a few places about Bearer authorization header,  as the
>>> typical one used for JWT.[1]
>>>
>>> Is there a specific reason to chose one over the other in Pulp3?
>>>
>>> Regards,
>>>
>>> [0] https://docs.pulpproject.org/en/3.0/nightly/integration_guid
>>> e/rest_api/authentication.html#using-a-token
>>> [1] https://jwt.io/introduction/
>>> [2] https://tools.ietf.org/html/rfc6750
>>> [3 ]https://tools.ietf.org/html/rfc7523
>>>
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] [pulp 3] proposed change to publishing REST api

2017-10-31 Thread Dennis Kliban
On Tue, Oct 31, 2017 at 3:40 PM, Brian Bouterse  wrote:

> +1 to updating #3033 to have a created_resources attribute which would be
> a list of GenericForeignKeys. It also needs docs, but I'm not entirely sure
> where.
>
> If we're going to introduce the above attribute, I think having the
> controller endpoint as-is would be the most usable. @dkliban do you see
> value in changing the URL structure if the created_resources attribute is
> introduced?
>
>
This API call creates a publication resource. A POST to
publishers//publications/ seems most appropriate for creating new
publication resources.
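A plain-Python sketch of what recording created resources on a task could look like; the real model would presumably use Django's GenericForeignKey, and these class and field names are illustrative:

```python
class Task:
    def __init__(self):
        self.state = "running"
        # Generic references to resources this task created; in Django this
        # would be a list of GenericForeignKeys rather than tuples.
        self.created_resources = []

    def record_created(self, resource_type, resource_id):
        self.created_resources.append((resource_type, resource_id))

task = Task()
task.record_created("publication", "ede3af3e-d5cf-4e18-8c57-69ac4d4e4de6")
```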

I can help review/groom these if that is helpful.
>
> -Brian
>
>
> On Tue, Oct 31, 2017 at 1:39 PM, David Davis 
> wrote:
>
>> Personally I am not opposed to the url endpoint you suggest.
>>
>> It also seems like there is some consensus around adding a ‘created
>> resources’ relationship to Task or at least prototyping that out to see
>> what it would look like.
>>
>> If no one disagrees, should I update issue #3033 with those two items?
>>
>>
>> David
>>
>> On Wed, Oct 25, 2017 at 1:23 PM, Dennis Kliban 
>> wrote:
>>
>>> On Wed, Oct 25, 2017 at 11:24 AM, David Davis 
>>> wrote:
>>>
>>>> I don’t know that the ambiguity around whether a task has a publication
>>>> or not is a big deal. If I call the publication endpoint, I’d expect a
>>>> publication task which either has 1 publication or 0 (if the publication
>>>> failed) attached to it.
>>>>
>>>> In terms of ambiguity, I see a worse problem around adding a task_id
>>>> field to publications. As a user, I don’t know if a publication failed or
>>>> not when I get back a publication object. Instead, I have to look up the
>>>> task to see if it is a real (or successful) publication. Moreover, since we
>>>> allow users to remove/clean up tasks, that task may not even exist anymore.
>>>>
>>>>
>>> I agree that the ephemeral nature of tasks makes the originally proposed
>>> solution non-deterministic. I am open to associating 'resources created'
>>> with a task instead.
>>>
>>> However, I still think there is value in changing the rest API endpoint
>>> for starting a publish task to POST /api/v3/repositories/
>>> /publishers///publications/. However, I will start a
>>> separate thread for that discussion.
>>>
>>>  - Dennis
>>>
>>>
>>>>
>>>> David
>>>>
>>>> On Wed, Oct 25, 2017 at 11:03 AM, Brian Bouterse 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Oct 24, 2017 at 10:00 PM, Michael Hrivnak >>>> > wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Oct 24, 2017 at 2:11 PM, Brian Bouterse 
>>>>>> wrote:
>>>>>>
>>>>>>> Thanks everyone for all the discussion! I'll try to recap the
>>>>>>> problem and some of the solutions I've heard. I'll also share some of my
>>>>>>> perspective on them too.
>>>>>>>
>>>>>>> What problem are we solving?
>>>>>>> When a user calls "publish" (the action API endpoint) they get a 202
>>>>>>> w/ a link to the task. That task will produce a publication. How can the
>>>>>>> user find the publication that was produced by the task? How can the 
>>>>>>> user
>>>>>>> be sure the publication is fully complete?
>>>>>>>
>>>>>>>
>>>>>>> What are our options?
>>>>>>> 1) Start linking to created objects from task status. I believe its
>>>>>>> been clearly stated about why we can't do this. If it's not clear, or if
>>>>>>> there are other things we should consider, let's talk about it.
>>>>>>> Acknowledging or establishing agreement on this is crucial because a 
>>>>>>> change
>>>>>>> like this would bring back a lot of the user pain from pulp2. I believe 
>>>>>>> the
>>>>>>> HAL suggestion falls into this area.
>>>>>>>
>>>>>>
>>>>>> I may have missed something, but I do not think this is clear. I know
>>>>>> that Pulp 2's API included a lot of unstructured d

Re: [Pulp-dev] [pulp 3] proposed change to publishing REST api

2017-10-31 Thread Dennis Kliban
On Tue, Oct 31, 2017 at 3:52 PM, Brian Bouterse  wrote:

> Would that return the 202 w/ a link to the task because the publication
> hasn't been created yet? Then using the created_resources they can see what
> was created, and in the event of failure the task fails and there are no
> created_resources.
>
> @dkliban is ^ the idea?
>
>
Yes, the response would be the same as it is for the /publish URL right now.
This is just a change in the URL that is used to make the request.



> On Tue, Oct 31, 2017 at 3:48 PM, Dennis Kliban  wrote:
>
>>
>>
>> On Tue, Oct 31, 2017 at 3:40 PM, Brian Bouterse 
>> wrote:
>>
>>> +1 to updating #3033 to have a created_resources attribute which would
>>> be a list of GenericForeignKeys. It also needs docs, but I'm not entirely
>>> sure where.
>>>
>>> If we're going to introduce the above attribute, I think having the
>>> controller endpoint as-is would be the most usable. @dkliban do you see
>>> value in changing the URL structure if the created_resources attribute is
>>> introduced?
>>>
>>>
>> This API call creates a publication resource. A POST to
>> publishers//publications/ seems most appropriate for creating new
>> publication resources.
>>
>> I can help review/groom these if that is helpful.
>>>
>>> -Brian
>>>
>>>
>>> On Tue, Oct 31, 2017 at 1:39 PM, David Davis 
>>> wrote:
>>>
>>>> Personally I am not opposed to the url endpoint you suggest.
>>>>
>>>> It also seems like there is some consensus around adding a ‘created
>>>> resources’ relationship to Task or at least prototyping that out to see
>>>> what it would look like.
>>>>
>>>> If no one disagrees, should I update issue #3033 with those two items?
>>>>
>>>>
>>>> David
>>>>
>>>> On Wed, Oct 25, 2017 at 1:23 PM, Dennis Kliban 
>>>> wrote:
>>>>
>>>>> On Wed, Oct 25, 2017 at 11:24 AM, David Davis 
>>>>> wrote:
>>>>>
>>>>>> I don’t know that the ambiguity around whether a task has a
>>>>>> publication or not is a big deal. If I call the publication endpoint, I’d
>>>>>> expect a publication task which either has 1 publication or 0 (if the
>>>>>> publication failed) attached to it.
>>>>>>
>>>>>> In terms of ambiguity, I see a worse problem around adding a task_id
>>>>>> field to publications. As a user, I don’t know if a publication failed or
>>>>>> not when I get back a publication object. Instead, I have to look up the
>>>>>> task to see if it is a real (or successful) publication. Moreover, since 
>>>>>> we
>>>>>> allow users to remove/clean up tasks, that task may not even exist 
>>>>>> anymore.
>>>>>>
>>>>>>
>>>>> I agree that the ephemeral nature of tasks makes the originally
>>>>> proposed solution non-deterministic. I am open to associating 'resources
>>>>> created' with a task instead.
>>>>>
>>>>> However, I still think there is value in changing the rest API
>>>>> endpoint for starting a publish task to POST 
>>>>> /api/v3/repositories/
>>>>> /publishers///publications/. However, I will start a
>>>>> separate thread for that discussion.
>>>>>
>>>>>  - Dennis
>>>>>
>>>>>
>>>>>>
>>>>>> David
>>>>>>
>>>>>> On Wed, Oct 25, 2017 at 11:03 AM, Brian Bouterse >>>>> > wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Oct 24, 2017 at 10:00 PM, Michael Hrivnak <
>>>>>>> mhriv...@redhat.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Tue, Oct 24, 2017 at 2:11 PM, Brian Bouterse <
>>>>>>>> bbout...@redhat.com> wrote:
>>>>>>>>
>>>>>>>>> Thanks everyone for all the discussion! I'll try to recap the
>>>>>>>>> problem and some of the solutions I've heard. I'll also share some of 
>>>>>>>>> my
>>>>>>>>> perspective on them too.
>>>>>>>>>

Re: [Pulp-dev] Webserver owning the entire url namespace?

2017-11-03 Thread Dennis Kliban
On Thu, Nov 2, 2017 at 5:19 PM, Brian Bouterse  wrote:

> We're looking at developing apache/nginx scripts, and I was thinking about
> documenting the webserver requirements. I think Pulp probably has to be
> rooted at / on any given site so that it can host live APIs. Users can
> still vhost multiple sites at other hostnames so I think it's ok, but I'm
> interested in what others think. I wrote this up here [0] for some
> discussion on the issue.
>
>
I like Pulp owning all the URLs for a hostname. This enables plugin writers
to provide any API endpoints they need without having to deploy a separate
WSGI application.

Was there any good reason why this was not done for Pulp 2?



> [0]: https://pulp.plan.io/issues/3114
>
> -Brian
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Webserver owning the entire url namespace?

2017-11-08 Thread Dennis Kliban
Please see my comments inline.

On Tue, Nov 7, 2017 at 3:28 PM, Michael Hrivnak  wrote:

>
>
> On Mon, Nov 6, 2017 at 9:34 AM, Brian Bouterse 
> wrote:
>
>> Yes the REST API can be scoped to a base path. Pulp can also serve
>> content even if its scoped to a base path. So Pulp itself will work great
>> even if scoped to a base path.
>>
>> The issue is 100% around the "content serving apps" like Crane, Forge,
>> etc. I call those things "live content APIs". The current plan AIUI is that
>> "live content APIs" will be satisfied using a custom viewset so the plugin
>> developer does not need to package+ship+version+configure a separate app,
>> e.g. crane, forge, etc.
>>
>
> That may work in some cases, but I don't think it's a good fit for cases
> like the docker registry API.
>
> The registry API has enough path complexity that a viewset would not be
> sufficient, so it would need to provide a mix of routers and viewsets. It's
> an entire app worth of routes and views, including its own auth and search.
> DRF is not a great tool for that job, and it's valuable to enable plugin
> writers to use whatever tools/frameworks/languages make sense. For example,
> right now there is an effort underway to replace crane with an app that
> uses the "docker distribution" code to serve the API, but can still read
> crane's data files and serve Pulp publications. That level of flexibility
> is important.
>

I believe you are suggesting that a Pulp backend could be built for a
Docker registry. This backend would know how to consume information about
docker content published by Pulp. This would indeed be a separate
application. However, until such a registry backend exists, it would be
good to allow the Docker plugin authors to provide a docker API as part of
the same application.


> From a deployment perspective, it's been a key use case to deploy crane at
> the perimeter, rsync published image files out to a file or CDN service,
> and run the rest of Pulp on a well-protected internal network.
>

Pulp can also be installed at the perimeter. Core should support a setting
that enables/disables the REST API. Each plugin could support a setting
that enables/disables its content API.


>
>
>>
>> So we want to simplify the common cases and allow for complex cases to
>> still work. To me that is:
>>
>> * allow plugin developers to deliver live content APIs in the form of
>> viewsets. They are free to root them anywhere in the url namespace they
>> want to. Their requirements require that.
>> * Recommend that Pulp be run not scoped to a base path (simplest). If
>> users follow this recommendation 100% of their live APIs will work.
>>
>> Then for allowing scoping Pulp to a base path:
>>
>> * Pulp can be scoped to a base path and it will work without any extra
>> config. The docs should state this is possible, but that "live APIs" may
>> not work.
>> * Users will need to figure out to make the live APIs work. That's really
>> between plugin writers and users at that point.
>>
>> Note that currently one WSGI process is serving both the REST API, the
>> Content APIs, and the "live content APIs". I don't see a use case to
>> separate them at this point. If there is a believe that (a) we will have
>> more than 1 WSGI process and (b) why, please share those thoughts.
>>
>
> We should definitely keep the REST API separate from content serving, as
> it is in Pulp 2. They are very different services with different goals,
> needs and characteristics. The streamer is a third independent service that
> likely makes sense to keep separate.
>
> The REST API and content apps have different resource needs. Content
> serving can use read-only access to a DB and filesystem, and it does not
> need message broker access. We could probably get away with only giving it
> access to a few tables in the DB. It does not need access to much of the
> config or secrets that the REST API needs. The REST API app probably needs
> a lot more memory and CPU than the content app.
>
> They have different audience/access needs also. A small group of humans
> and/or automation need to infrequently use the REST API to manage what
> content Pulp makes available. A much larger audience of content consumers
> needs to access publications. The two audiences often exist on different
> networks. More downtime can be tolerated from the REST API than the content
> app.
>
> Related to the access differences, the two apps have different scalability
> needs. The amount of traffic likely to be handled by the REST API vs
> content app are very different. And on the uptime issue, we definitely have
> a use case for continuing to serve publications while Pulp is being
> upgraded or is otherwise down for maintenance.
>
> All of that said, there's no reason why a user couldn't use a web server
> like httpd to run all three WSGI apps in the same process, multiplied
> across its normal pool of processes. We should make the apps available as
> separate WSGI apps, and users can deploy them 

Re: [Pulp-dev] Move to pulpcore

2017-11-08 Thread Dennis Kliban
+1

On Wed, Nov 8, 2017 at 11:13 AM, Brian Bouterse  wrote:

> +1, Nov 13
>
> On Wed, Nov 8, 2017 at 8:57 AM, David Davis  wrote:
>
>> I am working on issue #3089 [0] to rename the ‘platform' directory to
>> ‘pulpcore'. I should hopefully have some PRs open today but I’d like to go
>> ahead and set a date for making this change as it has the potential to mess
>> up pulp PRs. It looks like there are only two Pulp 3 PRs that aren’t marked
>> as WIP. I was thinking about the morning of Monday, November 13. Would that
>> give people enough time?
>>
>> [0] https://pulp.plan.io/issues/3089
>>
>> David
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] Let's use immutable resource URIs for all resources in pulp 3

2017-11-09 Thread Dennis Kliban
Pulp 3 currently uses a resource's 'name' attribute to form a URI for that
resource. However, the name is usually mutable and as a result can cause
some clients to have references to resources that no longer exist. All
resources in Pulp 3 have a primary key that is a UUID. I propose that we
switch to using the UUID for forming the resource URI.
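A sketch of the difference, assuming the /api/v3/ layout from this thread; the UUID-based href stays valid even if the repository is renamed:

```python
import uuid

repo = {"id": uuid.uuid4(), "name": "foo"}

# Name-based href: breaks for any client holding it if "name" changes.
name_href = "/api/v3/repositories/%s/" % repo["name"]

# UUID-based href: immutable for the lifetime of the resource.
uuid_href = "/api/v3/repositories/%s/" % repo["id"]
print(uuid_href)
```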

As I was working on a related issue[0], I put together a PR[1] that does
this for the repository resource. I just now filed an issue[2] to do the
same thing for Importers and Publishers.

Thoughts?


[0] https://pulp.plan.io/issues/3101
[1] https://github.com/pulp/pulp/pull/3218
[2] https://pulp.plan.io/issues/3125
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Task tagging in Pulp 3

2017-11-09 Thread Dennis Kliban
On Mon, Nov 6, 2017 at 2:17 PM, David Davis  wrote:

> Originally I scheduled a meeting for tomorrow but on second thought, I
> figured a pulp-dev thread would be more inclusive than a meeting. I hope to
> get this resolved by the end of this week and if not then maybe we can have
> a meeting.
>
> This is to design out the replacement of task tags in Pulp 3. I’ve got
> feedback from a few other developers in terms of how to do that so I wrote
> up a sort of outline of the problem and two possible proposals. Looking for
> feedback/questions/etc on what people prefer.
>
>
> Background
> ---
>
> In Pulp 2, tasks have tags that either provide a task name/description or
> info on what resources a task acts on. Tasks also have reserved resources,
> which provide a way for tasks to lock a particular resource.
>
> In Pulp 3, we have models TaskTag and ReservedResource[0]. Tasks are
> associated with the resources they work on via TaskTag. If a resource is
> locked, a ReservedResource record is created in the db and then removed
> from the db once the resource is unlocked.
>
>
> Problem
> ---
>
> The task tag model doesn't really fit Pulp 3. It's perhaps too generic and
> totally unnecessary (see Proposal 1), or it could be redesigned to
> accommodate other things (see Proposal 2).
>
> Also, we need to support created resources (e.g. publications) with tasks.
> Refactoring task tags might provide an opportunity to do so.
>
>
> User stories
> ---
>
> As an authenticated user, I can see what resource(s) a task acted on.
> As an authenticated user, I can search for tasks based on what resources
> they acted on.
>
>
> Proposal 1
> ---
>
> Since tags and reserved resources in Pulp 3 will only store information
> about a particular repository (not 100% sure here), it should be possible
> to simplify the data model. We could ditch both TaskTag and
> ReservedResource models and just have a direct relationship between Tasks
> and Repositories (e.g. TaskRepository). This model could also have some
> sort of field to indicate whether a particular resource is locked (e.g.
> is_locked). Unlike ReservedResource, this relationship would be
> persisted—only the is_locked field would be updated when a task is done.
>
>
> Proposal 2
> ---
>
> We could keep the TaskTag relationship (perhaps even rename it to
> TaskResource) and we could add a field to indicate the nature of the
> relationship between task and resource (e.g. created, updated, etc). This
> field could not only capture what TaskTag is currently used for but also
> stuff like created resources (e.g. publications). We could also have a
> field to indicate which task resources are locked (e.g. is_locked).
>
>
I like proposal number 2.
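
To check my own understanding, here is a rough, hypothetical sketch of
Proposal 2 (the names TaskResource, is_locked, and the relation values come
from the proposal above; everything else is illustrative, not real Pulp
code):

```python
from dataclasses import dataclass
from enum import Enum

class Relation(Enum):
    """Nature of the relationship between a task and a resource."""
    CREATED = 'created'
    UPDATED = 'updated'

@dataclass
class TaskResource:
    """One row per (task, resource) pair; replaces TaskTag and
    ReservedResource from the background section."""
    task_id: str
    resource_href: str
    relation: Relation
    is_locked: bool = False

# A publish task locks its repository while running and records the
# publication it created; the rows persist after the lock is released.
rows = [
    TaskResource('task-1', '/repositories/abc/', Relation.UPDATED, is_locked=True),
    TaskResource('task-1', '/publications/def/', Relation.CREATED),
]
for row in rows:  # task finishes: release locks, keep the history
    row.is_locked = False

created = [r.resource_href for r in rows if r.relation is Relation.CREATED]
assert created == ['/publications/def/']
```

This would cover both user stories: the rows answer "what did this task act
on?", and a query filtered on resource_href answers "which tasks touched
this resource?".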



> This would be useful for https://pulp.plan.io/issues/3033.
>
>
> Questions
> ---
>
> - What proposal do we want to adopt?
> - When do we need to address these changes?
> - Do we really need to allow users to search tasks by a resource/repo at
> all?
>
>
> [0] https://git.io/vF8iH
>
> David
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Pulp's code of conduct

2017-11-13 Thread Dennis Kliban
On Sat, Nov 11, 2017 at 12:44 PM, David Davis  wrote:

> I am working on the PUP that deals with Pulp’s plugin teams. We decided
> during PulpCon that the only requirement for plugin teams was to follow our
> code of conduct and I have a couple questions regarding our code of conduct.
>
> First, should the code of conduct be its own PUP? I feel it should because
> it'll govern more than just the plugin teams and it is important enough to
> be considered separately. However, I’d like other people’s thoughts.
>

Yes, it should be its own PUP. I'd like to help with that PUP. However, I
don't think I'll be able to help until after Thanksgiving. Feel free to get
started without me.


>
> Second, I am wondering how to go about actually writing the code of
> conduct. Should we start out with Django’s code of conduct[0]? Or should we
> maybe have a meeting or email thread and design it from scratch?
>
> [0] https://www.djangoproject.com/conduct/
>

OSAS recommends that projects adopt the latest version of the contributor
covenant[0]. I would like us to follow this suggestion. Thoughts?


[0] https://www.contributor-covenant.org/version/1/4/code-of-conduct.html


>
> David
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] Pulp Code of Conduct PUP discussion

2017-11-27 Thread Dennis Kliban
Pulp should adopt the Contributor Covenant Code of Conduct[0] as its code
of conduct.

I have looked at other projects that have adopted this CoC and have
discovered that we have a few options for how to publish the CoC.

a) Add a CODE_OF_CONDUCT.md to root of repo and refer to it from
pulpproject.org[1]

b) Append the CoC to the contributing guide in CONTRIBUTING.md[2].

c) Add a CODE_OF_CONDUCT.md to root of repo and link to the Contributor
Covenant site.[3]

d) Add a CoC page to pulpproject.org and link to it from
CODE_OF_CONDUCT.md[4]

e) Include the CoC in the PUP. Add a CODE_OF_CONDUCT.md and link to the PUP
on GitHub or an HTML version of the PUP on the website.

f) Include the CoC in the PUP. Add a CoC section to CONTRIBUTING.md and
link to the PUP on GitHub or an HTML version of the PUP on the website.


I prefer option F, but if we do this work before we add CONTRIBUTING.md[5],
I am ok with option E.


We should adopt it verbatim. If anyone wants to adopt a modified version,
please add your revisions here[6] and notify the list when your changes are
ready to review.



[0] https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[1] https://github.com/rom-rb/rom/blob/master/CODE_OF_CONDUCT.md
[2]
https://github.com/gitlabhq/gitlabhq/blob/master/CONTRIBUTING.md#code-of-conduct
[3] https://github.com/patternfly/patternfly/blob/master/CODE_OF_CONDUCT.md
[4] https://github.com/rails/rails/blob/master/CODE_OF_CONDUCT.md
[5] https://pulp.plan.io/issues/3150
[6] http://pad-theforeman.rhcloud.com/p/Pulp-CoC
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] repository versions update

2017-11-28 Thread Dennis Kliban
I have a hard objection to including versioned repositories in 3.0. We
agreed to make sure that our current design would not prevent us from
adding versioned repositories in the future. We did NOT agree to include
versioned repositories in the 3.0 release. This is a big code change that
did not go through our regular planning process. I greatly appreciate your
effort in driving this feature forward, but we should take a step back and
go through our regular process. I am also concerned that adding such a big
change at this time will delay the beta.

-Dennis


On Tue, Nov 28, 2017 at 10:10 AM, Michael Hrivnak 
wrote:

> Following up on previous discussions, I did an analysis of how repository
> versioning would impact Pulp 3's current REST API and plugin API. A lot has
> changed since we last discussed the topic (in May 2017), such as how we
> handle publications, and how the REST API is laid out. You can read the
> analysis here:
>
> https://pulp.plan.io/projects/pulp/wiki/Repository_Versions
>
> We previously discussed and vetted the mechanics at great length. While
> there was broad agreement on the value to Pulp 3, there was uncertainty
> about the details of how it would impact REST clients and plugin writers,
> and also uncertainty about how long it would take to fully implement.
>
> In the course of my recent analysis, two things became clear. 1) both
> current APIs are not compatible and would have to change. Details are on
> the wiki page above. 2) the PoC from earlier this year indeed covers the
> hard parts, leaving mostly DRF details to sort out.
>

I don't agree with your assessment that the current REST API is not
compatible with adding repository versions. A repository version is its
own resource that can be added.


>
> I started rebasing the PoC onto current 3.0-dev, and within an hour I had
> it working with the updated REST endpoints. With that having been so easy,
> I threw caution to the wind, and within a few hours I had a fully
> functional branch that covered all the key use cases.
>
> - sync creates a new version
> - versions and their content sets are visible through the REST API
> - each version shows what content was added and removed
> - versions can be deleted, which queues a task that squashes changes as
> previously discussed
> - the ChangeSet and pulp_file were updated to work with versions
> - publish defaults to using the latest version
>
> I also created a set of tests to help prove that it behaves correctly:
>
> https://gist.github.com/mhrivnak/69af54063dff7465212914094dff34c2
>
> I have just about 12 hours of recent work into it, and the code is
> PR-ready. It's just missing doc updates and release notes. It's been
> difficult to keep discussion moving toward a full plan due to the
> uncertainties mentioned above, so hopefully this can alleviate those
> concerns and give everyone something concrete to look at.
>
> https://github.com/pulp/pulp/pull/3228
> https://github.com/pulp/pulp_file/pull/20
>
> Two notable items are missing. One is that there is no way to arbitrarily
> add and remove content from a repo now, since this removes the
> "repositorycontent" endpoint. But we need to solve that with a more formal
> and bulk add/remove API anyway. I also found that the "repositorycontent"
> endpoint was not using tasks, and thus there was no repo locking, so it
> needed additional work anyway. Based on this overall effort, I think it
> will be very easy to add if we just agree on what the endpoints should look
> like.
>
> The other is that publish does not in this PR accept a reference to a
> version. It always uses the latest. That would also be a very easy
> enhancement to make.
>
> I am happy to support getting this merged as I transition to being a more
> passive community member, assuming there are no objections. I am also of
> course happy to help support this into the future, as I believe strongly in
> its value and importance (see previous thread).
>
> Please provide feedback and questions. If a live meeting this week would
> help expedite evaluation of this effort, I'm happy to schedule that. And
> assuming there are no hard objections, I'm happy to proceed with
> documentation updates.
>
> Thanks!
>
> --
>
> Michael Hrivnak
> Principal Software Engineer, RHCE
>
> Red Hat
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] Pulp 3: using JWT to request a JWT

2017-11-28 Thread Dennis Kliban
Our MVP doc currently states "As an API user, I can authenticate any API
call (except to request a JWT) with a JWT. (not certain if this should be
the behavior) [in progress]"

The uncertainty was due to the "except to request a JWT" clause.

I propose that Pulp 3 should support requesting a new JWT by using an
existing JWT. Automated systems that integrate with Pulp would benefit from
being able to renew tokens using an existing token.

Enabling this feature with django-rest-framework-jwt also requires
selecting the maximum amount of time, measured from when the original token
was issued, during which the token can be refreshed. The default is 7 days.
Pulp users should be able to supply this value. They should also be able to
specify how long each token is good for.


What do others think?
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Pulp 3: using JWT to request a JWT

2017-11-29 Thread Dennis Kliban
On Tue, Nov 28, 2017 at 8:32 PM, David Davis  wrote:

> I’m not sure I fully understand this last paragraph about setting a
> maximum amount of time per token. Regardless, I would not add the ability
> to request new JWT tokens using JWT authentication in the MVP unless it’s
> easy to implement. I think we want that eventually but what we have today
> supports most of what users want or need from JWT auth.
>

The change I am proposing is just a configuration change in settings.py. We
need to set JWT_ALLOW_REFRESH to True and determine what we want the
default value for JWT_REFRESH_EXPIRATION_DELTA to be. The first enables the
feature; the second determines how much time can pass from the creation of
the very first token (obtained with a username and password) until the user
has to use the username and password again. In the time in between, the
user can use the current JWT to get a new JWT.
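
For reference, the settings fragment would look roughly like this (the
setting names are from the django-rest-framework-jwt docs; the delta values
are only examples, not proposed Pulp defaults):

```python
# Illustrative Django settings.py fragment for django-rest-framework-jwt.
import datetime

JWT_AUTH = {
    # Allow a valid, unexpired JWT to be exchanged for a fresh one.
    'JWT_ALLOW_REFRESH': True,
    # How long any single token is good for.
    'JWT_EXPIRATION_DELTA': datetime.timedelta(hours=1),
    # Window, measured from the original username/password login, during
    # which tokens may keep being refreshed before the user must log in again.
    'JWT_REFRESH_EXPIRATION_DELTA': datetime.timedelta(days=7),
}
```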

More docs on this are here[0].


[0] https://getblimp.github.io/django-rest-framework-jwt/


>
> David
>
> On Tue, Nov 28, 2017 at 5:34 PM, Dennis Kliban  wrote:
>
>> Our MVP doc currently states "As an API user, I can authenticate any API
>> call (except to request a JWT) with a JWT. (not certain if this should be
>> the behavior) [in progress]"
>>
>> The uncertainty was due to the "except to request a JWT" clause.
>>
>> I propose that Pulp 3 should support requesting a new JWT by using an
>> existing JWT. Automated systems that integrate with Pulp would benefit from
>> being able to renew tokens using an existing token.
>>
>> Enabling this feature with django-rest-framework-jwt requires also
>> selecting the maximum amount of time since original token was issued that
>> the token can be refreshed. The default is 7 days. Pulp users should be
>> able to supply this value. They should also be able to specify how long each
>> token is good for.
>>
>>
>> What do others think?
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Proposal and feedback request: un-nest urls

2017-11-30 Thread Dennis Kliban
+1 to not nesting

I prefer the simplicity of unnested URLs for the API. This change will
require users to specify a repository href when creating an importer or a
publisher. This provides the same amount of information as a nested URL.
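
A tiny illustration of that point (the request shapes below are
hypothetical, not the actual Pulp 3 API): the repository association simply
moves from the URL path into the request body, so nothing is lost.

```python
# Creating an importer for repository 1a2b, nested vs. un-nested.
nested_request = {
    'method': 'POST',
    'path': '/repositories/1a2b/importers/',
    'body': {},
}
unnested_request = {
    'method': 'POST',
    'path': '/importers/',
    'body': {'repository': '/repositories/1a2b/'},
}

def repository_of(req):
    """Recover the target repository href from either request shape."""
    if req['path'].startswith('/repositories/'):
        return '/'.join(req['path'].split('/')[:3]) + '/'
    return req['body']['repository']

# Both shapes identify the same repository.
assert repository_of(nested_request) == repository_of(unnested_request)
```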

On Wed, Nov 29, 2017 at 5:32 PM, Brian Bouterse  wrote:

> For deletes, the db relationships are all there, so I expect deletes to
> cascade to other objects with any url structure. I believe closer to the
> release, we'll have to look at the cascading delete relationships to see if
> the behaviors that we have are correct.
>
> Overall, I'm +1 on un-nesting. I think it would result in a good user
> experience. I know it goes against the logical composition arguments, which
> have been well laid out. We want Pulp to be really simple, and the nested
> URL in the top of this thread is anything but simple. Consider another
> project like Ansible Galaxy (who also uses Django and DRF). Their API is
> very flat and as an outsider I find it very approachable:
> https://galaxy.ansible.com/api/v1/  Pulp could be that simple.
>
> My main concern in keeping the nesting is that this is going to be
> difficult for plugin writers. Making plugin writing easy is a primary goal
> if not the primary goal of Pulp3. If core devs are spending lots of time on
> it, a person doing this in their free time may not bother.
>
> I also see practical reasons motivating us to un-nest. We have been adding
> custom code regularly in this area, and it's been highly complex and
> slow going. I think Austin described it well. Getting the viewsets working
> and to be simpler would allow us to move forward in many areas.
>
> So overall, un-nesting would give a better user experience (I think), a
> simpler plugin writer experience, and it would unblock a lot of work.
>
>
>
> On Wed, Nov 29, 2017 at 3:29 PM, Bihan Zhang  wrote:
>
>> I have a question about repository delete with the un-nested model.
>> When a repository is deleted does the DELETE cascade to the
>> importers/publishers that are linked to the repo? In an un-nested world I
>> don't think they would. It would be odd for an object with its own endpoint
>> to vanish without the user calling DELETE on the model.
>>
>> When nested it makes sense to cascade the delete so if /repo/1/ is
>> deleted, everything thereafter (/repo/1/importer/2) should also be removed.
>>
>> Austin, I do see your point about it being a lot more complicated, but I
>> think modeling things the right way is worth carrying the extra code and
>> complexity.
>>
>> Anyways, maybe I'm wrong and importer/publishers should exist without a
>> repository, in which case I can definitely see the value in un-nesting the
>> URLs.
>>
>>
>> On Wed, Nov 29, 2017 at 2:21 PM, Jeff Ortel  wrote:
>>
>>> Austin makes a compelling argument.
>>>
>>>
>>> On 11/28/2017 02:16 PM, Austin Macdonald wrote:
>>> > When I look at this, the most important point is that we have a
>>> hyperlinked REST API, which means that the
>>> > urls are specifically not going to be built by users.
>>> >
>>> > For a user to retrieve an importer, they would first GET the importers
>>> for a repository. The next call would
>>> > be the exact href returned by pulp. This workflow is exactly the same
>>> whether we nest or not. The only
>>> > difference is that we no longer convey the information in the href,
>>> which seems fine to me since they aren't
>>> > particularly readable anyway.
>>> >
>>> > It has already been discussed that filtering can make up for the use
>>> cases that use nesting, and that filters
>>> > would be more flexible.
>>> >
>>> > So for me, nesting costs in (1) extra code to carry (2) extra
>>> dependency (3) complexity to use.
>>> >
>>> > To elaborate on the complexity, the problem is in declaring fields on
>>> the serializer. The serializer is
>>> > responsible for building the urls, which requires all of the uuids for
>>> the entire nested structure. This is
>>> > further complicated by master/detail, which is an entirely Pulp
>>> concept.
>>> >
>>> > Because of this, anyone working on the API (likely including plugin
>>> writers) will need to understand
>>> > parent_lookup_kwargs and how to use then with:
>>> > DetailNestedHyperlinkedRelatedField
>>> > DetailNestedHyperlinkedidentityField
>>> > DetailwritableNestedUrlRelatedField
>>> > DetailRelatedField
>>> > DetailIdentityField
>>> > NestedHyperlinkedRelatedField
>>> > HyperlinkedRelatedField.
>>> >
>>> > The complexity seems inherent, so I doubt we will be able to simplify
>>> this much. So, is all this code and
>>> > complexity worth the implied relationship in non-human-friendly urls?
>>> As someone who has spent a lot of time
>>> > on this code, I don't think so.
>>> >
>>> >
>>> >
>>> > On Nov 28, 2017 06:12, "Patrick Creech" >> pcre...@redhat.com>> wrote:
>>> >
>>> > On Mon, 2017-11-27 at 16:10 -0600, Jeff Ortel wrote:
>>> > > On 11/27/2017 12:19 PM, Jeff Ortel wrote:
>>> > > >
>>> > > >
>>> > > > On 11/17/2017 08:55 AM, Pat

Re: [Pulp-dev] Pulp 3: using JWT to request a JWT

2017-12-01 Thread Dennis Kliban
On Fri, Dec 1, 2017 at 2:30 PM, Brian Bouterse  wrote:

> +1 to using JWT_ALLOW_REFRESH as the name, I read the other name from some
> other docs. +1 to adding a refresh token endpoint and some docs.
>
> We need to update this area in the MVP which is currently in red. We could
> replace the use case in red with:  "As an API user, I can authenticate any
> API call with a JWT" and then add the following two use cases:
>
> As a JWT authenticated user, I can receive a new JWT if Pulp is configured
> with JWT_ALLOW_REFRESH=True
> As a Pulp administrator, my Pulp system disallows JWT renewal by default
> (JWT_ALLOW_REFRESH=False)
>
> What about these use case changes to the MVP to reflect this convo?
>


+1



>
> On Thu, Nov 30, 2017 at 5:46 PM, Jeremy Audet  wrote:
>
>> I think @misa's point is that if a valid token becomes compromised, it
>>> could be renewed for a long-maybe-forever time.
>>>
>>> I'm reading a desire to have Pulp exhibit both of these types of
>>> behaviors, and both for good reasons. What if we introduce a setting
>>> JWT_REFRESH. If enabled, JWT_REFRESH will allow you to receive a new JWT
>>> when authenticating with an existing JWT. Defaults to False.
>>>
>>> I'm picking False as the default on the idea that not renewing tokens
>>> would be a more secure system by limiting access in more cases than when
>>> JWT_REFRESH is True. In the implementation, when JWT_REFRESH is set to True
>>> it would fully disable the JWT_REFRESH_EXPIRATION_DELTA setting so that it
>>> could be refreshed indefinitely. The user would never know about
>>> JWT_REFRESH_EXPIRATION_DELTA.
>>
>>
>> Being secure-by-default, with the option to do useful-but-dangerous
>> things, is a great design approach.
>>
>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Pulp 3: using JWT to request a JWT

2017-12-01 Thread Dennis Kliban
On Fri, Dec 1, 2017 at 2:47 PM, David Davis  wrote:

> I would just do:
>
> As a JWT authenticated user, I can refresh my JWT token if Pulp is
> configured with JWT_ALLOW_REFRESH set to True (default is False).
>
> Having two user stories means two separate items in redmine, and both of
> these user stories will probably be fixed in one commit/PR.
>
>
I wrote up a redmine ticket for this: https://pulp.plan.io/issues/3163


>
> David
>
> On Fri, Dec 1, 2017 at 2:30 PM, Brian Bouterse 
> wrote:
>
>> +1 to using JWT_ALLOW_REFRESH as the name, I read the other name from
>> some other docs. +1 to adding a refresh token endpoint and some docs.
>>
>> We need to update this area in the MVP which is currently in red. We
>> could replace the use case in red with:  "As an API user, I can
>> authenticate any API call with a JWT" and then add the following two use
>> cases:
>>
>> As a JWT authenticated user, I can receive a new JWT if Pulp is
>> configured with JWT_ALLOW_REFRESH=True
>> As a Pulp administrator, my Pulp system disallows JWT renewal by default
>> (JWT_ALLOW_REFRESH=False)
>>
>> What about these use case changes to the MVP to reflect this convo?
>>
>> On Thu, Nov 30, 2017 at 5:46 PM, Jeremy Audet  wrote:
>>
>>> I think @misa's point is that if a valid token becomes compromised, it
 could be renewed for a long-maybe-forever time.

 I'm reading a desire to have Pulp exhibit both of these types of
 behaviors, and both for good reasons. What if we introduce a setting
 JWT_REFRESH. If enabled, JWT_REFRESH will allow you to receive a new JWT
 when authenticating with an existing JWT. Defaults to False.

 I'm picking False as the default on the idea that not renewing tokens
 would be a more secure system by limiting access in more cases than when
 JWT_REFRESH is True. In the implementation, when JWT_REFRESH is set to True
 it would fully disable the JWT_REFRESH_EXPIRATION_DELTA setting so that it
 could be refreshed indefinitely. The user would never know about
 JWT_REFRESH_EXPIRATION_DELTA.
>>>
>>>
>>> Being secure-by-default, with the option to do useful-but-dangerous
>>> things, is a great design approach.
>>>
>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] repository versions update

2017-12-04 Thread Dennis Kliban
I am looking forward to discussing the use cases. I hope we can get
versioned repositories into 3.0. Thanks everyone for the discussion so far.

-Dennis

On Fri, Dec 1, 2017 at 5:16 PM, Brian Bouterse  wrote:

> Thank you all for such great discussion!
>
> To recap some discussion we had today. We are going to look at the
> versioned repos use cases at an upcoming MVP call in the near future
> (probably 12/8). Look for the pulp-list announcement. If you have use cases
> you want to share, you can add them in red in the Versioned Repos section
> of the MVP here:  https://pulp.plan.io/projects/
> pulp/wiki/Pulp_3_Minimum_Viable_Product/#Versioned-Repositories
>
> Once the use cases are known, we can look at the PR and see if it fulfills
> them. From the discussion today, the general consensus is that gap will be
> relatively small, which makes including it in Pulp3 feasible.
>
> @misa providing those types of features may be possible. Imagine an
> optional attribute on a repo version named 'frozen' that defaults to True.
> While the latest repo_version for a repo has frozen=False, any action that
> would normally create a new repo version (copy, add/remove, delete, etc)
> would act on the existing repo version and *not* create a new one. Then the
> user can update the frozen attribute of the repo version when they want,
> which commits the transaction as a repo version. I don't think this would
> be too hard to implement.
>
>
> On Thu, Nov 30, 2017 at 3:20 PM, Michael Hrivnak 
> wrote:
>
>>
>>
>> On Thu, Nov 30, 2017 at 11:43 AM, Mihai Ibanescu <
>> mihai.ibane...@gmail.com> wrote:
>>
>>> I am late to the thread, so I apologize if I repeat things that have
>>> been discussed already.
>>>
>>> Is it a meaningful use case to publish an older version of the repo?
>>> Once published, do you keep track of which version got published, and how
>>> do you decide which version to push next? This seems like a complication to
>>> me.
>>>
>>>
>> A publication will have a reference to the version that it was created
>> from. To illustrate how that would get used: Your CTO calls early on a
>> Saturday morning and says "I read in the news about a major security flaw
>> in cowsay, and I know our applications depend heavily on it. What version
>> do we have deployed right now???!!!" You can concretely determine which
>> publications are being currently "distributed" to your infrastructure, and
>> from there see their exact content sets by virtue of the repo version.
>>
>> Then there is the promotion workflow, which in Pulp 2 requires a lot of
>> copying and re-publishing. With repo versions, you'll have a sequence of
>> versions of course. Let's say there's 1, 2 and 3. Version 1 is deployed
>> now, version 2 is undergoing testing, and version 3 got created last night
>> by the weekly sync job you setup. You would have two different distributors
>> that make these publications available to clients: one for production, and
>> one for testing. "Promotion" becomes just the act of updating the reference
>> on a distribution to a different publication. When testing on version 2 is
>> done, assuming it passes, you can update the production distribution to
>> make it use version 2.
>>
>> There are a few use cases for publishing an old version.
>>
>> One is: I want to publish the same exact content set two different ways,
>> with two different publishers. If the contents change between publishes, I
>> want a guarantee that it won't cause the second publish to use different
>> content than the first.
>>
>> Second: I like the state of the content in a repo as it is right now. I
>> want to publish that exact content set. If any changes happen to the
>> content in that repo between now and when my publish task gets run by a
>> worker, I don't want those changes to affect the publish I'm requesting
>> right now.
>>
>> Third: I want the ability to roll back from a bad content set to a
>> known-good one. How many publications must I keep around to have confidence
>> that if I need to roll back some distance, that publication will still be
>> available? It's valuable to know I can re-publish an older version any time
>> I need it.
>>
>> Fourth: In some cases you may decide after-the-fact that you need to
>> publish the same content set a different way. Maybe you went to kickstart
>> from a yum repo and then remembered that (this is a true story) one version
>> of your installer is too old to know about sha256 checksums, so you have to
>> go re-publish the same content set with different settings for how the
>> metadata gets generated.
>>
>> Otherwise, just as reproducible builds of software is a very valuable
>> trait, reproducible publishes of repositories are valuable for similar
>> reasons.
>>
>>
>>
>>> As a user / content developer, it seems more useful to me to always
>>> publish the latest (i.e. don't have an optional version for publishing),
>>> but have the ability to copy from a specific version of a repo into another
>>> repo (or th

Re: [Pulp-dev] Pulp Code of Conduct PUP discussion

2017-12-06 Thread Dennis Kliban
I submitted the PR[0] for PUP 4. Please continue to provide feedback on the
PR or on this thread.


[0] https://github.com/pulp/pups/pull/6

On Thu, Nov 30, 2017 at 1:06 PM, Daniel Alley  wrote:

> +1 to option F
>
> On Wed, Nov 29, 2017 at 4:47 PM, Brian Bouterse 
> wrote:
>
>> +1 to option F. It starts a CONTRIBUTING.md and also shows off the CoC on
>> the website. We can easily make a CoC page on the website and link to that.
>>
>> For the email, we probably need to make a private mailing list like
>> pulp-...@redhat.com and have 2-3 people subscribed as a start. Is there
>> a better name? Who would want to subscribe to something like this? Overall
>> I think the Django enforcement manual [7] is a decent manual to handle
>> these types of things. I'm not suggesting we formally adopt it, just that
>> it's pretty useful.
>>
>> [7]: https://www.djangoproject.com/conduct/enforcement-manual/
>>
>>
>>
>>
>> On Wed, Nov 29, 2017 at 4:06 PM, David Davis 
>> wrote:
>>
>>> +1 to option F.
>>>
>>> I reviewed the code of conduct and it looks like a great starting point.
>>> There’s a space for email address though ("[INSERT EMAIL ADDRESS]”) in the
>>> CoC.
>>>
>>>
>>> David
>>>
>>> On Mon, Nov 27, 2017 at 4:51 PM, Dennis Kliban 
>>> wrote:
>>>
>>>> Pulp should adopt the Contributor Covenant Code of Conduct[0] as its
>>>> code of conduct.
>>>>
>>>> I have looked at other projects that have adopted this CoC and have
>>>> discovered that we have a few options for how to publish the CoC.
>>>>
>>>> a) Add a CODE_OF_CONDUCT.md to root of repo and refer to it from
>>>> pulpproject.org[1]
>>>>
>>>> b) Append the CoC to the contributing guide in CONTRIBUTING.md[2].
>>>>
>>>> c) Add a CODE_OF_CONDUCT.md to root of repo and link to the Contributor
>>>> Covenant site.[3]
>>>>
>>>> d) Add a CoC page to pulpproject.org and link to it from
>>>> CODE_OF_CONDUCT.md[4]
>>>>
>>>> e) Include the CoC in the PUP. Add a CODE_OF_CONDUCT.md and link to the
>>>> PUP on GitHub or an HTML version of the PUP on the website.
>>>>
>>>> f) Include the CoC in the PUP. Add a CoC section to CONTRIBUTING.md and
>>>> link to the PUP on GitHub or an HTML version of the PUP on the website.
>>>>
>>>>
>>>> I prefer option F, but if we do this work before we add
>>>> CONTRIBUTING.md[5], I am ok with option E.
>>>>
>>>>
>>>> We should adopt it verbatim. If anyone wants to adopt a modified
>>>> version, please add your revisions here[6] and notify the list when your
>>>> changes are ready to review.
>>>>
>>>>
>>>>
>>>> [0] https://www.contributor-covenant.org/version/1/4/code-of-con
>>>> duct.html
>>>> [1] https://github.com/rom-rb/rom/blob/master/CODE_OF_CONDUCT.md
>>>> [2] https://github.com/gitlabhq/gitlabhq/blob/master/CONTRIBUTIN
>>>> G.md#code-of-conduct
>>>> [3] https://github.com/patternfly/patternfly/blob/master/CODE_OF
>>>> _CONDUCT.md
>>>> [4] https://github.com/rails/rails/blob/master/CODE_OF_CONDUCT.md
>>>> [5] https://pulp.plan.io/issues/3150
>>>> [6] http://pad-theforeman.rhcloud.com/p/Pulp-CoC
>>>>
>>>> ___
>>>> Pulp-dev mailing list
>>>> Pulp-dev@redhat.com
>>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>>
>>>>
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Pulp Code of Conduct PUP discussion

2017-12-07 Thread Dennis Kliban
I pushed some revisions based on the feedback I received so far.
Please take another look and provide feedback.

On Wed, Dec 6, 2017 at 9:35 PM, Dennis Kliban  wrote:

> I submitted the PR[0] for PUP 4. Please continue to provide feedback on
> the PR or on this thread.
>
>
> [0] https://github.com/pulp/pups/pull/6
>
> On Thu, Nov 30, 2017 at 1:06 PM, Daniel Alley  wrote:
>
>> +1 to option F
>>
>> On Wed, Nov 29, 2017 at 4:47 PM, Brian Bouterse 
>> wrote:
>>
>>> +1 to option F. It starts a CONTRIBUTING.md and also shows off the CoC
>>> on the website. We can easily make a CoC page on the website and link to
>>> that.
>>>
>>> For the email, we probably need to make a private mailing list like
>>> pulp-...@redhat.com and have 2-3 people subscribed as a start. Is there
>>> a better name? Who would want to subscribe to something like this? Overall
>>> I think the Django enforcement manual [7] is a decent manual to handle
>>> these types of things. I'm not suggesting we formally adopt it, just that
>>> it's pretty useful.
>>>
>>> [7]: https://www.djangoproject.com/conduct/enforcement-manual/
>>>
>>>
>>>
>>>
>>> On Wed, Nov 29, 2017 at 4:06 PM, David Davis 
>>> wrote:
>>>
>>>> +1 to option F.
>>>>
>>>> I reviewed the code of conduct and it looks like a great starting
>>>> point. There’s a space for email address though ("[INSERT EMAIL ADDRESS]”)
>>>> in the CoC.
>>>>
>>>>
>>>> David
>>>>
>>>> On Mon, Nov 27, 2017 at 4:51 PM, Dennis Kliban 
>>>> wrote:
>>>>
>>>>> Pulp should adopt the Contributor Covenant Code of Conduct[0] as its
>>>>> code of conduct.
>>>>>
>>>>> I have looked at other projects that have adopted this CoC and have
>>>>> discovered that we have a few options for how to publish the CoC.
>>>>>
>>>>> a) Add a CODE_OF_CONDUCT.md to root of repo and refer to it from
>>>>> pulpproject.org[1]
>>>>>
>>>>> b) Append the CoC to the contributing guide in CONTRIBUTING.md[2].
>>>>>
>>>>> c) Add a CODE_OF_CONDUCT.md to root of repo and link to the
>>>>> Contributor Covenant site.[3]
>>>>>
>>>>> d) Add a CoC page to pulpproject.org and link to it from
>>>>> CODE_OF_CONDUCT.md[4]
>>>>>
>>>>> e) Include the CoC in the PUP. Add a CODE_OF_CONDUCT.md and link to
>>>>> the PUP on GitHub or an HTML version of the PUP on the website.
>>>>>
>>>>> f) Include the CoC in the PUP. Add a CoC section to CONTRIBUTING.md
>>>>> and link to the PUP on GitHub or an HTML version of the PUP on the 
>>>>> website.
>>>>>
>>>>>
>>>>> I prefer option F, but if we do this work before we add
>>>>> CONTRIBUTING.md[5], I am ok with option E.
>>>>>
>>>>>
>>>>> We should adopt it verbatim. If anyone wants to adopt a modified
>>>>> version, please add your revisions here[6] and notify the list when your
>>>>> changes are ready to review.
>>>>>
>>>>>
>>>>>
>>>>> [0] https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
>>>>> [1] https://github.com/rom-rb/rom/blob/master/CODE_OF_CONDUCT.md
>>>>> [2] https://github.com/gitlabhq/gitlabhq/blob/master/CONTRIBUTING.md#code-of-conduct
>>>>> [3] https://github.com/patternfly/patternfly/blob/master/CODE_OF_CONDUCT.md
>>>>> [4] https://github.com/rails/rails/blob/master/CODE_OF_CONDUCT.md
>>>>> [5] https://pulp.plan.io/issues/3150
>>>>> [6] http://pad-theforeman.rhcloud.com/p/Pulp-CoC
>>>>>
>>>>> ___
>>>>> Pulp-dev mailing list
>>>>> Pulp-dev@redhat.com
>>>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>>>
>>>>>
>>>>
>>>> ___
>>>> Pulp-dev mailing list
>>>> Pulp-dev@redhat.com
>>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>>
>>>>
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Repo version implementation

2017-12-11 Thread Dennis Kliban
+1 to option 1

The plugin writer's experience will be much simpler.

On Mon, Dec 11, 2017 at 8:20 PM, Mihai Ibanescu 
wrote:

> If you ever want to implement a version diff endpoint, where you can
> compare one version of a repo with another version of the same repo (or of
> another repo, maybe), then #2 is pure pain.
>
> I find myself diffing repos more often than I want to, so I would
> definitely want a way to diff, between versions, between repos, and/or both.
>
> Mihai
>
> On Mon, Dec 11, 2017 at 5:18 PM, Jeremy Audet  wrote:
>
>> Regarding the second option: What happens if I (as a user) add a content
>> unit to a repo, later remove it, later add it again, and later remove it
>> again? Would this result in two "version_added" and two "version_removed"
>> records?
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] Voting for PUP 4: Code of Conduct

2017-12-12 Thread Dennis Kliban
We had some discussion about this PUP in a separate thread[0]. We have
reached consensus on the wording of the PUP, and it is now open for voting.

To refresh everyone’s memory, voting is outlined in PUP-1:

https://github.com/pulp/pups/blob/master/pup-0001.md#voting

And here’s the PUP in question:

https://github.com/dkliban/pups/blob/pup4/pup-0004.md

Please respond to this thread with your vote or any comments/questions.




[0] https://www.redhat.com/archives/pulp-dev/2017-November/msg00110.html
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Pulp 3: using JWT to request a JWT

2017-12-12 Thread Dennis Kliban
tl;dr We should support only basic auth for 3.0 and implement JWT
authentication in 3.1+

We currently have 2 stories[0-1] related to JWT authentication that we
wanted to implement for 3.0. As @bmbouter, @daviddavis, and I tried to
groom them earlier today, we decided that we are not ready to commit to
using "djangorestframework-jwt" app for handling JWT authentication. This
app has some behaviors that we want to override and also comes with several
configuration options that we don't want to support long term. I am
proposing that we remove JWT authentication from the MVP and move it to the
3.1+ list. I'd like to

 - close issues 3163 and 3164
 - move JWT auth use cases from the MVP document[2] to the 3.1+
document[3].
 - add a story for removing "djangorestframework-jwt" from pulp 3.0

[0] https://pulp.plan.io/issues/3163
[1] https://pulp.plan.io/issues/3164
[2]
https://pulp.plan.io/projects/pulp/wiki/Pulp_3_Minimum_Viable_Product/#Authentication
[3] https://pulp.plan.io/projects/pulp/wiki/31+_Ideas_(post_MVP)
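Since basic auth would be the only supported mechanism for 3.0, here is a minimal client-side sketch of building the Authorization header (the credentials are placeholders, not real defaults):

```python
import base64

def basic_auth_header(username, password):
    """Build the HTTP Basic Authorization header a Pulp 3 API
    client would send with every request."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

basic_auth_header("admin", "admin")
# → {'Authorization': 'Basic YWRtaW46YWRtaW4='}
```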


On Fri, Dec 1, 2017 at 3:48 PM, Brian Bouterse  wrote:

> +1 to just those use cases. Since we can rollback the change I updated the
> MVP with this change: https://pulp.plan.io/projects/pulp/wiki/Pulp_3_Minimum_Viable_Product/diff?utf8=%E2%9C%93&version=125&version_from=124&commit=View+differences
>
> I also added an explicit use case saying that basic auth can authenticate
> to all urls. I think that got lost in the language revisions. It's also in
> the diff ^.
>
> Anyone feel free to suggest other changes or edit and send links with the
> diff.
>
> On Fri, Dec 1, 2017 at 2:47 PM, David Davis  wrote:
>
>> I would just do:
>>
>> As a JWT authenticated user, I can refresh my JWT token if Pulp is
>> configured with JWT_ALLOW_REFRESH set to True (default is False).
>>
>> Having two user stories means two separate items in redmine, and both of
>> these user stories will probably be fixed in one commit/PR.
>>
>>
>> David
>>
>> On Fri, Dec 1, 2017 at 2:30 PM, Brian Bouterse 
>> wrote:
>>
>>> +1 to using JWT_ALLOW_REFRESH as the name, I read the other name from
>>> some other docs. +1 to adding a refresh token endpoint and some docs.
>>>
>>> We need to update this area in the MVP which is currently in red. We
>>> could replace the use case in red with:  "As an API user, I can
>>> authenticate any API call with a JWT" and then add the following two use
>>> cases:
>>>
>>> As a JWT authenticated user, I can receive a new JWT if Pulp is
>>> configured with JWT_ALLOW_REFRESH=True
>>> As a Pulp administrator, my Pulp system disallows JWT renewal by default
>>> (JWT_ALLOW_REFRESH=False)
>>>
>>> What about these use case changes to the MVP to reflect this convo?
>>>
>>> On Thu, Nov 30, 2017 at 5:46 PM, Jeremy Audet  wrote:
>>>
 I think @misa's point is that if a valid token becomes compromised, it
> could be renewed for a long-maybe-forever time.
>
> I'm reading a desire to have Pulp exhibit both of these types of
> behaviors, and both for good reasons. What if we introduce a setting
> JWT_REFRESH. If enabled, JWT_REFRESH will allow you to receive a new JWT
> when authenticating with an existing JWT. Defaults to False.
>
> I'm picking False as the default on the idea that not renewing tokens
> would be a more secure system by limiting access in more cases than when
> JWT_REFRESH is True. In the implementation, when JWT_REFRESH is set to True
> it would fully disable the JWT_REFRESH_EXPIRATION_DELTA setting so that it
> could be refreshed indefinitely. The user would never know about
> JWT_REFRESH_EXPIRATION_DELTA.


 Being secure-by-default, with the option to do useful-but-dangerous
 things, is a great design approach.

>>>
>>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Deferring 3 things for Pulp3 to 3.1+

2017-12-13 Thread Dennis Kliban
+1

On Wed, Dec 13, 2017 at 10:04 AM, Austin Macdonald 
wrote:

> +1
>
> On Wed, Dec 13, 2017 at 10:02 AM, Bihan Zhang  wrote:
>
>> +1
>>
>> On Wed, Dec 13, 2017 at 10:01 AM, Jeff Ortel  wrote:
>>
>>> +1
>>>
>>> On 12/12/2017 10:47 AM, Brian Bouterse wrote:
>>>
>>> As we get to the end of the MVP planning for Pulp3, I want to check-in
>>> about deferring 3 areas of Pulp functionality to the 3.1+ page [0]. I'm
>>> looking for feedback, especially -1s, about deferring the following 3
>>> things from the Pulp 3.0 release. This would finalize a few still-red or
>>> totally missing areas of the MVP [1].
>>>
>>> - Consumer Applicability. Pulp3 won't manage consumers, but Pulp is
>>> still in a good position to offer applicability. Katello uses it
>>> significantly, but they won't be using the 3.0 release.
>>>
>>> - Lazy downloading. I think this should be a top 3.1 priority. It will
>>> take a significant effort to update/test/release the streamer so I don't
>>> think we can include it in 3.0 for practical timeline reasons.
>>>
>>> - Content Protection. I believe we want both basic auth and key based
>>> verification of content served by the Pulp content app. This is an easy
>>> feature to add, but not one I think we should plan fully or do as part of
>>> the 3.0 MVP.
>>>
>>> Please send thoughts or ideas on these changes soon, so we can finalize
>>> the MVP document in the next few days.
>>>
>>> [0]: https://pulp.plan.io/projects/pulp/wiki/31+_Ideas_(post_MVP)
>>> 
>>> [1]: https://pulp.plan.io/projects/pulp/wiki/Pulp_3_Minimum_Viable_Product/
>>>
>>> Thank you,
>>> Brian
>>>
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Tasking System Improvement

2017-12-13 Thread Dennis Kliban
I read through the story and it looks good to me. I marked it as groomed.

On Fri, Dec 8, 2017 at 3:50 PM, Brian Bouterse  wrote:

> Recently @ttereshc brought up a tasking system improvement while
> investigating a user reported issue. I think it's something we want to get
> added to both Pulp3 and Pulp2. The story I wrote up is written as a Pulp3
> story [0]. What do others think about this tasking system change? Can
> others post questions or groom that story?
>
> After adding it into Pulp3 I think we can have a separate issue (not yet
> created) to track to backporting of a similar change to the pulp2 API.
>
> [0]: https://pulp.plan.io/issues/3176
>
> -Brian
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Voting for PUP 4: Code of Conduct

2017-12-18 Thread Dennis Kliban
PUP 1 recommends at least 12 days of voting. The voting on this PUP will
end after 14 days on December 26th.

On Thu, Dec 14, 2017 at 11:18 AM, Tatiana Tereshchenko 
wrote:

> +1
>
> On Wed, Dec 13, 2017 at 5:43 PM, Brian Bouterse 
> wrote:
>
>> +1
>>
>> On Wed, Dec 13, 2017 at 11:24 AM, Ina Panova  wrote:
>>
>>> +1
>>>
>>>
>>>
>>> 
>>> Regards,
>>>
>>> Ina Panova
>>> Software Engineer| Pulp| Red Hat Inc.
>>>
>>> "Do not go where the path may lead,
>>>  go instead where there is no path and leave a trail."
>>>
>>> On Tue, Dec 12, 2017 at 8:32 PM, Austin Macdonald 
>>> wrote:
>>>
>>>> +1
>>>>
>>>> On Tue, Dec 12, 2017 at 2:01 PM, Bihan Zhang 
>>>> wrote:
>>>>
>>>>> +1
>>>>>
>>>>> On Tue, Dec 12, 2017 at 1:45 PM, David Davis 
>>>>> wrote:
>>>>>
>>>>>> +1
>>>>>>
>>>>>>
>>>>>> David
>>>>>>
>>>>>> On Tue, Dec 12, 2017 at 1:39 PM, Daniel Alley 
>>>>>> wrote:
>>>>>>
>>>>>>> +1
>>>>>>>
>>>>>>> On Tue, Dec 12, 2017 at 12:25 PM, Dennis Kliban 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> We had some discussion about this PUP in a separate thread[0]. We
>>>>>>>> have now reached consensus on the wording of the PUP to open it up to
>>>>>>>> voting.
>>>>>>>>
>>>>>>>> To refresh everyone’s memory, voting is outlined in PUP-1:
>>>>>>>>
>>>>>>>> https://github.com/pulp/pups/blob/master/pup-0001.md#voting
>>>>>>>>
>>>>>>>> And here’s the PUP in question:
>>>>>>>>
>>>>>>>> https://github.com/dkliban/pups/blob/pup4/pup-0004.md
>>>>>>>>
>>>>>>>> Please respond to this thread with your vote or any
>>>>>>>> comments/questions.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> [0] https://www.redhat.com/archives/pulp-dev/2017-November/msg00110.html
>>>>>>>>
>>>>>>>> ___
>>>>>>>> Pulp-dev mailing list
>>>>>>>> Pulp-dev@redhat.com
>>>>>>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>> ___
>>>>>>> Pulp-dev mailing list
>>>>>>> Pulp-dev@redhat.com
>>>>>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> ___
>>>>>> Pulp-dev mailing list
>>>>>> Pulp-dev@redhat.com
>>>>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>>>>
>>>>>>
>>>>>
>>>>> ___
>>>>> Pulp-dev mailing list
>>>>> Pulp-dev@redhat.com
>>>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>>>
>>>>>
>>>>
>>>> ___
>>>> Pulp-dev mailing list
>>>> Pulp-dev@redhat.com
>>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>>
>>>>
>>>
>>> ___
>>> Pulp-dev mailing list
>>> Pulp-dev@redhat.com
>>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>>
>>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Crane redirects - internal and external content

2017-12-19 Thread Dennis Kliban
Crane cannot perform a rewrite of the redirect URL at this time. This seems
like a reasonable feature request. I recommend filing a story - we can
discuss the feature details there.
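As a rough sketch of what such a rewrite option could do (this is hypothetical, not actual Crane code; hostnames are placeholders): swap the baked-in internal host for the externally visible one while keeping scheme and path intact.

```python
from urllib.parse import urlparse, urlunparse

def rewrite_redirect(url, external_netloc):
    """Replace the host baked into a redirect URL with the
    configured external host, preserving scheme and path."""
    return urlunparse(urlparse(url)._replace(netloc=external_netloc))

rewrite_redirect("https://internal.example.com/content/repo/", "cdn.example.com")
# → 'https://cdn.example.com/content/repo/'
```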

On Wed, Dec 13, 2017 at 11:29 AM, Mihai Ibanescu 
wrote:

> Hi,
>
> In our current setup, we have a purely internal pulp deployment, that
> publishes to an NFS share.
>
> HTTP frontend machines handle the cert-based authn/authz and serve the
> content from the NFS share.
>
> We have an internal set of HTTP frontend machines, and an internal
> customer has access to published content for all development stages
> (dev/test/prod).
>
> We also have an external set of HTTP frontend machines, that handle
> external customer requests, and only serve the prod stage. Content from the
> internal NFS share is selectively rsynced into the external disk share.
>
> This all works great for rpm and such.
>
> I believe there is a problem with docker. We would have one internal and
> one external crane deployment, as expected. Content would be rsynced, as
> usual. However, because the redirect URL is "baked" into the redirect json
> files, the external Crane would redirect to the internal system, which is
> not helpful.
>
> We would prefer not to republish / recreate the redirect files in our
> transition from internal to external content.
>
> One way to handle this would be a Crane configuration option that directs
> crane to rewrite the redirect URL. In that case, internal and external
> crane systems would be configured differently.
>
> The questions:
> * Is there such an option in Crane? (looking at the code, I believe the
> answer is no)
> * Is there a feature request for something like this already?
> * If not, do you agree what I've described above is a valid customer use
> case, and should I file it as a feature request?
>
> Thanks!
> Mihai
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Object IDs in Pulp 3

2018-01-03 Thread Dennis Kliban
We should probably expose the ID field in the REST API.

CLI should convert a resource type and UUID into a URL. CLI users should be
able to specify a UUID when referencing a resource.
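As an illustration of the CLI conversion (the path layout here is an assumption, not the settled v3 URL scheme):

```python
def href_for(resource_type, uuid):
    """Build the relative href the CLI would resolve from a
    resource type plus UUID; the layout shown is illustrative."""
    return f"/api/v3/{resource_type}/{uuid}/"

href_for("repositories", "123abc456")
# → '/api/v3/repositories/123abc456/'
```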

The REST API only cares about the relative path after the hostname[0].
Katello can store either the full URL or just the relative path. We should
provide some documentation for REST API users to suggest that resources
should be compared without the protocol, hostname, or port.


[0] https://pulp.plan.io/issues/3024
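A sketch of the suggested comparison, stripping protocol, hostname, and port before comparing (hostnames here are made up):

```python
from urllib.parse import urlparse

def same_resource(href_a, href_b):
    """Compare two Pulp resource hrefs by relative path only,
    ignoring protocol, hostname, and port."""
    return urlparse(href_a).path == urlparse(href_b).path

same_resource(
    "http://pulp.example.com/api/v3/repositories/123abc456/",
    "https://pulp.internal:8443/api/v3/repositories/123abc456/",
)
# → True
```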

On Wed, Jan 3, 2018 at 12:32 PM, David Davis  wrote:

> I’ve been working on and discussing an issue with @asmacdo about filtering
> by id [0] and it’s brought up some questions about how repos, importers,
> etc should be identified in Pulp 3. I think currently we uniquely identify
> object’s by their href field since names are mutable. The main questions
> are: how should things like Katello store ids for objects like repos?
> Moreover, what identifier would a user use to look up an object in the
> CLI—would they pass the object’s href?
>
> I imagine right now that Katello would store an object’s href since we
> don’t expose objects’ ids. But what if the hostname changes or Pulp
> switches from http to https. Then the object's identifier would change.
>
> Similarly, as a CLI user, what could I use to uniquely identify say a
> repo? Supplying the object’s href seems a bit strange I guess since you
> wouldn’t expect hyperlinks in a command line interface. Would CLI users
> deal with names? Those can change though which would be bad for setting up
> things like cron jobs using the CLI.
>
> Any thoughts?
>
> [0] https://pulp.plan.io/issues/3240
>
> David
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] repository version stories

2018-01-03 Thread Dennis Kliban
@bmbouter, @daviddavis, and I have put together a plan for implementing
repository version use cases. The overall design is captured in issue
3209[0]. The individual use cases are captured in the child stories.

Please take a look at these stories and provide feedback ASAP. We'd like to
add most of these stories to the sprint during planning on Friday.


[0] https://pulp.plan.io/issues/3209
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Voting for PUP 4: Code of Conduct

2018-01-08 Thread Dennis Kliban
The voting has closed on PUP 4. Thanks everyone for voting. The PUP has
been adopted. I will set up the email alias that will accept communication
related to the CoC. Once that is working, I'll make a PR to add the CoC
reference to the CONTRIBUTING.md file in the root of the pulp repo.

On Mon, Dec 18, 2017 at 10:32 AM, Dennis Kliban  wrote:

> PUP 1 recommends at least 12 days of voting. The voting on this PUP will
> end after 14 days on December 26th.
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] creating repository version resources using a single REST endpoint

2018-01-08 Thread Dennis Kliban
Enable users to POST to /api/v3/repositories/123abc456/versions/ with one
required parameter 'operation'. This parameter would be an identifier for a
task Pulp would run to create a new version. Any additional parameters
passed in by the API user would be passed along to the task.

pulpcore would provide the 'sync' task and the 'add_remove' task. 'sync'
would accept an 'importer'. 'add_remove' would accept 'remove_content' and
'add_content'.

Each plugin could provide any number of tasks for creating a repository
version.

pulpcore would always create the new repository version, hand it to the
plugin code, and then mark it as complete after the plugin code runs
successfully, alleviating the plugin writer of this concern.

REST API users would always use the same endpoint to create a repository
version. Plugin writers wouldn't have to worry about creating repository
versions and managing the 'complete' state.
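To make the proposal concrete, here is a hypothetical sketch of the dispatch; the task names, registry, and handler are illustrative, not actual pulpcore code:

```python
# Stand-ins for the 'sync' and 'add_remove' tasks pulpcore would provide.
TASKS = {
    "sync": lambda importer: f"new version synced via {importer}",
    "add_remove": lambda add_content=(), remove_content=(): (
        f"added {len(add_content)}, removed {len(remove_content)} units"
    ),
}

def create_version(post_body):
    """Handler sketch for POST /api/v3/repositories/<id>/versions/:
    'operation' selects the task; remaining parameters become its kwargs."""
    params = dict(post_body)
    task = TASKS[params.pop("operation")]
    return task(**params)

create_version({"operation": "sync", "importer": "/importers/1/"})
# → 'new version synced via /importers/1/'
```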

What do you all think?
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] creating repository version resources using a single REST endpoint

2018-01-08 Thread Dennis Kliban
On Mon, Jan 8, 2018 at 2:39 PM, Austin Macdonald 
wrote:

> I like the concept of single REST endpoint that is responsible for all the
> ways to create a RepositoryVersion, but I don't quite understand how this
> would work. Since the endpoint is purely pulpcore, how can the
> RepositoryVersionViewSet import the plugin defined tasks that correspond to
> the action specified by the user? The only way I see is to force plugin
> writers to define all their tasks as methods on the Importer or Publisher,
> which brings us back to the circular import problem.
> https://pulp.plan.io/issues/3074
>
>
Plugin writers would need to define the tasks inside the tasks module of
their django app. pulpcore would then be able to discover the tasks defined
by the plugin at startup. The 'operation' could be namespaced by the
plugin name. Any tasks discovered in pulpcore would have 'pulpcore' prepended
to the operation name, e.g. pulpcore.sync or pulp_rpm.deep_copy.

This would also address the circular import problem by moving the code that
performs a sync outside the Importer. However, this would require the
plugin writer to instantiate an Importer based on an 'href' passed in as an
argument. And only then could the importer be used to drive the API.
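A minimal sketch of the namespaced discovery (module layout and names are hypothetical, not actual pulpcore code):

```python
import types

def discover_tasks(app_label, tasks_module):
    """Collect public callables from a plugin's tasks module under
    names like '<app_label>.<task_name>' (e.g. pulp_rpm.deep_copy)."""
    return {
        f"{app_label}.{name}": obj
        for name, obj in vars(tasks_module).items()
        if callable(obj) and not name.startswith("_")
    }

# a stand-in for a plugin's tasks module
rpm_tasks = types.ModuleType("tasks")
rpm_tasks.deep_copy = lambda **kwargs: "deep copy"

sorted(discover_tasks("pulp_rpm", rpm_tasks))
# → ['pulp_rpm.deep_copy']
```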


> Also, I think it would be a little unusual that the possible actions
> specified in the POST body to a pulpcore endpoint would vary depending on
> the plugin it is being used with. How would we document how to use this
> endpoint?
>
>
The endpoint would have a limited number of operations listed in our hosted
docs. However, the REST API docs on each Pulp installation should be able
to provide the user with a list of all available options.


> On Mon, Jan 8, 2018 at 1:45 PM, Dennis Kliban  wrote:
>
>> Enable users to POST to /api/v3/repositories/123abc456/versions/ with
>> one required parameter 'operation'. This parameter would be an identifier
>> for a task Pulp would run to create a new version. Any additional
>> parameters passed in by the API user would be passed along to the task.
>>
>> pulpcore would provide the 'sync' task and the 'add_remove' task. 'sync'
>> would accept an 'importer'. 'add_remove' would accept 'remove_content' and
>> 'add_content'.
>>
>> Each plugin could provide any number of tasks for creating a repository
>> version.
>>
>> pulpcore would always create the new repository version, hand it to the
>> plugin code, and then mark it as complete after plugin code runs
>> successfully. Alleviating the plugin writer of these concern.
>>
>> REST API users would always use the same end point to create a repository
>> version. Plugin writers wouldn't have to worry about creating repository
>> versions and managing the 'complete' state.
>>
>> What do you all think?
>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] creating repository version resources using a single REST endpoint

2018-01-08 Thread Dennis Kliban
On Mon, Jan 8, 2018 at 2:36 PM, David Davis  wrote:

> How would REST API users discover the possible values for ‘operation’? I
> guess we could put it in the help text for the field.
>
> I’m unsure of the value of having an operation param. I think I prefer the
> idea of just having users supply importer or add/remove_content (but not
> both) or having two separate endpoints.
>
>
The value is that this will be a single endpoint for creating a repository
version. Plugin writers will not need to add REST API endpoints when
implementing a complex manipulation of content in a repository version.
They will simply create a task, and Pulp will dispatch it when a user
requests a new repository version via that task.


>
> David
>
> On Mon, Jan 8, 2018 at 1:45 PM, Dennis Kliban  wrote:
>
>> Enable users to POST to /api/v3/repositories/123abc456/versions/ with
>> one required parameter 'operation'. This parameter would be an identifier
>> for a task Pulp would run to create a new version. Any additional
>> parameters passed in by the API user would be passed along to the task.
>>
>> pulpcore would provide the 'sync' task and the 'add_remove' task. 'sync'
>> would accept an 'importer'. 'add_remove' would accept 'remove_content' and
>> 'add_content'.
>>
>> Each plugin could provide any number of tasks for creating a repository
>> version.
>>
>> pulpcore would always create the new repository version, hand it to the
>> plugin code, and then mark it as complete after plugin code runs
>> successfully. Alleviating the plugin writer of these concern.
>>
>> REST API users would always use the same end point to create a repository
>> version. Plugin writers wouldn't have to worry about creating repository
>> versions and managing the 'complete' state.
>>
>> What do you all think?
>>
>>
>> ___
>> Pulp-dev mailing list
>> Pulp-dev@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-dev
>>
>>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] creating repository version resources using a single REST endpoint

2018-01-09 Thread Dennis Kliban
On Mon, Jan 8, 2018 at 4:24 PM, Austin Macdonald 
wrote:

> From a discussion with dkliban I see that this design could work. Plugin
> tasks would be imported to pulpcore with a mechanism similar to the named
> viewsets and serializers.
>
> Pro: plugins would define tasks that follow a consistent interface (sync,
> rich copy, etc)
> Con: plugins would be restricted to tasks that are explicitly part of that
> interface.
>

I am actually proposing a very loose interface. Any parameters passed in
the POST body would be dispatched as arguments for the task. The plugin
author could also define the name they want to use for their task.


> For the docs, I think this puts the endpoint in an awkward position. What
> does each action do? Would the actions be generic enough that we could
> correctly explain each of them as part of the core REST API docs?
>
We should be able to dynamically generate help text needed for our REST
API docs.



> We should also discuss synchronous validation. If a plugin's viewset
> dispatches their own tasks, they can also define their own POST body
> requirements and perform arbitrary synchronous validation. If the
> RepositoryVersionViewset dispatches the task, synchronous validation could
> still be done as part of the interface, with plugins also defining
> something like "sync_validation" which would be run before the task is
> dispatched.
>

I think the interface for validation could be more generic than that. It
would be called validate() and would always accept the same parameters as
the actual task. It's up to the plugin writer to implement it so that the
REST API can validate input before dispatching a task.
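A rough illustration of that validation contract, assuming a hypothetical plugin task paired with a validate() that accepts the exact same parameters (all names here are stand-ins, not real Pulp code):

```python
def sync(importer, mirror=False):
    """Stand-in for the actual task body."""
    return f"synced with {importer} (mirror={mirror})"

def validate(importer, mirror=False):
    """Accepts exactly the same parameters as the task; raises on bad
    input so the REST layer can reject the request before dispatch."""
    if not importer:
        raise ValueError("importer is required")
    if not isinstance(mirror, bool):
        raise ValueError("mirror must be a boolean")

def handle_post(**params):
    validate(**params)     # synchronous, inside the request cycle
    return sync(**params)  # would be dispatched as an async task in Pulp

out = handle_post(importer="importers/1/", mirror=True)
```

Because validate() mirrors the task signature, the same POST body can be checked synchronously and then forwarded to the task unchanged.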



>
> Overall, I am convinced that this is a viable option, noting that this
> design favors consistency between plugins over flexibility. If the plugin
> viewsets are the ones to dispatch tasks instead, the plugins can do
> whatever they need to, at the cost of consistency between plugins.
>
> On Mon, Jan 8, 2018 at 3:15 PM, Dennis Kliban  wrote:
>
>> On Mon, Jan 8, 2018 at 2:39 PM, Austin Macdonald 
>> wrote:
>>
>>> I like the concept of single REST endpoint that is responsible for all
>>> the ways to create a RepositoryVersion, but I don't quite understand how
>>> this would work. Since the endpoint is purely pulpcore, how can the
>>> RepositoryVersionViewSet import the plugin defined tasks that correspond to
>>> the action specified by the user? The only way I see is to force plugin
>>> writers to define all their tasks as methods on the Importer or Publisher,
>>> which brings us back to the circular import problem.
>>> https://pulp.plan.io/issues/3074
>>>
>>>
>> Plugin writers would need to define the tasks inside the tasks module of
>> their django app. pulpcore would then be able to discover the tasks defined
>> by the plugin at startup. The 'operation' could be name spaced by the
>> plugin name. Any tasks discovered in pulpcore would have pulpcore prepended
>> to the operation name. e.g.: pulpcore.sync or pulp_rpm.deep_copy
>>
>> This would also address the circular import problem by moving the code
>> that performs a sync outside the Importer. However, this would require the
>> plugin writer to instantiate an Importer based on an 'href' passed in as an
>> argument. And only then could the importer be used to drive the API.
>>
>>
>>> Also, I think it would be a little unusual that the possible actions
>>> specified in the POST body to a pulpcore endpoint would vary depending on
>>> the plugin it is being used with. How would we document how to use this
>>> endpoint?
>>>
>>>
>> The endpoint would have a limited number of operations listed in our
>> hosted docs. However, the rest API docs on each Pulp installation should be
>> able to provide the user with a list of all available options.
>>
>>
>>> On Mon, Jan 8, 2018 at 1:45 PM, Dennis Kliban 
>>> wrote:
>>>
>>>> Enable users to POST to /api/v3/repositories/123abc456/versions/ with
>>>> one required parameter 'operation'. This parameter would be an identifier
>>>> for a task Pulp would run to create a new version. Any additional
>>>> parameters passed in by the API user would be passed along to the task.
>>>>
>>>> pulpcore would provide the 'sync' task and the 'add_remove' task.
>>>> 'sync' would accept an 'importer'. 'add_remove' would accept
>>>> 'remove_content' and 'add_content'.
>>>>
>>>> Each pl

Re: [Pulp-dev] Repository Versions stories update

2018-01-10 Thread Dennis Kliban
On Tue, Jan 9, 2018 at 5:30 PM, Austin Macdonald  wrote:

> In the RepositoryVersions discussion this morning, we identified some
> issues that needed to be updated to the current design. The issue changes
> have been made and they are ready for all to review. We will discuss adding
> them to the sprint in another meeting tomorrow.
>
> https://pulp.plan.io/issues/3226
> https://pulp.plan.io/issues/3173
> https://pulp.plan.io/issues/3234
> https://pulp.plan.io/issues/3074
>
>
We added the 4 stories above to the sprint. We also groomed and added to
the sprint:

https://pulp.plan.io/issues/3225
https://pulp.plan.io/issues/3260

We closed https://pulp.plan.io/issues/3224 as a duplicate of
https://pulp.plan.io/issues/3074 (already on sprint 31)

Next steps:
- @dkliban will write up a technical plan on issue 3209
- @daviddavis will schedule a meeting for next week for us to discuss
https://pulp.plan.io/issues/3186/
- @asmacdo will implement changes for issues 3074 and 3260
- anyone else that is ready to work on a new story is encouraged to
work on stories related to repository versions


> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] 3.0-dev docs builders were broken due to sphinx version

2018-01-10 Thread Dennis Kliban
The docs for 3.0-dev branch were failing to build because of a new version
of sphinx. To avoid the problem temporarily, I pinned the version to 1.6.5
in our jenkins jobs[0]. We should do the same for the dev environment. I
also created an issue[1] for us to figure out how to fix this.


[0] https://github.com/pulp/pulp-ci/pull/468
[1] https://pulp.plan.io/issues/3275
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] Adding to the Plugin Writers Guide

2018-01-17 Thread Dennis Kliban
I just groomed this issue and added to the sprint.

On Mon, Jan 15, 2018 at 2:43 PM, Brian Bouterse  wrote:

> In preparation for the upcoming plugin writer workshops, I've been looking
> at our existing plugin writer's guide [0]. It's got a lot of necessary,
> nuts and bolts info in there which is great.
>
> I want to add a conceptual introduction to writing a plugin to help a
> plugin writer understand what kinds of decisions they need to make early on
> in plugin design. I've written up some of these changes in this ticket
> (with help from @dkliban): https://pulp.plan.io/issues/3284
>
> I need to work on this in prep for a presentation next week, so I'm asking
> that it be brought onto the sprint via email. If someone can groom+add that
> would be great. If others want something to be done (or not), just leave
> some feedback on the ticket or the resulting PR.
>
> [0]: https://pulp.plan.io/issues/3284
>
> Thanks!
> Brian
>
> ___
> Pulp-dev mailing list
> Pulp-dev@redhat.com
> https://www.redhat.com/mailman/listinfo/pulp-dev
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] publishing and repository versions

2018-01-17 Thread Dennis Kliban
We need to decide where the publish API endpoint should live. We also want
to confirm that we don't want to have relationships between repositories
and importers or publishers.

Current design
-

The publish API is at /api/v3/publications/ endpoint. This endpoint accepts
a publisher. pulpcore dispatches a task defined on the publisher called
'publish'. The publish task publishes the latest version of the repository
associated with the publisher.

The sync API is at /api/v3/importers//sync endpoint. The endpoint takes
no POST parameters. pulpcore dispatches a sync task defined by the
importer. The task creates a new version of the repository associated with
the importer.


Proposed design


The publish API should mirror the sync API. The
/api/v3/publisher//publish/ endpoint should be used to publish with a
publisher. This endpoint should accept a repository version to publish or a
repository to publish. No association between a publisher and a repository
should exist.

The sync API should remain at /api/v3/importers//sync. It should also
accept a repository as a parameter. No relationship should exist between an
importer and a repository.
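A sketch of the symmetric parameter handling the proposal implies: publish (and sync) would accept either a repository version or a repository, resolving the latter to its latest version. The helper names below are hypothetical:

```python
def latest_version(repository):
    # Stand-in for looking up the newest RepositoryVersion of a repo;
    # the "versions/3/" suffix is purely illustrative.
    return f"{repository}versions/3/"

def publish(publisher, repository_version=None, repository=None):
    """Mirror of the proposed publish endpoint: the caller supplies
    exactly one of repository_version or repository."""
    if (repository_version is None) == (repository is None):
        raise ValueError("supply exactly one of repository_version "
                         "or repository")
    if repository_version is None:
        repository_version = latest_version(repository)
    return f"publication of {repository_version} by {publisher}"

pub = publish("publishers/file/1/", repository="repositories/42/")
```

Keeping the lookup inside the endpoint is what removes the need for a stored publisher-to-repository association.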
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


Re: [Pulp-dev] publishing and repository versions

2018-01-19 Thread Dennis Kliban
After discussion on a video call, we agreed to the proposed plan. As a
result we are adding https://pulp.plan.io/issues/3221 and
https://pulp.plan.io/issues/3296 to the sprint.

On Wed, Jan 17, 2018 at 1:55 PM, Dennis Kliban  wrote:

> We need to decide where the publish API endpoint should live. We also want
> to confirm that we don't want to have relationships between repositories
> and importers or publishers.
>
> Current design
> -
>
> The publish API is at /api/v3/publications/ endpoint. This endpoint
> accepts a publisher. pulpcore dispatches a task defined on the publisher
> called 'publish'. The publish task publishes the latest version of the
> repository associated with the publisher.
>
> The sync API is at /api/v3/importers//sync endpoint. The endpoint
> takes no POST parameters. pulpcore dispatches a sync task defined by the
> importer. The task creates a new version of the repository associated with
> the importer.
>
>
> Proposed design
> 
>
> The publish API should mirror the sync API. The 
> /api/v3/publisher//publish/
> endpoint should be used to publish with a publisher. This endpoint should
> accept a repository version to publish or a repository to publish. No
> association between a publisher and a repository should exist.
>
> The sync API should remain at /api/v3/importers//sync. It should also
> accept a repository as a parameter. No relationship should exist between an
> importer and a repository.
>
>
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev


[Pulp-dev] pulp 3 distributor use cases

2018-02-01 Thread Dennis Kliban
I've updated the MVP use cases for the Distributors. The diff is here[0]. I
removed the use case of a distributor exporting a Repository Version. The
idea is to give users a single way to export repository versions. Users
will first create a publication and then use a distributor to export that
publication.
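A toy sketch of that two-step flow, with invented names standing in for the eventual REST resources:

```python
def create_publication(publisher, repository_version):
    # Step 1: a publication captures the published form of one
    # repository version.
    return {"publisher": publisher, "version": repository_version}

def distribute(distributor, publication):
    # Step 2: a distributor exports an existing publication; it no
    # longer exports repository versions directly.
    return f"{distributor} serving {publication['version']}"

publication = create_publication("publishers/1/",
                                 "repositories/42/versions/3/")
served = distribute("distributors/web/1/", publication)
```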

Please reply to this thread with any suggestions for the use cases related
to Distributors.

I'd like to write up stories for these use cases early next week, so the
work can be merged by Feb 15th.

[0]
https://pulp.plan.io/projects/pulp/wiki/Pulp_3_Minimum_Viable_Product/diff?utf8=%E2%9C%93&version=143&version_from=142&commit=View+differences

Thanks,
Dennis
___
Pulp-dev mailing list
Pulp-dev@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-dev

