Re: [Pulp-list] Using Pulp to "merge" multiple RPM repos into one?

2015-09-14 Thread Nick Coghlan
On 11 September 2015 at 23:09, Michael Hrivnak <mhriv...@redhat.com> wrote:
> I think your plan is spot-on. It usually makes sense to have a 1-1 mapping
> of remote repos to pulp repos, and to keep the pulp repo as a simple mirror
> of that remote repo. From there, you can copy out of the pulp-hosted mirrors
> to compose new repos with whatever mix of content you like.

I've even come up with names I like for the RepoFunnel data model:
TrackingRepo and MergeRepo, and then the funnels are an N:1 mapping of
TrackingRepo event listeners to the target merge repo.

One feature that Pulp doesn't have yet that could be valuable for this
use case is the notion of a "metadata only" repo, where we don't
actually download the artifacts themselves, but instead store just the
repo metadata, and the original *URLs* for the artifacts.
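Sketched in Python, the data model I have in mind looks something like
this (names and structure are purely illustrative, not actual Pulp or
RepoFunnel APIs):

```python
class TrackingRepo:
    """Mirrors one remote repo; a "metadata only" repo keeps just unit
    names and artifact URLs rather than downloading the artifacts."""
    def __init__(self, repo_id, feed_url, metadata_only=False):
        self.repo_id = repo_id
        self.feed_url = feed_url
        self.metadata_only = metadata_only
        self.units = {}       # unit name -> artifact URL
        self.listeners = []   # called after every sync

    def sync(self, remote_units):
        self.units.update(remote_units)
        for listener in self.listeners:
            listener(self)

class MergeRepo:
    def __init__(self, repo_id):
        self.repo_id = repo_id
        self.units = {}

def funnel(tracking_repos, merge_repo):
    """N:1 mapping: each tracking repo gets a listener that copies its
    units into the shared merge repo whenever it syncs."""
    def copy_units(repo):
        merge_repo.units.update(repo.units)
    for repo in tracking_repos:
        repo.listeners.append(copy_units)

# Two hypothetical COPR mirrors funnelled into one merge repo
copr_a = TrackingRepo("copr-a", "https://example.com/copr-a/", metadata_only=True)
copr_b = TrackingRepo("copr-b", "https://example.com/copr-b/")
merged = MergeRepo("merged")
funnel([copr_a, copr_b], merged)
copr_a.sync({"foo-1.0.rpm": "https://example.com/copr-a/foo-1.0.rpm"})
copr_b.sync({"bar-2.0.rpm": "https://example.com/copr-b/bar-2.0.rpm"})
print(sorted(merged.units))  # ['bar-2.0.rpm', 'foo-1.0.rpm']
```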

With appropriate publisher plugins, RepoFunnel could then be used as an
input filter for an object storage based system like pinrepo:
https://github.com/pinterest/pinrepo

Regards,
Nick.

>
> Michael
>
> On Fri, Sep 11, 2015 at 5:42 AM, Nick Coghlan <ncogh...@gmail.com> wrote:
>>
>> Hi folks,
>>
>> As part of a development workflow idea for the Fedora Environments &
>> Stacks working group [1], I'm looking to build a service that lets
>> people select multiple COPR repos, and have them automatically
>> integrated into a single downstream repo.
>>
>> As a starting point, I'm aiming to build the simplest possible proof
>> of concept: take two existing COPR repos, and configure Pulp to
>> download and republish all of their content as a single combined repo.
>>
>> I mistakenly thought I could do this just by adding multiple importers
>> to a single Pulp repository, but discovered today that Pulp doesn't
>> actually support doing that - the importer:repository mapping is 1:1.
>> Finding out I didn't know Pulp's capabilities as well as I thought
>> made me realise I should ask here for advice before proceeding further
>> :)
>>
>> My current thinking is that my architecture will need to look something
>> like:
>>
>> 1. For any COPR repo I want to merge, configure a local mirror in Pulp
>> that imports the content from that repo. These would be system
>> managed, so there's only ever one local mirror per remote repo.
>> 2. For each funnel, configure a dedicated target repo, and create
>> event listeners on the relevant mirror repos that trigger a content
>> unit copy whenever the mirror repos are updated
>>
>> Does that general approach sound reasonable? Are there simpler
>> alternatives that I've missed?
>>
>> Regards,
>> Nick.
>>
>> [1]
>> https://fedoraproject.org/wiki/Env_and_Stacks/Projects/SoftwareComponentPipeline
>>
>> --
>> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>>
>> ___
>> Pulp-list mailing list
>> Pulp-list@redhat.com
>> https://www.redhat.com/mailman/listinfo/pulp-list
>
>



-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


[Pulp-list] Using Pulp to "merge" multiple RPM repos into one?

2015-09-11 Thread Nick Coghlan
Hi folks,

As part of a development workflow idea for the Fedora Environments &
Stacks working group [1], I'm looking to build a service that lets
people select multiple COPR repos, and have them automatically
integrated into a single downstream repo.

As a starting point, I'm aiming to build the simplest possible proof
of concept: take two existing COPR repos, and configure Pulp to
download and republish all of their content as a single combined repo.

I mistakenly thought I could do this just by adding multiple importers
to a single Pulp repository, but discovered today that Pulp doesn't
actually support doing that - the importer:repository mapping is 1:1.
Finding out I didn't know Pulp's capabilities as well as I thought
made me realise I should ask here for advice before proceeding further
:)

My current thinking is that my architecture will need to look something like:

1. For any COPR repo I want to merge, configure a local mirror in Pulp
that imports the content from that repo. These would be system
managed, so there's only ever one local mirror per remote repo.
2. For each funnel, configure a dedicated target repo, and create
event listeners on the relevant mirror repos that trigger a content
unit copy whenever the mirror repos are updated
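For reference, here's roughly how those two steps would map to the Pulp
v2 REST API as I understand it (endpoint paths and payload shapes are
from memory, so double-check them against the API docs; the server URL
and repo ids are placeholders):

```python
import json

PULP_API = "https://localhost/pulp/api/v2"  # hypothetical local Pulp server

def mirror_repo_payload(repo_id, feed_url):
    """Step 1: a system-managed mirror repo, one per remote COPR repo."""
    return {
        "id": repo_id,
        "importer_type_id": "yum_importer",
        "importer_config": {"feed": feed_url},
    }

def copy_units_payload(source_repo_id):
    """Step 2: the body for the merge repo's associate action, which an
    event listener would POST after each mirror sync."""
    return {"source_repo_id": source_repo_id}

# The URLs these payloads would be POSTed to (no live calls made here):
create_url = PULP_API + "/repositories/"
copy_url = PULP_API + "/repositories/merged/actions/associate/"
print(json.dumps(mirror_repo_payload("copr-a", "https://example.com/copr-a/")))
print(json.dumps(copy_units_payload("copr-a")))
```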

Does that general approach sound reasonable? Are there simpler
alternatives that I've missed?

Regards,
Nick.

[1] 
https://fedoraproject.org/wiki/Env_and_Stacks/Projects/SoftwareComponentPipeline

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Synchronize Git Repositories: Crazy?

2015-09-05 Thread Nick Coghlan
On 4 September 2015 at 01:55, Randy Barlow <rbar...@redhat.com> wrote:
> +1 I think this is a great idea. I was thinking a bit about it, and it
> occurred to me that repo groups might be a nice data structure to do this
> with. We are hoping to move towards strongly typed repos, but nothing says
> that the repos in a repo group have to be the same type. If Pulp introduced
> a way to promote repo groups (similar to repo promotion), I think that would
> go a long way towards helping you accomplish this idea.

As a slight variant on this idea, for an S2I image, there's actually a
trio of components at the top: your source repo, your builder image,
and the topmost image in the runtime layer stack (which then brings in
the rest of the layers).

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Local Pulp server for client app development?

2015-08-27 Thread Nick Coghlan
On 27 August 2015 at 12:57, Nick Coghlan <ncogh...@gmail.com> wrote:
> On 26 August 2015 at 16:56, Nick Coghlan <ncogh...@gmail.com> wrote:
>> Unfortunately, I still end up stuck "waiting for mongodb". I even tried
>> "sudo setenforce 0" and still get stuck there. I'll keep digging :)
>
> Getting a lot closer now. Another key piece of the puzzle was finding
> this bug regarding issues with container linking in Fedora's docker
> 1.7.1 RPM: https://bugzilla.redhat.com/show_bug.cgi?id=1244124
>
> Updating to the docker 1.8.1 RPM in Fedora 22's testing repo resolved that.
>
> The customised version of the launch script I'm now using is
> https://github.com/ncoghlan/repofunnel/blob/master/_localdev/start_pulp.sh
> (that has the changes to sprinkle :Z on all the mount commands, as
> well as attempting to make the script still runnable when the
> containers all exist, but aren't currently running)
>
> The last remaining issue appears to be the beat container failing to
> launch, getting a permission denied error when it tries to write out
> /var/lib/pulp/celery/celerybeat.pid. I haven't started digging into
> that one yet.

After the other containers also started failing, I tried "setenforce
0" again with this version of the script, and it looks like the :Z
suffix isn't actually giving the different containers the permissions
they need to share the host directories. Things might work better with
an exported cross-container volume mount, rather than having a storage
directory on the host.

However, for now, I'm just going to run without SELinux and get the
initial repofunnel demo working.

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Local Pulp server for client app development?

2015-08-26 Thread Nick Coghlan
On 24 August 2015 at 08:56, Nick Coghlan <ncogh...@gmail.com> wrote:
> 4. After the failed run with a non-absolute path, it's now stuck at
> "waiting for mongodb", and Ctrl-C doesn't work to terminate the
> script. The current output is:
>
> ===
> Launching in /home/ncoghlan/fedoradevel/_storage/pulp
> db
> db already exists
> qpid
> qpid already exists
> chown: changing ownership of '/var/lib/pulp': Permission denied
> cp: cannot create directory '/var/lib/pulp/celery': Permission denied
> cp: cannot create directory '/var/lib/pulp/published': Permission denied
> cp: cannot create directory '/var/lib/pulp/static': Permission denied
> cp: cannot create directory '/var/lib/pulp/uploads': Permission denied
> cp: cannot create directory '/etc/pulp/content': Permission denied
> cp: cannot create regular file '/etc/pulp/repo_auth.conf': Permission denied
> cp: cannot create directory '/etc/pulp/server': Permission denied
> cp: cannot create regular file '/etc/pulp/server.conf': Permission denied
> cp: cannot create directory '/etc/pulp/vhosts80': Permission denied
> cp: cannot create regular file '/etc/pki/pulp/ca.crt': Permission denied
> cp: cannot create regular file '/etc/pki/pulp/ca.key': Permission denied
> cp: cannot create directory '/etc/pki/pulp/content': Permission denied
> cp: cannot create regular file '/etc/pki/pulp/rsa.key': Permission denied
> cp: cannot create regular file '/etc/pki/pulp/rsa_pub.key': Permission denied
> waiting for mongodb
> waiting for mongodb
> ...
> ===

I tracked down the culprit for the permission denied errors, and
they're SELinux related:
http://stackoverflow.com/questions/24288616/permission-denied-on-accessing-host-directory-in-docker

Mounting the volumes with the :Z suffix allowed the container to
access them appropriately.

Unfortunately, I still end up stuck "waiting for mongodb". I even tried
"sudo setenforce 0" and still get stuck there. I'll keep digging :)

Cheers,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Local Pulp server for client app development?

2015-08-26 Thread Nick Coghlan
On 26 August 2015 at 16:56, Nick Coghlan <ncogh...@gmail.com> wrote:
> Unfortunately, I still end up stuck "waiting for mongodb". I even tried
> "sudo setenforce 0" and still get stuck there. I'll keep digging :)

Getting a lot closer now. Another key piece of the puzzle was finding
this bug regarding issues with container linking in Fedora's docker
1.7.1 RPM: https://bugzilla.redhat.com/show_bug.cgi?id=1244124

Updating to the docker 1.8.1 RPM in Fedora 22's testing repo resolved that.

The customised version of the launch script I'm now using is
https://github.com/ncoghlan/repofunnel/blob/master/_localdev/start_pulp.sh
(that has the changes to sprinkle :Z on all the mount commands, as
well as attempting to make the script still runnable when the
containers all exist, but aren't currently running)

The last remaining issue appears to be the beat container failing to
launch, getting a permission denied error when it tries to write out
/var/lib/pulp/celery/celerybeat.pid. I haven't started digging into
that one yet.

Regards,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Local Pulp server for client app development?

2015-08-23 Thread Nick Coghlan
On 22 August 2015 at 09:49, Michael Hrivnak <mhriv...@redhat.com> wrote:
> Nick,
>
> I'm glad you made it onto the list!
>
> The blog post about trying pulp with docker is still current. It's a
> convenient way to experiment with a non-production-quality deployment.

OK, I've tried running through this on F22 now:

1. "sudo source ..." fails with "Command not found", but "sudo sh ..."
appears to work

2. It would be helpful if the script explicitly checked if the docker
daemon was running and bailed out if not. As a relative Docker noob,
it's hard to get from "Post
http:///var/run/docker.sock/v1.19/containers/create?name=db: dial unix
/var/run/docker.sock: no such file or directory. Are you trying to
connect to a TLS-enabled daemon without TLS?" to the correct
explanation "The Docker daemon isn't running"

3. It would be helpful if the script automatically converted a
supplied relative path to an absolute path to avoid "Error response
from daemon: cannot bind mount volume:
_storage/pulp/var/log/httpd-pulpapi volume paths must be absolute."

4. After the failed run with a non-absolute path, it's now stuck at
"waiting for mongodb", and Ctrl-C doesn't work to terminate the
script. The current output is:

===
Launching in /home/ncoghlan/fedoradevel/_storage/pulp
db
db already exists
qpid
qpid already exists
chown: changing ownership of '/var/lib/pulp': Permission denied
cp: cannot create directory '/var/lib/pulp/celery': Permission denied
cp: cannot create directory '/var/lib/pulp/published': Permission denied
cp: cannot create directory '/var/lib/pulp/static': Permission denied
cp: cannot create directory '/var/lib/pulp/uploads': Permission denied
cp: cannot create directory '/etc/pulp/content': Permission denied
cp: cannot create regular file '/etc/pulp/repo_auth.conf': Permission denied
cp: cannot create directory '/etc/pulp/server': Permission denied
cp: cannot create regular file '/etc/pulp/server.conf': Permission denied
cp: cannot create directory '/etc/pulp/vhosts80': Permission denied
cp: cannot create regular file '/etc/pki/pulp/ca.crt': Permission denied
cp: cannot create regular file '/etc/pki/pulp/ca.key': Permission denied
cp: cannot create directory '/etc/pki/pulp/content': Permission denied
cp: cannot create regular file '/etc/pki/pulp/rsa.key': Permission denied
cp: cannot create regular file '/etc/pki/pulp/rsa_pub.key': Permission denied
waiting for mongodb
waiting for mongodb
...
===

5. Doing "sudo docker rm -fv <hash>" for all of the running containers
and re-running the launch script still gets stuck "waiting for mongodb"
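On point 3, a small sketch of one way the script could normalise a
user-supplied path before handing it to docker (a hypothetical helper,
not the actual launch script):

```shell
# Normalise a possibly-relative storage path before passing it to docker,
# which requires absolute paths for volume mounts.
storage_dir="${1:-_storage/pulp}"
case "$storage_dir" in
    /*) ;;                                   # already absolute, leave alone
    *) storage_dir="$(pwd)/$storage_dir" ;;  # prepend the working directory
esac
echo "$storage_dir"
```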

Regards,
Nick.

-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


[Pulp-list] Local Pulp server for client app development?

2015-08-21 Thread Nick Coghlan
[3rd attempt, as I wasn't subscribed at all the first time, and hadn't
confirmed my subscription the second...]

Hi folks,

I'd like to run up a Pulp server locally on Fedora 22 in order to work
on a separate application that makes it easy to consolidate a selected
subset of COPR repositories into a single Pulp RPM repo, and configure
Pulp to keep them updated automatically.

The example Vagrantfile assumes I want to work on Pulp itself - I
don't, I just want a local Pulp installation to interact with.
However, I'm not clear on which parts of that I can remove while still
getting a working Pulp instance at the end of the process.

Digging around to see if there was a current Nulecule app definition
for Pulp brought me to
http://www.pulpproject.org/2015/05/21/use-docker-to-try-pulp/

Is the latter still current? There were also a couple of completed
Trello cards suggesting there *was* a Nulecule app definition for Pulp
available, but they were marked as completed without any reference to
where the work had been done, or instructions on how to run the
result.

Regards,
Nick.

--
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


-- 
Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] What is the best way to script our authentication?

2013-11-10 Thread Nick Coghlan
On 11/07/2013 12:18 AM, Caoilte O'Connor wrote:
> Hi Michael,
>
> Thanks for responding. We are using the command-line interface but we'd
> rather not script the password in plain text ("pulp-admin login -u
> yourusernamehere -p yourpasswordhere") every time and we can't reliably
> set it up in advance as the SSL cert expires after 1 week.
>
> Is there any way to configure how long the cert lasts?

Back when I was working on PulpDist (alas, now sadly neglected for more
than a year and in dire need of an update to support Pulp v2+), I
modified the server to support Kerberos (more accurately, the
REMOTE_USER attribute, which I then combined with mod_auth_kerb) and
then used some custom client side scripts to support Kerberos login
(https://git.fedorahosted.org/cgit/pulpdist.git/tree/src/pulpdist/core/pulpapi.py#n292
and
https://git.fedorahosted.org/cgit/pulpdist.git/tree/src/pulpdist/cli/commands.py#n77).

The server side support for mod_auth_kerb was merged a while ago, but it
would be nice if the client could be updated to support Kerberos
authentication through an alternative REST API entry point.
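For anyone exploring the same approach, the server side is just standard
mod_auth_kerb httpd configuration along these lines (the protected
location, realm, and keytab path below are placeholders; adjust to your
deployment):

```apache
<Location /pulp/api>
    AuthType Kerberos
    AuthName "Pulp Kerberos Login"
    KrbMethodNegotiate On
    KrbMethodK5Passwd Off
    KrbAuthRealms EXAMPLE.COM
    Krb5KeyTab /etc/httpd/conf/httpd.keytab
    Require valid-user
</Location>
```

With that in place, mod_auth_kerb sets REMOTE_USER to the authenticated
principal, which is what the server side support keys off.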

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

Testing Solutions Team Lead
Beaker Development Lead (http://beaker-project.org/)

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Added migration 0001_bind_additions

2012-11-05 Thread Nick Coghlan
On 11/06/2012 12:04 AM, Jay Dobies wrote:
> On 11/05/2012 09:01 AM, Randy Barlow wrote:
>> On Mon, 05 Nov 2012 03:32:33 -0500, Lukas Zapletal <lzap+...@redhat.com>
>> wrote:
>>> How do we make this happen in our katello-upgrade script? I suppose to
>>> call something, pulp-migrate?
>>
>> Yeah, except that we renamed pulp-migrate to pulp-manage-db. It will do
>> the migration as well as load the types.
>
> Lukas, this is what you'll have instead of the pulp-server init step.
> That init call was calling pulp-migrate under the covers. We've removed
> pulp-server and this script should just be called directly.

Heh, I know some sysadmins that will be happy about the pulp-server
pseudo-service going away :)

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

Python Applications Team Lead
Beaker Development Lead (http://beaker-project.org/)
GlobalSync Development Lead (http://pulpdist.readthedocs.org)

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] How to write migrations for an external Pulp package

2012-11-04 Thread Nick Coghlan
On 11/02/2012 10:25 PM, Jay Dobies wrote:
 On 11/02/2012 05:22 AM, Nick Coghlan wrote:
 On 11/01/2012 11:28 PM, Randy Barlow wrote:
 This will be officially documented soon, but feel free to ask any
 questions you may have about this in the meantime.

 Dare I hope that we may one day see a similar scheme for registering
 importer and distributor plugins? :)
 
 It's already there, using python's entry point concept in setup.py.
 It'll be in our developer guide when we finish it up, but Mike can give
 you more details in the interim (or you can look at
 https://github.com/pulp/pulp_puppet/tree/master/pulp_puppet_plugins
 which does this already).

Cool. It's likely going to be a while before we get the free dev cycles
to do the 1.x -> 2.x migration in PulpDist, but I'm definitely looking
forward to dropping assorted somewhat ugly workarounds once we do.

The upside of working against the alpha APIs is that a lot of my
complaints were addressed in the officially supported APIs. The downside
is that the stuff I wrote against the alpha APIs needs to be ported
before we can upgrade :)

> Send your warm fuzzies to Mike Hrivnak for this  :)

Hmm, no emoticon for "warm fuzzies", so a "Huzzah!" will have to do: \o/

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

Python Applications Team Lead
Beaker Development Lead (http://beaker-project.org/)
GlobalSync Development Lead (http://pulpdist.readthedocs.org)

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Python Module Header

2012-10-25 Thread Nick Coghlan
On 10/26/2012 01:32 AM, Jason Connor wrote:
> +1 to the utf-8 coding header and the © symbol. Red Hat's default text file
> encoding is utf-8 and I think the header looks professional. I don't know why
> pep-8 says not to use a mechanism they provide, but this is one of the few
> times I disagree with it.

Something people often forget when they say "follow PEP 8" is that it's
primarily the style guide for the *CPython reference implementation*,
and only incidentally the foundation for a lot of other people's Python
style guides. For the standard library, the rule is "use the default
source encoding unless you have a really good reason not to" (which
really only affects the examples used in the test suite to ensure the
encoding cookie support is working correctly).

That doesn't mean that rule makes sense for everyone, and, indeed, we
think standardising on utf-8 as the source encoding is such a good idea
we made it the language default in 3.x ;)

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

Python Applications Team Lead
Beaker Development Lead (http://beaker-project.org/)
GlobalSync Development Lead (http://pulpdist.readthedocs.org)

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Python Module Header

2012-10-25 Thread Nick Coghlan
On 10/26/2012 06:22 AM, Randy Barlow wrote:
> On 10/25/2012 04:08 PM, Randy Barlow wrote:
>> I'd put a mild +0.1 I suppose.
>
> I'll clarify why it's mild. I love UTF-8. I want to use UTF-8 in my
> Python code, and I like that Python 3 is UTF-8.
>
> However, I don't think that it looks nicer in Python 2 land to have this
> ugly Perl-like thing at the top of the file just so we can write the ©
> character. If we were Python 3, I'd be all about having that © though…
> (and ellipses…) (and unicode snowmen ☃…)

FWIW, the "ugly Perl-like thing" is only needed if you want Emacs to
understand it. If you only care about Python and not Vim or Emacs, then
all you should need is something like:

# Encoding: utf-8

Details are in http://www.python.org/dev/peps/pep-0263/

Interestingly, the tutorial only shows the Emacs way (I guess for
simplicity)
(http://docs.python.org/tutorial/interpreter.html#source-code-encoding)

The language reference at least gives the exact regex, but again, only
shows the examples that the editors understand, omitting the simpler
alternatives.
(http://docs.python.org/reference/lexical_analysis.html#encoding-declarations)
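The reason the simpler spelling works is that the declaration is matched
by a regex that only looks for "coding[:=]" somewhere in a comment on the
first or second line. A quick check using (essentially) the pattern from
PEP 263:

```python
import re

# Essentially the encoding declaration pattern described in PEP 263
CODING_RE = re.compile(r"^[ \t\f]*#.*?coding[:=][ \t]*([-_.a-zA-Z0-9]+)")

def declared_encoding(line):
    """Return the declared source encoding, or None if the line has none."""
    match = CODING_RE.match(line)
    return match.group(1) if match else None

print(declared_encoding("# -*- coding: utf-8 -*-"))  # utf-8 (Emacs style)
print(declared_encoding("# Encoding: utf-8"))        # utf-8 (plain comment)
print(declared_encoding("# an ordinary comment"))    # None
```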

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

Python Applications Team Lead
Beaker Development Lead (http://beaker-project.org/)
GlobalSync Development Lead (http://pulpdist.readthedocs.org)

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list

Re: [Pulp-list] Pulp Community Release 2

2012-10-11 Thread Nick Coghlan
On 10/06/2012 06:41 AM, Jay Dobies wrote:
> http://blog.pulpproject.org/2012/10/05/pulp-v2-community-release-2/
>
> It's Friday afternoon and I'm burnt, so no witty project hyping in the
> announcement e-mail. Check out the blog entry above for all of the awesome.

Not sure if you took this down and reposted it, but the correct URL
seems to be:
http://blog.pulpproject.org/2012/10/09/pulp-v2-community-release-2/

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] i18n input

2012-10-03 Thread Nick Coghlan
On 10/03/2012 06:40 AM, Jason Connor wrote:
> Hi All,
>
> Lately we've been struggling with a rash of bugs related to i18n input in
> Pulp. Python 2's unicode support is only so-so and whenever we get non-ascii
> or non-utf-8 encoded strings, we tend to run into trouble (the most common
> problematic encoding seems to be latin-1). Given that Python's str type is
> really just a byte array with some built in smarts, it isn't really possible
> to guess what the encoding might actually be.
>
> To address this issue, I propose that we make string encoding as utf-8 a hard
> requirement on the server. To enforce this, we'll try to decode all strings
> from utf-8 and any failures will get a 400 server response with some sort of
> standardized message: "utf-8 encoded strings only (dummy)", or something
> similar.

+1

Boundary validation is the only way to ensure Unicode sanity in Python 2
(same goes for Python 3, it's just a lot harder to omit it
accidentally). You'll still need to figure out what to do with repos
that already contain non-ASCII entries with an unknown encoding though.
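By way of illustration (shown in modern Python; the Python 2 version
decodes `str` the same way), the boundary check is just:

```python
def require_utf8(raw_bytes):
    """Decode request data at the boundary; anything that isn't valid
    utf-8 gets rejected immediately rather than corrupting state later."""
    try:
        return raw_bytes.decode("utf-8")
    except UnicodeDecodeError:
        # A web app would turn this into a 400 response
        raise ValueError("utf-8 encoded strings only")

print(require_utf8("naïve".encode("utf-8")))   # decodes cleanly
try:
    require_utf8("naïve".encode("latin-1"))    # latin-1 ï is the bare byte 0xEF
except ValueError as exc:
    print(exc)                                 # utf-8 encoded strings only
```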

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] schedule sub-collections

2012-09-11 Thread Nick Coghlan
On 09/11/2012 03:13 PM, Jason Connor wrote:
> Been doing a little thinking, my current model for schedule sub-collections
> kinda sucks.
>
> It's as follows (for the schedule REST APIs defined so far):
>
> /v2/repositories/<repo id>/importers/<importer id>/sync_schedules/
> /v2/repositories/<repo id>/distributors/<distributor id>/publish_schedules/
> /v2/consumers/<consumer id>/unit_install_schedules/
>
> I think the repo importers and distributors sub-collections kinda tripped me
> up. I think the following is a better (more RESTful?) implementation, and I'm
> looking for feedback (thumbs up/thumbs down kinda thing)
>
> /v2/repositories/<repo id>/importers/<importer id>/schedules/sync/
> /v2/repositories/<repo id>/distributors/<distributor id>/schedules/publish/
> /v2/consumers/<consumer id>/schedules/unit_install/
> /v2/consumers/<consumer id>/schedules/unit_update/
> /v2/consumers/<consumer id>/schedules/unit_uninstall/

While I think that's a definite improvement, those first two URLs are
also getting rather deeply nested for a REST API reference. I'd be more
inclined to flatten them out a bit by hooking them directly to the
repository:

/v2/repositories/<repo id>/schedules/sync/
/v2/repositories/<repo id>/schedules/publish/

This also better matches the existing actions API, which lives at the
repo level, not the importer/distributor level.

And so, on that note, have you considered simply creating a parallel
"schedules" tree for each existing "actions" subtree? That would give an
API like:

/v2/repositories/<repo id>/schedules/sync/
/v2/repositories/<repo id>/schedules/publish/
/v2/consumers/<consumer id>/schedules/content/install/
/v2/consumers/<consumer id>/schedules/content/update/
/v2/consumers/<consumer id>/schedules/content/uninstall/

In the spirit of REST navigability, the returned JSON documents for each
schedule entry could even include the equivalent URL and request body
that could be POSTed to achieve the same effect as the scheduled event.
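For example, a returned sync schedule entry might look something like
the following (the field names, schedule id, and document shape here are
purely hypothetical, not Pulp's actual API):

```python
import json

# Hypothetical schedule document that embeds the equivalent on-demand
# action, in the spirit of REST navigability
schedule_entry = {
    "_href": "/v2/repositories/demo/schedules/sync/12345/",
    "schedule": "2012-09-12T00:00:00Z/P1D",  # ISO 8601 repeating interval: daily
    "action": {
        "method": "POST",
        "url": "/v2/repositories/demo/actions/sync/",
        "body": {"override_config": {}},
    },
}
print(json.dumps(schedule_entry, indent=2))
```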

Side note: it would be handy if the REST API docs included a "URL map"
for the full API. Something similar to
http://pulpdist.readthedocs.org/en/latest/webapp.html#rest-api, but
autogenerated from the existing docs. This may require more
sophisticated Sphinx-fu than is used in the current docs, though :(

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] New to pulp and I receive SASL errors trying to set up consumers

2012-08-05 Thread Nick Coghlan
On 08/03/2012 11:00 PM, Jay Dobies wrote:
> On 08/03/2012 08:42 AM, Harm Kroon wrote:
>> I've run into the same issue when I was playing with my reinstalled
>> test setup. Trying to register a CDS on the server gave the same error.
>> Setting auth=no in /etc/qpidd.conf and restarting pulp-server and
>> -cds fixed things for me.
>>
>> (although I'm not sure if that's the desired solution, as I'm also
>> very new to pulp).
>>
>> Grtz, Harm
>
> Harm is right, I ran into this too. Somewhere in 6.x qpid started
> shipping with auth set to on by default. Without configuring Pulp to
> authenticate against qpid, it's not going to work out of box. Turning
> off auth on qpid (which is used both for consumers and CDS instances)
> should resolve those issues.

Ah, that would explain why I haven't run into the problem - I currently
have the qpid integration disabled completely until I get around to
setting it up properly (it's on the todo list! Somewhere...)

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] New to pulp and I receive SASL errors trying to set up consumers

2012-08-02 Thread Nick Coghlan
On 08/03/2012 07:08 AM, Lundberg, Bruce wrote:
> Since things weren't adding up, I decided to clean everything up and start
> over. I unbound the repository from stapatch01, but now I cannot unregister
> any of the consumers, neither from the server nor from either of the
> consumer systems. The error I get from all systems is:
>
> error:  operation failed: AuthenticationFailure: Error in
> sasl_client_start (-1) SASL(-1): generic failure: GSSAPI Error:
> Unspecified GSS failure.  Minor code may provide more information
> (Credentials cache file '/tmp/krb5cc_48' not found)
>
> There are no SASL credential files in /tmp. The pulp-server was restarted
> when I made some firewall changes, but nothing else has changed. I've
> searched the Web, the pulp mail archives, and am asking about on the
> freenode IRC channel. I can't find any information on how to clean this
> up. Any help would be greatly appreciated.

That sounds very odd - as far as I am aware, Pulp 1.1 doesn't support
Kerberos at all (I had to patch my local version to handle it, and it
was a bit of a hack: https://bugzilla.redhat.com/show_bug.cgi?id=831937).

However, that error suggests *something* in the client is trying to log
in with Kerberos and complaining that it can't find a valid ticket. If
you were trying to use PulpDist's custom client I'd understand seeing
that error, but I have no idea how you could get the normal clients to
trigger it (unless 1.x has changed even more than I thought since I last
updated from upstream).

Regards,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Pulp init.d script

2012-07-25 Thread Nick Coghlan
On 07/26/2012 12:25 AM, Jeff Ortel wrote:
> I like this approach.  Having an init.d/systemd script/unit that just
> aggregates other /true/ services has always felt wrong and (as Jay
> mentioned) problematic.  Most users will configure httpd, mongod and
> qpidd to start automatically.  Also, when users need to bounce one of
> the services, it's usually to solve a particular issue that is related
> to the service being restarted.  Using pulp's current init.d script has
> the unwanted side effect of bouncing the other services unnecessarily.

I actually had to patch our copy of Pulp to remove the references to
qpidd - we haven't set up the messaging component yet, so it was
refusing to start because qpidd wasn't available.

You would definitely make our sysadmins happier by ditching the fake
service entirely. For development purposes, a helper script would be
just as useful as the current approach.

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] A different way to setup Pulp for development

2012-06-28 Thread Nick Coghlan
----- Original Message -----
> As far as a script to link plugins, what did you have in mind?  Do you mean
> plugins that are already installed on your system in the normal locations,
> just a way to symlink those to the correct place under the new top level
> directory?  Or, plugins that you were actively developing?  I'm not sure
> how we'd know where to look for those, although we could introduce an
> environment variable set to their location.

I was thinking more of a pulp helper module that I could import into my own 
plugins-dev.py script. However, now that I think about it a bit more, it would 
just be a matter of passing an extra argument to replace the default / target 
directory, so there's no real help needed.

Cheers,
Nick.



Re: [Pulp-list] A different way to setup Pulp for development

2012-06-26 Thread Nick Coghlan
On 06/27/2012 01:37 AM, James Slagle wrote:
 Any feedback is appreciated. So far I've logged in and created and sync'd a
 repo.  Enough to show I basically had it working, but I'm sure there might be
 some issues. 
 
 I'm not looking to merge this into master anytime too soon (master was just
 merged into my branch this morning, fyi).  It's more of a POC to see if it was
 possible.  But, I think this is a valuable effort, and would make getting 
 setup
 to do Pulp development easier.  We in fact have someone else in the community
 who was essentially trying to do the same thing as well.

Being able to test under a non-root Apache would be great. It would be
handy if there was a helper script or module to appropriately symlink in
custom plugins, though.

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane



[Pulp-list] Integrating Pulp with Kerberos authentication and LDAP authorisation

2012-06-26 Thread Nick Coghlan
 proceed as usual

Writing automated tests for this new behaviour may prove to be a bit
tricky, so any advice on that front would be appreciated.

Regards,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane



Re: [Pulp-list] Unit Test Notes

2012-06-13 Thread Nick Coghlan
On 06/13/2012 11:53 PM, Jay Dobies wrote:
 PulpClientTests - Ignores the database completely. In the setUp call it
 will populate a test client context for use in the test, mocking out the
 server and setting up a Recorder instance for the prompt to capture
 anything the prompt would have output to the screen and make it
 available for assertions.

How public is this API?

Currently, the pulpdist CLI and web UI tests are using a real Pulp
server instance and a very permissive mock object (respectively). They
could likely both be simplified a great deal given a supported mock
server API to test against.

Cheers,
Nick.


-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane



Re: [Pulp-list] Agent handlers moved

2012-05-27 Thread Nick Coghlan
On 05/26/2012 05:34 AM, Jay Dobies wrote:
 On 05/24/2012 10:26 PM, Nick Coghlan wrote:
 On 05/25/2012 07:24 AM, Jeff Ortel wrote:
 Agent handlers moved to:

 /etc/pulp/agent/handler/
 /usr/lib/pulp/agent/handler/

 If running out of git - please run:

 # python pulp-dev.py -I
 # rm -rf /etc/pulp/handler
 # rm -rf /usr/lib/pulp/handler
 # service goferd restart

 Is there a summary anywhere of the major changes involved in moving from
 developing against Pulp v1 (technically, 0.0.267) to the current trunk?
 It's OK if there isn't - I knew what I was letting myself in for when I
 started building the initial version of PulpDist against the alpha
 version of the v2 APIs.

 Still, I'll hopefully be embarking on that upgrade soon, so any pointers
 would be helpful.

 
 Not really. You're bleeding edge enough that we don't have migration
 from early-v2 to slightly-later-v2  :)

No worries, I figured that would be the case. I'm looking forward to
being able to delete (or at least deprecate) a bunch of workarounds.

 The API docs are pretty up to date though and should be fairly useful:
 http://pulpproject.org/v2/rest-api/
 
 I know you're writing your own front end but if you're interested the
 user guide is coming along but not yet fully up to speed with current
 functionality:
 http://pulpproject.org/v2/rpm-user-guide/

One of my aims with the upgrade will actually be to throw away as much
of my custom CLI as I can. I only wrote it because the v1 Pulp CLI was
understandably focused on handling v1 repos with minimal generic content
support.

Is it still the plan to support custom CLI plugins for v2 (as described
at https://fedorahosted.org/pulp/wiki/GCCLI)?

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane



Re: [Pulp-list] Agent handlers moved

2012-05-24 Thread Nick Coghlan
On 05/25/2012 07:24 AM, Jeff Ortel wrote:
 Agent handlers moved to:
 
 /etc/pulp/agent/handler/
 /usr/lib/pulp/agent/handler/
 
 If running out of git - please run:
 
 # python pulp-dev.py -I
 # rm -rf /etc/pulp/handler
 # rm -rf /usr/lib/pulp/handler
 # service goferd restart

Is there a summary anywhere of the major changes involved in moving from
developing against Pulp v1 (technically, 0.0.267) to the current trunk?
It's OK if there isn't - I knew what I was letting myself in for when I
started building the initial version of PulpDist against the alpha
version of the v2 APIs.

Still, I'll hopefully be embarking on that upgrade soon, so any pointers
would be helpful.

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Infrastructure Engineering & Development, Brisbane



Re: [Pulp-list] Client Connection API Change

2012-05-16 Thread Nick Coghlan
On 05/16/2012 11:14 PM, Jay Dobies wrote:
 What does this mean for your code? Not much actually. For normal calls
 you still look at response.response_body to see the reply from the
 server. For async calls, instead of calling things like is_postponed or
 was_successful on the response object itself, you call it on the Task
 which is in the response_body. So response.is_running() becomes
 response.response_body.is_running(). I'm a lot happier about this since
 it's the task that's waiting, not the response itself, it's just a
 matter of making sure all of the existing calls have been migrated.

I'm wondering if it's worth streamlining this such that most client code
doesn't need to *care* if it got an immediate response or not. The model
that comes to mind is Twisted's Deferred objects and similar concepts in
the concurrent.futures module and other Promise style APIs.

The way that kind of approach would work is as follows:

- response_body would always be a JSON document (even for calls that
returned a queued async result - in those cases, the response_body
would still contain the JSON task details reported by the async call)
- a new task attribute would *always* be present, even for
synchronous calls. In the latter case, it would be an implementation of
the Task API that is preinitialised with all the info it needs rather
than having to query the server for status updates.

For guaranteed synchronous calls, you would access response_body
directly (and task would always be None).

For potentially asynchronous calls, client code would *always* go
through the new task attribute (which would never be None). If the
response happened to come back immediately, then the interface code
takes care of that by providing an appropriate dummy Task object.

Otherwise you're encouraging client code that has to do things like:

  if isinstance(response.response_body, Task):
      # Handle async case
  else:
      # Handle sync case

It's cleaner if code can do:

- Use response.response_body if you want the received JSON data
- Use response.task if you want the task status
- Use response.task is None to identify synchronous responses

This may mean you need a way to indicate in the JSON response itself
that the content is a reply to a potentially asynchronous request.
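As a rough illustration of the unified model proposed above, the class
and attribute names below are purely hypothetical (this is not Pulp's
actual client API): synchronous responses get a pre-completed dummy
Task, so callers never need to branch on the response type.

```python
# Hypothetical sketch of the proposed unified response model: every
# potentially asynchronous call exposes a Task-like object, even when
# the server answered synchronously. Names are illustrative only.

class CompletedTask:
    """Dummy Task preinitialised with the result of a synchronous call."""
    def __init__(self, result):
        self._result = result

    def is_running(self):
        return False

    def wait(self):
        return self._result

class PendingTask:
    """Task that queries the server for status updates."""
    def __init__(self, task_details, poll):
        self._details = task_details
        self._poll = poll  # callable returning updated task details

    def is_running(self):
        return self._details.get("state") == "running"

    def wait(self):
        # A real implementation would sleep between polls
        while self.is_running():
            self._details = self._poll(self._details["task_id"])
        return self._details

def wrap_response(response_body, asynchronous, poll):
    """Attach a Task to the response so callers never branch on type."""
    if asynchronous:
        return PendingTask(response_body, poll)
    return CompletedTask(response_body)
```

Client code would then always call task methods, regardless of whether
the server happened to answer immediately.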

Regards,
Nick.

-- 
Nick Coghlan
Red Hat Hosted & Shared Services, Brisbane



Re: [Pulp-list] Sphinx References

2012-05-14 Thread Nick Coghlan
On 05/15/2012 05:50 AM, Jay Dobies wrote:
 Something I'm finding more and more useful is creating references
 between page sections in sphinx. References are not namespaced and must
 be unique across the entire sphinx project (keep in mind we have one
 project for REST API docs, a different one for the user guide, etc), but
 that's not too hard to work around.
 
 To create a reference:
 
 .. _repo-create:
 
 That's very specific: Two periods, a space, underscore, name using
 hyphens (not underscores), and a single colon. Any mistake in that exact
 setup and you'll be banging your head against the keyboard wondering why
 the reference doesn't resolve.
 
 To refer to that section, regardless of what page you're in:
 
 :ref:`repo-create`
 
 Again, since they are globally unique you don't need to say which page
 the section is in, just the name of it.
 
 I've started adding them for most calls in the REST API docs and each
 subsection in the user guide as well.

For those interested in experimenting further, here are the full docs
for cross-references:

http://sphinx.pocoo.org/markup/inline.html#cross-referencing-syntax

(Note that the ReST domain mentioned there is the short form of
reStructuredText, *not* the web service REST)

Cheers,
Nick.

-- 
Nick Coghlan
Red Hat Hosted & Shared Services, Brisbane



Re: [Pulp-list] Please Comment: Proposed apache conf change for allowing http yum repository publishing

2012-04-30 Thread Nick Coghlan

On 04/30/2012 10:28 PM, Jay Dobies wrote:

I'm psyched to have this as a repo-level config option. Looks like the
solution will be simple to support for upgrades as well.


+1 from me, too. Being able to host both public and private trees will 
be useful for PulpDist in the long run.


Cheers,
Nick.

--
Nick Coghlan
Red Hat Hosted & Shared Services, Brisbane



Re: [Pulp-list] Sending AMQP messages from Pulp 2.0 plugins

2012-04-22 Thread Nick Coghlan

On 04/20/2012 10:54 PM, Jay Dobies wrote:

I'm not 100% comfortable with arbitrary messages from the plugin on
behalf of the Pulp server (meaning originating from the Pulp server's
connection) and could see an argument that it's solely on the plugin to
add that support if it wants it. That said, I'm also not 100% behind
that statement and will have to give it some more thought. What sorts of
use cases are you thinking about?


I'm only just starting to think this problem through myself. For 
outbound messages, I suspect I'll be completely covered if the sync 
summaries (both success and failure) go out on the message bus. If I 
need to generate other messages based on those summaries, there's no 
reason I need to do it inside the plugin itself. I can't think of a 
reason that I'd want to send a message *during* a sync operation.


For inbound messages, I'm currently thinking I'll have an independent 
process that is listening on the relevant AMQP bus and translates 
relevant messages into the appropriate calls on the Pulp REST API 
(specifically, I want to configure some trees to immediately request a 
new sync operation after being notified that a new tree is available 
from the data source).


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



[Pulp-list] Disable AMQP messages in pulp 0.0.267?

2012-04-19 Thread Nick Coghlan

Hi,

I don't want to hook my Pulp installation (0.0.267) up to a message bus 
(yet), but I can't find anything in the docs about how to switch the 
AMQP features off completely. This means I currently have two problems:


- "service pulp-server start" fails (because it's trying to start a qpid
daemon locally, which doesn't work, and which I won't want it to do even
after the message bus *is* configured properly)
- Several error messages are written to the Pulp log every couple of
minutes complaining that it can't find the AMQP service


This is with send_enabled and recv_enabled set to false in the [events] 
section of pulp.conf. Aside from replacing localhost with the server's 
FQDN, the [messaging] section is also left at its default settings.
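For reference, the settings described correspond to a pulp.conf
fragment along these lines (a sketch only; the key names come from this
thread, but the exact syntax of the 0.0.267 config file is an
assumption):

```ini
# /etc/pulp/pulp.conf (fragment) - hypothetical sketch
[events]
# Both directions of event messaging disabled, as described above
send_enabled: false
recv_enabled: false
```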


Did I miss something in the docs? Is there a way to disable AMQP 
completely such that both of these problems will go away until I get 
around to configuring the message bus properly?


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



[Pulp-list] Sending AMQP messages from Pulp 2.0 plugins

2012-04-19 Thread Nick Coghlan
Asking the question about turning off the message bus integration 
reminded me that I had a couple of questions about any planned AMQP 
integration in the 2.0 plugin APIs.


1. Will there be a mechanism added to the conduit API to send arbitrary 
AMQP messages?


2. Will the sync summary data reported through the conduit API be 
forwarded to the message bus?


(If 1 is true, then the answer to 2 doesn't really matter, but if only 2 
is true then it will affect what I decide to include in the sync summaries)


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



[Pulp-list] Pulp 0.267: Triggering multiple simultaneous sync jobs on a v2 repo

2012-04-11 Thread Nick Coghlan
(I know, I know, 0.267 is old now when it comes to the v2 repos, but 
it's going to be a while before I can move to a more recent version)


A nasty thought occurred to me today - I believe the coordinator for 
conflict resolution on v2 repos isn't fully hooked up yet in 0.267, so 
what happens when two simultaneous sync requests come in for a v2 repo 
in that version?


Is there enough serialisation in place that the second one will block 
waiting for the first one to complete? Or do I need to worry about 
competing sync jobs until I upgrade to a newer version of Pulp where the 
coordinator support has been integrated properly?


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



Re: [Pulp-list] Pulp Beyond 1.0

2012-04-09 Thread Nick Coghlan

On 04/05/2012 10:43 PM, Jay Dobies wrote:

and rely on an external trigger for periodic sync operations.


Scheduled syncs are coming very soon. It's actively being worked on now
and is pretty close to completion, so hopefully we can remove this
outside step from your setup in the near future.


Good to hear. My current focus is getting a PulpDist 0.1.0 release 
together - something that's deployable in the right context, but still 
has quite a few underlying architectural flaws that mean the right 
context is pretty much limited to the way *I* plan to use it.


Once I have that version out the door, I'm hoping the timing will work 
out such that I can update my underlying Pulp version to a new community 
release (ideal) or testing build (acceptable) and start migrating 
everything over to the updated v2 APIs (for the plugins, the client code 
and the REST interfaces).


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



Re: [Pulp-list] Pulp Beyond 1.0

2012-04-04 Thread Nick Coghlan

On 04/05/2012 01:19 AM, Jay Dobies wrote:

When I say the Pulp model, it's not that you're mapping unit metadata
on to a rigid structure. The content type definition gives Pulp some
clues so it can store it properly to make querying optimized, but at the
end of the day the structure and contents of the unit metadata are
pretty much up to the discretion of the plugin developer.


And you can even dial things down to the level that PulpDist currently 
does: bypass large chunks of Pulp altogether and only use the parts you 
need.


While I make use of the REST API to trigger jobs and the sync history 
tracking, the current set of PulpDist plugins actually write directly to 
user-specified filesystem paths, don't store any content metadata in the 
Pulp repo and rely on an external trigger for periodic sync operations.


Longer term, the level of integration with Pulp will increase (e.g. when 
implemented, the delta plugins will likely use the Pulp content storage 
area, all the plugins will start recording some content metadata in the 
repo to indicate what trees are currently available and I'll start using 
Pulp's scheduling engine to trigger automatic updates), but it's pretty 
cool that the plugin model is flexible enough that you can pick and 
choose which parts you want to adopt at any given point in time and 
still have something useful that can be deployed.


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



Re: [Pulp-list] v2 Client Login/Logout

2012-04-04 Thread Nick Coghlan

On 04/05/2012 03:47 AM, Jay Dobies wrote:

In fact, now that you mention it, the better approach would be to rename
the old client to pulp-v1-admin and start calling this pulp-admin
directly. I may try to slide that in soon.


A related question - have there been many changes in the client 
implementation layout? I dug into a few of the existing CLI internals to 
get Pulp server access from the PulpDist web UI and the plugin test 
suite working over OAuth (and playing nice with Django's logging 
configuration), and I assume I'm eventually going to run into an upgrade 
that breaks my customisations (entirely my fault of course, since I 
*know* I'm messing around with undocumented internal details - some with 
underscore prefixed names, no less!).


So I expect to have to rewrite parts of my client-side code [1] at some 
point, I'm just wondering if "at some point" actually means "when I 
upgrade from Pulp 0.267" :)


Regards,
Nick.

[1] 
http://git.fedorahosted.org/git/?p=pulpdist.git;a=blob;f=src/pulpdist/core/pulpapi.py


--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



Re: [Pulp-list] Pulp Beyond 1.0

2012-04-03 Thread Nick Coghlan

On 04/04/2012 12:40 PM, John Morris wrote:

On 04/03/2012 09:15 AM, Jay Dobies wrote:

http://blog.pulpproject.org/2012/04/03/pulp-beyond-1-0/


Looks awesome. I'd adopt pulp today if the remote filters worked. Ha ha!

Just checking, does the v2.0 plan to support 'user-defined content'
still mean software-repository-like content collections, or is pulp's
function of content synch/distribution/deployment/etc. really being
completely abstracted?

For v2.0, I like the idea of a plugin model. I'd write a plugin that
performed extra processing on repos after synching: rebuild the metadata
for filtered remote repos (ok, broken record here, and anyway I recall
pulp might have done that for me anyway when I gave it a spin), and run
repoview.

I'm betting that in v2.0, grinder will cease to exist as a separate
entity and replaced with something new, sleek, pythonic and with real
inherited OO classes. Of course it may live on for awhile in a
deprecated plugin.


Hi John,

If you want to see some of what is possible with the new repository 
model in v2.0, you may want to take a look at PulpDist:

http://pulpdist.readthedocs.org

I'm using Pulp as the back end for an rsync-based mirroring network 
with REST API control over the individual jobs.


One caveat: the client and plugins are currently implemented against a 
pre-alpha version of the Pulp v2 APIs (from 0.267 or so), so I wouldn't 
bet on it working properly with the updated plugin and REST APIs in the 
latest Pulp development releases. It will be a few iterations before 
PulpDist catches back up with the API improvements made in Pulp.


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



Re: [Pulp-list] Pulp Beyond 1.0

2012-04-03 Thread Nick Coghlan

On 04/04/2012 02:12 PM, John Morris wrote:

Hi John,

If you want to see some of what is possible with the new repository
model in v2.0, you may want to take a look at PulpDist:
http://pulpdist.readthedocs.org

I'm using Pulp as the back end for an rsync-based mirroring network
with REST API control over the individual jobs.


Rsyncing what sort of files? RPM repos?

The pulpdist docs do answer my question: pulpdist is used to 'manage
arbitrary directory trees', so really any content. From what I can tell,
the same code to sync repos around could also sync video files around,
and I could write a plugin analogous to my 'repoview' plugin to generate
video thumbnails.


Yeah, I've deliberately written the PulpDist plugins to make as few 
assumptions as possible about the content - they're intended to be a 
"lowest common denominator" kind of deal while still being more 
bandwidth friendly than transferring whole files around all the time.


The closest they get to custom metadata are the PROTECTED files used at 
downstream sites to tell the plugins to leave particular directories 
alone, as well as the STATUS files used at the top level of snapshot 
trees to say when a directory is ready for synchronisation (or has 
already been synchronised).
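The STATUS-file convention can be sketched roughly as follows; the
marker values and the helper names here are assumptions for
illustration, not PulpDist's actual implementation:

```python
# Hedged sketch of the STATUS-file check described above: only sync a
# snapshot tree once its top-level STATUS file says it is ready.
import os

READY_MARKERS = {"READY", "FINISHED"}  # assumed marker contents

def status_ready(status_text):
    """Return True if the STATUS file contents indicate a complete tree."""
    return status_text is not None and status_text.strip() in READY_MARKERS

def tree_ready_for_sync(tree_root):
    """Check the top-level STATUS file of a snapshot tree."""
    status_path = os.path.join(tree_root, "STATUS")
    if not os.path.exists(status_path):
        return False  # no marker yet; tree still being written upstream
    with open(status_path) as f:
        return status_ready(f.read())
```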



One more plugin could tell a local database to start
serving the now-locally-stored video from local content servers instead
of remote ones. (Don't worry, I'm not really talking about putting
pulpdist into anyone's production site. Yet. ;)


Heh, I can hardly complain too much if anyone starts using my unfinished 
code when I've been doing the exact same thing to the Pulp team with 
respect to their v2 REST and plugin APIs. In truth, I don't actually 
mind so long as people have a clear understanding that they'll be 
building on a still unstable foundation :)


FWIW, aside from the updates needed to use the new Pulp APIs properly, 
and possibly switching the command line client over to the new 
pulp-admin plugin architecture, I don't anticipate any more drastic 
changes like last month's complete redesign of the approach to PulpDist 
site configuration.


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



Re: [Pulp-list] v2 REST API Documentation Early Look

2012-03-28 Thread Nick Coghlan

On 03/28/2012 10:29 PM, Jay Dobies wrote:

Hmm, that shouldn't happen for the ordinary ReST files. It can
definitely be a problem when using extensions like autodoc, though.
Still, as you say, rebuilding from scratch is generally pretty fast.


That's where I hit it most, using autodoc. I suppose I just got in the
habit without remembering why.


Yeah, there's a problem where Sphinx only looks at which *doc* files 
have changed since the last docs build, so it has no idea if any of the 
source files that autodoc relies on have changed (since that connection 
is mediated by Python's import system).


As something of a tangent, I just discovered sphinx-apidoc, which may be 
useful for generating developer API reference docs:

http://sphinx.pocoo.org/man/sphinx-apidoc.html (Sphinx 1.1)
https://bitbucket.org/etienned/sphinx-autopackage-script/src (earlier 
versions)


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



Re: [Pulp-list] Minor usability problem with stale cached login credentials

2012-03-25 Thread Nick Coghlan

On 03/23/2012 10:10 PM, Jay Dobies wrote:

On 03/22/2012 09:03 PM, Nick Coghlan wrote:

On 03/23/2012 04:20 AM, Jay Dobies wrote:

This is one of those things that annoyed us in the beginning and after a
while of never getting around to fixing it, we developed blinders and
don't even notice anymore :)

This sprint I'm looking to get login/logout functionality into the v2
client, I'll make sure I don't fall into the same trap again.


If you're replacing the client then I won't worry about raising the bug.
While I'll be using my custom client (which hooks into the v1 client
authentication code) for PulpDist 0.1.0, the new pulp-admin integration
hooks are definitely something I want to look at for 0.2.0.

If you're curious about how my custom Pulp v2 API client currently
works, the docs are here:

http://readthedocs.org/docs/pulpdist/en/latest/cli.html


Nice docs :)


Sphinx + readthedocs is a very nice combo to work with. (I'd like to do 
a proper site for PulpDist at some point, but it's hard to justify the 
time required when it's just me working on it)



It looks like you've reduced the create/add importer stuff to a single
step in init. I'm doing the same thing for the RPM client and ran into a
problem whose solution you might be interested in.

The create repo call is pretty easy to make succeed, there's very little
that can go wrong. The add importer call, on the other hand, has a lot
of potential to fail due to invalid config. When that happens, I was
left with a created repo but no importer on it and no desire to expose
to the user where that distinction lies.


Ah, thanks. My approach to handling that case is to actually use a 
create_or_save_repo operation in the client - if the POST to the 
collection fails, it tries a PUT to the pre-existing repo instead.


If the importer addition fails, the partially initialised repo does get 
left around on the Pulp server, but a subsequent init call should work 
anyway.


This is due to the fact that the init command is intended to handle 
both initialising a site from scratch and updating the configuration of 
an existing site, so it needs to cover the case of overwriting a 
pre-existing repo regardless.


However, there's still assorted dodginess in the way that works (in 
particular, init will never delete repos from a Pulp server, but will 
still clobber their entries in the site metadata), so I have to revisit 
it at some point. For the moment it's in the "good enough for now" bucket.


Either way, assuming the REST API distinguishes between "failed to 
create due to bad config" and "failed to create due to an existing repo 
by that name", a combined call should simplify that part of the client 
quite a bit.
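The create-or-save fallback described above can be sketched like this;
the endpoint paths and the 409 "already exists" status are assumptions
about the v2 REST API, and the HTTP calls are injected so the logic is
independent of any particular client library:

```python
# Hedged sketch of create_or_save_repo: POST to the repository
# collection first, and fall back to PUT on the existing resource if
# the repo id is already taken. Paths and status codes are assumptions.

def create_or_save_repo(repo_id, config, post, put):
    """Try POST /v2/repositories/; on a conflict, PUT the existing repo."""
    status, body = post("/v2/repositories/", dict(config, id=repo_id))
    if status == 409:  # assumed "repo already exists" response
        status, body = put("/v2/repositories/%s/" % repo_id, config)
    return status, body
```

Injecting `post` and `put` also makes the fallback easy to exercise
against stub servers in the test suite.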


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



Re: [Pulp-list] Minor usability problem with stale cached login credentials

2012-03-25 Thread Nick Coghlan

On 03/23/2012 10:18 PM, Jay Dobies wrote:

I forget if I told you, but the set_progress method in the conduit has
been wired up to the tasking subsystem and repo sync has been made to be
an asynchronous call. I'd point you to docs on it, but it's so recently
added that we haven't written them yet. Here's a quick rundown:

Check out the bottom of
https://fedorahosted.org/pulp/wiki/CoordinatorProgrammingGuide The call
report is what's returned from the REST call when you trigger an
asynchronous task.

Inside of there, the progress field will contain what your importer
specified the last time it called set_progress.

I'm pretty sure that the REST serialized call report contains an href
back into the server to request an updated version of the call report.
My code is assembling that URL using '/v2/tasks/%s/' and substituting in
task_id. I poll until the state field is one of the three completed
states: finished, error, cancelled. FYI, suspended is listed on that
page but not currently in use.

This stuff should be in the current QE build
(http://repos.fedorapeople.org/repos/pulp/pulp/dev/testing/, build
0.277). Sorry for the really thin overview, if you have more questions
don't hesitate to ask.


That sounds promising. For the moment I'm just using Build 0.267 with a 
httpd based workaround for sync log access while a job is still running, 
and I will probably stay with that approach for PulpDist 0.1.0, but 
updating Pulp and replacing my current workaround with something more 
robust based on set_progress() is very high on the todo list for 0.2.0.
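The polling loop Jay outlines can be sketched as follows. The endpoint
template and the completed-state names come from this thread; the
`fetch` and `sleep` callables are injected here purely so the sketch
stays library-agnostic and testable:

```python
# Hedged sketch of polling a Pulp v2 task until it completes.
# '/v2/tasks/<id>/' and the state names are taken from this thread.

COMPLETED_STATES = {"finished", "error", "cancelled"}

def wait_for_task(task_id, fetch, sleep, interval=2.0):
    """Poll the task resource until it reaches a completed state."""
    while True:
        report = fetch("/v2/tasks/%s/" % task_id)  # call report as a dict
        if report.get("state") in COMPLETED_STATES:
            return report
        sleep(interval)  # avoid hammering the server between polls
```

A real client would also want a timeout and error handling around the
HTTP call; both are omitted here for brevity.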


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



[Pulp-list] Minor usability problem with stale cached login credentials

2012-03-21 Thread Nick Coghlan
After spending some time trying to figure out why my management client 
was complaining that my server's SSL cert had expired (despite the fact 
I had just regenerated the cert), I eventually realised it was actually 
complaining about the locally cached cert that holds the server login 
details.


Part of that's a display problem in my own management client [1], but 
I'm considering raising an upstream bug on Pulp as well, since the error 
details from Pulp in this case are currently just:


"sslv3 alert certificate expired"

Changing that to something like "Cached login credentials have expired, 
please refresh with 'pulp-admin auth login'" would definitely have saved 
me some time today.


Regards,
Nick.


[1] https://bugzilla.redhat.com/show_bug.cgi?id=805763

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



Re: [Pulp-list] Importer/Distributor API Change

2012-03-11 Thread Nick Coghlan

On 03/10/2012 01:50 AM, Jay Dobies wrote:

The change is that it now returns a tuple of result (bool) and message
(str), where the message should describe what/why it failed.


Thanks!

I've been meaning to get you some feedback on the sync result reporting 
API, but haven't found the time to make it vaguely coherent and work out 
what I should be dealing with at the plugin level and where it would be 
good if the Pulp API could help out.


I'm also still writing to the v2 API as it existed around 0.254 or so, 
so it's also possible there are some new features I'm not using yet.


However, I figure even my half-formed feedback should be somewhat 
useful, so here's a rough version:


- I want to be able to set summary & details even for a *failed* sync 
job. Currently, marking a job as failed requires throwing an exception, 
which means summary and details don't get set in the JSON reply. This 
doesn't work well for PulpDist, since it means I need to have two 
completely different ways of extracting information in the client 
(depending on whether the job was marked as a success or a failure 
at the Pulp level). Or, I do as I do now, and even PulpDist failures are 
marked as a success at the Pulp level :P


- I need to provide users with access to the sync log while the job is 
running (since that's the quickest and easiest way to figure out whether 
an rsync job is genuinely stuck or is just taking a long time). There's 
no native mechanism to support that, so I'm currently considering making 
the sync log a content unit in its own right.


- It would be handy to have an easy way for the client to request just 
the latest successful sync history entry and the latest sync history 
entry. The general query API may already support that (as I said, I'm 
still working within the capabilities of the older API).
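To make the first point above concrete, here is a hypothetical shape
for a sync report that carries summary and details in both the success
and failure cases, so the client needs only one extraction path. All
field names are illustrative, not Pulp's actual report format:

```python
# Hypothetical unified sync report: failure no longer implies a raised
# exception with no summary/details. Field names are assumptions.

def build_sync_report(success, summary, details):
    """Assemble a report dict that looks the same on success and failure."""
    return {
        "success": success,   # explicit flag instead of exception-or-not
        "summary": summary,   # populated for failed syncs too
        "details": details,
    }

def describe_sync(report):
    """Single client-side extraction path regardless of outcome."""
    outcome = "succeeded" if report["success"] else "failed"
    return "sync %s: %s" % (outcome, report["summary"])
```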


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Should we move pic to under src?

2012-02-14 Thread Nick Coghlan

On 02/15/2012 05:35 AM, Jay Dobies wrote:

For anyone who doesn't know, under playpen/webservices there's a module
called pic. The intention is to use it in an interactive python shell
(ipython) to be able to make web service calls against Pulp. It's pretty
handy for debugging stuff outside of the CLI.


Is there anything that needs ipython specifically, or does it work from 
the default Python shell? (I'm guessing the latter, since you suggest 
using it with -c invocation)


Sounds like it would be very handy as a reference point for debugging my 
own calls in to the REST API.


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Errors in controllers

2012-02-12 Thread Nick Coghlan

On 02/11/2012 08:07 AM, Jason L Connor wrote:

This is an absolutely fantastic idea. I've started on the changes to the
web framework. I'm going to put down some base exceptions based on
what's in this email, and I'll present them next week on Tuesday when I
talk about _id v id as well.


While on the general topic of effectively managing and triaging 500 
errors on production web servers, some of the folks over at DISQUS (a 
popular commenting system written in Python) have published an open 
source error collation server (Sentry), along with Python bindings for 
pushing events (Raven).


It's on my radar to have a look at these for PulpDist, but this 
discussion made me realise they might be of interest for Pulp, too.


More info:

http://sentry.readthedocs.org
http://raven.readthedocs.org

Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


[Pulp-list] LDAP, OAuth and unrecognised users

2012-02-06 Thread Nick Coghlan

Hi,

I'm looking at switching the PulpDist web UI over to passing the correct 
user credentials through to Pulp instead of always querying the database 
as a common user (this is a prerequisite to eventually allowing 
read/write access to the Pulp services through the web UI's OAuth 
connection, instead of the current read-only access).


The LDAP auth docs are clear that when you attempt to log in via the 
command line clients, a failed local login will be passed back to the 
LDAP server, with the user being created automatically if the LDAP 
credentials match.


However, neither the LDAP nor the OAuth docs explain what happens if you 
attempt to access a Pulp server that has LDAP configured via OAuth as a 
user that does not exist locally in the Pulp database (yet), but *does* 
exist in LDAP.


Does Pulp handle this automatically? Or will I need to set up a service 
account so that the PulpDist web service can handle the necessary 
creation of passwordless user entries? (For my use case, I already know 
the PulpDist username represents a valid LDAP user, since PulpDist is 
using the relevant LDAP database for its own authentication).


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] REST Musings

2012-01-08 Thread Nick Coghlan

On 01/07/2012 01:36 AM, James Slagle wrote:

As you already mentioned, I think we need to clearly document which APIs
are asynchronous in nature, and those APIs would have standard return codes
that relate to the task resource that is returned.


From a user point of view, having async APIs that consistently returned 
status codes related solely to whether or not the task was queued 
successfully would be just fine (on success, they should include a link 
to the relevant task status URL, but I believe Pulp already does that).


The only particularly painful case is if a particular API is maybe 
sync, maybe async, depending on the input - that gets annoying, since 
it can make it hard for me to correctly factor out the server 
interaction logic on the client side. Two separate APIs (i.e. one that 
is always synchronous, but may block for a long time before responding 
and one that is always asynchronous, even if the reply data is 
immediately available without needing to block) is much easier to handle.
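To illustrate: if every async API reliably returns a task status link, the client side collapses to a single polling helper along these lines (a sketch only - the "state" field and the terminal state names are assumptions for the example, not Pulp's actual task schema):

```python
import time

def wait_for_task(fetch_status, poll_interval=0.01, timeout=5.0):
    """Poll a task-status callable until it reports a terminal state.

    ``fetch_status`` is any zero-argument callable returning a dict with
    a ``state`` key (a hypothetical field name).  Returns the final
    status dict, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status["state"] in ("finished", "error"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("task did not reach a terminal state")

# Simulated responses standing in for repeated GETs of the task URL
_responses = iter([{"state": "running"}, {"state": "running"},
                   {"state": "finished", "result": 42}])
final = wait_for_task(lambda: next(_responses))
```

The same helper then backs every async call in the client, which is exactly the factoring that a sometimes-sync, sometimes-async API makes impossible.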


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Feedback on sync_repo() result reporting for v2 plugins

2011-12-14 Thread Nick Coghlan

On 12/14/2011 07:38 AM, Jay Dobies wrote:

On 12/13/2011 04:28 PM, Nick Coghlan wrote:

On 12/14/2011 06:49 AM, Jay Dobies wrote:

I'm not ignoring this, I'm just so deep into the unit association stuff
that I'm afraid I'd do physical damage to my brain if I were to try to
shift gears and think about this. I'll take a look in the next few days.


No worries - the "jam it all into the exception message" approach is a
tolerable workaround for the moment, since the key requirement is for
admins to be able to see the partial sync logs for jobs that fail, so
they have some chance of figuring out what went wrong.

I just wanted to bring it up as something to look at before the plugin
APIs are declared stable.


Well there's the trick. I'm gonna go the Google route and call them
beta for as long as possible so I can always make these changes and
tell people the APIs were never stable. :)


Heh, it works to my benefit too - it means I can say "this is annoyingly 
difficult" and know that the situation can still be improved.


I figure between the need to support your existing yum-based repos and 
my "let rsync do most of the work" directory tree mirroring, we should be 
able to thrash out something fairly reasonable (especially once I start 
working on the rsync delta file transfer process for snapshot trees that 
will also involve a custom distributor plugin).


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Feedback on sync_repo() result reporting for v2 plugins

2011-12-13 Thread Nick Coghlan

On 12/14/2011 06:49 AM, Jay Dobies wrote:

I'm not ignoring this, I'm just so deep into the unit association stuff
that I'm afraid I'd do physical damage to my brain if I were to try to
shift gears and think about this. I'll take a look in the next few days.


No worries - the "jam it all into the exception message" approach is a 
tolerable workaround for the moment, since the key requirement is for 
admins to be able to see the partial sync logs for jobs that fail, so 
they have some chance of figuring out what went wrong.


I just wanted to bring it up as something to look at before the plugin 
APIs are declared stable.


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] API: Consider to only return JSON parseable results

2011-12-11 Thread Nick Coghlan

On 12/10/2011 07:23 AM, Jay Dobies wrote:

I'm up in the air on what delete should return. The serialized objects
that were deleted? A JSON report with some info on the delete such as
how many items were deleted? I'm open to suggestions.


If the deletes end up being asynchronous as well, then I guess the task 
information might be a valid option.


Otherwise, I don't think it matters all that much, so long as whatever 
*is* returned is valid JSON so that returns can be parsed 
unconditionally in server interaction code. For example, the JSON 
literal "true" would work for that, but a bare Python-style "True" fails.
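The difference is easy to demonstrate with Python's json module:

```python
import json

# The JSON literal ``true`` parses fine, as does any quoted string...
literal = json.loads('true')    # the Python value True
quoted = json.loads('"TRUE"')   # the string "TRUE"

# ...but a bare Python-style ``True`` is not valid JSON, so a client
# that parses every response unconditionally would choke on it.
try:
    json.loads('True')
    parse_failed = False
except json.JSONDecodeError:
    parse_failed = True
```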


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] API: Consider to only return JSON parseable results

2011-12-07 Thread Nick Coghlan

On 12/08/2011 02:59 AM, Jay Dobies wrote:

100% agree. We're taking a much more REST-like approach with our APIs in
the future (truth be told, the current ones are a bit sloppy) where all
of the responses will be JSON parsable in the future.


There are a couple of v2 APIs that don't behave that way yet. As I 
recall (and from looking at my current server interaction code), the 
offenders are at least:

- deleting objects from collections
- the sync_repo action

(I'm not using publish_repo yet, but I assume it has the same problem as 
sync_repo)


For sync_repo, I think it would be useful to return the new sync_history 
entry generated for the sync operation.


When you get around to adding support for asynchronous explicit sync 
requests, it may make sense to put that under a separate URL (e.g. 
repo_id/actions/async/sync_repo) and leave the existing URL as a 
potentially long-running operation. (Then again, it may make more sense 
to be async by default and have the synchronous API use an alternate URL)


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Avoid usage of '/tmp'

2011-12-07 Thread Nick Coghlan

On 12/08/2011 06:17 AM, Jay Dobies wrote:

On 12/07/2011 03:11 PM, John Matthews wrote:

The issue of Grinder writing some data to /tmp/grinder and referring
to it in between runs came up during our SELinux policy rewrite. Dan
Walsh suggested we avoid using '/tmp/grinder' and instead switch to
'/var/run/grinder'. I wanted to share his blog post highlighting this
reason with the team.

http://danwalsh.livejournal.com/11467.html
snippet from above

Daemon developers should follow these rules:

/tmp is for users to store their stuff not for daemons or any process
that is started in the boot process.
If a daemon wants to communicate with a user then he should do it via
/var/run/DAEMON.
If you have a daemon that wants its temporary files to survive a
reboot, consider using /var/cache/DAEMON


Pulp's BZ to fix this: https://bugzilla.redhat.com/show_bug.cgi?id=761173


What about making a new directory in /var/lib/pulp?

I don't want to break the conventions that Dan's mentioning, but we have
to think about situations where space on the root partition isn't
exactly abundant.

Some of the cloud images we've seen have really small root partitions.
Some of the providers I've talked to have differences between the root
volume and the ones they've attached meant to serve Pulp content.

The RHUI installation conventions have been to mount a bunch of space at
/var/lib/pulp for repos. It'd be nice if we could have all of our space
requirements captured in that one case v. having to potentially have
them increase the availability for /var/run/pulp too (not sure the order
of magnitude of how much data grinder uses as temp space).


Both this and the SELinux compatibility question provide a strong 
rationale for using the Pulp-provided working directory in plugins 
rather than using /tmp directly.


I've created a corresponding BZ for PulpDist as well: 
https://bugzilla.redhat.com/show_bug.cgi?id=761257


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Moving a Pulp instance to another server?

2011-11-30 Thread Nick Coghlan

On 11/30/2011 11:43 PM, Jay Dobies wrote:

On 11/30/2011 02:31 AM, Nick Coghlan wrote:

Hi,

I need to migrate a Pulp instance to a different physical machine. Is
there an existing list anywhere of what files need to be migrated to
bring the new instance up with the same data stored?

The configuration files in /etc/pulp obviously need to be saved and
restored, but I'm not sure what needs to be copied to preserve:
- the MongoDB data store
- the actual repo contents

Cheers,
Nick.


I've personally never done it and I'm not sure that list exists.

Repo contents (and all the stuff you'll need for plugins and the like)
are all under /var/lib/pulp. That part is easy.

Mongo I _think_ is as simple as /var/lib/mongodb. You'll see in there
pulp_database.{0,1,ns} which I'd guess is the entire database contents,
but I don't know that for a fact.


OK, I'll try copying those 3 (/etc/pulp, /var/lib/pulp, 
/var/lib/mongodb), along with the Apache and iptables config, into a 
fresh VM and see if it comes up correctly.


Successful or not, I'll report the results back here.

Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


[Pulp-list] Moving a Pulp instance to another server?

2011-11-29 Thread Nick Coghlan

Hi,

I need to migrate a Pulp instance to a different physical machine. Is 
there an existing list anywhere of what files need to be migrated to 
bring the new instance up with the same data stored?


The configuration files in /etc/pulp obviously need to be saved and 
restored, but I'm not sure what needs to be copied to preserve:

- the MongoDB data store
- the actual repo contents

Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


[Pulp-list] Use Bugzilla for v2 API suggestions?

2011-11-24 Thread Nick Coghlan

Jay (once you're back from Thanksgiving...),

Where would you like v2 API suggestions? Issues in Bugzilla? Or messages 
here while you're still hammering them into shape?


The latest one is that, now that there's a meaningful sync report coming 
back from sync_repo(), it might be good to have that returned from the 
REST API invocation (modulo any changes about exposing more of the sync 
history and about making explicit sync requests asynchronous with respect 
to the client requesting the sync operation).


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


[Pulp-list] Returning created objects from POST APIs?

2011-11-23 Thread Nick Coghlan
In working on the test suite for the new pulpdist plugins, I was 
reminded of a practice some folks recommend for REST APIs that Pulp 
currently doesn't implement: for a successful POST request, return the 
object as if it had been a GET request.


The rationale is that there may be state in the object that the server 
populates rather than the client. If a policy of "always return the 
created object" is followed, then clients don't need to know whether or 
not that is the case, nor do they have to make a second round trip to 
the server (with a GET request) when it *is* the case.


Essentially, the idea is that, if this API pattern is followed, then:

  server.POST(collection_path, obj_id, details)
  obj = server.GET(collection_path, obj_id)

can consistently be replaced by:

  obj = server.POST(collection_path, obj_id, details)

While there's no additional state added by the server when you create a 
new repo, the same can't be said for arbitrary importer and distributor 
plugins.
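As a toy illustration of the round trip this policy saves (every class and method name here is invented for the sketch, not part of Pulp's API):

```python
import copy

class FakeServer:
    """Toy stand-in for a REST server/client pair; names are invented."""

    def __init__(self):
        self._store = {}

    def post(self, collection, obj_id, details):
        # The server enriches the object with state the client never sent
        obj = dict(details, id=obj_id, server_generated_rev=1)
        self._store[(collection, obj_id)] = obj
        # "Always return the created object": reply exactly as a GET would
        return copy.deepcopy(obj)

    def get(self, collection, obj_id):
        return copy.deepcopy(self._store[(collection, obj_id)])

server = FakeServer()
created = server.post("repositories", "repo-1", {"name": "Repo One"})
# One round trip yields the same view a follow-up GET would have:
same = created == server.get("repositories", "repo-1")
```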


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Returning created objects from POST APIs?

2011-11-23 Thread Nick Coghlan

On 11/23/2011 11:18 PM, Jay Dobies wrote:

We've been discussing our REST practices over the last few weeks and
that's one of the things we've identified that we're not consistently
doing but want to.

If you notice it's not in the v2 repo APIs, let me know and I'll fix it;
our intention is that they should be doing this.


I wrote my initial message based on the developer blog post from a while 
ago that described the v2 APIs, so this may already have been fixed - 
I'll experiment with master today and see what it does.


Cheers,
Nick.


--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Returning created objects from POST APIs?

2011-11-23 Thread Nick Coghlan

On 11/24/2011 07:37 AM, Nick Coghlan wrote:

On 11/23/2011 11:18 PM, Jay Dobies wrote:

We've been discussing our REST practices over the last few weeks and
that's one of the things we've identified that we're not consistently
doing but want to.

If you notice it's not in the v2 repo APIs, let me know and I'll fix it;
our intention is that they should be doing this.


I wrote my initial message based on the developer blog post from a while
ago that described the v2 APIs, so this may already have been fixed -
I'll experiment with master today and see what it does.


From looking at webservices/controllers/gc_repositories.py in master, 
it appears it is indeed set up to return the details of the created repo 
on a successful call.


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Returning created objects from POST APIs?

2011-11-23 Thread Nick Coghlan

On 11/24/2011 10:23 AM, Nick Coghlan wrote:

On 11/24/2011 07:37 AM, Nick Coghlan wrote:

On 11/23/2011 11:18 PM, Jay Dobies wrote:

We've been discussing our REST practices over the last few weeks and
that's one of the things we've identified that we're not consistently
doing but want to.

If you notice it's not in the v2 repo APIs, let me know and I'll fix it;
our intention is that they should be doing this.


I wrote my initial message based on the developer blog post from a while
ago that described the v2 APIs, so this may already have been fixed -
I'll experiment with master today and see what it does.


From looking at webservices/controllers/gc_repositories.py in master,
it appears it is indeed set up to return the details of the created repo
on a successful call.


OK, it's just the blog post that is wrong/out of date - the actual 
implementation returns the repo information as expected.


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Importer Sync APIs

2011-11-23 Thread Nick Coghlan

On 11/22/2011 02:34 PM, Nick Coghlan wrote:

You could probably do something fancier by using exceptions, but error
message or None seems to cover all the important aspects.


OK, I've actually started experimenting with the config validation now 
and have discovered a couple of things:


1. Any exceptions thrown by the plugin will get caught and logged by the 
plugin manager. That's very useful to know, since it means I don't need 
to trap the exceptions thrown by my existing validator - I can just let 
them escape and Pulp will log the failure for me.


2. However, there appears to be a problem inside Pulp if validation 
fails - if I set the plugin to just return False all the time (or if 
it throws an exception), then I get a 500 Internal Server Error response 
rather than the expected 400 Bad Request error. (There wasn't anything 
obvious in the Pulp log or the Apache log to explain what was going on)


Now that I know about the first point, this isn't a blocker for me any 
more - I can use the logged exception information from pulp.log to debug 
the configuration code in my test client.


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Importer Sync APIs

2011-11-22 Thread Nick Coghlan
 on the importer itself through
REST:

/v2/repositories/my-repo/importers/


Ah, OK. So long as it's accessible somewhere, I'm not overly worried 
about where.


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Importer Sync APIs

2011-11-21 Thread Nick Coghlan

On 11/22/2011 07:43 AM, Jay Dobies wrote:

http://blog.pulpproject.org/2011/11/21/importer-sync-apis/

I know the week of Thanksgiving isn't the best time to ask for deep
thought, but I'm asking anyway.


No Thanksgiving over here, so no holiday distractions for me :)


I also know there are at least two other teams interested in writing
plugins that I'd like to give some feedback on how this will meet their
needs.


1. The new sync log API looks pretty good. What I'll do is set up my 
sync commands to log to a file on disk (since some of them run in a different 
process), then when everything is done, read that file and pass the 
contents back in the final report.


However, it would be nice to be able to store a stats mapping in 
addition to the raw log data.


2. I *think* the 'working directory' API is the 
'get_repo_storage_directory()' call on the conduit. However, I'm not 
entirely clear on that, nor what the benefits are over using Python's 
own tempfile module (although that may be an artefact of the requirement 
for 2.4 compatibility in Pulp - with 2.5+, the combination of context 
managers, tempfile.mkdtemp() and shutil.rmtree() means that cleaning up 
temporary directories is a *lot* easier than it used to be)
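For instance, on 2.5+ a self-cleaning working directory is just a few lines (a generic sketch, not tied to any Pulp API; Python 3.2+ also ships tempfile.TemporaryDirectory, which does the same thing):

```python
import contextlib
import os
import shutil
import tempfile

@contextlib.contextmanager
def temp_working_dir(prefix="pulpdist-"):
    """Create a temporary directory and remove it on exit, even on error."""
    path = tempfile.mkdtemp(prefix=prefix)
    try:
        yield path
    finally:
        shutil.rmtree(path)

with temp_working_dir() as workdir:
    scratch = os.path.join(workdir, "scratch.txt")
    with open(scratch, "w") as f:
        f.write("sync in progress")
    existed_inside = os.path.exists(scratch)

# The whole tree is gone once the block exits, success or failure
cleaned_up = not os.path.exists(workdir)
```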


3. The 'request_unit_filename' and 'add_or_update_content_unit' APIs 
seem oddly asymmetrical, and the 'get_unit_keys_for_repo' naming 
includes quite a bit of redundancy.


To be consistent, flexible, and efficient, I suggest an API based 
around a ContentUnitData class with the following attributes:

  - type_id
  - unit_id (may be None when defining a new unit to be added to Pulp)
  - key_data
  - other_data
  - storage_path (may be None if no bits are stored for the content 
type - perhaps whether or not bits are stored should be part of the 
content type definition?)


The content management API itself could then look like:

- get_units() -> two-level mapping {type_id: {unit_id: ContentUnitData}}
Replacement for get_unit_keys_for_repo()
Note that if you're concerned about exposing 'unit_id', the 
existing APIs already exposed it as the return value from 
'add_or_update_content_unit'.
I think you're right to avoid exposing a single lookup API, at 
least initially - that's a performance problem waiting to happen.


- new_unit(type_id, key_data, other_data, relative_path) -> ContentUnitData
  Does *not* assign a unit ID (or touch the database at all)
  Does fill in absolute path in storage_path based on relative_path
  Replaces any use of request_unit_filename

- save_unit(ContentUnitData) -> ContentUnitData
  Assigns a unit ID with the unit and stores the unit in the database
  Associates the unit with the repo
  Batching will be tricky due to error handling if the save fails
  Replaces any use of 'add_or_update_content_unit' and 
'associate_content_unit'


- remove_unit(type_id, pulp_id) -> bool
  True if removed, False if association retained
  Replaces any use of 'unassociate_content_unit'
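To make the proposal concrete, here's a rough in-memory mock of that flow (purely illustrative - none of these classes exist in Pulp, and the storage root is a placeholder):

```python
class ContentUnitData:
    """Sketch of the proposed structured unit record (attribute names
    as suggested above, not an actual Pulp API)."""
    def __init__(self, type_id, key_data, other_data=None,
                 storage_path=None, unit_id=None):
        self.type_id = type_id
        self.unit_id = unit_id        # None until save_unit() assigns one
        self.key_data = key_data
        self.other_data = other_data or {}
        self.storage_path = storage_path

class FakeConduit:
    """Minimal in-memory model of the proposed new_unit/save_unit flow."""
    STORAGE_ROOT = "/var/lib/pulp/content"   # placeholder location

    def __init__(self):
        self._next_id = 1
        self._units = {}   # {type_id: {unit_id: ContentUnitData}}

    def new_unit(self, type_id, key_data, other_data, relative_path):
        # Fills in the absolute storage path; does not touch the database
        storage_path = "%s/%s" % (self.STORAGE_ROOT, relative_path)
        return ContentUnitData(type_id, key_data, other_data, storage_path)

    def save_unit(self, unit):
        # Assigns a unit ID and associates the unit with the repo
        if unit.unit_id is None:
            unit.unit_id = self._next_id
            self._next_id += 1
        self._units.setdefault(unit.type_id, {})[unit.unit_id] = unit
        return unit

    def get_units(self):
        return self._units

conduit = FakeConduit()
unit = conduit.new_unit("file", {"name": "tree.img"}, {}, "trees/tree.img")
saved = conduit.save_unit(unit)
```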

For the content unit lifecycle, I suggest adopting a reference counting 
model where the importer owns one set of references (controlled via 
save_unit/remove_unit on the importer conduit) and manual association 
owns a second set of references (which the importer conduit can't 
touch). A reference through either mechanism would then keep the content 
unit alive and associated with the repository (the repo should present a 
unified interface to other code, so client code doesn't need to care if 
it is an importer association or a manual association that is keeping 
the content unit alive).


4. It's probably worth also discussing the scratchpad API that lets the 
importer store additional state between syncs. I like having this as
a convenience API (rather than having to store everything as explicit 
metadata on the repo), but for debugging purposes, it would probably be 
good to publish _importer_scratchpad as a metadata attribute on the 
repo that is accessible via REST.



This is too big and ambitious for me to get right on my own.


Definitely headed in the right direction, but I think it's worth pushing 
the structured data approach even further. You've already started 
doing this on the configuration side of things; I think it makes sense 
on the metadata side as well.


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Importer Sync APIs

2011-11-21 Thread Nick Coghlan

On 11/22/2011 07:43 AM, Jay Dobies wrote:

I also know there are at least two other teams interested in writing
plugins that I'd like to give some feedback on how this will meet their
needs.


I just hit something that's a user-friendliness problem: the 
content-free nature of the validate_config boolean result. Currently, there's 
no mechanism to report to a user detailed information on *why* their 
configuration failed.


So, there's two pieces of feedback coming from this:

1. From *all* the plugin methods, it would be good to be able to gain 
access to an appropriately scoped Python logger. Perhaps this can be 
made an API on the Importer and Distributor base classes? Something like:


def get_logger(self):
    logger_name = "pulp.plugins." + type(self).__module__
    return logging.getLogger(logger_name)

2. It would be good to have a richer return value from validate_config() 
that allows a plugin to supply an error message to be displayed to the 
user. I think the simplest way to go on that front is to make the return 
value from validate_config() an error message string, with a return 
value of None used to indicate that no error occurred and the 
configuration is fine.


You could probably do something fancier by using exceptions, but error 
message or None seems to cover all the important aspects.
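A sketch of what that convention might look like inside a plugin (the config keys here are invented for the example):

```python
def validate_config(config):
    """Sketch of the suggested convention: return an error message
    string describing the problem, or None if the config is valid.

    The 'remote_url' key is hypothetical; real plugin settings differ.
    """
    if "remote_url" not in config:
        return "missing required setting: remote_url"
    if not config["remote_url"].startswith(("http://", "https://",
                                            "rsync://")):
        return "remote_url must be an http(s) or rsync URL"
    return None

error = validate_config({})                                      # message
ok = validate_config({"remote_url": "rsync://example.com/tree"})  # None
```

The caller can then relay the message straight to the user instead of a bare "configuration invalid".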


Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


[Pulp-list] Sphinx resources

2011-11-03 Thread Nick Coghlan
I know the idea of using Sphinx for documentation is being kicked 
around, so I figured I'd offer some additional pointers to resources:


Sphinx's own tutorial:
http://sphinx.pocoo.org/tutorial.html

The RTFD (hosting site for Sphinx docs) getting started guide:
http://readthedocs.org/docs/read-the-docs/en/latest/getting_started.html

The ReStructured Text primer from the Python docs*:
http://docs.python.org/documenting/rest.html

And the Sphinx specific additions (many of these are from Sphinx itself, 
a few are CPython specific additions):

http://docs.python.org/documenting/markup.html

*As mentioned on the site, the Sphinx project actually started when 
Georg wrote it to replace the dodgy old LaTeX-based toolchain. The 
contrast between http://www.docs.python.org/2.5 and 
http://www.docs.python.org/2.6 is stark enough, but the difference is 
even greater when you compare the readability of the documentation 
files themselves:

Sphinx: http://hg.python.org/cpython/file/default/Doc/whatsnew/2.5.rst
LaTeX: http://hg.python.org/cpython/file/2.5/Doc/whatsnew/whatsnew25.tex

Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Blog - Preview: Creating a Pulp Plugin

2011-11-02 Thread Nick Coghlan

On 11/03/2011 01:22 AM, Jay Dobies wrote:

http://blog.pulpproject.org/2011/11/02/preview-creating-a-pulp-plugin/

I wrote up the first in a series of posts that talks about creating Pulp
v2.0 plugins for anyone who wants to be on the bleeding edge of Pulp
stuff. I still need to write up a description of the REST APIs to
actually use it, but this should be enough to get anyone started who
wants to play with writing a plugin.


That generally looks like a good fit with what we want to do in 
PulpDist, but I'm not sold on the idea of actually having to import and 
run Python code in order to get at the importer and distributor 
metadata, nor on the requirement to install (or symlink) Python code 
into a /var subdirectory to get the plugins to work.


Rather than a dedicated directory for the actual plugin code, have you 
considered a JSON based configuration model along the following lines 
(closer to the existing interface for the content type definitions):


{"importers": [
    {
        "id"           : "sync-tree",
        "display_name" : "Directory tree importer",
        "types"        : ["file"],
        "plugin"       : "pulpdist.pulp_plugins.TreeImporter"
    },
    {
        "id"           : "sync-versioned-tree",
        "display_name" : "Directory tree importer (multiple independently updated versions)",
        "types"        : ["file"],
        "plugin"       : "pulpdist.pulp_plugins.VersionedTreeImporter"
    },
    {
        "id"           : "sync-snapshot-tree",
        "display_name" : "Directory tree importer (sequentially released versions)",
        "types"        : ["file"],
        "plugin"       : "pulpdist.pulp_plugins.SnapshotTreeImporter"
    },
    {
        "id"           : "sync-snapshot-delta",
        "display_name" : "Rsync delta generator (sequentially released versions)",
        "types"        : ["rsync-delta"],
        "plugin"       : "pulpdist.pulp_plugins.SnapshotDeltaImporter"
    },
    {
        "id"           : "sync-delta-tree",
        "display_name" : "Directory tree importer (sequentially released versions based on rsync delta files)",
        "types"        : ["file"],
        "plugin"       : "pulpdist.pulp_plugins.DeltaTreeImporter"
    }
]}

And then in a separate file in the distributors directory:

{"distributors": [
    {
        "id"           : "publish-delta",
        "display_name" : "Rsync delta publisher",
        "types"        : ["rsync-delta"],
        "plugin"       : "pulpdist.pulp_plugins.DeltaDistributor"
    }
]}

The key changes would be to move the metadata out into static JSON 
files, and then include a plugin field in the metadata that tells Pulp 
where to find the actual Python class implementing the plugin. That way, 
all the restrictions on naming and file locations for plugins can go 
away (aside from the usual Python namespace rules) and the deployment 
model becomes simply:


  - install your Python package (including any Pulp plugins)
  - add JSON for new content types to /v/l/p/plugins/types
  - add JSON for new importers to /v/l/p/plugins/importers
  - add JSON for new distributors to /v/l/p/plugins/distributors
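The "plugin" field could then be resolved with a few lines of importlib machinery (a sketch of the proposed loading scheme, demonstrated with a stdlib class standing in for a real importer):

```python
import importlib
import json

def load_importer_classes(metadata_json):
    """Resolve each dotted 'plugin' path in an importer definition
    file to the class it names (sketch of the proposed scheme)."""
    metadata = json.loads(metadata_json)
    classes = []
    for entry in metadata["importers"]:
        module_name, _, class_name = entry["plugin"].rpartition(".")
        module = importlib.import_module(module_name)
        classes.append(getattr(module, class_name))
    return classes

# Demonstrated with a stdlib class standing in for a real importer plugin
example = '{"importers": [{"id": "demo", "plugin": "collections.OrderedDict"}]}'
classes = load_importer_classes(example)
```

Since the dotted path follows normal Python namespace rules, plugins can live in any installed package rather than a fixed /var directory.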

Cheers,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] GC Repo APIs

2011-08-23 Thread Nick Coghlan

On 08/24/2011 05:58 AM, Jay Dobies wrote:

https://fedorahosted.org/pulp/wiki/GCRepoApis

That's the first pass at what the repo-related APIs will look like in
the generic content world (there may be some missing areas, but that's
the bulk of the base functionality). Let me know if you have any
thoughts or questions. If anyone wants I can take some time on
Thursday's deep dive and walk through them.


Those look quite usable for the GSv3 use case, although I suspect I will 
eventually need get_*_config utility APIs to match the update_*_config ones.


I was initially concerned about the 'one importer' limitation (since 
GSv3 will likely have two alternative distribution channels for content, 
potentially within the same repo, depending on whether or not we're 
willing to release the information to Akamai for transfer). However, I 
realised it makes more sense to deal with the management of multiple 
distribution channels within the importer itself on a case-by-case basis 
rather than trying to define a general purpose mechanism for cooperation 
between arbitrary importers.


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list


Re: [Pulp-list] Client Refactoring

2011-07-20 Thread Nick Coghlan

On 07/21/2011 08:02 AM, Jason L Connor wrote:

I think I would prefer the first way. It would behave a lot like many
ORMs do. The model would have to know its URL, but there could be a lot
of code reuse from a base class, such as the create() method, which
would be basically the same for almost every model (you just hit a
different URL).


I like the first idea as well. Simply make the model classes themselves
be clients of the API library.


That approach couples the client model instances to a specific server, 
though, which doesn't fit well with the planned GlobalSync management model.


I'd like to be able to create a single Repo model instance for a new 
tree that has just been made available on the main server, then submit 
that model separately to all of the sites that are going to mirror the 
new repository. I can create new instances just to change the target 
server (or edit the server details in place), but it seems cleaner to 
keep the two separate.


Alternatively, the SQL Alchemy session model may be useful here. That 
allows you to either invoke operations directly on a DB session, or else 
use an implicit session that is held in thread local storage. For the 
Pulp client API, the global server connection could be used implicitly 
when invoking operations on model instances, with a separate explicit 
mechanism to perform operations against a specific server.
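A minimal sketch of that session idea, with an implicit thread-local server connection as the default and an explicit parameter to target a specific server. All class and method names here are illustrative, not actual Pulp or SQLAlchemy APIs:

```python
import threading

# Thread-local storage for the implicit "current server" connection
_local = threading.local()


class ServerConnection:
    """Stand-in for an HTTP client bound to one Pulp server."""

    def __init__(self, url):
        self.url = url
        self.calls = []  # record requests so the sketch is observable

    def post(self, path, body):
        self.calls.append(("POST", path))
        return body


def set_default_server(conn):
    """Install the implicit, thread-local connection."""
    _local.server = conn


class Repo:
    """Client-side model; not coupled to any one server."""

    def __init__(self, repo_id):
        self.repo_id = repo_id

    def create(self, server=None):
        # An explicit server wins; otherwise fall back to the
        # implicit thread-local one, SQLAlchemy-session style.
        conn = server or getattr(_local, "server", None)
        if conn is None:
            raise RuntimeError("no server connection configured")
        return conn.post("/repositories/", {"id": self.repo_id})


# Usage: one model instance, submitted to more than one server.
main = ServerConnection("https://main.example.com")
mirror = ServerConnection("https://mirror.example.com")
set_default_server(main)

repo = Repo("fedora-updates")
repo.create()               # implicit: goes to the default (main) server
repo.create(server=mirror)  # explicit: goes to the mirror
```

The point of the sketch is that the model instance itself stays server-agnostic, so the same Repo object can be pushed to every mirror site without copying or editing it.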


Regards,
Nick.

--
Nick Coghlan
Red Hat Engineering Operations, Brisbane



[Pulp-list] Mongod service init script

2011-06-14 Thread Nick Coghlan
Does the mongod service init script for Pulp come from upstream, or is 
it provided by Pulp?


I encountered a problem today where it was failing to start (due to an 
old lock file), but was reporting a successful exit status, so "service 
pulp-server start" thought it was in an OK state and tried to bring up 
httpd and pulp-server, even though the database was falling over.
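To illustrate the failure mode (this is a hypothetical sketch, not the actual mongod init script): if the last command in the script's start() function succeeds, the script exits 0 even though the daemon itself failed.

```shell
#!/bin/sh
# fake_daemon stands in for a mongod that fails due to a stale lock file
fake_daemon() { return 1; }

start_buggy() {
    fake_daemon
    echo "Starting mongod"      # exits 0, masking the daemon's failure
}

start_fixed() {
    fake_daemon
    ret=$?                      # capture the daemon's real status...
    echo "Starting mongod"
    return $ret                 # ...and propagate it as the exit status
}

start_buggy >/dev/null && echo "buggy script reported success"
start_fixed >/dev/null || echo "fixed script reported failure"
```

With the buggy form, anything chaining off the init script's exit status (like pulp-server's startup) sees success and carries on.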


Just trying to figure out if the bug report should go to Pulp or MongoDB.

Cheers,
Nick.

--
Nick Coghlan
Engineering Operations, Brisbane



Re: [Pulp-list] Pulp development on Fedora 15

2011-06-10 Thread Nick Coghlan

On 06/09/2011 11:41 PM, James Slagle wrote:

On Thu, Jun 09, 2011 at 09:10:23AM -0400, Jay Dobies wrote:

You didn't mention if SELinux was enabled or not. We don't currently
work correctly with SELinux. That said, it shouldn't have stopped apache
from starting up.



For the dev setup, I think it may now; httpd wouldn't start for me at all
with SELinux set to enforcing.

After sudo echo 0 > /selinux/enforce, it starts successfully.


Thanks to [1] I eventually tracked the "Apache doesn't even start" 
problem down to httpd_enable_homedirs being off by default.


Changing some of the SE contexts in the dev directory got things a 
little further along:


chcon -Rv --type httpd_config_t etc/httpd
chcon -Rv --type cert_t etc/pki/pulp

However, at that point, localhost/pulp/api would still error out 
(replying with 503 Service Temporarily Unavailable), so I resorted to 
the recommended sudo setenforce 0 sledgehammer to get things actually 
running :)


[1] http://beginlinux.com/server_training/web-server/976-apache-and-selinux
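For reference, the SELinux adjustments described above could be collected into something like the following (assumes a Fedora dev checkout; the boolean and context names are the ones mentioned in this thread):

```shell
# Check and persistently enable the boolean that was off by default
getsebool httpd_enable_homedirs
setsebool -P httpd_enable_homedirs on

# Relabel the dev-checkout config and certificate directories
chcon -Rv --type httpd_config_t etc/httpd
chcon -Rv --type cert_t etc/pki/pulp

# Last-resort sledgehammer: switch SELinux to permissive mode
setenforce 0
```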

Cheers,
Nick.

--
Nick Coghlan
Engineering Operations, Brisbane
