Re: [Pulp-list] how do I remove migration plugin and data
Yes, it's python3-pulp-2to3-migration-0.6.0-2.el8.noarch. I actually uninstalled that rpm since I was done with the migration, thinking it was no longer needed. Could that be the issue? I reinstalled it, but I get the same error when trying to delete the repo version.

//Adam

From: Tanya Tereshchenko
Sent: 10 February 2021 16:42:57
To: Winberg Adam
Cc: pulp-list@redhat.com
Subject: Re: [Pulp-list] how do I remove migration plugin and data

Could you please share the pulp-2to3-migration plugin version as well?

Thanks!
Tanya

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list
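Adam's base_version workaround can be sketched as a small helper that builds the request against Pulp 3's repository `modify/` endpoint (which accepts a `base_version` href). The host name is a hypothetical placeholder and the actual HTTP call is left commented out:

```python
# Sketch of the workaround: instead of deleting repo versions, create a new
# repository version on top of version 0 via the "modify" endpoint.
PULP_HOST = "https://pulp.example.com"  # placeholder, not a real host
REPO_HREF = "/pulp/api/v3/repositories/rpm/rpm/3bf87b61-8211-45ce-8a0f-377358d2e32c/"

def modify_request(repo_href, base_version_number=0):
    """Build the URL and JSON payload that reset a repository to an
    earlier version via the Pulp 3 'modify' endpoint."""
    url = f"{PULP_HOST}{repo_href}modify/"
    payload = {"base_version": f"{repo_href}versions/{base_version_number}/"}
    return url, payload

url, payload = modify_request(REPO_HREF)
# A real call would then be something like:
#   requests.post(url, json=payload, auth=(user, password))
```

The next sync then produces a fresh version on top of the empty version 0, without ever deleting the migrated versions that the migration plugin still references.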
Re: [Pulp-list] how do I remove migration plugin and data
Forgot my versions:

python3-pulp-rpm-3.7.0-1.el8.noarch
python3-pulpcore-3.7.3-1.el8.noarch

So I'm not on the latest; I will update and test again.
Re: [Pulp-list] how do I remove migration plugin and data
> Could you share your steps to reproduce the problem?

Nothing special really, just a repo migrated from pulp2 which I now tried to remove. First I tried to remove the repo versions containing packages, keeping only version 0, since I wanted to make a fresh sync:

http DELETE https:///pulp/api/v3/repositories/rpm/rpm/3bf87b61-8211-45ce-8a0f-377358d2e32c/versions/1/

When that didn't work I tried to remove the repository altogether:

http DELETE https:///pulp/api/v3/repositories/rpm/rpm/3bf87b61-8211-45ce-8a0f-377358d2e32c/

Both actions result in the same error. I worked around this by modifying the repo using version 0 as base_version, rather than removing all existing repo versions, in order to make a fresh sync. And that worked. So it's not a pressing need, and I'm pretty sure I have removed repo versions from pulp2-migrated repos before, so I don't know what's different in this case.

//Adam

From: Tanya Tereshchenko
Sent: 10 February 2021 14:15
To: Winberg Adam
Cc: pulp-list@redhat.com
Subject: Re: [Pulp-list] how do I remove migration plugin and data

Hi Adam,

There is a story filed to add the ability to remove a plugin and its data. Please follow it here https://pulp.plan.io/issues/7822 and feel free to leave a comment to express your interest or reasons to remove it. We plan to work on it relatively soon, but it has not been high on our priority list. There are two primary goals covered by this story, and neither seems extremely urgent:

- space: there is a lot of pulp2 data in the pulp3 database that can be removed
- compatibility with other plugins: the migration plugin won't be compatible with all new pulp3 releases forever, so it would be good to be able to remove it; at the moment it's compatible with pulpcore 3.7+.

However, if you think you are running into problems because of having the plugin installed, it becomes important to address this sooner rather than later.

I tried to reproduce your issue by removing a migrated File or RPM repository/publication/distribution, and it all worked for me. Could you share your steps to reproduce the problem? Here, or feel free to file an issue in Redmine: https://pulp.plan.io/projects/migration/issues/new The versions of pulpcore and the plugins would be good to know (since I'm testing on the latest), as well as which resources you delete and how. Seeing a specific CLI command, API call, or your script might be helpful.

Thank you,
Tanya
[Pulp-list] how do I remove migration plugin and data
Hi,

I've finished my 2to3 migration and now I want to get rid of the pulp_2to3_migration related db entries. I can't find any documentation regarding this; what is the recommended procedure?

Right now I get errors when trying to remove certain repos:

django.db.utils.IntegrityError: update or delete on table "core_publication" violates foreign key constraint "pulp_2to3_migration__pulp3_publication_id_221e8b1c_fk_core_publ" on table "pulp_2to3_migration_pulp2distributor"

//Adam
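The IntegrityError in the traceback above is ordinary foreign-key protection: a row in pulp_2to3_migration_pulp2distributor still points at the publication being deleted. A minimal sketch of the mechanism, using SQLite in place of PostgreSQL and simplified stand-in tables (names and columns are illustrative, not the real schema):

```python
import sqlite3

# Two toy tables standing in for core_publication and
# pulp_2to3_migration_pulp2distributor (schema simplified).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE core_publication (pulp_id TEXT PRIMARY KEY)")
conn.execute("""
    CREATE TABLE pulp2distributor (
        pulp_id TEXT PRIMARY KEY,
        pulp3_publication_id TEXT REFERENCES core_publication (pulp_id)
    )
""")
conn.execute("INSERT INTO core_publication VALUES ('pub-1')")
conn.execute("INSERT INTO pulp2distributor VALUES ('dist-1', 'pub-1')")

# Deleting the publication fails while the migration-plugin row still
# references it -- the same class of error as the Django traceback.
blocked = False
try:
    conn.execute("DELETE FROM core_publication WHERE pulp_id = 'pub-1'")
except sqlite3.IntegrityError as exc:
    blocked = True
    print("delete blocked:", exc)

# Once the referencing row is gone (what removing the plugin's data would
# effectively do), the delete goes through.
conn.execute("DELETE FROM pulp2distributor WHERE pulp_id = 'dist-1'")
conn.execute("DELETE FROM core_publication WHERE pulp_id = 'pub-1'")
```

This is why reinstalling or uninstalling the plugin RPM alone doesn't help: the referencing rows live in the database, not in the package.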
Re: [Pulp-list] epel syncs all advisories every time
Looking a bit more at the copy issue, with the error

pulp_rpm.app.exceptions.AdvisoryConflict: Incoming and existing advisories have the same id and timestamp but different and intersecting package lists. At least one of them is wrong. Advisory id: FEDORA-EPEL-2019-927a9446df

it seems to be due to the advisory metadata in my frozen EPEL repos being broken: the package lists are incomplete. These repos were migrated from pulp2, but I don't know if the metadata was already broken in pulp2 or if it broke during the migration. My pulp2 environment is no longer with me, so there's no way for me to find out. So I need to fix those repos.

Once that's done, the issue with EPEL syncs re-adding all advisories is probably not that big, but it does make syncing take quite some time and also makes it difficult for me to filter newly added advisories (since they are re-added at every sync). So if anyone has any idea how to solve this I would be grateful.

Comparing the advisories that are added with the ones that are removed shows that they are identical except for a newer 'updated_date' on the added one (the updated_date is today's date). Maybe an advisory shouldn't be considered for addition if the only change is the updated_date and no other metadata has changed? And why is EPEL setting a new 'updated_date' on their advisories at every rebuild/re-index of their repos?

//Adam
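Adam's suggestion — skip re-adding an advisory when only the updated_date changed — amounts to a field-by-field comparison with an ignore list. A sketch as a plain dict comparison; the field names follow updateinfo.xml conventions but are illustrative, not pulp_rpm's actual model:

```python
def meaningfully_changed(existing: dict, incoming: dict,
                         ignored=("updated_date",)) -> bool:
    """Return True if the incoming advisory differs from the existing one
    in any field other than the ignored ones (e.g. updated_date)."""
    keys = set(existing) | set(incoming)
    return any(
        existing.get(k) != incoming.get(k)
        for k in keys if k not in ignored
    )

old = {"id": "FEDORA-EPEL-2019-927a9446df",
       "updated_date": "2021-02-08",
       "pkglist": ["foo-1.0-1.el8"]}

# Only the date moved -> not a meaningful change, could be skipped on sync.
bumped = dict(old, updated_date="2021-02-09")
assert not meaningfully_changed(old, bumped)

# A changed package list IS meaningful and should still be re-added.
assert meaningfully_changed(old, dict(bumped, pkglist=["foo-1.0-2.el8"]))
```

Whether sync should actually behave this way is the open question in the thread; the sketch only shows that the comparison itself is cheap.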
Re: [Pulp-list] epel syncs all advisories every time
The patch was also for copying, not syncing, so I was totally confused. Sorry about that...

From: pulp-list-boun...@redhat.com on behalf of Winberg Adam
Sent: 08 February 2021 15:58
To: pulp-list@redhat.com
Subject: Re: [Pulp-list] epel syncs all advisories every time

Did a quick test, and the patch I mentioned is not involved. Note that I only see this problem with EPEL, not with the RHEL repos, which also publish errata/advisories. So the problem is probably on the EPEL side, or..?

//Adam
Re: [Pulp-list] Apache 502 proxy errors when performing Pulp 3 to Pulp 3 syncs
I should also note that I use the 'mpm_event' worker in Apache, which affects how ServerLimit/MaxRequestWorkers are set.

//Adam
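Pulling the Apache changes discussed in this thread together, a config fragment might look like the following. The ServerLimit/MaxRequestWorkers values are the ones Adam reports for his workload, and the backend address reuses the 127.0.0.1:24816 content port seen elsewhere on this list; treat all of it as a sketch to adapt, not a recommended default:

```apacheconf
# mpm_event limits: with mpm_event's default ThreadsPerChild of 25,
# ServerLimit 30 allows MaxRequestWorkers up to 30 * 25 = 750.
<IfModule mpm_event_module>
    ServerLimit        30
    MaxRequestWorkers  750
</IfModule>

# Reverse proxy to pulpcore-content. disablereuse=on stops Apache from
# reusing backend connections, which works around the stale-connection
# behaviour Apache and aiohttp disagree about (aiohttp issue #2687).
ProxyPass /pulp/content http://127.0.0.1:24816/pulp/content disablereuse=on
```

The tradeoff with disablereuse=on is a new backend connection per request; as Adam notes, that cost is small when the backend is on localhost.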
Re: [Pulp-list] Apache 502 proxy errors when performing Pulp 3 to Pulp 3 syncs
I'm not using the pulp_installer, so I can't help with the patch, unfortunately. But I have borrowed heavily from it, so here is the change relative to the Apache config.

pulp_installer config:

ProxyPass /pulp/content http://${pulp-content}/pulp/content

My config:

ProxyPass /pulp/content http://${pulp-content}/pulp/content disablereuse=on

Aside from that I have also increased ServerLimit and MaxRequestWorkers, but there YMMV. I've set them to

ServerLimit 30
MaxRequestWorkers 750

which works for my workload.

//Adam

From: Brian Bouterse
Sent: 08 February 2021 15:47
To: Winberg Adam
Cc: Eric Helms; pulp-list
Subject: Re: [Pulp-list] Apache 502 proxy errors when performing Pulp 3 to Pulp 3 syncs

Hi Adam,

Thanks for sharing all of that and for opening that issue. Would you be willing to do one or two more things to help move it forward? Could you post the diff of the Apache config changes you made on the issue? Also, if you'd be willing to open a PR, I think it would go against this file: https://github.com/pulp/pulp_installer/blob/master/roles/pulp_webserver/templates/pulp-vhost.conf.j2 Can you link us to either of these you're able to do?

Regarding improving the content app's performance, @dalley has done some investigation there and we hope to build on that work to make some improvements.

Cheers,
Brian
Re: [Pulp-list] epel syncs all advisories every time
I'm not sure, but I think this behaviour might have started after I applied the following patch: https://github.com/pulp/pulp_rpm/commit/1652026913308e8348543af6f62c3b5c5f89985b#diff-0b195d23762f04b205940bafb5889ddf96181afde122ead35f8c65fe03527647

//Adam
[Pulp-list] epel syncs all advisories every time
I sync the rhel8 and rhel7 EPEL repos every day, and for some reason all advisories are removed and re-added each time:

"content_summary": {
    "added": {
        "rpm.advisory": {
            "count": 2540,
            "href": "/pulp/api/v3/content/rpm/advisories/?repository_version_added=/pulp/api/v3/repositories/rpm/rpm/10e51ae6-65c7-42aa-8ab1-ffebdf752500/versions/19/"
        },
        "rpm.package": {
            "count": 6,
            "href": "/pulp/api/v3/content/rpm/packages/?repository_version_added=/pulp/api/v3/repositories/rpm/rpm/10e51ae6-65c7-42aa-8ab1-ffebdf752500/versions/19/"
        }
    },
    "removed": {
        "rpm.advisory": {
            "count": 2536,
            "href": "/pulp/api/v3/content/rpm/advisories/?repository_version_removed=/pulp/api/v3/repositories/rpm/rpm/10e51ae6-65c7-42aa-8ab1-ffebdf752500/versions/19/"
        }
    }
},

This also leads to problems when I want to copy new advisories to my frozen EPEL repo:

pulp_rpm.app.exceptions.AdvisoryConflict: Incoming and existing advisories have the same id and timestamp but different and intersecting package lists. At least one of them is wrong. Advisory id: FEDORA-EPEL-2019-927a9446df

So it seems that the advisory has actually changed, which seems weird. And changed every day? Is this some quirk in the EPEL sources or a bug in Pulp?

//Adam
Re: [Pulp-list] Apache 502 proxy errors when performing Pulp 3 to Pulp 3 syncs
I raised a similar question here about pulp-content performance a short while ago, and also created https://pulp.plan.io/issues/8180 since there may be a lack of documentation about this. In short, I've made a number of adjustments, and among them was setting 'disablereuse=on' in my ProxyPass declaration. I have not seen any performance issue, probably because the ProxyPass target is on localhost and not a remote machine.

//Adam

From: pulp-list-boun...@redhat.com on behalf of Eric Helms
Sent: 05 February 2021 17:29
To: pulp-list
Subject: [Pulp-list] Apache 502 proxy errors when performing Pulp 3 to Pulp 3 syncs

Howdy,

Some quick background: over in the Katello project we deploy the pulpcore-content service via a unix socket, with Apache serving as a reverse proxy. Today, we deploy pulpcore-content with two gunicorn workers. Tomorrow we are considering changing this to 2 * CPU + 1, per the gunicorn documentation.

The issue we are running into is intermittent 502s from Apache, caused by being unable to make a connection to the underlying pulpcore-content app. This manifests itself primarily during a Pulp 3 to Pulp 3 sync, that is, when we sync our content proxy from the main server's Pulp 3. The sync can result in a large number of parallel connections back to the pulpcore-content application running on the main server.

In an issue for aiohttp, which is used by the project and whose worker is used for gunicorn [1], they talk about the issues with Apache and aiohttp. In that issue there are two suggestions that I could extract:

1) set disablereuse=on on the Apache reverse proxy declarations for the content app
2) change the default Apache worker type to be more like Nginx

There are performance tradeoffs with #1; however, I do not fully grasp whether they are relevant to our primary use case when it comes to the content app. So I am coming to the experts here to try to get some insight into what changes we should pursue to ensure the optimal default performance for our deployment, and to, as best as we can, limit these kinds of intermittent failures to extreme cases. Because today, we see this intermittent failure with Pulp 3 running the same test suite we did with Pulp 2.

Related: is there retry support built into syncing?

Thanks!

[1] https://github.com/aio-libs/aiohttp/issues/2687

--
Eric Helms
Principal Software Engineer
Satellite
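The "2 * CPU + 1" sizing Eric mentions is gunicorn's rule-of-thumb worker count. As a trivial sketch of the arithmetic:

```python
import multiprocessing

def suggested_workers(cpu_count=None):
    """Gunicorn's rule-of-thumb worker count: (2 x CPUs) + 1."""
    if cpu_count is None:
        cpu_count = multiprocessing.cpu_count()
    return 2 * cpu_count + 1

# e.g. on a 4-core content host:
print(suggested_workers(4))  # -> 9
```

The rule assumes roughly I/O-bound request handling, where each core can usefully juggle a couple of workers; it is a starting point to measure from, not a hard requirement.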
Re: [Pulp-list] very slow yum runs
I managed to get better performance by making the following changes: * Increasing the Apache parameters 'ServerLimit' and 'MaxRequestWorkers'. This was suggested by warnings in the Apache error logs which I had missed. * Increasing the aiohttp workers for the pulpcore-content service from 2 to 6. Not sure what effect, if any, this has. Not sure which measure had an effect, or if it was a combination, but things seem to run pretty smoothly now without any errors. Still not quite as fast as with pulp2, so if anyone has any tips I'll gladly accept them. //Adam From: pulp-list-boun...@redhat.com on behalf of Winberg Adam Sent: 20 January 2021 15:38 To: pulp-list@redhat.com Subject: [Pulp-list] very slow yum runs Hello, we have been using a pulp2 installation serving yum repos to ~ 1200 clients. Yesterday I went live with a pulp3 installation and client yum runs were very slow. I mean timeout-slow, grinding our puppet runs to a halt and forcing me to revert back to our pulp2 instance. Pulp3 is behind an Apache reverse proxy, and I'm seeing errors like: AH01102: error reading status line from remote server 127.0.0.1:24816 There is no shortage of cpu/ram on the server. Pulp is connected to a remote postgresql server. Using an rpm-based installation on RHEL8 with python3-pulp-rpm-3.7.0-1.el8.noarch python3-pulpcore-3.7.3-1.el8.noarch Anyone recognize this problem, is there some basic tuning that I have missed? Regards Adam
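The 'ServerLimit'/'MaxRequestWorkers' change described above might look like the following mpm_event fragment. The numbers are illustrative assumptions only, not recommended values; they should be sized to the client count and available RAM.

```apache
# Hypothetical Apache event-MPM tuning; values are placeholders.
<IfModule mpm_event_module>
    ServerLimit           8
    ThreadsPerChild      64
    # MaxRequestWorkers must not exceed ServerLimit * ThreadsPerChild.
    MaxRequestWorkers   512
</IfModule>
```

The content-app worker count, by contrast, is typically set on the gunicorn command line (e.g. a `--workers` flag) in the pulpcore-content service unit rather than in Apache.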
[Pulp-list] very slow yum runs
Hello, we have been using a pulp2 installation serving yum repos to ~ 1200 clients. Yesterday I went live with a pulp3 installation and client yum runs were very slow. I mean timeout-slow, grinding our puppet runs to a halt and forcing me to revert back to our pulp2 instance. Pulp3 is behind an Apache reverse proxy, and I'm seeing errors like: AH01102: error reading status line from remote server 127.0.0.1:24816 There is no shortage of cpu/ram on the server. Pulp is connected to a remote postgresql server. Using an rpm-based installation on RHEL8 with python3-pulp-rpm-3.7.0-1.el8.noarch python3-pulpcore-3.7.3-1.el8.noarch Anyone recognize this problem, is there some basic tuning that I have missed? Regards Adam
Re: [Pulp-list] 2to3 migration: memory usage and open file handles
both! After fixing that script the memory usage is very manageable. //Adam From: Daniel Alley Sent: 05 November 2020 15:30 To: Winberg Adam Cc: pulp-list@redhat.com Subject: Re: [Pulp-list] 2to3 migration: memory usage and open file handles Do you mean that the bash script was the cause of the file descriptor issue, or the memory issue, or both? On Thu, Nov 5, 2020 at 3:25 AM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: sorry, ignore my problem. It was totally unrelated to pulp, it was another process going haywire at the same time as I was running the migration (bug in a bash while loop in an unrelated housekeeping script). I was so focused on the migration that I didn't even notice that the issue was elsewhere. //Adam From: pulp-list-boun...@redhat.com<mailto:pulp-list-boun...@redhat.com> mailto:pulp-list-boun...@redhat.com>> on behalf of Winberg Adam mailto:adam.winb...@smhi.se>> Sent: 05 November 2020 07:42 To: Daniel Alley Cc: pulp-list@redhat.com<mailto:pulp-list@redhat.com> Subject: Re: [Pulp-list] 2to3 migration: memory usage and open file handles that's weird. I had a lot of memory usage while I was running on pulpcore 3.4, but after upgrading to 3.7 there was hardly any memory usage. I will reboot and run with 2 workers instead of 4 (don't know if that is even relevant in migration) and run a new migration from scratch before filing an issue. //Adam From: Daniel Alley mailto:dal...@redhat.com>> Sent: 05 November 2020 01:58 To: Winberg Adam Cc: pulp-list@redhat.com<mailto:pulp-list@redhat.com> Subject: Re: [Pulp-list] 2to3 migration: memory usage and open file handles Hi Adam, We discovered (and fixed) some memory leaks in a library that we are using [0] [1], which happens to be a Python extension written in C. Right now we're still waiting on the maintainers of that library to review the changes before we package them and ship the RPM, but we'll definitely let you know when that happens. 
However, these issues would have affected every previous version equally, so it's a little strange that you're only running into it now. Nothing else about your setup has changed I assume? re: file descriptors, we've been testing migrating much larger systems (300k RPMs, 600k errata) without running into problems, so I'm perplexed about what could be causing that. File an issue and list which repositories you're attempting to migrate and we'll see if we can reproduce. [0] https://github.com/rpm-software-management/createrepo_c/pull/231 [1] https://github.com/rpm-software-management/createrepo_c/pull/233 On Wed, Nov 4, 2020 at 12:40 PM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: Hi, running a 2to3 migration with 2to3-migration-0.5.1 seems to consume a whole lot more memory than previous versions. My 12G RAM was quickly spent, i increased to 16G which wasnt enough either. Earlier migrations with 0.5.0 didnt spend anywhere near that amount. Also - the migration fails with OSError: [Errno 23] Too many open files in system: .. The memory usage increases while running the 'Migrating rpm content to Pulp 3 rpm' subtask. With 16G RAM I only get to about 114000/152000 pkgs in that task before the memory is more or less all consumed and the OSError appears. So it seems to me that there is some type of regression here. Any pointers on how I can further debug or work around this? This is on RHEL8 with python3-pulp-rpm-3.7.0-1.el8.noarch python3-pulpcore-3.7.3-1.el8.noarch python3-pulp-2to3-migration-0.5.1-1.el8.noarch //Adam ___ Pulp-list mailing list Pulp-list@redhat.com<mailto:Pulp-list@redhat.com> https://www.redhat.com/mailman/listinfo/pulp-list ___ Pulp-list mailing list Pulp-list@redhat.com https://www.redhat.com/mailman/listinfo/pulp-list
Re: [Pulp-list] 2to3 migration: memory usage and open file handles
sorry, ignore my problem. It was totally unrelated to pulp, it was another process going haywire at the same time as I was running the migration (bug in bash while loop in an unrelated housekeeping script). I was so focused on the migration that I didn't even notice that the issue was elsewhere. //Adam From: pulp-list-boun...@redhat.com on behalf of Winberg Adam Sent: 05 November 2020 07:42 To: Daniel Alley Cc: pulp-list@redhat.com Subject: Re: [Pulp-list] 2to3 migration: memory usage and open file handles thats weird. I had a lot of memory usage while I was running on pulpcore 3.4, but after upgrading to 3.7 there was hardly any memory usage. I will reboot and run with 2 workers instead of 4 (don't know if that is even relevant in migration) and run a new migration from scratch before filing an issue. //Adam From: Daniel Alley Sent: 05 November 2020 01:58 To: Winberg Adam Cc: pulp-list@redhat.com Subject: Re: [Pulp-list] 2to3 migration: memory usage and open file handles Hi Adam, We discovered (and fixed) some memory leaks in a library that we are using [0] [1], which happens to be a Python extension written in C. Right now we're still waiting on the maintainers of that library to review the changes before we package them and ship the RPM, but we'll definitely let you know what that happens. However, these issues would have affected every previous version equally, so it's a little strange that you're only running into it now. Nothing else about your setup has changed I assume? re: file descriptors, we've been testing migrating much larger systems (300k RPMs, 600k errata) without running into problems, so I'm perplexed about what could be causing that. File an issue and list which repositories you're attempting to migrate and we'll see if we can reproduce. 
[0] https://github.com/rpm-software-management/createrepo_c/pull/231 [1] https://github.com/rpm-software-management/createrepo_c/pull/233 On Wed, Nov 4, 2020 at 12:40 PM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: Hi, running a 2to3 migration with 2to3-migration-0.5.1 seems to consume a whole lot more memory than previous versions. My 12G RAM was quickly spent, i increased to 16G which wasnt enough either. Earlier migrations with 0.5.0 didnt spend anywhere near that amount. Also - the migration fails with OSError: [Errno 23] Too many open files in system: .. The memory usage increases while running the 'Migrating rpm content to Pulp 3 rpm' subtask. With 16G RAM I only get to about 114000/152000 pkgs in that task before the memory is more or less all consumed and the OSError appears. So it seems to me that there is some type of regression here. Any pointers on how I can further debug or work around this? This is on RHEL8 with python3-pulp-rpm-3.7.0-1.el8.noarch python3-pulpcore-3.7.3-1.el8.noarch python3-pulp-2to3-migration-0.5.1-1.el8.noarch //Adam ___ Pulp-list mailing list Pulp-list@redhat.com<mailto:Pulp-list@redhat.com> https://www.redhat.com/mailman/listinfo/pulp-list ___ Pulp-list mailing list Pulp-list@redhat.com https://www.redhat.com/mailman/listinfo/pulp-list
Re: [Pulp-list] 2to3 migration: memory usage and open file handles
thats weird. I had a lot of memory usage while I was running on pulpcore 3.4, but after upgrading to 3.7 there was hardly any memory usage. I will reboot and run with 2 workers instead of 4 (don't know if that is even relevant in migration) and run a new migration from scratch before filing an issue. //Adam From: Daniel Alley Sent: 05 November 2020 01:58 To: Winberg Adam Cc: pulp-list@redhat.com Subject: Re: [Pulp-list] 2to3 migration: memory usage and open file handles Hi Adam, We discovered (and fixed) some memory leaks in a library that we are using [0] [1], which happens to be a Python extension written in C. Right now we're still waiting on the maintainers of that library to review the changes before we package them and ship the RPM, but we'll definitely let you know what that happens. However, these issues would have affected every previous version equally, so it's a little strange that you're only running into it now. Nothing else about your setup has changed I assume? re: file descriptors, we've been testing migrating much larger systems (300k RPMs, 600k errata) without running into problems, so I'm perplexed about what could be causing that. File an issue and list which repositories you're attempting to migrate and we'll see if we can reproduce. [0] https://github.com/rpm-software-management/createrepo_c/pull/231 [1] https://github.com/rpm-software-management/createrepo_c/pull/233 On Wed, Nov 4, 2020 at 12:40 PM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: Hi, running a 2to3 migration with 2to3-migration-0.5.1 seems to consume a whole lot more memory than previous versions. My 12G RAM was quickly spent, i increased to 16G which wasnt enough either. Earlier migrations with 0.5.0 didnt spend anywhere near that amount. Also - the migration fails with OSError: [Errno 23] Too many open files in system: .. The memory usage increases while running the 'Migrating rpm content to Pulp 3 rpm' subtask. 
With 16G RAM I only get to about 114000/152000 pkgs in that task before the memory is more or less all consumed and the OSError appears. So it seems to me that there is some type of regression here. Any pointers on how I can further debug or work around this? This is on RHEL8 with python3-pulp-rpm-3.7.0-1.el8.noarch python3-pulpcore-3.7.3-1.el8.noarch python3-pulp-2to3-migration-0.5.1-1.el8.noarch //Adam ___ Pulp-list mailing list Pulp-list@redhat.com<mailto:Pulp-list@redhat.com> https://www.redhat.com/mailman/listinfo/pulp-list ___ Pulp-list mailing list Pulp-list@redhat.com https://www.redhat.com/mailman/listinfo/pulp-list
[Pulp-list] 2to3 migration: memory usage and open file handles
Hi, running a 2to3 migration with 2to3-migration-0.5.1 seems to consume a whole lot more memory than previous versions. My 12G RAM was quickly spent; I increased to 16G, which wasn't enough either. Earlier migrations with 0.5.0 didn't spend anywhere near that amount. Also, the migration fails with OSError: [Errno 23] Too many open files in system: .. The memory usage increases while running the 'Migrating rpm content to Pulp 3 rpm' subtask. With 16G RAM I only get to about 114000/152000 pkgs in that task before the memory is more or less all consumed and the OSError appears. So it seems to me that there is some type of regression here. Any pointers on how I can further debug or work around this? This is on RHEL8 with python3-pulp-rpm-3.7.0-1.el8.noarch python3-pulpcore-3.7.3-1.el8.noarch python3-pulp-2to3-migration-0.5.1-1.el8.noarch //Adam
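A side note on the error above: OSError [Errno 23] is ENFILE ("Too many open files in system"), which means the system-wide file table is exhausted, not just the per-process limit (that would be EMFILE, errno 24). A few commands to confirm which limit is being hit; the procfs paths are standard Linux locations, and the worker PID is a placeholder.

```shell
# Check open-file limits when debugging ENFILE (Linux).
ulimit -n                   # per-process soft limit for this shell
cat /proc/sys/fs/file-nr    # allocated / free / system-wide maximum
cat /proc/sys/fs/file-max   # the system-wide limit by itself
# For a running pulpcore worker, check its effective per-process limit
# (replace <pid> with the worker's actual PID):
#   grep "open files" /proc/<pid>/limits
```

If file-nr's first number approaches file-max during the migration, raising fs.file-max via sysctl is the relevant knob; raising ulimit only helps with the per-process variant.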
Re: [Pulp-list] 2to3-migration takes a long time
'not' = now From: pulp-list-boun...@redhat.com on behalf of Winberg Adam Sent: 25 October 2020 15:59 To: Daniel Alley Cc: pulp-list@redhat.com Subject: Re: [Pulp-list] 2to3-migration takes a long time have tested not with 0.5.0 and it did indeed fix the problem. The initial migrations was way faster, about 2.5hrs, and the subsequent migrations are really fast. Thanks! //Adam From: Daniel Alley Sent: 23 October 2020 19:02 To: Winberg Adam Cc: pulp-list@redhat.com Subject: Re: [Pulp-list] 2to3-migration takes a long time No, this is with 0.4.0, since 0.5.0 is quite new I haven't updated yet. Will do so and try again. If it's 0.4.0, then this is actually not surprising after all. This was a bug that was fixed in 0.5.0 https://pulp.plan.io/issues/7280 On Fri, Oct 23, 2020 at 12:45 PM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: > This is with the very latest version of the migration plugin (0.5.0)? No, this is with 0.4.0, since 0.5.0 is quite new I haven't updated yet. Will do so and try again. > Is it actually equally long, or just not 'reasonably' faster? Is it only the > "migrating rpm content to Pulp 3 erratum" sub-item which reports doing a lot > of work or are many of them doing so? I would say that the runtime is very similar. And it looks like many if not all sub-items are repeating their work but since the 'pulp3 erratum' is the longest running that's the one that stands out. 
Here are some progress reports on a subsequent run which should only resulted in a handful of new rpm's: { "code": "migrating.rpm.content", "done": 32348, "message": "Migrating rpm content to Pulp 3 rpm", "state": "completed", "suffix": null, "total": 32348 }, { "code": "premigrating.content.general", "done": 89690, "message": "Pre-migrating Pulp 2 ERRATUM content (general info)", "state": "completed", "suffix": null, "total": 89690 }, { "code": "premigrating.content.general", "done": 39, "message": "Pre-migrating Pulp 2 RPM content (general info)", "state": "completed", "suffix": null, "total": 43 }, { "code": "migrating.rpm.content", "done": 89690, "message": "Migrating rpm content to Pulp 3 erratum", "state": "completed", "suffix": null, "total": 89690 }, Should these numbers be 0 on a subsequent run if nothing has changed? //Adam From: Daniel Alley mailto:dal...@redhat.com>> Sent: 23 October 2020 17:33 To: Winberg Adam Cc: pulp-list@redhat.com<mailto:pulp-list@redhat.com> Subject: Re: [Pulp-list] 2to3-migration takes a long time Hey Adam, It's not expected for subsequent migrations to take equally long, that is strange and concerning. This is with the very latest version of the migration plugin (0.5.0)? Is it actually equally long, or just not 'reasonably' faster? Is it only the "migrating rpm content to Pulp 3 erratum" sub-item which reports doing a lot of work or are many of them doing so? I do have some good news though, the 6-7 hour runtime will improve significantly once you can upgrade to 3.7.2 (which doesn't look like it has been pushed to the Foreman repositories yet). There was a significant performance regression introduced by 3.7.0 which has been fixed, and in my testing migrations only took a bit more than 1/3 as long as they had been taking previously. On Fri, Oct 23, 2020 at 10:47 AM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: Depending on the size of your pulp2 installation I understand that the 2to3migration can take quite some time. 
In my environment it takes approx. 6-7hrs. However, I expect consecutive migrations to be faster, based on the documentation: "When you are ready to switch to Pulp 3: run migration, then stop Pulp 2 services (so no new data is coming in), run migration for the last time (it should not take long)." But all my migration runs are equally long, the 'sub-task' "Migrating rpm content to Pulp 3 erratum" processes about 9 items every time which takes a long time. Is this expected behaviour? Thanks, Adam ___ Pulp-list mailing list Pulp-list@redhat.com<mailto:Pulp-list@redhat.com> https://www.redhat.com/mailman/listinfo/pulp-list ___ Pulp-list mailing list Pulp-list@redhat.com https://www.redhat.com/mailman/listinfo/pulp-list
Re: [Pulp-list] 2to3-migration takes a long time
have tested not with 0.5.0 and it did indeed fix the problem. The initial migrations was way faster, about 2.5hrs, and the subsequent migrations are really fast. Thanks! //Adam From: Daniel Alley Sent: 23 October 2020 19:02 To: Winberg Adam Cc: pulp-list@redhat.com Subject: Re: [Pulp-list] 2to3-migration takes a long time No, this is with 0.4.0, since 0.5.0 is quite new I haven't updated yet. Will do so and try again. If it's 0.4.0, then this is actually not surprising after all. This was a bug that was fixed in 0.5.0 https://pulp.plan.io/issues/7280 On Fri, Oct 23, 2020 at 12:45 PM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: > This is with the very latest version of the migration plugin (0.5.0)? No, this is with 0.4.0, since 0.5.0 is quite new I haven't updated yet. Will do so and try again. > Is it actually equally long, or just not 'reasonably' faster? Is it only the > "migrating rpm content to Pulp 3 erratum" sub-item which reports doing a lot > of work or are many of them doing so? I would say that the runtime is very similar. And it looks like many if not all sub-items are repeating their work but since the 'pulp3 erratum' is the longest running that's the one that stands out. 
Here are some progress reports on a subsequent run which should only resulted in a handful of new rpm's: { "code": "migrating.rpm.content", "done": 32348, "message": "Migrating rpm content to Pulp 3 rpm", "state": "completed", "suffix": null, "total": 32348 }, { "code": "premigrating.content.general", "done": 89690, "message": "Pre-migrating Pulp 2 ERRATUM content (general info)", "state": "completed", "suffix": null, "total": 89690 }, { "code": "premigrating.content.general", "done": 39, "message": "Pre-migrating Pulp 2 RPM content (general info)", "state": "completed", "suffix": null, "total": 43 }, { "code": "migrating.rpm.content", "done": 89690, "message": "Migrating rpm content to Pulp 3 erratum", "state": "completed", "suffix": null, "total": 89690 }, Should these numbers be 0 on a subsequent run if nothing has changed? //Adam From: Daniel Alley mailto:dal...@redhat.com>> Sent: 23 October 2020 17:33 To: Winberg Adam Cc: pulp-list@redhat.com<mailto:pulp-list@redhat.com> Subject: Re: [Pulp-list] 2to3-migration takes a long time Hey Adam, It's not expected for subsequent migrations to take equally long, that is strange and concerning. This is with the very latest version of the migration plugin (0.5.0)? Is it actually equally long, or just not 'reasonably' faster? Is it only the "migrating rpm content to Pulp 3 erratum" sub-item which reports doing a lot of work or are many of them doing so? I do have some good news though, the 6-7 hour runtime will improve significantly once you can upgrade to 3.7.2 (which doesn't look like it has been pushed to the Foreman repositories yet). There was a significant performance regression introduced by 3.7.0 which has been fixed, and in my testing migrations only took a bit more than 1/3 as long as they had been taking previously. On Fri, Oct 23, 2020 at 10:47 AM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: Depending on the size of your pulp2 installation I understand that the 2to3migration can take quite some time. 
In my environment it takes approx. 6-7hrs. However, I expect consecutive migrations to be faster, based on the documentation: "When you are ready to switch to Pulp 3: run migration, then stop Pulp 2 services (so no new data is coming in), run migration for the last time (it should not take long)." But all my migration runs are equally long, the 'sub-task' "Migrating rpm content to Pulp 3 erratum" processes about 9 items every time which takes a long time. Is this expected behaviour? Thanks, Adam ___ Pulp-list mailing list Pulp-list@redhat.com<mailto:Pulp-list@redhat.com> https://www.redhat.com/mailman/listinfo/pulp-list ___ Pulp-list mailing list Pulp-list@redhat.com https://www.redhat.com/mailman/listinfo/pulp-list
Re: [Pulp-list] 2to3-migration takes a long time
> This is with the very latest version of the migration plugin (0.5.0)? No, this is with 0.4.0, since 0.5.0 is quite new I haven't updated yet. Will do so and try again. > Is it actually equally long, or just not 'reasonably' faster? Is it only the > "migrating rpm content to Pulp 3 erratum" sub-item which reports doing a lot > of work or are many of them doing so? I would say that the runtime is very similar. And it looks like many if not all sub-items are repeating their work, but since the 'pulp3 erratum' one is the longest running, that's the one that stands out. Here are some progress reports from a subsequent run which should only have resulted in a handful of new RPMs: { "code": "migrating.rpm.content", "done": 32348, "message": "Migrating rpm content to Pulp 3 rpm", "state": "completed", "suffix": null, "total": 32348 }, { "code": "premigrating.content.general", "done": 89690, "message": "Pre-migrating Pulp 2 ERRATUM content (general info)", "state": "completed", "suffix": null, "total": 89690 }, { "code": "premigrating.content.general", "done": 39, "message": "Pre-migrating Pulp 2 RPM content (general info)", "state": "completed", "suffix": null, "total": 43 }, { "code": "migrating.rpm.content", "done": 89690, "message": "Migrating rpm content to Pulp 3 erratum", "state": "completed", "suffix": null, "total": 89690 }, Should these numbers be 0 on a subsequent run if nothing has changed? //Adam From: Daniel Alley Sent: 23 October 2020 17:33 To: Winberg Adam Cc: pulp-list@redhat.com Subject: Re: [Pulp-list] 2to3-migration takes a long time Hey Adam, It's not expected for subsequent migrations to take equally long, that is strange and concerning. This is with the very latest version of the migration plugin (0.5.0)? Is it actually equally long, or just not 'reasonably' faster? Is it only the "migrating rpm content to Pulp 3 erratum" sub-item which reports doing a lot of work or are many of them doing so? 
I do have some good news though, the 6-7 hour runtime will improve significantly once you can upgrade to 3.7.2 (which doesn't look like it has been pushed to the Foreman repositories yet). There was a significant performance regression introduced by 3.7.0 which has been fixed, and in my testing migrations only took a bit more than 1/3 as long as they had been taking previously. On Fri, Oct 23, 2020 at 10:47 AM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: Depending on the size of your pulp2 installation I understand that the 2to3migration can take quite some time. In my environment it takes approx. 6-7hrs. However, I expect consecutive migrations to be faster, based on the documentation: "When you are ready to switch to Pulp 3: run migration, then stop Pulp 2 services (so no new data is coming in), run migration for the last time (it should not take long)." But all my migration runs are equally long, the 'sub-task' "Migrating rpm content to Pulp 3 erratum" processes about 9 items every time which takes a long time. Is this expected behaviour? Thanks, Adam ___ Pulp-list mailing list Pulp-list@redhat.com<mailto:Pulp-list@redhat.com> https://www.redhat.com/mailman/listinfo/pulp-list ___ Pulp-list mailing list Pulp-list@redhat.com https://www.redhat.com/mailman/listinfo/pulp-list
[Pulp-list] 2to3-migration takes a long time
Depending on the size of your pulp2 installation I understand that the 2to3 migration can take quite some time. In my environment it takes approx. 6-7 hours. However, I expect consecutive migrations to be faster, based on the documentation: "When you are ready to switch to Pulp 3: run migration, then stop Pulp 2 services (so no new data is coming in), run migration for the last time (it should not take long)." But all my migration runs are equally long; the sub-task "Migrating rpm content to Pulp 3 erratum" processes about 9 items every time, which takes a long time. Is this expected behaviour? Thanks, Adam
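For context, the re-runnable migration discussed in this thread is driven through the migration-plans API. The sketch below is a hypothetical httpie workflow based on the calls quoted elsewhere in this archive; the port, credentials, and the minimal plan body are assumptions, so check the pulp-2to3-migration docs for the exact plan schema for your repositories.

```shell
# Hypothetical migration-plan workflow; adapt host/auth/plan to your setup.
cat > plan.json <<'EOF'
{
  "plan": {
    "plugins": [
      {"type": "rpm"}
    ]
  }
}
EOF
python3 -m json.tool plan.json    # sanity-check the JSON before posting
# Then, against a live Pulp 3 API (shown for reference only):
#   http --auth admin:password POST :24817/pulp/api/v3/migration-plans/ < plan.json
#   http --auth admin:password POST :24817/pulp/api/v3/migration-plans/<uuid>/run/
```

Re-running the same plan is what should be incremental; per the thread above, equally long re-runs were a bug in plugin 0.4.0 fixed in 0.5.0 (https://pulp.plan.io/issues/7280).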
Re: [Pulp-list] pulp2 migration: AccessPolicy matching query does not exist
> Could you share why you wanted to start completely from scratch and not just > re-run the existing migration plan or run a new one? I've had some problems with bugs in the migration, so when a migration fails I've flushed the db to rerun from scratch (after 'fixing' the bug, which mostly meant removing certain packages from my pulp2 repo). I think the bugs I've encountered have been solved in newer versions, so hopefully I won't have to flush anymore... //Adam From: Tatiana Tereshchenko Sent: 15 October 2020 14:09 To: Winberg Adam Cc: pulp-list@redhat.com Subject: Re: [Pulp-list] pulp2 migration: AccessPolicy matching query does not exist As for the teardown method, it was written for use with our tests and is not exposed as a command in any way. If it's helpful, here is the list of tables which are NOT SAFE to remove data from https://github.com/pulp/pulp-2to3-migration/blob/master/pulp_2to3_migration/tests/functional/constants.py#L1-L14. I hope we can provide the reset option soon, and you'll no longer need any workarounds. Tanya On Thu, Oct 15, 2020 at 2:02 PM Tatiana Tereshchenko mailto:ttere...@redhat.com>> wrote: Good to know that it's not for the upgrade. We've planned to have a way to restart migration from scratch but never got to it, and no one asked before. Here is a story I filed, feel free to provide any feedback there https://pulp.plan.io/issues/7714. It would be helpful for us to understand your use case. Could you share why you wanted to start completely from scratch and not just re-run the existing migration plan or run a new one? Thanks, Tanya On Wed, Oct 14, 2020 at 7:07 PM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: > probably because we never ask to wipe out the database for upgrades. My reason for doing a 'flush' was to rerun my pulp2 migration from scratch, so not really because of the upgrade. But with the background provided by you, issue https://pulp.plan.io/issues/6963 now makes more sense to me. 
I guess I should preferably use the 'teardown' util mentioned there instead? It is however unclear to me how to use that. The 'reset_db' won't work for me since we have a centralized postgres infrastructure where I don't have permissions to drop/create db's (I had to get the help of a DBMS to drop my pulp-db). @Brian Bouterse - thanks for creating the issue! //Adam From: Brian Bouterse mailto:bmbou...@redhat.com>> Sent: 14 October 2020 18:17 To: Tatiana Tereshchenko Cc: Winberg Adam; pulp-list@redhat.com<mailto:pulp-list@redhat.com> Subject: Re: [Pulp-list] pulp2 migration: AccessPolicy matching query does not exist I've filed this issue tracking the improvement which would allow users to run `flush` and not experience this problem. https://pulp.plan.io/issues/7710 On Wed, Oct 14, 2020 at 12:14 PM Tatiana Tereshchenko mailto:ttere...@redhat.com>> wrote: Adam, I agree we lack documentation on resetting the environment and the database, probably because we never ask to wipe out the database for upgrades. The instructions are usually provided with release and ask you basically to run the latest pulp_installer. There is some data which is provided with migrations and which has to be present in the database, that's why dropping the database and applying migrations work and `flush` does not. `flush` just removes data and keeps all the tables in place, so migrations are not re-applied. So for now, please do not use `flush` if you want to start using pulp 3 db from scratch. We potentially can provide a separate command or find some other way to fill in the essential data back. Please file an issue in our tracker if such feature/command is helpful for you. https://pulp.plan.io/projects/pulp/issues/new As a side note, if you are interested, reset_db command drops database. 
It's provided as a part of django-extensions package https://django-extensions.readthedocs.io/en/latest/command_extensions.html I'm glad that the migration runs for you now, Tanya On Wed, Oct 14, 2020 at 3:52 PM Winberg Adam mailto:adam.winb...@smhi.se>> wrote: Thanks for the reply! I use 'flush' to clear the db, I don't have a 'reset_db' command. Otherwise that's pretty much my process. The migrate command returns 'No migrations to apply'. Running 'pulpcore-manager show-migrations' shows that all migrations, including 'guardian.0001/guardian.0002' has checkmarks ([X]). guardian [X] 0001_initial [X] 0002_generic_permissions_index The creation of the migration plan works so I assume my admin user is ok. I am using an rpm based installation on RHEL8, with python3-pulp-2to3-migration-0.4.0-1.el8.noarch python3-pulp-rpm-3.7.0-1.el8.noarch python3-pulpcore-3.7.1-3.el8.noarch I don't know what went wrong, but I surrendered and dropped my DB and redid the migrations from scratch - and now it works. Is there a documented in
Re: [Pulp-list] pulp2 migration: AccessPolicy matching query does not exist
> probably because we never ask to wipe out the database for upgrades. My reason for doing a 'flush' was to rerun my pulp2 migration from scratch, so not really because of the upgrade. But with the background provided by you, issue https://pulp.plan.io/issues/6963 now makes more sense to me. I guess I should preferably use the 'teardown' util mentioned there instead? It is however unclear to me how to use that. The 'reset_db' won't work for me since we have a centralized postgres infrastructure where I don't have permissions to drop/create db's (I had to get the help of a DBMS to drop my pulp-db). @Brian Bouterse - thanks for creating the issue! //Adam From: Brian Bouterse Sent: 14 October 2020 18:17 To: Tatiana Tereshchenko Cc: Winberg Adam; pulp-list@redhat.com Subject: Re: [Pulp-list] pulp2 migration: AccessPolicy matching query does not exist I've filed this issue tracking the improvement which would allow users to run `flush` and not experience this problem. https://pulp.plan.io/issues/7710 On Wed, Oct 14, 2020 at 12:14 PM Tatiana Tereshchenko mailto:ttere...@redhat.com>> wrote: Adam, I agree we lack documentation on resetting the environment and the database, probably because we never ask to wipe out the database for upgrades. The instructions are usually provided with release and ask you basically to run the latest pulp_installer. There is some data which is provided with migrations and which has to be present in the database, that's why dropping the database and applying migrations work and `flush` does not. `flush` just removes data and keeps all the tables in place, so migrations are not re-applied. So for now, please do not use `flush` if you want to start using pulp 3 db from scratch. We potentially can provide a separate command or find some other way to fill in the essential data back. Please file an issue in our tracker if such feature/command is helpful for you. 
https://pulp.plan.io/projects/pulp/issues/new

As a side note, if you are interested, the reset_db command drops the database. It's provided as part of the django-extensions package: https://django-extensions.readthedocs.io/en/latest/command_extensions.html

I'm glad that the migration runs for you now,
Tanya

On Wed, Oct 14, 2020 at 3:52 PM Winberg Adam <adam.winb...@smhi.se> wrote:

Thanks for the reply! I use 'flush' to clear the db, I don't have a 'reset_db' command. Otherwise that's pretty much my process. The migrate command returns 'No migrations to apply'. Running 'pulpcore-manager show-migrations' shows that all migrations, including 'guardian.0001/guardian.0002', have checkmarks ([X]):

guardian
 [X] 0001_initial
 [X] 0002_generic_permissions_index

The creation of the migration plan works, so I assume my admin user is ok. I am using an rpm based installation on RHEL8, with:

python3-pulp-2to3-migration-0.4.0-1.el8.noarch
python3-pulp-rpm-3.7.0-1.el8.noarch
python3-pulpcore-3.7.1-3.el8.noarch

I don't know what went wrong, but I surrendered and dropped my DB and redid the migrations from scratch - and now it works. Is there a documented instruction on upgrading existing installations?
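Tanya's point about why `flush` breaks a fresh start while dropping the database does not can be sketched with a toy schema (sqlite3 here purely for illustration; Pulp itself runs on PostgreSQL, and the table/migration names below are made up): a flush-style DELETE keeps the tables and the migration bookkeeping, so seed rows inserted by data migrations are gone for good, while dropping everything forces the migrations - and their seed data - to be re-applied.

```python
import sqlite3

def apply_migrations(db):
    """Emulates 'pulpcore-manager migrate': creates schema and seed data,
    recording each applied migration so it is never applied twice."""
    db.execute("CREATE TABLE IF NOT EXISTS django_migrations (name TEXT PRIMARY KEY)")
    applied = {row[0] for row in db.execute("SELECT name FROM django_migrations")}
    if "0001_access_policy" not in applied:
        db.execute("CREATE TABLE access_policy (viewset_name TEXT)")
        db.execute("INSERT INTO access_policy VALUES ('migration-plans')")  # seed row
        db.execute("INSERT INTO django_migrations VALUES ('0001_access_policy')")

db = sqlite3.connect(":memory:")
apply_migrations(db)

# 'flush': delete rows from app tables, but the migration records survive ...
db.execute("DELETE FROM access_policy")
apply_migrations(db)  # ... so this is a no-op, i.e. "No migrations to apply"
after_flush = db.execute("SELECT COUNT(*) FROM access_policy").fetchone()[0]

# dropping the schema instead makes the migration (and its seed data) re-run
db.execute("DROP TABLE access_policy")
db.execute("DROP TABLE django_migrations")
apply_migrations(db)
after_drop = db.execute("SELECT COUNT(*) FROM access_policy").fetchone()[0]

print(after_flush, after_drop)  # prints: 0 1 - seed row missing after flush, back after drop
```

This is exactly the shape of the AccessPolicy failure: the row the code expects was inserted by a data migration, and `flush` removes it without marking the migration as unapplied.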
//Adam

From: Tatiana Tereshchenko <ttere...@redhat.com>
Sent: 14 October 2020 14:15
To: Winberg Adam
Cc: pulp-list@redhat.com
Subject: Re: [Pulp-list] pulp2 migration: AccessPolicy matching query does not exist

On Wed, Oct 14, 2020 at 2:12 PM Tatiana Tereshchenko <ttere...@redhat.com> wrote:

Hi Adam,

My understanding is that you did the following:
* stop pulp services
* pulpcore-manager (or django-admin) reset_db
* pulpcore-manager migrate
* pulpcore-manager reset-admin-password --password password
* start services
* http POST :/pulp/api/v3/migration-plans/ < your_migration_plan.json
* http POST :/pulp/api/v3/migration-plans/48d03a72-96a1-4d36-9f8b-9a57e97846ef/run/

Sent too early :) I can't reproduce it so far, so any hints about what can be special about your environment or installation would be appreciated. Make sure that you have at least one user which has admin privileges and that the guardian migrations indeed ran:

Applying guardian.0001_initial... OK
Applying guardian.0002_generic_permissions_index... OK

Tanya

On Wed, Oct 14, 2020 at 8:02 AM Winberg Adam <adam.winb...@smhi.se> wrote:

Hello, so I updated my pulp3 installation from 3.4 to 3.7 and tried to rerun my pulp2 migration - but it errors out with "AccessPolicy matching query does not exist". Anyone know why? I flushed my db, reran the 'migrate' job, created a pulp2migration plan (which worked fine) and then tried to run it. Here's the complete error:

Oct 14 05:43:26 gunicorn[2150852]: pulp: django.request:ERROR: Internal Server Error: /pulp/api/v3/migration-plans/48d03a72-96a1-4d36-9f8b-9a57e97846ef/run/
Oct 14 05:43:26 gunic
Re: [Pulp-list] pulp2 migration: AccessPolicy matching query does not exist
Thanks for the reply! I use 'flush' to clear the db, I don't have a 'reset_db' command. Otherwise that's pretty much my process. The migrate command returns 'No migrations to apply'. Running 'pulpcore-manager show-migrations' shows that all migrations, including 'guardian.0001/guardian.0002', have checkmarks ([X]):

guardian
 [X] 0001_initial
 [X] 0002_generic_permissions_index

The creation of the migration plan works, so I assume my admin user is ok. I am using an rpm based installation on RHEL8, with:

python3-pulp-2to3-migration-0.4.0-1.el8.noarch
python3-pulp-rpm-3.7.0-1.el8.noarch
python3-pulpcore-3.7.1-3.el8.noarch

I don't know what went wrong, but I surrendered and dropped my DB and redid the migrations from scratch - and now it works. Is there a documented instruction on upgrading existing installations?

//Adam

From: Tatiana Tereshchenko
Sent: 14 October 2020 14:15
To: Winberg Adam
Cc: pulp-list@redhat.com
Subject: Re: [Pulp-list] pulp2 migration: AccessPolicy matching query does not exist

On Wed, Oct 14, 2020 at 2:12 PM Tatiana Tereshchenko <ttere...@redhat.com> wrote:

Hi Adam,

My understanding is that you did the following:
* stop pulp services
* pulpcore-manager (or django-admin) reset_db
* pulpcore-manager migrate
* pulpcore-manager reset-admin-password --password password
* start services
* http POST :/pulp/api/v3/migration-plans/ < your_migration_plan.json
* http POST :/pulp/api/v3/migration-plans/48d03a72-96a1-4d36-9f8b-9a57e97846ef/run/

Sent too early :) I can't reproduce it so far, so any hints about what can be special about your environment or installation would be appreciated. Make sure that you have at least one user which has admin privileges and that the guardian migrations indeed ran:

Applying guardian.0001_initial... OK
Applying guardian.0002_generic_permissions_index... OK

Tanya

On Wed, Oct 14, 2020 at 8:02 AM Winberg Adam <adam.winb...@smhi.se> wrote:

Hello, so I updated my pulp3 installation from 3.4 to 3.7 and tried to rerun my pulp2 migration - but it errors out with "AccessPolicy matching query does not exist". Anyone know why? I flushed my db, reran the 'migrate' job, created a pulp2migration plan (which worked fine) and then tried to run it. Here's the complete error:

Oct 14 05:43:26 gunicorn[2150852]: pulp: django.request:ERROR: Internal Server Error: /pulp/api/v3/migration-plans/48d03a72-96a1-4d36-9f8b-9a57e97846ef/run/
Oct 14 05:43:26 gunicorn[2150852]: Traceback (most recent call last):
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
Oct 14 05:43:26 gunicorn[2150852]: response = get_response(request)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
Oct 14 05:43:26 gunicorn[2150852]: response = self.process_exception_by_middleware(e, request)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
Oct 14 05:43:26 gunicorn[2150852]: response = wrapped_callback(request, *callback_args, **callback_kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
Oct 14 05:43:26 gunicorn[2150852]: return view_func(*args, **kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/rest_framework/viewsets.py", line 114, in view
Oct 14 05:43:26 gunicorn[2150852]: return self.dispatch(request, *args, **kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 505, in dispatch
Oct 14 05:43:26 gunicorn[2150852]: response = self.handle_exception(exc)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 465, in handle_exception
Oct 14 05:43:26 gunicorn[2150852]: self.raise_uncaught_exception(exc)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 476, in raise_uncaught_exception
Oct 14 05:43:26 gunicorn[2150852]: raise exc
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 502, in dispatch
Oct 14 05:43:26 gunicorn[2150852]: response = handler(request, *args, **kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/viewsets.py", line 85, in run
Oct 14 05:43:26 gunicorn[2150852]: 'dry_run': dry_run
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/pulpcore/tasking/tasks.py", line 236, in enqueue_with_reservation
Oct 14 05:43:26
[Pulp-list] pulp2 migration: AccessPolicy matching query does not exist
Hello, so I updated my pulp3 installation from 3.4 to 3.7 and tried to rerun my pulp2 migration - but it errors out with "AccessPolicy matching query does not exist". Anyone know why? I flushed my db, reran the 'migrate' job, created a pulp2migration plan (which worked fine) and then tried to run it. Here's the complete error:

Oct 14 05:43:26 gunicorn[2150852]: pulp: django.request:ERROR: Internal Server Error: /pulp/api/v3/migration-plans/48d03a72-96a1-4d36-9f8b-9a57e97846ef/run/
Oct 14 05:43:26 gunicorn[2150852]: Traceback (most recent call last):
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/core/handlers/exception.py", line 34, in inner
Oct 14 05:43:26 gunicorn[2150852]: response = get_response(request)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/core/handlers/base.py", line 115, in _get_response
Oct 14 05:43:26 gunicorn[2150852]: response = self.process_exception_by_middleware(e, request)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/core/handlers/base.py", line 113, in _get_response
Oct 14 05:43:26 gunicorn[2150852]: response = wrapped_callback(request, *callback_args, **callback_kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
Oct 14 05:43:26 gunicorn[2150852]: return view_func(*args, **kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/rest_framework/viewsets.py", line 114, in view
Oct 14 05:43:26 gunicorn[2150852]: return self.dispatch(request, *args, **kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 505, in dispatch
Oct 14 05:43:26 gunicorn[2150852]: response = self.handle_exception(exc)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 465, in handle_exception
Oct 14 05:43:26 gunicorn[2150852]: self.raise_uncaught_exception(exc)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 476, in raise_uncaught_exception
Oct 14 05:43:26 gunicorn[2150852]: raise exc
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/rest_framework/views.py", line 502, in dispatch
Oct 14 05:43:26 gunicorn[2150852]: response = handler(request, *args, **kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/pulp_2to3_migration/app/viewsets.py", line 85, in run
Oct 14 05:43:26 gunicorn[2150852]: 'dry_run': dry_run
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/pulpcore/tasking/tasks.py", line 236, in enqueue_with_reservation
Oct 14 05:43:26 gunicorn[2150852]: **parent_kwarg,
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/db/models/manager.py", line 82, in manager_method
Oct 14 05:43:26 gunicorn[2150852]: return getattr(self.get_queryset(), name)(*args, **kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 422, in create
Oct 14 05:43:26 gunicorn[2150852]: obj.save(force_insert=True, using=self.db)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django_lifecycle/mixins.py", line 132, in save
Oct 14 05:43:26 gunicorn[2150852]: self._run_hooked_methods(AFTER_CREATE)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django_lifecycle/mixins.py", line 207, in _run_hooked_methods
Oct 14 05:43:26 gunicorn[2150852]: method()
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django_lifecycle/decorators.py", line 69, in func
Oct 14 05:43:26 gunicorn[2150852]: hooked_method(*args, **kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/pulpcore/app/models/access_policy.py", line 60, in add_perms
Oct 14 05:43:26 gunicorn[2150852]: access_policy = AccessPolicy.objects.get(viewset_name=self.ACCESS_POLICY_VIEWSET_NAME)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/db/models/manager.py", line 82, in manager_method
Oct 14 05:43:26 gunicorn[2150852]: return getattr(self.get_queryset(), name)(*args, **kwargs)
Oct 14 05:43:26 gunicorn[2150852]: File "/usr/lib/python3.6/site-packages/django/db/models/query.py", line 408, in get
Oct 14 05:43:26 gunicorn[2150852]: self.model._meta.object_name
Oct 14 05:43:26 gunicorn[2150852]: pulpcore.app.models.access_policy.AccessPolicy.DoesNotExist: AccessPolicy matching query does not exist.

Regards
//Adam

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list
Re: [Pulp-list] rpm plugin: multirepo copy of errata not working
I have now flushed and migrated a couple of times and tried to sync my migrated 'appstream' repo against the migrated 'appstream' remote. Both times I've done this the result is:

{
  "child_tasks": [],
  "created_resources": [],
  "error": {
    "description": "Incoming and existing advisories have the same id and timestamp but different and intersecting package lists. At least one of them is wrong. Advisory id: RHBA-2019:2723",
    "traceback": " File \"/usr/lib/python3.6/site-packages/rq/worker.py\", line 883, in perform_job\nrv = job.perform()\n File \"/usr/lib/python3.6/site-packages/rq/job.py\", line 657, in perform\n self._result = self._execute()\n File \"/usr/lib/python3.6/site-packages/rq/job.py\", line 663, in _execute\n return self.func(*self.args, **self.kwargs)\n File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/tasks/synchronizing.py\", line 264, in synchronize\ndv.create()\n File \"/usr/lib/python3.6/site-packages/pulpcore/plugin/stages/declarative_version.py\", line 148, in create\nloop.run_until_complete(pipeline)\n File \"/usr/lib/python3.6/site-packages/pulpcore/app/models/repository.py\", line 776, in __exit__\nrepository.finalize_new_version(self)\n File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/models/repository.py\", line 151, in finalize_new_version\nresolve_advisories(new_version, previous_version)\n File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/advisory.py\", line 79, in resolve_advisories\nprevious_advisory, added_advisory\n File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/advisory.py\", line 154, in resolve_advisory_conflict\nraise AdvisoryConflict(_('Incoming and existing advisories have the same id and '\n"
  },
  "finished_at": "2020-10-02T14:16:17.768388Z",
  "name": "pulp_rpm.app.tasks.synchronizing.synchronize",
  "parent_task": null,
  "progress_reports": [
    {"code": "parsing.modulemds", "done": 206, "message": "Parsed Modulemd", "state": "completed", "suffix": null, "total": 206},
    {"code": "parsing.modulemd_defaults", "done": 42, "message": "Parsed Modulemd-defaults", "state": "completed", "suffix": null, "total": 42},
    {"code": "parsing.comps", "done": 67, "message": "Parsed Comps", "state": "completed", "suffix": null, "total": 67},
    {"code": "parsing.advisories", "done": 784, "message": "Parsed Advisories", "state": "completed", "suffix": null, "total": 784},
    {"code": "parsing.packages", "done": 12185, "message": "Parsed Packages", "state": "completed", "suffix": null, "total": 12185},
    {"code": "downloading.metadata", "done": 5, "message": "Downloading Metadata Files", "state": "completed", "suffix": null, "total": null},
    {"code": "downloading.artifacts", "done": 64, "message": "Downloading Artifacts", "state": "completed", "suffix": null, "total": null},
    {"code": "associating.content", "done": 975, "message": "Associating Content", "state": "completed", "suffix": null, "total": null}
  ],
  "pulp_created": "2020-10-02T14:14:07.970174Z",
  "pulp_finished_at": "2020-10-02T14:16:17.768388Z",
  "pulp_href": "/pulp/api/v3/tasks/0eb7d3a2-c8f4-42bd-af52-b25ccc6a8015/",
  "reserved_resources_record": [
    "/pulp/api/v3/repositories/rpm/rpm/44baa281-e85f-40cc-ab76-9a91e45237cf/",
    "/pulp/api/v3/remotes/rpm/rpm/e48a62af-bf49-4632-8a57-2f0c66a82be9/"
  ],
  "started_at": "2020-10-02T14:14:08.531173Z",
  "state": "failed",
  "task_group": null,
  "worker": "/pulp/api/v3/workers/9e02cddf-f703-41f3-8c84-6b03b3592887/"
}

So there is a lot of content be
Re: [Pulp-list] rpm plugin: multirepo copy of errata not working
> Ok, tried that now, synced a total of 72 advisories in 3-4 copy operations.

This was a bit unclear I realize, but I created new pulp3 repos for appstream and baseos and synced them with our redhat cdn remote as source. Then I _copied_ a total of 72 advisories from this new appstream repo to a new empty repo, while using the new baseos repo as a 'dependency source' paired with another new empty repo.

//Adam

From: pulp-list-boun...@redhat.com on behalf of Winberg Adam
Sent: 30 September 2020 09:06
To: Daniel Alley
Cc: pulp-list@redhat.com
Subject: Re: [Pulp-list] rpm plugin: multirepo copy of errata not working

We're glad to help, Pulp is extremely useful for us so if we can contribute we are happy to do so. I created an issue for this: https://pulp.plan.io/issues/7625

> If you sync the Pulp 3 repository using the migrated remote, what happens?

Do you mean syncing the 'my-new-repo1' repo with the migrated appstream remote?

> Are the Pulp 2 repositories you're migrating modified significantly from when they were originally synced?

Well, in the case of the appstream repo it has been synced against the Red Hat source every night, so the content has certainly been modified. Other than that, no.

> If you make brand new Pulp 3 repositories and try the same copy operation, does the same weird copy behavior occur?

Do you mean new pulp3 repos synced using the appstream and baseos remotes (w. content from redhat cdn) and then using that as the source of the copy? Ok, tried that now, synced a total of 72 advisories in 3-4 copy operations. The copy itself between my new pulp3 appstream repo and the new empty repo works well and 610 rpm packages have been copied. But there are no dependencies copied from the new baseos repo, no content at all.
//Adam

From: Daniel Alley
Sent: 29 September 2020 20:28
To: Winberg Adam
Cc: pulp-list@redhat.com
Subject: Re: [Pulp-list] rpm plugin: multirepo copy of errata not working

Hi Adam,

Thank you for providing feedback on some of these rough edges, it is extremely helpful. There are a couple of things you can try that would give us some useful information.

If you sync the Pulp 3 repository using the migrated remote, what happens? Does the repository change or does it stay the same? Are the Pulp 2 repositories you're migrating modified significantly from when they were originally synced? If you make brand new Pulp 3 repositories and try the same copy operation, does the same weird copy behavior occur?

Also, if it wouldn't be too much trouble, could you file this as an issue on our bug tracker (https://pulp.plan.io/) with the details? We can continue the discussion here, but if there is a lot of information it's better to keep it in one place on the issue.

Thanks again,
Daniel

On Tue, Sep 29, 2020 at 8:21 AM Winberg Adam <adam.winb...@smhi.se> wrote:

I applied the patch from https://github.com/pulp/pulp_rpm/commit/712abdf1abb95c969b54fd2968a573189b77bcba and the copy then went through without errors. I'm a bit confused by the result however. I copied 16 advisories from the appstreams repo to my new empty repo and the copy ended up copying all modulemds and almost all packages (11466 of 12053). That doesn't seem right to me. And my other new repo ('my-new-repo2') is still empty, meaning that of those 11000 packages there were none that had any dependencies from the baseos repo, which also strikes me as odd. If I set 'dependency_solving=True' (contrary to the documentation) I end up with the same amount of packages in 'my-new-repo1' and 2 packages in 'my-new-repo2'. Am I misunderstanding something about this functionality?

//Adam

________
From: Winberg Adam
Sent: 28 September 2020 16:44
To: pulp-list@redhat.com
Subject: rpm plugin: multirepo copy of errata not working

I have succeeded in migrating my pulp2 content to pulp3 and all repos look complete (I've rerun the migration and no new repoversions are generated). In an attempt to test the multirepo copy functionality described at https://pulp-rpm.readthedocs.io/en/latest/workflows/copy.html#recipes , I created a couple of new, empty repos and tried to copy errata from my migrated RHEL8-appstream repo to them:

POST /pulp/api/v3/rpm/copy/
config:=[
  {"source_repo_version": "", "dest_repo": "my-new-repo1", "content": [$ADVISORY_HREF1]},
  {"source_repo_version": "", "dest_repo": "my-new-repo2", "content": []},
]
dependency_solving=False

All looks correct, but the operation ultimately fails with the following error:

"description": "Modulemd matching query does not exist.",
"traceback": " File \"/usr/lib/python3.6/site-packages
Re: [Pulp-list] rpm plugin: multirepo copy of errata not working
We're glad to help, Pulp is extremely useful for us so if we can contribute we are happy to do so. I created an issue for this: https://pulp.plan.io/issues/7625

> If you sync the Pulp 3 repository using the migrated remote, what happens?

Do you mean syncing the 'my-new-repo1' repo with the migrated appstream remote?

> Are the Pulp 2 repositories you're migrating modified significantly from when they were originally synced?

Well, in the case of the appstream repo it has been synced against the Red Hat source every night, so the content has certainly been modified. Other than that, no.

> If you make brand new Pulp 3 repositories and try the same copy operation, does the same weird copy behavior occur?

Do you mean new pulp3 repos synced using the appstream and baseos remotes (w. content from redhat cdn) and then using that as the source of the copy? Ok, tried that now, synced a total of 72 advisories in 3-4 copy operations. The copy itself between my new pulp3 appstream repo and the new empty repo works well and 610 rpm packages have been copied. But there are no dependencies copied from the new baseos repo, no content at all.

//Adam

From: Daniel Alley
Sent: 29 September 2020 20:28
To: Winberg Adam
Cc: pulp-list@redhat.com
Subject: Re: [Pulp-list] rpm plugin: multirepo copy of errata not working

Hi Adam,

Thank you for providing feedback on some of these rough edges, it is extremely helpful. There are a couple of things you can try that would give us some useful information.

If you sync the Pulp 3 repository using the migrated remote, what happens? Does the repository change or does it stay the same? Are the Pulp 2 repositories you're migrating modified significantly from when they were originally synced? If you make brand new Pulp 3 repositories and try the same copy operation, does the same weird copy behavior occur?

Also, if it wouldn't be too much trouble, could you file this as an issue on our bug tracker (https://pulp.plan.io/) with the details? We can continue the discussion here, but if there is a lot of information it's better to keep it in one place on the issue.

Thanks again,
Daniel

On Tue, Sep 29, 2020 at 8:21 AM Winberg Adam <adam.winb...@smhi.se> wrote:

I applied the patch from https://github.com/pulp/pulp_rpm/commit/712abdf1abb95c969b54fd2968a573189b77bcba and the copy then went through without errors. I'm a bit confused by the result however. I copied 16 advisories from the appstreams repo to my new empty repo and the copy ended up copying all modulemds and almost all packages (11466 of 12053). That doesn't seem right to me. And my other new repo ('my-new-repo2') is still empty, meaning that of those 11000 packages there were none that had any dependencies from the baseos repo, which also strikes me as odd. If I set 'dependency_solving=True' (contrary to the documentation) I end up with the same amount of packages in 'my-new-repo1' and 2 packages in 'my-new-repo2'. Am I misunderstanding something about this functionality?

//Adam

________
From: Winberg Adam
Sent: 28 September 2020 16:44
To: pulp-list@redhat.com
Subject: rpm plugin: multirepo copy of errata not working

I have succeeded in migrating my pulp2 content to pulp3 and all repos look complete (I've rerun the migration and no new repoversions are generated).

In an attempt to test the multirepo copy functionality described at https://pulp-rpm.readthedocs.io/en/latest/workflows/copy.html#recipes , I created a couple of new, empty repos and tried to copy errata from my migrated RHEL8-appstream repo to them:

POST /pulp/api/v3/rpm/copy/
config:=[
  {"source_repo_version": "", "dest_repo": "my-new-repo1", "content": [$ADVISORY_HREF1]},
  {"source_repo_version": "", "dest_repo": "my-new-repo2", "content": []},
]
dependency_solving=False

All looks correct, but the operation ultimately fails with the following error:

"description": "Modulemd matching query does not exist.",
"traceback": " File \"/usr/lib/python3.6/site-packages/rq/worker.py\", line 883, in perform_job\nrv = job.perform()\n File \"/usr/lib/python3.6/site-packages/rq/job.py\", line 657, in perform\n self._result = self._execute()\n File \"/usr/lib/python3.6/site-packages/rq/job.py\", line 663, in _execute\n return self.func(*self.args, **self.kwargs)\n File \"/usr/lib64/python3.6/contextlib.py\", line 52, in inner\nreturn func(*args, **kwds)\n File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/tasks/copy.py\", line 167, in copy_content\ncontent_to_copy |= find_children_of_content(content_to_copy, source_repo_version)\n File \"/usr
Re: [Pulp-list] rpm plugin: multirepo copy of errata not working
I applied the patch from https://github.com/pulp/pulp_rpm/commit/712abdf1abb95c969b54fd2968a573189b77bcba and the copy then went through without errors. I'm a bit confused by the result however. I copied 16 advisories from the appstreams repo to my new empty repo and the copy ended up copying all modulemds and almost all packages (11466 of 12053). That doesn't seem right to me. And my other new repo ('my-new-repo2') is still empty, meaning that of those 11000 packages there were none that had any dependencies from the baseos repo, which also strikes me as odd. If I set 'dependency_solving=True' (contrary to the documentation) I end up with the same amount of packages in 'my-new-repo1' and 2 packages in 'my-new-repo2'. Am I misunderstanding something about this functionality?

//Adam

From: Winberg Adam
Sent: 28 September 2020 16:44
To: pulp-list@redhat.com
Subject: rpm plugin: multirepo copy of errata not working

I have succeeded in migrating my pulp2 content to pulp3 and all repos look complete (I've rerun the migration and no new repoversions are generated).

In an attempt to test the multirepo copy functionality described at https://pulp-rpm.readthedocs.io/en/latest/workflows/copy.html#recipes , I created a couple of new, empty repos and tried to copy errata from my migrated RHEL8-appstream repo to them:

POST /pulp/api/v3/rpm/copy/
config:=[
  {"source_repo_version": "", "dest_repo": "my-new-repo1", "content": [$ADVISORY_HREF1]},
  {"source_repo_version": "", "dest_repo": "my-new-repo2", "content": []},
]
dependency_solving=False

All looks correct, but the operation ultimately fails with the following error:

"description": "Modulemd matching query does not exist.",
"traceback": " File \"/usr/lib/python3.6/site-packages/rq/worker.py\", line 883, in perform_job\nrv = job.perform()\n File \"/usr/lib/python3.6/site-packages/rq/job.py\", line 657, in perform\n self._result = self._execute()\n File \"/usr/lib/python3.6/site-packages/rq/job.py\", line 663, in _execute\n return self.func(*self.args, **self.kwargs)\n File \"/usr/lib64/python3.6/contextlib.py\", line 52, in inner\nreturn func(*args, **kwds)\n File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/tasks/copy.py\", line 167, in copy_content\ncontent_to_copy |= find_children_of_content(content_to_copy, source_repo_version)\n File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/tasks/copy.py\", line 74, in find_children_of_content\nname=name, stream=stream, version=version, context=context, arch=arch)\n File \"/usr/lib/python3.6/site-packages/django/db/models/query.py\", line 408, in get\nself.model._meta.object_name\n" },

Any ideas why this happens?

Regards,
Adam
[Pulp-list] rpm plugin: multirepo copy of errata not working
I have succeeded in migrating my pulp2 content to pulp3 and all repos look complete (I've rerun the migration and no new repoversions are generated).

In an attempt to test the multirepo copy functionality described at https://pulp-rpm.readthedocs.io/en/latest/workflows/copy.html#recipes , I created a couple of new, empty repos and tried to copy errata from my migrated RHEL8-appstream repo to them:

POST /pulp/api/v3/rpm/copy/
config:=[
  {"source_repo_version": "", "dest_repo": "my-new-repo1", "content": [$ADVISORY_HREF1]},
  {"source_repo_version": "", "dest_repo": "my-new-repo2", "content": []},
]
dependency_solving=False

All looks correct, but the operation ultimately fails with the following error:

"description": "Modulemd matching query does not exist.",
"traceback": " File \"/usr/lib/python3.6/site-packages/rq/worker.py\", line 883, in perform_job\nrv = job.perform()\n File \"/usr/lib/python3.6/site-packages/rq/job.py\", line 657, in perform\n self._result = self._execute()\n File \"/usr/lib/python3.6/site-packages/rq/job.py\", line 663, in _execute\n return self.func(*self.args, **self.kwargs)\n File \"/usr/lib64/python3.6/contextlib.py\", line 52, in inner\nreturn func(*args, **kwds)\n File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/tasks/copy.py\", line 167, in copy_content\ncontent_to_copy |= find_children_of_content(content_to_copy, source_repo_version)\n File \"/usr/lib/python3.6/site-packages/pulp_rpm/app/tasks/copy.py\", line 74, in find_children_of_content\nname=name, stream=stream, version=version, context=context, arch=arch)\n File \"/usr/lib/python3.6/site-packages/django/db/models/query.py\", line 408, in get\nself.model._meta.object_name\n" },

Any ideas why this happens?

Regards,
Adam
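The flattened httpie call above is easier to get right if the `config` list is built programmatically before being sent. A minimal sketch (all hrefs below are hypothetical placeholders; the real `source_repo_version` and advisory hrefs come from your own API queries, and the accepted fields are documented at the pulp_rpm copy workflow page linked above):

```python
import json

def build_copy_body(pairs, dependency_solving=False):
    """Assemble the body for POST /pulp/api/v3/rpm/copy/.

    `pairs` is a list of (source_repo_version_href, dest_repo_href, content_hrefs)
    tuples; content_hrefs may be an empty list for a pure dependency-target repo,
    as in the second entry of the example above.
    """
    config = [
        {"source_repo_version": src, "dest_repo": dest, "content": content}
        for src, dest, content in pairs
    ]
    return {"config": config, "dependency_solving": dependency_solving}

# Hypothetical hrefs, for illustration only:
appstream_v1 = "/pulp/api/v3/repositories/rpm/rpm/1111/versions/1/"
baseos_v1 = "/pulp/api/v3/repositories/rpm/rpm/2222/versions/1/"
advisory_href = "/pulp/api/v3/content/rpm/advisories/aaaa/"

body = build_copy_body([
    (appstream_v1, "/pulp/api/v3/repositories/rpm/rpm/3333/", [advisory_href]),
    (baseos_v1, "/pulp/api/v3/repositories/rpm/rpm/4444/", []),
])
print(json.dumps(body, indent=2))
```

The resulting dict can then be POSTed with any HTTP client (e.g. `requests.post(url, json=body, ...)`), which avoids the quoting pitfalls of embedding raw JSON in a shell command line.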
Re: [Pulp-list] 2to3migration fails on 'Package' object has no attribute '_remote_artifact_saver_cas'
Thanks! That's a very nice patch; after trying for a week to get the migration to work and continuously searching for and deleting duplicate pulp2 content, the migration worked right away after this patch. :) Or, at least regarding the duplicate content errors - I still have problems with a createrepo_c error, same as described in https://pulp.plan.io/issues/7193 (huge input lookup). That error only arises for a few packages, so it is manageable for me to purge them from pulp2 before migration, but it would be nice to not have to do that..

//Adam

From: Dennis Kliban
Sent: 27 September 2020 17:35
To: Winberg Adam
Cc: Ina Panova; pulp-list@redhat.com
Subject: Re: [Pulp-list] 2to3migration fails on 'Package' object has no attribute '_remote_artifact_saver_cas'

Here is a patch for pulpcore that fixes this problem[0].

[0] https://github.com/pulp/pulpcore/pull/937

On Mon, Sep 21, 2020 at 7:17 AM Winberg Adam <adam.winb...@smhi.se> wrote:

Thank you for your reply - yes, I did clean orphans after I removed the pulp2 repos that I suspected might be the cause. But as you say, there might be some other cause for this in my case. At the moment I have adjusted the code so the iteration only runs if the '_remote_artifact_saver_cas' attribute is present. Don't know if the result will be any good though, running the migration right now.

//Adam

From: Ina Panova <ipan...@redhat.com>
Sent: 21 September 2020 13:09
To: Winberg Adam
Cc: pulp-list@redhat.com
Subject: Re: [Pulp-list] 2to3migration fails on 'Package' object has no attribute '_remote_artifact_saver_cas'

Hi,

the provided steps in the mentioned issue are steps to reproduce the issue; however, unfortunately, this does not necessarily mean that this is the root cause of the manifested problem. Apparently we need to find a fix to properly handle duplicated declarative content in a batch. Looking at the steps you have tried to bypass the issue, have you run orphan cleanup after the pulp2 repos removal?

Regards,
Ina Panova
Senior Software Engineer | Pulp | Red Hat Inc.
"Do not go where the path may lead, go instead where there is no path and leave a trail."

On Sun, Sep 20, 2020 at 9:05 AM Winberg Adam <adam.winb...@smhi.se> wrote:

Hi,

When running the 2to3migration for the 'rpm' plugin, I get the following error:

AttributeError: 'Package' object has no attribute '_remote_artifact_saver_cas'

This is the same as specified in https://pulp.plan.io/issues/7147, and I actually had a couple of repos in pulp2 which shared the same feed. I removed the redundant repos, flushed the pulp3 db and reran the 2to3-migration but still got stuck on the same error. Anyone got any pointers how to resolve this?

//Adam
Re: [Pulp-list] 2to3migration fails on 'Package' object has no attribute '_remote_artifact_saver_cas'
Thank you for your reply - yes, I did clean orphans after I removed the pulp2 repos that I suspected might be the cause. But as you say, there might be some other cause in my case. At the moment I have adjusted the code so the iteration only runs if the '_remote_artifact_saver_cas' attribute is present. I don't know if the result will be any good, though; the migration is running right now.

//Adam

From: Ina Panova
Sent: 21 September 2020 13:09
To: Winberg Adam
Cc: pulp-list@redhat.com
Subject: Re: [Pulp-list] 2to3migration fails on 'Package' object has no attribute '_remote_artifact_saver_cas'

Hi, the steps provided in the mentioned issue do reproduce the issue, but unfortunately that does not necessarily mean they are the root cause of the manifested problem. Apparently we need a fix to properly handle duplicated declarative content in a batch. Looking at the steps you have tried to bypass the issue: have you run orphan cleanup after removing the pulp2 repos?

Regards,
Ina Panova
Senior Software Engineer | Pulp | Red Hat Inc.
"Do not go where the path may lead, go instead where there is no path and leave a trail."

On Sun, Sep 20, 2020 at 9:05 AM Winberg Adam <adam.winb...@smhi.se> wrote:

Hi, when running the 2to3migration for the 'rpm' plugin, I get the following error:

AttributeError: 'Package' object has no attribute '_remote_artifact_saver_cas'

This is the same as specified in https://pulp.plan.io/issues/7147, and I actually had a couple of repos in pulp2 which shared the same feed. I removed the redundant repos, flushed the pulp3 db and reran the 2to3-migration, but still got stuck on the same error. Anyone got any pointers on how to resolve this?

//Adam

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list
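The workaround Adam describes - only running the iteration when the attribute is present - can be sketched as a defensive guard. This is a minimal illustration: the bare `Package` class and `iter_remote_artifact_cas` helper are hypothetical stand-ins, not pulp_rpm or pulpcore code; only the attribute name comes from the traceback in this thread.

```python
# Minimal sketch of guarding the failing iteration with getattr.
# 'Package' is a bare stand-in, NOT the pulp_rpm model; only the attribute
# name '_remote_artifact_saver_cas' is taken from the traceback.
class Package:
    pass

def iter_remote_artifact_cas(content_unit):
    # getattr with a default avoids the AttributeError when the attribute
    # was never set on this content unit.
    yield from getattr(content_unit, "_remote_artifact_saver_cas", [])

pkg = Package()                                   # attribute never set
assert list(iter_remote_artifact_cas(pkg)) == []  # guard skips the iteration

pkg._remote_artifact_saver_cas = ["ca1", "ca2"]
assert list(iter_remote_artifact_cas(pkg)) == ["ca1", "ca2"]
```

As the thread notes, this only suppresses the symptom; the duplicate-content handling fixed in pulpcore PR #937 addresses the underlying problem.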
[Pulp-list] 2to3migration fails on 'Package' object has no attribute '_remote_artifact_saver_cas'
Hi, when running the 2to3migration for the 'rpm' plugin, I get the following error:

AttributeError: 'Package' object has no attribute '_remote_artifact_saver_cas'

This is the same as specified in https://pulp.plan.io/issues/7147, and I actually had a couple of repos in pulp2 which shared the same feed. I removed the redundant repos, flushed the pulp3 db and reran the 2to3-migration, but still got stuck on the same error. Anyone got any pointers on how to resolve this?

//Adam

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list
Re: [Pulp-list] Pulp RHEL8 rpm installation
great stuff, thanks for pointing me to the 'nightly' repo, this indeed solves the 'solv' problem. Thank you!

//Adam

From: Mike DePaulo
Sent: 03 September 2020 22:52
To: Winberg Adam
Cc: Dennis Kliban; pulp-list@redhat.com
Subject: Re: [Pulp-list] Pulp RHEL8 rpm installation

Hi Adam,

I tested this with pulplift and can confirm that the 3.16 repo has this problem, but the nightly repo does not. (And the 3.17 repo doesn't exist yet for el8 for some reason.) The installer fails at collectstatic for me ("pulp_common : Collect static content"), which is where any python dependency errors are usually caught.

These are the vars I used:

    pulp_default_admin_password: password
    pulp_install_plugins:
    pulp_settings:
    pulp_install_source: packages
    pulp_api_bind: "unix:/var/run/pulpcore-api/pulpcore-api.sock"
    pulp_content_bind: "unix:/var/run/pulpcore-content/pulpcore-content.sock"
    pulp_pkg_repo: "https://fedorapeople.org/groups/katello/releases/yum/3.16/pulpcore/el{{ ansible_distribution_major_version }}/x86_64/"
    # pulp_pkg_repo: "https://fedorapeople.org/groups/katello/releases/yum/3.17/pulpcore/el{{ ansible_distribution_major_version }}/x86_64/"
    # pulp_pkg_repo: "https://fedorapeople.org/groups/katello/releases/yum/nightly/pulpcore/el{{ ansible_distribution_major_version }}/x86_64/"

I hope the nightly repo suffices for you. I tried to figure out why this occurred, but I'll move on to other stuff now (since the pulp project neither supports nor tests the 3.16 repo) and leave you with what I uncovered below. Remember that the 3.16 repo has pulpcore 3.3, we're on 3.6 now, and we only support 3.6. Although the installer does generally work with a few older versions of pulpcore, we only test it with the current pulp pypi release, git branches, and the katello nightly RPM repo.
It does not seem like an installer issue, because I can reproduce it on the command line with the following commands:

    sudo su - pulp --shell /bin/bash
    PYTHONPATH=/usr/lib64/python3.6/ DJANGO_SETTINGS_MODULE=pulpcore.app.settings PULP_SETTINGS_FILE=/etc/pulp/settings.py /usr/bin/python3-django-admin collectstatic --noinput --link

(this throws your error)

    pulpcore-manager collectstatic --noinput --link

(the latter command is equivalent and throws the same error)

Both versions of the libsolv RPM provide this file, so I am surprised it is not being loaded:

    /usr/lib64/python3.6/site-packages/solv.py

And libsolv is hardly modified between versions: https://github.com/theforeman/foreman-packaging/commits/rpm/develop/packages/pulpcore/libsolv

Also, it looks like neither katello 3.16 nor 3.17 rc1 supports installation on el8 yet: https://theforeman.org/plugins/katello/3.17/installation/index.html

Also, we are supposed to install pulp-rpm (from the nightly repo, not 3.16) in our molecule-based CI for pulp_installer. The CI installs pulp-file & pulp-rpm for all pip install tests, but not for RPM install tests. I'll start on fixing this right now.

-Mike

On Thu, Sep 3, 2020 at 2:38 PM Winberg Adam <adam.winb...@smhi.se> wrote:

So I've been trying to install pulp3 on RHEL8 from the rpms hosted at https://fedorapeople.org/groups/katello/releases/yum/3.16/pulpcore/ I have not used the pulp_installer, but have looked at it to do the install correctly. However, I can't get the pulp services to start; I get the following error:

    ...
    File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 787, in resolve
        raise DistributionNotFound(req, requirers)
    pkg_resources.DistributionNotFound: The 'solv' distribution was not found and is required by the application

'python3-solv' is installed (it is a dependency of the 'python3-pulp-rpm' package). I don't know enough python to understand why this dependency is not found at runtime - has anyone seen this, or does anyone know what the problem is?
Has anyone used the pulp_installer to install pulp3 on RHEL/CentOS 8 with rpms? Thanks for any help!

//Adam

--
Mike DePaulo
He / Him / His
Service Reliability Engineer, Pulp
Red Hat <https://www.redhat.com/>
IM: mikedep333 GPG: 51745404

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list
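A quick way to check what Mike's collectstatic commands are hitting is to ask the interpreter directly whether the `solv` module resolves, and from where. This is a diagnostic sketch only; the `solv` module name comes from the error in the thread, and the path it resolves to will differ per system.

```python
# Diagnostic sketch: report whether 'solv' is importable and from which path.
# If find_spec returns None, the interpreter's sys.path does not reach the
# site-packages directory that python3-solv installed into (e.g.
# /usr/lib64/python3.6/site-packages in the thread).
import importlib.util
import sys

spec = importlib.util.find_spec("solv")
if spec is None:
    print("solv NOT importable; sys.path =", sys.path)
else:
    print("solv found at:", spec.origin)
```

Note that pkg_resources raises `DistributionNotFound` based on installed distribution metadata, so a module can be importable yet still fail that check; this sketch only narrows down the import-path half of the problem.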
Re: [Pulp-list] Pulp RHEL8 rpm installation
So I've been trying to install pulp3 on RHEL8 from the rpms hosted at https://fedorapeople.org/groups/katello/releases/yum/3.16/pulpcore/ I have not used the pulp_installer, but have looked at it to do the install correctly. However, I can't get the pulp services to start; I get the following error:

    ...
    File "/usr/local/lib/python3.6/site-packages/pkg_resources/__init__.py", line 787, in resolve
        raise DistributionNotFound(req, requirers)
    pkg_resources.DistributionNotFound: The 'solv' distribution was not found and is required by the application

'python3-solv' is installed (it is a dependency of the 'python3-pulp-rpm' package). I don't know enough python to understand why this dependency is not found at runtime - has anyone seen this, or does anyone know what the problem is? Has anyone used the pulp_installer to install pulp3 on RHEL/CentOS 8 with rpms? Thanks for any help!

//Adam

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list
Re: [Pulp-list] Pulp RHEL8 rpm installation
That's really good news, thanks for the info. I will take a closer look at this; those plugins should be enough for us for the time being.

//Adam

From: Dennis Kliban
Sent: 28 August 2020 16:52
To: Winberg Adam
Cc: pulp-list@redhat.com
Subject: Re: [Pulp-list] Pulp RHEL8 rpm installation

The pulp_installer supports installing from RPMs. Currently we don't publish the RPMs, but the Katello project does include such packages in its repository[0]. However, this repository only includes the plugins that are used by Katello: pulp_file, pulp_rpm, pulp_deb, pulp_container, pulp-certguard, and pulp-2to3-migration. There is definitely a desire to eventually provide official Pulp RPMs in our own repositories; however, that is not a focus at this time.

[0] https://fedorapeople.org/groups/katello/releases/yum/3.16/pulpcore/

On Thu, Aug 27, 2020 at 2:52 PM Winberg Adam <adam.winb...@smhi.se> wrote:

Hi, we are currently running pulp2 on RHEL7 and I was looking to upgrade to pulp3 and move this over to our RHEL8 environment. To my surprise, I discovered that rpm is no longer an installation option with pulp3, and there are no rhel8 rpm builds of either pulp2 or pulp3. Our organization relies quite heavily on RPM for CI/CD and automatic rebuilds of servers. Ansible/pypi installations require Internet access, and external docker/container images can also be considered a security liability (they are in my org, anyway). Sure, we can have local pypi indexes and such, but Ansible/pypi as a deployment method just does not feel very 'enterprisey' to me. So I wanted to open this up for discussion. Anyone else out there hoping for rpm builds of Pulp for RHEL8 (or other dists)?

Regards,
Adam

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list
[Pulp-list] Pulp RHEL8 rpm installation
Hi, we are currently running pulp2 on RHEL7 and I was looking to upgrade to pulp3 and move this over to our RHEL8 environment. To my surprise, I discovered that rpm is no longer an installation option with pulp3, and there are no rhel8 rpm builds of either pulp2 or pulp3. Our organization relies quite heavily on RPM for CI/CD and automatic rebuilds of servers. Ansible/pypi installations require Internet access, and external docker/container images can also be considered a security liability (they are in my org, anyway). Sure, we can have local pypi indexes and such, but Ansible/pypi as a deployment method just does not feel very 'enterprisey' to me. So I wanted to open this up for discussion. Anyone else out there hoping for rpm builds of Pulp for RHEL8 (or other dists)?

Regards,
Adam

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list
Re: [Pulp-list] dependency problems when copying errata with --recursive
Forgot: I'm using Pulp 2.7.

Sent from my Samsung Galaxy smartphone.

-------- Original message --------
From: "Adam.Winberg"
Date: 17/12/2015 11:20 (GMT+01:00)
To: pulp-list@redhat.com
Subject: [Pulp-list] dependency problems when copying errata with --recursive

I'm syncing Red Hat repos every night and then copy rpms from these to my own 'frozen' repos every Thursday. To minimize the number of updates, I want to copy only 'Important' and 'Critical' errata to my frozen repos. I use this command:

    $ pulp-admin rpm repo copy errata --match="severity=Important|Critical" --from-repo-id rh-repo-daily --to-repo-id rh-repo-frozen --recursive

The problem is that the --recursive flag does not copy all dependencies - or maybe it copies too many. For example, this week there was an important kernel errata. The kernel depends on the 'perf' package, which in turn depends on the 'perl' package, which in turn depends on the 'perl-libs' package. So all of these were copied to my frozen repo. However, there are about a gazillion other perl packages you need to copy to avoid version mismatches and failed updates, and these are not included. So pulp is not able to read those dependencies (in the form of, for example, "perl(Cwd)"). Besides, the copy of the perl rpm was not needed in the first place, since I already had perl in my frozen repo.

It all ended with me having to manually delete a number of packages from my frozen repo to get the updates to work. This happens quite often and is causing a lot of work for us. I could revert back to copying everything, but I would really like to be able to copy only the relevant packages. There was some discussion regarding this in a thread from February (https://www.redhat.com/archives/pulp-list/2015-February/msg9.html) with suggestions to add an option to skip copying an rpm if it is already present in the target repo. However, there was no RFE created that I know of - is there one, or could one be created?
Or am I missing some already existing way of fixing this problem?

Regards,
Adam Winberg

___
Pulp-list mailing list
Pulp-list@redhat.com
https://www.redhat.com/mailman/listinfo/pulp-list
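The gap Adam describes - copying by package name and missing packages that satisfy virtual provides like "perl(Cwd)" - can be illustrated with a toy resolver. The package data below is invented for illustration; real RPM repos carry full requires/provides metadata, and the point is only that dependencies must be matched against *provides*, not package names.

```python
# Toy illustration: RPM dependencies are expressed against provides
# (e.g. "perl(Cwd)"), so a copy that only follows package names misses
# the packages that satisfy virtual provides. All data here is invented.
packages = {
    "kernel":    {"requires": ["perf"]},
    "perf":      {"requires": ["perl"]},
    "perl":      {"requires": ["perl(Cwd)"]},   # a virtual provide, not a name
    "perl-libs": {"requires": []},
}
provides = {
    "perf": "perf",
    "perl": "perl",
    "perl(Cwd)": "perl-libs",   # the mapping a name-only copy never consults
}

def closure(pkg):
    # Walk requires -> provider until no new packages are needed.
    needed, stack = set(), [pkg]
    while stack:
        p = stack.pop()
        if p in needed:
            continue
        needed.add(p)
        for req in packages[p]["requires"]:
            provider = provides.get(req)
            if provider is not None:
                stack.append(provider)
    return needed

assert closure("kernel") == {"kernel", "perf", "perl", "perl-libs"}
```

With only name-based matching, "perl(Cwd)" resolves to nothing and perl-libs is dropped from the copy, which is the version-mismatch failure mode described in the thread.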