Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Ilya Maximets writes:
> On 1/24/24 13:13, Aaron Conole wrote:
>> Ilya Maximets writes:
>>
>>> On 1/24/24 01:14, Jeremy Kerr wrote:
>>>> Hi Ilya,
>>>>
>>>>> Jeremy, could you, please, try to unban the robot?
>>>>
>>>> Sure thing, done!
>>>
>>> Thanks!
>>>
>>>> Let me know how it goes on your side, I'll monitor things here.
>>>
>>> We'll re-enable the robot later today and will keep an eye on it.
>>
>> Done.
>
> It looks like patchwork API broke. Following link (and similar)
> returns 500:
> http://patchwork.ozlabs.org/api/patches/1887843/
>
> The same link, but with 1.1 API version works:
> http://patchwork.ozlabs.org/api/1.1/patches/1887843/
>
> With 1.2 it is 500. 1.3 - 404 (probably expected).

I don't see that issue currently. Have you noticed new issues?

> Note: Just to clarify, all the UAs should have '(pw-ci)' prefix
> now with additional identifications for the project/actions they
> are related to.

OK, good to know - as long as there's a common identifier present.

Cheers,


Jeremy

___
Patchwork mailing list
Patchwork@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/patchwork
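The version-specific behaviour Ilya reports (1.1 working, 1.2 returning 500, 1.3 a 404) is easy to reproduce with a quick probe. This is a sketch, not part of any existing tooling: the helper names and the probing loop are illustrative; only the URL layout comes from the links quoted above.

```python
# Sketch: probe the same patch endpoint across Patchwork API versions.
# Helper names and the probe loop are illustrative, not real tooling;
# the URL layout matches the links quoted in the message above.
import urllib.error
import urllib.request

BASE = "http://patchwork.ozlabs.org/api"

def patch_url(patch_id, version=None):
    """Build a patch endpoint URL, optionally pinned to an API version."""
    prefix = f"{BASE}/{version}" if version else BASE
    return f"{prefix}/patches/{patch_id}/"

def probe(patch_id, versions=("1.1", "1.2", "1.3")):
    """Return {version: HTTP status} for each API version."""
    results = {}
    for v in versions:
        try:
            with urllib.request.urlopen(patch_url(patch_id, v)) as resp:
                results[v] = resp.status
        except urllib.error.HTTPError as err:
            results[v] = err.code
    return results

if __name__ == "__main__":
    print(probe(1887843))
```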
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Ilya Maximets writes:
> On 1/24/24 01:14, Jeremy Kerr wrote:
>> Hi Ilya,
>>
>>> Jeremy, could you, please, try to unban the robot?
>>
>> Sure thing, done!
>
> Thanks!
>
>> Let me know how it goes on your side, I'll monitor things here.
>
> We'll re-enable the robot later today and will keep an eye on it.

Done.

>>> Note: Just to clarify, all the UAs should have '(pw-ci)' prefix
>>> now with additional identifications for the project/actions they
>>> are related to.
>>
>> OK, good to know - as long as there's a common identifier present.
>>
>> Cheers,
>>
>>
>> Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Jeremy Kerr writes:
> Hi Ilya,
>
>> Jeremy, could you, please, try to unban the robot?
>
> Sure thing, done!
>
> Let me know how it goes on your side, I'll monitor things here.

Thanks. I've just re-enabled it. The first run will be noisy (because
it will requery the open checks). After that, it should only query the
checks API when a new report arrives.

>> Note: Just to clarify, all the UAs should have '(pw-ci)' prefix
>> now with additional identifications for the project/actions they
>> are related to.
>
> OK, good to know - as long as there's a common identifier present.

We'll keep this consistent on our side as well.

> Cheers,
>
>
> Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi Ilya,

> Jeremy, could you, please, try to unban the robot?

Sure thing, done!

Let me know how it goes on your side, I'll monitor things here.

> Note: Just to clarify, all the UAs should have '(pw-ci)' prefix
> now with additional identifications for the project/actions they
> are related to.

OK, good to know - as long as there's a common identifier present.

Cheers,


Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Jeremy Kerr writes:
> Hi Ilya,
>
>> Ugh. Yeah, the checks/ requests are definitely something we can
>> improve. Aaron is working on removing vast majority of this type of
>> requests as we speak. Hopefully, that will be done soon.
>
> OK, sounds good!
>
>> Do you think it'll be fine to unban the robot once it doesn't run that
>> many requests on the checks/ API in particular? (I expect the number
>> of requests to be less than a 100-ish per day after the fix.)
>
> Yes, definitely. I'm okay with unbanning it sooner, if it's doing useful
> stuff, and we can contain the load somewhat, and/or we need to test
> against production data.

Thanks for the help identifying the API usage - I've pushed a new
series that should reduce it quite a bit. I'm hoping we can unban the
RH robot; if it continues to cause issues, we can address those as
well.

> (more that I wasn't sure if it was a legitimate use of the API
> earlier, hence reducing the load via the nft rule)

I've added a custom set of user agents so we can see that it is at
least our set of patchwork polling / updating CI scripts. Hopefully
that will also help to characterize whether the robot's workload is
too heavy.

>> That might be useful, thanks!
>
> OK, I will send that separately.

Thanks!

> Cheers,
>
>
> Jeremy
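The UA convention described here - a common '(pw-ci)' prefix plus per-project/action identification - can be centralised in one small helper shared by the CI scripts. A minimal sketch: only the '(pw-ci)' prefix comes from the thread; the function names and the project/action suffix format are assumed conventions, not the actual pw-ci code.

```python
# Sketch of a shared User-Agent builder for the patchwork CI scripts.
# Only the '(pw-ci)' prefix is from the thread; the project/action
# suffix format and function names are hypothetical conventions.

def pw_ci_user_agent(project, action):
    """Build a UA string like '(pw-ci) openvswitch/check-poller'."""
    return f"(pw-ci) {project}/{action}"

def api_headers(project, action):
    """Request headers for an API call made by a CI script."""
    return {"User-Agent": pw_ci_user_agent(project, action)}
```

With every script routed through one helper, the server side can match on the common '(pw-ci)' prefix while the suffix still identifies which script generated the traffic.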
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Jeremy Kerr writes:
> Hi Ilya,
>
>> Ugh. Yeah, the checks/ requests are definitely something we can
>> improve. Aaron is working on removing vast majority of this type of
>> requests as we speak. Hopefully, that will be done soon.
>
> OK, sounds good!
>
>> Do you think it'll be fine to unban the robot once it doesn't run that
>> many requests on the checks/ API in particular? (I expect the number
>> of requests to be less than a 100-ish per day after the fix.)
>
> Yes, definitely. I'm okay with unbanning it sooner, if it's doing useful
> stuff, and we can contain the load somewhat, and/or we need to test
> against production data.
>
> (more that I wasn't sure if it was a legitimate use of the API
> earlier, hence reducing the load via the nft rule)

While we are using those APIs, this is the only place where we have a
difference between the DPDK server and the ozlabs server. For DPDK
patchwork, they have a separate hook which processes the reports. Ours
is probably much too simple to have been left running like this for as
long as it has.

>> That might be useful, thanks!
>
> OK, I will send that separately.

Thanks.

> Cheers,
>
>
> Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Ilya Maximets writes:
> On 1/22/24 14:59, Jeremy Kerr wrote:
>> Hi Ilya,
>>
>>> We have a robot running in the RH network that pushes checks and
>>> updates statuses for openvswitch and ovn projects, but it shouldn't
>>> really make any "global patches view" types of requests. All the
>>> patches it looks at supposed to be only from these two projects. And
>>> it should not make more than one concurrent request.
>>
>> OK, that looks like it then; this isn't one of the spiders crawling
>> through *all* patches (and does seem to be contained to OVS & OVN) but
>> it's certainly a major contributor to load.
>>
>> It seems to be re-requesting the same view hundreds of times. From one
>> day's worth of log, the top 10 URLs from that IP:
>>
>>    399 /api/patches/1887072/checks/
>>    285 /api/patches/1888116/checks/
>>    285 /api/patches/1888115/checks/
>>    285 /api/patches/1888111/checks/
>>    228 /api/patches/1888114/checks/
>>    228 /api/patches/1888112/checks/
>>    228 /api/patches/1887464/checks/
>>    228 /api/patches/1887463/checks/
>>    228 /api/patches/1884952/checks/
>>    228 /api/patches/1884950/checks/
>>
>> - totalling 43,786 requests for that day.
>
> Ugh. Yeah, the checks/ requests are definitely something we can improve.
> Aaron is working on removing vast majority of this type of requests as we
> speak. Hopefully, that will be done soon.
>
> Do you think it'll be fine to unban the robot once it doesn't run that
> many requests on the checks/ API in particular? (I expect the number of
> requests to be less than a 100-ish per day after the fix.)

Yes - we are now going to store the checks that have already been
submitted (I'm going to try and pre-load the requests). That should
significantly reduce the amount of churn through the requests.

> Robot will also have an updated UA, so it will be easier to identify in
> case of any issues in the future.

Yes, this change was already completed. I'm testing things out right
now. When it is ready, I'll post a series and CC you.

>>> Could you provide some examples of requests that are heavy (maybe
>>> off-list), so we can take a look?
>>
>> I can send you a log over a day if that's helpful.
>
> That might be useful, thanks!

>> Cheers,
>>
>>
>> Jeremy
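The fix described above - storing the checks that have already been submitted so the scripts stop re-polling checks/ - amounts to keeping a local dedup cache. A minimal sketch of the idea; the class name and in-memory storage are hypothetical, not the real pw-ci implementation:

```python
# Sketch: remember which (context, state) check results have already
# been submitted per patch, so the checks/ API is only touched when a
# genuinely new report arrives. Class and storage are illustrative,
# not the actual pw-ci code.

class CheckCache:
    def __init__(self):
        self._seen = {}  # patch_id -> set of (context, state) pairs

    def should_submit(self, patch_id, context, state):
        """True only the first time this (context, state) is seen."""
        seen = self._seen.setdefault(patch_id, set())
        if (context, state) in seen:
            return False
        seen.add((context, state))
        return True
```

Pre-loading the cache from a single initial query (as suggested above) would make the first run the only noisy one; every later poll becomes a cheap local lookup.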
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
On Mon, 2024-01-22 at 15:52 +0800, Jeremy Kerr wrote:
> Hi all,
>
> I'll try and get to that over the weekend.
>
> Looks like the heaviest database load is due to API request to the
> global patches view, which is a bit of an odd use-case; that all
> appears to be mostly spider traffic.

Just as an aside, many if not all of the performance issues related to
this API in particular should be resolved in the 3.0 release, owing to
the removal of the Submission table. The DPDK folks have been running
3.x in production for a couple of months now (at
https://patches.dpdk.org/) and I'm only aware of one minor issue [1]
that they've encountered. Could be worth lining up the upgrade at some
point...

Cheers,
Stephen

PS: In the v2.0 API, the URLs will be almost entirely project-oriented
(e.g. '/project/{projectID}/patches'), but I haven't got there yet.

[1] https://github.com/getpatchwork/patchwork/issues/556

> Konstantin: I'm not sure your new index would help in that case, we're
> not looking up delegates for those views.
>
> Looking through the access logs, there seem to be three clients that
> are causing around 40-50% of patchwork load:
>
> - one IP from an "Alibaba Cloud HK" AS, various UAs
> - one IP from a Red Hat AS, curl/7.61.1 UA
> - the Bytedance "Bytespider" UA
>
> All three seem to be scraping the patchwork site.
>
> I have blocked all three for now, but it would be worthwhile setting up
> a more fair robots.txt and/or a reasonable ratelimit for the latter
> case.
>
> If anyone knows what might be up with that Red Hat crawler, please get
> in touch with me.
>
> I'll keep an eye on things here; there's still likely a bunch of
> potential configuration optimisation we can do too. Let me know if your
> observations change though.
>
> Cheers,
>
>
> Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi Ilya,

> Ugh. Yeah, the checks/ requests are definitely something we can
> improve. Aaron is working on removing vast majority of this type of
> requests as we speak. Hopefully, that will be done soon.

OK, sounds good!

> Do you think it'll be fine to unban the robot once it doesn't run that
> many requests on the checks/ API in particular? (I expect the number
> of requests to be less than a 100-ish per day after the fix.)

Yes, definitely. I'm okay with unbanning it sooner, if it's doing useful
stuff, and we can contain the load somewhat, and/or we need to test
against production data.

(more that I wasn't sure if it was a legitimate use of the API
earlier, hence reducing the load via the nft rule)

> That might be useful, thanks!

OK, I will send that separately.

Cheers,


Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi Ilya,

> We have a robot running in the RH network that pushes checks and
> updates statuses for openvswitch and ovn projects, but it shouldn't
> really make any "global patches view" types of requests. All the
> patches it looks at supposed to be only from these two projects. And
> it should not make more than one concurrent request.

OK, that looks like it then; this isn't one of the spiders crawling
through *all* patches (and does seem to be contained to OVS & OVN) but
it's certainly a major contributor to load.

It seems to be re-requesting the same view hundreds of times. From one
day's worth of log, the top 10 URLs from that IP:

   399 /api/patches/1887072/checks/
   285 /api/patches/1888116/checks/
   285 /api/patches/1888115/checks/
   285 /api/patches/1888111/checks/
   228 /api/patches/1888114/checks/
   228 /api/patches/1888112/checks/
   228 /api/patches/1887464/checks/
   228 /api/patches/1887463/checks/
   228 /api/patches/1884952/checks/
   228 /api/patches/1884950/checks/

- totalling 43,786 requests for that day.

> Could you provide some examples of requests that are heavy (maybe
> off-list), so we can take a look?

I can send you a log over a day if that's helpful.

Cheers,


Jeremy
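A per-URL tally like the top-10 list above takes only a few lines over the access log. A sketch, assuming common-log-format lines (the real server's log format may place the request path in a different field):

```python
# Sketch: tally per-URL request counts from an access log, producing a
# top-N list like the one quoted above. Assumes common-log-format
# lines; adjust the field index for the real server's format.
from collections import Counter

def top_urls(log_lines, n=10):
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) > 6:  # request path is field 6 in CLF
            counts[fields[6]] += 1
    return counts.most_common(n)
```

Fed one day's log, `top_urls` returns (path, count) pairs sorted by count, which is exactly the shape of the summary above.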
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi all,

> I'll try and get to that over the weekend.

Looks like the heaviest database load is due to API requests to the
global patches view, which is a bit of an odd use-case; that all
appears to be mostly spider traffic.

Konstantin: I'm not sure your new index would help in that case, we're
not looking up delegates for those views.

Looking through the access logs, there seem to be three clients that
are causing around 40-50% of patchwork load:

- one IP from an "Alibaba Cloud HK" AS, various UAs
- one IP from a Red Hat AS, curl/7.61.1 UA
- the Bytedance "Bytespider" UA

All three seem to be scraping the patchwork site.

I have blocked all three for now, but it would be worthwhile setting up
a fairer robots.txt and/or a reasonable ratelimit for the latter case.

If anyone knows what might be up with that Red Hat crawler, please get
in touch with me.

I'll keep an eye on things here; there's still likely a bunch of
potential configuration optimisation we can do too. Let me know if your
observations change though.

Cheers,


Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi Rob,

(adding Ilya, who has done some very handy logging too)

> I have a job that runs every 20min checking PW. I added timestamps and
> here's all the 503 failures in the last 22 hours:
>
> Wed Jan 17 10:12:29 AM CST 2024
> Wed Jan 17 12:16:38 PM CST 2024
> Wed Jan 17 01:37:01 PM CST 2024
> Wed Jan 17 03:00:23 PM CST 2024
> Wed Jan 17 03:43:04 PM CST 2024
> Wed Jan 17 11:52:18 PM CST 2024
> Thu Jan 18 12:32:38 AM CST 2024
> Thu Jan 18 02:33:54 AM CST 2024
> Thu Jan 18 03:34:50 AM CST 2024
> Thu Jan 18 06:20:40 AM CST 2024
> Thu Jan 18 06:40:57 AM CST 2024
>
> Looks pretty much spread out except 4PM-11PM my time didn't have any
> failures.

Thanks for that - good to know!

Ilya: your timestamps - are they in UTC or local?

> Must be the Europeans causing problems. ;)

I blame you all being north of the equator :D

Your data seems pretty consistent with what Ilya had found - spread out
through the day rather than just during backups.

It looks like the request load has increased enough to need tuning of
the application server's backlog queue. I'll do some adjustments to
that, but it does need to be a little careful in that we don't just
punt the problem somewhere else.

I'll try and get to that over the weekend.

Cheers,


Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
On Wed, Jan 17, 2024 at 10:36 AM Jeremy Kerr wrote:
>
> Hi Rob,
>
> > When was recent? The 503s have been happening for probably a month or
> > 2 now, but seem to be increasing in frequency to the point of PW
> > being unusable now.
>
> That was early Jan, so you've been seeing those since before the
> upgrade.
>
> I do see quite a bit of spider traffic now though, so might have some
> options to limit load there too.
>
> > > So far it does seem time of day related, so possibly conflicting
> > > with load from overnight backups.
> >
> > When and how long do those run? I'm pretty much working your
> > overnight.
>
> I'd have to defer to Stephen on that, but I had been assuming 4am /
> 17:00 UTC.

I have a job that runs every 20min checking PW. I added timestamps and
here's all the 503 failures in the last 22 hours:

Wed Jan 17 10:12:29 AM CST 2024
Wed Jan 17 12:16:38 PM CST 2024
Wed Jan 17 01:37:01 PM CST 2024
Wed Jan 17 03:00:23 PM CST 2024
Wed Jan 17 03:43:04 PM CST 2024
Wed Jan 17 11:52:18 PM CST 2024
Thu Jan 18 12:32:38 AM CST 2024
Thu Jan 18 02:33:54 AM CST 2024
Thu Jan 18 03:34:50 AM CST 2024
Thu Jan 18 06:20:40 AM CST 2024
Thu Jan 18 06:40:57 AM CST 2024

Looks pretty much spread out except 4PM-11PM my time didn't have any
failures. Must be the Europeans causing problems. ;)

Rob
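A periodic check that records timestamped failures, as in the 20-minute job above, could look something like this. The URL, the use of urllib, and the function names are assumptions; only the "log a timestamp on a 503" behaviour comes from the thread:

```python
# Sketch: poll patchwork and record a timestamp for any server-side
# failure, producing a 503 log like the one above. URL and polling
# details are assumptions; only timestamp-on-failure is from the thread.
import datetime
import urllib.error
import urllib.request

def record_failure(failures, status, timestamp):
    """Record (timestamp, status) only for server-side (5xx) failures."""
    if 500 <= status < 600:
        failures.append((timestamp, status))
        return True
    return False

def check_once(url, failures):
    """Fetch `url` once, logging a timestamped entry on a 5xx."""
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code
    record_failure(failures, status, datetime.datetime.now().isoformat())
    return status
```

Run from cron every 20 minutes, the accumulated `failures` list gives exactly the spread-through-the-day picture discussed here.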
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi all,

On Thu, 18 Jan 2024 00:36:39 +0800 Jeremy Kerr wrote:
>
> > When and how long do those run? I'm pretty much working your
> > overnight.
>
> I'd have to defer to Stephen on that, but I had been assuming 4am /
> 17:00 UTC.

Yeah, 4am to 6am our time (currently +1100) ... it takes a long time to
backup a 25GB database :-( We may need to consider alternative backup
methods?

See https://munin.ozlabs.org/ozlabs.org/legolas.ozlabs.org/index.html
(legolas is patchwork.ozlabs.org).

--
Cheers,
Stephen Rothwell
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi Rob,

> When was recent? The 503s have been happening for probably a month or
> 2 now, but seem to be increasing in frequency to the point of PW
> being unusable now.

That was early Jan, so you've been seeing those since before the
upgrade.

I do see quite a bit of spider traffic now though, so might have some
options to limit load there too.

> > So far it does seem time of day related, so possibly conflicting
> > with load from overnight backups.
>
> When and how long do those run? I'm pretty much working your
> overnight.

I'd have to defer to Stephen on that, but I had been assuming 4am /
17:00 UTC.

Cheers,


Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
On Tue, Jan 16, 2024 at 6:56 PM Jeremy Kerr wrote:
>
> Hi Rob & Konstantin,
>
> > I'm still seeing this slowness and now to add to that intermittent
> > 503 errors.
>
> OK, I've heard the same yesterday from the ovn folks. Sounds like this
> is continuing after a recent hardware upgrade too.

When was recent? The 503s have been happening for probably a month or
2 now, but seem to be increasing in frequency to the point of PW being
unusable now.

> So far it does seem time of day related, so possibly conflicting with
> load from overnight backups.

When and how long do those run? I'm pretty much working your overnight.

Rob
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi Rob & Konstantin,

> I'm still seeing this slowness and now to add to that intermittent
> 503 errors.

OK, I've heard the same yesterday from the ovn folks. Sounds like this
is continuing after a recent hardware upgrade too.

So far it does seem time of day related, so possibly conflicting with
load from overnight backups.

> In case it helps, and just for the record, the following index really
> helped to improve some of the patch list views when delegates were
> used:
>
> ALTER TABLE patchwork_patch ADD INDEX
> patchwork_patch_delegate_id_state_id_archived (delegate_id, state_id,
> archived);

Neat, good to know!

I'll check out the query log and see if we have overlap on the
'expensive' queries issued, and add the index if it looks like that
would help.

Cheers,


Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
On Tue, Jan 16, 2024 at 11:05:42AM -0600, Rob Herring wrote:
> > This improved things, but it seems like the DT PW has gotten slower
> > again in the last few weeks. It's taking ~12sec to load the patch
> > list.
>
> I'm still seeing this slowness and now to add to that intermittent 503
> errors.

In case it helps, and just for the record, the following index really
helped to improve some of the patch list views when delegates were
used:

  ALTER TABLE patchwork_patch
    ADD INDEX patchwork_patch_delegate_id_state_id_archived
    (delegate_id, state_id, archived);

-K
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi Jeremy,

On Thu, Jun 29, 2023 at 11:55 AM Rob Herring wrote:
>
> Hi Jeremy,
>
> On Wed, Aug 17, 2022 at 6:19 PM Jeremy Kerr wrote:
> >
> > Hi Rob,
> >
> > > > I'll do a bit of experimentation here to see what's up, will keep
> > > > you posted.
> > >
> > > Actually, it is worse than just slow. New patches aren't showing up
> > > since before the move.
> >
> > Looks like I missed a dependency for the custom filter for the DT feed;
> > apologies for that. I've updated the filter, and incoming patches
> > should be parsed from here on.
> >
> > I've also had a look at the speed issues, which seem particularly bad
> > for the DT list. We've bumped up the system memory, and tweaked the
> > indexes a little; things look a little faster now, but keep me posted
> > on performance there too.
>
> This improved things, but it seems like the DT PW has gotten slower
> again in the last few weeks. It's taking ~12sec to load the patch
> list.

I'm still seeing this slowness, and now, to add to that, intermittent
503 errors.

Rob
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi Jeremy,

On Wed, Aug 17, 2022 at 6:19 PM Jeremy Kerr wrote:
>
> Hi Rob,
>
> > > I'll do a bit of experimentation here to see what's up, will keep
> > > you posted.
> >
> > Actually, it is worse than just slow. New patches aren't showing up
> > since before the move.
>
> Looks like I missed a dependency for the custom filter for the DT feed;
> apologies for that. I've updated the filter, and incoming patches
> should be parsed from here on.
>
> I've also had a look at the speed issues, which seem particularly bad
> for the DT list. We've bumped up the system memory, and tweaked the
> indexes a little; things look a little faster now, but keep me posted
> on performance there too.

This improved things, but it seems like the DT PW has gotten slower
again in the last few weeks. It's taking ~12sec to load the patch
list.

Rob
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi Rob,

> > I'll do a bit of experimentation here to see what's up, will keep
> > you posted.
>
> Actually, it is worse than just slow. New patches aren't showing up
> since before the move.

Looks like I missed a dependency for the custom filter for the DT feed;
apologies for that. I've updated the filter, and incoming patches
should be parsed from here on.

I've also had a look at the speed issues, which seem particularly bad
for the DT list. We've bumped up the system memory, and tweaked the
indexes a little; things look a little faster now, but keep me posted
on performance there too.

Cheers,


Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
On Tue, Aug 16, 2022 at 8:26 PM Jeremy Kerr wrote:
>
> Hi Rob,
>
> > Since the move, the web interface seems slow for some operations.
> > Loading the list view takes ~30s. Loading a patch seems fine, and I'm
> > not seeing any slowness with the CLI.
>
> OK, thanks for letting me know. It looks like this is reliably
> reproducible, but only affects some of the patch lists; DT is
> particularly slow, but linuxppc seems fine.
>
> I'll do a bit of experimentation here to see what's up, will keep you
> posted.

Actually, it is worse than just slow. New patches aren't showing up
since before the move.

Rob
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
Hi Rob,

> Since the move, the web interface seems slow for some operations.
> Loading the list view takes ~30s. Loading a patch seems fine, and I'm
> not seeing any slowness with the CLI.

OK, thanks for letting me know. It looks like this is reliably
reproducible, but only affects some of the patch lists; DT is
particularly slow, but linuxppc seems fine.

I'll do a bit of experimentation here to see what's up, will keep you
posted.

Cheers,


Jeremy
Re: [Patchwork-maintainers] patchwork.ozlabs.org downtime for maintenance - 15/16 August
On Mon, Aug 15, 2022 at 1:06 AM Jeremy Kerr wrote:
>
> Hi patchworkers,
>
> Stephen and I will be moving the patchwork.ozlabs.org instance to a new
> server this week. For this, we'll need about two hours of downtime,
> starting at:
>
>   Australia East: midday Tuesday 16 Aug
>   Australia West: 10am Tuesday 16 Aug
>   UTC: 2am Tuesday 16 Aug
>   US West: 7pm Monday 15 Aug
>   US East: 10pm Monday 15 Aug
>
> Once back up, there should be no change to the software, just where
> it's hosted.

Since the move, the web interface seems slow for some operations.
Loading the list view takes ~30s. Loading a patch seems fine, and I'm
not seeing any slowness with the CLI.

Rob