Re: [go-cd] Performance of popups in the gui

2024-05-13 Thread 'Wolfgang Achinger' via go-cd
I'm currently in the process of upgrading the server and agents of all
our gocd setups. We will monitor it over the next few days, and I will
come back if we notice anything.

On Mon, May 13, 2024 at 10:24, Chad Wilson <ch...@thoughtworks.com> wrote:

> Great to hear - back to how it was *supposed* to behave! I hope it hasn't
> caused any other regressions 
>
> Now that this is cleaned up, let me know if it exposes any other
> unexpected weird slowness you can't get to the bottom of and I'll see if I
> can chip away at the other various niggles.
>
> -Chad
>
> On Mon, May 13, 2024 at 4:10 PM 'Wolfgang Achinger' via go-cd <
> go-cd@googlegroups.com> wrote:
>
>> Dude, the fix is amazing!
>>
>> On Mon, May 13, 2024 at 04:05, Chad Wilson <ch...@thoughtworks.com> wrote:
>>
>>> Apologies for the slow release of this (been rather busy personally) but
>>> 24.1.0 is out with what I think should be a fix for this issue.
>>>
>>> If you have any feedback it'd be appreciated.
>>>
>>> -Chad
>>>
>>>
>>> On Mon, 19 Feb 2024, 15:15 'Wolfgang Achinger' via go-cd, <
>>> go-cd@googlegroups.com> wrote:
>>>
 Hello,

 this is actually incredible news for us. We look forward to the
 release with the fix.
 Thanks for the support.

 The workarounds do not seem viable for us, since we use the
 environment for a lot of global variables and customizations.

 Regards

 On Sat, Feb 17, 2024 at 18:08, Chad Wilson <ch...@thoughtworks.com> wrote:

> Hiya folks
>
> I've been able to replicate this problem and should be able to fix it
> for a subsequent release - thanks for the help debugging.
>
> The problem appears to arise when there is a large number of pipelines
> *mapped to an environment* and also a large number of agents for that
> environment. The logic for calculating the agents > environments mapping
> in the API response is accidentally very, very inefficient (I think it's
> O(n^2 x m^2) or something crazy). I replicated something similar to what
> you describe with 5,000 pipelines and 60 or so agents, all mapped into
> the same, single, logical environment.
>
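> A minimal sketch of the kind of pattern involved, with invented names -
> not GoCD's actual code: if the response is computed by rescanning every
> environment's membership list once per agent, the cost multiplies with
> each dimension, whereas building an index once keeps it linear:
>
> # Hedged illustration only: invented names, not GoCD's implementation.
> from dataclasses import dataclass, field
>
> @dataclass
> class Env:
>     name: str
>     agent_uuids: list = field(default_factory=list)
>
> def slow_agent_envs(agent_uuids, environments):
>     # Rescans every environment's agent list once per agent:
>     # roughly O(agents x environments x members) membership checks.
>     return {a: [e.name for e in environments if a in e.agent_uuids]
>             for a in agent_uuids}
>
> def fast_agent_envs(agent_uuids, environments):
>     # Builds an agent -> environments index once, then answers in O(1).
>     index = {}
>     for e in environments:
>         for a in e.agent_uuids:
>             index.setdefault(a, []).append(e.name)
>     return {a: index.get(a, []) for a in agent_uuids}
>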
> [image: image.png]
>
>
> In your case, if all of your 1,690 pipelines are mapped to a single
> environment (from your stats below), and all of your 95 agents are in
> the same environment, you'd definitely trigger this issue. I can't tell
> exactly from what you have shared how the pipelines and agents are
> mapped to environments, so this is a guess - can you confirm how many
> agents and pipelines are mapped to the environment below?
>
> "Number of pipelines": 1690,
> "Number of environments": 1,
> "Number of agents": 95,
>
>
> If it's the same problem, you will probably find that *untagging the
> agents from the environment* also has a similar speed-up effect to
> deleting all of the agents (although then the pipelines requiring that
> environment won't schedule either, obviously).
>
> Another workaround in the meantime, *if you don't rely on the
> environment*
>
>- to define environment variables/secure environment variables
>that apply across all pipelines/jobs
>- to affect whether jobs are scheduled to special agents
>
> ... may be to untag all pipelines and agents from the environment you
> use and just use the default/empty environment.
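>
> As a rough sketch of how that untagging could be scripted (assumptions:
> the server URL, credentials and environment name below are invented, and
> the Agents API version header should be checked against your server's
> API docs before use):
>
> # Hedged sketch: bulk-remove agents from one environment via GoCD's
> # Agents API. URL, credentials, env name and the Accept version are
> # assumptions to verify, not tested against a real server.
> import requests
>
> BASE = "https://gocd.example.com/go/api"  # hypothetical server URL
> AUTH = ("admin", "secret")                # hypothetical credentials
> HEADERS = {"Accept": "application/vnd.go.cd.v7+json",
>            "Content-Type": "application/json"}
> BIG_ENV = "my-big-env"                    # hypothetical environment name
>
> agents = requests.get(f"{BASE}/agents", auth=AUTH,
>                       headers=HEADERS).json()
> for agent in agents["_embedded"]["agents"]:
>     envs = [e["name"] for e in agent.get("environments", [])
>             if e["name"] != BIG_ENV]
>     # PATCH replaces the agent's environment list wholesale.
>     requests.patch(f"{BASE}/agents/{agent['uuid']}", auth=AUTH,
>                    headers=HEADERS, json={"environments": envs})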
>
> -Chad
>
> On Fri, Feb 16, 2024 at 3:40 PM 'Wolfgang Achinger' via go-cd <
> go-cd@googlegroups.com> wrote:
>
>> > By 'resources' I am referring to the GoCD functionality where you
>> can tag agents with resources that they offer, which are then matched to
>> pipeline jobs that say they *require* those resources to run as part
>> of agent assignment.
>> 10 Agents have 5 resources attached
>> 85 have 1 resource attached
>>
>> We use the resources to target different special agents. They do the
>> same work as the rest, but they are placed in dedicated networks.
>>
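>> As a side note, the matching rule described above amounts to a subset
>> check - a minimal sketch of that rule (not GoCD's actual code):
>>
>> # Hedged sketch of the described matching rule, not GoCD's code:
>> # a job may be assigned to an agent only if every resource the job
>> # requires is among the resources the agent offers.
>> def agent_matches(required: set, offered: set) -> bool:
>>     return required <= offered  # subset test
>>
>> assert agent_matches({"dedicated-net"}, {"dedicated-net", "docker"})
>> assert not agent_matches({"dedicated-net"}, {"docker"})
>>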
>> > To make sure I understand you, are you saying that the problem has
>> > been here for the last year, perhaps gradually getting worse as you
>> > add more agents or pipelines - but not an issue suddenly created
>> > after a particular upgrade or change?
>> That's correct. It's more an over-time issue than a sudden issue.
>>
>> I sent the additional information out, but not directly; it comes
>> from a different mail address via a secure transfer method.
>>
>> On Thu, Feb 15, 2024 at 17:57, Chad Wilson <ch...@thoughtworks.com> wrote:
>>
>>> Cool, thanks! Just trying to gather enough information to see if I
>>> can replicate or find the issue in a dedicated chunk of time this 
>>> weekend.
>>>
>>> You can email it to me, and/or encrypt with my GPG key if you'd like (
>>> https://github.com/chadlwilson/chadlwilson/blob/main/gpg-public-key.asc
>>> )
>>>
>>> By 'resources' I am referring to the GoCD functionality where you can
>>> tag agents with resources that they offer, which are then matched to
>>> pipeline jobs that say they *require* those resources to run as part
>>> of agent assignment.
>>>
>>> > No, we have used this setup now for about a year, patching the
>>> > system on a regular basis, including the latest gocd stable version.
>>>
>>> To make sure I understand you, are you saying that the problem has
>>> been here for the last year, perhaps gradually getting worse as you
>>> add more agents or pipelines - but not an issue suddenly created after
>>> a particular upgrade or change?
>>>
>>> -Chad
>>>
>>> On Fri, 16 Feb 2024, 00:29 'Wolfgang Achinger' via go-cd, <
>>> go-cd@googlegroups.com> wrote:
>>>
>>>> > And how many