Re: Anyone have any ideas why beam_PreCommit_CommunityMetrics is failing?

2022-04-21 Thread Daniela Martín
Thanks for the review, Cham.

Regards,

On Wed, Apr 20, 2022 at 8:28 PM Chamikara Jayalath 
wrote:

> Thanks for the fix. Merged the PR.
>
> - Cham
>
> On Wed, Apr 20, 2022 at 1:37 PM Daniela Martín <
> daniela.mar...@wizeline.com> wrote:
>
>> Hi everyone,
>>
>> After deep testing together with @Elias Segundo Antonio
>>   (who is currently working on BEAM-14169
>> <https://github.com/apache/beam/pull/17383>), we found that the issue is
>> in fact related to the rotation of the k8s credentials. We identified that
>> this specific test runs no get-credentials instruction when it is
>> executed, which is why it is failing.
>>
>> To avoid this issue, we have refactored the test so that it runs
>> get-credentials and remove-config every time the job is triggered. We have
>> also updated the --dry-run flag to remove the deprecated-usage warning.
>>
>> Could you please help us review PR #17396
>> <https://github.com/apache/beam/pull/17396>?
>>
>> Please let us know if you have any comments or questions.
>>
>> Thank you!
>>
>> Regards,
>>
>> On Wed, Apr 13, 2022 at 12:03 PM Daniela Martín <
>> daniela.mar...@wizeline.com> wrote:
>>
>>> Hi everyone,
>>>
>>> I'll take a look. Thank you for the information.
>>>
>>> Regards,
>>>
>>> On Mon, Mar 14, 2022 at 5:11 PM Ahmet Altay  wrote:
>>>
>>>> I do not know the code well enough either. But I could not find any
>>>> references to "104.154.102.21" in the code search.
>>>>
>>>> In case this helps anyone who wants to help here:
>>>> - The failing test is a single kubectl command, "kubectl apply
>>>> --dry-run=true -Rf kubernetes" (
>>>> https://github.com/apache/beam/blob/f779a3fca31f08ada5011155484b69bdca962754/.test-infra/metrics/build.gradle#L55
>>>> )
>>>> - "kubernetes" refers to the directory with the YAML files (
>>>> https://github.com/apache/beam/tree/master/.test-infra/metrics/kubernetes
>>>> )
>>>>
>>>> On Mon, Mar 14, 2022 at 7:40 AM Kerry Donny-Clark 
>>>> wrote:
>>>>
>>>>> Hi Daniel,
>>>>> I may be the culprit, as I had to rotate our credentials on the k8s
>>>>> cluster. That meant I also had to rebuild the nodes, and perform an IP
>>>>> rotation. My suspicion is that there may be hardcoded addresses that
>>>>> changed when the nodes were rebuilt, but I don't know the code well enough
>>>>> to find out if that's true.
>>>>>
>>>>> Kerry
>>>>>
>>>>> On Thu, Mar 10, 2022 at 6:11 PM Daniel Oliveira <
>>>>> danolive...@google.com> wrote:
>>>>>
>>>>>> Hi everyone,
>>>>>>
>>>>>> Can anyone take some time to look at BEAM-14017
>>>>>> <https://issues.apache.org/jira/browse/BEAM-14017>? Especially if
>>>>>> you're at all familiar with our scripts for gathering metrics for the
>>>>>> Community Metrics page.
>>>>>>
>>>>>> I noticed beam_PreCommit_CommunityMetrics_Cron is failing
>>>>>> consistently and I took a look into it (everything is documented in the
>>>>>> Jira). But I found very little to help diagnose the problem. As best I 
>>>>>> can
>>>>>> tell, the test is failing to connect to some Kubernetes cluster, but for
>>>>>> some reason the Community Metrics Page
>>>>>> <http://104.154.241.245/d/1/getting-started?orgId=1> it's supposed
>>>>>> to update is still getting regularly updated. I don't really have the 
>>>>>> time
>>>>>> to look into it further so I'm hoping someone else can take a look.
>>>>>>
>>>>>> Thanks,
>>>>>> Daniel Oliveira
>>>>>>
>>>>>
>>>
>>> --
>>>
>>> Daniela Martín (She/Her) | <https://www.wizeline.com/>
>>>
>>> Site Reliability Engineer III
>>>
>>> daniela.mar...@wizeline.com
>>>
>>> Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan, Jal.
>>>
>>> Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
>>> <https://www.facebook.com/WizelineGlobal> | Instagr

Re: Anyone have any ideas why beam_PreCommit_CommunityMetrics is failing?

2022-04-20 Thread Daniela Martín
Hi everyone,

After deep testing together with @Elias Segundo Antonio
  (who is currently working on BEAM-14169
<https://github.com/apache/beam/pull/17383>), we found that the issue is in
fact related to the rotation of the k8s credentials. We identified that
this specific test runs no get-credentials instruction when it is
executed, which is why it is failing.

To avoid this issue, we have refactored the test so that it runs
get-credentials and remove-config every time the job is triggered. We have
also updated the --dry-run flag to remove the deprecated-usage warning.
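
For reference, a rough sketch of the shape of the change (the cluster name,
zone, project, and context below are illustrative placeholders, not the exact
values used in the PR):

    # Fetch fresh credentials for the metrics cluster before the check runs
    # (cluster, zone, and project are placeholders).
    gcloud container clusters get-credentials <METRICS_CLUSTER> \
        --zone <ZONE> --project <PROJECT>

    # --dry-run=true is deprecated; --dry-run=client avoids the warning.
    kubectl apply --dry-run=client -Rf kubernetes

    # Drop the kubeconfig entry again so every run starts from a clean state
    # (shown here as a context deletion; the exact remove-config step is an
    # assumption).
    kubectl config delete-context <METRICS_CONTEXT>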

Could you please help us review PR #17396
<https://github.com/apache/beam/pull/17396>?

Please let us know if you have any comments or questions.

Thank you!

Regards,

On Wed, Apr 13, 2022 at 12:03 PM Daniela Martín 
wrote:

> Hi everyone,
>
> I'll take a look. Thank you for the information.
>
> Regards,
>
> On Mon, Mar 14, 2022 at 5:11 PM Ahmet Altay  wrote:
>
>> I do not know the code well enough either. But I could not find any
>> references to "104.154.102.21" in the code search.
>>
>> In case this helps anyone who wants to help here:
>> - The failing test is a single kubectl command, "kubectl apply
>> --dry-run=true -Rf kubernetes" (
>> https://github.com/apache/beam/blob/f779a3fca31f08ada5011155484b69bdca962754/.test-infra/metrics/build.gradle#L55
>> )
>> - "kubernetes" refers to the directory with the YAML files (
>> https://github.com/apache/beam/tree/master/.test-infra/metrics/kubernetes
>> )
>>
>> On Mon, Mar 14, 2022 at 7:40 AM Kerry Donny-Clark 
>> wrote:
>>
>>> Hi Daniel,
>>> I may be the culprit, as I had to rotate our credentials on the k8s
>>> cluster. That meant I also had to rebuild the nodes, and perform an IP
>>> rotation. My suspicion is that there may be hardcoded addresses that
>>> changed when the nodes were rebuilt, but I don't know the code well enough
>>> to find out if that's true.
>>>
>>> Kerry
>>>
>>> On Thu, Mar 10, 2022 at 6:11 PM Daniel Oliveira 
>>> wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> Can anyone take some time to look at BEAM-14017
>>>> <https://issues.apache.org/jira/browse/BEAM-14017>? Especially if
>>>> you're at all familiar with our scripts for gathering metrics for the
>>>> Community Metrics page.
>>>>
>>>> I noticed beam_PreCommit_CommunityMetrics_Cron is failing consistently
>>>> and I took a look into it (everything is documented in the Jira). But I
>>>> found very little to help diagnose the problem. As best I can tell, the
>>>> test is failing to connect to some Kubernetes cluster, but for some reason
>>>> the Community Metrics Page
>>>> <http://104.154.241.245/d/1/getting-started?orgId=1> it's supposed to
>>>> update is still getting regularly updated. I don't really have the time to
>>>> look into it further so I'm hoping someone else can take a look.
>>>>
>>>> Thanks,
>>>> Daniel Oliveira
>>>>
>>>
>
> --
>
> Daniela Martín (She/Her) | <https://www.wizeline.com/>
>
> Site Reliability Engineer III
>
> daniela.mar...@wizeline.com
>
> Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan, Jal.
>
> Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
> <https://www.facebook.com/WizelineGlobal> | Instagram
> <https://www.instagram.com/wizelineglobal/> | LinkedIn
> <https://www.linkedin.com/company/wizeline>
>
> Share feedback on Clutch <https://clutch.co/review/submit/375119>
>


-- 

Daniela Martín (She/Her) | <https://www.wizeline.com/>

Site Reliability Engineer III

daniela.mar...@wizeline.com

Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan, Jal.

Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
<https://www.facebook.com/WizelineGlobal> | Instagram
<https://www.instagram.com/wizelineglobal/> | LinkedIn
<https://www.linkedin.com/company/wizeline>

Share feedback on Clutch <https://clutch.co/review/submit/375119>



Re: Anyone have any ideas why beam_PreCommit_CommunityMetrics is failing?

2022-04-13 Thread Daniela Martín
Hi everyone,

I'll take a look. Thank you for the information.

Regards,

On Mon, Mar 14, 2022 at 5:11 PM Ahmet Altay  wrote:

> I do not know the code well enough either. But I could not find any
> references to "104.154.102.21" in the code search.
>
> In case this helps anyone who wants to help here:
> - The failing test is a single kubectl command, "kubectl apply
> --dry-run=true -Rf kubernetes" (
> https://github.com/apache/beam/blob/f779a3fca31f08ada5011155484b69bdca962754/.test-infra/metrics/build.gradle#L55
> )
> - "kubernetes" refers to the directory with the YAML files (
> https://github.com/apache/beam/tree/master/.test-infra/metrics/kubernetes)
>
> On Mon, Mar 14, 2022 at 7:40 AM Kerry Donny-Clark 
> wrote:
>
>> Hi Daniel,
>> I may be the culprit, as I had to rotate our credentials on the k8s
>> cluster. That meant I also had to rebuild the nodes, and perform an IP
>> rotation. My suspicion is that there may be hardcoded addresses that
>> changed when the nodes were rebuilt, but I don't know the code well enough
>> to find out if that's true.
>>
>> Kerry
>>
>> On Thu, Mar 10, 2022 at 6:11 PM Daniel Oliveira 
>> wrote:
>>
>>> Hi everyone,
>>>
>>> Can anyone take some time to look at BEAM-14017
>>> <https://issues.apache.org/jira/browse/BEAM-14017>? Especially if
>>> you're at all familiar with our scripts for gathering metrics for the
>>> Community Metrics page.
>>>
>>> I noticed beam_PreCommit_CommunityMetrics_Cron is failing consistently
>>> and I took a look into it (everything is documented in the Jira). But I
>>> found very little to help diagnose the problem. As best I can tell, the
>>> test is failing to connect to some Kubernetes cluster, but for some reason
>>> the Community Metrics Page
>>> <http://104.154.241.245/d/1/getting-started?orgId=1> it's supposed to
>>> update is still getting regularly updated. I don't really have the time to
>>> look into it further so I'm hoping someone else can take a look.
>>>
>>> Thanks,
>>> Daniel Oliveira
>>>
>>

-- 

Daniela Martín (She/Her) | <https://www.wizeline.com/>

Site Reliability Engineer III

daniela.mar...@wizeline.com

Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan, Jal.

Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
<https://www.facebook.com/WizelineGlobal> | Instagram
<https://www.instagram.com/wizelineglobal/> | LinkedIn
<https://www.linkedin.com/company/wizeline>

Share feedback on Clutch <https://clutch.co/review/submit/375119>



Re: [Question] MacOS self-hosted servers - BEAM-12812

2022-02-24 Thread Daniela Martín
Hi,

Thank you very much for your comments and suggestions; we totally agree with
you both. We will discuss this with the rest of the team and let you know the
resolution.

Thanks.


Regards,

On Wed, Feb 23, 2022 at 8:05 PM Ahmet Altay  wrote:

> Hi Daniela,
>
> My suggestion would be to rely on github provided runners and avoid self
> hosted runners for macos builds. We would like to use builtin support in
> these platforms (GH, GCP etc.) and not build our own systems because of the
> complications you are mentioning. I do not think adding AWS based self
> hosted runners would be worth the complexity just for this either.
>
> Also, we would like to know if there is any information about Google Cloud
> Platform offering a macOS image anytime soon for running in VMs or
> containers, as we think this may be the best approach for this task.
>
> People working at Google won't be able to answer this question. There is
> no public information about this. I agree that it would be the best
> approach but it is not clear if/when it will be available. I would not
> recommend waiting for it.
>
> Ahmet
>
>
> On Wed, Feb 23, 2022 at 5:46 PM Danny McCormick 
> wrote:
>
>> Unfortunately, Apple is pretty hardcore about licensing such that AWS,
>> Mac Stadium, or buying/hosting dedicated macs are pretty much the only good
>> options AFAIK. That was a pain in the butt for the multiple CI systems I've
>> worked on in the past.
>>
>> > Is this approach completely allowed according to Apple’s license?
>>
>> Almost definitely not - Apple's OS licensing requires every OS to be run
>> on "Apple-branded" hardware - e.g. from the Catalina license (
>> https://www.apple.com/legal/sla/docs/macOSCatalina.pdf)
>>
>> "The grants set forth in this License do not permit you to, and you
>> agree not to, install, use or run the Apple Software on any
>> non-Apple-branded computer, or to enable others to do so."
>>
>> The same presumably applies to the Hackintosh approach.
>>
>> Disclaimer - I'm not a lawyer, but I have had lawyers say my team
>> couldn't do something like this in the past 🙂
>>
>> Thanks,
>> Danny
>>
>> On Wed, Feb 23, 2022 at 5:44 PM Daniela Martín <
>> daniela.mar...@wizeline.com> wrote:
>>
>>> Hi everyone,
>>>
>>> We are currently working on the *BEAM-12812 Run GitHub Actions on GCP
>>> self-hosted workers* [1] task, and we would like to know your thoughts
>>> regarding the macOS runners.
>>>
>>> Some context of the task
>>>
>>> The current GitHub Actions workflows run on multiple operating systems:
>>> Ubuntu, Windows, and macOS. The way to migrate these runners from
>>> GitHub-hosted to GCP is to implement self-hosted runners, so we have
>>> started implementing them for the Ubuntu and Windows environments, using
>>> Google Kubernetes Engine and Google Compute Engine VM instances
>>> respectively.
>>>
>>> Findings
>>>
>>> We are also researching the best way to implement the macOS self-hosted
>>> runners and have identified the following approaches:
>>>
>>>    - Cloud Virtual Machines Support
>>>    - Mac OS X in Docker
>>>    - Hackintosh
>>>
>>>
>>> Cloud VM Support
>>>
>>> We have found that other cloud providers, such as AWS [2], allow us to
>>> host macOS instances on our own dedicated hosts using official Apple
>>> hardware. However, there is no macOS image available in Google Cloud
>>> Platform [3] yet.
>>> Mac OS X in Docker
>>>
>>> A Docker image, docker-osx [4], is available on Docker Hub for running
>>> Mac OS X in a Docker container.
>>>
>>> Pros
>>>
>>>    - macOS Monterey VM on Linux
>>>    - Near-native performance
>>>    - Multiple versions of macOS: High Sierra, Mojave, Catalina, Big Sur
>>>    and Monterey
>>>    - Multiple kinds of images depending on the use case
>>>    - Runs on top of QEMU + KVM
>>>    - Supports Kubernetes
>>>
>>> Cons
>>>
>>>    - Is this approach completely allowed according to Apple’s license?
>>>

[Question] MacOS self-hosted servers - BEAM-12812

2022-02-23 Thread Daniela Martín
Hi everyone,

We are currently working on the *BEAM-12812 Run GitHub Actions on GCP
self-hosted workers* [1] task, and we would like to know your thoughts
regarding the macOS runners.

Some context of the task

The current GitHub Actions workflows run on multiple operating systems:
Ubuntu, Windows, and macOS. The way to migrate these runners from
GitHub-hosted to GCP is to implement self-hosted runners, so we have started
implementing them for the Ubuntu and Windows environments, using Google
Kubernetes Engine and Google Compute Engine VM instances respectively.
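
As a reference point, registering a GCE VM as a GitHub Actions self-hosted
runner looks roughly like the following (the runner version, token, and labels
are placeholders; this is only a sketch, not the exact setup used here):

    # Download and configure the GitHub Actions runner agent on the VM
    # (version, token, and labels are illustrative placeholders).
    curl -o actions-runner.tar.gz -L \
        https://github.com/actions/runner/releases/download/v<VERSION>/actions-runner-linux-x64-<VERSION>.tar.gz
    tar xzf actions-runner.tar.gz
    ./config.sh --url https://github.com/apache/beam --token <RUNNER_TOKEN> \
        --labels self-hosted,ubuntu-20.04
    ./run.sh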

Findings

We are also researching the best way to implement the macOS self-hosted
runners and have identified the following approaches:

   - Cloud Virtual Machines Support
   - Mac OS X in Docker
   - Hackintosh


Cloud VM Support

We have found that other cloud providers, such as AWS [2], allow us to host
macOS instances on our own dedicated hosts using official Apple hardware.
However, there is no macOS image available in Google Cloud Platform [3] yet.
Mac OS X in Docker

A Docker image, docker-osx [4], is available on Docker Hub for running Mac OS
X in a Docker container.
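
For context, the docker-osx documentation describes invocations along these
lines (a sketch only; the exact flags vary by image variant and version):

    # Run a macOS VM inside a container; requires KVM on the Linux host and
    # X11 forwarding for the display.
    docker run -it \
        --device /dev/kvm \
        -p 50922:10022 \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -e "DISPLAY=${DISPLAY:-:0.0}" \
        sickcodes/docker-osx:latest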

Pros

   - macOS Monterey VM on Linux
   - Near-native performance
   - Multiple versions of macOS: High Sierra, Mojave, Catalina, Big Sur and
   Monterey
   - Multiple kinds of images depending on the use case
   - Runs on top of QEMU + KVM
   - Supports Kubernetes

Cons

   - Is this approach completely allowed according to Apple’s license?
   - Unverified Docker Hub publisher
   - Hardware virtualization must be enabled in the BIOS
   - Approx. 20 GB of disk space for a minimum installation
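
As a side note, a quick way to check the hardware-virtualization prerequisite
on a candidate Linux host:

    # A count greater than zero means the CPU exposes VT-x (vmx) or AMD-V (svm).
    egrep -c '(vmx|svm)' /proc/cpuinfo
    # On Ubuntu, the cpu-checker package provides a more direct check.
    kvm-ok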



Hackintosh

A way to get macOS running on hardware that is not authorized by Apple.
Building and configuring the machine (using GNU/Linux as a base) can become
very complicated, and can result in a malfunctioning OS if the required
settings are not properly applied or the hardware is not suitable.


In conclusion, we found that there are some ways to run macOS on
self-hosted runners; however, these could conflict with Apple's terms and
licenses and should be investigated in depth before any implementation.

The question here is whether anyone knows of another approach that would let
us run macOS in the cloud while complying with Apple's licenses.

Also, we would like to know if there is any information about Google Cloud
Platform offering a macOS image anytime soon for running in VMs or
containers, as we think this may be the best approach for this task.

Please feel free to share your comments and suggestions.

Thank you!

Regards,

[1] https://issues.apache.org/jira/browse/BEAM-12812

[2]
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-mac-instances.html

[3]
https://cloud.google.com/migrate/compute-engine/docs/5.0/reference/supported-os-versions


[4] https://hub.docker.com/r/sickcodes/docker-osx

-- 

Daniela Martín (She/Her) | <https://www.wizeline.com/>

Site Reliability Engineer III

daniela.mar...@wizeline.com

Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan, Jal.

Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
<https://www.facebook.com/WizelineGlobal> | Instagram
<https://www.instagram.com/wizelineglobal/> | LinkedIn
<https://www.linkedin.com/company/wizeline>

Share feedback on Clutch <https://clutch.co/review/submit/375119>



Re: Upgrading Jenkins Workers BEAM-12621

2022-02-01 Thread Daniela Martín
Thank you, everyone, for all the help and patience. We will be on the lookout
in case something comes up with the Jenkins jobs.

In the next few days, we will share more details of the work done for the
upgrade.

Thank you.

Regards,

On Tue, Feb 1, 2022 at 3:08 PM Kiley Sok  wrote:

> The rest of the workers were updated to the new image and are back online.
> Thanks for your patience!
>
> On Mon, Jan 31, 2022 at 4:43 PM Daniela Martín <
> daniela.mar...@wizeline.com> wrote:
>
>> Thanks, Kiley. It seems that the failures are not related to the *Ubuntu
>> 20.04.3* upgrade.
>>
>> If the *Python Release Candidate* job works, we can proceed with
>> the upgrade of the remaining Jenkins workers.
>>
>> Thank you.
>>
>> Regards,
>>
>> On Mon, Jan 31, 2022 at 6:31 PM Kiley Sok  wrote:
>>
>>> XVR failures: https://issues.apache.org/jira/browse/BEAM-13778
>>>
>>> On Mon, Jan 31, 2022 at 4:02 PM Daniela Martín <
>>> daniela.mar...@wizeline.com> wrote:
>>>
>>>> Hi everyone,
>>>>
>>>> Thank you for the information and details.
>>>>
>>>> We reviewed the tests and we are taking a look at the following jobs
>>>> that started failing close to the date the Jenkins instance was upgraded.
>>>> We are not sure if it’s because of the Ubuntu 20.04.3 upgrade or for
>>>> another reason.
>>>>
>>>>
>>>>    - beam_PostCommit_XVR_Direct
>>>>      <https://ci-beam.apache.org/view/PostCommit/job/beam_PostCommit_XVR_Direct/>
>>>>    - beam_PostCommit_XVR_Flink
>>>>      <https://ci-beam.apache.org/view/PostCommit/job/beam_PostCommit_XVR_Flink/>
>>>>    - beam_PostCommit_XVR_Spark3
>>>>      <https://ci-beam.apache.org/view/PostCommit/job/beam_PostCommit_XVR_Spark3/>
>>>>
>>>>
>>>> We will continue testing them. Any information regarding these jobs
>>>> would be greatly appreciated.
>>>>
>>>> Emily, could you please re-run the Python Release Candidate
>>>> <https://ci-beam.apache.org/job/beam_PostRelease_Python_Candidate/>
>>>> job in your PR #16632 <https://github.com/apache/beam/pull/16632>?
>>>>
>>>>
>>>> Thank you very much.
>>>>
>>>> Regards,
>>>>
>>>> On Mon, Jan 31, 2022 at 4:53 PM Valentyn Tymofieiev <
>>>> valen...@google.com> wrote:
>>>>
>>>>> Hey Daniela and Giomar (& all), do you see any concerns with the
>>>>> current image so far? If not, let's upgrade the remaining workers 9-15 to
>>>>> the new image. Thanks!
>>>>>
>>>>> On Fri, Jan 28, 2022 at 10:05 PM Valentyn Tymofieiev <
>>>>> valen...@google.com> wrote:
>>>>>
>>>>>> Thanks, Kiley. we are back to Ubuntu 20. For the weekend, we'll keep
>>>>>> the pool at 50% capacity with the benched workers still on ubuntu 16 as
>>>>>> backup and either upgrade all of the workers to ubuntu 20 or roll back
>>>>>> again to ubuntu 16 next week.
>>>>>>
>>>>>> On Fri, Jan 28, 2022 at 10:29 AM Kiley Sok 
>>>>>> wrote:
>>>>>>
>>>>>>> Nexmark failures are from an unrelated change, fyi
>>>>>>>
>>>>>>> On Thu, Jan 27, 2022 at 5:59 PM Valentyn Tymofieiev <
>>>>>>> valen...@google.com> wrote:
>>>>>>>
>>>>>>>> It looks like something is resetting the  *Restrict where this
>>>>>>>> project can be run* label setting on
>>>>>>>> https://ci-beam.apache.org/job/beam_PreCommit_Website_Stage_GCS_Commit/configure
>>>>>>>>  back to "beam" for the jobs that I previously rerouted to 
>>>>>>>> beam-ubuntu20.
>>>>>>>> I'll try to rollback all workers to ubuntu 16.
>>>>>>>>
>>>>>>>> Daniela & Giomar:  Here are other builds that have likely got
>>>>>>>> broken by the update:
>>>>>>>>
>>>>>>>>
>>>>>>>> https://ci-beam.apache.org/job/beam_PostCommit_Java_Nexmark_Dataflow_V2/994/console
>>>>>>>

Re: Upgrading Jenkins Workers BEAM-12621

2022-01-31 Thread Daniela Martín
Thanks, Kiley. It seems that the failures are not related to the *Ubuntu
20.04.3* upgrade.

If the *Python Release Candidate* job works, we can proceed with the
upgrade of the remaining Jenkins workers.

Thank you.

Regards,

On Mon, Jan 31, 2022 at 6:31 PM Kiley Sok  wrote:

> XVR failures: https://issues.apache.org/jira/browse/BEAM-13778
>
> On Mon, Jan 31, 2022 at 4:02 PM Daniela Martín <
> daniela.mar...@wizeline.com> wrote:
>
>> Hi everyone,
>>
>> Thank you for the information and details.
>>
>> We reviewed the tests and we are taking a look at the following jobs that
>> started failing close to the date the Jenkins instance was upgraded. We are
>> not sure if it’s because of the Ubuntu 20.04.3 upgrade or for another
>> reason.
>>
>>
>>    - beam_PostCommit_XVR_Direct
>>      <https://ci-beam.apache.org/view/PostCommit/job/beam_PostCommit_XVR_Direct/>
>>    - beam_PostCommit_XVR_Flink
>>      <https://ci-beam.apache.org/view/PostCommit/job/beam_PostCommit_XVR_Flink/>
>>    - beam_PostCommit_XVR_Spark3
>>      <https://ci-beam.apache.org/view/PostCommit/job/beam_PostCommit_XVR_Spark3/>
>>
>>
>> We will continue testing them. Any information regarding these jobs would
>> be greatly appreciated.
>>
>> Emily, could you please re-run the Python Release Candidate
>> <https://ci-beam.apache.org/job/beam_PostRelease_Python_Candidate/> job
>> in your PR #16632 <https://github.com/apache/beam/pull/16632>?
>>
>>
>> Thank you very much.
>>
>> Regards,
>>
>> On Mon, Jan 31, 2022 at 4:53 PM Valentyn Tymofieiev 
>> wrote:
>>
>>> Hey Daniela and Giomar (& all), do you see any concerns with the current
>>> image so far? If not, let's upgrade the remaining workers 9-15 to the new
>>> image. Thanks!
>>>
>>> On Fri, Jan 28, 2022 at 10:05 PM Valentyn Tymofieiev <
>>> valen...@google.com> wrote:
>>>
>>>> Thanks, Kiley. we are back to Ubuntu 20. For the weekend, we'll keep
>>>> the pool at 50% capacity with the benched workers still on ubuntu 16 as
>>>> backup and either upgrade all of the workers to ubuntu 20 or roll back
>>>> again to ubuntu 16 next week.
>>>>
>>>> On Fri, Jan 28, 2022 at 10:29 AM Kiley Sok  wrote:
>>>>
>>>>> Nexmark failures are from an unrelated change, fyi
>>>>>
>>>>> On Thu, Jan 27, 2022 at 5:59 PM Valentyn Tymofieiev <
>>>>> valen...@google.com> wrote:
>>>>>
>>>>>> It looks like something is resetting the  *Restrict where this
>>>>>> project can be run* label setting on
>>>>>> https://ci-beam.apache.org/job/beam_PreCommit_Website_Stage_GCS_Commit/configure
>>>>>>  back to "beam" for the jobs that I previously rerouted to beam-ubuntu20.
>>>>>> I'll try to rollback all workers to ubuntu 16.
>>>>>>
>>>>>> Daniela & Giomar:  Here are other builds that have likely got
>>>>>> broken by the update:
>>>>>>
>>>>>>
>>>>>> https://ci-beam.apache.org/job/beam_PostCommit_Java_Nexmark_Dataflow_V2/994/console
>>>>>>
>>>>>> https://ci-beam.apache.org/job/beam_PostCommit_Java_Nexmark_Dataflow_V2_Java17/
>>>>>>
>>>>>> https://ci-beam.apache.org/job/beam_Release_Python_NightlySnapshot/1259/console
>>>>>>
>>>>>> https://ci-beam.apache.org/job/beam_PostRelease_Python_Candidate/
>>>>>>
>>>>>> I  found them by looking at https://ci-beam.apache.org/ and sorting
>>>>>> by status. Looks like we also have a lot of permared Jenkins suites...
>>>>>>
>>>>>> Also, we should maybe think of some sustainable rolling-upgrades for
>>>>>> Jenkins + autorollback to a healthy state as babysitting these upgrades
>>>>>> becomes quite toilsome. Admittedly, they are not very frequent though.
>>>>>>
>>>>>>
>>>>>> On Thu, Jan 27, 2022 at 1:52 PM Emily Ye  wrote:
>>>>>>
>>>>>>> I'm also running into these errors on my PR for testing the release
>>>>>>> https://github.com/apache/beam/pull/16632, re-running doesn't seem
>>>>>>> to help. Do we have to force the machine per-job/task?

Re: Upgrading Jenkins Workers BEAM-12621

2022-01-31 Thread Daniela Martín
;>>>>> wrote:
>>>>>>>
>>>>>>>> Thanks Valentyn. FYI I re-ran a Website_Stage_GCS failure but it
>>>>>>>> failed again with the same error:
>>>>>>>> https://ci-beam.apache.org/job/beam_PreCommit_Website_Stage_GCS_Phrase/94/
>>>>>>>>
>>>>>>>> On Wed, Jan 26, 2022 at 12:13 PM Valentyn Tymofieiev <
>>>>>>>> valen...@google.com> wrote:
>>>>>>>>
>>>>>>>>> We are investigating an issue affecting
>>>>>>>>> beam_PreCommit_Website_Stage_GCS_Commit jobs. If this job failed for 
>>>>>>>>> you,
>>>>>>>>> please rerun it via a trigger phrase, it should be scheduled to run on
>>>>>>>>> ubuntu16 nodes for now.
>>>>>>>>>
>>>>>>>>
>>>>>>>>> Also, I accidentally canceled several Jenkins jobs in the
>>>>>>>>> build queue, which may have been triggered by your PRs; you may need to
>>>>>>>>> rerun them as well. Sorry about this inconvenience.
>>>>>>>>>
>>>>>>>>> As a reminder, trigger phrases are in
>>>>>>>>> https://github.com/apache/beam/blob/master/.test-infra/jenkins/README.md
>>>>>>>>> (which is linked in PR description template).
>>>>>>>>>
>>>>>>>>> On Wed, Jan 26, 2022 at 10:30 AM Daniela Martín <
>>>>>>>>> daniela.mar...@wizeline.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi everyone,
>>>>>>>>>>
>>>>>>>>>> We are updating the Jenkins workers with a new image that was
>>>>>>>>>> upgraded from Ubuntu 16.04 LTS to Ubuntu 20.04.3 LTS. You can see more
>>>>>>>>>> details on: BEAM-12621
>>>>>>>>>> <https://issues.apache.org/jira/browse/BEAM-12621>
>>>>>>>>>>
>>>>>>>>>> If something comes up with the workers or the tests please reach
>>>>>>>>>> out to us.
>>>>>>>>>>
>>>>>>>>>> Please let us know if you have any questions or comments.
>>>>>>>>>>
>>>>>>>>>> Thank you.
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>>
>>>>>>>>>> --
>>>>>>>>>>
>>>>>>>>>> Daniela Martín (She/Her) | <https://www.wizeline.com/>
>>>>>>>>>>
>>>>>>>>>> Site Reliability Engineer III
>>>>>>>>>>
>>>>>>>>>> daniela.mar...@wizeline.com
>>>>>>>>>>
>>>>>>>>>> Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan,
>>>>>>>>>> Jal.
>>>>>>>>>>
>>>>>>>>>> Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
>>>>>>>>>> <https://www.facebook.com/WizelineGlobal> | Instagram
>>>>>>>>>> <https://www.instagram.com/wizelineglobal/> | LinkedIn
>>>>>>>>>> <https://www.linkedin.com/company/wizeline>
>>>>>>>>>>
>>>>>>>>>> Share feedback on Clutch <https://clutch.co/review/submit/375119>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> *This email and its contents (including any attachments) are
>>>>>>>>>> being sent toyou on the condition of confidentiality and may be 
>>>>>>>>>> protected
>>>>>>>>>> by legalprivilege. Access to this email by anyone other than the 
>>>>>>>>>> intended
>>>>>>>>>> recipientis unauthorized. If you are not the intended recipient, 
>>>>>>>>>> please
>>>>>>>>>> immediatelynotify the sender by replying to this message and delete 
>>>>>>>>>> the
>>>>>>>>>> materialimmediately from your system. Any further use, dissemination,
>>>>>>>>>> distributionor reproduction of this email is strictly prohibited. 
>>>>>>>>>> Further,
>>>>>>>>>> norepresentation is made with respect to any content contained in 
>>>>>>>>>> this
>>>>>>>>>> email.*
>>>>>>>>>
>>>>>>>>>

-- 

Daniela Martín (She/Her) | <https://www.wizeline.com/>

Site Reliability Engineer III

daniela.mar...@wizeline.com

Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan, Jal.

Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
<https://www.facebook.com/WizelineGlobal> | Instagram
<https://www.instagram.com/wizelineglobal/> | LinkedIn
<https://www.linkedin.com/company/wizeline>

Share feedback on Clutch <https://clutch.co/review/submit/375119>



Upgrading Jenkins Workers BEAM-12621

2022-01-26 Thread Daniela Martín
Hi everyone,

We are updating the Jenkins workers with a new image that was upgraded from
Ubuntu 16.04 LTS to Ubuntu 20.04.3 LTS. You can see more details in
BEAM-12621 <https://issues.apache.org/jira/browse/BEAM-12621>.
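
For anyone who wants to confirm which image a given worker ended up on,
something like the following works (the instance name and zone are
placeholders, not the actual worker names):

    # Print the worker's OS release over SSH to confirm it picked up the new image.
    gcloud compute ssh <WORKER_INSTANCE> --zone <ZONE> --command 'lsb_release -ds'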

If something comes up with the workers or the tests, please reach out to us.

Please let us know if you have any questions or comments.

Thank you.

Regards,

-- 

Daniela Martín (She/Her) | <https://www.wizeline.com/>

Site Reliability Engineer III

daniela.mar...@wizeline.com

Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan, Jal.

Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
<https://www.facebook.com/WizelineGlobal> | Instagram
<https://www.instagram.com/wizelineglobal/> | LinkedIn
<https://www.linkedin.com/company/wizeline>

Share feedback on Clutch <https://clutch.co/review/submit/375119>



Re: Best practices for upgrading installed dependencies on Jenkins VMs?

2022-01-06 Thread Daniela Martín
Hi Valentyn,

We decided to include the Java 17 installation in the image that we are
creating for the Ubuntu upgrade (BEAM-12621). We are using the latest image,
*jenkins-worker-boot-image-20211029*, which the Jenkins workers are currently
using, so the remaining changes in this new image would be the ones that
were made yesterday in the *jenkins-worker-boot-image-20220105* image.

We will create the new image later today, including the Ubuntu upgrade and
the Java SDK 17 installation (which were previously implemented in
*jenkins-worker-boot-image-20211214*), and let you know.
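
For reference, creating such a boot image from a prepared worker disk is
roughly the following (the disk name, zone, and project are placeholders, not
the actual infra values):

    # Capture the prepared boot disk as a new, dated Jenkins worker image.
    gcloud compute images create jenkins-worker-boot-image-$(date +%Y%m%d) \
        --source-disk <BUILDER_DISK> \
        --source-disk-zone <ZONE> \
        --project <PROJECT>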

Thank you.

Regards,

On Thu, Jan 6, 2022 at 10:01 AM Valentyn Tymofieiev 
wrote:

> Thanks, Daniela. I am happy to spot-check the new image you are building
> for issues I am aware of.
>
> I made my changes to the latest VM image, building on top of the latest
> jenkins-worker-boot-image-20211214, and replicated those changes on the
> running workers.
>
> I noticed that the current Jenkins workers (at least some of them) are still
> running on boot disks from the older image jenkins-worker-boot-image-20211029,
> and not the newest available image, jenkins-worker-boot-image-20211214.
> The image comment for the latter image says: Installed Java SDK 17. See
> BEAM-12313.
>
> I was wondering - is there a reason we did not reload the Jenkins workers to
> pick up this latest image? Or did you decide to upgrade to the new Ubuntu
> version instead, which would also include Java 17?
>
> If jenkins-worker-boot-image-20211214 is known to work and needed for
> BEAM-12313 ~now, I can do this update, and we can continue to work in
> parallel on BEAM-12621.
>
> Thanks,
> Valentyn
>


-- 

Daniela Martín (She/Her) | <https://www.wizeline.com/>

Site Reliability Engineer

daniela.mar...@wizeline.com

Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan, Jal.

Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
<https://www.facebook.com/WizelineGlobal> | Instagram
<https://www.instagram.com/wizelineglobal/> | LinkedIn
<https://www.linkedin.com/company/wizeline>

Share feedback on Clutch <https://clutch.co/review/submit/375119>



Re: Best practices for upgrading installed dependencies on Jenkins VMs?

2022-01-05 Thread Daniela Martín
ss?
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Mon, Oct 11, 2021 at 6:22 PM Daniel Oliveira <
>>>>>>>>>>>>>>>>> danolive...@google.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Took me a bit to get to this, sorry. I finally figured
>>>>>>>>>>>>>>>>>> out an approach for updating Go and did so and will be 
>>>>>>>>>>>>>>>>>> updating the image
>>>>>>>>>>>>>>>>>> momentarily.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I think a more important note is that I tried what
>>>>>>>>>>>>>>>>>> Valentyn was considering, which is SSHing into workers and 
>>>>>>>>>>>>>>>>>> updating the
>>>>>>>>>>>>>>>>>> dependency. I'll describe the process below, but the summary 
>>>>>>>>>>>>>>>>>> is that I did
>>>>>>>>>>>>>>>>>> it on one worker with Go so far, saw no problems over the 
>>>>>>>>>>>>>>>>>> weekend, and
>>>>>>>>>>>>>>>>>> would like to continue updating the rest of the workers if 
>>>>>>>>>>>>>>>>>> there are no
>>>>>>>>>>>>>>>>>> objections.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> Here's a step-by-step of what I did. If we decide to
>>>>>>>>>>>>>>>>>> stick with this approach, these instructions can be added to 
>>>>>>>>>>>>>>>>>> Confluence:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> 1. Go to the page for the Jenkins agent you want to
>>>>>>>>>>>>>>>>>> update [1] and click "Mark this node temporarily offline", 
>>>>>>>>>>>>>>>>>> leaving a reason
>>>>>>>>>>>>>>>>>> such as "Updating X dependency."
>>>>>>>>>>>>>>>>>> 2. Wait until there are no more tests running in that
>>>>>>>>>>>>>>>>>> agent (under "Build Executor Status" on the left of the 
>>>>>>>>>>>>>>>>>> page).
>>>>>>>>>>>>>>>>>> 3. SSH into the agent and perform the update.
>>>>>>>>>>>>>>>>>> 4. Mark the node as online again.
>>>>>>>>>>>>>>>>>> 5. Repeat for every worker.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> And these are some additional steps if you want to
>>>>>>>>>>>>>>>>>> immediately run a test suite to check that the update worked 
>>>>>>>>>>>>>>>>>> correctly. For
>>>>>>>>>>>>>>>>>> example in my case, I wanted to check against the Go 
>>>>>>>>>>>>>>>>>> Postcommit, and it was
>>>>>>>>>>>>>>>>>> a good thing I did, because it actually failed the first 
>>>>>>>>>>>>>>>>>> time and I had to
>>>>>>>>>>>>>>>>>> go back in to fix a small oversight I made. So doing this 
>>>>>>>>>>>>>>>>>> after you update
>>>>>>>>>>>>>>>>>> your first worker is probably a good idea before updating 
>>>>>>>>>>>>>>>>>> the rest:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> 1. Go to the page for the job you want to run (for
>>>>>>>>>>>>>>>>>> example: [2]).
>>>>>>>>>>>>>>>>>> 2. Click "Configure" on the left menu.
>>>>>>>>>>>>>>>>>> 3. Find the checkmark "Restrict where this project can be
>>>>>>>>>>>>>>>>>> run" and change the restriction from "beam" to the specific 
>>>>>>>>>>>>>>>>>> name of the
>>>>>>>>>>>>>>>>>> agent (ex. "apache-beam-jenkins-1").
>>>>>>>>>>>>>>>>>> 4. Save and apply that change.
>>>>>>>>>>>>>>>>>> 5. Back on the page for the job, click "Build with
>>>>>>>>>>>>>>>>>> Parameters" on the left menu.
>>>>>>>>>>>>>>>>>> 6. Run the build on "master".
>>>>>>>>>>>>>>>>>> 7. Once you're done checking the results, change
>>>>>>>>>>>>>>>>>> the restriction for the job back to "beam". (This also gets 
>>>>>>>>>>>>>>>>>> reset once
>>>>>>>>>>>>>>>>>> every 24 hours in case you forget.)
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> I did that on one agent (apache-beam-jenkins-2) on Friday
>>>>>>>>>>>>>>>>>> evening when it wasn't too busy, and got Go updated and 
>>>>>>>>>>>>>>>>>> working. I checked
>>>>>>>>>>>>>>>>>> that agent's execution history again today just in case, and 
>>>>>>>>>>>>>>>>>> it was healthy
>>>>>>>>>>>>>>>>>> over the weekend, with no Go-related problems as far as I 
>>>>>>>>>>>>>>>>>> could see. If
>>>>>>>>>>>>>>>>>> there's no objections I'd like to go ahead and continue 
>>>>>>>>>>>>>>>>>> updating the rest
>>>>>>>>>>>>>>>>>> of the workers (I'll do this late at night or over the 
>>>>>>>>>>>>>>>>>> weekend to avoid
>>>>>>>>>>>>>>>>>> disrupting dev work).
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> [1]
>>>>>>>>>>>>>>>>>> https://ci-beam.apache.org/computer/apache-beam-jenkins-1/
>>>>>>>>>>>>>>>>>> [2] https://ci-beam.apache.org/job/beam_PostCommit_Go/
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Mon, Oct 4, 2021 at 6:14 PM Valentyn Tymofieiev <
>>>>>>>>>>>>>>>>>> valen...@google.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I updated the image in [1], but did not change the
>>>>>>>>>>>>>>>>>>> workers yet to pick up the new image yet. We can do this 
>>>>>>>>>>>>>>>>>>> once we add Go
>>>>>>>>>>>>>>>>>>> changes on top of it.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> I am also considering to SSH into every worker and run a
>>>>>>>>>>>>>>>>>>> one-line command that adds the dependency that was missing. 
>>>>>>>>>>>>>>>>>>> It seems to be
>>>>>>>>>>>>>>>>>>> low risk, and  there is a fall-back plan to re-start the 
>>>>>>>>>>>>>>>>>>> worker using the
>>>>>>>>>>>>>>>>>>> saved image - both new and old images are saved and 
>>>>>>>>>>>>>>>>>>> available in Cloud
>>>>>>>>>>>>>>>>>>> Console.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Ideally, we should find a way to do a rolling upgrade
>>>>>>>>>>>>>>>>>>> that a PMC or committer could trigger without logging into 
>>>>>>>>>>>>>>>>>>> every machine.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> [1]
>>>>>>>>>>>>>>>>>>> https://issues.apache.org/jira/browse/BEAM-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17424228#comment-17424228
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Wed, Sep 22, 2021 at 3:28 PM Daniel Oliveira <
>>>>>>>>>>>>>>>>>>> danolive...@google.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> @Brian Hulette  That button seems
>>>>>>>>>>>>>>>>>>>> like exactly what we'd need. Doing it manually would be a 
>>>>>>>>>>>>>>>>>>>> pain, but it's
>>>>>>>>>>>>>>>>>>>> probably still preferable to causing a bunch of aborted 
>>>>>>>>>>>>>>>>>>>> tests.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> @Valentyn Tymofieiev  Collaborating
>>>>>>>>>>>>>>>>>>>> to do both updates at once is a great idea! I'll message 
>>>>>>>>>>>>>>>>>>>> you directly about
>>>>>>>>>>>>>>>>>>>> it.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Sep 22, 2021 at 2:44 PM Valentyn Tymofieiev <
>>>>>>>>>>>>>>>>>>>> valen...@google.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> I am also interested in this updating version of
>>>>>>>>>>>>>>>>>>>>> Python on VMs, I need to install Python 3.9. Thanks for 
>>>>>>>>>>>>>>>>>>>>> looking into this.
>>>>>>>>>>>>>>>>>>>>> We can coordinate together to make one update instead of 
>>>>>>>>>>>>>>>>>>>>> two.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Wed, Sep 22, 2021 at 2:40 PM Brian Hulette <
>>>>>>>>>>>>>>>>>>>>> bhule...@google.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I'm not sure about best practices here. Out of
>>>>>>>>>>>>>>>>>>>>>> curiosity I just poked around in the Jenkins UI (e.g. 
>>>>>>>>>>>>>>>>>>>>>> [1]) and it looks
>>>>>>>>>>>>>>>>>>>>>> like you can manually "Mark node temporarily offline" 
>>>>>>>>>>>>>>>>>>>>>> when logged in (if
>>>>>>>>>>>>>>>>>>>>>> you're a committer). According to [2] this will prevent 
>>>>>>>>>>>>>>>>>>>>>> it from picking up
>>>>>>>>>>>>>>>>>>>>>> new jobs after it's finished the currently executing 
>>>>>>>>>>>>>>>>>>>>>> ones. Doing that
>>>>>>>>>>>>>>>>>>>>>> manually for every worker could be a pain though.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Brian
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> [1]
>>>>>>>>>>>>>>>>>>>>>> https://ci-beam.apache.org/computer/apache-beam-jenkins-13/
>>>>>>>>>>>>>>>>>>>>>> [2]
>>>>>>>>>>>>>>>>>>>>>> https://stackoverflow.com/questions/26553612/how-do-i-disable-a-node-in-jenkins-ui-after-it-has-completed-its-currently-runni
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> On Wed, Sep 22, 2021 at 1:03 PM Daniel Oliveira <
>>>>>>>>>>>>>>>>>>>>>> danolive...@google.com> wrote:
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Hey everyone,
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I'm aiming at upgrading the version of Go on our
>>>>>>>>>>>>>>>>>>>>>>> Jenkins VMs, and I found these instructions on
>>>>>>>>>>>>>>>>>>>>>>> upgrading software on Jenkins
>>>>>>>>>>>>>>>>>>>>>>> <https://cwiki.apache.org/confluence/display/BEAM/Jenkins+Tips#JenkinsTips-HowtoinstallandupgradesoftwareonJenkinsworkers>
>>>>>>>>>>>>>>>>>>>>>>>  on
>>>>>>>>>>>>>>>>>>>>>>> our cwiki.
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> I haven't started going through it yet, but I was
>>>>>>>>>>>>>>>>>>>>>>> wondering about the last few steps that involve 
>>>>>>>>>>>>>>>>>>>>>>> stopping VMs, deleting boot
>>>>>>>>>>>>>>>>>>>>>>> disks, and restarting executors. Is there some best 
>>>>>>>>>>>>>>>>>>>>>>> practice for
>>>>>>>>>>>>>>>>>>>>>>> that section to avoid causing interruptions in our 
>>>>>>>>>>>>>>>>>>>>>>> automated testing?
>>>>>>>>>>>>>>>>>>>>>>> Should I be trying to do this outside of peak dev 
>>>>>>>>>>>>>>>>>>>>>>> hours, or going one VM at
>>>>>>>>>>>>>>>>>>>>>>> a time so others can pick up extra load, or anything 
>>>>>>>>>>>>>>>>>>>>>>> like that?
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>>>>>>>>>> Daniel Oliveira
>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>

-- 

Daniela Martín (She/Her) | <https://www.wizeline.com/>

Site Reliability Engineer

daniela.mar...@wizeline.com

Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan, Jal.

Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
<https://www.facebook.com/WizelineGlobal> | Instagram
<https://www.instagram.com/wizelineglobal/> | LinkedIn
<https://www.linkedin.com/company/wizeline>

Share feedback on Clutch <https://clutch.co/review/submit/375119>



Contributor permission for Beam Jira Tickets

2021-12-06 Thread Daniela Martín
Hello everyone,

Hope you are doing well.

I'm Daniela and I'm currently working at Wizeline. I would like to be added
as a contributor in the Beam Jira issue tracker so that I can assign myself
to a couple of Beam tasks.

My JiraID is: danimartin

Thank you in advance!

Regards,
-- 

Daniela Martín (She/Her) | <https://www.wizeline.com/>

Site Reliability Engineer

daniela.mar...@wizeline.com

Amado Nervo 2200, Esfera P6, Col. Ciudad del Sol, 45050 Zapopan, Jal.

Follow us Twitter <https://twitter.com/wizelineglobal> | Facebook
<https://www.facebook.com/WizelineGlobal> | Instagram
<https://www.instagram.com/wizelineglobal/> | LinkedIn
<https://www.linkedin.com/company/wizeline>

Share feedback on Clutch <https://clutch.co/review/submit/375119>
