ok, i'm gonna have to reboot all the workers tomorrow and wipe the m2
caches. it looks like zombie builds were lingering post-jenkins-wedging
and corrupting the repos.
fixed on -05.
On Mon, Jul 6, 2020 at 2:17 PM Jungtaek Lim wrote:
Just encountered the same and it's worker-05 again. (You can find [error]
in the console to see what the problem is. I guess the jetty artifacts on the
worker might be messed up.)
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/125127/consoleFull
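As a side note, once a console log is saved locally, the failing step can be located with a quick grep. This is just a sketch; `find_errors` is a hypothetical helper, and it assumes the sbt-style `[error]` prefix seen in these console logs:

```shell
# find_errors: print lines containing "[error]" from a saved console log,
# with line numbers, so the failing step can be located quickly.
# (hypothetical helper; assumes the sbt-style "[error]" marker in the log)
find_errors() {
  grep -n '\[error\]' "$1"
}
```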
On Tue, Jul 7, 2020 at 5:35 AM
Is this a flaky or a persistent issue? It failed in the Scala gendoc step, but
not in the part the PR modified. It ran on worker-05.
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/125121/consoleFull
On Tue, Jul 7, 2020 at 2:10 AM shane knapp ☠ wrote:
i killed and retriggered the PRB jobs on 04, and wiped that worker's m2
cache.
On Mon, Jul 6, 2020 at 9:24 AM shane knapp ☠ wrote:
once the jobs running on that worker are finished, yes.
On Sun, Jul 5, 2020 at 7:41 PM Hyukjin Kwon wrote:
Shane, can we remove .m2 in worker machine 4?
On Fri, Jul 3, 2020 at 8:18 AM, Jungtaek Lim wrote:
Looks like the Jenkins service itself has become unstable. It took considerable
time just to open the test report for a specific build, and Jenkins doesn't
pick up the rebuild request (retest this, please) in the GitHub comment.
On Thu, Jul 2, 2020 at 2:12 PM Hyukjin Kwon wrote:
Ah, okay. Actually, there already is one:
https://issues.apache.org/jira/browse/SPARK-31693. I am reopening it.
On Thu, Jul 2, 2020 at 2:06 PM, Holden Karau wrote:
We don't; I didn't file one originally, but Shane reminded me to do so in the
future.
On Wed, Jul 1, 2020 at 9:44 PM Hyukjin Kwon wrote:
Nope, do we have an existing ticket? I think we can reopen it if there is one.
On Thu, Jul 2, 2020 at 1:43 PM, Holden Karau wrote:
Huh, interesting that it's the same worker. Have you filed a ticket with Shane?
On Wed, Jul 1, 2020 at 8:50 PM Hyukjin Kwon wrote:
Hm .. seems this is happening again in amp-jenkins-worker-04 ;(.
On Thu, Jun 25, 2020 at 3:15 AM, shane knapp ☠ wrote:
done:
-bash-4.1$ cd .m2
-bash-4.1$ ls
repository
-bash-4.1$ time rm -rf *
real    17m4.607s
user    0m0.950s
sys     0m18.816s
-bash-4.1$
Will do :) Thanks for keeping the build system running smoothly :)
On Wed, Jun 24, 2020 at 10:50 AM shane knapp ☠ wrote:
ok, i've taken that worker offline and once the job running on it finishes,
i'll wipe the cache.
in the future, please file a JIRA and assign it to me so i don't have to
track my work through emails to the dev@ list. ;)
thanks!
shane
On Wed, Jun 24, 2020 at 10:48 AM Holden Karau wrote:
The most recent one I noticed was
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/124437/console
which was run on amp-jenkins-worker-04.
On Wed, Jun 24, 2020 at 10:44 AM shane knapp ☠ wrote:
for those weird failures, it's super helpful to mention which workers are
showing these issues. :)
i'd rather not wipe all of the m2 caches on all of the workers, as we'll
then potentially get blacklisted again if we download too many packages
from apache.org.
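For reference, wiping a single worker's cache rather than all of them could look something like the sketch below. Assumptions: the cache lives under `~/.m2/repository` and `wipe_m2_cache` is a hypothetical helper; Maven repopulates the directory from Apache mirrors on the next build, which is why wiping every worker at once risks the blacklisting problem described above:

```shell
# wipe_m2_cache: remove a (possibly corrupted) Maven repository cache and
# recreate it empty; Maven re-downloads artifacts on the next build.
# (hypothetical helper; pass the cache path, e.g. ~/.m2/repository)
wipe_m2_cache() {
  cache_dir="$1"
  rm -rf "$cache_dir"
  mkdir -p "$cache_dir"
}
```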
On Tue, Jun 23, 2020 at 5:58 PM
Hi Folks,
I've been seeing some weird failures on Jenkins, and it looks like they might
be coming from the m2 cache. Would it be OK to clean it out? Or is it important?
Cheers,
Holden
--
Twitter: https://twitter.com/holdenkarau
Books (Learning Spark, High Performance Spark, etc.):
https://amzn.to/2MaRAG9