[jira] [Commented] (SPARK-29803) remove all instances of 'from __future__ import print_function'

2019-11-08 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16970527#comment-16970527
 ] 

Shane Knapp commented on SPARK-29803:
-

i actually believe that we can do this at any time, as spark 3.0+ technically 
does NOT support python versions earlier than 3.5.

> remove all instances of 'from __future__ import print_function' 
> 
>
> Key: SPARK-29803
> URL: https://issues.apache.org/jira/browse/SPARK-29803
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build, PySpark, Tests
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Priority: Major
> Attachments: print_function_list.txt
>
>
> there are 135 python files in the spark repo that need to have `from 
> __future__ import print_function` removed (see attached file 
> 'print_function_list.txt').
>  






[jira] [Resolved] (SPARK-29673) upgrade jenkins pypy to PyPy3.6 v7.2.0

2019-11-08 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp resolved SPARK-29673.
-
Resolution: Fixed

> upgrade jenkins pypy to PyPy3.6 v7.2.0
> --
>
> Key: SPARK-29673
> URL: https://issues.apache.org/jira/browse/SPARK-29673
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Assignee: Shane Knapp
>Priority: Major
>







[jira] [Updated] (SPARK-29672) remove python2 tests and test infra

2019-11-08 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp updated SPARK-29672:

Attachment: (was: print_function_list.txt)

> remove python2 tests and test infra
> ---
>
> Key: SPARK-29672
> URL: https://issues.apache.org/jira/browse/SPARK-29672
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Assignee: Shane Knapp
>Priority: Major
>
> python 2.7 is EOL jan 1st 2020: [https://github.com/python/devguide/pull/344]
> it's time, at least for 3.0+ to remove python 2.7 test support and migrate 
> the test execution framework to python 3.6.
> this PR ([https://github.com/apache/spark/pull/26330]) does all of the above.






[jira] [Updated] (SPARK-29672) remove python2 tests and test infra

2019-11-08 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp updated SPARK-29672:

Description: 
python 2.7 is EOL jan 1st 2020: [https://github.com/python/devguide/pull/344]

it's time, at least for 3.0+ to remove python 2.7 test support and migrate the 
test execution framework to python 3.6.

this PR ([https://github.com/apache/spark/pull/26330]) does all of the above.

  was:
python 2.7 is EOL jan 1st 2020: [https://github.com/python/devguide/pull/344]

it's time, at least for 3.0+ to remove python 2.7 test support and migrate the 
test execution framework to python 3.6.

this PR ([https://github.com/apache/spark/pull/26330]) does all of the above.

two things of note:

1) there are a bunch of other python scripts in the repo that need to be 
updated to '/usr/bin/env python3': 
{noformat}
$ grep -r "env python" * |grep -v python3
dev/create-release/releaseutils.py:#!/usr/bin/env python
dev/create-release/generate-contributors.py:#!/usr/bin/env python
dev/create-release/translate-contributors.py:#!/usr/bin/env python
dev/github_jira_sync.py:#!/usr/bin/env python
dev/merge_spark_pr.py:#!/usr/bin/env python
python/pyspark/version.py:#!/usr/bin/env python
python/pyspark/find_spark_home.py:#!/usr/bin/env python
python/setup.py:#!/usr/bin/env python{noformat}
2) there are 135 python files in the spark repo that need to have `from 
__future__ import print_function` removed (see attached file 
'print_function_list.txt')


> remove python2 tests and test infra
> ---
>
> Key: SPARK-29672
> URL: https://issues.apache.org/jira/browse/SPARK-29672
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Assignee: Shane Knapp
>Priority: Major
>
> python 2.7 is EOL jan 1st 2020: [https://github.com/python/devguide/pull/344]
> it's time, at least for 3.0+ to remove python 2.7 test support and migrate 
> the test execution framework to python 3.6.
> this PR ([https://github.com/apache/spark/pull/26330]) does all of the above.






[jira] [Created] (SPARK-29803) remove all instances of 'from __future__ import print_function'

2019-11-08 Thread Shane Knapp (Jira)
Shane Knapp created SPARK-29803:
---

 Summary: remove all instances of 'from __future__ import 
print_function' 
 Key: SPARK-29803
 URL: https://issues.apache.org/jira/browse/SPARK-29803
 Project: Spark
  Issue Type: Sub-task
  Components: Build, PySpark, Tests
Affects Versions: 3.0.0
Reporter: Shane Knapp
 Attachments: print_function_list.txt

there are 135 python files in the spark repo that need to have `from __future__ 
import print_function` removed (see attached file 'print_function_list.txt').
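
a rough sketch (assuming 'print_function_list.txt' lists one repo-relative path per 
line, and that only exact standalone import lines need handling) of automating the 
removal:
{code}
#!/usr/bin/env python3
# sketch: strip standalone 'from __future__ import print_function' lines from
# every file listed (one path per line) in print_function_list.txt.
# assumption: run from the spark repo root; missing paths are skipped.
from pathlib import Path

TARGET = "from __future__ import print_function"

def strip_future_print(list_file="print_function_list.txt"):
    for entry in Path(list_file).read_text().splitlines():
        path = Path(entry.strip())
        if not entry.strip() or not path.is_file():
            continue
        lines = path.read_text().splitlines(keepends=True)
        kept = [line for line in lines if line.strip() != TARGET]
        if len(kept) != len(lines):
            path.write_text("".join(kept))
            print(f"cleaned {path}")

if __name__ == "__main__":
    strip_future_print()
{code}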

 






[jira] [Updated] (SPARK-29803) remove all instances of 'from __future__ import print_function'

2019-11-08 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp updated SPARK-29803:

Attachment: print_function_list.txt

> remove all instances of 'from __future__ import print_function' 
> 
>
> Key: SPARK-29803
> URL: https://issues.apache.org/jira/browse/SPARK-29803
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build, PySpark, Tests
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Priority: Major
> Attachments: print_function_list.txt
>
>
> there are 135 python files in the spark repo that need to have `from 
> __future__ import print_function` removed (see attached file 
> 'print_function_list.txt').
>  






[jira] [Created] (SPARK-29802) update remaining python scripts in repo to python3 shebang

2019-11-08 Thread Shane Knapp (Jira)
Shane Knapp created SPARK-29802:
---

 Summary: update remaining python scripts in repo to python3 shebang
 Key: SPARK-29802
 URL: https://issues.apache.org/jira/browse/SPARK-29802
 Project: Spark
  Issue Type: Sub-task
  Components: PySpark
Affects Versions: 3.0.0
Reporter: Shane Knapp


there are a bunch of scripts in the repo that need to have their shebang 
updated to python3:
{noformat}
dev/create-release/releaseutils.py:#!/usr/bin/env python
dev/create-release/generate-contributors.py:#!/usr/bin/env python
dev/create-release/translate-contributors.py:#!/usr/bin/env python
dev/github_jira_sync.py:#!/usr/bin/env python
dev/merge_spark_pr.py:#!/usr/bin/env python
python/pyspark/version.py:#!/usr/bin/env python
python/pyspark/find_spark_home.py:#!/usr/bin/env python
python/setup.py:#!/usr/bin/env python{noformat}
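
a minimal sketch of the shebang bump (assuming a simple in-place rewrite of the first 
line is acceptable and that the script is run from the spark repo root):
{code}
#!/usr/bin/env python3
# sketch: rewrite '#!/usr/bin/env python' to '#!/usr/bin/env python3' in the
# scripts listed above. only the first line is touched; files already on
# python3 are left alone. assumption: run from the spark repo root.
from pathlib import Path

SCRIPTS = [
    "dev/create-release/releaseutils.py",
    "dev/create-release/generate-contributors.py",
    "dev/create-release/translate-contributors.py",
    "dev/github_jira_sync.py",
    "dev/merge_spark_pr.py",
    "python/pyspark/version.py",
    "python/pyspark/find_spark_home.py",
    "python/setup.py",
]

for script in SCRIPTS:
    path = Path(script)
    lines = path.read_text().splitlines(keepends=True)
    if lines and lines[0].rstrip() == "#!/usr/bin/env python":
        lines[0] = "#!/usr/bin/env python3\n"
        path.write_text("".join(lines))
        print(f"updated shebang in {script}")
{code}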






[jira] [Commented] (SPARK-29673) upgrade jenkins pypy to PyPy3.6 v7.2.0

2019-11-08 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16970496#comment-16970496
 ] 

Shane Knapp commented on SPARK-29673:
-

[~hyukjin.kwon] pypy3.6 is available on all jenkins workers.  you can test 
against the 'pypy3' executable.

> upgrade jenkins pypy to PyPy3.6 v7.2.0
> --
>
> Key: SPARK-29673
> URL: https://issues.apache.org/jira/browse/SPARK-29673
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Assignee: Shane Knapp
>Priority: Major
>







[jira] [Updated] (SPARK-29672) remove python2 tests and test infra

2019-11-08 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp updated SPARK-29672:

Description: 
python 2.7 is EOL jan 1st 2020: [https://github.com/python/devguide/pull/344]

it's time, at least for 3.0+ to remove python 2.7 test support and migrate the 
test execution framework to python 3.6.

this PR ([https://github.com/apache/spark/pull/26330]) does all of the above.

two things of note:

1) there are a bunch of other python scripts in the repo that need to be 
updated to '/usr/bin/env python3': 
{noformat}
$ grep -r "env python" * |grep -v python3
dev/create-release/releaseutils.py:#!/usr/bin/env python
dev/create-release/generate-contributors.py:#!/usr/bin/env python
dev/create-release/translate-contributors.py:#!/usr/bin/env python
dev/github_jira_sync.py:#!/usr/bin/env python
dev/merge_spark_pr.py:#!/usr/bin/env python
python/pyspark/version.py:#!/usr/bin/env python
python/pyspark/find_spark_home.py:#!/usr/bin/env python
python/setup.py:#!/usr/bin/env python{noformat}
2) there are 135 python files in the spark repo that need to have `from 
__future__ import print_function` removed (see attached file 
'print_function_list.txt')

> remove python2 tests and test infra
> ---
>
> Key: SPARK-29672
> URL: https://issues.apache.org/jira/browse/SPARK-29672
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Assignee: Shane Knapp
>Priority: Major
> Attachments: print_function_list.txt
>
>
> python 2.7 is EOL jan 1st 2020: [https://github.com/python/devguide/pull/344]
> it's time, at least for 3.0+ to remove python 2.7 test support and migrate 
> the test execution framework to python 3.6.
> this PR ([https://github.com/apache/spark/pull/26330]) does all of the above.
> two things of note:
> 1) there are a bunch of other python scripts in the repo that need to be 
> updated to '/usr/bin/env python3': 
> {noformat}
> $ grep -r "env python" * |grep -v python3
> dev/create-release/releaseutils.py:#!/usr/bin/env python
> dev/create-release/generate-contributors.py:#!/usr/bin/env python
> dev/create-release/translate-contributors.py:#!/usr/bin/env python
> dev/github_jira_sync.py:#!/usr/bin/env python
> dev/merge_spark_pr.py:#!/usr/bin/env python
> python/pyspark/version.py:#!/usr/bin/env python
> python/pyspark/find_spark_home.py:#!/usr/bin/env python
> python/setup.py:#!/usr/bin/env python{noformat}
> 2) there are 135 python files in the spark repo that need to have `from 
> __future__ import print_function` removed (see attached file 
> 'print_function_list.txt')






[jira] [Updated] (SPARK-29672) remove python2 tests and test infra

2019-11-08 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp updated SPARK-29672:

Attachment: print_function_list.txt

> remove python2 tests and test infra
> ---
>
> Key: SPARK-29672
> URL: https://issues.apache.org/jira/browse/SPARK-29672
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Assignee: Shane Knapp
>Priority: Major
> Attachments: print_function_list.txt
>
>







[jira] [Updated] (SPARK-29672) remove python2 tests and test infra

2019-11-07 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp updated SPARK-29672:

Summary: remove python2 tests and test infra  (was: remove python2 test 
from python/run-tests.py)

> remove python2 tests and test infra
> ---
>
> Key: SPARK-29672
> URL: https://issues.apache.org/jira/browse/SPARK-29672
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Assignee: Shane Knapp
>Priority: Major
>







[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-11-07 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16969615#comment-16969615
 ] 

Shane Knapp commented on SPARK-29106:
-

just uploaded the pip requirements.txt file that i used to get the majority of 
the python tests to run with.

sadly, we will not be able to test against arrow/pyarrow for the foreseeable future, as 
they're moving to a full conda-forge package solution rather than pip.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt, arm-python36.txt
>
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-11-07 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp updated SPARK-29106:

Attachment: arm-python36.txt

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt, arm-python36.txt
>
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Commented] (SPARK-29673) upgrade jenkins pypy to PyPy3.6 v7.2.0

2019-10-31 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16964359#comment-16964359
 ] 

Shane Knapp commented on SPARK-29673:
-

pypy3.6-7.2.0-linux_x86_64-portable has been installed on the centos workers, 
and i'm testing with https://github.com/apache/spark/pull/26330

ubuntu workers will be updated later today.

> upgrade jenkins pypy to PyPy3.6 v7.2.0
> --
>
> Key: SPARK-29673
> URL: https://issues.apache.org/jira/browse/SPARK-29673
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Assignee: Shane Knapp
>Priority: Major
>







[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-30 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16963505#comment-16963505
 ] 

Shane Knapp commented on SPARK-29106:
-

first pass @ the python tests:
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-python-arm/3

i'll fix the scheduling later, as well as whack-a-mole any python modules that 
i might have missed.

i wasn't able to get pyarrow to install, but it looks like ARM support for 
arrow is limited at best.

note to self:  holy crap this was a serious PITA getting this stuff installed.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt
>
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Created] (SPARK-29673) upgrade jenkins pypy to PyPy3.6 v7.2.0

2019-10-30 Thread Shane Knapp (Jira)
Shane Knapp created SPARK-29673:
---

 Summary: upgrade jenkins pypy to PyPy3.6 v7.2.0
 Key: SPARK-29673
 URL: https://issues.apache.org/jira/browse/SPARK-29673
 Project: Spark
  Issue Type: Sub-task
  Components: Build
Affects Versions: 3.0.0
Reporter: Shane Knapp
Assignee: Shane Knapp









[jira] [Created] (SPARK-29672) remove python2 test from python/run-tests.py

2019-10-30 Thread Shane Knapp (Jira)
Shane Knapp created SPARK-29672:
---

 Summary: remove python2 test from python/run-tests.py
 Key: SPARK-29672
 URL: https://issues.apache.org/jira/browse/SPARK-29672
 Project: Spark
  Issue Type: Sub-task
  Components: Build
Affects Versions: 3.0.0
Reporter: Shane Knapp
Assignee: Shane Knapp









[jira] [Resolved] (SPARK-29624) Jenkins fails with "Python versions prior to 2.7 are not supported."

2019-10-28 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp resolved SPARK-29624.
-
Resolution: Fixed

> Jenkins fails with "Python versions prior to 2.7 are not supported."
> 
>
> Key: SPARK-29624
> URL: https://issues.apache.org/jira/browse/SPARK-29624
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Priority: Blocker
>
> - 
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/112777/console
> - 
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/112779/console
> {code}
> ...
> + ./dev/run-tests-jenkins
> Python versions prior to 2.7 are not supported.
> Build step 'Execute shell' marked build as failure
> {code}






[jira] [Commented] (SPARK-29624) Jenkins fails with "Python versions prior to 2.7 are not supported."

2019-10-28 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961377#comment-16961377
 ] 

Shane Knapp commented on SPARK-29624:
-

alright, a pull request build has successfully made it past this check:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/112790/

resolving now.
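
for context, the message comes from a minimum-interpreter-version guard; a minimal 
illustration of that kind of guard (not the actual dev/run-tests-jenkins code) is:
{code}
# illustration only: the sort of version guard that prints
# "Python versions prior to 2.7 are not supported." and fails the build
# when a borked PATH hands jenkins a stale interpreter.
import sys

if sys.version_info < (2, 7):
    print("Python versions prior to 2.7 are not supported.")
    sys.exit(-1)
{code}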

> Jenkins fails with "Python versions prior to 2.7 are not supported."
> 
>
> Key: SPARK-29624
> URL: https://issues.apache.org/jira/browse/SPARK-29624
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Priority: Blocker
>
> - 
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/112777/console
> - 
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/112779/console
> {code}
> ...
> + ./dev/run-tests-jenkins
> Python versions prior to 2.7 are not supported.
> Build step 'Execute shell' marked build as failure
> {code}






[jira] [Commented] (SPARK-29624) Jenkins fails with "Python versions prior to 2.7 are not supported."

2019-10-28 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961320#comment-16961320
 ] 

Shane Knapp commented on SPARK-29624:
-

alright, i triggered a job and checked the console, and the restart seemed to 
fix the PATH variable.

both of these above builds failed on amp-jenkins-worker-03, so i'll keep an eye 
on that worker.

> Jenkins fails with "Python versions prior to 2.7 are not supported."
> 
>
> Key: SPARK-29624
> URL: https://issues.apache.org/jira/browse/SPARK-29624
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Priority: Blocker
>
> - 
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/112777/console
> - 
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/112779/console
> {code}
> ...
> + ./dev/run-tests-jenkins
> Python versions prior to 2.7 are not supported.
> Build step 'Execute shell' marked build as failure
> {code}






[jira] [Commented] (SPARK-29624) Jenkins fails with "Python versions prior to 2.7 are not supported."

2019-10-28 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961290#comment-16961290
 ] 

Shane Knapp commented on SPARK-29624:
-

nothing changed...  the PATH env vars for each worker got borked during the 
downtime.

i'll need to restart jenkins, and will send a note to the list about this.

> Jenkins fails with "Python versions prior to 2.7 are not supported."
> 
>
> Key: SPARK-29624
> URL: https://issues.apache.org/jira/browse/SPARK-29624
> Project: Spark
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Priority: Blocker
>
> - 
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/112777/console
> - 
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/112779/console
> {code}
> ...
> + ./dev/run-tests-jenkins
> Python versions prior to 2.7 are not supported.
> Build step 'Execute shell' marked build as failure
> {code}






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-24 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959204#comment-16959204
 ] 

Shane Knapp commented on SPARK-29106:
-

i bumped the git timeout to 30mins, which is a much more obfuscated set of 
tasks than i ever would have imagined lol...

relaunched the job and let's see if it fetches/clones in time.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt
>
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-24 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959153#comment-16959153
 ] 

Shane Knapp commented on SPARK-29106:
-

btw the VM is currently experiencing a lot of network latency and relatively 
high ping times to github.com, and the job is having trouble cloning the git 
repo.  i rebooted the VM, but it doesn't seem to be helping much.

my lead sysadmin will be out for the next week and a half, but when he returns 
we'll look in to getting a basic ARM server for our build system.  i'm pretty 
unhappy w/the VM option and think we'll have a lot more luck w/bare metal.  the 
VM will definitely help us get the ansible configs built but i'd like to get 
off of it ASAP.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt
>
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-24 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16959124#comment-16959124
 ] 

Shane Knapp commented on SPARK-29106:
-

> First I want to share the details what we have done in Openlab test env.

this is an extremely basic python installation, and doesn't include important 
things that pyspark needs to test against, like pandas and pyarrow.

> 1) If we can not use Anaconda, how about manage the packages via ansible too? 
> Just for ARM now?  Such as for py27, we need to install what packages from 
> pip/somewhere and need to install manually(For manually installed packages, 
> if possible, we can do something like leveldbjni on maven, provider a 
> public/official way to fit the ARM package downloading/installation). For 
> now, I personally think it's very difficult to use Anaconda, as there aren't 
> so much package management platform for ARM, eventhrough we start up Anaconda 
> on ARM. If we do that, we need to fix the all gaps, that's a very huge 
> project.

a few things here:

* i am already using ansible to set up and deploy python via anaconda (and pip) 
on the x86 workers
* we can't use anaconda for ARM, period.  we have to use python virtual envs
* i still haven't had the cycles to dive in to trying to recreate the 3 python 
envs on ARM yet
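
a rough sketch of the venv-based alternative mentioned in the bullets above (env 
location and package set are illustrative assumptions, not the actual worker config):
{code}
#!/usr/bin/env python3
# rough sketch: stand up a per-version virtualenv on the ARM worker in place
# of anaconda. env location and package list are illustrative assumptions,
# not the real jenkins worker config.
import subprocess
import venv

ENV_DIR = "/home/jenkins/python-envs/py36"            # assumed location
PACKAGES = ["numpy", "pandas", "scipy", "coverage"]   # assumed package set

venv.create(ENV_DIR, with_pip=True)
subprocess.check_call([f"{ENV_DIR}/bin/pip", "install", "--upgrade", "pip"])
subprocess.check_call([f"{ENV_DIR}/bin/pip", "install", *PACKAGES])
print(f"created {ENV_DIR} with: {', '.join(PACKAGES)}")
{code}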

> 2) For multiple python version, py27 py34 py36 and pypy, the venv is the 
> right choice now. But how about support part of them for the first step? Such 
> as only 1 or 2 python version support now, as we already passed on py27 and 
> py36 testing. Let's see that ARM eco is very limited now. 

yeah, i was planning on doing one at a time.

> 3) As the following integration work is in your sight, we can not know so 
> much details about what problem you hit. So please feel free to tell us how 
> can we help you, we are looking forward to work with you.

that's the plan!  :)

> For more quick to test SparkR, I install manually in the ARM jenkins worker, 
> because the R installation also need so much time, including deb librarise 
> install and R itself. So I found amplab jenkins job also manage the R 
> installation before the real spark test execution? Is that happened in each 
> build?

no, R is set up via ansible and not modified by the build.

> I think the current maven UT test could be run 1 time per day, and 
> pyspark/sparkR runs 1 time per day. Eventhough they are running 
> simultaneously, but we can make the 2 jobs trigger in different time period, 
> such as maven UT test(From 0:00 am to 12:00 am), pyspark/sparkR(From 1:00pm 
> to 10:00pm).

sure, sounds like a plan once we/i get those two parts set up on the worker in 
an atomic and reproducible way.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt
>
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 

[jira] [Comment Edited] (SPARK-29106) Add jenkins arm test for spark

2019-10-23 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16958056#comment-16958056
 ] 

Shane Knapp edited comment on SPARK-29106 at 10/23/19 5:29 PM:
---

> For pyspark test, you mentioned we didn't install any python debs for 
> testing. Is there any "requirements.txt" or "test-requirements.txt" in the 
> spark repo? I'm failed to find them. When we test the pyspark before, we just 
> realize that we need to install numpy package with pip, because when we exec 
> the pyspark test scripts, the fail message noticed us. So you mentioned 
> "pyspark testing debs" before, you mean that we should figure all out 
> manually? Is there any kind suggest from your side?

i manage the jenkins configs via ansible, and python specifically through 
anaconda.  anaconda was my initial choice for package management because we 
need to support multiple python versions (2.7, 3.x, pypy) and specific package 
versions for each python version itself.

sadly there is no official ARM anaconda python distribution, which is a BIG 
hurdle for this project.

i also don't use requirements.txt and pip to do the initial python env setup as 
pip is flakier than i like, and the conda envs just work a LOT better.

see:  
https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#building-identical-conda-environments

i could check in the specific python package configs in to the spark repo, but 
they're specific to our worker configs, and even though the worker deployment 
process is automated (via ansible) there is ALWAYS some stupid dependency loop 
that pops up and requires manual intervention.

another issue is that i do NOT want any builds installing/updating/creating 
either python environments OR packages.  builds should NEVER EVER modify the 
bare-metal (or VM) system-level configs.

so, to summarize what needs to happen to get the python tests up and running:
1) there is no conda distribution for the ARM architecture, meaning...
2) i need to use venv to install everything...
3) which means i need to use pip/requirements.txt, which is known to be flaky...
4) and the python packages for ARM are named differently than x86...
5) or don't exist...
6) or are the wrong version...
7) meaning that setting up and testing three different python versions with 
differing package names and versions makes this a lot of trial and error.

i would like to get this done asap, but i will need to carve some serious time 
to get my brain wrapped around the 

> For sparkR test, we compile a higher R version 3.6.1 by fix many lib 
> dependency, and make it work. And exec the R test script, till to all of them 
> return pass. So we wonder the difficult about the test when we truelly in 
> amplab, could you please share more to us?

i have a deep and comprehensive hatred of installing and setting up R.  i've 
attached a couple of files showing the packages installed, their versions, and 
some of the ansible snippets i use to do the initial install.

https://issues.apache.org/jira/secure/attachment/12983856/R-ansible.yml
https://issues.apache.org/jira/secure/attachment/12983857/R-libs.txt

just like you, i need to go back and manually fix lib dependency and version 
errors once the initial setup is complete.

this is why i have a deep and comprehensive hatred of installing and setting up 
R.

> For current periodic jobs, you said it will be triggered 2 times per day. 
> Each build will cost most 11 hours. I have a thought about the next job 
> deployment, wish to know your thought about it. My thought is we can setup 2 
> jobs per day, one is the current maven UT test triggered by SCM changes(11h), 
> the other will run the pyspark and sparkR tests also triggered by SCM 
> changes(including spark build and tests, may cost 5-6 hours). How about this? 
> We can talk and discuss if we don't realize how difficult to do these now.

yeah, i am amenable to having a second ARM build.  i'd be curious as to the 
impact on the VM's performance when we have two builds running simultaneously.  
if i have some time today i'll experiment.

shane


was (Author: shaneknapp):
> For pyspark test, you mentioned we didn't install any python debs for 
> testing. Is there any "requirements.txt" or "test-requirements.txt" in the 
> spark repo? I'm failed to find them. When we test the pyspark before, we just 
> realize that we need to install numpy package with pip, because when we exec 
> the pyspark test scripts, the fail message noticed us. So you mentioned 
> "pyspark testing debs" before, you mean that we should figure all out 
> manually? Is there any kind suggest from your side?

i manage the jenkins configs via ansible, and python specifically through 
anaconda.  anaconda was my initial choice for package management because we 
need to support multiple python versions (2.7, 3.x, pypy) and specific package 

[jira] [Updated] (SPARK-29106) Add jenkins arm test for spark

2019-10-23 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp updated SPARK-29106:

Attachment: R-libs.txt
R-ansible.yml

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
> Attachments: R-ansible.yml, R-libs.txt
>
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  






[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-23 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16958056#comment-16958056
 ] 

Shane Knapp commented on SPARK-29106:
-

> For pyspark test, you mentioned we didn't install any python debs for 
> testing. Is there any "requirements.txt" or "test-requirements.txt" in the 
> spark repo? I'm failed to find them. When we test the pyspark before, we just 
> realize that we need to install numpy package with pip, because when we exec 
> the pyspark test scripts, the fail message noticed us. So you mentioned 
> "pyspark testing debs" before, you mean that we should figure all out 
> manually? Is there any kind suggest from your side?

i manage the jenkins configs via ansible, and python specifically through 
anaconda.  anaconda was my initial choice for package management because we 
need to support multiple python versions (2.7, 3.x, pypy) and specific package 
versions for each python version itself.

sadly there is no official ARM anaconda python distribution, which is a BIG 
hurdle for this project.

i also don't use requirements.txt and pip to do the initial python env setup as 
pip is flakier than i like, and the conda envs just work a LOT better.

see:  
https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#building-identical-conda-environments

i could check in the specific python package configs in to the spark repo, but 
they're specific to our worker configs, and even though the worker deployment 
process is automated (via ansible) there is ALWAYS some stupid dependency loop 
that pops up and requires manual intervention.

another issue is that i do NOT want any builds installing/updating/creating 
either python environments OR packages.  builds should NEVER EVER modify the 
bare-metal (or VM) system-level configs.

so, to summarize what needs to happen to get the python tests up and running:
1) there is no conda distribution for the ARM architecture, meaning...
2) i need to use venv to install everything...
3) which means i need to use pip/requirements.txt, which is known to be flaky...
4) and the python packages for ARM are named differently than x86...
5) or don't exist...
6) or are the wrong version...
7) meaning that setting up and testing three different python versions with 
differing package names and versions makes this a lot of trial and error.

i would like to get this done asap, but i will need to carve some serious time 
to get my brain wrapped around the 

> For sparkR test, we compile a higher R version 3.6.1 by fix many lib 
> dependency, and make it work. And exec the R test script, till to all of them 
> return pass. So we wonder the difficult about the test when we truelly in 
> amplab, could you please share more to us?

i have a deep and comprehensive hatred of installing and setting up R.  i'll 
attach a couple of files showing the packages installed, their versions, and 
some of the ansible snippets i use to do the initial install.

just like you, i need to go back and manually fix lib dependency and version 
errors once the initial setup is complete.

this is why i have a deep and comprehensive hatred of installing and setting up 
R.

> For current periodic jobs, you said it will be triggered 2 times per day. 
> Each build will cost most 11 hours. I have a thought about the next job 
> deployment, wish to know your thought about it. My thought is we can setup 2 
> jobs per day, one is the current maven UT test triggered by SCM changes(11h), 
> the other will run the pyspark and sparkR tests also triggered by SCM 
> changes(including spark build and tests, may cost 5-6 hours). How about this? 
> We can talk and discuss if we don't realize how difficult to do these now.

yeah, i am amenable to having a second ARM build.  i'd be curious as to the 
impact on the VM's performance when we have two builds running simultaneously.  
if i have some time today i'll experiment.

shane

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when 

[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-23 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16958019#comment-16958019
 ] 

Shane Knapp commented on SPARK-29106:
-

[~huangtianhua]:

> we don't have to download and install leveldbjni-all-1.8 in our arm test 
> instance, we have installed it and it was there.

it's a very inexpensive step to execute and i'd rather have builds be atomic.  
if for some reason the dependency gets wiped/corrupted/etc, the download will 
ensure we're properly building.

> maybe we can try to use 'mvn clean package ' instead of 'mvn clean 
> install '?

sure, i'll give that a shot now.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-21 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16956298#comment-16956298
 ] 

Shane Knapp commented on SPARK-29106:
-

> I see the arm job is now triggered by 'SCM' changes, which is good. I'm wondering 
> about the polling schedule. Thanks.

i had it poll once per day at ~midnight...  however, i just updated that to 
poll twice each day (noon and midnight).
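
(that's just the SCM-polling cron spec in the job config -- something along these 
lines; the exact spec is an assumption:)

{noformat}
# poll once a day, around midnight
H 0 * * *
# poll twice a day, around midnight and noon
H 0,12 * * *
{noformat}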

> we plan to integrate more, higher-performance ARM VMs into the community to 
> support PullRequest-triggered testing jobs; more work to speed up test execution 
> so it can meet the PullRequest-trigger requirements is still waiting for us..

this would be great!

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-18 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954912#comment-16954912
 ] 

Shane Knapp commented on SPARK-29106:
-

also, i will be exploring the purchase of an ARM server for our cluster.  the 
VM is just not going to be enough for our purposes.  this won't happen 
immediately, so we'll use the VM until then.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-18 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954904#comment-16954904
 ] 

Shane Knapp commented on SPARK-29106:
-

i'm actually not going to use the script – the testing code will be in the 
jenkins job config:

[https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/]

 

once i get the build config sorted and working as expected i'll be sure to give 
you all a copy.  :)

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-18 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954809#comment-16954809
 ] 

Shane Knapp commented on SPARK-29106:
-

we're definitely going to have an issue w/both the R and python tests, as it 
looks like none of the testing deps have been installed.

we use anaconda python to manage our bare metal, so i'll have to see if i can 
make things work w/virtualenv.
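
a rough sketch of the kind of setup i have in mind -- the package list is an 
assumption based on what the pyspark tests usually pull in, and will definitely 
take some trial and error:

{code:bash}
# build an isolated env for the jenkins user instead of touching the system python
python3.6 -m venv ~/spark-arm-venv
source ~/spark-arm-venv/bin/activate
pip install --upgrade pip
pip install numpy scipy pandas pyarrow coverage unittest-xml-reporting
{code}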

R, well, that's always a can of worms best left untouched.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-18 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954705#comment-16954705
 ] 

Shane Knapp commented on SPARK-29106:
-

re: real time logging -- yeah i noticed that.  :)

i'll look at that script and play around w/it today.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-17 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954210#comment-16954210
 ] 

Shane Knapp commented on SPARK-29106:
-

[~bzhaoopenstack] [~huangtianhua] yeah i was wondering about the ansible 
stuff...  i can take care of the script that launches things.  jenkins will 
pull master from github and we can go from there.

today was a bit crazy as we're hosting a large event for our lab 
(risecamp.cs.berkeley.edu), so i didn't have a chance to really start 
unravelling things.  i should have a little time tomorrow, and definitely next 
week.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Comment Edited] (SPARK-29106) Add jenkins arm test for spark

2019-10-17 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953896#comment-16953896
 ] 

Shane Knapp edited comment on SPARK-29106 at 10/17/19 5:36 PM:
---

build running:
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/2/


was (Author: shaneknapp):
first build running:
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/1/

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-17 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953896#comment-16953896
 ] 

Shane Knapp commented on SPARK-29106:
-

first build running:
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-arm/1/

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-17 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953878#comment-16953878
 ] 

Shane Knapp commented on SPARK-29106:
-

this is great.  i will think about test strategies today and how we can split 
these up and have them run in parallel.  11h is insane.  :)

some questions:  

* do we want to have a pull request builder job for ARM?  this can be triggered 
by putting an {{[arm]}} tag in the subject, much like we have for K8s.
* how do we want the general tests to be triggered?  if they're taking 11h then 
i would suggest nightly builds vs being triggered by SCM changes.

> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29106) Add jenkins arm test for spark

2019-10-17 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953856#comment-16953856
 ] 

Shane Knapp commented on SPARK-29106:
-

worker is up and sshable from the jenkins master:
https://amplab.cs.berkeley.edu/jenkins/computer/spark-arm-vm/

waiting on VM config to be sorted by [~huangtianhua] and then i will ensure i 
can launch the worker process and run the build.

steps for the VM:
* java is not installed, please install the following (a quick version check is 
sketched after this list):
  - java8 min version 1.8.0_191
  - java11 min version 11.0.1

* it appears from the ansible playbook that there are other deps that need to 
be installed.
  - please install all deps
  - manually run the tests until they pass

* the jenkins user should NEVER have sudo or any root-level access!

* once the arm tests pass when manually run, take a snapshot of this image so 
we can recreate it w/o needing to reinstall everything
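
a quick sanity check for the java requirement above -- the java 11 path here is an 
assumption and will depend on how it gets installed:

{code:bash}
# confirm both toolchains meet the minimum versions before wiring the VM into jenkins
java -version                                         # expect >= 1.8.0_191
/usr/lib/jvm/java-11-openjdk-arm64/bin/java -version  # expect >= 11.0.1
{code}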


> Add jenkins arm test for spark
> --
>
> Key: SPARK-29106
> URL: https://issues.apache.org/jira/browse/SPARK-29106
> Project: Spark
>  Issue Type: Test
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: huangtianhua
>Priority: Minor
>
> Add arm test jobs to amplab jenkins for spark.
> Till now we made two arm test periodic jobs for spark in OpenLab, one is 
> based on master with hadoop 2.7(similar with QA test of amplab jenkins), 
> other one is based on a new branch which we made on date 09-09, see  
> [http://status.openlabtesting.org/builds/job/spark-master-unit-test-hadoop-2.7-arm64]
>   and 
> [http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64.|http://status.openlabtesting.org/builds/job/spark-unchanged-branch-unit-test-hadoop-2.7-arm64]
>  We only have to care about the first one when integrate arm test with amplab 
> jenkins.
> About the k8s test on arm, we have took test it, see 
> [https://github.com/theopenlab/spark/pull/17], maybe we can integrate it 
> later. 
> And we plan test on other stable branches too, and we can integrate them to 
> amplab when they are ready.
> We have offered an arm instance and sent the infos to shane knapp, thanks 
> shane to add the first arm job to amplab jenkins :) 
> The other important thing is about the leveldbjni 
> [https://github.com/fusesource/leveldbjni,|https://github.com/fusesource/leveldbjni/issues/80]
>  spark depends on leveldbjni-all-1.8 
> [https://mvnrepository.com/artifact/org.fusesource.leveldbjni/leveldbjni-all/1.8],
>  we can see there is no arm64 supporting. So we build an arm64 supporting 
> release of leveldbjni see 
> [https://mvnrepository.com/artifact/org.openlabtesting.leveldbjni/leveldbjni-all/1.8],
>  but we can't modified the spark pom.xml directly with something like 
> 'property'/'profile' to choose correct jar package on arm or x86 platform, 
> because spark depends on some hadoop packages like hadoop-hdfs, the packages 
> depend on leveldbjni-all-1.8 too, unless hadoop release with new arm 
> supporting leveldbjni jar. Now we download the leveldbjni-al-1.8 of 
> openlabtesting and 'mvn install' to use it when arm testing for spark.
> PS: The issues found and fixed:
>  SPARK-28770
>  [https://github.com/apache/spark/pull/25673]
>   
>  SPARK-28519
>  [https://github.com/apache/spark/pull/25279]
>   
>  SPARK-28433
>  [https://github.com/apache/spark/pull/25186]
>  
> SPARK-28467
> [https://github.com/apache/spark/pull/25864]
>  
> SPARK-29286
> [https://github.com/apache/spark/pull/26021]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-27177) Update jenkins locale to en_US.UTF-8

2019-10-16 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-27177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp resolved SPARK-27177.
-
Resolution: Fixed

> Update jenkins locale to en_US.UTF-8
> 
>
> Key: SPARK-27177
> URL: https://issues.apache.org/jira/browse/SPARK-27177
> Project: Spark
>  Issue Type: Bug
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: Yuming Wang
>Assignee: Shane Knapp
>Priority: Major
>
> Two test cases will failed on our jenkins since HADOOP-12045(Hadoop-2.8.0). 
> I'd like to update our jenkins locale to en_US.UTF-8 to workaround this issue.
>  How to reproduce:
> {code:java}
> export LANG=
> git clone https://github.com/apache/spark.git && cd spark && git checkout 
> v2.4.0
> build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
> -Dhadoop.version=2.8.0
> {code}
> Stack trace:
> {noformat}
> Caused by: sbt.ForkMain$ForkError: java.nio.file.InvalidPathException: 
> Malformed input or input contains unmappable characters: 
> /home/jenkins/workspace/SparkPullRequestBuilder@2/target/tmp/warehouse-15474fdf-0808-40ab-946d-1309fb05bf26/DaTaBaSe_I.db/tab_ı
>   at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
>   at sun.nio.fs.UnixPath.(UnixPath.java:71)
>   at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
>   at java.io.File.toPath(File.java:2234)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getLastAccessTime(RawLocalFileSystem.java:683)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.(RawLocalFileSystem.java:694)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:664)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:987)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:656)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:520)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1436)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1503)
> {noformat}
> Workaround:
> {code:java}
> export LANG=en_US.UTF-8
> build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
> -Dhadoop.version=2.8.0
> {code}
> More details: 
> https://issues.apache.org/jira/browse/HADOOP-16180
> https://github.com/apache/spark/pull/24044/commits/4c1ec25d3bc64bf358edf1380a7c863596722362



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-27177) Update jenkins locale to en_US.UTF-8

2019-10-16 Thread Shane Knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-27177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16953045#comment-16953045
 ] 

Shane Knapp commented on SPARK-27177:
-

this is done!

> Update jenkins locale to en_US.UTF-8
> 
>
> Key: SPARK-27177
> URL: https://issues.apache.org/jira/browse/SPARK-27177
> Project: Spark
>  Issue Type: Bug
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: Yuming Wang
>Assignee: Shane Knapp
>Priority: Major
>
> Two test cases will failed on our jenkins since HADOOP-12045(Hadoop-2.8.0). 
> I'd like to update our jenkins locale to en_US.UTF-8 to workaround this issue.
>  How to reproduce:
> {code:java}
> export LANG=
> git clone https://github.com/apache/spark.git && cd spark && git checkout 
> v2.4.0
> build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
> -Dhadoop.version=2.8.0
> {code}
> Stack trace:
> {noformat}
> Caused by: sbt.ForkMain$ForkError: java.nio.file.InvalidPathException: 
> Malformed input or input contains unmappable characters: 
> /home/jenkins/workspace/SparkPullRequestBuilder@2/target/tmp/warehouse-15474fdf-0808-40ab-946d-1309fb05bf26/DaTaBaSe_I.db/tab_ı
>   at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
>   at sun.nio.fs.UnixPath.(UnixPath.java:71)
>   at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
>   at java.io.File.toPath(File.java:2234)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getLastAccessTime(RawLocalFileSystem.java:683)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.(RawLocalFileSystem.java:694)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:664)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:987)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:656)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:520)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1436)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1503)
> {noformat}
> Workaround:
> {code:java}
> export LANG=en_US.UTF-8
> build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
> -Dhadoop.version=2.8.0
> {code}
> More details: 
> https://issues.apache.org/jira/browse/HADOOP-16180
> https://github.com/apache/spark/pull/24044/commits/4c1ec25d3bc64bf358edf1380a7c863596722362



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-29467) dev/merge_spark_pr.py fails on CAPTCHA

2019-10-14 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp updated SPARK-29467:

Description: 
so, i was merging a PR and when i tried to update the associated jira, it 
failed and dumped out the error response to my terminal.  the important bit is 
here:

jira.exceptions.JIRAError: JiraError HTTP 403 url: 
https://issues.apache.org/jira/rest/api/2/serverInfo
text: CAPTCHA_CHALLENGE; login-url=https://issues.apache.org/jira/login.jsp

when i went to log in to the site and close the issue manually, i had to enter 
the captcha.

three thoughts:
1) perhaps people will need to make sure they're logged in to the jira BEFORE 
running the merge script...
2) or we can remove the jira update section (which isn't ideal)
3) or we somehow bypass it for the script?

open to suggestions.  

¯\_(ツ)_/¯


  was:
so, i was merging a PR and when i tried to update the associated jira, it 
failed and dumped out the error response to my terminal.  the important bit is 
here:

{{
jira.exceptions.JIRAError: JiraError HTTP 403 url: 
https://issues.apache.org/jira/rest/api/2/serverInfo
text: CAPTCHA_CHALLENGE; login-url=https://issues.apache.org/jira/login.jsp
}}

when i went to log in to the site and close the issue manually, i had to enter 
the captcha.

three thoughts:
1) perhaps people will need to make sure they're logged in to the jira BEFORE 
running the merge script...
2) or we can remove the jira update section (which isn't ideal)
3) or we somehow bypass it for the script?

open to suggestions.  
{{
¯\_(ツ)_/¯
}}


> dev/merge_spark_pr.py fails on CAPTCHA
> --
>
> Key: SPARK-29467
> URL: https://issues.apache.org/jira/browse/SPARK-29467
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Priority: Minor
>
> so, i was merging a PR and when i tried to update the associated jira, it 
> failed and dumped out the error response to my terminal.  the important bit 
> is here:
> jira.exceptions.JIRAError: JiraError HTTP 403 url: 
> https://issues.apache.org/jira/rest/api/2/serverInfo
> text: CAPTCHA_CHALLENGE; login-url=https://issues.apache.org/jira/login.jsp
> when i went to log in to the site and close the issue manually, i had to 
> enter the captcha.
> three thoughts:
> 1) perhaps people will need to make sure they're logged in to the jira BEFORE 
> running the merge script...
> 2) or we can remove the jira update section (which isn't ideal)
> 3) or we somehow bypass it for the script?
> open to suggestions.  
> ¯\_(ツ)_/¯



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-29467) dev/merge_spark_pr.py fails on CAPTCHA

2019-10-14 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp updated SPARK-29467:

Description: 
so, i was merging a PR and when i tried to update the associated jira, it 
failed and dumped out the error response to my terminal.  the important bit is 
here:

{{
jira.exceptions.JIRAError: JiraError HTTP 403 url: 
https://issues.apache.org/jira/rest/api/2/serverInfo
text: CAPTCHA_CHALLENGE; login-url=https://issues.apache.org/jira/login.jsp
}}

when i went to log in to the site and close the issue manually, i had to enter 
the captcha.

three thoughts:
1) perhaps people will need to make sure they're logged in to the jira BEFORE 
running the merge script...
2) or we can remove the jira update section (which isn't ideal)
3) or we somehow bypass it for the script?

open to suggestions.  
{{
¯\_(ツ)_/¯
}}

  was:
so, i was merging a PR and when i tried to update the associated jira, it 
failed and dumped out the error response to my terminal.  the important bit is 
here:

{{jira.exceptions.JIRAError: JiraError HTTP 403 url: 
https://issues.apache.org/jira/rest/api/2/serverInfo
text: CAPTCHA_CHALLENGE; login-url=https://issues.apache.org/jira/login.jsp
}}

when i went to log in to the site and close the issue manually, i had to enter 
the captcha.

three thoughts:
1) perhaps people will need to make sure they're logged in to the jira BEFORE 
running the merge script...
2) or we can remove the jira update section (which isn't ideal)
3) or we somehow bypass it for the script?

open to suggestions.  {{ ¯\_(ツ)_/¯}}


> dev/merge_spark_pr.py fails on CAPTCHA
> --
>
> Key: SPARK-29467
> URL: https://issues.apache.org/jira/browse/SPARK-29467
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Shane Knapp
>Priority: Minor
>
> so, i was merging a PR and when i tried to update the associated jira, it 
> failed and dumped out the error response to my terminal.  the important bit 
> is here:
> {{
> jira.exceptions.JIRAError: JiraError HTTP 403 url: 
> https://issues.apache.org/jira/rest/api/2/serverInfo
> text: CAPTCHA_CHALLENGE; login-url=https://issues.apache.org/jira/login.jsp
> }}
> when i went to log in to the site and close the issue manually, i had to 
> enter the captcha.
> three thoughts:
> 1) perhaps people will need to make sure they're logged in to the jira BEFORE 
> running the merge script...
> 2) or we can remove the jira update section (which isn't ideal)
> 3) or we somehow bypass it for the script?
> open to suggestions.  
> {{
> ¯\_(ツ)_/¯
> }}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-29467) dev/merge_spark_pr.py fails on CAPTCHA

2019-10-14 Thread Shane Knapp (Jira)
Shane Knapp created SPARK-29467:
---

 Summary: dev/merge_spark_pr.py fails on CAPTCHA
 Key: SPARK-29467
 URL: https://issues.apache.org/jira/browse/SPARK-29467
 Project: Spark
  Issue Type: Bug
  Components: Project Infra
Affects Versions: 3.0.0
Reporter: Shane Knapp


so, i was merging a PR and when i tried to update the associated jira, it 
failed and dumped out the error response to my terminal.  the important bit is 
here:

{{jira.exceptions.JIRAError: JiraError HTTP 403 url: 
https://issues.apache.org/jira/rest/api/2/serverInfo
text: CAPTCHA_CHALLENGE; login-url=https://issues.apache.org/jira/login.jsp
}}

when i went to log in to the site and close the issue manually, i had to enter 
the captcha.

three thoughts:
1) perhaps people will need to make sure they're logged in to the jira BEFORE 
running the merge script...
2) or we can remove the jira update section (which isn't ideal)
3) or we somehow bypass it for the script?

open to suggestions.  {{ ¯\_(ツ)_/¯}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-25152) Enable Spark on Kubernetes R Integration Tests

2019-10-14 Thread Shane Knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-25152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Knapp resolved SPARK-25152.
-
  Assignee: Ilan Filonenko
Resolution: Fixed

merged into master

> Enable Spark on Kubernetes R Integration Tests
> --
>
> Key: SPARK-25152
> URL: https://issues.apache.org/jira/browse/SPARK-25152
> Project: Spark
>  Issue Type: Test
>  Components: Kubernetes, SparkR
>Affects Versions: 2.4.0
>Reporter: Matt Cheah
>Assignee: Ilan Filonenko
>Priority: Major
>
> We merged [https://github.com/apache/spark/pull/21584] for SPARK-24433 but we 
> had to turn off the integration tests due to issues with the Jenkins 
> environment. Re-enable the tests after the environment is fixed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29220) Flaky test: org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.handle large number of containers and tasks (SPARK-18750) [hadoop-3.2][java11]

2019-09-24 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937155#comment-16937155
 ] 

shane knapp commented on SPARK-29220:
-

it's ok...  let's try and keep an eye on these tests for other PRs and see if 
it pops up again.

> Flaky test: 
> org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.handle large 
> number of containers and tasks (SPARK-18750) [hadoop-3.2][java11]
> --
>
> Key: SPARK-29220
> URL: https://issues.apache.org/jira/browse/SPARK-29220
> Project: Spark
>  Issue Type: Test
>  Components: Tests, YARN
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Minor
>
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111229/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111236/testReport/]
> {code:java}
> Error Messageorg.scalatest.exceptions.TestFailedException: 
> java.lang.StackOverflowError did not equal 
> nullStacktracesbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedException: java.lang.StackOverflowError 
> did not equal null
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:528)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:527)
>   at 
> org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   at 
> org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
>   at 
> org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.$anonfun$new$1(LocalityPlacementStrategySuite.scala:48)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   at org.scalatest.Transformer.apply(Transformer.scala:20)
>   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:149)
>   at 
> org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
>   at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
>   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
>   at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
>   at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
>   at 
> org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:56)
>   at 
> org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
>   at 
> org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
>   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:56)
>   at 
> org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
>   at 
> org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:396)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
>   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:379)
>   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
>   at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
>   at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
>   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
>   at org.scalatest.Suite.run(Suite.scala:1147)
>   at org.scalatest.Suite.run$(Suite.scala:1129)
>   at 
> org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
>   at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
>   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
>   at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
>   at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
>   at 
> org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:56)
>   at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
>   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
>   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
>   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:56)
>   at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
>   at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:507)
>   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
>   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
>   at 

[jira] [Commented] (SPARK-29220) Flaky test: org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.handle large number of containers and tasks (SPARK-18750) [hadoop-3.2][java11]

2019-09-24 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937126#comment-16937126
 ] 

shane knapp commented on SPARK-29220:
-

hmm, i see that https://github.com/apache/spark/pull/25901 has been merged in.

> Flaky test: 
> org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.handle large 
> number of containers and tasks (SPARK-18750) [hadoop-3.2][java11]
> --
>
> Key: SPARK-29220
> URL: https://issues.apache.org/jira/browse/SPARK-29220
> Project: Spark
>  Issue Type: Test
>  Components: Tests, YARN
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Minor
>
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111229/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111236/testReport/]
> {code:java}
> Error Messageorg.scalatest.exceptions.TestFailedException: 
> java.lang.StackOverflowError did not equal 
> nullStacktracesbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedException: java.lang.StackOverflowError 
> did not equal null
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:528)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:527)
>   at 
> org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   at 
> org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
>   at 
> org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.$anonfun$new$1(LocalityPlacementStrategySuite.scala:48)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   at org.scalatest.Transformer.apply(Transformer.scala:20)
>   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:149)
>   at 
> org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
>   at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
>   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
>   at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
>   at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
>   at 
> org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:56)
>   at 
> org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
>   at 
> org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
>   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:56)
>   at 
> org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
>   at 
> org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:396)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
>   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:379)
>   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
>   at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
>   at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
>   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
>   at org.scalatest.Suite.run(Suite.scala:1147)
>   at org.scalatest.Suite.run$(Suite.scala:1129)
>   at 
> org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
>   at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
>   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
>   at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
>   at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
>   at 
> org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:56)
>   at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
>   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
>   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
>   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:56)
>   at 
> org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
>   at 
> org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:507)
>   at sbt.ForkMain$Run$2.call(ForkMain.java:296)
>   at sbt.ForkMain$Run$2.call(ForkMain.java:286)
>   at 

[jira] [Commented] (SPARK-29220) Flaky test: org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.handle large number of containers and tasks (SPARK-18750) [hadoop-3.2][java11]

2019-09-24 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937124#comment-16937124
 ] 

shane knapp commented on SPARK-29220:
-

ARGH i just messed up and accidentally overwrote all of these logs.  :(

i'll retrigger a build and if it happens again let me know here and we can 
investigate fully.  i'm not really sure what's happening, as i'm not a 
java/scala expert, and other pull request builds are successfully running on 
this worker.  the system configs for these machines haven't changed in a long 
time, either.
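
for anyone who wants to poke at this outside of the PR builder, here is a 
minimal sketch of re-running just the affected suite (the sbt project name and 
the profiles are assumptions about the current build layout, so adjust as 
needed):

{noformat}
# sketch: re-run only the flaky suite against the hadoop-3.2 profile
# ("yarn" as the sbt project name and the -Pyarn/-Phadoop-3.2 flags are assumptions)
./build/sbt -Pyarn -Phadoop-3.2 "yarn/testOnly org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite"
{noformat}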

> Flaky test: 
> org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.handle large 
> number of containers and tasks (SPARK-18750) [hadoop-3.2][java11]
> --
>
> Key: SPARK-29220
> URL: https://issues.apache.org/jira/browse/SPARK-29220
> Project: Spark
>  Issue Type: Test
>  Components: Tests, YARN
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Minor
>
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111229/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111236/testReport/]
> {code:java}
> Error Messageorg.scalatest.exceptions.TestFailedException: 
> java.lang.StackOverflowError did not equal 
> nullStacktracesbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedException: java.lang.StackOverflowError 
> did not equal null
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:528)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:527)
>   at 
> org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   at 
> org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
>   at 
> org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.$anonfun$new$1(LocalityPlacementStrategySuite.scala:48)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   at org.scalatest.Transformer.apply(Transformer.scala:20)
>   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:149)
>   at 
> org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
>   at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
>   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
>   at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
>   at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
>   at 
> org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:56)
>   at 
> org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
>   at 
> org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
>   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:56)
>   at 
> org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
>   at 
> org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:396)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
>   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:379)
>   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
>   at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
>   at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
>   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
>   at org.scalatest.Suite.run(Suite.scala:1147)
>   at org.scalatest.Suite.run$(Suite.scala:1129)
>   at 
> org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
>   at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
>   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
>   at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
>   at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
>   at 
> org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:56)
>   at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
>   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
>   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
>   at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:56)
>   at 
> 

[jira] [Issue Comment Deleted] (SPARK-29220) Flaky test: org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.handle large number of containers and tasks (SPARK-18750) [hadoop-3.2][java11]

2019-09-24 Thread shane knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp updated SPARK-29220:

Comment: was deleted

(was: for 
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111236/testReport/

would any of the following logs be useful?


{noformat}
-bash-4.1$ find . | grep "unit-tests\.log"|egrep -i "yarn|sql"
./sql/hive/target/unit-tests.log
./sql/core/target/unit-tests.log
./sql/catalyst/target/unit-tests.log
./sql/hive-thriftserver/target/unit-tests.log
./external/kafka-0-10-sql/target/unit-tests.log
./resource-managers/yarn/target/unit-tests.log
{noformat}
 
let me know ASAP!)

> Flaky test: 
> org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.handle large 
> number of containers and tasks (SPARK-18750) [hadoop-3.2][java11]
> --
>
> Key: SPARK-29220
> URL: https://issues.apache.org/jira/browse/SPARK-29220
> Project: Spark
>  Issue Type: Test
>  Components: Tests, YARN
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Minor
>
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111229/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111236/testReport/]
> {code:java}
> Error Messageorg.scalatest.exceptions.TestFailedException: 
> java.lang.StackOverflowError did not equal 
> nullStacktracesbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedException: java.lang.StackOverflowError 
> did not equal null
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:528)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:527)
>   at 
> org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   at 
> org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
>   at 
> org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.$anonfun$new$1(LocalityPlacementStrategySuite.scala:48)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   at org.scalatest.Transformer.apply(Transformer.scala:20)
>   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:149)
>   at 
> org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
>   at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
>   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
>   at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
>   at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
>   at 
> org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:56)
>   at 
> org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
>   at 
> org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
>   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:56)
>   at 
> org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
>   at 
> org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:396)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
>   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:379)
>   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
>   at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
>   at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
>   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
>   at org.scalatest.Suite.run(Suite.scala:1147)
>   at org.scalatest.Suite.run$(Suite.scala:1129)
>   at 
> org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
>   at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
>   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
>   at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
>   at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
>   at 
> org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:56)
>   at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
>   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
>   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
>   at 

[jira] [Commented] (SPARK-29220) Flaky test: org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.handle large number of containers and tasks (SPARK-18750) [hadoop-3.2][java11]

2019-09-24 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16937120#comment-16937120
 ] 

shane knapp commented on SPARK-29220:
-

for 
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111236/testReport/

would any of the following logs be useful?


{noformat}
-bash-4.1$ find . | grep "unit-tests\.log"|egrep -i "yarn|sql"
./sql/hive/target/unit-tests.log
./sql/core/target/unit-tests.log
./sql/catalyst/target/unit-tests.log
./sql/hive-thriftserver/target/unit-tests.log
./external/kafka-0-10-sql/target/unit-tests.log
./resource-managers/yarn/target/unit-tests.log
{noformat}
 
let me know ASAP!
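
as a rough illustration of how those logs could be scanned for this particular 
failure (paths taken from the listing above; the grep itself is just a sketch):

{noformat}
# sketch: look for the stack overflow in the YARN module's unit-test log
grep -n -B 2 -A 10 "StackOverflowError" ./resource-managers/yarn/target/unit-tests.log

# or check every candidate log at once for the same error
find . -name "unit-tests.log" | egrep -i "yarn|sql" | xargs grep -l "StackOverflowError"
{noformat}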

> Flaky test: 
> org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.handle large 
> number of containers and tasks (SPARK-18750) [hadoop-3.2][java11]
> --
>
> Key: SPARK-29220
> URL: https://issues.apache.org/jira/browse/SPARK-29220
> Project: Spark
>  Issue Type: Test
>  Components: Tests, YARN
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Minor
>
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111229/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/111236/testReport/]
> {code:java}
> Error Messageorg.scalatest.exceptions.TestFailedException: 
> java.lang.StackOverflowError did not equal 
> nullStacktracesbt.ForkMain$ForkError: 
> org.scalatest.exceptions.TestFailedException: java.lang.StackOverflowError 
> did not equal null
>   at 
> org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:528)
>   at 
> org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:527)
>   at 
> org.scalatest.FunSuite.newAssertionFailedException(FunSuite.scala:1560)
>   at 
> org.scalatest.Assertions$AssertionsHelper.macroAssert(Assertions.scala:501)
>   at 
> org.apache.spark.deploy.yarn.LocalityPlacementStrategySuite.$anonfun$new$1(LocalityPlacementStrategySuite.scala:48)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>   at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>   at org.scalatest.Transformer.apply(Transformer.scala:22)
>   at org.scalatest.Transformer.apply(Transformer.scala:20)
>   at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>   at org.apache.spark.SparkFunSuite.withFixture(SparkFunSuite.scala:149)
>   at 
> org.scalatest.FunSuiteLike.invokeWithFixture$1(FunSuiteLike.scala:184)
>   at org.scalatest.FunSuiteLike.$anonfun$runTest$1(FunSuiteLike.scala:196)
>   at org.scalatest.SuperEngine.runTestImpl(Engine.scala:289)
>   at org.scalatest.FunSuiteLike.runTest(FunSuiteLike.scala:196)
>   at org.scalatest.FunSuiteLike.runTest$(FunSuiteLike.scala:178)
>   at 
> org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterEach$$super$runTest(SparkFunSuite.scala:56)
>   at 
> org.scalatest.BeforeAndAfterEach.runTest(BeforeAndAfterEach.scala:221)
>   at 
> org.scalatest.BeforeAndAfterEach.runTest$(BeforeAndAfterEach.scala:214)
>   at org.apache.spark.SparkFunSuite.runTest(SparkFunSuite.scala:56)
>   at 
> org.scalatest.FunSuiteLike.$anonfun$runTests$1(FunSuiteLike.scala:229)
>   at 
> org.scalatest.SuperEngine.$anonfun$runTestsInBranch$1(Engine.scala:396)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:384)
>   at org.scalatest.SuperEngine.runTestsInBranch(Engine.scala:379)
>   at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:461)
>   at org.scalatest.FunSuiteLike.runTests(FunSuiteLike.scala:229)
>   at org.scalatest.FunSuiteLike.runTests$(FunSuiteLike.scala:228)
>   at org.scalatest.FunSuite.runTests(FunSuite.scala:1560)
>   at org.scalatest.Suite.run(Suite.scala:1147)
>   at org.scalatest.Suite.run$(Suite.scala:1129)
>   at 
> org.scalatest.FunSuite.org$scalatest$FunSuiteLike$$super$run(FunSuite.scala:1560)
>   at org.scalatest.FunSuiteLike.$anonfun$run$1(FunSuiteLike.scala:233)
>   at org.scalatest.SuperEngine.runImpl(Engine.scala:521)
>   at org.scalatest.FunSuiteLike.run(FunSuiteLike.scala:233)
>   at org.scalatest.FunSuiteLike.run$(FunSuiteLike.scala:232)
>   at 
> org.apache.spark.SparkFunSuite.org$scalatest$BeforeAndAfterAll$$super$run(SparkFunSuite.scala:56)
>   at 
> org.scalatest.BeforeAndAfterAll.liftedTree1$1(BeforeAndAfterAll.scala:213)
>   at org.scalatest.BeforeAndAfterAll.run(BeforeAndAfterAll.scala:210)
>   at org.scalatest.BeforeAndAfterAll.run$(BeforeAndAfterAll.scala:208)
>   at 

[jira] [Resolved] (SPARK-29204) Remove `Spark Release` Jenkins tab and its four jobs

2019-09-23 Thread shane knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp resolved SPARK-29204.
-
  Assignee: shane knapp
Resolution: Fixed

> Remove `Spark Release` Jenkins tab and its four jobs
> 
>
> Key: SPARK-29204
> URL: https://issues.apache.org/jira/browse/SPARK-29204
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
> Attachments: Spark Release Jobs.png
>
>
> Since last two years, we didn't use `Spark Release` Jenkins jobs. Although we 
> keep them until now, it already became outdated because we are using Docker 
> `spark-rm` image.
>  !Spark Release Jobs.png! 
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/
> We had better remove them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29204) Remove `Spark Release` Jenkins tab and its four jobs

2019-09-23 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936066#comment-16936066
 ] 

shane knapp commented on SPARK-29204:
-

done, done and done.

> Remove `Spark Release` Jenkins tab and its four jobs
> 
>
> Key: SPARK-29204
> URL: https://issues.apache.org/jira/browse/SPARK-29204
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Priority: Major
> Attachments: Spark Release Jobs.png
>
>
> Since last two years, we didn't use `Spark Release` Jenkins jobs. Although we 
> keep them until now, it already became outdated because we are using Docker 
> `spark-rm` image.
>  !Spark Release Jobs.png! 
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/
> We had better remove them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29204) Remove `Spark Release` Jenkins tab and its four jobs

2019-09-23 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936060#comment-16936060
 ] 

shane knapp commented on SPARK-29204:
-

PR merged.  deleting jobs + views now.  thanks [~yhuai]!

> Remove `Spark Release` Jenkins tab and its four jobs
> 
>
> Key: SPARK-29204
> URL: https://issues.apache.org/jira/browse/SPARK-29204
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Priority: Major
> Attachments: Spark Release Jobs.png
>
>
> Since last two years, we didn't use `Spark Release` Jenkins jobs. Although we 
> keep them until now, it already became outdated because we are using Docker 
> `spark-rm` image.
>  !Spark Release Jobs.png! 
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/
> We had better remove them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29204) Remove `Spark Release` Jenkins tab and its four jobs

2019-09-23 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936051#comment-16936051
 ] 

shane knapp commented on SPARK-29204:
-

for those w/perms to see the JJB databricks repo:
https://github.com/databricks/spark-jenkins-configurations/pull/59

> Remove `Spark Release` Jenkins tab and its four jobs
> 
>
> Key: SPARK-29204
> URL: https://issues.apache.org/jira/browse/SPARK-29204
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Priority: Major
> Attachments: Spark Release Jobs.png
>
>
> Since last two years, we didn't use `Spark Release` Jenkins jobs. Although we 
> keep them until now, it already became outdated because we are using Docker 
> `spark-rm` image.
>  !Spark Release Jobs.png! 
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/
> We had better remove them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29204) Remove `Spark Release` Jenkins tab and its four jobs

2019-09-23 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16936034#comment-16936034
 ] 

shane knapp commented on SPARK-29204:
-

today, in order, i will:

1) delete the configs from the databricks JJB configs (YAY!).  side effect: 
 this will make it a LOT easier to move the configs into the main spark repo.
2) after (1), delete the jobs from jenkins and clean up any leftover historical 
builds in the nodes' workspaces (a rough sketch of this is below)
3) delete the view
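
a rough sketch of what (1) and (2) could look like from the command line, 
assuming jenkins-job-builder (JJB) is what manages these jobs; the job names 
and the worker hostname below are illustrative, not the real ones:

{noformat}
# (1) remove the release jobs from jenkins via JJB (job names are illustrative)
jenkins-jobs --conf jenkins_jobs.ini delete spark-release-build spark-release-docs

# (2) on each worker, clear out leftover build directories for the deleted jobs
ssh jenkins-worker-01 'rm -rf /home/jenkins/workspace/spark-release-*'
{noformat}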



> Remove `Spark Release` Jenkins tab and its four jobs
> 
>
> Key: SPARK-29204
> URL: https://issues.apache.org/jira/browse/SPARK-29204
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Priority: Major
> Attachments: Spark Release Jobs.png
>
>
> Since last two years, we didn't use `Spark Release` Jenkins jobs. Although we 
> keep them until now, it already became outdated because we are using Docker 
> `spark-rm` image.
>  !Spark Release Jobs.png! 
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/
> We had better remove them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29204) Remove `Spark Release` Jenkins tab and its four jobs

2019-09-23 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16935946#comment-16935946
 ] 

shane knapp commented on SPARK-29204:
-

yeah, i'll get around to this later today (as well as the JJB configs).



> Remove `Spark Release` Jenkins tab and its four jobs
> 
>
> Key: SPARK-29204
> URL: https://issues.apache.org/jira/browse/SPARK-29204
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Priority: Major
> Attachments: Spark Release Jobs.png
>
>
> Since last two years, we didn't use `Spark Release` Jenkins jobs. Although we 
> keep them until now, it already became outdated because we are using Docker 
> `spark-rm` image.
>  !Spark Release Jobs.png! 
> - https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Release/
> We had better remove them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29129) Test failure: org.apache.spark.sql.hive.JavaDataFrameSuite (hadoop-2.7/JDK 11 combination)

2019-09-20 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934828#comment-16934828
 ] 

shane knapp commented on SPARK-29129:
-

[~dongjoon] especially when it involves deleting builds.  :)

> Test failure: org.apache.spark.sql.hive.JavaDataFrameSuite (hadoop-2.7/JDK 11 
> combination)
> --
>
> Key: SPARK-29129
> URL: https://issues.apache.org/jira/browse/SPARK-29129
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Major
>
> Some of tests in org.apache.spark.sql.hive.JavaDataFrameSuite are failing 
> intermittently in CI builds.
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1564/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1563/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1562/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1559/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1558/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1557/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1541/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1540/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1539/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1538/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1537/testReport/]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29129) Test failure: org.apache.spark.sql.hive.JavaDataFrameSuite (hadoop-2.7/JDK 11 combination)

2019-09-20 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934825#comment-16934825
 ] 

shane knapp commented on SPARK-29129:
-

all three have been deleted.

> Test failure: org.apache.spark.sql.hive.JavaDataFrameSuite (hadoop-2.7/JDK 11 
> combination)
> --
>
> Key: SPARK-29129
> URL: https://issues.apache.org/jira/browse/SPARK-29129
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Major
>
> Some of tests in org.apache.spark.sql.hive.JavaDataFrameSuite are failing 
> intermittently in CI builds.
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1564/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1563/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1562/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1559/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1558/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1557/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1541/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1540/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1539/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1538/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1537/testReport/]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29129) Test failure: org.apache.spark.sql.hive.JavaDataFrameSuite (hadoop-2.7/JDK 11 combination)

2019-09-20 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934823#comment-16934823
 ] 

shane knapp commented on SPARK-29129:
-

i will happily continue to delete jobs!

and any 'ubuntu-testing' jobs are designed to help us iron out bugs and make 
the (never-ending) project of reimaging the old centos boxes w/ubuntu go more 
smoothly.

> Test failure: org.apache.spark.sql.hive.JavaDataFrameSuite (hadoop-2.7/JDK 11 
> combination)
> --
>
> Key: SPARK-29129
> URL: https://issues.apache.org/jira/browse/SPARK-29129
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Major
>
> Some of tests in org.apache.spark.sql.hive.JavaDataFrameSuite are failing 
> intermittently in CI builds.
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1564/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1563/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1562/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1559/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1558/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1557/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1541/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1540/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1539/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1538/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1537/testReport/]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29129) Test failure: org.apache.spark.sql.hive.JavaDataFrameSuite (hadoop-2.7/JDK 11 combination)

2019-09-20 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934567#comment-16934567
 ] 

shane knapp commented on SPARK-29129:
-

this seems reasonable to me given the circumstances.

> Test failure: org.apache.spark.sql.hive.JavaDataFrameSuite (hadoop-2.7/JDK 11 
> combination)
> --
>
> Key: SPARK-29129
> URL: https://issues.apache.org/jira/browse/SPARK-29129
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 3.0.0
>Reporter: Jungtaek Lim
>Priority: Major
>
> Some of tests in org.apache.spark.sql.hive.JavaDataFrameSuite are failing 
> intermittently in CI builds.
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1564/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1563/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1562/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1559/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1558/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1557/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1541/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1540/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1539/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1538/testReport/]
> [https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1537/testReport/]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29183) Upgrade JDK 11 Installation to 11.0.4

2019-09-19 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933840#comment-16933840
 ] 

shane knapp commented on SPARK-29183:
-

good point.

i won't be able to get started until the middle of next week.  yay fun java.
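
for reference, a quick sketch of spot-checking what a worker ends up on after 
the bump (the hostname is illustrative, and which java lands on the PATH 
depends on how the jdk-11 jobs are wired up):

{noformat}
# sketch: spot-check the JDK build on a worker after the upgrade
ssh jenkins-worker-01 'java -version'   # expect something like: openjdk version "11.0.4"
{noformat}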

> Upgrade JDK 11 Installation to 11.0.4
> -
>
> Key: SPARK-29183
> URL: https://issues.apache.org/jira/browse/SPARK-29183
> Project: Spark
>  Issue Type: Improvement
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Priority: Major
>
> Every JDK 11.0.x releases have many fixes including performance regression 
> fix. We had better upgrade it to the latest 11.0.4.
> - https://bugs.java.com/bugdatabase/view_bug.do?bug_id=JDK-8221760



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29066) Remove old Jenkins jobs for EOL versions or obsolete combinations

2019-09-13 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929510#comment-16929510
 ] 

shane knapp commented on SPARK-29066:
-

well, we weren't actually stressed for space.  each of the workers has ~1T of 
disk, and we were hovering around ~40% usage.  now that's 'only' ~37%.

it's nice to clean up cruft regardless.  :)

> Remove old Jenkins jobs for EOL versions or obsolete combinations
> -
>
> Key: SPARK-29066
> URL: https://issues.apache.org/jira/browse/SPARK-29066
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
>
> This issue aims to remove the old Jenkins jobs for EOL versions (1.6 ~ 2.3) 
> and some obsolete combinations.
> 1. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/
> 2. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/ (Here, 
> `spark-master-compile-maven-hadoop-2.6` is an invalid combination.)
> 3. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/
> 4. 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
> 5. https://amplab.cs.berkeley.edu/jenkins/view/spark%20k8s%20builds/
> For 1~3, we need additional scroll-down in laptop environments. It's 
> inconvenient.
> This cleanup will make us more room when we add `branch-3.0` later. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29066) Remove old Jenkins jobs for EOL versions or obsolete combinations

2019-09-13 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929493#comment-16929493
 ] 

shane knapp commented on SPARK-29066:
-

i'm also going through all of the centos and ubuntu workers and deleting any 
leftover build directories in the jenkins workspace.

so far i've reclaimed ~40G of disk space on each worker...  and i'm not even 
done yet!  :)
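
for the curious, a sketch of the kind of sweep this involves (the workspace 
root is an assumption about the worker layout, and the job-name pattern is 
illustrative):

{noformat}
# sketch: find the biggest leftover build directories on a worker
du -sh /home/jenkins/workspace/* 2>/dev/null | sort -rh | head -n 20

# then remove directories belonging to jobs that no longer exist in jenkins
rm -rf /home/jenkins/workspace/spark-branch-1.6-*
{noformat}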

> Remove old Jenkins jobs for EOL versions or obsolete combinations
> -
>
> Key: SPARK-29066
> URL: https://issues.apache.org/jira/browse/SPARK-29066
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
>
> This issue aims to remove the old Jenkins jobs for EOL versions (1.6 ~ 2.3) 
> and some obsolete combinations.
> 1. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/
> 2. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/ (Here, 
> `spark-master-compile-maven-hadoop-2.6` is an invalid combination.)
> 3. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/
> 4. 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
> 5. https://amplab.cs.berkeley.edu/jenkins/view/spark%20k8s%20builds/
> For 1~3, we need additional scroll-down in laptop environments. It's 
> inconvenient.
> This cleanup will make us more room when we add `branch-3.0` later. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29066) Remove old Jenkins jobs for EOL versions or obsolete combinations

2019-09-13 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16929396#comment-16929396
 ] 

shane knapp commented on SPARK-29066:
-

done...  this is great!  the jenkins build list is looking much more reasonable 
now.  :)

> Remove old Jenkins jobs for EOL versions or obsolete combinations
> -
>
> Key: SPARK-29066
> URL: https://issues.apache.org/jira/browse/SPARK-29066
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
>
> This issue aims to remove the old Jenkins jobs for EOL versions (1.6 ~ 2.3) 
> and some obsolete combinations.
> 1. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/
> 2. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/ (Here, 
> `spark-master-compile-maven-hadoop-2.6` is an invalid combination.)
> 3. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/
> 4. 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
> 5. https://amplab.cs.berkeley.edu/jenkins/view/spark%20k8s%20builds/
> For 1~3, we need additional scroll-down in laptop environments. It's 
> inconvenient.
> This cleanup will make us more room when we add `branch-3.0` later. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29066) Remove old Jenkins jobs for EOL versions or obsolete combinations

2019-09-12 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928991#comment-16928991
 ] 

shane knapp commented on SPARK-29066:
-

done

> Remove old Jenkins jobs for EOL versions or obsolete combinations
> -
>
> Key: SPARK-29066
> URL: https://issues.apache.org/jira/browse/SPARK-29066
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
>
> This issue aims to remove the old Jenkins jobs for EOL versions (1.6 ~ 2.3) 
> and some obsolete combinations.
> 1. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/
> 2. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/ (Here, 
> `spark-master-compile-maven-hadoop-2.6` is an invalid combination.)
> 3. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/
> 4. 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
> 5. https://amplab.cs.berkeley.edu/jenkins/view/spark%20k8s%20builds/
> For 1~3, we need additional scroll-down in laptop environments. It's 
> inconvenient.
> This cleanup will make us more room when we add `branch-3.0` later. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-29066) Remove old Jenkins jobs for EOL versions or obsolete combinations

2019-09-12 Thread shane knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp resolved SPARK-29066.
-
Resolution: Fixed

> Remove old Jenkins jobs for EOL versions or obsolete combinations
> -
>
> Key: SPARK-29066
> URL: https://issues.apache.org/jira/browse/SPARK-29066
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
>
> This issue aims to remove the old Jenkins jobs for EOL versions (1.6 ~ 2.3) 
> and some obsolete combinations.
> 1. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/
> 2. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/ (Here, 
> `spark-master-compile-maven-hadoop-2.6` is an invalid combination.)
> 3. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/
> 4. 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
> 5. https://amplab.cs.berkeley.edu/jenkins/view/spark%20k8s%20builds/
> For 1~3, we need additional scroll-down in laptop environments. It's 
> inconvenient.
> This cleanup will make us more room when we add `branch-3.0` later. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29066) Remove old Jenkins jobs for EOL versions or obsolete combinations

2019-09-12 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928824#comment-16928824
 ] 

shane knapp commented on SPARK-29066:
-

done!  that was quite satisfying.  :)

> Remove old Jenkins jobs for EOL versions or obsolete combinations
> -
>
> Key: SPARK-29066
> URL: https://issues.apache.org/jira/browse/SPARK-29066
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
>
> This issue aims to remove the old Jenkins jobs for EOL versions (1.6 ~ 2.3) 
> and some obsolete combinations.
> 1. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/
> 2. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/ (Here, 
> `spark-master-compile-maven-hadoop-2.6` is an invalid combination.)
> 3. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/
> 4. 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
> 5. https://amplab.cs.berkeley.edu/jenkins/view/spark%20k8s%20builds/
> For 1~3, we need additional scroll-down in laptop environments. It's 
> inconvenient.
> This cleanup will make us more room when we add `branch-3.0` later. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29066) Remove old Jenkins jobs for EOL versions or obsolete combinations

2019-09-12 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928816#comment-16928816
 ] 

shane knapp commented on SPARK-29066:
-

i am currently deleting old jobs w/absolute relish.

> Remove old Jenkins jobs for EOL versions or obsolete combinations
> -
>
> Key: SPARK-29066
> URL: https://issues.apache.org/jira/browse/SPARK-29066
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
>
> This issue aims to remove the old Jenkins jobs for EOL versions (1.6 ~ 2.3) 
> and some obsolete combinations.
> 1. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/
> 2. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/ (Here, 
> `spark-master-compile-maven-hadoop-2.6` is an invalid combination.)
> 3. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/
> 4. 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
> 5. https://amplab.cs.berkeley.edu/jenkins/view/spark%20k8s%20builds/
> For 1~3, we need additional scroll-down in laptop environments. It's 
> inconvenient.
> This cleanup will make us more room when we add `branch-3.0` later. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-29066) Remove old Jenkins jobs for EOL versions or obsolete combinations

2019-09-12 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-29066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928620#comment-16928620
 ] 

shane knapp commented on SPARK-29066:
-

yep, i can take care of this pretty easily.

> Remove old Jenkins jobs for EOL versions or obsolete combinations
> -
>
> Key: SPARK-29066
> URL: https://issues.apache.org/jira/browse/SPARK-29066
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
>
> This issue aims to remove the old Jenkins jobs for EOL versions (1.6 ~ 2.3) 
> and some obsolete combinations.
> 1. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/
> 2. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/ (Here, 
> `spark-master-compile-maven-hadoop-2.6` is an invalid combination.)
> 3. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/
> 4. 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
> 5. https://amplab.cs.berkeley.edu/jenkins/view/spark%20k8s%20builds/
> For 1~3, we need additional scroll-down in laptop environments. It's 
> inconvenient.
> This cleanup will make us more room when we add `branch-3.0` later. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-29066) Remove old Jenkins jobs for EOL versions or obsolete combinations

2019-09-12 Thread shane knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp reassigned SPARK-29066:
---

Assignee: shane knapp

> Remove old Jenkins jobs for EOL versions or obsolete combinations
> -
>
> Key: SPARK-29066
> URL: https://issues.apache.org/jira/browse/SPARK-29066
> Project: Spark
>  Issue Type: Task
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
>
> This issue aims to remove the old Jenkins jobs for EOL versions (1.6 ~ 2.3) 
> and some obsolete combinations.
> 1. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/
> 2. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/ (Here, 
> `spark-master-compile-maven-hadoop-2.6` is an invalid combination.)
> 3. https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test/
> 4. 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/
> 5. https://amplab.cs.berkeley.edu/jenkins/view/spark%20k8s%20builds/
> For 1~3, we need additional scroll-down in laptop environments. It's 
> inconvenient.
> This cleanup will make us more room when we add `branch-3.0` later. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28953) Integration tests fail due to malformed URL

2019-09-04 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922684#comment-16922684
 ] 

shane knapp commented on SPARK-28953:
-

[~skonto] we're currently on minikube v0.34.1 and k8s 1.13.3
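
for context on the parse failure itself: the reporter's minikube v1.3.1 prints 
a leading "* " before the URL when its output is captured, and that prefix is 
what trips java.net.URL.  a shell sketch of pulling out just the URL regardless 
of minikube version (illustrative only, not what DepsTestsSuite currently does):

{noformat}
# sketch: extract only the http URL, tolerating the leading "* " that
# newer minikube versions print (service name/namespace taken from the report below)
minikube service ceph-nano-s3 -n spark --url | grep -o 'http://[^[:space:]]*' | head -n 1
{noformat}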

> Integration tests fail due to malformed URL
> ---
>
> Key: SPARK-28953
> URL: https://issues.apache.org/jira/browse/SPARK-28953
> Project: Spark
>  Issue Type: Bug
>  Components: jenkins, Kubernetes
>Affects Versions: 3.0.0
>Reporter: Stavros Kontopoulos
>Priority: Major
>
> Tests failed on Ubuntu, verified on two different machines:
> KubernetesSuite:
> - Launcher client dependencies *** FAILED ***
>  java.net.MalformedURLException: no protocol: * http://172.31.46.91:30706
>  at java.net.URL.(URL.java:600)
>  at java.net.URL.(URL.java:497)
>  at java.net.URL.(URL.java:446)
>  at 
> org.apache.spark.deploy.k8s.integrationtest.DepsTestsSuite.$anonfun$$init$$1(DepsTestsSuite.scala:160)
>  at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>  at org.scalatest.Transformer.apply(Transformer.scala:22)
>  at org.scalatest.Transformer.apply(Transformer.scala:20)
>  at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>  
> Welcome to
>   __
>  / __/__ ___ _/ /__
>  _\ \/ _ \/ _ `/ __/ '_/
>  /___/ .__/\_,_/_/ /_/\_\ version 3.0.0-SNAPSHOT
>  /_/
>  
>  Using Scala version 2.12.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_222)
>  Type in expressions to have them evaluated.
>  Type :help for more information.
>  
> scala> val pb = new ProcessBuilder().command("bash", "-c", "minikube service 
> ceph-nano-s3 -n spark --url")
>  pb: ProcessBuilder = java.lang.ProcessBuilder@46092840
> scala> pb.redirectErrorStream(true)
>  res0: ProcessBuilder = java.lang.ProcessBuilder@46092840
> scala> val proc = pb.start()
>  proc: Process = java.lang.UNIXProcess@5e9650d3
> scala> val r = org.apache.commons.io.IOUtils.toString(proc.getInputStream())
>  r: String =
>  "* http://172.31.46.91:30706
>  "
> Although (no asterisk):
> $ minikube service ceph-nano-s3 -n spark --url
> [http://172.31.46.91:30706|http://172.31.46.91:30706/]
>  
> This is weird because it fails at the java level, where does the asterisk 
> come from?
> $ minikube version
> minikube version: v1.3.1
> commit: ca60a424ce69a4d79f502650199ca2b52f29e631
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28900) Test Pyspark, SparkR on JDK 11 with run-tests

2019-09-04 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16922681#comment-16922681
 ] 

shane knapp commented on SPARK-28900:
-

hey everyone, i have to back-burner this issue for the next couple of weeks.  
half of my team is out on vacation or dealing with family issues, and i'm sole 
support for 4 research labs right now.  also i threw my back out last week and 
have been dealing w/that.

wish me luck!  :\

> Test Pyspark, SparkR on JDK 11 with run-tests
> -
>
> Key: SPARK-28900
> URL: https://issues.apache.org/jira/browse/SPARK-28900
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Sean Owen
>Priority: Major
>
> Right now, we are testing JDK 11 with a Maven-based build, as in 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-3.2/
> It looks like _all_ of the Maven-based jobs 'manually' build and invoke 
> tests, and only run tests via Maven -- that is, they do not run Pyspark or 
> SparkR tests. The SBT-based builds do, because they use the {{dev/run-tests}} 
> script that is meant to be for this purpose.
> In fact, there seem to be a couple flavors of copy-pasted build configs. SBT 
> builds look like:
> {code}
> #!/bin/bash
> set -e
> # Configure per-build-executor Ivy caches to avoid SBT Ivy lock contention
> export HOME="/home/sparkivy/per-executor-caches/$EXECUTOR_NUMBER"
> mkdir -p "$HOME"
> export SBT_OPTS="-Duser.home=$HOME -Dsbt.ivy.home=$HOME/.ivy2"
> export SPARK_VERSIONS_SUITE_IVY_PATH="$HOME/.ivy2"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> git clean -fdx
> ./dev/run-tests
> {code}
> Maven builds looks like:
> {code}
> #!/bin/bash
> set -x
> set -e
> rm -rf ./work
> git clean -fdx
> # Generate random port for Zinc
> export ZINC_PORT
> ZINC_PORT=$(python -S -c "import random; print random.randrange(3030,4030)")
> # Use per-build-executor Ivy caches to avoid SBT Ivy lock contention:
> export 
> SPARK_VERSIONS_SUITE_IVY_PATH="/home/sparkivy/per-executor-caches/$EXECUTOR_NUMBER/.ivy2"
> mkdir -p "$SPARK_VERSIONS_SUITE_IVY_PATH"
> # Prepend JAVA_HOME/bin to fix issue where Zinc's embedded SBT incremental 
> compiler seems to
> # ignore our JAVA_HOME and use the system javac instead.
> export PATH="$JAVA_HOME/bin:$PATH"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> MVN="build/mvn -DzincPort=$ZINC_PORT"
> set +e
> if [[ $HADOOP_PROFILE == hadoop-1 ]]; then
> # Note that there is no -Pyarn flag here for Hadoop 1:
> $MVN \
> -DskipTests \
> -P"$HADOOP_PROFILE" \
> -Dhadoop.version="$HADOOP_VERSION" \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> clean package
> retcode1=$?
> $MVN \
> -P"$HADOOP_PROFILE" \
> -Dhadoop.version="$HADOOP_VERSION" \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> --fail-at-end \
> test
> retcode2=$?
> else
> $MVN \
> -DskipTests \
> -P"$HADOOP_PROFILE" \
> -Pyarn \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> clean package
> retcode1=$?
> $MVN \
> -P"$HADOOP_PROFILE" \
> -Pyarn \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> --fail-at-end \
> test
> retcode2=$?
> fi
> if [[ $retcode1 -ne 0 || $retcode2 -ne 0 ]]; then
>   if [[ $retcode1 -ne 0 ]]; then
> echo "Packaging Spark with Maven failed"
>   fi
>   if [[ $retcode2 -ne 0 ]]; then
> echo "Testing Spark with Maven failed"
>   fi
>   exit 1
> fi
> {code}
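
as an aside, the ZINC_PORT line in the block above relies on python 2's print 
statement; a python-3-friendly variant (just a sketch, not the current job 
config) could look like:

{code}
# sketch only: same random-port trick, but with print() as a function so it
# also runs under python 3
ZINC_PORT=$(python3 -S -c "import random; print(random.randrange(3030, 4030))")
export ZINC_PORT
{code}
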
> The PR builder (one of them at least) looks like:
> {code}
> #!/bin/bash
> set -e  # fail on any non-zero exit code
> set -x
> export AMPLAB_JENKINS=1
> export PATH="$PATH:/home/anaconda/envs/py3k/bin"
> # Prepend JAVA_HOME/bin to fix issue where Zinc's embedded SBT incremental 
> compiler seems to
> # ignore our JAVA_HOME and use the system javac instead.
> export PATH="$JAVA_HOME/bin:$PATH"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> echo "fixing target dir permissions"
> chmod -R +w target/* || true  # stupid hack by sknapp to ensure that the 
> chmod always exits w/0 and doesn't bork the script

[jira] [Commented] (SPARK-28953) Integration tests fail due to malformed URL

2019-09-03 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16921691#comment-16921691
 ] 

shane knapp commented on SPARK-28953:
-

could you link to a build that failed this way?

> Integration tests fail due to malformed URL
> ---
>
> Key: SPARK-28953
> URL: https://issues.apache.org/jira/browse/SPARK-28953
> Project: Spark
>  Issue Type: Bug
>  Components: jenkins, Kubernetes
>Affects Versions: 3.0.0
>Reporter: Stavros Kontopoulos
>Priority: Major
>
> Tests failed on Ubuntu, verified on two different machines:
> KubernetesSuite:
> - Launcher client dependencies *** FAILED ***
>  java.net.MalformedURLException: no protocol: * http://172.31.46.91:30706
>  at java.net.URL.(URL.java:600)
>  at java.net.URL.(URL.java:497)
>  at java.net.URL.(URL.java:446)
>  at 
> org.apache.spark.deploy.k8s.integrationtest.DepsTestsSuite.$anonfun$$init$$1(DepsTestsSuite.scala:160)
>  at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>  at org.scalatest.Transformer.apply(Transformer.scala:22)
>  at org.scalatest.Transformer.apply(Transformer.scala:20)
>  at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>  
> Welcome to
>   __
>  / __/__ ___ _/ /__
>  _\ \/ _ \/ _ `/ __/ '_/
>  /___/ .__/\_,_/_/ /_/\_\ version 3.0.0-SNAPSHOT
>  /_/
>  
>  Using Scala version 2.12.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_222)
>  Type in expressions to have them evaluated.
>  Type :help for more information.
>  
> scala> val pb = new ProcessBuilder().command("bash", "-c", "minikube service 
> ceph-nano-s3 -n spark --url")
>  pb: ProcessBuilder = java.lang.ProcessBuilder@46092840
> scala> pb.redirectErrorStream(true)
>  res0: ProcessBuilder = java.lang.ProcessBuilder@46092840
> scala> val proc = pb.start()
>  proc: Process = java.lang.UNIXProcess@5e9650d3
> scala> val r = org.apache.commons.io.IOUtils.toString(proc.getInputStream())
>  r: String =
>  "* http://172.31.46.91:30706
>  "
> Although (no asterisk):
> $ minikube service ceph-nano-s3 -n spark --url
> [http://172.31.46.91:30706|http://172.31.46.91:30706/]
>  
> This is weird because it fails at the java level, where does the asterisk 
> come from?
> $ minikube version
> minikube version: v1.3.1
> commit: ca60a424ce69a4d79f502650199ca2b52f29e631
>  
>  
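
one low-tech way to guard the test against that stray prefix (a rough, untested 
sketch; the service name and namespace are the ones from the report above) 
would be to sanitize the captured output before handing it to java.net.URL:

{code}
# strip any leading "* " and trailing whitespace from the first line of output
url=$(minikube service ceph-nano-s3 -n spark --url | head -n1 | sed -e 's/^[* ]*//' -e 's/[[:space:]]*$//')
echo "sanitized url: $url"
{code}
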



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28900) Test Pyspark, SparkR on JDK 11 with run-tests

2019-08-28 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917939#comment-16917939
 ] 

shane knapp commented on SPARK-28900:
-

i kinda think it's time to boil the ocean, tbh.  i am more than happy to BART 
over to SF and camp out w/some folks at databricks to help me push through 
this.  ;)

> Test Pyspark, SparkR on JDK 11 with run-tests
> -
>
> Key: SPARK-28900
> URL: https://issues.apache.org/jira/browse/SPARK-28900
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Sean Owen
>Priority: Major
>
> Right now, we are testing JDK 11 with a Maven-based build, as in 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-3.2/
> It looks like _all_ of the Maven-based jobs 'manually' build and invoke 
> tests, and only run tests via Maven -- that is, they do not run Pyspark or 
> SparkR tests. The SBT-based builds do, because they use the {{dev/run-tests}} 
> script that is meant to be for this purpose.
> In fact, there seem to be a couple flavors of copy-pasted build configs. SBT 
> builds look like:
> {code}
> #!/bin/bash
> set -e
> # Configure per-build-executor Ivy caches to avoid SBT Ivy lock contention
> export HOME="/home/sparkivy/per-executor-caches/$EXECUTOR_NUMBER"
> mkdir -p "$HOME"
> export SBT_OPTS="-Duser.home=$HOME -Dsbt.ivy.home=$HOME/.ivy2"
> export SPARK_VERSIONS_SUITE_IVY_PATH="$HOME/.ivy2"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> git clean -fdx
> ./dev/run-tests
> {code}
> Maven builds look like:
> {code}
> #!/bin/bash
> set -x
> set -e
> rm -rf ./work
> git clean -fdx
> # Generate random port for Zinc
> export ZINC_PORT
> ZINC_PORT=$(python -S -c "import random; print random.randrange(3030,4030)")
> # Use per-build-executor Ivy caches to avoid SBT Ivy lock contention:
> export 
> SPARK_VERSIONS_SUITE_IVY_PATH="/home/sparkivy/per-executor-caches/$EXECUTOR_NUMBER/.ivy2"
> mkdir -p "$SPARK_VERSIONS_SUITE_IVY_PATH"
> # Prepend JAVA_HOME/bin to fix issue where Zinc's embedded SBT incremental 
> compiler seems to
> # ignore our JAVA_HOME and use the system javac instead.
> export PATH="$JAVA_HOME/bin:$PATH"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> MVN="build/mvn -DzincPort=$ZINC_PORT"
> set +e
> if [[ $HADOOP_PROFILE == hadoop-1 ]]; then
> # Note that there is no -Pyarn flag here for Hadoop 1:
> $MVN \
> -DskipTests \
> -P"$HADOOP_PROFILE" \
> -Dhadoop.version="$HADOOP_VERSION" \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> clean package
> retcode1=$?
> $MVN \
> -P"$HADOOP_PROFILE" \
> -Dhadoop.version="$HADOOP_VERSION" \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> --fail-at-end \
> test
> retcode2=$?
> else
> $MVN \
> -DskipTests \
> -P"$HADOOP_PROFILE" \
> -Pyarn \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> clean package
> retcode1=$?
> $MVN \
> -P"$HADOOP_PROFILE" \
> -Pyarn \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> --fail-at-end \
> test
> retcode2=$?
> fi
> if [[ $retcode1 -ne 0 || $retcode2 -ne 0 ]]; then
>   if [[ $retcode1 -ne 0 ]]; then
> echo "Packaging Spark with Maven failed"
>   fi
>   if [[ $retcode2 -ne 0 ]]; then
> echo "Testing Spark with Maven failed"
>   fi
>   exit 1
> fi
> {code}
> The PR builder (one of them at least) looks like:
> {code}
> #!/bin/bash
> set -e  # fail on any non-zero exit code
> set -x
> export AMPLAB_JENKINS=1
> export PATH="$PATH:/home/anaconda/envs/py3k/bin"
> # Prepend JAVA_HOME/bin to fix issue where Zinc's embedded SBT incremental 
> compiler seems to
> # ignore our JAVA_HOME and use the system javac instead.
> export PATH="$JAVA_HOME/bin:$PATH"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> echo "fixing target dir permissions"
> chmod -R +w target/* || true  # stupid hack by sknapp to ensure that the 
> chmod always exits w/0 and doesn't bork the script
> echo "running git clean -fdx"
> git clean -fdx
> # Configure per-build-executor Ivy caches to avoid SBT 

[jira] [Commented] (SPARK-28900) Test Pyspark, SparkR on JDK 11 with run-tests

2019-08-28 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917937#comment-16917937
 ] 

shane knapp commented on SPARK-28900:
-

i don't think we can sever things from the JJB repo right now.  there are 
secrets stored there for publishing artifacts.

> Test Pyspark, SparkR on JDK 11 with run-tests
> -
>
> Key: SPARK-28900
> URL: https://issues.apache.org/jira/browse/SPARK-28900
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Sean Owen
>Priority: Major
>
> Right now, we are testing JDK 11 with a Maven-based build, as in 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-3.2/
> It looks like _all_ of the Maven-based jobs 'manually' build and invoke 
> tests, and only run tests via Maven -- that is, they do not run Pyspark or 
> SparkR tests. The SBT-based builds do, because they use the {{dev/run-tests}} 
> script that is meant to be for this purpose.
> In fact, there seem to be a couple flavors of copy-pasted build configs. SBT 
> builds look like:
> {code}
> #!/bin/bash
> set -e
> # Configure per-build-executor Ivy caches to avoid SBT Ivy lock contention
> export HOME="/home/sparkivy/per-executor-caches/$EXECUTOR_NUMBER"
> mkdir -p "$HOME"
> export SBT_OPTS="-Duser.home=$HOME -Dsbt.ivy.home=$HOME/.ivy2"
> export SPARK_VERSIONS_SUITE_IVY_PATH="$HOME/.ivy2"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> git clean -fdx
> ./dev/run-tests
> {code}
> Maven builds look like:
> {code}
> #!/bin/bash
> set -x
> set -e
> rm -rf ./work
> git clean -fdx
> # Generate random port for Zinc
> export ZINC_PORT
> ZINC_PORT=$(python -S -c "import random; print random.randrange(3030,4030)")
> # Use per-build-executor Ivy caches to avoid SBT Ivy lock contention:
> export 
> SPARK_VERSIONS_SUITE_IVY_PATH="/home/sparkivy/per-executor-caches/$EXECUTOR_NUMBER/.ivy2"
> mkdir -p "$SPARK_VERSIONS_SUITE_IVY_PATH"
> # Prepend JAVA_HOME/bin to fix issue where Zinc's embedded SBT incremental 
> compiler seems to
> # ignore our JAVA_HOME and use the system javac instead.
> export PATH="$JAVA_HOME/bin:$PATH"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> MVN="build/mvn -DzincPort=$ZINC_PORT"
> set +e
> if [[ $HADOOP_PROFILE == hadoop-1 ]]; then
> # Note that there is no -Pyarn flag here for Hadoop 1:
> $MVN \
> -DskipTests \
> -P"$HADOOP_PROFILE" \
> -Dhadoop.version="$HADOOP_VERSION" \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> clean package
> retcode1=$?
> $MVN \
> -P"$HADOOP_PROFILE" \
> -Dhadoop.version="$HADOOP_VERSION" \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> --fail-at-end \
> test
> retcode2=$?
> else
> $MVN \
> -DskipTests \
> -P"$HADOOP_PROFILE" \
> -Pyarn \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> clean package
> retcode1=$?
> $MVN \
> -P"$HADOOP_PROFILE" \
> -Pyarn \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> --fail-at-end \
> test
> retcode2=$?
> fi
> if [[ $retcode1 -ne 0 || $retcode2 -ne 0 ]]; then
>   if [[ $retcode1 -ne 0 ]]; then
> echo "Packaging Spark with Maven failed"
>   fi
>   if [[ $retcode2 -ne 0 ]]; then
> echo "Testing Spark with Maven failed"
>   fi
>   exit 1
> fi
> {code}
> The PR builder (one of them at least) looks like:
> {code}
> #!/bin/bash
> set -e  # fail on any non-zero exit code
> set -x
> export AMPLAB_JENKINS=1
> export PATH="$PATH:/home/anaconda/envs/py3k/bin"
> # Prepend JAVA_HOME/bin to fix issue where Zinc's embedded SBT incremental 
> compiler seems to
> # ignore our JAVA_HOME and use the system javac instead.
> export PATH="$JAVA_HOME/bin:$PATH"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> echo "fixing target dir permissions"
> chmod -R +w target/* || true  # stupid hack by sknapp to ensure that the 
> chmod always exits w/0 and doesn't bork the script
> echo "running git clean -fdx"
> git clean -fdx
> # Configure per-build-executor Ivy caches to avoid SBT Ivy lock contention
> export 

[jira] [Commented] (SPARK-28900) Test Pyspark, SparkR on JDK 11 with run-tests

2019-08-28 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917932#comment-16917932
 ] 

shane knapp commented on SPARK-28900:
-

oh yeah, i forgot the most important thing:

during this build/test/job audit we should *_dockerize everything_*
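
for context, a very rough sketch of what a dockerized job could boil down to 
(the image name here is purely hypothetical; ./dev/run-tests is the existing 
entry point):

{code}
# run the existing test entry point inside a container instead of on the bare worker
docker run --rm \
  -v "$PWD":/home/jenkins/spark -w /home/jenkins/spark \
  spark-build-env:latest \
  ./dev/run-tests
{code}
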

> Test Pyspark, SparkR on JDK 11 with run-tests
> -
>
> Key: SPARK-28900
> URL: https://issues.apache.org/jira/browse/SPARK-28900
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Sean Owen
>Priority: Major
>
> Right now, we are testing JDK 11 with a Maven-based build, as in 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-3.2/
> It looks like _all_ of the Maven-based jobs 'manually' build and invoke 
> tests, and only run tests via Maven -- that is, they do not run Pyspark or 
> SparkR tests. The SBT-based builds do, because they use the {{dev/run-tests}} 
> script that is meant to be for this purpose.
> In fact, there seem to be a couple flavors of copy-pasted build configs. SBT 
> builds look like:
> {code}
> #!/bin/bash
> set -e
> # Configure per-build-executor Ivy caches to avoid SBT Ivy lock contention
> export HOME="/home/sparkivy/per-executor-caches/$EXECUTOR_NUMBER"
> mkdir -p "$HOME"
> export SBT_OPTS="-Duser.home=$HOME -Dsbt.ivy.home=$HOME/.ivy2"
> export SPARK_VERSIONS_SUITE_IVY_PATH="$HOME/.ivy2"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> git clean -fdx
> ./dev/run-tests
> {code}
> Maven builds look like:
> {code}
> #!/bin/bash
> set -x
> set -e
> rm -rf ./work
> git clean -fdx
> # Generate random port for Zinc
> export ZINC_PORT
> ZINC_PORT=$(python -S -c "import random; print random.randrange(3030,4030)")
> # Use per-build-executor Ivy caches to avoid SBT Ivy lock contention:
> export 
> SPARK_VERSIONS_SUITE_IVY_PATH="/home/sparkivy/per-executor-caches/$EXECUTOR_NUMBER/.ivy2"
> mkdir -p "$SPARK_VERSIONS_SUITE_IVY_PATH"
> # Prepend JAVA_HOME/bin to fix issue where Zinc's embedded SBT incremental 
> compiler seems to
> # ignore our JAVA_HOME and use the system javac instead.
> export PATH="$JAVA_HOME/bin:$PATH"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> MVN="build/mvn -DzincPort=$ZINC_PORT"
> set +e
> if [[ $HADOOP_PROFILE == hadoop-1 ]]; then
> # Note that there is no -Pyarn flag here for Hadoop 1:
> $MVN \
> -DskipTests \
> -P"$HADOOP_PROFILE" \
> -Dhadoop.version="$HADOOP_VERSION" \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> clean package
> retcode1=$?
> $MVN \
> -P"$HADOOP_PROFILE" \
> -Dhadoop.version="$HADOOP_VERSION" \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> --fail-at-end \
> test
> retcode2=$?
> else
> $MVN \
> -DskipTests \
> -P"$HADOOP_PROFILE" \
> -Pyarn \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> clean package
> retcode1=$?
> $MVN \
> -P"$HADOOP_PROFILE" \
> -Pyarn \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> --fail-at-end \
> test
> retcode2=$?
> fi
> if [[ $retcode1 -ne 0 || $retcode2 -ne 0 ]]; then
>   if [[ $retcode1 -ne 0 ]]; then
> echo "Packaging Spark with Maven failed"
>   fi
>   if [[ $retcode2 -ne 0 ]]; then
> echo "Testing Spark with Maven failed"
>   fi
>   exit 1
> fi
> {code}
> The PR builder (one of them at least) looks like:
> {code}
> #!/bin/bash
> set -e  # fail on any non-zero exit code
> set -x
> export AMPLAB_JENKINS=1
> export PATH="$PATH:/home/anaconda/envs/py3k/bin"
> # Prepend JAVA_HOME/bin to fix issue where Zinc's embedded SBT incremental 
> compiler seems to
> # ignore our JAVA_HOME and use the system javac instead.
> export PATH="$JAVA_HOME/bin:$PATH"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> echo "fixing target dir permissions"
> chmod -R +w target/* || true  # stupid hack by sknapp to ensure that the 
> chmod always exits w/0 and doesn't bork the script
> echo "running git clean -fdx"
> git clean -fdx
> # Configure per-build-executor Ivy caches to avoid SBT Ivy lock contention
> export 

[jira] [Commented] (SPARK-28900) Test Pyspark, SparkR on JDK 11 with run-tests

2019-08-28 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16917921#comment-16917921
 ] 

shane knapp commented on SPARK-28900:
-

some quick comments:
* this jenkins installation is literally 9 years old (IIRC it was first put up 
in 2010 by matt massie)
* most spark builds are managed by jenkins job builder (JJB) configs, and that 
stuff is stored in a databricks github repo (this means the bash scripts 
referenced above are not copy-pasted)
* the ones that ARE NOT managed by JJB are the following:  
SparkPullRequestBuilder, NewSparkPullRequestBuilder, all JDK11 builds, all 
ubuntu-testing builds, and finally 
testing-k8s-prb-make-spark-distribution-unified
* moving the JJB configs to the apache spark repo is a priority, but we need to 
figure out how to manage creds and secrets that are currently in the 
databricks/spark-jenkins-configurations repo (a rough sketch of one way to 
handle that is below, after this list)
* we really need to upgrade jenkins to 2.0+, which will let us use pipeline 
builds (https://jenkins.io/doc/book/pipeline/)
* many many many of these builds were created long before i took over 
jenkins...  so even i don't have historical context for many of the testing 
decisions made
* this project will be very big, have many moving parts and dependencies, and 
will require a few people dedicated to making it work
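
on the creds point above, a hedged sketch: if the publish jobs read secrets 
from jenkins-injected environment variables (e.g. via a credentials binding) 
instead of values committed to the config repo, the JJB configs themselves 
could live in a public repo.  the variable names below are hypothetical:

{code}
# fail fast if the credentials binding didn't inject the expected variables
: "${NEXUS_USERNAME:?expected from the Jenkins credentials binding}"
: "${NEXUS_PASSWORD:?expected from the Jenkins credentials binding}"
# ...the publish step would then read these at runtime instead of a committed secret
{code}
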

> Test Pyspark, SparkR on JDK 11 with run-tests
> -
>
> Key: SPARK-28900
> URL: https://issues.apache.org/jira/browse/SPARK-28900
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build
>Affects Versions: 3.0.0
>Reporter: Sean Owen
>Priority: Major
>
> Right now, we are testing JDK 11 with a Maven-based build, as in 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-3.2/
> It looks like _all_ of the Maven-based jobs 'manually' build and invoke 
> tests, and only run tests via Maven -- that is, they do not run Pyspark or 
> SparkR tests. The SBT-based builds do, because they use the {{dev/run-tests}} 
> script that is meant to be for this purpose.
> In fact, there seem to be a couple flavors of copy-pasted build configs. SBT 
> builds look like:
> {code}
> #!/bin/bash
> set -e
> # Configure per-build-executor Ivy caches to avoid SBT Ivy lock contention
> export HOME="/home/sparkivy/per-executor-caches/$EXECUTOR_NUMBER"
> mkdir -p "$HOME"
> export SBT_OPTS="-Duser.home=$HOME -Dsbt.ivy.home=$HOME/.ivy2"
> export SPARK_VERSIONS_SUITE_IVY_PATH="$HOME/.ivy2"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> git clean -fdx
> ./dev/run-tests
> {code}
> Maven builds look like:
> {code}
> #!/bin/bash
> set -x
> set -e
> rm -rf ./work
> git clean -fdx
> # Generate random port for Zinc
> export ZINC_PORT
> ZINC_PORT=$(python -S -c "import random; print random.randrange(3030,4030)")
> # Use per-build-executor Ivy caches to avoid SBT Ivy lock contention:
> export 
> SPARK_VERSIONS_SUITE_IVY_PATH="/home/sparkivy/per-executor-caches/$EXECUTOR_NUMBER/.ivy2"
> mkdir -p "$SPARK_VERSIONS_SUITE_IVY_PATH"
> # Prepend JAVA_HOME/bin to fix issue where Zinc's embedded SBT incremental 
> compiler seems to
> # ignore our JAVA_HOME and use the system javac instead.
> export PATH="$JAVA_HOME/bin:$PATH"
> # Add a pre-downloaded version of Maven to the path so that we avoid the 
> flaky download step.
> export 
> PATH="/home/jenkins/tools/hudson.tasks.Maven_MavenInstallation/Maven_3.3.9/bin/:$PATH"
> MVN="build/mvn -DzincPort=$ZINC_PORT"
> set +e
> if [[ $HADOOP_PROFILE == hadoop-1 ]]; then
> # Note that there is no -Pyarn flag here for Hadoop 1:
> $MVN \
> -DskipTests \
> -P"$HADOOP_PROFILE" \
> -Dhadoop.version="$HADOOP_VERSION" \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> clean package
> retcode1=$?
> $MVN \
> -P"$HADOOP_PROFILE" \
> -Dhadoop.version="$HADOOP_VERSION" \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> --fail-at-end \
> test
> retcode2=$?
> else
> $MVN \
> -DskipTests \
> -P"$HADOOP_PROFILE" \
> -Pyarn \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> clean package
> retcode1=$?
> $MVN \
> -P"$HADOOP_PROFILE" \
> -Pyarn \
> -Phive \
> -Phive-thriftserver \
> -Pkinesis-asl \
> -Pmesos \
> --fail-at-end \
> test
> retcode2=$?
> fi
> if [[ $retcode1 -ne 0 || $retcode2 -ne 0 ]]; then
>   if [[ $retcode1 -ne 0 ]]; then
> echo "Packaging Spark with Maven failed"
>   

[jira] [Resolved] (SPARK-28701) add java11 support for spark pull request builds

2019-08-26 Thread shane knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-28701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp resolved SPARK-28701.
-
Resolution: Fixed

Issue resolved by pull request 25585
[https://github.com/apache/spark/pull/25585]

> add java11 support for spark pull request builds
> 
>
> Key: SPARK-28701
> URL: https://issues.apache.org/jira/browse/SPARK-28701
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: shane knapp
>Assignee: shane knapp
>Priority: Major
> Fix For: 3.0.0
>
>
> from https://github.com/apache/spark/pull/25405
> add a PRB subject check for [test-java11] and update JAVA_HOME env var to 
> point to /usr/java/jdk-11.0.1



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Reopened] (SPARK-28701) add java11 support for spark pull request builds

2019-08-26 Thread shane knapp (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-28701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp reopened SPARK-28701:
-

argh, i broke run-tests.py:

https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-3.2/321/console

> add java11 support for spark pull request builds
> 
>
> Key: SPARK-28701
> URL: https://issues.apache.org/jira/browse/SPARK-28701
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: shane knapp
>Assignee: shane knapp
>Priority: Major
> Fix For: 3.0.0
>
>
> from https://github.com/apache/spark/pull/25405
> add a PRB subject check for [test-java11] and update JAVA_HOME env var to 
> point to /usr/java/jdk-11.0.1



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28701) add java11 support for spark pull request builds

2019-08-26 Thread shane knapp (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-28701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915948#comment-16915948
 ] 

shane knapp commented on SPARK-28701:
-

i'm currently trying to fix the hadoop-2.7/jdk-11 build...  it's executing 
tests now:
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing/1458/

> add java11 support for spark pull request builds
> 
>
> Key: SPARK-28701
> URL: https://issues.apache.org/jira/browse/SPARK-28701
> Project: Spark
>  Issue Type: Sub-task
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: shane knapp
>Assignee: shane knapp
>Priority: Major
> Fix For: 3.0.0
>
>
> from https://github.com/apache/spark/pull/25405
> add a PRB subject check for [test-java11] and update JAVA_HOME env var to 
> point to /usr/java/jdk-11.0.1



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28701) add java11 support for spark pull request builds

2019-08-14 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16907394#comment-16907394
 ] 

shane knapp commented on SPARK-28701:
-

[~dongjoon] whoops!  i just fixed that build...

also, i'm hoping to get the [test-java11] flag working fully and merged in the 
next day or so...


> add java11 support for spark pull request builds
> 
>
> Key: SPARK-28701
> URL: https://issues.apache.org/jira/browse/SPARK-28701
> Project: Spark
>  Issue Type: Improvement
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: shane knapp
>Assignee: shane knapp
>Priority: Major
>
> from https://github.com/apache/spark/pull/25405
> add a PRB subject check for [test-java11] and update JAVA_HOME env var to 
> point to /usr/java/jdk-11.0.1



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28701) add java11 support for spark pull request builds

2019-08-13 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906618#comment-16906618
 ] 

shane knapp commented on SPARK-28701:
-

excellent.  this is useful...  thanks!

> add java11 support for spark pull request builds
> 
>
> Key: SPARK-28701
> URL: https://issues.apache.org/jira/browse/SPARK-28701
> Project: Spark
>  Issue Type: Improvement
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: shane knapp
>Assignee: shane knapp
>Priority: Major
>
> from https://github.com/apache/spark/pull/25405
> add a PRB subject check for [test-java11] and update JAVA_HOME env var to 
> point to /usr/java/jdk-11.0.1



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28701) add java11 support for spark pull request builds

2019-08-13 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16906609#comment-16906609
 ] 

shane knapp commented on SPARK-28701:
-

[~srowen] [~hyukjin.kwon] [~dongjoon]

what is the roadmap for java version support?  i assume that for the "time 
being" the default will be 8, but eventually (on what timeframe?) 11?

this could change how i implement the changes to the build/test configs to 
properly launch things.
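
for reference, the PRB subject check described in the issue could boil down to 
something like this rough sketch (ghprbPullTitle is the PR-title variable the 
github pull request builder plugin usually exports, assumed here; the jdk 8 
fallback path is hypothetical):

{code}
# pick JAVA_HOME based on a [test-java11] tag in the pull request title
if [[ "${ghprbPullTitle:-}" == *"[test-java11]"* ]]; then
  export JAVA_HOME=/usr/java/jdk-11.0.1
else
  export JAVA_HOME=/usr/java/latest   # hypothetical default jdk 8 install
fi
export PATH="$JAVA_HOME/bin:$PATH"
{code}
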

> add java11 support for spark pull request builds
> 
>
> Key: SPARK-28701
> URL: https://issues.apache.org/jira/browse/SPARK-28701
> Project: Spark
>  Issue Type: Improvement
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: shane knapp
>Assignee: shane knapp
>Priority: Major
>
> from https://github.com/apache/spark/pull/25405
> add a PRB subject check for [test-java11] and update JAVA_HOME env var to 
> point to /usr/java/jdk-11.0.1



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-28701) add java11 support for spark pull request builds

2019-08-12 Thread shane knapp (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-28701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp updated SPARK-28701:

Issue Type: Improvement  (was: Bug)

> add java11 support for spark pull request builds
> 
>
> Key: SPARK-28701
> URL: https://issues.apache.org/jira/browse/SPARK-28701
> Project: Spark
>  Issue Type: Improvement
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: shane knapp
>Assignee: shane knapp
>Priority: Major
>
> from https://github.com/apache/spark/pull/25405
> add a PRB subject check for [test-java11] and update JAVA_HOME env var to 
> point to /usr/java/jdk-11.0.1



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28701) add java11 support for spark pull request builds

2019-08-12 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16905543#comment-16905543
 ] 

shane knapp commented on SPARK-28701:
-

this also fails when running the k8s integration tests:
https://amplab.cs.berkeley.edu/jenkins//job/testing-k8s-prb-make-spark-distribution-unified/14059/

> add java11 support for spark pull request builds
> 
>
> Key: SPARK-28701
> URL: https://issues.apache.org/jira/browse/SPARK-28701
> Project: Spark
>  Issue Type: Bug
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: shane knapp
>Assignee: shane knapp
>Priority: Major
>
> from https://github.com/apache/spark/pull/25405
> add a PRB subject check for [test-java11] and update JAVA_HOME env var to 
> point to /usr/java/jdk-11.0.1



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28701) add java11 support for spark pull request builds

2019-08-12 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16905477#comment-16905477
 ] 

shane knapp commented on SPARK-28701:
-

builds are failing to generate the javadocs...

{noformat}
[error] (spark/javaunidoc:doc) javadoc returned nonzero exit code
[error] Total time: 122 s, completed Aug 12, 2019, 11:01:18 AM
[error] running /home/jenkins/workspace/SparkPullRequestBuilder/build/sbt 
-Phadoop-2.7 -Pkubernetes -Phive-thriftserver -Phadoop-cloud -Pkinesis-asl 
-Pyarn -Pspark-ganglia-lgpl -Phive -Pmesos unidoc ; received return code 1
{noformat}

see:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/108989/console

i'm a little out of my depth here, could use some help.  :)
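
a rough sketch of reproducing this locally under jdk 11 (same profiles and 
unidoc target as the failing run above; the JAVA_HOME path is the one the 
jenkins builds point at):

{code}
export JAVA_HOME=/usr/java/jdk-11.0.1
export PATH="$JAVA_HOME/bin:$PATH"
./build/sbt -Phadoop-2.7 -Pkubernetes -Phive-thriftserver -Phadoop-cloud \
  -Pkinesis-asl -Pyarn -Pspark-ganglia-lgpl -Phive -Pmesos unidoc
{code}
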

> add java11 support for spark pull request builds
> 
>
> Key: SPARK-28701
> URL: https://issues.apache.org/jira/browse/SPARK-28701
> Project: Spark
>  Issue Type: Bug
>  Components: Build, jenkins
>Affects Versions: 3.0.0
>Reporter: shane knapp
>Assignee: shane knapp
>Priority: Major
>
> from https://github.com/apache/spark/pull/25405
> add a PRB subject check for [test-java11] and update JAVA_HOME env var to 
> point to /usr/java/jdk-11.0.1



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Created] (SPARK-28701) add java11 support for spark pull request builds

2019-08-12 Thread shane knapp (JIRA)
shane knapp created SPARK-28701:
---

 Summary: add java11 support for spark pull request builds
 Key: SPARK-28701
 URL: https://issues.apache.org/jira/browse/SPARK-28701
 Project: Spark
  Issue Type: Bug
  Components: Build, jenkins
Affects Versions: 3.0.0
Reporter: shane knapp
Assignee: shane knapp


from https://github.com/apache/spark/pull/25405

add a PRB subject check for [test-java11] and update JAVA_HOME env var to point 
to /usr/java/jdk-11.0.1



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Closed] (SPARK-28509) K8S integration tests are failing

2019-07-25 Thread shane knapp (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-28509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp closed SPARK-28509.
---

> K8S integration tests are failing
> -
>
> Key: SPARK-28509
> URL: https://issues.apache.org/jira/browse/SPARK-28509
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Vanzin
>Assignee: shane knapp
>Priority: Major
>
> I've been seeing lots of failures in master. e.g. 
> https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/13180/console
> {noformat}
> - Start pod creation from template *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found
>   at 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:201)
>   at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
>   at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
>   at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
>   at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   ...
> - PVs with local storage *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: 
> POST at: https://192.168.39.112:8443/api/v1/persistentvolumes. Message: 
> PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: Local 
> volumes are disabled by feature-gate, metadata.annotations: Required value: 
> Local volume requires node affinity]. Received status: Status(apiVersion=v1, 
> code=422, details=StatusDetails(causes=[StatusCause(field=spec.local, 
> message=Forbidden: Local volumes are disabled by feature-gate, 
> reason=FieldValueForbidden, additionalProperties={}), 
> StatusCause(field=metadata.annotations, message=Required value: Local volume 
> requires node affinity, reason=FieldValueRequired, additionalProperties={})], 
> group=null, kind=PersistentVolume, name=test-local-pv, 
> retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, 
> message=PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: 
> Local volumes are disabled by feature-gate, metadata.annotations: Required 
> value: Local volume requires node affinity], 
> metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, 
> additionalProperties={}), reason=Invalid, status=Failure, 
> additionalProperties={}).
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:478)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:417)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:787)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:357)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.setupLocalStorage(PVTestsSuite.scala:87)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.$anonfun$$init$$1(PVTestsSuite.scala:137)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ...
> - Launcher client dependencies *** FAILED ***
>   The code passed to eventually never returned normally. Attempted 1 times 
> over 6.67390320003 minutes. Last failure message: assertion failed: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28509) K8S integration tests are failing

2019-07-25 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16893031#comment-16893031
 ] 

shane knapp commented on SPARK-28509:
-

checked that worker today and all k8s builds are running successfully:
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/13265/console

> K8S integration tests are failing
> -
>
> Key: SPARK-28509
> URL: https://issues.apache.org/jira/browse/SPARK-28509
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Vanzin
>Assignee: shane knapp
>Priority: Major
>
> I've been seeing lots of failures in master. e.g. 
> https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/13180/console
> {noformat}
> - Start pod creation from template *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found
>   at 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:201)
>   at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
>   at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
>   at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
>   at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   ...
> - PVs with local storage *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: 
> POST at: https://192.168.39.112:8443/api/v1/persistentvolumes. Message: 
> PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: Local 
> volumes are disabled by feature-gate, metadata.annotations: Required value: 
> Local volume requires node affinity]. Received status: Status(apiVersion=v1, 
> code=422, details=StatusDetails(causes=[StatusCause(field=spec.local, 
> message=Forbidden: Local volumes are disabled by feature-gate, 
> reason=FieldValueForbidden, additionalProperties={}), 
> StatusCause(field=metadata.annotations, message=Required value: Local volume 
> requires node affinity, reason=FieldValueRequired, additionalProperties={})], 
> group=null, kind=PersistentVolume, name=test-local-pv, 
> retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, 
> message=PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: 
> Local volumes are disabled by feature-gate, metadata.annotations: Required 
> value: Local volume requires node affinity], 
> metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, 
> additionalProperties={}), reason=Invalid, status=Failure, 
> additionalProperties={}).
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:478)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:417)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:787)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:357)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.setupLocalStorage(PVTestsSuite.scala:87)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.$anonfun$$init$$1(PVTestsSuite.scala:137)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ...
> - Launcher client dependencies *** FAILED ***
>   The code passed to eventually never returned normally. Attempted 1 times 
> over 6.67390320003 minutes. Last failure message: assertion failed: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28509) K8S integration tests are failing

2019-07-24 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892219#comment-16892219
 ] 

shane knapp commented on SPARK-28509:
-

ok, all fixed and builds are passing on this worker!

https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/13238/

> K8S integration tests are failing
> -
>
> Key: SPARK-28509
> URL: https://issues.apache.org/jira/browse/SPARK-28509
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Vanzin
>Assignee: shane knapp
>Priority: Major
>
> I've been seeing lots of failures in master. e.g. 
> https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/13180/console
> {noformat}
> - Start pod creation from template *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found
>   at 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:201)
>   at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
>   at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
>   at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
>   at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   ...
> - PVs with local storage *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: 
> POST at: https://192.168.39.112:8443/api/v1/persistentvolumes. Message: 
> PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: Local 
> volumes are disabled by feature-gate, metadata.annotations: Required value: 
> Local volume requires node affinity]. Received status: Status(apiVersion=v1, 
> code=422, details=StatusDetails(causes=[StatusCause(field=spec.local, 
> message=Forbidden: Local volumes are disabled by feature-gate, 
> reason=FieldValueForbidden, additionalProperties={}), 
> StatusCause(field=metadata.annotations, message=Required value: Local volume 
> requires node affinity, reason=FieldValueRequired, additionalProperties={})], 
> group=null, kind=PersistentVolume, name=test-local-pv, 
> retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, 
> message=PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: 
> Local volumes are disabled by feature-gate, metadata.annotations: Required 
> value: Local volume requires node affinity], 
> metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, 
> additionalProperties={}), reason=Invalid, status=Failure, 
> additionalProperties={}).
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:478)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:417)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:787)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:357)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.setupLocalStorage(PVTestsSuite.scala:87)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.$anonfun$$init$$1(PVTestsSuite.scala:137)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ...
> - Launcher client dependencies *** FAILED ***
>   The code passed to eventually never returned normally. Attempted 1 times 
> over 6.67390320003 minutes. Last failure message: assertion failed: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-28509) K8S integration tests are failing

2019-07-24 Thread shane knapp (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-28509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp resolved SPARK-28509.
-
Resolution: Fixed

> K8S integration tests are failing
> -
>
> Key: SPARK-28509
> URL: https://issues.apache.org/jira/browse/SPARK-28509
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Vanzin
>Assignee: shane knapp
>Priority: Major
>
> I've been seeing lots of failures in master. e.g. 
> https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/13180/console
> {noformat}
> - Start pod creation from template *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found
>   at 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:201)
>   at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
>   at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
>   at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
>   at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   ...
> - PVs with local storage *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: 
> POST at: https://192.168.39.112:8443/api/v1/persistentvolumes. Message: 
> PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: Local 
> volumes are disabled by feature-gate, metadata.annotations: Required value: 
> Local volume requires node affinity]. Received status: Status(apiVersion=v1, 
> code=422, details=StatusDetails(causes=[StatusCause(field=spec.local, 
> message=Forbidden: Local volumes are disabled by feature-gate, 
> reason=FieldValueForbidden, additionalProperties={}), 
> StatusCause(field=metadata.annotations, message=Required value: Local volume 
> requires node affinity, reason=FieldValueRequired, additionalProperties={})], 
> group=null, kind=PersistentVolume, name=test-local-pv, 
> retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, 
> message=PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: 
> Local volumes are disabled by feature-gate, metadata.annotations: Required 
> value: Local volume requires node affinity], 
> metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, 
> additionalProperties={}), reason=Invalid, status=Failure, 
> additionalProperties={}).
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:478)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:417)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:787)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:357)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.setupLocalStorage(PVTestsSuite.scala:87)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.$anonfun$$init$$1(PVTestsSuite.scala:137)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ...
> - Launcher client dependencies *** FAILED ***
>   The code passed to eventually never returned normally. Attempted 1 times 
> over 6.67390320003 minutes. Last failure message: assertion failed: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28509) K8S integration tests are failing

2019-07-24 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892165#comment-16892165
 ] 

shane knapp commented on SPARK-28509:
-

the entire k8s config on worker-09 is completely borked.  working on getting 
that fixed now.

> K8S integration tests are failing
> -
>
> Key: SPARK-28509
> URL: https://issues.apache.org/jira/browse/SPARK-28509
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Vanzin
>Assignee: shane knapp
>Priority: Major
>
> I've been seeing lots of failures in master. e.g. 
> https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/13180/console
> {noformat}
> - Start pod creation from template *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found
>   at 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:201)
>   at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
>   at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
>   at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
>   at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   ...
> - PVs with local storage *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: 
> POST at: https://192.168.39.112:8443/api/v1/persistentvolumes. Message: 
> PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: Local 
> volumes are disabled by feature-gate, metadata.annotations: Required value: 
> Local volume requires node affinity]. Received status: Status(apiVersion=v1, 
> code=422, details=StatusDetails(causes=[StatusCause(field=spec.local, 
> message=Forbidden: Local volumes are disabled by feature-gate, 
> reason=FieldValueForbidden, additionalProperties={}), 
> StatusCause(field=metadata.annotations, message=Required value: Local volume 
> requires node affinity, reason=FieldValueRequired, additionalProperties={})], 
> group=null, kind=PersistentVolume, name=test-local-pv, 
> retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, 
> message=PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: 
> Local volumes are disabled by feature-gate, metadata.annotations: Required 
> value: Local volume requires node affinity], 
> metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, 
> additionalProperties={}), reason=Invalid, status=Failure, 
> additionalProperties={}).
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:478)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:417)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:787)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:357)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.setupLocalStorage(PVTestsSuite.scala:87)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.$anonfun$$init$$1(PVTestsSuite.scala:137)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ...
> - Launcher client dependencies *** FAILED ***
>   The code passed to eventually never returned normally. Attempted 1 times 
> over 6.67390320003 minutes. Last failure message: assertion failed: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28509) K8S integration tests are failing

2019-07-24 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892142#comment-16892142
 ] 

shane knapp commented on SPARK-28509:
-

ah, got it!

it's research-jenkins-worker-09!

{noformat}
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
{noformat}

we need k8s to be 1.13.3...  this is now fixed.
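
for the record, pinning the worker's minikube cluster to the expected k8s 
version is roughly the following (a sketch; the version is the one mentioned 
above):

{code}
# recreate the local cluster on the k8s version the integration tests expect
minikube delete || true
minikube start --kubernetes-version=v1.13.3
kubectl version --short    # server should now report v1.13.3
{code}
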

> K8S integration tests are failing
> -
>
> Key: SPARK-28509
> URL: https://issues.apache.org/jira/browse/SPARK-28509
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Vanzin
>Priority: Major
>
> I've been seeing lots of failures in master. e.g. 
> https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/13180/console
> {noformat}
> - Start pod creation from template *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found
>   at 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:201)
>   at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
>   at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
>   at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
>   at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   ...
> - PVs with local storage *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: 
> POST at: https://192.168.39.112:8443/api/v1/persistentvolumes. Message: 
> PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: Local 
> volumes are disabled by feature-gate, metadata.annotations: Required value: 
> Local volume requires node affinity]. Received status: Status(apiVersion=v1, 
> code=422, details=StatusDetails(causes=[StatusCause(field=spec.local, 
> message=Forbidden: Local volumes are disabled by feature-gate, 
> reason=FieldValueForbidden, additionalProperties={}), 
> StatusCause(field=metadata.annotations, message=Required value: Local volume 
> requires node affinity, reason=FieldValueRequired, additionalProperties={})], 
> group=null, kind=PersistentVolume, name=test-local-pv, 
> retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, 
> message=PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: 
> Local volumes are disabled by feature-gate, metadata.annotations: Required 
> value: Local volume requires node affinity], 
> metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, 
> additionalProperties={}), reason=Invalid, status=Failure, 
> additionalProperties={}).
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:478)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:417)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:787)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:357)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.setupLocalStorage(PVTestsSuite.scala:87)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.$anonfun$$init$$1(PVTestsSuite.scala:137)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ...
> - Launcher client dependencies *** FAILED ***
>   The code passed to eventually never returned normally. Attempted 1 times 
> over 6.67390320003 minutes. Last failure message: assertion failed: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-28509) K8S integration tests are failing

2019-07-24 Thread shane knapp (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-28509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp reassigned SPARK-28509:
---

Assignee: shane knapp

> K8S integration tests are failing
> -
>
> Key: SPARK-28509
> URL: https://issues.apache.org/jira/browse/SPARK-28509
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Vanzin
>Assignee: shane knapp
>Priority: Major
>
> I've been seeing lots of failures in master. e.g. 
> https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/13180/console
> {noformat}
> - Start pod creation from template *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found
>   at 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:201)
>   at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
>   at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
>   at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
>   at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   ...
> - PVs with local storage *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: 
> POST at: https://192.168.39.112:8443/api/v1/persistentvolumes. Message: 
> PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: Local 
> volumes are disabled by feature-gate, metadata.annotations: Required value: 
> Local volume requires node affinity]. Received status: Status(apiVersion=v1, 
> code=422, details=StatusDetails(causes=[StatusCause(field=spec.local, 
> message=Forbidden: Local volumes are disabled by feature-gate, 
> reason=FieldValueForbidden, additionalProperties={}), 
> StatusCause(field=metadata.annotations, message=Required value: Local volume 
> requires node affinity, reason=FieldValueRequired, additionalProperties={})], 
> group=null, kind=PersistentVolume, name=test-local-pv, 
> retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, 
> message=PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: 
> Local volumes are disabled by feature-gate, metadata.annotations: Required 
> value: Local volume requires node affinity], 
> metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, 
> additionalProperties={}), reason=Invalid, status=Failure, 
> additionalProperties={}).
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:478)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:417)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:787)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:357)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.setupLocalStorage(PVTestsSuite.scala:87)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.$anonfun$$init$$1(PVTestsSuite.scala:137)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ...
> - Launcher client dependencies *** FAILED ***
>   The code passed to eventually never returned normally. Attempted 1 times 
> over 6.67390320003 minutes. Last failure message: assertion failed: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28509) K8S integration tests are failing

2019-07-24 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16892137#comment-16892137
 ] 

shane knapp commented on SPARK-28509:
-

as a precautionary step, on all ubuntu workers, i did the following:

1) minikube stop && minikube delete
2) rm -rf .minikube .kube
3) reboot the workers once all jobs are done.
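
something like this, per worker (sketch only -- the worker list here is 
illustrative, and the reboot in step 3 happens separately once jenkins has 
drained the node):

{noformat}
#!/usr/bin/env bash
# sketch: wipe minikube state on the ubuntu workers (steps 1 and 2 above)
# hostnames are placeholders for the real worker list
set -uo pipefail

for host in research-jenkins-worker-09 amp-jenkins-staging-worker-02; do
  ssh "$host" 'minikube stop; minikube delete' || true
  ssh "$host" 'rm -rf ~/.minikube ~/.kube'
done
{noformat}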

things are now passing...  i'm hoping this clears it up.  the minikube/k8s 
versions haven't changed, but i did find a couple of dead pods that needed 
cleaning up on amp-jenkins-staging-worker-02.  the dead pods were in a 
completely different namespace, so that shouldn't impact the tests.

i will keep a close eye on this and see if i can track the failures down to one 
specific worker...  that doesn't seem to be the case tho.  :\

> K8S integration tests are failing
> -
>
> Key: SPARK-28509
> URL: https://issues.apache.org/jira/browse/SPARK-28509
> Project: Spark
>  Issue Type: Bug
>  Components: Kubernetes, Tests
>Affects Versions: 3.0.0
>Reporter: Marcelo Vanzin
>Priority: Major
>
> I've been seeing lots of failures in master. e.g. 
> https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/13180/console
> {noformat}
> - Start pod creation from template *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: 404 page not found
>   at 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$2.onFailure(WatchConnectionManager.java:201)
>   at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:571)
>   at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:198)
>   at okhttp3.RealCall$AsyncCall.execute(RealCall.java:206)
>   at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   ...
> - PVs with local storage *** FAILED ***
>   io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: 
> POST at: https://192.168.39.112:8443/api/v1/persistentvolumes. Message: 
> PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: Local 
> volumes are disabled by feature-gate, metadata.annotations: Required value: 
> Local volume requires node affinity]. Received status: Status(apiVersion=v1, 
> code=422, details=StatusDetails(causes=[StatusCause(field=spec.local, 
> message=Forbidden: Local volumes are disabled by feature-gate, 
> reason=FieldValueForbidden, additionalProperties={}), 
> StatusCause(field=metadata.annotations, message=Required value: Local volume 
> requires node affinity, reason=FieldValueRequired, additionalProperties={})], 
> group=null, kind=PersistentVolume, name=test-local-pv, 
> retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, 
> message=PersistentVolume "test-local-pv" is invalid: [spec.local: Forbidden: 
> Local volumes are disabled by feature-gate, metadata.annotations: Required 
> value: Local volume requires node affinity], 
> metadata=ListMeta(_continue=null, resourceVersion=null, selfLink=null, 
> additionalProperties={}), reason=Invalid, status=Failure, 
> additionalProperties={}).
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:478)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:417)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:381)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344)
>   at 
> io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:227)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:787)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.create(BaseOperation.java:357)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.setupLocalStorage(PVTestsSuite.scala:87)
>   at 
> org.apache.spark.deploy.k8s.integrationtest.PVTestsSuite.$anonfun$$init$$1(PVTestsSuite.scala:137)
>   at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>   ...
> - Launcher client dependencies *** FAILED ***
>   The code passed to eventually never returned normally. Attempted 1 times 
> over 6.67390320003 minutes. Last failure message: assertion failed: 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-28457) curl: (60) SSL certificate problem: unable to get local issuer certificate More details here:

2019-07-22 Thread shane knapp (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-28457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp reassigned SPARK-28457:
---

Assignee: shane knapp

> curl: (60) SSL certificate problem: unable to get local issuer certificate 
> More details here: 
> --
>
> Key: SPARK-28457
> URL: https://issues.apache.org/jira/browse/SPARK-28457
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Xiao Li
>Assignee: shane knapp
>Priority: Blocker
>
>  
> Build broke since this afternoon.
> [spark-master-compile-maven-hadoop-2.7 #10224 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-2.7/10224/]
>  [spark-master-compile-maven-hadoop-3.2 #171 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-3.2/171/]
>  [spark-master-lint #10599 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-lint/10599/]
>   
> {code:java}
>   
>  
> https://www.apache.org/dyn/closer.lua?action=download=/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz
>  curl: (60) SSL certificate problem: unable to get local issuer certificate
>  More details here: 
>  https://curl.haxx.se/docs/sslcerts.html
>  curl performs SSL certificate verification by default, using a "bundle"
>  of Certificate Authority (CA) public keys (CA certs). If the default
>  bundle file isn't adequate, you can specify an alternate file
>  using the --cacert option.
>  If this HTTPS server uses a certificate signed by a CA represented in
>  the bundle, the certificate verification probably failed due to a
>  problem with the certificate (it might be expired, or the name might
>  not match the domain name in the URL).
>  If you'd like to turn off curl's verification of the certificate, use
>  the -k (or --insecure) option.
> gzip: stdin: unexpected end of file
>  tar: Child returned status 1
>  tar: Error is not recoverable: exiting now
>  Using `mvn` from path: 
> /home/jenkins/workspace/spark-master-compile-maven-hadoop-2.7/build/apache-maven-3.6.1/bin/mvn
>  build/mvn: line 163: 
> /home/jenkins/workspace/spark-master-compile-maven-hadoop-2.7/build/apache-maven-3.6.1/bin/mvn:
>  No such file or directory
>  Build step 'Execute shell' marked build as failure
>  Finished: FAILURE
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Resolved] (SPARK-28457) curl: (60) SSL certificate problem: unable to get local issuer certificate More details here:

2019-07-22 Thread shane knapp (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-28457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp resolved SPARK-28457.
-
Resolution: Fixed

> curl: (60) SSL certificate problem: unable to get local issuer certificate 
> More details here: 
> --
>
> Key: SPARK-28457
> URL: https://issues.apache.org/jira/browse/SPARK-28457
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Xiao Li
>Assignee: shane knapp
>Priority: Blocker
>
>  
> Build broke since this afternoon.
> [spark-master-compile-maven-hadoop-2.7 #10224 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-2.7/10224/]
>  [spark-master-compile-maven-hadoop-3.2 #171 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-3.2/171/]
>  [spark-master-lint #10599 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-lint/10599/]
>   
> {code:java}
>   
>  
> https://www.apache.org/dyn/closer.lua?action=download=/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz
>  curl: (60) SSL certificate problem: unable to get local issuer certificate
>  More details here: 
>  https://curl.haxx.se/docs/sslcerts.html
>  curl performs SSL certificate verification by default, using a "bundle"
>  of Certificate Authority (CA) public keys (CA certs). If the default
>  bundle file isn't adequate, you can specify an alternate file
>  using the --cacert option.
>  If this HTTPS server uses a certificate signed by a CA represented in
>  the bundle, the certificate verification probably failed due to a
>  problem with the certificate (it might be expired, or the name might
>  not match the domain name in the URL).
>  If you'd like to turn off curl's verification of the certificate, use
>  the -k (or --insecure) option.
> gzip: stdin: unexpected end of file
>  tar: Child returned status 1
>  tar: Error is not recoverable: exiting now
>  Using `mvn` from path: 
> /home/jenkins/workspace/spark-master-compile-maven-hadoop-2.7/build/apache-maven-3.6.1/bin/mvn
>  build/mvn: line 163: 
> /home/jenkins/workspace/spark-master-compile-maven-hadoop-2.7/build/apache-maven-3.6.1/bin/mvn:
>  No such file or directory
>  Build step 'Execute shell' marked build as failure
>  Finished: FAILURE
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28457) curl: (60) SSL certificate problem: unable to get local issuer certificate More details here:

2019-07-22 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890279#comment-16890279
 ] 

shane knapp commented on SPARK-28457:
-

ok, the error i'm seeing in the lint job is most definitely not related to the 
SSL certs:

[https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-lint/10613/console]
{noformat}
starting python compilation test...
python compilation succeeded.

downloading pycodestyle from 
https://raw.githubusercontent.com/PyCQA/pycodestyle/2.4.0/pycodestyle.py...
starting pycodestyle test...
pycodestyle checks failed:
  File "/home/jenkins/workspace/spark-master-lint/dev/pycodestyle-2.4.0.py", 
line 1
500: Internal Server Error
   ^
SyntaxError: invalid syntax{noformat}

i went to PyCQA's repo on github and i'm seeing a LOT of 500 errors.  this is 
out of scope for this ticket, and not an issue localized to our jenkins, so i 
will notify dev@ and mark this as resolved.
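
fwiw, a small guard in the download step would make this fail loudly instead of 
writing the 500 error body into the .py file -- a sketch only, not the actual 
dev/ lint script (url and path are the ones from the log above):

{noformat}
# -f makes curl exit non-zero on HTTP errors instead of saving the error page;
# the ast.parse call catches anything else that isn't valid python
PYCODESTYLE_URL="https://raw.githubusercontent.com/PyCQA/pycodestyle/2.4.0/pycodestyle.py"
PYCODESTYLE_PY="dev/pycodestyle-2.4.0.py"

if ! curl -fsSL --retry 3 -o "$PYCODESTYLE_PY" "$PYCODESTYLE_URL"; then
  echo "pycodestyle download failed; skipping pycodestyle checks" >&2
  exit 1
fi

python -c "import ast, sys; ast.parse(open(sys.argv[1]).read())" "$PYCODESTYLE_PY"
{noformat}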

> curl: (60) SSL certificate problem: unable to get local issuer certificate 
> More details here: 
> --
>
> Key: SPARK-28457
> URL: https://issues.apache.org/jira/browse/SPARK-28457
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Xiao Li
>Priority: Blocker
>
>  
> Build broke since this afternoon.
> [spark-master-compile-maven-hadoop-2.7 #10224 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-2.7/10224/]
>  [spark-master-compile-maven-hadoop-3.2 #171 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-3.2/171/]
>  [spark-master-lint #10599 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-lint/10599/]
>   
> {code:java}
>   
>  
> https://www.apache.org/dyn/closer.lua?action=download=/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz
>  curl: (60) SSL certificate problem: unable to get local issuer certificate
>  More details here: 
>  https://curl.haxx.se/docs/sslcerts.html
>  curl performs SSL certificate verification by default, using a "bundle"
>  of Certificate Authority (CA) public keys (CA certs). If the default
>  bundle file isn't adequate, you can specify an alternate file
>  using the --cacert option.
>  If this HTTPS server uses a certificate signed by a CA represented in
>  the bundle, the certificate verification probably failed due to a
>  problem with the certificate (it might be expired, or the name might
>  not match the domain name in the URL).
>  If you'd like to turn off curl's verification of the certificate, use
>  the -k (or --insecure) option.
> gzip: stdin: unexpected end of file
>  tar: Child returned status 1
>  tar: Error is not recoverable: exiting now
>  Using `mvn` from path: 
> /home/jenkins/workspace/spark-master-compile-maven-hadoop-2.7/build/apache-maven-3.6.1/bin/mvn
>  build/mvn: line 163: 
> /home/jenkins/workspace/spark-master-compile-maven-hadoop-2.7/build/apache-maven-3.6.1/bin/mvn:
>  No such file or directory
>  Build step 'Execute shell' marked build as failure
>  Finished: FAILURE
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28457) curl: (60) SSL certificate problem: unable to get local issuer certificate More details here:

2019-07-22 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890275#comment-16890275
 ] 

shane knapp commented on SPARK-28457:
-

ok, curl was unhappy w/the old cacert.pem, so i updated to the latest from 
[https://curl.haxx.se/ca/cacert.pem] and things look to be better, tho the lint 
job is failing.

once i get that sorted i will mark this as resolved.
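
for reference, the refresh boils down to something like this (sketch; the 
install path is a placeholder, and --cacert / CURL_CA_BUNDLE are the standard 
ways to point curl at a specific bundle):

{noformat}
# grab the current mozilla CA bundle and drop it somewhere stable
curl -fsSL -o /tmp/cacert.pem https://curl.haxx.se/ca/cacert.pem
sudo install -m 0644 /tmp/cacert.pem /usr/local/share/curl-cacert.pem

# per invocation:
#   curl --cacert /usr/local/share/curl-cacert.pem -O "$SOME_URL"
# or for every curl call in a job's shell step:
export CURL_CA_BUNDLE=/usr/local/share/curl-cacert.pem
{noformat}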

> curl: (60) SSL certificate problem: unable to get local issuer certificate 
> More details here: 
> --
>
> Key: SPARK-28457
> URL: https://issues.apache.org/jira/browse/SPARK-28457
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Xiao Li
>Priority: Blocker
>
>  
> Build broke since this afternoon.
> [spark-master-compile-maven-hadoop-2.7 #10224 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-2.7/10224/]
>  [spark-master-compile-maven-hadoop-3.2 #171 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-3.2/171/]
>  [spark-master-lint #10599 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-lint/10599/]
>   
> {code:java}
>   
>  
> https://www.apache.org/dyn/closer.lua?action=download=/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz
>  curl: (60) SSL certificate problem: unable to get local issuer certificate
>  More details here: 
>  https://curl.haxx.se/docs/sslcerts.html
>  curl performs SSL certificate verification by default, using a "bundle"
>  of Certificate Authority (CA) public keys (CA certs). If the default
>  bundle file isn't adequate, you can specify an alternate file
>  using the --cacert option.
>  If this HTTPS server uses a certificate signed by a CA represented in
>  the bundle, the certificate verification probably failed due to a
>  problem with the certificate (it might be expired, or the name might
>  not match the domain name in the URL).
>  If you'd like to turn off curl's verification of the certificate, use
>  the -k (or --insecure) option.
> gzip: stdin: unexpected end of file
>  tar: Child returned status 1
>  tar: Error is not recoverable: exiting now
>  Using `mvn` from path: 
> /home/jenkins/workspace/spark-master-compile-maven-hadoop-2.7/build/apache-maven-3.6.1/bin/mvn
>  build/mvn: line 163: 
> /home/jenkins/workspace/spark-master-compile-maven-hadoop-2.7/build/apache-maven-3.6.1/bin/mvn:
>  No such file or directory
>  Build step 'Execute shell' marked build as failure
>  Finished: FAILURE
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-28457) curl: (60) SSL certificate problem: unable to get local issuer certificate More details here:

2019-07-22 Thread shane knapp (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-28457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16890255#comment-16890255
 ] 

shane knapp commented on SPARK-28457:
-

looking into it now.

> curl: (60) SSL certificate problem: unable to get local issuer certificate 
> More details here: 
> --
>
> Key: SPARK-28457
> URL: https://issues.apache.org/jira/browse/SPARK-28457
> Project: Spark
>  Issue Type: Bug
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Xiao Li
>Priority: Blocker
>
>  
> Build broke since this afternoon.
> [spark-master-compile-maven-hadoop-2.7 #10224 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-2.7/10224/]
>  [spark-master-compile-maven-hadoop-3.2 #171 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-compile-maven-hadoop-3.2/171/]
>  [spark-master-lint #10599 (broken since this 
> build)|https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Compile/job/spark-master-lint/10599/]
>   
> {code:java}
>   
>  
> https://www.apache.org/dyn/closer.lua?action=download=/maven/maven-3/3.6.1/binaries/apache-maven-3.6.1-bin.tar.gz
>  curl: (60) SSL certificate problem: unable to get local issuer certificate
>  More details here: 
>  https://curl.haxx.se/docs/sslcerts.html
>  curl performs SSL certificate verification by default, using a "bundle"
>  of Certificate Authority (CA) public keys (CA certs). If the default
>  bundle file isn't adequate, you can specify an alternate file
>  using the --cacert option.
>  If this HTTPS server uses a certificate signed by a CA represented in
>  the bundle, the certificate verification probably failed due to a
>  problem with the certificate (it might be expired, or the name might
>  not match the domain name in the URL).
>  If you'd like to turn off curl's verification of the certificate, use
>  the -k (or --insecure) option.
> gzip: stdin: unexpected end of file
>  tar: Child returned status 1
>  tar: Error is not recoverable: exiting now
>  Using `mvn` from path: 
> /home/jenkins/workspace/spark-master-compile-maven-hadoop-2.7/build/apache-maven-3.6.1/bin/mvn
>  build/mvn: line 163: 
> /home/jenkins/workspace/spark-master-compile-maven-hadoop-2.7/build/apache-maven-3.6.1/bin/mvn:
>  No such file or directory
>  Build step 'Execute shell' marked build as failure
>  Finished: FAILURE
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Closed] (SPARK-28114) Add Jenkins job for `Hadoop-3.2` profile

2019-06-28 Thread shane knapp (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-28114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shane knapp closed SPARK-28114.
---

> Add Jenkins job for `Hadoop-3.2` profile
> 
>
> Key: SPARK-28114
> URL: https://issues.apache.org/jira/browse/SPARK-28114
> Project: Spark
>  Issue Type: Improvement
>  Components: Project Infra
>Affects Versions: 3.0.0
>Reporter: Dongjoon Hyun
>Assignee: shane knapp
>Priority: Major
>
> Spark 3.0 is a major version change. We want to have the following new Jobs.
> 1. SBT with hadoop-3.2
> 2. Maven with hadoop-3.2 (on JDK8 and JDK11)
> Also, shall we have a limit for the concurrent run for the following existing 
> job? Currently, it invokes multiple jobs concurrently. We can save the 
> resource by limiting to 1 like the other jobs.
> - 
> https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-master-test-maven-hadoop-2.7-jdk-11-ubuntu-testing
> We will drop four `branch-2.3` jobs at the end of August, 2019.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org


