Re: Jenkins build errors

2018-06-29 Thread petar . zecevic


The problem was with changes upstream; a fetch of upstream and a rebase 
resolved it, and the build is now passing.
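For concreteness, the fix Petar describes corresponds to the usual upstream-sync commands. This is a hedged sketch: it assumes a git remote named `upstream` pointing at apache/spark, and the branch name is illustrative.

```python
import subprocess

def sync_commands(branch="master"):
    # The two commands "fetch upstream and a rebase" expand to.
    return [
        ["git", "fetch", "upstream"],
        ["git", "rebase", "upstream/" + branch],
    ]

def sync_with_upstream(branch="master"):
    # check=True stops at the first failing command (e.g. rebase conflicts).
    for cmd in sync_commands(branch):
        subprocess.run(cmd, check=True)
```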

I also added a design doc and made the JIRA description a bit clearer 
(https://issues.apache.org/jira/browse/SPARK-24020) so I hope it will get 
merged soon.

Thanks,
Petar


Sean Owen @ 2018-06-23:

> Also confused about this one, as many builds succeed. One possible difference 
> is that this failure is in the Hive tests, so are you building and testing 
> with -Phive locally, where it works? That still does not explain the download 
> failure, though. It could be a mirror problem, throttling, etc. But then 
> again, I haven't spotted another failing Hive test.
>
> On Wed, Jun 20, 2018 at 1:55 AM Petar Zecevic  wrote:
>
>  It's still dying. Back to this error (it used to be spark-2.2.0 before):
>
> java.io.IOException: Cannot run program "./bin/spark-submit" (in directory 
> "/tmp/test-spark/spark-2.1.2"): error=2, No such file or directory
>  So, a mirror is missing that Spark version... I don't understand why nobody 
> else has these errors and I get them every time without fail.
>
>  Petar


-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Re: Jenkins build errors

2018-06-23 Thread Sean Owen
Also confused about this one, as many builds succeed. One possible
difference is that this failure is in the Hive tests, so are you building
and testing with -Phive locally, where it works? That still does not explain
the download failure, though. It could be a mirror problem, throttling, etc.
But then again, I haven't spotted another failing Hive test.
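Sean's "-Phive" question refers to enabling the Hive profile when building and testing locally. A sketch of assembling such a Maven command line; the module path and extra flags are illustrative, not a definitive invocation (check the Spark build docs for the authoritative set):

```python
def mvn_command(profiles, goal="test", extra=()):
    # Build an argv for Spark's bundled Maven wrapper with -P<profile> flags.
    return ["./build/mvn"] + ["-P" + p for p in profiles] + list(extra) + [goal]

# e.g. run the Hive module's tests with the Hive profiles enabled:
cmd = mvn_command(["hive", "hive-thriftserver"], extra=["-pl", "sql/hive"])
```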

On Wed, Jun 20, 2018 at 1:55 AM Petar Zecevic 
wrote:

>
> It's still dying. Back to this error (it used to be spark-2.2.0 before):
>
> java.io.IOException: Cannot run program "./bin/spark-submit" (in directory 
> "/tmp/test-spark/spark-2.1.2"): error=2, No such file or directory
>
> So, a mirror is missing that Spark version... I don't understand why
> nobody else has these errors and I get them every time without fail.
>
>
> Petar
>
>


Re: Jenkins build errors

2018-06-20 Thread Petar Zecevic


It's still dying. Back to this error (it used to be spark-2.2.0 before):

java.io.IOException: Cannot run program "./bin/spark-submit" (in directory 
"/tmp/test-spark/spark-2.1.2"): error=2, No such file or directory

So, a mirror is missing that Spark version... I don't understand why 
nobody else has these errors and I get them every time without fail.


Petar


On 6/19/2018 at 2:35 PM, Sean Owen wrote:
Those still appear to be env problems. I don't know why it is so 
persistent. Does it all pass locally? Retrigger tests again and see 
what happens.


On Tue, Jun 19, 2018, 2:53 AM Petar Zecevic wrote:



Thanks, but unfortunately, it died again. Now at pyspark tests:



Running PySpark tests

Running PySpark tests. Output is in 
/home/jenkins/workspace/SparkPullRequestBuilder@2/python/unit-tests.log
Will test against the following Python executables: ['python2.7', 
'python3.4', 'pypy']
Will test the following Python modules: ['pyspark-core', 'pyspark-ml', 
'pyspark-mllib', 'pyspark-sql', 'pyspark-streaming']
Will skip PyArrow related features against Python executable 'python2.7' in 
'pyspark-sql' module. PyArrow >= 0.8.0 is required; however, PyArrow was not 
found.
Will skip Pandas related features against Python executable 'python2.7' in 
'pyspark-sql' module. Pandas >= 0.19.2 is required; however, Pandas 0.16.0 was 
found.
Will test PyArrow related features against Python executable 'python3.4' in 
'pyspark-sql' module.
Will test Pandas related features against Python executable 'python3.4' in 
'pyspark-sql' module.
Will skip PyArrow related features against Python executable 'pypy' in 
'pyspark-sql' module. PyArrow >= 0.8.0 is required; however, PyArrow was not 
found.
Will skip Pandas related features against Python executable 'pypy' in 
'pyspark-sql' module. Pandas >= 0.19.2 is required; however, Pandas was not 
found.
Starting test(python2.7): pyspark.mllib.tests
Starting test(pypy): pyspark.sql.tests
Starting test(pypy): pyspark.streaming.tests
Starting test(pypy): pyspark.tests
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
[... test-progress dots and repeated Spark stage progress bars elided ...]
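The "Will skip ... PyArrow >= 0.8.0 is required" lines above come from a minimum-version gate in the Python test runner. A minimal sketch of that kind of check; the names and structure are illustrative, not the runner's actual code:

```python
# Minimum versions reported in the log above.
MINIMUM = {"pyarrow": "0.8.0", "pandas": "0.19.2"}

def version_tuple(v):
    # "0.19.2" -> (0, 19, 2), so comparisons are numeric, not lexicographic.
    return tuple(int(part) for part in v.split("."))

def should_skip(package, installed_version):
    if installed_version is None:
        return True  # package not found at all
    return version_tuple(installed_version) < version_tuple(MINIMUM[package])
```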

Re: Jenkins build errors

2018-06-19 Thread Sean Owen
Those still appear to be env problems. I don't know why it is so
persistent. Does it all pass locally? Retrigger tests again and see what
happens.

On Tue, Jun 19, 2018, 2:53 AM Petar Zecevic  wrote:

>
> Thanks, but unfortunately, it died again. Now at pyspark tests:
>
>
> 
> Running PySpark tests
> 
> Running PySpark tests. Output is in 
> /home/jenkins/workspace/SparkPullRequestBuilder@2/python/unit-tests.log
> Will test against the following Python executables: ['python2.7', 
> 'python3.4', 'pypy']
> Will test the following Python modules: ['pyspark-core', 'pyspark-ml', 
> 'pyspark-mllib', 'pyspark-sql', 'pyspark-streaming']
> Will skip PyArrow related features against Python executable 'python2.7' in 
> 'pyspark-sql' module. PyArrow >= 0.8.0 is required; however, PyArrow was not 
> found.
> Will skip Pandas related features against Python executable 'python2.7' in 
> 'pyspark-sql' module. Pandas >= 0.19.2 is required; however, Pandas 0.16.0 
> was found.
> Will test PyArrow related features against Python executable 'python3.4' in 
> 'pyspark-sql' module.
> Will test Pandas related features against Python executable 'python3.4' in 
> 'pyspark-sql' module.
> Will skip PyArrow related features against Python executable 'pypy' in 
> 'pyspark-sql' module. PyArrow >= 0.8.0 is required; however, PyArrow was not 
> found.
> Will skip Pandas related features against Python executable 'pypy' in 
> 'pyspark-sql' module. Pandas >= 0.19.2 is required; however, Pandas was not 
> found.
> Starting test(python2.7): pyspark.mllib.tests
> Starting test(pypy): pyspark.sql.tests
> Starting test(pypy): pyspark.streaming.tests
> Starting test(pypy): pyspark.tests
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
> setLogLevel(newLevel).
> [... test-progress dots and repeated Spark stage progress bars elided ...]
> ..cc: no input files
> cc: no input files
> cc: no input files
> cc: no input files
> Exception in thread Thread-1:
> Traceback (most recent call last):
>   File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 806, in 
> __bootstrap_inner
> self.run()
>   File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 759, in run
> self.__target(*self.__args, **self.__kwargs)
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/rdd.py", 
> line 771, in pipe_objs
> out.write(s.encode('utf-8'))
> IOError: [Errno 32] Broken pipe: ''
>
> cc: no input files
> cc: no input files
> cc: no input files
> Exception in thread Thread-1:
> Traceback (most recent call last):
>   File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 806, in 
> __bootstrap_inner
> self.run()
>   File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 759, in run
> self.__target(*self.__args, **self.__kwargs)
>   File 
> "/home/jenkins/workspace/SparkPullRequestBuilder@2/python/pyspark/rdd.py", 
> line 771, in pipe_objs
> out.write(s.encode('utf-8'))
> IOError: [Errno 32] Broken pipe: ''
>
> Exception in 
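The repeated "IOError: [Errno 32] Broken pipe" in the quoted log means the process reading the pipe exited before pyspark's writer thread finished writing. A minimal reproduction of that failure mode, using `head` as a stand-in for the early-exiting reader (assumes a POSIX system with `head` on the PATH):

```python
import subprocess

# 'head -n 1' reads one line and exits; once it is gone, further writes to
# its stdin raise BrokenPipeError (errno 32), just like pipe_objs above.
proc = subprocess.Popen(["head", "-n", "1"], stdin=subprocess.PIPE,
                        stdout=subprocess.DEVNULL)
caught = None
try:
    for _ in range(100000):           # enough to outlast the pipe buffer
        proc.stdin.write(b"line\n")
        proc.stdin.flush()
except BrokenPipeError as e:          # on Python 2 this is IOError, errno 32
    caught = e
proc.wait()
```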

Re: Jenkins build errors

2018-06-19 Thread Petar Zecevic


Thanks, but unfortunately, it died again. Now at pyspark tests:



Running PySpark tests

Running PySpark tests. Output is in 
/home/jenkins/workspace/SparkPullRequestBuilder@2/python/unit-tests.log
Will test against the following Python executables: ['python2.7', 'python3.4', 
'pypy']
Will test the following Python modules: ['pyspark-core', 'pyspark-ml', 
'pyspark-mllib', 'pyspark-sql', 'pyspark-streaming']
Will skip PyArrow related features against Python executable 'python2.7' in 
'pyspark-sql' module. PyArrow >= 0.8.0 is required; however, PyArrow was not 
found.
Will skip Pandas related features against Python executable 'python2.7' in 
'pyspark-sql' module. Pandas >= 0.19.2 is required; however, Pandas 0.16.0 was 
found.
Will test PyArrow related features against Python executable 'python3.4' in 
'pyspark-sql' module.
Will test Pandas related features against Python executable 'python3.4' in 
'pyspark-sql' module.
Will skip PyArrow related features against Python executable 'pypy' in 
'pyspark-sql' module. PyArrow >= 0.8.0 is required; however, PyArrow was not 
found.
Will skip Pandas related features against Python executable 'pypy' in 
'pyspark-sql' module. Pandas >= 0.19.2 is required; however, Pandas was not 
found.
Starting test(python2.7): pyspark.mllib.tests
Starting test(pypy): pyspark.sql.tests
Starting test(pypy): pyspark.streaming.tests
Starting test(pypy): pyspark.tests
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
[... test-progress dots and repeated Spark stage progress bars elided ...]
..cc: no input files
cc: no input files
cc: no input files
cc: no input files
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib64/pypy-2.5.1/lib-python/2.7/threading.py", line 806, in 
__bootstrap_inner
self.run()
  File 

Re: Jenkins build errors

2018-06-18 Thread shane knapp
I triggered another build against your PR, so let's see if this happens
again or whether it was a transient failure.

https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92038/

shane

On Mon, Jun 18, 2018 at 5:30 AM, Petar Zecevic 
wrote:

> Hi,
> Jenkins build for my PR (https://github.com/apache/spark/pull/21109;
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92023/testReport/org.apache.spark.sql.hive/HiveExternalCatalogVersionsSuite/_It_is_not_a_test_it_is_a_sbt_testing_SuiteSelector_/)
> keeps failing. First it couldn't download Spark v2.2.0 (indeed, it wasn't
> available at the mirror it selected); now it's failing with the exception
> below.
>
> Can someone explain these errors for me? Is anybody else experiencing
> similar problems?
>
> Thanks,
> Petar
>
>
> Error Message
>
> java.io.IOException: Cannot run program "./bin/spark-submit" (in directory
> "/tmp/test-spark/spark-2.2.1"): error=2, No such file or directory
>
> Stacktrace
>
> sbt.ForkMain$ForkError: java.io.IOException: Cannot run program
> "./bin/spark-submit" (in directory "/tmp/test-spark/spark-2.2.1"):
> error=2, No such file or directory
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
> at org.apache.spark.sql.hive.SparkSubmitTestUtils$class.runSparkSubmit(SparkSubmitTestUtils.scala:73)
> at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite.runSparkSubmit(HiveExternalCatalogVersionsSuite.scala:43)
> at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite$$anonfun$beforeAll$1.apply(HiveExternalCatalogVersionsSuite.scala:176)
> at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite$$anonfun$beforeAll$1.apply(HiveExternalCatalogVersionsSuite.scala:161)
> at scala.collection.immutable.List.foreach(List.scala:381)
> at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite.beforeAll(HiveExternalCatalogVersionsSuite.scala:161)
> at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:212)
> at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
> at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:52)
> at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
> at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
> at sbt.ForkMain$Run$2.call(ForkMain.java:296)
> at sbt.ForkMain$Run$2.call(ForkMain.java:286)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: sbt.ForkMain$ForkError: java.io.IOException: error=2, No such
> file or directory
> at java.lang.UNIXProcess.forkAndExec(Native Method)
> at java.lang.UNIXProcess.<init>(UNIXProcess.java:248)
> at java.lang.ProcessImpl.start(ProcessImpl.java:134)
> at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
> ... 17 more
>
>


-- 
Shane Knapp
UC Berkeley EECS Research / RISELab Staff Technical Lead
https://rise.cs.berkeley.edu


Jenkins build errors

2018-06-18 Thread Petar Zecevic

Hi,
Jenkins build for my PR (https://github.com/apache/spark/pull/21109 ; 
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/92023/testReport/org.apache.spark.sql.hive/HiveExternalCatalogVersionsSuite/_It_is_not_a_test_it_is_a_sbt_testing_SuiteSelector_/) 
keeps failing. First it couldn't download Spark v.2.2.0 (indeed, it 
wasn't available at the mirror it selected), now it's failing with this 
exception below.
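On the mirror part of the failure: regular Apache mirrors prune old releases, while archive.apache.org keeps every one, which is why a randomly selected mirror can be missing Spark 2.2.0. A hedged sketch of a download loop with an archive fallback; the mirror URLs and the `fetch` callable are illustrative, not the suite's actual code:

```python
def download_spark(version, fetch, mirrors):
    path = "spark/spark-%s/spark-%s-bin-hadoop2.7.tgz" % (version, version)
    for base in mirrors:
        try:
            return fetch(base + path)
        except IOError:
            continue  # this mirror dropped the release; try the next one
    raise IOError("no mirror has Spark %s" % version)

MIRRORS = [
    "https://www-us.apache.org/dist/",   # a regular mirror (may prune old releases)
    "https://archive.apache.org/dist/",  # the archive keeps every release
]

def fake_fetch(url):
    # Stand-in for a real HTTP download: only the archive "has" the file.
    if "archive.apache.org" not in url:
        raise IOError("mirror no longer has this version")
    return url

result = download_spark("2.2.0", fake_fetch, MIRRORS)
```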


Can someone explain these errors for me? Is anybody else experiencing 
similar problems?


Thanks,
Petar


Error Message

java.io.IOException: Cannot run program "./bin/spark-submit" (in 
directory "/tmp/test-spark/spark-2.2.1"): error=2, No such file or 
directory


Stacktrace

sbt.ForkMain$ForkError: java.io.IOException: Cannot run program 
"./bin/spark-submit" (in directory "/tmp/test-spark/spark-2.2.1"): 
error=2, No such file or directory

    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
    at org.apache.spark.sql.hive.SparkSubmitTestUtils$class.runSparkSubmit(SparkSubmitTestUtils.scala:73)
    at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite.runSparkSubmit(HiveExternalCatalogVersionsSuite.scala:43)
    at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite$$anonfun$beforeAll$1.apply(HiveExternalCatalogVersionsSuite.scala:176)
    at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite$$anonfun$beforeAll$1.apply(HiveExternalCatalogVersionsSuite.scala:161)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at org.apache.spark.sql.hive.HiveExternalCatalogVersionsSuite.beforeAll(HiveExternalCatalogVersionsSuite.scala:161)
    at org.scalatest.BeforeAndAfterAll$class.liftedTree1$1(BeforeAndAfterAll.scala:212)
    at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:210)
    at org.apache.spark.SparkFunSuite.run(SparkFunSuite.scala:52)
    at org.scalatest.tools.Framework.org$scalatest$tools$Framework$$runSuite(Framework.scala:314)
    at org.scalatest.tools.Framework$ScalaTestTask.execute(Framework.scala:480)
    at sbt.ForkMain$Run$2.call(ForkMain.java:296)
    at sbt.ForkMain$Run$2.call(ForkMain.java:286)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: sbt.ForkMain$ForkError: java.io.IOException: error=2, No such 
file or directory
    at java.lang.UNIXProcess.forkAndExec(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:248)
    at java.lang.ProcessImpl.start(ProcessImpl.java:134)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
    ... 17 more
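The "error=2" in the IOException above is ENOENT: the program or its working directory does not exist, which is exactly what happens when /tmp/test-spark/spark-2.2.1 was never downloaded. A minimal reproduction, using a temporary path we know is absent rather than the suite's real directory:

```python
import os
import subprocess
import tempfile

# A subdirectory of a fresh temp dir that we deliberately never create.
missing = os.path.join(tempfile.mkdtemp(), "spark-2.2.1")
caught = None
try:
    # Same shape as the failing call: a relative program in a missing cwd.
    subprocess.Popen(["./bin/spark-submit"], cwd=missing)
except OSError as e:   # FileNotFoundError on Python 3, errno 2 (ENOENT)
    caught = e
```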