[jira] [Commented] (SPARK-22401) Missing 2.1.2 tag in git

2017-10-30 Thread Xin Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16226178#comment-16226178
 ] 

Xin Lu commented on SPARK-22401:


[~holdenk] is this just a new process? 

> Missing 2.1.2 tag in git
> 
>
> Key: SPARK-22401
> URL: https://issues.apache.org/jira/browse/SPARK-22401
> Project: Spark
>  Issue Type: Bug
>  Components: Build, Deploy
>Affects Versions: 2.1.2
>Reporter: Brian Barker
>Priority: Minor
>
> We only saw a 2.1.2-rc4 tag in git, no official release tag. The releases web 
> page shows 2.1.2 was released on October 9.
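For reference, the check amounts to looking for the final tag in the tag list; a minimal sketch, with the tag listing piped in so the example is self-contained (in the real repo it would come from `git tag -l`):

```shell
# Sketch: check whether a given release tag appears in a tag listing
# (stdin stands in for the output of `git tag -l`).
has_tag() {
  grep -qx "$1"
}

# With only the RC tag pushed, the final release tag is reported missing:
printf 'v2.1.2-rc4\n' | has_tag v2.1.2 && echo found || echo missing   # prints "missing"
```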



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-7019) Build docs on doc changes

2017-10-29 Thread Xin Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16224388#comment-16224388
 ] 

Xin Lu commented on SPARK-7019:
---

recent pr here:

https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83085/consoleFull


Building Unidoc API Documentation

[info] Building Spark unidoc (w/Hive 1.2.1) using SBT with these arguments:  
-Phadoop-2.6 -Phive-thriftserver -Pflume -Pkinesis-asl -Pyarn -Pkafka-0-8 
-Phive -Pmesos unidoc
Using /usr/java/jdk1.8.0_60 as default JAVA_HOME.

> Build docs on doc changes
> -
>
> Key: SPARK-7019
> URL: https://issues.apache.org/jira/browse/SPARK-7019
> Project: Spark
>  Issue Type: New Feature
>  Components: Build
>Reporter: Brennon York
>
> Currently when a pull request changes the {{docs/}} directory, the docs 
> aren't actually built. When a PR is submitted the {{git}} history should be 
> checked to see if any doc changes were made and, if so, properly build the 
> docs and report any issues.






[jira] [Commented] (SPARK-7019) Build docs on doc changes

2017-10-29 Thread Xin Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16224386#comment-16224386
 ] 

Xin Lu commented on SPARK-7019:
---

It looks like unidoc is running on new PRs now.  Maybe this can be closed now?

> Build docs on doc changes
> -
>
> Key: SPARK-7019
> URL: https://issues.apache.org/jira/browse/SPARK-7019
> Project: Spark
>  Issue Type: New Feature
>  Components: Build
>Reporter: Brennon York
>
> Currently when a pull request changes the {{docs/}} directory, the docs 
> aren't actually built. When a PR is submitted the {{git}} history should be 
> checked to see if any doc changes were made and, if so, properly build the 
> docs and report any issues.






[jira] [Comment Edited] (SPARK-20000) Spark Hive tests aborted due to lz4-java on ppc64le

2017-10-29 Thread Xin Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16224358#comment-16224358
 ] 

Xin Lu edited comment on SPARK-20000 at 10/30/17 4:12 AM:
--

I checked the dependencies and it looks like lz4-java was already updated to 1.4.0: 
https://github.com/apache/spark/blob/master/pom.xml#L538

lz4-java 1.4.0 was released on August 2nd and looks like it included the patch above. 
This is probably resolvable now. 

This should be a dupe of this issue which will be fixed in 2.3.0: 
https://github.com/apache/spark/commit/b78cf13bf05f0eadd7ae97df84b6e1505dc5ff9f

[SPARK-21276][CORE] Update lz4-java to the latest (v1.4.0)
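As a sanity check, the declared version can be read straight out of the dependency declaration; a sketch with the pom fragment inlined for illustration (the real file is the pom.xml linked above):

```shell
# Sketch: extract the declared lz4-java version from a pom.xml-style fragment.
pom_fragment='<artifactId>lz4-java</artifactId>
<version>1.4.0</version>'
lz4_version=$(printf '%s\n' "$pom_fragment" | sed -n 's:.*<version>\(.*\)</version>.*:\1:p')
echo "$lz4_version"   # prints "1.4.0"
```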


was (Author: xynny):
I checked the dependencies and it looks like lz4-java already updated to 1.4.0: 
https://github.com/apache/spark/blob/master/pom.xml#L538

lz4 1.4.0 was released august 2nd and looks like it included the patch above. 
This is probably resolvable now. 

This should be a dupe of this: 
https://github.com/apache/spark/commit/b78cf13bf05f0eadd7ae97df84b6e1505dc5ff9f

[SPARK-21276][CORE] Update lz4-java to the latest (v1.4.0)

> Spark Hive tests aborted due to lz4-java on ppc64le
> ---
>
> Key: SPARK-20000
> URL: https://issues.apache.org/jira/browse/SPARK-20000
> Project: Spark
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 2.2.0
> Environment: Ubuntu 14.04 ppc64le 
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>Priority: Minor
>  Labels: ppc64le
> Attachments: hs_err_pid.log
>
>
> The tests are getting aborted in Spark Hive project with the following error :
> {code:borderStyle=solid}
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x3fff94dbf114, pid=6160, tid=0x3fff6efef1a0
> #
> # JRE version: OpenJDK Runtime Environment (8.0_111-b14) (build 
> 1.8.0_111-8u111-b14-3~14.04.1-b14)
> # Java VM: OpenJDK 64-Bit Server VM (25.111-b14 mixed mode linux-ppc64 
> compressed oops)
> # Problematic frame:
> # V  [libjvm.so+0x56f114]
> {code}
> In the thread log file, I found the following traces :
> Event: 3669.042 Thread 0x3fff89976800 Exception <a 
> 'java/lang/NoClassDefFoundError': Could not initialize class 
> net.jpountz.lz4.LZ4JNI> (0x00079fcda3b8) thrown at 
> [/build/openjdk-8-fVIxxI/openjdk-8-8u111-b14/src/hotspot/src/share/vm/oops/instanceKlass.cpp,
>  line 890]
> This error is due to lz4-java (version 1.3.0), which doesn't have support 
> for ppc64le. PFA the thread log file.







[jira] [Commented] (SPARK-20000) Spark Hive tests aborted due to lz4-java on ppc64le

2017-10-29 Thread Xin Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-20000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16224358#comment-16224358
 ] 

Xin Lu commented on SPARK-20000:


I checked the dependencies and it looks like lz4-java was already updated to 1.4.0: 
https://github.com/apache/spark/blob/master/pom.xml#L538

lz4-java 1.4.0 was released on August 2nd and looks like it included the patch above. 
This is probably resolvable now. 

> Spark Hive tests aborted due to lz4-java on ppc64le
> ---
>
> Key: SPARK-20000
> URL: https://issues.apache.org/jira/browse/SPARK-20000
> Project: Spark
>  Issue Type: Improvement
>  Components: Tests
>Affects Versions: 2.2.0
> Environment: Ubuntu 14.04 ppc64le 
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>Priority: Minor
>  Labels: ppc64le
> Attachments: hs_err_pid.log
>
>
> The tests are getting aborted in Spark Hive project with the following error :
> {code:borderStyle=solid}
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x3fff94dbf114, pid=6160, tid=0x3fff6efef1a0
> #
> # JRE version: OpenJDK Runtime Environment (8.0_111-b14) (build 
> 1.8.0_111-8u111-b14-3~14.04.1-b14)
> # Java VM: OpenJDK 64-Bit Server VM (25.111-b14 mixed mode linux-ppc64 
> compressed oops)
> # Problematic frame:
> # V  [libjvm.so+0x56f114]
> {code}
> In the thread log file, I found the following traces :
> Event: 3669.042 Thread 0x3fff89976800 Exception <a 
> 'java/lang/NoClassDefFoundError': Could not initialize class 
> net.jpountz.lz4.LZ4JNI> (0x00079fcda3b8) thrown at 
> [/build/openjdk-8-fVIxxI/openjdk-8-8u111-b14/src/hotspot/src/share/vm/oops/instanceKlass.cpp,
>  line 890]
> This error is due to lz4-java (version 1.3.0), which doesn't have support 
> for ppc64le. PFA the thread log file.






[jira] [Commented] (SPARK-18451) Always set -XX:+HeapDumpOnOutOfMemoryError for Spark tests

2017-10-28 Thread Xin Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-18451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16223846#comment-16223846
 ] 

Xin Lu commented on SPARK-18451:


Pretty easy to do if we can get the Jenkins job builder scripts changed and 
there is a place to dump the files on amplab Jenkins/S3.

> Always set -XX:+HeapDumpOnOutOfMemoryError for Spark tests
> --
>
> Key: SPARK-18451
> URL: https://issues.apache.org/jira/browse/SPARK-18451
> Project: Spark
>  Issue Type: Bug
>  Components: Build, Tests
>Reporter: Cheng Lian
>
> It would be nice if we always set {{-XX:+HeapDumpOnOutOfMemoryError}} and 
> {{-XX:HeapDumpPath}} for open source Spark tests, so that it would be easier 
> to investigate issues like SC-5041.
> Note:
> - We need to ensure that the heap dumps are stored in a location on Jenkins 
> that won't be automatically cleaned up.
> - It would be nice to be able to customize the heap dump output 
> paths on a per-build basis so that it's easier to find the heap dump file of 
> any given build.
> The 2nd point is optional since we can probably identify wanted heap dump 
> files by looking at the creation timestamp.
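A sketch of how the per-build path could be composed; the directory and build number here are made-up values, and note that the path option takes the form `-XX:HeapDumpPath=<path>` (it is not a `+` boolean flag):

```shell
# Sketch: build the test JVM options so each build writes its OOM heap dump
# to a distinct file under a directory that Jenkins does not clean up.
heap_dump_opts() {
  # $1 = dump directory, $2 = build number
  echo "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=${1}/build-${2}.hprof"
}

JAVA_OPTS="$(heap_dump_opts /var/lib/jenkins/heapdumps 12345)"
echo "$JAVA_OPTS"
```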






[jira] [Comment Edited] (SPARK-22055) Port release scripts

2017-10-28 Thread Xin Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16223229#comment-16223229
 ] 

Xin Lu edited comment on SPARK-22055 at 10/28/17 6:30 PM:
--

[~holdenk] [~joshrosen] do you guys need help with this?  Is this all the JJB 
code Josh has in databricks/spark?


was (Author: xynny):
[~holdenk] [~joshrosen] do you guys need help with this?  

> Port release scripts
> 
>
> Key: SPARK-22055
> URL: https://issues.apache.org/jira/browse/SPARK-22055
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.2.1, 2.3.0
>Reporter: holdenk
>Priority: Blocker
>
> The current Jenkins jobs are generated from scripts in a private repo. We 
> should port these to enable changes like SPARK-22054 .






[jira] [Created] (SPARK-22377) Maven nightly snapshot jenkins jobs are broken on multiple workers due to lsof

2017-10-27 Thread Xin Lu (JIRA)
Xin Lu created SPARK-22377:
--

 Summary: Maven nightly snapshot jenkins jobs are broken on 
multiple workers due to lsof
 Key: SPARK-22377
 URL: https://issues.apache.org/jira/browse/SPARK-22377
 Project: Spark
  Issue Type: Bug
  Components: Build
Affects Versions: 2.2.0, 2.1.0
Reporter: Xin Lu


It looks like multiple workers in the amplab jenkins cannot execute lsof.  
Example log below:

https://amplab.cs.berkeley.edu/jenkins/view/Spark%20Packaging/job/spark-branch-2.1-maven-snapshots/182/console

spark-build/dev/create-release/release-build.sh: line 344: lsof: command not 
found
usage: kill [ -s signal | -p ] [ -a ] pid ...
   kill -l [ signal ]

I looked at the jobs and it looks like only amp-jenkins-worker-01 works, so you 
are getting a successful build every week or so.  Unclear if the snapshot is 
actually released.  
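One possible hardening for the release script, sketched under the assumption that falling back to `fuser`, or skipping the kill step with a warning, is acceptable on those workers:

```shell
# Sketch: choose an available tool for finding PIDs that hold a file open,
# rather than assuming lsof exists on every worker.
pid_lister() {
  if command -v lsof >/dev/null 2>&1; then
    echo lsof
  elif command -v fuser >/dev/null 2>&1; then
    echo fuser
  else
    echo none   # skip the kill step and warn instead of crashing the script
  fi
}

echo "using: $(pid_lister)"
```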










[jira] [Commented] (SPARK-22055) Port release scripts

2017-10-27 Thread Xin Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-22055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16223229#comment-16223229
 ] 

Xin Lu commented on SPARK-22055:


[~holdenk] [~joshrosen] do you guys need help with this?  

> Port release scripts
> 
>
> Key: SPARK-22055
> URL: https://issues.apache.org/jira/browse/SPARK-22055
> Project: Spark
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 2.2.1, 2.3.0
>Reporter: holdenk
>Priority: Blocker
>
> The current Jenkins jobs are generated from scripts in a private repo. We 
> should port these to enable changes like SPARK-22054 .






[jira] [Commented] (SPARK-13085) Add scalastyle command used in build testing

2017-10-27 Thread Xin Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-13085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16223227#comment-16223227
 ] 

Xin Lu commented on SPARK-13085:


I think scalastyle-maven-plugin was already upgraded to the latest version, 1.0.0: 
 https://github.com/xynny/spark/commit/64936c14a7ef30b9eacb129bafe6a1665887bf21

I don't think this should be an issue anymore.  

> Add scalastyle command used in build testing
> 
>
> Key: SPARK-13085
> URL: https://issues.apache.org/jira/browse/SPARK-13085
> Project: Spark
>  Issue Type: Wish
>  Components: Build, Tests
>Reporter: Charles Allen
>
> As an occasional or new contributor, it is easy to screw up scala style. But 
> looking at the output logs (for example 
> https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50300/consoleFull
>  ) it is not obvious how to fix the scala style tests, even when reading the 
> scala style guide.
> {code}
> 
> Running Scala style checks
> 
> Scalastyle checks failed at following occurrences:
> [error] 
> /home/jenkins/workspace/SparkPullRequestBuilder/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala:22:0:
>  import.ordering.wrongOrderInGroup.message
> [error] (core/compile:scalastyle) errors exist
> [error] Total time: 9 s, completed Jan 28, 2016 2:11:00 PM
> [error] running 
> /home/jenkins/workspace/SparkPullRequestBuilder/dev/lint-scala ; received 
> return code 1
> {code}
> This ask is that the command used to check scalastyle is presented in the log 
> so a developer does not have to wait for the build process to check if a pull 
> request should pass scala style checks.
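The ask above amounts to echoing the command before running it, so the log shows how to reproduce the failure locally; a minimal sketch (the wrapper function is hypothetical, `dev/lint-scala` is the script named in the log):

```shell
# Sketch: print each command before executing it, so a failing build log
# tells contributors exactly what to run locally.
run_logged() {
  echo "+ $*"
  "$@"
}

run_logged echo dev/lint-scala   # stand-in for the real ./dev/lint-scala run
```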






[jira] [Commented] (SPARK-8571) spark streaming hanging processes upon build exit

2017-10-27 Thread Xin Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-8571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16223226#comment-16223226
 ] 

Xin Lu commented on SPARK-8571:
---

Is this still an issue?

> spark streaming hanging processes upon build exit
> -
>
> Key: SPARK-8571
> URL: https://issues.apache.org/jira/browse/SPARK-8571
> Project: Spark
>  Issue Type: Bug
>  Components: Build, DStreams
> Environment: centos 6.6 amplab build system
>Reporter: shane knapp
>Assignee: shane knapp
>Priority: Minor
>  Labels: build, test
>
> over the past 3 months i've been noticing that there are occasionally hanging 
> processes on our build system workers after various spark builds have 
> finished.  these are all spark streaming processes.
> today i noticed a 3+ hour spark build that was timed out after 200 minutes 
> (https://amplab.cs.berkeley.edu/jenkins/job/Spark-Master-Maven-pre-YARN/2994/),
>  and the matrix build hadoop.version=2.0.0-mr1-cdh4.1.2 ran on 
> amp-jenkins-worker-02.  after the timeout, it left the following process (and 
> all of its children) hanging.
> the process' CLI command was:
> {quote}
> [root@amp-jenkins-worker-02 ~]# ps auxwww|grep 1714
> jenkins1714  733  2.7 21342148 3642740 ?Sl   07:52 1713:41 java 
> -Dderby.system.durability=test -Djava.awt.headless=true 
> -Djava.io.tmpdir=/home/jenkins/workspace/Spark-Master-Maven-pre-YARN/hadoop.version/2.0.0-mr1-cdh4.1.2/label/centos/streaming/target/tmp
>  -Dspark.driver.allowMultipleContexts=true 
> -Dspark.test.home=/home/jenkins/workspace/Spark-Master-Maven-pre-YARN/hadoop.version/2.0.0-mr1-cdh4.1.2/label/centos
>  -Dspark.testing=1 -Dspark.ui.enabled=false 
> -Dspark.ui.showConsoleProgress=false 
> -Dbasedir=/home/jenkins/workspace/Spark-Master-Maven-pre-YARN/hadoop.version/2.0.0-mr1-cdh4.1.2/label/centos/streaming
>  -ea -Xmx3g -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=512m 
> org.scalatest.tools.Runner -R 
> /home/jenkins/workspace/Spark-Master-Maven-pre-YARN/hadoop.version/2.0.0-mr1-cdh4.1.2/label/centos/streaming/target/scala-2.10/classes
>  
> /home/jenkins/workspace/Spark-Master-Maven-pre-YARN/hadoop.version/2.0.0-mr1-cdh4.1.2/label/centos/streaming/target/scala-2.10/test-classes
>  -o -f 
> /home/jenkins/workspace/Spark-Master-Maven-pre-YARN/hadoop.version/2.0.0-mr1-cdh4.1.2/label/centos/streaming/target/surefire-reports/SparkTestSuite.txt
>  -u 
> /home/jenkins/workspace/Spark-Master-Maven-pre-YARN/hadoop.version/2.0.0-mr1-cdh4.1.2/label/centos/streaming/target/surefire-reports/.
> {quote}
> stracing that process doesn't give us much:
> {quote}
> [root@amp-jenkins-worker-02 ~]# strace -p 1714
> Process 1714 attached - interrupt to quit
> futex(0x7ff3cdd269d0, FUTEX_WAIT, 1715, NULL
> {quote}
> stracing its children gives us a *little* bit more...  some loop like this:
> {quote}
> 
> futex(0x7ff3c8012d28, FUTEX_WAKE_PRIVATE, 1) = 0
> futex(0x7ff3c8012f54, FUTEX_WAIT_PRIVATE, 28969, NULL) = 0
> futex(0x7ff3c8012f28, FUTEX_WAKE_PRIVATE, 1) = 0
> futex(0x7ff3c8f17954, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7ff3c8f17950, 
> {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
> futex(0x7ff3c8f17928, FUTEX_WAKE_PRIVATE, 1) = 1
> futex(0x7ff3c8012d54, FUTEX_WAIT_BITSET_PRIVATE, 1, {2263862, 865233273}, 
> ) = -1 ETIMEDOUT (Connection timed out)
> {quote}
> and others loop on prtrace_attach (no such process) or restart_syscall 
> (resuming interrupted call)
> even though this behavior has been solidly pinned to jobs timing out (which 
> ends w/an aborted, not failed, build), i've seen it happen for failed builds 
> as well.  if i see any hanging processes from failed (not aborted) builds, i 
> will investigate them and update this bug as well.
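A post-build cleanup step along these lines might help; a sketch, assuming `pgrep` is available on the workers and that matching on the scalatest runner class (visible in the CLI command above) is a reasonable fingerprint:

```shell
# Sketch: list leftover scalatest JVMs owned by the current user so a
# post-build step can log and kill them.
list_orphans() {
  pgrep -u "$(id -un)" -f "$1" || true   # pgrep exits 1 when nothing matches
}

orphans=$(list_orphans 'org\.scalatest\.tools\.Runner')
if [ -n "$orphans" ]; then
  echo "leftover test JVMs: $orphans"    # a real cleanup would kill these
fi
```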


