Unable to add to roles in JIRA

2015-06-28 Thread Sean Owen
In case you've tried and failed to add a person to a role in JIRA...
https://issues.apache.org/jira/browse/INFRA-9891

-
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org



Question about Spark process and thread

2015-06-28 Thread Dogtail Ray
Hi,

I was looking at the Spark source code, and I found that when launching an
Executor, Spark actually launches a thread pool; each time the scheduler
launches a task, the executor runs it on a thread from that pool.

However, I also found that the Spark process always has approximately 40
threads running regardless of my configuration (SPARK_WORKER_CORES,
SPARK_WORKER_INSTANCES, --executor-cores, --total-executor-cores, etc.).
Does this mean Spark pre-launches 40 threads even before any tasks are
launched? Many thanks!
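(For context, a minimal sketch of the model described above, using plain
java.util.concurrent rather than Spark's actual Executor class, so treat it as
an illustration, not Spark's implementation: one pool per executor process, and
each scheduled task becomes a Runnable submitted to that pool.)

```scala
import java.util.concurrent.{Executors, TimeUnit}

// One thread pool per executor process; each task the scheduler
// hands over runs as a Runnable on a pool thread.
val pool = Executors.newCachedThreadPool()
(1 to 4).foreach { taskId =>
  pool.execute(new Runnable {
    override def run(): Unit =
      println("task " + taskId + " on " + Thread.currentThread().getName)
  })
}
pool.shutdown()
pool.awaitTermination(10, TimeUnit.SECONDS)
```

Note that a cached pool grows on demand, so the number of live threads tracks
the number of concurrently running tasks, not a fixed configuration value.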

Best,
Ray


Re: [VOTE] Release Apache Spark 1.4.1

2015-06-28 Thread Krishna Sankar
Patrick,
   Haven't seen any replies on test results. I will byte ;o) - Should I
test this version, or is another one in the wings?
Cheers
k/

On Tue, Jun 23, 2015 at 10:37 PM, Patrick Wendell pwend...@gmail.com
wrote:

 Please vote on releasing the following candidate as Apache Spark version
 1.4.1!

 This release fixes a handful of known issues in Spark 1.4.0, listed here:
 http://s.apache.org/spark-1.4.1

 The tag to be voted on is v1.4.1-rc1 (commit 60e08e5):
 https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=
 60e08e50751fe3929156de956d62faea79f5b801

 The release files, including signatures, digests, etc. can be found at:
 http://people.apache.org/~pwendell/spark-releases/spark-1.4.1-rc1-bin/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/pwendell.asc

 The staging repository for this release can be found at:
 [published as version: 1.4.1]
 https://repository.apache.org/content/repositories/orgapachespark-1118/
 [published as version: 1.4.1-rc1]
 https://repository.apache.org/content/repositories/orgapachespark-1119/

 The documentation corresponding to this release can be found at:
 http://people.apache.org/~pwendell/spark-releases/spark-1.4.1-rc1-docs/

 Please vote on releasing this package as Apache Spark 1.4.1!

 The vote is open until Saturday, June 27, at 06:32 UTC and passes
 if a majority of at least 3 +1 PMC votes are cast.

 [ ] +1 Release this package as Apache Spark 1.4.1
 [ ] -1 Do not release this package because ...

 To learn more about Apache Spark, please see
 http://spark.apache.org/





Re: [VOTE] Release Apache Spark 1.4.1

2015-06-28 Thread Patrick Wendell
Hey Krishna - this is still the current release candidate.

- Patrick

On Sun, Jun 28, 2015 at 12:14 PM, Krishna Sankar ksanka...@gmail.com wrote:
 Patrick,
 Haven't seen any replies on test results. I will byte ;o) - Should I test
 this version, or is another one in the wings?
 Cheers
 k/

 On Tue, Jun 23, 2015 at 10:37 PM, Patrick Wendell pwend...@gmail.com
 wrote:

 Please vote on releasing the following candidate as Apache Spark version
 1.4.1!

 This release fixes a handful of known issues in Spark 1.4.0, listed here:
 http://s.apache.org/spark-1.4.1

 The tag to be voted on is v1.4.1-rc1 (commit 60e08e5):
 https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=
 60e08e50751fe3929156de956d62faea79f5b801

 The release files, including signatures, digests, etc. can be found at:
 http://people.apache.org/~pwendell/spark-releases/spark-1.4.1-rc1-bin/

 Release artifacts are signed with the following key:
 https://people.apache.org/keys/committer/pwendell.asc

 The staging repository for this release can be found at:
 [published as version: 1.4.1]
 https://repository.apache.org/content/repositories/orgapachespark-1118/
 [published as version: 1.4.1-rc1]
 https://repository.apache.org/content/repositories/orgapachespark-1119/

 The documentation corresponding to this release can be found at:
 http://people.apache.org/~pwendell/spark-releases/spark-1.4.1-rc1-docs/

 Please vote on releasing this package as Apache Spark 1.4.1!

 The vote is open until Saturday, June 27, at 06:32 UTC and passes
 if a majority of at least 3 +1 PMC votes are cast.

 [ ] +1 Release this package as Apache Spark 1.4.1
 [ ] -1 Do not release this package because ...

 To learn more about Apache Spark, please see
 http://spark.apache.org/




Spark 1.5.0-SNAPSHOT broken with Scala 2.11

2015-06-28 Thread Alessandro Baretta
I am building the current master branch with Scala 2.11 following these
instructions:

Building for Scala 2.11

To produce a Spark package compiled with Scala 2.11, use the -Dscala-2.11
 property:

dev/change-version-to-2.11.sh
mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package


Here's what I'm seeing:

log4j:WARN No appenders could be found for logger
(org.apache.hadoop.security.Groups).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.
Using Spark's repl log4j profile:
org/apache/spark/log4j-defaults-repl.properties
To adjust logging level use sc.setLogLevel(INFO)
Welcome to
    __
 / __/__  ___ _/ /__
_\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.5.0-SNAPSHOT
  /_/

Using Scala version 2.10.4 (OpenJDK 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
15/06/29 00:42:20 ERROR ActorSystemImpl: Uncaught fatal error from thread
[sparkDriver-akka.remote.default-remote-dispatcher-6] shutting down
ActorSystem [sparkDriver]
java.lang.VerifyError: class akka.remote.WireFormats$AkkaControlMessage
overrides final method
getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at
akka.remote.transport.AkkaPduProtobufCodec$.constructControlMessagePdu(AkkaPduCodec.scala:231)
at
akka.remote.transport.AkkaPduProtobufCodec$.init(AkkaPduCodec.scala:153)
at akka.remote.transport.AkkaPduProtobufCodec$.clinit(AkkaPduCodec.scala)
at akka.remote.EndpointManager$$anonfun$9.apply(Remoting.scala:733)
at akka.remote.EndpointManager$$anonfun$9.apply(Remoting.scala:703)

What am I doing wrong?
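One detail worth noting in the banner above: it reports "Using Scala version
2.10.4" even though the build targeted 2.11, which suggests a 2.10
scala-library ended up on the classpath. A quick runtime check of which Scala
library is actually loaded:

```scala
// Prints the version of the scala-library jar actually on the classpath,
// e.g. "version 2.10.4" - useful for spotting a cross-version mix-up.
println(scala.util.Properties.versionString)
```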


Re: Spark 1.5.0-SNAPSHOT broken with Scala 2.11

2015-06-28 Thread Ted Yu
Spark-Master-Scala211-Compile build is green.

However it is not clear what the actual command is:

[EnvInject] - Variables injected successfully.
[Spark-Master-Scala211-Compile] $ /bin/bash /tmp/hudson8945334776362889961.sh


FYI


On Sun, Jun 28, 2015 at 6:02 PM, Alessandro Baretta alexbare...@gmail.com
wrote:

 I am building the current master branch with Scala 2.11 following these
 instructions:

 Building for Scala 2.11

 To produce a Spark package compiled with Scala 2.11, use the -Dscala-2.11
  property:

 dev/change-version-to-2.11.sh
 mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package


 Here's what I'm seeing:

 log4j:WARN No appenders could be found for logger
 (org.apache.hadoop.security.Groups).
 log4j:WARN Please initialize the log4j system properly.
 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
 more info.
 Using Spark's repl log4j profile:
 org/apache/spark/log4j-defaults-repl.properties
 To adjust logging level use sc.setLogLevel(INFO)
 Welcome to
     __
  / __/__  ___ _/ /__
 _\ \/ _ \/ _ `/ __/  '_/
/___/ .__/\_,_/_/ /_/\_\   version 1.5.0-SNAPSHOT
   /_/

 Using Scala version 2.10.4 (OpenJDK 64-Bit Server VM, Java 1.7.0_79)
 Type in expressions to have them evaluated.
 Type :help for more information.
 15/06/29 00:42:20 ERROR ActorSystemImpl: Uncaught fatal error from thread
 [sparkDriver-akka.remote.default-remote-dispatcher-6] shutting down
 ActorSystem [sparkDriver]
 java.lang.VerifyError: class akka.remote.WireFormats$AkkaControlMessage
 overrides final method
 getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
 at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at
 akka.remote.transport.AkkaPduProtobufCodec$.constructControlMessagePdu(AkkaPduCodec.scala:231)
 at
 akka.remote.transport.AkkaPduProtobufCodec$.init(AkkaPduCodec.scala:153)
 at akka.remote.transport.AkkaPduProtobufCodec$.clinit(AkkaPduCodec.scala)
 at akka.remote.EndpointManager$$anonfun$9.apply(Remoting.scala:733)
 at akka.remote.EndpointManager$$anonfun$9.apply(Remoting.scala:703)

 What am I doing wrong?




Re: Spark 1.5.0-SNAPSHOT broken with Scala 2.11

2015-06-28 Thread Josh Rosen
The 2.11 compile build is going to be green because this is an issue with
tests, not compilation.

On Sun, Jun 28, 2015 at 6:30 PM, Ted Yu yuzhih...@gmail.com wrote:

 Spark-Master-Scala211-Compile build is green.

 However it is not clear what the actual command is:

 [EnvInject] - Variables injected successfully.
 [Spark-Master-Scala211-Compile] $ /bin/bash /tmp/hudson8945334776362889961.sh


 FYI


 On Sun, Jun 28, 2015 at 6:02 PM, Alessandro Baretta alexbare...@gmail.com
  wrote:

 I am building the current master branch with Scala 2.11 following these
 instructions:

 Building for Scala 2.11

 To produce a Spark package compiled with Scala 2.11, use the -Dscala-2.11
  property:

 dev/change-version-to-2.11.sh
 mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package


 Here's what I'm seeing:

 log4j:WARN No appenders could be found for logger
 (org.apache.hadoop.security.Groups).
 log4j:WARN Please initialize the log4j system properly.
 log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
 more info.
 Using Spark's repl log4j profile:
 org/apache/spark/log4j-defaults-repl.properties
 To adjust logging level use sc.setLogLevel(INFO)
 Welcome to
     __
  / __/__  ___ _/ /__
 _\ \/ _ \/ _ `/ __/  '_/
/___/ .__/\_,_/_/ /_/\_\   version 1.5.0-SNAPSHOT
   /_/

 Using Scala version 2.10.4 (OpenJDK 64-Bit Server VM, Java 1.7.0_79)
 Type in expressions to have them evaluated.
 Type :help for more information.
 15/06/29 00:42:20 ERROR ActorSystemImpl: Uncaught fatal error from thread
 [sparkDriver-akka.remote.default-remote-dispatcher-6] shutting down
 ActorSystem [sparkDriver]
 java.lang.VerifyError: class akka.remote.WireFormats$AkkaControlMessage
 overrides final method
 getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
 at java.lang.ClassLoader.defineClass1(Native Method)
 at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
 at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
 at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
 at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at
 akka.remote.transport.AkkaPduProtobufCodec$.constructControlMessagePdu(AkkaPduCodec.scala:231)
 at
 akka.remote.transport.AkkaPduProtobufCodec$.init(AkkaPduCodec.scala:153)
 at
 akka.remote.transport.AkkaPduProtobufCodec$.clinit(AkkaPduCodec.scala)
 at akka.remote.EndpointManager$$anonfun$9.apply(Remoting.scala:733)
 at akka.remote.EndpointManager$$anonfun$9.apply(Remoting.scala:703)

 What am I doing wrong?





Gossip protocol in Master selection

2015-06-28 Thread Debasish Das
Hi,

Akka Cluster uses a gossip protocol for master election. The approach in
Spark right now is to use ZooKeeper for high availability.

Interestingly, Cassandra and Redis clusters both use a gossip protocol.

I am not sure what the default behavior is right now. If the master dies
and ZooKeeper selects a new master, will the whole dependency graph be
re-executed, or will only the unfinished stages be restarted?

Also, why was ZooKeeper-based HA preferred in Spark? I was wondering if
there is a JIRA to add a gossip protocol for Spark master election.

In the code I see zookeeper, filesystem, and custom recovery modes, with
MonarchyLeader as the default. So it looks like Spark is designed to allow
plugging in a new LeaderElectionAgent.
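A hedged sketch of that extension point (names modeled on the Spark source
around 1.4; treat the exact signatures as assumptions): a custom agent only has
to notify the master instance when it gains or loses leadership, so a
gossip-based agent would plug in at the same seam.

```scala
// Sketch of the leader-election extension point; names follow the Spark
// source but the exact signatures here are assumptions.
trait LeaderElectable {
  def electedLeader(): Unit      // called when this master becomes leader
  def revokedLeadership(): Unit  // called when leadership is lost
}

trait LeaderElectionAgent {
  val masterInstance: LeaderElectable
  def stop(): Unit = {}
}

// The single-master default: with only one candidate, crown it immediately.
class MonarchyLeaderAgent(val masterInstance: LeaderElectable)
    extends LeaderElectionAgent {
  masterInstance.electedLeader()
}
```

A gossip-based implementation would extend LeaderElectionAgent and call
electedLeader() / revokedLeadership() as cluster membership converges, instead
of relying on ZooKeeper sessions.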

Thanks.
Deb