SLF4J: Failed to load class org.slf4j.impl.StaticLoggerBinder

2015-07-10 Thread Mulugeta Mammo
Hi,

My Spark job runs without error, but once it completes I get this message
and the app is logged as an incomplete application in my Spark history server:

SLF4J: Failed to load class org.slf4j.impl.StaticLoggerBinder
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
details.

To fix the issue, I downloaded slf4j-simple-1.7.12.jar and included it in the
class path. But when I do that I get "Multiple bindings were found on the
class path", and the bindings point to spark-assembly-1.3.1-hadoop2.6.0.jar
and slf4j-simple-1.7.12.jar.
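
For reference, a stripped-down sketch (names and structure are placeholders,
not the actual job) of a driver that stops its SparkContext explicitly; the
history server only marks an application complete once the context is stopped
and the event log is finalized, so a missing sc.stop() is one common cause of
"incomplete application" entries:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: object and app names are placeholders.
object MyJob {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("my-job"))
    try {
      // ... actual job logic ...
    } finally {
      sc.stop() // finalizes the event log so the app is listed as complete
    }
  }
}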

Any ideas?

Thanks,


Re: SLF4J: Failed to load class org.slf4j.impl.StaticLoggerBinder

2015-07-10 Thread Mulugeta Mammo
No. Works perfectly.

On Fri, Jul 10, 2015 at 3:38 PM, liangdianpeng liangdianp...@vip.163.com
wrote:

 Could the class inside the spark_XXX.jar have been damaged?


 Sent from NetEase Mail (mobile)


 On 2015-07-11 06:13, Mulugeta Mammo mulugeta.abe...@gmail.com wrote:

 Hi,

 My Spark job runs without error, but once it completes I get this message
 and the app is logged as an incomplete application in my Spark history server:

 SLF4J: Failed to load class org.slf4j.impl.StaticLoggerBinder
 SLF4J: Defaulting to no-operation (NOP) logger implementation
 SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
 details.

 To fix the issue, I downloaded slf4j-simple-1.7.12.jar and included it in the
 class path. But when I do that I get "Multiple bindings were found on the
 class path", and the bindings point to spark-assembly-1.3.1-hadoop2.6.0.jar
 and slf4j-simple-1.7.12.jar.

 Any ideas?

 Thanks,




Re: Setting JVM heap start and max sizes, -Xms and -Xmx, for executors

2015-07-02 Thread Mulugeta Mammo
Tried that one and it throws an error: extraJavaOptions is not allowed to
alter memory settings; use spark.executor.memory instead.

On Thu, Jul 2, 2015 at 12:21 PM, Benjamin Fradet benjamin.fra...@gmail.com
wrote:

 Hi,

 You can set those parameters through the

 spark.executor.extraJavaOptions

 Which is documented in the configuration guide:
 spark.apache.org/docs/latest/configuration.html
 On 2 Jul 2015 9:06 pm, Mulugeta Mammo mulugeta.abe...@gmail.com wrote:

 Hi,

 I'm running Spark 1.4.0 and I want to specify the start and max sizes (-Xms
 and -Xmx) of the JVM heap for my executors. I tried:

 executor.cores.memory=-Xms1g -Xmx8g

 but it doesn't work. How do I specify this?

 Appreciate your help.

 Thanks,




Re: Setting JVM heap start and max sizes, -Xms and -Xmx, for executors

2015-07-02 Thread Mulugeta Mammo
Thanks, but my use case requires that I specify different start and max heap
sizes. It looks like Spark sets the start and max sizes to the same value.

On Thu, Jul 2, 2015 at 1:08 PM, Todd Nist tsind...@gmail.com wrote:

 You should use:

 spark.executor.memory

 from the docs https://spark.apache.org/docs/latest/configuration.html:
 spark.executor.memory (default: 512m) - Amount of memory to use per executor
 process, in the same format as JVM memory strings (e.g. 512m, 2g).
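
 A minimal sketch (not part of the original reply; the value is illustrative
 only) of setting this on a SparkConf; the same property can also go in
 spark-defaults.conf or be passed to spark-submit with --conf:

 import org.apache.spark.{SparkConf, SparkContext}

 // Sketch only: "4g" and the app name are illustrative, not recommendations.
 val conf = new SparkConf()
   .setAppName("my-job")
   .set("spark.executor.memory", "4g")
 val sc = new SparkContext(conf)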

 -Todd



 On Thu, Jul 2, 2015 at 3:36 PM, Mulugeta Mammo mulugeta.abe...@gmail.com
 wrote:

 Tried that one and it throws an error: extraJavaOptions is not allowed to
 alter memory settings; use spark.executor.memory instead.

 On Thu, Jul 2, 2015 at 12:21 PM, Benjamin Fradet 
 benjamin.fra...@gmail.com wrote:

 Hi,

 You can set those parameters through the

 spark.executor.extraJavaOptions

 Which is documented in the configuration guide:
 spark.apache.org/docs/latest/configuration.html
 On 2 Jul 2015 9:06 pm, Mulugeta Mammo mulugeta.abe...@gmail.com
 wrote:

 Hi,

 I'm running Spark 1.4.0 and I want to specify the start and max sizes (-Xms
 and -Xmx) of the JVM heap for my executors. I tried:

 executor.cores.memory=-Xms1g -Xmx8g

 but it doesn't work. How do I specify this?

 Appreciate your help.

 Thanks,






Re: Setting JVM heap start and max sizes, -Xms and -Xmx, for executors

2015-07-02 Thread Mulugeta Mammo
Yeah, I think it's a limitation too. I looked at the source code; in
SparkConf.scala and ExecutorRunnable.scala both -Xms and -Xmx are set to the
same value, which is spark.executor.memory.

Thanks

On Thu, Jul 2, 2015 at 1:18 PM, Todd Nist tsind...@gmail.com wrote:

 Yes, that does appear to be the case. The documentation is very clear that
 the heap settings cannot be set with
 spark.executor.extraJavaOptions:

 spark.executor.extraJavaOptions (default: none) - A string of extra JVM
 options to pass to executors. For instance, GC settings or other logging.
 *Note that it is illegal to set Spark properties or heap size settings with
 this option.* Spark properties should be set using a SparkConf object or the
 spark-defaults.conf file used with the spark-submit script. *Heap size
 settings can be set with spark.executor.memory*.
 So it appears to be a limitation at this time.
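
 A hedged sketch of how the two settings split the work (values are
 illustrative only): heap size goes through spark.executor.memory, while
 non-heap JVM flags such as GC options can still go through
 spark.executor.extraJavaOptions:

 import org.apache.spark.SparkConf

 // Sketch only: illustrative values, not recommendations.
 val conf = new SparkConf()
   .set("spark.executor.memory", "8g") // heap size (reportedly applied as both -Xms and -Xmx)
   .set("spark.executor.extraJavaOptions", "-XX:+UseG1GC") // GC flags are allowed here; heap flags are not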

 -Todd



 On Thu, Jul 2, 2015 at 4:13 PM, Mulugeta Mammo mulugeta.abe...@gmail.com
 wrote:

 Thanks, but my use case requires that I specify different start and max heap
 sizes. It looks like Spark sets the start and max sizes to the same value.

 On Thu, Jul 2, 2015 at 1:08 PM, Todd Nist tsind...@gmail.com wrote:

 You should use:

 spark.executor.memory

 from the docs https://spark.apache.org/docs/latest/configuration.html:
 spark.executor.memory (default: 512m) - Amount of memory to use per executor
 process, in the same format as JVM memory strings (e.g. 512m, 2g).

 -Todd



 On Thu, Jul 2, 2015 at 3:36 PM, Mulugeta Mammo 
 mulugeta.abe...@gmail.com wrote:

 Tried that one and it throws an error: extraJavaOptions is not allowed to
 alter memory settings; use spark.executor.memory instead.

 On Thu, Jul 2, 2015 at 12:21 PM, Benjamin Fradet 
 benjamin.fra...@gmail.com wrote:

 Hi,

 You can set those parameters through the

 spark.executor.extraJavaOptions

 Which is documented in the configuration guide:
 spark.apache.org/docs/latest/configuration.html
 On 2 Jul 2015 9:06 pm, Mulugeta Mammo mulugeta.abe...@gmail.com
 wrote:

 Hi,

 I'm running Spark 1.4.0 and I want to specify the start and max sizes (-Xms
 and -Xmx) of the JVM heap for my executors. I tried:

 executor.cores.memory=-Xms1g -Xmx8g

 but it doesn't work. How do I specify this?

 Appreciate your help.

 Thanks,








Re: Can't build Spark

2015-06-02 Thread Mulugeta Mammo
Spark 1.3.1, Scala 2.11.6, Maven 3.3.3. I'm behind a proxy and have set my
proxy settings in my Maven settings.

Thanks,

On Tue, Jun 2, 2015 at 2:54 PM, Ted Yu yuzhih...@gmail.com wrote:

 Can you give us some more information?
 Such as:
 which Spark release you were building
 what command you used
 Scala version you used

 Thanks

 On Tue, Jun 2, 2015 at 2:50 PM, Mulugeta Mammo mulugeta.abe...@gmail.com
 wrote:

 Building Spark is throwing errors; any ideas?


 [FATAL] Non-resolvable parent POM: Could not transfer artifact 
 org.apache:apache:pom:14 from/to central ( 
 http://repo.maven.apache.org/maven2): Error transferring file: 
 repo.maven.apache.org from  
 http://repo.maven.apache.org/maven2/org/apache/apache/14/apache-14.pom and 
 'parent.relativePath' points at wrong local POM @ line 21, column 11

 at 
 org.apache.maven.model.building.DefaultModelProblemCollector.newModelBuildingException(DefaultModelProblemCollector.java:195)
 at 
 org.apache.maven.model.building.DefaultModelBuilder.readParentExternally(DefaultModelBuilder.java:841)





Can't build Spark

2015-06-02 Thread Mulugeta Mammo
Building Spark is throwing errors; any ideas?


[FATAL] Non-resolvable parent POM: Could not transfer artifact
org.apache:apache:pom:14 from/to central (
http://repo.maven.apache.org/maven2): Error transferring file:
repo.maven.apache.org from
http://repo.maven.apache.org/maven2/org/apache/apache/14/apache-14.pom
and 'parent.relativePath' points at wrong local POM @ line 21, column
11

at 
org.apache.maven.model.building.DefaultModelProblemCollector.newModelBuildingException(DefaultModelProblemCollector.java:195)
at 
org.apache.maven.model.building.DefaultModelBuilder.readParentExternally(DefaultModelBuilder.java:841)


Building Spark for Hadoop 2.6.0

2015-06-01 Thread Mulugeta Mammo
Does this build Spark for Hadoop version 2.6.0?

build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean
package

Thanks!


Re: Value for SPARK_EXECUTOR_CORES

2015-05-28 Thread Mulugeta Mammo
Thanks for the valuable information. The blog states:

"The cores property controls the number of concurrent tasks an executor can
run. --executor-cores 5 means that each executor can run a maximum of five
tasks at the same time."
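
For example (numbers chosen only for illustration), with --executor-cores 5
and three executors on a node, up to 5 * 3 = 15 tasks could run concurrently
on that node.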

So, I guess the max number of executor cores I can assign is the CPU count
(which includes the number of threads per core), not just the number of
physical cores. I just want to be sure what Spark means by "cores".

Thanks

On Thu, May 28, 2015 at 11:16 AM, Ruslan Dautkhanov dautkha...@gmail.com
wrote:

 It's not only about cores. Keep in mind spark.executor.cores also affects the
 available memory for each task:

 From
 http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/

 The memory available to each task is (spark.executor.memory *
 spark.shuffle.memoryFraction * spark.shuffle.safetyFraction) /
 spark.executor.cores. Memory fraction and safety fraction default to 0.2
 and 0.8 respectively.
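
 For example (illustrative numbers only): with spark.executor.memory=8g and
 spark.executor.cores=8, each task would get roughly
 8192 MB * 0.2 * 0.8 / 8 ≈ 164 MB.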

 I'd test spark.executor.cores with 2, 4, 8, and 16 and see what makes your
 job run faster.


 --
 Ruslan Dautkhanov

 On Wed, May 27, 2015 at 6:46 PM, Mulugeta Mammo mulugeta.abe...@gmail.com
  wrote:

 My executor has the following spec (lscpu):

 CPU(s): 16
 Core(s) per socket: 4
 Socket(s): 2
 Thread(s) per core: 2

 The CPU count is obviously 4 * 2 * 2 = 16. My question is: what value is
 Spark expecting in SPARK_EXECUTOR_CORES? The CPU count (16) or the total
 number of physical cores (4 * 2 = 8)?

 Thanks





Hyperthreading

2015-05-28 Thread Mulugeta Mammo
Hi guys,

Does SPARK_EXECUTOR_CORES assume hyper-threading? For example, if I have
4 cores with 2 threads per core, should SPARK_EXECUTOR_CORES be
4 * 2 = 8 or just 4?

Thanks,


Value for SPARK_EXECUTOR_CORES

2015-05-27 Thread Mulugeta Mammo
My executor has the following spec (lscpu):

CPU(s): 16
Core(s) per socket: 4
Socket(s): 2
Thread(s) per core: 2

The CPU count is obviously 4 * 2 * 2 = 16. My question is: what value is
Spark expecting in SPARK_EXECUTOR_CORES? The CPU count (16) or the total
number of physical cores (4 * 2 = 8)?

Thanks