Hi,
My Spark job runs without error, but once it completes I get this message,
and the app is logged as an incomplete application in my Spark history server:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
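(That SLF4J warning only means no SLF4J binding was found on the classpath, so log output is silently dropped. A minimal sketch of pulling a binding in with sbt, assuming an sbt build and the log4j binding; the version numbers are only illustrative:)

    // build.sbt fragment (hypothetical): add an SLF4J binding so SLF4J stops
    // falling back to the no-operation (NOP) logger. Versions are illustrative.
    libraryDependencies ++= Seq(
      "org.slf4j" % "slf4j-api"     % "1.7.10",
      "org.slf4j" % "slf4j-log4j12" % "1.7.10"  // routes SLF4J to log4j
    )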
No. Works perfectly.
On Fri, Jul 10, 2015 at 3:38 PM, liangdianpeng liangdianp...@vip.163.com
wrote:
Could it be that a class inside the spark_XXX.jar was damaged?
Sent from NetEase Mail mobile
On 2015-07-11 06:13, Mulugeta Mammo mulugeta.abe...@gmail.com wrote:
Hi,
My spark job runs without error, but once
Which is documented in the configuration guide:
spark.apache.org/docs/latest/configuration.html
On 2 Jul 2015 9:06 pm, Mulugeta Mammo mulugeta.abe...@gmail.com wrote:
Hi,
I'm running Spark 1.4.0. I want to specify the start and max sizes (-Xms
and -Xmx) of the JVM heap for my executors, I
http://spark.apache.org/docs/latest/configuration.html:
spark.executor.memory (default: 512m) - Amount of memory to use per executor
process, in the same format as JVM memory strings (e.g. 512m, 2g).
-Todd
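(For reference, a minimal Scala sketch of how that property is set through SparkConf, assuming the standard Spark 1.x API; the app name is made up and the master is expected to come from spark-submit. Spark passes the one spark.executor.memory value as both -Xms and -Xmx to each executor JVM, so the start and max heap end up equal:)

    import org.apache.spark.{SparkConf, SparkContext}

    object ExecutorHeapExample extends App {
      val conf = new SparkConf()
        .setAppName("executor-heap-example")          // hypothetical app name
        .set("spark.executor.memory", "4g")           // executor heap: -Xms4g -Xmx4g
        .set("spark.executor.extraJavaOptions",       // other JVM flags can go here,
             "-XX:+UseG1GC")                          // but -Xms/-Xmx are rejected by Spark
      val sc = new SparkContext(conf)                 // master assumed to come from spark-submit
      sc.stop()
    }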
On Thu, Jul 2, 2015 at 3:36 PM, Mulugeta Mammo mulugeta.abe...@gmail.com
wrote:
tried that one and it throws an error - extraJavaOptions
On Thu, Jul 2, 2015 at 4:13 PM, Mulugeta Mammo mulugeta.abe...@gmail.com
wrote:
Thanks, but my use case requires that I specify different start and max heap
sizes. It looks like Spark sets the start and max sizes to the same value.
On Thu, Jul 2, 2015 at 1:08 PM, Todd Nist tsind...@gmail.com wrote:
You should use
Which Scala version did you use?
Thanks
On Tue, Jun 2, 2015 at 2:50 PM, Mulugeta Mammo mulugeta.abe...@gmail.com
wrote:
building Spark is throwing errors, any ideas?
[FATAL] Non-resolvable parent POM: Could not transfer artifact
org.apache:apache:pom:14 from/to central (
http://repo.maven.apache.org/maven2): Error transferring file:
repo.maven.apache.org from
Does this build Spark for Hadoop version 2.6.0?
build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests clean
package
Thanks!
and safety fraction default to 0.2
and 0.8 respectively.
I'd test spark.executor.cores with 2, 4, 8, and 16 and see what makes your
job run faster.
--
Ruslan Dautkhanov
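(A minimal Scala sketch of setting the cores value being discussed, assuming the standard Spark 1.x API; the app name is made up and the master is supplied by spark-submit. spark.executor.cores is effectively the number of concurrent task slots per executor; Spark does not distinguish physical cores from hyper-threads, it simply uses whatever count the cluster manager reports, which is usually the logical CPU count:)

    import org.apache.spark.{SparkConf, SparkContext}

    object ExecutorCoresExample extends App {
      val conf = new SparkConf()
        .setAppName("executor-cores-example")   // hypothetical app name
        .set("spark.executor.cores", "8")       // benchmark 2, 4, 8, 16 as suggested above
      val sc = new SparkContext(conf)           // master assumed to come from spark-submit
      sc.stop()
    }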
On Wed, May 27, 2015 at 6:46 PM, Mulugeta Mammo mulugeta.abe...@gmail.com
wrote:
My executor has the following spec (lscpu
Hi guys,
Does SPARK_EXECUTOR_CORES assume hyper-threading? For example, if I
have 4 cores with 2 threads per core, should SPARK_EXECUTOR_CORES be
4*2 = 8 or just 4?
Thanks,
My executor has the following spec (lscpu):
CPU(s): 16
Core(s) per socket: 4
Socket(s): 2
Thread(s) per core: 2
The CPU count is obviously 4*2*2 = 16. My question is: what value is Spark
expecting in SPARK_EXECUTOR_CORES? The CPU count (16) or the total number of
physical cores (4 * 2 = 8)?
Thanks
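(A quick way to see the logical count the JVM itself reports, plain Scala with no Spark needed:)

    object CpuCount extends App {
      // Prints the number of logical processors the JVM sees; on the machine
      // described above that is 16 = 2 sockets x 4 cores/socket x 2 threads/core.
      println(Runtime.getRuntime.availableProcessors)
    }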