[jira] [Updated] (SPARK-16819) Exception in thread “main” org.apache.spark.SparkException: Application application finished with failed status

2016-07-30 Thread Asmaa Ali (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Asmaa Ali  updated SPARK-16819:
---
Description: 
What is the reason for this exception?!

cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit \
  --class SparkBWA \
  --master yarn-cluster \
  --deploy-mode cluster \
  --conf spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar \
  --driver-memory 1500m \
  --executor-memory 1500m \
  --executor-cores 1 \
  --archives ./bwa.zip \
  --verbose \
  ./SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 \
  -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589
Using properties file: /usr/lib/spark/conf/spark-defaults.conf
Adding default property: spark.executor.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.history.fs.logDirectory=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.driver.maxResultSize=1920m
Adding default property: spark.shuffle.service.enabled=true
Adding default property: spark.yarn.historyServer.address=cluster-cancerdetector-m:18080
Adding default property: spark.sql.parquet.cacheMetadata=false
Adding default property: spark.driver.memory=3840m
Adding default property: spark.dynamicAllocation.maxExecutors=1
Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0
Adding default property: spark.yarn.am.memoryOverhead=558
Adding default property: spark.yarn.am.memory=5586m
Adding default property: spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.master=yarn-client
Adding default property: spark.executor.memory=5586m
Adding default property: spark.eventLog.dir=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.dynamicAllocation.enabled=true
Adding default property: spark.executor.cores=2
Adding default property: spark.yarn.executor.memoryOverhead=558
Adding default property: spark.dynamicAllocation.minExecutors=1
Adding default property: spark.dynamicAllocation.initialExecutors=1
Adding default property: spark.akka.frameSize=512
Parsed arguments:
  master                  yarn-cluster
  deployMode              cluster
  executorMemory          1500m
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /usr/lib/spark/conf/spark-defaults.conf
  driverMemory            1500m
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  supervise               false
  queue                   null
  numExecutors            null
  files                   null
  pyFiles                 null
  archives                file:/home/cancerdetector/SparkBWA/build/./bwa.zip
  mainClass               SparkBWA
  primaryResource         file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
  name                    SparkBWA
  childArgs               [-algorithm mem -reads paired -index /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589]
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

Spark properties used, including those specified through --conf and those from the properties file /usr/lib/spark/conf/spark-defaults.conf:
  spark.yarn.am.memoryOverhead -> 558
  spark.driver.memory -> 1500m
  spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
  spark.executor.memory -> 5586m
  spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
  spark.eventLog.enabled -> true
  spark.scheduler.minRegisteredResourcesRatio -> 0.0
  spark.dynamicAllocation.maxExecutors -> 1
  spark.akka.frameSize -> 512
  spark.executor.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.sql.parquet.cacheMetadata -> false
  spark.shuffle.service.enabled -> true
  spark.history.fs.logDirectory -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
  spark.dynamicAllocation.initialExecutors -> 1
  spark.dynamicAllocation.minExecutors -> 1
  spark.yarn.executor.memoryOverhead -> 558
  spark.driver.extraJavaOptions -> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
  spark.yarn.am.memory -> 5586m
  spark.driver.maxResultSize -> 1920m
  spark.master -> yarn-client
  spark.dynamicAllocation.enabled -> true
  spark.executor.cores -> 2


Main class:
org.apache.spark.deploy.yarn.Client
Arguments:
--name
SparkBWA
--driver-memory
1500m
--executor-memory
1500m
--executor-cores
1
--archives
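
The verbose output above ends before the failure itself; in yarn-cluster mode the driver runs inside the YARN ApplicationMaster container, so the real stack trace behind "Application ... finished with failed status" normally shows up in the YARN container logs rather than in the client console. A minimal sketch for pulling them, assuming the YARN CLI is available on the cluster and substituting the real application id printed by the client (the id below is only a placeholder):

  yarn logs -applicationId application_<id> | grep -B 5 -A 30 "Exception"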
