[ https://issues.apache.org/jira/browse/SPARK-16819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Asmaa Ali  updated SPARK-16819:
-------------------------------
    Description: 
What is the reason for this exception?

cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit --class 
SparkBWA --master yarn-cluster --deploy-mode cluster --conf 
spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar --driver-memory 1500m 
--executor-memory 1500m --executor-cores 1 --archives ./bwa.zip --verbose 
./SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 
-partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb Output_ERR000589
Using properties file: /usr/lib/spark/conf/spark-defaults.conf
Adding default property: 
spark.executor.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: 
spark.history.fs.logDirectory=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.eventLog.enabled=true
Adding default property: spark.driver.maxResultSize=1920m
Adding default property: spark.shuffle.service.enabled=true
Adding default property: 
spark.yarn.historyServer.address=cluster-cancerdetector-m:18080
Adding default property: spark.sql.parquet.cacheMetadata=false
Adding default property: spark.driver.memory=3840m
Adding default property: spark.dynamicAllocation.maxExecutors=10000
Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0
Adding default property: spark.yarn.am.memoryOverhead=558
Adding default property: spark.yarn.am.memory=5586m
Adding default property: 
spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
Adding default property: spark.master=yarn-client
Adding default property: spark.executor.memory=5586m
Adding default property: 
spark.eventLog.dir=hdfs://cluster-cancerdetector-m/user/spark/eventlog
Adding default property: spark.dynamicAllocation.enabled=true
Adding default property: spark.executor.cores=2
Adding default property: spark.yarn.executor.memoryOverhead=558
Adding default property: spark.dynamicAllocation.minExecutors=1
Adding default property: spark.dynamicAllocation.initialExecutors=10000
Adding default property: spark.akka.frameSize=512
Parsed arguments:
  master                  yarn-cluster
  deployMode              cluster
  executorMemory          1500m
  executorCores           1
  totalExecutorCores      null
  propertiesFile          /usr/lib/spark/conf/spark-defaults.conf
  driverMemory            1500m
  driverCores             null
  driverExtraClassPath    null
  driverExtraLibraryPath  null
  driverExtraJavaOptions  
-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  supervise               false
  queue                   null
  numExecutors            null
  files                   null
  pyFiles                 null
  archives                file:/home/cancerdetector/SparkBWA/build/./bwa.zip
  mainClass               SparkBWA
  primaryResource         
file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
  name                    SparkBWA
  childArgs               [-algorithm mem -reads paired -index 
/Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq 
ERR000589_2.filt.fastqhb Output_ERR000589]
  jars                    null
  packages                null
  packagesExclusions      null
  repositories            null
  verbose                 true

Spark properties used, including those specified through
 --conf and those from the properties file 
/usr/lib/spark/conf/spark-defaults.conf:
  spark.yarn.am.memoryOverhead -> 558
  spark.driver.memory -> 1500m
  spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
  spark.executor.memory -> 5586m
  spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
  spark.eventLog.enabled -> true
  spark.scheduler.minRegisteredResourcesRatio -> 0.0
  spark.dynamicAllocation.maxExecutors -> 10000
  spark.akka.frameSize -> 512
  spark.executor.extraJavaOptions -> 
-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.sql.parquet.cacheMetadata -> false
  spark.shuffle.service.enabled -> true
  spark.history.fs.logDirectory -> 
hdfs://cluster-cancerdetector-m/user/spark/eventlog
  spark.dynamicAllocation.initialExecutors -> 10000
  spark.dynamicAllocation.minExecutors -> 1
  spark.yarn.executor.memoryOverhead -> 558
  spark.driver.extraJavaOptions -> 
-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
  spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
  spark.yarn.am.memory -> 5586m
  spark.driver.maxResultSize -> 1920m
  spark.master -> yarn-client
  spark.dynamicAllocation.enabled -> true
  spark.executor.cores -> 2

    
Main class:
org.apache.spark.deploy.yarn.Client
Arguments:
--name
SparkBWA
--driver-memory
1500m
--executor-memory
1500m
--executor-cores
1
--archives
file:/home/cancerdetector/SparkBWA/build/./bwa.zip
--jar
file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
--class
SparkBWA
--arg
-algorithm
--arg
mem
--arg
-reads
--arg
paired
--arg
-index
--arg
/Data/HumanBase/hg38
--arg
-partitions
--arg
32
--arg
ERR000589_1.filt.fastq
--arg
ERR000589_2.filt.fastqhb
--arg
Output_ERR000589
System properties:
spark.yarn.am.memoryOverhead -> 558
spark.driver.memory -> 1500m
spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
spark.executor.memory -> 1500m
spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
spark.eventLog.enabled -> true
spark.scheduler.minRegisteredResourcesRatio -> 0.0
SPARK_SUBMIT -> true
spark.dynamicAllocation.maxExecutors -> 10000
spark.akka.frameSize -> 512
spark.sql.parquet.cacheMetadata -> false
spark.executor.extraJavaOptions -> 
-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.app.name -> SparkBWA
spark.shuffle.service.enabled -> true
spark.history.fs.logDirectory -> 
hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.dynamicAllocation.initialExecutors -> 10000
spark.dynamicAllocation.minExecutors -> 1
spark.yarn.executor.memoryOverhead -> 558
spark.driver.extraJavaOptions -> 
-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.submit.deployMode -> cluster
spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.yarn.am.memory -> 5586m
spark.driver.maxResultSize -> 1920m
spark.master -> yarn-cluster
spark.dynamicAllocation.enabled -> true
spark.executor.cores -> 1
Classpath elements:
spark.yarn.am.memory is set but does not apply in cluster mode.
spark.yarn.am.memoryOverhead is set but does not apply in cluster mode.
16/07/31 01:12:39 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032
16/07/31 01:12:40 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: Submitted application application_1467990031555_0106
Exception in thread "main" org.apache.spark.SparkException: Application application_1467990031555_0106 finished with failed status
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
        at org.apache.spark.deploy.yarn.Client.main(Client.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


When I tried to check the AM and executor logs, the command didn't work (even though I have set yarn.log-aggregation-enable to true), so I tried to access the NM's log directory manually to see the detailed application logs. Here are the application logs from the NM's log file:


2016-07-30 19:37:23,620 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocate blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW],
 
ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]}
 for 
/user/cancerdetector/.sparkStaging/application_1467990031555_0105/SparkBWA.jar
2016-07-30 19:37:23,807 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.4:50010 is added to 
blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW],
 
ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]}
 size 0
2016-07-30 19:37:23,807 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.3:50010 is added to 
blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW],
 
ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]}
 size 0
2016-07-30 19:37:23,812 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/user/cancerdetector/.sparkStaging/application_1467990031555_0105/SparkBWA.jar 
is closed by DFSClient_NONMAPREDUCE_606595546_1
2016-07-30 19:37:23,843 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocate blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 for /user/cancerdetector/.sparkStaging/application_1467990031555_0105/bwa.zip
2016-07-30 19:37:23,862 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.4:50010 is added to 
blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 size 0
2016-07-30 19:37:23,862 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.3:50010 is added to 
blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 size 0
2016-07-30 19:37:23,864 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/user/cancerdetector/.sparkStaging/application_1467990031555_0105/bwa.zip is 
closed by DFSClient_NONMAPREDUCE_606595546_1
2016-07-30 19:37:23,911 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocate blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 for 
/user/cancerdetector/.sparkStaging/application_1467990031555_0105/__spark_conf__3335387778472809466.zip
2016-07-30 19:37:23,922 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.4:50010 is added to 
blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 size 0
2016-07-30 19:37:23,922 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.3:50010 is added to 
blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 size 0
2016-07-30 19:37:23,925 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/user/cancerdetector/.sparkStaging/application_1467990031555_0105/__spark_conf__3335387778472809466.zip
 is closed by DFSClient_NONMAPREDUCE_606595546_1
2016-07-30 19:37:26,235 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742332_1508 10.132.0.3:50010 10.132.0.4:50010 
2016-07-30 19:37:26,236 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742334_1510 10.132.0.3:50010 10.132.0.4:50010 
2016-07-30 19:37:26,236 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742333_1509 10.132.0.3:50010 10.132.0.4:50010 
2016-07-30 19:37:26,961 INFO BlockStateChange: BLOCK* BlockManager: ask 
10.132.0.3:50010 to delete [blk_1073742332_1508, blk_1073742333_1509, 
blk_1073742334_1510]
2016-07-30 19:37:28,791 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.1b2f4ed4-0992-4bf3-a453-4c02e9ce00fe is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:37:29,961 INFO BlockStateChange: BLOCK* BlockManager: ask 
10.132.0.4:50010 to delete [blk_1073742332_1508, blk_1073742333_1509, 
blk_1073742334_1510]
2016-07-30 19:37:38,799 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.a0ca1b29-3022-4d1c-a868-4710d56903f9 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:37:48,806 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.fa70676f-ce52-4ddf-8fb6-1649284f5da0 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:37:58,814 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.7550f1fe-81e1-4a4f-9a72-5210dbae1a31 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:08,819 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 674 Total time for transactions(ms): 12 Number of 
transactions batched in Syncs: 0 Number of syncs: 668 SyncTimes(ms): 628 
2016-07-30 19:38:08,822 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.f6d27b3c-f60d-4c70-b9eb-9a682c783cf9 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:18,830 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.33f22e09-343f-4192-b194-a4617ba6fde5 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:28,838 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.9a90102c-bb41-42e8-ab5f-285e74f14388 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:38,846 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.f9a82533-de04-4da8-9054-f7f74f781351 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:48,854 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.96d8dfad-bcfa-4116-b159-62caa493208d is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:58,862 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.2c24d60a-c76e-4c6e-a6f2-868b6f7d746b is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:08,867 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 692 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 686 SyncTimes(ms): 643 
2016-07-30 19:39:08,870 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.200cfa9e-9429-4c9f-9227-aad743d833d7 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:18,878 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.b2c007fb-0334-4539-b83f-152069a0cde9 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:28,885 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.c5cc9039-11de-4a18-aa1d-95d16db8dcf9 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:38,893 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5b18a8cc-18d2-404e-aed4-799257e460d2 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:48,901 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.82de795e-9c85-4b03-b596-d6dcdee6eaa3 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:58,909 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.c724a7b0-722b-4207-b946-f859fe2f10cc is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:08,914 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 710 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 704 SyncTimes(ms): 659 
2016-07-30 19:40:08,917 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.46ce84b2-885c-497a-8b9f-8f3202a317c2 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:18,925 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5fa59a96-cda0-4820-b1ec-38d120ff5dca is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:29,006 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.0f45738e-9626-4713-b39d-3883f0408146 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:39,014 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.005ce47c-ef57-4d4c-9a2f-57c32927aca1 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:49,023 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.1f889794-c1e6-4054-a533-7f43ee06966b is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:59,029 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.bc953f0d-287e-4745-b862-cfdd713e3777 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:09,034 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 728 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 722 SyncTimes(ms): 675 
2016-07-30 19:41:09,038 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5129bf62-08d3-4171-9591-57a5b004bb34 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:19,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.2f78852f-309c-45ef-ae9e-38b46c705e98 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:29,052 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.b0dc7906-651d-4b26-b683-1799b325ba8d is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:39,059 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.cbcca99f-bedc-43d8-a890-a69c18b29b43 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:49,067 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.7ea8f3d6-dfd8-4080-8a45-a42419303fa0 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:59,074 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.1f22a8fd-ccb7-4138-b9f9-ab1ff1963b02 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:09,078 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 746 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 740 SyncTimes(ms): 691 
2016-07-30 19:42:09,081 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.6b4b0b45-00bf-47d6-bc2b-9dc149e10f01 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:19,089 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.2a1a8c1e-1b8b-485d-a108-41ea8087bafe is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:29,096 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.8c1b7511-83b2-4584-ab14-408a9e85d0c4 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:39,103 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.216d4363-b070-47c3-97ac-f0eac64ed411 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:49,110 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5d5cbb0a-8cad-41be-ba17-388b9fc955c4 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:59,117 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.c805224b-1833-4dba-8cf5-80164b3ecd7b is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:09,121 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 764 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 758 SyncTimes(ms): 707 
2016-07-30 19:43:09,125 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.51715f99-5d67-4fa7-907b-7522fcca03c2 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:19,132 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.95920ee2-d9e2-41f4-a9f6-a495560af73f is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:29,141 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.0d4d5099-21d1-4e3f-84e0-7623511c542c is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:39,148 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.b6c93d4f-040c-4b9e-a89e-15313efd13ce is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:49,157 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.35ed35e6-2c7d-4a45-ae4f-afaf538afc78 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:59,164 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.49c44bf3-ea11-4df1-ac71-a26203e9abba is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:09,170 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 782 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 776 SyncTimes(ms): 725 
2016-07-30 19:44:09,173 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.060f7d11-d341-4cab-8925-9b6203316744 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:19,181 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.666c8d61-405e-49bd-b2d0-939c920b6cd2 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:29,188 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.433f0daa-3386-44a6-b6b1-0285e9f5b176 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:39,197 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.1e840f6a-999b-4e1d-8eda-a95c409e351c is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:49,206 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.c0df4079-d352-4aae-8392-9596f355c408 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:59,215 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.df28952f-5a2a-411b-b72d-49380b1ac88e is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:09,221 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 800 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 794 SyncTimes(ms): 743 
2016-07-30 19:45:09,224 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5515b3ca-de5d-46df-a49c-c07d5c09969d is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:19,234 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.3991cd72-3fb2-48a4-8083-5327d82be73b is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:29,243 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5233a5f3-e15a-4bae-9baa-5d81b5da0459 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:39,252 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.6097d0dd-d3c4-482e-8fb7-baa22602fb53 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:49,261 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.f94ba7f7-313b-4387-a447-59214ddf6ecc is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:59,269 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.864e23f9-2b2a-44f4-b11d-e4d48249c7f3 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:09,275 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 818 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 812 SyncTimes(ms): 761 
2016-07-30 19:46:09,278 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.96da4a90-afeb-4fe9-84eb-2d759785d428 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:19,288 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a5003566-95ab-4a4b-a1f3-7de6302d26a0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:29,296 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.65acb8c8-9f16-49cc-951c-01dadd298e86 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:39,306 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a3281e71-6713-422e-b43f-5cd9500f8dd2 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:49,314 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.343366ba-49fc-4b9a-b4d8-7a6b6c8683e0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:59,323 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.88395bcf-4668-4a9e-8586-69de43e7e0b9 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:09,328 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 836 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 830 SyncTimes(ms): 779 
2016-07-30 19:47:09,331 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.c3c51904-dda5-4e25-af9c-6188615063d5 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:19,338 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.163b0948-28a7-4a1e-9a09-a8de560d0200 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:29,346 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b726de12-176b-47c0-964c-0fa5196f626f is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:39,354 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.8eddb5b2-746c-4852-8b15-f15c7a7068f0 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:49,364 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.0f467a89-42d3-4b2e-ae9e-36714bd417f3 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:59,374 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.074a8130-f4ba-4be6-a614-e39a252cd57b is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:48:09,380 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 854 Total time for transactions(ms): 14 Number of transactions batched in Syncs: 0 Number of syncs: 848 SyncTimes(ms): 797 
2016-07-30 19:48:09,383 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.b7e19c12-827d-4ca7-87e9-1e3b8ab01c01 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:48:19,391 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.a06d3b48-6a08-4294-9b6a-7c8ffcebef52 is closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:48:29,401 INFO org.apache.hadoop.hdfs.StateChange: DIR* completeFile: /user/spark/eventlog/.89fc2955-136b-400f-a78c-c439726e4964 is closed by DFSClient_NONMAPREDUCE_-1615501432_1

  was:
What is the reason of this exception ?!


cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit --class 
SparkBWA --master yarn-cluster --
deploy-mode cluster --conf spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar 
--driver-memory 1500m --executor-memory 1500m --executor-cores 1 --archives 
./bwa.zip --verbose ./SparkBWA.jar -algorithm mem -reads paired -index 
/Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq 
ERR000589_2.filt.fastqhb Output_ERR000589-> added --deploy-mode cluster
--arg
mem
--arg
-reads
--arg
paired
--arg
-index
--arg
/Data/HumanBase/hg38
--arg
-partitions
--arg
32
--arg
ERR000589_1.filt.fastq
--arg
ERR000589_2.filt.fastqhb
--arg
Output_ERR000589-
--arg
--deploy-mode
--arg
cluster
System properties:
spark.yarn.am.memoryOverhead -> 558
spark.driver.memory -> 1500m
spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
spark.executor.memory -> 1500m
spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
spark.eventLog.enabled -> true
spark.scheduler.minRegisteredResourcesRatio -> 0.0
SPARK_SUBMIT -> true
spark.dynamicAllocation.maxExecutors -> 10000
spark.akka.frameSize -> 512
spark.sql.parquet.cacheMetadata -> false
spark.executor.extraJavaOptions -> 
-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.app.name -> SparkBWA
spark.shuffle.service.enabled -> true
spark.history.fs.logDirectory -> 
hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.dynamicAllocation.initialExecutors -> 10000
spark.dynamicAllocation.minExecutors -> 1
spark.yarn.executor.memoryOverhead -> 558
spark.driver.extraJavaOptions -> 
-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
spark.submit.deployMode -> cluster
spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
spark.yarn.am.memory -> 5586m
spark.driver.maxResultSize -> 1920m
spark.master -> yarn-cluster
spark.dynamicAllocation.enabled -> true
spark.executor.cores -> 1
Classpath elements:
16/07/30 19:37:22 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to 
ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032
16/07/30 19:37:24 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: 
Submitted application application_1467990031555_0105
Exception in thread "main" org.apache.spark.SparkException: Application 
application_1467990031555_0105 finished 
with failed status
        at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
        at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
        at org.apache.spark.deploy.yarn.Client.main(Client.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


When I tried to check the AM and executor logs, the command didn't work (even 
though I have set yarn.log-aggregation-enable to true), so I tried to manually 
access the NM's log dir to see the detailed application logs. Here are the 
application logs from the NM's log file:
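For reference, once log aggregation is enabled and the application has finished, the AM and executor logs can usually be pulled with the standard yarn CLI rather than by browsing the NM's log dir by hand (sketch only; the application ID is the one from this run, and the container ID in the commented line is a placeholder):

```shell
# Fetch the aggregated AM/executor logs for the failed application.
# Requires yarn.log-aggregation-enable=true on the NodeManagers and that
# aggregation has completed for this application.
yarn logs -applicationId application_1467990031555_0105

# To narrow the output to a single container, pass its ID as well
# (placeholder shown; use an actual container ID from the application):
# yarn logs -applicationId application_1467990031555_0105 -containerId <container_id>
```

If aggregation has not finished (or is misconfigured), this command reports that the logs are unavailable, which would explain why it "didn't work" shortly after the failure.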


2016-07-30 19:37:23,620 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocate blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW],
 
ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]}
 for 
/user/cancerdetector/.sparkStaging/application_1467990031555_0105/SparkBWA.jar
2016-07-30 19:37:23,807 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.4:50010 is added to 
blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW],
 
ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]}
 size 0
2016-07-30 19:37:23,807 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.3:50010 is added to 
blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW],
 
ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]}
 size 0
2016-07-30 19:37:23,812 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/user/cancerdetector/.sparkStaging/application_1467990031555_0105/SparkBWA.jar 
is closed by DFSClient_NONMAPREDUCE_606595546_1
2016-07-30 19:37:23,843 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocate blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 for /user/cancerdetector/.sparkStaging/application_1467990031555_0105/bwa.zip
2016-07-30 19:37:23,862 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.4:50010 is added to 
blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 size 0
2016-07-30 19:37:23,862 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.3:50010 is added to 
blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 size 0
2016-07-30 19:37:23,864 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/user/cancerdetector/.sparkStaging/application_1467990031555_0105/bwa.zip is 
closed by DFSClient_NONMAPREDUCE_606595546_1
2016-07-30 19:37:23,911 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
allocate blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 for 
/user/cancerdetector/.sparkStaging/application_1467990031555_0105/__spark_conf__3335387778472809466.zip
2016-07-30 19:37:23,922 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.4:50010 is added to 
blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 size 0
2016-07-30 19:37:23,922 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap 
updated: 10.132.0.3:50010 is added to 
blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
primaryNodeIndex=-1, 
replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
 
ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
 size 0
2016-07-30 19:37:23,925 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: 
/user/cancerdetector/.sparkStaging/application_1467990031555_0105/__spark_conf__3335387778472809466.zip
 is closed by DFSClient_NONMAPREDUCE_606595546_1
2016-07-30 19:37:26,235 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742332_1508 10.132.0.3:50010 10.132.0.4:50010 
2016-07-30 19:37:26,236 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742334_1510 10.132.0.3:50010 10.132.0.4:50010 
2016-07-30 19:37:26,236 INFO BlockStateChange: BLOCK* addToInvalidates: 
blk_1073742333_1509 10.132.0.3:50010 10.132.0.4:50010 
2016-07-30 19:37:26,961 INFO BlockStateChange: BLOCK* BlockManager: ask 
10.132.0.3:50010 to delete [blk_1073742332_1508, blk_1073742333_1509, 
blk_1073742334_1510]
2016-07-30 19:37:28,791 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.1b2f4ed4-0992-4bf3-a453-4c02e9ce00fe is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:37:29,961 INFO BlockStateChange: BLOCK* BlockManager: ask 
10.132.0.4:50010 to delete [blk_1073742332_1508, blk_1073742333_1509, 
blk_1073742334_1510]
2016-07-30 19:37:38,799 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.a0ca1b29-3022-4d1c-a868-4710d56903f9 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:37:48,806 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.fa70676f-ce52-4ddf-8fb6-1649284f5da0 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:37:58,814 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.7550f1fe-81e1-4a4f-9a72-5210dbae1a31 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:08,819 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 674 Total time for transactions(ms): 12 Number of 
transactions batched in Syncs: 0 Number of syncs: 668 SyncTimes(ms): 628 
2016-07-30 19:38:08,822 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.f6d27b3c-f60d-4c70-b9eb-9a682c783cf9 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:18,830 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.33f22e09-343f-4192-b194-a4617ba6fde5 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:28,838 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.9a90102c-bb41-42e8-ab5f-285e74f14388 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:38,846 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.f9a82533-de04-4da8-9054-f7f74f781351 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:48,854 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.96d8dfad-bcfa-4116-b159-62caa493208d is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:38:58,862 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.2c24d60a-c76e-4c6e-a6f2-868b6f7d746b is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:08,867 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 692 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 686 SyncTimes(ms): 643 
2016-07-30 19:39:08,870 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.200cfa9e-9429-4c9f-9227-aad743d833d7 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:18,878 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.b2c007fb-0334-4539-b83f-152069a0cde9 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:28,885 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.c5cc9039-11de-4a18-aa1d-95d16db8dcf9 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:38,893 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5b18a8cc-18d2-404e-aed4-799257e460d2 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:48,901 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.82de795e-9c85-4b03-b596-d6dcdee6eaa3 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:39:58,909 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.c724a7b0-722b-4207-b946-f859fe2f10cc is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:08,914 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 710 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 704 SyncTimes(ms): 659 
2016-07-30 19:40:08,917 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.46ce84b2-885c-497a-8b9f-8f3202a317c2 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:18,925 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5fa59a96-cda0-4820-b1ec-38d120ff5dca is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:29,006 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.0f45738e-9626-4713-b39d-3883f0408146 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:39,014 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.005ce47c-ef57-4d4c-9a2f-57c32927aca1 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:49,023 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.1f889794-c1e6-4054-a533-7f43ee06966b is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:40:59,029 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.bc953f0d-287e-4745-b862-cfdd713e3777 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:09,034 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 728 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 722 SyncTimes(ms): 675 
2016-07-30 19:41:09,038 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5129bf62-08d3-4171-9591-57a5b004bb34 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:19,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.2f78852f-309c-45ef-ae9e-38b46c705e98 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:29,052 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.b0dc7906-651d-4b26-b683-1799b325ba8d is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:39,059 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.cbcca99f-bedc-43d8-a890-a69c18b29b43 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:49,067 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.7ea8f3d6-dfd8-4080-8a45-a42419303fa0 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:41:59,074 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.1f22a8fd-ccb7-4138-b9f9-ab1ff1963b02 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:09,078 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 746 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 740 SyncTimes(ms): 691 
2016-07-30 19:42:09,081 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.6b4b0b45-00bf-47d6-bc2b-9dc149e10f01 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:19,089 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.2a1a8c1e-1b8b-485d-a108-41ea8087bafe is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:29,096 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.8c1b7511-83b2-4584-ab14-408a9e85d0c4 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:39,103 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.216d4363-b070-47c3-97ac-f0eac64ed411 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:49,110 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5d5cbb0a-8cad-41be-ba17-388b9fc955c4 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:42:59,117 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.c805224b-1833-4dba-8cf5-80164b3ecd7b is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:09,121 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 764 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 758 SyncTimes(ms): 707 
2016-07-30 19:43:09,125 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.51715f99-5d67-4fa7-907b-7522fcca03c2 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:19,132 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.95920ee2-d9e2-41f4-a9f6-a495560af73f is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:29,141 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.0d4d5099-21d1-4e3f-84e0-7623511c542c is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:39,148 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.b6c93d4f-040c-4b9e-a89e-15313efd13ce is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:49,157 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.35ed35e6-2c7d-4a45-ae4f-afaf538afc78 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:43:59,164 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.49c44bf3-ea11-4df1-ac71-a26203e9abba is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:09,170 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 782 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 776 SyncTimes(ms): 725 
2016-07-30 19:44:09,173 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.060f7d11-d341-4cab-8925-9b6203316744 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:19,181 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.666c8d61-405e-49bd-b2d0-939c920b6cd2 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:29,188 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.433f0daa-3386-44a6-b6b1-0285e9f5b176 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:39,197 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.1e840f6a-999b-4e1d-8eda-a95c409e351c is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:49,206 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.c0df4079-d352-4aae-8392-9596f355c408 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:44:59,215 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.df28952f-5a2a-411b-b72d-49380b1ac88e is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:09,221 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 800 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 794 SyncTimes(ms): 743 
2016-07-30 19:45:09,224 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5515b3ca-de5d-46df-a49c-c07d5c09969d is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:19,234 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.3991cd72-3fb2-48a4-8083-5327d82be73b is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:29,243 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.5233a5f3-e15a-4bae-9baa-5d81b5da0459 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:39,252 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.6097d0dd-d3c4-482e-8fb7-baa22602fb53 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:49,261 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.f94ba7f7-313b-4387-a447-59214ddf6ecc is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:45:59,269 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.864e23f9-2b2a-44f4-b11d-e4d48249c7f3 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:09,275 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 818 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 812 SyncTimes(ms): 761 
2016-07-30 19:46:09,278 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.96da4a90-afeb-4fe9-84eb-2d759785d428 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:19,288 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.a5003566-95ab-4a4b-a1f3-7de6302d26a0 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:29,296 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.65acb8c8-9f16-49cc-951c-01dadd298e86 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:39,306 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.a3281e71-6713-422e-b43f-5cd9500f8dd2 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:49,314 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.343366ba-49fc-4b9a-b4d8-7a6b6c8683e0 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:46:59,323 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.88395bcf-4668-4a9e-8586-69de43e7e0b9 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:09,328 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 836 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 830 SyncTimes(ms): 779 
2016-07-30 19:47:09,331 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.c3c51904-dda5-4e25-af9c-6188615063d5 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:19,338 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.163b0948-28a7-4a1e-9a09-a8de560d0200 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:29,346 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.b726de12-176b-47c0-964c-0fa5196f626f is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:39,354 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.8eddb5b2-746c-4852-8b15-f15c7a7068f0 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:49,364 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.0f467a89-42d3-4b2e-ae9e-36714bd417f3 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:47:59,374 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.074a8130-f4ba-4be6-a614-e39a252cd57b is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:48:09,380 INFO org.apache.hadoop.hdfs.server.namenode.FSEditLog: 
Number of transactions: 854 Total time for transactions(ms): 14 Number of 
transactions batched in Syncs: 0 Number of syncs: 848 SyncTimes(ms): 797 
2016-07-30 19:48:09,383 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.b7e19c12-827d-4ca7-87e9-1e3b8ab01c01 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:48:19,391 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.a06d3b48-6a08-4294-9b6a-7c8ffcebef52 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1
2016-07-30 19:48:29,401 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
completeFile: /user/spark/eventlog/.89fc2955-136b-400f-a78c-c439726e4964 is 
closed by DFSClient_NONMAPREDUCE_-1615501432_1


> Exception in thread "main" org.apache.spark.SparkException: Application 
> application finished with failed status
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-16819
>                 URL: https://issues.apache.org/jira/browse/SPARK-16819
>             Project: Spark
>          Issue Type: Question
>          Components: Streaming, YARN
>            Reporter: Asmaa Ali 
>              Labels: beginner
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> What is the reason of this exception ?!
> cancerdetector@cluster-cancerdetector-m:~/SparkBWA/build$ spark-submit 
> --class SparkBWA --master yarn-cluster --deploy-mode cluster --conf 
> spark.yarn.jar=hdfs:///user/spark/spark-assembly.jar --driver-memory 1500m 
> --executor-memory 1500m --executor-cores 1 --archives ./bwa.zip --verbose 
> ./SparkBWA.jar -algorithm mem -reads paired -index /Data/HumanBase/hg38 
> -partitions 32 ERR000589_1.filt.fastq ERR000589_2.filt.fastqhb 
> Output_ERR000589
> Using properties file: /usr/lib/spark/conf/spark-defaults.conf
> Adding default property: 
> spark.executor.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> Adding default property: 
> spark.history.fs.logDirectory=hdfs://cluster-cancerdetector-m/user/spark/eventlog
> Adding default property: spark.eventLog.enabled=true
> Adding default property: spark.driver.maxResultSize=1920m
> Adding default property: spark.shuffle.service.enabled=true
> Adding default property: 
> spark.yarn.historyServer.address=cluster-cancerdetector-m:18080
> Adding default property: spark.sql.parquet.cacheMetadata=false
> Adding default property: spark.driver.memory=3840m
> Adding default property: spark.dynamicAllocation.maxExecutors=10000
> Adding default property: spark.scheduler.minRegisteredResourcesRatio=0.0
> Adding default property: spark.yarn.am.memoryOverhead=558
> Adding default property: spark.yarn.am.memory=5586m
> Adding default property: 
> spark.driver.extraJavaOptions=-Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> Adding default property: spark.master=yarn-client
> Adding default property: spark.executor.memory=5586m
> Adding default property: 
> spark.eventLog.dir=hdfs://cluster-cancerdetector-m/user/spark/eventlog
> Adding default property: spark.dynamicAllocation.enabled=true
> Adding default property: spark.executor.cores=2
> Adding default property: spark.yarn.executor.memoryOverhead=558
> Adding default property: spark.dynamicAllocation.minExecutors=1
> Adding default property: spark.dynamicAllocation.initialExecutors=10000
> Adding default property: spark.akka.frameSize=512
> Parsed arguments:
>   master                  yarn-cluster
>   deployMode              cluster
>   executorMemory          1500m
>   executorCores           1
>   totalExecutorCores      null
>   propertiesFile          /usr/lib/spark/conf/spark-defaults.conf
>   driverMemory            1500m
>   driverCores             null
>   driverExtraClassPath    null
>   driverExtraLibraryPath  null
>   driverExtraJavaOptions  
> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
>   supervise               false
>   queue                   null
>   numExecutors            null
>   files                   null
>   pyFiles                 null
>   archives                file:/home/cancerdetector/SparkBWA/build/./bwa.zip
>   mainClass               SparkBWA
>   primaryResource         
> file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
>   name                    SparkBWA
>   childArgs               [-algorithm mem -reads paired -index 
> /Data/HumanBase/hg38 -partitions 32 ERR000589_1.filt.fastq 
> ERR000589_2.filt.fastqhb Output_ERR000589]
>   jars                    null
>   packages                null
>   packagesExclusions      null
>   repositories            null
>   verbose                 true
> Spark properties used, including those specified through
>  --conf and those from the properties file 
> /usr/lib/spark/conf/spark-defaults.conf:
>   spark.yarn.am.memoryOverhead -> 558
>   spark.driver.memory -> 1500m
>   spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
>   spark.executor.memory -> 5586m
>   spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
>   spark.eventLog.enabled -> true
>   spark.scheduler.minRegisteredResourcesRatio -> 0.0
>   spark.dynamicAllocation.maxExecutors -> 10000
>   spark.akka.frameSize -> 512
>   spark.executor.extraJavaOptions -> 
> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
>   spark.sql.parquet.cacheMetadata -> false
>   spark.shuffle.service.enabled -> true
>   spark.history.fs.logDirectory -> 
> hdfs://cluster-cancerdetector-m/user/spark/eventlog
>   spark.dynamicAllocation.initialExecutors -> 10000
>   spark.dynamicAllocation.minExecutors -> 1
>   spark.yarn.executor.memoryOverhead -> 558
>   spark.driver.extraJavaOptions -> 
> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
>   spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
>   spark.yarn.am.memory -> 5586m
>   spark.driver.maxResultSize -> 1920m
>   spark.master -> yarn-client
>   spark.dynamicAllocation.enabled -> true
>   spark.executor.cores -> 2
>     
> Main class:
> org.apache.spark.deploy.yarn.Client
> Arguments:
> --name
> SparkBWA
> --driver-memory
> 1500m
> --executor-memory
> 1500m
> --executor-cores
> 1
> --archives
> file:/home/cancerdetector/SparkBWA/build/./bwa.zip
> --jar
> file:/home/cancerdetector/SparkBWA/build/./SparkBWA.jar
> --class
> SparkBWA
> --arg
> -algorithm
> --arg
> mem
> --arg
> -reads
> --arg
> paired
> --arg
> -index
> --arg
> /Data/HumanBase/hg38
> --arg
> -partitions
> --arg
> 32
> --arg
> ERR000589_1.filt.fastq
> --arg
> ERR000589_2.filt.fastqhb
> --arg
> Output_ERR000589
> System properties:
> spark.yarn.am.memoryOverhead -> 558
> spark.driver.memory -> 1500m
> spark.yarn.jar -> hdfs:///user/spark/spark-assembly.jar
> spark.executor.memory -> 1500m
> spark.yarn.historyServer.address -> cluster-cancerdetector-m:18080
> spark.eventLog.enabled -> true
> spark.scheduler.minRegisteredResourcesRatio -> 0.0
> SPARK_SUBMIT -> true
> spark.dynamicAllocation.maxExecutors -> 10000
> spark.akka.frameSize -> 512
> spark.sql.parquet.cacheMetadata -> false
> spark.executor.extraJavaOptions -> 
> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> spark.app.name -> SparkBWA
> spark.shuffle.service.enabled -> true
> spark.history.fs.logDirectory -> 
> hdfs://cluster-cancerdetector-m/user/spark/eventlog
> spark.dynamicAllocation.initialExecutors -> 10000
> spark.dynamicAllocation.minExecutors -> 1
> spark.yarn.executor.memoryOverhead -> 558
> spark.driver.extraJavaOptions -> 
> -Xbootclasspath/p:/usr/local/share/google/alpn/alpn-boot-8.1.7.v20160121.jar
> spark.submit.deployMode -> cluster
> spark.eventLog.dir -> hdfs://cluster-cancerdetector-m/user/spark/eventlog
> spark.yarn.am.memory -> 5586m
> spark.driver.maxResultSize -> 1920m
> spark.master -> yarn-cluster
> spark.dynamicAllocation.enabled -> true
> spark.executor.cores -> 1
> Classpath elements:
> spark.yarn.am.memory is set but does not apply in cluster mode.
> spark.yarn.am.memoryOverhead is set but does not apply in cluster mode.
> 16/07/31 01:12:39 INFO org.apache.hadoop.yarn.client.RMProxy: Connecting to 
> ResourceManager at cluster-cancerdetector-m/10.132.0.2:8032
> 16/07/31 01:12:40 INFO org.apache.hadoop.yarn.client.api.impl.YarnClientImpl: 
> Submitted application application_1467990031555_0106
> Exception in thread "main" org.apache.spark.SparkException: Application 
> application_1467990031555_0106 finished 
> with failed status
>         at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
>         at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
>         at org.apache.spark.deploy.yarn.Client.main(Client.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at 
> org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
>         at 
> org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> When I tried to check the AM and executor logs, the command didn't work (even 
> though I had set yarn.log-aggregation-enable to true), so I tried to manually 
> access the NM's log directory to see the detailed application logs. Here are 
> the application logs from the NM's log file:
> 2016-07-30 19:37:23,620 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW],
>  
> ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]}
>  for 
> /user/cancerdetector/.sparkStaging/application_1467990031555_0105/SparkBWA.jar
> 2016-07-30 19:37:23,807 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.132.0.4:50010 is added to 
> blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW],
>  
> ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]}
>  size 0
> 2016-07-30 19:37:23,807 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.132.0.3:50010 is added to 
> blk_1073742332_1508{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW],
>  
> ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW]]}
>  size 0
> 2016-07-30 19:37:23,812 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: 
> /user/cancerdetector/.sparkStaging/application_1467990031555_0105/SparkBWA.jar
>  is closed by DFSClient_NONMAPREDUCE_606595546_1
> 2016-07-30 19:37:23,843 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
>  
> ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
>  for /user/cancerdetector/.sparkStaging/application_1467990031555_0105/bwa.zip
> 2016-07-30 19:37:23,862 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.132.0.4:50010 is added to 
> blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
>  
> ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
>  size 0
> 2016-07-30 19:37:23,862 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.132.0.3:50010 is added to 
> blk_1073742333_1509{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
>  
> ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
>  size 0
> 2016-07-30 19:37:23,864 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: 
> /user/cancerdetector/.sparkStaging/application_1467990031555_0105/bwa.zip is 
> closed by DFSClient_NONMAPREDUCE_606595546_1
> 2016-07-30 19:37:23,911 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> allocate blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
>  
> ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
>  for 
> /user/cancerdetector/.sparkStaging/application_1467990031555_0105/__spark_conf__3335387778472809466.zip
> 2016-07-30 19:37:23,922 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.132.0.4:50010 is added to 
> blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
>  
> ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
>  size 0
> 2016-07-30 19:37:23,922 INFO BlockStateChange: BLOCK* addStoredBlock: 
> blockMap updated: 10.132.0.3:50010 is added to 
> blk_1073742334_1510{UCState=UNDER_CONSTRUCTION, truncateBlock=null, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUC[[DISK]DS-19f52f20-0053-443d-bf33-dd636d8b2d07:NORMAL:10.132.0.3:50010|RBW],
>  
> ReplicaUC[[DISK]DS-6b7272d9-24d2-4d77-85e2-49c492bd12a4:NORMAL:10.132.0.4:50010|RBW]]}
>  size 0
> 2016-07-30 19:37:23,925 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: 
> /user/cancerdetector/.sparkStaging/application_1467990031555_0105/__spark_conf__3335387778472809466.zip
>  is closed by DFSClient_NONMAPREDUCE_606595546_1
> 2016-07-30 19:37:26,235 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742332_1508 10.132.0.3:50010 10.132.0.4:50010 
> 2016-07-30 19:37:26,236 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742334_1510 10.132.0.3:50010 10.132.0.4:50010 
> 2016-07-30 19:37:26,236 INFO BlockStateChange: BLOCK* addToInvalidates: 
> blk_1073742333_1509 10.132.0.3:50010 10.132.0.4:50010 
> 2016-07-30 19:37:26,961 INFO BlockStateChange: BLOCK* BlockManager: ask 
> 10.132.0.3:50010 to delete [blk_1073742332_1508, blk_1073742333_1509, 
> blk_1073742334_1510]
> 2016-07-30 19:37:28,791 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.1b2f4ed4-0992-4bf3-a453-4c02e9ce00fe is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:37:29,961 INFO BlockStateChange: BLOCK* BlockManager: ask 
> 10.132.0.4:50010 to delete [blk_1073742332_1508, blk_1073742333_1509, 
> blk_1073742334_1510]
> 2016-07-30 19:37:38,799 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.a0ca1b29-3022-4d1c-a868-4710d56903f9 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:37:48,806 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.fa70676f-ce52-4ddf-8fb6-1649284f5da0 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:37:58,814 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.7550f1fe-81e1-4a4f-9a72-5210dbae1a31 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:38:08,819 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 674 
> Total time for transactions(ms): 12 Number of transactions batched in Syncs: 
> 0 Number of syncs: 668 SyncTimes(ms): 628 
> 2016-07-30 19:38:08,822 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.f6d27b3c-f60d-4c70-b9eb-9a682c783cf9 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:38:18,830 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.33f22e09-343f-4192-b194-a4617ba6fde5 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:38:28,838 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.9a90102c-bb41-42e8-ab5f-285e74f14388 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:38:38,846 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.f9a82533-de04-4da8-9054-f7f74f781351 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:38:48,854 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.96d8dfad-bcfa-4116-b159-62caa493208d is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:38:58,862 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.2c24d60a-c76e-4c6e-a6f2-868b6f7d746b is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:39:08,867 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 692 
> Total time for transactions(ms): 14 Number of transactions batched in Syncs: 
> 0 Number of syncs: 686 SyncTimes(ms): 643 
> 2016-07-30 19:39:08,870 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.200cfa9e-9429-4c9f-9227-aad743d833d7 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:39:18,878 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.b2c007fb-0334-4539-b83f-152069a0cde9 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:39:28,885 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.c5cc9039-11de-4a18-aa1d-95d16db8dcf9 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:39:38,893 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.5b18a8cc-18d2-404e-aed4-799257e460d2 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:39:48,901 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.82de795e-9c85-4b03-b596-d6dcdee6eaa3 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:39:58,909 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.c724a7b0-722b-4207-b946-f859fe2f10cc is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:40:08,914 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 710 
> Total time for transactions(ms): 14 Number of transactions batched in Syncs: 
> 0 Number of syncs: 704 SyncTimes(ms): 659 
> 2016-07-30 19:40:08,917 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.46ce84b2-885c-497a-8b9f-8f3202a317c2 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:40:18,925 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.5fa59a96-cda0-4820-b1ec-38d120ff5dca is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:40:29,006 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.0f45738e-9626-4713-b39d-3883f0408146 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:40:39,014 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.005ce47c-ef57-4d4c-9a2f-57c32927aca1 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:40:49,023 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.1f889794-c1e6-4054-a533-7f43ee06966b is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:40:59,029 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.bc953f0d-287e-4745-b862-cfdd713e3777 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:41:09,034 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 728 
> Total time for transactions(ms): 14 Number of transactions batched in Syncs: 
> 0 Number of syncs: 722 SyncTimes(ms): 675 
> 2016-07-30 19:41:09,038 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.5129bf62-08d3-4171-9591-57a5b004bb34 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:41:19,045 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.2f78852f-309c-45ef-ae9e-38b46c705e98 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:41:29,052 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.b0dc7906-651d-4b26-b683-1799b325ba8d is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:41:39,059 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.cbcca99f-bedc-43d8-a890-a69c18b29b43 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:41:49,067 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.7ea8f3d6-dfd8-4080-8a45-a42419303fa0 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:41:59,074 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.1f22a8fd-ccb7-4138-b9f9-ab1ff1963b02 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:42:09,078 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 746 
> Total time for transactions(ms): 14 Number of transactions batched in Syncs: 
> 0 Number of syncs: 740 SyncTimes(ms): 691 
> 2016-07-30 19:42:09,081 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.6b4b0b45-00bf-47d6-bc2b-9dc149e10f01 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:42:19,089 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.2a1a8c1e-1b8b-485d-a108-41ea8087bafe is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:42:29,096 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.8c1b7511-83b2-4584-ab14-408a9e85d0c4 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:42:39,103 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.216d4363-b070-47c3-97ac-f0eac64ed411 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:42:49,110 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.5d5cbb0a-8cad-41be-ba17-388b9fc955c4 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:42:59,117 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.c805224b-1833-4dba-8cf5-80164b3ecd7b is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:43:09,121 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 764 
> Total time for transactions(ms): 14 Number of transactions batched in Syncs: 
> 0 Number of syncs: 758 SyncTimes(ms): 707 
> 2016-07-30 19:43:09,125 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.51715f99-5d67-4fa7-907b-7522fcca03c2 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:43:19,132 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.95920ee2-d9e2-41f4-a9f6-a495560af73f is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:43:29,141 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.0d4d5099-21d1-4e3f-84e0-7623511c542c is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:43:39,148 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.b6c93d4f-040c-4b9e-a89e-15313efd13ce is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:43:49,157 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.35ed35e6-2c7d-4a45-ae4f-afaf538afc78 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:43:59,164 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.49c44bf3-ea11-4df1-ac71-a26203e9abba is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:44:09,170 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 782 
> Total time for transactions(ms): 14 Number of transactions batched in Syncs: 
> 0 Number of syncs: 776 SyncTimes(ms): 725 
> 2016-07-30 19:44:09,173 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.060f7d11-d341-4cab-8925-9b6203316744 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:44:19,181 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.666c8d61-405e-49bd-b2d0-939c920b6cd2 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:44:29,188 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.433f0daa-3386-44a6-b6b1-0285e9f5b176 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:44:39,197 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.1e840f6a-999b-4e1d-8eda-a95c409e351c is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:44:49,206 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.c0df4079-d352-4aae-8392-9596f355c408 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:44:59,215 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.df28952f-5a2a-411b-b72d-49380b1ac88e is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:45:09,221 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 800 
> Total time for transactions(ms): 14 Number of transactions batched in Syncs: 
> 0 Number of syncs: 794 SyncTimes(ms): 743 
> 2016-07-30 19:45:09,224 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.5515b3ca-de5d-46df-a49c-c07d5c09969d is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:45:19,234 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.3991cd72-3fb2-48a4-8083-5327d82be73b is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:45:29,243 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.5233a5f3-e15a-4bae-9baa-5d81b5da0459 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:45:39,252 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.6097d0dd-d3c4-482e-8fb7-baa22602fb53 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:45:49,261 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.f94ba7f7-313b-4387-a447-59214ddf6ecc is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:45:59,269 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.864e23f9-2b2a-44f4-b11d-e4d48249c7f3 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:46:09,275 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 818 
> Total time for transactions(ms): 14 Number of transactions batched in Syncs: 
> 0 Number of syncs: 812 SyncTimes(ms): 761 
> 2016-07-30 19:46:09,278 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.96da4a90-afeb-4fe9-84eb-2d759785d428 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:46:19,288 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.a5003566-95ab-4a4b-a1f3-7de6302d26a0 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:46:29,296 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.65acb8c8-9f16-49cc-951c-01dadd298e86 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:46:39,306 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.a3281e71-6713-422e-b43f-5cd9500f8dd2 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:46:49,314 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.343366ba-49fc-4b9a-b4d8-7a6b6c8683e0 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:46:59,323 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.88395bcf-4668-4a9e-8586-69de43e7e0b9 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:47:09,328 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 836 
> Total time for transactions(ms): 14 Number of transactions batched in Syncs: 
> 0 Number of syncs: 830 SyncTimes(ms): 779 
> 2016-07-30 19:47:09,331 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.c3c51904-dda5-4e25-af9c-6188615063d5 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:47:19,338 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.163b0948-28a7-4a1e-9a09-a8de560d0200 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:47:29,346 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.b726de12-176b-47c0-964c-0fa5196f626f is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:47:39,354 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.8eddb5b2-746c-4852-8b15-f15c7a7068f0 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:47:49,364 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.0f467a89-42d3-4b2e-ae9e-36714bd417f3 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:47:59,374 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.074a8130-f4ba-4be6-a614-e39a252cd57b is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:48:09,380 INFO 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog: Number of transactions: 854 
> Total time for transactions(ms): 14 Number of transactions batched in Syncs: 
> 0 Number of syncs: 848 SyncTimes(ms): 797 
> 2016-07-30 19:48:09,383 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.b7e19c12-827d-4ca7-87e9-1e3b8ab01c01 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:48:19,391 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.a06d3b48-6a08-4294-9b6a-7c8ffcebef52 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1
> 2016-07-30 19:48:29,401 INFO org.apache.hadoop.hdfs.StateChange: DIR* 
> completeFile: /user/spark/eventlog/.89fc2955-136b-400f-a78c-c439726e4964 is 
> closed by DFSClient_NONMAPREDUCE_-1615501432_1



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
