[jira] [Commented] (SPARK-23123) Unable to run Spark Job with Hadoop NameNode Federation using ViewFS

2018-01-17  Steve Loughran (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-23123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328651#comment-16328651 ]

Steve Loughran commented on SPARK-23123:
----------------------------------------

I've never looked at ViewFS internals before, so treat my commentary here with 
caution:
 # Something (probably the YARN NodeManager/Resource Localizer) is trying to 
download the JAR from a viewfs URL.
 # It can't initialize viewfs because it isn't finding the conf entry for the 
mount table, which is *probably* {{fs.viewfs.mounttable.default}} (i.e. it will 
be that unless overridden); see the check sketched after this list.
 # Yet the spark-submit client can see it, which is why it manages to delete 
the staging dir.
 # Which would imply that the NM isn't getting the core-site.xml values 
configuring viewfs.
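A quick way to test that theory on the suspect node (a minimal sketch, assuming 
a standard Hadoop client install there; the mount-table key is taken from the 
reporter's core-site.xml):

{noformat}
# Ask the node's Hadoop configuration what it resolves for one of the
# mount-table links; a correctly configured node prints hdfs://nameservice2/user,
# a broken one reports the key as missing.
hdfs getconf -confKey fs.viewfs.mounttable.default.link./user

# Full round trip through viewfs; this fails at filesystem initialization
# if the mount-table entries are absent from the node's core-site.xml.
hadoop fs -ls viewfs:///user
{noformat}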

Like Saisai says, I wouldn't blame Spark here; I don't think it's a Spark 
process that's at fault.

I'd try to work out which node this failed on and see what the NM logs say. If 
it's failing for this job submission, it's likely to be failing for other 
things too. Then try restarting the NM to see if the problem "goes away"...if 
it does, it would indicate that the settings in its local 
/etc/hadoop/conf/core-site.xml didn't have the binding info when the NM last 
loaded them.
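
For the log check, something like the following on the suspect node (the log 
directory is a guess; it varies by distribution and installation):

{noformat}
# Hypothetical triage: look for viewfs initialization errors in the
# NodeManager log; adjust the path for your distribution.
grep -i "viewfs" /var/log/hadoop-yarn/*nodemanager*.log | tail -n 20
{noformat}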

If you are confident that the binding is in that file, and you've restarted the 
NM, and it's still failing in its logs, then file a YARN bug attaching the log. 
But you'd need to provide that evidence that it wasn't a local config problem 
before anyone would look at it.

> Unable to run Spark Job with Hadoop NameNode Federation using ViewFS
> ---------------------------------------------------------------------
>
> Key: SPARK-23123
> URL: https://issues.apache.org/jira/browse/SPARK-23123
> Project: Spark
> Issue Type: Bug
> Components: Spark Submit
> Affects Versions: 1.6.3
> Reporter: Nihar Nayak
> Priority: Major
> Labels: Hadoop, Spark
>
> Added the following to core-site.xml in order to make use of ViewFS in a 
> NameNode-federated cluster.
> {noformat}
> <property>
>   <name>fs.defaultFS</name>
>   <value>viewfs:///</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.default.link./apps</name>
>   <value>hdfs://nameservice1/apps</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.default.link./app-logs</name>
>   <value>hdfs://nameservice2/app-logs</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.default.link./tmp</name>
>   <value>hdfs://nameservice2/tmp</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.default.link./user</name>
>   <value>hdfs://nameservice2/user</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.default.link./ns1/user</name>
>   <value>hdfs://nameservice1/user</value>
> </property>
> <property>
>   <name>fs.viewfs.mounttable.default.link./ns2/user</name>
>   <value>hdfs://nameservice2/user</value>
> </property>
> {noformat}
> Got the following error.
> {noformat}
> spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --num-executors 3 --driver-memory 512m --executor-memory 512m 
> --executor-cores 1 ${SPARK_HOME}/lib/spark-examples*.jar 10
> 18/01/17 02:14:45 INFO spark.SparkContext: Added JAR 
> file:/home/nayak/hdp26_c4000_stg/spark2/lib/spark-examples_2.11-2.1.1.2.6.2.0-205.jar
>  at spark://x:35633/jars/spark-examples_2.11-2.1.1.2.6.2.0-205.jar with 
> timestamp 1516155285534
> 18/01/17 02:14:46 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
> to rm2
> 18/01/17 02:14:46 INFO yarn.Client: Requesting a new application from cluster 
> with 26 NodeManagers
> 18/01/17 02:14:46 INFO yarn.Client: Verifying our application has not 
> requested more than the maximum memory capability of the cluster (13800 MB 
> per container)
> 18/01/17 02:14:46 INFO yarn.Client: Will allocate AM container, with 896 MB 
> memory including 384 MB overhead
> 18/01/17 02:14:46 INFO yarn.Client: Setting up container launch context for 
> our AM
> 18/01/17 02:14:46 INFO yarn.Client: Setting up the launch environment for our 
> AM container
> 18/01/17 02:14:46 INFO yarn.Client: Preparing resources for our AM container
> 18/01/17 02:14:46 INFO security.HDFSCredentialProvider: getting token for 
> namenode: viewfs:/user/nayak
> 18/01/17 02:14:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 
> 22488202 for nayak on ha-hdfs:nameservice1
> 18/01/17 02:14:46 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 50 
> for nayak on ha-hdfs:nameservice2
> 18/01/17 02:14:47 INFO hive.metastore: Trying to connect to metastore with 
> URI thrift://:9083
> 18/01/17 02:14:47 INFO hive.metastore: Connected to metastore.
> 18/01/17 02:14:49 INFO security.HiveCredentialProvider: Get Token from hive 
> metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 29 6e 61 79 61 
> 6b 6e 69 68 61 72 72 61 30 31 40 53 54 47 32 30 30 30 2e 48 41 44 4f 4f 50 2e 
> 52 41 4b 55 54 45 4e 2e 43 4f 4d 04 68 69 76 65 00 8a 01 61 01 e5 be 03 8a 01 
> 61 25 f2 42 03 8d 02 21 bb 8e 02 b7
> 18/01/17 02:14:49 WARN yarn.Client: Neither spark.yarn.jars nor 
> spark.yarn.archive is set, falling back to uploading libraries under 
> SPARK_HOME.
> 18/01/17 02:14:50 INFO yarn.Client: Uploading resource 
> file:/tmp/spark-7498ee81-d22b-426e-9466-3a08f7c827b1/__spark_libs__6643608006679813597.zip
>  -> 
> viewfs:/user/nayak/.sparkStaging/application_1515035441414_275503/__spark_libs__6643608006679813597.zip
> 18/01/17 02:14:55 INFO yarn.Client: Uploading resource 
> file:/tmp/spark-7498ee81-d22b-426e-9466-3a08f7c827b1/__spark_conf__405432153902988742.zip
>  -> 
> viewfs:/user/nayak/.sparkStaging/application_1515035441414_275503/__spark_conf__.zip
> 18/01/17 02:14:55 INFO spark.SecurityManager: Changing view acls to: nayak
> 18/01/17 02:14:55 INFO spark.SecurityManager: Changing modify acls to: 
> nayak
> 18/01/17 02:14:55 INFO spark.SecurityManager: Changing view acls groups to:
> 18/01/17 02:14:55 INFO spark.SecurityManager: Changing modify acls groups to:
> 18/01/17 02:14:55 INFO spark.SecurityManager: SecurityManager: authentication 
> disabled; ui acls disabled; users with view permissions: Set(nayak); 
> groups with view permissions: Set(); users with modify permissions: 
> Set(nayak); groups with modify permissions: Set()
> 18/01/17 02:14:55 INFO yarn.Client: Submitting application 
> application_1515035441414_275503 to ResourceManager
> 18/01/17 02:14:55 INFO impl.YarnClientImpl: Submitted 

[jira] [Commented] (SPARK-23123) Unable to run Spark Job with Hadoop NameNode Federation using ViewFS

2018-01-16  Saisai Shao (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-23123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328364#comment-16328364 ]

Saisai Shao commented on SPARK-23123:
-------------------------------------

[~ste...@apache.org], any thoughts on this issue?


[jira] [Commented] (SPARK-23123) Unable to run Spark Job with Hadoop NameNode Federation using ViewFS

2018-01-16  Saisai Shao (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-23123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328361#comment-16328361 ]

Saisai Shao commented on SPARK-23123:
-------------------------------------

From the stack, it looks like the issue is YARN communicating with HDFS to 
distribute the application dependencies, so I'm guessing it is more of a YARN 
issue talking to HDFS. Why don't you also create a YARN issue? The YARN 
community will have deeper insight into this.


[jira] [Commented] (SPARK-23123) Unable to run Spark Job with Hadoop NameNode Federation using ViewFS

2018-01-16  Nihar Nayak (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-23123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328349#comment-16328349 ]

Nihar Nayak commented on SPARK-23123:
--------------------------------------

I'm able to run all other Hadoop applications (MR/Hive jobs, and HDFS commands 
as well); it's only failing in the case of a Spark application. As far as the 
exception is concerned, yes, it's thrown by Hadoop, but the exception says 
viewfs couldn't be initialized, which is strange because all the related 
configurations are provided in core-site.xml.

Note: I can reproduce the exact issue with an HDFS command if I remove the 
"fs.viewfs.mounttable" configuration from core-site.xml, but with the correct 
configuration any Hadoop-dependent application (including Spark) should run 
without any issue. A sketch of that reproduction follows.
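
A minimal sketch of that reproduction (the error text is what Hadoop's 
ViewFileSystem raises when no mount table is configured; exact wording may 
vary by version):

{noformat}
# With the fs.viewfs.mounttable.* entries removed from core-site.xml,
# any viewfs access fails while initializing the filesystem:
hadoop fs -ls /user
# ls: ViewFs: Cannot initialize: Empty Mount table in config for viewfs://default/
{noformat}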



[jira] [Commented] (SPARK-23123) Unable to run Spark Job with Hadoop NameNode Federation using ViewFS

2018-01-16  Saisai Shao (JIRA)

[ https://issues.apache.org/jira/browse/SPARK-23123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16328297#comment-16328297 ]

Saisai Shao commented on SPARK-23123:
-------------------------------------

From the stack, it looks like a Hadoop issue rather than a Spark issue.
