Re: Spark on Yarn: java.lang.IllegalArgumentException: Invalid rule

2015-02-03 Thread maven

The version I'm using was already pre-built for Hadoop 2.3. 
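One quick way to double-check a prebuilt package is the assembly jar's name,
which usually encodes the Hadoop version it was built against (path relative
to SPARK_HOME):

$ ls lib/spark-assembly-*.jar   # e.g. spark-assembly-1.2.0-hadoop2.3.0.jar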



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-Yarn-java-lang-IllegalArgumentException-Invalid-rule-tp21382p21485.html



Re: Spark on Yarn: java.lang.IllegalArgumentException: Invalid rule

2015-01-28 Thread siddardha
Then your Spark build does not include YARN support. Try building with:
sbt/sbt -Dhadoop.version=2.3.0 -Pyarn assembly
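
Assuming the build succeeds, a quick way to confirm the assembly actually
contains the YARN classes (the jar path below is a guess; adjust it to your
build layout):

$ jar tf assembly/target/scala-2.10/spark-assembly-*.jar | grep deploy/yarn

If YARN support made it in, the listing should include entries like
org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.class, the class named in
the stack trace.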



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-Yarn-java-lang-IllegalArgumentException-Invalid-rule-tp21382p21404.html



Re: Spark on Yarn: java.lang.IllegalArgumentException: Invalid rule

2015-01-27 Thread Niranjan Reddy
Thanks, Ted. Kerberos is enabled on the cluster.

I'm new to the world of Kerberos, so please excuse my ignorance here. Do you
know if there are any additional steps I need to take in addition to
setting HADOOP_CONF_DIR? For instance, does hadoop.security.auth_to_local
require any specific setting (the current setting for this property was set
by the admin)? This cluster has Spark 1.0 installed and I can use it
without any errors.
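
For what it's worth, one way to dump the value currently in effect (assuming
the stock Hadoop CLI is on the path and core-site.xml is under
HADOOP_CONF_DIR):

$ hdfs getconf -confKey hadoop.security.auth_to_local
$ grep -A 5 auth_to_local /etc/hadoop/conf/core-site.xml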
On Jan 26, 2015 11:38 PM, Ted Yu yuzhih...@gmail.com wrote:

 Looks like Kerberos was enabled for your cluster.

 Can you check the config files under HADOOP_CONF_DIR ?

 Cheers





Re: Spark on Yarn: java.lang.IllegalArgumentException: Invalid rule

2015-01-27 Thread maven
Thanks, Siddardha. I did, but I got the same error. Kerberos is enabled on my
cluster, so I may be missing a configuration step somewhere.
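
One way to take Spark out of the picture is to feed the cluster's rules
straight to the parser that is failing. A minimal sketch in Scala (the rule
string is copied from the error; KerberosName.setRules is the call named in
the stack trace; RuleCheck is just a throwaway name):

import org.apache.hadoop.security.authentication.util.KerberosName

object RuleCheck {
  def main(args: Array[String]): Unit = {
    // Paste the cluster's hadoop.security.auth_to_local value here.
    val rules =
      "RULE:[2:$1@$0](.*@XXXCOMPANY.COM)s/(.*)@XXXCOMPANY.COM/$1/L\nDEFAULT"
    // Throws IllegalArgumentException("Invalid rule: ...") on a bad rule,
    // which is exactly the failure seen when the SparkContext starts up.
    KerberosName.setRules(rules)
    println("rules parsed OK")
  }
}

Running it once against the hadoop-auth jar bundled in the Spark assembly and
once against the cluster's own Hadoop jars would show whether the two
versions disagree about the rule syntax.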



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-Yarn-java-lang-IllegalArgumentException-Invalid-rule-tp21382p21392.html



Re: Spark on Yarn: java.lang.IllegalArgumentException: Invalid rule

2015-01-27 Thread Ted Yu
Caused by: java.lang.IllegalArgumentException: Invalid rule: L
RULE:[2:$1@$0](.*@XXXCOMPANY.COM)s/(.*)@XXXCOMPANY.COM/$1/L
DEFAULT

Can you put the rule on a single line? (I'm not sure whether there is a
newline or a space between the L and DEFAULT.)

Looks like the characters between the last slash and DEFAULT are extraneous.
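
For reference, a conventional value looks something like this in
core-site.xml (realm copied from the error above; note that nothing follows
the substitution's closing slash):

<property>
  <name>hadoop.security.auth_to_local</name>
  <value>
    RULE:[2:$1@$0](.*@XXXCOMPANY.COM)s/(.*)@XXXCOMPANY.COM/$1/
    DEFAULT
  </value>
</property>

If the trailing L is intentional (an /L suffix asks the rule to lowercase its
result), keep in mind that, as far as I know, only newer hadoop-auth releases
understand that modifier, so an assembly built against an older Hadoop can
reject rules the cluster's own Hadoop accepts.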

Cheers

On Tue, Jan 27, 2015 at 7:21 AM, Niranjan Reddy niranja...@gmail.com
wrote:

 Thanks, Ted. Kerberos is enabled on the cluster.

 I'm new to the world of Kerberos, so please excuse my ignorance here. Do
 you know if there are any additional steps I need to take in addition to
 setting HADOOP_CONF_DIR? For instance, does hadoop.security.auth_to_local
 require any specific setting (the current setting for this property was set
 by the admin)? This cluster has Spark 1.0 installed and I can use it
 without any errors.

Spark on Yarn: java.lang.IllegalArgumentException: Invalid rule

2015-01-26 Thread maven
All, 

I recently tried to build Spark 1.2 on my enterprise server (which has Hadoop
2.3 with YARN). Here're the steps I followed for the build: 

$ mvn -Pyarn -Phadoop-2.3 -Dhadoop.version=2.3.0 -DskipTests clean package 
$ export SPARK_HOME=/path/to/spark/folder 
$ export HADOOP_CONF_DIR=/etc/hadoop/conf 

However, when I try to work with this installation either locally or on
YARN, I get the following error: 

Exception in thread "main" java.lang.ExceptionInInitializerError
        at org.apache.spark.util.Utils$.getSparkOrYarnConfig(Utils.scala:1784)
        at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:105)
        at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:180)
        at org.apache.spark.SparkEnv$.create(SparkEnv.scala:292)
        at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:159)
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:232)
        at water.MyDriver$.main(MyDriver.scala:19)
        at water.MyDriver.main(MyDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:360)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Unable to load YARN support
        at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:199)
        at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:194)
        at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
        ... 15 more
Caused by: java.lang.IllegalArgumentException: Invalid rule: L
RULE:[2:$1@$0](.*@XXXCOMPANY.COM)s/(.*)@XXXCOMPANY.COM/$1/L
DEFAULT
        at org.apache.hadoop.security.authentication.util.KerberosName.parseRules(KerberosName.java:321)
        at org.apache.hadoop.security.authentication.util.KerberosName.setRules(KerberosName.java:386)
        at org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:75)
        at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:247)
        at org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:283)
        at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:43)
        at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.<init>(YarnSparkHadoopUtil.scala:45)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at java.lang.Class.newInstance(Class.java:374)
        at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:196)
        ... 17 more

I noticed that when I unset HADOOP_CONF_DIR, I'm able to work in local mode
without any errors. I'm also able to work with the pre-installed Spark 1.0,
both locally and on YARN, without any issues. It looks like I may be missing
a configuration step somewhere. Any thoughts on what may be causing this?

NR



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-on-Yarn-java-lang-IllegalArgumentException-Invalid-rule-tp21382.html