Re: Cannot run SimpleApp as regular Java app

2014-09-19 Thread ericacm
It turns out that it was the Hadoop version that was the issue.

spark-1.0.2-hadoop1 and spark-1.1.0-hadoop1 both work.

spark-1.0.2-hadoop2 and spark-1.1.0-hadoop2.4 do not work.

It's strange because for this little test I am not even using HDFS at all.
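For anyone hitting the same thing: when embedding Spark as a plain dependency, the build's Spark and Hadoop artifacts have to line up with the distribution the cluster runs, even if HDFS is never touched, because a Hadoop Configuration still gets serialized to executors. A hypothetical Maven sketch (versions taken from this thread; the hadoop-client version is an assumption and must match your downloaded Spark build):

```xml
<!-- Illustrative only: match these to the Spark distribution the cluster runs -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.1.0</version>
</dependency>
<dependency>
  <!-- a 1.x hadoop-client pairs with the spark-1.1.0-hadoop1 download;
       use the matching 2.x hadoop-client for the hadoop2 builds -->
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>1.0.4</version>
</dependency>
```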



-- Eric

On Thu, Sep 18, 2014 at 11:58 AM, ericacm [via Apache Spark User List] 
ml-node+s1001560n14570...@n3.nabble.com wrote:

 Upgrading from spark-1.0.2-hadoop2 to spark-1.1.0-hadoop1 fixed my
 problem.






--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Cannot-run-SimpleApp-as-regular-Java-app-tp13695p14685.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

Re: Cannot run SimpleApp as regular Java app

2014-09-18 Thread ericacm
Upgrading from spark-1.0.2-hadoop2 to spark-1.1.0-hadoop1 fixed my problem. 



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Cannot-run-SimpleApp-as-regular-Java-app-tp13695p14570.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Cannot run SimpleApp as regular Java app

2014-09-09 Thread Yana Kadiyska
(ObjectInputStream.java:1894)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
 at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
 at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1970)
 at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1894)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)

 It seems like a serialization problem because there is plenty of heap space
 (and it works with spark-submit).

 Thanks!







Re: Cannot run SimpleApp as regular Java app

2014-09-09 Thread ericacm
Hi Yana - 

I added the following to spark-class:

echo RUNNER: $RUNNER
echo CLASSPATH: $CLASSPATH
echo JAVA_OPTS: $JAVA_OPTS
echo '$@': $@

Here's the output:

$ ./spark-submit --class experiments.SimpleApp --master
spark://myhost.local:7077
/IdeaProjects/spark-experiments/target/spark-experiments-1.0-SNAPSHOT.jar

Spark assembly has been built with Hive, including Datanucleus jars on
classpath

RUNNER:
/Library/Java/JavaVirtualMachines/jdk1.7.0_13.jdk/Contents/Home/bin/java

CLASSPATH:
::/dev/spark-1.0.2-bin-hadoop2/conf:/dev/spark-1.0.2-bin-hadoop2/lib/spark-assembly-1.0.2-hadoop2.2.0.jar:/dev/spark-1.0.2-bin-hadoop2/lib/datanucleus-api-jdo-3.2.1.jar:/dev/spark-1.0.2-bin-hadoop2/lib/datanucleus-core-3.2.2.jar:/dev/spark-1.0.2-bin-hadoop2/lib/datanucleus-rdbms-3.2.1.jar

JAVA_OPTS: -XX:MaxPermSize=128m -Djava.library.path= -Xms512m -Xmx512m

$@: org.apache.spark.deploy.SparkSubmit --class experiments.SimpleApp
--master spark://myhost.local:7077
/IdeaProjects/spark-experiments/target/spark-experiments-1.0-SNAPSHOT.jar

The differences I can see when the code runs via my standalone Java app:
- It does not have -Djava.library.path= (should not make a difference)
- The main class is org.apache.spark.executor.CoarseGrainedExecutorBackend
instead of org.apache.spark.deploy.SparkSubmit (should not make a
difference)
- My jar's classes are directly available when running via spark-submit (it
runs the jar, so they are in the main classloader), but they are only
available via conf.setJars() in the standalone Java app. They should still be
available indirectly in the classloader that is created in the executor:

14/09/08 10:04:06 INFO Executor: Adding
file:/dev/spark-1.0.2-bin-hadoop2/work/app-20140908100358-0002/1/./spark-experiments-1.0-SNAPSHOT.jar
to class loader

I've been assuming that my conf.setJars() is the proper way to provide my
code to Spark.  
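For context, a minimal sketch of the standalone-driver setup being described, using the Spark 1.x Java API, with the master URL and jar path taken from this thread (it needs spark-core on the classpath to compile, and the path is illustrative, not a recommendation):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class SimpleApp {
    public static void main(String[] args) {
        // Connect straight to the standalone master (no spark-submit)
        // and ship the application jar to executors via setJars().
        SparkConf conf = new SparkConf()
                .setAppName("SimpleApp")
                .setMaster("spark://myhost.local:7077")
                .setJars(new String[] {
                        // path from the thread; point at your built jar
                        "/IdeaProjects/spark-experiments/target/spark-experiments-1.0-SNAPSHOT.jar"
                });
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... job logic ...
        sc.stop();
    }
}
```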

Thanks!




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Cannot-run-SimpleApp-as-regular-Java-app-tp13695p13842.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Cannot run SimpleApp as regular Java app

2014-09-08 Thread ericacm
)
 at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:63)
 at org.apache.spark.broadcast.HttpBroadcast$.read(HttpBroadcast.scala:205)
 at org.apache.spark.broadcast.HttpBroadcast.readObject(HttpBroadcast.scala:89)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1004)
 at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1872)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
 at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
 at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1970)
 at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1894)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
 at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
 at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1970)
 at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1894)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
14/09/08 10:04:06 INFO HttpBroadcast: Started reading broadcast variable 0
14/09/08 10:04:06 ERROR ExecutorUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.OutOfMemoryError: Java heap space
 at org.apache.hadoop.io.WritableUtils.readCompressedStringArray(WritableUtils.java:183)
 at org.apache.hadoop.conf.Configuration.readFields(Configuration.java:2378)
 at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:285)
 at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:77)
 at org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:42)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1004)
 at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1872)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
 at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
 at java.io.ObjectInputStream.readObject(ObjectInputStream.java:369)
 at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:63)
 at org.apache.spark.broadcast.HttpBroadcast$.read(HttpBroadcast.scala:205)
 at org.apache.spark.broadcast.HttpBroadcast.readObject(HttpBroadcast.scala:89)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:601)
 at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1004)
 at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1872)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
 at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
 at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1970)
 at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1894)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)
 at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
 at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1970)
 at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1894)
 at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1777)

It seems like a serialization problem because there is plenty of heap space
(and it works with spark-submit).

Thanks!
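The eventual fix (matching the Hadoop version) is consistent with this trace: the OutOfMemoryError is thrown from WritableUtils.readCompressedStringArray while deserializing a Hadoop Configuration, and when writer and reader disagree on the wire format, a length prefix gets misread and the reader attempts an absurd allocation, so the error appears despite plenty of heap. A minimal pure-Java illustration of that failure mode (no Spark or Hadoop involved; readStringArray is a hypothetical stand-in for a length-prefixed read):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class LengthPrefixDemo {
    // Hypothetical stand-in for a length-prefixed array read: if the stream
    // was written by an incompatible producer, the length prefix is garbage
    // and the reader tries to allocate a huge array (an OutOfMemoryError,
    // even with plenty of real heap headroom).
    static String[] readStringArray(DataInput in) throws IOException {
        int len = in.readInt();          // garbage if the formats disagree
        String[] out = new String[len];  // huge len => OutOfMemoryError
        for (int i = 0; i < len; i++) {
            out[i] = in.readUTF();
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        // Matching writer and reader: round-trips fine.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(2);
        out.writeUTF("fs.default.name");
        out.writeUTF("hdfs://nn:8020");
        String[] ok = readStringArray(
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
        System.out.println(ok.length);  // prints 2

        // Mismatched framing: these four bytes were never a length,
        // but the reader interprets them as one.
        byte[] misframed = {0x7f, 0x7f, 0x7f, 0x7f};
        int bogusLen = new DataInputStream(
                new ByteArrayInputStream(misframed)).readInt();
        System.out.println(bogusLen);   // prints 2139062143 (~2 billion)
    }
}
```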



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Cannot-run-SimpleApp-as-regular-Java-app-tp13695.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org