Re: ClassNotFoundException on job submit

2017-10-26 Thread Stefan Miklosovic
Ok I am getting somewhere:

@RunWith(JUnit4.class)
public class LivyTestCase {

    private static final int SAMPLES = 1;

    private static final String LIVY_URI = "http://spark-master:8998";

    @Rule
    public TemporaryFolder jarFolder = new TemporaryFolder();

    @Test
    public void testPiJob() throws Exception {

        File jarFile = jarFolder.newFile("testpijob2.jar");

        ShrinkWrap.create(JavaArchive.class)
            .addClass(PiJob.class)
            .as(ZipExporter.class)
            .exportTo(jarFile, true);

        LivyClient client = new LivyClientBuilder()
            .setURI(new URI(LIVY_URI))
            .build();

        System.out.println("Uploading PiJob jar");

        client.uploadJar(jarFile).get();

        System.out.println("PiJob jar uploaded");

        final Double result = client.submit(new PiJob(1000)).get();

        System.out.println(result);
    }
}

But when I run it, it gives me:

java.util.concurrent.ExecutionException: java.io.IOException: Bad Request: "requirement failed: Local path /root/.livy-sessions/c7dbb697-13ed-443f-a630-bc9d9a544f6b/testpijob2.jar cannot be added to user sessions."
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at LivyTestCase.testPiJob(LivyTestCase.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.io.IOException: Bad Request: "requirement failed: Local path /root/.livy-sessions/c7dbb697-13ed-443f-a630-bc9d9a544f6b/testpijob2.jar cannot be added to user sessions."
at org.apache.livy.client.http.LivyConnection.sendRequest(LivyConnection.java:229)
at org.apache.livy.client.http.LivyConnection.post(LivyConnection.java:192)
at org.apache.livy.client.http.HttpClient$2.call(HttpClient.java:152)
at org.apache.livy.client.http.HttpClient$2.call(HttpClient.java:149)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

I have checked that there is nothing but that PiJob.class in that
JAR. Nothing else at all.
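
(For what it's worth, I listed the contents with something like

jar tf testpijob2.jar

and PiJob.class really is the only class in it.)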

I have a feeling that the path for that jar needs to be visible to all
slaves. I do not have HDFS; my spark slaves run in Docker containers,
so once I upload the jar I see it on the spark master under
/root/.livy-sessions, but it is not inside the slave containers ...
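
One idea would be to share that directory between the containers, e.g.
a docker-compose sketch along these lines (completely untested, the
service names are just from my setup):

services:
  spark-master:
    volumes:
      - livy-sessions:/root/.livy-sessions
  spark-slave:
    volumes:
      - livy-sessions:/root/.livy-sessions
volumes:
  livy-sessions: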

Couldn't this be helpful?

# List of local directories from where files are allowed to be added to user sessions.
# By default it's empty, meaning users can only reference remote URIs when starting
# their sessions.
# livy.file.local-dir-whitelist =
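
If I read that right, I could whitelist the directory the jar ends up in,
e.g. something like this in conf/livy.conf (just a guess on my side, the
path is taken from the error above):

livy.file.local-dir-whitelist = /root/.livy-sessions/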

On Thu, Oct 26, 2017 at 5:30 PM, Stefan Miklosovic  wrote:
> I think I have to add a jar with PiJob on the classpath of Livy so it
> knows how to deserialize it ... hm
>
> On Thu, Oct 26, 2017 at 5:24 PM, Stefan 

Re: ClassNotFoundException on job submit

2017-10-26 Thread Stefan Miklosovic
I think I have to add a jar with PiJob on the classpath of Livy so it
knows how to deserialize it ... hm

On Thu, Oct 26, 2017 at 5:24 PM, Stefan Miklosovic  wrote:
> I did it as you suggested and it seems to start the jobs OK and I
> see the sessions in the UI, but while it is being computed (I see the job
> is distributed across two spark slaves fronted by spark-master), I am
> submitting this from my localhost:
>
> @RunWith(JUnit4.class)
> public class LivyTestCase {
>
>     private static final int SAMPLES = 1;
>
>     private static final String LIVY_URI = "http://spark-master:8998";
>
>     @Test
>     public void testPiJob() throws Exception {
>
>         LivyClient client = new LivyClientBuilder()
>             .setURI(new URI(LIVY_URI))
>             .build();
>
>         final Double result = client.submit(new PiJob(1000)).get();
>
>         System.out.println(result);
>     }
> }
>
> It is the PiJob from the site's examples.
>
> Now what I see in Livy logs is this
>
> org.apache.livy.shaded.kryo.kryo.KryoException: Unable to find class: PiJob
> at org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
> at org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
> at org.apache.livy.shaded.kryo.kryo.Kryo.readClass(Kryo.java:656)
> at org.apache.livy.shaded.kryo.kryo.Kryo.readClassAndObject(Kryo.java:767)
> at org.apache.livy.client.common.Serializer.deserialize(Serializer.java:63)
> at org.apache.livy.rsc.driver.BypassJob.call(BypassJob.java:39)
> at org.apache.livy.rsc.driver.BypassJob.call(BypassJob.java:27)
> at org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:57)
> at org.apache.livy.rsc.driver.BypassJobWrapper.call(BypassJobWrapper.java:42)
> at org.apache.livy.rsc.driver.BypassJobWrapper.call(BypassJobWrapper.java:27)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: PiJob
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
> ... 13 more
>
> I know how to read it - I understand that PiJob is not on the class path,
> but why? I have just sent that class to Livy ...
>
> On Thu, Oct 26, 2017 at 4:17 PM, Saisai Shao  wrote:
>> You can choose to set "livy.spark.master" to "local" and
>> "livy.spark.deploy-mode" to "client" to start Spark in local mode, in which
>> case YARN is not required.
>>
>> Otherwise if you plan to run on YARN, you have to install Hadoop and
>> configure HADOOP_CONF_DIR in livy-env.sh.
>>
>> On Thu, Oct 26, 2017 at 9:40 PM, Stefan Miklosovic 
>> wrote:
>>>
>>> Hi,
>>>
>>> I am running a Livy server in connection with Spark, without Hadoop. I am
>>> setting only SPARK_HOME and I am getting this in the Livy UI logs after
>>> job submission.
>>>
>>> I am using pretty much the standard configuration, except
>>> livy.spark.deploy-mode = cluster
>>>
>>> Do I need a Hadoop installation as well, with HADOOP_CONF_DIR specified?
>>>
>>> Isn't it possible to run Livy with "plain" Spark, without YARN?
>>>
>>> stderr:
>>> java.lang.ClassNotFoundException:
>>> at java.lang.Class.forName0(Native Method)
>>> at java.lang.Class.forName(Class.java:348)
>>> at org.apache.spark.util.Utils$.classForName(Utils.scala:230)
>>> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:712)
>>> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
>>> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
>>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
>>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>
>>> Thanks!
>>>
>>> --
>>> Stefan Miklosovic
>>
>>
>
>
>
> --
> Stefan Miklosovic



-- 
Stefan Miklosovic


Re: ClassNotFoundException on job submit

2017-10-26 Thread Stefan Miklosovic
I did it as you suggested and it seems to start the jobs OK and I
see the sessions in the UI, but while it is being computed (I see the job
is distributed across two spark slaves fronted by spark-master), I am
submitting this from my localhost:

@RunWith(JUnit4.class)
public class LivyTestCase {

    private static final int SAMPLES = 1;

    private static final String LIVY_URI = "http://spark-master:8998";

    @Test
    public void testPiJob() throws Exception {

        LivyClient client = new LivyClientBuilder()
            .setURI(new URI(LIVY_URI))
            .build();

        final Double result = client.submit(new PiJob(1000)).get();

        System.out.println(result);
    }
}

It is the PiJob from the site's examples.
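
For reference, it is essentially this class (reproduced here from memory
of that example, so treat it as a sketch rather than the exact code):

import java.util.ArrayList;
import java.util.List;

import org.apache.livy.Job;
import org.apache.livy.JobContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.Function2;

public class PiJob implements Job<Double>, Function<Integer, Integer>,
        Function2<Integer, Integer, Integer> {

    private final int samples;

    public PiJob(int samples) {
        this.samples = samples;
    }

    // Job: estimates pi by Monte Carlo sampling on the cluster.
    @Override
    public Double call(JobContext ctx) throws Exception {
        List<Integer> sampleList = new ArrayList<>();
        for (int i = 0; i < samples; i++) {
            sampleList.add(i + 1);
        }
        return 4.0d * ctx.sc().parallelize(sampleList).map(this).reduce(this) / samples;
    }

    // Function: 1 if a random point lands inside the unit circle, else 0.
    @Override
    public Integer call(Integer v1) {
        double x = Math.random();
        double y = Math.random();
        return (x * x + y * y < 1) ? 1 : 0;
    }

    // Function2: sums the hits.
    @Override
    public Integer call(Integer v1, Integer v2) {
        return v1 + v2;
    }
}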

Now what I see in Livy logs is this

org.apache.livy.shaded.kryo.kryo.KryoException: Unable to find class: PiJob
at org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
at org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
at org.apache.livy.shaded.kryo.kryo.Kryo.readClass(Kryo.java:656)
at org.apache.livy.shaded.kryo.kryo.Kryo.readClassAndObject(Kryo.java:767)
at org.apache.livy.client.common.Serializer.deserialize(Serializer.java:63)
at org.apache.livy.rsc.driver.BypassJob.call(BypassJob.java:39)
at org.apache.livy.rsc.driver.BypassJob.call(BypassJob.java:27)
at org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:57)
at org.apache.livy.rsc.driver.BypassJobWrapper.call(BypassJobWrapper.java:42)
at org.apache.livy.rsc.driver.BypassJobWrapper.call(BypassJobWrapper.java:27)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: PiJob
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
... 13 more

I know how to read it - I understand that PiJob is not on the class path,
but why? I have just sent that class to Livy ...
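
Maybe I have to upload the jar containing that class first, before
submitting the job? Something like this, perhaps (just thinking out loud,
the jar name is made up):

client.uploadJar(new File("pijob.jar")).get();
final Double result = client.submit(new PiJob(1000)).get();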

On Thu, Oct 26, 2017 at 4:17 PM, Saisai Shao  wrote:
> You can choose to set "livy.spark.master" to "local" and
> "livy.spark.deploy-mode" to "client" to start Spark in local mode, in which
> case YARN is not required.
>
> Otherwise if you plan to run on YARN, you have to install Hadoop and
> configure HADOOP_CONF_DIR in livy-env.sh.
>
> On Thu, Oct 26, 2017 at 9:40 PM, Stefan Miklosovic 
> wrote:
>>
>> Hi,
>>
>> I am running a Livy server in connection with Spark, without Hadoop. I am
>> setting only SPARK_HOME and I am getting this in the Livy UI logs after
>> job submission.
>>
>> I am using pretty much the standard configuration, except
>> livy.spark.deploy-mode = cluster
>>
>> Do I need a Hadoop installation as well, with HADOOP_CONF_DIR specified?
>>
>> Isn't it possible to run Livy with "plain" Spark, without YARN?
>>
>> stderr:
>> java.lang.ClassNotFoundException:
>> at java.lang.Class.forName0(Native Method)
>> at java.lang.Class.forName(Class.java:348)
>> at org.apache.spark.util.Utils$.classForName(Utils.scala:230)
>> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:712)
>> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
>> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>
>> Thanks!
>>
>> --
>> Stefan Miklosovic
>
>



-- 
Stefan Miklosovic


Re: ClassNotFoundException on job submit

2017-10-26 Thread Saisai Shao
You can choose to set "livy.spark.master" to "local" and
"livy.spark.deploy-mode" to "client" to start Spark in local mode, in
which case YARN is not required.

Otherwise if you plan to run on YARN, you have to install Hadoop and
configure HADOOP_CONF_DIR in livy-env.sh.
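
In conf/livy.conf that would be something along these lines (a minimal
sketch; "local[*]" instead of "local" would use all local cores):

livy.spark.master = local
livy.spark.deploy-mode = client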

On Thu, Oct 26, 2017 at 9:40 PM, Stefan Miklosovic 
wrote:

> Hi,
>
> I am running a Livy server in connection with Spark, without Hadoop. I am
> setting only SPARK_HOME and I am getting this in the Livy UI logs after
> job submission.
>
> I am using pretty much the standard configuration, except
> livy.spark.deploy-mode = cluster
>
> Do I need a Hadoop installation as well, with HADOOP_CONF_DIR specified?
>
> Isn't it possible to run Livy with "plain" Spark, without YARN?
>
> stderr:
> java.lang.ClassNotFoundException:
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at org.apache.spark.util.Utils$.classForName(Utils.scala:230)
> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:712)
> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>
> Thanks!
>
> --
> Stefan Miklosovic
>