I think I have to add a jar containing PiJob to Livy's classpath so it
knows how to deserialize it ... hmmm
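
Something along these lines is what I am thinking of - just a rough
sketch using the LivyClient API, where the jar path is only a
placeholder for wherever the jar containing PiJob ends up:

LivyClient client = new LivyClientBuilder()
    .setURI(new URI(LIVY_URI))
    .build();

// upload the jar containing PiJob so the remote driver can deserialize the job;
// "/path/to/pijob.jar" is a placeholder - point it at the jar built from the test module
client.uploadJar(new File("/path/to/pijob.jar")).get();

// or, for a jar already reachable from the cluster (e.g. on HDFS or a shared path):
// client.addJar(new URI("hdfs:///jars/pijob.jar")).get();

final Double result = client.submit(new PiJob(1000)).get();
System.out.println(result);

client.stop(true);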

On Thu, Oct 26, 2017 at 5:24 PM, Stefan Miklosovic <mikloso...@gmail.com> wrote:
> I did it as you suggested and it seems to start the jobs OK - I see
> the sessions in the UI and the job is distributed across the two Spark
> slaves behind spark-master. I am submitting this from my localhost:
>
> import java.net.URI;
>
> import org.apache.livy.LivyClient;
> import org.apache.livy.LivyClientBuilder;
> import org.junit.Test;
> import org.junit.runner.RunWith;
> import org.junit.runners.JUnit4;
>
> @RunWith(JUnit4.class)
> public class LivyTestCase {
>
>     private static final int SAMPLES = 10000;
>
>     private static final String LIVY_URI = "http://spark-master:8998";
>
>     @Test
>     public void testPiJob() throws Exception {
>
>         LivyClient client = new LivyClientBuilder()
>             .setURI(new URI(LIVY_URI))
>             .build();
>
>         final Double result = client.submit(new PiJob(1000)).get();
>
>         System.out.println(result);
>     }
> }
>
> It is the PiJob from the examples on the site.
>
> Now what I see in the Livy logs is this:
>
> org.apache.livy.shaded.kryo.kryo.KryoException: Unable to find class: PiJob
> at org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
> at org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
> at org.apache.livy.shaded.kryo.kryo.Kryo.readClass(Kryo.java:656)
> at org.apache.livy.shaded.kryo.kryo.Kryo.readClassAndObject(Kryo.java:767)
> at org.apache.livy.client.common.Serializer.deserialize(Serializer.java:63)
> at org.apache.livy.rsc.driver.BypassJob.call(BypassJob.java:39)
> at org.apache.livy.rsc.driver.BypassJob.call(BypassJob.java:27)
> at org.apache.livy.rsc.driver.JobWrapper.call(JobWrapper.java:57)
> at org.apache.livy.rsc.driver.BypassJobWrapper.call(BypassJobWrapper.java:42)
> at org.apache.livy.rsc.driver.BypassJobWrapper.call(BypassJobWrapper.java:27)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException: PiJob
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at org.apache.livy.shaded.kryo.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:136)
> ... 13 more
>
> I know how to read it - I understand that PiJob is not on the classpath,
> but why? I have just sent that class to Livy ...
>
> On Thu, Oct 26, 2017 at 4:17 PM, Saisai Shao <sai.sai.s...@gmail.com> wrote:
>> You can choose to set "livy.spark.master" to "local" and
>> "livy.spark.deploy-mode" to "client" to start Spark in local mode, in
>> which case YARN is not required.
>>
>> Otherwise if you plan to run on YARN, you have to install Hadoop and
>> configure HADOOP_CONF_DIR in livy-env.sh.
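>>
>> For example, in conf/livy.conf (a minimal sketch - adjust to your install):
>>
>> # run Spark in local mode so no YARN cluster is needed
>> livy.spark.master = local
>> livy.spark.deploy-mode = client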
>>
>> On Thu, Oct 26, 2017 at 9:40 PM, Stefan Miklosovic <mikloso...@gmail.com>
>> wrote:
>>>
>>> Hi,
>>>
>>> I am running the Livy server against Spark without Hadoop. I am
>>> setting only SPARK_HOME, and I am getting this in the Livy UI logs
>>> after job submission.
>>>
>>> I am using a pretty much standard configuration, except for
>>> livy.spark.deploy-mode = cluster
>>>
>>> Do I need to run with a Hadoop installation as well and specify
>>> HADOOP_CONF_DIR?
>>>
>>> Isn't it possible to run Livy with "plain" Spark, without YARN?
>>>
>>> stderr:
>>> java.lang.ClassNotFoundException:
>>> at java.lang.Class.forName0(Native Method)
>>> at java.lang.Class.forName(Class.java:348)
>>> at org.apache.spark.util.Utils$.classForName(Utils.scala:230)
>>> at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:712)
>>> at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
>>> at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
>>> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
>>> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
>>>
>>> Thanks!
>>>
>>> --
>>> Stefan Miklosovic
>>
>>
>
>
>
> --
> Stefan Miklosovic



-- 
Stefan Miklosovic
