If you are going to access jars outside HDFS, you must have their path
listed in livy.file.local-dir-whitelist, as you found.
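For reference, that whitelist is a property in livy.conf; a minimal sketch (the directory path below is only an illustration, use whatever local directory holds your jars):

```
# livy.conf -- allow Livy to load jars from this local directory
# (path is an example; must end with the directory you actually use)
livy.file.local-dir-whitelist = /opt/livy/local-jars/
```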
Alex Bozarth
Ok I am getting somewhere:
@RunWith(JUnit4.class)
public class LivyTestCase {
private static final int SAMPLES = 1;
private static final String LIVY_URI = "http://spark-master:8998";
@Rule
public TemporaryFolder jarFolder = new TemporaryFolder();
@Test
public void
I think I have to add a jar containing PiJob to the classpath of Livy so it
knows how to deserialize it, hm
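One way to do that without touching the server's classpath is to ship the jar at submit time. A sketch, assuming the org.apache.livy:livy-client-http dependency is available, a Livy server is reachable at the URI below, and the jar path is hypothetical (it should point at the jar that actually contains PiJob):

```java
import java.io.File;
import java.net.URI;
import org.apache.livy.LivyClient;
import org.apache.livy.LivyClientBuilder;

public class PiJobSubmitter {
    public static void main(String[] args) throws Exception {
        // Assumed server address; same as LIVY_URI in the test above.
        LivyClient client = new LivyClientBuilder()
                .setURI(new URI("http://spark-master:8998"))
                .build();
        try {
            // Upload the jar containing PiJob so the remote side can
            // deserialize the job class. Path is a placeholder.
            client.uploadJar(new File("/path/to/pijob-tests.jar")).get();

            // Submit the job and block for the result.
            Double pi = client.submit(new PiJob(1)).get();
            System.out.println("Pi is roughly " + pi);
        } finally {
            client.stop(true);
        }
    }
}
```

uploadJar sends the jar to the session so it ends up on the driver and executor classpaths, which avoids having to whitelist a local directory on the server for this case.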
On Thu, Oct 26, 2017 at 5:24 PM, Stefan Miklosovic wrote:
> I did it as you suggested and it seems to start the jobs OK, and I
> see the sessions in the UI, but while it is being computed
I did it as you suggested and it seems to start the jobs OK, and I
see the sessions in the UI, but while the job is being computed (I see
it is distributed across two Spark slaves, with spark-master in front
of them), I am running this from my localhost:
@RunWith(JUnit4.class)
public class L
You can set "livy.spark.master" to "local" and
"livy.spark.deploy-mode" to "client" to start Spark in local mode; in
that case YARN is not required.
Otherwise, if you plan to run on YARN, you have to install Hadoop and
configure HADOOP_CONF_DIR in livy-env.sh.
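As a sketch, the local-mode settings in livy.conf would look like the following (local[*] is an assumption to use all cores; plain "local" also works):

```
# livy.conf -- run Spark locally, no YARN/Hadoop needed
livy.spark.master = local[*]
livy.spark.deploy-mode = client
```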
On Thu, Oct 26, 2017 at 9
Hi,
I am running the Livy server against Spark without Hadoop. I am
setting only SPARK_HOME, and I am getting this in the Livy UI logs
after job submission.
I am using pretty much the standard configuration except for
livy.spark.deploy-mode = cluster
Do I need to run with a Hadoop installation as well and