Hi Everybody, 

since I'm new to Spark and Zeppelin, I hope my question is in the right place.
I played around with Zeppelin and Spark and tried to load data by connecting to
an Elasticsearch cluster.
But to be honest I have no clue how to set up Zeppelin or the notebook to use
the elasticsearch-hadoop/Spark library (jar) so that I can connect using pyspark.
Do I have to copy the jar somewhere into the Zeppelin folders?

My plan is to transfer an index/type from Elasticsearch to DataFrames in Spark.
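
To give a bit of context, this is roughly the kind of pyspark code I'm hoping to
end up with, based on the elasticsearch-hadoop documentation. It's just a sketch,
the host and index/type names are made up, and I haven't been able to run it yet:

    from pyspark.sql import SparkSession

    # In a Zeppelin %pyspark paragraph a SparkSession/sqlContext is normally
    # already provided; building one here just to keep the example complete.
    spark = (SparkSession.builder
             .appName("es-to-dataframe")
             .getOrCreate())

    # Read an Elasticsearch index/type into a DataFrame via the
    # elasticsearch-hadoop connector (org.elasticsearch.spark.sql).
    df = (spark.read
          .format("org.elasticsearch.spark.sql")
          .option("es.nodes", "my-es-host")   # hypothetical cluster address
          .option("es.port", "9200")
          .load("my-index/my-type"))          # hypothetical index/type

    df.printSchema()
    df.show(5)

If that general approach is right, my main question is really just where the
elasticsearch-hadoop jar has to go so that this format is picked up by Zeppelin.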

Could somebody give me a short explanation of how to set this up, or point me
to the right documentation?

Any help would be appreciated.

Thanks a lot
Sven
