To use PySpark with MongoDB and Elasticsearch, I currently have to run
these rather long commands:

1) pyspark --executor-memory 10g \
     --jars ../lib/mongo-hadoop-spark-2.0.0-rc0.jar,../lib/mongo-java-driver-3.2.2.jar,../lib/mongo-hadoop-2.0.0-rc0.jar \
     --driver-class-path ../lib/mongo-hadoop-spark-2.0.0-rc0.jar:../lib/mongo-java-driver-3.2.2.jar:../lib/mongo-hadoop-2.0.0-rc0.jar

2) pyspark --jars ../lib/elasticsearch-hadoop-2.3.4.jar \
     --driver-class-path ../lib/elasticsearch-hadoop-2.3.4.jar

Can all of this be moved into my configuration, so that I don't have to
pass these lengthy options to pyspark every time?
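
For instance, I'm imagining something like the following in
conf/spark-defaults.conf. This is an untested sketch on my part;
spark.executor.memory, spark.jars, and spark.driver.extraClassPath are the
standard Spark property names that, as I understand it, correspond to the
flags I'm passing above:

    # conf/spark-defaults.conf -- untested sketch of what I'm hoping for.
    # Paths are relative to wherever pyspark is launched, so absolute
    # paths would probably be safer in a real config.

    # --executor-memory 10g
    spark.executor.memory        10g

    # --jars equivalent: a comma-separated list of jars to ship
    spark.jars                   ../lib/mongo-hadoop-spark-2.0.0-rc0.jar,../lib/mongo-java-driver-3.2.2.jar,../lib/mongo-hadoop-2.0.0-rc0.jar,../lib/elasticsearch-hadoop-2.3.4.jar

    # --driver-class-path equivalent: a colon-separated classpath
    spark.driver.extraClassPath  ../lib/mongo-hadoop-spark-2.0.0-rc0.jar:../lib/mongo-java-driver-3.2.2.jar:../lib/mongo-hadoop-2.0.0-rc0.jar:../lib/elasticsearch-hadoop-2.3.4.jar

With that in place I'd hope a bare "pyspark" would pick everything up, but
I'd welcome confirmation that this is the right approach.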

Thanks!
-- 
Russell Jurney twitter.com/rjurney russell.jur...@gmail.com relato.io
