Hi,

Ignite 2.7.5 requires Spark 2.3.x.

You should start separate Spark and Ignite clusters:

https://apacheignite.readme.io/docs/getting-started
https://spark.apache.org/docs/2.3.0/spark-standalone.html

After that, you should add all required Ignite libraries to your driver and executor classpaths. You can either copy all Ignite jars to every Spark node and add them to the Spark classpath, or use a script like the following to submit your Spark job:

#!/usr/bin/env bash
# Usage: run_example.sh <libs-dir> <main-class> <path-to-jar> [args...]
LIBS_DIR=$1
EXAMPLE_CLASS=$2
PATH_TO_JAR=$3

# Collect every jar under the libs directory.
JARS=$(find "$LIBS_DIR" -name '*.jar')

# Build a colon-separated classpath of file: URLs.
EXECUTOR_PATH=""
for eachjarinlib in $JARS ; do
    # "ABCDEFGHIJKLMNOPQRSTUVWXYZ.JAR" is a placeholder for any jar
    # you want to exclude from the classpath.
    if [ "$eachjarinlib" != "ABCDEFGHIJKLMNOPQRSTUVWXYZ.JAR" ]; then
        EXECUTOR_PATH=file:$eachjarinlib:$EXECUTOR_PATH
    fi
done

# Replace the --master URL with your own Spark master.
spark-submit --deploy-mode client --master spark://andrei-ThinkPad-P51s:7077 \
    --conf "spark.driver.extraClassPath=$EXECUTOR_PATH" \
    --conf "spark.executor.extraClassPath=$EXECUTOR_PATH" \
    --class "$EXAMPLE_CLASS" "$PATH_TO_JAR" "$4" "$5" "$6" "$7"
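To make the classpath format concrete, here is a standalone sketch of what that loop produces, using a throwaway directory and two empty placeholder jar names I made up:

```shell
# Scratch directory with two empty placeholder jars
# (the names are made up purely for illustration).
LIBS_DIR=$(mktemp -d)
touch "$LIBS_DIR/ignite-core.jar" "$LIBS_DIR/ignite-spark.jar"

# Same logic as the loop above: prepend each jar as a file: URL,
# joined with colons.
EXECUTOR_PATH=""
for eachjarinlib in $(find "$LIBS_DIR" -name '*.jar'); do
    EXECUTOR_PATH=file:$eachjarinlib:$EXECUTOR_PATH
done

echo "$EXECUTOR_PATH"

rm -rf "$LIBS_DIR"
```

The result is a colon-separated list of file: URLs (with a trailing colon, which Spark tolerates) suitable for the spark.driver.extraClassPath and spark.executor.extraClassPath options.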

If you use a Maven project, the libraries can be collected in one place with the maven-dependency-plugin:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <version>3.1.1</version>
  <executions>
    <execution>
      <id>copy-sources</id>
      <phase>package</phase>
      <goals>
        <goal>copy-dependencies</goal>
      </goals>
      <configuration>
        <outputDirectory>target/libs</outputDirectory>
        <overWriteReleases>false</overWriteReleases>
        <overWriteSnapshots>false</overWriteSnapshots>
        <overWriteIfNewer>true</overWriteIfNewer>
      </configuration>
    </execution>
  </executions>
</plugin>
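For reference, with that plugin in place an ordinary package build is all it takes: the copy-dependencies goal is bound to the package phase, so the dependency jars (including the ignite-* ones) land in target/libs:

```shell
# Build the project; copy-dependencies runs during the package phase
# and copies all dependency jars into target/libs.
mvn clean package

# Inspect the collected jars.
ls target/libs
```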

After that, your start command will look like this:

bash run_example.sh ./target/libs com.some.your.ClassName ./target/your.jar client.xml

You can see an example Spark job here (the code from this link can be used with Ignite as well):

https://docs.gridgain.com/docs/cross-database-queries

BR,
Andrei

On 9/22/2019 8:36 PM, George Davies wrote:
I already have a standalone Ignite cluster running on k8s and can run SQL statements against it fine.

Part of the requirements of the system I am building is to perform v-pivots on the query result set.

I've seen Spark come up as a good solution for v-pivots, so I'm trying to set up a simple master + executor cluster.

I have added all the Ignite libs to the classpath per the docs, but when I attempt to launch the master I get the error:

Error SparkUncaughtExceptionHandler:91 - Uncaught Exception in thread Thread[main,5,main]
java.io.IOException: failure to login
Caused by: javax.security.auth.login.LoginException: java.lang.NullPointerException: invalid null input: name

Any pointers on what I am doing incorrectly? I don't have a separate HDFS cluster to log in to; I just want to use Spark over the Ignite caches.


