You can use PySpark exactly as you normally do. So something like this works:
stuff = spark.read \
     .format("ignite") \
     .option("config", "ignite-client.xml") \
     .option("table", "Stuff") \
     .option("primaryKeyFields", "ID") \
     .load()
You might need to check the Java documentation for some of the configuration 
options, but it's mostly pretty straightforward.
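
For reference, the "ignite-client.xml" passed in the config option is a standard Ignite Spring bean configuration file. A minimal sketch might look like the following (the discovery address is a placeholder — substitute the address of your own server nodes):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- Join the cluster as a client node, not a server node -->
        <property name="clientMode" value="true"/>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <!-- Replace with your server node address(es) -->
                                <value>127.0.0.1:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
```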

In Ignite 2.7 there's also a thin Python client. You could use ODBC or the REST 
API if you have a specific requirement that isn't met by the other methods, but 
off the top of my head I'm not sure what that would be.

Regards,
Stephen

> On 11 Dec 2018, at 23:51, anthonycwmak <anthonycw...@gmail.com> wrote:
> 
> I am interested to use Ignite to speedup Spark as in
> https://apacheignite-fs.readme.io/docs/ignite-for-spark, but all the example
> seems to be in Java/Scala. Is there an easy way to do the same in Python? I
> read somewhere that Ignite has an ODBC driver and perhaps a RESTful api as
> an alternative. Could anyone share your experiences what is the best/easiest
> way at the current state to do the above in Python?
> 
> Anthony 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
