spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:219)
...
Thanks for the help!
Kind Regards,
Joris van Agtmaal
+(0) 6 25 39 39 06
From: Stephen Etheridge [mailto:setheri...@basho.com]
Sent: 13 September 2016 14:45
To: Agtmaal, Joris van
Cc: riak-users@lists.basho.com
patience with my rookie mistakes.
;-)
Kind Regards,
Joris van Agtmaal
+(0) 6 25 39 39 06
From: Alex Moore [mailto:amo...@basho.com]
Sent: 13 September 2016 15:35
To: Stephen Etheridge
Cc: Agtmaal, Joris van;
riak-users@lists.basho.com; Manu Marchal
Subject: Re: RIAK TS installed nodes not
Joris,
One thing to check - since you are using a downloaded jar, are you using
the Uber jar that contains all the dependencies?
http://search.maven.org/remotecontent?filepath=com/basho/riak/spark-riak-connector_2.10/1.6.0/spark-riak-connector_2.10-1.6.0-uber.jar
Thanks,
Alex
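For reference, a minimal sketch of how the uber jar would typically be passed to Spark. This is not from the thread itself: the local jar path, script name, and Riak host are assumptions; only the jar filename and the idea of bundling all dependencies come from Alex's message.

```shell
# Hypothetical invocation: hand the uber jar (which bundles all of the
# connector's dependencies) to Spark via --jars, so nothing is missing
# at runtime. Paths and host below are placeholders, not from the thread.
spark-submit \
  --jars /path/to/spark-riak-connector_2.10-1.6.0-uber.jar \
  --conf spark.riak.connection.host=127.0.0.1:8087 \
  my_riak_ts_job.py
```

The same --jars flag works with the pyspark shell; the point is that the uber jar replaces the plain connector jar, which does not carry its transitive dependencies.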
On Tue, Sep 13, 2016
Hi Joris,
I have looked at the tutorial you have been following, but I confess I am
confused. In the example you are following, I do not see where the Spark
and SQL contexts are created. I use PySpark through the Jupyter notebook,
and I have to specify a path to the connector when invoking the jupyter notebook.
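One possible way to do what Stephen describes, sketched here as an assumption rather than his exact setup: launch the pyspark shell with Jupyter as the driver and pass the connector jar on the command line. The jar path is a placeholder.

```shell
# Assumed setup (not quoted from the thread): run PySpark inside a
# Jupyter notebook, with the connector uber jar on the driver/executor
# classpath via --jars.
export PYSPARK_DRIVER_PYTHON=jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
pyspark --jars /path/to/spark-riak-connector_2.10-1.6.0-uber.jar
```

With this launch style, pyspark creates the SparkContext (`sc`) and SQLContext (`sqlContext`) for you inside the notebook, which may be why the tutorial never shows them being constructed explicitly.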