Hi Vlad,
The ideal workflow for my use case is: I host two clusters. One is a
computation cluster that runs Spark jobs; the other is a data cluster that
hosts Ignite nodes and caches hot data. At run time, multiple Spark jobs
share this data cluster and query it. The problem I have is that I am
pre-
Thanks. Works fine now.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Multiple-servers-in-a-Ignite-Cluster-tp8840p8851.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Thanks. In that case, my question is: how do I define the scope of a
cluster (or, how do I specify which cluster a server belongs to)? I assume
that if someone else starts an Ignite node, my Ignite server would
auto-discover it as well?
I have the following Ignite config:

def initializeIgniteConfig() = {
  val ipFinder = new TcpDiscoveryVmIpFinder()
  val HOST = "xx.xx.xx.xx:47500..47509"
  ipFinder.setAddresses(Collections.singletonList(HOST))
  val discoverySpi = new TcpDiscoverySpi()
  discoverySpi.
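For reference, a completed version of this static-IP discovery setup would typically look like the sketch below. The host address and port range are placeholders carried over from the snippet, and the wiring (setIpFinder, setDiscoverySpi) follows the standard TcpDiscoverySpi API from ignite-core — treat it as a sketch, not the exact config from the original message:

```scala
import java.util.Collections
import org.apache.ignite.configuration.IgniteConfiguration
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder

// Sketch only: static-IP discovery, so the node joins a known cluster
// instead of discovering arbitrary nodes on the network.
def initializeIgniteConfig(): IgniteConfiguration = {
  val ipFinder = new TcpDiscoveryVmIpFinder()
  // Placeholder address; 47500..47509 is Ignite's default discovery port range
  ipFinder.setAddresses(Collections.singletonList("xx.xx.xx.xx:47500..47509"))

  val discoverySpi = new TcpDiscoverySpi()
  discoverySpi.setIpFinder(ipFinder)

  val cfg = new IgniteConfiguration()
  cfg.setDiscoverySpi(discoverySpi)
  cfg
}
```

Restricting the IP finder to a fixed address list is also one answer to the cluster-scoping question above: a server only joins nodes reachable through the addresses its IP finder knows about.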
Thanks Alexey.
By predicate/projection pushdown, I mean: currently I am storing a native
Spark Row object as the value format of the IgniteCache. If I retrieve it as
an IgniteRDD, I only want certain columns of that Row object, rather than
returning the entire Row and doing the filter/projection at the Spark level.
Do you
What I would like to do is achieve predicate/column-projection push-down to
the Ignite cache layer. I guess these two options could do it, couldn't
they? If so, what's the difference? Are there any other options to achieve
predicate push-down? Thanks in advance!
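For what it's worth, the usual way to get predicate and projection push-down with IgniteRDD is to issue a SQL query against the cache rather than filtering the retrieved RDD. A hedged sketch, assuming the cache holds a queryable type with fields named like the ones below (the cache name, type name, and fields are made up for illustration):

```scala
import org.apache.ignite.spark.IgniteContext

// Sketch only: push the WHERE filter and the column projection down to
// Ignite's SQL engine, so only the selected column of the matching rows
// crosses the network, instead of whole Rows being pulled into Spark.
val igniteContext: IgniteContext = ??? // created from the SparkContext elsewhere
val cacheRdd = igniteContext.fromCache[String, AnyRef]("myCache")

// sql() evaluates the query inside Ignite and returns a DataFrame
val projected = cacheRdd.sql("SELECT name FROM Person WHERE salary > ?", 1000)
```

This only works if the value type was registered with query entities/indexed types, so that Ignite knows the fields to expose to SQL.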
As the subject shows: is it possible to enable both REPLICATED and
PARTITIONED caches?
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Is-it-possible-to-enable-both-REPLICATED-and-PARTITIONED-tp8167.html
Hi Denis,
This is really helpful. Yes, I need the original dataframe for other APIs.
Now I am using RDD[(String, Row)] as the type and caching the dataframe
using:

val rdd = df.map(row => (row.getAs[String]("KEY"), row))
igniteRDD.savePairs(rdd)

It works perfectly fine. Also, I was able to reconstruct the d
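The round trip described here can be sketched end-to-end as below. Hedged: the cache name, the `KEY` column, and `sqlContext` are assumptions based on the snippet, and the original `df.schema` has to be kept around to rebuild the DataFrame:

```scala
import org.apache.spark.sql.Row
import org.apache.ignite.spark.IgniteContext

// Sketch only: cache a DataFrame as (key, Row) pairs, then rebuild it.
val igniteContext: IgniteContext = ??? // created elsewhere from the SparkContext
val igniteRDD = igniteContext.fromCache[String, Row]("dfCache")

// Save: key each Row by its "KEY" column
val pairs = df.map(row => (row.getAs[String]("KEY"), row))
igniteRDD.savePairs(pairs)

// Reconstruct: take the values back out and reapply the original schema
val restored = sqlContext.createDataFrame(igniteRDD.map(_._2), df.schema)
```

Note that storing opaque Row values this way gives back the DataFrame, but (unlike a custom data model class with query entities) it does not make the columns visible to Ignite SQL.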
Thanks for the prompt reply. So if I want to cache a dataframe in
IgniteCache, I have to define a custom data model class (e.g.
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/model/Person.java
) as the schema of the dataframe, then construct objects and declare
Hi team,
I was trying to cache a dataframe in Ignite Cache. I was able to cache
generic-type data elements (RDD). However, each time I use
igniteRDDF.saveValues() to cache a non-generic data type (e.g. RDD), it
triggers a NoSuchMethodError for saveValues, as the following shows. I am
using Scala 2.10
Hey team,
I was able to use a JDBC driver tool to access the IgniteCache. Is it
possible to connect to Ignite through the Spark JDBC data source API? Below
are the code and the exceptions I got. It seems like the connection is
successful, but there are datatype-mapping issues. Do I need to define some
schema from t
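For context, such a connection attempt usually looks something like the sketch below. The driver class, URL format, config path, and table name are all illustrative assumptions — check the JDBC driver documentation for the Ignite version actually deployed, since the URL scheme differs between driver generations:

```scala
// Sketch only: read an Ignite table through Spark's generic JDBC source.
// Whether the datatypes map cleanly depends on the query entity types
// declared for the cache, which is likely where the exceptions come from.
val df = sqlContext.read
  .format("jdbc")
  .option("driver", "org.apache.ignite.IgniteJdbcDriver")
  .option("url", "jdbc:ignite:cfg://file:///path/to/ignite-jdbc.xml")
  .option("dbtable", "Person") // hypothetical queryable type in the cache
  .load()
```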