Hi,
For an example of table creation via native API please check
https://apacheignite-sql.readme.io/docs/schema-and-indexes#section-overview.
Kind regards,
Alex
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Yes, that worked for me. I guess this will be fixed in version 2.4.
Final version of JavaIgniteContext construction looks like:
JavaIgniteContext igniteContext
    = new JavaIgniteContext<>(jctx, () -> {
        IgniteConfiguration cfg = null;
        try {
            cfg =
                Ignition
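The snippet above is cut off after `Ignition`. A minimal sketch of what the complete construction might look like, assuming the node configuration is loaded from a Spring XML file via `Ignition.loadSpringBean` (the path and bean name here are placeholders, not from the original post) and client mode is forced as discussed in this thread:

```java
JavaIgniteContext<Long, BinaryObject> igniteContext =
    new JavaIgniteContext<>(jctx, () -> {
        IgniteConfiguration cfg = null;
        try {
            // Placeholder path and bean name: adjust to your deployment.
            cfg = Ignition.loadSpringBean("config/ignite.xml", "ignite.cfg");
            // Force client mode so the executor-side node holds no data.
            cfg.setClientMode(true);
        } catch (IgniteException e) {
            throw new RuntimeException("Failed to load Ignite configuration", e);
        }
        return cfg;
    });
```

The closure runs on each Spark executor, so the configuration file must be reachable from every worker node.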
Hi,
Your assumptions are correct. There is also an issue [1] that is likely
causing this behavior. As a workaround, you can try to force IgniteContext to
start everything in client mode. To achieve this, call
setClientMode(true) in the closure that creates the IgniteConfiguration:
IgniteOutClo
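The message is truncated at `IgniteOutClo`, presumably an `IgniteOutClosure`. A sketch of the workaround under the assumption of a purely programmatic configuration (discovery settings omitted for brevity):

```java
// Sketch: force every node started by IgniteContext into client mode.
IgniteOutClosure<IgniteConfiguration> cfgClosure = () -> {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setClientMode(true); // join the existing cluster as a client only
    return cfg;
};

JavaIgniteContext<Long, BinaryObject> ic = new JavaIgniteContext<>(jctx, cfgClosure);
```

With client mode on, the executor-side nodes hold no partitions, so stopping an executor cannot remove cache data.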
Thank you for suggestions.
I have a 5-node standalone Ignite cluster, and the main goal is to load data
into it and store it long-term for future use. I can't keep the Spark
workers in memory, and I assume my data ends up in a cache distributed
across the 5 standalone Ignite nodes.
Spark process is
Looks like you're running in embedded mode. In this mode, server nodes are
started within the Spark executors, so when an executor is stopped, some of the
data is lost. Try starting a separate Ignite cluster and creating the
IgniteContext with standalone=true.
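A sketch of that setup, assuming the three-argument JavaIgniteContext constructor that takes a Spring configuration URL and a standalone flag (the file name is a placeholder):

```java
// standalone=true tells IgniteContext to connect to an external, pre-started
// cluster instead of embedding server nodes inside the Spark executors.
JavaIgniteContext<Long, BinaryObject> ic =
    new JavaIgniteContext<>(jctx, "config/example-ignite.xml", true /* standalone */);
```

In standalone mode, data survives executor shutdown because it lives only on the separately managed Ignite server nodes.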
-Val
Switched to Ignite 2.3.0 in the hope that it behaves better.
Unfortunately, it does not.
During the execution of the Spark job the number of cache rows grows, but after
the Spark job completes it looks like some entries have been removed.
JavaIgniteRDD shows the correct count, but again the final result is incorrect.
I wa
Hi Val
Thank you for the response.
You can find the Maven project here:
https://github.com/Soroka21/ign-loader-spark
This app actually loads any Parquet file into the cache (let me know if you need
one).
I've tried to run it on 1.2 million records with the same symptoms - looks like my
app is working with portio
Alexey,
Something is wrong, but I don't see any obvious mistakes in your code. Is it
possible to provide a test as a standalone GitHub project so that I can run
it and reproduce the problem?
Is it reproduced on smaller data sets? Or if you load not through Spark, but
just do regular put/putAll ope
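A sketch of that non-Spark check, assuming a cache named TEST already exists on the cluster; the type name and field names below are illustrative placeholders, not the poster's real schema:

```java
// Sketch only: requires a running external Ignite cluster on the classpath.
Ignition.setClientMode(true);
try (Ignite ignite = Ignition.start()) {
    IgniteCache<Long, BinaryObject> cache =
        ignite.<Long, BinaryObject>cache("TEST").withKeepBinary();

    Map<Long, BinaryObject> batch = new HashMap<>();
    for (long i = 0; i < 1_000_000; i++) {
        // Build a binary object directly; type and field names are illustrative.
        BinaryObject row = ignite.binary().builder("DATASET1")
            .setField("F00", "value-" + i)
            .setField("F01", (int) i)
            .build();
        batch.put(i, row);

        if (batch.size() == 10_000) { // flush in batches with putAll
            cache.putAll(batch);
            batch.clear();
        }
    }
    if (!batch.isEmpty())
        cache.putAll(batch);

    // If this count stays stable and correct after the loader exits,
    // the data loss is specific to the Spark path.
    System.out.println("Cache size: " + cache.size(CachePeekMode.PRIMARY));
}
```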
Hi,
I've loaded 50 million BinaryObjects into the TEST cache using Apache Spark.
They look like this:
o.a.i.i.binary.BinaryObjectImpl | DATASET1 [hash=86282065, F01=-206809353,
F00=A1782096681-B2022047863-C554782990, F03=Must be timestamp,
F02=2.6983596317719E8, F05=182918247,
F04=A1997114384-B293944