Hi Team,
Please, could anybody help me or guide me to solve this issue?
Thanks & Regards,
Venkat
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Additional findings: the SQL schema defined by running SQL CREATE TABLE works
fine except for the date field. So I suspect the error message
"BinaryInvalidTypeException: Unknown pair [platformId=0, typeId=-1854586790]"
probably means the framework cannot find a mapping between Java
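For reference, "Unknown pair [platformId=..., typeId=...]" generally means the binary
marshaller cannot resolve the stored type id back to a class on the reading side. Below
is a minimal sketch (table, package, and class names are made up) of how a CREATE TABLE
with a DATE column and the matching Java value class would need to line up, assuming the
value class is on the classpath of every node:

    // SQL side (run once, e.g. via SqlFieldsQuery or a JDBC client); VALUE_TYPE must be
    // the fully qualified name of the Java value class:
    //
    //   CREATE TABLE Event (
    //     id BIGINT PRIMARY KEY,
    //     name VARCHAR,
    //     eventDate DATE
    //   ) WITH "VALUE_TYPE=com.example.Event";

    package com.example;

    import java.sql.Date;   // SQL DATE columns are normally read/written as java.sql.Date

    public class Event {
        private long id;
        private String name;
        private Date eventDate;   // field names must match the column names

        // getters/setters omitted
    }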
Hi,
Can you share the full code? Especially the part where you close or flush it.
Evgenii
Fri, Nov 16, 2018 at 17:43, KR Kumar:
> Following code creates the data streamer instance
>
> dataStreamer =
> IgniteContextWrapper.getInstance().getEngine().dataStreamer("eventCache-" +
>
The following code creates the data streamer instance:

dataStreamer =
    IgniteContextWrapper.getInstance().getEngine().dataStreamer("eventCache-" +
        System.getProperty(RUN_ID) + "-" + tenantId);

and for writing the data to the cache:

dataStreamer.addData(key, value);

Nothing
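In case it helps: IgniteDataStreamer buffers addData() calls and ships them to the
cluster asynchronously, so entries are only guaranteed to be in the cache after flush()
or close(). A minimal sketch of the usual try-with-resources pattern (the cache name and
key/value types here are placeholders, not the code from this thread):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;
    import org.apache.ignite.Ignition;

    public class StreamerSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();
            ignite.getOrCreateCache("eventCache");   // the streamer needs an existing cache

            // try-with-resources guarantees close(), which flushes any buffered entries.
            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("eventCache")) {
                streamer.allowOverwrite(true);        // needed if keys may already exist

                for (long key = 0; key < 1_000; key++) {
                    streamer.addData(key, "value-" + key);   // buffered, asynchronous
                }

                streamer.flush();   // optional here: close() flushes as well
            }
        }
    }

If the streamer is kept open in a long-lived wrapper and never flushed or closed, the
last buffered batch never reaches the cache, which could explain nulls for a few
seemingly random keys.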
Hi,
Can you share the code where you load data with the DataStreamer?
Evgenii
Fri, Nov 16, 2018 at 16:02, KR Kumar:
> Hi Guys - I have a weird problem. When I am using the data streamer to write
> data to Ignite (file-based persistence), not all the entries are getting
> persisted, as later some ge
Hi All,
I am getting an exception while creating a cache. Here is the piece of code:

boolean check = true;
String cacheName = id + "XXX";
try {
    IgniteCache igniteCache = ignite.cache(cacheName);
    if (igniteCache == null) {
        CacheConfiguration cfg = new CacheConfigura
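As a side note, the check-for-null-then-create pattern above is racy if several nodes or
threads run it at the same time; getOrCreateCache() does both steps atomically. A rough
sketch, with a made-up cache name in place of id + "XXX":

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class CacheCreateSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            String cacheName = "someId" + "XXX";   // placeholder for id + "XXX"

            // Returns the existing cache or creates it from the configuration in one
            // cluster-wide atomic step, so no null check is needed.
            CacheConfiguration<Long, String> cfg = new CacheConfiguration<>(cacheName);
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);

            cache.put(1L, "hello");
        }
    }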
Hi - I have an application where the writes happen mostly from one node and
reads/compute happen on all the nodes. The problem that I have is that the
data is not getting distributed across nodes, meaning the disk on the node
where the writes are happening is getting filled and is running out of
space.
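One thing worth checking when native persistence is enabled: only nodes in the baseline
topology own partitions, so if the other nodes joined after the first activation and the
baseline was never updated, writes (and disk usage) keep landing on the original node.
A hedged sketch of the activation/baseline calls and a partitioned cache; the storage
settings, cache name, and backup count below are placeholders, not this cluster's actual
configuration:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheMode;
    import org.apache.ignite.configuration.CacheConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class BaselineSketch {
        public static void main(String[] args) {
            // Node with native persistence enabled on the default data region.
            DataStorageConfiguration storage = new DataStorageConfiguration();
            storage.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
            Ignite ignite = Ignition.start(
                new IgniteConfiguration().setDataStorageConfiguration(storage));

            // With persistence the cluster starts inactive; after all nodes have joined,
            // activate and set the baseline to the current topology so every node owns
            // partitions.
            ignite.cluster().active(true);
            ignite.cluster().setBaselineTopology(ignite.cluster().topologyVersion());

            // A PARTITIONED cache spreads primary partitions across all baseline nodes.
            CacheConfiguration<Long, byte[]> cfg = new CacheConfiguration<>("myCache");
            cfg.setCacheMode(CacheMode.PARTITIONED);
            cfg.setBackups(1);
            ignite.getOrCreateCache(cfg);
        }
    }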
Hi Guys - I have a weird problem. When I am using the data streamer to write data
to Ignite (file-based persistence), not all the entries are getting
persisted: later, some gets are returning nulls for a few keys. This is very
random in terms of which keys get persisted but consistent in terms of not
No. As you can see in the configuration, it is ATOMIC mode. I did not write
any transactional code, just get, put, and scanQuery.
Native persistence is enabled.
Most of the writes are done by one node; the other 2 nodes mostly read and
write less.
The cluster has only 3 nodes.
Each node has 4 CPUs and 8
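For reference, a minimal sketch of that access pattern on an ATOMIC cache; the cache
name, keys, and scan filter below are made up for illustration:

    import java.util.List;
    import javax.cache.Cache;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.CacheAtomicityMode;
    import org.apache.ignite.cache.query.QueryCursor;
    import org.apache.ignite.cache.query.ScanQuery;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class AtomicAccessSketch {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // ATOMIC atomicity mode: plain puts/gets, no transactions.
            CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("eventCache");
            cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
            IgniteCache<Long, String> cache = ignite.getOrCreateCache(cfg);

            cache.put(1L, "hello");
            String value = cache.get(1L);

            // Scan query over all entries whose value starts with "hel".
            try (QueryCursor<Cache.Entry<Long, String>> cursor =
                     cache.query(new ScanQuery<Long, String>((k, v) -> v.startsWith("hel")))) {
                List<Cache.Entry<Long, String>> matched = cursor.getAll();
                System.out.println("Matched " + matched.size() + " entries; get returned " + value);
            }
        }
    }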