Hi Denis,
The case can be avoided by checking whether the key exists first. Alternatively, I can preload all the data into the cache and disable the read-through setting.
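Roughly what I have in mind (just a sketch; the cache name and types are made up, and I assume an Ignite instance that is already started):

    IgniteCache<Integer, String> cache = ignite.cache("myCache");

    // Check for the key first so a miss never falls through to a store load.
    String val = cache.containsKey(key) ? cache.get(key) : null;

    // Or, if everything is preloaded into the cache, turn read-through off:
    CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
    ccfg.setReadThrough(false);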
Thanks for your suggestion.
Cheers,
Kevin
--
Hi Dsetrakyan,
I have 2 servers, with 48 GB and 32 GB of memory respectively.
| Heap memory used/max | 29 GB / 43 GB |
| Heap memory used/max | 28 GB / 30 GB |
cheers,
Kevin
--
I am facing a similar problem: high memory consumption.
Loading 47 million records with the data streamer took 30 minutes, and memory usage reached 51 GB.
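For reference, the loading code is basically this (simplified; the cache name, key/value types and the record source are placeholders):

    // Simplified version of the loading loop using IgniteDataStreamer.
    try (IgniteDataStreamer<Long, Record> streamer = ignite.dataStreamer("recordCache")) {
        streamer.allowOverwrite(false);      // keys are unique, so no overwrite checks needed

        for (Record r : readRecords()) {     // stands in for the 47M-row source
            streamer.addData(r.getId(), r);
        }
    }   // close() flushes whatever is still buffered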
--
Hi Binti,
Thank you for your help.
May I ask how to trigger a GC on each node? Should it be done from code, or with a command inside the CLI?
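Would something like this be the right way to do it from code (just a guess on my side)?

    // Guess: broadcast a closure so every node runs a full GC locally.
    ignite.compute().broadcast(new IgniteRunnable() {
        @Override public void run() {
            System.gc();
        }
    });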
best regards,
Kevin
--
Thank you for your help!
--
BTW, I created 4 + 3 nodes on two servers.
Each node was started with a command like this: ./ignite.sh -J-Xmx8g -J-Xms8g
kind regards,
Kevin
--
Hi Val,
When I delete the jar file with the classes from the ./libs folder and set peerClassLoadingEnabled to true inside the configuration file, I get the following exception message:
[09:57:06,516][SEVERE][tcp-disco-msg-worker-#2%null%][TcpDiscoverySpi]
Failed to unmarshal discovery custom message.
class org.
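For reference, the change I made is equivalent to this in Java-based configuration (I actually set it in the Spring XML file):

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setPeerClassLoadingEnabled(true);   // let nodes exchange classes at runtime

    Ignite ignite = Ignition.start(cfg);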
Hi Val,
Thank you for your suggestion. I have two more questions regarding the table loading problem.
1. Currently I have a table with 20 million records, and it takes almost 2 hours to load it all into the cache. Is such a long loading time normal? (I have set the off-heap size to 30 GB and started with 2 nodes (e
Hi there,
I am trying to load a large table into the cache (about 10 GB).
I have started 8 nodes on two servers (4 nodes each).
The servers have 48 GB and 32 GB of memory respectively.
During the loading process I got the following exception message, and all the nodes stopped:
[14:19:15,322][SEVER
I think Ignite should add example code or documentation for beginners. I searched the forum but found no results. The error log message did give me a hint about where the problem occurred (in my case, the object serialization part).
--
Finally, I found the solution: I need to use "cache.withKeepBinary()" when calling the query method if the cache key is an object type.
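In case anyone else hits this, the call looks roughly like this (the query and the key/value classes are just examples):

    // Switch the cache to binary mode so entries are handled as BinaryObject
    // instead of being deserialized into the key/value classes.
    IgniteCache<BinaryObject, BinaryObject> binCache = cache.withKeepBinary();

    SqlFieldsQuery qry = new SqlFieldsQuery("select name from Person where id = ?").setArgs(1);

    for (List<?> row : binCache.query(qry).getAll())
        System.out.println(row);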
--
Dear all,
I have a table which has no primary index (I need to use two columns as a composite key). At the beginning, I used only one column as the key to load all the data into the cache. However, lots of records were ignored (because they had the same key). Then I created a Java object to store the two columns as the key.
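The key class looks roughly like this (field names changed; equals/hashCode over both fields is the important part, so records with the same pair are no longer treated as duplicates):

    import java.io.Serializable;

    // Simplified composite key over the two columns.
    public class CompositeKey implements Serializable {
        private final String colA;   // placeholder column names
        private final String colB;

        public CompositeKey(String colA, String colB) {
            this.colA = colA;
            this.colB = colB;
        }

        @Override public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof CompositeKey)) return false;
            CompositeKey k = (CompositeKey) o;
            return colA.equals(k.colA) && colB.equals(k.colB);
        }

        @Override public int hashCode() {
            return 31 * colA.hashCode() + colB.hashCode();
        }
    }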
Hi there,
I am a new Ignite user. I noticed that SQL query performance is much slower than the plain cache.get(x) method. In my case, I ran a simple query twice: the first time, I called a SQL query 100 times and it took 235 ms (average 23.5 ms per query); the second time, I called the query
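For context, the two kinds of calls I am comparing look roughly like this (the Person class, the field name and the key type are just examples):

    // Plain key lookup.
    Person p = cache.get(42);

    // The same lookup through SQL.
    SqlFieldsQuery qry = new SqlFieldsQuery("select * from Person where id = ?").setArgs(42);
    List<List<?>> rows = cache.query(qry).getAll();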