Memory leak: Ignite runs slower and slower after a period of time.

2019-04-10 Thread BinaryTree
When my Ignite clients have been running for a while, they become slower and 
slower, and the following output appears in our GC logs:
2019-04-10T06:42:47.885+: 62271.788: [Full GC (Ergonomics) [PSYoungGen: 
1494016K->1494005K(1797120K)] [ParOldGen: 2097006K->2097006K(2097152K)] 
3591022K->3591012K(3894272K), [Metaspace: 103757K->103757K(1144832K)], 
9.9864029 secs] [Times: user=19.85 sys=0.00, real=9.98 secs]
2019-04-10T06:42:57.874+: 62281.777: [Full GC (Ergonomics) [PSYoungGen: 
1494015K->1494012K(1797120K)] [ParOldGen: 2097006K->2097006K(2097152K)] 
3591022K->3591019K(3894272K), [Metaspace: 103757K->103757K(1144832K)], 
9.9982344 secs] [Times: user=19.87 sys=0.00, real=9.99 secs]
2019-04-10T06:43:07.874+: 62291.778: [Full GC (Ergonomics) [PSYoungGen: 
1494016K->1494014K(1797120K)] [ParOldGen: 2097006K->2097006K(2097152K)] 
3591022K->3591020K(3894272K), [Metaspace: 103757K->103757K(1144832K)], 
10.0803891 secs] [Times: user=19.93 sys=0.00, real=10.08 secs]
This output shows that the old generation stays pinned at its 2 GB capacity 
and that back-to-back full GCs reclaim almost nothing, so I am sure some 
objects are never being recycled. I dumped the heap and analyzed it in the 
Eclipse Memory Analyzer; here are the reports the tool produced.

[Eclipse Memory Analyzer report screenshots omitted from the archive.]
From the report, I guess some bug or inappropriate usage is preventing 
GridReduceQueryExecutor from being recycled, but I don't know the specific 
reason, so I hope you can give me some advice.

These code segments show how I execute queries:

public List<DpCache> query(String key, String value) {
    List<DpCache> list = Lists.newArrayList();
    String fields = "id, gmtCreate, gmtModified, devId, dpId, code, name, "
            + "customName, mode, type, value, rawValue, time, status, uuid";
    String sql = "select " + fields + " from "
            + IgniteTableKey.T_DATA_POINT_NEW + " where " + key + "='" + value + "'";
    FieldsQueryCursor<List<?>> cursor = newIgniteCache.query(new SqlFieldsQuery(sql));
    for (List<?> objects : cursor) {
        DpCache cache = convertToDpCache(objects);
        list.add(cache);
    }
    return list;
}

public DpCache queryOne(String devId, Integer dpId) {
    DpCache cache = null;
    String fields = "id, gmtCreate, gmtModified, devId, dpId, code, name, "
            + "customName, mode, type, value, rawValue, time, status, uuid";
    String sql = "select " + fields + " from "
            + IgniteTableKey.T_DATA_POINT_NEW + " where devId=? and dpId=?";

    SqlFieldsQuery query = new SqlFieldsQuery(sql);
    query.setArgs(devId, dpId);
    FieldsQueryCursor<List<?>> cursor = newIgniteCache.query(query);
    Iterator<List<?>> iterator = cursor.iterator();
    if (iterator.hasNext()) {
        cache = convertToDpCache(iterator.next());
    }
    return cache;
}

public boolean hasRecord(String devId, Integer dpId) {
    String sql = "select 1 from t_data_point_new where devId=? and dpId=?";
    SqlFieldsQuery query = new SqlFieldsQuery(sql);
    query.setArgs(devId, dpId);

    FieldsQueryCursor<List<?>> cursor = newIgniteCache.query(query);
    Iterator<List<?>> iterator = cursor.iterator();
    return iterator.hasNext();
}

public void invokeAllAsync(Map<String, Object> map) { // generics assumed; stripped in the archive
    Map<String, DataPointEntryProcessor> processorMap = Maps.newHashMap();
    for (Map.Entry<String, Object> entry : map.entrySet()) {
        processorMap.put(entry.getKey(), new DataPointEntryProcessor(entry.getValue()));
    }
    newIgniteCache.invokeAllAsync(processorMap);
}
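
One thing I am not sure about: I never close or fully drain the query cursors 
(queryOne and hasRecord stop after the first row). If GridReduceQueryExecutor 
keeps per-query state until a cursor is fully iterated or closed, that could 
explain the retained objects. Here is a sketch of what I suspect the correct 
usage looks like, using try-with-resources (QueryCursor implements 
AutoCloseable):

public DpCache queryOne(String devId, Integer dpId) {
    String sql = "select ... from " + IgniteTableKey.T_DATA_POINT_NEW
            + " where devId=? and dpId=?";
    SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(devId, dpId);

    // Closing the cursor releases the server-side query resources even
    // though only the first row is read.
    try (FieldsQueryCursor<List<?>> cursor = newIgniteCache.query(query)) {
        Iterator<List<?>> iterator = cursor.iterator();
        return iterator.hasNext() ? convertToDpCache(iterator.next()) : null;
    }
}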


Any advice would be appreciated.

Looking forward to your reply.

Can I update a specific field of a BinaryObject?

2019-03-08 Thread BinaryTree
Hi Igniters - 


As far as I know, igniteCache.put(K, V) replaces the entire value of K with V, 
but sometimes I would like to update only a specific field of V instead of the 
whole object.
I know that I can update a specific field via 
igniteCache.query(SqlFieldsQuery), but the "how Ignite SQL works" documentation 
says that Ignite generates SELECT queries internally before it UPDATEs or 
DELETEs a set of records, so this may not perform as well as 
igniteCache.put(K, V). Is there a way to update a specific field without 
running a query?
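
For example, is something like the following reasonable? (Just a sketch: it 
assumes a cache in binary mode, and the key "someKey" and the field "status" 
are placeholders for illustration.)

// Update a single field of a BinaryObject via an EntryProcessor, so the value
// is rebuilt on its owner node without any SQL. CacheEntryProcessor is from
// org.apache.ignite.cache; MutableEntry is from javax.cache.processor.
IgniteCache<String, BinaryObject> binCache = igniteCache.<String, BinaryObject>withKeepBinary();

binCache.invoke("someKey", new CacheEntryProcessor<String, BinaryObject, Void>() {
    @Override public Void process(MutableEntry<String, BinaryObject> entry, Object... args) {
        BinaryObject current = entry.getValue();
        if (current != null) {
            // toBuilder() keeps all existing fields; only "status" is overwritten.
            entry.setValue(current.toBuilder().setField("status", Boolean.TRUE).build());
        }
        return null;
    }
});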

Ignite put/query takes more than 500ms occasionally

2019-03-03 Thread BinaryTree
Hello Igniters -
I have an Ignite cluster with an 8c16 configuration and SSD disks; sometimes 
the cache put/query operations take more than 500 ms.


Here is my cache configuration:
https://github.com/RedBlackTreei/streamer


And I dumped five thread logs when a timeout (>500 ms) occurred; you can find 
them in the attachments.


Looking forward to your replies.


Is there a mechanism that allows the user to evict cache entries that relate to an affinity key?

2019-03-01 Thread BinaryTree
I have a cache that contains many data points. A data point looks like:
dpId  integer
devId String
name  String
A data point relates to a device; the relationship is one-to-many, and they 
are connected by devId, so devId is the affinity key.

The cache key is:
//key = devId + "_" + dpId
private String key;
@AffinityKeyMapped
private String devId;

public DpKey() {
}
For product reasons, we need to keep each device's set of data points intact, 
but when eviction policies are triggered, part of the records that belong to a 
device may be evicted.

So my question is:

Is there a mechanism that allows the user to evict all cache entries that 
relate to an affinity key?

If not, is there a convenient way to implement it, and how?
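
If there is no built-in mechanism, I was considering something like the sketch 
below. Note that it removes the entries rather than evicting them, and it 
assumes the t_data_point_new table and the DpKey class from my configuration:

// Collect every cache key that shares one devId via SQL (_key is the built-in
// column that exposes the cache key), then drop them as a single batch.
public void removeByDevId(IgniteCache<DpKey, DpCache> cache, String devId) {
    SqlFieldsQuery qry = new SqlFieldsQuery(
            "select _key from t_data_point_new where devId = ?").setArgs(devId);

    Set<DpKey> keys = new HashSet<>();
    try (FieldsQueryCursor<List<?>> cursor = cache.query(qry)) {
        for (List<?> row : cursor)
            keys.add((DpKey)row.get(0));
    }
    cache.removeAll(keys); // primaries and their backups are removed together
}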

Any advice will be appreciated.

Backups decrease DataStreamer performance a lot.

2019-03-01 Thread BinaryTree
Hi Igniters - 

I know backups will impact the performance of the cluster:

If you use a PARTITIONED cache and the data loss is not critical for you (for 
example, when you have a backing cache store), consider disabling backups for 
the cache. When backups are enabled, the cache engine has to maintain a remote 
copy of each entry, which requires network exchange and is time-consuming.

Because the data is important and cannot be lost, backups are necessary.

But backups decrease DataStreamer performance a lot. With backups disabled, 
40 million records can be loaded in 4 minutes; with backups = 1, the speed 
drops sharply after about 20 million records, and sometimes it takes more 
than 20 seconds to load 10 thousand records.

Are there any configurations or methods that can improve the performance of 
the DataStreamer?
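
For example, would tuning the streamer knobs below make a difference with 
backups enabled? (A sketch with placeholder values; the key/value types are my 
assumption.)

IgniteDataStreamer<DpKey, BinaryObject> streamer =
        ignite.dataStreamer(IgniteCacheKey.DATA_POINT_NEW.getCode());

streamer.allowOverwrite(false);        // default; fine for an initial load
streamer.perNodeBufferSize(1024);      // entries buffered per batch, per node
streamer.perNodeParallelOperations(8); // concurrent in-flight batches per node
streamer.autoFlushFrequency(10_000);   // force a flush at least every 10 s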

Related post:

http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Hung-after-a-period-tp21161.html

I attached the thread dumps in this post.

I also created a project to reproduce the problem; you can refer to:

https://github.com/RedBlackTreei/streamer.git

Re: Performance degradation in case of high volumes

2019-02-28 Thread BinaryTree
Hi Ilya - 
First of all, thanks for your reply!
Here is my cache configuration:
private static CacheConfiguration<DpKey, DpCache> getCacheConfiguration(IgniteConfiguration cfg) {
    CacheConfiguration<DpKey, DpCache> cacheCfg = new CacheConfiguration<>();
    cacheCfg.setName(IgniteCacheKey.DATA_POINT_NEW.getCode());
    cacheCfg.setCacheMode(CacheMode.PARTITIONED);
    cacheCfg.setBackups(1);
    cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
    cacheCfg.setDataRegionName(Constants.FIVE_GB_PERSISTENCE_REGION);

    cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
    cacheCfg.setWriteThrough(true);
    cacheCfg.setWriteBehindEnabled(true);
    cacheCfg.setWriteBehindFlushThreadCount(2);
    cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
    cacheCfg.setWriteBehindFlushSize(409600);
    cacheCfg.setWriteBehindBatchSize(1024);
    cacheCfg.setStoreKeepBinary(true);
    cacheCfg.setQueryParallelism(16);

    // 2 MB rebalance batches, throttled to one batch per 100 ms
    cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
    cacheCfg.setRebalanceThrottle(100);

    cacheCfg.setSqlIndexMaxInlineSize(256);

    List<QueryEntity> entities = getQueryEntities();
    cacheCfg.setQueryEntities(entities);

    CacheKeyConfiguration cacheKeyConfiguration = new CacheKeyConfiguration(DpKey.class);
    cacheCfg.setKeyConfiguration(cacheKeyConfiguration);

    RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
    affinityFunction.setPartitions(128);
    affinityFunction.setExcludeNeighbors(true);
    cacheCfg.setAffinity(affinityFunction);
    cfg.setCacheConfiguration(cacheCfg);
    return cacheCfg;
}


private static List<QueryEntity> getQueryEntities() {
    List<QueryEntity> entities = Lists.newArrayList();

    QueryEntity entity = new QueryEntity(DpKey.class.getName(), DpCache.class.getName());
    entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());

    LinkedHashMap<String, String> map = new LinkedHashMap<>();
    map.put("id", "java.lang.String");
    map.put("gmtCreate", "java.lang.Long");
    map.put("gmtModified", "java.lang.Long");
    map.put("devId", "java.lang.String");
    map.put("dpId", "java.lang.Integer");
    map.put("code", "java.lang.String");
    map.put("name", "java.lang.String");
    map.put("customName", "java.lang.String");
    map.put("mode", "java.lang.String");
    map.put("type", "java.lang.String");
    map.put("value", "java.lang.String");
    map.put("rawValue", byte[].class.getName());
    map.put("time", "java.lang.Long");
    map.put("status", "java.lang.Boolean");
    map.put("uuid", "java.lang.String");

    entity.setFields(map);

    // Secondary index on the affinity field devId
    QueryIndex devIdIdx = new QueryIndex("devId");
    devIdIdx.setName("idx_devId");
    devIdIdx.setInlineSize(32);
    List<QueryIndex> indexes = Lists.newArrayList(devIdIdx);
    entity.setIndexes(indexes);

    entities.add(entity);

    return entities;
}
public class DpKey implements Serializable {
    // key = devId + "_" + dpId
    private String key;
    @AffinityKeyMapped
    private String devId;

    public DpKey() {
    }

    public DpKey(String key, String devId) {
        this.key = key;
        this.devId = devId;
    }

    public String getKey() {
        return this.key;
    }

    public void setKey(String key) {
        this.key = key;
    }

    public String getDevId() {
        return this.devId;
    }

    public void setDevId(String devId) {
        this.devId = devId;
    }

    public boolean equals(Object o) {
        if (this == o) {
            return true;
        } else if (o != null && this.getClass() == o.getClass()) {
            DpKey key = (DpKey)o;
            return this.key.equals(key.key);
        } else {
            return false;
        }
    }

    public int hashCode() {
        return this.key.hashCode();
    }
}

And I have described my issue, and the tests I have done, in this post:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Hung-after-a-period-td21161.html
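
One more note on the index inline size question that comes up below: my 
understanding, an assumption based on the documented layout of variable-length 
inline values rather than on the Ignite source, is that each inlined value 
costs 1 type byte plus 2 length bytes plus the data itself, so a fixed 
25-character ASCII key needs about 28 bytes, and the inline size of 32 I use 
above should be enough:

// Back-of-envelope sizing (assumption: 1 type byte + 2 length bytes + data).
int keyChars = 25;                  // fixed devId length, ASCII
int inlineBytes = 1 + 2 + keyChars; // = 28, so setInlineSize(32) covers it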


------------------ Original Message ------------------
From: "ilya.kasnacheev";
Date: Thu, Feb 28, 2019, 9:03
To: "user";
Subject: Re: Performance degradation in case of high volumes



Hello Justin!


Ignite 2.6 does have the IGNITE_MAX_INDEX_PAYLOAD_SIZE system property.


We are talking about the primary key here. What is your primary key type? What 
other indexes do you have? Can you provide the complete configuration for the 
affected tables (including POJOs, if applicable)?


Regards,

-- 

Ilya Kasnacheev









On Thu, Feb 28, 2019 at 15:29, Justin Ji wrote:

Ilya -

 I use Ignite 2.6.0, which does not have the IGNITE_MAX_INDEX_PAYLOAD_SIZE
 system property.
 But our index field has a fixed length of 25 characters, so where can I find
 the algorithm to calculate the index inline size?

 Looking forward to your reply.



 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Ignite Data Streamer Hung after a period

2019-02-28 Thread BinaryTree
nagerImpl.java:1257)
   at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:1529)
   at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:352)
   at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3602)
   at 
org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2774)
   at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:2125)
   at 
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
   at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:400)
   at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
   at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:60)
   at 
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:90)
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1556)
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1184)
   at 
org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:125)
   at 
org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1091)
   at 
org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:511)
   at java.lang.Thread.run(Thread.java:748)







------------------ Original Message ------------------
From: "BinaryTree";
Date: Thu, Feb 28, 2019, 11:27
To: "user"; "ilya.kasnacheev";
Subject: Re: Ignite Data Streamer Hung after a period



Ilya -


I attached the thread dump logs. I have three Ignite nodes, and every node 
produced four thread-dump files.
Looking forward to your reply.




------------------ Original Message ------------------
From: "BinaryTree";
Date: Thu, Feb 28, 2019, 10:28
To: "user"; "ilya.kasnacheev";
Subject: Re: Ignite Data Streamer Hung after a period



Thanks for your reply.
1. Yes, I have persistence.
2. I don't think the cache store is the bottleneck, because skipStore is 
enabled while loading the data:

IgniteDataStreamer streamer = ignite.dataStreamer(IgniteCacheKey.DATA_POINT_NEW.getCode());
streamer.skipStore(true);
streamer.keepBinary(true);
streamer.perNodeBufferSize(1);
streamer.perNodeParallelOperations(32);




------------------ Original Message ------------------
From: "Ilya Kasnacheev";
Date: Wed, Feb 27, 2019, 9:59
To: "user";
Subject: Re: Ignite Data Streamer Hung after a period



Hello!



It's hard to say. Do you have persistence? Are you sure that the cache store 
is not the bottleneck?

I would start by gathering thread dumps from the whole cluster while it is in 
the stuck state.


Regards,

-- 

Ilya Kasnacheev









On Wed, Feb 27, 2019 at 15:06, Justin Ji wrote:

Dmitry -

 I also encountered this problem.

 I use both persistence and indexing; when I loaded 20 million records, the
 loading speed became much slower than before, even though the CPU usage on
 the Ignite servers is low.

<http://apache-ignite-users.70518.x6.nabble.com/file/t2000/WX20190227-200059.png>
 
 
 Here is my cache configuration:
 
 CacheConfiguration cacheCfg = new CacheConfiguration();
 cacheCfg.setName(cacheName);
 cacheCfg.setCacheMode(CacheMode.PARTITIONED);
 cacheCfg.setBackups(1);
 cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
 
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
 cacheCfg.setWriteThrough(true);
 cacheCfg.setWriteBehindEnabled(true);
 cacheCfg.setWriteBehindFlushThreadCount(2);
 cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
 cacheCfg.setWriteBehindFlushSize(409600);
 cacheCfg.setWriteBehindBatchSize(1024);
 cacheCfg.setStoreKeepBinary(true);
 cacheCfg.setQueryParallelism(16);
 cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
 cacheCfg.setRebalanceThrottle(100);
 CacheKeyConfiguration cacheKeyConfiguration = new
 CacheKeyConfiguration(DpKey.class);
 cacheCfg.setKeyConfiguration(cacheKeyConfiguration);
 
 List entities = Lists.newArrayList();
 
 QueryEntity entity = new QueryEntity(DpKey.class.getName(),
 DpCache.class.getName());
 entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());
 
 LinkedHashMap map = new LinkedHashMap<>();
 map.put("id", "java.lang.String");
 map.put("gmtCreate", "java.lang.Long");
 map.put("gmtModified", "java.lang.Long");

Re: Ignite Data Streamer Hung after a period

2019-02-27 Thread BinaryTree
Thanks for your reply.
1. Yes, I have persistence.
2. I don't think the cache store is the bottleneck, because skipStore is 
enabled while loading the data:

IgniteDataStreamer streamer = ignite.dataStreamer(IgniteCacheKey.DATA_POINT_NEW.getCode());
streamer.skipStore(true);
streamer.keepBinary(true);
streamer.perNodeBufferSize(1);
streamer.perNodeParallelOperations(32);





------------------ Original Message ------------------
From: "Ilya Kasnacheev";
Date: Wed, Feb 27, 2019, 9:59
To: "user";
Subject: Re: Ignite Data Streamer Hung after a period



Hello!



It's hard to say. Do you have persistence? Are you sure that the cache store 
is not the bottleneck?

I would start by gathering thread dumps from the whole cluster while it is in 
the stuck state.


Regards,

-- 

Ilya Kasnacheev









On Wed, Feb 27, 2019 at 15:06, Justin Ji wrote:

Dmitry -

 I also encountered this problem.

 I use both persistence and indexing; when I loaded 20 million records, the
 loading speed became much slower than before, even though the CPU usage on
 the Ignite servers is low.

<http://apache-ignite-users.70518.x6.nabble.com/file/t2000/WX20190227-200059.png>
 Here is my cache configuration:
 
 CacheConfiguration cacheCfg = new CacheConfiguration();
 cacheCfg.setName(cacheName);
 cacheCfg.setCacheMode(CacheMode.PARTITIONED);
 cacheCfg.setBackups(1);
 cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
 
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
 cacheCfg.setWriteThrough(true);
 cacheCfg.setWriteBehindEnabled(true);
 cacheCfg.setWriteBehindFlushThreadCount(2);
 cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
 cacheCfg.setWriteBehindFlushSize(409600);
 cacheCfg.setWriteBehindBatchSize(1024);
 cacheCfg.setStoreKeepBinary(true);
 cacheCfg.setQueryParallelism(16);
 cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
 cacheCfg.setRebalanceThrottle(100);
 CacheKeyConfiguration cacheKeyConfiguration = new
 CacheKeyConfiguration(DpKey.class);
 cacheCfg.setKeyConfiguration(cacheKeyConfiguration);
 
 List entities = Lists.newArrayList();
 
 QueryEntity entity = new QueryEntity(DpKey.class.getName(),
 DpCache.class.getName());
 entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());
 
 LinkedHashMap map = new LinkedHashMap<>();
 map.put("id", "java.lang.String");
 map.put("gmtCreate", "java.lang.Long");
 map.put("gmtModified", "java.lang.Long");
 map.put("devId", "java.lang.String");
 map.put("dpId", "java.lang.Integer");
 map.put("code", "java.lang.String");
 map.put("name", "java.lang.String");
 map.put("customName", "java.lang.String");
 map.put("mode", "java.lang.String");
 map.put("type", "java.lang.String");
 map.put("value", "java.lang.String");
 map.put("rawValue", byte[].class.getName());
 map.put("time", "java.lang.Long");
 map.put("status", "java.lang.Boolean");
 map.put("uuid", "java.lang.String");
 
 entity.setFields(map);
 QueryIndex devIdIdx = new QueryIndex("devId");
 devIdIdx.setName("idx_devId");
 devIdIdx.setInlineSize(128);
 List indexes = Lists.newArrayList(devIdIdx);
 entity.setIndexes(indexes);
 
 entities.add(entity);
 cacheCfg.setQueryEntities(entities);
 
 
 Can you give me some advice on where to start solving these problems?
 
 
 
 
 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/