RE: persist only on delete

2017-02-06 Thread Shawn Du
Hi, 

 

I verified that the cache value is still in memory when the store's delete 
method is called. Since I left write() empty, the value cannot have come from the persistent store.

 

public void write(Cache.Entry entry)
{
    // do nothing
}
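
For reference, a minimal sketch of the approach discussed in this thread (an assumption on my part, not Shawn's actual code): the cache name "myCache" and the persistToExternalStore() helper are hypothetical, and whether the value is still readable inside delete() is exactly what Shawn verified above.

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.resources.IgniteInstanceResource;

public class DeleteOnlyStore extends CacheStoreAdapter<String, Object> {
    /** Injected by Ignite; avoids starting a new node inside the store. */
    @IgniteInstanceResource
    private Ignite ignite;

    @Override public Object load(String key) {
        return null; // nothing is ever read back from the store
    }

    @Override public void write(Cache.Entry<? extends String, ?> entry) {
        // no-op: frequent updates stay in memory only
    }

    @Override public void delete(Object key) {
        // The entry is still in the cache at this point, so read it by key
        // and persist it before it is removed.
        IgniteCache<String, Object> cache = ignite.cache("myCache");
        Object value = cache.get((String) key);
        if (value != null)
            persistToExternalStore((String) key, value);   // hypothetical helper
    }

    private void persistToExternalStore(String key, Object value) {
        // write to a database, file, etc. (placeholder)
    }
}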

 

 

Thanks

Shawn

 

From: Vladislav Pyatkov [mailto:vldpyat...@gmail.com]
Sent: February 6, 2017 16:04
To: user@ignite.apache.org
Subject: Re: RE: persist only on delete

 

Hi,

 

I think it will work, because each time a value is read from the cache, Ignite 
tries to load it from the persistent store if it is not found in memory.

 

On Mon, Feb 6, 2017 at 10:39 AM, Shawn Du <shawn...@neulion.com.cn> wrote:

Hi,

 

In the delete method we only receive the cache key. To get the cache entry's 
value, we need to obtain the Ignite instance and then the cache itself.

 

Assuming we have all of that, can we still read the value from the cache by its 
key? Is the value still valid in the cache at that point?

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: February 6, 2017 15:12
To: user@ignite.apache.org
Subject: persist only on delete

 

Hi,

 

I have a case where a cache entry is updated frequently, and I only want to 
persist it when I remove it from the cache manually (at which point it will not 
change anymore).

 

For this case, is it a good idea to implement the persistence logic in 
delete/deleteAll while write/writeAll do nothing?

 

@Override
public void delete(Object o)
{
    // do nothing, we never have this operation.
}

 

 

Thanks

Shawn





 

-- 

Vladislav Pyatkov



RE: persist only on delete

2017-02-06 Thread Shawn Du
Hi,

 

I did a quick test and it works as I expected. I only tested with the sync API.

I would still like to get comments on this approach.

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: February 6, 2017 15:39
To: user@ignite.apache.org
Subject: RE: persist only on delete

 

Hi,

 

In the delete method we only receive the cache key. To get the cache entry's
value, we need to obtain the Ignite instance and then the cache itself.

 

Assuming we have all of that, can we still read the value from the cache by its
key? Is the value still valid in the cache at that point?

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: February 6, 2017 15:12
To: user@ignite.apache.org
Subject: persist only on delete

 

Hi,

 

I have a case where a cache entry is updated frequently, and I only want
to persist it when I remove it from the cache manually (at which point it will
not change anymore).

 

For this case, is it a good idea to implement the persistence logic in
delete/deleteAll while write/writeAll do nothing?

 

@Override
public void delete(Object o)
{
    // do nothing, we never have this operation.
}

 

 

Thanks

Shawn



RE: persist only on delete

2017-02-05 Thread Shawn Du
Hi,

 

In the delete method we only receive the cache key. To get the cache entry's
value, we need to obtain the Ignite instance and then the cache itself.

 

Assuming we have all of that, can we still read the value from the cache by its
key? Is the value still valid in the cache at that point?

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: February 6, 2017 15:12
To: user@ignite.apache.org
Subject: persist only on delete

 

Hi,

 

I have a case where a cache entry is updated frequently, and I only want
to persist it when I remove it from the cache manually (at which point it will
not change anymore).

 

For this case, is it a good idea to implement the persistence logic in
delete/deleteAll while write/writeAll do nothing?

 

@Override
public void delete(Object o)
{
    // do nothing, we never have this operation.
}

 

 

Thanks

Shawn



persist only on delete

2017-02-05 Thread Shawn Du
Hi,

 

I have a case where a cache entry is updated frequently, and I only want
to persist it when I remove it from the cache manually (at which point it will
not change anymore).

 

For this case, is it a good idea to implement the persistence logic in
delete/deleteAll while write/writeAll do nothing?

 

@Override
public void delete(Object o)
{
    // do nothing, we never have this operation.
}

 

 

Thanks

Shawn



RE: How can I get the object "org.apache.ignite.Ignite" in a CacheStore implementation?

2017-02-05 Thread Shawn Du
Hi howfree,

Try this:

@IgniteInstanceResource
Ignite ignite;
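
A short sketch of how the injected instance can then be used inside a CacheStore (the cache name "otherCache" is only an example, not from the thread):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.resources.IgniteInstanceResource;

public class MyStore extends CacheStoreAdapter<String, String> {
    @IgniteInstanceResource
    private Ignite ignite;   // injected by Ignite, no Ignition.start() needed

    @Override public String load(String key) {
        // read supporting data from another cache on the same node
        IgniteCache<String, String> other = ignite.cache("otherCache");
        return other.get(key);
    }

    @Override public void write(javax.cache.Cache.Entry<? extends String, ? extends String> e) { /* no-op */ }

    @Override public void delete(Object key) { /* no-op */ }
}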


Thanks
Shawn

-----Original Message-----
From: howfree [mailto:2789106...@qq.com]
Sent: February 6, 2017 6:04
To: user@ignite.apache.org
Subject: How can I get the object "org.apache.ignite.Ignite" in a CacheStore
implementation?







When I implement the CacheStore interface, I need to read data from another
cache. I usually do it like this:
Ignite ignite = Ignition.start("example-ignite.xml");

Is there any way to get the instance directly from the Ignite APIs,
or can I inject it into my CacheStore implementation in the XML definition?

thanks.


 



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/How-can-I-get-the-object-org-
apache-ignite-Ignite-in-a-CacheStore-implementation-tp10435.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



equivalent xml configuration with code.

2017-02-03 Thread Shawn Du
Hi,

 

CacheConfiguration cacheConfiguration = new CacheConfiguration();

cacheConfiguration.setTypes(String.class, );

cacheConfiguration.setIndexedTypes(String.class, );

 

What is the equivalent XML configuration for the code above?

 

Thanks

Shawn
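
For reference, a sketch of the Spring XML that corresponds to setIndexedTypes() (the second argument was stripped by the archive, so com.example.MyValue below is only a placeholder; setTypes() has no single XML property equivalent that I am sure of):

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- equivalent of cacheConfiguration.setIndexedTypes(String.class, MyValue.class) -->
    <property name="indexedTypes">
        <list>
            <value>java.lang.String</value>
            <value>com.example.MyValue</value>
        </list>
    </property>
</bean>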



RE: ignite client memory issue

2017-01-23 Thread Shawn Du
Hi Denis,

 

Here is the code, plus a few more details:

#1 We store a *table* in each cache entry.

#2 A table has several columns, and a column may be a big object (more than 10 KB). 
Column implements the Binarylizable interface.

 

I think you can reproduce this issue by designing a class like the table above. 
If you need the heap dump, I can share that as well.

 

Thanks

Shawn

 

private void save(Context context)
{
    BinaryObjectBuilder builder = IgniteManager.getIgnite().binary().builder(context.cacheName);
    builder.setField(IgniteConstants.COLUMN_TIMESTAMP, context.table.getTimestamp() / 1000);
    builder.setField(IgniteConstants.COLUMN_SITE, context.table.getSite());
    builder.setField(IgniteConstants.COLUMN_PRODUCT, context.table.getProduct());
    context.table.getDimColumns().forEach(c -> builder.setField(c.getName(), c));
    builder.setField(context.table.getMeasColumn().getName(), context.table.getMeasColumn());
    IgniteCache cache = getCache(context);
    cache.put(generateKey(context.table), builder.build());
}

private CacheConfiguration createCacheConfiguration(Context context)
{
    CacheConfiguration config = new CacheConfiguration<>();
    List<ColumnScheme> columns = new ArrayList<>();
    columns.add(new ColumnScheme(IgniteConstants.COLUMN_TIMESTAMP, Long.class.getTypeName(),
        commonConfig.indexingTimestampEnable, false));
    columns.add(new ColumnScheme(IgniteConstants.COLUMN_SITE, commonConfig.indexingSiteEnable));
    columns.add(new ColumnScheme(IgniteConstants.COLUMN_PRODUCT, commonConfig.indexingProductEnable));
    for (Column column : context.table.getDimColumns())
    {
        columns.add(new ColumnScheme(column.getName(), Column.class.getTypeName(), false));
    }
    String measName = context.table.getMeasColumn().getName();
    columns.add(new ColumnScheme(measName, Column.class.getTypeName(), false));
    config.setQueryEntities(Collections
        .singleton(IgniteManager.createEntity(String.class.getTypeName(), context.cacheName, columns)));
    config.setName(context.cacheName);
    config.setMemoryMode(commonConfig.cacheMemoryMode);
    config.setBackups(commonConfig.backups);
    config.setStartSize(10_000);
    config.setCopyOnRead(commonConfig.copyOnRead);
    return config;
}

private IgniteCache getCache(Context context)
{
    IgniteCache cache = IgniteManager.getIgnite().cache(context.cacheName);
    if (cache == null)
    {
        cache = IgniteManager.getIgnite().getOrCreateCache(createCacheConfiguration(context));
    }
    if (context.ttl > 0)
    {
        cache = cache.withExpiryPolicy(new ModifiedExpiryPolicy(new Duration(TimeUnit.SECONDS, context.ttl)));
    }
    cache = cache.withKeepBinary();
    return cache;
}

static class Context
{
    final long ttl;
    final Table table;
    final String cacheName;

    Context(long ttl, Table table)
    {
        this.ttl = ttl;
        this.table = table;
        this.cacheName = table.getSchema().generateCacheName();
    }
}

 

 

From: Denis Magda [mailto:dma...@apache.org]
Sent: January 24, 2017 3:30
To: user@ignite.apache.org
Subject: Re: ignite client memory issue

 

Hi,

 

Please share the piece of code that produces the leak.

 

—

Denis

 

On Jan 22, 2017, at 5:56 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

 

Hi,

 

My application ran overnight and crashed again after I set the max heap to 2 GB. 
Since I saw many Future objects,

I guess it may be caused by the async API. Now I am using the sync API, and the 
memory issue seems to have disappeared.

My application's memory stays at about 60 MB.

 

cache = cache.withAsync().withKeepBinary(); -->

cache = cache.withKeepBinary();

 

I think this does not always happen, but it really did happen under some conditions 
and is worth further investigation.

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: January 22, 2017 17:36
To: user@ignite.apache.org
Subject: RE: ignite client memory issue

 

Hi,

 

I am sure there are memory leaks.   See below.  

 

Class Name | Objects | Shallow Heap | Retained Heap
---------------------------------------------------
java.lang.Thread | 128 | 15,360 | >= 879,574,832
java.lang.ThreadLocal$ThreadLocalMap | 100 | 2,400 | >= 870,478,984
java.lang.ThreadLocal$ThreadLocalMap$Entry[] | 100 | 271,168 | >= 870,476,576
java.lang.ThreadLocal$ThreadLocalMap$Entry

RE: ignite client memory issue

2017-01-23 Thread Shawn Du
Thanks Val, you are right. I am using the sync API now; it is fast enough for me.

Thanks
Shawn

-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: January 24, 2017 3:39
To: user@ignite.apache.org
Subject: Re: RE: ignite client memory issue

Shawn,

It looks like you had too many asynchronous operations executed at the same
time and therefore too many futures created in memory. Async approach
requires more accuracy, and if everything works for you with sync
operations, I would use them.

-Val
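
A minimal sketch (not from the thread) of one way to keep the 1.x withAsync() API from accumulating unbounded futures; the helper name, the value types, and the limit are illustrative assumptions:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.lang.IgniteFuture;

class BoundedAsyncWriter {
    /** Issues async puts but never keeps more than `limit` futures alive. */
    static void putAll(IgniteCache<String, BinaryObject> cache, Map<String, BinaryObject> rows, int limit) {
        IgniteCache<String, BinaryObject> async = cache.withAsync();
        List<IgniteFuture<?>> inFlight = new ArrayList<>();

        for (Map.Entry<String, BinaryObject> e : rows.entrySet()) {
            async.put(e.getKey(), e.getValue());
            inFlight.add(async.future());          // future of the last async op

            if (inFlight.size() >= limit) {        // simple backpressure
                inFlight.forEach(IgniteFuture::get);
                inFlight.clear();                  // drop the references
            }
        }
        inFlight.forEach(IgniteFuture::get);       // drain the tail
    }
}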



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/ignite-client-memory-issue-tp
10174p10201.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: ignite client memory issue

2017-01-22 Thread Shawn Du
Hi,

 

My application ran overnight and crashed again after I set the max heap to 2 GB. 
Since I saw many Future objects,

I guess it may be caused by the async API. Now I am using the sync API, and the 
memory issue seems to have disappeared.

My application's memory stays at about 60 MB.

 

cache = cache.withAsync().withKeepBinary(); -->

cache = cache.withKeepBinary();

 

I think this does not always happen, but it really did happen under some conditions 
and is worth further investigation.

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: January 22, 2017 17:36
To: user@ignite.apache.org
Subject: RE: ignite client memory issue

 

Hi,

 

I am sure there are memory leaks.   See below.  

 

Class Name | Objects | Shallow Heap | Retained Heap
---------------------------------------------------
java.lang.Thread | 128 | 15,360 | >= 879,574,832
java.lang.ThreadLocal$ThreadLocalMap | 100 | 2,400 | >= 870,478,984
java.lang.ThreadLocal$ThreadLocalMap$Entry[] | 100 | 271,168 | >= 870,476,576
java.lang.ThreadLocal$ThreadLocalMap$Entry | 23,153 | 740,896 | >= 868,443,568
org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl | 22,444 | 359,104 | >= 867,056,800
org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture | 22,444 | 1,436,416 | >= 866,697,704
byte[] | 26,362 | 864,837,368 | >= 864,837,368
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture | 22,441 | 3,411,032 | >= 861,188,352
org.apache.ignite.internal.binary.BinaryObjectImpl | 22,441 | 897,640 | >= 855,099,616
---------------------------------------------------

 

My application is still running, and its memory usage keeps growing. Please help.

 

More information:

 

Ignite version: 1.8.0

java version "1.8.0_77"

Java(TM) SE Runtime Environment (build 1.8.0_77-b03)

Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

Platform:

Linux dev-s2 4.4.8-20.46.amzn1.x86_64 #1 SMP Wed Apr 27 19:28:52 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux

 

 

This is a new issue in my application. We recently changed our code and added the 
class below, which implements Binarylizable:

public class Column implements Binarylizable
{
}

 

Thanks. Please help.

 

Shawn

 

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: January 22, 2017 13:52
To: user@ignite.apache.org
Subject: Re: ignite client memory issue

 

Hi,

I assume this document is for Ignite servers. In my case, Ignite runs in client 
mode.

I have now increased the max heap size and my application is running; I will 
monitor the memory usage.

In my view, in client mode Ignite should not use too much memory.

 

Thanks

Shawn

 

From: Denis Magda [mailto:dma...@apache.org]
Sent: January 22, 2017 13:38
To: user@ignite.apache.org
Subject: Re: ignite client memory issue

 

Also keep in mind that every Ignite node requires at least ~ 300 MB for its 
internal purposes. This capacity planning page might be useful as well for you:

https://apacheignite.readme.io/docs/capacity-planning-bak

 

—

Denis

 

On Jan 21, 2017, at 9:29 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

 

My heap max size is 768 MB; more than 500 MB of it is consumed by Ignite.

 

Your advice is quite reasonable. I will refactor my code. 

 

Thanks

From: Denis Magda [mailto:dma...@apache.org]
Sent: January 22, 2017 13:18
To: user@ignite.apache.org
Subject: Re: ignite client memory issue

 

Hi Shawn,

 

What is the maximum size of the heap? 

 

I don’t think the cache configurations can be a reason of the OOM. As a side 
note, there is no reason to keep the configurations at all. Once a cache is 
started with a configuration you can either keep a single reference to it and 
reuse by multiple app threads or get a new one by passing the cache name into a 
respective Ignite method.

 

—

Denis

 

On Jan 21, 2017

RE: ignite client memory issue

2017-01-22 Thread Shawn Du
Hi,

 

I am sure there are memory leaks.   See below.  

 

Class Name | Objects | Shallow Heap | Retained Heap
---------------------------------------------------
java.lang.Thread | 128 | 15,360 | >= 879,574,832
java.lang.ThreadLocal$ThreadLocalMap | 100 | 2,400 | >= 870,478,984
java.lang.ThreadLocal$ThreadLocalMap$Entry[] | 100 | 271,168 | >= 870,476,576
java.lang.ThreadLocal$ThreadLocalMap$Entry | 23,153 | 740,896 | >= 868,443,568
org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl | 22,444 | 359,104 | >= 867,056,800
org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture | 22,444 | 1,436,416 | >= 866,697,704
byte[] | 26,362 | 864,837,368 | >= 864,837,368
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture | 22,441 | 3,411,032 | >= 861,188,352
org.apache.ignite.internal.binary.BinaryObjectImpl | 22,441 | 897,640 | >= 855,099,616
---------------------------------------------------

 

My application is still running, and its memory usage keeps growing. Please help.

 

More information:

 

Ignite version: 1.8.0

java version "1.8.0_77"

Java(TM) SE Runtime Environment (build 1.8.0_77-b03)

Java HotSpot(TM) 64-Bit Server VM (build 25.77-b03, mixed mode)

Platform:

Linux dev-s2 4.4.8-20.46.amzn1.x86_64 #1 SMP Wed Apr 27 19:28:52 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux

 

 

This is a new issue in my application. We recently changed our code and added the 
class below, which implements Binarylizable:

public class Column implements Binarylizable
{
}

 

Thanks. Please help.

 

Shawn

 

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: January 22, 2017 13:52
To: user@ignite.apache.org
Subject: Re: ignite client memory issue

 

Hi,

I assume this document is for Ignite servers. In my case, Ignite runs in client 
mode.

I have now increased the max heap size and my application is running; I will 
monitor the memory usage.

In my view, in client mode Ignite should not use too much memory.

 

Thanks

Shawn

 

From: Denis Magda [mailto:dma...@apache.org]
Sent: January 22, 2017 13:38
To: user@ignite.apache.org
Subject: Re: ignite client memory issue

 

Also keep in mind that every Ignite node requires at least ~ 300 MB for its 
internal purposes. This capacity planning page might be useful as well for you:

https://apacheignite.readme.io/docs/capacity-planning-bak

 

—

Denis

 

On Jan 21, 2017, at 9:29 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

 

My heap max size is 768 MB; more than 500 MB of it is consumed by Ignite.

 

Your advice is quite reasonable. I will refactor my code. 

 

Thanks

From: Denis Magda [mailto:dma...@apache.org]
Sent: January 22, 2017 13:18
To: user@ignite.apache.org
Subject: Re: ignite client memory issue

 

Hi Shawn,

 

What is the maximum size of the heap? 

 

I don’t think the cache configurations can be a reason of the OOM. As a side 
note, there is no reason to keep the configurations at all. Once a cache is 
started with a configuration you can either keep a single reference to it and 
reuse by multiple app threads or get a new one by passing the cache name into a 
respective Ignite method.

 

—

Denis

 

On Jan 21, 2017, at 9:03 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

 

I reviewed the code and found a never-released HashMap which stores some cache 
configurations:

private Map> cacheConfigurations = new HashMap<>();

I cache these configurations for performance reasons.

I get the Ignite cache by calling:

IgniteCache cache = IgniteManager.getIgnite().getOrCreateCache(configuration);

 

My question:

If the cache configuration is never released, will it prevent some memory, cache 
entries, or something else from being released?

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent:

Re: ignite client memory issue

2017-01-21 Thread Shawn Du
Hi,

I assume this document is for Ignite servers. In my case, Ignite runs in client 
mode.

I have now increased the max heap size and my application is running; I will 
monitor the memory usage.

In my view, in client mode Ignite should not use too much memory.

 

Thanks

Shawn

 

From: Denis Magda [mailto:dma...@apache.org]
Sent: January 22, 2017 13:38
To: user@ignite.apache.org
Subject: Re: ignite client memory issue

 

Also keep in mind that every Ignite node requires at least ~ 300 MB for its 
internal purposes. This capacity planning page might be useful as well for you:

https://apacheignite.readme.io/docs/capacity-planning-bak

 

—

Denis

 

On Jan 21, 2017, at 9:29 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

 

My heap max size is 768 MB; more than 500 MB of it is consumed by Ignite.

 

Your advice is quite reasonable. I will refactor my code. 

 

Thanks

From: Denis Magda [mailto:dma...@apache.org]
Sent: January 22, 2017 13:18
To: user@ignite.apache.org
Subject: Re: ignite client memory issue

 

Hi Shawn,

 

What is the maximum size of the heap? 

 

I don’t think the cache configurations can be a reason of the OOM. As a side 
note, there is no reason to keep the configurations at all. Once a cache is 
started with a configuration you can either keep a single reference to it and 
reuse by multiple app threads or get a new one by passing the cache name into a 
respective Ignite method.

 

—

Denis

 

On Jan 21, 2017, at 9:03 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

 

I reviewed the code and found a never-released HashMap which stores some cache 
configurations:

private Map> cacheConfigurations = new HashMap<>();

I cache these configurations for performance reasons.

I get the Ignite cache by calling:

IgniteCache cache = IgniteManager.getIgnite().getOrCreateCache(configuration);

 

My question:

If the cache configuration is never released, will it prevent some memory, cache 
entries, or something else from being released?

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: January 22, 2017 9:09
To: user@ignite.apache.org
Subject: ignite client memory issue

 

Hi,

 

My Ignite client has died many times recently because of OOM. Of course I can 
increase the max heap size, but I want to know why this memory is not 
released.

Below is part of the analysis from Eclipse Memory Analyzer. My Ignite client uses 
the async cache API. Please help.

 

Class Name | Objects | Shallow Heap | Retained Heap
---------------------------------------------------
java.lang.Thread | 109 | 13,080 | >= 561,086,296
java.lang.ThreadLocal$ThreadLocalMap | 98 | 2,352 | >= 537,825,592
java.lang.ThreadLocal$ThreadLocalMap$Entry[] | 98 | 139,808 | >= 537,823,232
java.lang.ThreadLocal$ThreadLocalMap$Entry | 15,202 | 486,464 | >= 535,750,616
org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl | 14,476 | 231,616 | >= 534,866,208
org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture | 14,476 | 926,464 | >= 534,634,600
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture | 14,466 | 2,198,832 | >= 531,444,584
byte[] | 18,438 | 530,369,360 | >= 530,369,360
org.apache.ignite.internal.binary.BinaryObjectImpl | 14,466 | 578,640 | >= 527,749,368

 

Thanks

Shawn

 



RE: ignite client memory issue

2017-01-21 Thread Shawn Du
My heap max size is 768 MB; more than 500 MB of it is consumed by Ignite.

 

Your advice is quite reasonable. I will refactor my code. 

 

Thanks

From: Denis Magda [mailto:dma...@apache.org]
Sent: January 22, 2017 13:18
To: user@ignite.apache.org
Subject: Re: ignite client memory issue

 

Hi Shawn,

 

What is the maximum size of the heap? 

 

I don’t think the cache configurations can be a reason of the OOM. As a side 
note, there is no reason to keep the configurations at all. Once a cache is 
started with a configuration you can either keep a single reference to it and 
reuse by multiple app threads or get a new one by passing the cache name into a 
respective Ignite method.

 

—

Denis

 

On Jan 21, 2017, at 9:03 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

 

I reviewed the code and found a never-released HashMap which stores some cache 
configurations:

private Map> cacheConfigurations = new HashMap<>();

I cache these configurations for performance reasons.

I get the Ignite cache by calling:

IgniteCache cache = IgniteManager.getIgnite().getOrCreateCache(configuration);

 

My question:

If the cache configuration is never released, will it prevent some memory, cache 
entries, or something else from being released?

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: January 22, 2017 9:09
To: user@ignite.apache.org
Subject: ignite client memory issue

 

Hi,

 

My Ignite client has died many times recently because of OOM. Of course I can 
increase the max heap size, but I want to know why this memory is not 
released.

Below is part of the analysis from Eclipse Memory Analyzer. My Ignite client uses 
the async cache API. Please help.

 

Class Name | Objects | Shallow Heap | Retained Heap
---------------------------------------------------
java.lang.Thread | 109 | 13,080 | >= 561,086,296
java.lang.ThreadLocal$ThreadLocalMap | 98 | 2,352 | >= 537,825,592
java.lang.ThreadLocal$ThreadLocalMap$Entry[] | 98 | 139,808 | >= 537,823,232
java.lang.ThreadLocal$ThreadLocalMap$Entry | 15,202 | 486,464 | >= 535,750,616
org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl | 14,476 | 231,616 | >= 534,866,208
org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture | 14,476 | 926,464 | >= 534,634,600
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture | 14,466 | 2,198,832 | >= 531,444,584
byte[] | 18,438 | 530,369,360 | >= 530,369,360
org.apache.ignite.internal.binary.BinaryObjectImpl | 14,466 | 578,640 | >= 527,749,368

 

Thanks

Shawn

 



RE: ignite client memory issue

2017-01-21 Thread Shawn Du
I reviewed the code and found a never-released HashMap which stores some cache
configurations:
private Map> cacheConfigurations = new HashMap<>();
I cache these configurations for performance reasons.
I get the Ignite cache by calling:
IgniteCache cache = IgniteManager.getIgnite().getOrCreateCache(configuration);
 
My question:
If the cache configuration is never released, will it prevent some memory, cache
entries, or something else from being released?
 
Thanks
Shawn
 
From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: January 22, 2017 9:09
To: user@ignite.apache.org
Subject: ignite client memory issue
 
Hi,
 
My Ignite client has died many times recently because of OOM. Of course I can
increase the max heap size, but I want to know why this memory is not released.
Below is part of the analysis from Eclipse Memory Analyzer. My Ignite client uses
the async cache API. Please help.
 
Class Name | Objects | Shallow Heap | Retained Heap
---------------------------------------------------
java.lang.Thread | 109 | 13,080 | >= 561,086,296
java.lang.ThreadLocal$ThreadLocalMap | 98 | 2,352 | >= 537,825,592
java.lang.ThreadLocal$ThreadLocalMap$Entry[] | 98 | 139,808 | >= 537,823,232
java.lang.ThreadLocal$ThreadLocalMap$Entry | 15,202 | 486,464 | >= 535,750,616
org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl | 14,476 | 231,616 | >= 534,866,208
org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture | 14,476 | 926,464 | >= 534,634,600
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture | 14,466 | 2,198,832 | >= 531,444,584
byte[] | 18,438 | 530,369,360 | >= 530,369,360
org.apache.ignite.internal.binary.BinaryObjectImpl | 14,466 | 578,640 | >= 527,749,368
 
Thanks
Shawn


ignite client memory issue

2017-01-21 Thread Shawn Du
Hi,

 

My Ignite client has died many times recently because of OOM. Of course I can
increase the max heap size, but I want to know why this memory is not released.

Below is part of the analysis from Eclipse Memory Analyzer. My Ignite client uses
the async cache API. Please help.

 

Class Name | Objects | Shallow Heap | Retained Heap
---------------------------------------------------
java.lang.Thread | 109 | 13,080 | >= 561,086,296
java.lang.ThreadLocal$ThreadLocalMap | 98 | 2,352 | >= 537,825,592
java.lang.ThreadLocal$ThreadLocalMap$Entry[] | 98 | 139,808 | >= 537,823,232
java.lang.ThreadLocal$ThreadLocalMap$Entry | 15,202 | 486,464 | >= 535,750,616
org.apache.ignite.internal.processors.cache.IgniteCacheFutureImpl | 14,476 | 231,616 | >= 534,866,208
org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture | 14,476 | 926,464 | >= 534,634,600
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture | 14,466 | 2,198,832 | >= 531,444,584
byte[] | 18,438 | 530,369,360 | >= 530,369,360
org.apache.ignite.internal.binary.BinaryObjectImpl | 14,466 | 578,640 | >= 527,749,368

 

Thanks

Shawn



RE: how to increase CPU utilization to increase compute performance

2017-01-18 Thread Shawn Du
This is the configuration; everything is just the default.

 













[Spring XML configuration stripped by the archive; only the discovery address list survives: localhost:47500..47509]
 

When the job is running, both I/O and CPU load are very low.

Now I am testing with fewer jobs and also tuning publicThreadPoolSize. Splitting 
into fewer jobs seems to have a positive effect.

Setting publicThreadPoolSize high has a negative effect. Please confirm and 
give suggestions.

 

Thanks

Shawn
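
For reference, a hedged sketch of where publicThreadPoolSize is set (the value 16 is only an example; by default it equals the number of cores):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Thread pool used to execute ComputeTask jobs on server nodes.
        cfg.setPublicThreadPoolSize(16);
        Ignition.start(cfg);
    }
}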

 

From: Artem Schitow [mailto:artem.schi...@gmail.com]
Sent: January 18, 2017 18:08
To: user@ignite.apache.org
Subject: Re: how to increase CPU utilization to increase compute performance

 

Hi, Shawn!

 

Can you please attach your Ignite configuration? What are your disk I/O load and 
CPU load when you're running the job?


—

Artem Schitow

artem.schi...@gmail.com

 

 

 

On 18 Jan 2017, at 13:02, Shawn Du <shawn...@neulion.com.cn> wrote:

 

Hi, 

 

I have a task to compute on Ignite. My server has 8 cores. I split the task 
into more than 1K jobs and merge the results.

From the client's point of view, the task takes more than 3 seconds, and sometimes 
more than 10 seconds, yet the Ignite server load is very low.

I would like to know how to increase CPU utilization to improve performance.

 

Thanks

Shawn

 



how to increase CPU utilization to increase compute performance

2017-01-18 Thread Shawn Du
Hi, 

 

I have a task to compute on Ignite. My server has 8 cores. I split the task
into more than 1K jobs and merge the results.

From the client's point of view, the task takes more than 3 seconds, and sometimes
more than 10 seconds, yet the Ignite server load is very low.

I would like to know how to increase CPU utilization to improve performance.

 

Thanks

Shawn



RE: Efficient way to get partial data of a cache entry

2017-01-11 Thread Shawn Du
Thanks Val, I fixed this issue by implementing the Binarylizable interface.
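
For readers hitting the same cast problem, a minimal sketch (an assumption on my part, not Shawn's actual code) of wrapping a RoaringBitmap in a Binarylizable holder so Ignite stores it as a plain byte[] field:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;
import org.roaringbitmap.RoaringBitmap;

public class BitmapHolder implements Binarylizable {
    private RoaringBitmap bitmap = new RoaringBitmap();

    @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            bitmap.serialize(new DataOutputStream(bos));   // RoaringBitmap's own format
            writer.writeByteArray("bits", bos.toByteArray());
        } catch (IOException e) {
            throw new BinaryObjectException(e);
        }
    }

    @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
        try {
            byte[] bits = reader.readByteArray("bits");
            bitmap = new RoaringBitmap();
            bitmap.deserialize(new DataInputStream(new ByteArrayInputStream(bits)));
        } catch (IOException e) {
            throw new BinaryObjectException(e);
        }
    }
}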

-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: January 12, 2017 13:31
To: user@ignite.apache.org
Subject: RE: Efficient way to get partial data of a cache entry

RoaringBitmap is Externalizable with custom serialization logic and therefore
can't be represented as a BinaryObject. In such cases the toBinary() method returns
the original object without changes.

-Val



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Efficient-way-to-get-partial-
data-of-a-cache-entry-tp9965p10051.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: Efficient way to get partial data of a cache entry

2017-01-11 Thread Shawn Du
Hi,

Exception in thread "main" java.lang.ClassCastException:
org.roaringbitmap.RoaringBitmap cannot be cast to
org.apache.ignite.binary.BinaryObject

You can easily reproduce this issue: 

public static void main(String[] args)
{
    // start your Ignite instance before this ...
    RoaringBitmap bitmap = new RoaringBitmap();
    bitmap.add(100);
    BinaryObject bo = ignite.binary().toBinary(bitmap);
    System.out.println(bo);
}


I use RoaringBitmap 0.6.29 and Ignite 1.8, tested on Windows.


<dependency>
    <groupId>org.roaringbitmap</groupId>
    <artifactId>RoaringBitmap</artifactId>
    <version>0.6.29</version>
</dependency>



Thanks
Shawn

-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: January 12, 2017 9:42
To: user@ignite.apache.org
Subject: Re: RE: RE: Efficient way to get partial data of a cache entry

What is the exception? Can you show the trace?

-Val



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Efficient-way-to-get-partial-
data-of-a-cache-entry-tp9965p10047.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: RE: Efficient way to get partial data of a cache entry

2017-01-11 Thread Shawn Du
Hi Val,

Thanks for your help. It works, but it raises a new issue.

RoaringBitmap bitmap = new RoaringBitmap();
bitmap.add(100);
BinaryObject bo = ignite.binary().toBinary(bitmap); // exception thrown here
Ignite can't build a binary format for RoaringBitmap. I tested with a plain Java
class, and that works.

Please help. 
Thanks Shawn.

-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: January 12, 2017 5:42
To: user@ignite.apache.org
Subject: Re: RE: Efficient way to get partial data of a cache entry

Shawn,

BinaryObject always returns another BinaryObject for your custom type,
because otherwise it would mean that you have the classes available, and then most
likely there is no reason to use BinaryObject and withKeepBinary in the first place.
However, you can always call BinaryObject.deserialize() to convert it to an
instance of your class.

-Val
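
A small sketch of what Val describes (the helper, key type, and field name are placeholders, not from the thread):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

class PartialRead {
    /** Reads one field of an entry and deserializes it into the caller's class. */
    static <T> T readField(IgniteCache<String, BinaryObject> cache, String key, String fieldName) {
        BinaryObject row = cache.get(key);            // cache obtained via withKeepBinary()
        BinaryObject field = row.field(fieldName);    // nested custom types come back as BinaryObject
        return field == null ? null : field.deserialize();  // requires the class on the classpath
    }
}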



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Efficient-way-to-get-partial-
data-of-a-cache-entry-tp9965p10042.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: Efficient way to get partial data of a cache entry

2017-01-11 Thread Shawn Du
 

Hi,

 

It seems that we can't read a custom class from a BinaryObject by calling 
BinaryObject.field() or FieldType.value().

They both return a BinaryObjectImpl, which can't be cast to my custom class.

 

I also see the javadoc below for withKeepBinary. Is that so? Is this a limitation?

 

/**
 * Returns cache that will operate with binary objects.
 *
 * Cache returned by this method will not be forced to deserialize binary objects,
 * so keys and values will be returned from cache API methods without changes. Therefore,
 * signature of the cache can contain only following types:
 *
 *   - org.apache.ignite.binary.BinaryObject for binary classes
 *   - All primitives (byte, int, ...) and their boxed versions (Byte, Integer, ...)
 *   - Arrays of primitives (byte[], int[], ...)
 *   - {@link String} and array of {@link String}s
 *   - {@link UUID} and array of {@link UUID}s
 *   - {@link Date} and array of {@link Date}s
 *   - {@link Timestamp} and array of {@link Timestamp}s
 *   - Enums and array of enums
 *   - Maps, collections and array of objects (but objects inside
 *     them will still be converted if they are binary)
 */

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: January 10, 2017 8:36
To: user@ignite.apache.org
Subject: RE: Efficient way to get partial data of a cache entry

 

Thanks for everyone's replies.

 

It seems we all agree that #2 is not the answer, and Yakov has given new 
suggestions. I will test all of them and share the results here.

 

 

Thanks

Shawn

 

From: Yakov Zhdanov [mailto:yzhda...@apache.org]
Sent: January 9, 2017 19:21
To: user@ignite.apache.org
Subject: Re: Efficient way to get partial data of a cache entry

 

I would suggest calling cache.invoke() and returning the needed result without 
altering the entry.

 

Or use compute.affinityCall() and do a local get inside the callable, computing 
and returning the result.
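
A small sketch of the cache.invoke() option (the field name and the helper are illustrative assumptions): the processor runs on the node owning the key, so only the extracted field travels back and the entry is never modified.

import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.CacheEntryProcessor;

class FieldReader {
    /** binaryCache is expected to be cache.withKeepBinary(). */
    static Object readField(IgniteCache<String, BinaryObject> binaryCache, String key, String field) {
        return binaryCache.invoke(key,
            (CacheEntryProcessor<String, BinaryObject, Object>) (entry, args) -> {
                BinaryObject val = entry.getValue();
                return val == null ? null : val.field(field);   // entry is not updated
            });
    }
}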



RE: compute SQL returned data.

2017-01-10 Thread Shawn Du
Yes. It works. Thanks.

-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: January 11, 2017 9:35
To: user@ignite.apache.org
Subject: Re: RE: RE: compute SQL returned data.

Shawn,

The map() method is executed locally on the master node, so you can do all the
checks outside the task and then execute it only if needed. Will this work?

-Val



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/compute-SQL-returned-data-tp9
890p10016.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: RE: compute SQL returned data.

2017-01-10 Thread Shawn Du
Yes. In my case I need to query data before submitting jobs, so if there is no
data I have to submit an empty job that does nothing; this is my workaround.
Is there a more "normal" way to handle this case?

Thanks
Shawn

-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: January 11, 2017 3:31
To: user@ignite.apache.org
Subject: Re: RE: compute SQL returned data.

Hi Shawn,

This means what it says - ComputeTask.map() method returned no jobs, i.e.
null or empty map.

-Val



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/compute-SQL-returned-data-tp9
890p.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: compute SQL returned data.

2017-01-09 Thread Shawn Du
Hi, 

When I run my compute task, it throws the exception below. What might it be caused
by?

class org.apache.ignite.IgniteCheckedException: Task map operation produced
no mapped jobs

Thanks
Shawn

-----Original Message-----
From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: January 9, 2017 17:31
To: user@ignite.apache.org
Subject: RE: compute SQL returned data.

Thanks, I already figured it out, just as you mentioned.


-----Original Message-----
From: dkarachentsev [mailto:dkarachent...@gridgain.com]
Sent: January 9, 2017 17:24
To: user@ignite.apache.org
Subject: Re: compute SQL returned data.

Hi Shawn,

For map-reduce operations the most suitable API is ComputeTask [1], which has
map/reduce methods. In the map() method you can select the nodes on which to start
processing according to the key (see Ignite.affinity() [2,3]). The result is
reduced and returned to the client.

In summary, the client invokes SQL to collect keys and runs a ComputeTask whose
jobs are mapped to the nodes that hold the keys/values to be processed (this
avoids transferring big values over the network).

[1]
https://ignite.apache.org/releases/1.8.0/javadoc/org/apache/ignite/compute/C
omputeTask.html
[2]
https://ignite.apache.org/releases/1.8.0/javadoc/org/apache/ignite/Ignite.ht
ml#affinity(java.lang.String)
[3]
https://ignite.apache.org/releases/1.8.0/javadoc/org/apache/ignite/cache/aff
inity/Affinity.html
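
A minimal sketch of this pattern (cache name "myCache", Long values, and the task name are illustrative assumptions, not from the thread); note that map() must return at least one job, otherwise Ignite throws the "Task map operation produced no mapped jobs" error discussed above:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.compute.ComputeJob;
import org.apache.ignite.compute.ComputeJobAdapter;
import org.apache.ignite.compute.ComputeJobResult;
import org.apache.ignite.compute.ComputeTaskAdapter;
import org.apache.ignite.resources.IgniteInstanceResource;

/** Maps one job per key to the node owning that key, then sums the partial results. */
class SumByKeysTask extends ComputeTaskAdapter<List<String>, Long> {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override public Map<? extends ComputeJob, ClusterNode> map(List<ClusterNode> subgrid, List<String> keys) {
        Map<ComputeJob, ClusterNode> jobs = new HashMap<>();
        for (String key : keys)
            jobs.put(new ReadJob(key), ignite.affinity("myCache").mapKeyToNode(key));
        return jobs;   // must not be null or empty
    }

    @Override public Long reduce(List<ComputeJobResult> results) {
        return results.stream().mapToLong(r -> r.<Long>getData()).sum();
    }

    static class ReadJob extends ComputeJobAdapter {
        private final String key;

        @IgniteInstanceResource
        private transient Ignite ignite;   // injected on the node that executes the job

        ReadJob(String key) { this.key = key; }

        @Override public Object execute() {
            Long v = ignite.<String, Long>cache("myCache").localPeek(key);
            return v == null ? 0L : v;     // only the partial result travels back
        }
    }
}

The client would then run something like ignite.compute().execute(new SumByKeysTask(), keys).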



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/compute-SQL-returned-data-tp9
890p9967.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: Efficient way to get partial data of a cache entry

2017-01-09 Thread Shawn Du
Thanks for everyone's replies.

 

It seems we all agree that #2 is not the answer, and Yakov has given new 
suggestions. I will test all of them and share the results here.

 

 

Thanks

Shawn

 

From: Yakov Zhdanov [mailto:yzhda...@apache.org]
Sent: January 9, 2017 19:21
To: user@ignite.apache.org
Subject: Re: Efficient way to get partial data of a cache entry

 

I would suggest calling cache.invoke() and returning the needed result without 
altering the entry.

 

Or use compute.affinityCall() and do a local get inside the callable, computing 
and returning the result.



RE: compute SQL returned data.

2017-01-09 Thread Shawn Du
Thanks, I already figured it out, just as you mentioned.


-----Original Message-----
From: dkarachentsev [mailto:dkarachent...@gridgain.com]
Sent: January 9, 2017 17:24
To: user@ignite.apache.org
Subject: Re: compute SQL returned data.

Hi Shawn,

For map-reduce operations the most suitable API is ComputeTask [1], which has
map/reduce methods. In the map() method you can select the nodes on which to start
processing according to the key (see Ignite.affinity() [2,3]). The result is
reduced and returned to the client.

In summary, the client invokes SQL to collect keys and runs a ComputeTask whose
jobs are mapped to the nodes that hold the keys/values to be processed (this
avoids transferring big values over the network).

[1]
https://ignite.apache.org/releases/1.8.0/javadoc/org/apache/ignite/compute/C
omputeTask.html
[2]
https://ignite.apache.org/releases/1.8.0/javadoc/org/apache/ignite/Ignite.ht
ml#affinity(java.lang.String)
[3]
https://ignite.apache.org/releases/1.8.0/javadoc/org/apache/ignite/cache/aff
inity/Affinity.html



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/compute-SQL-returned-data-tp9
890p9967.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Efficient way to get partial data of a cache entry

2017-01-08 Thread Shawn Du
Hi,

 

Given a key, how can we get partial data of a cache entry efficiently?

 

#1 By SQL (SELECT field, ... FROM t WHERE _key = 'key')

#2 By cache.get(key) directly

#3 Get a BinaryObject via cache.get(key) and read it field by field

 

Thanks

Shawn



Simple Compute API for SQL returned data.

2017-01-08 Thread Shawn Du
Hi,

 


In my experience with Hadoop and HBase, they provide APIs such as
TableMapReduceUtil.initTableMapperJob for computing on scanned/filtered HBase
data.

From a user's point of view this is very straightforward. Since Ignite is already
both a SQL grid and a compute grid, it would be very useful to let Ignite compute
on its own (SQL-returned) data.

Maybe Ignite already can, but a straightforward API would be very convenient
for users. It would also be great if Ignite could hide details such as affinity
and handle them automatically.

 

Thanks

Shawn 

 



RE: compute SQL returned data.

2017-01-05 Thread Shawn Du
Thanks!

 

My case looks like this:

 

We store big cache objects to work around the per-entry overhead Ignite adds to 
each cache entry.

For these big cache objects we can't use SQL to query/aggregate them, so we need 
to do the computation ourselves.

 

We want to compute on the SQL-returned data on the Ignite server side for several 
reasons:

#1 The SQL-returned data is huge, and we don't want to transfer it from server to 
client.

#2 Computing on the client side means writing our own parallel code; I assume 
Ignite would do that for us.

#3 If we add new server nodes in the future, we benefit immediately.

 

Thanks

Shawn 

 

 

From: Nikolai Tikhonov [mailto:ntikho...@apache.org]
Sent: January 5, 2017 15:45
To: user@ignite.apache.org
Subject: Re: compute SQL returned data.

 

Hi,

 

I'm not sure I understand what you want. Could you describe your case in more 
detail?

Also, you can find examples here: 
https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples

 

On Thu, Jan 5, 2017 at 10:20 AM, Shawn Du <shawn...@neulion.com.cn> wrote:

Hi experts,

 

I want to compute on the SQL-returned data on the server side. Are there any 
examples?

 

 

Thanks

Shawn 

 



compute SQL returned data.

2017-01-04 Thread Shawn Du
Hi experts,

 

I want to compute on the SQL-returned data on the server side. Are there any
examples?

 

 

Thanks

Shawn 



RE: RE: BinaryObject and String.intern

2017-01-04 Thread Shawn Du
Hi dkarachentsev,
 
Suppose I have a String array of length 1000. There are many
duplicated values and only five distinct values in the array.
 
If we store them in the following ways, which one saves the most memory?
#1 List
#2 Map
On the client side #2 saves memory greatly, but how does it behave on the Ignite server?
If Ignite stores a BitSet as an Integer array, it seems that #2 will not save that
much memory?
 
Any tips for saving memory in this case would be appreciated. Thanks in advance.
 
Thanks
Shawn
 
From: shawn.du [mailto:shawn...@neulion.com.cn]
Sent: January 3, 2017 18:37
To: user@ignite.apache.org
Cc: user@ignite.apache.org
Subject: RE: BinaryObject and String.intern
 
 
Thanks, dkarachentsev.
 
On January 3, 2017 at 18:25, dkarachentsev wrote:
Actually no, because Ignite internally will store it as a BinaryObject and 
will send to other nodes in a binary format as well, where all string fields 
will be unmarshaled without intern(). 



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/BinaryObject-and-String-intern-tp9826p9834.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


RE: BinaryObject and String.intern

2017-01-03 Thread Shawn Du
If I don't use binary objects, use POJOs instead, and never call withKeepBinary,
will it work then?

Thanks
Shawn

-----Original Message-----
From: dkarachentsev [mailto:dkarachent...@gridgain.com]
Sent: January 3, 2017 15:16
To: user@ignite.apache.org
Subject: Re: BinaryObject and String.intern

This won't give you any benefit, because Strings will be marshaled and
stored in binary format. In other words you'll get a binary copy of your
string, which is managed just like any other object.



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/BinaryObject-and-String-inter
n-tp9826p9827.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: class org.apache.ignite.binary.BinaryObjectException: Wrong value has been set

2017-01-03 Thread Shawn Du
Exactly!

 

Thanks.

Shawn

 

From: Nikolai Tikhonov [mailto:ntikho...@apache.org]
Sent: January 3, 2017 16:05
To: user@ignite.apache.org
Subject: Re: class org.apache.ignite.binary.BinaryObjectException: Wrong value has
been set

 

Hi,

 

Is it possible that you built your object with the binary builder and set a null 
value for the "product" field? In that case metadata is created where the field 
type is recorded as Object, and when a new object is later created with a non-null 
value (for example a String), you get this exception.

 

Thanks,

Nikolay 
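
A small sketch of one way to avoid this (an assumption on my part, not from the thread): declare the field's type explicitly even when the value is null, so the first write does not register it as Object.

import org.apache.ignite.Ignite;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

class BuildRow {
    static BinaryObject build(Ignite ignite, String product) {
        BinaryObjectBuilder b = ignite.binary().builder("streams");
        // Typed overload: even if product is null, the field is registered as String,
        // so later non-null String values do not conflict with Object metadata.
        b.setField("product", product, String.class);
        return b.build();
    }
}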

 

On Tue, Jan 3, 2017 at 10:56 AM, Shawn Du <shawn...@neulion.com.cn> wrote:

Hi,

 

I am running into a very strange issue. With the same code it works fine on one 
machine, but on another cluster it always fails with this exception:

 

java.lang.RuntimeException: class 
org.apache.ignite.binary.BinaryObjectException: Wrong value has been set 
[typeName=streams, fieldName=product, fieldType=Object, 
assignedValueType=String] at

I hit this issue before and remember fixing it by renaming a class field. But this 
time I can't, because we create the object dynamically with the binary builder.

 

Please help.

 

Thanks

Shawn

 

 

 

 



class org.apache.ignite.binary.BinaryObjectException: Wrong value has been set

2017-01-02 Thread Shawn Du
Hi,

 

I am running into a very strange issue. With the same code it works fine on one
machine, but on another cluster it always fails with this exception:

 

java.lang.RuntimeException: class
org.apache.ignite.binary.BinaryObjectException: Wrong value has been set
[typeName=streams, fieldName=product, fieldType=Object,
assignedValueType=String] at

I hit this issue before and remember fixing it by renaming a class field. But this
time I can't, because we create the object dynamically with the binary
builder.

 

Please help.

 

Thanks

Shawn

 

 

 



BinaryObject and String.intern

2017-01-02 Thread Shawn Du
Hi experts,

 

I am trying to use String.intern() to save memory. Below is pseudo-code; will
it work?

 

public class Example
{
    String[] values;
}

Map<String, String> fields = new HashMap<>();
fields.put("example", Example.class.getTypeName());
queryEntity.setFields(fields);

public class MyEntryProcessor implements EntryProcessor
{
    private List<String> values;

    public Object process(MutableEntry entry, Object... args)
    {
        BinaryObjectBuilder builder = ignite.binary().builder("Example");
        Example example = new Example();
        example.values = new String[this.values.size()];
        for (int i = 0; i

RE: RE: Re: RE: Cache.invoke are terribly slow and can't update cache

2016-12-22 Thread Shawn Du
Thanks Val, that's good.

-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: December 23, 2016 9:47
To: user@ignite.apache.org
Subject: Re: RE: Re: RE: Cache.invoke are terribly slow and can't update
cache

Shawn,

You just need to keep in mind that this is an object that will be serialized,
sent across the network, and invoked on the other side. The purpose of an entry
processor is to be executed on the server side and to atomically read and update a
single entry. Having said that, the logic you described doesn't make much
sense there.

-Val



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Cache-invoke-are-terribly-slo
w-and-can-t-update-cache-tp9676p9712.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: Cache.invoke are terribly slow and can't update cache

2016-12-22 Thread Shawn Du
Hi,

I fixed it myself.

I made two mistakes:

#1 peerClassLoadingEnabled is disabled by default; it should be enabled.
#2 The Ignite instance must be obtained with:
  @IgniteInstanceResource
  Ignite ignite;

  You can't pass the client Ignite instance into the processor.

Anyway, IMO the EntryProcessor API is a little error-prone.

Thanks
Shawn

-----Original Message-----
From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: December 22, 2016 10:34
To: user@ignite.apache.org
Subject: RE: Cache.invoke are terribly slow and can't update cache

Thanks Val!

Can you explain in more detail how "invoke" works internally compared with a put
operation? After all, we pass a "function" to Ignite.

I see below Java Doc about "invoke"

* An instance of entry processor must be stateless as it may be invoked
multiple times on primary and
* backup nodes in the cache. It is guaranteed that the value passed to the
entry processor will be always
* the same.

My question: can we use an *outer* object inside the EntryProcessor's process()
function, as long as it is visible in its scope?
More complex: how do we iterate, as in my code?

Thanks
Shawn


-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: December 22, 2016 2:52
To: user@ignite.apache.org
Subject: Re: Cache.invoke are terribly slow and can't update cache
主题: Re: Cache.invoke are terribly slow and can't update cache

Hi Shawn,

Cache is not updated because you never update it :) i.e. entry.setValue() is
never called.

Bad performance can be caused by the fact that you do multiple operations
one after another in a single thread without any batching. Consider using
invokeAll or IgniteDataStreamer.

As for OOME, it's hard to tell what's causing it. Did you look at the heap
dump?

-Val



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Cache-invoke-are-terribly-slo
w-and-can-t-update-cache-tp9676p9685.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: Cache.invoke are terribly slow and can't update cache

2016-12-21 Thread Shawn Du
Thanks Val!

Can you explain in more detail how "invoke" works internally compared with a put
operation? After all, we pass a "function" to Ignite.

I see below Java Doc about "invoke"

* An instance of entry processor must be stateless as it may be invoked
multiple times on primary and
* backup nodes in the cache. It is guaranteed that the value passed to the
entry processor will be always
* the same.

My question: can we use an *outer* object inside the EntryProcessor's process()
function, as long as it is visible in its scope?
More complex: how do we iterate, as in my code?

Thanks
Shawn


-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: December 22, 2016 2:52
To: user@ignite.apache.org
Subject: Re: Cache.invoke are terribly slow and can't update cache
主题: Re: Cache.invoke are terribly slow and can't update cache

Hi Shawn,

Cache is not updated because you never update it :) i.e. entry.setValue() is
never called.

Bad performance can be caused by the fact that you do multiple operations
one after another in a single thread without any batching. Consider using
invokeAll or IgniteDataStreamer.

As for OOME, it's hard to tell what's causing it. Did you look at the heap
dump?

-Val
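
A sketch of the missing step Val points out (field names echo the snippet later in the thread, everything else is illustrative): the processor must call entry.setValue() for the cache to actually be updated.

import javax.cache.processor.MutableEntry;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;
import org.apache.ignite.cache.CacheEntryProcessor;

class MergeMeasure implements CacheEntryProcessor<String, BinaryObject, Void> {
    private final String measName;
    private final String measValue;

    MergeMeasure(String measName, String measValue) {
        this.measName = measName;
        this.measValue = measValue;
    }

    @Override public Void process(MutableEntry<String, BinaryObject> entry, Object... args) {
        BinaryObject current = entry.getValue();
        if (current == null)
            return null;                         // or build a brand-new object here

        BinaryObjectBuilder b = current.toBuilder();
        b.setField(measName, measValue);
        entry.setValue(b.build());               // without this the cache is never updated
        return null;
    }
}

It would be invoked with something like cache.withKeepBinary().invoke(key, new MergeMeasure("clicks", "42")), or in batches via invokeAll as Val suggests.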



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Cache-invoke-are-terribly-slo
w-and-can-t-update-cache-tp9676p9685.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Cache.invoke are terribly slow and can't update cache

2016-12-21 Thread Shawn Du
Hi experts,

 

I am trying to update the cache by delta by calling invoke. Here is the code; it
is terribly slow, runs out of memory, and the cache is never updated. Please help.

 

table.getRows().forEach(r -> cache.invoke(r.getKey(), (entry, args) ->
{
    BinaryObject bo = entry.getValue();
    if (bo == null)
    {
        BinaryObjectBuilder builder = IgniteManager.getIgnite().binary().builder(cacheName);
        builder.setField("site", r.getSite());
        ...
        bo = builder.build();
    }
    BinaryObjectBuilder builder = bo.toBuilder();
    for (String measName : table.getSchema().getMeasNames())
    {
        String measValue = r.getMeasValue(table.getSchema(), measName);
        if (measValue != null)
        {
            builder.setField(measName, measValue);
        }
    }
    return null;
}));

 

 

Thanks

Shawn



RE: schemaless support

2016-12-19 Thread Shawn Du
Thanks Val,

That's a great feature. It provides some of the same convenience as HBase and
Cassandra.

-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: December 20, 2016 9:23
To: user@ignite.apache.org
Subject: Re: schemaless support

Hi Shawn,

You can dynamically change the schema when using binary format (default
format for storing data in Ignite):
https://apacheignite.readme.io/docs/binary-marshaller

However, with SQL this is not currently possible. DDL support is currently
in progress and will be ready sometime first-second quarter next year.

-Val
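
A small sketch of the dynamic-schema style Val refers to (the type and field names are illustrative): no Java class is needed for the value type, and fields can vary per entry.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

class SchemalessPut {
    static void put(Ignite ignite, IgniteCache<String, BinaryObject> cache, String key) {
        BinaryObjectBuilder b = ignite.binary().builder("Event");
        b.setField("name", "login");
        b.setField("count", 42);     // a field other entries may not have
        cache.put(key, b.build());   // cache is expected to be withKeepBinary()
    }
}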



--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/schemaless-support-tp9632p963
4.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.



schemaless support

2016-12-19 Thread Shawn Du
Hi,

 

Does Ignite support a "schemaless" mode?

For example, I would just create an empty object for a key and add arbitrary
fields to it dynamically, while the cache can still be queried with SQL.

 

Thanks

Shawn



alter cache configuration after cache created.

2016-12-18 Thread Shawn Du
Hi,

 

Is it possible to alter a cache's configuration after the cache has been created and is running?

 

Thanks

Shawn



RE: RE: RE: query binaryobject cache in zeppelin

2016-12-15 Thread Shawn Du
| Query Indexed Types |   |
+---------------------+---+

 

 

Thanks

Shawn

 

 

From: Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
Sent: December 15, 2016 18:21
To: user@ignite.apache.org
Subject: Re: RE: RE: query binaryobject cache in zeppelin

 

Hi Shawn,

 

I took the Zeppelin sources and rewrote the test query with an asterisk 
(org.apache.zeppelin.ignite.IgniteSqlInterpreterTest). It works fine for me with 
both Ignite 1.7 and 1.8.

Would you please share the SQL query that failed, the JDBC connection string used 
by Zeppelin, the Ignite cache configuration, and also the Ignite logs if possible?

 

On Tue, Dec 13, 2016 at 3:34 AM, Shawn Du <shawn...@neulion.com.cn> wrote:

 

Hi Andrey,

 

I do have code that adds the query entities to the configuration. It seems that 
using an asterisk in the SELECT clause is not supported.

If I use column names, Zeppelin shows the data.

 

Also some background about the testing:

 

I am using Ignite 1.8 and Zeppelin 0.6.2. To make Zeppelin work with 
Ignite 1.8, I had to build it from source and change pom.xml to use ignite 1.8.0.

All seems good.

 

Thanks

Shawn

 

From: Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
Sent: December 12, 2016 18:54
To: user@ignite.apache.org
Subject: Re: RE: query binaryobject cache in zeppelin

 

Hi Shawn,

 

It looks strange that Query Indexed Types is "n/a".

Did you forget to add the query entities to the configuration? I can't see this in 
your code: cacheCfg.setQueryEntities(Arrays.asList(entity))

 

 

On Mon, Dec 12, 2016 at 1:41 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

Hi,

 

This is just part of my cache configuration; see the highlighted part. The Query 
Schema Name is empty; is that the so-called *table name*?

 

| Store Write Through                        | off           |
| Write-Behind Enabled                       | off           |
| Write-Behind Flush Size                    | 10240         |
| Write-Behind Frequency                     | 5000          |
| Write-Behind Flush Threads Count           | 1             |
| Write-Behind Batch Size                    | 512           |
| Concurrent Asynchronous Operations Number  | 500           |
| Memory Mode                                | ONHEAP_TIERED |
| Off-Heap Size                              |               |
| Loader Factory Class Name                  |               |
| Writer Factory Class Name                  |               |
| Expiry Policy Factory Class Name           | javax.cache.configuration.FactoryBuilder$SingletonFactory |
| Query Execution Time Threshold             | 3000          |
| Query Schema Name                          |               |
| Query Escaped Names                        | off           |
| Query Onheap Cache Size                    | 10240         |
| Query SQL functions                        |               |
| Query Indexed Types                        |               |
+--------------------------------------------+---------------+

 

 

I create the QueryEntity with the following code. ColumnScheme is my own class; it 
just holds the column name/type and whether to ensure an index.

 

Is the entity's value type the "table name"? I set it to the same value as the 
cache name and tried to use it as the table name in SQL, but it still does not 
work. Please help. Thanks.

 

QueryEntity entity = new QueryEntity();
entity.setKeyType(keyType);
entity.setValueType(valueType);
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
List<QueryIndex> indexes = new ArrayList<>();
for (ColumnScheme columnScheme : columns)
{
    fields.put(columnScheme.getName(), columnScheme.getType());
    if (columnScheme.isEnsureIndex())
    {
        indexes.add(new QueryIndex(columnScheme.getName()));
    }
}
entity.setIndexes(indexes);
entity.setFields(fields);
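
For reference, a sketch of how the value type maps to the SQL table name (the names below are illustrative, not Shawn's actual configuration): the simple name passed to setValueType() is what appears in the FROM clause once the entity is registered with setQueryEntities().

import java.util.Arrays;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

class EntityConfig {
    static CacheConfiguration<String, Object> build() {
        QueryEntity entity = new QueryEntity();
        entity.setKeyType(String.class.getName());
        entity.setValueType("EventRow");               // SQL table name: EventRow

        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("site", String.class.getName());
        fields.put("product", String.class.getName());
        entity.setFields(fields);

        CacheConfiguration<String, Object> cfg = new CacheConfiguration<>("eventCache");
        cfg.setQueryEntities(Arrays.asList(entity));   // don't forget this step
        return cfg;
    }
}
// Then: SELECT site, product FROM EventRow WHERE _key = ?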

 

Shawn

 

From: Andrey Mashenkov [mailto:amashen...@gridgain.com]
Sent: December 12, 2016 17:24
To: user@ignite.apache.org
Subject: Re: query binaryobject cache in zeppelin

 

Hi Shawn,

 

Classes that you want to use i

RE: how to use memory efficiently

2016-12-15 Thread Shawn Du
Thanks.

I will test copyOnRead. We do in fact have trouble with Ignite because we think it
uses too much memory, and we are looking for ways to decrease the memory
consumption.
I have an idea, but I don't know whether it is worth trying or not. Please
advise.

Suppose we have several tables that store metric data. Each table row contains
several dimension values and metrics, and the tables share some common dimensions.

Currently each table stores all of its data, so a SELECT needs no join:
dim_1, ..., dim_x, shared_dim_1, ..., shared_dim_n, metric_1, ..., metric_m

We are considering storing it like this instead, with a separate table for the
shared dimension data:
shared_id, shared_dim_1, ..., shared_dim_n
and each table using a foreign key (shared_id) to refer to the shared table:
dim_1, ..., dim_x, shared_id, metric_1, ..., metric_m

This is a common RDBMS table design, but I don't know whether it works well on
Ignite.
I am also wondering whether, given the big per-entry overhead, this will actually
save memory.

Thanks
Shawn

-----Original Message-----
From: vdpyatkov [mailto:vldpyat...@gmail.com]
Sent: December 15, 2016 21:36
To: user@ignite.apache.org
Subject: Re: how to use memory efficiently
主题: Re: how to use memory efficiently

Hi,

1) No, the cache name does not affect memory utilization.
2) Yes, shortening the key can help.
3) Ignite stores the class description only once and hashes each object; the
class or field name length does not affect memory usage.
4) Off-heap may be cheaper in memory consumption, because in some cases two
copies of an entry are stored on heap (in the on-heap memory mode) [1]. But
ONHEAP_TIERED does not compress data.

If you are not going to modify the data you get from the cache, set the
copyOnRead flag to false:

<property name="copyOnRead" value="false"/>

Also, you can try to compress the data before putting it into the cache, but
this restricts your SQL.

[1]: https://issues.apache.org/jira/browse/IGNITE-2417 
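
For reference, the same setting in code, as a minimal sketch (the cache name and
value types are placeholders):

CacheConfiguration<String, BinaryObject> cfg = new CacheConfiguration<>("metrics");
cfg.setCopyOnRead(false);   // hand out cached values without copying them on each read
IgniteCache<String, BinaryObject> cache = ignite.getOrCreateCache(cfg);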






how to use memory efficiently

2016-12-15 Thread Shawn Du
Hi,

 

Suppose we will cache many entries (millions to billions) in Ignite, and the
cache is kept in binary form.

How can we decrease Ignite's memory usage?

1)   Shorten the cache name. I don't think it matters, but I just want it
confirmed, since we use long cache names (more than 200 characters).

2)   Shorten the cache key. I think this will help. We are using an MD5 hash.

3)   Make the column names shorter. Please confirm this; in an RDBMS it has
no effect.

Shortening the column names also brings some side effects, like extra code
complexity and reduced readability.

4)   Use off-heap. I don't know; I remember somebody said Ignite will
compress data when using off-heap.

 

Anything else?

 

Thanks

Shawn



RE: RE: query binaryobject cache in zeppelin

2016-12-12 Thread Shawn Du
 

Hi Andrey,

 

I do have code that adds the entities to the configuration. It seems that it
doesn't support using an asterisk in the select clause.

If I use the column names, Zeppelin can show the data.

 

Also some background about the testing:

 

Using Ignite 1.8 and Zeppelin 0.6.2. In order to make Zeppelin work with
Ignite 1.8, I had to build it from source and change pom.xml to use Ignite 1.8.0.

All seems good.
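
The pom.xml change is typically just bumping the Ignite version property of
Zeppelin's Ignite interpreter module; the exact property name below is an
assumption, not quoted from the original mail:

<!-- zeppelin/ignite/pom.xml (property name assumed) -->
<properties>
  <ignite.version>1.8.0</ignite.version>
</properties>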

 

Thanks

Shawn

 

From: Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
Sent: 2016-12-12 18:54
To: user@ignite.apache.org
Subject: Re: RE: query binaryobject cache in zeppelin

 

Hi Shawn,

 

It looks strange that Query Indexed Types is "n/a".

Did you forget to add the query entities to the configuration? I can't see it in your
code: cacheCfg.setQueryEntities(Arrays.asList(entity))

 

 

On Mon, Dec 12, 2016 at 1:41 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

Hi,

 

This is just part of my cache configuration; see the highlighted part. The Query Schema
Name is empty; is that the so-called *table name*?

 

+--------------------------------------------+------------------------------------------------------------+
| Store Write Through                        | off                                                        |
| Write-Behind Enabled                       | off                                                        |
| Write-Behind Flush Size                    | 10240                                                      |
| Write-Behind Frequency                     | 5000                                                       |
| Write-Behind Flush Threads Count           | 1                                                          |
| Write-Behind Batch Size                    | 512                                                        |
| Concurrent Asynchronous Operations Number  | 500                                                        |
| Memory Mode                                | ONHEAP_TIERED                                              |
| Off-Heap Size                              |                                                            |
| Loader Factory Class Name                  |                                                            |
| Writer Factory Class Name                  |                                                            |
| Expiry Policy Factory Class Name           | javax.cache.configuration.FactoryBuilder$SingletonFactory |
| Query Execution Time Threshold             | 3000                                                       |
| Query Schema Name                          |                                                            |
| Query Escaped Names                        | off                                                        |
| Query Onheap Cache Size                    | 10240                                                      |
| Query SQL functions                        |                                                            |
| Query Indexed Types                        |                                                            |
+--------------------------------------------+------------------------------------------------------------+

 

 

I create the QueryEntity with the following code. ColumnScheme is my own class; it just
holds the column name, the column type, and whether to ensure an index.

 

Is the entity value type the "table name"? I set it to the same value as the cache
name and tried to use that as the table name in SQL, but it still does not work. Please
help. Thanks.

 

QueryEntity entity = new QueryEntity();
entity.setKeyType(keyType);
entity.setValueType(valueType);
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
List<QueryIndex> indexes = new ArrayList<>();
for (ColumnScheme columnScheme : columns)
{
    fields.put(columnScheme.getName(), columnScheme.getType());
    if (columnScheme.isEnsureIndex())
    {
        indexes.add(new QueryIndex(columnScheme.getName()));
    }
}
entity.setIndexes(indexes);
entity.setFields(fields);

 

Shawn

 

From: Andrey Mashenkov [mailto:amashen...@gridgain.com]
Sent: 2016-12-12 17:24
To: user@ignite.apache.org
Subject: Re: query binaryobject cache in zeppelin

 

Hi Shawn,

 

Classes that you want to use in queries should be set via setQueryEntities.

 

The *table name* in Ignite is the name of the value type. E.g. if you want to get records of
class "my.org.Person" you should use "Person" as the table name: Select * from
Person.

To make a cross-cache query you should use the full table name in the form
"cache_name".class_name: Select ... from Person, "other_cache".Org Where ...

 

For JDK classes like java.lang.Integer, the table name will be "Integer".
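
A minimal sketch of the above (the cache name "events" and the value type name
"Event" are placeholders, not taken from this thread):

CacheConfiguration<Integer, BinaryObject> cacheCfg = new CacheConfiguration<>("events");

QueryEntity entity = new QueryEntity();
entity.setKeyType("java.lang.Integer");
entity.setValueType("Event");                          // "Event" becomes the SQL table name
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("name", "java.lang.String");
entity.setFields(fields);
cacheCfg.setQueryEntities(Arrays.asList(entity));

IgniteCache<Integer, BinaryObject> cache = ignite.getOrCreateCache(cacheCfg).withKeepBinary();

// Query by the value type name; cross-cache queries use "cache_name".TypeName.
List<List<?>> rows = cache.query(new SqlFieldsQuery("select name from Event")).getAll();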

 

On Mon, Dec 12, 2016 at 12:09 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

Hi,

 

Today I tried Zeppelin. After setting up a Zeppelin node and starting to issue
SQL queries, I don't know what to use as the *table name* in the SQL.

My caches are built with BinaryObject and configured with setQueryEntities.

RE: query binaryobject cache in zeppelin

2016-12-12 Thread Shawn Du
Hi,

 

This is just part of my cache configuration; see the highlighted part. The Query Schema
Name is empty; is that the so-called *table name*?

 

+--------------------------------------------+------------------------------------------------------------+
| Store Write Through                        | off                                                        |
| Write-Behind Enabled                       | off                                                        |
| Write-Behind Flush Size                    | 10240                                                      |
| Write-Behind Frequency                     | 5000                                                       |
| Write-Behind Flush Threads Count           | 1                                                          |
| Write-Behind Batch Size                    | 512                                                        |
| Concurrent Asynchronous Operations Number  | 500                                                        |
| Memory Mode                                | ONHEAP_TIERED                                              |
| Off-Heap Size                              |                                                            |
| Loader Factory Class Name                  |                                                            |
| Writer Factory Class Name                  |                                                            |
| Expiry Policy Factory Class Name           | javax.cache.configuration.FactoryBuilder$SingletonFactory |
| Query Execution Time Threshold             | 3000                                                       |
| Query Schema Name                          |                                                            |
| Query Escaped Names                        | off                                                        |
| Query Onheap Cache Size                    | 10240                                                      |
| Query SQL functions                        |                                                            |
| Query Indexed Types                        |                                                            |
+--------------------------------------------+------------------------------------------------------------+

 

 

I create the QueryEntity with the following code. ColumnScheme is my own class; it just
holds the column name, the column type, and whether to ensure an index.

 

Is the entity value type the "table name"? I set it to the same value as the cache
name and tried to use that as the table name in SQL, but it still does not work. Please
help. Thanks.

 

QueryEntity entity = new QueryEntity();
entity.setKeyType(keyType);
entity.setValueType(valueType);
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
List<QueryIndex> indexes = new ArrayList<>();
for (ColumnScheme columnScheme : columns)
{
    fields.put(columnScheme.getName(), columnScheme.getType());
    if (columnScheme.isEnsureIndex())
    {
        indexes.add(new QueryIndex(columnScheme.getName()));
    }
}
entity.setIndexes(indexes);
entity.setFields(fields);

 

Shawn

 

From: Andrey Mashenkov [mailto:amashen...@gridgain.com]
Sent: 2016-12-12 17:24
To: user@ignite.apache.org
Subject: Re: query binaryobject cache in zeppelin

 

Hi Shawn,

 

Classes that you want to use in queries should be set via setQueryEntities.

 

The *table name* in Ignite is the name of the value type. E.g. if you want to get records of
class "my.org.Person" you should use "Person" as the table name: Select * from
Person.

To make a cross-cache query you should use the full table name in the form
"cache_name".class_name: Select ... from Person, "other_cache".Org Where ...

 

For JDK classes like java.lang.Integer, the table name will be "Integer".

 

On Mon, Dec 12, 2016 at 12:09 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

Hi,

 

Today I tried Zeppelin. After setting up a Zeppelin node and starting to issue
SQL queries, I don't know what to use as the *table name* in the SQL.

My caches are built with BinaryObject and configured with setQueryEntities.

 

Is it possible to query these caches in Zeppelin?

 

Thanks

Shawn  

 



query binaryobject cache in zeppelin

2016-12-12 Thread Shawn Du
Hi,

 

Today I tried Zeppelin. After setting up a Zeppelin node and starting to issue
SQL queries, I don't know what to use as the *table name* in the SQL.

My caches are built with BinaryObject and configured with setQueryEntities.

 

Is it possible to query these caches in Zeppelin?

 

Thanks

Shawn  



replace storm with ignite

2016-11-29 Thread Shawn Du
Hi experts,

 

After several days of trying Ignite, I am impressed by its powerful capabilities.

 

In our current architecture, Ignite works with Storm and acts as the
cache/persistence layer. I am considering replacing Storm with Ignite entirely.

 

The reasons are:

#1 Essentially, we are rebuilding a stateful object from the incoming events
and generating some metrics periodically.

Maintaining stateful objects is, I personally think, not Storm's strong point,
but it is Ignite's.

#2 In order to generate some metrics, the Storm code has become very complex
and hard to debug and maintain. I want to use Ignite to simplify the code.

 

My rough idea is:

#1 Integrate Kafka with Ignite and rebuild the state from the incoming events.

#2 Schedule some periodic tasks that query the cache, generate metrics, and
store them into the cache again.

 

Please comment on this solution.

 

Say there are several caches; for each cache I execute dozens of queries
every 30 seconds (such as group-bys over different dimensions) and put the
results into the cache again.

Can Ignite support this?
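
A rough sketch of idea #2 (the cache names, SQL text and key format below are
placeholders only): a plain scheduler runs the queries every 30 seconds and
writes the results back into a cache.

ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

scheduler.scheduleAtFixedRate(() -> {
    IgniteCache<Integer, BinaryObject> events = ignite.cache("events").withKeepBinary();
    IgniteCache<String, Long> metrics = ignite.getOrCreateCache("metrics");

    List<List<?>> rows = events.query(new SqlFieldsQuery(
        "select regionId, count(*) from Event group by regionId")).getAll();

    for (List<?> row : rows)
        metrics.put("viewcount_by_region_" + row.get(0), (Long) row.get(1));
}, 0, 30, TimeUnit.SECONDS);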

 

Thanks

Shawn

 



RE: Lock and Failed to unlock keys exception

2016-11-23 Thread Shawn Du
Fixed it myself.

There were two errors in my code/configuration.

1) Locks are only supported in TRANSACTIONAL atomicity mode. I missed that in my
configuration.
2) Within a lock block, you can't start/get the cache:
      IgniteCache<String, NamedSequence> cache = ignite.cache(NamedSequence.CACHE_NAME_ID);

   The code above should be called outside the lock block.

Thanks
Shawn
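
For reference, a minimal sketch of point 1 (cache names here are placeholders).
Ignite's built-in IgniteAtomicSequence is also worth considering instead of a
hand-rolled NamedSequence:

// Explicit locks need a TRANSACTIONAL cache.
CacheConfiguration<String, Os> osCfg = new CacheConfiguration<>("osCache");
osCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
IgniteCache<String, Os> osCache = ignite.getOrCreateCache(osCfg);

// Alternative distributed ID generator: a cluster-wide atomic sequence.
IgniteAtomicSequence seq = ignite.atomicSequence("Os", 0, true);
long nextId = seq.incrementAndGet();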

-----Original Message-----
From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: 2016-11-24 14:20
To: user@ignite.apache.org
Subject: Lock and Failed to unlock keys exception

Hi,

I am trying to implement distributed ID generators using locks and caches.

I wrote some code like this, but it always throws a "Failed to unlock keys"
exception.
The code may run in different JVMs/hosts.

Am I misusing the lock feature? Thanks in advance.


Lock lock = osCache.lock(Os.CACHE_LOCK);
try
{
    lock.lock();
    os = osCache.get(videoMessage.os());
    if (os == null)
    {
        os = new Os(videoMessage.os(), idGenerator.next(Os.class.getName()));
        osCache.put(os.getName(), os);
    }
    videoMessage.os(os.getId() + "");
}
finally
{
    lock.unlock();
}

---End
IdGenerator.next looks like this:
---
public int next(String name)
{
    IgniteCache<String, NamedSequence> cache = ignite.cache(NamedSequence.CACHE_NAME_ID);
    NamedSequence sequence = cache.get(name);
    if (sequence == null)
    {
        sequence = new NamedSequence(name);
    }
    else
    {
        sequence.increase();
    }
    cache.put(sequence.getName(), sequence);
    return sequence.getSequence();
}
--



Lock and Failed to unlock keys exception

2016-11-23 Thread Shawn Du
Hi,

I am trying to implement distributed ID generators using lock and caches. 

I wrote some codes like this, but it always throw exception "Failed to
unlock keys exception". 
The code may runs in different JVM/hosts.

Do I misuse the lock feature?  Thanks in advance.


Lock lock = osCache.lock(Os.CACHE_LOCK);
try
{
    lock.lock();
    os = osCache.get(videoMessage.os());
    if (os == null)
    {
        os = new Os(videoMessage.os(), idGenerator.next(Os.class.getName()));
        osCache.put(os.getName(), os);
    }
    videoMessage.os(os.getId() + "");
}
finally
{
    lock.unlock();
}

---End
IdGenerator.next looks like this:
---
public int next(String name)
{
    IgniteCache<String, NamedSequence> cache = ignite.cache(NamedSequence.CACHE_NAME_ID);
    NamedSequence sequence = cache.get(name);
    if (sequence == null)
    {
        sequence = new NamedSequence(name);
    }
    else
    {
        sequence.increase();
    }
    cache.put(sequence.getName(), sequence);
    return sequence.getSequence();
}
--



auto loadcache when server startup

2016-11-22 Thread Shawn Du
Hi,

Is it possible to load all caches at server startup via configuration?
Suppose I have configured a store factory in the configuration and implemented
the loadCache method of CacheStore.

Thanks
Shawn
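
One way to achieve this (a sketch; it is programmatic rather than pure
configuration, and the cache name is a placeholder) is to call loadCache right
after the node starts, which invokes CacheStore.loadCache on the nodes that
hold the cache:

Ignite ignite = Ignition.start("config/ignite.xml");
IgniteCache<String, Object> cache = ignite.cache("myCache");
cache.loadCache(null);   // null filter: load everything the configured CacheStore provides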



RE: dynamic data structure with binaryobject

2016-10-30 Thread Shawn Du
Thanks Val.

I wrote the code snippet below and tested it; it works fine. But I still have
some questions:
I have to call the invoke method to do this. How should I understand invoke,
especially with respect to performance/latency? Is this a recommended practice?

Thanks.
Shawn



public class PartialUpdate {
    private static final String CACHE_NAME = PartialUpdate.class.getSimpleName();

    public static void main(String[] args) throws IgniteException {
        Ignition.setClientMode(true);
        try (Ignite ignite = Ignition.start("config/ss.xml")) {
            CacheConfiguration<Integer, Employee> cfg = new CacheConfiguration<>();
            cfg.setName(CACHE_NAME);
            IgniteCache<Integer, Employee> cache = ignite.getOrCreateCache(cfg);
            Employee employee = new Employee(1, "bob");
            cache.put(1, employee);

            try (IgniteCache<Integer, BinaryObject> binaryCache =
                     ignite.getOrCreateCache(cfg).withKeepBinary()) {
                binaryCache.invoke(1, new EntryProcessor<Integer, BinaryObject, Object>() {
                    @Override
                    public Object process(MutableEntry<Integer, BinaryObject> mutableEntry, Object... objects)
                        throws EntryProcessorException {
                        if (mutableEntry.getValue() != null) {
                            // partial update: only the "age" field changes, other fields are kept
                            BinaryObjectBuilder builder = mutableEntry.getValue().toBuilder().setField("age", 25);
                            mutableEntry.setValue(builder.build());
                        }
                        return null;
                    }
                });
            }
        }
    }

    static class Employee implements Serializable {
        @QuerySqlField(index = true)
        private int id;
        @QuerySqlField(index = true)
        private String name;
        @QuerySqlField
        private int age;

        public Employee(int id, String name) {
            this.id = id;
            this.name = name;
        }
    }
}





-----Original Message-----
From: vkulichenko [mailto:valentin.kuliche...@gmail.com]
Sent: 2016-10-29 3:04
To: user@ignite.apache.org
Subject: Re: dynamic data structure with binaryobject

Hi Shawn,

Yes, binary format provides direct support for this. You can change the
class definition on the client and transparently use the new version.
Another option is to use builder [1] to modify the object.

[1]
https://apacheignite.readme.io/docs/binary-marshaller#modifying-binary-objects-using-binaryobjectbuilder

-Val






dynamic data structure with binaryobject

2016-10-28 Thread Shawn Du
Hi experts,

 

I would like to know whether the scenario below is possible in Ignite with BinaryObject.

 

At first, the cache is empty.

I put a cache entry with key1 and a value with field A.

After a while,

I put a cache entry with the same key key1 and a value with field B; note that
field A keeps its old value.

Later, I can again add a field X dynamically.

 

How can I implement this with BinaryObject?

 

If it is possible, how is its performance?
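
A minimal sketch of that scenario using BinaryObjectBuilder (the type name
"MyType" and the cache name are placeholders, not from this thread):

IgniteCache<String, BinaryObject> cache = ignite.cache("myCache").withKeepBinary();

// First put: the value only has field A.
cache.put("key1", ignite.binary().builder("MyType").setField("A", 1).build());

// Later: start from the stored object so field A keeps its old value, then add field B.
BinaryObject updated = cache.get("key1").toBuilder().setField("B", "hello").build();
cache.put("key1", updated);

// Any further field (X, ...) can be added the same way, without changing a Java class.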

 

Thanks

Shawn

 



RE: BinaryObject pros/cons

2016-10-27 Thread Shawn Du
 

Hi,

 

In one of our projects, we compact Java objects that contain lots of zero longs.
It saves disk space greatly during serialization.

Thus slow disk I/O operations become fast CPU operations, and performance is
improved.

The data compaction algorithm is inspired by this paper:
http://www.vldb.org/pvldb/vol8/p1816-teller.pdf.
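
As a rough illustration of this kind of compaction (not the actual algorithm
used, which follows the paper above), even simple zigzag/varint encoding makes
a zero long cost one byte instead of eight:

// Sketch: zigzag + varint encoding of longs; zeros and small values shrink to 1 byte.
static void writeVarLong(ByteArrayOutputStream out, long v)
{
    long z = (v << 1) ^ (v >> 63);            // zigzag: small negative values stay small
    while ((z & ~0x7FL) != 0)
    {
        out.write((int) ((z & 0x7F) | 0x80)); // 7 payload bits plus a continuation bit
        z >>>= 7;
    }
    out.write((int) z);
}

static byte[] compact(long[] values)
{
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    for (long v : values)
        writeVarLong(out, v);
    return out.toByteArray();
}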

 

Thanks

Shawn

From: Vladimir Ozerov [mailto:voze...@gridgain.com]
Sent: 2016-10-28 5:21
To: user@ignite.apache.org
Subject: Re: BinaryObject pros/cons

 

Hi,

 

I am not very concerned about the overhead of null fields, because usually it won't be
significant. However, there is a problem with zeros. A user object might have
lots of int/long zeros; this is not uncommon, and each zero will consume 4-8
additional bytes. We will probably implement a special optimization that
writes such fields in a compact format.

 

Vladimir.

 

On Thu, Oct 27, 2016 at 10:55 PM, vkulichenko <valentin.kuliche...@gmail.com> wrote:

Hi,

Yes, null values consume memory. I believe this can be optimized, but I
haven't seen issues with this so far. Unless you have hundreds of fields
most of which are nulls (very rare case), the overhead is minimal.

-Val




 



RE: RE: ignite used too much memory

2016-10-27 Thread Shawn Du
|

| | |   | avg: 5000.00 
(5000.00 / 0.00) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |

| | |   | max: 1 (1 
/ 0)| max: 0| max: 0| max: 0| max: 0|

+---+

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: 2016-10-27 9:17
To: user@ignite.apache.org
Subject: RE: RE: ignite used too much memory

 

Hi Andrey Mashenkov,

 

Thanks, I will give them a try and apply the fixes locally.

 

Do you know 1.8’s release plan? 

 

Thanks

Shawn 

 

From: Andrey Mashenkov [mailto:amashen...@gridgain.com]
Sent: 2016-10-26 20:53
To: user@ignite.apache.org
Subject: Re: RE: ignite used too much memory

 

Hi, Shawn Du

 

It seems you have hit the following 2 bugs.

 

First bug: high memory utilization when using off-heap with an expiry policy. The issue has a
fix, but it is not merged to master yet. See
https://issues.apache.org/jira/browse/IGNITE-3840.

Second bug: the TTL manager continues to track evicted (and removed) entries,
https://issues.apache.org/jira/browse/IGNITE-3948; it seems to be OK for merge,
but it is still not present in master.

 

You can try to merge them locally or wait until they are available in
master.

 

On Wed, Oct 26, 2016 at 2:35 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

Hi experts,

 

Can anyone help to explain ignite memory model?

 

I have now tried the following, with no effect:

1)   Removed all indexes.

2)   Enabled swap. I can see more than 800 MB of data stored in the swap-space
directory.

3)   Stopped caching short-lived entries. All caches use a FIFO eviction policy
with a max size of 10k.

 

After running for a while I checked the caches; there were only 20k
entries, but the memory still grows.

 

Please help.

 

Thanks

Shawn  

 

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: 2016-10-26 17:17
To: user@ignite.apache.org
Subject: RE: ignite used too much memory

 

Hi 

 

This is the output of jmap -histo:live.

 

Any useful information? There are about 300k cache entries. I have also enabled
swap.

 

Each entry costs about 10 KB of memory on average, but each entry holds at most
50 bytes.

 

Any help will be appreciated.

 

num #instances #bytes  class name

--

   1:  21642240  519413760  
org.apache.ignite.internal.util.GridCircularBuffer$Item

   2: 49027  405647408  [Lorg.jsr166.ConcurrentHashMap8$Node;

   3:   6056069  304233144  [B

   4:   5754322  279563064  [C

   5:  11376526  273036624  
java.util.concurrent.ConcurrentSkipListMap$Node

   6:   5688514  227540560  
org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOnheap

   7:   5684293  181897376  
org.apache.ignite.internal.processors.cache.GridCacheTtlManager$EntryWrapper

   8:   5333102  170659264  java.util.concurrent.ConcurrentHashMap$Node

   9:   5325117  170403744  
org.apache.ignite.spi.swapspace.file.FileSwapSpaceSpi$SwapValue

  10:   5754728  138113472  java.lang.String

  11:   5689577  136549848  
java.util.concurrent.ConcurrentSkipListMap$Index

  12:   5686353  136472472  
org.apache.ignite.internal.processors.cache.KeyCacheObjectImpl

  13:   5325117  127802808  org.apache.ignite.spi.swapspace.SwapKey

  14:   5690452   91047232  org.h2.value.ValueString

  15: 85112   87930752  
[Lorg.apache.ignite.internal.util.GridCircularBuffer$Item;

  16:   1515799   48505568  java.util.HashMap$Node

  17:268818   42628512  [Ljava.lang.Object;

  18: 13467   35031504  
[Ljava.util.concurrent.ConcurrentHashMap$Node;

  19:361155   23113920  
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCacheEntry

  20:   1226922   19630752  java.lang.Integer

  21:102183   15207336  [Ljava.util.HashMap$Node;

  22:363397   14535880  
org.apache.ignite.internal.binary.BinaryObjectImpl

  23:395583   12658656  org.jsr166.ConcurrentHashMap8$Node

  24:359176   11493632  
org.apache.ignite.internal.processors.cache.extras.GridCacheTtlEntryExtras

  25:3633978721528  
org.apache.ignite.internal.processors.query.h2.opt.GridH2ValueCacheObject

  26:2576156182760  java.util.concurrent.atomic.AtomicLong

  27:1707755464800  
java.util.concurrent.locks.ReentrantLock$Nonfa

RE: RE: ignite used too much memory

2016-10-27 Thread Shawn Du
BTW, I use the ONHEAP_TIERED memory mode.

 

I don’t enable off heap.

 

Thanks

Shawn

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: 2016-10-27 15:30
To: 'user@ignite.apache.org'
Subject: RE: RE: ignite used too much memory

 

Hi Andrey Mashenkov,

 

I checked out pr/1101 and pr/1037 and tested both of them. Things may be a bit
better, but the problem is not resolved.

 

This is my cache state. I think the cache is full, because of code like this:

 

config.setSwapEnabled(swapEnable);
if (swapEnable)
{
    EvictionPolicy policy = new FifoEvictionPolicy(1);
    config.setEvictionPolicy(policy);
}

 

New cache entries should stay in memory and old ones should be evicted to disk, so the
memory should stop growing.

After my test, that is not the case. It still keeps growing fast, at a rate of
about 500 MB per 30 minutes.

I can understand that some extra memory is needed for internal bookkeeping,
but it leaks too fast.

 

Thanks

Shawn

 

 

+-+-+---+---+---+---+---+---+

| onlinesessioncount_by_cityid(@c1)   | PARTITIONED | 2 | min: 0 (0 / 0)
| min: 0| min: 0| min: 0| min: 0|

| | |   | avg: 5000.00 
(5000.00 / 0.00) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |

| | |   | max: 1 (1 
/ 0)| max: 0| max: 0| max: 0| max: 0|

+-+-+---+---+---+---+---+---+

| onlinesessioncount_by_regionid(@c2) | PARTITIONED | 2 | min: 0 (0 / 0)
| min: 0| min: 0| min: 0| min: 0|

| | |   | avg: 5000.00 
(5000.00 / 0.00) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |

| | |   | max: 1 (1 
/ 0)| max: 0| max: 0| max: 0| max: 0|

+-+-+---+---+---+---+---+---+

| onlineviewcount_by_cityid(@c3)  | PARTITIONED | 2 | min: 0 (0 / 0)
| min: 0| min: 0| min: 0| min: 0|

| | |   | avg: 5000.00 
(5000.00 / 0.00) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |

| | |   | max: 1 (1 
/ 0)| max: 0| max: 0| max: 0| max: 0|

+-+-+---+---+---+---+---+---+

| onlineviewcount_by_regionid(@c4)| PARTITIONED | 2 | min: 0 (0 / 0)
| min: 0| min: 0| min: 0| min: 0|

| | |   | avg: 5000.00 
(5000.00 / 0.00) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |

| | |   | max: 1 (1 
/ 0)| max: 0| max: 0| max: 0| max: 0|

+-+-+---+---+---+---+---+---+

| onlinevisitorcount_by_cityid(@c5)   | PARTITIONED | 2 | min: 0 (0 / 0)
| min: 0| min: 0| min: 0| min: 0|

| | |   | avg: 5000.00 
(5000.00 / 0.00) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |

| | |   | max: 1 (1 
/ 0)| max: 0| max: 0| max: 0| max: 0|

+-+-+---+---+---+---+---+---+

| onlinevisitorcount_by_regionid(@c6) | PARTITIONED | 2 | min: 0 (0 / 0)
| min: 0| min: 0| min: 0| min: 0|

| | |   | avg: 5000.00 
(5000.00 / 0.00) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |

| | |   | max: 1 (1 
/ 0)| max: 0| max: 0| max: 0| max: 0|

+-+-+---+---+---+---+---+---+

| videoviewcount_by_cityid(@c7)   | PARTITIONED | 2 | min: 0 (0 / 0)
| min: 0| min: 0| min: 0| min: 0|

| | |   | avg: 5000.00 
(5000.00 / 0.00) | avg: 0.00 | avg: 0.00 | avg: 0.00 | avg: 0.00 |

| | |   | max: 1 (1 
/ 0)| max: 0| max: 0| max: 0  

BinaryObject pros/cons

2016-10-27 Thread Shawn Du
Hi expert,

 

BinaryObject makes it very convenient to create dynamic objects. Does it have
disadvantages?

 

Take the example below: does each field name consume memory or not? In many
RDBMSs, which have a fixed table schema, column names don't consume memory/disk.
How is it for BinaryObject?

 

If some fields' values are null, do they consume memory?

 

+==================+==================+=================================+=====================================================+
| Key Class        | Key              | Value Class                     | Value                                               |
+==================+==================+=================================+=====================================================+
| java.lang.String | 3E47D5ACA7A05D59 | o.a.i.i.binary.BinaryObjectImpl | videoviewcount_by_regionid [idHash=312942657,       |
|                  |                  |                                 | hash=0, site_product=testsite, regionID=35415317,   |
|                  |                  |                                 | value=4, timestamp=147755040]                       |
+==================+==================+=================================+=====================================================+

 

 

Thanks

Shawn



RE: RE: ignite used too much memory

2016-10-26 Thread Shawn Du
Hi Andrey Mashenkov,

 

Thanks, I will give them a try and apply the fixes locally.

 

Do you know 1.8’s release plan? 

 

Thanks

Shawn 

 

From: Andrey Mashenkov [mailto:amashen...@gridgain.com]
Sent: 2016-10-26 20:53
To: user@ignite.apache.org
Subject: Re: RE: ignite used too much memory

 

Hi, Shawn Du

 

It seems you have hit the following 2 bugs.

 

First bug: high memory utilization when using off-heap with an expiry policy. The issue has a
fix, but it is not merged to master yet. See
https://issues.apache.org/jira/browse/IGNITE-3840.

Second bug: the TTL manager continues to track evicted (and removed) entries,
https://issues.apache.org/jira/browse/IGNITE-3948; it seems to be OK for merge,
but it is still not present in master.

 

You can try to merge them locally or wait until they are available in
master.

 

On Wed, Oct 26, 2016 at 2:35 PM, Shawn Du <shawn...@neulion.com.cn> wrote:

Hi experts,

 

Can anyone help to explain ignite memory model?

 

I have now tried the following, with no effect:

1)   Removed all indexes.

2)   Enabled swap. I can see more than 800 MB of data stored in the swap-space
directory.

3)   Stopped caching short-lived entries. All caches use a FIFO eviction policy
with a max size of 10k.

 

After running for a while I checked the caches; there were only 20k
entries, but the memory still grows.

 

Please help.

 

Thanks

Shawn  

 

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: 2016-10-26 17:17
To: user@ignite.apache.org
Subject: RE: ignite used too much memory

 

Hi 

 

This is the output of jmap -histo:live.

 

Any useful information? There are about 300k cache entries. I have also enabled
swap.

 

Each entry costs about 10 KB of memory on average, but each entry holds at most
50 bytes.

 

Any help will be appreciated.

 

num #instances #bytes  class name

--

   1:  21642240  519413760  
org.apache.ignite.internal.util.GridCircularBuffer$Item

   2: 49027  405647408  [Lorg.jsr166.ConcurrentHashMap8$Node;

   3:   6056069  304233144  [B

   4:   5754322  279563064  [C

   5:  11376526  273036624  
java.util.concurrent.ConcurrentSkipListMap$Node

   6:   5688514  227540560  
org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOnheap

   7:   5684293  181897376  
org.apache.ignite.internal.processors.cache.GridCacheTtlManager$EntryWrapper

   8:   5333102  170659264  java.util.concurrent.ConcurrentHashMap$Node

   9:   5325117  170403744  
org.apache.ignite.spi.swapspace.file.FileSwapSpaceSpi$SwapValue

  10:   5754728  138113472  java.lang.String

  11:   5689577  136549848  
java.util.concurrent.ConcurrentSkipListMap$Index

  12:   5686353  136472472  
org.apache.ignite.internal.processors.cache.KeyCacheObjectImpl

  13:   5325117  127802808  org.apache.ignite.spi.swapspace.SwapKey

  14:   5690452   91047232  org.h2.value.ValueString

  15: 85112   87930752  
[Lorg.apache.ignite.internal.util.GridCircularBuffer$Item;

  16:   1515799   48505568  java.util.HashMap$Node

  17:268818   42628512  [Ljava.lang.Object;

  18: 13467   35031504  
[Ljava.util.concurrent.ConcurrentHashMap$Node;

  19:361155   23113920  
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCacheEntry

  20:   1226922   19630752  java.lang.Integer

  21:102183   15207336  [Ljava.util.HashMap$Node;

  22:363397   14535880  
org.apache.ignite.internal.binary.BinaryObjectImpl

  23:395583   12658656  org.jsr166.ConcurrentHashMap8$Node

  24:359176   11493632  
org.apache.ignite.internal.processors.cache.extras.GridCacheTtlEntryExtras

  25:3633978721528  
org.apache.ignite.internal.processors.query.h2.opt.GridH2ValueCacheObject

  26:2576156182760  java.util.concurrent.atomic.AtomicLong

  27:1707755464800  
java.util.concurrent.locks.ReentrantLock$NonfairSync

  28: 851125447168  
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition

  29: 851125447168  
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition$1

  30:1031884953024  java.util.HashMap

  31: 877513510040  org.jsr166.ConcurrentHashMap8

  32:1363823273168  org.jsr166.ConcurrentLinkedDeque8$Node

  33: 906992902368  org.jsr166.LongAdder8

  34: 872772792864  java.lang.ref.WeakReference

  35:1706632730608  java.util.concurrent.locks.ReentrantLock

  36: 851122723584  
org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapImpl

  37: 85

RE: ignite used too much memory

2016-10-26 Thread Shawn Du
Hi experts,

 

Can anyone help to explain ignite memory model?

 

I have now tried the following, with no effect:

1)   Removed all indexes.

2)   Enabled swap. I can see more than 800 MB of data stored in the swap-space
directory.

3)   Stopped caching short-lived entries. All caches use a FIFO eviction policy
with a max size of 10k.

 

After running for a while I checked the caches; there were only 20k
entries, but the memory still grows.

 

Please help.

 

Thanks

Shawn  

 

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: 2016-10-26 17:17
To: user@ignite.apache.org
Subject: RE: ignite used too much memory

 

Hi 

 

This is the output of jmap -histo:live.

 

Any useful information? There are about 300k cache entries. I have also enabled
swap.

 

Each entry costs about 10 KB of memory on average, but each entry holds at most
50 bytes.

 

Any help will be appreciated.

 

num #instances #bytes  class name

--

   1:  21642240  519413760
org.apache.ignite.internal.util.GridCircularBuffer$Item

   2: 49027  405647408  [Lorg.jsr166.ConcurrentHashMap8$Node;

   3:   6056069  304233144  [B

   4:   5754322  279563064  [C

   5:  11376526  273036624
java.util.concurrent.ConcurrentSkipListMap$Node

   6:   5688514  227540560
org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOnheap

   7:   5684293  181897376
org.apache.ignite.internal.processors.cache.GridCacheTtlManager$EntryWrapper

   8:   5333102  170659264
java.util.concurrent.ConcurrentHashMap$Node

   9:   5325117  170403744
org.apache.ignite.spi.swapspace.file.FileSwapSpaceSpi$SwapValue

  10:   5754728  138113472  java.lang.String

  11:   5689577  136549848
java.util.concurrent.ConcurrentSkipListMap$Index

  12:   5686353  136472472
org.apache.ignite.internal.processors.cache.KeyCacheObjectImpl

  13:   5325117  127802808  org.apache.ignite.spi.swapspace.SwapKey

  14:   5690452   91047232  org.h2.value.ValueString

  15: 85112   87930752
[Lorg.apache.ignite.internal.util.GridCircularBuffer$Item;

  16:   1515799   48505568  java.util.HashMap$Node

  17:268818   42628512  [Ljava.lang.Object;

  18: 13467   35031504
[Ljava.util.concurrent.ConcurrentHashMap$Node;

  19:361155   23113920
org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAt
omicCacheEntry

  20:   1226922   19630752  java.lang.Integer

  21:102183   15207336  [Ljava.util.HashMap$Node;

  22:363397   14535880
org.apache.ignite.internal.binary.BinaryObjectImpl

  23:395583   12658656  org.jsr166.ConcurrentHashMap8$Node

  24:359176   11493632
org.apache.ignite.internal.processors.cache.extras.GridCacheTtlEntryExtras

  25:3633978721528
org.apache.ignite.internal.processors.query.h2.opt.GridH2ValueCacheObject

  26:2576156182760  java.util.concurrent.atomic.AtomicLong

  27:1707755464800
java.util.concurrent.locks.ReentrantLock$NonfairSync

  28: 851125447168
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPart
ition

  29: 851125447168
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPart
ition$1

  30:1031884953024  java.util.HashMap

  31: 877513510040  org.jsr166.ConcurrentHashMap8

  32:1363823273168  org.jsr166.ConcurrentLinkedDeque8$Node

  33: 906992902368  org.jsr166.LongAdder8

  34: 872772792864  java.lang.ref.WeakReference

  35:1706632730608  java.util.concurrent.locks.ReentrantLock

  36: 851122723584
org.apache.ignite.internal.processors.cache.GridCacheConcurrentMapImpl

  37: 851122723584
org.apache.ignite.internal.util.GridCircularBuffer

  38:1342552148080
org.apache.ignite.internal.processors.cache.CacheEvictableEntryImpl

  39: 853042047296
java.util.concurrent.CopyOnWriteArrayList

  40: 109001693000
[Lorg.jsr166.ConcurrentLinkedHashMap$HashEntry;

  41: 965941545504  java.util.HashSet

  42: 860861377376
java.util.concurrent.atomic.AtomicInteger

  43: 854351366960  java.util.HashMap$KeySet

  44: 850021360032
org.apache.ignite.internal.processors.query.h2.opt.GridH2AbstractKeyValueRow
$WeakValue

  45: 442951063080  java.util.ArrayList

  46: 13513 864832  java.util.concurrent.ConcurrentHashMap

  47: 21090 843600
org.apache.ignite.internal.processors.cache.version.GridCacheVersion

  48:   160 653824  [Lorg.apache.ignite.internal.processors.
cache.distributed.dht.GridDhtLocalPartition;

  49:  5664 637120

RE: ignite used too much memory

2016-10-26 Thread Shawn Du
org.apache.ignite.internal.processors.affinity.GridAffinityAssignment

  59:  7223 288920  java.util.LinkedHashMap$Entry

  60:  9276 222624  org.h2.expression.ValueExpression

  61:  6912 221184  org.h2.expression.Comparison

  62:  3042 219024  java.lang.reflect.Field

  63: 12529 200464
java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock

  64: 12529 200464
java.util.concurrent.locks.ReentrantReadWriteLock$Sync$ThreadLocalHoldCounte
r

  65: 12529 200464
java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock

  66:  1767 197904  org.h2.table.TableFilter

  67:  2116 186208
org.apache.ignite.internal.processors.jobmetrics.GridJobMetricsSnapshot

  68:  1767 169632  org.h2.jdbc.JdbcPreparedStatement

  69:  5145 164640  org.h2.expression.ConditionAndOr

  70:  7674 162472  [Ljava.lang.Class;

  71:  4654 148928  org.h2.expression.Alias

  72:  1726 137328  [S

  73:  1767 127224  org.h2.index.IndexCursor

  74:  7800 124800  java.lang.Object

  75:  1360 119680
org.apache.ignite.internal.util.StripedCompositeReadWriteLock$ReadLock

  76:  1767 113088  org.h2.jdbc.JdbcResultSet

  77:  1992 111552  java.util.LinkedHashMap

  78:  4228 101472  org.h2.value.ValueLong

  79:  2804  89728
java.lang.ThreadLocal$ThreadLocalMap$Entry

  80:   309  88992
org.apache.ignite.configuration.CacheConfiguration

  81:  1767  84816  org.h2.command.CommandContainer

  82:  1172  84384  org.h2.expression.Aggregate

  83:   777  80808  org.h2.table.Column

  84:  2502  80064
java.util.concurrent.ConcurrentSkipListMap$HeadIndex

  85:  2939  79912  [Lorg.h2.expression.Expression;

  86:  1870  73152  [Lorg.h2.table.IndexColumn;

  87:  1798  71920  java.lang.ref.SoftReference

  88:17  69904  [Ljava.nio.ByteBuffer;

  89:  2166  69312
org.apache.ignite.internal.GridLoggerProxy

  90:  1715  68600  org.h2.expression.Operation

  91:64  66560  [Lorg.apache.ignite.internal.processors.
jobmetrics.GridJobMetricsSnapshot;

  92:20  65920  [Ljava.nio.channels.SelectionKey;

 

 

 

 

From: Shawn Du [mailto:shawn...@neulion.com.cn]
Sent: 2016-10-26 13:25
To: user@ignite.apache.org
Subject: ignite used too much memory

 

Hi,

 

In my Ignite server I have several caches; each cache has about 10k
entries.

I build the entries using binary objects. Each entry has just 3 or 4 fields,
each field is short (less than 20 bytes), but I enable an index on each field.

Most entries have an expiry time set. The expiry time is short, about 90
seconds.

After running for 2 hours, 8 GB of memory is used and Ignite runs out of memory.


I built Ignite from source, using yesterday's GitHub master branch code.

 

My questions:

1)   Does expired data release its memory?

2)   How does Ignite build the index, and how much memory does it cost?
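
For context, a minimal sketch of the per-entry expiry described above (the cache
name, type name and key are placeholders); whether such expired entries really
free their memory is exactly question 1:

// Entries written through this view expire about 90 seconds after creation.
IgniteCache<String, BinaryObject> cache = ignite.cache("metrics").withKeepBinary();
IgniteCache<String, BinaryObject> expiring =
    cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 90)));
expiring.put("someKey", ignite.binary().builder("Metric").setField("value", 4).build());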

 

Thanks

Shawn

 

 

 



ignite used too much memory

2016-10-25 Thread Shawn Du
Hi,

 

In my Ignite server I have several caches; each cache has about 10k
entries.

I build the entries using binary objects. Each entry has just 3 or 4 fields,
each field is short (less than 20 bytes), but I enable an index on each field.

Most entries have an expiry time set. The expiry time is short, about 90
seconds.

After running for 2 hours, 8 GB of memory is used and Ignite runs out of memory.


I built Ignite from source, using yesterday's GitHub master branch code.

 

My questions:

1)   Does expired data release its memory?

2)   How does Ignite build the index, and how much memory does it cost?

 

Thanks

Shawn