Re: about memory configuration

2019-12-08 Thread c c
Thanks for your reply.
I mean: if we store data in off-heap memory, do we still need to give much heap
memory via the Ignite JVM start option (-Xmx)? We set
CacheConfiguration.setOnheapCacheEnabled(false). How much memory is usually
enough for the Ignite JVM start option (-Xmx)?
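
For illustration, a minimal sketch of how the heap and the off-heap data region are sized independently (Ignite 2.x API; the 8 GB region size and the -Xmx figures in the comments are placeholders, not recommendations):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class HeapVsOffHeapSizing {
    public static void main(String[] args) {
        // Off-heap data region: sized via DataStorageConfiguration, not via -Xmx.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration()
            .setMaxSize(8L * 1024 * 1024 * 1024); // 8 GB off-heap, illustrative only

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storageCfg);

        // The JVM heap is set separately when the process is launched, e.g.
        // -Xms2g -Xmx2g; with setOnheapCacheEnabled(false) it only has to hold
        // application objects, query results and Ignite's own runtime structures.
        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Heap max: " + Runtime.getRuntime().maxMemory());
        }
    }
}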

Mikael wrote on Sun, Dec 8, 2019 at 3:40 PM:

> Hi!
>
> I would not expect any big difference. withKeepBinary allows you to work
> with an object without having to deserialize the entire object, you do not
> need the actual class to be available, and you can also add/remove fields
> from objects. But from a heap point of view I do not think you will notice
> much difference; the entries are still stored in the same way internally.
>
> Mikael
> On 2019-12-08 at 04:03, c c wrote:
>
> From the documentation we know that objects need to be read from off-heap into
> on-heap when reads are performed on a server node. We have a timer job that
> queries the cache (igniteCache.withKeepBinary().query(new ScanQuery())). Does
> this operation need more on-heap memory? We set
> CacheConfiguration.setOnheapCacheEnabled(false).
> We would also like to know whether using an EntryProcessor (with or without
> withKeepBinary) needs more on-heap memory.
>
> c c wrote on Sun, Dec 8, 2019 at 10:24 AM:
>
>> thank you very much.
>>
>> Mikael wrote on Sun, Dec 8, 2019 at 1:24 AM:
>>
>>> Hi!
>>>
>>> The data regions are always off-heap; you just configure the Java heap
>>> for on-heap usage with -Xmx and so on as usual. Have a look at
>>> ignite.sh/ignite.bat; it depends on how you run your application, so if
>>> you use an embedded Ignite instance just configure this any way you like.
>>> Also read the section about capacity planning.
>>>
>>> The Java heap is just for Java objects, services and any on-heap data;
>>> all caches are stored in data regions, which are off-heap and have
>>> nothing to do with -Xmx except when you use on-heap caching.
>>>
>>> The Ignite documentation is very good and explains all you need to know
>>> about how to configure it.
>>>
>>>
>>> https://apacheignite.readme.io/docs/memory-configuration#section-on-heap-caching
>>>
>>> https://apacheignite.readme.io/docs/memory-configuration
>>>
>>> Mikael
>>> On 2019-12-07 at 17:41, c c wrote:
>>>
>>> Hi,
>>> According to the documentation we can set up the memory size via
>>> org.apache.ignite.configuration.DataStorageConfiguration.
>>> But we do not know whether this applies to off-heap or on-heap memory. We
>>> want to know how to set the Ignite JVM startup options (-Xms, -Xmx). Should
>>> the JVM heap be greater than maxSize in DataStorageConfiguration? We know
>>> some hot data would be deserialized from off-heap to on-heap. Would you mind
>>> giving me some advice? Thanks very much!
>>>
>>>


Re: about memory configuration

2019-12-07 Thread c c
From the documentation we know that objects need to be read from off-heap into
on-heap when reads are performed on a server node. We have a timer job that
queries the cache (igniteCache.withKeepBinary().query(new ScanQuery())). Does
this operation need more on-heap memory? We set
CacheConfiguration.setOnheapCacheEnabled(false).
We would also like to know whether using an EntryProcessor (with or without
withKeepBinary) needs more on-heap memory.
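
For reference, a sketch of the kind of scan described above; the cache name "members" and the field "level" are made up, and this assumes a cache already populated with values stored in Ignite's binary format:

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class KeepBinaryScan {
    static void scan(Ignite ignite) {
        // withKeepBinary() returns entries as BinaryObject, so the filter and the
        // iteration below read fields without deserializing the whole value class.
        IgniteCache<Integer, BinaryObject> cache = ignite.cache("members").withKeepBinary();

        ScanQuery<Integer, BinaryObject> qry = new ScanQuery<>((key, val) -> {
            Integer level = val.field("level"); // field access, no full deserialization
            return level != null && level == 1;
        });

        try (QueryCursor<Cache.Entry<Integer, BinaryObject>> cursor = cache.query(qry)) {
            for (Cache.Entry<Integer, BinaryObject> e : cursor)
                System.out.println(e.getKey());
        }
    }
}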

c c wrote on Sun, Dec 8, 2019 at 10:24 AM:

> thank you very much.
>
> Mikael wrote on Sun, Dec 8, 2019 at 1:24 AM:
>
>> Hi!
>>
>> The data regions are always off-heap; you just configure the Java heap
>> for on-heap usage with -Xmx and so on as usual. Have a look at
>> ignite.sh/ignite.bat; it depends on how you run your application, so if
>> you use an embedded Ignite instance just configure this any way you like.
>> Also read the section about capacity planning.
>>
>> The Java heap is just for Java objects, services and any on-heap data;
>> all caches are stored in data regions, which are off-heap and have
>> nothing to do with -Xmx except when you use on-heap caching.
>>
>> The Ignite documentation is very good and explains all you need to know
>> about how to configure it.
>>
>>
>> https://apacheignite.readme.io/docs/memory-configuration#section-on-heap-caching
>>
>> https://apacheignite.readme.io/docs/memory-configuration
>>
>> Mikael
>> On 2019-12-07 at 17:41, c c wrote:
>>
>> Hi,
>> According to the documentation we can set up the memory size via
>> org.apache.ignite.configuration.DataStorageConfiguration.
>> But we do not know whether this applies to off-heap or on-heap memory. We
>> want to know how to set the Ignite JVM startup options (-Xms, -Xmx). Should
>> the JVM heap be greater than maxSize in DataStorageConfiguration? We know
>> some hot data would be deserialized from off-heap to on-heap. Would you mind
>> giving me some advice? Thanks very much!
>>
>>


Re: about memory configuration

2019-12-07 Thread c c
thank you very much.

Mikael wrote on Sun, Dec 8, 2019 at 1:24 AM:

> Hi!
>
> The data regions are always off-heap; you just configure the Java heap for
> on-heap usage with -Xmx and so on as usual. Have a look at
> ignite.sh/ignite.bat; it depends on how you run your application, so if you
> use an embedded Ignite instance just configure this any way you like. Also
> read the section about capacity planning.
>
> The Java heap is just for Java objects, services and any on-heap data; all
> caches are stored in data regions, which are off-heap and have nothing to
> do with -Xmx except when you use on-heap caching.
>
> The Ignite documentation is very good and explains all you need to know
> about how to configure it.
>
>
> https://apacheignite.readme.io/docs/memory-configuration#section-on-heap-caching
>
> https://apacheignite.readme.io/docs/memory-configuration
>
> Mikael
> On 2019-12-07 at 17:41, c c wrote:
>
> Hi,
> According to the documentation we can set up the memory size via
> org.apache.ignite.configuration.DataStorageConfiguration.
> But we do not know whether this applies to off-heap or on-heap memory. We
> want to know how to set the Ignite JVM startup options (-Xms, -Xmx). Should
> the JVM heap be greater than maxSize in DataStorageConfiguration? We know
> some hot data would be deserialized from off-heap to on-heap. Would you mind
> giving me some advice? Thanks very much!
>
>


about memory configuration

2019-12-07 Thread c c
Hi,
According to the documentation we can set up the memory size via
org.apache.ignite.configuration.DataStorageConfiguration.
But we do not know whether this applies to off-heap or on-heap memory. We want
to know how to set the Ignite JVM startup options (-Xms, -Xmx). Should the JVM
heap be greater than maxSize in DataStorageConfiguration? We know some hot
data would be deserialized from off-heap to on-heap. Would you mind giving
me some advice? Thanks very much!


Re: Does Ignite suit large data search without an index?

2019-11-21 Thread c c
Hi,
We have filter conditions like this: age >= 18 and level = 1 and
gender = 1;  (age >= 18 or level = 2) and hobby = 'music'.
With one cache for each column, joining the results is complicated.
Is there any way to make searching without an index fast?
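
As an illustration of one index-free option, here is a sketch of a ScanQuery whose filter encodes a compound condition like the ones above and is evaluated on the server nodes; the Member class, its getters and the cache name are assumptions made only for this sketch:

import java.io.Serializable;
import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class CompoundFilterScan {
    // Minimal assumed value class for the sketch.
    static class Member implements Serializable {
        int age; int level; int gender;
        int getAge() { return age; }
        int getLevel() { return level; }
        int getGender() { return gender; }
    }

    static void run(IgniteCache<Integer, Member> cache) {
        // The predicate runs remotely on each node that owns data, so only
        // matching entries travel back to the caller.
        ScanQuery<Integer, Member> qry = new ScanQuery<>((key, m) ->
            m.getAge() >= 18 && m.getLevel() == 1 && m.getGender() == 1);

        try (QueryCursor<Cache.Entry<Integer, Member>> cursor = cache.query(qry)) {
            for (Cache.Entry<Integer, Member> e : cursor)
                System.out.println(e.getKey());
        }
    }
}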

Mikael wrote on Thu, Nov 21, 2019 at 8:08 PM:

> Hi!
>
> One idea would be to have one cache for each column, so the key is the name
> and the value is, for example, the hobby; you get an index on the key for
> "free" and create one index on the value.
>
> If the cache does not contain a name, that person does not have a hobby;
> only names that do have a hobby are in the cache. It would complicate the
> query a bit and you need to run multiple queries, one per column, but
> updating the index is fast as you only need to update one index per cache if
> you only update a few columns; if you need to update all of them it will of
> course still need to update the index for all caches. I am not sure if that
> would work for you, it depends on what kind of queries you need.
>
> In theory you could have 15 nodes and have one cache on each node and ask
> queries in parallel.
>
> I am not at all sure it will work well, it's just an idea.
>
> Mikael
>
>
> On 2019-11-21 at 12:17, c c wrote:
>
> Yes, we may add more columns in the future. Do you mean creating an index on
> one column or on multiple columns? Some of the columns have few distinct
> values, so many indexes would not be efficient: they would cost a lot of RAM
> and decrease update/insert performance (this table may be updated in real
> time). So we think just traversing the collection in memory is good, and a
> scalable cache gets rid of the RAM limit and makes filtering faster.
>
> Mikael wrote on Thu, Nov 21, 2019 at 7:06 PM:
>
>> Hi!
>>
>> Are the queries limited to something like "select name from ... where
>> hobby=x and location=y..." or you need more complex queries ?
>>
>> If the columns are fixed to 15, I don't see why you could not create 15
>> indices, it would use lots of ram and I don't think it's the best solution
>> either but it should work.
>>
>> Is it fixed to 15 columns ? or will you have to add more columns in the
>> future ?
>>
>> On 2019-11-21 at 10:56, c c wrote:
>>
>> Hi Mikael,
>> Thanks very much for your reply!
>> The data looks like this:
>> member [name, location, age, gender, hobby, level, credits, expense
>> ...]
>> We need to filter data by arbitrary combinations of fields, so creating an
>> index is not of much use. We thought traversing all the data in memory
>> would work better.
>> We can keep all the data in RAM, but the data may grow progressively and a
>> single node is not scalable, so we plan to use a distributed memory cache.
>> We store data off-heap, entirely in RAM, with default Ignite serialization.
>> We just create the table, then populate the data with the default
>> configuration in Ignite, and query by SQL (one node, 4 million records).
>> Is there any way to improve query performance?
>>
>> Mikael wrote on Thu, Nov 21, 2019 at 5:02 PM:
>>
>>> Hi!
>>>
>>> The comparison is not of much use: with Ignite it's not just searching a
>>> list, there is serialization/deserialization and other things to consider
>>> that will make it slower compared to a simple list search, and the speed of
>>> a linear search on an Ignite cache depends on how you store data
>>> (off-heap/on-heap, in RAM/partially on disk, type of serialization and
>>> so on).
>>>
>>> If you cannot keep all data in ram you are going to need some index to
>>> do a fast lookup, there is no way around it.
>>>
>>> If you can have all the data in ram, why do you need Ignite ? do you
>>> have some other requirements for it that Ignite gives you ? otherwise it
>>> might be simpler to just use a list in ram and go with that ?
>>>
>>> Is memory a limitation (cluster or single node ?) ? if not, could you
>>> explain why is it difficult to create an index on the data ?
>>>
>>> Could you explain what type of data it is ? maybe it is possible to
>>> arrange the data in some other way to improve everything
>>>
>>> Did you test with a single node or a cluster of nodes ? with more nodes
>>> you can improve performance as any search can be split up between the
>>> nodes, still, some kind of index will help a lot.
>>>
>>> Mikael
>>>
>>> On 2019-11-21 at 08:49, c c wrote:
>>> > Hi,
>>> > We have a table with about 30 million records and 15 fields. We need to
>>> > implement a function where a user can filter records by an arbitrary
>>> > combination of 12 of the fields (one, two, three... of them) with very
>>> > low latency. It's difficult to create indexes. We understand Ignite is an
>>> > in-memory grid cache and tested it with 4 million records (one node)
>>> > without creating an index. It took about 5 seconds to find the records
>>> > matching a single-field filter condition. We also tested simply
>>> > traversing a Java List (10 million elements) with 3 filter conditions; it
>>> > took about 0.1 second. We just want to know whether Ignite suits this use
>>> > case. Thanks very much.
>>> >
>>>
>>


Re: Does Ignite suit large data search without an index?

2019-11-21 Thread c c
Yes, we may add more columns in the future. Do you mean creating an index on one
column or on multiple columns? Some of the columns have few distinct values, so
many indexes would not be efficient: they would cost a lot of RAM and decrease
update/insert performance (this table may be updated in real time). So we think
just traversing the collection in memory is good, and a scalable cache gets rid
of the RAM limit and makes filtering faster.
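
For what it's worth, here is a sketch of how a multi-column (group) index could be declared through a QueryEntity, in case that was the question; the type and field names are placeholders, and whether such an index actually helps depends entirely on the real queries:

import java.util.Arrays;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.cache.QueryIndexType;
import org.apache.ignite.configuration.CacheConfiguration;

public class GroupIndexConfig {
    static CacheConfiguration<Integer, Object> memberCache() {
        QueryEntity entity = new QueryEntity(Integer.class.getName(), "Member");
        entity.addQueryField("age", Integer.class.getName(), null);
        entity.addQueryField("level", Integer.class.getName(), null);
        entity.addQueryField("hobby", String.class.getName(), null);

        // One sorted index covering (age, level) instead of two single-column indexes.
        entity.setIndexes(Arrays.asList(
            new QueryIndex(Arrays.asList("age", "level"), QueryIndexType.SORTED)));

        return new CacheConfiguration<Integer, Object>("members")
            .setQueryEntities(Arrays.asList(entity));
    }
}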

Mikael wrote on Thu, Nov 21, 2019 at 7:06 PM:

> Hi!
>
> Are the queries limited to something like "select name from ... where
> hobby=x and location=y..." or you need more complex queries ?
>
> If the columns are fixed to 15, I don't see why you could not create 15
> indices, it would use lots of ram and I don't think it's the best solution
> either but it should work.
>
> Is it fixed to 15 columns ? or will you have to add more columns in the
> future ?
>
> On 2019-11-21 at 10:56, c c wrote:
>
> Hi Mikael,
> Thanks very much for your reply!
> The data looks like this:
> member [name, location, age, gender, hobby, level, credits, expense
> ...]
> We need to filter data by arbitrary combinations of fields, so creating an
> index is not of much use. We thought traversing all the data in memory
> would work better.
> We can keep all the data in RAM, but the data may grow progressively and a
> single node is not scalable, so we plan to use a distributed memory cache.
> We store data off-heap, entirely in RAM, with default Ignite serialization.
> We just create the table, then populate the data with the default
> configuration in Ignite, and query by SQL (one node, 4 million records).
> Is there any way to improve query performance?
>
> Mikael wrote on Thu, Nov 21, 2019 at 5:02 PM:
>
>> Hi!
>>
>> The comparison is not of much use: with Ignite it's not just searching a
>> list, there is serialization/deserialization and other things to consider
>> that will make it slower compared to a simple list search, and the speed of
>> a linear search on an Ignite cache depends on how you store data
>> (off-heap/on-heap, in RAM/partially on disk, type of serialization and
>> so on).
>>
>> If you cannot keep all data in ram you are going to need some index to
>> do a fast lookup, there is no way around it.
>>
>> If you can have all the data in ram, why do you need Ignite ? do you
>> have some other requirements for it that Ignite gives you ? otherwise it
>> might be simpler to just use a list in ram and go with that ?
>>
>> Is memory a limitation (cluster or single node ?) ? if not, could you
>> explain why is it difficult to create an index on the data ?
>>
>> Could you explain what type of data it is ? maybe it is possible to
>> arrange the data in some other way to improve everything
>>
>> Did you test with a single node or a cluster of nodes ? with more nodes
>> you can improve performance as any search can be split up between the
>> nodes, still, some kind of index will help a lot.
>>
>> Mikael
>>
>> On 2019-11-21 at 08:49, c c wrote:
>> > Hi,
>> > We have a table with about 30 million records and 15 fields. We need to
>> > implement a function where a user can filter records by an arbitrary
>> > combination of 12 of the fields (one, two, three... of them) with very
>> > low latency. It's difficult to create indexes. We understand Ignite is an
>> > in-memory grid cache and tested it with 4 million records (one node)
>> > without creating an index. It took about 5 seconds to find the records
>> > matching a single-field filter condition. We also tested simply traversing
>> > a Java List (10 million elements) with 3 filter conditions; it took about
>> > 0.1 second. We just want to know whether Ignite suits this use case.
>> > Thanks very much.
>> >
>>
>


Re: Does Ignite suit large data search without an index?

2019-11-21 Thread c c
Hi Mikael,
 Thanks very much for your reply!
 The data looks like this:
 member [name, location, age, gender, hobby, level, credits, expense
...]
 We need to filter data by arbitrary combinations of fields, so creating an
index is not of much use. We thought traversing all the data in memory would
work better.
 We can keep all the data in RAM, but the data may grow progressively and a
single node is not scalable, so we plan to use a distributed memory cache.
 We store data off-heap, entirely in RAM, with default Ignite serialization.
We just create the table, then populate the data with the default
configuration in Ignite, and query by SQL (one node, 4 million records).
 Is there any way to improve query performance?
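
Two things sometimes tried in this situation, sketched below with made-up names: running the SQL through SqlFieldsQuery as usual, and raising CacheConfiguration.setQueryParallelism() so each node scans its partitions with several threads when no index applies (worth checking that this option is available in your Ignite version before relying on it):

import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.configuration.CacheConfiguration;

public class ParallelSqlScan {
    static CacheConfiguration<Integer, Object> cacheConfig() {
        return new CacheConfiguration<Integer, Object>("members")
            .setQueryParallelism(4); // threads per node for one query, placeholder value
    }

    static void query(IgniteCache<Integer, Object> cache) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "select name from Member where (age >= 18 or level = ?) and hobby = ?")
            .setArgs(2, "music");

        try (QueryCursor<List<?>> cursor = cache.query(qry)) {
            for (List<?> row : cursor)
                System.out.println(row.get(0));
        }
    }
}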

Mikael wrote on Thu, Nov 21, 2019 at 5:02 PM:

> Hi!
>
> The comparison is not of much use: with Ignite it's not just searching a
> list, there is serialization/deserialization and other things to consider
> that will make it slower compared to a simple list search, and the speed of
> a linear search on an Ignite cache depends on how you store data
> (off-heap/on-heap, in RAM/partially on disk, type of serialization and
> so on).
>
> If you cannot keep all data in ram you are going to need some index to
> do a fast lookup, there is no way around it.
>
> If you can have all the data in ram, why do you need Ignite ? do you
> have some other requirements for it that Ignite gives you ? otherwise it
> might be simpler to just use a list in ram and go with that ?
>
> Is memory a limitation (cluster or single node ?) ? if not, could you
> explain why is it difficult to create an index on the data ?
>
> Could you explain what type of data it is ? maybe it is possible to
> arrange the data in some other way to improve everything
>
> Did you test with a single node or a cluster of nodes ? with more nodes
> you can improve performance as any search can be split up between the
> nodes, still, some kind of index will help a lot.
>
> Mikael
>
> On 2019-11-21 at 08:49, c c wrote:
> > Hi,
> > We have a table with about 30 million records and 15 fields. We need to
> > implement a function where a user can filter records by an arbitrary
> > combination of 12 of the fields (one, two, three... of them) with very
> > low latency. It's difficult to create indexes. We understand Ignite is an
> > in-memory grid cache and tested it with 4 million records (one node)
> > without creating an index. It took about 5 seconds to find the records
> > matching a single-field filter condition. We also tested simply traversing
> > a Java List (10 million elements) with 3 filter conditions; it took about
> > 0.1 second. We just want to know whether Ignite suits this use case.
> > Thanks very much.
> >
>


Does Ignite suit large data search without an index?

2019-11-20 Thread c c
Hi,
 We have a table with about 30 million records and 15 fields. We need to
implement a function where a user can filter records by an arbitrary
combination of 12 of the fields (one, two, three... of them) with very low
latency. It's difficult to create indexes. We understand Ignite is an
in-memory grid cache and tested it with 4 million records (one node) without
creating an index. It took about 5 seconds to find the records matching a
single-field filter condition. We also tested simply traversing a Java List
(10 million elements) with 3 filter conditions; it took about 0.1 second. We
just want to know whether Ignite suits this use case. Thanks very much.


Re: Is there any way to make sure data and its backups are in different data centers

2019-10-28 Thread c c
Thank you very much!

Alex Plehanov wrote on Mon, Oct 28, 2019 at 8:59 PM:

> Hello,
>
> You can set custom affinityBackupFilter in RendezvousAffinityFunction. See
> [1] for example.
>
> [1] :
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
>
>
> Mon, Oct 28, 2019 at 11:31, c c:
>
>> Hi, we are working on Ignite v2.7.0. We plan to deploy 5 server nodes
>> across two data centers (three in one data center and two in the other).
>> We set 2 backups for each entry, so we have three copies of each entry. How
>> can we make sure there is at least one copy in each data center?
>>
>> Regards
>>
>>
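
A sketch of the custom affinityBackupFilter idea suggested above, for this 2-backup / two-data-center layout; the "DC" attribute name and values are made up, and the rule below simply rejects a candidate node that would put all three copies into one data center:

import java.util.Collections;
import java.util.Objects;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DcAwareBackups {
    static IgniteConfiguration nodeConfig(String dataCenter) {
        // Every node advertises its data center as a user attribute, e.g. "dc1" or "dc2".
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setUserAttributes(Collections.singletonMap("DC", dataCenter));

        RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
        // The candidate is tested against the nodes already chosen for the partition
        // (primary first); reject it if it would be the third copy in the same DC.
        aff.setAffinityBackupFilter((candidate, alreadyChosen) -> {
            Object dc = candidate.attribute("DC");
            long sameDc = alreadyChosen.stream()
                .filter(n -> Objects.equals(n.attribute("DC"), dc))
                .count();
            return sameDc < 2;
        });

        CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setBackups(2);
        ccfg.setAffinity(aff);

        cfg.setCacheConfiguration(ccfg);
        return cfg;
    }
}

With 2 backups this keeps at least one copy in each data center as long as both data centers have live nodes. The ClusterNodeAttributeAffinityBackupFilter linked above is a ready-made alternative which, as far as I understand, is stricter and tries to place every copy on a distinct attribute value.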


Is there any way to make sure data and its backups are in different data centers

2019-10-28 Thread c c
Hi, we are working on Ignite v2.7.0. We plan to deploy 5 server nodes across
two data centers (three in one data center and two in the other). We set 2
backups for each entry, so we have three copies of each entry. How can we make
sure there is at least one copy in each data center?

Regards


Re: Re: TPS does not increase even though new server nodes added

2019-03-06 Thread c c
Hi,
We have provided thread dumps from all clients and servers. Would you mind
taking a look?
Thanks very much.

yu...@toonyoo.net wrote on Wed, Mar 6, 2019 at 9:29 AM:

> you can see the full thread dump in the attachment, the filename is
> dump.zip
>
> --
> yu...@toonyoo.net
>
>
> *From:* Ilya Kasnacheev 
> *Date:* 2019-03-04 16:21
> *To:* user 
> *Subject:* Re: Re: TPS does not increase even though new server nodes
> added
> Hello!
>
> Is it a single node dump? Looks like it is idle.
>
> Please provide dumps from all clients and servers during data load.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Mar 4, 2019 at 10:53, yu...@toonyoo.net:
>
>> these are the full thread dumps, the attachment file name is 1.txt
>>
>>
>> --
>> yu...@toonyoo.net
>>
>>
>> *From:* Ilya Kasnacheev 
>> *Date:* 2019-03-04 15:32
>> *To:* user 
>> *Subject:* Re: TPS does not increase even though new server nodes added
>> Hello!
>>
>> Can you provide full thread dumps from all nodes during max load?
>> --
>> Ilya Kasnacheev
>>
>>
>> Mon, Mar 4, 2019 at 10:01, c c:
>>
>>> More information. The cache is partitioned and full sync.
>>>
>>> c c wrote on Mon, Mar 4, 2019 at 2:54 PM:
>>>
>>>> Hi,
>>>> We are working on Ignite 2.7.0. We just put entities into an Ignite cache
>>>> with transactions and backups(2) enabled. We can get 6000 TPS with 3
>>>> server nodes. Then we tested on 5 server nodes, but TPS does not increase.
>>>> We operate Ignite via client nodes embedded in the application. Except for
>>>> "publicThreadPoolSize" and "systemThreadPoolSize", other configurations
>>>> are left unchanged. What could the problem be?
>>>>    Thanks
>>>>
>>>


Re: TPS does not increase even though new server nodes added

2019-03-03 Thread c c
More information. The cache is partitioned and full sync.
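
For readers following along, roughly the setup described in this thread, sketched with placeholder names (transactional PARTITIONED cache, 2 backups, FULL_SYNC, accessed from a client-mode node embedded in the application, as the thread suggests):

import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TxCacheSetup {
    static IgniteConfiguration clientConfig() {
        CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>("txCache");
        ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setBackups(2);
        // Every put waits for both backups to be written before returning.
        ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // the application embeds a client node
        cfg.setCacheConfiguration(ccfg);
        return cfg;
    }
}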

c c wrote on Mon, Mar 4, 2019 at 2:54 PM:

> Hi,
> We are working on Ignite 2.7.0. We just put entities into an Ignite cache
> with transactions and backups(2) enabled. We can get 6000 TPS with 3 server
> nodes. Then we tested on 5 server nodes, but TPS does not increase. We
> operate Ignite via client nodes embedded in the application. Except for
> "publicThreadPoolSize" and "systemThreadPoolSize", other configurations are
> left unchanged. What could the problem be?
>    Thanks
>


TPS does not increase even though new server nodes added

2019-03-03 Thread c c
Hi,
We are working on Ignite 2.7.0. We just put entities into an Ignite cache
with transactions and backups(2) enabled. We can get 6000 TPS with 3 server
nodes. Then we tested on 5 server nodes, but TPS does not increase. We operate
Ignite via client nodes embedded in the application. Except for
"publicThreadPoolSize" and "systemThreadPoolSize", other configurations are
left unchanged. What could the problem be?
   Thanks


Re: Updating an Object-type entry value in an EntryProcessor within a transaction does not work

2019-01-21 Thread c c
Thanks for your reply.
I use the following configuration for the cache:
partitioned, 2 backups, transactional.
Do these matter?

Ilya Kasnacheev wrote on Mon, Jan 21, 2019 at 7:42 PM:

> Hello!
>
> I was able to run your first & second fragments with expected behavior:
>
> Entity{id='hello3', value='v3', date=2019-01-21}
>
> Note that I'm using 2.7.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, Jan 21, 2019 at 08:43, c c:
>
>> Hi,
>>   I work on ignite 2.7.0
>>
>>   I have a value type as below
>>
>> public class Entity {
>>  String id;
>>  String value;
>>  Date date;
>> }
>>
>> then interact with cache as below
>>
>> try (Transaction tx = ignite.transactions().txStart(
>>         TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE)) {
>>     cache1.invoke("k6", new EntryProcessor<String, Entity, Object>() {
>>         @Override public Object process(MutableEntry<String, Entity> mutableEntry,
>>             Object... objects) throws EntryProcessorException {
>>             Entity e = mutableEntry.getValue();
>>             e.setId("hello3");
>>             e.setValue("v3");
>>             mutableEntry.setValue(e);
>>             return null;
>>         }
>>     });
>>     tx.commit();
>> }
>>
>> But entry value does not change after commit.
>>
>> If I code it as below, the entry value does change after the commit.
>>
>> try (Transaction tx = ignite.transactions().txStart(
>>         TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE)) {
>>     cache1.invoke("k6", new EntryProcessor<String, Entity, Object>() {
>>         @Override public Object process(MutableEntry<String, Entity> mutableEntry,
>>             Object... objects) throws EntryProcessorException {
>>             mutableEntry.setValue(new Entity("test2", "a2", new Date()));
>>             return null;
>>         }
>>     });
>>     tx.commit();
>> }
>>
>> I found that changes to the entry do not take effect after the commit when
>> mutableEntry.getValue() is called.
>>
>


Updating an Object-type entry value in an EntryProcessor within a transaction does not work

2019-01-20 Thread c c
Hi,
  I work on ignite 2.7.0

  I have a value type as below

public class Entity {
 String id;
 String value;
 Date date;
}

then interact with cache as below

try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE)) {
    cache1.invoke("k6", new EntryProcessor<String, Entity, Object>() {
        @Override public Object process(MutableEntry<String, Entity> mutableEntry,
            Object... objects) throws EntryProcessorException {
            Entity e = mutableEntry.getValue();
            e.setId("hello3");
            e.setValue("v3");
            mutableEntry.setValue(e);
            return null;
        }
    });
    tx.commit();
}

But entry value does not change after commit.

If I code it as below, the entry value does change after the commit.

try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.PESSIMISTIC, TransactionIsolation.SERIALIZABLE)) {
    cache1.invoke("k6", new EntryProcessor<String, Entity, Object>() {
        @Override public Object process(MutableEntry<String, Entity> mutableEntry,
            Object... objects) throws EntryProcessorException {
            mutableEntry.setValue(new Entity("test2", "a2", new Date()));
            return null;
        }
    });
    tx.commit();
}

I found that changes to the entry do not take effect after the commit when
mutableEntry.getValue() is called.
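
Based on the observation above (setting a freshly built value works, while mutating the object returned by getValue() does not), a hedged workaround sketch that reuses the names from the snippets in this thread; it assumes Entity has the three-argument constructor shown in the second fragment and a getDate() getter, which is not shown in the original code:

cache1.invoke("k6", new EntryProcessor<String, Entity, Object>() {
    @Override public Object process(MutableEntry<String, Entity> entry,
        Object... args) throws EntryProcessorException {
        Entity old = entry.getValue();
        // Build a new instance instead of mutating 'old' in place, so the
        // entry is written back with the intended changes after commit.
        Entity updated = new Entity("hello3", "v3",
            old == null ? new Date() : old.getDate());
        entry.setValue(updated);
        return null;
    }
});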