Re: Quick questions on Evictions

2018-06-18 Thread the_palakkaran
Hi,

DataPageEvictionMode is deprecated now, right? What should I do to evict my
off heap entries? Also, can I limit off heap memory usage?
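For context, limiting off-heap usage is typically done at the data region level. A sketch only, using Ignite 2.x API names (DataStorageConfiguration, DataRegionConfiguration, DataPageEvictionMode — verify against the version you run; the region name and size are illustrative):

```java
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class BoundedRegionConfig {
    public static IgniteConfiguration create() {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("bounded_region");          // illustrative name
        region.setMaxSize(256L * 1024 * 1024);     // hard cap on off-heap: 256 MB
        // Page-based eviction kicks in as the region approaches maxSize.
        region.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDataRegionConfigurations(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);
        return cfg;
    }
}
```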



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache Configuration for a data region

2018-06-18 Thread the_palakkaran
Do I need to configure something else for off heap eviction?

I have the on-heap cache enabled and an LRU eviction policy set, with a max
size of 1MB also provided. I have 1.5 million entries, so obviously some of
the entries loaded at startup should be evicted. Still, when I try to get
those entries, I get performance equivalent to on-heap cache reads. Is this
normal, or am I missing something?

As per the thread below, I need to set some data page eviction too, but the
official documentation says it is deprecated:

http://apache-ignite-users.70518.x6.nabble.com/Quick-questions-on-Evictions-td16632.html



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: monitoring function of web console

2018-06-18 Thread Hu Hailin
Hi,

Thank you for your information.
It's very helpful.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

2018-06-18 Thread Вячеслав Коптилин
Hello,

It seems that the root cause of the issue is wrong values of the 'KEY_TYPE'
and 'VALUE_TYPE' parameters.
In your case, there is no need to specify 'KEY_TYPE' at all, and
'VALUE_TYPE' should be Person.class.getName(), I think.

Please try the following (note the space before WITH, which the original
string concatenation was missing):

String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG, " +
    "firstName VARCHAR, lastName VARCHAR, resume VARCHAR, salary FLOAT, " +
    "PRIMARY KEY(firstName))" +
    " WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, " +
    "WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=" + PERSON_CACHE_NAME +
    ", VALUE_TYPE=" + Person.class.getName() + "\"";

Best regards,
Slava.

пн, 18 июн. 2018 г. в 21:50, Cong Guo :

> Hi,
>
> I need to use both SQL and non-SQL APIs (key-value) on a single cache. I
> follow the document in:
>
> https://apacheignite-sql.readme.io/docs/create-table
>
> [reproducer code and output snipped; quoted in full in the original post
> "A bug in SQL "CREATE TABLE" and its underlying Ignite cache" below]
>
> The bug is that the SqlFieldsQuery cannot see the data added by “put”.


RE: SQL cannot find data of new class definition

2018-06-18 Thread Cong Guo
Can I add fields without restarting the cluster? My requirement is to do a
rolling upgrade.

From: Вячеслав Коптилин [mailto:slava.kopti...@gmail.com]
Sent: 2018年6月18日 17:35
To: user@ignite.apache.org
Subject: Re: SQL cannot find data of new class definition

Hello,

>  I use BinaryObject in the first place because the document says BinaryObject 
> “enables you to add and remove fields from objects of the same type”
Yes, you can dynamically add fields to BinaryObject using BinaryObjectBuilder,
but fields that you want to query have to be specified on node startup, for
example through QueryEntity.
Please take a look at this page: 
https://apacheignite.readme.io/v2.5/docs/indexes#queryentity-based-configuration

I would suggest specifying a new field via QueryEntity in XML configuration 
file and restart your cluster. I hope it helps.
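The same QueryEntity change can also be expressed in Java configuration. A sketch, with assumptions marked: field names follow the Person example from this thread, "addOn" being the newly added field; the value class name and cache name are placeholders to adjust:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

public class PersonQueryConfig {
    public static CacheConfiguration<Long, Object> create() {
        QueryEntity entity = new QueryEntity();
        entity.setKeyType(Long.class.getName());
        entity.setValueType("com.example.Person"); // assumed fully-qualified value class

        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("firstName", String.class.getName());
        fields.put("salary", Double.class.getName());
        // The new field must be declared here to become queryable.
        fields.put("addOn", Integer.class.getName());
        entity.setFields(fields);

        CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("PersonCache");
        cfg.setQueryEntities(Collections.singletonList(entity));
        return cfg;
    }
}
```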

Thanks!

пн, 18 июн. 2018 г. в 16:47, Cong Guo 
mailto:cong.g...@huawei.com>>:
Hi,

[earlier messages in this thread snipped; they are quoted in full under
"RE: SQL cannot find data of new class definition" below]

Re: SQL cannot find data of new class definition

2018-06-18 Thread Вячеслав Коптилин
Hello,

>  I use BinaryObject in the first place because the document says
BinaryObject “enables you to add and remove fields from objects of the same
type”
Yes, you can dynamically add fields to BinaryObject using
BinaryObjectBuilder, but fields that you want to query have to be specified
on node startup, for example through QueryEntity.
Please take a look at this page:
https://apacheignite.readme.io/v2.5/docs/indexes#queryentity-based-configuration

I would suggest specifying a new field via QueryEntity in XML configuration
file and restart your cluster. I hope it helps.

Thanks!

пн, 18 июн. 2018 г. в 16:47, Cong Guo :

> Hi,
>
> [quoted message snipped; Cong Guo's full message appears under "RE: SQL
> cannot find data of new class definition" below]

Re: Distributed Database as best choice for persistence

2018-06-18 Thread Denis Magda
No, you can't make Cassandra transactional by gluing it with Ignite. If
you'd like to have transactions, then a strongly consistent store has to be
used instead (such as an RDBMS or Ignite persistence).

May I ask why don't you want to go for Ignite persistence?

--
Denis

On Mon, Jun 18, 2018 at 2:09 AM piyush  wrote:

> cool. Does it maintain transactions ?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: A bug in SQL "CREATE TABLE" and its underlying Ignite cache

2018-06-18 Thread Denis Magda
Hi,

Check out these code samples, which suggest best practices for sticking
together SQL + k/v when the structure is defined by CREATE TABLE:
https://github.com/dmagda/ignite_world_demo
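A minimal sketch of the pattern such a demo illustrates, with assumptions marked: with a single-column PRIMARY KEY the cache key is the column value itself, and CREATE TABLE generates the value type name — "SQL_PUBLIC_PERSONS" and the cache name below are assumptions to verify in your own cluster, not confirmed names:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

public class SqlVisiblePut {
    // Sketch: make a key-value put visible to SQL by building the value with
    // the type name that CREATE TABLE generated for the table.
    public static void put(Ignite ignite) {
        IgniteCache<String, BinaryObject> cache =
            ignite.<String, BinaryObject>cache("PersonCache").withKeepBinary();

        BinaryObject val = ignite.binary()
            .builder("SQL_PUBLIC_PERSONS") // assumed generated value type name
            .setField("ID", 2L)
            .setField("ORGID", 1L)
            .setField("LASTNAME", "World")
            .build();

        // With a single-column PRIMARY KEY (firstName), the cache key is the
        // column value itself.
        cache.put("Hello", val);
    }
}
```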

--
Denis

On Mon, Jun 18, 2018 at 11:50 AM Cong Guo  wrote:

> Hi,
>
> I need to use both SQL and non-SQL APIs (key-value) on a single cache.
>
> [reproducer code and output snipped; quoted in full in the original post
> "A bug in SQL "CREATE TABLE" and its underlying Ignite cache" below]
>
> The bug is that the SqlFieldsQuery cannot see the data added by “put”.


A bug in SQL "CREATE TABLE" and its underlying Ignite cache

2018-06-18 Thread Cong Guo
Hi,

I need to use both SQL and non-SQL APIs (key-value) on a single cache. I follow 
the document in:
https://apacheignite-sql.readme.io/docs/create-table

I use "CREATE TABLE" to create the table and its underlying cache. I can use 
both SQL "INSERT" and put to add data to the cache. However, when I run a 
SqlFieldsQuery, only the row added by SQL "INSERT" can be seen. The Ignite 
version is 2.4.0.

You can reproduce the bug using the following code:

CacheConfiguration<String, Integer> dummyCfg = new CacheConfiguration<>("DUMMY");
dummyCfg.setSqlSchema("PUBLIC");

try (IgniteCache<String, Integer> dummyCache = ignite.getOrCreateCache(dummyCfg)) {
    String createTableSQL = "CREATE TABLE Persons (id LONG, orgId LONG, " +
        "firstName VARCHAR, lastName VARCHAR, resume VARCHAR, salary FLOAT, " +
        "PRIMARY KEY(firstName))" +
        " WITH \"BACKUPS=1, ATOMICITY=TRANSACTIONAL, " +
        "WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC, CACHE_NAME=" + PERSON_CACHE_NAME +
        ", KEY_TYPE=String, VALUE_TYPE=BinaryObject\"";

    dummyCache.query(new SqlFieldsQuery(createTableSQL)).getAll();

    SqlFieldsQuery firstInsert = new SqlFieldsQuery("INSERT INTO Persons " +
        "(id, orgId, firstName, lastname, resume, salary) VALUES (?,?,?,?,?,?)");
    firstInsert.setArgs(1L, 1L, "John", "Smith", "PhD", 1.0d);
    dummyCache.query(firstInsert).getAll();

    try (IgniteCache<String, Person> personCache = ignite.cache(PERSON_CACHE_NAME)) {
        Person p2 = new Person(2L, 1L, "Hello", "World", "Master", 1000.0d);
        personCache.put("Hello", p2);

        IgniteCache<String, BinaryObject> binaryCache =
            personCache.<String, BinaryObject>withKeepBinary();
        System.out.println("Size of the cache is: " + binaryCache.size(CachePeekMode.ALL));

        binaryCache.query(new ScanQuery<>(null))
            .forEach(entry -> System.out.println(entry.getKey()));

        System.out.println("Select results: ");
        SqlFieldsQuery qry = new SqlFieldsQuery("select * from Persons");
        QueryCursor<List<?>> answers = personCache.query(qry);
        List<List<?>> personList = answers.getAll();
        for (List<?> row : personList) {
            String fn = (String) row.get(2);
            System.out.println(fn);
        }
    }
}


The output is:

Size of the cache is: 2
Hello
String [idHash=213193302, hash=-900113201, FIRSTNAME=John]
Select results:
John

The bug is that the SqlFieldsQuery cannot see the data added by "put".


RE: Ignite Node failure - Node out of topology (SEGMENTED)

2018-06-18 Thread naresh.goty
Thanks Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache Configuration for a data region

2018-06-18 Thread slava.koptilin
> I guess durable memory is pure RAM based and native persistence is
combination of both RAM and disk.
Durable memory is a memory architecture that allows processing and storing
data both in memory and on disk. In other words, Ignite native persistence
is a feature (add-on) provided by the durable memory architecture.

> How to manipulate durable memory? Can I configure it?
Please take a look at this page:
https://apacheignite.readme.io/docs/memory-configuration

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite docker container not able to join in cluster

2018-06-18 Thread dkarachentsev
Hi,

You configured the external public EC2 interface address (34.241...), but it
should be the internal one: 172...

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Error while Starting Grid: javax.management.InstanceAlreadyExistsException: (LruEvictionPolicy)

2018-06-18 Thread Andrey Mashenkov
Possibly, it is already fixed.
Please, try to upgrade to the latest version.

On Fri, Jun 8, 2018 at 5:25 PM HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com> wrote:

> Hi Andrey,
>
> Thank you very much for the prompt response.
>
>
>
> We have only one node in a JVM.
>
>
>
>
>
> This is my grid config. We use 1.9.0 Ignite.
>
>
>
> IgniteConfiguration cfg = new IgniteConfiguration();
>
> cfg.setPeerClassLoadingEnabled(false);
>
> cfg.setLifecycleBeans(new LogLifecycleBean());
>
> TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
>
> TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
> Collection<String> addressSet = new HashSet<>();
> for (String address : ipList) {
>     addressSet.add(address);
> }
> ipFinder.setAddresses(addressSet);
>
> discoverySpi.setJoinTimeout(1);
> discoverySpi.setLocalPort(47500);
> discoverySpi.setIpFinder(ipFinder);
>
> cfg.setDiscoverySpi(discoverySpi);
>
>
>
> And this is the cache config. We don’t set cache group specifically.
>
>
>
> CacheConfiguration cc = new CacheConfiguration();
>
> cc.setName("mycache");
> cc.setBackups(1);
> cc.setCacheMode(CacheMode.PARTITIONED);
> cc.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>
> LruEvictionPolicy evpl = new LruEvictionPolicy();
> evpl.setMaxSize(1);
>
> cc.setEvictionPolicy(evpl);
>
> cc.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(
>     new Duration(TimeUnit.SECONDS, 15)));
>
> cc.setStatisticsEnabled(true);
>
>
>
>
>
>
>
> *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
> *Sent:* Friday, June 08, 2018 10:02 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Error while Starting Grid:
> javax.management.InstanceAlreadyExistsException: (LruEvictionPolicy)
>
>
>
> Hi,
>
>
>
> Looks like a bug.
>
>
>
> Can you share grid configuration?
>
> Do you have more than one node in same JVM?
>
> Do you have configure cache groups manually?
>
>
>
> On Fri, Jun 8, 2018 at 4:48 PM, HEWA WIDANA GAMAGE, SUBASH <
> subash.hewawidanagam...@fmr.com> wrote:
>
> Hi everyone,
>
> As a quick note on what we do here: we listen to NODE_FAILED and
> NODE_SEGMENTED events, and upon such events we use Ignition.stopAll(true)
> and Ignition.start() to restart the Ignite grid in a given JVM. Here Ignite
> does not start as a standalone process by itself, but is bootstrapped
> programmatically, since it is meant to be part of some other main process.
>
> So we received a NODE_FAILED event and restarted Ignite, where we see the
> following error and the start fails. "mycache" is created with an LRU
> eviction policy during the Ignite startup process.
>
> As per the error, it tries to register an LruEvictionPolicy MBean twice. We
> use a cache named mycache in PARTITIONED mode with 4 nodes in the cluster.
> Any idea for this behavior?
>
>
>
>
>
> org.apache.ignite.IgniteException: Failed to register MBean for component:
> LruEvictionPolicy [max=10, batchSize=1, maxMemSize=524288000,
> memSize=0, size=0]
>at
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:946)
>at org.apache.ignite.Ignition.start(Ignition.java:325)
>at com.test.IgniteRuntime.start(IgniteRuntime.java:87)
>at
> com.test.segmentation.SegmentationResolver.recycle(SegmentationResolver.java:61)
>at
> com.test.RandomizedDelayResolver.resolve(RandomizedDelayResolver.java:47)
>at
> com.test.SegmentationProcessor.lambda$init$2(SegmentationProcessor.java:95)
>at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.ignite.IgniteCheckedException: Failed to register
> MBean for component: LruEvictionPolicy [max=10, batchSize=1,
> maxMemSize=524288000, memSize=0, size=0]
>at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.registerMbean(GridCacheProcessor.java:3518)
>at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepare(GridCacheProcessor.java:557)
>at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepare(GridCacheProcessor.java:529)
>at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1306)
>   at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.onKernalStart(GridCacheProcessor.java:801)
>at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:959)
>at
> 

Re: Deleting a ticket from https://issues.apache.org/jira/projects/IGNITE/issues

2018-06-18 Thread Raghav
Hi Dkarachentsev,

Thanks for your comments. I have closed the ticket, but it is still
accessible via the URL. It would be helpful if we could delete the JIRA so
that the ticket is not accessible over the internet.

Kindly let us know which team to contact to delete the ticket.

Thanks!!!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: High cpu on ignite server nodes

2018-06-18 Thread Stanislav Lukyanov
There is no default expiry policy, and no default eviction policy for the 
on-heap caches (just in case: expiry and eviction are not the same thing, see 
https://apacheignite.readme.io/docs/evictions and 
https://apacheignite.readme.io/docs/expiry-policies).

I see that most of the threads in the dump that you’ve shared are executing
on-heap eviction code.
Perhaps you’ve just hit the eviction size of your caches, and now the cache
updates have become more expensive.
You can try increasing the eviction maximum size in the eviction policy.
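A sketch of what raising the on-heap eviction cap can look like (the cache name follows the thread; the size value is illustrative, and on newer Ignite 2.x versions an eviction policy factory may be preferred over setEvictionPolicy):

```java
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class EvictionTuning {
    public static CacheConfiguration<Object, Object> create() {
        CacheConfiguration<Object, Object> cc = new CacheConfiguration<>("mycache");
        cc.setOnheapCacheEnabled(true);

        LruEvictionPolicy<Object, Object> lru = new LruEvictionPolicy<>();
        // Entries kept on heap before LRU eviction starts; raise this if hot
        // entries are being churned. The value is illustrative.
        lru.setMaxSize(500_000);
        cc.setEvictionPolicy(lru);
        return cc;
    }
}
```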

Thanks,
Stan

From: praveeng
Sent: 16 июня 2018 г. 18:38
To: user@ignite.apache.org
Subject: RE: High cpu on ignite server nodes

Hi Stan,

The high CPU usage is on all servers.
One doubt: what is the default expiry policy for a cache if we don't set one?

Following are the stats of one cache collected from ignitevisor.



Cache 'playerSessionInfoCacheIgnite(@c18)':
+--+
| Name(@) | playerSessionInfoCacheIgnite(@c18) |
| Nodes   | 7  |
| Total size Min/Avg/Max  | 0 / 53201.29 / 151537  |
|   Heap size Min/Avg/Max | 0 / 21857.29 / 50001   |
|   Off-heap size Min/Avg/Max | 0 / 31344.00 / 101536  |
+--+

Nodes for: playerSessionInfoCacheIgnite(@c18)
+=+
|Node ID8(@), IP| CPUs | Heap Used | CPU Load |   Up Time   
|   Size   | Hi/Mi/Rd/Wr |
+=+
| 54F7EA58(@n4), ip.ip.ip.ip1   | 4| 43.81 %   | 2.43 %   | 24:20:08:018
| Total: 1000  | Hi: 0   |
|   |  |   |  | 
|   Heap: 1000 | Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 0| Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 0 | Wr: 0   |
+---+--+---+--+--+--+-+
| D3A97470(@n7), ip.ip.ip.ip2   | 8| 41.88 %   | 0.27 %   | 02:26:29:576
| Total: 151536| Hi: 0   |
|   |  |   |  | 
|   Heap: 5| Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 101536   | Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 100mb | Wr: 0   |
+---+--+---+--+--+--+-+
| 6BA0FEA2(@n5), ip.ip.ip.ip3   | 8| 25.74 %   | 0.30 %   | 02:29:02:915
| Total: 151529| Hi: 0   |
|   |  |   |  | 
|   Heap: 5| Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 101529   | Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 100mb | Wr: 0   |
+---+--+---+--+--+--+-+
| E41C47FD(@n6), ip.ip.ip.ip4   | 8| 38.53 %   | 0.30 %   | 02:27:35:184
| Total: 66344 | Hi: 0   |
|   |  |   |  | 
|   Heap: 50001| Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 16343| Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 16mb  | Wr: 0   |
+---+--+---+--+--+--+-+
| D487DD7A(@n3), ip.ip.ip.ip5   | 4| 36.07 %   | 1.90 %   | 24:27:24:711
| Total: 1000  | Hi: 0   |
|   |  |   |  | 
|   Heap: 1000 | Mi: 0   |
|   |  |   |  | 
|   Off-Heap: 0| Rd: 0   |
|   |  |   |  | 
|   Off-Heap Memory: 0 | Wr: 0   |
+---+--+---+--+--+--+-+
| A30CC6D1(@n2), ip.ip.ip.ip6   | 4| 29.72 %   | 0.50 %   | 24:33:45:581
| Total: 0 | Hi: 0   |
|   |  |   |  | 

Re: Cache Configuration for a data region

2018-06-18 Thread the_palakkaran
Thanks, everything other than native vs durable is now clear to me.

I guess durable memory is pure RAM based and native persistence is
combination of both RAM and disk.

How to manipulate durable memory? Can I configure it?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache Configuration for a data region

2018-06-18 Thread slava.koptilin
Hi,

> 1. I have set setOnheapCacheEnabled(true) for every cache.
> This means they will go into java heap right? 
As of AI 2.x, the Java heap is no longer treated as data storage; it can be
used as an extra caching layer for entries you have in the off-heap memory.

> 2. I don't clearly understand the difference between durable memory and
> persistence.
Long story short, 'Durable memory' is a page-based memory architecture that
is split into pages of fixed size. Ignite 'Native persistence' is a feature
provided by Ignite which allows storing your data on disk.
Please take a look at the following pages [1] & [2].

> 3. If I have enabled persistence and also set LRU eviction policy
> correctly,
> does this mean during loading also this eviction policy will work
> and I would never get an out of memory error? 
Yes, that is correct as long as you have free space on disk, of course.

> 4 How to check if the read happened from ignite persistence(disk) or
> memory?
There is no such capability if I am not mistaken. 

[1] https://apacheignite.readme.io/docs/durable-memory
[2] https://apacheignite.readme.io/docs/distributed-persistent-store
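A sketch of enabling native persistence on the default data region, using Ignite 2.x API names (the RAM cap is illustrative; note that a persistence-enabled cluster starts inactive and must be activated):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistenceBootstrap {
    public static Ignite start() {
        DataStorageConfiguration storage = new DataStorageConfiguration();
        // Bounded RAM cap; the full data set survives on disk.
        storage.getDefaultDataRegionConfiguration()
               .setPersistenceEnabled(true)
               .setMaxSize(512L * 1024 * 1024);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);

        Ignite ignite = Ignition.start(cfg);
        ignite.cluster().active(true); // persistence-enabled clusters start inactive
        return ignite;
    }
}
```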

Thanks,
S.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: SQL cannot find data of new class definition

2018-06-18 Thread Cong Guo
Hi,

Does anyone have experience using both Cache and SQL interfaces at the same 
time? How do you solve the possible upgrade? Is my problem a bug for 
BinaryObject? Should I debug the ignite source code?

From: Cong Guo
Sent: 2018年6月15日 10:12
To: 'user@ignite.apache.org' 
Subject: RE: SQL cannot find data of new class definition

I run the SQL query only after the cache size has changed. The new data should 
be already in the cache when I run the query.


From: Cong Guo
Sent: 2018年6月15日 10:01
To: user@ignite.apache.org
Subject: RE: SQL cannot find data of new class definition

Hi,

Thank you for the reply. In my original test, I do not create a table using
SQL. I just create a cache. I think a table using the value class name is
created implicitly. I add the new field/column using ALTER TABLE before I put
new data into the cache, but I still cannot find the data of the new class in
the table with the class name.

It is easy to reproduce my original test. I use the Person class from ignite 
example.

In the old code:

CacheConfiguration<Long, Person> personCacheCfg = new
CacheConfiguration<>(PERSON_CACHE_NAME);
personCacheCfg.setCacheMode(CacheMode.REPLICATED);
personCacheCfg.setQueryEntities(Arrays.asList(createPersonQueryEntity()));
try (IgniteCache<Long, Person> personCache =
        ignite.getOrCreateCache(personCacheCfg)) {
    // add some data here
    Person p1 = new Person(…);
    personCache.put(1L, p1);
    // keep the node running and run the SQL query
}

private static QueryEntity createPersonQueryEntity() {
    QueryEntity personEntity = new QueryEntity();

    personEntity.setValueType(Person.class.getName());
    personEntity.setKeyType(Long.class.getName());

    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("id", Long.class.getName());
    fields.put("orgId", Long.class.getName());
    fields.put("firstName", String.class.getName());
    fields.put("lastName", String.class.getName());
    fields.put("resume", String.class.getName());
    fields.put("salary", Double.class.getName());
    personEntity.setFields(fields);

    personEntity.setIndexes(Arrays.asList(
        new QueryIndex("id"),
        new QueryIndex("orgId")
    ));

    return personEntity;
}

The SQL query is:
IgniteCache<Long, BinaryObject> binaryCache = personCache.withKeepBinary();
SqlFieldsQuery qry = new SqlFieldsQuery("select salary from Person");

QueryCursor<List<?>> answers = binaryCache.query(qry);
List<List<?>> salaryList = answers.getAll();
for (List<?> row : salaryList) {
    Double salary = (Double) row.get(0);
    System.out.println(salary);
}

In the new code:

I add a member to the Person class, which is “private int addOn”.

try (IgniteCache<Long, Person> personCache = ignite.cache(PERSON_CACHE_NAME)) {
    // add the new data and then check the cache size
    Person p2 = new Person(…);
    personCache.put(2L, p2);
    System.out.println("Size of the cache is: " +
        personCache.size(CachePeekMode.ALL));
}

The SQL query returns only the data of the old class (p1), but there is no
error.

I use BinaryObject in the first place because the documentation says that
BinaryObject “enables you to add and remove fields from objects of the same type”:

https://apacheignite.readme.io/docs/binary-marshaller
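As a side note, one way the binary API supports adding a field without redeploying the class is through a builder obtained from an existing object. The sketch below is an illustration only (cache name, key, and field name are assumptions from this thread), and it updates only the binary type's metadata — whether the field is visible to SQL still depends on the SQL schema:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

public class BinaryAddFieldSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Cache name is an assumption for this sketch.
            IgniteCache<Long, BinaryObject> binaryCache =
                ignite.cache("PersonCache").<Long, BinaryObject>withKeepBinary();

            // Rebuild an existing entry with an extra field; the Person class
            // itself is not changed.
            BinaryObjectBuilder builder = binaryCache.get(1L).toBuilder();
            builder.setField("addOn", 42);
            binaryCache.put(1L, builder.build());

            // The field is readable through the key-value binary API.
            System.out.println(binaryCache.get(1L).<Integer>field("addOn"));
        }
    }
}
```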

I can get the data of different class definitions using get(key), but I also 
need the SQL fields query.

IgniteCache<Long, BinaryObject> binaryCache = personCache.withKeepBinary();
BinaryObject bObj = binaryCache.get(1L);
System.out.println(bObj.type().field("firstName").value(bObj) + " " +
        bObj.type().field("salary").value(bObj));
System.out.println("" + bObj.type().field("addON").value(bObj));

BinaryObject bObj2 = binaryCache.get(2L);
System.out.println(bObj2.type().field("firstName").value(bObj2) + " " +
        bObj2.type().field("salary").value(bObj2));
System.out.println("" + bObj2.type().field("addON").value(bObj2));



Thanks,
Cong



From: Ilya Kasnacheev [mailto:ilya.kasnach...@gmail.com]
Sent: 2018年6月15日 9:37
To: user@ignite.apache.org
Subject: Re: SQL 

Re: Unsubscribe me from mailing list

2018-06-18 Thread slava.koptilin
Hello,

To unsubscribe from the user mailing list send a letter to
user-unsubscr...@ignite.apache.org with a word "Unsubscribe" without quotes
as a subject.

If you have a mailing client, follow an unsubscribe link here:
https://ignite.apache.org/community/resources.html#mail-lists

Thanks,
S.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache Configuration for a data region

2018-06-18 Thread the_palakkaran
Hi,

1. I have set setOnheapCacheEnabled(true) for every cache. Does this mean the
entries will go into the Java heap?

2. Is it possible for me to take control of this durable memory? What should
essentially be kept there, and how do I do that? I don't clearly understand
the difference between durable memory and persistence.

3. If I have enabled persistence and also set the LRU eviction policy
correctly, does this mean the eviction policy will also work during loading,
so that I would never get an out-of-memory error?

4. How do I check whether a read happened from Ignite persistence (disk) or
from memory?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: monitoring function of web console

2018-06-18 Thread dkarachentsev
Hi,

AFAIK, you cannot download the plugin separately; it's a commercial product.
You can use it for free from here [1] or purchase a paid version for internal
use.

[1] http://console.gridgain.com/

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: monitoring function of web console

2018-06-18 Thread Stanislav Lukyanov
The plugin is shipped as a part of the GridGain Enterprise and Ultimate 
editions (which is basically Apache Ignite + GridGain plugins).
The download links are here: https://www.gridgain.com/resources/download.
The description of the GridGain WebConsole is here: 
https://docs.gridgain.com/docs/web-console.

Stan

From: 胡海麟
Sent: 18 июня 2018 г. 14:40
To: user@ignite.apache.org
Subject: monitoring function of web console

Hello,

The docs says:

https://apacheignite-tools.readme.io/docs/ignite-web-console
The web console also features cluster monitoring functionality
(available separately as GridGain plugin) that shows various cache and
node metrics as well as CPU and heap usage.

I googled and failed to find anything on GridGain's homepage. Does anyone
know where I can get the GridGain plugin?

Thanks.



Re: Cache Configuration for a data region

2018-06-18 Thread slava.koptilin
Hello,

As of Apache Ignite 2.x, all data is stored in off-heap memory [1].
On-heap memory is used only for temporary operations, buffers, etc.

> Also, I have my own cache store implementations. I hope this is only used
> for read through and
> write through from and to database and not for loading or writing to
> ignite durable memory.
> Can someone confirm this too?
Yes, that is correct.

> Again is it possible to restrict the number of entries kept in the memory
> while loading data into cache ?
> The rest can be kept in the durable (disk) memory.
I think that you need to configure an eviction policy [2] and use Ignite Native
Persistence [3].
Please take into account that 'Durable Memory' [4] is not a synonym for
'Native Persistence'.

[1]
https://apacheignite.readme.io/v2.5/docs/durable-memory#section-in-memory-features
[2] https://apacheignite.readme.io/docs/evictions
[3] https://apacheignite.readme.io/v2.5/docs/distributed-persistent-store
[4] https://apacheignite.readme.io/v2.5/docs/durable-memory
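Putting the advice above together, a minimal configuration sketch might look like the following. This is an illustration, not a definitive setup: the region name, sizes, and entry limit are placeholders, and a cluster with persistence must be activated before use.

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class EvictionConfigSketch {
    public static void main(String[] args) {
        // Data region with Native Persistence [3]: data that does not fit
        // in RAM can still be read back from disk.
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("persisted")                        // placeholder name
            .setMaxSize(1024L * 1024 * 1024)             // 1 GB in RAM (placeholder)
            .setPersistenceEnabled(true);

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDataRegionConfigurations(region);

        // The LRU eviction policy [2] applies to the optional on-heap layer.
        CacheConfiguration<Long, Object> cacheCfg =
            new CacheConfiguration<Long, Object>("myCache")
                .setDataRegionName("persisted")
                .setOnheapCacheEnabled(true)
                .setEvictionPolicyFactory(
                    new LruEvictionPolicyFactory<>(100_000)); // max on-heap entries

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage)
            .setCacheConfiguration(cacheCfg);

        Ignition.start(cfg);
        // With persistence enabled, the cluster must be activated.
        Ignition.ignite().cluster().active(true);
    }
}
```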

Thanks!




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


monitoring function of web console

2018-06-18 Thread 胡海麟
Hello,

The docs says:

https://apacheignite-tools.readme.io/docs/ignite-web-console
The web console also features cluster monitoring functionality
(available separately as GridGain plugin) that shows various cache and
node metrics as well as CPU and heap usage.

I googled and failed to find anything on GridGain's homepage. Does anyone
know where I can get the GridGain plugin?

Thanks.


Re: Deleting a ticket from https://issues.apache.org/jira/projects/IGNITE/issues

2018-06-18 Thread dkarachentsev
Hi,

Not sure if it's possible to remove a ticket. Just close it with "Won't Fix"
status; that should be enough.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


If a lock is held by another node IgniteCache.isLocalLocked() appears to return incorrect results.

2018-06-18 Thread Jon Tricker
Additionally, if a lock is held by another thread, lock.tryLock() appears to
block.

This is demonstrated by the following code; see the header comment for a full
description. The error at line 85 should not be printed.

Found on Ignite 2.5.0. Reproducible on Windows and RHEL.

Note: the test is technically timing-dependent, but the sleep in the remote
thread is long enough that, on any reasonable system, the parent should check
the lock before it dies.

package igniteCacheLockTest;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.locks.Lock;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteLock;

/**
 * Test demonstrating a cache lock held by one Ignite node not being accessible
 * on a different node.
 *
 * The parent starts an Ignite node and creates a cache lock. It then creates a
 * thread which starts a second node, gets a handle to the same lock (proven by
 * the fact that the set value can be read back) and locks it.
 *
 * The parent then uses isLocalLocked() to check if the lock is held. It isn't.
 *
 * The parent also tries a tryLock(). This was done to check if maybe the lock
 * is really held but isLocalLocked() is returning incorrect information.
 * Incorrectly, the non-blocking tryLock() blocks until the thread dies and then
 * gets the lock.
 *
 * If the thread is modified to not take the lock (comment out the lock.lock()
 * line), the tryLock() gets the lock and returns immediately as expected.
 *
 * There is some interaction between the threads based on the lock location but
 * it is not working as expected.
 */
public class CacheLockTest {
    public final static String LOCKNAME = "LOCKNAME";
    public final static String CACHENAME = "CACHENAME";
    public final static int TESTVALUE = 123;

    public static void main(String[] args) {
        System.out.println("Starting test");

        // Make a uniquely named ignite node.
        IgniteConfiguration config1 = new DefaultIgniteConfig();
        config1.setIgniteInstanceName("INSTANCE1");

        // Start node.
        Ignite node1 = Ignition.start(config1);

        // Get a reference to the cache.
        IgniteCache<String, Integer> cache = node1.getOrCreateCache(CACHENAME);

        // Write a pattern to the location so we can confirm we are connected
        // to the same cache.
        cache.put(LOCKNAME, TESTVALUE);

        // Make a lock.
        Lock lock = cache.lock(LOCKNAME);

        // Check is initially unlocked.
        if (cache.isLocalLocked(LOCKNAME, true)) {
            System.out.println("Is initially locked local");
        }
        if (cache.isLocalLocked(LOCKNAME, false)) {
            System.out.println("Is initially locked remote");
        }

        // Create a remote thread.
        TestThread thread = new TestThread(LOCKNAME);

        // Run thread. Will get a handle to the lock and lock it.
        thread.start();

        try {
            // Give thread a while to run.
            Thread.sleep(5000);
        } catch (Exception e) {
            System.out.println("Could not sleep.");
        }

        // Thread should still be alive and have taken the lock. Check it.
        if (!cache.isLocalLocked(LOCKNAME, false)) {
            System.out.println("ERROR. Thread failed to take lock");
        }

        // To confirm it is really not locked try to take the lock.
        System.out.println("Parent about to do non-blocking

Deleting a ticket from https://issues.apache.org/jira/projects/IGNITE/issues

2018-06-18 Thread Raghav
Hello Team,

This is not regarding an Ignite issue. I need to delete a ticket
created in https://issues.apache.org/jira/projects/IGNITE/issues. As I do
not have admin rights, I am not able to delete the ticket.

Could you please let me know whom I should contact for deletion of Ignite
tickets created in https://issues.apache.org/jira/projects/IGNITE/issues.

Thanks in Advance!!!




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Cache Configuration for a data region

2018-06-18 Thread the_palakkaran
Hi,

I have multiple on-heap caches attached to a data region with a maximum
durable-memory size of 1GB. I have also specified 512MB of heap memory.

Is it possible for me to set this cache to take a maximum of 100MB and store
the remainder in durable memory? Otherwise, won't it use the entire 2GB if new
entries are put into it?

Or can this only be done using eviction policies?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Unsubscribe me from mailing list

2018-06-18 Thread Harsh Mishra
-- 
Regards,
*Harsh Mishra** | Solution Architect*
Quovantis Technologies
Mobile: +91-9958311308
Skype ID: harshmishra1984
www.quovantis.com


Re: Distributed Database as best choice for persistence

2018-06-18 Thread piyush
cool. Does it maintain transactions ?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Distributed Database as best choice for persistence

2018-06-18 Thread dkarachentsev
Hi,

Probably the best choice would be Cassandra, as Ignite has out-of-the-box
integration with it [1].

[1]
https://apacheignite-mix.readme.io/v2.5/docs/ignite-with-apache-cassandra

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Distributed Database as best choice for persistence

2018-06-18 Thread piyush
Which is the best choice of distributed persistence for Ignite if we don't
want to use native persistence?

Riak? Cassandra? DynamoDB?

Has anyone tried this? What was the experience?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/