Re: Defrag?

2021-06-28 Thread Ilya Kasnacheev
Hello!

Is it the WAL (wal/) that is growing, or checkpoint space (db/)? If the
latter, are any specific caches growing unbounded?

If the latter, you can try creating a new cache, moving the relevant data
into this new cache, switching over to it, and then dropping the old cache;
that should reclaim the space.
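The copy-and-drop approach above can be sketched roughly as follows. This is an untested sketch, not Ignite documentation: the cache names, value types, and the `compact` helper are illustrative, and it assumes a running Ignite node with the relevant caches.

```java
import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ScanQuery;

public class CompactCache {
    // Copies all entries from the fragmented cache into a freshly created one,
    // then drops the old cache to release its on-disk checkpoint space.
    static void compact(Ignite ignite, String oldName, String newName) {
        IgniteCache<Object, Object> oldCache = ignite.cache(oldName).withKeepBinary();
        // In practice, pass the same CacheConfiguration the old cache used.
        IgniteCache<Object, Object> newCache = ignite.getOrCreateCache(newName);

        // A ScanQuery iterates the whole cache, entry by entry.
        for (Cache.Entry<Object, Object> e : oldCache.query(new ScanQuery<>()))
            newCache.put(e.getKey(), e.getValue());

        // Application code should switch to newName before this point.
        oldCache.destroy(); // reclaims the space held by the old cache
    }
}
```

For large caches, an IgniteDataStreamer would move the entries considerably faster than individual put() calls.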

Regards,
-- 
Ilya Kasnacheev


Mon, 28 Jun 2021 at 17:34, Ryan Trollip :

> Is this why the native disk storage just keeps growing and does not shrink
> after we delete from Ignite using SQL?
> We are up to 80 GB on disk now on some instances. We implemented a custom
> archiving feature to move older data out of the Ignite cache to a PostgreSQL
> database, but when we delete that data from the Ignite instance, the disk
> space Ignite is using stays the same, and then keeps growing and
> growing.
>
> On Thu, Jun 24, 2021 at 7:10 PM Denis Magda  wrote:
>
>> Ignite fellows,
>>
>> I remember some of us worked on the persistence defragmentation features.
>> Has it been merged?
>>
>> @Valentin Kulichenko  probably you know
>> the latest state.
>>
>> -
>> Denis
>>
>> On Thu, Jun 24, 2021 at 11:59 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> You can probably drop the entire cache and then re-populate it via
>>> loadCache(), etc.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Wed, 23 Jun 2021 at 21:47, Ryan Trollip :
>>>
>>>> Thanks, Ilya, we may have to consider moving back to non-native storage
>>>> and caching more selectively as the performance degrades when there is a
>>>> lot of write/delete activity or tables with large amounts of rows. This is
>>>> with SQL with indexes and the use of query plans etc.
>>>>
>>>> Is there any easy way to rebuild the entire native database after
>>>> hours? E.g. with a batch run on the weekends?
>>>>
>>>> On Wed, Jun 23, 2021 at 7:39 AM Ilya Kasnacheev <
>>>> ilya.kasnach...@gmail.com> wrote:
>>>>
>>>>> Hello!
>>>>>
>>>>> I don't think there's anything ready to use, but "killing performance"
>>>>> from fragmentation is also not something reported too often.
>>>>>
>>>>> Regards,
>>>>> --
>>>>> Ilya Kasnacheev
>>>>>
>>>>>
>>>>> Wed, 16 Jun 2021 at 04:39, Ryan Trollip :
>>>>>
>>>>>> We see continual very large growth to data with ignite native. We
>>>>>> have a very chatty use case that's creating and deleting stuff often. The
>>>>>> data on disk just keeps growing at an explosive rate. So much so we 
>>>>>> ported
>>>>>> this to a DB to see the difference and the DB is much smaller. I was
>>>>>> searching to see if someone has the same issue. This is also killing
>>>>>> performance.
>>>>>>
>>>>>> Found this:
>>>>>>
>>>>>> https://cwiki.apache.org/confluence/display/IGNITE/IEP-47%3A+Native+persistence+defragmentation
>>>>>>
>>>>>> Apparently, there is no auto-rebalancing of pages? or cleanup of
>>>>>> pages?
>>>>>>
>>>>>> Has anyone implemented a workaround to rebuild the cache and indexes
>>>>>> say on a weekly basis to get it to behave reasonably?
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>


Re: Defrag?

2021-06-24 Thread Ilya Kasnacheev
Hello!

You can probably drop the entire cache and then re-populate it via
loadCache(), etc.

Regards,
-- 
Ilya Kasnacheev


Wed, 23 Jun 2021 at 21:47, Ryan Trollip :

> Thanks, Ilya, we may have to consider moving back to non-native storage
> and caching more selectively as the performance degrades when there is a
> lot of write/delete activity or tables with large amounts of rows. This is
> with SQL with indexes and the use of query plans etc.
>
> Is there any easy way to rebuild the entire native database after hours?
> e.g. with a batch run on the weekends?
>
> On Wed, Jun 23, 2021 at 7:39 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> I don't think there's anything ready to use, but "killing performance"
>> from fragmentation is also not something reported too often.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Wed, 16 Jun 2021 at 04:39, Ryan Trollip :
>>
>>> We see continual very large growth to data with ignite native. We have a
>>> very chatty use case that's creating and deleting stuff often. The data on
>>> disk just keeps growing at an explosive rate. So much so we ported this to
>>> a DB to see the difference and the DB is much smaller. I was searching to
>>> see if someone has the same issue. This is also killing performance.
>>>
>>> Found this:
>>>
>>> https://cwiki.apache.org/confluence/display/IGNITE/IEP-47%3A+Native+persistence+defragmentation
>>>
>>> Apparently, there is no auto-rebalancing of pages? or cleanup of pages?
>>>
>>> Has anyone implemented a workaround to rebuild the cache and indexes say
>>> on a weekly basis to get it to behave reasonably?
>>>
>>> Thanks
>>>
>>


Re: Defrag?

2021-06-23 Thread Ilya Kasnacheev
Hello!

I don't think there's anything ready to use, but "killing performance" from
fragmentation is also not something reported too often.

Regards,
-- 
Ilya Kasnacheev


Wed, 16 Jun 2021 at 04:39, Ryan Trollip :

> We see continual very large growth to data with ignite native. We have a
> very chatty use case that's creating and deleting stuff often. The data on
> disk just keeps growing at an explosive rate. So much so we ported this to
> a DB to see the difference and the DB is much smaller. I was searching to
> see if someone has the same issue. This is also killing performance.
>
> Found this:
>
> https://cwiki.apache.org/confluence/display/IGNITE/IEP-47%3A+Native+persistence+defragmentation
>
> Apparently, there is no auto-rebalancing of pages? or cleanup of pages?
>
> Has anyone implemented a workaround to rebuild the cache and indexes say
> on a weekly basis to get it to behave reasonably?
>
> Thanks
>


Re: Ignite client hangs forever when performing cache operation with 6 servers, works with 3 servers

2021-06-23 Thread Ilya Kasnacheev
Hello!

Unfortunately, these links are dead (404).

If it's still relevant, please consider re-uploading.

Regards,
-- 
Ilya Kasnacheev


Mon, 14 Jun 2021 at 21:59, mapeters :

> Problem: Ignite client hangs forever when performing a cache operation. We
> have 6 Ignite servers running; the problem goes away when we reduce this to
> 3. What effect does expanding/reducing the server cluster have that could
> cause this?
>
> See attached for sample stack trace of hanging client thread, server config
> snippet, client config snippet, and cache key snippet. From looking through
> the logs, there essentially seem to be various TCP communication errors
> such
> as the attached client and server errors. We tried increasing the (client)
> failure detection timeout values as suggested by the server error message,
> but that just made system startup hang for a long time (close to an hour).
>
> Usage:
>
> We have a large number of data objects (64k-400M) stored within HDF5 files
> and process hundreds of millions of records a day, with total data
> throughput ranging from 500 GB to 10 TB a day. We utilize Ignite as an
> in-memory distributed cache in front of the process that interacts with the
> HDF5 files.
>
> Configuration:
>
> 1. Ignite version is 2.9.
> 2. The configuration is a 6 node ignite cluster using a partitioned cache.
> 3. Ignite’s persistence is disabled and we wrote a cache store
> implementation to persist the cache entries to the backing hdf5 files.
> 4. Ignite is configured in a write behind / read through manner.
> 5. There are four primary caches split up by data type to reduce amount of
> traffic on any one cache. The caches are all configured the same except for
> write behind properties and the data types within each cache to help manage
> how much data is in a specific cache.
> 6. The cache key is a compound object of path to the file and then a group
> /
> locator string within the file.
>
> Hardware:
>
> 1. In our failure site, there are 6 physical systems running Red Hat
> Hyperconverged Infrastructure.
> 2. Each physical node had a pinned VM running apache ignite. The VM has
> 128GB of memory. Ignite is configured with 16GB of heap memory, and 64GB of
> off heap cache.
> 3. There are 6 other VMs, each running 3 processes that all store to
> ignite.
> 4. There is a single VM that fronts the HDF5 files that Ignite talks to for
> persistent storage.
>
> hangingStackTrace.txt
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t3178/hangingStackTrace.txt>
>
> serverConfig.xml
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t3178/serverConfig.xml>
>
> clientConfig.xml
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t3178/clientConfig.xml>
>
> DataStoreKey.java
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t3178/DataStoreKey.java>
>
>
> serverErrors.txt
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t3178/serverErrors.txt>
>
> clientErrors.txt
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t3178/clientErrors.txt>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Restoring Ignite cluster from the persistent store

2021-06-23 Thread Ilya Kasnacheev
Hello!

Why do you need to change the consistent id of the restored node at all?

Regards,
-- 
Ilya Kasnacheev


Thu, 17 Jun 2021 at 13:52, Naveen :

> Hi
>
> I am using Ignite 2.8.1 and am trying to bring up a new cluster by restoring
> the persistent store, WAL, and work directory of an existing cluster, which
> I was able to do.
> As part of this, I had to rename the folder names to match the consistent id
> of the destination node, etc.
> I had to delete the metastorage folder under the persistent store to bring
> up the node, and once all the nodes are up, I activate the cluster and it
> works fine. But this way, I am not able to retain all the users which I
> created on the source cluster.
> And if I don't delete the metastorage folder, it tries to connect to the
> source cluster, not the new cluster, so I had to delete this folder to make
> it work.
> Is there any way I can retain all the users that were created on the source
> cluster while bringing up the new cluster by restoring the file system data
> of the source cluster?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Conditional data Preloading from third party persistence

2021-06-22 Thread Ilya Kasnacheev
Hello!

I guess that your cache store implementation is using BinaryObject to
populate cache instead of POJO. You can try ((BinaryObject)v).deserialize().

Regards,
-- 
Ilya Kasnacheev
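The suggestion above might look like the fragment below in the cache-loading code. This is a hedged sketch, not a complete program: it assumes a started Ignite instance `ignite`, and the "CustomerCache" name and Customer type are taken from the snippet quoted below.

```java
IgniteCache<String, Object> cache = ignite.cache("CustomerCache");

cache.loadCache((IgniteBiPredicate<String, Object>) (k, v) -> {
    // The store may hand back either the deserialized POJO or its binary form,
    // so handle both before touching Customer fields.
    Customer c = v instanceof BinaryObject
        ? ((BinaryObject) v).deserialize()
        : (Customer) v;

    return "".equalsIgnoreCase(c.getFirstName());
});
```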


Mon, 21 Jun 2021 at 14:55, Kamlesh Joshi :

> Hi Igniters,
>
>
>
> I am trying to preload certain data from third-party persistence, but am
> getting the below error while doing that. I have been using the below
> snippet for the same. Can anyone advise if I am doing something wrong here?
> Are any specific changes needed on the data model side?
>
>
>
> IgniteCache cache = ignite.cache(cacheName);
> cache.loadCache(new IgniteBiPredicate() {
>     @Override
>     public boolean apply(String k, Customer v) {
>         if ("".equalsIgnoreCase(v.getFirstName())) {
>             return true;
>         } else {
>             return false;
>         }
>     }
> });
>
>
>
>
>
> Custom data model jars are already available on cluster class path.
>
>
>
>
>
> org.apache.ignite.IgniteException: Failed to load cache: CustomerCache
> at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1858) ~[ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:568) ~[ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6820) [ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:562) [ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:491) [ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) [ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1125) [ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1923) [ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1569) [ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1197) [ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:127) [ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1093) [ignite-core-2.7.6.jar:2.7.6]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_271]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_271]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_271]
> Caused by: org.apache.ignite.IgniteException: Failed to load cache: CustomerCache
> at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:1029) [ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJob.localExecute(GridCacheAdapter.java:5681) ~[ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJobV2.localExecute(GridCacheAdapter.java:5725) ~[ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter$TopologyVersionAwareJob.execute(GridCacheAdapter.java:6361) ~[ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.compute.ComputeJobAdapter.call(ComputeJobAdapter.java:132) ~[ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1855) ~[ignite-core-2.7.6.jar:2.7.6]
> ... 14 more
> Caused by: org.apache.ignite.IgniteCheckedException: Failed to load cache: CustomerCache
> at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:543) ~[ignite-core-2.7.6.jar:2.7.6]
> at org.apache.ignite.internal.processors.cache.distr

Re: Scan query ClassNotFoundException

2021-06-21 Thread Ilya Kasnacheev
Hello!

Key/value classes need to be manually deployed to the server node, but the
listener code itself may be peer-loaded from the client.

Regards,
-- 
Ilya Kasnacheev


Fri, 18 Jun 2021 at 10:31, ict.management.trexon <
ict.management.tre...@gmail.com>:

> If that were so, it would mean that peerClassLoading cannot work with
> scan queries, as the class (a POJO in this case) would have to be known on
> the
> server node side.
> Instead, like this, it works:
> @Override
> public boolean apply(Integer e1, BinaryObject e2) {
>     SimplePojo sp = e2.deserialize(SimplePojo.class.getClassLoader());
>     return sp.getId().equals(idToFind);
> }
> Note this call:
> e2.deserialize(SimplePojo.class.getClassLoader());
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Scan query ClassNotFoundException

2021-06-17 Thread Ilya Kasnacheev
Hello!

I guess the SimplePojo class is loaded in a different classloader which is
not Ignite's classloader. You can start Ignite with an IgniteConfiguration
that sets .setClassLoader(SimplePojo.class.getClassLoader()).

Regards.
-- 
Ilya Kasnacheev


Thu, 17 Jun 2021 at 16:48, ict.management.trexon <
ict.management.tre...@gmail.com>:

> No, the POJO is only on the client side.
> Yes, peerClassLoadingEnabled is set.
>
> I think it has something to do with the classloader running the scan query.
> If I use the POJO class in the signature of the "apply" method, it gives a
> ClassNotFoundException error;
> on the other hand, if I deserialize to the class in the body of the "apply"
> method, it works.
>
> THIS WORKS:
> @Override
> public boolean apply(Integer e1, BinaryObject e2) {
>     SimplePojo sp = e2.deserialize(SimplePojo.class.getClassLoader());
>     return sp.getId().equals(idToFind);
> }
>
> THIS DOES NOT WORK:
> @Override
> public boolean apply(Integer e1, SimplePojo e2) {
>     return e2.getId().equals(idToFind);
> }
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: SQL EXPLAIN ANALYZE

2021-06-17 Thread Ilya Kasnacheev
Hello!

This is unfortunate. Do we have a ticket for it?

As a workaround, one may set query warning timeout to 1 ms and then see
these detailed messages in the log.

Regards,
-- 
Ilya Kasnacheev


Thu, 17 Jun 2021 at 14:31, :

> It is indeed true that neither plain EXPLAIN nor EXPLAIN ANALYZE shows
> the number of scanned rows. This information is only available when Ignite
> logs the warning about a query exceeding the time limit.
>
> Regards,
> Ivan
>
> From: Ilya Kasnacheev 
> Reply-To: "user@ignite.apache.org" 
> Date: Thursday, June 17, 2021 at 1:04 PM
> To: "user@ignite.apache.org" 
> Subject: Re: SQL EXPLAIN ANALYZE
>
> Hello!
>
> What happens if you just do EXPLAIN ?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 11 Jun 2021 at 14:34, Devakumar J <mail2jdevaku...@gmail.com>:
> Hi,
>
> In general, I can see in the server logs that if any query executes for too
> long, the log shows a long-running-query warning with information about
> scan count and index usage.
>
> But if I execute EXPLAIN ANALYZE  from a SQL client, it returns two
> rows as per the link below:
>
>
> https://apacheignite-sql.readme.io/docs/performance-and-debugging#using-explain-statement
>
> But I don't see the scan count returned in this. Is there any way to enable
> it from the Ignite config?
>
> Thanks & Regards,
> Devakumar J
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
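The workaround mentioned above, lowering the long-query warning timeout so that every query is logged with its scan counts, can be expressed as a configuration fragment. This is a sketch only; the 1 ms value follows the suggestion in the thread, and the exact property location may differ between Ignite versions.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Log every SQL query running longer than 1 ms as a "long running"
         query, including scan count and index usage details. -->
    <property name="longQueryWarningTimeout" value="1"/>
</bean>
```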


Re: peerClassLoadingEnabled not work

2021-06-17 Thread Ilya Kasnacheev
Hello!

Peer class loading will not peer-load entities (key/value types); it will
only peer-load compute units, listeners, services, etc.

Regards,
-- 
Ilya Kasnacheev


Wed, 16 Jun 2021 at 01:06, Vladislav Shipugin :

> Hello!
>
> I’m trying to use DataStreamer and have ClassNotFoundException, but
> peerClassLoadingEnabled=true. What's wrong?
>
> My error log:
> https://gist.github.com/Ship/b56b5876ba95c3c5ca12c885f8e9e4c5
>
>
> --
> Best regards,
>
> Vladislav Shipugin
>
> E: vladshipu...@gmail.com
> T: +7 926 103 09 55
>


Re: SQL EXPLAIN ANALYZE

2021-06-17 Thread Ilya Kasnacheev
Hello!

What happens if you just do EXPLAIN ?

Regards,
-- 
Ilya Kasnacheev


Fri, 11 Jun 2021 at 14:34, Devakumar J :

> Hi,
>
> In general, I can see in the server logs that if any query executes for too
> long, the log shows a long-running-query warning with information about
> scan count and index usage.
>
> But if I execute EXPLAIN ANALYZE  from a SQL client, it returns two
> rows as per the link below:
>
>
> https://apacheignite-sql.readme.io/docs/performance-and-debugging#using-explain-statement
>
> But I don't see the scan count returned in this. Is there any way to enable
> it from the Ignite config?
>
> Thanks & Regards,
> Devakumar J
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Scan query ClassNotFoundException

2021-06-17 Thread Ilya Kasnacheev
Hello!

Do you actually have the POJO class on the server side? Do you have peer
class loading enabled?

Regards,
-- 
Ilya Kasnacheev


Fri, 11 Jun 2021 at 11:37, ict.management.trexon <
ict.management.tre...@gmail.com>:

> Hi, why does the scan query return this error?
> I have a cache ; the POJO is a simple entity from a jar
> dependency.
> It extends an abstract class that implements an interface extending
> Serializable.
> I followed the example on this page:
> https://ignite.apache.org/docs/latest/key-value-api/using-scan-queries
> The IgniteBiPredicate is a concrete class, not a lambda.
> If I run an IgniteRunnable with the POJO class, no error is returned and all
> works well. Why does the scan query fail?
> Thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Enable Native persistence only for one node of the cluster

2021-06-17 Thread Ilya Kasnacheev
Hello!

"No".

What you can do, you can have a single shared persistent data region on
each node (you may even not put any caches in it) and also a large
persistent region on a subset (or just one) of nodes.

Then you need to specify node filter and data region name to those caches
you want to make persistent, and confine to these persistent node(s).

A bit of context: mixed mode where some of the nodes are in-memory was
possible in theory, but it was very sparsely tested and would lead to a lot
of problems in practice. All nodes need to have at least some persistence
in order to hold baseline topology/metastore data between restarts.

Regards,
-- 
Ilya Kasnacheev


Mon, 14 Jun 2021 at 15:45, Krish :

> Is it possible to have a cluster topology where native persistence is
> enabled
> only for one node and all other nodes use an in-memory cache store without
> native persistence?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
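The layout described in the answer above (a small persistent region shared by every node, plus a large persistent region confined to selected nodes) might look like the following Spring XML fragment. This is a sketch: the region and cache names are illustrative, and `com.example.PersistentNodeFilter` is a hypothetical user-provided predicate, not an Ignite class.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Small persistent region on every node; holds baseline
                 topology/metastore data between restarts. -->
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="name" value="small-shared"/>
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
            <property name="dataRegionConfigurations">
                <list>
                    <!-- Large persistent region; only used on the chosen node(s). -->
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="name" value="big-persistent"/>
                        <property name="persistenceEnabled" value="true"/>
                    </bean>
                </list>
            </property>
        </bean>
    </property>
    <property name="cacheConfiguration">
        <list>
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <property name="name" value="persistentCache"/>
                <property name="dataRegionName" value="big-persistent"/>
                <!-- Hypothetical predicate confining this cache to the
                     persistent node(s). -->
                <property name="nodeFilter">
                    <bean class="com.example.PersistentNodeFilter"/>
                </bean>
            </bean>
        </list>
    </property>
</bean>
```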


Re: Intermittent Error

2021-06-17 Thread Ilya Kasnacheev
Hello!

What is the connection string of your JDBC connection?

Regards.
-- 
Ilya Kasnacheev


Wed, 16 Jun 2021 at 04:20, Moshe Rosten :

> Greetings,
>
> I'm attempting to retrieve a list of values from this query line:
>
> List> res = conn.sqlQuery("select * from  TWSources");
>
> It works sometimes perfectly fine, and at other times it gives me this
> error:
>
> org.apache.ignite.internal.client.thin.ClientServerError: Ignite failed to 
> process request [5]: Failed to set schema for DB connection for thread 
> [schema=myCache] (server status code [1])
>
> What could the problem be?
>
>
>


Re: AtomicReference issue with different userVersions

2021-06-17 Thread Ilya Kasnacheev
Hello!

Tricky exception message leading to usability issues is a bug all right.

Especially if it is possible to check for this case earlier and give a
proper warning/not face the issue.

Regards,
-- 
Ilya Kasnacheev


Fri, 11 Jun 2021 at 10:59, tanshuai :

> I don't think we should treat it as a bug. But it does introduce tricky
> exceptions under certain circumstances.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Split brain in 2.9.0?

2021-06-11 Thread Ilya Kasnacheev
Hello!

It looks like your nodes have re-joined with different consistent IDs/data
dirs, and thus some of your data was not accessible.

Please make sure that your nodes preserve their consistent IDs/data dirs
across restarts.

It also looks like your one failing node formed this topology, as opposed
to the two surviving ones, which seem to have restarted and rejoined it.
What's the specific ordering of events?

Regards,
-- 
Ilya Kasnacheev
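Pinning the consistent ID so it survives restarts can be done explicitly in each node's configuration; a sketch, with illustrative values:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Fix this node's consistent ID so a restart reuses the same
         persistence directory instead of generating a new one. -->
    <property name="consistentId" value="node-1"/>
    <!-- Keeping the work directory stable across restarts matters
         for the same reason. -->
    <property name="workDirectory" value="/var/lib/ignite/work"/>
</bean>
```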


Fri, 11 Jun 2021 at 04:30, Devin Bost :

> We encountered a situation after a node unexpectedly went down and came
> back up.
> After it came back, none of our transactions were going through (due to
> rollbacks), and we started getting a lot of exceptions in the logs. (I've
> added the exceptions at the bottom of this message.)
> We were getting "Failed to execute the cache operation (all partition
> owners have left the grid, partition data has been lost)", so we tried to
> reset the partitions (since these are persistent caches), and the commands
> succeeded, but we kept seeing errors.
>
> We checked the cluster state, and it looks like we have two nodes that
> came up with different IDs.
>
> Cluster state: active
> Current topology version: 1170
> Baseline auto adjustment disabled: softTimeout=30
> Current topology version: 1170 (Coordinator:
> ConsistentId=1455b414-5389-454a-9609-8dd1d15a2430, Order=1)
> Baseline nodes:
> ConsistentId=1a0aa611-58b7-479a-b1e6-735e31f87ed9, State=ONLINE,
> Order=1169
> ConsistentId=92bc8407-30f1-433d-9c32-5eeb759c73be, State=OFFLINE
> ConsistentId=b5875ab9-7923-46c9-b3f3-1550455a24e5, State=OFFLINE
>
> 
> Number of baseline nodes: 3
> Other nodes:
> ConsistentId=1455b414-5389-454a-9609-8dd1d15a2430, Order=1
> ConsistentId=5e8f3b03-aa20-45aa-892a-37e988e3741f, Order=2
>
>
> Could this be a split-brain scenario?
>
> Here's the more complete logs:
>
> javax.cache.CacheException: class
> org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
> Failed to execute the cache operation (all partition owners have left the
> grid, partition data has been lost) [cacheName=propensity-customer,
> partition=430, key=com.company.PropensityKey [idHash=362342248,
> hash=42458921, customerId=142045188, variant=MODEL_A]]
> at
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1270)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:2083)
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:1110)
> at
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:676)
> at
> org.apache.ignite.internal.processors.platform.client.cache.ClientCacheGetRequest.process(ClientCacheGetRequest.java:41)
> at
> org.apache.ignite.internal.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:99)
> at
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:202)
> at
> org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:56)
> at
> org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
> at
> org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
> at
> org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at
> org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: class
> org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
> Failed to execute the cache operation (all partition owners have left the
> grid, partition data has been lost) [cacheName=propensity-customer,
> partition=430, key=com.company.PropensityKey [idHash=362342248,
> hash=42458921, customerId=142045188, variant=MODEL_A]]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.validateKey(GridDhtTopologyFutureAdapter.java:209)
> at
> org.apache.ignite.internal.processors.ca

Re: SQL EXPLAIN ANALYZE

2021-06-11 Thread Ilya Kasnacheev
Hello!

I can remember seeing scan count in regular EXPLAINs.

Regards,
-- 
Ilya Kasnacheev


Fri, 11 Jun 2021 at 08:22, Devakumar J :

> Hi,
>
> I see the ANALYZE syntax doesn't show the scan count.
>
> Is this not supported in ignite?
>
> Thanks & Regards,
> Devakumar J
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: unsubscibe

2021-06-10 Thread Ilya Kasnacheev
Hello!

You need to send a message to user-unsubscr...@ignite.apache.org with the
subject "Unsubscribe".

Regards,
-- 
Ilya Kasnacheev


Thu, 10 Jun 2021 at 17:06, pinak sawhney :

>
>


Re: Namespace and DsicoverSpi Properties for Ignite running on Kubernetes

2021-06-10 Thread Ilya Kasnacheev
Hello!

I don't think that you need to have exactly the same IP finder; however,
due to how K8S works, your nodes outside K8S may not be able to connect to
nodes within K8S (including thick clients).

Regards,
-- 
Ilya Kasnacheev


Wed, 26 May 2021 at 23:58, PunxsutawneyPhil3 :

> I have two questions regarding how to set up Ignite in Kubernetes.
>
> Do all nodes need to be in the same namespace? E.g., if I have a thick
> client
> and a server node, do both need to be in the same namespace to form a
> cluster?
> From my research I think the answer is yes, they need to be in the same
> namespace, but I have not found any definitive documentation.
>
> Do both the client and server nodes need to be running the
> TcpDiscoveryKubernetesIpFinder, or can nodes use a mix of the
> TcpDiscoveryKubernetesIpFinder and the static IP finder?
> From my research I am fairly confident that all nodes must be running with
> the TcpDiscoveryKubernetesIpFinder, but again I have not found any
> definitive
> documentation.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Restart ignite on segmentation

2021-06-10 Thread Ilya Kasnacheev
Hello!

It should be OK.

Ignite tests start, stop and segment thousands of nodes in a single JVM.

Regards,
-- 
Ilya Kasnacheev


Thu, 10 Jun 2021 at 14:06, jenny winsor :

> Is it okay to start Ignite again on a segmentation without restarting the
> JVM? StopNodeFailureHandler will just stop the Ignite instance running
> locally. I am running in embedded mode, so I do not want to crash the
> server. I've read different opinions on this; is there something I should
> be aware of?
>
>


Re: Exception on CacheEntryProcessor invoke (2.10.0)

2021-06-10 Thread Ilya Kasnacheev
Hello!

Unfortunately, I can't see the stack trace that you are referring to, but
since you did not see the issue before, it may be
https://issues.apache.org/jira/browse/IGNITE-14856

There's a work-around since the bug will only manifest when cache is
defined in client nodes' configuration but not in server nodes'.

Regards,
-- 
Ilya Kasnacheev


Tue, 25 May 2021 at 16:04, ihalilaltun :

> Hi,
>
> here is the debug log  ignite.zip
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2515/ignite.zip>
>
> In the meantime I'll try to simplify the use case as you suggested.
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Bug in GridCacheWriteBehindStore

2021-06-10 Thread Ilya Kasnacheev
Hello!

I guess so. I think you should file a ticket against Ignite JIRA.

Regards,
-- 
Ilya Kasnacheev


Wed, 9 Jun 2021 at 20:26, gigabot :

> There's a bug in GridCacheWriteBehindStore in the flusher method.
>
>
> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStore.java#L674
>
> The logic there states that if the flush thread count is not a power of 2,
> it
> performs some math that is not guaranteed to return a positive number. For
> example, if you pass this string as a key, it returns a negative number:
>  accb2e8ea33e4a89b4189463cacc3c4e
>
> and then throws an ArrayIndexOutOfBoundsException when looking up the
> thread.
>
> I'm surprised this bug has been there so long; I guess people are not
> setting their thread counts to non-powers-of-2.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
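The failure mode described above can be reproduced with plain Java arithmetic. This standalone sketch (class and method names are mine, not Ignite's) contrasts the unsafe `hashCode() % n` index with `Math.floorMod`, which always yields a valid array index:

```java
public class FlusherIndexDemo {
    // Mimics choosing a flusher thread by key hash when the thread count
    // is not a power of two: plain % can yield a negative index.
    static int buggyIndex(String key, int threads) {
        return key.hashCode() % threads;
    }

    // Math.floorMod always returns a value in [0, threads), so an array
    // lookup with this index cannot go out of bounds.
    static int safeIndex(String key, int threads) {
        return Math.floorMod(key.hashCode(), threads);
    }

    public static void main(String[] args) {
        // "polygenelubricants" is a well-known string whose hashCode()
        // is Integer.MIN_VALUE, the worst case for sign handling.
        String key = "polygenelubricants";

        System.out.println(buggyIndex(key, 3)); // -2: would crash an array lookup
        System.out.println(safeIndex(key, 3));  // 1: always a valid index
    }
}
```

Keys whose hash is non-negative behave identically under both methods, which is why the bug only surfaces for some keys and only with non-power-of-two thread counts.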
>


Re: Programmatically triggering of write behind

2021-06-09 Thread Ilya Kasnacheev
Hello!

You may
call 
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore#forceFlush
when deciding to evict some entries, assuming you are doing it manually.

Regards,
-- 
Ilya Kasnacheev


Wed, 9 Jun 2021 at 11:21, r_s :

> Hello All,
> I am running a partitioned cache with native persistence and write-behind
> to
> a DB. Because the DB is fairly slow, I decided to use write-behind, not
> write-through. For many reasons, mainly performance, it makes sense to
> also use native persistence instead of only in-memory caching.
> In order to regularly clean up the cache, I decided to implement time-based
> and state-based eviction on the cache. For time-based eviction I used
> cache.setExpiryPolicyFactory().
> State based eviction is implemented by a service that deletes an entry from
> the cache as soon as a certain field has reached a final state.
> I experience the following problem: Sometimes the state changes faster than
> write behind, hence the final state of my cache entry will not be written
> behind to the DB.
> Thus my questions:
> - Is there any way to programmatically trigger a write behind on a cache?
> - Is there maybe any Ignite internal option of marking a cache entry as
> expired as soon as a certain state is reached?
> - Will the ExpiryPolicy take into account the configured write-behind?
> Meaning, will it trigger a write behind before deleting the entry if it has
> not been written into the DB before?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Sorting result of ScanQuery

2021-06-09 Thread Ilya Kasnacheev
Hello!

It sounds like you need some data normalization plus SQL secondary indexes.
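For illustration, "normalization plus secondary indexes" for a nested `Map` field could look like the following Ignite SQL sketch; the table and column names are invented for the example, not taken from the thread:

```sql
-- Hypothetical normalized layout: one row per map entry
-- instead of a Map nested inside the value object.
CREATE TABLE person_attr (
  person_id  VARCHAR,
  attr_name  VARCHAR,
  attr_value VARCHAR,
  PRIMARY KEY (person_id, attr_name)
);

-- Secondary index so filtering/sorting on map values is served
-- server-side rather than by scanning.
CREATE INDEX idx_person_attr ON person_attr (attr_name, attr_value);

-- Server-side filtering and ordering on a "map value":
SELECT person_id
FROM person_attr
WHERE attr_name = 'score'
ORDER BY attr_value DESC;
```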

Regards,
-- 
Ilya Kasnacheev


чт, 3 июн. 2021 г. в 17:11, Taner Ilyazov :

> That's my concern, because the requirement I have is persisting an Object
> with a complex nested structure which can't be changed. I mean the class
> files can not be changed. Creating Data transfer objects and mapping
> between them is fine. But what we want to achieve is a really high write
> rate and the ability later to read data with filtering and sorting on the
> values of the hash map. I'm not sure how to achieve that without too much
> overhead.
>
> On Thu, 3 Jun 2021 at 15:29, Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> You can store collections in Ignite, the challenge is they’re effectively
>> invisible to SQL. In general it’s easiest to work with data in relational
>> structure. Ignite isn’t a document database.
>>
>> On 3 Jun 2021, at 12:52, Taner Ilyazov  wrote:
>>
>> Okay, but since the nested object structure that I have contains a
>> Map, for which the idea is to have dynamic values, I'm not
>> sure how it will be handled. Do I need to create a separate table to do the
>> mapping of said Map<>? Couldn't find an example mapping a query entity
>> entry to a parameterized value.
>>
>> On Wed, 26 May 2021 at 17:01, Stephen Darlington <
>> stephen.darling...@gridgain.com> wrote:
>>
>>> A scan isn’t ordered. As you suspect, the way to order queries in Ignite
>>> is to use SQL.
>>>
>>> You don’t need to use annotations to define your SQL fields, indexes,
>>> etc. A slightly more verbose way is to use Query Entities (indexes
>>> <https://ignite.apache.org/docs/latest/SQL/indexes#configuring-indexes-using-query-entities>
>>> ).
>>>
>>> On 26 May 2021, at 14:24, Taner Ilyazov  wrote:
>>>
>>> Hello everyone,
>>>
>>> I'm new to the community and fairly new to Apache Ignite. I have a
>>> question for which I couldn't find a confirmation if it's possible or not.
>>>
>>> I have a use case where I need to persist a certain POJO to an ignite
>>> cluster. The POJO can not be changed, so adding @SqlQueryField to it's
>>> fields is not possible. Creating a data transfer object is an option, but I
>>> think adding mapping from/to the actual POJO will result in too much
>>> overhead, since performance requirements are really high.
>>>
>>> For now I'm using ScanQuery, but I could not find a way to sort the
>>> result based on a field value. So my main question is if it's possible and
>>> if not, what other options are there because the amount of data in question
>>> is too much for sorting on client side.
>>>
>>> If I take the SQL approach and introduce the mapping overhead between
>>> the DTO and POJO can I achieve server-side sorting on multiple nodes,
>>> keeping in mind that we'll have 1 table with a huge amount of data for
>>> writing and reading.
>>> Co-location if I understand correctly is ensuring all related data is on
>>> the same nodes, but in our case we have a single POJO which I would like
>>> it's data to be separated on different nodes for performance.
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
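The query-entity approach suggested in the thread above (configuring SQL fields and indexes without annotating the POJO) can be sketched as a Spring XML fragment. The cache name, types, and fields below are illustrative assumptions, mirroring the linked documentation:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="personCache"/>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.lang.String"/>
                <!-- The unmodifiable POJO; no annotations required. -->
                <property name="valueType" value="com.example.Person"/>
                <property name="fields">
                    <map>
                        <entry key="name" value="java.lang.String"/>
                        <entry key="score" value="java.lang.Double"/>
                    </map>
                </property>
                <property name="indexes">
                    <list>
                        <!-- Index backing ORDER BY score queries. -->
                        <bean class="org.apache.ignite.cache.QueryIndex">
                            <constructor-arg value="score"/>
                        </bean>
                    </list>
                </property>
            </bean>
        </list>
    </property>
</bean>
```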


Re: getOrCreateCache hangs when cacheStore uses spring resources

2021-06-09 Thread Ilya Kasnacheev
Hello!

This may happen when your Ignite instance is a Spring bean but its cache
store depends on the Spring context.

The cache store needs to obtain the Spring context, but that context is
still under construction because Ignite itself is still being initialized.

Make sure that cache is created only after Spring context is ready and
Ignite initialization has finished.
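One way to enforce that ordering in Spring XML is `depends-on`: the bean that creates caches is only constructed after both the Ignite bean and the store's dependencies exist. The bean names and classes below are illustrative, not from the reporter's project:

```xml
<!-- The @SpringResource the cache store needs must exist first. -->
<bean id="serverConfig" class="com.example.ServerConfig"/>

<!-- IgniteSpringBean starts Ignite as part of the Spring lifecycle. -->
<bean id="ignite" class="org.apache.ignite.IgniteSpringBean">
    <property name="configuration" ref="igniteConfiguration"/>
</bean>

<!-- Hypothetical bean whose init method calls getOrCreateCache();
     depends-on delays it until Ignite and serverConfig are ready. -->
<bean id="cacheInitializer" class="com.example.CacheInitializer"
      depends-on="ignite,serverConfig"/>
```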

Regards,
-- 
Ilya Kasnacheev


чт, 27 мая 2021 г. в 14:20, Orange :

> Calling ignite.getOrCreateCache(cacheConfig) results in the thread getting
> blocked. I've noticed this does not happen when the cacheStore does not use
> spring resources. Ignite is being started with IgniteSpring.start(config,
> ctx).
>
> There are no other server errors.
>
> I've provided the code and the error below.
>
> [2021-05-27 12:11:15.381] - 5368 SEVERE
> [tcp-disco-msg-worker-[crd]-#2%SERVER-NAME%-#42%SERVER-NAME%] ---
> org.apache.ignite.internal.util.typedef.G: Blocked system-critical thread
> has been detected. This can lead to cluster-wide undefined behaviour
> [workerName=partition-exchanger,
> threadName=exchange-worker-#46%SERVER-NAME%, blockedFor=1288s]
> [2021-05-27 12:11:15.381] - 5368 WARNING
> [tcp-disco-msg-worker-[crd]-#2%SERVER-NAME%-#42%SERVER-NAME%] ---
> org.apache.ignite.internal.util.typedef.G: Thread
> [name="exchange-worker-#46%SERVER-NAME%", id=87, state=BLOCKED, blockCnt=1,
> waitCnt=4]
> Lock [object=java.util.concurrent.ConcurrentHashMap@307677e2,
> ownerName=main, ownerId=1]
>
> [2021-05-27 12:11:15.382] - 5368 WARNING
> [tcp-disco-msg-worker-[crd]-#2%SERVER-NAME%-#42%SERVER-NAME%] --- :
> Possible
> failure suppressed accordingly to a configured handler
> [hnd=NoOpFailureHandler [super=AbstractFailureHandler
> [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED,
> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
> [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker
> [name=partition-exchanger, igniteInstanceName=SERVER-NAME, finished=false,
> heartbeatTs=1622112586746]]]
> class org.apache.ignite.IgniteException: GridWorker
> [name=partition-exchanger, igniteInstanceName=SERVER-NAME, finished=false,
> heartbeatTs=1622112586746]
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$3.apply(IgnitionEx.java:1806)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$3.apply(IgnitionEx.java:1801)
> at
>
> org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:234)
> at
>
> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$1(ServerImpl.java:2970)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:8057)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:3086)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7995)
> at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:58)
>
> [2021-05-27 12:11:15.382] - 5368 WARNING
> [tcp-disco-msg-worker-[crd]-#2%SERVER-NAME%-#42%SERVER-NAME%] ---
> org.apache.ignite.internal.processors.cache.CacheDiagnosticManager: Page
> locks dump:
>
> Thread=[name=exchange-worker-#46%SERVER-NAME%, id=87], state=BLOCKED
> Locked pages = []
> Locked pages log: name=exchange-worker-#46%SERVER-NAME%
> time=(1622113875382,
> 2021-05-27 12:11:15.382)
>
>
> @Bean
> public CacheConfiguration cacheConfig() {
>     CacheConfiguration cacheCfg = new CacheConfiguration("cache-name");
>     cacheCfg.setCacheMode(CacheMode.REPLICATED);
>     cacheCfg.setReadThrough(true);
>     cacheCfg.setCacheStoreFactory(
>         new FactoryBuilder.SingletonFactory<>(new TestCacheStore()));
>     return cacheCfg;
> }
>
>
> public class TestCacheStore extends CacheStoreAdapter
>         implements Serializable {
>
>     private static final Logger log = getLogger(TestCacheStore.class);
>
>     @SpringResource(resourceName = "serverConfig")
>     private transient ServerConfig serverConfig;
>
>     public TestCacheStore() {
>         log.info("test");
>     }
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Clients got disconnected during the endurance testing

2021-06-09 Thread Ilya Kasnacheev
Hello!

This may happen if your cluster has a long PME and the connection pool is
exhausted. You need to check the server nodes' logs for suspicious messages.

Regards,
-- 
Ilya Kasnacheev


чт, 3 июн. 2021 г. в 10:35, Naveen :

> HI All
>
> We are using Ignite 2.8.1 and running an endurance test lasting 7 to 12
> hours.
> The test ran for almost 6 hours, then all of a sudden the clients got
> disconnected, and we are seeing the logs below.
> What could be the reason for this behavior? We had enough resources (RAM,
> CPU) during that time.
>
>
> [2021-06-03 00:08:08,172][WARN ][tcp-disco-msg-worker-[8761dfbe
> 10.119.10.63:47500]-#2][root] Possible failure suppressed accordingly to a
> configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false,
> timeout=0, super=AbstractFailureHandler
> [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
> o.a.i.IgniteException: GridWorker [name=grid-timeout-worker,
> igniteInstanceName=null, finished=false, heartbeatTs=1622664488170]]]
> class org.apache.ignite.IgniteException: GridWorker
> [name=grid-timeout-worker, igniteInstanceName=null, finished=false,
> heartbeatTs=1622664488170]
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$3.apply(IgnitionEx.java:1810)
> at
>
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$3.apply(IgnitionEx.java:1805)
> at
>
> org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:234)
> at
>
> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.lambda$new$0(ServerImpl.java:2858)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorker.body(ServerImpl.java:7759)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2946)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at
>
> org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerThread.body(ServerImpl.java:7697)
> at
> org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:61)
>
>
> [2021-06-03 00:08:08,172][WARN
> ][grid-timeout-worker-#23][ClientListenerNioListener] Unable to perform
> handshake within timeout [timeout=24, remoteAddr=/10.129.4.13:45160]
> [2021-06-03 00:08:08,172][WARN
> ][grid-timeout-worker-#23][ClientListenerNioListener] Unable to perform
> handshake within timeout [timeout=24, remoteAddr=/10.129.4.13:35382]
> [2021-06-03 00:08:08,174][WARN
> ][grid-timeout-worker-#23][ClientListenerNioListener] Unable to perform
> handshake within timeout [timeout=24, remoteAddr=/10.129.4.13:60720]
> [2021-06-03 00:08:08,174][WARN
> ][grid-timeout-worker-#23][ClientListenerNioListener] Unable to perform
> handshake within timeout [timeout=24, remoteAddr=/10.129.4.13:54156]
> [2021-06-03 00:08:08,174][WARN
> ][grid-timeout-worker-#23][ClientListenerNioListener] Unable to perform
> handshake within timeout [timeout=24, remoteAddr=/10.129.4.13:55260]
> [2021-06-03 00:08:08,174][WARN
> ][grid-timeout-worker-#23][ClientListenerNioListener] Unable to perform
> handshake within timeout [timeout=24, remoteAddr=/10.129.4.13:32804]
> [2021-06-03 00:08:08,174][WARN
> ][grid-timeout-worker-#23][ClientListenerNioListener] Unable to perform
> handshake within timeout [timeout=24, remoteAddr=/10.129.4.13:54822]
> [2021-06-03 00:08:08,175][WARN
> ][grid-timeout-worker-#23][ClientListenerNioListener] Unable to perform
> handshake within timeout [timeout=24, remoteAddr=/10.129.4.13:60692]
> [2021-06-03 00:08:08,175][WARN
> ][grid-timeout-worker-#23][ClientListenerNioListener] Unable to perform
> handshake within timeout [timeout=24, remoteAddr=/10.129.4.13:45316]
>
>
> Thread=[name=auth-#47, id=92], state=WAITING
> Locked pages = []
> Locked pages log: name=auth-#47 time=(1622664488172, 2021-06-03
> 00:08:08.172)
>
>
> Thread=[name=checkpoint-runner-#65, id=112], state=WAITING
> Locked pages = []
> Locked pages log: name=checkpoint-runner-#65 time=(1622664488172,
> 2021-06-03
> 00:08:08.172)
>
>
> Thread=[name=checkpoint-runner-#66, id=113], state=WAITING
> Locked pages = []
> Locked pages log: name=checkpoint-runner-#66 time=(1622664488172,
> 2021-06-03
> 00:08:08.172)
>
>
> Thread=[name=checkpoint-runner-#67, id=114], state=WAITING
> Locked pages = []
> Locked pages log: name=checkpoint-runner-#67 time=(1622664488172,
> 2021-06-03
> 00:08:08.172)
>
>
> Thread=[name=checkpoint-runner-#68, id=115], state=W

Re: AtomicReference issue with different userVersions

2021-06-09 Thread Ilya Kasnacheev
Hello!

Is this a bug which needs to be addressed? If so, can you file a ticket
against Ignite JIRA?

Thanks,
-- 
Ilya Kasnacheev


ср, 9 июн. 2021 г. в 11:19, tanshuai :

> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t3172/20210609161010.jpg>
>
>
> I may have found the root cause.
>
> The invocation chain should be something like the one below.
>
> org.apache.ignite.internal.managers.deployment.GridDeploymentManager#deploy
> org.apache.ignite.internal.util.IgniteUtils#detectClassLoader
> org.apache.ignite.internal.util.GridClassLoaderCache#classLoader
> org.apache.ignite.internal.util.GridClassLoaderCache#detectClassLoader
>
> And you must have used Thread.currentThread().setContextClassLoader()
> somewhere in your app before the
>
> org.apache.ignite.internal.processors.continuous.GridContinuousProcessor#processStartRequestV2
> method is invoked.
>
> There is a user version under the thread context classloader which is
> different from the one in the StartRoutineDiscoveryMessageV2.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
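The context-classloader effect described above can be seen with a small stand-alone Java program (no Ignite involved): once an application installs its own context loader, any library that detects classes via `Thread.currentThread().getContextClassLoader()` sees a different loader than before, which is how two different "user versions" can be observed for the same class.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class ContextLoaderDemo {
    public static void main(String[] args) {
        Thread t = Thread.currentThread();
        ClassLoader original = t.getContextClassLoader();

        // Simulate an application that installs its own context loader.
        ClassLoader custom = new URLClassLoader(new URL[0], original);
        t.setContextClassLoader(custom);

        // Code that resolves classes via the context loader now sees
        // 'custom' instead of 'original'.
        System.out.println(t.getContextClassLoader() == original); // false

        t.setContextClassLoader(original); // restore
    }
}
```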


Re: Peer ClassLoading Issue | Apache Ignite 2.10 with Spring Boot 2.3

2021-06-09 Thread Ilya Kasnacheev
Hello!

The "Failed to resolve class name" error also looks like
https://issues.apache.org/jira/browse/IGNITE-14856

Regards,
-- 
Ilya Kasnacheev


вс, 9 мая 2021 г. в 08:28, :

> Hi,
>
>
>
> We are trying to use ignite for the first time in our project. We are
> trying to use ignite with persistence enabled.
>
>
>
> Architecture is as follows.
>
>
>
> SpringBoot 2.3 application (thick client ) tries to connect to apace
> ignite cluster (3 nodes ) with persistence enabled and peer class loading
> enabled.
>
>
>
> There seems to be a weird issue with peer class loading.
>
>
>
> We are trying to load huge data following the same approach as here -
> https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api
>
>
>
> Cache Configuration
>
>
>
> cacheConfiguration.setName(CacheIdentifiers.USER_IGNITE_CACHE.toString());
> cacheConfiguration.setIndexedTypes(String.class, IgniteUser1.class);
> cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
> cacheConfiguration.setStoreKeepBinary(true);
> RendezvousAffinityFunction rendezvousAffinityFunction = new
> RendezvousAffinityFunction();
> rendezvousAffinityFunction.setPartitions(512);
> cacheConfiguration.setBackups(1);
> cacheConfiguration.setAffinity(rendezvousAffinityFunction);
>
>
>
>
>
> Scenario 1.
>
>
>
> Start the cluster -> activate the cluster -> start the thick client ->
> loading clients/ignite.cluster fails
>
>
>
> Exception occured in adding the data javax.cache.CacheException: class
> org.apache.ignite.IgniteCheckedException: Failed to resolve class name
> [platformId=0, platform=Java, typeId=620850656]
>
>
>
> Scenario 2.
>
>
>
> Stop the Thick client , Rename the file from IgniteUser1 to IgniteUser and
> restart the thick client , the classes are now copied to the cluster and
> works fine.
>
>
>
> I am not sure if there is an issue with grid deployment. Any help would be
> appreciated.
>
>
>
> Thanks,
> Siva.
>
>
>


Re: ignite server restarted after Critical system error detected.

2021-06-09 Thread Ilya Kasnacheev
Hello!

Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Cannot
execute this query as it might involve data filtering and thus may have
unpredictable performance. If you want to execute this query despite the
performance unpredictability, use ALLOW FILTERING

Sounds pretty informative ^

No, REPLICATED caches will not replicate values loaded from the cache
store. It is assumed that they can be fetched transparently from the
underlying store.

For REPLICATED cache there's nothing to rebalance when node leaves.

Regards,
-- 
Ilya Kasnacheev


чт, 3 июн. 2021 г. в 11:06, xmw45688 :

> Hi Ignitians,
>
> I fail to understand the causes and need your help -
> 1)  When k8s sees “Critical system error”, it will restart ignite-admin
> server. Restarting is fine because of the critical system error.  But what
> are the causes of the critical system error?
> 2)  The critical system error may correspond to the JVM being stalled;
> still, we
> don’t know why the JVM stalled.
> 3)  The cluster lost one ignite client node, probably due to OOME
> 4)  Why/how was the Ignite server node triggered to reload the data from
> Cassandra? (All data in C* tables is cached once the Ignite server starts;
> all SQL DML interacts with the Ignite cache, which interacts with Cassandra
> for insert/update/delete.) If Ignite needs to rebalance the data among the
> server nodes, why can't it rebalance the data from one node to another? And
> if it is rebalancing data, why is it submitting invalid queries? We are
> using apache-ignite-2.8.0.20190215.
>
> Exceptions -
>
> [2021-06-02
> 17:09:04,005][ERROR][sys-#103562%ignite-procurant-admin-cluster%][root]
> Critical system error detected. Will be handled accordingly to configured
> handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
> o.a.i.i.transactions.IgniteTxHeuristicCheckedException: Committing a
> transaction has produced runtime exception]]
> ... 1 more
> at
> com.datastax.driver.core.AbstractSession.prepare(AbstractSession.java:104)
> class
> org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException:
> Committing a transaction has produced runtime exception
> at
>
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.heuristicException(IgniteTxAdapter.java:800)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitRemoteTx(GridDistributedTxRemoteAdapter.java:847)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitIfLocked(GridDistributedTxRemoteAdapter.java:795)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.salvageTx(GridDistributedTxRemoteAdapter.java:898)
> at
>
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager.salvageTx(IgniteTxManager.java:398)
> at
>
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager.access$3100(IgniteTxManager.java:134)
> at
>
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$NodeFailureTimeoutObject.onTimeout0(IgniteTxManager.java:2551)
> at
>
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$NodeFailureTimeoutObject.access$3300(IgniteTxManager.java:2505)
> at
>
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$NodeFailureTimeoutObject$1.run(IgniteTxManager.java:2624)
> at
>
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6898)
> at
>
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:827)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> Caused by: com.datastax.driver.core.exceptions.InvalidQueryException:
> Cannot
> execute this query as it might involve data filtering and thus may have
> unpredictable performance. If you want to execute this query despite the
> performance unpredictability, use ALLOW FILTERING
> at java.lang.Thread.run(Thread.java:745)
> at
>
> org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.load(GridCacheStoreManagerAdapter.java:293)
> at
>
> org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:338)
> Caused by: class org.apac

Re: Ignite throws "Failed to resolve class name" Exception

2021-06-09 Thread Ilya Kasnacheev
Hello!

I can see a very similar issue filed:
https://issues.apache.org/jira/browse/IGNITE-14856

There are hopes that it gets addressed in 2.11.

Regards,
-- 
Ilya Kasnacheev


чт, 3 июн. 2021 г. в 14:37, Aleksandr Shapkin :

> Hello!
>
> It seems that you are trying to deploy DTO using peer class loading,
> unfortunately, that's not possible. Peer class loading is mostly about task
> deployment, see
> https://ignite.apache.org/docs/latest/code-deployment/peer-class-loading
>
> To resolve this you need to have your CacheState deployed on all nodes
> before deserialization happens or to work with cache in raw binary format.
> https://ignite.apache.org/docs/latest/key-value-api/binary-objects
>
>
> > On 26 May 2021, at 18:27, tsipporah22  wrote:
> >
> > Hi Ilya,
> >
> > Sorry I get back to you late. I have key-value classes on the server
> node.
> > The peer class loading is enabled. I'm not getting this error
> consistently
> > so it's hard to reproduce. Below is the code snippet that throws the
> error:
> >
> > First I got CacheState object with tableName as the key:
> >
> > public class CacheState implements Serializable {
> >private static final long serialVersionUID = 1L;
> >
> >@QuerySqlField(index=true)
> >private String tableName;
> >
> >@QuerySqlField
> >private long updateVersion;
> >
> >
> >public CacheState() {
> >}
> >
> >public CacheState(String tableName) {
> >this.tableName = tableName;
> >}
> >
> >public String getTableName() {
> >return tableName;
> >}
> >
> >public void setTableName(String tableName) {
> >this.tableName = tableName;
> >}
> >
> >public long getUpdateVersion() {
> >return updateVersion;
> >}
> >
> >public void setUpdateVersion(long updateVersion) {
> >this.updateVersion = updateVersion;
> >}
> >
> >
> > And the error is thrown from this class:
> >
> > public class WindowOptimizer {
> >private final Ignite ignite;
> >
> >private IgniteCache cacheStates;
> >
> >public void init() {
> >if (cacheStates == null) {
> >cacheStates =
> > ignite.cache(CacheState.class.getSimpleName().toLowerCase());
> >}
> >}
> > }
> >
> >private IgniteFuture>
> > updateCacheState(IgniteCompute compute, BaselinePeriod period,
> >OffsetDateTime now, WindowOptimizerCfg cfg) {
> >
> >final IgniteFuture> future =
> > compute.broadcastAsync(updater);
> >future.listen(f -> {
> >
> >
> >CacheState cacheState = cacheStates.get(tableName);<---
> this
> > line throws Exception
> >
> >
> >})
> >}
> >
> >
> > Thanks,
> > tsipporah
> >
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>


Re: Execute Ignite Callable Jobs with set priorities

2021-05-25 Thread Ilya Kasnacheev
Hello!

I don't think so.

Regards,
-- 
Ilya Kasnacheev


вт, 25 мая 2021 г. в 14:36, Krish :

> ilya.kasnacheev wrote
> > This means that Ignite prioritizing is a poor fit for you and you may
> need
> > to roll out your own, perhaps based on IgniteQueue.
>
> Does Ignite provide a priority-queue implementation of the IgniteQueue interface?
>
> Thanks,
> Krish
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Exception on CacheEntryProcessor invoke (2.10.0)

2021-05-25 Thread Ilya Kasnacheev
Hello!

I have checked your reproducer. I was not able to run it since it has a
lot of dependencies, and there is a lot going on there: you have a scheduled
method which runs periodically and executes, on a thread pool, the code that
calls these entry processors.

I would recommend simplifying this use case until it works: such as,
running entry processors from main/application thread, then adding thread
pool, then adding schedule. This way you may identify the step which causes
this issue.

Maybe the issue is related to some class loading problem. Maybe some of
classes' dependency cannot be peer loaded. Since these errors do not seem
to contain stack trace, some debugging may be needed.

Regards,
-- 
Ilya Kasnacheev


пн, 24 мая 2021 г. в 18:47, ihalilaltun :

> Hi,
>
> I've run more detailed tests during the weekend and i can surely tell that
> problem is not related to the migrated data. With a new cluster setup and
> with 0 data we can still get the error.
>
> what I have in mind is this: with the new version there may be a new
> configuration parameter that has to be set in order for cache entry
> processors to be DEPLOYED in SHARED mode to all cluster nodes, but I cannot
> find such a parameter.
>
> so at this point my problem becomes this: is there a configuration
> parameter that forces all cache entry processors to be deployed on every
> cluster node from the client?
>
> following is the cluster and client configuration;
> client-server-configs.zip
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2515/client-server-configs.zip>
>
>
> when the client node starts and runs the necessary jobs (containing
> cache entry processors), the first 1 or 2 cache entry processors are
> deployed to both cluster nodes; after that, new cache entry processors
> start to get ClassNotFoundException and the cluster nodes keep logging the
> following warnings;
>
>
> [2021-05-24T15:36:00,010][WARN
> ][sys-stripe-4-#5][GridDeploymentPerVersionStore] Failed to load peer class
> (ignore if class got undeployed during preloading)
>
> [alias=com.segmentify.lotr.frodo.cacheentryprocessor.ShiftPromotionCountersEntryProcessor,
> dep=SharedDeployment [rmv=false, super=GridDeployment [ts=1621870560005,
> depMode=SHARED, clsLdr=GridDeploymentClassLoader
> [id=627410f9971-f4e082a1-4012-4720-bcbb-e438359221e1, singleNode=false,
> nodeLdrMap=HashMap
>
> {db22a85d-37a5-45c4-ae63-bdd535eaca44=75f020f9971-db22a85d-37a5-45c4-ae63-bdd535eaca44},
> p2pTimeout=5000, usrVer=0, depMode=SHARED, quiet=false],
> clsLdrId=627410f9971-f4e082a1-4012-4720-bcbb-e438359221e1, userVer=0,
> loc=false,
>
> sampleClsName=com.segmentify.lotr.frodo.cacheentryprocessor.ShiftPromotionCountersEntryProcessor,
> pendingUndeploy=false, undeployed=false, usage=0]]]
> [2021-05-24T15:36:00,103][WARN
> ][sys-stripe-2-#3][GridDeploymentPerVersionStore] Failed to load peer class
> (ignore if class got undeployed during preloading)
>
> [alias=com.segmentify.lotr.frodo.cacheentryprocessor.RockScoreResetProcessor,
> dep=SharedDeployment [rmv=false, super=GridDeployment [ts=1621870560100,
> depMode=SHARED, clsLdr=GridDeploymentClassLoader
> [id=a27410f9971-f4e082a1-4012-4720-bcbb-e438359221e1, singleNode=false,
> nodeLdrMap=HashMap
>
> {db22a85d-37a5-45c4-ae63-bdd535eaca44=75f020f9971-db22a85d-37a5-45c4-ae63-bdd535eaca44},
> p2pTimeout=5000, usrVer=0, depMode=SHARED, quiet=false],
> clsLdrId=a27410f9971-f4e082a1-4012-4720-bcbb-e438359221e1, userVer=0,
> loc=false,
>
> sampleClsName=com.segmentify.lotr.frodo.cacheentryprocessor.RockScoreResetProcessor,
> pendingUndeploy=false, undeployed=false, usage=0]]]
> [2021-05-24T15:36:00,180][WARN
> ][sys-stripe-1-#2][GridDeploymentPerVersionStore] Failed to load peer class
> (ignore if class got undeployed during preloading)
>
> [alias=com.segmentify.lotr.frodo.cacheentryprocessor.RockScoreResetProcessor,
> dep=SharedDeployment [rmv=false, super=GridDeployment [ts=1621870560171,
> depMode=SHARED, clsLdr=GridDeploymentClassLoader
> [id=f27410f9971-f4e082a1-4012-4720-bcbb-e438359221e1, singleNode=false,
> nodeLdrMap=HashMap
>
> {db22a85d-37a5-45c4-ae63-bdd535eaca44=75f020f9971-db22a85d-37a5-45c4-ae63-bdd535eaca44},
> p2pTimeout=5000, usrVer=0, depMode=SHARED, quiet=false],
> clsLdrId=f27410f9971-f4e082a1-4012-4720-bcbb-e438359221e1, userVer=0,
> loc=false,
>
> sampleClsName=com.segmentify.lotr.frodo.cacheentryprocessor.RockScoreResetProcessor,
> pendingUndeploy=false, undeployed=false, usage=0]]]
> [2021-05-24T15:36:00,202][WARN
> ][sys-stripe-1-#2][GridDeploymentPerVersionStore] Failed to load peer class
> (ignore if class got undeployed during preloading)
>
> [alias=com.segmentify.lotr.frodo.cacheentryprocessor.RockScoreResetProcessor,
> dep=SharedDeployment [rmv=false, su

Re: System.InvalidOperationException: 'No coercion operator is defined between types 'Apache.Ignite.Core.Impl.Binary.BinaryObjectBuilder' and 'System.DateTime'

2021-05-24 Thread Ilya Kasnacheev
Hello!

Maybe you are doing something wrong which only becomes apparent in the
persistent setup.

Can you share a runnable reproducer project which exhibits the behavior?

Regards,
-- 
Ilya Kasnacheev


пт, 21 мая 2021 г. в 20:20, Josh Katz :

> Hi Ilya,
>
>
>
> I’m getting this exception when calling ICache Get method to retrieve the
> object from the Cache. (The object has a DateTime property)
>
> I have no special configuration other than persistence enabled. In another
> project we didn’t use persistence and did not encounter this issue.
>
> Can you please clarify if this issue is only happening when using
> persistence? I’m going to try the IBinarizable approach from the docs.
>
>
>
> Thanks,
>
>
>
> Josh Katz
>
> Dodge & Cox | 415-262-7520
>
>
>
>
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* Friday, May 21, 2021 2:02 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: System.InvalidOperationException: 'No coercion operator is
> defined between types 'Apache.Ignite.Core.Impl.Binary.BinaryObjectBuilder'
> and 'System.DateTime'
>
>
>
>
> Hello!
>
>
>
> For starters, it looks like you're putting a BinaryObjectBuilder into
> cache instead of the BinaryObjectBuilder.Build() return value.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> Fri, 21 May 2021 at 01:46, Josh Katz <
> josh.katz.contrac...@dodgeandcox.com>:
>
> Using .NET UnitTest to connect to the cluster and persistence enabled.
>
> Put works without errors. When calling Get we get the following exception:
>
> System.InvalidOperationException: 'No coercion operator is defined between
> types 'Apache.Ignite.Core.Impl.Binary.BinaryObjectBuilder' and
> 'System.DateTime'
>
>
>
> We are using System.Runtime.Serialization for the DateTime property with
> DataMemberAttribute.
>
>
>
> Thanks,
>
>
>
> *Josh Katz*
>
> *Dodge & Cox*
>
> 555 California Street | 40th floor | San Francisco, CA 94104
>
> 415-262-7520
>
>
>
> josh.katz.contrac...@dodgeandcox.com
>
> www.dodgeandcox.com
>
>
>
>
> --
>
> Please follow the hyperlink to important disclosures.
> https://www.dodgeandcox.com/disclosures/email_disclosure_funds.html
>
>


Re: Exception on CacheEntryProcessor invoke (2.10.0)

2021-05-24 Thread Ilya Kasnacheev
Hello!

If you can provide steps to reproduce, I can try to do that.

Regards,
-- 
Ilya Kasnacheev


Fri, 21 May 2021 at 18:45, ihalilaltun :

> hi
>
> The case can be reproduced only by upgrading from 2.7.6 to 2.10.0 with
> existing data. Can you run that kind of reproduction step?
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: select * throws cannot find schema for object with compact footer

2021-05-24 Thread Ilya Kasnacheev
Hello!

Please refer to
https://ignite.apache.org/docs/latest/key-value-api/binary-objects#recommendations-on-binary-objects-tuning

It seems that some of the rows in the table have a data layout which is
not present on any of the nodes, so Ignite does not know how to unpack these
rows into columns.

Maybe you have lost the contents of binary_meta directory on the nodes or
something like that.

Regards,
-- 
Ilya Kasnacheev


Mon, 24 May 2021 at 15:50, Naveen :

> HI
>
> We are using Ignite 2.8.1
>
> when we run select * on the table, it throws the exception below; however,
> querying for a specific key works fine.
> What could have gone wrong here? Can we make anything out of the ids
> mentioned below?
> Error: Failed to execute map query on remote node
> [nodeId=3db2b3e5-21ae-46ad-9b14-cf3a1c8171de, errMsg=Failed to execute SQL
> query. General error: "class
> org.apache.ignite.binary.BinaryObjectException:
> Cannot find schema for object with compact footer
> [typeName=org.ignite.model.curated.Account, typeId=-2143671743,
> missingSchemaId=319535867, existingSchemaIds=[-284217025, -131460726,
> 738773736, -361085461, 130249633, -1686791135, 670818893, 1018521906,
> -978489660, 1225415027, 1484800635, 469100171]]"; SQL statement:
>
> [2021-05-24 16:42:58,117][WARN
> ][client-connector-#78][CacheObjectBinaryProcessorImpl] Schema is missing
> while no metadata updates are in progress (will wait for schema update
> within timeout defined by IGNITE_WAIT_SCHEMA_UPDATE system property)
> [typeId=-2143671743, missingSchemaId=319535867, pendingVer=3,
> acceptedVer=3,
> binMetaUpdateTimeout=3]
>
> Thanks
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Exception on CacheEntryProcessor invoke (2.10.0)

2021-05-21 Thread Ilya Kasnacheev
Hello!

The class was probably not found.

Without steps to reproduce I can't check anything further.

Regards,
-- 
Ilya Kasnacheev


Fri, 21 May 2021 at 16:10, ihalilaltun :

> Hi,
> Sorry, but I cannot share such a project; company policy restricts it.
>
> I tried to reproduce it with new code, but had no luck (due to the different
> environments and existing data structure). The idea was to invoke the
> CacheEntryProcessors from within an ExecutorService, but I could not
> reproduce the error.
>
> Let me give more information about the migration step that causes the
> error. I have a small portion of production data on the test environment,
> with Ignite running version 2.7.6. None of our tests fail, and none of the
> test automations cause the error. The next step is to gracefully shut down
> the Ignite nodes, execute the *yum upgrade apache-ignite* command, and wait
> for the successfully-upgraded message. The Ignite nodes are then
> successfully started with version 2.10.0. From that point on, all
> CacheEntryProcessors called from Runnable contexts start to throw
> ClassNotFoundExceptions.
>
> DEBUG mode log is here ->  ignite-debug.log
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2515/ignite-debug.log>
>
>
> The most interesting log message is this:
>
> *[2021-05-21T12:10:00,012][DEBUG][sys-stripe-9-#10][GridDeploymentPerVersionStore]
> Failed to find class on remote node
>
> [class=com.segmentify.lotr.frodo.cacheentryprocessor.ShiftPromotionCountersEntryProcessor,
> nodeId=f2c50fc3-0e7b-43eb-bcbb-5d3dda635b6b,
> clsLdrId=d12c24e8971-f2c50fc3-0e7b-43eb-bcbb-5d3dda635b6b, reason=Failed to
> find local deployment for peer request: GridDeploymentRequest
>
> [rsrcName=com/segmentify/lotr/frodo/cacheentryprocessor/ShiftPromotionCountersEntryProcessor.class,
> ldrId=d12c24e8971-f2c50fc3-0e7b-43eb-bcbb-5d3dda635b6b, isUndeploy=false,
> nodeIds=null]]*
>
> Although the class is present on the remote node, somehow the Ignite node
> cannot find it.
>
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: System.InvalidOperationException: 'No coercion operator is defined between types 'Apache.Ignite.Core.Impl.Binary.BinaryObjectBuilder' and 'System.DateTime'

2021-05-21 Thread Ilya Kasnacheev
Hello!

For starters, it looks like you're putting a BinaryObjectBuilder into cache
instead of the BinaryObjectBuilder.Build() return value.
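
The Java API has the same pitfall: build() must be called and its BinaryObject result stored, not the builder itself. A minimal sketch (the cache, type, and field names below are illustrative, not taken from the original report):

```java
// Assumes a running Ignite instance `ignite` and a cache named "trades";
// the "Trade" type and its tradeDate field are hypothetical examples.
IgniteCache<Integer, BinaryObject> cache =
    ignite.cache("trades").withKeepBinary();

BinaryObjectBuilder builder = ignite.binary().builder("Trade");
builder.setField("tradeDate", new java.sql.Timestamp(System.currentTimeMillis()));

BinaryObject obj = builder.build(); // call build() first -- do not put the builder
cache.put(1, obj);
```

In Ignite.NET the equivalent step is calling IBinaryObjectBuilder.Build() and putting its result via ICache.Put.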

Regards,
-- 
Ilya Kasnacheev


Fri, 21 May 2021 at 01:46, Josh Katz :

> Using .NET UnitTest to connect to the cluster and persistence enabled.
>
> Put works without errors. When calling Get we get the following exception:
>
> System.InvalidOperationException: 'No coercion operator is defined between
> types 'Apache.Ignite.Core.Impl.Binary.BinaryObjectBuilder' and
> 'System.DateTime'
>
>
>
> We are using System.Runtime.Serialization for the DateTime property with
> DataMemberAttribute.
>
>
>
> Thanks,
>
>
>
> *Josh Katz*
>
> *Dodge & Cox*
>
> 555 California Street | 40th floor | San Francisco, CA 94104
>
> 415-262-7520
>
>
>
> josh.katz.contrac...@dodgeandcox.com
>
> www.dodgeandcox.com
>
>
>
>
> --
> Please follow the hyperlink to important disclosures.
> https://www.dodgeandcox.com/disclosures/email_disclosure_funds.html
>
>


Re: Running sql query on partitioned cache

2021-05-21 Thread Ilya Kasnacheev
Hello!

1. You can try to reproduce the same issue in Java.
2. You should almost exclusively use cache.Size().
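
To illustrate the distinction (shown here with the Java API; the C++ Cache interface exposes the same pair of calls), LocalSize() counts only entries held on the local node, while Size() performs a cluster-wide count. A hedged sketch, assuming a running cluster and a cache named "myCache":

```java
IgniteCache<Integer, String> cache = ignite.cache("myCache");

// Entries stored on this node only; with backups this does not equal the
// logical size of the data set.
int local = cache.localSize(CachePeekMode.PRIMARY);

// Cluster-wide count of primary copies -- the "exact size" of the cache.
int total = cache.size(CachePeekMode.PRIMARY);
```

Passing CachePeekMode.PRIMARY avoids counting backup copies twice.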

Regards,
-- 
Ilya Kasnacheev


Fri, 21 May 2021 at 12:02, rakshita04 :

> Hi Ilya,
>
> 1. We can't share the running exe, as it is built for ARM and requires a
> specific kernel image to run.
> To put it simply: if I use partitioned mode with backups=1 and want to run
> an SQL query on the existing cache, do we need to do anything about
> colocation (since keys are distributed across both nodes)?
>
> 2. Yes, I want to understand when to use cache.LocalSize() and when to use
> cache.Size().
>
> regards,
> rakshita Chaudhary
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re:

2021-05-21 Thread Ilya Kasnacheev
Hello!

Please send mail to user-unsubscr...@ignite.apache.org to unsubscribe from
the list.

Regards,
-- 
Ilya Kasnacheev


Fri, 21 May 2021 at 11:11, Hitesh Nandwana :

> Unsubscribe
>


Re: Execute Ignite Callable Jobs with set priorities

2021-05-20 Thread Ilya Kasnacheev
Hello!

It sounds like you want to prioritize some resource other than Ignite's
compute capacity if you want cluster-wide priorities.

This means that Ignite's job prioritization is a poor fit for you, and you
may need to roll your own, perhaps based on IgniteQueue.
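
One hedged way to roll your own is a single shared queue that every worker polls, so jobs are consumed highest-priority-first no matter which node submitted them. The sketch below models the ordering logic with a local PriorityBlockingQueue; in an Ignite deployment the same job records could live in IgniteQueue instances (which are FIFO, so one queue per priority level, polled from highest to lowest). All class and method names here are illustrative:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.PriorityBlockingQueue;

// Home-grown cluster-wide prioritization: submitters feed one shared queue,
// workers poll it, so execution order follows priority regardless of the
// submitting node. PriorityBlockingQueue stands in for the distributed queue.
class PriorityJobQueue {
    static final class Job {
        final int priority;
        final String name;
        Job(int priority, String name) { this.priority = priority; this.name = name; }
    }

    // Highest priority value is polled first.
    private final PriorityBlockingQueue<Job> queue = new PriorityBlockingQueue<>(
            16, Comparator.comparingInt((Job j) -> j.priority).reversed());

    void submit(int priority, String name) {
        queue.add(new Job(priority, name));
    }

    // A worker loop would poll() repeatedly; drain() shows the resulting order.
    List<String> drain() {
        List<String> order = new ArrayList<>();
        for (Job j = queue.poll(); j != null; j = queue.poll())
            order.add(j.name);
        return order;
    }
}
```

Submitting jobs with priorities 5, 10, and 7 in any order drains them as the priority-10 job, then 7, then 5.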

Regards,
-- 
Ilya Kasnacheev


Sun, 16 May 2021 at 13:20, Krish :

> Thanks Stephen and Ilya,
>
> As mentioned in the job scheduling documentation, the collision API takes
> care of job scheduling when jobs arrive at the destination node. Let's say
> I use PriorityQueueCollisionSpi and send three jobs with priorities 5, 7
> and 10 to one node; that node will execute the job with priority 10 first,
> then the job with priority 7, and then 5.
>
> However, my use case is different. Going with the above example, the Ignite
> client will send three jobs to three different nodes, and I would still
> want these jobs to be executed based on their priorities. Basically, no
> matter how the jobs are distributed across the cluster, they should be
> executed based on priority. Can this be achieved using the collision API?
>
> Many Thanks,
> K
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Data replication from kafka-topic to ignite cluster

2021-05-20 Thread Ilya Kasnacheev
Hello!

Maybe you should ask on the Kafka list about the specifics of its class
loading. "org.apache.kafka.connect.runtime.isolation" suggests that there
may be some limitations.

Regards,
-- 
Ilya Kasnacheev


Mon, 17 May 2021 at 13:32, shubhamshirur :

> Thank you for replying. I have added ignite-core.jar and mentioned its path
> in plugin.path. This particular jar is loaded during connector
> instantiation; I have verified this in the log. Can you please enlighten me
> on this?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Migration from Apache ignite 2.7.0 to 2.10.0

2021-05-20 Thread Ilya Kasnacheev
Hello!

2.10.0 is definitely more stable than 2.7.0.

Regards,
-- 
Ilya Kasnacheev


Mon, 17 May 2021 at 08:00, BEELA GAYATRI :

> Dear Team,
>
> We are planning to migrate Apache Ignite from 2.7.0 to 2.10.0. We have seen
> a few warnings in the 2.10.0 release notes. Can we move to 2.10.0 in
> production? Please advise.
>
> Thanks and Regards,
>
> Beela Gayatri.
>
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>


Re: Running sql query on partitioned cache

2021-05-20 Thread Ilya Kasnacheev
Hello!

1. Hard to say what happens here. Do you have a runnable reproducer for
this behavior? Can you share?

2.
https://ignite.apache.org/releases/latest/cppdoc/classignite_1_1cache_1_1Cache.html#a03574797da901a76180aad88476ef8ce
- Cache.Size()?

Regards,
-- 
Ilya Kasnacheev


Thu, 20 May 2021 at 09:39, rakshita04 :

> Hi Team,
>
> We are running an SQL query on our cache (which is configured in
> "Partitioned" mode with backups=1).
> We have two nodes connected to each other over the network.
> Our application is C++ based and runs in an ARM environment (Linux).
> We are facing 2 issues now:
> 1. When we run the query below in our code while, in the background, we
> are adding entries to the DB:
> const SqlFieldsQuery countQuery(
> "select count(reqType) "
> "from DBStorage "
> "where reqType=0"); DataBaseConfig.xml
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2857/DataBaseConfig.xml>
>
> The count returned increases correctly up to 10,000 entries, but after
> that it resets and starts counting again from 0.
> Is there anything else we need to do to get correct results from SQL
> queries in partitioned mode?
>
> 2. Which C++ API do we need to use to get the exact size of the cache in
> partitioned mode? cache.localSize() is returning half the actual size
> (maybe due to partitioned mode).
>
> I am attaching the xml for your reference.
>
> regards,
> Rakshita Chaudhary
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Partitioned cache behaviour when saving the same key on different nodes

2021-05-20 Thread Ilya Kasnacheev
Hello!

For every key there is a primary node. Both puts will happen on that node
in some order.

Please see
https://ignite.apache.org/docs/latest/data-modeling/data-partitioning
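
Those semantics can be sketched locally: the primary node applies both save() calls in some order, so a plain put() means the later writer wins, while putIfAbsent() (also available on IgniteCache) keeps the first value when the key is new. There is no window where different partitions hold conflicting data. A sketch using a plain ConcurrentMap as a stand-in for the primary node's store (key and value strings are illustrative):

```java
import java.util.concurrent.ConcurrentMap;

// The primary node serializes both writes; this local ConcurrentMap stands
// in for the primary node's store to show the two possible outcomes.
class ConflictingSaves {
    // put() semantics: the second write simply overwrites the first.
    static String lastWriterWins(ConcurrentMap<String, String> primary) {
        primary.put("key", "valueFromNodeA");
        primary.put("key", "valueFromNodeB");
        return primary.get("key");
    }

    // putIfAbsent() semantics: the first write sticks, the second is a no-op.
    static String firstWriterWins(ConcurrentMap<String, String> primary) {
        primary.putIfAbsent("key", "valueFromNodeA");
        primary.putIfAbsent("key", "valueFromNodeB");
        return primary.get("key");
    }
}
```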

Regards,
-- 
Ilya Kasnacheev


Thu, 20 May 2021 at 18:10, r_s :

> Hi all,
>
> I am trying to understand what happens in the following situation:
> I have a partitioned cache that is accessed via the IgniteRepository from
> org.apache.ignite.springdata22.repository. Now I realized, that it might
> happen that two nodes will call the save method on the repository with the
> same key but different value objects. We can expect that the key is not
> present in the cache yet. What is the expected behaviour of the cache in
> this situation? Will the nodes hold the conflicting data on different
> partitions until they are somehow synchronized?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Multiple ignite nodes crashed at the same time due to "Maximum number of retries 100000 reached for Put operation" error

2021-05-20 Thread Ilya Kasnacheev
Hello!

This looks like PDS corruption to me. Can you by chance share the persistence
files from the problematic node? I assume it fails every time on restart?

Regards,
-- 
Ilya Kasnacheev


Thu, 20 May 2021 at 12:52, Lo, Marcus :

> Hi,
>
>
>
> We have a 4-node Ignite cluster. After running the cluster for 1 day, we
> encountered the following error at almost the same time on nodes #2, #3,
> and #4:
>
>
>
> Critical system error detected. Will be handled accordingly to configured
> handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
> super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [
> SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
> o.a.i.IgniteCheckedException: Maximum number of retries 1000 reached for
> Put operation (the tree may be corrupted). Increase
> IGNITE_BPLUS_TREE_LOCK_RETRIES system property if you regularly see this
> message (current value is 1000).]]
> org.apache.ignite.IgniteCheckedException: Maximum number of retries 1000
> reached for Put operation (the tree may be corrupted). Increase
> IGNITE_BPLUS_TREE_LOCK_RETRIES system property if you regularly see this
> message (current value is 1000). at
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Get.checkLockRetry
> (BPlusTree.java:3109) [ignite-core-2.10.0.jar:2.10.0] at
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.checkLockRetry
> (BPlusTree.java:3906) [ignite-core-2.10.0.jar:2.10.0]
>
>
>
> Tried increasing IGNITE_BPLUS_TREE_LOCK_RETRIES to 100,000 and restarted
> the nodes, but it didn’t help and the node went into the same error
> straight away.
>
>
>
> Can you please shed some lights on how to resolve the issue? Thanks.
>
>
>
> I also attach the logs for your reference:
>
> ignite-node-[1,2,3,4].log: the full log files for all nodes
>
> ignite-restart.log: the log for node 2 when it crashed
>
>
>
> Regards,
>
> Marcus
>
>
>


Re: Exception on CacheEntryProcessor invoke (2.10.0)

2021-05-20 Thread Ilya Kasnacheev
Hello!

Can you please share a runnable reproducer project which works on previous
version but fails on 2.10?

Regards,
-- 
Ilya Kasnacheev


Thu, 20 May 2021 at 17:08, ihalilaltun :

> Hi igniters,
>
> Recently we upgraded from 2.7.6 to 2.10.0, and some of our
> CacheEntryProcessors started to throw the following errors on
> cache.invoke(...) calls.
>
> Caused by: java.lang.ClassNotFoundException:
> com.segmentify.lotr.frodo.cacheentryprocessor.RockScoreUpdateProcessor
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> ~[?:1.8.0_261]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
> ~[?:1.8.0_261]
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
> ~[?:1.8.0_261]
> at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
> ~[?:1.8.0_261]
> at java.lang.Class.forName0(Native Method) ~[?:1.8.0_261]
> at java.lang.Class.forName(Class.java:348) ~[?:1.8.0_261]
>
>
> On version 2.7.6 we also got these errors from time to time, but once the
> application using these CacheEntryProcessors was restarted, the errors
> stopped occurring. On 2.10.0, this workaround did not solve our problem.
>
> Currently we have 23 different CacheEntryProcessors running on the system.
> After many different test scenarios and checks, we found a pattern in the
> error case above: only 4 of the 23 CacheEntryProcessors keep getting this
> error, and *3 of these are invoked by ExecutorServices*;
>
> Sample usage is something like the following:
>
> private ExecutorService executorService = Executors.newCachedThreadPool();
> 
> executorService.submit(() -> {
> ...
> igniteCache.withKeepBinary()
> .invoke(record.getKey(), new
> RockScoreUpdateProcessor(),
> "arg1", "arg2", "arg3");
> });
>
> *one cacheentryprocessor is invoked by XSync
> (https://github.com/antkorwin/xsync) *
>
>
> So what we see here is that, somehow, when a CacheEntryProcessor is invoked
> from a Runnable context, a ClassNotFoundException is thrown.
>
> *peerclassloading* property is set to true, *deploymentmode* is set to
> SHARED and *persistenceEnabled* is set to true.
>
>
> Can this be a bug, either known or unknown?
>
> Currently this is a blocker for upgrading our production environment. Any
> help is appreciated.
>
> Thanks.
>
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ignite read stale data from backup node

2021-05-19 Thread Ilya Kasnacheev
Hello!

Can you please share a runnable reproducer project? You may use github.

Regards,
-- 
Ilya Kasnacheev


Tue, 18 May 2021 at 14:55, guetsxjm :

> Hi Ignites,
>
> I ran into data inconsistency issues on version 2.8.1. I have three nodes
> running as a cluster, with the cache configured as follows:
>
> CacheConfiguration cacheConfiguration =
> new
> CacheConfiguration<>(Balance.class.getSimpleName());
> cacheConfiguration.setIndexedTypes(String.class, Balance.class);
> cacheConfiguration.setSqlIndexMaxInlineSize(100);
> cacheConfiguration.setSqlSchema("PUBLIC");
>
> cacheConfiguration.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
> cacheConfiguration.setBackups(4);
>
>
> cacheConfiguration.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
>
>
> Then I have very simple code to add balance in a loop:
>
> for (int i = 0; i < 1; i++) {
> balance = balanceDao.findByKey(accountId, "USD");
> balance.setQuantity(balance.getQuantity().add(BigDecimal.ONE));
> balanceDao.save(balance);
> }
>
> When I run the above on the primary node, the balance always increases by 1
> correctly on each iteration; however, when I run it on a backup node, my
> balance sometimes only increases by around 8k in total, and sometimes 9k.
>
> If writeSynchronizationMode is set to PRIMARY_SYNC and readFromBackup is
> set to false, I get the correct balance on all nodes.
>
>
> Is this a bug in 2.8.1, or is something wrong with my configuration?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Failed to Scan query data by partition index after insert data using DML

2021-05-13 Thread Ilya Kasnacheev
Hello!

You need to either use .withKeepBinary(), or provide a Java class with the
same fields as your table value, so that it can be natively mapped.

Please see
https://www.gridgain.com/docs/latest/developers-guide/SQL/sql-key-value-storage
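
A minimal sketch of the withKeepBinary() approach for a ScanQuery over an SQL-created table (the cache and field names are illustrative, not from the original report):

```java
// Assumes a table created via SQL with CACHE_NAME=CityCache; entries are
// read as BinaryObject so no value class is needed on the querying side.
IgniteCache<BinaryObject, BinaryObject> cache =
    ignite.cache("CityCache").withKeepBinary();

// Scan a single partition, as in the original question.
ScanQuery<BinaryObject, BinaryObject> qry = new ScanQuery<>();
qry.setPartition(0);

try (QueryCursor<Cache.Entry<BinaryObject, BinaryObject>> cur = cache.query(qry)) {
    for (Cache.Entry<BinaryObject, BinaryObject> e : cur)
        System.out.println(e.getValue().field("name")); // field name is hypothetical
}
```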

Regards,
-- 
Ilya Kasnacheev


Sat, 8 May 2021 at 03:02, Henric :

> Hi,
> Thanks for the reply.
> I tried to use cache_name, but I still get the exception below. I have
> specified the cache name, and I don't know why I still get this error.
> I tried to set WRAP_VALUE to false, but it only works for a single column.
> Did I miss something important?
>
> Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException:
> SQL_PUBLIC_CITY_5c1c4ecf_745a_4a99_bfbf_fde6de0bc215
> at
>
> org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:689)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1757)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> at
>
> org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:796)
> at
>
> org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:142)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinary(CacheObjectUtils.java:176)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinaryIfNeeded(CacheObjectUtils.java:62)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinariesIfNeeded(CacheObjectUtils.java:135)
> at
>
> org.apache.ignite.internal.processors.cache.CacheObjectUtils.unwrapBinariesIfNeeded(CacheObjectUtils.java:77)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheContext.unwrapBinariesIfNeeded(GridCacheContext.java:1796)
> at
>
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryFutureAdapter.onPage(GridCacheQueryFutureAdapter.java:351)
> at
>
> org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager.processQueryResponse(GridCacheDistributedQueryManager.java:403)
> at
>
> org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager.access$000(GridCacheDistributedQueryManager.java:64)
> at
>
> org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager$1.apply(GridCacheDistributedQueryManager.java:94)
> at
>
> org.apache.ignite.internal.processors.cache.query.GridCacheDistributedQueryManager$1.apply(GridCacheDistributedQueryManager.java:92)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1142)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:591)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$800(GridCacheIoManager.java:109)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$OrderedMessageListener.onMessage(GridCacheIoManager.java:1707)
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907)
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.access$5200(GridIoManager.java:241)
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:3916)
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1862)
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.access$5500(GridIoManager.java:241)
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager$10.run(GridIoManager.java:1829)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.ClassNotFoundException:
> SQL_PUBLIC_CITY_5c1c4ecf_745a_4a99_bfbf_fde6de0bc215
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:355)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at
> org.apache.ignite.internal.util.IgniteUtils.forName(Ignit

Re: Client node disconnected from cluster

2021-05-13 Thread Ilya Kasnacheev
Hello!

For some reason the client node failed to send metrics within 30,000 ms
(half a minute).

Not sure what to do here. Insufficient memory? Laptop went to sleep? Disk
swapping? Garbage collection?

You can also increase the listed parameter.
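
If the pauses are expected (for example, long GC on the client), the timeout can be raised in the server node configuration. A hedged Spring XML sketch (the 60,000 ms value is only an example, not a recommendation):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Allow clients up to 60 s between metrics updates before dropping them. -->
    <property name="clientFailureDetectionTimeout" value="60000"/>
</bean>
```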

Regards,
-- 
Ilya Kasnacheev


Thu, 6 May 2021 at 14:46, itsmeravikiran.c :

> Getting below error on ignite server:
>
> FAIL: [tcp-disco-client-message-worker-[[CredentialsTcpDiscoverySpi] Client
> node considered as unreachable and will be dropped from cluster, because no
> metrics update messages received in interval:
> TcpDiscoverySpi.clientFailureDetectionTimeout() ms. It may be caused by
> network problems or long GC pause on client node, try to increase this
> parameter. [clientFailureDetectionTimeout=3]
>
> Getting below error on my Client:
>
> 02:57:20.732 [tcp-client-disco-msg-worker-#4] INFO
> c.s.p.i.s.CredentialsTcpDiscoverySpi - Client node disconnected from
> cluster, will try to reconnect with new id.
>
> How can we avoid the above disconnection from the cluster?
> What is the relationship between the two error messages above?
> Do you know the root cause of these error messages?
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Peer ClassLoading Issue | Apache Ignite 2.10 with Spring Boot 2.3

2021-05-13 Thread Ilya Kasnacheev
Hello!

Peer class loading will not peer load key-value classes, so you need to have
them on the server side if running code there (or use cache.withKeepBinary()).

Regards,
-- 
Ilya Kasnacheev


Wed, 12 May 2021 at 09:57, Vasily Laktionov :

> Hi,
> Try cacheConfiguration.setPeerClassLoadingEnabled(true).
> Also you can try cacheConfiguration.setDeploymentMode(PRIVATE).
>
> https://ignite.apache.org/docs/latest/code-deployment/peer-class-loading#enabling-peer-class-loading
> <
> https://ignite.apache.org/docs/latest/code-deployment/peer-class-loading#enabling-peer-class-loading>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite throws "Failed to resolve class name" Exception

2021-05-13 Thread Ilya Kasnacheev
Hello again!

I have also noticed that you are running lambdas on the server side. Is
peer class loading enabled? Do you have the key/value type classes on the
server node?

Ignite will not peer load key-value classes currently, so you need to
explicitly deploy them on the server to be able to reference them in
server-running code.

Regards,
-- 
Ilya Kasnacheev


Thu, 13 May 2021 at 15:20, Ilya Kasnacheev :

> Hello!
>
> Can you share a sample of code which causes the issue, as well as cache
> configuration?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 7 May 2021 at 21:04, tsipporah22 :
>
>> Hi experts,
>>
>> I'm running ignite server node in k8s and recently I upgraded ignite to
>> 2.10.0. Ignite is started with below command:
>> /opt/rta/os/jre/bin/java -XX:+AggressiveOpts
>> -Djava.net.preferIPv4Stack=true
>> -XX:+UseG1GC -XX:+DisableExplicitGC -Dfile.encoding=UTF-8
>> -DIGNITE_QUIET=false
>>
>> -DIGNITE_SUCCESS_FILE=/opt/rta/os/ignite/work/ignite_success_f0dac24c-d7aa-48bd-82d6-1492ab47e18f
>> -DIGNITE_HOME=/opt/rta/os/ignite
>> -DIGNITE_PROG_NAME=/opt/rta/os/ignite/bin/ignite.sh -cp
>>
>> /opt/rta/os/ignite/libs/*:/opt/rta/os/ignite/libs/ignite-control-utility/*:/opt/rta/os/ignite/libs/ignite-indexing/*:/opt/rta/os/ignite/libs/ignite-kubernetes/*:/opt/rta/os/ignite/libs/ignite-rest-http/*:/opt/rta/os/ignite/libs/ignite-spring/*:/opt/rta/os/ignite/libs/licenses/*:/opt/rta/os/ignite/libs/rta-windows/*
>> org.apache.ignite.startup.cmdline.CommandLineStartup
>> /opt/rta/os/ignite/config/ignite-config.xml
>>
>> From time to time I'm getting the exception below, which complains about
>> "Failed to resolve class name":
>>
>> SEVERE: Failed to notify listener:
>>
>> com.rta.rtanalytics.baseline.common.compute.WindowOptimizer$$Lambda$918/1241362859@cd85226
>> javax.cache.CacheException: class
>> org.apache.ignite.IgniteCheckedException:
>> Failed to resolve class name [platformId=0, platform=Java,
>> typeId=-1052093315]
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1263)
>> at
>>
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:2083)
>> at
>>
>> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:1110)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:676)
>> at
>>
>> com.rta.rtanalytics.baseline.common.compute.WindowOptimizer.lambda$updateCacheState$44eb080f$1(WindowOptimizer.java:177)
>> at
>>
>> org.apache.ignite.internal.util.future.IgniteFutureImpl$InternalFutureListener.apply(IgniteFutureImpl.java:214)
>> at
>>
>> org.apache.ignite.internal.util.future.IgniteFutureImpl$InternalFutureListener.apply(IgniteFutureImpl.java:179)
>> at
>>
>> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
>> at
>>
>> org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347)
>> at
>>
>> org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335)
>> at
>>
>> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:511)
>> at
>>
>> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:490)
>> at
>>
>> org.apache.ignite.internal.processors.task.GridTaskWorker.finishTask(GridTaskWorker.java:1650)
>> at
>>
>> org.apache.ignite.internal.processors.task.GridTaskWorker.finishTask(GridTaskWorker.java:1618)
>> at
>>
>> org.apache.ignite.internal.processors.task.GridTaskWorker.reduce(GridTaskWorker.java:1193)
>> at
>>
>> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:975)
>> at
>>
>> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1161)
>> at
>>
>> org.apache.ignite.internal.processors.job.GridJobWorker.finishJob(GridJobWorker.java:965)
>> at
>>
>> org.apache.ignite.internal.processors.job.GridJobWorker.finishJob(GridJobWorker.java:813)
>> at
>>
>> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:662)
>> at

Re: Ignite throws "Failed to resolve class name" Exception

2021-05-13 Thread Ilya Kasnacheev
Hello!

Can you share a sample of code which causes the issue, as well as cache
configuration?

Regards,
-- 
Ilya Kasnacheev


Fri, 7 May 2021 at 21:04, tsipporah22 :

> Hi experts,
>
> I'm running ignite server node in k8s and recently I upgraded ignite to
> 2.10.0. Ignite is started with below command:
> /opt/rta/os/jre/bin/java -XX:+AggressiveOpts
> -Djava.net.preferIPv4Stack=true
> -XX:+UseG1GC -XX:+DisableExplicitGC -Dfile.encoding=UTF-8
> -DIGNITE_QUIET=false
>
> -DIGNITE_SUCCESS_FILE=/opt/rta/os/ignite/work/ignite_success_f0dac24c-d7aa-48bd-82d6-1492ab47e18f
> -DIGNITE_HOME=/opt/rta/os/ignite
> -DIGNITE_PROG_NAME=/opt/rta/os/ignite/bin/ignite.sh -cp
>
> /opt/rta/os/ignite/libs/*:/opt/rta/os/ignite/libs/ignite-control-utility/*:/opt/rta/os/ignite/libs/ignite-indexing/*:/opt/rta/os/ignite/libs/ignite-kubernetes/*:/opt/rta/os/ignite/libs/ignite-rest-http/*:/opt/rta/os/ignite/libs/ignite-spring/*:/opt/rta/os/ignite/libs/licenses/*:/opt/rta/os/ignite/libs/rta-windows/*
> org.apache.ignite.startup.cmdline.CommandLineStartup
> /opt/rta/os/ignite/config/ignite-config.xml
>
> From time to time I'm getting the exception below, which complains about
> "Failed to resolve class name":
>
> SEVERE: Failed to notify listener:
>
> com.rta.rtanalytics.baseline.common.compute.WindowOptimizer$$Lambda$918/1241362859@cd85226
> javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException:
> Failed to resolve class name [platformId=0, platform=Java,
> typeId=-1052093315]
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1263)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:2083)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:1110)
> at
>
> org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:676)
> at
>
> com.rta.rtanalytics.baseline.common.compute.WindowOptimizer.lambda$updateCacheState$44eb080f$1(WindowOptimizer.java:177)
> at
>
> org.apache.ignite.internal.util.future.IgniteFutureImpl$InternalFutureListener.apply(IgniteFutureImpl.java:214)
> at
>
> org.apache.ignite.internal.util.future.IgniteFutureImpl$InternalFutureListener.apply(IgniteFutureImpl.java:179)
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:399)
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:347)
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:335)
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:511)
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:490)
> at
>
> org.apache.ignite.internal.processors.task.GridTaskWorker.finishTask(GridTaskWorker.java:1650)
> at
>
> org.apache.ignite.internal.processors.task.GridTaskWorker.finishTask(GridTaskWorker.java:1618)
> at
>
> org.apache.ignite.internal.processors.task.GridTaskWorker.reduce(GridTaskWorker.java:1193)
> at
>
> org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:975)
> at
>
> org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1161)
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker.finishJob(GridJobWorker.java:965)
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker.finishJob(GridJobWorker.java:813)
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:662)
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:521)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to
> resolve
> class name [platformId=0, platform=Java, typeId=-1052093315]
> at
> org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7587)
> at
>
> org.apache.ignite.internal.util.future.GridFutureAdapter.resolve(GridFutureAdapter.

Re: Execute Ignite Callable Jobs with set priorities

2021-05-13 Thread Ilya Kasnacheev
Hello!

Just make sure to use
new PriorityQueueCollisionSpi().setStarvationPreventionEnabled(false);

otherwise you may get sorting errors on newer JVMs.
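In Spring XML configuration this would look roughly like the following (a sketch; fold it into the rest of your IgniteConfiguration):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Priority-based job scheduling; starvation prevention disabled
         to avoid the sorting errors mentioned above. -->
    <property name="collisionSpi">
        <bean class="org.apache.ignite.spi.collision.priorityqueue.PriorityQueueCollisionSpi">
            <property name="starvationPreventionEnabled" value="false"/>
        </bean>
    </property>
</bean>
```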

Regards,
-- 
Ilya Kasnacheev



Thu, 13 May 2021 at 11:31, Stephen Darlington <
stephen.darling...@gridgain.com>:

> Yes, it’s configurable using the CollisionSPI. More here:
> https://ignite.apache.org/docs/latest/distributed-computing/job-scheduling
>
> Regards,
> Stephen
>
> > On 13 May 2021, at 06:39, Kishan  wrote:
> >
> > Hi All,
> >
> > I have a use case where I need to create thousands of Ignite Callable
> > tasks and execute these tasks on an Ignite cluster. Some of these jobs
> > should be executed with high priority by the Ignite cluster. For
> > example, the Ignite client has sent around 50 tasks to the cluster and
> > now they are in the queue or are being executed. At this point, the
> > client receives a request which should be executed with the highest
> > priority. The client will create a compute task with priority set to
> > High and send it to the Ignite cluster via the executor service. Is
> > there any way Ignite can know that certain tasks which are submitted
> > with high priority should be executed before any other tasks present in
> > the queue?
> >
> > Thanks..
> > K
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


Re: Query with join hangs on ignite thick client

2021-05-13 Thread Ilya Kasnacheev
Hello!

Are you sure that your thick client has enough memory for the reduce
phase? It may boil down to the fact that the servers have more RAM for the
join/aggregation.

Can you collect heap/thread dumps during the hang and check them to see
what happens?
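For example, with the standard JDK tools (the PID and file names here are placeholders):

```shell
# Thread dump of the hanging thick client; repeat a few times during the hang
jstack <pid> > client-threads.txt

# Heap dump to check memory usage during the reduce phase
jmap -dump:live,format=b,file=client-heap.hprof <pid>
```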

Regards,
-- 
Ilya Kasnacheev


Wed, 5 May 2021 at 20:20, harinath :

> Hi,
>
> I have a below query to run using joins
>
> SELECT income_summarytest.workclass, income_summarytest.education,
> income_summarytest.marital_status, income_summarytest.occupation,
> income_summarytest.race, income_summarytest.gender,
> income_summarytest.capital_gain, income_summarytest.capital_loss,
> income_summarytest.age, income_summarytest.native_country,
> income_summarytest.income FROM income_summarytest JOIN (SELECT
> max(hours_per_week) AS max_hours FROM income_summarytest) ON
> income_summarytest.hours_per_week = max_hours ORDER BY
> income_summarytest.id
> OFFSET 0 FETCH NEXT 500 ROWS ONLY
>
> When I try to run the query using the thin client, everything looks fine
> and results are obtained as expected. However, when I try to run the same
> query using the thick client, it hangs.
>
> Is there anything that is missing? Or is there any workaround to achieve
> this?
>
>
> Thanks,
> Harinath
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Please unsubscribe me

2021-05-13 Thread Ilya Kasnacheev
Hello!

You need to send an email to user-unsubscr...@ignite.apache.org

Regards,
-- 
Ilya Kasnacheev


Wed, 12 May 2021 at 19:13, Bellrose, Brian :

> I tried to unsubscribe through the site, but I still get emails. Please
> unsubscribe my email.
>
> Brian
> *This email and any attachments are only for use by the intended
> recipient(s) and may contain legally privileged, confidential, proprietary
> or otherwise private information. Any unauthorized use, reproduction,
> dissemination, distribution or other disclosure of the contents of this
> e-mail or its attachments is strictly prohibited. If you have received this
> email in error, please notify the sender immediately and delete the
> original. Neither this information block, the typed name of the sender, nor
> anything else in this message is intended to constitute an electronic
> signature unless a specific statement to the contrary is included in this
> message. *
>


Re: ClassNotFoundException in Ignite cron occasionally

2021-05-12 Thread Ilya Kasnacheev
Hello!

Did you come around to solve the issue? Can you please share your findings
with our community?

Regards,
-- 
Ilya Kasnacheev


Wed, 14 Apr 2021 at 17:39, mohdgadi :

> I am running an Ignite cron in Ignite version 2.8.1, but I am occasionally
> facing the below exception on cron runs on Ignite server nodes. I have
> enabled peer-to-peer class loading in the Ignite configuration but am
> still facing this issue sometimes. Below are the code and exception. I
> have tried removing the withBinary option on the cache, but the error
> still persists. Not sure if this is a marshalling issue or an issue due to
> an Ignite lambda.
>
> Code -
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t3125/Screenshot_2021-04-14_at_4.png>
>
>
> Error -
> class org.apache.ignite.IgniteCheckedException: Failed to deserialize
> object
>
> [typeName=org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4]
> at
>
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10310)
> at
>
> org.apache.ignite.internal.processors.job.GridJobWorker.initialize(GridJobWorker.java:468)
> at
>
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1287)
> at
>
> org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:2121)
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1847)
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1472)
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager.access$5200(GridIoManager.java:229)
> at
>
> org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1367)
> at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: class org.apache.ignite.binary.BinaryObjectException: Failed to
> deserialize object
>
> [typeName=org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4]
> at
>
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:927)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> at
>
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:319)
> at
>
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:304)
> at
>
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:101)
> at
>
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:81)
> at
>
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10304)
> ... 10 more
> Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException:
> com.dream11.watchlive.IgniteReconciliationCron$$Lambda$615/798682906
> at
>
> org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:762)
> at
>
> org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:759)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1757)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> at
>
> org.apache.ignite.internal.binary.BinaryUtils.doReadObject(BinaryUtils.java:1798)
> at
>
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readObject(BinaryReaderExImpl.java:1331)
> at
>
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4.readBinary(GridClosureProcessor.java:1959)
> at
>
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:878)
> ... 17 more
> Caused by: java.lang.ClassNotFoundException:
> com.dream11.watchlive.IgniteReconciliationCron$$Lambda$615/798682906
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at
> org.apache.ignite.internal.util.IgniteUtils.forName(IgniteUtils.java:8828)
> at
>
> org.apache.ignite.internal.MarshallerContextImpl.getClass(MarshallerContextImpl.java:324)
> at
>
> org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:753)
> ... 24 more
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Kafka connector module

2021-05-11 Thread Ilya Kasnacheev
Hello!

Please take a look at
https://github.com/apache/ignite-extensions/tree/master/modules/kafka-ext

Regards,
-- 
Ilya Kasnacheev


Tue, 11 May 2021 at 16:03, facundo.maldonado :

> In many posts here I saw a reference to this repository
> https://github.com/apache/ignite/tree/master/modules/kafka
> that seems to have been moved or deleted.
>
> Is there any place where I can see a complete example of a kafka connector
> integration code?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: [GridCacheMapEntry] Failed to update counter for atomic cache

2021-05-07 Thread Ilya Kasnacheev
Hello!

Can you please share complete log from that node?

It seems that the cache for which this Continuous Query was triggered
suddenly went away.

Regards,
-- 
Ilya Kasnacheev


Fri, 7 May 2021 at 01:09, Jigna :

> Hi Team, I got the below exception on my Ignite server node. Would you
> please help me check why I am getting this exception?
>
> [19:33:25,235][SEVERE][sys-stripe-1-#2][GridCacheMapEntry] Failed to update
> counter for atomic cache [, initial=false, primaryCntr=null,
> part=GridDhtLocalPartition [id=425, delayedRenting=false,
> clearVer=1620155854141, grp=Activity, state=OWNING, reservations=1,
> empty=false, createTime=05/04/2021 19:17:48, fullSize=0, cntr=Counter
> [init=18, val=19, clearCntr=0]]]
> java.lang.NullPointerException
> at
>
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.handleEvent(CacheContinuousQueryHandler.java:1004)
> at
>
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.access$1500(CacheContinuousQueryHandler.java:90)
> at
>
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler$2.skipUpdateCounter(CacheContinuousQueryHandler.java:569)
> at
>
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.skipUpdateCounter(CacheContinuousQueryManager.java:262)
> at
>
> org.apache.ignite.internal.processors.cache.CacheGroupContext.onPartitionCounterUpdate(CacheGroupContext.java:1062)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtLocalPartition.nextUpdateCounter(GridDhtLocalPartition.java:767)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheEntry.nextPartitionCounter(GridDhtCacheEntry.java:103)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.update(GridCacheMapEntry.java:6336)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:6080)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry$AtomicCacheUpdateClosure.call(GridCacheMapEntry.java:5770)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:4022)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$5700(BPlusTree.java:3916)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:2045)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1923)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1860)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1843)
> at
>
> org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:2991)
> at
>
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:451)
> at
>
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2248)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2533)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:1993)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1824)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1679)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3146)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:151)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:286)
> at
>
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:281)
> at
>
> org.apache.ignite.internal.processors.c

Re: JMX Exporters with Ignite

2021-05-06 Thread Ilya Kasnacheev
Hello!

It will always add this number by default. You need to restart all nodes
with this setting to get rid of it.

Regards,
-- 
Ilya Kasnacheev


Wed, 5 May 2021 at 14:31, Naveen :

> Thanks Ilya, it works and resolves my issue to an extent, but I need to
> restart all the nodes to get this change reflected.
> But my question is: there is only one Ignite instance running on this VM,
> and apart from the exporter process there is no other JVM running on the
> VM. What else could be the reason why it is generating a new class loader
> ID? Has anything changed on the VM or OS side, or is there anything else
> we can suspect?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Setting the threshold limits for SQL operations

2021-05-05 Thread Ilya Kasnacheev
Hello!

I don't think it is possible, since the H2 engine does not provide options
to limit CPU and heap usage, AFAIK. The Ignite community has decided not
to rely on a forked H2 engine, as e.g. recent GridGain does.

Maybe this will change when Calcite is introduced, and limiting will be
possible.

You can tune the query thread pool to change how many parallel query
threads may utilize the CPU; you may also tune query parallelism to
increase CPU usage by individual queries.
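For illustration, the two knobs mentioned above are set like this in Spring XML (the cache name and values are placeholders):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Caps how many query threads the node runs in parallel. -->
    <property name="queryThreadPoolSize" value="8"/>

    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="myCache"/>
            <!-- Lets a single query use several threads on each node. -->
            <property name="queryParallelism" value="4"/>
        </bean>
    </property>
</bean>
```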

Regards,
-- 
Ilya Kasnacheev


Sun, 2 May 2021 at 16:33, Naveen :

> Hello Everyone
>
> I have seen somewhere that we can set limits for all the resources that
> can be allocated for SQL operations; for example, you can set a limit on
> heap utilization for all SQL operations, and similarly for CPU, etc. If
> an operation goes beyond those limits, it will fail with an appropriate
> error message.
>
> Can you please let me know which limits we can set and how we set them?
>
> We are using ignite 2.8.1
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Help with affinityRun on collocated ignite queue

2021-05-05 Thread Ilya Kasnacheev
Hello!

I don't understand your question. A collocated queue is located on a single
node, but you still need a method to run code on that specific node, don't
you?

The benefit is presumably shorter round-trip times.

Regards,
-- 
Ilya Kasnacheev


Tue, 4 May 2021 at 18:50, ps594 :

> I am trying to understand the use case of  affinityRun / affinityCall
> <
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteQueue.html#affinityCall-org.apache.ignite.lang.IgniteCallable->
>
> methods of the IgniteQueue interface. Even after delving into the
> documentation, I could not find a good use case for running jobs on a
> collocated queue using the affinityRun method of the queue interface:
> since collocated queues are on the same node, why can't I simply write my
> own lambda function? Specifically, what benefit does affinityRun offer
> for an Ignite queue (given that affinityRun is not supported for
> non-collocated queues)?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: JMX Exporters with Ignite

2021-05-05 Thread Ilya Kasnacheev
Hello!

This is in case you are running more than one node per VM.

You can disable addition of this number by setting
IGNITE_MBEAN_APPEND_CLASS_LOADER_ID system property or environment variable
to false prior to starting the VM, such as by adding the following JVM arg:

-DIGNITE_MBEAN_APPEND_CLASS_LOADER_ID=false

Regards,
-- 
Ilya Kasnacheev


Wed, 5 May 2021 at 10:40, Naveen :

> Hello ALl
>
> We are using Ignite 2.8.1 and have JMX exporters running on each node and
> everything seems to be working fine.
>
> Earlier it used to generate the metrics with the below 3d4eac69; all 5
> nodes used to generate the same:
> org_apache_*3d4eac69*_HeapMemoryUsed
>
> Recently, we had to restart 2 nodes for some changes. Afterwards, when I
> looked at the metrics, it started generating them like this, changed from
> 3d4eac69 to 2c13da15. I have restarted the nodes many times before, but
> this never changed:
> org_apache_*2c13da15*_HeapMemoryUsed
>
> With this change, metrics are not getting displayed properly on the
> dashboard. Any idea how this number (alphanumeric 2c13da15) changed all
> of a sudden, and how can we keep this number or text the same for all the
> nodes of the cluster?
> Looks a bit strange, but it happened.
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Regarding CQ monitoring and alerting

2021-05-04 Thread Ilya Kasnacheev
Hello!

I guess that CQ may stop working on node reconnect. You may need to handle
EVT_CLIENT_NODE_RECONNECTED and re-register all of your continuous queries.
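For alerting, one simple pattern (independent of Ignite APIs; the helper names here are mine, a sketch, and it only works when events are expected regularly) is a watchdog that tracks the timestamp of the last received CQ event and reports staleness:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a liveness watchdog for a continuous-query listener.
// Wire onEvent() into your CQ local listener and poll isStale() from a
// scheduled task to raise an alert (or trigger re-registration).
public class CqWatchdog {
    private final AtomicLong lastEventMillis =
        new AtomicLong(System.currentTimeMillis());
    private final long staleAfterMillis;

    public CqWatchdog(long staleAfterMillis) {
        this.staleAfterMillis = staleAfterMillis;
    }

    // Call on every event delivered to the CQ local listener.
    public void onEvent() {
        lastEventMillis.set(System.currentTimeMillis());
    }

    // True when no event has been seen for longer than the threshold.
    public boolean isStale() {
        return System.currentTimeMillis() - lastEventMillis.get() > staleAfterMillis;
    }
}
```

Note that this only detects a dead CQ when updates are known to flow continuously; for quiet caches you would need a periodic probe write instead.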

Regards,
-- 
Ilya Kasnacheev


Sat, 1 May 2021 at 11:32, Devakumar J :

> Hi,
>
> We make wide use of CQ listeners in the application, registered from
> client nodes. But we observe that they randomly stop working (i.e. the
> listener is not notified on cache events), and it requires restarting the
> client node.
>
> Is there an easy way to alert immediately if a CQ stops working?
>
>
> Thanks & Regards,
> Devakumar J
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IgniteCheckedException: Requesting mapping from grid failed for [platformId=0, typeId=-1220482121]

2021-05-04 Thread Ilya Kasnacheev
Hello!

Can you please share some simple reproducer project which highlights the
issue?

You can put it on github or similar.

Did you enable the simple name mapper, by the way? The error is what you
can expect when type names on the Java and C# sides do not match.

Regards,
-- 
Ilya Kasnacheev


Tue, 4 May 2021 at 06:26, William.L :

> Hi,
>
> I used C# code (entity object class) to create and write to cache. And I am
> trying to use Java code to read (corresponding object class) from cache but
> running into IgniteCheckedException:
>
>
>
> Is this scenario supported?
>
> Here's the C# entity class:
>
>
> Here's the corresponding Java class:
>
>
> I am able to write to the cache from the Java side and then read from it.
> The object written from the Java side does not show up in the SQL queries
> (cache/table was created using the C# entity class).
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Too many TCP discovery accepted incoming connections

2021-05-04 Thread Ilya Kasnacheev
Hello!

I'm not sure. Why?

Regards,
-- 
Ilya Kasnacheev


Sun, 2 May 2021 at 11:13, VeenaMithare :

> Hi Ilya,
>
> Could the issue be as mentioned here :
>
>
> http://apache-ignite-users.70518.x6.nabble.com/2-8-1-INFO-org-apache-ignite-spi-communication-tcp-TcpCommunicationSpi-Accepted-incoming-communicatin-tp33854p35224.html
>
> regards,
> Veena.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: [External]Re: Cluster becomes unresponsive if multiple clients join at a time

2021-05-04 Thread Ilya Kasnacheev
Hello!

I think the relevant ticket is
https://issues.apache.org/jira/browse/IGNITE-9558

It is poorly documented; I believe the prime condition is that the client
node should not define any caches in its configuration.

Regards,
-- 
Ilya Kasnacheev


Thu, 22 Apr 2021 at 16:07, Kamlesh Joshi :

> Thanks Ilya!
>
>
>
> I have observed PME related log entries in latest version as well. Does
> this mean these ‘some conditions’ are not met?
>
>
>
> Can you elaborate on this please ?
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* 22 April 2021 16:24
> *To:* user@ignite.apache.org
> *Subject:* Re: [External]Re: Cluster becomes unresponsive if multiple
> clients join at a time
>
>
>
> The e-mail below is from an external source. Please do not open
> attachments or click links from an unknown or suspicious origin.
>
> Hello!
>
>
>
> Yes, it should no longer be blocking assuming some conditions are met.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
Thu, 22 Apr 2021 at 08:26, Kamlesh Joshi :
>
> Hi Ilya,
>
>
>
> Yeah, that’s what we were suspecting too: PME triggering might be causing
> the issue. We are using version 2.7.6.
> So you are saying that recent versions, i.e. 2.10.0, don’t have a
> blocking global PME?
> blocking global PME ?
>
>
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* 21 April 2021 20:12
> *To:* user@ignite.apache.org
> *Subject:* [External]Re: Cluster becomes unresponsive if multiple clients
> join at a time
>
>
>
> The e-mail below is from an external source. Please do not open
> attachments or click links from an unknown or suspicious origin.
>
> Hello!
>
>
>
> What is the version used? Usually, adding a new thick client would trigger
> a PME (a global blocking operation). In recent versions, they should be
> able to join without exchange.
>
>
>
> You could use flavors of thin client if you need a massive number of those.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
Wed, 21 Apr 2021 at 14:40, Kamlesh Joshi :
>
> Hi Igniters,
>
>
>
> We have observed that if multiple clients (say around 50) join within a
> very short span of time, the cluster seems unresponsive for some time,
> causing entire cluster traffic to go down.
>
> Has anyone encountered this behaviour before? Are there any parameters to
> be tweaked to avoid this?
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
>
> "*Confidentiality Warning*: This message and any attachments are intended
> only for the use of the intended recipient(s), are confidential and may be
> privileged. If you are not the intended recipient, you are hereby notified
> that any review, re-transmission, conversion to hard copy, copying,
> circulation or other use of this message and any attachments is strictly
> prohibited. If you are not the intended recipient, please notify the sender
> immediately by return email and delete this message and any attachments
> from your system.
>
> *Virus Warning:* Although the company has taken reasonable precautions to
> ensure no viruses are present in this email. The company cannot accept
> responsibility for any loss or damage arising from the use of this email or
> attachment."
>
>


Re: rebalancing & K8

2021-04-28 Thread Ilya Kasnacheev
Hello!

Usually you need to adjust (or auto-adjust) the baseline topology after
scaling the cluster up or down.

You also need to make sure that nodes stay in one cluster.
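For reference, the baseline can be adjusted with the control script; the exact flags vary by Ignite version (the values below are placeholders, see `control.sh --baseline --help`):

```shell
# Manually set the baseline to a given topology version:
./control.sh --baseline version <topologyVersion> --yes

# Or (Ignite 2.8+) enable automatic baseline adjustment with a delay:
./control.sh --baseline auto_adjust enable timeout 60000
```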

Regards,
-- 
Ilya Kasnacheev


Mon, 26 Apr 2021 at 15:02, narges saleh :

> Hi folks
>
> If I am deploying my ignite cluster using AKS, is defining the auto
> discovery service sufficient?
> I  am following this link:
>
> https://ignite.apache.org/docs/latest/installation/kubernetes/azure-deployment
>
> Specifically, I am concerned about Ignite's node/partition rebalancing in
> the case of auto-scaling. When K8s adds or removes nodes and pods,
> meaning Ignite nodes get added or removed, does rebalancing kick in
> properly? Do I need to tune any parameters specifically for deployment
> into K8s? Do I need to set up liveness probes?
>
> thanks.
>


Re: Too many TCP discovery accepted incoming connections

2021-04-28 Thread Ilya Kasnacheev
Hello!

Please consider the following messages:

[2021-04-21T14:55:09,203][WARN
][tcp-comm-worker-#1%EDIFCustomer%][TcpCommunicationSpi] Connect timed out
(consider increasing 'failureDetectionTimeout' configuration property)
[addr=/10.40.0.78:47100, failureDetectionTimeout=6]
[2021-04-21T14:55:09,203][WARN
][tcp-comm-worker-#1%EDIFCustomer%][TcpCommunicationSpi] Failed to connect
to a remote node (make sure that destination node is alive and operating
system firewall is disabled on local and remote hosts) [addrs=[/
10.40.0.78:47100, /127.0.0.1:47100]]

I can see that communication threads will spend a lot of time on connect(),
indicating network or firewall issues:
Thread [name="tcp-comm-worker-#1%EDIFCustomer%", id=365, state=RUNNABLE,
blockCnt=1294, waitCnt=12569]
at sun.nio.ch.Net.poll(Native Method)
at sun.nio.ch.SocketChannelImpl.poll(SocketChannelImpl.java:954)
- locked java.lang.Object@65ec5b09
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:110)
- locked java.lang.Object@9ecd49c
at
o.a.i.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3299)
at
o.a.i.spi.communication.tcp.TcpCommunicationSpi.createNioClient(TcpCommunicationSpi.java:2987)
at
o.a.i.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:2870)
at
o.a.i.spi.communication.tcp.TcpCommunicationSpi.access$6000(TcpCommunicationSpi.java:271)
at
o.a.i.spi.communication.tcp.TcpCommunicationSpi$CommunicationWorker.processDisconnect(TcpCommunicationSpi.java:4489)
at
o.a.i.spi.communication.tcp.TcpCommunicationSpi$CommunicationWorker.body(TcpCommunicationSpi.java:4294)
at o.a.i.i.util.worker.GridWorker.run(GridWorker.java:120)
at
o.a.i.spi.communication.tcp.TcpCommunicationSpi$5.body(TcpCommunicationSpi.java:2237)
at o.a.i.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)

I think this is the root cause. Your server node cannot connect to some of
your remaining nodes' communication ports. Maybe your server node is
behind NAT or a firewall. Consider enabling the NAT traversal feature:
https://ignite.apache.org/docs/latest/clustering/running-client-nodes-behind-nat

Regards,
-- 
Ilya Kasnacheev


Thu, 22 Apr 2021 at 21:58, Gangaiah Gundeboina :

> HI Ilya,
>
> Please find attached full log file.
>
> Regards,
> Gangaiah
>
> server_log.zip
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2396/server_log.zip>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Understanding SQL join performance

2021-04-28 Thread Ilya Kasnacheev
Hello!

If you had any images in your email, we are not seeing them. Please provide
links.

Regards,
-- 
Ilya Kasnacheev


Sat, 24 Apr 2021 at 03:24, William.L :

> Hi,
>
> I am trying to understand why my colocated join between two tables/caches
> is taking so long compared to the individual table filters.
>
> TABLE1
>
> Returns 1 count -- 0.13s
>
> TABLE2
>
> Returns 65000 count -- 0.643s
>
>
>  JOIN TABLE1 and TABLE2
>
> Returns 650K count -- 7s
>
> Both analysis_input and analysis_output has index on (cohort_id, user_id,
> timestamp). The affinity key is user_id. How do I analyze the performance
> further?
>
> Here's the explain which does not tell me much:
>
>
>
> Is Ignite doing the join and filtering at each data node and then sending
> the 650K total rows to the reduce before aggregation? If so, is it possible
> for Ignite to do the some aggregation at the data node first and then send
> the first level aggregation results to the reducer?
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Designing Affinity Key for more locality

2021-04-28 Thread Ilya Kasnacheev
Hello!

The SQL query planner will not understand the locality if you use a
surrogate value as the affinity key.

Maybe you need to define your own affinity function (extending
RendezvousAffinityFunction) which will map keys to partitions. I'm not sure
that it will help the query planner, though.

Regards,
-- 
Ilya Kasnacheev


вт, 27 апр. 2021 г. в 09:01, Pavel Tupitsyn :

> Hi William,
>
> Can you describe the use case and domain model in more detail?
>
> 1. AffinityKey is used to colocate some data with other data.
>What do you achieve with user-id being the affinity key?
>
> 2. If you'd like to put all users for a given tenant/group
> to the same node for efficiency, then use tenant-id as the user
> affinity key.
> UUID is fine, no need for extra logic with ints.
>
> On Tue, Apr 27, 2021 at 5:33 AM William.L  wrote:
>
>> Came across this statement in the Data Partitioning documents:
>>
>> "The affinity function determines the mapping between keys and partitions.
>> Each partition is identified by a number from a limited set (0 to 1023 by
>> default)."
>>
>> Looks like there is no point for adding another layer of mapping unless I
>> am
>> going for a smaller number.
>> Are there other ways in ignite to get more locality for subset of the
>> data?
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Ignite 2.10. Performance tests in Azure

2021-04-23 Thread Ilya Kasnacheev
Hello!

Why do you expect it to scale if you only seem to run this in a single
thread?

In a distributed system, throughput will scale with cluster growth, but
latency will be steady or become slightly worse.

You need to run the same test with a sufficient number of threads, and
maybe use more than one client (VM and all) to drive enough load to
saturate the cluster.
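To illustrate what driving load with many threads looks like, here is a minimal self-contained sketch of a multi-threaded load driver. The ConcurrentHashMap is a stand-in for the cache client (an assumption for illustration only); a real benchmark would call the Ignite cache or thin-client API from each worker thread instead:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class LoadDriver {
    // Stand-in for the cache under test; a real benchmark would call the
    // Ignite cache (or thin client) API here instead.
    static final ConcurrentHashMap<Integer, byte[]> cache = new ConcurrentHashMap<>();

    static long drive(int threads, int opsPerThread) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicLong done = new AtomicLong();
        for (int t = 0; t < threads; t++) {
            final int base = t * opsPerThread;
            pool.submit(() -> {
                byte[] payload = new byte[528]; // 528-byte values, as in the test above
                for (int i = 0; i < opsPerThread; i++) {
                    cache.put(base + i, payload);
                    done.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Measure wall-clock time for the same total work at different
        // thread counts to see whether throughput actually scales.
        long start = System.nanoTime();
        long ops = drive(8, 10_000);
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println(ops + " ops in " + ms + " ms");
    }
}
```

Running this with 1, 8, and 32 threads (and from more than one client VM) gives a throughput curve; a flat curve at one thread says nothing about cluster scalability.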

What is the CPU usage during the test on server nodes, per cluster size?

Regards,
-- 
Ilya Kasnacheev


пт, 23 апр. 2021 г. в 10:59, jjimeno :

> Hello all,
>
> For our project we need a distributed database with transactional support,
> and Ignite is one of the options we are testing.
>
> Scalability is one of our must have, so we created an Ignite Kubernetes
> cluster in Azure to test it, but we found that the results were not what we
> expected.
>
> To rule out a problem in our code or in the use of transactional caches, we
> created a small test program that writes/reads 1.8M keys of 528 bytes each
> (it represents one of our data types).
>
> As you can see in this graph, reading doesn't seem to scale, especially for
> the transactional cache, where having 4, 8 or 16 nodes in the cluster
> performs worse than having only 2:
> <http://apache-ignite-users.70518.x6.nabble.com/file/t3059/reading.png>
>
> While writing in atomic caches does... until 8 nodes, then it gets steady
> (No transactional times because of  this
> <https://issues.apache.org/jira/browse/IGNITE-14076>  ):
> <http://apache-ignite-users.70518.x6.nabble.com/file/t3059/writing.png>
>
> Another strange thing is that, for atomic caches, reading seems to be
> slower
> than writing:
> <http://apache-ignite-users.70518.x6.nabble.com/file/t3059/atomic.png>
>
> So, my questions are:
>   - Could I be doing something wrong that could lead to these results?
>   - How could it be possible to get worse reading timings in a 4/8/16 nodes
> cluster than in a 2 nodes cluster for a transactional cache?
>   - How could reading be slower than writing in atomic caches?
>
> These are the source code and configuration files we're using:
> Test.cpp
> <http://apache-ignite-users.70518.x6.nabble.com/file/t3059/Test.cpp>
> Order.h <http://apache-ignite-users.70518.x6.nabble.com/file/t3059/Order.h>
>
> node-configuration.xml
> <http://apache-ignite-users.70518.x6.nabble.com/file/t3059/node-configuration.xml>
>
>
> Best regards and thanks in advance!
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: AccessControlException

2021-04-22 Thread Ilya Kasnacheev
Hello!

I guess this is what the JVM creates for us.

If you expect that Ignite would have dedicated support for security policy:
it doesn't.

Regards,
-- 
Ilya Kasnacheev


чт, 22 апр. 2021 г. в 16:58, Nico :

> More info: it looks like the threads that are created are of type
> InnocuousForkJoinWorkerThread, which means they run without permissions.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Too many TCP discovery accepted incoming connections

2021-04-22 Thread Ilya Kasnacheev
Hello!

I can see that a node has just joined, but I'm not sure why there are all
those messages.

However, the log file seems truncated. Can you provide the rest?

Regards,
-- 
Ilya Kasnacheev


чт, 22 апр. 2021 г. в 16:47, Gangaiah Gundeboina :

> Hi Ilya,
>
> There is no JVM pause; there is only one entry in the server log, and that
> too after the fact. The incoming connections started at
> 2021-04-21T14:52:50,035 and the JVM pause can be seen at
> 2021-04-21T14:54:18. I have attached the log file; could you please check?
>
> [2021-04-21T14:54:18,035][WARN
> ][jvm-pause-detector-worker][IgniteKernal%EDIFCustomer] Possible too long
> JVM pause: 2020 milliseconds.
>
>
> server_log.zip
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2396/server_log.zip>
>
>
> Regards,
> Gangaiah
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: AccessControlException

2021-04-22 Thread Ilya Kasnacheev
Hello!

Obviously, you need to grant all Ignite code permission to read any
system properties.

Obviously, granting all permissions didn't work in your case. I'm not sure
why.
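For reference, a minimal `java.policy` fragment along those lines might look as follows. This is a sketch, not taken from the thread: the codeBase URL is a placeholder and should point at where the Ignite jars actually live in your deployment.

```
// Hypothetical fragment; adjust the codeBase URL to your Ignite installation.
// Grants all code under the Ignite libs directory read access to system
// properties such as IGNITE_DEFAULT_DISK_PAGE_COMPRESSION from the trace.
grant codeBase "file:/opt/ignite/libs/-" {
    permission java.util.PropertyPermission "*", "read";
};
```

Note that this only helps for code running on the granted stack; threads created without permissions (such as the InnocuousForkJoinWorkerThread case mentioned in the other thread) will still fail the check.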

Regards,
-- 
Ilya Kasnacheev


чт, 22 апр. 2021 г. в 14:33, Nico :

> Hi,
>
> I'm using Ignite in embedded mode, ignite sandbox is off. I'm getting the
> following exception:
> Caused by: java.security.AccessControlException: access denied
> ("java.util.PropertyPermission" "IGNITE_DEFAULT_DISK_PAGE_COMPRESSION" "read")
> at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
> at java.security.AccessController.checkPermission(AccessController.java:884)
> at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
> at java.lang.SecurityManager.checkPropertyAccess(SecurityManager.java:1294)
> at java.lang.System.getProperty(System.java:717)
> at org.apache.ignite.IgniteSystemProperties.getString(IgniteSystemProperties.java:1385)
> at org.apache.ignite.IgniteSystemProperties.getEnum(IgniteSystemProperties.java:1362)
> at org.apache.ignite.IgniteSystemProperties.getEnum(IgniteSystemProperties.java:1342)
> at org.apache.ignite.configuration.CacheConfiguration.<init>(CacheConfiguration.java:429)
> at org.apache.ignite.internal.processors.cache.GridCacheProcessor.getOrCreateConfigFromTemplate(GridCacheProcessor.java:3377)
> at org.apache.ignite.internal.processors.cache.GridCacheProcessor.getOrCreateFromTemplate(GridCacheProcessor.java:3280)
> at org.apache.ignite.internal.processors.cache.GridCacheProcessor.getOrCreateFromTemplate(GridCacheProcessor.java:3258)
> at org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:3575)
> at lib.system.aggregation.event.consumer.DefaultAggregationEventConsumer.getNodeDataCache(DefaultAggregationEventConsumer.java:217)
>
> My application uses the -Djava.security.policy option, which points to a
> file granting all permissions. Any idea what could be going wrong?
> Thanks and Regards,
> Nicolas
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Too many TCP discovery accepted incoming connections

2021-04-22 Thread Ilya Kasnacheev
Hello!

Can you share complete logs from that node? You can also search them for
"JVM pause" messages or just for any reason why these connections were
closed in the first place.

Regards,
-- 
Ilya Kasnacheev


чт, 22 апр. 2021 г. в 14:39, Gangaiah Gundeboina :

> Hi Igniters,
>
> Sometimes we see too many 'TCP discovery accepted incoming connections'
> from client servers in the server logs. Is it due to a network glitch or a
> connectivity break between servers and clients? This leads to
> system-critical errors, and many clients disconnect and reconnect. During
> the issue the cluster does not respond for around 5 minutes and then
> recovers by itself. We haven't set any network or join timeouts at the
> discovery level; we are using the default timeouts. The cluster should
> still respond to other clients; do we need to increase the timeouts? Could
> you please help us? Below are a few entries from the server log.
>
> [2021-04-21T14:52:54,697][INFO
> ][tcp-disco-srvr-#3%EDIFCustomer%][TcpDiscoverySpi] TCP discovery accepted
> incoming connection [rmtAddr=/CIPS8.152, rmtPort=48986]
> [2021-04-21T14:52:54,697][INFO
> ][tcp-disco-srvr-#3%EDIFCustomer%][TcpDiscoverySpi] TCP discovery spawning
> a
> new thread for connection [rmtAddr=/CIPS8.152, rmtPort=48986]
> [2021-04-21T14:52:54,697][INFO
> ][tcp-disco-srvr-#3%EDIFCustomer%][TcpDiscoverySpi] TCP discovery accepted
> incoming connection [rmtAddr=/CIPS3.29, rmtPort=52172]
> [2021-04-21T14:52:54,697][INFO
> ][tcp-disco-srvr-#3%EDIFCustomer%][TcpDiscoverySpi] TCP discovery spawning
> a
> new thread for connection [rmtAddr=/CIPS3.29, rmtPort=52172]
> [2021-04-21T14:52:54,697][INFO
> ][tcp-disco-sock-reader-#58331%EDIFCustomer%][TcpDiscoverySpi] Started
> serving remote node connection [rmtAddr=/CIPS8.152:48986, rmtPort=48986]
> [2021-04-21T14:52:54,697][INFO
> ][tcp-disco-srvr-#3%EDIFCustomer%][TcpDiscoverySpi] TCP discovery accepted
> incoming connection [rmtAddr=/CIPS4.29, rmtPort=35815]
> [2021-04-21T14:52:54,697][INFO
> ][tcp-disco-srvr-#3%EDIFCustomer%][TcpDiscoverySpi] TCP discovery spawning
> a
> new thread for connection [rmtAddr=/CIPS4.29, rmtPort=35815]
> [2021-04-21T14:52:54,697][INFO
> ][tcp-disco-sock-reader-#58332%EDIFCustomer%][TcpDiscoverySpi] Started
> serving remote node connection [rmtAddr=/CIPS3.29:52172, rmtPort=52172]
> [2021-04-21T14:52:54,697][INFO
> ][tcp-disco-sock-reader-#58333%EDIFCustomer%][TcpDiscoverySpi] Started
> serving remote node connection [rmtAddr=/CIPS4.29:35815, rmtPort=35815]
> [2021-04-21T14:52:54,698][INFO
> ][tcp-disco-sock-reader-#47336%EDIFCustomer%][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/CIPS3.29:56843, rmtPort=56843
> [2021-04-21T14:52:54,751][INFO
> ][tcp-disco-sock-reader-#47340%EDIFCustomer%][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/CIPS6.107:55739, rmtPort=55739
> [2021-04-21T14:52:54,755][INFO
> ][tcp-disco-sock-reader-#38863%EDIFCustomer%][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/CIPS2.48:25981, rmtPort=25981
> [2021-04-21T14:52:54,797][INFO
> ][tcp-disco-sock-reader-#47296%EDIFCustomer%][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/CIPS7.31:43122, rmtPort=43122
> [2021-04-21T14:52:54,904][INFO
> ][tcp-disco-srvr-#3%EDIFCustomer%][TcpDiscoverySpi] TCP discovery accepted
> incoming connection [rmtAddr=/CIPS2.45, rmtPort=39922]
> [2021-04-21T14:52:54,904][INFO
> ][tcp-disco-srvr-#3%EDIFCustomer%][TcpDiscoverySpi] TCP discovery spawning
> a
> new thread for connection [rmtAddr=/CIPS2.45, rmtPort=39922]
> [2021-04-21T14:52:54,904][INFO
> ][tcp-disco-sock-reader-#58337%EDIFCustomer%][TcpDiscoverySpi] Started
> serving remote node connection [rmtAddr=/CIPS2.45:39922, rmtPort=39922]
> [2021-04-21T14:52:55,040][INFO
> ][tcp-disco-sock-reader-#47606%EDIFCustomer%][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/CIPS2.155:38128, rmtPort=38128
> [2021-04-21T14:52:55,261][INFO
> ][tcp-disco-sock-reader-#47254%EDIFCustomer%][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/CIPS3.44:44047, rmtPort=44047
> [2021-04-21T14:52:55,392][INFO
> ][tcp-disco-sock-reader-#47342%EDIFCustomer%][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/CIPS4.22:55079, rmtPort=55079
> [2021-04-21T14:52:55,433][INFO
> ][tcp-disco-sock-reader-#47302%EDIFCustomer%][TcpDiscoverySpi] Finished
> serving remote node connection [rmtAddr=/CIPS7.44:49161, rmtPort=49161
> [2021-04-21T14:52:55,477][INFO
> ][tcp-disco-srvr-#3%EDIFCustomer%][TcpDiscoverySpi] TCP discovery accepted
> incoming connection [rmtAddr=/CIPS2.57, rmtPort=57963]
> [2021-04-21T14:52:55,477][INFO
> ][tcp-disco-srvr-#3%EDIFCustomer%][TcpDiscoverySpi] TCP discovery spawning
> a

Re: [External]Re: Cluster becomes unresponsive if multiple clients join at a time

2021-04-22 Thread Ilya Kasnacheev
Hello!

Yes, it should no longer be blocking assuming some conditions are met.

Regards,
-- 
Ilya Kasnacheev


чт, 22 апр. 2021 г. в 08:26, Kamlesh Joshi :

> Hi Ilya,
>
>
>
> Yeah even that’s what we were suspecting, PME triggering might be causing
> issue. We are using 2.7.6 version.
>
> So you are saying, in recent version i.e. 2.10.0 version don’t have
> blocking global PME ?
>
>
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
> *From:* Ilya Kasnacheev 
> *Sent:* 21 April 2021 20:12
> *To:* user@ignite.apache.org
> *Subject:* [External]Re: Cluster becomes unresponsive if multiple clients
> join at a time
>
>
>
> The e-mail below is from an external source. Please do not open
> attachments or click links from an unknown or suspicious origin.
>
> Hello!
>
>
>
> What is the version used? Usually, adding a new thick client would trigger
> a PME (a global blocking operation). In recent versions, they should be
> able to join without exchange.
>
>
>
> You could use flavors of thin client if you need a massive number of those.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> ср, 21 апр. 2021 г. в 14:40, Kamlesh Joshi :
>
> Hi Igniters,
>
>
>
> We have observed that if multiple clients (say around 50) join within a
> very short span of time, the cluster seems unresponsive for some time,
> causing the entire cluster traffic to go down.
>
> Has anyone encountered this behaviour before? Are there any parameters to
> tweak to avoid this?
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
>
> "*Confidentiality Warning*: This message and any attachments are intended
> only for the use of the intended recipient(s), are confidential and may be
> privileged. If you are not the intended recipient, you are hereby notified
> that any review, re-transmission, conversion to hard copy, copying,
> circulation or other use of this message and any attachments is strictly
> prohibited. If you are not the intended recipient, please notify the sender
> immediately by return email and delete this message and any attachments
> from your system.
>
> *Virus Warning:* Although the company has taken reasonable precautions to
> ensure no viruses are present in this email. The company cannot accept
> responsibility for any loss or damage arising from the use of this email or
> attachment."
>
>
>


Re: Buffer Overflow on ARM with persistency enabled

2021-04-22 Thread Ilya Kasnacheev
Hello!

You can also configure your system to save core dumps on application crash;
then you can open the core dump with a debugger.

Regards,
-- 
Ilya Kasnacheev


чт, 22 апр. 2021 г. в 07:07, rakshita04 :

> Hi ,
>
> We have 2 GB RAM.
> We unfortunately could not attach a debugger, so we don't have stack trace
> info as of now.
> Also, about RAM: we tried running our application on an AMD Linux VM with
> the same RAM (2 GB), but there we don't see this behavior and the
> application runs fine.
>
> regards,
> Rakshita Chaudhary
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Creating a Lucene index thru SQL DDL

2021-04-21 Thread Ilya Kasnacheev
Hello!

You cannot add a QueryEntity via DDL, but you can add it to
CacheConfiguration, and Ignite will create the tables for you, so it's just
as good as DDL.
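A configuration sketch of what that can look like. The cache, type, and field names here are examples (not from the thread), and the ignite-indexing module must be on the classpath for the FULLTEXT (Lucene) index to work:

```java
import java.util.Collections;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.cache.QueryIndexType;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: a cache whose "content" field gets a Lucene-backed FULLTEXT index.
CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("documents");

QueryEntity entity = new QueryEntity(Long.class.getName(), "Document");
entity.addQueryField("content", String.class.getName(), null);
entity.setIndexes(Collections.singleton(
    new QueryIndex("content", QueryIndexType.FULLTEXT)));

ccfg.setQueryEntities(Collections.singleton(entity));
// ignite.getOrCreateCache(ccfg) then also exposes the table to SQL,
// and the "content" field becomes searchable via a TextQuery.
```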

Regards,
-- 
Ilya Kasnacheev


ср, 21 апр. 2021 г. в 17:43, Naveen :

> Hi Ilya
>
> How do we add QueryEntity in a DDL, can you please refer me to any
> documentation we have on this or add the code snippet
>
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite node heap gradually increasing and not getting stabilized

2021-04-21 Thread Ilya Kasnacheev
Hello!

You need to find out which objects keep references to these, eventually
tracing their usage to some large component of Ignite or your application.

Regards,
-- 
Ilya Kasnacheev


ср, 21 апр. 2021 г. в 13:37, Naveen :

> HI Ilya
>
> We have barely 20 tables, with around 10 to 15 million records.
>
> We also have around 50 to 100 thin clients on each node; not many, and the
> load is also not high.
>
> Here are the details about CipherSuite's objects
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1478/ssl_objects.png>
>
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1478/GridH2Processors.png>
>
>
> What could be going wrong, based on this?
>
> Thanks
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cluster becomes unresponsive if multiple clients join at a time

2021-04-21 Thread Ilya Kasnacheev
Hello!

What is the version used? Usually, adding a new thick client would trigger
a PME (a global blocking operation). In recent versions, they should be
able to join without exchange.

You could use flavors of thin client if you need a massive number of those.
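For reference, connecting through the Java thin client looks roughly like this. This is a sketch: the address is a placeholder, and it assumes a server node is listening on the default client connector port 10800.

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientExample {
    public static void main(String[] args) {
        // Placeholder address; point this at any server node's
        // client connector port (10800 by default).
        ClientConfiguration cfg = new ClientConfiguration()
            .setAddresses("127.0.0.1:10800");

        // A thin client opens a plain socket connection and does not join
        // the cluster topology, so connecting does not trigger PME.
        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
            cache.put(1, "value");
        }
    }
}
```

Because thin clients stay out of the topology, hundreds of them can connect and disconnect without the exchange-related stalls described above.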

Regards,
-- 
Ilya Kasnacheev


ср, 21 апр. 2021 г. в 14:40, Kamlesh Joshi :

> Hi Igniters,
>
>
>
> We have observed that if multiple clients (say around 50) join within a
> very short span of time, the cluster seems unresponsive for some time,
> causing the entire cluster traffic to go down.
>
> Has anyone encountered this behaviour before? Are there any parameters to
> tweak to avoid this?
>
>
>
> *Thanks and Regards,*
>
> *Kamlesh Joshi*
>
>
>
>
>


Re: Generated affinity key from multiple fields and SQL partition pruning

2021-04-21 Thread Ilya Kasnacheev
Hello!

I suggest generating a surrogate affinity key in this case, such as
@AffinityKeyMapped String affKey = key1 + "-" + key2;

You can also file a feature request against Apache Ignite JIRA, but given
the lack of community interest towards indexing nested objects, it is
unlikely to be done.
Maybe we could have composite affinity keys eventually, but I can't bet on
that.
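The surrogate-key suggestion can be sketched in plain Java. Note that the partition computation below is a simplified stand-in for what Ignite's default RendezvousAffinityFunction does (it derives the partition from the affinity key's hashCode modulo the partition count, 1024 by default); it is an illustration, not the exact Ignite implementation:

```java
public class AffinityDemo {
    // Surrogate affinity key composed from two fields, as suggested above.
    static String affKey(String key1, String key2) {
        return key1 + "-" + key2;
    }

    // Simplified stand-in for the default partition mapping: hashCode
    // modulo the partition count. Not the exact Ignite code.
    static int partition(Object affinityKey, int parts) {
        return Math.abs(affinityKey.hashCode() % parts);
    }

    public static void main(String[] args) {
        String k = affKey("tenant42", "user7");
        // Every row sharing (key1, key2) maps to the same partition, which
        // is what lets the SQL engine prune partitions; but only when the
        // query filters on the surrogate key itself, not the original fields.
        System.out.println(partition(k, 1024));
    }
}
```

This also shows why partition pruning fails for the original fields: the planner sees only the combined string, so a WHERE clause on key1 and key2 separately cannot be mapped to a partition.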

Regards,
-- 
Ilya Kasnacheev


ср, 14 апр. 2021 г. в 16:12, :

> Hello team,
>
>
>
> If we generate @AffinityKeyMapped field from multiple other fields and
> then use those original fields in SQL WHERE condition, they are not used
> for partition pruning:
>
>
> https://ignite.apache.org/docs/latest/perf-and-troubleshooting/sql-tuning#partition-pruning
>
>
>
> Because Ignite doesn’t know how and from which fields to generate the
> affinity key… and there’s probably no way to configure it.
>
>
>
> Could we get this feature, please?
>
> Regards,
> Michal
>
>
> _
> “This message is for information purposes only, it is not a
> recommendation, advice, offer or solicitation to buy or sell a product or
> service nor an official confirmation of any transaction. It is directed at
> persons who are professionals and is not intended for retail customer use.
> Intended for recipient only. This message is subject to the terms at:
> www.barclays.com/emaildisclaimer.
>
> For important disclosures, please see:
> www.barclays.com/salesandtradingdisclaimer regarding market commentary
> from Barclays Sales and/or Trading, who are active market participants;
> https://www.investmentbank.barclays.com/disclosures/barclays-global-markets-disclosures.html
> regarding our standard terms for the Investment Bank of Barclays where we
> trade with you in principal-to-principal wholesale markets transactions;
> and in respect of Barclays Research, including disclosures relating to
> specific issuers, please see http://publicresearch.barclays.com.”
>
> _
> If you are incorporated or operating in Australia, please see
> https://www.home.barclays/disclosures/importantapacdisclosures.html for
> important disclosure.
>
> _
> How we use personal information  see our privacy notice
> https://www.investmentbank.barclays.com/disclosures/personalinformationuse.html
>
> _
>


Re: Several problems with persistence

2021-04-21 Thread Ilya Kasnacheev
Hello!

If you are seeing any exceptions, please provide logs.

Yes, if you remove the node from baseline and have 1 backup, then the data
will be rebalanced between remaining nodes.

1K messages per second means roughly 4 MB/s of writes just for checkpoints,
given the 4 KB page size; then add the WAL on top of that.
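The back-of-envelope behind that estimate, as a worked example. It assumes the worst case where every updated entry dirties its own page (a rough bound that ignores page sharing between entries and all WAL traffic):

```java
public class CheckpointMath {
    // Worst case: every updated entry dirties its own page, so each message
    // costs at least one full page write when the checkpoint flushes it.
    static long checkpointBytesPerSec(int msgsPerSec, int pageSizeBytes) {
        return (long) msgsPerSec * pageSizeBytes;
    }

    public static void main(String[] args) {
        long bps = checkpointBytesPerSec(1_000, 4 * 1024); // 4 KiB default page size
        System.out.printf("%.2f MiB/s%n", bps / (1024.0 * 1024.0)); // ~3.91 MiB/s, before WAL
    }
}
```

Since every update is also appended to the WAL before it reaches the page store, the actual disk bandwidth needed is higher still, which is why an undersized WAL volume fills up at this rate.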

Regards,
-- 
Ilya Kasnacheev


ср, 7 апр. 2021 г. в 22:26, facundo.maldonado :

> Hi everyone, kind of frustrated/disappointed here.
>
> I have a small cluster on a test environment where I'm trying to take some
> measures so
> I can size the cluster I will need in production and estimate some costs.
>
> The use case is simple, consume from a Kafka topic and populate the
> database
> so other components can start querying (key-value access only).
>
> The cluster is described below:
>
> AWS/K8S environment
> 4 data nodes and 4 'streamer' nodes.
>
> Data nodes:
> - 12 Gb memory requested
> - 4 Gb for JMV xms and xmx
> - 5 Gb DataRegion maxSize
> - persistence Enabled
> - writeThrottling Enabled
> - walSegmentSize 256 Mb
> - 10 Gb volume attached for storage /opt/work/storage
> - 3 Gb volume attached for WAL /opt/work/wal  (~10*walSegmentSize)
> - WalArchive disabled (walArchivePath==walArchive)
> - 1 cache
> - partitionLossPolicy READ_ONLY_SAFE
> - cacheMode PARTITIONED
> - writeSynchronizationMode PRIMARY_SYNC
> - rebalanceMode ASYNC
> - backups 1
> - expiryPolicyFactory AccessedExpiryPolicy 20 min
>
> Streamer nodes (Kafka streamer as grid service - node singleton)
> - 2 Gb memory requested
> - allowOverwrite false
> - autoflushFrequency 200ms
> - 16 consumers (64 partitions in topic)
>
> Streamer is configured to have a stream receiver, a StreamTransformer that
> checks an special case where I have to chose which record I will keep.
> Records are of 1.5 Kb (avg)
> They are deserialized and converted into domain objects that are streamed
> as
> BinaryObjects to the cache.
>
> Use case:
> Started with a clean environment. No data in cache, no data in wal/storage
> volumes, no data in the topic.
> Input data is generated at a constant rate of 1K messages per second.
> For the first 20 minutes, cache size grew linearly; after that it stayed
> almost flat. That's expected, since the ExpiryPolicy was set to 20 min.
> Around the hour, the lag in the consumers started to grow.
> After that, everything goes wrong.
> WAL size grew beyond the limits, exactly doubled before Kubernetes kills
> the
> pod.
> Around the same moment, memory usage started to grow to near the limit
> (12Gb)
> Throttling times and checkpointing duration were almost the same during the
> test. The latter is really high (2 min avg), but I don't know if that is
> expected or not, since I have nothing to compare it against.
>
> After 2 nodes were killed, they never joined the cluster again.
> I increased the size of the WAL volume, but they still didn't join.
> Control.sh utility list both nodes as offline.
> Logs output a message like this:
> Blocked system-critical thread has been detected. This can lead to
> cluster-wide undefined behaviour [workerName=sys-stripe-6,
> threadName=sys-stripe-6-#7, blockedFor=74s]
>
> After restarting them again, one joined the cluster but not the other.
> Control.sh utility displayed the node as offline.
> By mistake I deleted the content of the wal folder. Shame on me.
> Now the node doesn't even start.
> Node log displays:
> JVM will be halted immediately due to the failure:
> [failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
> o.a.i.i.processors.cache.persistence.StorageException: Failed to read
> checkpoint record from WAL, persistence consistency cannot be guaranteed.
> Make sure configuration points to correct WAL folders and WAL folder is
> properly mounted [ptr=WALPointer [idx=179, fileOff=236972130, len=15006],
> walPath=/opt/work/wal, walArchive=/opt/work/wal]]]
>
> Which I think is expected.
> Now the node is completely unusable.
>
> Finally my questions are:
> - How can I reuse that node? Can I reuse it? Is there a way to clean the
> data and rejoin the node?
> - Did I lose the data on that node? It should be recoverable from backups
> once I remove the node from the baseline, is that correct?
> - If I increase the input rate to 2K, the lag generated at the consumers
> becomes unmanageable. Adding more consumers will not help since they are
> already matched with topic partitions.
> - 1 K messages per second is really really really slow.
> - How exactly does the WAL work? Why am I constantly running out of space
> here?
> - Any clue of what I'm doing wrong?
>
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2948/WalSIze.png>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2948/MemoryUsage.png>
>
>
>
>
> Hope someone could throw some light here.
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Creating a Lucene index thru SQL DDL

2021-04-21 Thread Ilya Kasnacheev
Hello!

You can use QueryEntity; it will let you add FULLTEXT indexes.

Regards,
-- 
Ilya Kasnacheev


ср, 21 апр. 2021 г. в 15:52, Naveen :

> Hello All
>
> We are using ignite 2.8.1, trying to evaluate full text search thru Lucene.
>
> Can we create a Lucene index on a specific field thru SQL create table
> DDL.
> if not, what are other options we have if we are not using POJOs.
>
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Historical rebalance doesn't work for caches with rebalanceDelay > 0

2021-04-21 Thread Ilya Kasnacheev
Hello again!

I have filed a ticket to mark this setting as deprecated:
https://issues.apache.org/jira/browse/IGNITE-14613

Regards,
-- 
Ilya Kasnacheev


ср, 21 апр. 2021 г. в 13:33, Ilya Kasnacheev :

> Hello!
>
> 1) I think that rebalanceDelay is an outdated option now that we have
> baseline topology and baseline auto-adjust. Just set the baseline
> auto-adjust timeout to the value of the rebalance delay and you will be
> much better off.
>
> 2) I'm not sure it was, but definitely not anymore.
>
> 3) I don't think so, you will have to recreate.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> сб, 10 апр. 2021 г. в 19:22, Dmitry Lazurkin :
>
>> Hello, folks.
>>
>> I have big cache with configured rebalanceMode = ASYNC, rebalanceDelay =
>> 10_000ms. Persistence is enabled, maxWalArchiveSize = 10GB. And I passed
>> -DIGNITE_PREFER_WAL_REBALANCE=true and
>> -DIGNITE_PDS_WAL_REBALANCE_THRESHOLD=1 to Ignite. So node should use
>> historical rebalance if there is enough WAL. But it doesn't. After
>> investigation I found that GridDhtPreloader#generateAssignments always
>> get called with exchFut = null, and this method can't set histPartitions
>> without exchFut. I think, that problem in
>> GridCachePartitionExchangeManager
>> (https://github.com/apache/ignite/blob/bc24f6baf3e9b4f98cf98cc5df67fb5deb5ceb6c/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePartitionExchangeManager.java#L3486).
>> It doesn't call generateAssignments without forcePreload if
>> rebalanceDelay is configured.
>>
>> Historical rebalance works after removing rebalanceDelay.
>>
>> - May be this is bug because I see proper usage of rebalaceDelay in
>> GridDhtPartitionDemander#addAssignments?
>>
>> - Is this useful to have rebalanceDelay for persistent caches?
>>
>> - Can I turn off rebalanceDelay for existing caches?
>>
>> Thank you all.
>>
>>
>>


Re: Historical rebalance doesn't work for caches with rebalanceDelay > 0

2021-04-21 Thread Ilya Kasnacheev
Hello!

1) I think that rebalanceDelay is an outdated option now that we have
baseline topology and baseline auto-adjust. Just set the baseline
auto-adjust timeout to the value of the rebalance delay and you will be
much better off.

2) I'm not sure it was, but definitely not anymore.

3) I don't think so, you will have to recreate.
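For example, with control.sh, where the 10-second timeout mirrors the old rebalanceDelay (the same can be done programmatically via IgniteCluster#baselineAutoAdjustEnabled and IgniteCluster#baselineAutoAdjustTimeout):

```shell
# Enable baseline auto-adjust so topology changes only take effect
# after the cluster has been stable for 10 seconds.
./control.sh --baseline auto_adjust enable timeout 10000
```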

Regards,
-- 
Ilya Kasnacheev


сб, 10 апр. 2021 г. в 19:22, Dmitry Lazurkin :

> Hello, folks.
>
> I have big cache with configured rebalanceMode = ASYNC, rebalanceDelay =
> 10_000ms. Persistence is enabled, maxWalArchiveSize = 10GB. And I passed
> -DIGNITE_PREFER_WAL_REBALANCE=true and
> -DIGNITE_PDS_WAL_REBALANCE_THRESHOLD=1 to Ignite. So node should use
> historical rebalance if there is enough WAL. But it doesn't. After
> investigation I found that GridDhtPreloader#generateAssignments always
> get called with exchFut = null, and this method can't set histPartitions
> without exchFut. I think, that problem in
> GridCachePartitionExchangeManager
> (https://github.com/apache/ignite/blob/bc24f6baf3e9b4f98cf98cc5df67fb5deb5ceb6c/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/GridCachePartitionExchangeManager.java#L3486).
> It doesn't call generateAssignments without forcePreload if
> rebalanceDelay is configured.
>
> Historical rebalance works after removing rebalanceDelay.
>
> - May be this is bug because I see proper usage of rebalaceDelay in
> GridDhtPartitionDemander#addAssignments?
>
> - Is this useful to have rebalanceDelay for persistent caches?
>
> - Can I turn off rebalanceDelay for existing caches?
>
> Thank you all.
>
>
>


Re: Ignite Server Upgrade While Keeping Data

2021-04-21 Thread Ilya Kasnacheev
Hello!

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100828263
> The PDS compatibility tests framework is implemented in
> IgnitePersistenceCompatibilityAbstractTest, which has several subclasses.
> The general approach for such tests is to produce some PDS state on an
> older version and then check that a newer Ignite version can be started
> using this state.

WRT compatibility matrix, I don't think there are any gaps there between
2.x releases.

Regards,
-- 
Ilya Kasnacheev


пн, 19 апр. 2021 г. в 13:42, starlight :

> Hello,
>
> Thank you for the answer.
>
> Is this fact documented somewhere? (that 2.10 is able to start with 2.7
> persistence files)
> Is there a version compatibility matrix? Between server versions and
> between
> the client and the server?
>
> When do you plan to support rolling upgrade?
>
> Regards,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite node heap gradually increasing and not getting stabilized

2021-04-20 Thread Ilya Kasnacheev
Hello!

How many tables do you have? I don't see why there are so many.

I'm not sure what holds all those CipherSuite instances. Can you check?

If you have a ton of thin client connections with SSL which are doing MERGE
on many tables, I guess this may be OKish though.

Regards,
-- 
Ilya Kasnacheev


Tue, 20 Apr 2021 at 16:11, Naveen :

> HI All
>
> We have analyzed the heap dump, some of the observations are
>
> what can we change for this, is there any configurations we can set to get
> rid of these leaks
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1478/tool_analysis1.gif>
>
>
> We do use the SQL API, especially MERGE for upserts. Is there anything we can
> do for the H2 engine?
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1478/tool_h2_report.gif>
>
>
> We see 12% for H2GridTable and 14% for security/ssl/CipherSuite. Any
> pointers here? Are these expected, or can they be tuned further?
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1478/tool_overall.gif>
>
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite node heap gradually increasing and not getting stabilized

2021-04-19 Thread Ilya Kasnacheev
Hello!

Did you try to trigger a full GC and see what is the low level of heap
usage?

Please try collecting a heap dump from the node after some heap growth, and
checking it with Eclipse MAT for example.

Regards,
-- 
Ilya Kasnacheev


Mon, 19 Apr 2021 at 14:46, Naveen :

> Hello All
>
> we are using Ignite 2.8.1 with native persistence enabled.
>
> We have a cluster with 5 nodes and did a cluster restart recently. A few days
> after the restart, we are seeing that node heap memory is constantly
> increasing and not getting stabilized.
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t1478/heap_memory.gif>
>
>
> And not much increase in offheap memory though
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1478/offheap-memory.gif>
>
>
> And nothing has been changed; there is no change in the load, writes and reads
> are pretty much the same.
> We have not pre-loaded (warmed up) the data after the cluster
> restart; that does increase the latency and disk IO etc., but would it increase
> the heap utilization this much?
>
> What else can I verify to find the root cause of this heap increase, and how
> can we make it stable? We are not seeing any other side effects of this heap
> utilization; it is just that it is almost nearing the max heap set for
> the node.
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Abrupt stop of application on using sql

2021-04-19 Thread Ilya Kasnacheev
Hello!

This sounds like https://issues.apache.org/jira/browse/IGNITE-8702

Maybe there are some issues left with poll versus select, or maybe it is
your own code that is using select() instead of poll().
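
A note on the 1024 figure in the report quoted below: classic select() cannot handle descriptors above FD_SETSIZE (usually 1024), which matches the symptom. A quick Linux sketch for checking a process's descriptor count and limit (the pid here is the current shell, purely illustrative; substitute the Ignite JVM's pid in practice):

```shell
# Count open file descriptors of a process. $$ (this shell) is used only
# for illustration; use the Ignite JVM's pid instead.
fd_count=$(ls /proc/$$/fd | wc -l)
echo "open file descriptors: $fd_count"

# Per-process soft limit on open files; 1024 is a common default and also
# the FD_SETSIZE ceiling that select()-based code hits.
ulimit -n
```

If the count climbs toward the limit, raising the limit only delays the crash when select() is involved; the descriptor leak itself still needs fixing.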

Regards,
-- 
Ilya Kasnacheev


Mon, 19 Apr 2021 at 15:30, rakshita04 :

> hello Ilya,
>
> As I mentioned, it is an abrupt closure of the application,
> so we don't have the exit code.
> One piece of information that might be useful: we
> noticed that the number of open file descriptors goes beyond 1024 (the
> system's maximum open-file limit).
> When we disable persistence there is no problem and the application runs
> smoothly.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Abrupt stop of application on using sql

2021-04-19 Thread Ilya Kasnacheev
Hello!

You still did not respond about the return code of the process.

I recommend attaching a debugger to the process to see what happens.

Regards,
-- 
Ilya Kasnacheev


Thu, 15 Apr 2021 at 15:16, rakshita04 :

> Hi Ilya,
>
> How is decreasing the data region's maxSize related to the "Buffer Overflow"
> problem?
> I mean, we don't get any out-of-memory log in the dmesg logs, and the "top"
> command shows enough available RAM.
> We tried decreasing maxSize to 50 MB but the problem still persists.
> Can this also be related to checkpointing?
>
> regards,
> Rakshita Chaudhary
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Vertx, Kotlin and Ignite "Failed to deserialize object"...

2021-04-19 Thread Ilya Kasnacheev
Hello!

Apache Ignite will not peer-load key/value type classes, so you do indeed have
to have them on the classpath if you plan to do computations on them.

Regards,
-- 
Ilya Kasnacheev


Sun, 18 Apr 2021 at 23:37, Andreas Vogler :

> Ok, just had to add vertx-core-4.0.3.jar and vertx-ignite-4.0.3.jar
> to /opt/ignite/apache-ignite/libs/user_libs in the container.
>
> so, it was a Vertx issue…
>
> Now my SCADA Gateway can write OPC UA values to Ignite and I can query it
> with SQL …
>
> I will soon merge the dev-branch …
> https://github.com/vogler75/automation-gateway
>
>
>
> On 18.04.2021 at 21:29, Andreas Vogler wrote:
>
> I have now figured out that it is not a problem of putting values to the
> cache - this works.
>
> The error comes when I do a Vertx ServiceDiscovery.publish…. maybe I have
> to add some Vertx Libs to Ignite….
>
> https://vertx.io/docs/vertx-service-discovery/java/
>
>
>
> On 17.04.2021 at 13:51, Andreas Vogler wrote:
>
> Hi,
> Very new to Ignite, I run two docker nodes with Ignite 2.9.1 (image:
> apacheignite/ignite:2.9.1)
> And additionally I have a Vertx program with Ignite 2.9.1 - Client Mode.
> The client creates sql cache tables (indexed cache) - and I can also see
> and query the tables with sqlline (connected to one of the two docker
> containers)
> I have put my lib.jar in /opt/ignite/apache-ignite/libs/user_libs
> Connection to the Ignite Cluster seems to work well.
> But at some point I got the following message and I have no clue where
> it comes from. I think it must come from the cache.put commands,
> because I do not see any entries in my SQL cache tables.
>
> Is there a way to find out from where this comes? It seems to be a lambda
> problem - see message.
>
> But I just call the cache.put with key and an object (of the type of the
> cache-tables).
>
> I am using Kotlin - may this be a problem?
>
> For me it is not clear what is going on there: when I do a cache.put,
> where is the object serialised? At my client? But the error comes at the
> server…
>
> In my client I do:
>
> val current = OpcValue(topic, value)
> cache?.put(current.key(), current)
>
>
> ignite1_1  | [11:08:55,240][SEVERE][query-#83][BinaryContext] Failed to
> deserialize object [typeName=java.lang.invoke.SerializedLambda]
> ignite1_1  | class org.apache.ignite.binary.BinaryObjectException: Failed
> to read field [name=capturingClass]
> ignite1_1  |  at
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:192)
> ignite1_1  |  at
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:888)
> ignite1_1  |  at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
> ignite1_1  |  at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> ignite1_1  |  at
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:316)
> ignite1_1  |  at
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:301)
> ignite1_1  |  at
> org.apache.ignite.internal.binary.BinaryMarshaller.unmarshal0(BinaryMarshaller.java:101)
> ignite1_1  |
> at 
> org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80)
> ignite1_1  |  at
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10376)
> ignite1_1  |
> at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryRequest.finishUnmarshal(GridCacheQueryRequest.java:383)
> ignite1_1  |
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.unmarshall(GridCacheIoManager.java:1625)
> ignite1_1  |
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:586)
> ignite1_1  |
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:392)
> ignite1_1  |
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:318)
> ignite1_1  |
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:109)
> ignite1_1  |
> at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:308)
> ignite1_1  |
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907)
> ignite1_1  |
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1528)
> ignite1_1  |  at
> org.apache.ig

Re: Ignite Server Upgrade While Keeping Data

2021-04-16 Thread Ilya Kasnacheev
Hello!

If you have native persistence configured, you should be able to start 2.10
nodes with 2.7 persistence files. Just restart all of your nodes at once to
the new version.

If you have a purely in-memory cluster, it is assumed that you are ready
for occasional full cluster restart and starting anew (repopulating data,
etc). In this case it may even be possible to bring up the new cluster,
switch the load to the new cluster, and sunset the old one.

Apache Ignite does not have support for rolling upgrade yet: it is not
possible to bring new version nodes to old version cluster.

Regards,
-- 
Ilya Kasnacheev


Fri, 16 Apr 2021 at 13:54, starlight :

> Hello,
>
> What is the general procedure of upgrading Ignite Server from one version
> to
> another while maintaining data?
>
> Particularly I plan to upgrade the Server and Client from 2.7 to 2.10. The
> server currently stores large amounts of data.
>
> Thanks in advance, best regards,
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is Geo redundancy supported in Ignite

2021-04-15 Thread Ilya Kasnacheev
Hello!

The one I know of, yes.

Regards,
-- 
Ilya Kasnacheev


Thu, 15 Apr 2021 at 09:30, Venkata Bhagavatula :

> Hi Ilya Kasnacheev,
>
> We are also looking for this.
> Can you please highlight the third party solution?  I know that GridGain
> supports this in their entrerprise solution.  Is this you are referring to ?
>
> Thanks n Regards,
> Chalapathi
>
> On Tue, Apr 13, 2021 at 6:29 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> There is a third-party solution for geo-redundancy based on Apache
>> Ignite. But as X Guest said, no native support in Apache Ignite.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Mon, 12 Apr 2021 at 16:20, vbm :
>>
>>> Hi,
>>>
>>> Is geo-redundancy supported in Ignite ?
>>>
>>> Can we  bring up 2 clusters in different location(geo) and make one
>>> cluster
>>> as backup for another ?
>>> Basically can ignite cluster in different location act as a active/
>>> passive
>>> setup ?
>>>
>>> If one of the cluster goes down due to some issue in one location, can
>>> another cluster across geo become active ?
>>>
>>>
>>> Regards,
>>> Vishwas
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>


Re: Ignite unable to understand recursive query.

2021-04-15 Thread Ilya Kasnacheev
Hello!

As far as my understanding goes, Ignite does not support recursive CTE.

You may still try running it as a local query by setting local=true in your
connection/query settings.
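
Since the recursive CTE itself is unsupported, a common workaround is to move the recursion into application code: run the anchor query once, then repeatedly run the (non-recursive) join step against the previous round's results until no new rows appear. Below is a minimal self-contained Python sketch of that fixpoint loop, using plain tuples in place of the real SQL round trips; the names and filter logic are simplified from the quoted query, not a drop-in replacement:

```python
# Sketch: emulate "WITH RECURSIVE reportIdList" as an iterative fixpoint loop.
# Rows are (reportId, tagId, owner) tuples; in a real client each round would
# be a non-recursive SQL query with the frontier's tagIds inlined.
def transitive_report_ids(rows, seed_ids):
    # Index rows by reportId so the "recursive" join step is a lookup.
    by_report = {}
    for report_id, tag_id, owner in rows:
        by_report.setdefault(report_id, []).append((report_id, tag_id, owner))

    result = set()
    # Anchor part of the CTE: rows whose reportId is in the seed subquery.
    frontier = {r for r in rows if r[0] in seed_ids}
    while frontier:
        result |= frontier
        # Recursive part: join frontier.tagId onto rows.reportId, applying
        # the owner filters from the original query.
        next_frontier = set()
        for _, tag_id, owner in frontier:
            for row in by_report.get(tag_id, []):
                if owner is not None and owner != "admin" and row[2] == owner:
                    if row not in result:
                        next_frontier.add(row)
        frontier = next_frontier
    return result

rows = [(1, 2, "bob"), (2, 3, "bob"), (3, None, "bob"), (4, 5, "admin")]
print(sorted(transitive_report_ids(rows, {1})))
# → [(1, 2, 'bob'), (2, 3, 'bob'), (3, None, 'bob')]
```

Each loop iteration corresponds to one CTE recursion level, so the number of SQL round trips equals the depth of the chain rather than the number of rows.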

Regards,
-- 
Ilya Kasnacheev


Thu, 15 Apr 2021 at 02:33, PunxsutawneyPhil3 :

> I am having a problem with the following query which results in this error:
>
> javax.cache.CacheException: Failed to parse query. Column "S.TAGID" not
> found; SQL statement:
>
> From reading it looks like the reason for this is explained in  this
> <https://apacheignite-sql.readme.io/docs/distributed-joins>
> documentation,
> and is that Ignite cannot resolve the columns of reportIdList after the
> Inner Join.
>
> Is there any way to restructure this query so it can be understood by
> Ignite?
>
>
>
> WITH RECURSIVE reportIdList AS
> (
> SELECT reportId, tagId, owner
> FROM "MyReportPojoCache".MyReportPojo
> WHERE id = ANY (SELECT id
> FROM "MyOtherReportPojoCache".MyOtherReportPojo
> WHERE owner IS NOT NULL
>   AND isManage IS TRUE
>   AND type = 'tag-group')
> UNION
> SELECT m.reportId, m.tagId, m.owner
> FROM "MyReportPojoCache".MyReportPojo m
> INNER JOIN reportIdList s ON s.tagId = m.reportId
>   AND s.owner IS NOT NULL
>   AND s.owner != 'admin'
>   AND s.owner = m.owner
> )
> SELECT qpIntId
> FROM "MyReportPojoCache".MyReportPojo
> WHERE (report_id, owner) IN (SELECT report_id, owner FROM reportIdList)
>
>
> link StackOverflow post
>
> https://stackoverflow.com/questions/67098705/how-to-rewrite-recursive-sql-query-to-work-with-ignite-sql-queries
> <
> https://stackoverflow.com/questions/67098705/how-to-rewrite-recursive-sql-query-to-work-with-ignite-sql-queries>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Fastest way to iterate over a persistence cache

2021-04-14 Thread Ilya Kasnacheev
Hello!

We had per-page scanning of caches once, but it has been disabled for some time
because it was causing synchronization issues.

Apache Ignite is still a memory-centric database, which assumes that data
is either in memory or can be loaded into memory relatively quickly.

So I guess the only cache scan option currently is reading all blocks
at random.

We also assume that persistence setup uses SSD, which has random read
speeds on par with sequential (and the term  "sequential" may not be
applicable to SSD at all). If your setup is based on HDD it may indeed not
work optimally.
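
For context, the page size discussed in the report below is a cluster-wide storage setting; a hypothetical Spring XML sketch (to my knowledge it must be chosen before the persistence files are first created):

```xml
<!-- Sketch: raising the storage page size from the 4 KB default to 8 KB.
     The value applies to newly created persistence files only; changing
     it for an existing store requires recreating the data. -->
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="pageSize" value="8192"/>
</bean>
```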

Regards,
-- 
Ilya Kasnacheev


Tue, 13 Apr 2021 at 21:28, Sebastian Macke :

> Hi Ignite Team,
>
> I have stumbled across a problem when iterating over a persistence cache
> that does not
> fit into memory.
>
> The partitioned cache consists of 50M entries across 3 nodes with a total
> cache size of 3*80GB on the volumes.
>
> I use either a ScanQuery or a SQL query over a non-indexed table. Both
> results are the same.
>
> It can take over an hour to iterate over the entire cache. The problem
> seems
> to be that the cache is read in random 4kB (page size) chunks
> unparallelized
> from the volume. A page size of 8kB exactly doubles the iteration speed.
>
> Is this Ignite's default behaviour? Is there an option to enable a more
> streaming like solution?
> Of course, the order of the items in the cache doesn't matter.
>
> Thanks,
>
> Sebastian
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Abrupt stop of application on using sql

2021-04-14 Thread Ilya Kasnacheev
Hello!

You should probably decrease maxSize until you stop seeing this issue, and
then some more.
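
For reference, the maxSize being discussed lives on the data region configuration; a hypothetical Spring XML sketch (the 256 MB figure is purely illustrative):

```xml
<!-- Sketch: capping the default data region's off-heap size. Keep
     maxSize + JVM heap + OS overhead comfortably below physical RAM so
     the OS OOM killer is never triggered; 256 MB is an example value. -->
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="defaultDataRegionConfiguration">
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
            <property name="persistenceEnabled" value="true"/>
            <property name="maxSize" value="#{256L * 1024 * 1024}"/>
        </bean>
    </property>
</bean>
```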

Regards,
-- 
Ilya Kasnacheev


Wed, 14 Apr 2021 at 13:09, rakshita04 :

> Hi Ilya,
>
> We only have one node.
> Attached is our DataBaseConfig.xml file for your reference
> DataBaseConfig.xml
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2857/DataBaseConfig.xml>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: reading encrypted password from Ignite config file

2021-04-14 Thread Ilya Kasnacheev
Hello!

There are a couple of third party implementations of security plugins, but
I doubt that there are any walkthroughs on implementing your own.

Regards,
-- 
Ilya Kasnacheev


Tue, 13 Apr 2021 at 20:31, shivakumar :

> Hi Ilya,
> if I have to use username/password is it possible to implement
> SecurityCredentialsProvider.java interface?
>
> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/plugin/security/SecurityCredentialsProvider.java
> is there any example for this ?
>
> regards,
> Shiva
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Abrupt stop of application on using sql

2021-04-14 Thread Ilya Kasnacheev
Hello!

How many nodes do you have?

Maybe when you start killing nodes, remaining nodes start rebalancing data
and they keep more data locally than fits in your memory.

The recommendation stays the same - decrease your data regions' and heap
size until it fits comfortably in the available memory to not trigger
OOMKiller.

Regards,
-- 
Ilya Kasnacheev


Wed, 14 Apr 2021 at 07:39, rakshita04 :

> Hi Ilya,
>
> we found the cause of the application stop:
> we see a "Buffer Overflow" error when the application stops.
> We commented out mCache.Put() in our code (basically not calling the Put API to
> write data to the DB) and the restart did not happen.
> Also, this restart happens after a certain number of entries in the DB.
> Do you have any idea what can cause this buffer overflow while writing to the
> DB, and is there anything we can do to avoid it?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Is Geo redundancy supported in Ignite

2021-04-13 Thread Ilya Kasnacheev
Hello!

There is a third-party solution for geo-redundancy based on Apache Ignite.
But as X Guest said, no native support in Apache Ignite.

Regards,
-- 
Ilya Kasnacheev


Mon, 12 Apr 2021 at 16:20, vbm :

> Hi,
>
> Is geo-redundancy supported in Ignite ?
>
> Can we  bring up 2 clusters in different location(geo) and make one cluster
> as backup for another ?
> Basically can ignite cluster in different location act as a active/ passive
> setup ?
>
> If one of the cluster goes down due to some issue in one location, can
> another cluster across geo become active ?
>
>
> Regards,
> Vishwas
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Re: SQL query performance with JOIN and ORDER BY or WHERE

2021-04-12 Thread Ilya Kasnacheev
Hello!

I think you can try a (QUEUEID, STATUS) index.

Or maybe a (STATUS, QUEUEID), probably makes sense to try both.
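
Using the table names from the original question, the two candidate composite indexes would look like this (a sketch; which column order wins depends on the data distribution, so it is worth comparing both with EXPLAIN):

```sql
-- Filter-then-sort: the equality condition on STATUS comes first, so rows
-- within the STATUS = 2 slice are already ordered by QUEUED and the
-- ORDER BY ... LIMIT 20 can stop after 20 index entries.
CREATE INDEX IDX_JOBQUEUE_STATUS_QUEUED
    ON "JobQueue".JOBQUEUE (STATUS, QUEUED);

-- Alternative ordering: scans in QUEUED order and filters on STATUS as it
-- goes; cheaper when most rows match STATUS = 2.
CREATE INDEX IDX_JOBQUEUE_QUEUED_STATUS
    ON "JobQueue".JOBQUEUE (QUEUED, STATUS);
```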

Regards,
-- 
Ilya Kasnacheev


Sat, 10 Apr 2021 at 00:22, :

> The QUEUED field is a BIGINT that contains a timestamp from
> System.currentTimeMillis(), so it should be pretty easy to sort, shouldn’t
> it? It looks like the field STATUS (used in the WHERE clause) and the field
> QUEUED (used in the ORDER BY clause) are not working optimally together. Does
> this make sense? Do I need to create an index on both together?
>
> I will take a look at UNION and WHERE EXISTS, I‘m not familiar with these
> statements.
>
> Thanks!
>
>
> On 09.04.21 at 17:37, Ilya Kasnacheev wrote:
>
> From: "Ilya Kasnacheev" 
> Date: 9. April 2021
> To: user@ignite.apache.org
> Cc:
> Subject: Re: SQL query performance with JOIN and ORDER BY or WHERE
> Hello!
>
> ORDER BY will have to sort the whole table.
>
> I think that using an index on QUEUED will be optimal here. What is the
> selectivity of this field? If it is boolean, you might as well use UNION
> queries.
>
> Have you tried joining JOBS via WHERE EXISTS?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
>
> Fri, 9 Apr 2021 at 01:03, DonTequila :
>
>> Hi,
>>
>> I have a SQL performance issue. There are indexes on both fields that are
>> used in the ORDER BY clause and the WHERE clause.
>>
>> The following statement takes about 133941 ms with several warnings from
>> IgniteH2Indexing:
>>
>> SELECT JQ._KEY
>> FROM "JobQueue".JOBQUEUE AS JQ
>> INNER JOIN "Jobs".JOBS AS J ON JQ.jobid=J._key
>> WHERE JQ.STATUS = 2
>> ORDER BY JQ.QUEUED ASC
>> LIMIT 20
>>
>> But when I remove the ORDER BY part or the WHERE part from the statement
>> it
>> returns in <10ms.
>>
>> What might I be doing wrong?
>>
>> Thanks,
>> Thomas.
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>

