RE: cache update slow

2019-04-23 Thread Coleman, JohnSteven (Agoda)
Hi,

Thanks for the tip. I implemented it with a data streamer and observed a
significant improvement. However, it still takes >1 ms per cache entry
addition, which is fast enough for my requirements but still >500 times
slower than DMA. Is this largely a factor of network overhead (even though I
use a localhost cache), or of the underlying caching mechanics?

Regards,
John

-Original Message-
From: Maxim.Pudov 
Sent: Tuesday, April 23, 2019 8:12 PM
To: user@ignite.apache.org
Subject: RE: cache update slow



Thanks for sharing your code. I didn't realise you use .NET. Check out how you 
can benefit from data streamer in .NET [1]. It was designed to populate your 
cache faster, so it could help you to improve performance.

[1] https://apacheignite-net.readme.io/docs/data-streamers



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: How to shut down Ignite properly

2019-04-23 Thread Dmitriy Pavlov
Hi,

Just Ignition.stop(true); should be enough to wait until the checkpoint
ends, so a single call to this method, without anything else, will do.

If you made a prior call to ignite.close(), that is equivalent to
Ignition.stop(false) and causes the Ignite node to stop without waiting for
the checkpoint to finish. In that case, further calls have no effect.

In any case, the warning only says that restoring memory during start-up may
take longer; it does not indicate any danger to the data.
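A minimal sketch of the suggested shutdown for an embedded node (class name is illustrative, and the comments restate the advice above rather than documenting the API authoritatively):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class GracefulShutdown {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start(); // embedded node
        // ... application work ...

        // One call is enough: per the explanation above, stop(true) waits
        // for the in-progress checkpoint to finish. Do NOT call
        // ignite.close() first -- it is equivalent to Ignition.stop(false),
        // which returns without waiting and makes later calls no-ops.
        Ignition.stop(true);
    }
}
```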

Sincerely,
Dmitriy Pavlov

Tue, Apr 23, 2019 at 22:58, Jorg Janke :

> We try to shut down Ignite properly via:
>
> m_ignite.close(); // Ignite.close()
> m_ignite.executorService().shutdown();
> Ignition.stop(true);
>
> but still get:
>
> WARN: Ignite node stopped in the middle of checkpoint. Will restore memory
> state and finish checkpoint on node start.
>
> What is the recommended way to stop/shut down an (embedded) ignite
> instance?
>
> Thanks!
> Jorg
>
> Jorg Janke - www.accorto.com - (650) 227-3271
>


How to shut down Ignite properly

2019-04-23 Thread Jorg Janke
We try to shut down Ignite properly via:

m_ignite.close(); // Ignite.close()
m_ignite.executorService().shutdown();
Ignition.stop(true);

but still get:

WARN: Ignite node stopped in the middle of checkpoint. Will restore memory
state and finish checkpoint on node start.

What is the recommended way to stop/shut down an (embedded) ignite instance?

Thanks!
Jorg

Jorg Janke - www.accorto.com - (650) 227-3271


Re: What happens when a client gets disconnected

2019-04-23 Thread Matt Nohelty
What period of time are you asking about?  We deploy fairly regularly, so
our application servers (i.e. the Ignite clients) get restarted at least
weekly, which triggers a disconnect and reconnect event for each. We have
not noticed any issues during our regular release process, but in this case
we are shutting down the Ignite clients gracefully with Ignite#close.
However, it's also possible that something bad happens on an application
server, causing it to crash. This is the scenario where we've seen blocking
across the cluster. We'd obviously like our application servers to be as
independent of one another as possible, and it's problematic if an issue on
one server is allowed to ripple across all of them.

I should have mentioned it in my initial post but we are currently using
version 2.4.  I received the following response on my Stack Overflow post:
"When topology changes, partition map exchange is triggered internally. It
blocks all operations on the cluster. Also in old versions ongoing
rebalancing was cancelled. But in the latest versions client
connection/disconnection doesn't affect some processes like this. So, it's
worth trying the most fresh release"

This comment also mentions PME, so it sounds like you are both referencing
the same behavior. However, the comment also states that client
connect/disconnect events do not trigger PME in more recent versions of
Ignite. Can anyone confirm that this is true and, if so, in which version
this change was made?

Thank you very much for the help.

On Tue, Apr 23, 2019 at 10:00 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> What's the period of time?
>
> When client disconnects, topology will change, which will trigger waiting
> for PME, which will delay all further operations until PME is finished.
>
> Avoid having short-lived clients.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Tue, Apr 23, 2019 at 03:40, Matt Nohelty :
>
>> I already posted this question to stack overflow here
>> https://stackoverflow.com/questions/55801760/what-happens-in-apache-ignite-when-a-client-gets-disconnected
>> but this mailing list is probably more appropriate.
>>
>> We use Apache Ignite for caching and are seeing some unexpected behavior
>> across all of the clients of the cluster when one of the clients fails. The
>> Ignite cluster itself has three servers and there are approximately 12
>> servers connecting to that cluster as clients. The cluster has persistence
>> disabled and many of the caches have near caching enabled.
>>
>> What we are seeing is that when one of the clients fails (out of memory,
>> high CPU, network connectivity, etc.), threads on all the other clients
>> block for a period of time. During these times, the Ignite servers
>> themselves seem fine but I see things like the following in the logs:
>>
>> Topology snapshot [ver=123, servers=3, clients=11, CPUs=XXX, offheap=XX.XGB, heap=XXX.GB]
>> Topology snapshot [ver=124, servers=3, clients=10, CPUs=XXX, offheap=XX.XGB, heap=XXX.GB]
>>
>> The topology itself is clearly changing when a client
>> connects/disconnects but is there anything happening internally inside the
>> cluster that could cause blocking on other clients? I would expect
>> re-balancing of data when a server disconnects but not a client.
>>
>> From a thread dump, I see many threads stuck in the following state:
>>
>> java.lang.Thread.State: TIMED_WAITING (parking)
>> at sun.misc.Unsafe.park(Native Method)
>> - parking to wait for <0x00078a86ff18> (a java.util.concurrent.CountDownLatch$Sync)
>> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
>> at 
>> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
>> at 
>> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
>> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
>> at org.apache.ignite.internal.util.IgniteUtils.await(IgniteUtils.java:7452)
>> at 
>> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.awaitAllReplies(GridReduceQueryExecutor.java:1056)
>> at 
>> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:733)
>> at 
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1339)
>> at 
>> org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
>> at 
>> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$9.iterator(IgniteH2Indexing.java:1403)
>> at 
>> org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
>> at java.lang.Iterable.forEach(Iterable.java:74)...
>>
>> Any ideas, suggestions, or further avenues to investigate would be much
>> appreciated.
>>
>


Re: Ignite goes down with time out exceptions

2019-04-23 Thread Ilya Kasnacheev
Hello!

You can search for "failure handler" mails in the user list history, but
basically you should tune the failure handler away, since checkpoints may
take a lot of time and the handler decides that something is going wrong:
https://apacheignite.readme.io/docs/critical-failures-handling
(Just replacing it with the Noop handler will be easiest.)
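A possible way to set the Noop handler in the Spring XML configuration — a sketch based on the linked docs; fold the property into your existing IgniteConfiguration bean:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Disable default failure handling so long checkpoints do not kill the node. -->
    <property name="failureHandler">
        <bean class="org.apache.ignite.failure.NoopFailureHandler"/>
    </property>
    <!-- ... rest of the node configuration ... -->
</bean>
```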

Regards,
-- 
Ilya Kasnacheev


Tue, Apr 23, 2019 at 10:45, kresimir.horvat :

> Hi,
>
> we have had Ignite shut-downs with timeout exceptions. The timeout
> exceptions occur over a few hours, and then Ignite goes down with "class
> org.apache.ignite.IgniteException: Checkpoint read lock acquisition has
> been timed out.".
>
> Full stack trace in file  Ignite_time_out_exception.txt
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2341/Ignite_time_out_exception.txt>
>
>
> Anyone with idea what might be wrong?
>
> Regards,
> Kresimir
>
>
>
>


Re: Apache Ignite support for continuous queries over sliding windows

2019-04-23 Thread stefano.rebora
Thank you for the answer.

What exactly do you mean by "combining several event batches based on your
logic"? Can you give me an example, please? Can Ignite cache eviction
policies be useful?

Does anyone else have any suggestions about questions 1) and 3)?





Re: What happens when a client gets disconnected

2019-04-23 Thread Ilya Kasnacheev
Hello!

What's the period of time?

When client disconnects, topology will change, which will trigger waiting
for PME, which will delay all further operations until PME is finished.

Avoid having short-lived clients.

Regards,
-- 
Ilya Kasnacheev


Tue, Apr 23, 2019 at 03:40, Matt Nohelty :

> I already posted this question to stack overflow here
> https://stackoverflow.com/questions/55801760/what-happens-in-apache-ignite-when-a-client-gets-disconnected
> but this mailing list is probably more appropriate.
>
> We use Apache Ignite for caching and are seeing some unexpected behavior
> across all of the clients of the cluster when one of the clients fails. The
> Ignite cluster itself has three servers and there are approximately 12
> servers connecting to that cluster as clients. The cluster has persistence
> disabled and many of the caches have near caching enabled.
>
> What we are seeing is that when one of the clients fails (out of memory,
> high CPU, network connectivity, etc.), threads on all the other clients
> block for a period of time. During these times, the Ignite servers
> themselves seem fine but I see things like the following in the logs:
>
> Topology snapshot [ver=123, servers=3, clients=11, CPUs=XXX, offheap=XX.XGB, heap=XXX.GB]
> Topology snapshot [ver=124, servers=3, clients=10, CPUs=XXX, offheap=XX.XGB, heap=XXX.GB]
>
> The topology itself is clearly changing when a client connects/disconnects
> but is there anything happening internally inside the cluster that could
> cause blocking on other clients? I would expect re-balancing of data when a
> server disconnects but not a client.
>
> From a thread dump, I see many threads stuck in the following state:
>
> java.lang.Thread.State: TIMED_WAITING (parking)
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for <0x00078a86ff18> (a java.util.concurrent.CountDownLatch$Sync)
> at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> at org.apache.ignite.internal.util.IgniteUtils.await(IgniteUtils.java:7452)
> at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.awaitAllReplies(GridReduceQueryExecutor.java:1056)
> at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:733)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$8.iterator(IgniteH2Indexing.java:1339)
> at 
> org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$9.iterator(IgniteH2Indexing.java:1403)
> at 
> org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
> at java.lang.Iterable.forEach(Iterable.java:74)...
>
> Any ideas, suggestions, or further avenues to investigate would be much
> appreciated.
>


Re: CacheContinuousQuery memory use and best practices

2019-04-23 Thread Ilya Kasnacheev
Hello!

Since nobody is chiming in, my opinion is that:

1) Please don't throw exceptions out of this handler!
2) I don't think you can switch it off; this is implemented the same way as
in other places of JCache.
3) I think it still makes sense. Avoid blocking in the continuous query
handler.

As for your client-server scenario, I'm not sure what to do. If you have a
lot of small updates, try increasing pageSize; it's 1024 by default.
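The advice in point 3 is commonly implemented as a hand-off: the listener only enqueues, and a dedicated consumer thread does the slow work. A minimal, JDK-only sketch of that pattern (class and event names are illustrative, not Ignite API):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CqHandoff {
    // Bounded queue: a full queue exposes a slow consumer instead of
    // letting events pile up on the server's outbound queue.
    static final BlockingQueue<String> EVENTS = new ArrayBlockingQueue<>(10_000);

    // Called from the continuous-query listener thread: must return quickly.
    static boolean onEvent(String evt) {
        return EVENTS.offer(evt); // false if full -> log, drop, or apply backpressure
    }

    public static void main(String[] args) throws InterruptedException {
        Thread consumer = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String evt = EVENTS.take(); // slow processing happens here
                    System.out.println("processed " + evt);
                }
            } catch (InterruptedException ignored) {
                // shutting down
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        onEvent("update-1");
        onEvent("update-2");
        Thread.sleep(200); // give the consumer time to drain
    }
}
```

The bounded capacity is the important part: an unbounded queue just moves the memory problem from the server to the client.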

Regards,
-- 
Ilya Kasnacheev


Tue, Apr 16, 2019 at 14:23, johnny_rotten :

> Hi, I'm looking into an issue where we have an Ignite (2.6) client node
> doing a CacheContinuousQuery on a cache full of binary objects, and
> eventually the server node runs out of memory. Usually the server node is
> happy to run with 2-3 GB of heap; however, when this client is running a
> CacheContinuousQuery on a cache, heap usage can go above 20 GB until the
> client is stopped, and then 20 minutes later objects are garbage collected
> on the server node and heap usage goes back down to 2 GB. In heap dumps I
> see it is full of CacheContinuousQueryEvents.
>
> Some questions:
>
> 1) In my client continuous query handler code, what happens when an error
> is thrown:
>
> private void handleCacheEvent(CacheEntryEvent<?, ? extends MyBinaryObject> event) {
>     // try to deserialize event, but it fails with an error
> }
>
> I don't see any exception thrown. Will this event stay in the cache as an
> unconsumed event, potentially causing a leak for events that have not been
> handled correctly?
>
> 2) Why does CacheContinuousQueryEvent keep a reference to 'oldVal', i.e.
> the old value in the cache? This could be causing a problem, as we don't
> care about old values in the cache. Can we switch that off? Why isn't that
> the default?
>
> 3) In the method that handles cache events, is it best practice to put the
> cache event straight onto a blocking queue to make sure there is no
> slow-consumer problem? It makes sense to me, but I don't see it recommended
> anywhere. If we don't, I can imagine the outbound queue of the server node
> growing...
>
> thanks for any pointers!
>
>
>
>


Re: Ignite DataStreamer Memory Problems

2019-04-23 Thread kellan
Any suggestions on where I can go from here? I'd like to find a way to
isolate this problem before I have to look into other storage/grid
solutions. A lot of work has gone into integrating Ignite into our platform,
and I'd really hate to start from scratch. I can provide as much information
as needed to help pinpoint this problem and do additional tests on my end.

Are there any projects out there that have successfully run Ignite on
Kubernetes with Persistence and a high-volume write load?

I've been looking into using third-party persistence, but we require SQL
queries to fetch the bulk of our data, and it seems this isn't really
possible with Cassandra et al. unless I know in advance what data needs to
be loaded into memory. Is that a safe assumption to make?





RE: cache update slow

2019-04-23 Thread Maxim.Pudov
Thanks for sharing your code. I didn't realise you use .NET. Check out how
you can benefit from data streamer in .NET [1]. It was designed to populate
your cache faster, so it could help you to improve performance.

[1] https://apacheignite-net.readme.io/docs/data-streamers
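For comparison, the Java API follows the same pattern as the linked .NET one. A rough sketch (cache name and types are made up, and it assumes a cache called "myCache" already exists):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // try-with-resources flushes and closes the streamer on exit
            try (IgniteDataStreamer<Integer, String> streamer =
                     ignite.dataStreamer("myCache")) {
                streamer.perNodeBufferSize(1024); // entries batched per node
                for (int i = 0; i < 100_000; i++) {
                    streamer.addData(i, "value-" + i);
                }
            } // close() flushes any remaining buffered entries
        }
    }
}
```

The streamer gains its speed from batching, so per-entry latency is traded for throughput: individual addData calls return before the entry is in the cache.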





Re: Most efficient way to update a specific field of a Value in cache

2019-04-23 Thread kcheng.mvp
Can you use a SQL UPDATE clause to achieve this?
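For example (a sketch; the table and column names are hypothetical and must match fields you have declared as query fields):

```sql
-- Flip the boolean field for all matching entries in one statement,
-- instead of a get/modify/put round-trip per key.
UPDATE MyClass SET myFlag = true WHERE someGroupId = 42;
```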





Re: Table not getting dropped

2019-04-23 Thread Ilya Kasnacheev
Hello!

This looks almost exactly like a known issue
https://issues.apache.org/jira/browse/IGNITE-8055 and its friends, which
will be fixed in 2.8.

Regards,

-- 
Ilya Kasnacheev


Mon, Apr 22, 2019 at 18:12, shivakumar :

> Hi all,
> I created one table over a JDBC connection and batch-inserted around 13
> crore records into it. I then tried to drop the table from sqlline, but it
> hangs for some time and gives a *java.sql.SQLException: Statement is
> closed* exception. If I check the number of records using *select count(*)
> from Cell;*, all 13 crore records still exist in the table, but if I try
> to drop it again it says the table does not exist.
>
> [root@ignite-st-controller bin]# ./sqlline.sh --verbose=true -u
> "jdbc:ignite:thin://10.*.*.*:10800;user=ignite;password=ignite;"
> issuing: !connect
> jdbc:ignite:thin://10.*.*.*:10800;user=ignite;password=ignite; '' ''
> org.apache.ignite.IgniteJdbcThinDriver
> Connecting to
> jdbc:ignite:thin://10.*.*.*:10800;user=ignite;password=ignite;
> Connected to: Apache Ignite (version 2.7.0#19700101-sha1:)
> Driver: Apache Ignite Thin JDBC Driver (version
> 2.7.0#20181130-sha1:256ae401)
> Autocommit status: true
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> sqlline version 1.3.0
> 0: jdbc:ignite:thin://10.*.*.*:10800> DROP TABLE IF EXISTS CELL;
> Error: Statement is closed. (state=,code=0)
> java.sql.SQLException: Statement is closed.
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.ensureNotClosed(JdbcThinStatement.java:862)
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.getWarnings(JdbcThinStatement.java:454)
> at sqlline.Commands.execute(Commands.java:849)
> at sqlline.Commands.sql(Commands.java:733)
> at sqlline.SqlLine.dispatch(SqlLine.java:795)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
> 0: jdbc:ignite:thin://10.*.*.*:10800>
> 0: jdbc:ignite:thin://10.*.*.*:10800> select count(*) from CELL;
> ++
> |COUNT(*)|
> ++
> | 131471437  |
> ++
> 1 row selected (4.564 seconds)
> 0: jdbc:ignite:thin://10.*.*.*:10800> DROP TABLE CELL;
> Error: Table doesn't exist: CELL (state=42000,code=3001)
> java.sql.SQLException: Table doesn't exist: CELL
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:750)
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:212)
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:475)
> at sqlline.Commands.execute(Commands.java:823)
> at sqlline.Commands.sql(Commands.java:733)
> at sqlline.SqlLine.dispatch(SqlLine.java:795)
> at sqlline.SqlLine.begin(SqlLine.java:668)
> at sqlline.SqlLine.start(SqlLine.java:373)
> at sqlline.SqlLine.main(SqlLine.java:265)
> 0: jdbc:ignite:thin://10.*.*.*:10800> select count(*) from CELL;
> ++
> |COUNT(*)|
> ++
> | 131482007  |
> ++
> 1 row selected (1.264 seconds)
> 0: jdbc:ignite:thin://10.*.*.*:10800>
>
>
>
>
> regards,
> shiva
>
>
>
>
>
>
>


Most efficient way to update a specific field of a Value in cache

2019-04-23 Thread matanlevy
Hi,

I am using Ignite in C# and I wonder what the most efficient way is to
update a specific field in my cache value.

For example, my cache values are MyClass objects, and MyClass has a boolean
field (among many other fields). I need to update only this specific boolean
field for a large number of keys in my cache.

Is there any alternative to getting all my keys, modifying the MyClass
object (actually only the boolean field) for each of them, and putting all
the changes back?

Thanks!





SQL delete command is slow and can cause OOM

2019-04-23 Thread colinc
The Ignite SQL delete command seems to load all entries (both keys and
values) on heap before deleting them from the cache. This is slow and we
have seen it cause JVM heap to go OOM.

The docs state that a select is used to gather the keys of records being
deleted:
https://apacheignite-sql.readme.io/docs/delete

But the below stack trace indicates that the embedded select statement
retrieves both _KEY and _VAL. Is this required? Is there a recommended way
to delete entries without causing high heap usage?

Thanks,
Colin.


Caused by: org.apache.ignite.IgniteException: Failed to execute SQL query.
Out of memory.; SQL statement: 
SELECT 
_KEY, 
_VAL 
FROM "PortfolioDataAccessCompositeService:AGGREGATE_CACHE".INDEXEDMODELIMPL 
WHERE SESSIONID = ?1 [90108-197] 
at
org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor$3.iterator(DmlStatementsProcessor.java:645)
~[ignite-indexing-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:95)
~[ignite-core-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.doDelete(DmlStatementsProcessor.java:783)
~[ignite-indexing-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.processDmlSelectResult(DmlStatementsProcessor.java:710)
~[ignite-indexing-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.executeUpdateStatement(DmlStatementsProcessor.java:653)
~[ignite-indexing-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFields(DmlStatementsProcessor.java:185)
~[ignite-indexing-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFieldsLocal(DmlStatementsProcessor.java:387)
~[ignite-indexing-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.doRunPrepared(IgniteH2Indexing.java:2266)
~[ignite-indexing-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:2209)
~[ignite-indexing-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2135)
~[ignite-core-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2130)
~[ignite-core-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
~[ignite-core-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2707)
~[ignite-core-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2144)
~[ignite-core-2.7.0.jar:2.7.0] 
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:685)
~[ignite-core-2.7.0.jar:2.7.0] 
... 27 more 





Ignite goes down with time out exceptions

2019-04-23 Thread kresimir.horvat
Hi,

we have had Ignite shut-downs with timeout exceptions. The timeout
exceptions occur over a few hours, and then Ignite goes down with "class
org.apache.ignite.IgniteException: Checkpoint read lock acquisition has been
timed out.".

Full stack trace is in the attached file Ignite_time_out_exception.txt.

Anyone with idea what might be wrong?

Regards,
Kresimir


