delete data error

2018-01-16 Thread Lucky
Hi,

When deleting some data, I get an error:

    sql = "delete from \"tmpCompanyCuBaseDataCache\".TmpCompanyCuBaseData";
    cache.query(new SqlFieldsQuery(sql));

Executing the same SQL via JDBC gives the same error. The table has only
about 20 records, and I checked the data: it looks normal.

What did I miss?
Thanks.
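For reference, the JDBC attempt mentioned above would look roughly like this via the Ignite JDBC thin driver (a sketch only; the connection URL is illustrative, and the Ignite JDBC driver must be on the classpath):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DeleteViaJdbc {
    public static void main(String[] args) throws Exception {
        // Connect through the Ignite JDBC thin driver (host is illustrative).
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement()) {
            // Same DELETE statement the cache.query() call issues.
            int removed = stmt.executeUpdate(
                "delete from \"tmpCompanyCuBaseDataCache\".TmpCompanyCuBaseData");
            System.out.println("Removed " + removed + " rows");
        }
    }
}
```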


Here is the trace:


javax.cache.CacheException: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to process key '1516002613660-100-82'
	at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:597)
	at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:560)
	at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:382)
	at xxx.xxx.xxx.basedata.framework.app.IgniteDatabaseDAssignService.batchExecute(IgniteDatabaseDAssignService.java:50)
	at xxx.xxx.xxx.sss.sss.sss.sss.CustomerControllerBean.batchAssignCompanyInfo2(CustomerControllerBean.java:5226)
	at xxx.xxx.xxx.sss.sss.sss.sss.CustomerControllerBean._batchAssignAssist(CustomerControllerBean.java:5718)
	at xxx.xxx.xxx.sss.sss.sss.sss.AbstractCustomerControllerBean.batchAssignAssist(AbstractCustomerControllerBean.java:862)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
	at java.lang.reflect.Method.invoke(Method.java:619)
	at xxx.xxx.x22.transaction.EJBTxFacade.TxInvokerBean.invoke(TxInvokerBean.java:125)
	at xxx.xxx.x22.transaction.EJBTxFacade.TxInvokerBean.INVOKE_REQUIRED(TxInvokerBean.java:60)
	at xxx.xxx.x22.transaction.EJBTxFacade.TxInvokerBean_LocalObjectImpl_2.INVOKE_REQUIRED(Unknown Source)
	at xxx.xxx.x22.transaction.EJBTransactionProxy.invoke(EJBTransactionProxy.java:171)
	at xxx.xxx.x22.transaction.EJBTransactionProxy.invoke(EJBTransactionProxy.java:324)
	at com.sun.proxy.$Proxy465.batchAssignAssist(Unknown Source)
	at xxx.xxx.xxx.basedata.master.cssp.Customer.batchAssignAssist(Customer.java:595)
	at rpc_generate._PROXY_com_1_kingdee_1_eas_1_basedata_1_master_1_cssp_1_ICustomer.pi120(Unknown Source)
	at rpc_generate._PROXY_com_1_kingdee_1_eas_1_basedata_1_master_1_cssp_1_ICustomer.processInvoke(Unknown Source)
	at xxx.xxx.x22.rpc.impl.ObjectProxy.processInvoke(ObjectProxy.java:177)
	at xxx.xxx.x22.rpc.impl.RPCService.serviceInvoke(RPCService.java:788)
	at xxx.xxx.x22.rpc.impl.RPCService.service(RPCService.java:141)
	at xxx.xxx.x22.rpc.impl.ServiceDispatcher.run(ServiceDispatcher.java:153)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:482)
	at java.util.concurrent.FutureTask.run(FutureTask.java:273)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1176)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
	at java.lang.Thread.run(Thread.java:853)
Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to process key '1516002613660-100-82'
	at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.doDelete(DmlStatementsProcessor.java:581)
	at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.processDmlSelectResult(DmlStatementsProcessor.java:444)
	at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.executeUpdateStatement(DmlStatementsProcessor.java:420)
	at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFields(DmlStatementsProcessor.java:194)
	at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.updateSqlFieldsDistributed(DmlStatementsProcessor.java:229)
	at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.queryDistributedSqlFields(IgniteH2Indexing.java:1453)
	at org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1909)
	at org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:1907)
	at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
	at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:2445)
	at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:1914)
	at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:585)
	... 28 more
Caused by: java.sql.SQLException: Failed to process key '1516002613660-100-82'
	at org.apache.ignite.internal.processors.cache.query.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:116)
	at org.apache.ignite.internal.processors.query.h2.DmlStatementsProcessor.splitErrors(DmlStatementsProcessor.java:794)
	at

Re: ignite c++ client CacheEntryEventFilter has no effect!

2018-01-16 Thread Edward Wu
Yes, my server node is a plain Java node.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 3rd Party Persistence and two phase commit

2018-01-16 Thread vkulichenko
Aleksey,

Are you talking about a use case where a transaction spans a distributed cache
AND a local cache? If so, this sounds like a very weird use case. Do we even
allow this?

-Val





Re: 3rd Party Persistence and two phase commit

2018-01-16 Thread Denis Magda
Guys, let me step in and explain how it works with a 3rd party database like an 
RDBMS.

1. Write-through mode (all the changes are persisted right away).

Transaction coordinator commits a transaction at the RDBMS level first and only 
*then* commits it at the cluster level. This is actually what that blog is 
about. So, the transaction coordinator (your application) maintains a direct 
connection to the database.

If the transaction fails at the database level, it won't be committed on the 
cluster side at all.

2. Write-behind mode (the changes are persisted asynchronously).

Transaction coordinator commits a transaction at the cluster level and primary 
nodes will commit the changes asynchronously depending on the workload and 
settings of the write-behind store impl that writes to your database.

Hope this helps.
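The two modes above can be sketched in configuration like this (a hedged sketch only: cache names are illustrative, and MyRdbmsStore is assumed to be your own CacheStore implementation that talks to the RDBMS):

```java
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

public class StoreModes {
    // 1. Write-through: every commit hits the RDBMS synchronously.
    public static CacheConfiguration<Long, String> writeThrough() {
        return new CacheConfiguration<Long, String>("wtCache")
            .setCacheStoreFactory(FactoryBuilder.factoryOf(MyRdbmsStore.class))
            .setReadThrough(true)
            .setWriteThrough(true);
    }

    // 2. Write-behind: changes are flushed to the RDBMS asynchronously.
    public static CacheConfiguration<Long, String> writeBehind() {
        return new CacheConfiguration<Long, String>("wbCache")
            .setCacheStoreFactory(FactoryBuilder.factoryOf(MyRdbmsStore.class))
            .setWriteThrough(true)
            .setWriteBehindEnabled(true)
            .setWriteBehindFlushFrequency(5_000)   // flush at most every 5 s
            .setWriteBehindFlushSize(10_240);      // or when 10k entries queue up
    }
}
```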

—
Denis

> On Jan 11, 2018, at 3:35 AM, ALEKSEY KUZNETSOV  
> wrote:
> 
> Local store means a store that resides on only one node. No other nodes see it.
> 
> If you don't have local stores in the cluster (only distributed ones), then 
> only one DB connection will be opened within the transaction. 
> But if you have local stores, then nodes may open their own connections to 
> them (i.e., in a replicated cache, nodes may open connections to local 
> stores, if any).
> 
> Only local stores can be filled with data from backup nodes, therefore a new 
> connection must be opened (the old one from the primary node cannot be reused).
> 
> 
> Thu, Jan 11, 2018 at 13:48, Andrey Nestrogaev wrote:
> Hi Aleksey, thanks for info,
> 
> "/Actually, data could be persisted not on tx initiating node, but on
> primary(I.e. we have partitioned cache and local cache)/"
> Ok, but no matter where the data is persisted, there will always be only 1
> database connection within the transaction, no matter how many nodes are
> involved in the transaction.
> Right?
> 
> "/Additionally, data would be persisted on backup node if you enable the
> corresponding flag./"
> Would be persisted on backup node with using the same database connection as
> for primary node?
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/ 
> 
> 
> 
> -- 
> Best Regards,
> 
> Kuznetsov Aleksey
> 



Re: Limit cache size ?

2018-01-16 Thread vkulichenko
Jeff,

By default you will have a single data region limited to 80% of physical
memory, and all caches will be assigned to this region.

-Val
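For reference, that default can be overridden by capping the default data region explicitly; a minimal sketch (the region name and sizes are illustrative):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class RegionLimit {
    public static void main(String[] args) {
        // Cap the default data region at 4 GB instead of 80% of physical RAM.
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("limited")
            .setInitialSize(256L * 1024 * 1024)
            .setMaxSize(4L * 1024 * 1024 * 1024);

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(region);

        Ignition.start(new IgniteConfiguration()
            .setDataStorageConfiguration(storage));
    }
}
```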





Re: Cursor in TextQuery - first hasNext() is slow

2018-01-16 Thread vkulichenko
zbyszek,

Would you mind creating a Jira ticket describing the issue and the proposed
solution? Even better if you provide a patch for it; it sounds like you're a
bigger expert in Lucene than me :)

-Val





Re: How to handle Cache adapter failures?

2018-01-16 Thread vkulichenko
Matt,

Can you please clarify what you mean by a custom cache adapter?

-Val





How to handle Cache adapter failures?

2018-01-16 Thread matt
Hi,

Our application has a custom cache adapter, and we'd like to deal with
failures when attempting to execute one of the adapter methods (load,
deleteAll, etc.). If our application has one centralized node coordinating a
job, how can we detect these failures happening on other nodes? I guess this
applies to calls to invoke, but also to the asynchronous write-through calls.

Thanks,
- Matt





Re: Create BinaryObject without starting Ignite?

2018-01-16 Thread zbyszek
Hi Val,

thank you for the confirmation.

>> What is the purpose of this?
The purpose was to prepare an object prototype (an object with the same
structure layout, to ensure the same schema version for all updates) without
having access to Ignite.
But as it turned out, I managed to obtain a reference to Ignite, so my
particular problem is solved.

Still, it is good to know that it is not easily achievable.

Thanks and regards,
zbyszek
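For completeness, once a node reference is available, such a fixed-schema prototype can be sketched with the binary builder API (type and field names here are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

public class Prototype {
    // Build a BinaryObject template so that all later updates share one schema:
    // every field is declared up front, keeping the schema version stable.
    static BinaryObject buildPrototype(Ignite ignite) {
        BinaryObjectBuilder builder = ignite.binary().builder("MyPrototypeType");
        builder.setField("id", 0L);
        builder.setField("name", "");
        return builder.build();
    }
}
```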





Re: IgniteOutOfMemoryException when using putAll instead of put

2018-01-16 Thread Larry Mark
No problem; this is not a short-term blocker. It is just something I need to
understand better, to make sure I do not configure things in a way that
causes unexpected OOMs in production.


On Mon, Jan 15, 2018 at 1:18 PM, Alexey Popov  wrote:

> Hi Larry,
>
> I am without my PC for a while. I will check the file you attached later
> this week.
>
> Thanks,
> Alexey
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Questions regarding Ignite as hdfs cache

2018-01-16 Thread Evgenii Zhuravlev
Hi,

1) You can run Ignite in non-quiet mode with the '-v' flag, or even with
DEBUG logs; that will definitely show whether IGFS is being used. Alternatively,
you can deliberately introduce a mistake in the IGFS configuration and check
whether queries still run: if you get exceptions, it means Ignite was in use.

2,3,4) I'm not sure how large your dataset is, but I would recommend giving
IGFS more memory: all data should fit in IGFS, otherwise it can lead to a lot
of data moving between HDFS and IGFS, which obviously may affect performance.
Also, regarding the query you've shared, it looks quite strange to me: you
are joining 6 tables while taking only 3 fields from them. Are you sure you
are using an optimal DB structure?

5) Ignite uses its own serialization algorithm, you can read about it here:
https://apacheignite.readme.io/docs/binary-marshaller

Evgenii

2017-11-02 9:49 GMT+03:00 shailesh prajapati :

> Hello,
>
> I am evaluating Ignite to be able to use it as a hdfs cache to speedup my
> hive queries. I am using hive with tez. Below are my cluster and Ignite
> configurations,
>
> *Cluster: *
> 4 data nodes with 32gb RAM each, 1 edge node
> 4 ignite servers, one for each data node. Ignite servers were started with
> Xmx10g
>
> *Setup done using:*
> https://apacheignite-fs.readme.io/docs/installing-on-hortonworks-hdp
> https://apacheignite-fs.readme.io/docs/running-apache-
> hive-over-ignited-hadoop
>
> *Ignite configuration file (provided to each ignite server): *
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>
> 
> 
> 
> 
> 
> 
> 
> 
>
> 
>
> 
> 
> 
> 
>  value="hdfs://:8020/"/>
> 
> 
> 
> 
>
> 
> 
> 
> 
> 
> 
> 
> 
> 
> node1:47500..47509
> node2:47500..47509
>  node3:47500..47509
>  node4:47500..47509
> 
> 
> 
> 
> 
> 
> 
>
> *Dataset used for the experiment: *
> TPCH
> customer 150 rows
> lineitem 59986052 rows
> nation 25 rows
> orders 1500 rows
> part 200 rows
> partsupp 800 rows
> region 5 rows
> supplier 10 rows
>
> and using standard TPCH queries
>
> *Querying from hive shell with below properties:*
> set fs.default.name=igfs://igfs@node1:10500/;
>
>
>
> I have now following questions:
>
> 1) My queries are running fine with the above configurations. I want to
> see whether the data is being cached and served from the cache or not. How
> should I check this? I used Ignite Visor to see if the data is available in
> a cache, but I did not find any cache there.
>
> However, in the Ignite server logs, I can see messages for local node
> metrics like those shown below. The heap usage continuously increases while
> running queries. What does this mean?
>
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=e38943b2, name=null, uptime=03:02:18:866]
> ^-- H/N/C [hosts=4, nodes=4, CPUs=32]
> ^-- CPU [cur=0.23%, avg=0.13%, GC=0%]
> ^-- PageMemory [pages=7381]
> ^-- Heap [used=1050MB, free=88.46%, comm=3343MB]
> ^-- Non heap [used=83MB, free=98.45%, comm=84MB]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=6, qSize=0]
> ^-- Outbound messages queue [size=0]
>
>
> 2) I ran queries on both hive+tez+hdfs and hive+tez+ignite+hdfs. I found
> that the queries are slower when using Ignite as a cache layer. For example,
> consider the standard TPCH query below:
>
> select
> n_name,
> sum(l_extendedprice * (1 - l_discount)) as revenue
> from
> customer,
> orders,
> lineitem,
> supplier,
> nation,
> region
> where
> c_custkey = o_custkey
> and l_orderkey = o_orderkey
> and l_suppkey = s_suppkey
> and c_nationkey = s_nationkey
> and s_nationkey = n_nationkey
> and n_regionkey = r_regionkey
> and r_name = 'AFRICA'
> and o_orderdate >= '1993-01-01'
> and o_orderdate < '1994-01-01'
> group by
> n_name
> order by
> revenue desc;
>
> Hive+tez avg time: 35.542s
> Hive+tez+ignite avg time: 38.221s
>
> Am I using a wrong configuration?
>
> 3) I tried running queries with ignite MR with below configs set in hive.
> set hive.rpc.query.plan = true;
> set hive.execution.engine = mr;
> set mapreduce.framework.name = ignite;
> set mapreduce.jobtracker.address = node1:11211;
>
> The queries were even slower than hive+tez+ignite. Is there any other
> configuration for Ignite MR that I need to set?
>
> 4) Are my configurations optimal? If not, can you please suggest one?
>
> 5) What serialization algo (kryo, native java ...) Ignite uses?
>
> Thanks
>
>
>
>


Re: ignite.sh spring xml file secret.properties file not found error

2018-01-16 Thread Ganesh Kumar
I have tried setting CLASSPATH
$ echo $CLASSPATH
/opt/vdp/ignite/config

and also tried providing the full path of secret.properties in my Spring XML
file. I am still getting the same error when I try to start the Ignite instance.
Code snippet from the XML file:






Error :
class org.apache.ignite.IgniteException: Failed to instantiate Spring XML
application context
[springUrl=file:/opt/vdp/apache-ignite-fabric-2.3.0-bin/config/ganesh.xml,
err=Could not load properties; nested exception is
java.io.FileNotFoundException: class path resource
[opt/vdp/ignite/config/secret.properties] cannot be opened because it does
not exist]
at
org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:966)

Any thoughts on how to solve this?

Thanks - Ganesh Kumar
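The original XML snippet did not survive in the archive; a minimal sketch of a Spring placeholder bean that loads the file by absolute path (with a file: URL) rather than from the classpath, with paths purely illustrative:

```xml
<!-- Load secret.properties from the file system, not the classpath. -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="file:/opt/vdp/ignite/config/secret.properties"/>
</bean>
```

The error shows Spring resolving the location as a classpath resource; a file: prefix forces a file-system lookup instead.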






Re: IgniteDataStreamer

2018-01-16 Thread gene
Thank you - going to trying this in a bit.

-gene





Re: Purpose of cache in cache.query(Create Table ...) statement

2018-01-16 Thread Andrey Mashenkov
Hi

It seems you are looking for cache groups [1].
You can use the same cache group for a number of caches [2] by setting an
additional parameter in the CREATE TABLE query.

[1] https://apacheignite.readme.io/docs/cache-groups
[2] https://apacheignite-sql.readme.io/docs/create-table
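A sketch of what [2] describes, issued through the same cache.query() API used in the question (table, cache, and group names are illustrative; `cache` is assumed to be an existing IgniteCache reference):

```java
import org.apache.ignite.cache.query.SqlFieldsQuery;

// Tables created with the same CACHE_GROUP share one underlying cache group,
// which shares internal structures and reduces per-cache overhead.
SqlFieldsQuery ddl = new SqlFieldsQuery(
    "CREATE TABLE Company (id LONG PRIMARY KEY, name VARCHAR) " +
    "WITH \"CACHE_GROUP=companyGroup,CACHE_NAME=CompanyCache\"");
cache.query(ddl).getAll();
```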

On Tue, Jan 16, 2018 at 9:43 AM, Shravya Nethula <
shravya.neth...@aline-consulting.com> wrote:

> Hi Andrey,
>
> Thank you for the information.
>
> I want to create some tables using the cache.query(Create Table ...)
> statement. Is there any way I can group some of my tables in one cache? Is
> there any hierarchy for organizing caches, like a super cache holding some
> sub-caches?
>
> Regards,
> Shravya Nethula.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Nodes can not join the cluster after reboot

2018-01-16 Thread Evgenii Zhuravlev
Hi,

Most likely, on one of the nodes you have a hung transaction/future/lock or
even a deadlock; that's why new nodes can't join the cluster: they can't
complete the partition map exchange due to the pending operation. Please
share full logs from all nodes, with thread dumps; that will help to find the
root cause.

Evgenii

2018-01-16 5:35 GMT+03:00 aa...@tophold.com :

> Hi All,
>
> We have an Ignite cluster running about 20+ nodes. To guard against any JVM
> memory issues we schedule a reboot of those nodes in the middle of the night.
>
> But in order to keep the service available, we reboot them one by one, e.g.
> nodes A, B, C, D with a 5-minute delay between them. When we do so, the
> rebooted nodes can never join the cluster again.
>
> Eventually the entire cluster stops working, forever waiting for nodes to
> join the topology; we have to kill everything and restart from scratch,
> which sounds incredible.
>
> I'm not sure whether anyone has met this issue before, or what mistake we
> may have made; attached is the ignite log.
>
>
> Thanks for your time!
>
> Regards
> Aaron
> --
> Aaron.Kuai
>


Re: ZooKeeper Based Discovery

2018-01-16 Thread Yakov Zhdanov
Dmitry, there were a few threads on the dev list discussing the new discovery
SPI implementation. You can find the current code in the "ignite-zk" branch.
The work is still in progress, but it is mostly ready and will be merged to
master soon. Documentation and algorithm descriptions are still on the TODO
list.
Stay tuned.

--Yakov

2018-01-16 14:32 GMT+03:00 Kvon, Dmitriy :

> Hi,
>
>
>1. What should be in the ZooKeeper node?
>2. Will DiscoverySpi watch the ZooKeeper node for changes?
>
>
>
> Thanks,
>
> Dmitriy
>
>
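As a rough sketch of how a ZooKeeper-based discovery SPI is wired in (class and setter names assume the ignite-zookeeper module that this branch became; the connection string and paths are illustrative):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.zk.ZookeeperDiscoverySpi;

public class ZkDiscovery {
    public static void main(String[] args) {
        // Point discovery at a ZooKeeper ensemble instead of TCP/IP discovery.
        ZookeeperDiscoverySpi spi = new ZookeeperDiscoverySpi()
            .setZkConnectionString("zk1:2181,zk2:2181,zk3:2181")
            .setZkRootPath("/apacheIgnite")  // znode the cluster state lives under
            .setSessionTimeout(30_000);

        Ignition.start(new IgniteConfiguration().setDiscoverySpi(spi));
    }
}
```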


ZooKeeper Based Discovery

2018-01-16 Thread Kvon, Dmitriy
Hi,

1.  What should be in the ZooKeeper node?
2.  Will DiscoverySpi watch the ZooKeeper node for changes?


Thanks,

Dmitriy



Re: ignite c++ client CacheEntryEventFilter has no effect!

2018-01-16 Thread Igor Sapego
Hello. So, is your server node a plain Java node?

Best Regards,
Igor

On Sat, Jan 13, 2018 at 5:46 AM, Edward Wu  wrote:

> Hello everyone. I use the C++ client library to connect to an Ignite server
> node as a client node, then run the continuous-query example. I find that
> the RangeFilter (RangeFilter : event::CacheEntryEventFilter) doesn't have
> any effect, because the Listener (Listener : public
> event::CacheEntryEventListener) prints all the elements I have put. And I
> find the Ignite server node prints the following errors:
> *[16:22:06,841][SEVERE][sys-stripe-4-#5][query] CacheEntryEventFilter
> failed: class o.a.i.IgniteException: Platforms are not available
> [nodeId=9ff3f363-c3bc-4a3a-a38d-8bc8655b8682] (Use
> Apache.Ignite.Core.Ignition.Start() or Apache.Ignite.exe to start
> Ignite.NET
> nodes; ignite::Ignition::Start() or ignite.exe to start Ignite C++ nodes).*
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Segmentation fault (JVM crash) while memory restoring on start with native persistance

2018-01-16 Thread Arseny Kovalchuk
Hi Andrey.

Unfortunately, I couldn't copy all the data from the file system to try
reproducing the issue locally or in our cluster. It was very likely due to
some issues with our underlying CEPH behavior; we also had some problems with
CEPH in our cluster at the same time, which might have caused data
corruption. So, no results with OracleJDK.

On the other hand, we disabled backup copies of data ("backups=0"), taking
into account the information from the mentioned JIRAs, and we haven't had any
severe issues with Ignite persistence so far.



Arseny Kovalchuk

Senior Software Engineer at Synesis
skype: arseny.kovalchuk
mobile: +375 (29) 666-16-16
LinkedIn Profile

On 15 January 2018 at 17:50, Andrey Mashenkov 
wrote:

> Hi Arseny,
>
> Have you success with reproducing the issue and getting stacktrace?
> Do you observe same behavior on OracleJDK?
>
> On Mon, Jan 15, 2018 at 5:50 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi Arseny,
>>
>> Have you success with reproducing the issue and getting stacktrace?
>> Do you observe same behavior on OracleJDK?
>>
>> On Tue, Dec 26, 2017 at 2:43 PM, Andrey Mashenkov <
>> andrey.mashen...@gmail.com> wrote:
>>
>>> Hi Arseny,
>>>
>>> This looks like a known issues that is unresolved yet [1],
>>> but we can't sure it is same issue as there is no stacktrace in logs
>>> attached.
>>>
>>>
>>> [1] https://issues.apache.org/jira/browse/IGNITE-7278
>>>
>>> On Tue, Dec 26, 2017 at 12:54 PM, Arseny Kovalchuk <
>>> arseny.kovalc...@synesis.ru> wrote:
>>>
 Hi guys.

 We've successfully tested Ignite as an in-memory solution, and it showed
 acceptable performance. But we cannot get an Ignite cluster to work stably
 with native persistence enabled. The first error we got is a segmentation
 fault (JVM crash) while restoring memory on start.

 [2017-12-22 11:11:51,992]  INFO [exchange-worker-#46%ignite-instance-0%]
 org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:
 - Read checkpoint status [startMarker=/ignite-work-dire
 ctory/db/ignite_instance_0/cp/1513938154201-8c574131-763d-4c
 fa-99b6-0ce0321d61ab-START.bin, endMarker=/ignite-work-directo
 ry/db/ignite_instance_0/cp/1513932413840-55ea1713-8e9e-44cd-
 b51a-fcad8fb94de1-END.bin]
 [2017-12-22 11:11:51,993]  INFO [exchange-worker-#46%ignite-instance-0%]
 org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:
 - Checking memory state [lastValidPos=FileWALPointer [idx=391,
 fileOffset=220593830, len=19573, forceFlush=false],
 lastMarked=FileWALPointer [idx=394, fileOffset=38532201, len=19573,
 forceFlush=false], lastCheckpointId=8c574131-763d
 -4cfa-99b6-0ce0321d61ab]
 [2017-12-22 11:11:51,993]  WARN [exchange-worker-#46%ignite-instance-0%]
 org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager:
 - Ignite node stopped in the middle of checkpoint. Will restore memory
 state and finish checkpoint on node start.
 [CodeBlob (0x7f9b58f24110)]
 Framesize: 0
 BufferBlob (0x7f9b58f24110) used for StubRoutines (2)
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  Internal Error (sharedRuntime.cpp:842), pid=221,
 tid=0x7f9b473c1ae8
 #  fatal error: exception happened outside interpreter, nmethods and
 vtable stubs at pc 0x7f9b58f248f6
 #
 # JRE version: OpenJDK Runtime Environment (8.0_151-b12) (build
 1.8.0_151-b12)
 # Java VM: OpenJDK 64-Bit Server VM (25.151-b12 mixed mode linux-amd64
 compressed oops)
 # Derivative: IcedTea 3.6.0
 # Distribution: Custom build (Tue Nov 21 11:22:36 GMT 2017)
 # Core dump written. Default location: /opt/ignite/core or core.221
 #
 # An error report file with more information is saved as:
 # /ignite-work-directory/core_dump_221.log
 #
 # If you would like to submit a bug report, please include
 # instructions on how to reproduce the bug and visit:
 #   http://icedtea.classpath.org/bugzilla
 #



 Please find logs and configs attached.

 We deploy Ignite along with our services in Kubernetes (v 1.8) on
 premises. Ignite cluster is a StatefulSet of 5 Pods (5 instances) of Ignite
 version 2.3. Each Pod mounts PersistentVolume backed by CEPH RBD.

 We put about 230 events/second into Ignite, 70% of events are ~200KB in
 size and 30% are 5000KB. Smaller events have indexed fields and we query
 them via SQL.

 The cluster is activated from a client node, which also streams events into
 Ignite from Kafka. We use a custom streamer implementation based on the
 cache.putAll() API.

 We got the error when we stopped and restarted the cluster. It happened on
 only one instance.

 The general question is:

 *Is