apache-ignite_2.8.0-1_all.deb package has older version of openjdk as dependency

2020-05-05 Thread rakshita04
Hi Team,

We are using Apache Ignite for our C++ application based on Debian 10 (buster). We tried using the
apache-ignite_2.8.0-1_all.deb package, but it depends on openjdk-8, which is an older version and is not
available for Debian 10 (buster). I have the questions below:
1. Why is an older version of OpenJDK used in the .deb package?
2. Is there any possibility of a new release of the .deb package, built from the apache-ignite-2.8.0
source, with a newer OpenJDK version?
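
For reference, the dependency can be seen in the package's control information with a standard dpkg command, e.g.:

dpkg --info apache-ignite_2.8.0-1_all.deb | grep -i depends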



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: apache-ignite_2.8.0-1_all.deb package has older version of openjdk as dependency

2020-05-06 Thread rakshita04
Thanks Petr.
When can we expect a 2.8.1 .deb package with a newer OpenJDK dependency?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: apache-ignite_2.8.0-1_all.deb package has older version of openjdk as dependency

2020-05-06 Thread rakshita04
Are you talking about oracle-java8-installer, mentioned in the control file of your package?
Actually, we are able to manually compile the platform/cpp code using openjdk-11, so we wanted to check
with you whether there is any specific reason for using openjdk-8 in the released .deb package.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Are &lt;key, value&gt; pairs stored in apache ignite cache in random order

2020-05-27 Thread rakshita04
Hi Team,

If we use a composite key to store data in the Apache Ignite cache as a &lt;key, value&gt; pair, is the data
stored in exactly the same sequence as we write it, or is it stored in some other order (maybe based on
the hash value of the key)?
When we try to fetch all the data stored in the cache, it is returned in an order different from the
sequence in which we wrote it (we are using a composite key that is a combination of multiple columns).
If you could please explain how data is stored in the cache, we would be able to understand the behavior.
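
For context, here is a simplified sketch of how we read the whole cache back (plain int keys are used here
instead of our composite key, and the cache name and config path are placeholders); the order in which the
cursor returns the entries is exactly what we are asking about:

// Simplified sketch, not our production code: iterate the cache with a ScanQuery.
#include <ignite/ignition.h>
#include <iostream>
#include <string>

using namespace ignite;
using namespace ignite::cache;
using namespace ignite::cache::query;

int main()
{
    IgniteConfiguration cfg;
    cfg.springCfgPath = "DataBaseConfig.xml"; // placeholder config path

    Ignite node = Ignition::Start(cfg);

    Cache<int32_t, std::string> cache =
        node.GetOrCreateCache<int32_t, std::string>("myCache"); // placeholder cache name

    for (int32_t i = 0; i < 5; ++i)
        cache.Put(i, "value-" + std::to_string(i));

    // Scan the whole cache; the cursor walks the cache's internal pages/partitions,
    // so entries are not guaranteed to come back in insertion order.
    QueryCursor<int32_t, std::string> cursor = cache.Query(ScanQuery());

    while (cursor.HasNext())
    {
        CacheEntry<int32_t, std::string> entry = cursor.GetNext();
        std::cout << entry.GetKey() << " -> " << entry.GetValue() << std::endl;
    }

    Ignition::Stop(node.GetName(), true);
    return 0;
}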

regards,
Rakshita



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: apache-ignite_2.8.0-1_all.deb package has older version of openjdk as dependency

2020-05-27 Thread rakshita04
Can we have a newer Debian package for Apache Ignite with a newer version of OpenJDK as the dependency?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to build apache ignite binaries using source code for C++ platform

2020-05-28 Thread rakshita04
Hi Team,

We want to build the Apache Ignite binaries from the apache-ignite-2.8.0-src source code.
We are using the command below to build them:
mvn clean package -DskipTests

But it looks like it downloads a large number of dependencies.
Can we build it only for the C++ platform? If yes, how?
Also, is it possible to build the binaries without an internet connection?
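
One thing we might try for the offline part (standard Maven options, nothing Ignite-specific; whether every
plugin cooperates offline is an assumption) is to pre-fetch the dependencies once while online and then
build with Maven's offline flag:

mvn dependency:go-offline -DskipTests
mvn clean package -DskipTests -o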

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to build apache ignite binaries using source code for C++ platform

2020-05-28 Thread rakshita04
Hi, actually we need to build the core module of Apache Ignite using Maven. Is there a way to skip the
optional modules?
We are using the link below to build the Apache Ignite binaries (jar files):
https://ignite.apache.org/download.cgi#build-source






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


how to build only apache ignite core binaries and jar files using maven

2020-05-28 Thread rakshita04
Hi Team,

We want to build only the modules/core binaries and jar files using Maven.
How can I do that?
Currently, when I try to build the binaries/jar files using Maven, it builds a lot of other files and
downloads a lot of other dependencies.
Is there a specific file where I can make this change?
I only need the jar files required for the CPP platform; the rest of the binaries and jar files are of no
use to me.
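
As a sketch of what we are after (standard Maven reactor options; the module path modules/core is assumed
from the source layout), something like this should restrict the build to the core module and whatever it
depends on:

mvn clean install -pl modules/core -am -DskipTests -Dmaven.javadoc.skip=true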

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: how to build only apache ignite core binaries and jar files using maven

2020-05-28 Thread rakshita04
Thanks Ilya.
I will try this command.
But what if I go to modules/core and run the command below:
mvn clean install -DskipTests -Dmaven.javadoc.skip=true

Will it still build everything?

Is there any difference between running the above command from the whole src folder and running it from
modules/core?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: how to build only apache ignite core binaries and jar files using maven

2020-05-29 Thread rakshita04
Thanks Ilya.
I am able to build the binaries using the above command.
Is there any way to put the output jar files in a specific target directory?
Currently they go into core/target and core/target/libs, while Apache Ignite is trying to pick the
libraries up from the modules/libs folder.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: how to build only apache ignite core binaries and jar files using maven

2020-05-29 Thread rakshita04
I do not need the zip of the bin files.
Basically, I want to build modules/core and put the output jar files inside the modules/libs folder.
When I run the command you mentioned in the thread, it puts the generated jar files in the
modules/core/target folder.
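
The workaround we are considering (paths and the exact jar names are assumptions based on where the build
currently puts things) is simply to copy the artifacts after the build:

cp modules/core/target/ignite-core-2.8.0.jar modules/libs/
cp modules/core/target/libs/*.jar modules/libs/

but a Maven-native way to change the output location would be preferable.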



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Need to sync two apache ignite cache DB using ip address

2020-06-11 Thread rakshita04
Hi Team,

We have 2 Apache Ignite DBs located at separate IPs and connected over a TCP/IP connection.
One of these DBs should work as the master DB, and on any change it should sync to the other Apache Ignite
DB and update its entries accordingly.
How can I achieve this? What changes do I need to make in the xml file?
I am using the C++ platform for Apache Ignite.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Need to sync two apache ignite cache DB using ip address

2020-06-11 Thread rakshita04
Hi Ilya,

I want one node to be master for all entries.
How can I achieve this?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Need to sync two apache ignite cache DB using ip address

2020-06-11 Thread rakshita04
So are you saying Apache Ignite does that automatically?
Do I need to configure the IP of the other node somewhere in the xml file so that the data keeps syncing?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


apache ignite compatibility with armhf(arm 32-bit)

2020-07-23 Thread rakshita04
I need to use Apache Ignite on my 32-bit Debian Linux system.
Is it compatible with armhf (32-bit ARM Linux)?
Can I use the apache-ignite Debian package for this?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


apache-ignite compatibility with armhf(32-bit arm linux)

2020-07-23 Thread rakshita04
I need to use apache-ignite for my armhf (32-bit ARM) Linux application.
Is apache-ignite compatible with 32-bit ARM (armhf) Linux?
Can I use the apache-ignite Debian package available on the website for armhf?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Java Heap

2020-08-13 Thread rakshita04
But we are creating the cache in off-heap memory, right?
Then why is it showing double the value under "On heap"?
Is it only reporting double the size as on-heap, or will it actually allocate that much memory?


--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node is not added in baseline topology

2020-08-13 Thread rakshita04
How do we achieve the same using C++?
Are there any cluster APIs for C++?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node is not added in baseline topology

2020-08-19 Thread rakshita04
How can we do the activation using C++ rather than the control.sh script?
Basically, how do we use the baseline topology feature from C++?
Are there C++ libraries for this?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node is not added in baseline topology

2020-08-19 Thread rakshita04


How do we know whether the topology has the required number of nodes? Basically, how do we know how long
we need to wait? Is there a C++ API to check whether the topology has the required number of nodes?
Also, in our case there will be two nodes:
1. a local node
2. a remote node (configured at a specific IP, which we will mention in the xml)
Could you please explain how to achieve this in our scenario, with a C++ code snippet or steps?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Size of Data(Keys) are doubled when using CACHE in REPLICATED mode

2020-08-20 Thread rakshita04
What is the auto-activation feature?
How does it work?
And how long should we wait before calling SetActive(true)?
Is there a way to find out whether the cluster has been activated or not?
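
For context, this is roughly what we would like to do (a minimal sketch, assuming the 2.8 C++ API exposes
Ignite::IsActive() alongside SetActive(); the timeout and sleep interval are arbitrary):

#include <ignite/ignition.h>
#include <chrono>
#include <thread>

// Poll the node until the cluster reports itself active, or give up after timeoutSec.
bool WaitForActivation(ignite::Ignite& node, int timeoutSec)
{
    for (int waited = 0; waited < timeoutSec; ++waited)
    {
        if (node.IsActive())
            return true;

        std::this_thread::sleep_for(std::chrono::seconds(1));
    }

    return false;
}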



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Size of Data(Keys) are doubled when using CACHE in REPLICATED mode

2020-08-20 Thread rakshita04
Is there any API in C++ to know whether "all server nodes (which you expect to have at the moment) have
joined the cluster"?
The name of the function would be a great help.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


BaselineTopology branching error

2020-12-07 Thread rakshita04
Hi Team,

We are using the baseline topology concept for our two nodes, which run on separate machines.
We are getting the error below when both of these nodes run together:

Caused by: class org.apache.ignite.spi.IgniteSpiException: BaselineTopology
of joining node (DSU_B) is not compatible with BaselineTopology in the
cluster. Branching history of cluster BlT ([65356776]) doesn't contain
branching point hash of joining node BlT (65356777). Consider cleaning
persistent storage of the node and adding it to the cluster again.

After getting this error, when we deleted the nodes' data and restarted Ignite, it worked fine.
But this way we are losing our data.
We are using a similar configuration.xml file for both of them.
What can we do to avoid this issue?

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Issue with BaselineTopology Branching History

2020-12-08 Thread rakshita04
Suppose I want to add a fresh node to the cluster.
Is it possible to start the fresh node first and then start the older node?
How do I make sure that the fresh node has its persistence intact?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Issue with BaselineTopology Branching History

2020-12-08 Thread rakshita04
What if my second node was replaced due to a hardware failure or something similar at runtime?
Is there a way to start the new node first and somehow delete the baseline history of the first node, so
that I can then join the older node to the new node?
I am asking this because this scenario can occur in our software, and we cannot control whether the new
node or the older node starts first.
Is there a way we can make this scenario work?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: BaselineTopology branching error

2020-12-09 Thread rakshita04
Is there a way to not let the new node create a new baseline, but instead start it without a baseline and,
when the older node comes up, add the new node to the older node's baseline?
Or maybe start the new node with the older node's baseline?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: BaselineTopology branching error

2020-12-09 Thread rakshita04
How do we delete the PDS data?
By deleting the PDS data, are we only deleting the cluster and baseline information, or do we also delete
the entries (key-value pairs) of the database?
If the DB entries (key-value pairs) still exist after the PDS data is deleted, is it possible that once a
new baseline is created from scratch and both nodes are added to it, the DB data (key-value pairs) of both
nodes will get synced?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: BaselineTopology branching error

2020-12-10 Thread rakshita04
What if I delete the "metastorage" folder created on the older node?
Will that solve the problem?
Will I lose all database entries (key-value pairs) if I delete this folder?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Issue with BaselineTopology Branching History

2020-12-12 Thread rakshita04
If by any chance someone messes up this sequence, Ignite sometimes throws an error, which is great because
we can take action on it, but sometimes it gets stuck and makes our process stuck as well.
Is there a way to ensure that the (new) node does not get stuck, and instead throws some error or
exception after a certain time?

Regards,
Rakshita 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Failed to send message to remote node error

2020-12-17 Thread rakshita04
Hi Team,

I am using apache-ignite version 2.8.0 for our C++ applications.
We are using "TcpDiscoveryVmIpFinder" to connect our 2 Ignite nodes, which run on 2 separate machines.
But while the Ignite nodes are running, we get the error below on one of the nodes, and the application
and node stop:
"Failed to send message to remote node"
Below are the full logs for node 1:
[14:36:32,572][SEVERE][exchange-worker-#46][TcpCommunicationSpi] Failed to
send message to remote node [node=TcpDiscoveryNode
[id=7f683a9a-afcc-4d20-95f4-855147d759fd, consistentId=DSU_A,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 10.100.126.119, 127.0.0.1,
192.168.10.201, 192.168.100.100], sockAddrs=HashSet [/192.168.100.100:47500,
/0:0:0:0:0:0:0:1%lo:47500, /192.168.10.201:47500, /127.0.0.1:47500,
/10.100.126.119:47500], discPort=47500, order=4, intOrder=3,
lastExchangeTime=1608212160446, loc=false, ver=2.8.0#20200226-sha1:341b01df,
isClient=false], msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
ordered=false, timeout=0, skipOnTimeout=false,
msg=GridDhtPartitionsSingleMessage [parts=HashMap
{-2100569601=GridDhtPartitionMap [moving=0, top=AffinityTopologyVersion
[topVer=-1, minorTopVer=0], updateSeq=2, size=0],
-510489548=GridDhtPartitionMap [moving=0, top=AffinityTopologyVersion
[topVer=-1, minorTopVer=0], updateSeq=2, size=0]}, partCntrs=HashMap
{-2100569601=CachePartitionPartialCountersMap {},
-510489548=CachePartitionPartialCountersMap {}}, partsSizes=null,
partHistCntrs=null, err=null, client=false, exchangeStartTime=92638170010,
finishMsg=null, super=GridDhtPartitionsAbstractMessage
[exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
[topVer=6, minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=0c98991e-b271-4cae-b1c2-06036bd666e9, consistentId=DSU_B,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 10.100.126.120, 127.0.0.1,
192.168.100.101, 192.168.20.201], sockAddrs=HashSet [/192.168.100.101:47500,
/10.100.126.120:47500, /0:0:0:0:0:0:0:1%lo:47500, /192.168.20.201:47500,
/127.0.0.1:47500], discPort=47500, order=6, intOrder=4,
lastExchangeTime=1608212191947, loc=true, ver=2.8.0#20200226-sha1:341b01df,
isClient=false], topVer=6, nodeId8=0c98991e, msg=null, type=NODE_JOINED,
tstamp=1608212161928], nodeId=0c98991e, evt=NODE_JOINED],
lastVer=GridCacheVersion [topVer=0, order=1608212143633, nodeOrder=0],
super=GridCacheMessage [msgId=1, depInfo=null,
lastAffChangedTopVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0],
err=null, skipPrepare=false]
class org.apache.ignite.internal.cluster.ClusterTopologyCheckedException:
Failed to send message (node left topology): TcpDiscoveryNode
[id=7f683a9a-afcc-4d20-95f4-855147d759fd, consistentId=DSU_A,
addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 10.100.126.119, 127.0.0.1,
192.168.10.201, 192.168.100.100], sockAddrs=HashSet [/192.168.100.100:47500,
/0:0:0:0:0:0:0:1%lo:47500, /192.168.10.201:47500, /127.0.0.1:47500,
/10.100.126.119:47500], discPort=47500, order=4, intOrder=3,
lastExchangeTime=1608212160446, loc=false, ver=2.8.0#20200226-sha1:341b01df,
isClient=false]
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createNioSession(TcpCommunicationSpi.java:3521)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createTcpClient(TcpCommunicationSpi.java:3443)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.createCommunicationClient(TcpCommunicationSpi.java:3183)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.reserveClient(TcpCommunicationSpi.java:3066)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage0(TcpCommunicationSpi.java:2906)
at
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi.sendMessage(TcpCommunicationSpi.java:2865)
at
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:2031)
at
org.apache.ignite.internal.managers.communication.GridIoManager.sendToGridTopic(GridIoManager.java:2128)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.send(GridCacheIoManager.java:1257)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendLocalPartitions(GridDhtPartitionsExchangeFuture.java:2014)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.sendPartitions(GridDhtPartitionsExchangeFuture.java:2149)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1614)
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:891)
at
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:3172)
at
org.apache.ignite.internal.processors.cache.GridCachePa

Application closing after java.net.SocketTimeoutException

2020-12-21 Thread rakshita04
Hi Team,

We are using Ignite to persist data between two nodes running on separate machines.
Sometimes our application stops with the error logs and exception below:
[01:18:39,477][SEVERE][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] Failed
to accept TCP connection.
java.net.SocketTimeoutException: Accept timed out
at java.base/java.net.PlainSocketImpl.socketAccept(Native Method)
at
java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:458)
at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:565)
at
java.base/sun.security.ssl.SSLServerSocketImpl.accept(SSLServerSocketImpl.java:202)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:6353)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServerThread.body(ServerImpl.java:6276)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:61)
[01:18:39,505][SEVERE][tcp-disco-srvr-[:47500]-#3][] Critical system error
detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION,
err=java.net.SocketTimeoutException: Accept timed out]]
java.net.SocketTimeoutException: Accept timed out
at java.base/java.net.PlainSocketImpl.socketAccept(Native Method)
at
java.base/java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:458)
at java.base/java.net.ServerSocket.implAccept(ServerSocket.java:565)
at
java.base/sun.security.ssl.SSLServerSocketImpl.accept(SSLServerSocketImpl.java:202)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServer.body(ServerImpl.java:6353)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$TcpServerThread.body(ServerImpl.java:6276)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:61)
[01:18:39,562][SEVERE][tcp-disco-srvr-[:47500]-#3][] JVM will be halted
immediately due to the failure: [failureCtx=FailureContext
[type=SYSTEM_WORKER_TERMINATION, err=java.net.SocketTimeoutException: Accept
timed out]]

Can you tell us why this is happening?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


graceful shutdown for C++ applications

2021-01-07 Thread rakshita04
Hi Team,

We are using apache-ignite for our applications, which run on 2 machines connected over the network.
We are facing an issue where, if a kill is performed on the running application, it somehow corrupts the
node; the node then never comes up and keeps rebooting.
Is there a way to handle this shutdown gracefully, so that there is no data loss or node corruption?
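
What we have in mind is something like the sketch below (illustrative only: the config path is a
placeholder, and we are assuming Ignition::StopAll(false) is the right call for a clean, non-cancelling
stop):

#include <ignite/ignition.h>
#include <chrono>
#include <csignal>
#include <thread>

// Only set a flag from the signal handler; do the actual stop on the main thread.
static volatile std::sig_atomic_t stopRequested = 0;

extern "C" void OnSignal(int)
{
    stopRequested = 1;
}

int main()
{
    std::signal(SIGTERM, OnSignal);
    std::signal(SIGINT, OnSignal);

    ignite::IgniteConfiguration cfg;
    cfg.springCfgPath = "DataBaseConfig.xml"; // placeholder config path

    ignite::Ignite node = ignite::Ignition::Start(cfg);

    while (!stopRequested)
    {
        // ... application work ...
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }

    // Stop without cancelling in-flight operations, so checkpoints can complete.
    ignite::Ignition::StopAll(false);
    return 0;
}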

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: graceful shutdown for C++ applications

2021-01-07 Thread rakshita04
It works and the process is stopped, but when the application is started again on the same database node,
it crashes with the logs below on the terminal:
Ignite node stopped OK [uptime=00:00:55.197]
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x76c9f208, pid=26716, tid=26716
#
# JRE version: OpenJDK Runtime Environment (11.0.6+10) (build
11.0.6+10-post-Debian-1deb10u1)
# Java VM: OpenJDK Server VM (11.0.6+10-post-Debian-1deb10u1, mixed mode, g1
gc, linux-)
# Problematic frame:
# C  [libignite-2.8.0.44294.so.0+0x11208] 
ignite::Ignite::SetActive(bool)+0xb

Our application is calling the SetActive method after node::start().

The application recovers only when the database is deleted and the node is started again.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: graceful shutdown for C++ applications

2021-01-07 Thread rakshita04
Can SetActive() cause the crash?
Is terminating the process with kill an acceptable approach, or is there a better way?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: graceful shutdown for C++ applications

2021-01-07 Thread rakshita04
I am also getting the error below in my Ignite logs:
[20:00:50,515][SEVERE][db-checkpoint-thread-#54][] Critical system error
detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
o.a.i.i.processors.cache.persistence.StorageException: Failed to write
checkpoint entry [ptr=FileWALPointer [idx=0, fileOff=188385, len=21409],
cpTs=1608042650462, cpId=a273b41f-b536-4c7d-afbd-51303114306b, type=START]]]
class
org.apache.ignite.internal.processors.cache.persistence.StorageException:
Failed to write checkpoint entry [ptr=FileWALPointer [idx=0, fileOff=188385,
len=21409], cpTs=1608042650462, cpId=a273b41f-b536-4c7d-afbd-51303114306b,
type=START]

What can cause this?
And how do we avoid this problem?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: graceful shutdown for C++ applications

2021-01-07 Thread rakshita04
Here is the full set of logs, in case it helps:
[10:10:56,860][WARNING][main][G] Ignite work directory is not provided,
automatically resolved to: /home/dsudev/ignite-master/work
[10:10:56,873][WARNING][main][G] Consistent ID is not set, it is recommended
to set consistent ID for production clusters (use
IgniteConfiguration.setConsistentId property)
[10:10:57,103][INFO][main][IgniteKernal] 

>>>__    
>>>   /  _/ ___/ |/ /  _/_  __/ __/  
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/   
>>> 
>>> ver. 2.8.0#20200226-sha1:341b01df
>>> 2020 Copyright(C) Apache Software Foundation
>>> 
>>> Ignite documentation: http://ignite.apache.org

[10:10:57,134][INFO][main][IgniteKernal] Config URL: n/a
[10:10:57,190][INFO][main][IgniteKernal] IgniteConfiguration
[igniteInstanceName=null, pubPoolSize=8, svcPoolSize=8, callbackPoolSize=8,
stripedPoolSize=8, sysPoolSize=8, mgmtPoolSize=4, igfsPoolSize=1,
dataStreamerPoolSize=8, utilityCachePoolSize=8,
utilityCacheKeepAliveTime=6, p2pPoolSize=2, qryPoolSize=8,
sqlQryHistSize=1000, dfltQryTimeout=0,
igniteHome=/home/dsudev/ignite-master,
igniteWorkDir=/home/dsudev/ignite-master/work,
mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@150fbeb,
nodeId=0aad560a-faad-4f86-b65c-a7f161bf2639, marsh=BinaryMarshaller [],
marshLocJobs=false, daemon=false, p2pEnabled=true, netTimeout=5000,
netCompressionLevel=1, sndRetryDelay=1000, sndRetryCnt=3,
metricsHistSize=1, metricsUpdateFreq=2000,
metricsExpTime=9223372036854775807, discoSpi=TcpDiscoverySpi
[addrRslvr=null, sockTimeout=0, ackTimeout=0, marsh=null, reconCnt=10,
reconDelay=2000, maxAckTimeout=60, soLinger=5, forceSrvMode=false,
clientReconnectDisabled=false, internalLsnr=null,
skipAddrsRandomization=false], segPlc=STOP, segResolveAttempts=2,
waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=1,
commSpi=TcpCommunicationSpi [connectGate=null,
connPlc=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$FirstConnectionPolicy@1390459,
chConnPlc=null, enableForcibleNodeKill=false,
enableTroubleshootingLog=false, locAddr=null, locHost=null, locPort=47100,
locPortRange=100, shmemPort=-1, directBuf=true, directSndBuf=false,
idleConnTimeout=60, connTimeout=5000, maxConnTimeout=60,
reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0,
slowClientQueueLimit=0, nioSrvr=null, shmemSrv=null,
usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true,
filterReachableAddresses=false, ackSndThreshold=32, unackedMsgsBufSize=0,
sockWriteTimeout=2000, boundTcpPort=-1, boundTcpShmemPort=-1,
selectorsCnt=4, selectorSpins=0, addrRslvr=null,
ctxInitLatch=java.util.concurrent.CountDownLatch@156c3cd[Count = 1],
stopping=false, metricsLsnr=null],
evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@113052e,
colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [],
indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@45dbe,
addrRslvr=null,
encryptionSpi=org.apache.ignite.spi.encryption.noop.NoopEncryptionSpi@14658f7,
clientMode=false, rebalanceThreadPoolSize=4, rebalanceTimeout=1,
rebalanceBatchesPrefetchCnt=3, rebalanceThrottle=0,
rebalanceBatchSize=524288, txCfg=TransactionConfiguration
[txSerEnabled=false, dfltIsolation=REPEATABLE_READ,
dfltConcurrency=PESSIMISTIC, dfltTxTimeout=0,
txTimeoutOnPartitionMapExchange=0, deadlockTimeout=1,
pessimisticTxLogSize=0, pessimisticTxLogLinger=1, tmLookupClsName=null,
txManagerFactory=null, useJtaSync=false], cacheSanityCheckEnabled=true,
discoStartupDelay=6, deployMode=SHARED, p2pMissedCacheSize=100,
locHost=null, timeSrvPortBase=31100, timeSrvPortRange=100,
failureDetectionTimeout=1, sysWorkerBlockedTimeout=null,
clientFailureDetectionTimeout=3, metricsLogFreq=6, hadoopCfg=null,
connectorCfg=ConnectorConfiguration [jettyPath=null, host=null, port=11211,
noDelay=true, directBuf=false, sndBufSize=32768, rcvBufSize=32768,
idleQryCurTimeout=60, idleQryCurCheckFreq=6, sndQueueLimit=0,
selectorCnt=1, idleTimeout=7000, sslEnabled=false, sslClientAuth=false,
sslCtxFactory=null, sslFactory=null, portRange=100, threadPoolSize=8,
msgInterceptor=null], odbcCfg=null, warmupClos=null,
atomicCfg=AtomicConfiguration [seqReserveSize=1000, cacheMode=PARTITIONED,
backups=1, aff=null, grpName=null], classLdr=null,
sslCtxFactory=SslContextFactory[keyStoreType=JKS, proto=TLS,
keyStoreFile=/home/dsudev/config/keystore.jks,
trustStoreFile=/home/dsudev/config/truststore.jks],
platformCfg=PlatformConfiguration [], binaryCfg=BinaryConfiguration
[idMapper=BinaryBaseIdMapper [isLowerCase=true],
nameMapper=BinaryBaseNameMapper [isSimpleName=true], serializer=null,
compactFooter=false], memCfg=null, pstCfg=null,
dsCfg=DataStorageConfiguration [sysRegionInitSize=41943040,
sysRegionMaxSize=104857600, pageSize=4096, concLvl=0,
dfltDataRegConf=DataRegionConfiguration [name=default, maxSize=419430400,
initSize=104857600, swapPath=null, pageEvictionMode=DISABLED,
evictionThreshold=0.9, e

Re: Application closing after java.net.SocketTimeoutException

2021-01-09 Thread rakshita04
We increased the "failureDetectionTimeout" to 4 minutes in our xml, but we still see this issue.
When are you planning to fix it?
Also, is there any workaround?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RAM required for apache-ignite (if persistence is enabled)

2021-02-01 Thread rakshita04
Hi,

We are using apache-ignite in an embedded environment where we have a RAM limitation.
We are using the persistence feature, backed by the hard disk, but as we insert keys into our DB we see
the free RAM decreasing.
If we are using persistence, does some data still go to RAM?
Apart from what we configure for the JVM using the Xms and Xmx options, does Ignite use RAM for any other
purpose?

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: RAM required for apache-ignite (if persistence is enabled)

2021-02-01 Thread rakshita04
Thanks for the response.
One more question: if we enable Direct I/O, is there any performance implication or side effect?
Will it save some RAM for us, considering it will read from and write to disk directly?
Also, can we enable it using an xml option?

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: RAM required for apache-ignite (if persistence is enabled)

2021-02-01 Thread rakshita04
Can we enable this option via xml?
Is there a C++ API to enable/disable it?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Run sql query on key-value cache

2021-03-11 Thread rakshita04
Hi Team,

We are using an apache-ignite node for our C++ application.
For one of the scenarios we need to get results based on the value of one of the columns (which is not the
key field).
If we do a getAll() on the DB node, we have to search the whole database linearly, which takes too much
time when there are many records.
If we run a SQL select query on that column value against our key-value cache, will it be faster than a
linear search?
Do we need to use indexing for faster results?
If yes, can we create an index on our existing key-value cache? If yes, how?
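
For reference, this is roughly how we would expect to run such a query from C++ (a sketch only: the
table/field names follow our cache's assumed query entity configuration in the xml, and the key/value
template parameters are placeholders):

#include <ignite/ignition.h>
#include <ignite/cache/query/query_sql_fields.h>
#include <iostream>

using namespace ignite;
using namespace ignite::cache;
using namespace ignite::cache::query;

// Count rows matching a non-key column; with an index declared on reqType
// in the query entity, this should avoid a full linear scan.
void CountByReqType(Cache<int64_t, std::string>& cache)
{
    SqlFieldsQuery qry(
        "select count(*) "
        "from DBStorage "
        "where reqType = 0");

    QueryFieldsCursor cursor = cache.Query(qry);

    while (cursor.HasNext())
    {
        QueryFieldsRow row = cursor.GetNext();
        std::cout << "count = " << row.GetNext<int64_t>() << std::endl;
    }
}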

regards,
Rakshita 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Run sql query on key-value cache

2021-03-11 Thread rakshita04
How can we create indexes on our existing &lt;key, value&gt; cache?
As far as I could see on your portal, no C++ API support is available for creating indexes.
Does a query entity automatically take care of creating indexes, or do we need to explicitly create
indexes on our &lt;key, value&gt; cache?
If we need to create them explicitly, can you please help us with how to do that?

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re[2]: Run sql query on key-value cache

2021-03-12 Thread rakshita04
Hi Team,

Thanks for the response.
If we create a schema (table) with indexes using the existing cache, will this schema be created in
memory?
In our existing xml config we are using persistence for the Ignite node; will Apache Ignite use the same
persistent storage or create the schema in-memory (RAM)?
Is there a way to create the schema on persistent storage rather than in-memory?

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re[4]: Run sql query on key-value cache

2021-03-16 Thread rakshita04
Hi team,

I believe the example you mentioned above is using C++:
https://github.com/apache/ignite/blob/f37ec9eece4db627f2d5190e589f0522e445a251/modules/platforms/cpp/examples/query-example/src/query_example.cpp

Is there any performance or memory benefit if we use ODBC rather than the C++ API?
We are using the C++ put/get APIs to write/read data in the cache.
Is it okay to perform SQL queries through the C++ APIs in a similar way, or is using the ODBC client more
beneficial?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Abrupt stop of application on using sql

2021-04-08 Thread rakshita04
Hi Team,

We are using SQL queries and the SQL feature of the C++ API on our application's existing cache data.
The code runs perfectly fine on the AMD Linux build, but on the ARM machine it abruptly stops the
application after a few entries in the DB.
If we try to start the application again, it stops again, until we clear the DB and rerun the application.
I am attaching the Ignite logs here for your reference.
Can you please have a look and let us know the probable reason for this abrupt stop, and a possible
solution?
There is no error or exception on the terminal when the application stops.

regards,
Rakshita Chaudhary

ignite-acbac2f8.log
ignite-c372f3e8.log



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Abrupt stop of application on using sql

2021-04-08 Thread rakshita04
Hi Ilya,

No hs_err logs are being generated.
There is no information on the terminal either; it is just a stop of the application.

Regards,
Rakshita chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Abrupt stop of application on using sql

2021-04-09 Thread rakshita04
Hi Team,

Do you have any update on this issue?

regards,
Rakshita chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Abrupt stop of application on using sql

2021-04-13 Thread rakshita04
Hi Ilya,

We were able to find the cause of the application stop.
We see a "Buffer Overflow" error when the application stops.
We commented out mCache.Put() in our code (basically not calling the Put API to write data to the DB) and
the restart did not happen.
Also, this restart happens only after a certain number of entries in the DB.
Do you have any idea what can cause this buffer overflow while writing to the DB, and is there anything we
can do to avoid it?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Abrupt stop of application on using sql

2021-04-14 Thread rakshita04
Hi Ilya,

We only have one node.
Attached is our DataBaseConfig.xml file for your reference.

DataBaseConfig.xml



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Abrupt stop of application on using sql

2021-04-15 Thread rakshita04
Hi Ilya,

How is decreasing the maxSize of the data region related to the "Buffer Overflow" problem?
I mean, we don't get any out-of-memory entry in the dmesg logs, and the "top" command also shows enough
available RAM.
We have now tried decreasing maxSize to 50 MB, but the problem still persists.
Can this also be related to checkpointing?

regards,
Rakshita Chaudhary




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Abrupt stop of application on using sql

2021-04-18 Thread rakshita04
Hi Team,

Can you please respond to my query above?

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Abrupt stop of application on using sql

2021-04-19 Thread rakshita04
Hello Ilya,

As I mentioned, it is an abrupt closure of the application, so we don't have the exit code.
There is one piece of information I have that might be useful for you: we noticed that the number of open
file descriptors goes beyond 1024 (the system's maximum open-file limit).
When we disable persistence there is no problem and the application runs smoothly.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Buffer Overflow on ARM with persistency enabled

2021-04-21 Thread rakshita04
Hi Team,

We are using Ignite in our C++ application on a 32-bit ARM Linux machine.
Our application closes abruptly after a certain number of entries in the database (4400 entries), and it
is terminated with a "Buffer Overflow" error.
There is no other information in the Ignite logs or system logs, and no dump file is generated either.
When we disable persistence, there is no abrupt shutdown.
We have also observed that while persistence is enabled and write operations are being performed on the
DB, a very large number of files are open under /proc/<pid>/fd for Ignite, and even after the write
operations stop, the number of open files does not decrease.
Ideally these files should get closed by Ignite after a certain time, right?
Do you have any idea why this is happening? Also, any help on the "buffer overflow" issue?

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Buffer Overflow on ARM with persistency enabled

2021-04-21 Thread rakshita04
Hi,

We have 2 GB of RAM.
Unfortunately we could not attach a debugger, so we don't have stack trace info as of now.
Also, about RAM: we tried running our application on an AMD Linux VM with the same amount of RAM (2 GB),
but there we don't see this behavior and the application runs fine.

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Running sql query on partitioned cache

2021-05-19 Thread rakshita04
Hi Team,

We are running a SQL query on our cache (which is configured in "Partitioned" mode with backups = 1).
We have two nodes connected to each other over the network.
Our application is C++ based and runs in an ARM (Linux) environment.
We are facing 2 issues now:
1. We are running the query below in our code while, in the background, we are adding entries to the DB:
const SqlFieldsQuery countQuery(
    "select count(reqType) "
    "from DBStorage "
    "where reqType=0");
The returned count increases correctly until 10,000 entries, but after that it gets reset and starts
counting again from 0.
Is there anything else we need to do to get correct results from a SQL query in partitioned mode?
2. Which C++ API do we need to use to get the exact size of the cache in partitioned mode?
cache.localSize() is returning half of the actual size (maybe due to the partitioned mode).

I am attaching the xml (DataBaseConfig.xml) for your reference.

regards,
Rakshita Chaudhary



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Running sql query on partitioned cache

2021-05-21 Thread rakshita04
Hi Ilya,

1. We can't share the running exe, as it is built for ARM and requires a specific kernel image to run.
To put it in simple words: if I use partitioned mode with backups = 1 and want to run a SQL query on the
existing cache, do we need to do anything with respect to colocation (since the keys are distributed
across both nodes)?

2. Yes, I want to understand when to use cache.LocalSize() and when to use cache.Size().
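
For clarity, these are the two calls we are comparing (assuming the 2.8 C++ API semantics: Size() does a
distributed count across the cluster, while LocalSize() only counts entries held on the local node):

int32_t clusterWide = cache.Size();      // entries across all nodes in the cluster
int32_t localOnly   = cache.LocalSize(); // only entries stored on this node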

regards,
Rakshita Chaudhary




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/