Re: Ignite JDBC connection pooling mechanism
We solved the problem by removing the Hikari connection pooling mechanism entirely. Instead we use IgniteJdbcThinDataSource (https://apacheignite-sql.readme.io/docs/jdbc-driver) with an appropriate client connector configuration (https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setClientConnectorConfiguration-org.apache.ignite.configuration.ClientConnectorConfiguration-).

After some trial and error, we concluded that Ignite does not require connection pooling on the client side (the way we do it with an RDBMS). Instead, let the Ignite server handle the SQL queries by providing appropriate client connection details.

On Fri, Nov 6, 2020 at 7:01 PM Vladimir Pligin wrote:
> In general it should be ok to use connection pooling with Ignite. Is your
> network ok? It looks like a connection is being closed because of network
> issues.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
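For reference, the switch described above can be sketched in Spring XML. This is a minimal sketch, not our exact production configuration: the host name, port, and connector tuning values are illustrative assumptions; the class and property names follow the Ignite 2.8.x javadoc linked above.

```xml
<!-- Client side: a plain (non-pooled) thin-driver DataSource.
     The host/port below are placeholders for your cluster's client connector endpoint. -->
<bean id="igniteDataSource" class="org.apache.ignite.IgniteJdbcThinDataSource">
  <property name="url" value="jdbc:ignite:thin://ignite-0.ignite:10800"/>
</bean>

<!-- Server side: tune the connector that serves thin JDBC clients,
     instead of pooling connections on the client. -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="clientConnectorConfiguration">
    <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
      <property name="port" value="10800"/>
      <property name="threadPoolSize" value="16"/>
      <property name="idleTimeout" value="60000"/>
    </bean>
  </property>
</bean>
```

The `igniteDataSource` bean can then be handed straight to a JdbcTemplate; each JDBC connection maps to a server-side session handled by the client connector thread pool.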
Re: Ignite JDBC connection pooling mechanism
cluster.
	at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:760)
	at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:212)
	at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.executeQuery(JdbcThinStatement.java:123)
	at com.zaxxer.hikari.pool.ProxyStatement.executeQuery(ProxyStatement.java:111)
	at com.zaxxer.hikari.pool.HikariProxyStatement.executeQuery(HikariProxyStatement.java)
	at org.springframework.jdbc.core.JdbcTemplate$1QueryStatementCallback.doInStatement(JdbcTemplate.java:439)
	at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:376)
	... 94 common frames omitted
Caused by: java.net.SocketException: Operation timed out (Read failed)
	at java.base/java.net.SocketInputStream.socketRead0(Native Method)
	at java.base/java.net.SocketInputStream.socketRead(Unknown Source)
	at java.base/java.net.SocketInputStream.read(Unknown Source)
	at java.base/java.net.SocketInputStream.read(Unknown Source)
	at java.base/java.io.BufferedInputStream.fill(Unknown Source)
	at java.base/java.io.BufferedInputStream.read1(Unknown Source)
	at java.base/java.io.BufferedInputStream.read(Unknown Source)
	at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:605)
	at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.read(JdbcThinTcpIo.java:586)
	at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.readResponse(JdbcThinTcpIo.java:525)
	at org.apache.ignite.internal.jdbc.thin.JdbcThinTcpIo.sendRequest(JdbcThinTcpIo.java:510)
	at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:747)
	... 100 common frames omitted

03-11-2020 23:00:44.032 [http-nio-8080-exec-4] INFO c.e.cortix.cache.handler.health.HealthCheckHandler.38 cache-query-service prod v1 cache-query-service-v1-5c5d8cd74d-jgnbb - Health is DOWN
03-11-2020 23:00:52.114 [http-nio-8080-exec-6] INFO c.e.cortix.cache.controller.RequestLoggingFilter.37 cache-query-service prod v1 cache-query-service-v1-5c5d8cd74d-jgnbb

On Wed, Nov 4, 2020 at 11:20 AM Sanjaya Kumar Sahoo wrote:
> Hi,
>
> I truly appreciated the support we are getting from the community.
>
> As of now we don't have a re-producer. The above issue basically comes
> once in a while.
>
> The server is up and running. *Note*: The Ignite cluster has been
> installed in an Azure Kubernetes cluster as StatefulSet pods.
>
> We have other application pods that frequently talk to Ignite.
>
> While analyzing we understood that the application pod which is
> creating the problem is running with Ignite 2.6.0, whereas the Ignite server
> is 2.8.1 for us.
>
> We followed the below steps and deployed to production:
>
> 1- Changed version of Hikari to 3.4.5
> 2- Ignite core changed to 2.8.1
> 3- Spring Boot was auto-configuring JDBC templates (with Hikari); we
> disabled the auto-configuration and configured manually.
>
> We deployed the application, we are monitoring, and will publish the
> result.
>
>
> Thanks,
> Sanjaya
>
>
> On Tue, Nov 3, 2020 at 8:28 PM Ilya Kasnacheev wrote:
>
>> Hello!
>>
>> Are you sure that the Ignite cluster is in fact up? :)
>>
>> If it is, maybe your usage patterns of this pool somehow assign the
>> connection to two different threads, which try to do queries in parallel.
>> In theory, this is what connection pools are explicitly created to avoid,
>> but maybe there's some knob you have to turn to actually make them
>> thread-exclusive.
>>
>> Also, does it happen every time? How soon would it happen?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Mon, Nov 2, 2020 at 12:31, Sanjaya wrote:
>>
>>> Hi All,
>>>
>>> We are trying to use Hikari connection pooling with the Ignite
>>> JdbcThinDriver.
>>> We are facing the issue below.
>>>
>>> Any idea what supported connection pooling mechanism works with
>>> the Ignite thin driver?
>>>
>>>
>>> ERROR LOG
>>> ==
>>>
>>> WARN com.zaxxer.hikari.pool.ProxyConnection.157 sm-event-consumer prod
>>> sm-event-consumer-v1-55f4db767d-2kskt - HikariPool-1 - Connection
>>> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection@68f0e2a1 marked
>>> as broken because of SQLSTATE(08006), ErrorCode(0)
>>>
>>> java.sql.SQLException: Failed to communicate with Ignite cluster.
>>>
>>> at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:760)
>>>
>>> at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.executeBatch(JdbcThinStatement.java:651)
Re: Ignite JDBC connection pooling mechanism
Hi,

I truly appreciate the support we are getting from the community.

As of now we don't have a re-producer. The above issue basically comes once in a while.

The server is up and running. *Note*: The Ignite cluster has been installed in an Azure Kubernetes cluster as StatefulSet pods.

We have other application pods that frequently talk to Ignite.

While analyzing we understood that the application pod which is creating the problem is running with Ignite 2.6.0, whereas the Ignite server is 2.8.1 for us.

We followed the below steps and deployed to production:

1- Changed version of Hikari to 3.4.5
2- Ignite core changed to 2.8.1
3- Spring Boot was auto-configuring JDBC templates (with Hikari); we disabled the auto-configuration and configured manually.

We deployed the application, we are monitoring, and will publish the result.

Thanks,
Sanjaya

On Tue, Nov 3, 2020 at 8:28 PM Ilya Kasnacheev wrote:
> Hello!
>
> Are you sure that the Ignite cluster is in fact up? :)
>
> If it is, maybe your usage patterns of this pool somehow assign the
> connection to two different threads, which try to do queries in parallel.
> In theory, this is what connection pools are explicitly created to avoid,
> but maybe there's some knob you have to turn to actually make them
> thread-exclusive.
>
> Also, does it happen every time? How soon would it happen?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Mon, Nov 2, 2020 at 12:31, Sanjaya wrote:
>
>> Hi All,
>>
>> We are trying to use Hikari connection pooling with the Ignite JdbcThinDriver.
>> We are facing the issue below.
>>
>> Any idea what supported connection pooling mechanism works with
>> the Ignite thin driver?
>>
>>
>> ERROR LOG
>> ==
>>
>> WARN com.zaxxer.hikari.pool.ProxyConnection.157 sm-event-consumer prod
>> sm-event-consumer-v1-55f4db767d-2kskt - HikariPool-1 - Connection
>> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection@68f0e2a1 marked
>> as broken because of SQLSTATE(08006), ErrorCode(0)
>>
>> java.sql.SQLException: Failed to communicate with Ignite cluster.
>>
>> at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:760)
>> at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.executeBatch(JdbcThinStatement.java:651)
>> at com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:128)
>> at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
>> at org.springframework.jdbc.core.JdbcTemplate.lambda$batchUpdate$2(JdbcTemplate.java:950)
>> at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:617)
>> at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:647)
>> at org.springframework.jdbc.core.JdbcTemplate.batchUpdate(JdbcTemplate.java:936)
>> at org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate.batchUpdate(NamedParameterJdbcTemplate.java:366)
>> at com.ecoenergy.cortix.sm.event.cache.SMIgniteCacheManager.updateObjectStates(SMIgniteCacheManager.java:118)
>> at com.ecoenergy.cortix.sm.event.notifcator.SMIgniteNotificator.notify(SMIgniteNotificator.java:69)
>> at com.ecoenergy.cortix.sm.event.eventhandler.ObjectEventHandler.notify(ObjectEventHandler.java:100)
>> at com.ecoenergy.cortix.sm.event.eventhandler.ObjectEventHandler.receiveEvents(ObjectEventHandler.java:86)
>> at com.ecoenergy.cortix.sm.event.consumer.ObjectEventConsumer.processObjectEvents(ObjectEventConsumer.java:60)
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
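Step 3 in the list above (turning off Spring Boot's auto-configured, Hikari-backed JDBC setup) can be sketched as follows, assuming a standard Spring Boot `application.properties`. The exact classes to exclude depend on the Boot version; the beans are then wired manually instead.

```properties
# Stop Spring Boot from auto-configuring a pooled DataSource and JdbcTemplate;
# the DataSource/JdbcTemplate beans are then defined manually in application code.
spring.autoconfigure.exclude=\
  org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration,\
  org.springframework.boot.autoconfigure.jdbc.JdbcTemplateAutoConfiguration
```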
Re: Ignite JDBC connection pooling mechanism
Please find the details below:

Java - 1.8
Hikari - 3.4.1
Ignite - 2.6.0
SSL Enabled - false

Thanks,
Sanjaya

On Mon, Nov 2, 2020 at 8:11 PM Vladimir Pligin wrote:
> Hi,
>
> What java version do you use? What about Hikari & Ignite versions? Do you
> have SSL enabled?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
Re: Apache ignite statefulsets pods abruptly restarts
Hi,

Thanks for your reply. We have set on-heap caching as below in the Ignite configuration XML:
https://apacheignite.readme.io/docs/memory-configuration#section-on-heap-caching

Basically, we have enabled on-heap caching in the XML cache configuration (the `onheapCacheEnabled` property set to `true`; the XML tag itself was stripped from the mail). The idea of moving to on-heap with persistence enabled is to minimize the latency of the query use case.

Am I doing anything wrong here?

Thanks,
Sanjaya

On Mon, Sep 21, 2020 at 10:11 PM Evgenii Zhuravlev wrote:
> There is no such thing as "on heap cache only". It's possible to enable
> an additional cache level in heap, but it still will be storing all data in
> the off-heap. So, right now you need at least 10.25 GB + 8 GB + checkpoint buffer
> size for your Ignite node.
>
> Evgenii
>
> On Mon, Sep 21, 2020 at 09:29, Sanjaya wrote:
>
>> Hi All,
>>
>> In our production environment, Ignite v2.8.1 is installed as Kubernetes
>> StatefulSet pods inside an Azure Kubernetes cluster. There are 2 pods
>> running.
>>
>> Ignite is persistence-enabled, with on-heap cache only.
>>
>> The pod is running with the below guaranteed resources:
>> Memory: 11 GB
>> CPU: 3 cores
>>
>> Ignite heap: 10.25 GB
>> Total data region size: 8 GB
>>
>>
>> We are getting the below error when 2 caches join each other without any
>> indexing; one of the pods' JVM simply restarts, and we are not sure what's going on.
>> The use case is that the Ignite cache grid holds all master data, gets loads
>> from Postgres, and is planned to be called from 30+ different pods for the same
>> kind of queries.
>>
>> We are completely stuck on this use case, and are wondering whether Ignite is right
>> for it.
>>
>>
>> The stack trace is below:
>> =
>> AND (A__Z0.ASSET_UID = B__Z1.ASSET_UID
>> ORDER BY 9, 1]
>> [09:43:10,370][WARNING][jvm-pause-detector-worker][IgniteKernal] Possible
>> too long JVM pause: 872 milliseconds.
>> [09:43:10,630][WARNING][client-connector-#52][IgniteH2Indexing] Long running
>> query is finished [time=4316ms, type=MAP, distributedJoin=false,
>> enforceJoinOrder=true, lazy=false, schema=CRTX, node=TcpDiscoveryNode
>> [id=4093191a-f958-4b4b-bf55-ae774d450fa2,
>> consistentId=4ed84cd6-d24c-4b2e-b61b-e747b0a6e6ba, addrs=ArrayList
>> [10.188.0.108, 127.0.0.1], sockAddrs=HashSet
>> [ignite-0.ignite.ignite.svc.cluster.local/10.188.0.108:47500,
>> /127.0.0.1:47500], discPort=47500, order=2, intOrder=2,
>> lastExchangeTime=1600681390383, loc=true, ver=2.8.1#20200521-sha1:86422096,
>> isClient=false], reqId=145, segment=0, sql='SELECT
>> A__Z0.ASSET_UID __C0_0,
>> A__Z0.ATTRIBUTE_CODE __C0_1,
>> B__Z1.TYPE __C0_2,
>> A__Z0.NUMVALUE __C0_3,
>> A__Z0.UNIT_SYMBOL __C0_4,
>> A__Z0.ALNVALUE __C0_5,
>> A__Z0.CHANGEDATE __C0_6,
>> B__Z1.CHANGEDATE __C0_7,
>> A__Z0.ORG_ID __C0_8
>> FROM CRTX.ASSET B__Z1
>> INNER JOIN CRTX.ASSETSPEC A__Z0
>> ON TRUE
>> WHERE (B__Z1.LOCATION_UID = 'R02ERUS010843') AND ((A__Z0.ORG_ID = ?4) AND
>> (((A__Z0.CHANGEDATE > ?2) OR (B__Z1.CHANGEDATE > ?3)) AND ((B__Z1.TYPE = ?1)
>> AND (A__Z0.ASSET_UID = B__Z1.ASSET_UID
>> ORDER BY 9, 1', plan=SELECT
>> A__Z0.ASSET_UID AS __C0_0,
>> A__Z0.ATTRIBUTE_CODE AS __C0_1,
>> B__Z1.TYPE AS __C0_2,
>> A__Z0.NUMVALUE AS __C0_3,
>> A__Z0.UNIT_SYMBOL AS __C0_4,
>> A__Z0.ALNVALUE AS __C0_5,
>> A__Z0.CHANGEDATE AS __C0_6,
>> B__Z1.CHANGEDATE AS __C0_7,
>> A__Z0.ORG_ID AS __C0_8
>> FROM CRTX.ASSET B__Z1
>> /* CRTX.ASSET.__SCAN_ */
>> /* WHERE (B__Z1.LOCATION_UID = 'R02ERUS010843')
>> AND (B__Z1.TYPE = ?1)
>> */
>> /* scanCount: 377126 */
>> INNER JOIN CRTX.ASSETSPEC A__Z0
>> /* CRTX."_key_PK": ASSET_UID = B__Z1.ASSET_UID */
>> ON 1=1
>> WHERE (B__Z1.LOCATION_UID = 'R02ERUS010843')
>> AND ((A__Z0.ORG_ID = ?4)
>> AND (((A__Z0.CHANGEDATE > ?2)
>> OR (B__Z1.CHANGEDATE > ?3))
>> AND ((B__Z1.TYPE = ?1)
>> AND (A__Z0.ASSET_UID = B__Z1.ASSET_UID
>> ORDER BY 9, 1]
>> /opt/ignite/apache-ignite/bin/ignite.sh: line 207: 74 Killed
>> "$JAVA" ${JVM_OPTS} ${QUIET} "${RESTART_SUCCESS_OPT}" ${JMX_MON:-}
>> -DIGNITE_HOME="${IGNITE_HOME}" -DIGNITE_PROG_NAME="$0" ${JVM_XOPTS} -cp
>> "${CP}" ${MAIN_CLASS} "${CONFIG}"
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
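Evgenii's sizing point can be made concrete with back-of-the-envelope arithmetic. This is a sketch that assumes Ignite's documented default checkpoint buffer size (roughly a quarter of the data region for regions in the 1-8 GB range, i.e. about 2 GB here; it can also be set explicitly); the heap, region, and pod figures come from the thread:

```python
# Rough memory budget for one Ignite pod, using the numbers from this thread.
heap_gb = 10.25            # JVM heap (-Xmx)
data_region_gb = 8.0       # off-heap data region max size
# Default checkpoint buffer for a 1-8 GB region is region/4 (assumption based
# on Ignite's documented defaults; configurable via checkpointPageBufferSize).
checkpoint_buffer_gb = data_region_gb / 4

pod_limit_gb = 11.0        # Kubernetes guaranteed memory for the pod

total_gb = heap_gb + data_region_gb + checkpoint_buffer_gb
print(f"required ~{total_gb} GB vs pod limit {pod_limit_gb} GB")

# The total exceeds the pod limit, so the kernel OOM killer terminating the
# JVM is consistent with the bare "Killed" line in ignite.sh's output above.
assert total_gb > pod_limit_gb
```

Under these assumptions the node needs roughly 20 GB while the pod is capped at 11 GB, so the abrupt restarts are expected regardless of on-heap caching.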