RE: Getting error Node is out of topology (probably, due to short-time network problems)
Hi,

I don't see anything suspicious in the provided GC logs. Do you run the Ignite nodes on VMs? If yes, do you have monitoring, and is it possible to check CPU usage during the period of time when the issue happened?

Regards,
Igor

-- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Re: Ignite cache TextQuery on JsonString data Field
Can you provide your CacheConfiguration?

Igor
Re: SQLRowCount in unixodbc driver
Hi Wolfgang,

According to the SQLRowCount function description, it is not meant to work for SELECT queries: "SQLRowCount returns the number of rows affected by an UPDATE, INSERT, or DELETE statement". More information can be found here: https://docs.microsoft.com/en-us/sql/odbc/reference/syntax/sqlrowcount-function

Igor
Re: Ignite cache TextQuery on JsonString data Field
Hi,

It looks like you should use "Person" as the type for the TextQuery instead of "CommonConstruction". That is, instead of
/var textquery = new TextQuery(typeof(CommonConstruction), "Mobile");/
use
/var textquery = new TextQuery(typeof(Person), "Mobile");/

Igor
RE: Getting error Node is out of topology (probably, due to short-time network problems)
Hi,

In the provided log I see the "Blocked system-critical thread has been detected" message, and the node was segmented because it was unable to respond to another node. Most probably this is caused by JVM pauses, possibly related to GC. Do you collect GC logs for the nodes? You can find information on how to enable GC logs here: https://ignite.apache.org/docs/latest/perf-and-troubleshooting/troubleshooting#detailed-gc-logs

Igor
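As a minimal sketch of enabling GC logs (assuming JDK 8 HotSpot flag names; JDK 9+ uses the unified -Xlog:gc* syntax instead), the flags can be passed to the node via the JVM_OPTS environment variable picked up by ignite.sh. The log file path here is a placeholder for your environment:

```shell
# Assumed JDK 8 flags; for JDK 9+ use e.g. -Xlog:gc*:file=gc.log instead.
export JVM_OPTS="-Xloggc:gc.log \
  -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps \
  -XX:+PrintGCApplicationStoppedTime"
```

Safepoint pauses not caused by GC can also block threads, which is why -XX:+PrintGCApplicationStoppedTime is worth including.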
Re: Getting error Node is out of topology (probably, due to short-time network problems)
Can you also provide the logs for a few minutes before the "Node is out of topology" message?

Igor
Re: DC replication via Kafka Connect for Apache Ignite
The Ignite Kafka sink connector doesn't support processing removals. I've created a ticket to implement this improvement: https://issues.apache.org/jira/browse/IGNITE-13442

But, as far as I know, such functionality is implemented in the GridGain Kafka Connector; you can find more details about it and data replication here: https://www.gridgain.com/docs/latest/integrations/kafka/kc-ex-replication

Igor
Re: Ignite client stuck
Do you have server logs for the period of time when you were observing the "Blocked system-critical thread has been detected" error on the client?
Re: How to know memory used by a cache or a set
Hi,

Did you enable metrics for your data region? To turn the metrics on, use one of the following approaches:

1. Set DataRegionConfiguration.setMetricsEnabled(true) for every region you want to collect the metrics for.
2. Use the DataRegionMetricsMXBean.enableMetrics() method exposed by a special JMX bean.

More information regarding cache metrics is available here: https://apacheignite.readme.io/docs/cache-metrics

Regards,
Igor
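For approach 1, a minimal Spring XML sketch (the region name "Default_Region" is just a placeholder for your configuration):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="name" value="Default_Region"/>
                <!-- Enables memory metrics collection for this region. -->
                <property name="metricsEnabled" value="true"/>
            </bean>
        </property>
    </bean>
</property>
```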
Re: zk connect loss
Hi,

Could you please provide your ZookeeperDiscoverySpi configuration? Maybe there is an issue with the network configuration; is it possible to establish a connection between the host with the Ignite node and the ZooKeeper host?

Regards,
Igor
Re: PartitionLossPolicy | IGNORE
Hi,

It looks like the following part is missing from the IGNORE policy description in the readme.io documentation: "The result of reading from a lost partition is undefined and may be different on different nodes in the cluster." So, with the IGNORE policy there is no guarantee that reading from a lost partition will return "null" on all nodes.

Regards,
Igor
Re: Ignite Audit logs into a separate file
Hi,

In case you're going to use log4j2 as the gridLogger for Ignite, you should add the "ignite-log4j2" dependency and set up Log4J2Logger as described here: https://apacheignite.readme.io/docs/logging#section-log4j2

Or, in case you just want to use the "Audit-logs" logger independently, you should create the logger this way:

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    private static final Logger logger = LogManager.getLogger("HelloWorld");
    logger.info("Hi Hello ...");

More information regarding log4j2 can be found here: https://logging.apache.org/log4j/2.x/manual/api.html

Regards,
Igor
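To actually route the audit messages into a separate file, you can attach a dedicated appender to that logger in log4j2.xml. A minimal sketch, assuming the logger name "Audit-logs" and the file path "audit.log" (both placeholders for your setup):

```xml
<Configuration>
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{ISO8601} [%t] %-5p %c - %m%n"/>
        </Console>
        <!-- Dedicated file for audit messages; the path is an assumption. -->
        <File name="AuditFile" fileName="audit.log">
            <PatternLayout pattern="%d{ISO8601} [%t] %-5p %c - %m%n"/>
        </File>
    </Appenders>
    <Loggers>
        <!-- additivity="false" keeps audit entries out of the root log. -->
        <Logger name="Audit-logs" level="INFO" additivity="false">
            <AppenderRef ref="AuditFile"/>
        </Logger>
        <Root level="INFO">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
```

Then obtain the logger with LogManager.getLogger("Audit-logs") so its output lands in audit.log only.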
RE: Failed to start cache due to conflicting cache ID
Hi,

It seems "nodeId" is a deprecated property, and "consistentId" should be used instead. Javadoc: https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#getNodeId--

Regards,
Igor
Re: accessing a file on IGFS via HTTP
Hi,

No, Ignite doesn't provide an HTTP API for IGFS. It's only possible to set up a TCP endpoint for the Hadoop integration, as described here: https://apacheignite-fs.readme.io/docs/file-system#section-configure-ignite

Regards,
Igor
Re: Use onheap and offheap memory at the same time recommend?
Hello,

There are two types of eviction in Ignite:

1. Off-heap eviction works when Native Persistence is turned off and an off-heap eviction policy is specified. In this case entries are stored off-heap until the eviction policy limit is reached, after which entries are removed from off-heap memory according to the policy. https://apacheignite.readme.io/docs/evictions#section-off-heap-memory

2. On-heap eviction works when "onheapCacheEnabled" is turned on and an on-heap eviction policy is specified. In this case entries are stored off-heap as in the previous case, but an additional copy is stored on-heap. If the on-heap eviction policy limit is reached, entries are removed from the heap according to the policy. https://apacheignite.readme.io/docs/evictions#section-on-heap-cache

More information regarding on-heap caching can be found here: https://apacheignite.readme.io/docs/memory-configuration#section-on-heap-caching

In case you're using Native Persistence, there is a process similar to eviction, called page replacement, but it's not configurable. With Native Persistence, once the off-heap limit is reached, entries are kept in the persistent storage. https://apacheignite.readme.io/docs/distributed-persistent-store

Regards,
Igor
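As a sketch of the on-heap case, the cache below enables the on-heap copy and caps it with an LRU policy (the cache name "myCache" and maxSize value are assumptions for illustration):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Keep an additional on-heap copy of off-heap entries. -->
    <property name="onheapCacheEnabled" value="true"/>
    <!-- Evict on-heap copies once 100000 entries are on the heap. -->
    <property name="evictionPolicyFactory">
        <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
            <property name="maxSize" value="100000"/>
        </bean>
    </property>
</bean>
```

Note that this only evicts the on-heap copies; the off-heap entries stay, subject to the data region limits.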
Re: How to disable replication over multicast for development nodes?
Yes, you're right. If you wish to use a specific port, just add it to the value: 127.0.0.1:47500. Or use a port range, in case you're going to run multiple nodes in the cluster: 127.0.0.1:47500..47509

Regards,
Igor
Re: How to disable replication over multicast for development nodes?
Hi.

To disable multicast, you need to change the "ipFinder" in the "discoverySpi" configuration to "TcpDiscoveryVmIpFinder" and specify the list of node addresses. Check the link below for a configuration example: https://apacheignite.readme.io/docs/tcpip-discovery#section-static-ip-finder

Regards,
Igor
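For reference, a minimal static IP finder configuration (addresses and the port range are placeholders for your environment) looks like this inside the IgniteConfiguration bean:

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <!-- Static IP finder: no multicast, only the listed addresses are tried. -->
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <property name="addresses">
                    <list>
                        <value>127.0.0.1:47500..47509</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
```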
Re: How many Ignite nodes per Server + backup behavior
Hello,

1. It's recommended to run 1 Ignite node per server, so try to reduce the node count to 4.

2. In case you're going to use 4 nodes per server, you need to set backups=1 and also set excludeNeighbors on the affinity function, as described here: https://apacheignite.readme.io/docs/affinity-collocation#section-crash-safe-affinity After that, data will be stored on 2 nodes (1 primary and 1 backup), and these nodes will be on different servers. For the 1-node-per-server case, just set backups=1.

3. Try to set query parallelism equal to the number of threads available on the server; it should increase performance. (1 node per server, parallelism=16)

Regards,
Igor
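Points 2 and 3 can be sketched in one cache configuration (the cache name "myCache" and the parallelism value of 16 are assumptions based on the thread count mentioned above):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- One backup copy per primary partition. -->
    <property name="backups" value="1"/>
    <!-- Split each SQL query into 16 parallel executions per node. -->
    <property name="queryParallelism" value="16"/>
    <property name="affinity">
        <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
            <!-- Keep primary and backup off nodes that share a physical host. -->
            <property name="excludeNeighbors" value="true"/>
        </bean>
    </property>
</bean>
```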
Re: no data by JDBC or Cache query!
Hello,

Could you please provide the cache configuration for the server nodes?

Regards,
Igor
Re: Ignite client connection issue "Cache doesn't exists ..." even though cache severs and caches are up and running.
Hello,

Could you please provide a small example of the cache usage from your application that reproduces the described issue? Also try to invoke ignite.cacheNames() and check that the requested cache name exists in this list (cache names are case sensitive). https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/Ignite.html#cacheNames--

Regards,
Igor