Re: Increase the indexing speed while loading the cache from an RDBMS
Thanks Anton & Mikhail for your responses. I'll try disabling the WAL and test the cache preload performance, and I'll also record a JFR to capture some info.

@Mikhail I've checked the RDBMS performance metrics and I do not see any spike in CPU or memory usage. Is there anything in particular that could cause a bottleneck from the DB perspective? The DB in use is Postgres.

@Anton Is there a sample or best-practice reference for using data streamers to directly query the DB and preload the cache? My intention is to use Spring Data via IgniteRepositories. In order to use an Ignite repository, I would have to preload the cache, and to preload the cache, I would have to query the DB. A bit of a chicken-and-egg problem.

Regards,
Srikanta

On Thu, Aug 27, 2020 at 7:54 PM Mikhail Cherkasov wrote:
> By the way, the bottleneck might be your RDBMS: it may simply stream data more slowly than Ignite can save it, so it makes sense to check this as well.
>
> On Thu, Aug 27, 2020 at 10:46 AM akurbanov wrote:
>> Hi,
>>
>> Which API are you using to load the entries, and what is the node configuration? I would recommend sharing the configs and trying the data streamer:
>>
>> https://apacheignite.readme.io/docs/data-streamers
>>
>> I would also recommend recording a JFR to find where the VM spends most of its time.
>>
>> Best regards,
>> Anton
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
> --
> Thanks,
> Mikhail.
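[Editor's note] To illustrate the approach Anton suggests, below is a minimal sketch of feeding an IgniteDataStreamer from a plain JDBC query, which avoids the chicken-and-egg issue because the DB query is issued directly rather than through an IgniteRepository. The cache name ("personCache"), the table/columns (person, id, name), and the connection URL are illustrative assumptions, not taken from the thread.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerPreload {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start()) {
            try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("personCache");
                 Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://db-host:5432/mydb", "app_user", "secret"); // assumed URL
                 Statement stmt = conn.createStatement()) {

                // Larger per-node buffers usually help bulk loads.
                streamer.perNodeBufferSize(4096);

                try (ResultSet rs = stmt.executeQuery("SELECT id, name FROM person")) {
                    while (rs.next())
                        streamer.addData(rs.getInt("id"), rs.getString("name"));
                }

                // flush() also happens implicitly on close; shown for clarity.
                streamer.flush();
            }
        }
    }
}
```

Once the cache is preloaded this way, the Spring Data repositories can query it as usual.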
Re: Shared counter
Hi Bastien,

Ignite provides the ability to have distributed data structures, and in your case an atomic long would work as a distributed counter. For more info: https://apacheignite.readme.io/docs/atomic-types

Regards,
Srikanta

On Wed, 26 Aug 2020, 12:24 pm Bastien Durel, wrote:
> Hello,
>
> I wish to know if there is a supported way to implement some kind of
> shared counter in Ignite, where any node could increment or decrement a
> value, and which would be decremented automatically if a node leaves
> the cluster?
> I know I can use an AtomicInteger, but there will be no decrement on
> exit, I guess?
>
> Should I use a cache (summing all counters) and
> manually evict rows when I get an EVT_NODE_FAILED/EVT_NODE_LEFT event,
> or is there a better way?
>
> Thanks,
>
> --
> Bastien Durel
> DATA
> Intégration des données de l'entreprise,
> Systèmes d'information décisionnels.
>
> bastien.du...@data.fr
> tel : +33 (0) 1 57 19 59 28
> fax : +33 (0) 1 57 19 59 73
> 12 avenue Raspail, 94250 GENTILLY France
> www.data.fr
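[Editor's note] A minimal sketch of the IgniteAtomicLong usage Srikanta refers to; the counter name "sharedCounter" is an arbitrary choice. Note that, as Bastien points out, the atomic long is not decremented automatically when a node leaves, so that part would still need custom handling via node events.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicLong;
import org.apache.ignite.Ignition;

public class CounterExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // create = true: creates the structure if it does not exist yet,
            // with initial value 0. Any node can obtain the same counter by name.
            IgniteAtomicLong counter = ignite.atomicLong("sharedCounter", 0, true);

            counter.incrementAndGet();
            counter.addAndGet(5);
            counter.decrementAndGet();

            System.out.println("Counter value: " + counter.get()); // 0 + 1 + 5 - 1 = 5
        }
    }
}
```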
Increase the indexing speed while loading the cache from an RDBMS
Currently I'm using Apache Ignite v2.8.1 to preload a cache from an RDBMS. There are two tables, each with 27M rows. An index is defined on a single column: of type String in the 1st table and Integer in the 2nd table. Together, the total size of the two tables is around 120GB. The preloading process (triggered using loadCacheAsync() from within a Java app) takes about 45 hrs. The cache has persistence enabled, and a common EBS volume (SSD) is being used for both the WAL and the other storage locations.

I'm unable to figure out the bottleneck. Apart from defining separate paths for the WAL and the persistence files, is there any other way to load the cache faster (with indexing enabled)?

Thanks,
Srikanta
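[Editor's note] One option discussed later in the thread is disabling the WAL for the cache during the bulk preload. A hedged sketch of the per-cache WAL toggle (available since Ignite 2.4) is below; the cache name "FooNamesCache" is an assumption. Keep in mind that data written while the WAL is off is not crash-safe until a checkpoint completes.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class WalToggle {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Activate the cluster (required when native persistence is enabled).
            ignite.cluster().active(true);

            // Disable the WAL for the cache being preloaded.
            ignite.cluster().disableWal("FooNamesCache");
            try {
                // Bulk preload from the underlying store; null predicate loads all rows.
                ignite.cache("FooNamesCache").loadCache(null);
            } finally {
                // Re-enable the WAL once the load is done; this forces a checkpoint.
                ignite.cluster().enableWal("FooNamesCache");
            }
        }
    }
}
```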
Re: Issue with using Ignite with Spring Data
Hi Denis,

Thanks for taking the time to reply and for sharing those links. I can confirm that I've read through them before and have been following them as well. However, as I read them again, I realised that it is in any case necessary to load the cache before executing SELECT SQL queries on top of it. Would this hold true in the case of Spring Data as well? (Very likely yes, but I want to get confirmation.) If so, are we expected to preload the cache on startup, and only after that will the read-through property kick in to add the entries that are missing from the cache?

If my understanding above is correct, that explains why I was getting null results from queries executed once Spring Boot was instantiated: the cache load on startup was not yet complete.

Regards,
Srikanta

On Fri, Aug 21, 2020 at 6:47 PM Denis Magda wrote:
> Hi Srikanta,
>
> You forgot to share the configuration. Anyway, I think it's clear what you are looking for.
>
> Check this example showing how to configure CacheJdbcPojoStoreFactory programmatically (click on the "Java" tab; by default the example shows the XML version):
> https://www.gridgain.com/docs/latest/developers-guide/persistence/external-storage#cachejdbcpojostore
>
> Also, if you need to import an existing schema of a relational database and turn it into the CacheJdbcPojoStore config, then this feature of Web Console can be helpful:
> https://www.gridgain.com/docs/web-console/latest/automatic-rdbms-integration
>
> Finally, keep an eye on this Spring Data + Ignite tutorial that covers other areas of the integration. You might have other questions and issues going forward, and the tutorial can help address them quickly:
> https://www.gridgain.com/docs/tutorials/spring/spring-ignite-tutorial
>
> -
> Denis
>
> On Fri, Aug 21, 2020 at 9:09 AM Srikanta Patanjali wrote:
>> I'm trying to integrate a Spring Data project (without JPA) with Ignite and struggling to understand some basic traits. It would be very helpful if you could share some insights on the issue I'm facing.
>>
>> Currently the cache has been defined as below on the client node. This config is not present on the server node; it gets created when the client node joins the cluster. The repositories are detected during the instantiation of the Spring Boot application.
>>
>> All the documentation, including the official example repo of Apache Ignite, does not pass in the data source; instead the cache config is set with the IndexedTypes.
>>
>> Question: Where should I pass the DataSource object? Should I create a CacheJdbcPojoStoreFactory and pass the dataSource to it?
>>
>> Thanks,
>> Srikanta
Issue with using Ignite with Spring Data
I'm trying to integrate a Spring Data project (without JPA) with Ignite and struggling to understand some basic traits. It would be very helpful if you could share some insights on the issue I'm facing.

Currently the cache has been defined as below on the client node. This config is not present on the server node; it gets created when the client node joins the cluster. The repositories are detected during the instantiation of the Spring Boot application.

All the documentation, including the official example repo of Apache Ignite, does not pass in the data source; instead the cache config is set with the IndexedTypes.

Question: Where should I pass the DataSource object? Should I create a CacheJdbcPojoStoreFactory and pass the dataSource to it?

Thanks,
Srikanta
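[Editor's note] To make the answer that emerges later in the thread concrete: yes, the DataSource is supplied to a CacheJdbcPojoStoreFactory (via a data-source factory), and that store factory is set on the CacheConfiguration. Below is a hedged sketch under assumed names (Person class, PERSON table, PersonCache, connection URL); field mappings would need to match the real schema.

```java
import java.sql.Types;

import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
import org.apache.ignite.cache.store.jdbc.JdbcType;
import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
import org.apache.ignite.cache.store.jdbc.dialect.BasicJdbcDialect;
import org.apache.ignite.configuration.CacheConfiguration;
import org.postgresql.ds.PGSimpleDataSource;

public class StoreConfig {
    /** Minimal value class matching the assumed PERSON table. */
    public static class Person {
        public int id;
        public String name;
    }

    public static CacheConfiguration<Integer, Person> personCacheCfg() {
        CacheJdbcPojoStoreFactory<Integer, Person> storeFactory = new CacheJdbcPojoStoreFactory<>();

        // This is where the DataSource goes: a (serializable) factory creating it.
        storeFactory.setDataSourceFactory(() -> {
            PGSimpleDataSource ds = new PGSimpleDataSource();
            ds.setUrl("jdbc:postgresql://db-host:5432/mydb"); // assumed URL
            return ds;
        });
        // BasicJdbcDialect with generic ANSI SQL works for PostgreSQL.
        storeFactory.setDialect(new BasicJdbcDialect());

        // Map the DB table to the cache key/value types.
        JdbcType personType = new JdbcType();
        personType.setCacheName("PersonCache");
        personType.setDatabaseTable("PERSON");
        personType.setKeyType("java.lang.Integer");
        personType.setKeyFields(new JdbcTypeField(Types.INTEGER, "id", Integer.class, "id"));
        personType.setValueType(Person.class.getName());
        personType.setValueFields(new JdbcTypeField(Types.VARCHAR, "name", String.class, "name"));
        storeFactory.setTypes(personType);

        CacheConfiguration<Integer, Person> cfg = new CacheConfiguration<>("PersonCache");
        cfg.setCacheStoreFactory(storeFactory);
        cfg.setReadThrough(true);  // cache misses fall through to the DB
        cfg.setWriteThrough(true); // cache writes propagate to the DB
        return cfg;
    }
}
```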
Re: Ignite 3rd party persistency DataSourceBean Config in Java
Hi,

I'm using the "org.postgresql.ds.PGPoolingDataSource" class as the data source bean for a PostgreSQL DB. The configuration of the data source is independent of Ignite and can be done as per the framework you are using. Here is an example if you are using Spring Boot:
https://docs.spring.io/spring-boot/docs/2.1.13.RELEASE/reference/html/howto-data-access.html#howto-configure-a-datasource

An example of the programmatic configuration approach can be found here:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcStoreExample.java

Regards,
Srikanta

On 2020/08/14 07:03:32, "m...@coinflex.com" wrote:
> Hi Experts,
>
> From the Ignite 3rd-party RDBMS persistency doc, it shows the example with XML config, like below,
>
> [XML config stripped by the mailing list archive]
>
> Can you help with an example of how to do this in Java code? Which bean should be used? Thanks.
>
> And by the way, as the Ignite dialects currently do not include PostgreSQL, how can we use PostgreSQL for 3rd-party persistency? Thanks a lot.
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
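[Editor's note] A minimal sketch of the PGPoolingDataSource bean Srikanta mentions, exposed as a Spring @Bean so a cache store can reference it; all connection details are illustrative assumptions. (Newer PostgreSQL JDBC drivers deprecate PGPoolingDataSource in favor of external pools such as HikariCP, but it matches the class named in this thread.)

```java
import javax.sql.DataSource;

import org.postgresql.ds.PGPoolingDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {
    @Bean
    public DataSource pgDataSource() {
        PGPoolingDataSource ds = new PGPoolingDataSource();
        ds.setServerName("db-host");    // assumed host
        ds.setDatabaseName("mydb");     // assumed database
        ds.setUser("app_user");         // assumed credentials
        ds.setPassword("secret");
        ds.setMaxConnections(10);       // small built-in pool
        return ds;
    }
}
```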
Re: Consistently B+Tree is getting corrupted in a specific scenario
] at org.apache.ignite.internal.processors.cache.GridCacheMapEntry$UpdateClosure.call(GridCacheMapEntry.java:5707) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.invokeClosure(BPlusTree.java:3817) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Invoke.access$5700(BPlusTree.java:3711) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1961) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1932) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invokeDown(BPlusTree.java:1932) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.invoke(BPlusTree.java:1839) [ignite-core-2.8.1.jar!/:2.8.1]
... 19 more

2020-08-16 20:40:40,184 ERROR o.a.i.l.l.Log4J2Logger [jdbc-cache-loader-#117%ig_svc_cluster%] A critical problem with persistence data structures was detected. Please make backup of persistence storage and WAL files for further analysis. Persistence storage path: null WAL path: db/wal WAL archive path: db/wal/archive

On Sun, Aug 16, 2020 at 4:11 PM Srikanta Patanjali wrote:
> Found the below exception with further analysis of the logs:
>
> 2020-08-16 13:34:14,306 ERROR o.a.i.l.l.Log4J2Logger [jdbc-cache-loader-#10199%ig_svc_cluster%] Critical system error detected.
> Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=CRITICAL_ERROR, err=class o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=795360926, val2=844420635324779]], cacheId=-175498, cacheName=T_NAME, indexName=FOO_NAMESEARCH_BRN_DESC_IDX, msg=Runtime failure on row: Row@6a714d32[ key: 311822, val: foo.TName [...REDACTED...] ][ ...REDACTED...
> org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=795360926, val2=844420635324779]], cacheId=-175498, cacheName=T_NAME, indexName=FOO_NAMESEARCH_BRN_DESC_IDX, msg=Runtime failure on row: Row@6a714d32[ key: 311822, val: foo.TName [...REDACTED...] ][...REDACTED...
> ]]
> at org.apache.ignite.internal.processors.query.h2.database.H2Tree.corruptedTreeException(H2Tree.java:673) [ignite-indexing-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2380) [ignite-core-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putx(BPlusTree.java:2327) [ignite-core-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:428) [ignite-indexing-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.addToIndex(GridH2Table.java:844) [ignite-indexing-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:782) [ignite-indexing-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:380) [ignite-indexing-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:2079) [ignite-core-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:409) [ignite-core-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:2627) [ignite-core-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1713) [ignite-core-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1688) [ignite-core-2.8.1.jar!/:2.8.1]
> at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:2444) [ignite-core-2.8.1.j
Re: Consistently B+Tree is getting corrupted in a specific scenario
] at org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore$1.call(CacheAbstractJdbcStore.java:434) [ignite-core-2.8.1.jar!/:2.8.1]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Caused by: java.lang.IllegalStateException: Duplicate row in index.
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Insert.run0(BPlusTree.java:442) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Insert.run0(BPlusTree.java:428) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:5711) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:5697) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:360) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:297) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$11300(BPlusTree.java:94) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.tryInsert(BPlusTree.java:3681) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Put.access$7100(BPlusTree.java:3361) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2625) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2606) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2606) ~[ignite-core-2.8.1.jar!/:2.8.1]
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2347) [ignite-core-2.8.1.jar!/:2.8.1]
... 27 more

I also observe that a bug with the same exception was fixed recently as part of https://issues.apache.org/jira/browse/IGNITE-10873

Are there any known reasons for the trigger of "Caused by: java.lang.IllegalStateException: Duplicate row in index."?

On Sat, Aug 15, 2020 at 1:56 PM Srikanta Patanjali wrote:
> Resharing the cache settings as they got snipped in the previous email:
>
> [XML configuration again stripped by the mailing list archive; the surviving fragments reference a TcpDiscoverySpi with a TcpDiscoveryMulticastIpFinder (127.0.0.1:47500..47510), a DataStorageConfiguration with a "FooNamesCache_Region" DataRegionConfiguration, and an LruEvictionPolicyFactory.]
Re: Consistently B+Tree is getting corrupted in a specific scenario
Resharing the cache settings as they got snipped in the previous email:

[XML configuration stripped by the mailing list archive; only the discovery address 127.0.0.1:47500..47510 survives.]