Re: ignite server restarted after Critical system error detected.
The cache is replicated and has two nodes configured.

1) Why does Ignite trigger data rebalancing from the Cassandra store? Shouldn't it load the cached data from one node to the other? In what case does Ignite have to go to the data store (Cassandra) to load data if the cache is configured as "Replicated"?

2) If a node in client mode is lost from the Ignite topology, does Ignite trigger data rebalancing? My understanding is that the client node has some data cached for that specific call, and this cached data will not be used for other calls.

3) The topology changed:

[17:09:02] Topology snapshot [ver=7, locNode=103dfcb4, servers=2, *clients=3*, state=ACTIVE, CPUs=15, offheap=6.5GB, heap=8.0GB]

The clients count should be 4 instead of 3; one client node was lost.

-- Sent from: http://apache-ignite-users.70518.x6.nabble.com/
ignite server restarted after Critical system error detected.
Hi Ignitians, I fail to understand the causes and need your help.

1) When k8s sees a "Critical system error", it restarts the ignite-admin server. Restarting is fine given the critical system error, but what causes the critical system error in the first place?

2) The critical system error may correspond to the JVM being held up; we still don't know why the JVM got held up.

3) The cluster lost one Ignite client node, probably due to an OOME.

4) Why/how was the Ignite server node triggered to reload the data from Cassandra? (All data in the C* tables is cached once the Ignite server starts. All SQL DML interacts with the Ignite cache, which interacts with Cassandra for insert/update/delete.) If Ignite needs to rebalance the data among the server nodes, why can't it rebalance the data from one node to another? And even if it is rebalancing data, why is it submitting invalid queries?

We are using apache-ignite-2.8.0.20190215.

Exceptions -

[2021-06-02 17:09:04,005][ERROR][sys-#103562%ignite-procurant-admin-cluster%][root] Critical system error detected. Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=CRITICAL_ERROR, err=class o.a.i.i.transactions.IgniteTxHeuristicCheckedException: Committing a transaction has produced runtime exception]] ...
1 more
at com.datastax.driver.core.AbstractSession.prepare(AbstractSession.java:104)
class org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException: Committing a transaction has produced runtime exception
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxAdapter.heuristicException(IgniteTxAdapter.java:800)
at org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitRemoteTx(GridDistributedTxRemoteAdapter.java:847)
at org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitIfLocked(GridDistributedTxRemoteAdapter.java:795)
at org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.salvageTx(GridDistributedTxRemoteAdapter.java:898)
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager.salvageTx(IgniteTxManager.java:398)
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager.access$3100(IgniteTxManager.java:134)
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$NodeFailureTimeoutObject.onTimeout0(IgniteTxManager.java:2551)
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$NodeFailureTimeoutObject.access$3300(IgniteTxManager.java:2505)
at org.apache.ignite.internal.processors.cache.transactions.IgniteTxManager$NodeFailureTimeoutObject$1.run(IgniteTxManager.java:2624)
at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6898)
at org.apache.ignite.internal.processors.closure.GridClosureProcessor$1.body(GridClosureProcessor.java:827)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Cannot execute this query as it might involve data filtering and thus may have unpredictable performance. If you want to execute this query despite the performance unpredictability, use ALLOW FILTERING
at java.lang.Thread.run(Thread.java:745)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.load(GridCacheStoreManagerAdapter.java:293)
at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadFromStore(GridCacheStoreManagerAdapter.java:338)
Caused by: class org.apache.ignite.IgniteCheckedException: class org.apache.ignite.IgniteException: Failed to execute Cassandra CQL statement: select "id", "sourceid", "versioning", "colname", "colnewvalue", "cololdvalue", "createdby", "ts", "op" from "admin"."user_history" where "id"=?;
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerReload(GridCacheMapEntry.java:984)
... 13 more
at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.readThrough(GridCacheMapEntry.java:619)
at org.apache.ignite.internal.processors.cache.distributed.GridDistributedTxRemoteAdapter.commitIfLocked(GridDistributedTxRemoteAdapter.java:701)
... 18 more
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:198)
Caused by: class org.apache.ignite.IgniteException: Failed to execute Cassandra CQL statement: select "id", "sourceid", "versioning", "colname", "colnewvalue", "cololdvalue", "createdby", "ts", "op" from "admin"."user_history" where "id"=?;
Caused by: javax.cache.integration.CacheLoaderException: class
no ignite spring data 2.2 on repo for ignite 2.10.0 release?
Hi All, I don't see an Ignite Spring Data 2.2 artifact in the Maven repository for the recent Ignite 2.10.0 release, but I do see an ignite-spring 2.10.0 artifact. Did I miss anything? Thanks, Xinmin
Re: Eager TTL and query
Excerpted from the latest Ignite docs: "If the property (eagerTtl) is set to false, expired entries are not removed immediately. Instead, they are removed when they are requested in a cache operation by the thread that executes the operation." Say mycache has two entries, e1 and e2, with a 5-minute created-expiry policy and eagerTtl = false. After 5 minutes, e3 is put into mycache ("e3", "e3"). Will e1 and e2 be removed from the cache when e3 is put into mycache? Thanks, Xinmin
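For concreteness, here is a sketch of the cache setup the question describes, assuming a standard JCache CreatedExpiryPolicy provides the 5-minute created-expiration (the cache name and types are illustrative, and compiling this requires ignite-core on the classpath):

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.configuration.CacheConfiguration;

public class MyCacheConfig {
    public static CacheConfiguration<String, String> create() {
        CacheConfiguration<String, String> cfg = new CacheConfiguration<>("mycache");

        // Entries expire 5 minutes after creation.
        cfg.setExpiryPolicyFactory(
            CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 5)));

        // With eager TTL disabled, expired entries are not cleaned up by a
        // background thread; they are purged lazily by the thread that
        // touches them in a cache operation.
        cfg.setEagerTtl(false);

        return cfg;
    }
}
```

This is only a configuration sketch of the scenario, not an answer: whether an unrelated put of e3 also purges e1 and e2 is exactly what the question asks.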
Re: not able to change keyspace name when Using 3rd Party Persistence (Cassandra) Together with Ignite Native Persistence
1) My testing shows that CassandraCacheStoreFactory is read from the XML config once, when the cache name does not yet exist in Ignite persistence. It appears that the CassandraCacheStoreFactory bean is persisted along with the cache name in the Ignite persistence store, and CassandraCacheStoreFactory will not be re-instantiated. So I need to decide the dynamicConfigurationReload value up front; modifying it later won't take effect dynamically.

2) Yes, a similar implementation applies to the data source. Thanks for the reminder.

3) Given that dynamicConfigurationReload is only loaded once, at the very first creation of the cache, I'm thinking we don't use dynamicConfigurationReload and instead implement the following:
3.1) Read the data source from the XML config file each time an Ignite server starts; if the data source is not null, use the latest one, else use the existing one.
3.2) Read persistenceSettingsBean from the XML config file each time an Ignite server starts; if the persistence settings are not null, use the latest ones, else use the existing ones.
3.3) Obviously, this is aggressive. But how many times do we start/restart an Ignite instance? The performance impact is minimal.

If you think it's okay, I'll start implementing and testing. Let me know if you have any other suggestions/recommendations. Appreciate your help. Xinmin
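The fallback rule in steps 3.1 and 3.2 could be sketched as a tiny helper (ConfigReload and preferLatest are hypothetical names for illustration, not Ignite API):

```java
public final class ConfigReload {
    private ConfigReload() {
    }

    /**
     * Implements the proposed reload rule: if a value was freshly read from
     * the XML config on this server start, prefer it; otherwise keep the
     * value already stored with the cache.
     */
    public static <T> T preferLatest(T latest, T existing) {
        return latest != null ? latest : existing;
    }
}
```

On each server start, the data source and persistence settings freshly read from the XML would be passed as `latest`, with the previously persisted values as `existing`.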
Re: not able to change keyspace name when Using 3rd Party Persistence (Cassandra) Together with Ignite Native Persistence
Thanks for your recommendations!

1) dynamicConfigurationReload cannot be reset once cacheStoreFactory is cached. That means cacheStoreFactory does not read this value again after it is first instantiated. Am I missing something? Which factory should I put dynamicConfigurationReload in within the XML configuration file?

2) Can you please elaborate on your comment that "the data source may come from local Spring context on each node"?
Re: not able to change keyspace name when Using 3rd Party Persistence (Cassandra) Together with Ignite Native Persistence
Thanks for your clarifications, and I appreciate your suggestions and guidance. My colleague and I went into the ignite-cassandra module and commented out two lines (for testing purposes) in org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory.getPersistenceSettings(); see the changes below. Removing these two lines forces Ignite to read the configuration settings from the XML at runtime. The change appears to work and meets our requirement. I'm looking for suggestions and recommendations from you and the Ignite community, and wondering if we could make these changes in the Ignite repository; I think it would be very useful for the community. Similar changes should be applied to org.apache.ignite.cache.store.cassandra.CassandraCacheStoreFactory.getDataSource().

private KeyValuePersistenceSettings getPersistenceSettings() {
    // commented out the two lines below for testing purposes
    //if (persistenceSettings != null)
    //    return persistenceSettings;

    if (persistenceSettingsBean == null) {
        throw new IllegalStateException("Either persistence settings bean or persistence settings itself " +
            "should be specified");
    }

    if (appCtx == null) {
        throw new IllegalStateException("Failed to get Cassandra persistence settings cause Spring application " +
            "context wasn't injected into CassandraCacheStoreFactory");
    }

    Object obj = loadSpringContextBean(appCtx, persistenceSettingsBean);

    if (!(obj instanceof KeyValuePersistenceSettings)) {
        throw new IllegalStateException("Incorrect persistence settings bean '" +
            persistenceSettingsBean + "' specified");
    }

    return persistenceSettings = (KeyValuePersistenceSettings)obj;
}
Re: not able to change keyspace name when Using 3rd Party Persistence (Cassandra) Together with Ignite Native Persistence
Hi Ilya, thanks for your guidance, and happy new year! Sorry for the late catch-up. You are right, Ignite Native Persistence does track changes in class definitions for native persistence itself. But the configuration to load data from the cache to the Cassandra store is kept in an XML configuration file, and this file is passed at runtime (see the example configuration below). When the Ignite server is stopped, the cache configuration is gone, not stored anywhere as far as I understand. When Ignite is restarted, the new XML is passed; there is no previous configuration with the old class definition in the XML. My question: how/where does the Ignite server get/read the old class definition if the new class definition is provided in the most recent XML config file?

So what I said is that the Cassandra store implementation may not need to change. It's the caller that reads the XML configuration and stores the data in the Cassandra store via the Ignite cache. If I remove the Ignite native persistence configuration ("persistenceEnabled" value="false"), then the Ignite server uses the XML configuration file passed at runtime, and the Ignite cache has the new class definition. I'd like your help to guide me in making the changes to read the XML configuration at runtime instead of reading the previously cached configuration.

(The example XML configuration was not preserved; the key/value types were java.lang.String and com.procurant.catalog.entity.Uom.)
Re: not able to change keyspace name when Using 3rd Party Persistence (Cassandra) Together with Ignite Native Persistence
Thanks for your suggestion, Ilya. Can you give me a reference for overriding the default Cassandra store implementation? I thought that the changes would be on the Ignite persistence side, because caching data from a standalone Cassandra store (without Ignite native persistence) did read the configuration, including the class definition, from the XML file. Regards, Xinmin
Re: not able to change keyspace name when Using 3rd Party Persistence (Cassandra) Together with Ignite Native Persistence
Thanks for your confirmation, Ilya. I do have a follow-up question for you. When Ignite persistence is used together with Cassandra, the caches for the Cassandra table mappings are provided via an XML file, and the Java class for the mappings serves BOTH Ignite persistence and the Cassandra cache store. When a table is changed, the mapping class is changed and the cluster is restarted. Why is Ignite persistence able to use this updated Java class (with variables added/removed) to insert/update data in the Ignite persistence store, while the Cassandra store appears to use the old class definition? In this case, what is the effort required to use the new class definition? Do you think it's reasonable to read the cache configuration (i.e. via XML) dynamically for a 3rd-party cache store, as happens when the 3rd-party store is used alone? I'm not an expert in Java, but I can definitely help (or find a resource) to implement this. Please let me know if I can submit a feature request. Appreciate your help.
Re: not able to change keyspace name when Using 3rd Party Persistence (Cassandra) Together with Ignite Native Persistence
Thank you for your answer. I'm able to load the data from Cassandra into the Ignite cache using Cache.loadfull(). This works well for the use case where the 3rd-party persistence store is used first and Ignite native persistence is added alongside it later. However, the configuration of the 3rd-party persistence store together with the Ignite native persistence store appears to be stored inside the native persistence store (or somewhere); any changes in the XML configuration will not propagate to the 3rd-party data store. For example, after adding a new column to an Ignite cache, the data for the newly added column is not saved in the 3rd-party persistence store even though it is saved into the Ignite native persistence store. In order to save data for the new column, the existing cache needs to be destroyed and re-created. That works for caching small data sets, but it's probably not practical to destroy and re-create a cache with a huge data set. I don't know what I've missed, or whether this is by design. If it's by design, I'd like to request an enhancement so that the configuration can be dynamically read from the XML file when the Ignite native persistence store is configured together with a 3rd-party store.
not able to change keyspace name when Using 3rd Party Persistence (Cassandra) Together with Ignite Native Persistence
I'm able to use Cassandra together with Ignite persistence (see the example below) using Ignite 2.8.1. However, I could not change to another keyspace that has the same tables as the keyspace initially configured in the Ignite config file. The data changes are still stored in the old keyspace instead of the new keyspace set in the Ignite config file. It seems to me that the keyspace is cached in the Ignite native persistence layer. How do I change the Cassandra keyspace?

The use case is that we use Ignite as a cache initially, with the data stored in Cassandra. As the data grows, caching all of it is impractical, so we plan to flush the data to disk when memory reaches its limits, while still storing the data in Cassandra at the same time. How do we load the existing data from the Cassandra store into the native persistence store when switching from Ignite cache + Cassandra store to Ignite cache + Ignite persistence + Cassandra store?

Here is what I did:
1) Duplicated the existing tables (no data) into another keyspace in Cassandra.
2) Read the data from the original keyspace with the Cassandra driver. This loaded the data into 2.1) the Ignite cache and native persistence, and 2.2) the new Cassandra keyspace. It's undesirable to store the original data back into the original keyspace.
3) Changed the Cassandra connection settings to switch back to the original keyspace, because the Ignite cache and native persistence layer have all the data from Cassandra. 3.1) Switching back to the original Cassandra keyspace did not take effect; the data changes were still stored in the new keyspace.

Config files: 1. cassandra-connection-settings.xml, 2. dim_store_key_persistence.xml, 3. the Ignite config (the XML contents were not preserved; the key/value types were com.procurant.model.IdKey and com.procurant.model.DimStore).
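Regarding loading the existing Cassandra data into native persistence: one common approach is a warm-up call to IgniteCache.loadCache(), which triggers the configured CacheStore's loadCache() on all server nodes. A sketch under the assumption that the cache is already configured with the Cassandra store (the cache name and config path below are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class WarmUp {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("ignite-config.xml");

        // With native persistence enabled, the cluster starts inactive;
        // activate it before touching caches.
        ignite.cluster().active(true);

        IgniteCache<Object, Object> cache = ignite.cache("DimStore");

        // Delegates to CacheStore.loadCache() on every node, pulling rows
        // from the configured Cassandra store into the cache (and hence
        // into native persistence).
        cache.loadCache(null);
    }
}
```

Whether this also solves the keyspace-switching problem is a separate question, since the store factory configuration appears to be persisted with the cache, as discussed above.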
Re: Ignite client node raises "Sequence was removed from cache" after Ignite Server node restarts
Thanks. We tried it outside the transaction when the Ignite instance restarted, using EventType.EVT_CLIENT_NODE_RECONNECTED. However, it hangs at igniteInstance.atomicLong():

// when the ignite instance restarted, it hangs here
userSeq = igniteInstance.atomicLong("userSeq", maxId, true);
userSeq.getAndSet(maxId);

We know the sequence was removed; does the client keep a reference to the old Ignite server? How do we clear this reference and reset? Or did we do it incorrectly? Thanks, Xinmin
Re: Getting NullPointerException during commit into cassandra, after reconnecting to ignite server
I got this exception on Ignite 2.8.0 when the node restarted and data was inserted into the cache and the Cassandra store. Please note:
1) My setup has native persistence enabled, with just one server node and no client nodes.
2) There is no issue on the very first start (i.e., when the native storage has not been created yet).
3) Ignite can be restarted if I delete all data from the native storage folder.

So basically, the Ignite server is not able to update/insert data once the storage directory exists. Any help is appreciated.

[16:32:47] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT
[16:32:48] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[16:32:48] Security status [authentication=off, tls/ssl=off]
[16:32:50] Both Ignite native persistence and CacheStore are configured for cache 'FactLine'. This configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts. Consult documentation for more details.
[16:32:50] Both Ignite native persistence and CacheStore are configured for cache 'InvoiceLine'. This configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts. Consult documentation for more details.
[16:32:50] Both Ignite native persistence and CacheStore are configured for cache 'DimProduct'. This configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts. Consult documentation for more details.
[16:32:50] Both Ignite native persistence and CacheStore are configured for cache 'Fact'. This configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts. Consult documentation for more details.
[16:32:50] Both Ignite native persistence and CacheStore are configured for cache 'DimStore'. This configuration does not guarantee strict consistency between CacheStore and Ignite data storage upon restarts. Consult documentation for more details.
[16:33:09] Performance suggestions for grid 'MyCluster' (fix if possible)
[16:33:09] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[16:33:09] ^-- Enable ATOMIC mode if not using transactions (set 'atomicityMode' to ATOMIC)
[16:33:09] ^-- Enable write-behind to persistent store (set 'writeBehindEnabled' to true)
[16:33:09] ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[16:33:09] ^-- Specify JVM heap max size (add '-Xmx<size>[g|G|m|M|k|K]' to JVM options)
[16:33:09] ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
[16:33:09] ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[16:33:09] Refer to this page for more performance suggestions: https://apacheignite.readme.io/docs/jvm-and-system-tuning
[16:33:09]
[16:33:09] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[16:33:09] Data Regions Configured:
[16:33:09] ^-- Default_Region [initSize=10.0 MiB, maxSize=100.0 MiB, persistence=true, lazyMemoryAllocation=true]
[16:33:09]
[16:33:09] Ignite node started OK (id=53454c70, instance name=MyCluster)
[16:33:09] Topology snapshot [ver=1, locNode=53454c70, servers=1, clients=0, state=INACTIVE, CPUs=8, offheap=0.1GB, heap=7.1GB]
[16:33:09] ^-- Baseline [id=0, size=1, online=1, offline=0]
[16:33:09] ^-- All baseline nodes are online, will start auto-activation
>>> *** Start...
>>> *** start populateDimStore...
16:33:09.755 [main] INFO com.datastax.driver.core.GuavaCompatibility - Detected Guava >= 19 in the classpath, using modern compatibility layer
16:33:09.757 [main] DEBUG com.datastax.driver.core.SystemProperties - com.datastax.driver.NEW_NODE_DELAY_SECONDS is undefined, using default value 1
16:33:09.759 [main] DEBUG com.datastax.driver.core.SystemProperties - com.datastax.driver.NOTIF_LOCK_TIMEOUT_SECONDS is undefined, using default value 60
16:33:09.771 [main] INFO com.datastax.driver.core.SystemProperties - com.datastax.driver.USE_NATIVE_CLOCK is defined, using value false
16:33:09.771 [main] INFO com.datastax.driver.core.ClockFactory - Using java.lang.System clock to generate timestamps.
16:33:09.774 [main] DEBUG com.datastax.driver.core.SystemProperties - com.datastax.driver.NON_BLOCKING_EXECUTOR_SIZE is undefined, using default value 8
16:33:09.818 [main] DEBUG com.datastax.driver.core.Cluster - Starting new cluster with contact points [127.0.0.1:9042]
16:33:09.830 [main] DEBUG io.netty.util.internal.logging.InternalLoggerFactory - Using SLF4J as the default logging framework
16:33:09.840 [main] DEBUG io.netty.util.internal.InternalThreadLocalMap -
Re: InvalidClassException local class incompatible for custom cache store factory
Hi Denis, an update: I'm hitting the NPE bug (https://issues.apache.org/jira/browse/IGNITE-13431) because I'm using Cassandra as the data store with the PRIMITIVE key persistence strategy in the original test. When I switched to a composite key, it worked; the original setup will have to wait until this bug gets fixed. (The configuration snippets labeled "this fails" and "this works" were not preserved.)
Re: InvalidClassException local class incompatible for custom cache store factory
Hi Denis, please help. I'm trying to use Cassandra persistence together with Ignite persistence (https://apacheignite.readme.io/docs/3rd-party-store). Cassandra is used to store the data for other applications. It works on the first start: the data is inserted into the Cassandra store and the Ignite cache. However, when the node stops and restarts, the exception below is raised. What is missing from the configuration?

[02:02:39] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[02:02:39] Security status [authentication=off, tls/ssl=off]
[02:02:40] Ignite node stopped in the middle of checkpoint. Will restore memory state and finish checkpoint on node start.
[2020-09-13 02:02:41,093][ERROR][main][IgniteKernal%MyCluster] Exception during start processors, node will be stopped and close connections
class org.apache.ignite.IgniteException: Failed to enrich cache configuration [cacheName=DimProduct]
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.enrich(CacheConfigurationEnricher.java:129)
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.enrich(CacheConfigurationEnricher.java:62)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheInRecoveryMode(GridCacheProcessor.java:2268)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.access$1700(GridCacheProcessor.java:202)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor$CacheRecoveryLifecycle.afterBinaryMemoryRestore(GridCacheProcessor.java:5386)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreBinaryMemory(GridCacheDatabaseSharedManager.java:1075)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.startMemoryRestore(GridCacheDatabaseSharedManager.java:2049)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1254)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2045)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1703)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1117)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1035)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:921)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:659)
at org.apache.ignite.Ignition.start(Ignition.java:346)
at com.procurant.test.partition.FactPartitionNativePersistenceSimpleTester.main(FactPartitionNativePersistenceSimpleTester.java:71)
Caused by: class org.apache.ignite.IgniteException: Failed to deserialize field storeFactory
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.deserialize(CacheConfigurationEnricher.java:154)
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.enrich(CacheConfigurationEnricher.java:122)
... 17 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to deserialize object with given class loader: sun.misc.Launcher$AppClassLoader@764c12b6
at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:132)
at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:139)
at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:81)
at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10304)
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.deserialize(CacheConfigurationEnricher.java:151)
... 18 more
Caused by: java.lang.NullPointerException
at java.util.Collections$UnmodifiableCollection.<init>(Collections.java:1026)
at java.util.Collections$UnmodifiableList.<init>(Collections.java:1302)
at java.util.Collections.unmodifiableList(Collections.java:1287)
at org.apache.ignite.cache.store.cassandra.persistence.PersistenceSettings.readObject(PersistenceSettings.java:533)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1170)
at
Re: Ignite client node raises "Sequence was removed from cache" after Ignite Server node restarts
Hi Ilya, I tried your suggestion of removing the userSeq field from the class. It still raises the same exception: org.apache.ignite.IgniteException: Cannot start/stop cache within lock or transaction. The error is thrown when trying to get igniteInstance.atomicLong("userSeq", maxId, true). Thanks.
Re: InvalidClassException local class incompatible for custom cache store factory
Hi Denis, no, I didn't change any types of fields. The data was stored on disk with Ignite 2.8.0. When I upgraded to 2.8.1, a serialization exception was raised. Adding java.io.Serializable and generating a serialVersionUID did not help. It seems that the data stored in the native persistence db was serialized with a different version of the class than the one in use after upgrading from Ignite 2.8.0 to 2.8.1. Any suggestion is appreciated.

Caused by: java.io.InvalidClassException: org.apache.ignite.cache.store.cassandra.persistence.PersistenceSettings; local class incompatible: stream classdesc serialVersionUID = 1922252004176098172, local class serialVersionUID = 504991993937024313
class org.apache.ignite.IgniteException: Failed to enrich cache configuration [cacheName=Invoice]
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.enrich(CacheConfigurationEnricher.java:129)
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.enrich(CacheConfigurationEnricher.java:62)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCacheInRecoveryMode(GridCacheProcessor.java:2268)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.access$1700(GridCacheProcessor.java:202)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor$CacheRecoveryLifecycle.afterBinaryMemoryRestore(GridCacheProcessor.java:5386)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreBinaryMemory(GridCacheDatabaseSharedManager.java:1075)
at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.startMemoryRestore(GridCacheDatabaseSharedManager.java:2049)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1254)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2045)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1703)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1117)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1035)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:921)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:820)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:659)
at org.apache.ignite.Ignition.start(Ignition.java:346)
at com.procurant.test.partition.FactPartitionNativePersistenceNoPrecachedTester.main(FactPartitionNativePersistenceNoPrecachedTester.java:71)
Caused by: class org.apache.ignite.IgniteException: Failed to deserialize field storeFactory
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.deserialize(CacheConfigurationEnricher.java:154)
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.enrich(CacheConfigurationEnricher.java:122)
... 17 more
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to deserialize object with given class loader: sun.misc.Launcher$AppClassLoader@4e25154f
at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:132)
at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:139)
at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:81)
at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:10304)
at org.apache.ignite.internal.processors.cache.CacheConfigurationEnricher.deserialize(CacheConfigurationEnricher.java:151)
... 18 more
Caused by: java.io.InvalidClassException: org.apache.ignite.cache.store.cassandra.persistence.PersistenceSettings; local class incompatible: stream classdesc serialVersionUID = 1922252004176098172, local class serialVersionUID = 504991993937024313
Re: InvalidClassException local class incompatible for custom cache store factory
Hi Denis,

I have the same issue when persistenceEnabled = true and Ignite is upgraded from 2.8.0 to 2.8.1. The data had been stored on disk (single node) running Ignite 2.8.0. The exception happens for persistence classes that do not implement java.io.Serializable when upgrading to 2.8.1. Adding java.io.Serializable does not resolve the exception.

In general, when persistenceEnabled = true and the data is already stored on disk:
1) How do we resolve serialization incompatibilities in Ignite's internal implementation classes between releases, i.e. the issue I'm experiencing?
2) How do we handle key and value class changes (adding/removing fields)?

Thanks,
Xinmin
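The stack trace pins down the mechanism: the serialVersionUID read from the 2.8.0 stream no longer matches the one computed for the local 2.8.1 class, so JdkMarshaller refuses to deserialize the stored storeFactory. When a Serializable class declares no explicit serialVersionUID, Java derives one from the class structure, and any structural change between releases invalidates previously serialized data. The self-contained sketch below illustrates why pinning the UID matters; note this only helps for classes you control — for Ignite-internal classes such as PersistenceSettings, the practical direction is keeping the on-disk and classpath class versions consistent (e.g. staying on one Ignite version for the persisted configuration, or recreating the affected cache after the upgrade).

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class SerialVersionDemo {
    // With an explicit serialVersionUID, serialized streams stay compatible
    // across structural changes (as long as the changes themselves are
    // serialization-compatible, e.g. adding fields).
    static class Pinned implements Serializable {
        private static final long serialVersionUID = 1L; // pinned explicitly
        String name;
    }

    public static void main(String[] args) {
        // The declared UID is exactly the value written into the stream:
        long uid = ObjectStreamClass.lookup(Pinned.class).getSerialVersionUID();
        System.out.println(uid); // prints 1
    }
}
```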
Re: Ignite client node raises "Sequence was removed from cache" after Ignite Server node restarts
How do we enable persistence for this atomicLong? My understanding is that atomicLong and atomicReference can't be persisted. Even if they could be, wouldn't the next value be reset to the initial value? My issue is that the sequence cached in the Spring bean was removed. How do I re-initialize this sequence after the Ignite server node restarts while the client node is still running? The sequence is initialized on the client side using @PostConstruct. We need a way to re-initialize it with the max value from the DB once the Ignite server has restarted and the client node has reconnected.

@PostConstruct
@Override
public void initSequence() {
    Long maxId = userRepository.getMaxId();
    if (maxId == null) {
        maxId = 0L;
    }
    LOG.info("Max User id: {}", maxId);
    userSeq = igniteInstance.atomicLong("userSeq", maxId, true);
    userSeq.getAndSet(maxId);
}
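One way to survive a server restart is to treat the IgniteAtomicLong handle as disposable: catch the "Sequence was removed from cache" IllegalStateException, recreate the atomic (outside any lock or transaction — which is why recreating it inside a transactional save failed with "cannot start/stop cache within lock or transaction"), and retry. The sketch below shows only the guard-and-recreate pattern; the Sequence interface is a hypothetical stand-in for IgniteAtomicLong and the factory stands in for the @PostConstruct logic (getMaxId() + ignite.atomicLong("userSeq", maxId, true)) — it is not Ignite API.

```java
import java.util.function.Supplier;

// Guard-and-recreate pattern for a handle that becomes invalid when the
// server restarts. Sequence stands in for IgniteAtomicLong; the factory
// stands in for ignite.atomicLong(name, initialValue, true).
public class SequenceGuard {
    interface Sequence {
        long incrementAndGet(); // may throw IllegalStateException if removed
    }

    private final Supplier<Sequence> factory;
    private volatile Sequence seq;

    SequenceGuard(Supplier<Sequence> factory) {
        this.factory = factory;
        this.seq = factory.get();
    }

    long next() {
        try {
            return seq.incrementAndGet();
        } catch (IllegalStateException removed) {
            // The handle points at a cache entry that no longer exists:
            // recreate it (outside any lock/transaction) and retry once.
            seq = factory.get();
            return seq.incrementAndGet();
        }
    }
}
```

In the real service, factory.get() would re-run the initSequence() logic (query the max id, recreate the atomic) and return the fresh handle; the recreation must happen outside the surrounding transaction.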
Ignite client node raises "Sequence was removed from cache" after Ignite Server node restarts
*USE CASE* - use IgniteAtomicLong for table sequence generation (may not be the correct approach in a distributed environment).

*Ignite Server* (started in server mode) - apache-ignite-2.8.0.20190215 daily build
*Ignite Service* (started in client mode) - uses Ignite Spring to initialize the sequence, see the code snippet below.

*code snippet*

IgniteAtomicLong userSeq;

@Autowired
UserRepository userRepository;

@Autowired
Ignite igniteInstance;

@PostConstruct
@Override
public void initSequence() {
    Long maxId = userRepository.getMaxId();
    if (maxId == null) {
        maxId = 0L;
    }
    LOG.info("Max User id: {}", maxId);
    userSeq = igniteInstance.atomicLong("userSeq", maxId, true);
    userSeq.getAndSet(maxId);
}

@Override
public Long getNextSequence() {
    return userSeq.incrementAndGet();
}

*Exception*

This code works well until the Ignite Server is restarted (the Ignite Service was not restarted). It raises "Sequence was removed from cache" after the Ignite Server node restarts.

2020-08-11 16:14:46 [http-nio-8282-exec-3] ERROR c.p.c.p.service.PersistenceService - Error while saving entity: java.lang.IllegalStateException: Sequence was removed from cache: userSeq
at org.apache.ignite.internal.processors.datastructures.AtomicDataStructureProxy.removedError(AtomicDataStructureProxy.java:145)
at org.apache.ignite.internal.processors.datastructures.AtomicDataStructureProxy.checkRemoved(AtomicDataStructureProxy.java:116)
at org.apache.ignite.internal.processors.datastructures.GridCacheAtomicLongImpl.incrementAndGet(GridCacheAtomicLongImpl.java:94)

*Tried to reinitialize when the server node comes back, but that raises another exception - "cannot start/stop cache within lock or transaction"*

How do I solve such issues? Any suggestions are appreciated.

@Override
public Long getNextSequence() {
    if (userSeq == null || userSeq.removed()) {
        initSequence();
    }
    return userSeq.incrementAndGet();
}
Re: adding new primitive type columns to the existing tables (table has data) in Cassandra causes Ignite to raise an exception in Ignite 2.7.0 or 2.8.0 daily build when loading the data from Cassandra
The issue was created in the Ignite Jira (https://issues.apache.org/jira/browse/IGNITE-11523). I don't think we need a reproducer. Instead, I pointed out the implementation logic in the Java class and the reasoning. The previous implementation (2.6) was correct and gave the expected behavior.
adding new primitive type columns to the existing tables (table has data) in Cassandra causes Ignite to raise an exception in Ignite 2.7.0 or 2.8.0 daily build when loading the data from Cassandra
Hi All,

Adding new primitive type columns to existing tables (tables with data) in Cassandra causes Ignite to raise an exception (see below) in Ignite 2.7.0 or a 2.8.0 nightly build when loading the data from Cassandra into the Ignite cache store. This worked up through Ignite 2.6, and even with the apache-ignite-fabric-2.7.0.20180918 nightly build. The assumption that a mapped column is never null seems incorrect, because both Ignite and Cassandra are (key, value) stores. In fact, with the SQLLINE tool we don't need to insert a value for a primitive-typed column, see the example below:

CREATE TABLE aa (col1 int PRIMARY KEY, col2 double);
INSERT INTO aa(col1) VALUES (1);
SELECT * FROM aa;

0: jdbc:ignite:thin://localhost> select * from aa;
+-------+-------+
| COL1  | COL2  |
+-------+-------+
| 1     | null  |
+-------+-------+
1 row selected (0.052 seconds)
0: jdbc:ignite:thin://localhost>

[2019-03-11 09:44:34,352][ERROR][cassandra-cache-loader-#61%ignite-procurant-purchase-order-cluster%][CassandraCacheStore] Failed to build Ignite value object from provided Cassandra row
java.lang.IllegalArgumentException: Can't cast null value from Cassandra table column 'suppliertotamt' to double value used in domain object model
at org.apache.ignite.cache.store.cassandra.common.PropertyMappingHelper.getCassandraColumnValue(PropertyMappingHelper.java:169)
at org.apache.ignite.cache.store.cassandra.persistence.PojoField.setValueFromRow(PojoField.java:205)
at org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.buildObject(PersistenceController.java:405)
at org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.buildValueObject(PersistenceController.java:227)
at org.apache.ignite.cache.store.cassandra.session.LoadCacheCustomQueryWorker$1.process(LoadCacheCustomQueryWorker.java:107)
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:402)
at org.apache.ignite.cache.store.cassandra.session.LoadCacheCustomQueryWorker.call(LoadCacheCustomQueryWorker.java:81)
at org.apache.ignite.cache.store.cassandra.session.LoadCacheCustomQueryWorker.call(LoadCacheCustomQueryWorker.java:35)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
[2019-03-11 09:44:34,360][ERROR][cassandra-cache-loader-#61%ignite-procurant-purchase-order-cluster%][CassandraCacheStore] Failed to execute Cassandra loadCache operation
class org.apache.ignite.IgniteException: Failed to execute Cassandra loadCache operation
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:415)
at org.apache.ignite.cache.store.cassandra.session.LoadCacheCustomQueryWorker.call(LoadCacheCustomQueryWorker.java:81)
at org.apache.ignite.cache.store.cassandra.session.LoadCacheCustomQueryWorker.call(LoadCacheCustomQueryWorker.java:35)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: class org.apache.ignite.IgniteException: Failed to build Ignite value object from provided Cassandra row
at org.apache.ignite.cache.store.cassandra.session.LoadCacheCustomQueryWorker$1.process(LoadCacheCustomQueryWorker.java:112)
at org.apache.ignite.cache.store.cassandra.session.CassandraSessionImpl.execute(CassandraSessionImpl.java:402)
... 6 more
Caused by: java.lang.IllegalArgumentException: Can't cast null value from Cassandra table column 'suppliertotamt' to double value used in domain object model
at org.apache.ignite.cache.store.cassandra.common.PropertyMappingHelper.getCassandraColumnValue(PropertyMappingHelper.java:169)
at org.apache.ignite.cache.store.cassandra.persistence.PojoField.setValueFromRow(PojoField.java:205)
at org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.buildObject(PersistenceController.java:405)
at org.apache.ignite.cache.store.cassandra.persistence.PersistenceController.buildValueObject(PersistenceController.java:227)
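The failure above is exactly the null-into-primitive problem: PropertyMappingHelper tries to push a null Cassandra column into a primitive double field of the domain object, and a Java primitive cannot hold null. The plain-Java illustration below (not Ignite code) shows why declaring the POJO field as the boxed Double, or backfilling the new column, avoids the error — whether the Ignite Cassandra module accepts the boxed type here depends on PropertyMappingHelper in your Ignite version, so treat this as a direction to test rather than a guaranteed fix.

```java
import java.lang.reflect.Field;

public class NullColumnDemo {
    static class PojoPrimitive { double supplierTotAmt; } // primitive: cannot hold null
    static class PojoBoxed     { Double supplierTotAmt; } // boxed: null is fine

    public static void main(String[] args) throws Exception {
        Field boxed = PojoBoxed.class.getDeclaredField("supplierTotAmt");
        boxed.set(new PojoBoxed(), null); // ok: absent Cassandra column -> null

        Field prim = PojoPrimitive.class.getDeclaredField("supplierTotAmt");
        try {
            prim.set(new PojoPrimitive(), null); // mirrors the Ignite error
        } catch (IllegalArgumentException e) {
            System.out.println("cannot assign null to a primitive field");
        }
    }
}
```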
Re: same cache cannot update twice in one transaction
1. Why do we need storage (whether native or 3rd party)? I need storage because of 1) downtime during production deployment, and 2) the possibility that the cached data is lost when all Ignite clusters go completely down.

2. So I need storage to hold the data once transactions are committed. Do I have to care about MVCC queries in the 3rd-party storage? No. It's very difficult (if not impossible) to achieve that given that people are using distributed NoSQL databases. Can we directly query Ignite native storage with MVCC, without the caching layer running on top of it?

3. Compared to other database storage, Ignite native storage is relatively new. Features to support ML and data mining are provided, but not as comprehensive as Apache Spark's... So questions arise when integrating Ignite native storage or 3rd-party storage with Spark: which storage is better?

4. I think Ignite should specialize in caching and MVCC at the cache (grid) layer. Providing its own native storage, ML, and data-mining features is great, but the community has limited resources to complete that mission. Leave storage, ML, data mining, blockchain... to the other specialized groups (experts).

5. Instead, Ignite might focus on integrating these things together to build a data ecosystem. There are many disconnected, separate systems for data storage, caching, ML, deep learning, and blockchain. Putting them together needs visionary leaders and highly skilled engineers. Why can't we make our lives easier?

6. Igniters, please focus on grid computing (caching), ACID-compliant transactions, and MVCC-enabled SQL at the cache layer, and provide services to integrate with the other storage systems, Spark (ML), and Hyperledger Fabric / Ethereum / Bitcoin (blockchain).
Re: pre-load data (Apache ignite native persistence store or Cassandra) into two partitioned cache tables
Can someone comment on the following questions from my previous post?

4. fact_purchase_line, invoice and invoice_line via factLineId and invoiceId do not work, please see the annotations below:

public class InvoiceLineKey {
    /** Primary key. */
    private long id;

    /** Foreign key to fact_purchase_line */
    @AffinityKeyMapped
    private long factLineId;

    /** Foreign key to invoice */
    @AffinityKeyMapped
    private long invoiceId;

5. I don't quite understand why the invoiceId affinity key mapping between invoice and invoice_line does not also require a factLineId mapping between fact_purchase_line and invoice_line. Is this because of the factId affinity between purchase_fact and purchase_fact_line, and between purchase_fact and invoice? So I just have the following affinity keys mapped:

purchase_fact -> factId -> purchase_fact_line
purchase_fact -> factId -> invoice
invoice -> invoiceId -> invoice_line

Interestingly, invoice_line joined to fact_purchase_line works fine (see the queries below). Can someone please shed some light on this?

// expected
SELECT count(*) FROM PARTITION.invoice inv, PARTITION.invoiceline il WHERE inv.id = il.invoiceid;

// why does this query work? note there is a join on li.id = il.factlineid, which is not an affinity-mapped key.
SELECT count(*) FROM PARTITION.factpurchaseline li, PARTITION.invoice inv, PARTITION.invoiceline il WHERE li.id = il.factlineid AND inv.id = il.invoiceid;

// why does this query work? note there is a join on li.id = il.factlineid, which is not an affinity-mapped key.
SELECT count(*) FROM PARTITION.factpurchaseline li, PARTITION.invoiceline il WHERE li.id = il.factlineid;
Re: same cache cannot update twice in one transaction
Hi Ilya,

It'd be better if this were mentioned in the Ignite docs. It seems very limiting if MVCC only supports Ignite native persistence. Yes, supporting MVCC in 3rd-party persistence is challenging. However, do we really need MVCC once the data from the cache (where MVCC is already enabled) is ready to be written to a 3rd-party persistence store? I think "eventual consistency" for writing cached data into a 3rd-party persistence layer seems sufficient when Ignite is used as the cache store and the cached data is persistent. Does Ignite have a plan to support MVCC in the cache layer and to write data from the cache store into a 3rd-party persistence store with a relaxed guarantee like eventual consistency? Can some gurus shed some light on this subject?
Re: same cache cannot update twice in one transaction
Hi Ilya,

Since I'm using Cassandra as the data store, it raises the following exception once MVCC is enabled:

class org.apache.ignite.IgniteCheckedException: Grid configuration parameter invalid: readThrough cannot be used with TRANSACTIONAL_SNAPSHOT atomicity mode
at org.apache.ignite.internal.processors.GridProcessorAdapter.assertParameter(GridProcessorAdapter.java:140)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.validate(GridCacheProcessor.java:527)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCacheContext(GridCacheProcessor.java:1543)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheContext(GridCacheProcessor.java:2324)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$null$fd62dedb$1(GridCacheProcessor.java:2163)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$5(GridCacheProcessor.java:2086)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.lambda$prepareStartCaches$937cbe24$1(GridCacheProcessor.java:2160)
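The validation in GridCacheProcessor.validate() means the two settings are mutually exclusive: a cache with a read-through 3rd-party store (such as the Cassandra CacheStore) cannot use the TRANSACTIONAL_SNAPSHOT (MVCC) atomicity mode. A configuration sketch of the two allowed directions, assuming a cache named "Invoice" (the cache name is illustrative; note also that TRANSACTIONAL_SNAPSHOT was deprecated in later Ignite releases):

```java
CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("Invoice");

// Option A: keep the Cassandra store with read-through -> no MVCC:
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
ccfg.setReadThrough(true);

// Option B: enable MVCC -> the read-through store has to go:
// ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL_SNAPSHOT);
// ccfg.setReadThrough(false);
```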
Re: pre-load data (Apache ignite native persistence store or Cassandra) into two partitioned cache tables
Thanks for the reply. I got most of it working thanks to the example (https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/starschema/CacheStarSchemaExample.java) provided by Ignite. Here is my SQL for the POC (Cassandra DDL scripts):

create table ignitetest.dim_store (id bigint primary key, name varchar, addr varchar, zip varchar);
create table ignitetest.dim_product (id bigint primary key, name varchar, price double, qty int);
create table ignitetest.fact_purchase (id bigint primary key, productId bigint, storeId bigint, purchasePrice double);
create table ignitetest.fact_purchase_line (id bigint, factId bigint, line int, linePrice double, lineQty int, primary key (id, factId));
create table ignitetest.invoice (id bigint, factId bigint, productId bigint, storeId bigint, purchasePrice double, primary key (id));
create table ignitetest.invoice_line (id bigint, invoiceId bigint, factLineId bigint, line int, price double, qty int, primary key (id, invoiceId, factLineId));

I have the following affinity keys mapped:

purchase_fact -> factId -> purchase_fact_line
purchase_fact -> factId -> invoice
invoice -> invoiceId -> invoice_line

1. fact_purchase and fact_purchase_line via factId affinity work as expected.
2. fact_purchase and invoice via factId affinity work as expected.
3. invoice and invoice_line via invoiceId affinity work as expected.

However,

4. fact_purchase_line, invoice and invoice_line via factLineId and invoiceId do not work, please see the annotations below:

public class InvoiceLineKey {
    /** Primary key. */
    private long id;

    /** Foreign key to fact_purchase_line */
    @AffinityKeyMapped
    private long factLineId;

    /** Foreign key to invoice */
    @AffinityKeyMapped
    private long invoiceId;

5. I don't quite understand why the invoiceId affinity key mapping between invoice and invoice_line does not also require a factLineId mapping between fact_purchase_line and invoice_line. Is this because of the factId affinity between purchase_fact and purchase_fact_line, and between purchase_fact and invoice? So I just have the following affinity keys mapped:

purchase_fact -> factId -> purchase_fact_line
purchase_fact -> factId -> invoice
invoice -> invoiceId -> invoice_line

Interestingly, invoice_line joined to fact_purchase_line works fine (see the queries below). Can someone please shed some light on this?

// expected
SELECT count(*) FROM PARTITION.invoice inv, PARTITION.invoiceline il WHERE inv.id = il.invoiceid;

// why does this query work? note there is a join on li.id = il.factlineid, which is not an affinity-mapped key.
SELECT count(*) FROM PARTITION.factpurchaseline li, PARTITION.invoice inv, PARTITION.invoiceline il WHERE li.id = il.factlineid AND inv.id = il.invoiceid;

// why does this query work? note there is a join on li.id = il.factlineid, which is not an affinity-mapped key.
SELECT count(*) FROM PARTITION.factpurchaseline li, PARTITION.invoiceline il WHERE li.id = il.factlineid;
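One way to see why two @AffinityKeyMapped fields cannot both take effect: an affinity function maps exactly one affinity key value to a partition, and the partition determines the owning node. The sketch below uses a hypothetical modulo hash in place of Ignite's real RendezvousAffinityFunction purely to illustrate the mechanics; it is not Ignite's implementation.

```java
public class AffinityDemo {
    static final int PARTS = 1024;

    // Simplified stand-in for Ignite's affinity function: a partition is
    // derived from ONE affinity key value (here a long id), and the
    // partition determines the owning node.
    static int partition(long affinityKey) {
        return Math.floorMod(Long.hashCode(affinityKey), PARTS);
    }

    public static void main(String[] args) {
        // An invoice with id = 42 and an invoice_line whose affinity field
        // invoiceId = 42 resolve to the same partition, so joining them is
        // collocated:
        System.out.println(partition(42L) == partition(42L)); // prints true

        // fact_purchase_line is placed by its own affinity key (factId),
        // which is unrelated to invoiceId. A second @AffinityKeyMapped
        // field on InvoiceLineKey cannot also be honored:
        // one key -> one affinity value -> one partition.
    }
}
```

Because invoice_line is placed by invoiceId only, a join on factlineid is not collocation-guaranteed; in a small POC (few nodes, or data that happens to land together via the chained factId affinity) such a join can still return correct counts, which may explain the "why does this query work?" observations — it is not guaranteed without enabling distributed joins.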
Re: same cache cannot update twice in one transaction
It seems that this enhancement has not been implemented yet for the following case:

trx.start() {
    1. UPDATE t1 SET col1 = 'a' WHERE col2 = 'c';   // SQL DML
    2. update the same table t1 via the cache API
} trx.end();

Can someone confirm?

Many thanks,
Xinmin
pre-load data (Apache ignite native persistence store or Cassandra) into two partitioned cache tables
Hi All,

Apache Ignite recommends collocating data with data, which is a very nice feature. This works well if you insert data through the APIs provided by Ignite (see the code below, copied from the Ignite docs). However, I don't know how to handle the following cases.

1. The data (company and person) exists in another database such as Cassandra. How do I pre-cache company and person from the pre-existing data? Do I have to use JDBC to load company and person data row by row and insert the objects into the caches, e.g.:

for (org : orgs) {
    find persons for org {
        personCache.put(affinityKey(p.id, org.id), p);
    }
    orgCache.put(org.id, org);
}

If I used the Ignite native persistence store, how does the Ignite cluster know that both company and person are collocated?

2. How do I know that the @AffinityKeyMapped-annotated key comes from the cache I need to map to? I don't see where the relationship between person.companyId and company.id is defined. If I have two separate methods that load the company and person caches separately, will the collocation of persons with companies still work?

preloadCompany();
preloadPerson(long companyId);

preloadAllPersons() {
    1. get all companies from the companyCache
    // how will personCache know this companyId maps to the same cluster node as in companyCache?
    for (c : companies) {
        preloadPerson(c.companyId);
    }
}

The following code is excerpted from https://apacheignite.readme.io/docs/affinity-collocation.

public class PersonKey {
    // Person ID used to identify a person.
    private String personId;

    // Company ID which will be used for affinity.
    @AffinityKeyMapped
    private String companyId;
    ...
}

// Instantiate person keys with the same company ID which is used as affinity key.
Object personKey1 = new PersonKey("myPersonId1", "myCompanyId");
Object personKey2 = new PersonKey("myPersonId2", "myCompanyId");

Person p1 = new Person(personKey1, ...);
Person p2 = new Person(personKey2, ...);

// Both the company and the person objects will be cached on the same node.
comCache.put("myCompanyId", new Company(...));
perCache.put(personKey1, p1);
perCache.put(personKey2, p2);
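To the question "how will personCache know this companyId maps to the same node as in companyCache": the node is derived from the affinity part of the key alone, independently of which cache the put goes to, so loading row by row over JDBC in two separate passes still collocates, as long as every PersonKey carries the companyId. The self-contained simulation below illustrates that mechanic; nodeOf is a simplified stand-in for Ignite's affinity function, and the per-node maps stand in for cache partitions — none of this is Ignite API.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PreloadCollocationDemo {
    static final int NODES = 2;

    // Stand-in for Ignite's affinity mapping: the target node is derived
    // from a single affinity value, the same way for ALL caches.
    static int nodeOf(Object affinityValue) {
        return Math.floorMod(affinityValue.hashCode(), NODES);
    }

    // personId identifies the person; companyId drives placement
    // (the role @AffinityKeyMapped plays in Ignite).
    static class PersonKey {
        final String personId;
        final String companyId;
        PersonKey(String personId, String companyId) {
            this.personId = personId;
            this.companyId = companyId;
        }
    }

    public static void main(String[] args) {
        // Simulated per-node storage for the two caches.
        List<Map<Object, Object>> companyCache = List.of(new HashMap<>(), new HashMap<>());
        List<Map<Object, Object>> personCache = List.of(new HashMap<>(), new HashMap<>());

        // Pass 1: preload companies (e.g. read from Cassandra over JDBC).
        companyCache.get(nodeOf("myCompanyId")).put("myCompanyId", "Company");

        // Pass 2: preload persons row by row, in a completely separate pass.
        // Placement uses the companyId part of the key, so each person lands
        // on the same node as its company automatically.
        PersonKey k1 = new PersonKey("myPersonId1", "myCompanyId");
        personCache.get(nodeOf(k1.companyId)).put(k1, "Person 1");

        // Same affinity value -> same node, so the join is collocated.
        System.out.println(nodeOf("myCompanyId") == nodeOf(k1.companyId)); // prints true
    }
}
```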