Cannot Find memory.nonheap.max
Hello everyone. When using Prometheus to monitor Ignite (ver. 2.13), I cannot find the metric 'sys_memory_nonheap_max', even though the official documentation suggests it should be available. How can I solve this problem? Thanks. Tiany Hu, 2024/2/20
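One way to check whether the metric exists on the node at all, before debugging the Prometheus side, is to expose the same metric registries over JMX. This is a minimal sketch, assuming the standard JmxMetricExporterSpi that ships with Ignite 2.13 (the registry/metric names to look for are taken from the question, not verified here):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.metric.jmx.JmxMetricExporterSpi;

public class MetricCheck {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Expose all metric registries (including "sys") via JMX; the same
        // registries feed the Prometheus/OpenCensus exporters, so if the
        // metric is missing here it will be missing in Prometheus too.
        cfg.setMetricExporterSpi(new JmxMetricExporterSpi());

        Ignition.start(cfg);
        // Then inspect group=sys, name=memory.nonheap.max in JConsole.
    }
}
```

If the metric shows up in JMX but not in Prometheus, the problem is likely in the exporter configuration or metric-name translation rather than in Ignite itself.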
Re:Re: Re: Parallel execution CreateTable
Thanks, Stephen. Based on your suggestions, we are working on the first and second issues. Regarding the third: our business uses many 'temporary tables', which have varying columns and are destroyed after use, so I'm afraid it is hard for us to avoid creating/destroying tables. Because the columns are dynamic, they cannot be written in XML either. I'm not sure whether createCaches() can meet our requirements. Thank you for your help!

At 2023-10-17 17:18:56, "Stephen Darlington" wrote: There's a lot going on there, and I don't have the time to fully analyse it, but I will note a few things: you have a lot of very poorly optimised queries (full table scans); I see a few "long JVM pauses", which suggests a poorly configured garbage collector; Ignite is not designed for large numbers of cache creations/destructions. Lots of simultaneous queries are fine, because those do not require a global lock. If you need to create a lot of caches at once, you can use XML, or a programming language and the createCaches() method.

On Tue, 17 Oct 2023 at 10:43, y wrote: Here is the cluster log (from one node) and the CREATE TABLE SQL (appendix). The cluster has 10 nodes, and we run a stress test with 50 threads, meaning there are 50 query or CREATE TABLE operations at the same time. Notice that during table creation the cluster node log contains many 'Thread - WAITING' entries. That's not normal, right?

At 2023-10-17 15:19:33, "Stephen Darlington" wrote: Can you share some more information about your cluster? There is no way that creating a cache should take so long.

On Tue, 17 Oct 2023 at 03:51, y wrote: Hello everyone! CREATE TABLE statements are executed synchronously and block other DDL statements, which causes serious performance issues in concurrent environments: it takes one minute to create one table. Is there any way to solve this problem, such as changing it to parallel execution? Thanks, Tianyu-Hu
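For reference, the batch-creation approach Stephen mentions can be sketched as below. This is a hypothetical helper (class, cache, and field names are illustrative, not from the thread) that builds dynamic-column tables as QueryEntity-based caches and creates them in a single call via createCaches(), so many tables cost one cluster-wide exchange instead of one per table:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

public class TempTableFactory {
    /** Builds one cache configuration per "temporary table"; columns are dynamic. */
    public static List<CacheConfiguration> buildConfigs(
        Map<String, LinkedHashMap<String, String>> tables) {
        List<CacheConfiguration> cfgs = new ArrayList<>();

        for (Map.Entry<String, LinkedHashMap<String, String>> e : tables.entrySet()) {
            // Key type and value type names are placeholders; the field map is
            // "column name -> Java type name", which is what QueryEntity expects.
            QueryEntity qe = new QueryEntity(String.class.getName(), e.getKey() + "Val")
                .setFields(e.getValue());

            cfgs.add(new CacheConfiguration<>(e.getKey())
                .setQueryEntities(Collections.singletonList(qe)));
        }
        return cfgs;
    }

    /** One partition-map exchange for the whole batch instead of one per table. */
    public static void createAll(Ignite ignite,
        Map<String, LinkedHashMap<String, String>> tables) {
        ignite.createCaches(buildConfigs(tables));
    }
}
```

Whether this fits depends on how the 'temporary tables' are queried; it only replaces the DDL path, not the destroy cost on cleanup.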
Re: Re: setLocal does not work for Calcite
We noticed that Calcite can deliver better performance, especially for data insertion. We decided to try Calcite in the production environment, until we found that it did not support setLocal. Glad to learn that Calcite will support setLocal in the future.

At 2023-06-30 16:05:09, "Stephen Darlington" wrote: I'm curious: why did you switch to Calcite if your deployment is in production? I would generally be cautious about putting beta software into production. Was there some critical feature that you needed?

On 30 Jun 2023, at 09:34, y wrote: Thanks for your suggestion. We are considering the feasibility of your proposed solution. For complex business production environments, it is difficult to distinguish whether a query requires setLocal or not. Anyway, thank you for your help.

At 2023-06-30 15:18:07, "Stephen Darlington" wrote: If this is an important feature for you, the obvious solution would be to use the H2 SQL engine (which is still the default, since the Calcite engine is still considered beta). As noted in the documentation, you can even keep Calcite as the default engine in your cluster and only route these queries to H2. https://ignite.apache.org/docs/latest/SQL/sql-calcite#query_engine-hint

On 30 Jun 2023, at 03:50, y wrote: Hello. I'm sorry it took so long to reply; I missed some messages. For example, I have multiple nodes performing the same computational tasks. The cache mode is PARTITIONED and the data is distributed by affinity key, so different nodes hold different data and each node only queries/calculates its own data. Without setLocal, a node will query data from other nodes, which is inefficient. That's why I need setLocal. What should I do without it? Yours, Hu Tiany, 2023/6/30

At 2023-06-05 19:21:13, "Alex Plehanov" wrote:
>Hello,
>
>The Calcite-based SQL engine currently doesn't analyze any properties of SqlFieldsQuery except "Sql", "Schema", "Args" and "QueryInitiatorId". Some of the remaining properties are useless for the Calcite-based engine (for example, "DistributedJoins", since all joins in the Calcite-based engine are distributed by default if needed). But perhaps others can be useful. If you are really sure that the "Local" property is necessary for the new SQL engine, feel free to create a ticket and describe the reason why we need it.
>
>On Mon, 5 Jun 2023 at 12:05, y wrote:
>>
>> Hello igniters,
>> As the title says, setLocal seems to have no effect with Calcite in 2.15. When I set setLocal = true and query data from one node, the result set is returned from all data nodes. This problem is not present in version 2.13, which does not use Calcite. Is this a bug? If so, when will it be fixed?
>>
>> SqlFieldsQuery fieldsQuery = new SqlFieldsQuery(query);
>> fieldsQuery.setLocal(true); // has no effect with Calcite
>> List<List<?>> rs = ignite.cache("bfaccounttitle2020").query(fieldsQuery).getAll();
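As a stopgap, the documentation Stephen links describes a per-query engine hint. A sketch (cache name from the thread; hint syntax per the linked docs page) that routes just this statement to H2, where setLocal is still honored:

```java
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LocalQueryExample {
    public static List<List<?>> queryLocal(Ignite ignite) {
        // The QUERY_ENGINE hint sends only this statement to the H2 engine,
        // even when Calcite is configured as the cluster default.
        SqlFieldsQuery q = new SqlFieldsQuery(
            "SELECT /*+ QUERY_ENGINE('h2') */ * FROM bfaccounttitle2020")
            .setLocal(true); // honored by H2: only this node's partitions are read

        return ignite.cache("bfaccounttitle2020").query(q).getAll();
    }
}
```

This keeps the Calcite rollout while carving out the queries that genuinely need local execution.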
Re:Re: Maximum concurrency of ComputeTask
Yes, it works. Thank you, Pavel!

At 2023-03-10 14:12:43, "Pavel Tupitsyn" wrote: Two things can limit the number of active compute tasks: IgniteConfiguration#publicThreadPoolSize (defaults to max(8, AVAILABLE_PROC_CNT)) and ThinClientConfiguration#maxActiveComputeTasksPerConnection (defaults to 0 => compute from thin clients is disabled). Please check those settings on your servers. https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html#setPublicThreadPoolSize-int- https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/ThinClientConfiguration.html#setMaxActiveComputeTasksPerConnection-int-

On Fri, Mar 10, 2023 at 5:17 AM y wrote: Hi everyone: When multiple users call the same ComputeTask through the thin client, I noticed that there is a limit on the maximum number of compute tasks running simultaneously on the server. On my server, at most 30 or 31 ComputeTasks can run at once; no matter how much I increase the number of users, only 30 ComputeTasks run on the server. How can I increase the maximum number of running tasks? Thanks, Hu ty
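Pavel's two settings can be sketched together. A minimal server-side configuration, assuming (as an illustrative choice, not from the thread) that 64 concurrent tasks are wanted both overall and per thin-client connection:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.ClientConnectorConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.ThinClientConfiguration;

public class ServerStart {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            // Upper bound on compute jobs executing concurrently on this node.
            .setPublicThreadPoolSize(64)
            .setClientConnectorConfiguration(new ClientConnectorConfiguration()
                .setThinClientConfiguration(new ThinClientConfiguration()
                    // 0 (the default) disables compute from thin clients entirely.
                    .setMaxActiveComputeTasksPerConnection(64)));

        Ignition.start(cfg);
    }
}
```

Note the effective ceiling is the smaller of the two: even with a large per-connection limit, jobs still queue once the public pool is saturated.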
Failed to deserialize object with given class loader
Hi everyone: The following error occurred when I started the cluster in the test environment. It seems that the error is related to persistence and the WAL. There are two things I can confirm: 1. Ignite uses native persistence. 2. The server hosting the cluster was not shut down correctly last time (sudden power loss). How can I start the cluster without losing the persisted data? Sincerely waiting for your answer. [15:40:19,353][SEVERE][main][IgniteKernal] Exception during start processors, node will be stopped and close connections class org.apache.ignite.IgniteCheckedException: Failed to deserialize object with given class loader: jdk.internal.loader.ClassLoaders$AppClassLoader@5ffd2b27 at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:132) at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:139) at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:80) at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointMarkersStorage.initialize(CheckpointMarkersStorage.java:202) at org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointManager.initializeStorage(CheckpointManager.java:315) at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.readMetastore(GridCacheDatabaseSharedManager.java:878) at org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.notifyMetaStorageSubscribersOnReadyForRead(GridCacheDatabaseSharedManager.java:3200) at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1116) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1721) at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1160) at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1054) at 
org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:940) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:839) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:709) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:678) at org.apache.ignite.Ignition.start(Ignition.java:353) at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:365) Caused by: java.io.EOFException at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2894) at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3389) at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:931) at java.base/java.io.ObjectInputStream.&lt;init&gt;(ObjectInputStream.java:374) at org.apache.ignite.marshaller.jdk.JdkMarshallerObjectInputStream.&lt;init&gt;(JdkMarshallerObjectInputStream.java:43) at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:122) ... 17 more Tianyue Hu 2022/09/14
Ignite Page-window function
Hi igniters: Does Ignite have a function similar to row_number() or rownum? I need to give each row a unique number. Which function can I use? Thanks! Tianyue Hu, 2022/5/1
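I'm not aware of row_number()/rownum support in Ignite's H2-based SQL engine (window functions are not available there), so a common workaround is to assign the number at load time. A hedged pure-Java sketch that combines a per-node index with a local counter so numbers stay unique across nodes; the 40-bit split is an arbitrary assumption (it caps each node at 2^40 rows), not anything Ignite prescribes:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper: cluster-unique, monotonically increasing row numbers
// without any cross-node coordination.
public class RowNumberer {
    private final long nodePrefix;
    private final AtomicLong counter = new AtomicLong();

    /** nodeIndex must be unique per node (e.g. derived from the node order). */
    public RowNumberer(int nodeIndex) {
        // Reserve the high bits for the node, the low 40 bits for the counter.
        this.nodePrefix = ((long) nodeIndex) << 40;
    }

    public long next() {
        return nodePrefix | counter.incrementAndGet();
    }
}
```

Usage: each node constructs one RowNumberer with its own index and calls next() per row while streaming data in; numbers are unique cluster-wide but not globally contiguous.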
Re: Re: BinaryObject Data Cannot Be Mapped To SQL Data
Hi Vladimir, Yes, my first name is Tianyue... In fact, I work for an ERP company. ERP, you know: "there are strict latency requirements for data processing and for the data-loading phase", in your words. I have deployed Ignite in a small, simple production environment. Now I need to deploy another one in a more complex environment that has xx TB of data. For some business reasons, the previous code is no longer appropriate. The current plan is 'ComputeTask + BinarySerializer', but there are some problems with it too, which I will describe later. Besides, I have tried to customize CacheStore and failed :( To be honest, since I graduated from university only three years ago and my technical level is limited, exploring Ignite often frustrates me. (Damn it, why is it wrong again! Why?) Tianyue Hu, 2022/4/20 PS: Zhenya sounds really like a Chinese name, maybe a Russian Chinese, just kidding.

On 2022-04-20 12:50:42, vtcher...@gmail.com wrote: Hi Tianyue, IMHO a fully compilable project is useful for a newbie, while short code snippets are not. You can start a single-server cluster and debug the code in your IDE, checking suggestions about how it works. A couple of years ago I found such a compilable project describing microservices, written by @DenisMagda; it helped a lot. So I hope my post will be useful. Vladimir. PS I know out there in China the last name is written first, while here in Russia it is written last, so the names are Vladimir or Zhenya. Hope I am correct and your name is Tianyue.

5:12, 20 April 2022, y wrote: Hi Stanilovsky, I don't know how to describe my problem to you, but I'm sure there is no error; the data was successfully inserted but not mapped to SQL data. Vladimir gave me a link: https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api. I decided to take a look at this link first. Anyway, thanks for your advice, and I hope I can help you in the future. Tianyue Hu, 2022/4/20

On 2022-04-19 14:50:51, "Zhenya Stanilovsky" wrote: Hi! BinaryObjectBuilder oldBuilder = igniteClient.binary().builder("com.inspur...PubPartitionKeys_1_7"); Do you call oldBuilder.build(); afterwards? If so, what does "data is not mapped to SQL" mean: is it an error in the log, on the client side, or something else? Thanks!

Hi, I have had the same experience without SQL, using the KV API only. My cluster consists of several data nodes and a self-written jar application that starts the client node. When started, the client node executes map-reduce tasks for data loading and processing. The workaround is as follows: 1. create a POJO on the client node; 2. convert it to a binary object; 3. on the data node, get the binary object over the network and get its builder (obj.toBuilder()); 4. set some fields, build, and put it in the cache. The builder in step 3 seems to be the same as the one on the client node. Hope that helps, Vladimir

13:06, 18 April 2022, y wrote: Hi, when using binary objects to insert data, I need to get an existing BinaryObject/BinaryObjectBuilder from the database, similar to the code below. [image attachment not preserved] If I create a BinaryObjectBuilder directly, the inserted binary data does not map to table data. The following code will not throw an error, but the data is not mapped to SQL. If there is no data in my table at first, how can I insert data? [image attachment not preserved]
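Vladimir's four-step workaround can be written out as a sketch. The compute plumbing that ships the binary object to the data node is elided; the cache name and field name are taken from elsewhere in these threads and are illustrative:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

public class BinaryLoadSketch {
    // Steps 1-2 (client node): build the POJO, then convert it to binary form.
    public static BinaryObject toBinary(Ignite ignite, Object pojo) {
        return ignite.binary().toBinary(pojo);
    }

    // Steps 3-4 (data node): reuse the received object's builder, mutate, put.
    public static void storeOnDataNode(Ignite ignite, Object key, BinaryObject received) {
        BinaryObjectBuilder b = received.toBuilder();
        b.setField("TBDATA_DX02", "updated"); // example field from the thread

        // keepBinary avoids deserializing to a class the server may not have.
        IgniteCache<Object, BinaryObject> cache =
            ignite.cache("TBDATA_GLFY_1").withKeepBinary();
        cache.put(key, b.build());
    }
}
```

The key point of the recipe is that toBuilder() carries the original type metadata with it, so the binary type the SQL layer sees matches the one registered at load time.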
Re: Re: Re: BinaryObject Data Cannot Be Mapped To SQL Data
Hello Vladimir, Thanks for your reply. I will carefully study the link you sent to see how it differs from my code. If there is still a problem, I will ask for help again (hope not). By the way, I am not Russian; I am a young developer from China and have studied Ignite for more than a year :) Tianyue Hu, 2022/4/20

On 2022-04-19 17:12:39, "Vladimir Tchernyi" wrote: Hello Huty, please read my post [1]. The approach in that paper has worked successfully in production for more than one year and seems to be correct. [1] https://www.gridgain.com/resources/blog/how-fast-load-large-datasets-apache-ignite-using-key-value-api Vladimir, telegram @vtchernyi. PS hope I named you correctly; the name is not widespread here in Russia.

On Tue, 19 Apr 2022 at 09:46, y wrote: Hi Vladimir, Thank you for your answer. Actually, most of my approach is the same as yours except for two points: 1. I didn't use ComputeTask; the data is sent to the server node through the thin client. 2. I didn't use a standard POJO. The key type is the class below and the value type is an empty class; all columns are dynamically specified through BinaryObjectBuilder.

public class PubPartionKeys_1_7 {
    @AffinityKeyMapped
    private String TBDATA_DX01;
    private String TBDATA_DX02;
    private String TBDATA_DX03;
    private String TBDATA_DX04;
    private String TBDATA_DX05;
    private String TBDATA_DX06;
    private String TBDATA_DX07;

    public PubPartionKeys_1_7() {
    }

    // get/set methods
    // .
}

I would appreciate it very much if you could attach your code! :) Huty, 2022/4/19

At 2022-04-19 12:40:20, vtcher...@gmail.com wrote: Hi, I have had the same experience without SQL, using the KV API only. My cluster consists of several data nodes and a self-written jar application that starts the client node. When started, the client node executes map-reduce tasks for data loading and processing. The workaround is as follows: 1. create a POJO on the client node; 2. convert it to a binary object; 3. on the data node, get the binary object over the network and get its builder (obj.toBuilder()); 4. set some fields, build, and put it in the cache. The builder in step 3 seems to be the same as the one on the client node. Hope that helps, Vladimir

13:06, 18 April 2022, y wrote: Hi, when using binary objects to insert data, I need to get an existing BinaryObject/BinaryObjectBuilder from the database, similar to the code below. [image attachment not preserved] If I create a BinaryObjectBuilder directly, the inserted binary data does not map to table data. The following code will not throw an error, but the data is not mapped to SQL. If there is no data in my table at first, how can I insert data? [image attachment not preserved]
BinaryObject Data Cannot Be Mapped To SQL Data
Hi, when using binary objects to insert data, I need to get an existing BinaryObject/BinaryObjectBuilder from the database, similar to the code below. If I create a BinaryObjectBuilder directly, the inserted binary data does not map to table data. The code does not throw an error, but the data is not mapped to SQL. If there is no data in my table at first, how can I insert data?
Re:Re: DML Statement error
Here is my insert code.

BinaryObjectBuilder builder = ignite.binary().builder("com.inspur.edp.caf.db.dbaccess.DynamicResultRow");
Map datasBinary = new HashMap<>();
IgniteDataStreamer stmr = ignite.dataStreamer("TBDATA_GLFY_1");
..
..
for (int i = 0; i < meta.length; i++) {
    builder.setField(meta[i], value[i]);
}
PubPartionKeys_1_7 pubPartionKeys_1_7 = new PubPartionKeys_1_7(
    "cf2d3d93-df0f-dd8d-8038-bac8b54b2042",
    "01bc29a3-e82a-a1f2-1afe-ba732910ed72",
    "202100102",
    "6a4b352a-f5c7-56c6-dfa8-ed501a87d982",
    "DA65A696-8C7B-4899-B9D6-85A3F07CD11A",
    "aaaeb252-b562-2242-10de-4d6d419f6d56",
    "516e4ea5-cb28-37e1-b3ce-439d640370b6");
datasBinary.put(pubPartionKeys_1_7, builder.build());
stmr.addData(datasBinary);
stmr.flush();
stmr.close();

On 2022-04-05 16:56:25, "Vasily A. Laktionov" wrote: Hi, it seems we have solved that problem. We need to see your code where you insert data both via BinaryObject and via SQL.

On Sat, 2 Apr 2022 at 12:51, y wrote: Hi igniters: First, I use custom primary key classes and BinaryObject to insert several pieces of data. Then, when I use an INSERT statement to insert data, the following error occurs: Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to update keys on primary node. at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.UpdateErrors.addFailedKeys(UpdateErrors.java:124) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKeys(GridNearAtomicUpdateResponse.java:340) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1918) ... 37 more Suppressed: class org.apache.ignite.binary.BinaryObjectException: Failed to get field because type ID of passed object differs from type ID this BinaryField belongs to [expected=[typeId=-181945856, typeName=com.inspur.edp.qdp.config.api.ketype.PubPartionKeys_1_7], actual=[typeId=1324872999, typeName=com.inspur.edp.qdp.config.keytype.PubPartionKeys_1_7], fieldId=-435253668, fieldName=TBDATA_DX01, fieldType=null] at org.apache.ignite.internal.binary.BinaryFieldImpl.fieldOrder(BinaryFieldImpl.java:302) at org.apache.ignite.internal.binary.BinaryFieldImpl.value(BinaryFieldImpl.java:110) at org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.fieldValue(QueryBinaryProperty.java:223) at org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.value(QueryBinaryProperty.java:120) at org.apache.ignite.internal.processors.query.h2.opt.GridH2RowDescriptor.columnValue(GridH2RowDescriptor.java:235) at org.apache.ignite.internal.processors.query.h2.index.QueryIndexRowHandler.getKey(QueryIndexRowHandler.java:110) at org.apache.ignite.internal.processors.query.h2.index.QueryIndexRowHandler.indexKey(QueryIndexRowHandler.java:73) at org.apache.ignite.internal.cache.query.index.sorted.IndexRowImpl.key(IndexRowImpl.java:68) at org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexImpl.onUpdate(InlineIndexImpl.java:248) at org.apache.ignite.internal.cache.query.index.IndexProcessor.updateIndex(IndexProcessor.java:452) at org.apache.ignite.internal.cache.query.index.IndexProcessor.updateIndexes(IndexProcessor.java:295) at org.apache.ignite.internal.cache.query.index.IndexProcessor.store(IndexProcessor.java:142) at org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:2549) at org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:422) at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:2666) at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1742) at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1717) at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:441) at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2327) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2553) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:2016) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1833) Does anyone know why? Thanks! Hty, 2022/04/02

--
Best regards, Vasily A. Laktionov, vasilylaktio...@gmail.com
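The root cause is visible in the error itself: the two type names, `...config.api.ketype.PubPartionKeys_1_7` and `...config.keytype.PubPartionKeys_1_7`, hash to different binary type IDs, so the table's key type and the inserted keys are two distinct binary types. To my understanding, Ignite's default BinaryBasicIdMapper derives the type ID from a hash of the lower-cased fully qualified class name; the sketch below models that with String.hashCode over the lower-cased name, which is my assumption of the mapper's behavior, not Ignite's exact code:

```java
public class TypeIdDemo {
    // Assumed model of the default id mapper: hash of the lower-case FQN.
    public static int typeId(String clsName) {
        return clsName.toLowerCase().hashCode();
    }

    public static void main(String[] args) {
        int expected = typeId("com.inspur.edp.qdp.config.api.ketype.PubPartionKeys_1_7");
        int actual = typeId("com.inspur.edp.qdp.config.keytype.PubPartionKeys_1_7");

        // Different packages -> different type IDs, so the BinaryField built
        // for one type cannot read fields of the other, matching the error.
        System.out.println(expected != actual);
    }
}
```

In other words, the fix is to make the key class used for KV inserts and the key type declared for the SQL table (CREATE TABLE / QueryEntity) refer to exactly the same fully qualified name.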
DML Statement error
Hi Igniters: First, I use custom primary key classes and BinaryObject to insert several pieces of data. And then, when I use the INSERT statement to insert data, the following error occurs: Suppressed: class org.apache.ignite.IgniteCheckedException: Failed to update keys on primary node. at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.UpdateErrors.addFailedKeys(UpdateErrors.java:124) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKeys(GridNearAtomicUpdateResponse.java:340) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1918) ... 37 more Suppressed: class org.apache.ignite.binary.BinaryObjectException: Failed to get field because type ID of passed object differs from type ID this BinaryField belongs to [expected=[typeId=-181945856, typeName=com.inspur.edp.qdp.config.api.ketype.PubPartionKeys_1_7], actual=[typeId=1324872999, typeName=com.inspur.edp.qdp.config.keytype.PubPartionKeys_1_7], fieldId=-435253668, fieldName=TBDATA_DX01, fieldType=null] at org.apache.ignite.internal.binary.BinaryFieldImpl.fieldOrder(BinaryFieldImpl.java:302) at org.apache.ignite.internal.binary.BinaryFieldImpl.value(BinaryFieldImpl.java:110) at org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.fieldValue(QueryBinaryProperty.java:223) at org.apache.ignite.internal.processors.query.property.QueryBinaryProperty.value(QueryBinaryProperty.java:120) at org.apache.ignite.internal.processors.query.h2.opt.GridH2RowDescriptor.columnValue(GridH2RowDescriptor.java:235) at org.apache.ignite.internal.processors.query.h2.index.QueryIndexRowHandler.getKey(QueryIndexRowHandler.java:110) at org.apache.ignite.internal.processors.query.h2.index.QueryIndexRowHandler.indexKey(QueryIndexRowHandler.java:73) at org.apache.ignite.internal.cache.query.index.sorted.IndexRowImpl.key(IndexRowImpl.java:68) at 
org.apache.ignite.internal.cache.query.index.sorted.inline.InlineIndexImpl.onUpdate(InlineIndexImpl.java:248) at org.apache.ignite.internal.cache.query.index.IndexProcessor.updateIndex(IndexProcessor.java:452) at org.apache.ignite.internal.cache.query.index.IndexProcessor.updateIndexes(IndexProcessor.java:295) at org.apache.ignite.internal.cache.query.index.IndexProcessor.store(IndexProcessor.java:142) at org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:2549) at org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:422) at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:2666) at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke0(IgniteCacheOffheapManagerImpl.java:1742) at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1717) at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:441) at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2327) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2553) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:2016) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1833) Does anyone know why? Thanks! Hty, 2022/04/02
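Note that the two type names in the exception differ only in their package (`com.inspur.edp.qdp.config.api.ketype` vs `com.inspur.edp.qdp.config.keytype`), so the key type registered for the table and the class used to build the inserted object are not the same class. Assuming Ignite's default ID mapper derives the type ID from the lower-cased full type name (the `BinaryBasicIdMapper` scheme), different class names can never share a type ID; a minimal self-contained sketch of the mismatch:

```java
// Sketch, under the assumption that the default mapper hashes the lower-cased
// full type name (same scheme as String.hashCode over lower-cased characters).
// The two names from the exception differ in package, so they map to
// different type IDs, which is exactly what the BinaryObjectException reports.
public class TypeIdMismatch {
    // Hypothetical re-implementation of the assumed type-ID derivation.
    static int typeId(String typeName) {
        return typeName.toLowerCase().hashCode();
    }

    public static void main(String[] args) {
        String expected = "com.inspur.edp.qdp.config.api.ketype.PubPartionKeys_1_7";
        String actual = "com.inspur.edp.qdp.config.keytype.PubPartionKeys_1_7";
        System.out.println(typeId(expected) + " vs " + typeId(actual));
    }
}
```

Under that assumption, the fix is to make sure the key class named in the table/QueryEntity configuration and the class (or type name) passed when building the BinaryObject are one and the same.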
NullPointerException when using BinaryConfiguration
Hi Igniters: When I use the Ignite (ver. 2.11.0) thin client to connect to the server with the following code: clientConfiguration.setBinaryConfiguration(getBinaryConfiguration()), it throws a NullPointerException. Can thin clients not use the setBinaryConfiguration function? java.lang.NullPointerException at org.apache.ignite.internal.client.thin.TcpIgniteClient$ClientBinaryMetadataHandler.addMeta(TcpIgniteClient.java:380) at org.apache.ignite.internal.binary.BinaryContext.registerUserType(BinaryContext.java:1164) at org.apache.ignite.internal.binary.BinaryContext.configure(BinaryContext.java:414) at org.apache.ignite.internal.binary.BinaryContext.configure(BinaryContext.java:348) at org.apache.ignite.internal.client.thin.ClientBinaryMarshaller.createImpl(ClientBinaryMarshaller.java:117) at org.apache.ignite.internal.client.thin.ClientBinaryMarshaller.setBinaryConfiguration(ClientBinaryMarshaller.java:89) at org.apache.ignite.internal.client.thin.TcpIgniteClient.(TcpIgniteClient.java:115) at org.apache.ignite.internal.client.thin.TcpIgniteClient.(TcpIgniteClient.java:101) at org.apache.ignite.internal.client.thin.TcpIgniteClient.start(TcpIgniteClient.java:327) at org.apache.ignite.Ignition.startClient(Ignition.java:612) HuTy, 2022/02/28
Re: Ignite Cluster Config Issue
Thanks. I have run into the same issue, and this helped me understand it. On 2021/11/26 12:40, Gurmehar Kalra wrote: Issue got resolved after adding the below lines of code. ignite.cluster().baselineAutoAdjustEnabled(true); ignite.cluster().baselineAutoAdjustTimeout(1);
Ignite BinaryHeapOutputStream OutOfMemoryError
Hi Igniters: When I use the thin client with binary objects (BinaryObjectBuilder) to insert data, the client fails with the following error. It seems the memory available to BinaryHeapOutputStream is not enough. How can I increase the memory when inserting binary data? Caused by: java.lang.OutOfMemoryError: Java heap space at org.apache.ignite.internal.binary.streams.BinaryHeapOutputStream.arrayCopy(BinaryHeapOutputStream.java:77) at org.apache.ignite.internal.client.thin.TcpClientChannel.send(TcpClientChannel.java:269) at org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:216) at org.apache.ignite.internal.client.thin.ReliableChannel.lambda$service$1(ReliableChannel.java:166) at org.apache.ignite.internal.client.thin.ReliableChannel$$Lambda$1841/187663740.apply(Unknown Source) at org.apache.ignite.internal.client.thin.ReliableChannel.applyOnDefaultChannel(ReliableChannel.java:744) at org.apache.ignite.internal.client.thin.ReliableChannel.applyOnDefaultChannel(ReliableChannel.java:712) at org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:165) at org.apache.ignite.internal.client.thin.ReliableChannel.request(ReliableChannel.java:287) at org.apache.ignite.internal.client.thin.TcpClientCache.putAll(TcpClientCache.java:316) Tianyue Hu
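The stack trace shows the failure while serializing a putAll request: BinaryHeapOutputStream grows a plain on-heap byte array, so a single huge putAll needs the whole serialized batch in client heap at once. Two usual remedies (suggestions, not from the thread): raise the client JVM heap with -Xmx, and split the map into smaller batches so each request stays bounded. A minimal batching sketch; the `cache.putAll(batch)` call in the comment marks where the real thin-client call would go:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchedPutAll {
    // Split a large map into insertion-ordered batches of at most batchSize
    // entries, so each putAll request serializes a bounded amount of data.
    static <K, V> List<Map<K, V>> batches(Map<K, V> data, int batchSize) {
        List<Map<K, V>> out = new ArrayList<>();
        Map<K, V> cur = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : data.entrySet()) {
            cur.put(e.getKey(), e.getValue());
            if (cur.size() == batchSize) {
                out.add(cur);
                cur = new LinkedHashMap<>();
            }
        }
        if (!cur.isEmpty())
            out.add(cur);
        return out;
    }

    // Usage sketch (batch size is illustrative):
    //   for (Map<K, V> batch : batches(all, 10_000))
    //       cache.putAll(batch);
}
```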
Re:RE: Failed to find security context for subject with given ID
Ignite version: 2.11.0. Here is some of the relevant configuration. We use binary mode to insert data. At 2021-11-19 15:07:01, "Mikhail Petrov" wrote: Pavel, at first glance these are not related issues. Tianyue Hu, could you please specify the version of Ignite you are using, the server nodes configuration, and which Ignite mechanism you are using to insert data? -- Mikhail On 2021/11/19 06:37:47 y wrote: > Hello Igniters: > > > I start multiple nodes on one server. When I did data insertion, I got the > following error: Failed to find security context for subject with given ID. > And then the node stopped. Would you please help? > > Thanks! > Tianyue Hu
Failed to find security context
Hello Igniters: When I started 20 nodes on one machine, I got an exception like this: "Failed to find security context for subject with given ID : a7e071b3-de48-3ec1-9d24-de6cbe6c7bf1". After that, it looks like something is blocked. Has anyone encountered this problem? PS: Ignite version: 2.11.0, OS: CentOS. Huty 2021/11/12 java.lang.IllegalStateException: Failed to find security context for subject with given ID : a7e071b3-de48-3ec1-9d24-de6cbe6c7bf1 at org.apache.ignite.internal.processors.security.IgniteSecurityProcessor.withContext(IgniteSecurityProcessor.java:153) at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907) at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1529) at org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:242) at org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1422) at org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55) at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:569) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) at java.lang.Thread.run(Thread.java:748) [17:41:50,340][SEVERE][sys-stripe-5-#6][] Critical system error detected. 
Will be handled accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=java.lang.IllegalStateException: Failed to find security context for subject with given ID : a7e071b3-de48-3ec1-9d24-de6cbe6c7bf1]] java.lang.IllegalStateException: Failed to find security context for subject with given ID : a7e071b3-de48-3ec1-9d24-de6cbe6c7bf1 at org.apache.ignite.internal.processors.security.IgniteSecurityProcessor.withContext(IgniteSecurityProcessor.java:153) at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1907) at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1529) at org.apache.ignite.internal.managers.communication.GridIoManager.access$5300(GridIoManager.java:242) at org.apache.ignite.internal.managers.communication.GridIoManager$9.execute(GridIoManager.java:1422) at org.apache.ignite.internal.managers.communication.TraceRunnable.run(TraceRunnable.java:55) at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:569) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) at java.lang.Thread.run(Thread.java:748) [17:41:50,342][SEVERE][sys-stripe-5-#6][FailureProcessor] No deadlocked threads detected. 
[17:41:50,422][SEVERE][sys-stripe-5-#6][FailureProcessor] Thread dump at 2021/11/12 17:41:50 GMT+08:00 Thread [name="sys-#198", id=229, state=TIMED_WAITING, blockCnt=0, waitCnt=1] Lock [object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@478a8ddc, ownerName=null, ownerId=-1] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Thread [name="mgmt-#197", id=228, state=TIMED_WAITING, blockCnt=0, waitCnt=1] Lock [object=java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@71439f24, ownerName=null, ownerId=-1] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1073) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) .. . . [17:42:00,597][SEVERE][tcp-disco-msg-worker-[f18796d4 0:0:0:0:0:0:0:1%lo:47503]-#3-#77][G] Blocked system-critical thread has been detected.
IGNITE_SQL_MERGE_TABLE_MAX_SIZE
Hi Igniters: When I execute the SQL statement directly, the following error occurs. How can I increase 'IGNITE_SQL_MERGE_TABLE_MAX_SIZE'? Failed to run reduce query locally. General error: "class org.apache.ignite.IgniteException: Fetched result set was too large. IGNITE_SQL_MERGE_TABLE_MAX_SIZE(1) should be increased."; SQL statement: SELECT TBDATAA__Z0__TBDATA_DANM TBDATA_DANM, TBDATAA__Z0__TBDATA_DX01 TBDATA_DX01, TBDATAA__Z0__TBDATA_DX02 TBDATA_DX02, TBDATAA__Z0__TBDATA_DX03 TBDATA_DX03, TBDATAA__Z0__TBDATA_DX04 TBDATA_DX04, Yours, Hu 2021/09/09
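IGNITE_SQL_MERGE_TABLE_MAX_SIZE is a JVM system property read by the node that runs the reduce step, so it is usually raised via a JVM option when starting the node. A hedged sketch (the value 20000 and the config file name are illustrative only; ignite.sh picks up JVM_OPTS from the environment):

```shell
# Assumption: raise the merge-table limit on the node(s) executing the reduce
# step via a JVM system property (value chosen for illustration only).
export JVM_OPTS="$JVM_OPTS -DIGNITE_SQL_MERGE_TABLE_MAX_SIZE=20000"
bin/ignite.sh config/my-config.xml
```

Alternatively, running the query with lazy result-set streaming (`SqlFieldsQuery.setLazy(true)`, or the `lazy=true` JDBC connection flag) often avoids the problem entirely, because the reducer then never materializes the whole result set at once.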
Wrong value has been set when using binary
Hi: My Ignite version is 2.10.0. When I use binary objects to insert data, the following error often occurs: Wrong value has been set [typeName=.., fieldName=tbdata_danm, fieldType=String, assignedValueType=Object]. The assigned value is not really an Object; it is actually null. I don't know why this mistake occurs so often. Yours, Hu 2020/08/31
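One plausible cause (an assumption, since the message truncates the type name): assigning null through the two-argument builder.setField(name, value) leaves Ignite nothing to infer the field type from, so the field is recorded as Object, which clashes with the String type already stored in the binary schema. The BinaryObjectBuilder API has a three-argument overload, setField(name, value, Class), that pins the type explicitly for exactly this case. The inference problem itself can be shown without Ignite:

```java
// Sketch of the type-inference rule assumed above: a non-null value carries
// its runtime class, while null degrades to Object and can then contradict
// the String type the binary schema already stores for the field.
public class NullFieldInference {
    static Class<?> inferredType(Object value) {
        return value == null ? Object.class : value.getClass();
    }

    public static void main(String[] args) {
        System.out.println(inferredType("abc")); // class java.lang.String
        System.out.println(inferredType(null));  // class java.lang.Object
    }
}
```

Under that assumption, `builder.setField("tbdata_danm", null, String.class)` keeps the schema consistent even when the value happens to be null.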
Re: Re: NullPointerException When Using ThinClient (Ignite 2.9.0)
Thank you, Ivan! Thank you very much for your answers. I just got an email from the developers saying that they are about to release version 2.11.0. Can this problem be solved with version 2.10.0, or should I wait for the new version to come out? At 2021-07-26 15:57:14, ivan.fedoren...@tdameritrade.com wrote: >Here is the bug report that I’ve created a couple of weeks ago: >https://issues.apache.org/jira/browse/IGNITE-15138 > >They are trying to register your binary configuration in cluster before >initializing the TCP channel. > >Best regards, >Ivan Fedorenkov > >From: y >Reply-To: "user@ignite.apache.org" >Date: Monday, July 26, 2021 at 10:51 AM >To: "user@ignite.apache.org" >Subject: NullPointerException With Using ThinClient (ignite 2.9.0) > >Hi Igniters: >When I use the binary object marshaller with the thin client, I use the following >configuration: > java: > [inline image: Java configuration] > >xml: >[inline image: XML configuration] > >The following error is reported when the program runs: >Exception in thread "main" java.lang.NullPointerException >at >org.apache.ignite.internal.client.thin.TcpIgniteClient$ClientBinaryMetadataHandler.addMeta(TcpIgniteClient.java:311) >at >org.apache.ignite.internal.binary.BinaryContext.registerUserType(BinaryContext.java:1100) >at >org.apache.ignite.internal.binary.BinaryContext.configure(BinaryContext.java:414) >at >org.apache.ignite.internal.binary.BinaryContext.configure(BinaryContext.java:348) >at >org.apache.ignite.internal.client.thin.ClientBinaryMarshaller.createImpl(ClientBinaryMarshaller.java:117) >at >org.apache.ignite.internal.client.thin.ClientBinaryMarshaller.setBinaryConfiguration(ClientBinaryMarshaller.java:89) >at >org.apache.ignite.internal.client.thin.TcpIgniteClient.(TcpIgniteClient.java:110) >at >org.apache.ignite.internal.client.thin.TcpIgniteClient.(TcpIgniteClient.java:96) >at 
>org.apache.ignite.internal.client.thin.TcpIgniteClient.start(TcpIgniteClient.java:258) >at org.apache.ignite.Ignition.startClient(Ignition.java:612) >at hutianyue.TestBinarySerializer.main(TestBinarySerializer.java:61) > > >Does anyone know how to solve this? Do I need to use a newer version of Ignite? > >Thanks!! >Ti-Yong
"Failed to update keys" error
Hi Igniters: I have a very strange problem. I have two code environments: one is the simplest possible program (a "Hello World" program), which can execute SQL statements correctly. The other is the formal system, which uses Spring Boot and a custom class loader named CAFClassLoader. DML statements cannot be executed on the formal system, but DQL statements can ('SELECT ...' is OK). Part of the error message is as follows. What confuses me is that the code in the two environments is the same! I really can't figure out why. Does anyone know? Attached are the code and the complete error information. Error message Caused by: java.sql.SQLException: Failed to update keys (retry update if possible).: [2] at org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.processPage(DmlBatchSender.java:248) at org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.sendBatch(DmlBatchSender.java:196) at org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.add(DmlBatchSender.java:124) at org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.doUpdate(DmlUtils.java:255) ... 
143 more Caused by: class org.apache.ignite.internal.processors.query.IgniteSQLException: Failed to update keys (retry update if possible).: [2] at org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.doUpdate(DmlUtils.java:280) at org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.processSelectResult(DmlUtils.java:171) at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateNonTransactional(IgniteH2Indexing.java:2899) at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdate(IgniteH2Indexing.java:2753) at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeUpdateDistributed(IgniteH2Indexing.java:2683) at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.executeDml(IgniteH2Indexing.java:1186) at org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.querySqlFields(IgniteH2Indexing.java:1112) at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2779) at org.apache.ignite.internal.processors.query.GridQueryProcessor$4.applyx(GridQueryProcessor.java:2775) at org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36) at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuery(GridQueryProcessor.java:3338) at org.apache.ignite.internal.processors.query.GridQueryProcessor.lambda$querySqlFields$2(GridQueryProcessor.java:2795) at org.apache.ignite.internal.processors.query.GridQueryProcessor.executeQuerySafe(GridQueryProcessor.java:2833) at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2769) at org.apache.ignite.internal.processors.query.GridQueryProcessor.querySqlFields(GridQueryProcessor.java:2696) at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:819) ... 
128 more Caused by: java.sql.SQLException: Failed to update keys (retry update if possible).: [2] at org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.processPage(DmlBatchSender.java:248) at org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.sendBatch(DmlBatchSender.java:196) at org.apache.ignite.internal.processors.query.h2.dml.DmlBatchSender.add(DmlBatchSender.java:124) at org.apache.ignite.internal.processors.query.h2.dml.DmlUtils.doUpdate(DmlUtils.java:255) ... 143 more Caused by: class org.apache.ignite.internal.processors.cache.CachePartialUpdateCheckedException: Failed to update keys (retry update if possible).: [2] at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onPrimaryError(GridNearAtomicAbstractUpdateFuture.java:397) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.onPrimaryResponse(GridNearAtomicUpdateFuture.java:413) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.onSendError(GridNearAtomicAbstractUpdateFuture.java:489) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:326) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:814) at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:666) at
ignite_backup_restore_query
I was trying to back up and restore the Ignite persistence and WAL data. Are there any documented steps that can be followed to restore the data to the pods? Thanks and regards, Prerana
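The thread records no answer, but since Ignite 2.9 there is a built-in snapshot facility that covers this use case: take a cluster-wide snapshot of the persistent caches, move it to durable storage, and restore it later. A hedged sketch (the snapshot name is illustrative; on Kubernetes the commands would run inside a server pod, and the restore subcommand requires a sufficiently recent Ignite, roughly 2.11+):

```shell
# Assumption: Ignite >= 2.9 with native persistence enabled.
# 1. Take a consistent cluster-wide snapshot (name is illustrative).
bin/control.sh --snapshot create backup_2024_02_20

# 2. Copy the snapshot directory (work/snapshots/backup_2024_02_20 by default)
#    off the pod to durable storage, e.g. with kubectl cp.

# 3. Restore the snapshot on the cluster (restore via control.sh is available
#    in newer releases; otherwise nodes can be started from the snapshot files).
bin/control.sh --snapshot restore backup_2024_02_20 --start
```

As I understand it, WAL-based point-in-time recovery is not part of the open-source snapshot feature, so for plain Apache Ignite the snapshot is the practical unit of backup and restore.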