[jira] [Assigned] (IGNITE-19561) Ignite thin client continuous query listener cannot listen to all events
[ https://issues.apache.org/jira/browse/IGNITE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pavel Tupitsyn reassigned IGNITE-19561:
---------------------------------------

    Assignee: Pavel Tupitsyn

> Ignite thin client continuous query listener cannot listen to all events
> ------------------------------------------------------------------------
>
>                 Key: IGNITE-19561
>                 URL: https://issues.apache.org/jira/browse/IGNITE-19561
>             Project: Ignite
>          Issue Type: Bug
>          Components: cache, clients
>    Affects Versions: 2.15
>         Environment: JDK 1.8
>                      Windows 10
>            Reporter: Mengyu Jing
>            Assignee: Pavel Tupitsyn
>            Priority: Major
>         Attachments: result1.log, result2.log
>
> *Problem scenario:*
> Start a one-node Ignite server, start one thin client that creates a continuous query listener, and then use 50 threads to add 500 entries to the cache concurrently.
>
> *Problem phenomenon:*
> The keys printed by the listener show that the number of events received varies from run to run: 496, 498, 499 or 500...
>
> *Test Code:*
> {code:java}
> import org.apache.ignite.Ignite;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
> import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
>
> import java.util.ArrayList;
> import java.util.List;
>
> public class StartServer {
>     public static void main(String[] args) {
>         IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
>         TcpDiscoverySpi spi = new TcpDiscoverySpi();
>         List<String> addrList = new ArrayList<>();
>         addrList.add("127.0.0.1:47500");
>         TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
>         ipFinder.setAddresses(addrList);
>         spi.setIpFinder(ipFinder);
>         igniteConfiguration.setDiscoverySpi(spi);
>         Ignite ignite = Ignition.start(igniteConfiguration);
>     }
> }
> {code}
> {code:java}
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.query.ContinuousQuery;
> import org.apache.ignite.client.ClientCache;
> import org.apache.ignite.client.IgniteClient;
> import org.apache.ignite.configuration.ClientConfiguration;
>
> import javax.cache.event.CacheEntryEvent;
> import javax.cache.event.CacheEntryListenerException;
> import javax.cache.event.CacheEntryUpdatedListener;
> import java.util.Iterator;
>
> public class StartThinClient {
>     public static void main(String[] args) throws InterruptedException {
>         String addr = "127.0.0.1:10800";
>         int threadNmu = 50;
>         ClientConfiguration clientConfiguration = new ClientConfiguration();
>         clientConfiguration.setAddresses(addr);
>         IgniteClient client1 = Ignition.startClient(clientConfiguration);
>         ClientCache<String, Object> cache1 = client1.getOrCreateCache("test");
>         ContinuousQuery<String, Object> query = new ContinuousQuery<>();
>         query.setLocalListener(new CacheEntryUpdatedListener<String, Object>() {
>             @Override
>             public void onUpdated(Iterable<CacheEntryEvent<? extends String, ? extends Object>> cacheEntryEvents) throws CacheEntryListenerException {
>                 Iterator<CacheEntryEvent<? extends String, ? extends Object>> iterator = cacheEntryEvents.iterator();
>                 while (iterator.hasNext()) {
>                     CacheEntryEvent<? extends String, ? extends Object> next = iterator.next();
>                     System.out.println("" + next.getKey());
>                 }
>             }
>         });
>         cache1.query(query);
>         IgniteClient client2 = Ignition.startClient(clientConfiguration);
>         ClientCache<String, Object> cache2 = client2.cache("test");
>         Thread[] threads = new Thread[threadNmu];
>         for (int i = 0; i < threads.length; ++i) {
>             threads[i] = new Thread(new OperationInsert(cache2, i, 500, threadNmu));
>         }
>         for (int i = 0; i < threads.length; ++i) {
>             threads[i].start();
>         }
>         for (Thread thread : threads) {
>             thread.join();
>         }
>         Thread.sleep(6);
>     }
>
>     static class OperationInsert implements Runnable {
>         private ClientCache<String, Object> cache;
>         private int k;
>         private Integer test_rows;
>         private Integer thread_cnt;
>
>         public OperationInsert(ClientCache<String, Object> cache, int k, Integer test_rows, Integer thread_cnt) {
>             this.cache = cache;
>             this.k = k;
>             this.test_rows = test_rows;
>             this.thread_cnt = thread_cnt;
>         }
>
>         @Override
>         public void run() {
>             for (int i = 100 + (test_rows/thread_cnt) * k; i < 100 + (test_rows/thread_cnt) * (k + 1); i++) {
>                 cache.put("" + i, "aaa");
>             }
>         }
>     }
> }
> {code}
>
> *Running results:*
> [^result1.log] [^result2.log]
>
> *Version:*
> The test program uses Ignite version 2.15.0. When data is inserted from a single thread, no events are lost. With an Ignite cluster of two or three nodes, all 500 events are still received even with multi-threaded inserts. The problem only seems to occur when concurrent threads insert data into a single node.
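The reproducer above prints each key and then relies on a fixed `Thread.sleep()` before exit, so an undercount has to be spotted by eye in the logs. Below is a minimal plain-Java sketch of a sturdier counting harness under stated assumptions: it has no Ignite dependency, the continuous-query callback is simulated synchronously inside the writer threads, and the class name `ListenerCountSketch` is ours. The point is the pattern: count events with an `AtomicInteger` and wait on a `CountDownLatch` with a timeout, so lost events surface as a timeout or a wrong count rather than depending on sleep duration.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ListenerCountSketch {
    /** Runs the writer threads and returns the number of listener callbacks observed. */
    static int run(int threadCnt, int totalRows) throws InterruptedException {
        AtomicInteger events = new AtomicInteger();
        CountDownLatch received = new CountDownLatch(totalRows);
        ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
        int perThread = totalRows / threadCnt;

        Thread[] threads = new Thread[threadCnt];
        for (int t = 0; t < threadCnt; t++) {
            final int k = t;
            threads[t] = new Thread(() -> {
                // Same key ranges as the reporter's OperationInsert.
                for (int i = 100 + perThread * k; i < 100 + perThread * (k + 1); i++) {
                    cache.put("" + i, "aaa");
                    // Stand-in for the continuous-query local listener callback,
                    // fired synchronously in this simplified model.
                    events.incrementAndGet();
                    received.countDown();
                }
            });
            threads[t].start();
        }
        for (Thread t : threads)
            t.join();

        // Wait with a timeout instead of a fixed Thread.sleep(6): missing
        // events show up as a timeout rather than a silent undercount.
        received.await(10, TimeUnit.SECONDS);
        return events.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("events observed: " + run(50, 500));
    }
}
```

In the real reproducer the same latch would be counted down inside `onUpdated`, and `received.await(...)` would replace the final sleep.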
[jira] [Updated] (IGNITE-19561) Ignite thin client continuous query listener cannot listen to all events
[ https://issues.apache.org/jira/browse/IGNITE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pavel Tupitsyn updated IGNITE-19561:
------------------------------------
    Fix Version/s: 2.16
[jira] [Updated] (IGNITE-19561) Ignite thin client continuous query listener cannot listen to all events
[ https://issues.apache.org/jira/browse/IGNITE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pavel Tupitsyn updated IGNITE-19561:
------------------------------------
    Component/s: thin client
                 (was: clients)
[jira] [Updated] (IGNITE-19561) Ignite thin client continuous query listener cannot listen to all events
[ https://issues.apache.org/jira/browse/IGNITE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mengyu Jing updated IGNITE-19561:
---------------------------------
    External issue URL: (was: https://stackoverflow.com/questions/76216469/ignite-thin-client-continuous-query-listener-cannot-listen-to-all-events)
[jira] [Updated] (IGNITE-19561) Ignite thin client continuous query listener cannot listen to all events
[ https://issues.apache.org/jira/browse/IGNITE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mengyu Jing updated IGNITE-19561:
---------------------------------
    Description: (full issue description; identical to the body quoted in the first message above)
[jira] [Updated] (IGNITE-19561) Ignite thin client continuous query listener cannot listen to all events
[ https://issues.apache.org/jira/browse/IGNITE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mengyu Jing updated IGNITE-19561:
---------------------------------
    External issue URL: https://stackoverflow.com/questions/76216469/ignite-thin-client-continuous-query-listener-cannot-listen-to-all-events
           Environment: JDK 1.8
                        Windows 10
                        (was: JDK 1.8)
[jira] [Created] (IGNITE-19561) Ignite thin client continuous query listener cannot listen to all events
Mengyu Jing created IGNITE-19561:
---------------------------------

             Summary: Ignite thin client continuous query listener cannot listen to all events
                 Key: IGNITE-19561
                 URL: https://issues.apache.org/jira/browse/IGNITE-19561
             Project: Ignite
          Issue Type: Bug
          Components: cache, clients
    Affects Versions: 2.15
         Environment: JDK 1.8
            Reporter: Mengyu Jing
         Attachments: result1.log, result2.log

(Issue description identical to the body quoted in the first message above; the original second code block additionally began with "package com.example.continuebug;".)

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Commented] (IGNITE-19410) Node failure in case multiple nodes join and leave a cluster simultaneously with security enabled.
[ https://issues.apache.org/jira/browse/IGNITE-19410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725933#comment-17725933 ] Ignite TC Bot commented on IGNITE-19410: {panel:title=Branch: [pull/10701/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [pull/10701/head] Base: [master] : New Tests (1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1} {color:#8b}Security{color} [[tests 1|https://ci2.ignite.apache.org/viewLog.html?buildId=7188616]] * {color:#013220}SecurityTestSuite: NodeSecurityContextPropagationTest.testProcessCustomDiscoveryMessageFromLeftNode - PASSED{color} {panel} [TeamCity *--> Run :: All* Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7186616&buildTypeId=IgniteTests24Java8_RunAll] > Node failure in case multiple nodes join and leave a cluster simultaneously > with security is enabled. > -- > > Key: IGNITE-19410 > URL: https://issues.apache.org/jira/browse/IGNITE-19410 > Project: Ignite > Issue Type: Bug >Reporter: Mikhail Petrov >Priority: Major > Labels: ise > Attachments: NodeSecurityContextTest.java > > Time Spent: 2.5h > Remaining Estimate: 0h > > The case when nodes with security enabled join and leave the cluster > simultaneously can cause the joining nodes to fail with the following > exception: > {code:java} > [2023-05-03T14:54:31,208][ERROR][disco-notifier-worker-#332%ignite.NodeSecurityContextTest2%][IgniteTestResources] > Critical system error detected. 
Will be handled accordingly to configured > handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, > super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet > [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], > failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, > err=java.lang.IllegalStateException: Failed to find security context for > subject with given ID : 4725544a-f144-4486-a705-46b2ac200011]] > java.lang.IllegalStateException: Failed to find security context for subject > with given ID : 4725544a-f144-4486-a705-46b2ac200011 > at > org.apache.ignite.internal.processors.security.IgniteSecurityProcessor.withContext(IgniteSecurityProcessor.java:164) > ~[classes/:?] > at > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$3$SecurityAwareNotificationTask.run(GridDiscoveryManager.java:949) > ~[classes/:?] > at > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body0(GridDiscoveryManager.java:2822) > ~[classes/:?] > at > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$DiscoveryMessageNotifierWorker.body(GridDiscoveryManager.java:2860) > [classes/:?] > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:125) > [classes/:?] > at java.lang.Thread.run(Thread.java:750) [?:1.8.0_351] {code} > Reproducer is attached. > Simplified steps that leads to the failure: > 1. The client node sends an arbitrary discovery message which produces an > acknowledgement message when it processed by the all cluster nodes . > 2. The client node gracefully leaves the cluster. > 3. The new node joins the cluster and receives a topology snapshot that does > not include the left client node. > 4. 
The new node receives an acknowledgment for the message from step 1 > and fails during its processing because the message originator node is not listed > in the current discovery cache or discovery cache history (see > IgniteSecurityProcessor#withContext(java.util.UUID)). This is because > the GridDiscoveryManager#historicalNode method is currently only aware of the > topology history accumulated after the node has joined the cluster. The > complete cluster topology history that exists at the time a new node joins > the cluster is stored in GridDiscoveryManager#topHist and is not taken into > account by the GridDiscoveryManager#historicalNode method. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
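The fix direction implied by the last step (also consulting the complete topology history kept in GridDiscoveryManager#topHist when resolving a node that has already left) can be sketched roughly as follows; NodeResolver and its fields are illustrative stand-ins, not the actual GridDiscoveryManager code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Hypothetical sketch of the lookup order described in the steps above:
// check alive nodes first, then the post-join discovery cache history,
// and finally the complete topology history (the analogue of
// GridDiscoveryManager#topHist that historicalNode currently ignores).
class NodeResolver {
    final Map<UUID, String> aliveNodes = new HashMap<>();
    final Map<UUID, String> discoCacheHistory = new HashMap<>();
    final Map<UUID, String> fullTopologyHistory = new HashMap<>();

    String resolve(UUID subjId) {
        String node = aliveNodes.get(subjId);
        if (node == null)
            node = discoCacheHistory.get(subjId);
        if (node == null)
            node = fullTopologyHistory.get(subjId); // the missing fallback
        if (node == null)
            throw new IllegalStateException(
                "Failed to find security context for subject with given ID : " + subjId);
        return node;
    }
}
```

With the extra fallback, the acknowledgement from step 4 would resolve the left client node instead of terminating the discovery notifier worker.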
[jira] [Assigned] (IGNITE-19401) Sql. ArrayIndexOutOfBoundsException for ANY/ALL with subselect
[ https://issues.apache.org/jira/browse/IGNITE-19401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky reassigned IGNITE-19401: --- Assignee: Evgeny Stanilovsky > Sql. ArrayIndexOutOfBoundsException for ANY/ALL with subselect > -- > > Key: IGNITE-19401 > URL: https://issues.apache.org/jira/browse/IGNITE-19401 > Project: Ignite > Issue Type: Improvement > Components: sql >Reporter: Yury Gerzhedovich >Assignee: Evgeny Stanilovsky >Priority: Major > Labels: calcite2-required, calcite3-required, ignite-3 > > For SqlLogicTest sql/subquery/any_all/test_correlated_any_all.test the > following queries fail with ArrayIndexOutOfBoundsException: > {code:java} > SELECT i=ALL(SELECT i FROM integers WHERE i<>i1.i) FROM integers i1 ORDER BY > i; > SELECT i FROM integers i1 WHERE i<>ANY(SELECT i FROM integers WHERE i=i1.i) > ORDER BY i; > SELECT i=ALL(SELECT i FROM integers WHERE i=i1.i) FROM integers i1 ORDER BY i; > SELECT i<>ANY(SELECT i FROM integers WHERE i=i1.i) FROM integers i1 ORDER BY > i; > SELECT i=ALL(SELECT i FROM integers WHERE i<>i1.i) FROM integers i1 ORDER BY > i; > SELECT i=ALL(SELECT i FROM integers WHERE i=i1.i OR i IS NULL) FROM integers > i1 ORDER BY i; > {code} > > {noformat} > Caused by: java.lang.ArrayIndexOutOfBoundsException: Index 0 out of bounds > for length 0 > at > org.apache.ignite.internal.sql.engine.exec.exp.agg.Accumulators$DistinctAccumulator.add(Accumulators.java:885) > at > org.apache.ignite.internal.sql.engine.exec.exp.agg.AccumulatorsFactory$AccumulatorWrapperImpl.add(AccumulatorsFactory.java:302) > at > org.apache.ignite.internal.sql.engine.exec.rel.HashAggregateNode$Grouping.addOnMapper(HashAggregateNode.java:294) > at > org.apache.ignite.internal.sql.engine.exec.rel.HashAggregateNode$Grouping.add(HashAggregateNode.java:261) > at > org.apache.ignite.internal.sql.engine.exec.rel.HashAggregateNode.push(HashAggregateNode.java:127) > at > 
org.apache.ignite.internal.sql.engine.exec.rel.Inbox.pushUnordered(Inbox.java:302) > at > org.apache.ignite.internal.sql.engine.exec.rel.Inbox.push(Inbox.java:190) > at > org.apache.ignite.internal.sql.engine.exec.rel.Inbox.onBatchReceived(Inbox.java:168) > at > org.apache.ignite.internal.sql.engine.exec.ExchangeServiceImpl.onMessage(ExchangeServiceImpl.java:148){noformat} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19541) Revise all deprecated constructors in public exceptions classes
[ https://issues.apache.org/jira/browse/IGNITE-19541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-19541: - Summary: Revise all deprecated constructors in public exceptions classes (was: Revise all deprecated constructors in public and internal exceptions classes) > Revise all deprecated constructors in public exceptions classes > --- > > Key: IGNITE-19541 > URL: https://issues.apache.org/jira/browse/IGNITE-19541 > Project: Ignite > Issue Type: Bug >Reporter: Vyacheslav Koptilin >Assignee: Vyacheslav Koptilin >Priority: Major > Labels: iep-84, ignite-3 > Fix For: 3.0.0-beta2 > > > Need to revise all deprecated constructors in IgniteException, > IgniteCheckedException, IgniteInternalException, > IgniteInternalCheckedException, and their sub-classes. > All deprecated methods should be removed along with their usage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19541) Revise all deprecated constructors in public exceptions classes
[ https://issues.apache.org/jira/browse/IGNITE-19541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-19541: - Description: Need to revise all deprecated constructors in IgniteException and its sub-classes. All deprecated methods should be removed along with their usage. (was: Need to revise all deprecated constructors in IgniteException, IgniteCheckedException, IgniteInternalException, IgniteInternalCheckedException, and their sub-classes. All deprecated methods should be removed along with their usage.) > Revise all deprecated constructors in public exceptions classes > --- > > Key: IGNITE-19541 > URL: https://issues.apache.org/jira/browse/IGNITE-19541 > Project: Ignite > Issue Type: Bug >Reporter: Vyacheslav Koptilin >Assignee: Vyacheslav Koptilin >Priority: Major > Labels: iep-84, ignite-3 > Fix For: 3.0.0-beta2 > > > Need to revise all deprecated constructors in IgniteException and its > sub-classes. All deprecated methods should be removed along with their usage. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-18443) Sql. Provide extend commands and handlers for distributed zones operation
[ https://issues.apache.org/jira/browse/IGNITE-18443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Maksim Zhuravkov reassigned IGNITE-18443: - Assignee: Maksim Zhuravkov > Sql. Provide extend commands and handlers for distributed zones operation > - > > Key: IGNITE-18443 > URL: https://issues.apache.org/jira/browse/IGNITE-18443 > Project: Ignite > Issue Type: Bug > Components: sql >Reporter: Yury Gerzhedovich >Assignee: Maksim Zhuravkov >Priority: Major > Labels: ignite-3 > > After implementing IGNITE-18254 and IGNITE-18156 we have handlers for just > part of the parameters: name, {{{}DATA_NODES_AUTO_ADJUST{}}}, > DATA_NODES_AUTO_ADJUST_SCALE_UP, DATA_NODES_AUTO_ADJUST_SCALE_DOWN. Need to > provide DDL commands and their handlers for altering and creating zone > configuration, as well as translation to these commands from the AST > representation for all the remaining parameters. > As a result, we will be able to translate AST to a command (see > DdlSqlToCommandConverter) and execute this command in order to apply changes > to configuration (see DdlCommandHandler). > The following configuration parameters should be covered by the ticket for > CREATE and ALTER operations: PARTITIONS, REPLICAS, AFFINITY_FUNCTION, > DATA_NODES_FILTER -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-19530) Reduce size of configuration keys
[ https://issues.apache.org/jira/browse/IGNITE-19530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725856#comment-17725856 ] Roman Puchkovskiy commented on IGNITE-19530: It is not that easy to implement what was requested, so I conducted an experiment and reduced key sizes in a dirty way. For the longest table-related keys, the reduction is almost twofold, and (with the test in IGNITE-19275) there are 40 such keys out of the 55 per table (the remaining keys are much shorter), so the total reduction of keys is around 30%. After enabling this optimization, I did not see any difference: with it and without it, the test was failing after creating approximately 45 tables. I also discovered that most of the commands that are sent to the Metastorage RAFT group are writes under placementdriver.lease (around 16k writes, versus 93 writes for table metadata). When I disabled writing of leases, the situation improved drastically, even though a big dispersion emerged on my machine: the test started failing after 341 or 676 tables, and once it even created all 1000 tables. But even with lease writing disabled, enabling or disabling the keys optimization I mentioned at the beginning has no influence on the results. So I suggest the following: 1. Create an issue about making table config keys lighter, but with low priority 2. Close this ticket, as the suggested optimization does not change anything in the 'problem of 1000 tables' [~Denis Chudov] , [~sanpwc] what do you think? > Reduce size of configuration keys > - > > Key: IGNITE-19530 > URL: https://issues.apache.org/jira/browse/IGNITE-19530 > Project: Ignite > Issue Type: Improvement >Reporter: Denis Chudov >Assignee: Roman Puchkovskiy >Priority: Major > Labels: ignite-3 > > *Motivation* > The distributed configuration keys are byte arrays formed from strings that > contain some constant prefixes, postfixes, delimiters and identifiers, > mostly UUIDs. 
Example of the configuration key for the default value provider > type of a table column: > {{dst-cfg.table.tables.d7b99c6a-de10-454d-9370-38d18b65e9c0.columns.d8482dae-cfb8-42b8-a759-9727dd3763a6.defaultValueProvider.type}} > It contains 2 UUIDs in string representation. Unfortunately, there are > several configuration entries for each table column (having similar keys) and > besides that about a dozen keys for the table itself. > As a result, configuration keys take 68% of a meta storage message related to > table creation (for a one-node cluster, for a table of 2 columns and 25 > partitions), which creates excessive load on the meta storage raft group in case > of mass table creation (see IGNITE-19275) > *Definition of done* > We should get rid of the string representation of UUIDs in configuration keys; > UUIDs should be written as 16 bytes each into the byte array directly. Also, > string constants should be reduced (or even replaced with constants consisting > of a few bytes) because there is no need to keep them human-readable. -- This message was sent by Atlassian Jira (v8.20.10#820010)
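The 16-byte encoding called for in the definition of done can be sketched as follows; CompactKeys is a hypothetical helper for illustration, not the actual Ignite serialization code:

```java
import java.nio.ByteBuffer;
import java.util.UUID;

// Write a UUID as its two raw 64-bit halves (16 bytes) instead of the
// 36-character string form used in the example key above.
final class CompactKeys {
    static byte[] uuidToBytes(UUID id) {
        return ByteBuffer.allocate(16)
            .putLong(id.getMostSignificantBits())
            .putLong(id.getLeastSignificantBits())
            .array();
    }

    static UUID uuidFromBytes(byte[] b) {
        ByteBuffer buf = ByteBuffer.wrap(b);
        return new UUID(buf.getLong(), buf.getLong()); // msb, then lsb
    }
}
```

Each UUID shrinks from 36 bytes to 16, which is where most of the per-key savings for keys like the one above would come from.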
[jira] [Commented] (IGNITE-19164) Improve message about requested partitions during snapshot restore
[ https://issues.apache.org/jira/browse/IGNITE-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725847#comment-17725847 ] Ignite TC Bot commented on IGNITE-19164: {panel:title=Branch: [pull/10638/head] Base: [master] : No blockers found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel} {panel:title=Branch: [pull/10638/head] Base: [master] : No new tests found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}{panel} [TeamCity *--> Run :: All* Results|https://ci2.ignite.apache.org/viewLog.html?buildId=7188479&buildTypeId=IgniteTests24Java8_RunAll] > Improve message about requested partitions during snapshot restore > -- > > Key: IGNITE-19164 > URL: https://issues.apache.org/jira/browse/IGNITE-19164 > Project: Ignite > Issue Type: Task >Reporter: Ilya Shishkov >Assignee: Julia Bakulina >Priority: Minor > Labels: iep-43, ise > Time Spent: 1.5h > Remaining Estimate: 0h > > Currently, during snapshot restore a message is logged before requesting > partitions from remote nodes: > {quote} > [2023-03-24T18:06:59,910][INFO > ]\[disco-notifier-worker-#792%node%|#792%node%][SnapshotRestoreProcess] > Trying to request partitions from remote nodes > [reqId=ff682204-9554-4fbb-804c-38a79c0b286a, snapshot=snapshot_name, > map={*{color:#FF}76e22ef5-3c76-4987-bebd-9a6222a0{color}*={*{color:#FF}-903566235{color}*=[0,2,4,6,11,12,18,98,100,170,190,194,1015], > > *{color:#FF}1544803905{color}*=[1,11,17,18,22,25,27,35,37,42,45,51,62,64,67,68,73,76,1017]}}] > {quote} > It is necessary to make this output "human readable": > # Print messages per node instead of one message for all nodes. > # Print the consistent ID and address of the remote node supplying partitions. > # Print the cache / group name. -- This message was sent by Atlassian Jira (v8.20.10#820010)
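A per-node, per-group message of the kind requested could be shaped roughly like this; RestoreLog, the map layout, and the message format are assumptions for illustration, not the actual SnapshotRestoreProcess code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Turn the single combined map into one readable line per node and cache
// group, carrying the node's consistent ID/address and the partitions
// requested from it.
final class RestoreLog {
    static List<String> perNodeMessages(Map<String, Map<String, int[]>> partsByNode) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Map<String, int[]>> node : partsByNode.entrySet()) {
            for (Map.Entry<String, int[]> grp : node.getValue().entrySet()) {
                out.add("Requesting partitions [node=" + node.getKey()
                    + ", cacheGroup=" + grp.getKey()
                    + ", parts=" + Arrays.toString(grp.getValue()) + ']');
            }
        }
        return out;
    }
}
```

One line per node and group is easier to grep during support work than the nested map dump shown in the quote above.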
[jira] [Updated] (IGNITE-19488) RemoteFragmentExecutionException when inserting more than 30 000 rows into one table
[ https://issues.apache.org/jira/browse/IGNITE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-19488: Ignite Flags: (was: Docs Required,Release Notes Required) > RemoteFragmentExecutionException when inserting more than 30 000 rows into > one table > > > Key: IGNITE-19488 > URL: https://issues.apache.org/jira/browse/IGNITE-19488 > Project: Ignite > Issue Type: Bug > Components: jdbc, sql >Reporter: Igor >Assignee: Evgeny Stanilovsky >Priority: Critical > Labels: ignite-3 > Attachments: logs.zip, logs_with_ignored_erorr.zip > > > h1. Steps to reproduce > Ignite 3 main branch commit 45380a6c802203dab0d72bd1eb9fb202b2a345b0 > # Create table with 5 columns > # Insert rows into the table in batches of 1000 rows each. > # Repeat previous step until an exception is thrown. > h1. Expected behaviour > More than 30 000 rows created. > h1. Actual behaviour > An exception is thrown after 29 000 rows are inserted: > {code:java} > Exception while executing query [query=SELECT COUNT(*) FROM > rows_capacity_table]. Error message:IGN-CMN-1 > TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 > TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution > failed: nodeName=TablesAmountCapacityTest_cluster_0, > queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, > originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907 > java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) > FROM rows_capacity_table]. 
Error message:IGN-CMN-1 > TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 > TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution > failed: nodeName=TablesAmountCapacityTest_cluster_0, > queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, > originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907 > at > org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57) > at > org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149) > at > org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108) > {code} > Logs are in the attachment. > [^logs.zip] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (IGNITE-19533) Rename UNKNOWN_ERR error code to INTERNAL_ERR
[ https://issues.apache.org/jira/browse/IGNITE-19533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725806#comment-17725806 ] Alexander Lapin commented on IGNITE-19533: -- [~slava.koptilin] LGTM! > Rename UNKNOWN_ERR error code to INTERNAL_ERR > - > > Key: IGNITE-19533 > URL: https://issues.apache.org/jira/browse/IGNITE-19533 > Project: Ignite > Issue Type: Bug >Reporter: Vyacheslav Koptilin >Assignee: Vyacheslav Koptilin >Priority: Major > Labels: iep-84, ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 10m > Remaining Estimate: 0h > > The UNKNOWN_ERR error code should be renamed to INTERNAL_ERR. The @Deprecated > should be removed as well. > The INTERNAL_ERR error code should be considered as the product's internal > error caused by faulty logic or coding in the product. In general, this error > code represents a non-recoverable error that should be provided to code > maintainers. > It seems to me the UNEXPECTED_ERR and Sql.INTERNAL_ERR error codes just > duplicate the `Common.INTERNAL_ERR` one, so, both of them should be removed. > [1] > https://docs.oracle.com/en/database/oracle/oracle-database/19/errmg/using-messages.html#GUID-3D523C69-502E-4E8B-8E56-BEA97EBE50ED > {noformat} > ORA-00600: internal error code, arguments: [string], [string], [string], > [string], [string], [string], [string], [string], [string], [string], > [string], [string] > Cause: This is the generic internal error number for Oracle program > exceptions. It indicates that a process has encountered a low-level, > unexpected condition. The first argument is the internal message number. This > argument and the database version number are critical in identifying the root > cause and the potential impact to your system. > Action: Visit My Oracle Support to access the ORA-00600 Lookup tool > (reference Note 600.1) for more information regarding the specific ORA-00600 > error encountered. 
An Incident has been created for this error in the > Automatic Diagnostic Repository (ADR). When logging a service request, use > the Incident Packaging Service (IPS) from the Support Workbench or the ADR > Command Interpreter (ADRCI) to automatically package the relevant trace > information (reference My Oracle Support Note 411.1). The following > information should also be gathered to help determine the root cause: - > changes leading up to the error - events or unusual circumstances leading up > to the error - operations attempted prior to the error - conditions of the > operating system and databases at the time of the error Note: The cause of > this message may manifest itself as different errors at different times. Be > aware of the history of errors that occurred before this internal error. > {noformat} > [2] https://www.postgresql.org/docs/current/errcodes-appendix.html > {noformat} > Class XX — Internal Error > XX000 internal_error > XX001 data_corrupted > XX002 index_corrupted > {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19552) ArrayStoreException on connect to remote server node
[ https://issues.apache.org/jira/browse/IGNITE-19552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Belyak updated IGNITE-19552: -- Description: The problem consistently reproduces with the remote server node, sometimes - with the local one on AI3 after {color:#d1d2d3}a4912c63 IGNITE-19318 Unable to build stand alone/fat jar with JDBC driver (#2048){color}. 1) Start single server node 2) Active cluster 3) Connect to the cluster via JDBC driver. Expected result: connection established Actual result: {code:java} [16:08:46][INFO ][Thread-2] Create table request: CREATE TABLE IF NOT EXISTS usertable (yscb_key VARCHAR PRIMARY KEY, field0 VARCHAR, field1 VARCHAR, field2 VARCHAR, field3 VARCHAR, field4 VARCHAR, field5 VARCHAR, field6 VARCHAR, field7 VARCHAR, field8 VARCHAR, field9 VARCHAR);May 23, 2023 4:08:46 PM io.netty.channel.ChannelInitializer exceptionCaughtWARNING: Failed to initialize a channel. Closing: [id: 0x0392f8b4, L:/127.0.0.1:10800 - R:/127.0.0.1:53528]java.lang.ArrayStoreException: org.apache.ignite.internal.client.proto.ClientMessageDecoderat org.apache.ignite.client.handler.ClientHandlerModule$1.initChannel(ClientHandlerModule.java:250) at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129) at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112) at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:1114) at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609) at io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46) at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463) at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115) at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650) at 
io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:514) at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:429) at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:486) at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:829) {code} was: The problem consistently reproduces with the remote server node, sometimes - with the local one on AI3 after {color:#d1d2d3}a4912c63 IGNITE-19318 Unable to build stand alone/fat jar with JDBC driver (#2048){color}. 1) Start single server node 2) Active cluster 3) Connect to the cluster via JDBC driver. Expected result: connection established Actual result: {code:java} [16:08:46][INFO ][Thread-2] Create table request: CREATE TABLE IF NOT EXISTS usertable (yscb_key VARCHAR PRIMARY KEY, field0 VARCHAR, field1 VARCHAR, field2 VARCHAR, field3 VARCHAR, field4 VARCHAR, field5 VARCHAR, field6 VARCHAR, field7 VARCHAR, field8 VARCHAR, field9 VARCHAR);May 23, 2023 4:08:46 PM io.netty.channel.ChannelInitializer exceptionCaughtWARNING: Failed to initialize a channel. 
Closing: [id: 0x0392f8b4, L:/127.0.0.1:10800 - R:/127.0.0.1:53528]java.lang.ArrayStoreException: org.apache.ignite.internal.client.proto.ClientMessageDecoderat org.apache.ignite.client.handler.ClientHandlerModule$1.initChannel(ClientHandlerModule.java:250) at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129) at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112) at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:1114) at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609) at io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46) at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463) at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115) at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650) at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.j
[jira] [Commented] (IGNITE-19552) ArrayStoreException on connect to remote server node
[ https://issues.apache.org/jira/browse/IGNITE-19552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725802#comment-17725802 ] Alexander Belyak commented on IGNITE-19552: --- The problem was related to IGNITE-19318: the fat JDBC jar had been installed into the local Maven repo on my host. > ArrayStoreException on connect to remote server node > > > Key: IGNITE-19552 > URL: https://issues.apache.org/jira/browse/IGNITE-19552 > Project: Ignite > Issue Type: Bug > Components: jdbc >Affects Versions: 3.0 >Reporter: Alexander Belyak >Assignee: Konstantin Orlov >Priority: Major > Labels: ignite-3 > > The problem consistently reproduces with the remote server node, sometimes - > with the local one on AI3 after {color:#d1d2d3}a4912c63 IGNITE-19318 Unable > to build stand alone/fat jar with JDBC driver (#2048){color}. > 1) Start single server node > 2) Active cluster > 3) Connect to the cluster via JDBC driver. > Expected result: connection established > Actual result: > {code:java} > [16:08:46][INFO ][Thread-2] Create table request: CREATE TABLE IF NOT EXISTS > usertable (yscb_key VARCHAR PRIMARY KEY, field0 VARCHAR, field1 VARCHAR, > field2 VARCHAR, field3 VARCHAR, field4 VARCHAR, field5 VARCHAR, field6 > VARCHAR, field7 VARCHAR, field8 VARCHAR, field9 VARCHAR);May 23, 2023 4:08:46 > PM io.netty.channel.ChannelInitializer exceptionCaughtWARNING: Failed to > initialize a channel. 
Closing: [id: 0x0392f8b4, L:/127.0.0.1:10800 - > R:/127.0.0.1:53528]java.lang.ArrayStoreException: > org.apache.ignite.internal.client.proto.ClientMessageDecoderat > org.apache.ignite.client.handler.ClientHandlerModule$1.initChannel(ClientHandlerModule.java:250) > at > io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129) > at > io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112) >at > io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:1114) > at > io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609) > at > io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46) > at > io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463) > at > io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115) > at > io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650) > at > io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:514) > at > io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:429) > at > io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:486) > at > io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174) > at > io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167) > at > io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) > at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)at > io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) > at > io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) > at > io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) > at 
java.base/java.lang.Thread.run(Thread.java:829) {code} > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19552) ArrayStoreException on connect to remote server node
[ https://issues.apache.org/jira/browse/IGNITE-19552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Belyak updated IGNITE-19552: -- Description: The problem consistently reproduces with the remote server node, sometimes - with the local one on AI3 after {color:#d1d2d3}a4912c63 IGNITE-19318 Unable to build stand alone/fat jar with JDBC driver (#2048){color}. 1) Start single server node 2) Active cluster 3) Connect to the cluster via JDBC driver. Expected result: connection established Actual result: {code:java} [16:08:46][INFO ][Thread-2] Create table request: CREATE TABLE IF NOT EXISTS usertable (yscb_key VARCHAR PRIMARY KEY, field0 VARCHAR, field1 VARCHAR, field2 VARCHAR, field3 VARCHAR, field4 VARCHAR, field5 VARCHAR, field6 VARCHAR, field7 VARCHAR, field8 VARCHAR, field9 VARCHAR);May 23, 2023 4:08:46 PM io.netty.channel.ChannelInitializer exceptionCaughtWARNING: Failed to initialize a channel. Closing: [id: 0x0392f8b4, L:/127.0.0.1:10800 - R:/127.0.0.1:53528]java.lang.ArrayStoreException: org.apache.ignite.internal.client.proto.ClientMessageDecoderat org.apache.ignite.client.handler.ClientHandlerModule$1.initChannel(ClientHandlerModule.java:250) at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129) at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112) at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:1114) at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609) at io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46) at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463) at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115) at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650) at 
io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:514) at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:429) at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:486) at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174) at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:569)at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) at java.base/java.lang.Thread.run(Thread.java:829) {code} Important - use one of the last builds. was: The problem consistently reproduces with the remote server node, sometimes - with local one. 1) Start single server node 2) Active cluster 3) Connect to the cluster via JDBC driver. Expected result: connection established Actual result: {code:java} [16:08:46][INFO ][Thread-2] Create table request: CREATE TABLE IF NOT EXISTS usertable (yscb_key VARCHAR PRIMARY KEY, field0 VARCHAR, field1 VARCHAR, field2 VARCHAR, field3 VARCHAR, field4 VARCHAR, field5 VARCHAR, field6 VARCHAR, field7 VARCHAR, field8 VARCHAR, field9 VARCHAR);May 23, 2023 4:08:46 PM io.netty.channel.ChannelInitializer exceptionCaughtWARNING: Failed to initialize a channel. 
Closing: [id: 0x0392f8b4, L:/127.0.0.1:10800 - R:/127.0.0.1:53528]java.lang.ArrayStoreException: org.apache.ignite.internal.client.proto.ClientMessageDecoderat org.apache.ignite.client.handler.ClientHandlerModule$1.initChannel(ClientHandlerModule.java:250) at io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129) at io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112) at io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:1114) at io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609) at io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46) at io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463) at io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115) at io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650) at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:514) at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractC
[jira] [Updated] (IGNITE-19556) Sql. Implement cache for parsing sql requests.
[ https://issues.apache.org/jira/browse/IGNITE-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-19556: Epic Link: IGNITE-19479 > Sql. Implement cache for parsing sql requests. > -- > > Key: IGNITE-19556 > URL: https://issues.apache.org/jira/browse/IGNITE-19556 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 3.0.0-beta1 >Reporter: Evgeny Stanilovsky >Priority: Major > Labels: calcite3-required, ignite-3, perfomance, sql-performance > > starting point: IgniteSqlParser#parse > In the current implementation: > {noformat} > PreparedStatement insertPrepStmt ... > insertPrepStmt.addBatch(); > {noformat} > results in sequential row-by-row insertion, which leads to repeatedly parsing the > same > {noformat} > INSERT INTO ... > {noformat} > requests. > A local benchmark with 1 server node shows 12% growth with cached requests. > With caching, TpchParseBenchmark ceases to be significant at all. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
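The proposed cache amounts to memoizing the parse result by SQL text, so identical batched INSERT statements are parsed once. A minimal sketch, assuming a hypothetical ParseCache wrapper around the parser (the real fix would likely use a bounded cache and a richer key than the raw text):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Memoize parse results by SQL text: the underlying parser runs only on a
// cache miss, so re-executing the same INSERT for every batched row does
// not re-parse it.
final class ParseCache<T> {
    private final Map<String, T> cache = new ConcurrentHashMap<>();
    private final Function<String, T> parser;

    ParseCache(Function<String, T> parser) {
        this.parser = parser;
    }

    T parse(String sql) {
        return cache.computeIfAbsent(sql, parser);
    }
}
```

computeIfAbsent gives atomic compute-on-miss semantics, so concurrent batch threads hitting the same statement do not parse it twice.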
[jira] [Assigned] (IGNITE-18330) Fix javadoc in Transaction#resume(), Transaction#suspend
[ https://issues.apache.org/jira/browse/IGNITE-18330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julia Bakulina reassigned IGNITE-18330: --- Assignee: Julia Bakulina > Fix javadoc in Transaction#resume(), Transaction#suspend > > > Key: IGNITE-18330 > URL: https://issues.apache.org/jira/browse/IGNITE-18330 > Project: Ignite > Issue Type: Improvement >Reporter: Luchnikov Alexander >Assignee: Julia Bakulina >Priority: Trivial > Labels: ise, newbie > > After the implementation of IGNITE-5714, this API can be used with pessimistic > transactions. > The javadoc, however, still says "Supported only for optimistic transactions." -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19533) Rename UNKNOWN_ERR error code to INTERNAL_ERR
[ https://issues.apache.org/jira/browse/IGNITE-19533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-19533: - Description: The UNKNOWN_ERR error code should be renamed to INTERNAL_ERR. The @Deprecated should be removed as well. The INTERNAL_ERR error code should be considered as the product's internal error caused by faulty logic or coding in the product. In general, this error code represents a non-recoverable error that should be provided to code maintainers. It seems to me the UNEXPECTED_ERR and Sql.INTERNAL_ERR error codes just duplicate the `Common.INTERNAL_ERR` one, so both of them should be removed. [1] https://docs.oracle.com/en/database/oracle/oracle-database/19/errmg/using-messages.html#GUID-3D523C69-502E-4E8B-8E56-BEA97EBE50ED {noformat} ORA-00600: internal error code, arguments: [string], [string], [string], [string], [string], [string], [string], [string], [string], [string], [string], [string] Cause: This is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered a low-level, unexpected condition. The first argument is the internal message number. This argument and the database version number are critical in identifying the root cause and the potential impact to your system. Action: Visit My Oracle Support to access the ORA-00600 Lookup tool (reference Note 600.1) for more information regarding the specific ORA-00600 error encountered. An Incident has been created for this error in the Automatic Diagnostic Repository (ADR). When logging a service request, use the Incident Packaging Service (IPS) from the Support Workbench or the ADR Command Interpreter (ADRCI) to automatically package the relevant trace information (reference My Oracle Support Note 411.1). 
The following information should also be gathered to help determine the root cause: - changes leading up to the error - events or unusual circumstances leading up to the error - operations attempted prior to the error - conditions of the operating system and databases at the time of the error Note: The cause of this message may manifest itself as different errors at different times. Be aware of the history of errors that occurred before this internal error. {noformat} [2] https://www.postgresql.org/docs/current/errcodes-appendix.html {noformat} Class XX — Internal Error XX000 internal_error XX001 data_corrupted XX002 index_corrupted {noformat} was: The UNKNOWN_ERR error code should be renamed to INTERNAL_ERR. The @Deprecated should be removed as well. The INTERNAL_ERR error code should be considered as the product's internal error caused by faulty logic or coding in the product. In general, this error code represents a non-recoverable error that should be provided to code maintainers. It seems to me the UNEXPECTED_ERR and Sql.INTERNAL_ERR error codes just duplicate the `Common.INTERNAL_ERR` one, so both of them should be removed. > Rename UNKNOWN_ERR error code to INTERNAL_ERR > - > > Key: IGNITE-19533 > URL: https://issues.apache.org/jira/browse/IGNITE-19533 > Project: Ignite > Issue Type: Bug >Reporter: Vyacheslav Koptilin >Assignee: Vyacheslav Koptilin >Priority: Major > Labels: iep-84, ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 10m > Remaining Estimate: 0h > > The UNKNOWN_ERR error code should be renamed to INTERNAL_ERR. The @Deprecated > should be removed as well. > The INTERNAL_ERR error code should be considered as the product's internal > error caused by faulty logic or coding in the product. In general, this error > code represents a non-recoverable error that should be provided to code > maintainers. > It seems to me the UNEXPECTED_ERR and Sql.INTERNAL_ERR error codes just > duplicate the `Common.INTERNAL_ERR` one, so both of them should be removed. 
> [1] > https://docs.oracle.com/en/database/oracle/oracle-database/19/errmg/using-messages.html#GUID-3D523C69-502E-4E8B-8E56-BEA97EBE50ED > {noformat} > ORA-00600: internal error code, arguments: [string], [string], [string], > [string], [string], [string], [string], [string], [string], [string], > [string], [string] > Cause: This is the generic internal error number for Oracle program > exceptions. It indicates that a process has encountered a low-level, > unexpected condition. The first argument is the internal message number. This > argument and the database version number are critical in identifying the root > cause and the potential impact to your system. > Action: Visit My Oracle Support to access the ORA-00600 Lookup tool > (reference Note 600.1) for more information regarding the specific ORA-00600 > error encountered. An Incident has been created for this error in the > Automatic Diag
[jira] [Updated] (IGNITE-19533) Rename UNKNOWN_ERR error code to INTERNAL_ERR
[ https://issues.apache.org/jira/browse/IGNITE-19533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin updated IGNITE-19533: - Description: The UNKNOWN_ERR error code should be renamed to INTERNAL_ERR. The @Deprecated should be removed as well. The INTERNAL_ERR error code should be considered as the product's internal error caused by faulty logic or coding in the product. In general, this error code represents a non-recoverable error that should be provided to code maintainers. It seems to me the UNEXPECTED_ERR and Sql.INTERNAL_ERR error codes just duplicate the `Common.INTERNAL_ERR` one, so both of them should be removed. was: The UNKNOWN_ERR error code should be renamed to INTERNAL_ERR. The @Deprecated should be removed as well. The INTERNAL_ERR error code should be considered as the product's internal error caused by faulty logic or coding in the product. In general, this error code represents a non-recoverable error that should be provided to code maintainers. It seems to me the UNEXPECTED_ERR error code just duplicates the `INTERNAL_ERR` one, so it should be removed. > Rename UNKNOWN_ERR error code to INTERNAL_ERR > - > > Key: IGNITE-19533 > URL: https://issues.apache.org/jira/browse/IGNITE-19533 > Project: Ignite > Issue Type: Bug >Reporter: Vyacheslav Koptilin >Assignee: Vyacheslav Koptilin >Priority: Major > Labels: iep-84, ignite-3 > Fix For: 3.0.0-beta2 > > Time Spent: 10m > Remaining Estimate: 0h > > The UNKNOWN_ERR error code should be renamed to INTERNAL_ERR. The @Deprecated > should be removed as well. > The INTERNAL_ERR error code should be considered as the product's internal > error caused by faulty logic or coding in the product. In general, this error > code represents a non-recoverable error that should be provided to code > maintainers. > It seems to me the UNEXPECTED_ERR and Sql.INTERNAL_ERR error codes just > duplicate the `Common.INTERNAL_ERR` one, so both of them should be removed. 
-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19560) Thin 3.0: Netty buffer leak in ConfigurationTest
[ https://issues.apache.org/jira/browse/IGNITE-19560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19560: Attachment: _Test_Run_Unit_Tests_14560.log > Thin 3.0: Netty buffer leak in ConfigurationTest > > > Key: IGNITE-19560 > URL: https://issues.apache.org/jira/browse/IGNITE-19560 > Project: Ignite > Issue Type: Bug > Components: thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Attachments: _Test_Run_Unit_Tests_14560.log > > > {code} > ClientTupleTest > testTypedGetters() PASSED > org.apache.ignite.client.ClientTupleTest.testTypedGettersWithIncorrectType() > ClientTupleTest > testTypedGettersWithIncorrectType() PASSED > org.apache.ignite.client.ConfigurationTest > ConfigurationTest STANDARD_ERROR > 2023-05-24 13:53:59:238 +0300 [INFO][Test worker][ClientHandlerModule] > Thin client protocol started successfully [port=10800] > 2023-05-24 13:53:59:249 +0300 > [ERROR][nioEventLoopGroup-168-1][ResourceLeakDetector] LEAK: > ByteBuf.release() was not called before it's garbage-collected. See > https://netty.io/wiki/reference-counted-objects.html for more information. 
> Recent access records: > #1: > > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) > > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) > > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) > > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) > > io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) > > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) > > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) > {code} > https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunUnitTests/7247037?hideProblemsFromDependencies=false&hideTestsFromDependencies=false -- This message was sent by Atlassian Jira (v8.20.10#820010)
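The leak reported above is a violation of Netty's reference-counting contract: whichever handler ends up owning a ByteBuf must call release(). To keep the example self-contained (Netty itself is not needed to show the contract), the sketch below uses a minimal stand-in for a reference-counted buffer; all names are illustrative, not Netty's API:

```java
// Minimal illustration of the ref-counting contract the leaked ByteBuf
// violates. RefCountedBuf is a hypothetical stand-in for Netty's ByteBuf.
public class LeakDemo {
    static class RefCountedBuf {
        private int refCnt = 1; // allocated with one reference
        int refCnt() { return refCnt; }
        boolean release() {
            if (refCnt <= 0) throw new IllegalStateException("already released");
            return --refCnt == 0; // true when the buffer is actually freed
        }
    }

    // A handler that reads a buffer and never releases it: refCnt stays 1
    // and the leak detector later reports it at garbage-collection time.
    static RefCountedBuf leakyRead() {
        RefCountedBuf buf = new RefCountedBuf();
        return buf; // decoded, but release() was never called
    }

    // The usual fix: release in a finally block once the handler owns the buffer.
    static int safeRead() {
        RefCountedBuf buf = new RefCountedBuf();
        try {
            return buf.refCnt(); // real decode work would go here
        } finally {
            buf.release();
        }
    }

    public static void main(String[] args) {
        System.out.println("leaky refCnt: " + leakyRead().refCnt()); // still 1: a leak
        System.out.println("safe refCnt during read: " + safeRead());
    }
}
```

In real Netty code the equivalent of the finally block is ReferenceCountUtil.release() or extending SimpleChannelInboundHandler, which releases for you.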
[jira] [Updated] (IGNITE-19560) Thin 3.0: Netty buffer leak in ConfigurationTest
[ https://issues.apache.org/jira/browse/IGNITE-19560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19560: Description: {code} ClientTupleTest > testTypedGetters() PASSED org.apache.ignite.client.ClientTupleTest.testTypedGettersWithIncorrectType() ClientTupleTest > testTypedGettersWithIncorrectType() PASSED org.apache.ignite.client.ConfigurationTest ConfigurationTest STANDARD_ERROR 2023-05-24 13:53:59:238 +0300 [INFO][Test worker][ClientHandlerModule] Thin client protocol started successfully [port=10800] 2023-05-24 13:53:59:249 +0300 [ERROR][nioEventLoopGroup-168-1][ResourceLeakDetector] LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information. Recent access records: #1: io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) {code} https://ci.ignite.apache.org/buildConfiguration/ApacheIgnite3xGradle_Test_RunUnitTests/7247037?hideProblemsFromDependencies=false&hideTestsFromDependencies=false was: {code} ClientTupleTest > testTypedGetters() PASSED org.apache.ignite.client.ClientTupleTest.testTypedGettersWithIncorrectType() ClientTupleTest > testTypedGettersWithIncorrectType() PASSED org.apache.ignite.client.ConfigurationTest ConfigurationTest STANDARD_ERROR 2023-05-24 13:53:59:238 +0300 
[INFO][Test worker][ClientHandlerModule] Thin client protocol started successfully [port=10800] 2023-05-24 13:53:59:249 +0300 [ERROR][nioEventLoopGroup-168-1][ResourceLeakDetector] LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information. Recent access records: #1: io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) {code} > Thin 3.0: Netty buffer leak in ConfigurationTest > > > Key: IGNITE-19560 > URL: https://issues.apache.org/jira/browse/IGNITE-19560 > Project: Ignite > Issue Type: Bug > Components: thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > Attachments: _Test_Run_Unit_Tests_14560.log > > > {code} > ClientTupleTest > testTypedGetters() PASSED > org.apache.ignite.client.ClientTupleTest.testTypedGettersWithIncorrectType() > ClientTupleTest > testTypedGettersWithIncorrectType() PASSED > org.apache.ignite.client.ConfigurationTest > ConfigurationTest STANDARD_ERROR > 2023-05-24 13:53:59:238 +0300 [INFO][Test worker][ClientHandlerModule] > Thin client protocol started successfully [port=10800] > 2023-05-24 13:53:59:249 +0300 > [ERROR][nioEventLoopGroup-168-1][ResourceLeakDetector] LEAK: > ByteBuf.release() was not called 
before it's garbage-collected. See > https://netty.io/wiki/reference-counted-objects.html for more information. > Recent access records: > #1: > > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) > > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) > > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) > > io.netty.channel.AbstractChannelH
[jira] [Updated] (IGNITE-19560) Thin 3.0: Netty buffer leak in ConfigurationTest
[ https://issues.apache.org/jira/browse/IGNITE-19560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19560: Description: {code} ClientTupleTest > testTypedGetters() PASSED org.apache.ignite.client.ClientTupleTest.testTypedGettersWithIncorrectType() ClientTupleTest > testTypedGettersWithIncorrectType() PASSED org.apache.ignite.client.ConfigurationTest ConfigurationTest STANDARD_ERROR 2023-05-24 13:53:59:238 +0300 [INFO][Test worker][ClientHandlerModule] Thin client protocol started successfully [port=10800] 2023-05-24 13:53:59:249 +0300 [ERROR][nioEventLoopGroup-168-1][ResourceLeakDetector] LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information. Recent access records: #1: io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) {code} > Thin 3.0: Netty buffer leak in ConfigurationTest > > > Key: IGNITE-19560 > URL: https://issues.apache.org/jira/browse/IGNITE-19560 > Project: Ignite > Issue Type: Bug > Components: thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > > {code} > ClientTupleTest > testTypedGetters() PASSED > 
org.apache.ignite.client.ClientTupleTest.testTypedGettersWithIncorrectType() > ClientTupleTest > testTypedGettersWithIncorrectType() PASSED > org.apache.ignite.client.ConfigurationTest > ConfigurationTest STANDARD_ERROR > 2023-05-24 13:53:59:238 +0300 [INFO][Test worker][ClientHandlerModule] > Thin client protocol started successfully [port=10800] > 2023-05-24 13:53:59:249 +0300 > [ERROR][nioEventLoopGroup-168-1][ResourceLeakDetector] LEAK: > ByteBuf.release() was not called before it's garbage-collected. See > https://netty.io/wiki/reference-counted-objects.html for more information. > Recent access records: > #1: > > io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) > > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444) > > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) > > io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412) > > io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) > > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440) > > io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19560) Thin 3.0: Netty buffer leak in ConfigurationTest
[ https://issues.apache.org/jira/browse/IGNITE-19560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Tupitsyn updated IGNITE-19560: Ignite Flags: (was: Docs Required,Release Notes Required) > Thin 3.0: Netty buffer leak in ConfigurationTest > > > Key: IGNITE-19560 > URL: https://issues.apache.org/jira/browse/IGNITE-19560 > Project: Ignite > Issue Type: Bug > Components: thin client >Affects Versions: 3.0.0-beta1 >Reporter: Pavel Tupitsyn >Assignee: Pavel Tupitsyn >Priority: Major > Labels: ignite-3 > Fix For: 3.0.0-beta2 > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19560) Thin 3.0: Netty buffer leak in ConfigurationTest
Pavel Tupitsyn created IGNITE-19560: --- Summary: Thin 3.0: Netty buffer leak in ConfigurationTest Key: IGNITE-19560 URL: https://issues.apache.org/jira/browse/IGNITE-19560 Project: Ignite Issue Type: Bug Components: thin client Affects Versions: 3.0.0-beta1 Reporter: Pavel Tupitsyn Assignee: Pavel Tupitsyn Fix For: 3.0.0-beta2 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19556) Sql. Implement cache for parsing sql requests.
[ https://issues.apache.org/jira/browse/IGNITE-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-19556: Labels: calcite3-required ignite-3 perfomance sql-performance (was: calcite3-required ignite-3) > Sql. Implement cache for parsing sql requests. > -- > > Key: IGNITE-19556 > URL: https://issues.apache.org/jira/browse/IGNITE-19556 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 3.0.0-beta1 >Reporter: Evgeny Stanilovsky >Priority: Major > Labels: calcite3-required, ignite-3, perfomance, sql-performance > > starting point IgniteSqlParser#parse > In the current implementation: > {noformat} > PreparedStatement insertPrepStmt ... > insertPrepStmt.addBatch(); > {noformat} > inserts rows sequentially, one by one, which leads to repeatedly parsing the > same > {noformat} > INSERT INTO ... > {noformat} > requests. > A local benchmark with 1 server node shows 12% growth with cached requests. > TpchParseBenchmark ceases to be significant at all. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19557) Sql. Insert through JDBC with batch optimization.
[ https://issues.apache.org/jira/browse/IGNITE-19557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-19557: Labels: calcite3-required ignite-3 perfomance sql-performance (was: calcite3-required ignite-3) > Sql. Insert through JDBC with batch optimization. > --- > > Key: IGNITE-19557 > URL: https://issues.apache.org/jira/browse/IGNITE-19557 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 3.0.0-beta1 >Reporter: Evgeny Stanilovsky >Priority: Major > Labels: calcite3-required, ignite-3, perfomance, sql-performance > > JdbcQueryEventHandlerImpl#batchPrepStatementAsync > processes batch rows sequentially, row by row; processing the rows as a > single batch should bring an essential throughput boost. -- This message was sent by Atlassian Jira (v8.20.10#820010)
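The suggested optimization can be illustrated without JDBC: instead of one call per row, rows are grouped into chunks and each chunk is handed to a single bulk operation. All names below are illustrative sketches, not the actual Ignite API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of batch processing: group N rows into N/batchSize
// chunks so each chunk costs one round-trip instead of one per row.
public class BatchSketch {
    static int bulkCalls = 0;

    // Split rows into fixed-size chunks (the last chunk may be smaller).
    static <T> List<List<T>> chunks(List<T> rows, int batchSize) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += batchSize) {
            out.add(rows.subList(i, Math.min(i + batchSize, rows.size())));
        }
        return out;
    }

    static <T> void insertAll(List<T> rows, int batchSize) {
        for (List<T> chunk : chunks(rows, batchSize)) {
            bulkCalls++; // one bulk insert per chunk, not per row
        }
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 1000; i++) rows.add(i);
        insertAll(rows, 100);
        System.out.println(bulkCalls); // prints 10: 10 round-trips instead of 1000
    }
}
```

The throughput win comes from amortizing per-call overhead (parsing, network, locking) across the whole chunk.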
[jira] [Created] (IGNITE-19559) NPE in deploy/undeploy calls in non-REPL mode
Vadim Pakhnushev created IGNITE-19559: - Summary: NPE in deploy/undeploy calls in non-REPL mode Key: IGNITE-19559 URL: https://issues.apache.org/jira/browse/IGNITE-19559 Project: Ignite Issue Type: Bug Components: cli Reporter: Vadim Pakhnushev When running ItDeployUndeployCallsTest the following exception is printed: {code:java} java.lang.NullPointerException at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1690) at org.apache.ignite.internal.cli.logger.CliLoggers.addApiClient(CliLoggers.java:65) at org.apache.ignite.internal.cli.core.rest.ApiClientFactory.getClient(ApiClientFactory.java:83) at org.apache.ignite.internal.cli.call.unit.ListUnitCall.execute(ListUnitCall.java:46) at org.apache.ignite.internal.cli.core.repl.registry.impl.UnitsRegistryImpl.lambda$updateState$1(UnitsRegistryImpl.java:57) {code} This happens due to the {{UnitsRegistryImpl}} being called from the {{DeployUnitCall}} even if we are not in the REPL mode. -- This message was sent by Atlassian Jira (v8.20.10#820010)
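The stack trace is consistent with a null key reaching ConcurrentHashMap.computeIfAbsent, which rejects null keys by contract. A minimal reproduction plus the obvious guard, with hypothetical variable names (not the actual CLI code):

```java
import java.util.concurrent.ConcurrentHashMap;

// ConcurrentHashMap disallows null keys, so computeIfAbsent(null, ...)
// throws NullPointerException -- matching the trace in CliLoggers.addApiClient.
public class NullKeyDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> clients = new ConcurrentHashMap<>();
        try {
            clients.computeIfAbsent(null, k -> "client");
        } catch (NullPointerException e) {
            System.out.println("NPE: null key is not allowed");
        }

        // Guarding the call site avoids the crash when no URL is available
        // (e.g. outside REPL mode, in this hypothetical sketch):
        String nodeUrl = null;
        if (nodeUrl != null) {
            clients.computeIfAbsent(nodeUrl, k -> "client");
        }
        System.out.println(clients.size()); // prints 0
    }
}
```

The proper fix in the issue is not to reach this code path at all outside REPL mode; the guard only shows why the exception occurs.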
[jira] [Commented] (IGNITE-19488) RemoteFragmentExecutionException when inserting more than 30 000 rows into one table
[ https://issues.apache.org/jira/browse/IGNITE-19488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17725739#comment-17725739 ] Evgeny Stanilovsky commented on IGNITE-19488: - This case can't be reproduced on the current main branch, but another issue (an erroneous plan) is present, which is fixed in the current PR. > RemoteFragmentExecutionException when inserting more than 30 000 rows into > one table > > > Key: IGNITE-19488 > URL: https://issues.apache.org/jira/browse/IGNITE-19488 > Project: Ignite > Issue Type: Bug > Components: jdbc, sql >Reporter: Igor >Assignee: Evgeny Stanilovsky >Priority: Critical > Labels: ignite-3 > Attachments: logs.zip, logs_with_ignored_erorr.zip > > > h1. Steps to reproduce > Ignite 3 main branch commit 45380a6c802203dab0d72bd1eb9fb202b2a345b0 > # Create table with 5 columns > # Insert into table rows by batches 1000 rows each batch. > # Repeat previous step until exception is thrown. > h1. Expected behaviour > More than 30 000 rows are created. > h1. Actual behaviour > An exception after 29 000 rows are inserted: > {code:java} > Exception while executing query [query=SELECT COUNT(*) FROM > rows_capacity_table]. Error message:IGN-CMN-1 > TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 > TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution > failed: nodeName=TablesAmountCapacityTest_cluster_0, > queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, > originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907 > java.sql.SQLException: Exception while executing query [query=SELECT COUNT(*) > FROM rows_capacity_table]. 
Error message:IGN-CMN-1 > TraceId:24c93463-f078-410a-8831-36b5c549a907 IGN-CMN-1 > TraceId:24c93463-f078-410a-8831-36b5c549a907 Query remote fragment execution > failed: nodeName=TablesAmountCapacityTest_cluster_0, > queryId=ecd14026-5366-4ee2-b73a-f38757d3ba4f, fragmentId=1561, > originalMessage=IGN-CMN-1 TraceId:24c93463-f078-410a-8831-36b5c549a907 > at > org.apache.ignite.internal.jdbc.proto.IgniteQueryErrorCode.createJdbcSqlException(IgniteQueryErrorCode.java:57) > at > org.apache.ignite.internal.jdbc.JdbcStatement.execute0(JdbcStatement.java:149) > at > org.apache.ignite.internal.jdbc.JdbcStatement.executeQuery(JdbcStatement.java:108) > {code} > Logs are in the attachment. > [^logs.zip] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19558) Sql. Use statistics snapshot per sql request.
Evgeny Stanilovsky created IGNITE-19558: --- Summary: Sql. Use statistics snapshot per sql request. Key: IGNITE-19558 URL: https://issues.apache.org/jira/browse/IGNITE-19558 Project: Ignite Issue Type: Bug Components: sql Affects Versions: 3.0.0-beta1 Reporter: Evgeny Stanilovsky Currently, in the scope of planning a single SQL request, it is possible to obtain different table row counts (check StatisticsImpl#getRowCount), which can lead to an erroneous or non-optimal plan. It seems we need something like a statistics snapshot for such a case. Starting points: {noformat} IgniteTableImpl.StatisticsImpl {noformat} -- This message was sent by Atlassian Jira (v8.20.10#820010)
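A statistics snapshot of the kind proposed here can be sketched as follows: the live counter keeps changing under concurrent load, but planning reads a value captured once at the start of the request. All names are illustrative (the real starting point is IgniteTableImpl.StatisticsImpl):

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: a per-request snapshot makes every getRowCount()
// call during planning return the same value, even if rows are inserted
// concurrently while the plan is being built.
public class StatsSnapshot {
    static final AtomicLong liveRowCount = new AtomicLong(1000);

    static long snapshotRowCount = -1;

    // Take the snapshot once, at the start of planning a request.
    static void beginPlanning() {
        snapshotRowCount = liveRowCount.get();
    }

    // All planner reads go through the snapshot, not the live counter.
    static long getRowCount() {
        return snapshotRowCount;
    }

    public static void main(String[] args) {
        beginPlanning();
        long first = getRowCount();
        liveRowCount.addAndGet(500); // concurrent inserts land mid-planning
        long second = getRowCount();
        System.out.println(first == second); // prints true: both reads see 1000
    }
}
```

A real implementation would scope the snapshot to the planning session object rather than a static field, so concurrent requests each get their own consistent view.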
[jira] [Created] (IGNITE-19557) Sql. Insert through JDBC with batch optimization.
Evgeny Stanilovsky created IGNITE-19557: --- Summary: Sql. Insert through JDBC with batch optimization. Key: IGNITE-19557 URL: https://issues.apache.org/jira/browse/IGNITE-19557 Project: Ignite Issue Type: Improvement Components: sql Affects Versions: 3.0.0-beta1 Reporter: Evgeny Stanilovsky JdbcQueryEventHandlerImpl#batchPrepStatementAsync processes batch rows sequentially, row by row; processing the rows as a single batch should bring an essential throughput boost. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19556) Sql. Implement cache for parsing sql requests.
[ https://issues.apache.org/jira/browse/IGNITE-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Evgeny Stanilovsky updated IGNITE-19556: Labels: calcite3-required ignite-3 (was: ) > Sql. Implement cache for parsing sql requests. > -- > > Key: IGNITE-19556 > URL: https://issues.apache.org/jira/browse/IGNITE-19556 > Project: Ignite > Issue Type: Improvement > Components: sql >Affects Versions: 3.0.0-beta1 >Reporter: Evgeny Stanilovsky >Priority: Major > Labels: calcite3-required, ignite-3 > > starting point IgniteSqlParser#parse > In the current implementation: > {noformat} > PreparedStatement insertPrepStmt ... > insertPrepStmt.addBatch(); > {noformat} > inserts rows sequentially, one by one, which leads to repeatedly parsing the > same > {noformat} > INSERT INTO ... > {noformat} > requests. > A local benchmark with 1 server node shows 12% growth with cached requests. > TpchParseBenchmark ceases to be significant at all. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19556) Sql. Implement cache for parsing sql requests.
Evgeny Stanilovsky created IGNITE-19556: --- Summary: Sql. Implement cache for parsing sql requests. Key: IGNITE-19556 URL: https://issues.apache.org/jira/browse/IGNITE-19556 Project: Ignite Issue Type: Improvement Components: sql Affects Versions: 3.0.0-beta1 Reporter: Evgeny Stanilovsky starting point IgniteSqlParser#parse In the current implementation: {noformat} PreparedStatement insertPrepStmt ... insertPrepStmt.addBatch(); {noformat} inserts rows sequentially, one by one, which leads to repeatedly parsing the same {noformat} INSERT INTO ... {noformat} requests. A local benchmark with 1 server node shows 12% growth with cached requests. TpchParseBenchmark ceases to be significant at all. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19555) Allow json rendering in Ignite 3 CLI
[ https://issues.apache.org/jira/browse/IGNITE-19555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr updated IGNITE-19555: --- Description: Current State: The majority of CLI commands, with an output resembling a tabular format, possess the {{-plain}} option. This option enables the output to be presented in a tab-separated table format, offering ease of integration with scripts based on AWK. Proposed Enhancement: It is suggested to incorporate a {{-json}} option for all CLI commands, supplementing the existing {{-plain}} option. The introduction of this option would allow output to be piped into JSON-oriented tools such as jq. Moreover, it would facilitate the usage of the output as a body for RESTful API calls. was: Current State: The majority of CLI commands, with an output resembling a tabular format, possess the {{- -plain}} option. This option enables the output to be presented in a tab-separated table format, offering ease of integration with scripts based on AWK. Proposed Enhancement: It is suggested to incorporate a {{- -json}} option for all CLI commands, supplementing the existing {{- -plain}} option. The introduction of this option would allow output to be piped into JSON-oriented tools such as jq. Moreover, it would facilitate the usage of the output as a body for RESTful API calls. > Allow json rendering in Ignite 3 CLI > > > Key: IGNITE-19555 > URL: https://issues.apache.org/jira/browse/IGNITE-19555 > Project: Ignite > Issue Type: Improvement > Components: cli >Reporter: Aleksandr >Priority: Major > Labels: ignite-3, ignite-3-cli-tool > > Current State: The majority of CLI commands, with an output resembling a > tabular format, possess the {{-plain}} option. This option enables the output > to be presented in a tab-separated table format, offering ease of integration > with scripts based on AWK. 
> Proposed Enhancement: It is suggested to incorporate a {{-json}} option for > all CLI commands, supplementing the existing {{-plain}} option. The > introduction of this option would allow output to be piped into JSON-oriented > tools such as jq. Moreover, it would facilitate the usage of the output as a > body for RESTful API calls. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19555) Allow json rendering in Ignite 3 CLI
[ https://issues.apache.org/jira/browse/IGNITE-19555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr updated IGNITE-19555: --- Description: Current State: The majority of CLI commands, with an output resembling a tabular format, possess the {{--plain}} option. This option enables the output to be presented in a tab-separated table format, offering ease of integration with scripts based on AWK. Proposed Enhancement: It is suggested to incorporate a {{--json}} option for all CLI commands, supplementing the existing {{--plain}} option. The introduction of this option would allow output to be piped into JSON-oriented tools such as jq. Moreover, it would facilitate the usage of the output as a body for RESTful API calls. was: Current State: The majority of Command Line Interface (CLI) commands, with an output resembling a tabular format, possess the '--plain' option. This option enables the output to be presented in a tab-separated table format, offering ease of integration with scripts based on AWK (Aho, Weinberger, and Kernighan). Proposed Enhancement: It is suggested to incorporate a '--json' option for all CLI commands, supplementing the existing '--plain' option. The introduction of the '--json' option would allow output to be piped into JSON-oriented tools such as 'jq'. Moreover, it would facilitate the usage of the output as a body for RESTful API calls. > Allow json rendering in Ignite 3 CLI > > > Key: IGNITE-19555 > URL: https://issues.apache.org/jira/browse/IGNITE-19555 > Project: Ignite > Issue Type: Improvement > Components: cli >Reporter: Aleksandr >Priority: Major > Labels: ignite-3, ignite-3-cli-tool > > Current State: The majority of CLI commands, with an output resembling a > tabular format, possess the {{--plain}} option. This option enables the > output to be presented in a tab-separated table format, offering ease of > integration with scripts based on AWK. 
> Proposed Enhancement: It is suggested to incorporate a {{--json}} option for > all CLI commands, supplementing the existing {{--plain}} option. The > introduction of this option would allow output to be piped into JSON-oriented > tools such as jq. Moreover, it would facilitate the usage of the output as a > body for RESTful API calls. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19555) Allow json rendering in Ignite 3 CLI
[ https://issues.apache.org/jira/browse/IGNITE-19555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr updated IGNITE-19555: --- Description: Current State: The majority of CLI commands, with an output resembling a tabular format, possess the {{--plain}} option. This option enables the output to be presented in a tab-separated table format, offering ease of integration with scripts based on AWK. Proposed Enhancement: It is suggested to incorporate a {{--json}} option for all CLI commands, supplementing the existing {{--plain}} option. The introduction of this option would allow output to be piped into JSON-oriented tools such as jq. Moreover, it would facilitate the usage of the output as a body for RESTful API calls. was: Current State: The majority of CLI commands, with an output resembling a tabular format, possess the {{--plain}} option. This option enables the output to be presented in a tab-separated table format, offering ease of integration with scripts based on AWK. Proposed Enhancement: It is suggested to incorporate a {{--json}} option for all CLI commands, supplementing the existing {{--plain}} option. The introduction of this option would allow output to be piped into JSON-oriented tools such as jq. Moreover, it would facilitate the usage of the output as a body for RESTful API calls. > Allow json rendering in Ignite 3 CLI > > > Key: IGNITE-19555 > URL: https://issues.apache.org/jira/browse/IGNITE-19555 > Project: Ignite > Issue Type: Improvement > Components: cli >Reporter: Aleksandr >Priority: Major > Labels: ignite-3, ignite-3-cli-tool > > Current State: The majority of CLI commands, with an output resembling a > tabular format, possess the {{--plain}} option. This option enables the > output to be presented in a tab-separated table format, offering ease of > integration with scripts based on AWK. 
> Proposed Enhancement: It is suggested to incorporate a {{--json}} option for > all CLI commands, supplementing the existing {{--plain}} option. The > introduction of this option would allow output to be piped into JSON-oriented > tools such as jq. Moreover, it would facilitate the usage of the output as a > body for RESTful API calls. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (IGNITE-19555) Allow json rendering in Ignite 3 CLI
[ https://issues.apache.org/jira/browse/IGNITE-19555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksandr updated IGNITE-19555: --- Description: Current State: The majority of Command Line Interface (CLI) commands, with an output resembling a tabular format, possess the '--plain' option. This option enables the output to be presented in a tab-separated table format, offering ease of integration with scripts based on AWK (Aho, Weinberger, and Kernighan). Proposed Enhancement: It is suggested to incorporate a '--json' option for all CLI commands, supplementing the existing '--plain' option. The introduction of the '--json' option would allow output to be piped into JSON-oriented tools such as 'jq'. Moreover, it would facilitate the usage of the output as a body for RESTful API calls. > Allow json rendering in Ignite 3 CLI > > > Key: IGNITE-19555 > URL: https://issues.apache.org/jira/browse/IGNITE-19555 > Project: Ignite > Issue Type: Improvement > Components: cli >Reporter: Aleksandr >Priority: Major > Labels: ignite-3, ignite-3-cli-tool > > Current State: The majority of Command Line Interface (CLI) commands, with an > output resembling a tabular format, possess the '--plain' option. This option > enables the output to be presented in a tab-separated table format, offering > ease of integration with scripts based on AWK (Aho, Weinberger, and > Kernighan). > Proposed Enhancement: It is suggested to incorporate a '--json' option for > all CLI commands, supplementing the existing '--plain' option. The > introduction of the '--json' option would allow output to be piped into > JSON-oriented tools such as 'jq'. Moreover, it would facilitate the usage of > the output as a body for RESTful API calls. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (IGNITE-19555) Allow json rendering in Ignite 3 CLI
Aleksandr created IGNITE-19555: -- Summary: Allow json rendering in Ignite 3 CLI Key: IGNITE-19555 URL: https://issues.apache.org/jira/browse/IGNITE-19555 Project: Ignite Issue Type: Improvement Components: cli Reporter: Aleksandr -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (IGNITE-19508) Design and breakdown tx on unstable topology topic.
[ https://issues.apache.org/jira/browse/IGNITE-19508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vyacheslav Koptilin reassigned IGNITE-19508: --- Assignee: Alexander Lapin > Design and breakdown tx on unstable topology topic. > --- > > Key: IGNITE-19508 > URL: https://issues.apache.org/jira/browse/IGNITE-19508 > Project: Ignite > Issue Type: Task >Reporter: Alexander Lapin >Assignee: Alexander Lapin >Priority: Major > Labels: ignite-3 > > h3. Motivation > The general approach to tx recovery/tx on unstable topology is described in > [IEP-91|https://cwiki.apache.org/confluence/display/IGNITE/IEP-91%3A+Transaction+protocol#IEP91:Transactionprotocol-Recovery] > Thus it's required to prepare a full set of tickets in order to handle: > * Data node (primaryReplica|majority) failure. > * Commit partition failure. > * Tx coordinator failure. > This excludes disaster recovery and graceful primary replica switch, which will be > covered in separate topics. > h3. Definition of Done > * Corresponding epic is created. > * Epic is populated with a full set of tasks. > * Tasks are estimated and assigned to sprints. -- This message was sent by Atlassian Jira (v8.20.10#820010)