[jira] [Created] (IGNITE-8996) Web console: Dropdown popup menu is scrolled with page body on modal dialog.
Vasiliy Sisko created IGNITE-8996:
-------------------------------------

Summary: Web console: Dropdown popup menu is scrolled with page body on modal dialog.
Key: IGNITE-8996
URL: https://issues.apache.org/jira/browse/IGNITE-8996
Project: Ignite
Issue Type: Bug
Components: wizards
Affects Versions: 2.6
Reporter: Vasiliy Sisko
Assignee: Dmitriy Shabalin

# Open the *Cluster* configuration on the *Advanced* tab of the *Configure* page.
# Open the *Import from database* dialog.
# On the *Tables* step, expand any dropdown.
# Scroll the mouse wheel over the area outside the import dialog.

On scroll, the dropdown popup is scrolled together with the background content.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
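A common remedy for this class of bug is to lock body scrolling while a modal dialog is open. The fragment below is only a sketch; the `.modal-open` class name is an assumption borrowed from common modal implementations, not taken from the Web Console codebase:

```css
/* Sketch: while a modal is open, prevent the page body from scrolling
   so wheel events cannot move the background (and any popup anchored
   to it) behind the dialog. The class name is illustrative. */
body.modal-open {
    overflow: hidden;
}
```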
[jira] [Commented] (IGNITE-8995) FailureHandler executed on error in ScanQuery's IgniteBiPredicate
[ https://issues.apache.org/jira/browse/IGNITE-8995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542513#comment-16542513 ] Dmitriy Gladkikh commented on IGNITE-8995:
--
TC: [Ignite Tests 2.4+ (Java 8)--> Run :: All|https://ci.ignite.apache.org/viewLog.html?buildId=1486044]

> FailureHandler executed on error in ScanQuery's IgniteBiPredicate
> Key: IGNITE-8995
> URL: https://issues.apache.org/jira/browse/IGNITE-8995
> Project: Ignite
> Issue Type: Bug
> Affects Versions: 2.5
> Reporter: Dmitriy Gladkikh
> Assignee: Dmitriy Gladkikh
> Priority: Major

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8995) FailureHandler executed on error in ScanQuery's IgniteBiPredicate
[ https://issues.apache.org/jira/browse/IGNITE-8995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542480#comment-16542480 ] ASF GitHub Bot commented on IGNITE-8995:

GitHub user dgladkikh opened a pull request:

https://github.com/apache/ignite/pull/4354

IGNITE-8995 FailureHandler executed on error in ScanQuery's IgniteBiPredicate.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-8995

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/4354.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

This closes #4354

commit 7d608e2c6e21b51d44f6f2c23dd3e9abd831d868
Author: dgladkikh
Date: 2018-07-13T03:59:49Z

IGNITE-8995 FailureHandler executed on error in ScanQuery's IgniteBiPredicate.

> FailureHandler executed on error in ScanQuery's IgniteBiPredicate
> Key: IGNITE-8995
> URL: https://issues.apache.org/jira/browse/IGNITE-8995
> Project: Ignite
> Issue Type: Bug
> Affects Versions: 2.5
> Reporter: Dmitriy Gladkikh
> Assignee: Dmitriy Gladkikh
> Priority: Major

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-8988) Web console: error in Readme.txt of generated project
[ https://issues.apache.org/jira/browse/IGNITE-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasiliy Sisko updated IGNITE-8988:
--
Description:
1. Readme.txt says XML configuration files are located in the /config folder, but actually they are located in src/main/resources/META-INF !screenshot-1.png!
2. Error in log on opening of README.txt, jdbc-drivers/README.txt and .dockerignore file preview:
{code:java}
Uncaught SyntaxError: Unexpected token <{code}

was:
1. Readme.txt says XML configuration files are located in the /config folder, but actually they are located in src/main/resources/META-INF !screenshot-1.png!
2. Error in log on opening of README.txt file preview:
{code:java}
Uncaught SyntaxError: Unexpected token <{code}

> Web console: error in Readme.txt of generated project
> Key: IGNITE-8988
> URL: https://issues.apache.org/jira/browse/IGNITE-8988
> Project: Ignite
> Issue Type: Bug
> Components: wizards
> Reporter: Pavel Konstantinov
> Assignee: Pavel Konstantinov
> Priority: Trivial
> Fix For: 2.7
> Attachments: screenshot-1.png
>
> 1. Readme.txt says XML configuration files are located in the /config folder, but actually they are located in src/main/resources/META-INF
> !screenshot-1.png! 2. Error in log on opening of README.txt, jdbc-drivers/README.txt and .dockerignore file preview:
> {code:java}
> Uncaught SyntaxError: Unexpected token <{code}

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-8988) Web console: error in Readme.txt of generated project
[ https://issues.apache.org/jira/browse/IGNITE-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasiliy Sisko updated IGNITE-8988:
--
Description:
1. Readme.txt says XML configuration files are located in the /config folder, but actually they are located in src/main/resources/META-INF !screenshot-1.png!
2. Error in log on opening of README.txt file preview:
{code:java}
Uncaught SyntaxError: Unexpected token <{code}

was:
Readme.txt says XML configuration files are located in the /config folder, but actually they are located in src/main/resources/META-INF !screenshot-1.png!

> Web console: error in Readme.txt of generated project
> Key: IGNITE-8988
> URL: https://issues.apache.org/jira/browse/IGNITE-8988
> Project: Ignite
> Issue Type: Bug
> Components: wizards
> Reporter: Pavel Konstantinov
> Assignee: Pavel Konstantinov
> Priority: Trivial
> Fix For: 2.7
> Attachments: screenshot-1.png
>
> 1. Readme.txt says XML configuration files are located in the /config folder, but actually they are located in src/main/resources/META-INF
> !screenshot-1.png! 2. Error in log on opening of README.txt file preview:
> {code:java}
> Uncaught SyntaxError: Unexpected token <{code}

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8995) FailureHandler executed on error in ScanQuery's IgniteBiPredicate
[ https://issues.apache.org/jira/browse/IGNITE-8995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Gladkikh reassigned IGNITE-8995:

Assignee: Dmitriy Gladkikh

> FailureHandler executed on error in ScanQuery's IgniteBiPredicate
> Key: IGNITE-8995
> URL: https://issues.apache.org/jira/browse/IGNITE-8995
> Project: Ignite
> Issue Type: Bug
> Affects Versions: 2.5
> Reporter: Dmitriy Gladkikh
> Assignee: Dmitriy Gladkikh
> Priority: Major

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8989) Web console: incorrect initial state of some checkboxes on Client Connector Configuration panel
[ https://issues.apache.org/jira/browse/IGNITE-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Konstantinov reassigned IGNITE-8989:
--
Assignee: Alexey Kuznetsov (was: Pavel Konstantinov)

> Web console: incorrect initial state of some checkboxes on Client Connector Configuration panel
> Key: IGNITE-8989
> URL: https://issues.apache.org/jira/browse/IGNITE-8989
> Project: Ignite
> Issue Type: Bug
> Reporter: Pavel Konstantinov
> Assignee: Alexey Kuznetsov
> Priority: Minor
> Attachments: screenshot-1.png
>
> !screenshot-1.png!
> These checkboxes should be ON in the UI because their default value in the source code is true.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8989) Web console: incorrect initial state of some checkboxes on Client Connector Configuration panel
[ https://issues.apache.org/jira/browse/IGNITE-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542459#comment-16542459 ] Pavel Konstantinov commented on IGNITE-8989:

Tested on the branch.

> Web console: incorrect initial state of some checkboxes on Client Connector Configuration panel
> Key: IGNITE-8989
> URL: https://issues.apache.org/jira/browse/IGNITE-8989
> Project: Ignite
> Issue Type: Bug
> Reporter: Pavel Konstantinov
> Assignee: Pavel Konstantinov
> Priority: Minor
> Attachments: screenshot-1.png
>
> !screenshot-1.png!
> These checkboxes should be ON in the UI because their default value in the source code is true.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-8995) FailureHandler executed on error in ScanQuery's IgniteBiPredicate
Dmitriy Gladkikh created IGNITE-8995:
-------------------------------------

Summary: FailureHandler executed on error in ScanQuery's IgniteBiPredicate
Key: IGNITE-8995
URL: https://issues.apache.org/jira/browse/IGNITE-8995
Project: Ignite
Issue Type: Bug
Affects Versions: 2.5
Reporter: Dmitriy Gladkikh

This code demonstrates this behavior (generic type parameters, stripped during mail extraction, are restored from the predicate's signature):

{code:java}
import java.util.Collections;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgniteBiPredicate;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

/**
 * -ea -DIGNITE_QUIET=false
 */
public class ScanQueryIgniteBiPredicateWithError {
    private static final String CACHE_NAME = "test_cache_name";

    public static void main(String[] args) {
        try (Ignite igniteServer = Ignition.start(getCfg("node_server", false));
             Ignite igniteClient = Ignition.start(getCfg("node_client", true)))
        {
            IgniteCache<Integer, BinaryObject> cache = igniteClient.cache(CACHE_NAME);

            cache.put(1, igniteClient.binary().builder("test_type").setField("field_0", "field_0_val").build());

            try (QueryCursor<Cache.Entry<Integer, BinaryObject>> cursor =
                cache.withKeepBinary().query(new ScanQuery<>(
                    new IgniteBiPredicate<Integer, BinaryObject>() {
                        @Override public boolean apply(Integer key, BinaryObject value) {
                            throw new AssertionError(); // Error.
                            //return value.field(null) != null; // Error.
                            //return true; // Ok.
                        }
                    })))
            {
                for (Cache.Entry<Integer, BinaryObject> entry : cursor)
                    // Without error in IgniteBiPredicate:
                    // Key = 1, Val = test_type [idHash=2024711353, hash=394028655, field_0=val_0]
                    System.out.printf("Key = %s, Val = %s%n", entry.getKey(), entry.getValue());
            }
        }
    }

    /**
     * @param instanceName Ignite instance name.
     * @param clientMode Client mode.
     * @return Ignite configuration.
     */
    private static IgniteConfiguration getCfg(String instanceName, boolean clientMode) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));

        TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
        tcpDiscoverySpi.setIpFinder(ipFinder);

        DataRegionConfiguration dataRegionCfg = new DataRegionConfiguration();
        dataRegionCfg.setPersistenceEnabled(true);

        DataStorageConfiguration dataStorageCfg = new DataStorageConfiguration();
        dataStorageCfg.setDefaultDataRegionConfiguration(dataRegionCfg);

        CacheConfiguration<Integer, BinaryObject> ccfg = new CacheConfiguration<Integer, BinaryObject>(CACHE_NAME)
            .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL)
            .setCacheMode(CacheMode.PARTITIONED);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName(instanceName);
        cfg.setDiscoverySpi(tcpDiscoverySpi);
        cfg.setDataStorageConfiguration(dataStorageCfg);
        cfg.setCacheConfiguration(ccfg);

        if (!clientMode)
            cfg.setAutoActivationEnabled(true);
        else
            cfg.setClientMode(true);

        return cfg;
    }
}
{code}

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
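For context, the node-stopping behavior in this ticket comes from the default failure handler reacting to the error thrown in user code. As a stopgap while the ticket is open, the handler can be swapped via configuration. The Spring XML below is only a sketch using `org.apache.ignite.failure.NoOpFailureHandler` (available since Ignite 2.5; verify the class name against your version):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Replace the default failure handler so an error thrown from
         user code (e.g. the ScanQuery predicate above) does not stop
         the node. Sketch only; tune for production use. -->
    <property name="failureHandler">
        <bean class="org.apache.ignite.failure.NoOpFailureHandler"/>
    </property>
</bean>
```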
[jira] [Commented] (IGNITE-8411) Binary Client Protocol spec: other parts clarifications
[ https://issues.apache.org/jira/browse/IGNITE-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16542096#comment-16542096 ] Denis Magda commented on IGNITE-8411:
-
[~isapego], do we still need to update the docs using the suggestions from this ticket?

> Binary Client Protocol spec: other parts clarifications
> Key: IGNITE-8411
> URL: https://issues.apache.org/jira/browse/IGNITE-8411
> Project: Ignite
> Issue Type: Improvement
> Components: documentation, thin client
> Affects Versions: 2.4
> Reporter: Alexey Kosenchuk
> Assignee: Igor Sapego
> Priority: Major
> Fix For: 2.7
>
> Issues against previous parts: IGNITE-8039, IGNITE-8212
>
> Cache Configuration
> -------------------
> [https://apacheignite.readme.io/docs/binary-client-protocol-cache-configuration-operations]
> - OP_CACHE_GET_CONFIGURATION and OP_CACHE_CREATE_WITH_CONFIGURATION - QueryEntity - Structure of QueryField:
> "default value - type Object" is absent - it is the last field of the QueryField in reality.
> - OP_CACHE_GET_CONFIGURATION - Structure of Cache Configuration:
> CacheAtomicityMode is absent - it is the first field in reality.
> MaxConcurrentAsyncOperations is absent - it is between DefaultLockTimeout and MaxQueryIterators in reality.
> The "Invalidate" field does not exist in reality.
> - The meaning and possible values of every configuration parameter must be clarified. If clarified in other docs, this spec must link to those docs.
> - Suggest combining the Cache Configuration descriptions in OP_CACHE_GET_CONFIGURATION and OP_CACHE_CREATE_WITH_CONFIGURATION to avoid duplicated descriptions.
>
> SQL and Scan Queries
> --------------------
> [https://apacheignite.readme.io/docs/binary-client-protocol-sql-operations]
> - "Flag. Pass 0 for default, or 1 to keep the value in binary form.":
> the "value in binary form" flag should be left and clarified in the operations to which it is applicable.
> - OP_QUERY_SQL:
> most of the fields in the request must be clarified. If clarified in other docs, this spec must link to those docs. For example:
> ** "Name of a type or SQL table": name of what type?
> - OP_QUERY_SQL_FIELDS:
> most of the fields in the request must be clarified. If clarified in other docs, this spec must link to those docs. For example:
> ** is there any correlation between "Query cursor page size" and "Max rows"?
> ** "Statement type": why are there only three types? What about INSERT, etc.?
> - OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE: the response does not contain a Cursor id, but the responses for all other query operations contain it. Is that intentional?
> - OP_QUERY_SCAN_CURSOR_GET_PAGE Response - Cursor id is absent in reality.
> - OP_QUERY_SCAN_CURSOR_GET_PAGE Response - Row count field: says type "long"; should be "int".
> - OP_QUERY_SCAN:
> the format and rules of the Filter object must be clarified. If clarified in other docs, this spec must link to those docs.
> - OP_QUERY_SCAN:
> in general, it is not clear how this operation should be supported on platforms other than the one mentioned in the "Filter platform" field.
> - OP_QUERY_SCAN: "Number of partitions to query" should be updated to "A partition number to query".
>
> Binary Types
> ------------
> [https://apacheignite.readme.io/docs/binary-client-protocol-binary-type-operations]
> - It should be explained somewhere when and why these operations need to be supported by a client.
> - Type id and Field id:
> it should be clarified that before an id calculation, Type and Field names must be converted to lower case.
> - OP_GET_BINARY_TYPE and OP_PUT_BINARY_TYPE - BinaryField - Type id:
> in reality it is not a type id (hash code) but a type code (1, 2, ... 10, ... 103, ...).
> - OP_GET_BINARY_TYPE and OP_PUT_BINARY_TYPE - "Affinity key field name":
> it should be explained what this is. If explained in other docs, this spec must link to those docs.
> - OP_PUT_BINARY_TYPE - schema id:
> the mandatory algorithm of schema id calculation must be described somewhere. If described in other docs, this spec must link to those docs.
> - OP_REGISTER_BINARY_TYPE_NAME and OP_GET_BINARY_TYPE_NAME:
> it should be explained when and why these operations need to be supported by a client, and how they should be supported on platforms other than the one mentioned in the "Platform id" field.
> - OP_REGISTER_BINARY_TYPE_NAME:
> Type name - is it the "full" or "short" name here? Type id - is it a hash of the "full" or "short" name here?

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
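The lower-casing rule mentioned in the ticket can be illustrated with a short sketch. It mirrors what Ignite's default `BinaryBasicIdMapper` is understood to do (lower-case the name, then apply Java's `String.hashCode`); treat the exact algorithm as an assumption to be verified against the spec being documented:

```java
// Sketch of the name-to-id calculation discussed above: the type or
// field name is lower-cased before hashing, so "MyType" and "mytype"
// must produce the same id on every client implementation.
public class BinaryIdSketch {
    /** Assumed algorithm: lower-case the name, then Java String.hashCode. */
    public static int id(String name) {
        return name.toLowerCase().hashCode();
    }

    public static void main(String[] args) {
        System.out.println(id("MyType") == id("mytype")); // true: case-insensitive ids
    }
}
```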
[jira] [Created] (IGNITE-8994) Configuring dedicated volumes for WAL and data with Kubernetes
Denis Magda created IGNITE-8994:
---

Summary: Configuring dedicated volumes for WAL and data with Kubernetes
Key: IGNITE-8994
URL: https://issues.apache.org/jira/browse/IGNITE-8994
Project: Ignite
Issue Type: Task
Components: documentation
Reporter: Denis Magda
Fix For: 2.7

The current StatefulSet documentation requests only one persistent volume for both the WAL and the data/index files:
https://apacheignite.readme.io/docs/stateful-deployment#section-statefulset-deployment

However, according to the Ignite performance guide, the WAL has to be located on a dedicated volume:
https://apacheignite.readme.io/docs/durable-memory-tuning#section-separate-disk-device-for-wal

Provide a StatefulSet configuration that shows how to request separate volumes for the WAL and the data/index files. If needed, provide YAML configs for StorageClass and volume claims.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
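The separate-volume setup requested above can be sketched as a StatefulSet fragment with two `volumeClaimTemplates`. The claim names, mount paths, image tag, and sizes below are illustrative assumptions, not the documented configuration:

```yaml
# Sketch only: claim names, mount paths, and sizes are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ignite
spec:
  serviceName: ignite
  replicas: 2
  selector:
    matchLabels:
      app: ignite
  template:
    metadata:
      labels:
        app: ignite
    spec:
      containers:
        - name: ignite
          image: apacheignite/ignite:2.6.0
          volumeMounts:
            - name: ignite-persistence   # data/index files
              mountPath: /persistence
            - name: ignite-wal           # dedicated volume for the WAL
              mountPath: /wal
  volumeClaimTemplates:
    - metadata:
        name: ignite-persistence
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
    - metadata:
        name: ignite-wal
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 5Gi
```

Each template yields a separate PersistentVolumeClaim per pod, so the WAL can land on its own (ideally faster) disk device as the performance guide recommends.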
[jira] [Created] (IGNITE-8993) Configuring sticky LoadBalancer for Ignite Service with Kubernetes
Denis Magda created IGNITE-8993:
---

Summary: Configuring sticky LoadBalancer for Ignite Service with Kubernetes
Key: IGNITE-8993
URL: https://issues.apache.org/jira/browse/IGNITE-8993
Project: Ignite
Issue Type: Task
Components: documentation
Reporter: Denis Magda
Fix For: 2.7

The Ignite service used for Ignite pods' auto-discovery and for access to the cluster from remote applications is deployed as a LoadBalancer:
https://apacheignite.readme.io/docs/ignite-service

This might lead to problems when a stateful session is needed between an app and the cluster. For instance, the Ignite JDBC driver preserves the state of an opened connection, meaning that once the LoadBalancer connects the driver to an Ignite pod, all the queries have to be redirected to that Ignite pod only (unless the pod is down).

We need to show how to configure a sticky LoadBalancer that will assign a client connection to a specific pod and won't change it.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
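One way to get the stickiness described above is Kubernetes' built-in `sessionAffinity: ClientIP` on the Service. The fragment below is a sketch, not the documentation's final answer (service name, labels, and the 10800 thin-client/JDBC port are assumptions about the deployment):

```yaml
# Sketch only: a Service that pins each client IP to one backing pod.
apiVersion: v1
kind: Service
metadata:
  name: ignite
spec:
  type: LoadBalancer
  sessionAffinity: ClientIP        # sticky: same client IP -> same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800        # stickiness window (3 hours)
  selector:
    app: ignite
  ports:
    - name: jdbc
      port: 10800
      targetPort: 10800
```

Note that `ClientIP` affinity is applied by kube-proxy; a cloud load balancer in front of it may need its own stickiness setting as well, which is part of what this ticket should document.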
[jira] [Commented] (IGNITE-8982) SQL TX: reuse H2 connections
[ https://issues.apache.org/jira/browse/IGNITE-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541970#comment-16541970 ] Dmitriy Pavlov commented on IGNITE-8982: [~ruchirc] thank you > SQL TX: reuse H2 connections > > > Key: IGNITE-8982 > URL: https://issues.apache.org/jira/browse/IGNITE-8982 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Pavlukhin >Assignee: Ivan Pavlukhin >Priority: Major > > H2 Connection creation is not very fast. Reusing already created connections > could speed up execution in several cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8982) SQL TX: reuse H2 connections
[ https://issues.apache.org/jira/browse/IGNITE-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541969#comment-16541969 ] ruchir choudhry commented on IGNITE-8982: - No problem, changed it to Ivan Pavlukhin's name. > SQL TX: reuse H2 connections > > > Key: IGNITE-8982 > URL: https://issues.apache.org/jira/browse/IGNITE-8982 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Pavlukhin >Assignee: Ivan Pavlukhin >Priority: Major > > H2 Connection creation is not very fast. Reusing already created connections > could speed up execution in several cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8982) SQL TX: reuse H2 connections
[ https://issues.apache.org/jira/browse/IGNITE-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ruchir choudhry reassigned IGNITE-8982: --- Assignee: Ivan Pavlukhin (was: ruchir choudhry) > SQL TX: reuse H2 connections > > > Key: IGNITE-8982 > URL: https://issues.apache.org/jira/browse/IGNITE-8982 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Pavlukhin >Assignee: Ivan Pavlukhin >Priority: Major > > H2 Connection creation is not very fast. Reusing already created connections > could speed up execution in several cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
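The reuse idea behind IGNITE-8982 can be sketched generically: cache one connection per thread instead of creating a new one for every operation. The class below is purely illustrative (a plain `ThreadLocal` cache with a stand-in connection object), not the actual Ignite implementation:

```java
// Sketch of per-thread connection reuse. Each thread lazily builds one
// "connection" and keeps reusing it, avoiding repeated creation cost.
public class ConnectionReuseSketch {
    /** Counts how many stand-in connections were actually built. */
    static int created = 0;

    /** One cached connection per thread; Object stands in for a real H2 Connection. */
    static final ThreadLocal<Object> CONN = ThreadLocal.withInitial(() -> {
        created++;
        return new Object();
    });

    static Object connection() {
        return CONN.get();
    }

    public static void main(String[] args) {
        Object c1 = connection();
        Object c2 = connection();
        System.out.println(c1 == c2); // same thread reuses the same cached connection
    }
}
```

A real pool would also have to handle connection invalidation and cleanup on thread exit; this sketch only shows the reuse path.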
[jira] [Commented] (IGNITE-8968) Failed to shutdown node due to "Error saving backup value"
[ https://issues.apache.org/jira/browse/IGNITE-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541960#comment-16541960 ] Pavel Vinokurov commented on IGNITE-8968: - [~agoncharuk] Please review > Failed to shutdown node due to "Error saving backup value" > -- > > Key: IGNITE-8968 > URL: https://issues.apache.org/jira/browse/IGNITE-8968 > Project: Ignite > Issue Type: Bug > Components: cache, persistence >Affects Versions: 2.4 >Reporter: Pavel Vinokurov >Assignee: Pavel Vinokurov >Priority: Major > > On node shutdown ignite prints following logs infinitely: > org.apache.ignite.internal.NodeStoppingException: Operation has been > cancelled (node is stopping). > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl.java:1263) > at > org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:370) > at > org.apache.ignite.internal.processors.cache.GridCacheMapEntry.storeValue(GridCacheMapEntry.java:3626) > at > org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:2783) > at > org.apache.ignite.internal.processors.cache.GridCacheUtils$22.process(GridCacheUtils.java:1734) > at > org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1782) > at > org.apache.ignite.internal.processors.cache.GridCacheUtils$22.apply(GridCacheUtils.java:1724) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8861) Wrong method call in IgniteService documentation snippet
[ https://issues.apache.org/jira/browse/IGNITE-8861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541950#comment-16541950 ] Dmitriy Pavlov commented on IGNITE-8861:
-
[~roman_s] can we resolve this issue? I can see the doc was updated with the new method name.

> Wrong method call in IgniteService documentation snippet
> Key: IGNITE-8861
> URL: https://issues.apache.org/jira/browse/IGNITE-8861
> Project: Ignite
> Issue Type: Improvement
> Components: documentation
> Reporter: Oleg Ostanin
> Assignee: Roman Shtykh
> Priority: Minor
>
> [https://apacheignite.readme.io/docs/service-example]
> {{ClusterGroup cacheGrp = ignite.cluster().forCache("myCounterService");}}
> {{This string does not compile if we use the 2.5 version:}}
> {{[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.6.1:compile (default-compile) on project poc-tester: Compilation failure}}
> {{[ERROR] /home/oostanin/gg-qa/poc-tester/src/main/java/org/apache/ignite/scenario/ServiceTask.java:[53,51] cannot find symbol}}
> {{[ERROR] symbol: method forCache(java.lang.String)}}
> {{[ERROR] location: interface org.apache.ignite.IgniteCluster}}

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8774) Daemon moves cluster to compatibility mode when it joins
[ https://issues.apache.org/jira/browse/IGNITE-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541937#comment-16541937 ] Dmitriy Pavlov commented on IGNITE-8774: This PR does not contain run-all (or it was removed by TC for some reason). I'd like to check the results automatically, so I've triggered https://ci.ignite.apache.org/viewQueued.html?itemId=1484835=queuedBuildOverviewTab > Daemon moves cluster to compatibility mode when it joins > - > > Key: IGNITE-8774 > URL: https://issues.apache.org/jira/browse/IGNITE-8774 > Project: Ignite > Issue Type: Bug >Reporter: Stanislav Lukyanov >Assignee: Aleksey Plekhanov >Priority: Major > Fix For: 2.7 > > > When a daemon node joins, the cluster seems to switch to compatibility mode > (allowing nodes without baseline support). This prevents baseline nodes from > being restarted. > Example: > {code} > Ignite ignite1 = > IgnitionEx.start("examples/config/persistentstore/example-persistent-store.xml", > "srv1"); > Ignite ignite2 = > IgnitionEx.start("examples/config/persistentstore/example-persistent-store.xml", > "srv2"); > ignite2.cluster().active(true); > IgnitionEx.setClientMode(true); > IgnitionEx.setDaemon(true); > Ignite daemon = > IgnitionEx.start("examples/config/persistentstore/example-persistent-store.xml", > "daemon"); > IgnitionEx.setClientMode(false); > IgnitionEx.setDaemon(false); > ignite2.close(); > IgnitionEx.start("examples/config/persistentstore/example-persistent-store.xml", > "srv2"); > {code} > The attempt to restart ignite2 throws an exception: > {code} > [2018-06-11 18:45:25,766][ERROR][tcp-disco-msg-worker-#39%srv2%][root] > Critical system error detected. 
Will be handled accordingly to configured > handler [hnd=class o.a.i.failure.StopNodeOrHaltFailureHandler, > failureCtx=FailureContext [type=SYSTEM_WORKER_TERMINATION, err=class > o.a.i.IgniteException: Node with BaselineTopology cannot join mixed cluster > running in compatibility mode]] > class org.apache.ignite.IgniteException: Node with BaselineTopology cannot > join mixed cluster running in compatibility mode > at > org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onGridDataReceived(GridClusterStateProcessor.java:714) > at > org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:883) > at > org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:1939) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4354) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2744) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775) > at > org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621) > at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-584) Need to make sure that scan query returns consistent results on topology changes
[ https://issues.apache.org/jira/browse/IGNITE-584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541912#comment-16541912 ] Stanilovsky Evgeny commented on IGNITE-584: --- looks ok > Need to make sure that scan query returns consistent results on topology > changes > > > Key: IGNITE-584 > URL: https://issues.apache.org/jira/browse/IGNITE-584 > Project: Ignite > Issue Type: Sub-task > Components: data structures >Affects Versions: 1.9, 2.0, 2.1 >Reporter: Artem Shutak >Assignee: Stanilovsky Evgeny >Priority: Major > Labels: MakeTeamcityGreenAgain, Muted_test > Fix For: 2.7 > > Attachments: tc1.png > > > Consistent results on topology changes were implemented for SQL queries, but > it looks like this still does not work for scan queries. > This affects 'cache set' tests, since a set uses a scan query for set iteration > (to be unmuted on TC): > GridCacheSetAbstractSelfTest testNodeJoinsAndLeaves and > testNodeJoinsAndLeavesCollocated; > Also see the TODOs in GridCacheSetFailoverAbstractSelfTest -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8955) Checkpoint can't get write lock if massive eviction on node start started
[ https://issues.apache.org/jira/browse/IGNITE-8955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541900#comment-16541900 ] Dmitriy Pavlov commented on IGNITE-8955: Pushed to master test fix of http://apache-ignite-developers.2346864.n4.nabble.com/MTCGA-new-failures-in-builds-1479951-needs-to-be-handled-td32499.html commit: https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=commit;h=584a88d4285a7db4d10dcdfa235633498f96e583 > Checkpoint can't get write lock if massive eviction on node start started > - > > Key: IGNITE-8955 > URL: https://issues.apache.org/jira/browse/IGNITE-8955 > Project: Ignite > Issue Type: Bug >Reporter: Eduard Shangareev >Assignee: Eduard Shangareev >Priority: Major > Fix For: 2.7 > > > Many sys-threads start eviction and start being throttled, so they couldn't > proceed because of it, while checkpoint couldn't start because they hold > checkpoint read lock. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-8776) Eviction policy MBeans are never registered if evictionPolicyFactory is used
[ https://issues.apache.org/jira/browse/IGNITE-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541862#comment-16541862 ] Dmitriy Pavlov edited comment on IGNITE-8776 at 7/12/18 4:04 PM: - Merged to master. I've - updated PR in accordance with code style, - added empty lines after variable declaration and before return. - added test to suite so it could be run on TC. [~slukyanov], thank you for review. [~kcheng.mvp] thank you for contribution. was (Author: dpavlov): Merged to master. I've - updated PR in accordance with code style, - added empty lines after variable declaration and before return. - added test to suite so it could be run on TC. [~slukyanov], thank you for review > Eviction policy MBeans are never registered if evictionPolicyFactory is used > > > Key: IGNITE-8776 > URL: https://issues.apache.org/jira/browse/IGNITE-8776 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.5 >Reporter: Stanislav Lukyanov >Assignee: kcheng.mvp >Priority: Minor > Labels: newbie > Fix For: 2.7 > > > Eviction policy MBeans, such as LruEvictionPolicyMBean, are never registered > if evictionPolicyFactory is set instead of evictionPolicy (the latter is > deprecated). > This happens because GridCacheProcessor::registerMbean attempts to find > either an *MBean interface or IgniteMBeanAware interface on the passed > object. It works for LruEvictionPolicy but not for LruEvictionPolicyFactory > (which doesn't implement these interfaces). > The code needs to be adjusted to handle factories correctly. > New tests are needed to make sure that all standard beans are registered > (IgniteKernalMbeansTest does that for kernal mbeans - need the same for cache > beans). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
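The fix direction described above (unwrap the configured factory before scanning for MBean-style interfaces) can be illustrated with a small sketch. Here java.util.function.Supplier stands in for javax.cache.configuration.Factory and the local MBeanAware interface stands in for Ignite's IgniteMBeanAware; none of the names below are Ignite's actual API.

```java
import java.util.function.Supplier;

/**
 * Sketch: resolve the object whose interfaces should be scanned for an
 * MBean. A configured factory implements neither the *MBean nor the
 * MBean-aware interface, so an instance must be created and scanned instead.
 */
public class MBeanTargetResolver {
    /** Illustrative stand-in for an MBean-aware interface. */
    public interface MBeanAware {
        Object getMBean();
    }

    /** Returns the object to scan for MBean interfaces. */
    public static Object resolveMBeanTarget(Object candidate) {
        // If a factory was configured (e.g. evictionPolicyFactory),
        // create the policy instance and scan that instead.
        if (candidate instanceof Supplier)
            candidate = ((Supplier<?>) candidate).get();

        if (candidate instanceof MBeanAware)
            return ((MBeanAware) candidate).getMBean();

        return candidate;
    }
}
```

With this shape, both `evictionPolicy` objects and `evictionPolicyFactory` objects end up resolving to something that exposes the MBean interface.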
[jira] [Assigned] (IGNITE-8914) SQL TX: Partition update counter fix.
[ https://issues.apache.org/jira/browse/IGNITE-8914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Kondakov reassigned IGNITE-8914: -- Assignee: Roman Kondakov > SQL TX: Partition update counter fix. > - > > Key: IGNITE-8914 > URL: https://issues.apache.org/jira/browse/IGNITE-8914 > Project: Ignite > Issue Type: Bug > Components: sql >Reporter: Roman Kondakov >Assignee: Roman Kondakov >Priority: Major > Labels: mvcc > > Partition counters are broken in mvcc branch. This leads to the faulty > partition recreation during a rebalance because of the differences in update > counters on primary and backup nodes. Reproducer: > {{CacheMvccReplicatedSqlCoordinatorFailoverTest#testUpdate_N_Objects_ClientServer_Backups0_Sql_Persistence}} > We need to fix it somehow, -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8974) MVCC TX: Vacuum cleanup version obtaining optimization.
[ https://issues.apache.org/jira/browse/IGNITE-8974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Roman Kondakov reassigned IGNITE-8974: -- Assignee: (was: Roman Kondakov) > MVCC TX: Vacuum cleanup version obtaining optimization. > --- > > Key: IGNITE-8974 > URL: https://issues.apache.org/jira/browse/IGNITE-8974 > Project: Ignite > Issue Type: Improvement > Components: cache, sql >Reporter: Roman Kondakov >Priority: Major > Labels: mvcc > > At the moment the vacuum process obtains a cleanup version the same way transactions do. This implies some unnecessary complications and even a minor performance drop, due to calculating an entire tx snapshot instead of just a cleanup version number and to sending unnecessary tx end acks back to the coordinator. Possible solutions are: > * Locally cache the cleanup version from the last obtained tx snapshot and use it in the vacuum process. But this way not all outdated versions can be cleaned (i.e. keys updated by that last tx). > * Implement a special method for calculating the cleanup version on the coordinator side, plus Request and Response messages for vacuum run on a non-coordinator node. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8957) testFailGetLock() constantly fails. Last entry checkpoint history can be empty
[ https://issues.apache.org/jira/browse/IGNITE-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541833#comment-16541833 ] Ivan Rakov commented on IGNITE-8957: [~sergey-chugunov], let's make output clear for cases when there are one or zero segments cleared. I propose [idx] and [] correspondingly. > testFailGetLock() constantly fails. Last entry checkpoint history can be empty > -- > > Key: IGNITE-8957 > URL: https://issues.apache.org/jira/browse/IGNITE-8957 > Project: Ignite > Issue Type: Bug > Components: persistence >Affects Versions: 2.7 >Reporter: Maxim Muzafarov >Assignee: Andrew Medvedev >Priority: Major > Labels: MakeTeamcityGreenAgain > > IgniteChangeGlobalStateTest#testFailGetLock constantly fails with exception: > {code} > java.lang.AssertionError > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.onCheckpointFinished(CheckpointHistory.java:205) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.markCheckpointEnd(GridCacheDatabaseSharedManager.java:3654) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.doCheckpoint(GridCacheDatabaseSharedManager.java:3178) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.body(GridCacheDatabaseSharedManager.java:2953) > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at java.lang.Thread.run(Thread.java:748) > {code} > As Sergey Chugunov > [mentioned|https://issues.apache.org/jira/browse/IGNITE-8737?focusedCommentId=16535062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16535062], > issue can be solved different ways: > {quote} > It seems we missed a case when lastEntry may be empty. We may choose here > from two options: > * Check if histMap is empty inside onCheckpointFinished. 
If it is just don't > log anything (it was the very first checkpoint). > * Check in caller that there is no history, calculate necessary index in > caller and pass it to onCheckpointFinished to prepare correct log > message.{quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
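The first option quoted above (just don't log anything when the history map is empty, i.e. on the very first checkpoint) can be sketched as follows. CheckpointHistorySketch and its map layout are simplified stand-ins, not Ignite's actual CheckpointHistory class.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

/**
 * Sketch: guard onCheckpointFinished against an empty history map.
 * The reported AssertionError fired because lastEntry was assumed to
 * exist, which is false for the very first checkpoint.
 */
public class CheckpointHistorySketch {
    /** Checkpoint timestamp -> checkpoint id (simplified). */
    private final NavigableMap<Long, String> histMap = new TreeMap<>();

    public void addCheckpoint(long ts, String id) {
        histMap.put(ts, id);
    }

    /** Returns the log line, or null when there is nothing to log yet. */
    public String onCheckpointFinished() {
        // Option 1 from the quote: skip logging for an empty history.
        if (histMap.isEmpty())
            return null;

        return "Last checkpoint: " + histMap.lastEntry().getValue();
    }
}
```

The second option would instead compute the index in the caller and pass it in, keeping onCheckpointFinished free of the emptiness check.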
[jira] [Commented] (IGNITE-8957) testFailGetLock() constantly fails. Last entry checkpoint history can be empty
[ https://issues.apache.org/jira/browse/IGNITE-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541828#comment-16541828 ] Maxim Muzafarov commented on IGNITE-8957: - [~sergey-chugunov], I haven't seen any changes regarding printing the `range` for cleared segments, or have I missed something? Will we do it in another task? Also, I've left some comments in the PR. > testFailGetLock() constantly fails. Last entry checkpoint history can be empty > -- > > Key: IGNITE-8957 > URL: https://issues.apache.org/jira/browse/IGNITE-8957 > Project: Ignite > Issue Type: Bug > Components: persistence >Affects Versions: 2.7 >Reporter: Maxim Muzafarov >Assignee: Andrew Medvedev >Priority: Major > Labels: MakeTeamcityGreenAgain > > IgniteChangeGlobalStateTest#testFailGetLock constantly fails with exception: > {code} > java.lang.AssertionError > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.onCheckpointFinished(CheckpointHistory.java:205) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.markCheckpointEnd(GridCacheDatabaseSharedManager.java:3654) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.doCheckpoint(GridCacheDatabaseSharedManager.java:3178) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.body(GridCacheDatabaseSharedManager.java:2953) > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at java.lang.Thread.run(Thread.java:748) > {code} > As Sergey Chugunov > [mentioned|https://issues.apache.org/jira/browse/IGNITE-8737?focusedCommentId=16535062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16535062], > issue can be solved different ways: > {quote} > It seems we missed a case when lastEntry may be empty. 
We may choose here > from two options: > * Check if histMap is empty inside onCheckpointFinished. If it is just don't > log anything (it was the very first checkpoint). > * Check in caller that there is no history, calculate necessary index in > caller and pass it to onCheckpointFinished to prepare correct log > message.{quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8957) testFailGetLock() constantly fails. Last entry checkpoint history can be empty
[ https://issues.apache.org/jira/browse/IGNITE-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541794#comment-16541794 ] Sergey Chugunov commented on IGNITE-8957: - [~ivan.glukos], It is already implemented, you can see the change in the PR. > testFailGetLock() constantly fails. Last entry checkpoint history can be empty > -- > > Key: IGNITE-8957 > URL: https://issues.apache.org/jira/browse/IGNITE-8957 > Project: Ignite > Issue Type: Bug > Components: persistence >Affects Versions: 2.7 >Reporter: Maxim Muzafarov >Assignee: Andrew Medvedev >Priority: Major > Labels: MakeTeamcityGreenAgain > > IgniteChangeGlobalStateTest#testFailGetLock constantly fails with exception: > {code} > java.lang.AssertionError > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.onCheckpointFinished(CheckpointHistory.java:205) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.markCheckpointEnd(GridCacheDatabaseSharedManager.java:3654) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.doCheckpoint(GridCacheDatabaseSharedManager.java:3178) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.body(GridCacheDatabaseSharedManager.java:2953) > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at java.lang.Thread.run(Thread.java:748) > {code} > As Sergey Chugunov > [mentioned|https://issues.apache.org/jira/browse/IGNITE-8737?focusedCommentId=16535062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16535062], > issue can be solved different ways: > {quote} > It seems we missed a case when lastEntry may be empty. We may choose here > from two options: > * Check if histMap is empty inside onCheckpointFinished. If it is just don't > log anything (it was the very first checkpoint). 
> * Check in caller that there is no history, calculate necessary index in > caller and pass it to onCheckpointFinished to prepare correct log > message.{quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8783) Failover tests periodically cause hanging of the whole Data Structures suite on TC
[ https://issues.apache.org/jira/browse/IGNITE-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541789#comment-16541789 ] Ilya Lantukh commented on IGNITE-8783: -- [~avinogradov], I think this code was written to avoid race between handling ack and processing node failure. As far as I understand, there is no mechanism to cancel latch for outdated topology version. > Failover tests periodically cause hanging of the whole Data Structures suite > on TC > -- > > Key: IGNITE-8783 > URL: https://issues.apache.org/jira/browse/IGNITE-8783 > Project: Ignite > Issue Type: Bug > Components: data structures >Reporter: Ivan Rakov >Assignee: Anton Vinogradov >Priority: Major > Labels: MakeTeamcityGreenAgain > > History of suite runs: > https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E > Chance of suite hang is 18% in master (based on previous 50 runs). > Hang is always caused by one of the following failover tests: > {noformat} > GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange > GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8957) testFailGetLock() constantly fails. Last entry checkpoint history can be empty
[ https://issues.apache.org/jira/browse/IGNITE-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541787#comment-16541787 ] Ivan Rakov commented on IGNITE-8957: +1 for printing segment range. Let's fix it here or create a separate ticket for this change. > testFailGetLock() constantly fails. Last entry checkpoint history can be empty > -- > > Key: IGNITE-8957 > URL: https://issues.apache.org/jira/browse/IGNITE-8957 > Project: Ignite > Issue Type: Bug > Components: persistence >Affects Versions: 2.7 >Reporter: Maxim Muzafarov >Assignee: Andrew Medvedev >Priority: Major > Labels: MakeTeamcityGreenAgain > > IgniteChangeGlobalStateTest#testFailGetLock constantly fails with exception: > {code} > java.lang.AssertionError > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.onCheckpointFinished(CheckpointHistory.java:205) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.markCheckpointEnd(GridCacheDatabaseSharedManager.java:3654) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.doCheckpoint(GridCacheDatabaseSharedManager.java:3178) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager$Checkpointer.body(GridCacheDatabaseSharedManager.java:2953) > at > org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at java.lang.Thread.run(Thread.java:748) > {code} > As Sergey Chugunov > [mentioned|https://issues.apache.org/jira/browse/IGNITE-8737?focusedCommentId=16535062=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16535062], > issue can be solved different ways: > {quote} > It seems we missed a case when lastEntry may be empty. We may choose here > from two options: > * Check if histMap is empty inside onCheckpointFinished. If it is just don't > log anything (it was the very first checkpoint). 
> * Check in caller that there is no history, calculate necessary index in > caller and pass it to onCheckpointFinished to prepare correct log > message.{quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8783) Failover tests periodically cause hanging of the whole Data Structures suite on TC
[ https://issues.apache.org/jira/browse/IGNITE-8783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541697#comment-16541697 ] Anton Vinogradov commented on IGNITE-8783: -- The hang reason was found in {{org.apache.ignite.internal.processors.cache.distributed.dht.preloader.latch.ExchangeLatchManager#createClientLatch}}, where you can see this code {noformat} // There is final ack for created latch. if (pendingAcks.containsKey(latchId)) { latch.complete(); pendingAcks.remove(latchId); // this causes pending-ack loss when a coordinator failure has not been handled yet (e.g. we are handling another node failure) } else clientLatches.put(latchId, latch); {noformat} so I propose replacing this code with simply {noformat} clientLatches.put(latchId, latch); {noformat} [~Jokser], could you please explain the idea of handling the final message from the old coordinator? As far as I can see, latches will be recreated on each topology change and acks will be resent. > Failover tests periodically cause hanging of the whole Data Structures suite > on TC > -- > > Key: IGNITE-8783 > URL: https://issues.apache.org/jira/browse/IGNITE-8783 > Project: Ignite > Issue Type: Bug > Components: data structures >Reporter: Ivan Rakov >Assignee: Anton Vinogradov >Priority: Major > Labels: MakeTeamcityGreenAgain > > History of suite runs: > https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_DataStructures=buildTypeHistoryList_IgniteTests24Java8=%3Cdefault%3E > Chance of suite hang is 18% in master (based on previous 50 runs). > Hang is always caused by one of the following failover tests: > {noformat} > GridCacheReplicatedDataStructuresFailoverSelfTest#testAtomicSequenceConstantTopologyChange > GridCachePartitionedDataStructuresFailoverSelfTest#testFairReentrantLockConstantTopologyChangeNonFailoverSafe > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-5539) MemoryMetrics.getTotalAllocatedPages return 0 when persistence is enabled
[ https://issues.apache.org/jira/browse/IGNITE-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541665#comment-16541665 ] Stanislav Lukyanov commented on IGNITE-5539: Cannot reproduce this in 2.4+. Seems like it was fixed during the migration from MemoryPolicyConfiguration to DataRegionConfiguration. Closing as Cannot Reproduce. > MemoryMetrics.getTotalAllocatedPages return 0 when persistence is enabled > - > > Key: IGNITE-5539 > URL: https://issues.apache.org/jira/browse/IGNITE-5539 > Project: Ignite > Issue Type: Bug > Components: persistence >Affects Versions: 2.1 >Reporter: Alexey Kuznetsov >Assignee: Sergey Chugunov >Priority: Major > Labels: iep-6 > > In memory only mode metrics show some not zero values. > With persistence it shows zero. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (IGNITE-5539) MemoryMetrics.getTotalAllocatedPages return 0 when persistence is enabled
[ https://issues.apache.org/jira/browse/IGNITE-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stanislav Lukyanov resolved IGNITE-5539. Resolution: Cannot Reproduce > MemoryMetrics.getTotalAllocatedPages return 0 when persistence is enabled > - > > Key: IGNITE-5539 > URL: https://issues.apache.org/jira/browse/IGNITE-5539 > Project: Ignite > Issue Type: Bug > Components: persistence >Affects Versions: 2.1 >Reporter: Alexey Kuznetsov >Assignee: Sergey Chugunov >Priority: Major > Labels: iep-6 > > In memory only mode metrics show some not zero values. > With persistence it shows zero. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8992) Wrong log when LongJVMPauseDetector stops the worker thread
[ https://issues.apache.org/jira/browse/IGNITE-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541662#comment-16541662 ] Pavel Pereslegin commented on IGNITE-8992: -- LGTM > Wrong log when LongJVMPauseDetector stops the worker thread > --- > > Key: IGNITE-8992 > URL: https://issues.apache.org/jira/browse/IGNITE-8992 > Project: Ignite > Issue Type: Bug >Reporter: Denis Garus >Assignee: Denis Garus >Priority: Minor > > When LongJVMPauseDetector stops the worker thread, a log will contain follow > error: > [2018-07-12 > 12:57:28,332][ERROR][jvm-pause-detector-worker][CacheMetricsEnableRuntimeTest1] > jvm-pause-detector-worker has been interrupted > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.ignite.internal.LongJVMPauseDetector$1.run(LongJVMPauseDetector.java:97) > The error must be only if worker thread stopped unintentionally. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
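The behavior the ticket asks for (log an error on interrupt only when the worker was not stopped deliberately) can be sketched with a small hypothetical worker. The names below are illustrative and this is not Ignite's LongJVMPauseDetector code.

```java
/**
 * Sketch: a worker thread that treats an interrupt as an error only
 * when it was not requested to stop first.
 */
public class PauseDetectorSketch {
    private volatile boolean stopping;
    private volatile boolean errorLogged;

    private final Thread worker = new Thread(() -> {
        try {
            Thread.sleep(60_000); // stand-in for the detection loop
        } catch (InterruptedException e) {
            // Only an unexpected interrupt is an error condition;
            // a deliberate stop() sets the flag before interrupting.
            if (!stopping)
                errorLogged = true;
        }
    });

    public void start() { worker.start(); }

    /** Deliberate shutdown: mark intent first, then interrupt. */
    public void stop() throws InterruptedException {
        stopping = true;
        worker.interrupt();
        worker.join();
    }

    public boolean errorLogged() { return errorLogged; }
}
```

Setting the flag before calling interrupt() is the key ordering: the catch block then reliably sees the stop as intentional, even if the interrupt lands before the sleep begins.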
[jira] [Assigned] (IGNITE-8581) MVCC TX: data streamer support
[ https://issues.apache.org/jira/browse/IGNITE-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Pavlukhin reassigned IGNITE-8581: -- Assignee: Ivan Pavlukhin > MVCC TX: data streamer support > -- > > Key: IGNITE-8581 > URL: https://issues.apache.org/jira/browse/IGNITE-8581 > Project: Ignite > Issue Type: Bug > Components: sql >Reporter: Sergey Kalashnikov >Assignee: Ivan Pavlukhin >Priority: Major > Labels: mvcc, sql > > Add support for data streamer for mvcc caches. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-8645) CacheMetrics.getCacheTxCommits() doesn't include transactions started on client node
[ https://issues.apache.org/jira/browse/IGNITE-8645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527303#comment-16527303 ] Alexey Kuznetsov edited comment on IGNITE-8645 at 7/12/18 1:20 PM: --- [~guseinov] [~dpavlov] Hi Ticket is ready for review. Can you review it or ask somebody to review it? was (Author: alexey kuznetsov): [~guseinov] Hi Ticket is ready for review. Can you review it or ask somebody to review it? > CacheMetrics.getCacheTxCommits() doesn't include transactions started on > client node > > > Key: IGNITE-8645 > URL: https://issues.apache.org/jira/browse/IGNITE-8645 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 2.4 >Reporter: Roman Guseinov >Assignee: Alexey Kuznetsov >Priority: Major > Fix For: 2.7 > > Attachments: CacheTxCommitsMetricTest.java > > > The test is attached [^CacheTxCommitsMetricTest.java] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8980) Web console: regression on Queries screen
[ https://issues.apache.org/jira/browse/IGNITE-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov reassigned IGNITE-8980: Assignee: Pavel Konstantinov (was: Alexey Kuznetsov) Merged to master. > Web console: regression on Queries screen > - > > Key: IGNITE-8980 > URL: https://issues.apache.org/jira/browse/IGNITE-8980 > Project: Ignite > Issue Type: Bug > Components: wizards >Reporter: Pavel Konstantinov >Assignee: Pavel Konstantinov >Priority: Major > Fix For: 2.7 > > Attachments: screenshot-1.png > > > check box is too small > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8982) SQL TX: reuse H2 connections
[ https://issues.apache.org/jira/browse/IGNITE-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541600#comment-16541600 ] Dmitriy Pavlov commented on IGNITE-8982: Hi [~ruchirc], thank you for your interest in the product, and welcome to the Ignite community. As far as I know, the code for this issue is mostly complete, so if you don't mind, let's give [~Pavlukhin] a chance to finish it. Could you please move the issue to unassigned, so that [~Pavlukhin] can assign it to himself? > SQL TX: reuse H2 connections > > > Key: IGNITE-8982 > URL: https://issues.apache.org/jira/browse/IGNITE-8982 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Pavlukhin >Assignee: ruchir choudhry >Priority: Major > > H2 Connection creation is not very fast. Reusing already created connections > could speed up execution in several cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8982) SQL TX: reuse H2 connections
[ https://issues.apache.org/jira/browse/IGNITE-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541602#comment-16541602 ] Igor Seliverstov commented on IGNITE-8982: -- [~ruchirc], due to some access issues we had no chance to assign the ticket, unfortunately it's already done. You could choose another one to work on and I'll provide any necessary assistance (or point to a person who can) with contributing flow, task details, etc. > SQL TX: reuse H2 connections > > > Key: IGNITE-8982 > URL: https://issues.apache.org/jira/browse/IGNITE-8982 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Pavlukhin >Assignee: ruchir choudhry >Priority: Major > > H2 Connection creation is not very fast. Reusing already created connections > could speed up execution in several cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-8982) SQL TX: reuse H2 connections
[ https://issues.apache.org/jira/browse/IGNITE-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541540#comment-16541540 ] Ivan Pavlukhin edited comment on IGNITE-8982 at 7/12/18 1:00 PM: - Hi [~ruchirc] actually the task is already in progress. was (Author: pavlukhin): Hi [~ruchirc] actually the task is already in progress. PR [https://github.com/gridgain/apache-ignite/pull/97] > SQL TX: reuse H2 connections > > > Key: IGNITE-8982 > URL: https://issues.apache.org/jira/browse/IGNITE-8982 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Pavlukhin >Assignee: ruchir choudhry >Priority: Major > > H2 Connection creation is not very fast. Reusing already created connections > could speed up execution in several cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8982) SQL TX: reuse H2 connections
[ https://issues.apache.org/jira/browse/IGNITE-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541540#comment-16541540 ] Ivan Pavlukhin commented on IGNITE-8982: Hi [~ruchirc] actually the task is already in progress. PR [https://github.com/gridgain/apache-ignite/pull/97] > SQL TX: reuse H2 connections > > > Key: IGNITE-8982 > URL: https://issues.apache.org/jira/browse/IGNITE-8982 > Project: Ignite > Issue Type: Improvement >Reporter: Ivan Pavlukhin >Assignee: ruchir choudhry >Priority: Major > > H2 Connection creation is not very fast. Reusing already created connections > could speed up execution in several cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
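The reuse idea behind this ticket can be sketched as a minimal pool: released connections go into a queue and are handed out again instead of being recreated. The Supplier factory below is a stand-in for whatever actually opens an H2 connection; this is not the code in the linked PR.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Supplier;

/**
 * Sketch: reuse released connections instead of creating new ones,
 * since connection creation is the expensive step.
 */
public class ConnectionReusePool<C> {
    private final Queue<C> idle = new ConcurrentLinkedQueue<>();
    private final Supplier<C> factory;

    public ConnectionReusePool(Supplier<C> factory) {
        this.factory = factory;
    }

    /** Reuses an idle connection when available; creates one otherwise. */
    public C acquire() {
        C conn = idle.poll();
        return conn != null ? conn : factory.get();
    }

    /** Returns a connection to the pool for later reuse. */
    public void release(C conn) {
        idle.offer(conn);
    }
}
```

A real implementation would also need to validate reused connections and bound the pool, but the acquire/release pair captures the performance argument in the description.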
[jira] [Commented] (IGNITE-7196) Exchange can stuck and wait while new node restoring state from disk and starting caches
[ https://issues.apache.org/jira/browse/IGNITE-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541511#comment-16541511 ] Maxim Muzafarov commented on IGNITE-7196: - [~agoncharuk], I would like to take this ticket for myself and start investigation. Can I? > Exchange can stuck and wait while new node restoring state from disk and > starting caches > > > Key: IGNITE-7196 > URL: https://issues.apache.org/jira/browse/IGNITE-7196 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 2.3 >Reporter: Mikhail Cherkasov >Assignee: Alexey Goncharuk >Priority: Critical > Fix For: 2.7 > > > Exchange can stuck and wait while new node restoring state from disk and > starting caches, there's a log snippet from a just joined new node that shows > the issue: > [21:36:13,023][INFO][exchange-worker-#62%statement_grid%][time] Started > exchange init [topVer=AffinityTopologyVersion [topVer=57, minorTopVer=0], > crd=false, evt=NODE_JOINED, evtNode=3ac1160e-0de4-41bc-a366-59292c9f03c1, > customEvt=null, allowMerge=true] > [21:36:13,023][INFO][exchange-worker-#62%statement_grid%][FilePageStoreManager] > Resolved page store work directory: > /mnt/store/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463 > [21:36:13,024][INFO][exchange-worker-#62%statement_grid%][FileWriteAheadLogManager] > Resolved write ahead log work directory: > /mnt/wal/WAL/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463 > [21:36:13,024][INFO][exchange-worker-#62%statement_grid%][FileWriteAheadLogManager] > Resolved write ahead log archive directory: > /mnt/wal/WAL_archive/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463 > [21:36:13,046][INFO][exchange-worker-#62%statement_grid%][FileWriteAheadLogManager] > Started write-ahead log manager [mode=DEFAULT] > [21:36:13,065][INFO][exchange-worker-#62%statement_grid%][PageMemoryImpl] > Started page memory [memoryAllocated=100.0 MiB, pages=6352, tableSize=373.4 > KiB, checkpointBuffer=100.0 MiB] > 
[21:36:13,105][INFO][exchange-worker-#62%statement_grid%][PageMemoryImpl] > Started page memory [memoryAllocated=32.0 GiB, pages=2083376, tableSize=119.6 > MiB, checkpointBuffer=896.0 MiB] > [21:36:13,428][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager] > Read checkpoint status > [startMarker=/mnt/store/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463/cp/1512930965253-306c0895-1f5f-4237-bebf-8bf2b49682af-START.bin, > > endMarker=/mnt/store/node00-d1eb270c-d2cc-4550-87aa-64f6df2a9463/cp/1512930869357-1c24b6dc-d64c-4b83-8166-11edf1bfdad3-END.bin] > [21:36:13,429][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager] > Checking memory state [lastValidPos=FileWALPointer [idx=3582, > fileOffset=59186076, len=9229, forceFlush=false], lastMarked=FileWALPointer > [idx=3629, fileOffset=50829700, len=9229, forceFlush=false], > lastCheckpointId=306c0895-1f5f-4237-bebf-8bf2b49682af] > [21:36:13,429][WARNING][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager] > Ignite node stopped in the middle of checkpoint. Will restore memory state > and finish checkpoint on node start. 
> [21:36:18,312][INFO][grid-nio-worker-tcp-comm-0-#41%statement_grid%][TcpCommunicationSpi] > Accepted incoming communication connection [locAddr=/172.31.20.209:48100, > rmtAddr=/172.31.17.115:57148] > [21:36:21,619][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager] > Found last checkpoint marker [cpId=306c0895-1f5f-4237-bebf-8bf2b49682af, > pos=FileWALPointer [idx=3629, fileOffset=50829700, len=9229, > forceFlush=false]] > [21:36:21,620][INFO][exchange-worker-#62%statement_grid%][GridCacheDatabaseSharedManager] > Finished applying memory changes [changesApplied=165103, time=8189ms] > [21:36:22,403][INFO][grid-nio-worker-tcp-comm-1-#42%statement_grid%][TcpCommunicationSpi] > Accepted incoming communication connection [locAddr=/172.31.20.209:48100, > rmtAddr=/172.31.28.10:47964] > [21:36:23,414][INFO][grid-nio-worker-tcp-comm-2-#43%statement_grid%][TcpCommunicationSpi] > Accepted incoming communication connection [locAddr=/172.31.20.209:48100, > rmtAddr=/172.31.27.101:46000] > [21:36:33,019][WARNING][main][GridCachePartitionExchangeManager] Failed to > wait for initial partition map exchange. Possible reasons are: > ^-- Transactions in deadlock. > ^-- Long running transactions (ignore if this is the case). > ^-- Unreleased explicit locks. > [21:36:53,021][WARNING][main][GridCachePartitionExchangeManager] Still > waiting for initial partition map exchange > [fut=GridDhtPartitionsExchangeFuture [firstDiscoEvt=DiscoveryEvent > [evtNode=TcpDiscoveryNode [id=3ac1160e-0de4-41bc-a366-59292c9f03c1, > addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.31.20.209], >
[jira] [Commented] (IGNITE-8992) Wrong log when LongJVMPauseDetector stops the worker thread
[ https://issues.apache.org/jira/browse/IGNITE-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541504#comment-16541504 ] ASF GitHub Bot commented on IGNITE-8992: GitHub user dgarus opened a pull request: https://github.com/apache/ignite/pull/4352 IGNITE-8992. Wrong log when LongJVMPauseDetector stops the worker thread You can merge this pull request into a Git repository by running: $ git pull https://github.com/dgarus/ignite ignite-8992 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4352.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4352 commit 8f319f50c1e775e8bcbdc9371643312031129f02 Author: Garus Denis Date: 2018-07-12T11:34:47Z IGNITE-8992. Wrong log when LongJVMPauseDetector stops the worker thread > Wrong log when LongJVMPauseDetector stops the worker thread > --- > > Key: IGNITE-8992 > URL: https://issues.apache.org/jira/browse/IGNITE-8992 > Project: Ignite > Issue Type: Bug >Reporter: Denis Garus >Assignee: Denis Garus >Priority: Minor > > When LongJVMPauseDetector stops the worker thread, a log will contain follow > error: > [2018-07-12 > 12:57:28,332][ERROR][jvm-pause-detector-worker][CacheMetricsEnableRuntimeTest1] > jvm-pause-detector-worker has been interrupted > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.ignite.internal.LongJVMPauseDetector$1.run(LongJVMPauseDetector.java:97) > The error must be only if worker thread stopped unintentionally. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
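A common fix pattern for this class of bug, sketched here with hypothetical names rather than the actual patch in the pull request, is to set a volatile flag before interrupting the worker, so the catch block can tell a deliberate stop from a real failure:

```java
// Hypothetical sketch of the fix pattern: distinguish a deliberate stop
// from an unexpected interruption before logging at ERROR level.
public class StoppableWorker {
    private volatile boolean stopRequested;
    private volatile String lastLog;

    private final Thread worker = new Thread(() -> {
        try {
            while (!stopRequested)
                Thread.sleep(10); // stands in for the pause-measurement loop
            lastLog = "INFO: worker stopped"; // clean exit, no interruption
        } catch (InterruptedException e) {
            lastLog = stopRequested
                ? "INFO: worker stopped"                // expected shutdown
                : "ERROR: worker has been interrupted"; // genuine error
        }
    });

    public void start() {
        worker.start();
    }

    public String stop() throws InterruptedException {
        stopRequested = true; // mark the interruption as intentional first
        worker.interrupt();
        worker.join();
        return lastLog;
    }
}
```

With this ordering, an `InterruptedException` raised during shutdown is reported at INFO level, and the ERROR message survives only for interruptions the detector did not request itself.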
[jira] [Comment Edited] (IGNITE-7165) Re-balancing is cancelled if client node joins
[ https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541489#comment-16541489 ] Maxim Muzafarov edited comment on IGNITE-7165 at 7/12/18 11:26 AM: --- I've checked tests (not related to current change): * IgnitePdsDynamicCacheTest.testRestartAndCreate(fail rate 0,0%) * IgnitePdsCheckpointSimulationWithRealCpDisabledTest.testCheckpointSimulationMultiThreaded (fail rate 0,0%) * GridCachePartitionedDataStructuresFailoverSelfTest.testFairReentrantLockFailsWhenServersLeft (fail rate 0,0%) * CacheStopAndDestroySelfTest.testClientClose (fail rate 0,0%) * CacheStopAndDestroySelfTest.testLocalClose(fail rate 0,0%) * GridCacheLocalMultithreadedSelfTest.testBasicLocks(fail rate 0,0%) * GridCacheLocalMultithreadedSelfTest.testBasicLocks(fail rate 0,0%) * IgniteClientReconnectFailoverTest.testReconnectStreamerApi(fail rate 0,0%) was (Author: mmuzaf): Check tests (not related to current change): * IgnitePdsDynamicCacheTest.testRestartAndCreate(fail rate 0,0%) * IgnitePdsCheckpointSimulationWithRealCpDisabledTest.testCheckpointSimulationMultiThreaded (fail rate 0,0%) * GridCachePartitionedDataStructuresFailoverSelfTest.testFairReentrantLockFailsWhenServersLeft (fail rate 0,0%) * CacheStopAndDestroySelfTest.testClientClose (fail rate 0,0%) * CacheStopAndDestroySelfTest.testLocalClose(fail rate 0,0%) * GridCacheLocalMultithreadedSelfTest.testBasicLocks(fail rate 0,0%) * GridCacheLocalMultithreadedSelfTest.testBasicLocks(fail rate 0,0%) * IgniteClientReconnectFailoverTest.testReconnectStreamerApi(fail rate 0,0%) > Re-balancing is cancelled if client node joins > -- > > Key: IGNITE-7165 > URL: https://issues.apache.org/jira/browse/IGNITE-7165 > Project: Ignite > Issue Type: Bug >Reporter: Mikhail Cherkasov >Assignee: Maxim Muzafarov >Priority: Critical > Labels: rebalance > Fix For: 2.7 > > > Re-balancing is canceled if client node joins. 
Re-balancing can take hours > and each time when client node joins it starts again: > [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager] > Added new node to topology: TcpDiscoveryNode > [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, > 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, > /172.31.16.213:0], discPort=0, order=36, intOrder=24, > lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, > isClient=true] > [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager] > Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB] > [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started > exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], > crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, > customEvt=null, allowMerge=true] > [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture] > Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, > minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], > err=null] > [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished > exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], > crd=false] > [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager] > Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion > [topVer=36, minorTopVer=0], evt=NODE_JOINED, > node=979cf868-1c37-424a-9ad1-12db501f32ef] > [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] > Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion > [topVer=35, minorTopVer=0]] > [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager] > Rebalancing scheduled [order=[statementp]] > 
[15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager] > Rebalancing started [top=null, evt=NODE_JOINED, > node=a8be3c14-9add-48c3-b099-3fd304cfdbf4] > [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] > Starting rebalancing [mode=ASYNC, > fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, > topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], > updateSeq=-1754630006] > [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] > Starting rebalancing [mode=ASYNC, > fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, > topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], > updateSeq=-1754630006] >
[jira] [Commented] (IGNITE-7165) Re-balancing is cancelled if client node joins
[ https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541489#comment-16541489 ] Maxim Muzafarov commented on IGNITE-7165: - Check tests (not related to current change): * IgnitePdsDynamicCacheTest.testRestartAndCreate(fail rate 0,0%) * IgnitePdsCheckpointSimulationWithRealCpDisabledTest.testCheckpointSimulationMultiThreaded (fail rate 0,0%) * GridCachePartitionedDataStructuresFailoverSelfTest.testFairReentrantLockFailsWhenServersLeft (fail rate 0,0%) * CacheStopAndDestroySelfTest.testClientClose (fail rate 0,0%) * CacheStopAndDestroySelfTest.testLocalClose(fail rate 0,0%) * GridCacheLocalMultithreadedSelfTest.testBasicLocks(fail rate 0,0%) * GridCacheLocalMultithreadedSelfTest.testBasicLocks(fail rate 0,0%) * IgniteClientReconnectFailoverTest.testReconnectStreamerApi(fail rate 0,0%) > Re-balancing is cancelled if client node joins > -- > > Key: IGNITE-7165 > URL: https://issues.apache.org/jira/browse/IGNITE-7165 > Project: Ignite > Issue Type: Bug >Reporter: Mikhail Cherkasov >Assignee: Maxim Muzafarov >Priority: Critical > Labels: rebalance > Fix For: 2.7 > > > Re-balancing is canceled if client node joins. 
Re-balancing can take hours > and each time when client node joins it starts again: > [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager] > Added new node to topology: TcpDiscoveryNode > [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, > 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, > /172.31.16.213:0], discPort=0, order=36, intOrder=24, > lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, > isClient=true] > [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager] > Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB] > [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started > exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], > crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef, > customEvt=null, allowMerge=true] > [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionsExchangeFuture] > Finish exchange future [startVer=AffinityTopologyVersion [topVer=36, > minorTopVer=0], resVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], > err=null] > [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Finished > exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], > crd=false] > [15:10:05,703][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager] > Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion > [topVer=36, minorTopVer=0], evt=NODE_JOINED, > node=979cf868-1c37-424a-9ad1-12db501f32ef] > [15:10:08,706][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] > Cancelled rebalancing from all nodes [topology=AffinityTopologyVersion > [topVer=35, minorTopVer=0]] > [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager] > Rebalancing scheduled [order=[statementp]] > 
[15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridCachePartitionExchangeManager] > Rebalancing started [top=null, evt=NODE_JOINED, > node=a8be3c14-9add-48c3-b099-3fd304cfdbf4] > [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] > Starting rebalancing [mode=ASYNC, > fromNode=2f6bde48-ffb5-4815-bd32-df4e57dc13e0, partitionsCount=18, > topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], > updateSeq=-1754630006] > [15:10:08,707][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] > Starting rebalancing [mode=ASYNC, > fromNode=35d01141-4dce-47dd-adf6-a4f3b2bb9da9, partitionsCount=15, > topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], > updateSeq=-1754630006] > [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] > Starting rebalancing [mode=ASYNC, > fromNode=b3a8be53-e61f-4023-a906-a265923837ba, partitionsCount=15, > topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], > updateSeq=-1754630006] > [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] > Starting rebalancing [mode=ASYNC, > fromNode=f825cb4e-7dcc-405f-a40d-c1dc1a3ade5a, partitionsCount=12, > topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], > updateSeq=-1754630006] > [15:10:08,708][INFO][exchange-worker-#62%statement_grid%][GridDhtPartitionDemander] > Starting rebalancing [mode=ASYNC, > fromNode=4ae1db91-8b88-4180-a84b-127a303959e9, partitionsCount=11, > topology=AffinityTopologyVersion [topVer=36, minorTopVer=0], >
[jira] [Assigned] (IGNITE-8989) Web console: incorrect initial state of some checkboxes on Client Connector Configuration panel
[ https://issues.apache.org/jira/browse/IGNITE-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasiliy Sisko reassigned IGNITE-8989: - Assignee: Pavel Konstantinov (was: Vasiliy Sisko) > Web console: incorrect initial state of some checkboxes on Client Connector > Configuration panel > --- > > Key: IGNITE-8989 > URL: https://issues.apache.org/jira/browse/IGNITE-8989 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Konstantinov >Assignee: Pavel Konstantinov >Priority: Minor > Attachments: screenshot-1.png > > > !screenshot-1.png! > These checkboxes should be ON in the UI because their default value in the > source code is true. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8989) Web console: incorrect initial state of some checkboxes on Client Connector Configuration panel
[ https://issues.apache.org/jira/browse/IGNITE-8989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541491#comment-16541491 ] Vasiliy Sisko commented on IGNITE-8989: --- Fixed default values for the JDBC, ODBC, Thin client and Use Ignite SSL fields of a new cluster. > Web console: incorrect initial state of some checkboxes on Client Connector > Configuration panel > --- > > Key: IGNITE-8989 > URL: https://issues.apache.org/jira/browse/IGNITE-8989 > Project: Ignite > Issue Type: Bug >Reporter: Pavel Konstantinov >Assignee: Vasiliy Sisko >Priority: Minor > Attachments: screenshot-1.png > > > !screenshot-1.png! > These checkboxes should be ON in the UI because their default value in the > source code is true. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-5427) Add cluster activation/deactivation lifecycle events
[ https://issues.apache.org/jira/browse/IGNITE-5427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541479#comment-16541479 ] Dmitriy Setrakyan commented on IGNITE-5427: --- In my view, activation events should be regular Ignite events, not lifecycle events. I would close this ticket as "Won't Fix" in favor of IGNITE-8376 > Add cluster activation/deactivation lifecycle events > > > Key: IGNITE-5427 > URL: https://issues.apache.org/jira/browse/IGNITE-5427 > Project: Ignite > Issue Type: Improvement > Components: general >Affects Versions: 2.0 >Reporter: Alexey Goncharuk >Assignee: Sergey Dorozhkin >Priority: Major > Fix For: 2.7 > > > We should add AFTER_ACTIVATE and BEFORE_DEACTIVATE lifecycle event types. > Add methods for these event to LifecycleListener interface. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-584) Need to make sure that scan query returns consistent results on topology changes
[ https://issues.apache.org/jira/browse/IGNITE-584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541469#comment-16541469 ] ASF GitHub Bot commented on IGNITE-584: --- GitHub user zstan opened a pull request: https://github.com/apache/ignite/pull/4351 IGNITE-584 Correct results returned, while changing topology You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-584 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4351.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4351 commit 1d71bc5966c335547f48292a12c7f2a5f5781130 Author: Evgeny Stanilovskiy Date: 2018-07-12T10:42:27Z IGNITE-584 Correct results returned, while changing topology > Need to make sure that scan query returns consistent results on topology > changes > > > Key: IGNITE-584 > URL: https://issues.apache.org/jira/browse/IGNITE-584 > Project: Ignite > Issue Type: Sub-task > Components: data structures >Affects Versions: 1.9, 2.0, 2.1 >Reporter: Artem Shutak >Assignee: Stanilovsky Evgeny >Priority: Major > Labels: MakeTeamcityGreenAgain, Muted_test > Fix For: 2.7 > > Attachments: tc1.png > > > Consistent results on topology changes was implemented for sql queries, but > looks like it still does not work for scan queries. > This affects 'cache set' tests since set uses scan query for set iteration > (to be unmuted on TC): > GridCacheSetAbstractSelfTest testNodeJoinsAndLeaves and > testNodeJoinsAndLeavesCollocated; > Also see todos here GridCacheSetFailoverAbstractSelfTest -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8985) Node segmented itself after connRecoveryTimeout
[ https://issues.apache.org/jira/browse/IGNITE-8985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitry Karachentsev reassigned IGNITE-8985: --- Assignee: Dmitry Karachentsev > Node segmented itself after connRecoveryTimeout > --- > > Key: IGNITE-8985 > URL: https://issues.apache.org/jira/browse/IGNITE-8985 > Project: Ignite > Issue Type: Bug >Reporter: Mikhail Cherkasov >Assignee: Dmitry Karachentsev >Priority: Major > Attachments: Archive.zip > > > I can see the following message in logs: > [2018-07-10 16:27:13,111][WARN ][tcp-disco-msg-worker-#2] Unable to connect > to next nodes in a ring, it seems local node is experiencing connectivity > issues. Segmenting local node to avoid case when one node fails a big part of > cluster. To disable that behavior set > TcpDiscoverySpi.setConnectionRecoveryTimeout() to 0. > [connRecoveryTimeout=1, effectiveConnRecoveryTimeout=1] > [2018-07-10 16:27:13,112][WARN ][disco-event-worker-#61] Local node > SEGMENTED: TcpDiscoveryNode [id=e1a19d8e-2253-458c-9757-e3372de3bef9, > addrs=[127.0.0.1, 172.17.0.1, 172.25.1.17], sockAddrs=[/172.17.0.1:47500, > lab17.gridgain.local/172.25.1.17:47500, /127.0.0.1:47500], discPort=47500, > order=2, intOrder=2, lastExchangeTime=1531229233103, loc=true, > ver=2.4.7#20180710-sha1:a48ae923, isClient=false] > I have failure detection time out 60_000 and during the test I had GC > <25secs, so I don't expect that node should be segmented. > > Logs are attached. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
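The workaround mentioned in the quoted log line itself is a one-line configuration change. A minimal fragment, assuming the standard Ignite 2.x configuration API, might look like this (whether disabling connection recovery is appropriate depends on the cluster's failure-detection requirements):

```java
// Configuration fragment only: disables the connection-recovery segmentation
// behavior referenced in the warning above.
IgniteConfiguration cfg = new IgniteConfiguration();

TcpDiscoverySpi disco = new TcpDiscoverySpi();
disco.setConnectionRecoveryTimeout(0); // 0 = never segment on recovery timeout

cfg.setDiscoverySpi(disco);
```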
[jira] [Created] (IGNITE-8992) Wrong log when LongJVMPauseDetector stops the worker thread
Denis Garus created IGNITE-8992: --- Summary: Wrong log when LongJVMPauseDetector stops the worker thread Key: IGNITE-8992 URL: https://issues.apache.org/jira/browse/IGNITE-8992 Project: Ignite Issue Type: Bug Reporter: Denis Garus When LongJVMPauseDetector stops the worker thread, the log will contain the following error: [2018-07-12 12:57:28,332][ERROR][jvm-pause-detector-worker][CacheMetricsEnableRuntimeTest1] jvm-pause-detector-worker has been interrupted java.lang.InterruptedException: sleep interrupted at java.lang.Thread.sleep(Native Method) at org.apache.ignite.internal.LongJVMPauseDetector$1.run(LongJVMPauseDetector.java:97) The error must be logged only if the worker thread stopped unintentionally. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8992) Wrong log when LongJVMPauseDetector stops the worker thread
[ https://issues.apache.org/jira/browse/IGNITE-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Denis Garus reassigned IGNITE-8992: --- Assignee: Denis Garus > Wrong log when LongJVMPauseDetector stops the worker thread > --- > > Key: IGNITE-8992 > URL: https://issues.apache.org/jira/browse/IGNITE-8992 > Project: Ignite > Issue Type: Bug >Reporter: Denis Garus >Assignee: Denis Garus >Priority: Minor > > When LongJVMPauseDetector stops the worker thread, a log will contain follow > error: > [2018-07-12 > 12:57:28,332][ERROR][jvm-pause-detector-worker][CacheMetricsEnableRuntimeTest1] > jvm-pause-detector-worker has been interrupted > java.lang.InterruptedException: sleep interrupted > at java.lang.Thread.sleep(Native Method) > at > org.apache.ignite.internal.LongJVMPauseDetector$1.run(LongJVMPauseDetector.java:97) > The error must be only if worker thread stopped unintentionally. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8980) Web console: regression on Queries screen
[ https://issues.apache.org/jira/browse/IGNITE-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541449#comment-16541449 ] Pavel Konstantinov commented on IGNITE-8980: Tested on the branch. > Web console: regression on Queries screen > - > > Key: IGNITE-8980 > URL: https://issues.apache.org/jira/browse/IGNITE-8980 > Project: Ignite > Issue Type: Bug > Components: wizards >Reporter: Pavel Konstantinov >Assignee: Pavel Konstantinov >Priority: Major > Fix For: 2.7 > > Attachments: screenshot-1.png > > > check box is too small > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8975) Invalid initialization of compressed archived WAL segment when WAL compression is switched off.
[ https://issues.apache.org/jira/browse/IGNITE-8975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541459#comment-16541459 ] Ivan Daschinskiy commented on IGNITE-8975: -- I've fixed this issue, merged with the latest master, ran TC and checked its status in [mtcga|https://mtcga.gridgain.com/pr.html?serverId=public=IgniteTests24Java8_RunAll=pull%2F4345%2Fhead=Latest]. It seems that I didn't introduce new test failures. Please review. > Invalid initialization of compressed archived WAL segment when WAL > compression is switched off. > --- > > Key: IGNITE-8975 > URL: https://issues.apache.org/jira/browse/IGNITE-8975 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.5 >Reporter: Ivan Daschinskiy >Assignee: Ivan Daschinskiy >Priority: Major > Fix For: 2.7 > > > After restarting a node with WAL compression disabled and when a compressed wal > archive is present, the current implementation of FileWriteAheadLogManager ignores > the present compressed wal segment and initializes an empty brand-new one.
This > causes following error: > {code:java} > 2018-07-05 16:14:25.761 > [ERROR][exchange-worker-#153%DPL_GRID%DplGridNodeName%][o.a.i.i.p.c.p.c.CheckpointHistory] > Failed to process checkpoint: CheckpointEntry > [id=8dc4b1cc-dedd-4a57-8748-f5a7ecfd389d, timestamp=1530785506909, > ptr=FileWALPointer [idx=4520, fileOff=860507725, len=691515]] > org.apache.ignite.IgniteCheckedException: Failed to find checkpoint record at > the given WAL pointer: FileWALPointer [idx=4520, fileOff=860507725, > len=691515] > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry$GroupStateLazyStore.initIfNeeded(CheckpointEntry.java:346) > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry$GroupStateLazyStore.access$300(CheckpointEntry.java:231) > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry.initIfNeeded(CheckpointEntry.java:123) > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointEntry.groupState(CheckpointEntry.java:105) > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.isCheckpointApplicableForGroup(CheckpointHistory.java:377) > at > org.apache.ignite.internal.processors.cache.persistence.checkpoint.CheckpointHistory.searchAndReserveCheckpoints(CheckpointHistory.java:304) > at > org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.reserveHistoryForExchange(GridCacheDatabaseSharedManager.java:1614) > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1139) > at > org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:724) > at > 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2477) > at > org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2357) > at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at java.lang.Thread.run(Thread.java:745) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8980) Web console: regression on Queries screen
[ https://issues.apache.org/jira/browse/IGNITE-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pavel Konstantinov reassigned IGNITE-8980: -- Assignee: Alexey Kuznetsov (was: Pavel Konstantinov) > Web console: regression on Queries screen > - > > Key: IGNITE-8980 > URL: https://issues.apache.org/jira/browse/IGNITE-8980 > Project: Ignite > Issue Type: Bug > Components: wizards >Reporter: Pavel Konstantinov >Assignee: Alexey Kuznetsov >Priority: Major > Fix For: 2.7 > > Attachments: screenshot-1.png > > > check box is too small > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-7165) Re-balancing is cancelled if client node joins
[ https://issues.apache.org/jira/browse/IGNITE-7165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541426#comment-16541426 ] Maxim Muzafarov commented on IGNITE-7165: - h5. Changes ready * TC: [#2636 (11 Jul 18 21:20)|https://ci.ignite.apache.org/viewLog.html?buildId=1479780=buildResultsDiv=IgniteTests24Java8_RunAll] * PR: [#4097|https://github.com/apache/ignite/pull/4097] * Upsource: [IGNT-CR-670|https://reviews.ignite.apache.org/ignite/review/IGNT-CR-670] h5. Implementation details # _Keep the topology version to demand on (it is no longer the last topology version)_ To calculate the affinity assignment difference against the last topology version, we should save the version on which the rebalance is currently running. Updating this version from the exchange thread after PME keeps us away from unnecessary processing of stale supply messages. # _{{RebalanceFuture.demanded}} to process cache groups independently_ We have a long chain for starting the rebalance process of cache groups built by the {{addAssignments}} method (e.g. {{ignite-sys-cache -> cacheR -> cacheR3 -> cacheR2}}). If the rebalance has started but the initial demand message for some groups has not been sent yet (e.g. due to long cleaning\evicting of previous groups), it can easily be cancelled and a new rebalance future started. # _REPLICATED cache processing_ The affinity assignment for this type of cache never changes. We don't need to stop the rebalance for this cache each time a new topology version arrives. The rebalance should run only once, except when nodes from which cache partitions are being demanded for this group {{LEFT}} or {{FAIL}} the cluster. # _EMPTY assignments handling_ Each time the {{generateAssignments}} method determines no difference with the current topology version (returns an empty map), no matter how the affinity changed, we should return a successful result as fast as possible. 
# _Pending exchanges handling (cancelled assignments)_ The exchange thread can have pending exchanges in its queue (the {{hasPendingExchanges}} method). If such pending exchanges exist, starting a new rebalance routine is pointless and we should skip the rebalance. These pending exchanges may cause no affinity assignment partition changes in our case, which is why we do not need to cancel the current rebalance future. # _RENTING\EVICTING partitions after PME_ PME prepares partitions to be {{RENTED}} or {{EVICTED}} if they are not assigned to the local node according to the new affinity calculation. Processing a stale supply message (for previous versions) can lead to exceptions when getting partitions with an incorrect state on the local node. That's why stale {{GridDhtPartitionSupplyMessage}}s must be ignored by the {{Demander}}. # _Supply context map clearing changed_ Previously, the supply context map was cleared after each topology version change. Since we can perform the rebalance not on the latest topology version, this behavior should be changed: clear the context only for nodes that left\failed the topology. # _{{LEFT}} or {{FAIL}} nodes in the cluster (rebalance restart)_ If the rebalance future demands partitions from nodes which have left the cluster, the rebalance must be restarted. # _OWNING → MOVING on coordinator due to an obsolete partition update counter_ The affinity assignment can have no changes while a rebalance is currently running. The coordinator performs PME and, after merging all SingleMessages, marks partitions with an obsolete update sequence to be demanded from remote nodes (by changing the partition state OWNING -> MOVING). We should schedule a new rebalance in this case. > Re-balancing is cancelled if client node joins > -- > > Key: IGNITE-7165 > URL: https://issues.apache.org/jira/browse/IGNITE-7165 > Project: Ignite > Issue Type: Bug >Reporter: Mikhail Cherkasov >Assignee: Maxim Muzafarov >Priority: Critical > Labels: rebalance > Fix For: 2.7 > > > Re-balancing is canceled if client node joins. 
Re-balancing can take hours > and each time when client node joins it starts again: > [15:10:05,700][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager] > Added new node to topology: TcpDiscoveryNode > [id=979cf868-1c37-424a-9ad1-12db501f32ef, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, > 172.31.16.213], sockAddrs=[/0:0:0:0:0:0:0:1:0, /127.0.0.1:0, > /172.31.16.213:0], discPort=0, order=36, intOrder=24, > lastExchangeTime=1512907805688, loc=false, ver=2.3.1#20171129-sha1:4b1ec0fe, > isClient=true] > [15:10:05,701][INFO][disco-event-worker-#61%statement_grid%][GridDiscoveryManager] > Topology snapshot [ver=36, servers=7, clients=5, CPUs=128, heap=160.0GB] > [15:10:05,702][INFO][exchange-worker-#62%statement_grid%][time] Started > exchange init [topVer=AffinityTopologyVersion [topVer=36, minorTopVer=0], > crd=false, evt=NODE_JOINED, evtNode=979cf868-1c37-424a-9ad1-12db501f32ef,
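The stale-supply handling described in the implementation notes above (save the version the rebalance was started on, ignore supply messages tagged with an older version) can be sketched roughly as follows. All class and field names here are illustrative stand-ins, not Ignite's actual internals; the real check involves {{AffinityTopologyVersion}} and {{GridDhtPartitionSupplyMessage}}.

```java
// Illustrative sketch only: the demander keeps the topology version the current
// rebalance was started on and rejects supply messages tagged with an older one.
// TopologyVersion and Demander are hypothetical names for this sketch.
final class TopologyVersion implements Comparable<TopologyVersion> {
    final long major;  // topVer
    final int minor;   // minorTopVer

    TopologyVersion(long major, int minor) { this.major = major; this.minor = minor; }

    @Override public int compareTo(TopologyVersion o) {
        int c = Long.compare(major, o.major);
        return c != 0 ? c : Integer.compare(minor, o.minor);
    }
}

final class Demander {
    /** Version the current rebalance runs on, saved from the exchange thread after PME. */
    private final TopologyVersion rebalanceVer;

    Demander(TopologyVersion rebalanceVer) { this.rebalanceVer = rebalanceVer; }

    /** A supply message for an older version than the rebalance is stale and must be ignored. */
    boolean accept(TopologyVersion supplyVer) {
        return supplyVer.compareTo(rebalanceVer) >= 0;
    }
}
```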
[jira] [Commented] (IGNITE-8863) Tx rollback can cause remote tx hang
[ https://issues.apache.org/jira/browse/IGNITE-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541422#comment-16541422 ] ASF GitHub Bot commented on IGNITE-8863: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/4262 > Tx rollback can cause remote tx hang > > > Key: IGNITE-8863 > URL: https://issues.apache.org/jira/browse/IGNITE-8863 > Project: Ignite > Issue Type: Bug >Reporter: Alexei Scherbakov >Assignee: Alexei Scherbakov >Priority: Major > Fix For: 2.7 > > Attachments: Ignite_Tests_2.4_Java_8_Cache_5_1434.log.zip > > > {noformat} > [16:33:56]W: [org.apache.ignite:ignite-core] [2018-06-08 > 13:33:56,931][WARN ][sys-#66696%client%][GridNearTxLocal] The transaction was > forcibly rolled back because a timeout is reached: > GridNearTxLocal[xid=e198a9fd361--0857-6387--0004, > xidVersion=GridCacheVersion [topVer=139944839, order=1528464836894, > nodeOrder=4], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, > state=MARKED_ROLLBACK, invalidate=false, rollbackOnly=true, > nodeId=3c8d85b2-4eb9-46b2-8bd1-6f18f542fc7a, timeout=1, duration=11] > [16:35:55]W: [org.apache.ignite:ignite-core] [2018-06-08 > 13:35:55,056][WARN > ][grid-timeout-worker-#66394%transactions.TxRollbackOnTimeoutTest0%][diagnostic] > Found long running transaction [startTime=13:33:56.931, > curTime=13:35:55.054, tx=GridDhtTxRemote > [nearNodeId=3c8d85b2-4eb9-46b2-8bd1-6f18f542fc7a, > rmtFutId=af940d0e361-79c59341-3292-46e4-92ce-5c4ef4eddef8, > nearXidVer=GridCacheVersion [topVer=139944839, order=1528464836894, > nodeOrder=4], storeWriteThrough=false, super=GridDistributedTxRemoteAdapter > [explicitVers=null, started=true, commitAllowed=0, > txState=IgniteTxRemoteSingleStateImpl [entry=IgniteTxEntry > [key=KeyCacheObjectImpl [part=1, val=1, hasValBytes=true], cacheId=3556498, > txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=1, val=1, hasValBytes=true], > cacheId=3556498], val=[op=CREATE, val=CacheObjectImpl [val=null, > 
hasValBytes=true]], prevVal=[op=NOOP, val=null], oldVal=[op=NOOP, val=null], > entryProcessorsCol=null, ttl=-1, conflictExpireTime=-1, conflictVer=null, > explicitVer=null, dhtVer=null, filters=[], filtersPassed=false, > filtersSet=false, entry=GridDhtCacheEntry [rdrs=[], part=1, > super=GridDistributedCacheEntry [super=GridCacheMapEntry > [key=KeyCacheObjectImpl [part=1, val=1, hasValBytes=true], > val=CacheObjectImpl [val=null, hasValBytes=true], startVer=1528464836879, > ver=GridCacheVersion [topVer=139944839, order=1528464836863, nodeOrder=2], > hash=1, extras=GridCacheMvccEntryExtras [mvcc=GridCacheMvcc [locs=null, > rmts=[GridCacheMvccCandidate [nodeId=97ee44cd-73c9-4e79-95df-e1a03481, > ver=GridCacheVersion [topVer=139944839, order=1528464836897, nodeOrder=2], > threadId=75880, id=2310313, topVer=AffinityTopologyVersion [topVer=-1, > minorTopVer=0], reentry=null, > otherNodeId=3c8d85b2-4eb9-46b2-8bd1-6f18f542fc7a, otherVer=null, > mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, serOrder=null, > key=KeyCacheObjectImpl [part=1, val=1, hasValBytes=true], > masks=local=0|owner=0|ready=0|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=0|removed=0|read=0, > prevVer=null, nextVer=null], GridCacheMvccCandidate > [nodeId=97ee44cd-73c9-4e79-95df-e1a03481, ver=GridCacheVersion > [topVer=139944839, order=1528464836900, nodeOrder=2], threadId=75875, > id=2310317, topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], > reentry=null, otherNodeId=3c8d85b2-4eb9-46b2-8bd1-6f18f542fc7a, > otherVer=null, mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, > serOrder=null, key=KeyCacheObjectImpl [part=1, val=1, hasValBytes=true], > masks=local=0|owner=1|ready=0|reentry=0|used=1|tx=1|single_implicit=0|dht_local=0|near_local=0|removed=0|read=0, > prevVer=null, nextVer=null, flags=2]]], prepared=1, locked=false, > nodeId=null, locMapped=false, expiryPlc=null, transferExpiryPlc=false, > flags=0, partUpdateCntr=0, serReadVer=null, xidVer=null]], > 
skipCompletedVers=false, super=IgniteTxAdapter [xidVer=GridCacheVersion > [topVer=139944839, order=1528464836897, nodeOrder=2], > writeVer=GridCacheVersion [topVer=139944839, order=1528464836898, > nodeOrder=2], implicit=false, loc=false, threadId=75880, > startTime=1528464836931, nodeId=97ee44cd-73c9-4e79-95df-e1a03481, > startVer=GridCacheVersion [topVer=139944839, order=1528464836864, > nodeOrder=1], endVer=null, isolation=REPEATABLE_READ, > concurrency=PESSIMISTIC, timeout=1, sysInvalidate=false, sys=false, plc=2, > commitVer=null, finalizing=NONE, invalidParts=null, state=PREPARED, > timedOut=false, topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0], > duration=118123ms,
[jira] [Assigned] (IGNITE-8980) Web console: regression on Queries screen
[ https://issues.apache.org/jira/browse/IGNITE-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov reassigned IGNITE-8980: Assignee: Pavel Konstantinov (was: Alexey Kuznetsov) Fixed in branch ignite-8980. Please test. > Web console: regression on Queries screen > - > > Key: IGNITE-8980 > URL: https://issues.apache.org/jira/browse/IGNITE-8980 > Project: Ignite > Issue Type: Bug > Components: wizards >Reporter: Pavel Konstantinov >Assignee: Pavel Konstantinov >Priority: Major > Fix For: 2.7 > > Attachments: screenshot-1.png > > > check box is too small > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-8980) Web console: regression on Queries screen
[ https://issues.apache.org/jira/browse/IGNITE-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov updated IGNITE-8980: - Fix Version/s: 2.7 > Web console: regression on Queries screen > - > > Key: IGNITE-8980 > URL: https://issues.apache.org/jira/browse/IGNITE-8980 > Project: Ignite > Issue Type: Bug > Components: wizards >Reporter: Pavel Konstantinov >Assignee: Alexey Kuznetsov >Priority: Major > Fix For: 2.7 > > Attachments: screenshot-1.png > > > check box is too small > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8980) Web console: regression on Queries screen
[ https://issues.apache.org/jira/browse/IGNITE-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov reassigned IGNITE-8980: Assignee: Alexey Kuznetsov (was: Dmitriy Shabalin) > Web console: regression on Queries screen > - > > Key: IGNITE-8980 > URL: https://issues.apache.org/jira/browse/IGNITE-8980 > Project: Ignite > Issue Type: Bug > Components: wizards >Reporter: Pavel Konstantinov >Assignee: Alexey Kuznetsov >Priority: Major > Fix For: 2.7 > > Attachments: screenshot-1.png > > > check box is too small > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8411) Binary Client Protocol spec: other parts clarifications
[ https://issues.apache.org/jira/browse/IGNITE-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541391#comment-16541391 ] Igor Sapego commented on IGNITE-8411: - [~dmagda], what do you mean? > Binary Client Protocol spec: other parts clarifications > --- > > Key: IGNITE-8411 > URL: https://issues.apache.org/jira/browse/IGNITE-8411 > Project: Ignite > Issue Type: Improvement > Components: documentation, thin client >Affects Versions: 2.4 >Reporter: Alexey Kosenchuk >Assignee: Igor Sapego >Priority: Major > Fix For: 2.7 > > > issues against previous parts: IGNITE-8039 IGNITE-8212 > Cache Configuration > --- > > [https://apacheignite.readme.io/docs/binary-client-protocol-cache-configuration-operations] > - OP_CACHE_GET_CONFIGURATION and OP_CACHE_CREATE_WITH_CONFIGURATION - > QueryEntity - Structure of QueryField: > absent "default value - type Object" - it is the last field of the > QueryField in reality. > - OP_CACHE_GET_CONFIGURATION - Structure of Cache Configuration: > Absent CacheAtomicityMode - is the first field in reality. > Absent MaxConcurrentAsyncOperations - is between DefaultLockTimeout and > MaxQueryIterators in reality. > "Invalidate" field - does not exist in reality. > - the meaning and possible values of every configuration parameter must be > clarified. If clarified in other docs, this spec must have link(s) to those > docs. > - suggest combining the Cache Configuration descriptions in > OP_CACHE_GET_CONFIGURATION and OP_CACHE_CREATE_WITH_CONFIGURATION - to avoid > duplicated descriptions. > SQL and Scan Queries > > [https://apacheignite.readme.io/docs/binary-client-protocol-sql-operations] > - "Flag. Pass 0 for default, or 1 to keep the value in binary form.": > the "value in binary form" flag should be left and clarified in the > operations to which it is applicable. > - OP_QUERY_SQL: > most of the fields in the request must be clarified. If clarified in other > docs, this spec must have link(s) to those docs. 
> For example: > ** "Name of a type or SQL table": name of what type? > - OP_QUERY_SQL_FIELDS: > most of the fields in the request must be clarified. If clarified in other > docs, this spec must have link(s) to those docs. > For example: > ** is there any correlation between "Query cursor page size" and "Max rows"? > ** "Statement type": why are there only three types? what about INSERT, etc.? > - OP_QUERY_SQL_FIELDS_CURSOR_GET_PAGE Response does not contain Cursor id. > But responses for all other query operations contain it. Is it intentional? > - OP_QUERY_SCAN_CURSOR_GET_PAGE Response - Cursor id is absent in reality. > - OP_QUERY_SCAN_CURSOR_GET_PAGE Response - Row count field: says type > "long". Should be "int". > - OP_QUERY_SCAN: > the format and rules of the Filter object must be clarified. If clarified in > other docs, this spec must have link(s) to those docs. > - OP_QUERY_SCAN: > in general, it's not clear how this operation should be supported on > platforms other than those mentioned in the "Filter platform" field. > - OP_QUERY_SCAN: "Number of partitions to query" > should be updated to "A partition number to query" > > Binary Types > > > [https://apacheignite.readme.io/docs/binary-client-protocol-binary-type-operations] > - it should be explained somewhere when and why these operations need to be > supported by a client. > - Type id and Field id: > it should be clarified that before an Id calculation Type and Field names must > be converted to lower case. > - OP_GET_BINARY_TYPE and OP_PUT_BINARY_TYPE - BinaryField - Type id: > in reality it is not a type id (hash code) but a type code (1, 2,... 10,... > 103,...). > - OP_GET_BINARY_TYPE and OP_PUT_BINARY_TYPE - "Affinity key field name": > it should be explained what it is. If explained in other docs, this spec must > have link(s) to those docs. > - OP_PUT_BINARY_TYPE - schema id: > the mandatory algorithm of schema Id calculation must be described somewhere. If > described in other docs, this spec must have link(s) to those docs. 
> - OP_REGISTER_BINARY_TYPE_NAME and OP_GET_BINARY_TYPE_NAME: > it should be explained when and why these operations need to be supported by a > client, and how they should be supported on platforms other than those mentioned > in the "Platform id" field. > - OP_REGISTER_BINARY_TYPE_NAME: > Type name - is it the "full" or "short" name here? > Type id - is it a hash of the "full" or "short" name here? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
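The "lower case before id calculation" note above can be illustrated with a small sketch. The 31-based hash below is an assumption for illustration (it mirrors Java's String.hashCode computed over the lower-cased name); the normative algorithm is whatever the protocol spec ends up documenting.

```java
// Illustrative only: a type/field id computed as a 31-based hash over the
// lower-cased name, matching the "convert to lower case before id calculation"
// point above. BinaryIds is a hypothetical helper name for this sketch.
final class BinaryIds {
    static int lowerCaseHash(String name) {
        int h = 0;
        for (int i = 0; i < name.length(); i++)
            h = 31 * h + Character.toLowerCase(name.charAt(i));  // lower-case each char first
        return h;
    }
}
```

Under this scheme "QueryField" and "queryfield" yield the same id, which is the point of the lower-casing rule.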
[jira] [Comment Edited] (IGNITE-4210) CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose data.
[ https://issues.apache.org/jira/browse/IGNITE-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541377#comment-16541377 ] Alexey Kuznetsov edited comment on IGNITE-4210 at 7/12/18 9:39 AM: --- [~agura] [~andrey-kuznetsov] In case of topology changes during store loading I wanted to throw a *ClusterTopologyCheckedException* with a retry future, but it failed to be serialized. Imagine a client calling store load and the loading failing on a remote node: the *ClusterTopologyCheckedException* would be returned to the client, but without the retry future. Is it OK if no retry future is present inside the exception? was (Author: alexey kuznetsov): [~agura] [~andrey-kuznetsov] In case of topology changes during store loading I wanted to throw *ClusterTopologyCheckedException* with retry future, but it failed to be serialized. Imagine client calling store load, and loading failed on remote node, In this case *ClusterTopologyCheckedException* would be returned back to client, but without retry future. > CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose > data. > > > Key: IGNITE-4210 > URL: https://issues.apache.org/jira/browse/IGNITE-4210 > Project: Ignite > Issue Type: Bug >Reporter: Anton Vinogradov >Assignee: Alexey Kuznetsov >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.7 > > > org.apache.ignite.internal.processors.cache.distributed.CacheLoadingConcurrentGridStartSelfTest#testLoadCacheFromStore > sometimes fails. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-4210) CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose data.
[ https://issues.apache.org/jira/browse/IGNITE-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541377#comment-16541377 ] Alexey Kuznetsov edited comment on IGNITE-4210 at 7/12/18 9:31 AM: --- [~agura] [~andrey-kuznetsov] In case of topology changes during store loading I wanted to throw a *ClusterTopologyCheckedException* with a retry future, but it failed to be serialized. Imagine a client calling store load and the loading failing on a remote node: the *ClusterTopologyCheckedException* would be returned to the client, but without the retry future. was (Author: alexey kuznetsov): [~agura] In case of topology changes during store loading I wanted to throw *ClusterTopologyCheckedException* with retry future, but it failed to be serialized. Imagine client calling store load, and loading failed on remote node, In this case *ClusterTopologyCheckedException* would be returned back to client, but without retry future. > CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose > data. > > > Key: IGNITE-4210 > URL: https://issues.apache.org/jira/browse/IGNITE-4210 > Project: Ignite > Issue Type: Bug >Reporter: Anton Vinogradov >Assignee: Alexey Kuznetsov >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.7 > > > org.apache.ignite.internal.processors.cache.distributed.CacheLoadingConcurrentGridStartSelfTest#testLoadCacheFromStore > sometimes fails. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-4210) CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose data.
[ https://issues.apache.org/jira/browse/IGNITE-4210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541377#comment-16541377 ] Alexey Kuznetsov commented on IGNITE-4210: -- [~agura] In case of topology changes during store loading I wanted to throw a *ClusterTopologyCheckedException* with a retry future, but it failed to be serialized. Imagine a client calling store load and the loading failing on a remote node: the *ClusterTopologyCheckedException* would be returned to the client, but without the retry future. > CacheLoadingConcurrentGridStartSelfTest.testLoadCacheFromStore() test lose > data. > > > Key: IGNITE-4210 > URL: https://issues.apache.org/jira/browse/IGNITE-4210 > Project: Ignite > Issue Type: Bug >Reporter: Anton Vinogradov >Assignee: Alexey Kuznetsov >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.7 > > > org.apache.ignite.internal.processors.cache.distributed.CacheLoadingConcurrentGridStartSelfTest#testLoadCacheFromStore > sometimes fails. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
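The serialization problem discussed in the comment above can be reproduced with a minimal sketch. The class names here are hypothetical stand-ins for Ignite's ClusterTopologyCheckedException and its retry future; the point is only that a non-Serializable future cannot travel inside the exception unless the field is transient, in which case the client receives the exception with the future absent.

```java
// Hypothetical sketch of the problem: an exception carrying a non-Serializable
// retry future only crosses the wire if the future field is transient -- which is
// exactly why the client can end up with the exception but no retry future.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

class RetryFuture { }  // deliberately not Serializable

class TopologyCheckedException extends Exception {
    private final transient RetryFuture retryFut;  // dropped during serialization

    TopologyCheckedException(String msg, RetryFuture retryFut) {
        super(msg);
        this.retryFut = retryFut;
    }

    RetryFuture retryFuture() { return retryFut; }

    /** Simulates sending the exception from a remote node back to the client. */
    static TopologyCheckedException roundTrip(TopologyCheckedException e) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
                out.writeObject(e);
            }
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                return (TopologyCheckedException) in.readObject();
            }
        }
        catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}
```

After the round trip the message survives but retryFuture() is null, matching the behavior described in the comment.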
[jira] [Commented] (IGNITE-8863) Tx rollback can cause remote tx hang
[ https://issues.apache.org/jira/browse/IGNITE-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541350#comment-16541350 ] Igor Seliverstov commented on IGNITE-8863: -- [~ascherbakov], have looked at the changes, looks OK to me. > Tx rollback can cause remote tx hang > > > Key: IGNITE-8863 > URL: https://issues.apache.org/jira/browse/IGNITE-8863 > Project: Ignite > Issue Type: Bug >Reporter: Alexei Scherbakov >Assignee: Alexei Scherbakov >Priority: Major > Fix For: 2.7 > > Attachments: Ignite_Tests_2.4_Java_8_Cache_5_1434.log.zip > > > {noformat} > [16:33:56]W: [org.apache.ignite:ignite-core] [2018-06-08 > 13:33:56,931][WARN ][sys-#66696%client%][GridNearTxLocal] The transaction was > forcibly rolled back because a timeout is reached: > GridNearTxLocal[xid=e198a9fd361--0857-6387--0004, > xidVersion=GridCacheVersion [topVer=139944839, order=1528464836894, > nodeOrder=4], concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, > state=MARKED_ROLLBACK, invalidate=false, rollbackOnly=true, > nodeId=3c8d85b2-4eb9-46b2-8bd1-6f18f542fc7a, timeout=1, duration=11] > [16:35:55]W: [org.apache.ignite:ignite-core] [2018-06-08 > 13:35:55,056][WARN > ][grid-timeout-worker-#66394%transactions.TxRollbackOnTimeoutTest0%][diagnostic] > Found long running transaction [startTime=13:33:56.931, > curTime=13:35:55.054, tx=GridDhtTxRemote > [nearNodeId=3c8d85b2-4eb9-46b2-8bd1-6f18f542fc7a, > rmtFutId=af940d0e361-79c59341-3292-46e4-92ce-5c4ef4eddef8, > nearXidVer=GridCacheVersion [topVer=139944839, order=1528464836894, > nodeOrder=4], storeWriteThrough=false, super=GridDistributedTxRemoteAdapter > [explicitVers=null, started=true, commitAllowed=0, > txState=IgniteTxRemoteSingleStateImpl [entry=IgniteTxEntry > [key=KeyCacheObjectImpl [part=1, val=1, hasValBytes=true], cacheId=3556498, > txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=1, val=1, hasValBytes=true], > cacheId=3556498], val=[op=CREATE, val=CacheObjectImpl [val=null, > hasValBytes=true]], prevVal=[op=NOOP, 
val=null], oldVal=[op=NOOP, val=null], > entryProcessorsCol=null, ttl=-1, conflictExpireTime=-1, conflictVer=null, > explicitVer=null, dhtVer=null, filters=[], filtersPassed=false, > filtersSet=false, entry=GridDhtCacheEntry [rdrs=[], part=1, > super=GridDistributedCacheEntry [super=GridCacheMapEntry > [key=KeyCacheObjectImpl [part=1, val=1, hasValBytes=true], > val=CacheObjectImpl [val=null, hasValBytes=true], startVer=1528464836879, > ver=GridCacheVersion [topVer=139944839, order=1528464836863, nodeOrder=2], > hash=1, extras=GridCacheMvccEntryExtras [mvcc=GridCacheMvcc [locs=null, > rmts=[GridCacheMvccCandidate [nodeId=97ee44cd-73c9-4e79-95df-e1a03481, > ver=GridCacheVersion [topVer=139944839, order=1528464836897, nodeOrder=2], > threadId=75880, id=2310313, topVer=AffinityTopologyVersion [topVer=-1, > minorTopVer=0], reentry=null, > otherNodeId=3c8d85b2-4eb9-46b2-8bd1-6f18f542fc7a, otherVer=null, > mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, serOrder=null, > key=KeyCacheObjectImpl [part=1, val=1, hasValBytes=true], > masks=local=0|owner=0|ready=0|reentry=0|used=0|tx=1|single_implicit=0|dht_local=0|near_local=0|removed=0|read=0, > prevVer=null, nextVer=null], GridCacheMvccCandidate > [nodeId=97ee44cd-73c9-4e79-95df-e1a03481, ver=GridCacheVersion > [topVer=139944839, order=1528464836900, nodeOrder=2], threadId=75875, > id=2310317, topVer=AffinityTopologyVersion [topVer=-1, minorTopVer=0], > reentry=null, otherNodeId=3c8d85b2-4eb9-46b2-8bd1-6f18f542fc7a, > otherVer=null, mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null, > serOrder=null, key=KeyCacheObjectImpl [part=1, val=1, hasValBytes=true], > masks=local=0|owner=1|ready=0|reentry=0|used=1|tx=1|single_implicit=0|dht_local=0|near_local=0|removed=0|read=0, > prevVer=null, nextVer=null, flags=2]]], prepared=1, locked=false, > nodeId=null, locMapped=false, expiryPlc=null, transferExpiryPlc=false, > flags=0, partUpdateCntr=0, serReadVer=null, xidVer=null]], > skipCompletedVers=false, 
super=IgniteTxAdapter [xidVer=GridCacheVersion > [topVer=139944839, order=1528464836897, nodeOrder=2], > writeVer=GridCacheVersion [topVer=139944839, order=1528464836898, > nodeOrder=2], implicit=false, loc=false, threadId=75880, > startTime=1528464836931, nodeId=97ee44cd-73c9-4e79-95df-e1a03481, > startVer=GridCacheVersion [topVer=139944839, order=1528464836864, > nodeOrder=1], endVer=null, isolation=REPEATABLE_READ, > concurrency=PESSIMISTIC, timeout=1, sysInvalidate=false, sys=false, plc=2, > commitVer=null, finalizing=NONE, invalidParts=null, state=PREPARED, > timedOut=false, topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0], > duration=118123ms, onePhaseCommit=false >
[jira] [Assigned] (IGNITE-8962) Web console: Failed to load blob on configuration pages
[ https://issues.apache.org/jira/browse/IGNITE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov reassigned IGNITE-8962: Resolution: Fixed Assignee: Pavel Konstantinov (was: Alexey Kuznetsov) Merged to master. > Web console: Failed to load blob on configuration pages > --- > > Key: IGNITE-8962 > URL: https://issues.apache.org/jira/browse/IGNITE-8962 > Project: Ignite > Issue Type: Bug > Components: wizards >Affects Versions: 2.7 >Reporter: Vasiliy Sisko >Assignee: Pavel Konstantinov >Priority: Major > Fix For: 2.7 > > > On opening *Advanced* tab of cluster configuration in log printed several > messages: > {code:java} > Refused to create a worker from > 'blob:http://localhost:9000/171b5d08-5d9a-4966-98ba-a0649cc433ab' because it > violates the following Content Security Policy directive: "script-src 'self' > 'unsafe-inline' 'unsafe-eval' data: http: https:". Note that 'worker-src' was > not explicitly set, so 'script-src' is used as a fallback. > WorkerClient @ index.js:16810 > createWorker@ xml.js:647 > $startWorker@ index.js:9120{code} > and > {code:java} > Could not load worker DOMException: Failed to construct 'Worker': Access to > the script at > 'blob:http://localhost:9000/639cb195-acb1-4080-8983-91ca55f5b588' is denied > by the document's Content Security Policy. 
> at new WorkerClient > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:114712:28) > at Mode.createWorker > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:119990:22) > at EditSession.$startWorker > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:107022:39) > at EditSession.$onChangeMode > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106978:18) > at EditSession.setMode > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106943:18) > at setOptions (http://localhost:9000/app.9504b7da2e0719a61777.js:28327:59) > at updateOptions > (http://localhost:9000/app.9504b7da2e0719a61777.js:28498:17) > at Object.link > (http://localhost:9000/app.9504b7da2e0719a61777.js:28505:13) > at http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:61725:18 > at invokeLinkFn > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:70885:9) > warn @ index.js:3532 > $startWorker@ index.js:9122 > $onChangeMode @ index.js:9076{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
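For context on the quoted error: with no worker-src directive set, CSP falls back to script-src for worker creation, and that directive does not allow blob: URLs, so constructing a Worker from a blob is refused. A header that would permit blob-backed workers (shown purely as an illustration of the directive, not as the fix that was applied to the webpack config) looks like:

```
Content-Security-Policy: script-src 'self' 'unsafe-inline' 'unsafe-eval' data: http: https:; worker-src 'self' blob:
```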
[jira] [Closed] (IGNITE-8962) Web console: Failed to load blob on configuration pages
[ https://issues.apache.org/jira/browse/IGNITE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov closed IGNITE-8962. > Web console: Failed to load blob on configuration pages > --- > > Key: IGNITE-8962 > URL: https://issues.apache.org/jira/browse/IGNITE-8962 > Project: Ignite > Issue Type: Bug > Components: wizards >Affects Versions: 2.7 >Reporter: Vasiliy Sisko >Assignee: Alexey Kuznetsov >Priority: Major > Fix For: 2.7 > > > On opening *Advanced* tab of cluster configuration in log printed several > messages: > {code:java} > Refused to create a worker from > 'blob:http://localhost:9000/171b5d08-5d9a-4966-98ba-a0649cc433ab' because it > violates the following Content Security Policy directive: "script-src 'self' > 'unsafe-inline' 'unsafe-eval' data: http: https:". Note that 'worker-src' was > not explicitly set, so 'script-src' is used as a fallback. > WorkerClient @ index.js:16810 > createWorker@ xml.js:647 > $startWorker@ index.js:9120{code} > and > {code:java} > Could not load worker DOMException: Failed to construct 'Worker': Access to > the script at > 'blob:http://localhost:9000/639cb195-acb1-4080-8983-91ca55f5b588' is denied > by the document's Content Security Policy. 
> at new WorkerClient > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:114712:28) > at Mode.createWorker > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:119990:22) > at EditSession.$startWorker > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:107022:39) > at EditSession.$onChangeMode > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106978:18) > at EditSession.setMode > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106943:18) > at setOptions (http://localhost:9000/app.9504b7da2e0719a61777.js:28327:59) > at updateOptions > (http://localhost:9000/app.9504b7da2e0719a61777.js:28498:17) > at Object.link > (http://localhost:9000/app.9504b7da2e0719a61777.js:28505:13) > at http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:61725:18 > at invokeLinkFn > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:70885:9) > warn @ index.js:3532 > $startWorker@ index.js:9122 > $onChangeMode @ index.js:9076{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-8962) Web console: Failed to load blob on configuration pages
[ https://issues.apache.org/jira/browse/IGNITE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov updated IGNITE-8962: - Fix Version/s: 2.7 > Web console: Failed to load blob on configuration pages > --- > > Key: IGNITE-8962 > URL: https://issues.apache.org/jira/browse/IGNITE-8962 > Project: Ignite > Issue Type: Bug > Components: wizards >Affects Versions: 2.7 >Reporter: Vasiliy Sisko >Assignee: Alexey Kuznetsov >Priority: Major > Fix For: 2.7 > > > On opening *Advanced* tab of cluster configuration in log printed several > messages: > {code:java} > Refused to create a worker from > 'blob:http://localhost:9000/171b5d08-5d9a-4966-98ba-a0649cc433ab' because it > violates the following Content Security Policy directive: "script-src 'self' > 'unsafe-inline' 'unsafe-eval' data: http: https:". Note that 'worker-src' was > not explicitly set, so 'script-src' is used as a fallback. > WorkerClient @ index.js:16810 > createWorker@ xml.js:647 > $startWorker@ index.js:9120{code} > and > {code:java} > Could not load worker DOMException: Failed to construct 'Worker': Access to > the script at > 'blob:http://localhost:9000/639cb195-acb1-4080-8983-91ca55f5b588' is denied > by the document's Content Security Policy. 
> at new WorkerClient > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:114712:28) > at Mode.createWorker > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:119990:22) > at EditSession.$startWorker > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:107022:39) > at EditSession.$onChangeMode > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106978:18) > at EditSession.setMode > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106943:18) > at setOptions (http://localhost:9000/app.9504b7da2e0719a61777.js:28327:59) > at updateOptions > (http://localhost:9000/app.9504b7da2e0719a61777.js:28498:17) > at Object.link > (http://localhost:9000/app.9504b7da2e0719a61777.js:28505:13) > at http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:61725:18 > at invokeLinkFn > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:70885:9) > warn @ index.js:3532 > $startWorker@ index.js:9122 > $onChangeMode @ index.js:9076{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (IGNITE-8988) Web console: error in Readme.txt of generated project
[ https://issues.apache.org/jira/browse/IGNITE-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov resolved IGNITE-8988. -- Resolution: Fixed Assignee: Pavel Konstantinov (was: Alexey Kuznetsov) Fixed. Merged to master. > Web console: error in Readme.txt of generated project > - > > Key: IGNITE-8988 > URL: https://issues.apache.org/jira/browse/IGNITE-8988 > Project: Ignite > Issue Type: Bug > Components: wizards >Reporter: Pavel Konstantinov >Assignee: Pavel Konstantinov >Priority: Trivial > Fix For: 2.7 > > Attachments: screenshot-1.png > > > Readme.txt says XML configuration files are located in /config folder, but > actually its located in src/main/resources/META-INF > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-8988) Web console: error in Readme.txt of generated project
[ https://issues.apache.org/jira/browse/IGNITE-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov updated IGNITE-8988: - Fix Version/s: 2.7 > Web console: error in Readme.txt of generated project > - > > Key: IGNITE-8988 > URL: https://issues.apache.org/jira/browse/IGNITE-8988 > Project: Ignite > Issue Type: Bug > Components: wizards >Reporter: Pavel Konstantinov >Assignee: Alexey Kuznetsov >Priority: Trivial > Fix For: 2.7 > > Attachments: screenshot-1.png > > > Readme.txt says XML configuration files are located in /config folder, but > actually its located in src/main/resources/META-INF > !screenshot-1.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8962) Web console: Failed to load blob on configuration pages
[ https://issues.apache.org/jira/browse/IGNITE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kalinin reassigned IGNITE-8962: - Assignee: Alexey Kuznetsov (was: Alexander Kalinin) > Web console: Failed to load blob on configuration pages > --- > > Key: IGNITE-8962 > URL: https://issues.apache.org/jira/browse/IGNITE-8962 > Project: Ignite > Issue Type: Bug > Components: wizards >Affects Versions: 2.7 >Reporter: Vasiliy Sisko >Assignee: Alexey Kuznetsov >Priority: Major > > On opening *Advanced* tab of cluster configuration in log printed several > messages: > {code:java} > Refused to create a worker from > 'blob:http://localhost:9000/171b5d08-5d9a-4966-98ba-a0649cc433ab' because it > violates the following Content Security Policy directive: "script-src 'self' > 'unsafe-inline' 'unsafe-eval' data: http: https:". Note that 'worker-src' was > not explicitly set, so 'script-src' is used as a fallback. > WorkerClient @ index.js:16810 > createWorker@ xml.js:647 > $startWorker@ index.js:9120{code} > and > {code:java} > Could not load worker DOMException: Failed to construct 'Worker': Access to > the script at > 'blob:http://localhost:9000/639cb195-acb1-4080-8983-91ca55f5b588' is denied > by the document's Content Security Policy. 
> at new WorkerClient > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:114712:28) > at Mode.createWorker > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:119990:22) > at EditSession.$startWorker > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:107022:39) > at EditSession.$onChangeMode > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106978:18) > at EditSession.setMode > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:106943:18) > at setOptions (http://localhost:9000/app.9504b7da2e0719a61777.js:28327:59) > at updateOptions > (http://localhost:9000/app.9504b7da2e0719a61777.js:28498:17) > at Object.link > (http://localhost:9000/app.9504b7da2e0719a61777.js:28505:13) > at http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:61725:18 > at invokeLinkFn > (http://localhost:9000/vendors~app.df60bff111cc62fe15ec.js:70885:9) > warn @ index.js:3532 > $startWorker@ index.js:9122 > $onChangeMode @ index.js:9076{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8988) Web console: error in Readme.txt of generated project
[ https://issues.apache.org/jira/browse/IGNITE-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov reassigned IGNITE-8988: Assignee: Alexey Kuznetsov (was: Vasiliy Sisko) > Web console: error in Readme.txt of generated project > - > > Key: IGNITE-8988 > URL: https://issues.apache.org/jira/browse/IGNITE-8988 > Project: Ignite > Issue Type: Bug > Components: wizards >Reporter: Pavel Konstantinov >Assignee: Alexey Kuznetsov >Priority: Trivial > Attachments: screenshot-1.png > > > Readme.txt says XML configuration files are located in the /config folder, but > actually they are located in src/main/resources/META-INF > !screenshot-1.png!
[jira] [Comment Edited] (IGNITE-8962) Web console: Failed to load blob on configuration pages
[ https://issues.apache.org/jira/browse/IGNITE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541287#comment-16541287 ] Alexander Kalinin edited comment on IGNITE-8962 at 7/12/18 8:07 AM: After deleting the headers in the webpack config, the problem disappeared. Fixed in branch ignite-8962, [~kuaw26] was (Author: alexdel): After deleting the headers in the webpack config, the problem disappeared. > Web console: Failed to load blob on configuration pages > --- > > Key: IGNITE-8962 > URL: https://issues.apache.org/jira/browse/IGNITE-8962 > Project: Ignite > Issue Type: Bug > Components: wizards >Affects Versions: 2.7 >Reporter: Vasiliy Sisko >Assignee: Alexander Kalinin >Priority: Major
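The root cause is the CSP fallback the log message describes: with no 'worker-src' directive, the browser applies 'script-src', which here does not allow blob: URLs, so the Ace editor cannot start its worker. A hypothetical webpack-dev-server fragment (illustrative only, not the actual Web Console config) sketches the two ways out: drop the header entirely, as the fix did, or allow blob: workers explicitly:

```javascript
// Hypothetical webpack-dev-server fragment; not the actual Web Console config.
// Without 'worker-src', the browser falls back to 'script-src', which does not
// permit blob: URLs, so constructing the editor's Worker from a blob is refused.
const config = {
    devServer: {
        headers: {
            'Content-Security-Policy':
                // Appending "worker-src blob:" (or deleting this header, as was
                // done in branch ignite-8962) lets blob: workers through.
                "script-src 'self' 'unsafe-inline' 'unsafe-eval' data: http: https:; " +
                "worker-src blob:"
        }
    }
};

module.exports = config;
```

Deleting the header is the simpler fix when the dev server does not need CSP at all; the explicit worker-src variant keeps the rest of the policy in force.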
[jira] [Commented] (IGNITE-8962) Web console: Failed to load blob on configuration pages
[ https://issues.apache.org/jira/browse/IGNITE-8962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16541287#comment-16541287 ] Alexander Kalinin commented on IGNITE-8962: --- After deleting the headers in the webpack config, the problem disappeared. > Web console: Failed to load blob on configuration pages > --- > > Key: IGNITE-8962 > URL: https://issues.apache.org/jira/browse/IGNITE-8962 > Project: Ignite > Issue Type: Bug > Components: wizards >Affects Versions: 2.7 >Reporter: Vasiliy Sisko >Assignee: Alexander Kalinin >Priority: Major
[jira] [Resolved] (IGNITE-7941) Update dependencies to latest versions and migrate to caret (^) in package.json + package-lock.json
[ https://issues.apache.org/jira/browse/IGNITE-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kalinin resolved IGNITE-7941. --- Resolution: Won't Do As the research has shown, using package-lock.json and caret versions leads to many complexities and build failures. Pinned versions are preferable for now. > Update dependencies to latest versions and migrate to caret (^) in > package.json + package-lock.json > --- > > Key: IGNITE-7941 > URL: https://issues.apache.org/jira/browse/IGNITE-7941 > Project: Ignite > Issue Type: Improvement > Components: wizards >Reporter: Alexander Kalinin >Assignee: Alexander Kalinin >Priority: Minor > Fix For: 2.7 > > > We should organize the package.json files so that the latest stable > dependencies are installed: > * move to caret version notation. > * add package-lock.json to git versioning.
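To make the trade-off concrete, here is a hypothetical dependency fragment (the package name and versions are illustrative, not the Web Console's real ones): a caret range lets npm resolve any compatible minor or patch release at install time, while a pinned version makes every install reproducible, which is the behavior that was kept:

```javascript
// Hypothetical package.json dependency fragments; versions are illustrative.
// Caret range: npm may install 1.6.6, 1.6.9, 1.7.0, ... — anything below 2.0.0,
// so two machines can end up with different dependency trees.
const caretStyle = { dependencies: { angular: '^1.6.6' } };

// Pinned version: npm installs exactly 1.6.6 everywhere, so builds stay
// reproducible without depending on package-lock.json semantics.
const pinnedStyle = { dependencies: { angular: '1.6.6' } };

// A caret range is just the pinned version prefixed with '^':
console.log(caretStyle.dependencies.angular === '^' + pinnedStyle.dependencies.angular); // prints "true"
```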
[jira] [Updated] (IGNITE-8991) Failed server node if predicate in scan query throws AssertionError
[ https://issues.apache.org/jira/browse/IGNITE-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexand Polyakov updated IGNITE-8991: - Description: Reproducer attached ([attach|^RunFD7747.java]). Server nodes stop whenever a scan query predicate throws an Error, because of this code: {code} org/apache/ignite/internal/processors/cache/query/GridCacheQueryManager.java:1335 if (e instanceof Error) throw (Error)e; {code} Executing the following query kills the server node: {code} cache.query(new ScanQuery<>(new IgniteBiPredicate() { @Override public boolean apply(Object key, Object value) { throw new AssertionError("It's not Exception, it's worse."); } })); {code} > Failed server node if predicate in scan query throws AssertionError > -- > > Key: IGNITE-8991 > URL: https://issues.apache.org/jira/browse/IGNITE-8991 > Project: Ignite > Issue Type: Improvement > Components: cache >Affects Versions: 2.5 >Reporter: Alexand Polyakov >Priority: Major > Attachments: RunFD7747.java
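The quoted GridCacheQueryManager branch rethrows any Error that escapes user code, taking the node down. As a client-side mitigation, the predicate body can be wrapped so an Error is converted into an unchecked exception, which that branch does not rethrow. The sketch below is a hypothetical helper, not part of the Ignite API, and uses plain java.util.function.BiPredicate as a dependency-free stand-in for IgniteBiPredicate:

```java
import java.util.function.BiPredicate;

/**
 * Hypothetical helper (not part of the Ignite API): wraps a filter predicate
 * so that an Error thrown by user code becomes a RuntimeException instead of
 * propagating and killing the server node.
 */
public class SafePredicateSketch {
    static <K, V> BiPredicate<K, V> safe(BiPredicate<K, V> delegate) {
        return (k, v) -> {
            try {
                return delegate.test(k, v);
            }
            catch (Error e) {
                // An Error would hit the "if (e instanceof Error) throw (Error)e;"
                // branch on the server; a RuntimeException does not.
                throw new RuntimeException("Predicate failed for key " + k, e);
            }
        };
    }

    public static void main(String[] args) {
        BiPredicate<Integer, String> p = safe((k, v) -> {
            throw new AssertionError("It's not Exception, it's worse.");
        });
        try {
            p.test(1, "value");
        }
        catch (RuntimeException e) {
            System.out.println(e.getCause().getClass().getSimpleName()); // prints "AssertionError"
        }
    }
}
```

With IgniteBiPredicate the same try/catch pattern goes inside apply(Object key, Object value); the query then fails with a reported exception on the client instead of crashing the server node.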
[jira] [Updated] (IGNITE-8991) Failed server node if predicate in scan query throws AssertionError
[ https://issues.apache.org/jira/browse/IGNITE-8991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexand Polyakov updated IGNITE-8991: - Attachment: RunFD7747.java > Failed server node if predicate in scan query throws AssertionError > -- > > Key: IGNITE-8991 > URL: https://issues.apache.org/jira/browse/IGNITE-8991 > Project: Ignite > Issue Type: Improvement > Components: cache >Affects Versions: 2.5 >Reporter: Alexand Polyakov >Priority: Major > Attachments: RunFD7747.java > >
[jira] [Created] (IGNITE-8991) Failed server node if predicate in scan query throws AssertionError
Alexand Polyakov created IGNITE-8991: Summary: Failed server node if predicate in scan query throws AssertionError Key: IGNITE-8991 URL: https://issues.apache.org/jira/browse/IGNITE-8991 Project: Ignite Issue Type: Improvement Components: cache Affects Versions: 2.5 Reporter: Alexand Polyakov