[jira] [Commented] (IGNITE-9353) Remove "Known issue, possible deadlock in case of low priority cache rebalancing delayed" comment from GridCacheRebalancingSyncSelfTest#getConfiguration
[ https://issues.apache.org/jira/browse/IGNITE-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16591213#comment-16591213 ] Maxim Muzafarov commented on IGNITE-9353: - [~roman_s] Thank you for the contribution! > Remove "Known issue, possible deadlock in case of low priority cache > rebalancing delayed" comment from > GridCacheRebalancingSyncSelfTest#getConfiguration > > > Key: IGNITE-9353 > URL: https://issues.apache.org/jira/browse/IGNITE-9353 > Project: Ignite > Issue Type: Task >Reporter: Roman Shtykh >Assignee: Roman Shtykh >Priority: Trivial > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9353) Remove "Known issue, possible deadlock in case of low priority cache rebalancing delayed" comment from GridCacheRebalancingSyncSelfTest#getConfiguration
[ https://issues.apache.org/jira/browse/IGNITE-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16591016#comment-16591016 ] ASF GitHub Bot commented on IGNITE-9353: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/4599 > Remove "Known issue, possible deadlock in case of low priority cache > rebalancing delayed" comment from > GridCacheRebalancingSyncSelfTest#getConfiguration > > > Key: IGNITE-9353 > URL: https://issues.apache.org/jira/browse/IGNITE-9353 > Project: Ignite > Issue Type: Task >Reporter: Roman Shtykh >Assignee: Roman Shtykh >Priority: Trivial > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9353) Remove "Known issue, possible deadlock in case of low priority cache rebalancing delayed" comment from GridCacheRebalancingSyncSelfTest#getConfiguration
[ https://issues.apache.org/jira/browse/IGNITE-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16591011#comment-16591011 ] Roman Shtykh commented on IGNITE-9353: -- [~Mmuzaf] Thanks for checking Cache 8 Suite. Looks like we have a flaky test that is not related to this change. Thanks for pointers to {{FailureHandler}}! > Remove "Known issue, possible deadlock in case of low priority cache > rebalancing delayed" comment from > GridCacheRebalancingSyncSelfTest#getConfiguration > > > Key: IGNITE-9353 > URL: https://issues.apache.org/jira/browse/IGNITE-9353 > Project: Ignite > Issue Type: Task >Reporter: Roman Shtykh >Assignee: Roman Shtykh >Priority: Trivial > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-9305) Wrong off-heap size is reported for a node
[ https://issues.apache.org/jira/browse/IGNITE-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590964#comment-16590964 ] Denis Magda edited comment on IGNITE-9305 at 8/24/18 12:06 AM: --- [~xtern], That's a good point. Let's show the aggregated information first and the breakdown by specific regions as a sublist. Plus please add "region" to the region names. For instance, "default" -> "default region", "systemMemPlc" -> "systemMemPlc region" Please send a final format for the review. was (Author: dmagda): [~xtern], That's a good point. Let's show the aggregated information first and the breakdown by specific regions as a sublist. Plus please add "region" to the region names. For instance, "default" -> "default region", "systemMemPlc" -> "systemMemPlc region" > Wrong off-heap size is reported for a node > -- > > Key: IGNITE-9305 > URL: https://issues.apache.org/jira/browse/IGNITE-9305 > Project: Ignite > Issue Type: Task >Affects Versions: 2.6 >Reporter: Denis Magda >Assignee: Pavel Pereslegin >Priority: Blocker > Fix For: 2.7 > > > Was troubleshooting an Ignite deployment today and couldn't find out from the > logs what was the actual off-heap space used. 
> Those were the given memory resoures (Ignite 2.6): > {code} > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager] Topology > snapshot [ver=1, servers=1, clients=0, CPUs=64, offheap=30.0GB, heap=24.0GB] > {code} > And that weird stuff was reported by the node (pay attention to the last > line): > {code} > [2018-08-16 15:45:50,211][INFO > ][grid-timeout-worker-#135%cluster_31-Dec-2017%][IgniteKernal%cluster_31-Dec-2017] > > Metrics for local node (to disable set 'metricsLogFrequency' to 0) > ^-- Node [id=c033026e, name=cluster_31-Dec-2017, uptime=00:38:00.257] > ^-- H/N/C [hosts=1, nodes=1, CPUs=64] > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%] > ^-- PageMemory [pages=6997377] > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB] > ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this line is always the > same! > {code} > Had to change the code by using > {code}dataRegion.getPhysicalMemoryPages(){code} to find out that actual > off-heap usage size was > {code} > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB > {code} > The logs have to report the following instead: > {code} > ^-- Off-heap {Data Region 1} [used={dataRegion1.getPhysicalMemorySize()}, > free=X%, comm=dataRegion1.maxSize()] > ^-- Off-heap {Data Region 2} [used={dataRegion2.getPhysicalMemorySize()}, > free=X%, comm=dataRegion2.maxSize()] > {code} > If Ignite persistence is enabled then the following extra lines have to be > added to see the disk used space: > {code} > ^-- Ignite persistence {Data Region 1}: > used={dataRegion1.getTotalAllocatedSize() - > dataRegion1.getPhysicalMemorySize()} > ^-- Ignite persistence {Data Region 2} > [used={dataRegion2.getTotalAllocatedSize() - > dataRegion2.getPhysicalMemorySize()}] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
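The per-region lines proposed in the issue boil down to formatting used/free/committed values from each region's physical memory size and configured maximum. A minimal self-contained sketch follows; `RegionMetrics` is a hypothetical stand-in for Ignite's `DataRegionMetrics`, and the field and method names are illustrative, not the actual Ignite API. The sample numbers are taken from the issue text (~27324 MB used out of a 30 GB region).

```java
import java.util.List;
import java.util.Locale;

/**
 * Sketch of the per-region log line proposed above. RegionMetrics is a
 * hypothetical stand-in for Ignite's DataRegionMetrics; names are
 * illustrative, not the actual Ignite API.
 */
public class OffHeapMetricsSketch {
    static final class RegionMetrics {
        final String name;
        final long physicalMemorySize; // Off-heap bytes actually in use.
        final long maxSize;            // Configured region maximum, in bytes.

        RegionMetrics(String name, long physicalMemorySize, long maxSize) {
            this.name = name;
            this.physicalMemorySize = physicalMemorySize;
            this.maxSize = maxSize;
        }
    }

    /** Formats one "^-- Off-heap {region}" line in the proposed style. */
    static String formatRegionLine(RegionMetrics m) {
        long usedMb = m.physicalMemorySize / (1024 * 1024);
        long commMb = m.maxSize / (1024 * 1024);
        double freePct = 100.0 * (m.maxSize - m.physicalMemorySize) / m.maxSize;

        return String.format(Locale.ROOT,
            "^-- Off-heap {%s} [used=%dMB, free=%.2f%%, comm=%dMB]",
            m.name, usedMb, freePct, commMb);
    }

    public static void main(String[] args) {
        // Numbers from the issue: ~27324 MB used out of a 30 GB (30720 MB) region.
        List<RegionMetrics> regions = List.of(
            new RegionMetrics("default region", 28651614208L, 32212254720L));

        for (RegionMetrics m : regions)
            System.out.println(formatRegionLine(m));
    }
}
```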
[jira] [Commented] (IGNITE-9305) Wrong off-heap size is reported for a node
[ https://issues.apache.org/jira/browse/IGNITE-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590964#comment-16590964 ] Denis Magda commented on IGNITE-9305: - [~xtern], That's a good point. Let's show the aggregated information first and the breakdown by specific regions as a sublist. Plus please add "region" to the region names. For instance, "default" -> "default region", "systemMemPlc" -> "systemMemPlc region" > Wrong off-heap size is reported for a node > -- > > Key: IGNITE-9305 > URL: https://issues.apache.org/jira/browse/IGNITE-9305 > Project: Ignite > Issue Type: Task >Affects Versions: 2.6 >Reporter: Denis Magda >Assignee: Pavel Pereslegin >Priority: Blocker > Fix For: 2.7 > > > Was troubleshooting an Ignite deployment today and couldn't find out from the > logs what was the actual off-heap space used. > Those were the given memory resoures (Ignite 2.6): > {code} > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager] Topology > snapshot [ver=1, servers=1, clients=0, CPUs=64, offheap=30.0GB, heap=24.0GB] > {code} > And that weird stuff was reported by the node (pay attention to the last > line): > {code} > [2018-08-16 15:45:50,211][INFO > ][grid-timeout-worker-#135%cluster_31-Dec-2017%][IgniteKernal%cluster_31-Dec-2017] > > Metrics for local node (to disable set 'metricsLogFrequency' to 0) > ^-- Node [id=c033026e, name=cluster_31-Dec-2017, uptime=00:38:00.257] > ^-- H/N/C [hosts=1, nodes=1, CPUs=64] > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%] > ^-- PageMemory [pages=6997377] > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB] > ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this line is always the > same! 
> {code} > Had to change the code by using > {code}dataRegion.getPhysicalMemoryPages(){code} to find out that actual > off-heap usage size was > {code} > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB > {code} > The logs have to report the following instead: > {code} > ^-- Off-heap {Data Region 1} [used={dataRegion1.getPhysicalMemorySize()}, > free=X%, comm=dataRegion1.maxSize()] > ^-- Off-heap {Data Region 2} [used={dataRegion2.getPhysicalMemorySize()}, > free=X%, comm=dataRegion2.maxSize()] > {code} > If Ignite persistence is enabled then the following extra lines have to be > added to see the disk used space: > {code} > ^-- Ignite persistence {Data Region 1}: > used={dataRegion1.getTotalAllocatedSize() - > dataRegion1.getPhysicalMemorySize()} > ^-- Ignite persistence {Data Region 2} > [used={dataRegion2.getTotalAllocatedSize() - > dataRegion2.getPhysicalMemorySize()}] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9147) Race between tx async rollback and lock mapping on near node can produce hanging primary tx
[ https://issues.apache.org/jira/browse/IGNITE-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590860#comment-16590860 ] Alexei Scherbakov commented on IGNITE-9147: --- The TC run revealed an issue reproduced by a test from the Basic suite. Fixed; a new run is scheduled. > Race between tx async rollback and lock mapping on near node can produce > hanging primary tx > --- > > Key: IGNITE-9147 > URL: https://issues.apache.org/jira/browse/IGNITE-9147 > Project: Ignite > Issue Type: Bug > Components: general >Affects Versions: 2.5 >Reporter: ARomantsov >Assignee: Alexei Scherbakov >Priority: Critical > Fix For: 2.7 > > > I ran a simple test > 1) Start 15 server nodes > 2) Start a client with a long transaction > 3) Additionally start 5 clients with loading into many caches (near 2 thousand) > 4) Stop 1 server node, wait 1 minute and start it back > Cluster freezes for more than an hour -- This message was sent by Atlassian JIRA (v7.6.3#76005)

[jira] [Commented] (IGNITE-6055) SQL: Add String length constraint
[ https://issues.apache.org/jira/browse/IGNITE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590858#comment-16590858 ] Nikolay Izhikov commented on IGNITE-6055: - [~isapego] I've fixed your comments and added a new test. Please, review. Tests results - https://ci.ignite.apache.org/viewLog.html?buildId=1719883&tab=queuedBuildOverviewTab > SQL: Add String length constraint > - > > Key: IGNITE-6055 > URL: https://issues.apache.org/jira/browse/IGNITE-6055 > Project: Ignite > Issue Type: Task > Components: sql >Affects Versions: 2.1 >Reporter: Vladimir Ozerov >Assignee: Nikolay Izhikov >Priority: Major > Labels: sql-engine > Fix For: 2.7 > > > We should support {{CHAR(X)}} and {{VARCHAR{X}} syntax. Currently, we ignore > it. First, it affects semantics. E.g., one can insert a string with greater > length into a cache/table without any problems. Second, it limits efficiency > of our default configuration. E.g., index inline cannot be applied to > {{String}} data type as we cannot guess it's length. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
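The semantic half of the {{CHAR(X)}}/{{VARCHAR(X)}} constraint amounts to a length check at insert time: a value longer than the declared precision should be rejected rather than silently stored. A minimal sketch of that check; the method name and exception are illustrative, not Ignite's actual validation API:

```java
/**
 * Sketch of the length check a CHAR(X)/VARCHAR(X) constraint implies: a value
 * longer than the declared precision is rejected at insert time. The method
 * name and exception are illustrative, not Ignite's actual validation API.
 */
public class VarcharConstraintSketch {
    /** Throws if the value exceeds the declared column precision. */
    static void checkLength(String column, String value, int precision) {
        if (value != null && value.length() > precision)
            throw new IllegalArgumentException("Value too long for column '" + column
                + "': length " + value.length() + " exceeds declared precision " + precision);
    }

    public static void main(String[] args) {
        checkLength("NAME", "abc", 10); // Fits: no exception.

        try {
            checkLength("NAME", "abcdefghijk", 10); // 11 chars > precision 10.
        }
        catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The same per-column precision is also what would let index inlining pick a sensible inline size for {{String}} fields, as the description notes.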
[jira] [Created] (IGNITE-9365) Force backups to different AWS availability zones using only Spring XML
David Harvey created IGNITE-9365: Summary: Force backups to different AWS availability zones using only Spring XML Key: IGNITE-9365 URL: https://issues.apache.org/jira/browse/IGNITE-9365 Project: Ignite Issue Type: Improvement Components: cache Environment: Reporter: David Harvey Assignee: David Harvey Fix For: 2.7 As a developer, I want to be able to force cache backups each to a different "Availability Zone", when I'm running out-of-the-box Ignite, without additional Jars installed. "Availability zone" is an AWS feature with different names for the same function by other cloud providers. A single availability zone has the characteristic that some or all of the EC2 instances in that zone can fail together due to a single fault. You have no control over the hosts on which the EC2 instance VMs run in AWS, except by controlling the availability zone. I could write a few lines of a custom affinityBackupFilter and configure it in a RendezvousAffinityFunction, but then I have to get it deployed on all nodes in the cluster, and peer class loading will not work for this. The code to do this should just be part of Ignite. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9365) Force backups to different AWS availability zones using only Spring XML
[ https://issues.apache.org/jira/browse/IGNITE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590829#comment-16590829 ] David Harvey commented on IGNITE-9365: -- I was thinking of adding a new class along these lines, where the apply function will return true only if none of the node's attributes match those of any of the nodes in the list. This would become part of the code base, but would only be used if configured as the affinityBackupFilter: {code} class ClusterNodeNoAttributesMatchBiPredicate implements IgniteBiPredicate<ClusterNode, List<ClusterNode>> { ClusterNodeNoAttributesMatchBiPredicate(String[] attributeNames) { ... } } {code} For AvailabilityZones, there would be only one attribute examined, but we have some potential use cases for distributing backups across two sub-groups of an AZ. Alternatively, we could enhance the RendezvousAffinityFunction to allow one or more arbitrary attributes to be compared to determine neighbors, rather than only org.apache.ignite.macs, and to add a setting that controls whether backups should be placed on neighbors if they can't be placed anywhere else. If I have 2 backups and three availability zones (AZ), I want one copy of the data in each AZ. If all nodes in one AZ fail, I want to be able to decide to try to get to three copies anyway, increasing the per-node footprint by 50%, or to only run with one backup. This would also give me a convoluted way to change the number of backups of a cache dynamically: start the cache with a large number of backups, but don't provide a location where the backup would be allowed to run initially.
> Force backups to different AWS availability zones using only Spring XML > --- > > Key: IGNITE-9365 > URL: https://issues.apache.org/jira/browse/IGNITE-9365 > Project: Ignite > Issue Type: Improvement > Components: cache > Environment: >Reporter: David Harvey >Assignee: David Harvey >Priority: Minor > Fix For: 2.7 > > Original Estimate: 168h > Remaining Estimate: 168h > > As a developer, I want to be able to force cache backups each to a different > "Availability Zone", when I'm running out-of-the-box Ignite, without > additional Jars installed. "Availability zone" is a AWS feature with > different names for the same function by other cloud providers. A single > availability zone has the characteristic that some or all of the EC2 > instances in that zone can fail together due to a single fault. You have no > control over the hosts on which the EC2 instance VMs run on in AWS, except by > controlling the availability zone . > > I could write a few lines of a custom affinityBackupFilter, and configure it > a RendezvousAffinityFunction, but then I have to get it deployed on all nodes > in the cluster, and peer class loading will not work to this. The code to > do this should just be part of Ignite. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
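The predicate sketched in the comment above can be written out as follows. In Ignite, the affinityBackupFilter on RendezvousAffinityFunction is an IgniteBiPredicate<ClusterNode, List<ClusterNode>>; here ClusterNode is replaced by a plain attribute-map stub so the sketch is self-contained, and the class name, method name, and the "AVAILABILITY_ZONE" attribute are illustrative assumptions:

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;

/**
 * Sketch of the predicate described in the comment above. In Ignite the
 * affinityBackupFilter is an IgniteBiPredicate<ClusterNode, List<ClusterNode>>;
 * ClusterNode is stubbed here with a plain attribute map, and the
 * "AVAILABILITY_ZONE" attribute name is an assumption for illustration.
 */
public class NoAttributesMatchSketch {
    static final class Node {
        final Map<String, Object> attrs;

        Node(Map<String, Object> attrs) { this.attrs = attrs; }

        Object attribute(String name) { return attrs.get(name); }
    }

    /**
     * Returns true only if the candidate shares none of the named attribute
     * values with any node already chosen for the partition, i.e. a backup is
     * accepted only in an availability zone that holds no copy yet.
     */
    static boolean noAttributesMatch(Node candidate, List<Node> chosen, String... attrNames) {
        for (Node n : chosen)
            for (String attr : attrNames)
                if (Objects.equals(candidate.attribute(attr), n.attribute(attr)))
                    return false;

        return true;
    }

    public static void main(String[] args) {
        Node primary = new Node(Map.of("AVAILABILITY_ZONE", "us-east-1a"));
        Node sameZone = new Node(Map.of("AVAILABILITY_ZONE", "us-east-1a"));
        Node otherZone = new Node(Map.of("AVAILABILITY_ZONE", "us-east-1b"));

        System.out.println(noAttributesMatch(sameZone, List.of(primary), "AVAILABILITY_ZONE"));
        System.out.println(noAttributesMatch(otherZone, List.of(primary), "AVAILABILITY_ZONE"));
    }
}
```

Configured as a backup filter, this would make the affinity function skip any candidate backup whose zone attribute matches a node that already holds a copy of the partition.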
[jira] [Resolved] (IGNITE-9364) SetTxTimeoutOnPartitionMapExchangeTest.java hangs on TC
[ https://issues.apache.org/jira/browse/IGNITE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Daschinskiy resolved IGNITE-9364. -- Resolution: Not A Bug The problem is in IGNITE-9147 and needs to be fixed there. > SetTxTimeoutOnPartitionMapExchangeTest.java hangs on TC > --- > > Key: IGNITE-9364 > URL: https://issues.apache.org/jira/browse/IGNITE-9364 > Project: Ignite > Issue Type: Bug >Reporter: Alexei Scherbakov >Assignee: Ivan Daschinskiy >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.7 > > Attachments: Ignite_Tests_2.4_Java_8_Basic_1_3255.log.zip > > > Failed run: > https://ci.ignite.apache.org/viewLog.html?buildId=1707476&buildTypeId=IgniteTests24Java8_Basic1&tab=buildLog -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Closed] (IGNITE-9364) SetTxTimeoutOnPartitionMapExchangeTest.java hangs on TC
[ https://issues.apache.org/jira/browse/IGNITE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan Daschinskiy closed IGNITE-9364. > SetTxTimeoutOnPartitionMapExchangeTest.java hangs on TC > --- > > Key: IGNITE-9364 > URL: https://issues.apache.org/jira/browse/IGNITE-9364 > Project: Ignite > Issue Type: Bug >Reporter: Alexei Scherbakov >Assignee: Ivan Daschinskiy >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.7 > > Attachments: Ignite_Tests_2.4_Java_8_Basic_1_3255.log.zip > > > Failed run: > https://ci.ignite.apache.org/viewLog.html?buildId=1707476&buildTypeId=IgniteTests24Java8_Basic1&tab=buildLog -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9364) SetTxTimeoutOnPartitionMapExchangeTest.java hangs on TC
[ https://issues.apache.org/jira/browse/IGNITE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590707#comment-16590707 ] Ivan Daschinskiy commented on IGNITE-9364: -- [~ascherbakov] The test hangs only in your branch. During the test I triggered a deadlock, which should be rolled back during exchange by calling rollbackOnTopologyChange. But it hangs forever in GridDhtColocatedLockFuture#cancel while waiting for mappingsReady == true. > SetTxTimeoutOnPartitionMapExchangeTest.java hangs on TC > --- > > Key: IGNITE-9364 > URL: https://issues.apache.org/jira/browse/IGNITE-9364 > Project: Ignite > Issue Type: Bug >Reporter: Alexei Scherbakov >Assignee: Ivan Daschinskiy >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.7 > > Attachments: Ignite_Tests_2.4_Java_8_Basic_1_3255.log.zip > > > Failed run: > https://ci.ignite.apache.org/viewLog.html?buildId=1707476&buildTypeId=IgniteTests24Java8_Basic1&tab=buildLog -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9363) Jetty tests forget to stop nodes on finished.
[ https://issues.apache.org/jira/browse/IGNITE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590595#comment-16590595 ] ASF GitHub Bot commented on IGNITE-9363: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/4609 > Jetty tests forget to stop nodes on finished. > - > > Key: IGNITE-9363 > URL: https://issues.apache.org/jira/browse/IGNITE-9363 > Project: Ignite > Issue Type: Improvement > Components: clients >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.7 > > > JettyRestProcessorCommonSelfTest.afterTestsStopped() method should call it's > super. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9363) Jetty tests forget to stop nodes on finished.
[ https://issues.apache.org/jira/browse/IGNITE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Pavlov updated IGNITE-9363: --- Fix Version/s: 2.7 > Jetty tests forget to stop nodes on finished. > - > > Key: IGNITE-9363 > URL: https://issues.apache.org/jira/browse/IGNITE-9363 > Project: Ignite > Issue Type: Improvement > Components: clients >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain > Fix For: 2.7 > > > JettyRestProcessorCommonSelfTest.afterTestsStopped() method should call it's > super. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9363) Jetty tests forget to stop nodes on finished.
[ https://issues.apache.org/jira/browse/IGNITE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov updated IGNITE-9363: - Description: JettyRestProcessorCommonSelfTest.afterTestsStopped() method should call it's super. > Jetty tests forget to stop nodes on finished. > - > > Key: IGNITE-9363 > URL: https://issues.apache.org/jira/browse/IGNITE-9363 > Project: Ignite > Issue Type: Improvement > Components: clients >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain > > JettyRestProcessorCommonSelfTest.afterTestsStopped() method should call it's > super. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-9363) Jetty tests forget to stop nodes on finished.
[ https://issues.apache.org/jira/browse/IGNITE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov reassigned IGNITE-9363: Assignee: Andrew Mashenkov > Jetty tests forget to stop nodes on finished. > - > > Key: IGNITE-9363 > URL: https://issues.apache.org/jira/browse/IGNITE-9363 > Project: Ignite > Issue Type: Improvement > Components: clients >Reporter: Andrew Mashenkov >Assignee: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9364) SetTxTimeoutOnPartitionMapExchangeTest.java hangs on TC
Alexei Scherbakov created IGNITE-9364: - Summary: SetTxTimeoutOnPartitionMapExchangeTest.java hangs on TC Key: IGNITE-9364 URL: https://issues.apache.org/jira/browse/IGNITE-9364 Project: Ignite Issue Type: Bug Reporter: Alexei Scherbakov Assignee: Ivan Daschinskiy Fix For: 2.7 Attachments: Ignite_Tests_2.4_Java_8_Basic_1_3255.log.zip Failed run: https://ci.ignite.apache.org/viewLog.html?buildId=1707476&buildTypeId=IgniteTests24Java8_Basic1&tab=buildLog -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9362) SQL: Remove NODES.IS_LOCAL attribute
[ https://issues.apache.org/jira/browse/IGNITE-9362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590543#comment-16590543 ] ASF GitHub Bot commented on IGNITE-9362: GitHub user alex-plekhanov opened a pull request: https://github.com/apache/ignite/pull/4610 IGNITE-9362 SQL: Remove NODES.IS_LOCAL attribute You can merge this pull request into a Git repository by running: $ git pull https://github.com/alex-plekhanov/ignite ignite-9362 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4610.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4610 commit 51778d7edb1af6793f505a05cec147cc29320d40 Author: Aleksey Plekhanov Date: 2018-08-23T17:03:23Z IGNITE-9362 SQL: Remove NODES.IS_LOCAL attribute > SQL: Remove NODES.IS_LOCAL attribute > > > Key: IGNITE-9362 > URL: https://issues.apache.org/jira/browse/IGNITE-9362 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Aleksey Plekhanov >Priority: Major > Fix For: 2.7 > > > We need to remove {{IS_LOCAL}} attribute from {{NODES}} system view. This > attribute doesn't make sense: it depends on where SQL query is executed. When > executed from JDBC/ODBC driver user will received strange result, when remote > node is displayed as local. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9363) Jetty tests forget to stop nodes on finished.
[ https://issues.apache.org/jira/browse/IGNITE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590542#comment-16590542 ] ASF GitHub Bot commented on IGNITE-9363: GitHub user AMashenkov opened a pull request: https://github.com/apache/ignite/pull/4609 IGNITE-9363: Fix Jetty tests. Minor refactoring. You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-9363 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4609.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4609 commit bfba8a163890e163d8623ece25dbedfc7b5cd754 Author: Andrey V. Mashenkov Date: 2018-08-23T16:15:19Z GG-14137: Fix Jetty tests. Minor refactoring. Signed-off-by: Andrey V. Mashenkov > Jetty tests forget to stop nodes on finished. > - > > Key: IGNITE-9363 > URL: https://issues.apache.org/jira/browse/IGNITE-9363 > Project: Ignite > Issue Type: Improvement > Components: clients >Reporter: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9363) Jetty tests forget to stop nodes on finished.
[ https://issues.apache.org/jira/browse/IGNITE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov updated IGNITE-9363: - Component/s: clients > Jetty tests forget to stop nodes on finished. > - > > Key: IGNITE-9363 > URL: https://issues.apache.org/jira/browse/IGNITE-9363 > Project: Ignite > Issue Type: Improvement > Components: clients >Reporter: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9363) Jetty tests forget to stop nodes on finished.
[ https://issues.apache.org/jira/browse/IGNITE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov updated IGNITE-9363: - Labels: MakeTeamcityGreenAgain (was: ) > Jetty tests forget to stop nodes on finished. > - > > Key: IGNITE-9363 > URL: https://issues.apache.org/jira/browse/IGNITE-9363 > Project: Ignite > Issue Type: Improvement >Reporter: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9363) Jetty tests forget to stop nodes on finished.
[ https://issues.apache.org/jira/browse/IGNITE-9363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Mashenkov updated IGNITE-9363: - Ignite Flags: (was: Docs Required) > Jetty tests forget to stop nodes on finished. > - > > Key: IGNITE-9363 > URL: https://issues.apache.org/jira/browse/IGNITE-9363 > Project: Ignite > Issue Type: Improvement >Reporter: Andrew Mashenkov >Priority: Major > Labels: MakeTeamcityGreenAgain > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9363) Jetty tests forget to stop nodes on finished.
Andrew Mashenkov created IGNITE-9363: Summary: Jetty tests forget to stop nodes on finished. Key: IGNITE-9363 URL: https://issues.apache.org/jira/browse/IGNITE-9363 Project: Ignite Issue Type: Improvement Reporter: Andrew Mashenkov -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9246) Transactions can wait for topology future on remap for a long time even if timeout is set.
[ https://issues.apache.org/jira/browse/IGNITE-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590536#comment-16590536 ] Pavel Kovalenko commented on IGNITE-9246: - [~ascherbakov] Looks good to me. > Transactions can wait for topology future on remap for a long time even if > timeout is set. > -- > > Key: IGNITE-9246 > URL: https://issues.apache.org/jira/browse/IGNITE-9246 > Project: Ignite > Issue Type: Improvement >Reporter: Alexei Scherbakov >Assignee: Alexei Scherbakov >Priority: Major > Fix For: 2.7 > > > This is possible if long PME is occured during tx remap phase. > Fix: wait for new topology on remap with timeout if set. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-9362) SQL: Remove NODES.IS_LOCAL attribute
[ https://issues.apache.org/jira/browse/IGNITE-9362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Plekhanov reassigned IGNITE-9362: - Assignee: Aleksey Plekhanov > SQL: Remove NODES.IS_LOCAL attribute > > > Key: IGNITE-9362 > URL: https://issues.apache.org/jira/browse/IGNITE-9362 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Vladimir Ozerov >Assignee: Aleksey Plekhanov >Priority: Major > Fix For: 2.7 > > > We need to remove {{IS_LOCAL}} attribute from {{NODES}} system view. This > attribute doesn't make sense: it depends on where SQL query is executed. When > executed from JDBC/ODBC driver user will received strange result, when remote > node is displayed as local. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-9361) Remove IgniteInternalCache.isMongo*Cache() and other such stuff
[ https://issues.apache.org/jira/browse/IGNITE-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev reassigned IGNITE-9361: --- Assignee: Ilya Kasnacheev > Remove IgniteInternalCache.isMongo*Cache() and other such stuff > --- > > Key: IGNITE-9361 > URL: https://issues.apache.org/jira/browse/IGNITE-9361 > Project: Ignite > Issue Type: Improvement >Reporter: Ilya Kasnacheev >Assignee: Ilya Kasnacheev >Priority: Minor > > Nobody needs it for a long time already. It's all internal API, we can drop > it outright. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9360) Destroy SnapTreeMap and related classes
[ https://issues.apache.org/jira/browse/IGNITE-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev updated IGNITE-9360: Description: It's not used anywhere and noone wants it, and it's a solid block of code. (was: It's not used anywhere and noone wants it, and it's a solid block of code. On slightly unrelated note, GridCacheProxyImpl.isMongoDataCache() and friends have to go probably.) > Destroy SnapTreeMap and related classes > --- > > Key: IGNITE-9360 > URL: https://issues.apache.org/jira/browse/IGNITE-9360 > Project: Ignite > Issue Type: Improvement >Reporter: Ilya Kasnacheev >Assignee: Ilya Kasnacheev >Priority: Minor > > It's not used anywhere and noone wants it, and it's a solid block of code. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9361) Remove IgniteInternalCache.isMongo*Cache() and other such stuff
Ilya Kasnacheev created IGNITE-9361: --- Summary: Remove IgniteInternalCache.isMongo*Cache() and other such stuff Key: IGNITE-9361 URL: https://issues.apache.org/jira/browse/IGNITE-9361 Project: Ignite Issue Type: Improvement Reporter: Ilya Kasnacheev Nobody has needed it for a long time. It's all internal API; we can drop it outright. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9362) SQL: Remove NODES.IS_LOCAL attribute
Vladimir Ozerov created IGNITE-9362: --- Summary: SQL: Remove NODES.IS_LOCAL attribute Key: IGNITE-9362 URL: https://issues.apache.org/jira/browse/IGNITE-9362 Project: Ignite Issue Type: Task Components: sql Reporter: Vladimir Ozerov Fix For: 2.7 We need to remove the {{IS_LOCAL}} attribute from the {{NODES}} system view. This attribute doesn't make sense: it depends on where the SQL query is executed. When executed from a JDBC/ODBC driver, the user will receive a strange result, where a remote node is displayed as local. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9360) Destroy SnapTreeMap and related classes
Ilya Kasnacheev created IGNITE-9360: --- Summary: Destroy SnapTreeMap and related classes Key: IGNITE-9360 URL: https://issues.apache.org/jira/browse/IGNITE-9360 Project: Ignite Issue Type: Improvement Reporter: Ilya Kasnacheev Assignee: Ilya Kasnacheev It's not used anywhere, no one wants it, and it's a solid block of code. On a slightly unrelated note, GridCacheProxyImpl.isMongoDataCache() and friends probably have to go as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-8158) Missed cleanups if afterTestsStop throws exception
[ https://issues.apache.org/jira/browse/IGNITE-8158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590394#comment-16590394 ] Nikolai Kulagin edited comment on IGNITE-8158 at 8/23/18 3:43 PM: -- Wrap the method afterTestsStopped() call with try/catch block. PR: [https://github.com/apache/ignite/pull/4464/] TC: [https://ci.ignite.apache.org/viewLog.html?buildId=1626362&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_RunAll] Project is buildable, tests are ok, flacky as usual. [~Mmuzaf], please, review my change. was (Author: zzzadruga): Wrap the method afterTestsStopped() call with try/catch block. > Missed cleanups if afterTestsStop throws exception > -- > > Key: IGNITE-8158 > URL: https://issues.apache.org/jira/browse/IGNITE-8158 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.4 >Reporter: Maxim Muzafarov >Assignee: Nikolai Kulagin >Priority: Minor > Labels: newbie, test > Fix For: 2.7 > > Attachments: StopAllGridsTest.java > > > Method {{afterTestsStopped}} might throw exception. Contibutor should provide > solution for ensuring that all cleanups in {{tearDown}} method would be > executed in this case. > {code:java|title=GridAbstractTest.java} > /** {@inheritDoc} */ > @Override protected void tearDown() throws Exception { > ... > try { > afterTest(); > } > finally { > serializedObj.clear(); > if (isLastTest()) { > ... > afterTestsStopped(); > if (startGrid) > G.stop(getTestIgniteInstanceName(), true); > // Remove counters. > tests.remove(getClass()); > // Remove resources cached in static, if any. > GridClassLoaderCache.clear(); > U.clearClassCache(); > MarshallerExclusions.clearCache(); > BinaryEnumCache.clear(); > } > Thread.currentThread().setContextClassLoader(clsLdr); > clsLdr = null; > cleanReferences(); > } > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
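The fix described in the comment can be sketched as follows. The names mirror GridAbstractTest#tearDown and afterTestsStopped(), but this class is a self-contained illustration of the wrap-and-rethrow pattern, not the actual test framework code:

```java
/**
 * Sketch of the fix described above: wrap the afterTestsStopped() call so
 * that the remaining cleanups in tearDown() still run when it throws. Names
 * mirror GridAbstractTest, but this is a self-contained illustration.
 */
public class TearDownSketch {
    static boolean cleanupsRan;

    /** Simulates a test class whose afterTestsStopped() fails. */
    static void afterTestsStopped() {
        throw new RuntimeException("cleanup in test class failed");
    }

    static void tearDown() {
        cleanupsRan = false;

        Throwable err = null;

        try {
            afterTestsStopped();
        }
        catch (Throwable t) {
            err = t; // Remember the failure instead of aborting the cleanup.
        }

        // Previously these cleanups were skipped when afterTestsStopped() threw.
        cleanupsRan = true;

        if (err != null)
            throw new RuntimeException("afterTestsStopped() failed", err);
    }

    public static void main(String[] args) {
        try {
            tearDown();
        }
        catch (RuntimeException e) {
            System.out.println("tearDown rethrew: " + e.getMessage());
        }

        System.out.println("cleanups ran: " + cleanupsRan);
    }
}
```

Rethrowing at the end preserves the original test failure while still guaranteeing that static caches (GridClassLoaderCache, MarshallerExclusions, etc.) get cleared between suites.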
[jira] [Commented] (IGNITE-8158) Missed cleanups if afterTestsStop throws exception
[ https://issues.apache.org/jira/browse/IGNITE-8158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590395#comment-16590395 ] Nikolai Kulagin commented on IGNITE-8158: - Wrap the method afterTestsStopped() call with try/catch block. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8158) Missed cleanups if afterTestsStop throws exception
[ https://issues.apache.org/jira/browse/IGNITE-8158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590394#comment-16590394 ] Nikolai Kulagin commented on IGNITE-8158: - Wrap the method afterTestsStopped() call with try/catch block. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (IGNITE-8158) Missed cleanups if afterTestsStop throws exception
[ https://issues.apache.org/jira/browse/IGNITE-8158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nikolai Kulagin updated IGNITE-8158: Comment: was deleted (was: Wrap the method afterTestsStopped() call with try/catch block.) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8971) GridRestProcessor should propagate error message
[ https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590374#comment-16590374 ] Sergey Kosarev commented on IGNITE-8971: https://ci.ignite.apache.org/viewLog.html?buildId=1714987&tab=queuedBuildOverviewTab [~agoncharuk], the TC results don't look bad. > GridRestProcessor should propagate error message > > > Key: IGNITE-8971 > URL: https://issues.apache.org/jira/browse/IGNITE-8971 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.5 >Reporter: Andrew Medvedev >Assignee: Sergey Kosarev >Priority: Major > Fix For: 2.7 > > > GridRestProcessor should propagate the error message (stack trace) for handling > disk-full error messages -- This message was sent by Atlassian JIRA (v7.6.3#76005)
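A minimal sketch of the requested behavior: render the full stack trace into the REST error field so that "disk full" causes become visible to the caller. This is plain JDK code under assumed names (RestErrorSketch, errorMessage), not GridRestProcessor's actual implementation.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.io.StringWriter;

// Sketch: instead of returning only the top-level message, include the whole
// stack trace in the error string of the REST response.
public class RestErrorSketch {
    static String errorMessage(Throwable err) {
        StringWriter sw = new StringWriter();
        err.printStackTrace(new PrintWriter(sw));
        return sw.toString(); // message plus stack trace, e.g. for a "disk full" IOException
    }

    public static void main(String[] args) {
        String msg = errorMessage(new IOException("No space left on device"));
        // The first line carries the class name and original message.
        System.out.println(msg.split("\n", 2)[0]);
    }
}
```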
[jira] [Assigned] (IGNITE-8867) Bootstrapping for learning sample
[ https://issues.apache.org/jira/browse/IGNITE-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oleg Ignatenko reassigned IGNITE-8867: -- Assignee: Oleg Ignatenko (was: Alexey Platonov) > Bootstrapping for learning sample > - > > Key: IGNITE-8867 > URL: https://issues.apache.org/jira/browse/IGNITE-8867 > Project: Ignite > Issue Type: Improvement > Components: ml >Reporter: Yury Babak >Assignee: Oleg Ignatenko >Priority: Major > Fix For: 2.7 > > > Need to implement bootstrapping algorithm in Bagging-classifier -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8867) Bootstrapping for learning sample
[ https://issues.apache.org/jira/browse/IGNITE-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590373#comment-16590373 ] Oleg Ignatenko commented on IGNITE-8867: reassigned back to me per discussion with [~aplatonov] > Bootstrapping for learning sample > - > > Key: IGNITE-8867 > URL: https://issues.apache.org/jira/browse/IGNITE-8867 > Project: Ignite > Issue Type: Improvement > Components: ml >Reporter: Yury Babak >Assignee: Oleg Ignatenko >Priority: Major > Fix For: 2.7 > > > Need to implement bootstrapping algorithm in Bagging-classifier -- This message was sent by Atlassian JIRA (v7.6.3#76005)
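For context, the bootstrapping the ticket refers to is sampling the training set with replacement; each learner in a bagging classifier trains on such a sample. A generic sketch under an assumed name (BootstrapSketch), not the Ignite ML API:

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of the bootstrap step of bagging: draw n row indices with
// replacement from [0, n); on average ~36.8% of rows are left out of each sample.
public class BootstrapSketch {
    static int[] bootstrapSample(int n, long seed) {
        Random rnd = new Random(seed);
        int[] idx = new int[n];
        for (int i = 0; i < n; i++)
            idx[i] = rnd.nextInt(n); // duplicates are expected and intentional
        return idx;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(bootstrapSample(10, 42L)));
    }
}
```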
[jira] [Commented] (IGNITE-6856) SQL: invalid security checks during query execution
[ https://issues.apache.org/jira/browse/IGNITE-6856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590370#comment-16590370 ] Igor Sapego commented on IGNITE-6856: - [~vozerov], maybe, we should hide this ticket? > SQL: invalid security checks during query execution > --- > > Key: IGNITE-6856 > URL: https://issues.apache.org/jira/browse/IGNITE-6856 > Project: Ignite > Issue Type: Bug > Components: cache, sql >Affects Versions: 2.3 >Reporter: Vladimir Ozerov >Priority: Major > > Currently security check is performed inside {{IgniteCacheProxy}}. This is > wrong place. Instead, we should perform it inside query processor after > parsing when all affected caches are known. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9246) Transactions can wait for topology future on remap for a long time even if timeout is set.
[ https://issues.apache.org/jira/browse/IGNITE-9246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590369#comment-16590369 ] Alexei Scherbakov commented on IGNITE-9246: --- [~Jokser], all review issues are fixed. A TC rerun with the latest master is in progress. > Transactions can wait for topology future on remap for a long time even if > timeout is set. > -- > > Key: IGNITE-9246 > URL: https://issues.apache.org/jira/browse/IGNITE-9246 > Project: Ignite > Issue Type: Improvement >Reporter: Alexei Scherbakov >Assignee: Alexei Scherbakov >Priority: Major > Fix For: 2.7 > > > This is possible if a long PME occurs during the tx remap phase. > Fix: wait for the new topology on remap with a timeout, if set. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
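The fix described ("wait for new topology on remap with timeout if set") can be modeled with plain JDK futures. Assuming a CompletableFuture stands in for the topology future, this sketch is illustrative only, not Ignite's internal code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: when a tx timeout is configured, the remap waits for the new
// topology with that bound instead of blocking through a long PME.
public class RemapTimeoutSketch {
    /** Waits for the topology future; the wait is bounded iff timeoutMs > 0. */
    static boolean awaitTopology(CompletableFuture<Void> topFut, long timeoutMs) {
        try {
            if (timeoutMs > 0)
                topFut.get(timeoutMs, TimeUnit.MILLISECONDS); // bounded wait
            else
                topFut.get(); // old behavior: can block for the whole PME
            return true;
        }
        catch (TimeoutException e) {
            return false; // in the real fix: fail the tx with a timeout error
        }
        catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        CompletableFuture<Void> stuckPme = new CompletableFuture<>(); // PME that never finishes
        System.out.println("completed in time: " + awaitTopology(stuckPme, 50));
    }
}
```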
[jira] [Commented] (IGNITE-6055) SQL: Add String length constraint
[ https://issues.apache.org/jira/browse/IGNITE-6055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590365#comment-16590365 ] Igor Sapego commented on IGNITE-6055: - [~NIzhikov], left several comments in Upsource. Overall good, but we need to add a test that puts and gets a string longer than 999 chars to ensure the expected behaviour. Tell me if you need help with it. > SQL: Add String length constraint > - > > Key: IGNITE-6055 > URL: https://issues.apache.org/jira/browse/IGNITE-6055 > Project: Ignite > Issue Type: Task > Components: sql >Affects Versions: 2.1 >Reporter: Vladimir Ozerov >Assignee: Nikolay Izhikov >Priority: Major > Labels: sql-engine > Fix For: 2.7 > > > We should support {{CHAR(X)}} and {{VARCHAR(X)}} syntax. Currently, we ignore > it. First, it affects semantics. E.g., one can insert a string of greater > length into a cache/table without any problems. Second, it limits the efficiency > of our default configuration. E.g., index inline cannot be applied to the > {{String}} data type as we cannot guess its length. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
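The constraint semantics the ticket asks for, reduced to a stand-alone check (the class is hypothetical, not Ignite's validation code): a value longer than the declared VARCHAR(X) length is rejected on insert. This is also the shape of the longer-than-999-chars test suggested in the comment.

```java
// Sketch of CHAR(X)/VARCHAR(X) enforcement: an insert into a column declared
// with a length limit fails when the value exceeds that limit.
public class VarcharConstraintSketch {
    final int maxLen;

    VarcharConstraintSketch(int maxLen) {
        this.maxLen = maxLen;
    }

    /** Mimics an insert into a VARCHAR(maxLen) column. */
    void insert(String val) {
        if (val.length() > maxLen)
            throw new IllegalArgumentException(
                "Value too long for type VARCHAR(" + maxLen + "): length " + val.length());
    }

    public static void main(String[] args) {
        VarcharConstraintSketch col = new VarcharConstraintSketch(999);
        col.insert("ok"); // fits, accepted
        try {
            col.insert("x".repeat(1000)); // 1000 chars > 999, must be rejected
            System.out.println("accepted (this would be the current bug)");
        }
        catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```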
[jira] [Comment Edited] (IGNITE-9309) LocalNodeMovingPartitionsCount metrics may calculates incorrect due to processFullPartitionUpdate
[ https://issues.apache.org/jira/browse/IGNITE-9309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590338#comment-16590338 ] Pavel Kovalenko edited comment on IGNITE-9309 at 8/23/18 2:59 PM: -- The actual problem was introduced in https://issues.apache.org/jira/browse/IGNITE-8684 . The key issue is that partition state changes now happen only after receiving a FullMap with exchangeId (PME). There can be a race between handling a FullMap with exchangeId != null (PME) and a FullMap without exchangeId. If we receive a fresh FullMap without exchangeId earlier than the one with it, we override our local partition states, and the FullMap with exchangeId will be rejected as outdated. It means that the partition states will never be changed and no rebalance will start. was (Author: jokser): The actual problem was introduced in https://issues.apache.org/jira/browse/IGNITE-8684 . The key problem that partition state changes now happened only after receiving FullMap with exchangeId (PME). There can be race between handling FullMap with echangeId != null (PME) and FullMap without exchangeId. If we receive fresh FullMap without exchangeId earlier than with, we override our local partition states, and FullMap with exchangeId will be rejected as outdated. It means that the partition states will not be changed and no rebalance will start. > LocalNodeMovingPartitionsCount metrics may calculates incorrect due to > processFullPartitionUpdate > - > > Key: IGNITE-9309 > URL: https://issues.apache.org/jira/browse/IGNITE-9309 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.6 >Reporter: Maxim Muzafarov >Priority: Major > > [~qvad] has found incorrect {{LocalNodeMovingPartitionsCount}} metrics > calculation on client node {{JOIN\LEFT}}. Full issue reproducer is absent. > Probable scenario: > {code} > Repeat 10 times: > 1. stop node > 2. clean lfs > 3. add stopped node (trigger rebalance) > 4. 3 times: start 2 clients, wait for topology snapshot, close clients > 5.
for each cache group check JMX metrics LocalNodeMovingPartitionsCount > (like waitForFinishRebalance()) > {code} > Whole discussion and all configuration details can be found in comments of > [IGNITE-7165|https://issues.apache.org/jira/browse/IGNITE-7165]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9309) LocalNodeMovingPartitionsCount metrics may calculates incorrect due to processFullPartitionUpdate
[ https://issues.apache.org/jira/browse/IGNITE-9309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590338#comment-16590338 ] Pavel Kovalenko commented on IGNITE-9309: - The actual problem was introduced in https://issues.apache.org/jira/browse/IGNITE-8684 . The key problem is that partition state changes now happen only after receiving a FullMap with exchangeId (PME). There can be a race between handling a FullMap with exchangeId != null (PME) and a FullMap without exchangeId. If we receive a fresh FullMap without exchangeId earlier than the one with it, we override our local partition states, and the FullMap with exchangeId will be rejected as outdated. It means that the partition states will not be changed and no rebalance will start. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
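The race described above can be modeled with a toy version counter. The sketch is illustrative only (it is not Ignite's partition topology code), but it shows why the exchange-tagged FullMap is dropped when a plain FullMap of the same version arrives first:

```java
// Sketch: a full partition map carries a monotonically increasing version;
// a map whose version is not newer than the last applied one is rejected as
// outdated. If a fresh map *without* an exchange id wins the race, the later
// exchange-tagged map is dropped, so the partition state transition it
// carried never happens and rebalance never starts.
public class FullMapRaceSketch {
    long lastAppliedVer;
    boolean exchangeApplied;

    /** Returns true if the map was applied, false if rejected as outdated. */
    boolean processFullMap(long ver, boolean hasExchangeId) {
        if (ver <= lastAppliedVer)
            return false; // rejected as outdated

        lastAppliedVer = ver;

        if (hasExchangeId)
            exchangeApplied = true; // only exchange-tagged maps move partition states

        return true;
    }

    public static void main(String[] args) {
        FullMapRaceSketch node = new FullMapRaceSketch();

        // Race: the plain map (same version) is handled before the exchange map.
        node.processFullMap(5, false);
        boolean applied = node.processFullMap(5, true);

        System.out.println("exchange map applied: " + applied); // states never change when false
    }
}
```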
[jira] [Created] (IGNITE-9359) OptimizeMakeChangeGAExample hangs forever with additional nodes in topology
Alex Volkov created IGNITE-9359: --- Summary: OptimizeMakeChangeGAExample hangs forever with additional nods in topology Key: IGNITE-9359 URL: https://issues.apache.org/jira/browse/IGNITE-9359 Project: Ignite Issue Type: Bug Components: ml Affects Versions: 2.6 Reporter: Alex Volkov To reproduce this issue please follow these steps: 1. Run two nodes using ignite.sh script. For example: {code:java} bin/ignite.sh examples/config/example-ignite.xml -J-Xmx1g -J-Xms1g -J-DCONSISTENT_ID=node1 -J-DIGNITE_QUIET=false {code} 2. Run HelloWorldGAExample from IDEA IDE. *Expecting result:* Example successfully run and completed. *Actual result:* There are a lot of NPE exceptions in example log: {code:java} [2018-08-23 17:38:59,246][ERROR][pub-#20][GridJobWorker] Failed to execute job due to unexpected runtime exception [jobId=2a309376561-70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, ses=GridJobSessionImpl [ses=GridTaskSessionImpl [taskName=o.a.i.ml.genetic.FitnessTask, dep=GridDeployment [ts=1535035116486, depMode=SHARED, clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, clsLdrId=4baf8376561-70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, userVer=0, loc=true, sampleClsName=o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap, pendingUndeploy=false, undeployed=false, usage=2], taskClsName=o.a.i.ml.genetic.FitnessTask, sesId=b4209376561-70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, startTime=1535035123014, endTime=9223372036854775807, taskNodeId=70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, closed=false, cpSpi=null, failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=false, topPred=o.a.i.i.cluster.ClusterGroupAdapter$AttributeFilter@5668ad01, subjId=70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, mapFut=GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null, hash=574227802]IgniteFuture [orig=], execName=null], jobId=2a309376561-70889d5c-33f2-4c96-bf1e-f280c0ac4a1c], err=null] java.lang.NullPointerException at 
org.apache.ignite.ml.genetic.FitnessJob.execute(FitnessJob.java:76) at org.apache.ignite.ml.genetic.FitnessJob.execute(FitnessJob.java:35) at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:568) at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6749) at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:562) at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:491) at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) {code} and it hangs on this one: {code:java} [2018-08-23 17:38:59,582][WARN ][sys-#54][AlwaysFailoverSpi] Received topology with only nodes that job had failed on (forced to fail) [failedNodes=[3db84480-08b8-4d54-9d3a-e23b53761f29, 70889d5c-33f2-4c96-bf1e-f280c0ac4a1c, 4f815cff-f77c-4a41-9ae1-ebb00b1dd44c]] class org.apache.ignite.cluster.ClusterTopologyException: Failed to failover a job to another node (failover SPI returned null) [job=org.apache.ignite.ml.genetic.FitnessJob@1045c79e, node=TcpDiscoveryNode [id=4f815cff-f77c-4a41-9ae1-ebb00b1dd44c, addrs=ArrayList [0:0:0:0:0:0:0:1, 127.0.0.1, 172.25.4.42, 172.25.4.92], sockAddrs=HashSet [/172.25.4.92:47501, /172.25.4.42:47501, /0:0:0:0:0:0:0:1:47501, /127.0.0.1:47501], discPort=47501, order=2, intOrder=2, lastExchangeTime=1535035115978, loc=false, ver=2.7.0#19700101-sha1:, isClient=false]] at org.apache.ignite.internal.util.IgniteUtils$7.apply(IgniteUtils.java:853) at org.apache.ignite.internal.util.IgniteUtils$7.apply(IgniteUtils.java:851) at org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:985) at org.apache.ignite.internal.IgniteComputeImpl.execute(IgniteComputeImpl.java:541) at 
org.apache.ignite.ml.genetic.GAGrid.calculateFitness(GAGrid.java:102) at org.apache.ignite.ml.genetic.GAGrid.evolve(GAGrid.java:171) at org.apache.ignite.examples.ml.genetic.change.OptimizeMakeChangeGAExample.main(OptimizeMakeChangeGAExample.java:148) Caused by: class org.apache.ignite.internal.cluster.ClusterTopologyCheckedException: Failed to failover a job to another node (failover SPI returned null) [job=org.apache.ignite.ml.genetic.FitnessJob@1045c79e, node=TcpDiscoveryNode [id=4f815cff-f77c-4a41-9ae1-ebb00b1dd44c, addrs=ArrayList [0:0:0:0:0:0:0:1, 127.0.0.1, 172.25.4.42, 172.25.4.92], sockAddrs=HashSet [/172.25.4.92:47501, /172.25.4.42:47501, /0:0:0:0:0:0:0:1:47501, /127.0.0.1:47501], discPort=47501, order=2, intOrder=2, lastExchangeTime=1535035115978, loc=false, ver=2.7.0#19700101-sha1:, isClient=false]] at org.apache.ignite.internal.processo
[jira] [Comment Edited] (IGNITE-9305) Wrong off-heap size is reported for a node
[ https://issues.apache.org/jira/browse/IGNITE-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589960#comment-16589960 ] Pavel Pereslegin edited comment on IGNITE-9305 at 8/23/18 2:25 PM: --- Hi [~dmagda]. Do I understand correctly that we don't log aggregated information about off-heap usage? Example of such output: {noformat} ^-- H/N/C [hosts=1, nodes=2, CPUs=8] ^-- CPU [cur=2.57%, avg=4.44%, GC=0%] ^-- PageMemory [pages=34] ^-- Heap [used=130MB, free=96.31%, comm=244MB] ^-- Off-heap sysMemPlc [used=0MB, free=99.98%, comm=100MB] ^-- Off-heap default [used=0MB, free=99.62%, comm=20MB] ^-- Off-heap metastoreMemPlc [used=0MB, free=99.96%, comm=100MB] ^-- Ignite persistence default [used=0MB] ^-- Outbound messages queue [size=0] ^-- Public thread pool [active=0, idle=6, qSize=0] ^-- System thread pool [active=0, idle=7, qSize=0] ^-- Custom executor 0 [active=0, idle=0, qSize=0] ^-- Custom executor 1 [active=0, idle=0, qSize=0] {noformat} Is this the required format? was (Author: xtern): Hi [~dmagda]. Do I understand correctly that we don't log aggregated information about off-heap usage?
Example of such output: {noformat} ^-- H/N/C [hosts=1, nodes=2, CPUs=8] ^-- CPU [cur=2.57%, avg=4.44%, GC=0%] ^-- PageMemory [pages=34] ^-- Heap [used=130MB, free=96.31%, comm=244MB] ^-- Off-heap sysMemPlc [used=0MB, free=99.98%, comm=100MB] ^-- Off-heap default [used=0MB, free=99.62%, comm=20MB] ^-- Off-heap metastoreMemPlc [used=0MB, free=99.96%, comm=100MB] ^-- Ignite persistence default [used=0MB] ^-- Outbound messages queue [size=0] ^-- Public thread pool [active=0, idle=6, qSize=0] ^-- System thread pool [active=0, idle=7, qSize=0] ^-- Custom executor 0 [active=0, idle=0, qSize=0] ^-- Custom executor 1 [active=0, idle=0, qSize=0] {noformat} > Wrong off-heap size is reported for a node > -- > > Key: IGNITE-9305 > URL: https://issues.apache.org/jira/browse/IGNITE-9305 > Project: Ignite > Issue Type: Task >Affects Versions: 2.6 >Reporter: Denis Magda >Assignee: Pavel Pereslegin >Priority: Blocker > Fix For: 2.7 > > > Was troubleshooting an Ignite deployment today and couldn't find out from the > logs what was the actual off-heap space used. > Those were the given memory resources (Ignite 2.6): > {code} > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager] Topology > snapshot [ver=1, servers=1, clients=0, CPUs=64, offheap=30.0GB, heap=24.0GB] > {code} > And that weird stuff was reported by the node (pay attention to the last > line): > {code} > [2018-08-16 15:45:50,211][INFO > ][grid-timeout-worker-#135%cluster_31-Dec-2017%][IgniteKernal%cluster_31-Dec-2017] > > Metrics for local node (to disable set 'metricsLogFrequency' to 0) > ^-- Node [id=c033026e, name=cluster_31-Dec-2017, uptime=00:38:00.257] > ^-- H/N/C [hosts=1, nodes=1, CPUs=64] > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%] > ^-- PageMemory [pages=6997377] > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB] > ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this line is always the > same!
> {code} > Had to change the code by using > {code}dataRegion.getPhysicalMemoryPages(){code} to find out that actual > off-heap usage size was > {code} > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB > {code} > The logs have to report the following instead: > {code} > ^-- Off-heap {Data Region 1} [used={dataRegion1.getPhysicalMemorySize()}, > free=X%, comm=dataRegion1.maxSize()] > ^-- Off-heap {Data Region 2} [used={dataRegion2.getPhysicalMemorySize()}, > free=X%, comm=dataRegion2.maxSize()] > {code} > If Ignite persistence is enabled then the following extra lines have to be > added to see the disk used space: > {code} > ^-- Ignite persistence {Data Region 1}: > used={dataRegion1.getTotalAllocatedSize() - > dataRegion1.getPhysicalMemorySize()} > ^-- Ignite persistence {Data Region 2} > [used={dataRegion2.getTotalAllocatedSize() - > dataRegion2.getPhysicalMemorySize()}] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
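The fix requested in the description reduces to simple arithmetic over the quoted accessors (getPhysicalMemorySize(), maxSize(), getTotalAllocatedSize()). A sketch of the metric lines; the formatting helpers below are hypothetical, not Ignite's IgniteKernal code:

```java
import java.util.Locale;

// Sketch: per-region off-heap line uses physical memory vs. max size; the
// persistence line reports allocation beyond what is resident in RAM.
public class OffHeapMetricsSketch {
    static String offHeapLine(String region, long physBytes, long maxBytes) {
        long usedMb = physBytes / (1024 * 1024);
        double freePct = 100.0 * (maxBytes - physBytes) / maxBytes;
        long commMb = maxBytes / (1024 * 1024);
        return String.format(Locale.US, "^-- Off-heap %s [used=%dMB, free=%.2f%%, comm=%dMB]",
            region, usedMb, freePct, commMb);
    }

    static String persistenceLine(String region, long totalAllocatedBytes, long physBytes) {
        // Disk usage = total allocated minus what is resident in RAM.
        return String.format(Locale.US, "^-- Ignite persistence %s [used=%dMB]",
            region, (totalAllocatedBytes - physBytes) / (1024 * 1024));
    }

    public static void main(String[] args) {
        System.out.println(offHeapLine("default", 20L << 20, 100L << 20));
        System.out.println(persistenceLine("default", 120L << 20, 100L << 20));
    }
}
```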
[jira] [Commented] (IGNITE-9301) Support method compute withNoResultCache in .Net
[ https://issues.apache.org/jira/browse/IGNITE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590296#comment-16590296 ] Dmitriy Pavlov commented on IGNITE-9301: [~ilyak] The feature was added in the separate ticket IGNITE-6284, and I signed it off; this ticket is mostly about fixing the test in .NET and proxying the call to the correct Java method. So IMO we can accept this contribution. What do you think? > Support method compute withNoResultCache in .Net > > > Key: IGNITE-9301 > URL: https://issues.apache.org/jira/browse/IGNITE-9301 > Project: Ignite > Issue Type: Task > Components: platforms >Affects Versions: 2.6 >Reporter: Dmitriy Pavlov >Assignee: Aleksei Zaitsev >Priority: Major > Labels: .net > Fix For: 2.7 > > > During the https://issues.apache.org/jira/browse/IGNITE-6284 implementation a new > method was added - > org.apache.ignite.IgniteCompute#withNoResultCache > but this method was not supported in the .NET API version. > It is required to add correct support to .NET. > Please remove the method name from UnneededMethods in > modules/platforms/dotnet/Apache.Ignite.Core.Tests/ApiParity/ComputeParityTest.cs > once the issue is done -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-9305) Wrong off-heap size is reported for a node
[ https://issues.apache.org/jira/browse/IGNITE-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589960#comment-16589960 ] Pavel Pereslegin edited comment on IGNITE-9305 at 8/23/18 2:23 PM: --- Hi [~dmagda]. Do I understand correctly that we don't log aggregated information about off-heap usage? Example of such output: {noformat} ^-- H/N/C [hosts=1, nodes=2, CPUs=8] ^-- CPU [cur=2.57%, avg=4.44%, GC=0%] ^-- PageMemory [pages=34] ^-- Heap [used=130MB, free=96.31%, comm=244MB] ^-- Off-heap sysMemPlc [used=0MB, free=99.98%, comm=100MB] ^-- Off-heap default [used=0MB, free=99.62%, comm=20MB] ^-- Off-heap metastoreMemPlc [used=0MB, free=99.96%, comm=100MB] ^-- Ignite persistence default [used=0MB] ^-- Outbound messages queue [size=0] ^-- Public thread pool [active=0, idle=6, qSize=0] ^-- System thread pool [active=0, idle=7, qSize=0] ^-- Custom executor 0 [active=0, idle=0, qSize=0] ^-- Custom executor 1 [active=0, idle=0, qSize=0] {noformat} was (Author: xtern): Hi [~dmagda]. Do I understand correctly that we don't logging aggregated information about off-heap usage?
Example of such output: {noformat} ^-- H/N/C [hosts=1, nodes=2, CPUs=8] ^-- CPU [cur=2.57%, avg=4.44%, GC=0%] ^-- PageMemory [pages=34] ^-- Heap [used=130MB, free=96.31%, comm=244MB] ^-- Off-heap sysMemPlc [used=0MB, free=99.98%, comm=100MB] ^-- Off-heap default [used=0MB, free=99.62%, comm=20MB] ^-- Off-heap metastoreMemPlc [used=0MB, free=99.96%, comm=100MB] ^-- Ignite persistence default [used=0MB] ^-- Outbound messages queue [size=0] ^-- Public thread pool [active=0, idle=6, qSize=0] ^-- System thread pool [active=0, idle=7, qSize=0] ^-- Custom executor 0 [active=0, idle=0, qSize=0] ^-- Custom executor 1 [active=0, idle=0, qSize=0] {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9301) Support method compute withNoResultCache in .Net
[ https://issues.apache.org/jira/browse/IGNITE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590277#comment-16590277 ] Ilya Kasnacheev commented on IGNITE-9301: - [~alexzaitzev] My take: if you add a feature to .NET, you add a test for that feature. Now, I can't speak for [~ptupitsyn]; he's a principal contributor to the .NET codebase. I am not, and I am wary of vetting commits without tests. Are there objective reasons why this test is not feasible to write (e.g., it is almost impossible to construct a negative case)? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9328) IgniteDevOnlyLogTest.testDevOnlyQuietMessage() fails to write.
[ https://issues.apache.org/jira/browse/IGNITE-9328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590270#comment-16590270 ] Ilya Kasnacheev commented on IGNITE-9328: - Yes, we discussed this issue before the assignment was made. > IgniteDevOnlyLogTest.testDevOnlyQuietMessage() fails to write. > -- > > Key: IGNITE-9328 > URL: https://issues.apache.org/jira/browse/IGNITE-9328 > Project: Ignite > Issue Type: Bug >Reporter: Ilya Kasnacheev >Assignee: Stanislav Lukyanov >Priority: Major > > After I re-enabled it in IGNITE-9220, it started failing. I have also > migrated it to multiJvm. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9068) Node fails to stop when CacheObjectBinaryProcessor.addMeta() is executed inside guard()/unguard()
[ https://issues.apache.org/jira/browse/IGNITE-9068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590264#comment-16590264 ] Ilya Kasnacheev commented on IGNITE-9068: - I don't really have ideas on how to make it more correct. This test reproduces the same thread dump that was observed on a problematic node. > Node fails to stop when CacheObjectBinaryProcessor.addMeta() is executed > inside guard()/unguard() > - > > Key: IGNITE-9068 > URL: https://issues.apache.org/jira/browse/IGNITE-9068 > Project: Ignite > Issue Type: Bug > Components: binary, managed services >Affects Versions: 2.5 >Reporter: Ilya Kasnacheev >Assignee: Ilya Lantukh >Priority: Blocker > Labels: test > Fix For: 2.7 > > Attachments: GridServiceDeadlockTest.java, MyService.java > > > When addMeta is called in e.g. service deployment, it is executed inside > guard()/unguard() > If the node is stopped at this point, Ignite.stop() will hang. > Consider the following thread dump: > {code} > "Thread-1" #57 prio=5 os_prio=0 tid=0x7f7780005000 nid=0x7f26 runnable > [0x7f766cbef000] >java.lang.Thread.State: TIMED_WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > - parking to wait for <0x0005cb7b0468> (a > java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync) > at > java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:934) > at > java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1247) > at > java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.tryLock(ReentrantReadWriteLock.java:1115) > at > org.apache.ignite.internal.util.StripedCompositeReadWriteLock$WriteLock.tryLock(StripedCompositeReadWriteLock.java:220) > at > org.apache.ignite.internal.GridKernalGatewayImpl.tryWriteLock(GridKernalGatewayImpl.java:143) > // Waiting for lock to cancel futures of BinaryMetadataTransport >
at org.apache.ignite.internal.IgniteKernal.stop0(IgniteKernal.java:2171) > at org.apache.ignite.internal.IgniteKernal.stop(IgniteKernal.java:2094) > at > org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2545) > - locked <0x0005cb423f00> (a > org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance) > at > org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2508) > at > org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.run(IgnitionEx.java:2033) > "test-runner-#1%service.GridServiceDeadlockTest%" #13 prio=5 os_prio=0 > tid=0x7f77b87d5800 nid=0x7eb8 waiting on condition [0x7f778cdfc000] >java.lang.Thread.State: WAITING (parking) > at sun.misc.Unsafe.park(Native Method) > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304) > // May never return if there's discovery problems > at > org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177) > at > org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:140) > at > org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.addMeta(CacheObjectBinaryProcessorImpl.java:463) > at > org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl$2.addMeta(CacheObjectBinaryProcessorImpl.java:188) > at > org.apache.ignite.internal.binary.BinaryContext.registerUserClassDescriptor(BinaryContext.java:802) > at > org.apache.ignite.internal.binary.BinaryContext.registerClassDescriptor(BinaryContext.java:761) > at > org.apache.ignite.internal.binary.BinaryContext.descriptorForClass(BinaryContext.java:627) > at > org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:174) > at > org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:157) > at > org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:144) > at > 
org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:254) > at > org.apache.ignite.internal.binary.BinaryMarshaller.marshal0(BinaryMarshaller.java:82) > at > org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.marshal(AbstractNodeNameAwareMarshaller.java:58) > at > org.apache.ignite.internal.util.IgniteUtils.marshal(IgniteUtils.java:10069) > at > org.apache.ignite.internal.processors.service.GridServiceProcessor.prepareServiceConfigurations(GridServiceProc
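The hang pattern in the thread dump above can be reproduced, in a hedged and Ignite-free form, with plain java.util.concurrent: one thread parks on a never-completed future while holding a read lock (the guard()), so a stopping thread can never take the write lock. All names below are illustrative stand-ins for Ignite internals, not the real classes.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the guard()/unguard() hang: a "deployer" thread waits on a
// metadata future while holding the kernal gateway read lock, so the
// "stop" thread times out trying to take the write lock.
public class GuardHangSketch {
    public static boolean stopAcquiresWriteLock() throws InterruptedException {
        ReentrantReadWriteLock gateway = new ReentrantReadWriteLock();
        // Never completed: models discovery being stuck, so addMeta() never returns.
        CompletableFuture<Void> metaFut = new CompletableFuture<>();
        CountDownLatch guarded = new CountDownLatch(1);

        Thread deployer = new Thread(() -> {
            gateway.readLock().lock();       // guard()
            guarded.countDown();
            try {
                metaFut.join();              // addMeta() blocks inside the guard
            } finally {
                gateway.readLock().unlock(); // unguard() is never reached
            }
        });
        deployer.setDaemon(true);
        deployer.start();
        guarded.await();

        // stop0() needs the write lock to cancel pending futures; with a
        // reader parked inside the guard, it can only time out.
        boolean ok = gateway.writeLock().tryLock(200, TimeUnit.MILLISECONDS);
        if (ok) gateway.writeLock().unlock();
        return ok;
    }
}
```

Calling stopAcquiresWriteLock() returns false here, mirroring IgniteKernal.stop0() parking on tryWriteLock in the dump.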
[jira] [Commented] (IGNITE-9054) ScanQuery responses are serialized with Optimized Marshaller
[ https://issues.apache.org/jira/browse/IGNITE-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590260#comment-16590260 ] Ilya Kasnacheev commented on IGNITE-9054: - [~agoncharuk] please review the proposed fix! > ScanQuery responses are serialized with Optimized Marshaller > > > Key: IGNITE-9054 > URL: https://issues.apache.org/jira/browse/IGNITE-9054 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 2.5 >Reporter: Ilya Kasnacheev >Assignee: Ilya Kasnacheev >Priority: Major > Labels: easyfix > Attachments: 22530.diff > > > When you run a ContinuousQuery on a cache, its initial query sends results via > OptimizedMarshaller (which has binary compatibility implications) but its > continuous part uses BinaryMarshaller. They should both be using > BinaryMarshaller. The fix seems to be a one-liner, see the patch and the userlist thread. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9294) StandaloneWalRecordsIterator: support iteration from custom pointer
[ https://issues.apache.org/jira/browse/IGNITE-9294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dmitriy Govorukhin updated IGNITE-9294: --- Ignite Flags: (was: Docs Required) > StandaloneWalRecordsIterator: support iteration from custom pointer > --- > > Key: IGNITE-9294 > URL: https://issues.apache.org/jira/browse/IGNITE-9294 > Project: Ignite > Issue Type: Improvement > Components: persistence >Reporter: Ivan Rakov >Assignee: Dmitriy Govorukhin >Priority: Major > Fix For: 2.7 > > > StandaloneWalRecordsIterator can be constructed from a set of files and dirs, > but there's no option to pass a WAL pointer to the iterator factory class to > start iteration with. It can be worked around (by filtering all records prior > to the needed pointer), but it would also be handy to add such an option to the > IgniteWalIteratorFactory API. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
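The workaround described above (filtering all records prior to the needed pointer) can be sketched with a minimal stand-in pointer type. WalPtr below is hypothetical, not Ignite's real WAL pointer class; it just captures the segment-index-plus-offset ordering the filter would rely on.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: drop every WAL record whose pointer sorts before the requested
// start pointer, so iteration effectively begins at that pointer.
public class WalFilterSketch {
    /** Hypothetical pointer: WAL segment index plus offset within the segment. */
    public record WalPtr(long idx, int off) implements Comparable<WalPtr> {
        @Override public int compareTo(WalPtr o) {
            int c = Long.compare(idx, o.idx);
            return c != 0 ? c : Integer.compare(off, o.off);
        }
    }

    /** Keeps only records at or after the requested start pointer. */
    public static List<WalPtr> fromPointer(List<WalPtr> records, WalPtr start) {
        return records.stream()
            .filter(p -> p.compareTo(start) >= 0)
            .collect(Collectors.toList());
    }
}
```

A native option on the iterator factory would avoid reading and discarding the earlier segments, which is why the ticket asks for API support rather than this filter.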
[jira] [Comment Edited] (IGNITE-9274) Pass transaction label to cache events
[ https://issues.apache.org/jira/browse/IGNITE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590179#comment-16590179 ] Yury Gerzhedovich edited comment on IGNITE-9274 at 8/23/18 12:50 PM: - [~tledkov-gridgain], # Thanks, will be more attentive in the future :) # 3. My first attempt was separate tests for each case, but the huge number of tests was not readable. Current approach looks better. was (Author: jooger): [~tledkov-gridgain], # Thanks, will be more attentive in the future :) # 3. My first attempts was separate tests for each of case - huge number of tests not readable. Current approach looks better. > Pass transaction label to cache events > -- > > Key: IGNITE-9274 > URL: https://issues.apache.org/jira/browse/IGNITE-9274 > Project: Ignite > Issue Type: Task > Components: cache >Reporter: Vladimir Ozerov >Assignee: Yury Gerzhedovich >Priority: Major > Fix For: 2.7 > > > It is possible to set transaction label - \{{IgniteTransactions.withLabel}}. > We need to pass this label to related cache and transaction events: > 1) EVT_TX_STARTED, EVT_TX_COMMITTED, EVT_TX_ROLLED_BACK, EVT_TX_SUSPENDED, > EVT_TX_RESUMED > 2) EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_REMOVED > For TX events most probably everything is already passed (see > \{{TransactionStateChangedEvent}}), we only need to add tests. > For put/remove events we need to investigate correct messages to pass label, > prepare requests appear to be good candidates for this. > For read operation we may need to add pass label to get/lock requests > ({{GridNearLockRequest}}, {{GridNearGetRequest}}, > {{GridNearSingleGetRequest}}). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9286) Redesign and Refactor UI
[ https://issues.apache.org/jira/browse/IGNITE-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov updated IGNITE-9286: - Fix Version/s: 2.7 > Redesign and Refactor UI > > > Key: IGNITE-9286 > URL: https://issues.apache.org/jira/browse/IGNITE-9286 > Project: Ignite > Issue Type: Improvement > Components: wizards >Reporter: Dmitriy Shabalin >Assignee: Alexey Kuznetsov >Priority: Major > Labels: web-console-configuration > Fix For: 2.7 > > Time Spent: 3h 35m > Remaining Estimate: 0h > > We should refactor all screens to use latest modern controls as on > "Configuration" screen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9274) Pass transaction label to cache events
[ https://issues.apache.org/jira/browse/IGNITE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590179#comment-16590179 ] Yury Gerzhedovich commented on IGNITE-9274: --- [~tledkov-gridgain], # Thanks, will be more attentive in the future :) # 3. My first attempt was separate tests for each case, but the huge number of tests was not readable. Current approach looks better. > Pass transaction label to cache events > -- > > Key: IGNITE-9274 > URL: https://issues.apache.org/jira/browse/IGNITE-9274 > Project: Ignite > Issue Type: Task > Components: cache >Reporter: Vladimir Ozerov >Assignee: Yury Gerzhedovich >Priority: Major > Fix For: 2.7 > > > It is possible to set transaction label - \{{IgniteTransactions.withLabel}}. > We need to pass this label to related cache and transaction events: > 1) EVT_TX_STARTED, EVT_TX_COMMITTED, EVT_TX_ROLLED_BACK, EVT_TX_SUSPENDED, > EVT_TX_RESUMED > 2) EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_REMOVED > For TX events most probably everything is already passed (see > \{{TransactionStateChangedEvent}}), we only need to add tests. > For put/remove events we need to investigate correct messages to pass label, > prepare requests appear to be good candidates for this. > For read operation we may need to add pass label to get/lock requests > ({{GridNearLockRequest}}, {{GridNearGetRequest}}, > {{GridNearSingleGetRequest}}). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9274) Pass transaction label to cache events
[ https://issues.apache.org/jira/browse/IGNITE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590160#comment-16590160 ] Taras Ledkov commented on IGNITE-9274: -- [~jooger], my comments: # I've pushed minor changes, please take a look; # Does it make sense to collect all errors across all test cases instead of separate assertions in each case? # Does it make sense to combine all test cases into one test? > Pass transaction label to cache events > -- > > Key: IGNITE-9274 > URL: https://issues.apache.org/jira/browse/IGNITE-9274 > Project: Ignite > Issue Type: Task > Components: cache >Reporter: Vladimir Ozerov >Assignee: Yury Gerzhedovich >Priority: Major > Fix For: 2.7 > > > It is possible to set transaction label - \{{IgniteTransactions.withLabel}}. > We need to pass this label to related cache and transaction events: > 1) EVT_TX_STARTED, EVT_TX_COMMITTED, EVT_TX_ROLLED_BACK, EVT_TX_SUSPENDED, > EVT_TX_RESUMED > 2) EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_REMOVED > For TX events most probably everything is already passed (see > \{{TransactionStateChangedEvent}}), we only need to add tests. > For put/remove events we need to investigate correct messages to pass label, > prepare requests appear to be good candidates for this. > For read operation we may need to add pass label to get/lock requests > ({{GridNearLockRequest}}, {{GridNearGetRequest}}, > {{GridNearSingleGetRequest}}). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
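The "collect all errors" idea from the review comment above can be sketched in plain Java: instead of failing on the first mismatch, each case records its failure and the test reports everything at the end. Names and event labels below are illustrative, not taken from the actual patch.

```java
import java.util.ArrayList;
import java.util.List;

// Soft-assertion sketch: check() records failures instead of throwing,
// so one test run surfaces every failing case at once.
public class SoftAssertSketch {
    private final List<String> errors = new ArrayList<>();

    /** Records msg if cond is false; never throws. */
    public void check(boolean cond, String msg) {
        if (!cond) errors.add(msg);
    }

    /** All failures collected so far; a real test would fail if non-empty. */
    public List<String> errors() {
        return errors;
    }

    public static void main(String[] args) {
        SoftAssertSketch sa = new SoftAssertSketch();
        sa.check("lbl".equals("lbl"), "EVT_TX_STARTED label mismatch");
        sa.check("lbl".equals("other"), "EVT_CACHE_OBJECT_PUT label mismatch");
        // Only the second check is recorded; a real test would then call
        // fail(String.join("\n", sa.errors())) when the list is non-empty.
        System.out.println(sa.errors());
    }
}
```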
[jira] [Commented] (IGNITE-9054) ScanQuery responses are serialized with Optimized Marshaller
[ https://issues.apache.org/jira/browse/IGNITE-9054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590167#comment-16590167 ] Stanislav Lukyanov commented on IGNITE-9054: LGTM! > ScanQuery responses are serialized with Optimized Marshaller > > > Key: IGNITE-9054 > URL: https://issues.apache.org/jira/browse/IGNITE-9054 > Project: Ignite > Issue Type: Bug > Components: cache >Affects Versions: 2.5 >Reporter: Ilya Kasnacheev >Assignee: Ilya Kasnacheev >Priority: Major > Labels: easyfix > Attachments: 22530.diff > > > When you run a ContinuousQuery on a cache, its initial query sends results via > OptimizedMarshaller (which has binary compatibility implications) but its > continuous part uses BinaryMarshaller. They should both be using > BinaryMarshaller. The fix seems to be a one-liner, see the patch and the userlist thread. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (IGNITE-8911) While cache is restarting it's possible to start new cache with this name
[ https://issues.apache.org/jira/browse/IGNITE-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eduard Shangareev updated IGNITE-8911: -- Comment: was deleted (was: PR is https://github.com/apache/ignite/pull/4291) > While cache is restarting it's possible to start new cache with this name > - > > Key: IGNITE-8911 > URL: https://issues.apache.org/jira/browse/IGNITE-8911 > Project: Ignite > Issue Type: Bug >Reporter: Eduard Shangareev >Assignee: Eduard Shangareev >Priority: Major > > We have the state "restarting" for caches when we certainly know that these > caches will start at some moment in the future. But we could still start a new cache > with the same name. > Plus, an NPE is thrown when we try to get a proxy for such caches (in > the "restarting" state). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8911) While cache is restarting it's possible to start new cache with this name
[ https://issues.apache.org/jira/browse/IGNITE-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590134#comment-16590134 ] ASF GitHub Bot commented on IGNITE-8911: GitHub user EdShangGG opened a pull request: https://github.com/apache/ignite/pull/4605 IGNITE-8911 You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-8911-1 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4605.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4605 commit b22bbf79dbcfb5127b41cde9d955542f1e1bb033 Author: EdShangGG Date: 2018-07-31T14:03:55Z IGNITE-8911 While cache is restarting it's possible to start new cache with this name commit 55bb3ac8cf334ddf84877e8c907ee2900258b12e Author: EdShangGG Date: 2018-07-31T15:50:28Z IGNITE-8911 While cache is restarting it's possible to start new cache with this name -added test commit 30333bf4a7a58d0b18e2f425ea6a645611c48f3a Author: EdShangGG Date: 2018-08-23T12:04:03Z Merge branch 'master1' into ignite-8911-1 > While cache is restarting it's possible to start new cache with this name > - > > Key: IGNITE-8911 > URL: https://issues.apache.org/jira/browse/IGNITE-8911 > Project: Ignite > Issue Type: Bug >Reporter: Eduard Shangareev >Assignee: Eduard Shangareev >Priority: Major > > We have the state "restarting" for caches when we certainly know that these > caches will start at some moment in the future. But we could still start a new cache > with the same name. > Plus, an NPE is thrown when we try to get a proxy for such caches (in > the "restarting" state). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-6044) SQL insert waits for transaction commit, but it must be executed right away
[ https://issues.apache.org/jira/browse/IGNITE-6044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yury Gerzhedovich updated IGNITE-6044: -- Ignite Flags: Docs Required > SQL insert waits for transaction commit, but it must be executed right away > --- > > Key: IGNITE-6044 > URL: https://issues.apache.org/jira/browse/IGNITE-6044 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.1 >Reporter: Mikhail Cherkasov >Assignee: Yury Gerzhedovich >Priority: Critical > Labels: sql-stability, usability > > Doc says: > ""Presently, DML supports the atomic mode only meaning that if there is a DML > query that is executed as a part of an Ignite transaction then it will not be > enlisted in the transaction's writing queue and will be executed right away."" > https://apacheignite.readme.io/docs/dml#section-transactional-support > However the data will be added to cache only after transaction commit. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
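A minimal SQL sketch of the documented-versus-observed mismatch described above, heavily hedged: it assumes a table CITY created via Ignite SQL and a JDBC session with autocommit disabled (the table name and values are illustrative, and the exact way the transaction is opened is not taken from the ticket).

```sql
-- Inside an open transaction (autocommit off):
INSERT INTO CITY (ID, NAME) VALUES (1, 'Forest Hill');
-- Documented behavior: the DML statement is not enlisted in the
-- transaction's writing queue and is applied right away.
-- Observed behavior (this issue): the row becomes visible only after:
COMMIT;
```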
[jira] [Updated] (IGNITE-9355) Document 3 new system views (nodes, node attributes, baseline nodes)
[ https://issues.apache.org/jira/browse/IGNITE-9355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-9355: Ignite Flags: (was: Docs Required) > Document 3 new system views (nodes, node attributes, baseline nodes) > > > Key: IGNITE-9355 > URL: https://issues.apache.org/jira/browse/IGNITE-9355 > Project: Ignite > Issue Type: Task > Components: documentation, sql >Reporter: Vladimir Ozerov >Priority: Major > Fix For: 2.7 > > > We need to document three new SQL system views. > # Explain to users that a new SQL system schema named "IGNITE" has appeared, where > all views are stored > # System view NODES - list of current nodes in the topology. Columns: ID, > CONSISTENT_ID, VERSION, IS_LOCAL, IS_CLIENT, IS_DAEMON, NODE_ORDER, > ADDRESSES, HOSTNAMES > # System view NODE_ATTRIBUTES - attributes for all nodes. Columns: NODE_ID, > NAME, VALUE > # System view BASELINE_NODES - list of baseline topology nodes. Columns: > CONSISTENT_ID, ONLINE (whether the node is up and running at the moment) > # Explain limitations: views cannot be joined with user tables; it is not > allowed to create other objects (tables, indexes) in the "IGNITE" schema. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
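If the documentation needs starting examples, queries like the following would illustrate the views. This is a sketch built only from the view and column names listed above; the exact columns should be verified against the 2.7 implementation before publishing.

```sql
-- All nodes currently in the topology:
SELECT ID, CONSISTENT_ID, IS_CLIENT, NODE_ORDER FROM IGNITE.NODES;

-- Attributes of one node (bind NODE_ID as a parameter):
SELECT NAME, VALUE FROM IGNITE.NODE_ATTRIBUTES WHERE NODE_ID = ?;

-- Baseline nodes that are currently offline:
SELECT CONSISTENT_ID FROM IGNITE.BASELINE_NODES WHERE ONLINE = FALSE;
```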
[jira] [Commented] (IGNITE-9274) Pass transaction label to cache events
[ https://issues.apache.org/jira/browse/IGNITE-9274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590132#comment-16590132 ] Yury Gerzhedovich commented on IGNITE-9274: --- tests passed - [https://ci.ignite.apache.org/viewLog.html?buildTypeId=IgniteTests24Java8_Cache4&buildId=1714667&branch_IgniteTests24Java8_Cache4=pull/4600/head] > Pass transaction label to cache events > -- > > Key: IGNITE-9274 > URL: https://issues.apache.org/jira/browse/IGNITE-9274 > Project: Ignite > Issue Type: Task > Components: cache >Reporter: Vladimir Ozerov >Assignee: Yury Gerzhedovich >Priority: Major > Fix For: 2.7 > > > It is possible to set transaction label - \{{IgniteTransactions.withLabel}}. > We need to pass this label to related cache and transaction events: > 1) EVT_TX_STARTED, EVT_TX_COMMITTED, EVT_TX_ROLLED_BACK, EVT_TX_SUSPENDED, > EVT_TX_RESUMED > 2) EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_REMOVED > For TX events most probably everything is already passed (see > \{{TransactionStateChangedEvent}}), we only need to add tests. > For put/remove events we need to investigate correct messages to pass label, > prepare requests appear to be good candidates for this. > For read operation we may need to add pass label to get/lock requests > ({{GridNearLockRequest}}, {{GridNearGetRequest}}, > {{GridNearSingleGetRequest}}). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9358) DynamicIndexPartitionedTransactionalConcurrentSelfTest#testConcurrentRebalance is flaky
[ https://issues.apache.org/jira/browse/IGNITE-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590129#comment-16590129 ] Ilya Kasnacheev commented on IGNITE-9358: - {code} java.lang.IllegalStateException: Grid is in invalid state to perform this operation. It either not started yet or has already being or have stopped [igniteInstanceName=index.DynamicIndexPartitionedTransactionalConcurrentSelfTest3, state=STOPPED] at org.apache.ignite.internal.GridKernalGatewayImpl.illegalState(GridKernalGatewayImpl.java:201) at org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:95) at org.apache.ignite.internal.cluster.ClusterGroupAdapter.guard(ClusterGroupAdapter.java:169) at org.apache.ignite.internal.cluster.IgniteClusterImpl.localNode(IgniteClusterImpl.java:128) at org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.awaitPartitionMapExchange(GridCommonAbstractTest.java:619) at org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.awaitPartitionMapExchange(GridCommonAbstractTest.java:551) at org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.awaitPartitionMapExchange(GridCommonAbstractTest.java:535) at org.apache.ignite.internal.processors.cache.index.DynamicIndexAbstractConcurrentSelfTest.testConcurrentRebalance(DynamicIndexAbstractConcurrentSelfTest.java:421) at sun.reflect.GeneratedMethodAccessor31.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at junit.framework.TestCase.runTest(TestCase.java:176) at org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:2156) at org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:143) at org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:2071) at java.lang.Thread.run(Thread.java:748) {code} 
> DynamicIndexPartitionedTransactionalConcurrentSelfTest#testConcurrentRebalance > is flaky > --- > > Key: IGNITE-9358 > URL: https://issues.apache.org/jira/browse/IGNITE-9358 > Project: Ignite > Issue Type: Bug >Reporter: Ilya Kasnacheev >Priority: Major > Labels: MakeTeamcityGreenAgain > > Fails approximately in 1/3 of runs > [~DmitriyGovorukhin] can you please take a look? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9358) DynamicIndexAbstractConcurrentSelfTest#testConcurrentRebalance is flaky
Ilya Kasnacheev created IGNITE-9358: --- Summary: DynamicIndexAbstractConcurrentSelfTest#testConcurrentRebalance is flaky Key: IGNITE-9358 URL: https://issues.apache.org/jira/browse/IGNITE-9358 Project: Ignite Issue Type: Bug Reporter: Ilya Kasnacheev Fails approximately in 1/3 of runs [~DmitriyGovorukhin] can you please take a look? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9358) DynamicIndexPartitionedTransactionalConcurrentSelfTest#testConcurrentRebalance is flaky
[ https://issues.apache.org/jira/browse/IGNITE-9358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ilya Kasnacheev updated IGNITE-9358: Summary: DynamicIndexPartitionedTransactionalConcurrentSelfTest#testConcurrentRebalance is flaky (was: DynamicIndexAbstractConcurrentSelfTest#testConcurrentRebalance is flaky) > DynamicIndexPartitionedTransactionalConcurrentSelfTest#testConcurrentRebalance > is flaky > --- > > Key: IGNITE-9358 > URL: https://issues.apache.org/jira/browse/IGNITE-9358 > Project: Ignite > Issue Type: Bug >Reporter: Ilya Kasnacheev >Priority: Major > Labels: MakeTeamcityGreenAgain > > Fails approximately in 1/3 of runs > [~DmitriyGovorukhin] can you please take a look? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-6173) SQL: do not start caches on client nodes
[ https://issues.apache.org/jira/browse/IGNITE-6173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reassigned IGNITE-6173: --- Assignee: Yury Gerzhedovich > SQL: do not start caches on client nodes > > > Key: IGNITE-6173 > URL: https://issues.apache.org/jira/browse/IGNITE-6173 > Project: Ignite > Issue Type: Task > Components: cache, sql >Affects Versions: 2.1 >Reporter: Vladimir Ozerov >Assignee: Yury Gerzhedovich >Priority: Major > Labels: sql-stability > > When a cache is started, this event is distributed through a custom discovery > message. Server nodes start the cache, client nodes do nothing until the cache is > requested explicitly. At the same time H2 database objects are created only > when the cache is really started. > For this reason query parsing could lead to {{TABLE NOT FOUND}}, {{INDEX NOT > FOUND}}, etc. errors. If such an exception is observed, we force start of all > known caches on a client and then retry. See the > {{GridCacheProcessor#createMissingQueryCaches}} method. > First, client node cache start leads to another custom discovery message. So > query performance may suffer. Second, this is not needed! We already have all > necessary cache info in discovery. > Let's try to find a way to use available discovery data and not start the > cache on a client for SQL query execution. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-7926) Web console: demo failed to start under java >= 9
[ https://issues.apache.org/jira/browse/IGNITE-7926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vasiliy Sisko updated IGNITE-7926: -- Summary: Web console: demo failed to start under java >= 9 (was: Web console: demo failed to start under java 9) > Web console: demo failed to start under java >= 9 > > > Key: IGNITE-7926 > URL: https://issues.apache.org/jira/browse/IGNITE-7926 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.4 >Reporter: Pavel Konstantinov >Assignee: Vasiliy Sisko >Priority: Minor > Fix For: 2.7 > > > We need to add support for Java 9 to the web console demo. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9356) Ignite rest command http://localhost:8080/ignite?cmd=log&from=n&to=m returns more lines in linux than windows
[ https://issues.apache.org/jira/browse/IGNITE-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590117#comment-16590117 ] ARomantsov commented on IGNITE-9356: {code:java} Windows "response": "[04:13:21,197][INFO][main][IgniteKernal] >>>__ >>> / _/ ___/ |/ / _/_ __/ __/ >>> _/ // (7 7// / / / / _/>>> /___/\\___/_/|_/___/ /_/ /___/ >>> >>> ver. 2.5.0#20180813-sha1:ee24d852>>> 2018 Copyright(C) Apache Software Foundation>>> >>> Ignite documentation: http://ignite.apache.org[04:13:21,198][INFO][main][IgniteKernal] Config URL: file:/C:/BuildAgent/work/dd4d79acf76cc870/i2test/var/suite-client/test_modules/test_rest/multicast.xml[04:13:21,214][INFO][main][IgniteKernal] IgniteConfiguration [igniteInstanceName=null, pubPoolSize=32, svcPoolSize=32, callbackPoolSize=32, stripedPoolSize=32, sysPoolSize=32, mgmtPoolSize=4, igfsPoolSize=32, dataStreamerPoolSize=32, utilityCachePoolSize=32, utilityCacheKeepAliveTime=6, p2pPoolSize=2, qryPoolSize=32, igniteHome=C:/BuildAgent/work/dd4d79acf76cc870/i2test/var/suite-client/gg-ent-fab, igniteWorkDir=C:\\BuildAgent\\work\\dd4d79acf76cc870\\i2test\\var\\suite-client\\gg-ent-fab\\work, mbeanSrv=com.sun.jmx.mbeanserver.JmxMBeanServer@6f94fa3e, nodeId=0ae39439-6c3f-4837-a42c-f0de9f1462ce, marsh=org.apache.ignite.internal.binary.BinaryMarshaller@175b9425, marshLocJobs=false, daemon=false, p2pEnabled=false, netTimeout=5000, sndRetryDelay=1000, sndRetryCnt=3, metricsHistSize=1, metricsUpdateFreq=2000, metricsExpTime=9223372036854775807, discoSpi=TcpDiscoverySpi [addrRslvr=null, sockTimeout=0, ackTimeout=0, marsh=null, reconCnt=10, reconDelay=2000, maxAckTimeout=60, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null], segPlc=STOP, segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true, segChkFreq=1, commSpi=TcpCommunicationSpi [connectGate=null, connPlc=null, enableForcibleNodeKill=false, enableTroubleshootingLog=false, 
srvLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi$2@32910148, locAddr=null, locHost=null, locPort=47100, locPortRange=100, shmemPort=-1, directBuf=true, directSndBuf=false, idleConnTimeout=60, connTimeout=5000, maxConnTimeout=60, reconCnt=10, sockSndBuf=32768, sockRcvBuf=32768, msgQueueLimit=0, slowClientQueueLimit=0, nioSrvr=null, shmemSrv=null, usePairedConnections=false, connectionsPerNode=1, tcpNoDelay=true, filterReachableAddresses=false, ackSndThreshold=32, unackedMsgsBufSize=0, sockWriteTimeout=2000, lsnr=null, boundTcpPort=-1, boundTcpShmemPort=-1, selectorsCnt=16, selectorSpins=0, addrRslvr=null, ctxInitLatch=java.util.concurrent.CountDownLatch@3f56875e[Count = 1], stopping=false, metricsLsnr=org.apache.ignite.spi.communication.tcp.TcpCommunicationMetricsListener@2b4bac49], evtSpi=org.apache.ignite.spi.eventstorage.NoopEventStorageSpi@fd07cbb, colSpi=NoopCollisionSpi [], deploySpi=LocalDeploymentSpi [lsnr=null], indexingSpi=org.apache.ignite.spi.indexing.noop.NoopIndexingSpi@7c83dc97, addrRslvr=null, clientMode=false, rebalanceThreadPoolSize=1, txCfg=org.apache.ignite.configuration.TransactionConfiguration@7748410a, cacheSanityCheckEnabled=true, discoStartupDelay=6, deployMode=SHARED, p2pMissedCacheSize=100, locHost=127.0.0.1, timeSrvPortBase=31100, timeSrvPortRange=100, failureDetectionTimeout=1, clientFailureDetectionTimeout=3, metricsLogFreq=6, hadoopCfg=null, connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@740773a3, odbcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration [seqReserveSize=1, cacheMode=REPLICATED, backups=1, aff=null, grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null, binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=DataStorageConfiguration [sysRegionInitSize=41943040, sysCacheMaxSize=104857600, pageSize=0, concLvl=0, dfltDataRegConf=DataRegionConfiguration [name=default_data_region, maxSize=20608658636, initSize=3221225472, swapPath=null, pageEvictionMode=DISABLED, 
evictionThreshold=0.9, emptyPagesPoolSize=100, metricsEnabled=false, metricsSubIntervalCount=5, metricsRateTimeInterval=6, persistenceEnabled=true, checkpointPageBufSize=0], storagePath=null, checkpointFreq=18, lockWaitTime=1, checkpointThreads=4, checkpointWriteOrder=SEQUENTIAL, walHistSize=20, walSegments=10, walSegmentSize=67108864, walPath=db/wal, walArchivePath=db/wal/archive, metricsEnabled=false, walMode=LOG_ONLY, walTlbSize=131072, walBuffSize=0, walFlushFreq=2000, walFsyncDelay=1000, walRecordIterBuffSize=67108864, alwaysWriteFullPages=false, fileIOFactory=org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIOFactory@37f1104d, metricsSubIntervalCnt=5, metricsRateTimeInterval=6, walAutoArchiveAfterInactivity=-1, writeThrottlingEnabled=false, walCompactionEnabled=false], activeOnStart=true, auto
[jira] [Commented] (IGNITE-9338) ML TF integration: tf cluster can't connect after killing first node with default port 10800
[ https://issues.apache.org/jira/browse/IGNITE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590112#comment-16590112 ] ASF GitHub Bot commented on IGNITE-9338: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/4601 > ML TF integration: tf cluster can't connect after killing first node with > default port 10800 > > > Key: IGNITE-9338 > URL: https://issues.apache.org/jira/browse/IGNITE-9338 > Project: Ignite > Issue Type: Bug > Components: ml >Reporter: Stepan Pilschikov >Assignee: Anton Dmitriev >Priority: Major > Labels: tf-integration > > Case: > - Run cluster with 3 nodes on 1 host > - Filling caches with data > - Running python script > - Killing lead node with port 10800 with chief + user_script processes > Expect: > - chief and user_script restarted on another node > - script rerun > Actual: > - chief and user_script restarted on another node but started to crash and run > again because it can't connect to the default port 10800 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-6044) SQL insert waits for transaction commit, but it must be executed right away
[ https://issues.apache.org/jira/browse/IGNITE-6044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov reassigned IGNITE-6044: --- Assignee: Yury Gerzhedovich (was: Sergey Kalashnikov) > SQL insert waits for transaction commit, but it must be executed right away > --- > > Key: IGNITE-6044 > URL: https://issues.apache.org/jira/browse/IGNITE-6044 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.1 >Reporter: Mikhail Cherkasov >Assignee: Yury Gerzhedovich >Priority: Critical > Labels: sql-stability, usability > > Doc says: > ""Presently, DML supports the atomic mode only meaning that if there is a DML > query that is executed as a part of an Ignite transaction then it will not be > enlisted in the transaction's writing queue and will be executed right away."" > https://apacheignite.readme.io/docs/dml#section-transactional-support > However the data will be added to cache only after transaction commit. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9353) Remove "Known issue, possible deadlock in case of low priority cache rebalancing delayed" comment from GridCacheRebalancingSyncSelfTest#getConfiguration
[ https://issues.apache.org/jira/browse/IGNITE-9353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590078#comment-16590078 ] Maxim Muzafarov commented on IGNITE-9353: - [~roman_s] Changes look good to me. I've rerun and checked `Cache 8 Suite` with {{GridCacheRebalancingSyncSelfTest}}. Can you please look at it too? https://ci.ignite.apache.org/viewType.html?buildTypeId=IgniteTests24Java8_Cache8&branch_IgniteTests24Java8=pull%2F4599%2Fhead&tab=buildTypeStatusDiv {{org.apache.ignite.failure.FailureHandler}} is a common way of handling any exceptions, as well as dumping pending futures and deadlocked threads. Here is more about it -- [IEP-14|https://cwiki.apache.org/confluence/display/IGNITE/IEP-14+Ignite+failures+handling]. > Remove "Known issue, possible deadlock in case of low priority cache > rebalancing delayed" comment from > GridCacheRebalancingSyncSelfTest#getConfiguration > > > Key: IGNITE-9353 > URL: https://issues.apache.org/jira/browse/IGNITE-9353 > Project: Ignite > Issue Type: Task >Reporter: Roman Shtykh >Assignee: Roman Shtykh >Priority: Trivial > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8619) Remote node could not start in ssh connection
[ https://issues.apache.org/jira/browse/IGNITE-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590046#comment-16590046 ] Amelchev Nikita commented on IGNITE-8619: - [~ivanan.fed], Looks good to me. > Remote node could not start in ssh connection > - > > Key: IGNITE-8619 > URL: https://issues.apache.org/jira/browse/IGNITE-8619 > Project: Ignite > Issue Type: Bug >Reporter: Ivan Fedotov >Assignee: Ivan Fedotov >Priority: Major > Labels: MakeTeamcityGreenAgain > > There is currently a problem with launching a remote node via ssh. The initial > assumption was that the remote process does not have enough time to write > information into the log: > [IGNITE-8085|https://issues.apache.org/jira/browse/IGNITE-8085]. But this > correction didn't fix [TeamCity > |https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8&testNameId=6814497542781613621&tab=testDetails] > (IgniteProjectionStartStopRestartSelfTest.testStartFiveNodesInTwoCalls). > So it is necessary to make launching a remote node via ssh always successful. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-9286) Redesign and Refactor UI
[ https://issues.apache.org/jira/browse/IGNITE-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kuznetsov reassigned IGNITE-9286: Assignee: Alexey Kuznetsov (was: Dmitriy Shabalin) > Redesign and Refactor UI > > > Key: IGNITE-9286 > URL: https://issues.apache.org/jira/browse/IGNITE-9286 > Project: Ignite > Issue Type: Improvement > Components: wizards >Reporter: Dmitriy Shabalin >Assignee: Alexey Kuznetsov >Priority: Major > Labels: web-console-configuration > Time Spent: 3h 35m > Remaining Estimate: 0h > > We should refactor all screens to use latest modern controls as on > "Configuration" screen. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink
[ https://issues.apache.org/jira/browse/IGNITE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexey Kukushkin updated IGNITE-9357: - Description: We are working on a PoC where we want to use Ignite as a data storage and Spark as a computation engine. We found that Ignite is supported neither as a source nor as a Sink when using Spark Structured Streaming, which is a must for us. We are enhancing Ignite to support Spark streaming with Ignite. We will send docs and code for review for the Ignite Community to consider if the community wants to accept this feature. was: We are working on a PoC where we want to use Ignite as a data storage and Spark as a computation engine. We found that Ignite is not supported neither as a source nor as a Sink when using Spark Structured Streaming, which is a must for us. We are enhancing Ignite to support Spark streaming with Ignite. We will send docs and code for review for the Ignite Community to consider if the Community want to accept this feature. > Spark Structured Streaming with Ignite as data source and sink > -- > > Key: IGNITE-9357 > URL: https://issues.apache.org/jira/browse/IGNITE-9357 > Project: Ignite > Issue Type: New Feature > Components: spark >Affects Versions: 2.7 >Reporter: Alexey Kukushkin >Assignee: Alexey Kukushkin >Priority: Major > > We are working on a PoC where we want to use Ignite as a data storage and > Spark as a computation engine. We found that Ignite is supported neither as a > source nor as a Sink when using Spark Structured Streaming, which is a must > for us. > We are enhancing Ignite to support Spark streaming with Ignite. We will send > docs and code for review for the Ignite Community to consider if the > community wants to accept this feature. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8971) GridRestProcessor should propagate error message
[ https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590018#comment-16590018 ] Sergey Kosarev commented on IGNITE-8971: [~agoncharuk], I've pulled master changes and queued a new TC Run. > GridRestProcessor should propagate error message > > > Key: IGNITE-8971 > URL: https://issues.apache.org/jira/browse/IGNITE-8971 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.5 >Reporter: Andrew Medvedev >Assignee: Sergey Kosarev >Priority: Major > > GridRestProcessor should propagate error message (stack trace) for handling > disk full error messages -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8971) GridRestProcessor should propagate error message
[ https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16590002#comment-16590002 ] ASF GitHub Bot commented on IGNITE-8971: GitHub user macrergate opened a pull request: https://github.com/apache/ignite/pull/4604 IGNITE-8971: make GridRestProcessor propagate error message You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-8971 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4604.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4604 commit 6eca747e8e52765f7518b1ccd05219d7873a7c12 Author: AMedvedev Date: 2018-07-13T16:02:04Z IGNITE-8971: make GridRestProcessor propagate error message > GridRestProcessor should propagate error message > > > Key: IGNITE-8971 > URL: https://issues.apache.org/jira/browse/IGNITE-8971 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.5 >Reporter: Andrew Medvedev >Assignee: Sergey Kosarev >Priority: Major > > GridRestProcessor should propagate error message (stack trace) for handling > disk full error messages -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (IGNITE-8971) GridRestProcessor should propagate error message
[ https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Kosarev reassigned IGNITE-8971: -- Assignee: Sergey Kosarev (was: Andrew Medvedev) > GridRestProcessor should propagate error message > > > Key: IGNITE-8971 > URL: https://issues.apache.org/jira/browse/IGNITE-8971 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.5 >Reporter: Andrew Medvedev >Assignee: Sergey Kosarev >Priority: Major > > GridRestProcessor should propagate error message (stack trace) for handling > disk full error messages -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink
[ https://issues.apache.org/jira/browse/IGNITE-9357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589998#comment-16589998 ] ASF GitHub Bot commented on IGNITE-9357: GitHub user kukushal opened a pull request: https://github.com/apache/ignite/pull/4603 IGNITE-9357 Spark Structured Streaming with Ignite as data source and sink You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-9357 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4603.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4603 commit b022dfcf87579431c3257ae4cc4ed29e7240b27e Author: kukushal Date: 2018-08-23T09:58:55Z IGNITE-9357 Spark Structured Streaming with Ignite as data source and sink > Spark Structured Streaming with Ignite as data source and sink > -- > > Key: IGNITE-9357 > URL: https://issues.apache.org/jira/browse/IGNITE-9357 > Project: Ignite > Issue Type: New Feature > Components: spark >Affects Versions: 2.7 >Reporter: Alexey Kukushkin >Assignee: Alexey Kukushkin >Priority: Major > > We are working on a PoC where we want to use Ignite as a data storage and > Spark as a computation engine. We found that Ignite is not supported neither > as a source nor as a Sink when using Spark Structured Streaming, which is a > must for us. > We are enhancing Ignite to support Spark streaming with Ignite. We will send > docs and code for review for the Ignite Community to consider if the > Community want to accept this feature. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9357) Spark Structured Streaming with Ignite as data source and sink
Alexey Kukushkin created IGNITE-9357: Summary: Spark Structured Streaming with Ignite as data source and sink Key: IGNITE-9357 URL: https://issues.apache.org/jira/browse/IGNITE-9357 Project: Ignite Issue Type: New Feature Components: spark Affects Versions: 2.7 Reporter: Alexey Kukushkin Assignee: Alexey Kukushkin We are working on a PoC where we want to use Ignite as data storage and Spark as a computation engine. We found that Ignite is supported neither as a source nor as a sink when using Spark Structured Streaming, which is a must for us. We are enhancing Ignite to support Spark streaming with Ignite. We will send docs and code for review for the Ignite Community to consider if the Community wants to accept this feature. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (IGNITE-9305) Wrong off-heap size is reported for a node
[ https://issues.apache.org/jira/browse/IGNITE-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589960#comment-16589960 ] Pavel Pereslegin edited comment on IGNITE-9305 at 8/23/18 9:28 AM: --- Hi [~dmagda]. Do I understand correctly that we don't logging aggregated information about off-heap usage? Example of such output: {noformat} ^-- H/N/C [hosts=1, nodes=2, CPUs=8] ^-- CPU [cur=2.57%, avg=4.44%, GC=0%] ^-- PageMemory [pages=34] ^-- Heap [used=130MB, free=96.31%, comm=244MB] ^-- Off-heap sysMemPlc [used=0MB, free=99.98%, comm=100MB] ^-- Off-heap default [used=0MB, free=99.62%, comm=20MB] ^-- Off-heap metastoreMemPlc [used=0MB, free=99.96%, comm=100MB] ^-- Ignite persistence default [used=0MB] ^-- Outbound messages queue [size=0] ^-- Public thread pool [active=0, idle=6, qSize=0] ^-- System thread pool [active=0, idle=7, qSize=0] ^-- Custom executor 0 [active=0, idle=0, qSize=0] ^-- Custom executor 1 [active=0, idle=0, qSize=0] {noformat} was (Author: xtern): Hi [~dmagda], Do I understand correctly that we don't logging aggregated information about off-heap usage? 
Example of such output: {noformat} ^-- H/N/C [hosts=1, nodes=2, CPUs=8] ^-- CPU [cur=2.57%, avg=4.44%, GC=0%] ^-- PageMemory [pages=34] ^-- Heap [used=130MB, free=96.31%, comm=244MB] ^-- Off-heap sysMemPlc [used=0MB, free=99.98%, comm=100MB] ^-- Off-heap default [used=0MB, free=99.62%, comm=20MB] ^-- Off-heap metastoreMemPlc [used=0MB, free=99.96%, comm=100MB] ^-- Ignite persistence default [used=0MB] ^-- Outbound messages queue [size=0] ^-- Public thread pool [active=0, idle=6, qSize=0] ^-- System thread pool [active=0, idle=7, qSize=0] ^-- Custom executor 0 [active=0, idle=0, qSize=0] ^-- Custom executor 1 [active=0, idle=0, qSize=0] {noformat} > Wrong off-heap size is reported for a node > -- > > Key: IGNITE-9305 > URL: https://issues.apache.org/jira/browse/IGNITE-9305 > Project: Ignite > Issue Type: Task >Affects Versions: 2.6 >Reporter: Denis Magda >Assignee: Pavel Pereslegin >Priority: Blocker > Fix For: 2.7 > > > Was troubleshooting an Ignite deployment today and couldn't find out from the > logs what was the actual off-heap space used. > Those were the given memory resoures (Ignite 2.6): > {code} > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager] Topology > snapshot [ver=1, servers=1, clients=0, CPUs=64, offheap=30.0GB, heap=24.0GB] > {code} > And that weird stuff was reported by the node (pay attention to the last > line): > {code} > [2018-08-16 15:45:50,211][INFO > ][grid-timeout-worker-#135%cluster_31-Dec-2017%][IgniteKernal%cluster_31-Dec-2017] > > Metrics for local node (to disable set 'metricsLogFrequency' to 0) > ^-- Node [id=c033026e, name=cluster_31-Dec-2017, uptime=00:38:00.257] > ^-- H/N/C [hosts=1, nodes=1, CPUs=64] > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%] > ^-- PageMemory [pages=6997377] > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB] > ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this line is always the > same! 
> {code} > Had to change the code by using > {code}dataRegion.getPhysicalMemoryPages(){code} to find out that actual > off-heap usage size was > {code} > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB > {code} > The logs have to report the following instead: > {code} > ^-- Off-heap {Data Region 1} [used={dataRegion1.getPhysicalMemorySize()}, > free=X%, comm=dataRegion1.maxSize()] > ^-- Off-heap {Data Region 2} [used={dataRegion2.getPhysicalMemorySize()}, > free=X%, comm=dataRegion2.maxSize()] > {code} > If Ignite persistence is enabled then the following extra lines have to be > added to see the disk used space: > {code} > ^-- Ignite persistence {Data Region 1}: > used={dataRegion1.getTotalAllocatedSize() - > dataRegion1.getPhysicalMemorySize()} > ^-- Ignite persistence {Data Region 2} > [used={dataRegion2.getTotalAllocatedSize() - > dataRegion2.getPhysicalMemorySize()}] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9305) Wrong off-heap size is reported for a node
[ https://issues.apache.org/jira/browse/IGNITE-9305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589960#comment-16589960 ] Pavel Pereslegin commented on IGNITE-9305: -- Hi [~dmagda], Do I understand correctly that we aren't logging aggregated information about off-heap usage? Example of such output: {noformat} ^-- H/N/C [hosts=1, nodes=2, CPUs=8] ^-- CPU [cur=2.57%, avg=4.44%, GC=0%] ^-- PageMemory [pages=34] ^-- Heap [used=130MB, free=96.31%, comm=244MB] ^-- Off-heap sysMemPlc [used=0MB, free=99.98%, comm=100MB] ^-- Off-heap default [used=0MB, free=99.62%, comm=20MB] ^-- Off-heap metastoreMemPlc [used=0MB, free=99.96%, comm=100MB] ^-- Ignite persistence default [used=0MB] ^-- Outbound messages queue [size=0] ^-- Public thread pool [active=0, idle=6, qSize=0] ^-- System thread pool [active=0, idle=7, qSize=0] ^-- Custom executor 0 [active=0, idle=0, qSize=0] ^-- Custom executor 1 [active=0, idle=0, qSize=0] {noformat} > Wrong off-heap size is reported for a node > -- > > Key: IGNITE-9305 > URL: https://issues.apache.org/jira/browse/IGNITE-9305 > Project: Ignite > Issue Type: Task >Affects Versions: 2.6 >Reporter: Denis Magda >Assignee: Pavel Pereslegin >Priority: Blocker > Fix For: 2.7 > > > Was troubleshooting an Ignite deployment today and couldn't find out from the > logs what was the actual off-heap space used. 
> Those were the given memory resoures (Ignite 2.6): > {code} > [2018-08-16 15:07:49,961][INFO ][main][GridDiscoveryManager] Topology > snapshot [ver=1, servers=1, clients=0, CPUs=64, offheap=30.0GB, heap=24.0GB] > {code} > And that weird stuff was reported by the node (pay attention to the last > line): > {code} > [2018-08-16 15:45:50,211][INFO > ][grid-timeout-worker-#135%cluster_31-Dec-2017%][IgniteKernal%cluster_31-Dec-2017] > > Metrics for local node (to disable set 'metricsLogFrequency' to 0) > ^-- Node [id=c033026e, name=cluster_31-Dec-2017, uptime=00:38:00.257] > ^-- H/N/C [hosts=1, nodes=1, CPUs=64] > ^-- CPU [cur=0.03%, avg=5.54%, GC=0%] > ^-- PageMemory [pages=6997377] > ^-- Heap [used=9706MB, free=61.18%, comm=22384MB] > ^-- Non heap [used=144MB, free=-1%, comm=148MB] - this line is always the > same! > {code} > Had to change the code by using > {code}dataRegion.getPhysicalMemoryPages(){code} to find out that actual > off-heap usage size was > {code} > >>> Physical Memory Size: 28651614208 => 27324 MB, 26 GB > {code} > The logs have to report the following instead: > {code} > ^-- Off-heap {Data Region 1} [used={dataRegion1.getPhysicalMemorySize()}, > free=X%, comm=dataRegion1.maxSize()] > ^-- Off-heap {Data Region 2} [used={dataRegion2.getPhysicalMemorySize()}, > free=X%, comm=dataRegion2.maxSize()] > {code} > If Ignite persistence is enabled then the following extra lines have to be > added to see the disk used space: > {code} > ^-- Ignite persistence {Data Region 1}: > used={dataRegion1.getTotalAllocatedSize() - > dataRegion1.getPhysicalMemorySize()} > ^-- Ignite persistence {Data Region 2} > [used={dataRegion2.getTotalAllocatedSize() - > dataRegion2.getPhysicalMemorySize()}] > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9356) Ignite rest command http://localhost:8080/ignite?cmd=log&from=n&to=m return more line in linux than windows
[ https://issues.apache.org/jira/browse/IGNITE-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589958#comment-16589958 ] Sergey Kozlov commented on IGNITE-9356: --- [~ARomantsov] could you put examples under Linux and windows? > Ignite rest command http://localhost:8080/ignite?cmd=log&from=n&to=m return > more line in linux than windows > - > > Key: IGNITE-9356 > URL: https://issues.apache.org/jira/browse/IGNITE-9356 > Project: Ignite > Issue Type: Improvement > Components: rest >Affects Versions: 2.5 > Environment: Centos/ Windows10 >Reporter: ARomantsov >Priority: Major > Fix For: 2.7 > > > I run cluster in diffrent configuration (centos and windows 10) and notice > that log command return diffrent count of rows in same from and to > Windows rest return 1 less rows -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9356) Ignite rest command http://localhost:8080/ignite?cmd=log&from=n&to=m return more line in linux than windows
[ https://issues.apache.org/jira/browse/IGNITE-9356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ARomantsov updated IGNITE-9356: --- Summary: Ignite rest command http://localhost:8080/ignite?cmd=log&from=n&to=m return more line in linux than windows(was: Ignite rest command http://localhost:8080/ignite?cmd=log&from=n&to=m return more line in windows than linux) > Ignite rest command http://localhost:8080/ignite?cmd=log&from=n&to=m return > more line in linux than windows > - > > Key: IGNITE-9356 > URL: https://issues.apache.org/jira/browse/IGNITE-9356 > Project: Ignite > Issue Type: Improvement > Components: rest >Affects Versions: 2.5 > Environment: Centos/ Windows10 >Reporter: ARomantsov >Priority: Major > Fix For: 2.7 > > > I run cluster in diffrent configuration (centos and windows 10) and notice > that log command return diffrent count of rows in same from and to > Windows rest return 1 less rows -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9356) Ignite rest command http://localhost:8080/ignite?cmd=log&from=n&to=m return more line in windows than linux
ARomantsov created IGNITE-9356: -- Summary: Ignite rest command http://localhost:8080/ignite?cmd=log&from=n&to=m return more line in windows than linux Key: IGNITE-9356 URL: https://issues.apache.org/jira/browse/IGNITE-9356 Project: Ignite Issue Type: Improvement Components: rest Affects Versions: 2.5 Environment: Centos/ Windows10 Reporter: ARomantsov Fix For: 2.7 I ran a cluster in different configurations (CentOS and Windows 10) and noticed that the log command returns a different count of rows for the same from and to values. The Windows REST endpoint returns 1 row less. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9309) LocalNodeMovingPartitionsCount metrics may calculates incorrect due to processFullPartitionUpdate
[ https://issues.apache.org/jira/browse/IGNITE-9309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589926#comment-16589926 ] Anton Vinogradov commented on IGNITE-9309: -- [~qvad], Please continue the discussion. Maxim cannot fix the issue without your help. > LocalNodeMovingPartitionsCount metrics may calculates incorrect due to > processFullPartitionUpdate > - > > Key: IGNITE-9309 > URL: https://issues.apache.org/jira/browse/IGNITE-9309 > Project: Ignite > Issue Type: Bug >Affects Versions: 2.6 >Reporter: Maxim Muzafarov >Priority: Major > > [~qvad] has found an incorrect {{LocalNodeMovingPartitionsCount}} metrics > calculation on client node {{JOIN\LEFT}}. A full issue reproducer is not available. > Probable scenario: > {code} > Repeat 10 times: > 1. stop node > 2. clean lfs > 3. add stopped node (trigger rebalance) > 4. 3 times: start 2 clients, wait for topology snapshot, close clients > 5. for each cache group check JMX metrics LocalNodeMovingPartitionsCount > (like waitForFinishRebalance()) > {code} > The whole discussion and all configuration details can be found in the comments of > [IGNITE-7165|https://issues.apache.org/jira/browse/IGNITE-7165]. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9338) ML TF integration: tf cluster can't connect after killing first node with default port 10800
[ https://issues.apache.org/jira/browse/IGNITE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589913#comment-16589913 ] ASF GitHub Bot commented on IGNITE-9338: GitHub user dmitrievanthony opened a pull request: https://github.com/apache/ignite/pull/4601 IGNITE-9338 Add connection data int env variables of TensorFlow worker processes You can merge this pull request into a Git repository by running: $ git pull https://github.com/gridgain/apache-ignite ignite-9338 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/ignite/pull/4601.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #4601 commit 597ca635203f1cbf77504a5e18519f45abea73e3 Author: Anton Dmitriev Date: 2018-08-22T13:12:01Z IGNITE-9338 Pass Ignite dataset host and port into Python processes. commit fe4bb1f04e95bf8c09c17eeed51bbf2cde696510 Author: Anton Dmitriev Date: 2018-08-22T14:02:33Z IGNITE-9338 Pass Ignite dataset host and port into Python processes. commit 18a936a1acd28eb0ae95ed0127a3874e8165ba7c Author: Anton Dmitriev Date: 2018-08-22T14:04:12Z IGNITE-9338 Pass Ignite dataset host and port into Python processes. 
> ML TF integration: tf cluster can't connect after killing first node with > default port 10800 > > > Key: IGNITE-9338 > URL: https://issues.apache.org/jira/browse/IGNITE-9338 > Project: Ignite > Issue Type: Bug > Components: ml >Reporter: Stepan Pilschikov >Assignee: Anton Dmitriev >Priority: Major > Labels: tf-integration > > Case: > - Run cluster with 3 node on 1 host > - Filling caches with data > - Running python script > - Killing lead node with port 10800 with chief + user_script processes > Expect: > - chief and user_script restarted on other node > - script rerun > Actual: > - chief and user_secript restarted on other node but started to crash and run > again because can't connect to default 10800 port -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (IGNITE-9355) Document 3 new system views (nodes, node attributes, baseline nodes)
Vladimir Ozerov created IGNITE-9355: --- Summary: Document 3 new system views (nodes, node attributes, baseline nodes) Key: IGNITE-9355 URL: https://issues.apache.org/jira/browse/IGNITE-9355 Project: Ignite Issue Type: Task Components: documentation, sql Reporter: Vladimir Ozerov Fix For: 2.7 We need to document three new SQL system views. # Explain users that new system SQL schema appeared, named "IGNITE", where all views are stored # System view NODES - list of current nodes in topology. Columns: ID, CONSISTENT_ID, VERSION, IS_LOCAL, IS_CLIENT, IS_DAEMON, NODE_ORDER, ADDRESSES, HOSTNAMES # System view NODE_ATTRIBUTES - attributes for all nodes. Columns: NODE_ID, NAME, VALUE # System view BASELINE_NODES - list of baseline topology nodes. Columns: CONSISTENT_ID, ONLINE (whether node is up and running at the moment) # Explain limitations: views cannot be joined with user tables; it is not allowed to create other objects (tables, indexes) in "IGNITE" schema. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
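Once documented, these views should be reachable from any SQL client against the reserved "IGNITE" schema. A hypothetical session over the JDBC thin driver (the host, the exact connection URL form, and the column selection are assumptions based only on the column list above):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SystemViewsQuery {
    public static void main(String[] args) throws Exception {
        // The system views are read-only, live in the IGNITE schema, and
        // cannot be joined with user tables (see the limitations above).
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT ID, IS_CLIENT, NODE_ORDER FROM IGNITE.NODES")) {
            while (rs.next())
                System.out.println(rs.getString("ID")
                    + " client=" + rs.getBoolean("IS_CLIENT")
                    + " order=" + rs.getLong("NODE_ORDER"));
        }
    }
}
```

This is a sketch only; it requires a running node with the thin driver on the classpath and assumes the view names and columns land exactly as listed in the ticket.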
[jira] [Commented] (IGNITE-9318) SQL system view for list of baseline topology nodes
[ https://issues.apache.org/jira/browse/IGNITE-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589898#comment-16589898 ] ASF GitHub Bot commented on IGNITE-9318: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/4575 > SQL system view for list of baseline topology nodes > --- > > Key: IGNITE-9318 > URL: https://issues.apache.org/jira/browse/IGNITE-9318 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: iep-13 > Fix For: 2.7 > > > Implement SQL system view to show list of baseline topology nodes. View must > contain information about node consistentId and online/offline status. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9318) SQL system view for list of baseline topology nodes
[ https://issues.apache.org/jira/browse/IGNITE-9318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-9318: Fix Version/s: 2.7 > SQL system view for list of baseline topology nodes > --- > > Key: IGNITE-9318 > URL: https://issues.apache.org/jira/browse/IGNITE-9318 > Project: Ignite > Issue Type: Task > Components: sql >Reporter: Aleksey Plekhanov >Assignee: Aleksey Plekhanov >Priority: Major > Labels: iep-13 > Fix For: 2.7 > > > Implement SQL system view to show list of baseline topology nodes. View must > contain information about node consistentId and online/offline status. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-9235) Transitivity violation in GridMergeIndex Comparator
[ https://issues.apache.org/jira/browse/IGNITE-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589883#comment-16589883 ] ASF GitHub Bot commented on IGNITE-9235: Github user asfgit closed the pull request at: https://github.com/apache/ignite/pull/4498 > Transitivity violation in GridMergeIndex Comparator > --- > > Key: IGNITE-9235 > URL: https://issues.apache.org/jira/browse/IGNITE-9235 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.5 >Reporter: Andrew Medvedev >Assignee: Andrew Medvedev >Priority: Major > Fix For: 2.7 > > > Currently comparator in > org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndex is: > > Private final Comparator streamCmp = new Comparator() { > @Override public int compare(RowStream o1, RowStream o2) { > // Nulls at the beginning. > if (o1 == null) > return -1; > if (o2 == null) > return 1; > return compareRows(o1.get(), o2.get()); > } > }; > -- > > This comparator violates transitivity when o1 and o2 are null. 
Thus we get > exception in JDK1.8: > > > {color:#d04437}Caused by: java.lang.IllegalArgumentException: Comparison > method violates its general contract!{color} > {color:#d04437} at java.util.TimSort.mergeHi(TimSort.java:899){color} > {color:#d04437} at java.util.TimSort.mergeAt(TimSort.java:516){color} > {color:#d04437} at java.util.TimSort.mergeCollapse(TimSort.java:441){color} > {color:#d04437} at java.util.TimSort.sort(TimSort.java:245){color} > {color:#d04437} at java.util.Arrays.sort(Arrays.java:1438){color} > {color:#d04437} at > org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndexSorted$MergeStreamIterator.goFirst(GridMergeIndexSorted.java:248){color} > {color:#d04437} at > org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndexSorted$MergeStreamIterator.hasNext(GridMergeIndexSorted.java:270){color} > {color:#d04437} at > org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndex$FetchingCursor.fetchRows(GridMergeIndex.java:614){color} > {color:#d04437} at > org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndex$FetchingCursor.next(GridMergeIndex.java:658){color} > {color:#d04437} at org.h2.index.IndexCursor.next(IndexCursor.java:305){color} > {color:#d04437} at org.h2.table.TableFilter.next(TableFilter.java:499){color} > {color:#d04437} at > org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow(Select.java:1452){color} > {color:#d04437} at > org.h2.result.LazyResult.hasNext(LazyResult.java:79){color} > {color:#d04437} at org.h2.result.LazyResult.next(LazyResult.java:59){color} > {color:#d04437} at > org.h2.command.dml.Select.queryFlat(Select.java:519){color} > {color:#d04437} at > org.h2.command.dml.Select.queryWithoutCache(Select.java:625){color} > {color:#d04437} at > org.h2.command.dml.Query.queryWithoutCacheLazyCheck(Query.java:114){color} > {color:#d04437} at org.h2.command.dml.Query.query(Query.java:352){color} > {color:#d04437} at org.h2.command.dml.Query.query(Query.java:333){color} > 
{color:#d04437} at > org.h2.command.CommandContainer.query(CommandContainer.java:113){color} > {color:#d04437} at > org.h2.command.Command.executeQuery(Command.java:201){color} > {color:#d04437} ... 44 more{color} > > WA: use -Djava.util.Arrays.useLegacyMergeSort=true > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
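The contract violation is easy to see in isolation: the comparator from the ticket returns -1 whenever {{o1 == null}}, so for two nulls both {{compare(a, b)}} and {{compare(b, a)}} return -1, breaking the antisymmetry requirement ({{sgn(compare(a, b)) == -sgn(compare(b, a))}}) that TimSort checks. Handling the both-null case restores the contract. A self-contained sketch using {{Integer}} in place of {{RowStream}} (the names {{BROKEN}}/{{FIXED}} are illustrative, not from the Ignite source):

```java
import java.util.Arrays;
import java.util.Comparator;

public class NullSafeCmp {
    // Shape of the comparator quoted in the ticket: two nulls compare as -1
    // in both directions, which violates the Comparator contract.
    static final Comparator<Integer> BROKEN = (o1, o2) -> {
        if (o1 == null) return -1; // nulls at the beginning
        if (o2 == null) return 1;
        return o1.compareTo(o2);
    };

    // Fixed variant: report two nulls as equal before the one-sided checks.
    static final Comparator<Integer> FIXED = (o1, o2) -> {
        if (o1 == o2) return 0;    // covers the both-null case
        if (o1 == null) return -1;
        if (o2 == null) return 1;
        return o1.compareTo(o2);
    };

    public static void main(String[] args) {
        Integer[] a = {3, null, 1, null, 2};
        Arrays.sort(a, FIXED); // nulls first, then ascending
        System.out.println(Arrays.toString(a));
    }
}
```

The JDK ships the same fix as {{Comparator.nullsFirst(Comparator.naturalOrder())}}, which is the idiomatic way to get null-safe ordering without hand-rolling the null checks.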
[jira] [Updated] (IGNITE-9235) Transitivity violation in GridMergeIndex Comparator
[ https://issues.apache.org/jira/browse/IGNITE-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-9235: Component/s: sql > Transitivity violation in GridMergeIndex Comparator > --- > > Key: IGNITE-9235 > URL: https://issues.apache.org/jira/browse/IGNITE-9235 > Project: Ignite > Issue Type: Bug > Components: sql >Affects Versions: 2.5 >Reporter: Andrew Medvedev >Assignee: Andrew Medvedev >Priority: Major > Fix For: 2.7 > > > Currently comparator in > org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndex is: > > Private final Comparator streamCmp = new Comparator() { > @Override public int compare(RowStream o1, RowStream o2) { > // Nulls at the beginning. > if (o1 == null) > return -1; > if (o2 == null) > return 1; > return compareRows(o1.get(), o2.get()); > } > }; > -- > > This comparator violates transitivity when o1 and o2 are null. Thus we get > exception in JDK1.8: > > > {color:#d04437}Caused by: java.lang.IllegalArgumentException: Comparison > method violates its general contract!{color} > {color:#d04437} at java.util.TimSort.mergeHi(TimSort.java:899){color} > {color:#d04437} at java.util.TimSort.mergeAt(TimSort.java:516){color} > {color:#d04437} at java.util.TimSort.mergeCollapse(TimSort.java:441){color} > {color:#d04437} at java.util.TimSort.sort(TimSort.java:245){color} > {color:#d04437} at java.util.Arrays.sort(Arrays.java:1438){color} > {color:#d04437} at > org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndexSorted$MergeStreamIterator.goFirst(GridMergeIndexSorted.java:248){color} > {color:#d04437} at > org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndexSorted$MergeStreamIterator.hasNext(GridMergeIndexSorted.java:270){color} > {color:#d04437} at > org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndex$FetchingCursor.fetchRows(GridMergeIndex.java:614){color} > {color:#d04437} at > 
org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndex$FetchingCursor.next(GridMergeIndex.java:658){color} > {color:#d04437} at org.h2.index.IndexCursor.next(IndexCursor.java:305){color} > {color:#d04437} at org.h2.table.TableFilter.next(TableFilter.java:499){color} > {color:#d04437} at > org.h2.command.dml.Select$LazyResultQueryFlat.fetchNextRow(Select.java:1452){color} > {color:#d04437} at > org.h2.result.LazyResult.hasNext(LazyResult.java:79){color} > {color:#d04437} at org.h2.result.LazyResult.next(LazyResult.java:59){color} > {color:#d04437} at > org.h2.command.dml.Select.queryFlat(Select.java:519){color} > {color:#d04437} at > org.h2.command.dml.Select.queryWithoutCache(Select.java:625){color} > {color:#d04437} at > org.h2.command.dml.Query.queryWithoutCacheLazyCheck(Query.java:114){color} > {color:#d04437} at org.h2.command.dml.Query.query(Query.java:352){color} > {color:#d04437} at org.h2.command.dml.Query.query(Query.java:333){color} > {color:#d04437} at > org.h2.command.CommandContainer.query(CommandContainer.java:113){color} > {color:#d04437} at > org.h2.command.Command.executeQuery(Command.java:201){color} > {color:#d04437} ... 44 more{color} > > WA: use -Djava.util.Arrays.useLegacyMergeSort=true > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (IGNITE-9235) Transitivity violation in GridMergeIndex Comparator
[ https://issues.apache.org/jira/browse/IGNITE-9235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vladimir Ozerov updated IGNITE-9235: Ignite Flags: (was: Docs Required)
> Transitivity violation in GridMergeIndex Comparator
> ---
> Key: IGNITE-9235
> URL: https://issues.apache.org/jira/browse/IGNITE-9235
> Project: Ignite
> Issue Type: Bug
> Components: sql
> Affects Versions: 2.5
> Reporter: Andrew Medvedev
> Assignee: Andrew Medvedev
> Priority: Major
> Fix For: 2.7
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8911) While cache is restarting it's possible to start new cache with this name
[ https://issues.apache.org/jira/browse/IGNITE-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589840#comment-16589840 ] Alexey Goncharuk commented on IGNITE-8911: -- [~EdShangGG], please fix the code style: 1) The catch block should start on a new line. 2) Misaligned indentation of method arguments (8 spaces instead of 4; looks like an IDE refactoring did this). 3) Missing javadoc for the new field and methods in IgniteCacheRestartingException.
> While cache is restarting it's possible to start new cache with this name
> -
> Key: IGNITE-8911
> URL: https://issues.apache.org/jira/browse/IGNITE-8911
> Project: Ignite
> Issue Type: Bug
> Reporter: Eduard Shangareev
> Assignee: Eduard Shangareev
> Priority: Major
>
> Caches have a "restarting" state when we know for certain that they will start again at some point in the future. However, it is currently possible to start a new cache with the same name while the old one is restarting.
> Additionally, an NPE is thrown when we try to get a proxy for such caches (in the "restarting" state).
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (IGNITE-8971) GridRestProcessor should propagate error message
[ https://issues.apache.org/jira/browse/IGNITE-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16589826#comment-16589826 ] Alexey Goncharuk commented on IGNITE-8971: -- [~andmed], a high number of test failures is reproduced in the PR. Please take a look (you probably need to pull the latest master).
> GridRestProcessor should propagate error message
> 
> Key: IGNITE-8971
> URL: https://issues.apache.org/jira/browse/IGNITE-8971
> Project: Ignite
> Issue Type: Bug
> Affects Versions: 2.5
> Reporter: Andrew Medvedev
> Assignee: Andrew Medvedev
> Priority: Major
>
> GridRestProcessor should propagate the error message (stack trace) so that disk-full errors can be handled.
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
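For context, propagating a full stack trace into a REST error payload typically looks like the following sketch. This is generic Java for illustration only, not the actual GridRestProcessor code; the RestResponse holder class and method names are hypothetical:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class ErrorPropagation {
    // Hypothetical minimal response holder, for illustration only.
    static class RestResponse {
        final int status;
        final String error;
        RestResponse(int status, String error) { this.status = status; this.error = error; }
    }

    // Render the full stack trace into the error field, so that clients can
    // distinguish root causes such as a disk-full IOException instead of
    // receiving only an opaque failure status.
    static RestResponse toErrorResponse(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw));
        return new RestResponse(1, sw.toString());
    }

    public static void main(String[] args) {
        RestResponse r = toErrorResponse(new java.io.IOException("No space left on device"));
        System.out.println(r.error.split("\n")[0]); // the first line names the root cause
    }
}
```

The design choice here is to serialize the Throwable on the server side rather than ship the exception object, since REST clients may not share the server's classpath.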
[jira] [Updated] (IGNITE-9354) HelloWorldGAExample hangs forever with additional nodes in topology
[ https://issues.apache.org/jira/browse/IGNITE-9354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Volkov updated IGNITE-9354: Attachment: log.zip
> HelloWorldGAExample hangs forever with additional nodes in topology
> --
> Key: IGNITE-9354
> URL: https://issues.apache.org/jira/browse/IGNITE-9354
> Project: Ignite
> Issue Type: Bug
> Components: ml
> Affects Versions: 2.6
> Reporter: Alex Volkov
> Priority: Major
> Attachments: log.zip
>
> To reproduce this issue, follow these steps:
> 1. Run two nodes using the ignite.sh script. For example:
> {code:java}
> bin/ignite.sh examples/config/example-ignite.xml -J-Xmx1g -J-Xms1g -J-DCONSISTENT_ID=node1 -J-DIGNITE_QUIET=false
> {code}
> 2. Run HelloWorldGAExample from the IDEA IDE.
> *Expected result:*
> The example runs and completes successfully.
> *Actual result:*
> There are a lot of NPEs in the example log:
> {code:java}
> [2018-08-23 09:49:25,029][ERROR][pub-#19][GridJobWorker] Failed to execute job due to unexpected runtime exception [jobId=c296b856561-e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, ses=GridJobSessionImpl [ses=GridTaskSessionImpl [taskName=o.a.i.ml.genetic.FitnessTask, dep=GridDeployment [ts=1535006960878, depMode=SHARED, clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, clsLdrId=8d16b856561-e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, userVer=0, loc=true, sampleClsName=o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionFullMap, pendingUndeploy=false, undeployed=false, usage=2], taskClsName=o.a.i.ml.genetic.FitnessTask, sesId=b196b856561-e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, startTime=1535006964236, endTime=9223372036854775807, taskNodeId=e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, clsLdr=sun.misc.Launcher$AppClassLoader@18b4aac2, closed=false, cpSpi=null, failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=false, topPred=o.a.i.i.cluster.ClusterGroupAdapter$AttributeFilter@2d746ce4, subjId=e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, 
mapFut=GridFutureAdapter > [ignoreInterrupts=false, state=INIT, res=null, hash=679592043]IgniteFuture > [orig=], execName=null], > jobId=c296b856561-e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1], err=null] > java.lang.NullPointerException > at org.apache.ignite.ml.genetic.FitnessJob.execute(FitnessJob.java:76) > at org.apache.ignite.ml.genetic.FitnessJob.execute(FitnessJob.java:35) > at > org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:568) > at > org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6749) > at > org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:562) > at > org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:491) > at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} > and it hangs on this one: > {code:java} > [2018-08-23 09:49:35,229][WARN ][pub-#17][AlwaysFailoverSpi] Received > topology with only nodes that job had failed on (forced to fail) > [failedNodes=[eac48ea7-da79-453a-a94c-291039c5cc15, > 0907d876-e0ce-4fda-966d-ad91a03f9722, e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1]] > class org.apache.ignite.cluster.ClusterTopologyException: Failed to failover > a job to another node (failover SPI returned null) > [job=org.apache.ignite.ml.genetic.FitnessJob@35f8a9d3, node=TcpDiscoveryNode > [id=e5eca24b-6f5a-4d3e-9e9e-94ad404b44d1, addrs=ArrayList [0:0:0:0:0:0:0:1, > 127.0.0.1, 172.25.4.42, 172.25.4.92], sockAddrs=HashSet [/172.25.4.42:47502, > /172.25.4.92:47502, /0:0:0:0:0:0:0:1:47502, /127.0.0.1:47502], > discPort=47502, order=3, intOrder=3, lastExchangeTime=1535006974981, > loc=true, ver=2.7.0#19700101-sha1:, isClient=false]] > at 
org.apache.ignite.internal.util.IgniteUtils$7.apply(IgniteUtils.java:853) > at org.apache.ignite.internal.util.IgniteUtils$7.apply(IgniteUtils.java:851) > at > org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:985) > at > org.apache.ignite.internal.IgniteComputeImpl.execute(IgniteComputeImpl.java:541) > at org.apache.ignite.ml.genetic.GAGrid.calculateFitness(GAGrid.java:102) > at org.apache.ignite.ml.genetic.GAGrid.evolve(GAGrid.java:171) > at > org.apache.ignite.examples.ml.genetic.helloworld.HelloWorldGAExample.main(HelloWorldGAExample.java:90) > Caused by: class > org.apache.ignite.internal.cluster.ClusterTopologyCheckedException: Failed to > failover a job to another node (failover SPI returned null) > [job=org.apache.ignite.ml.genetic.FitnessJob@35f8a9d3, node=TcpDi
[jira] [Created] (IGNITE-9354) HelloWorldGAExample hangs forever with additional nodes in topology
Alex Volkov created IGNITE-9354: ---
Summary: HelloWorldGAExample hangs forever with additional nodes in topology
Key: IGNITE-9354
URL: https://issues.apache.org/jira/browse/IGNITE-9354
Project: Ignite
Issue Type: Bug
Components: ml
Affects Versions: 2.6
Reporter: Alex Volkov
Attachments: log.zip