[jira] [Commented] (IGNITE-9031) SpringCacheManager throws AssertionError during Spring initialization
[ https://issues.apache.org/jira/browse/IGNITE-9031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16577656#comment-16577656 ]

Amir Akhmedov commented on IGNITE-9031:
---------------------------------------

[~vkulichenko], can you please review my changes?

> SpringCacheManager throws AssertionError during Spring initialization
> ---------------------------------------------------------------------
>
>                 Key: IGNITE-9031
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9031
>             Project: Ignite
>          Issue Type: Bug
>          Components: spring
>    Affects Versions: 2.6
>            Reporter: Joel Lang
>            Assignee: Amir Akhmedov
>            Priority: Major
>
> When initializing Ignite using an IgniteSpringBean and also having a
> SpringCacheManager defined, the SpringCacheManager throws an AssertionError
> in the onApplicationEvent() method due to it being called more than once.
> There is an "assert ignite == null" that fails after the first call.
> This is related to the changes in IGNITE-8740. This happened immediately when
> I first tried to start Ignite after upgrading from 2.5 to 2.6.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
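The failure mode described in the report above (a second onApplicationEvent() invocation tripping "assert ignite == null") can be sketched outside of Ignite. The class below is purely illustrative, not the actual SpringCacheManager code; it shows the usual guard that makes such a handler safe when the container fires the event more than once:

```java
// Hypothetical sketch, not Ignite's fix: a Spring-style event handler that
// tolerates repeated invocation (e.g. for parent and child application
// contexts) by ignoring later events instead of asserting.
public class IdempotentListenerSketch {
    private Object ignite;  // stand-in for the lazily resolved Ignite instance

    int initializations;    // visible so the demonstration below can count them

    public void onApplicationEvent(Object evt) {
        if (ignite != null)
            return;         // already initialized: a repeated event is not an error

        ignite = new Object(); // stand-in for looking up / starting Ignite
        initializations++;
    }

    public static void main(String[] args) {
        IdempotentListenerSketch lsnr = new IdempotentListenerSketch();
        lsnr.onApplicationEvent("context-refresh-1");
        lsnr.onApplicationEvent("context-refresh-2"); // must not fail
        System.out.println("initializations = " + lsnr.initializations);
    }
}
```

With the guard, only the first event initializes the instance; an `assert ignite == null` in its place would throw on the second event, which matches the reported behaviour.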
[jira] [Created] (IGNITE-9251) Starting cache on join from unauthorized node should be failed.
Ivan Daschinskiy created IGNITE-9251:
-------------------------------------

             Summary: Starting cache on join from unauthorized node should be failed.
                 Key: IGNITE-9251
                 URL: https://issues.apache.org/jira/browse/IGNITE-9251
             Project: Ignite
          Issue Type: Bug
    Affects Versions: 2.6
            Reporter: Ivan Daschinskiy
            Assignee: Ivan Daschinskiy


Starting caches on join from a node (caches from its configuration) that is not authorized to create caches leads to a failure of the joining node without a meaningful error. Moreover, the caches are started on the other nodes, which then collect data from the joining node.

1. The joining node should fail with a meaningful error explaining that the configuration should be corrected.
2. When starting caches from a joining node, the caches from the discovery data should be checked for whether the joining node has permission to start them and, if not, they should be skipped.
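The second point of the proposal above can be sketched roughly as follows. The names here (`filterStartable`, `hasCreatePermission`) are illustrative assumptions, not Ignite's actual security API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

// Hypothetical sketch of the proposed behavior: given the cache names a
// joining node carries in its discovery data, keep only the caches the node
// is authorized to create and skip the rest instead of failing the cluster.
public class CacheJoinFilterSketch {
    public static List<String> filterStartable(String joiningNodeId,
                                               List<String> cachesFromDiscoveryData,
                                               BiPredicate<String, String> hasCreatePermission) {
        List<String> startable = new ArrayList<>();

        for (String cacheName : cachesFromDiscoveryData) {
            if (hasCreatePermission.test(joiningNodeId, cacheName))
                startable.add(cacheName); // authorized: cache may be started
            // otherwise: skip the cache (and report a meaningful error to the node)
        }

        return startable;
    }

    public static void main(String[] args) {
        // toy permission rule: only cache names starting with "allowed" pass
        List<String> started = filterStartable("node-1",
            List.of("allowedCache", "forbiddenCache"),
            (node, cache) -> cache.startsWith("allowed"));

        System.out.println(started); // prints [allowedCache]
    }
}
```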
[jira] [Updated] (IGNITE-9251) Starting cache on join from unauthorized node should be failed.
[ https://issues.apache.org/jira/browse/IGNITE-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan Daschinskiy updated IGNITE-9251:
-------------------------------------
    Fix Version/s: 2.7

> Starting cache on join from unauthorized node should be failed.
> ---------------------------------------------------------------
>
>                 Key: IGNITE-9251
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9251
>             Project: Ignite
>          Issue Type: Bug
>    Affects Versions: 2.6
>            Reporter: Ivan Daschinskiy
>            Assignee: Ivan Daschinskiy
>            Priority: Major
>             Fix For: 2.7
>
>
> Starting caches on join from a node (caches from its configuration) that is
> not authorized to create caches leads to a failure of the joining node
> without a meaningful error. Moreover, the caches are started on the other
> nodes, which then collect data from the joining node.
> 1. The joining node should fail with a meaningful error explaining that the
> configuration should be corrected.
> 2. When starting caches from a joining node, the caches from the discovery
> data should be checked for whether the joining node has permission to start
> them and, if not, they should be skipped.
[jira] [Resolved] (IGNITE-9251) Starting cache on join from unauthorized node should be failed.
[ https://issues.apache.org/jira/browse/IGNITE-9251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ivan Daschinskiy resolved IGNITE-9251.
--------------------------------------
    Resolution: Duplicate

> Starting cache on join from unauthorized node should be failed.
> ---------------------------------------------------------------
>
>                 Key: IGNITE-9251
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9251
>             Project: Ignite
>          Issue Type: Bug
>    Affects Versions: 2.6
>            Reporter: Ivan Daschinskiy
>            Assignee: Ivan Daschinskiy
>            Priority: Major
>             Fix For: 2.7
>
>
> Starting caches on join from a node (caches from its configuration) that is
> not authorized to create caches leads to a failure of the joining node
> without a meaningful error. Moreover, the caches are started on the other
> nodes, which then collect data from the joining node.
> 1. The joining node should fail with a meaningful error explaining that the
> configuration should be corrected.
> 2. When starting caches from a joining node, the caches from the discovery
> data should be checked for whether the joining node has permission to start
> them and, if not, they should be skipped.
[jira] [Assigned] (IGNITE-9028) Web console: update to Babel 7
[ https://issues.apache.org/jira/browse/IGNITE-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrey Novikov reassigned IGNITE-9028:
--------------------------------------
    Assignee: Vasiliy Sisko  (was: Andrey Novikov)

> Web console: update to Babel 7
> ------------------------------
>
>                 Key: IGNITE-9028
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9028
>             Project: Ignite
>          Issue Type: Improvement
>          Components: wizards
>            Reporter: Ilya Borisov
>            Assignee: Vasiliy Sisko
>            Priority: Major
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> While Babel 7 is still in beta, I think it might be a good idea to update to
> a beta version of such a major dependency once in a while. It will be
> released soon enough, so even if any issues occur we'll most likely be able
> to iron those out. As a benefit, Babel 7 will also provide an easy TypeScript
> migration path.
[jira] [Commented] (IGNITE-9028) Web console: update to Babel 7
[ https://issues.apache.org/jira/browse/IGNITE-9028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16577804#comment-16577804 ]

Andrey Novikov commented on IGNITE-9028:
----------------------------------------

Upgraded Babel to version 7.0.0-rc.1. [~vsisko], please do a smoke test.

> Web console: update to Babel 7
> ------------------------------
>
>                 Key: IGNITE-9028
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9028
>             Project: Ignite
>          Issue Type: Improvement
>          Components: wizards
>            Reporter: Ilya Borisov
>            Assignee: Andrey Novikov
>            Priority: Major
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> While Babel 7 is still in beta, I think it might be a good idea to update to
> a beta version of such a major dependency once in a while. It will be
> released soon enough, so even if any issues occur we'll most likely be able
> to iron those out. As a benefit, Babel 7 will also provide an easy TypeScript
> migration path.
[jira] [Commented] (IGNITE-9238) Test GridTaskFailoverAffinityRunTest.testNodeRestartClient hangs when coordinator forces client to reconnect on grid startup.
[ https://issues.apache.org/jira/browse/IGNITE-9238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16577851#comment-16577851 ]

Andrew Mashenkov commented on IGNITE-9238:
------------------------------------------

There is a known issue with client force server mode [1]. Is it possible this is caused by a wrong "is this node a client" check somewhere in the code?

[1] https://issues.apache.org/jira/browse/IGNITE-9241

> Test GridTaskFailoverAffinityRunTest.testNodeRestartClient hangs when
> coordinator forces client to reconnect on grid startup.
> ----------------------------------------------------------------------
>
>                 Key: IGNITE-9238
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9238
>             Project: Ignite
>          Issue Type: Bug
>    Affects Versions: 2.6
>            Reporter: Pavel Pereslegin
>            Assignee: Pavel Pereslegin
>            Priority: Major
>             Fix For: 2.7
>
>
> Example of such a hang on TC:
> https://ci.ignite.apache.org/viewLog.html?buildId=1605243&tab=buildResultsDiv&buildTypeId=IgniteTests24Java8_ComputeGrid
> Log output:
> {noformat}
> ...
> [2018-08-07 12:20:09,804][WARN ][sys-#12799%internal.GridTaskFailoverAffinityRunTest1%][GridCachePartitionExchangeManager] Client node tries to connect but its exchange info is cleaned up from exchange history. Consider increasing 'IGNITE_EXCHANGE_HISTORY_SIZE' property or start clients in smaller batches. Current settings and versions: [IGNITE_EXCHANGE_HISTORY_SIZE=1000, initVer=AffinityTopologyVersion [topVer=3, minorTopVer=0], readyVer=AffinityTopologyVersion [topVer=4, minorTopVer=0]].
> [2018-08-07 12:20:09,804][INFO ][exchange-worker-#12782%internal.GridTaskFailoverAffinityRunTest1%][GridDhtPartitionsExchangeFuture] Completed partition exchange [localNode=511d5932-5f22-4919-807d-575c7f61, exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion [topVer=3, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode [id=6b9a7a1d-07bf-4d20-882a-8462ada3, addrs=ArrayList [127.0.0.1], sockAddrs=HashSet [/127.0.0.1:47502], discPort=47502, order=3, intOrder=3, lastExchangeTime=1533644409739, loc=false, ver=2.7.0#20180807-sha1:e96616f5, isClient=false], done=true], topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0], durationFromInit=21]
> [2018-08-07 12:20:09,806][INFO ][exchange-worker-#12782%internal.GridTaskFailoverAffinityRunTest1%][time] Finished exchange init [topVer=AffinityTopologyVersion [topVer=3, minorTopVer=0], crd=true]
> [2018-08-07 12:20:09,807][INFO ][exchange-worker-#12782%internal.GridTaskFailoverAffinityRunTest1%][GridCachePartitionExchangeManager] Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=4, minorTopVer=0], force=false, evt=NODE_JOINED, node=6b9a7a1d-07bf-4d20-882a-8462ada3]
> [2018-08-07 12:20:09,811][INFO ][sys-#12798%internal.GridTaskFailoverAffinityRunTest2%][GridDhtPartitionsExchangeFuture] Finish exchange future [startVer=AffinityTopologyVersion [topVer=4, minorTopVer=0], resVer=AffinityTopologyVersion [topVer=4, minorTopVer=0], err=null]
> [2018-08-07 12:20:09,813][INFO ][sys-#12798%internal.GridTaskFailoverAffinityRunTest2%][GridDhtPartitionsExchangeFuture] Completed partition exchange [localNode=a3206c1f-6d57-4fd6-8aa5-e22f3b42, exchange=GridDhtPartitionsExchangeFuture [topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0], evt=NODE_JOINED, evtNode=TcpDiscoveryNode [id=a3206c1f-6d57-4fd6-8aa5-e22f3b42, addrs=ArrayList [127.0.0.1], sockAddrs=HashSet [/127.0.0.1:47503], discPort=47503, order=4, intOrder=4, lastExchangeTime=1533644409779, loc=true, ver=2.7.0#20180807-sha1:e96616f5, isClient=false], done=true], topVer=AffinityTopologyVersion [topVer=4, minorTopVer=0], durationFromInit=41]
> [2018-08-07 12:20:09,814][INFO ][grid-starter-testNodeRestartClient-1][GridTaskFailoverAffinityRunTest1] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
> [2018-08-07 12:20:09,815][INFO ][grid-starter-testNodeRestartClient-1][GridTaskFailoverAffinityRunTest1]
> [2018-08-07 12:20:09,815][INFO ][grid-starter-testNodeRestartClient-1][GridTaskFailoverAffinityRunTest1]
> >>> +---+
> >>> Ignite ver. 2.7.0-SNAPSHOT#20180807-sha1:e96616f580930f267eab44f75d410fa29a876bcb
> >>> +---+
> >>> OS name: Linux 4.4.0-128-generic amd64
> >>> CPU(s): 5
> >>> Heap: 2.0GB
> >>> VM name: 20126@8790182f15a5
> >>> Ignite instance name: internal.GridTaskFailoverAffinityRunTest1
> >>> Local node [ID=511D5932-5F22-4919-807D-575C7F61, order=2, clientMode=false]
> >>> Local node addresses: [127.0.0.1]
> >>> L
[jira] [Created] (IGNITE-9252) Update RocketMQ dependencies to 4.3.0
Roman Shtykh created IGNITE-9252:
---------------------------------

             Summary: Update RocketMQ dependencies to 4.3.0
                 Key: IGNITE-9252
                 URL: https://issues.apache.org/jira/browse/IGNITE-9252
             Project: Ignite
          Issue Type: Task
            Reporter: Roman Shtykh
            Assignee: Roman Shtykh


http://rocketmq.apache.org/release_notes/release-notes-4.3.0/
[jira] [Commented] (IGNITE-9196) SQL: Memory leak in MapNodeResults
[ https://issues.apache.org/jira/browse/IGNITE-9196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16577861#comment-16577861 ]

Denis Mekhanikov commented on IGNITE-9196:
------------------------------------------

[~tledkov-gridgain], [~ilyak], I changed the test to check the difference in size of heap dumps. Added the test to the IgniteCacheQuerySelfTestSuite2 suite. Does everything else seem fine to you?

> SQL: Memory leak in MapNodeResults
> ----------------------------------
>
>                 Key: IGNITE-9196
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9196
>             Project: Ignite
>          Issue Type: Bug
>          Components: sql
>    Affects Versions: 2.6
>            Reporter: Denis Mekhanikov
>            Assignee: Denis Mekhanikov
>            Priority: Blocker
>             Fix For: 2.7
>
>
> When the size of a SQL query result set is a multiple of {{Query#pageSize}},
> the {{MapQueryResult}} is never closed and removed from the
> {{MapNodeResults#res}} collection.
> The following code leads to an OOME when run with a 1Gb heap:
> {code:java}
> public class MemLeakRepro {
>     public static void main(String[] args) {
>         Ignition.start(getConfiguration("server"));
>
>         try (Ignite client = Ignition.start(getConfiguration("client").setClientMode(true))) {
>             IgniteCache<Integer, Person> cache = startPeopleCache(client);
>
>             int pages = 10;
>             int pageSize = 1024;
>
>             for (int i = 0; i < pages * pageSize; i++) {
>                 Person p = new Person("Person #" + i, 25);
>                 cache.put(i, p);
>             }
>
>             for (int i = 0; i < 1_000_000; i++) {
>                 if (i % 1000 == 0)
>                     System.out.println("Select iteration #" + i);
>
>                 Query<List<?>> qry = new SqlFieldsQuery("select * from people");
>                 qry.setPageSize(pageSize);
>
>                 QueryCursor<List<?>> cursor = cache.query(qry);
>                 cursor.getAll();
>                 cursor.close();
>             }
>         }
>     }
>
>     private static IgniteConfiguration getConfiguration(String instanceName) {
>         IgniteConfiguration igniteCfg = new IgniteConfiguration();
>         igniteCfg.setIgniteInstanceName(instanceName);
>
>         TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
>         discoSpi.setIpFinder(new TcpDiscoveryVmIpFinder(true));
>         igniteCfg.setDiscoverySpi(discoSpi);
>
>         return igniteCfg;
>     }
>
>     private static IgniteCache<Integer, Person> startPeopleCache(Ignite node) {
>         CacheConfiguration<Integer, Person> cacheCfg = new CacheConfiguration<>("cache");
>
>         QueryEntity qe = new QueryEntity(Integer.class, Person.class);
>         qe.setTableName("people");
>         cacheCfg.setQueryEntities(Collections.singleton(qe));
>         cacheCfg.setSqlSchema("PUBLIC");
>
>         return node.getOrCreateCache(cacheCfg);
>     }
>
>     public static class Person {
>         @QuerySqlField
>         private String name;
>
>         @QuerySqlField
>         private int age;
>
>         public Person(String name, int age) {
>             this.name = name;
>             this.age = age;
>         }
>     }
> }
> {code}
>
> At the same time it works perfectly fine when there are, for example,
> {{pages * pageSize - 1}} records in the cache instead.
> The reason is that the {{MapQueryResult#fetchNextPage(...)}} method doesn't
> return true when the result set size is a multiple of the page size.
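The boundary condition in the IGNITE-9196 report above can be illustrated with a toy model. This is only an illustration of the arithmetic, not the actual MapQueryResult code: if a cursor is recognized as finished only when it produces a partial page, a result whose size is an exact multiple of the page size never produces one.

```java
// Toy model of the paging boundary: the final page is "partial" (and thus
// recognizably the last one) only when the row count is NOT an exact
// multiple of pageSize.
public class LastPageSketch {
    public static boolean producesPartialLastPage(int totalRows, int pageSize) {
        return totalRows % pageSize != 0;
    }

    public static void main(String[] args) {
        int pageSize = 1024;

        // 10 * 1024 - 1 rows: the last page holds 1023 rows, clearly the end
        System.out.println(producesPartialLastPage(10 * pageSize - 1, pageSize)); // true

        // 10 * 1024 rows: every page is full, so nothing marks the end and
        // the server-side result is never closed -- the reported leak
        System.out.println(producesPartialLastPage(10 * pageSize, pageSize));     // false
    }
}
```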
[jira] [Commented] (IGNITE-9252) Update RocketMQ dependencies to 4.3.0
[ https://issues.apache.org/jira/browse/IGNITE-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16577865#comment-16577865 ]

ASF GitHub Bot commented on IGNITE-9252:
----------------------------------------

GitHub user shroman opened a pull request:

    https://github.com/apache/ignite/pull/4523

    IGNITE-9252: Update RocketMQ dependencies to 4.3.0.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/shroman/ignite IGNITE-9252

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/ignite/pull/4523.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4523

----
commit d2e90a09847c2c5ef97138664271d90b1bdf1506
Author: shroman
Date:   2018-08-13T06:07:08Z

    IGNITE-9252: Update RocketMQ dependencies to 4.3.0.

> Update RocketMQ dependencies to 4.3.0
> -------------------------------------
>
>                 Key: IGNITE-9252
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9252
>             Project: Ignite
>          Issue Type: Task
>            Reporter: Roman Shtykh
>            Assignee: Roman Shtykh
>            Priority: Minor
>
> http://rocketmq.apache.org/release_notes/release-notes-4.3.0/
[jira] [Commented] (IGNITE-9009) Local continuous query listeners may be called on partition reassignment
[ https://issues.apache.org/jira/browse/IGNITE-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16577873#comment-16577873 ]

Andrew Mashenkov commented on IGNITE-9009:
------------------------------------------

Answered on the dev list [1].

[1] http://apache-ignite-developers.2346864.n4.nabble.com/Backup-queue-for-local-continuous-query-tp33391.html

> Local continuous query listeners may be called on partition reassignment
> ------------------------------------------------------------------------
>
>                 Key: IGNITE-9009
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9009
>             Project: Ignite
>          Issue Type: Bug
>    Affects Versions: 2.5
>            Reporter: Denis Mekhanikov
>            Assignee: Denis Mekhanikov
>            Priority: Major
>             Fix For: 2.7
>
>         Attachments: ContinuousQueryReassignmentTest.java
>
>
> When a node becomes primary for some partitions, local continuous query
> listeners receive updates on entries from those partitions. Such invocations
> shouldn't happen.
> The attached test class demonstrates this behaviour.
[jira] [Commented] (IGNITE-9252) Update RocketMQ dependencies to 4.3.0
[ https://issues.apache.org/jira/browse/IGNITE-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16577890#comment-16577890 ]

ASF GitHub Bot commented on IGNITE-9252:
----------------------------------------

Github user asfgit closed the pull request at:

    https://github.com/apache/ignite/pull/4523

> Update RocketMQ dependencies to 4.3.0
> -------------------------------------
>
>                 Key: IGNITE-9252
>                 URL: https://issues.apache.org/jira/browse/IGNITE-9252
>             Project: Ignite
>          Issue Type: Task
>            Reporter: Roman Shtykh
>            Assignee: Roman Shtykh
>            Priority: Minor
>
> http://rocketmq.apache.org/release_notes/release-notes-4.3.0/