[jira] [Commented] (IGNITE-9878) Failed to start near cache after second call of getOrCreateCache

2019-05-21 Thread Roman Guseinov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845423#comment-16845423
 ] 

Roman Guseinov commented on IGNITE-9878:


[~vkulichenko], yes, it throws on the second invocation.

> Failed to start near cache after second call of getOrCreateCache
> 
>
> Key: IGNITE-9878
> URL: https://issues.apache.org/jira/browse/IGNITE-9878
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.6
>Reporter: Roman Guseinov
>Assignee: Roman Guseinov
>Priority: Major
> Attachments: NearCacheIssueReproducer.java, NearCacheTest.java
>
>
> Repeated calls of `Ignite.getOrCreateCache(CacheConfiguration cacheCfg, 
> NearCacheConfiguration nearCfg)` lead to the following exception:
> {code:java}
> javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: 
> Failed to start near cache (local node is an affinity node for cache): 
> ignite-test-near-rep
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1305)
>   at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2995)
>   at 
> org.apache.ignite.reproducers.cache.NearCacheIssueReproducer.testRepeatedGetOrCreateCache(NearCacheIssueReproducer.java:24)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start 
> near cache (local node is an affinity node for cache): ignite-test-near-rep
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheChangeRequest(GridCacheProcessor.java:5235)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.dynamicStartCache(GridCacheProcessor.java:3621)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.dynamicStartCache(GridCacheProcessor.java:3560)
>   at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2983)
>   ... 23 more
> {code}
> Also, if a cache is specified in the IgniteConfiguration, 
> `Ignite#getOrCreateNearCache` will fail with the following exception:
> {code:java}
> javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: 
> Failed to start near cache (a cache with the same name without near cache is 
> already started)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1305)
>   at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateNearCache(IgniteKernal.java:3072)
>   at 
> org.apache.ignite.reproducers.cache.NearCacheIssueReproducer.testGetOrCreateNearCache(NearCacheIssueReproducer.java:32)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start 
> near cache (a cache with the same name without near cache is already started)
>   at 
> org.apache.ignite.internal.IgniteKernal.checkNearCacheStarted(IgniteKernal.java:3085)
>   at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateNearCache(IgniteKernal.java:3067)
>   ... 23 more
> {code}
> The test is attached: [^NearCacheIssueReproducer.java]. The workaround is to 
> put the near cache configuration into the cache configuration via 
> `CacheConfiguration.setNearConfiguration`.
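A minimal sketch of the workaround described above, assuming a running node obtained via Ignition.start(); the cache name is taken from the stack trace and the key/value types are illustrative:

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheWorkaround {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Embed the near cache configuration into the cache configuration
            // instead of passing a separate NearCacheConfiguration to
            // getOrCreateCache(cacheCfg, nearCfg).
            CacheConfiguration<Integer, String> cacheCfg =
                new CacheConfiguration<>("ignite-test-near-rep");

            cacheCfg.setNearConfiguration(new NearCacheConfiguration<>());

            // Repeated invocations with the same configuration are expected to
            // return the existing cache instead of failing.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cacheCfg);
            IgniteCache<Integer, String> same = ignite.getOrCreateCache(cacheCfg);

            same.put(1, "value");

            System.out.println(cache.get(1));
        }
    }
}
{code}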



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (IGNITE-11819) Add query timeouts support for JDBC statement

2019-05-21 Thread Mikhail Cherkasov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov resolved IGNITE-11819.

Resolution: Duplicate

> Add query timeouts support for JDBC statement
> -
>
> Key: IGNITE-11819
> URL: https://issues.apache.org/jira/browse/IGNITE-11819
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Reporter: Mikhail Cherkasov
>Priority: Major
>
> statement.setQueryTimeout(5_000); - this timeout has no effect for Ignite; it 
> is ignored, so even if there are network delays of minutes, the call will 
> wait all that time.
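A minimal sketch of the expected behavior using the standard JDBC API; the connection URL and table name are illustrative. Per java.sql.Statement, setQueryTimeout() takes seconds, so the 5_000 above would mean 5000 seconds:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QueryTimeoutExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement()) {
            // Expected: abort the query if it runs longer than 5 seconds.
            stmt.setQueryTimeout(5);

            // Per this ticket, the timeout is currently ignored, so the call
            // blocks for as long as the query (or the network) takes.
            try (ResultSet rs = stmt.executeQuery("SELECT * FROM some_table")) {
                while (rs.next())
                    System.out.println(rs.getString(1));
            }
        }
    }
}
{code}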



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-9878) Failed to start near cache after second call of getOrCreateCache

2019-05-21 Thread Valentin Kulichenko (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-9878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845111#comment-16845111
 ] 

Valentin Kulichenko commented on IGNITE-9878:
-

[~guseinov], so you're saying that the "Failed to start near cache (local node is 
an affinity node for cache)" error is thrown only on the second invocation, right? 
If so, it's indeed inconsistent. However, I believe it should ALWAYS be thrown 
rather than NEVER thrown :)

> Failed to start near cache after second call of getOrCreateCache
> 
>
> Key: IGNITE-9878
> URL: https://issues.apache.org/jira/browse/IGNITE-9878
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.6
>Reporter: Roman Guseinov
>Assignee: Roman Guseinov
>Priority: Major
> Attachments: NearCacheIssueReproducer.java, NearCacheTest.java
>
>
> Repeated calls of `Ignite.getOrCreateCache(CacheConfiguration cacheCfg, 
> NearCacheConfiguration nearCfg)` lead to the following exception:
> {code:java}
> javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: 
> Failed to start near cache (local node is an affinity node for cache): 
> ignite-test-near-rep
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1305)
>   at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2995)
>   at 
> org.apache.ignite.reproducers.cache.NearCacheIssueReproducer.testRepeatedGetOrCreateCache(NearCacheIssueReproducer.java:24)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start 
> near cache (local node is an affinity node for cache): ignite-test-near-rep
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheChangeRequest(GridCacheProcessor.java:5235)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.dynamicStartCache(GridCacheProcessor.java:3621)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.dynamicStartCache(GridCacheProcessor.java:3560)
>   at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2983)
>   ... 23 more
> {code}
> Also, if a cache is specified in the IgniteConfiguration, 
> `Ignite#getOrCreateNearCache` will fail with the following exception:
> {code:java}
> javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException: 
> Failed to start near cache (a cache with the same name without near cache is 
> already started)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1305)
>   at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateNearCache(IgniteKernal.java:3072)
>   at 
> org.apache.ignite.reproducers.cache.NearCacheIssueReproducer.testGetOrCreateNearCache(NearCacheIssueReproducer.java:32)
> Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start 
> near cache (a cache with the same name without near cache is already started)
>   at 
> org.apache.ignite.internal.IgniteKernal.checkNearCacheStarted(IgniteKernal.java:3085)
>   at 
> org.apache.ignite.internal.IgniteKernal.getOrCreateNearCache(IgniteKernal.java:3067)
>   ... 23 more
> {code}
> The test is attached: [^NearCacheIssueReproducer.java]. The workaround is to 
> put the near cache configuration into the cache configuration via 
> `CacheConfiguration.setNearConfiguration`.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10078) Node failure during concurrent partition updates may cause partition desync between primary and backup.

2019-05-21 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845094#comment-16845094
 ] 

Ivan Rakov commented on IGNITE-10078:
-

Overall looks good.

1) 
org/apache/ignite/internal/processors/cache/CacheAffinitySharedManager.java:1728
 - IgniteCheckedException is never thrown.
2) 
org/apache/ignite/internal/processors/cache/distributed/dht/topology/GridDhtPartitionTopologyImpl.java:482
 - please add a @param tag description.
3) 
org.apache.ignite.internal.processors.cache.distributed.dht.topology.GridDhtPartitionTopologyImpl#finalizeUpdateCounters
 - this logic needs documentation or comments. Why do we need to log this set 
of rollback records? Why "gapStart - 1" and "gapStop - gapStart + 1"?
4) How are https://issues.apache.org/jira/browse/IGNITE-11797 and 
IgniteCacheGroupsTest related? If the scenario doesn't work until that issue is 
resolved, perhaps we should mute the tests.
5) The same goes for https://issues.apache.org/jira/browse/IGNITE-11791 and 
IgnitePdsContinuousRestartTestWithExpiryPolicy.
6) Please add a javadoc to CacheAffinitySharedManager#addToWaitGroup. What does 
this method do?
7) CacheContinuousQueryHandler - there are unused imports.
8) Please add a class description for MetastoreDataPageIO.
9) I think we should expand the javadoc for PartitionUpdateCounter (or for 
PartitionTxUpdateCounterImpl, if it's applicable only to this implementation). 
The doc explains what LWM and HWM are, but their flow in the cluster is still 
unclear. Let's add a brief description of how the counters flow between backup 
and primary (which counters are generated / sent to the remote node in each 
case), maybe with examples.

Please fix the minor comments mentioned above, recheck the tests that failed in 
the previous bot comment and then proceed to merge.

> Node failure during concurrent partition updates may cause partition desync 
> between primary and backup.
> ---
>
> Key: IGNITE-10078
> URL: https://issues.apache.org/jira/browse/IGNITE-10078
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexei Scherbakov
>Assignee: Alexei Scherbakov
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This is possible if some updates are not written to WAL before node failure. 
> They will not be applied by rebalancing, because the partition counters are 
> the same, in the following scenario:
> 1. Start a grid with 3 nodes and 2 backups.
> 2. Preload some data to partition P.
> 3. Start two concurrent transactions, each writing a single key to the same 
> partition P; the keys are different:
> {noformat}
> try(Transaction tx = client.transactions().txStart(PESSIMISTIC, 
> REPEATABLE_READ, 0, 1)) {
>   client.cache(DEFAULT_CACHE_NAME).put(k, v);
>   tx.commit();
> }
> {noformat}
> 4. Order the updates on the backup so that the update with the max partition 
> counter is written to WAL, while the update with the lesser partition counter 
> fails because the failure handler (FH) is triggered before it's added to WAL.
> 5. Return the failed node to the grid and observe that no rebalancing happens 
> due to identical partition counters.
> Possible solution: detect gaps in update counters on recovery and force a 
> rebalance from a node without gaps if any are detected.
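An illustrative sketch of the proposed recovery check, not Ignite's actual implementation: a missing counter between the last applied counter and the highest counter observed in WAL means some updates were lost and the partition should be rebalanced from a node without gaps.

{code:java}
import java.util.SortedSet;
import java.util.TreeSet;

public class UpdateCounterGapCheck {
    /**
     * @param lwm Highest counter up to which all updates are known to be applied.
     * @param observed Counters of updates found in WAL during recovery (sorted).
     * @return {@code true} if a counter between {@code lwm} and the max is missing.
     */
    static boolean hasGap(long lwm, SortedSet<Long> observed) {
        long expected = lwm + 1;

        for (long cntr : observed) {
            if (cntr > expected)
                return true; // Updates in (expected, cntr) are missing.

            expected = cntr + 1;
        }

        return false;
    }

    public static void main(String[] args) {
        SortedSet<Long> observed = new TreeSet<>();

        observed.add(11L);
        observed.add(13L); // Counter 12 is missing -> gap.

        System.out.println(hasGap(10L, observed)); // Prints "true".
    }
}
{code}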



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10913) Reduce heap occupation by o.a.i.i.processors.cache.persistence.file.FilePageStore instances.

2019-05-21 Thread Eduard Shangareev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16845042#comment-16845042
 ] 

Eduard Shangareev commented on IGNITE-10913:


I have left some comments in the PR. [~Denis Chudov], please take a look.

> Reduce heap occupation by 
> o.a.i.i.processors.cache.persistence.file.FilePageStore instances.
> 
>
> Key: IGNITE-10913
> URL: https://issues.apache.org/jira/browse/IGNITE-10913
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Assignee: Denis Chudov
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> With a large topology, a large number of caches/partitions, and persistence 
> enabled, there can be millions of FilePageStore objects in the heap (one per 
> partition).
> Each instance has a reference to a File (field cfgFile) that stores the 
> absolute path to a partition as a String.
> The internal File implementation (for example, UnixFile) also allocates space 
> for the file path.
> I observed about 2 GB of heap space occupied by these objects in one of the 
> environments.
> Solution: dereference (set to null) cfgFile after object creation and create 
> the File object lazily on demand.
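A hedged sketch of the proposed approach, not the actual FilePageStore code; the class and field names are illustrative. Only the path strings needed to rebuild the File are kept, and the File object is created lazily and can be dropped again:

{code:java}
import java.io.File;

public class LazyFileRef {
    /** Partition directory kept as a plain String. */
    private final String dir;

    /** Partition file name. */
    private final String name;

    /** Created lazily; may be dropped again to save heap. */
    private volatile File file;

    public LazyFileRef(String dir, String name) {
        this.dir = dir;
        this.name = name;
    }

    /** Builds the File object only when it is actually needed. */
    public File file() {
        File f = file;

        if (f == null)
            file = f = new File(dir, name);

        return f;
    }

    /** Dereferences the cached File so it does not occupy heap between uses. */
    public void release() {
        file = null;
    }
}
{code}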



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11312) JDBC: Thin driver reports incorrect property names

2019-05-21 Thread Suraj Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844989#comment-16844989
 ] 

Suraj Singh commented on IGNITE-11312:
--

Hi [~slukyanov], could you please take a look at the last correspondence? 
Thanks!

> JDBC: Thin driver reports incorrect property names
> --
>
> Key: IGNITE-11312
> URL: https://issues.apache.org/jira/browse/IGNITE-11312
> Project: Ignite
>  Issue Type: Improvement
>  Components: jdbc
>Reporter: Stanislav Lukyanov
>Assignee: Suraj Singh
>Priority: Major
>  Labels: newbie
>
> The JDBC driver reports the properties it supports via the getPropertyInfo 
> method. It currently reports the property names as simple strings, like 
> "enforceJoinOrder". However, when the properties are processed on connect, 
> they are looked up with the prefix "ignite.jdbc", e.g. 
> "ignite.jdbc.enforceJoinOrder".
> Because of this, UI tools like DBeaver can't properly pass the properties to 
> Ignite. For example, when "enforceJoinOrder" is set to true in the "Connection 
> settings" -> "Driver properties" menu of DBeaver, it has no effect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10078) Node failure during concurrent partition updates may cause partition desync between primary and backup.

2019-05-21 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844970#comment-16844970
 ] 

Ignite TC Bot commented on IGNITE-10078:


{panel:title=-- Run :: All: Possible 
Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Cache 6{color} [[tests 0 TIMEOUT , Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3894733]]

{color:#d04437}MVCC Cache 7{color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=3894770]]
* IgniteCacheMvccTestSuite7: 
GridCacheRebalancingWithAsyncClearingTest.testCorrectRebalancingCurrentlyRentingPartitions

{color:#d04437}[Check Code Style]{color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3894778]]

{color:#d04437}PDS (Indexing){color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=3894744]]
* IgnitePdsWithIndexingCoreTestSuite: 
IgniteLogicalRecoveryTest.testRecoveryOnJoinToActiveCluster

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3894779buildTypeId=IgniteTests24Java8_RunAll]

> Node failure during concurrent partition updates may cause partition desync 
> between primary and backup.
> ---
>
> Key: IGNITE-10078
> URL: https://issues.apache.org/jira/browse/IGNITE-10078
> Project: Ignite
>  Issue Type: Bug
>Reporter: Alexei Scherbakov
>Assignee: Alexei Scherbakov
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> This is possible if some updates are not written to WAL before node failure. 
> They will not be applied by rebalancing, because the partition counters are 
> the same, in the following scenario:
> 1. Start a grid with 3 nodes and 2 backups.
> 2. Preload some data to partition P.
> 3. Start two concurrent transactions, each writing a single key to the same 
> partition P; the keys are different:
> {noformat}
> try(Transaction tx = client.transactions().txStart(PESSIMISTIC, 
> REPEATABLE_READ, 0, 1)) {
>   client.cache(DEFAULT_CACHE_NAME).put(k, v);
>   tx.commit();
> }
> {noformat}
> 4. Order the updates on the backup so that the update with the max partition 
> counter is written to WAL, while the update with the lesser partition counter 
> fails because the failure handler (FH) is triggered before it's added to WAL.
> 5. Return the failed node to the grid and observe that no rebalancing happens 
> due to identical partition counters.
> Possible solution: detect gaps in update counters on recovery and force a 
> rebalance from a node without gaps if any are detected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11757) Missed partitions during rebalancing when new blank node joins

2019-05-21 Thread Vladislav Pyatkov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844956#comment-16844956
 ] 

Vladislav Pyatkov commented on IGNITE-11757:


Hi [~ilyak], I think that if we wait until the topology distribution has changed, 
then missed partitions should not appear.
Look at my PR:
https://github.com/apache/ignite/pull/6560

> Missed partitions during rebalancing when new blank node joins
> --
>
> Key: IGNITE-11757
> URL: https://issues.apache.org/jira/browse/IGNITE-11757
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Ilya Kasnacheev
>Assignee: Ivan Rakov
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Please take a look at the newly added test
> GridCachePartitionedSupplyEventsSelfTest.testSupplyEvents.
> There's logging of missed partitions during rebalancing, and as you can see, 
> partitions are missed even when a new node joins a stable topology, with no 
> nodes leaving.
> The expected behavior is that no partitions are missed in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11861) GridEventConsumeSelfTest.testMultithreadedWithNodeRestart fails on TC

2019-05-21 Thread Ivan Bessonov (JIRA)
Ivan Bessonov created IGNITE-11861:
--

 Summary: GridEventConsumeSelfTest.testMultithreadedWithNodeRestart 
fails on TC
 Key: IGNITE-11861
 URL: https://issues.apache.org/jira/browse/IGNITE-11861
 Project: Ignite
  Issue Type: Test
Reporter: Ivan Bessonov
Assignee: Ivan Bessonov


[https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=4911099288413140059=testDetails]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11860) Temporarily peg SSL version to TLSv1.2 to fix Java 11/12

2019-05-21 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844921#comment-16844921
 ] 

Ignite TC Bot commented on IGNITE-11860:


{panel:title=-- Run :: All: Possible 
Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Interceptor Cache (Full API Config Variations / Basic)*{color} 
[[tests 1|https://ci.ignite.apache.org/viewLog.html?buildId=3903412]]
* InterceptorCacheConfigVariationsFullApiTestSuite: TestSuite$1.warning

{color:#d04437}Cache 6{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903461]]
* IgniteCacheTestSuite6: TestSuite$1.warning

{color:#d04437}Platform C++ (Linux)*{color} [[tests 0 Exit Code , Failure on 
metric |https://ci.ignite.apache.org/viewLog.html?buildId=3903432]]

{color:#d04437}Cache 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903457]]
* IgniteCacheTestSuite2: TestSuite$1.warning

{color:#d04437}Cache (Restarts) 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903454]]
* IgniteCacheRestartTestSuite2: TestSuite$1.warning

{color:#d04437}ZooKeeper (Discovery) 1{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903430]]
* ZookeeperDiscoverySpiTestSuite1: TestSuite$1.warning

{color:#d04437}PDS (Compatibility){color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903470]]
* IgniteCompatibilityBasicTestSuite: TestSuite$1.warning

{color:#d04437}MVCC Queries{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903441]]
* IgniteCacheMvccSqlTestSuite: TestSuite$1.warning

{color:#d04437}SPI{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903423]]
* IgniteSpiTestSuite: TestSuite$1.warning

{color:#d04437}ZooKeeper (Discovery) 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903431]]
* ZookeeperDiscoverySpiTestSuite2: TestSuite$1.warning

{color:#d04437}MVCC Cache{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903439]]
* IgniteCacheMvccTestSuite: TestSuite$1.warning

{color:#d04437}PDS 4{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903477]]
* IgnitePdsTestSuite4: TestSuite$1.warning

{color:#d04437}Data Structures{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903468]]
* IgniteCacheDataStructuresSelfTestSuite: TestSuite$1.warning

{color:#d04437}Queries 1{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903484]]
* IgniteBinaryCacheQueryTestSuite: TestSuite$1.warning

{color:#d04437}Cache (Expiry Policy){color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903446]]
* IgniteCacheExpiryPolicyTestSuite: TestSuite$1.warning

{color:#d04437}RDD{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903417]]
* IgniteRDDTestSuite: TestSuite$1.warning

{color:#d04437}Cache 7{color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=3903462]]
* IgniteCacheWithIndexingAndPersistenceTestSuite: TestSuite$1.warning
* IgniteCacheTestSuite7: TestSuite$1.warning

{color:#d04437}Queries 2{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903418]]
* IgniteBinaryCacheQueryTestSuite2: TestSuite$1.warning

{color:#d04437}Start Nodes{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903422]]
* org.apache.ignite.internal.IgniteStartStopRestartTestSuite.initializationError

{color:#d04437}Streamers{color} [[tests 
9|https://ci.ignite.apache.org/viewLog.html?buildId=3903421]]
* IgniteStreamSelfTestSuite: TestSuite$1.warning
* IgniteJmsStreamerTestSuite: TestSuite$1.warning
* IgniteKafkaStreamerSelfTestSuite: TestSuite$1.warning
* IgniteZeroMqStreamerTestSuite: TestSuite$1.warning
* FlinkIgniteSinkSelfTestSuite: TestSuite$1.warning
* IgniteMqttStreamerTestSuite: TestSuite$1.warning
* IgniteTwitterStreamerTestSuite: TestSuite$1.warning
* IgniteStormStreamerSelfTestSuite: TestSuite$1.warning
* IgniteCamelStreamerTestSuite: TestSuite$1.warning

{color:#d04437}Cache (Restarts) 1{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903453]]
* IgniteCacheRestartTestSuite: TestSuite$1.warning

{color:#d04437}Cache 1{color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=3903456]]
* IgniteBinaryCacheTestSuite: TestSuite$1.warning
* IgniteRestHandlerTestSuite: TestSuite$1.warning

{color:#d04437}JDBC Driver{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903415]]
* IgniteJdbcDriverTestSuite: TestSuite$1.warning

{color:#d04437}Cache 8{color} [[tests 
1|https://ci.ignite.apache.org/viewLog.html?buildId=3903463]]
* IgniteCacheTestSuite8: TestSuite$1.warning

{color:#d04437}PDS (Indexing){color} [[tests 
2|https://ci.ignite.apache.org/viewLog.html?buildId=3903472]]
* IgnitePdsWithIndexingCoreTestSuite: TestSuite$1.warning
* IgnitePdsWithIndexingTestSuite: TestSuite$1.warning

{color:#d04437}PDS 2{color} [[tests 

[jira] [Issue Comment Deleted] (IGNITE-10663) Implement cache mode allows reads with consistency check and fix

2019-05-21 Thread Anton Vinogradov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-10663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anton Vinogradov updated IGNITE-10663:
--
Comment: was deleted

(was: {panel:title=-- Run :: All: Possible 
Blockers|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}Thin client: Node.js{color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=3496581]]

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3491169buildTypeId=IgniteTests24Java8_RunAll])

> Implement cache mode allows reads with consistency check and fix
> 
>
> Key: IGNITE-10663
> URL: https://issues.apache.org/jira/browse/IGNITE-10663
> Project: Ignite
>  Issue Type: Task
>Reporter: Anton Vinogradov
>Assignee: Anton Vinogradov
>Priority: Major
>  Labels: iep-31
> Fix For: 2.8
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> The main idea is to provide a special "read from cache" mode which will read a 
> value from the primary and all backups and check that the values are the same.
> In case the values differ, they should be fixed according to the appropriate 
> strategy.
> ToDo list:
> 1) {{cache.withConsistency().get(key)}} should guarantee that values will be 
> checked across the topology and fixed if necessary.
> 2) The LWW (Last Write Wins) strategy should be used for validation.
> 3) Since LWW or any other strategy does not guarantee that the correct value 
> will be chosen, we have to record an event that contains all values and the 
> chosen one.
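A usage sketch of the mode proposed in the ToDo list above. withConsistency() is the API name used in this ticket and may change in the final implementation (it is not available in released Ignite versions); the cache name and key are illustrative:

{code:java}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class ConsistencyReadExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // The proxy returned by withConsistency() is expected to read the value
            // from the primary and all backups, compare them, repair mismatches
            // using the LWW strategy and record an event with all observed values
            // and the chosen one.
            IgniteCache<Integer, String> cache =
                ignite.<Integer, String>getOrCreateCache("my-cache").withConsistency();

            System.out.println(cache.get(42));
        }
    }
}
{code}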



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11708) Unable to run tests in IgniteConfigVariationsAbstractTest subclasses

2019-05-21 Thread Ivan Pavlukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844840#comment-16844840
 ] 

Ivan Pavlukhin commented on IGNITE-11708:
-

[~ivanan.fed], after the latest changes everything looks fine. But could you please 
rerun TC, as various configs were not injected properly beforehand?

> Unable to run tests in IgniteConfigVariationsAbstractTest subclasses
> 
>
> Key: IGNITE-11708
> URL: https://issues.apache.org/jira/browse/IGNITE-11708
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: iep30
> Attachments: read_through_eviction_self_test.patch, 
> tx_out_test_fixed.patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> It seems that test classes that extend 
> IgniteConfigVariationsAbstractTest cannot be started with the JUnit 4 @Test 
> annotation.
> It is easy to check: if you throw an exception in any test method, nothing 
> will happen.
> The reason may be the rule chain in the IgniteConfigVariationsAbstractTest 
> class [1]; maybe it breaks the existing test workflow.
> [1] 
> https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/testframework/junits/IgniteConfigVariationsAbstractTest.java#L62



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11858) IgniteClientRejoinTest.testClientsReconnectAfterStart is flaky

2019-05-21 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844830#comment-16844830
 ] 

Ignite TC Bot commented on IGNITE-11858:


{panel:title=-- Run :: All: No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3902843buildTypeId=IgniteTests24Java8_RunAll]

> IgniteClientRejoinTest.testClientsReconnectAfterStart is flaky
> --
>
> Key: IGNITE-11858
> URL: https://issues.apache.org/jira/browse/IGNITE-11858
> Project: Ignite
>  Issue Type: Test
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=594525236246121383=testDetails_IgniteTests24Java8=%3Cdefault%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11860) Temporarily peg SSL version to TLSv1.2 to fix Java 11/12

2019-05-21 Thread Ilya Kasnacheev (JIRA)
Ilya Kasnacheev created IGNITE-11860:


 Summary: Temporarily peg SSL version to TLSv1.2 to fix Java 11/12
 Key: IGNITE-11860
 URL: https://issues.apache.org/jira/browse/IGNITE-11860
 Project: Ignite
  Issue Type: Improvement
  Components: security
Affects Versions: 2.7
Reporter: Ilya Kasnacheev
Assignee: Ilya Kasnacheev
 Fix For: 2.7.5






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11859) CommandHandlerParsingTest#testExperimentalCommandIsDisabled() doesn't work

2019-05-21 Thread Sergey Antonov (JIRA)
Sergey Antonov created IGNITE-11859:
---

 Summary: 
CommandHandlerParsingTest#testExperimentalCommandIsDisabled() doesn't work
 Key: IGNITE-11859
 URL: https://issues.apache.org/jira/browse/IGNITE-11859
 Project: Ignite
  Issue Type: Bug
Reporter: Sergey Antonov
 Fix For: 2.8


{{parseArgs(Arrays.asList(WAL.text(), WAL_PRINT));}} and 
{{parseArgs(Arrays.asList(WAL.text(), WAL_DELETE));}} should throw an 
IllegalArgumentException, but they don't.

h2. How to reproduce:
Replace the test with:

{code:java}
/**
 * Test that experimental command (i.e. WAL command) is disabled by default.
 */
@Test
public void testExperimentalCommandIsDisabled() {
System.clearProperty(IGNITE_ENABLE_EXPERIMENTAL_COMMAND);

GridTestUtils.assertThrows(null, 
()->parseArgs(Arrays.asList(WAL.text(), WAL_PRINT)), 
IllegalArgumentException.class, null);

GridTestUtils.assertThrows(null, 
()->parseArgs(Arrays.asList(WAL.text(), WAL_DELETE)), 
IllegalArgumentException.class, null);
}
{code}




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10654) Report in case of creating index with already existing fields collection.

2019-05-21 Thread Yury Gerzhedovich (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844747#comment-16844747
 ] 

Yury Gerzhedovich commented on IGNITE-10654:


[~zstan], reviewed. Please check my comments on GitHub.

> Report in case of creating index with already existing fields collection.
> -
>
> Key: IGNITE-10654
> URL: https://issues.apache.org/jira/browse/IGNITE-10654
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.7
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Report in the log if a new index is created with an already existing set of 
> fields. For example, a warning needs to be logged here:
> {code:java}
> cache.query(new SqlFieldsQuery("create index \"idx1\" on Val(keyStr, 
> keyLong)"));
> cache.query(new SqlFieldsQuery("create index \"idx3\" on Val(keyStr, 
> keyLong)"));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11592) NPE in case of continuing tx and cache stop operation.

2019-05-21 Thread Stanilovsky Evgeny (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844734#comment-16844734
 ] 

Stanilovsky Evgeny commented on IGNITE-11592:
-

I checked my repro but changed cache.destroy to cache.close. The test fails, but 
unlike the destroy case, no timeout failure is detected, so I think this is not a 
problem for now.

> NPE in case of continuing tx and cache stop operation. 
> ---
>
> Key: IGNITE-11592
> URL: https://issues.apache.org/jira/browse/IGNITE-11592
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Parallel cache stop and tx operations may lead to NPE.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.CacheObjectImpl.finishUnmarshal(CacheObjectImpl.java:129)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.TxEntryValueHolder.unmarshal(TxEntryValueHolder.java:151)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry.unmarshal(IgniteTxEntry.java:964)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.unmarshal(IgniteTxHandler.java:306)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:338)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:154)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.lambda$null$0(IgniteTxHandler.java:580)
>   at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:496)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> I think the correct decision would be to roll back transactions (on the 
> exchange phase) participating in stopped caches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-7883) Cluster can have inconsistent affinity configuration

2019-05-21 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-7883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844718#comment-16844718
 ] 

Ignite TC Bot commented on IGNITE-7883:
---

{panel:title=-- Run :: All: No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=3855564buildTypeId=IgniteTests24Java8_RunAll]

> Cluster can have inconsistent affinity configuration 
> -
>
> Key: IGNITE-7883
> URL: https://issues.apache.org/jira/browse/IGNITE-7883
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.3
>Reporter: Mikhail Cherkasov
>Assignee: Alexand Polyakov
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> A cluster can have an inconsistent affinity configuration if you create two 
> nodes, one with an affinity key configuration and the other without it (in 
> IgniteConfiguration or CacheConfiguration). Both nodes will work fine with no 
> exceptions, but at the same time they will apply different affinity rules to 
> keys:
>  
> {code:java}
> package affinity;
> import org.apache.ignite.Ignite;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.CacheAtomicityMode;
> import org.apache.ignite.cache.CacheKeyConfiguration;
> import org.apache.ignite.cache.CacheMode;
> import org.apache.ignite.cache.affinity.Affinity;
> import org.apache.ignite.configuration.CacheConfiguration;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
> import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;
> import java.util.Arrays;
> public class Test {
> private static int id = 0;
> public static void main(String[] args) {
> Ignite ignite = Ignition.start(getConfiguration(true, false));
> Ignite ignite2 = Ignition.start(getConfiguration(false, false));
> Affinity affinity = ignite.affinity("TEST");
> Affinity affinity2 = ignite2.affinity("TEST");
> for (int i = 0; i < 1_000_000; i++) {
> AKey key = new AKey(i);
> if(affinity.partition(key) != affinity2.partition(key))
> System.out.println("FAILED for: " + key);
> }
> System.out.println("DONE");
> }
> private static IgniteConfiguration getConfiguration(boolean 
> withAffinityCfg, boolean client) {
> IgniteConfiguration cfg = new IgniteConfiguration();
> TcpDiscoveryVmIpFinder finder = new TcpDiscoveryVmIpFinder(true);
> finder.setAddresses(Arrays.asList("localhost:47500..47600"));
> cfg.setClientMode(client);
> cfg.setIgniteInstanceName("test" + id++);
> CacheConfiguration cacheCfg = new CacheConfiguration("TEST");
> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> cacheCfg.setCacheMode(CacheMode.PARTITIONED);
> if(withAffinityCfg) {
> cacheCfg.setKeyConfiguration(new 
> CacheKeyConfiguration("affinity.AKey", "a"));
> }
> cfg.setCacheConfiguration(cacheCfg);
> cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(finder));
> return cfg;
> }
> }
> class AKey {
> int a;
> public AKey(int a) {
> this.a = a;
> }
> @Override public String toString() {
> return "AKey{" +
> "a=" + a +
> '}';
> }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11858) IgniteClientRejoinTest.testClientsReconnectAfterStart is flaky

2019-05-21 Thread Ivan Bessonov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov updated IGNITE-11858:
---
Fix Version/s: 2.8

> IgniteClientRejoinTest.testClientsReconnectAfterStart is flaky
> --
>
> Key: IGNITE-11858
> URL: https://issues.apache.org/jira/browse/IGNITE-11858
> Project: Ignite
>  Issue Type: Test
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
> Fix For: 2.8
>
>
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=594525236246121383=testDetails_IgniteTests24Java8=%3Cdefault%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11858) IgniteClientRejoinTest.testClientsReconnectAfterStart is flaky

2019-05-21 Thread Ivan Bessonov (JIRA)
Ivan Bessonov created IGNITE-11858:
--

 Summary: IgniteClientRejoinTest.testClientsReconnectAfterStart is 
flaky
 Key: IGNITE-11858
 URL: https://issues.apache.org/jira/browse/IGNITE-11858
 Project: Ignite
  Issue Type: Test
Reporter: Ivan Bessonov
Assignee: Ivan Bessonov


[https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=594525236246121383=testDetails_IgniteTests24Java8=%3Cdefault%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11858) IgniteClientRejoinTest.testClientsReconnectAfterStart is flaky

2019-05-21 Thread Ivan Bessonov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Bessonov updated IGNITE-11858:
---
Labels: MakeTeamcityGreenAgain  (was: )

> IgniteClientRejoinTest.testClientsReconnectAfterStart is flaky
> --
>
> Key: IGNITE-11858
> URL: https://issues.apache.org/jira/browse/IGNITE-11858
> Project: Ignite
>  Issue Type: Test
>Reporter: Ivan Bessonov
>Assignee: Ivan Bessonov
>Priority: Major
>  Labels: MakeTeamcityGreenAgain
>
> [https://ci.ignite.apache.org/project.html?projectId=IgniteTests24Java8=594525236246121383=testDetails_IgniteTests24Java8=%3Cdefault%3E]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11592) NPE in case of continuing tx and cache stop operation.

2019-05-21 Thread Stanilovsky Evgeny (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844663#comment-16844663
 ] 

Stanilovsky Evgeny commented on IGNITE-11592:
-

I found similar logic called from 
org.apache.ignite.internal.processors.cache.GridCacheProcessor#closeCache, but 
it is not called from cache.destroy now and can still be an erroneous place for 
the case described above. I will check it.

> NPE in case of continuing tx and cache stop operation. 
> ---
>
> Key: IGNITE-11592
> URL: https://issues.apache.org/jira/browse/IGNITE-11592
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Parallel cache stop and tx operations may lead to NPE.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.CacheObjectImpl.finishUnmarshal(CacheObjectImpl.java:129)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.TxEntryValueHolder.unmarshal(TxEntryValueHolder.java:151)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry.unmarshal(IgniteTxEntry.java:964)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.unmarshal(IgniteTxHandler.java:306)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:338)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:154)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.lambda$null$0(IgniteTxHandler.java:580)
>   at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:496)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> I think the correct decision would be to roll back transactions (on the 
> exchange phase) participating in stopped caches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11592) NPE in case of continuing tx and cache stop operation.

2019-05-21 Thread Stanilovsky Evgeny (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844654#comment-16844654
 ] 

Stanilovsky Evgeny commented on IGNITE-11592:
-

[~EdShangGG] thanks for the review, I will fix all your remarks.
[~Jokser] the case is: cache(s) are destroyed concurrently with active transactions 
on them, see 
org.apache.ignite.internal.processors.cache.transactions.TxOnCachesStopTest#runTxOnCacheStop.
It first runs tx(), blocks GridNearTxPrepareRequest and concurrently starts 
cache.destroy. After calling IgniteTxHandler#processNearTxPrepareRequest0, execution 
gets into waitForExchangeFuture, where the cache can already be destroyed and, for 
example, IgniteTxHandler#prepareNearTx will throw an NPE from 
{noformat}IgniteTxEntry firstWrite = unmarshal(req.writes());{noformat}
Alternatively, in the section with GridCacheProcessor#rollbackCoveredTx we can still 
observe GridNearTxLocal transactions (a simple breakpoint on the test run will show 
this) that we need to roll back before the cache is stopped.


> NPE in case of continuing tx and cache stop operation. 
> ---
>
> Key: IGNITE-11592
> URL: https://issues.apache.org/jira/browse/IGNITE-11592
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Parallel cache stop and tx operations may lead to NPE.
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.CacheObjectImpl.finishUnmarshal(CacheObjectImpl.java:129)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.TxEntryValueHolder.unmarshal(TxEntryValueHolder.java:151)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry.unmarshal(IgniteTxEntry.java:964)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.unmarshal(IgniteTxHandler.java:306)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.prepareNearTx(IgniteTxHandler.java:338)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.processNearTxPrepareRequest0(IgniteTxHandler.java:154)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.lambda$null$0(IgniteTxHandler.java:580)
>   at 
> org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:496)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> I think the correct decision would be to roll back transactions (on the 
> exchange phase) participating in stopped caches.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11857) Investigate performance drop after IGNITE-10078

2019-05-21 Thread Alexei Scherbakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexei Scherbakov updated IGNITE-11857:
---
Description: 
After IGNITE-10078, yardstick tests show a performance drop of up to 8% in some 
scenarios:

* tx-optim-repRead-put-get

* tx-optimistic-put

* tx-putAll

This is partially due to the new update counter implementation, but not only. 
Investigation is required.

  was:
After IGNITE-1078 yardstick tests show performance drop up to 8% in some 
scenarios:

* tx-optim-repRead-put-get

* tx-optimistic-put

* tx-putAll

Partially this is due new update counter implementation, but not only. 
Investigation is required.


> Investigate performance drop after IGNITE-10078
> ---
>
> Key: IGNITE-11857
> URL: https://issues.apache.org/jira/browse/IGNITE-11857
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Priority: Major
>
> After IGNITE-10078, yardstick tests show a performance drop of up to 8% in some 
> scenarios:
> * tx-optim-repRead-put-get
> * tx-optimistic-put
> * tx-putAll
> This is partially due to the new update counter implementation, but not only. 
> Investigation is required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-11857) Investigate performance drop after IGNITE-10078

2019-05-21 Thread Alexei Scherbakov (JIRA)
Alexei Scherbakov created IGNITE-11857:
--

 Summary: Investigate performance drop after IGNITE-10078
 Key: IGNITE-11857
 URL: https://issues.apache.org/jira/browse/IGNITE-11857
 Project: Ignite
  Issue Type: Improvement
Reporter: Alexei Scherbakov


After IGNITE-1078 yardstick tests show performance drop up to 8% in some 
scenarios:

* tx-optim-repRead-put-get

* tx-optimistic-put

* tx-putAll

Partially this is due new update counter implementation, but not only. 
Investigation is required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-10619) Add support files transmission between nodes over connection via CommunicationSpi

2019-05-21 Thread Dmitriy Govorukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844632#comment-16844632
 ] 

Dmitriy Govorukhin commented on IGNITE-10619:
-

I will do it soon.

> Add support files transmission between nodes over connection via 
> CommunicationSpi
> -
>
> Key: IGNITE-10619
> URL: https://issues.apache.org/jira/browse/IGNITE-10619
> Project: Ignite
>  Issue Type: Sub-task
>  Components: persistence
>Reporter: Maxim Muzafarov
>Assignee: Maxim Muzafarov
>Priority: Major
>  Labels: iep-28
>
> The partition preloader must support cache partition file relocation from one 
> cluster node to another (the zero-copy algorithm [1] is assumed to be used by 
> default). To achieve this, the file transfer machinery must be implemented in 
> Apache Ignite over the Communication SPI.
> _CommunicationSpi_
> Ignite's Communication SPI must support:
> * establishing channel connections to a remote node for an arbitrary topic 
> (GridTopic) with a predefined processing policy;
> * listening for incoming channel creation events and registering connection 
> handlers on the particular node;
> * an arbitrary set of channel parameters on connection handshake;
> _FileTransmitProcessor_
> The file transmission manager must support:
> * different approaches to handling incoming data – buffered and direct 
> (the zero-copy approach of FileChannel#transferTo; see the sketch after this 
> list);
> * transferring data in chunks of a predefined size with saving of intermediate 
> results;
> * re-establishing the connection if an error occurs and continuing the file 
> upload/download;
> * limiting connection bandwidth (upload and download) at runtime;
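A minimal sketch of the zero-copy approach of FileChannel#transferTo mentioned in the list above, using plain java.nio: transferTo hands the copy to the OS and avoids intermediate user-space buffers. The file path, host and port are illustrative, and this is not the Ignite implementation:

{code:java}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ZeroCopySendExample {
    public static void main(String[] args) throws IOException {
        Path partFile = Paths.get("part-0.bin"); // Illustrative partition file.

        try (FileChannel src = FileChannel.open(partFile, StandardOpenOption.READ);
             SocketChannel dst = SocketChannel.open(new InetSocketAddress("remote-node", 47100))) {
            long pos = 0;
            long size = src.size();

            // transferTo() may send less than requested, so loop until the whole
            // file has been pushed to the socket.
            while (pos < size)
                pos += src.transferTo(pos, size - pos, dst);
        }
    }
}
{code}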



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (IGNITE-11512) Add counter left partition for index rebuild in CacheGroupMetricsMXBean

2019-05-21 Thread Alexand Polyakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16844621#comment-16844621
 ] 

Alexand Polyakov commented on IGNITE-11512:
---

[~ivan.glukos], questions on the comments:
3. It is intentionally kept out of the visitor to initialize the value before 
creating threads.
6. I don't understand the comment; in fact there are 2 tests: rebuild of indexes 
(indexes are physically removed from disk) and create index (SQL CREATE INDEX).
7. The variable is 
org.apache.ignite.internal.processors.cache.CacheGroupMetricsImpl#indexBuildCountPartitionsLeft
 and it does not require an MXBean, or did you mean something else?

> Add counter left partition for index rebuild in CacheGroupMetricsMXBean
> ---
>
> Key: IGNITE-11512
> URL: https://issues.apache.org/jira/browse/IGNITE-11512
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.7
>Reporter: Alexand Polyakov
>Assignee: Alexand Polyakov
>Priority: Major
>
> Currently, if an index rebuild is running, this can be determined only from 
> CPU load and a thread dump. A "how many partitions are left to index" metric 
> would help determine whether a rebuild is in progress and on which cache, as 
> well as the percentage of completion.
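A hedged sketch of how such a metric could be read through standard JMX once exposed via CacheGroupMetricsMXBean. The attribute name IndexBuildCountPartitionsLeft and the ObjectName pattern are assumptions for illustration, not the final API:

{code:java}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class IndexRebuildProgressExample {
    public static void main(String[] args) throws Exception {
        MBeanServer srv = ManagementFactory.getPlatformMBeanServer();

        // Illustrative ObjectName; the real Ignite bean name includes the
        // instance and cache group parts.
        ObjectName name = new ObjectName(
            "org.apache:group=\"Cache groups\",name=\"myCacheGroup\"");

        // Assumed attribute name, following the field mentioned in the review
        // thread of this ticket.
        Long left = (Long)srv.getAttribute(name, "IndexBuildCountPartitionsLeft");

        System.out.println("Partitions left to index: " + left);
    }
}
{code}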



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (IGNITE-11512) Add counter left partition for index rebuild in CacheGroupMetricsMXBean

2019-05-21 Thread Alexand Polyakov (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexand Polyakov updated IGNITE-11512:
--
Reviewer: Ivan Rakov

> Add counter left partition for index rebuild in CacheGroupMetricsMXBean
> ---
>
> Key: IGNITE-11512
> URL: https://issues.apache.org/jira/browse/IGNITE-11512
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.7
>Reporter: Alexand Polyakov
>Assignee: Alexand Polyakov
>Priority: Major
>
> Currently, if an index rebuild is running, this can be determined only from 
> CPU load and a thread dump. A "how many partitions are left to index" metric 
> would help determine whether a rebuild is in progress and on which cache, as 
> well as the percentage of completion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)