[jira] [Commented] (IGNITE-10973) Migrate example module tests from Junit 4 to 5

2019-08-01 Thread Ivan Pavlukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898289#comment-16898289
 ] 

Ivan Pavlukhin commented on IGNITE-10973:
-

[~ivanan.fed], good news. Am I getting it right that the PR will be ready once 
the problem with the .NET tests is solved?

> Migrate example module tests from Junit 4 to 5
> --
>
> Key: IGNITE-10973
> URL: https://issues.apache.org/jira/browse/IGNITE-10973
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Ivan Fedotov
>Assignee: Ivan Fedotov
>Priority: Major
>  Labels: iep-30
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> For more information, refer to the parent task.
> Migrate from JUnit 4 to 5 in the example module.
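For reference, the mechanical part of such a migration usually looks like the following. This is a generic JUnit 4 to JUnit 5 illustration, not an actual test from the Ignite examples module.

{code:java}
// JUnit 4 (before):
//   import org.junit.Test;
//   import static org.junit.Assert.assertEquals;
//
// JUnit 5 / Jupiter (after). The test class and assertion below are generic
// placeholders, not taken from the Ignite examples module.
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class ExampleModuleSelfTest {
    @Test
    void simpleAssertion() {
        // JUnit 5 drops the requirement for public test methods/classes.
        assertEquals(4, 2 + 2);
    }
}
{code}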



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (IGNITE-11977) Data streamer pool MXBean is registered as ThreadPoolMXBean instead of StripedExecutorMXBean

2019-08-01 Thread Ruslan Kamashev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruslan Kamashev updated IGNITE-11977:
-
Affects Version/s: 2.7

> Data streamer pool MXBean is registered as ThreadPoolMXBean instead of 
> StripedExecutorMXBean
> 
>
> Key: IGNITE-11977
> URL: https://issues.apache.org/jira/browse/IGNITE-11977
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Stanislav Lukyanov
>Assignee: Ruslan Kamashev
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The data streamer pool is registered with a ThreadPoolMXBean while it is 
> actually a StripedExecutor and can use a StripedExecutorMXBean.
> We need to change the registration in the IgniteKernal code; it should be 
> registered the same way as the striped executor pool.
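As a rough illustration of the intended change (simplified stand-in types below, not Ignite's actual IgniteKernal/adapter classes): the pool should be exposed under an MXBean interface that matches a striped executor, just like the striped executor pool.

{code:java}
import java.lang.management.ManagementFactory;

import javax.management.MBeanServer;
import javax.management.ObjectName;

// Simplified stand-ins for the real Ignite classes; the point is only that the
// data streamer pool should be registered under a striped-executor interface,
// not a generic thread-pool one.
public class StripedPoolMXBeanExample {
    public interface StripedExecutorMXBean {
        int getStripesCount();
    }

    public static class StripedExecutorMXBeanAdapter implements StripedExecutorMXBean {
        @Override public int getStripesCount() {
            return 8; // a real adapter would delegate to the wrapped StripedExecutor
        }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer srv = ManagementFactory.getPlatformMBeanServer();

        // Register the data streamer pool bean with the striped-executor
        // interface, the same way the striped executor pool is exposed.
        srv.registerMBean(new StripedExecutorMXBeanAdapter(),
            new ObjectName("org.apache.example:group=ThreadPools,name=StripedDataStreamerExecutor"));

        System.out.println("Registered StripedDataStreamerExecutor MXBean");
    }
}
{code}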



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (IGNITE-11977) Data streamer pool MXBean is registered as ThreadPoolMXBean instead of StripedExecutorMXBean

2019-08-01 Thread Ruslan Kamashev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ruslan Kamashev updated IGNITE-11977:
-
Fix Version/s: 2.8

> Data streamer pool MXBean is registered as ThreadPoolMXBean instead of 
> StripedExecutorMXBean
> 
>
> Key: IGNITE-11977
> URL: https://issues.apache.org/jira/browse/IGNITE-11977
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.7
>Reporter: Stanislav Lukyanov
>Assignee: Ruslan Kamashev
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The data streamer pool is registered with a ThreadPoolMXBean while it is 
> actually a StripedExecutor and can use a StripedExecutorMXBean.
> We need to change the registration in the IgniteKernal code; it should be 
> registered the same way as the striped executor pool.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-12021) Inserting date from Node.JS to a cache which has Java.SQL.Timestamp

2019-08-01 Thread Alexey Kosenchuk (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898203#comment-16898203
 ] 

Alexey Kosenchuk commented on IGNITE-12021:
---

Please provide snippets of your code where you send and receive data.

> Inserting date from Node.JS to a cache which has Java.SQL.Timestamp
> ---
>
> Key: IGNITE-12021
> URL: https://issues.apache.org/jira/browse/IGNITE-12021
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, thin client
>Affects Versions: 2.7
> Environment: We are in DEV right now. can't proceed to higher 
> environment with this show stopper
>Reporter: Gaurav
>Priority: Blocker
>  Labels: Node.JS, ignite,
>
> I have a cache which has one field with type java.sql.Timestamp.
>  
> From Node.JS I am inserting it as new Date().
> If the cache is empty, the inserts are successful. The issue comes when Java 
> has inserted a few records into this cache (Java inserts java.sql.Timestamp). 
> Now, if I run the Node.JS program which tries to insert, it gives me this error:
>  
> Binary type has different field types [typeName=XYZCacheName, 
> fieldName=updateTime, fieldTypeName1=Timestamp, fieldTypeName2=Date]
>  
> Please help, it has stopped my work totally!
>  
> P.S.: JavaScript's new Date() is itself a timestamp, so the cache should 
> ideally accept it as Timestamp and not Date.
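For reference, a minimal Java-side sketch of how the binary field type gets fixed the first time Java writes a java.sql.Timestamp. The cache, type, and field names are taken from the error message above; using the cache name as the binary type name is an assumption about the reporter's setup.

{code:java}
import java.sql.Timestamp;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;

public class TimestampFieldTypeExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, BinaryObject> cache =
                ignite.getOrCreateCache("XYZCacheName").withKeepBinary();

            // The first write registers 'updateTime' as Timestamp in the binary
            // metadata of this type. A later write of the same field as Date
            // (for example from the Node.JS thin client) then fails with the
            // "Binary type has different field types" error quoted above.
            BinaryObject val = ignite.binary().builder("XYZCacheName")
                .setField("updateTime", new Timestamp(System.currentTimeMillis()))
                .build();

            cache.put(1L, val);
        }
    }
}
{code}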



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-5227) StackOverflowError in GridCacheMapEntry#checkOwnerChanged()

2019-08-01 Thread Ivan Rakov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898197#comment-16898197
 ] 

Ivan Rakov commented on IGNITE-5227:


[~mstepachev], thanks, merged to master.

> StackOverflowError in GridCacheMapEntry#checkOwnerChanged()
> ---
>
> Key: IGNITE-5227
> URL: https://issues.apache.org/jira/browse/IGNITE-5227
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6
>Reporter: Alexey Goncharuk
>Assignee: Stepachev Maksim
>Priority: Critical
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A simple test reproducing this error:
> {code}
> /**
>  * @throws Exception if failed.
>  */
> public void testBatchUnlock() throws Exception {
>startGrid(0);
>grid(0).createCache(new CacheConfiguration<String, Integer>(DEFAULT_CACHE_NAME)
> .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
> try {
> final CountDownLatch releaseLatch = new CountDownLatch(1);
> IgniteInternalFuture fut = GridTestUtils.runAsync(new 
> Callable() {
> @Override public Object call() throws Exception {
> IgniteCache cache = grid(0).cache(null);
> Lock lock = cache.lock("key");
> try {
> lock.lock();
> releaseLatch.await();
> }
> finally {
> lock.unlock();
> }
> return null;
> }
> });
> Map putMap = new LinkedHashMap<>();
> putMap.put("key", "trigger");
> for (int i = 0; i < 10_000; i++)
> putMap.put("key-" + i, "value");
> IgniteCache asyncCache = 
> grid(0).cache(null).withAsync();
> asyncCache.putAll(putMap);
> IgniteFuture resFut = asyncCache.future();
> Thread.sleep(1000);
> releaseLatch.countDown();
> fut.get();
> resFut.get();
> }
> finally {
> stopAllGrids();
> }
> {code}
> We should replace a recursive call with a simple iteration over the linked 
> list.
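A minimal, generic sketch of that transformation (plain Java, not the actual GridCacheMapEntry code): walking the chain with a loop keeps the stack depth constant no matter how long the chain is.

{code:java}
public class IterativeWalkExample {
    static class Node {
        final int val;
        Node next;

        Node(int val) { this.val = val; }
    }

    // Recursive form: each element adds a stack frame, so a long enough chain
    // ends in StackOverflowError.
    static long sumRecursive(Node n) {
        return n == null ? 0 : n.val + sumRecursive(n.next);
    }

    // Iterative form: constant stack depth regardless of chain length.
    static long sumIterative(Node head) {
        long sum = 0;

        for (Node n = head; n != null; n = n.next)
            sum += n.val;

        return sum;
    }

    public static void main(String[] args) {
        Node head = new Node(1);
        head.next = new Node(2);

        System.out.println(sumIterative(head)); // prints 3
    }
}
{code}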



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (IGNITE-11248) H2 Debug Console reports NPE when launched

2019-08-01 Thread Ilya Kasnacheev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-11248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Kasnacheev resolved IGNITE-11248.
--
Resolution: Won't Fix

> H2 Debug Console reports NPE when launched
> --
>
> Key: IGNITE-11248
> URL: https://issues.apache.org/jira/browse/IGNITE-11248
> Project: Ignite
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 2.7
>Reporter: Ilya Kasnacheev
>Priority: Minor
>
> Reliably happens on invocation of 
> IGNITE_H2_DEBUG_CONSOLE=true bin/ignite.sh
> {code}
> Internal error: "java.lang.NullPointerException"
> General error: "java.lang.NullPointerException"; SQL statement:
> SELECT TABLE_CAT, TABLE_SCHEM, TABLE_NAME, TABLE_TYPE, REMARKS, TYPE_CAT, 
> TYPE_SCHEM, TYPE_NAME, SELF_REFERENCING_COL_NAME, REF_GENERATION, SQL FROM 
> (SELECT SYNONYM_CATALOG TABLE_CAT, SYNONYM_SCHEMA TABLE_SCHEM, SYNONYM_NAME 
> as TABLE_NAME, TYPE_NAME AS TABLE_TYPE, REMARKS, TYPE_NAME TYPE_CAT, 
> TYPE_NAME TYPE_SCHEM, TYPE_NAME AS TYPE_NAME, TYPE_NAME 
> SELF_REFERENCING_COL_NAME, TYPE_NAME REF_GENERATION, NULL AS SQL FROM 
> INFORMATION_SCHEMA.SYNONYMS WHERE SYNONYM_CATALOG LIKE ? ESCAPE ? AND 
> SYNONYM_SCHEMA LIKE ? ESCAPE ? AND SYNONYM_NAME LIKE ? ESCAPE ? AND (true)  
> UNION SELECT TABLE_CATALOG TABLE_CAT, TABLE_SCHEMA TABLE_SCHEM, TABLE_NAME, 
> TABLE_TYPE, REMARKS, TYPE_NAME TYPE_CAT, TYPE_NAME TYPE_SCHEM, TYPE_NAME, 
> TYPE_NAME SELF_REFERENCING_COL_NAME, TYPE_NAME REF_GENERATION, SQL FROM 
> INFORMATION_SCHEMA.TABLES WHERE TABLE_CATALOG LIKE ? ESCAPE ? AND 
> TABLE_SCHEMA LIKE ? ESCAPE ? AND TABLE_NAME LIKE ? ESCAPE ? AND (TABLE_TYPE 
> IN(?, ?, ?, ?, ?, ?, ?)) ) ORDER BY TABLE_TYPE, TABLE_SCHEM, TABLE_NAME 
> [5-197] HY000/5
> {code}
> in the browser window that is opened.
> Reportedly, this used to work just fine on 2.6.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (IGNITE-11986) Failed to deserialize object with given class loader: sun.misc.Launcher$AppClassLoader

2019-08-01 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/IGNITE-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Denis Giguère resolved IGNITE-11986.
-
Resolution: Information Provided

The information provided about the Geronimo JCache spec vs. the original JCache 
spec, along with the linkage guidance, resolves this issue.

Thanks!

>  Failed to deserialize object with given class loader: 
> sun.misc.Launcher$AppClassLoader
> ---
>
> Key: IGNITE-11986
> URL: https://issues.apache.org/jira/browse/IGNITE-11986
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: mas
> Environment: Ignite master: commit 
> {{1a2c35caf805769ca4e3f169d7a5c72c31147e41}}
> spark 2.4.3
> hadoop 3.1.2
> OpenJDK 8
> scala 2.11.12
>  
>Reporter: Jean-Denis Giguère
>Priority: Major
> Attachments: server-not-ok.log, spark.log
>
>
> h1. Current situation
> Trying to connect to a remote Ignite cluster from {{spark-submit}}, I get the 
> error message given in the attached error log.
> See the code snippet here: 
> https://github.com/jdenisgiguere/ignite_failed_unmarshal_discovery_data
> h2. Expected situation
> We should be able to connect to a remote Ignite cluster even when using 
> Hadoop 3.1.x. 
> h3. Steps to reproduce
> See: [https://github.com/jdenisgiguere/ignite_failed_unmarshal_discovery_data]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-11924) [IEP-35] Migrate TransactionMetricsMxBean

2019-08-01 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898178#comment-16898178
 ] 

Ignite TC Bot commented on IGNITE-11924:


{panel:title=Branch: [pull/6733/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4445967&buildTypeId=IgniteTests24Java8_RunAll]

> [IEP-35] Migrate TransactionMetricsMxBean
> -
>
> Key: IGNITE-11924
> URL: https://issues.apache.org/jira/browse/IGNITE-11924
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: IEP-35
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After the merge of IGNITE-11848 we should migrate `TransactionMetricsMxBean` to 
> the new metric framework.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-11584) Implement batch insertion of new cache entries in FreeList to improve rebalancing

2019-08-01 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898172#comment-16898172
 ] 

Pavel Pereslegin commented on IGNITE-11584:
---

[~DmitriyGovorukhin], [~agoncharuk],
please take a look at these changes.

> Implement batch insertion of new cache entries in FreeList to improve 
> rebalancing
> -
>
> Key: IGNITE-11584
> URL: https://issues.apache.org/jira/browse/IGNITE-11584
> Project: Ignite
>  Issue Type: Sub-task
>Affects Versions: 2.7
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: iep-32
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Main goals:
>  * Implement a batch insert operation in FreeList - insert several data rows 
> at once
>  * Use batch insertion in the preloader
>   
> Implementation notes:
>  # The preloader cannot lock multiple cache entries at once, because this may 
> lead to a deadlock with concurrent batch updates (see the small illustration 
> after this list). Therefore, it pre-creates a batch of data rows in the page 
> memory and then sequentially initializes the cache entries one by one.
>  # Batch writing of data rows into data pages uses the free list as usual, 
> because other approaches increase memory fragmentation (for example, using 
> only "reuse" or "most free" buckets).
>  # The eviction tracker assumes that only data pages with "heads" of fragmented 
> data rows are tracked, so all other fragments of a large data row should be 
> written to separate data pages (without other data rows, which may cause page 
> tracking).
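A small, generic illustration of the hazard mentioned in note 1 (plain Java locks, not Ignite code). Run together, the two threads can deadlock, which is what pre-creating rows and then initializing entries one by one avoids.

{code:java}
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderingExample {
    public static void main(String[] args) {
        ReentrantLock entryA = new ReentrantLock();
        ReentrantLock entryB = new ReentrantLock();

        // Thread 1 locks A then B, thread 2 locks B then A: with unlucky timing
        // each holds the lock the other needs, and the program hangs.
        new Thread(() -> lockBoth(entryA, entryB)).start();
        new Thread(() -> lockBoth(entryB, entryA)).start();
    }

    static void lockBoth(ReentrantLock first, ReentrantLock second) {
        first.lock();
        try {
            sleep(100); // widen the window so the bad interleaving is easy to hit
            second.lock();
            second.unlock();
        }
        finally {
            first.unlock();
        }
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
{code}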



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-12027) NPE on try to read the MinimumNumberOfPartitionCopies metric.

2019-08-01 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898171#comment-16898171
 ] 

Ignite TC Bot commented on IGNITE-12027:


{panel:title=Branch: [pull/6738/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4446332&buildTypeId=IgniteTests24Java8_RunAll]

> NPE on try to read the MinimumNumberOfPartitionCopies metric.
> -
>
> Key: IGNITE-12027
> URL: https://issues.apache.org/jira/browse/IGNITE-12027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: IEP-35
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> NPE when trying to read the MinimumNumberOfPartitionCopies metric before the 
> node starts.
> Details:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.CacheGroupMetricsImpl.numberOfPartitionCopies(CacheGroupMetricsImpl.java:218)
>   at 
> org.apache.ignite.internal.processors.cache.CacheGroupMetricsImpl.getMinimumNumberOfPartitionCopies(CacheGroupMetricsImpl.java:232)
>   at 
> org.apache.ignite.internal.util.lang.GridFunc.lambda$nonThrowableSupplier$2(GridFunc.java:3302)
>   at 
> org.apache.ignite.internal.processors.metric.impl.IntGauge.value(IntGauge.java:45)
>   at 
> org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi.lambda$null$5(OpenCensusMetricExporterSpi.java:152)
>   at java.lang.Iterable.forEach(Iterable.java:75)
>   at 
> org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi.lambda$export$6(OpenCensusMetricExporterSpi.java:141)
>   at java.lang.Iterable.forEach(Iterable.java:75)
>   at 
> org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi.export(OpenCensusMetricExporterSpi.java:137)
>   at 
> org.apache.ignite.internal.processors.metric.PushMetricsExporterAdapter.lambda$spiStart$0(PushMetricsExporterAdapter.java:57)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Reason: {{GridDhtPartitionFullMap partFullMap = 
> ctx.topology().partitionMap(false);}} is null.
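A generic sketch of the fix direction (not the actual CacheGroupMetricsImpl patch): a gauge supplier should tolerate the state it reads being unavailable before the node is fully started. The sentinel value below is hypothetical.

{code:java}
import java.util.function.IntSupplier;

public class SafeGaugeExample {
    // Stands in for ctx.topology().partitionMap(false), which is null before start.
    static volatile Object partitionMap;

    static final IntSupplier minPartitionCopies = () -> {
        Object map = partitionMap;

        if (map == null)
            return -1; // hypothetical "not available yet" value instead of an NPE

        return 2;      // the real computation would derive the value from the map
    };

    public static void main(String[] args) {
        System.out.println(minPartitionCopies.getAsInt()); // -1 before init

        partitionMap = new Object();

        System.out.println(minPartitionCopies.getAsInt()); // 2 after init
    }
}
{code}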



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-12028) [IEP-35] HitRateMetric should provide rateTimeInterval value to metrics exporter

2019-08-01 Thread Amelchev Nikita (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898152#comment-16898152
 ] 

Amelchev Nikita commented on IGNITE-12028:
--

[~NIzhikov], Hi, could you take a look, please?

> [IEP-35] HitRateMetric should provide rateTimeInterval value to metrics 
> exporter
> 
>
> Key: IGNITE-12028
> URL: https://issues.apache.org/jira/browse/IGNITE-12028
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Gura
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: IEP-35
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{HitRateMetric}} allows getting only the counter value, while it would also 
> be useful to get {{rateTimeInterval}} in order to export this value as part 
> of the metric name. 
> For example, look at the cache metric {{RebalancingKeysRate}}. The value of 
> this measurement could be exported as something like 
> {{cache.<cacheName>.RebalancingKeysRate.<rateTimeInterval> = <value>}}.
> So {{HitRateMetric}} should implement the {{ObjectMetric}} interface instead 
> of the {{LongMetric}} interface.
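A small, generic sketch of the idea; the HitRateValue holder and the names below are hypothetical, not the committed Ignite API. The point is that exposing the interval alongside the counter lets an exporter build the metric name described above.

{code:java}
public class HitRateValueExample {
    // Hypothetical composite value an ObjectMetric-style HitRateMetric could expose.
    static final class HitRateValue {
        final long hits;             // current counter value
        final long rateTimeInterval; // configured interval, in milliseconds

        HitRateValue(long hits, long rateTimeInterval) {
            this.hits = hits;
            this.rateTimeInterval = rateTimeInterval;
        }
    }

    public static void main(String[] args) {
        HitRateValue v = new HitRateValue(1234, 60_000);

        // The exporter can then build a name of the shape described above.
        System.out.println("cache.myCache.RebalancingKeysRate." + v.rateTimeInterval + " = " + v.hits);
    }
}
{code}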



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (IGNITE-12027) NPE on try to read the MinimumNumberOfPartitionCopies metric.

2019-08-01 Thread Amelchev Nikita (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita updated IGNITE-12027:
-
Labels: IEP-35  (was: )

> NPE on try to read the MinimumNumberOfPartitionCopies metric.
> -
>
> Key: IGNITE-12027
> URL: https://issues.apache.org/jira/browse/IGNITE-12027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: IEP-35
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> NPE when trying to read the MinimumNumberOfPartitionCopies metric before the 
> node starts.
> Details:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.CacheGroupMetricsImpl.numberOfPartitionCopies(CacheGroupMetricsImpl.java:218)
>   at 
> org.apache.ignite.internal.processors.cache.CacheGroupMetricsImpl.getMinimumNumberOfPartitionCopies(CacheGroupMetricsImpl.java:232)
>   at 
> org.apache.ignite.internal.util.lang.GridFunc.lambda$nonThrowableSupplier$2(GridFunc.java:3302)
>   at 
> org.apache.ignite.internal.processors.metric.impl.IntGauge.value(IntGauge.java:45)
>   at 
> org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi.lambda$null$5(OpenCensusMetricExporterSpi.java:152)
>   at java.lang.Iterable.forEach(Iterable.java:75)
>   at 
> org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi.lambda$export$6(OpenCensusMetricExporterSpi.java:141)
>   at java.lang.Iterable.forEach(Iterable.java:75)
>   at 
> org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi.export(OpenCensusMetricExporterSpi.java:137)
>   at 
> org.apache.ignite.internal.processors.metric.PushMetricsExporterAdapter.lambda$spiStart$0(PushMetricsExporterAdapter.java:57)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Reason: {{GridDhtPartitionFullMap partFullMap = 
> ctx.topology().partitionMap(false);}} is null.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (IGNITE-12027) NPE on try to read the MinimumNumberOfPartitionCopies metric.

2019-08-01 Thread Amelchev Nikita (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita updated IGNITE-12027:
-
Fix Version/s: 2.8

> NPE on try to read the MinimumNumberOfPartitionCopies metric.
> -
>
> Key: IGNITE-12027
> URL: https://issues.apache.org/jira/browse/IGNITE-12027
> Project: Ignite
>  Issue Type: Bug
>Reporter: Amelchev Nikita
>Assignee: Amelchev Nikita
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> NPE when trying to read the MinimumNumberOfPartitionCopies metric before the 
> node starts.
> Details:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.CacheGroupMetricsImpl.numberOfPartitionCopies(CacheGroupMetricsImpl.java:218)
>   at 
> org.apache.ignite.internal.processors.cache.CacheGroupMetricsImpl.getMinimumNumberOfPartitionCopies(CacheGroupMetricsImpl.java:232)
>   at 
> org.apache.ignite.internal.util.lang.GridFunc.lambda$nonThrowableSupplier$2(GridFunc.java:3302)
>   at 
> org.apache.ignite.internal.processors.metric.impl.IntGauge.value(IntGauge.java:45)
>   at 
> org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi.lambda$null$5(OpenCensusMetricExporterSpi.java:152)
>   at java.lang.Iterable.forEach(Iterable.java:75)
>   at 
> org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi.lambda$export$6(OpenCensusMetricExporterSpi.java:141)
>   at java.lang.Iterable.forEach(Iterable.java:75)
>   at 
> org.apache.ignite.spi.metric.opencensus.OpenCensusMetricExporterSpi.export(OpenCensusMetricExporterSpi.java:137)
>   at 
> org.apache.ignite.internal.processors.metric.PushMetricsExporterAdapter.lambda$spiStart$0(PushMetricsExporterAdapter.java:57)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Reason: {{GridDhtPartitionFullMap partFullMap = 
> ctx.topology().partitionMap(false);}} is null.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-11584) Implement batch insertion of new cache entries in FreeList to improve rebalancing

2019-08-01 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898130#comment-16898130
 ] 

Ignite TC Bot commented on IGNITE-11584:


{panel:title=Branch: [pull/6364/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *--> Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4445486&buildTypeId=IgniteTests24Java8_RunAll]

> Implement batch insertion of new cache entries in FreeList to improve 
> rebalancing
> -
>
> Key: IGNITE-11584
> URL: https://issues.apache.org/jira/browse/IGNITE-11584
> Project: Ignite
>  Issue Type: Sub-task
>Affects Versions: 2.7
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: iep-32
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Main goals:
>  * Implement a batch insert operation in FreeList - insert several data rows 
> at once
>  * Use batch insertion in the preloader
>   
> Implementation notes:
>  # The preloader cannot lock multiple cache entries at once, because this may 
> lead to a deadlock with concurrent batch updates. Therefore, it pre-creates 
> a batch of data rows in the page memory and then sequentially initializes the 
> cache entries one by one.
>  # Batch writing of data rows into data pages uses the free list as usual, 
> because other approaches increase memory fragmentation (for example, using 
> only "reuse" or "most free" buckets).
>  # The eviction tracker assumes that only data pages with "heads" of fragmented 
> data rows are tracked, so all other fragments of a large data row should be 
> written to separate data pages (without other data rows, which may cause page 
> tracking).



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Assigned] (IGNITE-12028) [IEP-35] HitRateMetric should provide rateTimeInterval value to metrics exporter

2019-08-01 Thread Amelchev Nikita (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amelchev Nikita reassigned IGNITE-12028:


Assignee: Amelchev Nikita

> [IEP-35] HitRateMetric should provide rateTimeInterval value to metrics 
> exporter
> 
>
> Key: IGNITE-12028
> URL: https://issues.apache.org/jira/browse/IGNITE-12028
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Andrey Gura
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: IEP-35
> Fix For: 2.8
>
>
> {{HitRateMetric}} allows getting only the counter value, while it would also 
> be useful to get {{rateTimeInterval}} in order to export this value as part 
> of the metric name. 
> For example, look at the cache metric {{RebalancingKeysRate}}. The value of 
> this measurement could be exported as something like 
> {{cache.<cacheName>.RebalancingKeysRate.<rateTimeInterval> = <value>}}.
> So {{HitRateMetric}} should implement the {{ObjectMetric}} interface instead 
> of the {{LongMetric}} interface.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (IGNITE-12021) Inserting date from Node.JS to a cache which has Java.SQL.Timestamp

2019-08-01 Thread Gaurav (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897953#comment-16897953
 ] 

Gaurav edited comment on IGNITE-12021 at 8/1/19 2:03 PM:
-

Tried that. It didn't work. Please advise.

While doing this I made one more observation.

I have used putAll and the data passed is an array of cache entries. After 
inserting, when I try to read using cache.Get(), I get null.

If I insert using cache.put(new keyObj(), new valObj()), I can read it from 
cache.Get().

What am I missing when inserting through a cache entry?

Thanks,

Gaurav


was (Author: g21wadhwa):
Tried that. It didn't work. Please advise.

While doing this I got one more observation. 

I have used putAll and the data passes is array of cache entry. After inserting 
when I try to read using cache.Get() , I get Null.

 

If I insert using cache.put(new keyObj(), new valObj()) , I can read it from 
cache.Get()

 

What am I missing when inserting through cache entry???

 

Also, when I run the scan query in Node.JS, I get all the records inserted by 
JAVA and not the ones which I inserted from Node.JS

 

Thanks,

Gaurav

> Inserting date from Node.JS to a cache which has Java.SQL.Timestamp
> ---
>
> Key: IGNITE-12021
> URL: https://issues.apache.org/jira/browse/IGNITE-12021
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, thin client
>Affects Versions: 2.7
> Environment: We are in DEV right now. can't proceed to higher 
> environment with this show stopper
>Reporter: Gaurav
>Priority: Blocker
>  Labels: Node.JS, ignite,
>
> I have a cache which has one field with type java.sql.Timestamp.
>  
> From Node.JS I am inserting it as new Date().
> If the cache is empty, the inserts are successful. The issue comes when Java 
> has inserted a few records into this cache (Java inserts java.sql.Timestamp). 
> Now, if I run the Node.JS program which tries to insert, it gives me this error:
>  
> Binary type has different field types [typeName=XYZCacheName, 
> fieldName=updateTime, fieldTypeName1=Timestamp, fieldTypeName2=Date]
>  
> Please help, it has stopped my work totally!
>  
> P.S.: JavaScript's new Date() is itself a timestamp, so the cache should 
> ideally accept it as Timestamp and not Date.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (IGNITE-12021) Inserting date from Node.JS to a cache which has Java.SQL.Timestamp

2019-08-01 Thread Gaurav (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897953#comment-16897953
 ] 

Gaurav edited comment on IGNITE-12021 at 8/1/19 1:26 PM:
-

Tried that. It didn't work. Please advise.

While doing this I made one more observation.

I have used putAll and the data passed is an array of cache entries. After 
inserting, when I try to read using cache.Get(), I get null.

If I insert using cache.put(new keyObj(), new valObj()), I can read it from 
cache.Get().

What am I missing when inserting through a cache entry?

Also, when I run the scan query in Node.JS, I get all the records inserted by 
Java and not the ones which I inserted from Node.JS.

Thanks,

Gaurav


was (Author: g21wadhwa):
Tried that. It didn't work. Please advise.

While doing this I got one more observation. 

I have used putAll and the data passes is array of cache entry. After inserting 
when I try to read using cache.Get() , I get Null.

 

If I insert using cache.put(new keyObj(), new valObj()) , I can read it from 
cache.Get()

 

What am I missing when inserting through cache entry???

 

Thanks,

Gaurav

> Inserting date from Node.JS to a cache which has Java.SQL.Timestamp
> ---
>
> Key: IGNITE-12021
> URL: https://issues.apache.org/jira/browse/IGNITE-12021
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, thin client
>Affects Versions: 2.7
> Environment: We are in DEV right now. can't proceed to higher 
> environment with this show stopper
>Reporter: Gaurav
>Priority: Blocker
>  Labels: Node.JS, ignite,
>
> I have a cache which has one field with type java.sql.Timestamp.
>  
> From Node.JS I am inserting it as new Date().
> If the cache is empty, the inserts are successful. The issue comes when Java 
> has inserted a few records into this cache (Java inserts java.sql.Timestamp). 
> Now, if I run the Node.JS program which tries to insert, it gives me this error:
>  
> Binary type has different field types [typeName=XYZCacheName, 
> fieldName=updateTime, fieldTypeName1=Timestamp, fieldTypeName2=Date]
>  
> Please help, it has stopped my work totally!
>  
> P.S.: JavaScript's new Date() is itself a timestamp, so the cache should 
> ideally accept it as Timestamp and not Date.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-10654) Report in case of creating index with already existing fields collection.

2019-08-01 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-10654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898082#comment-16898082
 ] 

Andrey Gura commented on IGNITE-10654:
--

[~zstan] LGTM. Merged to the master branch. Thanks for the contribution!

> Report in case of creating index with already existing fields collection.
> -
>
> Key: IGNITE-10654
> URL: https://issues.apache.org/jira/browse/IGNITE-10654
> Project: Ignite
>  Issue Type: Improvement
>  Components: sql
>Affects Versions: 2.7
>Reporter: Stanilovsky Evgeny
>Assignee: Stanilovsky Evgeny
>Priority: Major
> Fix For: 2.8
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> Report in the log if a new index is created with an already existing fields 
> collection. For example, we need to log a warning here:
> {code:java}
> cache.query(new SqlFieldsQuery("create index \"idx1\" on Val(keyStr, 
> keyLong)"));
> cache.query(new SqlFieldsQuery("create index \"idx3\" on Val(keyStr, 
> keyLong)"));
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (IGNITE-11927) [IEP-35] Add ability to enable\disable subset of metrics

2019-08-01 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898076#comment-16898076
 ] 

Andrey Gura edited comment on IGNITE-11927 at 8/1/19 1:13 PM:
--

[~NIzhikov] Not exactly in that way. We don't need a NO_OP_METRIC concept 
because we don't disable the metric itself; we disable the whole set of metrics 
that is represented by a metric registry. So the metric registry also should be 
removed. Moreover, the holder itself can contain the logic related to changing 
the metric state. It should look like this:

{code:java}
class MetricHolder {
  boolean enabled;

  AtomicLongMetric m;

  public MetricHolder(GirdKernalContext ctx) {
ctx.monitoring.onDisable("metrics", () -> {
  ctx.metric().remove("registryName");

  m = null;  
 }
);

ctx.monitoring.onEnable("metrics", r -> {
  MetricRegistry r = new MetricRegistry("registryName");

  m = r.longMetric("metric");

  ctx.metric().add("registryName");
 }
);
  }

  public long m() {
assert enabled;

return m;
  }

  public void changeState() {
if (enabled)
  m().increment();
  }
}

class SomeProcessor {
  public SomeProcesor() {
metricHolder = new MetricHolder(ctx);
  }
}
{code}


was (Author: agura):
[~NIzhikov] Not certainly in that way. WE don't need NO_OP_METRIC concept 
because we don't disable metric itself, we disable whole set of metrics that is 
represented by metric registry. So metric registry also should be removed. 
Moreover, holder itself can contain logic related with change if metric state. 
It should look like:

{code:java}
class MetricHolder {
  boolean enabled;

  AtomicLongMetric m;

  public MetricHolder(GirdKernalContext ctx) {
ctx.monitoring.onDisable("metrics", () -> {
  ctx.metric().remove("registryName");

  m = null;  
 }
);

ctx.monitoring.onEnable("metrics", r -> {
  MetricRegistry r = new MetricRegistry("registryName");

  m = r.longMetric("metric");

  ctx.metric().add("registryName");
 }
);
  }

  public long m() {
assert enabled;

return m;
  }

  public void changeState() {
if (enabled)
  m().increment();
  }
}

class SomeProcessor {
  public SomeProcesor() {
metricHolder = new MetricHolder(ctx);
  }
}
{java}

> [IEP-35] Add ability to enable\disable subset of metrics
> 
>
> Key: IGNITE-11927
> URL: https://issues.apache.org/jira/browse/IGNITE-11927
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-35
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ignite should be able to:
> * Enable or disable an arbitrary subset of the metrics. User should be able 
> to do it in runtime.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-11927) [IEP-35] Add ability to enable\disable subset of metrics

2019-08-01 Thread Andrey Gura (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898076#comment-16898076
 ] 

Andrey Gura commented on IGNITE-11927:
--

[~NIzhikov] Not exactly in that way. We don't need a NO_OP_METRIC concept 
because we don't disable the metric itself; we disable the whole set of metrics 
that is represented by a metric registry. So the metric registry also should be 
removed. Moreover, the holder itself can contain the logic related to changing 
the metric state. It should look like this:

{code:java}
class MetricHolder {
  boolean enabled;

  AtomicLongMetric m;

  public MetricHolder(GirdKernalContext ctx) {
ctx.monitoring.onDisable("metrics", () -> {
  ctx.metric().remove("registryName");

  m = null;  
 }
);

ctx.monitoring.onEnable("metrics", r -> {
  MetricRegistry r = new MetricRegistry("registryName");

  m = r.longMetric("metric");

  ctx.metric().add("registryName");
 }
);
  }

  public long m() {
assert enabled;

return m;
  }

  public void changeState() {
if (enabled)
  m().increment();
  }
}

class SomeProcessor {
  public SomeProcesor() {
metricHolder = new MetricHolder(ctx);
  }
}
{java}

> [IEP-35] Add ability to enable\disable subset of metrics
> 
>
> Key: IGNITE-11927
> URL: https://issues.apache.org/jira/browse/IGNITE-11927
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Nikolay Izhikov
>Priority: Major
>  Labels: IEP-35
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Ignite should be able to:
> * Enable or disable an arbitrary subset of the metrics. User should be able 
> to do it in runtime.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-11986) Failed to deserialize object with given class loader: sun.misc.Launcher$AppClassLoader

2019-08-01 Thread Vyacheslav Koptilin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898055#comment-16898055
 ] 

Vyacheslav Koptilin commented on IGNITE-11986:
--

Hello [~jean-denis_at_anagraph],

Yes, the Geronimo JCache spec is fully compatible with the original JCache spec 
at the API level, but there are different serial versions on some of the 
serializable classes.

Please make sure that the same version is used everywhere in the cluster. 
In case you're using Maven, replacing the Geronimo dependency with the following 
should help:
{code:java}
<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
    <version>1.0.0</version>
</dependency>
{code}
 

>  Failed to deserialize object with given class loader: 
> sun.misc.Launcher$AppClassLoader
> ---
>
> Key: IGNITE-11986
> URL: https://issues.apache.org/jira/browse/IGNITE-11986
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: mas
> Environment: Ignite master: commit 
> {{1a2c35caf805769ca4e3f169d7a5c72c31147e41}}
> spark 2.4.3
> hadoop 3.1.2
> OpenJDK 8
> scala 2.11.12
>  
>Reporter: Jean-Denis Giguère
>Priority: Major
> Attachments: server-not-ok.log, spark.log
>
>
> h1. Current situation
> Trying to connect to a remote Ignite cluster from {{spark-submit}}, I get the 
> error message given in the attached error log.
> See the code snippet here: 
> https://github.com/jdenisgiguere/ignite_failed_unmarshal_discovery_data
> h2. Expected situation
> We should be able to connect to a remote Ignite cluster even when using 
> Hadoop 3.1.x. 
> h3. Steps to reproduce
> See: [https://github.com/jdenisgiguere/ignite_failed_unmarshal_discovery_data]



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-12033) .Net callbacks from striped pool due to async/await may hang cluster

2019-08-01 Thread Ilya Kasnacheev (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-12033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898053#comment-16898053
 ] 

Ilya Kasnacheev commented on IGNITE-12033:
--

Relevant stack trace:
{code}
"sys-stripe-0-#1" #12 prio=5 os_prio=0 tid=0x55cecf9e2800 nid=0x3358 
waiting on condition [0x7f25b85c7000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2470)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$22.op(GridCacheAdapter.java:2468)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4233)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2468)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2449)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.replace(GridCacheAdapter.java:2896)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.replace(IgniteCacheProxyImpl.java:1294)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.replace(GatewayProtectedCacheProxy.java:1012)
 <-- next cache op started from stripe
at 
org.apache.ignite.internal.processors.platform.cache.PlatformCache.processInStreamOutLong(PlatformCache.java:483)
at 
org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutLong(PlatformTargetProxyImpl.java:67)
at 
org.apache.ignite.internal.processors.platform.callback.PlatformCallbackUtils.inLongOutLong(Native
 Method)
at 
org.apache.ignite.internal.processors.platform.callback.PlatformCallbackGateway.futureNullResult(PlatformCallbackGateway.java:643)
at 
org.apache.ignite.internal.processors.platform.utils.PlatformFutureUtils$1.apply(PlatformFutureUtils.java:208)
at 
org.apache.ignite.internal.processors.platform.utils.PlatformFutureUtils$1.apply(PlatformFutureUtils.java:189)
at 
org.apache.ignite.internal.processors.platform.utils.PlatformFutureUtils$FutureListenable$1.apply(PlatformFutureUtils.java:382)
at 
org.apache.ignite.internal.processors.platform.utils.PlatformFutureUtils$FutureListenable$1.apply(PlatformFutureUtils.java:377)
at 
org.apache.ignite.internal.util.future.IgniteFutureImpl$InternalFutureListener.apply(IgniteFutureImpl.java:215)
 <-- future invocation 
at 
org.apache.ignite.internal.util.future.IgniteFutureImpl$InternalFutureListener.apply(IgniteFutureImpl.java:179)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:385)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:349)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:337)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:497)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:476)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:453)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.applyCallback(GridFutureChainListener.java:78)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:70)
at 
org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:30)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:385)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.unblock(GridFutureAdapter.java:349)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.unblockAll(GridFutureAdapter.java:337)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:497)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:476)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:453)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$AsyncOpRetryFuture$1.apply(GridCacheAdapter.java:5022)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$AsyncOpRetryFuture$1.apply(GridCacheAdapter.java:5017)
at 

[jira] [Created] (IGNITE-12033) .Net callbacks from striped pool due to async/await may hang cluster

2019-08-01 Thread Ilya Kasnacheev (JIRA)
Ilya Kasnacheev created IGNITE-12033:


 Summary: .Net callbacks from striped pool due to async/await may 
hang cluster
 Key: IGNITE-12033
 URL: https://issues.apache.org/jira/browse/IGNITE-12033
 Project: Ignite
  Issue Type: Bug
  Components: cache, platforms
Affects Versions: 2.7.5
Reporter: Ilya Kasnacheev


http://apache-ignite-users.70518.x6.nabble.com/Replace-or-Put-after-PutAsync-causes-Ignite-to-hang-td27871.html#a28051

There's a reproducer project. Long story short, .Net can invoke cache 
operations with future callbacks, which will be invoked from the striped pool. If 
such callbacks use cache operations in turn, those may be scheduled to 
the same stripe and cause a deadlock.

The code is very simple:

{code}
Console.WriteLine("PutAsync");
await cache.PutAsync(1, "Test");

Console.WriteLine("Replace");
cache.Replace(1, "Testing"); // Hangs here

Console.WriteLine("Wait");
await Task.Delay(Timeout.Infinite); 
{code}

async/await should absolutely not allow any client code to be run from stripes.
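A generic illustration of the deadlock pattern (plain Java executors, not Ignite's striped pool): a continuation runs on the same single-threaded "stripe" and then blocks on work that needs that very thread.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class StripeDeadlockExample {
    public static void main(String[] args) {
        ExecutorService stripe = Executors.newSingleThreadExecutor();

        // Stands in for PutAsync: some work completed on the stripe.
        CompletableFuture<Void> putAsync = CompletableFuture.runAsync(() -> { }, stripe);

        // The continuation is executed on the same stripe. Inside it, the
        // blocking "Replace" submits more work to that stripe and waits for it;
        // the only stripe thread is busy waiting, so it can never complete.
        putAsync.thenRunAsync(() ->
            CompletableFuture.runAsync(() -> { }, stripe).join(), stripe);

        // This program intentionally hangs; the remedy is to run user
        // continuations on a separate pool, never on the stripe itself.
    }
}
{code}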



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (IGNITE-12021) Inserting date from Node.JS to a cache which has Java.SQL.Timestamp

2019-08-01 Thread Gaurav (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897953#comment-16897953
 ] 

Gaurav edited comment on IGNITE-12021 at 8/1/19 11:58 AM:
--

Tried that. It didn't work. Please advise.

While doing this I made one more observation.

I have used putAll and the data passed is an array of cache entries. After 
inserting, when I try to read using cache.Get(), I get null.

If I insert using cache.put(new keyObj(), new valObj()), I can read it from 
cache.Get().

What am I missing when inserting through a cache entry?

Thanks,

Gaurav


was (Author: g21wadhwa):
Tried that. I didn't work. While doing this I got one more observation. 

I have used putAll and the data passes is array of cache entry. After inserting 
when I try to read using cache.Get() , I get Null.

 

If I insert using cache.put(new keyObj(), new valObj()) , I can read it from 
cache.Get()

 

What am I missing when inserting through cache entry???

 

Thanks,

Gaurav

> Inserting date from Node.JS to a cache which has Java.SQL.Timestamp
> ---
>
> Key: IGNITE-12021
> URL: https://issues.apache.org/jira/browse/IGNITE-12021
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, thin client
>Affects Versions: 2.7
> Environment: We are in DEV right now. can't proceed to higher 
> environment with this show stopper
>Reporter: Gaurav
>Priority: Blocker
>  Labels: Node.JS, ignite,
>
> I have a cache which has one field with type java.sql.Timestamp.
>  
> From Node.JS I am inserting it as new Date().
> If the cache is empty, the inserts are successful. The issue comes when Java 
> has inserted a few records into this cache (Java inserts java.sql.Timestamp). 
> Now, if I run the Node.JS program which tries to insert, it gives me this error:
>  
> Binary type has different field types [typeName=XYZCacheName, 
> fieldName=updateTime, fieldTypeName1=Timestamp, fieldTypeName2=Date]
>  
> Please help, it has stopped my work totally!
>  
> P.S.: JavaScript's new Date() is itself a timestamp, so the cache should 
> ideally accept it as Timestamp and not Date.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (IGNITE-12032) Server node prints exception when ODBC driver disconnects

2019-08-01 Thread Evgenii Zhuravlev (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgenii Zhuravlev updated IGNITE-12032:
---
Labels: newbie usability  (was: usability)

> Server node prints exception when ODBC driver disconnects
> -
>
> Key: IGNITE-12032
> URL: https://issues.apache.org/jira/browse/IGNITE-12032
> Project: Ignite
>  Issue Type: Bug
>  Components: odbc
>Affects Versions: 2.7.5
>Reporter: Evgenii Zhuravlev
>Priority: Major
>  Labels: newbie, usability
>
> Whenever a process using ODBC clients finishes, this exception is printed in 
> the node logs: 
> {code:java}
> *[07:45:19,559][SEVERE][grid-nio-worker-client-listener-1-#30][ClientListenerProcessor]
>  
> Failed to process selector key [s 
> es=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker 
> [readBuf=java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192 
> ], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, 
> bytesRcvd0=0, bytesSent0=0, select=true, super=GridWo 
> rker [name=grid-nio-worker-client-listener-1, igniteInstanceName=null, 
> finished=false, heartbeatTs=1564289118230, hashCo 
> de=1829856117, interrupted=false, 
> runner=grid-nio-worker-client-listener-1-#30]]], writeBuf=null, 
> readBuf=null, inRecove 
> ry=null, outRecovery=null, super=GridNioSessionImpl 
> [locAddr=/0:0:0:0:0:0:0:1:10800, rmtAddr=/0:0:0:0:0:0:0:1:63697, cre 
> ateTime=1564289116225, closeTime=0, bytesSent=1346, bytesRcvd=588, 
> bytesSent0=0, bytesRcvd0=0, sndSchedTime=156428911623 
> 5, lastSndTime=1564289116235, lastRcvTime=1564289116235, readsPaused=false, 
> filterChain=FilterChain[filters=[GridNioAsyn 
> cNotifyFilter, GridNioCodecFilter [parser=ClientListenerBufferedParser, 
> directMode=false]], accepted=true, markedForClos 
> e=false]]] 
> java.io.IOException: An existing connection was forcibly closed by the 
> remote host 
> at sun.nio.ch.SocketDispatcher.read0(Native Method) 
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) 
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
> at sun.nio.ch.IOUtil.read(IOUtil.java:197) 
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) 
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:11
>  
> 04) 
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNi
>  
> oServer.java:2389) 
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:215
>  
> 6) 
> at 
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1797)
>  
> at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) 
> at java.lang.Thread.run(Thread.java:748)* 
> {code}
> It's absolutely normal behavior when an ODBC client disconnects from the node, 
> so we shouldn't print an exception in the log. We should replace it with 
> something like an INFO message about the ODBC client disconnection.
> Thread from user list: 
> http://apache-ignite-users.70518.x6.nabble.com/exceptions-in-Ignite-node-when-a-thin-client-process-ends-td28970.html



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (IGNITE-12032) Server node prints exception when ODBC driver disconnects

2019-08-01 Thread Evgenii Zhuravlev (JIRA)
Evgenii Zhuravlev created IGNITE-12032:
--

 Summary: Server node prints exception when ODBC driver disconnects
 Key: IGNITE-12032
 URL: https://issues.apache.org/jira/browse/IGNITE-12032
 Project: Ignite
  Issue Type: Bug
  Components: odbc
Affects Versions: 2.7.5
Reporter: Evgenii Zhuravlev


Whenever a process using ODBC clients finishes, this exception is printed in the 
node logs: 

{code:java}
*[07:45:19,559][SEVERE][grid-nio-worker-client-listener-1-#30][ClientListenerProcessor]
 
Failed to process selector key [s 
es=GridSelectorNioSessionImpl [worker=ByteBufferNioClientWorker 
[readBuf=java.nio.HeapByteBuffer[pos=0 lim=8192 cap=8192 
], super=AbstractNioClientWorker [idx=1, bytesRcvd=0, bytesSent=0, 
bytesRcvd0=0, bytesSent0=0, select=true, super=GridWo 
rker [name=grid-nio-worker-client-listener-1, igniteInstanceName=null, 
finished=false, heartbeatTs=1564289118230, hashCo 
de=1829856117, interrupted=false, 
runner=grid-nio-worker-client-listener-1-#30]]], writeBuf=null, 
readBuf=null, inRecove 
ry=null, outRecovery=null, super=GridNioSessionImpl 
[locAddr=/0:0:0:0:0:0:0:1:10800, rmtAddr=/0:0:0:0:0:0:0:1:63697, cre 
ateTime=1564289116225, closeTime=0, bytesSent=1346, bytesRcvd=588, 
bytesSent0=0, bytesRcvd0=0, sndSchedTime=156428911623 
5, lastSndTime=1564289116235, lastRcvTime=1564289116235, readsPaused=false, 
filterChain=FilterChain[filters=[GridNioAsyn 
cNotifyFilter, GridNioCodecFilter [parser=ClientListenerBufferedParser, 
directMode=false]], accepted=true, markedForClos 
e=false]]] 
java.io.IOException: An existing connection was forcibly closed by the 
remote host 
at sun.nio.ch.SocketDispatcher.read0(Native Method) 
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43) 
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223) 
at sun.nio.ch.IOUtil.read(IOUtil.java:197) 
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380) 
at 
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:11
 
04) 
at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNi
 
oServer.java:2389) 
at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:215
 
6) 
at 
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1797)
 
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120) 
at java.lang.Thread.run(Thread.java:748)* 
{code}

It's absolutely normal behavior when an ODBC client disconnects from the node, so 
we shouldn't print an exception in the log. We should replace it with something 
like an INFO message about the ODBC client disconnection.

Thread from user list: 
http://apache-ignite-users.70518.x6.nabble.com/exceptions-in-Ignite-node-when-a-thin-client-process-ends-td28970.html
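A generic sketch of the suggested behavior (not the actual ClientListenerProcessor code; the helper below is hypothetical): treat a reset connection from a disconnecting client as expected and log it at INFO.

{code:java}
import java.io.IOException;
import java.util.logging.Logger;

public class ClientDisconnectLoggingExample {
    private static final Logger log = Logger.getLogger("client-listener");

    // Hypothetical read-error handler: an abrupt disconnect is logged at INFO,
    // anything else keeps the severe log with the exception details.
    static void onReadError(IOException e, String remoteAddress) {
        if (isConnectionReset(e))
            log.info("Client disconnected: " + remoteAddress);
        else
            log.severe("Failed to process selector key: " + e);
    }

    private static boolean isConnectionReset(IOException e) {
        String msg = e.getMessage();

        return msg != null && msg.contains("forcibly closed");
    }

    public static void main(String[] args) {
        onReadError(
            new IOException("An existing connection was forcibly closed by the remote host"),
            "/127.0.0.1:63697");
    }
}
{code}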



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-12021) Inserting date from Node.JS to a cache which has Java.SQL.Timestamp

2019-08-01 Thread Gaurav (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-12021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897953#comment-16897953
 ] 

Gaurav commented on IGNITE-12021:
-

Tried that. I didn't work. While doing this I got one more observation. 

I have used putAll and the data passes is array of cache entry. After inserting 
when I try to read using cache.Get() , I get Null.

 

If I insert using cache.put(new keyObj(), new valObj()) , I can read it from 
cache.Get()

 

What am I missing when inserting through cache entry???

 

Thanks,

Gaurav

> Inserting date from Node.JS to a cache which has Java.SQL.Timestamp
> ---
>
> Key: IGNITE-12021
> URL: https://issues.apache.org/jira/browse/IGNITE-12021
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, thin client
>Affects Versions: 2.7
> Environment: We are in DEV right now. can't proceed to higher 
> environment with this show stopper
>Reporter: Gaurav
>Priority: Blocker
>  Labels: Node.JS, ignite,
>
> I have cache which has one field with type java.sql.Timestamp
>  
> From, Node.JS i am inserting it as new Date(). 
> If the cache is empty the inserts are successful. Issue come when java 
> inserted few records in this cache (Java inserts java.sql.Timestamp) . Now , 
> if I run Node.JS program which tries to insert it gives me this error.
>  
> Binary type has different field types [typeName=XYZCacheName, 
> fieldName=updateTime, fieldTypeName1=Timestamp, fieldTypeName2=Date]
>  
> Please help, its stopped my work totally!
>  
> P.S : JavaScript new Date() is itself a Timestamp, so cache should ideally 
> accept it as Timestamp and not Date.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-12030) node freeze and not able to login after few days stop and start the service give the message in description

2019-08-01 Thread Vyacheslav Koptilin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897950#comment-16897950
 ] 

Vyacheslav Koptilin commented on IGNITE-12030:
--

Hi [~yabushaib],

Could you please attach log files from all nodes to the ticket? The message 
itself ({{Failed to wait for partition map exchange}}) does not provide info 
about the issue and possible reasons for that behavior.

> node freeze and not able to login after few days stop and start the service 
> give the message in description
> ---
>
> Key: IGNITE-12030
> URL: https://issues.apache.org/jira/browse/IGNITE-12030
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.7.5
> Environment: Linux 7.5
>Reporter: Yaser Mohammad Abushaip
>Priority: Critical
> Fix For: None
>
>
> During startup I receive the below error
> Failed to wait for partition map exchange
> [topVer=AffinityTopologyVersion [topVer=2,minorTopVer=1],
> node=adfsdfdsfxx. Dumping pending objects that might be the cause
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Closed] (IGNITE-12031) node freeze and not able to login after few days stop and start the service give the message in description

2019-08-01 Thread Yaser Mohammad Abushaip (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yaser Mohammad Abushaip closed IGNITE-12031.


> node freeze and not able to login after few days stop and start the service 
> give the message in description
> ---
>
> Key: IGNITE-12031
> URL: https://issues.apache.org/jira/browse/IGNITE-12031
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.7.5
> Environment: Linux 7.5
>Reporter: Yaser Mohammad Abushaip
>Priority: Critical
> Fix For: None
>
>
> During startup I receive the below error
> Failed to wait for partition map exchange
> [topVer=AffinityTopologyVersion [topVer=2,minorTopVer=1],
> node=adfsdfdsfxx. Dumping pending objects that might be the cause
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (IGNITE-12031) node freeze and not able to login after few days stop and start the service give the message in description

2019-08-01 Thread Ivan Pavlukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/IGNITE-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Pavlukhin resolved IGNITE-12031.
-
Resolution: Duplicate

[~yabushaib], I resolved this ticket as a duplicate of IGNITE-12030. Feel free 
to reopen if I misunderstood something.

> node freeze and not able to login after few days stop and start the service 
> give the message in description
> ---
>
> Key: IGNITE-12031
> URL: https://issues.apache.org/jira/browse/IGNITE-12031
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.7.5
> Environment: Linux 7.5
>Reporter: Yaser Mohammad Abushaip
>Priority: Critical
> Fix For: None
>
>
> During startup I receive the below error
> Failed to wait for partition map exchange
> [topVer=AffinityTopologyVersion [topVer=2,minorTopVer=1],
> node=adfsdfdsfxx. Dumping pending objects that might be the cause
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-5227) StackOverflowError in GridCacheMapEntry#checkOwnerChanged()

2019-08-01 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897931#comment-16897931
 ] 

Ignite TC Bot commented on IGNITE-5227:
---

{panel:title=Branch: [pull/6736/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4438310&buildTypeId=IgniteTests24Java8_RunAll]

> StackOverflowError in GridCacheMapEntry#checkOwnerChanged()
> ---
>
> Key: IGNITE-5227
> URL: https://issues.apache.org/jira/browse/IGNITE-5227
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6
>Reporter: Alexey Goncharuk
>Assignee: Stepachev Maksim
>Priority: Critical
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A simple test reproducing this error:
> {code}
> /**
>  * @throws Exception if failed.
>  */
> public void testBatchUnlock() throws Exception {
>     startGrid(0);
> 
>     grid(0).createCache(new CacheConfiguration<String, Integer>(DEFAULT_CACHE_NAME)
>         .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
> 
>     try {
>         final CountDownLatch releaseLatch = new CountDownLatch(1);
> 
>         IgniteInternalFuture<Object> fut = GridTestUtils.runAsync(new Callable<Object>() {
>             @Override public Object call() throws Exception {
>                 IgniteCache<Object, Object> cache = grid(0).cache(null);
> 
>                 Lock lock = cache.lock("key");
> 
>                 try {
>                     lock.lock();
> 
>                     releaseLatch.await();
>                 }
>                 finally {
>                     lock.unlock();
>                 }
> 
>                 return null;
>             }
>         });
> 
>         Map<Object, Object> putMap = new LinkedHashMap<>();
> 
>         putMap.put("key", "trigger");
> 
>         for (int i = 0; i < 10_000; i++)
>             putMap.put("key-" + i, "value");
> 
>         IgniteCache<Object, Object> asyncCache = grid(0).cache(null).withAsync();
> 
>         asyncCache.putAll(putMap);
> 
>         IgniteFuture<?> resFut = asyncCache.future();
> 
>         Thread.sleep(1000);
> 
>         releaseLatch.countDown();
> 
>         fut.get();
> 
>         resFut.get();
>     }
>     finally {
>         stopAllGrids();
>     }
> }
> {code}
> We should replace a recursive call with a simple iteration over the linked 
> list.
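
To make the proposed fix concrete, below is a hedged, generic Java sketch (not the actual 
GridCacheMapEntry code; the Node type and method names are hypothetical) showing how a recursive 
walk over a linked list is turned into a loop, so the call depth no longer grows with the length 
of the chain:

{code}
/** Hypothetical list node standing in for a chain of lock candidates. */
class Node {
    Node next;

    void notifyOwnerChanged() {
        // Per-node work goes here.
    }
}

class OwnerChangeWalker {
    /** Recursive form: one stack frame per element, overflows on long chains. */
    static void walkRecursive(Node node) {
        if (node == null)
            return;

        node.notifyOwnerChanged();

        walkRecursive(node.next);
    }

    /** Iterative form: constant stack depth regardless of chain length. */
    static void walkIterative(Node head) {
        for (Node node = head; node != null; node = node.next)
            node.notifyOwnerChanged();
    }
}
{code}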



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (IGNITE-11584) Implement batch insertion of new cache entries in FreeList to improve rebalancing

2019-08-01 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865098#comment-16865098
 ] 

Pavel Pereslegin edited comment on IGNITE-11584 at 8/1/19 9:41 AM:
---

Results of microbenchmarks.
Environment: Linux, 16Gb Ram, Core i7 8700
Config: memory page size - 4096 bytes.

1.{{CacheFreeList.insertDataRow()}} vs {{CacheFreeList.insertDataRows()}}.
Source code: [https://gist.github.com/xtern/8443638a026785655f1cd4d084ded6fd]

Benchmark measures the time required to insert 100 data rows of different sizes 
(the size does not include object overhead ~40 bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*|*100 - 32000*|
||single (μs)|162.3 ^± 4.2^|140.7  ^± 1.2^|159.9  ^± 4.6^|175.2  ^± 1.6^|239.8  
^± 2.3^| |422.3  ^± 5.7^|867.9  ^± 89.3^|1287.0  ^± 55.8^|
||batch (μs)|28.0  ^± 0.9^|43.4  ^± 0.4^|74.6  ^± 0.7^|115.8  ^± 2.0^|232.9  ^± 
5.7^| |398.5  ^± 8.6^|794.6  ^± 8.7^|1209.0  ^± 20.9^|

2. Comparison of preloading performance (master branch vs patch branch).
Source code: [https://gist.github.com/xtern/4a4699efd06f147df2b7b342169aee0d]

Benchmark measures the time required for handling one supply message with 100 
objects of different sizes (the size does not include object overhead ~40 
bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*| *100 - 32000*|
||master(μs)|198.7  ^± 7.2^|205.7  ^± 10.0^|213.3  ^± 17.4^|243.4  ^± 
31.7^|261.2  ^± 10.6^| |371.8  ^± 13.5^|639.5  ^± 36.4^|914.5  ^± 85.5^|
||patch (μs)|121.9  ^± 4.1^|141.3  ^± 23.0^|155.3  ^± 15.8^|178.7  ^± 
21.4^|241.2  ^± 6.3^| |359.1  ^± 31.3^|637.3  ^± 145.7^|898.6  ^± 81.6^|

Free list benchmark shows that performance increases in cases when the memory 
page has enough free space to store more than one object.
The performance boost in preloading is significantly lower since this process 
involves more overhead. Real life rebalancing speedup will be even less.
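
For readers unfamiliar with the setup, the following is a rough JMH skeleton (an assumption about 
structure only, not the code from the linked gists; the free-list interaction is replaced by a plain 
list stand-in) showing how a single-row vs. batch insertion comparison like the one above is typically 
organized. In the real benchmark the two measured methods would call {{CacheFreeList.insertDataRow(row)}} 
once per row and a single {{CacheFreeList.insertDataRows(rows)}}, respectively.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class InsertDataRowsBenchmarkSketch {
    /** Rows per measured operation, matching the 100-row batches described above. */
    private static final int ROWS = 100;

    /** Upper bound of the random payload size; varied per run like the size buckets above. */
    @Param({"64", "300", "1200"})
    private int maxPayload;

    private List<byte[]> rows;

    @Setup
    public void prepare() {
        rows = new ArrayList<>(ROWS);

        for (int i = 0; i < ROWS; i++)
            rows.add(new byte[ThreadLocalRandom.current().nextInt(4, maxPayload)]);
    }

    @Benchmark
    public List<byte[]> single() {
        // Stand-in for calling CacheFreeList.insertDataRow(row) once per row.
        List<byte[]> target = new ArrayList<>(ROWS);

        for (byte[] row : rows)
            target.add(row);

        return target;
    }

    @Benchmark
    public List<byte[]> batch() {
        // Stand-in for a single CacheFreeList.insertDataRows(rows) call.
        List<byte[]> target = new ArrayList<>(ROWS);

        target.addAll(rows);

        return target;
    }
}
{code}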


was (Author: xtern):
Results of microbenchmarks.
Environment: Linux, 16Gb Ram, Core i7 8700
Config: memory page size - 4096 bytes.

1.{{CacheFreeList.insertDataRow()}} vs {{CacheFreeList.insertDataRows()}}.
Source code: [https://gist.github.com/xtern/8443638a026785655f1cd4d084ded6fd
]Benchmark measures the time required to insert 100 data rows of different 
sizes (the size does not include object overhead ~40 bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*|*100 - 32000*|
||single (μs)|162.3 ^± 4.2^|140.7  ^± 1.2^|159.9  ^± 4.6^|175.2  ^± 1.6^|239.8  
^± 2.3^| |422.3  ^± 5.7^|867.9  ^± 89.3^|1287.0  ^± 55.8^|
||batch (μs)|28.0  ^± 0.9^|43.4  ^± 0.4^|74.6  ^± 0.7^|115.8  ^± 2.0^|232.9  ^± 
5.7^| |398.5  ^± 8.6^|794.6  ^± 8.7^|1209.0  ^± 20.9^|

2. Comparison of preloading performance (master branch vs patch branch).
Source code: [https://gist.github.com/xtern/4a4699efd06f147df2b7b342169aee0d
]Benchmark measures the time required for handling one supply message with 100 
objects of different sizes (the size does not include object overhead ~40 
bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*| *100 - 32000*|
||master(μs)|198.7  ^± 7.2^|205.7  ^± 10.0^|213.3  ^± 17.4^|243.4  ^± 
31.7^|261.2  ^± 10.6^| |371.8  ^± 13.5^|639.5  ^± 36.4^|914.5  ^± 85.5^|
||patch (μs)|121.9  ^± 4.1^|141.3  ^± 23.0^|155.3  ^± 15.8^|178.7  ^± 
21.4^|241.2  ^± 6.3^| |359.1  ^± 31.3^|637.3  ^± 145.7^|898.6  ^± 81.6^|

Free list benchmark shows that performance increases in cases when the memory 
page has enough free space to store more than one object.
The performance boost in preloading is significantly lower since this process 
involves more overhead. Real life rebalancing speedup will be even less.

> Implement batch insertion of new cache entries in FreeList to improve 
> rebalancing
> -
>
> Key: IGNITE-11584
> URL: https://issues.apache.org/jira/browse/IGNITE-11584
> Project: Ignite
>  Issue Type: Sub-task
>Affects Versions: 2.7
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: iep-32
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Main goals:
>  * Implement batch insert operation into FreeList - insert several data rows 
> at once
>  * Use batch insertion in the preloader
>   
> Implementation notes:
>  # Preloader cannot lock multiple cache entries at once, because this may 
> lead to a deadlock with concurrent batch updates. Therefore, it pre-creates 
> batch of data rows in the page memory, and then sequentially initializes the 
> cache 

[jira] [Comment Edited] (IGNITE-11584) Implement batch insertion of new cache entries in FreeList to improve rebalancing

2019-08-01 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865098#comment-16865098
 ] 

Pavel Pereslegin edited comment on IGNITE-11584 at 8/1/19 9:40 AM:
---

Results of microbenchmarks.
Environment: Linux, 16Gb Ram, Core i7 8700
Config: memory page size - 4096 bytes.

1.{{CacheFreeList.insertDataRow()}} vs {{CacheFreeList.insertDataRows()}}.
Source code: [https://gist.github.com/xtern/8443638a026785655f1cd4d084ded6fd
]Benchmark measures the time required to insert 100 data rows of different 
sizes (the size does not include object overhead ~40 bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*|*100 - 32000*|
||single (μs)|162.3 ^± 4.2^|140.7  ^± 1.2^|159.9  ^± 4.6^|175.2  ^± 1.6^|239.8  
^± 2.3^| |422.3  ^± 5.7^|867.9  ^± 89.3^|1287.0  ^± 55.8^|
||batch (μs)|28.0  ^± 0.9^|43.4  ^± 0.4^|74.6  ^± 0.7^|115.8  ^± 2.0^|232.9  ^± 
5.7^| |398.5  ^± 8.6^|794.6  ^± 8.7^|1209.0  ^± 20.9^|

2. Comparison of preloading performance (master branch vs patch branch).
Source code: [https://gist.github.com/xtern/4a4699efd06f147df2b7b342169aee0d
]Benchmark measures the time required for handling one supply message with 100 
objects of different sizes (the size does not include object overhead ~40 
bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*| *100 - 32000*|
||master(μs)|198.7  ^± 7.2^|205.7  ^± 10.0^|213.3  ^± 17.4^|243.4  ^± 
31.7^|261.2  ^± 10.6^| |371.8  ^± 13.5^|639.5  ^± 36.4^|914.5  ^± 85.5^|
||patch (μs)|121.9  ^± 4.1^|141.3  ^± 23.0^|155.3  ^± 15.8^|178.7  ^± 
21.4^|241.2  ^± 6.3^| |359.1  ^± 31.3^|637.3  ^± 145.7^|898.6  ^± 81.6^|

Free list benchmark shows that performance increases in cases when the memory 
page has enough free space to store more than one object.
The performance boost in preloading is significantly lower since this process 
involves more overhead. Real life rebalancing speedup will be even less.


was (Author: xtern):
Results of microbenchmarks.
Environment: Linux, 16Gb Ram, Core i7 8700
Config: memory page size - 4096 bytes.

1.{{CacheFreeList.insertDataRow()}} vs {{CacheFreeList.insertDataRows()}}.
Source code: [https://gist.github.com/xtern/8443638a026785655f1cd4d084ded6fd
]Benchmark measures the time required to insert 100 data rows of different 
sizes (the size does not include object overhead ~40 bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*|*100 - 32000*|
||single (μs)|162.3 ^± 4.2^|140.7  ^± 1.2^|159.9  ^± 4.6^|175.2  ^± 1.6^|239.8  
^± 2.3^| |422.3  ^± 5.7^|867.9  ^± 89.3^|1287.0  ^± 55.8^|
||batch (μs)|28.0  ^± 0.9^|43.4  ^± 0.4^|74.6  ^± 0.7^|115.8  ^± 2.0^|232.9  ^± 
5.7^| |398.5  ^± 8.6^|794.6  ^± 8.7^|1209.0  ^± 20.9^|

2. Comparison of preloading performance (master branch vs patch branch).
Source code: [https://gist.github.com/xtern/4a4699efd06f147df2b7b342169aee0d
]Benchmark measures the time required for handling one supply message with 100 
objects of different sizes (the size does not include object overhead ~40 
bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*| *100 - 32000*|
||master(μs)|198.7  ^± 7.2^|205.7  ^± 10.0^|213.3  ^± 17.4^|243.4  ^± 
31.7^|261.2  ^± 10.6^| |371.8  ^± 13.5^|639.5  ^± 36.4^|914.5  ^± 85.5^|
||patch (μs)|121.9  ^± 4.1^|141.3  ^± 23.0^|155.3  ^± 15.8^|178.7  ^± 
21.4^|241.2  ^± 6.3^| |359.1  ^± 31.3^|637.3  ^± 145.7^|898.6  ^± 81.6^|

Free list benchmark shows that performance increases in cases when the memory 
page has enough free space to store more than one object.
The performance boost in preloading is significantly lower since this process 
involves more overhead. Real life rebalancing speedup will be even less.

> Implement batch insertion of new cache entries in FreeList to improve 
> rebalancing
> -
>
> Key: IGNITE-11584
> URL: https://issues.apache.org/jira/browse/IGNITE-11584
> Project: Ignite
>  Issue Type: Sub-task
>Affects Versions: 2.7
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: iep-32
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Main goals:
>  * Implement batch insert operation into FreeList - insert several data rows 
> at once
>  * Use batch insertion in the preloader
>   
> Implementation notes:
>  # Preloader cannot lock multiple cache entries at once, because this may 
> lead to a deadlock with concurrent batch updates. Therefore, it pre-creates 
> batch of data rows in the page memory, and then sequentially initializes the 
> cache 

[jira] [Comment Edited] (IGNITE-11584) Implement batch insertion of new cache entries in FreeList to improve rebalancing

2019-08-01 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865098#comment-16865098
 ] 

Pavel Pereslegin edited comment on IGNITE-11584 at 8/1/19 9:39 AM:
---

Results of microbenchmarks.
Environment: Linux, 16Gb Ram, Core i7 8700
Config: memory page size - 4096 bytes.

1.{{CacheFreeList.insertDataRow()}} vs {{CacheFreeList.insertDataRows()}}.
Source code: [https://gist.github.com/xtern/8443638a026785655f1cd4d084ded6fd
]Benchmark measures the time required to insert 100 data rows of different 
sizes (the size does not include object overhead ~40 bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*|*100 - 32000*|
||single (μs)|162.3 ^± 4.2^|140.7  ^± 1.2^|159.9  ^± 4.6^|175.2  ^± 1.6^|239.8  
^± 2.3^| |422.3  ^± 5.7^|867.9  ^± 89.3^|1287.0  ^± 55.8^|
||batch (μs)|28.0  ^± 0.9^|43.4  ^± 0.4^|74.6  ^± 0.7^|115.8  ^± 2.0^|232.9  ^± 
5.7^| |398.5  ^± 8.6^|794.6  ^± 8.7^|1209.0  ^± 20.9^|

2. Comparison of preloading performance (master branch vs patch branch).
Source code: [https://gist.github.com/xtern/4a4699efd06f147df2b7b342169aee0d
]Benchmark measures the time required for handling one supply message with 100 
objects of different sizes (the size does not include object overhead ~40 
bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*| *100 - 32000*|
||master(μs)|198.7  ^± 7.2^|205.7  ^± 10.0^|213.3  ^± 17.4^|243.4  ^± 
31.7^|261.2  ^± 10.6^| |371.8  ^± 13.5^|639.5  ^± 36.4^|914.5  ^± 85.5^|
||patch (μs)|121.9  ^± 4.1^|141.3  ^± 23.0^|155.3  ^± 15.8^|178.7  ^± 
21.4^|241.2  ^± 6.3^| |359.1  ^± 31.3^|637.3  ^± 145.7^|898.6  ^± 81.6^|

Free list benchmark shows that performance increases in cases when the memory 
page has enough free space to store more than one object.
The performance boost in preloading is significantly lower since this process 
involves more overhead. Real life rebalancing speedup will be even less.


was (Author: xtern):
Results of microbenchmarks.
Environment: Linux, 16Gb Ram, Core i7 8700
Config: memory page size - 4096 bytes.

1.{{CacheFreeList.insertDataRow()}} vs {{CacheFreeList.insertDataRows()}}.
Source code: [https://gist.github.com/xtern/8443638a026785655f1cd4d084ded6fd]

Benchmark measures the time required to insert 100 data rows of different sizes 
(the size does not include object overhead ~40 bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*|*100 - 32000*|
||single (μs)|162.3 ^± 4.2^|140.7  ^± 1.2^|159.9  ^± 4.6^|175.2  ^± 1.6^|239.8  
^± 2.3^| |422.3  ^± 5.7^|867.9  ^± 89.3^|1287.0  ^± 55.8^|
||batch (μs)|28.0  ^± 0.9^|43.4  ^± 0.4^|74.6  ^± 0.7^|115.8  ^± 2.0^|232.9  ^± 
5.7^| |398.5  ^± 8.6^|794.6  ^± 8.7^|1209.0  ^± 20.9^|

2. Comparison of preloading performance (master branch vs patch branch).
Source code: 
[https://gist.github.com/xtern/4a4699efd06f147df2b7b342169aee0d#file-jmhbatchupdatesinpreloadbenchmark-java]
Benchmark measures the time required for handling one supply message with 100 
objects of different sizes (the size does not include object overhead ~40 
bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*| *100 - 32000*|
||master(μs)|198.7  ^± 7.2^|205.7  ^± 10.0^|213.3  ^± 17.4^|243.4  ^± 
31.7^|261.2  ^± 10.6^| |371.8  ^± 13.5^|639.5  ^± 36.4^|914.5  ^± 85.5^|
||patch (μs)|121.9  ^± 4.1^|141.3  ^± 23.0^|155.3  ^± 15.8^|178.7  ^± 
21.4^|241.2  ^± 6.3^| |359.1  ^± 31.3^|637.3  ^± 145.7^|898.6  ^± 81.6^|

Free list benchmark shows that performance increases in cases when the memory 
page has enough free space to store more than one object.
The performance boost in preloading is significantly lower since this process 
involves more overhead. Real life rebalancing speedup will be even less.

> Implement batch insertion of new cache entries in FreeList to improve 
> rebalancing
> -
>
> Key: IGNITE-11584
> URL: https://issues.apache.org/jira/browse/IGNITE-11584
> Project: Ignite
>  Issue Type: Sub-task
>Affects Versions: 2.7
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: iep-32
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Main goals:
>  * Implement batch insert operation into FreeList - insert several data rows 
> at once
>  * Use batch insertion in the preloader
>   
> Implementation notes:
>  # Preloader cannot lock multiple cache entries at once, because this may 
> lead to a deadlock with concurrent batch updates. Therefore, it pre-creates 
> batch of data rows in the page memory, and 

[jira] [Comment Edited] (IGNITE-11584) Implement batch insertion of new cache entries in FreeList to improve rebalancing

2019-08-01 Thread Pavel Pereslegin (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16865098#comment-16865098
 ] 

Pavel Pereslegin edited comment on IGNITE-11584 at 8/1/19 9:38 AM:
---

Results of microbenchmarks.
Environment: Linux, 16Gb Ram, Core i7 8700
Config: memory page size - 4096 bytes.

1.{{CacheFreeList.insertDataRow()}} vs {{CacheFreeList.insertDataRows()}}.
Source code: [https://gist.github.com/xtern/8443638a026785655f1cd4d084ded6fd]

Benchmark measures the time required to insert 100 data rows of different sizes 
(the size does not include object overhead ~40 bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*|*100 - 32000*|
||single (μs)|162.3 ^± 4.2^|140.7  ^± 1.2^|159.9  ^± 4.6^|175.2  ^± 1.6^|239.8  
^± 2.3^| |422.3  ^± 5.7^|867.9  ^± 89.3^|1287.0  ^± 55.8^|
||batch (μs)|28.0  ^± 0.9^|43.4  ^± 0.4^|74.6  ^± 0.7^|115.8  ^± 2.0^|232.9  ^± 
5.7^| |398.5  ^± 8.6^|794.6  ^± 8.7^|1209.0  ^± 20.9^|

2. Comparison of preloading performance (master branch vs patch branch).
Source code: 
[https://gist.github.com/xtern/4a4699efd06f147df2b7b342169aee0d#file-jmhbatchupdatesinpreloadbenchmark-java]
Benchmark measures the time required for handling one supply message with 100 
objects of different sizes (the size does not include object overhead ~40 
bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*| *100 - 32000*|
||master(μs)|198.7  ^± 7.2^|205.7  ^± 10.0^|213.3  ^± 17.4^|243.4  ^± 
31.7^|261.2  ^± 10.6^| |371.8  ^± 13.5^|639.5  ^± 36.4^|914.5  ^± 85.5^|
||patch (μs)|121.9  ^± 4.1^|141.3  ^± 23.0^|155.3  ^± 15.8^|178.7  ^± 
21.4^|241.2  ^± 6.3^| |359.1  ^± 31.3^|637.3  ^± 145.7^|898.6  ^± 81.6^|

Free list benchmark shows that performance increases in cases when the memory 
page has enough free space to store more than one object.
The performance boost in preloading is significantly lower since this process 
involves more overhead. Real life rebalancing speedup will be even less.


was (Author: xtern):
Results of microbenchmarks.
Environment: Linux, 16Gb Ram, Core i7 8700
Config: memory page size - 4096 bytes.

1.{{CacheFreeList.insertDataRow()}} vs {{CacheFreeList.insertDataRows()}}.
Source code: 
[https://github.com/apache/ignite/blob/b266bba3d1ae9d39b4b39ef38f4c56d1319da063/modules/benchmarks/src/main/java/org/apache/ignite/internal/benchmarks/jmh/pagemem/JmhCacheFreelistBenchmark.java]
Benchmark measures the time required to insert 100 data rows of different sizes 
(the size does not include object overhead ~40 bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*|*100 - 32000*|
||single (μs)|162.3 ^± 4.2^|140.7  ^± 1.2^|159.9  ^± 4.6^|175.2  ^± 1.6^|239.8  
^± 2.3^| |422.3  ^± 5.7^|867.9  ^± 89.3^|1287.0  ^± 55.8^|
||batch (μs)|28.0  ^± 0.9^|43.4  ^± 0.4^|74.6  ^± 0.7^|115.8  ^± 2.0^|232.9  ^± 
5.7^| |398.5  ^± 8.6^|794.6  ^± 8.7^|1209.0  ^± 20.9^|

2. Comparison of preloading performance (master branch vs patch branch).
Source code: 
[https://gist.github.com/xtern/4a4699efd06f147df2b7b342169aee0d#file-jmhbatchupdatesinpreloadbenchmark-java]
Benchmark measures the time required for handling one supply message with 100 
objects of different sizes (the size does not include object overhead ~40 
bytes).
||size (bytes)|*4 - 64*|*100-300*|*300-700*|*700 - 1200*|*1200 - 3000*| 
|*1000 - 8000*|*4000 - 16000*| *100 - 32000*|
||master(μs)|198.7  ^± 7.2^|205.7  ^± 10.0^|213.3  ^± 17.4^|243.4  ^± 
31.7^|261.2  ^± 10.6^| |371.8  ^± 13.5^|639.5  ^± 36.4^|914.5  ^± 85.5^|
||patch (μs)|121.9  ^± 4.1^|141.3  ^± 23.0^|155.3  ^± 15.8^|178.7  ^± 
21.4^|241.2  ^± 6.3^| |359.1  ^± 31.3^|637.3  ^± 145.7^|898.6  ^± 81.6^|

Free list benchmark shows that performance increases in cases when the memory 
page has enough free space to store more than one object.
The performance boost in preloading is significantly lower since this process 
involves more overhead. Real life rebalancing speedup will be even less.

> Implement batch insertion of new cache entries in FreeList to improve 
> rebalancing
> -
>
> Key: IGNITE-11584
> URL: https://issues.apache.org/jira/browse/IGNITE-11584
> Project: Ignite
>  Issue Type: Sub-task
>Affects Versions: 2.7
>Reporter: Pavel Pereslegin
>Assignee: Pavel Pereslegin
>Priority: Major
>  Labels: iep-32
> Fix For: 2.8
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Main goals:
>  * Implement batch insert operation into FreeList - insert several data rows 
> at once
>  * Use batch insertion in the preloader
>   
> Implementation notes:
>  # Preloader cannot lock 

[jira] [Commented] (IGNITE-11857) Investigate performance drop after IGNITE-10078

2019-08-01 Thread Ilya Suntsov (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897892#comment-16897892
 ] 

Ilya Suntsov commented on IGNITE-11857:
---

[~alex_pl] the last run showed that PR 6640 is ~4% slower than master.

> Investigate performance drop after IGNITE-10078
> ---
>
> Key: IGNITE-11857
> URL: https://issues.apache.org/jira/browse/IGNITE-11857
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Alexei Scherbakov
>Assignee: Aleksey Plekhanov
>Priority: Major
> Attachments: ignite-config.xml, 
> run.properties.tx-optimistic-put-b-backup
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After IGNITE-10078, yardstick tests show a performance drop of up to 8% in some 
> scenarios:
> * tx-optim-repRead-put-get
> * tx-optimistic-put
> * tx-putAll
> Partially this is due to the new update counter implementation, but that is not 
> the only cause. Further investigation is required.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (IGNITE-12031) node freeze and not able to login after few days stop and start the service give the message in description

2019-08-01 Thread Yaser Mohammad Abushaip (JIRA)
Yaser Mohammad Abushaip created IGNITE-12031:


 Summary: node freeze and not able to login after few days stop and 
start the service give the message in description
 Key: IGNITE-12031
 URL: https://issues.apache.org/jira/browse/IGNITE-12031
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.7.5
 Environment: Linux 7.5
Reporter: Yaser Mohammad Abushaip
 Fix For: None


During startup I receive the below error

Failed to wait for partition map exchange

[topVer=AffinityTopologyVersion [topVer=2,minorTopVer=1],

node=adfsdfdsfxx. Dumping pending objects that might be the cause

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (IGNITE-12030) node freeze and not able to login after few days stop and start the service give the message in description

2019-08-01 Thread Yaser Mohammad Abushaip (JIRA)
Yaser Mohammad Abushaip created IGNITE-12030:


 Summary: node freeze and not able to login after few days stop and 
start the service give the message in description
 Key: IGNITE-12030
 URL: https://issues.apache.org/jira/browse/IGNITE-12030
 Project: Ignite
  Issue Type: Bug
  Components: general
Affects Versions: 2.7.5
 Environment: Linux 7.5
Reporter: Yaser Mohammad Abushaip
 Fix For: None


During startup I receive the below error

Failed to wait for partition map exchange

[topVer=AffinityTopologyVersion [topVer=2,minorTopVer=1],

node=adfsdfdsfxx. Dumping pending objects that might be the cause

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-11944) [IEP-35] OpencensusExporter should export Histogram metrics

2019-08-01 Thread Amelchev Nikita (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897859#comment-16897859
 ] 

Amelchev Nikita commented on IGNITE-11944:
--

[~NIzhikov], Hi, could you take a look please?

> [IEP-35] OpencensusExporter should export Histogram metrics
> ---
>
> Key: IGNITE-11944
> URL: https://issues.apache.org/jira/browse/IGNITE-11944
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: IEP-35
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For now, OpenCensusMetricExporter doesn't export HistogramMetric.
> Ignite should support export of this type of metrics.
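
For context, a minimal sketch (plain OpenCensus stats API, not Ignite's exporter code; the metric name, 
unit and bucket bounds are assumptions) of how a histogram-style metric is usually represented on the 
OpenCensus side, i.e. as a Distribution-aggregated view that an exporter registers and records into:

{code}
import java.util.Arrays;
import java.util.Collections;

import io.opencensus.stats.Aggregation;
import io.opencensus.stats.BucketBoundaries;
import io.opencensus.stats.Measure.MeasureLong;
import io.opencensus.stats.Stats;
import io.opencensus.stats.View;
import io.opencensus.tags.TagKey;

public class HistogramViewSketch {
    public static void main(String[] args) {
        // Hypothetical metric: a latency histogram in milliseconds.
        MeasureLong latency = MeasureLong.create("cache.get.time", "Get latency", "ms");

        // Bucket bounds mirror what a HistogramMetric would carry.
        BucketBoundaries bounds = BucketBoundaries.create(Arrays.asList(1.0, 10.0, 100.0, 1000.0));

        View view = View.create(
            View.Name.create("cache.get.time.histogram"),
            "Get latency distribution",
            latency,
            Aggregation.Distribution.create(bounds),
            Collections.<TagKey>emptyList());

        // Register the view, then record observed values into the distribution.
        Stats.getViewManager().registerView(view);
        Stats.getStatsRecorder().newMeasureMap().put(latency, 42).record();
    }
}
{code}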



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-11924) [IEP-35] Migrate TransactionMetricsMxBean

2019-08-01 Thread Amelchev Nikita (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897852#comment-16897852
 ] 

Amelchev Nikita commented on IGNITE-11924:
--

[~NIzhikov] Hi, could you take a look please?

> [IEP-35] Migrate TransactionMetricsMxBean
> -
>
> Key: IGNITE-11924
> URL: https://issues.apache.org/jira/browse/IGNITE-11924
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: IEP-35
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After merging of IGNITE-11848 we should migrate `TransactionMetricsMxBean` to 
> the new metric framework.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-11944) [IEP-35] OpencensusExporter should export Histogram metrics

2019-08-01 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897847#comment-16897847
 ] 

Ignite TC Bot commented on IGNITE-11944:


{panel:title=Branch: [pull/6737/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4439848&buildTypeId=IgniteTests24Java8_RunAll]

> [IEP-35] OpencensusExporter should export Histogram metrics
> ---
>
> Key: IGNITE-11944
> URL: https://issues.apache.org/jira/browse/IGNITE-11944
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: IEP-35
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For now, OpenCensusMetricExporter doesn't export HistogramMetric.
> Ignite should support export of this type of metrics.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-11924) [IEP-35] Migrate TransactionMetricsMxBean

2019-08-01 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-11924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897846#comment-16897846
 ] 

Ignite TC Bot commented on IGNITE-11924:


{panel:title=Branch: [pull/6733/head] Base: [master] : No blockers 
found!|borderStyle=dashed|borderColor=#ccc|titleBGColor=#D6F7C1}{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4439502&buildTypeId=IgniteTests24Java8_RunAll]

> [IEP-35] Migrate TransactionMetricsMxBean
> -
>
> Key: IGNITE-11924
> URL: https://issues.apache.org/jira/browse/IGNITE-11924
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Nikolay Izhikov
>Assignee: Amelchev Nikita
>Priority: Major
>  Labels: IEP-35
> Fix For: 2.8
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After merging of IGNITE-11848 we should migrate `TransactionMetricsMxBean` to 
> the new metric framework.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (IGNITE-5227) StackOverflowError in GridCacheMapEntry#checkOwnerChanged()

2019-08-01 Thread Ignite TC Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/IGNITE-5227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897845#comment-16897845
 ] 

Ignite TC Bot commented on IGNITE-5227:
---

{panel:title=Branch: [pull/6736/head] Base: [master] : Possible Blockers 
(1)|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1}
{color:#d04437}PDS (Indexing){color} [[tests 0 Exit Code 
|https://ci.ignite.apache.org/viewLog.html?buildId=4438275]]

{panel}
[TeamCity *-- Run :: All* 
Results|https://ci.ignite.apache.org/viewLog.html?buildId=4438310&buildTypeId=IgniteTests24Java8_RunAll]

> StackOverflowError in GridCacheMapEntry#checkOwnerChanged()
> ---
>
> Key: IGNITE-5227
> URL: https://issues.apache.org/jira/browse/IGNITE-5227
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6
>Reporter: Alexey Goncharuk
>Assignee: Stepachev Maksim
>Priority: Critical
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A simple test reproducing this error:
> {code}
> /**
>  * @throws Exception if failed.
>  */
> public void testBatchUnlock() throws Exception {
>     startGrid(0);
> 
>     grid(0).createCache(new CacheConfiguration<String, Integer>(DEFAULT_CACHE_NAME)
>         .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
> 
>     try {
>         final CountDownLatch releaseLatch = new CountDownLatch(1);
> 
>         IgniteInternalFuture<Object> fut = GridTestUtils.runAsync(new Callable<Object>() {
>             @Override public Object call() throws Exception {
>                 IgniteCache<Object, Object> cache = grid(0).cache(null);
> 
>                 Lock lock = cache.lock("key");
> 
>                 try {
>                     lock.lock();
> 
>                     releaseLatch.await();
>                 }
>                 finally {
>                     lock.unlock();
>                 }
> 
>                 return null;
>             }
>         });
> 
>         Map<Object, Object> putMap = new LinkedHashMap<>();
> 
>         putMap.put("key", "trigger");
> 
>         for (int i = 0; i < 10_000; i++)
>             putMap.put("key-" + i, "value");
> 
>         IgniteCache<Object, Object> asyncCache = grid(0).cache(null).withAsync();
> 
>         asyncCache.putAll(putMap);
> 
>         IgniteFuture<?> resFut = asyncCache.future();
> 
>         Thread.sleep(1000);
> 
>         releaseLatch.countDown();
> 
>         fut.get();
> 
>         resFut.get();
>     }
>     finally {
>         stopAllGrids();
>     }
> }
> {code}
> We should replace a recursive call with a simple iteration over the linked 
> list.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)