[jira] [Created] (IGNITE-5787) .NET: Ignite entities (ICache, ICompute) cause weird serialization errors when used as fields in user object

2017-07-19 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-5787:
--

 Summary: .NET: Ignite entities (ICache, ICompute) cause weird 
serialization errors when used as fields in user object
 Key: IGNITE-5787
 URL: https://issues.apache.org/jira/browse/IGNITE-5787
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 1.6
Reporter: Pavel Tupitsyn
Priority: Minor
 Fix For: 2.2


A common use case is using an Ignite cache inside a compute job:
{code}
class MyAction : IComputeAction
{
  private readonly ICache _cache;
  ...
}
{code}

This fails with a weird error:
{code}
class org.apache.ignite.IgniteException: Cannot serialize delegates over 
unmanaged function pointers, dynamic methods or methods outside the delegate 
creator's assembly.
{code}

We should consider providing a helpful error message, or handling this the same 
way the {{Ignite}} class is handled in {{BinarySystemHandlers.FindWriteHandler}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5786) .NET: Transaction fails with multiple write-through caches

2017-07-19 Thread Pavel Tupitsyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-5786:
---
Priority: Critical  (was: Major)

> .NET: Transaction fails with multiple write-through caches
> --
>
> Key: IGNITE-5786
> URL: https://issues.apache.org/jira/browse/IGNITE-5786
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 1.6
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>Priority: Critical
>  Labels: .NET
> Fix For: 2.2
>
>
> To reproduce: create two caches with {{WriteThrough=true}} and some 
> {{CacheStore}} (implementation can be empty).
> Attempt to update both caches within a transaction:
> {code}
> using (var tx = ignite.GetTransactions().TxStart())
> {
> cache1.Put(1, 1);
> cache2.Put(1, 1);
> tx.Commit();
> }
> {code}
> Exception occurs:
> {code}
> (err) Failed to notify listener: 
> o.a.i.i.processors.cache.distributed.near.GridNearTxLocal$16@17695df3javax.cache.integration.CacheWriterException:
>  PlatformNativeException [cause=System.InvalidOperationException 
> [idHash=1909546776, hash=1265661973, 
> ClassName=System.InvalidOperationException, Data=null, ExceptionMethod=8
> Get
> Apache.Ignite.Core, Version=2.1.0.19388, Culture=neutral, 
> PublicKeyToken=a487a7ff0b2aaa4a
> Apache.Ignite.Core.Impl.Handle.HandleRegistry
> T Get[T](Int64, Boolean), HelpURL=null, HResult=-2146233079, 
> InnerException=null, Message=Resource handle has been released (is Ignite 
> stopping?)., RemoteStackIndex=0, RemoteStackTraceString=null, 
> Source=Apache.Ignite.Core, StackTraceString=   at 
> Apache.Ignite.Core.Impl.Handle.HandleRegistry.Get[T](Int64 id, Boolean 
> throwOnAbsent) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Handle\HandleRegistry.cs:line
>  262
>at 
> Apache.Ignite.Core.Impl.Cache.Store.CacheStoreInternal`2.Invoke(IBinaryStream 
> stream, Ignite grid) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Cache\Store\CacheStoreInternal.cs:line
>  112
>at 
> Apache.Ignite.Core.Impl.Cache.Store.CacheStore.Invoke(PlatformMemoryStream 
> stream, Ignite grid) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Cache\Store\CacheStore.cs:line
>  127
>at 
> Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.CacheStoreInvoke(Int64 
> memPtr) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Unmanaged\UnmanagedCallbacks.cs:line
>  366
> {code}
> Explanation:
> * Cache stores share the same session within a transaction
> * The session in Java is used to store the .NET session handle, so both stores 
> have the same .NET session (which is good: consistent with Java)
> * Each store calls sessionEnd, so the session gets released multiple times - 
> this causes the HandleRegistry exception
> The current unit test uses Spring XML with a shared 
> {{PlatformDotNetCacheStoreFactory}}, which caches the created store instance for 
> some reason, so the bug is hidden, since both caches use the same store 
> instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-5729) IgniteCacheProxy instances from "with..." methods are not reusable

2017-07-19 Thread Pavel Kovalenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Kovalenko reassigned IGNITE-5729:
---

Assignee: Alexey Goncharuk  (was: Pavel Kovalenko)

Refactoring is ready.

Branch in both repositories: ignite-5729
Ignite TC: 
http://ci.ignite.apache.org/project.html?projectId=Ignite20Tests_Ignite20Tests=pull%2F2293%2Fhead#

> IgniteCacheProxy instances from "with..." methods are not reusable
> --
>
> Key: IGNITE-5729
> URL: https://issues.apache.org/jira/browse/IGNITE-5729
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.1
>Reporter: Pavel Kovalenko
>Assignee: Alexey Goncharuk
> Fix For: 2.2
>
>
> On cache restart all IgniteCacheProxy instances must be reset in order to 
> reuse them.
> But a bunch of methods in the IgniteCache interface, including withKeepBinary, 
> create new proxy instances on each call, and these instances are not reset on 
> cache restart.
> E.g. this leads to a CacheStoppedException when reusing them after restoring from 
> a snapshot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5750) Format of uptime for metrics

2017-07-19 Thread Alexandr Kuramshin (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093424#comment-16093424
 ] 

Alexandr Kuramshin commented on IGNITE-5750:


LGTM. Have you added a test or checked that existing uses of {{timeSpan2HMSM}} 
won't be broken by the format change?

> Format of uptime for metrics
> 
>
> Key: IGNITE-5750
> URL: https://issues.apache.org/jira/browse/IGNITE-5750
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.0
>Reporter: Alexandr Kuramshin
>Assignee: Yevgeniy Ignatyev
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.1
>
>
> Metrics for the local node show the uptime formatted as 00:00:00:000,
> but the last colon should be changed to a dot.
> The correct format is 00:00:00.000.
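As an illustration of the expected output, a minimal, self-contained sketch (this 
is not the actual {{timeSpan2HMSM}} implementation; the class name is hypothetical):

{code:java}
// Hypothetical helper showing the intended uptime format 00:00:00.000
// (milliseconds separated by a dot, not a colon).
public class UptimeFormat {
    static String format(long millis) {
        long h = millis / 3_600_000;
        long m = (millis / 60_000) % 60;
        long s = (millis / 1_000) % 60;
        long ms = millis % 1_000;

        return String.format("%02d:%02d:%02d.%03d", h, m, s, ms);
    }

    public static void main(String[] args) {
        // 1 hour, 2 minutes, 3.456 seconds of uptime.
        System.out.println(format(3_723_456)); // prints 01:02:03.456
    }
}
{code}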



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-5786) .NET: Transaction fails with multiple write-through caches

2017-07-19 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093394#comment-16093394
 ] 

Pavel Tupitsyn edited comment on IGNITE-5786 at 7/19/17 4:52 PM:
-

To fix this we could clear the {{KEY_SES}} value in the 
{{PlatformDotNetCacheStore.sessionEnd()}} call. But this would cause two 
sessions to be created and destroyed, which is not correct. So we should instead 
mark the session as destroyed somehow.
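To illustrate the "release once" idea, a minimal, self-contained Java sketch (this 
is not the actual {{PlatformDotNetCacheStore}} code; the class and method names 
are illustrative only): the shared session remembers that its handle has already 
been released, so a second {{sessionEnd()}} becomes a no-op.

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative model of a session shared by two stores in one transaction.
public class SharedSessionSketch {
    /** Set once the .NET session handle has been released. */
    private final AtomicBoolean destroyed = new AtomicBoolean();

    /** Releases the underlying handle at most once. */
    void release(long handle) {
        if (destroyed.compareAndSet(false, true))
            System.out.println("Releasing handle " + handle);
        else
            System.out.println("Handle " + handle + " already released, skipping");
    }

    public static void main(String[] args) {
        SharedSessionSketch ses = new SharedSessionSketch();

        // Both stores call sessionEnd() against the same session; only the
        // first call actually releases the handle.
        ses.release(42);
        ses.release(42);
    }
}
{code}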


was (Author: ptupitsyn):
To fix this we should clear {{KEY_SES}} value in 
{{PlatformDotNetCacheStore.sessionEnd()}} call.

> .NET: Transaction fails with multiple write-through caches
> --
>
> Key: IGNITE-5786
> URL: https://issues.apache.org/jira/browse/IGNITE-5786
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 1.6
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.2
>
>
> To reproduce: create two caches with {{WriteThrough=true}} and some 
> {{CacheStore}} (implementation can be empty).
> Attempt to update both caches within a transaction:
> {code}
> using (var tx = ignite.GetTransactions().TxStart())
> {
> cache1.Put(1, 1);
> cache2.Put(1, 1);
> tx.Commit();
> }
> {code}
> Exception occurs:
> {code}
> (err) Failed to notify listener: 
> o.a.i.i.processors.cache.distributed.near.GridNearTxLocal$16@17695df3javax.cache.integration.CacheWriterException:
>  PlatformNativeException [cause=System.InvalidOperationException 
> [idHash=1909546776, hash=1265661973, 
> ClassName=System.InvalidOperationException, Data=null, ExceptionMethod=8
> Get
> Apache.Ignite.Core, Version=2.1.0.19388, Culture=neutral, 
> PublicKeyToken=a487a7ff0b2aaa4a
> Apache.Ignite.Core.Impl.Handle.HandleRegistry
> T Get[T](Int64, Boolean), HelpURL=null, HResult=-2146233079, 
> InnerException=null, Message=Resource handle has been released (is Ignite 
> stopping?)., RemoteStackIndex=0, RemoteStackTraceString=null, 
> Source=Apache.Ignite.Core, StackTraceString=   at 
> Apache.Ignite.Core.Impl.Handle.HandleRegistry.Get[T](Int64 id, Boolean 
> throwOnAbsent) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Handle\HandleRegistry.cs:line
>  262
>at 
> Apache.Ignite.Core.Impl.Cache.Store.CacheStoreInternal`2.Invoke(IBinaryStream 
> stream, Ignite grid) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Cache\Store\CacheStoreInternal.cs:line
>  112
>at 
> Apache.Ignite.Core.Impl.Cache.Store.CacheStore.Invoke(PlatformMemoryStream 
> stream, Ignite grid) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Cache\Store\CacheStore.cs:line
>  127
>at 
> Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.CacheStoreInvoke(Int64 
> memPtr) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Unmanaged\UnmanagedCallbacks.cs:line
>  366
> {code}
> Explanation:
> * Cache stores share the same session within a transaction
> * The session in Java is used to store the .NET session handle, so both stores 
> have the same .NET session (which is good: consistent with Java)
> * Each store calls sessionEnd, so the session gets released multiple times - 
> this causes the HandleRegistry exception
> The current unit test uses Spring XML with a shared 
> {{PlatformDotNetCacheStoreFactory}}, which caches the created store instance for 
> some reason, so the bug is hidden, since both caches use the same store 
> instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5786) .NET: Transaction fails with multiple write-through caches

2017-07-19 Thread Pavel Tupitsyn (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093394#comment-16093394
 ] 

Pavel Tupitsyn commented on IGNITE-5786:


To fix this we should clear the {{KEY_SES}} value in the 
{{PlatformDotNetCacheStore.sessionEnd()}} call.

> .NET: Transaction fails with multiple write-through caches
> --
>
> Key: IGNITE-5786
> URL: https://issues.apache.org/jira/browse/IGNITE-5786
> Project: Ignite
>  Issue Type: Bug
>  Components: platforms
>Affects Versions: 1.6
>Reporter: Pavel Tupitsyn
>Assignee: Pavel Tupitsyn
>  Labels: .NET
> Fix For: 2.2
>
>
> To reproduce: create two caches with {{WriteThrough=true}} and some 
> {{CacheStore}} (implementation can be empty).
> Attempt to update both caches within a transaction:
> {code}
> using (var tx = ignite.GetTransactions().TxStart())
> {
> cache1.Put(1, 1);
> cache2.Put(1, 1);
> tx.Commit();
> }
> {code}
> Exception occurs:
> {code}
> (err) Failed to notify listener: 
> o.a.i.i.processors.cache.distributed.near.GridNearTxLocal$16@17695df3javax.cache.integration.CacheWriterException:
>  PlatformNativeException [cause=System.InvalidOperationException 
> [idHash=1909546776, hash=1265661973, 
> ClassName=System.InvalidOperationException, Data=null, ExceptionMethod=8
> Get
> Apache.Ignite.Core, Version=2.1.0.19388, Culture=neutral, 
> PublicKeyToken=a487a7ff0b2aaa4a
> Apache.Ignite.Core.Impl.Handle.HandleRegistry
> T Get[T](Int64, Boolean), HelpURL=null, HResult=-2146233079, 
> InnerException=null, Message=Resource handle has been released (is Ignite 
> stopping?)., RemoteStackIndex=0, RemoteStackTraceString=null, 
> Source=Apache.Ignite.Core, StackTraceString=   at 
> Apache.Ignite.Core.Impl.Handle.HandleRegistry.Get[T](Int64 id, Boolean 
> throwOnAbsent) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Handle\HandleRegistry.cs:line
>  262
>at 
> Apache.Ignite.Core.Impl.Cache.Store.CacheStoreInternal`2.Invoke(IBinaryStream 
> stream, Ignite grid) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Cache\Store\CacheStoreInternal.cs:line
>  112
>at 
> Apache.Ignite.Core.Impl.Cache.Store.CacheStore.Invoke(PlatformMemoryStream 
> stream, Ignite grid) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Cache\Store\CacheStore.cs:line
>  127
>at 
> Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.CacheStoreInvoke(Int64 
> memPtr) in 
> C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Unmanaged\UnmanagedCallbacks.cs:line
>  366
> {code}
> Explanation:
> * Cache stores share the same session within a transaction
> * The session in Java is used to store the .NET session handle, so both stores 
> have the same .NET session (which is good: consistent with Java)
> * Each store calls sessionEnd, so the session gets released multiple times - 
> this causes the HandleRegistry exception
> The current unit test uses Spring XML with a shared 
> {{PlatformDotNetCacheStoreFactory}}, which caches the created store instance for 
> some reason, so the bug is hidden, since both caches use the same store 
> instance.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5473) Create ignite troubleshooting logger

2017-07-19 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093392#comment-16093392
 ] 

Konstantin Boudnik commented on IGNITE-5473:


Yup, that's actually better than what I offered ;) Thanks!

> Create ignite troubleshooting logger
> 
>
> Key: IGNITE-5473
> URL: https://issues.apache.org/jira/browse/IGNITE-5473
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.0
>Reporter: Alexey Goncharuk
>Priority: Critical
>  Labels: important, observability
> Fix For: 2.2
>
>
> Currently, we have two extremes of logging - either INFO, which logs almost 
> nothing, or DEBUG, which will pollute logs with too verbose messages.
> We should create a 'troubleshooting' logger, which should be easily enabled 
> (via a system property, for example) and log all stability-critical node and 
> cluster events:
>  * Connection events (both communication and discovery), handshake status
>  * ALL ignored messages and skipped actions (even those we assume are safe to 
> ignore)
>  * Partition exchange stages and timings
>  * Verbose discovery state changes (this should make it easy to understand 
> the reason for 'Node has not been connected to the topology')
>  * Transaction failover stages and actions
>  * All unlogged exceptions
>  * Responses that took more than N milliseconds when normally they should 
> return right away
>  * Long discovery SPI messages processing times
>  * Managed service deployment stages
>  * Marshaller mappings registration and notification
>  * Binary metadata registration and notification
>  * Continuous query registration / notification
> (add more)
> The amount of logging should be chosen carefully so that it would be safe to 
> enable this logger in production clusters.
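A minimal sketch of the "enabled via a system property" gate mentioned in the 
description (the property name {{IGNITE_TROUBLESHOOTING_LOG}} and the wrapper 
class are hypothetical; the real switch would be decided as part of the fix):

{code:java}
import org.apache.ignite.IgniteLogger;

/** Hypothetical wrapper that routes troubleshooting messages through INFO level. */
public class TroubleshootingLog {
    /** Assumed system property; enabled with -DIGNITE_TROUBLESHOOTING_LOG=true. */
    private static final boolean ENABLED = Boolean.getBoolean("IGNITE_TROUBLESHOOTING_LOG");

    private final IgniteLogger log;

    public TroubleshootingLog(IgniteLogger log) {
        this.log = log;
    }

    /** Logs a stability-critical event only when troubleshooting mode is on. */
    public void log(String category, String msg) {
        if (ENABLED && log.isInfoEnabled())
            log.info("[troubleshooting][" + category + "] " + msg);
    }
}
{code}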



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5786) .NET: Transaction fails with multiple write-through caches

2017-07-19 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-5786:
--

 Summary: .NET: Transaction fails with multiple write-through caches
 Key: IGNITE-5786
 URL: https://issues.apache.org/jira/browse/IGNITE-5786
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 1.6
Reporter: Pavel Tupitsyn
Assignee: Pavel Tupitsyn
 Fix For: 2.2


To reproduce: create two caches with {{WriteThrough=true}} and some 
{{CacheStore}} (implementation can be empty).

Attempt to update both caches within a transaction:

{code}
using (var tx = ignite.GetTransactions().TxStart())
{
cache1.Put(1, 1);
cache2.Put(1, 1);

tx.Commit();
}
{code}

Exception occurs:
{code}
(err) Failed to notify listener: 
o.a.i.i.processors.cache.distributed.near.GridNearTxLocal$16@17695df3javax.cache.integration.CacheWriterException:
 PlatformNativeException [cause=System.InvalidOperationException 
[idHash=1909546776, hash=1265661973, 
ClassName=System.InvalidOperationException, Data=null, ExceptionMethod=8
Get
Apache.Ignite.Core, Version=2.1.0.19388, Culture=neutral, 
PublicKeyToken=a487a7ff0b2aaa4a
Apache.Ignite.Core.Impl.Handle.HandleRegistry
T Get[T](Int64, Boolean), HelpURL=null, HResult=-2146233079, 
InnerException=null, Message=Resource handle has been released (is Ignite 
stopping?)., RemoteStackIndex=0, RemoteStackTraceString=null, 
Source=Apache.Ignite.Core, StackTraceString=   at 
Apache.Ignite.Core.Impl.Handle.HandleRegistry.Get[T](Int64 id, Boolean 
throwOnAbsent) in 
C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Handle\HandleRegistry.cs:line
 262
   at 
Apache.Ignite.Core.Impl.Cache.Store.CacheStoreInternal`2.Invoke(IBinaryStream 
stream, Ignite grid) in 
C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Cache\Store\CacheStoreInternal.cs:line
 112
   at 
Apache.Ignite.Core.Impl.Cache.Store.CacheStore.Invoke(PlatformMemoryStream 
stream, Ignite grid) in 
C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Cache\Store\CacheStore.cs:line
 127
   at 
Apache.Ignite.Core.Impl.Unmanaged.UnmanagedCallbacks.CacheStoreInvoke(Int64 
memPtr) in 
C:\w\incubator-ignite\modules\platforms\dotnet\Apache.Ignite.Core\Impl\Unmanaged\UnmanagedCallbacks.cs:line
 366
{code}

Explanation:
* Cache stores share the same session within a transaction
* The session in Java is used to store the .NET session handle, so both stores have the 
same .NET session (which is good: consistent with Java)
* Each store calls sessionEnd, so the session gets released multiple times - this 
causes the HandleRegistry exception

The current unit test uses Spring XML with a shared 
{{PlatformDotNetCacheStoreFactory}}, which caches the created store instance for 
some reason, so the bug is hidden, since both caches use the same store 
instance.
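For reference, a minimal Java analogue of the empty store used in the repro (only a 
sketch; this class is not part of the test): the point is that each cache gets its own 
store instance, the stores share one {{CacheStoreSession}} within the transaction, and 
{{sessionEnd()}} is invoked on each store.

{code:java}
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.cache.store.CacheStoreSession;
import org.apache.ignite.resources.CacheStoreSessionResource;

/** Minimal no-op store; hypothetical Java analogue of the empty .NET store in the repro. */
public class NoopStore extends CacheStoreAdapter<Integer, Integer> {
    /** Session shared by all stores participating in the same transaction. */
    @CacheStoreSessionResource
    private CacheStoreSession ses;

    /** {@inheritDoc} */
    @Override public Integer load(Integer key) {
        return null; // No underlying storage.
    }

    /** {@inheritDoc} */
    @Override public void write(Cache.Entry<? extends Integer, ? extends Integer> entry) {
        // No-op.
    }

    /** {@inheritDoc} */
    @Override public void delete(Object key) {
        // No-op.
    }

    /** {@inheritDoc} */
    @Override public void sessionEnd(boolean commit) {
        // Called once per store; with two caches in one transaction this runs
        // twice against the same shared session, which triggers the double release.
        System.out.println("sessionEnd(" + commit + ") for session " + ses);
    }
}
{code}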



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5473) Create ignite troubleshooting logger

2017-07-19 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093383#comment-16093383
 ] 

Dmitriy Pavlov commented on IGNITE-5473:


[~cos], this issue has a wider scope than IGNITE-5332. I've added a supersedes 
link. Is it correct for this case?

> Create ignite troubleshooting logger
> 
>
> Key: IGNITE-5473
> URL: https://issues.apache.org/jira/browse/IGNITE-5473
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.0
>Reporter: Alexey Goncharuk
>Priority: Critical
>  Labels: important, observability
> Fix For: 2.2
>
>
> Currently, we have two extremes of logging - either INFO, which logs almost 
> nothing, or DEBUG, which will pollute logs with too verbose messages.
> We should create a 'troubleshooting' logger, which should be easily enabled 
> (via a system property, for example) and log all stability-critical node and 
> cluster events:
>  * Connection events (both communication and discovery), handshake status
>  * ALL ignored messages and skipped actions (even those we assume are safe to 
> ignore)
>  * Partition exchange stages and timings
>  * Verbose discovery state changes (this should make it easy to understand 
> the reason for 'Node has not been connected to the topology')
>  * Transaction failover stages and actions
>  * All unlogged exceptions
>  * Responses that took more than N milliseconds when normally they should 
> return right away
>  * Long discovery SPI messages processing times
>  * Managed service deployment stages
>  * Marshaller mappings registration and notification
>  * Binary metadata registration and notification
>  * Continuous query registration / notification
> (add more)
> The amount of logging should be chosen carefully so that it would be safe to 
> enable this logger in production clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5473) Create ignite troubleshooting logger

2017-07-19 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093342#comment-16093342
 ] 

Konstantin Boudnik commented on IGNITE-5473:


Looks like this has accidentally fixed IGNITE-5332 as well? If so, the other 
ticket needs to be marked properly as a duplicate to avoid any confusion for new 
contributors.

> Create ignite troubleshooting logger
> 
>
> Key: IGNITE-5473
> URL: https://issues.apache.org/jira/browse/IGNITE-5473
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 2.0
>Reporter: Alexey Goncharuk
>Priority: Critical
>  Labels: important, observability
> Fix For: 2.2
>
>
> Currently, we have two extremes of logging - either INFO, which logs almost 
> nothing, or DEBUG, which will pollute logs with too verbose messages.
> We should create a 'troubleshooting' logger, which should be easily enabled 
> (via a system property, for example) and log all stability-critical node and 
> cluster events:
>  * Connection events (both communication and discovery), handshake status
>  * ALL ignored messages and skipped actions (even those we assume are safe to 
> ignore)
>  * Partition exchange stages and timings
>  * Verbose discovery state changes (this should make it easy to understand 
> the reason for 'Node has not been connected to the topology')
>  * Transaction failover stages and actions
>  * All unlogged exceptions
>  * Responses that took more than N milliseconds when normally they should 
> return right away
>  * Long discovery SPI messages processing times
>  * Managed service deployment stages
>  * Marshaller mappings registration and notification
>  * Binary metadata registration and notification
>  * Continuous query registration / notification
> (add more)
> The amount of logging should be chosen carefully so that it would be safe to 
> enable this logger in production clusters.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-1094) Ignite.createCache(CacheConfiguration) hangs if some exception occurs during cache initialization

2017-07-19 Thread Alexey Kuznetsov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-1094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093336#comment-16093336
 ] 

Alexey Kuznetsov commented on IGNITE-1094:
--

[~yzhdanov] Consider the 
org.apache.ignite.internal.processors.cache.GridCacheProcessor#createCache() 
method, which creates the cache store as follows (algorithm step 1):
{code:java}
cfgStore = cfg.getCacheStoreFactory() != null ? 
cfg.getCacheStoreFactory().create() : null;
{code}
When an exception arises in store creation, we can set a flag (indicating that 
creation failed) and continue cache creation as if creation succeeded 
(pseudocode):
{code:java}
try {
cfgStore = cfg.getCacheStoreFactory() != null ? 
cfg.getCacheStoreFactory().create() : null;
}
catch (Throwable e) {
desc.cacheStoreCreationFailed(true);
}
{code}
Then, at some later point, we can check this flag to see whether creation failed, and 
continue with algorithm step 2. Do you think it's an appropriate scheme?
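To make the proposed scheme concrete, a small self-contained sketch (all names such 
as {{cacheStoreCreationFailed}} and the descriptor type are hypothetical, just like 
in the pseudocode above): step 1 records the failure instead of letting the exception 
escape, and step 2 checks the flag and fails fast instead of hanging.

{code:java}
import javax.cache.configuration.Factory;

public class StoreCreationFlagDemo {
    /** Hypothetical descriptor that remembers a failed store creation. */
    static class CacheDescriptor {
        private volatile Throwable storeErr;

        void cacheStoreCreationFailed(Throwable t) { storeErr = t; }

        Throwable storeCreationError() { return storeErr; }
    }

    public static void main(String[] args) {
        Factory<Object> storeFactory = () -> { throw new IllegalStateException("broken store"); };
        CacheDescriptor desc = new CacheDescriptor();

        Object cfgStore = null;

        // Algorithm step 1: do not let the exception escape, just remember it.
        try {
            cfgStore = storeFactory != null ? storeFactory.create() : null;
        }
        catch (Throwable e) {
            desc.cacheStoreCreationFailed(e);
        }

        // Algorithm step 2: check the flag and fail the cache start explicitly
        // instead of leaving Ignite.createCache() hanging.
        if (desc.storeCreationError() != null)
            System.out.println("Cache start should fail: " + desc.storeCreationError());
    }
}
{code}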


> Ignite.createCache(CacheConfiguration) hangs if some exception occurs during 
> cache initialization
> -
>
> Key: IGNITE-1094
> URL: https://issues.apache.org/jira/browse/IGNITE-1094
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Sergey Evdokimov
>Assignee: Alexey Kuznetsov
>  Labels: Muted_test
> Fix For: 2.2
>
>
> A user can pass a broken configuration, for example, a store factory that throws 
> an exception from the create() method. I created a test to demonstrate the problem. 
> See IgniteDynamicCacheStartSelfTest#testBrokenStoreFactory in the 'ignite-1094' 
> branch.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-5123) Ignite.cache(String) returns null in PluginProvider.onIgniteStart()

2017-07-19 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093304#comment-16093304
 ] 

Dmitriy Pavlov edited comment on IGNITE-5123 at 7/19/17 3:51 PM:
-

See also related discussions on dev list 
http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-5123-td19337.html
http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-5123-Review-td19813.html
 


was (Author: dpavlov):
See also related discussion on dev list 
http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-5123-Review-td19813.html
 

> Ignite.cache(String) returns null in PluginProvider.onIgniteStart()
> ---
>
> Key: IGNITE-5123
> URL: https://issues.apache.org/jira/browse/IGNITE-5123
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Nick Pordash
>Assignee: Yevgeniy Ignatyev
> Fix For: 2.2
>
> Attachments: ignite-plugin-failure.zip
>
>
> Given an Ignite node that has pre-configured caches (via 
> IgniteConfiguration.setCacheConfiguration), if you try to obtain a reference 
> to the cache instance in PluginProvider.onIgniteStart() you'll get a null 
> reference.
> {code:java}
> @Override
> public void onIgniteStart() throws IgniteCheckedException {
>     ignite.cacheNames().forEach(name -> {
>         assert ignite.cache(name) != null : "Cache is null: " + name;
>     });
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5123) Ignite.cache(String) returns null in PluginProvider.onIgniteStart()

2017-07-19 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093304#comment-16093304
 ] 

Dmitriy Pavlov commented on IGNITE-5123:


See also related discussion on dev list 
http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-5123-Review-td19813.html
 

> Ignite.cache(String) returns null in PluginProvider.onIgniteStart()
> ---
>
> Key: IGNITE-5123
> URL: https://issues.apache.org/jira/browse/IGNITE-5123
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Nick Pordash
>Assignee: Yevgeniy Ignatyev
> Fix For: 2.2
>
> Attachments: ignite-plugin-failure.zip
>
>
> Given an Ignite node that has pre-configured caches (via 
> IgniteConfiguration.setCacheConfiguration), if you try to obtain a reference 
> to the cache instance in PluginProvider.onIgniteStart() you'll get a null 
> reference.
> {code:java}
> @Override
> public void onIgniteStart() throws IgniteCheckedException {
>     ignite.cacheNames().forEach(name -> {
>         assert ignite.cache(name) != null : "Cache is null: " + name;
>     });
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-5772) Race between WAL segment rollover and concurrent log

2017-07-19 Thread Ilya Lantukh (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Lantukh reassigned IGNITE-5772:


Assignee: Alexey Goncharuk  (was: Ilya Lantukh)

> Race between WAL segment rollover and concurrent log
> 
>
> Key: IGNITE-5772
> URL: https://issues.apache.org/jira/browse/IGNITE-5772
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.1
>Reporter: Alexey Goncharuk
>Assignee: Alexey Goncharuk
> Fix For: 2.2
>
>
> The WAL log() and close() are synchronized as follows:
> log: read head, check stop flag, CAS head;
> close: set stop flag, CAS head to a fake record.
> This guarantees that after close() is called, there will be no other records 
> appended to the closed segment.
> Now consider three threads doing the following operations:
> T1: flush(); T2: rollOver(); T3: log();
> The sequence of events:
> 1) T1 does a CAS of head to FakeRecord
> 2) T3 reads head as FakeRecord, reads stop flag as false
> 3) T2 attempts to rollOver: CAS stop to true; call flushOrWait(null); call 
> flush(null); Since the head is an instance of FakeRecord, the flush(null) 
> immediately returns false. This thread waits for written bytes and proceeds
> 4) T3 successfully does a CAS of head to non-fake record
> 5) T2 proceeds with rollOver, signals next available and asserts on head.
> The invariant above is broken when T2 does not CAS fake record during 
> rollover, which allows T3 to append an entry to the closed segment. The 
> solution is to change the code so the CAS is always attempted on close even 
> if the current head is already a FakeRecord.
> Alternatively, we can introduce another type of fake record that will seal 
> the WAL segment queue.
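For illustration, a self-contained sketch of the proposed close() behaviour (the head 
pointer and records are modelled with plain types; this is not the actual WAL manager 
code): close() keeps CAS-ing its own fake record in even when the current head is 
already a FakeRecord left by a concurrent flush, so a racing log() can never append a 
real record to the sealed segment.

{code:java}
import java.util.concurrent.atomic.AtomicReference;

public class WalHeadCloseSketch {
    static class Record { }
    static class FakeRecord extends Record { }

    private final AtomicReference<Record> head = new AtomicReference<>(new Record());
    private volatile boolean stopped;

    /** Appends a record unless the segment is being closed. */
    boolean log(Record rec) {
        while (true) {
            Record cur = head.get();

            if (stopped)
                return false; // Segment sealed, caller must roll over.

            if (head.compareAndSet(cur, rec))
                return true;
        }
    }

    /** Seals the segment: always installs our own fake record, even over another FakeRecord. */
    void close() {
        stopped = true;

        FakeRecord seal = new FakeRecord();

        while (true) {
            Record cur = head.get();

            if (cur == seal || head.compareAndSet(cur, seal))
                return;
        }
    }
}
{code}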



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (IGNITE-5772) Race between WAL segment rollover and concurrent log

2017-07-19 Thread Ilya Lantukh (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Lantukh reopened IGNITE-5772:
--

> Race between WAL segment rollover and concurrent log
> 
>
> Key: IGNITE-5772
> URL: https://issues.apache.org/jira/browse/IGNITE-5772
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.1
>Reporter: Alexey Goncharuk
>Assignee: Ilya Lantukh
> Fix For: 2.2
>
>
> The WAL log() and close() are synchronized as follows:
> log: read head, check stop flag, CAS head;
> close: set stop flag, CAS head to a fake record.
> This guarantees that after close() is called, there will be no other records 
> appended to the closed segment.
> Now consider three threads doing the following operations:
> T1: flush(); T2: rollOver(); T3: log();
> The sequence of events:
> 1) T1 does a CAS of head to FakeRecord
> 2) T3 reads head as FakeRecord, reads stop flag as false
> 3) T2 attempts to rollOver: CAS stop to true; call flushOrWait(null); call 
> flush(null); Since the head is an instance of FakeRecord, the flush(null) 
> immediately returns false. This thread waits for written bytes and proceeds
> 4) T3 successfully does a CAS of head to non-fake record
> 5) T2 proceeds with rollOver, signals next available and asserts on head.
> The invariant above is broken when T2 does not CAS fake record during 
> rollover, which allows T3 to append an entry to the closed segment. The 
> solution is to change the code so the CAS is always attempted on close even 
> if the current head is already a FakeRecord.
> Alternatively, we can introduce another type of fake record that will seal 
> the WAL segment queue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (IGNITE-5772) Race between WAL segment rollover and concurrent log

2017-07-19 Thread Ilya Lantukh (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Lantukh resolved IGNITE-5772.
--
Resolution: Fixed

> Race between WAL segment rollover and concurrent log
> 
>
> Key: IGNITE-5772
> URL: https://issues.apache.org/jira/browse/IGNITE-5772
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.1
>Reporter: Alexey Goncharuk
>Assignee: Ilya Lantukh
> Fix For: 2.2
>
>
> The WAL log() and close() are synchronized as follows:
> log: read head, check stop flag, CAS head;
> close: set stop flag, CAS head to a fake record.
> This guarantees that after close() is called, there will be no other records 
> appended to the closed segment.
> Now consider three threads doing the following operations:
> T1: flush(); T2: rollOver(); T3: log();
> The sequence of events:
> 1) T1 does a CAS of head to FakeRecord
> 2) T3 reads head as FakeRecord, reads stop flag as false
> 3) T2 attempts to rollOver: CAS stop to true; call flushOrWait(null); call 
> flush(null); Since the head is an instance of FakeRecord, the flush(null) 
> immediately returns false. This thread waits for written bytes and proceeds
> 4) T3 successfully does a CAS of head to non-fake record
> 5) T2 proceeds with rollOver, signals next available and asserts on head.
> The invariant above is broken when T2 does not CAS fake record during 
> rollover, which allows T3 to append an entry to the closed segment. The 
> solution is to change the code so the CAS is always attempted on close even 
> if the current head is already a FakeRecord.
> Alternatively, we can introduce another type of fake record that will seal 
> the WAL segment queue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-5775) Compute runs one job in MetricsUpdateFrequency per thread after all jobs was submitted(as onCollision is not called)

2017-07-19 Thread Evgenii Zhuravlev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093135#comment-16093135
 ] 

Evgenii Zhuravlev edited comment on IGNITE-5775 at 7/19/17 2:11 PM:


That happened because the job was removed from the activeJobs list only after the 
handleCollision method.
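A deliberately simplified, hypothetical sketch of the ordering problem (not the actual 
GridJobProcessor code): if the finished job is removed from activeJobs only after 
handleCollision() runs, the resolver still sees the slot as busy and the next waiting 
job is only activated on the next periodic call (once per metrics update); removing the 
job first frees the slot immediately.

{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

/** Simplified, hypothetical model of the activeJobs / collision interaction. */
public class CollisionOrderSketch {
    private final Queue<String> waitingJobs = new ArrayDeque<>();
    private final Queue<String> activeJobs = new ArrayDeque<>();
    private final int maxParallel = 1;

    /** Collision resolution: activates waiting jobs while there is a free slot. */
    private void handleCollision() {
        while (activeJobs.size() < maxParallel && !waitingJobs.isEmpty()) {
            String job = waitingJobs.poll();
            activeJobs.add(job);
            System.out.println("Activated " + job);
        }
    }

    /** Buggy order: the finished job still occupies a slot during collision resolution. */
    void onJobFinishedBuggy(String job) {
        handleCollision();   // Sees no free slot -> next job waits for the periodic call.
        activeJobs.remove(job);
    }

    /** Fixed order: free the slot first, then resolve collisions. */
    void onJobFinishedFixed(String job) {
        activeJobs.remove(job);
        handleCollision();   // Next waiting job starts immediately.
    }

    public static void main(String[] args) {
        CollisionOrderSketch s = new CollisionOrderSketch();
        s.activeJobs.add("job-1");
        s.waitingJobs.add("job-2");

        s.onJobFinishedFixed("job-1"); // prints "Activated job-2"
    }
}
{code}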


was (Author: ezhuravl):
That happened because job remove from activeJobs list only after 
handleCollision method.

> Compute runs one job in MetricsUpdateFrequency per thread after all jobs was 
> submitted(as onCollision is not called)
> 
>
> Key: IGNITE-5775
> URL: https://issues.apache.org/jira/browse/IGNITE-5775
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Evgenii Zhuravlev
>Assignee: Evgenii Zhuravlev
>Priority: Critical
> Fix For: 2.2
>
> Attachments: Compute.java
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5775) Compute runs one job in MetricsUpdateFrequency per thread after all jobs was submitted(as onCollision is not called)

2017-07-19 Thread Evgenii Zhuravlev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093135#comment-16093135
 ] 

Evgenii Zhuravlev commented on IGNITE-5775:
---

That happened because the job is removed from the activeJobs list only after the 
handleCollision method.

> Compute runs one job in MetricsUpdateFrequency per thread after all jobs was 
> submitted(as onCollision is not called)
> 
>
> Key: IGNITE-5775
> URL: https://issues.apache.org/jira/browse/IGNITE-5775
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Evgenii Zhuravlev
>Assignee: Evgenii Zhuravlev
>Priority: Critical
> Fix For: 2.2
>
> Attachments: Compute.java
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-3950) Deadlock when exchange starts with pending explicit lock

2017-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093072#comment-16093072
 ] 

ASF GitHub Bot commented on IGNITE-3950:


GitHub user BiryukovVA opened a pull request:

https://github.com/apache/ignite/pull/2322

IGNITE-3950: Issue is not reproduce.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/BiryukovVA/ignite IGNITE-3950

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2322.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2322


commit 51368ffa65259e2f449aeaf04c2189f12139a629
Author: Vitaliy Biryukov 
Date:   2017-07-19T13:29:36Z

IGNITE-3950: Issue is not reproduce.




> Deadlock when exchange starts with pending explicit lock
> 
>
> Key: IGNITE-3950
> URL: https://issues.apache.org/jira/browse/IGNITE-3950
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ilya Lantukh
>Assignee: Vitaliy Biryukov 
>  Labels: Muted_test, test-fail
>
> Reproduced by IgniteCacheMultiLockSelfTest#testExplicitLockManyKeysWithClient 
> (hangs with ~10% probability).
> Exchange worker waits for lock to be released:
> {noformat}
> Thread [name="exchange-worker-#155%dht.IgniteCacheMultiTxLockSelfTest3%", 
> id=195, state=TIMED_WAITING, blockCnt=0, waitCnt=44]
> Lock 
> [object=o.a.i.i.processors.cache.GridCacheMvccManager$FinishLockFuture@2638011,
>  ownerName=null, ownerId=-1]
> at sun.misc.Unsafe.park(Native Method)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> at 
> o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:187)
> at 
> o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:137)
> at 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.waitPartitionRelease(GridDhtPartitionsExchangeFuture.java:835)
> at 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:763)
> at 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:516)
> at 
> o.a.i.i.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1464)
> at o.a.i.i.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> while thread that holds lock cannot finish cache operation:
> {noformat}
> "Thread-9@3645" prio=5 tid=0x11a nid=NA waiting
>   java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Unsafe.java:-1)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:157)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:117)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$45.inOp(GridCacheAdapter.java:2849)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$SyncInOp.op(GridCacheAdapter.java:5303)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4351)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAll(GridCacheAdapter.java:2847)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.putAll(GridCacheProxyImpl.java:838)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCacheMultiTxLockSelfTest$1.run(IgniteCacheMultiTxLockSelfTest.java:218)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-3950) Deadlock when exchange starts with pending explicit lock

2017-07-19 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093031#comment-16093031
 ] 

Dmitriy Pavlov commented on IGNITE-3950:


[~VitaliyB], I agree, we can try. Probably this issue was already fixed.

> Deadlock when exchange starts with pending explicit lock
> 
>
> Key: IGNITE-3950
> URL: https://issues.apache.org/jira/browse/IGNITE-3950
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ilya Lantukh
>Assignee: Vitaliy Biryukov 
>  Labels: Muted_test, test-fail
>
> Reproduced by IgniteCacheMultiLockSelfTest#testExplicitLockManyKeysWithClient 
> (hangs with ~10% probability).
> Exchange worker waits for lock to be released:
> {noformat}
> Thread [name="exchange-worker-#155%dht.IgniteCacheMultiTxLockSelfTest3%", 
> id=195, state=TIMED_WAITING, blockCnt=0, waitCnt=44]
> Lock 
> [object=o.a.i.i.processors.cache.GridCacheMvccManager$FinishLockFuture@2638011,
>  ownerName=null, ownerId=-1]
> at sun.misc.Unsafe.park(Native Method)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> at 
> o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:187)
> at 
> o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:137)
> at 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.waitPartitionRelease(GridDhtPartitionsExchangeFuture.java:835)
> at 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:763)
> at 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:516)
> at 
> o.a.i.i.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1464)
> at o.a.i.i.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> while thread that holds lock cannot finish cache operation:
> {noformat}
> "Thread-9@3645" prio=5 tid=0x11a nid=NA waiting
>   java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Unsafe.java:-1)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:157)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:117)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$45.inOp(GridCacheAdapter.java:2849)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$SyncInOp.op(GridCacheAdapter.java:5303)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4351)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAll(GridCacheAdapter.java:2847)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.putAll(GridCacheProxyImpl.java:838)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCacheMultiTxLockSelfTest$1.run(IgniteCacheMultiTxLockSelfTest.java:218)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5781) Visor throws ClassCastException if cache store implementation is other than CacheJdbcPojoStore

2017-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16093027#comment-16093027
 ] 

ASF GitHub Bot commented on IGNITE-5781:


GitHub user Desperus opened a pull request:

https://github.com/apache/ignite/pull/2321

IGNITE-5781 Visor throws ClassCastException if cache store implementa…

…tion is other than CacheJdbcPojoStore

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Desperus/ignite IGNITE-5781

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/2321.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2321


commit bc038be5540e5fd8148509e31ac1b51afe37eecf
Author: Aleksandr_Meterko 
Date:   2017-07-19T12:55:15Z

IGNITE-5781 Visor throws ClassCastException if cache store implementation 
is other than CacheJdbcPojoStore




> Visor throws ClassCastException if cache store implementation is other than 
> CacheJdbcPojoStore
> --
>
> Key: IGNITE-5781
> URL: https://issues.apache.org/jira/browse/IGNITE-5781
> Project: Ignite
>  Issue Type: Bug
>  Components: visor
>Affects Versions: 2.0
>Reporter: Valentin Kulichenko
>Assignee: Aleksandr Meterko
> Fix For: 2.2
>
>
> Issue is reported on user list: 
> http://apache-ignite-users.70518.x6.nabble.com/Problem-with-Visor-and-Cassandra-Cache-Store-td15076.html
> There is an obvious bug in the code. {{VisorCacheJdbcType#list}} method 
> checks the type of store factory like this:
> {code}
> if (factory != null || factory instanceof CacheJdbcPojoStoreFactory) {
> CacheJdbcPojoStoreFactory jdbcFactory = (CacheJdbcPojoStoreFactory) 
> factory;
> {code}
> It should be {{&&}} instead of {{||}}, because otherwise the condition will be 
> {{true}} for any factory that is not {{null}}. Even better, {{factory != 
> null}} can be removed completely, as {{instanceof}} returns {{false}} for {{null}} 
> values anyway.
> However, it's not clear to me why this scenario is reproduced only in certain 
> conditions (see mailing list thread for details). It's possible that there is 
> another hidden bug, this needs to be investigated.
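A minimal sketch of the corrected check (the surrounding method here is illustrative 
only, not the actual {{VisorCacheJdbcType#list}} body):

{code:java}
import javax.cache.configuration.Factory;
import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;

public class StoreFactoryCheck {
    /** Returns the factory only if it really is a CacheJdbcPojoStoreFactory, otherwise null. */
    static CacheJdbcPojoStoreFactory<?, ?> pojoFactory(Factory<?> factory) {
        // instanceof is false for null, so a separate factory != null check is redundant.
        if (factory instanceof CacheJdbcPojoStoreFactory)
            return (CacheJdbcPojoStoreFactory<?, ?>)factory;

        return null;
    }
}
{code}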



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-5781) Visor throws ClassCastException if cache store implementation is other than CacheJdbcPojoStore

2017-07-19 Thread Aleksandr Meterko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Meterko reassigned IGNITE-5781:
-

Assignee: Aleksandr Meterko

> Visor throws ClassCastException if cache store implementation is other than 
> CacheJdbcPojoStore
> --
>
> Key: IGNITE-5781
> URL: https://issues.apache.org/jira/browse/IGNITE-5781
> Project: Ignite
>  Issue Type: Bug
>  Components: visor
>Affects Versions: 2.0
>Reporter: Valentin Kulichenko
>Assignee: Aleksandr Meterko
> Fix For: 2.2
>
>
> Issue is reported on user list: 
> http://apache-ignite-users.70518.x6.nabble.com/Problem-with-Visor-and-Cassandra-Cache-Store-td15076.html
> There is an obvious bug in the code. {{VisorCacheJdbcType#list}} method 
> checks the type of store factory like this:
> {code}
> if (factory != null || factory instanceof CacheJdbcPojoStoreFactory) {
> CacheJdbcPojoStoreFactory jdbcFactory = (CacheJdbcPojoStoreFactory) 
> factory;
> {code}
> It should be {{&&}} instead of {{||}}, because otherwise the condition will be 
> {{true}} for any factory that is not {{null}}. Even better, {{factory != 
> null}} can be removed completely, as {{instanceof}} returns {{false}} for {{null}} 
> values anyway.
> However, it's not clear to me why this scenario is reproduced only in certain 
> conditions (see mailing list thread for details). It's possible that there is 
> another hidden bug, this needs to be investigated.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (IGNITE-5682) GridCacheRabalancingDelayedPartitionMapExchangeSelfTest fails

2017-07-19 Thread Dmitriy Pavlov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16085468#comment-16085468
 ] 

Dmitriy Pavlov edited comment on IGNITE-5682 at 7/19/17 12:43 PM:
--

[~agoncharuk], could you please review my changes?

https://github.com/apache/ignite/pull/2274
http://ci.ignite.apache.org/viewLog.html?buildId=721846=buildResultsDiv=Ignite20Tests_RunAll
http://reviews.ignite.apache.org/ignite/review/IGNT-CR-237

The fix involves a change to GridDhtPartFullMessage for the case when 
resendAllPartitions() is used:
- Such messages are still not marked as related to an exchange, but they contain a 
topology version.
- The same change was done for DhtTopologyImpl. Now it checks that the topology 
version from the GridDhtFullMessage is not less than the latest version observed from 
exchange. For exchange messages the topology version check remains as before: "the 
version must be greater than the latest observed".


was (Author: dpavlov):
[~agoncharuk], could you please review my changes?

https://github.com/apache/ignite/pull/2274
http://ci.ignite.apache.org/viewLog.html?buildId=721846=buildResultsDiv=Ignite20Tests_RunAll


> GridCacheRabalancingDelayedPartitionMapExchangeSelfTest fails
> -
>
> Key: IGNITE-5682
> URL: https://issues.apache.org/jira/browse/IGNITE-5682
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Vladimir Ozerov
>Assignee: Dmitriy Pavlov
>  Labels: test-fail
> Fix For: 2.2
>
> Attachments: ignite-5682.dump.txt
>
>
> This appears to be a regression introduced during persistent store migration. 
> {code}
> class org.apache.ignite.IgniteException: Timeout of waiting for topology map 
> update 
> [igniteInstanceName=rebalancing.GridCacheRabalancingDelayedPartitionMapExchangeSelfTest1,
>  cache=default, cacheId=1544803905, topVer=AffinityTopologyVersion 
> [topVer=10, minorTopVer=0], p=0, readVer=AffinityTopologyVersion [topVer=10, 
> minorTopVer=0], locNode=TcpDiscoveryNode 
> [id=c53cc66c-05ea-4441-825c-23d99ef1, addrs=[127.0.0.1], 
> sockAddrs=[/127.0.0.1:47501], discPort=47501, order=2, intOrder=2, 
> lastExchangeTime=1499156862204, loc=true, ver=2.1.0#19700101-sha1:, 
> isClient=false]]
>   at 
> org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.awaitPartitionMapExchange(GridCommonAbstractTest.java:698)
>   at 
> org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.awaitPartitionMapExchange(GridCommonAbstractTest.java:532)
>   at 
> org.apache.ignite.testframework.junits.common.GridCommonAbstractTest.awaitPartitionMapExchange(GridCommonAbstractTest.java:517)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.rebalancing.GridCacheRabalancingDelayedPartitionMapExchangeSelfTest.test(GridCacheRabalancingDelayedPartitionMapExchangeSelfTest.java:154)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at junit.framework.TestCase.runTest(TestCase.java:176)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:1997)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:132)
>   at 
> org.apache.ignite.testframework.junits.GridAbstractTest$5.run(GridAbstractTest.java:1912)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5706) Redis FLUSHDB command support

2017-07-19 Thread Roman Shtykh (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092974#comment-16092974
 ] 

Roman Shtykh commented on IGNITE-5706:
--

[~anovikov] I implemented {{FLUSHALL}} (and {{CACHE_CLEAR}}, but not for Jetty). 
Can you please have a look?

> Redis FLUSHDB command support
> -
>
> Key: IGNITE-5706
> URL: https://issues.apache.org/jira/browse/IGNITE-5706
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Roman Shtykh
>Assignee: Roman Shtykh
>
> https://redis.io/commands/flushdb



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-3950) Deadlock when exchange starts with pending explicit lock

2017-07-19 Thread Vyacheslav Daradur (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vyacheslav Daradur reassigned IGNITE-3950:
--

Assignee: Vitaliy Biryukov 

> Deadlock when exchange starts with pending explicit lock
> 
>
> Key: IGNITE-3950
> URL: https://issues.apache.org/jira/browse/IGNITE-3950
> Project: Ignite
>  Issue Type: Bug
>Reporter: Ilya Lantukh
>Assignee: Vitaliy Biryukov 
>  Labels: Muted_test, test-fail
>
> Reproduced by IgniteCacheMultiLockSelfTest#testExplicitLockManyKeysWithClient 
> (hangs with ~10% probability).
> Exchange worker waits for lock to be released:
> {noformat}
> Thread [name="exchange-worker-#155%dht.IgniteCacheMultiTxLockSelfTest3%", 
> id=195, state=TIMED_WAITING, blockCnt=0, waitCnt=44]
> Lock 
> [object=o.a.i.i.processors.cache.GridCacheMvccManager$FinishLockFuture@2638011,
>  ownerName=null, ownerId=-1]
> at sun.misc.Unsafe.park(Native Method)
> at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
> at 
> o.a.i.i.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:187)
> at 
> o.a.i.i.util.future.GridFutureAdapter.get(GridFutureAdapter.java:137)
> at 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.waitPartitionRelease(GridDhtPartitionsExchangeFuture.java:835)
> at 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:763)
> at 
> o.a.i.i.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:516)
> at 
> o.a.i.i.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1464)
> at o.a.i.i.util.worker.GridWorker.run(GridWorker.java:110)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> while thread that holds lock cannot finish cache operation:
> {noformat}
> "Thread-9@3645" prio=5 tid=0x11a nid=NA waiting
>   java.lang.Thread.State: WAITING
> at sun.misc.Unsafe.park(Unsafe.java:-1)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:157)
> at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:117)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$45.inOp(GridCacheAdapter.java:2849)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$SyncInOp.op(GridCacheAdapter.java:5303)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4351)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAll(GridCacheAdapter.java:2847)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.putAll(GridCacheProxyImpl.java:838)
> at 
> org.apache.ignite.internal.processors.cache.distributed.dht.IgniteCacheMultiTxLockSelfTest$1.run(IgniteCacheMultiTxLockSelfTest.java:218)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-4181) The several runs of ServicesExample causes NPE

2017-07-19 Thread Andrey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Kuznetsov reassigned IGNITE-4181:


Assignee: Andrey Kuznetsov

> The several runs of ServicesExample causes NPE
> --
>
> Key: IGNITE-4181
> URL: https://issues.apache.org/jira/browse/IGNITE-4181
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.6, 1.7
> Environment: Windows 10, Oracle JDK 7
>Reporter: Sergey Kozlov
>Assignee: Andrey Kuznetsov
>  Labels: newbie
> Fix For: 2.2
>
>
> 0. Open example project in IDEA
> 1. Start 2-3 {{ExampleNodeStartup}}
> 2. Run {{ServicesExample}} several times.
> Sometimes it causes NullPointerException:
> {noformat}
> Executing closure [mapSize=10]
> Service was cancelled: myNodeSingletonService
> [15:37:20,020][INFO ][srvc-deploy-#24%null%][GridServiceProcessor] Cancelled 
> service instance [name=myNodeSingletonService, 
> execId=88a92a4d-c1cb-4a9b-8930-c67ac7f42bf3]
> [15:37:20,032][INFO ][sys-#33%null%][GridCacheProcessor] Stopped cache: 
> myNodeSingletonService
> [15:37:20,033][INFO 
> ][exchange-worker-#23%null%][GridCachePartitionExchangeManager] Skipping 
> rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=10, 
> minorTopVer=4], evt=DISCOVERY_CUSTOM_EVT, 
> node=5faac72a-72ab-4277-9643-0e962973b3f4]
> [15:37:20,045][INFO ][sys-#39%null%][GridCacheProcessor] Stopped cache: 
> myClusterSingletonService
> [15:37:20,046][INFO 
> ][exchange-worker-#23%null%][GridCachePartitionExchangeManager] Skipping 
> rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=10, 
> minorTopVer=5], evt=DISCOVERY_CUSTOM_EVT, 
> node=478f1752-fdce-42c6-aef6-55a5f4c08d90]
> [15:37:20,062][INFO ][disco-event-worker-#20%null%][GridDiscoveryManager] 
> Node left topology: TcpDiscoveryNode 
> [id=4f9cbc67-d756-4c25-9ee4-aee6528da024, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 
> 172.25.4.107, 2001:0:9d38:6ab8:34b2:9f3e:3c6f:269], 
> sockAddrs=[/2001:0:9d38:6ab8:34b2:9f3e:3c6f:269:0, /127.0.0.1:0, 
> /0:0:0:0:0:0:0:1:0, work-pc/172.25.4.107:0], discPort=0, order=10, 
> intOrder=7, lastExchangeTime=1478522239236, loc=false, 
> ver=1.7.3#20161107-sha1:5132ac87, isClient=true]
> [15:37:20,063][INFO ][disco-event-worker-#20%null%][GridDiscoveryManager] 
> Topology snapshot [ver=11, servers=3, clients=0, CPUs=8, heap=11.0GB]
> [15:37:20,064][INFO ][sys-#44%null%][GridCacheProcessor] Stopped cache: 
> myMultiService
> [15:37:20,066][INFO 
> ][exchange-worker-#23%null%][GridCachePartitionExchangeManager] Skipping 
> rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=10, 
> minorTopVer=6], evt=DISCOVERY_CUSTOM_EVT, 
> node=5faac72a-72ab-4277-9643-0e962973b3f4]
> [15:37:20,076][INFO ][exchange-worker-#23%null%][GridCacheProcessor] Started 
> cache [name=myClusterSingletonService, mode=PARTITIONED]
> [15:37:20,115][INFO 
> ][exchange-worker-#23%null%][GridCachePartitionExchangeManager] Skipping 
> rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=10, 
> minorTopVer=7], evt=DISCOVERY_CUSTOM_EVT, 
> node=478f1752-fdce-42c6-aef6-55a5f4c08d90]
> [15:37:20,121][INFO 
> ][exchange-worker-#23%null%][GridCachePartitionExchangeManager] Skipping 
> rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=11, 
> minorTopVer=0], evt=NODE_LEFT, node=4f9cbc67-d756-4c25-9ee4-aee6528da024]
> [15:37:20,133][INFO ][exchange-worker-#23%null%][GridCacheProcessor] Started 
> cache [name=myMultiService, mode=PARTITIONED]
> [15:37:20,135][ERROR][exchange-worker-#23%null%][GridDhtPartitionsExchangeFuture]
>  Failed to reinitialize local partitions (preloading will be stopped): 
> GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=11, 
> minorTopVer=1], nodeId=5faac72a, evt=DISCOVERY_CUSTOM_EVT]
> java.lang.NullPointerException
>   at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.initStartedCacheOnCoordinator(CacheAffinitySharedManager.java:743)
>   at 
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.onCacheChangeRequest(CacheAffinitySharedManager.java:413)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:565)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:448)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1447)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:745)
> 

[jira] [Updated] (IGNITE-3568) .NET: Start JVM externally (thin client)

2017-07-19 Thread Pavel Tupitsyn (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pavel Tupitsyn updated IGNITE-3568:
---
Summary: .NET: Start JVM externally (thin client)  (was: .NET: Start JVM 
externally)

> .NET: Start JVM externally (thin client)
> 
>
> Key: IGNITE-3568
> URL: https://issues.apache.org/jira/browse/IGNITE-3568
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Affects Versions: 1.6
>Reporter: Vladimir Ozerov
>Assignee: Pavel Tupitsyn
>Priority: Minor
>  Labels: .net
> Fix For: 2.2
>
>
> Currently we start the JVM inside the .NET process. This is not good for several 
> reasons:
> 1) Broken isolation - only one JVM can exist per process, so it is 
> impossible to start two Ignite instances with different JVM options.
> 2) JVM startup is expensive, cluster connection is expensive, and the process 
> must host both Java and .NET heaps. If we had an external JVM to connect 
> to, we would allow for truly thin clients, where dozens of thin processes would be 
> able to work with the same client. We already see growing demand for this 
> feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5776) Add option to turn on filter reachable addresses in TcpCommunicationSpi

2017-07-19 Thread Evgenii Zhuravlev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgenii Zhuravlev updated IGNITE-5776:
--
Summary: Add option to turn on filter reachable addresses in 
TcpCommunicationSpi  (was: Add option to turn out filter reachable addresses in 
TcpCommunicationSpi)

> Add option to turn on filter reachable addresses in TcpCommunicationSpi
> ---
>
> Key: IGNITE-5776
> URL: https://issues.apache.org/jira/browse/IGNITE-5776
> Project: Ignite
>  Issue Type: Improvement
>Reporter: Evgenii Zhuravlev
>Assignee: Evgenii Zhuravlev
> Fix For: 2.2
>
>
> If port 7 (the default port used by InetAddress.isReachable) is not open, each 
> creation of a tcpClient leads to an additional delay of about 2000 ms because of 
> reachable-address filtering.
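> A minimal sketch of how the proposed option might be used (the {{filterReachableAddresses}} property name is an assumption, not a final API):
> {code:java}
> import org.apache.ignite.Ignite;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.configuration.IgniteConfiguration;
> import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
> 
> public class CommSpiConfig {
>     public static void main(String[] args) {
>         TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
> 
>         // Proposed flag (name assumed, disabled by default): skip InetAddress.isReachable()
>         // probing so that a blocked port 7 does not add ~2000 ms per created TCP client.
>         commSpi.setFilterReachableAddresses(false);
> 
>         IgniteConfiguration cfg = new IgniteConfiguration().setCommunicationSpi(commSpi);
> 
>         try (Ignite ignite = Ignition.start(cfg)) {
>             // Node runs with reachability filtering turned off.
>         }
>     }
> }
> {code}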



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-647) org.apache.ignite.IgniteCacheAffinitySelfTest.testAffinity() hangs

2017-07-19 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov updated IGNITE-647:

Fix Version/s: 2.0

> org.apache.ignite.IgniteCacheAffinitySelfTest.testAffinity() hangs
> --
>
> Key: IGNITE-647
> URL: https://issues.apache.org/jira/browse/IGNITE-647
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Yakov Zhdanov
>Assignee: Semen Boikov
>  Labels: Muted_test
> Fix For: 2.0
>
> Attachments: dump1.txt, dump2.txt, dump3456.txt, 
> FairAffinityDynamicCacheSelfTest.testStartStopCache.txt, log.txt, 
> threaddump.txt
>
>
> 1-2 runs out of ~10 local runs hung for me



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-647) org.apache.ignite.IgniteCacheAffinitySelfTest.testAffinity() hangs

2017-07-19 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov updated IGNITE-647:

Priority: Major  (was: Critical)

> org.apache.ignite.IgniteCacheAffinitySelfTest.testAffinity() hangs
> --
>
> Key: IGNITE-647
> URL: https://issues.apache.org/jira/browse/IGNITE-647
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Yakov Zhdanov
>Assignee: Semen Boikov
>  Labels: Muted_test
> Attachments: dump1.txt, dump2.txt, dump3456.txt, 
> FairAffinityDynamicCacheSelfTest.testStartStopCache.txt, log.txt, 
> threaddump.txt
>
>
> 1-2 runs out of ~10 local runs hung for me



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-4380) Cache invoke calls can be lost

2017-07-19 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-4380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov updated IGNITE-4380:
-
Affects Version/s: 2.0

> Cache invoke calls can be lost
> --
>
> Key: IGNITE-4380
> URL: https://issues.apache.org/jira/browse/IGNITE-4380
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.0
>Reporter: Semen Boikov
>Assignee: Semen Boikov
>Priority: Critical
> Fix For: 2.2
>
>
> Recently added test 
> GridCacheAbstractFullApiSelfTest.testInvokeAllMultithreaded fails on TC in 
> various configurations with transactional cache.
> Example of failure 
> GridCacheReplicatedOffHeapTieredMultiNodeFullApiSelfTest.testInvokeAllMultithreaded:
> {noformat}
> junit.framework.AssertionFailedError: expected:<2> but was:<10868>
> at junit.framework.Assert.fail(Assert.java:57)
> at junit.framework.Assert.failNotEquals(Assert.java:329)
> at junit.framework.Assert.assertEquals(Assert.java:78)
> at junit.framework.Assert.assertEquals(Assert.java:234)
> at junit.framework.Assert.assertEquals(Assert.java:241)
> at junit.framework.TestCase.assertEquals(TestCase.java:409)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAbstractFullApiSelfTest.testInvokeAllMultithreaded(GridCacheAbstractFullApiSelfTest.java:342)
> at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at junit.framework.TestCase.runTest(TestCase.java:176)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.runTestInternal(GridAbstractTest.java:1803)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest.access$000(GridAbstractTest.java:118)
> at 
> org.apache.ignite.testframework.junits.GridAbstractTest$4.run(GridAbstractTest.java:1718)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (IGNITE-5363) Exception in logs after starting cluster in inactive mode and subsequent activation

2017-07-19 Thread Sergey Chugunov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Chugunov resolved IGNITE-5363.
-
Resolution: Cannot Reproduce

It seems the issue was fixed as part of other fixes and improvements to the grid 
activation process.

> Exception in logs after starting cluster in inactive mode and subsequent 
> activation
> ---
>
> Key: IGNITE-5363
> URL: https://issues.apache.org/jira/browse/IGNITE-5363
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.1
>Reporter: Sergey Chugunov
> Attachments: GridActivationSimpleTest.java
>
>
> h2. Notes
> A "no-op" test reproducing the issue is attached. This behavior reproduces 
> only in persistent-enabled mode, so recently added pds module is needed to 
> run it correctly.
> h2. Steps to reproduce
> # Start cluster in inactive mode ({{IgniteConfiguration.setActiveOnStart}})
> # Activate cluster from any server node ({{Ignite.active(true)}})
> h2. Expected behavior
> No exceptions in logs
> h2. Actual behavior
> The following exception is printed out in logs, although cluster looks 
> working fine:
> {code}
> [18:00:42,159][ERROR][exchange-worker-#25%db.IgniteDbWholeClusterRestartSelfTest0%][GridCacheDatabaseSharedManager]
>  Failed to register MBean for MemoryMetrics with name: 'sysMemPlc'
> javax.management.InstanceAlreadyExistsException: 
> org.apache:clsLdr=4e25154f,igniteInstanceName=db.IgniteDbWholeClusterRestartSelfTest0,group=MemoryMetrics,name=sysMemPlc
>   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4539)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.registerMetricsMBean(IgniteCacheDatabaseSharedManager.java:148)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.registerMetricsMBeans(IgniteCacheDatabaseSharedManager.java:135)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.init(IgniteCacheDatabaseSharedManager.java:119)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.start0(IgniteCacheDatabaseSharedManager.java:102)
>   at 
> org.apache.ignite.internal.processors.cache.database.GridCacheDatabaseSharedManager.initDataBase(GridCacheDatabaseSharedManager.java:400)
>   at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onActivate(GridClusterStateProcessor.java:453)
>   at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onChangeGlobalState(GridClusterStateProcessor.java:367)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:744)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:536)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1802)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> It also looks like not only are MBeans registered twice, but off-heap 
> memory is allocated twice as well, which increases the possibility of an OOME with 
> reasonable settings in a production environment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5332) Add toString() to GridNearAtomicAbstractSingleUpdateRequest and it's inheritors

2017-07-19 Thread Aleksandr Meterko (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092904#comment-16092904
 ] 

Aleksandr Meterko commented on IGNITE-5332:
---

This ticket was implemented in master as part of IGNITE-5473 
(commit cf345b8), so it should be closed.

> Add toString() to GridNearAtomicAbstractSingleUpdateRequest and it's 
> inheritors
> ---
>
> Key: IGNITE-5332
> URL: https://issues.apache.org/jira/browse/IGNITE-5332
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Dmitry Karachentsev
>Assignee: Aleksandr Meterko
>  Labels: newbie
> Fix For: 1.9
>
>
> GridNearAtomicAbstractSingleUpdateRequest and all its inheritors should 
> implement the toString() method.
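> A minimal sketch of the expected pattern, assuming the {{GridToStringBuilder}} ({{S}}) helper used throughout Ignite internals:
> {code:java}
> import org.apache.ignite.internal.util.typedef.internal.S;
> 
> // Inside GridNearAtomicAbstractSingleUpdateRequest and each of its inheritors (sketch only):
> @Override public String toString() {
>     return S.toString(GridNearAtomicAbstractSingleUpdateRequest.class, this);
> }
> {code}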



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5332) Add toString() to GridNearAtomicAbstractSingleUpdateRequest and it's inheritors

2017-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092887#comment-16092887
 ] 

ASF GitHub Bot commented on IGNITE-5332:


Github user Desperus closed the pull request at:

https://github.com/apache/ignite/pull/2258


> Add toString() to GridNearAtomicAbstractSingleUpdateRequest and it's 
> inheritors
> ---
>
> Key: IGNITE-5332
> URL: https://issues.apache.org/jira/browse/IGNITE-5332
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.7
>Reporter: Dmitry Karachentsev
>Assignee: Aleksandr Meterko
>  Labels: newbie
> Fix For: 1.9
>
>
> GridNearAtomicAbstractSingleUpdateRequest and all its inheritors should 
> implement the toString() method.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5461) Visor shows wrong statistics for off heap memory

2017-07-19 Thread Andrey Novikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092884#comment-16092884
 ] 

Andrey Novikov commented on IGNITE-5461:


This fix can be found in the {{master}} and {{ignite-2.1}} branches. You can download the 
latest nightly build: 
https://ignite.apache.org/community/contribute.html#nightly-builds or try the 
upcoming 2.1 release: 
http://apache-ignite-developers.2346864.n4.nabble.com/VOTE-Apache-Ignite-2-1-0-RC2-td19767.html




> Visor shows wrong statistics for off heap memory
> 
>
> Key: IGNITE-5461
> URL: https://issues.apache.org/jira/browse/IGNITE-5461
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.0
>Reporter: Mikhail Cherkasov
>Assignee: Alexey Kuznetsov
>  Labels: important
> Fix For: 2.1
>
> Attachments: CreateCache.java, visor-config.xml
>
>
> Visor shows that the data is stored on-heap, while the data is actually off-heap:
> Total: 1
> Heap: 1
> Off-Heap: 0
> Off-Heap Memory: 0
> whereas:
> cache.localPeek("Key1", ONHEAP) == null
> cache.localPeek("Key1", OFFHEAP) == Value
> A reproducer is attached.
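> A minimal Java sketch of the check above (cache name and values are illustrative), using the public {{localPeek}} API:
> {code:java}
> import org.apache.ignite.Ignite;
> import org.apache.ignite.IgniteCache;
> import org.apache.ignite.Ignition;
> import org.apache.ignite.cache.CachePeekMode;
> 
> public class PeekCheck {
>     public static void main(String[] args) {
>         try (Ignite ignite = Ignition.start()) {
>             IgniteCache<String, String> cache = ignite.getOrCreateCache("myCache");
> 
>             cache.put("Key1", "Value");
> 
>             // With page memory (2.0+) entries live off-heap unless on-heap caching is enabled,
>             // so the on-heap peek is expected to be null while the off-heap peek returns the value.
>             System.out.println("On-heap:  " + cache.localPeek("Key1", CachePeekMode.ONHEAP));
>             System.out.println("Off-heap: " + cache.localPeek("Key1", CachePeekMode.OFFHEAP));
>         }
>     }
> }
> {code}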



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5363) Exception in logs after starting cluster in inactive mode and subsequent activation

2017-07-19 Thread Alexey Goncharuk (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092874#comment-16092874
 ] 

Alexey Goncharuk commented on IGNITE-5363:
--

Lowering the priority as it does not affect PDS functionality

> Exception in logs after starting cluster in inactive mode and subsequent 
> activation
> ---
>
> Key: IGNITE-5363
> URL: https://issues.apache.org/jira/browse/IGNITE-5363
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.1
>Reporter: Sergey Chugunov
> Attachments: GridActivationSimpleTest.java
>
>
> h2. Notes
> A "no-op" test reproducing the issue is attached. This behavior reproduces 
> only in persistent-enabled mode, so recently added pds module is needed to 
> run it correctly.
> h2. Steps to reproduce
> # Start cluster in inactive mode ({{IgniteConfiguration.setActiveOnStart}})
> # Activate cluster from any server node ({{Ignite.active(true)}})
> h2. Expected behavior
> No exceptions in logs
> h2. Actual behavior
> The following exception is printed out in logs, although cluster looks 
> working fine:
> {code}
> [18:00:42,159][ERROR][exchange-worker-#25%db.IgniteDbWholeClusterRestartSelfTest0%][GridCacheDatabaseSharedManager]
>  Failed to register MBean for MemoryMetrics with name: 'sysMemPlc'
> javax.management.InstanceAlreadyExistsException: 
> org.apache:clsLdr=4e25154f,igniteInstanceName=db.IgniteDbWholeClusterRestartSelfTest0,group=MemoryMetrics,name=sysMemPlc
>   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4539)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.registerMetricsMBean(IgniteCacheDatabaseSharedManager.java:148)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.registerMetricsMBeans(IgniteCacheDatabaseSharedManager.java:135)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.init(IgniteCacheDatabaseSharedManager.java:119)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.start0(IgniteCacheDatabaseSharedManager.java:102)
>   at 
> org.apache.ignite.internal.processors.cache.database.GridCacheDatabaseSharedManager.initDataBase(GridCacheDatabaseSharedManager.java:400)
>   at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onActivate(GridClusterStateProcessor.java:453)
>   at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onChangeGlobalState(GridClusterStateProcessor.java:367)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:744)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:536)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1802)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> It also looks like not only are MBeans registered twice, but off-heap 
> memory is allocated twice as well, which increases the possibility of an OOME with 
> reasonable settings in a production environment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5363) Exception in logs after starting cluster in inactive mode and subsequent activation

2017-07-19 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5363:
-
Priority: Major  (was: Critical)

> Exception in logs after starting cluster in inactive mode and subsequent 
> activation
> ---
>
> Key: IGNITE-5363
> URL: https://issues.apache.org/jira/browse/IGNITE-5363
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 2.1
>Reporter: Sergey Chugunov
> Attachments: GridActivationSimpleTest.java
>
>
> h2. Notes
> A "no-op" test reproducing the issue is attached. This behavior reproduces 
> only in persistent-enabled mode, so recently added pds module is needed to 
> run it correctly.
> h2. Steps to reproduce
> # Start cluster in inactive mode ({{IgniteConfiguration.setActiveOnStart}})
> # Activate cluster from any server node ({{Ignite.active(true)}})
> h2. Expected behavior
> No exceptions in logs
> h2. Actual behavior
> The following exception is printed out in logs, although cluster looks 
> working fine:
> {code}
> [18:00:42,159][ERROR][exchange-worker-#25%db.IgniteDbWholeClusterRestartSelfTest0%][GridCacheDatabaseSharedManager]
>  Failed to register MBean for MemoryMetrics with name: 'sysMemPlc'
> javax.management.InstanceAlreadyExistsException: 
> org.apache:clsLdr=4e25154f,igniteInstanceName=db.IgniteDbWholeClusterRestartSelfTest0,group=MemoryMetrics,name=sysMemPlc
>   at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
>   at 
> com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
>   at 
> org.apache.ignite.internal.util.IgniteUtils.registerMBean(IgniteUtils.java:4539)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.registerMetricsMBean(IgniteCacheDatabaseSharedManager.java:148)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.registerMetricsMBeans(IgniteCacheDatabaseSharedManager.java:135)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.init(IgniteCacheDatabaseSharedManager.java:119)
>   at 
> org.apache.ignite.internal.processors.cache.database.IgniteCacheDatabaseSharedManager.start0(IgniteCacheDatabaseSharedManager.java:102)
>   at 
> org.apache.ignite.internal.processors.cache.database.GridCacheDatabaseSharedManager.initDataBase(GridCacheDatabaseSharedManager.java:400)
>   at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onActivate(GridClusterStateProcessor.java:453)
>   at 
> org.apache.ignite.internal.processors.cluster.GridClusterStateProcessor.onChangeGlobalState(GridClusterStateProcessor.java:367)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onCacheChangeRequest(GridDhtPartitionsExchangeFuture.java:744)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:536)
>   at 
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:1802)
>   at 
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> It also looks like not only are MBeans registered twice, but off-heap 
> memory is allocated twice as well, which increases the possibility of an OOME with 
> reasonable settings in a production environment.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5603) All daemon node, can be only client daemon, server daemon is not allow.

2017-07-19 Thread Alexey Goncharuk (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Goncharuk updated IGNITE-5603:
-
Priority: Major  (was: Critical)

> All daemon node, can be only client daemon, server daemon is not allow.
> ---
>
> Key: IGNITE-5603
> URL: https://issues.apache.org/jira/browse/IGNITE-5603
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Reporter: Dmitriy Govorukhin
> Fix For: 2.2
>
>
> There is no reason for a server daemon right now. Rework the current functionality to 
> prevent a server node from being a daemon.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-4887) Support for starting transaction in another thread

2017-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-4887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092841#comment-16092841
 ] 

ASF GitHub Bot commented on IGNITE-4887:


Github user voipp closed the pull request at:

https://github.com/apache/ignite/pull/2061


> Support for starting transaction in another thread
> --
>
> Key: IGNITE-4887
> URL: https://issues.apache.org/jira/browse/IGNITE-4887
> Project: Ignite
>  Issue Type: Improvement
>  Components: general
>Affects Versions: 1.9
>Reporter: Alexey Kuznetsov
>Assignee: Nikolay Izhikov
> Attachments: HangTest.txt
>
>
> Consider the following pseudo-code:
> {code:java}
> IgniteTransactions transactions = ignite1.transactions();
> Transaction tx = startTransaction(transactions);
> cache.put("key1", 1);
> tx.stop();
> {code}
> And in another thread:
> {code:java}
> transactions.txStart(tx);
> cache.put("key3", 3);
> cache.remove("key2");
> tx.commit();
> {code}
> An API should be implemented that lets you continue a transaction in another 
> thread.
> The stop() method should mark the transaction as unavailable for further commit.
> The txStart() method should resume the transaction.
> The reason behind the proposal: consider the following scenario. We begin a 
> transaction, make some changes, and start an async future that will be able to 
> introduce further changes into the transaction and commit it in the end.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5712) Context switching for optimistic transactions

2017-07-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092840#comment-16092840
 ] 

ASF GitHub Bot commented on IGNITE-5712:


Github user voipp closed the pull request at:

https://github.com/apache/ignite/pull/2302


> Context switching for optimistic transactions
> -
>
> Key: IGNITE-5712
> URL: https://issues.apache.org/jira/browse/IGNITE-5712
> Project: Ignite
>  Issue Type: Sub-task
>  Components: general
>Reporter: Alexey Kuznetsov
>Assignee: Nikolay Izhikov
>
> Implement context switching between threads for optimistic transactions
> http://ci.ignite.apache.org/project.html?projectId=Ignite20Tests_Ignite20Tests=pull%2F2257%2Fhead



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (IGNITE-5767) Web console: use byte array type instead of java.lang.Object for binary JDBC types

2017-07-19 Thread Andrey Novikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16092803#comment-16092803
 ] 

Andrey Novikov commented on IGNITE-5767:


[~vsisko], can you please take a look at this ticket? As I understand it, you need to 
change the default type mapping in 
{{modules/web-console/frontend/app/data/jdbc-types.json}}

> Web console: use byte array type instead of java.lang.Object for binary JDBC 
> types
> --
>
> Key: IGNITE-5767
> URL: https://issues.apache.org/jira/browse/IGNITE-5767
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Denis Kholodov
>Assignee: Andrey Novikov
> Fix For: 2.2
>
>
> Schema importer should use {{[B}} query entity field type instead of 
> {{java.lang.Object}} for the following SQL types: {{BINARY}}, {{VARBINARY}} 
> and {{LONGVARBINARY}}.
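> For illustration, a minimal Java sketch (entity and field names are hypothetical) of a query entity field declared as {{[B}}, which is simply {{byte[].class.getName()}}:
> {code:java}
> import java.util.LinkedHashMap;
> 
> import org.apache.ignite.cache.QueryEntity;
> 
> public class BinaryFieldMapping {
>     /** Hypothetical entity; the point is the "[B" field type for BINARY/VARBINARY/LONGVARBINARY columns. */
>     public static QueryEntity blobEntity() {
>         QueryEntity entity = new QueryEntity("java.lang.Long", "BlobRow");
> 
>         LinkedHashMap<String, String> fields = new LinkedHashMap<>();
>         fields.put("payload", byte[].class.getName()); // "[B" instead of java.lang.Object
>         entity.setFields(fields);
> 
>         return entity;
>     }
> }
> {code}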



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (IGNITE-5767) Web console: use byte array type instead of java.lang.Object for binary JDBC types

2017-07-19 Thread Mikhail Cherkasov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Cherkasov reassigned IGNITE-5767:
-

Assignee: Andrey Novikov

> Web console: use byte array type instead of java.lang.Object for binary JDBC 
> types
> --
>
> Key: IGNITE-5767
> URL: https://issues.apache.org/jira/browse/IGNITE-5767
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 2.0
>Reporter: Denis Kholodov
>Assignee: Andrey Novikov
> Fix For: 2.2
>
>
> Schema importer should use {{[B}} query entity field type instead of 
> {{java.lang.Object}} for the following SQL types: {{BINARY}}, {{VARBINARY}} 
> and {{LONGVARBINARY}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5783) LINQ queries should provide the ability to generate the SQL query plan

2017-07-19 Thread Michael Griggs (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Griggs updated IGNITE-5783:
---
Affects Version/s: (was: 2.0)
   2.1

> LINQ queries should provide the ability to generate the SQL query plan
> --
>
> Key: IGNITE-5783
> URL: https://issues.apache.org/jira/browse/IGNITE-5783
> Project: Ignite
>  Issue Type: New Feature
>  Components: platforms
>Affects Versions: 2.1
>Reporter: Michael Griggs
>Priority: Minor
> Fix For: 2.2
>
>
> At present, the only way to see the query plan generated by a LINQ query in 
> C# is to:
> # Call {{GetFieldsQuery()}}
> # Prepend the string {{"explain "}} to the resulting string
> # Execute the query from the step above and retrieve the plan



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5785) C# QuerySqlField attribute should provide access to Order parameter

2017-07-19 Thread Michael Griggs (JIRA)
Michael Griggs created IGNITE-5785:
--

 Summary: C# QuerySqlField attribute should provide access to Order 
parameter
 Key: IGNITE-5785
 URL: https://issues.apache.org/jira/browse/IGNITE-5785
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Affects Versions: 2.1
Reporter: Michael Griggs
 Fix For: 2.2


https://apacheignite.readme.io/docs/indexes#section-group-indexes

The {{order}} parameter of group indexes should be accessible via Ignite.NET.
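For reference, a minimal Java sketch (class and field names are illustrative) of the group-index {{order}} parameter that Ignite.NET would need to mirror:
{code:java}
import java.io.Serializable;

import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class Person implements Serializable {
    /** First column of the "age_salary_idx" group index; 'order' defines the column position. */
    @QuerySqlField(orderedGroups = {@QuerySqlField.Group(name = "age_salary_idx", order = 0, descending = true)})
    private int age;

    /** Single-column index plus a later column of the same group index. */
    @QuerySqlField(index = true, orderedGroups = {@QuerySqlField.Group(name = "age_salary_idx", order = 3)})
    private double salary;
}
{code}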



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5784) .NET: QueryIndex.InlineSize

2017-07-19 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-5784:
--

 Summary: .NET: QueryIndex.InlineSize
 Key: IGNITE-5784
 URL: https://issues.apache.org/jira/browse/IGNITE-5784
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Affects Versions: 2.1
Reporter: Pavel Tupitsyn
 Fix For: 2.2


{{QueryIndex.InlineSize}} controls how much of the index payload is inlined when the index is 
stored in Ignite page memory.
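For context, a minimal Java sketch of how inline size is configured on the Java side (assuming the {{QueryIndex#setInlineSize(int)}} setter); the proposed .NET {{QueryIndex.InlineSize}} property would mirror it:
{code:java}
import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class InlineSizeConfig {
    /** Builds a cache configuration with a "name" index whose payload is partially inlined into index pages. */
    public static CacheConfiguration<Long, Object> cacheConfig() {
        QueryEntity entity = new QueryEntity("java.lang.Long", "Person");

        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("name", String.class.getName());
        entity.setFields(fields);

        QueryIndex idx = new QueryIndex("name");
        idx.setInlineSize(64); // assumed Java-side setter controlling inlined payload bytes

        entity.setIndexes(Collections.singletonList(idx));

        return new CacheConfiguration<Long, Object>("people")
            .setQueryEntities(Collections.singletonList(entity));
    }
}
{code}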



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (IGNITE-5783) LINQ queries should provide the ability to generate the SQL query plan

2017-07-19 Thread Michael Griggs (JIRA)
Michael Griggs created IGNITE-5783:
--

 Summary: LINQ queries should provide the ability to generate the 
SQL query plan
 Key: IGNITE-5783
 URL: https://issues.apache.org/jira/browse/IGNITE-5783
 Project: Ignite
  Issue Type: New Feature
  Components: platforms
Affects Versions: 2.0
Reporter: Michael Griggs
Priority: Minor
 Fix For: 2.2


At present, the only way to see the query plan generated by a LINQ query in C# 
is to:

# Call {{GetFieldsQuery()}}
# Prepend the string {{"explain "}} to the resulting string
# Execute the query from the step above and retrieve the plan





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (IGNITE-5768) Retry resolving class name from marshaller cache and .classname file

2017-07-19 Thread Dmitry Karachentsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Karachentsev resolved IGNITE-5768.
-
Resolution: Fixed

> Retry resolving class name from marshaller cache and .classname file
> 
>
> Key: IGNITE-5768
> URL: https://issues.apache.org/jira/browse/IGNITE-5768
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Dmitry Karachentsev
>Assignee: Dmitry Karachentsev
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (IGNITE-5782) FifoEvictionPolicy does not work with ignite 2.0.0

2017-07-19 Thread wangpeibin (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-5782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangpeibin updated IGNITE-5782:
---
Description: 
I used an Ignite sliding window with FifoEvictionPolicy on Ignite 1.9.0.
Now I have migrated to Ignite 2.0.0 according to 
ApacheIgnite2.0MigrationGuide-CacheMemoryMode, 
but the window no longer evicts entries successfully.

A unit test reproducing the problem is below:

{code:java}
  try (Ignite ignite = Ignition.start("default-config.xml")) {
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
cacheConfiguration.setName("Test");

// for ignite 1.9.0
// cacheConfiguration.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);
// for ignite 2.0.0
cacheConfiguration.setOnheapCacheEnabled(true);


cacheConfiguration.setEvictionPolicy(new FifoEvictionPolicy(5));
IgniteCache cache = 
ignite.getOrCreateCache(cacheConfiguration);
for(Integer i = 0; i < 10; i++) {
Map value = new HashMap<>();
value.put("key", i.toString());
value.put("value", 1);
cache.put(i.toString(), value);
}

cache.forEach(objectObjectEntry -> {
System.out.println(objectObjectEntry);

});
}
System.out.println("finished");
{code}




with ignite 1.9.0 the output is
{code:java}
Entry [key=5, val={value=1, key=5}]
Entry [key=6, val={value=1, key=6}]
Entry [key=7, val={value=1, key=7}]
Entry [key=8, val={value=1, key=8}]
Entry [key=9, val={value=1, key=9}]
{code}

with ignite 2.0.0 the output is
{code:java}
Entry [key=0, val={value=1, key=0}]
Entry [key=1, val={value=1, key=1}]
Entry [key=2, val={value=1, key=2}]
Entry [key=3, val={value=1, key=3}]
Entry [key=4, val={value=1, key=4}]
Entry [key=5, val={value=1, key=5}]
Entry [key=6, val={value=1, key=6}]
Entry [key=7, val={value=1, key=7}]
Entry [key=8, val={value=1, key=8}]
Entry [key=9, val={value=1, key=9}]
{code}


  was:
I use a ignite slide window with FifoEvictionPolicy before with ignite 1.9.0
right now I migration to ingite 2.0.0 according 
ApacheIgnite2.0MigrationGuide-CacheMemoryMode 
but I find the window does not evict the entry successful.

I write a unit test code below:
```
try (Ignite ignite = Ignition.start("default-config.xml")) {
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
cacheConfiguration.setName("Test");

// for ignite 1.9.0
// cacheConfiguration.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);
// for ignite 2.0.0
cacheConfiguration.setOnheapCacheEnabled(true);


cacheConfiguration.setEvictionPolicy(new FifoEvictionPolicy(5));
IgniteCache cache = 
ignite.getOrCreateCache(cacheConfiguration);
for(Integer i = 0; i < 10; i++) {
Map value = new HashMap<>();
value.put("key", i.toString());
value.put("value", 1);
cache.put(i.toString(), value);
}

cache.forEach(objectObjectEntry -> {
System.out.println(objectObjectEntry);

});
}
System.out.println("finished");
```

with ignite 1.9.0 the output is
```
Entry [key=5, val={value=1, key=5}]
Entry [key=6, val={value=1, key=6}]
Entry [key=7, val={value=1, key=7}]
Entry [key=8, val={value=1, key=8}]
Entry [key=9, val={value=1, key=9}]
```
with ignite 2.0.0 the output is
```
Entry [key=0, val={value=1, key=0}]
Entry [key=1, val={value=1, key=1}]
Entry [key=2, val={value=1, key=2}]
Entry [key=3, val={value=1, key=3}]
Entry [key=4, val={value=1, key=4}]
Entry [key=5, val={value=1, key=5}]
Entry [key=6, val={value=1, key=6}]
Entry [key=7, val={value=1, key=7}]
Entry [key=8, val={value=1, key=8}]
Entry [key=9, val={value=1, key=9}]
```


>  FifoEvictionPolicy does not work with ignite 2.0.0
> ---
>
> Key: IGNITE-5782
> URL: https://issues.apache.org/jira/browse/IGNITE-5782
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Affects Versions: 2.0
>Reporter: wangpeibin
>
> I used an Ignite sliding window with FifoEvictionPolicy on Ignite 1.9.0.
> Now I have migrated to Ignite 2.0.0 according to 
> ApacheIgnite2.0MigrationGuide-CacheMemoryMode, 
> but the window no longer evicts entries successfully.
> A unit test reproducing the problem is below:
> {code:java}
>   try (Ignite ignite = Ignition.start("default-config.xml")) {
> CacheConfiguration cacheConfiguration 

[jira] [Created] (IGNITE-5782) FifoEvictionPolicy does not work with ignite 2.0.0

2017-07-19 Thread wangpeibin (JIRA)
wangpeibin created IGNITE-5782:
--

 Summary:  FifoEvictionPolicy does not work with ignite 2.0.0
 Key: IGNITE-5782
 URL: https://issues.apache.org/jira/browse/IGNITE-5782
 Project: Ignite
  Issue Type: Bug
  Components: cache
Affects Versions: 2.0
Reporter: wangpeibin


I used an Ignite sliding window with FifoEvictionPolicy on Ignite 1.9.0.
Now I have migrated to Ignite 2.0.0 according to 
ApacheIgnite2.0MigrationGuide-CacheMemoryMode, 
but the window no longer evicts entries successfully.

A unit test reproducing the problem is below:
```
try (Ignite ignite = Ignition.start("default-config.xml")) {
CacheConfiguration cacheConfiguration = new CacheConfiguration();
cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
cacheConfiguration.setName("Test");

// for ignite 1.9.0
// cacheConfiguration.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);
// for ignite 2.0.0
cacheConfiguration.setOnheapCacheEnabled(true);


cacheConfiguration.setEvictionPolicy(new FifoEvictionPolicy(5));
IgniteCache cache = 
ignite.getOrCreateCache(cacheConfiguration);
for(Integer i = 0; i < 10; i++) {
Map value = new HashMap<>();
value.put("key", i.toString());
value.put("value", 1);
cache.put(i.toString(), value);
}

cache.forEach(objectObjectEntry -> {
System.out.println(objectObjectEntry);

});
}
System.out.println("finished");
```

with ignite 1.9.0 the output is
```
Entry [key=5, val={value=1, key=5}]
Entry [key=6, val={value=1, key=6}]
Entry [key=7, val={value=1, key=7}]
Entry [key=8, val={value=1, key=8}]
Entry [key=9, val={value=1, key=9}]
```
with ignite 2.0.0 the output is
```
Entry [key=0, val={value=1, key=0}]
Entry [key=1, val={value=1, key=1}]
Entry [key=2, val={value=1, key=2}]
Entry [key=3, val={value=1, key=3}]
Entry [key=4, val={value=1, key=4}]
Entry [key=5, val={value=1, key=5}]
Entry [key=6, val={value=1, key=6}]
Entry [key=7, val={value=1, key=7}]
Entry [key=8, val={value=1, key=8}]
Entry [key=9, val={value=1, key=9}]
```



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)