[jira] [Resolved] (IGNITE-3166) Incorrect java code in corner case

2016-05-19 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov resolved IGNITE-3166.
--
Resolution: Fixed
  Assignee: Pavel Konstantinov  (was: Alexey Kuznetsov)

Fixed Java code generation.

> Incorrect java code in corner case
> --
>
> Key: IGNITE-3166
> URL: https://issues.apache.org/jira/browse/IGNITE-3166
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Pavel Konstantinov
>Assignee: Pavel Konstantinov
> Fix For: 1.7
>
>
> # add two caches with incorrect names, e.g. '%' and '+'
> # verify the generated Java code
> {code}
> cfg.setCacheConfiguration(cache_(), cache_());
> ...
> public static CacheConfiguration cache_() 
> ...
> public static CacheConfiguration cache__1() 
> {code}
> should be
> {code}
> cfg.setCacheConfiguration(cache_(), cache__1());
> {code}
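The generator change itself is not shown in this thread. As a rough, hypothetical sketch of the de-duplication the corrected output implies (the class and method names below are invented and are not the actual Web Console generator code), the key point is that the same uniquified name must be used both when declaring each cache_...() method and when emitting the setCacheConfiguration(...) call:

{code}
import java.util.HashMap;
import java.util.Map;

public class CacheMethodNames {
    /** How many times each sanitized base name has already been handed out. */
    private final Map<String, Integer> used = new HashMap<>();

    /** Returns a unique method name, e.g. "cache_" then "cache__1" for the names '%' and '+'. */
    public String methodName(String cacheName) {
        StringBuilder sb = new StringBuilder("cache_");

        // Keep only characters that are valid in a Java identifier.
        for (char c : cacheName.toCharArray()) {
            if (Character.isJavaIdentifierPart(c))
                sb.append(c);
        }

        String base = sb.toString();
        Integer cnt = used.get(base);

        if (cnt == null) {
            used.put(base, 1);

            return base;             // First use: "cache_".
        }

        used.put(base, cnt + 1);

        return base + '_' + cnt;     // Collisions: "cache__1", "cache__2", ...
    }
}
{code}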



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-3166) Incorrect java code in corner case

2016-05-19 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-3166:


Assignee: Alexey Kuznetsov

> Incorrect java code in corner case
> --
>
> Key: IGNITE-3166
> URL: https://issues.apache.org/jira/browse/IGNITE-3166
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
> Fix For: 1.7
>
>
> # add two caches with incorrect names, e.g. '%' and '+'
> # verify the generated Java code
> {code}
> cfg.setCacheConfiguration(cache_(), cache_());
> ...
> public static CacheConfiguration cache_() 
> ...
> public static CacheConfiguration cache__1() 
> {code}
> should be
> {code}
> cfg.setCacheConfiguration(cache_(), cache__1());
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-3012) Filtration of default cache doesn't work

2016-05-19 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov resolved IGNITE-3012.
--
Resolution: Fixed
  Assignee: Pavel Konstantinov  (was: Alexey Kuznetsov)

Fixed filtration of the default cache; also the user can now click on a radio 
button label to select the radio button.

> Filtration of default cache doesn't work
> 
>
> Key: IGNITE-3012
> URL: https://issues.apache.org/jira/browse/IGNITE-3012
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Pavel Konstantinov
>Assignee: Pavel Konstantinov
>Priority: Minor
> Fix For: 1.7
>
>
> If the grid contains a default cache (a cache without a name, name=null) then the 
> user has no ability to find it using the Filter. The default cache is displayed 
> under a placeholder name, but it could not be found by entering that same 
> string into the Filter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3004) Implement config variations test for ContinuousQueries

2016-05-19 Thread Semen Boikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292065#comment-15292065
 ] 

Semen Boikov commented on IGNITE-3004:
--

Good to merge.

> Implement config variations test for ContinuousQueries
> --
>
> Key: IGNITE-3004
> URL: https://issues.apache.org/jira/browse/IGNITE-3004
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Nikolay Tikhonov
>Priority: Blocker
> Fix For: 1.7
>
>
> Need to implement tests for continuous queries with configuration variations. 
> Make sure these points are covered (in addition to node count/cache mode 
> and configuration parameter variations):
> - different key/values types (see 
> IgniteConfigVariationsAbstractTest.runInAllDataModes)
> - keepBinary mode
> - with/without filters
> - ContinuousQuery API and IgniteCache.registerCacheEntryListener
> - async/sync listener and filter
> - all cache update operations
> - CacheEntryListenerConfiguration.isSynchronous true/false
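For readers unfamiliar with the API under test, a minimal ContinuousQuery usage sketch covering a few of the points above (local listener, remote filter, initial query). This is a generic example against the public Ignite API with an assumed cache name, not the config-variations test from the ticket; the keepBinary variation in the list would additionally use cache.withKeepBinary().

{code}
import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ContinuousQuerySketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("testCache");

            ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

            // Initial query: entries that already exist when the listener is registered.
            qry.setInitialQuery(new ScanQuery<Integer, String>());

            // Remote filter: evaluated on the nodes where updates happen.
            qry.setRemoteFilter(evt -> evt.getKey() % 2 == 0);

            // Local listener: receives the events that passed the remote filter.
            qry.setLocalListener(evts -> {
                for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                    System.out.println("Updated: " + e.getKey() + " -> " + e.getValue());
            });

            // Closing the cursor cancels the continuous query.
            try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
                for (int i = 0; i < 10; i++)
                    cache.put(i, "val-" + i);
            }
        }
    }
}
{code}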



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2899) BinaryObject is deserialized before getting passed to CacheInterceptor

2016-05-19 Thread Semen Boikov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15292055#comment-15292055
 ] 

Semen Boikov commented on IGNITE-2899:
--

Hi, 
- there is a 'TODO delete one of isCLient methods' at 
IgniteConfigVariationsAbstractTest:213, please remove or fix it before merge
- as you told me today, there is an issue in 'getEntry' with keepBinary when the key 
is of a type that is not wrapped into a BinaryObject (e.g. Integer); to fix it the 
entry creation needs to be changed like this:
{noformat}
 '... new CacheEntryImplEx<>(ctx.keepBinary() ? 
(K)ctx.unwrapBinaryIfNeeded(key, true, false) : key, t.get1(), t.get2())'
{noformat}
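For context on the parent ticket (an interceptor seeing deserialized values instead of BinaryObject instances), a minimal sketch of the pattern under discussion; the class name and cache name below are assumptions, and this is not the reproducer attached to the ticket:

{code}
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.CacheInterceptorAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class InterceptorSketch {
    /** Interceptor that reports whether it received a binary or a deserialized value. */
    static class ValidationInterceptor extends CacheInterceptorAdapter<Object, Object> {
        @Override public Object onBeforePut(Cache.Entry<Object, Object> entry, Object newVal) {
            if (newVal instanceof BinaryObject)
                System.out.println("Binary value: " + ((BinaryObject)newVal).type().typeName());
            else
                System.out.println("Deserialized value: " + newVal.getClass().getName());

            return newVal; // Returning null would cancel the put.
        }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("interceptedCache");

            ccfg.setInterceptor(new ValidationInterceptor());

            ignite.getOrCreateCache(ccfg).put(1, new java.util.Date());
        }
    }
}
{code}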


> BinaryObject is deserialized before getting passed to CacheInterceptor
> --
>
> Key: IGNITE-2899
> URL: https://issues.apache.org/jira/browse/IGNITE-2899
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Denis Magda
>Assignee: Artem Shutak
> Fix For: 1.7
>
> Attachments: BinaryInterceptorIssue.java, 
> BinaryInterceptorNoTypeIssue.java
>
>
> If {{CacheInterceptor}} is configured for a cache that stores 
> {{BinaryObjects}} then the objects are always deserialized before being 
> passed to the interceptor body.
> Refer to BinaryInterceptorIssue test attached to the ticket to reproduce the 
> following stack trace
> {noformat}
> java.lang.ClassCastException: 
> org.apache.ignite.examples.tests.BinaryInterceptorIssue$ValidObject cannot be 
> cast to org.apache.ignite.binary.BinaryObject
>   at 
> org.apache.ignite.examples.tests.BinaryInterceptorIssue$ValidationInterceptor.onBeforePut(BinaryInterceptorIssue.java:49)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2309)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2044)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1439)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1314)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapSingle(GridNearAtomicUpdateFuture.java:457)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.access$1400(GridNearAtomicUpdateFuture.java:72)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture$UpdateState.map(GridNearAtomicUpdateFuture.java:931)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.mapOnTopology(GridNearAtomicUpdateFuture.java:417)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateFuture.map(GridNearAtomicUpdateFuture.java:283)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$21.apply(GridDhtAtomicCache.java:1006)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$21.apply(GridDhtAtomicCache.java:1004)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.asyncOp(GridDhtAtomicCache.java:737)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsync0(GridDhtAtomicCache.java:1004)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.putAsync0(GridDhtAtomicCache.java:465)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.putAsync(GridCacheAdapter.java:2491)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put(GridDhtAtomicCache.java:440)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2170)
>   at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1127)
>   at 
> org.apache.ignite.examples.tests.BinaryInterceptorIssue.main(BinaryInterceptorIssue.java:37)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
> Exception in thread "main" 
> org.apache.ignite.cache.CachePartialUpdateException: Failed to update keys 
> (retry update if possible).: 

[jira] [Updated] (IGNITE-1575) NPE when cache is started concurrently with the node stop

2016-05-19 Thread Denis Magda (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-1575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Magda updated IGNITE-1575:

Assignee: Alexei Scherbakov  (was: Andrey Gura)

> NPE when cache is started concurrently with the node stop
> -
>
> Key: IGNITE-1575
> URL: https://issues.apache.org/jira/browse/IGNITE-1575
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Valentin Kulichenko
>Assignee: Alexei Scherbakov
> Fix For: 1.7
>
>
> It's not causing any harm, but it's possible to get the NPE below during the 
> node stop.
> {noformat}
> 57724 [main] ERROR IgniteKernal%t1-1 - Got exception while starting (will 
> rollback startup routine).
> java.lang.NullPointerException
> at 
> org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryManager.onKernalStart0(CacheContinuousQueryManager.java:91)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheManagerAdapter.onKernalStart(GridCacheManagerAdapter.java:97)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.onKernalStart(GridCacheProcessor.java:1058)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.onKernalStart(GridCacheProcessor.java:833)
> at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:829)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1549)
> at 
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1416)
> at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:916)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:477)
> at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:458)
> at org.apache.ignite.Ignition.start(Ignition.java:321)
> .. application stack frames ...
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
> at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
> at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-2971) Assertion error inside OPTIMISTIC SERIALIZABLE tx on 'get'

2016-05-19 Thread Semen Boikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-2971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Semen Boikov reassigned IGNITE-2971:


Assignee: Semen Boikov

> Assertion error inside OPTIMISTIC SERIALIZABLE tx on 'get'
> --
>
> Key: IGNITE-2971
> URL: https://issues.apache.org/jira/browse/IGNITE-2971
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Artem Shutak
>Assignee: Semen Boikov
> Fix For: 1.7
>
>
> The following code leads to assertion below:
> {code}
> final IgniteCache cache = jcache().withKeepBinary();
> Set keys = new LinkedHashSet() {{
> for (int i = 0; i < CNT; i++)
> add(key(i));
> }};
> try (Transaction tx = 
> testedGrid().transactions().txStart(conc, isolation)) {
> for (final Object key : keys) {
> Object res = cache.invoke(key, NOOP_ENTRY_PROC);
> assertNull(res);
> assertNull(cache.get(key));
> }
> tx.commit();
> }
> {code}
> Assertion:
> {noformat}
> java.lang.AssertionError: Wrong version [serReadVer=GridCacheVersion 
> [topVer=71859233, time=1460379235380, order=1460379231642, nodeOrder=5], 
> ver=GridCacheVersion [topVer=71859233, time=1460379235380, 
> order=1460379231642, nodeOrder=5]]
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxEntry.entryReadVersion(IgniteTxEntry.java:936)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter$3.apply(IgniteTxLocalAdapter.java:1750)
>   at 
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxLocalAdapter$3.apply(IgniteTxLocalAdapter.java:1698)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.processLoaded(GridNearTxLocal.java:485)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.access$100(GridNearTxLocal.java:84)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$2.apply(GridNearTxLocal.java:380)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal$2.apply(GridNearTxLocal.java:375)
>   at 
> org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:54)
>   at 
> org.apache.ignite.internal.util.future.GridFutureChainListener.apply(GridFutureChainListener.java:28)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:263)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListeners(GridFutureAdapter.java:251)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:381)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:347)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onDone(GridPartitionedSingleGetFuture.java:727)
>   at 
> org.apache.ignite.internal.util.future.GridFutureAdapter.onDone(GridFutureAdapter.java:324)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.setResult(GridPartitionedSingleGetFuture.java:634)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.onResult(GridPartitionedSingleGetFuture.java:478)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.processNearSingleGetResponse(GridDhtCacheAdapter.java:154)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.access$100(GridDhtColocatedCache.java:85)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache$3.apply(GridDhtColocatedCache.java:149)
>   at 
> org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache$3.apply(GridDhtColocatedCache.java:147)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:622)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:320)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:244)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:81)
>   at 
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:203)
>   at 
> 

[jira] [Updated] (IGNITE-3177) [Test] IgfsSizeSelfTest.testReplicated sometimes fails with a timeout.

2016-05-19 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-3177:

Description: 
In some rare cases the test IgfsSizeSelfTest.testReplicated fails with a timeout.
We have 4 such logs.
In all known cases the hang happens on an attempt to start the 3rd node.
I was not able to reproduce the problem locally (the test ran ~2200 times without 
any errors).

In 1 of the 4 cases the timeout happens in the method 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest#awaitPartitionMapExchange(boolean, boolean).
In the other 3 cases the hang happens during the 3rd node start-up.

  was:
In some rare cases the test IgfsSizeSelfTest.testReplicated fails with a timeout.
We have 4 such logs.
In all known cases the hang happens on an attempt to start the 3rd node.
I was not able to reproduce the problem locally (the test ran ~2200 times without 
any errors).

In 1 of the 4 cases the timeout happens in the method 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest#awaitPartitionMapExchange(boolean, boolean).
In the other 3 cases the hang happens during the 3rd node start-up.



> [Test] IgfsSizeSelfTest.testReplicated sometimes fails with a timeout. 
> ---
>
> Key: IGNITE-3177
> URL: https://issues.apache.org/jira/browse/IGNITE-3177
> Project: Ignite
>  Issue Type: Test
>  Components: IGFS
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>
> In some rare cases the test IgfsSizeSelfTest.testReplicated fails with a timeout.
> We have 4 such logs.
> In all known cases the hang happens on an attempt to start the 3rd node.
> I was not able to reproduce the problem locally (the test ran ~2200 times 
> without any errors).
> In 1 of the 4 cases the timeout happens in the method 
> org.apache.ignite.testframework.junits.common.GridCommonAbstractTest#awaitPartitionMapExchange(boolean, boolean).
> In the other 3 cases the hang happens during the 3rd node start-up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-3177) [Test] IgfsSizeSelfTest.testReplicated sometimes fails with a timeout.

2016-05-19 Thread Ivan Veselovsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Veselovsky updated IGNITE-3177:

Description: 
In some rare cases the test IgfsSizeSelfTest.testReplicated fails with a timeout.
We have 4 such logs.
In all known cases the hang happens on an attempt to start the 3rd node.
I was not able to reproduce the problem locally (the test ran ~2200 times without 
any errors).

In 1 of the 4 cases the timeout happens in the method 
org.apache.ignite.testframework.junits.common.GridCommonAbstractTest#awaitPartitionMapExchange(boolean, boolean).
In the other 3 cases the hang happens during the 3rd node start-up.


  was:
In some rare cases the test IgfsSizeSelfTest.testReplicated fails with a timeout.
In all known cases the hang happens on an attempt to start the 3rd node.
I was not able to reproduce the problem locally (the test ran ~2200 times without 
any errors).



> [Test] IgfsSizeSelfTest.testReplicated sometimes fails with a timeout. 
> ---
>
> Key: IGNITE-3177
> URL: https://issues.apache.org/jira/browse/IGNITE-3177
> Project: Ignite
>  Issue Type: Test
>  Components: IGFS
>Reporter: Ivan Veselovsky
>Assignee: Ivan Veselovsky
>
> In some rare cases the test IgfsSizeSelfTest.testReplicated fails with a timeout.
> We have 4 such logs.
> In all known cases the hang happens on an attempt to start the 3rd node.
> I was not able to reproduce the problem locally (the test ran ~2200 times 
> without any errors).
> In 1 of the 4 cases the timeout happens in the method 
> org.apache.ignite.testframework.junits.common.GridCommonAbstractTest#awaitPartitionMapExchange(boolean, boolean).
> In the other 3 cases the hang happens during the 3rd node start-up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-3177) [Test] IgfsSizeSelfTest.testReplicated sometimes fails with a timeout.

2016-05-19 Thread Ivan Veselovsky (JIRA)
Ivan Veselovsky created IGNITE-3177:
---

 Summary: [Test] IgfsSizeSelfTest.testReplicated sometimes fails 
with a timeout. 
 Key: IGNITE-3177
 URL: https://issues.apache.org/jira/browse/IGNITE-3177
 Project: Ignite
  Issue Type: Test
  Components: IGFS
Reporter: Ivan Veselovsky
Assignee: Ivan Veselovsky


In some rare cases the test IgfsSizeSelfTest.testReplicated fails with a timeout.
In all known cases the hang happens on an attempt to start the 3rd node.
I was not able to reproduce the problem locally (the test ran ~2200 times without 
any errors).




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3004) Implement config variations test for ContinuousQueries

2016-05-19 Thread Nikolay Tikhonov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291540#comment-15291540
 ] 

Nikolay Tikhonov commented on IGNITE-3004:
--

Thank you for your review!
I've fixed the notes and got a green TC run. Could you please take another look?

> Implement config variations test for ContinuousQueries
> --
>
> Key: IGNITE-3004
> URL: https://issues.apache.org/jira/browse/IGNITE-3004
> Project: Ignite
>  Issue Type: Bug
>  Components: cache
>Reporter: Semen Boikov
>Assignee: Nikolay Tikhonov
>Priority: Blocker
> Fix For: 1.7
>
>
> Need to implement tests for continuous queries with configuration variations. 
> Make sure these points are covered (in addition to node count/cache mode 
> and configuration parameter variations):
> - different key/values types (see 
> IgniteConfigVariationsAbstractTest.runInAllDataModes)
> - keepBinary mode
> - with/without filters
> - ContinuousQuery API and IgniteCache.registerCacheEntryListener
> - async/sync listener and filter
> - all cache update operations
> - CacheEntryListenerConfiguration.isSynchronous true/false



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3168) Ignite mesos framework should provide ability to configure timeouts.

2016-05-19 Thread Nikolay Tikhonov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291535#comment-15291535
 ] 

Nikolay Tikhonov commented on IGNITE-3168:
--

Added {{JETTY_IDLE_TIMEOUT}} property.

> Ignite mesos framework should provide ability to configure timeouts.
> 
>
> Key: IGNITE-3168
> URL: https://issues.apache.org/jira/browse/IGNITE-3168
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 1.5.0.final
>Reporter: Nikolay Tikhonov
>Assignee: Nikolay Tikhonov
>  Labels: important
> Fix For: 1.7
>
>
> Need to add properties to the ClusterProperties class which allow configuring 
> Jetty timeouts.
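The {{JETTY_IDLE_TIMEOUT}} property name comes from the comments in this thread; how it is actually wired into the Mesos framework is not shown here. Below is only a hedged sketch of applying such a value to an embedded Jetty connector; the environment-variable handling, the port, and the default value are assumptions:

{code}
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class JettyTimeoutSketch {
    /** Default used when the variable is not set (assumed value, not from the ticket). */
    private static final long DFLT_IDLE_TIMEOUT = 30_000L;

    public static void main(String[] args) throws Exception {
        String val = System.getenv("JETTY_IDLE_TIMEOUT");
        long idleTimeout = val != null ? Long.parseLong(val) : DFLT_IDLE_TIMEOUT;

        Server server = new Server();

        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8081);
        // Close idle connections after the configured number of milliseconds.
        connector.setIdleTimeout(idleTimeout);

        server.addConnector(connector);
        server.start();
        server.join();
    }
}
{code}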



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (IGNITE-3168) Ignite mesos framework should provide ability to configure timeouts.

2016-05-19 Thread Nikolay Tikhonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Tikhonov updated IGNITE-3168:
-
Comment: was deleted

(was: Added `JETTY_IDLE_TIMEOUT` property.)

> Ignite mesos framework should provide ability to configure timeouts.
> 
>
> Key: IGNITE-3168
> URL: https://issues.apache.org/jira/browse/IGNITE-3168
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 1.5.0.final
>Reporter: Nikolay Tikhonov
>Assignee: Nikolay Tikhonov
>  Labels: important
> Fix For: 1.7
>
>
> Need to add properties to the ClusterProperties class which allow configuring 
> Jetty timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3168) Ignite mesos framework should provide ability to configure timeouts.

2016-05-19 Thread Nikolay Tikhonov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291533#comment-15291533
 ] 

Nikolay Tikhonov commented on IGNITE-3168:
--

Added `JETTY_IDLE_TIMEOUT` property.

> Ignite mesos framework should provide ability to configure timeouts.
> 
>
> Key: IGNITE-3168
> URL: https://issues.apache.org/jira/browse/IGNITE-3168
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 1.5.0.final
>Reporter: Nikolay Tikhonov
>Assignee: Nikolay Tikhonov
>  Labels: important
> Fix For: 1.7
>
>
> Need to add properties to the ClusterProperties class which allow configuring 
> Jetty timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3175) BigDecimal fields are not supported if query is executed from IgniteRDD

2016-05-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291373#comment-15291373
 ] 

ASF GitHub Bot commented on IGNITE-3175:


GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/736

IGNITE-3175 BigDecimal fields are not supported if query is executed from 
IgniteRDD



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-3175

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/736.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #736


commit 4fa840399bd2580ff313f05ea8ea905677df1749
Author: tledkov-gridgain 
Date:   2016-05-19T16:03:56Z

IGNITE-3175 BigDecimal fields are not supported if query is executed from 
IgniteRDD




> BigDecimal fields are not supported if query is executed from IgniteRDD
> ---
>
> Key: IGNITE-3175
> URL: https://issues.apache.org/jira/browse/IGNITE-3175
> Project: Ignite
>  Issue Type: Bug
>  Components: Ignite RDD
>Affects Versions: 1.5.0.final
>Reporter: Valentin Kulichenko
>Assignee: Taras Ledkov
> Fix For: 1.7
>
>
> If one of the fields participating in the query is {{BigDecimal}}, the query 
> will fail when executed from {{IgniteRDD}} with the following error:
> {noformat}
> scala.MatchError: 1124757 (of class java.math.BigDecimal)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:255)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
>   at 
> org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
>   at 
> org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:505)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.(TungstenAggregationIterator.scala:686)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   at org.apache.spark.scheduler.Task.run(Task.scala:89)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Most likely this is caused by the fact that the {{IgniteRDD.dataType()}} method 
> doesn't honor {{BigDecimal}} and returns {{StructType}} by default. We should 
> fix this and check other possible types as well.

[jira] [Commented] (IGNITE-2929) IgniteContext should not have type parameters

2016-05-19 Thread Valentin Kulichenko (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291369#comment-15291369
 ] 

Valentin Kulichenko commented on IGNITE-2929:
-

Hi, feel free to assign tickets to yourself if you're working on them. If you 
don't have enough permissions, please write an email to dev@ list.

> IgniteContext should not have type parameters
> -
>
> Key: IGNITE-2929
> URL: https://issues.apache.org/jira/browse/IGNITE-2929
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Valentin Kulichenko
> Fix For: 1.7
>
>
> Currently the implementation of {{SparkContext}} has type parameters {{[K, V]}}, 
> which means that all the RDDs created by a particular instance of the context 
> have to be of the same type.
> It looks like the type parameters on {{IgniteContext}} don't make much sense and 
> should be removed. The {{fromCache}} method should be parameterized instead.
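The real API is Scala, but the proposed change is easy to sketch in a language-neutral way: move the type parameters from the context class to the factory method, so a single context can hand out differently typed views. A hypothetical Java rendering (all names invented for illustration):

{code}
/** Stand-in for IgniteRDD / a typed cache view (illustration only). */
interface CacheView<K, V> {
}

/** Before: type parameters on the context force every view to the same <K, V>. */
class TypedContext<K, V> {
    CacheView<K, V> fromCache(String name) {
        return new CacheView<K, V>() {};
    }
}

/** After (the ticket's proposal): an untyped context with a parameterized factory method. */
class UntypedContext {
    <K, V> CacheView<K, V> fromCache(String name) {
        return new CacheView<K, V>() {};
    }
}

class Usage {
    public static void main(String[] args) {
        UntypedContext ctx = new UntypedContext();

        // One context instance can now produce differently typed views.
        CacheView<Integer, String> a = ctx.fromCache("cacheA");
        CacheView<String, Double> b = ctx.fromCache("cacheB");
    }
}
{code}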



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-2929) IgniteContext should not have type parameters

2016-05-19 Thread Valentin Kulichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Valentin Kulichenko reassigned IGNITE-2929:
---

Assignee: Valentin Kulichenko

> IgniteContext should not have type parameters
> -
>
> Key: IGNITE-2929
> URL: https://issues.apache.org/jira/browse/IGNITE-2929
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Valentin Kulichenko
>Assignee: Valentin Kulichenko
> Fix For: 1.7
>
>
> Currently the implementation of {{SparkContext}} has type parameters {{[K, V]}}, 
> which means that all the RDDs created by a particular instance of the context 
> have to be of the same type.
> It looks like the type parameters on {{IgniteContext}} don't make much sense and 
> should be removed. The {{fromCache}} method should be parameterized instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-2929) IgniteContext should not have type parameters

2016-05-19 Thread Valentin Kulichenko (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Valentin Kulichenko updated IGNITE-2929:

Assignee: (was: Valentin Kulichenko)

> IgniteContext should not have type parameters
> -
>
> Key: IGNITE-2929
> URL: https://issues.apache.org/jira/browse/IGNITE-2929
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Valentin Kulichenko
> Fix For: 1.7
>
>
> Currently the implementation of {{SparkContext}} has type parameters {{[K, V]}}, 
> which means that all the RDDs created by a particular instance of the context 
> have to be of the same type.
> It looks like the type parameters on {{IgniteContext}} don't make much sense and 
> should be removed. The {{fromCache}} method should be parameterized instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-3154) More efficient field lookup in binary protocol.

2016-05-19 Thread Dmitry Karachentsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Karachentsev reassigned IGNITE-3154:
---

Assignee: Dmitry Karachentsev  (was: Vladimir Ozerov)

> More efficient field lookup in binary protocol.
> ---
>
> Key: IGNITE-3154
> URL: https://issues.apache.org/jira/browse/IGNITE-3154
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.5.0.final
>Reporter: Vladimir Ozerov
>Assignee: Dmitry Karachentsev
>Priority: Critical
>  Labels: customer
> Fix For: 1.7
>
>
> *Problem*
> Currently, creation of a binary field is performed as follows: 
> {{BinaryObject.type().field(...)}}. The call to {{BinaryObject.type()}} is pretty 
> expensive as it requires a metadata lookup. The interesting thing is that the 
> subsequent call to {{BinaryType.field()}} doesn't require metadata at all. 
> *Solution*
> Implement lazy metadata loading for this case.
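For context, the pattern the ticket describes, shown against the public binary API; the cache name, type name, and field names below are assumed for illustration:

{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryField;
import org.apache.ignite.binary.BinaryObject;

public class BinaryFieldSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, BinaryObject> cache =
                ignite.getOrCreateCache("people").withKeepBinary();

            BinaryObject person = ignite.binary().builder("Person")
                .setField("firstName", "John")
                .setField("lastName", "Doe")
                .build();

            cache.put(1, person);

            BinaryObject stored = cache.get(1);

            // The pattern from the ticket: type() triggers the metadata lookup,
            // while the subsequent field() call does not need metadata at all.
            BinaryField firstName = stored.type().field("firstName");

            // The BinaryField can then be reused across many objects without deserialization.
            System.out.println("firstName = " + firstName.value(stored));
        }
    }
}
{code}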



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2929) IgniteContext should not have type parameters

2016-05-19 Thread Biao Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15291263#comment-15291263
 ] 

Biao Ma commented on IGNITE-2929:
-

Hi,
   Val, I am working on this one. Can you assign it to me?

> IgniteContext should not have type parameters
> -
>
> Key: IGNITE-2929
> URL: https://issues.apache.org/jira/browse/IGNITE-2929
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Valentin Kulichenko
> Fix For: 1.7
>
>
> Currently the implementation of {{SparkContext}} has type parameters {{[K, V]}}, 
> which means that all the RDDs created by a particular instance of the context 
> have to be of the same type.
> It looks like the type parameters on {{IgniteContext}} don't make much sense and 
> should be removed. The {{fromCache}} method should be parameterized instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-3114) "IllegalStateException: Row conflict should never happen" during load test

2016-05-19 Thread Sergey Kozlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kozlov updated IGNITE-3114:
--
Summary: "IllegalStateException: Row conflict should never happen" during 
load test  (was: IllegalStateException: Row conflict should never happen" 
during load test)

> "IllegalStateException: Row conflict should never happen" during load test
> --
>
> Key: IGNITE-3114
> URL: https://issues.apache.org/jira/browse/IGNITE-3114
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6
>Reporter: Ksenia Rybakova
>Priority: Minor
> Attachments: benchmark-cache-load.properties, logs.zip
>
>
> Configuration:
> - 3 drivers at 1 host, 30 servers at 3 hosts;
> - all operations are enabled except SCAN_QUERY, SQL_QUERY and CONTINUOUS_QUERY
> - for other settings see attached benchmark-cache-load.properties file
> Steps to reproduce:
> 1) Run load test:
>  ./bin/benchmark-run-all.sh config/benchmark-cache-load.properties
> 2) Check all server log files. 
> Note: In my case I noticed that all these exceptions happened only at 20th 
> server (1st server node at the 3rd host), so see 
> 20160511-ignite-1.6.0-SNAPSHOT-37c03c22-pr670-c3-s40/logs/servers/10.20.0.223/logs-20160511-040658/logs_servers/040710_id20_10.20.0.223.log
>  in attached archive.
> Expected:
> No exceptions.
> Actual:
> "java.lang.IllegalStateException: Row conflict should never happen, unique 
> indexes are not supported" exceptions accur while running load test 
> benchmark. 
> java.lang.IllegalStateException: Row conflict should never happen, unique 
> indexes are not supported.
> at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:410)
> at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:340)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:524)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:700)
> at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:407)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateIndex(GridCacheMapEntry.java:3849)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:3309)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:1618)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:50)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:80)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1219)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:847)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:105)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:810)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-3114) IllegalStateException: Row conflict should never happen" during load test

2016-05-19 Thread Sergey Kozlov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Kozlov updated IGNITE-3114:
--
Summary: IllegalStateException: Row conflict should never happen" during 
load test  (was: "IllegalStateException: Row conflict should never happen" 
during load test)

> IllegalStateException: Row conflict should never happen" during load test
> -
>
> Key: IGNITE-3114
> URL: https://issues.apache.org/jira/browse/IGNITE-3114
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.6
>Reporter: Ksenia Rybakova
>Priority: Minor
> Attachments: benchmark-cache-load.properties, logs.zip
>
>
> Configuration:
> - 3 drivers at 1 host, 30 servers at 3 hosts;
> - all operations are enabled except SCAN_QUERY, SQL_QUERY and CONTINUOUS_QUERY
> - for other settings see attached benchmark-cache-load.properties file
> Steps to reproduce:
> 1) Run load test:
>  ./bin/benchmark-run-all.sh config/benchmark-cache-load.properties
> 2) Check all server log files. 
> Note: In my case I noticed that all these exceptions happened only at 20th 
> server (1st server node at the 3rd host), so see 
> 20160511-ignite-1.6.0-SNAPSHOT-37c03c22-pr670-c3-s40/logs/servers/10.20.0.223/logs-20160511-040658/logs_servers/040710_id20_10.20.0.223.log
>  in attached archive.
> Expected:
> No exceptions.
> Actual:
> "java.lang.IllegalStateException: Row conflict should never happen, unique 
> indexes are not supported" exceptions accur while running load test 
> benchmark. 
> java.lang.IllegalStateException: Row conflict should never happen, unique 
> indexes are not supported.
> at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.doUpdate(GridH2Table.java:410)
> at 
> org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:340)
> at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:524)
> at 
> org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:700)
> at 
> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:407)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.updateIndex(GridCacheMapEntry.java:3849)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.initialValue(GridCacheMapEntry.java:3309)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:1618)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:305)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:50)
> at 
> org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:80)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1219)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:847)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:105)
> at 
> org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:810)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-3154) More efficient field lookup in binary protocol.

2016-05-19 Thread Dmitry Karachentsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Karachentsev reassigned IGNITE-3154:
---

Assignee: Vladimir Ozerov  (was: Dmitry Karachentsev)

> More efficient field lookup in binary protocol.
> ---
>
> Key: IGNITE-3154
> URL: https://issues.apache.org/jira/browse/IGNITE-3154
> Project: Ignite
>  Issue Type: Task
>  Components: general
>Affects Versions: 1.5.0.final
>Reporter: Vladimir Ozerov
>Assignee: Vladimir Ozerov
>Priority: Critical
>  Labels: customer
> Fix For: 1.7
>
>
> *Problem*
> Currently, creation of a binary field is performed as follows: 
> {{BinaryObject.type().field(...)}}. The call to {{BinaryObject.type()}} is pretty 
> expensive as it requires a metadata lookup. The interesting thing is that the 
> subsequent call to {{BinaryType.field()}} doesn't require metadata at all. 
> *Solution*
> Implement lazy metadata loading for this case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-3176) Need to create gc log for each client separately [ yardstick-ignite ]

2016-05-19 Thread Ilya Suntsov (JIRA)
Ilya Suntsov created IGNITE-3176:


 Summary: Need to create gc log for each client separately [ 
yardstick-ignite ]
 Key: IGNITE-3176
 URL: https://issues.apache.org/jira/browse/IGNITE-3176
 Project: Ignite
  Issue Type: Bug
  Components: clients, general
Affects Versions: 1.6
Reporter: Ilya Suntsov
Priority: Critical
 Fix For: 1.7


When more than one client/server is started on one host, yardstick re-writes the 
GC logs.

The GC options contained in the *.properties files:
{noformat}
now0=`date +'%H%M%S'`
# JVM options.
JVM_OPTS=${JVM_OPTS}" -DIGNITE_QUIET=false"
# Uncomment to enable concurrent garbage collection (GC) if you encounter long 
GC pauses.
JVM_OPTS=${JVM_OPTS}" \
-Xloggc:./gc${now0}.log \
-XX:+PrintGCDetails \
-verbose:gc \
-XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC \
-XX:+UseTLAB \
-XX:NewSize=128m \
-XX:MaxNewSize=128m \
-XX:MaxTenuringThreshold=0 \
-XX:SurvivorRatio=1024 \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:CMSInitiatingOccupancyFraction=60 \
{noformat}

As you can see, only one log file will be created, and if you start another 
driver/server with the same properties file it will be re-written.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-3159) WebSession: Incorrect handling of HttpServletRequest.getRequestedSessionId.

2016-05-19 Thread Dmitry Karachentsev (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Karachentsev resolved IGNITE-3159.
-
Resolution: Won't Fix
  Assignee: Vladimir Ozerov  (was: Dmitry Karachentsev)

> WebSession: Incorrect handling of HttpServletRequest.getRequestedSessionId.
> ---
>
> Key: IGNITE-3159
> URL: https://issues.apache.org/jira/browse/IGNITE-3159
> Project: Ignite
>  Issue Type: Bug
>  Components: websession
>Affects Versions: 1.5.0.final
>Reporter: Vladimir Ozerov
>Assignee: Vladimir Ozerov
> Fix For: 1.7
>
>
> {{WebSessionFilter}} uses the HttpServletRequest.getRequestedSessionId() method 
> to get the session ID.
> However, the specification says that this method might return an ID which is 
> different from the ID of the currently active session, e.g. when the request is 
> performed with the ID of an already invalidated session. But we never account 
> for this and pass this session ID on to our session.
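To illustrate the distinction the description relies on, using only the standard Servlet API (this is not Ignite code): the requested ID is whatever the client sent in its cookie or URL and may refer to an already-invalidated session, while the active session ID comes from the container.

{code}
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class SessionIdDemoServlet extends HttpServlet {
    @Override protected void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
        // ID the client sent (cookie/URL); it may belong to an invalidated session.
        String requestedId = req.getRequestedSessionId();

        // Whether that ID still maps to a live session in this container.
        boolean valid = req.isRequestedSessionIdValid();

        // The session that is actually active for this request (created if absent).
        HttpSession ses = req.getSession(true);

        res.getWriter().println("requested=" + requestedId
            + ", valid=" + valid
            + ", active=" + ses.getId());
    }
}
{code}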



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3159) WebSession: Incorrect handling of HttpServletRequest.getRequestedSessionId.

2016-05-19 Thread Dmitry Karachentsev (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290985#comment-15290985
 ] 

Dmitry Karachentsev commented on IGNITE-3159:
-

The current algorithm should be left as is, because there is no way to know whether 
the session was invalidated (e.g. in a previous filter) or whether the request came 
to AS2 while the session was created on AS1. In that case the correct behavior is to 
check whether the requested session is present in the cache, as the filter does now.

> WebSession: Incorrect handling of HttpServletRequest.getRequestedSessionId.
> ---
>
> Key: IGNITE-3159
> URL: https://issues.apache.org/jira/browse/IGNITE-3159
> Project: Ignite
>  Issue Type: Bug
>  Components: websession
>Affects Versions: 1.5.0.final
>Reporter: Vladimir Ozerov
>Assignee: Dmitry Karachentsev
> Fix For: 1.7
>
>
> {{WebSessionFilter}} uses the HttpServletRequest.getRequestedSessionId() method 
> to get the session ID.
> However, the specification says that this method might return an ID which is 
> different from the ID of the currently active session, e.g. when the request is 
> performed with the ID of an already invalidated session. But we never account 
> for this and pass this session ID on to our session.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (IGNITE-3148) No description for package org.apache.ignite.cache.jta.websphere in javadoc

2016-05-19 Thread Ilya Suntsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Suntsov closed IGNITE-3148.


Fix confirmed

> No description for package org.apache.ignite.cache.jta.websphere in javadoc
> ---
>
> Key: IGNITE-3148
> URL: https://issues.apache.org/jira/browse/IGNITE-3148
> Project: Ignite
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.6
>Reporter: Ilya Suntsov
>Assignee: Ilya Suntsov
>Priority: Minor
> Fix For: 1.6
>
>
> Need to add description for package
> {noformat} org.apache.ignite.cache.jta.websphere {noformat}
> in javadoc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (IGNITE-3070) Failed to fetch data from node at GridReduceQueryExecutor [load test, yardstick]

2016-05-19 Thread Ilya Suntsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ilya Suntsov updated IGNITE-3070:
-
Affects Version/s: 1.6

>  Failed to fetch data from node at GridReduceQueryExecutor [load test, 
> yardstick]
> -
>
> Key: IGNITE-3070
> URL: https://issues.apache.org/jira/browse/IGNITE-3070
> Project: Ignite
>  Issue Type: Bug
>  Components: cache, general
>Affects Versions: 1.6
> Environment: 1 host: Cent OS 5, jdk 1.7
> 1 yardstick driver, 3 yardstick servers
>Reporter: Ilya Suntsov
>Priority: Critical
> Attachments: logs-bug-load.zip
>
>
> I ran a load test with the following parameters:
>  - 1 backup
>  - key range equal 1 000 000
>  - warmup 60 sec
>  - duration 300 sec
>  - preloading amount  500 000
>  - 64 threads
>  - primary sync mode 
> Additional parameters:
> {noformat}
> --allow-operation PUT --allow-operation PUT_ALL --allow-operation GET 
> --allow-operation GET_ALL --allow-operation INVOKE --allow-operation 
> INVOKE_ALL --allow-operation REMOVE \
> --allow-operation REMOVE_ALL --allow-operation PUT_IF_ABSENT 
> --allow-operation REPLACE --allow-operation SCAN_QUERY --allow-operation 
> SQL_QUERY --allow-operation CONTINUOUS_QUERY {noformat}
> Queries file:
> {noformat}
> SELECT Person.firstName  FROM "query".Person, "orgCache".Organization WHERE 
> Person.orgId = Organization.id AND lower(Organization.name) = 
> lower('Organization 55')
> SELECT Organization.name  FROM "orgCache".Organization WHERE 
> lower(Organization.name) LIKE lower('%55%')
> {noformat}
> Other parameters and cache configurations can be found in the attachment.
> I got the following exception on the yardstick driver:
> {noformat}
> ERROR: Shutting down benchmark driver to unexpected exception.
> Type '--help' for usage.
> javax.cache.CacheException: Failed to fetch data from node: 
> 25328521-72be-4d42-8ba1-98a408558900
> <-->at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor$3.fetchNextPage(GridReduceQueryExecutor.java:290)
> <-->at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndex.fetchNextPage(GridMergeIndex.java:229)
> <-->at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndexUnsorted$1.hasNext(GridMergeIndexUnsorted.java:106)
> <-->at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndex$IteratorCursor.next(GridMergeIndex.java:351)
> <-->at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridMergeIndex$FetchingCursor.next(GridMergeIndex.java:382)
> <-->at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.query(GridReduceQueryExecutor.java:614)
> <-->at 
> org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing$2.iterator(IgniteH2Indexing.java:956)
> <-->at 
> org.apache.ignite.internal.processors.cache.QueryCursorImpl.iterator(QueryCursorImpl.java:61)
> <-->at 
> org.apache.ignite.yardstick.cache.load.IgniteCacheRandomOperationBenchmark.doSqlQuery(IgniteCacheRandomOperationBenchmark.java:835)
> <-->at 
> org.apache.ignite.yardstick.cache.load.IgniteCacheRandomOperationBenchmark.executeRandomOperation(IgniteCacheRandomOperationBenchmark.java:553)
> <-->at 
> org.apache.ignite.yardstick.cache.load.IgniteCacheRandomOperationBenchmark.executeOutOfTx(IgniteCacheRandomOperationBenchmark.java:496)
> <-->at 
> org.apache.ignite.yardstick.cache.load.IgniteCacheRandomOperationBenchmark.test(IgniteCacheRandomOperationBenchmark.java:152)
> <-->at 
> org.yardstickframework.impl.BenchmarkRunner$2.run(BenchmarkRunner.java:176)
> <-->at java.lang.Thread.run(Thread.java:745)
> <-->Suppressed: javax.cache.CacheException: Failed to execute map query 
> on the node: b0e43304-0a86-43cf-b14c-20a3753a85a6, class 
> javax.cache.CacheException:No query result found for request: 
> GridQueryNextPageRequest [qryReqId=108, qry=0, pageSize=1024]
> <--><-->at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.fail(GridReduceQueryExecutor.java:257)
> <--><-->at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onFail(GridReduceQueryExecutor.java:247)
> <--><-->at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor.onMessage(GridReduceQueryExecutor.java:228)
> <--><-->at 
> org.apache.ignite.internal.processors.query.h2.twostep.GridReduceQueryExecutor$1.onMessage(GridReduceQueryExecutor.java:176)
> <--><-->at 
> org.apache.ignite.internal.managers.communication.GridIoManager$ArrayListener.onMessage(GridIoManager.java:2039)
> <--><-->at 
> 

[jira] [Assigned] (IGNITE-3175) BigDecimal fields are not supported if query is executed from IgniteRDD

2016-05-19 Thread Taras Ledkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov reassigned IGNITE-3175:


Assignee: Taras Ledkov  (was: Valentin Kulichenko)

> BigDecimal fields are not supported if query is executed from IgniteRDD
> ---
>
> Key: IGNITE-3175
> URL: https://issues.apache.org/jira/browse/IGNITE-3175
> Project: Ignite
>  Issue Type: Bug
>  Components: Ignite RDD
>Affects Versions: 1.5.0.final
>Reporter: Valentin Kulichenko
>Assignee: Taras Ledkov
> Fix For: 1.7
>
>
> If one of the fields participating in the query is {{BigDecimal}}, the query 
> will fail when executed from {{IgniteRDD}} with the following error:
> {noformat}
> scala.MatchError: 1124757 (of class java.math.BigDecimal)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:255)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
>   at 
> org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
>   at 
> org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
>   at 
> org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:505)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.(TungstenAggregationIterator.scala:686)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
>   at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>   at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   at org.apache.spark.scheduler.Task.run(Task.scala:89)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Most likely this is caused by the fact that the {{IgniteRDD.dataType()}} method 
> doesn't honor {{BigDecimal}} and returns {{StructType}} by default. We should 
> fix this and check other possible types as well.
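For illustration, the fix direction hinted at above amounts to mapping java.math.BigDecimal to Spark SQL's DecimalType when the schema is built. A minimal Java sketch of that kind of mapping (a hypothetical helper, not the actual Scala IgniteRDD.dataType() code; the fallback branch is simplified):
{code}
import java.math.BigDecimal;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;

// Hypothetical helper: shows the missing case from the ticket, i.e. BigDecimal
// must be translated to a DecimalType instead of falling into the default branch.
public class SparkTypeMapping {
    public static DataType dataType(Class<?> cls) {
        if (cls == String.class)
            return DataTypes.StringType;
        else if (cls == Integer.class || cls == int.class)
            return DataTypes.IntegerType;
        else if (cls == Long.class || cls == long.class)
            return DataTypes.LongType;
        else if (cls == Double.class || cls == double.class)
            return DataTypes.DoubleType;
        else if (BigDecimal.class.isAssignableFrom(cls))
            return DataTypes.createDecimalType(); // the case the ticket asks for
        else
            return DataTypes.StringType; // simplified; the real code builds a StructType here
    }
}
{code}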



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-3168) Ignite mesos framework should provide ability to configure timeouts.

2016-05-19 Thread Nikolay Tikhonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nikolay Tikhonov reassigned IGNITE-3168:


Assignee: Nikolay Tikhonov

> Ignite mesos framework should provide ability to configure timeouts.
> 
>
> Key: IGNITE-3168
> URL: https://issues.apache.org/jira/browse/IGNITE-3168
> Project: Ignite
>  Issue Type: Improvement
>Affects Versions: 1.5.0.final
>Reporter: Nikolay Tikhonov
>Assignee: Nikolay Tikhonov
>  Labels: important
> Fix For: 1.7
>
>
> Need to add properties to the ClusterProperties class that allow configuring 
> Jetty timeouts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-2865) Continuous query event passed to filter should be immutable for users.

2016-05-19 Thread Taras Ledkov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-2865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Taras Ledkov reassigned IGNITE-2865:


Assignee: Taras Ledkov

> Continuous query event passed to filter should be immutable for users.
> --
>
> Key: IGNITE-2865
> URL: https://issues.apache.org/jira/browse/IGNITE-2865
> Project: Ignite
>  Issue Type: Task
>  Components: cache
>Affects Versions: 1.5.0.final
>Reporter: Vladimir Ozerov
>Assignee: Taras Ledkov
>Priority: Critical
>  Labels: community, important
> Fix For: 1.7
>
>
> *Problem*
> When an event is passed to a continuous query filter, it can be used only within 
> the scope of that method. The reason is that if the filter returns {{false}}, the 
> method {{CacheContinuousQueryEntry.markFiltered()}} is called. This method 
> *clears* the key and values.
> *Solution*
> We should not clear the key and values. Instead, we should properly check for the 
> {{FILTERED_ENTRY}} flag in all methods where {{key/newVal/oldVal/depInfo}} 
> are used. This includes the generated {{readFrom()/writeTo()}} methods as well - 
> they will have to be changed manually.
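To see why this matters to users, here is a minimal user-side remote filter sketch (illustrative only, not Ignite internals; the class and key/value types are assumptions): reading the event's key and value inside evaluate() must remain safe no matter what the filter returns.
{code}
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryListenerException;
import org.apache.ignite.cache.CacheEntryEventSerializableFilter;

// Illustrative user-side filter. The ticket is about making sure that the event
// (its key and values) stays readable even when the filter returns false.
public class EvenKeyFilter implements CacheEntryEventSerializableFilter<Integer, String> {
    @Override public boolean evaluate(CacheEntryEvent<? extends Integer, ? extends String> evt)
        throws CacheEntryListenerException {
        // Reading the key/value here must be safe regardless of the returned result.
        Integer key = evt.getKey();
        String val = evt.getValue();

        return key != null && key % 2 == 0 && val != null;
    }
}
{code}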



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3009) querySql sometimes fails in Ignite RDD embedded mode test

2016-05-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290785#comment-15290785
 ] 

ASF GitHub Bot commented on IGNITE-3009:


GitHub user tledkov-gridgain opened a pull request:

https://github.com/apache/ignite/pull/734

IGNITE-3009  querySql sometimes fails in Ignite RDD embedded mode test

Sets the data streamer's overwrite=true flag for the tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-3009

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/734.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #734


commit 72dca2a7e9e2c9493e218cbb6ebd2600eb66fdcd
Author: tledkov-gridgain 
Date:   2016-05-19T09:31:04Z

IGNITE-3009  querySql sometimes fails in Ignite RDD embedded mode test; 
sets the overwrite=true data streamer's flag for the tests




> querySql sometimes fails in Ignite RDD embedded mode test
> -
>
> Key: IGNITE-3009
> URL: https://issues.apache.org/jira/browse/IGNITE-3009
> Project: Ignite
>  Issue Type: Bug
>  Components: Ignite RDD
>Affects Versions: 1.5.0.final
>Reporter: Alexei Scherbakov
>Assignee: Taras Ledkov
> Fix For: 1.7
>
>
> JavaEmbeddedIgniteRDDSelfTest.testQueryObjectsFromIgnite
> occasionally fails at line 215 on the objectSql query.
> If a cache size request is made before the query (currently that code is 
> commented out), everything works fine.
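For reference, the overwrite flag mentioned in the pull request is set on the data streamer itself. A minimal sketch, assuming a pre-configured cache named "partitioned" (the cache name is a placeholder, not taken from the actual test):
{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerOverwriteSketch {
    public static void main(String[] args) {
        // Start a node and stream data with overwrite enabled so that repeated
        // keys replace existing cache entries instead of being skipped.
        try (Ignite ignite = Ignition.start()) {
            try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("partitioned")) {
                streamer.allowOverwrite(true); // without this, existing keys are not updated

                for (int i = 0; i < 1000; i++)
                    streamer.addData(i, "value-" + i);
            }
        }
    }
}
{code}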



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-3161) WebSession: Session must be created on demand, not always.

2016-05-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290759#comment-15290759
 ] 

ASF GitHub Bot commented on IGNITE-3161:


GitHub user dkarachentsev opened a pull request:

https://github.com/apache/ignite/pull/733

IGNITE-3161 - WebSession: Session must be created on demand, not alwa…

…ys. Fix.

IGNITE-3160 - WebSession: Incorrect session ID change logic. Fix.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-3161

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/733.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #733


commit a94b8ffefef7e6098269e79e41e0cb9327225be4
Author: dkarachentsev 
Date:   2016-05-19T09:15:53Z

IGNITE-3161 - WebSession: Session must be created on demand, not always. 
Fix.
IGNITE-3160 - WebSession: Incorrect session ID change logic. Fix.




> WebSession: Session must be created on demand, not always.
> --
>
> Key: IGNITE-3161
> URL: https://issues.apache.org/jira/browse/IGNITE-3161
> Project: Ignite
>  Issue Type: Bug
>  Components: websession
>Affects Versions: 1.5.0.final
>Reporter: Vladimir Ozerov
>Assignee: Dmitry Karachentsev
>Priority: Critical
> Fix For: 1.7
>
>
> Our filter always creates a new session (both in Ignite and in the container). 
> This is wrong, as a session must be created only when it is explicitly requested 
> through the {{HttpRequest.getSession(true)}} call.
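An illustrative sketch of the requested on-demand behavior (this is not the Ignite WebSessionFilter code; the class name is hypothetical): a filter should look up an existing session with getSession(false) and leave creation to an explicit getSession(true) call in application code.
{code}
import java.io.IOException;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public class LazySessionFilter implements Filter {
    @Override public void init(FilterConfig cfg) { /* no-op */ }

    @Override public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
        throws IOException, ServletException {
        HttpServletRequest httpReq = (HttpServletRequest)req;

        // getSession(false) only looks up an existing session and never creates one;
        // a new session should appear only when the application calls getSession(true).
        HttpSession existing = httpReq.getSession(false);

        if (existing != null) {
            // Work with the already-created session here (e.g. mirror it into a cache).
        }

        chain.doFilter(req, res);
    }

    @Override public void destroy() { /* no-op */ }
}
{code}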



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2667) Allow to start caches in PRIVATE and ISOLATED deployment modes when BinaryMarshaller is used

2016-05-19 Thread Vladislav Pyatkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290745#comment-15290745
 ] 

Vladislav Pyatkov commented on IGNITE-2667:
---

I applied the patch and created a new pull request.
Can you please review it after running the TC tests?

> Allow to start caches in PRIVATE and ISOLATED deployment modes when 
> BinaryMarshaller is used
> 
>
> Key: IGNITE-2667
> URL: https://issues.apache.org/jira/browse/IGNITE-2667
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>  Labels: community
> Fix For: 1.7
>
> Attachments: modification.patch
>
>
> Refer to this discussion for details:
> http://apache-ignite-developers.2346864.n4.nabble.com/Fwd-Distributed-queue-problem-with-peerClassLoading-enabled-td4521.html
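For context, a minimal configuration sketch of the combination this ticket is about, assuming BinaryMarshaller is left as the default marshaller (so it is not configured explicitly):
{code}
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DeploymentMode;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PrivateDeploymentSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // A restricted deployment mode together with peer class loading;
        // BinaryMarshaller is the default marshaller, so it is not set here.
        cfg.setDeploymentMode(DeploymentMode.PRIVATE); // or DeploymentMode.ISOLATED
        cfg.setPeerClassLoadingEnabled(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Caches started on this node should be allowed to work in the
            // PRIVATE/ISOLATED deployment modes once the fix is in place.
        }
    }
}
{code}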



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2667) Allow to start caches in PRIVATE and ISOLATED deployment modes when BinaryMarshaller is used

2016-05-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290733#comment-15290733
 ] 

ASF GitHub Bot commented on IGNITE-2667:


GitHub user vldpyatkov opened a pull request:

https://github.com/apache/ignite/pull/732

IGNITE-2667

Allow to start caches in PRIVATE and ISOLATED deployment modes when 
BinaryMarshaller is used

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vldpyatkov/ignite ignite-2667

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/732.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #732


commit 3d31646fe472d21a84243ea30784ac62e8e17e67
Author: vdpyatkov 
Date:   2016-05-19T08:54:33Z

IGNITE-2667
Allow to start caches in PRIVATE and ISOLATED deployment modes when 
BinaryMarshaller is used




> Allow to start caches in PRIVATE and ISOLATED deployment modes when 
> BinaryMarshaller is used
> 
>
> Key: IGNITE-2667
> URL: https://issues.apache.org/jira/browse/IGNITE-2667
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>  Labels: community
> Fix For: 1.7
>
> Attachments: modification.patch
>
>
> Refer to this discussion for details:
> http://apache-ignite-developers.2346864.n4.nabble.com/Fwd-Distributed-queue-problem-with-peerClassLoading-enabled-td4521.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2667) Allow to start caches in PRIVATE and ISOLATED deployment modes when BinaryMarshaller is used

2016-05-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290730#comment-15290730
 ] 

ASF GitHub Bot commented on IGNITE-2667:


Github user vldpyatkov closed the pull request at:

https://github.com/apache/ignite/pull/718


> Allow to start caches in PRIVATE and ISOLATED deployment modes when 
> BinaryMarshaller is used
> 
>
> Key: IGNITE-2667
> URL: https://issues.apache.org/jira/browse/IGNITE-2667
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>  Labels: community
> Fix For: 1.7
>
> Attachments: modification.patch
>
>
> Refer to this discussion for details:
> http://apache-ignite-developers.2346864.n4.nabble.com/Fwd-Distributed-queue-problem-with-peerClassLoading-enabled-td4521.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (IGNITE-3012) Filtration of default cache doesn't work

2016-05-19 Thread Alexey Kuznetsov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Kuznetsov reassigned IGNITE-3012:


Assignee: Alexey Kuznetsov

> Filtration of default cache doesn't work
> 
>
> Key: IGNITE-3012
> URL: https://issues.apache.org/jira/browse/IGNITE-3012
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Pavel Konstantinov
>Assignee: Alexey Kuznetsov
>Priority: Minor
> Fix For: 1.7
>
>
> If the grid contains a default cache (a cache without a name, name=null) then the 
> user has no way to find it using the Filter. The default cache is displayed as 
> <default>, but it cannot be found by entering the same <default> string into 
> the Filter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-3175) BigDecimal fields are not supported if query is executed from IgniteRDD

2016-05-19 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-3175:
---

 Summary: BigDecimal fields are not supported if query is executed 
from IgniteRDD
 Key: IGNITE-3175
 URL: https://issues.apache.org/jira/browse/IGNITE-3175
 Project: Ignite
  Issue Type: Bug
  Components: Ignite RDD
Affects Versions: 1.5.0.final
Reporter: Valentin Kulichenko
Assignee: Valentin Kulichenko
 Fix For: 1.7


If one of the fields participating in the query is {{BigDecimal}}, the query 
will fail when executed from {{IgniteRDD}} with the following error:
{noformat}
scala.MatchError: 1124757 (of class java.math.BigDecimal)
at 
org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:255)
at 
org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
at 
org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
at 
org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:260)
at 
org.apache.spark.sql.catalyst.CatalystTypeConverters$StructConverter.toCatalystImpl(CatalystTypeConverters.scala:250)
at 
org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:102)
at 
org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:401)
at 
org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
at 
org.apache.spark.sql.SQLContext$$anonfun$6.apply(SQLContext.scala:492)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at 
org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.processInputs(TungstenAggregationIterator.scala:505)
at 
org.apache.spark.sql.execution.aggregate.TungstenAggregationIterator.<init>(TungstenAggregationIterator.scala:686)
at 
org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:95)
at 
org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
{noformat}

Most likely this is caused by the fact that the {{IgniteRDD.dataType()}} method 
doesn't honor {{BigDecimal}} and returns {{StructType}} by default. We should 
fix this and check other possible types as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2667) Allow to start caches in PRIVATE and ISOLATED deployment modes when BinaryMarshaller is used

2016-05-19 Thread Vladislav Pyatkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290654#comment-15290654
 ] 

Vladislav Pyatkov commented on IGNITE-2667:
---

Thanks for the review, but the patch contains both my changes and yours together, 
so it cannot be merged into my branch automatically.
I will need some time to do the merge.

> Allow to start caches in PRIVATE and ISOLATED deployment modes when 
> BinaryMarshaller is used
> 
>
> Key: IGNITE-2667
> URL: https://issues.apache.org/jira/browse/IGNITE-2667
> Project: Ignite
>  Issue Type: Bug
>Affects Versions: 1.5.0.final
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>  Labels: community
> Fix For: 1.7
>
> Attachments: modification.patch
>
>
> Refer to this discussion for details:
> http://apache-ignite-developers.2346864.n4.nabble.com/Fwd-Distributed-queue-problem-with-peerClassLoading-enabled-td4521.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (IGNITE-3145) Display in metadata list cache schema name instead of cache name if schema present in cache configuration

2016-05-19 Thread Andrey Novikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/IGNITE-3145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Novikov resolved IGNITE-3145.

Resolution: Fixed
  Assignee: Pavel Konstantinov  (was: Andrey Novikov)

Now sqlSchema is displayed as the cache name in the tree, and a cache name leaf 
was added.
Pavel, please test.

> Display in metadata list cache schema name instead of cache name if schema 
> present in cache configuration 
> --
>
> Key: IGNITE-3145
> URL: https://issues.apache.org/jira/browse/IGNITE-3145
> Project: Ignite
>  Issue Type: Sub-task
>Reporter: Pavel Konstantinov
>Assignee: Pavel Konstantinov
>Priority: Minor
> Fix For: 1.7
>
>
> Currently we display the cache name, but sort alphabetically by the cache 
> schema name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-19 Thread Vladislav Pyatkov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290632#comment-15290632
 ] 

Vladislav Pyatkov commented on IGNITE-2655:
---

Added a new backup filter to FairAffinityFunction and RendezvousAffinityFunction.
Could anyone review it?

> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either the "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} that 
> works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will 
> only guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method
> {{AffinityBackupFilter.isAssignable(Node n, List assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the first one being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.
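For context, a sketch of the existing backupFilter mentioned above, using a user-defined "RACK" node attribute (the attribute name and cache name are assumptions for this example); as the ticket explains, this approach only holds up when a cache has a single backup:
{code}
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgniteBiPredicate;

public class RackAwareAffinitySketch {
    // Reject a backup candidate that sits in the same "RACK" as the primary.
    public static CacheConfiguration<Integer, String> cacheCfg() {
        RendezvousAffinityFunction aff = new RendezvousAffinityFunction();

        aff.setBackupFilter(new IgniteBiPredicate<ClusterNode, ClusterNode>() {
            @Override public boolean apply(ClusterNode primary, ClusterNode backup) {
                // "RACK" is a user-defined node attribute (e.g. passed as -DRACK=rack1).
                Object primaryRack = primary.attribute("RACK");
                Object backupRack = backup.attribute("RACK");

                return primaryRack == null || !primaryRack.equals(backupRack);
            }
        });

        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("rackAwareCache");

        ccfg.setBackups(1); // the single-backup case where backupFilter is sufficient
        ccfg.setAffinity(aff);

        return ccfg;
    }
}
{code}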



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2528) Deadlocks caused by Ignite.close()

2016-05-19 Thread Alexei Scherbakov (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290631#comment-15290631
 ] 

Alexei Scherbakov commented on IGNITE-2528:
---

Pull request

https://github.com/apache/ignite/pull/730

> Deadlocks caused by Ignite.close()
> --
>
> Key: IGNITE-2528
> URL: https://issues.apache.org/jira/browse/IGNITE-2528
> Project: Ignite
>  Issue Type: Bug
>  Components: general
>Affects Versions: 1.5.0.final
>Reporter: Denis Magda
>Assignee: Alexei Scherbakov
> Fix For: 1.7
>
>
> If an Ignite instance starts stopping while {{cluster.nodes()}} is executed from 
> an {{EntryProcessor}} or some other place in the code that holds a lock on a 
> cache's gateway, this can lead to a deadlock:
> Ignite.close:
> - holds kernel.gateway lock;
> - tries to get a gateway lock on cache A;
> Entry.processor is called for cache A:
> - a gateway lock is acquired for cache A;
> - calling {{cluster.nodes()}};
> - trying to acquire kernel's gateway lock.
> To fix this deadlock we can do the following:
> - introduce a volatile variable that has to be set to 'true' when a node is 
> being stopped;
> - check this variable before acquiring kernel's gateway.
> It probably also makes sense to use a try-lock here.
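A purely illustrative sketch of the proposed stop-flag approach; all class, field and method names below are hypothetical and do not correspond to actual Ignite internals:
{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class StopFlagSketch {
    private final ReentrantReadWriteLock kernalGateway = new ReentrantReadWriteLock();

    // Volatile flag set by close() before it starts acquiring cache gateways.
    private volatile boolean stopping;

    public void close() {
        stopping = true; // let in-flight operations bail out instead of blocking

        kernalGateway.writeLock().lock();
        try {
            // ... stop caches, acquiring their gateways ...
        }
        finally {
            kernalGateway.writeLock().unlock();
        }
    }

    public void clusterNodes() {
        // Check the flag first so a thread that already holds a cache gateway
        // does not block on the kernal gateway while close() waits for that cache.
        if (stopping)
            throw new IllegalStateException("Node is stopping.");

        kernalGateway.readLock().lock();
        try {
            // ... read topology ...
        }
        finally {
            kernalGateway.readLock().unlock();
        }
    }
}
{code}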



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (IGNITE-2655) AffinityFunction: primary and backup copies in different locations

2016-05-19 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/IGNITE-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15290612#comment-15290612
 ] 

ASF GitHub Bot commented on IGNITE-2655:


GitHub user vldpyatkov opened a pull request:

https://github.com/apache/ignite/pull/731

IGNITE-2655

AffinityFunction: primary and backup copies in different locations
Add affinity backup filter to FairAffinityFunction and 
RendezvousAffinityFunction

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vldpyatkov/ignite ignite-2655

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/731.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #731


commit a3276ab8210a41495dca1f7f1c50db4036bd1afa
Author: vdpyatkov 
Date:   2016-05-19T07:19:05Z

IGNITE-2655
AffinityFunction: primary and backup copies in different locations
Add affinity backup filter to FairAffinityFunction and 
RendezvousAffinityFunction




> AffinityFunction: primary and backup copies in different locations
> --
>
> Key: IGNITE-2655
> URL: https://issues.apache.org/jira/browse/IGNITE-2655
> Project: Ignite
>  Issue Type: Bug
>Reporter: Denis Magda
>Assignee: Vladislav Pyatkov
>Priority: Critical
>  Labels: important
> Fix For: 1.7
>
>
> There is a use case when primary and backup copies have to be located in 
> different racks, buildings, cities, etc.
> A simple scenario is the following. When nodes are started they will have 
> either the "rack1" or "rack2" value in their attributes list, and we will enforce 
> that the backups won't be selected among the nodes with the same attribute.
> It should be possible to filter out backups using IP addresses as well.
> Presently the rendezvous and fair affinity functions have a {{backupFilter}} that 
> works perfectly for the scenario above, but only when the number of 
> backups for a cache is equal to 1.
> When the number of backups is bigger than one, {{backupFilter}} will 
> only guarantee that the primary is located in a different location but will NOT 
> guarantee that all the backups are spread out across different locations as 
> well.
> So we need to provide an API that allows spreading the primary and ALL 
> backup copies across different locations.
> The proposal is to introduce {{AffinityBackupFilter}} with the following 
> method
> {{AffinityBackupFilter.isAssignable(Node n, List assigned)}}
> where n is the potential backup to check and assigned is the list of current 
> partition holders, the first one being the primary.
> {{AffinityBackupFilter}} will be set using 
> {{affinity.setAffinityBackupFilter}}.
> {{Affinity.setBackupFilter}} has to be deprecated.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)