[GitHub] ignite pull request: IGNITE-2775: Fixed HttpRequest.changeSessionI...

2016-03-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/539


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Re: Cassandra cache store [IGNITE-1371]

2016-03-09 Thread irudyak
Hi guys, I am quite busy now. It looks like I'll only be able to start
implementing PersistenceCallback in a month.



--
View this message in context: 
http://apache-ignite-developers.2346864.n4.nabble.com/Cassandra-cache-store-IGNITE-1371-tp6880p7827.html
Sent from the Apache Ignite Developers mailing list archive at Nabble.com.


[jira] [Created] (IGNITE-2788) Redis API for Ignite to work with data via the Redis protocol

2016-03-09 Thread Roman Shtykh (JIRA)
Roman Shtykh created IGNITE-2788:


 Summary: Redis API for Ignite to work with data via the Redis 
protocol
 Key: IGNITE-2788
 URL: https://issues.apache.org/jira/browse/IGNITE-2788
 Project: Ignite
  Issue Type: New Feature
Reporter: Roman Shtykh


Introduce a Redis API that speaks the Redis protocol but uses the Ignite grid 
as the backing store.

Needless to say, Redis is an extremely popular caching solution. Such an API 
will enable a smooth migration to Ignite.

As a first phase we can start with the most frequently used commands and 
enhance the support gradually.

Redis commands http://redis.io/commands
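As a starting point, the command decoding itself is straightforward. Below is a minimal sketch in plain Java (no Ignite dependencies; the class and method names are made up for illustration) of parsing a RESP array frame, which is how a Redis client encodes commands such as GET and SET:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: decode a single RESP array frame into its parts,
// e.g. "*2\r\n$3\r\nGET\r\n$3\r\nfoo\r\n" -> [GET, foo]. A real implementation
// would read from a stream and handle binary-safe payloads and error replies.
public class RespParser {
    public static List<String> parse(String frame) {
        String[] lines = frame.split("\r\n");
        if (lines.length == 0 || lines[0].isEmpty() || lines[0].charAt(0) != '*')
            throw new IllegalArgumentException("Expected a RESP array frame");

        int n = Integer.parseInt(lines[0].substring(1)); // number of bulk strings
        List<String> parts = new ArrayList<>(n);

        for (int k = 0, i = 1; k < n; k++, i += 2) {
            if (lines[i].charAt(0) != '$')
                throw new IllegalArgumentException("Expected a bulk string header");

            int len = Integer.parseInt(lines[i].substring(1));
            String val = lines[i + 1];

            if (val.length() != len)
                throw new IllegalArgumentException("Bulk string length mismatch");

            parts.add(val);
        }
        return parts;
    }
}
```

A Redis endpoint in Ignite could then map each decoded command (GET, SET, etc.) onto the corresponding IgniteCache operation.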



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (IGNITE-2787) Hibernate L2 cache doesn't survive client reconnect

2016-03-09 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-2787:
---

 Summary: Hibernate L2 cache doesn't survive client reconnect
 Key: IGNITE-2787
 URL: https://issues.apache.org/jira/browse/IGNITE-2787
 Project: Ignite
  Issue Type: Improvement
  Components: cache
Affects Versions: 1.5.0.final
Reporter: Valentin Kulichenko
Assignee: Valentin Kulichenko
 Fix For: 1.6


After a client disconnects and reconnects with a new ID, the Hibernate L2 cache 
can't be used, because the existing cache instances are closed.





[jira] [Created] (IGNITE-2786) SpringCache doesn't survive client reconnect

2016-03-09 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-2786:
---

 Summary: SpringCache doesn't survive client reconnect
 Key: IGNITE-2786
 URL: https://issues.apache.org/jira/browse/IGNITE-2786
 Project: Ignite
  Issue Type: Improvement
  Components: cache
Affects Versions: 1.5.0.final
Reporter: Valentin Kulichenko
Assignee: Valentin Kulichenko
 Fix For: 1.6


After a client disconnects and reconnects with a new ID, Spring caching can't 
be used, because the existing cache instances are closed.





[jira] [Created] (IGNITE-2785) Improve test coverage for web session clustering

2016-03-09 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-2785:
---

 Summary: Improve test coverage for web session clustering
 Key: IGNITE-2785
 URL: https://issues.apache.org/jira/browse/IGNITE-2785
 Project: Ignite
  Issue Type: Improvement
  Components: websession
Affects Versions: 1.5.0.final
Reporter: Valentin Kulichenko
Assignee: Anton Vinogradov
 Fix For: 1.6


Web session clustering is not very well covered with tests. What should be done:

* Add more unit tests for different cases.
* Test with different cache configurations.
* Make {{ignite-weblogic-test}} more verbose and use it to check functionality 
on a "real" app with different app servers.





Re: IGNITE-2693: question about setting BinaryMarshaller on a cache

2016-03-09 Thread Dood

On 3/9/2016 7:46 PM, Alexey Goncharuk wrote:

Note that withKeepBinary() is just a way to tell a cache not to deserialize
values when doing a get or running an entry processor. The concept of
binary object does not belong solely to caches - you can get an instance of
IgniteBinary interface from Ignite and use binary objects in computations,
for example.

For me there would be more confusion if each cache had a separate
marshaller. What would then happen if you put an instance of BinaryObject
into a cache with the JDK marshaller? When the marshaller is global, the
answer is simple - BinaryObject is either available or not :)


Alexey, thanks for taking the time to explain the reasoning!


[jira] [Created] (IGNITE-2784) Optimize continuous query remote listener invocation

2016-03-09 Thread Alexey Goncharuk (JIRA)
Alexey Goncharuk created IGNITE-2784:


 Summary: Optimize continuous query remote listener invocation
 Key: IGNITE-2784
 URL: https://issues.apache.org/jira/browse/IGNITE-2784
 Project: Ignite
  Issue Type: Bug
  Components: cache
Reporter: Alexey Goncharuk


The issue is triggered by this thread:
http://apache-ignite-developers.2346864.n4.nabble.com/CacheEntryEventFilter-with-Replicated-caches-td7806.html

Currently the following issues exist:
1) There is no way to pass Ignite continuous query parameters to the JCache 
listener configuration. E.g., setting the 'local' flag would have solved the 
original issue from the thread.
2) For the case when a continuous query is executed on a REPLICATED cache 
affinity node and auto-unsubscribe is true, the query can be executed locally.





Re: IGNITE-2693: question about setting BinaryMarshaller on a cache

2016-03-09 Thread Alexey Goncharuk
Note that withKeepBinary() is just a way to tell a cache not to deserialize
values when doing a get or running an entry processor. The concept of
binary object does not belong solely to caches - you can get an instance of
IgniteBinary interface from Ignite and use binary objects in computations,
for example.

For me there would be more confusion if each cache had a separate
marshaller. What would then happen if you put an instance of BinaryObject
into a cache with the JDK marshaller? When the marshaller is global, the
answer is simple - BinaryObject is either available or not :)
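The design described here can be illustrated with a toy model in plain Java (NOT the actual Ignite API; all class and method names below are made up). The store always holds marshalled bytes, because the marshaller is global, and withKeepBinary() merely returns a view that skips deserialization on reads:

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

// Toy model of a cache with a global marshaller: a "keep binary" view shares
// the same underlying store and only changes what get() returns.
public class ToyCache {
    private final Map<String, byte[]> store; // shared between all views
    private final boolean keepBinary;        // per-view read flag, not a per-cache marshaller

    public ToyCache() {
        this(new HashMap<>(), false);
    }

    private ToyCache(Map<String, byte[]> store, boolean keepBinary) {
        this.store = store;
        this.keepBinary = keepBinary;
    }

    /** Returns a view over the same data whose reads skip deserialization. */
    public ToyCache withKeepBinary() {
        return new ToyCache(store, true);
    }

    public void put(String key, String val) {
        store.put(key, val.getBytes(StandardCharsets.UTF_8)); // "marshal" on write
    }

    /** Raw bytes in keep-binary mode, a deserialized String otherwise. */
    public Object get(String key) {
        byte[] raw = store.get(key);
        return raw == null ? null : (keepBinary ? raw : new String(raw, StandardCharsets.UTF_8));
    }
}
```

In this model there is nothing a per-cache marshaller could add: the stored bytes are always produced by the one global marshaller, and the only per-view choice is whether get() deserializes them.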


Re: IGNITE-2693: question about setting BinaryMarshaller on a cache

2016-03-09 Thread Dood

On 3/9/2016 6:43 PM, Alexey Goncharuk wrote:

Hi,

The current version of the test is not very clean, and it works only because
withKeepBinary() is a no-op. The correct version would be to use a plain cache
for the non-binary-object entry processor and withKeepBinary() for the
binary-object entry processor. You can see that EntryProcessor creation is
encapsulated in a separate method, testClosure(), which is overridden in test
inheritors. The same thing should be done for the cache.


Alexey, thank you for the comment.

On a side note: don't you find it confusing that you can set a 
marshaller on the grid, but you get a binary cache from another cache via 
withKeepBinary()?


Thanks!


RE: CacheEntryEventFilter with Replicated caches

2016-03-09 Thread Andrey Kornev
Alexey,

Thank you for the explanation!

But why does any of that matter in this case? I'm talking about a REPLICATED 
cache -- all data changes get applied in the same order on all nodes. And I'm 
talking about a cache event listener that is instantiated on one of those 
nodes. It can be a primary or backup node, so why does it matter? If the node 
is gone, then the listener is gone. Nobody's listening anymore. Time to forget 
the listener, clean up and move on. There is no way for a different node to 
pick up from where the crashed node left off (at least not through the JCache 
API). And it's doubtful one would want to do that even if it were possible.

In the use case I'm describing, the whole filtering/listening thing could be 
entirely local, making things a lot more efficient. I don't care about the 
exactly-once delivery semantics that Ignite seems to be trying to guarantee, 
and I really hate to pay for things I don't need (the overhead of deploying 
filters to remote nodes, gratuitous evaluation of remote filters, the internal 
data structures and logic employed by Ignite to maintain the backup queues, 
etc.). I just want a lightweight, all-local cache event listener.

How can I do it? Please advise.

Thanks
Andrey

> Date: Wed, 9 Mar 2016 16:21:19 -0800
> Subject: Re: CacheEntryEventFilter with Replicated caches
> From: alexey.goncha...@gmail.com
> To: dev@ignite.apache.org
> 
> Dmitriy is right, currently REPLICATED cache works the same way as
> PARTITIONED does, and in PARTITIONED cache filters should be evaluated on
> backups in order to maintain a backup queue in case a primary node fails.
> 
> For the case when query is executed on an affinity node of a REPLICATED
> cache _and_ auto-unsubscribe is true, I believe we can change the behavior,
> however it will be inconsistent with all other modes.
> 
> It can be easily overridden in Ignite API by setting local flag on a
> continuous query. I think we can provide a way to set the local flag for a
> JCache event listener, but I am not sure how it will look API-wise.
  

Re: IGNITE-2693: question about setting BinaryMarshaller on a cache

2016-03-09 Thread Alexey Goncharuk
Hi,

The current version of the test is not very clean, and it works only because
withKeepBinary() is a no-op. The correct version would be to use a plain cache
for the non-binary-object entry processor and withKeepBinary() for the
binary-object entry processor. You can see that EntryProcessor creation is
encapsulated in a separate method, testClosure(), which is overridden in test
inheritors. The same thing should be done for the cache.


Re: CacheEntryEventFilter with Replicated caches

2016-03-09 Thread Alexey Goncharuk
Dmitriy is right: currently a REPLICATED cache works the same way as a
PARTITIONED one, and in a PARTITIONED cache filters should be evaluated on
backups in order to maintain a backup queue in case a primary node fails.

For the case when the query is executed on an affinity node of a REPLICATED
cache _and_ auto-unsubscribe is true, I believe we can change the behavior;
however, it will be inconsistent with all other modes.

This can easily be overridden in the Ignite API by setting the local flag on a
continuous query. I think we can provide a way to set the local flag for a
JCache event listener, but I am not sure how it will look API-wise.


RE: CacheEntryEventFilter with Replicated caches

2016-03-09 Thread Andrey Kornev
Alexey,

I'm talking about JCache's CacheEntry listener (which I think is implemented on 
top of the continuous query feature).

Andrey

> Date: Wed, 9 Mar 2016 15:52:48 -0800
> Subject: Re: CacheEntryEventFilter with Replicated caches
> From: alexey.goncha...@gmail.com
> To: dev@ignite.apache.org
> 
> Andrey,
> 
> Are you talking about a continuous query or a distributed event listener?
  

Re: CacheEntryEventFilter with Replicated caches

2016-03-09 Thread Alexey Goncharuk
Andrey,

Are you talking about a continuous query or a distributed event listener?


RE: CacheEntryEventFilter with Replicated caches

2016-03-09 Thread Andrey Kornev
Dmitriy,

The reason for my posting was exactly that: the filter is both deployed and 
invoked on all nodes where the REPLICATED cache is started. My point is that 
the filter should only be deployed and invoked on the node where its 
corresponding listener is, namely the local node (the node that registered the 
listener by calling IgniteCache.registerCacheEntryListener). Executing the 
filter on remote nodes in the case of a REPLICATED cache is a waste of 
resources that can and should be avoided.

Thanks
Andrey

> From: dsetrak...@apache.org
> Date: Wed, 9 Mar 2016 15:09:20 -0800
> Subject: Re: CacheEntryEventFilter with Replicated caches
> To: dev@ignite.apache.org
> 
> Hi Andrey.
> 
> The replicated cache is just a partitioned cache with more backups. I think
> the filter is deployed on all nodes, but is only invoked on the primary
> node (correct me if I am wrong). In that case, it will be impossible to
> deploy it only on the node that registered it.
> 
> D.
> 
> On Wed, Mar 9, 2016 at 1:44 PM, Andrey Kornev 
> wrote:
> 
> > Hello,
> >
> > It's come to my attention, when registering the cache event listener, the
> > filters get deployed on all the nodes of the cache, despite the fact that
> > the cache is configured as REPLICATED.  This seems redundant since it's
> > sufficient to have the filter deployed only on the node that has the local
> > cache listener (in other words, the same node that registers the listener).
> > Since the filter may execute some non-trivial computationally intensive
> > logic and it doesn't make sense to waste CPU on the nodes that are not
> > going to call back the listener. Not deploying filters to remote nodes
> > would also reduce the registration/unregistration overhead since only the
> > local node needs to be involved.
> >
> > The only case that would require special attention would be the case when
> > a listener is registered from a node which doesn't have the cache started.
> >
> > Please advise.
> > Andrey
> >
  

Re: CacheEntryEventFilter with Replicated caches

2016-03-09 Thread Dmitriy Setrakyan
Hi Andrey.

The replicated cache is just a partitioned cache with more backups. I think
the filter is deployed on all nodes, but is only invoked on the primary
node (correct me if I am wrong). In that case, it will be impossible to
deploy it only on the node that registered it.

D.

On Wed, Mar 9, 2016 at 1:44 PM, Andrey Kornev 
wrote:

> Hello,
>
> It's come to my attention, when registering the cache event listener, the
> filters get deployed on all the nodes of the cache, despite the fact that
> the cache is configured as REPLICATED.  This seems redundant since it's
> sufficient to have the filter deployed only on the node that has the local
> cache listener (in other words, the same node that registers the listener).
> Since the filter may execute some non-trivial computationally intensive
> logic and it doesn't make sense to waste CPU on the nodes that are not
> going to call back the listener. Not deploying filters to remote nodes
> would also reduce the registration/unregistration overhead since only the
> local node needs to be involved.
>
> The only case that would require special attention would be the case when
> a listener is registered from a node which doesn't have the cache started.
>
> Please advise.
> Andrey
>


IGNITE-2693: question about setting BinaryMarshaller on a cache

2016-03-09 Thread Dood

Hello all,

I am working on IGNITE-2693 with Vlad Ozerov's help. I am somewhat of a 
Java newbie so please be gentle ;-)


I am curious about something - after reading the Javadocs and Binary 
Marshaller docs on Ignite's documentation websites, I think that the 
documentation is not very friendly or even somewhat misleading. Or maybe 
it is the design that is puzzling to me :-)


For example, we use withKeepBinary() on a cache instance to get a binary 
cache that utilizes the binary marshaller. But this is not a setting 
that is "settable" on a per-cache basis - we do not provide a 
per-cache method to set a desired marshaller; that seems to be reserved 
for the IgniteConfiguration setMarshaller() method and is a grid-wide 
setting.
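For reference, the grid-wide setting described above is configured on the IgniteConfiguration, for example in Spring XML (a sketch; the marshaller bean class shown here is the OptimizedMarshaller from Ignite 1.x and should be checked against the version in use):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- The marshaller is grid-wide; CacheConfiguration has no equivalent property. -->
    <property name="marshaller">
        <bean class="org.apache.ignite.marshaller.optimized.OptimizedMarshaller"/>
    </property>
</bean>
```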


The background to this is that I have "fixed" the withKeepBinary() 
method to throw an exception if the marshaller used is not binary 
(the ticket explains why we want this). Apparently we (silently?) 
assume a binary marshaller everywhere, but in one of the unrelated 
tests in the test suite an optimized marshaller is used for some 
reason, and as a result (with my new change) these tests are 
failing [1]. I am trying to fix this, but in the process I realized 
that you cannot set the marshaller through a CacheConfiguration method 
(no such thing exists); this has to be done at a higher level (the 
IgniteConfiguration). However, the whole test seems to be written to 
inherit a grid configuration with an optimized marshaller (at least 
that's what it looks like to me).


Am I just horribly confused and missing something very obvious? Thanks!

[1] 
org.apache.ignite.internal.processors.cache.GridCacheOffHeapTieredEvictionAbstractSelfTest 



new blogs added to Ignite News section

2016-03-09 Thread Dmitriy Setrakyan
Igniters,

The latest blogs were added to the Ignite website news section:

https://ignite.apache.org/
https://ignite.apache.org/news.html

Thanks to Roman Shtykh and Shamim Bhuiyan for contributing.

Blogs about Ignite go a long way towards improving adoption of and awareness
about our project. I would like to encourage all community members to blog
more.

Thanks,
D.


Re: Documentation for the ODBC driver

2016-03-09 Thread Dmitriy Setrakyan
Done.

On Wed, Mar 9, 2016 at 12:42 PM, Igor Sapego  wrote:

> Can I get edit permissions for readme?
>
> Best Regards,
> Igor
>
> On Wed, Mar 9, 2016 at 10:25 PM, Dmitriy Setrakyan 
> wrote:
>
> > I think we should document it in readme, right next to JDBC driver:
> > https://apacheignite.readme.io/docs/jdbc-driver
> >
> > I would probably create a category called “Drivers”, and put JDBC and
> ODBC
> > drivers under it. Remember to copy this documentation to 1.6 as well.
> >
> > D.
> >
> > On Wed, Mar 9, 2016 at 9:37 AM, Igor Sapego 
> wrote:
> >
> > > Hi Igniters!
> > >
> > > I'm currently working on the ODBC driver [1] and I'd wish to add
> > > documentation for it somewhere. Is readme.io [2] appropriate
> > > place for that? If so can I get edit permissions to modify it? If not
> > > can you point out the right place for the ODBC documentation?
> > >
> > > [1] - https://issues.apache.org/jira/browse/IGNITE-1786
> > > [2] - https://apacheignite.readme.io/docs/what-is-ignite
> > >
> > > Best Regards,
> > > Igor
> > >
> >
>


CacheEntryEventFilter with Replicated caches

2016-03-09 Thread Andrey Kornev
Hello,

It's come to my attention that, when registering a cache event listener, the 
filters get deployed on all the nodes of the cache, despite the fact that the 
cache is configured as REPLICATED. This seems redundant, since it's sufficient 
to have the filter deployed only on the node that has the local cache listener 
(in other words, the node that registers the listener). The filter may execute 
some non-trivial, computationally intensive logic, and it doesn't make sense 
to waste CPU on the nodes that are not going to call back the listener. Not 
deploying filters to remote nodes would also reduce the 
registration/unregistration overhead, since only the local node needs to be 
involved.

The only case that would require special attention would be the case when a 
listener is registered from a node which doesn't have the cache started.

Please advise.
Andrey
  

Re: Documentation for the ODBC driver

2016-03-09 Thread Igor Sapego
Can I get edit permissions for readme?

Best Regards,
Igor

On Wed, Mar 9, 2016 at 10:25 PM, Dmitriy Setrakyan 
wrote:

> I think we should document it in readme, right next to JDBC driver:
> https://apacheignite.readme.io/docs/jdbc-driver
>
> I would probably create a category called “Drivers”, and put JDBC and ODBC
> drivers under it. Remember to copy this documentation to 1.6 as well.
>
> D.
>
> On Wed, Mar 9, 2016 at 9:37 AM, Igor Sapego  wrote:
>
> > Hi Igniters!
> >
> > I'm currently working on the ODBC driver [1] and I'd wish to add
> > documentation for it somewhere. Is readme.io [2] appropriate
> > place for that? If so can I get edit permissions to modify it? If not
> > can you point out the right place for the ODBC documentation?
> >
> > [1] - https://issues.apache.org/jira/browse/IGNITE-1786
> > [2] - https://apacheignite.readme.io/docs/what-is-ignite
> >
> > Best Regards,
> > Igor
> >
>


Re: Great blog about Ignite!

2016-03-09 Thread Dmitriy Setrakyan
On Wed, Mar 9, 2016 at 3:43 AM, 李玉珏@163 <18624049...@163.com> wrote:

> Hi:
>
> If I write the content will be released to:
> https://www.zybuluo.com/liyuj/note/230739


Is your blog in Chinese? If yes, then I doubt we can help promote it.
However, I believe that you can post it to any developer hubs in China.

Did you already write the blog, or are you planning to write one?



> When I say "authority", I mainly mean that the relevant content, if written
> by the designers, would have more depth.


> 在 16/3/9 11:16, Dmitriy Setrakyan 写道:
>
> On Thu, Mar 3, 2016 at 11:45 PM, 李玉珏@163 <18624049...@163.com> wrote:
>>
>> I wrote some, but it was not deep enough, and the authority was not
>>> enough.
>>>
>>> I think we should still publish it. Can you explain what you meant by the
>> “authority”?
>>
>>
>> 在 16/3/4 09:39, Dmitriy Setrakyan 写道:
>>>
>>> Couldn’t agree more. It will be great if more community members would
>>>
 volunteer to write blogs or articles.


>>>
>>>
>
>


Re: Confusing IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE property

2016-03-09 Thread Dmitriy Setrakyan
Is there any way to get rid of this property completely?

On Wed, Mar 9, 2016 at 4:10 AM, Denis Magda  wrote:

> Igniters,
>
> As you know there is a property that controls maximum remove queue history
> for atomic caches
> (IgniteSystemProperties.IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE).
>
> The strange thing is that this property is also used for transactional
> caches as well. I see that GridDhtLocalPartition allocates rmvQueue
> regardless of a cache atomicity type which looks confusing.
>
> Do we need to avoid the allocation of the queue for transactional caches?
>
> --
> Denis
>


Re: Documentation for the ODBC driver

2016-03-09 Thread Dmitriy Setrakyan
I think we should document it in readme, right next to JDBC driver:
https://apacheignite.readme.io/docs/jdbc-driver

I would probably create a category called “Drivers”, and put JDBC and ODBC
drivers under it. Remember to copy this documentation to 1.6 as well.

D.

On Wed, Mar 9, 2016 at 9:37 AM, Igor Sapego  wrote:

> Hi Igniters!
>
> I'm currently working on the ODBC driver [1] and I'd wish to add
> documentation for it somewhere. Is readme.io [2] appropriate
> place for that? If so can I get edit permissions to modify it? If not
> can you point out the right place for the ODBC documentation?
>
> [1] - https://issues.apache.org/jira/browse/IGNITE-1786
> [2] - https://apacheignite.readme.io/docs/what-is-ignite
>
> Best Regards,
> Igor
>


Documentation for the ODBC driver

2016-03-09 Thread Igor Sapego
Hi Igniters!

I'm currently working on the ODBC driver [1] and I'd like to add
documentation for it somewhere. Is readme.io [2] an appropriate
place for that? If so, can I get edit permissions to modify it? If not,
can you point out the right place for the ODBC documentation?

[1] - https://issues.apache.org/jira/browse/IGNITE-1786
[2] - https://apacheignite.readme.io/docs/what-is-ignite

Best Regards,
Igor


[GitHub] ignite pull request: IGNITE-2663: Added "Protocol Version" field t...

2016-03-09 Thread isapego
GitHub user isapego opened a pull request:

https://github.com/apache/ignite/pull/541

IGNITE-2663: Added "Protocol Version" field to ODBC communication protocol.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/isapego/ignite ignite-2663

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/541.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #541


commit 62ef449794113d3fc7a265b4d18968ccfa854b64
Author: isapego 
Date:   2016-03-09T14:05:52Z

IGNITE-2663: Added Protocol version field to OdbcProcesor.

commit 4e4c4ac03670f77a76224f37a5e905d809e4f4bb
Author: isapego 
Date:   2016-03-09T14:36:01Z

IGNITE-2663: Implemented more robust algorithm for sending and receiving
data.

commit dffb02d7757eac3e3d567bdcb92761acdf09917a
Author: isapego 
Date:   2016-03-09T15:07:06Z

IGNITE-2663: Added version to ODBC driver message header.

commit bcf2fdeef72bbe9bbd8e55c148fa424635d9d673
Author: isapego 
Date:   2016-03-09T15:15:38Z

IGNITE-2663: Error handling refactored.

commit 35b19be39cdd014bd3ee0ff2c682fa5f357718ef
Author: isapego 
Date:   2016-03-09T16:03:07Z

IGNITE-2663: Version reading fixed.






[jira] [Created] (IGNITE-2783) Frequent deadlock during regular operations

2016-03-09 Thread Noam Liran (JIRA)
Noam Liran created IGNITE-2783:
--

 Summary: Frequent deadlock during regular operations
 Key: IGNITE-2783
 URL: https://issues.apache.org/jira/browse/IGNITE-2783
 Project: Ignite
  Issue Type: Bug
  Components: cache, important
Affects Versions: 1.5.0.final
Reporter: Noam Liran
Priority: Critical


We've run into this severe deadlock several times in the past few weeks. This 
time we've managed to collect quite a bit of data, which might help us 
discover the cause.

In general, we believe the deadlock involves two operations:
* Cache creation (and thus subsequent partition exchange)
* Service creation (call to IgniteServices.deployMultiple() )

h3. General information about the environment
* 77 ignite nodes; out of them:
** 3 seeds
** 74 regular nodes
* ~15 caches, some replicated and some distributed
* ~100 computing jobs running concurrently
* ~10 new computing jobs per second

h3. Relevant logs
h4. w2:m5 (uuid={{1e3897db-2fb2-4bf2-8a99-10b40f8bc72b}}) starts a cache
* Relevant logs:
{noformat}
2016-03-07 12:38:04.989 WARN  o.a.i.i.p.c.GridCacheEvictionManager 
exchange-worker-#97%null% ctx: actor: - 

 Evictions are not synchronized with other nodes in topology which provides 
2x-3x better performance but may cause data inconsistency if cache store is not 
configured (consider changing 'evictSynchronized' configuration property).
2016-03-07 12:38:04.992 INFO  o.a.i.i.p.cache.GridCacheProcessor   
exchange-worker-#97%null% ctx: actor: - Started cache 
[name=adallom.adalib.discovery.InetResolvingCache__adalib-0.66.147_adalibpy-0.66.110_ae-0.66.136_cb-0.66.171_ep-0.66.147_lg-0.66.150_mn-0.66.181_qc-0.66.115_rp-0.66.168_sg-0.66.150,
 mode=PARTITIONED]
{noformat}
* Unfortunately we don't have the stack traces of this node, but we have reason 
to believe we're still blocked in the call to getOrCreateCache().
* Cache configuration:
{code}
CACHE_CONFIGURATION = new CacheConfiguration()
    .setName(CodeVersion.getVersionedName("adallom.adalib.discovery.InetResolvingCache"))
    .setCacheMode(CacheMode.PARTITIONED)
    .setAtomicityMode(CacheAtomicityMode.ATOMIC)
    .setBackups(0) // No need for backup, just resolve again if needed
    .setAffinity(new RendezvousAffinityFunction(true, 256))
    .setEvictionPolicy(new IgniteUtils2.MemoryFractionLruEvictionPolicy(10_000_000, 0.02));

CACHE_CONFIGURATION
    .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(CACHE_EXPIRE_PERIOD));
{code}

h4. s2:m2 (uuid={{79060e43-df7b-460d-836e-508beda65cc4}}) deploys a service
* Around the same time, another node (s2:m2) deploys a service (thus starting a 
transaction); we know this from the stack trace:
{noformat}
"ignite-#183%null%" #271 prio=5 os_prio=0 tid=0x7fde3c45 nid=0x2f2e 
waiting on condition [0x7fdd6ee76000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for  <0x00060a625040> (a 
org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireShared(AbstractQueuedSynchronizer.java:967)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireShared(AbstractQueuedSynchronizer.java:1283)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:155)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:115)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$32.op(GridCacheAdapter.java:2375)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4073)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.getAndPutIfAbsent(GridCacheAdapter.java:2373)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.deploy(GridServiceProcessor.java:411)
at 
org.apache.ignite.internal.processors.service.GridServiceProcessor.deployMultiple(GridServiceProcessor.java:347)
at 
org.apache.ignite.internal.IgniteServicesImpl.deployMultiple(IgniteServicesImpl.java:119)
at 
com.adallom.minion.services.ServicePluginsDeployer.deploySingleService(ServicePluginsDeployer.java:109)
at 
com.adallom.minion.services.ServicePluginsDeployer.deployAllServices(ServicePluginsDeployer.java:137)
at 
com.adallom.minion.services.ServicePluginsDeployer.innerExecute(ServicePluginsDeployer.java:172)
at 

[GitHub] ignite pull request: Hunk perf 1119

2016-03-09 Thread iveselovskiy
GitHub user iveselovskiy opened a pull request:

https://github.com/apache/ignite/pull/540

Hunk perf 1119

DO NOT MERGE!

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/iveselovskiy/ignite hunk-perf-1119

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/540.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #540


commit a652adec401dc11e6f930fcef9de9b2ee5ecdbb9
Author: iveselovskiy 
Date:   2016-02-10T15:54:36Z

added missing file header to 
org.apache.ignite.hadoop.fs.KerberosHadoopFileSystemFactorySelfTest

commit 48f00cac3ba7668247585ab7c257562951c4f04f
Author: iveselovskiy 
Date:   2016-02-11T10:36:12Z

Merge branch 'master' of https://github.com/apache/ignite

commit a5c5ed4d5bbd61c62acfbb19610aac2f710935cd
Author: iveselovskiy 
Date:   2016-02-15T17:58:10Z

Merge branch 'master' of https://github.com/apache/ignite

commit b81ee18308307dad2416f7635120a387b98bb967
Author: iveselovskiy 
Date:   2016-02-17T18:32:53Z

Merge branch 'master' of https://github.com/apache/ignite

commit 4280cbdeef94a6956ea3ebe1ae389f1db57ed249
Author: iveselovskiy 
Date:   2016-02-18T09:47:12Z

Merge branch 'master' of https://github.com/apache/ignite

commit 754132067d73757f9d362129444b7a6dea3b3f5b
Author: iveselovskiy 
Date:   2016-02-19T13:18:16Z

Merge branch 'master' of https://github.com/apache/ignite

commit cd3a1e02cec00bd089deb3e7cc4c9ec39e7279a7
Author: iveselovskiy 
Date:   2016-02-26T16:24:36Z

Merge branch 'master' of https://github.com/apache/ignite

commit b5668701f27679f309e6968b1d7d8a0739d14c1a
Author: iveselovskiy 
Date:   2016-03-03T09:23:22Z

Merge branch 'master' of https://github.com/apache/ignite into 
hunk-perf-1119

commit aa2a344e48e6568e629d0d84988dc15d48faba6a
Author: iveselovskiy 
Date:   2016-03-03T10:51:08Z

remove transactional metacache limitation.

commit 6e69fa5fa1831587cb74d1c79e8b5b7782ff05fb
Author: iveselovskiy 
Date:   2016-03-03T10:52:12Z

remove value serialization for metacache.

commit e16cb28e27aa477496527ff8f0732522086ae544
Author: iveselovskiy 
Date:   2016-03-03T10:55:14Z

retVal = false for mata cache.

commit 2f9a05e013f8bf7050c7de28b9c2a3b7765af4cd
Author: iveselovskiy 
Date:   2016-03-03T10:56:35Z

retVal = false for mata cache for IgniteTxLocalAdapter.

commit 6e37219ddd5383339c78ce727f51dd309ebf7843
Author: iveselovskiy 
Date:   2016-03-03T11:01:48Z

Revert "remove value serialization for metacache."

This reverts commit 6e69fa5fa1831587cb74d1c79e8b5b7782ff05fb.

commit 7cac6ffa910d11e6e51ae0b030289eb6acda0494
Author: iveselovskiy 
Date:   2016-03-03T11:02:51Z

Revert "retVal = false for mata cache."

This reverts commit e16cb28e27aa477496527ff8f0732522086ae544.

commit 5ac2d9ff238e0a715e197232d0e5d8314bfdbbf8
Author: iveselovskiy 
Date:   2016-03-03T11:03:09Z

Revert "retVal = false for mata cache for IgniteTxLocalAdapter."

This reverts commit 2f9a05e013f8bf7050c7de28b9c2a3b7765af4cd.

commit e5c3e60625835659223d96af78d46efe8cc40494
Author: iveselovskiy 
Date:   2016-03-03T11:05:29Z

diagnostics, + fix of new ids creation out of tx, + igfs test in 3-node 
mode;

commit 5d580ab1bc1ee7cbe76c7a371904f4ce5e99bc7d
Author: iveselovskiy 
Date:   2016-03-03T11:06:03Z

Merge branch 'master' of https://github.com/apache/ignite into 
hunk-perf-1119

commit c8ea2ff9f35840ec9d6cb2e128a2a891af02327e
Author: iveselovskiy 
Date:   2016-03-04T07:57:36Z

updates via "invoke" closures, + some diagnistic.

commit eaa5a32c4bceecc62b222f5ab2942c2510ded273
Author: iveselovskiy 
Date:   2016-03-04T07:59:21Z

Merge branch 'master' of https://github.com/apache/ignite into 
hunk-perf-1119

commit 26f6bc29c91daa9ea386cc38af14add80cb4ce1c
Author: iveselovskiy 
Date:   2016-03-04T11:33:03Z

Merge branch 'master' of https://github.com/apache/ignite into 
hunk-perf-1119

commit e847f568d670ac51c28a7e01699051e49410a478
Author: iveselovskiy 
Date:   2016-03-09T14:24:46Z

WIP: ticket 1119.

commit 6a1abfc9adadbd7bd1067572bbab3253e093c7f0
Author: iveselovskiy 
Date:   2016-03-09T14:25:03Z

Merge branch 'master' of https://github.com/apache/ignite into 
hunk-perf-1119





[GitHub] ignite pull request: IGNITE-2702 .NET: Support compact footers in ...

2016-03-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/523


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (IGNITE-2782) Implement single NEAR ATOMIC update future.

2016-03-09 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-2782:
---

 Summary: Implement single NEAR ATOMIC update future.
 Key: IGNITE-2782
 URL: https://issues.apache.org/jira/browse/IGNITE-2782
 Project: Ignite
  Issue Type: Task
  Components: cache
Affects Versions: 1.5.0.final
Reporter: Vladimir Ozerov
Assignee: Vladimir Ozerov
Priority: Critical
 Fix For: 1.6


Currently we always create GridNearAtomicUpdateFuture, even if only one key is 
updated. We need to rework the logic so that a simplified future, optimized 
for the single-key scenario, is used instead.



--
This message was sent by Atlassian JIRA (v6.3.4#6332)


Confusing IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE property

2016-03-09 Thread Denis Magda

Igniters,

As you know there is a property that controls maximum remove queue 
history for atomic caches 
(IgniteSystemProperties.IGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE).


The strange thing is that this property is also used for transactional 
caches. I see that GridDhtLocalPartition allocates rmvQueue regardless of 
the cache atomicity type, which looks confusing.


Do we need to avoid the allocation of the queue for transactional caches?
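To make the discussed change concrete, a minimal sketch (hypothetical names, not the actual GridDhtLocalPartition code) of allocating the remove-history queue only for ATOMIC caches might be:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Hypothetical sketch of the proposal: allocate the bounded remove-history
 * queue only for ATOMIC caches and skip the allocation for TRANSACTIONAL
 * ones, where it is never consulted.
 */
public class RemoveQueueSketch {
    enum AtomicityMode { ATOMIC, TRANSACTIONAL }

    /** Returns a remove queue for ATOMIC caches, or null when none is needed. */
    static Deque<Object> newRemoveQueue(AtomicityMode mode, int histSize) {
        return mode == AtomicityMode.ATOMIC ? new ArrayDeque<>(histSize) : null;
    }
}
```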

--
Denis


[jira] [Created] (IGNITE-2781) IGFS: Automatically set "copyOnRead" to "false" for IGFS caches.

2016-03-09 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-2781:
---

 Summary: IGFS: Automatically set "copyOnRead" to "false" for IGFS 
caches.
 Key: IGNITE-2781
 URL: https://issues.apache.org/jira/browse/IGNITE-2781
 Project: Ignite
  Issue Type: Task
  Components: IGFS
Affects Versions: 1.5.0.final
Reporter: Vladimir Ozerov
Assignee: Ivan Veselovsky
Priority: Critical
 Fix For: 1.6


By default this flag is set to {{true}}, meaning that values are copied on 
each access.

We need to set it to {{false}} automatically on node startup.
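As an illustration of the proposed startup adjustment (hypothetical method, not actual Ignite code), the override could be as simple as:

```java
/**
 * Hypothetical sketch: force copyOnRead=false for internal IGFS caches on
 * node startup, regardless of the user-supplied value.
 */
public class IgfsCopyOnReadSketch {
    static boolean effectiveCopyOnRead(boolean userValue, boolean isIgfsCache) {
        // Assumption: IGFS system caches do not need defensive copies,
        // so copying on every access is wasted work for them.
        return isIgfsCache ? false : userValue;
    }
}
```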





Re: Great blog about Ignite!

2016-03-09 Thread 李玉珏

Hi,

If I write it, the content will be published at:
https://www.zybuluo.com/liyuj/note/230739

When I say "authority", I mainly mean that if the relevant content were 
written by the designers themselves, it would have more depth.


On 16/3/9 11:16, Dmitriy Setrakyan wrote:

On Thu, Mar 3, 2016 at 11:45 PM, 李玉珏@163 <18624049...@163.com> wrote:


I wrote some, but it was not deep enough, and the authority was not enough.


I think we should still publish it. Can you explain what you meant by the
“authority”?



On 16/3/4 09:39, Dmitriy Setrakyan wrote:

Couldn’t agree more. It will be great if more community members would 
volunteer to write blogs or articles.

[jira] [Created] (IGNITE-2780) CPP: Review Ignite C++ building process with Autotools.

2016-03-09 Thread Igor Sapego (JIRA)
Igor Sapego created IGNITE-2780:
---

 Summary: CPP: Review Ignite C++ building process with Autotools.
 Key: IGNITE-2780
 URL: https://issues.apache.org/jira/browse/IGNITE-2780
 Project: Ignite
  Issue Type: Bug
  Components: platforms
Affects Versions: 1.5.0.final
Reporter: Igor Sapego
 Fix For: 1.6


Review the Ignite C++ Autotools build process and consider changing it so that 
the {{make install}} command is not required to build all components.





[GitHub] ignite pull request: IGNITE-2739 .NET: Implemented AffinityKey sup...

2016-03-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/533




[GitHub] ignite pull request: IGNITE-2740 Remove error in IGNITE_HOME check

2016-03-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/531




[GitHub] ignite pull request: IGNITE-2505

2016-03-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/443




[GitHub] ignite pull request: IGNITE-2584

2016-03-09 Thread ilantukh
Github user ilantukh closed the pull request at:

https://github.com/apache/ignite/pull/511




[GitHub] ignite pull request: IGNITE-2689: Marked Date/Time features as sup...

2016-03-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/499




[jira] [Created] (IGNITE-2779) BinaryMarshaller caches must be cleaned during client reconnect.

2016-03-09 Thread Vladimir Ozerov (JIRA)
Vladimir Ozerov created IGNITE-2779:
---

 Summary: BinaryMarshaller caches must be cleaned during client 
reconnect.
 Key: IGNITE-2779
 URL: https://issues.apache.org/jira/browse/IGNITE-2779
 Project: Ignite
  Issue Type: Task
  Components: cache, general
Affects Versions: 1.5.0.final
Reporter: Vladimir Ozerov
Priority: Critical
 Fix For: 1.6
 Attachments: IgniteProblemTest.java

The problem was originally reported by Vinay Sharma here: 
http://apache-ignite-users.70518.x6.nabble.com/changed-cache-configuration-and-restarted-server-nodes-Getting-exception-td3064.html#none

*Root cause*:
When a client reconnects to the topology after a full topology restart (i.e. 
all server nodes were down), it does not re-send binary metadata to the 
topology. As a result, a {{BinaryMarshaller}} exception occurs.

*Steps to reproduce*
Run attached code.

*Proposed solution*
Clean cached binary metadata during reconnect.
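A minimal sketch of the proposed fix, assuming a hypothetical client-side metadata registry and reconnect callback (not the actual BinaryMarshaller internals):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Hypothetical sketch: a client-side binary-metadata registry that is wiped
 * on reconnect, so metadata is re-sent to the (possibly restarted) cluster
 * on next use instead of being assumed to still exist there.
 */
public class ReconnectMetadataSketch {
    private final Map<Integer, String> typeIdToMeta = new ConcurrentHashMap<>();

    void register(int typeId, String meta) {
        typeIdToMeta.put(typeId, meta);
    }

    /** Invoked from a (hypothetical) client-reconnected callback. */
    void onClientReconnected() {
        typeIdToMeta.clear(); // forces re-registration, avoiding stale metadata
    }

    boolean isKnown(int typeId) {
        return typeIdToMeta.containsKey(typeId);
    }
}
```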


