Re: [EXTERNAL] Re: Replace or Put after PutAsync causes Ignite to hang

2019-08-06 Thread Pavel Tupitsyn
Sorry guys, I've completely missed this thread, and the topic is very
important.

First, a simple fix for the given example. Add the following as the first
line of Main:

SynchronizationContext.SetSynchronizationContext(
    new ThreadPoolSynchronizationContext());

And put the ThreadPoolSynchronizationContext class somewhere:

class ThreadPoolSynchronizationContext : SynchronizationContext
{
    // No-op: the base Post() implementation already dispatches
    // callbacks to the .NET thread pool.
}


Now, a detailed explanation. This problem has existed in Ignite forever and
is mentioned briefly in the docs [1].
It is also mentioned in the .NET docs, which I've updated a bit [2].

Breakdown:
* Ignite (Java side) runs async callbacks (continuations) on its system
threads, and those threads have limitations (in general, you should not
call Ignite APIs from them)
* Ignite.NET wraps async operations in native .NET Tasks
* Usually an `await ...` call in .NET continues execution on the original
thread (simply put; it is actually more complex), so the Ignite system
thread issue is avoided
* However, console applications have no `SynchronizationContext`, so the
continuation can't be dispatched to the original thread and is executed on
the current (Ignite) thread
* Setting a custom SynchronizationContext fixes the issue: all async
continuations are dispatched to the .NET thread pool and never executed on
Ignite threads

However, dispatching callbacks to a different thread causes a performance
hit, and Ignite favors performance over usability right now.
So it is up to the user to configure the desired behavior.

Let me know if you need more details.

Thanks

[1] https://apacheignite.readme.io/docs/async-support
[2] https://apacheignite-net.readme.io/docs/asynchronous-support

On Thu, Aug 1, 2019 at 3:41 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> I have filed a ticket about this issue so it won't get lost.
> https://issues.apache.org/jira/browse/IGNITE-12033
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, May 2, 2019 at 10:53, Barney Pippin :
>
>> Thanks for the response Ilya. Did you get a chance to look at this Pavel?
>> Thanks.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Ignite 2.7.0: Ignite client:: memory leak

2019-08-06 Thread Ilya Kasnacheev
Hello!

We did not change it, since it should not be a problem anymore. The
rationale is that I can't prove that any particular history size value
would work without causing further issues.

Regards,
-- 
Ilya Kasnacheev


Tue, Aug 6, 2019 at 22:25, Denis Magda :

> Andrey, Ilya, as part of IGNITE-11767, have we set the history size to 0
> for the clients? If we haven't, what is the rationale?
>
> -
> Denis
>
>
> On Tue, Aug 6, 2019 at 5:56 AM Andrei Aleksandrov 
> wrote:
>
>> Hi Mahesh,
>>
>> Yes, it's a problem related to IGNITE_EXCHANGE_HISTORY_SIZE. Ignite
>> stores the data for the last 1000 exchanges.
>>
>> It is generally required for the case when the coordinator changes and
>> the new coordinator needs to load the recent exchange history.
>>
>> There are two problems here:
>>
>> 1) Client nodes can't be the coordinator, so there is no reason to store
>> 1000 entries there. It would be better to set this option to some small
>> value, or zero, for client nodes.
>> 2) Server nodes also don't require 1000 entries. The required exchange
>> history size can depend on the number of server nodes. I suggest
>> changing the default to a smaller value.
>>
>> Here is the ticket related to this problem:
>>
>> https://issues.apache.org/jira/browse/IGNITE-11767
>>
>> It is fixed and should be available in Ignite 2.8, where these exchanges
>> will take less memory.
>>
>> BR,
>> Andrei
>>
>> On 2019/08/03 01:09:19, Mahesh Renduchintala 
>> wrote:
>> > The clients we use have memory ranging from 4GB to 8GB. OOM was
>> > produced on all these clients... some sooner, some a little later,
>> > but it was always seen.
>> >
>> > The workaround is still stable for more than 48 hours now.
>>
>


Re: Ignite 2.7.0: Ignite client:: memory leak

2019-08-06 Thread Denis Magda
Andrey, Ilya, as part of IGNITE-11767, have we set the history size to 0
for the clients? If we haven't, what is the rationale?

-
Denis


On Tue, Aug 6, 2019 at 5:56 AM Andrei Aleksandrov 
wrote:

> Hi Mahesh,
>
> Yes, it's a problem related to IGNITE_EXCHANGE_HISTORY_SIZE. Ignite
> stores the data for the last 1000 exchanges.
>
> It is generally required for the case when the coordinator changes and
> the new coordinator needs to load the recent exchange history.
>
> There are two problems here:
>
> 1) Client nodes can't be the coordinator, so there is no reason to store
> 1000 entries there. It would be better to set this option to some small
> value, or zero, for client nodes.
> 2) Server nodes also don't require 1000 entries. The required exchange
> history size can depend on the number of server nodes. I suggest
> changing the default to a smaller value.
>
> Here is the ticket related to this problem:
>
> https://issues.apache.org/jira/browse/IGNITE-11767
>
> It is fixed and should be available in Ignite 2.8, where these exchanges
> will take less memory.
>
> BR,
> Andrei
>
> On 2019/08/03 01:09:19, Mahesh Renduchintala 
> wrote:
> > The clients we use have memory ranging from 4GB to 8GB. OOM was
> > produced on all these clients... some sooner, some a little later,
> > but it was always seen.
> >
> > The workaround is still stable for more than 48 hours now.
>


Re: What happens when a client gets disconnected

2019-08-06 Thread Andrei Aleksandrov

Hi,

I guess that you should provide the full client and server logs,
configuration files, and, if possible, a reproducer for the case where the
client node with a near cache was able to crash the whole cluster.

It looks like there may be an issue here, and the best way forward would
be to raise a JIRA ticket for it after analyzing the provided data.


BR,
Andrei

On 2019/07/31 14:54:42, Matt Nohelty  wrote:
> Sorry for the long delay in responding to this issue. I will work on
> replicating this issue in a more controlled test environment and try to
> grab thread dumps from there.
>
> In a previous post you mentioned that the blocking in this thread dump
> should only happen when a data node is affected, which is usually a
> server node, and you also said that near cache consistency is observed
> continuously. If we have near caching enabled, does that mean clients
> become data nodes? If that's the case, does that explain why we are
> seeing blocking when a client crashes or hangs?
>
> Assuming this is related to near caching, is there any configuration to
> adjust this behavior to give us availability over perfect consistency?
> Having a failure on one client ripple across the entire system and
> effectively take down all other clients of that cluster is a major
> problem. We obviously want to avoid problems like an OOM error or a big
> GC pause in the client application, but if these things happen we need
> to be able to absorb them gracefully and limit the blast radius to just
> that client node.


Re: Ignite Partitioned Cache / Use for an in-memory transaction-log

2019-08-06 Thread Ilya Kasnacheev
Hello!

You can use ignite.affinity(cacheName).mapKeyToNode(key): it returns the
ClusterNode that is primary for the given key.
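
A minimal sketch in Java, assuming a cache named "txLog" already exists
(the cache name and key are made up for illustration);
mapKeyToPrimaryAndBackups() also returns the backup nodes, which may be
what you want for routing reads to replicas:

import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class WhereIsMyKey {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            Affinity<String> aff = ignite.affinity("txLog");

            // Primary node for this key:
            ClusterNode primary = aff.mapKeyToNode("document-1");

            // Primary and backups, primary first:
            Collection<ClusterNode> nodes =
                aff.mapKeyToPrimaryAndBackups("document-1");

            System.out.println("Primary: " + primary.id() + ", all: " + nodes);
        }
    }
}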

Not sure that I understand you about the quorum.

Regards,
-- 
Ilya Kasnacheev


Tue, Aug 6, 2019 at 15:47, Johannes Lichtenberger <
johannes.lichtenber...@unitedplanet.com>:

> Hi,
>
> can I somehow query on which cluster-nodes a partitioned cache stores
> values for specific keys? I might want to use a cache for replicating an
> in-memory transaction log (which does not have to be persisted) to
> replicate a document of a NoSQL document store to a few nodes.
>
> Thus, instead of a local cache I'd simply write into an Ignite cache and
> then would like to query which cluster-nodes have stored a specific key
> (for instance the document name). So for instance I could tell a load
> balancer for reads to read documents from one of the backup replicas.
>
> During a transaction commit I would also need this information to know
> where to send an event to... to commit the transaction. And somehow I'd
> love to wait for a quorum of nodes to confirm whether the transaction
> has really committed or needs to roll back.
>
> kind regards
>
> Johannes
>
>


Re: Ignite 2.7.0: Ignite client:: memory leak

2019-08-06 Thread Andrei Aleksandrov

Hi Mahesh,

Yes, it's a problem related to IGNITE_EXCHANGE_HISTORY_SIZE. Ignite
stores the data for the last 1000 exchanges.

It is generally required for the case when the coordinator changes and
the new coordinator needs to load the recent exchange history.

There are two problems here:

1) Client nodes can't be the coordinator, so there is no reason to store
1000 entries there. It would be better to set this option to some small
value, or zero, for client nodes.
2) Server nodes also don't require 1000 entries. The required exchange
history size can depend on the number of server nodes. I suggest
changing the default to a smaller value.

Here is the ticket related to this problem:

https://issues.apache.org/jira/browse/IGNITE-11767

It is fixed and should be available in Ignite 2.8, where these exchanges
will take less memory.
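
In the meantime, a user can shrink the history on their own nodes by
setting the system property before node startup. A minimal sketch in Java
(the value 8 here is an arbitrary small example, not a recommended
default):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteSystemProperties;
import org.apache.ignite.Ignition;

public class SmallExchangeHistory {
    public static void main(String[] args) {
        // Must be set before the node starts; applies to this JVM only.
        System.setProperty(
            IgniteSystemProperties.IGNITE_EXCHANGE_HISTORY_SIZE, "8");

        try (Ignite ignite = Ignition.start()) {
            // The node now keeps only the last 8 exchange-history entries.
        }
    }
}

The same property can also be passed as a JVM argument
(-DIGNITE_EXCHANGE_HISTORY_SIZE=8).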


BR,
Andrei

On 2019/08/03 01:09:19, Mahesh Renduchintala  
wrote:
> The clients we use have memory ranging from 4GB to 8GB. OOM was
> produced on all these clients... some sooner, some a little later, but
> it was always seen.
>
> The workaround is still stable for more than 48 hours now.


Ignite Partitioned Cache / Use for an in-memory transaction-log

2019-08-06 Thread Johannes Lichtenberger

Hi,

can I somehow query on which cluster-nodes a partitioned cache stores 
values for specific keys? I might want to use a cache for replicating an 
in-memory transaction log (which does not have to be persisted) to 
replicate a document of a NoSQL document store to a few nodes.


Thus, instead of a local cache I'd simply write into an Ignite cache and
then would like to query which cluster-nodes have stored a specific key 
(for instance the document name). So for instance I could tell a load 
balancer for reads to read documents from one of the backup replicas.


During a transaction commit I would also need this information to know 
where to send an event to... to commit the transaction. And somehow I'd
love to wait for a quorum of nodes to confirm whether the transaction has
really committed or needs to roll back.


kind regards

Johannes



Re: Can Ignite Kafka connector be able to perform partial update ?

2019-08-06 Thread Andrei Aleksandrov

Hi,

Unfortunately, the Ignite Kafka connector is a simple implementation that
provides source and sink functionality.

All data transformation and filtering should be done using the Kafka API.
I guess that you can try the following transforms for your purposes:


https://docs.confluent.io/current/connect/transforms/index.html
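
For example, a sketch of a sink connector config with a single message
transform applied. The transform name and the "payload" field are
illustrative, and the Ignite sink property names (cacheName, igniteCfg)
come from the ignite-kafka module and should be checked against your
version:

# Illustrative sink connector config with a single message transform.
name=my-ignite-sink
connector.class=org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
topics=quotes
cacheName=quoteCache
igniteCfg=/path/to/ignite-config.xml
# Extract a nested field from the record value before it reaches the sink:
transforms=extractValue
transforms.extractValue.type=org.apache.kafka.connect.transforms.ExtractField$Value
transforms.extractValue.field=payload

Note that a transform can reshape a record before it is written, but the
sink still performs a plain put; it will not turn the write into an
invoke()-style partial update.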

BR,
Andrei

On 2019/08/01 19:15:52, Yao Weng  wrote:
> Hi, I have subscribed to user-subscr...@ignite.apache.org,
> but still cannot post my question. So I send it directly to this email
> address.
>
> Our application receives Kafka messages and then calls invoke to do
> partial updates. Does the Ignite Kafka connector support invoke? If not,
> is the Ignite team going to support it?
>
> Thank you very much
>
> Yao Weng


Re: Partitioned cache read through issue

2019-08-06 Thread Ilya Kasnacheev
Hello!

How do you load data?

loadCache talks to the database from both nodes, so if you only saw DB
activity on one node, it's possible that only half of the data was loaded
from the DB.
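
For reference, a minimal sketch of triggering the load from code, assuming
the cache name from your config below (the config path is made up):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class LoadReferenceData {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("config.xml")) {
            IgniteCache<Object, Object> cache =
                ignite.cache("reference-data-cache");

            // Invokes CacheStore.loadCache() on every node that owns
            // partitions of this cache; each node loads its own share.
            cache.loadCache(null);
        }
    }
}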

Regards,
-- 
Ilya Kasnacheev


Fri, Aug 2, 2019 at 19:10, raja24 :

> Hi Folks,
>
> I have implemented cache read-through and it is working fine with a
> single node. I'm trying to test with a two-node cluster, and it is not
> working for the backup data concept.
> My test case is as below:
>
> - I have started two server nodes
> - Data is loaded into one server node (I can see DB activity logs in the
> server console) and I am able to retrieve the cache. Then I stopped this
> server node
> - The other node is still running
> - Trying to get the cache data, this server node calls the database
> again instead of getting the data from the backup copies
>
> Please advise on this issue. My configuration is as below:
>
> <bean class="org.apache.ignite.configuration.CacheConfiguration">
>     <property name="name" value="reference-data-cache"/>
>     <property name="cacheMode" value="PARTITIONED"/>
>     <property name="backups" value="1"/>
>     <property name="readThrough" value="true"/>
>     <property name="writeThrough" value="true"/>
>     <property name="cacheStoreFactory">
>         <bean class="javax.cache.configuration.FactoryBuilder"
>               factory-method="factoryOf">
>             <constructor-arg value="org.test.IgniteCacheRead"/>
>         </bean>
>     </property>
> </bean>
>
>
>
>
> Thanks,
> Raja
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>