Re: Ignite Cache Data Not Available in Other Server Nodes

2017-12-13 Thread Nikolai Tikhonov
The Cache API doesn't have a method with this signature. Also, I don't see
the sense in inserting the same entry into the cache many times. :)
I think you are trying to get entries that are not present in the cache.
Execute a Scan Query [1] over the cache and look at what the cache really
contains.

1. https://apacheignite.readme.io/docs/cache-queries#scan-queries
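For reference, a full scan over a cache might look like the following minimal sketch (the cache name "myCache" and the Object key/value types are placeholders, not from this thread):

```java
// Iterate over every entry the cluster actually holds for a cache.
// "myCache" and the Object types are assumptions for illustration.
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ScanAll {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Object, Object> cache = ignite.getOrCreateCache("myCache");
            // A ScanQuery with no filter returns every entry in the cache.
            try (QueryCursor<Cache.Entry<Object, Object>> cursor =
                     cache.query(new ScanQuery<>())) {
                for (Cache.Entry<Object, Object> e : cursor)
                    System.out.println(e.getKey() + " -> " + e.getValue());
            }
        }
    }
}
```

Printing the scanned entries from each node quickly shows which node actually holds the data you expect.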

On Tue, Dec 12, 2017 at 7:56 PM, Harshil garg <harshilbi...@gmail.com>
wrote:

> Here is the code snippet which shows how we put data to the cache.
>
> public void method(argType arg) {
>     while (true) {
>         try (Transaction tx = ignite.transactions().txStart(TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
>             workflowRunStateIgniteCache.put(arg);
>             tx.commit();
>         } catch (TransactionOptimisticException e) {
>             System.out.println("Transaction failed. Retrying...");
>         }
>     }
> }
>
>
>
> On Tue, Dec 12, 2017 at 8:19 PM, Nikolai Tikhonov <ntikho...@apache.org>
> wrote:
>
>> Can you share code snippet which shows how you put data to the cache?
>>
>> On Tue, Dec 12, 2017 at 12:26 PM, Harshil garg <harshilbi...@gmail.com>
>> wrote:
>>
>>> Sorry, I forgot to attach the XML used for configuring the cache.
>>>
>>> [The Spring XML configuration was mangled by the mailing-list archive:
>>> most tags were stripped. What remains legible: a <beans> root declared
>>> against the spring-beans-3.2 and spring-context-3.2 schemas, an
>>> IgniteConfiguration bean, three CacheConfiguration beans (including
>>> caches named via ${cache.workflow-run.name} and
>>> ${cache.workflow-pause.name}, each with a
>>> com.mediaiq.caps.platform.choreography.commons.filter.DataNodeFilter
>>> node filter), and a TcpDiscoverySpi using
>>> TcpDiscoveryMulticastIpFinder.]

Re: Ignite Cache Data Not Available in Other Server Nodes

2017-12-12 Thread Nikolai Tikhonov
>> [Quoted Spring XML configuration, mangled by the archive; legible parts:
>> CacheConfiguration beans named via ${cache.workflow-run.name} and
>> ${cache.workflow-pause.name}, each with a DataNodeFilter node filter,
>> and a TcpDiscoverySpi whose TcpDiscoveryMulticastIpFinder lists
>> 127.0.0.1:47500..47509.]
>>
>>
>>
>>
>> On Mon, Dec 11, 2017 at 8:29 PM, Nikolai Tikhonov <ntikho...@apache.org>
>> wrote:
>>
>>> Hello!
>>>
>>> It looks weird to me. You should see the same data set from all nodes
>>> of the cluster. I think you either removed data from other nodes or
>>> performed operations on another cache. Can you share a simple Maven
>>> project which reproduces the problem?
>>>
>>> On Mon, Dec 11, 2017 at 5:22 PM, Harshil garg <harshilbi...@gmail.com>
>>> wrote:
>>>
>>>> I am trying to access Ignite cache data from other nodes. I am able to
>>>> access the Ignite cache, but the cache is completely empty, and hence a
>>>> NullPointerException is thrown when I do cache.get(key).
>>>>
>>>> I have tried both REPLICATED and PARTITIONED mode for the
>>>> workflowRunState cache.
>>>>
>>>> Here is the xml configuration
>>>>
>>>> [Quoted Spring XML configuration, mangled by the archive and truncated
>>>> by the digest; legible parts: the spring-beans-3.2 schema, an
>>>> IgniteConfiguration bean, and CacheConfiguration beans.]

Re: cluster hanged when client node put data into the caches

2017-12-12 Thread Nikolai Tikhonov
Hello,

You don't have to synchronize access to the Ignite Cache API: the cache
instance is already thread-safe. I recommend reading the following
documentation page: https://apacheignite.readme.io/docs/jvm-and-system-tuning
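For the "to-space exhausted" pauses quoted below, a typical G1 adjustment is to reserve more to-space and start concurrent marking earlier. A hedged example of JVM options (the heap size and percentages are illustrative assumptions to tune for your workload, not values taken from this thread):

```
# Illustrative starting point for an Ignite server node.
-Xms16g -Xmx16g
-XX:+UseG1GC
-XX:G1ReservePercent=20                 # larger to-space reserve, mitigates "to-space exhausted"
-XX:InitiatingHeapOccupancyPercent=40   # start concurrent marking earlier
-XX:+AlwaysPreTouch
-XX:+DisableExplicitGC
```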

On Sun, Dec 10, 2017 at 4:49 PM, Aurora <2565003...@qq.com> wrote:

> @Nikolai
> Your suggestion is right.
> "to-space exhausted" occured  in GC Logs on 3 out of 6 server nodes, as:
> Node 1:
> 2017-12-05T18:36:48.716+0800: 6870.705: [GC pause (G1 Evacuation Pause)
> (young), 0.4171797 secs]
>[Parallel Time: 412.1 ms, GC Workers: 13]
>   [GC Worker Start (ms): Min: 6870705.5, Avg: 6870705.7, Max:
> 6870705.8,
> Diff: 0.3]
>   [Ext Root Scanning (ms): Min: 0.6, Avg: 0.7, Max: 0.9, Diff: 0.4,
> Sum:
> 9.6]
>   [Update RS (ms): Min: 56.8, Avg: 58.6, Max: 60.2, Diff: 3.4, Sum:
> 761.8]
>  [Processed Buffers: Min: 69, Avg: 76.7, Max: 91, Diff: 22, Sum:
> 997]
>   [Scan RS (ms): Min: 1.4, Avg: 2.8, Max: 4.7, Diff: 3.3, Sum: 36.7]
>   [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1,
> Sum: 0.2]
>   [Object Copy (ms): Min: 349.1, Avg: 349.2, Max: 349.3, Diff: 0.2,
> Sum:
> 4540.2]
>   [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.5]
>  [Termination Attempts: Min: 1, Avg: 40.3, Max: 75, Diff: 74, Sum:
> 524]
>   [GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.3, Diff: 0.2, Sum:
> 1.7]
>   [GC Worker Total (ms): Min: 411.4, Avg: 411.6, Max: 411.8, Diff: 0.4,
> Sum: 5350.7]
>   [GC Worker End (ms): Min: 6871117.2, Avg: 6871117.3, Max: 6871117.4,
> Diff: 0.2]
>[Code Root Fixup: 0.1 ms]
>[Code Root Purge: 0.0 ms]
>[Clear CT: 0.9 ms]
>[Other: 4.1 ms]
>   [Choose CSet: 0.0 ms]
>   [Ref Proc: 0.4 ms]
>   [Ref Enq: 0.0 ms]
>   [Redirty Cards: 0.9 ms]
>   [Humongous Register: 0.1 ms]
>   [Humongous Reclaim: 0.0 ms]
>   [Free CSet: 1.9 ms]
>[Eden: 8280.0M(8280.0M)->0.0B(8032.0M) Survivors: 968.0M->968.0M Heap:
> 13.4G(16.0G)->5680.8M(16.0G)]
>  [Times: user=5.37 sys=0.00, real=0.42 secs]
> 2017-12-05T18:40:45.218+0800: 7107.207: [GC pause (G1 Evacuation Pause)
> (young) (to-space exhausted), 19.5379052 secs]
>[Parallel Time: 1.0 ms, GC Workers: 13]
>   [GC Worker Start (ms): Min: 7107207.2, Avg: 7107207.4, Max:
> 7107207.5,
> Diff: 0.3]
>   [Ext Root Scanning (ms): Min: 0.6, Avg: 0.7, Max: 0.9, Diff: 0.3,
> Sum:
> 9.1]
>   [Update RS (ms): Min: 143.9, Avg: 144.0, Max: 144.1, Diff: 0.2, Sum:
> 1871.6]
>  [Processed Buffers: Min: 78, Avg: 87.2, Max: 108, Diff: 30, Sum:
> 1134]
>   [Scan RS (ms): Min: 6.1, Avg: 6.2, Max: 6.3, Diff: 0.2, Sum: 80.3]
>   [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0,
> Sum: 0.2]
>   [Object Copy (ms): Min: 16474.9, Avg: 16481.7, Max: 16514.7, Diff:
> 39.8, Sum: 214262.3]
>   [Termination (ms): Min: 0.0, Avg: 32.9, Max: 39.6, Diff: 39.6, Sum:
> 427.8]
>  [Termination Attempts: Min: 1, Avg: 1.5, Max: 4, Diff: 3, Sum: 19]
>   [GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.1, Sum:
> 0.9]
>   [GC Worker Total (ms): Min: 16665.4, Avg: 16665.6, Max: 16665.7,
> Diff:
> 0.3, Sum: 216652.3]
>   [GC Worker End (ms): Min: 7123872.9, Avg: 7123872.9, Max: 7123873.0,
> Diff: 0.1]
>[Code Root Fixup: 0.1 ms]
>[Code Root Purge: 0.0 ms]
>[Clear CT: 0.8 ms]
>[Other: 2871.0 ms]
>   [Evacuation Failure: 2861.6 ms]
>   [Choose CSet: 0.0 ms]
>   [Ref Proc: 0.4 ms]
>   [Ref Enq: 0.0 ms]
>   [Redirty Cards: 6.9 ms]
>   [Humongous Register: 0.1 ms]
>   [Humongous Reclaim: 0.0 ms]
>   [Free CSet: 1.3 ms]
>[Eden: 8032.0M(8032.0M)->0.0B(328.0M) Survivors: 968.0M->1504.0M Heap:
> 13.5G(16.0G)->12.5G(16.0G)]
>  [Times: user=43.38 sys=7.36, real=19.53 secs]
> 2017-12-05T18:41:05.194+0800: 7127.183: [GC pause (G1 Evacuation Pause)
> (young) (initial-mark), 0.9987695 secs]
>[Parallel Time: 990.4 ms, GC Workers: 13]
>   [GC Worker Start (ms): Min: 7127183.8, Avg: 7127183.9, Max:
> 7127184.0,
> Diff: 0.2]
>   [Ext Root Scanning (ms): Min: 0.9, Avg: 1.1, Max: 1.3, Diff: 0.4,
> Sum:
> 14.3]
>   [Update RS (ms): Min: 563.4, Avg: 563.6, Max: 564.4, Diff: 1.0, Sum:
> 7327.2]
>  [Processed Buffers: Min: 1025, Avg: 1339.8, Max: 1575, Diff: 550,
> Sum: 17418]
>   [Scan RS (ms): Min: 15.0, Avg: 15.7, Max: 16.0, Diff: 1.0, Sum:
> 204.6]
>   [Code Root Scanning (ms): Min: 0.0, Avg: 0.0, Max: 0.0, Diff: 0.0,
> Sum: 0.1]
>   [Object Copy (ms): Min: 409.2, Avg: 409.4, Max: 409.4, Diff: 0.2,
> Sum:
> 5321.6]
>   [Termination (ms): Min: 0.0, Avg: 0.0, Max: 0.1, Diff: 0.1, Sum: 0.6]
>  [Termination Attempts: Min: 1, Avg: 33.3, Max: 48, Diff: 47, Sum:
> 433]
>   [GC Worker Other (ms): Min: 0.0, Avg: 0.1, Max: 0.2, Diff: 0.2, Sum:
> 1.2]
>   [GC Worker Total (ms): Min: 989.8, Avg: 990.0, Max: 990.2, Diff: 0.4,
> Sum: 12869.6]
>   [GC Worker End (ms): Min: 7128173.8, Avg: 7128173.9, 

Re: Affinity - Join query on the collocated data taking 90 seconds

2017-12-12 Thread Nikolai Tikhonov
Hello!

I've looked at your project, and it seems you confused the cache names. The
cache configuration contains "AccountCache" and "CustomerCache", but for
streaming you use the "Customer" and "Account" caches.

On Tue, Dec 12, 2017 at 4:50 PM, Naveen  wrote:

> Hi All
>
> Has anyone had a chance to look into this issue?
>
> As mentioned, I am using an affinity key and IgniteDataStreamer to load
> 10M records.
>
> This is how my code looks:
>
> Customer cache - PartyID is the primary key
> IgniteDataStreamer streamer =
> ignite.dataStreamer("Customer");
>
> Account cache - AccountID is the primary key and also has a PartyID column
> IgniteDataStreamer streamer =
> ignite.dataStreamer("Account");
> // Setting the affinity key
> accountKey = new AffinityKey(AccountID, PartyID);
> streamer.addData(accountKey, act);
>
> My requirement is to join Customer and Account on the PartyID and query
> for a specific party ID. This is the query I run:
>
> select P.PARTY_ID, A.PARTY_ID, P.ACCOUNT_ID_LIST from "Customer".Customer
> P,
> "Account".Account  A where P.PARTY_ID='P10101' and P.PARTY_ID=
> A.PARTY_ID;
>
> This seems to work without specifying distributedJoins=true, and the
> response is also very fast, around 30 ms. However, I can see some data
> missing from the Account cache.
>
> DataLoadAffinity.java
>  t1478/DataLoadAffinity.java>
>
> I have added the Java code for reference, in case you want to reproduce the issue.
>
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cache store class not found exception

2017-12-12 Thread Nikolai Tikhonov
Here is the correct link to the related thread:
http://apache-ignite-users.70518.x6.nabble.com/CacheStore-being-serialized-to-client-td1931.html

On Mon, Dec 11, 2017 at 7:31 PM, Nikolai Tikhonov <ntikho...@apache.org>
wrote:

> Hello!
>
> Apache Ignite requires that CacheStore classes be on the classpath on
> client nodes. See this thread with the same question:
> http://apache-ignite-users.70518.x6.nabble.com/Cache-store-class-not-found-exception-td18842.html
>
>
>
>
> On Mon, Dec 11, 2017 at 6:28 PM, Naveen Kumar <naveen.band...@gmail.com>
> wrote:
>
>> Please make sure the class is on the server Ignite's CLASSPATH, or just
>> deploy the JAR to the $IGNITE_HOME/libs/user directory.
>>
>> This should resolve the issue.
>>
>> On Mon, Dec 11, 2017 at 8:52 PM, Mikael <mikael-arons...@telia.com>
>> wrote:
>> > Hi!
>> >
>> > I have a cache in a server node that is using a custom cache store for a
>> > JDBC database, when I connect a client node (running inside a
>> > Payara/Glassfish web application) to that server node I get a:
>> >
>> > class org.apache.ignite.IgniteCheckedException: Failed to find class
>> with
>> > given class loader for unmarshalling (make sure same versions of all
>> classes
>> > are available on all nodes or enable peer-class-loading)
>> > [clsLdr=WebappClassLoader (delegate=true; repositories=WEB-INF/classes/)
>> ,
>> > cls=my_cache_store_class]
>> >
>> > And sure, that class is not there, but it's a client, so the cache
>> > should not be there either, and the cache store will not work in the
>> > client because there is no database there. So the question is: am I
>> > doing something wrong, or should it be like that? Do I need to put the
>> > class in the client? It will have references to other classes that are
>> > not there either, so if it tries to unmarshal the cache store in the
>> > client, that will not be a good idea.
>> >
>> >
>>
>>
>>
>> --
>> Thanks & Regards,
>> Naveen Bandaru
>>
>
>


Re: Cache store class not found exception

2017-12-11 Thread Nikolai Tikhonov
Hello!

Apache Ignite requires that CacheStore classes be on the classpath on
client nodes. See this thread with the same question:
http://apache-ignite-users.70518.x6.nabble.com/Cache-store-class-not-found-exception-td18842.html




On Mon, Dec 11, 2017 at 6:28 PM, Naveen Kumar 
wrote:

> Please make sure the class is on the server Ignite's CLASSPATH, or just
> deploy the JAR to the $IGNITE_HOME/libs/user directory.
>
> This should resolve the issue.
>
> On Mon, Dec 11, 2017 at 8:52 PM, Mikael  wrote:
> > Hi!
> >
> > I have a cache in a server node that is using a custom cache store for a
> > JDBC database, when I connect a client node (running inside a
> > Payara/Glassfish web application) to that server node I get a:
> >
> > class org.apache.ignite.IgniteCheckedException: Failed to find class
> with
> > given class loader for unmarshalling (make sure same versions of all
> classes
> > are available on all nodes or enable peer-class-loading)
> > [clsLdr=WebappClassLoader (delegate=true; repositories=WEB-INF/classes/)
> ,
> > cls=my_cache_store_class]
> >
> > And sure, that class is not there, but it's a client, so the cache
> > should not be there either, and the cache store will not work in the
> > client because there is no database there. So the question is: am I
> > doing something wrong, or should it be like that? Do I need to put the
> > class in the client? It will have references to other classes that are
> > not there either, so if it tries to unmarshal the cache store in the
> > client, that will not be a good idea.
> >
> >
>
>
>
> --
> Thanks & Regards,
> Naveen Bandaru
>


Re: Ignite Cache Data Not Available in Other Server Nodes

2017-12-11 Thread Nikolai Tikhonov
Hello!

It looks weird to me. You should see the same data set from all nodes of
the cluster. I think you either removed data from other nodes or performed
operations on another cache. Can you share a simple Maven project which
reproduces the problem?

On Mon, Dec 11, 2017 at 5:22 PM, Harshil garg 
wrote:

> I am trying to access Ignite cache data from other nodes. I am able to
> access the Ignite cache, but the cache is completely empty, and hence a
> NullPointerException is thrown when I do cache.get(key).
>
> I have tried both REPLICATED and PARTITIONED mode for the
> workflowRunState cache.
>
> Here is the xml configuration
>
> [The Spring XML configuration was mangled by the mailing-list archive:
> most tags were stripped. What remains legible: a <beans> root declared
> against the spring-beans-3.2 and spring-context-3.2 schemas, an
> IgniteConfiguration bean, three CacheConfiguration beans (including a
> cache named via ${cache.workflow-pause.name}, each with a
> com.mediaiq.caps.platform.choreography.commons.filter.DataNodeFilter
> node filter), and a TcpDiscoverySpi whose TcpDiscoveryMulticastIpFinder
> lists 127.0.0.1:47500..47509.]
>
> All these caches are deployed on the Data Node.
>
> Now, after doing some operations, I had populated data in
> workflowRunStateCache, which I verified in the web console as well.
>
> But when I try to access the same cache from a different server node, no
> data is available in it. In the following code I am trying to access
> workflowRunStateCache from a different server node; it shows
> containsKey as false and throws a NullPointerException in debug mode when
> I do workflowRunStateCache.get();
>
> while (true) {
>     try (Transaction tx = ignite.transactions().txStart(TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
>         System.out.println("Contains Key" + workflowRunStateIgniteCache.containsKey(updatedKeys.get(0)));
>         System.out.println("Contains Key" + workflowRunStateIgniteCache);
>         Boolean flowProcessable = updatedKeys.stream()
>             // check if there is at least one event in each cache entry to be processed
>             .map(updatedKey -> workflowRunStateIgniteCache.get(updatedKey).getFlowRunEvents().size() > 0)
>             .reduce(true, (a, b) -> a && b).booleanValue();
>
>         List inputEvents = null;
>
>         if (flowProcessable) {
>             inputEvents = updatedKeys
>                 .stream()
>                 .map(updatedKey -> {
>                     try {
>                         return workflowRunStateIgniteCache.get(updatedKey).getFlowRunEvents().take();
>                     } catch (InterruptedException e) {
>                         e.printStackTrace();
>                     }
>                     return null;
>                 }).collect(Collectors.toList());
>         }
>
>         tx.commit();
>
>         break;
>     } catch

Re: Query Execution Error when changing the cache from PARTITIONED to REPLICATED

2017-12-11 Thread Nikolai Tikhonov
Hello,

I see the following error in the attached files: "Caused by:
java.lang.OutOfMemoryError: Java heap space". It means that one node
can't handle the whole result set of this query. You need to increase the
JVM heap size (via the -Xmx/-Xms VM properties) or add a node to the
cluster. Look at the following doc pages:

1. https://apacheignite.readme.io/docs/preparing-for-production
2. https://apacheignite.readme.io/docs#garbage-collection-tuning
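Besides growing the heap, later Ignite versions can stream a large result set instead of materializing it at once. A hedged sketch (the cache and table names are placeholders; check that your Ignite version actually provides SqlFieldsQuery#setLazy before relying on it):

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LazyQuery {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<?, ?> cache = ignite.cache("SomeCache"); // name assumed
            SqlFieldsQuery qry = new SqlFieldsQuery("select * from SomeTable")
                .setLazy(true); // stream rows instead of buffering the whole result
            for (List<?> row : cache.query(qry))
                process(row); // placeholder for application logic
        }
    }

    static void process(List<?> row) { /* ... */ }
}
```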


On Sun, Dec 10, 2017 at 9:21 PM, Ahmad Al-Masry  wrote:

> Dears;
> I also noticed that the problem happens even when I reduce to a single
> node that has 2 GB of free RAM.
> Any suggestions?
> BR
>
> > On Dec 10, 2017, at 4:23 PM, Ahmad Al-Masry  wrote:
> >
> > Dears;
> > I tried to execute a complex query on a cluster of two nodes. When the
> > caches are configured as PARTITIONED, the execution takes about 12
> > seconds, but when I changed it to REPLICATED, the attached error
> > appears. And when I tried to increase the Java heap, the query reached
> > a timeout.
> > Also attached is the nodes configuration.
> > BR
> > 
>
>
> --
>
>
>
> This email, and the content it contains, are intended only for the persons
> or entities to which it is addressed. It may contain sensitive,
> confidential and/or privileged material. Any review, retransmission,
> dissemination or other use of, or taking of any action in reliance upon,
> this information by persons or entities other than the intended
> recipient(s) is prohibited. If you received this email in error, please
> immediately contact security[at]harri[dot]com and delete it from any device
> or system on which it may be stored.
>


Re: Data lose in query

2017-12-11 Thread Nikolai Tikhonov
It depends on your data model and can't be enabled via a single property.
Please look at the following documentation pages:

https://apacheignite.readme.io/docs/affinity-collocation
https://apacheignite-sql.readme.io/docs/distributed-joins#collocated-joins
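As a toy, Ignite-free illustration of why collocation matters for joins: a join can be answered locally only if both rows map to the same partition, which happens when the partition is derived from the shared affinity field. All names and the hash scheme below are simplified assumptions, not Ignite's actual affinity function:

```java
import java.util.Objects;

public class AffinityToy {
    static final int PARTS = 1024;

    // Simplified stand-in for an affinity function: partition from one field.
    static int partitionFor(Object affinityField) {
        return Math.floorMod(Objects.hashCode(affinityField), PARTS);
    }

    public static void main(String[] args) {
        String partyId = "P10123";   // shared join field
        String accountId = "A10123"; // Account's own primary key

        // Customer keyed by partyId; Account keyed by (accountId, partyId)
        // but using partyId as the affinity field -> same partition, so a
        // non-distributed join finds both rows on one node.
        System.out.println(partitionFor(partyId) == partitionFor(partyId));   // prints true

        // If Account were partitioned by accountId instead, these two rows
        // would land on different partitions, and a non-distributed join
        // would silently miss data.
        System.out.println(partitionFor(accountId) == partitionFor(partyId)); // prints false
    }
}
```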

On Mon, Dec 11, 2017 at 4:02 PM, Ahmad Al-Masry <ma...@harri.com> wrote:

> How can I enable this on the server configuration XML?
> BR
>
>
> On Dec 11, 2017, at 2:31 PM, Nikolai Tikhonov <ntikho...@apache.org>
> wrote:
>
> Hi,
>
> I strongly recommend taking care of the collocation of your data (as
> suggested above by Vlad) instead of enabling the distributedJoins flag.
> The performance of this type of join is worse than the performance of
> affinity-collocation-based joins, because there are many more network
> round trips and data movement between the nodes to fulfill a query [1].
>
> 1. https://apacheignite-sql.readme.io/docs/distributed-
> joins#non-collocated-joins
>
>
> On Mon, Dec 11, 2017 at 3:03 PM, Ahmad Al-Masry <ma...@harri.com> wrote:
>
>> Hi;
>> When I enabled distributed joins, I got the following exception:
>>
>> java.sql.SQLException: javax.cache.CacheException: Failed to prepare
>> distributed join query: join condition does not use index
>> [joinedCache=PositionTypeCache
>>
>> Should I remove the indexes before doing distributed joins?
>> BR
>>
>>
>> On Dec 11, 2017, at 10:43 AM, Vladislav Pyatkov <vldpyat...@gmail.com>
>> wrote:
>>
>> Hi,
>>
>> When you use JOIN, you should either enable the DistributedJoins flag
>> [1] or take care that each joined entry is collocated [2].
>>
>> [1]: org.apache.ignite.cache.query.SqlFieldsQuery#setDistributedJoins
>> [2]: https://apacheignite.readme.io/docs
>>
>> On Mon, Dec 11, 2017 at 11:36 AM, Ahmad Al-Masry <ma...@harri.com> wrote:
>>
>>> Dears;
>>> When I execute the attached query on the MySQL data source or on a
>>> single-node Ignite, it returns about 25k records. With multiple nodes,
>>> it gives me about 3500 records.
>>> The caches are atomic and partitioned.
>>> Any suggestions?
>>> BR
>>>
>>>
>>
>>
>>
>> --
>> Vladislav Pyatkov
>>
>>
>>
>>
>>
>
>
>
>
>


Re: Data lose in query

2017-12-11 Thread Nikolai Tikhonov
Hi,

I strongly recommend taking care of the collocation of your data (as
suggested above by Vlad) instead of enabling the distributedJoins flag. The
performance of this type of join is worse than the performance of
affinity-collocation-based joins, because there are many more network
round trips and data movement between the nodes to fulfill a query
[1].

1.
https://apacheignite-sql.readme.io/docs/distributed-joins#non-collocated-joins
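For completeness, the flag Vlad mentions in the quoted reply below is set on the query object. A minimal hedged sketch (cache name and query text are placeholders), keeping in mind that collocation remains the preferred fix:

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class DistributedJoin {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<?, ?> cache = ignite.cache("SomeCache"); // name assumed
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "select a.id, b.id from A a join B b on a.fk = b.id") // query assumed
                .setDistributedJoins(true); // ship rows between nodes as needed
            List<List<?>> rows = cache.query(qry).getAll();
            System.out.println(rows.size());
        }
    }
}
```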


On Mon, Dec 11, 2017 at 3:03 PM, Ahmad Al-Masry  wrote:

> Hi;
> When I enabled distributed joins, I got the following exception:
>
> java.sql.SQLException: javax.cache.CacheException: Failed to prepare
> distributed join query: join condition does not use index
> [joinedCache=PositionTypeCache
>
> Should I remove the indexes before doing distributed joins?
> BR
>
>
> On Dec 11, 2017, at 10:43 AM, Vladislav Pyatkov 
> wrote:
>
> Hi,
>
> When you use JOIN, you should either enable the DistributedJoins flag [1]
> or take care that each joined entry is collocated [2].
>
> [1]: org.apache.ignite.cache.query.SqlFieldsQuery#setDistributedJoins
> [2]: https://apacheignite.readme.io/docs
>
> On Mon, Dec 11, 2017 at 11:36 AM, Ahmad Al-Masry  wrote:
>
>> Dears;
>> When I execute the attached query on the MySQL data source or on a
>> single-node Ignite, it returns about 25k records. With multiple nodes,
>> it gives me about 3500 records.
>> The caches are atomic and partitioned.
>> Any suggestions?
>> BR
>>
>>
>
>
>
> --
> Vladislav Pyatkov
>
>
>
>
>


Re: Ignite behaving strange with Spark SharedRDD in AWS EMR Yarn Client Mode

2017-12-11 Thread Nikolai Tikhonov
Hi @raksja!

Was your problem resolved? If not, can you provide detailed steps to
reproduce this behaviour?


On Thu, Nov 30, 2017 at 4:17 AM, vkulichenko 
wrote:

> I don't think raksja had an issue with only one record in the RDD.
> IgniteRDD#count redirects directly to IgniteCache#size, so if it returns
> 1, you indeed have only one entry in the cache for some reason.
>
> -Val
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cache clear through IgniteVisor console

2017-12-08 Thread Nikolai Tikhonov
Hello,

Can you share thread dumps from all nodes? You can take them with the jstack tool.

On Fri, Dec 8, 2017 at 3:04 PM, Naveen  wrote:

> Hi
>
> I am using Ignite 2.3 and trying to clear a cache that has 10M entries. I
> issued the clear command, but nothing happened: the cache was not
> cleared and the command did not return; it was stuck in a hung state.
>
> This is the command I issued
>
> cache -clear -c=Customer
>
> What could have gone wrong?
>
> Thanks
> naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Affinity - Join query on the collocated data taking 90 seconds

2017-12-08 Thread Nikolai Tikhonov
Can you share your configuration and model? It would be great if you could
provide a simple Maven project on GitHub.

On Thu, Dec 7, 2017 at 10:10 PM, Naveen  wrote:

> I have created an index on partyId of the Account cache; after that, the
> query responds in milliseconds.
>
> However, my basic affinity and collocation do not seem to be working.
>
> I have created records with IDs from P10100 to P10101.
> If you see below, some of the records are not returned. Does that mean
> they are not collocated?
>
> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
> P.PARTY_ID='P10116' and P.PARTY_ID= A.PARTY_ID;
> +----------+----------+-----------------+
> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
> +----------+----------+-----------------+
> +----------+----------+-----------------+
> No rows selected (0.01 seconds)
> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
> P.PARTY_ID='P10123' and P.PARTY_ID= A.PARTY_ID;
> +----------+----------+-----------------+
> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
> +----------+----------+-----------------+
> | P10123   | P10123   | A10123          |
> +----------+----------+-----------------+
> 1 row selected (0.011 seconds)
> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
> P.PARTY_ID='P10124' and P.PARTY_ID= A.PARTY_ID;
> +----------+----------+-----------------+
> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
> +----------+----------+-----------------+
> +----------+----------+-----------------+
> No rows selected (0.022 seconds)
>
> What could be the reason?
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Affinity - Join query on the collocated data taking 90 seconds

2017-12-07 Thread Nikolai Tikhonov
See [1] for how to use the EXPLAIN statement.

1.
https://apacheignite-sql.readme.io/docs/performance-and-debugging#using-explain-statement
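For example, running the statement with an EXPLAIN prefix through the same cache API returns the query plan rather than rows (a sketch; only the cache and column names from the thread are reused, the rest is illustrative):

```java
import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ExplainPlan {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<?, ?> customers = ignite.cache("Customer");
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "explain select P.PARTY_ID, A.PARTY_ID " +
                "from \"Customer\".Customer P, \"Account\".Account A " +
                "where P.PARTY_ID = ? and P.PARTY_ID = A.PARTY_ID").setArgs("P10101");
            // Each row of the result is part of the query plan: an index
            // lookup on PARTY_ID should appear here, a full scan should not.
            for (List<?> row : customers.query(qry).getAll())
                System.out.println(row.get(0));
        }
    }
}
```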

On Thu, Dec 7, 2017 at 7:43 PM, Nikolai Tikhonov <ntikho...@apache.org>
wrote:

> Hi,
>
> Did you create indexes on the PARTY_ID fields?
> Anyway, can you share the EXPLAIN output for the query, and also try
> rewriting the query as an inner join?
>
> On Thu, Dec 7, 2017 at 5:59 PM, Naveen <naveen.band...@gmail.com> wrote:
>
>> Hi
>>
>> AM using 2.3
>> Have 2 caches
>> Customer - PartyId is the Primary Key
>> Account - AccountId is the primary key and also has another column called
>> PartyId
>>
>> While storing the Account data, I am using AffinityKey<AccountId,
>> PartyId>, so that the join query below works since the data is
>> collocated. I could get the result for the query below without
>> distributedJoins=true, which means, as I understand it, the data is
>> collocated and that's why it returns the data. But it is taking 90
>> seconds.
>>
>> select P.PARTY_ID, A.PARTY_ID, P.ACCOUNT_ID_LIST from "Customer".Customer
>> P,
>> "Account".Account  A where P.PARTY_ID='P10101' and P.PARTY_ID=
>> A.PARTY_ID;
>>
>> Results of the Query
>>
>> [tibusr@JMNGD1BAQ10V05 bin]$ ./sqlline.sh --color=true --verbose=true -u
>> jdbc:ignite:thin://127.0.0.1
>> issuing: !connect jdbc:ignite:thin://127.0.0.1 '' ''
>> org.apache.ignite.IgniteJdbcThinDriver
>> Connecting to jdbc:ignite:thin://127.0.0.1
>> Connected to: Apache Ignite (version 2.3.0#20171028-sha1:8add7fd5)
>> Driver: Apache Ignite Thin JDBC Driver (version
>> 2.3.0#20171028-sha1:8add7fd5)
>> Autocommit status: true
>> Transaction isolation: TRANSACTION_REPEATABLE_READ
>> sqlline version 1.3.0
>> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
>> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
>> P.PARTY_ID='P10101' and P.PARTY_ID= A.PARTY_ID;
>> +----------+----------+-----------------+
>> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
>> +----------+----------+-----------------+
>> | P10101   | P10101   | A10101          |
>> +----------+----------+-----------------+
>> 1 row selected (89.95 seconds)
>> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
>> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
>> P.PARTY_ID='P10001' and P.PARTY_ID= A.PARTY_ID;
>> +----------+----------+-----------------+
>> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
>> +----------+----------+-----------------+
>> | P10001   | P10001   | A10001          |
>> +----------+----------+-----------------+
>> 1 row selected (90.984 seconds)
>> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
>> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
>> P.PARTY_ID='P10002' and P.PARTY_ID= A.PARTY_ID;
>> +----------+----------+-----------------+
>> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
>> +----------+----------+-----------------+
>> +----------+----------+-----------------+
>> No rows selected (90.985 seconds)
>> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
>> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
>> P.PARTY_ID='P10101' and P.PARTY_ID= A.PARTY_ID;
>> +----------+----------+-----------------+
>> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
>> +----------+----------+-----------------+
>> | P10101   | P10101   | A10101          |
>> +----------+----------+-----------------+
>> 1 row selected (88.456 seconds)

Re: cluster hanged when client node put data into the caches

2017-12-07 Thread Nikolai Tikhonov
Hello,

Can you share thread dumps from all the nodes? They are needed for further
investigation.

On Thu, Dec 7, 2017 at 7:41 PM, Aurora <2565003...@qq.com> wrote:

> Hi guys.
> Our project has run into this critical issue.
> Ignite version 2.2, 10 nodes on 5 servers.
> Client node consumed data from Kafka, and put them into caches.
> Sometimes client node was disconnected suddenly, then the cluster with 10
> server nodes hanged,
> CPU usage on 5 servers reached almost 100%.
>
> I appreciate if you could give me a hint on solving this issue.
>
> thanks.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Affinity - Join query on the collocated data taking 90 seconds

2017-12-07 Thread Nikolai Tikhonov
Hi,

Did you create indexes on the PARTY_ID fields?
Anyway, can you share the EXPLAIN output for the query, and also try to
rewrite the query as an inner join?
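
As a sketch, here is what that advice looks like in sqlline. Table and column names are taken from the message below; the exact EXPLAIN output depends on the schema and the H2 planner:

```sql
-- 1. Index the join column on both sides:
CREATE INDEX idx_customer_party ON "Customer".Customer (PARTY_ID);
CREATE INDEX idx_account_party  ON "Account".Account (PARTY_ID);

-- 2. Rewrite the implicit cross join as an explicit inner join:
SELECT P.PARTY_ID, A.PARTY_ID, P.ACCOUNT_ID_LIST
  FROM "Customer".Customer P
  INNER JOIN "Account".Account A
    ON P.PARTY_ID = A.PARTY_ID
 WHERE P.PARTY_ID = 'P10101';

-- 3. Prefix the query with EXPLAIN to check that the indexes are used:
EXPLAIN SELECT P.PARTY_ID, A.PARTY_ID, P.ACCOUNT_ID_LIST
  FROM "Customer".Customer P
  INNER JOIN "Account".Account A
    ON P.PARTY_ID = A.PARTY_ID
 WHERE P.PARTY_ID = 'P10101';
```

If the plan still shows full scans on either side, the indexes are missing or not applicable, which would explain the 90-second runtimes.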

On Thu, Dec 7, 2017 at 5:59 PM, Naveen <naveen.band...@gmail.com> wrote:

> Hi
>
> AM using 2.3
> Have 2 caches
> Customer - PartyId is the Primary Key
> Account - AccountId is the primary key and also has another column called
> PartyId
>
> While storing the Account data, I am using AffinityKey<AccountId, PartyId>,
> so that my join query below works since the data is collocated. I could get
> the result for the query below without distributedJoins=true, which means my
> understanding is that the data is collocated and that's why it returns the
> data. But it is taking 90 secs.
>
> select P.PARTY_ID, A.PARTY_ID, P.ACCOUNT_ID_LIST from "Customer".Customer
> P,
> "Account".Account  A where P.PARTY_ID='P10101' and P.PARTY_ID=
> A.PARTY_ID;
>
> Results of the Query
>
> [tibusr@JMNGD1BAQ10V05 bin]$ ./sqlline.sh --color=true --verbose=true -u
> jdbc:ignite:thin://127.0.0.1
> issuing: !connect jdbc:ignite:thin://127.0.0.1 '' ''
> org.apache.ignite.IgniteJdbcThinDriver
> Connecting to jdbc:ignite:thin://127.0.0.1
> Connected to: Apache Ignite (version 2.3.0#20171028-sha1:8add7fd5)
> Driver: Apache Ignite Thin JDBC Driver (version
> 2.3.0#20171028-sha1:8add7fd5)
> Autocommit status: true
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> sqlline version 1.3.0
> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
> P.PARTY_ID='P10101' and P.PARTY_ID= A.PARTY_ID;
> +----------+----------+-----------------+
> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
> +----------+----------+-----------------+
> | P10101   | P10101   | A10101          |
> +----------+----------+-----------------+
> 1 row selected (89.95 seconds)
> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
> P.PARTY_ID='P10001' and P.PARTY_ID= A.PARTY_ID;
> +----------+----------+-----------------+
> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
> +----------+----------+-----------------+
> | P10001   | P10001   | A10001          |
> +----------+----------+-----------------+
> 1 row selected (90.984 seconds)
> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
> P.PARTY_ID='P10002' and P.PARTY_ID= A.PARTY_ID;
> +----------+----------+-----------------+
> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
> +----------+----------+-----------------+
> +----------+----------+-----------------+
> No rows selected (90.985 seconds)
> 0: jdbc:ignite:thin://127.0.0.1> select P.PARTY_ID, A.PARTY_ID,
> P.ACCOUNT_ID_LIST from "Customer".Customer P, "Account".Account  A where
> P.PARTY_ID='P10101' and P.PARTY_ID= A.PARTY_ID;
> +----------+----------+-----------------+
> | PARTY_ID | PARTY_ID | ACCOUNT_ID_LIST |
> +----------+----------+-----------------+
> | P10101   | P10101   | A10101          |
> +----------+----------+-----------------+
> 1 row selected (88.456 seconds)
> 0: jdbc:ignite:thin://127.0.0.1> Closing:
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection
> [tibusr@JMNGD1BAQ10V05 bin]$
> [tibusr@JMNGD1BAQ10V05 bin]$ ./sqlline.sh --color=true --verbose=true -u
> jdbc:ignite:thin://127.0.0.1?collacated=true
> issuing: !connect jdbc:ignite:thin://127.0.0.1?collacated=true '' ''
> org.apache.ignite.IgniteJdbcThinDriver
> Connecting to jdbc:ignite:thin://127.0.0.1?collacated=true
> Connected to: Apache Ignite (version 2.3.0#20171028-sha1:8add7fd5)
> Driver: Apache Ignite Thin JDBC Driver (version
> 2.3.0#20171028-sha1:8add7fd5)
> Autocommit status: true
> Transaction isolation: TRANSACTION_REPEATABLE_READ
> sqlline version 1.3.0
> 0: 

Re: How can we use the Discovery URL with SQLLINE which is both load balanced and fault tolerant

2017-12-07 Thread Nikolai Tikhonov
Hi!

The current thin driver implementation doesn't support these features. You
can use the thick driver [1], which does support them.

1. https://apacheignite.readme.io/docs/jdbc-driver#jdbc-client-node-driver
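
A minimal sketch of the thick (client-node) driver: the config path and cache name below are illustrative, and running it requires ignite-core on the classpath. Because the driver starts an Ignite client node from the given Spring XML, it uses the full discovery address list and fails over to surviving nodes like any other client:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThickDriverExample {
    public static void main(String[] args) throws Exception {
        // Register the JDBC client-node (thick) driver.
        Class.forName("org.apache.ignite.IgniteJdbcDriver");

        // The XML is a normal client-mode Ignite config listing all
        // discovery addresses, so the driver is not tied to a single node.
        String url = "jdbc:ignite:cfg://cache=Customer@file:///path/to/ignite-client.xml";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT PARTY_ID FROM \"Customer\".Customer LIMIT 10")) {
            while (rs.next())
                System.out.println(rs.getString(1));
        }
    }
}
```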

On Thu, Dec 7, 2017 at 6:28 PM, Naveen  wrote:

> Hi
>
> I am using 2.3, have 3 nodes in my cluster,
> This is how the config XML entries look like, all 3 nodes have below entry
>
> <property name="discoverySpi">
>     <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>         <property name="ipFinder">
>             <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>                 <property name="addresses">
>                     <list>
>                         <value>10.144.114.113:47500..47502</value>
>                         <value>10.144.114.114:47500..47502</value>
>                         <value>10.144.114.115:47500..47502</value>
>                     </list>
>                 </property>
>             </bean>
>         </property>
>     </bean>
> </property>
>
> Client XML also has the same entry and along with that it also has
> <property name="clientMode" value="true"/>
>
> So my cluster has 3 data nodes and one client node; my Java clients connect
> to the client node for all the operations. Is this the right way to do it,
> or are there any best practices?
>
> Also, thru sqlline, we normally give the URL like this
>
> jdbc:ignite:thin://10.144.114.113:10800
>
> And for some reason, if this node is down, I guess it won't be able to
> connect to the cluster.
> Can I give pair of nodes here like the below, so that it works in FT, if
> first one is down, requests goes to the second node etc.
>
> jdbc:ignite:thin://10.144.114.113:10800;10.144.114.114:10800;
> 10.144.114.115:10800
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance comparison of Primary Vs Secondary Indexes

2017-12-07 Thread Nikolai Tikhonov
There seems to be no direct dependence of performance on the number of
nodes; it strongly depends on your use case. In any case, you need to run
the experiment for your case and analyze the results.

On Thu, Dec 7, 2017 at 6:07 PM, Naveen  wrote:

> Hi Nikolay
>
> Can it get deteriorated further if we have more nodes in the cluster ?
> If so, performance of secondary index is inversely proportional to the
> number of nodes
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: e-mail address change not effective

2017-12-07 Thread Nikolai Tikhonov
Hello,

Did you receive a confirmation message when you sent the request from your
old address? In any case, look at the steps listed below; they might be
helpful for you.

To unsubscribe from the user mailing list, send a message to
user-unsubscr...@ignite.apache.org with the word "Unsubscribe" (without
quotes) as the subject.

If you have a mailing client, follow an unsubscribe link here:
https://ignite.apache.org/community/resources.html#mail-lists



On Thu, Dec 7, 2017 at 2:41 PM, kmandalas  wrote:

> Hello,
>
> I changed e-mail address and have performed the following actions:
> - I confirmed the address change
> - I sent unsubscribe e-mail from my old address
> - I sent subscribe e-mail from my new address
>
> I keep getting e-mails at my old address...
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance comparison of Primary Vs Secondary Indexes

2017-12-07 Thread Nikolai Tikhonov
Hello!

It looks like expected behaviour. In the first request you use the
IgniteCache API. In this case Apache Ignite knows which node in the cluster
has the data and fetches the entry from it: one request and one response.

In the second request Ignite does more work. First, Apache Ignite parses
your SQL query, then sends requests to all nodes that own this cache. This
happens because it is not known exactly which nodes have the entries that
match the conditions. The difference in performance looks reasonable.
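
The routing difference can be sketched in plain Java (no Ignite APIs; the partition and node counts are illustrative): a key-based get hashes the key to a partition owned by exactly one node, while a filter on a non-key column has to fan out to every node owning the cache.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not Ignite's real implementation) of why a
 * primary-key lookup touches one node while a secondary-column query
 * fans out to all of them.
 */
public class RoutingSketch {
    static final int PARTITIONS = 1024;
    static final int NODES = 3;

    /** Key lookup: key hash -> partition -> single owning node. */
    static int ownerNodeForKey(String key) {
        int partition = Math.floorMod(key.hashCode(), PARTITIONS);
        return partition % NODES; // simplistic partition->node assignment
    }

    /** Secondary-column filter: no key to hash, so every node owning the
     *  cache must evaluate the predicate. */
    static List<Integer> nodesForSecondaryFilter() {
        List<Integer> all = new ArrayList<>();
        for (int n = 0; n < NODES; n++)
            all.add(n);
        return all;
    }

    public static void main(String[] args) {
        System.out.println("get('P10007')             -> node " + ownerNodeForKey("P10007"));
        System.out.println("WHERE ACCOUNT_ID_LIST = ? -> nodes " + nodesForSecondaryFilter());
    }
}
```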

Thanks,
Nikolay

On Thu, Dec 7, 2017 at 11:17 AM, Naveen  wrote:

> Hi
>
> Am using 2.3
>
> Doing a POC with 2 caches each having 10M records.
> My cluster configuration is, 3 server nodes and one client node.
>
> When I do the PT on the primary key thru the REST API like this, I get a
> TPS of around 30K and above.
>
> http://10.144.114.115:8080/ignite?cmd=get&key=P10007&cacheName=Customer
>
> I have created an index on Customer table on column ACCOUNT_ID_LIST and ran
> the below command to verify the query; it's returning in 0.025 secs, so
> hopefully it is using the index.
>
> 0: jdbc:ignite:thin://127.0.0.1> select * from "Customer".CUSTOMER where
> ACCOUNT_ID_LIST ='A10001';
> +-----------------+----------------------+------------+-------------------+-----------+
> | ACCOUNT_ID_LIST | CUST_ADDRESS_ID_LIST | PARTYROLE  | PARTY_STATUS_CODE | REFREE_ID |
> +-----------------+----------------------+------------+-------------------+-----------+
> | A10001          | custAddressIdList1   | partyrole1 | partyStatusCode1  | refreeId1 |
> +-----------------+----------------------+------------+-------------------+-----------+
> 1 row selected (0.025 seconds)
>
> And when I do the PT for the below SQL query thru the REST API, TPS has
> come down to 4.5K.
>
> http://10.144.114.115:8080/ignite?cmd=qryexe&type=Customer&pageSize=10&cacheName=Customer&arg1=A10001&qry=ACCOUNT_ID_LIST+%3D+%3F
>
> Is such a drastic drop in TPS expected when querying on a secondary index?
>
> My question is: how is the performance of secondary indexes in Ignite, and
> how do they work internally?
> Basically, with primary keys, the Ignite cluster knows from the key value
> which node is storing the record, so the request goes directly to that node
> and gets the data. How does it work for a secondary index? I doubt it works
> the same way it does for the primary index.
>
> Can someone make me understand how it works.
>
> Thanks
> Naveen
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Use custom Data Region or custom Cache for IgniteAtomicReference - Ignite 2.3

2017-12-07 Thread Nikolai Tikhonov
Hi Krzysztof!

You can do this via AtomicConfiguration#setGroupName. For example:

AtomicConfiguration cfg = new
AtomicConfiguration().setGroupName("atomicRefCacheGroup");

ignite.atomicReference("atomicRef", cfg, "initValue", true);

where "atomicRefCacheGroup" is the cache group that uses your
DataRegionConfiguration.

1. https://apacheignite.readme.io/v2.3/docs/cache-groups
2. https://apacheignite.readme.io/v2.3/docs/data-structures


On Thu, Dec 7, 2017 at 2:49 PM, Krzysztof Chmielewski <
krzysiek.chmielew...@gmail.com> wrote:

> Hi all,
> I would like to ask, if there is a way to use custom
> DataRegionConfiguration
> (other than default) for storing IgniteAtomicReferences.
>
> I would like to use Ignite's 2.3 feature where I can turn on/off Persistent
> per Cache (via DataRegionConfiguration). I would like to enable persistent
> only for a few IgniteAtomiReferences.
>
> If we could set data region via AtomicConfiguration as we do for caches, or
> set different cache to store AtomicReference other than Ignite's system
> cache, this would be possible.
>
> Unfortunately AtomicConfiguration does not have this feature from what I
> see.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite in docker (Native Persistence)

2017-12-07 Thread Nikolai Tikhonov
Hello!

Yes, sure! I'll investigate this question and update our doc.
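
Until the docs are updated, a hedged sketch of the volume mapping discussed in this thread; the volume name and work-directory path are illustrative, so check the actual IGNITE_HOME inside the image you run:

```shell
# Create a named volume that outlives the container, then mount it over
# Ignite's work directory so persistence files survive restarts.
docker volume create ignite-work

docker run -d --name ignite-node \
  -v ignite-work:/opt/ignite/apache-ignite-fabric/work \
  apacheignite/ignite:2.3.0
```

On restart, starting a new container with the same `-v` mapping lets the node pick up its previous persistence files.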

On Thu, Dec 7, 2017 at 9:57 AM, Denis Magda  wrote:

> Nick,
>
> As one of Ignite docker maintainers, could you please investigate one how
> to map Ignite persistence to docker volumes:
> https://docs.docker.com/engine/admin/volumes/volumes/#
> use-a-read-only-volume
>
> and update the docker documentation:
> https://apacheignite.readme.io/docs/docker-deployment
>
> —
> Denis
>
> On Dec 1, 2017, at 8:40 AM, afedotov  wrote:
>
> Hi,
>
> You need to create volumes and map them to the ${IGNITE_HOME}/work/db or
> any
> other path that
> you might have specified via setPersistentStorePath. These volumes should
> outlive the Ignite containers
> and thus it will be possible to reuse them on restart.
>
> It's worth trying docker-compose, docker-swarm or kubernetes, depending on
> your needs.
> For example, Kubernetes provides a wide variety of volume options
> https://kubernetes.io/docs/concepts/storage/volumes/
>
>
> Has anyone use Ignite Native Persistence with Docker?
> Is there a solution on how to map the Volume dynamically? And how about
> when
> you restart the whole cluster, how does it maps all volumes?
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>


Re: DataStreamer vs CacheStoreAdapter

2017-12-06 Thread Nikolai Tikhonov
Hi,

If you plan to stream data from a single JVM, then the better way is to use
only one instance of DataStreamer. Of course, if you're going to start 10
different JVMs, then you'll use 10 instances of DataStreamer.
By default, DataStreamer won't overwrite existing entries in a cache. You
can change this behaviour via the IgniteDataStreamer#allowOverwrite(true)
method [1].

1. https://apacheignite.readme.io/docs#section-allow-overwrite
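
A minimal sketch of that pattern; the cache name and key/value types are illustrative, and actually running it requires a started Ignite node:

```java
// One streamer instance per cache, shared by all loader threads.
try (IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("Customer")) {
    // By default the streamer does NOT overwrite existing entries;
    // enable overwriting explicitly if you need upsert semantics:
    streamer.allowOverwrite(true);

    for (int i = 0; i < 1_000_000; i++)
        streamer.addData("key-" + i, "value-" + i);

    // close() (implicit via try-with-resources) flushes any remaining
    // buffered entries to the cluster.
}
```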

On Tue, Dec 5, 2017 at 8:32 PM, Lybrial  wrote:

> Hi,
>
> I guess I did not understand what you meant about the DataStreamer. When I
> have a distributed database (or maybe distributed files), I want to be able
> to load all of these resources simultaneously into the same cache, since
> there aren't duplicates; the resources are distinct from each other. So in
> my example I would not have one DataStreamer per cache, I would have 10
> DataStreamers for one cache.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Multi Data center replication issue

2017-12-06 Thread Nikolai Tikhonov
Hello!

Apache Ignite does not have this functionality, and the community is not
affiliated with that product. You need to ask these questions of the
company that provides these features.

On Wed, Dec 6, 2017 at 3:00 PM, bits1983_25  wrote:

> I am doing a POC on the multi data center replication feature of GridGain.
> I'm able to successfully execute the sample example packaged in the GridGain
> distribution. But when I tried to replicate multiple caches (around 150
> caches), I could see only 16 caches eligible for replication through VISOR
> UI. Please find the attached screenshot. So I'm only able to replicate
> these
> 16 caches. I have given all the cacheNames of 150 caches in the sender
> cache
> configuration.
>
> I get the feeling that GridGain has a local cache folder where this
> metadata is stored. Is there any local cache folder like that? Even
> though I made changes to the configuration files, it's not getting reflected
> in the VISOR UI.
> 
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: DataStreamer vs CacheStoreAdapter

2017-12-05 Thread Nikolai Tikhonov
Hello!

You're thinking in the right way.

In the first case the DataStreamer looks preferable. If I understood
correctly, you have a distributed database and need to do some preparation
before loading the data into Apache Ignite. In this case you can create the
needed number of DataStreamers (one DataStreamer per cache) and start
streaming data through them. You can safely share the instances between
threads; the DataStreamer API parallelizes the loading itself, so you don't
need to worry about that.

The second case is not clear to me. Are you going to split one table by
identifier into several caches? CacheStore should be used when your tables
map directly to Ignite caches. For example, you have a Person table and
plan to move all data from it into one IgniteCache.
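
A sketch of the CacheStore shape for that 1:1 table-to-cache case; Person, PersonCacheStore, and queryPersonById are hypothetical names and the JDBC plumbing is omitted:

```java
// Read-through / write-through adapter for a table mapped 1:1 to a cache.
public class PersonCacheStore extends CacheStoreAdapter<Long, Person> {
    @Override public Person load(Long key) {
        // SELECT ... FROM Person WHERE id = ?   (called on a cache miss)
        return queryPersonById(key);
    }

    @Override public void write(javax.cache.Cache.Entry<? extends Long, ? extends Person> e) {
        // INSERT or UPDATE the row              (called on cache put)
    }

    @Override public void delete(Object key) {
        // DELETE the row                        (called on cache remove)
    }

    private Person queryPersonById(Long key) {
        return null; // placeholder: a real implementation would hit the DB
    }
}
```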


On Tue, Dec 5, 2017 at 7:30 PM, Lybrial  wrote:

> Hello,
>
> im new to ignite and im not completely sure about the different use cases
> for `igeniteDataStreamer` and `CacheStoreAdapter`. In my application I have
> two different usecases but both rely on preloading data from a database
> into
> the ignite cache.
>
> 1. The first usecase is just some kind of bulk loading data into the cache.
> No transformations or additional work needed, just copy the whole database
> into the cache. I thought the `IgniteDataStreamer` would be the way to go
> here, is this correct? If yes: How could I optimize the data loading if my
> data is distributed between several databases. What If I have, lets say, 10
> databases with distinct data in each table but they have the same shema.
> Could I parallelize the dataloading so that all 10 databases are getting
> loaded into the cache at the same time.
>
> 2. The second usecase is that I load data from the database and group them
> into different caches meaning that for a given identifier (key) I will have
> a set of entries and every identifier will have its own cache. Does it make
> sense to use a `CacheStoreAdapter` here?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Problem with loading data

2017-12-05 Thread Nikolai Tikhonov
Hello!

It looks like the Web Console generated an incorrect schema for the
PositionCache cache. Can you share the CacheConfiguration and the schema for
the related table?


On Tue, Dec 5, 2017 at 5:00 PM, Ahmad Al-Masry  wrote:

> Hi;
> Want to to test Ignite to improve the performance of our reporting system.
> The datasource is MySQL.
> We used The Web Console to create the integration with MySQL and used the
> auto generated package from there to load the data.
> Some the tables have loaded successfully and the rest did not, and the
> following of some of the exceptions:
> 1- This Error gives SQL syntax error when trying to load Job table:
>
> Dec 05, 2017 3:54:32 PM org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Failed to obtain remote job result policy for result from
> ComputeTask.result(..) method (will fail the whole task): GridJobResultImpl
> [job=C2 [c=LoadCacheJobV2 [keepBinary=false]], sib=GridJobSiblingImpl
> [sesId=88b94f62061-b5688ca0-691e-4982-84e7-9f5cc596a8b7,
> jobId=98b94f62061-b5688ca0-691e-4982-84e7-9f5cc596a8b7,
> nodeId=705cc1c6-2965-4700-83b3-0fc9671a70e0, isJobDone=false],
> jobCtx=GridJobContextImpl 
> [jobId=98b94f62061-b5688ca0-691e-4982-84e7-9f5cc596a8b7,
> timeoutObj=null, attrs={}], node=TcpDiscoveryNode
> [id=705cc1c6-2965-4700-83b3-0fc9671a70e0, addrs=[0:0:0:0:0:0:0:1%lo,
> 10.10.11.31, 127.0.0.1], sockAddrs=[ip-10-10-11-31.ec2.internal/
> 10.10.11.31:47500, 0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500],
> discPort=47500, order=1, intOrder=1, lastExchangeTime=1512482063467,
> loc=false, ver=2.3.0#19700101-sha1:, isClient=false], ex=class
> o.a.i.IgniteException: Failed to load cache: JobApplicationCache,
> hasRes=true, isCancelled=false, isOccupied=true]
> class org.apache.ignite.IgniteException: Remote job threw user exception
> (override or implement ComputeTask.result(..) method if you would like to
> have automatic failover for this exception).
> at org.apache.ignite.compute.ComputeTaskAdapter.result(
> ComputeTaskAdapter.java:101)
> at org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(
> GridTaskWorker.java:1047)
> at org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(
> GridTaskWorker.java:1040)
> at org.apache.ignite.internal.util.IgniteUtils.
> wrapThreadLoader(IgniteUtils.java:6663)
> at org.apache.ignite.internal.processors.task.GridTaskWorker.result(
> GridTaskWorker.java:1040)
> at org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(
> GridTaskWorker.java:858)
> at org.apache.ignite.internal.processors.task.GridTaskProcessor.
> processJobExecuteResponse(GridTaskProcessor.java:1066)
> at org.apache.ignite.internal.processors.task.GridTaskProcessor$
> JobMessageListener.onMessage(GridTaskProcessor.java:1301)
> at org.apache.ignite.internal.managers.communication.
> GridIoManager.invokeListener(GridIoManager.java:1555)
> at org.apache.ignite.internal.managers.communication.GridIoManager.
> processRegularMessage0(GridIoManager.java:1183)
> at org.apache.ignite.internal.managers.communication.
> GridIoManager.access$4200(GridIoManager.java:126)
> at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(
> GridIoManager.java:1090)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1149)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: class org.apache.ignite.IgniteException: Failed to load cache:
> JobApplicationCache
> at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.
> execute(GridClosureProcessor.java:1858)
> at org.apache.ignite.internal.processors.job.GridJobWorker$
> 2.call(GridJobWorker.java:566)
> at org.apache.ignite.internal.util.IgniteUtils.
> wrapThreadLoader(IgniteUtils.java:6631)
> at org.apache.ignite.internal.processors.job.GridJobWorker.
> execute0(GridJobWorker.java:560)
> at org.apache.ignite.internal.processors.job.GridJobWorker.
> body(GridJobWorker.java:489)
> at org.apache.ignite.internal.util.worker.GridWorker.run(
> GridWorker.java:110)
> at org.apache.ignite.internal.processors.job.GridJobProcessor.
> processJobExecuteRequest(GridJobProcessor.java:1115)
> at org.apache.ignite.internal.processors.job.GridJobProcessor$
> JobExecutionListener.onMessage(GridJobProcessor.java:1913)
> ... 7 more
> Caused by: class org.apache.ignite.IgniteException: Failed to load cache:
> JobApplicationCache
> at org.apache.ignite.internal.util.IgniteUtils.
> convertException(IgniteUtils.java:966)
> at org.apache.ignite.internal.processors.cache.
> GridCacheAdapter$LoadCacheJob.localExecute(GridCacheAdapter.java:5472)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter$
> LoadCacheJobV2.localExecute(GridCacheAdapter.java:5516)
> at org.apache.ignite.internal.processors.cache.GridCacheAdapter$
> TopologyVersionAwareJob.execute(GridCacheAdapter.java:6131)
> at org.apache.ignite.compute.ComputeJobAdapter.call(
> 

Re: Student Blog about Apache Ignite & Questions how to efficiently handle data

2017-12-05 Thread Nikolai Tikhonov
Hello Sven!

Glad to hear that you solved the problem yourself!
Anyway, if you can share a code snippet that shows how you use the Kafka
streamer, the community can provide some additional suggestions.

On Tue, Dec 5, 2017 at 2:45 PM, svonn  wrote:

> I solved the key issue with a singleTupleExtractor - For both GpsPoints and
> AccelerationPoints I'm simply adding a hashvalue over the Object.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: fabric8 ignite-service.yaml

2017-12-05 Thread Nikolai Tikhonov
Hello,

I haven't seen users run Ignite with the fabric8-maven-plugin.
Anyway, feel free to share your experience with the community. ;)

On Tue, Dec 5, 2017 at 11:57 AM, Humphrey  wrote:

> Has anyone used fabric8 in combination with Ignite to deploy ignite
> services
> and discovery using the TcpDiscoveryKubernetesIpFinder? I'm looking for a
> way to define my ignite-service.yaml file in my project and configure it
> with the fabric8-maven-plugin so it will automatically be deployed (and
> installed) with my project.
> I know I can just run manually kubectl create -f ignite-service.yaml but is
> there a way to do that through the command: mvn fabric8:deploy ?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Semaphore Stuck when no acquirers to assign permit

2017-12-05 Thread Nikolai Tikhonov
Tim,

Thank you for your contribution! I'll look at your changes and leave my
comments on the JIRA ticket.

On Tue, Dec 5, 2017 at 6:18 AM, Timay  wrote:

> From what I found, it looks like the DataStructuresProcessor EventListener
> gets invoked after the dsMap has been cleared, which prevents
> onNodeRemoved from being invoked. I created a pull request which will
> invoke the onNodeRemove from the stop method. Also added my test to the data
>
> Please take a look, and let me know what your thoughts are on it.
>
> pull request: https://github.com/apache/ignite/pull/3138
> jira: https://issues.apache.org/jira/browse/IGNITE-7090
>
> Tim
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Semaphore Stuck when no acquirers to assign permit

2017-12-04 Thread Nikolai Tikhonov
Hi Tim!

Yes, it looks like a bug. Thank you for the investigation!
Feel free to contribute. ;)

On Fri, Dec 1, 2017 at 8:39 PM, Timay  wrote:

> Hey all,
>
> We experienced an issue when trying to establish a semaphore after a
> single-instance client node goes down hard (kill -9), after which we cannot
> acquire a permit on the existing semaphore. However, if the client is
> redundant, the permit is transferred successfully.
>
> I created a modified test of the SemaphoreFailoverSafeReleasePermitsTest,
> which will close the initial semephore ignite instance then try and acquire
> a permit and fail.
>
> SemaphoreFailoverNoWaitingAcquirerTest.java
>  SemaphoreFailoverNoWaitingAcquirerTest.java>
>
> I created a jira (https://issues.apache.org/jira/browse/IGNITE-7090) to
> track as well, may try and dig further but wanted to get it out to the
> group.
>
> Tim
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Regards the ContinuousQuery and MutableCacheEntryListenerConfiguration

2017-12-01 Thread Nikolai Tikhonov
Hi Aaron!

You close the ContinuousQuery on this line:

try (QueryCursor<...> cur = accountCache.query(query)) {

When you call the QueryCursor#close() method, the listener stops receiving
updates, and try-with-resources calls close() implicitly. Just remove the
`try` and it will work as you expected.
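
A sketch of the corrected pattern, assuming String/AccountEntry as the key/value types from the quoted code:

```java
// Keep the cursor referenced and open for as long as updates are wanted.
ContinuousQuery<String, AccountEntry> query = new ContinuousQuery<>();
query.setLocalListener(new DataCreateUpdateListener());
query.setInitialQuery(new ScanQuery<>(new ScanDataFilter()));

QueryCursor<Cache.Entry<String, AccountEntry>> cur = accountCache.query(query);

// Iterate the initial-query results once...
for (Cache.Entry<String, AccountEntry> row : cur)
    processUpdate(row.getValue());

// ...but do NOT close the cursor here: the local listener keeps receiving
// updates until cur.close() is called (for example, at shutdown).
```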

On Wed, Nov 29, 2017 at 2:56 PM, aa...@tophold.com 
wrote:

>
> BTW, when I use SQL to update the cache it does not trigger the
> ContinuousQuery, while if I update entries one by one it seems to work.
>
> Is this the reason?
>
> SqlFieldsQuery update = new 
> SqlFieldsQuery(UPDATE).setArgs(Utils.utcEpochMills())
> .setTimeout(20, TimeUnit.SECONDS)
> .setCollocated(true)
> .setLocal(true);
>
>
> Regards
> Aaron
> --
> aa...@tophold.com
>
>
> *From:* aa...@tophold.com
> *Date:* 2017-11-29 19:22
> *To:* user 
> *Subject:* Regards the ContinuousQuery and
> MutableCacheEntryListenerConfiguration
> hi All,
>
> We use exactly the same configuration, with the same CacheEntryListener and
> CacheEntryEventFilter, in both the ContinuousQuery and the
> MutableCacheEntryListenerConfiguration.
>
> But the ContinuousQuery never seems to keep triggering any events, while
> the MutableCacheEntryListenerConfiguration does keep triggering them.
>
> Also, the ContinuousQuery has no interface to disable including the old
> value.
>
>
> This can not work even after the cache update
>
> final ContinuousQuery query = new ContinuousQuery<>();
> query.setLocal(true);
> query.setPageSize(1);
> query.setTimeInterval(2_000);
> final ScanQuery scanQuery = new ScanQuery<>(new 
> ScanDataFilter());
> scanQuery.setLocal(true);
> query.setInitialQuery(scanQuery);
> query.setLocalListener(new DataCreateUpdateListener());
> query.setRemoteFilterFactory(new CacheEntryEventFilterFactory());
> try (QueryCursor> cur = 
> accountCache.query(query)) {
> for (Cache.Entry row : cur) {
> processUpdate(row.getValue());
> }
> }
>
>
> This can not work after cache updates trigger
>
> MutableCacheEntryListenerConfiguration 
> mutableCacheEntryListenerConfiguration = new 
> MutableCacheEntryListenerConfiguration(
> new Factory>() {
> private static final long serialVersionUID = 5358838258503369206L;
> @Override
> public CacheEntryListener create() {
> return new DataCreateUpdateListener();
> }
> },
> new CacheEntryEventFilterFactory(),
> false,
> true
> );
> ignite. AccountEntry>cache(AccountEntry.IG_CACHE_NAME).registerCacheEntryListener(mutableCacheEntryListenerConfiguration);
>
>
> did I configuration something wrong?  thanks for your advice!
>
>
> Regards
> Aaron
> --
> aa...@tophold.com
>
>


Re: Ignite SpringTransactionManager not rolling back cache changes

2017-11-27 Thread Nikolai Tikhonov
Hi,

It seems like a misconfiguration. Could you share your cache configuration
and double-check that you set CacheConfiguration#setAtomicityMode to
TRANSACTIONAL instead of ATOMIC, which is used by default?
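
A minimal fragment in the Spring XML style used elsewhere in this digest; the cache name is illustrative:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myTxCache"/>
    <!-- Without TRANSACTIONAL, puts are ATOMIC and cannot be rolled back. -->
    <property name="atomicityMode" value="TRANSACTIONAL"/>
</bean>
```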


On Sat, Nov 25, 2017 at 9:19 AM, Sumanta Ghosh 
wrote:

> Hi,
> I am using Ignite's SpringTransactionManager in my Spring Boot application
> along with spring-data-jpa, chained with Spring's JpaTransactionManager
> using Spring Data's ChainedTransactionManager class. Following is my
> configuration:
>
> @Bean
> @Primary
> public PlatformTransactionManager
> transactionManager(@Qualifier("igniteSpringTxnManager")
> SpringTransactionManager igniteSpringTxnManager,
> EntityManagerFactory factory) throws Exception {
> return new ChainedTransactionManager(igniteSpringTxnManager,
> new
> JpaTransactionManager(factory));
> //return igniteSpringTxnManager;
> }
>
> This works fine for successful DB transactions; however, whenever there is
> an error (e.g. DB error for column length mismatch), I am seeing the cache
> entry is not rolled back even if ignite's SpringTransactionManager displays
> the following in the log
>
> 2017-11-25 11:46:57.780 DEBUG 3556 --- [nio-8443-exec-4]
> o.a.i.t.spring.SpringTransactionManager  : Initiating transaction rollback
>
> So, it seems that even though the log says it is rolling back, the rollback
> is not actually happening.
> Can you please help me identifying the issue?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Initial query resent the data when client got reconnect

2017-11-17 Thread Nikolai Tikhonov
Hello,

When a node disconnects from the cluster, the server nodes close the query
listener and lose the information about which updates were already sent to
the client. You should run the query again to avoid this situation. Server
nodes don't keep this information when a node leaves the cluster because it
could lead to high memory consumption.
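
A hedged sketch of re-running the query after a reconnect; resubscribe() is a hypothetical method that closes the stale cursor and registers the continuous query again, and event types may need to be enabled via IgniteConfiguration#setIncludedEventTypes:

```java
// EVT_CLIENT_NODE_RECONNECTED fires locally on the client when it rejoins.
ignite.events().localListen(evt -> {
    resubscribe();   // close the old cursor, run the continuous query again
    return true;     // keep this listener registered
}, EventType.EVT_CLIENT_NODE_RECONNECTED);
```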

On Fri, Nov 17, 2017 at 3:11 AM, gunman524  wrote:

> Alexey, thanks for the reply.
>
> I wondered: does Ignite set an incremental value for incoming data that
> could help people do the kind of thing you said?
>
> You know, for isolated data pushers it is not easy to generate a unique
> incremental value. I know many distributed data stores, like Elasticsearch
> and MongoDB, have such a built-in field.
>
> So, how about Ignite: do we already have one that I just don't know about?
>
> Thanks,
>
> Gin
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Using event to reconnect spring created cache client to server

2017-11-16 Thread Nikolai Tikhonov
Alternatively, you can annotate your listener or filter class with the
@IgniteAsyncCallback annotation. In that case the callback will be called
from another thread.

On Thu, Nov 16, 2017 at 7:13 PM, ezhuravlev 
wrote:

> gunman524, I didn't say anything like that.
>
> I said that if you want to access cache from CQ, you need to start a new
> thread
>
> Evgenii
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ignite 2.3.0 docker image contains 2.2.0 files.

2017-11-03 Thread Nikolai Tikhonov
Good catch, thank you!

The latest image contains 2.3.0 (docker pull apacheignite/ignite), but the
image with the 2.3.0 tag contained the binary files for version 2.2.0. I've
fixed it.

On Fri, Nov 3, 2017 at 8:13 AM, Denis Magda  wrote:

> Nick, Vovan,
>
> Have we really upgraded docker and the other images to 2.3?
>
> —
> Denis
>
> Begin forwarded message:
>
> *From: *bjason 
> *Subject: **ignite 2.3.0 docker image contains 2.2.0 files.*
> *Date: *November 2, 2017 at 3:37:34 PM PDT
> *To: *user@ignite.apache.org
> *Reply-To: *user@ignite.apache.org
>
>
>
> ignite 2.3.0 docker image contains 2.2.0 files. Please check.
>
> $ docker pull apacheignite/ignite:2.3.0
> 2.3.0: Pulling from apacheignite/ignite
> 3e17c6eae66c: Pull complete
> 74d44b20f851: Pull complete
> a156217f3fa4: Pull complete
> 4a1ed13b6faa: Pull complete
> 77980e5d0a6d: Pull complete
> 5458607a81d3: Pull complete
> e34cf8338f42: Pull complete
> 2f3d3da5c56e: Pull complete
> 2ade7a861e3f: Pull complete
> 686e6ce078d5: Pull complete
> f1d36075868f: Pull complete
> 2131367cd2fc: Pull complete
> 4c80ef6fe713: Pull complete
> ebe7c3987073: Pull complete
> Digest:
> sha256:78c6cca73d8a360d4705a8dbe722f386b90cdf6115a247c59753b796254a9116
> Status: Downloaded newer image for apacheignite/ignite:2.3.0
>
> $ sudo docker run -it --net=host -e
> "CONFIG_URI=https://raw.githubusercontent.com/apache/
> ignite/master/examples/config/example-cache.xml"
> apacheignite/ignite:2.3.0
> /opt/ignite/*apache-ignite-fabric-2.2.0-bin*/bin/ignite.sh, WARN: Failed
> to
> resolve JMX host (JMX will be disabled): moby
> [22:09:46]__  
> [22:09:46]   /  _/ ___/ |/ /  _/_  __/ __/
> [22:09:46]  _/ // (7 7// /  / / / _/
> [22:09:46] /___/\___/_/|_/___/ /_/ /___/
>
>
> $ docker ps
> CONTAINER IDIMAGE   COMMAND
> CREATED STATUS  PORTS   NAMES
> db37561bc87eapacheignite/ignite:2.3.0   "/bin/sh -c $IGNIT..."
>   22
> seconds ago  Up 20 seconds   sleepy_panini
>
> $ docker exec -it sudo docker run -it --net=host -e
> "CONFIG_URI=https://raw.githubusercontent.com/apache/
> ignite/master/examples/config/example-cache.xml"
> apacheignite/ignite
> $ docker exec -it db37561bc87e bash
>
> root@moby:/opt/ignite# ls -l
> total 4
> drwxr-xr-x 1 root root 4096 Nov  2 22:09 *apache-ignite-fabric-2.2.0-bin*
> root@moby:/opt/ignite#
>
>
>
>
>
>


Re: Error : Commit produced a runtime exception

2017-10-25 Thread Nikolai Tikhonov
Hi,

Sorry, for late answer.

It's a known usability problem (an Apache Ignite node requests 80% of the
operating system's RAM) that was fixed in version 2.2. [1][2] Could you
update to the latest version and check again?

Please ignore this if it is no longer relevant.

1
http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-5717-improvements-of-MemoryPolicy-default-size-td20264.html
2
http://apache-ignite-developers.2346864.n4.nabble.com/DISCUSSION-Urgent-Ignite-bug-fix-release-td21292.html#a21307



On Mon, Jul 31, 2017 at 4:03 PM, iostream  wrote:

> No I have not set any memory configuration
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Error-Commit-produced-a-runtime-
> exception-tp15768p15824.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Client Near Cache Configuration Lost after Cluster Node Removed

2017-10-23 Thread Nikolai Tikhonov
Hello,

Could you explain how you determined that the client node loads data from a
remote node, bypassing the near cache?
I'm not able to reproduce this behaviour locally. Could you share a simple
Maven project that reproduces it?

On Tue, Oct 17, 2017 at 12:54 AM, torjt  wrote:

> Hello All,
>
> We are having an issue with Ignite client near caches being "lost" upon a
> cluster node being removed from the Ignite cluster.  Furthermore, using
> version 2.1.0, we are seeing this issue when another client joins the
> topology.  I built Ignite from GIT today, 10/16/17, with the latest
> changes,
> ver. 2.3.0-SNAPSHOT.  As of version 2.3.0-SNAPSHOT, bringing clients
> up/down
> does not cause an active client to lose its near cache and performance is
> good.  However, when we remove a node from the cluster, the client
> immediately communicates with the cluster and disregards its near cache.
> Restarting the client remedies the issue.
>
> The following are the steps to reproduce the issue:
> Apache Ignite Version:
> *SNAPSHOT#20171016-sha1:ca6662bcb4eecc62493e2e25a572ed0b982c046c*
> 1.  Start 2 Ignite servers
> 2.  Start client with caches configured as near-cache
> 3.  Access caches
> 4.  Stop node client is connected to
> 4a.  Client immediately bypasses near cache and access "cluster" for cache
> miss
>
>
>
>
>
>


Re: Trouble to connect to ignite cluster on kubernetes

2017-10-02 Thread Nikolai Tikhonov
Hi Anton!

An Apache Ignite cluster requires that all nodes be able to communicate
with each other directly. If I understood correctly, Kubernetes starts up
internal network interfaces, so this environment is similar to running
Apache Ignite behind a proxy. In this case you need to configure a
BasicAddressResolver that maps the internal IP addresses to external hosts.
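A minimal sketch of that configuration (the addresses and ports are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.ignite.configuration.BasicAddressResolver;
import org.apache.ignite.configuration.IgniteConfiguration;

public class AddressResolverSketch {
    static IgniteConfiguration config() throws Exception {
        // Map pod-internal addresses to the externally reachable host/ports.
        Map<String, String> map = new HashMap<>();
        map.put("10.0.0.1", "203.0.113.10");               // internal IP -> external IP
        map.put("10.0.0.1:47500", "203.0.113.10:30500");   // per-port mapping, if ports are forwarded

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setAddressResolver(new BasicAddressResolver(map));
        return cfg;
    }
}
```

The same resolver must be configured on the server nodes inside Kubernetes, so that the addresses they advertise are reachable from outside.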

On Mon, Oct 2, 2017 at 12:13 PM, Anton Mushin  wrote:

> Hi everyone!
>
>
>
> Could you tell me the correct way to connect to an Ignite cluster on
> Kubernetes?
>
> I'm using Ignite version 2.2.0 and trying to connect to the cluster from
> my local machine with the following configuration:
>
>
>
> private IgniteConfiguration getConfig(){
>
> TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder(false);
>
> ipFinder.setAddresses(Arrays.asList("kuber_external_host_addr", "
> kuber_external_host_addr:forwarded_port"));
>
>
>
> TcpCommunicationSpi commSpi=new TcpCommunicationSpi();
>
> commSpi.setSharedMemoryPort(-1);
>
>
>
> TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
>
> tcpDiscoverySpi.setIpFinder(ipFinder);
>
> tcpDiscoverySpi.setNetworkTimeout(TcpDiscoverySpi.DFLT_NETWORK_TIMEOUT
> *3);
>
>
>
> return new IgniteConfiguration()
>
> .setDiscoverySpi(tcpDiscoverySpi)
>
>.setIgniteInstanceName(UUID.randomUUID().toString())
>
>.setCommunicationSpi(commSpi);
>
> }
>
>
>
> I use the default configuration for the nodes in the cluster.
>
> When I try to connect to the cluster, I get this error on my local machine:
>
>
>
> [SEVERE][main][TcpDiscoverySpi] Exception on direct send: Connection
> refused: connect
>
> java.net.ConnectException: Connection refused: connect
>
> at java.net.DualStackPlainSocketImpl.waitForConnect(Native
> Method)
>
> at java.net.DualStackPlainSocketI
> mpl.socketConnect(DualStackPlainSocketImpl.java:85)
>
> at java.net.AbstractPlainSocketIm
> pl.doConnect(AbstractPlainSocketImpl.java:350)
>
> at java.net.AbstractPlainSocketIm
> pl.connectToAddress(AbstractPlainSocketImpl.java:206)
>
> at java.net.AbstractPlainSocketIm
> pl.connect(AbstractPlainSocketImpl.java:188)
>
> at java.net.PlainSocketImpl.conne
> ct(PlainSocketImpl.java:172)
>
> at java.net.SocksSocketImpl.conne
> ct(SocksSocketImpl.java:392)
>
> at java.net.Socket.connect(Socket.java:589)
>
> at org.apache.ignite.spi.discover
> y.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1376)
>
> at org.apache.ignite.spi.discover
> y.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1339)
>
> at org.apache.ignite.spi.discover
> y.tcp.ServerImpl.sendMessageDirectly(ServerImpl.java:1159)
>
> at org.apache.ignite.spi.discover
> y.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:1006)
>
> at org.apache.ignite.spi.discover
> y.tcp.ServerImpl.joinTopology(ServerImpl.java:851)
>
> at org.apache.ignite.spi.discover
> y.tcp.ServerImpl.spiStart(ServerImpl.java:358)
>
> at org.apache.ignite.spi.discover
> y.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1834)
>
> at org.apache.ignite.internal.man
> agers.GridManagerAdapter.startSpi(GridManagerAdapter.java:297)
>
> at org.apache.ignite.internal.man
> agers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:842)
>
> at org.apache.ignite.internal.Ign
> iteKernal.startManager(IgniteKernal.java:1786)
>
> at org.apache.ignite.internal.Ign
> iteKernal.start(IgniteKernal.java:978)
>
> at org.apache.ignite.internal.Ign
> itionEx$IgniteNamedInstance.start0(IgnitionEx.java:1896)
>
> at org.apache.ignite.internal.Ign
> itionEx$IgniteNamedInstance.start(IgnitionEx.java:1648)
>
> at org.apache.ignite.internal.Ign
> itionEx.start0(IgnitionEx.java:1076)
>
> at org.apache.ignite.internal.Ign
> itionEx.start(IgnitionEx.java:596)
>
> at org.apache.ignite.internal.Ign
> itionEx.start(IgnitionEx.java:520)
>
> at org.apache.ignite.Ignition.start(Ignition.java:322)
>
>
>
> And at that moment, in the cluster logs, I see a new server node connect
> to the cluster group and then immediately disconnect.
>
>
>


Re: computation on view

2017-10-02 Thread Nikolai Tikhonov
Hi,

1. The best way to load data into Apache Ignite is via the DataStreamer. [1]
I also highly recommend configuring data collocation properly. That way the
employees of one organization are placed on the same node, which improves
join performance. [2]

2. You can create a separate cache that contains the aggregated data
(similar to a database view), subscribe to update events, and update that
cache accordingly.

3. Apache Ignite does support these SQL functions, and I expect you can run
them against your caches with minimal changes and acceptable performance.
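For point 2, a rough sketch (the cache names and the aggregation are illustrative) of keeping such a "view" cache up to date with a continuous query:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;

public class ViewCacheSketch {
    // Maintains a per-department employee count in a separate cache,
    // updated whenever an entry is added to the employee cache.
    static void maintainView(Ignite ignite) {
        IgniteCache<Integer, Integer> employeeDept = ignite.cache("employeeDept"); // empId -> deptId
        IgniteCache<Integer, Long> deptCounts = ignite.getOrCreateCache("deptCounts");

        ContinuousQuery<Integer, Integer> qry = new ContinuousQuery<>();
        qry.setLocalListener(evts -> evts.forEach(e ->
            // Atomically bump the counter for the affected department.
            deptCounts.invoke(e.getValue(), (entry, args) -> {
                Long cur = entry.getValue();
                entry.setValue(cur == null ? 1L : cur + 1L);
                return null;
            })));
        employeeDept.query(qry);
    }
}
```

Queries and "group by"-style reports can then read from the aggregate cache instead of re-joining the base tables each time.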


On Sun, Oct 1, 2017 at 2:25 PM, James <2305958...@qq.com> wrote:

> I want to generate reports as HTML tables in Ignite, as I do in a
> traditional database:
> 1. Load all data, such as employee, address, department and organization,
> from the database into Ignite. The primary keys and foreign keys are
> still kept in Ignite.
> 2. In order to do some computation and reporting, I first need to use a
> lot of joins to generate a set of data as a view, as in a traditional
> database. How do I create a view in Ignite?
> 3. On the data in the above view, I need to query again to do further
> computations, such as a "group by" to find a maximum value.
>
> What is best approach in Ignite?
>
> Thanks,
>
> James
>
>
>
>


Re: How do I support schema change in Ignite as well as addition of new Cache?

2017-10-02 Thread Nikolai Tikhonov
Hi,

You can create and drop caches (and hence tables) at runtime. If the
classes are not available at runtime, you can configure the caches via
QueryEntity. Refer to the *QueryEntity Based Configuration* section on the
following page: https://apacheignite.readme.io/docs/indexes. The upcoming
2.3 release (which the community plans to publish in the near future) will
support the ALTER TABLE ... ADD COLUMN command. You can track the status of
these features via the following links:

https://issues.apache.org/jira/browse/IGNITE-5572
https://issues.apache.org/jira/browse/IGNITE-6283
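The QueryEntity-based configuration can be sketched in Java like this (the table and field names are illustrative):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.configuration.CacheConfiguration;

public class QueryEntitySketch {
    // Declares an SQL-queryable "Person" table without needing a Person
    // class on the classpath; values are stored as BinaryObjects.
    static CacheConfiguration<Integer, Object> personCache() {
        QueryEntity entity = new QueryEntity("java.lang.Integer", "Person");
        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("name", "java.lang.String");
        fields.put("salary", "java.lang.Double");
        entity.setFields(fields);

        CacheConfiguration<Integer, Object> cfg = new CacheConfiguration<>("personCache");
        cfg.setQueryEntities(Collections.singletonList(entity));
        return cfg;
    }
}
```

Entries can then be built with the BinaryObjectBuilder API and queried with plain SQL, all without a compiled value class.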


On Sat, Sep 30, 2017 at 12:18 PM, Sumit Sethia 
wrote:

> Hi,
> I want to dynamically create cache for new mysql table in Ignite. I don't
> have table's class definition at runtime. I regularly get new data in HDFS
> for those tables in avro format. I want to ingest that incremental data
> into Ignite Cache by creating cache at runtime if it doesn't exist for that
> table and then put data into cache. Also I want to query on that data. Plus
> what happens if schema changes in MySql ? Will BinaryObject in Ignite help
> in solving my use case ? I tried to read Binary Object documentation but
> couldn't get it. Please help.
>
> Thanks,
> Sumit.
>


Re: chunkSize

2017-09-29 Thread Nikolai Tikhonov
Hello,

Apache Ignite calculates the chunk size itself; you shouldn't configure it
manually. I suspect it isn't the root cause of your problem. Can you share
the full Ignite configuration (including the cache configurations)?

On Fri, Sep 29, 2017 at 12:03 PM, mhetea  wrote:

> Hello,
> We use ignite 2.2.0 (we migrated from 2.1.0)
> We have 3 Ignite nodes, each with 32GB of memory.
> We are loading a large amount of data (approx. 5GB) onto one node.
> The chunkSize is calculated at 1.94GB. Sometimes we get Connection reset by
> peer on a node and the caches are closed afterwards.
> Can this be due to the large chunkSize? (like node1 is trying to send to
> node2 1.94gb of data?)
> If yes, is there a way to fix that, like setting a smaller chunkSize?
> We tried with memory policies, but the chunkSize is always calculated. On
> ignite 2.1.0 the chunkSize was calculated at 1.34GB and we didn't have any
> issue.
> Thank you.
>
>
>
>


Re: underscores in Ignite

2017-09-28 Thread Nikolai Tikhonov
Hi,

As far as I know, the Cassandra integration allows you to set a custom
mapping. You can find an example in the docs (Example 5):

On Thu, Sep 28, 2017 at 3:37 PM, elopez779 
wrote:

> Dear experts:
>
> I'm developing a Java app that has to create two caches from two simple
> tables from Cassandra 2.2.9. The database is not designed by my team, nor
> even my enterprise.
>
> The problem is that in one of the tables, one of the fields has an
> underscore character (location_info). In the other table it is the name
> of the table (Network_ids). Ignite removes the underscores. Is there any
> suggestion about how to cache the table?
>
> I haven't used any kind of import tooling. The POJO classes were created
> by me.
> The POJO class of the first table, for example, is:
>
> package org.bridging.ignite;
>
> import java.io.Serializable;
> import java.util.Map;
>
> public class PojoAccount implements Serializable {
>
> private Map location_info;
> private String status;
>
> public PojoAccount() {
> super();
> }
>
> public PojoAccount(Map loc_info, String st) {
> this.location_info = loc_info;
> this.status = st;
> }
>
> public Map getLocationInfo() { return
> location_info; }
> public String getStatus() { return status; }
>
> public void setLocationInfo(Map loc_info) {
> location_info =
> loc_info; }
> public void setStatus(String st) { status = st; }
>
> public String toString() {
> return status + " ; " + location_info.toString();
> }
> }
>
> Thanks in advance.
>
> Enrique
>
>
>
>
>
>
>
>
>


Re: unsubscribe

2017-09-28 Thread Nikolai Tikhonov
Hi.

To unsubscribe from the user mailing list, send a message to
user-unsubscribe@ignite.apache.org with the word "Unsubscribe" (without
quotes) as the subject. If you use a mail client, you can follow the
unsubscribe link here:
https://ignite.apache.org/community/resources.html#mail-lists


On Thu, Sep 28, 2017 at 2:12 AM, Ivan Zeng  wrote:

> unsubscribe
>


Re: Full table scan query by ODBC causing node shutdown

2017-09-27 Thread Nikolai Tikhonov
The CacheConfiguration.queryParallelism parameter affects all queries
executed against that cache; it doesn't matter which API you use. I expect
that increasing the heap size and tuning the GC will bring a significant
performance improvement in this case.
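For reference, a sketch of setting the parameter; since it is a property of the cache itself, it applies regardless of whether the query arrives via ODBC, JDBC, or the Java API (the cache name is illustrative):

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class ParallelismSketch {
    static CacheConfiguration<Object, Object> cacheConfig() {
        CacheConfiguration<Object, Object> cfg = new CacheConfiguration<>("mytable");
        // Split each SQL query over this cache across 4 threads per node.
        // Must be set when the cache is created, not per query.
        cfg.setQueryParallelism(4);
        return cfg;
    }
}
```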

Thanks,
Nikolai

On Wed, Sep 27, 2017 at 6:11 PM, Ray  wrote:

> Ok, I'll try increasing the heap size.
>
> One more question here, from the log it says the full table scan query is
> taking 30s.
> And I wonder is there any way to speed the query up?
> I found this article
> https://apacheignite.readme.io/v2.2/docs/sql-performance-
> and-debugging#section-query-parallelism
>
> And my question is if I set CacheConfiguration.queryParallelism parameter
> when I ingest the data, will it take effect when I query the data by ODBC?
> Or is it only effective querying the data by Java API with
> CacheConfiguration.queryParallelism specified?
>
> Thanks
>
>
>
>


Re: Full table scan query by ODBC causing node shutdown

2017-09-27 Thread Nikolai Tikhonov
Hello!

It seems that you don't have enough memory to load the whole data set into
memory. Ignite materializes the whole ResultSet in memory when executing a
SQL query. From the logs, these are the metrics just before the query
execution:

Metrics for local node (to disable set 'metricsLogFrequency' to 0)
...
^-- Heap [used=5054MB, free=35.88%, comm=7882MB]

I suppose the node left the topology due to long GC pauses. This is a known
limitation, and the behaviour should change in the 2.3 release. More
details can be found here:
https://issues.apache.org/jira/browse/IGNITE-5991. As a workaround, you can
try increasing the heap size on the server nodes.

On Wed, Sep 27, 2017 at 3:02 PM, Ray  wrote:

> I have a cache with 6 million rows.
> The cache is configured with Partitioned cache mode and 3 backups.
> And I'm running Ignite 2.1 on 4 nodes and each have a 8Gb heap size and
> 30Gb
> non-heap size configured.
> When I'm trying to fetch all the rows using "select * from mytable" by odbc
> driver on windows, the node I'm querying will throw exception and
> eventually
> shut down.
>
> The debug log is in the attachment, the sql query begins at line 2579.
>
> Please advice me how to solve this problem
> Thanks
> ignite-6593a74d.log
>  t1346/ignite-6593a74d.log>
>
>
>
>


Re: Client Mode and client to client communication

2017-09-27 Thread Nikolai Tikhonov
Hi John!

In the current architecture, the nodes in a cluster (including clients)
communicate with each other directly, and this behaviour can't be changed.
If client nodes need to interact, they open a direct connection to each
other. In your case you need to exclude the client nodes from the
ExecutorService. To do that, use the following:

*ExecutorService srv =
ignite.executorService(ignite.cluster().forServers());*

On Thu, Sep 21, 2017 at 3:37 PM, ilya.kasnacheev 
wrote:

> Hello John!
>
> Why don't you promote your clients to servers?
>
> In Ignite, it is possible that only your dedicated servers will contain
> caches data, while other servers will participate in cluster without
> storing
> data. You can set up custom Cluster Groups / server roles for that. For
> every cache you can specify nodes that this cache will be started on, by
> setting nodeFilter property on cache configuration.
>
> Please refer to https://apacheignite.readme.io/docs/cluster-groups and
> https://ignite.apache.org/releases/latest/javadoc/org/
> apache/ignite/configuration/CacheConfiguration.html#getNodeFilter()
>
> Regards,
>
>
>
>


Re: Ignite Context failing with java.lang.NullPointerException: Ouch! Argument cannot be null: cfg

2017-09-26 Thread Nikolai Tikhonov
Hello,

This error looks strange; judging by the code, the configuration should not
be null. Could you try changing your code in the following fashion?

*val igniteContext = new IgniteContext(spark.sparkContext, () ⇒
configuration, standalone = false)*

*def configuration(): IgniteConfiguration = {*
*  val config = new IgniteConfiguration()*
*  val tcpDiscoverySpi = new TcpDiscoverySpi()*
*  val ipFinder = new TcpDiscoveryVmIpFinder()*
*  ipFinder.setAddresses(*
*util.Arrays.asList(*
*  "server1-ip",*
*  "server2-ip",*
*  "server3-ip",*
*  "server4-ip",*
*  "server5-ip:47500"))*

*  tcpDiscoverySpi.setIpFinder(ipFinder)*
*  config.setDiscoverySpi(tcpDiscoverySpi)*

*  config*
*}*


On Fri, Sep 22, 2017 at 1:09 AM, pradeepchanumolu 
wrote:

> I am hitting the following exception when running Ignite with Spark on
> Yarn.
> Here is the snippet of the code.
> The same job runs fine in spark local mode (spark-master: local). Only
> failing when running on Yarn.
>
> val config = new IgniteConfiguration()
> val tcpDiscoverySpi = new TcpDiscoverySpi()
> val ipFinder = new TcpDiscoveryVmIpFinder()
> ipFinder.setAddresses(
>   util.Arrays.asList(
> "server1-ip",
> "server2-ip",
> "server3-ip",
> "server4-ip",
> "server5-ip:47500"
>   ))
> tcpDiscoverySpi.setIpFinder(ipFinder)
> config.setDiscoverySpi(tcpDiscoverySpi)
>
>
> val igniteContext = new IgniteContext(spark.sparkContext, () ⇒ config,
> standalone = false)
>
> Exception:
>
>
> Driver stacktrace:
> at
> org.apache.spark.scheduler.DAGScheduler.org$apache$spark$
> scheduler$DAGScheduler$$failJobAndIndependentStages(
> DAGScheduler.scala:1435)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(
> DAGScheduler.scala:1423)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(
> DAGScheduler.scala:1422)
> at
> scala.collection.mutable.ResizableArray$class.foreach(
> ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(
> ArrayBuffer.scala:48)
> at
> org.apache.spark.scheduler.DAGScheduler.abortStage(
> DAGScheduler.scala:1422)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$
> handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$
> handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
> at scala.Option.foreach(Option.scala:257)
> at
> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(
> DAGScheduler.scala:802)
> at
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.
> doOnReceive(DAGScheduler.scala:1650)
> at
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.
> onReceive(DAGScheduler.scala:1605)
> at
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.
> onReceive(DAGScheduler.scala:1594)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> at org.apache.spark.scheduler.DAGScheduler.runJob(
> DAGScheduler.scala:628)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
> at
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:925)
> at
> org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:923)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(
> RDDOperationScope.scala:151)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(
> RDDOperationScope.scala:112)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
> at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:923)
> at org.apache.ignite.spark.IgniteContext.(
> IgniteContext.scala:54)
> at
> BulkLoadFeatures$.delayedEndpoint$BulkLoadFeatures$1(
> BulkLoadFeatures.scala:37)
> at BulkLoadFeatures$delayedInit$body.apply(BulkLoadFeatures.
> scala:18)
> at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> at scala.runtime.AbstractFunction0.apply$mcV$
> sp(AbstractFunction0.scala:12)
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> at scala.collection.immutable.List.foreach(List.scala:381)
> at
> scala.collection.generic.TraversableForwarder$class.
> foreach(TraversableForwarder.scala:35)
> at scala.App$class.main(App.scala:76)
> at BulkLoadFeatures$.main(BulkLoadFeatures.scala:18)
> at BulkLoadFeatures.main(BulkLoadFeatures.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:
> 62)
> at
> 

Re: Ignite Xmx configuration

2017-09-08 Thread Nikolai Tikhonov
Hi Anil,

Yes, you are right.

On Fri, Jul 28, 2017 at 3:29 PM, Anil <anilk...@gmail.com> wrote:

> Hi Nikolai,
>
>
> So i need to add 4gb + indexes size as cache size for off-heap cache ?
>
> Thanks,
> Anil
>
> On 28 July 2017 at 17:23, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>
>> Indexes does not include in it. Indexes will occupy extra size.
>>
>> On Fri, Jul 28, 2017 at 12:21 PM, Anil <anilk...@gmail.com> wrote:
>>
>>> 1.9 version
>>>
>>> On 28 July 2017 at 14:08, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>>>
>>>> Which versioin ignite do you use?
>>>>
>>>> On Fri, Jul 28, 2017 at 11:12 AM, Anil <anilk...@gmail.com> wrote:
>>>>
>>>>> Hi Nikolai,
>>>>>
>>>>> One more question- documentation says the indexes are stored in off
>>>>> heap as well for off-heap cache?
>>>>>
>>>>> where does it store ? in the same 4 g (in my case) ? thanks.
>>>>>
>>>>> Regards,
>>>>> Anil
>>>>>
>>>>> On 28 July 2017 at 12:56, Anil <anilk...@gmail.com> wrote:
>>>>>
>>>>>> Thanks Nikolai.
>>>>>>
>>>>>> On 28 July 2017 at 12:47, Nikolai Tikhonov <ntikho...@apache.org>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi!
>>>>>>>
>>>>>>> If you used off-heap cache then entry is not stored in heap memory.
>>>>>>> Hence Xmx is not related with cache size. You need to choose Xmx/Xms 
>>>>>>> based
>>>>>>> on your application requirements (how many object will be created by 
>>>>>>> your
>>>>>>> code). I guess that 2-4 Gb will be enough in your case.
>>>>>>>
>>>>>>> On Fri, Jul 28, 2017 at 9:59 AM, Anil <anilk...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Team,
>>>>>>>>
>>>>>>>> I have two off-heap caches with 4 gb size (per cache)  in my ignite
>>>>>>>> node.
>>>>>>>>
>>>>>>>> What would be the Xmx setting for ignite node ?
>>>>>>>>
>>>>>>>> is it  2 * 4 + heap required ? or Xmx is not related to any of the
>>>>>>>> cache size ? please clarify. thanks.
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>> Anil
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


Re: Continuous Query event buffering OOME

2017-09-08 Thread Nikolai Tikhonov
Hi Michal,

I've looked at the code and your points look reasonable. For now, as you
correctly noted, you can decrease the size of the buffer to 50 or 100 via
the IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE property.
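The property is read as a system property at node startup, so one way to set it (the value 100 follows the suggestion above) is:

```java
public class BufferSizeSketch {
    // Sets the continuous-query server buffer size; must run before
    // Ignition.start(), since Ignite reads the property at node startup.
    // Equivalent to passing -DIGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE=100
    // on the JVM command line.
    static String configureBufferSize() {
        System.setProperty("IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE", "100");
        return System.getProperty("IGNITE_CONTINUOUS_QUERY_SERVER_BUFFER_SIZE");
    }

    public static void main(String[] args) {
        System.out.println(configureBufferSize());
    }
}
```

It has to be set on the server nodes, since that is where the per-subscriber buffers live.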

On Tue, Sep 5, 2017 at 9:14 PM, mcherkasov  wrote:

> Hi Michal,
>
> Those buffers are required to make sure that all messages are delivered to
> all subscribers and delivered in right order.
> However I agree, 1M is a relatively large number for this.
>
> I will check this question with Continuous Query experts and will update
> you
> tomorrow.
>
> Thanks,
> Mikhail.
>
>
>
>


Re: Amazon AMI not available in a specific region

2017-08-21 Thread Nikolai Tikhonov
Hello,

Yes, Apache Ignite doesn't provide a straightforward way to do it, but you
can start the Ignite nodes via YARN manually. To do that, connect to the
master node and perform the steps from this instruction:

On Tue, Aug 1, 2017 at 9:46 PM, raksja <shanmugkr...@gmail.com> wrote:

> Thanks for the quick turnaround, that helped.
> Also, does anyone know how to install Ignite on EMR worker nodes?
> It looks like there's no straightforward way.
>
> Any help or suggestions?
>
> On Tue, Aug 1, 2017, 5:33 AM Nikolai Tikhonov [via Apache Ignite Users] 
> <[hidden
> email] <http:///user/SendEmail.jtp?type=node=15871=0>> wrote:
>
>> Hi,
>>
>> I've copied AMI to Oregon. ami-f07f9b88 is image id.
>>
>> --
>> If you reply to this email, your message will be added to the discussion
>> below:
>> http://apache-ignite-users.70518.x6.nabble.com/Amazon-
>> AMI-not-available-in-a-specific-region-tp15838p15854.html
>> To unsubscribe from Amazon AMI not available in a specific region, click
>> here.
>>
>
> --
> View this message in context: Re: Amazon AMI not available in a specific
> region
> <http://apache-ignite-users.70518.x6.nabble.com/Amazon-AMI-not-available-in-a-specific-region-tp15838p15871.html>
>
> Sent from the Apache Ignite Users mailing list archive
> <http://apache-ignite-users.70518.x6.nabble.com/> at Nabble.com.
>


Re: docker external libs

2017-08-21 Thread Nikolai Tikhonov
Yes, you can easily create your own image. As an example, you can look at
the Apache Ignite Dockerfile [1].

1.
https://github.com/apache/ignite/blob/master/modules/docker/2.1.0/Dockerfile

On Mon, Aug 21, 2017 at 1:31 PM, luqmanahmad  wrote:

> Thanks Nikolai for coming back. That's what I am doing and I needed one
> based
> on the centos anyway. So had to create one from scratch.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/docker-external-libs-tp16254p16330.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: docker external libs

2017-08-21 Thread Nikolai Tikhonov
Hello,

I think in this case the better way is to extend the existing Docker image
and add your libs with the "ADD" command.
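A sketch of such a Dockerfile (the jar names are illustrative; the libs path matches the layout of the official image, as seen in the docker thread above):

```dockerfile
# Extend the official Apache Ignite image instead of building one from scratch.
FROM apacheignite/ignite:2.1.0

# Copy local jars into Ignite's libs directory so they end up on the classpath.
ADD my-model.jar my-listeners.jar /opt/ignite/apache-ignite-fabric-2.1.0-bin/libs/
```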

On Thu, Aug 17, 2017 at 4:47 PM, luqmanahmad  wrote:

> Hi there,
>
> In docker deployment can we provide jars from our local system to
> EXTERNAL_LIBS parameter. Let say we have 10 different jars on our local
> system how do we provide them to the EXTERNAL_LIBS parameter.
>
> Thanks,
> Luqman
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/docker-external-libs-tp16254.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: TcpDiscoveryVmIpFinder handle IP change

2017-08-14 Thread Nikolai Tikhonov
Hi David,

The JVM can cache name service lookups. Could you try setting
*networkaddress.cache.ttl* to zero?

1.
http://www.myhowto.org/java/42-understanding-host-name-resolution-and-dns-behavior-in-java
2. http://docs.oracle.com/javase/1.5.0/docs/guide/net/properties.html
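For reference, a sketch of setting it programmatically; it must run before the first host-name lookup in the JVM, and note it is a security property rather than a plain system property:

```java
import java.security.Security;

public class DnsTtlSketch {
    // Disables the positive DNS cache so that a changed server IP is
    // re-resolved on the next connection attempt.
    static String disableDnsCache() {
        Security.setProperty("networkaddress.cache.ttl", "0");
        return Security.getProperty("networkaddress.cache.ttl");
    }

    public static void main(String[] args) {
        System.out.println(disableDnsCache());
    }
}
```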

On Mon, Aug 14, 2017 at 10:18 AM, David Li  wrote:

> Hi all,
>
> Currently I am connecting to an ignite server node
> with TcpDiscoveryVmIpFinder, from a client node, using a domain as
> hostname, e.g. ignite.dev. Normally, when there is a network or server
> issue, the client node reconnects automatically once the network is
> restored or the Ignite server node is up again. But it cannot reconnect if the IP
> address of the server node has changed.
>
> I had a look at the TcpDiscoveryVmIpFinder class. When setting the
> address, it resolves the IP string (or hostname) as an InetSocketAddress.
> I guess that when Ignite tries to reconnect, it uses the InetSocketAddress
> directly and does not try to resolve the address from the original
> hostname again, so I am thinking maybe I can extend TcpDiscoveryVmIpFinder
> and make it work for my scenario, but I am not very sure where to change it.
>
> Overall, my Ignite server node may change its IP address when it comes up
> from a failure, and I want my client node to reconnect itself to the
> server node automatically. Any help is appreciated. Thank you.
>
>
> David
> 14 Aug
>
>


Re: Amazon AMI not available in a specific region

2017-08-01 Thread Nikolai Tikhonov
Hi,

I've copied AMI to Oregon. ami-f07f9b88 is image id.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Amazon-AMI-not-available-in-a-specific-region-tp15838p15854.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: bugreport: logger

2017-07-31 Thread Nikolai Tikhonov
Hello!

Thank you for your feedback. By default, IgniteJdbcDriver tries to find
*config/java.util.logging.properties*, or it takes the path to a
configuration file from the *java.util.logging.config.file* system
property. Only if both attempts fail does the Ignite JDBC driver configure
JUL itself.
On Sun, Jul 30, 2017 at 2:49 PM, Simon IJskes  wrote:

> Version: ignite 2.1.0
>
> A custom IgniteLogger is passed in the configuration. The existing
> java.util.logging configuration is still modified by ignite.
>
> Cause:
>
> IgniteJdbcDriver uses a static instance of a JavaLogger. The JavaLogger
> modifies the java.util.logging configuration.
>
> G. Simon
>
> P.S. i'm not on the list. Please CC if you want me to file a jira ticket.
>


Re: Create and query a binary cache in ignite

2017-07-31 Thread Nikolai Tikhonov
Hi James,

Could you share your code as a simple Maven project? That would help us
help you quickly.

On Mon, Jul 31, 2017 at 9:43 AM, James Isaac  wrote:

>
> Hi,
>
> I am trying to use BinaryObjects to create the cache at runtime. For
> example, instead of writing a pojo class such as Employee and configuring
> it as a cache value type, I need to be able to dynamically configure the
> cache with the field names and field types for the particular cache.
>
> I have posted the sample code on stackoverflow: https://stackov
> erflow.com/questions/45371054/create-and-query-a-binary-cache-in-ignite
> (Posting the code here would be messy)
>
> I am trying to configure the cache with the employeeId (Integer) as key
> and the whole employee record (BinaryObject) as value. When I run the above
> class, I get the following exception :
>
> Caused by: org.h2.jdbc.JdbcSQLException: Table "EMPLOYEE" not found; SQL 
> statement:
> select * from "emplCache".Employee where salary > 500 limit 5
>
> What am I doing wrong here? Is there anything more other than this line:
>
> employeeEntity.setTableName("Employee");
>
> I checked out https://github.com/apache/ignite/blob/master/examples/sr
> c/main/java/org/apache/ignite/examples/datagrid/CacheQueryDdlExample.java
>
> as Nikolay suggested but I see that they are setting the pojo class as the
> indexed type. Is there any way I can avoid this?
>
> Regards,
>
> James
>
>


Re: most of the dat in mbeans in empty

2017-07-31 Thread Nikolai Tikhonov
Sorry, I missed it. It's known behaviour. Counts of operations (put/remove,
etc.) are updated on the data nodes (the nodes that actually store the data),
but operation time is measured on the client node (the node that performs the
operation). We plan to make this clearer and more intuitive. You can track the
status of the issue here: https://issues.apache.org/jira/browse/IGNITE-3495

On Mon, Jul 31, 2017 at 2:34 PM, neerajbhatt 
wrote:

> 
>
> Any update on this ? Please find image attached, as you can see get time of
> all caches are zero
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/most-of-the-data-in-mbeans-for-monitoring-is-empty-
> tp15774p15815.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Error : Commit produced a runtime exception

2017-07-31 Thread Nikolai Tikhonov
Did you configure IgniteConfiguration#setMemoryConfiguration?

On Mon, Jul 31, 2017 at 3:18 PM, iostream  wrote:

> I have already shared the cache configuration in my post above. Reposting
> below-
>
> Cache configuration :-
> cacheConfig.setAtomicityMode(TRANSACTIONAL);
> cacheConfig.setCacheMode(PARTITIONED);
> cacheConfig.setBackups(1);
> cacheConfig.setCopyOnRead(TRUE);
> cacheConfig.setPartitionLossPolicy(IGNORE);
> cacheConfig.setQueryParallelism(2);
> cacheConfig.setReadFromBackup(TRUE);
> cacheConfig.setRebalanceBatchSize(524288);
> cacheConfig.setRebalanceThrottle(500);
> cacheConfig.setRebalanceTimeout(1);
> cacheConfig.setOnheapCacheEnabled(FALSE);
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Error-Commit-produced-a-runtime-
> exception-tp15768p15818.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Error : Commit produced a runtime exception

2017-07-28 Thread Nikolai Tikhonov
Local node metrics show ~80% free heap memory. Since Ignite 2.0, entries are
stored in off-heap memory. Could you upgrade to version 2.1 (just released),
which has many improvements and fixes (including off-heap metrics)? Also, can
you share your configuration?

On Fri, Jul 28, 2017 at 3:57 PM, iostream  wrote:

> Attaching the complete error log file ignite_error_log.log
>  n15780/ignite_error_log.log>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Error-Commit-produced-a-runtime-
> exception-tp15768p15780.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: most of the dat in mbeans in empty

2017-07-28 Thread Nikolai Tikhonov
Hi,

You need to enable metrics via
*CacheConfiguration#setStatisticsEnabled(true)*

On Fri, Jul 28, 2017 at 3:04 PM, neerajbhatt 
wrote:

> Hi All
>
> I am trying to monitor mbeans through jconsole. I have set
> setStatisticsEnabled(true)
>
> In jconsole I can see only some entries for all caches like key size and
> off
> heap counts (see image)
>
> Why is the rest of the data not available?
>  >
>  >
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/most-of-the-dat-in-mbeans-in-empty-tp15774.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Xmx configuration

2017-07-28 Thread Nikolai Tikhonov
Indexes are not included in it; they will occupy extra space.

On Fri, Jul 28, 2017 at 12:21 PM, Anil <anilk...@gmail.com> wrote:

> 1.9 version
>
> On 28 July 2017 at 14:08, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>
>> Which version of Ignite do you use?
>>
>> On Fri, Jul 28, 2017 at 11:12 AM, Anil <anilk...@gmail.com> wrote:
>>
>>> Hi Nikolai,
>>>
>>> One more question- documentation says the indexes are stored in off heap
>>> as well for off-heap cache?
>>>
>>> where does it store ? in the same 4 g (in my case) ? thanks.
>>>
>>> Regards,
>>> Anil
>>>
>>> On 28 July 2017 at 12:56, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Thanks Nikolai.
>>>>
>>>> On 28 July 2017 at 12:47, Nikolai Tikhonov <ntikho...@apache.org>
>>>> wrote:
>>>>
>>>>> Hi!
>>>>>
>>>>> If you used off-heap cache then entry is not stored in heap memory.
>>>>> Hence Xmx is not related with cache size. You need to choose Xmx/Xms based
>>>>> on your application requirements (how many object will be created by your
>>>>> code). I guess that 2-4 Gb will be enough in your case.
>>>>>
>>>>> On Fri, Jul 28, 2017 at 9:59 AM, Anil <anilk...@gmail.com> wrote:
>>>>>
>>>>>> Hi Team,
>>>>>>
>>>>>> I have two off-heap caches with 4 gb size (per cache)  in my ignite
>>>>>> node.
>>>>>>
>>>>>> What would be the Xmx setting for ignite node ?
>>>>>>
>>>>>> is it  2 * 4 + heap required ? or Xmx is not related to any of the
>>>>>> cache size ? please clarify. thanks.
>>>>>>
>>>>>>
>>>>>> Regards
>>>>>> Anil
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


Re: Error : Commit produced a runtime exception

2017-07-28 Thread Nikolai Tikhonov
Hello,

The cause of this issue is an OutOfMemoryError. Are you sure that you have
enough free memory on your PC/server?

Caused by: java.lang.OutOfMemoryError
at sun.misc.Unsafe.allocateMemory(Native Method)

On Fri, Jul 28, 2017 at 11:55 AM, iostream  wrote:

> Hi,
>
> My ignite cluster hung producing the following error trace. Can someone
> help
> me identify the reason for the failure and cluster hang?
>
> [09:57:38,964][INFO][grid-timeout-worker-#19%null%][IgniteKernal] FreeList
> [name=null, buckets=256, dataPages=1208250, reusePages=0]
> [09:58:38,963][INFO][grid-timeout-worker-#19%null%][IgniteKernal]
> Metrics for local node (to disable set 'metricsLogFrequency' to 0)
> ^-- Node [id=635920a1, name=null, uptime=27:24:08:084]
> ^-- H/N/C [hosts=16, nodes=16, CPUs=60]
> ^-- CPU [cur=0.17%, avg=0.66%, GC=0%]
> ^-- PageMemory [pages=19546713]
> ^-- Heap [used=1437MB, free=79.94%, comm=4095MB]
> ^-- Non heap [used=74MB, free=95.07%, comm=76MB]
> ^-- Public thread pool [active=0, idle=0, qSize=0]
> ^-- System thread pool [active=0, idle=6, qSize=0]
> ^-- Outbound messages queue [size=0]
> [09:58:38,964][INFO][grid-timeout-worker-#19%null%][IgniteKernal] FreeList
> [name=null, buckets=256, dataPages=1208250, reusePages=0]
> [09:58:57,610][INFO][grid-nio-worker-tcp-comm-0-#21%null%][
> TcpCommunicationSpi]
> Accepted incoming communication connection [locAddr=/10.65.97.216:47100,
> rmtAddr=/10.247.193.243:46910]
> [09:59:08,242][INFO][grid-nio-worker-tcp-comm-1-#22%null%][
> TcpCommunicationSpi]
> Accepted incoming communication connection [locAddr=/10.65.97.216:47100,
> rmtAddr=/10.117.234.100:49012]
> [09:59:08,273][INFO][grid-nio-worker-tcp-comm-2-#23%null%][
> TcpCommunicationSpi]
> Established outgoing communication connection [locAddr=/10.65.97.216:40504
> ,
> rmtAddr=/10.247.197.103:47100]
> [09:59:08,474][INFO][grid-nio-worker-tcp-comm-3-#24%null%][
> TcpCommunicationSpi]
> Accepted incoming communication connection [locAddr=/10.65.97.216:47100,
> rmtAddr=/10.65.100.14:47386]
> [09:59:08,563][INFO][grid-nio-worker-tcp-comm-0-#21%null%][
> TcpCommunicationSpi]
> Accepted incoming communication connection [locAddr=/10.65.97.216:47100,
> rmtAddr=/10.117.234.32:58504]
> [09:59:26,270][SEVERE][sys-stripe-6-#7%null%][GridDhtTxRemote] Commit
> failed.
> class
> org.apache.ignite.internal.transactions.IgniteTxHeuristicCheckedException:
> Commit produced a runtime exception (all transaction entries will be
> invalidated):
> GridDhtTxRemote[id=45c108f7d51--06b5-15d4--0010,
> concurrency=PESSIMISTIC, isolation=REPEATABLE_READ, state=COMMITTING,
> invalidate=false, rollbackOnly=false,
> nodeId=04540138-5619-4184-863c-2df07914ab02, duration=0]
> at
> org.apache.ignite.internal.processors.cache.distributed.
> GridDistributedTxRemoteAdapter.commitIfLocked(
> GridDistributedTxRemoteAdapter.java:719)
> at
> org.apache.ignite.internal.processors.cache.distributed.
> GridDistributedTxRemoteAdapter.commitRemoteTx(
> GridDistributedTxRemoteAdapter.java:789)
> at
> org.apache.ignite.internal.processors.cache.transactions.
> IgniteTxHandler.finish(IgniteTxHandler.java:1238)
> at
> org.apache.ignite.internal.processors.cache.transactions.IgniteTxHandler.
> processDhtTxPrepareRequest(IgniteTxHandler.java:965)
> at
> org.apache.ignite.internal.processors.cache.transactions.
> IgniteTxHandler.access$400(IgniteTxHandler.java:95)
> at
> org.apache.ignite.internal.processors.cache.transactions.
> IgniteTxHandler$5.apply(IgniteTxHandler.java:165)
> at
> org.apache.ignite.internal.processors.cache.transactions.
> IgniteTxHandler$5.apply(IgniteTxHandler.java:163)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.
> processMessage(GridCacheIoManager.java:863)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(
> GridCacheIoManager.java:386)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.
> handleMessage(GridCacheIoManager.java:308)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(
> GridCacheIoManager.java:100)
> at
> org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.
> onMessage(GridCacheIoManager.java:253)
> at
> org.apache.ignite.internal.managers.communication.
> GridIoManager.invokeListener(GridIoManager.java:1257)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager.
> processRegularMessage0(GridIoManager.java:885)
> at
> org.apache.ignite.internal.managers.communication.
> GridIoManager.access$2100(GridIoManager.java:114)
> at
> org.apache.ignite.internal.managers.communication.GridIoManager$7.run(
> GridIoManager.java:802)
> at
> org.apache.ignite.internal.util.StripedExecutor$Stripe.
> run(StripedExecutor.java:483)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: 

Re: Ignite Xmx configuration

2017-07-28 Thread Nikolai Tikhonov
Which version of Ignite do you use?

On Fri, Jul 28, 2017 at 11:12 AM, Anil <anilk...@gmail.com> wrote:

> Hi Nikolai,
>
> One more question- documentation says the indexes are stored in off heap
> as well for off-heap cache?
>
> where does it store ? in the same 4 g (in my case) ? thanks.
>
> Regards,
> Anil
>
> On 28 July 2017 at 12:56, Anil <anilk...@gmail.com> wrote:
>
>> Thanks Nikolai.
>>
>> On 28 July 2017 at 12:47, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>>
>>> Hi!
>>>
>>> If you used off-heap cache then entry is not stored in heap memory.
>>> Hence Xmx is not related with cache size. You need to choose Xmx/Xms based
>>> on your application requirements (how many object will be created by your
>>> code). I guess that 2-4 Gb will be enough in your case.
>>>
>>> On Fri, Jul 28, 2017 at 9:59 AM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Team,
>>>>
>>>> I have two off-heap caches with 4 gb size (per cache)  in my ignite
>>>> node.
>>>>
>>>> What would be the Xmx setting for ignite node ?
>>>>
>>>> is it  2 * 4 + heap required ? or Xmx is not related to any of the
>>>> cache size ? please clarify. thanks.
>>>>
>>>>
>>>> Regards
>>>> Anil
>>>>
>>>>
>>>>
>>>
>>
>


Re: Ignite 1.9Version: Error GridCachePartitionExchangeManager Found long running cache future

2017-07-28 Thread Nikolai Tikhonov
Hi,

Could you take thread dumps from all nodes in the cluster when it happens and
share them here?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-1-9Version-Error-GridCachePartitionExchangeManager-Found-long-running-cache-future-tp15755p15759.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Xmx configuration

2017-07-28 Thread Nikolai Tikhonov
Hi!

If you use an off-heap cache, entries are not stored in heap memory, so Xmx
is not related to the cache size. You need to choose Xmx/Xms based on your
application's requirements (how many objects will be created by your code).
I guess 2-4 GB will be enough in your case.
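As a rough sketch of what that sizing means in JVM terms (the numbers below are placeholders for illustration, not a recommendation):

```shell
# -Xmx covers only on-heap Java objects; Ignite's off-heap cache memory
# is allocated outside the heap and is not limited by -Xmx.
JVM_OPTS="-Xms2g -Xmx2g"
# Approximate total process footprint for this node:
#   2 GB heap + 2 caches x 4 GB off-heap + JVM overhead
```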

On Fri, Jul 28, 2017 at 9:59 AM, Anil  wrote:

> Hi Team,
>
> I have two off-heap caches with 4 gb size (per cache)  in my ignite node.
>
> What would be the Xmx setting for ignite node ?
>
> is it  2 * 4 + heap required ? or Xmx is not related to any of the cache
> size ? please clarify. thanks.
>
>
> Regards
> Anil
>
>
>


Re: Streaming data from concurrently from multiple nodes/data streamers - hangs

2017-07-27 Thread Nikolai Tikhonov
Hi,

Could you please share thread dump from all nodes?

On Thu, Jul 27, 2017 at 6:01 PM, Raja  wrote:

> Is it right to use data streamers from multiple nodes concurrently to
> ingest
> into the same cache?
>
> If I ingest data from a single node and multiple threads it works just
> fine.
> But when I ingests from multiple nodes it hangs (I get Unable to exchange
> partitions error)
>
>
> Any thoughts?
>
> Thank you
> Raja
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Streaming-data-from-concurrently-from-
> multiple-nodes-data-streamers-hangs-tp15735.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Does it reveal some problems?

2017-07-27 Thread Nikolai Tikhonov
You can find more details about Page Memory here:
https://apacheignite.readme.io/docs/page-memory

On Thu, Jul 27, 2017 at 3:46 PM, Nikolai Tikhonov <ntikho...@apache.org>
wrote:

> Hi,
>
> Could you share full logs? From this messages I don't see any problem,
> it's just statistic of usage page memory.
>
> On Thu, Jul 27, 2017 at 2:12 PM, Bob Li <2789106...@qq.com> wrote:
>
>> From this worklog(you can find it in IGNITE_HOME/work/log):
>> [19:08:55,734][INFO ][grid-timeout-worker-#23%null%][IgniteKernal]
>> FreeList
>> [name=null, buckets=256, dataPages=70068, reusePages=20]
>>
>> Does "grid-timeout-worker" indicate my grid has timeout problem?
>> and  what's meaning of this log?
>> thanks.
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Does-it-reveal-some-problems-tp15724.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: Does it reveal some problems?

2017-07-27 Thread Nikolai Tikhonov
Hi,

Could you share the full logs? From these messages I don't see any problem;
it's just page memory usage statistics.

On Thu, Jul 27, 2017 at 2:12 PM, Bob Li <2789106...@qq.com> wrote:

> From this worklog(you can find it in IGNITE_HOME/work/log):
> [19:08:55,734][INFO ][grid-timeout-worker-#23%null%][IgniteKernal]
> FreeList
> [name=null, buckets=256, dataPages=70068, reusePages=20]
>
> Does "grid-timeout-worker" indicate my grid has timeout problem?
> and  what's meaning of this log?
> thanks.
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Does-it-reveal-some-problems-tp15724.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: [Paging] QueryCursor and SqlQuery

2017-07-27 Thread Nikolai Tikhonov
Hi,

The Ignite API does not provide pagination out of the box. You can achieve it
in your SQL query, as in most databases.
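As an illustration, here is a minimal sketch of LIMIT/OFFSET paging; the Person table, column names, and page numbers are assumptions made up for the example. The generated string could then be run against the cache as a SQL query.

```java
public class PaginationSketch {
    // Build a LIMIT/OFFSET SQL string for a 1-based page number.
    // Table and column names are placeholders for illustration.
    static String pageQuery(int page, int pageSize) {
        int offset = (page - 1) * pageSize; // rows to skip before this page
        return "SELECT id, name FROM Person ORDER BY id"
             + " LIMIT " + pageSize + " OFFSET " + offset;
    }

    public static void main(String[] args) {
        // Page 2 with page size 20 selects rows 21-40.
        System.out.println(pageQuery(2, 20));
        // -> SELECT id, name FROM Person ORDER BY id LIMIT 20 OFFSET 20
    }
}
```

Note the deterministic ORDER BY: without it, rows may come back in a different order between queries, so pages could overlap or skip records.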

On Thu, Jul 27, 2017 at 4:33 AM, woo charles 
wrote:

> Hi,
> How can I do paging when query from ignite cache?
>
> Can I get records by range? specific page?
>
> If I have 1 rows of records with page size set to 20, how can I get
> the 21-40 (Page 2)?
>
>


Re: Cache performance in multi-threaded environment

2017-07-26 Thread Nikolai Tikhonov
Hi Vladimir,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.

1) I don't think it leads to a dramatic performance decrease. Which version
of Ignite do you use for your PoC? From version 1.9 you can try changing the
CacheConfiguration#setQueryParallelism parameter. If you have a big data set
it can bring a performance improvement.

2) The Ignite cache object is thread safe. I recommend that you do not close
this instance after each cache operation. It is enough to call the
getOrCreateCache method once; the instance can then be shared between
different threads. For example:


// This object is thread safe: create it once and share it between threads.
IgniteCache cache = ignite.getOrCreateCache(...);

// Usage template: reuse the instance above; there is no need to
// re-create or close the cache instance for each cache operation.
try {
    QueryCursor cur = cache.query(...);
    List res = cur.getAll();
}
catch (Exception e) {
    // Query failed.
}

BTW, could you share a simple reproducer that leads to this failure?




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-performance-in-multi-threaded-environment-tp15698p15702.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: regarding ignite-hibernate module as L2 hibernate cache

2017-07-26 Thread Nikolai Tikhonov
Hello,

I answered on SO [1]. I've also created a ticket; you can track progress
there [2].

1.
https://stackoverflow.com/questions/45327322/exception-while-trying-to-use-ignite-hibernate-as-l2-cache
2. https://issues.apache.org/jira/browse/IGNITE-5848

On Wed, Jul 26, 2017 at 3:37 PM, sureshkumarvepari <
sureshkumarvep...@gmail.com> wrote:

> Im getting this exception, while trying to use ignite-hibernate 2.1.1 as
> L2 cache with Hibernate 5.2.4
>
>
> "Handler dispatch failed; nested exception is
> java.lang.AbstractMethodError: org.apache.ignite.cache.hibernate.
> HibernateEntityRegion$AccessStrategy.putFromLoad(
> Lorg/hibernate/engine/spi/SharedSessionContractImplement
> or;Ljava/lang/Object;Ljava/lang/Object;JLjava/lang/Object;Z)Z"
>
>
> jars used for this integration is
>
> hibernate-core-5.2.4.Final.jar
> ignite-core-2.1.1.jar
> ignite-hibernate-core-2.1.1.jar
> ignite-hibernate_5.1-2.1.1.jar
> ignite-indexing-2.1.1.jar
> ignite-log4j-2.1.1.jar
> ignite-spring-2.1.1.jar
> ignite-web-2.1.1.jar
>
>
> Thanks.
>
> --
> View this message in context: regarding ignite-hibernate module as L2
> hibernate cache
> 
> Sent from the Apache Ignite Users mailing list archive
>  at Nabble.com.
>


Re: ignite-indexing karaf feature problems on Karaf 4.1.1

2017-07-26 Thread Nikolai Tikhonov
Hello,

Apache Ignite has a close integration with H2 and uses classes from this
package to start the debug console.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-indexing-karaf-feature-problems-on-Karaf-4-1-1-tp15643p15694.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Cluster going OOM

2017-07-26 Thread Nikolai Tikhonov
Hi,

The vote for this release finishes tomorrow. I think it will be successful
and the release will be ready this week.

On Wed, Jul 26, 2017 at 12:57 PM, Ankit Singhai  wrote:

> Hi Andrew,
> Please let us know when Ignite 2.1.0 would be available & do you think we
> can try with Ignite 2.0?
>
> Thanks
> Ankit Singhai
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-Cluster-going-OOM-tp15175p15683.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: REST API put issue

2017-07-25 Thread Nikolai Tikhonov
Hi,

Are you sure that you used a correct SQL query and queried the same cache?
Could you share a simple reproducer?

Thanks,
Nikolai

On Sat, Jul 22, 2017 at 12:03 AM, waterg 
wrote:

> I used REST API put command and was able to valid the number of entries
> using
> REST size command as well as visor.
> However when I use JDBC to connect to ignite directly, the select count(*)
> return 0 rows.
> Has anyone tried that before? Is it a bug?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/REST-API-put-issue-tp15267.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Cache cannot be used any more on client if server is restarted.

2017-07-25 Thread Nikolai Tikhonov
Hello,

The *CacheJdbcPojoStoreFactory#setDataSource* method is deprecated (for this
very reason). You need to use *CacheJdbcPojoStoreFactory#setDataSourceBean*
or *CacheJdbcPojoStoreFactory#setDataSourceFactory* instead.

Thanks,
Nikolai

On Tue, Jul 25, 2017 at 10:06 AM, AresZhu  wrote:

> I have one server and one client. everything works fine, if server is not
> restarted. but once the server is restarted, the cache cannot be used any
> more in client.
>
> below is code for both server and client.
>
> Server code:
> cacheConfiguration.setName("Sample");
> cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
> cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> cacheConfiguration.setRebalanceMode(CacheRebalanceMode.ASYNC);
>
> cacheConfiguration.setWriteSynchronizationMode(
> CacheWriteSynchronizationMode.FULL_SYNC);
> cacheConfiguration.setBackups(0);
> cacheConfiguration.setCopyOnRead(true);
> cacheConfiguration.setStoreKeepBinary(false);
>
> cacheConfiguration.setReadThrough(false);
> cacheConfiguration.setWriteThrough(true);
> cacheConfiguration.setWriteBehindEnabled(true);
> cacheConfiguration.setWriteBehindFlushFrequency(2000);
> cacheConfiguration.setWriteBehindFlushThreadCount(2);
>
> DriverManagerDataSource theDataSource = new DriverManagerDataSource();
> theDataSource.setDriverClassName("org.postgresql.Driver");
> theDataSource.setUrl("jdbc:postgresql://192.168.224.128:5432/sample");
> theDataSource.setUsername("postgres");
> theDataSource.setPassword("password");
>
>
> CacheJdbcPojoStoreFactory jdbcPojoStoreFactory = new
> CacheJdbcPojoStoreFactory()
> .setParallelLoadCacheMinimumThreshold(0)
> .setMaximumPoolSize(1)
> .setDataSource(theDataSource);
>
> cacheConfiguration.setCacheStoreFactory(jdbcPojoStoreFactory);
>
>
> Collection jdbcTypes = new ArrayList();
>
> JdbcType jdbcType = new JdbcType();
> jdbcType.setCacheName("Sample");
> jdbcType.setDatabaseSchema("public");
> jdbcType.setKeyType("java.lang.Long");
>
> Collection keys = new ArrayList();
> keys.add(new JdbcTypeField(Types.BIGINT, "id", long.class, "id"));
> jdbcType.setKeyFields(keys.toArray(new JdbcTypeField[keys.size()]));
> Collection vals = new ArrayList();
>
> jdbcType.setDatabaseTable("sample");
> jdbcType.setValueType("com.nmf.SampleModel");
>
> vals.add(new JdbcTypeField(Types.BIGINT, "id", long.class, "id"));
> vals.add(new JdbcTypeField(Types.VARCHAR, "name", String.class,
> "name"));
>
> jdbcType.setValueFields(vals.toArray(new JdbcTypeField[vals.size()]));
>
> jdbcTypes.add(jdbcType);
>
>
> ((CacheJdbcPojoStoreFactory)cacheConfiguration.getCacheStoreFactory()).
> setTypes(jdbcTypes.toArray(new
> JdbcType[jdbcTypes.size()]));
>
>
> IgniteConfiguration icfg = new IgniteConfiguration();
> icfg.setCacheConfiguration(cacheConfiguration);
>
> Ignite ignite = Ignition.start(icfg);
>
>
> Client Code:
> ExecutorService executor = Executors.newSingleThreadExecutor(r -> new
> Thread(r, "worker"));
>
>
> CacheConfiguration cacheConfiguration = new CacheConfiguration();
>
> cacheConfiguration.setName("Sample");
> cacheConfiguration.setCacheMode(CacheMode.REPLICATED);
> cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> cacheConfiguration.setRebalanceMode(CacheRebalanceMode.ASYNC);
>
> cacheConfiguration.setWriteSynchronizationMode(
> CacheWriteSynchronizationMode.FULL_SYNC);
> cacheConfiguration.setBackups(0);
> cacheConfiguration.setCopyOnRead(true);
> cacheConfiguration.setStoreKeepBinary(false);
>
> IgniteConfiguration icfg = new IgniteConfiguration();
> icfg.setCacheConfiguration(cacheConfiguration);
>
> icfg.setClientMode(true);
>
> final Ignite ignite = Ignition.start(icfg);
>
> ignite.events().localListen(new IgnitePredicate() {
> public boolean apply(Event event) {
> if (event.type() == EVT_CLIENT_NODE_RECONNECTED) {
> System.out.println("Reconnected");
>
> executor.submit(()-> {
> IgniteCache cache =
> ignite.getOrCreateCache("Sample");
>
> System.out.println("Got the cache");
>
> SampleModel model = cache.get(1L);
>
> System.out.println(model.getName());
> });
> }
>
> return true;
> }
> }, EVT_CLIENT_NODE_RECONNECTED);
>
> IgniteCache cache =
> ignite.getOrCreateCache("Sample");
>
> SampleModel model = cache.get(1L);
>
> System.out.println(model.getName());
>
>
> Error log on Client:
>   SEVERE: Failed to reinitialize local partitions (preloading will be
> stopped): GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=2, minorTopVer=1], nodeId=dea5f59b, evt=DISCOVERY_CUSTOM_EVT]
> class 

Re: Some question regards near cache and cache store

2017-07-25 Thread Nikolai Tikhonov
Hi Aaron,

NearCacheConfiguration can be used for a REPLICATED cache; it can be useful
on client nodes. The affinity function is also used for REPLICATED caches.

>Possible evict some data from the cache manually, when use
JDBC storage as back-end;  for some entry, I only want to mark them
as deleted  and move away from cache not really delete from DB.
The IgniteCache#clear method does what you want.


On Tue, Jul 25, 2017 at 5:41 AM, aa...@tophold.com 
wrote:

> Hi All,
>
> Will REPLICATED mode cache can not use the NearCacheConfiguration?   Also
> the affinity should not be used for REPLICATED mode cache right?
>
> Possible evict some data from the cache manually, when use
> JDBC storage as back-end;  for some entry, I only want to mark them
> as deleted  and move away from cache not really delete from DB.
>
> Thanks for your time! very appreciate!
>
> Regards
> Aaron
> --
> aa...@tophold.com
>


Re: Do not store @javax.persistence.Transient fields

2017-07-25 Thread Nikolai Tikhonov
Apache Ignite does not handle this annotation. You can implement the
org.apache.ignite.binary.Binarylizable interface, which allows you to
implement custom serialization logic for binary objects.
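As a minimal illustration of the underlying idea — excluding a field when an object is serialized — the sketch below uses plain Java serialization and the `transient` keyword; with Ignite you would instead implement Binarylizable and simply not write the field in writeBinary(). The Employee class and its fields are made up for the example.

```java
import java.io.*;

public class TransientSketch {
    static class Employee implements Serializable {
        String name;                    // serialized normally
        transient String sessionToken;  // skipped during serialization
        Employee(String name, String token) { this.name = name; this.sessionToken = token; }
    }

    // Serialize and deserialize; the transient field comes back as null.
    static Employee roundTrip(Employee e) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(e);
        oos.flush();
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (Employee) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Employee copy = roundTrip(new Employee("Alice", "secret"));
        System.out.println(copy.name + " / " + copy.sessionToken); // Alice / null
    }
}
```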

On Tue, Jul 25, 2017 at 2:11 PM, kestas  wrote:

> Is there a simple way to ensure fields marked as
> @javax.persistence.Transient
> are not being stored in cache?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Do-not-store-javax-persistence-
> Transient-fields-tp15604.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Information

2017-07-05 Thread Nikolai Tikhonov
Hi,

> I wanted to know if I did start a node by igniting.sh in client mode with
a fixed configuration I can access that instance via java.
To use the Ignite API you need an Ignite node running in the same JVM as
your application.

> Another thing I would do is to partition the input to the ignite server
nodes.
You can achieve that easily with a node filter. I think it's the better way.

http://apache-ignite-users.70518.x6.nabble.com/CacheConfiguration-AffinityFunction-or-node-filter-td9207.html

On Wed, Jul 5, 2017 at 6:56 PM, mimmo_c  wrote:

> Hi,
> I wanted to know if I did start a node by igniting.sh in client mode with a
> fixed configuration I can access that instance via java.
> Another thing I would do is to partition the input to the ignite server
> nodes. For example I have 2  maps and I would like the first
> one to go to one server and the second to the server two. Should I set the
> server cache to "local"? Can I use affinitykey? The maps are practically
> the
> same, there is no particular distinction between the objects inside.
> Thank you
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-Information-tp14330.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Apache Ignite client gets disconnected on Amazon EC2 Scale In

2017-06-21 Thread Nikolai Tikhonov
Which version of Ignite do you use?

On Wed, Jun 21, 2017 at 6:08 AM, robbie  wrote:

> Hi. Below are the logs for server and client nodes
>
> server:
> https://pastebin.com/ashC5EN8
>
> client:
> https://pastebin.com/xTa0sQ9t
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Apache-Ignite-client-gets-disconnected-on-Amazon-EC2-
> Scale-In-tp13874p14000.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Yardstick - Package built from source

2017-06-20 Thread Nikolai Tikhonov
Hi,

Can you make sure that com.test.ignite.yardstick.IgnitePutBenchmark really
exists in the classpath?

On Mon, Jun 19, 2017 at 9:54 PM, Swetad90  wrote:

> Hi
> I am trying to use Ignite yardstick on packages that I build rather than
> using the ones given in example packages(org.apache.ignite).
> I tried to package the entire source and place it under the sources folder.
> However, yardstick is not able to find my packages from properties file.
>
> Contents of Benchmark.properties -
> BENCHMARK_PACKAGES=org.yardstickframework, com.test.ignite.yardstick
>
> CONFIGS="\
> -cfg ${SCRIPT_DIR}/../config/ignite-localhost-config.xml -nn ${nodesNum}
> -b
> ${b} -w ${w} -d ${d} -t ${t} -p com.test.ignite.yardstick -sm ${sm} -dn
> IgnitePutBenchmark -sn IgniteNode -ds atomic-put-${b}-backup,\
> "
>
> Error in logs-
> ERROR: Could not find benchmark driver class name in classpath:
> IgnitePutBenchmark.
> Make sure class name is specified correctly and corresponding package is
> added to -p argument list.
> Type '--help' for usage.
>
> Is there any other property we have to modify for yardstick to recognize
> the
> packages?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-Yardstick-Package-built-from-
> source-tp13970.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Apache Ignite client gets disconnected on Amazon EC2 Scale In

2017-06-19 Thread Nikolai Tikhonov
Hello,

At first look it's a network problem. Where is your client located? Also,
could you share the full logs of the server and client nodes?

On Fri, Jun 16, 2017 at 3:59 PM, robbie  wrote:

> I've also noticed a similar behavior whenever I kill multiple EC2 instances
> when a compute task is running. I find that client gets disconnected from
> the grid often.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Apache-Ignite-client-gets-disconnected-on-Amazon-EC2-
> Scale-In-tp13874p13875.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: What is the difference in cache speed between ignite 1.7 and 1.9

2017-06-16 Thread Nikolai Tikhonov
Thank you! A simple benchmark would be the best input for further
investigation! ;)


Re: messaging behavior

2017-06-16 Thread Nikolai Tikhonov
Hi!

I think in your case you need to improve your protocol: send back an ack
message. The ack message would be sent to the sender node once the original
message has been processed.
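A minimal sketch of that ack pattern, with a CountDownLatch standing in for the Ignite messaging calls (the 5-second timeout is an arbitrary choice for the example); with Ignite the receiver's listener would send an ack back on a reply topic, and the sender would not stop its node until the ack arrives.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AckSketch {
    // Returns true once the "receiver" acknowledges, false on timeout.
    static boolean sendAndAwaitAck() throws InterruptedException {
        CountDownLatch ack = new CountDownLatch(1);

        // Receiver side: process the message, then acknowledge.
        // (With Ignite this would be a message listener that sends an
        // ack message back to the sender after processing.)
        new Thread(ack::countDown).start();

        // Sender side: keep the node alive until the ack arrives,
        // instead of stopping right after the send.
        return ack.await(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendAndAwaitAck() ? "ack received" : "timed out");
    }
}
```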

On Fri, Jun 16, 2017 at 10:15 AM, shawn.du <shawn...@neulion.com.cn> wrote:

> Hi,
>
> I want to use ignite messaging to send notifications to some client nodes.
>
> I write a tool to do this.
>
> this tool will start as a client node and send some message. when it send
> out the message,
> the tool will stopped.
> it seems that this doesn't work and i notice error like: Failed to resolve
> sender node (did the node left grid?)
> it seems the node gone are too fast.
>
> How to solve this?  another question is how can I get the feedback of how
> message are received by other nodes.
>
> Thanks
> Shawn
>
> On 06/12/2017 20:52,Nikolai Tikhonov<ntikho...@apache.org>
> <ntikho...@apache.org> wrote:
>
> Hi,
>
> Ignite does not accumulate messages which were sent to non-exist topic.
> Messages will be lost in your case.
>
> On Mon, Jun 12, 2017 at 12:30 PM, shawn.du <shawn...@neulion.com.cn>
> wrote:
>
>> Hi,
>>
>> I am trying ignite topic based messaging. I wonder to know ignite
>> behavior in blow case:
>>
>> Client A send a message with topic T1  to ignite server, but there are no
>> topic listeners at this time, after for a while(like 1 or 2 minutes),
>> Client B is online and subscribe topic T1, will client B get the message?
>> if true, how long
>> the message will stay in ignite queue and how to set it?
>> how it is for ordered message?
>>
>> Thanks
>> Shawn
>>
>>
>


Re: What is the difference in cache speed between ignite 1.7 and 1.9

2017-06-16 Thread Nikolai Tikhonov
Hi,

The diff between these releases is huge, but the developers work hard to
make each new release faster than the previous one, and every release is
benchmarked. Can you share a case that shows the slowdown?

On Fri, Jun 16, 2017 at 8:16 AM, wychoi  wrote:

> hi
>
> I upgraded to ignite 1.9 and tested the cache
>
> Cache writes are slowing down
>
> What is the difference in cache speed between ignite 1.7 and 1.9 ?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/What-is-the-difference-in-cache-speed-
> between-ignite-1-7-and-1-9-tp13853.html
>


Re: swift store as secondary file system

2017-06-13 Thread Nikolai Tikhonov
I got it! If you implement it yourself, don't be shy about sharing your
experience with the community. ;)

On Mon, Jun 12, 2017 at 7:23 PM, Antonio Si <antonio...@gmail.com> wrote:

> Thanks Nikolai. I am wondering if anyone has done something similar.
>
> Thanks.
>
> Antonio.
>
> On Mon, Jun 12, 2017 at 3:30 AM, Nikolai Tikhonov <ntikho...@apache.org>
> wrote:
>
>> Hi, Antonio!
>>
>> You can implement your own CacheStore which will propagate data to the
>> swift. Or do you mean other integration with this product?
>>
>> On Sat, Jun 10, 2017 at 9:04 AM, Antonio Si <antonio...@gmail.com> wrote:
>>
>>> Hi Alexey,
>>>
>>> I meant a swift object storage: https://wiki.openstack.org/wiki/Swift
>>>
>>> Thanks.
>>>
>>> Antonio.
>>>
>>>
>>>
>>> On Fri, Jun 9, 2017 at 6:38 PM, Alexey Kuznetsov <akuznet...@apache.org>
>>> wrote:
>>>
>>>> Hi, Antonio!
>>>>
>>>> What is a "swift store"?
>>>> Could you give a link?
>>>>
>>>> On Sat, Jun 10, 2017 at 7:32 AM, Antonio Si <antonio...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Is there a secondary file system implementation for a swift store?
>>>>>
>>>>> Thanks.
>>>>>
>>>>> Antonio.
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Alexey Kuznetsov
>>>>
>>>
>>>
>>
>


Re: Grid/Cluster unique UUID possible with IgniteUuid?

2017-06-13 Thread Nikolai Tikhonov
Muthu,

Look at the IgniteUuid#randomUuid() method. I think it will provide the
guarantees needed in your case.
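The scheme behind `IgniteUuid.randomUuid()` — one random `java.util.UUID` per JVM plus a locally incremented counter — can be sanity-checked with a plain-Java sketch (no Ignite needed; the class and method names here are illustrative):

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class GridUuidSketch {
    // One random UUID per JVM plus a per-JVM counter: the same shape as
    // IgniteUuid's global id + local id combination.
    private static final UUID GLOBAL_ID = UUID.randomUUID();
    private static final AtomicLong LOCAL_ID = new AtomicLong();

    static String next() {
        return GLOBAL_ID + "-" + LOCAL_ID.incrementAndGet();
    }

    public static void main(String[] args) {
        // Uniqueness within one JVM comes from the counter; uniqueness across
        // JVMs comes from the per-JVM random UUID.
        Set<String> seen = new HashSet<>();
        for (int i = 0; i < 100_000; i++)
            if (!seen.add(next()))
                throw new AssertionError("duplicate id");
        System.out.println("generated " + seen.size() + " unique ids");
    }
}
```

Note that no coordination between nodes is required: a collision would need two JVMs to draw the same random 128-bit UUID.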

On Mon, Jun 12, 2017 at 9:53 PM, Muthu <muthu.kumara...@gmail.com> wrote:

> Thanks Nikolai..this is what i am doing...not sure if this is too
> much..what do you think..the goal is to make sure that a UUID is unique
> across the entire application (the problem is each node that is part of the
> cluster would be doing this for different entities that it owns)
>
> ...
> ...
> System.out.println(" in ObjectCacheMgrService.insertDepartment 
> for dept : " + dept);
> long t1 = System.currentTimeMillis();
> *String uUID = new IgniteUuid(UUID.randomUUID(),
> igniteAtomicSequence.incrementAndGet()).toString();*
> long t2 = System.currentTimeMillis();
> System.out.println("Time for UUID generation (millis) : " + (t2 - t1));
> *dept.setId(uUID);*
> * deptCache.getAndPut(uUID, dept);*
> System.out.println(" in ObjectCacheMgrService.insertDepartment :
> department  inserted successfully : " + dept);
> ...
> ...
>
> Regards,
> Muthu
>
> On Mon, Jun 12, 2017 at 3:24 AM, Nikolai Tikhonov <ntikho...@apache.org>
> wrote:
>
>> Muthu,
>>
>> Yes, you can use IgniteUUID as unique ID generator. What you will use
>> depends your requirements. IgniteAtomicSequence takes one long and
>> IgniteUUID takes 3 long. But getting new range sequence is distributed
>> operation. You need to decied what more critical for your.
>>
>> On Fri, Jun 9, 2017 at 8:46 PM, Muthu <muthu.kumara...@gmail.com> wrote:
>>
>>>
>>> Missed adding this one...i know there is support for ID generation with
>>> IgniteAtomicSequence @ https://apacheignite.readme.io/docs/id-generator
>>>
>>> The question is which one should i use...i want to use this to generate
>>> unique ids for entities that are to be cached & persisted..
>>>
>>> Regards,
>>> Muthu
>>>
>>>
>>> On Fri, Jun 9, 2017 at 10:27 AM, Muthu <muthu.kumara...@gmail.com>
>>> wrote:
>>>
>>>> Hi Folks,
>>>>
>>>> Is it possible to generate a Grid/Cluster unique UUID using IgniteUuid.
>>>> I looked at the source code & static factory method *randomUuid
>>>> <https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/lang/IgniteUuid.html#randomUuid%28%29>*().
>>>> It looks like it generates one with with a java.util.UUID (generated with
>>>> its randomUUID) & an AutomicLong's incrementAndGet
>>>>
>>>> Can i safely assume that given that it uses a combination of UUID &
>>>> long on the individual VMs that are part of the Grid/Cluster it will be
>>>> unique or is there a better way?
>>>>
>>>> Regards,
>>>> Muthu
>>>>
>>>
>>>
>>
>


Re: System Parameters to improve CPU utilization

2017-06-12 Thread Nikolai Tikhonov
Hi,

Can you provide more details about your case? Which operations do you
perform on the grid?

On Fri, Jun 9, 2017 at 1:21 PM, rishi007bansod 
wrote:

> Hi,
>For my ignite data caching process i have recorded following
> statistics. In which I have found my CPU utilization is not much(only
> 60-70%). Also during this run high number of minor page faults, context
> switches/sec are seen. Are these parameters limiting my system performance?
> Are there any tuning that I can apply to improve CPU utilization?
>
> *CPU Utilization :*
> 
>
> *Page Faults :*
>  n13562/page_faults.png>
>
> *Context Switches/sec :*
>  n13562/contextswitchespersec.png>
>
> I have also tried increasing setStartCacheSize for cache but still same
> number of Page faults, Context switches/sec and CPU utilization is seen.
>
> *Page Faults when setCacheStartSize is set to 60*1024*1024(for 60M entries
> in our case):*
>  n13562/pagingsetStartSize.png>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/System-Parameters-to-improve-CPU-
> utilization-tp13562.html
>


Re: messaging behavior

2017-06-12 Thread Nikolai Tikhonov
Hi,

Ignite does not accumulate messages that were sent to a non-existent topic.
The messages will be lost in your case.

On Mon, Jun 12, 2017 at 12:30 PM, shawn.du  wrote:

> Hi,
>
> I am trying ignite topic based messaging. I wonder to know ignite behavior
> in blow case:
>
> Client A send a message with topic T1  to ignite server, but there are no
> topic listeners at this time, after for a while(like 1 or 2 minutes),
> Client B is online and subscribe topic T1, will client B get the message?
> if true, how long
> the message will stay in ignite queue and how to set it?
> how it is for ordered message?
>
> Thanks
> Shawn
>
>


Re: ignite 1.5 network imbalance

2017-06-12 Thread Nikolai Tikhonov
Hi Libo!

Could you quantify the imbalance as a percentage? Also, could you try
upgrading Ignite to 1.9 and checking whether it still occurs?

On Fri, Jun 9, 2017 at 11:05 PM, Libo Yu  wrote:

> Hi,
>
>
>
> We have used embedded ignite cache on three application servers which are
> behind a load balancer.
>
> The cache is set to PARTITIONED mode with backups=0. However, we noticed
> one node has
>
> a large outbound traffic and the other two nodes both have large inbound
> traffic. I printed
>
> out the partition number and local data size for each cache and they are
> almost the same.
>
> We have been struggling with this issue for quite some time and cannot
> figure out what
>
> caused this imbalance.  Note that we did not use client mode. I wonder if
> anybody has
>
> experienced the same issue for 1.5. Thanks.
>
>
>
> Regards,
>
>
>
> Libo Yu
>
>
>


Re: swift store as secondary file system

2017-06-12 Thread Nikolai Tikhonov
Hi, Antonio!

You can implement your own CacheStore which will propagate data to the
swift. Or do you mean other integration with this product?
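For reference, a custom `CacheStore` reduces to a handful of callbacks against the external system — load, write, delete. Below is a plain-Java sketch with an in-memory map standing in for Swift (the `Store` interface and all names are illustrative, not the Ignite `CacheStore` API):

```java
import java.util.HashMap;
import java.util.Map;

public class SwiftStoreSketch {
    // Minimal shape of a write-through store: the same three callbacks a
    // CacheStore implementation would forward to the Swift REST API.
    interface Store<K, V> {
        V load(K key);
        void write(K key, V val);
        void delete(K key);
    }

    static class InMemorySwiftStore implements Store<String, byte[]> {
        // Stand-in for Swift containers/objects.
        private final Map<String, byte[]> objects = new HashMap<>();

        public byte[] load(String key) { return objects.get(key); }
        public void write(String key, byte[] val) { objects.put(key, val); }
        public void delete(String key) { objects.remove(key); }
    }

    public static void main(String[] args) {
        Store<String, byte[]> store = new InMemorySwiftStore();
        store.write("container/object-1", "payload".getBytes());
        System.out.println(new String(store.load("container/object-1")));
    }
}
```

In a real integration the map operations would be replaced by PUT/GET/DELETE calls against the Swift object API, and the class would be plugged into the cache configuration as its store factory.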

On Sat, Jun 10, 2017 at 9:04 AM, Antonio Si  wrote:

> Hi Alexey,
>
> I meant a swift object storage: https://wiki.openstack.org/wiki/Swift
>
> Thanks.
>
> Antonio.
>
>
>
> On Fri, Jun 9, 2017 at 6:38 PM, Alexey Kuznetsov 
> wrote:
>
>> Hi, Antonio!
>>
>> What is a "swift store"?
>> Could you give a link?
>>
>> On Sat, Jun 10, 2017 at 7:32 AM, Antonio Si  wrote:
>>
>>> Hi,
>>>
>>> Is there a secondary file system implementation for a swift store?
>>>
>>> Thanks.
>>>
>>> Antonio.
>>>
>>
>>
>>
>> --
>> Alexey Kuznetsov
>>
>
>


Re: FW: QueryCursor.iterator() hanges forever

2017-06-12 Thread Nikolai Tikhonov
Hello,

That looks strange. Could you share a full example (as a Maven project)?
Which version of Apache Ignite do you use?

On Sat, Jun 10, 2017 at 1:14 PM, Reshma Bochare  wrote:

> Same thing works fine if executed at server side
>
>
>
> *From:* Reshma Bochare
> *Sent:* Friday, June 09, 2017 4:21 PM
> *To:* 'user@ignite.apache.org' 
> *Subject:* QueryCursor.iterator() hanges forever
>
>
>
> Hi,
>
> I am getting below error when iterated over QueryCursor.
>
>
>
>
>
> [2017-06-09 
> 16:12:58,947][ERROR][grid-nio-worker-2-#11%null%][GridDirectParser]
> Failed to read message [msg=GridIoMessage [plc=0, topic=null, topicOrd=-1,
> ordered=false, timeout=0, skipOnTimeout=false, msg=null],
> buf=java.nio.DirectByteBuffer[pos=2 lim=145 cap=32768],
> reader=DirectMessageReader [state=DirectMessageState [pos=0,
> stack=[StateItem [stream=DirectByteBufferStreamImplV2 
> [buf=java.nio.DirectByteBuffer[pos=2
> lim=145 cap=32768], baseOff=1356327248, arrOff=-1, tmpArrOff=0,
> tmpArrBytes=0, msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1,
> keyDone=false, readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0,
> uuidMost=0, uuidLeast=0, uuidLocId=0, lastFinished=true], state=0], null,
> null, null, null, null, null, null, null, null]], lastRead=false],
> ses=GridSelectorNioSessionImpl [selectorIdx=2, queueSize=0,
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=2 lim=145 cap=32768], 
> recovery=GridNioRecoveryDescriptor
> [acked=23, resendCnt=0, rcvCnt=17, sentCnt=23, reserved=true, lastAck=16,
> nodeLeft=false, node=TcpDiscoveryNode 
> [id=69869e5b-703f-4a86-8ad9-12fd06dfe624,
> addrs=[0:0:0:0:0:0:0:1, **.**.*.**, 127.0.0.1],
> sockAddrs=[IND-***..***/**.**.8.76:0, /0:0:0:0:0:0:0:1:0, /
> 127.0.0.1:0], discPort=0, order=2, intOrder=2, lastExchangeTime=1497004966182,
> loc=false, ver=1.8.0#20161205-sha1:9ca40dbe, isClient=true],
> connected=true, connectCnt=2, queueLimit=5120, reserveCnt=7],
> super=GridNioSessionImpl [locAddr=/0:0:0:0:0:0:0:1:47100,
> rmtAddr=/0:0:0:0:0:0:0:1:50946, createTime=1497004978927, closeTime=0,
> bytesSent=26, bytesRcvd=182, sndSchedTime=1497004978927,
> lastSndTime=1497004978927, lastRcvTime=1497004978947, readsPaused=false,
> filterChain=FilterChain[filters=[GridNioCodecFilter
> [parser=o.a.i.i.util.nio.GridDirectParser@d1411b, directMode=true],
> GridConnectionBytesVerifyFilter], accepted=true]]]
>
> class org.apache.ignite.IgniteException: Invalid message type: -33
>
> at org.apache.ignite.internal.managers.communication.
> GridIoMessageFactory.create(GridIoMessageFactory.java:805)
>
> at org.apache.ignite.spi.communication.tcp.
> TcpCommunicationSpi$5.create(TcpCommunicationSpi.java:1631)
>
> at org.apache.ignite.internal.direct.stream.v2.
> DirectByteBufferStreamImplV2.readMessage(DirectByteBufferStreamImplV2.
> java:1144)
>
> at org.apache.ignite.internal.direct.DirectMessageReader.
> readMessage(DirectMessageReader.java:311)
>
> at org.apache.ignite.internal.managers.communication.
> GridIoMessage.readFrom(GridIoMessage.java:254)
>
> at org.apache.ignite.internal.util.nio.GridDirectParser.
> decode(GridDirectParser.java:84)
>
> at org.apache.ignite.internal.util.nio.GridNioCodecFilter.
> onMessageReceived(GridNioCodecFilter.java:104)
>
> at org.apache.ignite.internal.
> util.nio.GridNioFilterAdapter.proceedMessageReceived(
> GridNioFilterAdapter.java:107)
>
> at org.apache.ignite.internal.util.nio.
> GridConnectionBytesVerifyFilter.onMessageReceived(
> GridConnectionBytesVerifyFilter.java:123)
>
> at org.apache.ignite.internal.
> util.nio.GridNioFilterAdapter.proceedMessageReceived(
> GridNioFilterAdapter.java:107)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$
> HeadFilter.onMessageReceived(GridNioServer.java:2332)
>
> at org.apache.ignite.internal.util.nio.GridNioFilterChain.
> onMessageReceived(GridNioFilterChain.java:173)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$
> DirectNioClientWorker.processRead(GridNioServer.java:918)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$
> AbstractNioClientWorker.processSelectedKeysOptimized(
> GridNioServer.java:1583)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$
> AbstractNioClientWorker.bodyInternal(GridNioServer.java:1516)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$
> AbstractNioClientWorker.body(GridNioServer.java:1289)
>
> at org.apache.ignite.internal.util.worker.GridWorker.run(
> GridWorker.java:110)
>
> at java.lang.Thread.run(Unknown Source)
>
>
>
>
>
> Configuration is as below:
>
>
>
> <*bean **id=**"igniteClientConfiguration" **class=*
> 

Re: Grid/Cluster unique UUID possible with IgniteUuid?

2017-06-12 Thread Nikolai Tikhonov
Muthu,

Yes, you can use IgniteUuid as a unique ID generator. Which one you use
depends on your requirements: an IgniteAtomicSequence value takes one long
while an IgniteUuid takes three longs, but reserving a new sequence range
is a distributed operation. You need to decide which trade-off is more
critical for you.
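The range-reservation trade-off can be illustrated with a plain-Java sketch (an `AtomicLong` stands in for the distributed cluster counter; all names are illustrative, not Ignite API): only one call per reserved range touches the shared counter, and every other id is handed out locally.

```java
import java.util.concurrent.atomic.AtomicLong;

public class RangeSequenceSketch {
    // Stand-in for the cluster-wide counter an IgniteAtomicSequence keeps.
    static final AtomicLong cluster = new AtomicLong();
    static final int RESERVE = 1000; // analogous to the sequence reserve size

    long rangeEnd = 0, next = 0;

    long nextId() {
        if (next == rangeEnd) {
            // Range exhausted: one "distributed" call reserves the next block.
            next = cluster.getAndAdd(RESERVE);
            rangeEnd = next + RESERVE;
        }
        return next++; // all other calls are purely local
    }

    public static void main(String[] args) {
        RangeSequenceSketch node = new RangeSequenceSketch();
        long first = node.nextId(), second = node.nextId();
        // Two local ids were handed out, but the shared counter moved only once.
        System.out.println(first + " " + second + " cluster=" + cluster.get());
    }
}
```

A larger reserve size means fewer distributed calls at the cost of larger id gaps when a node stops.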

On Fri, Jun 9, 2017 at 8:46 PM, Muthu  wrote:

>
> Missed adding this one...i know there is support for ID generation with
> IgniteAtomicSequence @ https://apacheignite.readme.io/docs/id-generator
>
> The question is which one should i use...i want to use this to generate
> unique ids for entities that are to be cached & persisted..
>
> Regards,
> Muthu
>
>
> On Fri, Jun 9, 2017 at 10:27 AM, Muthu  wrote:
>
>> Hi Folks,
>>
>> Is it possible to generate a Grid/Cluster unique UUID using IgniteUuid. I
>> looked at the source code & static factory method *randomUuid
>> *().
>> It looks like it generates one with with a java.util.UUID (generated with
>> its randomUUID) & an AutomicLong's incrementAndGet
>>
>> Can i safely assume that given that it uses a combination of UUID & long
>> on the individual VMs that are part of the Grid/Cluster it will be unique
>> or is there a better way?
>>
>> Regards,
>> Muthu
>>
>
>


Re: Node can't start. java.lang.NullPointerException in GridUnsafe.compareAndSwapLong()

2017-06-12 Thread Nikolai Tikhonov
Hi,

This seems to be a known issue with the IBM JDK:
http://www-01.ibm.com/support/docview.wss?uid=swg1IV76872. You need to
update to a JDK build that contains the fix.

On Fri, Jun 9, 2017 at 7:06 PM, Vladimir  wrote:

> Hi,
>
> Having no problems on Windows and Linux application suddenly couldn't start
> on IBM AIX with IBM J9 VM (build 2.8):
>
> Caused by: java.lang.NullPointerException
> at
> org.apache.ignite.internal.util.GridUnsafe.compareAndSwapLong(GridUnsafe.
> java:1228)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.util.OffheapReadWriteLock.
> readLock(OffheapReadWriteLock.java:122)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.readLock(
> PageMemoryNoStoreImpl.java:450)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.
> tree.util.PageHandler.readLock(PageHandler.java:181)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.
> tree.util.PageHandler.readPage(PageHandler.java:152)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.DataStructure.read(
> DataStructure.java:319)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.
> tree.BPlusTree.findDown(BPlusTree.java:1115)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.
> tree.BPlusTree.doFind(BPlusTree.java:1084)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.
> tree.BPlusTree.findOne(BPlusTree.java:1048)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$
> CacheDataStoreImpl.find(IgniteCacheOffheapManagerImpl.java:1143)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.
> read(IgniteCacheOffheapManagerImpl.java:361)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.unswap(
> GridCacheMapEntry.java:384)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet0(
> GridCacheMapEntry.java:588)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet(
> GridCacheMapEntry.java:474)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridPartitionedSingleGetFuture.localGet(GridPartitionedSingleGetFuture
> .java:380)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridPartitionedSingleGetFuture.mapKeyToNode(GridPartitionedSingleGetFuture
> .java:326)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridPartitionedSingleGetFuture.map(GridPartitionedSingleGetFuture
> .java:211)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture
> .java:203)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.colocated.
> GridDhtColocatedCache.getAsync(GridDhtColocatedCache.java:266)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get0(
> GridCacheAdapter.java:4482)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(
> GridCacheAdapter.java:4463)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(
> GridCacheAdapter.java:1405)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.datastructures.
> DataStructuresProcessor.getAtomic(DataStructuresProcessor.java:586)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.datastructures.
> DataStructuresProcessor.sequence(DataStructuresProcessor.java:396)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.IgniteKernal.atomicSequence(
> IgniteKernal.java:3419)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
>
> Any workarounds?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Node-can-t-start-java-lang-NullPointerException-in-
> GridUnsafe-compareAndSwapLong-tp13573.html
>


Re: vertx-ignite

2017-06-08 Thread Nikolai Tikhonov
Hi Anil,

Yes, you're right. The Ignite Vert.x integration creates caches at runtime,
and the template is needed for it to work correctly.

On Thu, Jun 8, 2017 at 7:37 AM, Anil <anilk...@gmail.com> wrote:

> Hi Nikhonov,
>
> May I know the reason for adding template configuration.
>
> Its a generic cache configuration. I have added all cache configurations
> required for my application explicitly in ignite.xml as we don't use java
> based IgniteCache creation. I dont see any importance of adding template
> configuration. please correct me if I am wrong. Thanks.
>
> Does vertex need any default caches/configurations for it to work? like
> below semaphore in hazelcast cluster.xml
>
>  
>   
>     1
>   
>
> Thanks
>
>
> On 5 June 2017 at 21:07, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>
>> Hi Anil,
>>
>> You missed a template for caches (lines 90-97).
>>
>> On Sat, May 13, 2017 at 12:05 PM, Anil <anilk...@gmail.com> wrote:
>>
>>> Hi Andrey,
>>>
>>> Could you please help me here? Thanks.
>>>
>>> Thanks
>>>
>>> On 11 May 2017 at 14:16, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Andrey,
>>>>
>>>> I am checking the default-ignite.xml at https://github.com/apacheig
>>>> nite/vertx-ignite/blob/master/src/main/resources/default-ignite.xml
>>>>
>>>> Could you please point what is missing in my configuration ?
>>>>
>>>> I could not find anything in default-ignite.xml.
>>>>
>>>> Thanks
>>>>
>>>> On 11 May 2017 at 11:07, Anil <anilk...@gmail.com> wrote:
>>>>
>>>>> HI Andrey,
>>>>>
>>>>> i am using vertx-ignite 3.4.1 and ignite 1.9 version. i will check the
>>>>> default-ignite.xml.
>>>>>
>>>>> Thanks
>>>>>
>>>>> On 10 May 2017 at 21:31, Andrey Gura <ag...@apache.org> wrote:
>>>>>
>>>>>> Anil,
>>>>>>
>>>>>> What version of vertx-ignite or Ignite itself do you use?
>>>>>>
>>>>>> In provided ignite.xml there is no minimal configuration that is
>>>>>> mandatory for Ignite cluster manager for vert.x (see
>>>>>> default-ignite.xml for example).
>>>>>>
>>>>>>
>>>>>> On Tue, May 2, 2017 at 9:18 AM, Anil <anilk...@gmail.com> wrote:
>>>>>> >
>>>>>> > Hi Andrey,
>>>>>> >
>>>>>> > Apologies for late reply. I don't have any exact reproduce. I can
>>>>>> see this
>>>>>> > log frequently in our logs.
>>>>>> >
>>>>>> > attached the ignite.xml.
>>>>>> >
>>>>>> > Thanks.
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > On 26 April 2017 at 18:32, Andrey Gura <ag...@apache.org> wrote:
>>>>>> >>
>>>>>> >> Anil,
>>>>>> >>
>>>>>> >> what kind of lock do you mean? What are steps for reproduce? What
>>>>>> >> version if vert-ignite do use and what is your configuration?
>>>>>> >>
>>>>>> >> On Wed, Apr 26, 2017 at 2:16 PM, Anil <anilk...@gmail.com> wrote:
>>>>>> >> > HI,
>>>>>> >> >
>>>>>> >> > I am using vertx-ignite and when node is left the topology, lock
>>>>>> is not
>>>>>> >> > getting released and whole server is not responding.
>>>>>> >> >
>>>>>> >> > 2017-04-26 04:09:15 WARN  vertx-blocked-thread-checker
>>>>>> >> > BlockedThreadChecker:57 - Thread
>>>>>> >> > Thread[vert.x-worker-thread-82,5,ignite]
>>>>>> >> > has been blocked for 2329981 ms, time limit is 6
>>>>>> >> > io.vertx.core.VertxException: Thread blocked
>>>>>> >> > at sun.misc.Unsafe.park(Native Method)
>>>>>> >> > at
>>>>>> >> > java.util.concurrent.locks.LockSupport.park(LockSupport.java
>>>>>> :175)
>>>>>> >> > at
>>>>>> >> >
>>>>>> >> > java.util.concurrent.lo

Re: Why is custom cacheStore.write() being called in clientMode?

2017-06-06 Thread Nikolai Tikhonov
Rick,

> What you are saying is that I cannot update keys 2 and 4 in the same
transaction, correct?
No, they will be updated in the same transaction. I explained why Ignite
can't update the store from the DHT nodes (the nodes that own the data) and
why Ignite propagates store updates from the client node instead.
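A plain-Java sketch of the idea (no Ignite; all names are illustrative): the transaction originator buffers the writes — even for entries that live on different nodes — and flushes them to the backing store in one place at commit, which is why the store is updated from the client node rather than from each DHT node separately.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TxWriteThroughSketch {
    // Stand-in for the backing database behind the CacheStore.
    static final Map<Integer, String> store = new LinkedHashMap<>();

    static class Tx {
        // Writes buffered on the originating node until commit.
        final Map<Integer, String> buffered = new LinkedHashMap<>();

        void put(int key, String val) { buffered.put(key, val); }

        void commit() {
            // One node applies every write of the transaction together,
            // so the store sees them atomically.
            synchronized (store) {
                store.putAll(buffered);
            }
        }
    }

    public static void main(String[] args) {
        Tx tx = new Tx();
        tx.put(2, "v2"); // entry owned by node 1 in the example
        tx.put(4, "v4"); // entry owned by node 2
        tx.commit();
        System.out.println("store after commit: " + store);
    }
}
```

If each owning node wrote its own keys independently, a node failure mid-commit could leave the store half-updated; funneling the store writes through the originator avoids that.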

On Tue, Jun 6, 2017 at 5:42 PM, rick_tem  wrote:

> Hi Nikolai,
>
> Thanks for your reply.  It is appreciated!  Thanks for your answer to 2) I
> will look into it. 3) and 4) are really the same issue I am trying to
> understand how it works.
>
> With regards to 1) below, we aren't speaking about distributed databases,
> but distributed caches that are java JVMs.  But isn't that what a JTA
> transaction manager is supposed to do?  ie handle distributed transactions?
> if I enlist MQ and Jboss in the same transaction that is two seperate JVMs
> and I believe should work with one atomic transaction...
>
> But regardless, I believe this is what you are saying here:  Please correct
> me if I am wrong.  Say I have keys 1, 2, 3 on node 1 and keys 4, 5, 6 on
> node 2.  What you are saying is that I cannot update keys 2 and 4 in the
> same transaction, correct?  This is because they live in two different
> JVMs...If this is the case, that is a severe limitation as then I need to
> know which node my data is on.  What would your recommendation be here then
> for write-through cache?  Have everything replicated?  It is a requirement
> that the transaction be rock solid in whatever model I implement.  I cannot
> afford to lose writes or have half-committed data.
>
> Thanks,
> Rick
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Why-is-custom-cacheStore-write-being-
> called-in-clientMode-tp13309p13424.html
>


Re: Combine two table caches to expose a database view type cache?

2017-06-06 Thread Nikolai Tikhonov
Muthu,

You can use binary representation instead of POJO classes [1]. I think it
can help you to avoid boilerplate code.

1. https://apacheignite.readme.io/docs/binary-marshaller#binaryobject-cache-api
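The gain from the binary format can be sketched in plain Java (a map-backed stand-in, purely illustrative — the real API is `IgniteCache.withKeepBinary()` and `BinaryObject.field(name)`): a joined "view row" is accessed by field name, with no hand-written DTO class per view.

```java
import java.util.HashMap;
import java.util.Map;

public class BinaryLikeSketch {
    // Stand-in for a BinaryObject: named fields, no per-view POJO class.
    static class FieldObject {
        private final Map<String, Object> fields = new HashMap<>();

        FieldObject set(String name, Object val) {
            fields.put(name, val);
            return this;
        }

        @SuppressWarnings("unchecked")
        <T> T field(String name) {
            return (T) fields.get(name);
        }
    }

    public static void main(String[] args) {
        // A "view row" joining person and organization fields — no DTO needed.
        FieldObject row = new FieldObject()
            .set("personName", "Alice")
            .set("orgName", "Ignite");
        String name = row.field("personName");
        System.out.println(name + " @ " + row.<String>field("orgName"));
    }
}
```

The boilerplate-saving point is that adding a column to the view only changes the query and the field names, not a generated class hierarchy.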

On Mon, Jun 5, 2017 at 10:18 PM, Muthu <muthu.kumara...@gmail.com> wrote:

> Our current application code uses this view in several places. We use
> MyBatis for ORM & it generates the DTO object & everything. The thought is
> if there is way to transparently use Ignite cache for the view instead of
> going to the database & let the rest of the application code use the same
> DTO object as generated by MyBatis.
>
> Regards,
> Muthu
>
> On Mon, Jun 5, 2017 at 12:10 PM, Muthu <muthu.kumara...@gmail.com> wrote:
>
>>
>> Thanks Nikolai for the suggestion..one other thing i was thinking was to
>> use continuous queries feature to create & update the new cache...but the
>> problem is i still have to manually construct the resultant DTO object
>> (manually set every field in the code). Since this is a view that joins two
>> or three tables with lots of columns i was wondering if there was a way i
>> can auto generate this boiler plate code...
>>
>> Regards,
>> Muthu
>>
>> On Mon, Jun 5, 2017 at 5:44 AM, Nikolai Tikhonov <ntikho...@apache.org>
>> wrote:
>>
>>> Hello,
>>>
>>> You need to implement your own CacheStore which will execute several
>>> selects for one entry and combine two rows to one cache entry.
>>>
>>> On Thu, Jun 1, 2017 at 9:34 AM, Muthu <muthu.kumara...@gmail.com> wrote:
>>>
>>>> Hello Folks,
>>>>
>>>> Just to add a little bit more clarity & context...taking the
>>>> Cross-Cache querying example from the ignite docs (copied below) if one
>>>> were to select fields from both Person & Organization table caches in the
>>>> select query what would be the elegant way to construct a domain POJO from
>>>> the query result set instead of constructing it in the application code.
>>>>
>>>>
>>>>- Cross-Cache SqlFieldsQuery
>>>><https://apacheignite.readme.io/docs/sql-queries>
>>>>
>>>> // In this example, suppose Person objects are stored in a // cache named 
>>>> 'personCache' and Organization objects // are stored in a cache named 
>>>> 'orgCache'.IgniteCache<Long, Person> personCache = 
>>>> ignite.cache("personCache");
>>>> // Select with join between Person and Organization to // get the names of 
>>>> all the employees of a specific organization.SqlFieldsQuery sql = new 
>>>> SqlFieldsQuery(
>>>> "select Person.name  "
>>>> + "from Person as p, \"orgCache\".Organization as org where "
>>>> + "p.orgId = org.id "
>>>> + "and org.name = ?");
>>>> // Execute the query and obtain the query result cursor.try 
>>>> (QueryCursor<List> cursor =  personCache.query(sql.setArgs("Ignite"))) {
>>>> for (List row : cursor)
>>>> System.out.println("Person name=" + row.get(0));
>>>> }
>>>>
>>>>
>>>> Regards,
>>>> Muthu
>>>>
>>>> -- The latest fact in modern technology isn't that machines will begin
>>>> to think like people, but that people will begin to think like machines.
>>>> -- Nothing exists except atoms and empty space, everything else is
>>>> opinion - *Democritus*
>>>>
>>>> On Tue, May 30, 2017 at 4:26 PM, Muthu <muthu.kumara...@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>> Just to clarify a little bit i don't want the view created on the
>>>>> database but rather created & exposed purely in ignite. The individual
>>>>> tables are already cached & available as L2 cache (MyBatis L2 cache) in
>>>>> Ignite.
>>>>>
>>>>> Regards,
>>>>> Muthu
>>>>>
>>>>>
>>>>> On Tue, May 30, 2017 at 4:07 PM, Muthu <muthu.kumara...@gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Folks,
>>>>>>
>>>>>> I need to combine two table caches to expose a database view type
>>>>>> cache. Is there an elegant way to do this where i don't need to manually
>>>>>> set/construct the view's POJO from the result of the join query?
>>>>>>
>>>>>> Regards,
>>>>>> Muthu
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


Re: vertx-ignite

2017-06-05 Thread Nikolai Tikhonov
Hi Anil,

You missed a template for caches (lines 90-97).

On Sat, May 13, 2017 at 12:05 PM, Anil  wrote:

> Hi Andrey,
>
> Could you please help me here? Thanks.
>
> Thanks
>
> On 11 May 2017 at 14:16, Anil  wrote:
>
>> Hi Andrey,
>>
>> I am checking the default-ignite.xml at https://github.com/apacheig
>> nite/vertx-ignite/blob/master/src/main/resources/default-ignite.xml
>>
>> Could you please point what is missing in my configuration ?
>>
>> I could not find anything in default-ignite.xml.
>>
>> Thanks
>>
>> On 11 May 2017 at 11:07, Anil  wrote:
>>
>>> HI Andrey,
>>>
>>> i am using vertx-ignite 3.4.1 and ignite 1.9 version. i will check the
>>> default-ignite.xml.
>>>
>>> Thanks
>>>
>>> On 10 May 2017 at 21:31, Andrey Gura  wrote:
>>>
 Anil,

 What version of vertx-ignite or Ignite itself do you use?

 In provided ignite.xml there is no minimal configuration that is
 mandatory for Ignite cluster manager for vert.x (see
 default-ignite.xml for example).


 On Tue, May 2, 2017 at 9:18 AM, Anil  wrote:
 >
 > Hi Andrey,
 >
 > Apologies for late reply. I don't have any exact reproduce. I can see
 this
 > log frequently in our logs.
 >
 > attached the ignite.xml.
 >
 > Thanks.
 >
 >
 >
 > On 26 April 2017 at 18:32, Andrey Gura  wrote:
 >>
 >> Anil,
 >>
 >> what kind of lock do you mean? What are steps for reproduce? What
 >> version if vert-ignite do use and what is your configuration?
 >>
 >> On Wed, Apr 26, 2017 at 2:16 PM, Anil  wrote:
 >> > HI,
 >> >
 >> > I am using vertx-ignite and when node is left the topology, lock
 is not
 >> > getting released and whole server is not responding.
 >> >
 >> > 2017-04-26 04:09:15 WARN  vertx-blocked-thread-checker
 >> > BlockedThreadChecker:57 - Thread
 >> > Thread[vert.x-worker-thread-82,5,ignite]
 >> > has been blocked for 2329981 ms, time limit is 6
 >> > io.vertx.core.VertxException: Thread blocked
 >> > at sun.misc.Unsafe.park(Native Method)
 >> > at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 >> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 >> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
 >> > at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
 >> > at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:161)
 >> > at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:119)
 >> > at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:488)
 >> > at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4663)
 >> > at org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1388)
 >> > at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:1117)
 >> > at io.vertx.spi.cluster.ignite.impl.MapImpl.get(MapImpl.java:81)
 >> > at io.vertx.core.impl.HAManager.chooseHashedNode(HAManager.java:590)
 >> > at io.vertx.core.impl.HAManager.checkSubs(HAManager.java:519)
 >> > at io.vertx.core.impl.HAManager.nodeLeft(HAManager.java:305)
 >> > at io.vertx.core.impl.HAManager.access$100(HAManager.java:107)
 >> > at io.vertx.core.impl.HAManager$1.nodeLeft(HAManager.java:157)
 >> > at io.vertx.spi.cluster.ignite.IgniteClusterManager.lambda$null$4(IgniteClusterManager.java:254)
 >> > at io.vertx.spi.cluster.ignite.IgniteClusterManager$$Lambda$36/837728834.handle(Unknown Source)
 >> > at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:271)
 >> > at io.vertx.core.impl.ContextImpl$$Lambda$13/116289363.run(Unknown Source)
 >> > at io.vertx.core.impl.TaskQueue.lambda$new$0(TaskQueue.java:60)
 >> > at io.vertx.core.impl.TaskQueue$$Lambda$12/443290224.run(Unknown Source)
 >> >

Re: Best way to send records to Kafka from DataStreamer Receiver

2017-06-05 Thread Nikolai Tikhonov
I'm glad to hear that you were able to solve this problem yourself! If you
face any other problems, feel free to ask.

Thanks,
Nikolay

On Mon, Jun 5, 2017 at 3:28 PM, fatih tekin <fatih.teki...@gmail.com> wrote:

> Hi Nikolai,
> Thanks for responding. That is a good idea, but we have some metrics for
> the Kafka messages and I don't want to put our common jar into Ignite's
> classpath. For now, I am sending all the messages in one go as a JSON
> string, which also reduces the network round trips.
>
> On Mon, Jun 5, 2017 at 1:58 PM, Nikolai Tikhonov <ntikho...@apache.org>
> wrote:
>
>> Hi,
>>
>> You can avoid sending a message to the local listener. From the data
>> streamer receiver you can submit a closure (which will send the message
>> to Kafka) to another thread pool.
>>
>> On Wed, May 31, 2017 at 4:13 PM, fatality <fatih.teki...@gmail.com>
>> wrote:
>>
>>> Hi
>>>
>>> I am using IgniteDataStreamer to take records and process them via its
>>> receiver, and as a result of that processing I would like to send
>>> messages for some records to another Kafka topic from inside the
>>> receiver. Could you please advise a good way to do that, or is this
>>> perhaps already possible with some common library in Ignite?
>>>
>>> Currently I am using Ignite messaging to send records to the local
>>> listeners of the data streamer, but I am not sure that is a good idea
>>> once I have heavy traffic.
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Best-way-to-send-records-to-Kafka-from-DataStreamer-Receiver-tp13291.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>
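
Nikolai's suggestion — handing the Kafka send to a separate thread pool from
inside the receiver — can be sketched as below. This is a hedged,
self-contained illustration: sendToKafka() is a stand-in for a real
KafkaProducer#send call, and no Ignite StreamReceiver APIs are used, so all
names here are illustrative.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Offload per-record side effects (e.g. a Kafka send) from a streamer
// receiver onto a dedicated thread pool so the receiver thread never
// blocks on network I/O.
public class ReceiverOffload {
    private static final ExecutorService KAFKA_POOL = Executors.newFixedThreadPool(4);

    // Stand-in for the real producer call; returns what it "sent".
    static String sendToKafka(String topic, String value) {
        return topic + ":" + value;
    }

    // Called from the receiver for each record that must be forwarded.
    static Future<String> offload(String topic, String value) {
        return KAFKA_POOL.submit(() -> sendToKafka(topic, value));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(offload("audit-topic", "record-1").get());
        KAFKA_POOL.shutdown();
    }
}
```

Under heavy traffic the pool should be bounded (e.g. a fixed queue with a
rejection policy) so it applies backpressure instead of queueing unboundedly.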


Re: Why is custom cacheStore.write() being called in clientMode?

2017-06-05 Thread Nikolai Tikhonov
Hi,

1) DHT nodes are the remote nodes that actually store the data. Let me
explain with a simple example. You start a transaction and update 10 keys.
These keys map to 3 DHT nodes (the entries will physically be stored on
those nodes). You need to update the data consistently on all nodes and
persist it through the CacheStore too. Ignite is not able to update the
store transactionally from 3 nodes (they are different processes, JVMs,
etc. Besides, how many databases do you know that support that? ;) ). For
these reasons the near node (the one that started the transaction)
propagates the updates to the store.

2) As Yakov said, you can do it via
org.apache.ignite.lifecycle.LifecycleAware. Look at
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore as an example.
You could also use a store factory for initialization; see
CacheConfiguration#setCacheStoreFactory.

3) Are you sure that you invoked loadCache() and got write() on the client
node? Could you share logs from both nodes: server and client?

4) To be honest, this question is not clear to me. Could you provide more
context about what you mean?
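
For point 2, the initialization pattern can be sketched as follows. The
interface below is a local stand-in for
org.apache.ignite.lifecycle.LifecycleAware (whose real start()/stop()
callbacks Ignite invokes when the component is deployed and undeployed on a
node), so the example runs without Ignite on the classpath; all other names
are illustrative.

```java
// Local stand-in for org.apache.ignite.lifecycle.LifecycleAware.
interface LifecycleAware {
    void start();
    void stop();
}

// A cache store that opens its resources in start() rather than lazily in
// loadCache()/write(), mirroring how CacheAbstractJdbcStore initializes.
public class MyCacheStore implements LifecycleAware {
    private boolean connected; // placeholder for a real connection/pool

    @Override public void start() {
        connected = true; // e.g. open a DataSource here
    }

    @Override public void stop() {
        connected = false; // e.g. close the DataSource here
    }

    public boolean isConnected() {
        return connected;
    }

    public static void main(String[] args) {
        MyCacheStore store = new MyCacheStore();
        store.start();
        System.out.println("connected=" + store.isConnected());
        store.stop();
    }
}
```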

On Thu, Jun 1, 2017 at 4:42 PM, rick_tem  wrote:

> Hi Yakov,
>
> Thanks for your reply, it is appreciated, but I still have questions.
>
> When you say: 1) we don't have an option to write-through from dht nodes?
> What is a dht node?  Are you saying I can't write my own write-through
> cache
> store?
> 2) I agree that initialization isn't the best in loadCache.  But how do I
> then do it?  It is Ignite that is creating this class and the cache
> returned
> from ignite.getCache is not an instance of my class.
> 3) Client mode seems inconsistent.  It is not calling loadCache() as
> expected but why would it call write()...It should be a proxy to the server
> node, not acting as a server.
> 4) Are you saying Ignite does not keep transactional state consistent
> across
> nodes for write-through?  For any cache store?  Even JDBC cache store?
>
> Thanks,
> Rick
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Why-is-custom-cacheStore-write-being-called-in-clientMode-tp13309p13317.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Affinity Issue(Collocate compute) in Ignite JDBC Query

2017-06-05 Thread Nikolai Tikhonov
Hi,

It's expected behaviour. If a query is executed over a PARTITIONED cache,
then the execution flow will be the following. The query will be parsed and
split into multiple map queries and a single reduce query. All the map
queries are executed on all the data nodes where cache data resides. All
the nodes provide result sets of local execution to the query initiator
(reducer) that, in turn, will accomplish the reduce phase by properly
merging provided result sets.
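
The map/reduce flow described above can be illustrated with a self-contained
sketch (no Ignite APIs; each inner list stands in for the result set a map
query produces on one data node, and the reducer merges the partial results
on the query initiator — all names are illustrative):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hedged sketch of the distributed SQL flow over a PARTITIONED cache: the
// same "map" query runs on every data node over its local partitions, and
// the initiating (reduce) node merges the partial result sets.
public class MapReduceQuery {
    // Stand-in for a map query executed locally on one node.
    static List<Integer> mapQuery(List<Integer> localPartitionData) {
        List<Integer> rs = new ArrayList<>();
        for (int v : localPartitionData)
            if (v % 2 == 0) rs.add(v); // e.g. WHERE val % 2 = 0
        return rs;
    }

    // Stand-in for the reduce phase on the query initiator.
    static List<Integer> reduce(List<List<Integer>> partials) {
        List<Integer> merged = new ArrayList<>();
        partials.forEach(merged::addAll);
        Collections.sort(merged); // e.g. ORDER BY val
        return merged;
    }

    public static void main(String[] args) {
        // Three "nodes", each holding some partitions of the cache.
        List<List<Integer>> perNode = List.of(
            List.of(1, 4, 7), List.of(2, 5, 8), List.of(3, 6, 9));
        List<List<Integer>> partials = new ArrayList<>();
        for (List<Integer> node : perNode)
            partials.add(mapQuery(node));
        System.out.println(reduce(partials)); // prints [2, 4, 6, 8]
    }
}
```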

On Fri, Jun 2, 2017 at 2:48 PM, sandeepbellary 
wrote:

> Hi,
>
>
> When I issue a JDBC query on collocated data in Ignite, it seems to be
> scanning all nodes instead of only the node on which the affinity key
> resides. This works fine for the Map API, but the JDBC query does not
> seem to run in a distributed manner. I also set collocated=true, but
> with no luck.
>
> Regards,
> Sandeep
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Affinity-Issue-Collocate-compute-in-Ignite-JDBC-Query-tp13338.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>

