Re: Closures stuck in 2.0 when try to add an element into the queue.

2017-05-16 Thread afedotov
Hi.

The question was answered on Stack Overflow. Duplicating the answer below:

It's not allowed to invoke ignite.queue and ignite.affinity methods inside an
EventListener, because it may lead to a deadlock.

All cache operations, including EventListeners, are executed in the system pool,
so it's not recommended to invoke operations that also use the system pool from
inside an EventListener.

You can read more under "Closures Execution and Thread Pools":
https://apacheignite.readme.io/docs/async-support#section-listeners-and-chaining-futures

And here
https://apacheignite.readme.io/docs/thread-pools#section-system-pool
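For illustration, here is a minimal sketch of the safe pattern (the event type, queue name
and executor are placeholders, and cache events still need to be enabled in the node
configuration): hand the queue operation off to your own thread pool instead of touching
ignite.queue() on the listener thread.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CollectionConfiguration;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class QueueFromListenerSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Dedicated pool, so queue operations never run on Ignite's system pool threads.
        ExecutorService offload = Executors.newSingleThreadExecutor();

        IgniteQueue<String> queue = ignite.queue("myQueue", 0, new CollectionConfiguration());

        ignite.events().localListen((IgnitePredicate<Event>)evt -> {
            // Do NOT call ignite.queue()/ignite.affinity() here: this code runs in the system pool.
            offload.submit(() -> queue.add(evt.name()));
            return true; // keep listening
        }, EventType.EVT_CACHE_OBJECT_PUT);
    }
}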

Kind regards,
Alex.

On Thu, May 11, 2017 at 2:17 PM, fatality [via Apache Ignite Users] <
ml+s70518n1262...@n6.nabble.com> wrote:

> Hi
>
> I have just subscribed to the user group also same question is posted at
> http://stackoverflow.com/questions/43891757/closures-
> stuck-in-2-0-when-try-to-add-an-element-into-the-queue
>

Re: Is it restricted by ignite to put elements into queue in closures

2017-05-16 Thread afedotov
Well, actually the question was answered on Stack Overflow with the following:

It's not allowed to invoke ignite.queue and ignite.affinity methods inside an
EventListener, because it may lead to a deadlock.

All cache operations, including EventListeners, are executed in the system pool,
so it's not recommended to invoke operations that also use the system pool from
inside an EventListener.

You can read more under "Closures Execution and Thread Pools":
https://apacheignite.readme.io/docs/async-support#section-listeners-and-chaining-futures

And here
https://apacheignite.readme.io/docs/thread-pools#section-system-pool



Kind regards,
Alex.

On Tue, May 16, 2017 at 11:29 PM, fatality [via Apache Ignite Users] <
ml+s70518n12934...@n6.nabble.com> wrote:

> Hi
>
> I have posted below question a while ago and still not received any
> response yet. Could you please tell me what can be accessed or can not be
> from closures that we send with remotelisteners.
>
> http://apache-ignite-users.70518.x6.nabble.com/Closures-
> stuck-in-2-0-when-try-to-add-an-element-into-the-queue-td12587.html#a12609
>
>

Is it restricted by ignite to put elements into queue in closures

2017-05-16 Thread fatality
Hi,

I posted the question below a while ago and still have not received any response.
Could you please tell me what can and cannot be accessed from closures that we
send with remote listeners?

http://apache-ignite-users.70518.x6.nabble.com/Closures-stuck-in-2-0-when-try-to-add-an-element-into-the-queue-td12587.html#a12609






Is there a way to allow nodes to join an already started task?

2017-05-16 Thread Ryan Ripken
In the GridGain days I was able to add nodes to an already started 
task.   Is there a way to do that in Ignite?  I occasionally have nodes 
disconnect or crash in a custom native library.  I'd like to be able to 
restart those failed nodes or add additional nodes if the compute isn't 
progressing as quickly as initially hoped.


If it's not possible to add nodes to an already started task, are there 
patterns or tricks that can be used to accomplish something similar?


It seems like one trick might be to take the original task (100 jobs) 
and turn it into many more tasks (10) with fewer jobs (10) per task.  
The new tasks aren't started all at once but are staggered over time, 
so that if additional nodes join (after task 1 has already started) they 
can contribute to the later tasks.
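A rough sketch of that staggering idea (the batch sizes, the delay and the callable body
are arbitrary placeholders), just to make the shape of it concrete:

import java.util.ArrayList;
import java.util.Collection;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

public class StaggeredTasksSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        int tasks = 10;       // 10 smaller tasks...
        int jobsPerTask = 10; // ...of 10 jobs each, instead of one task with 100 jobs

        for (int t = 0; t < tasks; t++) {
            Collection<IgniteCallable<Integer>> jobs = new ArrayList<>();
            for (int j = 0; j < jobsPerTask; j++) {
                int jobId = t * jobsPerTask + j;
                jobs.add(() -> jobId); // placeholder for the real work
            }

            // Each batch maps onto the topology as it is at submission time, so nodes
            // that joined after batch 0 still get work from the later batches.
            scheduler.schedule(() -> { ignite.compute().call(jobs); }, t * 30L, TimeUnit.SECONDS);
        }
    }
}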


It's a crude solution, but it seems like it would have to work. Before I 
refactor my tasks and jobs to try it out, I'm wondering if someone has a 
better suggestion.  Is there a better way to accomplish something similar?


Thanks!



Re: Ignite2.0 memory policy limit

2017-05-16 Thread Denis Magda
Ajay, 

I could successfully start a node using your configuration on my laptop. Could 
you check how many processes are running on your Linux machine and how much 
physical RAM is left for the Ignite process? You can share the output of “top” 
or a similar tool.

Btw, what’s the Linux distribution name and version? The logs don’t say it.

Did you try to start the node using the same config on another machine?

—
Denis

> On May 15, 2017, at 11:29 PM, Ajay  wrote:
> 
> HI Denis,
> 
> Please find requested data
> 
> ignite_server_config.xml (attached)
> ignite-c88d9d24.log (attached)
> 
> You can get the OS and Java version from the log file; the total RAM size was 32 GB.
> 
> 
> Thanks
> 
> 
> 
> 
> 



Igfs and MaprFs

2017-05-16 Thread Pranay Tonpay
Hi,
On the Ignite site, there are links that talk about running IGFS on top
of Apache Hadoop, HDP, and CDH, but there is no mention of MapR... Is it not
supported because of MapR's no-NameNode architecture?

Please advise.


BinaryFieldIdentityResolver

2017-05-16 Thread thammoud
Hello,

From what I understand, Ignite uses the equals of the binary image of the
key. We have a key that contains extra information that does not need to be
considered in equals/hashCode. I tried writing custom code using
BinaryFieldIdentityResolver, but it does not seem to work. Attached is the
example code. Any help will be greatly appreciated.

SimpleCacheTest.java (attached)





Re: Sizing in Ignite

2017-05-16 Thread vkulichenko
What you quote is just an example emphasizing that Ignite and a relational DB
are completely different storages and that mapping memory estimates one-to-one
is wrong. You should use the actual capacity planning guide for calculations;
it doesn't mention any multipliers like this.

-Val





Re: Ignite 1.6.0 suspected memory leak from DynamicCacheDescriptor

2017-05-16 Thread vkulichenko
Hi,

It looks like there is a client node constantly joining and leaving the topology.
What is the reason for this?

-Val





how to save data to local node and share it with other nodes

2017-05-16 Thread Libo Yu
Hi all,

I am trying to figure out how to do this with Ignite Cache:

I have Ignite cache installed on several application servers. Those application 
servers may need the same data from time to time. Here is my scenario:
after server A gets some data, it saves the data to the cache which is local to 
A. When B needs the same data, B can get it from A.

The issue with the Ignite affinity function is that when A tries to save the data, 
the data's partition may be on a different server. That makes
it really inefficient. Is there a way to save the data locally and share it 
cluster-wide? Thanks.

Libo



Re: HDFS IGFS Integration

2017-05-16 Thread Ivan Veselovsky
Hi, Pranay, 

> Does it mean that Namenode can be avoided when IGFS get deployed on top of
> HDFS ?
No. IGFS itself does not have a NameNode; it is a distributed cache storing
file blocks. But when deployed on top of HDFS, it fetches the underlying
data using the ordinary NameNode mechanism.

> Or is it that when one reads from IGFS (for mapreduce or only data ),
> there is nothing like a namenode  but for IGFS to talk to HDFS,
> namenode is still needed.
Yes.

> But now, when we access the same data, it can be accessed from IGFS
> directly and there is no need of a namenode.
Yes, if the requested data is fully cached, it will be returned to the client
without a NameNode access.





Re: Failed to wait for initial partition map exchange

2017-05-16 Thread vkulichenko
Jai,

So what is the result of the investigation? Does it look like a memory issue or
not? As I said earlier, the issue itself doesn't have a generic solution; you
need to find out the root cause.

-Val





Re: 10X decrease in performance with Ignite 2.0.0

2017-05-16 Thread Chris Berry
Hi Yakov,
I was able to try these suggestions yesterday.
2.0.0 is now only a 19% decrease in performance -- versus the original
1000+% 

This is it in more detail:
 

I do not truly understand the ramifications of using the BinaryMarshaller.
In fact, we tried the BinaryMarshaller before (in 1.8) and got a lot of OOMs.
Is there any doc that explains more succinctly what it means to use the
BinaryMarshaller?

In the end, I made only 3 substantive changes
* cacheConfig.setOnheapCacheEnabled(true)
* cache.withKeepBinary();
* igniteConfig.setMarshaller(new BinaryMarshaller());
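Taken together, those changes look roughly like this (a sketch; the cache name is a
placeholder, and note that BinaryMarshaller comes from Ignite's internal package):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.internal.binary.BinaryMarshaller;

public class BinaryTuningSketch {
    public static void main(String[] args) {
        IgniteConfiguration igniteConfig = new IgniteConfiguration();
        igniteConfig.setMarshaller(new BinaryMarshaller());

        CacheConfiguration<Integer, Object> cacheConfig = new CacheConfiguration<>("myCache");
        cacheConfig.setOnheapCacheEnabled(true); // keep hot entries deserialized on the Java heap
        igniteConfig.setCacheConfiguration(cacheConfig);

        Ignite ignite = Ignition.start(igniteConfig);

        // Work with binary views to avoid deserializing values on every access.
        IgniteCache<Integer, Object> cache = ignite.cache("myCache").withKeepBinary();
    }
}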

Thank you for your assistance.
Please, if you can see room for further improvement, let me know.

Thanks,
-- Chris 






Re: How to deploy ignite service evenly across the cluster ?

2017-05-16 Thread Alexander Fedotov
Danny, just one more question.
Are all the services deployed as cluster singletons?

Kind regards,
Alex.

On Tue, May 16, 2017 at 1:10 PM, afedotov wrote:

> Hi Danny,
>
> Could you please share your node filter/cluster group predicate?
>
> Kind regards,
> Alex.
>
> On Mon, May 15, 2017 at 10:49 PM, dany74q [via Apache Ignite Users] <[hidden
> email] > wrote:
>
>> Hey Alex.
>> Thanks for the prompt response !
>> We actually do employ a node filter - lets say we have N nodes, we deploy
>> our services to a group B, being roughly N/10 of size (20~ nodes).
>>
>> Our services are quite extensive, and so deploying them unevenly across B
>> causes GCs, stalls and other unfortunate results.
>>
>> We have implemented a round robin approach, where we create a cluster
>> group consisting of one of the nodes in B (node at position counter %
>> B_size).
>> But it has many shortcomings -
>> their deployment predicate is always fixed on that specific node, and it
>> doesn't scale when adding additional nodes with the same node attribute.
>>
>> We're looking for a different solution.
>> Question - is it possible to have an affinity key deployment strategy,
>> but not having the services "rebalance" when a node leaves / joins ?
>> Any other solutions ?
>> On Mon, May 15, 2017 at 10:12 PM afedotov [via Apache Ignite Users] <[hidden
>> email] > wrote:
>>
>>> Danny, in this case, it looks like you are right and the random don't
>>> provide even distribution.
>>> To avoid this you could devise a solution based on a node filter if
>>> affinity deployment doesn't do the trick for you
>>> https://apacheignite.readme.io/v1.8/docs/service-grid#sectio
>>> n-node-filter-based-deployment
>>>
>>> Kind regards,
>>> Alex.
>>>
>>> On Mon, May 15, 2017 at 8:34 PM, dany74q <[hidden email]
>>> > wrote:
>>> Hey Alex,
>>>
>>> The services are deployed dynamically after all nodes are started.
>>> It could be the case that the random distribution isn't "random enough"
>>> - and most of the services are concentrated on specific nodes.
>>>
>>> Could we somehow control the way those services are deployed within
>>> their cluster group, but without an affinity key ?
>>>
>>> Would love that ability !
>>>
>>> On Mon, May 15, 2017 at 4:08 PM afedotov [via Apache Ignite Users] <[hidden
>> email] > wrote:
>> Hi Danny.
>>
>> At the moment, cluster singleton services are deployed on a random basis.
>> How many Ignite nodes do you run?
>> How did you deploy services: after all the nodes had been started or you
>> have the services specified in configuration?
>>
>> Kind regards,
>> Alex.
>>
>> On Sat, May 13, 2017 at 1:06 AM, dany74q [via Apache Ignite Users] <[hidden
>> email] > wrote:
>> Hi everyone,
>>
>> We use ignite's (1.7.4) service grid to deploy hundreds of (different)
>> services in our cluster.
>> There's a behavior we've noticed, once deploying that many services, that
>> they tend do deploy unevenly in the cluster,
>> that means that we can get to a point where a great number of services
>> are deployed on a single node, instead of spreading them out more evenly.
>>
>> Is there any way to guarantee a random or even distribution, or a
>> round-robin deployment policy on our nodes, without using affinity keys
>> (Partitions may shift, and so services can get cancelled and redeployed
>> with no reason) ?
>>
>> Thanks !
>> - Danny
>>

Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-05-16 Thread Andrey Gura
Hi,

There was a problem with incorrect transaction timeout handling [1]
that was fixed in Ignite 1.8. It is possible that this is your case.

[1] https://issues.apache.org/jira/browse/IGNITE-2797
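For reference, an explicit timeout can also be passed when the pessimistic transaction is
started, so a lock that cannot be acquired fails the transaction instead of blocking forever
(a sketch; the timeout value and cache/key are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class TxTimeoutSketch {
    static void update(Ignite ignite, IgniteCache<Integer, String> cache) {
        // 10 second timeout, tx size hint 0 (unknown).
        try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ, 10_000, 0)) {
            cache.put(1, "value");
            tx.commit();
        }
    }
}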

On Thu, May 11, 2017 at 1:51 AM, bintisepaha  wrote:
> Hey guys, we had a key lock issue again on 1.7.0. here is a suspicious thread
> dump. Is this helpful for tracking down our issue further?
> we did not see any topology changes or any other exceptions.
>
> Attaching the entire thread dump too tdump.zip
> 
>
> "pub-#7%DataGridServer-Production%" Id=47 in WAITING on
> lock=org.apache.ignite.internal.util.future.GridFutureAdapter$ChainFuture@2094df59
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:159)
>   at
> org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:117)
>   at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4800)
>   at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4783)
>   at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:1395)
>   at
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(IgniteCacheProxy.java:956)
>   at
> com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.updatePosition(OrderHolderSaveRunnable.java:790)
>   at
> com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.cancelPosition(OrderHolderSaveRunnable.java:805)
>   at
> com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.cancelExistingTradeOrderForPositionUpdate(OrderHolderSaveRunnable.java:756)
>   at
> com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.processOrderHolders(OrderHolderSaveRunnable.java:356)
>   at
> com.tudor.datagridI.server.tradegen.OrderHolderSaveRunnable.run(OrderHolderSaveRunnable.java:109)
>   at
> org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4V2.execute(GridClosureProcessor.java:2184)
>   at
> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:509)
>   at
> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6521)
>   at
> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:503)
>   at
> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:456)
>   at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>   at
> org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1161)
>   at
> org.apache.ignite.internal.processors.job.GridJobProcessor$JobExecutionListener.onMessage(GridJobProcessor.java:1766)
>   at
> org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1238)
>   at
> org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:866)
>   at
> org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:106)
>   at
> org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:829)
>   at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
>
>   Locked synchronizers: count = 1
>  <1237e0be>  - java.util.concurrent.ThreadPoolExecutor$Worker@1237e0be
>
>
>
>
>


Re: OOM Issue with Eviction Event listener - Ignite 1.9.0

2017-05-16 Thread Andrey Gura
Hi,

Anyway, such an increase in memory consumption is too suspicious to me.
Could you please provide more information about your use case: cluster
topology, Ignite and cache configurations? A simple reproducer would be
great in order to analyse the problem.

On Mon, May 1, 2017 at 11:05 PM, Pradeep Badiger
 wrote:
> Hi Andrey,
>
> I tried with -Xmx3072m -Xmx3072m and I still get OOM. If I try with 4GB heap 
> setting then it works with GC running continuously. With 8GB, it works 
> without any issues.
>
> Is it just that it needs more heap to perform unmarshalling and give the 
> evicted entry to the listener?
>
> [15:33:08] Ignite node started OK (id=3a14ca7d, 
> grid=IgniteWithCacheEvictListener)
> [15:33:08] Topology snapshot [ver=1, servers=1, clients=0, CPUs=8, heap=3.0GB]
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:3236)
> at java.lang.StringCoding.safeTrim(StringCoding.java:79)
> at java.lang.StringCoding.encode(StringCoding.java:365)
> at java.lang.String.getBytes(String.java:941)
> at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.doWriteString(BinaryWriterExImpl.java:435)
> at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.writeStringField(BinaryWriterExImpl.java:1102)
> at 
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.write(BinaryFieldAccessor.java:506)
> at 
> org.apache.ignite.internal.binary.BinaryClassDescriptor.write(BinaryClassDescriptor.java:784)
> at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal0(BinaryWriterExImpl.java:206)
> at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:147)
> at 
> org.apache.ignite.internal.binary.BinaryWriterExImpl.marshal(BinaryWriterExImpl.java:134)
> at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.marshal(GridBinaryMarshaller.java:239)
> at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.marshalToBinary(CacheObjectBinaryProcessorImpl.java:521)
> at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toBinary(CacheObjectBinaryProcessorImpl.java:914)
> at 
> org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryProcessorImpl.toCacheObject(CacheObjectBinaryProcessorImpl.java:859)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheContext.toCacheObject(GridCacheContext.java:1792)
> at 
> org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.updateAllInternal(GridLocalAtomicCache.java:834)
> at 
> org.apache.ignite.internal.processors.cache.local.atomic.GridLocalAtomicCache.put0(GridLocalAtomicCache.java:147)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2276)
> at 
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2253)
> at 
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1375)
> at 
> com.example.IgniteWithCacheEvictListener.main(IgniteWithCacheEvictListener.java:63)
>
> -Original Message-
> From: Andrey Gura [mailto:ag...@apache.org]
> Sent: Monday, May 01, 2017 11:34 AM
> To: user@ignite.apache.org
> Subject: Re: OOM Issue with Eviction Event listener - Ignite 1.9.0
>
> Hi,
>
> Listener just lead to additional Event objects instantiation. You should give 
> more memory for your Java process.
>
> On Mon, May 1, 2017 at 5:54 PM, Pradeep Badiger  
> wrote:
>> Hi,
>>
>>
>>
>> I am facing an OOM Exception when eviction policy is turned on. I have
>> attached an eclipse project that has two test programs. One is set
>> with eviction event listener and another one is not.
>>
>>
>>
>> The test program with eviction listener throws an OOM error almost
>> immediately after the ignite is initialized. The one without the
>> listener works fine.
>>
>>
>>
>> I ran both the test programs with –Xmx512m –Xms512m.
>>
>>
>>
>> Can someone let me know if there are any issues with my configurations?
>>
>>
>>
>> Thanks,
>>
>> Pradeep V.B.
>>


Re: Failed to wait for initial partition map exchange

2017-05-16 Thread jaipal
Hi Val,

I tried the recommended settings and configurations. I still observe this issue,
and it can be reproduced easily.

Is there any fix available for this issue?

Regards
Jai





Re: Kindly tell me where to find these jar files.

2017-05-16 Thread Humphrey
It seems like you need to start with the basics first. This is not the right
place to learn Java.
Search online and do more research.

Here are some links explaining Maven:
https://youtu.be/JK9oZfScQgg
https://youtu.be/uEYjXpMDJiU

There are more out there. Play around a little with Maven and Java first.





Re: How to deploy ignite service evenly across the cluster ?

2017-05-16 Thread afedotov
Hi Danny,

Could you please share your node filter/cluster group predicate?

Kind regards,
Alex.
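
For reference, a node-filter / cluster-group based deployment typically looks something
like the following (a sketch; the attribute name, service name and service implementation
are hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterGroup;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class ServiceDeploySketch {
    /** Hypothetical service implementation. */
    static class MyService implements Service {
        @Override public void init(ServiceContext ctx) { /* prepare resources */ }
        @Override public void execute(ServiceContext ctx) { /* service loop */ }
        @Override public void cancel(ServiceContext ctx) { /* cleanup */ }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Only nodes that set this user attribute (via IgniteConfiguration.setUserAttributes)
        // are eligible for deployment.
        ClusterGroup svcNodes = ignite.cluster().forAttribute("svc.role", "service");

        // A single instance of the service somewhere inside that group.
        ignite.services(svcNodes).deployClusterSingleton("mySingleton", new MyService());
    }
}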

On Mon, May 15, 2017 at 10:49 PM, dany74q [via Apache Ignite Users] <
ml+s70518n12861...@n6.nabble.com> wrote:

> Hey Alex.
> Thanks for the prompt response !
> We actually do employ a node filter - lets say we have N nodes, we deploy
> our services to a group B, being roughly N/10 of size (20~ nodes).
>
> Our services are quite extensive, and so deploying them unevenly across B
> causes GCs, stalls and other unfortunate results.
>
> We have implemented a round robin approach, where we create a cluster
> group consisting of one of the nodes in B (node at position counter %
> B_size).
> But it has many shortcomings -
> their deployment predicate is always fixed on that specific node, and it
> doesn't scale when adding additional nodes with the same node attribute.
>
> We're looking for a different solution.
> Question - is it possible to have an affinity key deployment strategy, but
> not having the services "rebalance" when a node leaves / joins ?
> Any other solutions ?
> On Mon, May 15, 2017 at 10:12 PM afedotov [via Apache Ignite Users] <[hidden
> email] > wrote:
>
>> Danny, in this case, it looks like you are right and the random don't
>> provide even distribution.
>> To avoid this you could devise a solution based on a node filter if
>> affinity deployment doesn't do the trick for you
>> https://apacheignite.readme.io/v1.8/docs/service-grid#
>> section-node-filter-based-deployment
>>
>> Kind regards,
>> Alex.
>>
>> On Mon, May 15, 2017 at 8:34 PM, dany74q <[hidden email]
>> > wrote:
>> Hey Alex,
>>
>> The services are deployed dynamically after all nodes are started.
>> It could be the case that the random distribution isn't "random enough" -
>> and most of the services are concentrated on specific nodes.
>>
>> Could we somehow control the way those services are deployed within their
>> cluster group, but without an affinity key ?
>>
>> Would love that ability !
>>
>> On Mon, May 15, 2017 at 4:08 PM afedotov [via Apache Ignite Users] <[hidden
> email] > wrote:
> Hi Danny.
>
> At the moment, cluster singleton services are deployed on a random basis.
> How many Ignite nodes do you run?
> How did you deploy services: after all the nodes had been started or you
> have the services specified in configuration?
>
> Kind regards,
> Alex.
>
> On Sat, May 13, 2017 at 1:06 AM, dany74q [via Apache Ignite Users] <[hidden
> email] > wrote:
> Hi everyone,
>
> We use ignite's (1.7.4) service grid to deploy hundreds of (different)
> services in our cluster.
> There's a behavior we've noticed, once deploying that many services, that
> they tend do deploy unevenly in the cluster,
> that means that we can get to a point where a great number of services are
> deployed on a single node, instead of spreading them out more evenly.
>
> Is there any way to guarantee a random or even distribution, or a
> round-robin deployment policy on our nodes, without using affinity keys
> (Partitions may shift, and so services can get cancelled and redeployed
> with no reason) ?
>
> Thanks !
> - Danny
>

Re: Ignite2.0 Data Strorage

2017-05-16 Thread Andrey Mashenkov
Hi Ajay,

This is not supported.

An Ignite cache is divided into partitions, and the partitions are distributed
among the nodes.
Partition distribution depends on the AffinityFunction [1] you configured.
The affinity functions available by default are designed to produce a
close-to-equal distribution.
However, you may try to implement your own to meet your needs.

[1]
https://apacheignite.readme.io/docs/affinity-collocation#affinity-function
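As an illustration, this is roughly where the affinity function is plugged in (a sketch;
the cache name and partition count are placeholders), and a custom implementation would
be set the same way:

import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class AffinitySketch {
    static CacheConfiguration<Integer, String> cacheConfig() {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");

        // Default rendezvous affinity, 1024 partitions spread across the nodes;
        // a custom AffinityFunction implementation could be passed here instead.
        cfg.setAffinity(new RendezvousAffinityFunction(false, 1024));

        return cfg;
    }
}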

On Mon, May 15, 2017 at 8:45 PM, Ajay  wrote:

> Hi,
>
> I have two servers X and Y. I can store 100 entries on X. If I get 500
> entries to put into the cache, can the data that exceeds X's capacity be
> placed on server Y along with Y's data? If possible, please let me know how to achieve this.
>
>
> Thanks
>
>
>
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Understanding the mechanics of peer class loading

2017-05-16 Thread Dmitry Pavlov
Hi Ilya,

I suppose the following: GridDeploymentClassLoader being used to load the message
class on the receiver side is a mandatory condition for reproduction. But for now
this is only a hypothesis.

In my test, if all nodes have the message payload class on their classpath, there
are no problems with messaging under high load (around 10^6 messages in
32 threads). The same is true if the application class loader is used for
loading the message class (no WAR files and no Jetty).

Could you please check the class loaders used on the sender side(s) and on the
receiver side. Is GridDeploymentClassLoader used for loading the message
payload class at the receiver?

Sincerely,
Dmitry Pavlov



On Tue, 16 May 2017 at 12:17, Ilya wrote:

> Hi Dmitry,
>
> Unfortunately, I've did not yet manage to reproduce this issue outside of
> our project.
>
> What do you mean by "GridDeploymentClassLoader is used for loading class
> on server"? How can server classloaders be configured? How does a remote
> node choose, in which classloader to deploy the received class?
>
> My test configuration is as follows:
>
>- Web application that has a single cache and has two IgniteMessaging
>local listeners;
>- It's run from JUnit test code under Jetty;
>- The three instances form a cluster;
>- One client sends messages to one of the topics (to the random node)
>using IgniteMessaging.
>
> All of these works on a single JVM. I suggest that Jetty servers might use
> dedicated classloaders for each web-app. I've tried to launch client from
> both test code and another web-app under Jetty, but that did not change
> anything.
>
> In fact, the failing test on our production application is launched in the
> same manner: two exploded web-apps (client and server), three instances of
> server app and all of these is run under Jettys in a single VM...
>
> On Sun, May 14, 2017 at 2:22 PM, Dmitry Pavlov [via Apache Ignite Users] 
> <[hidden
> email] > wrote:
>
>> Hi, Ilya,
>>
>> I've tried to reproduce deployment problem in standalone project
>> involving Ingnite.start() in several WAR files. But this test is still
>> passing.
>>
>> It is still possible deployment problem can be reprdoced only
>> when GridDeploymentClassLoader is used for loading class on server, and
>> several different Web App class loaders are enabled on clients.
>>
>> Do you have standalone reproducer you can share?
>>
>> Best Regards,
>> Dmitry Pavlov
>>
>>
>>
>> On Fri, 12 May 2017 at 15:58, Ilya <[hidden email]> wrote:
>>
> Hi all!
>>>
>>> The question was originally asked (but not answered) on SO:
>>>
>>> http://stackoverflow.com/questions/43803402/how-does-peer-classloading-work-in-apache-ignite
>>>
>>> In short, we have "Failed to deploy user message" exceptions under high
>>> load
>>> in our project.
>>>
>>> Here is an overview of our architecture:
>>> - Distributed cache on three nodes, all nodes run on a single workstation
>>> (in this test);
>>> - Workers on each node;
>>> - Messaging between workers is done using IgniteMessaging (topic has the
>>> type of String and I've tried both byte[] and ByteBuffer as a message
>>> class);
>>> - Client connects to the cluster and triggers some business logic, that
>>> causes cross-node messaging, scan queries and MR jobs (using
>>> IgniteCompute::broadcast). All of these may performed concurrently.
>>>
>>> I've tried both SHARED and CONTINUOUS deployment mode, but the result
>>> remains the same.
>>>
>>> I've noticed lots of similar messages in the logs:
>>> /2017-05-05 13:31:28 INFO   org.apache.ignite.logger.java.JavaLogger info
>>> Removed undeployed class: GridDeployment [ts=1493980288578,
>>> depMode=CONTINUOUS, clsLdr=WebAppClassLoader=MyApp@38815daa,
>>> clsLdrId=36c3828db51-0d65e7d5-77bf-444d-9b8b-d18bde94ad13, userVer=0,
>>> loc=true, sampleClsName=java.lang.String, pendingUndeploy=false,
>>> undeployed=true, usage=0]
>>> ...
>>> 2017-05-05 13:31:29 INFO   org.apache.ignite.logger.java.JavaLogger info
>>> Removed undeployed class: GridDeployment [ts=1493980289125,
>>> depMode=CONTINUOUS, clsLdr=WebAppClassLoader=MyApp@355f6680,
>>> clsLdrId=1dd3828db51-1b20df7a-a98d-45a3-8ab6-e5d229945830, userVer=0,
>>> loc=true, sampleClsName=java.lang.String, pendingUndeploy=false,
>>> undeployed=true, usage=0]
>>> .../
>>>
>>> This happens when I use ByteBuffer as message type. In case of byte[],
>>> class
>>> B[ is being constantly re-deployed.
>>>
>>> ScanQuery predicate and IgniteCompute caller are also being constantly
>>> re-deployed.
>>> If we disable ScanQueries and IgniteCompute broadcasts - all is fine,
>>> there
>>> are no re-deployments.
>>>
>>> For the further testing I've disabled MRs and kept ScanQueries. I've also
>>> added some debug output to a fresh snapshot of Ignite 2.1.0. Messages
>>> "Class
>>> locally deployed: " usually come from the
>>> following
>>> call stack:
>>>
>>> 

Re: Performance issue with Replicated Cache among 7 nodes

2017-05-16 Thread Ramzinator
Hi Andrey,

It turns out that we had increased our failureDetectionTimeout way too
much. After setting it to something more sensible, the issue stopped.
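For anyone who hits the same thing, the setting in question lives on IgniteConfiguration
(a sketch; the 10-second value is just an example, not a recommendation):

import org.apache.ignite.configuration.IgniteConfiguration;

public class FailureDetectionSketch {
    static IgniteConfiguration config() {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Failure detection timeout in milliseconds; an overly large value delays
        // detection of unresponsive nodes and stalls cluster-wide operations.
        cfg.setFailureDetectionTimeout(10_000);

        return cfg;
    }
}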

Thanks for the info,
Rami





Re: Understanding the mechanics of peer class loading

2017-05-16 Thread Ilya
Hi Dmitry,

Unfortunately, I did not yet manage to reproduce this issue outside of
our project.

What do you mean by "GridDeploymentClassLoader is used for loading class on
server"? How can server classloaders be configured? How does a remote node
choose in which classloader to deploy the received class?

My test configuration is as follows:

   - Web application that has a single cache and has two IgniteMessaging
   local listeners;
   - It's run from JUnit test code under Jetty;
   - The three instances form a cluster;
   - One client sends messages to one of the topics (to the random node)
   using IgniteMessaging.

All of this runs on a single JVM. I suspect that Jetty might use
dedicated classloaders for each web app. I've tried to launch the client from
both test code and another web app under Jetty, but that did not change
anything.

In fact, the failing test on our production application is launched in the
same manner: two exploded web apps (client and server), three instances of
the server app, and all of these running under Jetty in a single JVM...
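
For completeness, the moving parts involved boil down to roughly this (a heavily simplified
sketch; the topic name and payload are placeholders for what we actually send):

import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DeploymentMode;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.lang.IgniteBiPredicate;

public class MessagingSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setPeerClassLoadingEnabled(true);             // message payloads, predicates, jobs...
        cfg.setDeploymentMode(DeploymentMode.CONTINUOUS); // ...are shipped between nodes on demand

        Ignite ignite = Ignition.start(cfg);

        // Receiver side: local listener on a String topic.
        ignite.message().localListen("my-topic", (IgniteBiPredicate<UUID, byte[]>)(nodeId, payload) -> {
            System.out.println("Got " + payload.length + " bytes from " + nodeId);
            return true; // keep listening
        });

        // Sender side: send a byte[] payload to that topic.
        ignite.message().send("my-topic", new byte[] {1, 2, 3});
    }
}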

On Sun, May 14, 2017 at 2:22 PM, Dmitry Pavlov [via Apache Ignite Users] <
ml+s70518n12831...@n6.nabble.com> wrote:

> Hi, Ilya,
>
> I've tried to reproduce deployment problem in standalone project involving
> Ingnite.start() in several WAR files. But this test is still passing.
>
> It is still possible deployment problem can be reprdoced only
> when GridDeploymentClassLoader is used for loading class on server, and
> several different Web App class loaders are enabled on clients.
>
> Do you have standalone reproducer you can share?
>
> Best Regards,
> Dmitry Pavlov
>
>
>
> On Fri, 12 May 2017 at 15:58, Ilya <[hidden email]> wrote:
>
>> Hi all!
>>
>> The question was originally asked (but not answered) on SO:
>> http://stackoverflow.com/questions/43803402/how-does-
>> peer-classloading-work-in-apache-ignite
>>
>> In short, we have "Failed to deploy user message" exceptions under high
>> load
>> in our project.
>>
>> Here is an overview of our architecture:
>> - Distributed cache on three nodes, all nodes run on a single workstation
>> (in this test);
>> - Workers on each node;
>> - Messaging between workers is done using IgniteMessaging (topic has the
>> type of String and I've tried both byte[] and ByteBuffer as a message
>> class);
>> - Client connects to the cluster and triggers some business logic, that
>> causes cross-node messaging, scan queries and MR jobs (using
>> IgniteCompute::broadcast). All of these may performed concurrently.
>>
>> I've tried both SHARED and CONTINUOUS deployment mode, but the result
>> remains the same.
>>
>> I've noticed lots of similar messages in the logs:
>> /2017-05-05 13:31:28 INFO   org.apache.ignite.logger.java.JavaLogger info
>> Removed undeployed class: GridDeployment [ts=1493980288578,
>> depMode=CONTINUOUS, clsLdr=WebAppClassLoader=MyApp@38815daa,
>> clsLdrId=36c3828db51-0d65e7d5-77bf-444d-9b8b-d18bde94ad13, userVer=0,
>> loc=true, sampleClsName=java.lang.String, pendingUndeploy=false,
>> undeployed=true, usage=0]
>> ...
>> 2017-05-05 13:31:29 INFO   org.apache.ignite.logger.java.JavaLogger info
>> Removed undeployed class: GridDeployment [ts=1493980289125,
>> depMode=CONTINUOUS, clsLdr=WebAppClassLoader=MyApp@355f6680,
>> clsLdrId=1dd3828db51-1b20df7a-a98d-45a3-8ab6-e5d229945830, userVer=0,
>> loc=true, sampleClsName=java.lang.String, pendingUndeploy=false,
>> undeployed=true, usage=0]
>> .../
>>
>> This happens when I use ByteBuffer as message type. In case of byte[],
>> class
>> B[ is being constantly re-deployed.
>>
>> ScanQuery predicate and IgniteCompute caller are also being constantly
>> re-deployed.
>> If we disable ScanQueries and IgniteCompute broadcasts - all is fine,
>> there
>> are no re-deployments.
>>
>> For the further testing I've disabled MRs and kept ScanQueries. I've also
>> added some debug output to a fresh snapshot of Ignite 2.1.0. Messages
>> "Class
>> locally deployed: " usually come from the
>> following
>> call stack:
>> /org.apache.ignite.internal.managers.deployment.GridDeploymentLocalStore.
>> recordDeploy(GridDeploymentLocalStore.java:404)
>> at
>> org.apache.ignite.internal.managers.deployment.GridDeploymentLocalStore.
>> deploy(GridDeploymentLocalStore.java:333)
>> at
>> org.apache.ignite.internal.managers.deployment.GridDeploymentLocalStore.
>> getDeployment(GridDeploymentLocalStore.java:201)
>> at
>> org.apache.ignite.internal.managers.deployment.GridDeploymentManager.
>> getLocalDeployment(GridDeploymentManager.java:383)
>> at
>> org.apache.ignite.internal.managers.deployment.GridDeploymentManager.
>> getDeployment(GridDeploymentManager.java:345)
>> at
>> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.
>> injectResources(GridCacheQueryManager.java:918)
>> at
>> org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.
>> 

Re: 10X decrease in performance with Ignite 2.0.0

2017-05-16 Thread Yakov Zhdanov
Chris, any news?

--Yakov

2017-05-13 1:05 GMT+03:00 Denis Magda :

> Chris,
>
> These are some links for reference:
>
> 1. BinaryObject and BinaryObjectBuilder interfaces usage:
> https://apacheignite.readme.io/docs/binary-marshaller#
> section-binaryobject-cache-api
> https://apacheignite.readme.io/docs/binary-marshaller#
> section-modifying-binary-objects-using-binaryobjectbuilder
>
> 2. Page memory on-heap caching: https://apacheignite.readme.
> io/docs/page-memory#on-heap-caching
>
> —
> Denis
>
> > On May 12, 2017, at 1:16 PM, Chris Berry  wrote:
> >
> > Thank you.
> > I will try on Monday (-ish)
> >
> > Yes, the objects are large. (average ~0.25MB)
> >
> > Could you please tell me the magic config I will need to try both these
> > options?
> > If not, I will do my homework.
> >
> > Thank you again,
> > -- Chris
> >
> >
> >
> > --
> > View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/10X-decrease-in-performance-with-
> Ignite-2-0-0-tp12637p12670.html
> > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
>


Re: export ignite data

2017-05-16 Thread dkarachentsev
Hi,

Probably the reason for the client reconnection was a long GC pause on it.
Try to use a scan query [1], it's lighter than SQL, and check that you have
enough heap.

[1] https://apacheignite.readme.io/v1.8/docs/cache-queries#scan-queries
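For example, an export based on a scan query could look roughly like this (a sketch;
the cache name and types are placeholders):

import javax.cache.Cache;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ScanExportSketch {
    static void export(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.cache("myCache");

        // The cursor streams entries instead of materializing a full SQL result set,
        // which keeps heap usage on the client low.
        try (QueryCursor<Cache.Entry<Integer, String>> cursor =
                 cache.query(new ScanQuery<Integer, String>())) {
            for (Cache.Entry<Integer, String> e : cursor)
                System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}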

Thanks!
-Dmitry





Re: PutIfAbsent issues

2017-05-16 Thread rick_tem
Hi Andrey,

Yes, I figured out what this issue was and it was not Ignite related.

Thanks for your reply.

Best,
Rick





Re: Ignite2.0 memory policy limit

2017-05-16 Thread Ajay
HI Denis,

Please find requested data

ignite_server_config.xml (attached)
ignite-c88d9d24.log (attached)

You can get the OS and Java version from the log file; the total RAM size was 32 GB.


Thanks




