ReadWriteLock

2016-12-02 Thread akaptsan
As far as I can see, Ignite does not provide an implementation of java.util.concurrent.locks.ReadWriteLock. Could you please advise: is it possible to implement something similar using the existing Ignite API? Do you have any plans to implement ReadWriteLock?
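For context, the semantics being asked about are those of the JDK's own ReentrantReadWriteLock: any number of concurrent readers, or one exclusive writer. A minimal JDK-only sketch of the pattern the poster wants in distributed form (this does not use Ignite at all; it only illustrates the desired contract):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {
    private static final ReadWriteLock RW = new ReentrantReadWriteLock();
    private static int value;

    // Writers take the exclusive write lock.
    static void write(int v) {
        RW.writeLock().lock();
        try {
            value = v;
        } finally {
            RW.writeLock().unlock();
        }
    }

    // Readers share the read lock; many threads can hold it at once.
    static int read() {
        RW.readLock().lock();
        try {
            return value;
        } finally {
            RW.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        write(42);
        System.out.println(read()); // 42
    }
}
```

Ignite's distributed locks are exclusive, so a distributed read/write variant would have to be layered on top of them by the application.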

Re: Fetching large number of records

2016-12-02 Thread Anil
Hi Val, Thanks for the clarification. I understand now and will give it a try. Thanks. On 2 December 2016 at 23:17, vkulichenko wrote: > Anil, > > The JdbcQueryTask is executed each time the next page is needed. And the > number of rows returned by the task is

Re: Performance with increase in node

2016-12-02 Thread Denis Magda
Hi Sam, I guess that the degradation happens because all your server nodes are started on a single machine as separate processes. The processes compete for system resources (I/O, CPU, RAM), there are more context switches at the OS kernel level, and all this leads to worse performance.

Performance with increase in node

2016-12-02 Thread javastuff....@gmail.com
Hi, We are observing really good performance with a single node; however, with multiple nodes in the cluster, performance is degrading, or we can say application throughput is not scaling. I wrote a sample program to put 100K entries into the cache in a single thread using put() and later

Re: Unable to create cluster of Apache Ignite Server Containers running on individual VMs

2016-12-02 Thread vkulichenko
Hi, You can take a look at the JavaDoc of BasicAddressResolver [1]. This is the implementation provided out of the box. [1] https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/configuration/BasicAddressResolver.java -Val

Re: Unable to create cluster of Apache Ignite Server Containers running on individual VMs

2016-12-02 Thread Piali Mazumder Nath
Hi Val, Is there any example code showing the usage of the address resolver in an XML configuration file? I tried searching but couldn't find any. Can you please help? On Thu, Dec 1, 2016 at 5:09 PM, vkulichenko wrote: > Hi, > > Address resolver is needed if there are private and
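A hedged Spring XML sketch of what Val's JavaDoc link describes: BasicAddressResolver takes a map from internal (private) addresses to external (public) ones, and is set on the IgniteConfiguration bean. The IP values here are purely illustrative:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="addressResolver">
        <bean class="org.apache.ignite.configuration.BasicAddressResolver">
            <constructor-arg>
                <map>
                    <!-- internal address -> external address (illustrative values) -->
                    <entry key="10.0.0.1" value="203.0.113.1"/>
                    <entry key="10.0.0.2" value="203.0.113.2"/>
                </map>
            </constructor-arg>
        </bean>
    </property>
</bean>
```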

Re: Cluster hung after a node killed

2016-12-02 Thread javastuff....@gmail.com
Attaching 4 classes - Node1.java - creates an empty cache node. Node2.java - seeds the cache with 100K data. Node3.java - takes an explicit lock on a key and waits for 15 seconds before unlocking. Node4.java - fetches cached data. Steps - 1. Run Node1, wait for the empty node to boot up. 2. Run Node2, wait for
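A hedged sketch of what Node3.java presumably does: IgniteCache#lock returns a java.util.concurrent.locks.Lock, and explicit locks require a TRANSACTIONAL cache. The cache and key names are illustrative, not taken from the attachments:

```java
// Explicit per-key lock; the key is hypothetical.
Lock lock = cache.lock("someKey");
lock.lock();
try {
    Thread.sleep(15_000); // hold the lock for 15 seconds, as Node3.java does
} finally {
    lock.unlock(); // other nodes blocked on this key can now proceed
}
```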

Re: about GAR files and task deployment

2016-12-02 Thread vkulichenko
Is this the full class name? No package? -Val -- View this message in context: http://apache-ignite-users.70518.x6.nabble.com/about-GAR-files-and-task-deployment-tp9322p9374.html Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Fetching large number of records

2016-12-02 Thread vkulichenko
Anil, The JdbcQueryTask is executed each time the next page is needed. And the number of rows returned by the task is limited by fetchSize: if (rows.size() == fetchSize) break; // if fetchSize is 0 then unlimited The cursor is cached and reused there, so this task is executed
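The paging logic Val quotes can be illustrated with a plain-Java sketch (the names are illustrative, not the actual JdbcQueryTask internals): each task execution drains up to fetchSize rows from a cursor that is kept and reused, and a short page signals the end of the result set:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class PagingDemo {
    /** Drains up to fetchSize rows from the cursor; fetchSize == 0 means unlimited. */
    static List<Integer> nextPage(Iterator<Integer> cursor, int fetchSize) {
        List<Integer> rows = new ArrayList<>();
        while (cursor.hasNext()) {
            rows.add(cursor.next());
            if (rows.size() == fetchSize) // if fetchSize is 0 then unlimited
                break;
        }
        return rows;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 25; i++)
            data.add(i);

        // The cursor survives across page requests, as in the real task.
        Iterator<Integer> cursor = data.iterator();
        int pages = 0;
        List<Integer> page;
        do {
            page = nextPage(cursor, 10);
            pages++;
        } while (page.size() == 10); // a short page means we are done
        System.out.println(pages); // 3 pages: 10 + 10 + 5 rows
    }
}
```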

Re: Cassandra cache info

2016-12-02 Thread vkulichenko
Hi Riccardo, If you are working with SQL queries, then you have to load all required data into Ignite in advance. Ignite will not go to Cassandra in this case and will return the result based only on entries that are already in caches. However, if you use key-value API, both read-through and
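A hedged configuration sketch of the key-value path Val describes: read-through and write-through are enabled on the CacheConfiguration, so key-value reads that miss in Ignite fall through to the underlying store, while SQL queries only see entries already loaded. The store factory below is a placeholder for whatever the ignite-cassandra module setup provides:

```java
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("persons");

// Misses on key-value reads go to the underlying CacheStore (Cassandra here).
ccfg.setReadThrough(true);
// Writes are propagated to the store as well.
ccfg.setWriteThrough(true);
// Placeholder: the real factory comes from the Cassandra integration config.
ccfg.setCacheStoreFactory(cassandraStoreFactory);
```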

Re: Persistent Store & Ignite Streamer

2016-12-02 Thread Andrey Mashenkov
Hi Labard, This is the default behavior of the data streamer: by default, it won't rewrite data in the store. This is useful if you fill the cache with data from the store that the cache is backed by. See link [1] for more information. [1]
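The behavior Andrey describes can be changed on the streamer itself; a hedged sketch, assuming this is what Labard needs: with allowOverwrite(true) the streamer goes through the normal cache update path, which is presumably what lets write()/writeAll() fire:

```java
try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
    // Default is false: existing entries are not overwritten and the
    // streamer bypasses the cache store, so write()/writeAll() never fire.
    streamer.allowOverwrite(true);

    streamer.addData(1L, "value");
    streamer.flush(); // with allowOverwrite(true) the store should now be invoked
}
```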

Cassandra cache info

2016-12-02 Thread Riccardo Iacomini
I would like to ask some info about the Cassandra integration I could not find in the documentation. - Is there a way one could use ignite cache selectively on one or more particular cassandra's tables, and still query all the data using only the Ignite driver? I mean, can I set a table

Persistent Store & Ignite Streamer

2016-12-02 Thread Labard
Hello. I use a persistent store for my cache, and when I put data in the cache with cache.put() all is ok. But if I use streamer.addData() and then streamer.flush(), the write and writeAll methods are not invoked. Can you please tell me what the problem is? Labard

Re: Cache Memory Behavior \ GridDhtLocalPartition

2016-12-02 Thread Nikolay Tikhonov
Hi Isaeed Mohanna, Could you please try to run Ignite with the following JVM option: -DIGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE=100? For atomic caches Ignite does not remove an entry from memory immediately, to avoid data consistency issues, and this behaviour can increase memory consumption.
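The flag is a JVM system property, so it goes on the node's launch command line; a sketch with an illustrative classpath and main class:

```shell
java -DIGNITE_ATOMIC_CACHE_DELETE_HISTORY_SIZE=100 \
     -cp "ignite-libs/*:app.jar" org.example.ServerNode
```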

Re: How IGFS keep sync with HDFS?

2016-12-02 Thread Vladimir Ozerov
Hi, IGFS always propagates updates to HDFS immediately (or with a slight delay in case of writes in DUAL_ASYNC mode). It doesn't remove data from memory after flushing it to HDFS. You can try configuring org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy to evict some data blocks

Re: Cluster can not let more than one client to do continuous queries

2016-12-02 Thread Nikolai Tikhonov
Hi, You started the grid with peer class loading disabled. In this case you need to deploy the CacheEntryEventFilterFactory class on all nodes in the topology. Please make sure that all nodes contain the jar with the factory in the classpath, or set peerClassLoadingEnabled to true. On Mon, Nov 28, 2016 at
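A minimal configuration sketch of the second option Nikolai mentions (the first being to put the jar containing CacheEntryEventFilterFactory on every node's classpath):

```java
IgniteConfiguration cfg = new IgniteConfiguration();

// Allow classes such as the remote filter factory to be peer-deployed
// to server nodes instead of shipping the jar manually.
cfg.setPeerClassLoadingEnabled(true);

Ignite ignite = Ignition.start(cfg);
```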

Re: Ignite Cassandra AWS test framework

2016-12-02 Thread Riccardo Iacomini
Thank you Igor, unfortunately the issue I was referring to is still present in the documentation. Actually the AWS test deployment section is still the same. I also tried running the deployment scripts in AWS without the

Re: Cache size in Partitioned mode as seen in visor

2016-12-02 Thread Vasiliy Sisko
Hello Mohammad. I reproduced your use case in Visor Console. The console's "cache -a" command shows in the size column the sum of primary keys and backup keys: in your case that is (15000 primary keys + 15000 backup keys) / 4 nodes = 7500 per node. Created an issue to divide cache size into primary and

Re: Can ignite-spark support spark 1.6.x?

2016-12-02 Thread Sergey Kozlov
Hi Kaiming, At least the Hadoop edition of Apache Ignite can be compiled against Apache Spark 1.5.2 and 1.6.3. On Fri, Dec 2, 2016 at 7:58 AM, Kaiming Wan <344277...@qq.com> wrote: > hi, > > I have found that ignite-spark-1.7.0 doesn't support spark 2.0.x. > Does > it support spark 1.6.x? > > >