Re: Ignite Plugin development ...

2017-11-08 Thread myset
The previous example was done with 1.0. It also works in 2.0.
Pay attention to the definition in
META-INF/services/org.apache.ignite.plugin.PluginProvider
and to the implementation itself.

Ex.
#SecurityPluginProvider comment line
ro.myset.appcore.grid.plugins.security.SecurityPluginProvider
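
For reference, a minimal provider skeleton that matches the log output below.
This is a sketch assuming the Ignite 2.x PluginProvider interface (the exact
method set varies slightly between versions); the package name comes from the
services file above.

package ro.myset.appcore.grid.plugins.security;

import java.io.Serializable;
import java.util.UUID;

import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.plugin.CachePluginContext;
import org.apache.ignite.plugin.CachePluginProvider;
import org.apache.ignite.plugin.ExtensionRegistry;
import org.apache.ignite.plugin.IgnitePlugin;
import org.apache.ignite.plugin.PluginConfiguration;
import org.apache.ignite.plugin.PluginContext;
import org.apache.ignite.plugin.PluginProvider;
import org.apache.ignite.plugin.PluginValidationException;

public class SecurityPluginProvider implements PluginProvider<PluginConfiguration> {
    // IgnitePlugin is only a marker interface; a real plugin would expose its own API here.
    private final IgnitePlugin plugin = new IgnitePlugin() {};

    // These three values are what Ignite prints under "Configured plugins:".
    @Override public String name() { return "SecurityPlugin"; }
    @Override public String version() { return "1.0"; }
    @Override public String copyright() { return "MYSET SOLUTIONS S.R.L."; }

    @SuppressWarnings("unchecked")
    @Override public <T extends IgnitePlugin> T plugin() { return (T)plugin; }

    // The remaining callbacks can be no-ops for a skeleton provider.
    @Override public void initExtensions(PluginContext ctx, ExtensionRegistry registry) {}
    @Override public <T> T createComponent(PluginContext ctx, Class<T> cls) { return null; }
    @Override public CachePluginProvider createCacheProvider(CachePluginContext ctx) { return null; }
    @Override public void start(PluginContext ctx) {}
    @Override public void stop(boolean cancel) {}
    @Override public void onIgniteStart() {}
    @Override public void onIgniteStop(boolean cancel) {}
    @Override public Serializable provideDiscoveryData(UUID nodeId) { return null; }
    @Override public void receiveDiscoveryData(UUID nodeId, Serializable data) {}
    @Override public void validateNewNode(ClusterNode node) throws PluginValidationException {}
}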



Output log...

...
[09:43:43] VM information: Java(TM) SE Runtime Environment 1.8.0_112-b16
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.112-b16
[09:43:44] Configured plugins:
[09:43:44]   ^-- SecurityPlugin 1.0
[09:43:44]   ^-- MYSET SOLUTIONS S.R.L.
[09:43:44] 
...





Re: Ignite behaving strange with Spark SharedRDD in AWS EMR Yarn Client Mode

2017-11-08 Thread futureexpert
Hi @raksja

I am using Ignite RDD with YARN client mode and am observing the same behavior
that you observed. In fact, your comment below is exactly true in my case.

transformedValues.take(5).foreach(println) 
THIS RETURNS 1 ROW AFTER SPINNING UP 100 EXECUTORS, AROUND 1.5 mins 

All my [String, String] pair RDDs always show a count of 1, and
take(5).foreach(println) always returns only 1 record.

The [Int, Int] pair RDDs work as expected, though. Were you able to find out
the reason for only 1 record being returned from the shared RDD? Thanks.





Re: Question on On-Heap Caching

2017-11-08 Thread Dmitriy Setrakyan
Naresh, several questions:

   1. How are you accessing data, with SQL or key-value APIs?
   2. Are you accessing data locally on the server or remotely from a
   client? If remotely, then you might want to enable near caching.

D.
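
For illustration, a minimal near-cache sketch on the client side (the cache
name and eviction size are placeholders, not from the thread):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheClient {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
            // Bound the near cache so it does not grow without limit on the client heap.
            nearCfg.setNearEvictionPolicy(new LruEvictionPolicy<>(100_000));

            // "myCache" stands in for the existing server-side cache name.
            IgniteCache<Integer, String> cache = ignite.getOrCreateNearCache("myCache", nearCfg);

            // Repeated reads of hot keys are then served from the local near cache.
            System.out.println(cache.get(1));
        }
    }
}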

On Thu, Nov 9, 2017 at 3:01 PM, naresh.goty  wrote:

> Thanks Alexey for the info. Actually our application is read-heavy, and we
> are seeing high latencies (based on our perf benchmark) when measuring
> response times during load tests. Based on one of the thread's
> recommendations
> (http://apache-ignite-users.70518.x6.nabble.com/10X-
> decrease-in-performance-with-Ignite-2-0-0-td12637.html#a12655),
> we are trying to check whether on-heap caching reduces latencies. But
> we did not see any noticeable difference in performance with the on-heap
> cache enabled or disabled. We are using Ignite v2.3.
>
> Thanks,
> Naresh
>
>
>
>


Re: Question on On-Heap Caching

2017-11-08 Thread naresh.goty
Thanks Alexey for the info. Actually our application is read-heavy, and we
are seeing high latencies (based on our perf benchmark) when measuring
response times during load tests. Based on one of the thread's
recommendations
(http://apache-ignite-users.70518.x6.nabble.com/10X-decrease-in-performance-with-Ignite-2-0-0-td12637.html#a12655),
we are trying to check whether on-heap caching reduces latencies. But we did
not see any noticeable difference in performance with the on-heap cache
enabled or disabled. We are using Ignite v2.3.

Thanks,
Naresh





RE: Node failed to startup due to deadlock

2017-11-08 Thread naresh.goty
Hi Alexey,

Thank you for pointing out the problem with caches causing deadlocks.
I fixed the classpath to point to 1.9, but the issue still exists.

We could actually reproduce the deadlock in a scenario where two nodes try
to come up at the same time and the first node holds a lock on a cache
(sample code attached; run in the following order):
1) Start App.java and wait until the cache is locked.
2) Start App2.java; we then see the deadlock on App2.

We also tried the patch provided in IGNITE-6380, but it rejects jobs
that are waiting:
 ex=class o.a.i.compute.ComputeExecutionRejectedException: Pending topology
found - job execution within lock or transaction was canceled., hasRes=true,
isCancelled=false, isOccupied=true]
class org.apache.ignite.IgniteException: Remote job threw exception.

To summarize our node startup process:
1) Upon start, each node creates or gets the caches.
2) It locks the caches.
3) If the caches are not already loaded, it clears them and performs the
data load.
4) It notifies other nodes about the cache load via Ignite messaging.
5) It releases the locks on the caches.

So, if multiple nodes start at the same time, all of them will try to
perform the above activities. During this process we ensure that each cache
is loaded only once.
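
In code, the startup sequence above looks roughly like this. This is only a
sketch with hypothetical cache names; note that explicit cache locks require
the cache to be TRANSACTIONAL:

import java.util.concurrent.locks.Lock;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class StartupLoader {
    static void loadOnce(Ignite ignite) {
        // Flags cache guarding the load; explicit locks require TRANSACTIONAL mode.
        CacheConfiguration<String, Boolean> flagsCfg = new CacheConfiguration<>("loadFlags");
        flagsCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        IgniteCache<String, Boolean> flags = ignite.getOrCreateCache(flagsCfg);

        Lock guard = flags.lock("dataCache"); // cluster-wide lock on a well-known key

        guard.lock();
        try {
            if (!Boolean.TRUE.equals(flags.get("dataCache"))) { // not loaded yet
                IgniteCache<String, String> data = ignite.getOrCreateCache("dataCache");

                data.clear();
                // ... perform the data load here ...
                flags.put("dataCache", Boolean.TRUE);

                ignite.message().send("cache-load", "dataCache"); // notify other nodes
            }
        }
        finally {
            guard.unlock();
        }
    }
}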

Can you please confirm whether the above use case is supported by Ignite?

Thanks
Naresh
  





Re: same cache cannot update twice in one transaction

2017-11-08 Thread veris4crm
OK, thanks for the reply.





Re: Re: When client node query a cache with JDBC storage, report miss the dataSourceBean

2017-11-08 Thread aa...@tophold.com
Got it, thanks Ilya!

So if we want to totally isolate the client from the server, we may need to
start a facade that supplies a query-only service.

Regards
Aaron


aa...@tophold.com
 
From: Ilya Kasnacheev
Date: 2017-11-08 19:57
To: user
Subject: Re: When client node query a cache with JDBC storage, report miss the 
dataSourceBean
Hello Aaron!

- In Ignite, client nodes are always aware of the backing storage
(cacheStoreFactory) of all caches. This is by design.

- In Ignite, client nodes perform operations on the backing storage DB for
transactional caches.
The reasoning here is that a transaction commit has to happen in one place, and
that place is the client which initiated the transaction.
Otherwise there is no reliable way to make sure that the backing
non-distributed DB is updated (or rolled back) properly.

- For atomic caches, client nodes should not use the cacheStore to talk to the
DB but still instantiate it in full.

Please make sure that client has all the beans required for cacheStore 
operation.

-- 
Ilya Kasnacheev

2017-11-08 13:52 GMT+03:00 aa...@tophold.com :
hi All, 

My server-side cache configuration uses JDBC storage as a back end, whose data 
source refers to a bean "serverDatasource" from the server Spring context. 

When a pure client node fetches data from the server, it always reports:


GridCachePartitionExchangeManager - Failed to process custom exchange task: 
ClientCacheChangeDummyDiscoveryMessage [reqId=1ffc2697-6548-49b3-9c
ac-a3c8a8672770, cachesToClose=null, startCaches=[ProductEntry]]
org.apache.ignite.IgniteException: Failed to load bean in application context 
[beanName=serverDatasource, 
igniteConfig=org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContex
t@2667f029: startup date [Wed Nov 08 10:36:06 UTC 2017]; root of context 
hierarchy]
at 
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:183)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:100)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1318)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1799)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCacheStartRequests(CacheAffinitySharedManager.java:428)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCachesChanges(CacheAffinitySharedManager.java:611)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:338)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2142)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2231)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
~[ignite-core-2.3.0.jar!/:2.3.0]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

 The client side should never have this "serverDatasource" in its context, 
and the client is also not supposed to touch this DB. 

Client as : 

Could you please advise how we can stop this check? Even when a read/write 
through is triggered, it is supposed to be performed by the server side, not 
the client, right? 

BTW I already set: 
System.setProperty(org.apache.ignite.IgniteSystemProperties.IGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK,
 "true");  but this does not work. 

Thanks for your time

Regards
Aaron


aa...@tophold.com



Re: same cache cannot update twice in one transaction

2017-11-08 Thread vkulichenko
No, this is not implemented yet. Here is the ticket where you can track the
progress: https://issues.apache.org/jira/browse/IGNITE-3478

-Val





Re: Two Ignite Clusters formed after network disturbance

2017-11-08 Thread vkulichenko
Amit,

1. On all nodes, clients and servers.
2-3. The easiest way is to create a JAR file with these classes and put it under
IGNITE_HOME/libs prior to start. This JAR will be picked up automatically.

-Val





Re: Ignite-cassandra module issue

2017-11-08 Thread Dmitriy Setrakyan
Hi Michael, do you have any update for the issue?

On Thu, Nov 2, 2017 at 5:14 PM, Michael Cherkasov <
michael.cherka...@gmail.com> wrote:

> Hi Tobias,
>
> Thank you for explaining how to reproduce it; I'll try your instructions. I
> spent several days trying to reproduce the issue,
> but I thought that the reason was too-high load, and I didn't stop the
> client during testing.
> I'll check your instructions and try to fix the issue.
>
> Thanks,
> Mike.
>
> 2017-10-25 16:23 GMT+03:00 Tobias Eriksson :
>
>> Hi Andrey et al
>>
>> I believe I now know what the problem is: the Cassandra session is
>> refreshed, but a prepared statement is created/used with the old session
>> before that, and so using a new session with an old prepared statement
>> does not work.
>>
>>
>>
>> The way to reproduce is
>>
>> 1)   Start Ignite Server Node
>>
>> 2)   Start client which inserts a batch of 100 elements
>>
>> 3)   End client
>>
>> 4)   Now Ignite Server Node returns the Cassandra Session to the pool
>>
>> 5)   Wait 5+ minutes
>>
>> 6)   Now Ignite Server Node does a clean-up of the “unused”
>> Cassandra sessions
>>
>> 7)   Start client which inserts a batch of 100 elements
>>
>> 8)   Boom ! The exception starts to happen
>>
>>
>>
>> Reason is
>>
>> 1)   Execute is called for a BATCH
>>
>> 2)   Prepared-statement is reused since there is a cache of those
>>
>> 3)   It is about to do session().execute( batch )
>>
>> 4)   BUT the call to session() results in refreshing the session,
>> and this is where the prepared statements, as the old session knew them,
>> are cleaned up
>>
>> 5)   Now it is looping over 100 times with a NEW session but with an
>> OLD prepared statement
>>
>>
>>
>> This is a bug.
>>
>>
>>
>> -Tobias
>>
>>
>>
>>
>>
>> *From: *Andrey Mashenkov 
>> *Reply-To: *"user@ignite.apache.org" 
>> *Date: *Wednesday, 25 October 2017 at 14:12
>> *To: *"user@ignite.apache.org" 
>> *Subject: *Re: Ignite-cassandra module issue
>>
>>
>>
>> Hi Tobias,
>>
>>
>>
>> Which Ignite version do you use? Maybe this was already fixed in the latest
>> one?
>>
>> I see a related fix included in the upcoming 2.3 version.
>>
>>
>>
>> See the IGNITE-5897 [1] issue. It is not obvious, but it fixes the session
>> init/end logic, so the session should be closed in the proper way.
>>
>>
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-5897
>>
>>
>>
>>
>>
>> On Wed, Oct 25, 2017 at 11:13 AM, Tobias Eriksson <
>> tobias.eriks...@qvantel.com> wrote:
>>
>> Hi
>>  Sorry, I did not include the context when I replied.
>>  Has anyone been able to resolve this problem? I have it too, on and
>> off.
>> In fact it sometimes happens just like that: e.g. I have been running my
>> Ignite client, then stopped it, waited a while, and ran it again,
>> and all of a sudden this error showed up. And that is the first thing that
>> happens, and there is NOT a massive amount of load on Cassandra at that
>> time. But I have also seen it when I hammer Ignite/Cassandra with
>> updates/inserts.
>>
>> This is a deal-breaker for me; I need to understand how to fix this, because
>> having this in production is not an option.
>>
>> -Tobias
>>
>>
>> Hi!
>> I'm using Cassandra as the persistence store for my caches and have one
>> issue when handling a huge amount of data (via IgniteDataStreamer from Kafka).
>> Ignite Configuration:
>> final IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
>> igniteConfiguration.setIgniteInstanceName("test");
>> igniteConfiguration.setClientMode(true);
>> igniteConfiguration.setGridLogger(new Slf4jLogger());
>> igniteConfiguration.setMetricsLogFrequency(0);
>> igniteConfiguration.setDiscoverySpi(configureTcpDiscoverySpi());
>> final BinaryConfiguration binaryConfiguration = new BinaryConfiguration();
>> binaryConfiguration.setCompactFooter(false);
>> igniteConfiguration.setBinaryConfiguration(binaryConfiguration);
>> igniteConfiguration.setPeerClassLoadingEnabled(true);
>> final MemoryPolicyConfiguration memoryPolicyConfiguration = new
>> MemoryPolicyConfiguration();
>> memoryPolicyConfiguration.setName("3Gb_Region_Eviction");
>> memoryPolicyConfiguration.setInitialSize(1024L * 1024L * 1024L);
>> memoryPolicyConfiguration.setMaxSize(3072L * 1024L * 1024L);
>>
>> memoryPolicyConfiguration.setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);
>> final MemoryConfiguration memoryConfiguration = new MemoryConfiguration();
>> memoryConfiguration.setMemoryPolicies(memoryPolicyConfiguration);
>> igniteConfiguration.setMemoryConfiguration(memoryConfiguration);
>>
>> Cache configuration:
>> final CacheConfiguration cacheConfiguration = new
>> CacheConfiguration<>();
>> cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>> cacheConfiguration.setStoreKeepBinary(true);
>> cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
>> cacheConfiguration.setBackups(0

Re: Failed to parse query: exception with Scala

2017-11-08 Thread future expert
I think I found something interesting. I see the below when I print the
current topology from visor.

[image: Inline image 1]

The "1%lo" string under the Int./Ext. IPs is the same as the string
that is included in the exception below when sharedRDDConsumer.first() or
sharedRDDConsumer.take(5).foreach(println) is executed.

java.lang.NumberFormatException: For input string: "1%lo"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)

Is this a possible bug?


>
>
>
> On Wed, Nov 8, 2017 at 11:00 AM, future expert  > wrote:
>
>> I am just trying to retrieve the already cached shared-rdd(
>> sharedRDDProducer) back from cache. I have also tried like below too
>> without success.
>>
>> *val *sharedRDDConsumer*: IgniteRDD[String, String] =
>> igniteContext.fromCache[String, String]("*sharedRDDProducer*")*
>>
>> I am getting the same error even when running the below example program
>>
>> Exception   : class javax.cache.CacheException
>> Message : class org.apache.ignite.internal.pro
>> cessors.query.IgniteSQLException: Failed to parse query: select _val
>> from String
>>
>> Also, not sure why the retrieved RDD count() always shows 1 instead of
>> the actual recordcount!
>>
>> Thanks.
>>
>>
>> On Wed, Nov 8, 2017 at 8:45 AM, Evgenii Zhuravlev <
>> e.zhuravlev...@gmail.com> wrote:
>>
>>> I don't really get, what you trying to do here:
>>>
>>> val sharedRDDConsumer = igniteContext.fromCache("sharedRDDProducer")
>>>
>>> it looks like a mistake
>>>
>>> Here is example of using ignite sql from spark in java:
>>>
>>> https://github.com/apache/ignite/blob/master/examples/src/ma
>>> in/spark/org/apache/ignite/examples/spark/SharedRDDExample.java
>>>
>>> the same for scala:
>>>
>>> https://github.com/apache/ignite/blob/master/examples/src/ma
>>> in/scala/org/apache/ignite/scalar/examples/spark/ScalarShare
>>> dRDDExample.scala
>>>
>>> 2017-11-08 19:02 GMT+03:00 future expert :
>>>
 Thanks. I currently do not have indexed types for cache
 "sharedRDDProducer" as i currently add it as below.

 *val sharedRDDProducer: IgniteRDD[String, String] =
 igniteContext.fromCache[String, String]("sharedRDDProducer")*
 *sharedRDDProducer.savePairs(jsonRdd)*

 Is the indexed types needed for sharedRDDProducer as well? If so, how
 can I add it?

 Also, I am getting the below exception with all the different types of
 datasets when trying to do a *sharedRDDConsumer.first() or 
 **sharedRDDConsumer.take(5).foreach(println).
 *I think that something is wrong with the saved sharedRDDProducer. Could
 it be an Ignite version issue?

 java.lang.NumberFormatException: For input string: "1%lo"
 at java.lang.NumberFormatException.forInputString(NumberFormatE
 xception.java:65)


 I tried the below example using [Int, Int] rdd as well but the SQL part
 at the end is giving the same exception. Do you have a working SQL query
 sample in scala using [string, string] pair rdd? Thanks.

 https://github.com/apache/ignite/blob/master/examples/src/ma
 in/scala/org/apache/ignite/scalar/examples/spark/ScalarShare
 dRDDExample.scala




 On Wed, Nov 8, 2017 at 6:27 AM, ezhuravlev 
 wrote:

> Do you have indexed types for cache "sharedRDDProducer"?
>
> like
>
> cacheCfg.setIndexedTypes(String.class, String.class);
>
> Evgenii
>
>
>
>


>>>
>>
>


Re: Failed to parse query: exception with Scala

2017-11-08 Thread future expert
Any suggestions on the below will be really appreciated.

Thanks.



On Wed, Nov 8, 2017 at 11:00 AM, future expert 
wrote:

> I am just trying to retrieve the already cached shared-rdd(
> sharedRDDProducer) back from cache. I have also tried like below too
> without success.
>
> *val *sharedRDDConsumer*: IgniteRDD[String, String] =
> igniteContext.fromCache[String, String]("*sharedRDDProducer*")*
>
> I am getting the same error even when running the below example program
>
> Exception   : class javax.cache.CacheException
> Message : class org.apache.ignite.internal.pro
> cessors.query.IgniteSQLException: Failed to parse query: select _val from
> String
>
> Also, not sure why the retrieved RDD count() always shows 1 instead of the
> actual recordcount!
>
> Thanks.
>
>
> On Wed, Nov 8, 2017 at 8:45 AM, Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> I don't really get, what you trying to do here:
>>
>> val sharedRDDConsumer = igniteContext.fromCache("sharedRDDProducer")
>>
>> it looks like a mistake
>>
>> Here is example of using ignite sql from spark in java:
>>
>> https://github.com/apache/ignite/blob/master/examples/src/
>> main/spark/org/apache/ignite/examples/spark/SharedRDDExample.java
>>
>> the same for scala:
>>
>> https://github.com/apache/ignite/blob/master/examples/src/
>> main/scala/org/apache/ignite/scalar/examples/spark/ScalarSh
>> aredRDDExample.scala
>>
>> 2017-11-08 19:02 GMT+03:00 future expert :
>>
>>> Thanks. I currently do not have indexed types for cache
>>> "sharedRDDProducer" as i currently add it as below.
>>>
>>> *val sharedRDDProducer: IgniteRDD[String, String] =
>>> igniteContext.fromCache[String, String]("sharedRDDProducer")*
>>> *sharedRDDProducer.savePairs(jsonRdd)*
>>>
>>> Is the indexed types needed for sharedRDDProducer as well? If so, how
>>> can I add it?
>>>
>>> Also, I am getting the below exception with all the different types of
>>> datasets when trying to do a *sharedRDDConsumer.first() or 
>>> **sharedRDDConsumer.take(5).foreach(println).
>>> *I think that something is wrong with the saved sharedRDDProducer. Could
>>> it be an Ignite version issue?
>>>
>>> java.lang.NumberFormatException: For input string: "1%lo"
>>> at java.lang.NumberFormatException.forInputString(NumberFormatE
>>> xception.java:65)
>>>
>>>
>>> I tried the below example using [Int, Int] rdd as well but the SQL part
>>> at the end is giving the same exception. Do you have a working SQL query
>>> sample in scala using [string, string] pair rdd? Thanks.
>>>
>>> https://github.com/apache/ignite/blob/master/examples/src/ma
>>> in/scala/org/apache/ignite/scalar/examples/spark/ScalarShare
>>> dRDDExample.scala
>>>
>>>
>>>
>>>
>>> On Wed, Nov 8, 2017 at 6:27 AM, ezhuravlev 
>>> wrote:
>>>
 Do you have indexed types for cache "sharedRDDProducer"?

 like

 cacheCfg.setIndexedTypes(String.class, String.class);

 Evgenii




>>>
>>>
>>
>


Re: Re: How the Ignite Service performance? When we test the CPU soon be occupied 100%

2017-11-08 Thread Dmitriy Setrakyan
On Wed, Oct 25, 2017 at 9:15 AM, aa...@tophold.com 
wrote:

> Thanks Andrey!  Now it is better; we are trying to move non-core logic to
> separate instances.
>
> What I learned from the last several months of using Ignite is that we should
> set up Ignite as a standalone data node and put my application logic in
> another one.
>
> Otherwise it brings too much instability to my application.  I am not sure
> whether this is the best practice?
>

It depends on your use case, but I would say that the majority of Ignite
deployments have stand-alone data nodes, so there is nothing wrong with
what you are suggesting.


Re: Multiple Cluster and Cache Joins

2017-11-08 Thread vkulichenko
Saji,

To do a join, the caches must reside in the same cluster. What is the reason for
creating separate clusters in the first place?

-Val





Multiple Cluster and Cache Joins

2017-11-08 Thread StartCoding
Hi Team

I have a scenario: I want to have two Ignite clusters. ClusterA will have
2 members and a cacheA, and ClusterB will have 2 members and a cacheB. Would
I be able to do a SQL join of cacheA and cacheB, considering that they are
in two isolated clusters on the same machine? What is the best way to handle
this situation?

Appreciate your time for response.

Thanks
Saji





Re: Ignite Plugin development ...

2017-11-08 Thread Amit Pundir
Hi,
I am writing a plugin and have followed the steps mentioned in this
conversation, but the plugin is not discovered by the Ignite node on
start-up.

I have created the META-INF/services directory on my Ignite *server* node
under the 'libs' directory. I also tried keeping it at the same level as
'libs'.

I am using Ignite 2.0. Could you please tell me how you got it to work?


Thanks





Out of memory in client node freezes complete cluster

2017-11-08 Thread Amit Pundir
Hi,
I am using Ignite 2.0. I have observed that if there is an out-of-memory
error on any Ignite client node, the complete cluster becomes unresponsive.

A few details about my caches/operations -
1. Atomicity mode - Transactional
2. Locking - Pessimistic with repeatable read.


Is this expected to happen? If so, what are the options to ensure cluster
availability, besides restarting the nodes and allocating large enough
memory to all the nodes to avoid OOM at all costs?


Thanks





Re: Failed to parse query: exception with Scala

2017-11-08 Thread future expert
I am just trying to retrieve the already-cached shared RDD
(sharedRDDProducer) back from the cache. I have also tried the below,
without success.

val sharedRDDConsumer: IgniteRDD[String, String] =
igniteContext.fromCache[String, String]("sharedRDDProducer")

I am getting the same error even when running the below example program

Exception   : class javax.cache.CacheException
Message : class org.apache.ignite.internal.processors.query.IgniteSQLException:
Failed to parse query: select _val from String

Also, not sure why the retrieved RDD count() always shows 1 instead of the
actual record count!

Thanks.


On Wed, Nov 8, 2017 at 8:45 AM, Evgenii Zhuravlev 
wrote:

> I don't really get, what you trying to do here:
>
> val sharedRDDConsumer = igniteContext.fromCache("sharedRDDProducer")
>
> it looks like a mistake
>
> Here is example of using ignite sql from spark in java:
>
> https://github.com/apache/ignite/blob/master/examples/
> src/main/spark/org/apache/ignite/examples/spark/SharedRDDExample.java
>
> the same for scala:
>
> https://github.com/apache/ignite/blob/master/examples/
> src/main/scala/org/apache/ignite/scalar/examples/spark/
> ScalarSharedRDDExample.scala
>
> 2017-11-08 19:02 GMT+03:00 future expert :
>
>> Thanks. I currently do not have indexed types for cache
>> "sharedRDDProducer" as i currently add it as below.
>>
>> *val sharedRDDProducer: IgniteRDD[String, String] =
>> igniteContext.fromCache[String, String]("sharedRDDProducer")*
>> *sharedRDDProducer.savePairs(jsonRdd)*
>>
>> Is the indexed types needed for sharedRDDProducer as well? If so, how
>> can I add it?
>>
>> Also, I am getting the below exception with all the different types of
>> datasets when trying to do a *sharedRDDConsumer.first() or 
>> **sharedRDDConsumer.take(5).foreach(println).
>> *I think that something is wrong with the saved sharedRDDProducer. Could
>> it be an Ignite version issue?
>>
>> java.lang.NumberFormatException: For input string: "1%lo"
>> at java.lang.NumberFormatException.forInputString(NumberFormatE
>> xception.java:65)
>>
>>
>> I tried the below example using [Int, Int] rdd as well but the SQL part
>> at the end is giving the same exception. Do you have a working SQL query
>> sample in scala using [string, string] pair rdd? Thanks.
>>
>> https://github.com/apache/ignite/blob/master/examples/src/
>> main/scala/org/apache/ignite/scalar/examples/spark/ScalarSh
>> aredRDDExample.scala
>>
>>
>>
>>
>> On Wed, Nov 8, 2017 at 6:27 AM, ezhuravlev 
>> wrote:
>>
>>> Do you have indexed types for cache "sharedRDDProducer"?
>>>
>>> like
>>>
>>> cacheCfg.setIndexedTypes(String.class, String.class);
>>>
>>> Evgenii
>>>
>>>
>>>
>>>
>>
>>
>


Re: Affinity Compute latency

2017-11-08 Thread rajivgandhi
Thanks Andrew.

The numbers are an average (not 90th+ percentile) over a period of 60 minutes!
We use the default deployment mode, which is SHARED.
We used 3 M4.large instances. Throughput is less than 200-300 TPS.

1. Are the numbers below your expectations?
2. Given the above, what changes are recommended?

thanks,
Rajeev





Re: Ignite Affinity Latency

2017-11-08 Thread rajivgandhi
Thanks Andrew.

The numbers are an average (not 90th+ percentile) over a period of 60 minutes!
We use the default deployment mode, which is SHARED.
We used 3 M4.large instances. Throughput is less than 200-300 TPS.

1. Are the numbers below your expectations?
2. Given the above, what changes are recommended?

thanks,
Rajeev





Re: Failed to parse query: exception with Scala

2017-11-08 Thread Evgenii Zhuravlev
I don't really get what you are trying to do here:

val sharedRDDConsumer = igniteContext.fromCache("sharedRDDProducer")

It looks like a mistake.

Here is an example of using Ignite SQL from Spark in Java:

https://github.com/apache/ignite/blob/master/examples/src/main/spark/org/apache/ignite/examples/spark/SharedRDDExample.java

The same for Scala:

https://github.com/apache/ignite/blob/master/examples/src/main/scala/org/apache/ignite/scalar/examples/spark/ScalarSharedRDDExample.scala

2017-11-08 19:02 GMT+03:00 future expert :

> Thanks. I currently do not have indexed types for cache
> "sharedRDDProducer" as i currently add it as below.
>
> *val sharedRDDProducer: IgniteRDD[String, String] =
> igniteContext.fromCache[String, String]("sharedRDDProducer")*
> *sharedRDDProducer.savePairs(jsonRdd)*
>
> Is the indexed types needed for sharedRDDProducer as well? If so, how can
> I add it?
>
> Also, I am getting the below exception with all the different types of
> datasets when trying to do a *sharedRDDConsumer.first() or 
> **sharedRDDConsumer.take(5).foreach(println).
> *I think that something is wrong with the saved sharedRDDProducer. Could
> it be an Ignite version issue?
>
> java.lang.NumberFormatException: For input string: "1%lo"
> at java.lang.NumberFormatException.forInputString(NumberFormatE
> xception.java:65)
>
>
> I tried the below example using [Int, Int] rdd as well but the SQL part at
> the end is giving the same exception. Do you have a working SQL query
> sample in scala using [string, string] pair rdd? Thanks.
>
> https://github.com/apache/ignite/blob/master/examples/
> src/main/scala/org/apache/ignite/scalar/examples/spark/
> ScalarSharedRDDExample.scala
>
>
>
>
> On Wed, Nov 8, 2017 at 6:27 AM, ezhuravlev 
> wrote:
>
>> Do you have indexed types for cache "sharedRDDProducer"?
>>
>> like
>>
>> cacheCfg.setIndexedTypes(String.class, String.class);
>>
>> Evgenii
>>
>>
>>
>>
>
>


Re: split-brain problem and GridSegmentationProcessor

2017-11-08 Thread Amit Pundir
Hi Anirudha, we can collaborate on this. Please drop me an email and we can
then discuss it.


Thanks





Re: Failed to parse query: exception with Scala

2017-11-08 Thread future expert
Thanks. I currently do not have indexed types for the cache "sharedRDDProducer",
as I currently add it as below.

val sharedRDDProducer: IgniteRDD[String, String] =
igniteContext.fromCache[String, String]("sharedRDDProducer")
sharedRDDProducer.savePairs(jsonRdd)

Are indexed types needed for sharedRDDProducer as well? If so, how can I
add them?

Also, I am getting the below exception with all the different types of
datasets when trying to do a sharedRDDConsumer.first() or
sharedRDDConsumer.take(5).foreach(println).
I think that something is wrong with the saved sharedRDDProducer. Could it
be an Ignite version issue?

java.lang.NumberFormatException: For input string: "1%lo"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)


I tried the below example using an [Int, Int] RDD as well, but the SQL part at
the end gives the same exception. Do you have a working SQL query sample in
Scala using a [String, String] pair RDD? Thanks.

https://github.com/apache/ignite/blob/master/examples/src/main/scala/org/apache/ignite/scalar/examples/spark/ScalarSharedRDDExample.scala




On Wed, Nov 8, 2017 at 6:27 AM, ezhuravlev  wrote:

> Do you have indexed types for cache "sharedRDDProducer"?
>
> like
>
> cacheCfg.setIndexedTypes(String.class, String.class);
>
> Evgenii
>
>
>
>


RE: write behind performance impacting main thread. Write behind buffer is never full

2017-11-08 Thread Alexey Popov
Hi Larry,

Please note that applyBatch() takes locks after the call to updateStore(), while 
flushCacheCoalescing() takes them before updateStore().
updateStore() does all the work of persisting your data via the CacheStore, so it 
can be quite a long and blocking operation for one flushing thread. It seems 
that you could gain something here by turning coalescing off.

1. There are no changes here b/w 2.3 and 2.1.
2. Could you please clarify your proposal for the fix? 
For your use case: 
a) If you enable coalescing, you will have ONE update (within a batch) to the DB 
while you perform multiple updates to the same cache entry. The DB load is 
reduced here, but it costs some locking overhead to resolve the updates.
b) If you disable coalescing, you will have MULTIPLE updates to the DB while you 
perform multiple updates to the same cache entry. This reduces locking 
overhead but loads your DB more heavily.
You can find some balance by tuning batch size, # of flushing threads 
with/without coalescing.

Thank you,
Alexey
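
A configuration sketch of the knobs mentioned above (the cache name and all
values are placeholders to be tuned for the workload):

import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindTuning {
    static CacheConfiguration<String, Long> offsetsCacheCfg() {
        CacheConfiguration<String, Long> ccfg = new CacheConfiguration<>("offsets");

        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);
        ccfg.setWriteBehindFlushThreadCount(4);   // more flushers = more concurrency against the store
        ccfg.setWriteBehindBatchSize(500);        // entries per updateStore() batch
        ccfg.setWriteBehindFlushSize(10_240);     // caller-thread flushing kicks in at 1.5x this size
        ccfg.setWriteBehindFlushFrequency(5_000); // flush interval, ms
        ccfg.setWriteBehindCoalescing(true);      // one store update per key per batch (2.1+)

        return ccfg;
    }
}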

From: Larry Mark
Sent: Wednesday, November 8, 2017 2:27 AM
To: user@ignite.apache.org
Cc: Henry Olschofka
Subject: Re: write behind performance impacting main thread. Write behind buffer 
is never full

Alexey,

I dug into this a bit more and it is the perfect storm of the way the write 
behind works and the way we are using one of our caches.  We need to keep our 
Kafka offsets persisted, so we have a cache whose key is a topic and 
partition.  When we get a record from that combination we update the value.  
When we are very busy we are constantly getting messages, and the contents of 
each message get distributed to many caches, but the offset goes to the same 
cache with the same key.  When that gets flushed to disk the coalescing keeps 
locking that key, and is in contention with the main thread trying to update 
the key.  Turning off coalescing does not seem to help: first of all, if I am 
reading the code correctly, it is still going to take locks in applyBatch after 
the call to updateStore, and if we have not coalesced we will take the lock on 
the same value over and over.  Also, because we rewrite that key constantly, 
the write-behind cannot keep up without coalescing.

Now that we understand what is going on we can work around this.  

Two quick questions:
- We are on 2.1; has anything changed in this area in 2.3 that might make 
this better?
- Is this use case of updating the same key unique to us, or is it common 
enough that there should be a fix to the coalescing code?

Best,

Larry


On Fri, Nov 3, 2017 at 5:14 PM, Larry Mark  wrote:
Alexey,

With our use case, setting coalescing off will probably make it worse; for at 
least some caches we are doing many updates to the same key, which is one of the 
reasons I am setting the batch size to 500.

I will send the cachestore implementation and some logs that show the 
phenomenon early next week.  Thanks for your help.

Larry 

On Fri, Nov 3, 2017 at 12:11 PM, Alexey Popov  wrote:
Hi,

Can you share your cache store implementation?

There could be several reasons for possible performance degradation in
write-behind mode.
Ignite can start flushing your cache values in the main() thread if the cache
size becomes greater than 1.5 x setWriteBehindFlushSize. It is a common case,
but it does not look like your case.

WriteBehind implementation could use ReentrantReadWriteLock while you
insert/update the Cache entries in your main thread. WriteBehind background
threads use these locks when they read and flush entries.
Such WriteBehind implementation is used when writeCoalescing is turned on by
setWriteBehindCoalescing(true); BTW, the default value is TRUE. Actually, it
makes sense only when you configure several flush threads
(setWriteBehindFlushThreadCount(X)) to have a real concurrency in multiple
reads and writes.

It is hard to believe that it could slow down your main() thread, but please
check: just add setWriteBehindCoalescing(false) to your config and try your
tests again.

Thanks,
Alexey









Re: Failed to parse query: exception with Scala

2017-11-08 Thread ezhuravlev
Do you have indexed types for cache "sharedRDDProducer"?

like

cacheCfg.setIndexedTypes(String.class, String.class);

Evgenii
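
A sketch of what that could look like for the shared-RDD cache discussed in
this thread (assuming the cache is created through IgniteContext.fromCache,
which is overloaded to accept a CacheConfiguration instead of a bare name):

import org.apache.ignite.configuration.CacheConfiguration;

public class SharedRddCacheConfig {
    static CacheConfiguration<String, String> sharedRddCacheCfg() {
        CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("sharedRDDProducer");

        // Registers String as an indexed (queryable) type, so SQL like
        // "select _val from String" can be parsed against this cache.
        cacheCfg.setIndexedTypes(String.class, String.class);

        return cacheCfg;
    }
}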





Re: Ignite Affinity Latency

2017-11-08 Thread Andrey Mashenkov
Answered in [1] thread.

[1]
http://apache-ignite-users.70518.x6.nabble.com/Affinity-Compute-latency-tt18028.html

On Wed, Nov 8, 2017 at 9:03 AM, rajivgandhi  wrote:

> Hi,
> We are seeing higher latency from Ignite affinity (single key)/compute
> (multiple keys), around 7 ms, compared to 700 microseconds for get/getAll.
>
> Is this as expected? Our implementation is based on the below examples:
> https://github.com/apache/ignite/blob/master/examples/
> src/main/java/org/apache/ignite/examples/datagrid/
> CacheAffinityExample.java
>
> thanks & Regards,
> Rajeev
>
>
>
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Unable to connect ignite pods in Kubernetes using Ip-finder

2017-11-08 Thread rishi007bansod
Hi,
I have used the TcpDiscoveryKubernetesIpFinder API and set the IP address
explicitly to https://192.168.120.92 using ipFinder.setMasterUrl. But I am
still unable to retrieve the IP addresses of the running Ignite pods. I am
using a flannel network in Kubernetes. Following is the error log I am getting:

[13:42:10,489][INFO][main][IgniteKernal]

>>>__  
>>>   /  _/ ___/ |/ /  _/_  __/ __/
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/
>>>
>>> ver. 1.9.0#20170302-sha1:a8169d0a
>>> 2017 Copyright(C) Apache Software Foundation
>>>
>>> Ignite documentation: http://ignite.apache.org

[13:42:10,491][INFO][main][IgniteKernal] Config URL: n/a
[13:42:10,491][INFO][main][IgniteKernal] Daemon mode: off
[13:42:10,491][INFO][main][IgniteKernal] OS: Linux
3.10.0-514.21.2.el7.x86_64 amd64
[13:42:10,491][INFO][main][IgniteKernal] OS user: root
[13:42:10,496][INFO][main][IgniteKernal] PID: 7
[13:42:10,496][INFO][main][IgniteKernal] Language runtime: Java Platform API
Specification ver. 1.8
[13:42:10,496][INFO][main][IgniteKernal] VM information: OpenJDK Runtime
Environment 1.8.0_111-8u111-b14-2~bpo8+1-b14 Oracle Corporation OpenJDK
64-Bit Server VM 25.111-b14
[13:42:10,499][INFO][main][IgniteKernal] VM total memory: 27.0GB
[13:42:10,499][INFO][main][IgniteKernal] Remote Management [restart: off,
REST: on, JMX (remote: off)]
[13:42:10,499][INFO][main][IgniteKernal]
IGNITE_HOME=/opt/ignite/apache-ignite-fabric-1.9.0-bin
[13:42:10,500][INFO][main][IgniteKernal] VM arguments:
[-DIGNITE_QUIET=false]
[13:42:10,500][INFO][main][IgniteKernal] Configured caches
['ignite-marshaller-sys-cache', 'ignite-sys-cache',
'ignite-atomics-sys-cache']
[13:42:10,508][INFO][main][IgniteKernal] 3-rd party licenses can be found
at: /opt/ignite/apache-ignite-fabric-1.9.0-bin/libs/licenses
[13:42:10,610][INFO][main][IgnitePluginProcessor] Configured plugins:
[13:42:10,610][INFO][main][IgnitePluginProcessor]   ^-- None
[13:42:10,610][INFO][main][IgnitePluginProcessor]
[13:42:10,703][INFO][main][TcpCommunicationSpi] Successfully bound
communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0,
selectorsCnt=28, selectorSpins=0, pairedConn=false]
[13:42:10,711][WARNING][main][TcpCommunicationSpi] Message queue limit is
set to 0 which may lead to potential OOMEs when running cache operations in
FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and
receiver sides.
[13:42:10,745][WARNING][main][NoopCheckpointSpi] Checkpoints are disabled
(to enable configure any GridCheckpointSpi implementation)
[13:42:10,787][WARNING][main][GridCollisionManager] Collision resolution is
disabled (all jobs will be activated upon arrival).
[13:42:10,792][WARNING][main][NoopSwapSpaceSpi] Swap space is disabled. To
enable use FileSwapSpaceSpi.
[13:42:10,794][INFO][main][IgniteKernal] Security status
[authentication=off, tls/ssl=off]
[13:42:11,254][INFO][main][GridTcpRestProtocol] Command protocol
successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
[13:42:11,303][INFO][main][IgniteKernal] Non-loopback local IPs: 172.17.0.4,
fe80:0:0:0:42:acff:fe11:4%eth0
[13:42:11,303][INFO][main][IgniteKernal] Enabled local MACs: 0242AC110004
[13:42:11,344][INFO][main][TcpDiscoverySpi] Successfully bound to TCP port
[port=47500, localHost=0.0.0.0/0.0.0.0,
locNodeId=3c13c077-b9cc-4f00-91e2-57749c3fea32]
[13:42:11,946][SEVERE][main][TcpDiscoverySpi] Failed to get registered
addresses from IP finder on start (retrying every 2000 ms).
class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite
pods IP addresses.
at
org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1613)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1562)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:974)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:837)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:351)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1850)
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:268)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:685)
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1626)
at
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:924)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1602)
at
org.apache.ignite.internal.Ignit
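
For reference, a minimal programmatic configuration of the Kubernetes IP
finder (the namespace and service name below are assumptions; the finder
resolves pod IPs through a Kubernetes service that must select the Ignite
pods, and the service account must be allowed to query the API server):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class KubernetesDiscoveryConfig {
    static IgniteConfiguration igniteCfg() {
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setMasterUrl("https://192.168.120.92"); // Kubernetes API server, as in the post
        ipFinder.setNamespace("default");                // namespace of the Ignite pods (assumption)
        ipFinder.setServiceName("ignite");               // service selecting the Ignite pods (assumption)

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoSpi);
        return cfg;
    }
}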

Re: When client node query a cache with JDBC storage, report miss the dataSourceBean

2017-11-08 Thread Ilya Kasnacheev
Hello Aaron!

- In Ignite, client nodes are always aware of the backing storage
(cacheStoreFactory) of all caches. This is by design.

- In Ignite, client nodes perform operations on the backing storage DB for
transactional caches.
The reasoning here is that a transaction commit has to happen in one place,
and that place is the client which initiated the transaction.
Otherwise there is no reliable way to make sure that the backing
non-distributed DB is updated (or rolled back) properly.

- For atomic caches, client nodes should not use the cacheStore to talk to
the DB but still instantiate it in full.

Please make sure that client has all the beans required for cacheStore
operation.
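
For example, a minimal Spring Java-config sketch for the client side. The
connection details are placeholders; the only hard requirement is that the
bean name matches the dataSourceBean configured for the cache store:

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

@Configuration
public class ClientDataSourceConfig {
    // Bean name must match the dataSourceBean of CacheJdbcPojoStoreFactory.
    @Bean(name = "serverDatasource")
    public DataSource serverDatasource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setUrl("jdbc:postgresql://db-host:5432/mydb"); // placeholder URL
        ds.setUsername("user");                           // placeholder credentials
        ds.setPassword("password");
        return ds;
    }
}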

-- 
Ilya Kasnacheev

2017-11-08 13:52 GMT+03:00 aa...@tophold.com :

> hi All,
>
> My server-side cache configuration uses JDBC storage as a back end, whose
> data source refers to a bean "serverDatasource" from the server Spring context.
>
> When a pure client node fetches data from the server, it always reports:
>
>
> GridCachePartitionExchangeManager -
> * Failed to process custom exchange task*: ClientCacheChangeDummyDiscover
> yMessage [reqId=1ffc2697-6548-49b3-9c
> ac-a3c8a8672770, cachesToClose=null, startCaches=[ProductEntry]]
> org.apache.ignite.IgniteException: Failed to load bean in application
> context [beanName=*serverDatasource*, igniteConfig=org.
> springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApp
> licationContex
> t@2667f029: startup date [Wed Nov 08 10:36:06 UTC 2017];
> root of context hierarchy]
> at org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(
> CacheJdbcPojoStoreFactory.java:183) ~[ignite-core-2.3.0.jar!/:2.3.0]
> at org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(
> CacheJdbcPojoStoreFactory.java:100) ~[ignite-core-2.3.0.jar!/:2.3.0]
> at org.apache.ignite.internal.processors.cache.GridCacheProcessor.
> createCache(GridCacheProcessor.java:1318) ~[ignite-core-2.3.0.jar!/:2.3.0]
> at org.apache.ignite.internal.processors.cache.GridCacheProcessor.
> prepareCacheStart(GridCacheProcessor.java:1799)
> ~[ignite-core-2.3.0.jar!/:2.3.0]
> at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.
> processClientCacheStartRequests(CacheAffinitySharedManager.
> java:428) ~[ignite-core-2.3.0.jar!/:2.3.0]
> at org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.
> processClientCachesChanges(CacheAffinitySharedManager.
> java:611) ~[ignite-core-2.3.0.jar!/:2.3.0]
> at org.apache.ignite.internal.processors.cache.GridCacheProcessor.
> processCustomExchangeTask(GridCacheProcessor.java:338) ~
> [ignite-core-2.3.0.jar!/:2.3.0]
> at org.apache.ignite.internal.processors.cache.
> GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(
> GridCachePartitionExchangeManager.java:2142) ~[ignite-core-
> 2.3.0.jar!/:2.3.0]
> at org.apache.ignite.internal.processors.cache.
> GridCachePartitionExchangeManager$ExchangeWorker.body(
> GridCachePartitionExchangeManager.java:2231) ~[ignite-core-
> 2.3.0.jar!/:2.3.0]
> at org.apache.ignite.internal.util.worker.GridWorker.run(
> GridWorker.java:110) ~[ignite-core-2.3.0.jar!/:2.3.0]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
>
>  The client side should never have this "serverDatasource" in its
> context; the client is also not supposed to touch this DB.
>
> Client as : 
>
> Could you please advise how we can stop this check? Even when a read/write
> through is triggered, it is supposed to be performed by the server side, not
> the client, right?
>
> BTW I already set: System.setProperty(org.apache.ignite.
> IgniteSystemProperties.IGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK, "true"
> );  but this does not work.
>
> Thanks for your time
>
> Regards
> Aaron
> --
> aa...@tophold.com
>


When client node query a cache with JDBC storage, report miss the dataSourceBean

2017-11-08 Thread aa...@tophold.com
hi All, 

My server-side cache configuration uses JDBC storage as a back end, whose data 
source refers to a bean "serverDatasource" from the server Spring context. 

When a pure client node fetches data from the server, it always reports:


GridCachePartitionExchangeManager - Failed to process custom exchange task: 
ClientCacheChangeDummyDiscoveryMessage [reqId=1ffc2697-6548-49b3-9c
ac-a3c8a8672770, cachesToClose=null, startCaches=[ProductEntry]]
org.apache.ignite.IgniteException: Failed to load bean in application context 
[beanName=serverDatasource, 
igniteConfig=org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContex
t@2667f029: startup date [Wed Nov 08 10:36:06 UTC 2017]; root of context 
hierarchy]
at 
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:183)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:100)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1318)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1799)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCacheStartRequests(CacheAffinitySharedManager.java:428)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCachesChanges(CacheAffinitySharedManager.java:611)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:338)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2142)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2231)
 ~[ignite-core-2.3.0.jar!/:2.3.0]
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) 
~[ignite-core-2.3.0.jar!/:2.3.0]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

 The client side should never have this "serverDatasource" in its context, 
and the client is also not supposed to touch this DB. 

Client as : 

Could you please advise how we can stop this check? Even when a read/write 
through is triggered, it is supposed to be performed by the server side, not 
the client, right? 

BTW I already set: 
System.setProperty(org.apache.ignite.IgniteSystemProperties.IGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK,
 "true");  but this does not work. 

Thanks for your time

Regards
Aaron


aa...@tophold.com


Re: Question on On-Heap Caching

2017-11-08 Thread Alexey Kukushkin
Hi,

Ignite always stores data off-heap. Enabling on-heap caching just turns the
Java heap into a cache for the off-heap memory, allowing you to configure
eviction policies specific to such a heap cache.

I believe the idea is that in some cases accessing on-heap data is faster than
accessing off-heap data, although I never saw any benchmarks or recommendations
about which data access scenarios would benefit from on-heap caching. Remember
that storing data on-heap negatively impacts GC. Maybe the community will help.
You can also benchmark your use case with and without on-heap caching and
share the results with the community.
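
For reference, enabling it looks like this (the cache name and eviction size
are placeholders):

import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapCacheConfig {
    static CacheConfiguration<Integer, String> onHeapCacheCfg() {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

        ccfg.setOnheapCacheEnabled(true); // keep deserialized copies on the Java heap
        ccfg.setEvictionPolicy(new LruEvictionPolicy<>(100_000)); // bound the heap copy to limit GC impact

        return ccfg;
    }
}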


Re: getAverageGetTime/getAveragePutTime APIs of CacheMetrics always return 0

2017-11-08 Thread headstar
Hi,

Changed to 2.3.0 and added

AverageGetTime/AveragePutTime are still 0.

Per
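
One thing worth double-checking here: cache statistics have to be enabled,
otherwise the timing metrics are not collected at all. A sketch (the cache
name is a placeholder):

import org.apache.ignite.Ignite;
import org.apache.ignite.configuration.CacheConfiguration;

public class MetricsCheck {
    static void printAvgGetTime(Ignite ignite) {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setStatisticsEnabled(true); // without this, timing metrics stay at 0

        float avgGet = ignite.getOrCreateCache(ccfg).metrics().getAverageGetTime();
        System.out.println("avg get time: " + avgGet);
    }
}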





Re: Affinity Compute latency

2017-11-08 Thread Andrey Mashenkov
Hi,

Most likely, the first compute call registers the job class, and that takes
most of the time.
Make sure your class is not unloaded between compute calls [1].

Try many more benchmark cycles, so that the benchmark runs for a few tens of
seconds, and add warm-up cycles.
You can see how this is done in the Ignite Yardstick [2] benchmarks [3].


[1] https://apacheignite.readme.io/docs/deployment-modes
[2]
https://apacheignite.readme.io/docs/perfomance-benchmarking#yardstick-ignite-benchmarks
[3] https://github.com/apache/ignite/tree/master/modules/yardstick
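
A rough shape for such a benchmark (the cache name, key range, and cycle
counts are placeholders):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCompute;

public class AffinityRunBenchmark {
    static void benchmark(Ignite ignite) {
        IgniteCompute compute = ignite.compute();

        // Warm-up cycles: let class deployment and JIT compilation settle first.
        for (int i = 0; i < 10_000; i++)
            compute.affinityRun("myCache", i % 100, () -> {});

        // Measured cycles.
        int iterations = 100_000;
        long start = System.nanoTime();

        for (int i = 0; i < iterations; i++)
            compute.affinityRun("myCache", i % 100, () -> {});

        long avgNanos = (System.nanoTime() - start) / iterations;
        System.out.println("avg affinityRun latency: " + (avgNanos / 1_000) + " us");
    }
}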

On Wed, Nov 8, 2017 at 7:58 AM, rajivgandhi  wrote:

> Hi All,
> We see a significant difference between the latency numbers for
> affinityRun (single key)/compute (multiple keys) and get/getAll. Our code is
> based on the below example:
> https://github.com/apache/ignite/blob/master/examples/
> src/main/java/org/apache/ignite/examples/datagrid/
> CacheAffinityExample.java
>
> The numbers are as follows:
> 1. get/getall: 700 microseconds
> 2. affinityRun/compute: 7 milliseconds
>
> Is this as expected?
>
> thanks,
> Rajeev
>
>
>
>



-- 
Best regards,
Andrey V. Mashenkov


Affinity Compute latency

2017-11-08 Thread rajivgandhi
Hi All,
We see a significant difference between the latency numbers for
affinityRun (single key)/compute (multiple keys) and get/getAll. Our code is
based on the below example:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheAffinityExample.java

The numbers are as follows:
1. get/getall: 700 microseconds
2. affinityRun/compute: 7 milliseconds

Is this as expected?

thanks,
Rajeev


