Re: isolated cluster configuration

2019-03-13 Thread javastuff....@gmail.com
Thank you for the response, Ilya. Using 2 different DB schemas is a way out here, but I was trying to see if there is any other way to achieve this. Maybe a property or configuration resulting in different table names for the isolated cluster, or the table rows carrying cluster details to isolate on

Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread jackluo923
Hi Ilya, I dug deeper after your response and found the root cause of the error. Platform: Ubuntu 16.04. 1. Installed openssl 1.1.0h using binaries compiled from source (reason: 16.04 does not provide any openssl 1.1 packages), then installed using the "make install" command. 2. Ubuntu 16.04

Primary partitions return zero partitions before rebalance.

2019-03-13 Thread Koitoer
Hi All. I'm trying to follow the rebalance events of my Ignite cluster so I'm able to track which partitions are assigned to each node at any point in time. I am listening to the `EVT_CACHE_REBALANCE_STARTED` and `EVT_CACHE_REBALANCE_STOPPED` events from Ignite and that is working well, except in
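
For reference, a minimal Java sketch of that kind of listener (cache and variable names here are illustrative, and the event types must also be enabled on the node for them to be recorded at all):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.events.CacheRebalancingEvent;
    import org.apache.ignite.events.EventType;
    import org.apache.ignite.lang.IgnitePredicate;

    void trackRebalance(Ignite ignite) {
        // NOTE: enable these event types via IgniteConfiguration.setIncludeEventTypes(...),
        // otherwise the node does not record them.
        IgnitePredicate<CacheRebalancingEvent> lsnr = evt -> {
            System.out.println("Rebalance event: " + evt.name() + ", cache: " + evt.cacheName());
            return true; // keep the listener subscribed
        };
        ignite.events().localListen(lsnr,
            EventType.EVT_CACHE_REBALANCE_STARTED, EventType.EVT_CACHE_REBALANCE_STOPPED);
    }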

How to use atomic operations on C++ thin client?

2019-03-13 Thread jackluo923
Are atomic operations supported on the C++ thin client? The design documentation says yes (https://cwiki.apache.org/confluence/display/IGNITE/Thin+clients+features), while the SDK documentation suggests no (https://www.gridgain.com/sdk/pe/latest/cppdoc/classignite_1_1thin_1_1cache_1_1CacheClient.html). Compilation

Re: JDK 11 support.

2019-03-13 Thread Ilya Kasnacheev
Hello! Ignite is a very complex application which indeed has to be adjusted for every new major JDK release. So yes, basically it fails for every new JDK release when APIs change. Fortunately, you can control the version of your JDK. Regards, -- Ilya Kasnacheev Wed, Mar 13, 2019 at 19:43,

Re: JDK 11 support.

2019-03-13 Thread Loredana Radulescu Ivanoff
Hello, I am very interested in this topic as well, so I've been following up. So, if I understand correctly, there is no other way to access the needed APIs without these flags, and the following (an extract from the Java documentation) is an accepted risk? "The --add-exports and --add-opens options
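
For reference, the Ignite documentation for running on JDK 9/10/11 lists JVM options roughly like the following; the exact set depends on the Ignite version, so check the docs for the release you run:

    --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED
    --add-exports=java.base/sun.nio.ch=ALL-UNNAMED
    --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED
    --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED
    --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED
    --illegal-access=permit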

Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread Ilya Kasnacheev
Hello! You can uninstall the openssl-dev package without uninstalling the openssl library. That's what is on my system: ii libssl1.0-dev:amd64 1.0.2n-1ubuntu5.3 amd64 Secure Sockets Layer toolkit - development files ii libssl1.0.0:amd64 1.0.2n-1ubuntu5.3

Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread jackluo923
Hi Igor, Thank you for the support. Unfortunately, uninstalling openssl1.1 won't be feasible because there are too many projects which depend on openssl1.1. E.g. the openjdk-8 package will be automatically uninstalled if openssl1.1 is uninstalled, which breaks the Ignite build due to missing jni

Re: Ignite not able to persist spark rdd to RDBMS in master slave mode

2019-03-13 Thread Ilya Kasnacheev
Hello! Without overwrite=true, the Data Streamer used in the underlying implementation will skip the Cache Store (along with other things). Regards, -- Ilya Kasnacheev Wed, Mar 13, 2019 at 18:10, Harshal Patil : > Hi Ilya , > Thanks for the solution , it worked . > But can you please explain why
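
A minimal Java sketch of the behavior described (the cache name and Person value class are made up; as I understand it, the overwrite flag of savePairs maps onto the streamer's allowOverwrite setting):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;

    try (IgniteDataStreamer<Long, Person> streamer = ignite.dataStreamer("personCache")) {
        // Default is allowOverwrite(false): an optimized initial-load mode that does NOT
        // invoke the configured CacheStore, so nothing reaches the RDBMS.
        streamer.allowOverwrite(true); // behave like regular cache puts; write-through is invoked
        streamer.addData(1L, new Person("Alice"));
    }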

Re: Ignite not able to persist spark rdd to RDBMS in master slave mode

2019-03-13 Thread Harshal Patil
Hi Ilya, Thanks for the solution, it worked. But can you please explain why overwrite = true is required when I run Spark in a master-slave configuration. On Wed, Mar 13, 2019 at 8:26 PM Ilya Kasnacheev wrote: > Hello! > > Please try savePairs(rdd, true). > > Hope it helps! > -- >

Re: Ignite not able to persist spark rdd to RDBMS in master slave mode

2019-03-13 Thread Ilya Kasnacheev
Hello! Please try savePairs(rdd, true). Hope it helps! -- Ilya Kasnacheev Wed, Mar 13, 2019 at 17:41, Harshal Patil : > Hi , > I am using SPARK 2.3.0 and Ignite 2.7.0 . I have enabled POSTGRES as > persistent store through gridgain automatic RDBMS integration .I have > enabled write

Ignite not able to persist spark rdd to RDBMS in master slave mode

2019-03-13 Thread Harshal Patil
Hi, I am using SPARK 2.3.0 and Ignite 2.7.0. I have enabled POSTGRES as a persistent store through the GridGain automatic RDBMS integration. I have enabled write-through cache. I could see data being persisted in POSTGRES when I am running SPARK in standalone mode, with val conf = new
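
A rough Java sketch of the working call from Ilya's suggestion above (the original code is Scala; sparkCtx is an existing JavaSparkContext, pairs an existing JavaPairRDD, and the config path, cache name and Person class are placeholders — the Java wrapper is assumed to mirror the Scala savePairs(rdd, overwrite)):

    import org.apache.ignite.spark.JavaIgniteContext;
    import org.apache.ignite.spark.JavaIgniteRDD;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    JavaIgniteContext<Long, Person> ic = new JavaIgniteContext<>(sparkCtx, "config/ignite-client.xml");
    JavaIgniteRDD<Long, Person> igniteRdd = ic.fromCache("personCache");

    // overwrite = true: the underlying data streamer allows overwrites, so the
    // write-through Cache Store (Postgres here) is invoked for each pair.
    igniteRdd.savePairs(pairs, true);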

Re: Ignite client connection issue "Cache doesn't exists ..." even though cache severs and caches are up and running.

2019-03-13 Thread ibelyakov
Hello, Could you please provide a small example of cache usage from your application which produces the described issue? Also try to invoke ignite.cacheNames() and check that the requested cache name exists in this list (cache names are case-sensitive).
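
A quick way to run that check from a client node, shown as a Java sketch ("MyCache" is a placeholder name):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    Ignite ignite = Ignition.ignite(); // or whichever Ignite instance the app already holds
    // Cache names are case-sensitive, so compare the exact string:
    System.out.println(ignite.cacheNames());
    System.out.println(ignite.cacheNames().contains("MyCache"));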

Re: QueryEntities not returning all the fields available in cache - 2.6 version

2019-03-13 Thread Ilya Kasnacheev
Hello! Do you have the field mappingId in your value objects? Note that your key (which is java.lang.String) is a different thing than mappingId in your value object, since obviously you can make them diverge. Regards, -- Ilya Kasnacheev Mon, Mar 11, 2019 at 15:34, : > Hi Igniters, > > > >
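
A hypothetical Java sketch of the point being made: mappingId must be declared as a field of the value type, independently of the java.lang.String cache key (the value type name and cache name are made up):

    import java.util.Collections;
    import java.util.LinkedHashMap;
    import org.apache.ignite.cache.QueryEntity;
    import org.apache.ignite.configuration.CacheConfiguration;

    QueryEntity qe = new QueryEntity("java.lang.String", "com.example.MappingValue");
    LinkedHashMap<String, String> fields = new LinkedHashMap<>();
    fields.put("mappingId", "java.lang.String"); // value field, separate from the cache key
    qe.setFields(fields);

    CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>("mappings");
    ccfg.setQueryEntities(Collections.singletonList(qe));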

Re: Transactions stuck after tearing down cluster

2019-03-13 Thread Ilya Kasnacheev
Hello! This sounds scary, but I doubt that anyone will investigate 2.4 behavior. Can you try to get that behavior on 2.7? Regards, -- Ilya Kasnacheev Mon, Mar 11, 2019 at 09:41, Ariel Tubaltsev : > I have an in-memory cluster of 3 nodes (2.4), replicated mode, transactional > caches. > There

Re: isolated cluster configuration

2019-03-13 Thread Ilya Kasnacheev
Hello! I guess you can supply differing data sources to the IP finders of the different clusters' instances. The data sources will point to different databases (even if within the same database server). Regards, -- Ilya Kasnacheev Sat, Mar 9, 2019 at 04:22, javastuff@gmail.com : > Hi, > > We have a
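
A minimal Java sketch of that suggestion, assuming the JDBC-based IP finder is in use and that dsClusterA is a javax.sql.DataSource pointing at this cluster's own database/schema (the other cluster would get its own DataSource):

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.jdbc.TcpDiscoveryJdbcIpFinder;

    TcpDiscoveryJdbcIpFinder ipFinder = new TcpDiscoveryJdbcIpFinder();
    ipFinder.setDataSource(dsClusterA); // isolated clusters never see each other's addresses

    IgniteConfiguration cfg = new IgniteConfiguration()
        .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));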

Re: Finding collocated data in ignite nodes

2019-03-13 Thread Ilya Kasnacheev
Hello! You can't really check it from SQL when it works, but you can compare it with non-collocated requests: 0: jdbc:ignite:thin://localhost> select * from City c join Country cc on cc.Code = c.CountryCode; ID 101 NAME Mumbai COUNTRYCODE IND DISTRICT NA POPULATION 0
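
Another check, outside SQL, is the affinity API; a rough Java sketch, assuming the tables were created via SQL so the default cache names are SQL_PUBLIC_CITY and SQL_PUBLIC_COUNTRY and both caches use the default affinity function:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.cache.affinity.Affinity;

    Affinity<Object> cityAff = ignite.affinity("SQL_PUBLIC_CITY");
    Affinity<Object> countryAff = ignite.affinity("SQL_PUBLIC_COUNTRY");

    // Entries sharing the affinity key ("IND") should land in the same partition,
    // and therefore on the same primary node.
    System.out.println(cityAff.partition("IND") == countryAff.partition("IND"));
    System.out.println(countryAff.mapKeyToNode("IND"));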

Finding collocated data in ignite nodes

2019-03-13 Thread NileshKhaire
I am trying to collocate data based on the SQL given at this link: https://ignite.apache.org/features/collocatedprocessing.html . I have created 2 caches, 'Country' and 'City', using the following SQL. -- Cache Country CREATE TABLE Country ( Code CHAR(3), Name CHAR(52), Continent CHAR(50), Region

Re: Stream continuous Data from Sql Server to ignite

2019-03-13 Thread Ilya Kasnacheev
Hello! I can see that you have CacheJdbcTable1Store but you never hook it to your cache. Instead you have CacheJdbcPojoStoreFactory. You never use CacheJdbcTable1Store, so of course it is not called. Regards, -- Ilya Kasnacheev Tue, Mar 12, 2019 at 12:29, austin solomon : > Hi Ilya, > I
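
A Java sketch of what hooking the custom store to the cache could look like, assuming CacheJdbcTable1Store implements CacheStore (the key/value types and cache name are placeholders):

    import javax.cache.configuration.FactoryBuilder;
    import org.apache.ignite.configuration.CacheConfiguration;

    CacheConfiguration<Long, Table1> ccfg = new CacheConfiguration<>("table1Cache");
    ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(CacheJdbcTable1Store.class));
    ccfg.setReadThrough(true);   // load()/loadCache() will be called on misses / initial load
    ccfg.setWriteThrough(true);  // write()/delete() will be called on cache updates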

Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread Igor Sapego
Are you sure you don't have OpenSSL 1.1 installed? Best Regards, Igor On Tue, Mar 12, 2019 at 7:18 PM jackluo923 wrote: > What is the specific SSL library version dependency needed to successfully compile > the cpp binaries under Ubuntu 16.04? This particular dependency isn't > mentioned anywhere in

Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread Igor Sapego
Also, note that since Ignite 2.8 we are going to support OpenSSL 1.1 as well as 1.0. Best Regards, Igor On Wed, Mar 13, 2019 at 12:12 PM Ilya Kasnacheev wrote: > Hello! > > My recommendation is to uninstall openssl-dev (or openssl-devel) of 1.1.0 > and install that of 1.0.0. > > Then

Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread Ilya Kasnacheev
Hello! My recommendation is to uninstall openssl-dev (or openssl-devel) of 1.1.0 and install that of 1.0.0. Then configure, etc. Regards, -- Ilya Kasnacheev Tue, Mar 12, 2019 at 21:55, jackluo923 : > Hi Ilya, > can you point me to some resource on how to point ./configure to the >

org.apache.ignite.IgniteException: For input string: “30s” in ignite hadoop execution

2019-03-13 Thread mehdi sey
I want to execute a wordcount example of Hadoop over Apache Ignite. I have used IGFS as a cache for HDFS in the Ignite configuration, but after submitting the job via Hadoop for execution on Ignite I encountered the error below. Thanks in advance to anyone who could help me! Using configuration: