Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread Ilya Kasnacheev
Hello!

My recommendation is to uninstall openssl-dev (or openssl-devel) of 1.1.0
and install that of 1.0.0.

Then configure, etc.

Regards,
-- 
Ilya Kasnacheev


Tue, 12 Mar 2019 at 21:55, jackluo923 wrote:

> Hi Ilya,
> can you point me to some resource on how to point ./configure to the
> openssl 1.0 install location? I am a newbie at configuring environments for
> C++-based projects.
>
> What I have tried so far:
> a) ./configure --with-libssl-prefix=/usr/local/ssl
>  - configure: WARNING: unrecognized options: --with-libssl-prefix
> b) ./configure --with-ssl=/usr/local/ssl
>  - configure: WARNING: unrecognized options: --with-ssl
> c) in ./configure --help, there is a "-I flag" to specify headers in a
> non-standard directory
>  - I am not sure if this is the proper route to take, and I seem to be
> using it the wrong way
>  - "./configure -I/usr/local/ssl" and "./configure -I /usr/local/ssl"
>- both result in configure: error: unrecognized option:
> `-I/usr/local/ssl'
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread Igor Sapego
Also, note that since Ignite 2.8 we are going to support OpenSSL 1.1 as
well as 1.0.

Best Regards,
Igor




Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread Igor Sapego
Are you sure you don't have OpenSSL 1.1 installed?

Best Regards,
Igor


On Tue, Mar 12, 2019 at 7:18 PM jackluo923 
wrote:

> What is the specific SSL library version required to successfully compile
> the cpp binaries under Ubuntu 16.04? This particular dependency isn't
> mentioned anywhere in DEVNotes.txt or the other documentation.
>
> It seems like 1.0.2g-1ubuntu4.15 on Ubuntu 16.04 is incompatible with
> Ignite's source code.
>
> build error message:
> make[3]: Entering directory
> '/tmp/apache-ignite-installation/apache-ignite-2.7.0-bin/platforms/cpp/thin-client'
>
>   CXX  src/impl/ssl/secure_socket_client.lo
> In file included from ./src/impl/ssl/ssl_bindings.h:21:0,
>  from src/impl/ssl/secure_socket_client.cpp:27:
> ./src/impl/ssl/ssl_bindings.h:135:28: error: expression list treated as
> compound expression in initializer [-fpermissive]
>  inline int SSL_library_init()
> ^
> In file included from src/impl/ssl/secure_socket_client.cpp:27:0:
> ./src/impl/ssl/ssl_bindings.h:136:17: error: expected ‘,’ or ‘;’ before ‘{’
> token
>  {
>  ^
> In file included from ./src/impl/ssl/ssl_bindings.h:21:0,
>  from src/impl/ssl/secure_socket_client.cpp:27:
> ./src/impl/ssl/ssl_bindings.h:144:29: error: variable or field
> ‘OPENSSL_init_ssl’ declared void
>  inline void SSL_load_error_strings()
>  ^
> src/impl/ssl/secure_socket_client.cpp: In static member function ‘static
> void* ignite::impl::thin::ssl::SecureSocketClient::MakeContext(const
> string&, const string&, const string&)’:
> src/impl/ssl/secure_socket_client.cpp:199:35: error:
> ‘ignite::impl::thin::ssl::OPENSSL_init_ssl’ cannot be used as a function
>  (void)SSL_library_init();
>^
> src/impl/ssl/secure_socket_client.cpp:201:29: error:
> ‘ignite::impl::thin::ssl::OPENSSL_init_ssl’ cannot be used as a function
>  SSL_load_error_strings();
>  ^
> src/impl/ssl/secure_socket_client.cpp:222:44: error: ‘SSL_CTRL_OPTIONS’ was
> not declared in this scope
>  ssl::SSL_CTX_ctrl(ctx, SSL_CTRL_OPTIONS, flags,
> NULL);
> ^~~~
> src/impl/ssl/secure_socket_client.cpp:222:44: note: suggested alternative:
> ‘SSL_CTRL_CHAIN’
>  ssl::SSL_CTX_ctrl(ctx, SSL_CTRL_OPTIONS, flags,
> NULL);
> ^~~~
> SSL_CTRL_CHAIN
> Makefile:617: recipe for target 'src/impl/ssl/secure_socket_client.lo'
> failed
> make[3]: *** [src/impl/ssl/secure_socket_client.lo] Error 1
> make[3]: Leaving directory
> '/tmp/apache-ignite-installation/apache-ignite-2.7.0-bin/platforms/cpp/thin-client'
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Stream continuous Data from Sql Server to ignite

2019-03-13 Thread Ilya Kasnacheev
Hello!

I can see that you have CacheJdbcTable1Store, but you never hook it up to your
cache. Instead you have CacheJdbcPojoStoreFactory.

Since you never use CacheJdbcTable1Store, of course it is not called.

Regards,
-- 
Ilya Kasnacheev


Tue, 12 Mar 2019 at 12:29, austin solomon wrote:

> Hi Ilya,
> I cannot upload the project to git, hence attaching the zip here.
> Please use this and guide me on where I am going wrong.
> IgniteDeltaLoad.7z
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t619/IgniteDeltaLoad.7z>
>
>
> Thanks,
> Austin
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Finding collocated data in ignite nodes

2019-03-13 Thread NileshKhaire
I am trying to collocate data based on the SQL given in this link:

https://ignite.apache.org/features/collocatedprocessing.html

I have created 2 caches 'Country' and 'City' using following SQLs.

-- Cache Country
CREATE TABLE Country (
Code CHAR(3),
Name CHAR(52),
Continent CHAR(50),
Region CHAR(26),
SurfaceArea DECIMAL(10,2),
Population INT(11),
Capital INT(11),
PRIMARY KEY (Code)) WITH "template=partitioned, backups=1";

--Cache City
CREATE TABLE City (
ID INT(11),
Name CHAR(35),
CountryCode CHAR(3),
District CHAR(20),
Population INT(11),
PRIMARY KEY (ID, CountryCode)
) WITH "template=partitioned, backups=1, affinityKey=CountryCode"; 
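With `affinityKey=CountryCode`, a City row is mapped to a partition by its CountryCode rather than its full primary key, so it always lands in the same partition as the matching Country row. A toy sketch of that idea (illustrative only — the real RendezvousAffinityFunction and its hash are different):

```python
PARTITIONS = 1024

def partition(affinity_key):
    # Stand-in for Ignite's affinity function: deterministic on the key.
    return sum(affinity_key.encode()) % PARTITIONS

# Country is partitioned by its primary key Code;
# City is partitioned by its declared affinity key CountryCode.
country_part = {code: partition(code) for code in ('IND', 'RU')}
city_part = {cid: partition(code) for cid, code in ((101, 'IND'), (102, 'RU'))}

assert city_part[101] == country_part['IND']  # Mumbai lands with Country 'IND'
assert city_part[102] == country_part['RU']   # Moscow lands with Country 'RU'
```

The affinity key only controls which partition a row goes to; which node holds which partitions is decided separately by the affinity function.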

I have inserted some sample records, for example :

insert into Country values('RU','Rusia','Rusia','Rusia',0.0,00,0);
insert into Country values('IND','India','Asia','Asia',0.0,00,0);


insert into City values(101,'Mumbai','IND','NA',00);
insert into City values(102,'Moscow','RU','NA',00);

I have started 2 Ignite nodes (on different machines) to collocate data on
different nodes. After scanning the records present on node 0 through Visor

cache -scan -c=@c0 -id8=@n0

I can see that both cities, Mumbai and Moscow, are present on node 0 (n0) as well
as on node 1. I was expecting that cities of India would be collocated on
node 0 and cities of Russia on node 1, not both on the
same node.

My questions are:

1. Am I doing anything wrong while collocating the data?
2. Is running the Visor cache -scan command the correct way to find collocated
data on nodes?
3. If this is not the correct way, how can we find which data is collocated
on node 0 and node 1?
4. Let's say data is collocated on node 0 (cities of India) and node 1
(cities of Russia). What will happen if one of the nodes is
disconnected from the cluster? Will there be data loss? After restarting the
node, will the data be collocated again?

Thank you in advance.

PS: I have already asked this question on Stack Overflow but didn't get an
answer:
https://stackoverflow.com/questions/55100844/finding-collocated-data-in-ignite

I have already tried the collocated=true and local=true approaches. I also
tried removing the backups=1 flag from the SQL and starting a 3rd node, but
nothing is working. Hope I will get an answer here :)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Finding collocated data in ignite nodes

2019-03-13 Thread Ilya Kasnacheev
Hello!

You can't really check from SQL that collocation works, but you can compare a
collocated join with a non-collocated one:

0: jdbc:ignite:thin://localhost> select * from City c join Country cc on
cc.Code = c.CountryCode;
ID   101
NAME Mumbai
COUNTRYCODE  IND
DISTRICT NA
POPULATION   0
CODE IND
NAME India
CONTINENTAsia
REGION   Asia
SURFACEAREA  0.0
POPULATION   0
CAPITAL  0

ID   102
NAME Moscow
COUNTRYCODE  RU
DISTRICT NA
POPULATION   0
CODE RU
NAME Rusia
CONTINENTRusia
REGION   Rusia
SURFACEAREA  0.0
POPULATION   0
CAPITAL  0

2 rows selected (0,021 seconds)

but

0: jdbc:ignite:thin://localhost> select * from City c join Country cc on
cc.Code != c.CountryCode;
No rows selected (0,013 seconds)

You would expect the latter query to return two rows like the former, but it
returns zero, since in the former case the data is collocated and in the
latter it's not.

Regards,
-- 
Ilya Kasnacheev




Re: isolated cluster configuration

2019-03-13 Thread Ilya Kasnacheev
Hello!

I guess you can supply different data sources to the IP finders of the
different clusters' instances. The data sources will point to different
databases (even if within the same database server).
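For DB discovery this maps to giving each cluster's TcpDiscoveryJdbcIpFinder its own data source. A sketch of one cluster's config (the PGSimpleDataSource and the JDBC URL are assumptions for illustration; the second cluster would use an identical config pointing at a different database):

```xml
<!-- Cluster A; cluster B uses the same config with a different JDBC URL/database -->
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
      <property name="ipFinder">
        <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.jdbc.TcpDiscoveryJdbcIpFinder">
          <property name="dataSource">
            <!-- assumed host/database names -->
            <bean class="org.postgresql.ds.PGSimpleDataSource">
              <property name="url" value="jdbc:postgresql://dbhost:5432/ignite_discovery_a"/>
            </bean>
          </property>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```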

Regards,
-- 
Ilya Kasnacheev


Sat, 9 Mar 2019 at 04:22, javastuff@gmail.com wrote:

> Hi,
>
> We have a requirement for 2 separate cache clusters isolated from each
> other.
> We have 2 separate configuration files and Java programs to initialize them.
> We achieved this by using non-intersecting IPs and ports for the different
> clusters while using TCP discovery.
>
> However, we need to achieve the same using DB discovery. Is there a way to
> configure 2 separate cache clusters using DB discovery?
>
> Thanks,
> Sam
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Transactions stuck after tearing down cluster

2019-03-13 Thread Ilya Kasnacheev
Hello!

This sounds scary, but I doubt that anyone will investigate 2.4 behavior.
Can you try to reproduce that behavior on 2.7?

Regards,
-- 
Ilya Kasnacheev


Mon, 11 Mar 2019 at 09:41, Ariel Tubaltsev wrote:

> I have an in-memory cluster of 3 nodes (2.4), replicated mode, transactional
> caches.
> There is a client sending transactions to the cluster.
>
> 1. I bring down all 3 server nodes.
> 2. Bring all of them back.
> 3. The client sends some transactions - it's stuck, with no visible progress.
>
> Logs show that the client is still using an old topology version -
> 163575049 - when the servers are using a new one - 5.
> Are there any additional steps to take on the client side after reconnect,
> e.g. a waiting period?
>
> DEBUG GridDhtColocatedCache:454 -  Client topology version
> mismatch,
> need remap lock request [reqTopVer=AffinityTopologyVersion [topVer=5,
> minorTopVer=0], locTopVer=AffinityTopologyVersion [topVer=5,
> minorTopVer=1],
> req=GridNearLockRequest [topVer=AffinityTopologyVersion [topVer=5,
> minorTopVer=0], miniId=525, dhtVers=[null],
> subjId=f00f8691-8dcf-4919-aaff-3c1f25f1b757, taskNameHash=0, createTtl=-1,
> accessTtl=-1, flags=3, filter=null, super=GridDistributedLockRequest
> [nodeId=f00f8691-8dcf-4919-aaff-3c1f25f1b757, nearXidVer=GridCacheVersion
> [topVer=163575049, order=1552095656672, nodeOrder=5], threadId=221,
> futId=20fc9106961-a0d19da2-fba4-4805-89dc-ae1195ebb183, timeout=0,
> isInTx=true, isInvalidate=false, isRead=false, isolation=SERIALIZABLE,
> retVals=[true], txSize=0, flags=0, keysCnt=1,
> super=GridDistributedBaseMessage [ver=GridCacheVersion [topVer=163575049,
> order=1552095656672, nodeOrder=5], committedVers=null, rolledbackVers=null,
> cnt=0, super=GridCacheIdMessage [cacheId=288276891]
>
> Not sure if it adds anything, there is also bunch of these:
> DEBUG GridDhtTxRemote:454 - Invalid transaction state transition
> [invalid=PREPARED, cur=PREPARED, tx=GridDhtTxRemote
> [nearNodeId=74418337-ff2d-41ea-b93d-0dc371614b68,
> rmtFutId=7855bef5961-1f4f7cdf-c7b2-4141-815f-86563df4b23d,
> nearXidVer=GridCacheVersion [topVer=163568372, order=1552092449386,
> nodeOrder=4], storeWriteThrough=false, super=GridDistributedTxRemoteAdapter
> [explicitVers=null, started=true, commitAllowed=0,
> txState=IgniteTxRemoteStateImpl [readMap={}, writeMap={IgniteTxKey
> [key=com.google.protobuf.ByteString$LiteralByteString [idHash=1280551205,
> hash=-847442509, bytes=[101, 0, 0, 0, 0, 0, 0, 0, 10, 0, 0, 0], hash=0],
> cacheId=288276891]=IgniteTxEntry
> [key=com.google.protobuf.ByteString$LiteralByteString [idHash=1280551205,
> hash=-847442509, bytes=[101, 0, 0, 0, 0, 0, 0, 0, 10, 0, 0, 0], hash=0],
> cacheId=288276891, txKey=IgniteTxKey
> [key=com.google.protobuf.ByteString$LiteralByteString [idHash=1280551205,
> hash=-847442509, bytes=[101, 0, 0, 0, 0, 0, 0, 0, 10, 0, 0, 0], hash=0],
> cacheId=288276891], val=[op=CREATE,
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: QueryEntities not returning all the fields available in cache - 2.6 version

2019-03-13 Thread Ilya Kasnacheev
Hello!

Do you have a field mappingId in your value objects? Note that your key
(which is java.lang.String) is a different thing than mappingId in your
value object, since obviously you can make them diverge.

Regards,
-- 
Ilya Kasnacheev


Mon, 11 Mar 2019 at 15:34:

> Hi Igniters,
>
>
>
> I am trying to retrieve cache metadata, i.e. its field names
> and data types. But I am facing one weird issue: the metadata returns a
> valid response for some caches and an invalid one for others. I am giving my
> cache configuration for reference. The cluster has been up and running for
> some time now, and we have not altered/added any fields to any specific
> cache (I am aware of one existing issue: if you alter a cache, its metadata
> is not updated and can't be retrieved from QueryEntities). Any help or
> pointers on this would be appreciated.
>
> E.g. for the below cache it returns only one field, i.e. associateId.
> However, it is supposed to return both of the fields mentioned in the
> 'fields' property below.
>
>
>
> <bean parent="cache-template">
>     <property name="queryEntities">
>         <list>
>             <bean class="org.apache.ignite.cache.QueryEntity">
>                 <property name="keyType" value="java.lang.String" />
>                 <property name="valueType"
>                     value="com.digitalapi.edif.customer.model.EntMapAssociate" />
>                 <property name="tableName" value="ENT_MAP_ASSOCIATE" />
>                 <property name="keyFieldName" value="mappingId" />
>                 <property name="fields">
>                     <map>
>                         <entry key="associateId" value="java.lang.String" />
>                         <entry key="mappingId" value="java.lang.String" />
>                     </map>
>                 </property>
>                 <property name="aliases">
>                     <map>
>                         <entry key="associateId" value="ASSOCIATE_ID" />
>                         <entry key="mappingId" value="MAPPING_ID" />
>                     </map>
>                 </property>
>             </bean>
>         </list>
>     </property>
> </bean>
>
>
>
>
>
> Thanks and Regards,
>
> Kamlesh Joshi
>
>
>
>
>


Re: Ignite client connection issue "Cache doesn't exists ..." even though cache severs and caches are up and running.

2019-03-13 Thread ibelyakov
Hello,

Could you please provide a small example of the cache usage from your
application that reproduces the described issue?

Also, try invoking ignite.cacheNames() and check that the requested cache name
exists in the returned list (it is case-sensitive):
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/Ignite.html#cacheNames--
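The lookup is an exact, case-sensitive match; a minimal sketch of that check (the cache names below are made up):

```python
# Illustrative only: Ignite cache names are matched exactly, with no
# case folding, much like a membership test on ignite.cacheNames().
cache_names = {'SQL_PUBLIC_CITY', 'myCache'}

def cache_exists(name):
    return name in cache_names

assert cache_exists('myCache')
assert not cache_exists('mycache')  # wrong case -> cache "doesn't exist"
```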

Regards,
Igor



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite not able to persist spark rdd to RDBMS in master slave mode

2019-03-13 Thread Harshal Patil
Hi,
I am using Spark 2.3.0 and Ignite 2.7.0. I have enabled Postgres as a
persistent store through the GridGain automatic RDBMS integration, and I have
enabled the write-through cache. I can see data being persisted in Postgres
when I am running Spark in standalone mode, with

val conf = new SparkConf()

conf.setMaster("local[*]")

But when I have a master configured for Spark, like

conf.setMaster("spark://harshal-patil.local:7077")

my data is not getting persisted in Postgres, but I can see the cache is
updated.

I am doing the operation: ic.fromCache("RoleCache").savePairs(rdd)

Please help me understand what could be going wrong.


Regards ,

Harshal


Re: Ignite not able to persist spark rdd to RDBMS in master slave mode

2019-03-13 Thread Ilya Kasnacheev
Hello!

Please try savePairs(rdd, true).

Hope it helps!
-- 
Ilya Kasnacheev




Re: Ignite not able to persist spark rdd to RDBMS in master slave mode

2019-03-13 Thread Harshal Patil
Hi Ilya,
Thanks for the solution, it worked.
But can you please explain why overwrite = true is required when I
run Spark in the master-slave configuration?



Re: Ignite not able to persist spark rdd to RDBMS in master slave mode

2019-03-13 Thread Ilya Kasnacheev
Hello!

With overwrite=false (the default), the Data Streamer used in the underlying
implementation will skip the Cache Store (along with other things).
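A toy model of that streamer behavior (a sketch of the semantics, not Ignite's implementation):

```python
# With allow_overwrite=False (the savePairs default), the streamer performs
# initial-load-style puts and bypasses the write-through store entirely.
class Cache:
    def __init__(self):
        self.mem = {}    # in-memory cache entries
        self.db = {}     # external RDBMS behind the write-through store

    def stream(self, pairs, allow_overwrite):
        for k, v in pairs:
            self.mem[k] = v
            if allow_overwrite:
                self.db[k] = v  # the update goes through the cache store
            # else: the cache store is skipped

c = Cache()
c.stream([(1, 'role-a')], allow_overwrite=False)
print(c.mem, c.db)  # cache is updated, DB stays empty

c.stream([(1, 'role-a')], allow_overwrite=True)
print(c.db)         # now persisted
```

This matches what was observed in the thread: the cache looked fine either way, but the rows only reached Postgres with savePairs(rdd, true).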

Regards,
-- 
Ilya Kasnacheev




Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread jackluo923
Hi Igor,
Thank you for the support. Unfortunately, uninstalling openssl1.1 won't
be feasible because there are too many projects which depend on openssl1.1.
E.g. the openjdk-8 package will be automatically uninstalled if openssl1.1 is
uninstalled, which breaks the Ignite build due to missing JNI headers, unless
you have a separate copy installed somewhere and point JAVA_HOME to that
location.

The best solution so far from my perspective is to wait for Ignite version
2.8. In the meantime, use the CPPFLAGS and LIBS flags to configure the OpenSSL
include and library paths (CPPFLAGS="-I/usr/local/openssl-1.0/include"
LIBS="-L/usr/local/openssl-1.0/lib" ./configure).

Regards
Jack



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread Ilya Kasnacheev
Hello!

You can uninstall the openssl-dev package without uninstalling the openssl
library. That's what's on my system:

ii  libssl1.0-dev:amd64  1.0.2n-1ubuntu5.3  amd64  Secure Sockets Layer toolkit - development files
ii  libssl1.0.0:amd64    1.0.2n-1ubuntu5.3  amd64  Secure Sockets Layer toolkit - shared libraries
rc  libssl1.0.0:i386     1.0.2n-1ubuntu5.1  i386   Secure Sockets Layer toolkit - shared libraries
ii  libssl1.1:amd64      1.1.0g-2ubuntu4.3  amd64  Secure Sockets Layer toolkit - shared libraries
ii  libssl1.1:i386       1.1.0g-2ubuntu4.3  i386   Secure Sockets Layer toolkit - shared libraries

As you can see, I have both 1.0 and 1.1, but the -dev package only for 1.0.

Regards,
-- 
Ilya Kasnacheev




Re: JDK 11 support.

2019-03-13 Thread Loredana Radulescu Ivanoff
Hello,

I am very interested in this topic as well, so I've been following up. If I
understand correctly, there is no other way to access the needed APIs
without these flags, and the following (extract from the Java documentation)
is an accepted risk?

"The --add-exports and --add-opens options must be used with great care.
You can use them to gain access to an internal API of a library module, or
even of the JDK itself, but you do so at your own risk: If that internal
API is changed or removed then your library or application will fail."

The quote is from the Breaking Encapsulation section in the link below:
https://openjdk.java.net/jeps/261



On Tue, Mar 12, 2019 at 2:15 PM Dmitriy Pavlov  wrote:

> Hi Shane,
>
> These flags are required to access JVM internals and are used by Ignite.
> And it is not related to production readiness.
>
> A number of projects require these flags. In theory, in some future release
> Ignite could get rid of the mandatory specification of extra flags, but it
> will anyway affect performance. So in this scenario (if the community
> accepts it), Ignite will recommend setting them but will be (much) slower
> without them.
>
> There are a number of open discussions at dev@ related to Java 11,
> modularity support. So AFAIK there are no exact plans.
>
> Sincerely,
> Dmitriy Pavlov
>
> Tue, 12 Mar 2019 at 20:48, Shane Duan wrote:
>
>> Currently running Ignite 2.7 with OpenJDK11, with these additional JVM
>> flags:
>>
>> --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED 
>> --add-exports=java.base/sun.nio.ch=ALL-UNNAMED 
>> --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED 
>> --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED
>> --add-exports=java.base/sun.reflect.generics.reflectiveObjects=ALL-UNNAMED
>> --illegal-access=permit
>> -Djdk.tls.client.protocols=TLSv1.2
>>
>>
>> It is working, but also brought some concerns whether Ignite is
>> production-ready...
>>
>> Any plan to remove these dependency on these flags in next release?
>>
>> Thanks!
>>
>> -Shane
>>
>>
>>
>>
>>


Re: JDK 11 support.

2019-03-13 Thread Ilya Kasnacheev
Hello!

Ignite is a very complex application which indeed has to be adjusted for
every new major JDK release. So yes, it basically fails for every new JDK
release when APIs change.

Fortunately, you can control the version of your JDK.

Regards,
-- 
Ilya Kasnacheev




How to use atomic operations on C++ thin client?

2019-03-13 Thread jackluo923
Are atomic operations supported in the C++ thin client?

The design documentation says yes
(https://cwiki.apache.org/confluence/display/IGNITE/Thin+clients+features),

but the SDK documentation suggests no
(https://www.gridgain.com/sdk/pe/latest/cppdoc/classignite_1_1thin_1_1cache_1_1CacheClient.html).

Compilation errors also suggest it's not working:

cache.Put(1, org); -> works perfectly

cache.PutIfAbsent(1, org); -> compilation error
src/thin_client_put_get_example.cpp:43:11: error: ‘class
ignite::thin::cache::CacheClient’ has
no member named ‘PutIfAbsent’

cache.GetPutIfAbsent(1, org); -> compilation error
src/thin_client_put_get_example.cpp:43:11: error: ‘class
ignite::thin::cache::CacheClient’ has
no member named ‘GetPutIfAbsent’
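For reference, the semantics being asked about can be sketched as follows (a plain model of the operations, not the Ignite API):

```python
# Toy model of PutIfAbsent / GetAndPutIfAbsent semantics. Note that
# emulating these client-side with Get + Put is NOT atomic under
# concurrent writers, which is why a server-side operation is wanted.
def put_if_absent(cache, key, val):
    if key in cache:
        return False
    cache[key] = val
    return True

def get_and_put_if_absent(cache, key, val):
    if key in cache:
        return cache[key]
    cache[key] = val
    return None

c = {}
assert put_if_absent(c, 1, 'org') is True
assert put_if_absent(c, 1, 'other') is False   # existing value is kept
assert c[1] == 'org'
assert get_and_put_if_absent(c, 1, 'x') == 'org'
```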



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Primary partitions return zero partitions before rebalance.

2019-03-13 Thread Koitoer
Hi All.

I'm trying to follow the rebalance events of my Ignite cluster so that I'm able
to track which partitions are assigned to each node at any point in time. I
am listening to the `EVT_CACHE_REBALANCE_STARTED` and
`EVT_CACHE_REBALANCE_STOPPED`
events from Ignite, and that is working well, except in the case where one node
crashes and another takes its place.

My cluster has 5 nodes.
E.g. node 1 has, let's say, 100 partitions. After I kill this node, the
partitions that were assigned to it get rebalanced across the entire
cluster. I'm able to track that with the STOPPED event, checking
the affinity function on each node using the `primaryPartitions`
method; if I add all those numbers up I get 1024 partitions,
which is what I expected.

However, when a new node replaces the previous one, I see a rebalance
process occur, and now some of the partitions `disappear` from the already
existing nodes (which is expected, as the new node will take some partitions
from them). But when the STOPPED event is received by this new node, calling
`primaryPartitions` returns an empty list, while the `allPartitions` method
does give me a list (I think at this point that is primary + backups).

If I let some time pass and execute the `primaryPartitions` method again, I
am able to retrieve the partitions that I was expecting to see when the
STOPPED event came. I read in the "(Partition Map) Exchange - under the
hood" wiki page, section "LateAffinityAssignment"
(https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood),
that this could be late affinity assignment: after the cache rebalance, the
new node needs to bring in all the entries to fill the cache, and only after
that will `primaryPartitions` return something.
It would be great to know if this is actually what is happening.

My question is whether there is any kind of event that I should listen to so
I can be aware that this process (if this is what is happening) has
finished. I would like to be able to say: "After you bring this node into
the cluster, the partitions assigned to that node are the following: XXX, XXX".

Also, I'm aware of the event `EVT_CACHE_REBALANCE_PART_LOADED`, but I'm
seeing a ton of them, and I have no way to know when the last one has
arrived so that I can say those are now my primary partitions.

Thanks in advance.
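[Editor's note] The listener wiring described above can be sketched roughly as follows. This is a minimal, untested sketch: the cache name "myCache" is a placeholder, and rebalance events must be enabled explicitly via IgniteConfiguration.setIncludeEventTypes (they are off by default). It illustrates only the pattern from the thread, not an answer to the late-affinity question.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class RebalanceWatch {
    public static void main(String[] args) {
        // Assumes the node's IgniteConfiguration includes
        // EVT_CACHE_REBALANCE_STARTED / EVT_CACHE_REBALANCE_STOPPED
        // in setIncludeEventTypes(...).
        Ignite ignite = Ignition.start();

        IgnitePredicate<Event> lsnr = evt -> {
            if (evt.type() == EventType.EVT_CACHE_REBALANCE_STOPPED) {
                // "myCache" is a placeholder cache name.
                Affinity<Object> aff = ignite.affinity("myCache");

                // As reported in the thread: on a freshly joined node this
                // may still be empty right after STOPPED, until late
                // affinity assignment completes.
                int[] primary = aff.primaryPartitions(ignite.cluster().localNode());
                System.out.println("Primary partitions now: " + primary.length);
            }
            return true; // keep the listener registered
        };

        ignite.events().localListen(lsnr,
            EventType.EVT_CACHE_REBALANCE_STARTED,
            EventType.EVT_CACHE_REBALANCE_STOPPED);
    }
}
```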


Re: How to compile 2.7 release cpp binaries on ubuntu 16.04

2019-03-13 Thread jackluo923
Hi Ilya, 
I dug deeper after your response and found the root cause of the error. 

Platform: Ubuntu 16.04
1. Installed openssl 1.1.0h compiled from source (reason: 16.04 does not
provide any openssl 1.1 packages), then installed it using the
"make install" command.

2. Ubuntu 16.04 (Xenial) packages
- libssl1.0.0 (version: 1.0.2g-1ubuntu4.15)
- libssl-dev (version: 1.0.2g-1ubuntu4.15)
- openssl (version: 1.0.2g-1ubuntu4.15)
- no openssl 1.1 packages are available on ubuntu 16.04 thus not installed

The root cause is that OpenSSL 1.1 was installed from source following
OpenSSL's default instructions (which also install the development headers
automatically). Even though only the libssl-dev (openssl 1.0.2g) package is
installed, Ignite's autoconf will automatically default to the 1.1.0h
headers.

Fixes: 
1) Re-install openssl 1.1 from source without the development headers
(documentation is scarce; not sure how it could be done).
2) Delete openssl 1.1's development headers manually after install.
3) Upgrade to Ubuntu 18.04, where the libssl1.0-dev and libssl-dev (1.1)
development headers can be swapped easily. The Ubuntu development-header
packages for v1.0 and v1.1 conflict (installing one uninstalls the other),
thus preventing this problem from occurring.
4) Manually provide the OpenSSL library and include paths when building
Ignite, e.g. via the standard autoconf variables:
./configure CPPFLAGS="-I/usr/local/ssl/include" LDFLAGS="-L/usr/local/ssl/lib"


TL;DR: The problem is a development-header conflict, and it's slightly messy
to resolve on Ubuntu 16.04. The easiest solution is to upgrade to Ubuntu
18.04 and install the openssl 1.0 or 1.1 package and the development headers
of the version of your choosing, as Ilya suggested. The development-header
version can be switched easily via apt-get only on Ubuntu 18.04.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: isolated cluster configuration

2019-03-13 Thread javastuff....@gmail.com
Thank you for the response Ilya.

Using 2 different DB schemas is a way out here, but I was trying to see if
there is any other way to achieve this, such as a property or configuration
resulting in different table names for the isolated cluster, or table
metadata carrying cluster details to keep the clusters isolated from each
other.

Please let me know if this can be done without having 2 separate DB schemas.
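[Editor's note] If the persistence here is the JDBC POJO store (an assumption; the thread does not say), one direction to explore is deriving the table name from a per-cluster identifier while keeping a single DB schema. All names below are hypothetical illustrations, not from the thread:

```java
import org.apache.ignite.cache.store.jdbc.JdbcType;

public class PerClusterTables {
    /**
     * Hypothetical helper: builds a JDBC type mapping whose table name is
     * derived from a cluster id, e.g. clusterId "blue" -> table PERSON_BLUE,
     * so two isolated clusters can share one DB schema without colliding.
     */
    public static JdbcType personType(String clusterId) {
        JdbcType type = new JdbcType();

        type.setCacheName("personCache");           // assumed cache name
        type.setDatabaseSchema("PUBLIC");           // single shared schema
        type.setDatabaseTable("PERSON_" + clusterId.toUpperCase());
        type.setKeyType(Integer.class);
        type.setValueType("com.example.Person");    // hypothetical value class
        // ... setKeyFields(...) / setValueFields(...) as usual ...

        return type;
    }
}
```

The resulting JdbcType would then be passed to CacheJdbcPojoStoreFactory.setTypes(...). Whether this fits depends on how write-through is configured in your setup; treat it as a direction to investigate, not a confirmed recipe.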

Thanks,
-Sam 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: GridTimeoutProcessor - Timeout has occurred - Too frequent messages

2019-03-13 Thread Павлухин Иван
Hi,

Yes, perhaps there is no rich documentation for the mentioned classes.
But on the other hand, they are internal classes which could be changed
at any time. I will try to outline the roles of 2 of the classes.
1. CancelableTask is used internally as a general way to cancel some
action scheduled in the future. GridTimeoutProcessor.schedule returns a
CancelableTask, which allows cancelling the scheduled task.
2. GridCommunicationMessageSet is used when processing ordered
messages. GridIoManager supports ordered message delivery, and in this
case timeouts are involved in case some previous messages were not
received.

Unfortunately I cannot say anything meaningful about
CacheContinuousQueryManager$BackupCleaner.

Giving a good answer to your questions is almost equal to writing
documentation for the aforementioned classes. I think it is much easier
to get an answer if you have a problem in your use case, e.g.
something does not work or works improperly.

Mon, 11 Mar 2019 at 12:04, userx :
>
> Hi Ivan,
>
> Thanks for the reply. I totally buy your point that these messages are not
> bad. What I wanted to understand, basically, was the role of the following
> GridTimeout objects:
>
> 1) CancelableTask
> 2) GridCommunicationMessageSet
> 3) CacheContinuousQueryManager$BackupCleaner
>
> There is no documentation available in the class for the above three
> classes. So was just trying to understand the role of each of them.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin