Call to deployClusterSingleton blocked.

2018-07-18 Thread Calvin KL Wong, CLSA
Hi,

I encountered a scenario where my process was blocked at a call to 
deployClusterSingleton.

IgniteServices svcs = ignite.services();
svcs.deployClusterSingleton(..., ...);

Too bad that I didn't get the stack trace.

I didn't specify any node filter.  I was able to deploy the service upon 
restarting the process.
Do you have any suggestion as to why this may happen?
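Since the stack trace was lost, it is hard to say where the call parked. A defensive pattern that at least turns an indefinite hang into a diagnosable failure is to bound any potentially blocking call with a timeout. Below is a minimal, JDK-only sketch of that pattern; the deployment call itself is replaced by a plain Runnable, and the timeout values are illustrative. (Ignite 2.x also offers async variants such as deployClusterSingletonAsync, whose IgniteFuture can be waited on with a timeout, if your version provides them.)

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Generic guard for a call that may block indefinitely: run it on a
// separate thread and give up after a deadline. Names and timeouts here
// are illustrative, not from the original post.
public class TimeoutGuard {
    static void runWithTimeout(Runnable call, long timeout, TimeUnit unit)
            throws TimeoutException, InterruptedException, ExecutionException {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        try {
            ex.submit(call).get(timeout, unit); // throws TimeoutException if stuck
        } finally {
            ex.shutdownNow(); // interrupt the stuck call instead of leaking the thread
        }
    }

    public static void main(String[] args) throws Exception {
        // A fast call completes normally.
        runWithTimeout(() -> {}, 1, TimeUnit.SECONDS);

        // A call that never returns surfaces as TimeoutException instead of hanging.
        try {
            runWithTimeout(() -> {
                try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
            }, 200, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            System.out.println("timed out as expected");
        }
    }
}
```

With a guard like this, a blocked deployClusterSingleton would at least produce a TimeoutException you can log, rather than a silently frozen process.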

Thanks,
Calvin

Calvin KL Wong
Sr. Lead Engineer, Execution Services
D  +852 2600 7983  |  M  +852 9267 9471  |  T  +852 2600 
5/F, One Island East, 18 Westlands Road, Island East, Hong Kong


clsa.com
Insights. Liquidity. Capital.


A CITIC Securities Company

The content of this communication is intended for the recipient and is subject 
to CLSA Legal and Regulatory Notices.
These can be viewed at https://www.clsa.com/disclaimer.html or sent to you upon 
request.
Please consider before printing. CLSA is ISO14001 certified and committed to 
reducing its impact on the environment.


Re: Possible starvation in striped pool

2018-07-18 Thread Shailendrasinh Gohil
Here you go...

[Attachment stripped by the mailing-list archive: the message contained the
Spring XML node configuration; only the field name "prodId" survives.]
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Understanding the mechanics of peer class loading

2018-07-18 Thread Dave Harvey
I added this ticket because we hit a similar problem and I was able to find
some quite suspect code: https://issues.apache.org/jira/browse/IGNITE-9026





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: CRUD operation on Hibernate L2 Cache ?

2018-07-18 Thread Mikhail
Hi Monstereo,


monstereo wrote
> When I want to add new element to cache, it will also update the database.
> When I want to update any element in cache, it will also update the
> database.
> When I want to delete any element in cache, it will also delete the
> element
> from database.
> 
> How I can do that? I am allowed to use igniteCache.get, put, delete ?? 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/

If you want to use Ignite as an L2 cache for Hibernate, then it makes sense
to use the Hibernate API for CRUD operations.
You can get entries directly from the Ignite cache, but I don't think that
it's a good idea: it's an L2 cache and it's supposed to be accessed via
Hibernate, so in this case you should work with the Hibernate API only.
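The write-through behavior being asked for can be pictured with a tiny JDK-only stand-in: every mutation goes through one wrapper that updates the backing store and the cache together, which is essentially what Hibernate does on your behalf when Ignite is its L2 cache. All names below are hypothetical; this is not Hibernate or Ignite API.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal write-through cache: every put/remove hits the backing
// "database" as well, so the two can never diverge. A toy stand-in for
// what Hibernate does when Ignite is configured as its L2 cache.
public class WriteThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Map<K, V> database; // stand-in for the real store

    public WriteThroughCache(Map<K, V> database) { this.database = database; }

    public void put(K key, V value) {  // create or update
        database.put(key, value);      // write the store first...
        cache.put(key, value);         // ...then the cache
    }

    public void remove(K key) {        // delete from both
        database.remove(key);
        cache.remove(key);
    }

    public V get(K key) {              // read through on a cache miss
        return cache.computeIfAbsent(key, database::get);
    }

    public static void main(String[] args) {
        Map<String, String> db = new HashMap<>();
        WriteThroughCache<String, String> c = new WriteThroughCache<>(db);
        c.put("1", "alice");
        System.out.println(db.get("1")); // the store saw the write too
    }
}
```

Writing to the inner cache directly, bypassing the wrapper, is exactly the inconsistency the reply above warns about when bypassing Hibernate.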





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Docker Healthcheck command recommendation

2018-07-18 Thread Ilya Kasnacheev
Hello!

You could also add ignite-rest-http module and call some rest endpoint as
Healthcheck, such as version or cache read:
https://apacheignite.readme.io/docs/rest-api#version

Regards,

-- 
Ilya Kasnacheev

2018-07-17 18:37 GMT+03:00 Dave Harvey :

> Any suggestions on an appropriate HEALTHCHECK command to use in an IGNITE
> docker container?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Task not serializable: java.io.NotSerializableException: org.apache.ignite.configuration.IgniteConfiguration

2018-07-18 Thread Ilya Kasnacheev
Hello!

As your message states, IgniteConfiguration isn't serializable. So you
will need to create the IgniteConfiguration from inside the () => igniteConf
lambda instead of passing it in from outside. Pass the parameters needed to
create that configuration along with the lambda instead.
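The advice above can be sketched in plain Java: the closure captures only serializable parameters and rebuilds the non-serializable object on the receiving side. NonSerializableConfig and SerializableSupplier are illustrative stand-ins for IgniteConfiguration and the Spark closure; the serialization round-trip imitates what Spark does when it ships a task.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Supplier;

// Sketch of the fix: don't capture a non-serializable object in a
// closure shipped to another JVM; capture only serializable parameters
// and rebuild the object inside the closure.
public class ClosureCapture {
    static class NonSerializableConfig {      // deliberately NOT Serializable
        final String address;
        NonSerializableConfig(String address) { this.address = address; }
    }

    interface SerializableSupplier<T> extends Supplier<T>, Serializable {}

    // Round-trip through Java serialization, as Spark would for a task.
    static <T extends Serializable> T roundTrip(T obj) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(obj);
        oos.flush();
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        @SuppressWarnings("unchecked") T copy = (T) in.readObject();
        return copy;
    }

    public static void main(String[] args) throws Exception {
        String address = "127.0.0.1:47500";   // serializable parameter
        // GOOD: the lambda captures only the String and builds the config lazily.
        SerializableSupplier<NonSerializableConfig> factory =
            () -> new NonSerializableConfig(address);
        SerializableSupplier<NonSerializableConfig> shipped = roundTrip(factory);
        System.out.println(shipped.get().address); // rebuilt on the "remote" side
    }
}
```

Capturing the config object itself in the lambda would make roundTrip throw NotSerializableException, which is exactly the Spark error above.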

Regards,

-- 
Ilya Kasnacheev

2018-07-18 17:29 GMT+03:00 wt :

> I can connect to an Ignite RDBMS-backed table from Spark but can't query
> it. The Ignite server is running in IntelliJ, with RDBMS integration (one
> table) and the cache loaded.
>
> In Spark I have the following code:
>
>import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
>import
> org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder
>import org.apache.ignite.configuration.IgniteConfiguration
>import java.util.ArrayList
>import java.util.List
>
> val tcpDiscoverySpi: TcpDiscoverySpi = new TcpDiscoverySpi
> val ipFinder: TcpDiscoveryVmIpFinder = new TcpDiscoveryVmIpFinder
> val addressList: java.util.List[String] = new
> java.util.ArrayList[String]
> addressList.add("127.0.0.1:47500")
> ipFinder.setAddresses(addressList)
> tcpDiscoverySpi.setIpFinder(ipFinder)
> tcpDiscoverySpi.setLocalAddress("127.0.0.1")
>
> val igniteConf : IgniteConfiguration = new
> IgniteConfiguration().setClientMode(true).setDiscoverySpi(tcpDiscoverySpi)
>
> val igniteContext = new IgniteContext(sc, () => igniteConf)
> val igniteCache = igniteContext.fromCache("COMPLIANCESUMMARYCACHE")
>
> igniteCache.sql("select * from ComplianceSummary")
>
> scala> res24.printSchema
> root
>  |-- COMPANYID: integer (nullable = true)
>  |-- COMPLIANCESUMMARYITEM: string (nullable = true)
>  |-- COMPLIANCESUMMARYVALUE: string (nullable = true)
>  |-- RECORDVALIDFROM: date (nullable = true)
>  |-- RECORDVALIDTO: date (nullable = true)
>
>
> -
>
> I can print the schema out but if i try query it i get the following error
> message:
>
>
> org.apache.spark.SparkException: Job aborted due to stage failure: Task
> not
> serializable: java.io.NotSerializableException:
> org.apache.ignite.configuration.IgniteConfiguration
> Serialization stack:
> - object not serializable (class:
> org.apache.ignite.configuration.IgniteConfiguration, value:
> IgniteConfiguration [igniteInstanceName=null, pubPoolSize=8,
> svcPoolSize=null, callbackPoolSize=8, stripedPoolSize=8, sysPoolSize=8,
> mgmtPoolSize=4, igfsPoolSize=2, dataStreamerPoolSize=8,
> utilityCachePoolSize=8, utilityCacheKeepAliveTime=6, p2pPoolSize=2,
> qryPoolSize=8, igniteHome=null, igniteWorkDir=null, mbeanSrv=null,
> nodeId=null, marsh=null, marshLocJobs=false, daemon=false, p2pEnabled=
> false, netTimeout=5000, sndRetryDelay=1000, sndRetryCnt=3,
> metricsHistSize=1, metricsUpdateFreq=2000,
> metricsExpTime=9223372036854775807, discoSpi=TcpDiscoverySpi
> [addrRslvr=null, sockTimeout=0, ackTimeout=0, marsh=null, reconCnt=10,
> reconDelay=2000, maxAckTimeout=60, forceSrvMode=false,
> clientReconnectDisabled=false, internalLsnr=null], segPlc=STOP,
> segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true,
> segChkFreq=1, commSpi=null, evtSpi=null, colSpi=null, deploySpi
> =null, indexingSpi=null, addrRslvr=null, clientMode=true,
> rebalanceThreadPoolSize=1,
> txCfg=org.apache.ignite.configuration.TransactionConfiguration@24766217,
> cacheSanityCheckEnabled=true, discoStartupDelay=6, deployMode=SHARED,
> p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100,
> timeSrvPortRange=100, failureDetectionTimeout=1,
> clientFailureDetectionTimeout=3, metricsLogFreq=6, hadoopCfg=null,
> connectorCfg=org.apache.ignite.configuration.
> ConnectorConfiguration@f8c34ef,
> od
> bcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
> [seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
> grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null,
> binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=null, activeOnStart=true,
> autoActivation=true, longQryWarnTimeout=3000, sqlConnCfg=null,
> cliConnCfg=ClientConnectorConfiguration [host=null, port=10800,
> portRange=100, sockSndBufSize=0, sockRcvBufSize=0, tcpNoDelay=true,
> maxOpenCursorsPerConn=128, threadPoolS
> ize=8, idleTimeout=0, jdbcEnabled=true, odbcEnabled=true,
> thinCliEnabled=true, sslEnabled=false, useIgniteSslCtxFactory=true,
> sslClientAuth=false, sslCtxFactory=null], authEnabled=false,
> failureHnd=null, commFailureRslvr=null])
> - field (class: $iw, name: igniteConf, type: class
> org.apache.ignite.configuration.IgniteConfiguration)
> - object (class $iw, $iw@1bdb1284)
> Caused by: java.io.NotSerializableException:
> org.apache.ignite.configuration.IgniteConfiguration
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


[ANNOUNCE] Apache Ignite 2.6.0 Released

2018-07-18 Thread Andrey Gura
The Apache Ignite Community is pleased to announce the release of
Apache Ignite 2.6.0.

Apache Ignite [1] is a memory-centric distributed database, caching,
and processing platform for transactional, analytical, and streaming
workloads delivering in-memory speeds at petabyte scale.

This release fixes several critical issues and brings in some improvements:
https://ignite.apache.org/releases/2.6.0/release_notes.html

Download the latest Ignite version from here:
https://ignite.apache.org/download.cgi

Please let us know [2] if you encounter any problems.

Regards,
Andrey Gura on behalf of Apache Ignite community

[1] https://ignite.apache.org
[2] https://ignite.apache.org/community/resources.html#ask


Re: SqlFieldsQuery on List of Binary Object

2018-07-18 Thread debashissinha
Hi,
The reason is that I am trying to do it through the load method, which
supports read-through. The requirement is to pass a set of params, say a
Map which will contain a SQL statement and some query param values, and
the return will be a list of BinaryObject. That is, against one key I
want to hold all the binary objects and then query them from within the
list using SqlFieldsQuery.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Docker Healthcheck command recommendation

2018-07-18 Thread Stanislav Lukyanov
I’d look into calling control.sh or ignitevisorcmd.sh and parsing their output.
E.g. check that control.sh --cache can connect to the local node and return one 
of your caches.

However, this check is not purely local: the command will connect to the
cluster as a whole.
A more localized check would be to see whether Ignite's JVM is running, via jps.

Thanks,
Stan 

From: Dave Harvey
Sent: 17 July 2018 18:37
To: user@ignite.apache.org
Subject: Docker Healthcheck command recommendation

Any suggestions on an appropriate HEALTHCHECK command to use in an IGNITE
docker container?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: SqlFieldsQuery on List of Binary Object

2018-07-18 Thread Ilya Kasnacheev
Hello!

Why do you try to store a List in the cache? It should work if you just put
plain BinaryObjects in it, without the List.

Regards,

-- 
Ilya Kasnacheev

2018-07-18 17:42 GMT+03:00 debashissinha :

> Hi ,
>
> If I add a List<BinaryObject> to a cache and also, in the cache
> configuration, set a QueryEntity with fields, then how can I query using
> cache.query(new SqlFieldsQuery("Some sql"));
>
> The sample I am trying to use is
>
> CacheConfiguration<Integer, List<BinaryObject>> cfg = new
> CacheConfiguration<>();
> cfg.setQueryEntities(new ArrayList<QueryEntity>(){{
>
> QueryEntity e = new QueryEntity();
> e.setKeyType("java.lang.Integer");
> e.setValueType("Person");
> e.setFields(new LinkedHashMap<String, String>(){{
>
> put("id","java.lang.Integer");
> put("name","java.lang.String");
>
> }});
> add(e);
> }});
>
> cfg.setName("TESTPERSON");
>
> Ignite ignite = Ignition.start();
> IgniteCache<Integer, List<BinaryObject>> cache =
> ignite.getOrCreateCache(cfg).withKeepBinary();
>
> List<BinaryObject> binaryObjectList = new ArrayList<>();
> BinaryObjectBuilder bldr = ignite.binary().builder("Person");
>
> bldr.setField("id",1);
> bldr.setField("name","test");
> binaryObjectList.add(bldr.build());
>
> cache.put(1,binaryObjectList);
>
> QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery("Select *
> from Person"));
>
> I am getting empty results here.
> Can someone kindly help?
>
> Thanks in advance.
> Debashis Sinha
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Possible starvation in striped pool

2018-07-18 Thread Ilya Kasnacheev
Hello again!

I have just noticed the following stack trace:

"flusher-0-#588%AppCluster%" #633 prio=5 os_prio=0
tid=0x7f18d424f800 nid=0xe1bb runnable [0x7f197c1cd000]
   java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at oracle.net.ns.Packet.receive(Packet.java:311)
at oracle.net.ns.DataPacket.receive(DataPacket.java:105)
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:305)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:249)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:171)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:89)
at 
oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:123)
at 
oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:79)
at 
oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:429)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:397)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at 
oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
at 
oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
at 
oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
at 
oracle.jdbc.driver.OraclePreparedStatement.executeForRowsWithTimeout(OraclePreparedStatement.java:12029)
at 
oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:12140)
- locked <0x7f2aaa591778> (a oracle.jdbc.driver.T4CConnection)
at 
oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:246)
at 
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.executeBatch(CacheAbstractJdbcStore.java:1226)
at 
org.apache.ignite.cache.store.jdbc.CacheAbstractJdbcStore.writeAll(CacheAbstractJdbcStore.java:)
at 
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.updateStore(GridCacheWriteBehindStore.java:809)
at 
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.applyBatch(GridCacheWriteBehindStore.java:725)
at 
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore.access$2400(GridCacheWriteBehindStore.java:75)
at 
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore$Flusher.flushCacheCoalescing(GridCacheWriteBehindStore.java:1113)
at 
org.apache.ignite.internal.processors.cache.store.GridCacheWriteBehindStore$Flusher.body(GridCacheWriteBehindStore.java:1011)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)

It looks like you're waiting for Oracle to complete the batch execution,
but it never finishes.
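One mitigation worth considering here (my suggestion, not something proposed in the thread) is a JDBC statement timeout, so a batch the database never answers fails fast instead of pinning the flusher forever. The executeBatchWithTimeout helper below is hypothetical, and a dynamic proxy stands in for a real Oracle Statement since no database is available in this sketch; standard JDBC semantics are that setQueryTimeout bounds each execution.

```java
import java.lang.reflect.Proxy;
import java.sql.SQLException;
import java.sql.Statement;

// The flusher above is blocked inside executeBatch() reading from the
// Oracle socket. Statement.setQueryTimeout(seconds) tells the driver to
// abort such a call instead of waiting forever. Demonstrated against a
// reflective stand-in, because no real database is available here.
public class QueryTimeoutSketch {
    // Apply a defensive upper bound before running a batch.
    static int[] executeBatchWithTimeout(Statement st, int seconds) throws SQLException {
        st.setQueryTimeout(seconds); // driver-enforced bound per execution
        return st.executeBatch();
    }

    public static void main(String[] args) throws Exception {
        int[] appliedTimeout = {-1};
        // Dynamic proxy standing in for an Oracle Statement: it records the
        // timeout and returns an empty update-count array from executeBatch.
        Statement fake = (Statement) Proxy.newProxyInstance(
            Statement.class.getClassLoader(),
            new Class<?>[] { Statement.class },
            (proxy, method, a) -> {
                if (method.getName().equals("setQueryTimeout")) {
                    appliedTimeout[0] = (Integer) a[0];
                    return null;
                }
                if (method.getName().equals("executeBatch"))
                    return new int[0];
                return null;
            });
        executeBatchWithTimeout(fake, 30);
        System.out.println("query timeout set to " + appliedTimeout[0] + "s");
    }
}
```

In a real deployment the timeout would be applied where the store builds its statements; with it in place, a hung Oracle batch surfaces as a SQLException in the flusher's log instead of a silent stall.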

Regards,

-- 
Ilya Kasnacheev

2018-07-18 17:47 GMT+03:00 Ilya Kasnacheev :

> Hello!
>
> Can you please share the configuration of your Apache Ignite nodes,
> especially the cache stores of your caches? I have just noticed that
> you're actually waiting on a cache store lock.
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2018-07-17 19:11 GMT+03:00 Shailendrasinh Gohil  salientcrgt.com>:
>
>> We are using a TreeMap for all the putAll operations. We also tried the
>> streamer API to create automatic batches. Still, the issue is the same.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: Possible starvation in striped pool

2018-07-18 Thread Ilya Kasnacheev
Hello!

Can you please share the configuration of your Apache Ignite nodes,
especially the cache stores of your caches? I have just noticed that you're
actually waiting on a cache store lock.

Regards,

-- 
Ilya Kasnacheev

2018-07-17 19:11 GMT+03:00 Shailendrasinh Gohil <
shailendrasinh.go...@salientcrgt.com>:

> We are using a TreeMap for all the putAll operations. We also tried the
> streamer API to create automatic batches. Still, the issue is the same.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
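
The putAll pattern quoted above can still starve the striped pool if a single batch is huge; splitting the TreeMap into bounded batches keeps each cache operation (and each write-behind flush) short. The batches helper and the batch size below are illustrative, not Ignite API.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Split one huge sorted map into bounded batches so that each
// cache.putAll(batch) (and hence each write-behind flush to the store)
// stays short. A TreeMap keeps key order, which avoids deadlocks when
// several threads load overlapping key sets.
public class PutAllBatcher {
    static <K, V> List<Map<K, V>> batches(Map<K, V> src, int batchSize) {
        List<Map<K, V>> out = new ArrayList<>();
        Map<K, V> current = new LinkedHashMap<>(); // preserves iteration order
        for (Map.Entry<K, V> e : src.entrySet()) {
            current.put(e.getKey(), e.getValue());
            if (current.size() == batchSize) {
                out.add(current);
                current = new LinkedHashMap<>();
            }
        }
        if (!current.isEmpty())
            out.add(current);
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, String> data = new TreeMap<>();
        for (int i = 0; i < 10; i++)
            data.put(i, "v" + i);
        // Instead of one cache.putAll(data), issue one bounded call per batch:
        for (Map<Integer, String> b : batches(data, 4))
            System.out.println("putAll of " + b.size() + " entries");
    }
}
```

Each resulting map would then be passed to its own cache.putAll(batch) call, so no single striped-pool task holds the store for long.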


SqlFieldsQuery on List of Binary Object

2018-07-18 Thread debashissinha
Hi ,

If I add a List<BinaryObject> to a cache and also, in the cache
configuration, set a QueryEntity with fields, then how can I query using
cache.query(new SqlFieldsQuery("Some sql"));

The sample I am trying to use is

CacheConfiguration<Integer, List<BinaryObject>> cfg = new
CacheConfiguration<>();
cfg.setQueryEntities(new ArrayList<QueryEntity>(){{

QueryEntity e = new QueryEntity();
e.setKeyType("java.lang.Integer");
e.setValueType("Person");
e.setFields(new LinkedHashMap<String, String>(){{

put("id","java.lang.Integer");
put("name","java.lang.String");

}});
add(e);
}});

cfg.setName("TESTPERSON");

Ignite ignite = Ignition.start();
IgniteCache<Integer, List<BinaryObject>> cache =
ignite.getOrCreateCache(cfg).withKeepBinary();

List<BinaryObject> binaryObjectList = new ArrayList<>();
BinaryObjectBuilder bldr = ignite.binary().builder("Person");

bldr.setField("id",1);
bldr.setField("name","test");
binaryObjectList.add(bldr.build());

cache.put(1,binaryObjectList);

QueryCursor<List<?>> cursor = cache.query(new SqlFieldsQuery("Select *
from Person"));

I am getting empty results here.
Can someone kindly help?

Thanks in advance.
Debashis Sinha



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Task not serializable: java.io.NotSerializableException: org.apache.ignite.configuration.IgniteConfiguration

2018-07-18 Thread wt
I can connect to an Ignite RDBMS-backed table from Spark but can't query
it. The Ignite server is running in IntelliJ, with RDBMS integration (one
table) and the cache loaded.

In Spark I have the following code:

   import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
   import
org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder
   import org.apache.ignite.configuration.IgniteConfiguration
   import java.util.ArrayList
   import java.util.List
   
val tcpDiscoverySpi: TcpDiscoverySpi = new TcpDiscoverySpi
val ipFinder: TcpDiscoveryVmIpFinder = new TcpDiscoveryVmIpFinder
val addressList: java.util.List[String] = new
java.util.ArrayList[String]
addressList.add("127.0.0.1:47500")
ipFinder.setAddresses(addressList)
tcpDiscoverySpi.setIpFinder(ipFinder)
tcpDiscoverySpi.setLocalAddress("127.0.0.1") 

val igniteConf : IgniteConfiguration = new
IgniteConfiguration().setClientMode(true).setDiscoverySpi(tcpDiscoverySpi)

val igniteContext = new IgniteContext(sc, () => igniteConf)
val igniteCache = igniteContext.fromCache("COMPLIANCESUMMARYCACHE") 

igniteCache.sql("select * from ComplianceSummary")

scala> res24.printSchema
root
 |-- COMPANYID: integer (nullable = true)
 |-- COMPLIANCESUMMARYITEM: string (nullable = true)
 |-- COMPLIANCESUMMARYVALUE: string (nullable = true)
 |-- RECORDVALIDFROM: date (nullable = true)
 |-- RECORDVALIDTO: date (nullable = true)


-

I can print the schema out but if i try query it i get the following error
message:


org.apache.spark.SparkException: Job aborted due to stage failure: Task not
serializable: java.io.NotSerializableException:
org.apache.ignite.configuration.IgniteConfiguration
Serialization stack:
- object not serializable (class:
org.apache.ignite.configuration.IgniteConfiguration, value:
IgniteConfiguration [igniteInstanceName=null, pubPoolSize=8,
svcPoolSize=null, callbackPoolSize=8, stripedPoolSize=8, sysPoolSize=8,
mgmtPoolSize=4, igfsPoolSize=2, dataStreamerPoolSize=8,
utilityCachePoolSize=8, utilityCacheKeepAliveTime=6, p2pPoolSize=2,
qryPoolSize=8, igniteHome=null, igniteWorkDir=null, mbeanSrv=null,
nodeId=null, marsh=null, marshLocJobs=false, daemon=false, p2pEnabled=
false, netTimeout=5000, sndRetryDelay=1000, sndRetryCnt=3,
metricsHistSize=1, metricsUpdateFreq=2000,
metricsExpTime=9223372036854775807, discoSpi=TcpDiscoverySpi
[addrRslvr=null, sockTimeout=0, ackTimeout=0, marsh=null, reconCnt=10,
reconDelay=2000, maxAckTimeout=60, forceSrvMode=false,
clientReconnectDisabled=false, internalLsnr=null], segPlc=STOP,
segResolveAttempts=2, waitForSegOnStart=true, allResolversPassReq=true,
segChkFreq=1, commSpi=null, evtSpi=null, colSpi=null, deploySpi
=null, indexingSpi=null, addrRslvr=null, clientMode=true,
rebalanceThreadPoolSize=1,
txCfg=org.apache.ignite.configuration.TransactionConfiguration@24766217,
cacheSanityCheckEnabled=true, discoStartupDelay=6, deployMode=SHARED,
p2pMissedCacheSize=100, locHost=null, timeSrvPortBase=31100,
timeSrvPortRange=100, failureDetectionTimeout=1,
clientFailureDetectionTimeout=3, metricsLogFreq=6, hadoopCfg=null,
connectorCfg=org.apache.ignite.configuration.ConnectorConfiguration@f8c34ef,
od
bcCfg=null, warmupClos=null, atomicCfg=AtomicConfiguration
[seqReserveSize=1000, cacheMode=PARTITIONED, backups=1, aff=null,
grpName=null], classLdr=null, sslCtxFactory=null, platformCfg=null,
binaryCfg=null, memCfg=null, pstCfg=null, dsCfg=null, activeOnStart=true,
autoActivation=true, longQryWarnTimeout=3000, sqlConnCfg=null,
cliConnCfg=ClientConnectorConfiguration [host=null, port=10800,
portRange=100, sockSndBufSize=0, sockRcvBufSize=0, tcpNoDelay=true,
maxOpenCursorsPerConn=128, threadPoolS
ize=8, idleTimeout=0, jdbcEnabled=true, odbcEnabled=true,
thinCliEnabled=true, sslEnabled=false, useIgniteSslCtxFactory=true,
sslClientAuth=false, sslCtxFactory=null], authEnabled=false,
failureHnd=null, commFailureRslvr=null])
- field (class: $iw, name: igniteConf, type: class
org.apache.ignite.configuration.IgniteConfiguration)
- object (class $iw, $iw@1bdb1284)
Caused by: java.io.NotSerializableException:
org.apache.ignite.configuration.IgniteConfiguration




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite Binary Cache store Sql Read Through Feature

2018-07-18 Thread debashissinha
Hi ,
Thanks a lot for your help



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Runtime failure on bounds

2018-07-18 Thread aealexsandrov
Hi,

I lack the context of what you are doing in your code. However, I know that
several page corruption issues were fixed in the 2.6 release.

So there is no specific suggestion from my side.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Runtime failure on bounds

2018-07-18 Thread kvenkatramtreddy
Hi Andrei,

Yes, I have stopped all the servers, removed the corrupted nodes' data,
upgraded to 2.6, and restarted the server.

Any more suggestions or configuration changes to prevent this issue?



Thanks & Regards,
Venkat



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: what version of spark is supported by ignite 2.5

2018-07-18 Thread wt
Here is my RDBMS integration server config:

[Spring beans XML configuration stripped by the mailing-list archive; only
the schema declarations (www.springframework.org/schema/beans and
schema/util) and the discovery address range 127.0.0.1:47500..47510
survive.]

Re: what version of spark is supported by ignite 2.5

2018-07-18 Thread wt
Could it be that Spark is not compatible with the Ignite RDBMS integration?
If I run an example without it, everything works (writing to and reading
from caches and tables).



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: what version of spark is supported by ignite 2.5

2018-07-18 Thread aealexsandrov
So you are saying that if I start the provided example I will see the same
error? I mean that I can try to investigate the problem if I am able to
reproduce the same behavior.

Give me some time to take a look at this example.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: what version of spark is supported by ignite 2.5

2018-07-18 Thread aealexsandrov
Hi,

First of all, you shouldn't use Spark 2.1 with Ignite, because you could
have conflicts between Spark versions.

From your log, when you used ignite-spark (which is built against Spark
2.2), I see that you have a problem with the Spring configuration:

class org.apache.ignite.IgniteException: Spring application context resource
is not injected.

I can't say why you hit it with the provided code lines.

Could you please provide a reproducer example on GitHub (or an analog) to
analyze?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: what version of spark is supported by ignite 2.5

2018-07-18 Thread wt
If I run the same code on Spark 2.1 I get the following error (same jars on
the classpath):

Welcome to
    __
 / __/__  ___ _/ /__
_\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
  /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java
1.8.0_151)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.ignite.spark.IgniteDataFrameSettings._
import org.apache.ignite.spark.IgniteDataFrameSettings._

scala>import org.apache.ignite.spark._
import org.apache.ignite.spark._

scala>
 |val df =
spark.read.format(FORMAT_IGNITE).option(OPTION_TABLE,"ComplianceSummaryCache.ComplianceSummary").option(OPTION_CONFIG_FILE,"stoxx-server.xml").load()
18/07/18 10:56:10 WARN GenericApplicationContext: Exception encountered
during context initialization - cancelling refresh attempt:
org.springframework.beans.factory.BeanCreationException: Error creating bean
with name 'org.apache.ignite.configuration.IgniteConfiguration#0' defined in
URL [file:/C:/spark_2.1/conf/stoxx-server.xm
org.apache.ignite.IgniteCheckedException: Failed to instantiate Spring XML
application context (make sure all classes used in Spring configuration are
present at CLASSPATH) [springUrl=file:/C:/spark_2.1/conf/stoxx-server.xml]
  at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:387)
  at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:104)
  at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.loadConfigurations(IgniteSpringHelperImpl.java:98)
  at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:744)
  at
org.apache.ignite.internal.IgnitionEx.loadConfigurations(IgnitionEx.java:783)
  at
org.apache.ignite.internal.IgnitionEx.loadConfiguration(IgnitionEx.java:823)
  at
org.apache.ignite.spark.impl.IgniteRelationProvider$$anonfun$configProvider$1$1.apply(IgniteRelationProvider.scala:216)
  at
org.apache.ignite.spark.impl.IgniteRelationProvider$$anonfun$configProvider$1$1.apply(IgniteRelationProvider.scala:213)
  at org.apache.ignite.spark.Once.apply(IgniteContext.scala:222)
  at org.apache.ignite.spark.IgniteContext.ignite(IgniteContext.scala:144)
  at org.apache.ignite.spark.IgniteContext.(IgniteContext.scala:63)
  at org.apache.ignite.spark.IgniteContext$.apply(IgniteContext.scala:192)
  at
org.apache.ignite.spark.impl.IgniteRelationProvider.igniteContext(IgniteRelationProvider.scala:236)
  at
org.apache.ignite.spark.impl.IgniteRelationProvider.createRelation(IgniteRelationProvider.scala:62)
  at
org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
  at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
  ... 53 elided
Caused by: org.springframework.beans.factory.BeanCreationException: Error
creating bean with name
'org.apache.ignite.configuration.IgniteConfiguration#0' defined in URL
[file:/C:/spark_2.1/conf/stoxx-server.xml]: Initialization of bean failed;
nested exception is java.lang.NoClassDefFoundError:
javax/cache/configuration/Factory
  at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:564)
  at
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:483)
  at
org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:306)
  at
org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
  at
org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:302)
  at
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
  at
org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
  at
org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:866)
  at
org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:542)
  at
org.apache.ignite.internal.util.spring.IgniteSpringHelperImpl.applicationContext(IgniteSpringHelperImpl.java:381)
  ... 69 more
Caused by: java.lang.NoClassDefFoundError: javax/cache/configuration/Factory
  at java.lang.Class.getDeclaredMethods0(Native Method)
  at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
  at java.lang.Class.privateGetPublicMethods(Class.java:2902)
  at java.lang.Class.getMethods(Class.java:1615)
  at
org.springframework.beans.ExtendedBeanInfoFactory.supports(ExtendedBeanInfoFactory.java:54)
  at

Re: what version of spark is supported by ignite 2.5

2018-07-18 Thread wt
Thanks, but that doesn't work.

I have the following jars on the Spark classpath for Spark 2.2:

cache-api-1.0.0.jar
spring-expression-4.3.7.RELEASE.jar
spring-context-4.3.7.RELEASE.jar
spring-core-4.3.7.RELEASE.jar
spring-beans-4.3.7.RELEASE.jar
ignite-spring-2.5.0.jar
ignite-core-2.5.0.jar
ignite-spark-2.5.0.jar


Running Spark in local mode, I execute the following code:

import org.apache.ignite.spark.IgniteDataFrameSettings._
   import org.apache.ignite.spark._
   
   val df =
spark.read.format(FORMAT_IGNITE).option(OPTION_TABLE,"ComplianceSummaryCache.ComplianceSummary").option(OPTION_CONFIG_FILE,"stoxx-server.xml").load()

I get the following error 


Welcome to
    __
 / __/__  ___ _/ /__
_\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
  /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java
1.8.0_151)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.ignite.spark.IgniteDataFrameSettings._
import org.apache.ignite.spark.IgniteDataFrameSettings._

scala>import org.apache.ignite.spark._
import org.apache.ignite.spark._

scala>
 |val df =
spark.read.format(FORMAT_IGNITE).option(OPTION_TABLE,"ComplianceSummaryCache.ComplianceSummary").option(OPTION_CONFIG_FILE,"stoxx-server.xml").load()
18/07/18 10:59:06 ERROR : Failed to resolve default logging config file:
config/java.util.logging.properties
Console logging handler is not configured.
[10:59:07]__  
[10:59:07]   /  _/ ___/ |/ /  _/_  __/ __/
[10:59:07]  _/ // (7 7// /  / / / _/
[10:59:07] /___/\___/_/|_/___/ /_/ /___/
[10:59:07]
[10:59:07] ver. 2.5.0#20180523-sha1:86e110c7
[10:59:07] 2018 Copyright(C) Apache Software Foundation
[10:59:07]
[10:59:07] Ignite documentation: http://ignite.apache.org
[10:59:07]
[10:59:07] Quiet mode.
[10:59:07]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[10:59:07]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[10:59:07]
[10:59:07] OS: Windows 7 6.1 amd64
[10:59:07] VM information: Java(TM) SE Runtime Environment 1.8.0_151-b12
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.151-b12
18/07/18 10:59:07 WARN : Possible too long JVM pause: 510 milliseconds.
18/07/18 10:59:07 WARN GridDiagnostic: Initial heap size is 128MB (should be
no less than 512MB, use -Xms512m -Xmx512m).
[10:59:07] Initial heap size is 128MB (should be no less than 512MB, use
-Xms512m -Xmx512m).
[10:59:07] Configured plugins:
[10:59:07]   ^-- None
[10:59:07]
[10:59:07] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0]]
18/07/18 10:59:08 WARN TcpCommunicationSpi: Message queue limit is set to 0
which may lead to potential OOMEs when running cache operations in
FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and
receiver sides.
[10:59:08] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
18/07/18 10:59:08 WARN NoopCheckpointSpi: Checkpoints are disabled (to
enable configure any GridCheckpointSpi implementation)
18/07/18 10:59:08 WARN GridCollisionManager: Collision resolution is
disabled (all jobs will be activated upon arrival).
[10:59:08] Security status [authentication=off, tls/ssl=off]
[10:59:08] REST protocols do not start on client node. To start the
protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system
property.
18/07/18 10:59:13 ERROR GridDhtPartitionsExchangeFuture: Failed to
reinitialize local partitions (preloading will be stopped):
GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion [topVer=2,
minorTopVer=0], discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=0c59ff6b-7c95-4072-9866-844f87a6f12d, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1,
172.19.112.175], sockAddrs=[PC50439.oa.pnrad.net/172.19.112.175:0,
/0:0:0:0:0:0:0:1:0, /127.0.0.1:0], discPort=0, order=2, intOrder=0,
lastExchangeTime=1531907
49159, loc=true, ver=2.5.0#20180523-sha1:86e110c7, isClient=true], topVer=2,
nodeId8=0c59ff6b, msg=null, type=NODE_JOINED, tstamp=1531907953545],
nodeId=0c59ff6b, evt=NODE_JOINED]
class org.apache.ignite.IgniteException: Spring application context resource
is not injected.
at
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:171)
at
org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory.create(CacheJdbcPojoStoreFactory.java:100)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.createCache(GridCacheProcessor.java:1437)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.prepareCacheStart(GridCacheProcessor.java:1945)
at
org.apache.ignite.internal.processors.cache.GridCacheProcessor.startCachesOnLocalJoin(GridCacheProcessor.java:1830)
at

CRUD operation on Hibernate L2 Cache ?

2018-07-18 Thread monstereo
I can run the GitHub example for the Hibernate L2 cache with Ignite, and I
am also adding new data to my database like this:

User user = new User("jedi", "Luke", "Skywalker");
user.getPosts().add(new Post(user, "Let the Force be with you."));
ses.save(user);

When I add a new element to the cache, it should also update the database.
When I update an element in the cache, it should also update the database.
When I delete an element from the cache, it should also delete that element
from the database.

How can I do that? Am I allowed to use igniteCache.get, put, and delete?
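For context, the Hibernate L2 cache region is managed by Hibernate itself, so writing to it directly is not how it is designed to be used. If the goal is cache operations that also hit the database, Ignite's write-through/read-through CacheStore on a regular cache does exactly that. A minimal XML sketch (the cache name "userCache" and the data source bean "myDataSource" are assumptions, not names from this thread):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <!-- A regular Ignite cache, separate from the Hibernate L2 region. -->
  <property name="name" value="userCache"/>
  <!-- put/remove on this cache also write/delete in the database. -->
  <property name="writeThrough" value="true"/>
  <!-- get on a missing key loads the entry from the database. -->
  <property name="readThrough" value="true"/>
  <property name="cacheStoreFactory">
    <bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
      <!-- "myDataSource" is an assumed Spring bean pointing at your database. -->
      <property name="dataSourceBean" value="myDataSource"/>
    </bean>
  </property>
</bean>
```

With such a configuration, igniteCache.put(...) and igniteCache.remove(...) propagate to the database, and igniteCache.get(...) falls back to it on a miss.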




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Runtime failure on bounds

2018-07-18 Thread aealexsandrov
Hi,

If you saw "page content is corrupted" and after that upgraded to version
2.6, it's possible that your persistence files are still broken.

The simplest fix here is to clean your PDS (work/db) directory before
upgrading to 2.6.

If the data is important, you can also try the following:

1) Stop the cluster.
2) Remove only the index.bin files from
work/db///index.bin.
3) Restart the cluster.
4) Wait for the message in the Ignite log that the indexes were rebuilt.

If that does not help, then only cleaning the PDS will fix it.
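The index.bin removal step above can be scripted. A minimal Java sketch that walks a PDS directory and deletes only index.bin files, leaving partition data files intact (the "work/db" default path is an assumption — point it at your actual work directory):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class IndexCleaner {
    /** Deletes every index.bin under dbDir, keeping all other files.
     *  Ignite rebuilds the indexes on the next cluster start. Returns the count. */
    public static int deleteIndexFiles(Path dbDir) throws IOException {
        List<Path> indexFiles;
        try (Stream<Path> paths = Files.walk(dbDir)) {
            // Collect first, then delete, so we never mutate while walking.
            indexFiles = paths
                .filter(Files::isRegularFile)
                .filter(p -> p.getFileName().toString().equals("index.bin"))
                .collect(Collectors.toList());
        }
        for (Path p : indexFiles)
            Files.delete(p);
        return indexFiles.size();
    }

    public static void main(String[] args) throws IOException {
        // "work/db" is the default PDS location; pass your own work dir if different.
        Path dbDir = Paths.get(args.length > 0 ? args[0] : "work/db");
        System.out.println("Deleted " + deleteIndexFiles(dbDir) + " index.bin file(s)");
    }
}
```

Run it only while the whole cluster is stopped, then restart and wait for the index-rebuild message in the log.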

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: what version of spark is supported by ignite 2.5

2018-07-18 Thread aealexsandrov
Sorry, I made a typo. Ignite contains "spark-core_2.11" inside, not
spark-core_2.10.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: how to use 3 party file system

2018-07-18 Thread Evgenii Zhuravlev
Hi,

To connect to a 3rd-party store you will need to implement your own
CacheStore that interacts with the 3rd-party file system. Here is good
documentation with examples:
https://apacheignite.readme.io/docs/3rd-party-store

I think you can implement it using the Hive JDBC driver, or you can access
HDFS directly.
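As a rough illustration of the advice above (a sketch, not code from this thread), a custom store typically extends CacheStoreAdapter and overrides load/write/delete; the three private helpers below are hypothetical placeholders you would replace with Hive JDBC or HDFS client calls:

```java
import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;

/** Sketch of a CacheStore bridging Ignite to an external 3rd-party store.
 *  Key/value types Long/String are arbitrary; the *ExternalStore helpers
 *  are placeholders for Hive JDBC or HDFS access. */
public class ThirdPartyStore extends CacheStoreAdapter<Long, String> {
    @Override public String load(Long key) {
        // Called on a cache miss when readThrough is enabled.
        return readFromExternalStore(key);
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) {
        // Called on cache.put(...) when writeThrough is enabled.
        writeToExternalStore(entry.getKey(), entry.getValue());
    }

    @Override public void delete(Object key) {
        // Called on cache.remove(...).
        deleteFromExternalStore((Long)key);
    }

    // --- hypothetical external-store plumbing, to be replaced ---
    private String readFromExternalStore(Long key) { return null; /* e.g. Hive JDBC SELECT */ }
    private void writeToExternalStore(Long key, String val) { /* e.g. Hive JDBC INSERT */ }
    private void deleteFromExternalStore(Long key) { /* e.g. Hive JDBC DELETE */ }
}
```

The store is then plugged into the cache via cacheStoreFactory with writeThrough/readThrough enabled, as shown in the documentation linked above.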

Evgenii

2018-07-18 8:32 GMT+03:00 zhouxy1123 :

> hi, I want to use a 3rd-party file system like HDFS underneath Ignite.
> Does Ignite now support a 3rd-party file system?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Logging in C++

2018-07-18 Thread aealexsandrov
Hi,
What do you mean by "remote logging"?

If you are asking how to configure the standard Apache Ignite logging:

An Ignite Java node stores its log in a file located in the work directory.
A C++ Ignite node is started as a wrapper around a Java node, and when you
start it you provide an XML configuration file. How to configure logging via
XML is described here (don't forget to add the required Java binaries to the
path):

https://apacheignite.readme.io/docs#section-log4j2
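Briefly, the XML route sets the gridLogger property on the node configuration. A sketch (the log4j2 config path is an assumption, and the ignite-log4j2 module must be on the classpath):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="gridLogger">
    <bean class="org.apache.ignite.logger.log4j2.Log4J2Logger">
      <!-- Path to your Log4j2 configuration file (assumed location). -->
      <constructor-arg type="java.lang.String" value="config/ignite-log4j2.xml"/>
    </bean>
  </property>
</bean>
```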

If you are going to log something from your own C++ code, you can use an
existing solution such as log4cpp, Pantheios, or Glog. How they should be
configured is described on their official sites.

BR,
Andrei





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


what version of spark is supported by ignite 2.5

2018-07-18 Thread wt
I can't get it to work with Spark 2.2: I hit the context error I asked about
last week, with no answers. The documentation is lacking here; it only
describes the features, not the version compatibility.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/