Re: Triggering Rebalancing Programmatically get error while requesting

2019-03-26 Thread luongbd.hust
Thank you for your enthusiasm.

I attached the logs for a longer time after the error occurred.

logs.rar
  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite node goes down often

2019-03-26 Thread newigniter
Tnx Ilya. 

And what could be possible causes of long GC pauses?

Tnx



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Event listeners on servers only

2019-03-26 Thread Stanislav Lukyanov
The options I see
1. Register a local listener on each node; you can call localListen() from a 
broadcast() job or when the node starts. 
2. Deploy a cluster-singleton service that calls remoteListen() in its 
initialize().

I guess the first one will perform better.
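A minimal sketch of option 1, assuming Ignite is on the classpath (the printed message is a placeholder; this needs a running cluster to execute):

```java
// Broadcast a job to every server node; each node registers a LOCAL
// listener for expiry events, so the listeners keep working even if
// the client that deployed them leaves the topology.
ignite.compute(ignite.cluster().forServers()).broadcast(() -> {
    Ignite local = Ignition.localIgnite();
    local.events().localListen(evt -> {
        System.out.println("Expired locally: " + evt);
        return true; // return true to keep the listener registered
    }, EventType.EVT_CACHE_OBJECT_EXPIRED);
});
```

Note that EVT_CACHE_OBJECT_EXPIRED must also be enabled via includeEventTypes in the node configuration for the events to fire at all.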

Stan 

From: maros.urbanec
Sent: March 26, 2019, 15:59
To: user@ignite.apache.org
Subject: Event listeners on servers only

Hi all,
  we're faced with the following requirement - when a cache entry expires
and is about to get removed from the cache, listen to the event, alter an
attribute on the entry and write it to some other cache.

It can be implemented as a client-side event listener, but that ceases to
function as soon as the client leaves the topology.

UUID listenerId = ignite.events().remoteListen(
    (e, uuid) -> {
        System.out.println("Expired event - executed on the client");
        return true;
    },
    e -> {
        System.out.println("Expired event - executed on one of the servers");
        return true;
    },
    EventType.EVT_CACHE_OBJECT_EXPIRED
);

Calling ignite.events(ignite.cluster().forServers()).remoteListen instead
makes no difference as far as I can tell.

Is there any way to run an event listener on the server without a
corresponding client? Is there any way for the listener to outlive its
client?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Spark dataframe to Ignite write issue .

2019-03-26 Thread Harshal Patil
Hi Nikolay , Denis

If I disable the streamer_overwrite option, then it works fine:

df1.write
  .format(IgniteDataFrameSettings.FORMAT_IGNITE)
  .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE, configPath)
  .option(IgniteDataFrameSettings.OPTION_TABLE, "ENTITY_PLAYABLE")
  .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS, "gameId,playableId,companyId,version")
  //.option(IgniteDataFrameSettings.OPTION_STREAMER_ALLOW_OVERWRITE, "true")
  .mode(SaveMode.Append)
  .save()


But after enabling "IgniteDataFrameSettings.OPTION_STREAMER_ALLOW_OVERWRITE"
I get the previously mentioned errors intermittently; I guess it may be due to
https://stackoverflow.com/questions/5763747/h2-in-memory-database-table-not-found.

One more thing: if I create the IgniteContext like

val configPath = "/Users/harshal/Downloads/Ignite23-project/src/main/resources/META-INF/Ignite23-server.xml"
val ic : IgniteContext = new IgniteContext(sc, configPath)

then I am not able to inject dependencies, so I am doing

public static IgniteConfiguration createConfiguration() throws Exception {
  IgniteConfiguration cfg = new IgniteConfiguration();
  cfg.setIgniteInstanceName("Ignite23");
  TcpDiscoverySpi discovery = new TcpDiscoverySpi();
  TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
  ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500..47510"));
  discovery.setIpFinder(ipFinder);
  cfg.setDiscoverySpi(discovery);
  cfg.setCacheConfiguration(new CacheConfiguration[]{cacheEntityPlayableCache()});
  return cfg;
}

public static CacheConfiguration cacheEntityPlayableCache() throws Exception {
  CacheConfiguration ccfg = new CacheConfiguration();
  ccfg.setName("EntityPlayableCache");
  ccfg.setCacheMode(CacheMode.PARTITIONED);
  ccfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
  CacheJdbcPojoStoreFactory cacheStoreFactory = new CacheJdbcPojoStoreFactory();
  cacheStoreFactory.setDataSourceFactory(new Factory() {
public DataSource create() {
  return ServerConfigurationFactory.DataSources.INSTANCE_dsPostgreSQL_Rein;
}
  });
  cacheStoreFactory.setDialect(new BasicJdbcDialect());
  cacheStoreFactory.setTypes(new JdbcType[]{jdbcTypeEntityPlayable(ccfg.getName())});
  cacheStoreFactory.setSqlEscapeAll(true);
  ccfg.setCacheStoreFactory(cacheStoreFactory);
  ccfg.setReadThrough(true);
  ccfg.setWriteThrough(true);
  ArrayList qryEntities = new ArrayList();
  QueryEntity qryEntity = new QueryEntity();
  qryEntity.setKeyType("com.gmail.patil.j.harshal.model.EntityPlayableKey");
  qryEntity.setValueType("com.gmail.patil.j.harshal.model.EntityPlayable");
  qryEntity.setTableName("entity_playable");
  HashSet keyFields = new HashSet();
  keyFields.add("gameId");
  keyFields.add("playableid");
  keyFields.add("companyId");
  keyFields.add("version");
  qryEntity.setKeyFields(keyFields);
  LinkedHashMap fields = new LinkedHashMap();
  fields.put("gameId", "java.lang.Long");
  fields.put("playableid", "java.lang.Long");
  fields.put("companyId", "java.lang.Long");
  fields.put("version", "java.lang.Integer");
  fields.put("eventTimestamp", "java.sql.Timestamp");
  fields.put("eventTimestampSys", "java.lang.Long");
  fields.put("companyIdPartition", "java.lang.Long");
  fields.put("partitionkey", "java.lang.Long");
  qryEntity.setFields(fields);
  ArrayList indexes = new ArrayList();
  QueryIndex index = new QueryIndex();
  index.setName("company_id_partition_hash_entity_playable_hash");
  index.setIndexType(QueryIndexType.SORTED);
  LinkedHashMap indFlds = new LinkedHashMap();
  indFlds.put("companyIdPartition", false);
  index.setFields(indFlds);
  indexes.add(index);
  index = new QueryIndex();
  index.setName("companyId_entity_playable_hash");
  index.setIndexType(QueryIndexType.SORTED);
  indFlds = new LinkedHashMap();
  indFlds.put("companyId", false);
  index.setFields(indFlds);
  indexes.add(index);
  index = new QueryIndex();
  index.setName("gameId_entity_playable_hash");
  index.setIndexType(QueryIndexType.SORTED);
  indFlds = new LinkedHashMap();
  indFlds.put("gameId", false);
  index.setFields(indFlds);
  indexes.add(index);
  index = new QueryIndex();
  index.setName("company_id_partition_entity_playable_normal");
  index.setIndexType(QueryIndexType.SORTED);
  indFlds = new LinkedHashMap();
  indFlds.put("companyIdPartition", false);
  index.setFields(indFlds);
  indexes.add(index);
  index = new QueryIndex();
  index.setName("companyId_entity_playable_normal");
  index.setIndexType(QueryIndexType.SORTED);
  indFlds = new LinkedHashMap();
  indFlds.put("companyId", false);
  index.setFields(indFlds);
  indexes.add(index);
  index = new QueryIndex();
  index.setName("gameId_entity_playable_normal");
  index.setIndexType(QueryIndexType.SORTED);
  indFlds = new LinkedHashMap();
  indFlds.put("gameId", false);
  index.setFields(indFlds);
  indexes.add(index);
  qryEntity.setIndexes(indexes);
  qryEntities.a

Re: Failed to process selector key

2019-03-26 Thread Ilya Kasnacheev
Hello!

Most often it means that remote node closed the socket.

Please see other answer posted.

Regards,
-- 
Ilya Kasnacheev


Mon, Mar 25, 2019 at 22:47, newigniter :

> Hi!
>
> I get a bunch of "Failed to process selector key" errors. Can someone
> please
> help what is the reason I could get those errors and any possible ways I
> could fix this?
> Below is the full error from my log.
>
> Tnx.
>
>
> [grid-nio-worker-client-listener-2-#31][ClientListenerProcessor] Failed to
> process selector key [ses=GridSelectorNioSessionImpl
> [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
> lim=8192 cap=8192], super=AbstractNioClientWorker [idx=2, bytesRcvd=0,
> bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
> [name=grid-nio-worker-client-listener-2, igniteInstanceName=null,
> finished=false, heartbeatTs=1553113497439, hashCode=1749319672,
> interrupted=false, runner=grid-nio-worker-client-listener-2-#31]]],
> writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null,
> super=GridNioSessionImpl [locAddr=/172.30.4.65:10800,
> rmtAddr=/50.84.201.2:59934, createTime=1553105345772, closeTime=0,
> bytesSent=132019, bytesRcvd=3510, bytesSent0=0, bytesRcvd0=0,
> sndSchedTime=1553105610035, lastSndTime=1553105610035,
> lastRcvTime=1553105610035, readsPaused=false,
> filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
> GridNioCodecFilter [parser=ClientListenerBufferedParser,
> directMode=false]],
> accepted=true, markedForClose=false]]]
> java.io.IOException: Operation timed out
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:197)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1104)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2389)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2156)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1797)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.lang.Thread.run(Thread.java:748)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite node goes down often

2019-03-26 Thread Ilya Kasnacheev
Hello!

[21:16:59,387][WARNING][jvm-pause-detector-worker][IgniteKernal] Possible
too long JVM pause: 13584 milliseconds.
[21:16:59,388][INFO][tcp-disco-sock-reader-#6][TcpDiscoverySpi] Finished
serving remote node connection [rmtAddr=/172.30.4.64:50487, rmtPort=50487
[21:16:59,398][INFO][tcp-disco-msg-worker-#2][TcpDiscoverySpi] Local node
seems to be disconnected from topology (failure detection timeout is
reached) [failureDetectionTimeout=1, connCheckInterval=500]

You have a 13.5-second GC pause but a failure detection timeout of 10 seconds.
This means nodes will be kicked out of the topology.

Consider adjusting failureDetectionTimeout so it's considerably longer than
your GC pauses.
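For example, a sketch of the adjustment (the 30-second value is an assumption; pick something well above your observed worst-case pause):

```java
// The failure detection timeout must exceed the worst-case GC pause,
// otherwise a paused node is treated as failed and kicked out.
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setFailureDetectionTimeout(30_000); // milliseconds; the default is 10_000
```

The same property can be set as failureDetectionTimeout in the Spring XML configuration.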

Regards,
-- 
Ilya Kasnacheev


Mon, Mar 25, 2019 at 23:15, newigniter :

> I have a problem where my ignite node goes down often.
> I attached the full log I have from last time my node crashed.
> What I see from the log is problems with GC(possible long GC pause) and if
> I
> understand correctly after that some locking happens and after some time
> node simple crashed.
>
> Could someone please take a look and point me in the right direction?
> If I have long GC pauses, what is the example of something that could cause
> it?
> I do have some queries on my ignite node which are "heavy" but I added 10gb
> of heap memory to both of my nodes and that is much more than the total
> amount of data which is being queried at times when the node goes down.
>
> I currently use Ignite 2.7. I have 2 nodes replicated cluster with 10 GB of
> the heap and 30 GB of non-heap memory per node.
>
> Tnx.
>
> ignite-5768a4b3.log
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2228/ignite-5768a4b3.log>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Triggering Rebalancing Programmatically get error while requesting

2019-03-26 Thread Ilya Kasnacheev
Hello!

Can you please collect thread dumps from all nodes (after waiting around a
minute once the error appears)?

Regards,
-- 
Ilya Kasnacheev


Tue, Mar 26, 2019 at 05:34, luongbd.hust :

> hi Ilya,
>
> I tried to follow the way you instructed.
> But nothing has changed.
> I have attached a log and configuration when testing.
>
> disable-fail-handling.rar
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2354/disable-fail-handling.rar>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Event listeners on servers only

2019-03-26 Thread maros.urbanec
Hi all,
  we're faced with the following requirement - when a cache entry expires
and is about to get removed from the cache, listen to the event, alter an
attribute on the entry and write it to some other cache.

It can be implemented as a client-side event listener, but that ceases to
function as soon as the client leaves the topology.

UUID listenerId = ignite.events().remoteListen(
    (e, uuid) -> {
        System.out.println("Expired event - executed on the client");
        return true;
    },
    e -> {
        System.out.println("Expired event - executed on one of the servers");
        return true;
    },
    EventType.EVT_CACHE_OBJECT_EXPIRED
);

Calling ignite.events(ignite.cluster().forServers()).remoteListen instead
makes no difference as far as I can tell.

Is there any way to run an event listener on the server without a
corresponding client? Is there any way for the listener to outlive its
client?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite node is down due to full RAM usage

2019-03-26 Thread praveeng
Hi,

As we can't upgrade the Java version to 1.8, we can't use the latest Ignite
version.
If it were a heap memory issue, I would have gotten an OOM error in the logs,
and a heap dump would have been generated automatically.
This could be because the data in off-heap memory is not expired and the RAM is
used up completely.
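If off-heap entries never expire, one option is to configure an expiry policy on the cache so they are removed over time. A sketch, assuming Ignite on the classpath (the cache name and 30-minute TTL are placeholders):

```java
// Entries in this cache expire 30 minutes after creation; eager TTL
// makes a background thread remove expired entries proactively instead
// of waiting for the next access.
CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setExpiryPolicyFactory(
    CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 30)));
ccfg.setEagerTtl(true);
```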

Thanks,
Praveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Spark dataframe to Ignite write issue .

2019-03-26 Thread Nikolay Izhikov
Hello, Harshal

Can you please share your Ignite config?
Especially the "*ENTITY_PLAYABLE*" cache definition.

Tue, Mar 26, 2019 at 05:35, Denis Magda :

> Hi, as far as I can guess from the shared details, you should pass the
> IgniteCache name as a SQL schema if SQL metadata was configured via XML or
> annotations. Try this "INSERT INTO cacheName.ENTITY_PLAYABLE".
>
> -
> Denis
>
>
> On Mon, Mar 25, 2019 at 7:18 AM Harshal Patil <
> harshal.pa...@mindtickle.com> wrote:
>
>> Hi ,
>> I am running spark 2.3.1 with Ignite 2.7.0 . I have configured Postgres
>> as cachePersistance store . After loading of cache , i can read and convert
>> data from ignite cache to Spark Dataframe . But while writing back to
>> ignite , I get below error
>>
>> class org.apache.ignite.internal.processors.query.IgniteSQLException: *Table
>> "ENTITY_PLAYABLE" not found*; SQL statement:
>>
>> INSERT INTO
>> ENTITY_PLAYABLE(GAMEID,PLAYABLEID,COMPANYID,VERSION,EVENTTIMESTAMP,EVENTTIMESTAMPSYS,COMPANYIDPARTITION,partitionkey)
>> VALUES(?,?,?,?,?,?,?,?) [42102-197]
>>
>> at
>> *org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.streamUpdateQuery*
>> (IgniteH2Indexing.java:1302)
>>
>> at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:2206)
>>
>> at
>> org.apache.ignite.internal.processors.query.GridQueryProcessor$5.applyx(GridQueryProcessor.java:2204)
>>
>> at
>> org.apache.ignite.internal.util.lang.IgniteOutClosureX.apply(IgniteOutClosureX.java:36)
>>
>>
>>
>> *Read from Ignite* :
>>
>>
>> loading cache
>>
>>
>> val conf = new SparkConf()
>> conf.setMaster("spark://harshal-patil.local:7077")
>> //conf.setMaster("local[*]")
>> conf.setAppName("IGniteTest")
>> conf.set("spark.executor.heartbeatInterval", "900s")
>> conf.set("spark.network.timeout", "950s")
>> conf.set("spark.default.parallelism", "4")
>> conf.set("spark.cores.max", "4")
>> 
>> conf.set("spark.jars","target/pack/lib/spark_ignite_cache_test_2.11-0.1.jar")
>>
>> val cfg = () => ServerConfigurationFactory.createConfiguration()
>>
>> Ignition.start(ServerConfigurationFactory.createConfiguration())
>>
>> val ic : IgniteContext = new IgniteContext(sc,  cfg)
>>
>> ic.ignite().cache("EntityPlayableCache").loadCache(null.asInstanceOf[IgniteBiPredicate[_,
>>  _]])
>>
>>
>>
>>
>> *spark.read*
>>
>>   .format(IgniteDataFrameSettings.*FORMAT_IGNITE*)
>>
>>   .option(IgniteDataFrameSettings.*OPTION_CONFIG_FILE*, configPath)
>>
>>   .option(IgniteDataFrameSettings.*OPTION_TABLE*,
>> "ENTITY_PLAYABLE").load().select(*sum*("partitionkey").alias("sum"),
>> *count*("gameId").as("total")).collect()(0)
>>
>>
>> *Write To Ignite* :
>>
>>
>> *df.write*
>>
>>   .format(IgniteDataFrameSettings.*FORMAT_IGNITE*)
>>
>>   .option(IgniteDataFrameSettings.*OPTION_CONFIG_FILE*, configPath)
>>
>>
>>   .option(IgniteDataFrameSettings.*OPTION_TABLE*, "ENTITY_PLAYABLE")
>>
>> .option(IgniteDataFrameSettings.
>> *OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS*,
>> "gameId,playableId,companyId,version")
>>
>> .option(IgniteDataFrameSettings.*OPTION_STREAMER_ALLOW_OVERWRITE*,
>> "true")
>>
>>   .mode(SaveMode.*Append*)
>>
>>   .save()
>>
>>
>>
>> I think the problem is with *Spring bean injection on the executor node*.
>> Please help; what am I doing wrong?
>>
>>
>>
>>