Re: Web Agent connected to Cluster but console doesn't show

2019-07-08 Thread Sankar Ramiah
I am using Apache Ignite v2.0.0, as this is the latest approved version
available to my organization. However, I could get it working with the latest
version of Ignite (2.7.5) on my personal machine. Do we need to do anything
additional with the older version (2.0.0)? I ask because the web agent is up
and connected to both the console back-end and the cluster.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Node not joined to the cluster - joining node doesn't have encryption data

2019-07-08 Thread Andrey Dolmatov
[17:01:02,269][SEVERE][ttl-cleanup-worker-#47][] Critical system error
detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
[type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker
[name=tcp-disco-msg-worker, igniteInstanceName=null, finished=false,
heartbeatTs=1562239850489]]]

Thread [name="tcp-disco-msg-worker-#2", id=95, state=RUNNABLE, blockCnt=0,
waitCnt=50038]
at java.base@11/java.net.PlainSocketImpl.waitForConnect(Native
Method)

The reason is that the remote node is unreachable. Try to analyze the remote
node's logs.

On Tue, Jul 9, 2019 at 09:11, shahidv wrote:

> Hi Ilya,
>
> Here is my log and configuration.
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2499/ignite-6784e9b3.log>
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2499/default-config.xml>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Node not joined to the cluster - joining node doesn't have encryption data

2019-07-08 Thread shahidv
Hi Ilya,

Here is my log and configuration.
http://apache-ignite-users.70518.x6.nabble.com/file/t2499/ignite-6784e9b3.log
http://apache-ignite-users.70518.x6.nabble.com/file/t2499/default-config.xml





Re: Set Expiry Policies When Creating Cache Using Java Thin Client

2019-07-08 Thread Shane Duan
Yes, I know I can set the template in SQL (via JDBC) or the REST API. But how
can I set the template with the Java thin client?

Thanks!

On Mon, Jul 8, 2019 at 5:55 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> Yes, you can declare cache configuration templates, refer to them when
> creating tables from thin client/JDBC, as per documentation:
> https://apacheignite.readme.io/docs/cache-template
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Fri, Jul 5, 2019 at 21:22, Shane Duan wrote:
>
>> Thanks, Denis. Alternatively, does ignite thin client provide a way to
>> use a server side pre-defined cache configurations, just like the TEMPLATE
>> in the CREATE TABLE statement in SQL?
>>
>> Thanks,
>> Shane
>>
>> On Wed, Jul 3, 2019 at 2:41 PM Denis Magda  wrote:
>>
>>> Shane,
>>>
>>> That's unavailable on the thin client's end yet. My suggestion is to
>>> configure caches with the required expiration policies on the server's
>>> configuration end.
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Tue, Jul 2, 2019 at 10:54 AM Shane Duan  wrote:
>>>
 I mean ClientCacheConfiguration

 On Tue, Jul 2, 2019 at 10:48 AM Shane Duan  wrote:

> Hi Igniters,
>
> How can I set expiry policies if I have to create a cache using the Java
> Thin client? I did not see any API to do so in ClientConfiguration.
>
> Thanks,
> Shane
>


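Ilya's template approach can be sketched in SQL: the server declares a cache configuration template with the desired expiry policy, and the thin client/JDBC side only references it by name in CREATE TABLE. The template name `withExpiry` below is an assumption, not a predefined template:

```sql
-- Sketch, assuming a cache template named 'withExpiry' has been declared
-- on the server (a CacheConfiguration with an ExpiryPolicyFactory,
-- registered as a template). The client only references the name.
CREATE TABLE person (
  id INT PRIMARY KEY,
  name VARCHAR
) WITH "template=withExpiry";
```

Any table created this way inherits the template's cache settings, including expiry.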

Re: add data to table, sql add ,cache add has different result ,I use ignite 2.6

2019-07-08 Thread okni-67

City_id is defined in the key, and the key never shows in the value. How can I
use SQL insert to insert the key the way the cache does, and how can I use a
cache insert to insert the value the way SQL does? I want inserts via cache and
via SQL to stay consistent. Can you give me a solution?

> On Jul 9, 2019, at 11:46 AM, Andrey Dolmatov wrote:
> 
> City_id should be part of the key builder, not the value builder. That's because
> city_id is defined in the key.
> 
> On Tue, Jul 9, 2019, 4:56 AM okni-67 wrote:
> 
>>   I created a table below
>> 
>> CREATE TABLE IF NOT EXISTS person (
>>   id int,
>>   city_id int,
>>   name varchar,
>>   age int, 
>>   company varchar,
>>   PRIMARY KEY (id, city_id)
>> ) WITH 
>> "ATOMICITY=ATOMIC,WRITE_SYNCHRONIZATION_MODE=PRIMARY_SYNC,cache_name=PersonCache,template=partitioned,backups=1,affinity_key=city_id,
>>  key_type=org.apache.ignite.cache.affinity.AffinityKey, 
>> value_type=com.okni.okkong.data.common.entity.Person";
>> 
>> And I use sql below 
>> 
>> INSERT INTO person values(2,5,'test1',19,'okni')
>> 
>> I can find data like below
>> 
>> 
>> And. I use ignite cache put binary object like  below
>> 
>> 
>> 
>> The problem: in DBeaver, my new data's id and city_id are null.
>> 
>> 
>> 
>>   Using the cache I can get the inserted id and city_id values, but for the
>> SQL-inserted data the id is null, like below
>> 
>> 
>> I want to know why, thanks.
>> 
> 
> 



Re: INSERT and MERGE statements

2019-07-08 Thread vitalys
Hi, I have a follow-up.

MERGE INTO works fine when I define matching fields between the SOURCE and
DESTINATION caches. However, the MERGE command nullifies fields in the
DESTINATION table when they are not part of the SOURCE.

For instance, an object in cache DST has 3 fields (field1, field2, field3)
with values 1, 2, 3;
an object in cache SRC also has 3 fields (field2, field3, field4) with
values.

When I MERGE an object from SRC cache with an Object in DST cache : 

MERGE INTO DST (field2, field3, _key)
SELECT field2,field3,_key FROM SRC where _key =  

it updates fields field2 and field3 in the DST cache to 2 and 3, but it also
updates field1 to NULL.

How do I preserve the existing values in the destination cache?

I did some research, and it seems the MERGE INTO ... WHEN MATCHED ...
construct is not supported by Ignite.
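One common workaround, sketched under the assumption that DST can be read back through SQL: list every destination column in the MERGE and re-select the unchanged ones from DST itself via a join, so nothing is overwritten with NULL.

```sql
-- Sketch: preserve DST.field1 by selecting it back from DST while taking
-- field2/field3 from SRC. Table and column names follow the example above.
MERGE INTO DST (field1, field2, field3, _key)
SELECT d.field1, s.field2, s.field3, s._key
FROM SRC s JOIN DST d ON d._key = s._key
WHERE s._key = ?;
```

This only works for keys already present in DST; new keys would need the join replaced with a LEFT JOIN and a default for field1.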






Re: An existing connection was forcibly closed by the remote host

2019-07-08 Thread Ilya Kasnacheev
Hello!

I think it makes sense to have a single client per handling
thread. Close it only when idle.

Regards.
-- 
Ilya Kasnacheev


On Mon, Jul 8, 2019 at 20:07, siva wrote:

> OK.
> One more question on this:
> our .NET services (say, 15 microservices) are processing 40K to 60K
> messages. In each service, I am opening and closing a connection for each
> message processed, and inserting into / reading from Ignite.
> Generally, is this good practice?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: An existing connection was forcibly closed by the remote host

2019-07-08 Thread siva
OK.
One more question on this:
our .NET services (say, 15 microservices) are processing 40K to 60K
messages. In each service, I am opening and closing a connection for each
message processed, and inserting into / reading from Ignite.
Generally, is this good practice?





Re: An existing connection was forcibly closed by the remote host

2019-07-08 Thread Ilya Kasnacheev
Hello!

If it's a thin client, then you should be able to simply re-open your client
when it turns bad.

It may be the client that is closing this connection, or it may be the network
stack (inactivity, etc.).

It is recommended to close a thin client when not using it, rather than
letting it hang around.

Regards,
-- 
Ilya Kasnacheev


On Mon, Jul 8, 2019 at 19:26, siva wrote:

> Hi,
> Is the above configuration related to the thin client? I am already using
> socketTimeout on the thin client.
>
>  like below:
>
>_igniteClientConfiguration = new IgniteClientConfiguration
> {
> Endpoints = new string[] { endPoint },
> SocketTimeout = TimeSpan.FromSeconds(30)
> }
>   
>   
>
>public async Task CreateOrUpdateRecordAsync(string cacheName,
> ICustomCacheStore data, IKeyModel key)
> {
> //get client configuration
> try
> {
> //
> using (IIgniteClient client =
> Ignition.StartClient(this._igniteClientConfiguration))
> {
> //get the cache by name
> var cache = client.GetCache<string, ICustomCacheStore>(cacheName);
> string json =
> JsonConvert.SerializeObject(key,Formatting.None);
> string base64EncodedKey =
> Convert.ToBase64String(Encoding.UTF8.GetBytes(json));
> await cache.PutAsync(base64EncodedKey, data);
>
> }
> }
> catch (Exception ex)
> {
> throw;
> }
> }
>
> With the above timeout I am getting the same exception.
>
>
> Actually, where do I need to set it? Does it need to be set in the server
> configuration XML file?
> I think I am missing something in the docs; I didn't find the property
> "socketWriteTimeout" in the documentation.
>
> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
> ...
> ...
>     <property name="communicationSpi">
>         <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>             <property name="socketWriteTimeout" value="5000"/>
>         </bean>
>     </property>
> </bean>
>
>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
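The "re-open the client when it turns bad" advice can be expressed as a generic wrapper. This is a plain-Java sketch, not Ignite API: the Supplier-based factory stands in for something like Ignition.StartClient, and the single retry is an illustrative choice.

```java
import java.util.function.Function;
import java.util.function.Supplier;

public class Reconnector<C> {
    private final Supplier<C> factory; // e.g. () -> Ignition.startClient(cfg)
    private C client;

    Reconnector(Supplier<C> factory) { this.factory = factory; }

    // Run an action against the client, recreating the client once if the
    // action fails (e.g. "connection forcibly closed by the remote host").
    <R> R withRetry(Function<C, R> action) {
        if (client == null)
            client = factory.get();
        try {
            return action.apply(client);
        } catch (RuntimeException e) {
            client = factory.get(); // reopen the client when it turns bad
            return action.apply(client);
        }
    }

    public static void main(String[] args) {
        int[] calls = {0};
        Reconnector<String> r = new Reconnector<>(() -> "client-" + (++calls[0]));
        // The first action fails once, forcing a reconnect to a fresh client.
        String result = r.withRetry(c -> {
            if (c.equals("client-1"))
                throw new RuntimeException("connection forcibly closed");
            return c;
        });
        System.out.println(result);
    }
}
```

A real implementation would narrow the caught exception to the client's connection-failure type instead of RuntimeException.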


Re: An existing connection was forcibly closed by the remote host

2019-07-08 Thread siva
Hi,
Is the above configuration related to the thin client? I am already using
socketTimeout on the thin client.

 like below:

   _igniteClientConfiguration = new IgniteClientConfiguration
   {
       Endpoints = new string[] { endPoint },
       SocketTimeout = TimeSpan.FromSeconds(30)
   };

   public async Task CreateOrUpdateRecordAsync(string cacheName,
       ICustomCacheStore data, IKeyModel key)
   {
       try
       {
           using (IIgniteClient client =
               Ignition.StartClient(this._igniteClientConfiguration))
           {
               // get the cache by name
               var cache = client.GetCache<string, ICustomCacheStore>(cacheName);
               string json =
                   JsonConvert.SerializeObject(key, Formatting.None);
               string base64EncodedKey =
                   Convert.ToBase64String(Encoding.UTF8.GetBytes(json));
               await cache.PutAsync(base64EncodedKey, data);
           }
       }
       catch (Exception)
       {
           throw;
       }
   }
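The key-encoding step above (serialize the key model to JSON, then Base64-encode the UTF-8 bytes) can be checked in isolation. A minimal Java equivalent of the C# `Convert.ToBase64String(Encoding.UTF8.GetBytes(json))` call, with a literal JSON string standing in for `JsonConvert.SerializeObject`:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class KeyEncoding {
    // Mirrors Convert.ToBase64String(Encoding.UTF8.GetBytes(json)) from the C# snippet
    static String encodeKey(String json) {
        return Base64.getEncoder().encodeToString(json.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String json = "{\"id\":1}"; // stands in for the serialized key model
        String key = encodeKey(json);
        System.out.println(key);
        // Decoding restores the original JSON key, so the mapping is lossless
        String back = new String(Base64.getDecoder().decode(key), StandardCharsets.UTF_8);
        System.out.println(back.equals(json));
    }
}
```

Base64-encoded keys are opaque but stable: equal JSON strings always map to equal cache keys.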

With the above timeout I am getting the same exception.


Actually, where do I need to set it? Does it need to be set in the server
configuration XML file?
I think I am missing something in the docs; I didn't find the property
"socketWriteTimeout" in the documentation.


<bean class="org.apache.ignite.configuration.IgniteConfiguration">
...
...
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <property name="socketWriteTimeout" value="5000"/>
        </bean>
    </property>
</bean>














Re: Web Agent connected to Cluster but console doesn't show

2019-07-08 Thread Vladimir Pligin
What version do you use?





Re: onheapCacheEnabled enormous heap consumption

2019-07-08 Thread Ilya Kasnacheev
Hello!

Data is always written to persistence immediately (via WAL). You can
control eviction of offheap with evictionThreshold and pageEvictionMode
settings of DataRegionConfiguration.

Regards,
-- 
Ilya Kasnacheev


On Mon, Jul 8, 2019 at 17:50, Andrey Dolmatov wrote:

> When data overfills the dataRegion max size, so there is no more offheap
> space available, data goes to persistence. So, what option controls how data
> pages are evicted from offheap to persistence?
>
> On Mon, Jul 8, 2019, 5:33 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Data is always stored in offheap. Eviction strictly controls onheap
>> cache. Once data is evicted from onheap it is available in offheap.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Mon, Jul 8, 2019 at 17:31, Andrey Dolmatov wrote:
>>
>>> We plan to use persistence in production. I didn't understand,
>>> CacheConfiguration.EvictionPolicy specify heap->offheap eviction,
>>> offheap->persistence eviction or both. It's not clear for me.
>>>
>>> On Mon, Jul 8, 2019, 5:19 PM Ilya Kasnacheev 
>>> wrote:
>>>
 Hello!

 Oops, I was wrong. This is indeed the wrong setting.

 Have you tried specifying evictionPolicy? I think it is the one that
 controls eviction from onheap cache. You can put a LruEvictionPolicy of 100
 000 here, for example.

 Regards,
 --
 Ilya Kasnacheev


 On Mon, Jul 8, 2019 at 17:09, Andrey Dolmatov wrote:

> No, because we didn't specify a QueryEntity.
> Is onheapCacheEnabled used for SQL only?
> What is the default value for sqlOnheapCacheMaxSize?
>
> On Mon, Jul 8, 2019 at 17:05, Ilya Kasnacheev wrote:
>
>> Hello!
>>
>> Have you tried also specifying sqlOnheapCacheMaxSize? You can specify
>> 100 000 if you like.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Mon, Jul 8, 2019 at 17:01, Andrey Dolmatov wrote:
>>
>>> We use simple replicated KV cache.
>>> We try to upload 32 000 000 small records  to it (about
>>> 6Gb in data region, persistance disabled). We load data using 
>>> DataStreamer.
>>>
>>> If we set onheapCacheEnabled=false, server node consumes heap about
>>> 500 Mb.
>>> If we set onheapCacheEnabled=true, server node consumes heap about 6
>>> Gb.
>>>
>>> Why DataStreamer uses heap memory to load data? Why on-heap size is
>>> unlimited (not just 100.000 records)? What default on-heap eviction 
>>> policy?
>>>
>>> 
>>>
>>> 
>>> >> value="true"/>
>>> 
>>>
>>> Thanks!
>>>
>>

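The region-level knobs Ilya names at the top of this thread (evictionThreshold and pageEvictionMode on DataRegionConfiguration) would be set roughly as follows; the region name and values are illustrative, not taken from the original configuration:

```xml
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="myRegion"/>
    <!-- Start evicting data pages once the region is 90% full -->
    <property name="evictionThreshold" value="0.9"/>
    <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
</bean>
```

Note that page eviction applies to pure in-memory regions; with persistence enabled, cold pages are instead rotated out to disk automatically.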

Re: On ServerNode Thread TIMED_WAITING Lock

2019-07-08 Thread Ilya Kasnacheev
Hello!

It seems that one thread is busy reading pages from disk, so another thread
can't acquire the write lock.

This may happen if you have a block device with poor characteristics, such as
an HDD, and issue queries that overwhelm it.

Regards,
-- 
Ilya Kasnacheev


On Thu, Jul 4, 2019 at 16:51, siva wrote:

> Hi, thanks for the reply.
>
> Please find the log file below. I don't know why so many threads
> are in the waiting state; because of the thread lock, or for some other
> reason, the application is also hanging.
>
> Log file:
> 
> ignite-e3c80688.log
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1379/ignite-e3c80688.log>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: onheapCacheEnabled enormous heap consumption

2019-07-08 Thread Andrey Dolmatov
When data overfills the dataRegion max size, so there is no more offheap
space available, data goes to persistence. So, what option controls how data
pages are evicted from offheap to persistence?

On Mon, Jul 8, 2019, 5:33 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Data is always stored in offheap. Eviction strictly controls onheap cache.
> Once data is evicted from onheap it is available in offheap.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Mon, Jul 8, 2019 at 17:31, Andrey Dolmatov wrote:
>
>> We plan to use persistence in production. I didn't understand,
>> CacheConfiguration.EvictionPolicy specify heap->offheap eviction,
>> offheap->persistence eviction or both. It's not clear for me.
>>
>> On Mon, Jul 8, 2019, 5:19 PM Ilya Kasnacheev 
>> wrote:
>>
>>> Hello!
>>>
>>> Oops, I was wrong. This is indeed the wrong setting.
>>>
>>> Have you tried specifying evictionPolicy? I think it is the one that
>>> controls eviction from onheap cache. You can put a LruEvictionPolicy of 100
>>> 000 here, for example.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> On Mon, Jul 8, 2019 at 17:09, Andrey Dolmatov wrote:
>>>
 No, because we didnt specify QueryEntity.
 Does onheapCacheEnabled uses for SQL only?
 What default value for sqlOnheapCacheMaxSize?

 On Mon, Jul 8, 2019 at 17:05, Ilya Kasnacheev wrote:

> Hello!
>
> Have you tried also specifying sqlOnheapCacheMaxSize? You can specify
> 100 000 if you like.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
 On Mon, Jul 8, 2019 at 17:01, Andrey Dolmatov wrote:
>
>> We use simple replicated KV cache.
>> We try to upload 32 000 000 small records  to it (about
>> 6Gb in data region, persistance disabled). We load data using 
>> DataStreamer.
>>
>> If we set onheapCacheEnabled=false, server node consumes heap about
>> 500 Mb.
>> If we set onheapCacheEnabled=true, server node consumes heap about 6
>> Gb.
>>
>> Why DataStreamer uses heap memory to load data? Why on-heap size is
>> unlimited (not just 100.000 records)? What default on-heap eviction 
>> policy?
>>
>> 
>>
>> 
>> 
>> 
>>
>> Thanks!
>>
>


Re: onheapCacheEnabled enormous heap consumption

2019-07-08 Thread Ilya Kasnacheev
Hello!

Data is always stored in offheap. Eviction strictly controls onheap cache.
Once data is evicted from onheap it is available in offheap.

Regards,
-- 
Ilya Kasnacheev


On Mon, Jul 8, 2019 at 17:31, Andrey Dolmatov wrote:

> We plan to use persistence in production. I didn't understand,
> CacheConfiguration.EvictionPolicy specify heap->offheap eviction,
> offheap->persistence eviction or both. It's not clear for me.
>
> On Mon, Jul 8, 2019, 5:19 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Oops, I was wrong. This is indeed the wrong setting.
>>
>> Have you tried specifying evictionPolicy? I think it is the one that
>> controls eviction from onheap cache. You can put a LruEvictionPolicy of 100
>> 000 here, for example.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Mon, Jul 8, 2019 at 17:09, Andrey Dolmatov wrote:
>>
>>> No, because we didnt specify QueryEntity.
>>> Does onheapCacheEnabled uses for SQL only?
>>> What default value for sqlOnheapCacheMaxSize?
>>>
>>> On Mon, Jul 8, 2019 at 17:05, Ilya Kasnacheev wrote:
>>>
 Hello!

 Have you tried also specifying sqlOnheapCacheMaxSize? You can specify
 100 000 if you like.

 Regards,
 --
 Ilya Kasnacheev


 On Mon, Jul 8, 2019 at 17:01, Andrey Dolmatov wrote:

> We use simple replicated KV cache.
> We try to upload 32 000 000 small records  to it (about
> 6Gb in data region, persistance disabled). We load data using 
> DataStreamer.
>
> If we set onheapCacheEnabled=false, server node consumes heap about
> 500 Mb.
> If we set onheapCacheEnabled=true, server node consumes heap about 6
> Gb.
>
> Why DataStreamer uses heap memory to load data? Why on-heap size is
> unlimited (not just 100.000 records)? What default on-heap eviction 
> policy?
>
> 
>
> 
> 
> 
>
> Thanks!
>



Re: onheapCacheEnabled enormous heap consumption

2019-07-08 Thread Andrey Dolmatov
We plan to use persistence in production. I don't understand whether
CacheConfiguration.EvictionPolicy specifies heap->offheap eviction,
offheap->persistence eviction, or both. It's not clear to me.

On Mon, Jul 8, 2019, 5:19 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Oops, I was wrong. This is indeed the wrong setting.
>
> Have you tried specifying evictionPolicy? I think it is the one that
> controls eviction from onheap cache. You can put a LruEvictionPolicy of 100
> 000 here, for example.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Mon, Jul 8, 2019 at 17:09, Andrey Dolmatov wrote:
>
>> No, because we didnt specify QueryEntity.
>> Does onheapCacheEnabled uses for SQL only?
>> What default value for sqlOnheapCacheMaxSize?
>>
>> On Mon, Jul 8, 2019 at 17:05, Ilya Kasnacheev wrote:
>>
>>> Hello!
>>>
>>> Have you tried also specifying sqlOnheapCacheMaxSize? You can specify
>>> 100 000 if you like.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> On Mon, Jul 8, 2019 at 17:01, Andrey Dolmatov wrote:
>>>
 We use simple replicated KV cache.
 We try to upload 32 000 000 small records  to it (about 6Gb
 in data region, persistance disabled). We load data using DataStreamer.

 If we set onheapCacheEnabled=false, server node consumes heap about 500
 Mb.
 If we set onheapCacheEnabled=true, server node consumes heap about 6 Gb.

 Why DataStreamer uses heap memory to load data? Why on-heap size is
 unlimited (not just 100.000 records)? What default on-heap eviction policy?

 

 
 
 

 Thanks!

>>>


Re: onheapCacheEnabled enormous heap consumption

2019-07-08 Thread Andrey Dolmatov
No, we didn't specify an EvictionPolicy. I couldn't find whether
CacheConfiguration has a default EvictionPolicy.

On Mon, Jul 8, 2019, 5:19 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Oops, I was wrong. This is indeed the wrong setting.
>
> Have you tried specifying evictionPolicy? I think it is the one that
> controls eviction from onheap cache. You can put a LruEvictionPolicy of 100
> 000 here, for example.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Mon, Jul 8, 2019 at 17:09, Andrey Dolmatov wrote:
>
>> No, because we didnt specify QueryEntity.
>> Does onheapCacheEnabled uses for SQL only?
>> What default value for sqlOnheapCacheMaxSize?
>>
>> On Mon, Jul 8, 2019 at 17:05, Ilya Kasnacheev wrote:
>>
>>> Hello!
>>>
>>> Have you tried also specifying sqlOnheapCacheMaxSize? You can specify
>>> 100 000 if you like.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> On Mon, Jul 8, 2019 at 17:01, Andrey Dolmatov wrote:
>>>
 We use simple replicated KV cache.
 We try to upload 32 000 000 small records  to it (about 6Gb
 in data region, persistance disabled). We load data using DataStreamer.

 If we set onheapCacheEnabled=false, server node consumes heap about 500
 Mb.
 If we set onheapCacheEnabled=true, server node consumes heap about 6 Gb.

 Why DataStreamer uses heap memory to load data? Why on-heap size is
 unlimited (not just 100.000 records)? What default on-heap eviction policy?

 

 
 
 

 Thanks!

>>>


Re: onheapCacheEnabled enormous heap consumption

2019-07-08 Thread Ilya Kasnacheev
Hello!

Oops, I was wrong. This is indeed the wrong setting.

Have you tried specifying evictionPolicy? I think it is the one that
controls eviction from onheap cache. You can put a LruEvictionPolicy of 100
000 here, for example.

Regards,
-- 
Ilya Kasnacheev


On Mon, Jul 8, 2019 at 17:09, Andrey Dolmatov wrote:

> No, because we didn't specify a QueryEntity.
> Is onheapCacheEnabled used for SQL only?
> What is the default value for sqlOnheapCacheMaxSize?
>
> On Mon, Jul 8, 2019 at 17:05, Ilya Kasnacheev wrote:
>
>> Hello!
>>
>> Have you tried also specifying sqlOnheapCacheMaxSize? You can specify 100
>> 000 if you like.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Mon, Jul 8, 2019 at 17:01, Andrey Dolmatov wrote:
>>
>>> We use simple replicated KV cache.
>>> We try to upload 32 000 000 small records  to it (about 6Gb
>>> in data region, persistance disabled). We load data using DataStreamer.
>>>
>>> If we set onheapCacheEnabled=false, server node consumes heap about 500
>>> Mb.
>>> If we set onheapCacheEnabled=true, server node consumes heap about 6 Gb.
>>>
>>> Why DataStreamer uses heap memory to load data? Why on-heap size is
>>> unlimited (not just 100.000 records)? What default on-heap eviction policy?
>>>
>>> 
>>>
>>> 
>>> 
>>> 
>>>
>>> Thanks!
>>>
>>


Re: onheapCacheEnabled enormous heap consumption

2019-07-08 Thread Andrey Dolmatov
No, because we didn't specify a QueryEntity.
Is onheapCacheEnabled used for SQL only?
What is the default value for sqlOnheapCacheMaxSize?

On Mon, Jul 8, 2019 at 17:05, Ilya Kasnacheev wrote:

> Hello!
>
> Have you tried also specifying sqlOnheapCacheMaxSize? You can specify 100
> 000 if you like.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Mon, Jul 8, 2019 at 17:01, Andrey Dolmatov wrote:
>
>> We use simple replicated KV cache.
>> We try to upload 32 000 000 small records  to it (about 6Gb
>> in data region, persistance disabled). We load data using DataStreamer.
>>
>> If we set onheapCacheEnabled=false, server node consumes heap about 500
>> Mb.
>> If we set onheapCacheEnabled=true, server node consumes heap about 6 Gb.
>>
>> Why DataStreamer uses heap memory to load data? Why on-heap size is
>> unlimited (not just 100.000 records)? What default on-heap eviction policy?
>>
>> 
>>
>> 
>> 
>> 
>>
>> Thanks!
>>
>


Re: onheapCacheEnabled enormous heap consumption

2019-07-08 Thread Ilya Kasnacheev
Hello!

Have you tried also specifying sqlOnheapCacheMaxSize? You can specify 100
000 if you like.

Regards,
-- 
Ilya Kasnacheev


On Mon, Jul 8, 2019 at 17:01, Andrey Dolmatov wrote:

> We use simple replicated KV cache.
> We try to upload 32 000 000 small records  to it (about 6Gb in
> data region, persistance disabled). We load data using DataStreamer.
>
> If we set onheapCacheEnabled=false, server node consumes heap about 500 Mb.
> If we set onheapCacheEnabled=true, server node consumes heap about 6 Gb.
>
> Why DataStreamer uses heap memory to load data? Why on-heap size is
> unlimited (not just 100.000 records)? What default on-heap eviction policy?
>
> 
>
> 
> 
> 
>
> Thanks!
>


onheapCacheEnabled enormous heap consumption

2019-07-08 Thread Andrey Dolmatov
We use a simple replicated KV cache.
We try to upload 32 000 000 small records to it (about 6 GB in
the data region, persistence disabled). We load the data using DataStreamer.

If we set onheapCacheEnabled=false, the server node consumes about 500 MB of heap.
If we set onheapCacheEnabled=true, the server node consumes about 6 GB of heap.

Why does DataStreamer use heap memory to load data? Why is the on-heap size
unlimited (not just 100 000 records)? What is the default on-heap eviction policy?

<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="onheapCacheEnabled" value="true"/>
</bean>
Thanks!
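Ilya's later suggestion in this thread, bounding the on-heap cache with an explicit eviction policy, would look roughly like this in the cache configuration; the cache name and the 100 000 limit are illustrative:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="onheapCacheEnabled" value="true"/>
    <!-- Keep at most 100 000 entries on heap; least-recently-used
         entries are evicted from the on-heap layer (data stays offheap). -->
    <property name="evictionPolicy">
        <bean class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicy">
            <property name="maxSize" value="100000"/>
        </bean>
    </property>
</bean>
```

Without such a policy, the on-heap layer has no size bound, which matches the 6 GB heap growth described above.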


Re: Set Expiry Policies When Creating Cache Using Java Thin Client

2019-07-08 Thread Ilya Kasnacheev
Hello!

Yes, you can declare cache configuration templates, refer to them when
creating tables from thin client/JDBC, as per documentation:
https://apacheignite.readme.io/docs/cache-template

Regards,
-- 
Ilya Kasnacheev


On Fri, Jul 5, 2019 at 21:22, Shane Duan wrote:

> Thanks, Denis. Alternatively, does ignite thin client provide a way to use
> a server side pre-defined cache configurations, just like the TEMPLATE in
> the CREATE TABLE statement in SQL?
>
> Thanks,
> Shane
>
> On Wed, Jul 3, 2019 at 2:41 PM Denis Magda  wrote:
>
>> Shane,
>>
>> That's unavailable on the thin client's end yet. My suggestion is to
>> configure caches with the required expiration policies on the server's
>> configuration end.
>>
>> -
>> Denis
>>
>>
>> On Tue, Jul 2, 2019 at 10:54 AM Shane Duan  wrote:
>>
>>> I mean ClientCacheConfiguration
>>>
>>> On Tue, Jul 2, 2019 at 10:48 AM Shane Duan  wrote:
>>>
 Hi Igniters,

 How can I set expiry policies if I have to create a cache using the Java Thin
 client? I did not see any API to do so in ClientConfiguration.

 Thanks,
 Shane

>>>


Re: An existing connection was forcibly closed by the remote host

2019-07-08 Thread Ilya Kasnacheev
Hello!

I have seen such messages under heavy load on a large cluster.

My recommendation is to increase socketWriteTimeout to 5s on
TcpCommunicationSpi. The default of 2s is too small, even smaller than TCP
retransmit.

Regards,
-- 
Ilya Kasnacheev


On Mon, Jul 8, 2019 at 15:38, siva wrote:

> Sorry, I didn't paste it, but it was attached to the message.
> *Exception:*
>
> [07:16:22,681][SEVERE][grid-nio-worker-client-listener-3-#52%ServerNode%][ClientListenerProcessor]
> Failed to process selector key [ses=GridSelectorNioSessionImpl
> [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
> lim=8192 cap=8192], super=AbstractNioClientWorker [idx=3, bytesRcvd=0,
> bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
> [name=grid-nio-worker-client-listener-3, igniteInstanceName=ServerNode,
> finished=false, heartbeatTs=1562569764447, hashCode=117867960,
> interrupted=false,
> runner=grid-nio-worker-client-listener-3-#52%ServerNode%]]], writeBuf=null,
> readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl
> [locAddr=/173.16.4.13:10800, rmtAddr=/173.16.4.8:54238,
> createTime=1562568311896, closeTime=0, bytesSent=112344, bytesRcvd=77846,
> bytesSent0=0, bytesRcvd0=0, sndSchedTime=1562568800561,
> lastSndTime=1562568800561, lastRcvTime=1562568798463, readsPaused=false,
> filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
> GridNioCodecFilter [parser=ClientListenerBufferedParser,
> directMode=false]],
> accepted=true, markedForClose=false]]]
> java.io.IOException: An existing connection was forcibly closed by the
> remote host
> at sun.nio.ch.SocketDispatcher.read0(Native Method)
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
> at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
> at sun.nio.ch.IOUtil.read(IOUtil.java:197)
> at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1120)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2386)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2153)
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1794)
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> at java.lang.Thread.run(Thread.java:748)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
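The recommended increase would be configured on the communication SPI roughly as follows (5000 ms matches the 5 s suggestion; the rest of the configuration is elided):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <!-- Raise the socket write timeout above TCP retransmit time -->
            <property name="socketWriteTimeout" value="5000"/>
        </bean>
    </property>
</bean>
```

Note this is a server-side (node) setting, distinct from the thin client's SocketTimeout discussed earlier in the thread.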


Re: An existing connection was forcibly closed by the remote host

2019-07-08 Thread siva
Sorry, I didn't paste it, but it was attached to the message.
*Exception:*
[07:16:22,681][SEVERE][grid-nio-worker-client-listener-3-#52%ServerNode%][ClientListenerProcessor]
Failed to process selector key [ses=GridSelectorNioSessionImpl
[worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
lim=8192 cap=8192], super=AbstractNioClientWorker [idx=3, bytesRcvd=0,
bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
[name=grid-nio-worker-client-listener-3, igniteInstanceName=ServerNode,
finished=false, heartbeatTs=1562569764447, hashCode=117867960,
interrupted=false,
runner=grid-nio-worker-client-listener-3-#52%ServerNode%]]], writeBuf=null,
readBuf=null, inRecovery=null, outRecovery=null, super=GridNioSessionImpl
[locAddr=/173.16.4.13:10800, rmtAddr=/173.16.4.8:54238,
createTime=1562568311896, closeTime=0, bytesSent=112344, bytesRcvd=77846,
bytesSent0=0, bytesRcvd0=0, sndSchedTime=1562568800561,
lastSndTime=1562568800561, lastRcvTime=1562568798463, readsPaused=false,
filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
GridNioCodecFilter [parser=ClientListenerBufferedParser, directMode=false]],
accepted=true, markedForClose=false]]]
java.io.IOException: An existing connection was forcibly closed by the
remote host
at sun.nio.ch.SocketDispatcher.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at
org.apache.ignite.internal.util.nio.GridNioServer$ByteBufferNioClientWorker.processRead(GridNioServer.java:1120)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.processSelectedKeysOptimized(GridNioServer.java:2386)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:2153)
at
org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1794)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at java.lang.Thread.run(Thread.java:748)





Re: Node not joined to the cluster - joining node doesn't have encryption data

2019-07-08 Thread Ilya Kasnacheev
Hello!

It's hard to say why the node failed. Software termination? Network problems?
An unhandled error?

But it is likely unrelated to 'joining node doesn't have encryption data'
messages.

Can you provide complete logs?

Regards,
-- 
Ilya Kasnacheev


On Mon, Jul 8, 2019 at 09:15, shahidv wrote:

> Any idea? It seems the node joined and then terminated.
>
> [17:01:10,562][INFO][exchange-worker-#43][time] Started exchange init
> [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0],
> mvccCrd=MvccCoordinator [nodeId=6784e9b3-c5dc-48b2-b786-30999e3041f4,
> crdVer=1562239510448, topVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=0]], mvccCrdChange=false, crd=true, evt=NODE_JOINED,
> evtNode=1e232f60-9f6d-4dfd-bfaf-8e31e70e245b, customEvt=null,
> allowMerge=true]
> [17:01:10,567][WARNING][disco-event-worker-#42][GridDiscoveryManager] Node
> FAILED: TcpDiscoveryNode [id=1e232f60-9f6d-4dfd-bfaf-8e31e70e245b,
> addrs=[0:0:0:0:0:0:0:1, 10.174.92.125, 127.0.0.1, 192.168.0.5],
> sockAddrs=[/192.168.0.5:47500, /0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500,
> /10.174.92.125:47500], discPort=47500, order=2, intOrder=2,
> lastExchangeTime=1562239850489, loc=false,
> ver=2.7.5#20190603-sha1:be4f2a15,
> isClient=false]
> [17:01:10,569][INFO][disco-event-worker-#42][GridDiscoveryManager] Topology
> snapshot [ver=3, locNode=6784e9b3, servers=1, clients=0, state=ACTIVE,
> CPUs=8, offheap=3.2GB, heap=1.0GB]
> [17:01:10,569][INFO][disco-event-worker-#42][GridDiscoveryManager]   ^--
> Baseline [id=0, size=1, online=1, offline=0]
> [17:01:10,582][INFO][exchange-worker-#43][GridDhtPartitionsExchangeFuture]
> Finished waiting for partition release future
> [topVer=AffinityTopologyVersion [topVer=2, minorTopVer=0], waitTime=0ms,
> futInfo=NA, mode=DISTRIBUTED]
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: An existing connection was forcibly closed by the remote host

2019-07-08 Thread Ilya Kasnacheev
Hello!

Unfortunately your letter does not seem to contain any error messages. Can
you paste them in as text?

Regards,
-- 
Ilya Kasnacheev


пн, 8 июл. 2019 г. в 15:33, siva :

> Hi,
> I have .NetCore(v2.2.103) client and Server Ignite(v2.7.5) Application.And
> I
> am using third party thin client
> making request to Server to Put And Read data to/from Server.
>
> Normal read and put operations work fine when the number of
> connections is low.
>
> But once data is being put continuously and the number of connections grows
> beyond around 3000, we continuously get the exception
>
> "An existing connection was forcibly closed by the remote host".
>
> *Here is the Exception:*
>
>
> *Put Request:*
> // the method is called about 10 thousand times per second
>
>
>
> Please help: how can we solve the above exception?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


An existing connection was forcibly closed by the remote host

2019-07-08 Thread siva
Hi,
I have a .NET Core (v2.2.103) client and an Ignite (v2.7.5) server application,
and I am using a third-party thin client
to put and read data to/from the server.

Normal read and put operations work fine when the number of
connections is low.

But once data is being put continuously and the number of connections grows
beyond around 3000, we continuously get the exception

"An existing connection was forcibly closed by the remote host".

*Here is the Exception:*


*Put Request:*
// the method is called about 10 thousand times per second



Please help: how can we solve the above exception?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IGFS block at startup

2019-07-08 Thread Ilya Kasnacheev
Hello!

It is hard to say what is happening here without full stack trace from all
threads of both nodes. Can you provide that?

Regards,
-- 
Ilya Kasnacheev


пн, 8 июл. 2019 г. в 13:48, Oscar Torreno :

> Hello Ilya,
>
>
>
> Please find attached the docker compose log of both nodes (shapelets-1 and
> shapelets-2). Shapelets-2 was the one able to start without problems in
> this case. Attaching the output of jstack for the main Thread of the
> shapelets-1 node.
>
>
>
> Regards,
>
> *--*
>
> *Oscar Torreno*
>
>
>
> *From: *Ilya Kasnacheev 
> *Reply-To: *"user@ignite.apache.org" 
> *Date: *Monday, 8 July 2019 at 11:25
> *To: *"user@ignite.apache.org" 
> *Subject: *Re: IGFS block at startup
>
>
>
> Hello!
>
>
>
> Can you please provide complete logs and stack traces from both nodes?
>
>
>
> I guess we have a lot of tests where we start several IGFS nodes and they
> finish just fine.
>
>
>
> Regards,
>
> --
>
> Ilya Kasnacheev
>
>
>
>
>
> пн, 8 июл. 2019 г. в 10:16, Oscar Torreno :
>
> Hi,
>
>
>
> I am trying to start a fresh 2 nodes Ignite 2.7.0 cluster (using
> docker-compose) with 2 IGFS configured. When I start both nodes at the same
> time, almost always one of them starts without problems, but the second one
> hangs at line 120 of the IgfsMetaManager class (doing an await on a
> CountDownLatch). Rarely, both nodes progress, so it seems to be a kind of
> race condition/inconsistent state problem because of the simultaneous start.
>
>
>
> Have you experienced such issue before? If yes, is there any workaround to
> overcome it?
>
>
>
> Best regards, thanks in advance.
>
> Oscar
>
>
>
>
> *Oscar Torreno*
>
> Software Engineer
>
> m: + 34 675 026 952
>
> e: oscar.torr...@shapelets.io
>
> C/ Puerta del Mar 18, 2º. 29005, Málaga,Spain
>
>
>
>
>


Re: ignite cluster lock up

2019-07-08 Thread Ilya Kasnacheev
Hello!

When full GC is running, all threads are effectively blocked. This is why
it's named 'GC pause'.
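
For reference, a minimal sketch of raising the failure-detection timeouts the
quoted message mentions (the values here are examples only, not recommendations):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class LongerFailureDetection {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Give nodes more headroom to survive long GC pauses before
        // they are considered failed (both values in milliseconds).
        cfg.setFailureDetectionTimeout(30_000);
        cfg.setClientFailureDetectionTimeout(60_000);

        Ignition.start(cfg);
    }
}
```

Raising the timeout only masks the pause, of course; tuning the heap so full
GC does not happen is the real fix.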

Regards,
-- 
Ilya Kasnacheev


сб, 6 июл. 2019 г. в 12:33, Mahesh Renduchintala <
mahesh.renduchint...@aline-consulting.com>:

> We are now testing by increasing failureDetectionTimeout values
>
>
> Even if full GC is running, why are ignite system threads blocked?
>
> why aren't ignite system threads free to accept new connections?
>
> Why exactly would rebooting a few of previously connected nodes, reset
> everything.
>
>
> There could be something else as well.
>
>


Re: Re: Distributed Cluster Deployment

2019-07-08 Thread Ilya Kasnacheev
Hello!

I think you can use sqlline.sh directly in the shell for distributed
queries either way: with Java and with XML.
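
The same thin JDBC URL that sqlline.sh uses can also be opened from Java
directly. A minimal sketch, assuming a node on localhost with the default thin
client port and a hypothetical Person table:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThinJdbcQuery {
    public static void main(String[] args) throws Exception {
        // Register the Ignite thin JDBC driver.
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

        // Connecting to any one node is enough; SQL queries are still
        // executed across the whole cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM Person")) {
            while (rs.next())
                System.out.println(rs.getLong(1));
        }
    }
}
```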

Regards,
-- 
Ilya Kasnacheev


пн, 8 июл. 2019 г. в 04:27, shicheng31...@gmail.com :

> You mean that I need to configure Ignite with Java code (call
> Ignition.start()) so that I can use sqlline.sh directly in the shell for
> distributed queries?
>
> --
> shicheng31...@gmail.com
>
>
> *From:* Ilya Kasnacheev 
> *Date:* 2019-07-05 19:51
> *To:* user 
> *Subject:* Re: Re: Distributed Cluster Deployment
> Hello!
>
> The easiest way to configure Ignite without external configuration is just
> issuing Ignition.start() in your Java code.
>
> It uses Multicast IP finder by default, so if you start several nodes in a
> single subnet they should join as a single distributed cluster.
>
> Then you can use sqlline to connect to any node.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> пт, 5 июл. 2019 г. в 03:44, shicheng31...@gmail.com <
> shicheng31...@gmail.com>:
>
>> Maybe I didn't describe it clearly. How do I configure Ignite so it is
>> distributed without external configuration, and then connect directly with
>> the sqlline command for distributed query operations? Should it
>> be done through the configuration file under the installation directory? I see
>> that the configuration file that comes with it seems to be in the format that
>> Spring requires. Is there any relation between the two?
>>
>> --
>> shicheng31...@gmail.com
>>
>>
>> *From:* Vladimir Pligin 
>> *Date:* 2019-07-04 20:27
>> *To:* user 
>> *Subject:* Re: Distributed Cluster Deployment
>> Hi,
>>
>> Spring here is just a convenient way of configuration building.
>> Ignite is not tightly bound to it. You're able to construct everything
>> programmatically.
>> For example here https://apacheignite.readme.io/docs/tcpip-discovery you
>> can
>> switch example from "XML" to "Java".
>> And Spring xml configuration
>> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>>   ...
>>   <property name="discoverySpi">
>>     <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>>       <property name="ipFinder">
>>         <bean
>> class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
>>           <property name="multicastGroup" value="228.10.10.157"/>
>>         </bean>
>>       </property>
>>     </bean>
>>   </property>
>> </bean>
>>
>> turns into
>>
>> TcpDiscoverySpi spi = new TcpDiscoverySpi();
>> TcpDiscoveryMulticastIpFinder ipFinder = new
>> TcpDiscoveryMulticastIpFinder();
>> ipFinder.setMulticastGroup("228.10.10.157");
>> spi.setIpFinder(ipFinder);
>> IgniteConfiguration cfg = new IgniteConfiguration();
>> cfg.setDiscoverySpi(spi);
>> Ignition.start(cfg);
>>
>> Does it make sense?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>>


Re: TcpCommunicationSpi failed to establish connection to node, node will be dropped from cluster

2019-07-08 Thread Ilya Kasnacheev
Hello!

Since you can use Java system properties in XMLs and you can specify those
on cmdline, should be not hard to automate DevOps here.

Glad that you have it working.

Regards,
-- 
Ilya Kasnacheev


пн, 8 июл. 2019 г. в 10:57, wiltu :

> Hello!
> Yep, "172.20.0.1" is a docker network IP, like this :
> =
> 6: docker0:  mtu 1500 qdisc noqueue
> state
> DOWN group default
> link/ether 02:42:56:dd:06:b5 brd ff:ff:ff:ff:ff:ff
> inet 172.17.0.1/16 scope global docker0
>valid_lft forever preferred_lft forever
> 11: docker_gwbridge:  mtu 1500 qdisc
> noqueue state UP group default
> link/ether 02:42:7f:c3:f0:89 brd ff:ff:ff:ff:ff:ff
> inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
>valid_lft forever preferred_lft forever
> inet6 fe80::42:7fff:fec3:f089/64 scope link
>valid_lft forever preferred_lft forever
> =
>
> My Docker setup uses the host network driver for the container, and this
> cluster can work for a while (maybe up to 12 hours), which means
> TcpCommunicationSpi is working at the beginning. Specifying localAddress
> might make the cluster work, but that does not seem to fit the principle of
> automated DevOps.
> Then I moved the cluster to another machine; it has lived for 2 days and
> looks good.
> I want it to run for a few days under observation.
>
> Thank you very much; I have benefited a lot from your suggestion. Please
> let
> me know if you have any further suggestions.
>
> Regards,
> Wilson
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: IGFS block at startup

2019-07-08 Thread Ilya Kasnacheev
Hello!

Can you please provide complete logs and stack traces from both nodes?

I guess we have a lot of tests where we start several IGFS nodes and they
finish just fine.

Regards,
-- 
Ilya Kasnacheev


пн, 8 июл. 2019 г. в 10:16, Oscar Torreno :

> Hi,
>
>
>
> I am trying to start a fresh 2 nodes Ignite 2.7.0 cluster (using
> docker-compose) with 2 IGFS configured. When I start both nodes at the same
> time, almost always one of them starts without problems, but the second one
> hangs at line 120 of the IgfsMetaManager class (doing an await on a
> CountDownLatch). Rarely, both nodes progress, so it seems to be a kind of
> race condition/inconsistent state problem because of the simultaneous start.
>
>
>
> Have you experienced such issue before? If yes, is there any workaround to
> overcome it?
>
>
>
> Best regards, thanks in advance.
>
> Oscar
>
>
>
>
> *Oscar Torreno*
>
> Software Engineer
>
> m: + 34 675 026 952
>
> e: oscar.torr...@shapelets.io
>
> C/ Puerta del Mar 18, 2º. 29005, Málaga,Spain
>
>
>
>


Re: configuration of ignite client nodes

2019-07-08 Thread Stephen Darlington
The list of machines in your IP finder list does not need to be exhaustive. As 
long as a node can find at least one other it should be able to join the 
cluster.

You don’t need to configure your clients to know about the other client nodes, 
but, by virtue of joining the cluster, they will learn about all the other 
nodes, both clients and servers.

Thick clients are full participants of the cluster, so the same timeouts on the 
clients as well as the servers would make sense.
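
As a minimal sketch of the above (the server address and port range here are
hypothetical): a client only needs the dedicated server in its VM IP finder,
and it learns the full topology once it joins.

```java
import java.util.Collections;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class ClientNodeStart {
    public static void main(String[] args) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        // Listing only the dedicated server is enough; the list does not
        // need to contain the other client nodes.
        ipFinder.setAddresses(Collections.singletonList("10.0.0.225:47500..47509"));

        TcpDiscoverySpi spi = new TcpDiscoverySpi();
        spi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true);
        cfg.setDiscoverySpi(spi);

        Ignition.start(cfg);
    }
}
```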

Regards,
Stephen

> On 8 Jul 2019, at 04:22, Scott Cote  wrote:
> 
> We have ignite client nodes (2.7.0 linux)  embedded inside our microservices 
> (spring boot apps) pointed at a dedicated ignite server (2.7.0 linux).  Each 
> client node has its ip finder – the vm ip finder –configured to seek the 
> dedicated server.
>  
> None of the client nodes are configured to know about each other.
>  
> Should the client nodes know about each other?  In other words, do I need to 
> place the ip address of the client nodes in the vm ip finder list for each of 
> the clients?
>  
> We have the apps (Abbreviated names and abbreviated addresses):
>  
> QM – 19
> ALPR – 18
> RM – 21
> A – 20
>  
> Ignite Server – 225
>  
> QM, ALPR, RM, and A all have their VM IP finder lists set to seek only “225”.
> Do they need to have each other in there too?   We keep seeing the client 
> nodes trying to seek each other out inside the spring boot logs.
>  
> Another note:
> Using the Visor ping command, usually at least one node fails the ping.  
> Sometimes they all pass, but not often.  How do I leverage this diagnostic?
> Using Visor cache -scan sometimes fails. Is that another symptom?
>  
> We have increased clientFailureDetectionTimeout from 30,000 to 100,000 on the 
> server.
> We saw the message to increase networkTimeout from 5,000, so we increased it to 
> 20,000 on the server.   Do we need to do this on the client too?
>  
> What is a symptom of problems from mixed IP environments?   We see that warning 
> in the log as well.
>  
> SCott




Re: TcpCommunicationSpi failed to establish connection to node, node will be dropped from cluster

2019-07-08 Thread wiltu
Hello!
Yep, "172.20.0.1" is a Docker network IP, like this:
=
6: docker0:  mtu 1500 qdisc noqueue state
DOWN group default
link/ether 02:42:56:dd:06:b5 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
   valid_lft forever preferred_lft forever
11: docker_gwbridge:  mtu 1500 qdisc
noqueue state UP group default
link/ether 02:42:7f:c3:f0:89 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global docker_gwbridge
   valid_lft forever preferred_lft forever
inet6 fe80::42:7fff:fec3:f089/64 scope link
   valid_lft forever preferred_lft forever
=

My Docker setup uses the host network driver for the container, and this
cluster can work for a while (maybe up to 12 hours), which means
TcpCommunicationSpi is working at the beginning. Specifying localAddress might
make the cluster work, but that does not seem to fit the principle of
automated DevOps.
Then I moved the cluster to another machine; it has lived for 2 days and
looks good.
I want it to run for a few days under observation.

Thank you very much; I have benefited a lot from your suggestion. Please let
me know if you have any further suggestions.

Regards,
Wilson





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


IGFS block at startup

2019-07-08 Thread Oscar Torreno
Hi,

I am trying to start a fresh 2 nodes Ignite 2.7.0 cluster (using 
docker-compose) with 2 IGFS configured. When I start both nodes at the same 
time, almost always one of them starts without problems, but the second one 
hangs at line 120 of the IgfsMetaManager class (doing an await on a 
CountDownLatch). Rarely, both nodes progress, so it seems to be a kind of race 
condition/inconsistent state problem because of the simultaneous start.

Have you experienced such an issue before? If yes, is there any workaround to 
overcome it?

Best regards, thanks in advance.
Oscar

Oscar Torreno
Software Engineer
m: + 34 675 026 952
e: oscar.torr...@shapelets.io
C/ Puerta del Mar 18, 2º. 29005, Málaga,Spain