Re: ignite node ports

2020-05-18 Thread 배혜원
Thank you so much.
Then, if I'm not going to use port 11211, what should I do? I do not want to
bind port 11211. Is there any way to avoid it, or will it just be bound
automatically?
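
If the legacy connector is not needed at all, one option — sketched below under the assumption that port 11211 is opened by Ignite's TCP connector (ConnectorConfiguration) — is to disable the connector in the node's Spring XML so that nothing binds to the port:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Port 11211 is bound by the legacy TCP connector (used by control.sh
         and the old REST protocol). Setting the property to null disables
         the connector, so the node does not bind 11211 at all. -->
    <property name="connectorConfiguration"><null/></property>
</bean>
```

Note that control.sh and other legacy tools will no longer be able to connect to a node configured this way.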

Sent from my iPhone

> On May 18, 2020, at 10:27 PM, Ilya Kasnacheev wrote:
> 
> 
> Hello!
> 
> I have to correct myself: 11211 is not used by the thick JDBC driver (which
> is a regular client node); instead it is used mostly by the control.sh tool
> and some other legacy tools.
> 
> Regards,
> -- 
> Ilya Kasnacheev
> 
> 
> On Wed, May 13, 2020 at 17:44, Evgenii Zhuravlev wrote:
>> Hi,
>> 
>> Ports are described here: 
>> https://dzone.com/articles/a-simple-checklist-for-apache-ignite-beginners
>> 
>> Basically, Discovery (47500 by default) and Communication (47100) should 
>> always be open, since without them the cluster won't be functional. The 
>> discovery port is used for clustering and for checking the state of all 
>> nodes in the cluster. 
>> 
>> The communication 
>> port (https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setLocalPort-int-)
>> is used for all other communication between nodes, for example, cache 
>> operation requests, compute jobs, etc.
>> 
>> The REST port (8080) is used for REST 
>> calls (https://apacheignite.readme.io/docs/rest-api) and connections from 
>> WebConsole (the management tool).
>> 
>> The client connector port (10800) is used for 
>> JDBC (https://apacheignite-sql.readme.io/docs/jdbc-driver), 
>> ODBC (https://apacheignite-sql.readme.io/docs/odbc-driver) and other thin 
>> client (https://apacheignite.readme.io/docs/java-thin-client) connections.
>> 
>> 11211 is the port for the thick JDBC driver and the old REST protocol.
>> 
>> Note that each port also has a port range, which means that if the default 
>> port is already in use, the node will try the next one.
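
The ports above can also be pinned explicitly in the node's Spring XML. A minimal sketch (the values shown are just the documented defaults; adjust per node):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Discovery: cluster membership and node-state checks. -->
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="localPort" value="47500"/>
            <property name="localPortRange" value="10"/>
        </bean>
    </property>
    <!-- Communication: cache operation requests, compute jobs, etc. -->
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <property name="localPort" value="47100"/>
        </bean>
    </property>
    <!-- Client connector: JDBC/ODBC/thin clients. -->
    <property name="clientConnectorConfiguration">
        <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration">
            <property name="port" value="10800"/>
        </bean>
    </property>
</bean>
```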
>> 
>> Evgenii
>> 
>> 
>> 
>> 
>> 
>> On Tue, May 12, 2020 at 22:56, kay wrote:
>>> Hello, I started ignite node and checked log file.
>>> 
>>> I found TCP ports in logs
>>> 
>>> >>> Local ports : TCP:8080 TCP:11213 TCP:47102 TCP:49100 TCP:49200
>>> 
>>> I set 49100, 49200 port at configuration file for ignite node and client
>>> connector port.
>>> but I don't know the others port exactly.
>>> 
>>> I found a summary at log.
>>> 
>>> [Node 1]
>>> TCP binary : 8080
>>> Jetty REST  : 11213
>>> Communication spi : 47102
>>> 
>>> [Node 2]
>>> TCP binary : 8081
>>> Jetty REST  : 11214
>>> Communication spi : 47103
>>> 
>>> Could you guys tell me where each port is used??
>>> 
>>> Are these ports necessary? 
>>> Do I need 5 different ports each time I add a new node?
>>> If so, how can I set the TCP binary port (8080) and the Jetty REST
>>> port (11213) in the configuration file?
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Webinar, May 20th: Ignite SQL Essentials (Basics, Memory Quotas, Calcite-powered engine)

2020-05-18 Thread Denis Magda
Igniters,

Some time ago, Igor and I teamed up to produce a webinar about Ignite SQL
essentials:
https://bit.ly/2WzlCrp

Beginners will get a full understanding of our SQL capabilities, while
experienced Ignite developers will learn more about memory management
internals in relation to SQL and get an update on memory quotas and
the new Calcite-powered engine.

Attend the event or bookmark the page to watch a recording later.

-
Denis


Near Cache Support For Thin Clients

2020-05-18 Thread martybjo...@gmail.com
I wanted to see if there are any plans to support near caches for thin
clients? I think it would be a great feature, and I know I could use it
right now.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: join question

2020-05-18 Thread narges saleh
It seems the issue exists only if one uses the data streamer with the binary
object builder. If I use straight JDBC to insert data, the issue goes away.
Any idea what one needs to do to get this working with binary objects?
Everything else is the same between the two scenarios.
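
One thing worth double-checking when keys are built with the binary object builder is that the affinity key field is actually declared for the key type; with JDBC inserts the table's affinity_key setting handles this, whereas binary keys rely on the cache key configuration. A sketch, with hypothetical cache and type names:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="tableB"/>
    <!-- Declare 'org' as the affinity key so related rows of the two
         partitioned caches are collocated on the same node. -->
    <property name="keyConfiguration">
        <list>
            <bean class="org.apache.ignite.cache.CacheKeyConfiguration">
                <property name="typeName" value="TableBKey"/>
                <property name="affinityKeyFieldName" value="org"/>
            </bean>
        </list>
    </property>
</bean>
```

If collocation still cannot be guaranteed, the thin JDBC driver also accepts distributedJoins=true in the connection string to run non-collocated joins, at a performance cost.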

On Mon, May 18, 2020 at 4:39 PM narges saleh  wrote:

> It turned out that I'd get partial results in some cases, when joining
> partitioned caches. But I still don't understand why I am not getting all
> the rows that the joined query should return.
> My assumption is that if you have caches with primary keys, containing the
> affinity key, then the related entries  (by affinity key) in these caches
> should be collocated and a join among these caches based on the leading
> part of the primary keys (including the affinity key) which is shared
> across all the keys, should return all the rows which satisfy the where
> clause. Even if this is not the case, a distributed join should be possible
> and I still should get all the rows. But this is not happening either.
> What could be the issue here? What am I missing here?
>
> On Mon, May 18, 2020 at 9:30 AM narges saleh  wrote:
>
>> No error. Just no records are returned, as opposed to the join between the
>> replicated and partitioned cache, which returns all applicable rows. Sorry
>> for not being clear.
>>
>> On Mon, May 18, 2020 at 9:00 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> Fails how? Is the result set incorrect? Any specific error message?
>>> Please share details.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> On Mon, May 18, 2020 at 16:49, narges saleh wrote:
>>>
 Hi All,
 I have encountered a puzzling join case.
 I have 3 tables on a cluster of two ignite server nodes:
 table-A (id + org = primary), replicated
 id
 org. <-- affinity
 other fields

 table-B (id, org, add-id=primary key), partitioned
 id
 org <- affinity
 addr-id
 other fields

 table-C (id, org, comp-id=primary key), partitioned
 id
 org <- affinity
 comp-id
 other fields

 joins between table-A and table-B (on id and org) succeed.
 joins between table-A and table-C (on id and org) succeed.
 joins between table-B and table-C (on id and org) fail.

 all three joins succeed if the cluster has only one server node.
 Why does the join between the partitioned caches fail in distributed mode?

 I am using JDBC connection for select statements. The join fails
 whether dealing with thick or thin client.

 thanks


>>>


Re: join question

2020-05-18 Thread narges saleh
It turned out that I'd get partial results in some cases, when joining
partitioned caches. But I still don't understand why I am not getting all
the rows that the joined query should return.
My assumption is that if you have caches with primary keys, containing the
affinity key, then the related entries  (by affinity key) in these caches
should be collocated and a join among these caches based on the leading
part of the primary keys (including the affinity key) which is shared
across all the keys, should return all the rows which satisfy the where
clause. Even if this is not the case, a distributed join should be possible
and I still should get all the rows. But this is not happening either.
What could be the issue here? What am I missing here?

On Mon, May 18, 2020 at 9:30 AM narges saleh  wrote:

> No error. Just no records are returned, as opposed to the join between the
> replicated and partitioned cache, which returns all applicable rows. Sorry
> for not being clear.
>
> On Mon, May 18, 2020 at 9:00 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Fails how? Is the result set incorrect? Any specific error message?
>> Please share details.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Mon, May 18, 2020 at 16:49, narges saleh wrote:
>>
>>> Hi All,
>>> I have encountered a puzzling join case.
>>> I have 3 tables on a cluster of two ignite server nodes:
>>> table-A (id + org = primary), replicated
>>> id
>>> org. <-- affinity
>>> other fields
>>>
>>> table-B (id, org, add-id=primary key), partitioned
>>> id
>>> org <- affinity
>>> addr-id
>>> other fields
>>>
>>> table-C (id, org, comp-id=primary key), partitioned
>>> id
>>> org <- affinity
>>> comp-id
>>> other fields
>>>
>>> joins between table-A and table-B (on id and org) succeed.
>>> joins between table-A and table-C (on id and org) succeed.
>>> joins between table-B and table-C (on id and org) fail.
>>>
>>> all three joins succeed if the cluster has only one server node.
>>> Why does the join between the partitioned caches fail in distributed mode?
>>>
>>> I am using JDBC connection for select statements. The join fails whether
>>> dealing with thick or thin client.
>>>
>>> thanks
>>>
>>>
>>


Re: Deploying Ignite Code

2020-05-18 Thread akorensh
Hi,
  To explicitly deploy classes, copy them to the libs directory (located in
$IGNITE_HOME/libs).
   Ignite takes the jars from the libs directory and puts them on the classpath.
 
   Your classes need to be on the classpath.
   Use jinfo or VisualVM to look at the classpath of the running Ignite JVM,
   and verify that your jars are there.
   
   https://apacheignite.readme.io/docs/zero-deployment#explicit-deployment


   Cache configuration classes (CacheStore implementations, eviction and
   expiry policies, etc.) need to be explicitly deployed on all server nodes.

   The peer class loading functionality does not deploy the key and value
   classes of the entries stored in caches.
 
   If you are using the BinaryObject API, then the cache key/value classes are
   not needed: https://apacheignite.readme.io/docs/binary-marshaller

---
  PeerClassLoading works for the following cases:

  Tasks and jobs submitted via the compute interface.
  Transformers and filters used with scan queries and continuous queries.
  Stream transformers, receivers and visitors used with data streamers.
  Entry processors.

see: 
   https://apacheignite.readme.io/docs/zero-deployment#peer-class-loading
  https://apacheignite.readme.io/docs/deployment-modes

Thanks, Alex





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Messages being Missed on Node Start

2020-05-18 Thread akorensh
Hi,
  A node has to join the cluster to receive/register for messages/events.

  You can store events:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/eventstorage/memory/MemoryEventStorageSpi.html
  On every node join, replay the events stored. 

 see: https://ignite.apache.org/features/messaging.html
  https://apacheignite.readme.io/docs/events

example:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheEventsExample.java
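
The event-storage approach can be sketched in Spring XML as follows (the expireCount value is arbitrary):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Keep the most recent events in memory so a node that joins late
         can query and replay what it missed. -->
    <property name="eventStorageSpi">
        <bean class="org.apache.ignite.spi.eventstorage.memory.MemoryEventStorageSpi">
            <property name="expireCount" value="1000"/>
        </bean>
    </property>
</bean>
```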



Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Scheduling Cache Refresh

2020-05-18 Thread nithin91
Hi 

Can anyone help me by providing input on the questions posted in my
previous message?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite.cache.loadCache. Does this method do incremental load?

2020-05-18 Thread nithin91
Thanks for the inputs. They are really helpful.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Deploying Ignite Code

2020-05-18 Thread nithin91
Hi 

Can anyone let me know whether I should deploy my Ignite project, developed
on UNIX, on each and every node? If yes, should it be deployed in the bin
folder shipped with Ignite, or can it be kept in any folder?


Currently we are following these steps:

1. We created a bean file which has all the cache configuration details and
data region configuration details.

2. The content of this XML file is copied into the default-config.xml file
shipped with Ignite on each Linux server node, and then each server node is
started using nohup ./ignite.sh 


3. From my local system, using the same XML file but with the additional
property Client mode=true, I run a standalone Java program to load the cache.

With peer class loading enabled (peerClassLoadingEnabled=true), I am able to
execute my Java program without deploying the classes on each server node.

But this method of execution is not working if I use the cache.invoke or
cache.invokeAll methods, as I am getting a ClassNotFoundException even though
the class is present on my local machine.

Can you please let me know how to overcome this error?



Following is the log generated by the program

[20:08:21]__   
[20:08:21]   /  _/ ___/ |/ /  _/_  __/ __/ 
[20:08:21]  _/ // (7 7// /  / / / _/   
[20:08:21] /___/\___/_/|_/___/ /_/ /___/  
[20:08:21] 
[20:08:21] ver. 2.7.6#20190911-sha1:21f7ca41
[20:08:21] 2019 Copyright(C) Apache Software Foundation
[20:08:21] 
[20:08:21] Ignite documentation: http://ignite.apache.org
[20:08:21] 
[20:08:21] Quiet mode.
[20:08:21]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[20:08:21]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[20:08:21] 
[20:08:21] OS: Windows 10 10.0 amd64
[20:08:21] VM information: Java(TM) SE Runtime Environment 1.8.0_131-b11
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.131-b11
[20:08:21] Please set system property '-Djava.net.preferIPv4Stack=true' to
avoid possible problems in mixed environments.
[20:08:21] Initial heap size is 254MB (should be no less than 512MB, use
-Xms512m -Xmx512m).
[20:08:21] Configured plugins:
[20:08:21]   ^-- None
[20:08:21] 
[20:08:21] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0, super=AbstractFailureHandler
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
SYSTEM_CRITICAL_OPERATION_TIMEOUT
[20:08:27] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[20:08:27] Security status [authentication=off, tls/ssl=off]
[20:08:28] REST protocols do not start on client node. To start the
protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system
property.
[20:09:00] Performance suggestions for grid  (fix if possible)
[20:09:00] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[20:09:00]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[20:09:00]   ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]' to
JVM options)
[20:09:00]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[20:09:00]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[20:09:00] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[20:09:00] 
[20:09:00] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[20:09:00] 
[20:09:00] Ignite node started OK (id=6601bb78)
[20:09:00] Topology snapshot [ver=217, locNode=6601bb78, servers=2,
clients=1, state=ACTIVE, CPUs=16, offheap=9.0GB, heap=9.6GB]
[20:09:00]   ^-- Baseline [id=0, size=2, online=2, offline=0]
[20:09:03] Ignite node stopped OK [uptime=00:00:03.075]
Exception in thread "main" javax.cache.processor.EntryProcessorException:
class org.apache.ignite.binary.BinaryInvalidTypeException:
ignite.example.IgniteUnixImplementation.NumberandDateFormat
at
org.apache.ignite.internal.processors.cache.CacheInvokeResult.get(CacheInvokeResult.java:108)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1440)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.invoke(IgniteCacheProxyImpl.java:1482)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.invoke(GatewayProtectedCacheProxy.java:1228)
at Load.Computejob.main(Computejob.java:22)
Caused by: class org.apache.ignite.binary.BinaryInvalidTypeException:
ignite.example.IgniteUnixImplementation.NumberandDateFormat
at
org.apache.ignite.internal.binary.BinaryContext.descriptorForTypeId(BinaryContext.java:707)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1758)
at
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(Bina
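
For reference, peer class loading is enabled via a single flag in the same Spring XML as the caches; it must have the same value on all server and client nodes, and whether a particular class (for example, an entry processor used by cache.invoke) is actually peer-deployable depends on the Ignite version. A minimal sketch:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Must be set to the same value on every server and client node. -->
    <property name="peerClassLoadingEnabled" value="true"/>
</bean>
```

If peer loading does not cover a class, packaging it into a jar and copying it to $IGNITE_HOME/libs on every server node is the fallback.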

Re: join question

2020-05-18 Thread narges saleh
No error. Just no records are returned, as opposed to the join between the
replicated and partitioned cache, which returns all applicable rows. Sorry
for not being clear.

On Mon, May 18, 2020 at 9:00 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> Fails how? Is the result set incorrect? Any specific error message? Please
> share details.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Mon, May 18, 2020 at 16:49, narges saleh wrote:
>
>> Hi All,
>> I have encountered a puzzling join case.
>> I have 3 tables on a cluster of two ignite server nodes:
>> table-A (id + org = primary), replicated
>> id
>> org. <-- affinity
>> other fields
>>
>> table-B (id, org, add-id=primary key), partitioned
>> id
>> org <- affinity
>> addr-id
>> other fields
>>
>> table-C (id, org, comp-id=primary key), partitioned
>> id
>> org <- affinity
>> comp-id
>> other fields
>>
>> joins between table-A and table-B (on id and org) succeed.
>> joins between table-A and table-C (on id and org) succeed.
>> joins between table-B and table-C (on id and org) fail.
>>
>> all three joins succeed if the cluster has only one server node.
>> Why does the join between the partitioned caches fail in distributed mode?
>>
>> I am using JDBC connection for select statements. The join fails whether
>> dealing with thick or thin client.
>>
>> thanks
>>
>>
>


Messages being Missed on Node Start

2020-05-18 Thread zork
Hi Ignite experts,

I am facing an issue where some messages sent to a node are missed when the
node has just joined the cluster.

On some debugging, I found that this is because, as soon as the node joins
the cluster, the sender node receives a NODE_JOINED event for that receiver
node and starts sending messages to it; however, the receiver has still not
started listening on the topics to which those messages are sent.

Keeping this use case in mind, can someone help answer these please:
1. Can a node register to listen for specific topics before it joins the
cluster?
2. If the above is not possible, what would be a good way to achieve this so
that the node that just joined does not miss any messages?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache ignite evolvable object

2020-05-18 Thread Ilya Kasnacheev
Hello!

Binarylizable may be shorter, but BinaryObject supports evolvable objects:
you can add new fields, and cache operations will usually preserve them.
You will only lose them when you deserialize your object into an old version
of the POJO, but most internal operations (such as rebalancing) will not
touch them.

Regards,
-- 
Ilya Kasnacheev


On Sat, May 2, 2020 at 13:50, Hemambara wrote:

> Does it save additional bytes by default, or do we have to implement
> Binarylizable? If so, do you have an example?
>
> If it does by default, does that mean that if I send data from a new-version
> node to an old-version node, and then send the same data back to the
> new-version node, it will preserve those new fields?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite Persistence Issues/Warnings

2020-05-18 Thread Ilya Kasnacheev
Hello!

In both cases Ignite will walk through the segments it needs and adjust
persistence for any data that was updated in already-completed operations.

Whether to archive or not is a logistical choice, as far as my understanding
goes.

Regards,
-- 
Ilya Kasnacheev


On Tue, May 12, 2020 at 07:28, adipro wrote:

> Hi Alex,
>
> Are you saying if we disable WAL Archiving, will there be any problem?
>
> Let's say if I disable WAL archiving and if 6 out of 10 WAL segments are
> filled then what happens during OS crash?
>
> When we restart the server, will all the operations in those 6 segments be
> applied to the on-disk data by the checkpointing process? And what happens
> if a crash happens during checkpointing with WAL archiving disabled?
>
> Thanks,
> Aditya.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Can we set TTL (expiry time) of a key-value from thin client?

2020-05-18 Thread scriptnull
I would like to break down this question into two questions.

1. Can we have key-value pairs with different expiry times in the same cache?
(I think the answer is yes, because the Redis layer in Ignite allows for
this.)

2. I am trying to build a Ruby thin client for Apache Ignite and have a basic
prototype in place. But I couldn't find an operation in the binary protocol
documentation that enables setting a TTL for a key-value pair. So, any idea
how to set a TTL via the binary protocol?
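
As a possible server-side workaround: an expiry policy can be set per cache in the cluster configuration, so entries written by any thin client to that cache expire without the client having to send a TTL. A sketch with a hypothetical cache name and a 5-minute created-expiry:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="ttlCache"/>
    <!-- Entries expire 5 minutes after creation. -->
    <property name="expiryPolicyFactory">
        <bean class="javax.cache.expiry.CreatedExpiryPolicy" factory-method="factoryOf">
            <constructor-arg>
                <bean class="javax.cache.expiry.Duration">
                    <constructor-arg value="MINUTES"/>
                    <constructor-arg value="5"/>
                </bean>
            </constructor-arg>
        </bean>
    </property>
</bean>
```

This gives one TTL per cache rather than per entry; per-entry TTL from a thin client would still need protocol-level support.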



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Reloading of cache not working

2020-05-18 Thread Ilya Kasnacheev
Hello!

Yes, as was mentioned on the list, it is not going to replace already
existing keys' values.

Regards,
-- 
Ilya Kasnacheev


On Wed, May 13, 2020 at 08:01, Akash Shinde wrote:

> Hi,
> My question is specifically for clo.apply(key, data)  that I invoked in 
> CacheStoreAdapter.loadCache
> method.
> So, does this method (clo.apply) overrides value for the keys which are
> already present in cache or it just skips?
> My observation is that its not overriding value for the keys which are
> already present and adding data for only new keys.
>
> Thanks,
> Akash
>
> On Wed, May 13, 2020 at 1:15 AM akorensh  wrote:
>
>> Hi,
>> 1) loadCache() is implementation-dependent; by default it just adds new
>> records to the cache.
>>   see example:
>>
>> https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/CacheLoadOnlyStoreExample.java
>>
>>
>>   Take a look at jdbc example as well:
>>
>> https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcStoreExample.java
>>
>>   more info:
>> https://apacheignite.readme.io/docs/data-loading#ignitecacheloadcache
>>
>> 2) You do not need to clear the cache in order to call loadCache
>>
>> 3) This is implementation-dependent. By default it does not overwrite
>> existing entries.
>>
>> You can experiment with the above examples by putting
>> cache.put(1L, new Person(1L, "A", "B")) before the loadCache() statement
>> to get a better feel for its behavior.
>>
>> Thanks, Alex
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: IOException in log and reference for dataRegion&cache configure

2020-05-18 Thread Ilya Kasnacheev
Hello!

These are for some Ignite-internal caches. They have a sensible small default
value; you do not need to tune them.

Regards,
-- 
Ilya Kasnacheev


On Thu, May 14, 2020 at 10:59, kay wrote:

> Hello again :)
>
> I read the memory configuration section:
>
> https://apacheignite.readme.io/docs/memory-configuration
>
> but I don't know exactly what 'setSystemRegionInitialSize' and
> 'setSystemRegionMaxSize' do.
> Do they limit the global data storage of a node?
> I didn't configure those and configured only the data region size,
> but it is working well.
>
> Could someone tell me more about 'setSystemRegionInitialSize' and
> 'setSystemRegionMaxSize'?
>
> Thank you
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: join question

2020-05-18 Thread Ilya Kasnacheev
Hello!

Fails how? Is the result set incorrect? Any specific error message? Please
share details.

Regards,
-- 
Ilya Kasnacheev


On Mon, May 18, 2020 at 16:49, narges saleh wrote:

> Hi All,
> I have encountered a puzzling join case.
> I have 3 tables on a cluster of two ignite server nodes:
> table-A (id + org = primary), replicated
> id
> org. <-- affinity
> other fields
>
> table-B (id, org, add-id=primary key), partitioned
> id
> org <- affinity
> addr-id
> other fields
>
> table-C (id, org, comp-id=primary key), partitioned
> id
> org <- affinity
> comp-id
> other fields
>
> joins between table-A and table-B (on id and org) succeed.
> joins between table-A and table-C (on id and org) succeed.
> joins between table-B and table-C (on id and org) fail.
>
> all three joins succeed if the cluster has only one server node.
> Why does the join between the partitioned caches fail in distributed mode?
>
> I am using JDBC connection for select statements. The join fails whether
> dealing with thick or thin client.
>
> thanks
>
>


Re: java.sql.SQLException: Schema change operation failed: Thread got interrupted while trying to acquire table lock.

2020-05-18 Thread yangjiajun
Hello. Thanks for your reply.

Please see this JIRA ticket:
https://issues.apache.org/jira/browse/IGNITE-13020



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Binary recovery for a very long time

2020-05-18 Thread Ilya Kasnacheev
Hello!

The Direct IO module is experimental and should not be used unless its
performance is tested first in your specific use case.

Regards,
-- 
Ilya Kasnacheev


On Mon, May 18, 2020 at 16:47, 38797715 <38797...@qq.com> wrote:

> Hi,
>
> If direct IO is disabled, the startup speed is doubled, and the same holds
> in some other tests. I find that direct IO has a great impact on read
> performance.
> On 2020/5/14 5:16 AM, Evgenii Zhuravlev wrote:
>
> Can you share full logs from all nodes?
>
> On Tue, May 12, 2020 at 18:24, 38797715 <38797...@qq.com> wrote:
>
>> Hi Evgenii,
>>
>> The storage used is not SSD.
>>
>> We will use different versions of ignite for further testing, such as
>> ignite2.8.
>> Ignite is configured as follows:
>> (The Spring XML configuration was stripped by the mailing-list archive.
>> What survives shows an IgniteConfiguration bean with data storage settings
>> and two CacheConfiguration beans.)
>> On 2020/5/13 4:45 AM, Evgenii Zhuravlev wrote:
>>
>> Hi,
>>
>> Can you share full logs and configuration? What disk so you use?
>>
>> Evgenii
>>
>> On Tue, May 12, 2020 at 06:49, 38797715 <38797...@qq.com> wrote:
>>
>>> Among them:
>>> CO_CO_NEW: ~ 48 minutes(partitioned,backup=1,33M)
>>>
>>> Ignite sys cache: ~ 27 minutes
>>>
>>> PLM_ITEM:~3 minutes(repicated,1.9K)
>>>
>>>
>>> On 2020/5/12 9:08 PM, 38797715 wrote:
>>>
>>> Hi community,
>>>
>>> We have 5 servers, 16 cores, 256g memory, and 200g off-heap memory.
>>> We have 7 tables to test, with data volumes of 31.8M, 495.2M, 552.3M, 33M,
>>> 873.3K, 28M and 1.9K respectively; the 1.9K table is replicated, the others
>>> are partitioned (backups = 1).
>>>
>>> VM args:-server -Xms20g -Xmx20g -XX:+AlwaysPreTouch -XX:+UseG1GC
>>> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+PrintGCDetails
>>> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation
>>> -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M
>>> -Xloggc:/data/gc/logs/gclog.txt -Djava.net.preferIPv4Stack=true
>>> -XX:MaxDirectMemorySize=256M -XX:+PrintAdaptiveSizePolicy
>>>
>>> Today, one of the servers was restarted(kill and then start ignite.sh)
>>> for some reason, but the node took 1.5 hours to start, which was much
>>> longer than expected.
>>>
>>> After analyzing the log, the following information is found:
>>> [2020-05-12T17:00:05,138][INFO ][main][GridCacheDatabaseSharedManager]
>>> Found last checkpoint marker [cpId=7a0564f2-43e5-400b-9439-746fc68a6ccb,
>>> pos=FileWALPointer [idx=10511, fileOff=5134, len=61193]]
>>> [2020-05-12T17:00:05,151][INFO ][main][GridCacheDatabaseSharedManager]
>>> Binary memory state restored at node startup [restoredPtr=FileWALPointer
>>> [idx=10511, fileOff=51410110, len=0]]
>>> [2020-05-12T17:00:05,152][INFO ][main][FileWriteAheadLogManager]
>>> Resuming logging to WAL segment [file=/appdata/ignite/db/wal/24/
>>> 0001.wal, offset=51410110, ver=2]
>>> [2020-05-12T17:00:06,448][INFO ][main][PageMemoryImpl] Started page
>>> memory [memoryAllocated=200.0 GiB, pages=50821088, tableSize=3.9 GiB,
>>> checkpointBuffer=2.0 GiB]
>>> [2020-05-12T17:02:08,528][INFO ][main][GridCacheProcessor] Started
>>> cache in recovery mode [name=CO_CO_NEW, id=-189779360,
>>> dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1,
>>> mvcc=false]
>>> [2020-05-12T17:50:44,341][INFO ][main][GridCacheProcessor] Started
>>> cache in recovery mode [name=CO_CO_LINE, id=-1588248812,
>>> dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1,
>>> mvcc=false]
>>> [2020-05-12T17:50:44,366][INFO ][main][GridCacheProcessor] Started
>>> cache in recovery mode [name=ignite-sys-cache, id=-2100569601,
>>> dataRegionName=sysMemPlc, mode=REPLICATED, atomicity=TRANSACTIONAL, backups=
>>> 2147483647, mvcc=false]
>>> [2020-05-12T18:17:57,071][INFO ][main][GridCacheProcessor] Started
>>> cache in recovery mode [name=CO_CO_LINE_NEW, id=1742991829,
>>> dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1,
>>> mvcc=false]
>>> [2020-05-12T18:19:54,910][INFO ][main][GridCacheProcessor] Started
>>> cache in recovery mode [name=PI_COM_DAY, id=-1904194728,
>>> dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC, backups=1,
>>> mvcc=false]
>>> [2020-05-12T18:19:54,949][INFO ][main][GridCacheProcessor] Started
>>> cache in recovery mode [name=PLM_ITEM, id=-1283854143,
>>> dataRegionName=default, mode=REPLICATED, atomicity=ATOMIC, backups=
>>> 2147483647, mvcc=false]
>>> [2020-05-12T18:22:53,662][INFO ][main][GridCacheProcessor] Started
>>> cache in recovery mode [name=CO_CO, id=64322847,
>>> dataRegionName=defau

Re: About index inline size of primary key

2020-05-18 Thread Ilya Kasnacheev
Hello!

I actually think that you should be conservative with that setting, since a
large inline size will make your B+ tree taller, which may lead to worse
performance.
Setting it to some good default would make sense. It depends on the structure
and selectivity of your primary keys.

Regards,
-- 
Ilya Kasnacheev


On Mon, May 18, 2020 at 16:53, 18624049226 <18624049...@163.com> wrote:

> Hi Ilya,
>
> Then I think this property should be settled in the production planning
> stage.
> So the problem is: if there are many tables in the system and many tables
> have composite primary keys, should this attribute be configured with a
> relatively large value from the beginning, such as 40 or 50, instead of
> waiting for notifications from the log? What's the negative impact?
> On 2020/5/18 9:45 PM, Ilya Kasnacheev wrote:
>
> Hello!
>
> I think this is correct. Moreover, setting this property on only a part of
> the cluster may lead to problems of its own. It is recommended to set it
> before deploying a cluster.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> On Mon, May 18, 2020 at 16:40, 18624049226 <18624049...@163.com> wrote:
>
>> Hi Ilya,
>>
>> Thank you very much for your reply!
>> Am I right that the existing primary key index will not be rebuilt after
>> this property is configured, and that it will only affect newly created
>> tables?
>> On 2020/5/18 9:25 PM, Ilya Kasnacheev wrote:
>>
>> Hello!
>>
>> Yes, it will have global impact on all indexes on primary keys, and all
>> indexes created without INLINE SIZE clause.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> On Thu, May 14, 2020 at 16:53, 38797715 <38797...@qq.com> wrote:
>>
>>> Hi,
>>>
>>> I see this property.
>>> If this property is configured, does it have a global impact? What is the
>>> scope of influence of this parameter?
>>> On 2020/5/14 9:41 PM, Stephen Darlington wrote:
>>>
>>> Exactly as the warning says, with the IGNITE_MAX_INDEX_PAYLOAD_SIZE
>>> property:
>>>
>>> ./ignite.sh -J-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=33
>>>
>>> Regards,
>>> Stephen
>>>
>>> On 14 May 2020, at 14:23, 38797715 <38797...@qq.com> wrote:
>>>
>>> Hi,
>>>
>>> Today, I see the following information in the log:
>>> [2020-05-14T16:42:04,346][WARN][query-#7759][IgniteH2Indexing] Indexed
>>> columns of a row cannot be fully inlined into index what may lead to
>>> slowdown due to additional data page reads, increase index inline size
>>> if needed (set system property IGNITE_MAX_INDEX_PAYLOAD_SIZE with
>>> recommended size (be aware it will be used by default for all indexes
>>> without explicit inline size)) [cacheName=NEW, tableName=NEW,
>>> idxName=_key_PK, idxCols=(CO_NUM, CUST_ID), idxType=PRIMARY KEY,
>>> curSize=10, recommendedInlineSize=33]
>>>
>>> I know that the CREATE INDEX statement has an INLINE_SIZE clause, but I
>>> want to ask, how to adjust the inline size of primary key?
>>>
>>>
>>>
>>>


Re: About index inline size of primary key

2020-05-18 Thread 18624049226

Hi Ilya,

Then I think this property should be settled in the production planning stage.
So the question is: if there are many tables in the system and many
tables have composite primary keys, should this property be set to a
relatively large value from the start, such as 40 or 50, instead of
waiting for warnings in the log? What is the negative impact?


On 2020/5/18 9:45 PM, Ilya Kasnacheev wrote:

Hello!

I think this is correct. Moreover, setting this property on a part of 
cluster may lead to problems of its own. It is recommended to set it
before deploying a cluster.


Regards,
--
Ilya Kasnacheev


Mon, 18 May 2020 at 16:40, 18624049226 <18624049...@163.com
>:


Hi Ilya,

Thank you very much for your reply!
I wonder if the existing primary key index will not be rebuilt
after this property is configured? Will only affect newly created
tables in the future?

On 2020/5/18 9:25 PM, Ilya Kasnacheev wrote:

Hello!

Yes, it will have global impact on all indexes on primary keys,
and all indexes created without INLINE SIZE clause.

Regards,
-- 
Ilya Kasnacheev



Thu, 14 May 2020 at 16:53, 38797715 <38797...@qq.com
>:

Hi,

I see this property.
If this property is configured, it has a global impact? What
is the influence range of this parameter?

On 2020/5/14 9:41 PM, Stephen Darlington wrote:

Exactly as the warning says, with
the IGNITE_MAX_INDEX_PAYLOAD_SIZE property:

./ignite.sh -J-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=33

Regards,
Stephen


On 14 May 2020, at 14:23, 38797715 <38797...@qq.com
> wrote:

Hi,

Today, I see the following information in the log:

[2020-05-14T16:42:04,346][WARN][query-#7759][IgniteH2Indexing]
Indexed columns of a row cannot be fully inlined into index
what may lead to slowdown due to additional data page
reads, increase index inline size if needed (set system
property IGNITE_MAX_INDEX_PAYLOAD_SIZE with recommended
size (be aware it will be used by default for all indexes
without explicit inline size)) [cacheName=NEW,
tableName=NEW, idxName=_key_PK, idxCols=(CO_NUM, CUST_ID),
idxType=PRIMARY KEY, curSize=10, recommendedInlineSize=33]

I know that the CREATE INDEX statement has an INLINE_SIZE
clause, but I want to ask, how to adjust the inline size of
primary key?
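The two knobs discussed in this thread can be written down concretely; a minimal sketch (the index and column names are taken from the warning above, and the value 33 from its recommendation — they are illustrative, not tuning advice):

```
# Global default: applies to PK indexes and any index created
# without an explicit INLINE_SIZE clause.
./ignite.sh -J-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=33

-- Per-index override (SQL DDL, for secondary indexes):
CREATE INDEX new_co_cust ON NEW (CO_NUM, CUST_ID) INLINE_SIZE 33;
```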






join question

2020-05-18 Thread narges saleh
Hi All,
I have encountered a puzzling join case.
I have 3 tables on a cluster of two ignite server nodes:
table-A (id + org = primary), replicated
id
org <-- affinity
other fields

table-B (id, org, addr-id=primary key), partitioned
id
org <- affinity
addr-id
other fields

table-C (id, org, comp-id=primary key), partitioned
id
org <- affinity
comp-id
other fields

a join between table-A and table-B (on id and org) succeeds.
a join between table-A and table-C (on id and org) succeeds.
a join between table-B and table-C (on id and org) fails.

all three joins succeed if the cluster has only one server node.
Why does the join between the partitioned caches fail in distributed mode?

I am using a JDBC connection for SELECT statements. The join fails whether
I use the thick or the thin client.

thanks
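One thing worth checking in a setup like this (an assumption about the cause, not a confirmed diagnosis): when joined rows are not fully collocated, Ignite can return partial join results unless non-collocated (distributed) joins are enabled, e.g. via a flag on the thin JDBC connection string:

```
jdbc:ignite:thin://127.0.0.1?distributedJoins=true
```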


Re: Binary recovery for a very long time

2020-05-18 Thread 38797715

Hi,

If direct I/O is disabled, the startup speed roughly doubles, and some other
tests show the same pattern. I find that direct I/O has a great impact on
read performance.


On 2020/5/14 5:16 AM, Evgenii Zhuravlev wrote:

Can you share full logs from all nodes?

Tue, 12 May 2020 at 18:24, 38797715 <38797...@qq.com
>:


Hi Evgenii,

The storage used is not SSD.

We will use different versions of ignite for further testing, such
as ignite2.8.
Ignite is configured as follows:


<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    [bean definitions stripped by the mailing-list archive]
</beans>

On 2020/5/13 4:45 AM, Evgenii Zhuravlev wrote:

Hi,

Can you share full logs and configuration? What disk so you use?

Evgenii

Tue, 12 May 2020 at 06:49, 38797715 <38797...@qq.com
>:

Among them:
CO_CO_NEW: ~48 minutes (partitioned, backup=1, 33M)

Ignite sys cache: ~27 minutes

PLM_ITEM: ~3 minutes (replicated, 1.9K)


On 2020/5/12 9:08 PM, 38797715 wrote:


Hi community,

We have 5 servers, 16 cores, 256g memory, and 200g off-heap
memory.
We have 7 tables to test, and their data volumes are, respectively:
31.8M, 495.2M, 552.3M, 33M, 873.3K, 28M, 1.9K (replicated); the others
are partitioned (backup = 1)

VM args:-server -Xms20g -Xmx20g -XX:+AlwaysPreTouch
-XX:+UseG1GC -XX:+ScavengeBeforeFullGC
-XX:+DisableExplicitGC -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10
-XX:GCLogFileSize=100M -Xloggc:/data/gc/logs/gclog.txt
-Djava.net.preferIPv4Stack=true -XX:MaxDirectMemorySize=256M
-XX:+PrintAdaptiveSizePolicy

Today, one of the servers was restarted(kill and then start
ignite.sh) for some reason, but the node took 1.5 hours to
start, which was much longer than expected.

After analyzing the log, the following information is found:

[2020-05-12T17:00:05,138][INFO][main][GridCacheDatabaseSharedManager]
Found last checkpoint marker
[cpId=7a0564f2-43e5-400b-9439-746fc68a6ccb,
pos=FileWALPointer [idx=10511, fileOff=5134, len=61193]]
[2020-05-12T17:00:05,151][INFO][main][GridCacheDatabaseSharedManager]
Binary memory state restored at node startup
[restoredPtr=FileWALPointer [idx=10511, fileOff=51410110,
len=0]]
[2020-05-12T17:00:05,152][INFO][main][FileWriteAheadLogManager]
Resuming logging to WAL segment
[file=/appdata/ignite/db/wal/24/0001.wal,
offset=51410110, ver=2]
[2020-05-12T17:00:06,448][INFO][main][PageMemoryImpl]
Started page memory [memoryAllocated=200.0GiB,
pages=50821088, tableSize=3.9GiB, checkpointBuffer=2.0GiB]
[2020-05-12T17:02:08,528][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=CO_CO_NEW,
id=-189779360, dataRegionName=default, mode=PARTITIONED,
atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T17:50:44,341][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=CO_CO_LINE,
id=-1588248812, dataRegionName=default, mode=PARTITIONED,
atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T17:50:44,366][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=ignite-sys-cache,
id=-2100569601, dataRegionName=sysMemPlc, mode=REPLICATED,
atomicity=TRANSACTIONAL, backups=2147483647, mvcc=false]
[2020-05-12T18:17:57,071][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=CO_CO_LINE_NEW,
id=1742991829, dataRegionName=default, mode=PARTITIONED,
atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T18:19:54,910][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=PI_COM_DAY,
id=-1904194728, dataRegionName=default, mode=PARTITIONED,
atomicity=ATOMIC, backups=1, mvcc=false]
[2020-05-12T18:19:54,949][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=PLM_ITEM,
id=-1283854143, dataRegionName=default, mode=REPLICATED,
atomicity=ATOMIC, backups=2147483647, mvcc=false]
[2020-05-12T18:22:53,662][INFO][main][GridCacheProcessor]
Started cache in recovery mode [name=CO_CO, id=64322847,
dataRegionName=default, mode=PARTITIONED, atomicity=ATOMIC,
backups=1
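For the direct I/O observation in this thread: direct I/O is enabled by having the ignite-direct-io module on the classpath, and (as an assumption worth testing rather than a recommendation) it can be switched off without removing the module via a system property:

```
./ignite.sh -J-DIGNITE_DIRECT_IO_ENABLED=false
```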

Re: Unsubscribe

2020-05-18 Thread Vivian Huertaz
Hello!

Please write to user-unsubscr...@ignite.apache.org, etc, to unsubscribe
from lists.

Regards,

--
Vivian Huertaz

On Mon, May 18, 2020, 7:14 AM Ilya Kasnacheev 
wrote:

> Hello!
>
> Please write to user-unsubscr...@ignite.apache.org, etc, to unsubscribe
> from lists.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, 18 May 2020 at 16:13, ANKIT SINGHAI :
>
>>
>>
>> --
>> Regards,
>> Ankit Singhai
>>
>


Re: About index inline size of primary key

2020-05-18 Thread Ilya Kasnacheev
Hello!

I think this is correct. Moreover, setting this property on a part of
cluster may lead to problems of its own. It is recommended to set it before
deploying a cluster.

Regards,
-- 
Ilya Kasnacheev


Mon, 18 May 2020 at 16:40, 18624049226 <18624049...@163.com>:

> Hi Ilya,
>
> Thank you very much for your reply!
> I wonder if the existing primary key index will not be rebuilt after this
> property is configured? Will only affect newly created tables in the future?
> On 2020/5/18 9:25 PM, Ilya Kasnacheev wrote:
>
> Hello!
>
> Yes, it will have global impact on all indexes on primary keys, and all
> indexes created without INLINE SIZE clause.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
Thu, 14 May 2020 at 16:53, 38797715 <38797...@qq.com>:
>
>> Hi,
>>
>> I see this property.
>> If this property is configured, it has a global impact? What is the
>> influence range of this parameter?
>> On 2020/5/14 9:41 PM, Stephen Darlington wrote:
>>
>> Exactly as the warning says, with the IGNITE_MAX_INDEX_PAYLOAD_SIZE
>> property:
>>
>> ./ignite.sh -J-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=33
>>
>> Regards,
>> Stephen
>>
>> On 14 May 2020, at 14:23, 38797715 <38797...@qq.com> wrote:
>>
>> Hi,
>>
>> Today, I see the following information in the log:
>> [2020-05-14T16:42:04,346][WARN][query-#7759][IgniteH2Indexing] Indexed
>> columns of a row cannot be fully inlined into index what may lead to
>> slowdown due to additional data page reads, increase index inline size
>> if needed (set system property IGNITE_MAX_INDEX_PAYLOAD_SIZE with
>> recommended size (be aware it will be used by default for all indexes
>> without explicit inline size)) [cacheName=NEW, tableName=NEW,
>> idxName=_key_PK, idxCols=(CO_NUM, CUST_ID), idxType=PRIMARY KEY,
>> curSize=10, recommendedInlineSize=33]
>>
>> I know that the CREATE INDEX statement has an INLINE_SIZE clause, but I
>> want to ask, how to adjust the inline size of primary key?
>>
>>
>>
>>


Re: About index inline size of primary key

2020-05-18 Thread 18624049226

Hi Ilya,

Thank you very much for your reply!
I wonder: will the existing primary key index be rebuilt after this
property is configured, or will it only affect newly created tables in
the future?


On 2020/5/18 9:25 PM, Ilya Kasnacheev wrote:

Hello!

Yes, it will have global impact on all indexes on primary keys, and 
all indexes created without INLINE SIZE clause.


Regards,
--
Ilya Kasnacheev


Thu, 14 May 2020 at 16:53, 38797715 <38797...@qq.com
>:


Hi,

I see this property.
If this property is configured, it has a global impact? What is
the influence range of this parameter?

On 2020/5/14 9:41 PM, Stephen Darlington wrote:

Exactly as the warning says, with
the IGNITE_MAX_INDEX_PAYLOAD_SIZE property:

./ignite.sh -J-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=33

Regards,
Stephen


On 14 May 2020, at 14:23, 38797715 <38797...@qq.com
> wrote:

Hi,

Today, I see the following information in the log:

[2020-05-14T16:42:04,346][WARN][query-#7759][IgniteH2Indexing]
Indexed columns of a row cannot be fully inlined into index what
may lead to slowdown due to additional data page reads, increase
index inline size if needed (set system property
IGNITE_MAX_INDEX_PAYLOAD_SIZE with recommended size (be aware it
will be used by default for all indexes without explicit inline
size)) [cacheName=NEW, tableName=NEW, idxName=_key_PK,
idxCols=(CO_NUM, CUST_ID), idxType=PRIMARY KEY, curSize=10,
recommendedInlineSize=33]

I know that the CREATE INDEX statement has an INLINE_SIZE
clause, but I want to ask, how to adjust the inline size of
primary key?






Re: ignite node ports

2020-05-18 Thread Ilya Kasnacheev
Hello!

I have to correct myself: 11211 is not used by the thick JDBC driver (which
is a regular client node); instead, it is used mostly by the control.sh tool
and some other legacy tools.

Regards,
-- 
Ilya Kasnacheev


Wed, 13 May 2020 at 17:44, Evgenii Zhuravlev :

> Hi,
>
> Ports are described here:
> https://dzone.com/articles/a-simple-checklist-for-apache-ignite-beginners
>
> Basically, the Discovery (47500 by default) and Communication (47100) ports
> should always be open, since without them the cluster won't be functional.
> The Discovery port is used for clustering and checking the state of all
> nodes in the cluster.
>
> Communication port(
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setLocalPort-int-)
>  used
> for all other communications between nodes, for example, cache operations
> requests, compute jobs, etc.
>
> Rest Port(8080) is used for rest calls(
> https://apacheignite.readme.io/docs/rest-api) and connection from
> WebConsole(Management tool)
>
> Client connector port(10800) is used for the JDBC(
> https://apacheignite-sql.readme.io/docs/jdbc-driver), ODBC(
> https://apacheignite-sql.readme.io/docs/odbc-driver) or other thin
> clients(https://apacheignite.readme.io/docs/java-thin-client) connection.
>
> 11211 - port for the thick JDBC driver and the old REST protocol.
>
> Note that all ports also have port range, which means that if the default
> port is already in use, it will try to use the next one.
>
> Evgenii
>
>
>
>
>
> Tue, 12 May 2020 at 22:56, kay :
>
>> Hello, I started ignite node and checked log file.
>>
>> I found TCP ports in logs
>>
>> >>> Local ports : TCP:8080 TCP:11213 TCP:47102 TCP:49100 TCP:49200
>>
>> I set ports 49100 and 49200 in the configuration file, for the Ignite node
>> and the client connector port.
>> But I don't know what the other ports are for exactly.
>>
>> I found a summary at log.
>>
>> [Node 1]
>> TCP binary : 8080
>> Jetty REST  : 11213
>> Communication spi : 47102
>>
>> [Node 2]
>> TCP binary : 8081
>> Jetty REST  : 11214
>> Communication spi : 47103
>>
>> Could you guys tell me where each port is used??
>>
>> Are these ports necessary?
>> Do I need 5 ports, all different, each time I add a new node?
>> If so, how can I set the TCP binary port (8080) and Jetty REST (11213) in
>> the configuration file?
>>
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
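To address the configuration-file part of the question above: the binary REST port (11211 by default) is set via ConnectorConfiguration, and that connector can also be disabled entirely if unused. A Spring XML sketch (property names as in Ignite's ConnectorConfiguration; the Jetty REST port is normally taken from a Jetty config file or the IGNITE_JETTY_PORT system property, not from this bean):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Binary REST / legacy clients, 11211 by default. -->
    <property name="connectorConfiguration">
        <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
            <property name="port" value="11211"/>
        </bean>
    </property>
    <!-- To avoid binding 11211 at all, disable the connector instead:
         <property name="connectorConfiguration"><null/></property> -->
</bean>
```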


Re: About index inline size of primary key

2020-05-18 Thread Ilya Kasnacheev
Hello!

Yes, it will have global impact on all indexes on primary keys, and all
indexes created without INLINE SIZE clause.

Regards,
-- 
Ilya Kasnacheev


Thu, 14 May 2020 at 16:53, 38797715 <38797...@qq.com>:

> Hi,
>
> I see this property.
> If this property is configured, it has a global impact? What is the
> influence range of this parameter?
> On 2020/5/14 9:41 PM, Stephen Darlington wrote:
>
> Exactly as the warning says, with the IGNITE_MAX_INDEX_PAYLOAD_SIZE
> property:
>
> ./ignite.sh -J-DIGNITE_MAX_INDEX_PAYLOAD_SIZE=33
>
> Regards,
> Stephen
>
> On 14 May 2020, at 14:23, 38797715 <38797...@qq.com> wrote:
>
> Hi,
>
> Today, I see the following information in the log:
> [2020-05-14T16:42:04,346][WARN][query-#7759][IgniteH2Indexing] Indexed
> columns of a row cannot be fully inlined into index what may lead to
> slowdown due to additional data page reads, increase index inline size if
> needed (set system property IGNITE_MAX_INDEX_PAYLOAD_SIZE with recommended
> size (be aware it will be used by default for all indexes without
> explicit inline size)) [cacheName=NEW, tableName=NEW, idxName=_key_PK,
> idxCols=(CO_NUM, CUST_ID), idxType=PRIMARY KEY, curSize=10,
> recommendedInlineSize=33]
>
> I know that the CREATE INDEX statement has an INLINE_SIZE clause, but I
> want to ask, how to adjust the inline size of primary key?
>
>
>
>


Re: BinaryObject field is not update

2020-05-18 Thread Ilya Kasnacheev
Hello!

Before put() is completed, Binary Object Schema is not created/updated and
so the type names are not reflected here.

Unfortunately, if you need to read all fields of binary objects, you may
need to use internal APIs (which may not work as expected or change between
releases),
such as, in this case, BinaryObjectImpl.createSchema().fieldIds() and
BinaryObjectImpl.field(int fieldId).

I recommend putting any variable properties in a map field, as opposed to
re-building the binary object on the fly.

Regards,
-- 
Ilya Kasnacheev


Sat, 9 May 2020 at 18:46, takumi :

> Hello. As I am in trouble, please help me.
>
> I use Java ThinClient and Ignite Cache Server.
> In client code, I update BinaryObject by BinaryObjectBuilder API.
> ex) BinaryObject bo = clientCache.get(KEY);
>  clientCache.put(KEY, bo.toBuilder().setField("AddField",
> ...).build());
>
> But MyCacheInterceptor#onBeforePut(entry, newVal) is not update
> newVal.type().fieldNames().
> newVal.type().fieldNames() does not return "AddField" field.
>
> I can get newVal.field("AddField");
>
> How can I update type().fieldNames() result?
> I want to get a "newVal" in response list of type().fieldNames().
>
> Sorry, I am weak in English, and a sentence is not good.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
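Ilya's last suggestion — keeping variable properties in a map field instead of rebuilding the object's schema on the fly — can be sketched without any Ignite APIs (the class and field names here are illustrative, not from the original code):

```java
import java.util.HashMap;
import java.util.Map;

// Fixed schema: one map field absorbs all dynamically-added properties,
// so the binary type's field list never has to change.
class FlexibleValue {
    final String id;                                    // stable, schema-visible field
    final Map<String, Object> props = new HashMap<>();  // variable part

    FlexibleValue(String id) { this.id = id; }

    void setProp(String name, Object v) { props.put(name, v); }
    Object prop(String name) { return props.get(name); }
}

public class MapFieldSketch {
    public static void main(String[] args) {
        FlexibleValue v = new FlexibleValue("KEY-1");
        v.setProp("AddField", 42);              // no schema change needed
        System.out.println(v.prop("AddField")); // prints 42
    }
}
```

With this layout, interceptors and readers see a stable type whose field names never change; only the map's contents vary.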


Re: Continuous Query on a varying set of keys

2020-05-18 Thread Ilya Kasnacheev
Hello!

Remote filter is code. It can execute arbitrary logic. It can adjust to
what it needs to filter, change its own behavior with time.

Regards,
-- 
Ilya Kasnacheev


Mon, 18 May 2020 at 15:40, zork :

> Hi Ilya,
> Thanks for your response.
> I'm aware of remote filters but can these filters be modified once the
> query
> is already attached?
> Because if not, then this would not solve my use case as the filter would
> always give me updates on a fixed subset of keys, however in my case this
> subset is varying (based on what keys a user subscribes from the GUI).
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
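The "remote filter is code and can change its own behavior" point can be illustrated without Ignite: a filter backed by a mutable subscription set keeps one query attached while the key set varies (in Ignite that set could, for example, live in a replicated cache the remote filter consults — an assumption, not the only possible design):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Predicate;

public class DynamicFilterSketch {
    // Keys the user is currently subscribed to; mutated as subscriptions
    // change, while the filter object itself stays the same.
    static final Set<String> SUBSCRIBED = ConcurrentHashMap.newKeySet();

    // The "remote filter": admits an update only for subscribed keys.
    static final Predicate<String> FILTER = SUBSCRIBED::contains;

    public static void main(String[] args) {
        SUBSCRIBED.add("K1");
        System.out.println(FILTER.test("K1")); // true
        System.out.println(FILTER.test("K2")); // false
        SUBSCRIBED.add("K2");                  // subscription changes...
        System.out.println(FILTER.test("K2")); // ...same filter now passes K2
    }
}
```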


Re: Streamer with overwrite option

2020-05-18 Thread Ilya Kasnacheev
Hello!

I think it is somewhat slower, but not by much, and it certainly does not
wipe all the benefits of using Data Streamer. It's OK to use.

Regards,
-- 
Ilya Kasnacheev


Mon, 18 May 2020 at 01:36, narges saleh :

> Hi All,
>
> I am going to get updates for the existing records in the cache, and
> currently don't have an update or upsert process in place.
> How expensive is a streamer with overwrite option set to true, assuming
> the cache has millions of records? I assume the streamer would compare the
> new entry against the cache using the primary key of the cache/table but
> I'd assume this wipes out all the bulk processing benefits of the streamer.
> Would an explicit upsert process, say using a streamer receiver  work
> better in comparison?
>
> Is there any way to speed the process up?
>
> thanks
>
>


Re: Unsubscribe

2020-05-18 Thread Ilya Kasnacheev
Hello!

Please write to user-unsubscr...@ignite.apache.org, etc, to unsubscribe
from lists.

Regards,
-- 
Ilya Kasnacheev


Mon, 18 May 2020 at 16:13, ANKIT SINGHAI :

>
>
> --
> Regards,
> Ankit Singhai
>


Unsubscribe

2020-05-18 Thread ANKIT SINGHAI
-- 
Regards,
Ankit Singhai


Re: How many Caches/Tables we can create/query in parallel?

2020-05-18 Thread Ilya Kasnacheev
Hello!

I just wanted to add that you can't create tables in parallel. Table
creation is sequential since it involves a Partition Map Exchange, which is
a blocking operation.

It can become a bottleneck quickly.

Regards,
-- 
Ilya Kasnacheev



Fri, 15 May 2020 at 07:24, adipro :

> We created a task which crawls website. We have an SQL table, where the
> application writes data while crawling website. But the application has
> several threads running in parallel, each thread working on a certain
> website. The amount of data that is being generated on that SQL table is
> huge and we want to create a separate SQL table for each thread.
>
> Now if we crawl 100s of websites in parallel, we'll create 100s of threads.
> So can we create 100s of SQL tables? And right after the crawling is done
> we'll delete/destroy those SQL tables.
>
> I am asking this question because, we have a limit for now like each
> website
> should be crawled for "n" number of pages. If that "n" increases the amount
> of data one SQL table holds increases linearly and it's not ideal for
> future
> purposes.
>
> Does Ignite hold connections to 100s of SQL tables in parallel assuming
> that
> ulimit is set to whatever is required?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: java.sql.SQLException: Schema change operation failed: Thread got interrupted while trying to acquire table lock.

2020-05-18 Thread Ilya Kasnacheev
Hello!

I can see how such scenario can expose issues in Apache Ignite's SQL. Can
you please create a JIRA ticket with this code?

Regards,
-- 
Ilya Kasnacheev


Sat, 16 May 2020 at 12:51, yangjiajun <1371549...@qq.com>:

> Hello. Sorry for the late reply. I have got a reproducer.
>
> Here is the test code. Please run it on an Ignite instance without
> persistence enabled.
>
> public class ConcurrentCreateTable {
> private static Connection conn;
>
> private static Connection conn1;
>
> public static void main(String[] args) throws Exception {
>
> initialize();
> new Thread(new Runnable() {
>
> @Override
> public void run() {
> while (true) {
> try (Statement stmt = conn.createStatement()) {
> stmt.execute(
> "CREATE TABLE IF NOT EXISTS city1(ID
> INTEGER,NAME VARCHAR,NAME1 VARCHAR ,PRIMARY KEY(ID)) WITH
> \"template=replicated\";"
> + "CREATE INDEX IF NOT EXISTS city1_name ON
> city1(NAME);CREATE INDEX IF NOT EXISTS city1_name1 on city1(NAME1);");
> stmt.execute("DROP TABLE IF EXISTS city1");
> System.out.println("XXX");
> } catch (SQLException e) {
>
> e.printStackTrace();
> }
> }
> }
> }).start();
> new Thread(new Runnable() {
>
> @Override
> public void run() {
> while (true) {
> try (Statement stmt = conn1.createStatement()) {
> stmt.execute(
> "CREATE TABLE IF NOT EXISTS city2(ID
> INTEGER,NAME VARCHAR, NAME1 VARCHAR ,PRIMARY KEY(ID)) WITH
> \"template=replicated\";"
> + "CREATE INDEX IF NOT EXISTS city2_name ON
> city2(NAME);CREATE INDEX IF NOT EXISTS city2_name1 on city2(NAME1);");
> stmt.execute("DROP TABLE IF EXISTS city2");
> System.out.println("XXX");
> } catch (SQLException e) {
>
> e.printStackTrace();
> }
> }
> }
> }).start();
> while (true) {
>
> }
> }
>
> private static void initialize() throws Exception {
> Class.forName(Config.IGNITE_DRIVER);
> final Properties props = new Properties();
> conn = DriverManager.getConnection(Config.IGNITE_URL, props);
> conn1 = DriverManager.getConnection(Config.IGNITE_URL, props);
> }
> }
>
> Here is the exception:
> java.sql.SQLException: Schema change operation failed: Thread got
> interrupted while trying to acquire table lock.
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:901)
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:231)
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:559)
> at
> 
> at java.lang.Thread.run(Thread.java:748)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous Query on a varying set of keys

2020-05-18 Thread zork
Hi Ilya,
Thanks for your response.
I'm aware of remote filters but can these filters be modified once the query
is already attached?
Because if not, then this would not solve my use case as the filter would
always give me updates on a fixed subset of keys, however in my case this
subset is varying (based on what keys a user subscribes from the GUI).



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data streamer has been cancelled

2020-05-18 Thread nithin91
Got it. Thanks a lot. This is very useful



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite.cache.loadcache.Does this method do Increamental Load?

2020-05-18 Thread Ilya Kasnacheev
Hello!

You can use a Data Streamer with the allowOverwrite flag set to do incremental
load with key replacement. Load cache is tailored for initial loading only.

Of course, you will have to write your own code around Data Streamer.

Regards,
-- 
Ilya Kasnacheev


Fri, 15 May 2020 at 18:53, nithin91 <
nithinbharadwaj.govindar...@franklintempleton.com>:

> Then what is the best way to do perform incremental load.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Continuous Query on a varying set of keys

2020-05-18 Thread Ilya Kasnacheev
Hello!

Continuous query has a notion of 'remote filter'. This is a piece of code
which is executed near data (on server nodes) to determine if the update
needs to be sent over network.

https://apacheignite.readme.io/docs/continuous-queries#remote-filter

If you define a proper remote filter, updates will not flow over the
network unless this is actually needed.

Regards,
-- 
Ilya Kasnacheev


Sun, 17 May 2020 at 22:14, zork :

> Hi,
>
> We have a table in ignite cache which would have say around 1Mn entries at
> anytime. Now we wish to listen on updates on a subset of these keys (say
> 5-10 thousand keys) and this subset keeps on changing as the user
> subscribes/unsubscribes to these keys.
>
> The way it is currently working is one continuous query is attached for
> every key whenever it is subscribed and it is closed whenever that key is
> no
> longer of interest (or unsubscribed). The problem with this is that since
> there are so many continuous queries (a few thousands), the application
> goes
> out of memory. Also, it would mean all those queries would be evaluated on
> the remote node for every update.
>
> To overcome this, what we intend to do is to have just one continuous query
> which would listen to all the updates on this table (i.e. all the keys) and
> on receiving these updates we would have to filter those of our interest on
> our end. But this would mean unnecessary updates would flow over the
> network
> and it doesn't sound like a very good solution too.
>
> Can someone suggest a better way this problem could be addressed? Do we
> have
> something else in ignite to cater such requirement?
>
> Thanks in advance.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Calculation of Size

2020-05-18 Thread Ilya Kasnacheev
Hello!

Why do you focus on "too many dirty pages" message? It is purely
informational.

If you wish to avoid that, you need to make sure that less than half of the
checkpoint page buffer is ever filled with updates which are pending to be
written to disk between scheduled checkpoints.

Ideally you should have more off-heap than the amount of data you are
having, this way there will not be page eviction or forced checkpoints.

Regards,
-- 
Ilya Kasnacheev


Sun, 17 May 2020 at 16:51, adipro :

> I have persistence enabled. How much size do I need for Off-Heap if I wish
> not to have "too many dirty pages" checkpointing?
>
> Our application has many threads which simultaneously write/read data. With
> total 1L records in all caches I'm seeing 770MB of size occupied in
> Off-heap
> while running for 1 thread. If more threads run, then we can multiply 770Mb
> with number of threads.
>
> If I don't provide Off-heap at all, I'm getting delayed performance.
>
> What should I do and how should I calculate how much Off-heap I need for
> better performance?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
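The checkpoint page buffer Ilya refers to is sized per data region; a Spring XML configuration sketch (the sizes are illustrative placeholders, not tuned values for this workload):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="persistenceEnabled" value="true"/>
                <!-- Off-heap size for the region. -->
                <property name="maxSize" value="#{4L * 1024 * 1024 * 1024}"/>
                <!-- Buffer for pages dirtied while a checkpoint is in progress. -->
                <property name="checkpointPageBufferSize" value="#{2L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
```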


Re: Cant connect distributed servers

2020-05-18 Thread Ilya Kasnacheev
Hello!

I think this is because the other node (second server) cannot ping the
first node (server1) via communication (port 47100 is closed or connection
is blocked).

Please provide complete log from both nodes.

Regards,
-- 
Ilya Kasnacheev


Sun, 17 May 2020 at 21:13, Vasily Laktionov :

> Hi all,
> We can't connect two distributed servers, see attached log.
> server1.log
> 
>
> JVM config includes -Djava.net.preferIPv4Stack=true
> Ignite conf includes
> TcpDiscoverySpi discovery = new TcpDiscoverySpi();
> TcpDiscoveryMulticastIpFinder ipFinder = new
> TcpDiscoveryMulticastIpFinder();
>
> ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500",
> "xxx.xx.xx.xx:ext_port"));
> ipFinder.setMulticastGroup("228.10.10.157");
> discovery.setLocalPort(47500);
> discovery.setLocalPortRange(10);
> discovery.setIpFinder(ipFinder);
>
> We tried to use BasicAddressResolver but it does not help.
> Please provide info on how to solve this problem.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Data streamer has been cancelled

2020-05-18 Thread Manuel Núñez Sánchez
Since there are several approaches to solving this, I'll continue with your
code; a couple of suggestions:

- Create the streamer instance outside the loop, and close it in a finally
block, not within the loop.
- stmr.autoFlushFrequency(0), since you flush manually every 2000 elements.
- Don't forget the remaining data (< 2000) from the last iteration.

IgniteDataStreamer<String, PieCountryAllocation> stmr =
    ignite.dataStreamer("PieCountryAllocationCache");
stmr.allowOverwrite(true);
// disable auto flush - we'll do it manually
stmr.autoFlushFrequency(0);

try {
    int j = 0;
    for (Map.Entry<String, PieCountryAllocation> entry :
        PieCountryAllocationobjs.entrySet()) {

        tempobjs.put(entry.getKey(), entry.getValue());
        // every 2000 rows, push the batch and flush manually
        if (++j == 2000) {
            stmr.addData(tempobjs);
            // do flush every 2000 items
            stmr.flush();
            tempobjs.clear();
            System.out.println("Batch sent");
            j = 0;
        }
    }

    // stream remaining data (< 2000) from the last iteration
    if (!tempobjs.isEmpty()) {
        stmr.addData(tempobjs);
    }
} finally {
    stmr.flush();
    stmr.close(false);
}

> On 18 May 2020, at 12:36, nithin91
>  wrote:
> 
>   int j=0;
>   for (Map.Entry entry :
> PieCountryAllocationobjs.entrySet()) { 
>
>   tempobjs.put(entry.getKey(), 
> entry.getValue());
> //For ever 2000 rows i am callling stmr.addData(tempobjs) and then
> stmr.flush and stmr.close(false)
>   if((j%2000==0 && j!=0) ||
>   
> (PieCountryAllocationobjs.keySet().size() < 2000 &&
> j==PieCountryAllocationobjs.keySet().size())
>   || 
> j==PieCountryAllocationobjs.keySet().size()
>   ){
>   System.out.println(j);
>   IgniteDataStreamer<String, PieCountryAllocation> stmr =
> ignite.dataStreamer("PieCountryAllocationCache");
>   stmr.allowOverwrite(true);
>   stmr.addData(tempobjs);
>   stmr.flush();
>   stmr.close(false);
>   tempobjs.clear();
>   System.out.println("Stream 
> Ended");
>   System.out.println(j);
>   
>   }
>   j++;
>}



Re: Data streamer has been cancelled

2020-05-18 Thread nithin91
Hi 

Implemented the code as suggested by you; please find it below. Please let me
know whether this is the right way of implementing what you suggested.

Also, can you please explain the use of the stmr.autoFlushFrequency(2000)
method? If I pass a higher number to this method, will that improve
performance?

Map<String, PieCountryAllocation> Originalobjs = new HashMap<>(); // contains
// all the 0.1 million key-value pairs that have to be loaded

Map<String, PieCountryAllocation> tempobjs = new HashMap<>(); // temp map that
// will contain only 2000 records at a time, pushed to the cache using the
// data streamer

int j=0;
for (Map.Entry entry :
PieCountryAllocationobjs.entrySet()) { 
 
tempobjs.put(entry.getKey(), 
entry.getValue());
//For ever 2000 rows i am callling stmr.addData(tempobjs) and then
stmr.flush and stmr.close(false)
if((j%2000==0 && j!=0) ||

(PieCountryAllocationobjs.keySet().size() < 2000 &&
j==PieCountryAllocationobjs.keySet().size())
|| 
j==PieCountryAllocationobjs.keySet().size()
){
System.out.println(j);
IgniteDataStreamer<String, PieCountryAllocation> stmr =
ignite.dataStreamer("PieCountryAllocationCache");
stmr.allowOverwrite(true);
stmr.addData(tempobjs);
stmr.flush();
stmr.close(false);
tempobjs.clear();
System.out.println("Stream 
Ended");
System.out.println(j);

}
j++;
 }



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
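The batch-every-N-entries loop both messages iterate on can be reduced to a small generic helper, testable without Ignite (in the real code the sink would be stmr.addData(...) followed by stmr.flush(); the class and method names here are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;

public class BatchSketch {
    // Feeds src to sink in batches of batchSize, never forgetting the
    // final partial batch; returns how many batches were emitted.
    static <K, V> int drainInBatches(Map<K, V> src, int batchSize,
                                     Consumer<Map<K, V>> sink) {
        Map<K, V> batch = new LinkedHashMap<>();
        int batches = 0;
        for (Map.Entry<K, V> e : src.entrySet()) {
            batch.put(e.getKey(), e.getValue());
            if (batch.size() == batchSize) {
                sink.accept(new LinkedHashMap<>(batch)); // hand off a copy
                batch.clear();
                batches++;
            }
        }
        if (!batch.isEmpty()) {                  // tail batch (< batchSize)
            sink.accept(new LinkedHashMap<>(batch));
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        Map<Integer, String> src = new LinkedHashMap<>();
        for (int i = 0; i < 5; i++) src.put(i, "v" + i);
        // 5 entries in batches of 2 -> batch sizes 2, 2, 1
        int n = drainInBatches(src, 2, b -> System.out.println(b.size()));
        System.out.println(n); // prints 3
    }
}
```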


Re: Deleting multiple entries from cache at once

2020-05-18 Thread nithin91
Thanks for sharing this.Its really helpful



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Service Node vs Data Node

2020-05-18 Thread Stephen Darlington
We’d need to see your code. As I noted previously, running a compute job does 
not start a server node.

> On 18 May 2020, at 11:21, nithin91 
>  wrote:
> 
> Hi 
> 
> Regarding client mode I am clear, but what is the use of starting the
> compute job on a server node? When I start the job on a server node, the
> following happens:
> 
> It creates a new server node every time, and the node disconnects once the
> job is done. The problem with this behaviour is that, since a new node is
> created as part of executing this job, data is rebalanced so that it is
> evenly distributed across the nodes, and when the job completes this node
> goes down, which means the data on it is lost.
> 
> 
> 
> 
> 




Re: Service Node vs Data Node

2020-05-18 Thread nithin91
Hi 

Regarding client mode I am clear, but what is the use of starting the
compute job on a server node? When I start the job on a server node, the
following happens:

It creates a new server node every time, and the node disconnects once the
job is done. The problem with this behaviour is that, since a new node is
created as part of executing this job, data is rebalanced so that it is
evenly distributed across the nodes, and when the job completes this node
goes down, which means the data on it is lost.







Re: Service Node vs Data Node

2020-05-18 Thread Stephen Darlington
Ignition.start() doesn't start a compute job; it starts a new node (a server
node by default). If you want to create a client node, you can do that in the
configuration file or by adding "Ignition.setClientMode(true)" before you start
the node. Then you can run a compute job:
Ignition.setClientMode(true);
Ignite ignite = Ignition.start("config.xml");

ignite.compute().run(
() -> System.out.println("Hello from a server")
);
Regards,
Stephen

> On 18 May 2020, at 08:49, nithin91 
>  wrote:
> 
> Hi
> 
> I initially have two nodes. But when I initiate the compute job
> (IgniteCompute compute = ignite.compute()) from a server, i.e. Ignite
> ignite = Ignition.start("Server.xml"), it creates a new server node every
> time, and the node disconnects once the job is done. The problem with this
> behaviour: when the node joins, data is rebalanced so that it is evenly
> distributed across the nodes, but when the job completes this node goes
> down, which means the data on it is lost.
> 
> This is the log I see when I start a compute job using
> ignite = Ignition.start("Server.xml"):
> 
> Topology snapshot [ver=3, locNode=92d08291, servers=3, clients=0,
> state=ACTIVE, CPUs=8, offheap=13.0GB, heap=11.0GB]
> [13:10:21]   ^-- Baseline [id=0, size=2, online=2, offline=0]
> 
> When the job completes execution, I get the following:
> 
> Ignite node stopped OK
> 
> Please let me know whether my understanding is correct.
> 
> 
> 




Re: Deleting multiple entries from cache at once

2020-05-18 Thread Stephen Darlington
Yes, that's the removeAll method! There's another method signature, one that
accepts a parameter:

public void removeAll(Set<? extends K> keys) throws TransactionException;

There are a few ways to update multiple entries. You can always use the SQL
UPDATE command, but the equivalent of removeAll() is invokeAll():

public <T> Map<K, EntryProcessorResult<T>> invokeAll(Set<? extends K> keys,
    EntryProcessor<K, V, T> entryProcessor, Object... args)
    throws TransactionException;
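To illustrate both calls, a hedged sketch; the cache name "myCache", its String/String types and the keys are illustrative assumptions, not from the thread:

```java
import java.util.Map;
import java.util.Set;
import javax.cache.processor.EntryProcessorResult;
import org.apache.ignite.IgniteCache;

// Illustrative: assume an IgniteCache<String, String> obtained elsewhere.
IgniteCache<String, String> cache = ignite.cache("myCache");

Set<String> keys = Set.of("k1", "k2", "k3");

// Update just these entries in one round trip with an entry processor:
Map<String, EntryProcessorResult<Object>> results =
    cache.invokeAll(keys, (entry, args) -> {
        entry.setValue(entry.getValue().toUpperCase()); // example mutation
        return null;
    });

// Remove only these entries, not the whole cache:
cache.removeAll(keys);
```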
Regards,
Stephen

> On 18 May 2020, at 08:55, nithin91 
>  wrote:
> 
> The method mentioned in the API is cache.removeAll(), which removes all the
> elements in the cache, but I want to remove certain entries from the cache
> at once, like cache.removeAll(List of Keys). Is there any such method, or an
> efficient way to remove the entries corresponding to a list of keys at once?
> 
> *Similarly, is there any such method or efficient way to update the entries
> corresponding to a list of keys at once?*
> 
> 
> 




Re: Deleting multiple entries from cache at once

2020-05-18 Thread Manuel Núñez Sánchez
Hi!

This is the method you are looking for (also available in async mode):

/**
 * {@inheritDoc}
 * @throws TransactionException If operation within transaction is failed.
 */
@IgniteAsyncSupported
@Override public void removeAll(Set<? extends K> keys) throws TransactionException;

Also, you can use clearAll to delete entries in silent mode:

/**
 * Clears entries from the cache and swap storage, without notifying listeners or
 * {@link CacheWriter}s. An entry is cleared only if it is not currently locked
 * and is not participating in a transaction.
 *
 * @param keys Keys to clear.
 * @throws IllegalStateException if the cache is {@link #isClosed()}
 * @throws CacheException if there is a problem during the clear
 */
@IgniteAsyncSupported
public void clearAll(Set<? extends K> keys);


Use putAll to update entries at once:

/**
 * {@inheritDoc}
 * @throws TransactionException If operation within transaction is failed.
 */
@IgniteAsyncSupported
@Override public void putAll(Map<? extends K, ? extends V> map) throws TransactionException;
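A short usage sketch of the three calls; the cache name, key/value types and the Ignite instance are illustrative assumptions:

```java
import java.util.Map;
import java.util.Set;
import org.apache.ignite.IgniteCache;

// Illustrative: assume an Ignite instance "ignite" is already started.
IgniteCache<Integer, String> cache = ignite.getOrCreateCache("demoCache");

cache.putAll(Map.of(1, "one", 2, "two")); // upsert several entries at once
cache.clearAll(Set.of(1));                // silent: no listeners or CacheWriter
cache.removeAll(Set.of(2));               // full remove semantics for these keys
```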


Cheers,

Manuel.

> On 18 May 2020, at 9:55, nithin91 
>  wrote:
> 
> The method mentioned in the API is cache.removeAll(), which removes all the
> elements in the cache, but I want to remove certain entries from the cache
> at once, like cache.removeAll(List of Keys). Is there any such method, or an
> efficient way to remove the entries corresponding to a list of keys at once?
> 
> *Similarly, is there any such method or efficient way to update the entries
> corresponding to a list of keys at once?*
> 
> 
> 



Re: Deleting multiple entries from cache at once

2020-05-18 Thread nithin91
The method mentioned in the API is cache.removeAll(), which removes all the
elements in the cache, but I want to remove certain entries from the cache at
once, like cache.removeAll(List of Keys). Is there any such method, or an
efficient way to remove the entries corresponding to a list of keys at once?

*Similarly, is there any such method or efficient way to update the entries
corresponding to a list of keys at once?*





Re: Service Node vs Data Node

2020-05-18 Thread nithin91
Hi

I initially have two nodes. But when I initiate the compute job
(IgniteCompute compute = ignite.compute()) from a server, i.e. Ignite
ignite = Ignition.start("Server.xml"), it creates a new server node every
time, and the node disconnects once the job is done. The problem with this
behaviour: when the node joins, data is rebalanced so that it is evenly
distributed across the nodes, but when the job completes this node goes
down, which means the data on it is lost.

This is the log I see when I start a compute job using
ignite = Ignition.start("Server.xml"):

Topology snapshot [ver=3, locNode=92d08291, servers=3, clients=0,
state=ACTIVE, CPUs=8, offheap=13.0GB, heap=11.0GB]
[13:10:21]   ^-- Baseline [id=0, size=2, online=2, offline=0]

When the job completes execution, I get the following:

Ignite node stopped OK

Please let me know whether my understanding is correct.





Re: Service Node vs Data Node

2020-05-18 Thread nithin91
Thanks. This information is very helpful.






Re: Service Node vs Data Node

2020-05-18 Thread Stephen Darlington
> To execute a compute job on server nodes, should I use the command below by
> starting as a client node? Please correct me if I am wrong.
> IgniteCompute compute = ignite.compute(cluster.forRemotes());

That would start a compute job on a remote node, which could be another client!
If you just say ignite.compute(), it defaults to server nodes. You can be
explicit with cluster.forServers().
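A minimal sketch of the explicit form, combining it with the client-mode startup from earlier in the thread (the config file name and printed message are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

Ignition.setClientMode(true);
Ignite ignite = Ignition.start("config.xml");

// Restrict execution to server nodes explicitly via a cluster group:
ignite.compute(ignite.cluster().forServers())
      .broadcast(() -> System.out.println("Runs on every server node"));
```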

> Also, can you please confirm that a compute job can be initiated only on a
> client node (i.e. by setting the client mode property to true in the bean
> file)?

A compute job can be initiated from any node, client or server.

Regards,
Stephen