Ignite 2.9 one way client to server communication

2020-10-30 Thread Hemambara
I see that Ignite 2.9 has added support for one-way thick-client to server
connections. Does it reduce the time taken to connect a thick client to the
server while still providing all thick-client functionality? Will the client
still be in the ring? Right now we are facing issues with thick clients
taking a long time to connect, especially when we have 60 clients, so we have
switched to thin clients for now. But we need map listeners. Does upgrading
to 2.9 help reduce the long connection times? Also, is there any plan to
provide map listeners on thin clients?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Execution of local SqlFieldsQuery on client node disallowed

2020-10-30 Thread Denis Magda
Hi Narges,

Then just send a task to the required node. If the cluster topology changes
while the task is running, you can re-submit it to ensure the result is
accurate.
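A minimal sketch of that pattern (illustrative only, since it needs a running
Ignite cluster; the cache name "myCache" and the Person table are hypothetical):
broadcast a callable to all server nodes, run the query with setLocal(true) on
each, combine the partial results, and re-run if the topology version changed
in the meantime.

```java
import java.util.Collection;
import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LocalQueryBroadcast {
    /** Sums a local COUNT(*) over every server node; retries if the topology changed. */
    static long countAll(Ignite ignite) {
        long topVer = ignite.cluster().topologyVersion();

        Collection<Long> partials = ignite.compute(ignite.cluster().forServers())
            .broadcast(() -> {
                // setLocal(true) restricts the query to data held on this node only.
                SqlFieldsQuery qry = new SqlFieldsQuery("SELECT COUNT(*) FROM Person");
                qry.setLocal(true);

                // Ignition.localIgnite() resolves the node-local Ignite instance
                // inside the closure (the outer handle is not serializable).
                List<List<?>> rows =
                    Ignition.localIgnite().cache("myCache").query(qry).getAll();

                return (Long) rows.get(0).get(0);
            });

        // If rebalancing happened mid-flight, partial results may overlap or miss
        // rows; re-run against the new topology.
        if (ignite.cluster().topologyVersion() != topVer)
            return countAll(ignite);

        return partials.stream().mapToLong(Long::longValue).sum();
    }
}
```

As an alternative, affinityCall/affinityRun can be invoked per partition id
(no key needed), which additionally pins those partitions against eviction
for the duration of the job.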

-
Denis


On Fri, Oct 30, 2020 at 2:16 PM narges saleh  wrote:

> Hi Denis,
>
> My problem with using affinity call/run is that I have to have the key in
> order to run it. I just want to run a function on the data on the current
> node, without knowing the key. Is there anyway to do this and also
> guard against partition rebalancing?
>
> thanks
>
> On Tue, Oct 27, 2020 at 10:31 AM narges saleh 
> wrote:
>
>> Thanks Ilya, Denis for the feedback.
>>
>> On Mon, Oct 26, 2020 at 1:44 PM Denis Magda  wrote:
>>
>>> Narges,
>>>
>>> Also, keep in mind that if a local query is executed over a partitioned
>>> table and it happens that partitions rebalancing starts, the local query
>>> might return a wrong result (if partitions the query was executed over were
>>> rebalanced to another node during the query execution time). To address
>>> this:
>>>
>>>1. Execute the local query inside of an affinityCall/Run function (
>>>
>>> https://ignite.apache.org/docs/latest/distributed-computing/collocated-computations#colocating-by-key).
>>>Those functions don't let partitions be evicted until the function
>>>execution completes.
>>>2. Don't use local queries; let the Ignite SQL engine run standard
>>>queries and take care of possible optimizations.
>>>
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Mon, Oct 26, 2020 at 8:50 AM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 You are using an Ignite Thick Client driver. As its name implies, it
 will start a local client node and then connect to it, without the option
 of doing local queries.

 You need to use Ignite Thin JDBC driver: jdbc:ignite:thin://
 Then you can do local queries.

 Regards,
 --
 Ilya Kasnacheev


Sat, Oct 24, 2020 at 16:04, narges saleh :

> Hello Ilya
> Yes, it happens all the time. It seems ignite forces the "client"
> establishing the jdbc connection into a client mode, even if I set
> client=false.  The sample code and config are attached. The question is 
> how
> do I force JDBC connections from a server node.
> thanks.
>
> On Fri, Oct 23, 2020 at 10:31 AM Ilya Kasnacheev <
> ilya.kasnach...@gmail.com> wrote:
>
>> Hello!
>>
>> Does this happen every time? If so, do you have a reproducer for the
>> issue?
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Fri, Oct 23, 2020 at 13:06, narges saleh :
>>
>>> Denis -- Just checked. I do specify my services to be deployed on
>>> server nodes only. Why would ignite think that I am running my code on a
>>> client node?
>>>
>>> On Fri, Oct 23, 2020 at 3:50 AM narges saleh 
>>> wrote:
>>>
 Hi Denis
 What would make an ignite node a client node? The code is invoked
 via an ignite service deployed on each node and I am not setting the 
 client
 mode anywhere. The code sets the jdbc connection to local and tries to
 execute a sql code on the node in some interval. By the way, I didn't 
 know
 one could deploy a service on client nodes. Do I need to explicitly 
 mark a
 node as a server node when deploying a service?
 thanks

 On Thu, Oct 22, 2020 at 9:42 PM Denis Magda 
 wrote:

> The error message says you're attempting to run the query on a
> client node. If that's the case (if the service is deployed on the 
> client
> node), then the local flag has no effect because client nodes don't 
> keep
> your data locally but rather consume it from servers.
>
> -
> Denis
>
>
> On Thu, Oct 22, 2020 at 6:26 PM narges saleh 
> wrote:
>
>> Hi All,
>> I am trying to execute a sql query via a JDBC  connection on the
>> service node (the query is run via a service), but I am getting 
>> *Execution
>> of local SqlFieldsQuery on client node disallowed.*
>> *The JDBC connection has the option local=true as I want to run
>> the query on the data on the local node only.*
>> *Any idea why I am getting this error?*
>>
>> *thanks.*
>>
>


Ignite timeouts and trouble interpreting the logs

2020-10-30 Thread tschauenberg
First some background.  Ignite 2.8.1 with a 3 node cluster, two webserver
client nodes, and one batch processing client node that comes and goes.

The two webserver thick client nodes and the one batch processing thick
client node have the following configuration values:
* IgniteConfiguration.setNetworkTimeout(6)
* IgniteConfiguration.setFailureDetectionTimeout(12)
* TcpDiscoverySpi.setJoinTimeout(6)
* TcpCommunicationSpi.setIdleConnectionTimeout(Long.MAX_VALUE)

The server nodes do not have any timeouts set and are currently using all
defaults.  My understanding is that means they are using:
* failureDetectionTimeout 1
* clientFailureDetectionTimeout 3
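
For reference, a server-side Spring XML sketch that sets these explicitly
(the property names are the standard IgniteConfiguration setters; the values
shown are the documented defaults, in milliseconds):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Timeout for detecting failed server nodes (default 10000 ms). -->
    <property name="failureDetectionTimeout" value="10000"/>
    <!-- Timeout for detecting failed client nodes (default 30000 ms). -->
    <property name="clientFailureDetectionTimeout" value="30000"/>
</bean>
```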
 
Every so often the batch processing client node fails to connect to the
cluster.  We try to connect the batch processing client node to a single
node in the cluster using:
TcpDiscoverySpi.setIpFinder(new TcpDiscoveryVmIpFinder().setAddresses(single
node ip))

I see the following stream of logs on the server node the client connects to,
and I am hoping you can shed some light on which timeout values I have set
incorrectly and what values I need to set instead.

In these logs I have obfuscated the client IP to 10.1.2.xxx and the server
IP as 10.1.10.xxx


On the server node that the client tries to connect to I see the following
sequence of messages:

[20:21:28,092][INFO][exchange-worker-#42][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=4146, minorTopVer=0], force=false, evt=NODE_JOINED,
node=1b91b2a5-05ac-4809-8a3d-c1c2efb6a3e3]

So the client joined the cluster almost exactly when it tried to join, which
seems good so far.

Then I see
[20:21:54,726][INFO][db-checkpoint-thread-#56][GridCacheDatabaseSharedManager]
Skipping checkpoint (no pages were modified) [checkpointBeforeLockTime=6ms,
checkpointLockWait=0ms, checkpointListenersExecuteTime=6ms,
checkpointLockHoldTime=8ms, reason='timeout']
[20:21:58,044][INFO][tcp-disco-sock-reader-[1b91b2a5 10.1.2.xxx:47585
client]-#4176][TcpDiscoverySpi] Finished serving remote node connection
[rmtAddr=/10.1.2.xxx:47585, rmtPort=47585

[20:21:58,045][WARNING][grid-timeout-worker-#23][TcpDiscoverySpi] Socket
write has timed out (consider increasing
'IgniteConfiguration.failureDetectionTimeout' configuration property)
[failureDetectionTimeout=1, rmtAddr=/10.1.2.xxx:47585, rmtPort=47585,
sockTimeout=5000]

I don't understand this socket timeout line: that remote address is the
client's remote address, so I don't know what it was doing here, and this
failureDetectionTimeout isn't the clientFailureDetectionTimeout, which I
don't get.

It then seems to connect just fine to the client discovery here

[20:22:10,170][INFO][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] TCP
discovery accepted incoming connection [rmtAddr=/10.1.2.xxx, rmtPort=56921]
[20:22:10,170][INFO][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] TCP
discovery spawning a new thread for connection [rmtAddr=/10.1.2.xxx,
rmtPort=56921]
[20:22:10,171][INFO][tcp-disco-sock-reader-[]-#4178][TcpDiscoverySpi]
Started serving remote node connection [rmtAddr=/10.1.2.xxx:56921,
rmtPort=56921]
[20:22:10,175][INFO][tcp-disco-sock-reader-[1b91b2a5 10.1.2.xxx:56921
client]-#4178][TcpDiscoverySpi] Initialized connection with remote client
node [nodeId=1b91b2a5-05ac-4809-8a3d-c1c2efb6a3e3,
rmtAddr=/10.1.2.xxx:56921]
[20:22:27,870][INFO][tcp-disco-sock-reader-[1b91b2a5 10.1.2.xxx:56921
client]-#4178][TcpDiscoverySpi] Finished serving remote node connection
[rmtAddr=/10.1.2.xxx:56921, rmtPort=56921

The client hits its timeout at 20:22:28, which is the 60-second timeout we
give it from 20:21:28, so this finished message arrives almost exactly at the
timeout threshold.

Given the socket timeout above, is the second chunk of logs from
20:22:10-20:22:27 a client discovery retry?

The client exits at 20:22:28 because of its 60-second timeout and probably
didn't get the above discovery response message in time?

This server node then notices the client didn't respond within 30 seconds
from 20:22:27 to 20:22:57 (and since it timed out at 20:22:28 and exited
that generally seems to fit):

[20:22:57,811][WARNING][tcp-disco-msg-worker-[21ddf49c 10.1.10.xxx:47500
crd]-#2][TcpDiscoverySpi] Failing client node due to not receiving metrics
updates from client node within
'IgniteConfiguration.clientFailureDetectionTimeout' (consider increasing
configuration property) [timeout=3, node=TcpDiscoveryNode
[id=1b91b2a5-05ac-4809-8a3d-c1c2efb6a3e3,
consistentId=1b91b2a5-05ac-4809-8a3d-c1c2efb6a3e3, addrs=ArrayList
[127.0.0.1, 172.17.0.3], sockAddrs=HashSet [/127.0.0.1:0, /172.17.0.3:0],
discPort=0, order=4146, intOrder=2076, lastExchangeTime=1604089287814,
loc=false, ver=2.8.1#20200521-sha1:86422096, isClient=true]]
[20:22:57,812][WARNING][disco-event-worker-#41][GridDiscoveryManager] Node
FAILED: TcpDiscoveryNode [id=1b91b2a5-05ac-4809-8a3d-c1c2efb6a3e3,

Re: Execution of local SqlFieldsQuery on client node disallowed

2020-10-30 Thread narges saleh
Hi Denis,

My problem with using affinity call/run is that I have to have the key in
order to run it. I just want to run a function on the data on the current
node, without knowing the key. Is there anyway to do this and also
guard against partition rebalancing?

thanks

On Tue, Oct 27, 2020 at 10:31 AM narges saleh  wrote:

> Thanks Ilya, Denis for the feedback.
>
> On Mon, Oct 26, 2020 at 1:44 PM Denis Magda  wrote:
>
>> Narges,
>>
>> Also, keep in mind that if a local query is executed over a partitioned
>> table and it happens that partitions rebalancing starts, the local query
>> might return a wrong result (if partitions the query was executed over were
>> rebalanced to another node during the query execution time). To address
>> this:
>>
>>1. Execute the local query inside of an affinityCall/Run function (
>>
>> https://ignite.apache.org/docs/latest/distributed-computing/collocated-computations#colocating-by-key).
>>Those functions don't let partitions be evicted until the function
>>execution completes.
>>2. Don't use local queries; let the Ignite SQL engine run standard
>>queries and take care of possible optimizations.
>>
>>
>> -
>> Denis
>>
>>
>> On Mon, Oct 26, 2020 at 8:50 AM Ilya Kasnacheev <
>> ilya.kasnach...@gmail.com> wrote:
>>
>>> Hello!
>>>
>>> You are using an Ignite Thick Client driver. As its name implies, it
>>> will start a local client node and then connect to it, without the option
>>> of doing local queries.
>>>
>>> You need to use Ignite Thin JDBC driver: jdbc:ignite:thin://
>>> Then you can do local queries.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Sat, Oct 24, 2020 at 16:04, narges saleh :
>>>
 Hello Ilya
 Yes, it happens all the time. It seems ignite forces the "client"
 establishing the jdbc connection into a client mode, even if I set
 client=false.  The sample code and config are attached. The question is how
 do I force JDBC connections from a server node.
 thanks.

 On Fri, Oct 23, 2020 at 10:31 AM Ilya Kasnacheev <
 ilya.kasnach...@gmail.com> wrote:

> Hello!
>
> Does this happen every time? If so, do you have a reproducer for the
> issue?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, Oct 23, 2020 at 13:06, narges saleh :
>
>> Denis -- Just checked. I do specify my services to be deployed on
>> server nodes only. Why would ignite think that I am running my code on a
>> client node?
>>
>> On Fri, Oct 23, 2020 at 3:50 AM narges saleh 
>> wrote:
>>
>>> Hi Denis
>>> What would make an ignite node a client node? The code is invoked
>>> via an ignite service deployed on each node and I am not setting the 
>>> client
>>> mode anywhere. The code sets the jdbc connection to local and tries to
>>> execute a sql code on the node in some interval. By the way, I didn't 
>>> know
>>> one could deploy a service on client nodes. Do I need to explicitly 
>>> mark a
>>> node as a server node when deploying a service?
>>> thanks
>>>
>>> On Thu, Oct 22, 2020 at 9:42 PM Denis Magda 
>>> wrote:
>>>
 The error message says you're attempting to run the query on a
 client node. If that's the case (if the service is deployed on the 
 client
 node), then the local flag has no effect because client nodes don't 
 keep
 your data locally but rather consume it from servers.

 -
 Denis


 On Thu, Oct 22, 2020 at 6:26 PM narges saleh 
 wrote:

> Hi All,
> I am trying to execute a sql query via a JDBC  connection on the
> service node (the query is run via a service), but I am getting 
> *Execution
> of local SqlFieldsQuery on client node disallowed.*
> *The JDBC connection has the option local=true as I want to run
> the query on the data on the local node only.*
> *Any idea why I am getting this error?*
>
> *thanks.*
>



Re: Apache Ignite talks videos from IMC Summit 2020

2020-10-30 Thread Saikat Maitra
Hi Kseniya,

Thank you for sharing the videos details, much appreciate it.

Regards,
Saikat


On Fri, 30 Oct 2020 at 12:13 PM, Kseniya Romanova 
wrote:

> Hi, igniters!
> Below you can find videos of the Ignite talks from the recent Virtual
> In-Memory Computing Summit:
>
>1. Apache Ignite Training Part 1—Setting Up Apache Ignite Management
>and Monitoring Solution with GridGain Control Center—by Denis Magda
>https://www.youtube.com/watch?v=6R6y7RLT2YA
>2. Apache Ignite Training Part 2 —Training for Apache Ignite as an
>In-Memory Database (IMDB)—by Glenn Wiebe
>https://www.youtube.com/watch?v=cLb4KZHC3KA
>3. Engineering Overview of GridGain Nebula Managed Service: How We
>Deploy GridGain and Apache Ignite in Clouds—by Andrey Alexandrov
>https://www.youtube.com/watch?v=TYWFxDW0yIQ
>4. Going Cloud-Native: Serverless Applications with Apache Ignite—by
>Denis Magda https://www.youtube.com/watch?v=hq7MOIQrhrE
>5. Analyzing and Debugging Ignite Applications for Performance— by
>Greg Stachnick https://www.youtube.com/watch?v=6q0U25yklaM
>6. Performance and Fault-Tolerance of Apache Ignite’s Network
>Components—by Stanislav Lukyanov
>https://www.youtube.com/watch?v=zTpgE3ppfUM
>7. Hyperparameter Tuning and Distributed Stacking with Apache Ignite
>ML—by Alexey Zinoviev https://www.youtube.com/watch?v=vYXVqVtpFzo
>8. Apache Ignite Extensions: Modularization—by Saikat Maitra
>https://www.youtube.com/watch?v=JJcEgsJ7P38
>
> Thanks to all the speakers for promoting Ignite!
>
> Have a spooky Halloween!
> Kseniya
>


Re: Ignite instances frequently failing - BUG: soft lockup - CPU#1 stuck

2020-10-30 Thread bbellrose
Looks like it was a CentOS 8 bug with ksmtuned. A few VMs were going crazy
with CPU usage from that process. I have disabled that service, and CPU usage
on the VM cluster is down. I am going to wait and see if that resolves it.

Brian





Apache Ignite talks videos from IMC Summit 2020

2020-10-30 Thread Kseniya Romanova
Hi, igniters!
Below you can find videos of the Ignite talks from the recent Virtual
In-Memory Computing Summit:

   1. Apache Ignite Training Part 1—Setting Up Apache Ignite Management and
   Monitoring Solution with GridGain Control Center—by Denis Magda
   https://www.youtube.com/watch?v=6R6y7RLT2YA
   2. Apache Ignite Training Part 2 —Training for Apache Ignite as an
   In-Memory Database (IMDB)—by Glenn Wiebe
   https://www.youtube.com/watch?v=cLb4KZHC3KA
   3. Engineering Overview of GridGain Nebula Managed Service: How We
   Deploy GridGain and Apache Ignite in Clouds—by Andrey Alexandrov
   https://www.youtube.com/watch?v=TYWFxDW0yIQ
   4. Going Cloud-Native: Serverless Applications with Apache Ignite—by
   Denis Magda https://www.youtube.com/watch?v=hq7MOIQrhrE
   5. Analyzing and Debugging Ignite Applications for Performance— by Greg
   Stachnick https://www.youtube.com/watch?v=6q0U25yklaM
   6. Performance and Fault-Tolerance of Apache Ignite’s Network
   Components—by Stanislav Lukyanov
   https://www.youtube.com/watch?v=zTpgE3ppfUM
   7. Hyperparameter Tuning and Distributed Stacking with Apache Ignite
   ML—by Alexey Zinoviev https://www.youtube.com/watch?v=vYXVqVtpFzo
   8. Apache Ignite Extensions: Modularization—by Saikat Maitra
   https://www.youtube.com/watch?v=JJcEgsJ7P38

Thanks to all the speakers for promoting Ignite!

Have a spooky Halloween!
Kseniya


Re: Tracing configuration

2020-10-30 Thread Bastien Durel
On Friday, October 30, 2020 at 18:03 +0300, Maxim Muzafarov wrote:
> Hello Bastien,
> 
> Is issue [1] the same one you are facing?
> It seems the fix will be available in 2.9.1 (or 2.10).
> 
> [1] https://issues.apache.org/jira/browse/IGNITE-13640

Hello,

It may be, but I don't know enough about Maven to see what the patch
changes.

Regards,

-- 
Bastien Durel
DATA
Enterprise data integration,
Decision-support information systems.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Re: Inserting date into ignite with spark jdbc

2020-10-30 Thread Humphrey
Yes, I don't want to supply an Ignite configuration XML; I would like to
connect through JDBC like any other database. And there is no way to supply
the primary key.





When doing a redeploy of JBoss WAR with Apache Ignite, Failed to marshal custom event: StartRoutineDiscoveryMessage

2020-10-30 Thread Nicholas DiPiazza
Also posted on
https://stackoverflow.com/questions/64611498/when-doing-a-redeploy-of-jboss-war-with-apache-ignite-failed-to-marshal-custom
so please answer there so I can reward with points.

I am trying to make it so I can redeploy a JBoss 7.1.0 cluster with a WAR
that has Apache Ignite.

I am starting the cache like this:

System.setProperty("IGNITE_UPDATE_NOTIFIER", "false");

igniteConfiguration = new IgniteConfiguration();

int failureDetectionTimeout =
Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT",
"6"));

igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);

String igniteVmIps = getProperty("IGNITE_VM_IPS");
List<String> addresses = Arrays.asList("127.0.0.1:47500");
if (StringUtils.isNotBlank(igniteVmIps)) {
addresses = Arrays.asList(igniteVmIps.split(","));
}

int networkTimeout =
Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_NETWORK_TIMEOUT",
"6"));
boolean failureDetectionTimeoutEnabled =
Boolean.parseBoolean(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT_ENABLED",
"true"));

int tcpDiscoveryLocalPort =
Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT",
"47500"));
int tcpDiscoveryLocalPortRange =
Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT_RANGE",
"0"));

TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
tcpDiscoverySpi.setLocalPort(tcpDiscoveryLocalPort);
tcpDiscoverySpi.setLocalPortRange(tcpDiscoveryLocalPortRange);
tcpDiscoverySpi.setNetworkTimeout(networkTimeout);

tcpDiscoverySpi.failureDetectionTimeoutEnabled(failureDetectionTimeoutEnabled);
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(addresses);
tcpDiscoverySpi.setIpFinder(ipFinder);

igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);

Ignite ignite = Ignition.start(igniteConfiguration);

ignite.cluster().active(true);

Then I am stopping the cache when the application undeploys:

ignite.close();

When I try to redeploy, I get the following error during initialization.

 org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom
event: StartRoutineDiscoveryMessage [startReqData=StartRequestData
[prjPred=org.apache.ignite.internal.cluster.ClusterGroupAdapter$CachesFilter@7385a997,
clsName=null, depInfo=null,
hnd=org.apache.ignite.internal.GridEventConsumeHandler@2aec6952,
bufSize=1, interval=0, autoUnsubscribe=true], keepBinary=false,
deserEx=null, routineId=bbe16e8e-2820-4ba0-a958-d5f644498ba2]

If I fully restart the server, it starts up fine.

Am I missing some magic in the shutdown process?


Re: Inserting date into ignite with spark jdbc

2020-10-30 Thread Andrei Aleksandrov

Denis,

I can check it out soon. The mentioned problem can probably only be
related to JDBC data frames. In this case, I will create a JIRA ticket.
But as far as I know, using OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS should
behave the same as in my example.


BR,
Andrei

On 10/30/2020 6:01 PM, Denis Magda wrote:

Andrey,

Do we need to update our docs? It feels like the docs miss these 
details or have an outdated example.


-
Denis


On Fri, Oct 30, 2020 at 7:03 AM Andrei Aleksandrov
<aealexsand...@gmail.com> wrote:


Hi,

Here's an example with correct syntax that should work fine:

DataFrameWriter<Row> df = resultDF
    .write()
    .format(IgniteDataFrameSettings.FORMAT_IGNITE())
    .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), configPath)
    .option(IgniteDataFrameSettings.OPTION_TABLE(), "Person")
    .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "id, city_id")
    .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PARAMETERS(), "template=partitioned,backups=1")
    .mode(Append);

Please let me know if something is wrong here.

BR,
Andrei

On 10/30/2020 2:20 AM, Humphrey wrote:

Hello guys, this question has been asked on Stack Overflow, but no answer
has been provided yet.


I'm facing the same issue (trying to insert data in ignite using
spark.jdbc):
Exception in thread "main" java.sql.SQLException: No PRIMARY KEY defined for
CREATE TABLE
at

org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1004)

Code:
println("-- writing using jdbc --")
val prop = Properties()
prop["driver"] = "org.apache.ignite.IgniteJdbcThinDriver"

df.write().apply {
    mode(SaveMode.Overwrite)
    format("jdbc")
    option("url", "jdbc:ignite:thin://127.0.0.1")
    option("dbtable", "comments")
    option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(),
        "last_name")
}.save()

The last option doesn't seem to work/help.








Re: Tracing configuration

2020-10-30 Thread Maxim Muzafarov
Hello Bastien,

Is issue [1] the same one you are facing?
It seems the fix will be available in 2.9.1 (or 2.10).

[1] https://issues.apache.org/jira/browse/IGNITE-13640

On Fri, 30 Oct 2020 at 17:49, Bastien Durel  wrote:
>
> Hello,
>
> I'd like to activate tracing to investigate a slowdown. I have
> succeeded (I think) in activating trace gathering by linking
> optional/ignite-opencensus/ into libs, then using control.sh
>
> Command [TRACING-CONFIGURATION] started
> Arguments: --tracing-configuration
> 
> Scope, Label, Sampling Rate, included scopes
> DISCOVERY,,1.0,[]
> EXCHANGE,,0.0,[]
> COMMUNICATION,,1.0,[]
> TX,,1.0,[]
> Command [TRACING-CONFIGURATION] finished with code: 0
>
> But I don't find how to direct traces on my collector.
>
> I put a call to
>
> io.opencensus.exporter.trace.zipkin.ZipkinTraceExporter.createAndRegister(url,
>  serviceName);
>
> in the static section of a class loaded by the server at start, but I
> get some NoClassDefFoundError.
> I am missing at least zipkin-reporter-2.7.14.jar, zipkin-2.12.0.jar and
> zipkin-sender-urlconnection-2.7.14.jar for the Zipkin part, but now I
> get an error for io/opencensus/exporter/trace/util/TimeLimitedHandler
>
> Is it normal that Zipkin and part of OpenCensus are missing from the
> released distribution (I'm using the Debian package), or must I use a
> totally different way of configuring collection?
>
> Thanks. Best regards,
>
> --
> Bastien Durel
> DATA
> Enterprise data integration,
> Decision-support information systems.
>
> bastien.du...@data.fr
> tel : +33 (0) 1 57 19 59 28
> fax : +33 (0) 1 57 19 59 73
> 45 avenue Carnot, 94230 CACHAN France
> www.data.fr
>
>


Re: Inserting date into ignite with spark jdbc

2020-10-30 Thread Denis Magda
Andrey,

Do we need to update our docs? It feels like the docs miss these details or
have an outdated example.

-
Denis


On Fri, Oct 30, 2020 at 7:03 AM Andrei Aleksandrov 
wrote:

> Hi,
>
> Here's an example with correct syntax that should work fine:
>
>  DataFrameWriter<Row> df = resultDF
>   .write()
>   .format(IgniteDataFrameSettings.FORMAT_IGNITE())
>   .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), configPath)
>   .option(IgniteDataFrameSettings.OPTION_TABLE(), "Person")
>   .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), 
> "id, city_id")
>   .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PARAMETERS(), 
> "template=partitioned,backups=1")
>   .mode(Append);
>
> Please let me know if something is wrong here.
>
> BR,
> Andrei
> On 10/30/2020 2:20 AM, Humphrey wrote:
>
> Hello guys, this question has been asked on Stack Overflow, but no answer
> has been provided yet.
>
> I'm facing the same issue (trying to insert data in ignite using
> spark.jdbc):
> Exception in thread "main" java.sql.SQLException: No PRIMARY KEY defined for
> CREATE TABLE
>   at
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1004)
>
> Code:
> println("-- writing using jdbc --")
> val prop = Properties()
> prop["driver"] = "org.apache.ignite.IgniteJdbcThinDriver"
>
> df.write().apply {
> mode(SaveMode.Overwrite)
> format("jdbc")
> option("url", "jdbc:ignite:thin://127.0.0.1")
> option("dbtable", "comments")
>
> option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(),
> "last_name")
> }.save()
>
> The last option doesn't seem to work/help.
>
>
>
>
>


Too long JVM pause out of nowhere leading into shutdowns of ignite-servers

2020-10-30 Thread VincentCE
Hello!

In our project we are currently using Ignite 2.8.1 with ZooKeeper discovery.
During the last couple of days we have been facing shutdowns of some of our
ignite-server nodes.

Please find the logs below:

1) Why do such long JVM/GC pauses occur, although the preceding metrics in
the log do not indicate anything unusual, in my opinion?

2) We have the following timeouts set for the server nodes. Which of them
would influence the handling after such long GC pauses, in order to avoid a
restart of the node?

Thanks in advance for your help!

Configs:


LOGs:

[12:46:21,142][INFO][grid-timeout-worker-#35][IgniteKernal] FreeList
[name=Default_Region##FreeList, buckets=256, dataPages=287347,
reusePages=3169711]
[12:47:21,146][INFO][grid-timeout-worker-#35][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=3f58f4f5, uptime=9 days, 20:56:18.016]
^-- H/N/C [hosts=96, nodes=96, CPUs=1082]
^-- CPU [cur=-100%, avg=-100%, GC=0%]
^-- PageMemory [pages=16626106]
^-- Heap [used=20318MB, free=44.88%, comm=36864MB]
^-- Off-heap [used=65326MB, free=9.12%, comm=71760MB]
^--   sysMemPlc region [used=0MB, free=99.21%, comm=40MB]
^--   TxLog region [used=0MB, free=100%, comm=40MB]
^--   Default_Region region [used=65325MB, free=8.87%, comm=71680MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=14, qSize=0]
[12:47:21,146][INFO][grid-timeout-worker-#35][IgniteKernal] FreeList
[name=Default_Region##FreeList, buckets=256, dataPages=287347,
reusePages=3169711]
[12:48:21,154][INFO][grid-timeout-worker-#35][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=3f58f4f5, uptime=9 days, 20:57:18.025]
^-- H/N/C [hosts=96, nodes=96, CPUs=1082]
^-- CPU [cur=-100%, avg=-100%, GC=0%]
^-- PageMemory [pages=16626106]
^-- Heap [used=13057MB, free=64.58%, comm=36864MB]
^-- Off-heap [used=65326MB, free=9.12%, comm=71760MB]
^--   sysMemPlc region [used=0MB, free=99.21%, comm=40MB]
^--   TxLog region [used=0MB, free=100%, comm=40MB]
^--   Default_Region region [used=65325MB, free=8.87%, comm=71680MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=14, qSize=0]
[12:48:21,154][INFO][grid-timeout-worker-#35][IgniteKernal] FreeList
[name=Default_Region##FreeList, buckets=256, dataPages=287347,
reusePages=3169711]
[12:49:21,162][INFO][grid-timeout-worker-#35][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=3f58f4f5, uptime=9 days, 20:58:18.029]
^-- H/N/C [hosts=96, nodes=96, CPUs=1082]
^-- CPU [cur=-100%, avg=-100%, GC=0%]
^-- PageMemory [pages=16626106]
^-- Heap [used=8768MB, free=76.21%, comm=36864MB]
^-- Off-heap [used=65326MB, free=9.12%, comm=71760MB]
^--   sysMemPlc region [used=0MB, free=99.21%, comm=40MB]
^--   TxLog region [used=0MB, free=100%, comm=40MB]
^--   Default_Region region [used=65325MB, free=8.87%, comm=71680MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=14, qSize=0]
^-- System thread pool [active=0, idle=14, qSize=0]
[12:49:21,162][INFO][grid-timeout-worker-#35][IgniteKernal] FreeList
[name=Default_Region##FreeList, buckets=256, dataPages=287347,
reusePages=3169711]
[12:50:21,163][INFO][grid-timeout-worker-#35][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=3f58f4f5, uptime=9 days, 20:59:18.031]
^-- H/N/C [hosts=96, nodes=96, CPUs=1082]
^-- CPU [cur=-100%, avg=-100%, GC=0.03%]
^-- PageMemory [pages=16626106]
^-- Heap [used=7632MB, free=79.3%, comm=36864MB]
^-- Off-heap [used=65326MB, free=9.12%, comm=71760MB]
^--   sysMemPlc region [used=0MB, free=99.21%, comm=40MB]
^--   TxLog region [used=0MB, free=100%, comm=40MB]
^--   Default_Region region [used=65325MB, free=8.87%, comm=71680MB]
^-- Outbound messages queue [size=0]
^-- Public thread pool [active=0, idle=0, qSize=0]
^-- System thread pool [active=0, idle=14, qSize=0]
[12:50:21,163][INFO][grid-timeout-worker-#35][IgniteKernal] FreeList
[name=Default_Region##FreeList, buckets=256, dataPages=287347,
reusePages=3169711]
[12:51:21,168][INFO][grid-timeout-worker-#35][IgniteKernal] 
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=3f58f4f5, uptime=9 days, 21:00:18.038]
^-- H/N/C [hosts=96, nodes=96, CPUs=1082]
^-- CPU [cur=-100%, avg=-100%, GC=0%]
^-- PageMemory [pages=16626106]
^-- Heap [used=27712MB, free=24.82%, comm=36864MB]
^-- Off-heap [used=65326MB, free=9.12%, comm=71760MB]
^--   

Tracing configuration

2020-10-30 Thread Bastien Durel
Hello,

I'd like to activate tracing to investigate a slowdown. I have
succeeded (I think) in activating trace gathering by linking
optional/ignite-opencensus/ into libs, then using control.sh

Command [TRACING-CONFIGURATION] started
Arguments: --tracing-configuration

Scope, Label, Sampling Rate, included scopes
DISCOVERY,,1.0,[]
EXCHANGE,,0.0,[]
COMMUNICATION,,1.0,[]
TX,,1.0,[]
Command [TRACING-CONFIGURATION] finished with code: 0

But I don't find how to direct traces on my collector.

I put a call to 

io.opencensus.exporter.trace.zipkin.ZipkinTraceExporter.createAndRegister(url, 
serviceName);

in the static section of a class loaded by the server at start, but I
get some NoClassDefFoundError.
I am missing at least zipkin-reporter-2.7.14.jar, zipkin-2.12.0.jar and
zipkin-sender-urlconnection-2.7.14.jar for the Zipkin part, but now I
get an error for io/opencensus/exporter/trace/util/TimeLimitedHandler

Is this normal that zipkin & part of opencensus are missing from the
released distribution ? (I'm using Debian package), or must I use a
totally different way of configuring collection ?
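For reference, one way to avoid collecting the exporter jars by hand is to resolve them with Maven and copy the resolved dependency tree into libs/. The artifact coordinates below are the published OpenCensus ones, but the version is a placeholder you would need to align with the opencensus-api jar shipped in optional/ignite-opencensus:

```xml
<!-- Version is an assumption: match it to the opencensus-api jar bundled
     with optional/ignite-opencensus. This artifact should pull
     zipkin-reporter and zipkin-sender-urlconnection transitively. -->
<dependency>
  <groupId>io.opencensus</groupId>
  <artifactId>opencensus-exporter-trace-zipkin</artifactId>
  <version>${opencensus.version}</version>
</dependency>
```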

Thanks. Best regards,

-- 
Bastien Durel
DATA
Enterprise data integration,
decision-support information systems.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Re: Ignite Cluster Issue on 2.7.6

2020-10-30 Thread Andrei Aleksandrov

Hi,

Did you remove the ignite.cluster().active(true); call?

However, yes, all of your data nodes should be in baseline topology. 
Could you collect logs from your servers?


BR,
Andrei

On 10/30/2020 2:28 PM, Gurmehar Kalra wrote:


Hi,

I tried the changes you suggested, waited for the nodes, and then tried 
to start the cluster, but only one node joins the cluster; the other 
node does not participate.


Do I have to add all nodes to the BLT?

Regards,

Gurmehar Singh

From: Andrei Aleksandrov 
Sent: 29 October 2020 20:11
To: user@ignite.apache.org
Subject: Re: Ignite Cluster Issue on 2.7.6

[CAUTION: This Email is from outside the Organization. Unless you 
trust the sender, Don’t click links or open attachments as it may be a 
Phishing email, which can steal your Information and compromise your 
Computer.]


Hi,

Do you use a cluster with persistence? After the first activation, all 
your data will be located on the first activated node.


In this case, you also should track your baseline.

https://www.gridgain.com/docs/latest/developers-guide/baseline-topology 



Baseline topology is the subset of nodes where your cache data is located.

The recommendations are the following:

1) You should activate the cluster only when all server nodes have started.
2) If the topology changes, you must either restore the failed nodes or 
reset the baseline topology to trigger partition reassignment and 
rebalancing.
3) If a new node should contain cache data, then you should add this 
node to the baseline topology:


using Java code:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCluster.html#setBaselineTopology-java.util.Collection-


using utility tool:

https://www.gridgain.com/docs/latest/administrators-guide/control-script#adding-nodes-to-baseline-topology 



4) If a node from the baseline can't be started (e.g., because its 
data on disk was destroyed), it should be removed from the baseline:


https://www.gridgain.com/docs/latest/administrators-guide/control-script#removing-nodes-from-baseline-topology 



If you are not using persistence, please provide additional 
information about what "data is being added to the cache but not 
available to any of the modules" means:


1) How you access data
2) What do you see in the logs

BR,
Andrei

On 10/29/2020 4:19 PM, Gurmehar Kalra wrote:

Hi,

I have two modules (Web and Engine) and want to share data between
them, but when I run Web and Engine together, data is added to the
cache but is not available to either module.
Below is my Ignite config, which is the same in both modules:

config.setActiveOnStart(true);

config.setAutoActivationEnabled(true);

config.setIgniteHome(propertyReader.getProperty("spring.ignite.storage.path"));

config.setFailureHandler(new StopNodeOrHaltFailureHandler());

config.setDataStorageConfiguration(getDataStorageConfiguration());

config.setGridLogger(new JavaLogger(java.util.logging.Logger.getLogger(LOG.getClass().getCanonicalName())));

Ignite ignite = Ignition.start(config);

ignite.cluster().active(true);

All Caches created have below 

Re: IgniteSpiOperationTimeoutException: Operation timed out [timeoutStrategy= ExponentialBackoffTimeoutStrategy

2020-10-30 Thread Andrei Aleksandrov

Hi,

Often, problems with establishing a communication connection can be 
solved with the following configuration:


1) You may have multiple network interfaces, and the wrong one could be 
used. This can be mitigated by tuning the communication SPI timeouts:



<bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
    ...
    <property name="connectTimeout" value="..."/>
    <property name="maxConnectTimeout" value="..."/>
    ...
</bean>


Otherwise, you can wait more than 10 minutes when trying to create a 
connection (due to the ExponentialBackoffTimeoutStrategy strategy).
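To see why the backoff can stretch that long, here is a self-contained sketch of the arithmetic; the initial step, multiplier, and per-step cap below are illustrative assumptions, not Ignite's exact defaults:

```java
// Illustrative exponential-backoff arithmetic: each failed connect attempt
// doubles the wait up to a cap, so the total time across attempts grows fast.
// Parameter values are assumptions for illustration only.
public class BackoffSketch {
    static long totalWaitMs(long initialMs, double multiplier, long maxStepMs, int attempts) {
        long total = 0;
        long step = initialMs;
        for (int i = 0; i < attempts; i++) {
            total += step;                                   // wait before retrying
            step = (long) Math.min(step * multiplier, maxStepMs); // grow, capped
        }
        return total;
    }

    public static void main(String[] args) {
        // 12 attempts starting at 500 ms, doubling, capped at 10 minutes per step.
        long total = totalWaitMs(500, 2.0, 600_000, 12);
        System.out.println("Total wait: " + total + " ms"); // 1623500 ms, about 27 minutes
    }
}
```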


2) Some cluster operations require communicating with clients via the 
communication SPI. If you have communication problems but the clients 
are still reachable via the discovery SPI, such operations may hang. To 
avoid this, set the following property:


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteSystemProperties.html#IGNITE_ENABLE_FORCIBLE_NODE_KILL
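The property is read as a JVM system property; one way to set it follows (the exact launch-script hook is an assumption, adjust to your setup):

```shell
# IGNITE_ENABLE_FORCIBLE_NODE_KILL is read as a JVM system property.
# Assumes your start script (e.g. ignite.sh) picks up JVM_OPTS.
JVM_OPTS="$JVM_OPTS -DIGNITE_ENABLE_FORCIBLE_NODE_KILL=true"
export JVM_OPTS
```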

If these recommendations do not help, then yes, as Ilya said, we need 
a reproducer from you.


BR,
Andrei

On 10/30/2020 2:20 PM, Ilya Kasnacheev wrote:

Hello!

Do you have a reproducer for this behaviour that I could run and see 
it failing?


Regards,
--
Ilya Kasnacheev


Tue, 27 Oct 2020 at 22:02, VeenaMithare wrote:


Hi Ilya,

The node communication issue is because one of the nodes is being
restarted, not due to network failure. The original issue is as below.

Our setup: Servers - 3 node cluster. Reader clients: wait for an
update on an entry of a cache (around 20 of them). Writer client: 1.

If one of the reader clients restarts while the writer is writing into
the entry of the cache, the server attempts to send the update to the
failed client's local listener. It keeps attempting to communicate
with the failed client (the client's continuous query local
listener?) till it times out as per
connTimeoutStrategy=ExponentialBackoffTimeoutStrategy. (Please find
the snippet of the exception below. The complete log is attached as an
attachment.) This delays the completion of the transaction that was
started by the writer client. Is there any way the writer client could
complete the transaction without getting impacted by the reader client
restarts? regards, Veena.

Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Ignite instances frequently failing - BUG: soft lockup - CPU#1 stuck

2020-10-30 Thread Andrei Aleksandrov

Hello,

Too little information has been provided on your part:

1) Could you provide the screenshot from the web console at this time?
2) Could you collect Ignite logs during this period?
3) What tool shows that the processors are frozen? Have you checked 
other tools?


BR,
Andrew

On 10/30/2020 3:07 PM, bbellrose wrote:

Ignite instances keep failing. Server indicates CPU stuck. However monitoring
shows very little CPU usage. This happens almost every day on different
nodes of the cluster.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Inserting date into ignite with spark jdbc

2020-10-30 Thread Andrei Aleksandrov

Hi,

Here's an example with correct syntax that should work fine:

DataFrameWriter<Row> df = resultDF.write()
    .format(IgniteDataFrameSettings.FORMAT_IGNITE())
    .option(IgniteDataFrameSettings.OPTION_CONFIG_FILE(), configPath)
    .option(IgniteDataFrameSettings.OPTION_TABLE(), "Person")
    .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "id, city_id")
    .option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PARAMETERS(), "template=partitioned,backups=1")
    .mode(SaveMode.Append);


Please let me know if something is wrong here.

BR,
Andrei

On 10/30/2020 2:20 AM, Humphrey wrote:

Hello guys, this question has been asked on Stack Overflow, but no
answer has been provided yet.

I'm facing the same issue (trying to insert data into Ignite using
Spark JDBC):
Exception in thread "main" java.sql.SQLException: No PRIMARY KEY defined for
CREATE TABLE
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:1004)

Code:
println("-- writing using jdbc --")
val prop = Properties()
prop["driver"] = "org.apache.ignite.IgniteJdbcThinDriver"

df.write().apply {
    mode(SaveMode.Overwrite)
    format("jdbc")
    option("url", "jdbc:ignite:thin://127.0.0.1")
    option("dbtable", "comments")
    option(IgniteDataFrameSettings.OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS(), "last_name")
}.save()

The last option doesn't seem to work/help.
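A likely cause is that Spark's generic "jdbc" format simply ignores Ignite-specific options such as OPTION_CREATE_TABLE_PRIMARY_KEY_FIELDS (they only apply to the "ignite" format). One hedged workaround sketch: pre-create the table with a primary key over the JDBC thin driver, then write with SaveMode.Append instead of Overwrite. The column set and table options here are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.Statement;

// Sketch of an assumed workaround, not verified against every Spark version:
// the CREATE TABLE issued by Overwrite mode via the generic "jdbc" format
// carries no PRIMARY KEY, so create the table yourself first, then append.
public class PreCreateCommentsTable {
    static final String DDL =
        "CREATE TABLE IF NOT EXISTS comments ("
        + " id LONG PRIMARY KEY,"          // illustrative columns
        + " last_name VARCHAR,"
        + " body VARCHAR"
        + ") WITH \"template=partitioned\"";

    static void createTable(Connection conn) throws Exception {
        try (Statement st = conn.createStatement()) {
            st.executeUpdate(DDL); // requires the Ignite JDBC thin driver on the classpath
        }
    }

    public static void main(String[] args) {
        // Connect with DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1")
        // and pass the connection to createTable() to apply the DDL.
        System.out.println(DDL);
    }
}
```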



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite instances frequently failing - BUG: soft lockup - CPU#1 stuck

2020-10-30 Thread bbellrose
Ignite instances keep failing. Server indicates CPU stuck. However monitoring
shows very little CPU usage. This happens almost every day on different
nodes of the cluster.

 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Ignite Cluster Issue on 2.7.6

2020-10-30 Thread Gurmehar Kalra
Hi,

I tried the changes you suggested, waited for the nodes, and then tried 
to start the cluster, but only one node joins the cluster; the other 
node does not participate.
Do I have to add all nodes to the BLT?
Regards,
Gurmehar Singh

From: Andrei Aleksandrov 
Sent: 29 October 2020 20:11
To: user@ignite.apache.org
Subject: Re: Ignite Cluster Issue on 2.7.6


Hi,

Do you use a cluster with persistence? After the first activation, all 
your data will be located on the first activated node.

In this case, you also should track your baseline.

https://www.gridgain.com/docs/latest/developers-guide/baseline-topology

Baseline topology is the subset of nodes where your cache data is located.

The recommendations are the following:

1) You should activate the cluster only when all server nodes have started.
2) If the topology changes, you must either restore the failed nodes or reset 
the baseline topology to trigger partition reassignment and rebalancing.
3) If a new node should contain cache data, then you should add this node 
to the baseline topology:

using java code:

https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCluster.html#setBaselineTopology-java.util.Collection-

using utility tool:

https://www.gridgain.com/docs/latest/administrators-guide/control-script#adding-nodes-to-baseline-topology

4) If a node from the baseline can't be started (e.g., because its data on 
disk was destroyed), it should be removed from the baseline:

https://www.gridgain.com/docs/latest/administrators-guide/control-script#removing-nodes-from-baseline-topology
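For reference, the control-script flavor of these operations looks roughly like this (the consistent IDs are placeholders for your own nodes, and --yes skips the confirmation prompt):

```shell
# Show the current baseline topology.
./control.sh --baseline

# Add a newly joined node to the baseline (placeholder consistent ID).
./control.sh --baseline add newNodeConsistentId --yes

# Remove a permanently lost node from the baseline.
./control.sh --baseline remove deadNodeConsistentId --yes
```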

If you are not using persistence, please provide additional information 
about what "data is being added to the cache but not available to any of 
the modules" means:

1) How you access data
2) What do you see in the logs

BR,
Andrei
On 10/29/2020 4:19 PM, Gurmehar Kalra wrote:
Hi,

I have two modules (Web and Engine) and want to share data between them, 
but when I run Web and Engine together, data is added to the cache but 
is not available to either module.
Below is my Ignite config, which is the same in both modules:

config.setActiveOnStart(true);
config.setAutoActivationEnabled(true);

config.setIgniteHome(propertyReader.getProperty("spring.ignite.storage.path"));
config.setFailureHandler(new StopNodeOrHaltFailureHandler());
config.setDataStorageConfiguration(getDataStorageConfiguration());
config.setGridLogger(new 
JavaLogger(java.util.logging.Logger.getLogger(LOG.getClass().getCanonicalName())));

Ignite ignite = Ignition.start(config);
ignite.cluster().active(true);


All caches created have the properties below:
cache.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC);
cache.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cache.setCacheMode(CacheMode.REPLICATED);
cache.setGroupName("EngineGroup");

Both modules are running on the IP list: 

Re: IgniteSpiOperationTimeoutException: Operation timed out [timeoutStrategy= ExponentialBackoffTimeoutStrategy

2020-10-30 Thread Ilya Kasnacheev
Hello!

Do you have a reproducer for this behaviour that I could run and see it
failing?

Regards,
-- 
Ilya Kasnacheev


Tue, 27 Oct 2020 at 22:02, VeenaMithare wrote:

> Hi Ilya, The node communication issue is because one of the nodes is being
> restarted - and not due to network failure. The original issue is as below:
> Our setup: Servers - 3 node cluster. Reader clients: wait for an update
> on an entry of a cache (around 20 of them). Writer client: 1. If one of
> the reader clients restarts while the writer is writing into the entry of
> the cache, the server attempts to send the update to the failed client's
> local listener. It keeps attempting to communicate with the failed client
> (the client's continuous query local listener?) till it times out as per
> connTimeoutStrategy=ExponentialBackoffTimeoutStrategy. (Please find the
> snippet of the exception below. The complete log is attached as an
> attachment.) This delays the completion of the transaction that was started
> by the writer client. Is there any way the writer client could complete the
> transaction without getting impacted by the reader client restarts?
> regards, Veena.
> --
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Client App Object Allocation Rate

2020-10-30 Thread Ilya Kasnacheev
Hello!

I guess that you have EVT_NODE_METRICS_UPDATED event enabled on client
nodes (but maybe not on server nodes)

It will indeed produce a lot of garbage so I recommend disabling the
recording of this event by calling
ignite.events().disableLocal(EVT_NODE_METRICS_UPDATED);

+ dev@

Why do we record EVT_NODE_METRICS_UPDATED by default? Sounds like a bad
idea yet we enable recording of all internal events in
GridEventStorageManager.

-- 
Ilya Kasnacheev


Mon, 26 Oct 2020 at 19:37, ssansoy wrote:

> Hi, here's an example (using YourKit rather than JFR).
>
> Apologies, I had to obfuscate some of the company-specific information.
> This shows a window of about 10 seconds of allocations:
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2797/MetricsUpdated.png>
>
>
> Looks like these come from GridDiscoveryManager - creating a new string
> every time. This happens several times per second it seems. Some of these
> mention other client nodes - so some other production app in our firm, that
> uses the cluster, has an impact on a different production app. Is there any
> way to turn this off? Each of our clients need to be isolated such that
> other client apps do not interfere in any way
>
> Also
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2797/TcpDiscoveryClientMetricsUpdateMessage.png>
>
>
> These update messages seem to come in even though metricsEnabled is turned
> off on the client (not specified).
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: removing ControlCenterAgent

2020-10-30 Thread Bastien Durel
On Thursday, 29 October 2020 at 12:07 +0000, Mekhanikov Denis wrote:
> Hi!
> 
> The issue is that Control Center Agent puts its configuration to the
> meta-storage. 
> Ignite has an issue with processing data in meta-storage with class
> that is not present on all nodes: 
> https://issues.apache.org/jira/browse/IGNITE-13642
> Effectively it means that you can't remove control-center-agent from
> a cluster that worked with it previously.
> 
> You have a few options how to solve it:
> - Add control-center-agent to class path of all nodes and disable it
> using management.sh --off. Classes and configuration will be there,
> but it won't do anything. You'll be able to remove the library after
> an upgrade to the version that doesn't have this bug. Hopefully, it
> will be fixed in Ignite 2.9.1
> 
> - Remove the metastorage directory from the persistence directory on
> all nodes. It will lead to removal of Control Center Agent
> configuration along with Baseline Topology history.
> You will need to do that together with removal of the control-center-
> agent library.
> NOTE that removal of metastorage is a dangerous operation and can
> lead to data loss. I recommend using the first option if it works for
> you.
> Make a copy of persistence directories before removing anything.
> After the removal and a restart the baseline topology will be reset.
> Make sure that first activation will lead to the same BLT like before
> the restart to avoid data loss.
> 
Hello,

Thanks for the info. I've removed the db directory on all nodes, as most
of my data is in 3rd-party storage, and I can live without the event
logs that use Ignite storage, as we're not in production.

We'll keep this in mind to avoid future problems.

Regards,

-- 
Bastien Durel
DATA
Enterprise data integration,
decision-support information systems.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
45 avenue Carnot, 94230 CACHAN France
www.data.fr




Re: High availability of local listeners for ContinuousQuery or Events

2020-10-30 Thread 38797715

Hi Igor,

We hope that if the local listener node fails, there can be a mechanism 
similar to failover. Otherwise, if the local listener node fails and 
restarts, the events that occurred during the failure will be lost.
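There is no built-in fail-over for the local-listener side of a continuous query, so one common mitigation is to re-register the query on restart with an initial query that replays the cache's current state: individual events fired during the outage are not replayed, but the listener catches up to the latest values. A hedged Java sketch, where the cache name and key/value types are examples:

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

// Sketch: register a continuous query whose initial query replays all
// current entries, so a restarted listener node catches up to the latest
// state before receiving live updates. Cache name and types are assumptions.
public class ResilientListener {
    public static void listen(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("events");

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
        qry.setLocalListener(evts ->
            evts.forEach(e -> System.out.println("update: " + e.getKey())));
        // Replays all current entries through the cursor below on (re)start.
        qry.setInitialQuery(new ScanQuery<>());

        QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry);
        for (Cache.Entry<Integer, String> e : cur)       // catch-up pass
            System.out.println("existing: " + e.getKey());
        // Keep `cur` open for as long as updates should be received.
    }
}
```

Call this in the listener application's startup path, so every restart performs the catch-up pass before resuming live updates.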


On 2020/10/30 12:04 AM, Igor Belyakov wrote:

Hi,

If the node that registered a continuous query fails, the continuous 
query will be undeployed from the cluster. The cluster state won't 
change.

It's not good practice to write business code in a remote filter. 
Could you please share more details about your use case?


Igor

On Thu, Oct 29, 2020 at 4:46 PM 38797715 <38797...@qq.com> wrote:


Hi community,

For local listeners registered for ContinuousQuery and Events, is there
a corresponding high-availability mechanism design? That is, if the node
registering the local listener fails, what state will the cluster be in?

If we do not register a local listener, but write the business code in
the remote filter and return false, is this a good practice?