SQL EXPLAIN ANALYZE

2021-06-10 Thread Devakumar J
Hi,

I see the ANALYZE syntax doesn't show the scan count.

Is this not supported in Ignite?

Thanks & Regards,
Devakumar J



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Regarding ignite query result set limit

2021-06-10 Thread Devakumar J
Hello Vladimir Pligin,

Thanks for the response.

Is there any other document that explains in depth how this lazy setting
works internally?

Say a query returns 1 million records but there is not enough memory on the
server side: how exactly will a service that connects through the thin
client get the whole data? Is some kind of streaming enabled if we use
lazy=true?

Thanks & Regards,
Devakumar J
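For reference, the lazy flag asked about above is set per query. Below is a minimal thin-client sketch; the server address, table, and column names are hypothetical. The general behavior with lazy=true is that the server evaluates the result page by page instead of materializing the whole result set in heap, and the client cursor pulls pages on demand:

```java
ClientConfiguration cfg = new ClientConfiguration()
    .setAddresses("localhost:10800"); // hypothetical server address

try (IgniteClient client = Ignition.startClient(cfg)) {
    SqlFieldsQuery qry = new SqlFieldsQuery("SELECT id, name FROM PERSON")
        .setLazy(true)      // do not materialize the full result on the server
        .setPageSize(1024); // rows fetched per client round trip

    try (FieldsQueryCursor<List<?>> cur = client.query(qry)) {
        for (List<?> row : cur) {
            // process one row at a time; server-side memory stays bounded
        }
    }
}
```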





Split brain in 2.9.0?

2021-06-10 Thread Devin Bost
We encountered a situation after a node unexpectedly went down and came
back up.
After it came back, none of our transactions were going through (due to
rollbacks), and we started getting a lot of exceptions in the logs. (I've
added the exceptions at the bottom of this message.)
We were getting "Failed to execute the cache operation (all partition
owners have left the grid, partition data has been lost)", so we tried to
reset the partitions (since these are persistent caches), and the commands
succeeded, but we kept seeing errors.

We checked the cluster state, and it looks like we have two nodes that came
up with different IDs.

Cluster state: active
Current topology version: 1170
Baseline auto adjustment disabled: softTimeout=30
Current topology version: 1170 (Coordinator:
ConsistentId=1455b414-5389-454a-9609-8dd1d15a2430, Order=1)
Baseline nodes:
ConsistentId=1a0aa611-58b7-479a-b1e6-735e31f87ed9, State=ONLINE,
Order=1169
ConsistentId=92bc8407-30f1-433d-9c32-5eeb759c73be, State=OFFLINE
ConsistentId=b5875ab9-7923-46c9-b3f3-1550455a24e5, State=OFFLINE

Number of baseline nodes: 3
Other nodes:
ConsistentId=1455b414-5389-454a-9609-8dd1d15a2430, Order=1
ConsistentId=5e8f3b03-aa20-45aa-892a-37e988e3741f, Order=2


Could this be a split-brain scenario?

Here's the more complete logs:

javax.cache.CacheException: class
org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
Failed to execute the cache operation (all partition owners have left the
grid, partition data has been lost) [cacheName=propensity-customer,
partition=430, key=com.company.PropensityKey [idHash=362342248,
hash=42458921, customerId=142045188, variant=MODEL_A]]
at
org.apache.ignite.internal.processors.cache.GridCacheUtils.convertToCacheException(GridCacheUtils.java:1270)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.cacheException(IgniteCacheProxyImpl.java:2083)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.get(IgniteCacheProxyImpl.java:1110)
at
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.get(GatewayProtectedCacheProxy.java:676)
at
org.apache.ignite.internal.processors.platform.client.cache.ClientCacheGetRequest.process(ClientCacheGetRequest.java:41)
at
org.apache.ignite.internal.processors.platform.client.ClientRequestHandler.handle(ClientRequestHandler.java:99)
at
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:202)
at
org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:56)
at
org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)
at
org.apache.ignite.internal.util.nio.GridNioFilterAdapter.proceedMessageReceived(GridNioFilterAdapter.java:109)
at
org.apache.ignite.internal.util.nio.GridNioAsyncNotifyFilter$3.body(GridNioAsyncNotifyFilter.java:97)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at
org.apache.ignite.internal.util.worker.GridWorkerPool$1.run(GridWorkerPool.java:70)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: class
org.apache.ignite.internal.processors.cache.CacheInvalidStateException:
Failed to execute the cache operation (all partition owners have left the
grid, partition data has been lost) [cacheName=propensity-customer,
partition=430, key=com.company.PropensityKey [idHash=362342248,
hash=42458921, customerId=142045188, variant=MODEL_A]]
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.validateKey(GridDhtTopologyFutureAdapter.java:209)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTopologyFutureAdapter.validateCache(GridDhtTopologyFutureAdapter.java:128)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.validate(GridPartitionedSingleGetFuture.java:859)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.map(GridPartitionedSingleGetFuture.java:277)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture.java:244)
at
org.apache.ignite.internal.processors.cache.distributed.dht.colocated.GridDhtColocatedCache.getAsync(GridDhtColocatedCache.java:297)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(GridCacheAdapter.java:4844)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.repairableGet(GridCacheAdapter.java:4810)
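For readers hitting the same state: the baseline listing above can be inspected and adjusted with control.sh. This is a sketch only, using the consistent IDs from the output above (the two "Other nodes" are the rejoined ones); whether removing or re-adding nodes is safe depends on whether their persisted data survived, so treat it as a starting point, not a prescription:

```shell
# show the current baseline and topology
./control.sh --baseline

# remove the stale OFFLINE entries, then add the rejoined nodes' new consistent IDs
./control.sh --baseline remove 92bc8407-30f1-433d-9c32-5eeb759c73be,b5875ab9-7923-46c9-b3f3-1550455a24e5
./control.sh --baseline add 1455b414-5389-454a-9609-8dd1d15a2430,5e8f3b03-aa20-45aa-892a-37e988e3741f
```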
 

Re: unsubscibe

2021-06-10 Thread Ilya Kasnacheev
Hello!

You need to send a message to user-unsubscr...@ignite.apache.org with the
subject "Unsubscribe".

Regards,
-- 
Ilya Kasnacheev


Thu, 10 Jun 2021 at 17:06, pinak sawhney :

>
>


unsubscibe

2021-06-10 Thread pinak sawhney



Re: Apache Ignite and Kafka Connector

2021-06-10 Thread Stephen Darlington
Create your table in Ignite, using whatever mechanism you’re comfortable with 
(XML, code, SQL).

Configure the Kafka adapter to stream from the desired topic into your table. 
You may need to write a little code to map from one schema to the other.

There are commercial Kafka Connectors for Ignite that do some of this
for you.
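For completeness, the open-source ignite-kafka module also ships a Kafka Connect sink. A hypothetical worker configuration might look like the following; the property names and the extractor class are assumptions that should be checked against the version in use:

```properties
name=ignite-sink
connector.class=org.apache.ignite.stream.kafka.connect.IgniteSinkConnector
topics=person-topic
cacheName=PersonCache
igniteCfg=/opt/ignite/config/ignite-client.xml
# optional mapping hook: derive the cache key/value from each Kafka record
singleTupleExtractorCls=com.example.PersonTupleExtractor
```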

> On 10 Jun 2021, at 14:38, shubhamshirur  wrote:
> 
> Thanks, I want the same. So I have to do it with Kafka Streams, i.e. Java
> code, not with Kafka connectors. Is that right?
> 
> 
> 




Re: Apache Ignite and Kafka Connector

2021-06-10 Thread shubhamshirur
Thanks, I want the same. So I have to do it with Kafka Streams, i.e. Java
code, not with Kafka connectors. Is that right?





Re: Heap dump is not getting generated even after specifying HeapDumpOnOutOfMemoryError

2021-06-10 Thread andrei

Hi,

I guess you should remove -J-XX:+ExitOnOutOfMemoryError.

BR,
Andrei
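Concretely, the suggestion is to launch with the same command minus the exit flag, so the JVM has a chance to write the dump before dying. A sketch (also verify that /data1/ignite-log/heapdump exists and is writable by the Ignite user):

```shell
# same command as quoted below, with -J-XX:+ExitOnOutOfMemoryError removed;
# the GC/logging flags are omitted here for brevity and stay unchanged
/usr/apache-ignite-2.8.1-bin/bin/ignite.sh -J-Xms2G -J-Xmx6G \
  -J-XX:+HeapDumpOnOutOfMemoryError \
  -J-XX:HeapDumpPath=/data1/ignite-log/heapdump \
  /usr/apache-ignite-2.8.1-bin/config/ignitedev-config.xml
```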

On 6/8/2021 3:42 PM, Naveen wrote:

Hi,

We are using Ignite 2.8.1.

We start the Ignite node with the command below, and whenever the node crashes
with an OOM it does not generate a heap dump, in spite of having the settings
below enabled. What could be the reason for this?

/usr/apache-ignite-2.8.1-bin/bin/ignite.sh -J-Xms2G -J-Xmx6G
-J-XX:+AlwaysPreTouch -J-XX:+UseG1GC -J-XX:+ScavengeBeforeFullGC
-J-XX:+DisableExplicitGC -J-XX:+HeapDumpOnOutOfMemoryError
-J-XX:HeapDumpPath=/data1/ignite-log/heapdump -J-XX:+ExitOnOutOfMemoryError
-J-XX:+PrintGC -J-XX:+PrintGCDetails
-J-Xlog:gc*,gc+ref=debug,gc+heap=debug,gc+age=trace:file=/data1/ignite-log/gc-%p-%t.log:tags,time,level:filecount=10,filesize=20m
-J-XX:+PrintFlagsFinal -J-XX:+UnlockDiagnosticVMOptions
-J-Xlog:safepoint:file=/data1/ignite-log/safepoint.log
-J-XX:+PrintSafepointStatistics -J-XX:PrintSafepointStatisticsCount=1
-J-Djava.net.preferIPv4Stack=true
-J-DIGNITE_LONG_OPERATIONS_DUMP_TIMEOUT=60
-J-DIGNITE_THREAD_DUMP_ON_EXCHANGE_TIMEOUT=true
/usr/apache-ignite-2.8.1-bin/config/ignitedev-config.xml

When the allocated heap is exhausted, the node is not killed; it just hangs
and becomes unresponsive, and this behavior of the Ignite node is causing the
issues. How do we kill the node gracefully when the whole heap memory is exhausted?

Thanks







Re: Apache Ignite and Kafka Connector

2021-06-10 Thread Stephen Darlington
This is a good overview of the storage options: multi-tier-storage.html 


Basically, there’s no cache and a separate database; in Ignite, they’re the same 
thing. A table can (optionally) have SQL support and persistence. So from a 
Kafka point of view, you just stream your data into a table. Whether it’s 
persistent or has SQL support depends on how you configured your table.
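As a sketch of "whether it's persistent depends on how you configured your table": a table can be placed in a persistent data region at creation time. The names here are hypothetical, and the region itself must be declared with persistenceEnabled=true in the node configuration:

```sql
CREATE TABLE IF NOT EXISTS Person (
  id INT PRIMARY KEY,
  name VARCHAR
) WITH "CACHE_NAME=PersonCache,DATA_REGION=persistent_region";
```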

> On 10 Jun 2021, at 14:16, shubhamshirur  wrote:
> 
> Yes, maybe. I do not know much about Apache Ignite, so by Ignite Database I
> mean SQLLine. Please tell me what I should research to make it more
> relatable?
> 
> 
> 




Re: Apache Ignite and Kafka Connector

2021-06-10 Thread shubhamshirur
Yes, maybe. I do not know much about Apache Ignite, so by Ignite Database I
mean SQLLine. Please tell me what I should research to make it more
relatable?





Re: Cannot disable system view JMX exporter via configuration

2021-06-10 Thread andrei

Hi,

As an option, you can just add all SQL GridGain system views to the blacklist:

blacklistObjectNames: ["group=views,*"]

It should help too.

BR,
Andrei

On 6/2/2021 6:50 PM, tsipporah22 wrote:

An update on the workaround: I'm now using the whitelist below in my
prometheus.yaml, as the previous workaround filtered out all Spark driver and
executor metrics:

whitelistObjectNames:
   - "metrics:name=*"





Re: Nodes taking 5 minutes to reattempt joining grid when failed to join

2021-06-10 Thread andrei

Hi,

Do you use SSL?

If yes, then Ignite will try to create an SSL connection until
ExponentialBackoffTimeoutStrategy.totalTimeout is exceeded.


During these attempts, other threads can be blocked, because
communication will not be able to send messages over SSL.


I have seen the same behavior before when TLS 1.3 was used and different
versions of Java were on each host.


The problem was related to the following JDK issue:

https://bugs.openjdk.java.net/browse/JDK-8208526

The recommendations are:

1) Set the correct TLS version for every node via
-Djdk.tls.client.protocols=TLSv1.2 and -Dhttps.protocols=TLSv1.2.
2) Check the Java versions used. They should be the same for every
node (servers and clients)
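Recommendation 1) can be sketched as follows (the -J prefix is how ignite.sh forwards JVM arguments; the paths are placeholders):

```shell
bin/ignite.sh \
  -J-Djdk.tls.client.protocols=TLSv1.2 \
  -J-Dhttps.protocols=TLSv1.2 \
  config/ignite-config.xml
```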


Please note that if you do not set the TLS version directly via
properties, then some default version will be used. For some Java
versions, it may be TLS 1.3.


In any case, your situation is very similar to the one described above. It
is possible that some client was running with a newer version of Java,
while your server node was started as expected.


BR,
Andrei

On 5/21/2021 3:47 PM, tarunk wrote:

Hi Team,

We are seeing quite a long delay before a node reattempts to join the grid
when it failed to join on the first attempt.
From the logs it appears it retries after 5 minutes; in our case it failed
3 times and then joined after 15 minutes. We have set the networkTimeout
to 1 but are not sure if anything else was changed from the defaults. Can you
please suggest how we can reduce this retry interval?

Below is the error we see in stacktrace from
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi

Node has not been connected to topology and will repeat join process. Check
remote nodes logs for possible error messages. Note that large topology may
require significant time to start. Increase 'TcpDiscoverySpi.networkTimeout'
configuration property if getting this message on the starting nodes
[networkTimeout=1]

and the screenshot below suggests 3 attempts at ~5 minute intervals, with the
node finally joining the grid after ~15 minutes.


Thanks
Tarun





Re: Failing client node due to not receiving metrics updates-IGNITE-10354

2021-06-10 Thread Akash Shinde
Hi Zhenya, Thanks for the quick response.
I am checking with the network team as well for any network glitch
occurrence.

Thanks,
Akash

On Thu, Jun 10, 2021 at 1:06 PM Zhenya Stanilovsky 
wrote:

> Hello Akash!
> I found that the fix you mentioned is OK.
> Why do you think that your network between the server and client is OK?
> Can you add some network monitoring here?
> Thanks.
>
>
>
>
>
>
> Hi, There is a cluster of four server nodes and six client nodes in
> production. I was using Ignite version 2.6.0 and all six client nodes were
> failing with the error below
>
> WARN o.a.i.s.d.tcp.TcpDiscoverySpi - Failing client node due to not
> receiving metrics updates from client node within
> 'IgniteConfiguration.clientFailureDetectionTimeout' (consider increasing
> configuration property) [timeout=9, node=TcpDiscoveryNode
> [id=12f9809d-95be-47e3-81fe-d7ffcaab064c,
> consistentId=12f9809d-95be-47e3-81fe-d7ffcaab064c, addrs=ArrayList
> [0:0:0:0:0:0:0:1%lo, 127.0.0.1, ], sockAddrs=HashSet
> [/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /:0],
> discPort=0, order=155, intOrder=82, lastExchangeTime=1623154238808,
> loc=false, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=true]]
>
> Then I have upgraded the Ignite version to 2.10.0 to get the fix for the
> known issue IGNITE-10354. But I am still facing the issue even after
> upgrading to the 2.10.0 Ignite version.
>
> Could someone help here.
>
> Thanks,
> Akash
>
>
>
>
>
>
>
>


Re: Apache Ignite and Kafka Connector

2021-06-10 Thread Stephen Darlington
What do you mean by “Ignite Database” and “Ignite cache”? Are you trying to 
write from Kafka into a disk-based store, rather than a memory store?

> On 10 Jun 2021, at 13:28, shubhamshirur  wrote:
> 
> I want to achieve a scenario where I send Kafka topic data to the Ignite
> database using connectors, and not to the Ignite cache directly. Can I do
> that? What Ignite configuration changes and connector configuration changes
> do I have to make?
> Thank you.
> 
> 
> 




Re: CacheEntryProcessor ClassNotFoundException after 2.7.6 -> 2.10.0 Upgrade

2021-06-10 Thread andrei

Hi,

The question was answered here (JIRA ticket was created) - 
http://apache-ignite-users.70518.x6.nabble.com/Exception-on-CacheEntryProcessor-invoke-2-10-0-possible-bug-tt36042.html#a36090


BR,
Andrei

On 5/18/2021 6:16 PM, ihalilaltun wrote:

What we expect here is that the related CacheEntryProcessors, or any other
class, should be redeployed in SHARED mode and perform the task they are
supposed to.



-
İbrahim Halil Altun
Senior Software Engineer @ Segmentify


Apache Ignite and Kafka Connector

2021-06-10 Thread shubhamshirur
I want to achieve a scenario where I send Kafka topic data to the Ignite
database using connectors, and not to the Ignite cache directly. Can I do
that? What Ignite configuration changes and connector configuration changes
do I have to make?
Thank you.





Re: Namespace and DsicoverSpi Properties for Ignite running on Kubernetes

2021-06-10 Thread Ilya Kasnacheev
Hello!

I don't think that you need to have exactly the same IP finder, however,
due to how K8S works your nodes outside K8S may not be able to connect to
nodes within K8S (including thick clients).

Regards,
-- 
Ilya Kasnacheev


Wed, 26 May 2021 at 23:58, PunxsutawneyPhil3 :

> I have two questions regarding how to set up Ignite in Kubernetes.
>
> Do all nodes need to be in the same namespace? E.G. If I have a thick
> client
> and a server node, do both need to be in the same name space to form a
> cluster?
> From my research I think the answer is yes they need to be in the same
> namespace but I have not found any definitive documentation
>
> Do both the client and server nodes need to be running the
> TcpDiscoveryKubernetesIpFinder or can nodes use a mix of the
> TcpDiscoveryKubernetesIpFinder and the static IPfinder?
> From my research I am fairly confident that all nodes must be running with
> the TcpDiscoveryKubernetesIpFinder but again I have not found any
> definitive
> documentation.
>
>
>
>
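For reference, the TcpDiscoveryKubernetesIpFinder discussed above is configured roughly as below in 2.x-era releases. This is an illustrative sketch: the namespace and service name are placeholders, and newer versions group these settings into a KubernetesConnectionConfiguration:

```xml
<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
        <!-- every node (servers and thick clients) resolves its peers through this service -->
        <property name="namespace" value="ignite"/>
        <property name="serviceName" value="ignite-service"/>
      </bean>
    </property>
  </bean>
</property>
```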


Re: Right Handling of UnRegisteredBinaryObjectTypeException

2021-06-10 Thread andrei

Hi,

Answer was provided here - 
http://apache-ignite-users.70518.x6.nabble.com/2-8-1-CacheEntryProcessor-for-insert-update-within-Transaction-supported-tt35918.html


BR,
Andrei

On 4/20/2021 3:50 PM, VeenaMithare wrote:

Hello all,

Please guide me on the right way to use CacheEntryProcessor within a
transaction to update cache records.

As shown in the example above, insert of a new record always throws
UnregisteredBinaryTypeException, unless the metadata is registered on the
client side before starting the transaction.

regards,
Veena.





RE: 2.8.1 : CacheEntryProcessor for insert/update within Transaction supported ?

2021-06-10 Thread antkr
I've created a ticket according to the observations:
https://issues.apache.org/jira/browse/IGNITE-14886

However, a workaround would be to preregister all the types on system start;
this might possibly be done using lifecycle beans, or right after the start
before allowing the applications to proceed with the load. The root cause is
the CREATE TABLE statement that uses the same type, which creates this type
with a different schema; that requires updates to the existing binary schemas
when the binary object builder is invoked within an EntryProcessor.

Best regards,
Anton

From: VeenaMithare
Sent: Tuesday, April 27, 2021 3:46 PM
To: user@ignite.apache.org
Subject: 2.8.1 : CacheEntryProcessor for insert/update within Transaction supported ?

Hi Team,

Is the functionality to use CacheEntryProcessor for insert/update within
Transaction supported ?

As mentioned in this post, insert of a record using CacheEntryProcessor
always fails with UnregisteredBinaryTypeException if the insert is done
within a transaction:

http://apache-ignite-users.70518.x6.nabble.com/Right-Handling-of-UnRegisteredBinaryObjectTypeException-tp35412p35681.html

regards,
Veena.
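The preregistration workaround mentioned above might be sketched like this. The type names are hypothetical, and it assumes that building a binary object for each type at startup is sufficient to register its metadata cluster-wide before any EntryProcessor runs:

```java
// Run once at startup (e.g. from a LifecycleBean), before the application load begins.
IgniteBinary binary = ignite.binary();
for (String typeName : Arrays.asList("com.example.MyKey", "com.example.MyValue")) {
    // building an object for the type forces its binary metadata to be registered
    binary.builder(typeName).build();
}
```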


Re: Restart ignite on segmentation

2021-06-10 Thread Ilya Kasnacheev
Hello!

It should be OK.

Ignite tests start, stop and segment thousands of nodes in a single JVM.

Regards,
-- 
Ilya Kasnacheev


Thu, 10 Jun 2021 at 14:06, jenny winsor :

> Is it okay to start ignite again on a segmentation without restarting the
> JVM? StopNodeFailureHandler will just stop the ignite instance running
> locally. I am running in embedded mode so do not want to crash the server.
> I've read different opinions on this - is there something I should be aware
> of?
>
>


Restart ignite on segmentation

2021-06-10 Thread jenny winsor
Is it okay to start ignite again on a segmentation without restarting the JVM? 
StopNodeFailureHandler will just stop the ignite instance running locally. I am 
running in embedded mode so do not want to crash the server. I've read 
different opinions on this - is there something I should be aware of?



Re: Ignite crashed with CorruptedTreeException

2021-06-10 Thread Maksim Timonin
Hi, Marcus! I've found a bug. It was already fixed within this ticket
https://issues.apache.org/jira/browse/IGNITE-14451. So it will be
available in Ignite 2.11.

The reason is that you declare the fields in your table in the order (viewId,
status) while the PK has the order (status, viewId). So change the order in
the table and it will be fine. This bug affects only primary key indexes, so
it's safe to declare fields in a different order in secondary indexes.
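In other words, the declared field order should match the composite PK order, here (status, viewId). A hypothetical QueryEntity sketch (the type and field names follow the reproducer below; everything else is an assumption):

```java
// Declare fields in the same order as the primary key: (status, viewId).
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("status", "java.lang.String"); // PK part 1
fields.put("viewId", "java.util.UUID");   // PK part 2
fields.put("batchId", "java.lang.Integer");

QueryEntity entity = new QueryEntity("LimitViewKey", "LimitView")
    .setFields(fields)
    .setKeyFields(new LinkedHashSet<>(Arrays.asList("status", "viewId")));
```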

On Thu, Jun 10, 2021 at 10:58 AM Maksim Timonin 
wrote:

> Hi, Marcus!
>
> Thank you for the reproducer; I've succeeded in reproducing it. I've created
> a ticket for that: https://issues.apache.org/jira/browse/IGNITE-14869
>
> I will get back to you with a workaround after finding the reason.
>
> On Thu, Jun 10, 2021 at 3:18 AM Lo, Marcus  wrote:
>
>> Hi Ivan, Maksim,
>>
>> Here is the reproducer:
>>
>> import org.apache.ignite.Ignition;
>> import org.apache.ignite.binary.BinaryObject;
>> import org.apache.ignite.binary.BinaryObjectBuilder;
>> import org.apache.ignite.cache.QueryEntity;
>> import org.apache.ignite.client.ClientCache;
>> import org.apache.ignite.client.IgniteClient;
>> import org.apache.ignite.configuration.BinaryConfiguration;
>> import org.apache.ignite.configuration.ClientConfiguration;
>> import org.junit.jupiter.api.Test;
>>
>> import java.sql.Timestamp;
>> import java.time.Instant;
>> import java.util.*;
>> import java.util.stream.IntStream;
>>
>> import static java.util.stream.Collectors.toList;
>>
>> public class Reproducer {
>>
>> @Test
>> public void reproduce() throws InterruptedException {
>> ClientConfiguration config = constructIgniteThinClientConfig();
>> IgniteClient ignite = Ignition.startClient(config);
>>
>> List<UUID> uuids = IntStream.range(0, 200).mapToObj((i) ->
>> UUID.randomUUID()).collect(toList());
>>
>> while (true) {
>> upsertLimitViewData(ignite, uuids);
>> Thread.sleep(1000);
>> }
>> }
>>
>> private void upsertLimitViewData(IgniteClient ignite, List<UUID>
>> uuids) {
>> System.out.println("[" + Instant.now() + "] upserting data... " +
>> Thread.currentThread().getName());
>>
>> ClientCache<BinaryObject, BinaryObject> cache =
>> ignite.cache("LimitViewStatusCache").withKeepBinary();
>> QueryEntity queryEntity =
>> cache.getConfiguration().getQueryEntities()[0];
>> BinaryObjectBuilder keyBuilder =
>> ignite.binary().builder(queryEntity.getKeyType());
>> BinaryObjectBuilder valueBuilder =
>> ignite.binary().builder(queryEntity.getValueType());
>> HashMap<BinaryObject, BinaryObject> valueMap = new HashMap<>();
>>
>> for (int i = 0; i < 200; i++) {
>> BinaryObject key = keyBuilder
>> .setField("viewId", uuids.get(i))
>> .setField("status", "moo")
>> .build();
>>
>> BinaryObject value = valueBuilder
>> .setField("batchId", new Random().nextInt())
>> .setField("instance", Integer.toString(new
>> Random().nextInt()))
>> .setField("nodes", Integer.toString(new
>> Random().nextInt()))
>> .setField("eqtgContext", Integer.toString(new
>> Random().nextInt()))
>> .setField("lastUpdateTime",
>> Timestamp.from(Instant.now()))
>> .build();
>>
>> valueMap.put(key, value);
>> }
>>
>> cache.putAll(valueMap);
>> }
>>
>> private ClientConfiguration constructIgniteThinClientConfig() {
>> return
>> new ClientConfiguration()
>> .setAddresses("xxx:10800")
>> .setPartitionAwarenessEnabled(false)
>> .setBinaryConfiguration(new
>> BinaryConfiguration().setCompactFooter(false))
>> .setUserName("xxx")
>> .setUserPassword("xxx");
>> }
>> }
>>
>>
>> Regards,
>> Marcus
>>
>> -Original Message-
>> From: [External] Maksim Timonin 
>> Sent: Thursday, June 10, 2021 12:31 AM
>> To: user@ignite.apache.org
>> Subject: Re: Ignite crashed with CorruptedTreeException
>>
>> Hi Marcus!
>>
>> Could you please provide a complete code that inserts data (either it is
>> SQL, or cache put, which types do you use, etc.). I've tried to reproduce
>> your case but failed.
>>
>> Thanks a lot!
>>
>>
>>
>>
>


Re: Exception on CacheEntryProcessor invoke (2.10.0)

2021-06-10 Thread Ilya Kasnacheev
Hello!

Unfortunately, I can't see the stack trace that you are referring to, but
since you did not see the issue before, it may be
https://issues.apache.org/jira/browse/IGNITE-14856

There's a workaround, since the bug will only manifest when the cache is
defined in the client nodes' configuration but not in the server nodes'.

Regards,
-- 
Ilya Kasnacheev


Tue, 25 May 2021 at 16:04, ihalilaltun :

> Hi,
>
> Here is the debug log: ignite.zip
>
> In the meantime, I'll try to simplify the use case as you suggested.
>
>
>
> -
> İbrahim Halil Altun
> Senior Software Engineer @ Segmentify
>


Re: Bug in GridCacheWriteBehindStore

2021-06-10 Thread Ilya Kasnacheev
Hello!

I guess so. I think you should file a ticket against Ignite JIRA.

Regards,
-- 
Ilya Kasnacheev


Wed, 9 Jun 2021 at 20:26, gigabot :

> There's a bug in GridCacheWriteBehindStore in the flusher method.
>
>
> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/store/GridCacheWriteBehindStore.java#L674
>
> The logic there states that if the flush thread count is not a power of 2,
> then it performs some math that is not guaranteed to return a positive
> number. For example, if you pass this string as a key, it returns a
> negative number: accb2e8ea33e4a89b4189463cacc3c4e
>
> and then throws an array out of bounds exception when looking up the
> thread.
>
> I'm surprised this bug has been there so long, I guess people are not
> setting their thread counts to non-powers-of-2.
>
>
>
>
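The arithmetic can be demonstrated without Ignite at all. This is a self-contained illustration of the reported behavior, not the actual Ignite code: for a non-power-of-2 thread count, Java's `%` operator keeps the sign of the dividend, so a negative hashCode yields a negative index, while `Math.floorMod` would not:

```java
public class FlusherIndexDemo {
    public static void main(String[] args) {
        int threadCnt = 3; // a non-power-of-2 flusher count
        int h = Integer.MIN_VALUE; // worst-case negative hashCode

        // '%' keeps the dividend's sign: prints -2, an invalid array index
        System.out.println(h % threadCnt);

        // floorMod is always in [0, threadCnt) and would avoid the
        // ArrayIndexOutOfBoundsException described above: prints 1
        System.out.println(Math.floorMod(h, threadCnt));
    }
}
```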


Re: Ignite crashed with CorruptedTreeException

2021-06-10 Thread Maksim Timonin
Hi, Marcus!

Thank you for the reproducer; I've succeeded in reproducing it. I've created
a ticket for that: https://issues.apache.org/jira/browse/IGNITE-14869

I will get back to you with a workaround after finding the reason.

On Thu, Jun 10, 2021 at 3:18 AM Lo, Marcus  wrote:

> Hi Ivan, Maksim,
>
> Here is the reproducer:
>
> import org.apache.ignite.Ignition;
> import org.apache.ignite.binary.BinaryObject;
> import org.apache.ignite.binary.BinaryObjectBuilder;
> import org.apache.ignite.cache.QueryEntity;
> import org.apache.ignite.client.ClientCache;
> import org.apache.ignite.client.IgniteClient;
> import org.apache.ignite.configuration.BinaryConfiguration;
> import org.apache.ignite.configuration.ClientConfiguration;
> import org.junit.jupiter.api.Test;
>
> import java.sql.Timestamp;
> import java.time.Instant;
> import java.util.*;
> import java.util.stream.IntStream;
>
> import static java.util.stream.Collectors.toList;
>
> public class Reproducer {
>
> @Test
> public void reproduce() throws InterruptedException {
> ClientConfiguration config = constructIgniteThinClientConfig();
> IgniteClient ignite = Ignition.startClient(config);
>
> List<UUID> uuids = IntStream.range(0, 200).mapToObj((i) ->
> UUID.randomUUID()).collect(toList());
>
> while (true) {
> upsertLimitViewData(ignite, uuids);
> Thread.sleep(1000);
> }
> }
>
> private void upsertLimitViewData(IgniteClient ignite, List<UUID>
> uuids) {
> System.out.println("[" + Instant.now() + "] upserting data... " +
> Thread.currentThread().getName());
>
> ClientCache<BinaryObject, BinaryObject> cache =
> ignite.cache("LimitViewStatusCache").withKeepBinary();
> QueryEntity queryEntity =
> cache.getConfiguration().getQueryEntities()[0];
> BinaryObjectBuilder keyBuilder =
> ignite.binary().builder(queryEntity.getKeyType());
> BinaryObjectBuilder valueBuilder =
> ignite.binary().builder(queryEntity.getValueType());
> HashMap<BinaryObject, BinaryObject> valueMap = new HashMap<>();
>
> for (int i = 0; i < 200; i++) {
> BinaryObject key = keyBuilder
> .setField("viewId", uuids.get(i))
> .setField("status", "moo")
> .build();
>
> BinaryObject value = valueBuilder
> .setField("batchId", new Random().nextInt())
> .setField("instance", Integer.toString(new
> Random().nextInt()))
> .setField("nodes", Integer.toString(new
> Random().nextInt()))
> .setField("eqtgContext", Integer.toString(new
> Random().nextInt()))
> .setField("lastUpdateTime",
> Timestamp.from(Instant.now()))
> .build();
>
> valueMap.put(key, value);
> }
>
> cache.putAll(valueMap);
> }
>
> private ClientConfiguration constructIgniteThinClientConfig() {
> return
> new ClientConfiguration()
> .setAddresses("xxx:10800")
> .setPartitionAwarenessEnabled(false)
> .setBinaryConfiguration(new
> BinaryConfiguration().setCompactFooter(false))
> .setUserName("xxx")
> .setUserPassword("xxx");
> }
> }
>
>
> Regards,
> Marcus
>
> -Original Message-
> From: [External] Maksim Timonin 
> Sent: Thursday, June 10, 2021 12:31 AM
> To: user@ignite.apache.org
> Subject: Re: Ignite crashed with CorruptedTreeException
>
> Hi Marcus!
>
> Could you please provide a complete code that inserts data (either it is
> SQL, or cache put, which types do you use, etc.). I've tried to reproduce
> your case but failed.
>
> Thanks a lot!
>
>
>
>


Re: Failing client node due to not receiving metrics updates-IGNITE-10354

2021-06-10 Thread Zhenya Stanilovsky

Hello Akash!
I found that the fix you mentioned is OK.
Why do you think that your network between the server and client is OK?
Can you add some network monitoring here?
Thanks.
 
> 
>> 
>>> Hi, There is a cluster of four server nodes and six client nodes in
>>> production. I was using Ignite version 2.6.0 and all six client nodes were
>>> failing with the error below
>>>
>>> WARN o.a.i.s.d.tcp.TcpDiscoverySpi - Failing client node due to not
>>> receiving metrics updates from client node within
>>> 'IgniteConfiguration.clientFailureDetectionTimeout' (consider increasing
>>> configuration property) [timeout=9, node=TcpDiscoveryNode
>>> [id=12f9809d-95be-47e3-81fe-d7ffcaab064c,
>>> consistentId=12f9809d-95be-47e3-81fe-d7ffcaab064c, addrs=ArrayList
>>> [0:0:0:0:0:0:0:1%lo, 127.0.0.1, ], sockAddrs=HashSet
>>> [/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /:0],
>>> discPort=0, order=155, intOrder=82, lastExchangeTime=1623154238808,
>>> loc=false, ver=2.10.0#20210310-sha1:bc24f6ba, isClient=true]]
>>>
>>> Then I have upgraded the Ignite version to 2.10.0 to get the fix for the
>>> known issue IGNITE-10354. But I am still facing the issue even after
>>> upgrading to the 2.10.0 Ignite version.
>>>
>>> Could someone help here.
>>>
>>> Thanks,
>>> Akash