Ok, here is the problem, per documentation [1]:
"In case of partitioned caches, keys that are not mapped to this node,
either as primary or backups, will be automatically discarded by the cache."
Since you have two nodes in the cluster but call localLoadCache on only
one node, part of the cache
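That discard behavior can be sketched with a toy model (plain Java, no Ignite on the classpath; the partition count, hash-based partition function, and even/odd node assignment are illustrative assumptions, not Ignite's real rendezvous affinity):

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionDiscardDemo {
    static final int PARTS = 1024;  // assumed partition count
    static final int NODES = 2;

    // Toy affinity: key -> partition. Ignite's real mapping is
    // RendezvousAffinityFunction; this is only a stand-in.
    static int partition(Object key) {
        return Math.abs(key.hashCode() % PARTS);
    }

    // Toy node assignment, no backups: partition p lives on node p % NODES.
    static int ownerNode(int part) {
        return part % NODES;
    }

    // Models localLoadCache: only keys whose partition this node owns
    // survive; keys mapped elsewhere are discarded by the cache.
    static List<Integer> localLoad(int nodeId, List<Integer> keys) {
        List<Integer> kept = new ArrayList<>();
        for (Integer k : keys)
            if (ownerNode(partition(k)) == nodeId)
                kept.add(k);
        return kept;
    }

    public static void main(String[] args) {
        List<Integer> keys = new ArrayList<>();
        for (int i = 0; i < 10_000; i++)
            keys.add(i);

        // Calling localLoadCache on node 0 alone keeps only node 0's share.
        System.out.println("node 0 kept " + localLoad(0, keys).size()
                + " of " + keys.size());
        // The rest would only appear after node 1 runs localLoadCache too.
    }
}
```

In this toy setup roughly half the keys survive a load on one node, which is why the observed cache size comes out smaller than the source table.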
1. I currently have just 2 Ignite nodes: the first one started remotely to
bring up the cluster, and the second one (this one) started programmatically with C#.
2. Adding Thread.Sleep(5000) doesn't change the result, unfortunately.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi,
Did anyone get a chance to look into this issue? Is this a supported
configuration for Ignite? As these two features (client-side near cache and
server-side eviction) are both very common, I imagine many projects must be
using this combination.
Can this be reported as a defect for Ignite, if
Hello,
I recently ran into an out-of-memory error on a durable persistent cache I
set up a few weeks ago. I have a single node, with durable persistence
enabled, as well as WAL archiving. I'm running Ignite ver.
2.8.1#20200521-sha1:86422096.
I looked at the stack trace, but I couldn't get a
1. How many Ignite nodes do you have?
2. What if you add Thread.Sleep(5000) before the last Console.WriteLine?
Does the resulting number change?
On Mon, Nov 23, 2020 at 6:01 PM ABDumalagan
wrote:
> 1. Your program worked for me!
>
> 2. I added something to my LoadCache(Action, params
> object[]
1. Your program worked for me!
2. I added something to my LoadCache(Action, params object[]
args) method in OracleStore.cs. I added the following 3 lines after the
while loop:
reader.Dispose();
cmd.Dispose();
con.Dispose();
The console returned a non-zero cache size of 5136; however, the queries I
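As an aside, the three Dispose() calls added above are the key change; the Java analogue of the C# using pattern is try-with-resources. A minimal sketch with stand-in resources (the Resource class here is hypothetical, since OracleStore.cs itself is not shown):

```java
import java.util.ArrayList;
import java.util.List;

public class DisposeDemo {

    // Stand-in for the connection, command, and reader in OracleStore.cs.
    static class Resource implements AutoCloseable {
        final String name;
        final List<String> closeLog;

        Resource(String name, List<String> closeLog) {
            this.name = name;
            this.closeLog = closeLog;
        }

        @Override public void close() { closeLog.add(name); }
    }

    // Returns the order in which the resources were released.
    static List<String> loadCache() {
        List<String> closeLog = new ArrayList<>();
        // try-with-resources closes in reverse declaration order, even if the
        // read loop throws -- the guarantee the three manual Dispose() calls
        // in the C# version approximate.
        try (Resource con = new Resource("con", closeLog);
             Resource cmd = new Resource("cmd", closeLog);
             Resource reader = new Resource("reader", closeLog)) {
            // while (reader.Read()) { act(...); }  // read rows, feed the cache
        }
        return closeLog;
    }

    public static void main(String[] args) {
        System.out.println(loadCache()); // prints [reader, cmd, con]
    }
}
```

Scoping the resources this way also releases them on the error path, not just when the loop completes normally.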
Hi,
Can you please tell us which scan you were running? I want to reproduce this
issue using Tenable.sc.
Thank you,
Evgenii
On Tue, Sep 22, 2020 at 06:55, Ilya Kasnacheev wrote:
> Hello!
>
> I don't think it should cause heap dumps. Here you are showing just a
> warning. This warning may be ignored.
>
Hi,
sadly, the logs from the latest message show nothing. There are no visible
issues with the code either; I already checked it. Sorry to say, but what
we need are additional logging in the Ignite code and a stable reproducer,
and we have neither.
You shouldn't worry about it, I think. It's most likely a bug
Hello!
You can set the default concurrency mode and isolation level for
transactions by specifying them in TransactionConfiguration. Otherwise you are correct.
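For example, a Spring XML fragment along these lines (the values shown are illustrative overrides; Ignite's defaults for explicit transactions are PESSIMISTIC and REPEATABLE_READ):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="transactionConfiguration">
    <bean class="org.apache.ignite.configuration.TransactionConfiguration">
      <!-- Illustrative overrides of the PESSIMISTIC / REPEATABLE_READ defaults. -->
      <property name="defaultTxConcurrency" value="OPTIMISTIC"/>
      <property name="defaultTxIsolation" value="SERIALIZABLE"/>
    </bean>
  </property>
</bean>
```

These defaults apply only to transactions started without explicit concurrency/isolation arguments.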
Regards,
--
Ilya Kasnacheev
On Mon, Nov 23, 2020 at 14:49, 38797715 <38797...@qq.com> wrote:
> Hi Ilya,
>
> Then confirm again that according to
Hi Ilya,
Then, to confirm again: according to the log message, an optimistic
transaction with READ_COMMITTED is used for a single data operation on a
transactional cache?
If transactions are explicitly turned on, the default concurrency mode
and isolation level are pessimistic and
Hello!
Please refer to this specific ticket:
https://issues.apache.org/jira/browse/IGNITE-9560
As well as this Javadoc of the new class:
/**
 * Ignite Security Processor.
 *
 * The differences between {@code IgniteSecurity} and
 * {@code GridSecurityProcessor} are:
 *
 * {@code IgniteSecurity}
Hi,
According to the provided log, I see a "Blocked system-critical thread has
been detected" message, and the node was segmented since it was unable to
respond to another node. Most probably this is caused by JVM pauses,
possibly related to GC.
Do you collect GC logs for the nodes?
You can
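For example, GC logging can be enabled through JVM options before starting the node. A sketch assuming a default ignite.sh launch (which honors JVM_OPTS); the log path is a placeholder:

```shell
# JDK 9+ unified logging:
export JVM_OPTS="-Xlog:gc*,safepoint:file=/var/log/ignite/gc.log:time,uptime"

# JDK 8 equivalent:
# export JVM_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/ignite/gc.log"

bin/ignite.sh config/default-config.xml
```

Long safepoint or full-GC pauses in that log around the segmentation time would confirm the JVM-pause theory.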
Your code seems to be correct. It works for me in a simplified form:
https://gist.github.com/ptupitsyn/a64c899b32b73ab55cb706cd4a09e6e9
1. Can you try the program above - does it work for you?
2. Can you confirm that the Oracle query returns a non-empty result set?
On Mon, Nov 23, 2020 at 3:00
Hi!
in our project we are currently using Ignite 2.8.1 without Ignite native
persistence enabled. Now we would like to enable this feature to prevent data
loss during node restarts.
Important: we use AttributeNodeFilters to separate our data, e.g. data of
type1 only lives in the type1 cluster group
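For reference, native persistence is enabled per data region; a minimal sketch of the relevant IgniteConfiguration fragment (the region name is a placeholder, and the region layout would need to be adapted to your node-filter setup):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <!-- Enables native persistence for the default data region. -->
      <property name="defaultDataRegionConfiguration">
        <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
          <property name="name" value="Default_Region"/>
          <property name="persistenceEnabled" value="true"/>
        </bean>
      </property>
    </bean>
  </property>
</bean>
```

Note that with persistence enabled the cluster starts inactive and the baseline topology decides which nodes own data, so it is worth checking how activation and baseline changes interact with your AttributeNodeFilter cluster groups.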