Good answer, thanks. We'll use the backup filter to create a dedicated backup
subcluster.
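For the archive, a sketch of the sort of thing we have in mind - this
assumes nodes are started with a user attribute (the "subcluster" attribute
name and value are illustrative), and that cacheCfg is an existing
CacheConfiguration:

RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
// Allow backup copies only on nodes tagged as part of the backup subcluster.
aff.setBackupFilter((primary, backup) ->
    "backup".equals(backup.attribute("subcluster")));
cacheCfg.setAffinity(aff);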
We have found that the simplest way to achieve this is by overriding the
Kafka partitioner and assignor - delegating to Ignite's affinity function
and partition-to-node assignment respectively. In this way, Ignite controls
the data to partition and partition to node assignment, and Kafka reflects
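For illustration, a minimal sketch of the partitioner half - it assumes the
topic has the same number of partitions as the cache, that an Ignite node is
already running in the JVM, and a cache name of "myCache"; the assignor side
is analogous:

import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class IgniteAffinityPartitioner implements Partitioner {
    private Ignite ignite;

    @Override
    public void configure(Map<String, ?> configs) {
        // Attach to the Ignite node already running in this JVM.
        ignite = Ignition.ignite();
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        // Delegate the key-to-partition decision to Ignite's affinity
        // function so Kafka and Ignite agree on where a key lives.
        return ignite.affinity("myCache").partition(key);
    }

    @Override
    public void close() {
        // Nothing to release; the Ignite instance is managed elsewhere.
    }
}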
Thanks - that does seem to be effective at stopping the OOM condition at
least.
Is there any way to determine which cache entries were affected by the page
expiry, do you know? The EVT_CACHE_ENTRY_EVICTED doesn't seem to get fired
in this case as far as I can tell. Is that your expectation?
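For reference, this is roughly how I'm listening for evictions - a sketch;
the event type has to be enabled on the node before it will fire at all:

// In the node configuration:
cfg.setIncludeEventTypes(EventType.EVT_CACHE_ENTRY_EVICTED);

// Then, on the started node:
ignite.events().localListen(evt -> {
    CacheEvent ce = (CacheEvent) evt;
    System.out.println("Evicted key: " + ce.key());
    return true; // keep the listener registered
}, EventType.EVT_CACHE_ENTRY_EVICTED);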
In a system that is not using native persistence, what is the recommended way
of stopping a cluster from running out of memory - or stopping it from
crashing when it does?
As per the below jira, memory monitoring appears to be unreliable in the
latest version of Ignite:
https://issues.apache.org/
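For context, data page eviction seems to be the main built-in guard for
non-persistent regions - a sketch of what I mean (region name and sizes
illustrative), though this silently drops data rather than failing:

DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("bounded")
    .setMaxSize(100L * 1024 * 1024)
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU)
    .setEvictionThreshold(0.9); // start evicting at 90% full
cfg.setDataStorageConfiguration(new DataStorageConfiguration()
    .setDataRegionConfigurations(region));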
@Ilya Kasnacheev - in reference to your comment about the IgniteOutOfMemory
(IOOM) condition - is there any acceptable way to stop a full cache from
killing the node that it lives on? Or is this always recommended against? A
custom data region, perhaps?
An update on this - the test works as expected on Ignite versions 2.6 and
earlier. It appears to be a bug introduced in Ignite 2.7. I have raised the
following jira ticket to track:
https://issues.apache.org/jira/browse/IGNITE-12096
From anecdotal experience of storing larger objects (up to say 10MB) in
Ignite, I find that the overall access performance is significantly better
than storing lots of small objects. The main thing to watch out for is that
very large objects can cause unbalanced data distribution. Similar to
over-
I understand from this post:
https://stackoverflow.com/questions/50116444/unable-to-increase-pagesize/50121410#50121410
that the maximum page size is 16K. Is that still true?
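For reference, the page size is set cluster-wide on the storage
configuration - a sketch:

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
// 16K is the documented maximum (DataStorageConfiguration.MAX_PAGE_SIZE).
storageCfg.setPageSize(16 * 1024);
cfg.setDataStorageConfiguration(storageCfg);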
Yes - avoiding an Ignite out of memory condition is exactly what I'm trying
to do. The question is - how can I do this if memory metrics aren't
reliable?
Does anyone have experience of successfully monitoring Ignite memory
consumption as it contracts? Or does anyone have any more general thoughts
on h
BTW, I'm running on Ignite 2.7.5. Any ideas would be appreciated.
Regards,
Colin.
I am using the Ignite metrics API to monitor memory usage levels in an
attempt to avoid OOM conditions.
I have found that the metrics API appears to provide reasonably accurate
figures as entries are being written to the cache - but usage levels do not
come down again when entries are removed. I h
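For reference, I'm polling the figures roughly like this (metrics are
enabled on the region via DataRegionConfiguration.setMetricsEnabled(true)):

for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
    System.out.printf("%s: allocatedPages=%d, fillFactor=%.2f%n",
        m.getName(), m.getTotalAllocatedPages(), m.getPagesFillFactor());
}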
I am using IgniteQueue to store some POJOs in memory, which are removed at
a 15 minute interval after some processing. During the processing, elements
are added to and removed from the queue multiple times using the
removeAll() API. Below is my queue configuration -
@Override
public IgniteQueue creat
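(The snippet above is cut off in the archive; in outline it does something
like the following - the element type and names are illustrative:)

CollectionConfiguration colCfg = new CollectionConfiguration();
colCfg.setBackups(1); // illustrative
// A capacity of 0 means an unbounded queue.
IgniteQueue<MyPojo> queue = ignite.queue("processingQueue", 0, colCfg);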
The Ignite SQL delete command seems to load all entries (both keys and
values) on heap before deleting them from the cache. This is slow, and we
have seen it cause the JVM heap to go OOM.
The docs state that a select is used to gather the keys of records being
deleted:
https://apacheignite-sql.readme.i
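One workaround might be to stream the keys lazily and remove them in
bounded batches, so only one batch is ever on heap - a sketch (the table,
predicate and key type are illustrative):

SqlFieldsQuery qry = new SqlFieldsQuery(
    "SELECT _key FROM Portfolio WHERE isBenchmark = true").setLazy(true);

Set<Integer> batch = new HashSet<>();
for (List<?> row : cache.query(qry)) {
    batch.add((Integer) row.get(0));
    if (batch.size() == 1_000) {
        cache.removeAll(batch);
        batch.clear();
    }
}
if (!batch.isEmpty())
    cache.removeAll(batch);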
The documentation that you referenced states that the
IGNITE_MAX_INDEX_PAYLOAD_SIZE system property defines the default maximum
index payload (inline) size - and that this defaults to 10 bytes.
Since it's only a maximum value, is there any reason why it can't be a bit
higher - say 100? Or is it strongly encouraged to keep indexed fiel
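For concreteness, my understanding is that the size can also be raised per
field, rather than via the global -DIGNITE_MAX_INDEX_PAYLOAD_SIZE=100 JVM
flag - e.g. (value illustrative):

@QuerySqlField(index = true, inlineSize = 100)
private String fullName;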
This appears to be a problem that is fixed in Ignite 2.7.
Thanks. We'll give Native Persistence another try.
Our reluctance to use it stems from the fact that if something goes wrong
with the storage then additional production processes are required to
recover - a bad persistent store can cause the cluster to fail to start or
else propagate problems.
We were using Ignite 2.4 (update pending). Ignite 2.5 and later seems to
treat OOM as a critical error by default and stops the node. The reproducer
below uses a failure handler to stop this from happening. It allocates a
100MB (configurable - 100MB is quite small) region and fills it up with
data.
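The failure-handler part of the reproducer looks roughly like this:

IgniteConfiguration cfg = new IgniteConfiguration();
// Keep the node alive when a critical error such as
// IgniteOutOfMemoryException is reported, instead of stopping it.
cfg.setFailureHandler(new NoOpFailureHandler());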
Ignite DataRegionMetrics reports that memory *is* freed up when removing
items from the cache. However, Ignite continues to throw an OOM exception on
each subsequent cache removal. Cache puts are unsuccessful.
So although Ignite reports that the memory is free, it doesn't seem possible
to actually
I wrote a test for what happens in the case that a DataRegion runs out of
memory. I filled up a cache with records until I received the expected
IgniteOutOfMemoryException. Then I tried to remove entries from the cache -
expecting that memory would be freed up again.
What I found is that any cache
Or is it necessary to use the OS to enforce this? For example using ulimit -d
Thanks for the responses. To summarise:
* JVM Heap (Xmx) - Not normally used by Ignite for caching data.
* MaxDirectMemorySize - Used by Ignite for some file operations but not for
caching data. As per above, 256m is usually sufficient.
* DataRegion maxSize - Used by Ignite to determine how much m
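In configuration terms, a sketch of the above (sizes illustrative):

// JVM options: -Xmx1g -XX:MaxDirectMemorySize=256m
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("default")                   // illustrative region name
    .setMaxSize(4L * 1024 * 1024 * 1024); // 4GB cap on off-heap cache data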
Is there any alternative way to constrain max physical RAM that Ignite uses?
My use case is to constrain physical RAM usage in a shared environment,
while allowing a relatively generous allocation of swap storage. I'm aware
of Ignite persistence, but believe that swap storage might meet our needs
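The sort of region I have in mind - a sketch (path and sizes illustrative):

DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("swapBacked")
    .setMaxSize(16L * 1024 * 1024 * 1024) // generous virtual allocation
    .setSwapPath("/swap/ignite");         // back the region with a swap file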
Thanks for this. I'll keep an eye on that ticket - it seems to be exactly
what we are looking for.
In the meantime, we think we have a work-around. Does the following sound
viable, or do you think it might cause problems?
* Create a custom intercepting ClassLoader
* Start Ignite by directly ca
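The loader itself would be something like this - a sketch only, since what
it needs to intercept is cut off above:

public class InterceptingClassLoader extends ClassLoader {
    public InterceptingClassLoader(ClassLoader parent) {
        super(parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        // Inspect or substitute selected classes before delegating.
        return super.loadClass(name, resolve);
    }
}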
I'm interested in using Ignite services as microservices and have seen Denis'
blog posts on the topic. In my case, I also have a requirement to perform
computations with data affinity. My idea is to call a node singleton service
locally from a distributed compute task.
The advantage of using the s
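The shape of the idea, as a sketch - the service and cache names, and the
PortfolioService interface, are illustrative:

// Deploy one instance of the service on every node.
ignite.services().deployNodeSingleton("portfolioService",
    new PortfolioServiceImpl());

int portCode = 42;
ignite.compute().affinityRun("portfolioCache", portCode, () -> {
    // This runs on the node owning portCode, so the node-singleton
    // instance obtained here is local - no proxy hop.
    PortfolioService svc = Ignition.localIgnite().services()
        .service("portfolioService");
    svc.process(portCode);
});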
Thanks for this. The referenced post mentions AllocatedPages rather than
PhysicalPages. Which would you advise is the most appropriate for this
application?
I have given this (pages * pageSize * pagesFillFactor) a go now, but it
doesn't seem to be returning the values I'm expecting. In particular, the
value can drop significantly even when data is being inserted into the
cache.
Am I using pagesFillFactor incorrectly?
Please ignore my above comment, I am now able to retrieve the factor.
As such, is the following correct?
memoryUsed = pages * pageSize * pagesFillFactor
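i.e., reading the inputs via the metrics API (region name illustrative):

DataRegionMetrics m = ignite.dataRegionMetrics("default");
int pageSize = ignite.configuration()
    .getDataStorageConfiguration().getPageSize();
float memoryUsed = m.getTotalAllocatedPages() * pageSize
    * m.getPagesFillFactor();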
I can confirm that I have metrics enabled for my region - I am able to read
allocatedPages, it's just the fillFactor that always seems to return zero.
Colin.
Hi Denis,
As per your comment, I can see pages*pageSize rising as entries are put into
the cache - but this metric doesn't come down e.g. when new nodes are added
to the cluster. I assume that the pages remain allocated but with a lower
fill factor.
So pages*pageSize gives a misleadingly pessimis
Normally when storing data in Ignite using the default
RendezvousAffinityFunction, data is distributed reasonably evenly over the
available nodes. When increasing the cluster size (up to 24 nodes in my
case), I'm finding that the data is not so well distributed. Some nodes have
more than twice as m
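For what it's worth, the affinity function also takes an explicit partition
count (1024 being the default), which bounds how evenly data can spread - a
sketch:

// More partitions give the rendezvous hash finer granularity to balance.
ccfg.setAffinity(new RendezvousAffinityFunction(false, 4096));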
With Ignite 2+, I have found that the on-heap option makes only modest
improvements to performance in most cases. Likewise for copyOnRead=false,
which works in conjunction with on-heap. These options work best in the case
where you have a small number of cache entries that are read many times. In
b
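For reference, the two options together on a cache configuration (names
illustrative):

CacheConfiguration<Integer, Portfolio> ccfg =
    new CacheConfiguration<>("portfolios");
ccfg.setOnheapCacheEnabled(true); // keep a deserialized on-heap layer
ccfg.setCopyOnRead(false);        // return the stored instance, no copy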
Benchmarks I've run on this flag, and tests on mutating the entry
(acknowledging that this would be bad practice in production), agree with
Evgenii's comments - namely, that this attribute has an effect for on-heap
caches (I'm using 2.1). I find it has more of an effect in cases where the
same records
isBenchmark is indexed by the way:
@QuerySqlField(index = true)
private boolean isBenchmark;
One like this:
select portCode, benchCode, shortName, benchShortName, fullName, currency,
operatingCurrency
from PORTFOLIO.Portfolio
where ISBENCHMARK = true
I have a use case where I would like to store a record that has some
QuerySqlField attributes but also a non-queriable list of child objects.
Something like this:
public class Portfolio {
    @QuerySqlField(index = true)
    private int portCode;
    @QuerySqlField
    private String fullName;
    // Non-queriable children - no @QuerySqlField annotation ("Position" is
    // an illustrative name for the child type).
    private List<Position> positions;
}
An update on this - after killing the persistent store and reloading the
cache, I am now able to start more than one node as expected. I don't know
what caused this problem, but I assume it must be something to do with the
state of the store. I have also seen some more clear-cut cases where the
sto
My configuration is as follows:
I am experiencing the same problem with Ignite 2.1 when I have persistence
configured. I can start a single node successfully (I have 1M entries
persisted) but a second identical node fails with the error.
SEVERE: Failed to reinitialize local partitions (preloading will be
stopped): GridDhtPartiti
By the way, I only get this error with Ignite 1.6. Switching back to 1.5 it
goes away.
I'm running some tests on an Ignite cluster using an Ignite client. I'm
receiving a stacktrace quite regularly in the client - especially when the
cluster is freshly started. The exception occurs at the point that the
client is closed down (i.e. the try block completes) - see below log trace.
The
When configuring a cache for use with SqlFieldsQuery, it is necessary to call
setIndexedTypes(). The below example from the documentation defines 3 types
to index:
ccfg.setIndexedTypes(
    MyKey.class, MyValue.class,
    Long.class, MyOtherValue.class,
    UUID.class, String.class
);
If an individual
In the case of the test that I am executing, high contention caused by the
test running as an Ignite client is causing not only the client but the
whole cluster to become unresponsive upon subsequent destruction of the
cache. The only way to get it to respond again seems to be to kill the
client JV
Thanks for the tip. I'm happy to report that the 1.6 version is considerably
more reliable than 1.5. Although I am still able to break it under high
enough levels of contention, it is a lot harder to do. Also, it generally
recovers when the client is killed (as opposed to the whole cluster) -
thoug
I'm experiencing something very similar to this. In my case, I have a load
test that is causing transaction contention. I don't see the problem when
transactions are switched off, even at high load. The transactions are
cross-cache if that's relevant at all.
The contention causes (expected) errors
Is it possible for SQL queries to participate in Ignite transactions in the
same way that cache get/put operations do?
So far, I've not managed to get them to do so. For example, an
optimistic/serializable transaction around a query (and with a concurrent
put in another transaction) doesn't throw
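For example, this is the shape of what I tried (names illustrative; the
concurrent put runs in another transaction elsewhere):

try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.OPTIMISTIC,
        TransactionIsolation.SERIALIZABLE)) {
    List<List<?>> rows = cache.query(
        new SqlFieldsQuery(
            "SELECT fullName FROM Portfolio WHERE portCode = ?")
            .setArgs(42)).getAll();

    // A concurrent put to the same data in another transaction does not
    // cause this commit to throw TransactionOptimisticException.
    tx.commit();
}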