Hi Alex
We can't share GC logs as they contain sensitive data. We solved it by
creating a new data cluster with persistence enabled and moving the data from
the problematic cluster to the new one.
As far as we can see, it seems that the problem is in the checkpoint
process, for some unknown reason (mayb
Hi Alex, thanks so much.
We have reduced the topology to the picture below (1 server node and 3 clients):
- 1 Ignite server node: IMDB with persistence enabled
- 3 Ignite client nodes: for SQL queries, messaging (topic, queue) and
countdown latches.
All pluggable elements (TOPIC listener and QUEUE listener
Hi!
We had been working with Ignite 2.7.6 without incidents; since we upgraded
to 2.8.1 (same machine, same resources) we are getting "Blocked
system-critical thread" errors and the Ignite server nodes stop responding.
We have noticed that after several hours (about 8 or 9) it recovers by
itself, but after
Try this, you just need the multicast group (it must be the same on clients and
servers within the same cluster):
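A minimal sketch of the programmatic equivalent (the multicast group address below is only an example value):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder;

// Use the same multicast group on every client and server node of the cluster.
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setMulticastGroup("228.10.10.157");

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));

Ignition.start(cfg);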
Thanks Denis!
Regards.
Manu.
Hi!
I have a question: is it normal that, if WAL is deactivated for a persistent
cache, the persisted content of the cache is completely destroyed when the
server node(s) restart?
I need to disable WAL for large, heavy ingestion processes, but eventually
an ingestion may fail (OS, machine cra
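For reference, the pattern in question looks roughly like this (the cache name is only an example); note that data written while WAL is disabled is not guaranteed to survive a crash:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

Ignite ignite = Ignition.ignite();
String cacheName = "ingestCache"; // example name

// Switch WAL off for the heavy ingestion, then back on afterwards.
ignite.cluster().disableWal(cacheName);
try {
    // ... bulk load data here (e.g. with an IgniteDataStreamer) ...
}
finally {
    // Re-enabling WAL forces a checkpoint so the loaded data reaches disk.
    ignite.cluster().enableWal(cacheName);
}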
Hi!
Hawkore's team has partially reimplemented Ignite's spring-data 2.0 module
to provide full support for (dynamic) projections, Page responses, SpEL...
Until the changes are approved by the Ignite community you can use it (it uses
Spring Data version 5.1.4.RELEASE, compatible with spring-boot 2.1.4.RELEASE)
Hi!
"When the CREATE TABLE command is executed, the name of the cache is
generated with the following format: SQL_{SCHEMA_NAME}_{TABLE}. Use the
CACHE_NAME parameter to override the default name."
So if you want to create a table under a specific schema, use CREATE TABLE
"SCHEMA_NAME".TABLE ... u
Hi!
You need one work directory (not only the WAL directory) for each server node.
If you have persistence enabled, once the server nodes start you need to
activate the cluster: ignite.cluster().active(true). This creates the cluster
baseline topology. Please note that once the cluster is activated, if you add a
new server
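A minimal sketch of what that looks like programmatically (the path is just an example):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setWorkDirectory("/var/ignite/node1") // must be different for every node on the same host
    .setDataStorageConfiguration(storageCfg);

Ignite ignite = Ignition.start(cfg);

// With persistence enabled the cluster starts inactive; activation fixes the
// baseline topology to the currently running server nodes.
ignite.cluster().active(true);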
Hi! Could you try to configure a different work directory per node?
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/ where they have
implemented a solution for persisted Lucene indexes that supports SQL
searching.
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/ where they have
implemented a solution to detect changes on query entities and propagate the
changes over the cluster (fields, indexes and re-indexation).
Hi! Take a look at
https://github.com/hawkore/examples-apache-ignite-extensions/ where they have
implemented a solution for persisted Lucene and spatial indexes.
Hope it helps!!
Bye!
Manu
Hi,
after the restart the data does not seem to be consistent.
We waited until the rebalance was fully completed before restarting the
cluster, to check whether durable memory data rebalancing works correctly and
SQL queries still work.
Another question (it's not this case): what happens if one cluster node
crashes i
To reproduce:
1. Create a replicated cache with multiple indexed types, with some indexes
(a configuration sketch follows these steps)
2. Start the first server node
3. Insert data into the cache (100 entries)
4. Start the second server node
At this point everything seems OK: the data appears to be rebalanced
successfully when running SQL queries (count(*))
5.
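A rough sketch of the cache configuration from step 1 (the Person class and its fields are placeholders, not the actual types used):

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class Person {
    @QuerySqlField(index = true)
    private String name;

    @QuerySqlField
    private int age;
}

// Replicated cache with an indexed type; more key/value class pairs can be
// appended to setIndexedTypes for additional types.
CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<Long, Person>("persons")
    .setCacheMode(CacheMode.REPLICATED)
    .setIndexedTypes(Long.class, Person.class);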
Hi,
If you need advanced Lucene search you could modify GridLuceneIndex to parse
KeyCacheObject and CacheObject in the store method and create additional
IndexableFields, applying a transformation to non-string values.
We just integrated the cassandra-lucene-index concept from the Stratio
implementation (https://
AffinityKeyMapped is only processed on cache keys, not on cache values.
Try cache.put(keyEntityWithAffinityKeyMappedAnnotation, value)
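For example (a hypothetical key class, just to illustrate where the annotation is honoured):

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class OrderKey {
    private long orderId;

    @AffinityKeyMapped
    private long customerId; // collocates all orders of the same customer

    public OrderKey(long orderId, long customerId) {
        this.orderId = orderId;
        this.customerId = customerId;
    }
}

// The annotation is processed because it is on the key class; the same
// annotation placed on the value class would be ignored.
// cache.put(new OrderKey(1L, 42L), order);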
On 22 Jun 2017, at 13:16, tuco.ramirez [via Apache Ignite Users]
<mailto:ml+s70518n14043...@n6.nabble.com>
wrote:
Hi,
I have a simple use case, but aff
Hi,
The grid name allows you to identify different Ignite instances within the same
JVM (commonly with different configuration: discovery, etc., but not always;
you could have N client Ignite instances with the same internal configuration
and different grid names, e.g. on Ignite JdbcConnection, if no gridName is
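A small sketch of two instances distinguished by name in the same JVM (in recent Ignite versions the property is called igniteInstanceName):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

Ignite first = Ignition.start(new IgniteConfiguration().setIgniteInstanceName("grid-a"));
Ignite second = Ignition.start(new IgniteConfiguration().setIgniteInstanceName("grid-b"));

// Later, a specific instance can be looked up by its name.
Ignite lookedUp = Ignition.ignite("grid-a");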
Hi,
You need to return true from the apply method to keep listening continuously.
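For example (a minimal sketch, assuming cache PUT events are enabled via setIncludeEventTypes in the node configuration):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

Ignite ignite = Ignition.ignite();

IgnitePredicate<CacheEvent> listener = evt -> {
    System.out.println("Event: " + evt.name() + ", key: " + evt.key());
    return true; // keep listening; returning false unregisters the listener
};

ignite.events().localListen(listener, EventType.EVT_CACHE_OBJECT_PUT);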
If you use the Ignite JDBC driver, to ensure that you always get a valid Ignite
instance before calling an Ignite operation I recommend using a DataSource
implementation that validates the connection before each call and creates a
new one otherwise.
For common operations with an Ignite instance, I use this method t
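A hypothetical sketch of that kind of validating wrapper (class name and URL are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical helper: returns a usable connection, recreating it when the
// cached one has been closed. A lightweight validation query could be added
// to also detect half-broken connections.
public class ValidatingIgniteJdbcHolder {
    private final String url; // e.g. "jdbc:ignite:cfg://file:///path/to/client-config.xml"
    private Connection conn;

    public ValidatingIgniteJdbcHolder(String url) {
        this.url = url;
    }

    public synchronized Connection get() throws SQLException {
        if (conn == null || conn.isClosed())
            conn = DriverManager.getConnection(url);

        return conn;
    }
}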
You are right: if the connection is closed due to cluster *client* node
disconnection, the client will automatically recreate the connection using the
discovery configuration. A pool is also supported, but N pooled instances of
org.apache.ignite.internal.jdbc2.JdbcConnection for the same URL on the same
Java VM will use sa
Hi,
as you know, org.apache.ignite.internal.jdbc2.JdbcConnection is an
implementation of java.sql.Connection; it always works in client mode (this
flag is hardcoded to true when the XML configuration passed in the connection
URL is loaded) and in read mode (only SELECT). On the same Java VM instance,
connection
Hi,
You are creating a new data streamer on each loop call...
[...]
for (int i = 0; i < 100; i++) {
    //
    CacheManager.getInstallBaseCache().put(name+"-"+i, new TestPojo());
    CacheManager.getInsta
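A minimal sketch of the intended pattern (the "installBase" cache name is a guess derived from the code above, and TestPojo is the class from that code): create the streamer once and reuse it inside the loop.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

Ignite ignite = Ignition.ignite();
String name = "node"; // placeholder for the 'name' variable in the quoted code

// One streamer for the whole batch instead of one per iteration; the target
// cache must already exist.
try (IgniteDataStreamer<String, TestPojo> streamer = ignite.dataStreamer("installBase")) {
    for (int i = 0; i < 100; i++)
        streamer.addData(name + "-" + i, new TestPojo());
} // closing the streamer flushes any buffered entries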
Of course, it's not trivial... and changes to the database are required (a new
field in the primary table (better), or a new "extended partition table" in a
1-to-1 relationship with the primary table (primary table id, partitionId)),
but using a CacheStoreAdapter implementation it's not that complex. I would do:
1. over
Have you tried partitioning your data? It's pretty simple: add a field
(integer partitionId) to your table, so each node will load only its own
partitions. You can see an example here:
http://apacheignite.gridgain.org/docs/data-loading#section-partition-aware-data-loading
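A rough sketch of what partition-aware loading can look like with a CacheStoreAdapter (table, column and class names are placeholders; the DataSource is assumed to be injected):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.cache.Cache;
import javax.sql.DataSource;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.lang.IgniteBiInClosure;

public class PartitionAwarePersonStore extends CacheStoreAdapter<Long, String> {
    private DataSource dataSource; // assumed to be injected (e.g. via Spring)

    @Override public void loadCache(IgniteBiInClosure<Long, String> clo, Object... args) {
        int partitionId = (Integer)args[0]; // each node asks only for the partitions it owns

        try (Connection conn = dataSource.getConnection();
             PreparedStatement st = conn.prepareStatement(
                 "SELECT ID, NAME FROM PERSON WHERE PARTITION_ID = ?")) {
            st.setInt(1, partitionId);

            try (ResultSet rs = st.executeQuery()) {
                while (rs.next())
                    clo.apply(rs.getLong("ID"), rs.getString("NAME"));
            }
        }
        catch (SQLException e) {
            throw new RuntimeException("Failed to load partition " + partitionId, e);
        }
    }

    // Key-based operations are omitted from this sketch.
    @Override public String load(Long key) { return null; }
    @Override public void write(Cache.Entry<? extends Long, ? extends String> entry) { /* no-op */ }
    @Override public void delete(Object key) { /* no-op */ }
}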
Done.
Time to :P
Thanks!
I almost have the change: queryEntities changes are propagated to the H2 tables
and index tree over the cluster... preserving old indexes. I'll let you know
when it's done... working on version 1.6.0