Hi,
A get() operation from a client always goes to the primary node. If you run a
compute task on other nodes, where each node does a get() for that key, it
will read the local value. REPLICATED caches have many other optimizations,
for example for SQL queries.
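As a sketch (cache name and key are illustrative, and a running node is assumed), reading the key inside a compute job on every node looks like this:

```java
Ignite ignite = Ignition.ignite();

ignite.compute().broadcast(() -> {
    // On a REPLICATED cache this get() is served from the node's local copy.
    Object val = Ignition.localIgnite().cache("myCache").get(1);
    System.out.println("Local value: " + val);
});
```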
Thanks!
-Dmitry
Hi,
You can, for example, set SYNC rebalance mode for your replicated cache [1].
In that case all cache operations will be blocked until rebalance is
finished, and when it's done you'll get a fully replicated cache.
But this will block the cache on each topology change.
[1]
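A minimal sketch of that configuration (the cache name is illustrative):

```java
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myReplicatedCache");
ccfg.setCacheMode(CacheMode.REPLICATED);
// Cache operations block until rebalance finishes on each topology change.
ccfg.setRebalanceMode(CacheRebalanceMode.SYNC);
```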
Hi,
Usually it's enough to open the ports for communication and discovery; their
default values are 47500 and 47100.
If you run more than one node per machine, you'll need to open a port range:
47500..47509 and 47100..47109.
You can always configure other values [1, 2]
[1]
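A sketch of configuring those ports and ranges explicitly (values match the defaults mentioned above):

```java
TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setLocalPort(47500);
discoSpi.setLocalPortRange(10); // covers 47500..47509 for several nodes per machine

TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setLocalPort(47100);
commSpi.setLocalPortRange(10); // covers 47100..47109

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(discoSpi);
cfg.setCommunicationSpi(commSpi);
```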
Hi,
There are no such limitations on peer class loading, but it was designed for
and works with compute jobs, remote filters and queries only. All unknown
classes from tasks or queries will be deployed in the cluster with
dependencies according to the deployment mode [1]. Actually, with a job Ignite sends
Hi,
Where did you find it? It might be a broken link.
Thanks!
-Dmitry
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi,
Dynamic schema changes are available only via SQL/DDL [1].
BTW, caches created via SQL can be accessed from the Java API if you add the
SQL_PUBLIC_ prefix to the table name. For example: ignite.cache("SQL_PUBLIC_TABLENAME").
[1] https://apacheignite-sql.readme.io/docs/ddl
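For illustration (the table name PERSON is hypothetical), assuming `ignite` and `cache` handles already exist:

```java
// DDL executed over SQL creates the table...
cache.query(new SqlFieldsQuery(
    "CREATE TABLE PERSON (id INT PRIMARY KEY, name VARCHAR)")).getAll();

// ...and the backing cache becomes accessible under the SQL_PUBLIC_ prefix:
IgniteCache<?, ?> personCache = ignite.cache("SQL_PUBLIC_PERSON");
```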
Thanks!
-Dmitry
Hi,
Could you please explain how you update the database? Do you use a CacheStore
with writeThrough or save manually?
Anyway, you can update data with a custom expiry policy:
cache.withExpiryPolicy(policy) [1]
[1]
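A small sketch of that call (cache name and duration are illustrative):

```java
IgniteCache<Integer, String> cache = ignite.cache("myCache");

// Entries written through this proxy expire 5 minutes after creation.
IgniteCache<Integer, String> withExpiry = cache.withExpiryPolicy(
    new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));

withExpiry.put(1, "expires in 5 minutes");
```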
Hi,
I think the best way here would be to read items directly from Kafka,
process and store them in a cache, and remember the Kafka offset in another
cache. If a node crashes, your service can restart from the last point
(offset).
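The idea sketched in code (cache names and the record variables are illustrative; partition, offset, key and value would come from the Kafka consumer):

```java
IgniteCache<String, String> data = ignite.getOrCreateCache("data");
IgniteCache<Integer, Long> offsets = ignite.getOrCreateCache("kafkaOffsets");

// For each consumed record:
data.put(recordKey, processedValue);
offsets.put(recordPartition, recordOffset);
// After a crash, resume consumption from offsets.get(recordPartition) + 1.
```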
Thanks!
-Dmitry
Hi,
It looks like most of the time transactions in the receiver are waiting for
locks. Any lock adds serialization to parallel code. And in your case I
don't think it's possible to tune throughput with settings, because ten
transactions could be waiting while one finishes. You need to change the algorithm.
Hi,
Yes, you're right, it was missed during refactoring. I've created a ticket
[1]; you may fix it and contribute to Apache Ignite :)
[1] https://issues.apache.org/jira/browse/IGNITE-9259
Thanks!
-Dmitry
Hi,
Looks like it was killed by the kernel. Check the logs for the OOM killer:
grep -i 'killed process' /var/log/messages
If the process was killed by Linux, correct your config: you might have set
too much memory for Ignite page memory, set it to lower values [1]
If not, try to find the process in the logs by PID, maybe it was
Hi,
Nice work, thank you! I'm sure it will be very useful. Looking forward to
your contributions to the Apache Ignite project ;)
Thanks!
-Dmitry
Hi,
Ignite by default uses the Rendezvous hashing algorithm [1], and
RendezvousAffinityFunction is the implementation responsible for
partition distribution [2]. This significantly reduces traffic on
partition rebalancing.
[1] https://en.wikipedia.org/wiki/Rendezvous_hashing
[2]
Hi,
1) You need to add the JetBrains annotations in compile time [1].
2) Imports depend on what you are using :) It's hard to say if your imports
are enough. Add ignite-core to your plugin dependencies.
I don't think there are other examples besides that blog post.
[1]
Hi,
I've opened a ticket for this [1]. It seems a LOCAL cache keeps all entries
on-heap. If you use only one node, switch to PARTITIONED; if more than one,
PARTITIONED + a node filter [2]
[1] https://issues.apache.org/jira/browse/IGNITE-9257
[2]
Hi Akash,
1) Actually, exchange is a short process during which nodes remap partitions.
But Ignite uses late affinity assignment, which means the affinity
distribution will be switched after rebalance completes. In other words,
after rebalance it will atomically switch the partition distribution.
But you don't
Hi,
I'm not sure that nightly builds are updated regularly, but you should give
it a try. The biggest risk is that a nightly build could have some bugs that
will be fixed in the release.
Thanks!
-Dmitry
Hi Akash,
How do you measure partition distribution? Can you provide the code for that
test? I assume that you get partitions before the exchange process is
finished. Try a delay of 5 seconds after all nodes are started and check
again.
Thanks!
-Dmitry
Hi,
It is defined by the AffinityFunction [1]. By default there are 1024
partitions; affinity automatically calculates the nodes that will keep the
required partitions and minimizes rebalancing when the topology changes
(nodes join or leave).
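A sketch of setting this explicitly (these are the default function and partition count, the cache name is illustrative):

```java
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
// RendezvousAffinityFunction with 1024 partitions, no backup/primary co-location exclusion.
ccfg.setAffinity(new RendezvousAffinityFunction(false, 1024));
```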
Hi,
TTL fixes are not included in 2.6 as it was an emergency release. You'll
need to wait for 2.7.
https://issues.apache.org/jira/browse/IGNITE-5874
https://issues.apache.org/jira/browse/IGNITE-8503
https://issues.apache.org/jira/browse/IGNITE-8681
Hi,
The rules of field naming are defined by the BinaryIdMapper interface. By
default the BinaryBasicIdMapper implementation is used, which converts all
field names to lower case. So Ignite doesn't support the same field name in
different cases, as it will treat them as the same field.
But you can
Hi,
It might be an issue with deactivation. Try updating to 2.6 or wait for 2.7.
For now just skip cluster deactivation. Once you have formed a baseline
topology and finished loading data, just enable WAL for all caches. When the
log is enabled successfully, you can safely stop the nodes.
Next time when all
Hi Akash,
First of all, SQL is not transactional yet; this feature will be available
only in 2.7 [1]. Your exception might be caused by the query being cancelled
or a node being stopped.
[1] https://issues.apache.org/jira/browse/IGNITE-5934
Thanks!
-Dmitry
Hi Calvin,
> Can I assume that BinaryMarshaller won't be used for any object embedded
> inside GridCacheQueryResponse?
Yes, because Binary can fall back to Optimized, but not vice versa.
> If I am correct, do you have any suggestion on how I can avoid this type
> of issue?
Probably you need
Hi Calvin,
1. By "enlist" I mean, for example, that you want to see what fields are
present in a BinaryObject. In other words, that you want to work with
a BinaryObject directly. For POJO serialization/deserialization this should
not be an issue at all.
2-3. In your case, you have a java.time.Ser
Hi,
Reduce will be done on the node to which the JDBC or thin client is
connected; it can be either a client or a server node.
Thanks!
-Dmitry
Hi Calvin,
BinaryMarshaller can solve that issue while introducing a few more.
First of all, you will need to disable the compact footer to let each
BinaryObject keep its schema in the footer.
If you just need to put/get POJOs, everything will be fine. But you need to
enlist your POJO in BinaryConfiguration
Hi Jose,
1. Yep, I would say you'll get more benefit with persistence. If you split
between physical machines, each may keep more hot data in memory and each
has a separate hard drive. The more data you can fit into RAM and the more
hard drives can work in parallel, the better performance you get.
1) This is applicable to Ignite. As it grew out of GridGain, the name
sometimes appears in the docs because it was missed for removal.
2) Yes, and I would say the overhead could be even bigger. But anyway I
cannot say definitely how much, because Ignite doesn't store data
sequentially; there are a lot of nuances.
3) Ignite
Hi,
Slight degradation is expected in some cases. Let me explain how it works.
1) The client sends a request to each node (if you have query parallelism > 1,
then the number of requests is multiplied by that number).
2) Each node runs that query against its local dataset.
3) Each node responds with 100 entries.
Naresh,
GC logs show not only GC pauses, but system pauses as well. Try these
parameters:
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime
Thanks!
-Dmitry
Hi,
Where did you get those images? Do you see version 2.5.0 in the logs of all
your instances?
Thanks!
-Dmitry
Hi,
Thread dumps look healthy. Please share full logs from the time when you took
those thread dumps, or take new ones (thread dumps + logs).
Thanks!
-Dmitry
Hi Naresh,
Actually, any JVM process hang can lead to segmentation. If a node is
not responsive for longer than failureDetectionTimeout, it will be kicked
out of the cluster to prevent performance degradation across the whole grid.
It works in the following scenario. Let's say we have 3 nodes in a
I suppose that is an issue with updating timestamps rather than with WAL
writes. Try running a load test and compare hash sums of the files before and
after. Also check whether the WAL history grows.
Thanks!
-Dmitry
Hi,
Just because:
1) not all users build their apps from scratch; they might have some legacy
code built over a Cassandra DB;
2) native persistence appeared much later than the Cassandra module, and
there is no point in removing it now;
3) it's always better to offer more choices to the user.
Anyway,
Hi Oleksandr,
It's OK for discovery, and this message is printed only in debug mode:
if (log.isDebugEnabled())
    log.error("Exception on direct send: " + e.getMessage(), e);
Just turn off debug logging for the discovery package:
org.apache.ignite.spi.discovery.tcp.
Thanks!
Hi,
What is your configuration? Check WAL mode and path to persistence.
Thanks!
-Dmitry
Hi,
You configured the external public EC2 interface address (34.241...), but it
should be the internal one: 172...
Thanks!
-Dmitry
Hi,
AFAIK, you cannot download the plugin separately; it's a commercial product.
You can use it for free from here [1] or purchase a paid version for internal
use.
[1] http://console.gridgain.com/
Thanks!
-Dmitry
Hi,
Not sure if it's possible to remove the ticket. Just close it with "Won't
Fix" status; that would be enough.
Thanks!
-Dmitry
Hi,
Probably the best choice would be Cassandra, as Ignite has out-of-the-box
integration with it [1].
[1]
https://apacheignite-mix.readme.io/v2.5/docs/ignite-with-apache-cassandra
Thanks!
-Dmitry
Check your configuration. This code works perfectly well for me. If the page
eviction mode is set to disabled, an IgniteOutOfMemoryException will be thrown:
IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
DataStorageConfiguration dataStorageConfig = new DataStorageConfiguration();
long
Jose,
Unfortunately there are no other tools at the moment. But you can still
contribute to Apache Ignite and implement the ticket to persist
Lucene indexes. It would be a great help!
Thanks!
-Dmitry
Hi,
I see you used your data region as the default and set a name for it. Try
setting it via DataStorageConfiguration.setDataRegionConfigurations().
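A sketch of what that looks like (region name and size are illustrative):

```java
DataRegionConfiguration myRegion = new DataRegionConfiguration();
myRegion.setName("myRegion");
myRegion.setMaxSize(512L * 1024 * 1024); // 512 MB

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
// Registers the region in addition to the default one.
storageCfg.setDataRegionConfigurations(myRegion);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDataStorageConfiguration(storageCfg);
```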
Thanks!
-Dmitry
Hi,
The REST API does not have such an option, but you can write your own compute
task (that uses the Java API) and call it from REST [1]. It's not possible to
use Lucene search from the SQL interfaces.
To use full text search you need to annotate fields with @QueryTextField [2]
and add them to the indexed types [3].
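A sketch of those two steps (class, field and cache names are illustrative):

```java
public class Person {
    @QueryTextField
    private String bio; // indexed by Lucene for full text search
}

CacheConfiguration<Integer, Person> ccfg = new CacheConfiguration<>("persons");
ccfg.setIndexedTypes(Integer.class, Person.class);

// Query with:
// cache.query(new TextQuery<Integer, Person>(Person.class, "search terms"));
```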
Hi Mikael!
Don't worry about this message; you may just ignore it. It's absolutely
OK and means that the WAL was read fully. The question is why it's a
WARNING... In future releases it will be changed to INFO and the message
content adjusted to avoid such confusion.
Thanks!
-Dmitry
Hi,
Check the system logs for that time, maybe there was a system freeze, and add
more information to the GC logs, for example safepoints:
-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime.
Thanks!
-Dmitry
Hi,
localPeek must be called on the local node. If you want to do that from a
client, you have to execute a task [1] targeting the server node. But for
listing all entries, ScanQuery is designed [2]. You may run it via a compute
task from the client with the setLocal() flag set to true.
[1]
It would be better to upgrade to 2.5, where it is fixed.
But if you want to overcome this issue in your version, you need to add the
ignite-indexing dependency to your classpath and configure SQL indexes. For
example [1]; just modify it to work with Spring XML:
Hi Naresh,
The recommendation is the same: increase failureDetectionTimeout until the
nodes stop segmenting, or use gdb (or remove the "live" option from the jmap
command to skip the full GC).
Thanks!
-Dmitry
Hi,
This thread dump is absolutely fine; you confused the socket state with the
java thread state. These two things are completely unrelated.
There should not be so many socket connections (TIME_WAIT means that the
socket is already closed and waiting for the last packets) for three nodes.
Could you please share
Hi,
I totally agree with Val that implementing your own AffinityFunction is quite
a complex way. The requirement that you described is called affinity
co-location, as I wrote before.
Let me explain in more detail what to do and what the drawbacks are.
1. Use @AffinityKeyMapped for all your keys. For
Hi,
What IgniteConfiguration do you use? Could you please share it?
Thanks!
-Dmitry
Hi,
Ignite keeps Tx cached values on-heap.
Thanks!
-Dmitry
There are various possible ways, but using one partition per node is
definitely a bad idea, because you're losing scaling possibilities. If you
have 5 partitions and 5 nodes, then a 6th node will be empty.
It's much better if in the AffinityFunction.partition() method you
calculate the node according to
Normally (without @AffinityKeyMapped) Ignite will use the CustomerKey hash
code (not the object's hashCode()) to find a partition. Ignite will consult
the AffinityFunction (the partition() method) to decide which partition the
key goes to, and with assignPartitions() find the concrete node that holds
that partition.
In other
Hi,
Make sure that your keys go to a specific partition. Only one node can
keep that partition at a time (except backups, of course). To do that, you
may use the @AffinityKeyMapped annotation [1].
Additionally, you can implement your own AffinityFunction that will assign
the partitions you need
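A sketch of the annotation (class and field names are illustrative):

```java
public class MyKey {
    private int orderId;

    @AffinityKeyMapped
    private String customerId; // the partition is derived from this field, not the whole key
}
```

All keys that share the same customerId value land in the same partition, and therefore on the same node.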
Hi,
TcpDiscoveryMulticastIpFinder produces such a big number of connections. I'd
recommend switching to TcpDiscoveryVmIpFinder with a static set of addresses.
Thanks!
-Dmitry
Hi Mikael,
Please share your Ignite settings and logs.
Thanks!
-Dmitry
Hi,
Could you please provide a reproducer? I don't get such an exception.
Thanks!
-Dmitry
Hi,
1. By default get() will read from backups if the node on which it's invoked
is an affinity node. In other words, if the current node has a backup, Ignite
prefers to read the local data from the backup rather than requesting the
primary node over the network. This can be changed by setting
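Presumably the setting meant here is CacheConfiguration.setReadFromBackup; a sketch (cache name is illustrative):

```java
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
// Always read from the primary node, even if a local backup exists.
ccfg.setReadFromBackup(false);
```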
Hi,
It's hard to tell what's going wrong from your question.
Please attach full logs and thread dumps from all server nodes.
Thanks!
-Dmitry
Hi,
Yes, Ignite will send messages to all nodes, but you may use a filter:
ignite.message(ignite.cluster().forAttribute("topic1", Boolean.TRUE));
In this case messages will be sent to all nodes from the cluster group, in
this example only the nodes with the attribute "topic1" set [1].
[1]
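Extending the snippet above into a send/receive sketch (topic name and message are illustrative):

```java
ClusterGroup grp = ignite.cluster().forAttribute("topic1", Boolean.TRUE);
ignite.message(grp).send("myTopic", "hello");

// Nodes in the group subscribe with:
// ignite.message(grp).localListen("myTopic", (nodeId, msg) -> { /* handle */ return true; });
```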
Hi,
Yes, for complex transactions this workaround will not work. So you need to
either wait for the fix or avoid using EntryProcessor for now.
Thanks!
-Dmitry
Hi Ankit,
No, Ignite uses sun.misc.Unsafe for offheap memory. Direct memory may be
used in the DirectBuffers used for inter-node communication. Usually the
defaults are quite enough.
Thanks!
-Dmitry
Hi,
For sure Ignite caches queries; that's why the first request runs much longer
than the subsequent ones.
Thanks!
-Dmitry
Hi Praveen,
The stack traces only show that a thread is waiting for a response. To get
the full picture, please attach full logs and thread dumps taken at the
moment of the hang from all nodes. I need them from all nodes, because the
actual issue happened on a remote node.
Also, according to the last exception, there might be
Hi,
Blocked threads only show the fact that there are no tasks to process in the
pool. Do you use persistence and/or indexing? Could you please attach your
configs and logs from all nodes? Please take a few sequential thread dumps
while throughput is low.
Thanks!
-Dmitry
Hi Dome,
Could you please attach full logs?
Thanks!
-Dmitry
Hi Christoph,
This metric is not implemented because of its complexity. But you may find
out how much space your cache or caches consume with DataRegionMetrics:
DataRegionMetrics drm = ignite.dataRegionMetrics("region_name");
long used = (long)(drm.getPhysicalMemorySize() *
Hi Prasad,
This issue could not be completed in 2.5 as it has low priority. As a
workaround, you can wrap your executeEntryProcessorTransaction() method
in an affinity run [1], and no additional value transferring will happen.
[1]
Hi Ray,
I think the only way to do it is to use
IgniteDataFrameSettings.OPTION_CONFIG_FILE and set the path to an XML
configuration with all the settings you need. Here is a nice article about
this [1]
[1]
Hi,
If you have read-through mode enabled for the cache, the entry will be loaded
on the next IgniteCache.get() operation, or when IgniteCache.loadCache() is
called.
The entry will then be evicted again according to your eviction policy.
Please note that the entry will not be counted in SQL queries if it was
Hi Naveen,
Unfortunately I'm unable to reproduce that error. Could you please attach a
simple code/project that fails with the specified exception?
Thanks!
-Dmitry
Duplicates
http://apache-ignite-users.70518.x6.nabble.com/Strange-node-fail-td21078.html.
Hi Ray,
If your JVM process consumes more memory than is available, the resulting
swapping may cause a JVM freeze and, as a consequence, the node being thrown
out of the cluster. Check your free memory, disable swapping if possible, or
increase IgniteConfiguration.failureDetectionTimeout.
To check that guess you may use
Hi Anshu,
This looks like a bug that was fixed in 2.4, try to upgrade [1].
[1] https://ignite.apache.org/download.cgi
Thanks!
-Dmitry
Hi Bryan,
You need to use a StatefulSet [1]; Kubernetes will start nodes one by one as
each comes into the ready state.
[1] https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
Thanks!
-Dmitry
Hi Prasad,
This approach will work with multiple keys if they are collocated on the
same node and you start/stop the transaction in the same thread/task. There
is no other workaround.
Thanks!
-Dmitry
Jet,
Yep, this should work, but meanwhile this ticket remains unresolved [1].
[1] https://issues.apache.org/jira/browse/IGNITE-5371
Thanks!
-Dmitry
Hi Prasad,
If you started Ignite with IgniteSpringBean or IgniteSpring, try the
@SpringApplicationContextResource [1] annotation. Ignite's resource injector
will use the Spring context to set a dependency annotated with it. But I'm
not sure this will work with CacheStore; it should be rechecked.
[1]
Hi,
You may use a filter for that, for example:
ContinuousQuery<K, V> qry = new ContinuousQuery<>();
final Set<ClusterNode> nodes = new HashSet<>(client.cluster().forDataNodes("cache")
    .forHost(client.cluster().localNode()).nodes());
qry.setRemoteFilterFactory(new
Hi Jet,
Full text search creates Lucene in-memory indexes, and after a restart they
are not available, so you cannot use it with persistence. @QuerySqlField
enables DB indexes that are able to work with persisted data, and there is
probably no way to rebuild the Lucene ones for now.
Thanks!
-Dmitry
Hi,
A transaction here might not be the optimal solution, as it by default is
optimistic and may throw an optimistic transaction exception. I believe the
best solution would be to use an EntryProcessor [1]; it will atomically
modify the entry on both TRANSACTIONAL and ATOMIC caches on the affinity data node (that
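A sketch of an atomic in-place update via EntryProcessor (cache name, key and increment logic are illustrative):

```java
IgniteCache<Integer, Integer> cache = ignite.cache("counters");

// The processor runs atomically on the node that owns the key.
cache.invoke(1, (entry, args) -> {
    Integer cur = entry.getValue();
    entry.setValue(cur == null ? 1 : cur + 1);
    return null;
});
```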
Hi,
This exception says that the client node was stopped, but by default it
should wait for the servers. In other words, it waits for reconnect; in that
case it throws an IgniteClientDisconnectedException that contains a future
on which you may wait for the reconnect event.
You may locally listen for
Hi Shravya,
To understand what's going on in your cluster I need full logs from all
nodes. Please share all the files if possible.
Thanks!
-Dmitry
Hi Shravya,
This exception means that the client node is disconnected from the cluster
and is trying to reconnect. You may get the reconnect future from it
(IgniteClientDisconnectedException.reconnectFuture().get()) and wait until
the client is reconnected.
So it looks like you're trying to create a cache on
Hi Ranjit,
Those metrics should be correct; you may also check [1], because Ignite
keeps data off-heap anyway. But if on-heap caching is enabled, it also
caches entries in the Java heap.
[1] https://apacheignite.readme.io/docs/memory-metrics
Thanks!
-Dmitry
Hi Svonn,
I'm not sure that I properly understand your issue. Could you please provide
a problematic code snippet?
> is the policy also deleting the Map
Yes, if it was stored as a value.
Thanks!
-Dmitry
Hi,
Anonymous and inner classes hold a link to the outer class object and might
bring it to the marshaller. When you make it a static inner or a separate
class, you're explicitly saying that you don't need such links.
In thread dumps you need to look for waiting or blocked threads. In your
case in the service
Hi,
Discovery events are processed in a single thread, and cache creation uses
discovery custom messages. Trying to create a cache in the discovery thread
will lead to a deadlock, because the discovery thread will wait in your
lambda instead of processing messages.
To avoid it, just start another thread in
Hi,
It's hard to say why it happens. I'm not familiar with MyBatis and don't
know whether it shares a JDBC connection between threads. It would be great
if you could provide a reproducible example that will help debug the
issue.
Thanks!
-Dmitry
Hi,
Please attach thread dumps from all nodes taken at the moment of hang.
Thanks!
-Dmitry
Hi,
It looks like your classes do not exist on all nodes. Please check that all
the classes you're using in the cache are available on all nodes.
Thanks!
-Dmitry
Hi,
There are a few options:
1) You need backups to survive a node loss. [1]
2) You may enable persistence to survive a grid restart and to store more
data than fits in memory. [2]
3) Check out the nohup command [3]
[1] https://apacheignite.readme.io/docs/primary-and-backup-copies
[2]
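A sketch combining options 1 and 2 (cache name and region settings are illustrative):

```java
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setBackups(1); // survives the loss of one node

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setCacheConfiguration(ccfg);
cfg.setDataStorageConfiguration(storageCfg);
```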
Glad to hear that it was helpful! I wrote the example right in the email, so
I didn't have a compiler to check it :)
Thanks!
-Dmitry
Sure, I meant you need to create your own inner class:
private static class WorkflowEntryProcessor extends EntryProcessor {
    @Override
    public Object process(MutableEntry entry, Object... arguments) throws EntryProcessorException {
Hi Dmitriy,
1. You may use a node filter [1], specifically
org.apache.ignite.util.AttributeNodeFilter, which can be configured in XML
without writing code.
2. Yes, you can. You need to configure data regions and set the
persistenceEnabled flag. After that you may assign caches to those regions.
[2]
Is there any chance that you're using the Connection in more than one
thread? It's not thread safe for now.
Thanks!
-Dmitry