Hello,
IgniteCache has a way to specify the expiry policy at key level for thick
clients via IgniteCache#withExpiryPolicy() facade. I think it may be
reasonable to add similar option to the thin clients protocol as well. Feel
free to open a ticket.
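For reference, the thick-client facade being described can be sketched like this (the cache name and TTL are illustrative, not from the thread):

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class ExpirySketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

        // Entries put through this facade expire 30 seconds after creation;
        // entries put through the plain cache keep the cache-wide policy.
        IgniteCache<Integer, String> withTtl = cache.withExpiryPolicy(
            new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 30)));

        withTtl.put(1, "expires in 30s");
        cache.put(2, "uses the default policy");
    }
}
```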
they are always submitted to primary
> node. How does setting IGNITE_READ_LOAD_BALANCING to false help in this case?
> Even if it is true it will always read the values from primary node as the
> task is landed on primary node.
>
> Thanks,
> Prasad
>
> On Fri, Feb 28,
Prasad,
The current version in the entry is checked against the version which was
read from the very same entry, so in the absence of concurrent updates the
version will be the same.
From your description, I think there might be a concurrent read for the key
that you clear which loads the value on
Prasad,
> Can you please answer following questions?
> 1) The significance of the nodeOrder w.r.t Grid and cache?
>
Node order is a unique integer assigned to a node when the node joins the grid.
The node order is included into GridCacheVersion to disambiguate versions
generated on different nodes th
Prasad,
Since optimistic transactions do not acquire key locks until prepare phase,
it is possible that the key value is concurrently changed before the
prepare commences. An optimistic exception is thrown exactly in this case to
suggest that the user should retry the transaction.
Consider the
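The retry pattern this suggests can be sketched as follows (method and key names are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionOptimisticException;

import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

public class RetrySketch {
    static void incrementWithRetry(Ignite ignite, IgniteCache<String, Integer> cache, String key) {
        while (true) {
            try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
                Integer v = cache.get(key);
                cache.put(key, v == null ? 1 : v + 1);
                tx.commit();
                return;
            }
            catch (TransactionOptimisticException e) {
                // The key changed concurrently between the read and the
                // prepare phase; simply retry the whole transaction.
            }
        }
    }
}
```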
addresses frequent usability and critical stability
issues
https://ignite.apache.org/releases/2.7.6/release_notes.html
Download the latest Ignite version from here:
https://ignite.apache.org/download.cgi
Please let us know if you encounter any problems.
Regards,
Alexey Goncharuk on
Yuriy,
Is your Ignite node running on localhost and has a REST endpoint bound to
localhost:11211? If not, default values will not work. I've re-checked the
control utility in Ignite 2.7 in several environments, works fine for me.
Tue, Dec 11, 2018 at 15:19, Yuriy :
> I explicitly set the --h
Hi Murthy,
You should use user-unsubscr...@ignite.apache.org in order to unsubscribe
from the list.
Cheers,
Alexey
Tue, Aug 28, 2018 at 5:37, Murthy Kakarlamudi :
>
>
Hello,
Can you please share the full stacktrace so we can see where the original
ClassCastException is initiated? If it is not printed on a client, it
should be printed on one of the server nodes.
Thanks!
Tue, Jun 5, 2018 at 18:35, Cong Guo :
> Hello,
>
>
>
> Can anyone see this email?
>
>
>
>
Ray,
Which Ignite version are you running? You may be affected by [1], which
becomes worse the larger the data set is. Please wait for the Ignite 2.5
release which will be available shortly.
[1] https://issues.apache.org/jira/browse/IGNITE-7638
Fri, May 18, 2018 at 5:44, Ray :
> I ran into th
15 19:25 GMT+03:00 Larry :
> Hi Alexey.
>
> Were there any findings? Any updates would be helpful.
>
> Thanks,
> -Larry
>
> On Thu, Mar 8, 2018 at 3:48 PM, Dmitriy Setrakyan
> wrote:
>
>> Hi Lawrence,
>>
>> I believe Alexey Goncharuk was working o
Andrey,
Can you please describe in greater detail the configuration of your nodes
(specifically, number of caches and number of partitions). Ignite would not
load all the partitions into memory on startup simply because there is no
such logic. What it does, however, is loading meta pages for each
Hi,
Just to reiterate and clarify the behavior. Region maxSize defines the
total maxSize of the region, you will get OOME if your data size exceeds
the maxSize. However, when using swap, you can set maxSize _bigger_ than
RAM size, in this case, the OS will take care of the swapping.
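Assuming the data-storage API introduced in Ignite 2.3, the setup described above might look like this (the region name, size, and swap path are illustrative):

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class SwapRegionSketch {
    public static IgniteConfiguration config() {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("swappedRegion")
            // maxSize may exceed physical RAM when a swap path is set;
            // the OS then pages the memory-mapped file in and out.
            .setMaxSize(64L * 1024 * 1024 * 1024) // 64 GB, illustrative
            .setSwapPath("/tmp/ignite-swap");     // illustrative path

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDataRegionConfigurations(region);

        return new IgniteConfiguration().setDataStorageConfiguration(storage);
    }
}
```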
2017-12-21 11
Created the ticket: https://issues.apache.org/jira/browse/IGNITE-7235
2017-12-15 16:16 GMT+03:00 Alexey Goncharuk :
> Ray,
>
> With the current API it is impossible to get a reliable integration of
> Ignite native persistence with 3rd party persistence. The reason is that
> fi
Ray,
With the current API it is impossible to get a reliable integration of
Ignite native persistence with 3rd party persistence. The reason is that
first, CacheStore interface does not have methods for 2-phase commit,
second, it would require significant changes to the persistence layer
itself to
Hi Ray,
Do you see "Page evictions started, this will affect storage performance"
message in the log? If so, the dramatic performance drop you observe might
indicate that we have an issue with page replacement algorithm that we need
to investigate. Can you please check the message?
2017-10-17 17:09 G
Hi,
I assume you have backups=0 for your cache (otherwise you should not see
data loss). There are two ways to achieve what you need:
1) Set PartitionLossPolicy different from IGNORE in your cache
configuration. This way your clients will get an exception when trying to
read a lost key. After a st
Hi,
This should never happen in BACKGROUND mode unless you have a hard power
kill for your Ignite node (which is not your case). I've reviewed the
related parts of the code and found that there were a few tickets fixed in
2.3 that may have caused this issue (e.g. IGNITE-5772). Can you try
building
Hi,
In default WAL mode each cache put() is fsynced to the disk, which causes a
major performance penalty.
You can do either: batch your updates using putAll or using a data streamer.
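Both options can be sketched as follows (cache contents and counts are illustrative):

```java
import java.util.Map;
import java.util.TreeMap;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteDataStreamer;

public class BulkLoadSketch {
    static void load(Ignite ignite, IgniteCache<Integer, String> cache) {
        // Option 1: batch updates so many entries share one WAL fsync.
        Map<Integer, String> batch = new TreeMap<>();
        for (int i = 0; i < 10_000; i++)
            batch.put(i, "value-" + i);
        cache.putAll(batch);

        // Option 2: a data streamer buffers and routes updates internally.
        try (IgniteDataStreamer<Integer, String> streamer =
                 ignite.dataStreamer(cache.getName())) {
            for (int i = 10_000; i < 20_000; i++)
                streamer.addData(i, "value-" + i);
        } // close() flushes any remaining buffered data
    }
}
```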
BTW, how do you insert data to postgres?
2017-10-04 14:32 GMT+03:00 Dmitry Pryakhin :
> Dear colleagues,
>
> I
Hi Raul,
Do you observe this exception under some specific events, like topology
change? Can you share an example of how you use Ignite scheduler in your
service here?
Thanks,
AG
2017-06-06 17:21 GMT+03:00 Raul :
> Hi,
>
> We are trying to deploy a service as cluster singleton in our environmen
Alexey,
There is no CacheMemoryMode in Ignite 2.0 anymore since it has been removed
in favor of the new Ignite architecture. It seems that you've built Ignite
from one of the intermediate states between 1.9 and 2.0.
Can you try with the ignite-2.0 release?
--AG
2017-05-30 17:00 GMT+03:00 Alexey
What about the size of the result set returned?
2017-06-02 12:47 GMT+03:00 Pratham Joshi :
> Yes, I have used *@QuerySqlField(index = true)* in MyClass. And here's
> my query plan using explain.
>
> == Start time = 2017-06-02 15:11:45.*454*
> ***Query executed === >2017-0
How do you configure field1 to be an indexed field? Do you use
@QuerySqlField annotation? Can you share the execution plan of your query
(you need to run "explain select ..." query)?
Also, what is the result set size of your query?
--AG
2017-05-30 14:49 GMT+03:00 Pratham Joshi :
> Hello Guys,
>
It's pretty simple. I've added newbie label for it, anyone can pick it up.
2017-05-17 21:03 GMT+03:00 Denis Magda :
> Alex, thanks.
>
> Can the ticket be resolved in 2.1?
>
>
> On Wednesday, May 17, 2017, Alexey Goncharuk
> wrote:
>
>> Created a fo
Created a follow-up UX ticket:
https://issues.apache.org/jira/browse/IGNITE-5248
2017-05-17 19:20 GMT+03:00 Sergey Chugunov :
> Ajay,
>
> I managed to reproduce your issue. As I can see from logs you're starting
> Ignite using 32-bit JVM.
>
> To fix your issue just use 64-bit JVM or decrease init
Hi Chris,
One of the most significant changes made in 2.0 was moving to an off-heap
storage by default. This means that each time you do a get(), your value
gets deserialized, which might be an overhead (though, I would be a bit
surprised if this causes the 10x drop).
Can you try setting CacheCon
This does not look like a bug to me. Rendezvous affinity function is
stateless, while FairAffinityFunction relies on the previous partition
distribution among nodes, thus it IS stateful. The partition distribution
would be the same if caches were created on the same cluster topology and
then a sequ
Hi Mauricio,
You encounter this exception because SingletonFactory actually stores a
reference to the factory instance which is later is serialized. Instead of
SingletonFactory, you can
use org.apache.ignite.configuration.IgniteReflectionFactory which does not
store this reference and can be succe
Hi,
Ignite uses standard Java engines for SSL, so this depends on the version
of JDK you are running. See, for example, this post [1] on how to disable
cipher suites on Oracle JDK.
Hope this helps,
AG
[1]
http://security.stackexchange.com/questions/120347/how-to-disable-weak-cipher-suits-in-java-
> However, in view of the
> FGC time I fear that such an approach causes a significant impact for
> applications which need to keep huge caches.
>
> Kind regards,
> Peter
>
>
>
> 2017-01-26 15:57 GMT+01:00 Alexey Goncharuk :
>
>> Hi Peter,
>>
>> Leaving d
Hi Peter,
Leaving defragmentation to Ignite is one of the reasons we are trying
PageMemory approach. In Ignite 1.x we basically use OS memory allocator to
place a value off-heap. Once the OS has given us a pointer, the memory
cannot be moved around unless we free this region, thus the fragmentatio
Hello Steve,
You are right, Ignite requires all fields participating in affinity
calculation to be included in the key. The main reason behind this
restriction is that Ignite is a distributed system and it is an absolute
requirement to be able to calculate affinity based only on a key.
Imagine th
Hi Yuci,
Ignite uses Spring XML for configuration creation, so standard
PropertyPlaceholderConfigurer perfectly meets your needs.
Just add
to your configuration file and it will do the trick. Make sure to consult
the PropertyPlaceholderConfigurer javadoc for the available system
properties reso
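The bean definition appears to have been stripped from the archived message; a minimal PropertyPlaceholderConfigurer declaration would presumably look like this:

```xml
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <!-- Lets ${...} placeholders resolve against JVM system properties. -->
    <property name="systemPropertiesModeName" value="SYSTEM_PROPERTIES_MODE_OVERRIDE"/>
</bean>
```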
Hi Alisher,
As Nicolae suggested, try parallelizing your scan using per-partition
iterator. This should give you almost linear performance growth up to the
number of available CPUs.
Also make sure to set CacheConfiguration#copyOnRead flag to false.
--AG
2016-11-28 19:31 GMT+03:00 Marasoiu Nicola
Hi,
Currently SQL queries do not participate in transactions in any way, so you
can see partially committed data from other transactions.
In other words, if a thread 1 updates keys 1, 2, 3, 4 and started
transaction commit, and thread 2 issues an SQL query, this query may see
keys 1, 2 updated an
Hi Tracyl,
Can you describe in greater detail what you are trying to achieve? To my
knowledge, predicate pushdown is a term usually used for map-reduce jobs.
The concept of Ignite's jobs and tasks is more similar to fork-join rather
than map-reduce semantics, so we could better help you if you des
Hi Patrick,
I was not able to reproduce this issue under either 8u51 or 8u101 on Mac
using your code. Can you share the reproducer which does not use
Ignite with us when it's available?
2016-09-09 11:43 GMT+03:00 wbyeh :
> Val,
>
> It's definitely not an ignite issue.
> It's Oracle JDK8
Hi Caio,
How do you create threads to process your task? From what I've read, the
issue you are describing looks like a synchronization bottleneck (all your
threads go through a single lock).
Instead, you could spawn as many jobs as you have partitions in your
grid and use a per-partition sca
Hi,
In FULL_ASYNC mode the API call returns before the update message is sent
to a remote node, let alone the response receipt from the remote node. This
means that in FULL_ASYNC mode you can stop your client even before the grid
knows that you wanted to put something in the cache. You need to use
Hi,
You need to make your EntryProcessor a static class, otherwise it captures
a reference to your enclosing class which causes the serialization
exception.
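The capture problem can be demonstrated with plain Java serialization, independent of Ignite (class names are illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class CaptureDemo {
    static class StaticTask implements Serializable {}   // no hidden outer reference

    class InnerTask implements Serializable {}           // captures CaptureDemo.this

    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        }
        catch (NotSerializableException e) {
            return false; // the enclosing instance (CaptureDemo) is not Serializable
        }
        catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        CaptureDemo outer = new CaptureDemo();
        System.out.println(serializes(new StaticTask()));      // true
        System.out.println(serializes(outer.new InnerTask())); // false
    }
}
```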
2016-08-24 17:54 GMT+03:00 Vladislav Pyatkov :
> Hello,
>
> Could you please provide reproduced example?
>
> On Wed, Aug 24, 2016 at 11:04
You need to implement only GridSecurityProcessor and return implementation
instance from PluginProvider#createComponent().
DiscoverySpiNodeAuthenticator is an internal interface and Ignite already
has an implementation which delegates to
GridSecurityProcessor#authenticateNode().
2016-08-19 11:52
Hi,
The plugin activation mechanism changed since RC1 to Java Service Provider
[1]. You need to add a META-INF/services/your.plugin.Provider entry to your
plugin jar in order for the plugin to be activated. The file name should be the
fully-qualified name of your plugin provider and it should contain
Hi,
If I understand correctly, you want to reduce the total number of
partitions for your cache to 2. Is there any reason you want to do this? It
is impossible to change the number of partitions without the full cluster
restart, so if at some point in time you want to add more nodes to your
cluste
Hi,
Do you have an IGNITE_HOME environment variable set? If you do, please
re-check that it points to the correct folder.
2016-08-04 9:15 GMT+03:00 chevy :
> I had already provided full permissions (777) and owner for these
> folders/files is root (also starting it as a root user). So I think
>
Ross,
The optimization you suggested does not work in the case when remote filter
is present, but it indeed works for your case. I created a ticket for this
optimization: https://issues.apache.org/jira/browse/IGNITE-3607
2016-07-29 17:51 GMT+03:00 ross.anderson :
> Glad to be of assistance
> I
Jason,
As far as I understood your use case, the machine-to-group assignment (the
digit in your example) is constant and cannot change over time. In this
case user attributes should work perfectly like Val suggested.
If, for some reason, this does not meet your requirements, my suggestion
would be to
Jason,
As a workaround I would suggest that you store an instance of Ignite rather
than an instance of the logger in your class and make it not transient. Ignite
can handle serialization/deserialization of its own instances as long as
gridName is the same on all nodes.
BTW - my 2 cents on the filter it
Hi,
Ignite 1.6 requires data to be properly collocated in order for joins to
work correctly. Namely, data being joined from tables Kc21 and Kc24 must be
collocated. See [1] for more details on affinity collocation and [2] for
more details on how SQL queries work. Also, take a look
at org.apache.ig
Hi,
The answers are inline:
Hi, all
> I am researching the cluster rebalance, and the sync mode
> is CacheWriteSynchronizationMode.PRIMARY_SYNC, when rebalance completed,
> how does it ensure that the primary partition has already synchronized with
> backup partition because it possible ha
I remember asking this question on Spark user list and parallelize() was
the suggested option to run a closure on all Spark workers. Paolo, I like
the idea with foreachPartition() - maybe we can create a fake RDD with
partition number equal to the number of Spark workers and then map each
partition
Hi,
Good point! You can go ahead and create a ticket for this. It looks really
simple to implement, so you can either fix it by yourself, or somebody from
the community will pick it up.
Thanks,
AG
Hi,
As Dmitriy pointed out, there is no reliable way to timeout a transaction
once the commit phase has begun.
If there is a chance that your cache store may stall for an unpredictable
amount of time, this should be handled within the store and possibly throw
an exception, but this will result in
Hello Mans,
Ignite data partitioning and distribution is defined by the
AffinityFunction [1]. Basically, this is a three-step process:
Given a key K, Ignite first determines an affinity key AK using configured
AffinityKeyMapper. Then, for the given affinity key AK a corresponding
partition is det
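The key-to-partition step can be illustrated with a simplified function; this mirrors the idea, not the exact internals of Ignite's affinity functions:

```java
public class PartitionSketch {
    // Simplified affinity-key -> partition mapping: mask off the sign bit
    // and take the modulo over the configured partition count.
    static int partition(Object affinityKey, int partitions) {
        return (affinityKey.hashCode() & Integer.MAX_VALUE) % partitions;
    }

    public static void main(String[] args) {
        // The same key deterministically lands in the same partition.
        System.out.println(partition("order-42", 1024));
        System.out.println(partition("order-42", 1024));
    }
}
```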
Hi,
As Andrey pointed out, now you can grab an expiry policy factory from
Ignite's cache configuration, create an instance and get durations you
need. I agree that this way is a bit awkward and it only covers a configured
ExpiryPolicy, currently there is no way to check if an instance of
IgniteCache
Kristian,
Are you sure you are using the latest 1.7-SNAPSHOT for your production
data? Did you build binaries yourself? Can you confirm the commit# of the
binaries you are using? The issue you are reporting seems to be the same as
IGNITE-3305 and, since the fix was committed only a couple of days
Kristian,
Just letting you know - I've merged the fix to master branch.
; an explicit call to cache.rebalance().get() on the new node.
>
> Kristian
>
>
> 2016-06-13 20:03 GMT+02:00 Alexey Goncharuk :
> > Kristian,
> >
> > I am a little bit confused by the example you provided in your first
> e-mail.
> > From the code I see that
Kristian,
I am a little bit confused by the example you provided in your first
e-mail. From the code I see that you create a cache dynamically by calling
getOrCreateCache, and the next line asserts that cache size is equal to a
knownRemoteCacheSize. This does not make sense to me because cache cre
Note that IgniteDataStreamer implements AutoCloseable, so the code in the
example you are referring to is correct because data streamer is used in
try-with-resources block. It is not required to call flush() before calling
close() because close() will flush the data automatically.
Hi Amit,
You can also close() the streamer or call flush() explicitly to make sure
all the added data was added to the cache.
2016-06-04 10:57 GMT-07:00 visagan :
> The Streamer actually buffers the data. Buffer Default Size is 1024, either
> the buffer size is reached Or you set a Flush Frequen
Ignite _does_ use a separate thread pool to persist data in write-behind
mode, this is the essential difference between write-through and
write-behind. However, if your cache load rate is significantly higher than
the database insert rate, the write queue will grow faster than background
threads ca
Hi,
Ignite client automatically checks the partition counter and filters out
duplicate events; you do not need to do it manually to get rid of
duplicates. However, starting from Ignite 1.6 the update counter is available
through the CacheQueryEvent API.
2016-06-03 5:23 GMT-07:00 M Singh :
> Hi Folks:
>
4.0
>
> -- 原始邮件 --
> *发件人:* "Alexey Goncharuk";;
> *发送时间:* 2016年6月3日(星期五) 中午1:02
> *收件人:* "user";
> *主题:* Re: put data and then get it , but it returns null in sometimes
>
> Hi,
>
> Which version of Ignite are you using?
>
> 2016-
Hi,
Which version of Ignite are you using?
2016-06-02 21:55 GMT-07:00 往事如烟 :
> thanks for your answer, I don't use configure file, so almost we used the
> default value, only set some items as follows:
>
> *CacheConfiguration cacheCfg = new
> CacheConfiguration<>("testName");*
>
> *cacheCfg.setW
David,
Have you considered using continuous queries for your use-case [1]? Even if
there were such a thing as a transaction event, I do not see how you can
reliably (read - in a proper order) publish this information to Kafka.
Say, you have 2 clients and 2 server nodes. First client executes a
tr
Hi,
SPI stands for Service Provider Interface. In Ignite it is an isolated,
abstracted component which can be plugged in to provide new functionality or
to replace/extend existing functionality.
For example, you can implement your own CollisionSPI to control how
ComputeJobs are scheduled on a local node, or extend
Hi,
Can you take a thread dump of the process in this 'hanging' state and
attach it to the thread?
I think it makes sense not to validate store configuration unless we know
that the entry is enlisted as WRITE.
I've created the issue: https://issues.apache.org/jira/browse/IGNITE-3086
2016-05-04 5:28 GMT-07:00 Denis Magda :
> As Val already mentioned you can't mix write-through and write-behind
Hi,
How many entries do you have in your transaction and how many nodes do you
have in your setup? Note that for write-behind the cache store is always
invoked from the primary node. Could it be the case that your transaction is
small enough that all entries go to different nodes?
Yes, as long as cache configurations have matching affinity configuration
and those caches are deployed on the same nodes - the same affinity keys
will go to the same nodes in any cache.
2016-05-02 3:38 GMT-07:00 nikhilknk :
> Thanks Alexey . This is working . I have one more question following
Hi,
Scala does not automatically place annotations to generated fields, you
need to use the annotation as follows:
@(AffinityKeyMapped @field) val marketSectorId:Int = 0
Hi,
As long as cache configuration is the same, affinity assignment for such
caches will be identical, so you do not need to explicitly specify cache
dependency. On the other hand, if cache configurations do differ, it is not
always possible to collocate keys properly, so for this case such a
depe
Hi,
You can achieve this behavior by setting a backup filter to
RendezvousAffinityFunction or FairAffinityFunction (e.g. see
org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction#setBackupFilter).
This filter is a predicate that accepts the assigned primary node and a
potential ca
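A sketch of such a filter (the "RACK" attribute is an illustrative user attribute set by the application, not something Ignite defines):

```java
import java.util.Objects;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupFilterSketch {
    public static CacheConfiguration<?, ?> config() {
        RendezvousAffinityFunction aff = new RendezvousAffinityFunction();

        // Accept a node as a backup only if it sits in a different "RACK"
        // than the primary, so a rack failure cannot lose both copies.
        aff.setBackupFilter((primary, backup) ->
            !Objects.equals(primary.attribute("RACK"), backup.attribute("RACK")));

        return new CacheConfiguration<>("myCache").setAffinity(aff);
    }
}
```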
Ravi,
It's been a while since I last used Hibernate, but as far as I
remember, you may do either:
* call any method (e.g. size()) on your proxied collection to trigger lazy
collection initialization
* Call Hibernate.initialize(collectionProxy)
* Use @ManyToOne(fetch = FetchType.EAGER) in y
Val,
StringBuilder.append() is what javac generates for `"Read value: " + val`
in the code.
Ravi,
Hibernate returned you a collection proxy for your map; however, by the time
control flow leaves the CacheStore your Hibernate session gets closed and
this collection proxy cannot be used anymore. Yo
Yes, this is correct, if there is no write-behind, then in TRANSACTIONAL
cache the database write happens from the originating node, and in ATOMIC
cache - from primary nodes.
Denis,
Updates are always queued on primary nodes when write-behind is enabled,
regardless of atomicity mode. This is required because otherwise updates
can be written to the database in a wrong order.
We did not queue database updates on backups because we did not have a
mechanism that would all
It looks like the XML file you are using is not valid. Can you share the
config with us?
From the error message "Spring XML configuration path is invalid:
/home/test/SparkIgniteStreaming/config/example-cache.xm" my guess is that
the configuration file is absent on the Spark executor node.
2016-04-04 8:17 GMT-07:00 Yakov Zhdanov :
> Thanks for sharing the code. Alex Goncharuk, can yo
Hi Arthi,
Can you elaborate more on what you want to achieve by collocation based on
two fields?
If you have a class A, which is used as a cache key, has a field aKey, then
setting this field as an affinity key tells ignite that an instance of
class A should always be stored on the same node (mor
Jimmy,
The approach you suggested will not work either. Consider a situation when
concurrent updates are required for your object. In this case there is a
chance that you modify version 1 of your object, but when you do a
cache.get(), you will receive an already updated, different version of your
obj
Hi,
It may be the case that you can utilize BinaryObjectBuilder instead of
HashMap for the use-case you described [1]. It is an abstraction that was
created to handle cases when no class definitions exist. You can also
change the structure of your binary objects at runtime.
So the code you have d
It looks like we are missing an option to tell IgniteRDD to work with
binary objects. When an iterator is created, it tries to deserialize
objects, and since you do not have a corresponding class, the exception
occurs. I will create a ticket for this shortly.
Despite this, you should still be able
Hi,
Consistency between nodes is guaranteed in ATOMIC mode, however, the
read-after-write guarantee is met in the following cases:
- cache write synchronization mode is FULL_SYNC. In this mode cache.put()
will not return control until all data nodes (primary and backup)
responsible for the data a
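Selecting FULL_SYNC in configuration might look like this (the cache name is illustrative):

```java
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class SyncModeSketch {
    public static CacheConfiguration<Integer, String> config() {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");
        // put() returns only after primary AND backup nodes acknowledge the
        // update, so a subsequent get() from any owning node sees the value.
        cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
        return cfg;
    }
}
```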
Dmitriy,
You should have used the same entity name in QueryEntity as the one you
used when creating a builder, i.e.
queryEntity.setValueType("DT1")
because you can have multiple value types stored in one cache.
I will create a ticket to throw a proper exception when BinaryObject is
used in query
Yep, BinaryObjectBuilder should definitely be a solution for this. You can
obtain an instance of Ignite from IgniteContext and use the IgniteBinary
interface to get an instance of BinaryObjectBuilder to build object
structures dynamically. And you can use QueryEntity class to describe the
index con
Myron,
I believe IGNITE-2645 should be fixed in the near future since the issue is
critical, and will definitely be included to 1.6.
As for the IGNITE-1018, I will not speculate on the timelines because the
issue has some workarounds, even though it is possible that it will be
fixed for 1.6 if so
Oh, I see now what you mean, IGNITE-1018 has escaped my view. Then, until
IGNITE-1018 is fixed, the only guaranteed approach is to wait on a CDL.
Here is the pseudo-code that I have in mind:
LifecycleBean or after Ignition.start():
// Populate your node local map
CountDownLatch init = nlm.ge
Myron,
What approach did you use initially to initialize the node local map?
IgniteNode is considered to be fully functional as soon as Ignition.start()
method returns control, so any operations done on NodeLocalMap after the
node start should be considered to be run concurrently with EntryProcess
Myron,
We have a specific test for the exact use-case you have described and it
passes - see IgniteAtomicCacheEntryProcessorNodeJoinTest. I tried to play
with the configuration (added test store, tried different memory modes),
but was not able to make the test fail.
Is there any chance you can sh
>
> Thanks for your suggestion. I did not follow this:
>
> "For this use-case I would suggest using single cache puts (the same way
> you
> insert data to Oracle) and combine it with write-behind store writing to
> HDFS, this should give you better latencies."
>
> Are you suggesting not using
Kobe,
I am not sure this is a fair comparison because writing a file to IGFS
involves 3 operations: updating the metadata cache (empty file creation),
actual file writing and then updating the metadata cache again (update the
file size).
For this use-case I would suggest using single cache puts (t
I see no fundamental reasons why it cannot be supported, however, as far as
I know, the current queue implementation starts several nested transactions on
more than one system cache, so re-writing this into a single transaction and
supporting system and user caches in one transaction may require quite
s
Folks,
The current implementation of IgniteCache.lock(key).lock() has the same
semantics as the transactional locks - cache topology cannot be changed
while there exists an ongoing transaction or an explicit lock is held. The
restriction for transactions is quite fundamental, the lock() issue can
A little correction: in this particular case inputStream does return 0,
which leads to an infinite loop; however, in general this may not be the
case, so the implementation should not read beyond the object boundary anyway.
Hello Myron,
Your implementation of Externalizable interface is incorrect. The number of
bytes that can be read from the object input stream passed to the
readExternal() method is not limited, so you need to make sure that you do
not read more bytes than the number of bytes written.
The correct
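One correct pattern is to length-prefix variable-size data so that readExternal() never reads past the object boundary (a plain-Java sketch; the class is illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;

public class Payload implements Externalizable {
    private byte[] data;

    public Payload() {}                       // public no-arg ctor required by Externalizable

    public Payload(byte[] data) { this.data = data; }

    public byte[] data() { return data; }

    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(data.length);            // record the length first...
        out.write(data);                      // ...then exactly that many bytes
    }

    @Override public void readExternal(ObjectInput in) throws IOException {
        int len = in.readInt();               // read the recorded length,
        data = new byte[len];
        in.readFully(data);                   // and never read beyond it
    }

    // Serialize and deserialize through in-memory streams.
    static Payload roundTrip(Payload p) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(p);
            }
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Payload) in.readObject();
            }
        }
        catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```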
Myron,
Thank you for reporting the issue. The assertion happens when the value is
present in the store, absent in the cache and you run invokeAll(). As a
temporary solution, you can either call invoke() for each particular key
individually, or call getAll() for the keys prior to calling invokeAll(
Ravi,
A small typo sneaked into the code snippet - the lock() call was omitted - it
should be like this (I am also omitting the try-finally block for
simplicity):
IgniteCache cache = ...;
Lock lock = cache.lock(key);
lock.lock();
// ... process while lock is held
lock.unlock();
Myron,
I tried to reproduce this assertion on ignite-1.5, but with no luck. Can
you share your full cache configuration, the number of nodes in your
cluster and a code snippet allowing us to reproduce the issue?
Thanks,
AG
Myron,
This is a known usability issue, see [1]. You need to set
atomicWriteOrderMode to PRIMARY in order to make entry processors work
correctly. I will cross-post this mail to the dev list in order to raise the
ticket priority.
[1] https://issues.apache.org/jira/browse/IGNITE-2088
--AG