Hello!
Then you need to implement your own AffinityFunction by subclassing
RendezvousAffinityFunction.
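A minimal sketch of such a subclass, assuming the goal is to confine all entries to a single partition (the class name and fixed-partition strategy are illustrative choices, not code from this thread):

```java
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;

// Sketch: route every key to partition 0, so all entries of the cache
// end up on the same primary node. Name and strategy are illustrative.
public class SingleNodeAffinityFunction extends RendezvousAffinityFunction {
    @Override
    public int partition(Object key) {
        return 0; // same partition for every key -> one primary node
    }
}
```

It could then be plugged in with `cacheConfiguration.setAffinity(new SingleNodeAffinityFunction())`. Note that this trades away all distribution of that cache's data.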
Regards,
--
Ilya Kasnacheev
Tue, 13 Oct 2020 at 13:15, ssansoy wrote:
Hi,
RE: the custom affinity function, this is what we have:
public class CacheLevelAffinityKeyMapper implements AffinityKeyMapper {
    private final Logger LOGGER =
        LoggerFactory.getLogger(CacheLevelAffinityKeyMapper.class);

    @Override
    public Object affinityKey(Object key) {
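The snippet is cut off above; a complete mapper along these lines might look like the following. This is a sketch assuming the mapper returns the key's binary type name (as described elsewhere in the thread), not the original implementation:

```java
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.affinity.AffinityKeyMapper;

public class CacheLevelAffinityKeyMapper implements AffinityKeyMapper {

    // Map every key of the same binary type to one affinity key, so all
    // entries of that type are colocated on the same primary node.
    @Override
    public Object affinityKey(Object key) {
        if (key instanceof BinaryObject)
            return ((BinaryObject) key).type().typeName();

        return key; // fall back to the key itself for non-binary keys
    }

    @Override
    public void reset() {
        // Stateless: nothing to reset.
    }
}
```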
Hello!
I think you may need to write a custom affinity function for your use case,
which will confine every cache to a single primary node.
Regards,
--
Ilya Kasnacheev
Tue, 13 Oct 2020 at 11:18, ssansoy wrote:
Hi, thanks for the reply again!
1. @AffinityKeyMapped is not deprecated as you mentioned, but
AffinityKeyMapper is (it seems AffinityKeyMapper is usable in places where
the annotation cannot be used, e.g. our case). If we use the AFFINITY_KEY
clause on the table definition, we don't want to select
Hello!
1. I don't think that AffinityKeyMapped is deprecated, but there are cases
when it is ignored :(
You can use affinity_key clause in CREATE TABLE ... WITH.
2. If it's the same node for all keys, all processing will happen on that
node.
3. It depends on what you are trying to do.
4. I don't
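As an illustration of point 1, the affinity_key clause might be used like this (the table, column, and cache names are invented for the example, not taken from this thread):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class CreateTableExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // AFFINITY_KEY names the column used for affinity, so all rows
            // with the same COMPANY_ID are colocated on the same node.
            ignite.getOrCreateCache("bootstrap").query(new SqlFieldsQuery(
                "CREATE TABLE TRADE (" +
                "  TRADE_ID BIGINT, COMPANY_ID BIGINT, PRICE DOUBLE," +
                "  PRIMARY KEY (TRADE_ID, COMPANY_ID)" +
                ") WITH \"AFFINITY_KEY=COMPANY_ID\"")).getAll();
        }
    }
}
```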
Thanks, this is what I have ended up doing. However, it looks like
AffinityKeyMapper is deprecated?
I am adding an implementation of this (which returns the binary type name of
the key BinaryObject) - and this does seem to have the desired effect (e.g.
all keys with the same type name are marked as
Hello!
In this case you could use an affinity function which will put all these
entries on the same node, but it will mean that you no longer use any
distribution benefits.
I don't think it is a good design if you expect the local listener to get a
transaction's worth of entries at once. The listener should
Hi,
We have an app that writes N records to the cluster (REPLICATED) - e.g.
10,000 records, in one transaction.
We also have an app that issues a continuous query against the cluster,
listening for updates to this cache.
We'd like the app to receive all 10,000 records in one call into the
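For reference, the continuous-query side described above might be set up roughly like this (the cache name and key/value types are assumptions). Note that the local listener is invoked with batches of events, and one transaction is not guaranteed to arrive as a single batch:

```java
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;

public class ListenerExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, String> cache = ignite.getOrCreateCache("records");

            ContinuousQuery<Long, String> qry = new ContinuousQuery<>();

            // The local listener receives updates in batches; batch boundaries
            // do not necessarily line up with transaction boundaries.
            qry.setLocalListener(events -> {
                for (CacheEntryEvent<? extends Long, ? extends String> e : events)
                    System.out.println(e.getKey() + " -> " + e.getValue());
            });

            cache.query(qry);
        }
    }
}
```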