Ignite DefaultDataRegion config

2020-07-21 Thread kay
Hello!

What size should I set for the DefaultDataRegion?
Every cache has its own specific region (I'm not going to use the
defaultDataRegion).

Is 40 MB enough if I will not use the defaultDataRegion?
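For context, an explicit configuration along those lines might look like the sketch below (Spring XML; the region names and sizes are illustrative, and Ignite enforces a lower bound on region size, so check the DataStorageConfiguration defaults for your version):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- Shrink the default region, since no cache will use it. -->
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="name" value="Default_Region"/>
                <property name="maxSize" value="#{40L * 1024 * 1024}"/>
            </bean>
        </property>
        <!-- Dedicated region actually used by the caches. -->
        <property name="dataRegionConfigurations">
            <list>
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="name" value="Cache_Region"/>
                    <property name="maxSize" value="#{512L * 1024 * 1024}"/>
                </bean>
            </list>
        </property>
    </bean>
</property>
```

Each cache then selects its region via CacheConfiguration.setDataRegionName.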

Thank you so much
I will wait for reply!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Checking dataregion limits

2020-07-21 Thread Victor
Hi,

What is the best way to check data region limits? Currently I am using the
MBean attributes below to monitor this:

1. CacheClusterMetricsMXBeanImpl / CacheLocalMetricsMXBeanImpl
Size - provides the total number of entries.

2. DataRegionMetricsMXBeanImpl
OffHeapSize - shows a value close to, but not exactly, the max size I set
for the cache.
TotalAllocatedSize - seems to increase as data is added to the cache.

A few queries:
1. Why does the max size not show exactly what I set? E.g. I set the size to
20 MB (i.e. 20 * 1024 * 1024 = 20971520), but the value of OffHeapSize is
19922944. Why is it not exact?
2. The initial size under the same MBean shows simply 19, whereas I set it
to 19 MB (in bytes). Any idea why that is?
3. I expected "OffheapUsedSize" to be incremented every time I add data to
this cache until it hits the max size. However, it always stays at 0. The
only value that increments is "TotalAllocatedSize". Is that the right
attribute to check for data-size growth, or should other attributes be
checked?
4. With no data yet, "TotalAllocatedSize" already shows some amount of
allocation. In the above case of a 20 MB max size, I could see
TotalAllocatedSize at 8 MB even before any data was added to the cache.
5. Finally, if "TotalAllocatedSize" is indeed the attribute to track size
growth, I should expect eviction to kick in when its value reaches 90% of
the max size. Is this understanding correct?
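Worth noting on query 1: the reported 19922944 is exactly 19 MiB, i.e. the configured *initial* size rather than the 20 MiB max, which may be what OffHeapSize reflects here (an assumption, not confirmed by the docs). A quick check of the arithmetic:

```java
public class RegionSizes {
    // Converts mebibytes to bytes, the unit used for DataRegionConfiguration sizes.
    public static long mibToBytes(long mib) {
        return mib * 1024 * 1024;
    }

    public static void main(String[] args) {
        System.out.println(mibToBytes(20)); // 20971520, the configured max size
        System.out.println(mibToBytes(19)); // 19922944, the value OffHeapSize reported
    }
}
```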

I'll run some more tests.

Victor







Re: Apache Ignite CacheRebalanceMode Is Not Respected By Nodes

2020-07-21 Thread Evgenii Zhuravlev
Hi,

CacheRebalanceMode is responsible for a different thing: it comes into play
when data needs to be rebalanced due to a topology (or baseline topology)
change. It does not control how data is distributed between nodes on put
operations. So, when you insert data, part of that data belongs to
partitions that are not owned by the local node.

To achieve what you want, you can just create 2 different caches with
NodeFilter:
https://www.javadoc.io/doc/org.apache.ignite/ignite-core/latest/org/apache/ignite/util/AttributeNodeFilter.html
Using that you can avoid data movement between nodes and your thin client
will see these caches too.
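For illustration, a cache pinned to a subset of nodes might be configured like this (Spring XML sketch; the NODE_GROUP attribute name and value are made up here, and would be set on each server via IgniteConfiguration.setUserAttributes):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="cacheForGroup1"/>
    <!-- Host this cache only on nodes started with the matching user attribute. -->
    <property name="nodeFilter">
        <bean class="org.apache.ignite.util.AttributeNodeFilter">
            <constructor-arg value="NODE_GROUP"/>
            <constructor-arg value="group1"/>
        </bean>
    </property>
</bean>
```

A second cache with a different attribute value would then live only on the other node.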

Evgenii




ср, 15 июл. 2020 г. в 07:58, cparaskeva :

> The setup: Hello folks, I have a simple Apache Ignite setup with two Ignite
> instances configured as server nodes over C# and one Ignite instance as a
> client node over Java.
>
> What is the goal: Populate data on instance 1 and instance 2 but avoid
> movement of data between them. In other words, data received on each node
> must stay on that node. Then, using the Java client, run queries against
> the two nodes either combined (distributed join) or per node (using
> affinity).
>
> The issue: With one server node everything works as expected. However, with
> more than one server node, the cluster's data is balanced between the x
> member nodes even though I have explicitly set the CacheRebalanceMode to
> None, which should disable rebalancing between the nodes. The insert time
> increases by 4x-10x as a function of each node's populated data.
>
> P.S. I have tried changing the cache mode from Partitioned to Local, where
> each node keeps its data isolated in its internal H2 db, but in that case
> the Java client is unable to detect the nodes or read any data from the
> cache of each node.
>
> Java Client Node
>
> IgniteConfiguration cfg = new IgniteConfiguration();
> // Enable client mode.
> cfg.setClientMode(true);
>
> // Setting up an IP Finder to ensure the client can locate the
> servers.
> TcpDiscoveryMulticastIpFinder ipFinder = new
> TcpDiscoveryMulticastIpFinder();
>
> ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
> cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
>
> // Configure Ignite to connect with .NET nodes
> cfg.setBinaryConfiguration(new BinaryConfiguration()
> .setNameMapper(new BinaryBasicNameMapper(true))
> .setCompactFooter(true));
>
> // Start Ignite in client mode.
> Ignite ignite = Ignition.start(cfg);
>
>
> IgniteCache cache0 = ignite.cache(CACHE_NAME);
> IgniteCache cache =
> cache0.withKeepBinary();
>
> // execute some queries to nodes
> C# Server Node
>
>
>IIgnite _ignite = Ignition.Start(IgniteUtils.DefaultIgniteConfig());
>
> // Create new cache and configure queries for Trade binary
> types.
> // Note that there are no such classes defined.
> var cache0 = _ignite.GetOrCreateCache<AffinityKey, Trade>("DEALIO");
>
> // Switch to binary mode to work with data in serialized
> form.
> _cache = cache0.WithKeepBinary<IBinaryObject, IBinaryObject>();
>
>//populate some data ...
>
> public static IgniteConfiguration DefaultIgniteConfig()
> {
> return new IgniteConfiguration
> {
>
>
> PeerAssemblyLoadingMode =
> PeerAssemblyLoadingMode.CurrentAppDomain,
> BinaryConfiguration = new BinaryConfiguration
> {
> NameMapper = new BinaryBasicNameMapper { IsSimpleName =
> true },
> CompactFooter = true,
> TypeConfigurations = new[] {
> new BinaryTypeConfiguration(typeof(Trade)) {
> Serializer = new IgniteTradeSerializer()
> }
> }
> },
> DiscoverySpi = new TcpDiscoverySpi
> {
> IpFinder = new TcpDiscoveryMulticastIpFinder
> {
> Endpoints = new[] { "127.0.0.1:47500..47509" }
> },
> SocketTimeout = TimeSpan.FromSeconds(0.10)
> },
> Logger = new IgniteNLogLogger(),
> CacheConfiguration = new[]{
> new CacheConfiguration{
> PartitionLossPolicy=PartitionLossPolicy.Ignore,
> RebalanceMode=CacheRebalanceMode.None,
> Name = CACHE_NAME,
> CacheMode = CacheMode.Partitioned,
> Backups = 0,
> QueryEntities = new[] {
> new QueryEntity(typeof(AffinityKey),
> typeof(Trade))
>   

Re: Bug with client cache_size for metrics() and localMetrics()

2020-07-21 Thread tschauenberg
Apologies. I modified the tests to wait longer in the 2.8.1 test scenario
for the client printouts, and eventually the cluster metrics started
reporting.

Client - Cluster metrics: cache_size=100
Client -   Local metrics: cache_size=0





Re: 3rd party persistence: Mapping with RDBMS tables that don't have primary keys

2020-07-21 Thread Denis Magda
Hi Alex,

How will you access such records in Ignite? SQL lookups? If primary
key-based searches are irrelevant in Ignite as well then think about the
following:

   - Obviously, Ignite still requires a primary key and that can be an
   integer number incremented by your application:
   https://apacheignite.readme.io/docs/id-generator
   - The default CacheStore implementation that writes changes through to the
   relational database needs to be overridden. The CacheStore.update() and
   CacheStore.insert() methods must not try to use the value of the primary
   key; instead, use a custom technique to update those Postgres tables with
   Ignite data.
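As a sketch of that second point, the overridden store would bypass the synthetic Ignite key entirely and write only the value columns. The SQL it might issue for the example table is shown below (class, method, and table names are hypothetical, not Ignite API):

```java
public class KeyValuesStoreSql {
    // INSERT used by a custom CacheStore write path: the synthetic Ignite
    // primary key is deliberately absent, since the Postgres table has no
    // primary key column to map it to.
    public static String insertSql(String table) {
        return "INSERT INTO " + table + " (key, value) VALUES (?, ?)";
    }

    public static void main(String[] args) {
        System.out.println(insertSql("key_values_table"));
        // INSERT INTO key_values_table (key, value) VALUES (?, ?)
    }
}
```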


-
Denis



On Tue, Jul 21, 2020 at 12:27 PM Alex Panchenko 
wrote:

> Hi,
>
> I have an Ignite cluster with 3rd party persistence enabled (Postgres db).
> I
> have a few tables in Postgres which don't have primary keys, but I need to
> keep these tables in Ignite:
>  They look like this table (as an example):
>
>   KeyValuesTable {
> key: UUID
> value: String
> }
>
> NOTE: there is no Primary key or any unique index in this table. The same
> key value may have multiple values attached, for ex:
>
> #1 record: key=1, value="v1"
> #2 record: key=1, value="v2"
> #3 record: key=2, value="v1"
> #4 record: key=3, value="v1"
> #5 record: key=3, value="v2"
>
> NOTE: I cannot change the table in Postgres and add a primary key field,
> like id or something like that.
> Can you suggest some solution, please?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ContinuousQueryWithTransformer with custom return type

2020-07-21 Thread Denis Magda
Both the local listener and remote filter will return BinaryObjects as long
as the continuous query was registered with the "cache.withKeepBinary()"
option set.

However, there is a trick you can use in the implementation of the local
listener. Just call "deserialize()" method on the instance of a received
binary object:
https://github.com/GridGain-Demos/ignite-learning-by-examples/blob/master/complete/src/main/java/org/gridgain/examples/App4ContinousQueries.java#L76

-
Denis


On Tue, Jul 21, 2020 at 9:21 AM Devakumar J 
wrote:

> Hi,
>
> I am trying to register a CQ with a transformer with the keep-binary option
> and trying to return a custom defined object. I see the transformer
> properly creating the custom object and returning it properly.
>
> But at the same time the local listener is not receiving the custom object;
> instead I think it receives a binary object. How can I ensure that the
> local listener receives the custom object properly?
>
> ContinuousQueryWithTransformer
> continuousQueryWithTransformer = new ContinuousQueryWithTransformer<>();
>
> I want local listener to receive ListenerEvent (Custom defined object) as
> below:
>
> continuousQueryWithTransformer.setLocalListener(cacheEntryEvents ->
> cacheEntryEvents.forEach(listenerEvent -> {
> LOG.info("Event Received : {}", listenerEvent);}
>
> But the local listener throws: BinaryObject can't be cast to ListenerEvent.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Load balanced CQ listener/Catch up missed events

2020-07-21 Thread Denis Magda
The queue can be a good alternative [1], but the application has to take
care of fault handling. For example, if a client reads a message from the
queue, starts doing some job as the message prescribes, and then fails in
the middle, the job is incomplete and nobody else can pick it up. So you
would need to track somewhere else whether the message was in fact fully
processed (just toggle a status flag).

[1]
https://apacheignite.readme.io/docs/queue-and-set#cache-queues-and-load-balancing
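The status tracking described above can be sketched with plain JDK collections standing in for Ignite's distributed queue and a status cache (all names here are illustrative):

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueWithStatus {
    public enum Status { PENDING, IN_PROGRESS, DONE }

    // Stand-ins for the distributed queue and a replicated status cache.
    public static final Queue<String> queue = new ConcurrentLinkedQueue<>();
    public static final Map<String, Status> status = new ConcurrentHashMap<>();

    public static void publish(String msg) {
        status.put(msg, Status.PENDING);
        queue.add(msg);
    }

    // The consumer records progress around the actual work, so a consumer
    // that dies mid-job leaves an IN_PROGRESS marker that a periodic janitor
    // task could detect and re-queue.
    public static void consume() {
        String msg = queue.poll();
        if (msg == null)
            return;
        status.put(msg, Status.IN_PROGRESS);
        // ... do the work the message prescribes ...
        status.put(msg, Status.DONE);
    }

    public static void main(String[] args) {
        publish("event-1");
        consume();
        System.out.println("event-1 -> " + status.get("event-1")); // event-1 -> DONE
    }
}
```

In the real setup the queue and map would be an IgniteQueue and an IgniteCache, so the markers survive a client crash.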

-
Denis


On Sat, Jul 18, 2020 at 12:42 AM Devakumar J 
wrote:

> Hi Denis,
>
> Thanks for your reply.
>
> I will explore the custom implementation of remote filter for load
> balancing.
>
> Also, currently we don't use Kafka in our tech stack, so we don't want to
> introduce it just for this purpose.
>
> I was also checking ignite events and distributed message queue as an
> alternative for Continuous Queries.
>
> Can we define setup like,
> 1. Define distributed message queue across the cluster.
> 2. Each server node publish the selective cache events to the queue.
> 3. Client nodes consume the queue.
>
> So in this case, as per my understanding of the distributed queue:
>   a. Client nodes will get events in a load-balanced way.
>   b. If any one client node goes down, the other node will start consuming
> all the events.
>   c. If both client nodes go down, the queue will still hold the entries
> that are not yet consumed, as it is persisted. So when the client nodes
> come up, they will resume from where they left off.
>
> Is this feasible, or do you see any issue with this setup in case of
> server node failure or server-node rebalancing?
>
> Thanks,
> Devakumar J
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ContinuousQueryWithTransformer with custom return type

2020-07-21 Thread akorensh
Hi,
  Looks like you are not setting up your code correctly.
   Take a look at:
https://apacheignite.readme.io/docs/continuous-queries#remote-transformer
  see:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/CacheContinuousQueryWithTransformerExample.java

  If you are still having trouble send a full reproducer and I'll take a
look.
Thanks, Alex





Bug with client cache_size for metrics() and localMetrics()

2020-07-21 Thread tschauenberg
Attachments: IgniteClusterClient.java, IgniteClusterNode1.java,
IgniteClusterNode2.java

The following bug could be reproduced on both server and client nodes with
Ignite 2.7.0.  In Ignite 2.8.1, the server node counts are correct but the
client node counts are still incorrect.

*Procedure:*

1. Run attached IgniteClusterNode1.java
2. Run attached IgniteClusterNode2.java
3. Run attached IgniteClusterClient.java
4. Wait for client to put 100 items in the test cache and then observe the
local and cluster wide cache_size metrics
5. The cluster wide metrics should be 100 on both servers and client.  The
local metrics on the server should sum to 100 and the client should show 0.

*Summary:*
* both server and client counts were wrong in Ignite 2.7.0
* server counts were fixed somewhere before Ignite 2.8.1, but client counts
are still wrong

*Ignite 2.7.0 Results:*

*Server1:* 

/Actual:/

Server1 - Cluster metrics: cache_size=40
Server1 - Local metrics: cache_size=40

/Expected:/

Server1 - Cluster metrics: cache_size=100
Server1 - Local metrics: cache_size=40

/Working Incorrectly/

*Server2:*

/Actual:/

Server2 - Cluster metrics: cache_size=60
Server2 - Local metrics: cache_size=60

/Expected:/

Server2 - Cluster metrics: cache_size=100
Server2 - Local metrics: cache_size=60

/Working Incorrectly/

*Client:*

/Actual:/

Client - Cluster metrics: cache_size=0
Client - Local metrics: cache_size=0

/Expected:/

Client - Cluster metrics: cache_size=100
Client - Local metrics: cache_size=0

/Working Incorrectly/

*Ignite 2.8.1 Results:*

*Server1:*

/Actual and Expected:/

Server1 - Cluster metrics: cache_size=100
Server1 - Local metrics: cache_size=40

/Working Correctly/

*Server2:*

/Actual and Expected:/

Server2 - Cluster metrics: cache_size=100
Server2 - Local metrics: cache_size=60

/Working Correctly/

*Client:*

/Actual:/

Client - Cluster metrics: cache_size=0
Client - Local metrics: cache_size=0

/Expected:/

Client - Cluster metrics: cache_size=100
Client - Local metrics: cache_size=0

/Working Incorrectly/





Re: 2.8.1 - Loading Plugin Provider

2020-07-21 Thread akorensh
Hi, 
  You don't need to load the plugin provider through the JDK's service
loader in addition to using setPluginProviders().

  It is preferable to use the service loader alone, as the ML plugin does;
  see:
https://github.com/apache/ignite/blob/master/modules/ml/src/main/resources/META-INF/services/org.apache.ignite.plugin.PluginProvider

Thanks, Alex
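For reference, the service-loader wiring is just a provider-configuration file on the classpath whose content is the provider's fully qualified class name (the class name below is hypothetical):

```
# File: META-INF/services/org.apache.ignite.plugin.PluginProvider
com.example.MyPluginProvider
```

With that file packaged in the plugin jar, Ignite discovers the provider automatically at startup.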






3rd party persistence: Mapping with RDBMS tables that don't have primary keys

2020-07-21 Thread Alex Panchenko
Hi,

I have an Ignite cluster with 3rd party persistence enabled (Postgres db). I
have a few tables in Postgres which don't have primary keys, but I need to
keep these tables in Ignite:
 They look like this table (as an example):

  KeyValuesTable {
key: UUID
value: String 
}

NOTE: there is no Primary key or any unique index in this table. The same
key value may have multiple values attached, for ex:
  
#1 record: key=1, value="v1"
#2 record: key=1, value="v2"
#3 record: key=2, value="v1"
#4 record: key=3, value="v1"
#5 record: key=3, value="v2"

NOTE: I cannot change the table in Postgres and add a primary key field,
like id or something like that.
Can you suggest some solution, please?





Change in CacheStore Serialization from 2.7.6 to 2.8.x breaks Spring Injected dataSource

2020-07-21 Thread Yohan Fernando
Hello All,

We are migrating from Ignite 2.7.6 to 2.8.1 and have hit an issue where, in
CacheStore implementations that include Spring-injected DataSource objects,
these datasources turn out to be null. After investigation, it appears that
there is a change in behaviour under Ignite 2.8.x: the CacheStore is
serialized and therefore loses the injected Spring references. This was not
the case with 2.7.6, as the transient DataSource did not lose its injected
value in that version.

I have tried making the CacheStore implement ApplicationContextAware, but it
hasn't helped, as the Spring context is not re-initialized after
serialization.

Has anyone come across this and figured out a way to resolve this?

Thanks

Yohan



ContinuousQueryWithTransformer with custom return type

2020-07-21 Thread Devakumar J
Hi,

I am trying to register a CQ with a transformer with the keep-binary option
and trying to return a custom defined object. I see the transformer properly
creating the custom object and returning it properly.

But at the same time the local listener is not receiving the custom object;
instead I think it receives a binary object. How can I ensure that the local
listener receives the custom object properly?

ContinuousQueryWithTransformer
continuousQueryWithTransformer = new ContinuousQueryWithTransformer<>();

I want local listener to receive ListenerEvent (Custom defined object) as
below:

continuousQueryWithTransformer.setLocalListener(cacheEntryEvents ->
cacheEntryEvents.forEach(listenerEvent -> {
LOG.info("Event Received : {}", listenerEvent);}

But the local listener throws: BinaryObject can't be cast to ListenerEvent.






Re: Force backup on different physical machine

2020-07-21 Thread narges saleh
Thanks Stephen.
I am saying one Ignite node per pod/k8s node, spread across availability
zones.

On Tue, Jul 21, 2020 at 9:51 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> 1) I think you’re saying you have two Ignite nodes on one physical
> machine. What I mean is, don’t do that. All else being equal, one server
> node per physical machine is the way to go
>
> 2) This is an Ignite feature and not limited to any particular cloud
> vendor. It does rely on your cloud vendor “publishing” the underlying
> machine (or availability zone) as an environment variable so Ignite can
> find it
>
> On 21 Jul 2020, at 14:01, narges saleh  wrote:
>
> Hi Stephen,
> 1) [Having said that, you’re probably best configuring k8s not to put two
> Ignite server nodes on a single machine.]
> Do you mean by having k8s place the VMs in different availability zones?
>
> Then how would you tell ignite to place the backups in a different zone?
> I've read this is possible via affinityBackupFilter. Which brings up my
> next question.
>
> 2) Is affinityBackupFilter available with all public cloud platforms or
> just AWS?
>
> thanks.
>
> On Tue, Jul 7, 2020 at 5:20 AM Stephen Darlington <
> stephen.darling...@gridgain.com> wrote:
>
>> You can configure the affinity function (RendezvousAffinityFunction).
>> If you set the backup filter, you can customise which nodes are considered
>> for use as backups:
>>
>> cacheConfiguration.setBackups(1);
>> cacheConfiguration.setAffinity(new RendezvousAffinityFunction(1024, (n,p) -> 
>> {
>> return !p.hostNames().equals(n.hostNames());
>> }));
>>
>> Having said that, you’re probably best configuring k8s not to put two
>> Ignite server nodes on a single machine.
>>
>> Regards,
>> Stephen
>>
>> On 7 Jul 2020, at 10:42, Humphrey  wrote:
>>
>> Let's say I have 2 Kubernetes nodes and 4 Ignite server nodes.
>> This will result (if kubernetes have 2 pods running on each node) in the
>> following:
>>
>> *kubernetes_node1:* ignite_node1, ignite_node2
>> *kubernetes_node2:* ignite_node3, ignite_node4
>>
>> I specify that my cache backup = 1
>>
>> Is there a way to configure that the backup data of the ignite_node1 goes
>> on
>> the ignite_node3 or ignite_node4 and NOT on ignite_node2 (same physical
>> machine/kubernetes node)? Is there any configuration for this (I assume
>> it's
>> something on runtime, because we don't know where kubernetes will schedule
>> the pod)?
>>
>> Background:
>> If kubernetes_node1 goes down, then there won't be any data loss.
>>
>> Humphrey
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>>
>>
>>
>
>


Re: Force backup on different physical machine

2020-07-21 Thread Stephen Darlington
1) I think you’re saying you have two Ignite nodes on one physical machine. 
What I mean is, don’t do that. All else being equal, one server node per 
physical machine is the way to go

2) This is an Ignite feature and not limited to any particular cloud vendor. It 
does rely on your cloud vendor “publishing” the underlying machine (or 
availability zone) as an environment variable so Ignite can find it
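A zone-aware backup placement can then be expressed with an affinity backup filter; a sketch in Spring XML (the AVAILABILITY_ZONE attribute name is illustrative and must actually be present on each node, e.g. set from the cloud vendor's environment variable):

```xml
<property name="affinity">
    <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
        <property name="affinityBackupFilter">
            <!-- Prefer backups on nodes whose AVAILABILITY_ZONE attribute
                 differs from the primary's. -->
            <bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
                <constructor-arg>
                    <array value-type="java.lang.String">
                        <value>AVAILABILITY_ZONE</value>
                    </array>
                </constructor-arg>
            </bean>
        </property>
    </bean>
</property>
```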

> On 21 Jul 2020, at 14:01, narges saleh  wrote:
> 
> Hi Stephen,
> 1) [Having said that, you’re probably best configuring k8s not to put two 
> Ignite server nodes on a single machine.]
> Do you mean by having k8s place the VMs in different availability zones?
> 
> Then how would you tell ignite to place the backups in a different zone?
> I've read this is possible via affinityBackupFilter. Which brings up my next 
> question.
> 
> 2) Is affinityBackupFilter available with all public cloud platforms or just 
> AWS?
> 
> thanks.
> 
> On Tue, Jul 7, 2020 at 5:20 AM Stephen Darlington
> <stephen.darling...@gridgain.com> wrote:
> You can configure the affinity function (RendezvousAffinityFunction).
>  If you set the backup filter, you can customise which nodes are considered 
> for use as backups:
> cacheConfiguration.setBackups(1);
> cacheConfiguration.setAffinity(new RendezvousAffinityFunction(1024, (n,p) -> {
> return !p.hostNames().equals(n.hostNames());
> }));
> Having said that, you’re probably best configuring k8s not to put two Ignite 
> server nodes on a single machine.
> 
> Regards,
> Stephen
> 
>> On 7 Jul 2020, at 10:42, Humphrey wrote:
>> 
>> Let's say I have 2 Kubernetes nodes and 4 Ignite server nodes.
>> This will result (if kubernetes have 2 pods running on each node) in the
>> following:
>> 
>> *kubernetes_node1:* ignite_node1, ignite_node2
>> *kubernetes_node2:* ignite_node3, ignite_node4
>> 
>> I specify that my cache backup = 1
>> 
>> Is there a way to configure that the backup data of the ignite_node1 goes on
>> the ignite_node3 or ignite_node4 and NOT on ignite_node2 (same physical
>> machine/kubernetes node)? Is there any configuration for this (I assume it's
>> something on runtime, because we don't know where kubernetes will schedule
>> the pod)?
>> 
>> Background:
>> If kubernetes_node1 goes down, then there won't be any data loss.
>> 
>> Humphrey
>> 
>> 
>> 
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/ 
>> 
> 
> 




Re: Force backup on different physical machine

2020-07-21 Thread narges saleh
Hi Stephen,
1) [Having said that, you’re probably best configuring k8s not to put two
Ignite server nodes on a single machine.]
Do you mean by having k8s place the VMs in different availability zones?

Then how would you tell ignite to place the backups in a different zone?
I've read this is possible via affinityBackupFilter. Which brings up my
next question.

2) Is affinityBackupFilter available with all public cloud platforms or
just AWS?

thanks.

On Tue, Jul 7, 2020 at 5:20 AM Stephen Darlington <
stephen.darling...@gridgain.com> wrote:

> You can configure the affinity function (RendezvousAffinityFunction).
> If you set the backup filter, you can customise which nodes are considered
> for use as backups:
>
> cacheConfiguration.setBackups(1);
> cacheConfiguration.setAffinity(new RendezvousAffinityFunction(1024, (n,p) -> {
> return !p.hostNames().equals(n.hostNames());
> }));
>
> Having said that, you’re probably best configuring k8s not to put two
> Ignite server nodes on a single machine.
>
> Regards,
> Stephen
>
> On 7 Jul 2020, at 10:42, Humphrey  wrote:
>
> Let's say I have 2 Kubernetes nodes and 4 Ignite server nodes.
> This will result (if kubernetes have 2 pods running on each node) in the
> following:
>
> *kubernetes_node1:* ignite_node1, ignite_node2
> *kubernetes_node2:* ignite_node3, ignite_node4
>
> I specify that my cache backup = 1
>
> Is there a way to configure that the backup data of the ignite_node1 goes
> on
> the ignite_node3 or ignite_node4 and NOT on ignite_node2 (same physical
> machine/kubernetes node)? Is there any configuration for this (I assume
> it's
> something on runtime, because we don't know where kubernetes will schedule
> the pod)?
>
> Background:
> If kubernetes_node1 goes down, then there won't be any data loss.
>
> Humphrey
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>
>


Re: How to evaluate memory consumption of indexes

2020-07-21 Thread steve.hostettler
Thanks for the answer; I did not think of using the persistence to assess
the size :)





2.8.1 - Loading Plugin Provider

2020-07-21 Thread VeenaMithare
Hi , 

I saw that 'setPluginConfigurations' in IgniteConfiguration has been
deprecated in 2.8.1:
( IgniteConfiguration javadoc :
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html
)
*IgniteConfiguration setPluginConfigurations(PluginConfiguration... pluginCfgs)*
Deprecated.
Since PluginProviders can be set explicitly via
setPluginProviders(PluginProvider[]), it's preferable to store the
PluginConfiguration as part of the PluginProvider.

*IgniteConfiguration setPluginProviders(PluginProvider... pluginProvs)*
Sets plugin providers.


I guess we need to set plugin providers instead of setting the plugin
configuration. The question is:
do we also need to load the plugin provider through the JDK's service
loader?
(https://apacheignite.readme.io/docs/plugins, section "Load Plugin Provider")

What is the purpose of doing this?

regards,
Veena.








third-party persistence and junction table

2020-07-21 Thread Bastien Durel
Hello,

I have a junction table in my model, and used the web console to
generate ignite config and classes from my SQL database

-> There is a table user with id (long) and some data
-> There is a table role with id (long) and some data
-> There is a table user_role with user_id (fk) and role_id (fk)

Reading cache from table works, I can query ignite with jdbc and I get
my relations as expected.

But if I want to add a new relation, the query :
insert into "UserRoleCache".user_role(USER_ID, ROLE_ID) values(6003, 2)
is translated into this one, sent to postgresql :
UPDATE public.user_role SET  WHERE (user_id=$1 AND role_id=$2)

Which obviously is rejected.

The web console generated a cache for this table, with UserRole and
UserRoleKey types, each of which contains userId and roleId Longs.

Is there a better (correct) way to handle these many-to-many relations
in Ignite (backed by an RDBMS)?

Regards,

-- 
Bastien Durel
DATA
Enterprise data integration,
decision-support information systems.

bastien.du...@data.fr
tel : +33 (0) 1 57 19 59 28
fax : +33 (0) 1 57 19 59 73
12 avenue Raspail, 94250 GENTILLY France
www.data.fr