Re: rejecting communication connection & Failed to process selector key

2024-09-19 Thread MJ
Yes - the Kubernetes discovery SPI. No client nodes are out of the
cluster.


And below is the setting for the communication SPI.



 config of communicationSpi



Re: rejecting communication connection & Failed to process selector key

2024-09-19 Thread Humphrey Lopez
Are you using the Kubernetes discovery SPI?

Humphrey

On 16 Sep 2024, at 02:58, MJ <6733...@qq.com> wrote:

Hi Igniters,
 
I am experiencing the “Failed to process selector key” error once every one or two days. Each time, the node receives and rejects multiple communication connections and then throws the exception.
The logging below shows “Broken pipe” as the original exception, but it is not only “Broken pipe”: occasionally the “Failed to process selector key” wraps “Connection reset” or “javax.net.ssl.SSLException:
 Failed to encrypt data (SSL engine error) [status=CLOSED, handshakeStatus=NOT_HANDSHAKING]”.
 
Is there any solution to fix it, or can the configuration be improved?
 
Ignite 2.16.0 / 4 data nodes, running in openshift 4 
 
 config of communicationSpi
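(The XML that followed here was stripped by the list archive. For orientation only, an equivalent TcpCommunicationSpi setup in Java might look like the sketch below; which property each of the surviving values maps to is a guess, not the OP's actual configuration.)

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class CommSpiConfigSketch {
    public static IgniteConfiguration config() {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();

        // Illustrative properties only -- the actual property names in the
        // OP's XML were stripped by the archive:
        commSpi.setMessageQueueLimit(1024);
        commSpi.setConnectTimeout(25000);
        commSpi.setSelectorsCount(6);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCommunicationSpi(commSpi);
        return cfg;
    }
}
```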

    
    
    
    
    
    

 
 
24-09-15 17:18:35.146 [INFO ] grid-nio-worker-tcp-comm-2-#25%TcpCommunicationSpi% o.a.i.s.c.t.TcpCommunicationSpi:117 - Accepted incoming communication connection [locAddr=/10.254.32.162:47100, rmtAddr=/10.254.13.83:35160] 

24-09-15 17:18:35.147 [INFO ] grid-nio-worker-tcp-comm-2-#25%TcpCommunicationSpi% o.a.i.s.c.t.TcpCommunicationSpi:117 - Received incoming connection when already connected to this node, rejecting [locNode=52437bc3-3dfe-4f76-bec6-d2f22f8a5d40,
 rmtNode=7c28b6bc-8991-47a2-b69c-6adba0482713]  
24-09-15 17:18:35.357 [INFO ] grid-nio-worker-tcp-comm-3-#26%TcpCommunicationSpi% o.a.i.s.c.t.TcpCommunicationSpi:117 - Accepted incoming communication connection [locAddr=/10.254.32.162:47100, rmtAddr=/10.254.13.83:35162] 

24-09-15 17:18:35.358 [INFO ] grid-nio-worker-tcp-comm-3-#26%TcpCommunicationSpi% o.a.i.s.c.t.TcpCommunicationSpi:117 - Received incoming connection when already connected to this node, rejecting [locNode=52437bc3-3dfe-4f76-bec6-d2f22f8a5d40,
 rmtNode=7c28b6bc-8991-47a2-b69c-6adba0482713]  
24-09-15 17:18:35.568 [INFO ] grid-nio-worker-tcp-comm-0-#23%TcpCommunicationSpi% o.a.i.s.c.t.TcpCommunicationSpi:117 - Accepted incoming communication connection [locAddr=/10.254.32.162:47100, rmtAddr=/10.254.13.83:35164] 

24-09-15 17:18:35.569 [INFO ] grid-nio-worker-tcp-comm-0-#23%TcpCommunicationSpi% o.a.i.s.c.t.TcpCommunicationSpi:117 - Received incoming connection when already connected to this node, rejecting [locNode=52437bc3-3dfe-4f76-bec6-d2f22f8a5d40,
 rmtNode=7c28b6bc-8991-47a2-b69c-6adba0482713]  
24-09-15 17:18:35.975 [ERROR] grid-nio-worker-tcp-comm-1-#24%TcpCommunicationSpi% o.a.i.s.c.t.TcpCommunicationSpi:137 - Failed to process selector key [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker [super=AbstractNioClientWorker
 [idx=1, bytesRcvd=29406013584, bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker [name=grid-nio-worker-tcp-comm-1, igniteInstanceName=TcpCommunicationSpi, finished=false, heartbeatTs=1726435114873, hashCode=1144648384, interrupted=false,
 runner=grid-nio-worker-tcp-comm-1-#24%TcpCommunicationSpi%]]], writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768], inRecovery=GridNioRecoveryDescriptor [acked=20129536, resendCnt=0, rcvCnt=19533551,
 sentCnt=20129879, reserved=true, lastAck=19533551, nodeLeft=false, node=TcpDiscoveryNode [id=7c28b6bc-8991-47a2-b69c-6adba0482713, consistentId=10.254.13.83,127.0.0.1:47500, addrs=ArrayList [10.254.13.83, 127.0.0.1], sockAddrs=HashSet [/10.254.13.83:47500,
 /127.0.0.1:47500], discPort=47500, order=3, intOrder=3, lastExchangeTime=1724822271382, loc=false, ver=2.16.0#20231215-sha1:7bde6a42, isClient=false], connected=false, connectCnt=205, queueLimit=131072, reserveCnt=260, pairedConnections=false], outRecovery=GridNioRecoveryDescriptor
 [acked=20129536, resendCnt=0, rcvCnt=19533551, sentCnt=20129879, reserved=true, lastAck=19533551, nodeLeft=false, node=TcpDiscoveryNode [id=7c28b6bc-8991-47a2-b69c-6adba0482713, consistentId=10.254.13.83,127.0.0.1:47500, addrs=ArrayList [10.254.13.83, 127.0.0.1],
 sockAddrs=HashSet [/10.254.13.83:47500, /127.0.0.1:47500], discPort=47500, order=3, intOrder=3, lastExchangeTime=1724822271382, loc=false, ver=2.16.0#20231215-sha1:7bde6a42, isClient=false], connected=false, connectCnt=205, queueLimit=131072, reserveCnt=260,
 pairedConnections=false], closeSocket=true, 
outboundMessagesQueueSizeMetric=o.a.i.i.processors.metric.impl.LongAdderMetric@69a257d1, super=GridNioSessionImpl [locAddr=/10.254.32.162:52542, rmtAddr=/10.254.13.83:47100, createTime=1726435114863, closeTime=0, bytesSent=164200, bytesRcvd=468, bytesSent0=0,
 bytesRcvd0=0, sndSchedTime=1726435114863, lastSndTime=1726435114972, lastRcvTime=1726435114972, readsPaused=false, filterChain=FilterChain[filters=[GridNioCodecFilter [parser=o.a.i.i.util.nio.GridDirectParser@5196c6f7, directMode=true], GridConnectionBytesVerifyFilter,
 SSL filter], accepted=false, markedForClose=true]]] java.io.IOException: Broken pipe
  

Re: How to use ClusterNodeAttributeAffinityBackupFilter to have atleast one replica in second zone

2024-09-19 Thread Amit Jolly
Hello Alex,

Thanks for the suggestion. I will try with both options. Most likely I will
use the custom AffinityBackupFilter implementation you provided and will
try to enhance zone skewness configurable.

Regards,

Amit

On Thu, Sep 19, 2024 at 3:16 PM Alex Plehanov 
wrote:

> Hello Amit,
>
> You can use ClusterNodeAttributeAffinityBackupFilter and introduce
> some virtual zones. For example, if you have 5 nodes in zone 1 and 5
> nodes in zone 2, you can assign 'zone1' attribute value to 3 nodes
> from zone 1, assign 'zone2' attribute value to 3 nodes from zone 2,
> and assign 'zone3' attribute value to 4 remaining nodes (2 from zone 1
> and 2 from zone 2). In this case, there will be 3 copies of each
> partition, each partition will be in each virtual zone, but nodes in
> virtual 'zone3' will contain a little bit less partitions than nodes
> from 'zone1' and 'zone2'.
>
> Or you can create your own backup filter, allowing no more than two
> nodes for the same attribute value, for example like this:
>
> public class MyAffinityBackupFilter implements
>     IgniteBiPredicate<ClusterNode, List<ClusterNode>> {
>     private final String attrName;
>
>     public MyAffinityBackupFilter(String attrName) {
>         this.attrName = attrName;
>     }
>
>     @Override public boolean apply(ClusterNode candidate,
>         List<ClusterNode> previouslySelected) {
>         Set<Object> usedAttrs = new HashSet<>();
>
>         for (ClusterNode node : previouslySelected) {
>             if (Objects.equals(candidate.attribute(attrName),
>                 node.attribute(attrName)) &&
>                 !usedAttrs.add(candidate.attribute(attrName)))
>                 return false;
>         }
>
>         return true;
>     }
> }
>
> In this case you can achieve a more even distribution.
>
> Thu, Sep 19, 2024 at 16:58, Amit Jolly :
> >
> > Hi  Pavel,
> >
> > Well based upon documentation of
> ClusterNodeAttributeAffinityBackupFilter.java class. It says "This
> implementation will discard backups rather than place multiple on the same
> set of nodes. This avoids trying to cram more data onto remaining nodes
> when some have failed." and i have verified the same by running a small
> test with three node cluster (one assigned with node attribute as
> AVAILABILITY_ZONE=ZONE1 and other two assigned node attribute
> AVAILABILITY_ZONE=ZONE2) , Created a cache with 2 backups and using
> ClusterNodeAttributeAffinityBackupFilter in RendezvousAffinityFunction as
> below. After that added an entry into the cache and verified the nodes
> count for both primary and backup using cache affinity function. It
> returned 2 instead of 3.
> >
> > ClusterNodeAttributeAffinityBackupFilter backupFilter = new
> ClusterNodeAttributeAffinityBackupFilter("AVAILABILITY_ZONE");
> > RendezvousAffinityFunction rendezvousAffinityFunction = new
> RendezvousAffinityFunction();
> > rendezvousAffinityFunction.setAffinityBackupFilter(backupFilter);
> >
> > CacheConfiguration<String, String> cacheConfiguration = new
> CacheConfiguration<>();
> > cacheConfiguration.setBackups(2);
> > cacheConfiguration.setAffinity(rendezvousAffinityFunction);
> >
> > IgniteCache<String, String> cache =
> ignite.getOrCreateCache(cacheConfiguration);
> > cache.put("1","1");
> > Collection<ClusterNode> nodes =
> ((Ignite)cache.unwrap(Ignite.class)).affinity(cache.getName()).mapKeyToPrimaryAndBackups("1");
> > assertEquals(3, nodes.size()); //This fails even though i have three
> nodes (1 with node attribute AVAILABILITY_ZONE="ZONE1" and other two with
> node attribute AVAILABILITY_ZONE="ZONE2")
> >
> > PS: I started three nodes with Custom cache configuration
> IgniteConfiguration.setUserAttributes
> >
> > Server Node1
> > =
> > Map<String, Object> userAttributes = new HashMap<>();
> > userAttributes.put("AVAILABILITY_ZONE", "ZONE1");
> > IgniteConfiguration cfg = new IgniteConfiguration();
> > cfg.setUserAttributes(userAttributes);
> > Ignition.start(cfg);
> >
> > Server Node2
> > =
> > Map<String, Object> userAttributes = new HashMap<>();
> > userAttributes.put("AVAILABILITY_ZONE", "ZONE2");
> > IgniteConfiguration cfg = new IgniteConfiguration();
> > cfg.setUserAttributes(userAttributes);
> > Ignition.start(cfg);
> >
> > Server Node3
> > =
> > Map<String, Object> userAttributes = new HashMap<>();
> > userAttributes.put("AVAILABILITY_ZONE", "ZONE2");
> > IgniteConfiguration cfg = new IgniteConfiguration();
> > cfg.setUserAttributes(userAttributes);
> > Ignition.start(cfg);
> >
> >
> >
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
> >
> > Thanks,
> >
> > Amit Jolly
> >
> > On Thu, Sep 19, 2024 at 12:51 AM Pavel Tupitsyn 
> wrote:
> >>
> >> Hi Amit,
> >>
> >> >  if the backup count is let's say 2, Ignite won't create a second
> backup as there are not enough zones
> >> Not correct - Ignite will create backups anyway.
> >> - A backup is a copy of a partition on another node
> >> - With 2 backups every partition will have 3 copies (1 primary, 2
> backup), all on differe

Re: How to use ClusterNodeAttributeAffinityBackupFilter to have atleast one replica in second zone

2024-09-19 Thread Alex Plehanov
Hello Amit,

You can use ClusterNodeAttributeAffinityBackupFilter and introduce
some virtual zones. For example, if you have 5 nodes in zone 1 and 5
nodes in zone 2, you can assign 'zone1' attribute value to 3 nodes
from zone 1, assign 'zone2' attribute value to 3 nodes from zone 2,
and assign 'zone3' attribute value to 4 remaining nodes (2 from zone 1
and 2 from zone 2). In this case, there will be 3 copies of each
partition, each partition will be in each virtual zone, but nodes in
virtual 'zone3' will contain a little bit less partitions than nodes
from 'zone1' and 'zone2'.
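As a sketch, the virtual-zone assignment is just a node attribute set at startup, which the stock backup filter then keys on. The attribute name 'VIRT_ZONE', the cache settings, and how a node learns its zone are all illustrative assumptions:

```java
import java.util.Collections;

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class VirtualZoneNode {
    public static void main(String[] args) {
        // 'zone1'/'zone2'/'zone3' is decided per node by the deployment:
        // e.g. 3 nodes from each physical zone get 'zone1'/'zone2', and
        // the remaining 4 (2 + 2) get 'zone3'.
        String virtZone = System.getProperty("VIRT_ZONE", "zone3");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setUserAttributes(Collections.singletonMap("VIRT_ZONE", virtZone));

        // The stock filter then keeps each copy in a distinct virtual zone.
        RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
        aff.setAffinityBackupFilter(
            new ClusterNodeAttributeAffinityBackupFilter("VIRT_ZONE"));

        CacheConfiguration<String, String> cacheCfg = new CacheConfiguration<>("myCache");
        cacheCfg.setBackups(2); // 3 copies total, one per virtual zone
        cacheCfg.setAffinity(aff);
        cfg.setCacheConfiguration(cacheCfg);

        Ignition.start(cfg);
    }
}
```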

Or you can create your own backup filter, allowing no more than two
nodes for the same attribute value, for example like this:

public class MyAffinityBackupFilter implements
    IgniteBiPredicate<ClusterNode, List<ClusterNode>> {
    private final String attrName;

    public MyAffinityBackupFilter(String attrName) {
        this.attrName = attrName;
    }

    @Override public boolean apply(ClusterNode candidate,
        List<ClusterNode> previouslySelected) {
        Set<Object> usedAttrs = new HashSet<>();

        for (ClusterNode node : previouslySelected) {
            if (Objects.equals(candidate.attribute(attrName),
                node.attribute(attrName)) &&
                !usedAttrs.add(candidate.attribute(attrName)))
                return false;
        }

        return true;
    }
}

In this case you can achieve a more even distribution.
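The at-most-two rule in this filter can be checked in isolation. Below is a self-contained sketch of the same decision logic, with plain strings standing in for node attribute values (no Ignite APIs involved):

```java
import java.util.List;
import java.util.Objects;

public class AtMostTwoPerAttrDemo {
    // Same decision as MyAffinityBackupFilter.apply(): reject the candidate
    // once two previously selected nodes already carry its attribute value.
    static boolean accept(String candidateAttr, List<String> selectedAttrs) {
        int sameAttr = 0;
        for (String attr : selectedAttrs) {
            if (Objects.equals(candidateAttr, attr))
                sameAttr++;
        }
        return sameAttr < 2;
    }

    public static void main(String[] args) {
        System.out.println(accept("zone1", List.of()));                  // true
        System.out.println(accept("zone1", List.of("zone1")));           // true
        System.out.println(accept("zone1", List.of("zone1", "zone1")));  // false
        System.out.println(accept("zone1", List.of("zone2", "zone1")));  // true
    }
}
```

This assumes, as in the filter above, that `previouslySelected` holds the primary plus already-chosen backups for the partition.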

Thu, Sep 19, 2024 at 16:58, Amit Jolly :
>
> Hi  Pavel,
>
> Well based upon documentation of 
> ClusterNodeAttributeAffinityBackupFilter.java class. It says "This 
> implementation will discard backups rather than place multiple on the same 
> set of nodes. This avoids trying to cram more data onto remaining nodes when 
> some have failed." and i have verified the same by running a small test with 
> three node cluster (one assigned with node attribute as 
> AVAILABILITY_ZONE=ZONE1 and other two assigned node attribute 
> AVAILABILITY_ZONE=ZONE2) , Created a cache with 2 backups and using 
> ClusterNodeAttributeAffinityBackupFilter in RendezvousAffinityFunction as 
> below. After that added an entry into the cache and verified the nodes count 
> for both primary and backup using cache affinity function. It returned 2 
> instead of 3.
>
> ClusterNodeAttributeAffinityBackupFilter backupFilter = new 
> ClusterNodeAttributeAffinityBackupFilter("AVAILABILITY_ZONE");
> RendezvousAffinityFunction rendezvousAffinityFunction = new 
> RendezvousAffinityFunction();
> rendezvousAffinityFunction.setAffinityBackupFilter(backupFilter);
>
> CacheConfiguration<String, String> cacheConfiguration = new
> CacheConfiguration<>();
> cacheConfiguration.setBackups(2);
> cacheConfiguration.setAffinity(rendezvousAffinityFunction);
>
> IgniteCache<String, String> cache =
> ignite.getOrCreateCache(cacheConfiguration);
> cache.put("1","1");
> Collection<ClusterNode> nodes =
> ((Ignite)cache.unwrap(Ignite.class)).affinity(cache.getName()).mapKeyToPrimaryAndBackups("1");
> assertEquals(3, nodes.size()); //This fails even though i have three nodes (1 
> with node attribute AVAILABILITY_ZONE="ZONE1" and other two with node 
> attribute AVAILABILITY_ZONE="ZONE2")
>
> PS: I started three nodes with Custom cache configuration 
> IgniteConfiguration.setUserAttributes
>
> Server Node1
> =
> Map<String, Object> userAttributes = new HashMap<>();
> userAttributes.put("AVAILABILITY_ZONE", "ZONE1");
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setUserAttributes(userAttributes);
> Ignition.start(cfg);
>
> Server Node2
> =
> Map<String, Object> userAttributes = new HashMap<>();
> userAttributes.put("AVAILABILITY_ZONE", "ZONE2");
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setUserAttributes(userAttributes);
> Ignition.start(cfg);
>
> Server Node3
> =
> Map<String, Object> userAttributes = new HashMap<>();
> userAttributes.put("AVAILABILITY_ZONE", "ZONE2");
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setUserAttributes(userAttributes);
> Ignition.start(cfg);
>
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
>
> Thanks,
>
> Amit Jolly
>
> On Thu, Sep 19, 2024 at 12:51 AM Pavel Tupitsyn  wrote:
>>
>> Hi Amit,
>>
>> >  if the backup count is let's say 2, Ignite won't create a second backup 
>> > as there are not enough zones
>> Not correct - Ignite will create backups anyway.
>> - A backup is a copy of a partition on another node
>> - With 2 backups every partition will have 3 copies (1 primary, 2 backup), 
>> all on different nodes (since you have 10 nodes)
>> - Use ClusterNodeAttributeAffinityBackupFilter to ensure that at least one 
>> of the copies is in a different AZ
>>
>> And that is enough for 3 copies.
>>
>> On Thu, Sep 19, 2024 at 12:10 AM Amit Jolly  wrote:
>>>
>>> Hi Team
>>>
>>> We are planning to run 10 node Ignite clusters in AWS with 5 nodes each 
>>> into two availability zones. Using Kubernetes topologyspreadconstraints we 
>>> have made sure th

Re: How to use ClusterNodeAttributeAffinityBackupFilter to have atleast one replica in second zone

2024-09-19 Thread Amit Jolly
Hi  Pavel,

Well, based upon the documentation of the
ClusterNodeAttributeAffinityBackupFilter.java class, it says: "This
implementation will discard backups rather than place multiple on the same
set of nodes. This avoids trying to cram more data onto remaining nodes
when some have failed." I have verified the same by running a small
test with a three-node cluster (one node assigned the attribute
AVAILABILITY_ZONE=ZONE1 and the other two assigned
AVAILABILITY_ZONE=ZONE2). I created a cache with 2 backups, using
ClusterNodeAttributeAffinityBackupFilter in RendezvousAffinityFunction as
below. After that I added an entry into the cache and verified the node
count for both primary and backups using the cache affinity function. It
returned 2 instead of 3.

ClusterNodeAttributeAffinityBackupFilter backupFilter =
    new ClusterNodeAttributeAffinityBackupFilter("AVAILABILITY_ZONE");
RendezvousAffinityFunction rendezvousAffinityFunction = new RendezvousAffinityFunction();
rendezvousAffinityFunction.setAffinityBackupFilter(backupFilter);

CacheConfiguration<String, String> cacheConfiguration = new CacheConfiguration<>();
cacheConfiguration.setBackups(2);
cacheConfiguration.setAffinity(rendezvousAffinityFunction);

IgniteCache<String, String> cache = ignite.getOrCreateCache(cacheConfiguration);
cache.put("1", "1");
Collection<ClusterNode> nodes =
    ((Ignite)cache.unwrap(Ignite.class)).affinity(cache.getName()).mapKeyToPrimaryAndBackups("1");
assertEquals(3, nodes.size()); // This fails even though I have three nodes
// (1 with node attribute AVAILABILITY_ZONE="ZONE1" and the other two with
// node attribute AVAILABILITY_ZONE="ZONE2")

PS: I started the three nodes with custom node attributes
via IgniteConfiguration.setUserAttributes

Server Node1
=
Map<String, Object> userAttributes = new HashMap<>();
userAttributes.put("AVAILABILITY_ZONE", "ZONE1");
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setUserAttributes(userAttributes);
Ignition.start(cfg);

Server Node2
=
Map<String, Object> userAttributes = new HashMap<>();
userAttributes.put("AVAILABILITY_ZONE", "ZONE2");
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setUserAttributes(userAttributes);
Ignition.start(cfg);

Server Node3
=
Map<String, Object> userAttributes = new HashMap<>();
userAttributes.put("AVAILABILITY_ZONE", "ZONE2");
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setUserAttributes(userAttributes);
Ignition.start(cfg);


https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html

Thanks,

Amit Jolly

On Thu, Sep 19, 2024 at 12:51 AM Pavel Tupitsyn 
wrote:

> Hi Amit,
>
> >  if the backup count is let's say 2, Ignite won't create a second backup
> as there are not enough zones
> Not correct - Ignite will create backups anyway.
> - A backup is a copy of a partition on another node
> - With 2 backups every partition will have 3 copies (1 primary, 2 backup),
> all on different nodes (since you have 10 nodes)
> - Use ClusterNodeAttributeAffinityBackupFilter to ensure that at least one
> of the copies is in a different AZ
>
> And that is enough for 3 copies.
>
> On Thu, Sep 19, 2024 at 12:10 AM Amit Jolly  wrote:
>
>> Hi Team
>>
>> We are planning to run 10 node Ignite clusters in AWS with 5 nodes each
>> into two availability zones. Using Kubernetes topologyspreadconstraints we
>> have made sure that no two Ignite pods are started on the same virtual
>> machine/node/host.
>>
>> I understand with ClusterNodeAttributeAffinityBackupFilter i can force
>> ignite to store the backup in a different zone if backup count is 1.
>>
>> But if the backup count is let's say 2, Ignite won't create a second
>> backup as there are not enough zones.
>>
>> My question is if i have backup count 2, How can I
>> use ClusterNodeAttributeAffinityBackupFilter or (custom
>> AffinityBackupFilter) to have at least one backup in each zone and another
>> backup anywhere else where available.
>>
>> I think in order to achieve what I am thinking somehow I
>> need currentTopologySnapshot
>> available in ClusterNodeAttributeAffinityBackupFilter or custom
>> AffinityBackupFilter
>>
>> Thanks,
>>
>> Amit Jolly
>>
>


Re: How to use ClusterNodeAttributeAffinityBackupFilter to have atleast one replica in second zone

2024-09-18 Thread Pavel Tupitsyn
Hi Amit,

>  if the backup count is let's say 2, Ignite won't create a second backup
as there are not enough zones
Not correct - Ignite will create backups anyway.
- A backup is a copy of a partition on another node
- With 2 backups every partition will have 3 copies (1 primary, 2 backup),
all on different nodes (since you have 10 nodes)
- Use ClusterNodeAttributeAffinityBackupFilter to ensure that at least one
of the copies is in a different AZ

And that is enough for 3 copies.

On Thu, Sep 19, 2024 at 12:10 AM Amit Jolly  wrote:

> Hi Team
>
> We are planning to run 10 node Ignite clusters in AWS with 5 nodes each
> into two availability zones. Using Kubernetes topologyspreadconstraints we
> have made sure that no two Ignite pods are started on the same virtual
> machine/node/host.
>
> I understand with ClusterNodeAttributeAffinityBackupFilter i can force
> ignite to store the backup in a different zone if backup count is 1.
>
> But if the backup count is let's say 2, Ignite won't create a second
> backup as there are not enough zones.
>
> My question is: if I have backup count 2, how can I
> use ClusterNodeAttributeAffinityBackupFilter (or a custom
> AffinityBackupFilter) to have at least one backup in each zone and another
> backup anywhere else where available?
>
> I think in order to achieve what I am thinking somehow I
> need currentTopologySnapshot
> available in ClusterNodeAttributeAffinityBackupFilter or custom
> AffinityBackupFilter
>
> Thanks,
>
> Amit Jolly
>


Re: rejecting communication connection & Failed to process selector key

2024-09-18 Thread Jeremy McMillan
I suspect your openshift networking is doing something wrong: NAT is
particularly suspicious.

Share your discovery configuration and openshift network layout.

On Mon, Sep 16, 2024 at 4:38 AM MJ <6733...@qq.com> wrote:

> Don't think so. As shown below, the remote IP 10.254.13.83
>  is the other server node.
> --- log
> Accepted incoming communication connection [locAddr=/10.254.32.162:47100,
> rmtAddr=/10.254.13.83:35160]
> super=GridNioSessionImpl [locAddr=/10.254.32.162:52542, rmtAddr=/
> 10.254.13.83:47100
> ---
>
> So the multiple connections that kept being rejected were between two server
> nodes. What scenarios could cause that? It appears that the original
> connection was shut down or interrupted quickly by one node, but the other node
> was not aware of the connection close event, or was not informed. Is there any
> configuration that can help with that?
>
>
> Thanks,
> -MJ
>
> Original Email
>
> From:"Pavel Tupitsyn"< ptupit...@apache.org >;
>
> Sent Time:2024/9/16 12:58
>
> To:"user"< user@ignite.apache.org >;
>
> Subject:Re: rejecting communication connection & Failed to process
> selector key
>
> Looks like some non-Ignite application connects to the Ignite server, then
> sends unexpected data or disconnects quickly.
>
> Could it be some kind of a security tool, port scanner, or a misconfigured
> service somewhere on the network?
>
> On Mon, Sep 16, 2024 at 3:59 AM MJ <6733...@qq.com> wrote:
>
>> Hi Igniters,
>>
>>
>>
>> I am experiencing the “Failed to process selector key” error once every
>> one or two days. Every time it appears received and rejected multiple
>> communication connections and then threw the exception.
>>
>> Below logging is about “Broken pipe” original exception but not only
>> “Broken pipe”, occasionally the “Failed to process selector key” wraps
>> “Connection Reset”, “javax.net.ssl.SSLException: Failed to encrypt data
>> (SSL engine error) [status=CLOSED, handshakeStatus=NOT_HANDSHAKING”.
>>
>>
>>
>> Is there any solution to fix it ? or its configuration can be improved ?
>>
>>
>>
>> Ignite 2.16.0 / 4 data nodes, running in openshift 4
>>
>>
>>
>>  config of communicationSpi
>>
>> 
>>
>> > class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>>
>> 
>>
>> > value="1024"/>
>>
>> > value="25000"/>
>>
>> > value="6"/>
>>
>> 
>>
>> 
>>
>>
>>
>>
>>
>> 24-09-15 17:18:35.146 [INFO ]
>> grid-nio-worker-tcp-comm-2-#25%TcpCommunicationSpi%
>> o.a.i.s.c.t.TcpCommunicationSpi:117 - Accepted incoming communication
>> connection [locAddr=/10.254.32.162:47100, rmtAddr=/10.254.13.83:35160]
>>
>> 24-09-15 17:18:35.147 [INFO ]
>> grid-nio-worker-tcp-comm-2-#25%TcpCommunicationSpi%
>> o.a.i.s.c.t.TcpCommunicationSpi:117 - Received incoming connection when
>> already connected to this node, rejecting
>> [locNode=52437bc3-3dfe-4f76-bec6-d2f22f8a5d40,
>> rmtNode=7c28b6bc-8991-47a2-b69c-6adba0482713]
>>
>> 24-09-15 17:18:35.357 [INFO ]
>> grid-nio-worker-tcp-comm-3-#26%TcpCommunicationSpi%
>> o.a.i.s.c.t.TcpCommunicationSpi:117 - Accepted incoming communication
>> connection [locAddr=/10.254.32.162:47100, rmtAddr=/10.254.13.83:35162]
>>
>> 24-09-15 17:18:35.358 [INFO ]
>> grid-nio-worker-tcp-comm-3-#26%TcpCommunicationSpi%
>> o.a.i.s.c.t.TcpCommunicationSpi:117 - Received incoming connection when
>> already connected to this node, rejecting
>> [locNode=52437bc3-3dfe-4f76-bec6-d2f22f8a5d40,
>> rmtNode=7c28b6bc-8991-47a2-b69c-6adba0482713]
>>
>> 24-09-15 17:18:35.568 [INFO ]
>> grid-nio-worker-tcp-comm-0-#23%TcpCommunicationSpi%
>> o.a.i.s.c.t.TcpCommunicationSpi:117 - Accepted incoming communication
>> connection [locAddr=/10.254.32.162:47100, rmtAddr=/10.254.13.83:35164]
>>
>> 24-09-15 17:18:35.569 [INFO ]
>> grid-nio-worker-tcp-comm-0-#23%TcpCommunicationSpi%
>> o.a.i.s.c.t.TcpCommunicationSpi:117 - Received incoming connection when
>> already connected to this node, rejecting
>> [locNode=52437bc3-3dfe-4f76-bec6-d2f22f8a5d40,
>> rmtNode=7c28b6bc-8991-47a2-b69c-6adba0482713]
>>
>> 24-09-15 17:18:35.975 [ERROR]
>> grid-nio-worker-tcp-comm-1-#24%TcpCommunicationSpi%
>> o.a.i.s.c.t.TcpCommunicationSpi:137 - Failed to process selector key
>> [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker
>> [super=AbstractNioClientWorker [idx=1, bytesRcvd=29406013584, bytesSent=0,
>> bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
>> [name=grid-nio-worker-tcp-comm-1, igniteInstanceName=TcpCommunicationSpi,
>> finished=false, heartbeatTs=1726435114873, hashCode=1144648384,
>> interrupted=false,
>> runner=grid-nio-worker-tcp-comm-1-#24%TcpCommunicationSpi%]]],
>> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
>> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
>> inRecovery=GridNioRecoveryDescriptor [acked=20129536, resendCnt=0,
>> rcvCnt=19533551, sentCnt=20129879, reserv

Re: Failed to execute query because cache partition has been lost

2024-09-18 Thread Jeremy McMillan
If you want to do maintenance, and you want to block access during
maintenance, deactivate the cluster, then do the maintenance, then activate
the cluster again.

I recommend that you tell the community what you are trying to do, and then
ask with an open mind how the community would accomplish that goal.

Think carefully, like a database: if you had been tracking changes to data
on a remote node, and that node disappeared and reappeared, is it safe to
assume nothing bad has happened to that node or its data while it was not
available? Is it fair to others for you to assert that data is trustworthy?
Lost partitions must be recovered in the default case. If you want unsafe
behavior, configure the cluster to ignore lost partitions.
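A minimal sketch of the explicit recovery step described above (the cache name "myCache" is a placeholder; the node configuration is whatever your deployment already uses):

```java
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

public class ResetLostPartitionsSketch {
    public static void main(String[] args) {
        // Join the running cluster (e.g. as a client node).
        Ignite ignite = Ignition.start();

        // Once the failed node has rejoined and you have decided its data
        // is trustworthy, explicitly acknowledge the loss so the cache
        // becomes fully usable again.
        ignite.resetLostPartitions(Collections.singleton("myCache"));
    }
}
```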

On Wed, Sep 18, 2024 at 4:56 AM  wrote:

> Ok, thanks, I understand.
>
> But in this case, if someone tries to modify the DB while a node is down,
> does Ignites offers any mechanism to prevent this or should I implement it?
>
>
>
>
>
> *From:* Pavel Tupitsyn 
> *Sent:* Wednesday, September 18, 2024 11:30
> *To:* user@ignite.apache.org
> *Subject:* Re: Failed to execute query because cache partition has been
> lost
>
>
>
> > 2 servers and 1 client, and no backups
>
> > shut down one node
>
>
>
> There are no backups => any node shutdown leads to partition loss.
>
> If you want to ignore data loss, set partitionLossPolicy = IGNORE [1]
>
>
>
> [1]
> https://ignite.apache.org/docs/latest/configuring-caches/partition-loss-policy
>
>
>
> On Wed, Sep 18, 2024 at 12:04 PM  wrote:
>
> Hi.
>
>
>
> We are using Apache Ignite in our application, and currently, we are
> testing the behaviour of the system when there are system errors.
>
>
>
> One of our tests is not working as expected:
>
>- we have got an Ignite cluster with 2 servers and 1 client, and no
>backups
>- Ignite version 2.16
>- We shut down one node server for several minutes
>- During this time there is no read nor write to Ignite (we do not use
>the DB)
>- When we restart the server node, we expect to recover the system
>smoothly BUT we have exceptions when we query the data: “Failed to execute
>query because cache partition has been lost”
>
>
>
> We can resolve the problem resetting the lost partitions, but is this a
> normal behaviour of Ignite? I mean, it is a simple case, and the node
> should be able to join the cluster without problems.
>
>
>
> Thank you.
>
>
>
>


Re: Failed to execute query because cache partition has been lost

2024-09-18 Thread Pavel Tupitsyn
I recommend enabling backups in CacheConfiguration to avoid dealing with
partition loss at all. Is there any reason not to?
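For reference, a minimal sketch of such a cache configuration (cache name, backup count, and the loss policy choice are illustrative, not a prescribed setup):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupCacheConfigSketch {
    public static CacheConfiguration<String, String> cacheConfig() {
        CacheConfiguration<String, String> cfg = new CacheConfiguration<>("myCache");
        cfg.setCacheMode(CacheMode.PARTITIONED);

        // One backup copy: a single node can leave without losing partitions.
        cfg.setBackups(1);

        // If a loss still happens, keep surviving partitions usable and
        // require an explicit resetLostPartitions() for the lost ones.
        cfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);

        return cfg;
    }
}
```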

On Wed, Sep 18, 2024 at 1:21 PM  wrote:

> After sending the reply, I was thinking… partitionLossPolicy prevents
> that someone tries to modify the DB while a partition is lost (this means
> that a node is not on the cache).
>
> So, if I set partitionLossPolicy = IGNORE the DB can be modified when a
> node is not present.
>
>
>
> Maybe the solution could be, maintain partitionLossPolicy as
> READ_WRITE_SAFE (all reads and writes will be allowed for keys in valid
> partitions), and when the node rejoins to the cluster, reset the lost
> partitions.
>
>
>
> What event do I have to listen? Maybe EVT_NODE_JOINED?
>
>
>
> *From:* jrov...@identy.io 
> *Sent:* Wednesday, September 18, 2024 11:54
> *To:* user@ignite.apache.org
> *Subject:* RE: Failed to execute query because cache partition has been
> lost
>
>
>
> Ok, thanks, I understand.
>
> But in this case, if someone tries to modify the DB while a node is down,
> does Ignites offers any mechanism to prevent this or should I implement it?
>
>
>
>
>
> *From:* Pavel Tupitsyn 
> *Sent:* Wednesday, September 18, 2024 11:30
> *To:* user@ignite.apache.org
> *Subject:* Re: Failed to execute query because cache partition has been
> lost
>
>
>
> > 2 servers and 1 client, and no backups
>
> > shut down one node
>
>
>
> There are no backups => any node shutdown leads to partition loss.
>
> If you want to ignore data loss, set partitionLossPolicy = IGNORE [1]
>
>
>
> [1]
> https://ignite.apache.org/docs/latest/configuring-caches/partition-loss-policy
>
>
>
> On Wed, Sep 18, 2024 at 12:04 PM  wrote:
>
> Hi.
>
>
>
> We are using Apache Ignite in our application, and currently, we are
> testing the behaviour of the system when there are system errors.
>
>
>
> One of our tests is not working as expected:
>
>- we have got an Ignite cluster with 2 servers and 1 client, and no
>backups
>- Ignite version 2.16
>- We shut down one node server for several minutes
>- During this time there is no read nor write to Ignite (we do not use
>the DB)
>- When we restart the server node, we expect to recover the system
>smoothly BUT we have exceptions when we query the data: “Failed to execute
>query because cache partition has been lost”
>
>
>
> We can resolve the problem resetting the lost partitions, but is this a
> normal behaviour of Ignite? I mean, it is a simple case, and the node
> should be able to join the cluster without problems.
>
>
>
> Thank you.
>
>
>
>


RE: Failed to execute query because cache partition has been lost

2024-09-18 Thread jrovira
After sending the reply, I was thinking… partitionLossPolicy prevents anyone
from modifying the DB while a partition is lost (which means that a node is
missing from the cache).

So, if I set partitionLossPolicy = IGNORE, the DB can be modified even when a
node is not present.

 

Maybe the solution could be: keep partitionLossPolicy as READ_WRITE_SAFE
(all reads and writes will be allowed for keys in valid partitions), and when
the node rejoins the cluster, reset the lost partitions.

 

What event do I have to listen? Maybe EVT_NODE_JOINED?
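One possible shape of that listener, as an untested sketch: EVT_NODE_JOINED must be enabled via IgniteConfiguration.setIncludeEventTypes, the cache name "myCache" is a placeholder, and resetting unconditionally on every join may be unsafe if data was actually lost.

```java
import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.events.DiscoveryEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class LostPartitionResetter {
    public static void install(Ignite ignite) {
        // Requires EVT_NODE_JOINED in IgniteConfiguration.setIncludeEventTypes(...).
        IgnitePredicate<Event> lsnr = evt -> {
            DiscoveryEvent discoEvt = (DiscoveryEvent)evt;
            System.out.println("Node joined: " + discoEvt.eventNode().id());

            // Acknowledge lost partitions once the node is back.
            ignite.resetLostPartitions(Collections.singleton("myCache"));

            return true; // keep listening
        };

        ignite.events().localListen(lsnr, EventType.EVT_NODE_JOINED);
    }
}
```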

 

From: jrov...@identy.io
Sent: Wednesday, 18 September 2024 11:54
To: user@ignite.apache.org
Subject: RE: Failed to execute query because cache partition has been lost

 

Ok, thanks, I understand.

But in this case, if someone tries to modify the DB while a node is down, does 
Ignite offer any mechanism to prevent this, or should I implement it myself?

 

 

From: Pavel Tupitsyn <ptupit...@apache.org>
Sent: Wednesday, 18 September 2024 11:30
To: user@ignite.apache.org
Subject: Re: Failed to execute query because cache partition has been lost

 

> 2 servers and 1 client, and no backups

> shut down one node

 

There are no backups => any node shutdown leads to partition loss.

If you want to ignore data loss, set partitionLossPolicy = IGNORE [1]

 

[1] 
https://ignite.apache.org/docs/latest/configuring-caches/partition-loss-policy 
 

 

On Wed, Sep 18, 2024 at 12:04 PM <jrov...@identy.io> wrote:

Hi.

 

We are using Apache Ignite in our application, and currently, we are testing 
the behaviour of the system when there are system errors.

 

One of our tests is not working as expected:

*   we have got an Ignite cluster with 2 servers and 1 client, and no 
backups
*   Ignite version 2.16
*   We shut down one node server for several minutes
*   During this time there is no read nor write to Ignite (we do not use 
the DB)
*   When we restart the server node, we expect to recover the system 
smoothly BUT we have exceptions when we query the data: “Failed to execute 
query because cache partition has been lost”

 

We can resolve the problem by resetting the lost partitions, but is this the 
normal behaviour of Ignite? I mean, it is a simple case, and the node should be 
able to rejoin the cluster without problems.

 

Thank you.

 



RE: Failed to execute query because cache partition has been lost

2024-09-18 Thread jrovira
Ok, thanks, I understand.

But in this case, if someone tries to modify the DB while a node is down, does 
Ignite offer any mechanism to prevent this, or should I implement it myself?

 

 

From: Pavel Tupitsyn
Sent: Wednesday, 18 September 2024 11:30
To: user@ignite.apache.org
Subject: Re: Failed to execute query because cache partition has been lost

 

> 2 servers and 1 client, and no backups

> shut down one node

 

There are no backups => any node shutdown leads to partition loss.

If you want to ignore data loss, set partitionLossPolicy = IGNORE [1]

 

[1] 
https://ignite.apache.org/docs/latest/configuring-caches/partition-loss-policy 
 

 

On Wed, Sep 18, 2024 at 12:04 PM <jrov...@identy.io> wrote:

Hi.

 

We are using Apache Ignite in our application, and currently, we are testing 
the behaviour of the system when there are system errors.

 

One of our tests is not working as expected:

*   we have got an Ignite cluster with 2 servers and 1 client, and no 
backups
*   Ignite version 2.16
*   We shut down one node server for several minutes
*   During this time there is no read nor write to Ignite (we do not use 
the DB)
*   When we restart the server node, we expect to recover the system 
smoothly BUT we have exceptions when we query the data: “Failed to execute 
query because cache partition has been lost”

 

We can resolve the problem by resetting the lost partitions, but is this the 
normal behaviour of Ignite? I mean, it is a simple case, and the node should be 
able to rejoin the cluster without problems.

 

Thank you.

 



Re: Failed to execute query because cache partition has been lost

2024-09-18 Thread Pavel Tupitsyn
> 2 servers and 1 client, and no backups
> shut down one node

There are no backups => any node shutdown leads to partition loss.
If you want to ignore data loss, set partitionLossPolicy = IGNORE [1]

[1]
https://ignite.apache.org/docs/latest/configuring-caches/partition-loss-policy
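For reference, the policy from [1] is set per cache; a minimal Spring XML sketch (the cache name `myCache` is a placeholder):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- IGNORE silently resets lost partitions; READ_WRITE_SAFE instead
         fails operations on lost partitions until they are reset. -->
    <property name="partitionLossPolicy" value="IGNORE"/>
</bean>
```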

On Wed, Sep 18, 2024 at 12:04 PM  wrote:

> Hi.
>
>
>
> We are using Apache Ignite in our application, and currently, we are
> testing the behaviour of the system when there are system errors.
>
>
>
> One of our tests is not working as expected:
>
>- we have got an Ignite cluster with 2 servers and 1 client, and no
>backups
>- Ignite version 2.16
>- We shut down one node server for several minutes
>- During this time there is no read nor write to Ignite (we do not use
>the DB)
>- When we restart the server node, we expect to recover the system
>smoothly BUT we have exceptions when we query the data: “Failed to execute
>query because cache partition has been lost”
>
>
>
> We can resolve the problem by resetting the lost partitions, but is this
> the normal behaviour of Ignite? I mean, it is a simple case, and the node
> should be able to rejoin the cluster without problems.
>
>
>
> Thank you.
>
>
>


Re: rejecting communication connection & Failed to process selector key

2024-09-16 Thread MJ
Don't think so.  As shown below, the remote IP 10.254.13.83 is 
another server node. 
--- log
Accepted incoming communication connection [locAddr=/10.254.32.162:47100, 
rmtAddr=/10.254.13.83:35160] 

super=GridNioSessionImpl [locAddr=/10.254.32.162:52542, 
rmtAddr=/10.254.13.83:47100

--- 





So the multiple connections that kept being rejected were between two server 
nodes. What scenarios could cause that? It appears that the original connection 
was shut down or interrupted quickly by one node, but the other node was not 
aware of (or was not informed about) the connection close event. Is there any 
configuration that can help with that?




Thanks,
-MJ

   
Original Email

From: "Pavel Tupitsyn" <ptupit...@apache.org>
Sent Time: 2024/9/16 12:58
To: "user" <user@ignite.apache.org>
Subject: Re: rejecting communication connection & Failed to process selector 
key


Looks like some non-Ignite application connects to the Ignite server, then 
sends unexpected data or disconnects quickly.

Could it be some kind of a security tool, port scanner, or a misconfigured 
service somewhere on the network?


On Mon, Sep 16, 2024 at 3:59 AM MJ <6733...@qq.com> wrote:

Hi Igniters,



 
 
 
I am experiencing the “Failed to process selector key” error once every one or 
two days. Each time, the node accepts and rejects multiple communication 
connections and then throws the exception.
 
The logging below shows “Broken pipe” as the original exception, but it is not 
always “Broken pipe”; occasionally “Failed to process selector key” wraps 
“Connection reset” or “javax.net.ssl.SSLException: Failed to encrypt data (SSL 
engine error) [status=CLOSED, handshakeStatus=NOT_HANDSHAKING]”.
 
 
 
Is there any way to fix this, or can the configuration be improved?
 
 
 
Ignite 2.16.0 / 4 data nodes, running in OpenShift 4
 
 
 
 config of communicationSpi
 


RE: Self management script (Control Scripts) + Ignite .NET Clusters

2024-09-16 Thread satyajit.mandal.barclays.com via user
Hi  Pavel,

Under this [3]:
https://ignite.apache.org/docs/latest/net-specific/net-configuration-options#configure-with-spring-xml


Which Spring property needs to be set to enable SSL for the control script on 
server nodes? Is this right?







Regards
Satyajit





Restricted - External
From: Pavel Tupitsyn 
Sent: Monday, September 16, 2024 12:36 PM
To: Mandal, Satyajit: IT (PUN) 
Cc: user@ignite.apache.org
Subject: Re: Self management script (Control Scripts) + Ignite .NET Clusters


CAUTION: This email originated from outside our organization - 
ptupit...@apache.org. Do not click on links, open attachments, or respond 
unless you recognize the sender and can validate the content is safe.

Hi, control script [1] uses REST API
You can disable it or set up secret keys, SSL, etc, as described in the docs [2]

To do that from .NET, use IgniteConfiguration.SpringConfigUrl property [3]

[1] https://ignite.apache.org/docs/latest/tools/control-script
[2] https://ignite.apache.org/docs/latest/restapi
[3] https://ignite.apache.org/docs/latest/net-specific/net-configuration-options#configure-with-spring-xml

On Mon, Sep 16, 2024 at 9:35 AM <satyajit.man...@barclays.com> wrote:
Hi  Pavel,

How can we prevent self-management scripts (control scripts) from joining the 
cluster when TLS/SSL is enabled? Currently, without certificates, they are able 
to join the cluster even though TLS/SSL is enabled in the Ignite .NET cluster.

Is there any setting on the server nodes that we are missing? We can't find 
this setting in the Ignite .NET library 
(ConnectorConfiguration.sslClientAuth=true).

Nothing is mentioned in this documentation:

https://ignite.apache.org/docs/latest/security/ssl-tls


Found this in the GridGain documentation, but can't find it in the Ignite 
documentation:
https://www.gridgain.com/docs/latest/administrators-guide/security/ssl-tls
Management Tools SSL/TLS Authentication
By default, management scripts such as control.sh|bat, management.sh|bat, and 
snapshot-utility.sh|bat are not required to have client certificates.
To enable client certificate validation, set 
ConnectorConfiguration.sslClientAuth=true on the server nodes.
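Translated to the Spring XML that Ignite .NET can load via SpringConfigUrl, that setting would look roughly like this (a sketch only — the SSL context factory setup is omitted):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="connectorConfiguration">
        <bean class="org.apache.ignite.configuration.ConnectorConfiguration">
            <!-- Require client certificates for control.sh and other
                 connector clients. -->
            <property name="sslEnabled" value="true"/>
            <property name="sslClientAuth" value="true"/>
        </bean>
    </property>
</bean>
```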

Regards
Satyajit









Restricted - External

Barclays Execution Services Limited registered in England. Registered No. 
1767980. Registered office: 1 Churchill Place, London, E14 5HP

Barclays Execution Services Limited provides support and administrative 
services across Barclays group. Barclays Execution Services Limited is an 
appointed representative of Barclays Bank UK plc and Barclays Bank plc. 
Barclays Bank UK plc and Barclays Bank plc are authorised by the Prudential 
Regulation Authority and regulated by the Financial Conduct Authority and the 
Prudential Regulation Authority.

This email and any attachments are confidential and intended solely for the 
addressee and may also be privileged or exempt from disclosure under applicable 
law. If you are not the addressee, or have received this email in error, please 
notify the sender and immediately delete it and any attachments from your 
system. Do not copy, use, disclose or otherwise act on any part of this email 
or its attachments.

Internet communications are not guaranteed to be secure or virus-free. The 
Barclays group does not accept responsibility for any loss arising from 
unauthorised access to, or interference with, any internet communications by 
any third party, or from the transmission of any viruses. Replies to this email 
may be monitored by the Barclays group for operational or business reasons.

Re: Self management script (Control Scripts) + Ignite .NET Clusters

2024-09-16 Thread Pavel Tupitsyn
Hi, control script [1] uses REST API
You can disable it or set up secret keys, SSL, etc, as described in the
docs [2]

To do that from .NET, use IgniteConfiguration.SpringConfigUrl property [3]

[1] https://ignite.apache.org/docs/latest/tools/control-script
[2] https://ignite.apache.org/docs/latest/restapi
[3]
https://ignite.apache.org/docs/latest/net-specific/net-configuration-options#configure-with-spring-xml

On Mon, Sep 16, 2024 at 9:35 AM  wrote:

> Hi  Pavel,
>
>
>
> How can  we prevent  self-management scripts ( Control  scripts)  to
> join  the cluster  which  has TLS/SSL  enabled.  Currently  without
> certificates it is able  to  join the cluster  though  TLS/SSL  is enabled
> in  Ignite .NET Cluster.
>
>
>
> Is  there  any  setting on  server nodes  which  we are missing? Can’t
> find  this setting  in  Ignite .NET library (
> *ConnectorConfiguration.sslClientAuth=true* )
>
>
>
> Under  this  documentation  nothing  is  mentioned
>
>
>
> https://ignite.apache.org/docs/latest/security/ssl-tls
>
>
>
>
>
> Found  this under Gridgain  documentation  but  can’t  find  this  on
> Ignite documentation.
>
> https://www.gridgain.com/docs/latest/administrators-guide/security/ssl-tls
>
> *Management Tools SSL/TLS Authentication*
>
> By default, management scripts such as control.sh|bat, management.sh|bat,
> and snapshot-utility.sh|bat are not required to have client certificates.
>
> To enable client certificate validation, set
> *ConnectorConfiguration.sslClientAuth=true* on the server nodes.
>
>
>
> Regards
>
> Satyajit
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>


Re: rejecting communication connection & Failed to process selector key

2024-09-15 Thread Pavel Tupitsyn
Looks like some non-Ignite application connects to the Ignite server, then
sends unexpected data or disconnects quickly.

Could it be some kind of a security tool, port scanner, or a misconfigured
service somewhere on the network?

On Mon, Sep 16, 2024 at 3:59 AM MJ <6733...@qq.com> wrote:

> Hi Igniters,
>
>
>
> I am experiencing the “Failed to process selector key” error once every
> one or two days. Every time it appears received and rejected multiple
> communication connections and then threw the exception.
>
> Below logging is about “Broken pipe” original exception but not only
> “Broken pipe”, occasionally the “Failed to process selector key” wraps
> “Connection Reset”, “javax.net.ssl.SSLException: Failed to encrypt data
> (SSL engine error) [status=CLOSED, handshakeStatus=NOT_HANDSHAKING”.
>
>
>
> Is there any solution to fix it ? or its configuration can be improved ?
>
>
>
> Ignite 2.16.0 / 4 data nodes, running in openshift 4
>
>
>
>  config of communicationSpi
>
> <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>
>     <property name="…" value="1024"/>
>
>     <property name="…" value="25000"/>
>
>     <property name="…" value="6"/>
>
> </bean>
>
>
>
>
>
> 24-09-15 17:18:35.146 [INFO ]
> grid-nio-worker-tcp-comm-2-#25%TcpCommunicationSpi%
> o.a.i.s.c.t.TcpCommunicationSpi:117 - Accepted incoming communication
> connection [locAddr=/10.254.32.162:47100, rmtAddr=/10.254.13.83:35160]
>
> 24-09-15 17:18:35.147 [INFO ]
> grid-nio-worker-tcp-comm-2-#25%TcpCommunicationSpi%
> o.a.i.s.c.t.TcpCommunicationSpi:117 - Received incoming connection when
> already connected to this node, rejecting
> [locNode=52437bc3-3dfe-4f76-bec6-d2f22f8a5d40,
> rmtNode=7c28b6bc-8991-47a2-b69c-6adba0482713]
>
> 24-09-15 17:18:35.357 [INFO ]
> grid-nio-worker-tcp-comm-3-#26%TcpCommunicationSpi%
> o.a.i.s.c.t.TcpCommunicationSpi:117 - Accepted incoming communication
> connection [locAddr=/10.254.32.162:47100, rmtAddr=/10.254.13.83:35162]
>
> 24-09-15 17:18:35.358 [INFO ]
> grid-nio-worker-tcp-comm-3-#26%TcpCommunicationSpi%
> o.a.i.s.c.t.TcpCommunicationSpi:117 - Received incoming connection when
> already connected to this node, rejecting
> [locNode=52437bc3-3dfe-4f76-bec6-d2f22f8a5d40,
> rmtNode=7c28b6bc-8991-47a2-b69c-6adba0482713]
>
> 24-09-15 17:18:35.568 [INFO ]
> grid-nio-worker-tcp-comm-0-#23%TcpCommunicationSpi%
> o.a.i.s.c.t.TcpCommunicationSpi:117 - Accepted incoming communication
> connection [locAddr=/10.254.32.162:47100, rmtAddr=/10.254.13.83:35164]
>
> 24-09-15 17:18:35.569 [INFO ]
> grid-nio-worker-tcp-comm-0-#23%TcpCommunicationSpi%
> o.a.i.s.c.t.TcpCommunicationSpi:117 - Received incoming connection when
> already connected to this node, rejecting
> [locNode=52437bc3-3dfe-4f76-bec6-d2f22f8a5d40,
> rmtNode=7c28b6bc-8991-47a2-b69c-6adba0482713]
>
> 24-09-15 17:18:35.975 [ERROR]
> grid-nio-worker-tcp-comm-1-#24%TcpCommunicationSpi%
> o.a.i.s.c.t.TcpCommunicationSpi:137 - Failed to process selector key
> [ses=GridSelectorNioSessionImpl [worker=DirectNioClientWorker
> [super=AbstractNioClientWorker [idx=1, bytesRcvd=29406013584, bytesSent=0,
> bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
> [name=grid-nio-worker-tcp-comm-1, igniteInstanceName=TcpCommunicationSpi,
> finished=false, heartbeatTs=1726435114873, hashCode=1144648384,
> interrupted=false,
> runner=grid-nio-worker-tcp-comm-1-#24%TcpCommunicationSpi%]]],
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> inRecovery=GridNioRecoveryDescriptor [acked=20129536, resendCnt=0,
> rcvCnt=19533551, sentCnt=20129879, reserved=true, lastAck=19533551,
> nodeLeft=false, node=TcpDiscoveryNode
> [id=7c28b6bc-8991-47a2-b69c-6adba0482713, consistentId=10.254.13.83,
> 127.0.0.1:47500, addrs=ArrayList [10.254.13.83, 127.0.0.1],
> sockAddrs=HashSet [/10.254.13.83:47500, /127.0.0.1:47500],
> discPort=47500, order=3, intOrder=3, lastExchangeTime=1724822271382,
> loc=false, ver=2.16.0#20231215-sha1:7bde6a42, isClient=false],
> connected=false, connectCnt=205, queueLimit=131072, reserveCnt=260,
> pairedConnections=false], outRecovery=GridNioRecoveryDescriptor
> [acked=20129536, resendCnt=0, rcvCnt=19533551, sentCnt=20129879,
> reserved=true, lastAck=19533551, nodeLeft=false, node=TcpDiscoveryNode
> [id=7c28b6bc-8991-47a2-b69c-6adba0482713, consistentId=10.254.13.83,
> 127.0.0.1:47500, addrs=ArrayList [10.254.13.83, 127.0.0.1],
> sockAddrs=HashSet [/10.254.13.83:47500, /127.0.0.1:47500],
> discPort=47500, order=3, intOrder=3, lastExchangeTime=1724822271382,
> loc=false, ver=2.16.0#20231215-sha1:7bde6a42, isClient=false],
> connected=false, connectCnt=205, queueLimit=131072, reserveCnt=260,
> pairedConnections=false], closeSocket=true,
> outboundMessagesQueueSizeMetric=o.a.i.i.processors.metric.impl.LongAdderMetric@69a257d1,
> super=GridNioSessionImpl [locAddr=/1

Re: Rolling Update

2024-09-13 Thread Stephen Darlington
The baseline topology is still A Thing with memory-only clusters. The
difference is that auto-adjust is enabled by default. But yes, in short you
don't need to worry about the baseline if you don't use native persistence.
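A quick way to see both things — the current baseline and whether auto-adjust is on — is the control script; a sketch (the timeout value is just an example):

```shell
# Print the current cluster state and baseline topology
./control.sh --baseline

# Enable automatic baseline adjustment with a 30-second soft timeout
./control.sh --baseline auto_adjust enable timeout 30000
```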

On Thu, 12 Sept 2024 at 21:21, Humphrey  wrote:

> And about baseline topology, that is only for when you using storage
> right? When using only in memory you don’t have baseline topology but just
> a cluster with pods.
>
> I’ll incorporate the check if the node has joined the cluster.
>
> On 10 Sep 2024, at 23:13, Jeremy McMillan  wrote:
>
> 
> Your pod flag should check baseline topology to see if it has fully joined
> the cluster AND that rebalancing has BOTH started and finished.
>
> There is a race condition if the pod does not immediately join the
> cluster, but checks to see if the cluster is balanced and THEN joins the
> cluster, triggering another rebalance after it's already reported that it
> is ready.
>
> Try to control for that.
>
> On Tue, Sep 10, 2024 at 3:01 AM Humphrey Lopez  wrote:
>
>> Thanks, seems a bit complicated. When I have more time I'll try that
>> approach.
>> For now we still going to (mis) use the Readiness probe to wait for the
>> rebalancing in a smart way. When the pod starts we have a flag that is set
>> to False, then the pod won't get ready until the cluster is rebalanced.
>> When the status of the cluster is rebalanced the pod will get the state to
>> ready and the flag will be set to true. Next Rebalancing triggered by
>> another pod will not affect the already running pod cause the flag will be
>> True.
>>
>> Let's see if this will wait long enough for the cluster to be in a stable
>> phase.
>>
>> Humphrey
>>
>> Op ma 9 sep 2024 om 17:34 schreef Jeremy McMillan :
>>
>>> An operator as I understand it, is just a pod that interacts with your
>>> application and Kubernetes API server as necessary to do what you might be
>>> doing manually.
>>>
>>> https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
>>> https://kubernetes.io/docs/reference/using-api/client-libraries/
>>>
>>> You might start by creating an admin-pod with Ignite control.sh,
>>> sqlline.sh, thin client, etc. tools PLUS kubectl or some other Kubernetes
>>> API client that you can exec into and manually perform all of the rolling
>>> update steps. Once you know you have all the tools and steps complete, you
>>> can try adding scripts to the pod to automate sequences of steps. Then once
>>> the scripts are fairly robust and complete, you can use the admin-pod as a
>>> basis for Kubernetes Job definitions. It's up to you whether you'd like to
>>> continue integrating with Kubernetes further. Next steps would be to create
>>> a CustomResourceDefinition instead of using Kubernetes Job, or
>>> writing/adding a Kubernetes compatible API that does what your Job command
>>> line startup does, but with more control over parameters.
>>>
>>> Please share your results once you've got things working. Best of luck!
>>>
>>> On Fri, Sep 6, 2024 at 10:15 AM Humphrey  wrote:
>>>
 Thanks for the explanation, is there any operator ready for use? Is it
 hard to create own Operator if it doesn’t exist yet?

 Thanks

 On 5 Sep 2024, at 19:39, Jeremy McMillan  wrote:

 
 It is correct for an operator, but not correct for readiness probe.
 It's not your understanding of Ignite metrics. It is your understanding of
 Kubernetes.
 Kubernetes rolling update logic assumes all of your service backend
 nodes are completely independent, but you have chosen a readiness probe
 which reflects how nodes are interacting and interdependent.

 Hypothetically:
   We have bounced one node, and it has rejoined the cluster, and is
 rebalancing.
   If Kubernetes probes this node for readiness, we fail because we are
 rebalancing. The scheduler will block progress of the rolling update.
   If Kubernetes probes any other node for readiness, it will fail
 because we are rebalancing. The scheduler will remove this node from any
 services.
   All the nodes will reflect the state of the cluster: rebalancing.
   No nodes will remain in the service backend. If you are using the
 Kubernetes discovery SPI, the restarted node will find itself unable to
 discover any peers.

 The problem is that Kubernetes interprets the readiness probe as a NODE
 STATE. The cluster.rebalanced metric is a CLUSTER STATE.

 If you had a Kubernetes job that executes Kubectl commands from within
 the cluster, looping over the pods in a StatefulSet and restarting them, it
 would make perfect sense to check cluster.rebalanced and block until
 rebalancing finishes, but Kubernetes does something different with
 readiness probes based on some assumptions about clustering which do not
 apply to Ignite.

 On Thu, Sep 5, 2024 at 11:29 AM Humphrey Lopez 
 wrote:

> Yes I’m trying to read the cluster.rebalanced metric from the JMX mBean, is
> that the correct one?
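The Kubernetes-job approach Jeremy describes above could be sketched as a shell loop — pod names, namespace defaults, and the rebalance check are all assumptions; the cluster.rebalanced poll has to be filled in for your setup:

```shell
#!/bin/sh
# Rolling restart sketch: one pod at a time, waiting for the pod to become
# Ready and for rebalancing to finish before touching the next pod.
for pod in ignite-0 ignite-1 ignite-2; do
  kubectl delete pod "$pod"
  kubectl wait --for=condition=Ready "pod/$pod" --timeout=10m
  # Placeholder: poll the cluster.rebalanced metric (via JMX or the REST
  # API) here and block until it reports true before continuing.
done
```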

Re: Rolling Update

2024-09-12 Thread Humphrey
And about baseline topology — that is only for when you are using storage, 
right? When using only in-memory you don’t have a baseline topology, just a 
cluster with pods.

I’ll incorporate the check that the node has joined the cluster.

On 10 Sep 2024, at 23:13, Jeremy McMillan wrote:

Your pod flag should check baseline topology to see if it has fully joined the 
cluster AND that rebalancing has BOTH started and finished.

There is a race condition if the pod does not immediately join the cluster, 
but checks to see if the cluster is balanced and THEN joins the cluster, 
triggering another rebalance after it's already reported that it is ready.

Try to control for that.

Re: Rolling Update

2024-09-12 Thread Humphrey
Thanks, I’ll incorporate that check of joining the cluster as well. I
didn’t know about that race condition.

Humphrey

On 10 Sep 2024, at 23:13, Jeremy McMillan  wrote:

> Your pod flag should check baseline topology to see if it has fully
> joined the cluster AND that rebalancing has BOTH started and finished.
>
> There is a race condition if the pod does not immediately join the
> cluster, but checks to see if the cluster is balanced and THEN joins the
> cluster, triggering another rebalance after it's already reported that it
> is ready. Try to control for that.
>
> On Tue, Sep 10, 2024 at 3:01 AM Humphrey Lopez  wrote:

Re: Rolling Update

2024-09-10 Thread Humphrey Lopez
Thanks, seems a bit complicated. When I have more time I'll try that
approach.
For now we are still going to (mis)use the readiness probe to wait for the
rebalancing in a smart way. When the pod starts we have a flag that is set
to False, and the pod won't become ready until the cluster is rebalanced.
When the cluster reports that it is rebalanced, the pod becomes ready and
the flag is set to True. The next rebalancing, triggered by another pod,
will not affect the already running pod because the flag will be
True.

Let's see if this will wait long enough for the cluster to be in a stable
phase.

Humphrey
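A minimal sketch of that sticky flag in plain Java (the class and method
names are hypothetical; the boolean input would come from reading Ignite's
cluster.rebalanced metric, e.g. over JMX):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sticky readiness flag: starts false, flips to true the first time the
// cluster reports it is rebalanced, and stays true afterwards so later
// rebalances (triggered by other pods restarting) don't mark this pod unready.
public class StickyReadiness {
    private final AtomicBoolean everRebalanced = new AtomicBoolean(false);

    // clusterRebalanced would be fed from the cluster.rebalanced metric.
    public boolean isReady(boolean clusterRebalanced) {
        if (clusterRebalanced)
            everRebalanced.set(true);
        return everRebalanced.get();
    }

    public static void main(String[] args) {
        StickyReadiness probe = new StickyReadiness();
        System.out.println(probe.isReady(false)); // false: still rebalancing
        System.out.println(probe.isReady(true));  // true: rebalanced once
        System.out.println(probe.isReady(false)); // true: flag is sticky
    }
}
```

Wiring this into a Spring Boot actuator readiness endpoint is then just a
matter of returning `isReady(...)` from a custom health indicator.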

Op ma 9 sep 2024 om 17:34 schreef Jeremy McMillan :

> An operator as I understand it, is just a pod that interacts with your
> application and Kubernetes API server as necessary to do what you might be
> doing manually.
>
> https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
> https://kubernetes.io/docs/reference/using-api/client-libraries/
>
> You might start by creating an admin-pod with Ignite control.sh,
> sqlline.sh, thin client, etc. tools PLUS kubectl or some other Kubernetes
> API client that you can exec into and manually perform all of the rolling
> update steps. Once you know you have all the tools and steps complete, you
> can try adding scripts to the pod to automate sequences of steps. Then once
> the scripts are fairly robust and complete, you can use the admin-pod as a
> basis for Kubernetes Job definitions. It's up to you whether you'd like to
> continue integrating with Kubernetes further. Next steps would be to create
> a CustomResourceDefinition instead of using Kubernetes Job, or
> writing/adding a Kubernetes compatible API that does what your Job command
> line startup does, but with more control over parameters.
>
> Please share your results once you've got things working. Best of luck!
>
> On Fri, Sep 6, 2024 at 10:15 AM Humphrey  wrote:
>
>> Thanks for the explanation, is there any operator ready for use? Is it
>> hard to create your own Operator if one doesn’t exist yet?
>>
>> Thanks
>>
>> On 5 Sep 2024, at 19:39, Jeremy McMillan  wrote:
>>
>> 
>> It is correct for an operator, but not correct for a readiness probe. It's
>> not your understanding of Ignite metrics. It is your understanding of
>> Kubernetes.
>> Kubernetes rolling update logic assumes all of your service backend nodes
>> are completely independent, but you have chosen a readiness probe which
>> reflects how nodes are interacting and interdependent.
>>
>> Hypothetically:
>>   We have bounced one node, and it has rejoined the cluster, and is
>> rebalancing.
>>   If Kubernetes probes this node for readiness, we fail because we are
>> rebalancing. The scheduler will block progress of the rolling update.
>>   If Kubernetes probes any other node for readiness, it will fail because
>> we are rebalancing. The scheduler will remove this node from any services.
>>   All the nodes will reflect the state of the cluster: rebalancing.
>>   No nodes will remain in the service backend. If you are using the
>> Kubernetes discovery SPI, the restarted node will find itself unable to
>> discover any peers.
>>
>> The problem is that Kubernetes interprets the readiness probe as a NODE
>> STATE. The cluster.rebalanced metric is a CLUSTER STATE.
>>
>> If you had a Kubernetes job that executes kubectl commands from within
>> the cluster, looping over the pods in a StatefulSet and restarting them, it
>> would make perfect sense to check cluster.rebalanced and block until
>> rebalancing finishes, but Kubernetes does something different with
>> readiness probes based on some assumptions about clustering which do not
>> apply to Ignite.
>>
>> On Thu, Sep 5, 2024 at 11:29 AM Humphrey Lopez 
>> wrote:
>>
>>> Yes I’m trying to read the cluster.rebalanced metric from the JMX mBean,
>>> is that the correct one? I’ve built that into the readiness endpoint from
>>> actuator and let Kubernetes wait for the cluster to be ready before moving to
>>> the next pod.
>>>
>>> Humphrey
>>>
>>> On 5 Sep 2024, at 17:34, Jeremy McMillan  wrote:
>>>
>>> 
>>> I assume you have created your caches/tables with backups>=1.
>>>
>>> You should restart one node at a time, and wait until the restarted node
>>> has rejoined the cluster, then wait for rebalancing to begin, then wait for
>>> rebalancing to finish before restarting the next node. Kubernetes readiness
>>> probes aren't sophisticated enough. "Node ready" state isn't the same thing
>>> as "Cluster ready" state, but Kubernetes scheduler can't distinguish. This
>>> should be handled by an operator, either human, or a Kubernetes automated
>>> one.
>>>
>>> On Tue, Sep 3, 2024 at 1:13 PM Humphrey  wrote:
>>>
 Thanks, I meant Rolling Update of the same version of Ignite (2.16).
 Not upgrade to a new version. We have our ignite embedded in Spring Boot
 application, and when changing code we need to deploy new version of the
 jar.

 Humphrey

 On 3 Sep 2024, at 19:24, Gianluca Bo

Re: Rolling Update

2024-09-09 Thread Jeremy McMillan
An operator as I understand it, is just a pod that interacts with your
application and Kubernetes API server as necessary to do what you might be
doing manually.

https://kubernetes.io/docs/concepts/extend-kubernetes/operator/
https://kubernetes.io/docs/reference/using-api/client-libraries/

You might start by creating an admin-pod with Ignite control.sh,
sqlline.sh, thin client, etc. tools PLUS kubectl or some other Kubernetes
API client that you can exec into and manually perform all of the rolling
update steps. Once you know you have all the tools and steps complete, you
can try adding scripts to the pod to automate sequences of steps. Then once
the scripts are fairly robust and complete, you can use the admin-pod as a
basis for Kubernetes Job definitions. It's up to you whether you'd like to
continue integrating with Kubernetes further. Next steps would be to create
a CustomResourceDefinition instead of using Kubernetes Job, or
writing/adding a Kubernetes compatible API that does what your Job command
line startup does, but with more control over parameters.

Please share your results once you've got things working. Best of luck!

On Fri, Sep 6, 2024 at 10:15 AM Humphrey  wrote:

> Thanks for the explanation, is there any operator ready for use? Is it
> hard to create your own Operator if one doesn’t exist yet?
>
> Thanks
>
> On 5 Sep 2024, at 19:39, Jeremy McMillan  wrote:
>
> 
> It is correct for an operator, but not correct for a readiness probe. It's
> not your understanding of Ignite metrics. It is your understanding of
> Kubernetes.
> Kubernetes rolling update logic assumes all of your service backend nodes
> are completely independent, but you have chosen a readiness probe which
> reflects how nodes are interacting and interdependent.
>
> Hypothetically:
>   We have bounced one node, and it has rejoined the cluster, and is
> rebalancing.
>   If Kubernetes probes this node for readiness, we fail because we are
> rebalancing. The scheduler will block progress of the rolling update.
>   If Kubernetes probes any other node for readiness, it will fail because
> we are rebalancing. The scheduler will remove this node from any services.
>   All the nodes will reflect the state of the cluster: rebalancing.
>   No nodes will remain in the service backend. If you are using the
> Kubernetes discovery SPI, the restarted node will find itself unable to
> discover any peers.
>
> The problem is that Kubernetes interprets the readiness probe as a NODE
> STATE. The cluster.rebalanced metric is a CLUSTER STATE.
>
> If you had a Kubernetes job that executes kubectl commands from within the
> cluster, looping over the pods in a StatefulSet and restarting them, it
> would make perfect sense to check cluster.rebalanced and block until
> rebalancing finishes, but Kubernetes does something different with
> readiness probes based on some assumptions about clustering which do not
> apply to Ignite.
>
> On Thu, Sep 5, 2024 at 11:29 AM Humphrey Lopez  wrote:
>
>> Yes I’m trying to read the cluster.rebalanced metric from the JMX mBean,
>> is that the correct one? I’ve built that into the readiness endpoint from
>> actuator and let Kubernetes wait for the cluster to be ready before moving to
>> the next pod.
>>
>> Humphrey
>>
>> On 5 Sep 2024, at 17:34, Jeremy McMillan  wrote:
>>
>> 
>> I assume you have created your caches/tables with backups>=1.
>>
>> You should restart one node at a time, and wait until the restarted node
>> has rejoined the cluster, then wait for rebalancing to begin, then wait for
>> rebalancing to finish before restarting the next node. Kubernetes readiness
>> probes aren't sophisticated enough. "Node ready" state isn't the same thing
>> as "Cluster ready" state, but Kubernetes scheduler can't distinguish. This
>> should be handled by an operator, either human, or a Kubernetes automated
>> one.
>>
>> On Tue, Sep 3, 2024 at 1:13 PM Humphrey  wrote:
>>
>>> Thanks, I meant Rolling Update of the same version of Ignite (2.16), not
>>> an upgrade to a new version. We have our Ignite embedded in a Spring Boot
>>> application, and when changing code we need to deploy a new version of
>>> the jar.
>>>
>>> Humphrey
>>>
>>> On 3 Sep 2024, at 19:24, Gianluca Bonetti 
>>> wrote:
>>>
>>> 
>>> Hello
>>>
>>> If you want to upgrade Apache Ignite version, this is not supported by
>>> Apache Ignite
>>>
>>> "Ignite cluster cannot have nodes that run on different Ignite versions.
>>> You need to stop the cluster and start it again on the new Ignite version."
>>> https://ignite.apache.org/docs/latest/installation/upgrades
>>>
>>> If you need rolling upgrades you can upgrade to GridGain, which brings
>>> rolling upgrades together with many other interesting features:
>>> "Rolling Upgrades is a feature of GridGain Enterprise and Ultimate
>>> Edition that allows nodes with different GridGain versions to coexist in a
>>> cluster while you roll out a new version. This prevents downtime when
>>> performing software upgrades."
>>> https://www.grid

Re: Rolling Update

2024-09-05 Thread Jeremy McMillan
It is correct for an operator, but not correct for a readiness probe. It's
not your understanding of Ignite metrics. It is your understanding of
Kubernetes.
Kubernetes rolling update logic assumes all of your service backend nodes
are completely independent, but you have chosen a readiness probe which
reflects how nodes are interacting and interdependent.

Hypothetically:
  We have bounced one node, and it has rejoined the cluster, and is
rebalancing.
  If Kubernetes probes this node for readiness, we fail because we are
rebalancing. The scheduler will block progress of the rolling update.
  If Kubernetes probes any other node for readiness, it will fail because
we are rebalancing. The scheduler will remove this node from any services.
  All the nodes will reflect the state of the cluster: rebalancing.
  No nodes will remain in the service backend. If you are using the
Kubernetes discovery SPI, the restarted node will find itself unable to
discover any peers.

The problem is that Kubernetes interprets the readiness probe as a NODE
STATE. The cluster.rebalanced metric is a CLUSTER STATE.

If you had a Kubernetes job that executes kubectl commands from within the
cluster, looping over the pods in a StatefulSet and restarting them, it
would make perfect sense to check cluster.rebalanced and block until
rebalancing finishes, but Kubernetes does something different with
readiness probes based on some assumptions about clustering which do not
apply to Ignite.
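For a Kubernetes Job of that kind, the check could read the metric over JMX.
Below is a sketch using a stand-in MBean so it runs without an Ignite node;
the real check would target Ignite's cluster MBean and its "Rebalanced"
attribute instead, and the exact ObjectName depends on your instance name,
so treat those parts as assumptions:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class RebalanceCheck {

    // Minimal stand-in MBean so the sketch is runnable without Ignite.
    public interface DemoClusterMBean { boolean isRebalanced(); }
    public static class DemoCluster implements DemoClusterMBean {
        public boolean isRebalanced() { return true; }
    }

    // Reads the "Rebalanced" boolean attribute from the given MBean.
    public static boolean readRebalanced(MBeanServer server, ObjectName name)
            throws Exception {
        return (Boolean) server.getAttribute(name, "Rebalanced");
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=Cluster");
        server.registerMBean(new DemoCluster(), name);
        System.out.println(readRebalanced(server, name)); // prints "true"
    }
}
```

The Job would loop over pods, and between restarts block until this check
returns true, which avoids the readiness-probe misinterpretation above.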

On Thu, Sep 5, 2024 at 11:29 AM Humphrey Lopez  wrote:

> Yes I’m trying to read the cluster.rebalanced metric from the JMX mBean,
> is that the correct one? I’ve built that into the readiness endpoint from
> actuator and let Kubernetes wait for the cluster to be ready before moving to
> the next pod.
>
> Humphrey
>
> On 5 Sep 2024, at 17:34, Jeremy McMillan  wrote:
>
> 
> I assume you have created your caches/tables with backups>=1.
>
> You should restart one node at a time, and wait until the restarted node
> has rejoined the cluster, then wait for rebalancing to begin, then wait for
> rebalancing to finish before restarting the next node. Kubernetes readiness
> probes aren't sophisticated enough. "Node ready" state isn't the same thing
> as "Cluster ready" state, but Kubernetes scheduler can't distinguish. This
> should be handled by an operator, either human, or a Kubernetes automated
> one.
>
> On Tue, Sep 3, 2024 at 1:13 PM Humphrey  wrote:
>
>> Thanks, I meant Rolling Update of the same version of Ignite (2.16), not
>> an upgrade to a new version. We have our Ignite embedded in a Spring Boot
>> application, and when changing code we need to deploy a new version of
>> the jar.
>>
>> Humphrey
>>
>> On 3 Sep 2024, at 19:24, Gianluca Bonetti 
>> wrote:
>>
>> 
>> Hello
>>
>> If you want to upgrade Apache Ignite version, this is not supported by
>> Apache Ignite
>>
>> "Ignite cluster cannot have nodes that run on different Ignite versions.
>> You need to stop the cluster and start it again on the new Ignite version."
>> https://ignite.apache.org/docs/latest/installation/upgrades
>>
>> If you need rolling upgrades you can upgrade to GridGain, which brings
>> rolling upgrades together with many other interesting features:
>> "Rolling Upgrades is a feature of GridGain Enterprise and Ultimate
>> Edition that allows nodes with different GridGain versions to coexist in a
>> cluster while you roll out a new version. This prevents downtime when
>> performing software upgrades."
>> https://www.gridgain.com/docs/latest/installation-guide/rolling-upgrades
>>
>> Cheers
>> Gianluca Bonetti
>>
>> On Tue, 3 Sept 2024 at 18:15, Humphrey Lopez  wrote:
>>
>>> Hello, we have several pods with Ignite caches running in Kubernetes. We
>>> only use memory mode (not persistence) and want to perform a rolling
>>> update without losing data. What metric should we monitor to know when it’s
>>> safe to replace the next pod?
>>>
>>> We have tried the Cluster.Rebalanced (1) metric from JMX in a readiness
>>> probe but we still end up losing data from the caches.
>>>
>>> 1)
>>> https://ignite.apache.org/docs/latest/monitoring-metrics/new-metrics#cluster
>>>
>>> Should we use another mechanism or metric for determining the readiness
>>> of the newly started pod?
>>>
>>>
>>> Humphrey
>>>
>>


Re: Rolling Update

2024-09-05 Thread Jeremy McMillan
I assume you have created your caches/tables with backups>=1.

You should restart one node at a time, and wait until the restarted node
has rejoined the cluster, then wait for rebalancing to begin, then wait for
rebalancing to finish before restarting the next node. Kubernetes readiness
probes aren't sophisticated enough. "Node ready" state isn't the same thing
as "Cluster ready" state, but Kubernetes scheduler can't distinguish. This
should be handled by an operator, either human, or a Kubernetes automated
one.
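Sketched as code, with the cluster interactions stubbed out (the helper
names are hypothetical; a real operator would drive kubectl and query
Ignite via JMX, control.sh, or a thin client):

```java
import java.util.List;

public class RollingRestart {
    // Hypothetical operations -- stand-ins for kubectl and Ignite queries.
    public interface Cluster {
        void restart(String node);        // e.g. delete the pod
        boolean hasJoined(String node);   // node is back in the baseline topology
        boolean rebalanceFinished();      // rebalancing has started and finished
    }

    // Restart one node at a time; proceed only after the node has rejoined
    // the cluster and rebalancing has finished.
    public static void rollingRestart(Cluster c, List<String> nodes)
            throws InterruptedException {
        for (String node : nodes) {
            c.restart(node);
            while (!c.hasJoined(node)) Thread.sleep(50);
            while (!c.rebalanceFinished()) Thread.sleep(50);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Cluster demo = new Cluster() {
            public void restart(String n) { System.out.println("restart " + n); }
            public boolean hasJoined(String n) { return true; }
            public boolean rebalanceFinished() { return true; }
        };
        rollingRestart(demo, List.of("node-0", "node-1", "node-2"));
    }
}
```

The point of the structure is the ordering: the join check and rebalance
check happen between restarts, which a per-pod readiness probe cannot express.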

On Tue, Sep 3, 2024 at 1:13 PM Humphrey  wrote:

> Thanks, I meant Rolling Update of the same version of Ignite (2.16), not
> an upgrade to a new version. We have our Ignite embedded in a Spring Boot
> application, and when changing code we need to deploy a new version of
> the jar.
>
> Humphrey
>
> On 3 Sep 2024, at 19:24, Gianluca Bonetti 
> wrote:
>
> 
> Hello
>
> If you want to upgrade Apache Ignite version, this is not supported by
> Apache Ignite
>
> "Ignite cluster cannot have nodes that run on different Ignite versions.
> You need to stop the cluster and start it again on the new Ignite version."
> https://ignite.apache.org/docs/latest/installation/upgrades
>
> If you need rolling upgrades you can upgrade to GridGain, which brings
> rolling upgrades together with many other interesting features:
> "Rolling Upgrades is a feature of GridGain Enterprise and Ultimate Edition
> that allows nodes with different GridGain versions to coexist in a cluster
> while you roll out a new version. This prevents downtime when performing
> software upgrades."
> https://www.gridgain.com/docs/latest/installation-guide/rolling-upgrades
>
> Cheers
> Gianluca Bonetti
>
> On Tue, 3 Sept 2024 at 18:15, Humphrey Lopez  wrote:
>
>> Hello, we have several pods with Ignite caches running in Kubernetes. We
>> only use memory mode (not persistence) and want to perform a rolling
>> update without losing data. What metric should we monitor to know when it’s
>> safe to replace the next pod?
>>
>> We have tried the Cluster.Rebalanced (1) metric from JMX in a readiness
>> probe but we still end up losing data from the caches.
>>
>> 1)
>> https://ignite.apache.org/docs/latest/monitoring-metrics/new-metrics#cluster
>>
>> Should we use another mechanism or metric for determining the readiness
>> of the newly started pod?
>>
>>
>> Humphrey
>>
>


Re: Rolling Update

2024-09-03 Thread Humphrey
Thanks, I meant Rolling Update of the same version of Ignite (2.16), not an
upgrade to a new version. We have our Ignite embedded in a Spring Boot
application, and when changing code we need to deploy a new version of the
jar.

Humphrey

On 3 Sep 2024, at 19:24, Gianluca Bonetti  wrote:

> Hello
>
> If you want to upgrade the Apache Ignite version, this is not supported by
> Apache Ignite:
>
> "Ignite cluster cannot have nodes that run on different Ignite versions.
> You need to stop the cluster and start it again on the new Ignite version."
> https://ignite.apache.org/docs/latest/installation/upgrades
>
> If you need rolling upgrades you can upgrade to GridGain, which brings
> rolling upgrades together with many other interesting features:
> "Rolling Upgrades is a feature of GridGain Enterprise and Ultimate Edition
> that allows nodes with different GridGain versions to coexist in a cluster
> while you roll out a new version. This prevents downtime when performing
> software upgrades."
> https://www.gridgain.com/docs/latest/installation-guide/rolling-upgrades
>
> Cheers
> Gianluca Bonetti
>
> On Tue, 3 Sept 2024 at 18:15, Humphrey Lopez  wrote:


Re: Rolling Update

2024-09-03 Thread Gianluca Bonetti
Hello

If you want to upgrade Apache Ignite version, this is not supported by
Apache Ignite

"Ignite cluster cannot have nodes that run on different Ignite versions.
You need to stop the cluster and start it again on the new Ignite version."
https://ignite.apache.org/docs/latest/installation/upgrades

If you need rolling upgrades you can upgrade to GridGain, which brings
rolling upgrades together with many other interesting features:
"Rolling Upgrades is a feature of GridGain Enterprise and Ultimate Edition
that allows nodes with different GridGain versions to coexist in a cluster
while you roll out a new version. This prevents downtime when performing
software upgrades."
https://www.gridgain.com/docs/latest/installation-guide/rolling-upgrades

Cheers
Gianluca Bonetti

On Tue, 3 Sept 2024 at 18:15, Humphrey Lopez  wrote:

> Hello, we have several pods with Ignite caches running in Kubernetes. We
> only use memory mode (not persistence) and want to perform a rolling
> update without losing data. What metric should we monitor to know when it’s
> safe to replace the next pod?
>
> We have tried the Cluster.Rebalanced (1) metric from JMX in a readiness
> probe but we still end up losing data from the caches.
>
> 1)
> https://ignite.apache.org/docs/latest/monitoring-metrics/new-metrics#cluster
>
> Should we use another mechanism or metric for determining the readiness of
> the newly started pod?
>
>
> Humphrey
>


Re: Does the JDBC thin driver support partition aware execution of INSERT statements?

2024-08-29 Thread Pavel Tupitsyn
A) From what I can see - no, it resolves a hostname to a single address
B) Not sure about this

On Thu, Aug 29, 2024 at 5:32 PM Jeremy McMillan  wrote:

> Thanks Pavel!
>
> That is really cool, but looks like it only works in very carefully
> managed situations.
>
> Do you happen to know off the top of your head
>
> A) the JDBC thin driver client needs to have a complete list of node
> addresses.
>   Q: will it collect IPs from DNS hostnames that return multiple A or AAAA
> records for each node?
>
> B) Q: does it only work for single row INSERT statements, or will it also
> work for batched INSERT containing many rows worth of VALUES?
>
> On Thu, Aug 29, 2024 at 12:27 AM Pavel Tupitsyn 
> wrote:
>
>> JDBC driver does support partition awareness [1]
>> And it works for INSERT statements too, as I understand [2]
>>
>> > When a query is executed for the first time, the driver receives the
>> partition distribution for the table
>> > that is being queried and saves it for future use locally.
>> > When you query this table next time, the driver uses the partition
>> distribution
>> > to determine where the data being queried is located to send the query
>> to the right nodes.
>>
>> [1]
>> https://ignite.apache.org/docs/latest/SQL/JDBC/jdbc-driver#partition-awareness
>> [2]
>> https://ignite.apache.org/docs/latest/SQL/JDBC/jdbc-driver#partitionAwarenessSQLCacheSize
>>
>> On Wed, Aug 28, 2024 at 11:24 PM Jeremy McMillan  wrote:
>>
>>> Probably not in the way you might expect from the question. From the
>>> documentation:
>>> "The driver connects to one of the cluster nodes and forwards all the
>>> queries to it for final execution. The node handles the query distribution
>>> and the result’s aggregations. Then the result is sent back to the client
>>> application."
>>>
>>> The JDBC client has a persistent connection to one cluster node, to
>>> which all queries are sent. The JDBC client does not connect to multiple
>>> nodes to handle multiple INSERTs.
>>>
>>> On Wed, Aug 28, 2024 at 3:45 AM 38797715 <38797...@qq.com> wrote:
>>>
 Does the JDBC thin driver support partition aware execution of INSERT
 statements?

>>>


Re: Does the JDBC thin driver support partition aware execution of INSERT statements?

2024-08-29 Thread Jeremy McMillan
Thanks Pavel!

That is really cool, but looks like it only works in very carefully managed
situations.

Do you happen to know off the top of your head

A) the JDBC thin driver client needs to have a complete list of node
addresses.
  Q: will it collect IPs from DNS hostnames that return multiple A or AAAA
records for each node?

B) Q: does it only work for single row INSERT statements, or will it also
work for batched INSERT containing many rows worth of VALUES?

On Thu, Aug 29, 2024 at 12:27 AM Pavel Tupitsyn 
wrote:

> JDBC driver does support partition awareness [1]
> And it works for INSERT statements too, as I understand [2]
>
> > When a query is executed for the first time, the driver receives the
> partition distribution for the table
> > that is being queried and saves it for future use locally.
> > When you query this table next time, the driver uses the partition
> distribution
> > to determine where the data being queried is located to send the query
> to the right nodes.
>
> [1]
> https://ignite.apache.org/docs/latest/SQL/JDBC/jdbc-driver#partition-awareness
> [2]
> https://ignite.apache.org/docs/latest/SQL/JDBC/jdbc-driver#partitionAwarenessSQLCacheSize
>
> On Wed, Aug 28, 2024 at 11:24 PM Jeremy McMillan  wrote:
>
>> Probably not in the way you might expect from the question. From the
>> documentation:
>> "The driver connects to one of the cluster nodes and forwards all the
>> queries to it for final execution. The node handles the query distribution
>> and the result’s aggregations. Then the result is sent back to the client
>> application."
>>
>> The JDBC client has a persistent connection to one cluster node, to which
>> all queries are sent. The JDBC client does not connect to multiple nodes to
>> handle multiple INSERTs.
>>
>> On Wed, Aug 28, 2024 at 3:45 AM 38797715 <38797...@qq.com> wrote:
>>
>>> Does the JDBC thin driver support partition aware execution of INSERT
>>> statements?
>>>
>>


Re: Does the JDBC thin driver support partition aware execution of INSERT statements?

2024-08-28 Thread Pavel Tupitsyn
JDBC driver does support partition awareness [1]
And it works for INSERT statements too, as I understand [2]

> When a query is executed for the first time, the driver receives the
partition distribution for the table
> that is being queried and saves it for future use locally.
> When you query this table next time, the driver uses the partition
distribution
> to determine where the data being queried is located to send the query to
the right nodes.

[1]
https://ignite.apache.org/docs/latest/SQL/JDBC/jdbc-driver#partition-awareness
[2]
https://ignite.apache.org/docs/latest/SQL/JDBC/jdbc-driver#partitionAwarenessSQLCacheSize
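As a sketch of wiring this up, assuming the `partitionAwareness` connection
property and default thin-client port 10800 from the docs above (the table
and column names in the comment are made up):

```java
import java.util.List;

public class ThinJdbcUrl {
    // Partition awareness needs every server endpoint in the URL so the
    // driver can open connections to all nodes and route each statement
    // to the node that owns the key.
    public static String url(List<String> endpoints) {
        return "jdbc:ignite:thin://" + String.join(",", endpoints)
            + ";partitionAwareness=true";
    }

    public static void main(String[] args) {
        System.out.println(url(List.of("node1:10800", "node2:10800")));
        // With the Ignite JDBC driver on the classpath, usage would look like:
        //   try (Connection conn = DriverManager.getConnection(url(...));
        //        PreparedStatement ps = conn.prepareStatement(
        //            "INSERT INTO city (id, name) VALUES (?, ?)")) {
        //       ps.setInt(1, 1);
        //       ps.setString(2, "Oslo");
        //       ps.executeUpdate();
        //   }
    }
}
```

Listing all endpoints explicitly matters because, per the answer below, the
driver resolves a hostname to a single address rather than expanding DNS.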

On Wed, Aug 28, 2024 at 11:24 PM Jeremy McMillan  wrote:

> Probably not in the way you might expect from the question. From the
> documentation:
> "The driver connects to one of the cluster nodes and forwards all the
> queries to it for final execution. The node handles the query distribution
> and the result’s aggregations. Then the result is sent back to the client
> application."
>
> The JDBC client has a persistent connection to one cluster node, to which
> all queries are sent. The JDBC client does not connect to multiple nodes to
> handle multiple INSERTs.
>
> On Wed, Aug 28, 2024 at 3:45 AM 38797715 <38797...@qq.com> wrote:
>
>> Does the JDBC thin driver support partition aware execution of INSERT
>> statements?
>>
>


Re: Does the JDBC thin driver support partition aware execution of INSERT statements?

2024-08-28 Thread Jeremy McMillan
Probably not in the way you might expect from the question. From the
documentation:
"The driver connects to one of the cluster nodes and forwards all the
queries to it for final execution. The node handles the query distribution
and the result’s aggregations. Then the result is sent back to the client
application."

The JDBC client has a persistent connection to one cluster node, to which
all queries are sent. The JDBC client does not connect to multiple nodes to
handle multiple INSERTs.

On Wed, Aug 28, 2024 at 3:45 AM 38797715 <38797...@qq.com> wrote:

> Does the JDBC thin driver support partition aware execution of INSERT
> statements?
>


Re: Ignite Thick Client running Node Filters???

2024-08-23 Thread Jeremy McMillan
I think you might be looking for events.

https://ignite.apache.org/docs/latest/events/listening-to-events#enabling-events
https://ignite.apache.org/docs/latest/events/events#cluster-state-changed-events

On Fri, Aug 23, 2024 at 11:59 AM Gregory Sylvain 
wrote:

> Hi,
>
> Thanks for the reply.
>
> I was looking down this road, however, everything is automated and the
> cluster is activated by a script when all ServerNodes are in the baseline.
>
> Is there a hook that can be called when the cluster is activated to do
> this work?
>
> Thanks.
> Greg
>
>
> On Fri, Aug 23, 2024 at 12:29 PM Jeremy McMillan  wrote:
>
>> The example in the documentation explaining nodeFilter uses node
>> attributes as a condition, but the logic might include dynamic node state
>> like performance metrics to decide whether to run a service or not.
>>
>> It seems like the behavior you want/expect might be implemented better
>> using clusterGroup
>> https://ignite.apache.org/docs/2.15.0/services/services#clustergroup
>>
>> You would need to do something like (
>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cluster/ClusterGroup.html
>> )
>>
>> // Cluster group over all nodes that have the user attribute "group"
>> // set to the value "worker".
>> ClusterGroup workerNodes = cluster.forAttribute("group", "worker");
>>
>> .. and then start services with this
>>
>> https://ignite.apache.org/releases/2.15.0/javadoc/org/apache/ignite/IgniteSpringBean.html#services-org.apache.ignite.cluster.ClusterGroup-
>>
>> On Fri, Aug 23, 2024 at 8:35 AM Gregory Sylvain 
>> wrote:
>>
>>> Hi Igniters,
>>>
>>> I'm running Ignite 2.15 cluster with native persistence enabled running
>>> on RHEL 8 VMs.
>>>
>>> I am running a Service cluster of 5 ServerNodes and 32 Thick Clients.
>>>
>>> Each Service has a User Attribute that indicates which service to run.
>>> Each ServerNode sets two User Attributes - indicating it should run two
>>> services.
>>>
>>> When the Cluster starts up from nothing, it sets the BLT and starts all
>>> services as expected.
>>>
>>> After the BLT is set, the cluster ports are opened (via firewalld) to
>>> allow the clients to connect to the cluster and start utilizing the
>>> services offered.
>>>
>>> If, after this point, a BLT cluster node restarts and drops out of the
>>> cluster and then re-joins, the Node Filter's apply() method is invoked on
>>> all ServerNodes *and *Thick Clients!
>>>
>>>
>>>- Q1: Why is a Node Filter running on a Thick Client and can I
>>>disable this?
>>>
>>>
>>> So, if a Node Filter is invoked on a Thick Client and it gets passed a
>>> ClusterNode representing a ServerNode that should run a specific service,
>>> the filter should return *true*, according to the API.  However, I do
>>> not want Clients to run services.
>>>
>>>
>>>
>>>- Q2: Can I limit the Node Filter invocations to only BLT nodes (or
>>>at least only Server Nodes) ?
>>>- Q3: If Node Filters are intended to run on Thick Clients as well,
>>>can I just return false from the apply method and how does that affect 
>>> the
>>>semantics of service balancing that I am trying to achieve?
>>>
>>>
>>>
>>>
>>> Thanks in advance,
>>> Greg
>>>
>>>
>>> --
>>>
>>> *Greg Sylvain*
>>>
>>> Software Architect/Lead Developer on XOComm
>>>
>>> Booz | Allen | Hamilton
>>>
>>>
>>>
>>> sylvain_greg...@bah.com
>>>
>>> cell: 571.236.8951
>>>
>>> ofc: 703.633.3195
>>>
>>> Chantilly, VA
>>>
>>>
>>>
>>


Re: Ignite Thick Client running Node Filters???

2024-08-23 Thread Gregory Sylvain
Hi,

Thanks for the reply.

I was looking down this road, however, everything is automated and the
cluster is activated by a script when all ServerNodes are in the baseline.

Is there a hook that can be called when the cluster is activated to do this
work?
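One candidate hook (an assumption based on the Ignite 2.x events API, not something confirmed in this thread) is a local listener for the cluster-activation event, which fires when the cluster transitions to ACTIVE. The event type must first be enabled on each node via IgniteConfiguration.setIncludeEventTypes:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.events.EventType;

public class ActivationHook {
    // Call once on each server node after Ignition.start().
    // Requires the event to be enabled first, e.g.:
    // igniteCfg.setIncludeEventTypes(EventType.EVT_CLUSTER_ACTIVATED);
    public static void install(Ignite ignite) {
        ignite.events().localListen(evt -> {
            // Runs locally when the cluster becomes ACTIVE; open firewall
            // ports or kick off post-activation scripts here.
            System.out.println("Cluster activated: " + evt.name());
            return true; // keep the listener registered
        }, EventType.EVT_CLUSTER_ACTIVATED);
    }
}
```

Newer releases also expose EVT_CLUSTER_STATE_CHANGED, which carries the old and new state; either could serve as the activation hook here.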

Thanks.
Greg


On Fri, Aug 23, 2024 at 12:29 PM Jeremy McMillan  wrote:

> The example in the documentation explaining nodeFilter uses node
> attributes as a condition, but the logic might include dynamic node state
> like performance metrics to decide whether to run a service or not.
>
> It seems like the behavior you want/expect might be implemented better
> using clusterGroup
> https://ignite.apache.org/docs/2.15.0/services/services#clustergroup
>
> You would need to do something like (
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cluster/ClusterGroup.html
> )
>
> // Cluster group over all nodes that have the user attribute "group" set to 
> the value "worker".
>  ClusterGroup workerNodes = cluster.forAttribute("group", "worker");
>
> .. and then start services with this
>
> https://ignite.apache.org/releases/2.15.0/javadoc/org/apache/ignite/IgniteSpringBean.html#services-org.apache.ignite.cluster.ClusterGroup-
>
> On Fri, Aug 23, 2024 at 8:35 AM Gregory Sylvain 
> wrote:
>
>> Hi Igniters,
>>
>> I'm running Ignite 2.15 cluster with native persistence enabled running
>> on RHEL 8 VMs.
>>
>> I am running a Service cluster of 5 ServerNodes and 32 Thick Clients.
>>
>> Each Service has a User Attribute that indicates which service to run.
>> Each ServerNode sets two User Attributes - indicating it should run two
>> services.
>>
>> When the Cluster starts up from nothing, it sets the BLT and starts all
>> services as expected.
>>
>> After the BLT is set, the cluster ports are opened (via firewalld) to
>> allow the clients to connect to the cluster and start utilizing the
>> services offered.
>>
>> If, after this point, a BLT cluster node restarts and drops out of the
>> cluster and then re-joins, the Node Filter's apply() method is invoked on
>> all ServerNodes *and *Thick Clients!
>>
>>
>>- Q1: Why is a Node Filter running on a Thick Client and can I
>>disable this?
>>
>>
>> So, if a Node Filter is invoked on a Thick Client and it gets passed a
>> ClusterNode representing a ServerNode that should run a specific service,
>> the filter should return *true*, according to the API.  However, I do
>> not want Clients to run services.
>>
>>
>>
>>- Q2: Can I limit the Node Filter invocations to only BLT nodes (or
>>at least only Server Nodes) ?
>>- Q3: If Node Filters are intended to run on Thick Clients as well,
>>can I just return false from the apply method and how does that affect the
>>semantics of service balancing that I am trying to achieve?
>>
>>
>>
>>
>> Thanks in advance,
>> Greg
>>
>>
>> --
>>
>> *Greg Sylvain*
>>
>> Software Architect/Lead Developer on XOComm
>>
>> Booz | Allen | Hamilton
>>
>>
>>
>> sylvain_greg...@bah.com
>>
>> cell: 571.236.8951
>>
>> ofc: 703.633.3195
>>
>> Chantilly, VA
>>
>>
>>
>


Re: Ignite Thick Client running Node Filters???

2024-08-23 Thread Jeremy McMillan
The example in the documentation explaining nodeFilter uses node attributes
as a condition, but the logic might include dynamic node state like
performance metrics to decide whether to run a service or not.

It seems like the behavior you want/expect might be implemented better
using clusterGroup
https://ignite.apache.org/docs/2.15.0/services/services#clustergroup

You would need to do something like (
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cluster/ClusterGroup.html
)

// Cluster group over all nodes that have the user attribute "group"
set to the value "worker".
 ClusterGroup workerNodes = cluster.forAttribute("group", "worker");

.. and then start services with this
https://ignite.apache.org/releases/2.15.0/javadoc/org/apache/ignite/IgniteSpringBean.html#services-org.apache.ignite.cluster.ClusterGroup-
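Putting those two steps together, a minimal sketch could look like the following (the service name and the service implementation are hypothetical, for illustration only):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.cluster.ClusterGroup;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceConfiguration;
import org.apache.ignite.services.ServiceContext;

public class WorkerServiceDeployer {
    // Hypothetical no-op service implementation for illustration.
    static class MatchingService implements Service {
        @Override public void init(ServiceContext ctx) { /* allocate resources */ }
        @Override public void execute(ServiceContext ctx) { /* service loop */ }
        @Override public void cancel(ServiceContext ctx) { /* release resources */ }
    }

    public static void deploy(Ignite ignite) {
        // Server nodes only, with the user attribute "group"="worker".
        // Thick clients never match this group, so services are not
        // assigned to them.
        ClusterGroup workers =
            ignite.cluster().forServers().forAttribute("group", "worker");

        ServiceConfiguration cfg = new ServiceConfiguration();
        cfg.setName("matchingService");        // hypothetical name
        cfg.setService(new MatchingService());
        cfg.setMaxPerNodeCount(1);             // at most one instance per node

        // Deployment is restricted to the cluster group.
        ignite.services(workers).deploy(cfg);
    }
}
```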

On Fri, Aug 23, 2024 at 8:35 AM Gregory Sylvain 
wrote:

> Hi Igniters,
>
> I'm running Ignite 2.15 cluster with native persistence enabled running on
> RHEL 8 VMs.
>
> I am running a Service cluster of 5 ServerNodes and 32 Thick Clients.
>
> Each Service has a User Attribute that indicates which service to run.
> Each ServerNode sets two User Attributes - indicating it should run two
> services.
>
> When the Cluster starts up from nothing, it sets the BLT and starts all
> services as expected.
>
> After the BLT is set, the cluster ports are opened (via firewalld) to
> allow the clients to connect to the cluster and start utilizing the
> services offered.
>
> If, after this point, a BLT cluster node restarts and drops out of the
> cluster and then re-joins, the Node Filter's apply() method is invoked on
> all ServerNodes *and *Thick Clients!
>
>
>- Q1: Why is a Node Filter running on a Thick Client and can I disable
>this?
>
>
> So, if a Node Filter is invoked on a Thick Client and it gets passed a
> ClusterNode representing a ServerNode that should run a specific service,
> the filter should return *true*, according to the API.  However, I do not
> want Clients to run services.
>
>
>
>- Q2: Can I limit the Node Filter invocations to only BLT nodes (or at
>least only Server Nodes) ?
>- Q3: If Node Filters are intended to run on Thick Clients as well,
>can I just return false from the apply method and how does that affect the
>semantics of service balancing that I am trying to achieve?
>
>
>
>
> Thanks in advance,
> Greg
>
>
> --
>
> *Greg Sylvain*
>
> Software Architect/Lead Developer on XOComm
>
> Booz | Allen | Hamilton
>
>
>
> sylvain_greg...@bah.com
>
> cell: 571.236.8951
>
> ofc: 703.633.3195
>
> Chantilly, VA
>
>
>


Re: Query regarding Apache ignite open source

2024-08-21 Thread Jeremy McMillan
It isn't clear exactly what you're asking in any of these questions. If you
want a guided introduction to Apache Ignite, maybe you should try to attend
a free training workshop. This should prepare you to navigate the
documentation and enable you to answer your own questions as they arise.

https://www.gridgain.com/services/gridgain-apache-ignite-training

If you'd like the Ignite community to answer in this thread, please
describe a short story for each need explaining what you mean by "backup"
and "servers" and "sync or async", "updates in the cluster" and "Entry
processor."

I suspect there are conventional Ignite ways of dealing with your concerns,
but you may be bringing terminology from another domain which doesn't match
exactly how Ignite behavior is usually explained.

On Tue, Aug 20, 2024 at 5:40 AM Mahesh yadavalli <
mahesh.yadaval...@gmail.com> wrote:

> Thanks for the response.
> I am looking into Apache ignite for our caching needs and specifically few
> features like
> 1.  backup servers configurable to be sync or async.
> 2. Write behind updates in the cluster
> 3. Entry processor
>
>
> Are the above features part  of open source?
>
> Is there a way to know which feature is open source and which is not ?
>
>
> On Tue, Aug 20, 2024  3:43 PM Stephen Darlington 
> wrote:
>
>> Ignite has the Apache 2.0 Licence (
>> https://github.com/apache/ignite/blob/master/LICENSE) which is an
>> approved "open source" licence (https://opensource.org/license/apache-2-0
>> ).
>>
>> There are distributions of Ignite with more restrictive licences, and
>> they may have additional features or different release schedules.
>>
>> On Tue, 20 Aug 2024 at 11:06, Mahesh yadavalli <
>> mahesh.yadaval...@gmail.com> wrote:
>>
>>> Hi,
>>> I would like to know if Apache ignite is completely open source. If not,
>>> what features are not covered in the free/community version?
>>>
>>> Thank you!
>>>
>>


Re: Query regarding Apache ignite open source

2024-08-20 Thread Pavel Tupitsyn
Apache Ignite is fully open source; this is how the Apache Software Foundation
works.

All features, everything you see on the website, in the docs, blogs, etc., are
open source.
The development happens in the open as well: JIRA, discussions, and pull
requests are open to the public.
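For the three features asked about earlier in the thread, all are available in the open-source distribution. A sketch (cache name, key/value types, and the specific settings are illustrative assumptions, not a recommended configuration):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class FeatureSketch {
    public static void configure(Ignite ignite) {
        CacheConfiguration<Integer, Long> cfg = new CacheConfiguration<>("counters");

        // 1. Backups: sync vs async replication is controlled by the write
        //    synchronization mode (FULL_SYNC, PRIMARY_SYNC, or FULL_ASYNC).
        cfg.setBackups(1);
        cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);

        // 2. Write-behind: batches updates to an external store asynchronously.
        //    A CacheStore factory is also required (omitted here).
        cfg.setWriteThrough(true);
        cfg.setWriteBehindEnabled(true);

        IgniteCache<Integer, Long> cache = ignite.getOrCreateCache(cfg);

        // 3. Entry processor: atomic read-modify-write executed on the node
        //    that owns the key, instead of a get/put round trip.
        Long counter = cache.invoke(42,
            (CacheEntryProcessor<Integer, Long, Long>) (e, args) -> {
                long next = (e.exists() ? e.getValue() : 0L) + 1;
                e.setValue(next);
                return next;
            });
        System.out.println("counter = " + counter);
    }
}
```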

On Tue, Aug 20, 2024 at 1:39 PM Mahesh yadavalli <
mahesh.yadaval...@gmail.com> wrote:

> Thanks for the response.
> I am looking into Apache ignite for our caching needs and specifically few
> features like
> 1.  backup servers configurable to be sync or async.
> 2. Write behind updates in the cluster
> 3. Entry processor
>
>
> Are the above features part  of open source?
>
> Is there a way to know which feature is open source and which is not ?
>
>
> On Tue, Aug 20, 2024  3:43 PM Stephen Darlington 
> wrote:
>
>> Ignite has the Apache 2.0 Licence (
>> https://github.com/apache/ignite/blob/master/LICENSE) which is an
>> approved "open source" licence (https://opensource.org/license/apache-2-0
>> ).
>>
>> There are distributions of Ignite with more restrictive licences, and
>> they may have additional features or different release schedules.
>>
>> On Tue, 20 Aug 2024 at 11:06, Mahesh yadavalli <
>> mahesh.yadaval...@gmail.com> wrote:
>>
>>> Hi,
>>> I would like to know if Apache ignite is completely open source. If not,
>>> what features are not covered in the free/community version?
>>>
>>> Thank you!
>>>
>>


Re: Query regarding Apache ignite open source

2024-08-20 Thread Mahesh yadavalli
Thanks for the response.
I am looking into Apache Ignite for our caching needs, specifically a few
features:
1. Backup servers configurable to be sync or async.
2. Write-behind updates in the cluster.
3. Entry processor.


Are the above features part of open source?

Is there a way to know which feature is open source and which is not?


On Tue, Aug 20, 2024  3:43 PM Stephen Darlington 
wrote:

> Ignite has the Apache 2.0 Licence (
> https://github.com/apache/ignite/blob/master/LICENSE) which is an
> approved "open source" licence (https://opensource.org/license/apache-2-0
> ).
>
> There are distributions of Ignite with more restrictive licences, and they
> may have additional features or different release schedules.
>
> On Tue, 20 Aug 2024 at 11:06, Mahesh yadavalli <
> mahesh.yadaval...@gmail.com> wrote:
>
>> Hi,
>> I would like to know if Apache ignite is completely open source. If not,
>> what features are not covered in the free/community version?
>>
>> Thank you!
>>
>


Re: Query regarding Apache ignite open source

2024-08-20 Thread Stephen Darlington
Ignite has the Apache 2.0 Licence (
https://github.com/apache/ignite/blob/master/LICENSE) which is an approved
"open source" licence (https://opensource.org/license/apache-2-0).

There are distributions of Ignite with more restrictive licences, and they
may have additional features or different release schedules.

On Tue, 20 Aug 2024 at 11:06, Mahesh yadavalli 
wrote:

> Hi,
> I would like to know if Apache ignite is completely open source. If not,
> what features are not covered in the free/community version?
>
> Thank you!
>


Re: Ignite H2 to Calcite issues

2024-08-16 Thread Amit Jolly
Hi Alex,

Also, it looks like the query does work despite this error/warning, but I'm
not sure what the implications are for the result or overall performance.

Thanks,

Amit Jolly

On Fri, Aug 16, 2024 at 10:03 AM Amit Jolly  wrote:

> Hi Alex,
>
> Find below the details you requested.
>
> CacheConfiguration<OrderId, OrdersToMatch> cacheConfiguration = new
> CacheConfiguration<>();
> cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
> cacheConfiguration.setBackups(2);
> cacheConfiguration.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new
> Duration(MINUTES, 30)));
> cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
> cacheConfiguration.setName("ORDERSTOMATCH");
> cacheConfiguration.setSqlSchema("PUBLIC");
> *cacheConfiguration.setIndexedTypes(OrderId.class, OrdersToMatch.class);*
> cacheConfiguration.setOnheapCacheEnabled(true);
> cacheConfiguration.setEvictionPolicyFactory(new
> LruEvictionPolicyFactory<>());
>
> explain plan for
>
> SELECT _key FROM PUBLIC.ORDERSTOMATCH
>
>
> IgniteExchange(distribution=[single]): rowcount = 1.0, cumulative cost =
> IgniteCost [rowCount=2.0, cpu=2.0, memory=1.0, io=1.0, network=5.0], id =
> 1098
>   IgniteIndexScan(table=[[PUBLIC, ORDERSTOMATCH]], index=[_key_PK],
> requiredColumns=[{0}], inlineScan=[false], collation=[[0
> ASC-nulls-first]]): rowcount = 1.0, cumulative cost = IgniteCost
> [rowCount=1.0, cpu=1.0, memory=1.0, io=1.0, network=1.0], id = 1087
>
> It is using Index scan.
>
> Thanks,
>
> Amit Jolly
>
>
> On Fri, Aug 16, 2024 at 3:03 AM Alex Plehanov 
> wrote:
>
>> Amit,
>>
>> Can you please show the output of "EXPLAIN PLAN FOR <query>"?
>> I've found the bug in index scan on binary object field (ticket [1]),
>> but I can't reproduce it with select without "order by", or without
>> forcing index scan.
>>
>> [1]: https://issues.apache.org/jira/browse/IGNITE-23004
>>
>> Wed, Aug 14, 2024 at 18:07, Amit Jolly :
>> >
>> > HI, Jeremy & Alex,
>> >
>> > First of all thanks for the quick response.
>> >
>> > Sorry for the confusion here, We are trying to switch from using H2 as
>> a default query engine to Calcite due to above mentioned CVE's.
>> > I have read the ticket and understood that those CVE's do not have any
>> impact on how Ignite uses H2.
>> > We are trying to explain the same to our internal security audit team
>> and at the same time trying to evaluate Calcite.
>> >
>> > Below is the query we are using
>> >
>> > SELECT _key FROM OrdersToMatchCache
>> >
>> > OrdersToMatchCache has OrderId.java as key and OrdersToMatch.java as
>> value
>> >
>> > OrderId.java
>> > 
>> > import lombok.Data;
>> > import org.apache.ignite.cache.query.annotations.QuerySqlField;
>> > import java.io.Serializable;
>> >
>> > @Data
>> > public class OrderId implements Serializable {
>> > @QuerySqlField
>> > private String orderId;
>> > @QuerySqlField
>> > private String regionId;
>> > @QuerySqlField
>> > private Integer date;
>> > }
>> >
>> >
>> > OrdersToMatch.java
>> > 
>> > import lombok.Data;
>> > import org.apache.ignite.cache.query.annotations.QuerySqlField;
>> > import java.io.Serializable;
>> >
>> > @Data
>> > public class OrdersToMatch implements Serializable {
>> > @QuerySqlField
>> > private List<Order> buyOrders = new ArrayList<>();
>> > @QuerySqlField
>> > private List<Order> sellOrders = new ArrayList<>();
>> > }
>> >
>> >
>> > Order.java
>> > 
>> > import lombok.Data;
>> > import java.io.Serializable;
>> >
>> > @Data
>> > public class Order implements Serializable {
>> > private String direction; // BUY or SELL
>> > private Integer qty;
>> > }
>> >
>> > Thanks,
>> >
>> > Amit Jolly
>> >
>> > On Wed, Aug 14, 2024 at 10:27 AM Jeremy McMillan 
>> wrote:
>> >>
>> >> Amit:
>> >>
>> >> I'm concerned that you may be misreading the CVE details in the ticket
>> you cited, since you indicated you are moving TO H2.. Also the stack trace
>> is a Calcite stack trace. This is ambiguous whether this is the before
>> (persistence config changes) depicted or after changing persistence
>> depicted.
>> >>
>> >> A) The CVEs cited in the ticket
>> https://issues.apache.org/jira/browse/IGNITE-15241 are all H2
>> vulnerabilities.
>> >> B) The H2 vulnerabilities cited all involve behaviors of H2 that
>> Ignite does not use, therefore Ignite is affected neither by Calcite nor H2
>> persistence involvement.
>> >>
>> >> I don't want to discourage you from moving from H2 to Calcite, but
>> maybe this isn't as urgent as it appears, so please proceed carefully. As
>> Alex requested, it will be helpful for the community to see which queries
>> produce exceptions and which ones do not. H2 and Calcite have different SQL
>> parsers and query planners and underlying implementations, so it should not
>> be surprising that queries might need rework in the course of switching.
>> You should expect to encounter issues like this one, and others like it.
>> It's a migration effort.
>> >>
>> >>
>> >> On Tue, Aug 13, 2024 at 9:17 AM Am

Re: Ignite H2 to Calcite issues

2024-08-16 Thread Amit Jolly
Hi Alex,

Find below the details you requested.

CacheConfiguration<OrderId, OrdersToMatch> cacheConfiguration = new
CacheConfiguration<>();
cacheConfiguration.setCacheMode(CacheMode.PARTITIONED);
cacheConfiguration.setBackups(2);
cacheConfiguration.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new
Duration(MINUTES, 30)));
cacheConfiguration.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheConfiguration.setName("ORDERSTOMATCH");
cacheConfiguration.setSqlSchema("PUBLIC");
*cacheConfiguration.setIndexedTypes(OrderId.class, OrdersToMatch.class);*
cacheConfiguration.setOnheapCacheEnabled(true);
cacheConfiguration.setEvictionPolicyFactory(new
LruEvictionPolicyFactory<>());

explain plan for

SELECT _key FROM PUBLIC.ORDERSTOMATCH


IgniteExchange(distribution=[single]): rowcount = 1.0, cumulative cost =
IgniteCost [rowCount=2.0, cpu=2.0, memory=1.0, io=1.0, network=5.0], id =
1098
  IgniteIndexScan(table=[[PUBLIC, ORDERSTOMATCH]], index=[_key_PK],
requiredColumns=[{0}], inlineScan=[false], collation=[[0
ASC-nulls-first]]): rowcount = 1.0, cumulative cost = IgniteCost
[rowCount=1.0, cpu=1.0, memory=1.0, io=1.0, network=1.0], id = 1087

It is using Index scan.

Thanks,

Amit Jolly


On Fri, Aug 16, 2024 at 3:03 AM Alex Plehanov 
wrote:

> Amit,
>
> Can you please show the output of "EXPLAIN PLAN FOR <query>"?
> I've found the bug in index scan on binary object field (ticket [1]),
> but I can't reproduce it with select without "order by", or without
> forcing index scan.
>
> [1]: https://issues.apache.org/jira/browse/IGNITE-23004
>
> Wed, Aug 14, 2024 at 18:07, Amit Jolly :
> >
> > HI, Jeremy & Alex,
> >
> > First of all thanks for the quick response.
> >
> > Sorry for the confusion here, We are trying to switch from using H2 as a
> default query engine to Calcite due to above mentioned CVE's.
> > I have read the ticket and understood that those CVE's do not have any
> impact on how Ignite uses H2.
> > We are trying to explain the same to our internal security audit team
> and at the same time trying to evaluate Calcite.
> >
> > Below is the query we are using
> >
> > SELECT _key FROM OrdersToMatchCache
> >
> > OrdersToMatchCache has OrderId.java as key and OrdersToMatch.java as
> value
> >
> > OrderId.java
> > 
> > import lombok.Data;
> > import org.apache.ignite.cache.query.annotations.QuerySqlField;
> > import java.io.Serializable;
> >
> > @Data
> > public class OrderId implements Serializable {
> > @QuerySqlField
> > private String orderId;
> > @QuerySqlField
> > private String regionId;
> > @QuerySqlField
> > private Integer date;
> > }
> >
> >
> > OrdersToMatch.java
> > 
> > import lombok.Data;
> > import org.apache.ignite.cache.query.annotations.QuerySqlField;
> > import java.io.Serializable;
> >
> > @Data
> > public class OrdersToMatch implements Serializable {
> > @QuerySqlField
> > private List<Order> buyOrders = new ArrayList<>();
> > @QuerySqlField
> > private List<Order> sellOrders = new ArrayList<>();
> > }
> >
> >
> > Order.java
> > 
> > import lombok.Data;
> > import java.io.Serializable;
> >
> > @Data
> > public class Order implements Serializable {
> > private String direction; // BUY or SELL
> > private Integer qty;
> > }
> >
> > Thanks,
> >
> > Amit Jolly
> >
> > On Wed, Aug 14, 2024 at 10:27 AM Jeremy McMillan 
> wrote:
> >>
> >> Amit:
> >>
> >> I'm concerned that you may be misreading the CVE details in the ticket
> you cited, since you indicated you are moving TO H2.. Also the stack trace
> is a Calcite stack trace. This is ambiguous whether this is the before
> (persistence config changes) depicted or after changing persistence
> depicted.
> >>
> >> A) The CVEs cited in the ticket
> https://issues.apache.org/jira/browse/IGNITE-15241 are all H2
> vulnerabilities.
> >> B) The H2 vulnerabilities cited all involve behaviors of H2 that Ignite
> does not use, therefore Ignite is affected neither by Calcite nor H2
> persistence involvement.
> >>
> >> I don't want to discourage you from moving from H2 to Calcite, but
> maybe this isn't as urgent as it appears, so please proceed carefully. As
> Alex requested, it will be helpful for the community to see which queries
> produce exceptions and which ones do not. H2 and Calcite have different SQL
> parsers and query planners and underlying implementations, so it should not
> be surprising that queries might need rework in the course of switching.
> You should expect to encounter issues like this one, and others like it.
> It's a migration effort.
> >>
> >>
> >> On Tue, Aug 13, 2024 at 9:17 AM Amit Jolly 
> wrote:
> >>>
> >>> Hi,
> >>>
> >>> We are trying to switch to H2 due to Security Vulnerabilities as
> listed in JIRA https://issues.apache.org/jira/browse/IGNITE-15241
> >>>
> >>> We are seeing below errors post switching. We are just running select
> * From table query.
> >>>
> >>> Caused by: java.lang.ClassCastException: class
> org.apache.ignite.internal.binary.BinaryObjectImpl

Re: Ignite H2 to Calcite issues

2024-08-16 Thread Alex Plehanov
Amit,

Can you please show the output of "EXPLAIN PLAN FOR <query>"?
I've found the bug in index scan on binary object field (ticket [1]),
but I can't reproduce it with select without "order by", or without
forcing index scan.

[1]: https://issues.apache.org/jira/browse/IGNITE-23004

Wed, Aug 14, 2024 at 18:07, Amit Jolly :
>
> HI, Jeremy & Alex,
>
> First of all thanks for the quick response.
>
> Sorry for the confusion here, We are trying to switch from using H2 as a 
> default query engine to Calcite due to above mentioned CVE's.
> I have read the ticket and understood that those CVE's do not have any impact 
> on how Ignite uses H2.
> We are trying to explain the same to our internal security audit team and at 
> the same time trying to evaluate Calcite.
>
> Below is the query we are using
>
> SELECT _key FROM OrdersToMatchCache
>
> OrdersToMatchCache has OrderId.java as key and OrdersToMatch.java as value
>
> OrderId.java
> 
> import lombok.Data;
> import org.apache.ignite.cache.query.annotations.QuerySqlField;
> import java.io.Serializable;
>
> @Data
> public class OrderId implements Serializable {
> @QuerySqlField
> private String orderId;
> @QuerySqlField
> private String regionId;
> @QuerySqlField
> private Integer date;
> }
>
>
> OrdersToMatch.java
> 
> import lombok.Data;
> import org.apache.ignite.cache.query.annotations.QuerySqlField;
> import java.io.Serializable;
>
> @Data
> public class OrdersToMatch implements Serializable {
> @QuerySqlField
> private List<Order> buyOrders = new ArrayList<>();
> @QuerySqlField
> private List<Order> sellOrders = new ArrayList<>();
> }
>
>
> Order.java
> 
> import lombok.Data;
> import java.io.Serializable;
>
> @Data
> public class Order implements Serializable {
> private String direction; // BUY or SELL
> private Integer qty;
> }
>
> Thanks,
>
> Amit Jolly
>
> On Wed, Aug 14, 2024 at 10:27 AM Jeremy McMillan  wrote:
>>
>> Amit:
>>
>> I'm concerned that you may be misreading the CVE details in the ticket you 
>> cited, since you indicated you are moving TO H2.. Also the stack trace is a 
>> Calcite stack trace. This is ambiguous whether this is the before 
>> (persistence config changes) depicted or after changing persistence depicted.
>>
>> A) The CVEs cited in the ticket 
>> https://issues.apache.org/jira/browse/IGNITE-15241 are all H2 
>> vulnerabilities.
>> B) The H2 vulnerabilities cited all involve behaviors of H2 that Ignite does 
>> not use, therefore Ignite is affected neither by Calcite nor H2 persistence 
>> involvement.
>>
>> I don't want to discourage you from moving from H2 to Calcite, but maybe 
>> this isn't as urgent as it appears, so please proceed carefully. As Alex 
>> requested, it will be helpful for the community to see which queries produce 
>> exceptions and which ones do not. H2 and Calcite have different SQL parsers 
>> and query planners and underlying implementations, so it should not be 
>> surprising that queries might need rework in the course of switching. You 
>> should expect to encounter issues like this one, and others like it. It's a 
>> migration effort.
>>
>>
>> On Tue, Aug 13, 2024 at 9:17 AM Amit Jolly  wrote:
>>>
>>> Hi,
>>>
>>> We are trying to switch to H2 due to Security Vulnerabilities as listed in 
>>> JIRA https://issues.apache.org/jira/browse/IGNITE-15241
>>>
>>> We are seeing below errors post switching. We are just running select * 
>>> From table query.
>>>
>>> Caused by: java.lang.ClassCastException: class 
>>> org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to class 
>>> java.lang.Comparable (org.apache.ignite.internal.binary.BinaryObjectImpl is 
>>> in unnamed module of loader 'app'; java.lang.Comparable is in module 
>>> java.base of loader 'bootstrap')
>>> at 
>>> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl.compare(ExpressionFactoryImpl.java:223)
>>>  ~[ignite-calcite-2.16.0.jar:2.16.0]
>>> at 
>>> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl.access$100(ExpressionFactoryImpl.java:85)
>>>  ~[ignite-calcite-2.16.0.jar:2.16.0]
>>> at 
>>> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl$1.compare(ExpressionFactoryImpl.java:157)
>>>  ~[ignite-calcite-2.16.0.jar:2.16.0]
>>> at 
>>> java.base/java.util.Map$Entry.lambda$comparingByKey$6d558cbf$1(Map.java:560)
>>>  ~[?:?]
>>> at 
>>> java.base/java.util.PriorityQueue.siftUpUsingComparator(PriorityQueue.java:660)
>>>  ~[?:?]
>>> at java.base/java.util.PriorityQueue.siftUp(PriorityQueue.java:637) ~[?:?]
>>> at java.base/java.util.PriorityQueue.offer(PriorityQueue.java:330) ~[?:?]
>>> at 
>>> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.pushOrdered(Inbox.java:239)
>>>  ~[ignite-calcite-2.16.0.jar:2.16.0]
>>> at 
>>> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.push(Inbox.java:201)
>>>  ~[ignite-calcite-2.

Re: Ignite H2 to Calcite issues

2024-08-14 Thread Amit Jolly
Hi, Jeremy & Alex,

First of all thanks for the quick response.

Sorry for the confusion here: we are trying to switch from using H2 as the
default query engine to Calcite due to the above-mentioned CVEs.
I have read the ticket and understood that those CVEs do not have any
impact on how Ignite uses H2.
We are trying to explain the same to our internal security audit team and
at the same time trying to evaluate Calcite.

Below is the query we are using

SELECT _key FROM OrdersToMatchCache

OrdersToMatchCache has OrderId.java as key and OrdersToMatch.java as value

OrderId.java

import lombok.Data;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import java.io.Serializable;

@Data
public class OrderId implements Serializable {
@QuerySqlField
private String orderId;
@QuerySqlField
private String regionId;
@QuerySqlField
private Integer date;
}


OrdersToMatch.java

import lombok.Data;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import java.io.Serializable;

@Data
public class OrdersToMatch implements Serializable {
@QuerySqlField
private List<Order> buyOrders = new ArrayList<>();
@QuerySqlField
private List<Order> sellOrders = new ArrayList<>();
}


Order.java

import lombok.Data;
import java.io.Serializable;

@Data
public class Order implements Serializable {
private String direction; // BUY or SELL
private Integer qty;
}

Thanks,

Amit Jolly
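For what it's worth, the stack trace shows Calcite merging ordered streams through a PriorityQueue that compares cache keys, which fails because the key's binary form is not Comparable. Whether making the key class Comparable resolves the linked ticket is unverified (Ignite stores keys as BinaryObjects, which do not implement Comparable regardless), but the requirement the stack trace points at can be illustrated with a key that defines a natural ordering. Lombok's @Data and @QuerySqlField from the original class are omitted so the sketch compiles with the JDK alone:

```java
import java.io.Serializable;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class OrderId implements Serializable, Comparable<OrderId> {
    private final String orderId;
    private final String regionId;
    private final Integer date;

    public OrderId(String orderId, String regionId, Integer date) {
        this.orderId = orderId;
        this.regionId = regionId;
        this.date = date;
    }

    @Override
    public int compareTo(OrderId other) {
        // Field-by-field ordering; must be deterministic cluster-wide.
        return Comparator.comparing((OrderId o) -> o.orderId)
            .thenComparing(o -> o.regionId)
            .thenComparing(o -> o.date)
            .compare(this, other);
    }

    @Override
    public String toString() {
        return orderId + "/" + regionId + "/" + date;
    }

    public static void main(String[] args) {
        List<OrderId> ids = Arrays.asList(
            new OrderId("B", "EU", 20240814),
            new OrderId("A", "US", 20240813));
        ids.sort(Comparator.naturalOrder());
        System.out.println(ids.get(0)); // prints A/US/20240813
    }
}
```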

On Wed, Aug 14, 2024 at 10:27 AM Jeremy McMillan  wrote:

> Amit:
>
> I'm concerned that you may be misreading the CVE details in the ticket you
> cited, since you indicated you are moving TO H2.. Also the stack trace is a
> Calcite stack trace. This is ambiguous whether this is the before
> (persistence config changes) depicted or after changing persistence
> depicted.
>
> A) The CVEs cited in the ticket
> https://issues.apache.org/jira/browse/IGNITE-15241 are all H2
> vulnerabilities.
> B) The H2 vulnerabilities cited all involve behaviors of H2 that Ignite
> does not use, therefore Ignite is affected neither by Calcite nor H2
> persistence involvement.
>
> I don't want to discourage you from moving from H2 to Calcite, but maybe
> this isn't as urgent as it appears, so please proceed carefully. As Alex
> requested, it will be helpful for the community to see which queries
> produce exceptions and which ones do not. H2 and Calcite have different SQL
> parsers and query planners and underlying implementations, so it should not
> be surprising that queries might need rework in the course of switching.
> You should expect to encounter issues like this one, and others like it.
> It's a migration effort.
>
>
> On Tue, Aug 13, 2024 at 9:17 AM Amit Jolly  wrote:
>
>> Hi,
>>
>> We are trying to switch to H2 due to Security Vulnerabilities as listed
>> in JIRA https://issues.apache.org/jira/browse/IGNITE-15241
>>
>> We are seeing below errors post switching. We are just running select *
>> From table query.
>>
>> Caused by: java.lang.ClassCastException: class
>> org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to class
>> java.lang.Comparable (org.apache.ignite.internal.binary.BinaryObjectImpl is
>> in unnamed module of loader 'app'; java.lang.Comparable is in module
>> java.base of loader 'bootstrap')
>> at
>> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl.compare(ExpressionFactoryImpl.java:223)
>> ~[ignite-calcite-2.16.0.jar:2.16.0]
>> at
>> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl.access$100(ExpressionFactoryImpl.java:85)
>> ~[ignite-calcite-2.16.0.jar:2.16.0]
>> at
>> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl$1.compare(ExpressionFactoryImpl.java:157)
>> ~[ignite-calcite-2.16.0.jar:2.16.0]
>> at
>> java.base/java.util.Map$Entry.lambda$comparingByKey$6d558cbf$1(Map.java:560)
>> ~[?:?]
>> at
>> java.base/java.util.PriorityQueue.siftUpUsingComparator(PriorityQueue.java:660)
>> ~[?:?]
>> at java.base/java.util.PriorityQueue.siftUp(PriorityQueue.java:637) ~[?:?]
>> at java.base/java.util.PriorityQueue.offer(PriorityQueue.java:330) ~[?:?]
>> at
>> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.pushOrdered(Inbox.java:239)
>> ~[ignite-calcite-2.16.0.jar:2.16.0]
>> at
>> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.push(Inbox.java:201)
>> ~[ignite-calcite-2.16.0.jar:2.16.0]
>> at
>> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.onBatchReceived(Inbox.java:177)
>> ~[ignite-calcite-2.16.0.jar:2.16.0]
>> at
>> org.apache.ignite.internal.processors.query.calcite.exec.ExchangeServiceImpl.onMessage(ExchangeServiceImpl.java:324)
>> ~[ignite-calcite-2.16.0.jar:2.16.0]
>> at
>> org.apache.ignite.internal.processors.query.calcite.exec.ExchangeServiceImpl.lambda$init$2(ExchangeServiceImpl.java:195)
>> ~[ignite-calcite-2.16.0.jar:2.16.0]
>> at
>> org.apache.ignite.internal.pr

Re: Ignite H2 to Calcite issues

2024-08-14 Thread Jeremy McMillan
Amit:

I'm concerned that you may be misreading the CVE details in the ticket you
cited, since you indicated you are moving TO H2. Also, the stack trace is a
Calcite stack trace, so it is ambiguous whether it depicts the state before
or after your configuration changes.

A) The CVEs cited in the ticket
https://issues.apache.org/jira/browse/IGNITE-15241 are all H2
vulnerabilities.
B) The H2 vulnerabilities cited all involve behaviors of H2 that Ignite
does not use, so Ignite is not affected regardless of whether the H2 or the
Calcite engine is in use.

I don't want to discourage you from moving from H2 to Calcite, but maybe
this isn't as urgent as it appears, so please proceed carefully. As Alex
requested, it will be helpful for the community to see which queries
produce exceptions and which ones do not. H2 and Calcite have different SQL
parsers and query planners and underlying implementations, so it should not
be surprising that queries might need rework in the course of switching.
You should expect to encounter issues like this one.
It's a migration effort.


On Tue, Aug 13, 2024 at 9:17 AM Amit Jolly  wrote:

> Hi,
>
> We are trying to switch to H2 due to Security Vulnerabilities as listed in
> JIRA https://issues.apache.org/jira/browse/IGNITE-15241
>
> We are seeing below errors post switching. We are just running select *
> From table query.
>
> Caused by: java.lang.ClassCastException: class
> org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to class
> java.lang.Comparable (org.apache.ignite.internal.binary.BinaryObjectImpl is
> in unnamed module of loader 'app'; java.lang.Comparable is in module
> java.base of loader 'bootstrap')
> at
> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl.compare(ExpressionFactoryImpl.java:223)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
> at
> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl.access$100(ExpressionFactoryImpl.java:85)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
> at
> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl$1.compare(ExpressionFactoryImpl.java:157)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
> at
> java.base/java.util.Map$Entry.lambda$comparingByKey$6d558cbf$1(Map.java:560)
> ~[?:?]
> at
> java.base/java.util.PriorityQueue.siftUpUsingComparator(PriorityQueue.java:660)
> ~[?:?]
> at java.base/java.util.PriorityQueue.siftUp(PriorityQueue.java:637) ~[?:?]
> at java.base/java.util.PriorityQueue.offer(PriorityQueue.java:330) ~[?:?]
> at
> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.pushOrdered(Inbox.java:239)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
> at
> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.push(Inbox.java:201)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
> at
> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.onBatchReceived(Inbox.java:177)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
> at
> org.apache.ignite.internal.processors.query.calcite.exec.ExchangeServiceImpl.onMessage(ExchangeServiceImpl.java:324)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
> at
> org.apache.ignite.internal.processors.query.calcite.exec.ExchangeServiceImpl.lambda$init$2(ExchangeServiceImpl.java:195)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
> at
> org.apache.ignite.internal.processors.query.calcite.message.MessageServiceImpl.onMessageInternal(MessageServiceImpl.java:276)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
> at
> org.apache.ignite.internal.processors.query.calcite.message.MessageServiceImpl.lambda$onMessage$0(MessageServiceImpl.java:254)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
> at
> org.apache.ignite.internal.processors.query.calcite.exec.QueryTaskExecutorImpl.lambda$execute$0(QueryTaskExecutorImpl.java:66)
> ~[ignite-calcite-2.16.0.jar:2.16.0]
>
> Any idea what might be causing this?
>
> Regards,
>
> Amit Jolly
>


Re: Ignite H2 to Calcite issues

2024-08-13 Thread Alex Plehanov
Hello,

Please provide the query you are using. Such a stack can't be caused
by just "select * from table" (sorting is clearly involved). Perhaps you
are using "ORDER BY" on fields which can't be compared in SQL
(a complex _key or _val, for example).
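To make Alex's point concrete, here is a hedged sketch (the table and column names are hypothetical): under the Calcite engine, ORDER BY must target columns whose SQL types are comparable; sorting on a composite binary _key or _val can raise exactly the ClassCastException in the stack trace above.

```sql
-- Fine: ordering on a scalar, SQL-comparable column
SELECT id, name FROM person ORDER BY name;

-- Risky: ordering on the whole key object; if _key is a composite
-- binary type with no Comparable implementation, the Calcite engine
-- cannot sort it and fails with the ClassCastException shown above
SELECT * FROM person ORDER BY _key;
```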

Tue, 13 Aug 2024 at 17:17, Amit Jolly :
>
> Hi,
>
> We are trying to switch to H2 due to Security Vulnerabilities as listed in 
> JIRA https://issues.apache.org/jira/browse/IGNITE-15241
>
> We are seeing below errors post switching. We are just running select * From 
> table query.
>
> Caused by: java.lang.ClassCastException: class 
> org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to class 
> java.lang.Comparable (org.apache.ignite.internal.binary.BinaryObjectImpl is 
> in unnamed module of loader 'app'; java.lang.Comparable is in module 
> java.base of loader 'bootstrap')
> at 
> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl.compare(ExpressionFactoryImpl.java:223)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
> at 
> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl.access$100(ExpressionFactoryImpl.java:85)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
> at 
> org.apache.ignite.internal.processors.query.calcite.exec.exp.ExpressionFactoryImpl$1.compare(ExpressionFactoryImpl.java:157)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
> at 
> java.base/java.util.Map$Entry.lambda$comparingByKey$6d558cbf$1(Map.java:560) 
> ~[?:?]
> at 
> java.base/java.util.PriorityQueue.siftUpUsingComparator(PriorityQueue.java:660)
>  ~[?:?]
> at java.base/java.util.PriorityQueue.siftUp(PriorityQueue.java:637) ~[?:?]
> at java.base/java.util.PriorityQueue.offer(PriorityQueue.java:330) ~[?:?]
> at 
> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.pushOrdered(Inbox.java:239)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
> at 
> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.push(Inbox.java:201)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
> at 
> org.apache.ignite.internal.processors.query.calcite.exec.rel.Inbox.onBatchReceived(Inbox.java:177)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
> at 
> org.apache.ignite.internal.processors.query.calcite.exec.ExchangeServiceImpl.onMessage(ExchangeServiceImpl.java:324)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
> at 
> org.apache.ignite.internal.processors.query.calcite.exec.ExchangeServiceImpl.lambda$init$2(ExchangeServiceImpl.java:195)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
> at 
> org.apache.ignite.internal.processors.query.calcite.message.MessageServiceImpl.onMessageInternal(MessageServiceImpl.java:276)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
> at 
> org.apache.ignite.internal.processors.query.calcite.message.MessageServiceImpl.lambda$onMessage$0(MessageServiceImpl.java:254)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
> at 
> org.apache.ignite.internal.processors.query.calcite.exec.QueryTaskExecutorImpl.lambda$execute$0(QueryTaskExecutorImpl.java:66)
>  ~[ignite-calcite-2.16.0.jar:2.16.0]
>
> Any idea what might be causing this?
>
> Regards,
>
> Amit Jolly


Re: High speed writing(INSERT) can significantly reduce SQL query performance

2024-08-09 Thread Pavel Tupitsyn
- Try changing QueryThreadPoolSize [1]
- Consider Data Streamer [2] for high-volume data insertion instead of SQL

[1]
https://ignite.apache.org/docs/latest/perf-and-troubleshooting/thread-pools-tuning#queries-pool
[2] https://ignite.apache.org/docs/latest/data-streaming
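A minimal Spring XML sketch of the first suggestion (the value 16 is purely illustrative, not a recommendation; the property name is from the thread-pool tuning page linked above):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Dedicated pool that executes SQL queries; consider increasing it
         if queries starve while the cluster is busy ingesting data -->
    <property name="queryThreadPoolSize" value="16"/>
</bean>
```

For the second suggestion, IgniteDataStreamer batches entries and routes them per node, so it is generally far faster than per-row INSERT statements for bulk loading.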

On Fri, Aug 9, 2024 at 3:16 AM 38797715 <38797...@qq.com> wrote:

> Hi team,
>
> When performing high-speed writes (INSERT statements), we see a
> significant decrease in SQL query performance; the execution time can be
> extended by tens of times.
>
> We are using the H2 SQL engine, Ignite 2.16.0. We know that Ignite uses
> different thread pools for writing and querying.
>
> May I ask if this is a known issue? Or are there any parameters that can
> optimize the performance of this scenario?
>


Re: Requesting information about known CVEs related to Apache Ignite 2.16

2024-08-07 Thread Vishy Ramaswamy
Hi Jeremy
Thanks for your response and appreciate the insights.
Vishy

Vishy Ramaswamy
Modernization Architect  |  Workload Automation
Mainframe Division | Broadcom

mobile: +1.236.638.9672

CAN-British Columbia Remote Location

vishy.ramasw...@broadcom.com   |   broadcom.com


On Fri, Aug 2, 2024 at 12:40 PM Jeremy McMillan  wrote:

> There is an example of, and a gentle introduction to, building Ignite out of
> Maven dependencies in the Ignite Essentials workshop. You could enroll in
> an upcoming free training session or access the same material on your own
> schedule for free on university.gridgain.com. The Ignite quick start
> documentation provides a little more detail and a starting point for
> further exploration.
>
> https://ignite.apache.org/docs/latest/quick-start/java
>
> What I would strongly recommend avoiding is trying to embed Ignite classes
> into another application's JVMs that do not provide stable infrastructure.
> Good performance relies, among other things, partly upon a stable cluster
> topology. How you build the nodes doesn't matter as much as where and how
> you instantiate/run them.
>
>

-- 
This electronic communication and the information and any files transmitted 
with it, or attached to it, are confidential and are intended solely for 
the use of the individual or entity to whom it is addressed and may contain 
information that is confidential, legally privileged, protected by privacy 
laws, or otherwise restricted from disclosure to anyone else. If you are 
not the intended recipient or the person responsible for delivering the 
e-mail to the intended recipient, you are hereby notified that any use, 
copying, distributing, dissemination, forwarding, printing, or copying of 
this e-mail is strictly prohibited. If you received this e-mail in error, 
please return the e-mail to the sender, delete it from your computer, and 
destroy any printed copy of it.


Re: Requesting information about known CVEs related to Apache Ignite 2.16

2024-08-02 Thread Jeremy McMillan
There is an example of, and a gentle introduction to, building Ignite out of
Maven dependencies in the Ignite Essentials workshop. You could enroll in
an upcoming free training session or access the same material on your own
schedule for free on university.gridgain.com. The Ignite quick start
documentation provides a little more detail and a starting point for
further exploration.

https://ignite.apache.org/docs/latest/quick-start/java

What I would strongly recommend avoiding is trying to embed Ignite classes
into another application's JVMs that do not provide stable infrastructure.
Good performance relies, among other things, partly upon a stable cluster
topology. How you build the nodes doesn't matter as much as where and how
you instantiate/run them.


Re: Requesting information about known CVEs related to Apache Ignite 2.16

2024-08-02 Thread Vishy Ramaswamy
Hi Jeremy
Thanks for your response. I'll take a look at the scan reports. To be more
specific, we are running into the following issue.
We are using Ignite standalone, from https://ignite.apache.org/download.cgi,
version 2.16.0, and connect to it with the thin client. The standalone
distribution ships ignite-spring-2.16.0, which in turn depends on
Tomcat 9.0.63 and org.springframework 5.2.25.RELEASE, both of which have known
vulnerabilities. We are not using those components as dependencies of our own
project (or not all of them, just the ones needed for the thin client); our
concern is about the standalone distribution of Ignite and its current
dependencies on vulnerable versions of Tomcat and Spring.
Our questions are as follows:
1) Is there a plan to upgrade these two dependencies to newer versions? If so,
when can we expect this in an upcoming release (3.0, etc.)?
2) Can we consume Ignite in such a way that we can update the Tomcat and
Spring dependencies to versions that don't have vulnerabilities?
3) Can we build Ignite from source and update these versions? Is this even
recommended?

thanks in advance
Vishy

Vishy Ramaswamy
Modernization Architect  |  Workload Automation
Mainframe Division | Broadcom

mobile: +1.236.638.9672

CAN-British Columbia Remote Location

vishy.ramasw...@broadcom.com   |   broadcom.com


On Fri, Aug 2, 2024 at 10:38 AM Jeremy McMillan  wrote:

> Apache Ignite release notes contain details about fixes including CVEs
> addressed.
> https://github.com/apache/ignite/blob/master/RELEASE_NOTES.txt
>
> Current known vulnerabilities are determined by vulnerability testing,
> which differs depending on who (test/scan tool vendor, stakeholder/user)
> does the testing. All scanner tools are different, and most support
> configurable policy around what to recognize and what to report. GridGain
> performs security audits of commercial distributions, but the Ignite
> community is responsible to perform its own testing.
>
> Some public vulnerability scan reports are available. YMMV:
> https://security.snyk.io/package/maven/org.apache.ignite:ignite-core
>
>
> On Thu, Aug 1, 2024 at 7:53 PM Vishy Ramaswamy <
> vishy.ramasw...@broadcom.com> wrote:
>
>> Hi All,
>> We are trying out Apache Ignite version 2.16.0. I want to know where I
>> can get information about what vulnerabilities (CVE) got addressed in
>> 2.16.0 as well as what are the current known vulnerabilities on 2.16 (if
>> any). Appreciate the help and thanks in advance for your response
>>
>> Vishy
>>
>>
>> Vishy Ramaswamy
>> Modernization Architect  |  Workload Automation
>> Mainframe Division | Broadcom
>>
>> mobile: +1.236.638.9672
>>
>> CAN-British Columbia Remote Location
>>
>> vishy.ramasw...@broadcom.com   |   broadcom.com
>>
>
>



Re: Requesting information about known CVEs related to Apache Ignite 2.16

2024-08-02 Thread Jeremy McMillan
Apache Ignite release notes contain details about fixes including CVEs
addressed.
https://github.com/apache/ignite/blob/master/RELEASE_NOTES.txt

Current known vulnerabilities are determined by vulnerability testing,
which differs depending on who (test/scan tool vendor, stakeholder/user)
does the testing. All scanner tools are different, and most support
configurable policy around what to recognize and what to report. GridGain
performs security audits of commercial distributions, but the Ignite
community is responsible to perform its own testing.

Some public vulnerability scan reports are available. YMMV:
https://security.snyk.io/package/maven/org.apache.ignite:ignite-core


On Thu, Aug 1, 2024 at 7:53 PM Vishy Ramaswamy 
wrote:

> Hi All,
> We are trying out Apache Ignite version 2.16.0. I want to know where I can
> get information about what vulnerabilities (CVE) got addressed in 2.16.0 as
> well as what are the current known vulnerabilities on 2.16 (if any).
> Appreciate the help and thanks in advance for your response
>
> Vishy
>
>
> Vishy Ramaswamy
> Modernization Architect  |  Workload Automation
> Mainframe Division | Broadcom
>
> mobile: +1.236.638.9672
>
> CAN-British Columbia Remote Location
>
> vishy.ramasw...@broadcom.com   |   broadcom.com
>


RE: Ignite cache with custom key : key not found

2024-07-31 Thread Louis C
Thanks for your answer. For the moment, the workaround I use is that I created 
a task in Java that does the "Get" part for the C++ client (but this is less 
efficient as it causes deserialization).
I might try your workaround.

From: Igor Sapego
Sent: Wednesday, 31 July 2024 13:13
To: user@ignite.apache.org
Subject: Re: Ignite cache with custom key : key not found

Well, the quick workaround would be to add at least one named field to the 
object.
Meanwhile, we'll try to figure out the right way to fix it without breaking 
backward compatibility.

Best Regards,
Igor


On Wed, Jul 31, 2024 at 10:46 AM Louis C wrote:
Hello,

Thanks for your answers. Igor, indeed I only use the raw reader/writer to 
serialize my object. It seems like the Java client thinks there is a 
schema, even though this is fully raw.

Pavel, find attached a repro code. There are the C++, Java and the conf I used.
The C++ client first puts a value, then gets it. Then it uses a Java task 
(TestTask) to add a particular value on the same key in the cache. When we get 
back this value, this is still the initial value.
Then the C++ client calls the "ListCountersTask" that list all the keys in the 
cache. As we can see there are 2 keys in the cache, that deserialize to the 
same object, but have different binary representation.
See the capture (inline screenshot not preserved in this archive).
Hope you'll manage to run the example.

Regards
Louis

From: Pavel Tupitsyn
Sent: Tuesday, 30 July 2024 11:16
To: user@ignite.apache.org
Subject: Re: Ignite cache with custom key : key not found

Louis, we need to see the code to help you - could you please share it, both 
Java and C++ parts? Ideally a reproducer that we can run.

On Tue, Jul 30, 2024 at 11:59 AM Igor Sapego wrote:
What I see from the object is that it's fully raw, meaning, all the fields 
written without names. Is that correct?

In this case, there is no schema on C++ side, because there are no named fields 
and schema is not needed.
Java client writes schemaInitialId() in this case, which is 
FNV1_OFFSET_BASIS(0x811C9DC5).

Looks like a bug in Java client to me. Pavel, what do you think?

Best Regards,
Igor


On Tue, Jul 30, 2024 at 10:03 AM Louis C wrote:
Sorry to up this subject, but I have not been able to find a solution/reason 
for this problem.
Does anyone have an idea ?

Thanks,

Louis C.

From: Louis C
Sent: Monday, 22 July 2024 17:06
To: user@ignite.apache.org
Subject: RE: Ignite cache with custom key : key not found

Hello,

Thanks for your answers.

As for the "compactFooter", I already set it to "false" in the XML config of the 
server. For the client, it is the C++ thin client, and in the 
"IgniteClientConfiguration" object I cannot set it, it seems. I believe there is 
no need to set this parameter for the C++ thin client.

It will be quite hard to extract a reproducible example, but I might do it if 
necessary.
In the meantime I managed to extract the binary representation of the 2 
different keys (in hexadecimal):
Key from C++ thin client :
67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 00 00 00 00 18 00 00 00 05 00 
00 00 0C 05 00 00 00 32 32 32 32 32
Key from Java :
67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 C5 9D 1C 81 18 00 00 00 05 00 
00 00 0C 05 00 00 00 32 32 32 32 32

As we can see, the difference is the 4 bytes in positions 17 to 20, 
corresponding, if we follow 
https://cwiki.apache.org/confluence/display/IGNITE/Binary%20object%20format, 
to the "Schema Id", which is a "hash of the object fields".
In the C++ thin client's key it is "00 00 00 00"; in the Java key it is "C5 9D 1C 81".
The bytes in positions 3 to 4 are "05 00", which seems to indicate user type + 
raw data, and indeed the "compact footer" flag (0x20) is not set. So it seems 
to be OK.

I do not know if I mentioned it, but I add keys with a CacheEntryProcessor on 
the Java side of things.
Anyone has an idea ?

Best regards,
Louis C.


From: Николай Ижиков on behalf of Nikolay Izhikov
Sent: Monday, 22 July 2024 10:33
To: user@ignite.apache.org
Subject: Re: Ignite cache with custom key : key not found

Hello.

It's a common issue with the thin client.
Please set BinaryConfiguration#compactFooter explicitly to false on both the 
server side and the client side.
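Nikolay's suggestion in Spring XML form, as a sketch of the standard server-side configuration (the thin client must be configured to match wherever its API allows it):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="binaryConfiguration">
        <bean class="org.apache.ignite.configuration.BinaryConfiguration">
            <!-- Write full schema footers so all clients agree on the
                 binary object layout -->
            <property name="compactFooter" value="false"/>
        </bean>
    </property>
</bean>
```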

On 22 Jul 2024, at 10:32, Pavel Tupitsyn wrote:

Re: Ignite cache with custom key : key not found

2024-07-31 Thread Igor Sapego
Well, the quick workaround would be to add at least one named field to the
object.
Meanwhile, we'll try to figure out the right way to fix it without breaking
backward compatibility.

Best Regards,
Igor


On Wed, Jul 31, 2024 at 10:46 AM Louis C  wrote:

> Hello,
>
> Thanks for your answers. Igor, indeed I only use the raw reader/writer to
> serialize my object. It indeed seems like the Java client thinks there is a
> schema, even if this is fully raw.
>
> Pavel, find attached a repro code. There are the C++, Java and the conf I
> used.
> The C++ client first puts a value, then gets it. Then it uses a Java task
> (TestTask) to add a particular value on the same key in the cache. When we
> get back this value, this is still the initial value.
> Then the C++ client calls the "ListCountersTask" that list all the keys in
> the cache. As we can see there are 2 keys in the cache, that deserialize to
> the same object, but have different binary representation.
> See the capture:
> Hope you'll manage to run the example.
>
> Regards
> Louis
> --
> *De :* Pavel Tupitsyn 
> *Envoyé :* mardi 30 juillet 2024 11:16
> *À :* user@ignite.apache.org 
> *Objet :* Re: Ignite cache with custom key : key not found
>
> Louis, we need to see the code to help you - could you please share it,
> both Java and C++ parts? Ideally a reproducer that we can run.
>
> On Tue, Jul 30, 2024 at 11:59 AM Igor Sapego  wrote:
>
> What I see from the object is that it's fully raw, meaning, all the fields
> written without names. Is that correct?
>
> In this case, there is no schema on C++ side, because there are no named
> fields and schema is not needed.
> Java client writes schemaInitialId() in this case, which is
> FNV1_OFFSET_BASIS(0x811C9DC5).
>
> Looks like a bug in Java client to me. Pavel, what do you think?
>
> Best Regards,
> Igor
>
>
> On Tue, Jul 30, 2024 at 10:03 AM Louis C  wrote:
>
> Sorry to up this subject, but I have not been able to find a
> solution/reason for this problem.
> Does anyone have an idea ?
>
> Thanks,
>
> Louis C.
> --
> *De :* Louis C 
> *Envoyé :* lundi 22 juillet 2024 17:06
> *À :* user@ignite.apache.org 
> *Objet :* RE: Ignite cache with custom key : key not found
>
> Hello,
>
> Thanks for your answers.
>
> As for the "compactFooter" I already set it to "false" in the xml config
> of the server. For the client, it is the C++ thin client, and in the
> "IgniteClientConfiguration" object I cannot set it, it seems. I believe
> there is no need to set this parameter for the C++ thin client.
>
> It will be quite hard to extract a reproducible example, but I might do it
> if necessary.
> In the meantime I managed to extract the binary representation of the 2
> different keys (in hexadecimal):
> Key from C++ thin client :
> 67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 00 00 00 00 18 00 00 00 05
> 00 00 00 0C 05 00 00 00 32 32 32 32 32
> Key from Java :
> 67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 C5 9D 1C 81 18 00 00 00 05
> 00 00 00 0C 05 00 00 00 32 32 32 32 32
>
> As we can see, the difference is the 4 bytes in position 17 to 20,
> corresponding, if we follow
> https://cwiki.apache.org/confluence/display/IGNITE/Binary%20object%20format to
> the "Schema Id", which is a "hash of the object fields".
> In the C++ thin client's key it is "00 00 00 00"; in the Java key, "C5 9D 1C 81".
> The bytes in position 3 to 4 are " 05 00" which seems to indicate user
> type + raw data, but indeed there is not the flag "compact footer" (0x20).
> So it seems to be OK.
>
> I do not know if I said it but I add keys with a CacheEntryProcessor on
> the java side of things.
> Anyone has an idea ?
>
> Best regards,
> Louis C.
>
> --
> *De :* Николай Ижиков  de la part de Nikolay
> Izhikov 
> *Envoyé :* lundi 22 juillet 2024 10:33
> *À :* user@ignite.apache.org 
> *Objet :* Re: Ignite cache with custom key : key not found
>
> Hello.
>
> It's a common issue with the thin client.
> Please set BinaryConfiguration#compactFooter explicitly to false on both
> the server side and the client side.
>
> On 22 Jul 2024, at 10:32, Pavel Tupitsyn  wrote:
>
> Hello, could you please attach a reproducer?
>
> This might have to do with type names / ids mismatch, but hard to tell
> without the code.
>
> On Fri, Jul 19, 2024 at 7:39 PM Louis C  wrote:
>
> Hello,
>
> I have a strange problem for which I can't find the reason.
>
> I made a cache (key/value cache) with a custom key type that is c

Re: Ignite cache with custom key : key not found

2024-07-30 Thread Pavel Tupitsyn
Louis, we need to see the code to help you - could you please share it,
both Java and C++ parts? Ideally a reproducer that we can run.

On Tue, Jul 30, 2024 at 11:59 AM Igor Sapego  wrote:

> What I see from the object is that it's fully raw, meaning, all the fields
> written without names. Is that correct?
>
> In this case, there is no schema on C++ side, because there are no named
> fields and schema is not needed.
> Java client writes schemaInitialId() in this case, which is
> FNV1_OFFSET_BASIS(0x811C9DC5).
>
> Looks like a bug in Java client to me. Pavel, what do you think?
>
> Best Regards,
> Igor
>
>
> On Tue, Jul 30, 2024 at 10:03 AM Louis C  wrote:
>
>> Sorry to up this subject, but I have not been able to find a
>> solution/reason for this problem.
>> Does anyone have an idea ?
>>
>> Thanks,
>>
>> Louis C.
>> --
>> *De :* Louis C 
>> *Envoyé :* lundi 22 juillet 2024 17:06
>> *À :* user@ignite.apache.org 
>> *Objet :* RE: Ignite cache with custom key : key not found
>>
>> Hello,
>>
>> Thanks for your answers.
>>
>> As for the "compactFooter" I already set it to "false" in the xml config
>> of the server. For the client, it is the C++ thin client, and in the
>> "IgniteClientConfiguration" object I cannot set it, it seems. I believe
>> there is no need to set this parameter for the C++ thin client.
>>
>> It will be quite hard to extract a reproducible example, but I might do
>> it if necessary.
>> In the meantime I managed to extract the binary representation of the 2
>> different keys (in hexadecimal):
>> Key from C++ thin client :
>> 67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 00 00 00 00 18 00 00 00
>> 05 00 00 00 0C 05 00 00 00 32 32 32 32 32
>> Key from Java :
>> 67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 C5 9D 1C 81 18 00 00 00
>> 05 00 00 00 0C 05 00 00 00 32 32 32 32 32
>>
>> As we can see, the difference is the 4 bytes in position 17 to 20,
>> corresponding, if we follow
>> https://cwiki.apache.org/confluence/display/IGNITE/Binary%20object%20format 
>> to
>> the "Schema Id", which is a "hash of the object fields".
>> In the C++ thin client's key it is "00 00 00 00"; in the Java key, "C5 9D 1C 81".
>> The bytes in position 3 to 4 are " 05 00" which seems to indicate user
>> type + raw data, but indeed there is not the flag "compact footer" (0x20).
>> So it seems to be OK.
>>
>> I do not know if I said it but I add keys with a CacheEntryProcessor on
>> the java side of things.
>> Anyone has an idea ?
>>
>> Best regards,
>> Louis C.
>>
>> --
>> *De :* Николай Ижиков  de la part de Nikolay
>> Izhikov 
>> *Envoyé :* lundi 22 juillet 2024 10:33
>> *À :* user@ignite.apache.org 
>> *Objet :* Re: Ignite cache with custom key : key not found
>>
>> Hello.
>>
>> It's a common issue with the thin client.
>> Please set BinaryConfiguration#compactFooter explicitly to false on both
>> the server side and the client side.
>>
>> On 22 Jul 2024, at 10:32, Pavel Tupitsyn  wrote:
>>
>> Hello, could you please attach a reproducer?
>>
>> This might have to do with type names / ids mismatch, but hard to tell
>> without the code.
>>
>> On Fri, Jul 19, 2024 at 7:39 PM Louis C  wrote:
>>
>> Hello,
>>
>> I have a strange problem for which I can't find the reason.
>>
>> I made a cache (key/value cache) with a custom key type that is called
>> "IgniteBinaryData".
>>
>> I have a C++ thin client that calls the server and execute a Java
>> ComputeTaskAdapter that I made (let's call it
>> "Task1").
>> This Task1 writes data in the cache with the custom key type
>> "IgniteBinaryData".
>>
>> But the issue is that when I request the same cache from the C++ thin
>> client, the key is not found.
>>
>> What is strange is that I can then add the key with a "Put" from the C++,
>> and when I look at the deserialized keys in the java code, there does not
>> seem to be any difference between the 2 "different" keys, which are both
>> present in the cache.
>>
>> What I saw is that when I do a "Get" from the C++, the key is not
>> deserialized (Ignite looks only at the serialized data of the keys).
>>
>> So I think there might be a difference in the serialization of the key
>> between the Java code and the C++, but not visible when deserialized.
>>
>> But looking at all the entries in the cache with an iterator, I found no
>> differences. I tried using the".withKeepBinary();" method to access the
>> keys without deserialization, but I can't find a way to get the "bytes[]"
>> corresponding to the key from the BinaryObject.
>>
>> So, my question would be : how to get the "bytes[]" corresponding to a
>> custom key ?
>> And also, is there a known issue that could arise when doing this ? I
>> carefully followed
>> https://ignite.apache.org/docs/latest/cpp-specific/cpp-platform-interoperability
>>  and
>> I have no problem of deserialization...
>>
>> Best regards,
>>
>> Louis C.
>>
>>
>>


Re: Ignite cache with custom key : key not found

2024-07-30 Thread Igor Sapego
What I see from the object is that it's fully raw, meaning, all the fields
written without names. Is that correct?

In this case, there is no schema on C++ side, because there are no named
fields and schema is not needed.
Java client writes schemaInitialId() in this case, which is
FNV1_OFFSET_BASIS(0x811C9DC5).

Looks like a bug in Java client to me. Pavel, what do you think?

Best Regards,
Igor
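The constant Igor references can be checked standalone; the snippet below (plain JDK, no Ignite dependency) also shows that the mystery bytes "C5 9D 1C 81" in Louis's Java-written key are exactly 0x811C9DC5 stored little-endian:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class SchemaIdDemo {
    // FNV-1 32-bit offset basis; per this thread, Ignite uses it as the
    // initial schema id, so a key with no named fields keeps this value
    static final int FNV1_OFFSET_BASIS = 0x811C9DC5;

    public static void main(String[] args) {
        // Hex form of the constant Igor mentions
        System.out.println(Integer.toHexString(FNV1_OFFSET_BASIS)); // 811c9dc5

        // Little-endian byte layout, matching the hex dump of the Java key
        ByteBuffer buf = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        buf.putInt(FNV1_OFFSET_BASIS);
        StringBuilder sb = new StringBuilder();
        for (byte b : buf.array()) {
            sb.append(String.format("%02X ", b));
        }
        System.out.println(sb.toString().trim()); // C5 9D 1C 81
    }
}
```

The C++ client wrote 00 00 00 00 in the same position, which is why the two serialized keys differ while their deserialized forms look identical.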


On Tue, Jul 30, 2024 at 10:03 AM Louis C  wrote:

> Sorry to up this subject, but I have not been able to find a
> solution/reason for this problem.
> Does anyone have an idea ?
>
> Thanks,
>
> Louis C.
> --
> *De :* Louis C 
> *Envoyé :* lundi 22 juillet 2024 17:06
> *À :* user@ignite.apache.org 
> *Objet :* RE: Ignite cache with custom key : key not found
>
> Hello,
>
> Thanks for your answers.
>
> As for the "compactFooter", I already set it to "false" in the xml config
> of the server. The client is the C++ thin client, and it seems I cannot set
> it on the "IgniteClientConfiguration" object. I believe there is no need to
> set this parameter for the C++ thin client.
>
> It will be quite hard to extract a reproducible example, but I might do it
> if necessary.
> In the meantime I managed to extract the binary representation of the 2
> different keys (in hexadecimal):
> Key from C++ thin client:
> 67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 00 00 00 00 18 00 00 00 05
> 00 00 00 0C 05 00 00 00 32 32 32 32 32
> Key from Java:
> 67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 C5 9D 1C 81 18 00 00 00 05
> 00 00 00 0C 05 00 00 00 32 32 32 32 32
>
> As we can see, the difference is the 4 bytes in positions 17 to 20,
> corresponding, if we follow
> https://cwiki.apache.org/confluence/display/IGNITE/Binary%20object%20format
> to the "Schema Id", which is a "hash of the object fields".
> In the case of the C++ thin client it is "00 00 00 00"; in the Java case it
> is "C5 9D 1C 81".
> The bytes in positions 3 to 4 are "05 00", which seems to indicate user
> type + raw data, but indeed the "compact footer" flag (0x20) is not set.
> So it seems to be OK.
>
> I do not know if I said it but I add keys with a CacheEntryProcessor on
> the java side of things.
> Anyone has an idea ?
>
> Best regards,
> Louis C.
>
> --
> *De :* Николай Ижиков  de la part de Nikolay
> Izhikov 
> *Envoyé :* lundi 22 juillet 2024 10:33
> *À :* user@ignite.apache.org 
> *Objet :* Re: Ignite cache with custom key : key not found
>
> Hello.
>
> It is a common issue with the thin client.
> Please set the same value of BinaryConfiguration#compactFooter explicitly to
> false on both the server side and the client side.
>
> On 22 Jul 2024, at 10:32, Pavel Tupitsyn  wrote:
>
> Hello, could you please attach a reproducer?
>
> This might have to do with type names / ids mismatch, but hard to tell
> without the code.
>
> On Fri, Jul 19, 2024 at 7:39 PM Louis C  wrote:
>
> Hello,
>
> I have a strange problem for which I can't find the reason.
>
> I made a cache (key/value cache) with a custom key type that is called
> "IgniteBinaryData".
>
> I have a C++ thin client that calls the server and executes a Java
> ComputeTaskAdapter that I made (let's call it
> "Task1").
> This Task1 writes data in the cache with the custom key type
> "IgniteBinaryData".
>
> But the issue is that when I request the same cache from the C++ thin
> client, the key is not found.
>
> What is strange is that I can then add the key with a "Put" from the C++,
> and when I look at the deserialized keys in the java code, there does not
> seem to be any difference between the 2 "different" keys, which are both
> present in the cache.
>
> What I saw is that when I do a "Get" from the C++, the key is not
> deserialized (Ignite looks only at the serialized data of the keys).
>
> So I think there might be a difference in the serialization of the key
> between the Java code and the C++, but not visible when deserialized.
>
> But looking at all the entries in the cache with an iterator, I found no
> differences. I tried using the ".withKeepBinary()" method to access the
> keys without deserialization, but I can't find a way to get the "byte[]"
> corresponding to the key from the BinaryObject.
>
> So, my question would be: how do I get the "byte[]" corresponding to a
> custom key?
> And also, is there a known issue that could arise when doing this? I
> carefully followed
> https://ignite.apache.org/docs/latest/cpp-specific/cpp-platform-interoperability
> and I have no problems with deserialization...
>
> Best regards,
>
> Louis C.
>
>
>


RE: Ignite cache with custom key : key not found

2024-07-30 Thread Louis C
Sorry to up this subject, but I have not been able to find a solution/reason 
for this problem.
Does anyone have an idea ?

Thanks,

Louis C.

De : Louis C 
Envoyé : lundi 22 juillet 2024 17:06
À : user@ignite.apache.org 
Objet : RE: Ignite cache with custom key : key not found

Hello,

Thanks for your answers.

As for the "compactFooter", I already set it to "false" in the xml config of
the server. The client is the C++ thin client, and it seems I cannot set it
on the "IgniteClientConfiguration" object. I believe there is no need to set
this parameter for the C++ thin client.

It will be quite hard to extract a reproducible example, but I might do it if
necessary.
In the meantime I managed to extract the binary representation of the 2
different keys (in hexadecimal):
Key from C++ thin client:
67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 00 00 00 00 18 00 00 00 05 00
00 00 0C 05 00 00 00 32 32 32 32 32
Key from Java:
67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 C5 9D 1C 81 18 00 00 00 05 00
00 00 0C 05 00 00 00 32 32 32 32 32

As we can see, the difference is the 4 bytes in positions 17 to 20,
corresponding, if we follow
https://cwiki.apache.org/confluence/display/IGNITE/Binary%20object%20format
to the "Schema Id", which is a "hash of the object fields".
In the case of the C++ thin client it is "00 00 00 00"; in the Java case it
is "C5 9D 1C 81".
The bytes in positions 3 to 4 are "05 00", which seems to indicate user type +
raw data, but indeed the "compact footer" flag (0x20) is not set. So it seems
to be OK.

I do not know if I said it but I add keys with a CacheEntryProcessor on the 
java side of things.
Anyone has an idea ?

Best regards,
Louis C.


De : Николай Ижиков  de la part de Nikolay Izhikov 

Envoyé : lundi 22 juillet 2024 10:33
À : user@ignite.apache.org 
Objet : Re: Ignite cache with custom key : key not found

Hello.

It is a common issue with the thin client.
Please set the same value of BinaryConfiguration#compactFooter explicitly to
false on both the server side and the client side.

On 22 Jul 2024, at 10:32, Pavel Tupitsyn  wrote:

Hello, could you please attach a reproducer?

This might have to do with type names / ids mismatch, but hard to tell without 
the code.

On Fri, Jul 19, 2024 at 7:39 PM Louis C 
mailto:l...@outlook.fr>> wrote:
Hello,

I have a strange problem for which I can't find the reason.

I made a cache (key/value cache) with a custom key type that is called 
"IgniteBinaryData".

I have a C++ thin client that calls the server and executes a Java
ComputeTaskAdapter that I made (let's call it "Task1").
This Task1 writes data in the cache with the custom key type "IgniteBinaryData".

But the issue is that when I request the same cache from the C++ thin client, 
the key is not found.

What is strange is that I can then add the key with a "Put" from the C++, and 
when I look at the deserialized keys in the java code, there does not seem to 
be any difference between the 2 "different" keys, which are both present in the 
cache.

What I saw is that when I do a "Get" from the C++, the key is not deserialized 
(Ignite looks only at the serialized data of the keys).

So I think there might be a difference in the serialization of the key between 
the Java code and the C++, but not visible when deserialized.

But looking at all the entries in the cache with an iterator, I found no
differences. I tried using the ".withKeepBinary()" method to access the keys
without deserialization, but I can't find a way to get the "byte[]"
corresponding to the key from the BinaryObject.

So, my question would be: how do I get the "byte[]" corresponding to a custom
key?
And also, is there a known issue that could arise when doing this? I carefully
followed
https://ignite.apache.org/docs/latest/cpp-specific/cpp-platform-interoperability
and I have no problems with deserialization...

Best regards,

Louis C.



RE: Ignite cache with custom key : key not found

2024-07-22 Thread Louis C
Hello,

Thanks for your answers.

As for the "compactFooter", I already set it to "false" in the xml config of
the server. The client is the C++ thin client, and it seems I cannot set it
on the "IgniteClientConfiguration" object. I believe there is no need to set
this parameter for the C++ thin client.

It will be quite hard to extract a reproducible example, but I might do it if
necessary.
In the meantime I managed to extract the binary representation of the 2
different keys (in hexadecimal):
Key from C++ thin client:
67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 00 00 00 00 18 00 00 00 05 00
00 00 0C 05 00 00 00 32 32 32 32 32
Key from Java:
67 01 05 00 91 D3 05 6C 87 E6 CF 1E 26 00 00 00 C5 9D 1C 81 18 00 00 00 05 00
00 00 0C 05 00 00 00 32 32 32 32 32

As we can see, the difference is the 4 bytes in positions 17 to 20,
corresponding, if we follow
https://cwiki.apache.org/confluence/display/IGNITE/Binary%20object%20format
to the "Schema Id", which is a "hash of the object fields".
In the case of the C++ thin client it is "00 00 00 00"; in the Java case it
is "C5 9D 1C 81".
The bytes in positions 3 to 4 are "05 00", which seems to indicate user type +
raw data, but indeed the "compact footer" flag (0x20) is not set. So it seems
to be OK.

I do not know if I said it but I add keys with a CacheEntryProcessor on the 
java side of things.
Anyone has an idea ?

Best regards,
Louis C.


De : Николай Ижиков  de la part de Nikolay Izhikov 

Envoyé : lundi 22 juillet 2024 10:33
À : user@ignite.apache.org 
Objet : Re: Ignite cache with custom key : key not found

Hello.

It is a common issue with the thin client.
Please set the same value of BinaryConfiguration#compactFooter explicitly to
false on both the server side and the client side.

On 22 Jul 2024, at 10:32, Pavel Tupitsyn  wrote:

Hello, could you please attach a reproducer?

This might have to do with type names / ids mismatch, but hard to tell without 
the code.

On Fri, Jul 19, 2024 at 7:39 PM Louis C 
mailto:l...@outlook.fr>> wrote:
Hello,

I have a strange problem for which I can't find the reason.

I made a cache (key/value cache) with a custom key type that is called 
"IgniteBinaryData".

I have a C++ thin client that calls the server and executes a Java
ComputeTaskAdapter that I made (let's call it "Task1").
This Task1 writes data in the cache with the custom key type "IgniteBinaryData".

But the issue is that when I request the same cache from the C++ thin client, 
the key is not found.

What is strange is that I can then add the key with a "Put" from the C++, and 
when I look at the deserialized keys in the java code, there does not seem to 
be any difference between the 2 "different" keys, which are both present in the 
cache.

What I saw is that when I do a "Get" from the C++, the key is not deserialized 
(Ignite looks only at the serialized data of the keys).

So I think there might be a difference in the serialization of the key between 
the Java code and the C++, but not visible when deserialized.

But looking at all the entries in the cache with an iterator, I found no
differences. I tried using the ".withKeepBinary()" method to access the keys
without deserialization, but I can't find a way to get the "byte[]"
corresponding to the key from the BinaryObject.

So, my question would be: how do I get the "byte[]" corresponding to a custom
key?
And also, is there a known issue that could arise when doing this? I carefully
followed
https://ignite.apache.org/docs/latest/cpp-specific/cpp-platform-interoperability
and I have no problems with deserialization...

Best regards,

Louis C.



Re: Ignite cache with custom key : key not found

2024-07-22 Thread Nikolay Izhikov
Hello.

It is a common issue with the thin client.
Please set the same value of BinaryConfiguration#compactFooter explicitly to
false on both the server side and the client side.
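For reference, Nikolay's advice usually translates, on the server side, into a Spring XML fragment along these lines (a sketch; where the bean sits depends on your existing IgniteConfiguration):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="binaryConfiguration">
        <bean class="org.apache.ignite.configuration.BinaryConfiguration">
            <!-- Set explicitly to the same value on both sides, per the advice above. -->
            <property name="compactFooter" value="false"/>
        </bean>
    </property>
</bean>
```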

> On 22 Jul 2024, at 10:32, Pavel Tupitsyn  wrote:
> 
> Hello, could you please attach a reproducer?
> 
> This might have to do with type names / ids mismatch, but hard to tell 
> without the code.
> 
> On Fri, Jul 19, 2024 at 7:39 PM Louis C  > wrote:
>> Hello,
>> 
>> I have a strange problem for which I can't find the reason.
>> 
>> I made a cache (key/value cache) with a custom key type that is called 
>> "IgniteBinaryData".
>> 
>> I have a C++ thin client that calls the server and executes a Java
>> ComputeTaskAdapter that I made (let's call it
>> "Task1").
>> This Task1 writes data in the cache with the custom key type 
>> "IgniteBinaryData".
>> 
>> But the issue is that when I request the same cache from the C++ thin 
>> client, the key is not found.
>> 
>> What is strange is that I can then add the key with a "Put" from the C++, 
>> and when I look at the deserialized keys in the java code, there does not 
>> seem to be any difference between the 2 "different" keys, which are both 
>> present in the cache.
>> 
>> What I saw is that when I do a "Get" from the C++, the key is not 
>> deserialized (Ignite looks only at the serialized data of the keys).
>> 
>> So I think there might be a difference in the serialization of the key 
>> between the Java code and the C++, but not visible when deserialized.
>> 
>> But looking at all the entries in the cache with an iterator, I found no
>> differences. I tried using the ".withKeepBinary()" method to access the keys
>> without deserialization, but I can't find a way to get the "byte[]"
>> corresponding to the key from the BinaryObject.
>>
>> So, my question would be: how do I get the "byte[]" corresponding to a
>> custom key?
>> And also, is there a known issue that could arise when doing this? I
>> carefully followed
>> https://ignite.apache.org/docs/latest/cpp-specific/cpp-platform-interoperability
>> and I have no problems with deserialization...
>> 
>> Best regards,
>> 
>> Louis C.



Re: Ignite cache with custom key : key not found

2024-07-22 Thread Pavel Tupitsyn
Hello, could you please attach a reproducer?

This might have to do with type names / ids mismatch, but hard to tell
without the code.

On Fri, Jul 19, 2024 at 7:39 PM Louis C  wrote:

> Hello,
>
> I have a strange problem for which I can't find the reason.
>
> I made a cache (key/value cache) with a custom key type that is called
> "IgniteBinaryData".
>
> I have a C++ thin client that calls the server and executes a Java
> ComputeTaskAdapter that I made (let's call it
> "Task1").
> This Task1 writes data in the cache with the custom key type
> "IgniteBinaryData".
>
> But the issue is that when I request the same cache from the C++ thin
> client, the key is not found.
>
> What is strange is that I can then add the key with a "Put" from the C++,
> and when I look at the deserialized keys in the java code, there does not
> seem to be any difference between the 2 "different" keys, which are both
> present in the cache.
>
> What I saw is that when I do a "Get" from the C++, the key is not
> deserialized (Ignite looks only at the serialized data of the keys).
>
> So I think there might be a difference in the serialization of the key
> between the Java code and the C++, but not visible when deserialized.
>
> But looking at all the entries in the cache with an iterator, I found no
> differences. I tried using the ".withKeepBinary()" method to access the
> keys without deserialization, but I can't find a way to get the "byte[]"
> corresponding to the key from the BinaryObject.
>
> So, my question would be: how do I get the "byte[]" corresponding to a
> custom key?
> And also, is there a known issue that could arise when doing this? I
> carefully followed
> https://ignite.apache.org/docs/latest/cpp-specific/cpp-platform-interoperability
> and I have no problems with deserialization...
>
> Best regards,
>
> Louis C.
>


Re: Ignite 2.16 entry processors sometimes execute twice

2024-07-15 Thread Raymond Liu
Thanks Pavel, Slava... I guess it boils down to a lack of an
"exactly-once" processing guarantee when more than one cache is involved.

We can maybe solve this problem for our smaller caches by combining them
into one single cache... that way, all data is contained within the
MutableEntry which gets reset to its original state for the second
execution. Some of our caches have small values and we can use this
approach. But some caches further downstream have large values and churn
frequently, so combining them may be less viable.

It would be nice if we could have a configuration option or an API overload
for bubbling up exceptional scenarios (connection errors, rebalancing,
binary registration) to the application so the application could handle
retries itself - and thus, treat retries differently than the initial
attempt. That way, the application can use the "short circuit if nothing
changed" path for the 99% of cases where the entry processor only executed
once, and then the "repropagate everything downstream even if nothing
changed" path for the exceptional cases where the entry processor may have
executed twice. This would save us a lot of processing.

If we have that in Ignite 3, we may even be able to get close to
"exactly-once" by starting a transaction on the client, using that
transaction id in a ComputeJob, and then if an exception is bubbled back to
the client, the client can roll back the transaction and start a new one
for its retry.


On Fri, Jul 12, 2024 at 5:54 AM Pavel Tupitsyn  wrote:

> > how did you discover the answer to this
>
> By enabling "break on all exceptions" in my IDE.
> This exception is not logged, Ignite considers this a normal situation,
> registers the type automatically and then re-runs the processor.
>
> As Slava said, you should not rely on the fact that the processor will be
> executed only once, automated retries are possible in different situations
> (connection errors, rebalance, etc).
>


Re: Ignite 2.16 entry processors sometimes execute twice

2024-07-12 Thread Pavel Tupitsyn
> how did you discover the answer to this

By enabling "break on all exceptions" in my IDE.
This exception is not logged, Ignite considers this a normal situation,
registers the type automatically and then re-runs the processor.

As Slava said, you should not rely on the fact that the processor will be
executed only once, automated retries are possible in different situations
(connection errors, rebalance, etc).


Re: Ignite 2.16 entry processors sometimes execute twice

2024-07-12 Thread Вячеслав Коптилин
Hi Raymond,

Besides the answer Pavel provided, please take into account that an entry
processor implementation should avoid generating random values for the
entry being updated.
For example, mutableEntry.setValue(rand.nextInt()) might lead to data
inconsistency.

    An instance of entry processor must be stateless as it may be invoked
    multiple times on primary and backup nodes in the cache. It is
    guaranteed that the value passed to the entry processor will be always
    the same.

If you need random values, please consider generating them before the
invocation and passing them through additional parameters as follows:
Integer randomValue = rand.nextInt();
cache.invoke(key, entryProcessor, randomValue);

Thanks,
Slava.
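Slava's point can be illustrated with a plain-JDK sketch (Ignite's entry processor is stood in for by a BiFunction; names are illustrative): because the random value is generated once by the caller and passed in as an argument, a repeated execution of the processor applies exactly the same value.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.BiFunction;

public class DeterministicProcessorSketch {
    // Stand-in for an entry processor: (currentValue, argument) -> newValue.
    // It derives the new value only from its inputs, never from rand.nextInt().
    static final BiFunction<Integer, Integer, Integer> PROCESSOR = (cur, arg) -> arg;

    public static void main(String[] args) {
        Map<String, Integer> cache = new HashMap<>();

        // Generated once, outside the processor, then passed as an argument
        // (cache.invoke(key, entryProcessor, randomValue) in Ignite terms).
        int randomValue = ThreadLocalRandom.current().nextInt();

        cache.put("key", PROCESSOR.apply(cache.get("key"), randomValue)); // first run
        Integer first = cache.get("key");
        cache.put("key", PROCESSOR.apply(cache.get("key"), randomValue)); // simulated retry

        // A retried execution leaves the entry in the same state.
        System.out.println(first.equals(cache.get("key"))); // prints "true"
    }
}
```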


ср, 10 июл. 2024 г. в 19:59, Raymond Liu :

> I've only tested the real deal with 2.16, but I just ran the sample repo
> test with 2.14, and the output is the same. I can test with even earlier
> versions if you'd like.
>
> On Wed, Jul 10, 2024 at 4:08 AM Stephen Darlington 
> wrote:
>
>> Do you see the same behaviour with older versions of Ignite, or is this
>> unique to 2.16?
>>
>> On Tue, 9 Jul 2024 at 21:34, Raymond Liu  wrote:
>>
>>> Hi all,
>>>
>>> We're encountering an issue where entry processors execute twice.
>>> Executing twice is a problem for us because, for easier optimization, we
>>> would like our entry processors *not* to be idempotent.
>>>
>>> Here is a sample self-contained junit test on Github which demonstrates
>>> this issue: https://github.com/Philosobyte/ignite-duplicate-processing-test/blob/main/src/test/java/com/philosobyte/igniteduplicateprocessingtest/DuplicateProcessingTest.java
>>>
>>>
>>> (in case that link doesn't work, my github username is Philosobyte and
>>> the project is called "ignite-duplicate-processing-test")
>>>
>>> When the test is run, it will log two executions instead of just one.
>>>
>>> To rule out the entry processor executing on both a primary and backup
>>> partition, I set the number of backups to 0. I've also set atomicityMode to
>>> ATOMIC.
>>>
>>> Does anyone have any ideas about why this might happen?
>>>
>>> Thank you,
>>> Raymond
>>>
>>


Re: Ignite 2.16 entry processors sometimes execute twice

2024-07-11 Thread Raymond Liu
One followup question - how did you discover the answer to this, and how
would we discover answers to problems like these ourselves? I set
IGNITE_QUIET=false, and I increased logging level in
config/java.util.logging.properties and log4j2.xml on the client side to no
avail. Is following the tree of possible code execution paths with the help
of a debugger the only option?

On Thu, Jul 11, 2024 at 12:41 PM Raymond Liu  wrote:

> Hey Pavel,
>
> That does the trick! Interestingly, if I change the return value of the
> entry processor to some other POJO and I register that POJO in the
> BinaryConfiguration instead of LightsaberColor, the duplicate processing
> still occurs. Perhaps this is related to the cache value rather than the
> return value - it just so happens that the original test used
> LightsaberColor as both. Thank you very much for looking into this!
>
> - Raymond
>
> On Thu, Jul 11, 2024 at 8:42 AM Pavel Tupitsyn 
> wrote:
>
>> - Duplicate invocation happens due to automatic retry
>> for UnregisteredClassException caused by return value of type
>> LightsaberColor.
>> - Ignite handles the exception, registers the type automatically, and
>> re-runs the processor
>> - This only happens once, subsequent invocations are not duplicated
>> - To fix this, register the LightsaberColor explicitly in
>> BinaryConfiguration. Your code should be changed like this:
>>
>> static class DuplicateProcessingTestConfiguration {
>> @Bean
>> @SneakyThrows
>> public Ignite ignite(ApplicationContext applicationContext) {
>> IgniteConfiguration cfg = new IgniteConfiguration();
>> cfg.setBinaryConfiguration(new 
>> BinaryConfiguration().setTypeConfigurations(Collections.singletonList(
>> new BinaryTypeConfiguration(LightsaberColor.class.getName())
>> )));
>> return IgniteSpring.start(cfg, applicationContext);
>> }
>>
>> ...
>>
>>
>> On Wed, Jul 10, 2024 at 7:58 PM Raymond Liu 
>> wrote:
>>
>>> I've only tested the real deal with 2.16, but I just ran the sample repo
>>> test with 2.14, and the output is the same. I can test with even earlier
>>> versions if you'd like.
>>>
>>> On Wed, Jul 10, 2024 at 4:08 AM Stephen Darlington <
>>> sdarling...@apache.org> wrote:
>>>
>>>> Do you see the same behaviour with older versions of Ignite, or is this
>>>> unique to 2.16?
>>>>
>>>> On Tue, 9 Jul 2024 at 21:34, Raymond Liu  wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> We're encountering an issue where entry processors execute twice.
>>>>> Executing twice is a problem for us because, for easier optimization, we
>>>>> would like our entry processors *not* to be idempotent.
>>>>>
>>>>> Here is a sample self-contained junit test on Github which
>>>>> demonstrates this issue: https://github.com/Philosobyte/ignite-duplicate-processing-test/blob/main/src/test/java/com/philosobyte/igniteduplicateprocessingtest/DuplicateProcessingTest.java
>>>>>
>>>>>
>>>>> (in case that link doesn't work, my github username is Philosobyte and
>>>>> the project is called "ignite-duplicate-processing-test")
>>>>>
>>>>> When the test is run, it will log two executions instead of just one.
>>>>>
>>>>> To rule out the entry processor executing on both a primary and backup
>>>>> partition, I set the number of backups to 0. I've also set atomicityMode 
>>>>> to
>>>>> ATOMIC.
>>>>>
>>>>> Does anyone have any ideas about why this might happen?
>>>>>
>>>>> Thank you,
>>>>> Raymond
>>>>>
>>>>


Re: Ignite 2.16 entry processors sometimes execute twice

2024-07-11 Thread Raymond Liu
Hey Pavel,

That does the trick! Interestingly, if I change the return value of the
entry processor to some other POJO and I register that POJO in the
BinaryConfiguration instead of LightsaberColor, the duplicate processing
still occurs. Perhaps this is related to the cache value rather than the
return value - it just so happens that the original test used
LightsaberColor as both. Thank you very much for looking into this!

- Raymond

On Thu, Jul 11, 2024 at 8:42 AM Pavel Tupitsyn  wrote:

> - Duplicate invocation happens due to automatic retry
> for UnregisteredClassException caused by return value of type
> LightsaberColor.
> - Ignite handles the exception, registers the type automatically, and
> re-runs the processor
> - This only happens once, subsequent invocations are not duplicated
> - To fix this, register the LightsaberColor explicitly in
> BinaryConfiguration. Your code should be changed like this:
>
> static class DuplicateProcessingTestConfiguration {
> @Bean
> @SneakyThrows
> public Ignite ignite(ApplicationContext applicationContext) {
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setBinaryConfiguration(new 
> BinaryConfiguration().setTypeConfigurations(Collections.singletonList(
> new BinaryTypeConfiguration(LightsaberColor.class.getName())
> )));
> return IgniteSpring.start(cfg, applicationContext);
> }
>
> ...
>
>
> On Wed, Jul 10, 2024 at 7:58 PM Raymond Liu  wrote:
>
>> I've only tested the real deal with 2.16, but I just ran the sample repo
>> test with 2.14, and the output is the same. I can test with even earlier
>> versions if you'd like.
>>
>> On Wed, Jul 10, 2024 at 4:08 AM Stephen Darlington <
>> sdarling...@apache.org> wrote:
>>
>>> Do you see the same behaviour with older versions of Ignite, or is this
>>> unique to 2.16?
>>>
>>> On Tue, 9 Jul 2024 at 21:34, Raymond Liu  wrote:
>>>
>>>> Hi all,
>>>>
>>>> We're encountering an issue where entry processors execute twice.
>>>> Executing twice is a problem for us because, for easier optimization, we
>>>> would like our entry processors *not* to be idempotent.
>>>>
>>>> Here is a sample self-contained junit test on Github which demonstrates
>>>> this issue: https://github.com/Philosobyte/ignite-duplicate-processing-test/blob/main/src/test/java/com/philosobyte/igniteduplicateprocessingtest/DuplicateProcessingTest.java
>>>>
>>>>
>>>> (in case that link doesn't work, my github username is Philosobyte and
>>>> the project is called "ignite-duplicate-processing-test")
>>>>
>>>> When the test is run, it will log two executions instead of just one.
>>>>
>>>> To rule out the entry processor executing on both a primary and backup
>>>> partition, I set the number of backups to 0. I've also set atomicityMode to
>>>> ATOMIC.
>>>>
>>>> Does anyone have any ideas about why this might happen?
>>>>
>>>> Thank you,
>>>> Raymond
>>>>
>>>


Re: Ignite 2.16 entry processors sometimes execute twice

2024-07-11 Thread Pavel Tupitsyn
- Duplicate invocation happens due to automatic retry
for UnregisteredClassException caused by return value of type
LightsaberColor.
- Ignite handles the exception, registers the type automatically, and
re-runs the processor
- This only happens once, subsequent invocations are not duplicated
- To fix this, register the LightsaberColor explicitly in
BinaryConfiguration. Your code should be changed like this:

static class DuplicateProcessingTestConfiguration {
    @Bean
    @SneakyThrows
    public Ignite ignite(ApplicationContext applicationContext) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setBinaryConfiguration(new BinaryConfiguration().setTypeConfigurations(Collections.singletonList(
            new BinaryTypeConfiguration(LightsaberColor.class.getName())
        )));

        return IgniteSpring.start(cfg, applicationContext);
    }

    ...


On Wed, Jul 10, 2024 at 7:58 PM Raymond Liu  wrote:

> I've only tested the real deal with 2.16, but I just ran the sample repo
> test with 2.14, and the output is the same. I can test with even earlier
> versions if you'd like.
>
> On Wed, Jul 10, 2024 at 4:08 AM Stephen Darlington 
> wrote:
>
>> Do you see the same behaviour with older versions of Ignite, or is this
>> unique to 2.16?
>>
>> On Tue, 9 Jul 2024 at 21:34, Raymond Liu  wrote:
>>
>>> Hi all,
>>>
>>> We're encountering an issue where entry processors execute twice.
>>> Executing twice is a problem for us because, for easier optimization, we
>>> would like our entry processors *not* to be idempotent.
>>>
>>> Here is a sample self-contained junit test on Github which demonstrates
>>> this issue: https://github.com/Philosobyte/ignite-duplicate-processing-test/blob/main/src/test/java/com/philosobyte/igniteduplicateprocessingtest/DuplicateProcessingTest.java
>>>
>>>
>>> (in case that link doesn't work, my github username is Philosobyte and
>>> the project is called "ignite-duplicate-processing-test")
>>>
>>> When the test is run, it will log two executions instead of just one.
>>>
>>> To rule out the entry processor executing on both a primary and backup
>>> partition, I set the number of backups to 0. I've also set atomicityMode to
>>> ATOMIC.
>>>
>>> Does anyone have any ideas about why this might happen?
>>>
>>> Thank you,
>>> Raymond
>>>
>>


Re: Ignite 2.16 entry processors sometimes execute twice

2024-07-10 Thread Raymond Liu
I've only tested the real deal with 2.16, but I just ran the sample repo
test with 2.14, and the output is the same. I can test with even earlier
versions if you'd like.

On Wed, Jul 10, 2024 at 4:08 AM Stephen Darlington 
wrote:

> Do you see the same behaviour with older versions of Ignite, or is this
> unique to 2.16?
>
> On Tue, 9 Jul 2024 at 21:34, Raymond Liu  wrote:
>
>> Hi all,
>>
>> We're encountering an issue where entry processors execute twice.
>> Executing twice is a problem for us because, for easier optimization, we
>> would like our entry processors *not* to be idempotent.
>>
>> Here is a sample self-contained junit test on Github which demonstrates
>> this issue: https://github.com/Philosobyte/ignite-duplicate-processing-test/blob/main/src/test/java/com/philosobyte/igniteduplicateprocessingtest/DuplicateProcessingTest.java
>>
>>
>> (in case that link doesn't work, my github username is Philosobyte and
>> the project is called "ignite-duplicate-processing-test")
>>
>> When the test is run, it will log two executions instead of just one.
>>
>> To rule out the entry processor executing on both a primary and backup
>> partition, I set the number of backups to 0. I've also set atomicityMode to
>> ATOMIC.
>>
>> Does anyone have any ideas about why this might happen?
>>
>> Thank you,
>> Raymond
>>
>


Re: Ignite 2.16 entry processors sometimes execute twice

2024-07-10 Thread Stephen Darlington
Do you see the same behaviour with older versions of Ignite, or is this
unique to 2.16?

On Tue, 9 Jul 2024 at 21:34, Raymond Liu  wrote:

> Hi all,
>
> We're encountering an issue where entry processors execute twice.
> Executing twice is a problem for us because, for easier optimization, we
> would like our entry processors *not* to be idempotent.
>
> Here is a sample self-contained junit test on Github which demonstrates
> this issue: https://github.com/Philosobyte/ignite-duplicate-processing-test/blob/main/src/test/java/com/philosobyte/igniteduplicateprocessingtest/DuplicateProcessingTest.java
>
>
> (in case that link doesn't work, my github username is Philosobyte and the
> project is called "ignite-duplicate-processing-test")
>
> When the test is run, it will log two executions instead of just one.
>
> To rule out the entry processor executing on both a primary and backup
> partition, I set the number of backups to 0. I've also set atomicityMode to
> ATOMIC.
>
> Does anyone have any ideas about why this might happen?
>
> Thank you,
> Raymond
>
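Until the root cause is pinned down, one defensive option is to make the processor tolerate re-execution even when full idempotency is undesirable. Below is a minimal sketch of a request-id guard in plain Java (not the Ignite EntryProcessor API); the Entry record and the "req-1" id are hypothetical illustration, not anything from the linked test project:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: an entry update guarded by the id of the last applied request, so a
// duplicate delivery (e.g. a client retry) becomes a no-op instead of a second
// increment.
public class IdempotentProcessorSketch {
    // Entry value: a counter plus the id of the request that last updated it.
    record Entry(long counter, String lastRequestId) {}

    static Entry increment(Entry entry, String requestId) {
        if (entry != null && requestId.equals(entry.lastRequestId()))
            return entry; // duplicate delivery of the same request: skip
        long base = entry == null ? 0 : entry.counter();
        return new Entry(base + 1, requestId);
    }

    static long applyTwice() {
        Map<String, Entry> cache = new HashMap<>();
        // The same logical request applied twice, simulating a retry:
        cache.compute("k", (k, v) -> increment(v, "req-1"));
        cache.compute("k", (k, v) -> increment(v, "req-1"));
        return cache.get("k").counter();
    }

    public static void main(String[] args) {
        System.out.println(applyTwice()); // prints 1, not 2
    }
}
```

The same guard can live inside a real entry processor by storing the request id in the cache value, at the cost of a little extra state per entry.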


Re: Tracing in Ignite .NET

2024-07-08 Thread Pavel Tupitsyn
Hi, the page in question applies to .NET as well (server and thick client
modes); you can use an XML config file as described in [1].

[1]
https://ignite.apache.org/docs/latest/net-specific/net-configuration-options#configure-with-spring-xml
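For example, the Spring XML file passed to the .NET node can enable a tracing SPI. A minimal sketch, assuming the ignite-opencensus module is on the classpath:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <!-- OpenCensus-based tracing; spans can then be exported via OpenCensus exporters -->
  <property name="tracingSpi">
    <bean class="org.apache.ignite.spi.tracing.opencensus.OpenCensusTracingSpi"/>
  </property>
</bean>
```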

On Thu, Jul 4, 2024 at 9:09 AM  wrote:

> Hi  Pavel,
>
>
>
> How can  we achieve Tracing  using  Ignite .net  where Ignite is hosted as
> a  .NET service. Could  not find  any  example  from .NET perspective.
>
>
>
> https://ignite.apache.org/docs/latest/monitoring-metrics/tracing
>
>
>
> Can  we use OpenTelemetry  Nuget packages which  can  provide similar
> results as mentioned in  above docs?
>
>
>
>
>
> Thanks
>
> Satyajit
>
>
>
>
>
>
>
>
>
> Barclays Execution Services Limited registered in England. Registered No.
> 1767980. Registered office: 1 Churchill Place, London, E14 5HP
>
> Barclays Execution Services Limited provides support and administrative
> services across Barclays group. Barclays Execution Services Limited is an
> appointed representative of Barclays Bank UK plc and Barclays Bank plc.
> Barclays Bank UK plc and Barclays Bank plc are authorised by the Prudential
> Regulation Authority and regulated by the Financial Conduct Authority and
> the Prudential Regulation Authority.
>
> This email and any attachments are confidential and intended solely for
> the addressee and may also be privileged or exempt from disclosure under
> applicable law. If you are not the addressee, or have received this email
> in error, please notify the sender and immediately delete it and any
> attachments from your system. Do not copy, use, disclose or otherwise act
> on any part of this email or its attachments.
>
> Internet communications are not guaranteed to be secure or virus-free. The
> Barclays group does not accept responsibility for any loss arising from
> unauthorised access to, or interference with, any internet communications
> by any third party, or from the transmission of any viruses. Replies to
> this email may be monitored by the Barclays group for operational or
> business reasons.
>
> Any opinion or other information in this email or its attachments that
> does not relate to the business of the Barclays group is personal to the
> sender and is not given or endorsed by the Barclays group.
>
> Unless specifically indicated, this e-mail is not an offer to buy or sell
> or a solicitation to buy or sell any securities, investment products or
> other financial product or service, an official confirmation of any
> transaction, or an official statement of Barclays.
>


Re: Ignite Cluster with Thin Java Clients give same node id

2024-07-06 Thread Alex Plehanov
Hello,

1. The client sends requests to a random known server node for load balancing.
2. The client uses the provided address as an entry point and gets
information about the other nodes from the server. If you don't need such
functionality, you can set the property
ClientConfiguration.ClusterDiscoveryEnabled to false.
3. It's not quite correct to rely on the ClusterNode.isLocal flag on
the client side. The node list is cached on the client side and, I think,
there can be cases when this flag is inconsistent (for example, there
can be more than one node with this flag).

On Wed, 3 Jul 2024 at 23:31, Murat ÖZDEMİR  wrote:
>
> Hi,
>
> I set up an Ignite cluster on Docker Desktop with the commands and 
> default-config.xml provided below:
>
> for ignite-1 node
> docker --context desktop-linux run --name ignite-1 -p 10800:10800 -p 
> 11211:11211 -p 47100:47100 -p 47500:47500 -p 49112:49112 -p 8080:8080 -d 
> apacheignite/ignite:latest
> (docker container ip: 172.17.0.2)
>
> default-config.xml for ignite-1 node;
> <beans xmlns="http://www.springframework.org/schema/beans"
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>xmlns:spring="http://camel.apache.org/schema/spring"
>xmlns:util="http://www.springframework.org/schema/util"
>xsi:schemaLocation="http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd
>http://camel.apache.org/schema/spring
>http://camel.apache.org/schema/spring/camel-spring.xsd
>http://www.springframework.org/schema/util
>https://www.springframework.org/schema/util/spring-util.xsd">
>   <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     <property name="consistentId" value="ILETISIM1"/>
>     <property name="..." value="/opt/ignite"/>
>     <property name="includeEventTypes">
>       <list>
>         <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
>         <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ"/>
>         <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
>       </list>
>     </property>
>     ...
>     <property name="clientConnectorConfiguration">
>       <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration"/>
>     </property>
>     <property name="discoverySpi">
>       <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>         <property name="localPortRange" value="1"/>
>         <property name="ipFinder">
>           <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>             <property name="addresses">
>               <list>
>                 <value>localhost:47500</value>
>                 <value>172.17.0.7:47501</value>
>               </list>
>             </property>
>           </bean>
>         </property>
>       </bean>
>     </property>
>   </bean>
> </beans>
>
> for ignite-2 node
> docker --context desktop-linux run --name ignite-2 -p 10801:10800 -p 
> 11212:11211 -p 47101:47100 -p 47501:47500 -p 49113:49112 -p 8081:8080 -d 
> apacheignite/ignite:latest
> (docker container ip: 172.17.0.7)
>
> default-config.xml for ignite-2 node;
> <beans xmlns="http://www.springframework.org/schema/beans"
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>xmlns:spring="http://camel.apache.org/schema/spring"
>xmlns:util="http://www.springframework.org/schema/util"
>xsi:schemaLocation="http://www.springframework.org/schema/beans
>http://www.springframework.org/schema/beans/spring-beans.xsd
>http://camel.apache.org/schema/spring
>http://camel.apache.org/schema/spring/camel-spring.xsd
>http://www.springframework.org/schema/util
>https://www.springframework.org/schema/util/spring-util.xsd">
>   <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>     <property name="consistentId" value="ILETISIM2"/>
>     <property name="..." value="/opt/ignite"/>
>     <property name="includeEventTypes">
>       <list>
>         <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
>         <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ"/>
>         <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
>       </list>
>     </property>
>     ...
>     <property name="clientConnectorConfiguration">
>       <bean class="org.apache.ignite.configuration.ClientConnectorConfiguration"/>
>     </property>
>     <property name="discoverySpi">
>       <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>         <property name="localPortRange" value="1"/>
>         <property name="ipFinder">
>           <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>             <property name="addresses">
>               <list>
>                 <value>localhost:47500</value>
>                 <value>172.17.0.2:47500</value>
>               </list>
>             </property>
>           </bean>
>         </property>
>       </bean>
>     </property>
>   </bean>
> </beans>
>
>
>
> I have the Java code below to test the cluster and thin clients.
>
> public class IgniteClusterTest {
>     public static void main(String[] args) {
>         IgniteClient client1 = null;
>         IgniteClient client2 = null;
>         try {
>             ApplicationContext clientConfiguration = new
>                 ClassPathXmlApplicationContext("ignite-config.xml");
>             client1 = Ignition.startClient((ClientConfiguration)
>                 clientConfiguration.getBean("clientConfiguration.cfg"));
>             client2 = Ignition.startClient((ClientConfiguration)
>                 clientConfiguration.getBean("clientConfiguration2.cfg"));
>
>             System.out.println("nodesFromClient1: " + Arrays.toString(
>                 client1.cluster().nodes().stream()
>                     .map(node -> node.id().toString()).sorted().toArray()));
>             Optional<ClusterNode> client1Local = client1.cluster().nodes()
>                 .stream().filter(ClusterNode::isLocal).findFirst();
>             client1Local.ifPresent(clusterNode ->
>                 System.out.println("client1NodeId: " + clusterNode.id()));
>
>             List<String> nodesFromClient2 =
>                 client2.cluster().nodes(

Re: How to resolve an "Already in Cache" error when using INSERT .... SELECT

2024-06-24 Thread Pavel Tupitsyn
> Even though I get the message, it DOES insert the row
> This does not happen when I connect to Ignite running on my local
machine, only when I connect to one that is running on the cloud

Looks a lot like the query is being executed twice: the first execution
inserts, the second one fails.
Could that be caused by automated retries in the cloud environment, or a
load balancer misconfiguration?

On Sat, Jun 22, 2024 at 11:02 AM Darius Cooper 
wrote:

> I'm connecting to Ignite via JDBC (port 10800) and trying to INSERT a row
> into a table that has three columns.
>
>  - THE_KEY
>  - MY_OTHER_ID
>  - SOME_DATA
>
> If I try to insert a second record with the same value for THE_KEY, I get
> a regular SQL error telling me I'm trying to insert a duplicate. So far so
> good.
>
> But, when I start with an *empty* table, and do this:
>
> INSERT INTO MY_TABLE (THE_KEY, MY_OTHER_ID, SOME_DATA)
> SELECT ?, ?, ?
> WHERE NOT EXISTS (SELECT 1 from MY_TABLE WHERE MY_OTHER_ID = ?)
>
> I get an error that says:
>
> OperationError: 5: Failed to INSERT some keys because they are already in 
> cache [keys=[123]]
>
> Some additional facts:
>
>- Even though I get the message, it DOES insert the row
>- This does not happen when I connect to Ignite running on my local
>machine, only when I connect to one that is running on the cloud
>
> Any clues to what might be going on here? Thanks!
>
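If retries do turn out to be the cause, one workaround is to switch to a statement that is idempotent under re-execution. Ignite's SQL dialect supports MERGE, which updates the row instead of failing when the primary key already exists. A sketch reusing the table from the question, assuming THE_KEY is the primary key:

```sql
MERGE INTO MY_TABLE (THE_KEY, MY_OTHER_ID, SOME_DATA)
VALUES (?, ?, ?);
```

Note this drops the NOT EXISTS guard on MY_OTHER_ID, so it only fits when overwriting an existing row is acceptable.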


Re: [support] ignite tuning help

2024-06-21 Thread wkhapy...@gmail.com
thank you share so good article
---- Replied Message ----
From: Jeremy McMillan
Date: 06/21/2024 20:57
To: user@ignite.apache.org
Subject: Re: [support] ignite tuning help

Also, I didn't look at your network trace screen cap, but you should have
zero TCP retransmissions if you set your initial TCP send window small enough.
On Fri, Jun 21, 2024, 07:53 Jeremy McMillan  wrote:
It could be network or persistent storage. What's the proportion of fast to 
slow gets?
On Thu, Jun 20, 2024, 22:48 f cad  wrote:
here is a screenshot example
On Fri, 21 Jun 2024 at 11:45, f cad  wrote:
Hello, community:
I have a cluster with three nodes.
I have two caches whose AtomicityMode is TRANSACTIONAL, with two backups and
a WriteSynchronizationMode of PRIMARY_SYNC.
I use IgniteClientSpringTransactionManager with OPTIMISTIC concurrency and
SERIALIZABLE isolation.
The pseudocode looks like this:
ignite.transactions().txStart
if (aCache.get(key) == null) {
    aCache.put(key, value)
    bCache.put(key, value)
}
tx.commit()
Sometimes aCache.get(key) takes 80 ms; sometimes only 5 ms.
CPU, I/O, and memory usage are not high on the three Ignite nodes or on the
client node. However, tcpdump shows over 40 TCP retransmissions per second
between nodes. So is this a network issue?

Re: [support] ignite tuning help

2024-06-21 Thread Jeremy McMillan
Also, I didn't look at your network trace screen cap, but you should have
zero TCP retransmissions if you set your initial TCP send window small
enough.

https://www.auvik.com/franklyit/blog/tcp-window-size/

On Fri, Jun 21, 2024, 07:53 Jeremy McMillan  wrote:

> It could be network or persistent storage. What's the proportion of fast
> to slow gets?
>
>
> On Thu, Jun 20, 2024, 22:48 f cad  wrote:
>
>> here is a screenshot example
>> [image: image.png]
>>
>> On Fri, 21 Jun 2024 at 11:45, f cad  wrote:
>>
>>> Hello, community:
>>>
>>> I have a cluster with three nodes.
>>> I have two caches whose AtomicityMode is TRANSACTIONAL, with two backups
>>> and a WriteSynchronizationMode of PRIMARY_SYNC.
>>>
>>> I use IgniteClientSpringTransactionManager with OPTIMISTIC concurrency
>>> and SERIALIZABLE isolation.
>>>
>>> The pseudocode looks like this:
>>> ignite.transactions().txStart
>>> if (aCache.get(key) == null) {
>>>     aCache.put(key, value)
>>>     bCache.put(key, value)
>>> }
>>> tx.commit()
>>>
>>> Sometimes aCache.get(key) takes 80 ms; sometimes only 5 ms.
>>> CPU, I/O, and memory usage are not high on the three Ignite nodes or on
>>> the client node. However, tcpdump shows over 40 TCP retransmissions per
>>> second between nodes. So is this a network issue?
>>>
>>>


Re: [support] ignite tuning help

2024-06-21 Thread Jeremy McMillan
It could be network or persistent storage. What's the proportion of fast to
slow gets?
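One way to answer that question is to record per-call latencies on the client and compute the share above a threshold. A self-contained sketch in plain Java with no Ignite dependency; the helper name and the sample numbers are made up for illustration:

```java
import java.util.Arrays;

// Hypothetical helper, not an Ignite API: given recorded cache.get() latencies
// in milliseconds, report what fraction exceeded a "slow" threshold.
public class GetLatencyProfile {
    static double slowFraction(long[] latenciesMs, long thresholdMs) {
        long slow = Arrays.stream(latenciesMs).filter(l -> l > thresholdMs).count();
        return (double) slow / latenciesMs.length;
    }

    public static void main(String[] args) {
        long[] samples = {5, 6, 4, 80, 5, 7, 75, 5}; // illustrative measurements
        System.out.println(slowFraction(samples, 20)); // prints 0.25
    }
}
```

A high slow fraction that correlates with retransmission bursts in the tcpdump capture would point at the network rather than storage.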


On Thu, Jun 20, 2024, 22:48 f cad  wrote:

> here is a screenshot example
> [image: image.png]
>
> On Fri, 21 Jun 2024 at 11:45, f cad  wrote:
>
>> Hello, community:
>>
>> I have a cluster with three nodes.
>> I have two caches whose AtomicityMode is TRANSACTIONAL, with two backups
>> and a WriteSynchronizationMode of PRIMARY_SYNC.
>>
>> I use IgniteClientSpringTransactionManager with OPTIMISTIC concurrency and
>> SERIALIZABLE isolation.
>>
>> The pseudocode looks like this:
>> ignite.transactions().txStart
>> if (aCache.get(key) == null) {
>>     aCache.put(key, value)
>>     bCache.put(key, value)
>> }
>> tx.commit()
>>
>> Sometimes aCache.get(key) takes 80 ms; sometimes only 5 ms.
>> CPU, I/O, and memory usage are not high on the three Ignite nodes or on
>> the client node. However, tcpdump shows over 40 TCP retransmissions per
>> second between nodes. So is this a network issue?
>>
>>


Re: [support] ignite tuning help

2024-06-20 Thread f cad
here is a screenshot example
[image: image.png]

On Fri, 21 Jun 2024 at 11:45, f cad  wrote:

> Hello, community:
>
> I have a cluster with three nodes.
> I have two caches whose AtomicityMode is TRANSACTIONAL, with two backups
> and a WriteSynchronizationMode of PRIMARY_SYNC.
>
> I use IgniteClientSpringTransactionManager with OPTIMISTIC concurrency and
> SERIALIZABLE isolation.
>
> The pseudocode looks like this:
> ignite.transactions().txStart
> if (aCache.get(key) == null) {
>     aCache.put(key, value)
>     bCache.put(key, value)
> }
> tx.commit()
>
> Sometimes aCache.get(key) takes 80 ms; sometimes only 5 ms.
> CPU, I/O, and memory usage are not high on the three Ignite nodes or on
> the client node. However, tcpdump shows over 40 TCP retransmissions per
> second between nodes. So is this a network issue?
>
>


Re: 2.x timeline and migration to 3.x

2024-06-19 Thread Pavel Tupitsyn
No specific dates yet

On Thu, Jun 20, 2024 at 7:45 AM Guofeng Zhang  wrote:

> It’s good news to hear that Ignite 3 will be released in 2024. Do you have
> a specific plan for the release date?
>
> On Thu, May 30, 2024 at 9:48 PM Raymond Liu  wrote:
>
>> Thank you so much, Pavel! That is all great to hear.
>>
>> On Thu, May 30, 2024 at 9:13 AM Pavel Tupitsyn 
>> wrote:
>>
>>> > will 2.x continue to be developed for a year or two, will it be on
>>> critical patch support, or will it no longer be developed at all?
>>>
>>> 2.x and 3.x will co-exist for the foreseeable future
>>>
>>> > will there be a typical upgrade path from 2.x to 3.x for existing
>>> clusters
>>>
>>> Yes
>>>
>>> > Is there a general target timeframe for Ignite 3.x to hit general
>>> availability
>>>
>>> 2024
>>>
>>> On Thu, May 30, 2024 at 2:56 PM Raymond Liu 
>>> wrote:
>>>
 Hello,

 Does anyone know the plans for Ignite 2.x after 3.0 is released? e.g.
 will 2.x continue to be developed for a year or two, will it be on critical
 patch support, or will it no longer be developed at all?

 And, in the same vein, will there be a typical upgrade path from 2.x to
 3.x for existing clusters, or have the internals changed too much for that?

 Is there a general target timeframe for Ignite 3.x to hit general
 availability? e.g. 2024-2025 vs. 2027-2029.

 I'm writing a proof of concept with Ignite 2.16 on a small team, and
 answers to these questions will help us choose whether to use Ignite, which
 features to rely on, and how soon we might plan for a migration after our
 initial work.

 Thanks,
 Raymond

>>>


Re: 2.x timeline and migration to 3.x

2024-06-19 Thread Guofeng Zhang
It’s good news to hear that Ignite 3 will be released in 2024. Do you have
a specific plan for the release date?

On Thu, May 30, 2024 at 9:48 PM Raymond Liu  wrote:

> Thank you so much, Pavel! That is all great to hear.
>
> On Thu, May 30, 2024 at 9:13 AM Pavel Tupitsyn 
> wrote:
>
>> > will 2.x continue to be developed for a year or two, will it be on
>> critical patch support, or will it no longer be developed at all?
>>
>> 2.x and 3.x will co-exist for the foreseeable future
>>
>> > will there be a typical upgrade path from 2.x to 3.x for existing
>> clusters
>>
>> Yes
>>
>> > Is there a general target timeframe for Ignite 3.x to hit general
>> availability
>>
>> 2024
>>
>> On Thu, May 30, 2024 at 2:56 PM Raymond Liu 
>> wrote:
>>
>>> Hello,
>>>
>>> Does anyone know the plans for Ignite 2.x after 3.0 is released? e.g.
>>> will 2.x continue to be developed for a year or two, will it be on critical
>>> patch support, or will it no longer be developed at all?
>>>
>>> And, in the same vein, will there be a typical upgrade path from 2.x to
>>> 3.x for existing clusters, or have the internals changed too much for that?
>>>
>>> Is there a general target timeframe for Ignite 3.x to hit general
>>> availability? e.g. 2024-2025 vs. 2027-2029.
>>>
>>> I'm writing a proof of concept with Ignite 2.16 on a small team, and
>>> answers to these questions will help us choose whether to use Ignite, which
>>> features to rely on, and how soon we might plan for a migration after our
>>> initial work.
>>>
>>> Thanks,
>>> Raymond
>>>
>>


Re: Ignite .NET + CDC replication using Kafka

2024-06-17 Thread Nikolay Izhikov
CDC is not specific to Ignite.NET.

It captures changes that happen at the storage (WAL) layer of Ignite.
So all you need is to follow the examples from the documentation and configure
CDC replication.
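For reference, WAL-level capture is switched on per data region in the node configuration. A minimal Spring XML sketch; the ignite-cdc consumer process still has to be set up separately, as described in the docs:

```xml
<property name="dataStorageConfiguration">
  <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="defaultDataRegionConfiguration">
      <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
        <!-- CDC requires persistence; cdcEnabled makes WAL records available to ignite-cdc -->
        <property name="persistenceEnabled" value="true"/>
        <property name="cdcEnabled" value="true"/>
      </bean>
    </property>
  </bean>
</property>
```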

> On 12 June 2024, at 14:02, Pavel Tupitsyn  wrote:
> 
> Can you  share some  working  examples  to  capture  the changes  happening  
> to  cache realtime using  .NET?
> 



Re: Ignite TryGet - cache data not found intermittently

2024-06-13 Thread Charlin S
Hi Slava Koptilin,
Thank you for your email. I am working on these changes; I'll keep you
posted.

Regards,
Charlin


On Thu, 13 Jun 2024 at 12:09, Вячеслав Коптилин 
wrote:

> Hello Charlin,
>
> As I wrote, the first option is the `full sync` mode:
> CacheConfiguration.WriteSynchronizationMode =
> CacheWriteSynchronizationMode.FullSync [1]
> The second one is disabling reading from backups:
> CacheConfiguration.ReadFromBackup = false [2]
>
> [1]
> https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Cache.Configuration.CacheConfiguration.html#Apache_Ignite_Core_Cache_Configuration_CacheConfiguration_WriteSynchronizationMode
> [2]
> https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Cache.Configuration.CacheConfiguration.html#Apache_Ignite_Core_Cache_Configuration_CacheConfiguration_ReadFromBackup
>
> Thanks,
> S.
>
>
> On Wed, 12 Jun 2024 at 12:58, Charlin S  wrote:
>
>> Hi  Slava Koptilin
>>
>> Thanks for your email.
>>
>> The cache configuration below is used at cache-creation time in C# code.
>> Please suggest any configuration changes required at the cache or grid
>> level:
>> CacheConfiguration.CopyOnRead=false
>> CacheConfiguration.EagerTtl=true
>> CacheConfiguration.CacheMode = CacheMode.Partitioned
>> CacheConfiguration.Backups = 1
>>
>> *Client node xml bean*
>>
>> 
>> http://www.springframework.org/schema/beans";
>>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>>xmlns:util="http://www.springframework.org/schema/util";
>>xsi:schemaLocation="http://www.springframework.org/schema/beans
>>
>> http://www.springframework.org/schema/beans/spring-beans.xsd
>>http://www.springframework.org/schema/util
>>
>> http://www.springframework.org/schema/util/spring-util.xsd";>
>>   
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>   
>> 
>> 
>>   > class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>> 
>>   
>> 1.0.0.1:55500
>>
>> 1.0.0.2:55500
>>   
>> 
>>   
>> 
>>   
>> 
>> 
>> > class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>>
>> 
>> 
>> 
>>   
>> 
>>
>> *Server node xml bean*
>>
>> 
>> http://www.springframework.org/schema/beans";
>>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>>xmlns:util="http://www.springframework.org/schema/util";
>>xsi:schemaLocation="http://www.springframework.org/schema/beans
>>
>> http://www.springframework.org/schema/beans/spring-beans.xsd
>>http://www.springframework.org/schema/util
>>
>> http://www.springframework.org/schema/util/spring-util.xsd";>
>>   > class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
>> 
>> > factory-method="factoryOf">
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> 
>>   
>>   
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>   
>> 
>> 
>>   > class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>> 
>>   
>> 1.0.0.1:55500
>> 1.0.0.2:55500
>>   
>> 
>>   
>> 
>>   
>> 
>>
>> 
>> 
>> > value="TestModel1"/>
>> 
>> 
>> > value="TestModel2"/>
>> 
>> 
>> > value="TestModel3"/>
>> 
>> 
>> > value="TestModel4"/>
>> 
>> 
>> > value="TestModel5"/>
>> 
>> 
>> 
>> 
>> > class="org.apache.ignite.configuration.DataStorageConfiguration">
>> 
>> > class="org.apache.ignite.configuration.DataRegionConfiguration">
>> > value="Common_Dynamic_Data_Region"/>
>> 
>> 
>> > value="RANDOM_2_LRU"/>
>> 
>> > value="65536"/>
>> 
>> 
>> 
>> 
>> 
>> > class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>> 
>> 
>> 
>> 
>>   
>> 
>>
>> Thanks & Regards,
>> Charlin
>>
>>
>>
>> On Wed, 12 Jun 2024 at 13:07

Re: Ignite TryGet - cache data not found intermittently

2024-06-12 Thread Вячеслав Коптилин
Hello Charlin,

As I wrote, the first option is the `full sync` mode:
CacheConfiguration.WriteSynchronizationMode =
CacheWriteSynchronizationMode.FullSync [1]
The second one is disabling reading from backups:
CacheConfiguration.ReadFromBackup = false [2]

[1]
https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Cache.Configuration.CacheConfiguration.html#Apache_Ignite_Core_Cache_Configuration_CacheConfiguration_WriteSynchronizationMode
[2]
https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Cache.Configuration.CacheConfiguration.html#Apache_Ignite_Core_Cache_Configuration_CacheConfiguration_ReadFromBackup

Thanks,
S.
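In Spring XML terms, those two options map onto cache-configuration properties like the following (the cache name is illustrative, and only one of the two settings is usually needed):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
  <property name="name" value="myCache"/>
  <!-- Option 1: wait for backups to be written before completing the update -->
  <property name="writeSynchronizationMode" value="FULL_SYNC"/>
  <!-- Option 2: always serve reads from the primary copy -->
  <property name="readFromBackup" value="false"/>
</bean>
```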


On Wed, 12 Jun 2024 at 12:58, Charlin S  wrote:

> Hi  Slava Koptilin
>
> Thanks for your email.
>
> The cache configuration below is used at cache-creation time in C# code.
> Please suggest any configuration changes required at the cache or grid
> level:
> CacheConfiguration.CopyOnRead=false
> CacheConfiguration.EagerTtl=true
> CacheConfiguration.CacheMode = CacheMode.Partitioned
> CacheConfiguration.Backups = 1
>
> *Client node xml bean*
>
> 
> http://www.springframework.org/schema/beans";
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>xmlns:util="http://www.springframework.org/schema/util";
>xsi:schemaLocation="http://www.springframework.org/schema/beans
>
> http://www.springframework.org/schema/beans/spring-beans.xsd
>http://www.springframework.org/schema/util
>
> http://www.springframework.org/schema/util/spring-util.xsd";>
>   
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>   
> 
> 
>class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
> 
>   
> 1.0.0.1:55500
>
> 1.0.0.2:55500
>   
> 
>   
> 
>   
> 
> 
>  class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
>
> 
> 
> 
>   
> 
>
> *Server node xml bean*
>
> 
> http://www.springframework.org/schema/beans";
>xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
>xmlns:util="http://www.springframework.org/schema/util";
>xsi:schemaLocation="http://www.springframework.org/schema/beans
>
> http://www.springframework.org/schema/beans/spring-beans.xsd
>http://www.springframework.org/schema/util
>
> http://www.springframework.org/schema/util/spring-util.xsd";>
>class="org.apache.ignite.configuration.CacheConfiguration" abstract="true">
> 
>  factory-method="factoryOf">
> 
> 
> 
> 
> 
> 
>
> 
>   
>   
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
>   
> 
> 
>class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
> 
>   
> 1.0.0.1:55500
> 1.0.0.2:55500
>   
> 
>   
> 
>   
> 
>
> 
> 
>  value="TestModel1"/>
> 
> 
>  value="TestModel2"/>
> 
> 
>  value="TestModel3"/>
> 
> 
>  value="TestModel4"/>
> 
> 
>  value="TestModel5"/>
> 
> 
> 
> 
>  class="org.apache.ignite.configuration.DataStorageConfiguration">
> 
>  class="org.apache.ignite.configuration.DataRegionConfiguration">
>  value="Common_Dynamic_Data_Region"/>
> 
> 
>  value="RANDOM_2_LRU"/>
> 
> 
> 
> 
> 
> 
> 
>  class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
> 
> 
> 
> 
>   
> 
>
> Thanks & Regards,
> Charlin
>
>
>
> On Wed, 12 Jun 2024 at 13:07, Вячеслав Коптилин 
> wrote:
>
>> Hi Charlin,
>>
>> I mean that it might be "well-known" behavior if you use `primary sync`
>> mode and the `readFromBackup` property equals `true` (which is `true` by
>> default).
>>
>> The first option, to overcome this problem, is using `full sync` mode. In
>> that case, the update request will wait for the write to complete on all
>> participating nod

Re: Ignite .NET + CDC replication using Kafka

2024-06-12 Thread Pavel Tupitsyn
> Is this feature available in Ignite .NET? Can you share a sample example?

Yes, Continuous Query is available in Ignite.NET, sample code:
https://github.com/apache/ignite/blob/master/modules/platforms/dotnet/examples/Thick/Cache/QueryContinuous/Program.cs


RE: Ignite .NET + CDC replication using Kafka

2024-06-12 Thread satyajit.mandal.barclays.com via user
Thanks Pavel.

Is this feature available in Ignite .NET? Can you share a sample example?

https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Cache.Affinity.Rendezvous.ClusterNodeAttributeAffinityBackupFilter.html

Regards
Satyajit


From: Pavel Tupitsyn 
Sent: Wednesday, June 12, 2024 4:39 PM
To: Mandal, Satyajit: IT (PUN) 
Subject: Re: Ignite .NET + CDC replication using Kafka


Some vendors also provide cluster replication with native .NET APIs based on 
Ignite:

https://www.gridgain.com/docs/latest/administrators-guide/data-center-replication/configuring-replication

On Wed, Jun 12, 2024 at 2:02 PM Pavel Tupitsyn <ptupit...@apache.org> wrote:
Hi Satyajit,

The CDC extension is not available directly in Ignite.NET; you can use a Compute
job [1] or a Service [2] implemented in Java to access this functionality.

Alternatively, Continuous Query [3] provides a way to capture all changes in a 
specific cache. Example: [4]

[1]
https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Compute.ICompute.html#Apache_Ignite_Core_Compute_ICompute_ExecuteJavaTask__1_System_String_System_Object_
[2]
https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Services.IServices.html
[3]
https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Cache.Query.Continuous.ContinuousQuery-2.html
[4]
https://github.com/apache/ignite/blob/master/modules/platforms/dotnet/examples/Thick/Cache/QueryContinuous/Program.cs

On Wed, Jun 12, 2024 at 9:52 AM <satyajit.man...@barclays.com> wrote:
Hi Pavel,

How can we achieve replication from an Ignite .NET cluster to another Ignite
.NET cluster as a backup, using Kafka as middleware? This is needed for a
resiliency test of the Ignite cluster, where we shut down one cluster and
move all application services to the backup cluster as part of resiliency
testing.

Basically, we need to know which Ignite features support capturing changes in
caches, so that we can write programs to capture those changes and stream
them to Kafka using .NET libraries.

Found this doc, but not sure how Ignite .NET fits into this (CDC replication
using Kafka):
https://ignite.apache.org/docs/latest/extensions-and-integrations/change-data-capture-extensions

Can you share some working examples to capture the changes happening to a
cache in real time using .NET?

Thanks
Satyajit




Re: Ignite .NET + CDC replication using Kafka

2024-06-12 Thread Pavel Tupitsyn
Hi Satyajit,

The CDC extension is not available directly in Ignite.NET; you can use a
Compute job [1] or a Service [2] implemented in Java to access this
functionality.

Alternatively, Continuous Query [3] provides a way to capture all changes
in a specific cache. Example: [4]

[1]
https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Compute.ICompute.html#Apache_Ignite_Core_Compute_ICompute_ExecuteJavaTask__1_System_String_System_Object_
[2]
https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Services.IServices.html
[3]
https://ignite.apache.org/releases/latest/dotnetdoc/api/Apache.Ignite.Core.Cache.Query.Continuous.ContinuousQuery-2.html
[4]
https://github.com/apache/ignite/blob/master/modules/platforms/dotnet/examples/Thick/Cache/QueryContinuous/Program.cs

On Wed, Jun 12, 2024 at 9:52 AM  wrote:

> Hi  Pavel,
>
>
>
> How  can  we achieve  replication  from  Ignite .NET  cluster  to
> another  Ignite .NET cluster  as  backup using  Kafka  as  middleware?
> This  is  needed  for resiliency  test  for Ignite cluster  where we
> shutdown  one cluster  and  move  all  application  service  to  backup
> cluster  as  part  of  resiliency  testing.
>
>
>
> Basically  need  to  know  which  Ignite features supports to  capture
> changes  in  caches  so  that we can write  programs  to  capture  those
> changes and  stream  those  to  Kafka  using  .NET libraries.
>
>
>
> Found  this  doc  but  not sure how  Ignite .NET fits  into  this ( CDC
> replication  using  Kafka)
>
>
> https://ignite.apache.org/docs/latest/extensions-and-integrations/change-data-capture-extensions
>
>
>
> Can you share some working examples of capturing the changes
> happening to a cache in real time using .NET?
>
>
>
> Thanks
>
> Satyajit
>
>
>
>
>
>


Re: Ignite TryGet - cache data not found intermittently

2024-06-12 Thread Charlin S
Hi Slava Koptilin,

Thanks for your email.

The cache configuration used at cache-creation time in the C# code is
below. Please suggest any configuration changes required at the cache or
grid level:
CacheConfiguration.CopyOnRead=false
CacheConfiguration.EagerTtl=true
CacheConfiguration.CacheMode = CacheMode.Partitioned
CacheConfiguration.Backups = 1

*Client node xml bean*


http://www.springframework.org/schema/beans";
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
   xmlns:util="http://www.springframework.org/schema/util";
   xsi:schemaLocation="http://www.springframework.org/schema/beans

http://www.springframework.org/schema/beans/spring-beans.xsd
   http://www.springframework.org/schema/util

http://www.springframework.org/schema/util/spring-util.xsd";>
  










  


  

  
1.0.0.1:55500

1.0.0.2:55500
  

  

  



   



  


*Server node xml bean*


http://www.springframework.org/schema/beans";
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
   xmlns:util="http://www.springframework.org/schema/util";
   xsi:schemaLocation="http://www.springframework.org/schema/beans

http://www.springframework.org/schema/beans/spring-beans.xsd
   http://www.springframework.org/schema/util

http://www.springframework.org/schema/util/spring-util.xsd";>
  








   

  
  










  


  

  
1.0.0.1:55500
1.0.0.2:55500
  

  

  

   
Thanks & Regards,
Charlin



On Wed, 12 Jun 2024 at 13:07, Вячеслав Коптилин 
wrote:

> Hi Charlin,
>
> I mean that it might be "well-known" behavior if you use `primary sync`
> mode and the `readFromBackup` property equals `true` (which is `true` by
> default).
>
> The first option, to overcome this problem, is using `full sync` mode. In
> that case, the update request will wait for the write to complete on all
> participating nodes (primary and backups).
> The second option, that can be used here, is to use 'primary sync' and set
> 'CacheConfiguration#readFromBackup' flag to false. Ignite will always send
> the request to the primary node and get the value from there.
>
> Thanks,
> S.
>
> Mon, Jun 10, 2024 at 14:22, Вячеслав Коптилин :
>
>> Hello Charlin,
>>
>> Could you share your cache configuration? Specifically, what values are
>> used for `readFromBackup` and `writeSynchronizationMode`.
>>
>> Thanks,
>> S.
>>
>> Wed, Jun 5, 2024 at 15:49, Charlin S :
>>
>>> Hi All,
>>> I am unable to fetch data from the cache by reading by key,
>>> intermittently (very rarely).
>>>
>>> Ignite version: 2.10
>>> Cache mode: Partition
>>> Client : C# with Ignite thick client
>>>
>>> Scenario:
>>> My C# application received a request for cache data insertion @ 09:09:35
>>> and successfully insertion initiated at application side.
>>> Thereafter @ 09:10:21 C# application received a request to read cache
>>> data for the same key and Ignite TryGet could not fetch data.
>>> Note: We are able to get cache data by the same key after some time.
>>>
>>> Cache creation code
>>> var IgniteCache= IIgnite.GetCache("cacheModel")
>>> .WithExpiryPolicy(new ExpiryPolicy(
>>>  TimeSpan.FromMinutes(60),
>>>  TimeSpan.FromMinutes(60),
>>>  TimeSpan.FromMinutes(60)
>>>  ));
>>>
>>> Cache data insertion code
>>> IgniteCache.Put(cacheKey, (T)data);
>>>
>>> Cache data reading code
>>>   IgniteCache.TryGet(Key, out var value);
>>>
>>> Thanks & Regards,
>>> Charlin
>>>
>>>
>>>
>>>


Re: Ignite TryGet - cache data not found intermittently

2024-06-12 Thread Вячеслав Коптилин
Hi Charlin,

I mean that it might be "well-known" behavior if you use `primary sync`
mode and the `readFromBackup` property equals `true` (which is `true` by
default).

The first option, to overcome this problem, is using `full sync` mode. In
that case, the update request will wait for the write to complete on all
participating nodes (primary and backups).
The second option, that can be used here, is to use 'primary sync' and set
'CacheConfiguration#readFromBackup' flag to false. Ignite will always send
the request to the primary node and get the value from there.

Thanks,
S.
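As a rough illustration of why `primary sync` plus `readFromBackup=true` can briefly miss a just-written value, here is a self-contained toy model (invented names, not Ignite code): the put returns as soon as the primary has applied it, the backup applies it later, and a read served from the backup in the window in between sees the old state.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Toy model of PRIMARY_SYNC replication: writes are acknowledged once the
// primary applies them; the backup applies them later from a queue.
public class PrimarySyncModel {
    private final Map<String, String> primary = new HashMap<>();
    private final Map<String, String> backup = new HashMap<>();
    private final Deque<String[]> pending = new ArrayDeque<>();

    /** PRIMARY_SYNC put: returns as soon as the primary has the value. */
    public void put(String key, String value) {
        primary.put(key, value);
        pending.add(new String[] {key, value}); // backup applies this later
    }

    /** Read served from the backup copy (readFromBackup=true). */
    public String readFromBackup(String key) {
        return backup.get(key);
    }

    /** Read served from the primary copy (readFromBackup=false). */
    public String readFromPrimary(String key) {
        return primary.get(key);
    }

    /** Simulate the backup eventually catching up (or FULL_SYNC semantics). */
    public void drainReplication() {
        while (!pending.isEmpty()) {
            String[] op = pending.poll();
            backup.put(op[0], op[1]);
        }
    }

    public static void main(String[] args) {
        PrimarySyncModel cache = new PrimarySyncModel();
        cache.put("k", "v1");
        // Backup has not applied the write yet: this read misses.
        System.out.println(cache.readFromBackup("k"));  // null
        System.out.println(cache.readFromPrimary("k")); // v1
        cache.drainReplication();
        System.out.println(cache.readFromBackup("k"));  // v1
    }
}
```

`FULL_SYNC` corresponds to draining the replication queue before the put returns; `readFromBackup=false` corresponds to always calling `readFromPrimary`.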

Mon, Jun 10, 2024 at 14:22, Вячеслав Коптилин :

> Hello Charlin,
>
> Could you share your cache configuration? Specifically, what values are
> used for `readFromBackup` and `writeSynchronizationMode`.
>
> Thanks,
> S.
>
> Wed, Jun 5, 2024 at 15:49, Charlin S :
>
>> Hi All,
>> I am unable to fetch data from the cache by reading by key,
>> intermittently (very rarely).
>>
>> Ignite version: 2.10
>> Cache mode: Partition
>> Client : C# with Ignite thick client
>>
>> Scenario:
>> My C# application received a request for cache data insertion @ 09:09:35
>> and successfully insertion initiated at application side.
>> Thereafter @ 09:10:21 C# application received a request to read cache
>> data for the same key and Ignite TryGet could not fetch data.
>> Note: We are able to get cache data by the same key after some time.
>>
>> Cache creation code
>> var IgniteCache= IIgnite.GetCache("cacheModel")
>> .WithExpiryPolicy(new ExpiryPolicy(
>>  TimeSpan.FromMinutes(60),
>>  TimeSpan.FromMinutes(60),
>>  TimeSpan.FromMinutes(60)
>>  ));
>>
>> Cache data insertion code
>> IgniteCache.Put(cacheKey, (T)data);
>>
>> Cache data reading code
>>   IgniteCache.TryGet(Key, out var value);
>>
>> Thanks & Regards,
>> Charlin
>>
>>
>>
>>


Re: Ignite TryGet - cache data not found intermittently

2024-06-10 Thread Вячеслав Коптилин
Hello Charlin,

Could you share your cache configuration? Specifically, what values are
used for `readFromBackup` and `writeSynchronizationMode`.

Thanks,
S.

Wed, Jun 5, 2024 at 15:49, Charlin S :

> Hi All,
> I am unable to fetch data from the cache by reading by key,
> intermittently (very rarely).
>
> Ignite version: 2.10
> Cache mode: Partition
> Client : C# with Ignite thick client
>
> Scenario:
> My C# application received a request for cache data insertion @ 09:09:35
> and successfully insertion initiated at application side.
> Thereafter @ 09:10:21 C# application received a request to read cache data
> for the same key and Ignite TryGet could not fetch data.
> Note: We are able to get cache data by the same key after some time.
>
> Cache creation code
> var IgniteCache= IIgnite.GetCache("cacheModel")
> .WithExpiryPolicy(new ExpiryPolicy(
>  TimeSpan.FromMinutes(60),
>  TimeSpan.FromMinutes(60),
>  TimeSpan.FromMinutes(60)
>  ));
>
> Cache data insertion code
> IgniteCache.Put(cacheKey, (T)data);
>
> Cache data reading code
>   IgniteCache.TryGet(Key, out var value);
>
> Thanks & Regards,
> Charlin
>
>
>
>


Re: Ignite TryGet - cache data not found intermittently

2024-06-05 Thread Charlin S
Hi Pavel,
Thank you for your email.
> Perhaps something else is involved here, another thread or node changing
the value?
I checked the application log, and no other thread performed any other
operation on this particular data during the 45-second gap. There is no
chance the cached value was changed, since the same data can be read
again after some time. I will keep you posted on the small reproducer
project.

Thanks & Regards,
Charlin



On Thu, 6 Jun 2024 at 11:11, Pavel Tupitsyn  wrote:

> > Put and tryget are in two different methods called one after another
>
> That would be a very apparent bug in Ignite, and I don't see anything like
> that in JIRA or release notes.
> Perhaps something else is involved here, another thread or node changing
> the value?
>
> Please prepare a small project that we can run to reproduce the issue.
>
> On Wed, Jun 5, 2024 at 9:18 PM Charlin S  wrote:
>
>> Hi,
>> Put and TryGet are in two different methods, called one after the
>> other with some time gap, based on the workflow process.
>>
>> We felt Ignite 2.10 was okay for us.
>>
>> Thanks and Regards
>> Charlin
>>
>> On Wed, 5 Jun, 2024, 8:23 pm Pavel Tupitsyn, 
>> wrote:
>>
>>> - Do you run Put and TryGet on the same node?
>>> - Do you have a reproducer?
>>> - Ignite 2.10 was released 3 years ago, have you tried a newer version?
>>>
>>> On Wed, Jun 5, 2024 at 3:49 PM Charlin S  wrote:
>>>
 Hi All,
 I am unable to fetch data from the cache by reading by key,
 intermittently (very rarely).

 Ignite version: 2.10
 Cache mode: Partition
 Client : C# with Ignite thick client

 Scenario:
 My C# application received a request for cache data insertion
 @ 09:09:35 and successfully insertion initiated at application side.
 Thereafter @ 09:10:21 C# application received a request to read cache
 data for the same key and Ignite TryGet could not fetch data.
 Note: We are able to get cache data by the same key after some time.

 Cache creation code
 var IgniteCache= IIgnite.GetCache("cacheModel")
 .WithExpiryPolicy(new ExpiryPolicy(
  TimeSpan.FromMinutes(60),
  TimeSpan.FromMinutes(60),
  TimeSpan.FromMinutes(60)
  ));

 Cache data insertion code
 IgniteCache.Put(cacheKey, (T)data);

 Cache data reading code
   IgniteCache.TryGet(Key, out var value);

 Thanks & Regards,
 Charlin






Re: Ignite TryGet - cache data not found intermittently

2024-06-05 Thread Pavel Tupitsyn
> Put and tryget are in two different methods called one after another

That would be a very apparent bug in Ignite, and I don't see anything like
that in JIRA or release notes.
Perhaps something else is involved here, another thread or node changing
the value?

Please prepare a small project that we can run to reproduce the issue.

On Wed, Jun 5, 2024 at 9:18 PM Charlin S  wrote:

> Hi,
> Put and TryGet are in two different methods, called one after the
> other with some time gap, based on the workflow process.
>
> We felt Ignite 2.10 was okay for us.
>
> Thanks and Regards
> Charlin
>
> On Wed, 5 Jun, 2024, 8:23 pm Pavel Tupitsyn,  wrote:
>
>> - Do you run Put and TryGet on the same node?
>> - Do you have a reproducer?
>> - Ignite 2.10 was released 3 years ago, have you tried a newer version?
>>
>> On Wed, Jun 5, 2024 at 3:49 PM Charlin S  wrote:
>>
>>> Hi All,
>>> I am unable to fetch data from the cache by reading by key,
>>> intermittently (very rarely).
>>>
>>> Ignite version: 2.10
>>> Cache mode: Partition
>>> Client : C# with Ignite thick client
>>>
>>> Scenario:
>>> My C# application received a request for cache data insertion @ 09:09:35
>>> and successfully insertion initiated at application side.
>>> Thereafter @ 09:10:21 C# application received a request to read cache
>>> data for the same key and Ignite TryGet could not fetch data.
>>> Note: We are able to get cache data by the same key after some time.
>>>
>>> Cache creation code
>>> var IgniteCache= IIgnite.GetCache("cacheModel")
>>> .WithExpiryPolicy(new ExpiryPolicy(
>>>  TimeSpan.FromMinutes(60),
>>>  TimeSpan.FromMinutes(60),
>>>  TimeSpan.FromMinutes(60)
>>>  ));
>>>
>>> Cache data insertion code
>>> IgniteCache.Put(cacheKey, (T)data);
>>>
>>> Cache data reading code
>>>   IgniteCache.TryGet(Key, out var value);
>>>
>>> Thanks & Regards,
>>> Charlin
>>>
>>>
>>>
>>>


Re: Ignite TryGet - cache data not found intermittently

2024-06-05 Thread Charlin S
Hi,
Put and TryGet are in two different methods, called one after the
other with some time gap, based on the workflow process.

We felt Ignite 2.10 was okay for us.

Thanks and Regards
Charlin

On Wed, 5 Jun, 2024, 8:23 pm Pavel Tupitsyn,  wrote:

> - Do you run Put and TryGet on the same node?
> - Do you have a reproducer?
> - Ignite 2.10 was released 3 years ago, have you tried a newer version?
>
> On Wed, Jun 5, 2024 at 3:49 PM Charlin S  wrote:
>
>> Hi All,
>> I am unable to fetch data from the cache by reading by key,
>> intermittently (very rarely).
>>
>> Ignite version: 2.10
>> Cache mode: Partition
>> Client : C# with Ignite thick client
>>
>> Scenario:
>> My C# application received a request for cache data insertion @ 09:09:35
>> and successfully insertion initiated at application side.
>> Thereafter @ 09:10:21 C# application received a request to read cache
>> data for the same key and Ignite TryGet could not fetch data.
>> Note: We are able to get cache data by the same key after some time.
>>
>> Cache creation code
>> var IgniteCache= IIgnite.GetCache("cacheModel")
>> .WithExpiryPolicy(new ExpiryPolicy(
>>  TimeSpan.FromMinutes(60),
>>  TimeSpan.FromMinutes(60),
>>  TimeSpan.FromMinutes(60)
>>  ));
>>
>> Cache data insertion code
>> IgniteCache.Put(cacheKey, (T)data);
>>
>> Cache data reading code
>>   IgniteCache.TryGet(Key, out var value);
>>
>> Thanks & Regards,
>> Charlin
>>
>>
>>
>>


Re: Ignite TryGet - cache data not found intermittently

2024-06-05 Thread Pavel Tupitsyn
- Do you run Put and TryGet on the same node?
- Do you have a reproducer?
- Ignite 2.10 was released 3 years ago, have you tried a newer version?

On Wed, Jun 5, 2024 at 3:49 PM Charlin S  wrote:

> Hi All,
> I am unable to fetch data from the cache by reading by key,
> intermittently (very rarely).
>
> Ignite version: 2.10
> Cache mode: Partition
> Client : C# with Ignite thick client
>
> Scenario:
> My C# application received a request for cache data insertion @ 09:09:35
> and successfully insertion initiated at application side.
> Thereafter @ 09:10:21 C# application received a request to read cache data
> for the same key and Ignite TryGet could not fetch data.
> Note: We are able to get cache data by the same key after some time.
>
> Cache creation code
> var IgniteCache= IIgnite.GetCache("cacheModel")
> .WithExpiryPolicy(new ExpiryPolicy(
>  TimeSpan.FromMinutes(60),
>  TimeSpan.FromMinutes(60),
>  TimeSpan.FromMinutes(60)
>  ));
>
> Cache data insertion code
> IgniteCache.Put(cacheKey, (T)data);
>
> Cache data reading code
>   IgniteCache.TryGet(Key, out var value);
>
> Thanks & Regards,
> Charlin
>
>
>
>


Re: Inquiry about Data Storage in ignite3: RockDB Usage and Data Preloading

2024-06-04 Thread Pavel Tupitsyn
Hello,

- Ignite 3 has native storage that does not rely on any third-party
implementations, similar to Ignite 2.
- RocksDB is a secondary, optional storage implementation.
- You don't need to preload data like in Ignite 2 (neither for native
storage nor for RocksDB)

On Tue, Jun 4, 2024 at 12:33 PM 常鑫  wrote:

> Hello,
> I am trying out ignite3 and have a few questions about RocksDB.
> My specific questions are as follows:
> 1. Is the usage of RocksDB in Ignite3 equivalent to using Ignite3's native
> storage, or are there any significant differences in terms of performance,
> maintenance, or functionality?
> 2. If I were to use RocksDB, would I need to preload data before it can be
> effectively utilized by Ignite3? (Like loadCache() in Ignite2)
> 3. Are there any specific use cases?
> Your expertise in this area would be greatly appreciated.
> Thanks,
> Xin Chang
>
>


Re: Will Apache Ignite 3.0 be compliant with JSR107 spec?

2024-06-04 Thread Stephen Darlington
The way that clients operate is quite different in AI3. You can't assume
the same thick/thin distinction.

There are two answers to your question:

1. Ignite 2 will continue to be available. Version 3 is a big update and
it's unlikely that everyone will move over on day 1. So if you like AI2,
it's safe to use it
2. Ignite 3 is currently in beta, so no one can give you definitive
answers. However, the goal is that AI3 will support the same (and more!)
workloads as 2, although the way you achieve the functionality may differ


On Mon, 3 Jun 2024 at 22:43, Amit Jolly  wrote:

> Hi Pavel,
>
> Thanks for the quick response.
>
> We are currently evaluating Apache Ignite for our project and are planning
> to use features like *CacheEntryProcessor*, *Thick clients with Near
> Cache *(Looks like 3.0 will only have thin clients and near cache is only
> supported in thick clients) and *Continues query cache*, while going
> through the code of 3.0, I could not find/validate whether these features
> will be supported in 3.0 or not.
>
> Is there any matrix which explains/shows feature to feature comparison
> between 2.X and 3.0?
>
> Thanks,
>
> Amit Jolly
>
> On Mon, Jun 3, 2024 at 1:41 AM Pavel Tupitsyn 
> wrote:
>
>> Amit, unfortunately, I don't have answers at the moment.
>>
>> I think a JSR107 wrapper can be developed on top of existing Ignite 3
>> APIs (Table + Compute), including CacheEntryProcessor support, but we don't
>> have specific plans for now.
>>
>> On Fri, May 31, 2024 at 4:34 PM Amit Jolly  wrote:
>>
>>> Hi Pavel,
>>>
>>> Thanks for the quick response.
>>>
>>> I had looked at the ignite-3 github repo and could not find any
>>> reference to JSR 107, hence asked this question.
>>>
>>> Since Ignite 2.x is fully JSR 107 compliant, the question now is: if
>>> Ignite 3 is going to be the successor of Ignite 2.x and will replace it
>>> in the future, will Ignite 3 be JSR 107 compliant as well? If yes, is
>>> there a timeline for when Ignite 3 will become JSR 107 compliant? If
>>> not, what will be the migration strategy for current Ignite 2.x users
>>> who rely on the features listed in JSR 107?
>>>
>>> Thanks,
>>>
>>> Amit Jolly
>>>
>>>
>>>
>>>
>>>
>>> On Fri, May 31, 2024 at 12:08 AM Pavel Tupitsyn 
>>> wrote:
>>>
 For now it does not have any of that. KeyValueView [1] is a table
 access interface in Ignite 3 that is most similar to a "cache".


 https://github.com/apache/ignite-3/blob/main/modules/api/src/main/java/org/apache/ignite/table/KeyValueView.java

 On Thu, May 30, 2024 at 6:19 PM Amit Jolly 
 wrote:

> Hi,
>
> Will Apache Ignite 3.0 be compliant with JSR107 spec?
>
> In particular, I am looking at the feature CacheEntryProcessor support
> in Ignite 3.0
>
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#invoke-K-org.apache.ignite.cache.CacheEntryProcessor-java.lang.Object...-
>
>
> https://www.javadoc.io/doc/javax.cache/cache-api/latest/javax/cache/processor/EntryProcessor.html
>
> Thanks,
>
> Amit Jolly
>



Re: Will Apache Ignite 3.0 be compliant with JSR107 spec?

2024-06-03 Thread Amit Jolly
Hi Pavel,

Thanks for the quick response.

We are currently evaluating Apache Ignite for our project and are planning
to use features like *CacheEntryProcessor*, *Thick clients with Near
Cache *(Looks
like 3.0 will only have thin clients and near cache is only supported in
thick clients) and *Continuous Queries*, while going through the code of
3.0, I could not find/validate whether these features will be supported in
3.0 or not.

Is there any matrix which explains/shows feature to feature comparison
between 2.X and 3.0?

Thanks,

Amit Jolly

On Mon, Jun 3, 2024 at 1:41 AM Pavel Tupitsyn  wrote:

> Amit, unfortunately, I don't have answers at the moment.
>
> I think a JSR107 wrapper can be developed on top of existing Ignite 3 APIs
> (Table + Compute), including CacheEntryProcessor support, but we don't have
> specific plans for now.
>
> On Fri, May 31, 2024 at 4:34 PM Amit Jolly  wrote:
>
>> Hi Pavel,
>>
>> Thanks for the quick response.
>>
>> I had looked at the ignite-3 github repo and could not find any reference
>> to JSR 107, hence asked this question.
>>
>> Since Ignite 2.x is fully JSR 107 compliant, the question now is: if
>> Ignite 3 is going to be the successor of Ignite 2.x and will replace it
>> in the future, will Ignite 3 be JSR 107 compliant as well? If yes, is
>> there a timeline for when Ignite 3 will become JSR 107 compliant? If
>> not, what will be the migration strategy for current Ignite 2.x users
>> who rely on the features listed in JSR 107?
>>
>> Thanks,
>>
>> Amit Jolly
>>
>>
>>
>>
>>
>> On Fri, May 31, 2024 at 12:08 AM Pavel Tupitsyn 
>> wrote:
>>
>>> For now it does not have any of that. KeyValueView [1] is a table access
>>> interface in Ignite 3 that is most similar to a "cache".
>>>
>>>
>>> https://github.com/apache/ignite-3/blob/main/modules/api/src/main/java/org/apache/ignite/table/KeyValueView.java
>>>
>>> On Thu, May 30, 2024 at 6:19 PM Amit Jolly  wrote:
>>>
 Hi,

 Will Apache Ignite 3.0 be compliant with JSR107 spec?

 In particular, I am looking at the feature CacheEntryProcessor support
 in Ignite 3.0


 https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#invoke-K-org.apache.ignite.cache.CacheEntryProcessor-java.lang.Object...-


 https://www.javadoc.io/doc/javax.cache/cache-api/latest/javax/cache/processor/EntryProcessor.html

 Thanks,

 Amit Jolly

>>>


Re: Will Apache Ignite 3.0 be compliant with JSR107 spec?

2024-06-02 Thread Pavel Tupitsyn
Amit, unfortunately, I don't have answers at the moment.

I think a JSR107 wrapper can be developed on top of existing Ignite 3 APIs
(Table + Compute), including CacheEntryProcessor support, but we don't have
specific plans for now.

On Fri, May 31, 2024 at 4:34 PM Amit Jolly  wrote:

> Hi Pavel,
>
> Thanks for the quick response.
>
> I had looked at the ignite-3 github repo and could not find any reference
> to JSR 107, hence asked this question.
>
> Since Ignite 2.x is fully JSR 107 compliant, the question now is: if
> Ignite 3 is going to be the successor of Ignite 2.x and will replace it
> in the future, will Ignite 3 be JSR 107 compliant as well? If yes, is
> there a timeline for when Ignite 3 will become JSR 107 compliant? If
> not, what will be the migration strategy for current Ignite 2.x users
> who rely on the features listed in JSR 107?
>
> Thanks,
>
> Amit Jolly
>
>
>
>
>
> On Fri, May 31, 2024 at 12:08 AM Pavel Tupitsyn 
> wrote:
>
>> For now it does not have any of that. KeyValueView [1] is a table access
>> interface in Ignite 3 that is most similar to a "cache".
>>
>>
>> https://github.com/apache/ignite-3/blob/main/modules/api/src/main/java/org/apache/ignite/table/KeyValueView.java
>>
>> On Thu, May 30, 2024 at 6:19 PM Amit Jolly  wrote:
>>
>>> Hi,
>>>
>>> Will Apache Ignite 3.0 be compliant with JSR107 spec?
>>>
>>> In particular, I am looking at the feature CacheEntryProcessor support
>>> in Ignite 3.0
>>>
>>>
>>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#invoke-K-org.apache.ignite.cache.CacheEntryProcessor-java.lang.Object...-
>>>
>>>
>>> https://www.javadoc.io/doc/javax.cache/cache-api/latest/javax/cache/processor/EntryProcessor.html
>>>
>>> Thanks,
>>>
>>> Amit Jolly
>>>
>>


Re: Will Apache Ignite 3.0 be compliant with JSR107 spec?

2024-05-31 Thread Amit Jolly
Hi Pavel,

Thanks for the quick response.

I had looked at the ignite-3 github repo and could not find any reference
to JSR 107, hence asked this question.

Since Ignite 2.x is fully JSR 107 compliant, the question now is: if
Ignite 3 is going to be the successor of Ignite 2.x and will replace it
in the future, will Ignite 3 be JSR 107 compliant as well? If yes, is
there a timeline for when Ignite 3 will become JSR 107 compliant? If
not, what will be the migration strategy for current Ignite 2.x users
who rely on the features listed in JSR 107?

Thanks,

Amit Jolly





On Fri, May 31, 2024 at 12:08 AM Pavel Tupitsyn 
wrote:

> For now it does not have any of that. KeyValueView [1] is a table access
> interface in Ignite 3 that is most similar to a "cache".
>
>
> https://github.com/apache/ignite-3/blob/main/modules/api/src/main/java/org/apache/ignite/table/KeyValueView.java
>
> On Thu, May 30, 2024 at 6:19 PM Amit Jolly  wrote:
>
>> Hi,
>>
>> Will Apache Ignite 3.0 be compliant with JSR107 spec?
>>
>> In particular, I am looking at the feature CacheEntryProcessor support in
>> Ignite 3.0
>>
>>
>> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#invoke-K-org.apache.ignite.cache.CacheEntryProcessor-java.lang.Object...-
>>
>>
>> https://www.javadoc.io/doc/javax.cache/cache-api/latest/javax/cache/processor/EntryProcessor.html
>>
>> Thanks,
>>
>> Amit Jolly
>>
>


RE: Best way to update and organize nodes

2024-05-31 Thread Louis C
Thanks for your answer; this is what I was looking for.
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/cache/affinity/rendezvous/ClusterNodeAttributeAffinityBackupFilter.html
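For reference, a minimal Spring XML sketch of wiring that filter (a sketch, assuming each node advertises an `AVAILABILITY_ZONE` user attribute; the cache name and zone value are invented for illustration):

```xml
<!-- Place backups in a different availability zone than the primary.
     ClusterNodeAttributeAffinityBackupFilter compares the named node
     attributes and rejects backup candidates with matching values. -->
<property name="cacheConfiguration">
  <bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="backups" value="1"/>
    <property name="affinity">
      <bean class="org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction">
        <property name="affinityBackupFilter">
          <bean class="org.apache.ignite.cache.affinity.rendezvous.ClusterNodeAttributeAffinityBackupFilter">
            <constructor-arg>
              <array value-type="java.lang.String">
                <value>AVAILABILITY_ZONE</value>
              </array>
            </constructor-arg>
          </bean>
        </property>
      </bean>
    </property>
  </bean>
</property>

<!-- On each node, tag its zone so the filter has something to compare: -->
<property name="userAttributes">
  <map>
    <entry key="AVAILABILITY_ZONE" value="zone-a"/>
  </map>
</property>
```

With this in place, maintenance can proceed one zone at a time, since each partition keeps a copy outside the zone being worked on.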

De : Jeremy McMillan 
Envoyé : jeudi 30 mai 2024 19:33
À : user@ignite.apache.org 
Objet : Re: Best way to update and organize nodes

This could work if you set up availability zones and use backup filters. Then 
you could perform maintenance one entire AZ at a time. When running during 
maintenance, your workload might exceed the capacity of the fraction of server 
nodes remaining up, so beware that.



On Thu, May 30, 2024, 11:30 Louis C  wrote:
Hello everyone,


I had a question that I could not really answer reading the documentation:
Let's say I have a cluster of 10 Ignite server nodes, with one cache with 
persistent data and 2 data backups.

I want to update the different nodes while maintaining the cluster activity 
(answering the clients requests). To do so I can stop gracefully one node, 
update it, and restart it, and then take care of the following nodes in the 
same fashion.
In my understanding, this should ensure that no data is lost and that the 
cluster is still active (is this really the case?).
But this is quite long.

I wanted to know if it was possible to set the different partitions in such a 
way that we know that we can shutdown half (or 1/3) of the nodes in the same 
time, to speed up this process.
I guess it would be as if we have 5 primary nodes and 5 backups nodes, and that 
the 5 backup nodes take over when the 5 primary nodes shut down.


Is such a thing possible?

Best regards,

Louis C.



Re: Will Apache Ignite 3.0 be compliant with JSR107 spec?

2024-05-30 Thread Pavel Tupitsyn
For now it does not have any of that. KeyValueView [1] is a table access
interface in Ignite 3 that is most similar to a "cache".

https://github.com/apache/ignite-3/blob/main/modules/api/src/main/java/org/apache/ignite/table/KeyValueView.java

On Thu, May 30, 2024 at 6:19 PM Amit Jolly  wrote:

> Hi,
>
> Will Apache Ignite 3.0 be compliant with JSR107 spec?
>
> In particular, I am looking at the feature CacheEntryProcessor support in
> Ignite 3.0
>
>
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#invoke-K-org.apache.ignite.cache.CacheEntryProcessor-java.lang.Object...-
>
>
> https://www.javadoc.io/doc/javax.cache/cache-api/latest/javax/cache/processor/EntryProcessor.html
>
> Thanks,
>
> Amit Jolly
>


Re: Best way to update and organize nodes

2024-05-30 Thread Jeremy McMillan
This could work if you set up availability zones and use backup filters.
Then you could perform maintenance one entire AZ at a time. When running
during maintenance, your workload might exceed the capacity of the fraction
of server nodes remaining up, so beware that.



On Thu, May 30, 2024, 11:30 Louis C  wrote:

> Hello everyone,
>
>
> I had a question that I could not really answer reading the documentation:
> Let's say I have a cluster of 10 Ignite server nodes, with one cache with
> persistent data and 2 data backups.
>
> I want to update the different nodes while maintaining the cluster
> activity (answering the clients requests). To do so I can stop gracefully
> one node, update it, and restart it, and then take care of the following
> nodes in the same fashion.
> In my understanding, this should ensure that no data is lost and that the
> cluster is still active (is this really the case?).
> But this is quite long.
>
> I wanted to know if it was possible to set the different partitions in
> such a way that we know that we can shutdown half (or 1/3) of the nodes in
> the same time, to speed up this process.
> I guess it would be as if we have 5 primary nodes and 5 backups nodes, and
> that the 5 backup nodes take over when the 5 primary nodes shut down.
>
>
> Is such a thing possible?
>
> Best regards,
>
> Louis C.
>
>


Re: 2.x timeline and migration to 3.x

2024-05-30 Thread Raymond Liu
Thank you so much, Pavel! That is all great to hear.

On Thu, May 30, 2024 at 9:13 AM Pavel Tupitsyn  wrote:

> > will 2.x continue to be developed for a year or two, will it be on
> critical patch support, or will it no longer be developed at all?
>
> 2.x and 3.x will co-exist for the foreseeable future
>
> > will there be a typical upgrade path from 2.x to 3.x for existing
> clusters
>
> Yes
>
> > Is there a general target timeframe for Ignite 3.x to hit general
> availability
>
> 2024
>
> On Thu, May 30, 2024 at 2:56 PM Raymond Liu  wrote:
>
>> Hello,
>>
>> Does anyone know the plans for Ignite 2.x after 3.0 is released? e.g.
>> will 2.x continue to be developed for a year or two, will it be on critical
>> patch support, or will it no longer be developed at all?
>>
>> And, in the same vein, will there be a typical upgrade path from 2.x to
>> 3.x for existing clusters, or have the internals changed too much for that?
>>
>> Is there a general target timeframe for Ignite 3.x to hit general
>> availability? e.g. 2024-2025 vs. 2027-2029.
>>
>> I'm writing a proof of concept with Ignite 2.16 on a small team, and
>> answers to these questions will help us choose whether to use Ignite, which
>> features to rely on, and how soon we might plan for a migration after our
>> initial work.
>>
>> Thanks,
>> Raymond
>>
>


Re: 2.x timeline and migration to 3.x

2024-05-30 Thread Pavel Tupitsyn
> will 2.x continue to be developed for a year or two, will it be on
critical patch support, or will it no longer be developed at all?

2.x and 3.x will co-exist for the foreseeable future

> will there be a typical upgrade path from 2.x to 3.x for existing clusters

Yes

> Is there a general target timeframe for Ignite 3.x to hit general
availability

2024

On Thu, May 30, 2024 at 2:56 PM Raymond Liu  wrote:

> Hello,
>
> Does anyone know the plans for Ignite 2.x after 3.0 is released? e.g. will
> 2.x continue to be developed for a year or two, will it be on critical
> patch support, or will it no longer be developed at all?
>
> And, in the same vein, will there be a typical upgrade path from 2.x to
> 3.x for existing clusters, or have the internals changed too much for that?
>
> Is there a general target timeframe for Ignite 3.x to hit general
> availability? e.g. 2024-2025 vs. 2027-2029.
>
> I'm writing a proof of concept with Ignite 2.16 on a small team, and
> answers to these questions will help us choose whether to use Ignite, which
> features to rely on, and how soon we might plan for a migration after our
> initial work.
>
> Thanks,
> Raymond
>


Re: Node requires maintenance, non-empty set of maintenance tasks is found - node is not coming up

2024-05-29 Thread Jeremy McMillan
If backup partitions are available when a node is lost, we should not
expect lost partitions.

There is a lot more to this story than this thread explains, so for the
community: please don't follow this procedure.

https://ignite.apache.org/docs/latest/configuring-caches/partition-loss-policy
"A partition is lost when both the primary copy and all backup copies of
the partition are not available to the cluster, i.e. when the primary and
backup nodes for the partition become unavailable."

If you attempt to access a cache and receive a lost partitions error, this
means there IS DATA LOSS. Partition loss means there are no primary or
backup copies of a particular cache partition available. Have multiple
server nodes experienced trouble? Can we be certain that the affected
caches were created with backups>=1?

If a node fails to start up and complains about maintenance tasks, we
should be very suspicious that this node's persistent data is corrupted. If
the cluster is activated with a missing node and caches have lost
partitions, then we know those caches have lost some data. If there are no
lost partitions, we can safely remove the corrupted node from the baseline,
bring up a fresh node, and add it to the baseline in its place, thus
restoring redundancy. If there are lost partitions and we need to reset
lost partitions to bring a cache back online, we should expect that the
cache is missing some data and may need to be reloaded.

Cache configuration backups=2 is excessive except in edge cases. For
backups=n, the memory and persistence footprint is n+1 times the nominal
data footprint, so the cost scales linearly. The marginal utility of each
additional backup copy, however, is diminishing: if the probability of any
single node failing is p, then losing a partition outright requires all n+1
copies to fail at once, which (assuming independent failures) happens with
probability on the order of p^(n+1).
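One back-of-the-envelope way to see this trade-off is a quick calculation.
This is only a sketch: it assumes node failures are independent and equally
likely, which real clusters only approximate.

```python
def storage_cost(backups: int) -> int:
    """Copies of each partition kept cluster-wide: 1 primary + n backups."""
    return backups + 1


def loss_probability(p: float, backups: int) -> float:
    """Chance that every copy of a partition is down at once,
    assuming independent node failures with per-node probability p."""
    return p ** (backups + 1)


if __name__ == "__main__":
    p = 0.01  # assumed per-node failure probability (illustrative only)
    for n in range(4):
        print(f"backups={n}: footprint {storage_cost(n)}x, "
              f"loss probability ~{loss_probability(p, n):.0e}")
```

With p = 0.01, moving from backups=1 to backups=2 raises the footprint from
2x to 3x but only shifts the loss probability from about 1e-4 to about 1e-6:
a real gain, but one most deployments will never observe.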

Think of backup partitions like multiple coats of paint. After the second
coat, nobody will be able to tell the difference if you applied a third or
fourth coat. It still takes the same effort and materials to apply each
coat of paint.

If you NEED fault tolerance, then it should be mandatory to conduct testing
to make sure the configuration you have chosen is working as expected. If
backups=1 isn't effective for single node failures, then backups=2 will
make no beneficial difference. With backups=1 we should expect a cache to
work without complaining about lost partitions when one server node is
offline.
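To make the testing advice concrete: a cache that tolerates one node
failure and refuses to serve lost partitions can be declared roughly like
this (Spring XML in the style Ignite 2.x uses; the cache name and policy
choice are illustrative assumptions, not a prescription):

```xml
<!-- Sketch only: one backup copy per partition, and fail fast on
     lost partitions instead of silently serving partial data. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <property name="backups" value="1"/>
    <property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
</bean>
```

With backups=1, a single node going offline should leave all partitions
available; if a test run still reports lost partitions, the configuration
or topology is not providing the intended redundancy.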

On Wed, May 29, 2024 at 12:15 PM Naveen Kumar 
wrote:

> Thanks very much for your prompt response Gianluca
>
> just for the community, I could solve this by running the control.sh with
> reset lost partitions for individual cachereset_lost_partitions
> looks like it worked, those partition issue is resolved, I suppose there
> wouldnt be any data loss as we have set all our caches with 2 replicas
>
> coming to the node which was not getting added to the cluster earlier,
> removed from baseline --> cleared all persistence store --> brought up the
> node --> added the node to baseline, this also seems to have worked fine.
>
> Thanks
>
>
> On Wed, May 29, 2024 at 5:13 PM Gianluca Bonetti <
> gianluca.bone...@gmail.com> wrote:
>
>> Hello Naveen
>>
>> Apache Ignite 2.13 is more than 2 years old, 25 months old in actual fact.
>> Three bugfix releases had been rolled out over time up to 2.16 release.
>>
>> It seems you are restarting your cluster on a regular basis, so you'd
>> better upgrade to 2.16 as soon as possible.
>> Otherwise it will also be very difficult for people on a community based
>> mailing list, on volunteer time, to work out a solution with a 2 years old
>> version running.
>>
>> Besides that, you are not providing very much information about your
>> cluster setup.
>> How many nodes, what infrastructure, how many caches, overall data size.
>> One could only guess you have more than 1 node running, with at least 1
>> cache, and non-empty dataset. :)
>>
>> This document from GridGain may be helpful but I don't see the same for
>> Ignite, it may still be worth checking it out.
>>
>> https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/maintenance-mode
>>
>> On the other hand you should also check your failing node.
>> If it is always the same node failing, then there should be some root
>> cause apart from Ignite.
>> Indeed if the nodes configuration is the same across all nodes, and just
>> this one fails, you should also consider some network issues (check
>> connectivity and network latency between nodes) and hardware related issues
>> (faulty disks, faulty memory)
>> In the end, one option might be to replace the faulty machine with a
>> brand new one.
>> In cloud environments this is actually quite cheap and easy to do.
>>
>> Cheers
>> Gianluca
>>
>> On Wed, 29 May 2024 at 08:43, Naveen Kumar 
>> wrote:
>>
>>> Hello All
>>>
>>> We are using Ignite 2.13.0
>>>
>>> After a cluster restart, one of the node is not coming up and in node
>>> logs are seeing this error - Nod

Re: Node requires maintenance, non-empty set of maintenance tasks is found - node is not coming up

2024-05-29 Thread Naveen Kumar
Thanks very much for your prompt response, Gianluca.

Just for the community: I could solve this by running control.sh with
reset_lost_partitions for each affected cache. It looks like it worked; the
partition issue is resolved. I suppose there wouldn't be any data loss, as
we have configured all our caches with 2 backups.

Coming to the node which was not getting added to the cluster earlier: I
removed it from the baseline --> cleared the persistence store --> brought
up the node --> added it back to the baseline. This also seems to have
worked fine.
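For reference, the recovery steps described here map onto control.sh
invocations along these lines (a sketch with placeholder names; the
consistent ID and cache name must match your own cluster, and the commands
should be run with care):

```
# Remove the corrupted node from the baseline topology.
./control.sh --baseline remove node2ConsistentId --yes

# After wiping the node's persistence directory and restarting it,
# add it back to the baseline.
./control.sh --baseline add node2ConsistentId --yes

# If a cache still reports lost partitions, reset them, accepting
# that any data that lived only in those partitions is gone.
./control.sh --cache reset_lost_partitions myCache
```

Note that reset_lost_partitions only clears the lost-partition state; as
Jeremy points out elsewhere in this thread, it does not bring lost data
back.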

Thanks


On Wed, May 29, 2024 at 5:13 PM Gianluca Bonetti 
wrote:

> Hello Naveen
>
> Apache Ignite 2.13 is more than 2 years old, 25 months old in actual fact.
> Three bugfix releases had been rolled out over time up to 2.16 release.
>
> It seems you are restarting your cluster on a regular basis, so you'd
> better upgrade to 2.16 as soon as possible.
> Otherwise it will also be very difficult for people on a community based
> mailing list, on volunteer time, to work out a solution with a 2 years old
> version running.
>
> Besides that, you are not providing very much information about your
> cluster setup.
> How many nodes, what infrastructure, how many caches, overall data size.
> One could only guess you have more than 1 node running, with at least 1
> cache, and non-empty dataset. :)
>
> This document from GridGain may be helpful but I don't see the same for
> Ignite, it may still be worth checking it out.
>
> https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/maintenance-mode
>
> On the other hand you should also check your failing node.
> If it is always the same node failing, then there should be some root
> cause apart from Ignite.
> Indeed if the nodes configuration is the same across all nodes, and just
> this one fails, you should also consider some network issues (check
> connectivity and network latency between nodes) and hardware related issues
> (faulty disks, faulty memory)
> In the end, one option might be to replace the faulty machine with a brand
> new one.
> In cloud environments this is actually quite cheap and easy to do.
>
> Cheers
> Gianluca
>
> On Wed, 29 May 2024 at 08:43, Naveen Kumar 
> wrote:
>
>> Hello All
>>
>> We are using Ignite 2.13.0
>>
>> After a cluster restart, one of the nodes is not coming up, and in the
>> node logs we are seeing this error: "Node requires maintenance, non-empty
>> set of maintenance tasks is found" - node is not coming up
>>
>> We are also getting errors like "timeout is reached before computation is
>> completed" on other nodes.
>>
>> I can see that we have the control.sh script to back up and clean up the
>> corrupted files, but when I run the command, it fails.
>>
>> I have removed the node from the baseline and tried running it as well;
>> it is still failing.
>>
>> What could be the solution for this? The cluster is functioning; however,
>> some requests are failing.
>>
>> Is there any way we can start the Ignite node in maintenance mode and try
>> running the clean-corrupted commands?
>>
>> Thanks
>> Naveen
>>
>>
>>

-- 
Thanks & Regards,
Naveen Bandaru


Re: Node requires maintenance, non-empty set of maintenance tasks is found - node is not coming up

2024-05-29 Thread Gianluca Bonetti
Hello Naveen

Apache Ignite 2.13 is more than two years old (25 months, in actual fact),
and three bugfix releases have been rolled out since, up to the 2.16
release.

It seems you are restarting your cluster on a regular basis, so you'd
better upgrade to 2.16 as soon as possible.
Otherwise it will also be very difficult for people on a community-based
mailing list, on volunteer time, to work out a solution against a
two-year-old version.

Besides that, you are not providing very much information about your
cluster setup.
How many nodes, what infrastructure, how many caches, overall data size.
One could only guess you have more than 1 node running, with at least 1
cache, and non-empty dataset. :)

This document from GridGain may be helpful but I don't see the same for
Ignite, it may still be worth checking it out.
https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/maintenance-mode

On the other hand you should also check your failing node.
If it is always the same node failing, then there is likely some root cause
outside Ignite.
Indeed, if the node configuration is the same across all nodes and just
this one fails, you should also consider network issues (check
connectivity and network latency between nodes) and hardware issues
(faulty disks, faulty memory).
In the end, one option might be to replace the faulty machine with a brand
new one.
In cloud environments this is actually quite cheap and easy to do.

Cheers
Gianluca

On Wed, 29 May 2024 at 08:43, Naveen Kumar  wrote:

> Hello All
>
> We are using Ignite 2.13.0
>
> After a cluster restart, one of the nodes is not coming up, and in the
> node logs we are seeing this error: "Node requires maintenance, non-empty
> set of maintenance tasks is found" - node is not coming up
>
> We are also getting errors like "timeout is reached before computation is
> completed" on other nodes.
>
> I can see that we have the control.sh script to back up and clean up the
> corrupted files, but when I run the command, it fails.
>
> I have removed the node from the baseline and tried running it as well;
> it is still failing.
>
> What could be the solution for this? The cluster is functioning; however,
> some requests are failing.
>
> Is there any way we can start the Ignite node in maintenance mode and try
> running the clean-corrupted commands?
>
> Thanks
> Naveen
>
>
>


Re: Realtime CDC demo

2024-05-20 Thread Maksim Timonin
Hi!

Apache Ignite now provides the `CdcManager` interface, which captures
changes from WAL segments within the Ignite node. Ignite doesn't ship a
complete implementation of the CDC mode, but it provides the hooks needed
to implement one. Please check the docs on the `CdcManager` interface.

I suppose there are a few possible implementations of the interface. The
actual implementation depends on the security requirements of the
environment, on limiting the influence of CDC on the operation of the
node, and so on. AFAIK, there is currently no open-source implementation.

Maksim


Re: Possible too long JVM pause - Ignite 2.10

2024-05-09 Thread Stephen Darlington
That's a great article, Ibrahim. Thanks for sharing!

On Thu, 9 May 2024 at 18:00, Ibrahim Altun 
wrote:

> Try this post
>
> https://medium.com/segmentify-tech/garbage-collection-g1gc-optimisation-on-apache-ignite-7217f2d9186e
>
>
> İbrahim Halil Altun, Expert R&D Developer, +90 536 3327510
> segmentify.com • UK • Germany • Turkey • Spain • Poland
>
>
> On Thu, May 9, 2024 at 19:51 Jeremy McMillan  wrote:
>
>> Finding happiness is unfortunately never quite that simple.
>>
>>1. Understand why the garbage collector cannot function with shorter
>>pauses.
>>(may require GC logging configuration to provide key details)
>>2. Identify priorities.
>>(ie. absolute minimal GC pauses for best latency performance, or
>>maximum throughput, or minimal hardware footprint/cost...)
>>3. Choose a remediation solution based on stated priorities.
>>(ie. any combination of increase RAM, or possibly ironically CPU or
>>network capacity, decrease query workload, tune GC parameters, ...)
>>4. Implement the solution with appropriate changes to hardware, code,
>>configuration, and command line options, etc.
>>
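As a concrete starting point for step 1, GC logging can be enabled with JVM
flags along these lines (paths and rotation sizes are placeholders; the
first form is for JDK 9+, the second for JDK 8):

```
# JDK 9+ (unified logging)
-Xlog:gc*,safepoint:file=/var/log/ignite/gc.log:time,uptime,level,tags:filecount=5,filesize=20m

# JDK 8
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/ignite/gc.log
```

The safepoint/application-stopped output is what reveals whether long
pauses come from GC itself or from something else stopping the JVM.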
>> Ignite tends to use Java heap mostly for handling query workload. The
>> slower these queries are, the greater number of them will be running
>> concurrently. Java heap needs to accommodate the sum of all running
>> queries' memory footprints, so the first remediation option on the list
>> should include making the slowest queries faster or less memory-hungry.
>> Alternatively, these queries could receive more server resources to spread
>> the load thinner, typically by adding more nodes to the cluster. This will
>> divide the query load up, and also provide additional resources at the same
>> time. Node resource levels may also be upgraded to help the queries
>> complete faster if analysis reveals they are CPU bound or memory bound.
>> Only when we know the workload and resource level are properly matched
>> should we experiment with GC tuning options.
>>
>> On Thu, May 9, 2024 at 1:31 AM Charlin S  wrote:
>>
>>> Hi All,
>>>
>>> I am getting "Possible too long JVM pause: 6403 milliseconds". The JVM
>>> options used are as below:
>>> -XX:+DisableExplicitGC,-XX:+UseG1GC,-Xms3g,-Xmx5g - client node 1
>>> -XX:+DisableExplicitGC,-XX:+UseG1GC,-Xms1g,-Xmx4g  - client node 2
>>>
>>> Please suggest JVM options to avoid this JVM pause issue.
>>>
>>> Thanks & Regards,
>>> Charlin
>>>


  1   2   3   4   5   6   7   8   9   10   >