Re: apache ignite 2.10.0 heap starvation

2021-09-12 Thread Ibrahim Altun
Hi Ilya,

Since this is a production environment I can't risk taking a heap dump right
now, but I will try to convince my superiors to let me capture one and analyze it.

Queries are heavily used in our system, but aren't the query cursors
AutoCloseable objects? Do we still have to close them explicitly?

Here are some usage examples from our system;
--the insert query looks like this: MERGE INTO "ProductLabel" ("productId", "label",
"language") VALUES (?, ?, ?)
igniteCacheService.getCache(ID, IgniteCacheType.LABEL).query(insertQuery);
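
(For reference, a minimal sketch of what explicitly closing that insert cursor would
look like, assuming insertQuery is a SqlFieldsQuery built from the MERGE statement
above, igniteCacheService is our own helper, and productId/label/language are
placeholders:)

import java.util.List;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

SqlFieldsQuery insertQuery = new SqlFieldsQuery(
        "MERGE INTO \"ProductLabel\" (\"productId\", \"label\", \"language\") VALUES (?, ?, ?)")
        .setArgs(productId, label, language);

// closing the cursor in try-with-resources releases the query resources it holds on the heap
try (QueryCursor<List<?>> cursor = igniteCacheService
        .getCache(ID, IgniteCacheType.LABEL).query(insertQuery)) {
    cursor.getAll(); // for DML this just returns the update count
}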

Another usage example;
--the sqlFieldsQuery looks like this:
String sql = "SELECT _val FROM \"UserRecord\" WHERE \"email\" IN (?)";
SqlFieldsQuery sqlFieldsQuery = new SqlFieldsQuery(sql);
sqlFieldsQuery.setLazy(true);
sqlFieldsQuery.setArgs(emails.toArray());

try (QueryCursor<List<?>> ignored = igniteCacheService.getCache(ID, 
IgniteCacheType.USER).query(sqlFieldsQuery)) {...}



On 2021/09/12 20:28:09, Shishkov Ilya  wrote: 
> Hi, Ibrahim!
> Have you analyzed the heap dump of the server node JVMs?
> If your application executes queries, are their cursors closed?
> 
> Fri, 10 Sep 2021 at 11:54, Ibrahim Altun :
> 
> > Igniters any comment on this issue, we are facing huge GC problems on
> > production environment, please advise.
> >
> > On 2021/09/07 14:11:09, Ibrahim Altun 
> > wrote:
> > > Hi,
> > >
> > > totally 400 - 600K reads/writes/updates
> > > 12core
> > > 64GB RAM
> > > no iowait
> > > 10 nodes
> > >
> > > On 2021/09/07 12:51:28, Piotr Jagielski  wrote:
> > > > Hi,
> > > > Can you provide some information on how you use the cluster? How many
> > reads/writes/updates per second? Also CPU / RAM spec of cluster nodes?
> > > >
> > > > We observed full GC / CPU load / OOM killer when loading big amount of
> > data (15 mln records, data streamer + allowOverwrite=true). We've seen
> > 200-400k updates per sec on JMX metrics, but load up to 10 on nodes, iowait
> > to 30%. Our cluster is 3 x 4CPU, 16GB RAM (already upgrading to 8CPU, 32GB
> > RAM). Ignite 2.10
> > > >
> > > > Regards,
> > > > Piotr
> > > >
> > > > On 2021/09/02 08:36:07, Ibrahim Altun 
> > wrote:
> > > > > After upgrading from 2.7.1 version to 2.10.0 version ignite nodes
> > facing
> > > > > huge full GC operations after 24-36 hours after node start.
> > > > >
> > > > > We try to increase heap size but no luck, here is the start
> > configuration
> > > > > for nodes;
> > > > >
> > > > > JVM_OPTS="$JVM_OPTS -Xms12g -Xmx12g -server
> > > > >
> > -javaagent:/etc/prometheus/jmx_prometheus_javaagent-0.14.0.jar=8090:/etc/prometheus/jmx.yml
> > > > > -Dcom.sun.management.jmxremote
> > > > > -Dcom.sun.management.jmxremote.authenticate=false
> > > > > -Dcom.sun.management.jmxremote.port=49165
> > > > > -Dcom.sun.management.jmxremote.host=localhost
> > > > > -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=1g
> > > > > -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true
> > > > > -DIGNITE_WAL_MMAP=true -DIGNITE_BPLUS_TREE_LOCK_RETRIES=10
> > > > > -Djava.net.preferIPv4Stack=true"
> > > > >
> > > > > JVM_OPTS="$JVM_OPTS -XX:+AlwaysPreTouch -XX:+UseG1GC
> > > > > -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
> > > > > -XX:+UseStringDeduplication -Xloggc:/var/log/apache-ignite/gc.log
> > > > > -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> > > > > -XX:+PrintTenuringDistribution -XX:+PrintGCCause
> > > > > -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10
> > > > > -XX:GCLogFileSize=100M"
> > > > >
> > > > > here is the 80 hours of GC analysis report:
> > > > >
> > https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjEvMDgvMzEvLS1nYy5sb2cuMC5jdXJyZW50LnppcC0tNS01MS0yOQ==&channel=WEB
> > > > >
> > > > > do we need more heap size or is there a BUG that we need to be aware?
> > > > >
> > > > > here is the node configuration:
> > > > >
> > > > > 
> > > > > http://www.springframework.org/schema/beans";
> > > > >xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
> > > > >xsi:schemaLocation="
> > > > > http://www.springframework.org/schema/beans
> > > > > http://www.springframework.org/schema/beans/spring-beans.xsd";>
> > > > >  > > > > class="org.apache.ignite.configuration.IgniteConfiguration">
> > > > > 
> > > > > 
> > > > >  > > > > value="/etc/apache-ignite/ignite-log4j2.xml"/>
> > > > > 
> > > > > 
> > > > > 
> > > > >  > class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > > 
> > > > >
> > > > > 
> > > > > 
> > > > >
> > > > > 
> > > > > 
> > > > >  > class="org.apache.ignite.spi.deployment.uri.UriDeploymentSpi">
> > > > >  > > > > value="/tmp/temp_ignite_libs"/>
> > > > > 
> > > > > 
> > > > >
> > > > > file://freq=5000@localhost
> > /usr/share/apache-ignite/libs/segmentify/
> > > > > 
> > > > > 
> > > > > 
> > 

Re: Subscribe

2021-09-12 Thread Ilya Kazakov
Hi Ibrahim, to subscribe please email: user-subscr...@ignite.apache.org

And welcome!

---
Ilya

Tue, 7 Sep 2021 at 20:20, Ibrahim Altun :

> please subscribe me
>
> --
> İbrahim Halil Altun, Senior Software Engineer
> +90 536 3327510 • segmentify.com → UK • Germany • Turkey
> 
>


Re: Ignite CheckpointReadLock /Long running cache futures

2021-09-12 Thread Ilya Kazakov
As I mentioned above, it was a hung PME. PME is a cluster-wide operation
that refreshes the information about partition distribution across the
nodes, and any cache operation has to wait until the PME finishes. In your
case, reconnecting the client to another server node does not resolve the
issue, because the root cause is the hanging PME itself. You should resolve
the PME hang: read the logs carefully and determine which server node is
hanging. Sometimes it may be necessary to unwind a chain of several server
nodes to get to the root cause.

You can read more about PME here:
https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood


Ilya

Thu, 9 Sep 2021 at 21:21, Mike Wiesenberg :

> Apologies, I can't paste logs due to firm policy. What do you think about my
> questions regarding clients switching to good nodes automatically?
>
> On Tue, Sep 7, 2021 at 4:14 AM Ilya Kazakov 
> wrote:
>
>> Hello Mike. According to your description, there was a hanging PME. If
>> you need more detailed analysis, could you share your logs and thread dumps?
>>
>> ---
>> Ilya
>>
>> Mon, 6 Sep 2021 at 22:21, Mike Wiesenberg :
>>
>>> Using Ignite 2.10.0
>>>
>>> We had a frustrating series of issues with Ignite the other day. We're
>>> using a 4-node cluster with 1 backup per table and cacheMode set to
>>> Partitioned, and write behind enabled. We have a client that inserts data
>>> into caches and another client that listens for new data in those caches.
>>> (Apologies I can't paste logs or configuration due to firm policy)
>>>
>>> What happened:
>>>
>>> 1. We observed that our insertion client was not working after startup,
>>> it logged every 20 seconds that 'Still awaiting for initial partition map
>>> exchange.' This continued until we restarted the node it was trying to
>>> connect to, at which point the client connected to another node and the
>>> warning stopped.
>>>
>>>  Possible Bug #1 - why didn't it automatically try a different node, or
>>> if it would have that same issue connecting to any node, why couldn't the
>>> cluster print an error and function anyhow?
>>>
>>> 2. After rebooting bad node #1, the insertion client still didn't work,
>>> it then started printing totally different warnings about 'First 10 long
>>> running cache futures [total=1]', whatever that means, and then printed the
>>> ID of a node. We killed that referenced node, and then everything started
>>> working.
>>>
>>>  Again, why didn't the client switch to a good node automatically(or is
>>> there a way to configure such failover capability that I don't know about)?
>>>
>>> 3. In terms of root cause, it seems bad node #1 had a 'blocked
>>> system-critical thread' which according to the stack trace was blocked at
>>> CheckpointReadWriteLock.java line 69. Is there a way to automatically
>>> recover from this or handle this more gracefully? If not I will probably
>>> disable WAL (which I understand will disable checkpointing).
>>>
>>>  Possible Bug #2 - why couldn't it recover from this lock if restarting
>>> fixed it?
>>>
>>> Regards, and thanks in advance, for any advice!
>>>
>>


Re: apache ignite 2.10.0 heap starvation

2021-09-12 Thread Shishkov Ilya
Hi, Ibrahim!
Have you analyzed the heap dump of the server node JVMs?
If your application executes queries, are their cursors closed?

Fri, 10 Sep 2021 at 11:54, Ibrahim Altun :

> Igniters any comment on this issue, we are facing huge GC problems on
> production environment, please advise.
>
> On 2021/09/07 14:11:09, Ibrahim Altun 
> wrote:
> > Hi,
> >
> > totally 400 - 600K reads/writes/updates
> > 12core
> > 64GB RAM
> > no iowait
> > 10 nodes
> >
> > On 2021/09/07 12:51:28, Piotr Jagielski  wrote:
> > > Hi,
> > > Can you provide some information on how you use the cluster? How many
> reads/writes/updates per second? Also CPU / RAM spec of cluster nodes?
> > >
> > > We observed full GC / CPU load / OOM killer when loading big amount of
> data (15 mln records, data streamer + allowOverwrite=true). We've seen
> 200-400k updates per sec on JMX metrics, but load up to 10 on nodes, iowait
> to 30%. Our cluster is 3 x 4CPU, 16GB RAM (already upgrading to 8CPU, 32GB
> RAM). Ignite 2.10
> > >
> > > Regards,
> > > Piotr
> > >
> > > On 2021/09/02 08:36:07, Ibrahim Altun 
> wrote:
> > > > After upgrading from 2.7.1 version to 2.10.0 version ignite nodes
> facing
> > > > huge full GC operations after 24-36 hours after node start.
> > > >
> > > > We try to increase heap size but no luck, here is the start
> configuration
> > > > for nodes;
> > > >
> > > > JVM_OPTS="$JVM_OPTS -Xms12g -Xmx12g -server
> > > >
> -javaagent:/etc/prometheus/jmx_prometheus_javaagent-0.14.0.jar=8090:/etc/prometheus/jmx.yml
> > > > -Dcom.sun.management.jmxremote
> > > > -Dcom.sun.management.jmxremote.authenticate=false
> > > > -Dcom.sun.management.jmxremote.port=49165
> > > > -Dcom.sun.management.jmxremote.host=localhost
> > > > -XX:MaxMetaspaceSize=256m -XX:MaxDirectMemorySize=1g
> > > > -DIGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK=true
> > > > -DIGNITE_WAL_MMAP=true -DIGNITE_BPLUS_TREE_LOCK_RETRIES=10
> > > > -Djava.net.preferIPv4Stack=true"
> > > >
> > > > JVM_OPTS="$JVM_OPTS -XX:+AlwaysPreTouch -XX:+UseG1GC
> > > > -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
> > > > -XX:+UseStringDeduplication -Xloggc:/var/log/apache-ignite/gc.log
> > > > -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> > > > -XX:+PrintTenuringDistribution -XX:+PrintGCCause
> > > > -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10
> > > > -XX:GCLogFileSize=100M"
> > > >
> > > > here is the 80 hours of GC analysis report:
> > > >
> https://gceasy.io/my-gc-report.jsp?p=c2hhcmVkLzIwMjEvMDgvMzEvLS1nYy5sb2cuMC5jdXJyZW50LnppcC0tNS01MS0yOQ==&channel=WEB
> > > >
> > > > do we need more heap size or is there a BUG that we need to be aware?
> > > >
> > > > here is the node configuration:
> > > >
> > > > 
> > > > http://www.springframework.org/schema/beans";
> > > >xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance";
> > > >xsi:schemaLocation="
> > > > http://www.springframework.org/schema/beans
> > > > http://www.springframework.org/schema/beans/spring-beans.xsd";>
> > > >  > > > class="org.apache.ignite.configuration.IgniteConfiguration">
> > > > 
> > > > 
> > > >  > > > value="/etc/apache-ignite/ignite-log4j2.xml"/>
> > > > 
> > > > 
> > > > 
> > > >  class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > >
> > > > 
> > > > 
> > > >
> > > > 
> > > > 
> > > >  class="org.apache.ignite.spi.deployment.uri.UriDeploymentSpi">
> > > >  > > > value="/tmp/temp_ignite_libs"/>
> > > > 
> > > > 
> > > >
> > > > file://freq=5000@localhost
> /usr/share/apache-ignite/libs/segmentify/
> > > > 
> > > > 
> > > > 
> > > > 
> > > >
> > > > 
> > > > 
> > > > 
> > > >  class="org.apache.ignite.configuration.CacheConfiguration">
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > >
> > > > 
> > > > 
> > > >  class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
> > > > 
> > > > 
> > > >  > > >
> class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > > 
> > > >
> > > > 
> > > > 
> > > >  class="org.apache.ignite.configuration.DataStorageConfiguration">
> > > > 
> > > >  > > > class="org.apache.ignite.configuration.DataRegionConfiguration">
> > > >  value="true"/>
> > > >  > > > value="#{ 2L * 1024 * 1024 * 1024}"/>
> > > > 
>

Re: Ignite Cluster Snapshots

2021-09-12 Thread Shishkov Ilya
Hi, Siva!

As far as I can see, this is not expected behaviour.
Is the cluster's baseline topology correct, and are all nodes part of the
cluster? To check this, you can run the 'control.sh --baseline' command.
Have you checked the logs for errors on all nodes of the cluster?
Is the behaviour the same when you create the snapshot via the control
script (control.sh|bat)?
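
For reference, here is a minimal sketch (assuming the Ignite 2.9+ snapshot Java API)
of creating a cluster-wide snapshot and waiting for it to complete:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;

Ignite ignite = Ignition.ignite();

// createSnapshot() is asynchronous and cluster-wide; get() blocks until every
// baseline node has written its part of the snapshot
ignite.snapshot().createSnapshot("snapshot_02092020").get();

Waiting on the returned future ensures the operation has finished on all baseline
nodes before the snapshot directories are inspected.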

Fri, 10 Sep 2021 at 21:43, :

> We are using Ignite version 2.10 and trying to create cache snapshots
> using Java:
> https://ignite.apache.org/docs/latest/persistence/snapshots#restoring-from-snapshot
>
>
>
> We have a 3 nodes cluster running on three different servers.
>
>
>
> ignite.snapshot().createSnapshot("snapshot_02092020")
>
>
>
> But it doesn't create a snapshot on all 3 nodes, only on the node where it
> is executed. Is this expected? Is there a way to create a snapshot of the
> whole cluster as mentioned in the documentation?
> https://ignite.apache.org/docs/latest/persistence/snapshots#overview
>
>
>
>
>
> Thanks,
> Siva.
>
>
>


Re: Apache Ignite Sink Connector

2021-09-12 Thread Saikat Maitra
Are you looking to run Ignite sink connectors in parallel against a Kafka
topic?

My thought is that you should be able to do that as long as the group_id is
the same, as described here:
https://medium.com/@jhansireddy007/how-to-parallelise-kafka-consumers-59c8b0bbc37a
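
For illustration only (this is the plain Kafka consumer API, not the Ignite sink
connector itself, and the topic name and bootstrap servers are placeholders), a
minimal sketch of the consumer-group idea from that article: starting several of
these processes with the same group.id spreads the topic's partitions across them.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-host:9092");   // placeholder
props.put(ConsumerConfig.GROUP_ID_CONFIG, "ignite-sink-group");          // same group id in every instance
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
    consumer.subscribe(Collections.singletonList("my-topic"));           // placeholder topic
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
        for (ConsumerRecord<String, String> rec : records) {
            // write rec.key() / rec.value() into an Ignite cache here
        }
    }
}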

I may have misunderstood the question as well.

Regards,
Saikat





On Sun, Sep 12, 2021 at 1:32 PM Shubham Shirur 
wrote:

> Yes, I agree with you. Basically kafka connect works in two different ways
> i.e standalone and distributed mode.
>
> In standard mode ignite sink connectors run very smoothly. So if I want to
> create multiple topic cache data transmission, I need to setup same number
> of workers and connectors which is not recommended in production because
> its standalone, if connect node goes down connector will stop.
>
> But in distributed mode single worker is running across nodes and
> rebalancing happens for connectors tasks if any node goes down. In this
> case I am able to create just 1 ignite connector at a time. No parallelism.
>
> Hope this gives better clarity.
>
> Thanks and regards,
> Shubham
>
> On Sun, Sep 12, 2021, 11:50 PM Saikat Maitra 
> wrote:
>
>> Can you please elaborate on distributed mode?
>>
>> You can always connect multiple client to different kafka topic and write
>> data to cluster of Ignite nodes.
>>
>> Regards,
>> Saikat
>>
>> On Sun, Sep 12, 2021 at 10:47 AM Shubham Shirur 
>> wrote:
>>
>>> Hey,
>>>
>>> Thanks for replying. I have gone through the documentation and could
>>> setup and run connectors in standalone mode as described there in docs. But
>>> I want to run it in distributed mode, can you help me with that?
>>>
>>> Thanks & Regards,
>>> Shubham
>>>
>>> On Sun, Sep 12, 2021, 7:58 PM Saikat Maitra 
>>> wrote:
>>>
 Hi Shubham,

 Here are the documents for Apache Ignite Sink Connector using Kafka
 https://github.com/apache/ignite-extensions/tree/master/modules/kafka-ext

 Please let us know if you have any questions.

 Regards,
 Saikat

 On Wed, Aug 18, 2021 at 11:40 PM Shubham Shirur <
 shirurshub...@gmail.com> wrote:

> Hi,
>
> I did not find any specific documentation on Apache Ignite Sink
> Connector. I am using a kafka ignite sink connector and want to push some
> kafka topic data in ignite where kafka and ignite are on remote nodes. My
> connector should ideally run on a remote node from the ignite node.
>
> How can I achieve this?
> What configurations I need to pass in the spring xml file which I pass
> as a parameter in connector properties?
> What configurations I need to pass in the spring xml file of the
> ignite server node?
>
> Thanks,
> Shubham
>



Re: Apache Ignite Sink Connector

2021-09-12 Thread Shubham Shirur
Yes, I agree with you. Basically, Kafka Connect works in two different
modes, i.e. standalone and distributed.

In standalone mode Ignite sink connectors run very smoothly. So if I want to
transmit data from multiple topics into caches, I need to set up the same
number of workers and connectors, which is not recommended in production
because it is standalone: if the Connect node goes down, the connector stops.

But in distributed mode a single worker runs across the nodes and the
connector tasks are rebalanced if any node goes down. In this case I am able
to create just one Ignite connector at a time, so there is no parallelism.

Hope this gives better clarity.

Thanks and regards,
Shubham

On Sun, Sep 12, 2021, 11:50 PM Saikat Maitra 
wrote:

> Can you please elaborate on distributed mode?
>
> You can always connect multiple client to different kafka topic and write
> data to cluster of Ignite nodes.
>
> Regards,
> Saikat
>
> On Sun, Sep 12, 2021 at 10:47 AM Shubham Shirur 
> wrote:
>
>> Hey,
>>
>> Thanks for replying. I have gone through the documentation and could
>> setup and run connectors in standalone mode as described there in docs. But
>> I want to run it in distributed mode, can you help me with that?
>>
>> Thanks & Regards,
>> Shubham
>>
>> On Sun, Sep 12, 2021, 7:58 PM Saikat Maitra 
>> wrote:
>>
>>> Hi Shubham,
>>>
>>> Here are the documents for Apache Ignite Sink Connector using Kafka
>>> https://github.com/apache/ignite-extensions/tree/master/modules/kafka-ext
>>>
>>> Please let us know if you have any questions.
>>>
>>> Regards,
>>> Saikat
>>>
>>> On Wed, Aug 18, 2021 at 11:40 PM Shubham Shirur 
>>> wrote:
>>>
 Hi,

 I did not find any specific documentation on Apache Ignite Sink
 Connector. I am using a kafka ignite sink connector and want to push some
 kafka topic data in ignite where kafka and ignite are on remote nodes. My
 connector should ideally run on a remote node from the ignite node.

 How can I achieve this?
 What configurations I need to pass in the spring xml file which I pass
 as a parameter in connector properties?
 What configurations I need to pass in the spring xml file of the ignite
 server node?

 Thanks,
 Shubham

>>>


Re: Apache Ignite Sink Connector

2021-09-12 Thread Saikat Maitra
Can you please elaborate on distributed mode?

You can always connect multiple clients to different Kafka topics and write
data to a cluster of Ignite nodes.

Regards,
Saikat

On Sun, Sep 12, 2021 at 10:47 AM Shubham Shirur 
wrote:

> Hey,
>
> Thanks for replying. I have gone through the documentation and could setup
> and run connectors in standalone mode as described there in docs. But I
> want to run it in distributed mode, can you help me with that?
>
> Thanks & Regards,
> Shubham
>
> On Sun, Sep 12, 2021, 7:58 PM Saikat Maitra 
> wrote:
>
>> Hi Shubham,
>>
>> Here are the documents for Apache Ignite Sink Connector using Kafka
>> https://github.com/apache/ignite-extensions/tree/master/modules/kafka-ext
>>
>> Please let us know if you have any questions.
>>
>> Regards,
>> Saikat
>>
>> On Wed, Aug 18, 2021 at 11:40 PM Shubham Shirur 
>> wrote:
>>
>>> Hi,
>>>
>>> I did not find any specific documentation on Apache Ignite Sink
>>> Connector. I am using a kafka ignite sink connector and want to push some
>>> kafka topic data in ignite where kafka and ignite are on remote nodes. My
>>> connector should ideally run on a remote node from the ignite node.
>>>
>>> How can I achieve this?
>>> What configurations I need to pass in the spring xml file which I pass
>>> as a parameter in connector properties?
>>> What configurations I need to pass in the spring xml file of the ignite
>>> server node?
>>>
>>> Thanks,
>>> Shubham
>>>
>>


Re: Apache Ignite Sink Connector

2021-09-12 Thread Shubham Shirur
Hey,

Thanks for replying. I have gone through the documentation and could set up
and run the connectors in standalone mode as described in the docs. But I
want to run them in distributed mode; can you help me with that?

Thanks & Regards,
Shubham

On Sun, Sep 12, 2021, 7:58 PM Saikat Maitra  wrote:

> Hi Shubham,
>
> Here are the documents for Apache Ignite Sink Connector using Kafka
> https://github.com/apache/ignite-extensions/tree/master/modules/kafka-ext
>
> Please let us know if you have any questions.
>
> Regards,
> Saikat
>
> On Wed, Aug 18, 2021 at 11:40 PM Shubham Shirur 
> wrote:
>
>> Hi,
>>
>> I did not find any specific documentation on Apache Ignite Sink
>> Connector. I am using a kafka ignite sink connector and want to push some
>> kafka topic data in ignite where kafka and ignite are on remote nodes. My
>> connector should ideally run on a remote node from the ignite node.
>>
>> How can I achieve this?
>> What configurations I need to pass in the spring xml file which I pass as
>> a parameter in connector properties?
>> What configurations I need to pass in the spring xml file of the ignite
>> server node?
>>
>> Thanks,
>> Shubham
>>
>


Re: Apache Ignite Sink Connector

2021-09-12 Thread Saikat Maitra
Hi Shubham,

Here is the documentation for the Apache Ignite sink connector for Kafka:
https://github.com/apache/ignite-extensions/tree/master/modules/kafka-ext

Please let us know if you have any questions.

Regards,
Saikat

On Wed, Aug 18, 2021 at 11:40 PM Shubham Shirur 
wrote:

> Hi,
>
> I did not find any specific documentation on Apache Ignite Sink Connector.
> I am using a kafka ignite sink connector and want to push some kafka topic
> data in ignite where kafka and ignite are on remote nodes. My connector
> should ideally run on a remote node from the ignite node.
>
> How can I achieve this?
> What configurations I need to pass in the spring xml file which I pass as
> a parameter in connector properties?
> What configurations I need to pass in the spring xml file of the ignite
> server node?
>
> Thanks,
> Shubham
>