Re: Reading offset from one consumer group to use for another consumer group.

2021-05-27 Thread Ran Lupovich
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seek(org.apache.kafka.common.TopicPartition,%20long)
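A minimal sketch of that approach with the Java client (node-rdkafka
exposes roughly comparable assign/seek calls); the broker address, group
ids and class name below are invented for illustration:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class SeekToOtherGroupsOffsets {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        // 1. Read the offsets the hourly DB loader's group has committed.
        Map<TopicPartition, OffsetAndMetadata> committed;
        try (AdminClient admin = AdminClient.create(props)) {
            committed = admin.listConsumerGroupOffsets("db-loader-group")
                    .partitionsToOffsetAndMetadata().get();
        }

        // 2. Point the second (monitoring) consumer at exactly those positions.
        props.put("group.id", "monitor-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(committed.keySet());
            committed.forEach((tp, om) -> consumer.seek(tp, om.offset()));
            // poll() now returns only records the DB loader has not yet loaded
        }
    }
}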

On Fri, May 28, 2021, 08:04, Ran Lupovich wrote:

> While your DB consumer is running you get access to the partition
> ${partition} @ offset ${offset}
>
> https://github.com/confluentinc/examples/blob/6.1.1-post/clients/cloud/nodejs/consumer.js
> When setting your second consumers for real time, just set them to start
> from that point
>
>
> On Fri, May 28, 2021, 01:51, Ronald Fenner <rfen...@gamecircus.com> wrote:
>
>> I'm trying to figure out how to programmatically read a consumer group's
>> offsets for a topic.
>> What I'm trying to do is read the offsets of our DB consumers that run
>> once an hour and batch load all new messages. I then would have another
>> consumer that monitors the offsets that have been consumed and consumes
>> the messages not yet loaded, storing them in memory to be able to send
>> them to a viewer. As messages get consumed they then get pruned from the
>> in-memory cache.
>>
>> Basically I'm wanting to create a window on the messages that haven't
>> been loaded into the DB.
>>
>> I've seen ways of getting it from the command line but I'd like to do it
>> from within code.
>>
>> Currently I'm using node-rdkafka.
>>
>> I guess as a last resort I could shell out to the command line for the
>> offsets, then parse the output and get it that way.
>>
>>
>> Ronald Fenner
>> Network Architect
>> Game Circus LLC.
>>
>> rfen...@gamecircus.com
>>
>>


Re: Reading offset from one consumer group to use for another consumer group.

2021-05-27 Thread Ran Lupovich
While your DB consumer is running you get access to the partition
${partition} @ offset ${offset}
https://github.com/confluentinc/examples/blob/6.1.1-post/clients/cloud/nodejs/consumer.js
When setting your second consumers for real time, just set them to start
from that point


On Fri, May 28, 2021, 01:51, Ronald Fenner <rfen...@gamecircus.com> wrote:

> I'm trying to figure out how to programmatically read a consumer group's
> offsets for a topic.
> What I'm trying to do is read the offsets of our DB consumers that run
> once an hour and batch load all new messages. I then would have another
> consumer that monitors the offsets that have been consumed and consumes
> the messages not yet loaded, storing them in memory to be able to send
> them to a viewer. As messages get consumed they then get pruned from the
> in-memory cache.
>
> Basically I'm wanting to create a window on the messages that haven't been
> loaded into the DB.
>
> I've seen ways of getting it from the command line but I'd like to do it
> from within code.
>
> Currently I'm using node-rdkafka.
>
> I guess as a last resort I could shell out to the command line for the
> offsets, then parse the output and get it that way.
>
>
> Ronald Fenner
> Network Architect
> Game Circus LLC.
>
> rfen...@gamecircus.com
>
>


Re: Issue using Https with elasticsearch source connector

2021-05-27 Thread Ran Lupovich
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=test-elasticsearch-sink
key.ignore=true
connection.url=https://localhost:9200
type.name=kafka-connect
elastic.security.protocol=SSL
elastic.https.ssl.keystore.location=/home/directory/elasticsearch-6.6.0/config/certs/keystore.jks
elastic.https.ssl.keystore.password=asdfasdf
elastic.https.ssl.key.password=asdfasdf
elastic.https.ssl.keystore.type=JKS
elastic.https.ssl.truststore.location=/home/directory/elasticsearch-6.6.0/config/certs/truststore.jks
elastic.https.ssl.truststore.password=asdfasdf
elastic.https.ssl.truststore.type=JKS
elastic.https.ssl.protocol=TLS


On Fri, May 28, 2021, 07:03, Ran Lupovich wrote:

> https://docs.confluent.io/kafka-connect-elasticsearch/current/security.html
>
> On Fri, May 28, 2021, 07:00, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:
>
>> The configuration doesn't have a provision for the truststore. That's my
>> concern.
>>
>>
>> On Thu, 27 May 2021 at 10:47 PM, Ran Lupovich 
>> wrote:
>>
>> > For HTTPS connections you need to set the truststore configuration
>> > parameters, giving it a JKS with a password; the JKS needs to contain
>> > the certificate of the CA that signed your certificates.
>> >
>> > On Thu, May 27, 2021, 19:55, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:
>> >
>> > > Hi Ran,
>> > > That problem is solved already.
>> > > If you read the complete thread you will see that the last problem is
>> > > about the HTTPS connection.
>> > >
>> > >
>> > > On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich 
>> > > wrote:
>> > >
>> > > > Try setting es.port = 9200, without the quotes?
>> > > >
>> > > > On Thu, May 27, 2021, 04:21, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:
>> > > >
>> > > > > Hello team,
>> > > > > Can anyone help me with this issue?
>> > > > >
>> > > > >
>> > > > >
>> > > >
>> > >
>> >
>> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
>> > > > >
>> > > > >
>> > > > > Regards,
>> > > > > Sunil.
>> > > > >
>> > > >
>> > >
>> >
>>
>


Re: Issue using Https with elasticsearch source connector

2021-05-27 Thread Ran Lupovich
https://docs.confluent.io/kafka-connect-elasticsearch/current/security.html

On Fri, May 28, 2021, 07:00, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:

> The configuration doesn't have a provision for the truststore. That's my
> concern.
>
>
> On Thu, 27 May 2021 at 10:47 PM, Ran Lupovich 
> wrote:
>
> > For HTTPS connections you need to set the truststore configuration
> > parameters, giving it a JKS with a password; the JKS needs to contain
> > the certificate of the CA that signed your certificates.
> >
> > On Thu, May 27, 2021, 19:55, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:
> >
> > > Hi Ran,
> > > That problem is solved already.
> > > If you read the complete thread you will see that the last problem is
> > > about the HTTPS connection.
> > >
> > >
> > > On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich 
> > > wrote:
> > >
> > > > Try setting es.port = 9200, without the quotes?
> > > >
> > > > On Thu, May 27, 2021, 04:21, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:
> > > >
> > > > > Hello team,
> > > > > Can anyone help me with this issue?
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
> > > > >
> > > > >
> > > > > Regards,
> > > > > Sunil.
> > > > >
> > > >
> > >
> >
>


Re: Issue using Https with elasticsearch source connector

2021-05-27 Thread sunil chaudhari
The configuration doesn't have a provision for the truststore. That's my
concern.


On Thu, 27 May 2021 at 10:47 PM, Ran Lupovich  wrote:

> For HTTPS connections you need to set the truststore configuration
> parameters, giving it a JKS with a password; the JKS needs to contain the
> certificate of the CA that signed your certificates.
>
> On Thu, May 27, 2021, 19:55, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:
>
> > Hi Ran,
> > That problem is solved already.
> > If you read the complete thread you will see that the last problem is
> > about the HTTPS connection.
> >
> >
> > On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich 
> > wrote:
> >
> > > Try setting es.port = 9200, without the quotes?
> > >
> > > On Thu, May 27, 2021, 04:21, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:
> > >
> > > > Hello team,
> > > > Can anyone help me with this issue?
> > > >
> > > >
> > > >
> > >
> >
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
> > > >
> > > >
> > > > Regards,
> > > > Sunil.
> > > >
> > >
> >
>


Re: Kafka contributor list request

2021-05-27 Thread Sophie Blee-Goldman
Done, added you to Confluence and Jira so you should be able to self-assign
tickets and create KIPs if necessary.

Welcome to Kafka :)

On Thu, May 27, 2021 at 4:28 PM Norbert Wojciechowski <
wojciechowski.norbert.git...@gmail.com> wrote:

> Hello,
>
> Can I please be added to the Kafka contributor list on Confluence/Jira, so
> I can start contributing to Kafka and be able to work on issues?
>
> My Jira username is: erzbnif
>
> Thanks,
> Norbert
>


Kafka contributor list request

2021-05-27 Thread Norbert Wojciechowski
Hello,

Can I please be added to the Kafka contributor list on Confluence/Jira, so I
can start contributing to Kafka and be able to work on issues?

My Jira username is: erzbnif

Thanks,
Norbert


Reading offset from one consumer group to use for another consumer group.

2021-05-27 Thread Ronald Fenner
I'm trying to figure out how to programmatically read a consumer group's
offsets for a topic.
What I'm trying to do is read the offsets of our DB consumers that run once an
hour and batch load all new messages. I then would have another consumer that
monitors the offsets that have been consumed and consumes the messages not yet
loaded, storing them in memory to be able to send them to a viewer. As messages
get consumed they then get pruned from the in-memory cache.

Basically I'm wanting to create a window on the messages that haven't been
loaded into the DB.

I've seen ways of getting it from the command line but I'd like to do it from
within code.

Currently I'm using node-rdkafka.

I guess as a last resort I could shell out to the command line for the offsets,
then parse the output and get it that way.


Ronald Fenner
Network Architect
Game Circus LLC.

rfen...@gamecircus.com



Re: Kafka going down every week due to log file deletion.

2021-05-27 Thread Ran Lupovich
The main purpose of the /tmp directory is to temporarily store files when
installing an OS or software. If any files in the /tmp directory have
not been accessed for a while, they will be automatically deleted from
the system.
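A hedged sketch of the usual fix: point the broker at a persistent
directory in server.properties (the path below is only an example) and
move the existing data there before restarting.

# server.properties -- keep Kafka data out of /tmp, which the OS may clean up
# (example path; any persistent mount the broker user can write to works)
log.dirs=/var/lib/kafka/logs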

On Thu, May 27, 2021, 19:04, Ran Lupovich wrote:

> Seems your log dir is sending your data to the tmp folder; if I am not
> mistaken, this dir automatically removes files from itself, causing Kafka's
> internal log deletion procedure to fail and shut down the broker on file
> not found.
>
> On Thu, May 27, 2021, 17:52, Neeraj Gulia <neeraj.gu...@opsworld.in> wrote:
>
>> Hi team,
>>
>> Our Kafka goes down almost once or twice a month due to log file
>> deletion failure.
>>
>>
>> There is a single-node Kafka broker running in our system, and it goes
>> down every time it tries to delete the log files as cleanup and fails.
>>
>> Sharing the error logs; we need a robust solution for this so that our
>> Kafka broker doesn't go down like this every time.
>>
>> Regards,
>> Neeraj Gulia
>>
>> Caused by: java.io.FileNotFoundException:
>> /tmp/kafka-logs/dokutopic-0/.index (No such file or
>> directory)
>> at java.base/java.io.RandomAccessFile.open0(Native Method)
>> at java.base/java.io.RandomAccessFile.open(RandomAccessFile.java:345)
>> at java.base/java.io.RandomAccessFile.<init>(RandomAccessFile.java:259)
>> at java.base/java.io.RandomAccessFile.<init>(RandomAccessFile.java:214)
>> at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:183)
>> at kafka.log.AbstractIndex.resize(AbstractIndex.scala:176)
>> at
>>
>> kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:242)
>> at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:242)
>> at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:508)
>> at kafka.log.Log.$anonfun$roll$8(Log.scala:1954)
>> at kafka.log.Log.$anonfun$roll$2(Log.scala:1954)
>> at kafka.log.Log.roll(Log.scala:2387)
>> at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1749)
>> at kafka.log.Log.deleteSegments(Log.scala:2387)
>> at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1737)
>> at kafka.log.Log.deleteOldSegments(Log.scala:1806)
>> at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:1074)
>> at
>> kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:1071)
>> at scala.collection.immutable.List.foreach(List.scala:431)
>> at kafka.log.LogManager.cleanupLogs(LogManager.scala:1071)
>> at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:409)
>> at
>> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
>> at
>>
>> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>> at
>> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
>> at
>>
>> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>> at
>>
>> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>> at
>>
>> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>> at java.base/java.lang.Thread.run(Thread.java:829)
>> [2021-05-27 09:34:07,972] WARN [ReplicaManager broker=0] Broker 0 stopped
>> fetcher for partitions
>>
>> __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,fliptopic-0,__consumer_offsets-25,webhook-events-0,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,dokutopic-0,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,post_payment_topic-0,__consumer_offsets-18,__consumer_offsets-37,topic-0,events-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,disbursementtopic-0,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40,faspaytopic-0
>> and stopped moving logs for partitions because they are in the failed log
>> directory /tmp/kafka-logs. (kafka.server.ReplicaManager)
>> [2021-05-27 09:34:07,974] WARN Stopping serving logs in dir
>> /tmp/kafka-logs
>> (kafka.log.LogManager)
>> [2021-05-27 09:34:07,983] ERROR Shutdown broker because all log dirs in
>> /tmp/kafka-logs have failed (kafka.log.LogManager)
>>
>


Re: Issue using Https with elasticsearch source connector

2021-05-27 Thread Ran Lupovich
For HTTPS connections you need to set the truststore configuration
parameters, giving it a JKS with a password; the JKS needs to contain the
certificate of the CA that signed your certificates.
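A hedged example of building such a truststore with the JDK's keytool
(the file names, alias, and password are placeholders):

keytool -importcert -trustcacerts -alias ca-root \
  -file ca.pem -keystore truststore.jks \
  -storepass changeit -noprompt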

On Thu, May 27, 2021, 19:55, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:

> Hi Ran,
> That problem is solved already.
> If you read the complete thread you will see that the last problem is
> about the HTTPS connection.
>
>
> On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich 
> wrote:
>
> > Try setting es.port = 9200, without the quotes?
> >
> > On Thu, May 27, 2021, 04:21, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:
> >
> > > Hello team,
> > > Can anyone help me with this issue?
> > >
> > >
> > >
> >
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
> > >
> > >
> > > Regards,
> > > Sunil.
> > >
> >
>


Re: Issue using Https with elasticsearch source connector

2021-05-27 Thread sunil chaudhari
Hi Ran,
That problem is solved already.
If you read the complete thread you will see that the last problem is about
the HTTPS connection.


On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich  wrote:

> Try setting es.port = 9200, without the quotes?
>
> On Thu, May 27, 2021, 04:21, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:
>
> > Hello team,
> > Can anyone help me with this issue?
> >
> >
> >
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
> >
> >
> > Regards,
> > Sunil.
> >
>


Re: Kafka going down every week due to log file deletion.

2021-05-27 Thread Ran Lupovich
Seems your log dir is sending your data to the tmp folder; if I am not
mistaken, this dir automatically removes files from itself, causing Kafka's
internal log deletion procedure to fail and shut down the broker on file not
found.

On Thu, May 27, 2021, 17:52, Neeraj Gulia <neeraj.gu...@opsworld.in> wrote:

> Hi team,
>
> Our Kafka goes down almost once or twice a month due to log file
> deletion failure.
>
>
> There is a single-node Kafka broker running in our system, and it goes down
> every time it tries to delete the log files as cleanup and fails.
>
> Sharing the error logs; we need a robust solution for this so that our
> Kafka broker doesn't go down like this every time.
>
> Regards,
> Neeraj Gulia
>
> Caused by: java.io.FileNotFoundException:
> /tmp/kafka-logs/dokutopic-0/.index (No such file or
> directory)
> at java.base/java.io.RandomAccessFile.open0(Native Method)
> at java.base/java.io.RandomAccessFile.open(RandomAccessFile.java:345)
> at java.base/java.io.RandomAccessFile.<init>(RandomAccessFile.java:259)
> at java.base/java.io.RandomAccessFile.<init>(RandomAccessFile.java:214)
> at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:183)
> at kafka.log.AbstractIndex.resize(AbstractIndex.scala:176)
> at
> kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:242)
> at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:242)
> at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:508)
> at kafka.log.Log.$anonfun$roll$8(Log.scala:1954)
> at kafka.log.Log.$anonfun$roll$2(Log.scala:1954)
> at kafka.log.Log.roll(Log.scala:2387)
> at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1749)
> at kafka.log.Log.deleteSegments(Log.scala:2387)
> at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1737)
> at kafka.log.Log.deleteOldSegments(Log.scala:1806)
> at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:1074)
> at
> kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:1071)
> at scala.collection.immutable.List.foreach(List.scala:431)
> at kafka.log.LogManager.cleanupLogs(LogManager.scala:1071)
> at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:409)
> at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
> at
>
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
> at
>
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
> at
>
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
>
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:829)
> [2021-05-27 09:34:07,972] WARN [ReplicaManager broker=0] Broker 0 stopped
> fetcher for partitions
>
> __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,fliptopic-0,__consumer_offsets-25,webhook-events-0,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,dokutopic-0,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,post_payment_topic-0,__consumer_offsets-18,__consumer_offsets-37,topic-0,events-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,disbursementtopic-0,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40,faspaytopic-0
> and stopped moving logs for partitions because they are in the failed log
> directory /tmp/kafka-logs. (kafka.server.ReplicaManager)
> [2021-05-27 09:34:07,974] WARN Stopping serving logs in dir /tmp/kafka-logs
> (kafka.log.LogManager)
> [2021-05-27 09:34:07,983] ERROR Shutdown broker because all log dirs in
> /tmp/kafka-logs have failed (kafka.log.LogManager)
>


Kafka going down every week due to log file deletion.

2021-05-27 Thread Neeraj Gulia
Hi team,

Our Kafka goes down almost once or twice a month due to log file
deletion failure.


There is a single-node Kafka broker running in our system, and it goes down
every time it tries to delete the log files as cleanup and fails.

Sharing the error logs; we need a robust solution for this so that our
Kafka broker doesn't go down like this every time.

Regards,
Neeraj Gulia

Caused by: java.io.FileNotFoundException:
/tmp/kafka-logs/dokutopic-0/.index (No such file or
directory)
at java.base/java.io.RandomAccessFile.open0(Native Method)
at java.base/java.io.RandomAccessFile.open(RandomAccessFile.java:345)
at java.base/java.io.RandomAccessFile.<init>(RandomAccessFile.java:259)
at java.base/java.io.RandomAccessFile.<init>(RandomAccessFile.java:214)
at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:183)
at kafka.log.AbstractIndex.resize(AbstractIndex.scala:176)
at
kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:242)
at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:242)
at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:508)
at kafka.log.Log.$anonfun$roll$8(Log.scala:1954)
at kafka.log.Log.$anonfun$roll$2(Log.scala:1954)
at kafka.log.Log.roll(Log.scala:2387)
at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1749)
at kafka.log.Log.deleteSegments(Log.scala:2387)
at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1737)
at kafka.log.Log.deleteOldSegments(Log.scala:1806)
at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:1074)
at
kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:1071)
at scala.collection.immutable.List.foreach(List.scala:431)
at kafka.log.LogManager.cleanupLogs(LogManager.scala:1071)
at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:409)
at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at
java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
[2021-05-27 09:34:07,972] WARN [ReplicaManager broker=0] Broker 0 stopped
fetcher for partitions
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,fliptopic-0,__consumer_offsets-25,webhook-events-0,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,dokutopic-0,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,post_payment_topic-0,__consumer_offsets-18,__consumer_offsets-37,topic-0,events-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,disbursementtopic-0,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40,faspaytopic-0
and stopped moving logs for partitions because they are in the failed log
directory /tmp/kafka-logs. (kafka.server.ReplicaManager)
[2021-05-27 09:34:07,974] WARN Stopping serving logs in dir /tmp/kafka-logs
(kafka.log.LogManager)
[2021-05-27 09:34:07,983] ERROR Shutdown broker because all log dirs in
/tmp/kafka-logs have failed (kafka.log.LogManager)


Re: Issue using Https with elasticsearch source connector

2021-05-27 Thread Ran Lupovich
Try setting es.port = 9200, without the quotes?

On Thu, May 27, 2021, 04:21, sunil chaudhari <sunilmchaudhar...@gmail.com> wrote:

> Hello team,
> Can anyone help me with this issue?
>
>
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
>
>
> Regards,
> Sunil.
>


Does the consumer thread wait for the producer to return (synchronous) in a normal consume-process-produce topology? And how is it handled in Streams?

2021-05-27 Thread Pushkar Deole
Hi,

I am trying to understand a few things:

In a normal consume-process-produce topology, the consumer polls records,
processes each one, and then hands it to a producer to produce on the
destination topic. In this case, is the 'produce' a synchronous call, i.e.
does it happen in the same consumer thread, or does the produce take place
asynchronously in a background producer thread?

If asynchronous, then how can the consumer commit the offset before the
produce has happened successfully?
If synchronous, then the consumer thread is held until the produce happens,
possibly increasing consumer lag?
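With the plain clients this is in your hands: producer.send() is
asynchronous (it only enqueues the record; a background I/O thread does the
actual produce), so a consumer that commits manually should wait for the
returned futures before committing. A minimal Java sketch, with all
topic/group names invented for illustration; if I understand Streams
correctly, it follows the same idea in at-least-once mode by flushing its
producer before committing offsets:

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.Future;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ConsumeProcessProduce {
    public static void main(String[] args) throws Exception {
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", "processor-group");
        c.put("enable.auto.commit", "false"); // commit only after produce is acked
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c);
             KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            consumer.subscribe(List.of("source-topic"));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                List<Future<RecordMetadata>> acks = new ArrayList<>();
                for (ConsumerRecord<String, String> rec : records) {
                    String out = rec.value().toUpperCase(); // the "process" step
                    // send() returns immediately; the actual produce happens
                    // on the producer's background I/O thread
                    acks.add(producer.send(
                            new ProducerRecord<>("dest-topic", rec.key(), out)));
                }
                for (Future<RecordMetadata> ack : acks) ack.get(); // wait for acks
                consumer.commitSync(); // safe: everything polled has been produced
            }
        }
    }
}

Waiting on the futures does hold the consumer thread, which is exactly the
lag trade-off you describe; the alternatives are callback-based offset
tracking or the transactional producer (exactly-once).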