[DISCUSS] KIP-766: fetch/findSessions queries with open endpoints for SessionStore/WindowStore

2021-08-04 Thread Luke Chen
Hi everyone,

I'd like to start the discussion for *KIP-766: fetch/findSessions queries
with open endpoints for WindowStore/SessionStore*.

This is a follow-up KIP for KIP-763: Range queries with open endpoints
.
In KIP-763, we focused on *ReadOnlyKeyValueStore*. In this KIP, we'll focus
on *ReadOnlySessionStore* and *ReadOnlyWindowStore*, to support open-endpoint
queries for SessionStore/WindowStore.
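To illustrate what "open endpoints" means here, a minimal sketch using a plain TreeMap rather than the actual Kafka Streams store API (the class name and `fetch` signature are illustrative, not KIP-766's final API): a `null` bound is treated as unbounded on that side of the range.

```java
import java.util.*;

public class OpenEndpointStore {
    private final TreeMap<String, String> map = new TreeMap<>();

    public void put(String k, String v) { map.put(k, v); }

    // A null 'from' or 'to' means that endpoint is open (unbounded),
    // mirroring the semantics KIP-763/KIP-766 propose for range queries.
    public List<String> fetch(String from, String to) {
        NavigableMap<String, String> view = map;
        if (from != null) view = view.tailMap(from, true);
        if (to != null) view = view.headMap(to, true);
        return new ArrayList<>(view.values());
    }

    public static void main(String[] args) {
        OpenEndpointStore s = new OpenEndpointStore();
        s.put("a", "1"); s.put("b", "2"); s.put("c", "3");
        System.out.println(s.fetch(null, "b")); // prints [1, 2]
    }
}
```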

The KIP can be found here:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186876596


Thank you.
Luke


Re: [VOTE] KIP-717: Deprecate batch-size config from console producer

2021-08-04 Thread Kamal Chandraprakash
Bringing up this minor KIP.

The batch-size option is unused in the console producer. We should either
remove or deprecate it.

PR: https://github.com/apache/kafka/pull/10202/

On Fri, Apr 30, 2021 at 5:32 PM Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:

> Bumping up this thread.
>
> On Wed, Mar 17, 2021 at 7:44 PM Manikumar 
> wrote:
>
>> Hi Kamal,
>>
>> It looks like we just forgot this config when we removed the old producer
>> code. I think we don't require a KIP for this;
>> we can fix it directly with a minor PR.
>>
>> Thanks.
>>
>> On Wed, Mar 17, 2021 at 7:02 PM Dongjin Lee  wrote:
>>
>> > +1. (non-binding)
>> >
>> > Thanks,
>> > Dongjin
>> >
>> > On Thu, Mar 11, 2021 at 5:52 PM Manikumar 
>> > wrote:
>> >
>> > > +1 (binding). Thanks for the KIP
>> > > I think we can remove the config option as the config option is
>> unused.
>> > >
>> > > On Wed, Mar 10, 2021 at 3:06 PM Kamal Chandraprakash <
>> > > kamal.chandraprak...@gmail.com> wrote:
>> > >
>> > > > Hi,
>> > > >
>> > > > I'd like to start a vote on KIP-717 to remove batch-size config from
>> > the
>> > > > console producer.
>> > > >
>> > > > https://cwiki.apache.org/confluence/x/DB1RCg
>> > > >
>> > > > Thanks,
>> > > > Kamal
>> > > >
>> > >
>> > --
>> > *Dongjin Lee*
>> >
>> > *A hitchhiker in the mathematical world.*
>> >
>> >
>> >
>> > *github:  github.com/dongjinleekr
>> > keybase:
>> https://keybase.io/dongjinleekr
>> > linkedin:
>> kr.linkedin.com/in/dongjinleekr
>> > speakerdeck:
>> > speakerdeck.com/dongjin
>> > *
>> >
>>
>


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.0 #82

2021-08-04 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.0 #81

2021-08-04 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #389

2021-08-04 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-13160) Fix the code that calls the broker’s config handler to pass the expected default resource name when using KRaft.

2021-08-04 Thread Colin McCabe (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin McCabe resolved KAFKA-13160.
--
Resolution: Fixed

> Fix the code that calls the broker’s config handler to pass the expected 
> default resource name when using KRaft.
> 
>
> Key: KAFKA-13160
> URL: https://issues.apache.org/jira/browse/KAFKA-13160
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Ryan Dielhenn
>Assignee: Ryan Dielhenn
>Priority: Blocker
> Fix For: 3.0.0
>
>
> In a ZK cluster, dynamic default broker configs are stored in the zNode 
> /brokers/"<default>". Without this fix, when dynamic configs from snapshots are 
> processed by the KRaft brokers, the BrokerConfigHandler checks if the 
> resource name is "<default>" to do a default update, and otherwise converts the 
> resource name to an integer to do a per-broker config update.
> In KRaft, dynamic default broker configs are serialized in metadata with the 
> empty string instead of "<default>". This was causing the BrokerConfigHandler 
> to throw a NumberFormatException for dynamic default broker configs, since the 
> resource name for them is neither "<default>" nor a single integer. The code that 
> calls the handler method for config changes should be fixed to pass 
> "<default>" instead of the empty string to the handler method when using KRaft.
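A minimal sketch of the dispatch logic described above (hypothetical Java, not the actual broker code; the method name `resolveTarget` and the `-1` sentinel for "cluster default" are illustrative):

```java
public class BrokerConfigDispatch {
    // The ZK-era default entity name that BrokerConfigHandler expects.
    static final String DEFAULT_ENTITY_NAME = "<default>";

    /** Returns the broker id a dynamic config applies to, or -1 for the
     *  cluster default. KRaft serializes the default as the empty string,
     *  so it must be mapped back to the ZK-era name before the per-broker
     *  integer parse -- the gist of the KAFKA-13160 fix. */
    static int resolveTarget(String resourceName, boolean usingKRaft) {
        if (usingKRaft && resourceName.isEmpty())
            resourceName = DEFAULT_ENTITY_NAME;
        if (resourceName.equals(DEFAULT_ENTITY_NAME))
            return -1; // default update
        // Without the mapping above, "" lands here and throws NumberFormatException.
        return Integer.parseInt(resourceName);
    }

    public static void main(String[] args) {
        System.out.println(resolveTarget("", true));          // prints -1
        System.out.println(resolveTarget("<default>", false)); // prints -1
        System.out.println(resolveTarget("5", true));          // prints 5
    }
}
```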



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 3.0.0 release plan with new updated dates

2021-08-04 Thread José Armando García Sancio
Hey all,

For the KIP-500 work for 3.0 we would like to propose the following
Jiras as blockers:

1. https://issues.apache.org/jira/browse/KAFKA-13168
2. https://issues.apache.org/jira/browse/KAFKA-13165
3. https://issues.apache.org/jira/browse/KAFKA-13161

The description for each Jira should have more details.

Thanks,
-Jose

On Tue, Aug 3, 2021 at 12:14 PM Ryan Dielhenn
 wrote:
>
> Hi Konstantine,
>
> I would like to report another bug in KRaft.
>
> The ConfigHandler that processes dynamic broker config deltas in KRaft
> expects that the default resource name for dynamic broker configs is the
> old default entity name used in ZK: "<default>". Since dynamic default
> broker configs are persisted as the empty string in the quorum instead of
> "<default>", the brokers are not updating their default configuration
> when they see the empty string as a resource name in the config delta, and are
> throwing a NumberFormatException when they try to parse the resource name
> to process it as a per-broker configuration.
>
> I filed a JIRA: https://issues.apache.org/jira/browse/KAFKA-13160
>
> I also have a PR to fix this: https://github.com/apache/kafka/pull/11168
>
> I think that this should be a blocker for 3.0 because dynamic default
> broker configs will not be usable in KRaft otherwise.
>
> Best,
> Ryan Dielhenn
>
> On Sat, Jul 31, 2021 at 10:42 AM Konstantine Karantasis <
> kkaranta...@apache.org> wrote:
>
> > Thanks Ryan,
> >
> > Approved. Seems also like a low risk fix.
> > With that opportunity, let's make sure there are no other configs that
> > would need a similar validation.
> >
> > Konstantine
> >
> > On Fri, Jul 30, 2021 at 8:33 AM Ryan Dielhenn
> >  wrote:
> >
> > > Hey Konstantine,
> > >
> > > Thanks for the question. If these configs are not validated the user's
> > > experience will be affected and upgrades from 3.0 will be harder.
> > >
> > > Best,
> > > Ryan Dielhenn
> > >
> > > On Thu, Jul 29, 2021 at 3:59 PM Konstantine Karantasis <
> > > kkaranta...@apache.org> wrote:
> > >
> > > > Thanks for reporting this issue Ryan.
> > > >
> > > > I believe what you mention corresponds to the ticket you created here:
> > > > https://issues.apache.org/jira/projects/KAFKA/issues/KAFKA-13151
> > > >
> > > > What happens if the configurations are present but the broker doesn't
> > > fail
> > > > at startup when configured to run in KRaft mode?
> > > > Asking to see if we have any workarounds available.
> > > >
> > > > Thanks,
> > > > Konstantine
> > > >
> > > > On Thu, Jul 29, 2021 at 2:51 PM Ryan Dielhenn
> > > >  wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > Disregard log.clean.policy being included in this blocker.
> > > > >
> > > > > Best,
> > > > > Ryan Dielhenn
> > > > >
> > > > > On Thu, Jul 29, 2021 at 2:38 PM Ryan Dielhenn <
> > rdielh...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > Hey Konstantine,
> > > > > >
> > > > > > I'd like to report another bug in KRaft.
> > > > > >
> > > > > > log.cleanup.policy, alter.config.policy.class.name, and
> > > > > > create.topic.policy.class.name are all unsupported by KRaft but
> > > KRaft
> > > > > > servers allow them to be configured. I believe this should be
> > > > considered
> > > > > a
> > > > > > blocker and that KRaft servers should fail startup if any of these
> > > are
> > > > > > configured. I do not have a PR yet but will soon.
> > > > > >
> > > > > > On another note, I have a PR for the dynamic broker configuration
> > fix
> > > > > > here: https://github.com/apache/kafka/pull/11141
> > > > > >
> > > > > > Best,
> > > > > > Ryan Dielhenn
> > > > > >
> > > > > > On Wed, May 26, 2021 at 2:48 PM Konstantine Karantasis
> > > > > >  wrote:
> > > > > >
> > > > > >> Hi all,
> > > > > >>
> > > > > >> Please find below the updated release plan for the Apache Kafka
> > > 3.0.0
> > > > > >> release.
> > > > > >>
> > > > > >>
> > > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177046466
> > > > > >>
> > > > > >> New suggested dates for the release are as follows:
> > > > > >>
> > > > > >> KIP Freeze is 09 June 2021 (same date as in the initial plan)
> > > > > >> Feature Freeze is 30 June 2021 (new date, extended by two weeks)
> > > > > >> Code Freeze is 14 July 2021 (new date, extended by two weeks)
> > > > > >>
> > > > > >> At least two weeks of stabilization will follow Code Freeze.
> > > > > >>
> > > > > >> The release plan is up to date and currently includes all the
> > > approved
> > > > > >> KIPs
> > > > > >> that are targeting 3.0.0.
> > > > > >>
> > > > > >> Please let me know if you have any objections with the recent
> > > > extension
> > > > > of
> > > > > >> Feature Freeze and Code Freeze or any other concerns.
> > > > > >>
> > > > > >> Regards,
> > > > > >> Konstantine
> > > > > >>
> > > > > >
> > > > >
> > > >
> > >
> >



-- 
-Jose


[jira] [Created] (KAFKA-13168) KRaft observers should not have a replica id

2021-08-04 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-13168:
--

 Summary: KRaft observers should not have a replica id
 Key: KAFKA-13168
 URL: https://issues.apache.org/jira/browse/KAFKA-13168
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Reporter: Jose Armando Garcia Sancio
 Fix For: 3.0.0


To avoid a misconfiguration of a broker affecting the quorum of the cluster 
metadata partition, when a Kafka node is configured as broker-only the replica id 
for the KRaft client should be set to {{Optional::empty()}}.





Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #388

2021-08-04 Thread Apache Jenkins Server
See 




Request Permissions to Contribute

2021-08-04 Thread Jordan Bull
Hi,

I'd like to request permissions to contribute to Kafka to propose a KIP.

Wiki ID: jbull
Jira ID: jbull

Thank you,
Jordan



Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.0 #80

2021-08-04 Thread Apache Jenkins Server
See 




Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #387

2021-08-04 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-13167) KRaft broker should heartbeat immediately during controlled shutdown

2021-08-04 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-13167:
---

 Summary: KRaft broker should heartbeat immediately during 
controlled shutdown
 Key: KAFKA-13167
 URL: https://issues.apache.org/jira/browse/KAFKA-13167
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Controlled shutdown in KRaft is signaled through a heartbeat request with the 
`shouldShutDown` flag set to true. When we begin controlled shutdown, we should 
immediately schedule the next heartbeat instead of waiting for the next 
periodic heartbeat so that we can shutdown more quickly. Otherwise controlled 
shutdown can be delayed by several seconds.
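The idea can be sketched with a toy heartbeat-deadline tracker (hypothetical code; KRaft's actual broker lifecycle manager differs in detail):

```java
public class HeartbeatScheduler {
    private final long periodMs;
    private long nextDeadlineMs;

    public HeartbeatScheduler(long periodMs, long nowMs) {
        this.periodMs = periodMs;
        this.nextDeadlineMs = nowMs + periodMs; // normal periodic cadence
    }

    public long nextDeadlineMs() { return nextDeadlineMs; }

    public void onHeartbeatSent(long nowMs) {
        nextDeadlineMs = nowMs + periodMs;
    }

    // The fix: once controlled shutdown begins, schedule the next heartbeat
    // immediately instead of waiting out the remaining periodic interval.
    public void beginControlledShutdown(long nowMs) {
        nextDeadlineMs = nowMs;
    }

    public static void main(String[] args) {
        HeartbeatScheduler h = new HeartbeatScheduler(2000, 0);
        System.out.println(h.nextDeadlineMs()); // prints 2000
        h.beginControlledShutdown(500);
        System.out.println(h.nextDeadlineMs()); // prints 500
    }
}
```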





[jira] [Created] (KAFKA-13166) EOFException when Controller handles unknown API

2021-08-04 Thread David Arthur (Jira)
David Arthur created KAFKA-13166:


 Summary: EOFException when Controller handles unknown API
 Key: KAFKA-13166
 URL: https://issues.apache.org/jira/browse/KAFKA-13166
 Project: Kafka
  Issue Type: Bug
Reporter: David Arthur
Assignee: David Arthur


When ControllerApis handles an unsupported RPC, it silently drops the request 
due to an unhandled exception. 

The following stack trace was manually printed since this exception was 
suppressed on the controller. 
{code}
java.util.NoSuchElementException: key not found: UpdateFeatures
at scala.collection.MapOps.default(Map.scala:274)
at scala.collection.MapOps.default$(Map.scala:273)
at scala.collection.AbstractMap.default(Map.scala:405)
at scala.collection.mutable.HashMap.apply(HashMap.scala:425)
at kafka.network.RequestChannel$Metrics.apply(RequestChannel.scala:74)
at 
kafka.network.RequestChannel.$anonfun$updateErrorMetrics$1(RequestChannel.scala:458)
at 
kafka.network.RequestChannel.$anonfun$updateErrorMetrics$1$adapted(RequestChannel.scala:457)
at 
kafka.utils.Implicits$MapExtensionMethods$.$anonfun$forKeyValue$1(Implicits.scala:62)
at 
scala.collection.convert.JavaCollectionWrappers$JMapWrapperLike.foreachEntry(JavaCollectionWrappers.scala:359)
at 
scala.collection.convert.JavaCollectionWrappers$JMapWrapperLike.foreachEntry$(JavaCollectionWrappers.scala:355)
at 
scala.collection.convert.JavaCollectionWrappers$AbstractJMapWrapper.foreachEntry(JavaCollectionWrappers.scala:309)
at 
kafka.network.RequestChannel.updateErrorMetrics(RequestChannel.scala:457)
at kafka.network.RequestChannel.sendResponse(RequestChannel.scala:388)
at 
kafka.server.RequestHandlerHelper.sendErrorOrCloseConnection(RequestHandlerHelper.scala:93)
at 
kafka.server.RequestHandlerHelper.sendErrorResponseMaybeThrottle(RequestHandlerHelper.scala:121)
at 
kafka.server.RequestHandlerHelper.handleError(RequestHandlerHelper.scala:78)
at kafka.server.ControllerApis.handle(ControllerApis.scala:116)
at 
kafka.server.ControllerApis.$anonfun$handleEnvelopeRequest$1(ControllerApis.scala:125)
at 
kafka.server.ControllerApis.$anonfun$handleEnvelopeRequest$1$adapted(ControllerApis.scala:125)
at 
kafka.server.EnvelopeUtils$.handleEnvelopeRequest(EnvelopeUtils.scala:65)
at 
kafka.server.ControllerApis.handleEnvelopeRequest(ControllerApis.scala:125)
at kafka.server.ControllerApis.handle(ControllerApis.scala:103)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:75)
at java.lang.Thread.run(Thread.java:748)
{code}

This is due to a bug in the metrics code in RequestChannel.

The result is that the request fails, but no indication is given that it was 
due to an unsupported API on either the broker, controller, or client.
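A defensive-lookup sketch of the kind of fix implied (hypothetical Java; the real bug and fix live in RequestChannel's Scala metrics code, where a plain `map.apply(...)` threw NoSuchElementException for the unregistered UpdateFeatures key):

```java
import java.util.*;

public class ErrorMetrics {
    private final Map<String, Integer> errorCounts = new HashMap<>();
    // Illustrative set of APIs this listener registered metrics for.
    private final Set<String> knownApis = new HashSet<>(Arrays.asList("Fetch", "Produce"));

    /** Record an error; an unknown API name must not throw, so the error
     *  response can still be sent back to the client. */
    public void recordError(String apiName) {
        if (!knownApis.contains(apiName)) return; // skip (or lazily register) unknown keys
        errorCounts.merge(apiName, 1, Integer::sum);
    }

    public int count(String apiName) {
        return errorCounts.getOrDefault(apiName, 0);
    }

    public static void main(String[] args) {
        ErrorMetrics m = new ErrorMetrics();
        m.recordError("Fetch");
        m.recordError("UpdateFeatures"); // unknown: ignored, no exception
        System.out.println(m.count("Fetch")); // prints 1
    }
}
```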






[jira] [Created] (KAFKA-13165) Validate node id, process role and quorum voters

2021-08-04 Thread Jose Armando Garcia Sancio (Jira)
Jose Armando Garcia Sancio created KAFKA-13165:
--

 Summary: Validate node id, process role and quorum voters
 Key: KAFKA-13165
 URL: https://issues.apache.org/jira/browse/KAFKA-13165
 Project: Kafka
  Issue Type: Sub-task
  Components: kraft
Reporter: Jose Armando Garcia Sancio


Under certain configurations it is possible for the Kafka Server to boot up as a 
broker only but be the cluster metadata quorum leader. We should validate the 
configuration to avoid this case:
 # If {{process.roles}} contains {{controller}}, then the {{node.id}} needs 
to be in {{controller.quorum.voters}}.
 # If {{process.roles}} doesn't contain {{controller}}, then the {{node.id}} 
cannot be in {{controller.quorum.voters}}.
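The two rules can be sketched as a simple check (illustrative Java; not Kafka's actual KafkaConfig validation, and the parameter shapes are assumptions):

```java
import java.util.*;

public class KRaftConfigValidator {
    /** Rejects configurations where a broker-only node could end up in the
     *  metadata quorum, or a controller is missing from it. */
    static void validate(int nodeId, Set<String> processRoles, Set<Integer> quorumVoters) {
        boolean isController = processRoles.contains("controller");
        boolean isVoter = quorumVoters.contains(nodeId);
        if (isController && !isVoter)
            throw new IllegalArgumentException(
                "controller node.id " + nodeId + " must be in controller.quorum.voters");
        if (!isController && isVoter)
            throw new IllegalArgumentException(
                "broker-only node.id " + nodeId + " must not be in controller.quorum.voters");
    }

    public static void main(String[] args) {
        // Valid: a controller that is a quorum voter.
        validate(1, new HashSet<>(Arrays.asList("controller")), new HashSet<>(Arrays.asList(1)));
        // Invalid: a broker-only node listed as a quorum voter.
        try {
            validate(2, new HashSet<>(Arrays.asList("broker")), new HashSet<>(Arrays.asList(2)));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```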





[jira] [Reopened] (KAFKA-10413) rebalancing leads to unevenly balanced connectors

2021-08-04 Thread yazgoo (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yazgoo reopened KAFKA-10413:


Hi,

I'm reopening this because I'm still seeing this issue with CP 6.2.0, which 
ships with 2.8.0, which is marked as a fixed version.
We basically see the same behavior as mentioned in the issue description.

> rebalancing leads to unevenly balanced connectors
> -
>
> Key: KAFKA-10413
> URL: https://issues.apache.org/jira/browse/KAFKA-10413
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.5.1
>Reporter: yazgoo
>Assignee: rameshkrishnan muthusamy
>Priority: Major
> Fix For: 2.4.2, 2.5.2, 2.8.0, 2.7.1, 2.6.2
>
> Attachments: connect_worker_balanced.png
>
>
> Hi,
> With CP 5.5, running the Kafka Connect S3 sink on EC2 with autoscaling enabled, 
> if a Connect instance disappears, or a new one appears, we're seeing unbalanced 
> consumption, much like mentioned in this post:
> [https://stackoverflow.com/questions/58644622/incremental-cooperative-rebalancing-leads-to-unevenly-balanced-connectors]
> This usually leads to one Kafka Connect instance taking most of the load and 
> consumption not being able to keep up.
> Currently, we're "fixing" this by deleting the connector and re-creating it, 
> but this is far from ideal.
> Any suggestions on what we could do to mitigate this?





[jira] [Created] (KAFKA-13164) State store is attached to wrong node in the Kafka Streams topology

2021-08-04 Thread Ralph Matthias Debusmann (Jira)
Ralph Matthias Debusmann created KAFKA-13164:


 Summary: State store is attached to wrong node in the Kafka 
Streams topology
 Key: KAFKA-13164
 URL: https://issues.apache.org/jira/browse/KAFKA-13164
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.8.0
 Environment: local development (MacOS Big Sur 11.4)
Reporter: Ralph Matthias Debusmann
 Attachments: 1.jpg, 3.jpg

Hi,

mjsax and I noticed a bug where a state store is attached to the wrong node in 
the Kafka Streams topology.

The issue arose when I tried to read a topic into a KTable, then continued 
with a mapValues(), and then joined this KTable with a KStream, like so:
 
var kTable = this.streamsBuilder.table().mapValues();
 
and then later:
 
var joinedKStream = kstream.leftJoin(kTable, );
 
The join didn't work, and neither did it work when I added Materialized.as() to 
mapValues(), like so:
var kTable = this.streamsBuilder.table().mapValues(, 
*Materialized.as()*);
 
Interestingly, I could get the join to work when I first read the topic into 
a *KStream*, then continued with the mapValues(), then turned the KStream into 
a KTable, and then joined the KTable with the other KStream, like so:
 
var kTable = this.streamsBuilder.stream().mapValues().toTable();
 
(the join worked the same as above)
 
When mjsax and I had a look at the topology, we could see that in the former, 
non-working code, the state store (required for the join) is attached to the 
pre-final "KTABLE-SOURCE" node, and not to the final "KTABLE-MAPVALUES" node (see 
attachment "1.jpg"). In the working code, the state store is (correctly) 
attached to the final "KSTREAM-TOTABLE" node (see attachment "3.jpg").
 
Best regards,
xdgrulez
 
 
 





[jira] [Resolved] (KAFKA-9805) Running MirrorMaker in a Connect cluster,but the task not running

2021-08-04 Thread Andras Katona (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Katona resolved KAFKA-9805.
--
Resolution: Duplicate

> Running MirrorMaker in a Connect cluster,but the task not running
> -
>
> Key: KAFKA-9805
> URL: https://issues.apache.org/jira/browse/KAFKA-9805
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.4.1
> Environment: linux
>Reporter: ZHAO GH
>Priority: Major
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> When I am running MirrorMaker in a Connect cluster, sometimes the tasks run, 
> but sometimes the tasks cannot be assigned.
> I posted the connector config to the Connect cluster; here is my config:
> http://99.12.98.33:8083/connectors
> {
>   "name": "kafka->kafka241-3",
>   "config": {
>     "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
>     "topics": "MM2-3",
>     "key.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
>     "value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
>     "tasks.max": 8,
>     "sasl.mechanism": "PLAIN",
>     "security.protocol": "SASL_PLAINTEXT",
>     "sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "source.cluster.alias": "kafka",
>     "source.cluster.bootstrap.servers": "55.13.104.70:9092,55.13.104.74:9092,55.13.104.126:9092",
>     "source.admin.sasl.mechanism": "PLAIN",
>     "source.admin.security.protocol": "SASL_PLAINTEXT",
>     "source.admin.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "target.cluster.alias": "kafka241",
>     "target.cluster.bootstrap.servers": "55.14.111.22:9092,55.14.111.23:9092,55.14.111.25:9092",
>     "target.admin.sasl.mechanism": "PLAIN",
>     "target.admin.security.protocol": "SASL_PLAINTEXT",
>     "target.admin.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "producer.sasl.mechanism": "PLAIN",
>     "producer.security.protocol": "SASL_PLAINTEXT",
>     "producer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "consumer.sasl.mechanism": "PLAIN",
>     "consumer.security.protocol": "SASL_PLAINTEXT",
>     "consumer.sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"admin\" password=\"admin\";",
>     "consumer.group.id": "mm2-1"
>   }
> }
>  
> But when I get the connector status, I find no tasks running:
> http://99.12.98.33:8083/connectors/kafka->kafka241-3/status
> {
>  "name": "kafka->kafka241-3",
>  "connector": {
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  },
>  "tasks": [],
>  "type": "source"
> }
>  
> But sometimes the tasks run successfully:
> http://99.12.98.33:8083/connectors/kafka->kafka241-1/status
> {
>  "name": "kafka->kafka241-1",
>  "connector": {
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  },
>  "tasks": [
>  {
>  "id": 0,
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  },
>  {
>  "id": 1,
>  "state": "RUNNING",
>  "worker_id": "99.12.98.33:8083"
>  },
>  {
>  "id": 2,
>  "state": "RUNNING",
>  "worker_id": "99.12.98.34:8083"
>  }
>  ],
>  "type": "source"
> }
> Has anybody met this problem? How can it be fixed? Is it a bug?





[jira] [Created] (KAFKA-13163) Issue with MySql worker sink connector

2021-08-04 Thread Muddam Pullaiah Yadav (Jira)
Muddam Pullaiah Yadav created KAFKA-13163:
-

 Summary: Issue with MySql worker sink connector
 Key: KAFKA-13163
 URL: https://issues.apache.org/jira/browse/KAFKA-13163
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.1.1
 Environment: PreProd
Reporter: Muddam Pullaiah Yadav


Please help with the following issue. Really appreciate it! 

 

We are using Azure HDInsight Kafka cluster 

My sink Properties:

 

cat mysql-sink-connector
{
  "name": "mysql-sink-connector",
  "config": {
    "tasks.max": "2",
    "batch.size": "1000",
    "batch.max.rows": "1000",
    "poll.interval.ms": "500",
    "connector.class": "org.apache.kafka.connect.file.FileStreamSinkConnector",
    "connection.url": "jdbc:mysql://moddevdb.mysql.database.azure.com:3306/db_grab_dev",
    "table.name": "db_grab_dev.tbl_clients_merchants",
    "topics": "test",
    "connection.user": "grabmod",
    "connection.password": "#admin",
    "auto.create": "true",
    "auto.evolve": "true",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "false",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true"
  }
}

 

[2021-08-04 11:18:30,234] ERROR WorkerSinkTask{id=mysql-sink-connector-0} Task 
threw an uncaught and unrecoverable exception 
(org.apache.kafka.connect.runtime.WorkerTask:177)
 org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error 
handler
 at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
 at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
 at 
org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:514)
 at 
org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:491)
 at 
org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
 at 
org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)
 at 
org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)
 at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
 at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
 at java.util.concurrent.FutureTask.run(FutureTask.java:266)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
 Caused by: org.apache.kafka.connect.errors.DataException: Unknown schema type: 
null
 at 
org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:743)
 at 
org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:363)
 at 
org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:514)
 at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
 at 
org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
 ... 13 more
 [2021-08-04 11:18:30,234] ERROR WorkerSinkTask{id=mysql-sink-connector-0} 
Task is being killed and will not recover until manually restarted 
(org.apache.kafka.connect.runtime.WorkerTask:178)
 [2021-08-04 11:18:30,235] INFO [Consumer clientId=consumer-18, 
groupId=connect-mysql-sink-connector] Sending LeaveGroup request to coordinator 
wn2-grabde.fkgw2p1emuqu5d21xcbqrhqqbf.rx.internal.cloudapp.net:9092 (id: 
2147482646 rack: null) 
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:782)


