[jira] [Created] (KAFKA-16900) kafka-producer-perf-test reports an error when using transactions
Chen He created KAFKA-16900:
--------------------------------

             Summary: kafka-producer-perf-test reports an error when using transactions
                 Key: KAFKA-16900
                 URL: https://issues.apache.org/jira/browse/KAFKA-16900
             Project: Kafka
          Issue Type: Bug
          Components: producer
    Affects Versions: 2.9
            Reporter: Chen He


I am encountering the same issue as the one discussed in [https://lists.apache.org/thread/dmrbx8kzv2w5t1v0xjvyjbp5y23omlq8]. I could not find version 2.13 in the Affects Versions list, so I marked the latest version it provided, 2.9. Please feel free to change it if needed.
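For context, the failing path is the transactional producer that kafka-producer-perf-test drives when a transactional id is configured. A minimal sketch of that send path, assuming a local broker and a made-up topic and transactional id (this is illustrative, not the tool's actual code):

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TxnPerfSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("transactional.id", "perf-txn-1");       // enables the transactional path

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            for (int i = 0; i < 1000; i++) {
                // hypothetical topic name; 100-byte records as a stand-in for perf-test payloads
                producer.send(new ProducerRecord<>("perf-test", new byte[100]));
            }
            producer.commitTransaction();
        }
    }
}
{code}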
[jira] [Created] (KAFKA-15983) kafka-acls should report that authorization is already done when a duplicate request is issued
Chen He created KAFKA-15983:
--------------------------------

             Summary: kafka-acls should report that authorization is already done when a duplicate request is issued
                 Key: KAFKA-15983
                 URL: https://issues.apache.org/jira/browse/KAFKA-15983
             Project: Kafka
          Issue Type: Improvement
          Components: security
    Affects Versions: 3.6.0
            Reporter: Chen He


The kafka-acls.sh command always performs the requested operation, even when the customer has already authorized that user. Instead of doing the same work again and again, it should report something like "user {} already authorized with {} resources".
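One way to implement this would be to look up the requested binding before creating it. A hedged sketch using the public Admin API, with an illustrative topic and principal (none of this is kafka-acls internals):

{code:java}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class AclIdempotencyCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        try (Admin admin = Admin.create(props)) {
            AclBinding binding = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "payments", PatternType.LITERAL),
                new AccessControlEntry("User:alice", "*",
                    AclOperation.READ, AclPermissionType.ALLOW));

            // Check whether an identical binding already exists before creating it.
            boolean exists = !admin.describeAcls(binding.toFilter()).values().get().isEmpty();
            if (exists) {
                System.out.println("user User:alice already authorized for topic payments");
            } else {
                admin.createAcls(Collections.singleton(binding)).all().get();
            }
        }
    }
}
{code}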
[jira] [Created] (KAFKA-7381) Parameterize connector rebalancing behavior
Chen He created KAFKA-7381:
-------------------------------

             Summary: Parameterize connector rebalancing behavior
                 Key: KAFKA-7381
                 URL: https://issues.apache.org/jira/browse/KAFKA-7381
             Project: Kafka
          Issue Type: Improvement
          Components: KafkaConnect
    Affects Versions: 1.0.0
            Reporter: Chen He
            Assignee: Chen He


I have a question about connector rebalancing. Why don't we make it optional, i.e. add a parameter that turns it on or off, instead of making it mandatory? We could add a parameter like "connector.rebalancing.enable" and default it to "true", which would let users turn rebalancing off if they want. There are cases where connector rebalancing is not needed. A sketch of the proposed worker configuration is shown below.
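To make the proposal concrete, the worker configuration could look like the following. Note that "connector.rebalancing.enable" is the property proposed in this ticket, not one that exists in any released Kafka:

{code}
# distributed worker properties (hypothetical example)
bootstrap.servers=localhost:9092
group.id=connect-cluster

# proposed flag from this ticket: set to false to opt out of connector rebalancing
connector.rebalancing.enable=false
{code}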
[jira] [Created] (KAFKA-6602) Support saving Kafka credentials in a Java KeyStore on a ZooKeeper node
Chen He created KAFKA-6602:
-------------------------------

             Summary: Support saving Kafka credentials in a Java KeyStore on a ZooKeeper node
                 Key: KAFKA-6602
                 URL: https://issues.apache.org/jira/browse/KAFKA-6602
             Project: Kafka
          Issue Type: New Feature
          Components: security
            Reporter: Chen He


Kafka Connect needs to talk to multifarious distributed systems, and each system has its own authentication mechanism. How to manage these credentials becomes a common problem. Here are my thoughts:
# We may need to save credentials in a Java KeyStore;
# We may need to put this keystore in a distributed system (a topic or ZooKeeper);
# The keystore password may be configured in the Kafka configuration.
I have implemented a feature that stores a Java KeyStore in a ZooKeeper node; a sketch of the loading side is shown below. If the Kafka community likes this idea, I am happy to contribute it.
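A minimal sketch of the loading side, assuming the raw JKS bytes were written to a znode beforehand; the znode path, session timeout, and password handling are illustrative, not the proposed implementation:

{code:java}
import java.io.ByteArrayInputStream;
import java.security.KeyStore;
import org.apache.zookeeper.ZooKeeper;

public class ZkKeyStoreLoader {
    public static KeyStore load(String zkConnect, String znode, char[] password) throws Exception {
        ZooKeeper zk = new ZooKeeper(zkConnect, 30000, event -> { });  // no-op watcher
        try {
            byte[] jksBytes = zk.getData(znode, false, null);  // raw JKS file stored in the znode
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(new ByteArrayInputStream(jksBytes), password);  // password from Kafka config
            return ks;
        } finally {
            zk.close();
        }
    }
}

// Usage (hypothetical path):
//   KeyStore ks = ZkKeyStoreLoader.load("localhost:2181", "/kafka/credentials.jks", password);
{code}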
[jira] [Created] (KAFKA-6353) Connector status shows FAILED but the task is actually in RUNNING status
Chen He created KAFKA-6353:
-------------------------------

             Summary: Connector status shows FAILED but the task is actually in RUNNING status
                 Key: KAFKA-6353
                 URL: https://issues.apache.org/jira/browse/KAFKA-6353
             Project: Kafka
          Issue Type: Bug
          Components: KafkaConnect
    Affects Versions: 0.10.2.1
            Reporter: Chen He


{
  "name": "test",
  "connector": {
    "state": "FAILED",
    "trace": "ERROR MESSAGE",
    "worker_id": "localhost:8083"
  },
  "tasks": [
    {"state": "RUNNING", "id": 0, "worker_id": "localhost:8083"}
  ]
}
[jira] [Created] (KAFKA-5935) Kafka connect should provide configurable CONNECTOR_EXCLUDES
Chen He created KAFKA-5935:
-------------------------------

             Summary: Kafka connect should provide configurable CONNECTOR_EXCLUDES
                 Key: KAFKA-5935
                 URL: https://issues.apache.org/jira/browse/KAFKA-5935
             Project: Kafka
          Issue Type: Improvement
            Reporter: Chen He
            Priority: Minor


In o.a.kafka.connect.runtime.rest.resources.ConnectorPluginsResource there is a CONNECTOR_EXCLUDES list that is in charge of filtering the connector classes shown through the REST API. It would be great to have a similar configuration in kafka-connect.properties that excludes given connectors. Then we could select which connectors are exposed through the REST API instead of directly posting every discovered class name there. A sketch of the filtering is shown below.
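A hedged sketch of what the configurable filter could look like, assuming a worker property named "connector.excludes" holding a comma-separated list of class names; both the property name and this helper are hypothetical:

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class ConnectorExcludeFilter {
    /** Returns only the connector class names not excluded by the config value. */
    public static List<String> filter(List<String> discovered, String excludesProp) {
        Set<String> excludes = Arrays.stream(excludesProp.split(","))
                .map(String::trim)
                .collect(Collectors.toSet());
        return discovered.stream()
                .filter(className -> !excludes.contains(className))
                .collect(Collectors.toList());
    }
}

// e.g. filter(plugins, "org.example.MockSourceConnector,org.example.MockSinkConnector")
{code}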
[jira] [Created] (KAFKA-5705) Kafka server start fails and reports "unsafe memory access operation"
Chen He created KAFKA-5705:
-------------------------------

             Summary: Kafka server start fails and reports "unsafe memory access operation"
                 Key: KAFKA-5705
                 URL: https://issues.apache.org/jira/browse/KAFKA-5705
             Project: Kafka
          Issue Type: Bug
          Components: log
    Affects Versions: 0.10.2.0
            Reporter: Chen He


[2017-08-02 15:50:23,361] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
	at kafka.log.TimeIndex$$anonfun$maybeAppend$1.apply$mcV$sp(TimeIndex.scala:128)
	at kafka.log.TimeIndex$$anonfun$maybeAppend$1.apply(TimeIndex.scala:107)
	at kafka.log.TimeIndex$$anonfun$maybeAppend$1.apply(TimeIndex.scala:107)
	at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
	at kafka.log.TimeIndex.maybeAppend(TimeIndex.scala:107)
	at kafka.log.LogSegment.recover(LogSegment.scala:252)
	at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:231)
	at kafka.log.Log$$anonfun$loadSegments$4.apply(Log.scala:188)
	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
	at kafka.log.Log.loadSegments(Log.scala:188)
	at kafka.log.Log.<init>(Log.scala:116)
	at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$3$$anonfun$apply$10$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:157)
	at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
[jira] [Created] (KAFKA-5518) General Kafka connector performance workload
Chen He created KAFKA-5518:
-------------------------------

             Summary: General Kafka connector performance workload
                 Key: KAFKA-5518
                 URL: https://issues.apache.org/jira/browse/KAFKA-5518
             Project: Kafka
          Issue Type: Bug
          Components: KafkaConnect
    Affects Versions: 0.10.2.1
            Reporter: Chen He


Sorry, this is my first time creating a Kafka JIRA. I am just curious whether there is a general-purpose performance workload for Kafka connectors (HDFS, S3, etc.). Then we could set up a standard and evaluate the performance of further connectors, such as Swift. Please feel free to comment, or to mark this as a duplicate if there is already a JIRA tracking it.
[jira] [Commented] (KAFKA-3554) Generate actual data with specific compression ratio and add multi-thread support in the ProducerPerformance tool.
[ https://issues.apache.org/jira/browse/KAFKA-3554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16001535#comment-16001535 ]

Chen He commented on KAFKA-3554:
--------------------------------

This contribution is really valuable. Why has it not been checked in? And if it is not resolved, why is a fix version marked?

> Generate actual data with specific compression ratio and add multi-thread
> support in the ProducerPerformance tool.
> -------------------------------------------------------------------------
>
>                 Key: KAFKA-3554
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3554
>             Project: Kafka
>          Issue Type: Improvement
>    Affects Versions: 0.9.0.1
>            Reporter: Jiangjie Qin
>            Assignee: Jiangjie Qin
>             Fix For: 0.11.0.0
>
> Currently the ProducerPerformance tool always generates payloads with the same bytes. This does not work well for testing compressed data, because the payload is extremely compressible no matter how big it is.
> We can make some changes to make it more useful for compressed messages. Currently I am generating a payload containing integers from a given range; by adjusting the range of the integers, we can get different compression ratios.
> API-wise, we can either let the user specify the integer range or the expected compression ratio (we would do some probing to find the corresponding range for the user).
> Besides that, in many cases it is useful to have multiple producer threads when the producer threads themselves are the bottleneck. Admittedly, people can run multiple ProducerPerformance instances to achieve a similar result, but that is still different from the real case where people actually use the producer.
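The payload scheme quoted above is easy to illustrate: draw integers from a configurable range, where a narrower range repeats values more often and therefore compresses better. A hedged sketch (the names are mine, not the patch's):

{code:java}
import java.nio.ByteBuffer;
import java.util.Random;

public class CompressiblePayload {
    /** Fills a payload of sizeBytes with random ints drawn from [0, valueRange). */
    public static byte[] generate(int sizeBytes, int valueRange, Random random) {
        ByteBuffer buffer = ByteBuffer.allocate(sizeBytes);
        while (buffer.remaining() >= Integer.BYTES) {
            buffer.putInt(random.nextInt(valueRange));  // small range => highly compressible
        }
        return buffer.array();
    }
}

// e.g. generate(1024, 16, new Random(42)) compresses far better than
//      generate(1024, Integer.MAX_VALUE, new Random(42))
{code}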
[jira] [Commented] (KAFKA-1689) automatic migration of log dirs to new locations
[ https://issues.apache.org/jira/browse/KAFKA-1689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14241540#comment-14241540 ]

Chen He commented on KAFKA-1689:
--------------------------------

Just a newbie to the Kafka community; maybe this one is a toy that I can play with.

> automatic migration of log dirs to new locations
> ------------------------------------------------
>
>                 Key: KAFKA-1689
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1689
>             Project: Kafka
>          Issue Type: New Feature
>          Components: config, core
>    Affects Versions: 0.8.1.1
>            Reporter: Javier Alba
>            Priority: Minor
>              Labels: newbie++
>
> There is no automated way in Kafka 0.8.1.1 to migrate log data if we want to reconfigure our cluster nodes to use several data directories, where we have mounted new disks, instead of our original data directory.
> For example, say we have our brokers configured with:
>   log.dirs = /tmp/kafka-logs
> And we added 3 new disks and now we want our brokers to use them as log.dirs:
>   log.dirs = /srv/data/1,/srv/data/2,/srv/data/3
> It would be great to have an automated way of doing such a migration, of course without losing the data currently in the cluster. Ideally we would be able to do this migration without losing service.