[jira] [Commented] (KAFKA-7164) Follower should truncate after every leader epoch change

2018-07-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16563169#comment-16563169
 ] 

ASF GitHub Bot commented on KAFKA-7164:
---

bob-barrett opened a new pull request #5436: KAFKA-7164: Follower should 
truncate after every leader epoch change
URL: https://github.com/apache/kafka/pull/5436
 
 
   Currently, we skip the steps to make a replica a follower if the leader does 
not change, including truncating the follower log if necessary. This can cause 
problems if the follower has missed one or more leader updates. This change 
skips those steps only if the new epoch is the same as or one greater than the 
old epoch. Tested with unit tests that verify the behavior of Partition.scala 
and that show log truncation when the follower's log is ahead of the leader's, 
when the follower has missed an epoch update, and when the follower receives a 
LeaderAndIsrRequest making it a follower.
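   
   As a minimal sketch of the changed condition (method and parameter names 
here are hypothetical; the real check lives in Partition.scala):
   
   ```java
   // Skip the become-follower steps (including truncation) only when the
   // leader is unchanged AND no epoch update was missed, i.e. the new epoch
   // equals the old epoch or is exactly one greater.
   static boolean skipBecomeFollowerSteps(int oldEpoch, int newEpoch, boolean leaderChanged) {
       return !leaderChanged && (newEpoch == oldEpoch || newEpoch == oldEpoch + 1);
   }
   ```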
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Follower should truncate after every leader epoch change
> 
>
> Key: KAFKA-7164
> URL: https://issues.apache.org/jira/browse/KAFKA-7164
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Bob Barrett
>Priority: Major
>
> Currently we skip log truncation for followers if a LeaderAndIsr request is 
> received, but the leader does not change. This can lead to log divergence if 
> the follower missed a leader change before the current known leader was 
> reelected. Basically the problem is that the leader may truncate its own log 
> prior to becoming leader again, so the follower would need to reconcile its 
> log again.
> For example, suppose that we have three replicas: r1, r2, and r3. Initially, 
> r1 is the leader in epoch 0 and writes one record at offset 0. r3 replicates 
> this successfully.
> {code}
> r1: 
>   status: leader
>   epoch: 0
>   log: [{id: 0, offset: 0, epoch:0}]
> r2: 
>   status: follower
>   epoch: 0
>   log: []
> r3: 
>   status: follower
>   epoch: 0
>   log: [{id: 0, offset: 0, epoch:0}]
> {code}
> Suppose then that r2 becomes leader in epoch 1. r1 notices the leader change 
> and truncates, but r3, for whatever reason, does not.
> {code}
> r1: 
>   status: follower
>   epoch: 1
>   log: []
> r2: 
>   status: leader
>   epoch: 1
>   log: []
> r3: 
>   status: follower
>   epoch: 0
>   log: [{id: 0, offset: 0, epoch:0}]
> {code}
> Now suppose that r2 fails and r1 becomes the leader in epoch 2. Immediately 
> it writes a new record:
> {code}
> r1: 
>   status: leader
>   epoch: 2
>   log: [{id: 1, offset: 0, epoch:2}]
> r2: 
>   status: follower
>   epoch: 2
>   log: []
> r3: 
>   status: follower
>   epoch: 0
>   log: [{id: 0, offset: 0, epoch:0}]
> {code}
> If the replica continues fetching with the old epoch, we can have log 
> divergence as noted in KAFKA-6880. However, if r3 successfully receives the 
> new LeaderAndIsr request which updates the epoch to 2, but skips the 
> truncation, then the logs will stay inconsistent.
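> A self-contained toy model of the missed step (all names here are 
> hypothetical; Kafka's real mechanism is the OffsetsForLeaderEpoch fetch): 
> r3 asks the leader for the end offset of its last local epoch and truncates 
> from there, removing the divergent record.
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> 
> class EpochTruncationSketch {
>     record Entry(int id, long offset, int epoch) {}
> 
>     // End offset of the given epoch (or below) in the leader's log;
>     // 0 if the leader has no such records, as for epoch 0 here.
>     static long leaderEndOffsetForEpoch(List<Entry> leaderLog, int epoch) {
>         return leaderLog.stream()
>                 .filter(e -> e.epoch() <= epoch)
>                 .mapToLong(e -> e.offset() + 1)
>                 .max().orElse(0L);
>     }
> 
>     public static void main(String[] args) {
>         List<Entry> r1 = List.of(new Entry(1, 0, 2));                   // leader, epoch 2
>         List<Entry> r3 = new ArrayList<>(List.of(new Entry(0, 0, 0)));  // stale follower
>         long truncateAt = leaderEndOffsetForEpoch(r1, r3.get(r3.size() - 1).epoch()); // 0
>         r3.removeIf(e -> e.offset() >= truncateAt);  // drops {id: 0, offset: 0, epoch: 0}
>         System.out.println(r3);                      // [] -> r3 can now re-fetch from r1
>     }
> }
> {code}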



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (KAFKA-7216) Exception while running kafka-acls.sh from 1.0 env on target Kafka env with 1.1.1

2018-07-30 Thread Satish Duggana (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated KAFKA-7216:
--
Affects Version/s: 2.0.0

> Exception while running kafka-acls.sh from 1.0 env on target Kafka env with 
> 1.1.1
> -
>
> Key: KAFKA-7216
> URL: https://issues.apache.org/jira/browse/KAFKA-7216
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.1.1, 2.0.0
>Reporter: Satish Duggana
>Priority: Major
>
> When `kafka-acls.sh` is run with SimpleAclAuthorizer against a target Kafka 
> cluster on version 1.1.1, it throws the below error.
> {code:java}
> kafka.common.KafkaException: DelegationToken not a valid resourceType name. 
> The valid names are Topic,Group,Cluster,TransactionalId
>   at 
> kafka.security.auth.ResourceType$$anonfun$fromString$1.apply(ResourceType.scala:56)
>   at 
> kafka.security.auth.ResourceType$$anonfun$fromString$1.apply(ResourceType.scala:56)
>   at scala.Option.getOrElse(Option.scala:121)
>   at kafka.security.auth.ResourceType$.fromString(ResourceType.scala:56)
>   at 
> kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1$$anonfun$apply$mcV$sp$1.apply(SimpleAclAuthorizer.scala:233)
>   at 
> kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1$$anonfun$apply$mcV$sp$1.apply(SimpleAclAuthorizer.scala:232)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at 
> kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1.apply$mcV$sp(SimpleAclAuthorizer.scala:232)
>   at 
> kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1.apply(SimpleAclAuthorizer.scala:230)
>   at 
> kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1.apply(SimpleAclAuthorizer.scala:230)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:216)
>   at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:224)
>   at 
> kafka.security.auth.SimpleAclAuthorizer.loadCache(SimpleAclAuthorizer.scala:230)
>   at 
> kafka.security.auth.SimpleAclAuthorizer.configure(SimpleAclAuthorizer.scala:114)
>   at kafka.admin.AclCommand$.withAuthorizer(AclCommand.scala:83)
>   at kafka.admin.AclCommand$.addAcl(AclCommand.scala:93)
>   at kafka.admin.AclCommand$.main(AclCommand.scala:53)
>   at kafka.admin.AclCommand.main(AclCommand.scala)
> {code}
>  
>  This is because it tries to fetch all the resource types registered under 
> the ZK path, and it throws an error when the `DelegationToken` resource is 
> not defined in the `ResourceType` of the client's Kafka version (which is 
> earlier than 1.1.x).
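> A self-contained illustration of the failure (hypothetical enum, not Kafka's 
> actual ResourceType class): the pre-1.1 client's set of known names simply 
> has no entry for "DelegationToken", so the fallback throws, matching the 
> getOrElse frame in the trace above.
> {code:java}
> import java.util.Arrays;
> 
> enum OldClientResourceType {
>     Topic, Group, Cluster, TransactionalId;
> 
>     static OldClientResourceType fromString(String name) {
>         return Arrays.stream(values())
>                 .filter(t -> t.name().equalsIgnoreCase(name))
>                 .findFirst()
>                 // Mirrors the Option.getOrElse(throw ...) in the stack trace
>                 .orElseThrow(() -> new RuntimeException(
>                         name + " not a valid resourceType name. The valid "
>                                 + "names are " + Arrays.toString(values())));
>     }
> 
>     public static void main(String[] args) {
>         fromString("DelegationToken");  // throws on a pre-1.1 client
>     }
> }
> {code}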



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7217) Loading dynamic topic data into kafka connector sink using regex

2018-07-30 Thread Pratik Gaglani (JIRA)
Pratik Gaglani created KAFKA-7217:
-

 Summary: Loading dynamic topic data into kafka connector sink 
using regex
 Key: KAFKA-7217
 URL: https://issues.apache.org/jira/browse/KAFKA-7217
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 1.1.0
Reporter: Pratik Gaglani


KAFKA-3074 added the ability to use a regex to select topics in connectors; 
however, it seems that data from topics created after the connector has started 
is not consumed until the connector is restarted. We need to dynamically add 
new topics and have the connector consume them based on the regex defined in 
the connector properties. How can this be achieved? For example, with regex 
topic-.* and existing topics topic-1 and topic-2: if I introduce a new topic 
topic-3, how can I make the connector consume its data without restarting the 
connector?
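
A minimal standalone sink configuration illustrating the setup (the connector 
class and file path are placeholders; `topics.regex` is the property added by 
KAFKA-3074):
{code}
name=regex-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=1
file=/tmp/regex-sink.out
topics.regex=topic-.*
{code}
On 1.1.0 this matches topic-1 and topic-2 at startup, but a later topic-3 is 
reportedly only picked up after the connector restarts.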



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7131) Update release script to generate announcement email text

2018-07-30 Thread bibin sebastian (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562228#comment-16562228
 ] 

bibin sebastian commented on KAFKA-7131:


[~mjsax] ^^

> Update release script to generate announcement email text
> -
>
> Key: KAFKA-7131
> URL: https://issues.apache.org/jira/browse/KAFKA-7131
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Matthias J. Sax
>Assignee: bibin sebastian
>Priority: Minor
>  Labels: newbie
>
> When a release is finalized, we send out an email to announce the release. 
> Atm, we have a template in the wiki 
> (https://cwiki.apache.org/confluence/display/KAFKA/Release+Process). However, 
> the template needs some manual changes to fill in the release number, number 
> of contributors, etc.
> Some parts could be automated – the corresponding commands are documented in 
> the wiki already.
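> As a hedged sketch, the contributor count is the kind of value the wiki 
> derives with a one-liner over the release range (the tag names here are 
> examples):
> {code}
> git shortlog -sn --no-merges 1.1.0..2.0.0 | wc -l
> {code}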



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2018-07-30 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562215#comment-16562215
 ] 

Guozhang Wang commented on KAFKA-5998:
--

This is what I have discovered so far:

Before 1.1.0, even a stateless task would try to write a checkpoint file into 
its directory, but the parent directory would not be created, since during the 
initialization of the task we determined it was stateless. On different 
file-systems, the behavior of new FileOutputStream(File) is really 
"undefined": some filesystems allow the constructor to auto-create the parent 
directory with this call, some do not, and some allow auto-creation 
"sometimes". I think this is what's observed in this ticket.

In https://issues.apache.org/jira/browse/KAFKA-6499 we have a related fix 
that avoids writing a checkpoint file at all. BUT in 
https://issues.apache.org/jira/browse/KAFKA-6767 we still see people reporting 
similar issues on 1.1.0. I've looked into the source code of 1.1 but cannot 
find obvious bugs that could cause the checkpointable offsets to be non-empty, 
i.e. cases where we would still try to write the checkpoint file. For now I'd 
suggest upgrading to 1.1.0 or even 2.0.0 to see if this issue has been 
resolved by KAFKA-6499.
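
A minimal reproduction of that constructor behavior (the paths here are 
hypothetical): on a typical local filesystem, opening a FileOutputStream whose 
parent directory is missing fails with the FileNotFoundException reported 
below.
{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class CheckpointRepro {
    public static void main(String[] args) throws IOException {
        // Parent directory /tmp/kstreams-repro/0_1 is never created first
        File tmp = new File("/tmp/kstreams-repro/0_1/.checkpoint.tmp");
        try (FileOutputStream out = new FileOutputStream(tmp)) { // FileNotFoundException here
            out.write(0);
        }
    }
}
{code}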

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1
>Reporter: Yogesh BG
>Priority: Major
> Attachments: 5998.v1.txt, 5998.v2.txt
>
>
> I have one Kafka broker and one Kafka Streams instance running... It has been 
> running for two days under a load of around 2500 msgs per second. On the 
> third day I started getting the below exception for some of the partitions. 
> I have 16 partitions; only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 

[jira] [Commented] (KAFKA-6767) OffsetCheckpoint write assumes parent directory exists

2018-07-30 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562211#comment-16562211
 ] 

Guozhang Wang commented on KAFKA-6767:
--

More specifically: In https://issues.apache.org/jira/browse/KAFKA-6499 we have 
a related fix that avoids writing a checkpoint file at all. BUT in 
https://issues.apache.org/jira/browse/KAFKA-6767 we still see people reporting 
similar issues on 1.1.0. I've looked into the source code of 1.1 but cannot 
find obvious bugs that could cause the checkpointable offsets to be non-empty, 
i.e. cases where we would still try to write the checkpoint file.

> OffsetCheckpoint write assumes parent directory exists
> --
>
> Key: KAFKA-6767
> URL: https://issues.apache.org/jira/browse/KAFKA-6767
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Steven Schlansker
>Priority: Minor
>
> We run Kafka Streams with RocksDB state stores on ephemeral disks (i.e. if an 
> instance dies it is created from scratch, rather than reusing the existing 
> RocksDB.)
> We routinely see:
> {code:java}
> 2018-04-09T19:14:35.004Z WARN <> 
> [chat-0319e3c3-d8b2-4c60-bd69-a8484d8d4435-StreamThread-1] 
> o.a.k.s.p.i.ProcessorStateManager - task [0_11] Failed to write offset 
> checkpoint file to /mnt/mesos/sandbox/storage/chat/0_11/.checkpoint: {}
> java.io.FileNotFoundException: 
> /mnt/mesos/sandbox/storage/chat/0_11/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:78)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:320)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:314)
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:307)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:297)
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:67)
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:357)
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:347)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.commitAll(TaskManager.java:403)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:994)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:811)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720){code}
> Inspecting the state store directory, I can indeed see that {{chat/0_11}} 
> does not exist (although many other partitions do).
>  
> Looking at the OffsetCheckpoint write method, it seems to try to open a new 
> checkpoint file without first ensuring that the parent directory exists.
>  
> {code:java}
>     public void write(final Map<TopicPartition, Long> offsets) throws 
> IOException {
>         // if there is no offsets, skip writing the file to save disk IOs
>         if (offsets.isEmpty()) {
>             return;
>         }
>         synchronized (lock) {
>             // write to temp file and then swap with the existing file
>             final File temp = new File(file.getAbsolutePath() + ".tmp");{code}
>  
> Either the OffsetCheckpoint class should initialize the directories if 
> needed, or some precondition of it being called should ensure that is the 
> case.
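> A minimal sketch of the first option (a hypothetical helper, not the actual 
> Kafka patch): ensure the checkpoint file's parent directory exists before 
> opening the temp file.
> {code:java}
> import java.io.File;
> import java.io.IOException;
> 
> final class CheckpointDirs {
>     // Create the parent directory of the checkpoint file if it is missing.
>     static void ensureParentExists(final File file) throws IOException {
>         final File dir = file.getParentFile();
>         if (dir != null && !dir.exists() && !dir.mkdirs()) {
>             throw new IOException("Could not create checkpoint directory: " + dir);
>         }
>     }
> }
> {code}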



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6767) OffsetCheckpoint write assumes parent directory exists

2018-07-30 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562209#comment-16562209
 ] 

Guozhang Wang commented on KAFKA-6767:
--

In the newly released 2.0 version we have fixed an issue so that stateless 
tasks no longer try to write the checkpoint file at all: 
https://issues.apache.org/jira/browse/KAFKA-6499

Could you try upgrading to that version and see if it fixes your issue?

> OffsetCheckpoint write assumes parent directory exists
> --
>
> Key: KAFKA-6767
> URL: https://issues.apache.org/jira/browse/KAFKA-6767
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Steven Schlansker
>Priority: Minor
>
> We run Kafka Streams with RocksDB state stores on ephemeral disks (i.e. if an 
> instance dies it is created from scratch, rather than reusing the existing 
> RocksDB.)
> We routinely see:
> {code:java}
> 2018-04-09T19:14:35.004Z WARN <> 
> [chat-0319e3c3-d8b2-4c60-bd69-a8484d8d4435-StreamThread-1] 
> o.a.k.s.p.i.ProcessorStateManager - task [0_11] Failed to write offset 
> checkpoint file to /mnt/mesos/sandbox/storage/chat/0_11/.checkpoint: {}
> java.io.FileNotFoundException: 
> /mnt/mesos/sandbox/storage/chat/0_11/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:78)
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:320)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:314)
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:307)
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:297)
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:67)
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:357)
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:347)
> at 
> org.apache.kafka.streams.processor.internals.TaskManager.commitAll(TaskManager.java:403)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:994)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:811)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720){code}
> Inspecting the state store directory, I can indeed see that {{chat/0_11}} 
> does not exist (although many other partitions do).
>  
> Looking at the OffsetCheckpoint write method, it seems to try to open a new 
> checkpoint file without first ensuring that the parent directory exists.
>  
> {code:java}
>     public void write(final Map<TopicPartition, Long> offsets) throws 
> IOException {
>         // if there is no offsets, skip writing the file to save disk IOs
>         if (offsets.isEmpty()) {
>             return;
>         }
>         synchronized (lock) {
>             // write to temp file and then swap with the existing file
>             final File temp = new File(file.getAbsolutePath() + ".tmp");{code}
>  
> Either the OffsetCheckpoint class should initialize the directories if 
> needed, or some precondition of it being called should ensure that is the 
> case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7214) Mystic FATAL error

2018-07-30 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562106#comment-16562106
 ] 

Guozhang Wang commented on KAFKA-7214:
--

Hello [~habdank],

Have you tried upgrading to a newer version to see if this issue has been 
fixed?

> Mystic FATAL error
> --
>
> Key: KAFKA-7214
> URL: https://issues.apache.org/jira/browse/KAFKA-7214
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.3
>Reporter: Seweryn Habdank-Wojewodzki
>Priority: Critical
>
> Dears,
> Very often at startup of the streaming application I get an exception:
> {code}
> Exception caught in process. taskId=0_1, processor=KSTREAM-SOURCE-00, 
> topic=my_instance_medium_topic, partition=1, offset=198900203; 
> [org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:212),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks$2.apply(AssignedTasks.java:347),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:420),
>  
> org.apache.kafka.streams.processor.internals.AssignedTasks.process(AssignedTasks.java:339),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.processAndPunctuate(StreamThread.java:648),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:513),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:482),
>  
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:459)]
>  in thread 
> my_application-my_instance-my_instance_medium-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62
> {code}
> and then (without a shutdown request from my side):
> {code}
> 2018-07-30 07:45:02 [ar313] [INFO ] StreamThread:912 - stream-thread 
> [my_application-my_instance-my_instance-72ee1819-edeb-4d85-9d65-f67f7c321618-StreamThread-62]
>  State transition from PENDING_SHUTDOWN to DEAD.
> {code}
> What is this?
> How can it be handled correctly?
> Thanks in advance for help.
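> As a hedged sketch against the 0.11.x Streams API (the topic and config 
> values are placeholders), registering an uncaught exception handler at least 
> surfaces why a StreamThread transitions to DEAD:
> {code:java}
> import java.util.Properties;
> import org.apache.kafka.streams.KafkaStreams;
> import org.apache.kafka.streams.kstream.KStreamBuilder;
> 
> public class StreamsWithHandler {
>     public static void main(String[] args) {
>         Properties props = new Properties();
>         props.put("application.id", "my_application");
>         props.put("bootstrap.servers", "localhost:9092");
> 
>         KStreamBuilder builder = new KStreamBuilder();
>         builder.stream("my_instance_medium_topic").foreach((k, v) -> { });
> 
>         KafkaStreams streams = new KafkaStreams(builder, props);
>         // Log the root cause instead of only seeing the DEAD transition
>         streams.setUncaughtExceptionHandler((thread, throwable) ->
>                 System.err.println("Stream thread " + thread.getName()
>                         + " died: " + throwable));
>         streams.start();
>     }
> }
> {code}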



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7216) Exception while running kafka-acls.sh from 1.0 env on target Kafka env with 1.1.1

2018-07-30 Thread Satish Duggana (JIRA)
Satish Duggana created KAFKA-7216:
-

 Summary: Exception while running kafka-acls.sh from 1.0 env on 
target Kafka env with 1.1.1
 Key: KAFKA-7216
 URL: https://issues.apache.org/jira/browse/KAFKA-7216
 Project: Kafka
  Issue Type: Bug
Affects Versions: 1.1.1, 1.1.0
Reporter: Satish Duggana


When `kafka-acls.sh` is run with SimpleAclAuthorizer against a target Kafka 
cluster on version 1.1.1, it throws the below error.
{code:java}
kafka.common.KafkaException: DelegationToken not a valid resourceType name. The 
valid names are Topic,Group,Cluster,TransactionalId
  at 
kafka.security.auth.ResourceType$$anonfun$fromString$1.apply(ResourceType.scala:56)
  at 
kafka.security.auth.ResourceType$$anonfun$fromString$1.apply(ResourceType.scala:56)
  at scala.Option.getOrElse(Option.scala:121)
  at kafka.security.auth.ResourceType$.fromString(ResourceType.scala:56)
  at 
kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1$$anonfun$apply$mcV$sp$1.apply(SimpleAclAuthorizer.scala:233)
  at 
kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1$$anonfun$apply$mcV$sp$1.apply(SimpleAclAuthorizer.scala:232)
  at scala.collection.Iterator$class.foreach(Iterator.scala:891)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
  at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
  at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
  at 
kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1.apply$mcV$sp(SimpleAclAuthorizer.scala:232)
  at 
kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1.apply(SimpleAclAuthorizer.scala:230)
  at 
kafka.security.auth.SimpleAclAuthorizer$$anonfun$loadCache$1.apply(SimpleAclAuthorizer.scala:230)
  at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:216)
  at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:224)
  at 
kafka.security.auth.SimpleAclAuthorizer.loadCache(SimpleAclAuthorizer.scala:230)
  at 
kafka.security.auth.SimpleAclAuthorizer.configure(SimpleAclAuthorizer.scala:114)
  at kafka.admin.AclCommand$.withAuthorizer(AclCommand.scala:83)
  at kafka.admin.AclCommand$.addAcl(AclCommand.scala:93)
  at kafka.admin.AclCommand$.main(AclCommand.scala:53)
  at kafka.admin.AclCommand.main(AclCommand.scala)
{code}
 
This is because it tries to fetch all the resource types registered under the 
ZK path, and it throws an error when the `DelegationToken` resource is not 
defined in the `ResourceType` of the client's Kafka version (which is earlier 
than 1.1.x).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6690) Priorities for Source Topics

2018-07-30 Thread Guozhang Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16562046#comment-16562046
 ] 

Guozhang Wang commented on KAFKA-6690:
--

I've granted you the permission and you should be able to create new KIP pages 
now.

> Priorities for Source Topics
> 
>
> Key: KAFKA-6690
> URL: https://issues.apache.org/jira/browse/KAFKA-6690
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Bala Prassanna I
>Assignee: Nick Afshartous
>Priority: Major
>
> We often encounter use cases where we need to prioritise source topics. If a 
> consumer is listening to more than one topic, say HighPriorityTopic and 
> LowPriorityTopic, it should consume events from LowPriorityTopic only when 
> all the events from HighPriorityTopic are consumed. This is needed in Kafka 
> Streams processor topologies as well.
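> A hedged client-side sketch of an approximation using the existing consumer 
> API (pause/resume are real KafkaConsumer methods; the topic names and loop 
> structure are illustrative only, and a true priority feature would need 
> broker/client support):
> {code:java}
> import java.util.Set;
> import java.util.stream.Collectors;
> import org.apache.kafka.clients.consumer.ConsumerRecords;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.TopicPartition;
> 
> class PrioritizedPolling {
>     static void pollLoop(KafkaConsumer<String, String> consumer) {
>         while (true) {
>             ConsumerRecords<String, String> records = consumer.poll(100);
>             Set<TopicPartition> low = consumer.assignment().stream()
>                     .filter(tp -> tp.topic().equals("LowPriorityTopic"))
>                     .collect(Collectors.toSet());
>             // Starve the low-priority topic while the high-priority one delivers
>             boolean highBusy = records.partitions().stream()
>                     .anyMatch(tp -> tp.topic().equals("HighPriorityTopic"));
>             if (highBusy) consumer.pause(low); else consumer.resume(low);
>             records.forEach(r -> System.out.println(r.topic() + ": " + r.value()));
>         }
>     }
> }
> {code}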



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5734) Heap (Old generation space) gradually increase

2018-07-30 Thread mayong (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16561703#comment-16561703
 ] 

mayong commented on KAFKA-5734:
---

How do I disable the quota configs? Which config is it?

> Heap (Old generation space) gradually increase
> --
>
> Key: KAFKA-5734
> URL: https://issues.apache.org/jira/browse/KAFKA-5734
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.10.2.0
> Environment: ubuntu 14.04 / java 1.7.0
>Reporter: jang
>Priority: Major
> Attachments: heap-log.xlsx, jconsole.png
>
>
> I set up a Kafka server on Ubuntu with 4GB RAM.
> The heap (old generation space) size is increasing gradually, as shown in the 
> attached Excel file, which records GC info at 1-minute intervals.
> Eventually OU occupies 2.6GB and GC spends too much time (and an 
> out-of-memory exception occurs).
> The Kafka process arguments are below:
> _java -Xmx3000M -Xms2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true 
> -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/usr/local/kafka/bin/../logs 
> -Dlog4j.configuration=file:/usr/local/kafka/bin/../config/log4j.properties_



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7215) Improve LogCleaner behavior on error

2018-07-30 Thread Stanislav Kozlovski (JIRA)
Stanislav Kozlovski created KAFKA-7215:
--

 Summary: Improve LogCleaner behavior on error
 Key: KAFKA-7215
 URL: https://issues.apache.org/jira/browse/KAFKA-7215
 Project: Kafka
  Issue Type: Improvement
Reporter: Stanislav Kozlovski
Assignee: Stanislav Kozlovski


For more detailed information see 
[KIP-346|https://cwiki.apache.org/confluence/display/KAFKA/KIP-346+-+Improve+LogCleaner+behavior+on+error]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-7134) KafkaLog4jAppender - Appender exceptions are propagated to caller

2018-07-30 Thread venkata praveen (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16561600#comment-16561600
 ] 

venkata praveen commented on KAFKA-7134:


[~asasvari] and [~akatona] - Got the point, and it's valid. Thanks for 
mentioning it.

By the way, how does the appender behave when Kafka is down while it is being 
initialized? And what happens if the application ignores the appender 
initialization exception and continues with application startup? Could there 
be a NullPointerException on the producer?

> KafkaLog4jAppender - Appender exceptions are propagated to caller
> -
>
> Key: KAFKA-7134
> URL: https://issues.apache.org/jira/browse/KAFKA-7134
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: venkata praveen
>Assignee: Andras Katona
>Priority: Major
>
> KafkaLog4jAppender exceptions are propagated to the caller when Kafka is 
> down/slow/otherwise unavailable, which may cause the application to crash. 
> Ideally the appender should print and ignore the exception, or should provide 
> an option to ignore/throw exceptions like the 'ignoreExceptions' property of 
> https://logging.apache.org/log4j/2.x/manual/appenders.html#KafkaAppender
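> For reference, the log4j2 appender linked above exposes that option like 
> this (topic and bootstrap values are placeholders):
> {code:xml}
> <Kafka name="kafka" topic="app-logs" ignoreExceptions="true">
>   <PatternLayout pattern="%m"/>
>   <Property name="bootstrap.servers">localhost:9092</Property>
> </Kafka>
> {code}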



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6620) Documentation about "exactly_once" doesn't mention "transaction.state.log.min.isr"

2018-07-30 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16561476#comment-16561476
 ] 

ASF GitHub Bot commented on KAFKA-6620:
---

dongjinleekr opened a new pull request #5434: KAFKA-6620: Documentation about 
'exactly_once' doesn't mention 'transaction.state.log.min.isr'
URL: https://github.com/apache/kafka/pull/5434
 
 
   Add a note on the `transaction.state.log.min.isr` property, which can cause a 
problem when `transaction.state.log.replication.factor` is set to 1 for 
development without `transaction.state.log.min.isr` also being lowered to 1.
   
   - note: 
[GlobalKTableEOSIntegrationTest.java](https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/integration/GlobalKTableEOSIntegrationTest.java#L66)
   - note: 
[GlobalThreadShutDownOrderTest.java](https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/integration/GlobalThreadShutDownOrderTest.java#L70)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Documentation about "exactly_once" doesn't mention 
> "transaction.state.log.min.isr" 
> ---
>
> Key: KAFKA-6620
> URL: https://issues.apache.org/jira/browse/KAFKA-6620
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Reporter: Daniel Qian
>Priority: Major
>
> Documentation about "processing.guarantee" says:
> {quote}The processing guarantee that should be used. Possible values are 
> {{at_least_once}}(default) and {{exactly_once}}. Note that exactly-once 
> processing requires a cluster of at least three brokers by default what is 
> the recommended setting for production; *for development you can change this, 
> by adjusting broker setting* 
> `transaction.state.log.replication.factor`
> {quote}
> If one only sets *transaction.state.log.replication.factor=1* but leaves 
> *transaction.state.log.min.isr* at its default value (which is 2), the 
> Streams application will break.
> Hope you guys modify the doc, thanks.
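> For development on a single broker, both settings have to be lowered 
> together, e.g. (a minimal example of the point above):
> {code}
> transaction.state.log.replication.factor=1
> transaction.state.log.min.isr=1
> {code}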



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)