[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Yogesh BG (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187301#comment-16187301
 ] 

Yogesh BG commented on KAFKA-5998:
--

{quote}
Patch v1 is one way to avoid leaving the source behind; another option is to 
delete the source.
{quote}
Do you have the patch? Can you share it? I will update my setup. We cannot 
delete the source manually. What is the impact of this issue? Will it drop 
some messages, or does it violate any consumer semantics such as exactly-once?
 

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Yogesh BG
> Attachments: 5998.v1.txt
>
>
> I have one Kafka broker and one Kafka Streams application running. It has 
> been running for two days under a load of around 2500 msgs per second. On 
> the third day I am getting the exception below for some of the partitions; 
> I have 16 partitions, and only 0_0 and 0_1 give this error.
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  

[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Yogesh BG (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187299#comment-16187299
 ] 

Yogesh BG commented on KAFKA-5998:
--

{code}
17:48:31.389 [ks_0_inst-StreamThread-5] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Marking the coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) dead for 
group rtp-kafkastreams
17:48:31.389 [ks_0_inst-StreamThread-6] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Marking the coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) dead for 
group rtp-kafkastreams
17:48:31.391 [ks_0_inst-StreamThread-8] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.391 [ks_0_inst-StreamThread-1] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.392 [ks_0_inst-StreamThread-15] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.392 [ks_0_inst-StreamThread-8] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Marking the coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) dead for 
group rtp-kafkastreams
17:48:31.392 [ks_0_inst-StreamThread-1] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Marking the coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) dead for 
group rtp-kafkastreams
17:48:31.393 [ks_0_inst-StreamThread-15] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Marking the coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) dead for 
group rtp-kafkastreams
17:48:31.484 [ks_0_inst-StreamThread-16] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.485 [ks_0_inst-StreamThread-16] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Marking the coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) dead for 
group rtp-kafkastreams
17:48:31.485 [ks_0_inst-StreamThread-11] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.485 [ks_0_inst-StreamThread-2] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.485 [ks_0_inst-StreamThread-10] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.486 [ks_0_inst-StreamThread-2] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Marking the coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) dead for 
group rtp-kafkastreams
17:48:31.486 [ks_0_inst-StreamThread-11] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Marking the coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) dead for 
group rtp-kafkastreams
17:48:31.486 [ks_0_inst-StreamThread-10] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Marking the coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) dead for 
group rtp-kafkastreams
17:48:31.489 [ks_0_inst-StreamThread-7] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.489 [ks_0_inst-StreamThread-3] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.585 [ks_0_inst-StreamThread-16] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.586 [ks_0_inst-StreamThread-2] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.586 [ks_0_inst-StreamThread-11] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.587 [ks_0_inst-StreamThread-10] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.591 [ks_0_inst-StreamThread-7] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.591 [ks_0_inst-StreamThread-6] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.591 [ks_0_inst-StreamThread-3] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.592 [ks_0_inst-StreamThread-5] INFO  o.a.k.c.c.i.AbstractCoordinator - 
Discovered coordinator 172.30.0.9:9092 (id: 2147482646 rack: null) for group 
rtp-kafkastreams.
17:48:31.594 [ks_0_inst-StreamThread-15] INFO  o.a.k.c.c.i.AbstractCoordinator 
- Discovered co
{code}

[jira] [Comment Edited] (KAFKA-5734) Heap (Old generation space) gradually increase

2017-09-30 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187287#comment-16187287
 ] 

Manikumar edited comment on KAFKA-5734 at 10/1/17 6:02 AM:
---

[~jangbora] Looking at the metrics above, they represent per-client-id quota 
metrics. Inactive quota metrics are deleted after one hour, so it looks like 
you are creating many producer instances within an hour. As [~huxi_2b] 
mentioned, you can use a single producer instance across multiple threads, 
reuse the client.id string, or, if you are not using quotas, disable the 
quota configs.
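
A minimal sketch of the shared-producer approach, assuming a plain Java 
application (class name, broker address, and client.id are illustrative):

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;

public class SharedProducerHolder {
    // KafkaProducer is thread safe: create one instance and share it across
    // threads, instead of creating a new producer (and a new set of
    // per-client-id quota metrics) for every task.
    private static final Producer<String, String> PRODUCER = create();

    private static Producer<String, String> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("client.id", "my-app-producer"); // fixed client.id, so its metrics are reused
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    public static Producer<String, String> get() {
        return PRODUCER;
    }
}
{code}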

Please close the JIRA if one of the above approaches resolves your issue.


was (Author: omkreddy):
[~jangbora] looking at the metrics above, they are per client-id quota metrics. 
 inactive quota metrics will be deleted after one hour.  looks like you are 
creating many producer instances within an hour. As mentioned [~huxi_2b],  You 
can use single producer instance across multiple threads. (or) you can reuse 
client.id string (or)  if you not using quotas, you can disable quota configs.

Please close the JIRA, if you satisfy with one of the above approaches.

> Heap (Old generation space) gradually increase
> --
>
> Key: KAFKA-5734
> URL: https://issues.apache.org/jira/browse/KAFKA-5734
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.10.2.0
> Environment: ubuntu 14.04 / java 1.7.0
>Reporter: jang
> Attachments: heap-log.xlsx, jconsole.png
>
>
> I set up kafka server on ubuntu with 4GB ram.
> Heap (old generation space) size is increasing gradually, as shown in the 
> attached Excel file, which records GC info at 1-minute intervals.
> Eventually OU occupies 2.6GB and GC spends too much time (and an 
> out-of-memory exception occurs).
> The Kafka process arguments are below.
> _java -Xmx3000M -Xms2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true 
> -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/usr/local/kafka/bin/../logs 
> -Dlog4j.configuration=file:/usr/local/kafka/bin/../config/log4j.properties_



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5734) Heap (Old generation space) gradually increase

2017-09-30 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187287#comment-16187287
 ] 

Manikumar commented on KAFKA-5734:
--

[~jangbora] Looking at the metrics above, they are per-client-id quota 
metrics. Inactive quota metrics are deleted after one hour, so it looks like 
you are creating many producer instances within an hour. As [~huxi_2b] 
mentioned, you can use a single producer instance across multiple threads, 
reuse the client.id string, or, if you are not using quotas, disable the 
quota configs.

Please close the JIRA if one of the above approaches resolves your issue.

> Heap (Old generation space) gradually increase
> --
>
> Key: KAFKA-5734
> URL: https://issues.apache.org/jira/browse/KAFKA-5734
> Project: Kafka
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.10.2.0
> Environment: ubuntu 14.04 / java 1.7.0
>Reporter: jang
> Attachments: heap-log.xlsx, jconsole.png
>
>
> I set up kafka server on ubuntu with 4GB ram.
> Heap (old generation space) size is increasing gradually, as shown in the 
> attached Excel file, which records GC info at 1-minute intervals.
> Eventually OU occupies 2.6GB and GC spends too much time (and an 
> out-of-memory exception occurs).
> The Kafka process arguments are below.
> _java -Xmx3000M -Xms2G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 
> -XX:InitiatingHeapOccupancyPercent=35 -XX:+DisableExplicitGC 
> -Djava.awt.headless=true 
> -Xloggc:/usr/local/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -Dcom.sun.management.jmxremote 
> -Dcom.sun.management.jmxremote.authenticate=false 
> -Dcom.sun.management.jmxremote.ssl=false 
> -Dkafka.logs.dir=/usr/local/kafka/bin/../logs 
> -Dlog4j.configuration=file:/usr/local/kafka/bin/../config/log4j.properties_



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5746) Add new metrics to support health checks

2017-09-30 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187238#comment-16187238
 ] 

ASF GitHub Bot commented on KAFKA-5746:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3996


> Add new metrics to support health checks
> 
>
> Key: KAFKA-5746
> URL: https://issues.apache.org/jira/browse/KAFKA-5746
> Project: Kafka
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
> Fix For: 1.0.0
>
>
> It will be useful to have some additional metrics to support health checks.
> Details are in 
> [KIP-188|https://cwiki.apache.org/confluence/display/KAFKA/KIP-188+-+Add+new+metrics+to+support+health+checks]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated KAFKA-5998:
--
Attachment: 5998.v1.txt

{code}
Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
log.debug("Non-atomic move of {} to {} succeeded after atomic 
move failed due to {}", source, target,
outer.getMessage());
} catch (IOException inner) {
{code}
If there was an IOException from the non-atomic move call, the source might be 
left around.

Patch v1 is one way to avoid leaving the source behind; another option is to 
delete the source.
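
A minimal sketch of the delete-the-source option (class and method names are 
illustrative, not the actual patch):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

final class AtomicMoveWithFallback {
    static void move(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException outer) {
            try {
                Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException inner) {
                // Both moves failed: remove the source so no stale
                // .checkpoint.tmp file is left behind.
                Files.deleteIfExists(source);
                inner.addSuppressed(outer);
                throw inner;
            }
        }
    }
}
{code}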

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Yogesh BG
> Attachments: 5998.v1.txt
>
>
> I have one Kafka broker and one Kafka Streams application running. It has 
> been running for two days under a load of around 2500 msgs per second. On 
> the third day I am getting the exception below for some of the partitions; 
> I have 16 partitions, and only 0_0 and 0_1 give this error.
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.a

[jira] [Updated] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Matthias J. Sax (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-5998:
---
Priority: Major  (was: Trivial)

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0
>Reporter: Yogesh BG
>
> I have one Kafka broker and one Kafka Streams application running. It has 
> been running for two days under a load of around 2500 msgs per second. On 
> the third day I am getting the exception below for some of the partitions; 
> I have 16 partitions, and only 0_0 and 0_1 give this error.
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkas

[jira] [Assigned] (KAFKA-5967) Ineffective check of negative value in CompositeReadOnlyKeyValueStore#approximateNumEntries()

2017-09-30 Thread siva santhalingam (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

siva santhalingam reassigned KAFKA-5967:


Assignee: siva santhalingam

> Ineffective check of negative value in 
> CompositeReadOnlyKeyValueStore#approximateNumEntries()
> -
>
> Key: KAFKA-5967
> URL: https://issues.apache.org/jira/browse/KAFKA-5967
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: siva santhalingam
>Priority: Minor
>  Labels: beginner, newbie
>
> {code}
> long total = 0;
> for (ReadOnlyKeyValueStore store : stores) {
>     total += store.approximateNumEntries();
> }
> return total < 0 ? Long.MAX_VALUE : total;
> {code}
> The check for negative value seems to account for wrapping.
> However, wrapping can happen within the for loop. So the check should be 
> performed inside the loop.
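
A minimal sketch of the in-loop check the description is asking for (the 
store's type parameters are omitted, as in the excerpt above):

{code}
long total = 0;
for (ReadOnlyKeyValueStore store : stores) {
    total += store.approximateNumEntries();
    if (total < 0) {
        // Overflow already happened inside the loop; saturate instead of
        // letting further additions wrap again.
        return Long.MAX_VALUE;
    }
}
return total;
{code}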



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5972) Flatten SMT does not work with null values

2017-09-30 Thread siva santhalingam (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16187181#comment-16187181
 ] 

siva santhalingam commented on KAFKA-5972:
--

Hi [~tomas.zuk...@gmail.com], can I assign this to myself? Thanks!

> Flatten SMT does not work with null values
> --
>
> Key: KAFKA-5972
> URL: https://issues.apache.org/jira/browse/KAFKA-5972
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.11.0.0, 0.11.0.1
>Reporter: Tomas Zuklys
>Priority: Minor
>  Labels: easyfix, patch
> Attachments: kafka-transforms.patch
>
>
> Hi,
> I noticed a bug in the Flatten SMT while doing tests with the different SMTs 
> that are provided out of the box.
> Flatten SMT does not work as expected with schemaless JSON that has 
> properties with null values. 
> Example json:
> {code}
>   {A={D=dValue, B=null, C=cValue}}
> {code}
> The issue is in if statement that checks for null value.
> Current version:
> {code}
> for (Map.Entry entry : originalRecord.entrySet()) {
>     final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
>     Object value = entry.getValue();
>     if (value == null) {
>         newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
>         return;
>     }
>     ...
> {code}
> should be
> {code}
> for (Map.Entry entry : originalRecord.entrySet()) {
>     final String fieldName = fieldName(fieldNamePrefix, entry.getKey());
>     Object value = entry.getValue();
>     if (value == null) {
>         newRecord.put(fieldName(fieldNamePrefix, entry.getKey()), null);
>         continue;
>     }
> {code}
> I have attached a patch containing the fix for this issue.
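
For reference, with the continue-based fix applied, the example record above 
would flatten to the following (assuming Flatten's default "." delimiter):

{code}
{A.D=dValue, A.B=null, A.C=cValue}
{code}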



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Yogesh BG (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yogesh BG updated KAFKA-5998:
-
Description: 
I have one Kafka broker and one Kafka Streams application running. It has been 
running for two days under a load of around 2500 msgs per second. On the third 
day I am getting the exception below for some of the partitions; I have 16 
partitions, and only 0_0 and 0_1 give this error.

{{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
/data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
java.io.FileNotFoundException: 
/data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or directory)
at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
~[na:1.7.0_111]
at 
org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
/data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
java.io.FileNotFoundException: 
/data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or directory)
at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
~[na:1.7.0_111]
at 
org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
 

[jira] [Updated] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Yogesh BG (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yogesh BG updated KAFKA-5998:
-
Description: 
I have one Kafka broker and one Kafka Streams application running. It has been 
running for two days under a load of around 2500 msgs per second. On the third 
day I am getting the exception below for some of the partitions; I have 16 
partitions, and only 0_0 and 0_1 give this error.

{{
09:43:25.955 [ks_0_inst-StreamThread-6] WARN  o.a.k.s.p.i.ProcessorStateManager 
- Failed to write checkpoint file to 
/data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
java.io.FileNotFoundException: 
/data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or directory)
at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
~[na:1.7.0_111]
at 
org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
/data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
java.io.FileNotFoundException: 
/data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or directory)
at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
~[na:1.7.0_111]
at 
org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]

[jira] [Created] (KAFKA-5998) /.checkpoint.tmp Not found exception

2017-09-30 Thread Yogesh BG (JIRA)
Yogesh BG created KAFKA-5998:


 Summary: /.checkpoint.tmp Not found exception
 Key: KAFKA-5998
 URL: https://issues.apache.org/jira/browse/KAFKA-5998
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 0.11.0.0
Reporter: Yogesh BG
Priority: Trivial


I have one Kafka broker and one Kafka Streams application running. It has been 
running for two days under a load of around 2500 msgs per second. On the third 
day I am getting the exception below for some of the partitions; I have 16 
partitions, and only 0_0 and 0_1 give this error.

09:43:25.955 [ks_0_inst-StreamThread-6] WARN  o.a.k.s.p.i.ProcessorStateManager 
- Failed to write checkpoint file to 
/data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
java.io.FileNotFoundException: 
/data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or directory)
at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
~[na:1.7.0_111]
at 
org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
/data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
java.io.FileNotFoundException: 
/data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or directory)
at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
~[na:1.7.0_111]
at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
~[na:1.7.0_111]
at 
org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
 ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
 [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
at 
org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
 [rtp-kafkastreams-1.0-SNAPSHO

[jira] [Resolved] (KAFKA-5997) acks=all does not seem to be honoured

2017-09-30 Thread Ronald van de Kuil (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ronald van de Kuil resolved KAFKA-5997.
---
Resolution: Not A Bug

> acks=all does not seem to be honoured
> -
>
> Key: KAFKA-5997
> URL: https://issues.apache.org/jira/browse/KAFKA-5997
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.11.0.0
>Reporter: Ronald van de Kuil
>
> I have a 3 node Kafka cluster. I made a topic with 1 partition with a 
> replication factor of 2. 
> The replicas landed on broker 1 and 3. 
> When I stopped the leader, I could still produce to it. Also, I saw the 
> consumer consume it.
> I produced with acks=all:
> [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig 
> values: 
>   acks = all
> The documentation says about acks=all:
> "This means the leader will wait for the full set of in-sync replicas to 
> acknowledge the record."
> The producer API returns the offset, which is present in the RecordMetadata.
> I would have expected that the producer could not publish because only 1 
> replica is in sync.
> This is the output of the topic state:
> Topic: topic1   PartitionCount: 1   ReplicationFactor: 2   Configs:
> Topic: topic1   Partition: 0   Leader: 1   Replicas: 1,3   Isr: 1
> I am wondering whether or not this is a bug in the current version of the API 
> or whether the documentation (or my understanding) is eligible for an update? 
> I expect the former.
> If there is anything that I can do, please let me know.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5997) acks=all does not seem to be honoured

2017-09-30 Thread Ronald van de Kuil (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186990#comment-16186990
 ] 

Ronald van de Kuil commented on KAFKA-5997:
---

Thank you Manikumar.

After adding the configuration it works as expected:

[main] ERROR eu.bde.sc4pilot.kafka.Producer - 
org.apache.kafka.common.errors.NotEnoughReplicasException: Messages are 
rejected since there are fewer in-sync replicas than required.


> acks=all does not seem to be honoured
> -
>
> Key: KAFKA-5997
> URL: https://issues.apache.org/jira/browse/KAFKA-5997
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.11.0.0
>Reporter: Ronald van de Kuil
>
> I have a 3 node Kafka cluster. I made a topic with 1 partition with a 
> replication factor of 2. 
> The replicas landed on broker 1 and 3. 
> When I stopped the leader, I could still produce to it. Also, I saw the 
> consumer consume it.
> I produced with acks=all:
> [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig 
> values: 
>   acks = all
> The documentation says about acks=all:
> "This means the leader will wait for the full set of in-sync replicas to 
> acknowledge the record."
> The producer API returns the offset, which is present in the RecordMetadata.
> I would have expected that the producer could not publish because only 1 
> replica is in sync.
> This is the output of the topic state:
> Topic: topic1   PartitionCount: 1   ReplicationFactor: 2   Configs:
> Topic: topic1   Partition: 0   Leader: 1   Replicas: 1,3   Isr: 1
> I am wondering whether or not this is a bug in the current version of the API 
> or whether the documentation (or my understanding) is eligible for an update? 
> I expect the former.
> If there is anything that I can do, please let me know.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5997) acks=all does not seem to be honoured

2017-09-30 Thread Manikumar (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16186974#comment-16186974
 ] 

Manikumar commented on KAFKA-5997:
--

We need to use min.insync.replicas to enforce durability guarantees. Please 
read the docs about "min.insync.replicas" here:
https://kafka.apache.org/documentation/#topicconfigs
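
A minimal sketch of the producer side, assuming the topic has been given 
min.insync.replicas=2 (for example via kafka-configs.sh --alter 
--add-config min.insync.replicas=2); class name, broker address, and topic 
name are illustrative:

{code}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AcksAllExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // With min.insync.replicas=2 on the topic, acks=all makes the broker
        // reject writes with NotEnoughReplicasException once the ISR shrinks to 1.
        props.put("acks", "all");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("topic1", "key", "value")).get();
        }
    }
}
{code}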


> acks=all does not seem to be honoured
> -
>
> Key: KAFKA-5997
> URL: https://issues.apache.org/jira/browse/KAFKA-5997
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.11.0.0
>Reporter: Ronald van de Kuil
>
> I have a 3 node Kafka cluster. I made a topic with 1 partition with a 
> replication factor of 2. 
> The replicas landed on broker 1 and 3. 
> When I stopped the leader, I could still produce to it. Also, I saw the 
> consumer consume it.
> I produced with acks=all:
> [main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig 
> values: 
>   acks = all
> The documentation says about acks=all:
> "This means the leader will wait for the full set of in-sync replicas to 
> acknowledge the record."
> The producer API returns the offset, which is present in the RecordMetadata.
> I would have expected that the producer could not publish because only 1 
> replica is in sync.
> This is the output of the topic state:
> Topic: topic1   PartitionCount: 1   ReplicationFactor: 2   Configs:
> Topic: topic1   Partition: 0   Leader: 1   Replicas: 1,3   Isr: 1
> I am wondering whether or not this is a bug in the current version of the API 
> or whether the documentation (or my understanding) is eligible for an update? 
> I expect the former.
> If there is anything that I can do, please let me know.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5997) acks=all does not seem to be honoured

2017-09-30 Thread Ronald van de Kuil (JIRA)
Ronald van de Kuil created KAFKA-5997:
-

 Summary: acks=all does not seem to be honoured
 Key: KAFKA-5997
 URL: https://issues.apache.org/jira/browse/KAFKA-5997
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.11.0.0
Reporter: Ronald van de Kuil


I have a 3 node Kafka cluster. I made a topic with 1 partition with a 
replication factor of 2. 

The replicas landed on broker 1 and 3. 

When I stopped the leader, I could still produce to it. Also, I saw the 
consumer consume it.

I produced with acks=all:

[main] INFO org.apache.kafka.clients.producer.ProducerConfig - ProducerConfig 
values: 
acks = all

The documentation says about acks=all:
"This means the leader will wait for the full set of in-sync replicas to 
acknowledge the record."

The producer API returns the offset, which is present in the RecordMetadata.

I would have expected that the producer could not publish because only 1 
replica is in sync.

This is the output of the topic state:

Topic: topic1   PartitionCount: 1   ReplicationFactor: 2   Configs:
Topic: topic1   Partition: 0   Leader: 1   Replicas: 1,3   Isr: 1

I am wondering whether or not this is a bug in the current version of the API 
or whether the documentation (or my understanding) is eligible for an update? I 
expect the former.

If there is anything that I can do, please let me know.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)