[jira] [Commented] (KAFKA-5749) Refactor SessionStore hierarchy

2017-08-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142677#comment-16142677
 ] 

ASF GitHub Bot commented on KAFKA-5749:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3729


> Refactor SessionStore hierarchy
> ---
>
> Key: KAFKA-5749
> URL: https://issues.apache.org/jira/browse/KAFKA-5749
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Damian Guy
>Assignee: Damian Guy
> Fix For: 1.0.0
>
>
> In order to support a bytes store we need to create a MeteredSessionStore and 
> ChangeloggingSessionStore. We then need to refactor the current SessionStore 
> implementations to use this. All inner stores should be of type <Bytes, byte[]>.
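
A rough sketch of the layering this implies: a raw-bytes inner store wrapped first by a change-logging layer and then by a metering layer. The interfaces and signatures below are simplified, hypothetical stand-ins, not the actual Kafka Streams classes.

{code:java}
// Hypothetical, simplified interfaces to illustrate the wrapping only;
// these are not the real Kafka Streams APIs.
interface BytesSessionStore {                        // inner stores work on <Bytes, byte[]>
    void put(byte[] sessionKey, byte[] value);
    byte[] fetch(byte[] sessionKey);
}

// Wraps an inner store and (conceptually) writes every update to a changelog.
class ChangeLoggingSessionStore implements BytesSessionStore {
    private final BytesSessionStore inner;
    ChangeLoggingSessionStore(BytesSessionStore inner) { this.inner = inner; }
    @Override public void put(byte[] k, byte[] v) {
        inner.put(k, v);
        // a real implementation would also append (k, v) to the changelog topic here
    }
    @Override public byte[] fetch(byte[] k) { return inner.fetch(k); }
}

// Wraps the change-logging store and (conceptually) records metrics per call.
class MeteredSessionStore implements BytesSessionStore {
    private final BytesSessionStore inner;
    MeteredSessionStore(BytesSessionStore inner) { this.inner = inner; }
    @Override public void put(byte[] k, byte[] v) {
        final long start = System.nanoTime();
        inner.put(k, v);
        final long elapsedNanos = System.nanoTime() - start;  // would be reported as a put-latency metric
    }
    @Override public byte[] fetch(byte[] k) { return inner.fetch(k); }
}
{code}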



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4468) Correctly calculate the window end timestamp after read from state stores

2017-08-26 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142877#comment-16142877
 ] 

Richard Yu commented on KAFKA-4468:
---

When looking through the references to WindowedDeserializer, it appears that 
there were none other than unit tests. 

The other approach is to look through the window store classes for the window 
size. Since RocksDBWindowStore is the most commonly used implementation, I 
looked through that class and found that when RocksDBWindowStore instantiates a 
window, it uses the class WindowStoreIteratorWrapper. In lines 54 - 58 of that 
class, this is what I found:

{code}
@Override
public Windowed<Bytes> peekNextKey() {
    final Bytes next = bytesIterator.peekNextKey();
    final long timestamp = WindowStoreUtils.timestampFromBinaryKey(next.get());
    final Bytes key = WindowStoreUtils.bytesKeyFromBinaryKey(next.get());
    return new Windowed<>(key, WindowStoreUtils.timeWindowForSize(timestamp, windowSize));
}
{code}

In this method, {{windowSize}} is already known to the class. Therefore, this 
is a working implementation of a window with a fixed, limited lifetime, since 
the size is already known. 
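
For context, the helper {{WindowStoreUtils.timeWindowForSize}} presumably just derives the end boundary from the start timestamp and the size, which is exactly the calculation KAFKA-4468 asks for after deserialization. A minimal sketch of that idea, assuming a simple start-plus-size rule (not copied from the Kafka source):

{code:java}
// Hedged sketch: a fixed-size window's end is just start + size, so only the
// start timestamp needs to be stored alongside the key. TimeWindow here is the
// Streams-internal (startMs, endMs) window class.
static TimeWindow timeWindowForSize(final long startMs, final long windowSizeMs) {
    // e.g. startMs = 1_000, windowSizeMs = 500  ->  window [1000, 1500)
    return new TimeWindow(startMs, startMs + windowSizeMs);
}
{code}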

In other words, for us to find the fixed window length, it must be available 
somewhere in the series of calls in which WindowedDeserializer is involved. 
However, when attempting to trace this sequence of calls between the window 
store and WindowedDeserializer, I have not, to date, been able to find such a 
series of calls.

In other words, there might not be a practical way to retrieve the length of 
the window.

> Correctly calculate the window end timestamp after read from state stores
> -
>
> Key: KAFKA-4468
> URL: https://issues.apache.org/jira/browse/KAFKA-4468
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
>
> When storing the WindowedStore on the persistent KV store, we only use the 
> start timestamp of the window as part of the combo-key as (start-timestamp, 
> key). The reason that we do not add the end-timestamp as well is that we can 
> always calculate it from the start timestamp + window_length, and hence we 
> can save 8 bytes per key on the persistent KV store.
> However, after reading it back (via {{WindowedDeserializer}}) we do not set 
> its end timestamp correctly but just read it as an {{UnlimitedWindow}}. We 
> should fix this by calculating its end timestamp as mentioned above.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-4468) Correctly calculate the window end timestamp after read from state stores

2017-08-26 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142877#comment-16142877
 ] 

Richard Yu edited comment on KAFKA-4468 at 8/26/17 5:56 PM:


When looking through the references to WindowedDeserializer, it appears that 
there were none other than unit tests. 

The other approach is to look through the window store classes for the window 
size. Since RocksDBWindowStore is the most commonly used implementation, I 
looked through that class and found that when RocksDBWindowStore instantiates a 
window, it uses the class WindowStoreIteratorWrapper. In lines 54 - 58 of that 
class, this is what I found:

{code}
@Override
public Windowed<Bytes> peekNextKey() {
    final Bytes next = bytesIterator.peekNextKey();
    final long timestamp = WindowStoreUtils.timestampFromBinaryKey(next.get());
    final Bytes key = WindowStoreUtils.bytesKeyFromBinaryKey(next.get());
    return new Windowed<>(key, WindowStoreUtils.timeWindowForSize(timestamp, windowSize));
}
{code}

In this method, {{windowSize}} is already known to the class. Therefore, this 
is a working implementation of a window with a fixed, limited lifetime, since 
the size is already known. 

In other words, for us to find the fixed window length, it must be available 
somewhere in the series of calls in which WindowedDeserializer is involved. 
However, when attempting to trace this sequence of calls between the window 
store and WindowedDeserializer, it could not be found.

In other words, there might not be a practical way to retrieve the length of 
the window.


was (Author: yohan123):
When looking through the references to WindowedDeserializer, it appears that 
there were none other than unit tests. 

The other approach is to look through the window store classes for the window 
size. Since RocksDBWindowStore is the most commonly used implementation, I 
looked through that class and found that when RocksDBWindowStore instantiates a 
window, it uses the class WindowStoreIteratorWrapper. In lines 54 - 58 of that 
class, this is what I found:

{code}
@Override
public Windowed<Bytes> peekNextKey() {
    final Bytes next = bytesIterator.peekNextKey();
    final long timestamp = WindowStoreUtils.timestampFromBinaryKey(next.get());
    final Bytes key = WindowStoreUtils.bytesKeyFromBinaryKey(next.get());
    return new Windowed<>(key, WindowStoreUtils.timeWindowForSize(timestamp, windowSize));
}
{code}

In this method, {{windowSize}} is already known to the class. Therefore, this 
is a working implementation of a window with a fixed, limited lifetime, since 
the size is already known. 

In other words, for us to find the fixed window length, it must be available 
somewhere in the series of calls in which WindowedDeserializer is involved. 
However, when attempting to trace this sequence of calls between the window 
store and WindowedDeserializer, I have not, to date, been able to find such a 
series of calls.

In other words, there might not be a practical way to retrieve the length of 
the window.

> Correctly calculate the window end timestamp after read from state stores
> -
>
> Key: KAFKA-4468
> URL: https://issues.apache.org/jira/browse/KAFKA-4468
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
>
> When storing the WindowedStore on the persistent KV store, we only use the 
> start timestamp of the window as part of the combo-key as (start-timestamp, 
> key). The reason that we do not add the end-timestamp as well is that we can 
> always calculate it from the start timestamp + window_length, and hence we 
> can save 8 bytes per key on the persistent KV store.
> However, after reading it back (via {{WindowedDeserializer}}) we do not set 
> its end timestamp correctly but just read it as an {{UnlimitedWindow}}. We 
> should fix this by calculating its end timestamp as mentioned above.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-4468) Correctly calculate the window end timestamp after read from state stores

2017-08-26 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142877#comment-16142877
 ] 

Richard Yu edited comment on KAFKA-4468 at 8/26/17 5:57 PM:


When looking through the references to WindowedDeserializer, it appears that 
there were none other than unit tests. 

The other approach is to look through the window store classes for the window 
size. Since RocksDBWindowStore is the most commonly used implementation, I 
looked through that class and found that when RocksDBWindowStore instantiates a 
window, it uses the class WindowStoreIteratorWrapper. In lines 54 - 58 of that 
class, this is what I found:

{code}
@Override
public Windowed<Bytes> peekNextKey() {
    final Bytes next = bytesIterator.peekNextKey();
    final long timestamp = WindowStoreUtils.timestampFromBinaryKey(next.get());
    final Bytes key = WindowStoreUtils.bytesKeyFromBinaryKey(next.get());
    return new Windowed<>(key, WindowStoreUtils.timeWindowForSize(timestamp, windowSize));
}
{code}

In this method, {{windowSize}} is already known to the class. Therefore, this 
is a working implementation of a window with a fixed, limited lifetime, since 
the size is already known. 

In other words, for us to find the fixed window length, it must be available 
somewhere in the series of calls in which WindowedDeserializer is involved. 
However, when attempting to trace this sequence of calls between the window 
store and WindowedDeserializer, the window length could not be found.

In other words, there might not be a practical way to retrieve the length of 
the window.


was (Author: yohan123):
When looking through the references to WindowedDeserializer, it appears that 
there were none other than unit tests. 

The other approach is to look through the window store classes for the window 
size. Since RocksDBWindowStore is the most commonly used implementation, I 
looked through that class and found that when RocksDBWindowStore instantiates a 
window, it uses the class WindowStoreIteratorWrapper. In lines 54 - 58 of that 
class, this is what I found:

{code}
@Override
public Windowed<Bytes> peekNextKey() {
    final Bytes next = bytesIterator.peekNextKey();
    final long timestamp = WindowStoreUtils.timestampFromBinaryKey(next.get());
    final Bytes key = WindowStoreUtils.bytesKeyFromBinaryKey(next.get());
    return new Windowed<>(key, WindowStoreUtils.timeWindowForSize(timestamp, windowSize));
}
{code}

In this method, {{windowSize}} is already known to the class. Therefore, this 
is a working implementation of a window with a fixed, limited lifetime, since 
the size is already known. 

In other words, for us to find the fixed window length, it must be available 
somewhere in the series of calls in which WindowedDeserializer is involved. 
However, when attempting to trace this sequence of calls between the window 
store and WindowedDeserializer, it could not be found.

In other words, there might not be a practical way to retrieve the length of 
the window.

> Correctly calculate the window end timestamp after read from state stores
> -
>
> Key: KAFKA-4468
> URL: https://issues.apache.org/jira/browse/KAFKA-4468
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
>
> When storing the WindowedStore on the persistent KV store, we only use the 
> start timestamp of the window as part of the combo-key as (start-timestamp, 
> key). The reason that we do not add the end-timestamp as well is that we can 
> always calculate it from the start timestamp + window_length, and hence we 
> can save 8 bytes per key on the persistent KV store.
> However, after reading it back (via {{WindowedDeserializer}}) we do not set 
> its end timestamp correctly but just read it as an {{UnlimitedWindow}}. We 
> should fix this by calculating its end timestamp as mentioned above.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-4468) Correctly calculate the window end timestamp after read from state stores

2017-08-26 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142877#comment-16142877
 ] 

Richard Yu edited comment on KAFKA-4468 at 8/26/17 6:03 PM:


When looking through the references to WindowedDeserializer, it appears that 
there were none other than unit tests. 

The other approach is to look through the window store classes for the window 
size. Since RocksDBWindowStore is the most commonly used implementation, I 
looked through that class and found that when RocksDBWindowStore instantiates a 
window, it uses the class WindowStoreIteratorWrapper. In lines 54 - 58 of that 
class, this is what I found:

{code}
@Override
public Windowed<Bytes> peekNextKey() {
    final Bytes next = bytesIterator.peekNextKey();
    final long timestamp = WindowStoreUtils.timestampFromBinaryKey(next.get());
    final Bytes key = WindowStoreUtils.bytesKeyFromBinaryKey(next.get());
    return new Windowed<>(key, WindowStoreUtils.timeWindowForSize(timestamp, windowSize));
}
{code}

In this method, {{windowSize}} is already known to the class. Therefore, this 
is a working implementation of a window with a fixed, limited lifetime, since 
the size is already known. 

In other words, for us to find the fixed window length, it must be found 
somewhere in the series of calls in which WindowedDeserializer is involved. 
However, when looking for it in this sequence of calls between the window store 
and WindowedDeserializer, the window length could not be found.

In other words, there might not be a practical way to retrieve the length of 
the window.


was (Author: yohan123):
When looking through the references to WindowedDeserializer, it appears that 
there were none other than unit tests. 

The other approach is to look through the window store classes for the window 
size. Since RocksDBWindowStore is the most commonly used implementation, I 
looked through that class and found that when RocksDBWindowStore instantiates a 
window, it uses the class WindowStoreIteratorWrapper. In lines 54 - 58 of that 
class, this is what I found:

{code}
@Override
public Windowed<Bytes> peekNextKey() {
    final Bytes next = bytesIterator.peekNextKey();
    final long timestamp = WindowStoreUtils.timestampFromBinaryKey(next.get());
    final Bytes key = WindowStoreUtils.bytesKeyFromBinaryKey(next.get());
    return new Windowed<>(key, WindowStoreUtils.timeWindowForSize(timestamp, windowSize));
}
{code}

In this method, {{windowSize}} is already known to the class. Therefore, this 
is a working implementation of a window with a fixed, limited lifetime, since 
the size is already known. 

In other words, for us to find the fixed window length, it must be available 
somewhere in the series of calls in which WindowedDeserializer is involved. 
However, when attempting to trace this sequence of calls between the window 
store and WindowedDeserializer, the window length could not be found.

In other words, there might not be a practical way to retrieve the length of 
the window.

> Correctly calculate the window end timestamp after read from state stores
> -
>
> Key: KAFKA-4468
> URL: https://issues.apache.org/jira/browse/KAFKA-4468
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
>
> When storing the WindowedStore on the persistent KV store, we only use the 
> start timestamp of the window as part of the combo-key as (start-timestamp, 
> key). The reason that we do not add the end-timestamp as well is that we can 
> always calculate it from the start timestamp + window_length, and hence we 
> can save 8 bytes per key on the persistent KV store.
> However, after reading it back (via {{WindowedDeserializer}}) we do not set 
> its end timestamp correctly but just read it as an {{UnlimitedWindow}}. We 
> should fix this by calculating its end timestamp as mentioned above.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-4906) Support 0.9 brokers with a newer Producer or Consumer version

2017-08-26 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke resolved KAFKA-4906.

Resolution: Won't Fix

> Support 0.9 brokers with a newer Producer or Consumer version
> -
>
> Key: KAFKA-4906
> URL: https://issues.apache.org/jira/browse/KAFKA-4906
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.10.2.0
>Reporter: Grant Henke
>Assignee: Grant Henke
>
> KAFKA-4507 added the ability for newer Kafka clients to talk to older Kafka 
> brokers if a new feature supported by a newer wire protocol was not 
> used/required. 
> We currently support brokers as old as 0.10.0.0 because that's when the 
> ApiVersionsRequest/Response was added to the broker (KAFKA-3307).
> However, there are relatively few changes between 0.9.0.0 and 0.10.0.0 on the 
> wire, making it possible to support another major broker version set by 
> assuming that any disconnect resulting from an ApiVersionsRequest is from a 
> 0.9 broker and defaulting to legacy protocol versions. 
> Supporting 0.9 with newer clients can drastically simplify upgrades, allow 
> for libraries and frameworks to easily support a wider set of environments, 
> and let developers take advantage of client side improvements without 
> requiring cluster upgrades first. 
> Below is a list of the wire protocol versions by release for reference: 
> {noformat}
> 0.10.x
>   Produce(0): 0 to 2
>   Fetch(1): 0 to 2 
>   Offsets(2): 0
>   Metadata(3): 0 to 1
>   OffsetCommit(8): 0 to 2
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): 0
>   Heartbeat(12): 0
>   LeaveGroup(13): 0
>   SyncGroup(14): 0
>   DescribeGroups(15): 0
>   ListGroups(16): 0
>   SaslHandshake(17): 0
>   ApiVersions(18): 0
> 0.9.x:
>   Produce(0): 0 to 1 (no response timestamp from v2)
>   Fetch(1): 0 to 1 (no response timestamp from v2)
>   Offsets(2): 0
>   Metadata(3): 0 (no cluster id or rack info from v1)
>   OffsetCommit(8): 0 to 2
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): 0
>   Heartbeat(12): 0
>   LeaveGroup(13): 0
>   SyncGroup(14): 0
>   DescribeGroups(15): 0
>   ListGroups(16): 0
>   SaslHandshake(17): UNSUPPORTED
>   ApiVersions(18): UNSUPPORTED
> 0.8.2.x:
>   Produce(0): 0 (no quotas from v1)
>   Fetch(1): 0 (no quotas from v1)
>   Offsets(2): 0
>   Metadata(3): 0
>   OffsetCommit(8): 0 to 1 (no global retention time from v2)
>   OffsetFetch(9): 0 to 1
>   GroupCoordinator(10): 0
>   JoinGroup(11): UNSUPPORTED
>   Heartbeat(12): UNSUPPORTED
>   LeaveGroup(13): UNSUPPORTED
>   SyncGroup(14): UNSUPPORTED
>   DescribeGroups(15): UNSUPPORTED
>   ListGroups(16): UNSUPPORTED
>   SaslHandshake(17): UNSUPPORTED
>   ApiVersions(18): UNSUPPORTED
> {noformat}
> Note: Due to KAFKA-3088 it may take up to request.timeout.ms to fail an 
> ApiVersionsRequest and fall back to legacy protocol versions unless we handle 
> that scenario specifically in this patch. The workaround would be to reduce 
> request.timeout.ms if needed.
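
A hedged, self-contained sketch of the fallback idea described above (treat a disconnect on the ApiVersions request as a 0.9 broker and pin legacy protocol versions); the types and helpers below are illustrative placeholders, not the real Kafka client internals.

{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Illustrative only: if the ApiVersions request causes a disconnect, assume a
// 0.9 broker and default to the legacy maximum protocol versions listed above.
class LegacyBrokerFallback {

    interface Transport {                                  // stand-in for the network client
        Map<String, Integer> requestApiVersions() throws IOException;
    }

    static Map<String, Integer> negotiate(Transport transport) {
        try {
            return transport.requestApiVersions();         // brokers >= 0.10.0.0 answer this request
        } catch (IOException disconnected) {
            // 0.9 brokers close the connection on the unknown ApiVersions request,
            // so fall back to the 0.9 maximums (subset shown for brevity).
            Map<String, Integer> legacy = new HashMap<>();
            legacy.put("Produce", 1);
            legacy.put("Fetch", 1);
            legacy.put("Metadata", 0);
            return legacy;
        }
    }
}
{code}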



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-4468) Correctly calculate the window end timestamp after read from state stores

2017-08-26 Thread Richard Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142877#comment-16142877
 ] 

Richard Yu edited comment on KAFKA-4468 at 8/26/17 6:10 PM:


When looking through the references to WindowedDeserializer, it appears that 
there were none other than unit tests. 

The other approach is to look through the window store classes for the window 
size. Since RocksDBWindowStore is the most commonly used implementation, I 
looked through that class and found that when RocksDBWindowStore instantiates a 
window, it uses the class WindowStoreIteratorWrapper. In lines 54 - 58 of that 
class, this is what I found:

{code}
@Override
public Windowed<Bytes> peekNextKey() {
    final Bytes next = bytesIterator.peekNextKey();
    final long timestamp = WindowStoreUtils.timestampFromBinaryKey(next.get());
    final Bytes key = WindowStoreUtils.bytesKeyFromBinaryKey(next.get());
    return new Windowed<>(key, WindowStoreUtils.timeWindowForSize(timestamp, windowSize));
}
{code}

In this method, {{windowSize}} is known already. Therefore, this is a working 
implementation of a window of limited duration. 

However, since we do not know the window size for WindowedDeserializer, it must 
be retrievable somewhere in the series of calls in which WindowedDeserializer 
is involved. When looking for the window length, though, it appears that no 
such sequence of calls exists between WindowedDeserializer and the window 
store.

I am considering some other practical way to retrieve the length of the window.


was (Author: yohan123):
When looking through the references to WindowedDeserializer, it appears that 
there were none other than unit tests. 

The other approach is to look through the window store classes for the window 
size. Since RocksDBWindowStore is the most commonly used implementation, I 
looked through that class and found that when RocksDBWindowStore instantiates a 
window, it uses the class WindowStoreIteratorWrapper. In lines 54 - 58 of that 
class, this is what I found:

{code}
@Override
public Windowed<Bytes> peekNextKey() {
    final Bytes next = bytesIterator.peekNextKey();
    final long timestamp = WindowStoreUtils.timestampFromBinaryKey(next.get());
    final Bytes key = WindowStoreUtils.bytesKeyFromBinaryKey(next.get());
    return new Windowed<>(key, WindowStoreUtils.timeWindowForSize(timestamp, windowSize));
}
{code}

In this method, {{windowSize}} is already known to the class. Therefore, this 
is a working implementation of a window with a fixed, limited lifetime, since 
the size is already known. 

In other words, for us to find the fixed window length, it must be found 
somewhere in the series of calls in which WindowedDeserializer is involved. 
However, when looking for it in this sequence of calls between the window store 
and WindowedDeserializer, the window length could not be found.

In other words, there might not be a practical way to retrieve the length of 
the window.

> Correctly calculate the window end timestamp after read from state stores
> -
>
> Key: KAFKA-4468
> URL: https://issues.apache.org/jira/browse/KAFKA-4468
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
>
> When storing the WindowedStore on the persistent KV store, we only use the 
> start timestamp of the window as part of the combo-key as (start-timestamp, 
> key). The reason that we do not add the end-timestamp as well is that we can 
> always calculate it from the start timestamp + window_length, and hence we 
> can save 8 bytes per key on the persistent KV store.
> However, after reading it back (via {{WindowedDeserializer}}) we do not set 
> its end timestamp correctly but just read it as an {{UnlimitedWindow}}. We 
> should fix this by calculating its end timestamp as mentioned above.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5787) StoreChangeLogReader needs to restore partitions that were added post initialization

2017-08-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142879#comment-16142879
 ] 

ASF GitHub Bot commented on KAFKA-5787:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3736


> StoreChangeLogReader needs to restore partitions that were added post 
> initialization
> 
>
> Key: KAFKA-5787
> URL: https://issues.apache.org/jira/browse/KAFKA-5787
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.1, 1.0.0
>Reporter: Damian Guy
>Assignee: Damian Guy
>Priority: Blocker
>
> Investigation of {{KStreamRepartitionJoinTest}} failures uncovered this bug. 
> If a task fails during initialization due to a {{LockException}}, its 
> changelog partitions are not immediately added to the 
> {{StoreChangelogReader}}, as the thread doesn't hold the lock. However, 
> {{StoreChangelogReader#restore}} will still be called, and it sets the 
> initialized flag. On a subsequent successful call to initialize the new 
> tasks, the partitions are added to the {{StoreChangelogReader}}, but as it is 
> already initialized these new partitions will never be restored, so the task 
> will remain in a non-running state forever.
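
A hedged, simplified sketch of the flawed flow described above (hypothetical code, not the real StoreChangelogReader): once the initialized flag is set, partitions registered later are silently skipped.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative only: shows why partitions added after the first restore() call
// are never restored when a one-shot "initialized" flag guards the restore loop.
class ChangelogReaderSketch {
    private final List<String> partitions = new ArrayList<>();
    private boolean initialized = false;

    void register(String partition) {            // called when a task is (re)initialized
        partitions.add(partition);
    }

    void restore() {
        if (initialized) {
            return;                              // bug: late-registered partitions are skipped forever
        }
        initialized = true;
        for (String p : partitions) {
            System.out.println("restoring " + p);
        }
    }
}
{code}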



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-4468) Correctly calculate the window end timestamp after read from state stores

2017-08-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142916#comment-16142916
 ] 

ASF GitHub Bot commented on KAFKA-4468:
---

GitHub user ConcurrencyPractitioner opened a pull request:

https://github.com/apache/kafka/pull/3745

[KAFKA-4468] Correctly calculate the window end timestamp after read from 
state stores

I have decided to use the following approach to fixing this bug:

1) Since the window size in WindowedDeserializer was originally unknown, I have 
added a field _windowSize_ and created a constructor that allows it to be set.

2) The default value for _windowSize_ is _Long.MAX_VALUE_. If that is the case, 
the deserialize method will return an unlimited window; otherwise it will 
return a timed one (see the sketch after this list).

3) The Temperature demo was modified to demonstrate how to use this new 
constructor, given that the window size is known.
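
A hedged sketch of what this change might look like, based only on the description above; the constructor, field name, and binary key layout are assumptions, not necessarily the code in the actual pull request.

{code:java}
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Map;

import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.streams.kstream.Window;
import org.apache.kafka.streams.kstream.Windowed;
import org.apache.kafka.streams.kstream.internals.TimeWindow;
import org.apache.kafka.streams.kstream.internals.UnlimitedWindow;

// Hedged sketch of the proposed change; details may differ from PR 3745.
public class WindowedDeserializer<T> implements Deserializer<Windowed<T>> {

    private final Deserializer<T> inner;
    private final long windowSize;                        // end = start + windowSize

    public WindowedDeserializer(Deserializer<T> inner) {
        this(inner, Long.MAX_VALUE);                      // previous behaviour: unlimited window
    }

    public WindowedDeserializer(Deserializer<T> inner, long windowSize) {
        this.inner = inner;
        this.windowSize = windowSize;
    }

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) { }

    @Override
    public Windowed<T> deserialize(String topic, byte[] data) {
        // assumption: the binary key is [serialized key][8-byte window start timestamp]
        final long start = ByteBuffer.wrap(data, data.length - 8, 8).getLong();
        final byte[] keyBytes = Arrays.copyOfRange(data, 0, data.length - 8);
        final T key = inner.deserialize(topic, keyBytes);
        final Window window = windowSize == Long.MAX_VALUE
                ? new UnlimitedWindow(start)                   // end timestamp unknown
                : new TimeWindow(start, start + windowSize);   // end derived from start + size
        return new Windowed<>(key, window);
    }

    @Override
    public void close() { }
}
{code}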


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ConcurrencyPractitioner/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/3745.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3745


commit 7dcf42e110dadd3f257c7ea1a0d10adc60bd0eea
Author: Richard Yu 
Date:   2017-08-26T20:18:48Z

KAFKA-4468 Correctly calculate the window end timestamp after read from 
state stores




> Correctly calculate the window end timestamp after read from state stores
> -
>
> Key: KAFKA-4468
> URL: https://issues.apache.org/jira/browse/KAFKA-4468
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
>
> When storing the WindowedStore on the persistent KV store, we only use the 
> start timestamp of the window as part of the combo-key as (start-timestamp, 
> key). The reason that we do not add the end-timestamp as well is that we can 
> always calculate it from the start timestamp + window_length, and hence we 
> can save 8 bytes per key on the persistent KV store.
> However, after reading it back (via {{WindowedDeserializer}}) we do not set 
> its end timestamp correctly but just read it as an {{UnlimitedWindow}}. We 
> should fix this by calculating its end timestamp as mentioned above.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (KAFKA-3916) Connection from controller to broker disconnects

2017-08-26 Thread Aman Choudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142922#comment-16142922
 ] 

Aman Choudhary edited comment on KAFKA-3916 at 8/26/17 8:30 PM:


Hi,
 I am using Kafka version 0.10.0.2 in a production environment. I am facing 
issues on my Kafka broker and consumer machines that are very similar to the 
issue described here.
Controller logs are very similar to the ones described above:

 
||Heading 1||Heading 2||
|WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-5-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-5-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-2:9092 (id: 5 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-4-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-4-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-3:9092 (id: 4 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-2-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-2-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null}]}
 to broker host-5:9092 (id: 2 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-3-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-3-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-1:9092 (id: 3 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-1-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-1-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null}]}
 to broker host-4:9092 (id: 1 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-6-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-6-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,securi

[jira] [Commented] (KAFKA-3916) Connection from controller to broker disconnects

2017-08-26 Thread Aman Choudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142922#comment-16142922
 ] 

Aman Choudhary commented on KAFKA-3916:
---

Hi,
 I am using Kafka version 0.10.0.2 in a production environment. I am facing 
issues on my Kafka broker and consumer machines that are very similar to the 
issue described here.
Controller logs are very similar to the ones described above:

 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-5-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-5-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-2:9092 (id: 5 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-4-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-4-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-3:9092 (id: 4 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-2-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-2-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null}]}
 to broker host-5:9092 (id: 2 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-3-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-3-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-1:9092 (id: 3 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-1-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-1-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null}]}
 to broker host-4:9092 (id: 1 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-6-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-6-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null}]}
 to broker host-6:9092 (id: 6 rack: null).

[jira] [Comment Edited] (KAFKA-3916) Connection from controller to broker disconnects

2017-08-26 Thread Aman Choudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142922#comment-16142922
 ] 

Aman Choudhary edited comment on KAFKA-3916 at 8/26/17 8:31 PM:


Hi,
 I am using Kafka version 0.10.0.2 in a production environment. I am facing 
issues on my Kafka broker and consumer machines that are very similar to the 
issue described here.
Controller logs are very similar to the ones described above:



{panel:title=My title}
|WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-5-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-5-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-2:9092 (id: 5 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-4-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-4-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-3:9092 (id: 4 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-2-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-2-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null}]}
 to broker host-5:9092 (id: 2 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-3-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-3-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-1:9092 (id: 3 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-1-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-1-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null}]}
 to broker host-4:9092 (id: 1 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-6-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-6-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security

[jira] [Comment Edited] (KAFKA-3916) Connection from controller to broker disconnects

2017-08-26 Thread Aman Choudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142922#comment-16142922
 ] 

Aman Choudhary edited comment on KAFKA-3916 at 8/26/17 8:32 PM:


Hi,
 I am using Kafka version 0.10.0.2 in a production environment. I am facing 
issues on my Kafka broker and consumer machines that are very similar to the 
issue described here.
Controller logs are very similar to the ones described above:



{code:java}
|WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-5-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-5-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-2:9092 (id: 5 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-4-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-4-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-3:9092 (id: 4 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-2-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-2-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null}]}
 to broker host-5:9092 (id: 2 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-3-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-3-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-1:9092 (id: 3 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-1-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-1-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null}]}
 to broker host-4:9092 (id: 1 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-6-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-6-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_t

[jira] [Comment Edited] (KAFKA-3916) Connection from controller to broker disconnects

2017-08-26 Thread Aman Choudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142922#comment-16142922
 ] 

Aman Choudhary edited comment on KAFKA-3916 at 8/26/17 8:32 PM:


Hi,
 I am using Kafka version 0.10.0.2 in a production environment. I am facing 
issues on my Kafka broker and consumer machines that are very similar to the 
issue described here.
Controller logs are very similar to the ones described above:


|WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-5-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-5-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-2:9092 (id: 5 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-4-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-4-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-3:9092 (id: 4 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-2-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-2-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null}]}
 to broker host-5:9092 (id: 2 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-3-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-3-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-1:9092 (id: 3 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-1-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-1-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null}]}
 to broker host-4:9092 (id: 1 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-6-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-6-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=

[jira] [Comment Edited] (KAFKA-3916) Connection from controller to broker disconnects

2017-08-26 Thread Aman Choudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142922#comment-16142922
 ] 

Aman Choudhary edited comment on KAFKA-3916 at 8/26/17 8:36 PM:


Hi,
 I am using Kafka version 0.10.0.2 in a production environment. I am facing 
issues on my Kafka broker and consumer machines that are very similar to the 
issue described here.
Controller logs are very similar to the ones described above:



{code:java}
|WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-5-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-5-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-2:9092 (id: 5 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-4-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-4-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-3:9092 (id: 4 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,204] [Controller-6-to-broker-2-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-2-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null}]}
 to broker host-5:9092 (id: 2 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-3-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-3-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]}],live_leaders=[{id=3,host=host-1,port=9092}]}
 to broker host-1:9092 (id: 3 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-1-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-1-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null}]}
 to broker host-4:9092 (id: 1 rack: null). Reconnecting to broker.
 WARN [2017-08-26 19:19:27,205] [Controller-6-to-broker-6-send-thread][] 
kafka.controller.RequestSendThread - [Controller-6-to-broker-6-send-thread], 
Controller 6 epoch 34 fails to send request 
{controller_id=6,controller_epoch=34,partition_states=[{topic=topic-1,partition=0,controller_epoch=34,leader=3,leader_epoch=0,isr=[3,4,5],zk_version=0,replicas=[3,4,5]},{topic=topic-4,partition=0,controller_epoch=34,leader=-2,leader_epoch=0,isr=[],zk_version=0,replicas=[0]}],live_brokers=[{id=5,end_points=[{port=9092,host=host-2,security_protocol_type=0}],rack=null},{id=6,end_points=[{port=9092,host=host-6,security_protocol_type=0}],rack=null},{id=2,end_points=[{port=9092,host=host-5,security_protocol_type=0}],rack=null},{id=3,end_points=[{port=9092,host=host-1,security_protocol_type=0}],rack=null},{id=4,end_points=[{port=9092,host=host-3,security_protocol_type=0}],rack=null},{id=1,end_points=[{port=9092,host=host-4,security_protocol_t
{code}

[jira] [Commented] (KAFKA-5460) Documentation on website uses word-breaks resulting in confusion

2017-08-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142957#comment-16142957
 ] 

ASF GitHub Bot commented on KAFKA-5460:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka-site/pull/63


> Documentation on website uses word-breaks resulting in confusion
> 
>
> Key: KAFKA-5460
> URL: https://issues.apache.org/jira/browse/KAFKA-5460
> Project: Kafka
>  Issue Type: Bug
>Reporter: Karel Vervaeke
> Attachments: Screen Shot 2017-06-16 at 14.45.40.png, Screenshot from 
> 2017-06-23 14-45-02.png
>
>
> Documentation seems to suggest there is a configuration property 
> auto.off-set.reset but it really is auto.offset.reset.
> We should look into disabling the word-break CSS properties (globally or at 
> least in the configuration reference tables).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-5620) SerializationException in doSend() masks class cast exception

2017-08-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-5620.
--
   Resolution: Fixed
Fix Version/s: 1.0.0

Issue resolved by pull request 3556
[https://github.com/apache/kafka/pull/3556]

> SerializationException in doSend() masks class cast exception
> -
>
> Key: KAFKA-5620
> URL: https://issues.apache.org/jira/browse/KAFKA-5620
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Jeremy Custenborder
>Assignee: Jeremy Custenborder
> Fix For: 1.0.0
>
>
> I misconfigured my Serializer and passed a byte array to BytesSerializer. 
> This caused the following exception to be thrown. 
> {code}
> org.apache.kafka.common.errors.SerializationException: Can't convert value of 
> class [B to class org.apache.kafka.common.serialization.BytesSerializer 
> specified in value.serializer
> {code}
> This doesn't provide much detail because it strips the ClassCastException. It 
> made figuring this out much more difficult. The real value was the inner 
> exception which was:
> {code}
> [B cannot be cast to org.apache.kafka.common.utils.Bytes
> {code}
> We should include the ClassCastException.
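
A minimal sketch of the suggested change, assuming the wrapping happens where the 
serializer call fails; the helper class and method names below are illustrative 
and do not mirror the actual {{KafkaProducer#doSend()}} internals:

{code:java}
// Illustrative sketch only; names are hypothetical, not the real producer internals.
import org.apache.kafka.common.errors.SerializationException;
import org.apache.kafka.common.serialization.Serializer;

final class ValueSerializationHelper {

    static <V> byte[] serializeValue(final Serializer<V> valueSerializer,
                                     final String topic,
                                     final V value) {
        try {
            return valueSerializer.serialize(topic, value);
        } catch (final ClassCastException cce) {
            // Attach the ClassCastException as the cause instead of dropping it, so the
            // "[B cannot be cast to ..." detail stays visible in the stack trace.
            throw new SerializationException(
                    "Can't convert value of class " + value.getClass().getName() +
                    " to class " + valueSerializer.getClass().getName() +
                    " specified in value.serializer",
                    cce);
        }
    }
}
{code}

With the cause attached, the reported stack trace would show both the 
SerializationException and the underlying cast failure.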



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KAFKA-5620) SerializationException in doSend() masks class cast exception

2017-08-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16142967#comment-16142967
 ] 

ASF GitHub Bot commented on KAFKA-5620:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/3556


> SerializationException in doSend() masks class cast exception
> -
>
> Key: KAFKA-5620
> URL: https://issues.apache.org/jira/browse/KAFKA-5620
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.11.0.0
>Reporter: Jeremy Custenborder
>Assignee: Jeremy Custenborder
> Fix For: 1.0.0
>
>
> I misconfigured my Serializer and passed a byte array to BytesSerializer. 
> This caused the following exception to be thrown. 
> {code}
> org.apache.kafka.common.errors.SerializationException: Can't convert value of 
> class [B to class org.apache.kafka.common.serialization.BytesSerializer 
> specified in value.serializer
> {code}
> This doesn't provide much detail because it strips the ClassCastException. It 
> made figuring this out much more difficult. The real value was the inner 
> exception which was:
> {code}
> [B cannot be cast to org.apache.kafka.common.utils.Bytes
> {code}
> We should include the ClassCastException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5797) StoreChangelogReader should be resilient to broker-side metadata not available

2017-08-26 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-5797:


 Summary: StoreChangelogReader should be resilient to broker-side 
metadata not available
 Key: KAFKA-5797
 URL: https://issues.apache.org/jira/browse/KAFKA-5797
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Guozhang Wang
Assignee: Guozhang Wang


In {{StoreChangelogReader#validatePartitionExists}}, if the metadata for the 
required partition is not available, or a timeout exception is thrown, today 
the function directly throws the exception all the way up to the user's 
exception handlers.

Since we have now extracted the restoration out of the consumer callback, a 
better way to handle this is to only validate the partition during restoration; 
if it does not exist, we can simply proceed and retry in the next loop, as in 
the sketch below.
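
A minimal sketch of that retry-on-restore behaviour, under the assumption of a 
hypothetical reader class; the names below are illustrative and do not mirror the 
real StoreChangelogReader internals:

{code:java}
// Illustrative sketch only; names are hypothetical and do not mirror the real
// StoreChangelogReader internals.
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.TimeoutException;

class RetryingChangelogReader {

    private final Consumer<byte[], byte[]> restoreConsumer;
    private final Set<TopicPartition> needsInitializing = new HashSet<>();

    RetryingChangelogReader(final Consumer<byte[], byte[]> restoreConsumer) {
        this.restoreConsumer = restoreConsumer;
    }

    // Called from the restore loop, not from the consumer rebalance callback.
    void maybeRestore(final TopicPartition partition) {
        try {
            final List<PartitionInfo> partitions = restoreConsumer.partitionsFor(partition.topic());
            if (partitions == null || partitions.isEmpty()) {
                // Metadata not available yet: remember the partition and retry on the
                // next loop instead of throwing.
                needsInitializing.add(partition);
                return;
            }
        } catch (final TimeoutException e) {
            // Metadata lookup timed out: also retry later rather than surfacing the
            // exception to the user's exception handler.
            needsInitializing.add(partition);
            return;
        }
        needsInitializing.remove(partition);
        restore(partition);
    }

    private void restore(final TopicPartition partition) {
        // Actual restoration of the changelog records would happen here.
    }
}
{code}

Partitions left in {{needsInitializing}} would simply be picked up again on the 
next restore attempt.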



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-5798) Couldnt connect to mysql using mysql.jdbc driver

2017-08-26 Thread Bubesh Shankar Govindarajan (JIRA)
Bubesh Shankar Govindarajan created KAFKA-5798:
--

 Summary: Couldnt connect to mysql using mysql.jdbc driver
 Key: KAFKA-5798
 URL: https://issues.apache.org/jira/browse/KAFKA-5798
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 0.10.0.1
 Environment: Ubuntu
Reporter: Bubesh Shankar Govindarajan


I am a beginner to both Java and Kafka, trying to connect Kafka and MySQL to 
stream data from a MySQL database and consume it via Kafka consumers. 

I have downloaded Confluent 3.3.0 from the link below: 
https://www.confluent.io/download/

I am using - *confluent-3.3.0*
*+Java version :+*
$ java -version
openjdk version "9-Ubuntu"
OpenJDK Runtime Environment (build 9-Ubuntu+0-9b161-1)
OpenJDK Server VM (build 9-Ubuntu+0-9b161-1, mixed mode)

Mysql JDBC driver: *com.mysql.jdbc_5.1.5.jar*

I have started the zookeeper and kafka server  using the below commands

* zookeeper-server-start 
/home/bubesh/Kafka/confluent-3.3.0/etc/kafka/zookeeper.properties
* kafka-server-start 
/home/bubesh/Kafka/confluent-3.3.0/etc/kafka/server.properties

*+I have used the below command to invoke Kafka Connect to connect to MySQL 
via the JDBC driver:+*

connect-standalone 
/home/bubesh/Kafka/confluent-3.3.0/etc/schema-registry/connect-avro-standalone.properties
 
/home/bubesh/Kafka/confluent-3.3.0/etc/kafka-connect-jdbc/source-quickstart-mysql.properties

*+My CLASSPATH variable has: +*

/home/bubesh/JDBCDriver/com.mysql.jdbc_5.1.5.jar::/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/confluent-common/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-serde-tools/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-connect-elasticsearch/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-connect-hdfs/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-connect-jdbc/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-connect-s3/*:/home/bubesh/Kafka/confluent-3.3.0/share/java/kafka-connect-storage-common/*


*+Error while running the command:+*
Error: Config file not found: 
/usr/lib/jvm/java-9-openjdk-i386/conf/management/management.properties

*+connect-avro-standalone.properties:+*

bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets

I use a VMware VM with Ubuntu installed in it, and my MySQL server is installed 
on my host machine. I have also connected to this 

*+ipconfig of the host:+*

 Connection-specific DNS Suffix  . : home
   Link-local IPv6 Address . . . . . : fe80::c81:5928:4ee6:5a2d%9
   IPv4 Address. . . . . . . . . . . : 192.168.1.10
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.1.1

*+source-quickstart-mysql.properties:+*

name=mysql-whitelist-timestamp-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1

connection.url=jdbc:mysql://192.168.1.10:3306/sandbox?user=bubesh&password=bubesh21
query=SELECT * from temp1
mode=incrementing
incrementing.column.name=c1

topic.prefix=mysql-test-topic1
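
As a hedged aside, a minimal standalone check of the same connection URL and 
driver, outside Kafka Connect, could look like the sketch below; the class name 
is made up, and it assumes com.mysql.jdbc_5.1.5.jar is on the classpath:

{code:java}
// Hypothetical standalone check; it only verifies that the JDBC URL and driver from
// source-quickstart-mysql.properties work, independently of Kafka Connect.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MySqlConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Explicitly register the Connector/J 5.1 driver class.
        Class.forName("com.mysql.jdbc.Driver");
        final String url =
                "jdbc:mysql://192.168.1.10:3306/sandbox?user=bubesh&password=bubesh21";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM temp1")) {
            while (rs.next()) {
                // Print the incrementing column used by the connector config (c1).
                System.out.println(rs.getObject("c1"));
            }
        }
    }
}
{code}

If such a check failed, the problem would more likely sit with the driver jar or 
with network reachability from the VM to 192.168.1.10 than with Connect itself.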




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)