[jira] [Created] (KAFKA-7631) NullPointerException when SCRAM is allowed but ScramLoginModule is not in broker's jaas.conf

2018-11-15 Thread Andras Beni (JIRA)
Andras Beni created KAFKA-7631:
--

 Summary: NullPointerException when SCRAM is allowed but 
ScramLoginModule is not in broker's jaas.conf
 Key: KAFKA-7631
 URL: https://issues.apache.org/jira/browse/KAFKA-7631
 Project: Kafka
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.0
Reporter: Andras Beni


When a user wants to use delegation tokens and lists {{SCRAM}} in 
{{sasl.enabled.mechanisms}}, but does not add {{ScramLoginModule}} to the broker's 
JAAS configuration, a NullPointerException is thrown on the broker side and the 
connection is closed.

A meaningful error message should be logged and sent back to the client.
{code}
java.lang.NullPointerException
at 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:376)
at 
org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:262)
at 
org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:127)
at 
org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:489)
at org.apache.kafka.common.network.Selector.poll(Selector.java:427)
at kafka.network.Processor.poll(SocketServer.scala:679)
at kafka.network.Processor.run(SocketServer.scala:584)
at java.lang.Thread.run(Thread.java:748)
{code}
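For context, a broker JAAS entry that registers the SCRAM login module could look like the sketch below. This is an illustration only, not a verified configuration: real deployments typically add module options, and which options are needed depends on the setup.

{code}
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required;
};
{code}

With such an entry present, the broker should at least be able to construct the SCRAM SaslServer instead of hitting the NullPointerException above.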



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7630) Clarify that broker doesn't need scram username/password for delegation tokens

2018-11-14 Thread Andras Beni (JIRA)
Andras Beni created KAFKA-7630:
--

 Summary: Clarify that broker doesn't need scram username/password 
for delegation tokens
 Key: KAFKA-7630
 URL: https://issues.apache.org/jira/browse/KAFKA-7630
 Project: Kafka
  Issue Type: Improvement
  Components: documentation, security
Affects Versions: 2.0.0
Reporter: Andras Beni


[Documentation|https://kafka.apache.org/documentation/#security_token_authentication]
 on delegation tokens refers to SCRAM 
[configuration|https://kafka.apache.org/documentation/#security_sasl_scram_brokerconfig]
 section. However, in a setup where only delegation tokens use SCRAM and all 
other authentication goes via Kerberos, {{ScramLoginModule}} does not need 
{{username}} and {{password}}.

This is not obvious from the documentation.

I believe the same is true for setups where SCRAM is used by clients but inter-broker 
communication is GSSAPI or PLAIN, but I have not tested it.
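A sketch of what such a broker JAAS entry could look like in the Kerberos-plus-delegation-tokens setup described above. The keytab path and principal are made-up placeholders, and the exact module options depend on the deployment; the point is only that the {{ScramLoginModule}} line carries no {{username}}/{{password}} options.

{code}
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";

  org.apache.kafka.common.security.scram.ScramLoginModule required;
};
{code}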

 





Re: [VOTE] 2.1.0 RC1

2018-11-13 Thread Andras Beni
+1 (non-binding)

Verified signatures and checksums of release artifacts
Performed quickstart steps on rc artifacts (both scala 2.11 and 2.12)

Andras

On Tue, Nov 13, 2018 at 10:51 AM Eno Thereska wrote:

> Built code and ran tests. Getting a single integration test failure:
>
> kafka.log.LogCleanerParameterizedIntegrationTest >
> testCleansCombinedCompactAndDeleteTopic[3] FAILED
> java.lang.AssertionError: Contents of the map shouldn't change
> expected:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 ->
> (354,354), 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353),
> 2 -> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 8 ->
> (348,348), 19 -> (359,359), 4 -> (344,344), 15 -> (355,355))> but
> was:<Map(0 -> (340,340), 5 -> (345,345), 10 -> (350,350), 14 -> (354,354),
> 1 -> (341,341), 6 -> (346,346), 9 -> (349,349), 13 -> (353,353), 2 ->
> (342,342), 17 -> (357,357), 12 -> (352,352), 7 -> (347,347), 3 ->
> (343,343), 18 -> (358,358), 16 -> (356,356), 11 -> (351,351), 99 ->
> (299,299), 8 -> (348,348), 19 -> (359,359), 4 -> (344,344), 15 ->
> (355,355))>
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:834)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at
>
> kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(LogCleanerParameterizedIntegrationTest.scala:129)
>
> Thanks
> Eno
>
> On Sun, Nov 11, 2018 at 7:34 PM Jonathan Santilli <
> jonathansanti...@gmail.com> wrote:
>
> > Hello,
> >
> > +1
> >
> > I have downloaded the release artifact from
> > http://home.apache.org/~lindong/kafka-2.1.0-rc1/
> > Executed a 3 brokers cluster. (java8 8u192b12)
> > Executed kafka-monitor for about 1 hour without problems.
> >
> > Thanks,
> > --
> > Jonathan
> >
> >
> > On Fri, Nov 9, 2018 at 11:33 PM Dong Lin  wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the second candidate for feature release of Apache Kafka 2.1.0.
> > >
> > > This is a major version release of Apache Kafka. It includes 28 new
> > > KIPs and critical bug fixes. Please see the Kafka 2.1.0 release plan for
> > > more details:
> > >
> > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044
> > >
> > > Here are a few notable highlights:
> > >
> > > - Java 11 support
> > > - Support for Zstandard, which achieves compression comparable to gzip
> > > with higher compression and especially decompression speeds (KIP-110)
> > > - Avoid expiring committed offsets for active consumer group (KIP-211)
> > > - Provide Intuitive User Timeouts in The Producer (KIP-91)
> > > - Kafka's replication protocol now supports improved fencing of zombies.
> > > Previously, under certain rare conditions, if a broker became partitioned
> > > from Zookeeper but not the rest of the cluster, then the logs of
> > > replicated partitions could diverge and cause data loss in the worst case
> > > (KIP-320)
> > > - Streams API improvements (KIP-319, KIP-321, KIP-330, KIP-353, KIP-356)
> > > - Admin script and admin client API improvements to simplify admin
> > > operation (KIP-231, KIP-308, KIP-322, KIP-324, KIP-338, KIP-340)
> > > - DNS handling improvements (KIP-235, KIP-302)
> > > Release notes for the 2.1.0 release:
> > > http://home.apache.org/~lindong/kafka-2.1.0-rc0/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Thursday, Nov 15, 12 pm PT ***
> > >
> > > * Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~lindong/kafka-2.1.0-rc1/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Javadoc:
> > > http://home.apache.org/~lindong/kafka-2.1.0-rc1/javadoc/
> > >
> > > * Tag to be voted upon (off 2.1 branch) is the 2.1.0-rc1 tag:
> > > https://github.com/apache/kafka/tree/2.1.0-rc1
> > >
> > > * Documentation:
> > > http://kafka.apache.org/21/documentation.html
> > >
> > > * Protocol:
> > > http://kafka.apache.org/21/protocol.html
> > >
> > > * Successful Jenkins builds for the 2.1 branch:
> > > Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/50/
> > >
> > > Please test and verify the release artifacts and submit a vote for this
> > > RC, or report any issues so we can fix them and get a new RC out ASAP.
> > > Although this release vote requires PMC votes to pass, testing, votes,
> > > and bug reports are valuable and appreciated from everyone.
> > >
> > > Cheers,
> > > Dong
> > >
> >
> >

Re: [VOTE] 2.1.0 RC0

2018-10-24 Thread Andras Beni
+1 (non-binding)

Verified signatures and checksums of release artifacts
Performed quickstart steps on rc artifacts (both scala 2.11 and 2.12) and
one built from tag 2.1.0-rc0

Andras

On Wed, Oct 24, 2018 at 10:17 AM Dong Lin  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for feature release of Apache Kafka 2.1.0.
>
> This is a major version release of Apache Kafka. It includes 28 new KIPs
> and critical bug fixes. Please see the Kafka 2.1.0 release plan for more
> details:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044
>
> Here are a few notable highlights:
>
> - Java 11 support
> - Support for Zstandard, which achieves compression comparable to gzip with
> higher compression and especially decompression speeds (KIP-110)
> - Avoid expiring committed offsets for active consumer group (KIP-211)
> - Provide Intuitive User Timeouts in The Producer (KIP-91)
> - Kafka's replication protocol now supports improved fencing of zombies.
> Previously, under certain rare conditions, if a broker became partitioned
> from Zookeeper but not the rest of the cluster, then the logs of replicated
> partitions could diverge and cause data loss in the worst case (KIP-320)
> - Streams API improvements (KIP-319, KIP-321, KIP-330, KIP-353, KIP-356)
> - Admin script and admin client API improvements to simplify admin
> operation (KIP-231, KIP-308, KIP-322, KIP-324, KIP-338, KIP-340)
> - DNS handling improvements (KIP-235, KIP-302)
>
> Release notes for the 2.1.0 release:
> http://home.apache.org/~lindong/kafka-2.1.0-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote ***
>
> * Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~lindong/kafka-2.1.0-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~lindong/kafka-2.1.0-rc0/javadoc/
>
> * Tag to be voted upon (off 2.1 branch) is the 2.1.0-rc0 tag:
> https://github.com/apache/kafka/tree/2.1.0-rc0
>
> * Documentation:
> http://kafka.apache.org/21/documentation.html
>
> * Protocol:
> http://kafka.apache.org/21/protocol.html
>
> * Successful Jenkins builds for the 2.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/38/
>
> Please test and verify the release artifacts and submit a vote for this RC,
> or report any issues so we can fix them and get a new RC out ASAP. Although
> this release vote requires PMC votes to pass, testing, votes, and bug
> reports are valuable and appreciated from everyone.
>
> Cheers,
> Dong
>


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Andras Beni
Congratulations, Manikumar!

Srinivas Reddy  wrote (on Fri, Oct 12, 2018 at 3:00):

> Congratulations Mani. Well deserved 👍
>
> -
> Srinivas
>
> - Typed on tiny keys. pls ignore typos.{mobile app}
>
> On Fri 12 Oct, 2018, 01:39 Jason Gustafson,  wrote:
>
> > Hi all,
> >
> > The PMC for Apache Kafka has invited Manikumar Reddy as a committer and
> > we are pleased to announce that he has accepted!
> >
> > Manikumar has contributed 134 commits including significant work to add
> > support for delegation tokens in Kafka:
> >
> > KIP-48:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > KIP-249:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> >
> > He has broad experience working with many of the core components in
> > Kafka and he has reviewed over 80 PRs. He has also made huge progress
> > addressing some of our technical debt.
> >
> > We appreciate the contributions and we are looking forward to more.
> > Congrats Manikumar!
> >
> > Jason, on behalf of the Apache Kafka PMC
> >
>


Re: KIP-327: Add describe all topics API to AdminClient

2018-07-13 Thread Andras Beni
The KIP looks good to me.
However, if there is willingness in the community to work on metadata
request with patterns, the feature proposed here and filtering by '*' or
'.*' would be redundant.

Andras
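To make the wildcard idea discussed in this thread concrete, here is a small client-side sketch of filtering topic names with an ACL-style '*' wildcard. The class and method names are invented for illustration; this is not an existing Kafka API, just one way such matching could behave.

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Hypothetical sketch: translate an ACL-style wildcard ('*' matches any
// run of characters) into a regex, and filter a topic list with it.
public class TopicFilterSketch {
    static Pattern wildcardToRegex(String wildcard) {
        // Quote the whole pattern literally, then re-open quoting around
        // each '*' so it becomes the regex '.*'. (Sketch only: inputs
        // containing \Q or \E are not handled.)
        String regex = Pattern.quote(wildcard).replace("*", "\\E.*\\Q");
        return Pattern.compile(regex);
    }

    static List<String> filterTopics(List<String> topics, String wildcard) {
        Pattern p = wildcardToRegex(wildcard);
        return topics.stream()
                     .filter(t -> p.matcher(t).matches())
                     .collect(Collectors.toList());
    }
}
```

A broker-side Metadata filter would presumably match the same way, just before serializing the response rather than after fetching all topics.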



On Fri, Jul 13, 2018 at 12:38 AM Jason Gustafson  wrote:

> Hey Manikumar,
>
> As Kafka begins to scale to larger and larger numbers of topics/partitions,
> I'm a little concerned about the scalability of APIs such as this. The API
> looks benign, but imagine you have a few million partitions. We
> already expose similar APIs in the producer and consumer, so probably not
> much additional harm to expose it in the AdminClient, but it would be nice
> to put a little thought into some longer term options. We should be giving
> users an efficient way to select a smaller set of the topics they are
> interested in. We have always discussed adding some filtering support to
> the Metadata API. Perhaps now is a good time to reconsider this? We now
> have a convention for wildcard ACLs, so perhaps we can do something
> similar. Full regex support might be ideal given the consumer's
> subscription API, but that is more challenging. What do you think?
>
> Thanks,
> Jason
>
> On Thu, Jul 12, 2018 at 2:35 PM, Harsha  wrote:
>
> > Very useful. LGTM.
> >
> > Thanks,
> > Harsha
> >
> > On Thu, Jul 12, 2018, at 9:56 AM, Manikumar wrote:
> > > Hi all,
> > >
> > > I have created a KIP to add describe all topics API to AdminClient .
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-327%3A+Add+describe+all+topics+API+to+AdminClient
> > >
> > > Please take a look.
> > >
> > > Thanks,
> >
>


Kafka Namespaces

2018-07-13 Thread Andras Beni
Hi All,

At Kafka Summit London 2018, Neha presented a thought experiment
about namespaces in Apache Kafka. I'd like to know if work on this vision
has started and if so, where I can find more information on it.
KIP-37 seems to be related, but was abandoned way before the talk.

Thanks,
Andras


Re: [VOTE] KIP-280: Enhanced log compaction

2018-07-09 Thread Andras Beni
Hi Luís,

Can you please clarify how the header value has to be encoded in case the log
compaction strategy is 'header'. As I see it, the current PR reads a varLong in
CleanerCache.extractVersion, but reads a String and uses toLong in
Cleaner.extractVersion, while the KIP says no more than 'the header value
(which must be of type "long")'.

Otherwise +1 for the KIP

As for the current implementation: it seems that in the Cleaner class the
header key "version" is hardwired.

Andras
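To illustrate the ambiguity raised above: the two readings of 'the header value (which must be of type "long")' correspond to two different byte encodings of the same number, and producer and cleaner must agree on one. The sketch below uses invented names and is not Kafka code; it only shows the two candidate encodings.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Two plausible wire encodings of a "long" header value.
public class HeaderEncodingSketch {
    // Reading 1: a fixed 8-byte big-endian binary long.
    static byte[] asBinary(long v) {
        return ByteBuffer.allocate(Long.BYTES).putLong(v).array();
    }
    static long decodeBinary(byte[] b) {
        return ByteBuffer.wrap(b).getLong();
    }

    // Reading 2: the decimal string form, parsed back with Long.parseLong
    // (matching the String + toLong path seen in Cleaner.extractVersion).
    static byte[] asString(long v) {
        return Long.toString(v).getBytes(StandardCharsets.UTF_8);
    }
    static long decodeString(byte[] b) {
        return Long.parseLong(new String(b, StandardCharsets.UTF_8));
    }
}
```

A record written with one encoding and read with the other would either throw or silently decode to the wrong version, which is why the KIP should pin this down.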



On Fri, Jul 6, 2018 at 10:36 PM Jun Rao  wrote:

> Hi, Guozhang,
>
> For #4, what you suggested could make sense for timestamp based de-dup, but
> not sure how general it is since the KIP also supports de-dup based on
> header.
>
> Thanks,
>
> Jun
>
> On Fri, Jul 6, 2018 at 1:12 PM, Guozhang Wang  wrote:
>
> > Hello Jun,
> > Thanks for your feedback. I'd agree on #3 that it's worth adding a
> > special check to not delete the last message, since although unlikely, it
> > is still possible that a new active segment gets rolled out but contains
> > no data yet, and hence the actual last message in this case would be in a
> > "compact-able" segment.
> >
> > For the second part of #4 you raised, maybe we could educate users to set
> > "message.timestamp.difference.max.ms" to be no larger than
> > "log.cleaner.delete.retention.ms" (its default value is Long.MAX_VALUE)? A
> > more aggressive approach would be changing the default value of the former
> > to be the value of the latter if:
> >
> > 1. cleanup.policy = compact OR compact,delete
> > 2. log.cleaner.compaction.strategy != offset
> >
> > Because in this case I think it makes sense to really allow users to send
> > any data longer than "log.cleaner.delete.retention.ms", WDYT?
> >
> >
> > Guozhang
> >
> >
> > On Fri, Jul 6, 2018 at 11:51 AM, Jun Rao  wrote:
> >
> > > Hi, Luis,
> > >
> > > 1. The cleaning policy is configurable at both global and topic level.
> > > The global one has the name log.cleanup.policy and the topic level has
> > > the name cleanup.policy by just stripping the log prefix. We can
> > > probably do the same for the new configs.
> > >
> > > 2. Since this KIP may require an admin to configure a larger dedup
> > > buffer size, it would be useful to document this impact in the wiki and
> > > the release notes.
> > >
> > > 3. Yes, it's unlikely for the last message to be removed in the current
> > > implementation since we never clean the active segment. However, in
> > > theory, this can happen. So it would be useful to guard this explicitly.
> > >
> > > 4. Just thought about another issue. We probably want to be a bit
> > > careful with key deletion. Currently, one can delete a key by sending a
> > > message with a delete tombstone (a null payload). To prevent a reader
> > > from missing a deletion if it's removed too quickly, we depend on a
> > > configuration log.cleaner.delete.retention.ms (defaults to 1 day). The
> > > delete tombstone will only be physically removed from the log after
> > > that amount of time. The expectation is that a reader should finish
> > > reading to the end of the log after consuming a message within that
> > > configured time. With the new strategy, we have similar, but slightly
> > > different problems. The first problem is that the delete tombstone may
> > > be delivered earlier than an outdated record in offset order to a
> > > consumer. In order for the consumer not to take the outdated record,
> > > the consumer should cache the deletion tombstone for some configured
> > > amount of time. We can probably piggyback this on
> > > log.cleaner.delete.retention.ms, but we need to document this. The
> > > second problem is that once the delete tombstone is physically removed
> > > from the log, how can we prevent outdated records from being added
> > > (otherwise, they will never be garbage collected)? Not sure what's the
> > > best way to do this. One possible way is to push this back to the
> > > application and require the user not to publish outdated records after
> > > log.cleaner.delete.retention.ms.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Wed, Jul 4, 2018 at 11:11 AM, Luís Cabral wrote:
> > >
> > > > Hi Jun,
> > > >
> > > > > 1. I guess both new configurations will be at the topic level?
> > > >
> > > > They will exist in the global configuration, at the very least.
> > > > I would like to have them on the topic level as well, but there is an
> > > > inconsistency between the cleanup/compaction properties that exist
> > > > “only globally” vs “globally + per topic”.
> > > > I haven’t gotten around to investigating why, and if that reason would
> > > > then also impact the properties I’m suggesting. At first glance they
> > > > seem to belong with the properties that are “only globally” configured,
> > > > but Guozhang has written earlier with a suggestion of a compaction
> > > > property that works for both (though I haven’t had the time to look
> > > > into it yet).

Re: [VOTE] 1.1.1 RC0

2018-06-20 Thread Andras Beni
+1 (non-binding)

Built .tar.gz, created a cluster from it and ran a basic end-to-end test:
performed a rolling restart while console-producer and console-consumer ran
at around 20K messages/sec. No errors or data loss.

Ran unit and integration tests successfully 3 out of 5 times. Encountered
some flakies:
 - DescribeConsumerGroupTest.testDescribeGroupWithShortInitializationTimeout
 - LogDirFailureTest.testIOExceptionDuringCheckpoint
 - SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls


Andras


On Wed, Jun 20, 2018 at 4:59 AM Ted Yu  wrote:

> +1
>
> Ran unit test suite which passed.
>
> Checked signatures.
>
> On Tue, Jun 19, 2018 at 4:47 PM, Dong Lin  wrote:
>
> > Re-send to kafka-clie...@googlegroups.com
> >
> > On Tue, Jun 19, 2018 at 4:29 PM, Dong Lin  wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the first candidate for release of Apache Kafka 1.1.1.
> > >
> > > Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was
> > > first released with 1.1.0 about 3 months ago. We have fixed about 25
> > > issues since that release. A few of the more significant fixes include:
> > >
> > > KAFKA-6925 - Fix memory leak in StreamsMetricsThreadImpl
> > > KAFKA-6937 - In-sync replica delayed during fetch if replica throttle
> > > is exceeded
> > > KAFKA-6917 - Process txn completion asynchronously to avoid deadlock
> > > KAFKA-6893 - Create processors before starting acceptor to avoid
> > > ArithmeticException
> > > KAFKA-6870 - Fix ConcurrentModificationException in SampledStat
> > > KAFKA-6878 - Fix NullPointerException when querying global state store
> > > KAFKA-6879 - Invoke session init callbacks outside lock to avoid
> > > Controller deadlock
> > > KAFKA-6857 - Prevent follower from truncating to the wrong offset if
> > > undefined leader epoch is requested
> > > KAFKA-6854 - Log cleaner fails with transaction markers that are
> > > deleted during clean
> > > KAFKA-6747 - Check whether there is in-flight transaction before
> > > aborting transaction
> > > KAFKA-6748 - Double check before scheduling a new task after the
> > > punctuate call
> > > KAFKA-6739 - Fix IllegalArgumentException when down-converting from V2
> > > to V0/V1
> > > KAFKA-6728 - Fix NullPointerException when instantiating the
> > > HeaderConverter
> > >
> > > Kafka 1.1.1 release plan:
> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
> > >
> > > Release notes for the 1.1.1 release:
> > > http://home.apache.org/~lindong/kafka-1.1.1-rc0/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Thursday, Jun 22, 12pm PT ***
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~lindong/kafka-1.1.1-rc0/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc0 tag:
> > > https://github.com/apache/kafka/tree/1.1.1-rc0
> > >
> > > * Documentation:
> > > http://kafka.apache.org/11/documentation.html
> > >
> > > * Protocol:
> > > http://kafka.apache.org/11/protocol.html
> > >
> > > * Successful Jenkins builds for the 1.1 branch:
> > > Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/150/
> > >
> > > Please test and verify the release artifacts and submit a vote for this
> > RC,
> > > or report any issues so we can fix them and get a new RC out ASAP.
> > Although
> > > this release vote requires PMC votes to pass, testing, votes, and bug
> > > reports are valuable and appreciated from everyone.
> > >
> > > Cheers,
> > > Dong
> > >
> > >
> > >
> >
>


[jira] [Created] (KAFKA-6987) Reimplement KafkaFuture with CompletableFuture

2018-06-04 Thread Andras Beni (JIRA)
Andras Beni created KAFKA-6987:
--

 Summary: Reimplement KafkaFuture with CompletableFuture
 Key: KAFKA-6987
 URL: https://issues.apache.org/jira/browse/KAFKA-6987
 Project: Kafka
  Issue Type: Improvement
  Components: clients
Affects Versions: 2.0.0
Reporter: Andras Beni
Assignee: Andras Beni


KafkaFuture documentation states:
{{This will eventually become a thin shim on top of Java 8's 
CompletableFuture.}}
With Java 7 support dropped in 2.0, it is time to get rid of custom code.
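A minimal sketch of the "thin shim" idea, with an invented class name and only a couple of methods just to show the delegation pattern; the real KafkaFuture API surface is larger and its exception-wrapping semantics would need care.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.function.Function;

// Hypothetical shim: every operation delegates to an internal
// CompletableFuture instead of reimplementing future machinery.
public class KafkaFutureShim<T> {
    private final CompletableFuture<T> inner = new CompletableFuture<>();

    public boolean complete(T value) {
        return inner.complete(value);
    }

    public T get() throws InterruptedException, ExecutionException {
        return inner.get();
    }

    public <R> KafkaFutureShim<R> thenApply(Function<T, R> fn) {
        KafkaFutureShim<R> result = new KafkaFutureShim<>();
        // Propagate both the value and any failure to the derived future.
        inner.thenApply(fn).whenComplete((v, e) -> {
            if (e != null) result.inner.completeExceptionally(e);
            else result.inner.complete(v);
        });
        return result;
    }
}
```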





[jira] [Created] (KAFKA-6968) Call RebalanceListener in MockConsumer

2018-05-30 Thread Andras Beni (JIRA)
Andras Beni created KAFKA-6968:
--

 Summary: Call RebalanceListener in MockConsumer
 Key: KAFKA-6968
 URL: https://issues.apache.org/jira/browse/KAFKA-6968
 Project: Kafka
  Issue Type: Improvement
  Components: consumer
Affects Versions: 1.1.0
Reporter: Andras Beni


{{org.apache.kafka.clients.consumer.MockConsumer}} simulates rebalance with the 
method {{public synchronized void rebalance(Collection<TopicPartition> 
newAssignment)}}. This method does not call {{ConsumerRebalanceListener}} 
methods. Calls to {{onPartitionsRevoked(...)}} and 
{{onPartitionsAssigned(...)}} should be added in the appropriate order.
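A simplified sketch of the requested behavior, using stand-in types rather than the real Kafka classes: on rebalance, revoke the old assignment first, then assign the new one.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Not MockConsumer itself: a minimal stand-in showing the desired
// listener call order during rebalance().
public class RebalanceSketch {
    interface RebalanceListener {  // stand-in for ConsumerRebalanceListener
        void onPartitionsRevoked(Collection<String> partitions);
        void onPartitionsAssigned(Collection<String> partitions);
    }

    private final List<String> assignment = new ArrayList<>();
    private RebalanceListener listener;

    void subscribe(RebalanceListener l) {
        this.listener = l;
    }

    void rebalance(Collection<String> newAssignment) {
        // 1. Revoke what was previously owned...
        if (listener != null) listener.onPartitionsRevoked(new ArrayList<>(assignment));
        assignment.clear();
        assignment.addAll(newAssignment);
        // 2. ...then announce the new assignment.
        if (listener != null) listener.onPartitionsAssigned(new ArrayList<>(assignment));
    }
}
```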





[jira] [Created] (KAFKA-6812) Async ConsoleProducer exits with 0 status even after data loss

2018-04-21 Thread Andras Beni (JIRA)
Andras Beni created KAFKA-6812:
--

 Summary: Async ConsoleProducer exits with 0 status even after 
data loss
 Key: KAFKA-6812
 URL: https://issues.apache.org/jira/browse/KAFKA-6812
 Project: Kafka
  Issue Type: Bug
  Components: tools
Affects Versions: 1.1.0
Reporter: Andras Beni


When {{ConsoleProducer}} is run without {{--sync}} flag and one of the batches 
times out, {{ErrorLoggingCallback}} logs the error:
{code:java}
 18/04/21 04:23:01 WARN clients.NetworkClient: [Producer 
clientId=console-producer] Connection to node 10 could not be established. 
Broker may not be available.
 18/04/21 04:23:02 ERROR internals.ErrorLoggingCallback: Error when sending 
message to topic my-topic with key: null, value: 8 bytes with error:
 org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for 
my-topic-0: 1530 ms has passed since batch creation plus linger time{code}
 However, the tool exits with status code 0. 
 In my opinion the tool should indicate in the exit status that data was 
lost. Maybe it's reasonable to exit after the first error.
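One possible shape for such a fix, sketched here with invented names (the real change would presumably touch {{ErrorLoggingCallback}} and {{ConsoleProducer}}): remember whether any send failed, and surface that in the process exit status.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a send callback that records failures so the
// tool can exit non-zero. Class and method names are illustrative only.
public class ExitStatusSketch {
    static final AtomicBoolean sendFailed = new AtomicBoolean(false);

    // Simplified shape of a producer completion callback.
    static void onCompletion(Exception exception) {
        if (exception != null) {
            System.err.println("Error when sending message: " + exception);
            sendFailed.set(true);  // remember that data may have been lost
        }
    }

    // Consulted at shutdown, e.g. System.exit(exitStatus()).
    static int exitStatus() {
        return sendFailed.get() ? 1 : 0;
    }
}
```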
  





Contributor

2017-06-21 Thread Andras Beni
Hi All,

I'd like to contribute to Apache Kafka.
Can you please add me (username: andrasbeni) to the contributors list for
this project at issues.apache.org?

Thank you,
Andras