Re: [ANNOUNCE] New committer: Boyang Chen

2020-06-22 Thread Harsha Chintalapani
Congratulations, Boyang
-Harsha

On Mon, Jun 22, 2020 at 4:32 PM AJ Chen  wrote:

> Congrats, Boyang!
>
> -aj
>
>
>
>
> On Mon, Jun 22, 2020 at 4:26 PM Guozhang Wang  wrote:
>
> > The PMC for Apache Kafka has invited Boyang Chen as a committer and we
> > are pleased to announce that he has accepted!
> >
> > Boyang has been active in the Kafka community for more than two years.
> > In that time he has presented his experience operating Kafka Streams at
> > Pinterest, as well as several feature developments including rebalance
> > improvements (KIP-345) and exactly-once scalability improvements
> > (KIP-447), at various Kafka Summits and Kafka meetups. More recently he
> > has also been participating in Kafka broker development, including the
> > post-ZooKeeper controller design (KIP-500). Besides all his code
> > contributions, Boyang has also helped review even more PRs and KIPs
> > than his own.
> >
> > Thanks for all the contributions, Boyang! We look forward to more
> > collaboration with you on Apache Kafka.
> >
> >
> > -- Guozhang, on behalf of the Apache Kafka PMC
> >
>


Re: TLS Communication with Zookeeper Cluster

2019-07-29 Thread Harsha
Here is the guide:
https://cwiki.apache.org/confluence/display/ZOOKEEPER/ZooKeeper+SSL+User+Guide
You need ZooKeeper 3.5 or higher for TLS.
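
Roughly, per that guide, the server side on a 3.5 ensemble looks like the
sketch below (the port, paths, and passwords are placeholders):

    # zoo.cfg
    secureClientPort=2281

    # JVM flags for the ZooKeeper server
    -Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
    -Dzookeeper.ssl.keyStore.location=/path/to/keystore.jks
    -Dzookeeper.ssl.keyStore.password=changeit
    -Dzookeeper.ssl.trustStore.location=/path/to/truststore.jks
    -Dzookeeper.ssl.trustStore.password=changeit

Clients likewise need -Dzookeeper.client.secure=true and
-Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty plus
their own key/trust store properties.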

On Mon, Jul 29, 2019, at 1:21 AM, Nayak, Soumya R. wrote:
> Hi Team,
> 
> Is there any way mutual TLS communication can be set up with 
> ZooKeeper? If you have any references, please let me know.
> 
> I am trying to set up a Zookeeper cluster (3 Zookeepers) and Kafka 
> cluster (4 Kafka Brokers) using docker images in Azure Ubuntu VM 
> servers.
> 
> 
> Also, there is the Raft protocol, as used by etcd. How does it compare 
> to the Kafka/ZooKeeper setup?
> 
> Regards,
> Soumya
> 


Re: [ANNOUNCE] Apache Kafka 2.2.1

2019-06-03 Thread Harsha
Thanks Vahid.
-Harsha

On Mon, Jun 3, 2019, at 9:21 AM, Jonathan Santilli wrote:
> That's fantastic! Thanks a lot, Vahid, for managing the release.
> 
> --
> Jonathan
> 
> 
> 
> 
> On Mon, Jun 3, 2019 at 5:18 PM Mickael Maison 
> wrote:
> 
> > Thank you Vahid
> >
> > On Mon, Jun 3, 2019 at 5:12 PM Wladimir Schmidt 
> > wrote:
> > >
> > > Thanks Vahid!
> > >
> > > On Mon, Jun 3, 2019, 16:23 Vahid Hashemian  wrote:
> > >
> > > > The Apache Kafka community is pleased to announce the release for
> > Apache
> > > > Kafka 2.2.1
> > > >
> > > > This is a bugfix release for Kafka 2.2.0. All of the changes in this
> > > > release can be found in the release notes:
> > > > https://www.apache.org/dist/kafka/2.2.1/RELEASE_NOTES.html
> > > >
> > > > You can download the source and binary release from:
> > > > https://kafka.apache.org/downloads#2.2.1
> > > >
> > > >
> > > >
> > ---
> > > >
> > > > Apache Kafka is a distributed streaming platform with four core APIs:
> > > >
> > > > ** The Producer API allows an application to publish a stream of
> > > > records to one or more Kafka topics.
> > > >
> > > > ** The Consumer API allows an application to subscribe to one or more
> > > > topics and process the stream of records produced to them.
> > > >
> > > > ** The Streams API allows an application to act as a stream processor,
> > > > consuming an input stream from one or more topics and producing an
> > output
> > > > stream to one or more output topics, effectively transforming the input
> > > > streams to output streams.
> > > >
> > > > ** The Connector API allows building and running reusable producers or
> > > > consumers that connect Kafka topics to existing applications or data
> > > > systems. For example, a connector to a relational database might
> > capture
> > > > every change to a table.
> > > >
> > > > With these APIs, Kafka can be used for two broad classes of
> > > > applications:
> > > >
> > > > ** Building real-time streaming data pipelines that reliably get data
> > > > between systems or applications.
> > > >
> > > > ** Building real-time streaming applications that transform or react
> > to the
> > > > streams of data.
> > > >
> > > > Apache Kafka is in use at large and small companies worldwide,
> > including
> > > > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest,
> > Rabobank,
> > > > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> > > >
> > > > A big thank you to the following 30 contributors to this release!
> > > >
> > > > Anna Povzner, Arabelle Hou, A. Sophie Blee-Goldman, Bill Bejeck, Bob
> > > > Barrett, Chris Egerton, Colin Patrick McCabe, Cyrus Vafadari, Dhruvil
> > Shah,
> > > > Doroszlai, Attila, Guozhang Wang, huxi, Jason Gustafson, John Roesler,
> > > > Konstantine Karantasis, Kristian Aurlien, Lifei Chen, Magesh
> > Nandakumar,
> > > > Manikumar Reddy, Massimo Siani, Matthias J. Sax, Nicholas Parker,
> > pkleindl,
> > > > Rajini Sivaram, Randall Hauch, Sebastián Ortega, Vahid Hashemian,
> > Victoria
> > > > Bialas, Yaroslav Klymko, Zhanxiang (Patrick) Huang
> > > >
> > > > We welcome your help and feedback. For more information on how to
> > report
> > > > problems, and to get involved, visit the project website at
> > > > https://kafka.apache.org/
> > > >
> > > > Thank you!
> > > >
> > > > Regards,
> > > > --Vahid Hashemian
> > > >
> >
> 
> 
> -- 
> Santilli Jonathan
>


Re: [VOTE] 2.2.1 RC1

2019-05-23 Thread Harsha
+1 (binding)

1. Ran unit tests
2. Ran system tests
3. Ran a 3-node cluster with a few manual tests

Thanks,
Harsha

On Wed, May 22, 2019, at 8:09 PM, Vahid Hashemian wrote:
> Bumping this thread to get some more votes, especially from committers, so
> we can hopefully make a decision on this RC by the end of the week.
> 
> Thanks,
> --Vahid
> 
> On Mon, May 13, 2019 at 8:15 PM Vahid Hashemian 
> wrote:
> 
> > Hello Kafka users, developers and client-developers,
> >
> > This is the second candidate for release of Apache Kafka 2.2.1.
> >
> > Compared to RC0, this release candidate also fixes the following issues:
> >
> >- [KAFKA-6789] - Add retry logic in AdminClient requests
> >- [KAFKA-8348] - Document of kafkaStreams improvement
> >- [KAFKA-7633] - Kafka Connect requires permission to create internal
> >topics even if they exist
> >- [KAFKA-8240] - Source.equals() can fail with NPE
> >- [KAFKA-8335] - Log cleaner skips Transactional mark and batch
> >record, causing unlimited growth of __consumer_offsets
> >- [KAFKA-8352] - Connect System Tests are failing with 404
> >
> > Release notes for the 2.2.1 release:
> > https://home.apache.org/~vahid/kafka-2.2.1-rc1/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Thursday, May 16, 9:00 pm PT.
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~vahid/kafka-2.2.1-rc1/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~vahid/kafka-2.2.1-rc1/javadoc/
> >
> > * Tag to be voted upon (off 2.2 branch) is the 2.2.1 tag:
> > https://github.com/apache/kafka/releases/tag/2.2.1-rc1
> >
> > * Documentation:
> > https://kafka.apache.org/22/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/22/protocol.html
> >
> > * Successful Jenkins builds for the 2.2 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-2.2-jdk8/115/
> >
> > Thanks!
> > --Vahid
> >
> 
> 
> -- 
> 
> Thanks!
> --Vahid
>


Re: Question about Kafka TLS

2019-05-16 Thread Harsha
Hi Thomas,
  We recently fixed a bug, 
https://issues.apache.org/jira/browse/KAFKA-8191, which allows users to 
configure their own KeyManager and TrustManager. One can implement these and 
pass them in as configs, and such a KeyManager can make a call to a service to 
fetch a certificate to enable TLS. JKS stores are for doing it manually. You 
can check out https://github.com/spiffe/java-spiffe, which talks to the SPIFFE 
agent to get a certificate and pass it to Kafka's SSL context.
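
As a rough sketch of the KeyManager side (the CertService type here is
hypothetical and stands in for whatever client talks to your certificate
service, e.g. a SPIFFE agent; the javax.net.ssl types are standard JSSE):

    import java.net.Socket;
    import java.security.Principal;
    import java.security.PrivateKey;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.X509ExtendedKeyManager;

    // Sketch only: serves a key/cert fetched from a service, not from a JKS file.
    public class ServiceBackedKeyManager extends X509ExtendedKeyManager {

        // Hypothetical client for the certificate service.
        public interface CertService {
            X509Certificate[] currentChain();
            PrivateKey currentKey();
        }

        private static final String ALIAS = "service-cert";
        private final CertService certService;

        public ServiceBackedKeyManager(CertService certService) {
            this.certService = certService;
        }

        @Override public String[] getClientAliases(String keyType, Principal[] issuers) { return new String[] { ALIAS }; }
        @Override public String chooseClientAlias(String[] keyType, Principal[] issuers, Socket socket) { return ALIAS; }
        @Override public String chooseEngineClientAlias(String[] keyType, Principal[] issuers, SSLEngine engine) { return ALIAS; }
        @Override public String[] getServerAliases(String keyType, Principal[] issuers) { return new String[] { ALIAS }; }
        @Override public String chooseServerAlias(String keyType, Principal[] issuers, Socket socket) { return ALIAS; }
        @Override public String chooseEngineServerAlias(String keyType, Principal[] issuers, SSLEngine engine) { return ALIAS; }

        // The chain and key come from the service at handshake time; nothing on disk.
        @Override public X509Certificate[] getCertificateChain(String alias) { return certService.currentChain(); }
        @Override public PrivateKey getPrivateKey(String alias) { return certService.currentKey(); }
    }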

Thanks,
Harsha

On Thu, May 16, 2019, at 3:57 PM, Zhou, Thomas wrote:
> Hi,
> 
> I have a question about the TLS config on the Kafka client side. Based on 
> the official document, if clients want to enable TLS, they must put 
> ssl.truststore.location in the client config, pointing to a JKS 
> file that holds the trust store. My question is: is this config 
> mandatory? Is there a possibility that we get truststore.jks from a 
> service and store it in memory, so we don’t have to maintain a file on 
> the client side?
> 
> Thanks,
> Thomas
>


Question related to Kafka Connect

2019-05-06 Thread Mandadi, Harsha
Hi There,

We currently have a Kafka Connect cluster set up with 4 nodes and have 
registered connectors to read data from Kafka topics and write it to S3. When 
trying to collect JMX metrics via MBeans using the Java API, we are able to 
get the metrics on the node on which the connector is registered, but not from 
the other nodes in the cluster. We had assumed we would be able to get the 
metrics from any of the nodes in the Kafka Connect cluster.

Can you please let us know if there is a solution for this?

Thanks for your help in advance.

Thanks,
Harsha
Mobile: +919440849355


Re: [ANNOUNCE] New Kafka PMC member: Sriharsh Chintalapan

2019-04-19 Thread Harsha
Thanks, Everyone.
-Harsha

On Fri, Apr 19, 2019, at 2:39 AM, Satish Duggana wrote:
> Congrats Harsha!
> 
> On Fri, Apr 19, 2019 at 2:58 PM Mickael Maison 
> wrote:
> 
> > Congratulations Harsha!
> >
> >
> > On Fri, Apr 19, 2019 at 5:49 AM Manikumar 
> > wrote:
> > >
> > > Congrats Harsha!.
> > >
> > > On Fri, Apr 19, 2019 at 7:43 AM Dong Lin  wrote:
> > >
> > > > Congratulations Sriharsh!
> > > >
> > > > On Thu, Apr 18, 2019 at 11:46 AM Jun Rao  wrote:
> > > >
> > > > > Hi, Everyone,
> > > > >
> > > > > Sriharsh Chintalapan has been active in the Kafka community since he
> > > > became
> > > > > a Kafka committer in 2015. I am glad to announce that Harsh is now a
> > > > member
> > > > > of Kafka PMC.
> > > > >
> > > > > Congratulations, Harsh!
> > > > >
> > > > > Jun
> > > > >
> > > >
> >
>


Re: plaintext connection attempts to SSL secured broker

2019-04-04 Thread Harsha
Hi,
  Yes, this needs to be handled more elegantly. Can you please file a JIRA 
here 
https://issues.apache.org/jira/projects/KAFKA/issues

Thanks,
Harsha

On Mon, Apr 1, 2019, at 1:52 AM, jorg.heym...@gmail.com wrote:
> Hi,
> 
> We have our brokers secured with these standard properties
> 
> listeners=SSL://a.b.c:9030
> ssl.truststore.location=...
> ssl.truststore.password=...
> ssl.keystore.location=...
> ssl.keystore.password=...
> ssl.key.password=...
> ssl.client.auth=required
> ssl.enabled.protocols=TLSv1.2
> 
> It's a bit surprising to see that when a (Java) client attempts to 
> connect without having SSL configured, making a PLAINTEXT connection 
> instead, it does not get a TLS exception indicating that SSL is 
> required. Somehow I would have expected a hard transport-level 
> exception making it clear that non-SSL connections are not allowed; 
> instead the client sees this (when debug logging is enabled):
> 
> [main] INFO org.apache.kafka.common.utils.AppInfoParser - Kafka 
> commitId : 21234bee31165527
> [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - 
> [Consumer clientId=consumer-1, groupId=my-test-group] Kafka consumer 
> initialized
> [main] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - 
> [Consumer clientId=consumer-1, groupId=my-test-group] Subscribed to 
> topic(s): events
> [main] DEBUG 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator - 
> [Consumer clientId=consumer-1, groupId=my-test-group] Sending 
> FindCoordinator request to broker a.b.c:9030 (id: -1 rack: null)
> [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
> clientId=consumer-1, groupId=my-test-group] Initiating connection to 
> node a.b.c:9030 (id: -1 rack: null) using address /a.b.c
> [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor 
> with name node--1.bytes-sent
> [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor 
> with name node--1.bytes-received
> [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor 
> with name node--1.latency
> [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer 
> clientId=consumer-1, groupId=my-test-group] Created socket with 
> SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
> [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
> clientId=consumer-1, groupId=my-test-group] Completed connection to 
> node -1. Fetching API versions.
> [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
> clientId=consumer-1, groupId=my-test-group] Initiating API versions 
> fetch from node -1.
> [main] DEBUG org.apache.kafka.common.network.Selector - [Consumer 
> clientId=consumer-1, groupId=my-test-group] Connection with /a.b.c 
> disconnected
> java.io.EOFException
>   at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:119)
>   at 
> org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:381)
>   at 
> org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:342)
>   at 
> org.apache.kafka.common.network.Selector.attemptRead(Selector.java:609)
>   at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:541)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
>   at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
>   at eu.europa.ec.han.TestConsumer.main(TestConsumer.java:22)
> [main] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer 
> clientId=consumer-1, groupId=my-test-group] Node -1 disconnected.
> [main] DEBUG 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient - 
> [Consumer clientId=consumer-1, groupId=my-test-group] Cancelled request 
> with header RequestHeader(apiKey=FIND_COORDINATOR, apiVersion=2, 
> clien

Re: Broker continuously expand and shrinks to itself

2019-01-28 Thread Harsha Chintalapani
We’ve seen something similar in our setup, and as you noticed it does happen 
infrequently. Based on my debugging there are a few things that might be 
causing this issue; two of them are:
1. replica.lag.time.max.ms, set to 10 secs by default
2. replica.socket.timeout.ms, set to 30 secs by default

In situations where the broker is busy with lots of clients, a follower makes 
a replica request, and this request takes longer or times out, i.e. it waits 
for 30 secs and doesn’t get any response. The ReplicaManager thread calls 
maybeShrinkIsr and shrinks the ISR if there has been no call from a follower 
within replica.lag.time.max.ms, which is possible in cases of heavy load; and 
given that the socket timeout itself takes 30 secs, the follower can be marked 
as not in the ISR.

What we’ve seen is shrink ISR and expand ISR happening back to back, i.e. one 
call getting timed out and a subsequent call making the follower part of the 
ISR again. One option to try is to lower the socket timeout and increase 
replica.lag.time.max.ms.
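
To make that concrete, here is a sketch of the relevant broker settings (the 
values are illustrative starting points to test under your own load, not 
recommendations):

    # server.properties
    replica.socket.timeout.ms=10000   # fail replica fetches faster than the lag timeout
    replica.lag.time.max.ms=30000     # give followers more slack before the ISR is shrunk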

Thanks,
Harsha
On Jan 27, 2019, 8:48 AM -0800, Ashish Karalkar 
, wrote:
> Hi Harsha,
> Thanks for the reply.
> The issue is resolved as of now; the root cause was a runaway application 
> spawning many instances of kafkacat and hammering the Kafka brokers. I am 
> still wondering what could cause the shrink and expand when a client 
> hammers a broker.
> --Ashish
> On Thursday, January 24, 2019, 8:53:10 AM PST, Harsha Chintalapani 
>  wrote:
>
> Hi Ashish,
>            What’s your replica.lag.time.max.ms set to, and do you see any 
> network issues between brokers?
> -Harsha
>
>
>
> On Jan 22, 2019, 10:09 PM -0800, Ashish Karalkar 
> , wrote:
> > Hi All,
> > We just upgraded from 0.10.x to 1.1 and enabled rack awareness on an 
> > existing cluster which has about 20 nodes in 4 racks. After this we see 
> > that a few brokers go through a continuous cycle of expanding and 
> > shrinking the ISR to themselves; it is also causing high latency for 
> > serving metadata requests.
> > What is the impact of enabling rack awareness on an existing cluster, 
> > assuming the replication factor is 3 and the existing replicas may or 
> > may not be in different racks when rack awareness was enabled, after 
> > which a rolling bounce was done?
> > The symptoms we are having are replica lag and slow metadata requests. 
> > Also in the broker logs we continuously see disconnections from the 
> > broker it is trying to expand to.
> > Thanks for helping
> > --A


Re: Kafka consumer configuration to minimize rebalance time

2019-01-24 Thread Harsha Chintalapani
Hi Marcos,
           I think what you need is static membership, which reduces the 
number of rebalances required. There is active discussion and work going on 
for this KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-345%3A+Introduce+static+membership+protocol+to+reduce+consumer+rebalances
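
For reference, a sketch of what the consumer configuration looks like once
that KIP is available (KIP-345 eventually shipped in Kafka 2.3 as
group.instance.id; the values below are illustrative):

    group.id=my-service
    group.instance.id=pod-17      # unique and stable per pod, e.g. a StatefulSet ordinal
    session.timeout.ms=120000     # a member can restart within this window without a rebalance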

-Harsha

On Jan 24, 2019, 9:51 AM -0800, Marcos Juarez , wrote:
> One of our internal customers is working on a service that spans around 120
> kubernetes pods. Due to design constraints, every one of these pods has a
> single kafka consumer, and they're all using the same consumer group id.
> Since it's kubernetes, and the service is sized according to volume
> throughout the day, pods are added/removed constantly, at least a few times
> per hour.
>
> What we are seeing with initial testing is that, whenever a single pod
> joins or leaves the consumer group, it triggers a rebalance that sometimes
> takes up to 60+ seconds to resolve. Consumption resumes after the
> rebalance event, but of course now there's 60+ second lag in consumption
> for that topic. Whenever there's a code deploy to these pods, and we need
> to re-create all 120 pods, the problem seems to be exacerbated, and we run
> into rebalances taking 200+ seconds. This particular service is somewhat
> sensitive to lag, so we'd like to keep the rebalance time to a minimum.
>
> With that context, what kafka configs should we focus on on the consumer
> side (and maybe the broker side?) that would enable us to minimize the time
> spent on the rebalance?
>
> Thanks,
>
> Marcos Juarez


Re: Deploying Kafka topics in a kerberized Zookeeper without superuser (in a CI flow)

2019-01-24 Thread Harsha Chintalapani
Hi,
       When you kerberize Kafka and set zookeeper.set.acl to true, all the 
ZooKeeper nodes created under the ZooKeeper root will have ACLs that allow 
only the Kafka broker’s principal. Topic creation via the kafka-topics.sh 
script goes directly to ZooKeeper, i.e. it creates a ZooKeeper node under 
/brokers/topics, and that node needs the Kafka broker’s principal set as its 
ACL. If you use any other principal it will create issues like the ones you 
are seeing.
       One option is to disable zookeeper.set.acl. This means anyone who has 
access to ZooKeeper can create a topic. A better option is to use the 
KafkaAdminClient’s createTopics, which sends a CreateTopicsRequest through the 
brokers, where it can be authorized. Your CI can have its own principal, and 
you can create an authorization policy that allows this principal to create 
topics.
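
A minimal sketch of that second option with the Java AdminClient (the broker
address, security settings, and topic parameters are placeholders):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTopicFromCi {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("security.protocol", "SASL_PLAINTEXT");   // Kerberos via GSSAPI
            props.put("sasl.kerberos.service.name", "kafka");
            // The CI principal's keytab login is supplied via the usual JAAS
            // config (java.security.auth.login.config or sasl.jaas.config).

            try (AdminClient admin = AdminClient.create(props)) {
                // topic name, partitions, replication factor
                NewTopic topic = new NewTopic("my-topic", 12, (short) 3);
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }

The request is then authorized on the broker against the CI principal's ACLs 
instead of requiring direct ZooKeeper write access.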

-Harsha

On Jan 21, 2019, 3:00 AM -0800, Kristjan Peil , wrote:
> I'm running Kafka 1.1.1 and Zookeeper 3.4.6 in a cluster, both guarded by
> Kerberos. My app stack includes a module containing topic configurations,
> and my continuous integration build autodeploys changes to topics with
> kafka-topics.sh and kafka-configs.sh.
>
> When I try to use a non-superuser principal to authenticate in the scripts,
> the topic metadata is created by kafka-topics.sh in Zookeeper in such a way
> that Kafka cannot process it to create the actual topics in Kafka brokers -
> partitions are not created in the broker. Also, running kafka-configs.sh to
> alter configs of existing topics gets "NoAuth for /configs/".
>
> When I authenticate with the superuser principal "kafka" then everything
> works fine. But making the "kafka" superuser credentials available in CI
> context seems unsecure.
>
> Is it possible to use kafka-topics.sh and kafka-configs.sh in a kerberized
> environment with a non-superuser Kerberos principal and how can this be
> made to happen?
> Can you suggest an alternate solution to achieve CI for Kafka topics?
>
> Best regards,
> Kristjan Peil


Re: Broker continuously expand and shrinks to itself

2019-01-24 Thread Harsha Chintalapani
Hi Ashish,
           What’s your replica.lag.time.max.ms set to, and do you see any 
network issues between brokers?
-Harsha



On Jan 22, 2019, 10:09 PM -0800, Ashish Karalkar 
, wrote:
> Hi All,
> We just upgraded from 0.10.x to 1.1 and enabled rack awareness on an 
> existing cluster which has about 20 nodes in 4 racks. After this we see 
> that a few brokers go through a continuous cycle of expanding and 
> shrinking the ISR to themselves; it is also causing high latency for 
> serving metadata requests.
> What is the impact of enabling rack awareness on an existing cluster, 
> assuming the replication factor is 3 and the existing replicas may or may 
> not be in different racks when rack awareness was enabled, after which a 
> rolling bounce was done?
> The symptoms we are having are replica lag and slow metadata requests. 
> Also in the broker logs we continuously see disconnections from the broker 
> it is trying to expand to.
> Thanks for helping
> --A


Re: kafka.consumers Java Package

2018-11-09 Thread Harsha Chintalapani
Chris,
        You are upgrading from 0.10.2.2 to 2.0.0. There will be quite a few 
changes, and it looks like you might be using classes other than KafkaConsumer 
that are not part of the public API. Which classes specifically are not 
available?
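
For context, the 2.0.0 release removed the deprecated old Scala clients, which
is where the kafka.consumer package lived; consumption now goes through the
Java KafkaConsumer. A minimal sketch of the replacement (topic, group, and
broker names are placeholders):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SimpleConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("group.id", "my-group");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    // poll(Duration) replaced the deprecated poll(long) in 2.0 (KIP-266)
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records)
                        System.out.println(record.value());
                }
            }
        }
    }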

-Harsha
On Nov 9, 2018, 7:47 AM -0800, Chris Barlock , wrote:
> I was trying to move some of our Kafka client code from
> kafka_2.11_0.10.2.2 to kafka_2.11_2.0.0. However, we have some code that
> uses code in the kafka.consumer package. That appears to be either moved
> or removed from kafka_2.11_2.0.0. Is it gone or moved?
>
> Thanks!
>
> Chris
>


Re: [VOTE] 2.0.1 RC0

2018-11-01 Thread Harsha Chintalapani
+1.
Ran a 3-node cluster with a few simple tests.

Thanks,
Harsha
On Nov 1, 2018, 9:50 AM -0700, Eno Thereska , wrote:
> Anything else holding this up?
>
> Thanks
> Eno
>
> On Thu, Nov 1, 2018 at 10:27 AM Jakub Scholz  wrote:
>
> > +1 (non-binding) ... I used the staged binaries and run tests with
> > different clients.
> >
> > On Fri, Oct 26, 2018 at 4:29 AM Manikumar 
> > wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the first candidate for release of Apache Kafka 2.0.1.
> > >
> > > This is a bug fix release closing 49 tickets:
> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
> > >
> > > Release notes for the 2.0.1 release:
> > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Tuesday, October 30, end of day
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Javadoc:
> > > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/javadoc/
> > >
> > > * Tag to be voted upon (off 2.0 branch) is the 2.0.1 tag:
> > > https://github.com/apache/kafka/releases/tag/2.0.1-rc0
> > >
> > > * Documentation:
> > > http://kafka.apache.org/20/documentation.html
> > >
> > > * Protocol:
> > > http://kafka.apache.org/20/protocol.html
> > >
> > > * Successful Jenkins builds for the 2.0 branch:
> > > Unit/integration tests:
> > https://builds.apache.org/job/kafka-2.0-jdk8/177/
> > >
> > > /**
> > >
> > > Thanks,
> > > Manikumar
> > >
> >


Re: [DISCUSS] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-08-30 Thread Harsha
Hi,
  Thanks for the KIP. I am trying to understand its intent. Can't the use case 
you specified be achieved by implementing the Partitioner interface here?
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/Partitioner.java#L28
You can configure your custom partitioner in your producer clients.
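
For example, a minimal always-round-robin partitioner along those lines (a 
sketch; the class name and wrap-around handling are mine):

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.atomic.AtomicInteger;
    import org.apache.kafka.clients.producer.Partitioner;
    import org.apache.kafka.common.Cluster;
    import org.apache.kafka.common.PartitionInfo;

    public class AlwaysRoundRobinPartitioner implements Partitioner {
        private final AtomicInteger counter = new AtomicInteger(0);

        @Override
        public int partition(String topic, Object key, byte[] keyBytes,
                             Object value, byte[] valueBytes, Cluster cluster) {
            List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
            // Ignore the key entirely; rotate through the partitions,
            // masking the counter so it stays non-negative on overflow.
            int next = counter.getAndIncrement() & Integer.MAX_VALUE;
            return partitions.get(next % partitions.size()).partition();
        }

        @Override public void close() {}
        @Override public void configure(Map<String, ?> configs) {}
    }

It is enabled per client through the partitioner.class producer config, e.g. 
props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, 
AlwaysRoundRobinPartitioner.class.getName()).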

Thanks,
Harsha

On Thu, Aug 30, 2018, at 1:45 AM, M. Manna wrote:
> Hello,
> 
> I opened a very simple KIP and there exists a JIRA for it.
> 
> I would be grateful for any comments.
> 
> Regards,


Re: Performance Impact with Apache Kafka Security

2018-08-24 Thread Sri Harsha Chavali
Hi Eric,

I have SSL configured for inter-broker communication as well, but I left the 
acks at 1, so I will not wait for guaranteed replicated delivery. My process 
just goes on once I get an acknowledgement of delivery from the leader, not 
from all ISRs. So do you really think that will cause the impact in my case?

Thank you,
Harsha

Sent from Outlook<http://aka.ms/weboutlook>


From: Eric Azama 
Sent: Friday, August 24, 2018 2:04 PM
To: users@kafka.apache.org
Cc: ka...@harsha.io
Subject: Re: Performance Impact with Apache Kafka Security

I saw a similar 30-40% performance hit when testing a move from Plaintext
to SASL_SSL.

It seemed to be due to the additional traffic generated by replication
between brokers. Enabling SSL only between the client and brokers and
leaving inter-broker traffic on Plaintext introduced only ~10% performance
loss. Enabling SSL everywhere but reducing the number of replicas also saw
a much smaller performance reduction.

On Fri, Aug 24, 2018 at 9:05 AM Sri Harsha Chavali <
sriharsha.chav...@outlook.com> wrote:

> Hi Harsha,
>
> Given below are all the details. We are using Kafka On CDH.  Do you have
> any suggestion based on the below statistics.
>
> CDK - 2.2.0 - 0.10.2.0+kafka2.2.0+110
> Apache Kafka - 0.10.2.
> java version "1.8.0_151"
>
> Tried Using Java 9 with not much difference.  We need to make a small
> change to kafka-run-class.sh file in order for it to pickup Java 9.
> Statistics with Java 9, not very great.
> 18/08/24 12:00:06 INFO utils.AppInfoParser: Kafka commitId : unknown
> 28270 records sent, 5634.8 records/sec (6.45 MB/sec), 1892.8 ms avg
> latency, 2988.0 max latency.
> 60588 records sent, 12117.6 records/sec (13.87 MB/sec), 2483.3 ms avg
> latency, 3410.0 max latency.
> 59940 records sent, 11973.6 records/sec (13.70 MB/sec), 2226.4 ms avg
> latency, 2436.0 max latency.
> 68364 records sent, 13648.2 records/sec (15.62 MB/sec), 1911.4 ms avg
> latency, 2261.0 max latency.
> 78570 records sent, 15714.0 records/sec (17.98 MB/sec), 1875.9 ms avg
> latency, 2490.0 max latency.
> 91368 records sent, 18259.0 records/sec (20.90 MB/sec), 1525.3 ms avg
> latency, 1695.0 max latency.
> 80838 records sent, 16141.8 records/sec (18.47 MB/sec), 1498.3 ms avg
> latency, 2003.0 max latency.
> 60750 records sent, 12140.3 records/sec (13.89 MB/sec), 2288.1 ms avg
> latency, 2906.0 max latency.
> 70146 records sent, 14015.2 records/sec (16.04 MB/sec), 2012.6 ms avg
> latency, 2288.0 max latency.
> 62208 records sent, 12439.1 records/sec (14.24 MB/sec), 2170.9 ms avg
> latency, 2523.0 max latency.
> 92502 records sent, 18493.0 records/sec (21.16 MB/sec), 1502.5 ms avg
> latency, 2144.0 max latency.
> 82458 records sent, 16491.6 records/sec (18.87 MB/sec), 1667.4 ms avg
> latency, 1851.0 max latency.
> 102708 records sent, 20537.5 records/sec (23.50 MB/sec), 1318.7 ms avg
> latency, 1575.0 max latency.
> 1000000 records sent, 14652.229337 records/sec (16.77 MB/sec), 1792.89 ms
> avg latency, 3410.00 ms max latency, 1677 ms 50th, 2625 ms 95th, 3043 ms
> 99th, 3372 ms 99.9th.
>
>
> We use openssl to generate rsa:4096 bit keys. These are how the speeds
> look like on the node.
>
>                    sign       verify     sign/s   verify/s
> rsa  512 bits  0.000052s  0.000004s  19084.2   265528.2
> rsa 1024 bits  0.000194s  0.000010s   5160.4    96859.5
> rsa 2048 bits  0.001147s  0.000034s    872.1    29052.4
> rsa 4096 bits  0.008723s  0.000129s    114.6     7766.2
>
> Thank you,
> Harsha
> Sent from Outlook<http://aka.ms/weboutlook>
> 
> From: Harsha 
> Sent: Thursday, August 23, 2018 3:42 PM
> To: users@kafka.apache.org
> Subject: Re: Performance Impact with Apache Kafka Security
>
> Hi,
>   Which Kafka version and Java version are you using? Did you try this
> with Java 9 which has 2.5x perf improvements over Java 8 for SSL? Can you
> try using a slightly weaker cipher suite to improve the performance?
>
> -Harsha
>
> On Wed, Aug 22, 2018, at 1:11 PM, Sri Harsha Chavali wrote:
> > Hi Guys,
> >
> > We are trying to secure the Kafka-Cluster in order to enforce topic
> > level security based on sentry roles. We are seeing a big performance
> > impact after SSL_SASL is enabled. I read multiple blog posts describing
> > the performance impact but that also said that the impact would be
> > negligible, but I see a significant hit in my case (60-85%). Could
> > someone please suggest if you have seen this kind of performance impact
> > and what is done to overcome the same? We cannot afford to have an
> > unsecure cluster.

Re: Performance Impact with Apache Kafka Security

2018-08-24 Thread Sri Harsha Chavali
Hi Harsha,

Given below are all the details. We are using Kafka on CDH. Do you have any 
suggestions based on the statistics below?

CDK - 2.2.0 - 0.10.2.0+kafka2.2.0+110
Apache Kafka - 0.10.2.
java version "1.8.0_151"

Tried using Java 9, with not much difference. We need to make a small change 
to the kafka-run-class.sh file in order for it to pick up Java 9.
Statistics with Java 9 are not very great:
18/08/24 12:00:06 INFO utils.AppInfoParser: Kafka commitId : unknown
28270 records sent, 5634.8 records/sec (6.45 MB/sec), 1892.8 ms avg latency, 
2988.0 max latency.
60588 records sent, 12117.6 records/sec (13.87 MB/sec), 2483.3 ms avg latency, 
3410.0 max latency.
59940 records sent, 11973.6 records/sec (13.70 MB/sec), 2226.4 ms avg latency, 
2436.0 max latency.
68364 records sent, 13648.2 records/sec (15.62 MB/sec), 1911.4 ms avg latency, 
2261.0 max latency.
78570 records sent, 15714.0 records/sec (17.98 MB/sec), 1875.9 ms avg latency, 
2490.0 max latency.
91368 records sent, 18259.0 records/sec (20.90 MB/sec), 1525.3 ms avg latency, 
1695.0 max latency.
80838 records sent, 16141.8 records/sec (18.47 MB/sec), 1498.3 ms avg latency, 
2003.0 max latency.
60750 records sent, 12140.3 records/sec (13.89 MB/sec), 2288.1 ms avg latency, 
2906.0 max latency.
70146 records sent, 14015.2 records/sec (16.04 MB/sec), 2012.6 ms avg latency, 
2288.0 max latency.
62208 records sent, 12439.1 records/sec (14.24 MB/sec), 2170.9 ms avg latency, 
2523.0 max latency.
92502 records sent, 18493.0 records/sec (21.16 MB/sec), 1502.5 ms avg latency, 
2144.0 max latency.
82458 records sent, 16491.6 records/sec (18.87 MB/sec), 1667.4 ms avg latency, 
1851.0 max latency.
102708 records sent, 20537.5 records/sec (23.50 MB/sec), 1318.7 ms avg latency, 
1575.0 max latency.
1000000 records sent, 14652.229337 records/sec (16.77 MB/sec), 1792.89 ms avg 
latency, 3410.00 ms max latency, 1677 ms 50th, 2625 ms 95th, 3043 ms 99th, 3372 
ms 99.9th.


We use openssl to generate rsa:4096 bit keys. This is how the speeds look 
on the node:

                   sign       verify     sign/s   verify/s
rsa  512 bits  0.000052s  0.000004s  19084.2   265528.2
rsa 1024 bits  0.000194s  0.000010s   5160.4    96859.5
rsa 2048 bits  0.001147s  0.000034s    872.1    29052.4
rsa 4096 bits  0.008723s  0.000129s    114.6     7766.2

Thank you,
Harsha
Sent from Outlook<http://aka.ms/weboutlook>
________
From: Harsha 
Sent: Thursday, August 23, 2018 3:42 PM
To: users@kafka.apache.org
Subject: Re: Performance Impact with Apache Kafka Security

Hi,
  Which Kafka version and Java version are you using? Did you try this with 
Java 9 which has 2.5x perf improvements over Java 8 for SSL? Can you try using 
a slightly weaker cipher suite to improve the performance?

-Harsha

On Wed, Aug 22, 2018, at 1:11 PM, Sri Harsha Chavali wrote:
> Hi Guys,
>
> We are trying to secure the Kafka-Cluster in order to enforce topic
> level security based on sentry roles. We are seeing a big performance
> impact after SSL_SASL is enabled. I read multiple blog posts describing
> the performance impact but that also said that the impact would be
> negligible, but I see a significant hit in my case (60-85%). Could
> someone please suggest if you have seen this kind of performance impact
> and what is done to overcome the same? We cannot afford to have an
> unsecure cluster.
>
> In the below example, I'm trying to produce a record of size 1200 bytes
> and set the batch.size property to 100000.
> 
> Before Security is Enabled:
> 
> kafka-producer-perf-test --topic myPerformanceTestTopic --num-records 
> 1000000 --print-metrics --record-size 1200 --throughput 1000000 
> --producer-props acks=1 bootstrap.servers=:9092 
> batch.size=100000
>
> 18/08/22 10:12:37 INFO utils.AppInfoParser: Kafka commitId : unknown
>
> 294801 records sent, 58960.2 records/sec (67.47 MB/sec), 55.0 ms avg
> latency, 265.0 max latency.
>
> 275261 records sent, 49632.3 records/sec (56.80 MB/sec), 251.1 ms avg
> latency, 1420.0 max latency.
>
> 293654 records sent, 58730.8 records/sec (67.21 MB/sec), 244.0 ms avg
> latency, 1485.0 max latency.
>
> 1000000 records sent, 57733.387218 records/sec (66.07 MB/sec), 162.61 ms 
> avg latency, 1485.00 ms max latency, 73 ms 50th, 546 ms 95th, 1459 ms
> 99th, 1477 ms 99.9th.
>
> After Security is Enabled:
>
> kafka-producer-perf-test --topic myPerformanceTestTopic --num-records 
> 1000000 --print-metrics --record-size 1200 --throughput 1000000 
> --producer-props acks=1 bootstrap.servers=:9094 
> batch.size=100000 --producer.config /sasl-ssl-
> auth.properties
>
>
> 18/08/22 12:33:36 INFO utils.AppInfoParser: Kafka commitId : unknown
>
> 396

Re: Performance Impact with Apache Kafka Security

2018-08-23 Thread Harsha
Hi,
  Which Kafka version and Java version are you using? Did you try this with 
Java 9 which has 2.5x perf improvements over Java 8 for SSL? Can you try using 
a slightly weaker cipher suite to improve the performance?
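
For instance (a sketch; the suite below is one example of an AES-GCM cipher
that tends to be cheaper than the CBC/SHA2 suites on most hardware, so pick
per your own security policy):

    # client and broker SSL config
    ssl.enabled.protocols=TLSv1.2
    ssl.cipher.suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256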

-Harsha

On Wed, Aug 22, 2018, at 1:11 PM, Sri Harsha Chavali wrote:
> Hi Guys,
> 
> We are trying to secure the Kafka-Cluster in order to enforce topic 
> level security based on sentry roles. We are seeing a big performance 
> impact after SSL_SASL is enabled. I read multiple blog posts describing 
> the performance impact but that also said that the impact would be 
> negligible, but I see a significant hit in my case (60-85%). Could 
> someone please suggest if you have seen this kind of performance impact 
> and what is done to overcome the same? We cannot afford to have an 
> unsecure cluster.
> 
> In the below example, I'm trying to produce a record of size 1200 bytes 
> and set the batch.size property to 100000.
> 
> Before Security is Enabled:
> 
> kafka-producer-perf-test --topic myPerformanceTestTopic --num-records 
> 1000000 --print-metrics --record-size 1200 --throughput 1000000 
> --producer-props acks=1 bootstrap.servers=:9092 
> batch.size=100000
> 
> 18/08/22 10:12:37 INFO utils.AppInfoParser: Kafka commitId : unknown
> 
> 294801 records sent, 58960.2 records/sec (67.47 MB/sec), 55.0 ms avg 
> latency, 265.0 max latency.
> 
> 275261 records sent, 49632.3 records/sec (56.80 MB/sec), 251.1 ms avg 
> latency, 1420.0 max latency.
> 
> 293654 records sent, 58730.8 records/sec (67.21 MB/sec), 244.0 ms avg 
> latency, 1485.0 max latency.
> 
> 1000000 records sent, 57733.387218 records/sec (66.07 MB/sec), 162.61 ms 
> avg latency, 1485.00 ms max latency, 73 ms 50th, 546 ms 95th, 1459 ms 
> 99th, 1477 ms 99.9th.
> 
> After Security is Enabled:
> 
> kafka-producer-perf-test --topic myPerformanceTestTopic --num-records 
> 1000000 --print-metrics --record-size 1200 --throughput 1000000 
> --producer-props acks=1 bootstrap.servers=:9094 
> batch.size=100000 --producer.config /sasl-ssl-
> auth.properties
> 
> 
> 18/08/22 12:33:36 INFO utils.AppInfoParser: Kafka commitId : unknown
> 
> 39610 records sent, 7917.2 records/sec (9.06 MB/sec), 1669.5 ms avg 
> latency, 2608.0 max latency.
> 
> 58320 records sent, 11659.3 records/sec (13.34 MB/sec), 2514.2 ms avg 
> latency, 3242.0 max latency.
> 
> 92016 records sent, 18399.5 records/sec (21.06 MB/sec), 1579.2 ms avg 
> latency, 2119.0 max latency.
> 
> 84645 records sent, 16925.6 records/sec (19.37 MB/sec), 1578.7 ms avg 
> latency, 2111.0 max latency.
> 
> 106515 records sent, 21286.0 records/sec (24.36 MB/sec), 1300.4 ms avg 
> latency, 1662.0 max latency.
> 
> 74520 records sent, 14895.1 records/sec (17.05 MB/sec), 1688.1 ms avg 
> latency, 2350.0 max latency.
> 
> 77841 records sent, 15562.0 records/sec (17.81 MB/sec), 1749.2 ms avg 
> latency, 2030.0 max latency.
> 
> 94851 records sent, 18970.2 records/sec (21.71 MB/sec), 1495.4 ms avg 
> latency, 2111.0 max latency.
> 
> 102870 records sent, 20569.9 records/sec (23.54 MB/sec), 1345.6 ms avg 
> latency, 1559.0 max latency.
> 
> 121095 records sent, 24219.0 records/sec (27.72 MB/sec), 1143.8 ms avg 
> latency, 1738.0 max latency.
> 
> 126036 records sent, 25202.2 records/sec (28.84 MB/sec), 1080.5 ms avg 
> latency, 1384.0 max latency.
> 
> 1000000 records sent, 17906.385417 records/sec (20.49 MB/sec), 1465.52 
> ms avg latency, 3242.00 ms max latency, 1355 ms 50th, 2339 ms 95th, 2914 
> ms 99th, 3211 ms 99.9th
> 
> Thank you,
> Harsha


Performance Impact with Apache Kafka Security

2018-08-22 Thread Sri Harsha Chavali
Hi Guys,

We are trying to secure the Kafka cluster in order to enforce topic-level 
security based on Sentry roles. We are seeing a big performance impact after 
SASL_SSL is enabled. I read multiple blog posts describing the performance 
impact, but they also said that the impact would be negligible, whereas I see 
a significant hit in my case (60-85%). Could someone please suggest whether 
you have seen this kind of performance impact and what you did to overcome 
it? We cannot afford to have an insecure cluster.

In the below example, I'm trying to produce a record of size 1200 bytes and set 
the batch.size property to 100000.

Before Security is Enabled:

kafka-producer-perf-test --topic myPerformanceTestTopic --num-records 1000000 
--print-metrics --record-size 1200 --throughput 1000000 --producer-props 
acks=1 bootstrap.servers=:9092 batch.size=100000

18/08/22 10:12:37 INFO utils.AppInfoParser: Kafka commitId : unknown

294801 records sent, 58960.2 records/sec (67.47 MB/sec), 55.0 ms avg latency, 
265.0 max latency.

275261 records sent, 49632.3 records/sec (56.80 MB/sec), 251.1 ms avg latency, 
1420.0 max latency.

293654 records sent, 58730.8 records/sec (67.21 MB/sec), 244.0 ms avg latency, 
1485.0 max latency.

1000000 records sent, 57733.387218 records/sec (66.07 MB/sec), 162.61 ms avg 
latency, 1485.00 ms max latency, 73 ms 50th, 546 ms 95th, 1459 ms 99th, 1477 ms 
99.9th.

After Security is Enabled:

kafka-producer-perf-test --topic myPerformanceTestTopic --num-records 1000000 
--print-metrics --record-size 1200 --throughput 1000000 --producer-props 
acks=1 bootstrap.servers=:9094 batch.size=100000 --producer.config 
/sasl-ssl-auth.properties


18/08/22 12:33:36 INFO utils.AppInfoParser: Kafka commitId : unknown

39610 records sent, 7917.2 records/sec (9.06 MB/sec), 1669.5 ms avg latency, 
2608.0 max latency.

58320 records sent, 11659.3 records/sec (13.34 MB/sec), 2514.2 ms avg latency, 
3242.0 max latency.

92016 records sent, 18399.5 records/sec (21.06 MB/sec), 1579.2 ms avg latency, 
2119.0 max latency.

84645 records sent, 16925.6 records/sec (19.37 MB/sec), 1578.7 ms avg latency, 
2111.0 max latency.

106515 records sent, 21286.0 records/sec (24.36 MB/sec), 1300.4 ms avg latency, 
1662.0 max latency.

74520 records sent, 14895.1 records/sec (17.05 MB/sec), 1688.1 ms avg latency, 
2350.0 max latency.

77841 records sent, 15562.0 records/sec (17.81 MB/sec), 1749.2 ms avg latency, 
2030.0 max latency.

94851 records sent, 18970.2 records/sec (21.71 MB/sec), 1495.4 ms avg latency, 
2111.0 max latency.

102870 records sent, 20569.9 records/sec (23.54 MB/sec), 1345.6 ms avg latency, 
1559.0 max latency.

121095 records sent, 24219.0 records/sec (27.72 MB/sec), 1143.8 ms avg latency, 
1738.0 max latency.

126036 records sent, 25202.2 records/sec (28.84 MB/sec), 1080.5 ms avg latency, 
1384.0 max latency.

1000000 records sent, 17906.385417 records/sec (20.49 MB/sec), 1465.52 ms avg 
latency, 3242.00 ms max latency, 1355 ms 50th, 2339 ms 95th, 2914 ms 99th, 3211 
ms 99.9th

Thank you,
Harsha


Re: Restrict access on kafka with multiple listener

2018-07-17 Thread Harsha


There is no listener-to-topic mapping right now, but you can run two 
listeners, one PLAINTEXT and another SASL. Configure your authorizer to allow 
anonymous read/write on the topics that are public, and give the topics you 
want to protect explicit ACLs for principal names. This will protect any 
reads/writes on the secure topics: any request for them on the PLAINTEXT port 
will be rejected with an AuthorizationException, while the rest of the topics 
can still be accessed through both ports.
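
A sketch of that setup (hosts, ports, topics, and principals are placeholders;
this assumes the SimpleAclAuthorizer and the ZooKeeper-based kafka-acls.sh of
this era):

    # server.properties
    listeners=PLAINTEXT://broker1:9092,SASL_PLAINTEXT://broker1:9093
    sasl.enabled.mechanisms=GSSAPI
    authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

    # public topic: allow everyone, including unauthenticated PLAINTEXT
    # clients, which appear as the principal User:ANONYMOUS
    kafka-acls.sh --authorizer-properties zookeeper.connect=zk1:2181 --add \
      --allow-principal "User:*" --operation Read --operation Write \
      --topic public-topic

    # protected topic: only the named principal; ANONYMOUS is denied
    kafka-acls.sh --authorizer-properties zookeeper.connect=zk1:2181 --add \
      --allow-principal User:app1 --operation Read --operation Write \
      --topic secure-topic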

-Harsha

On Tue, Jul 17, 2018, at 5:09 PM, Matt L wrote:
> Hi,
> 
> I have an existing Kafka Cluster that is configured as PLAINTEXT. We want
> to enable SASL (GSSAPI) as an additional listener.
> 
> Is there a way to force specific topics to only accept traffic
> (publish/consume) from a certain listener?
> 
> e.g. if I create a topic and set ACLs, how do I stop a client from using
> the PLAINTEXT protocol to publish to and consume from that topic?
> 
> Thanks!


Re: [VOTE] 1.1.1 RC3

2018-07-09 Thread Harsha
+1.

* Ran unit tests
* Installed in a cluster and ran simple tests

Thanks,
Harsha

On Mon, Jul 9th, 2018 at 6:38 AM, Ted Yu  wrote:

> 
> 
> 
> +1
> 
> Ran test suite.
> 
> Checked signatures.
> 
> 
> 
> On Sun, Jul 8, 2018 at 3:36 PM Dong Lin < lindon...@gmail.com > wrote:
> 
> > Hello Kafka users, developers and client-developers,
> >
> > This is the fourth candidate for release of Apache Kafka 1.1.1.
> >
> > Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was
> first
> > released with 1.1.0 about 3 months ago. We have fixed about 25 issues
> since
> > that release. A few of the more significant fixes include:
> >
> > KAFKA-6925 < https://issues.apache.org/jira/browse/KAFKA-6925> - Fix
> memory
> > leak in StreamsMetricsThreadImpl
> > KAFKA-6937 < https://issues.apache.org/jira/browse/KAFKA-6937> - In-sync
> 
> > replica delayed during fetch if replica throttle is exceeded
> > KAFKA-6917 < https://issues.apache.org/jira/browse/KAFKA-6917> - Process
> 
> > txn
> > completion asynchronously to avoid deadlock
> > KAFKA-6893 < https://issues.apache.org/jira/browse/KAFKA-6893> - Create
> > processors before starting acceptor to avoid ArithmeticException
> > KAFKA-6870 < https://issues.apache.org/jira/browse/KAFKA-6870> -
> > Fix ConcurrentModificationException in SampledStat
> > KAFKA-6878 < https://issues.apache.org/jira/browse/KAFKA-6878> - Fix
> > NullPointerException when querying global state store
> > KAFKA-6879 < https://issues.apache.org/jira/browse/KAFKA-6879> - Invoke
> > session init callbacks outside lock to avoid Controller deadlock
> > KAFKA-6857 < https://issues.apache.org/jira/browse/KAFKA-6857> - Prevent
> 
> > follower from truncating to the wrong offset if undefined leader epoch
> is
> > requested
> > KAFKA-6854 < https://issues.apache.org/jira/browse/KAFKA-6854> - Log
> > cleaner
> > fails with transaction markers that are deleted during clean
> > KAFKA-6747 < https://issues.apache.org/jira/browse/KAFKA-6747> - Check
> > whether there is in-flight transaction before aborting transaction
> > KAFKA-6748 < https://issues.apache.org/jira/browse/KAFKA-6748> - Double
> > check before scheduling a new task after the punctuate call
> > KAFKA-6739 < https://issues.apache.org/jira/browse/KAFKA-6739> -
> > Fix IllegalArgumentException when down-converting from V2 to V0/V1
> > KAFKA-6728 < https://issues.apache.org/jira/browse/KAFKA-6728> -
> > Fix NullPointerException when instantiating the HeaderConverter
> >
> > Kafka 1.1.1 release plan:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
> >
> > Release notes for the 1.1.1 release:
> > http://home.apache.org/~lindong/kafka-1.1.1-rc3/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Thursday, July 12, 12pm PT ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~lindong/kafka-1.1.1-rc3/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~lindong/kafka-1.1.1-rc3/javadoc/
> >
> > * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc3 tag:
> > https://github.com/apache/kafka/tree/1.1.1-rc3
> >
> > * Documentation:
> > http://kafka.apache.org/11/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/11/protocol.html
> >
> > * Successful Jenkins builds for the 1.1 branch:
> > Unit/integration tests: * https://builds.apache.org/job/kafka-1.1-jdk7/162
> 
> > < https://builds.apache.org/job/kafka-1.1-jdk7/162>*
> > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka/job/1.1/156/
> >
> > Please test and verify the release artifacts and submit a vote for this
> RC,
> > or report any issues so we can fix them and get a new RC out ASAP.
> Although
> > this release vote requires PMC votes to pass, testing, votes, and bug
> > reports are valuable and appreciated from everyone.
> >
> >
> > Regards,
> > Dong
> >
> 
> 
> 
> 
> 
>

Re: [VOTE] 2.0.0 RC1

2018-07-02 Thread Harsha
+1. 

1) Ran unit tests
2) Ran a 3-node cluster and tested basic operations.

Thanks,
Harsha

On Mon, Jul 2nd, 2018 at 11:13 AM, "Vahid S Hashemian" 
 wrote:

> 
> 
> 
> +1 (non-binding)
> 
> Built from source and ran quickstart successfully on Ubuntu (with Java 8).
> 
> 
> Minor: It seems this doc update PR is not included in the RC:
> https://github.com/apache/kafka/pull/5280
> Guozhang seems to have wanted to cherry-pick it to 2.0.
> 
> Thanks Rajini!
> --Vahid
> 
> 
> 
> 
> From: Rajini Sivaram < rajinisiva...@gmail.com >
> To: dev < d...@kafka.apache.org >, Users < users@kafka.apache.org >,
> kafka-clients < kafka-clie...@googlegroups.com >
> Date: 06/29/2018 11:36 AM
> Subject: [VOTE] 2.0.0 RC1
> 
> 
> 
> Hello Kafka users, developers and client-developers,
> 
> 
> This is the second candidate for release of Apache Kafka 2.0.0.
> 
> 
> This is a major version release of Apache Kafka. It includes 40 new KIPs
> and
> 
> several critical bug fixes. Please see the 2.0.0 release plan for more
> details:
> 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
> 
> 
> 
> A few notable highlights:
> 
> - Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
> (KIP-277)
> - SASL/OAUTHBEARER implementation (KIP-255)
> - Improved quota communication and customization of quotas (KIP-219,
> KIP-257)
> - Efficient memory usage for down conversion (KIP-283)
> - Fix log divergence between leader and follower during fast leader
> failover (KIP-279)
> - Drop support for Java 7 and remove deprecated code including old
> scala
> clients
> - Connect REST extension plugin, support for externalizing secrets and
> improved error handling (KIP-285, KIP-297, KIP-298 etc.)
> - Scala API for Kafka Streams and other Streams API improvements
> (KIP-270, KIP-150, KIP-245, KIP-251 etc.)
> 
> Release notes for the 2.0.0 release:
> 
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/RELEASE_NOTES.html
> 
> 
> 
> 
> *** Please download, test and vote by Tuesday, July 3rd, 4pm PT
> 
> 
> Kafka's KEYS file containing PGP keys we use to sign the release:
> 
> http://kafka.apache.org/KEYS
> 
> 
> 
> * Release artifacts to be voted upon (source and binary):
> 
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/
> 
> 
> 
> * Maven artifacts to be voted upon:
> 
> https://repository.apache.org/content/groups/staging/
> 
> 
> 
> * Javadoc:
> 
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/javadoc/
> 
> 
> 
> * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
> 
> https://github.com/apache/kafka/tree/2.0.0-rc1
> 
> 
> 
> * Documentation:
> 
> http://kafka.apache.org/20/documentation.html
> 
> 
> 
> * Protocol:
> 
> http://kafka.apache.org/20/protocol.html
> 
> 
> 
> * Successful Jenkins builds for the 2.0 branch:
> 
> Unit/integration tests:
> https://builds.apache.org/job/kafka-2.0-jdk8/66/
> 
> 
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka/job/2.0/15/
> 
> 
> 
> 
> Please test and verify the release artifacts and submit a vote for this RC
> 
> or report any issues so that we can fix them and roll out a new RC ASAP!
> 
> Although this release vote requires PMC votes to pass, testing, votes, and
> 
> bug
> reports are valuable and appreciated from everyone.
> 
> 
> Thanks,
> 
> 
> Rajini
> 
> 
> 
> 
> 
> 
> 
>

Re: [VOTE] 2.0.0 RC1

2018-07-02 Thread Harsha Ch
+1 .

* Ran unit tests
* Verified signatures
* Ran a 3-node cluster with basic operations

Thanks,
Harsha

On Mon, Jul 2nd, 2018 at 11:13 AM, "Vahid S Hashemian" 
 wrote:

> 
> 
> 
> +1 (non-binding)
> 
> Built from source and ran quickstart successfully on Ubuntu (with Java 8).
> 
> 
> Minor: It seems this doc update PR is not included in the RC:
> https://github.com/apache/kafka/pull/5280
> Guozhang seems to have wanted to cherry-pick it to 2.0.
> 
> Thanks Rajini!
> --Vahid
> 
> 
> 
> 
> From: Rajini Sivaram < rajinisiva...@gmail.com >
> To: dev < d...@kafka.apache.org >, Users < users@kafka.apache.org >,
> kafka-clients < kafka-clie...@googlegroups.com >
> Date: 06/29/2018 11:36 AM
> Subject: [VOTE] 2.0.0 RC1
> 
> 
> 
> Hello Kafka users, developers and client-developers,
> 
> 
> This is the second candidate for release of Apache Kafka 2.0.0.
> 
> 
> This is a major version release of Apache Kafka. It includes 40 new KIPs
> and
> 
> several critical bug fixes. Please see the 2.0.0 release plan for more
> details:
> 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
> 
> 
> 
> A few notable highlights:
> 
> - Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
> (KIP-277)
> - SASL/OAUTHBEARER implementation (KIP-255)
> - Improved quota communication and customization of quotas (KIP-219,
> KIP-257)
> - Efficient memory usage for down conversion (KIP-283)
> - Fix log divergence between leader and follower during fast leader
> failover (KIP-279)
> - Drop support for Java 7 and remove deprecated code including old
> scala
> clients
> - Connect REST extension plugin, support for externalizing secrets and
> improved error handling (KIP-285, KIP-297, KIP-298 etc.)
> - Scala API for Kafka Streams and other Streams API improvements
> (KIP-270, KIP-150, KIP-245, KIP-251 etc.)
> 
> Release notes for the 2.0.0 release:
> 
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/RELEASE_NOTES.html
> 
> 
> 
> 
> *** Please download, test and vote by Tuesday, July 3rd, 4pm PT
> 
> 
> Kafka's KEYS file containing PGP keys we use to sign the release:
> 
> http://kafka.apache.org/KEYS
> 
> 
> 
> * Release artifacts to be voted upon (source and binary):
> 
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/
> 
> 
> 
> * Maven artifacts to be voted upon:
> 
> https://repository.apache.org/content/groups/staging/
> 
> 
> 
> * Javadoc:
> 
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/javadoc/
> 
> 
> 
> * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
> 
> https://github.com/apache/kafka/tree/2.0.0-rc1
> 
> 
> 
> * Documentation:
> 
> http://kafka.apache.org/20/documentation.html
> 
> 
> 
> * Protocol:
> 
> http://kafka.apache.org/20/protocol.html
> 
> 
> 
> * Successful Jenkins builds for the 2.0 branch:
> 
> Unit/integration tests:
> https://builds.apache.org/job/kafka-2.0-jdk8/66/
> 
> 
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka/job/2.0/15/
> 
> 
> 
> 
> Please test and verify the release artifacts and submit a vote for this RC
> 
> or report any issues so that we can fix them and roll out a new RC ASAP!
> 
> Although this release vote requires PMC votes to pass, testing, votes, and
> 
> bug
> reports are valuable and appreciated from everyone.
> 
> 
> Thanks,
> 
> 
> Rajini
> 
> 
> 
> 
> 
> 
> 
>

Re: [kafka-clients] [VOTE] 1.0.2 RC1

2018-07-02 Thread Harsha
+1.
    
1) Ran unit tests
2) Ran a 3-node cluster and tested basic operations.

Thanks,
Harsha

On Mon, Jul 2nd, 2018 at 11:57 AM, Jun Rao  wrote:

> 
> 
> 
> Hi, Matthias,
> 
> Thanks for the running the release. Verified quickstart on scala 2.12
> binary. +1
> 
> Jun
> 
> On Fri, Jun 29, 2018 at 10:02 PM, Matthias J. Sax < matth...@confluent.io >
> 
> wrote:
> 
> > Hello Kafka users, developers and client-developers,
> >
> > This is the second candidate for release of Apache Kafka 1.0.2.
> >
> > This is a bug fix release addressing 27 tickets:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.0.2
> >
> > Release notes for the 1.0.2 release:
> > http://home.apache.org/~mjsax/kafka-1.0.2-rc1/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by end of next week (7/6/18).
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~mjsax/kafka-1.0.2-rc1/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~mjsax/kafka-1.0.2-rc1/javadoc/
> >
> > * Tag to be voted upon (off 1.0 branch) is the 1.0.2 tag:
> > https://github.com/apache/kafka/releases/tag/1.0.2-rc1
> >
> > * Documentation:
> > http://kafka.apache.org/10/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/10/protocol.html
> >
> > * Successful Jenkins builds for the 1.0 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/214/
> 
> > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka/job/1.0/225/
> >
> > /**
> >
> > Thanks,
> > -Matthias
> >
> >
> > --
> > You received this message because you are subscribed to the Google
> Groups
> > "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an
> > email to kafka-clients+ unsubscr...@googlegroups.com.
> > To post to this group, send email to kafka-clie...@googlegroups.com.
> > Visit this group at https://groups.google.com/group/kafka-clients.
> > To view this discussion on the web visit https://groups.google.com/d/
> > msgid/kafka-clients/ca183ad4-9285-e423-3850-261f9dfec044%40confluent.io.
> 
> > For more options, visit https://groups.google.com/d/optout.
> >
> 
> 
> 
>

Re: [VOTE] 0.11.0.3 RC0

2018-06-24 Thread Harsha
+1

* Ran unit tests

* Ran a 3-node cluster and simple tests.

Thanks,
Harsha

On Sat, Jun 23rd, 2018 at 9:7 AM, Ted Yu  wrote:

> 
> 
> 
> +1
> 
> Checked signatures.
> 
> Ran unit test suite.
> 
> On Fri, Jun 22, 2018 at 4:56 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com > wrote:
> 
> > +1 (non-binding)
> >
> > Built from source and ran quickstart successfully on Ubuntu (with Java
> 8).
> >
> > Thanks Matthias!
> > --Vahid
> >
> >
> >
> >
> > From: "Matthias J. Sax" < matth...@confluent.io >
> > To: d...@kafka.apache.org , users@kafka.apache.org ,
> > kafka-clie...@googlegroups.com
> > Date: 06/22/2018 03:14 PM
> > Subject: [VOTE] 0.11.0.3 RC0
> >
> >
> >
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 0.11.0.3.
> >
> > This is a bug fix release closing 27 tickets:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.3
> >
> > Release notes for the 0.11.0.3 release:
> > http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
> 
> > can close the vote on Wednesday.
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/javadoc/
> >
> > * Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.3 tag:
> > https://github.com/apache/kafka/releases/tag/0.11.0.3-rc0
> >
> > * Documentation:
> > http://kafka.apache.org/0110/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/0110/protocol.html
> >
> > * Successful Jenkins builds for the 0.11.0 branch:
> > Unit/integration tests:
> > https://builds.apache.org/job/kafka-0.11.0-jdk7/385/
> > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka/job/0.11.0/217/
> >
> > /**
> >
> > Thanks,
> > -Matthias
> >
> >
> >
> >
> >
> 
> 
> 
>

Re: [VOTE] 1.1.1 RC1

2018-06-22 Thread Harsha
+1 . 
 1. Ran unit tests. 
 2. Ran few tests on 3-node cluster

Thanks,
Harsha

On Fri, Jun 22nd, 2018 at 2:41 PM Jakob Homan wrote:

> 
> 
> 
> +1 (binding)
> 
> * verified sigs, NOTICE, LICENSE
> * ran unit tests
> * spot checked headers
> 
> -Jakob
> 
> 
> 
> On 22 June 2018 at 13:19, Ted Yu < yuzhih...@gmail.com > wrote:
> > +1
> >
> > Ran test suite.
> > Checked signatures.
> >
> > On Fri, Jun 22, 2018 at 12:14 PM, Vahid S Hashemian <
> > vahidhashem...@us.ibm.com > wrote:
> >
> >> +1 (non-binding)
> >>
> >> Built from source and ran quickstart successfully on Ubuntu (with Java
> 8).
> >>
> >> Thanks Dong!
> >> --Vahid
> >>
> >>
> >>
> >> From: Dong Lin < lindon...@gmail.com >
> >> To: d...@kafka.apache.org , users@kafka.apache.org ,
> >> kafka-clie...@googlegroups.com
> >> Date: 06/22/2018 10:10 AM
> >> Subject: [VOTE] 1.1.1 RC1
> >>
> >>
> >>
> >> Hello Kafka users, developers and client-developers,
> >>
> >> This is the second candidate for release of Apache Kafka 1.1.1.
> >>
> >> Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was
> first
> >> released with 1.1.0 about 3 months ago. We have fixed about 25 issues
> >> since
> >> that release. A few of the more significant fixes include:
> >>
> >> KAFKA-6925 <https://issues.apache.org/jira/browse/KAFKA-6925> - Fix memory
> >> leak in StreamsMetricsThreadImpl
> >> KAFKA-6937 <https://issues.apache.org/jira/browse/KAFKA-6937> - In-sync
> >> replica delayed during fetch if replica throttle is exceeded
> >> KAFKA-6917 <https://issues.apache.org/jira/browse/KAFKA-6917> - Process txn
> >> completion asynchronously to avoid deadlock
> >> KAFKA-6893 <https://issues.apache.org/jira/browse/KAFKA-6893> - Create
> >> processors before starting acceptor to avoid ArithmeticException
> >> KAFKA-6870 <https://issues.apache.org/jira/browse/KAFKA-6870> - Fix
> >> ConcurrentModificationException in SampledStat
> >> KAFKA-6878 <https://issues.apache.org/jira/browse/KAFKA-6878> - Fix
> >> NullPointerException when querying global state store
> >> KAFKA-6879 <https://issues.apache.org/jira/browse/KAFKA-6879> - Invoke
> >> session init callbacks outside lock to avoid Controller deadlock
> >> KAFKA-6857 <https://issues.apache.org/jira/browse/KAFKA-6857> - Prevent
> >> follower from truncating to the wrong offset if undefined leader epoch is
> >> requested
> >> KAFKA-6854 <https://issues.apache.org/jira/browse/KAFKA-6854> - Log cleaner
> >> fails with transaction markers that are deleted during clean
> >> KAFKA-6747 <https://issues.apache.org/jira/browse/KAFKA-6747> - Check
> >> whether there is in-flight transaction before aborting transaction
> >> KAFKA-6748 <https://issues.apache.org/jira/browse/KAFKA-6748> - Double
> >> check before scheduling a new task after the punctuate call
> >> KAFKA-6739 <https://issues.apache.org/jira/browse/KAFKA-6739> - Fix
> >> IllegalArgumentException when down-converting from V2 to V0/V1
> >> KAFKA-6728 <https://issues.apache.org/jira/browse/KAFKA-6728> - Fix
> >> NullPointerException when instantiating the HeaderConverter
> >>
> >> Kafka 1.1.1 release plan:
> >> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
> >>
> >>
> >> Release notes for the 1.1.1 release:
> >> http://home.apache.org/~lindong/kafka-1.1.1-rc1/RELEASE_NOTES.html
> >>
> >>
> >> *** Please download, test and vote by Thursday, Jun 22, 12pm PT ***
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >> http://kafka.apache.org/KEYS
> >>
> >>
> >> * Release artifacts to be voted upon (source and binary):
> >> http://home.apache.org/~lindong/kafka-1.1.1-rc1/
> >>
> >>
> >> * Maven artifacts to be voted upon:
> >> https://repository.apache.org/content/groups/staging/
> >>
> >>
> >> * Javadoc:
> >> http://home.apache.org/~lindong/kafka-1.1.1-rc1/javadoc/
> >>
> >>
> >> * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc1 tag:
> >> https://github.com/apache/kafka/tree/1.1.1-rc1
> >>
> >>
> >> * Documentation:
> >> http://kafka.apache.org/11/documentation.html
> >>
> >>
> >> * Protocol:
> >> http://kafka.apache.org/11/protocol.html
> >>
> >>
> >> * Successful Jenkins builds for the 1.1 branch:
> >> Unit/integration tests:
> >> https://builds.apache.org/job/kafka-1.1-jdk7/152/
> >> System tests:
> >> https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1817
> >>
> >>
> >> Please test and verify the release artifacts and submit a vote for this
> 
> >> RC,
> >> or report any issues so we can fix them and get a new RC out ASAP.
> >> Although
> >> this release vote requires PMC votes to pass, testing, votes, and bug
> >> reports are valuable and appreciated from everyone.
> >>
> >> Cheers,
> >> Dong
> >>
> >>
> >>
> >>
> >>
> 
> 
> 
> 
> 
>

Re: Recording - Storm & Kafka Meetup on April 20th 2017

2017-04-24 Thread Harsha Chintalapani
Hi Aditya,
 Thanks for your interest. We are tentatively planning one in the
first week of June. If you haven't already, please register here:
https://www.meetup.com/Apache-Storm-Apache-Kafka/ . I'll keep the Storm
lists updated once we finalize the date & location.

Thanks,
Harsha

On Mon, Apr 24, 2017 at 7:02 AM Aditya Desai  wrote:

> Hello Everyone
>
> Can you please let us know when is the next meet up? It would be great if
> we can have in May.
>
> Regards
> Aditya Desai
>
> On Mon, Apr 24, 2017 at 2:16 AM, Xin Wang  wrote:
>
>> How about publishing this to Storm site?
>>
>>  - Xin
>>
>> 2017-04-22 19:27 GMT+08:00 steve tueno :
>>
>>> great
>>>
>>> Thanks
>>>
>>>
>>>
>>> Cordialement,
>>>
>>> TUENO FOTSO Steve Jeffrey
>>> Ingénieur de conception
>>> Génie Informatique
>>> +237 676 57 17 28
>>> +237 697 86 36 38
>>> +33 6 23 71 91 52
>>>
>>> https://jobs.jumia.cm/fr/candidats/CVTF1486563.html
>>>
>>> __
>>>
>>> https://play.google.com/store/apps/details?id=com.polytech.remotecomputer
>>> https://play.google.com/store/apps/details?id=com.polytech.internetaccesschecker
>>> http://www.traveler.cm/
>>> http://remotecomputer.traveler.cm/
>>> https://play.google.com/store/apps/details?id=com.polytech.androidsmssender
>>> https://github.com/stuenofotso/notre-jargon
>>> https://play.google.com/store/apps/details?id=com.polytech.welovecameroon
>>> https://play.google.com/store/apps/details?id=com.polytech.welovefrance
>>>
>>

Re: Securing Multi-Node single broker kafka instance

2017-03-01 Thread Harsha
Here is the recommended way to set up a 3-node Kafka cluster. It's always
recommended to keep the zookeeper nodes on a different set of machines than the
ones you are running Kafka on. To go with your current 3-node installation:
1. Install a 3-node zookeeper ensemble and make sure the nodes form a quorum
(https://zookeeper.apache.org/doc/r3.3.2/zookeeperAdmin.html)
2. Install the apache kafka binaries on all 3 nodes.
3. Make sure you keep the same zookeeper.connect in server.properties on all 3
nodes for your kafka brokers.
4. Start the Kafka brokers.
5. As a sanity check, create a topic with a replication factor of 3 and
see if you can produce & consume messages (see the sketch below).

Before stepping into security, make sure your non-secure Kafka cluster works ok.
Once you have a stable & working cluster,
follow the instructions in the doc to enable SSL.
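
As a rough sketch of steps 3 and 5 (hostnames here are placeholders, adjust
to your environment), each broker's server.properties would contain:

  # broker.id must be unique per node (1, 2, 3)
  broker.id=1
  zookeeper.connect=zk1:2181,zk2:2181,zk3:2181

and the sanity check could then be run as:

  bin/kafka-topics.sh --create --zookeeper zk1:2181 --replication-factor 3 --partitions 1 --topic sanity-check
  bin/kafka-console-producer.sh --broker-list broker1:9092 --topic sanity-check
  bin/kafka-console-consumer.sh --bootstrap-server broker1:9092 --topic sanity-check --from-beginning

Keeping zookeeper.connect identical on all 3 brokers is what makes them join
the same cluster.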

-Harsha

On Mar 1, 2017, 1:08 PM -0800, IT Consultant <0binarybudd...@gmail.com>, wrote:
> Hi Harsha ,
>
> Thanks a lot .
>
> Let me explain where am i stuck ,
>
> i have three machines on which i am running apache kafka with single broker
> but zookeeper of each machine is configured with other machine.
>
> Example : node1=zk1,zk2,zk3
> node2=zk1,zk2,zk3
> node3=zk1,zk2,zk3
>
> This is done for HA .
>
> Now i need to secure this deployment using SSL .
>
> *Things tried so far :*
>
> Create a key and certificate for each of these nodes and configure broker
> according to the documentation .
>
> However , i see following error when i run console producer and consumer
> with client certificate or client properties file .
>
> WARN Error while fetching metadata for topic
>
>
> How do i make each broker work with other broker ?
> How do i generate and store certificate for this ? because online document
> seems to be confusing for me.
> How do i make zookeepers sync with each other and behave as earlier ?
>
>
>
> On Thu, Mar 2, 2017 at 2:25 AM, Harsha Chintalapani  wrote:
>
> > For inter broker communication over SSL all you need is to add
> > security.inter.broker.protocol to SSL.
> > "How do i make zookeeper talk to each other and brokers?"
> > Not sure I understand the question. You need to make sure zookeeper hosts
> > and port are reachable from your broker nodes.
> > -Harsha
> >
> > On Wed, Mar 1, 2017 at 12:45 PM IT Consultant <0binarybudd...@gmail.com
> > wrote:
> >
> > > Hi Team ,
> > >
> > > Can you please help me understand ,
> > >
> > > 1. How can i secure multi-node (3 machine) single broker (1 broker )
> > Apache
> > > Kafka deployment secure using SSL ?
> > >
> > > i tried to follow instructions here but found pretty confusing .
> > >
> > > https://www.confluent.io/blog/apache-kafka-security-authoriz
> > > ation-authentication-encryption/
> > >
> > > http://docs.confluent.io/2.0.0/kafka/security.html
> > >
> > > Currently , i have kafka running on 3 different machines .
> > > 2. How do i make them talk to each other over SSL ?
> > > 3. How do i make zookeeper talk to each other and brokers?
> > >
> > > Requesting your help .
> > >
> > > Thanks in advance.
> > >
> >


Re: Securing Multi-Node single broker kafka instance

2017-03-01 Thread Harsha Chintalapani
For inter-broker communication over SSL, all you need is to set
security.inter.broker.protocol to SSL.
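A minimal sketch of the relevant server.properties entries (hostnames, paths,
and passwords are placeholders; see the security docs for the full list):

  listeners=SSL://broker1:9093
  security.inter.broker.protocol=SSL
  ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
  ssl.keystore.password=changeit
  ssl.key.password=changeit
  ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
  ssl.truststore.password=changeit

The same keystore/truststore settings the clients use also apply to the
brokers when they act as clients of each other.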
"How do i make zookeeper talk to each other and brokers?"
Not sure I understand the question. You need to make sure the zookeeper hosts
and ports are reachable from your broker nodes.
-Harsha

On Wed, Mar 1, 2017 at 12:45 PM IT Consultant <0binarybudd...@gmail.com>
wrote:

> Hi Team ,
>
> Can you please help me understand ,
>
> 1. How can i secure multi-node (3 machine) single broker (1 broker ) Apache
> Kafka deployment secure using SSL ?
>
> i tried to follow instructions here but found pretty confusing .
>
> https://www.confluent.io/blog/apache-kafka-security-authoriz
> ation-authentication-encryption/
>
> http://docs.confluent.io/2.0.0/kafka/security.html
>
> Currently , i have kafka running on 3 different machines .
> 2. How do i make them talk to each other over SSL ?
> 3. How do i make zookeeper talk to each other and brokers?
>
> Requesting your help .
>
> Thanks in advance.
>


Re: Kafka SASL and custom LoginModule and Authorizer

2017-02-26 Thread Harsha Chintalapani
Hi Christian,
 Kafka client connections are long-living connections,
hence the authentication part comes up during connection establishment, and
once we authenticate, regular kafka protocols can be exchanged.
Doing a heartbeat to keep the token alive in an Authorizer is not a good idea.
An Authorizer's role is to tell if user A has permission on topic X etc., not
to invalidate a user's session. Hence it won't propagate an exception into the
LoginModule. What you are trying to do seems similar to DelegationToken. Have
you checked this KIP
https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
.

Thanks,
Harsha

On Sat, Feb 25, 2017 at 6:48 PM Christian  wrote:

> We have implemented our own LoginModule and Authorizer. The LoginModule
> does an authentication on the client side, obtains a token and passes that
> token down to our custom SaslServer which then verifies that this token is
> valid. Our Authorizer gets that token and asks another custom service if
> the necessary topic permissions are there. This is a very simplified
> description, but it should suffice for my question.
>
> I've found that the LoginModule only authenticates once and passes that
> token down once as well. Our service requires a heartbeat to keep the token
> alive. I would like the SaslService to call our authentication service
> every once in.a while and if the token ever times out (it times out after
> 24 hours; even with heartbeats, but heartbeats every so many minutes can
> extend the session to 24 hours), then I'd like it to respond back to the
> LoginModule with some sort of failed to authorize message or code.
>
> Once this gets passed to the Authorizer, we can extend the session by
> querying our internal Authentication/Authorization service. I was hoping,
> as.a fallback plan that the Authorizer could do this, by simply throwing an
> exception or failing the request when the authorization finally returns
> false (due to session timeout), but I don't see anywhere in the
> documentation where a certain kind of failure in the authorizer can bubble
> up to the authenticator and I don't see how I can configure the loginmodule
> to periodically redo authentication. Can anyone out there help me? Is the
> Kafka SASL implementation not meant for such a complicated scenario or am I
> just thinking about it all wrong?
>
> Thanks,
> Christian
>


Re: [ANNOUNCE] New committer: Jiangjie (Becket) Qin

2016-10-31 Thread Harsha Chintalapani
Congrats Becket!
-Harsha

On Mon, Oct 31, 2016 at 2:13 PM Rajini Sivaram 
wrote:

> Congratulations, Becket!
>
> On Mon, Oct 31, 2016 at 8:38 PM, Matthias J. Sax 
> wrote:
>
> >
> > Congrats!
> >
> > On 10/31/16 11:01 AM, Renu Tewari wrote:
> > > Congratulations Becket!! Absolutely thrilled to hear this. Well
> > > deserved!
> > >
> > > regards renu
> > >
> > >
> > > On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy 
> > > wrote:
> > >
> > >> The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to
> > >> join as a committer and we are pleased to announce that he has
> > >> accepted!
> > >>
> > >> Becket has made significant contributions to Kafka over the last
> > >> two years. He has been deeply involved in a broad range of KIP
> > >> discussions and has contributed several major features to the
> > >> project. He recently completed the implementation of a series of
> > >> improvements (KIP-31, KIP-32, KIP-33) to Kafka’s message format
> > >> that address a number of long-standing issues such as avoiding
> > >> server-side re-compression, better accuracy for time-based log
> > >> retention, log roll and time-based indexing of messages.
> > >>
> > >> Congratulations Becket! Thank you for your many contributions. We
> > >> are excited to have you on board as a committer and look forward
> > >> to your continued participation!
> > >>
> > >> Joel
> > >>
> > >
> >
>
>
>
> --
> Regards,
>
> Rajini
>


Re: [VOTE] Add REST Server to Apache Kafka

2016-10-29 Thread Harsha Chintalapani
This vote is now closed. The majority is not in favor of
including the REST server.
Thanks everyone for participating.


Vote tally:

+1 Votes:
Harsha Chintalapani
Parth Brahbhatt
Ken Jackson
Suresh Srinivas
Jungtaek Lim
Ali Akthar
Haohui Mai
Shekhar Tippur
Guruditta Golani

-1 Votes:
   Jay Kreps
   Ben Davidson
   Jeff Widman
   Stevo Slavic
   Sriram Subramanian
   Craig W
   Jun Rao
   Ismael Juma
   Matthias J. Sax
   Neha Narkhede
   Jan Filipiak
   Ofir Manor
   Samuel Taylor
   Dana Powers
   Zakee
   Andrew Otto
   Hugo Picado
   Jaikiran Pai
   Roger Hoover
   Gwen Shapira



Thanks,
Harsha




On Sat, Oct 29, 2016 at 8:16 AM Gwen Shapira  wrote:

> Oops. Sorry, didn't notice the 72h voting period has passed. You can
> disregard.
>
> Gwen
>
> On Sat, Oct 29, 2016 at 4:29 PM, Gwen Shapira  wrote:
>
> > -1
> >
> > Kafka's development model is a good fit for critical path and
> > well-established APIs. It doesn't work as well for add-ons that need to
> > rapidly evolve. Merging communities with different development pace and
> > models rarely ends well - I think the REST Proxy will benefit from being
> a
> > separate project.
> >
> > On Tue, Oct 25, 2016 at 11:16 PM, Harsha Chintalapani 
> > wrote:
> >
> >> Hi All,
> >>We are proposing to have a REST Server as part of  Apache
> Kafka
> >> to provide producer/consumer/admin APIs. We Strongly believe having
> >> REST server functionality with Apache Kafka will help a lot of users.
> >> Here is the KIP that Mani Kumar wrote
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-80:+
> >> Kafka+Rest+Server.
> >> There is a discussion thread in dev list that had differing opinions on
> >> whether to include REST server in Apache Kafka or not. You can read more
> >> about that in this thread
> >> http://mail-archives.apache.org/mod_mbox/kafka-dev/201610.mb
> >> ox/%3CCAMVt_AyMqeuDM39ZnSXGKtPDdE46sowmqhsXoP-+JMBCUV74Dw@
> >> mail.gmail.com%3E
> >>
> >>   This is a VOTE thread to check interest in the community for
> >> adding REST Server implementation in Apache Kafka.
> >>
> >> Thanks,
> >> Harsha
> >>
> >
> >
> >
> > --
> > *Gwen Shapira*
> > Product Manager | Confluent
> > 650.450.2760 <(650)%20450-2760> | @gwenshap
> > Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
> > <http://www.confluent.io/blog>
> >
> >
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 <(650)%20450-2760> | @gwenshap
> Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
> <http://www.confluent.io/blog>
>


Re: [VOTE] Add REST Server to Apache Kafka

2016-10-25 Thread Harsha Chintalapani
I am not able to link the whole REST server discussion thread
from the dev list properly.
But if you go here
http://mail-archives.apache.org/mod_mbox/kafka-dev/201610.mbox/thread
look for subject  "[DISCUSS] KIP-80: Kafka REST Server"

Thanks,
Harsha

On Tue, Oct 25, 2016 at 4:13 PM Ali Akhtar  wrote:

> And this will make adding health checks via Kubernetes easy.
>
> On Wed, Oct 26, 2016 at 4:12 AM, Ali Akhtar  wrote:
>
> > +1. I hope there will be a corresponding Java library for doing admin
> > functionality.
> >
> > On Wed, Oct 26, 2016 at 4:10 AM, Jungtaek Lim  wrote:
> >
> >> +1
> >>
> >>
> >> On Wed, 26 Oct 2016 at 8:00 AM craig w  wrote:
> >>
> >> > -1
> >> >
> >> > On Tuesday, October 25, 2016, Sriram Subramanian 
> >> wrote:
> >> >
> >> > > -1 for all the reasons that have been described before. This does
> not
> >> > need
> >> > > to be part of the core project.
> >> > >
> >> > > On Tue, Oct 25, 2016 at 3:25 PM, Suresh Srinivas <
> >> sur...@hortonworks.com
> >> > > >
> >> > > wrote:
> >> > >
> >> > > > +1.
> >> > > >
> >> > > > This is an http access to core Kafka. This is very much needed as
> >> part
> >> > of
> >> > > > Apache Kafka under ASF governance model.  This would be great for
> >> the
> >> > > > community instead of duplicated and splintered efforts that may
> >> spring
> >> > > up.
> >> > > >
> >> > > >
> >> > > > _
> >> > > > From: Harsha Chintalapani <ka...@harsha.io>
> >> > > > Sent: Tuesday, October 25, 2016 2:20 PM
> >> > > > Subject: [VOTE] Add REST Server to Apache Kafka
> >> > > > To: dev@kafka.apache.org, users@kafka.apache.org
> >> > > >
> >> > > >
> >> > > > Hi All,
> >> > > >We are proposing to have a REST Server as part of
> Apache
> >> > > Kafka
> >> > > > to provide producer/consumer/admin APIs. We Strongly believe
> having
> >> > > > REST server functionality with Apache Kafka will help a lot of
> >> users.
> >> > > > Here is the KIP that Mani Kumar wrote
> >> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> > > > 80:+Kafka+Rest+Server.
> >> > > > There is a discussion thread in dev list that had differing
> >> opinions on
> >> > > > whether to include REST server in Apache Kafka or not. You can
> read
> >> > more
> >> > > > about that in this thread
> >> > > >
> >> > http://mail-archives.apache.org/mod_mbox/kafka-dev/201610.mb
> >> ox/%3CCAMVt_
> >> > > > aymqeudm39znsxgktpdde46sowmqhsxop-+jmbcuv7...@mail.gmail.com
> >> > > %3E
> >> > > >
> >> > > >   This is a VOTE thread to check interest in the community
> >> for
> >> > > > adding REST Server implementation in Apache Kafka.
> >> > > >
> >> > > > Thanks,
> >> > > > Harsha
> >> > > >
> >> > > >
> >> > > >
> >> > >
> >> >
> >> >
> >> > --
> >> >
> >> > https://github.com/mindscratch
> >> > https://www.google.com/+CraigWickesser
> >> > https://twitter.com/mind_scratch
> >> > https://twitter.com/craig_links
> >> >
> >>
> >
> >
>


Re: [VOTE] Add REST Server to Apache Kafka

2016-10-25 Thread Harsha Chintalapani
Jeff,
  Thanks for participating. We already have the discussion thread going
on; please add your thoughts there. I'll keep this as an interest-check vote
thread.
Thanks,
Harsha

On Tue, Oct 25, 2016 at 3:12 PM Stevo Slavić  wrote:

> -1
>
> I fully agree with Jay and Jeff.
>
> On Wed, Oct 26, 2016 at 12:03 AM, Jeff Widman  wrote:
>
> > -1
> >
> > As an end-user, while I like the idea in theory, in practice I don't
> think
> > it's a good idea just yet.
> >
> > Certainly, it'd be useful, enabling things like
> > https://github.com/Landoop/kafka-topics-ui to work without needing
> > anything
> > outside of Kafka core.
> >
> > But there are already enough things in the existing Kafka core that feel
> a
> > little half-baked, especially on the ops side... there's been a few
> threads
> > lately on the users mailing list where users have asked about
> functionality
> > that seems like it should be standard and all proffered solutions have
> > seemed a bit hacky.
> >
> > Similarly, while I understand the argument that having a solid HTTP
> > endpoint would obviate the need for other language bindings, I'm not sure
> > that's what would happen. From a speed perspective, I'd often rather use
> > the native bindings in my language of choice, even if it's unofficial and
> > may be lagging behind on features. So this doesn't feel that compelling
> to
> > me.
> >
> > I'd rather the Kafka community focus on making Kafka rock-solid at what
> it
> > does, particularly having the existing CLI ops tooling and docs be really
> > solid before trying to add fairly major new pieces to the project.
> >
> > Plus I think it's very beneficial for the Kafka community for Confluent
> to
> > have a strong business model--they provide many contributions to core
> that
> > most of us benefit from for free. Keeping a few things like the REST
> > interface under their umbrella gives them free marketing exposure while
> > still allowing the rest of us access to their tooling as
> free/open-source.
> >
> >
> > On Tue, Oct 25, 2016 at 2:16 PM, Harsha Chintalapani 
> > wrote:
> >
> > > Hi All,
> > >We are proposing to have a REST Server as part of  Apache
> > Kafka
> > > to provide producer/consumer/admin APIs. We Strongly believe having
> > > REST server functionality with Apache Kafka will help a lot of users.
> > > Here is the KIP that Mani Kumar wrote
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 80:+Kafka+Rest+Server.
> > > There is a discussion thread in dev list that had differing opinions on
> > > whether to include REST server in Apache Kafka or not. You can read
> more
> > > about that in this thread
> > >
> http://mail-archives.apache.org/mod_mbox/kafka-dev/201610.mbox/%3CCAMVt_
> > > aymqeudm39znsxgktpdde46sowmqhsxop-+jmbcuv7...@mail.gmail.com%3E
> > >
> > >   This is a VOTE thread to check interest in the community for
> > > adding REST Server implementation in Apache Kafka.
> > >
> > > Thanks,
> > > Harsha
> > >
> >
>


[VOTE] Add REST Server to Apache Kafka

2016-10-25 Thread Harsha Chintalapani
Hi All,
   We are proposing to have a REST Server as part of Apache Kafka
to provide producer/consumer/admin APIs. We strongly believe having
REST server functionality with Apache Kafka will help a lot of users.
Here is the KIP that Mani Kumar wrote
https://cwiki.apache.org/confluence/display/KAFKA/KIP-80:+Kafka+Rest+Server.
There is a discussion thread in dev list that had differing opinions on
whether to include REST server in Apache Kafka or not. You can read more
about that in this thread
http://mail-archives.apache.org/mod_mbox/kafka-dev/201610.mbox/%3ccamvt_aymqeudm39znsxgktpdde46sowmqhsxop-+jmbcuv7...@mail.gmail.com%3E

  This is a VOTE thread to check interest in the community for
adding REST Server implementation in Apache Kafka.

Thanks,
Harsha


Re: SSL Kafka 0.9.0.1

2016-10-03 Thread Harsha Chintalapani
Shri,
  SSL in 0.9.0.1 is not beta and can be used in production. If you want
to put an authorizer on top of SSL to enable ACLs for clients and topics,
that's possible too.
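A rough sketch of what that looks like (principal and topic names are
placeholders; with SSL the principal defaults to the certificate's DN):

  # server.properties
  authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

  # grant a client read access on a topic
  bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk1:2181 \
    --add --allow-principal "User:CN=client1,OU=Eng,O=Example,L=City,ST=State,C=US" \
    --operation Read --topic mytopic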

Thanks,
Harsha

On Mon, Oct 3, 2016 at 8:30 AM Shrikant Patel  wrote:

> We are on 0.9.0.1 and want to use SSL for ACLs and securing communication
> between broker, producer and consumer.
>
> Was/Is the SSL-based ACL in beta for this version of Kafka?
>
>
> We don't want to upgrade to 0.10.x unless it is absolutely needed.
>
> Thanks,
> Shri
> __
> Shrikant Patel   |   PDX-NHIN
> Enterprise Architecture Team
> Asserting the Role of Pharmacy in Healthcare  www.pdxinc.com
> main 817.246.6760 | ext 4302
> 101 Jim Wright Freeway South, Suite 200, Fort Worth, Texas 76108-2202
>
>
>


Re: a broker is already registered on path /brokers/ids/1

2016-08-29 Thread Harsha Chintalapani
How many brokers do you have in this cluster? Have you tried using a stable
zookeeper release like 3.4.8? You can also inspect the registration directly.
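A quick sketch using the zookeeper shell that ships with Kafka (hostname is
a placeholder):

  bin/zookeeper-shell.sh zk1:2181
  ls /brokers/ids
  get /brokers/ids/1

/brokers/ids/1 is an ephemeral node, so it should disappear shortly after
broker 1 shuts down; if it lingers, the old session may not have expired yet.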
-Harsha

On Mon, Aug 29, 2016 at 5:21 AM Nomar Morado  wrote:

> we are using kafka 0.9.0.1 and zk 3.5.0-alpha
>
> On Mon, Aug 29, 2016 at 8:12 AM, Nomar Morado 
> wrote:
>
> > we would get this occasionally after a weekend reboot/restart.
> >
> > we tried restarting a couple of times all to naught.
> >
> > we had to delete zk's directory to get this going again.
> >
> > any ideas what might cause this issue and suggestions on how to resolve
> > this?
> >
> >
> > thanks.
> >
>
>
>
> --
> Regards,
> Nomar Morado
>


Re: [kafka-clients] [VOTE] 0.10.0.1 RC2

2016-08-07 Thread Harsha Chintalapani
+1 (binding)
1. Ran a 3-node cluster
2. Ran a few tests creating, producing, and consuming from secure &
non-secure clients.

Thanks,
Harsha

On Fri, Aug 5, 2016 at 8:50 PM Manikumar Reddy 
wrote:

> +1 (non-binding).
> verified quick start and artifacts.
>
> On Sat, Aug 6, 2016 at 5:45 AM, Joel Koshy  wrote:
>
> > +1 (binding)
> >
> > Thanks Ismael!
> >
> > On Thu, Aug 4, 2016 at 6:54 AM, Ismael Juma  wrote:
> >
> >> Hello Kafka users, developers and client-developers,
> >>
> >> This is the third candidate for the release of Apache Kafka 0.10.0.1.
> >> This is a bug fix release and it includes fixes and improvements from 53
> >> JIRAs (including a few critical bugs). See the release notes for more
> >> details:
> >>
> >> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/RELEASE_NOTES.html
> >>
> >> When compared to RC1, RC2 contains a fix for a regression where an older
> >> version of slf4j-log4j12 was also being included in the libs folder of
> the
> >> binary tarball (KAFKA-4008). Thanks to Manikumar Reddy for reporting the
> >> issue.
> >>
> >> *** Please download, test and vote by Monday, 8 August, 8am PT ***
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >> http://kafka.apache.org/KEYS
> >>
> >> * Release artifacts to be voted upon (source and binary):
> >> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/
> >>
> >> * Maven artifacts to be voted upon:
> >> https://repository.apache.org/content/groups/staging
> >>
> >> * Javadoc:
> >> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc2/javadoc/
> >>
> >> * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc2 tag:
> >> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> >> f8f56751744ba8e55f90f5c4f3aed8c3459447b2
> >>
> >> * Documentation:
> >> http://kafka.apache.org/0100/documentation.html
> >>
> >> * Protocol:
> >> http://kafka.apache.org/0100/protocol.html
> >>
> >> * Successful Jenkins builds for the 0.10.0 branch:
> >> Unit/integration tests: *
> https://builds.apache.org/job/kafka-0.10.0-jdk7/182/
> >> <https://builds.apache.org/job/kafka-0.10.0-jdk7/182/>*
> >> System tests: *
> https://jenkins.confluent.io/job/system-test-kafka-0.10.0/138/
> >> <https://jenkins.confluent.io/job/system-test-kafka-0.10.0/138/>*
> >>
> >> Thanks,
> >> Ismael
> >>
> >
>


Re: [kafka-clients] [VOTE] 0.10.0.1 RC0

2016-07-31 Thread Harsha Ch
Thanks Ismael.

On Sat, Jul 30, 2016 at 7:43 PM Ismael Juma  wrote:

> Hi Dana,
>
> Thanks for testing releases so promptly. Very much appreciated!
>
> It's funny, Ewen had suggested something similar with regards to the
> release notes a couple of days ago. We now have a Python script for
> generating the release notes:
>
> https://github.com/apache/kafka/blob/trunk/release_notes.py
>
> It should be straightforward to change it to do the grouping. Contributions
> encouraged. :)
>
> Ismael
>
> On Fri, Jul 29, 2016 at 5:02 PM, Dana Powers 
> wrote:
>
> > +1
> >
> > tested against kafka-python integration test suite = pass.
> >
> > Aside: as the scope of kafka gets bigger, it may be useful to organize
> > release notes into functional groups like core, brokers, clients,
> > kafka-streams, etc. I've found this useful when organizing
> > kafka-python release notes.
> >
> > -Dana
> >
> > On Fri, Jul 29, 2016 at 7:46 AM, Ismael Juma  wrote:
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the first candidate for the release of Apache Kafka 0.10.0.1.
> > This
> > > is a bug fix release and it includes fixes and improvements from 50
> JIRAs
> > > (including a few critical bugs). See the release notes for more
> details:
> > >
> > > http://home.apache.org/~ijuma/kafka-0.10.0.1-rc0/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Monday, 1 August, 8am PT ***
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~ijuma/kafka-0.10.0.1-rc0/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging
> > >
> > > * Javadoc:
> > > http://home.apache.org/~ijuma/kafka-0.10.0.1-rc0/javadoc/
> > >
> > > * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc0 tag:
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=0c2322c2cf7ab7909cfd8b834d1d2fffc34db109
> > >
> > > * Documentation:
> > > http://kafka.apache.org/0100/documentation.html
> > >
> > > * Protocol:
> > > http://kafka.apache.org/0100/protocol.html
> > >
> > > * Successful Jenkins builds for the 0.10.0 branch:
> > > Unit/integration tests:
> > https://builds.apache.org/job/kafka-0.10.0-jdk7/170/
> > > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka-0.10.0/130/
> > >
> > > Thanks,
> > > Ismael
> > >
> >
>


Re: [VOTE] 0.10.0.1 RC0

2016-07-29 Thread Harsha Ch
Hi Ismael,
  I would like this JIRA to be included in the minor release:
https://issues.apache.org/jira/browse/KAFKA-3950.
Thanks,
Harsha

On Fri, Jul 29, 2016 at 7:46 AM Ismael Juma  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for the release of Apache Kafka 0.10.0.1. This
> is a bug fix release and it includes fixes and improvements from 50 JIRAs
> (including a few critical bugs). See the release notes for more details:
>
> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Monday, 1 August, 8am PT ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging
>
> * Javadoc:
> http://home.apache.org/~ijuma/kafka-0.10.0.1-rc0/javadoc/
>
> * Tag to be voted upon (off 0.10.0 branch) is the 0.10.0.1-rc0 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=0c2322c2cf7ab7909cfd8b834d1d2fffc34db109
>
> * Documentation:
> http://kafka.apache.org/0100/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0100/protocol.html
>
> * Successful Jenkins builds for the 0.10.0 branch:
> Unit/integration tests:
> https://builds.apache.org/job/kafka-0.10.0-jdk7/170/
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka-0.10.0/130/
>
> Thanks,
> Ismael
>


Re: TLS based ACL: Does Kafka support multiple CA Certs on broker

2016-07-17 Thread Harsha Chintalapani
Did you make sure both those CAs are imported into the broker's truststore?
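A quick sketch of importing the second CA and verifying (file and alias
names are placeholders):

  keytool -keystore kafka.server.truststore.jks -alias CA2Root -import -file ca2-cert
  keytool -keystore kafka.server.truststore.jks -list -v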

-Harsha

On Fri, Jul 15, 2016 at 5:12 PM Raghavan, Gopal 
wrote:

> Hi,
>
> Can Kafka support multiple CA certs on the broker?
> If yes, can you please point me to an example.
>
> Producer signed with second CA (CA2) is failing. Client signed with CA1 is
> working fine.
>
> kafka-console-producer --broker-list kafka.example.com:9093 --topic
> oem2-kafka --producer.config /etc/kafka/oem_producer_ssl.properties
> hello oem2
> are you there
> [2016-07-15 23:01:04,643] ERROR Error when sending message to topic
> oem2-kafka with key: null, value: 15 bytes with error: Failed to update
> metadata after 6 ms.
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
> [2016-07-15 23:02:04,646] ERROR Error when sending message to topic
> oem2-kafka with key: null, value: 17 bytes with error: Failed to update
> metadata after 6 ms.
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
>
> Any suggestions?
>
>
> --
>
> Server shows two CA names, but only one subject/issuer name.
>
> openssl s_client -debug -connect localhost:9093 -tls1
> subject=/C=GB/ST=London/L=London/O=Confluent/OU=Broker/CN=
> kafka.example.com
> issuer=/CN=ca.example.com/L=London/ST=London/C=GB
> ---
> Acceptable client certificate CA names
> /CN=ca.example.com/L=London/ST=London/C=GB
> /CN=ca2.example.com/L=London/ST=London/C=GB
>
>
>
> Here is my configuration:
>
> kafka.server.truststore.jks:
> 2 entries
> CA1: C=GB, ST=London, L=London, CN=ca.example.com
> CA2: C=GB, ST=London, L=London, CN=ca2.example.com
>
> kafka.server.keystore.jks:
> 4 entries
> Alias name: ca2root
> Owner: C=GB, ST=London, L=London, CN=ca2.example.com
> Issuer: C=GB, ST=London, L=London, CN=ca2.example.com
> Alias name: caroot
> Owner: C=GB, ST=London, L=London, CN=ca.example.com
> Issuer: C=GB, ST=London, L=London, CN=ca.example.com
> Alias name: kafka.example.com
> Certificate chain length: 2
> Certificate[1]:
> Owner: CN=kafka.example.com, OU=Broker, O=Confluent, L=London, ST=London,
> C=GB
> Issuer: C=GB, ST=London, L=London, CN=ca.example.com
> Alias name: oemkafka.example.com
> Certificate chain length: 2
> Certificate[1]:
> Owner: CN=kafka.example.com, OU=oemBroker, O=Confluent, L=London,
> ST=London, C=GB
> Issuer: C=GB, ST=London, L=London, CN=ca2.example.com
>
>
> Client Side
> kafka.oem.truststore.jks
> 1 entry
> Alias name: ca2root
> Owner: C=GB, ST=London, L=London, CN=ca2.example.com
> Issuer: C=GB, ST=London, L=London, CN=ca2.example.com
>
> kafka.oem.keystore.jks
> Alias name: oemkafka.example.com
> Certificate chain length: 2
> Certificate[1]:
> Owner: CN=kafka.example.com, OU=OEM, O=Client2, L=Boston, ST=Boston, C=US
> Issuer: C=GB, ST=London, L=London, CN=ca2.example.com
> Alias name: ca2root
> Owner: C=GB, ST=London, L=London, CN=ca2.example.com
> Issuer: C=GB, ST=London, L=London, CN=ca2.example.com
>
>
> Thanks,
> --
> Gopal
>
>


Re: Deleting a topic on the 0.8x brokers

2016-07-14 Thread Harsha Chintalapani
One way to delete it is to delete the topic's partition directories from the
disks and to delete the topic under /brokers/topics in zookeeper.
If you just shut down those brokers, the controller might try to replicate the
topic onto other brokers, and since you don't have any leaders you might see
replica fetcher errors in the logs.
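A rough sketch, assuming the topic is named mytopic and log.dirs points at
/var/kafka-logs on each broker:

  # on each of the 9 brokers, after stopping them
  rm -rf /var/kafka-logs/mytopic-*

  # then remove the topic's metadata from zookeeper
  bin/zookeeper-shell.sh zk1:2181
  rmr /brokers/topics/mytopic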
Thanks,
Harsha

On Thu, Jul 14, 2016 at 1:36 PM Rajiv Kurian  wrote:

> We plan to stop using a particular Kafka topic running on a certain subset
> of a 0.82x cluster. This topic is served by 9 brokers (leaders + replicas)
> and these 9 brokers have no other topics on them.
>
> Once we have stopped sending and consuming traffic from this topic (and
> hence the 9 brokers) what is the best way to delete the topic and shut them
> down?
>
> Will it suffice to shut down the brokers since they are not servicing
> traffic from any other topic?
>
> Thanks,
> Rajiv
>


Re: Error in znode creation after adding SASL digest on server and client

2016-07-08 Thread Harsha Ch
Hi,
  So we specifically kept /consumers world-writable in secure
mode. This is to allow zookeeper-based consumers to create their own child
nodes under /consumers, where they can add their own sasl-based acls on top of
them. From the looks of it, in the case of a zookeeper digest-based connection
it expects all the nodes to have an ACL on them. This could be an issue with
the ZkClient that we use, or we need to navigate this case differently. Can you
file a JIRA for this?
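For reference, a quick way to inspect the effective ACLs (a sketch, using
the zookeeper shell that ships with Kafka):

  bin/zookeeper-shell.sh localhost:2181
  getAcl /consumers
  getAcl /brokers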

Thanks,
Harsha

On Thu, Jul 7, 2016 at 10:48 PM Vipul Sharma 
wrote:

> I am running zookeeper and kafka on local machine.
> This is the user permission on zookeeper
> [zk: localhost:2181(CONNECTED) 0] getAcl /
> 'digest,'broker:TqgUewyrgBbYEWTfsNStYmIfD2Q=
> : cdrwa
>
> I am using the same user in kafka to connect to this local zookeeper
>
> /usr/lib/jvm/java-8-oracle-amd64/bin/java -Xmx200m -Xms200m
> -Djava.security.auth.login.config=/opt/kafka/config/jaas.conf -server
> -Djava.awt.headless=true -XX:PermSize=48m -XX:MaxPermSize=48m -XX:+UseG1GC
> -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35
> -Xloggc:/var/log/kafka/kafka-gc.log -XX:+PrintGCDateStamps
> -XX:+PrintGCTimeStamps -Dcom.sun.management.jmxremote
> -Dcom.sun.management.jmxremote.authenticate=false
> -Dcom.sun.management.jmxremote.ssl=false
> -Dcom.sun.management.jmxremote.port=
> -Dkafka.logs.dir=/opt/kafka/bin/../logs
> -Dlog4j.configuration=file:/opt/kafka/config/log4j.properties -cp
> :/opt/kafka/bin/../libs/* kafka.Kafka /opt/kafka/config/server.properties
>
> root@default-ubuntu-1404:~# cat /opt/kafka/config/jaas.conf
> Client {
>org.apache.zookeeper.server.auth.DigestLoginModule required
>username=broker
>password=password;
> };
>
>
> The kafka start fails with these logs
>
> [2016-07-08 05:43:32,326] INFO Client
>
> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> (org.apache.zookeeper.ZooKeeper)
> [2016-07-08 05:43:32,327] INFO Client environment:java.io.tmpdir=/tmp
> (org.apache.zookeeper.ZooKeeper)
> [2016-07-08 05:43:32,327] INFO Client environment:java.compiler=
> (org.apache.zookeeper.ZooKeeper)
> [2016-07-08 05:43:32,327] INFO Client environment:os.name=Linux
> (org.apache.zookeeper.ZooKeeper)
> [2016-07-08 05:43:32,328] INFO Client environment:os.arch=amd64
> (org.apache.zookeeper.ZooKeeper)
> [2016-07-08 05:43:32,328] INFO Client
> environment:os.version=4.2.0-35-generic (org.apache.zookeeper.ZooKeeper)
> [2016-07-08 05:43:32,328] INFO Client environment:user.name=root
> (org.apache.zookeeper.ZooKeeper)
> [2016-07-08 05:43:32,329] INFO Client environment:user.home=/root
> (org.apache.zookeeper.ZooKeeper)
> [2016-07-08 05:43:32,329] INFO Client environment:user.dir=/root
> (org.apache.zookeeper.ZooKeeper)
> [2016-07-08 05:43:32,330] INFO Initiating client connection,
> connectString=default-ubuntu-1404:2181,localhost:2181 sessionTimeout=6000
> watcher=org.I0Itec.zkclient.ZkClient@bef2d72
> (org.apache.zookeeper.ZooKeeper)
> [2016-07-08 05:43:32,359] INFO Waiting for keeper state SaslAuthenticated
> (org.I0Itec.zkclient.ZkClient)
> [2016-07-08 05:43:32,362] INFO successfully logged in.
> (org.apache.zookeeper.Login)
> [2016-07-08 05:43:32,363] INFO Client will use DIGEST-MD5 as SASL
> mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
> [2016-07-08 05:43:32,507] INFO Opening socket connection to server
> localhost/0:0:0:0:0:0:0:1:2181. Will attempt to SASL-authenticate using
> Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
> [2016-07-08 05:43:32,519] INFO Socket connection established to
> localhost/0:0:0:0:0:0:0:1:2181, initiating session
> (org.apache.zookeeper.ClientCnxn)
> [2016-07-08 05:43:32,537] INFO Session establishment complete on server
> localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x155c8e99f690005, negotiated
> timeout = 6000 (org.apache.zookeeper.ClientCnxn)
> [2016-07-08 05:43:32,541] INFO zookeeper state changed (SyncConnected)
> (org.I0Itec.zkclient.ZkClient)
> [2016-07-08 05:43:32,564] INFO zookeeper state changed (SaslAuthenticated)
> (org.I0Itec.zkclient.ZkClient)
> [2016-07-08 05:43:32,614] FATAL Fatal error during KafkaServer startup.
> Prepare to shutdown (kafka.server.KafkaServer)
> org.I0Itec.zkclient.exception.ZkException:
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
> NoAuth for /consumers
> at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)
> at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:1000)
> at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:527)
> at org.I0Itec.zkclient.ZkClient.createPersistent(ZkClient.java:293)
> at kafka.utils.ZkPath$.createPersistent(ZkUtils.scala:938)

Re: ZkUtils creating parent path with different ACL

2016-07-01 Thread Harsha

If I remember correctly, it was because we wanted to allow non-secure
clients to still get into the child consumer nodes and create their zookeeper
nodes to keep track of offsets. If we added the acl at the parent path, they
wouldn't be able to write to the child nodes.

Thanks,
Harsha
On Fri, Jul 1, 2016, at 06:00 AM, Stevo Slavić wrote:
> Hello Apache Kafka community,
> 
> Is there a reason why acls are not passed through in
> https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/utils/ZkUtils.scala#L404
> call ?
> 
> If one overrides and uses custom acls for a node, parent if missing might
> get created with default ACL.
> 
> Kind regards,
> Stevo Slavic.


Re: SSL support for command line tools

2016-06-22 Thread Harsha
Radu,
 Please follow the instructions here
 http://kafka.apache.org/documentation.html#security_ssl . At
 the end of the SSL section we have an example of passing ssl configs to
 the producer and consumer command line tools.
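
For tools like kafka-consumer-groups.sh, a minimal sketch (paths and
passwords are placeholders; older versions may also need --new-consumer) is
a properties file such as

  security.protocol=SSL
  ssl.truststore.location=/var/private/ssl/client.truststore.jks
  ssl.truststore.password=changeit

passed in via --command-config:

  bin/kafka-consumer-groups.sh --bootstrap-server broker1:9093 \
    --command-config /some/folder/ssl.properties --list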

Thanks,
Harsha

On Wed, Jun 22, 2016, at 07:40 AM, Gerard Klijs wrote:
> To elaborate:
> We start the process with --command-config /some/folder/ssl.properties; the
> file we include in the image contains the ssl properties it needs,
> which is a subset of the properties (those specific to ssl) the client
> uses. In this case the certificate is accessed in a data container, which
> has access to the same certificate as the broker (so we don't need to set
> acls to use the tool).
> 
> On Wed, Jun 22, 2016 at 2:47 PM Gerard Klijs 
> wrote:
> 
> > You need to pass the correct options, similar to how you would do to a
> > client. We use the consumer-groups in a docker container, in an environment
> > witch is now only SSL (since the schema registry now supports it).
> >
> > On Wed, Jun 22, 2016 at 2:47 PM Radu Radutiu  wrote:
> >
> >> Hi,
> >>
> >> Is is possible to configure the command line tools like
> >> kafka-consumer-groups.sh , kafka-topics.sh and all other command that are
> >> not a consumer or producer to connect to a SSL only kafka cluster ?
> >>
> >> Regards,
> >> Radu
> >>
> >


Re: [DISCUSS] Java 8 as a minimum requirement

2016-06-19 Thread Harsha
Hi Ismael,
  Agreed that timing is more important. If we give enough heads-up
  to the users who are on Java 7, that's great, but shipping this
  in the 0.10.x line still won't be good, as it is perceived as a
  maintenance release even if it contains a lot of features. Making
  this part of 0.11, moving the planned 0.10.1 features to 0.11,
  and giving a rough timeline for when that would be released would
  be ideal.

Thanks,
Harsha

On Fri, Jun 17, 2016, at 11:13 AM, Ismael Juma wrote:
> Hi Harsha,
> 
> Comments below.
> 
> On Fri, Jun 17, 2016 at 7:48 PM, Harsha  wrote:
> 
> > Hi Ismael,
> > "Are you saying that you are aware of many Kafka users still
> > using Java 7
> > > who would be ready to upgrade to the next Kafka feature release (whatever
> > > that version number is) before they can upgrade to Java 8?"
> > I know there quite few users who are still on java 7
> 
> 
> This is good to know.
> 
> 
> > and regarding the
> > upgrade we can't say Yes or no.  Its upto the user discretion when they
> > choose to upgrade and ofcourse if there are any critical fixes that
> > might go into the release.  We shouldn't be restricting their upgrade
> > path just because we removed Java 7 support.
> >
> 
> My point is that both paths have their pros and cons and we need to weigh
> them up. If some users are slow to upgrade the Java version (Java 7 has
> been EOL'd for over a year), there's a good chance that they are slow to
> upgrade Kafka too. And if that is the case (and it may not be), then
> holding up improvements for the ones who actually do upgrade may be the
> wrong call. To be clear, I am still in listening mode and I haven't made
> up
> my mind on the subject.
> 
> Once we released 0.9.0 there aren't any 0.8.x releases. i.e we don't
> > have LTS type release where we continually ship critical fixes over
> > 0.8.x minor releases. So if a user notices a critical fix the only
> > option today is to upgrade to next version where that fix is shipped.
> >
> 
> We haven't done a great job at this in the past, but there is no decision
> that once a new major release is out, we don't do patch releases for the
> previous major release. In fact, we have been collecting critical fixes
> in
> the 0.9.0 branch for a potential 0.9.0.2.
> 
> I understand there is no decision made yet but given the premise was to
> > ship this in 0.10.x  , possibly 0.10.1 which I don't agree with. In
> > general against shipping this in 0.10.x version. Removing Java 7 support
> > when the release is minor in general not a good idea to users.
> >
> 
> Sorry if I didn't communicate this properly. I simply meant the next
> feature release. I used 0.10.1.0 as an example, but it could also be
> 0.11.0.0 if that turns out to be the next release. A discussion on that
> will probably take place once the scope is clear. Personally, I think the
> timing is more important than the version number, but it seems like some
> people disagree.
> 
> Ismael


Re: [DISCUSS] Java 8 as a minimum requirement

2016-06-17 Thread Harsha
Hi Ismael,
"Are you saying that you are aware of many Kafka users still
using Java 7
> who would be ready to upgrade to the next Kafka feature release (whatever
> that version number is) before they can upgrade to Java 8?"
I know there are quite a few users who are still on Java 7, and regarding
the upgrade we can't say yes or no. It's up to the users' discretion when
they choose to upgrade, and of course whether there are any critical fixes
that might go into the release. We shouldn't be restricting their upgrade
path just because we removed Java 7 support.

"The 0.10.1 versus 0.11.0.0 is something that can be discussed
separately
> as
> no decision has been made on what the next version will be (we did go
> straight from 0.9.0 to 0.10.0 whereas the 0.8.x series had multiple minor
> releases)"
Once we released 0.9.0 there weren't any more 0.8.x releases, i.e. we don't
have LTS-type releases where we continually ship critical fixes in 0.8.x
minor releases. So if a user needs a critical fix, the only option today is
to upgrade to the next version where that fix is shipped.

"no decision has been made on what the next version will be (we did go
> straight from 0.9.0 to 0.10.0 whereas the 0.8.x series had multiple minor
> releases). 

I understand no decision has been made yet, but the premise was to ship
this in 0.10.x, possibly 0.10.1, which I don't agree with. In general I am
against shipping this in a 0.10.x version: removing Java 7 support in a
minor release is not a good idea for users.
-Harsha

"Also note that Kafka bug fixes go to 0.10.0.1, not 0.10.1 and
> 0.10.0.x would still be available for users using Java 7."


On Fri, Jun 17, 2016, at 12:18 AM, Ismael Juma wrote:
> Hi Harsha,
> 
> Are you saying that you are aware of many Kafka users still using Java 7
> who would be ready to upgrade to the next Kafka feature release (whatever
> that version number is) before they can upgrade to Java 8?
> 
> The 0.10.1 versus 0.11.0.0 is something that can be discussed separately
> as
> no decision has been made on what the next version will be (we did go
> straight from 0.9.0 to 0.10.0 whereas the 0.8.x series had multiple minor
> releases). Also note that Kafka bug fixes go to 0.10.0.1, not 0.10.1 and
> 0.10.0.x would still be available for users using Java 7.
> 
> Ismael
> 
> On Fri, Jun 17, 2016 at 12:48 AM, Harsha  wrote:
> 
> > -1 on removing suport 0.10.1.0 . This is minor release and removing
> > support JDK 1.7 which lot of users still depend on not a good idea and
> > definitely they are not getting enough heads up to migrate their other
> > services to JDK1.7.
> > We can consider this for 0.11.0 release time line again depends on the
> > dates .
> >
> > Thanks,
> > Harsha
> >
> > On Thu, Jun 16, 2016, at 03:08 PM, Ismael Juma wrote:
> > > Hi Jan,
> > >
> > > That's interesting. Do you have some references you can share on this? It
> > > would be good to know which Java 8 versions have been tested and whether
> > > it
> > > is something that is being worked on.
> > >
> > > Ismael
> > >
> > > On Fri, Jun 17, 2016 at 12:02 AM,  wrote:
> > >
> > > >
> > > > Hi Ismael,
> > > >
> > > > Unfortunately Java 8 doesn't play nice with FreeBSD. We have seen a
> > lot of
> > > > JVM crashes running our 0.9 brokers on Java 8... Java 7 on the other
> > hand
> > > > is totally stable.
> > > >
> > > > Until these issues have been addressed, this would cause some serious
> > > > issues for us.
> > > >
> > > > Regards
> > > >
> > > > Jan
> >


Re: [DISCUSS] Java 8 as a minimum requirement

2016-06-16 Thread Harsha
-1 on removing Java 7 support in 0.10.1.0. This is a minor release, and
removing support for JDK 1.7, which a lot of users still depend on, is not
a good idea; they are definitely not getting enough heads-up to migrate
their other services to JDK 1.8.
We can consider this for the 0.11.0 release timeline, which again depends
on the dates.

Thanks,
Harsha

On Thu, Jun 16, 2016, at 03:08 PM, Ismael Juma wrote:
> Hi Jan,
> 
> That's interesting. Do you have some references you can share on this? It
> would be good to know which Java 8 versions have been tested and whether
> it
> is something that is being worked on.
> 
> Ismael
> 
> On Fri, Jun 17, 2016 at 12:02 AM,  wrote:
> 
> >
> > Hi Ismael,
> >
> > Unfortunately Java 8 doesn't play nice with FreeBSD. We have seen a lot of
> > JVM crashes running our 0.9 brokers on Java 8... Java 7 on the other hand
> > is totally stable.
> >
> > Until these issues have been addressed, this would cause some serious
> > issues for us.
> >
> > Regards
> >
> > Jan


Re: Questions on Kafka Security

2016-06-08 Thread Harsha
1) Can the ACLs be specified statically in a config file of sorts? Or is
bin/kafka-acl.sh or a similar kafka client API the only way to specify
the
ACLs?

kafka-acls.sh uses the SimpleAclAuthorizer, and the only way it accepts
ACLs is via command-line params.
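For example, to grant a principal read/write access to a topic (the
principal, topic, and zookeeper address are placeholders):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:Bob --operation Read --operation Write \
  --topic test-topic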


2) I notice that bin/kafka-acl.sh takes an argument to specify
zookeeper,
but doesn't seem to have a mechanism to specify any other authentication
constructs. Does that mean anyone can point to my zookeeper instance and
add/remove the ACLs?

The SimpleAclAuthorizer uses zookeeper as ACL storage. Remember that in
Kerberos secure mode we highly recommend turning on zookeeper.set.acl. This
puts "sasl:principal_name" ACLs on the zookeeper nodes, where principal_name
is the broker's principal.
So one has to log in with that principal name to make changes to any of
the zookeeper nodes.
Only users who have access to the broker's keytab can modify the
zookeeper nodes.
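To enable it, set this in each broker's server.properties (the default is
false):

zookeeper.set.acl=true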

3) I'd like to use SSL certificates for Authentication and ACLs, but
don't
want to use encryption over the wire because of latency concerns
mentioned
here: https://issues.apache.org/jira/browse/KAFKA-2561
Is that supported? Any instructions?

OpenSSL is not supported yet. Also, dropping the encryption on an SSL
channel is not possible yet.
Any reason not to use Kerberos for this, since we support a non-encrypted
channel for Kerberos?
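For reference, the client side of an unencrypted Kerberos channel is just
these properties (the service name may differ in your deployment):

security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka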


Thanks,
harsha


On Wed, Jun 8, 2016, at 02:06 PM, Samir Shah wrote:
> Hello,
> 
> Few questions on Kafka Security.
> 
> 1) Can the ACLs be specified statically in a config file of sorts? Or is
> bin/kafka-acl.sh or a similar kafka client API the only way to specify
> the
> ACLs?
> 
> 2) I notice that bin/kafka-acl.sh takes an argument to specify zookeeper,
> but doesn't seem to have a mechanism to specify any other authentication
> constructs. Does that mean anyone can point to my zookeeper instance and
> add/remove the ACLs?
> 
> 3) I'd like to use SSL certificates for Authentication and ACLs, but
> don't
> want to use encryption over the wire because of latency concerns
> mentioned
> here: https://issues.apache.org/jira/browse/KAFKA-2561
> Is that supported? Any instructions?
> 
> Thanks in advance.
> - Samir


Re: Not able to monitor consumer group lag with new consumer and kerberos

2016-06-07 Thread Harsha
Hi Pierre,
  Do you see any errors in server.log when this command
  ran? Can you also please open a thread here:
  https://community.hortonworks.com/answers/index.html .

Thanks,
Harsha

On Tue, Jun 7, 2016, at 09:22 AM, Pierre Labiausse wrote:
> Hi,
> 
> I'm not able to use kafka-consumer-groups.sh to monitor the lag of my
> consumers when my cluster is kerberized.
> 
> I'm using kafka version 0.9.0 installed on an hortonworks hdp 2.4.0
> cluster.
> 
> I've replicated my setup on two sandboxes, one without kerberos and one
> with kerberos.
> 
> On the one without kerberos, I'm able to get an answer from the
> consumer-groups client with the following command:
> ./kafka-consumer-groups.sh --list --new-consumer --bootstrap-server
> sandbox.hortonworks.com:6667
> 
> On the one with kerberos activated, i've tried the same command without
> specifying the security-protocol, which fails with a java.io.EOFException
> when connecting to the broker.
> When specifying the security-protocol (PLAINTEXTSASL or SASL_PLAINTEXT),
> I'm seeing a java.io.IOException: Connection reset by peer (full log is
> below).
> 
> In both the cases with kerberos, without activating the debug logs in the
> client, it appears to hang since it is stuck in an infinite loop of
> retrying to connect with the broker (and failing)
> 
> 
> Am I missing something in order to use this tool in a kerberized
> environment ? As of now, I'm not seeing any other way to monitor consumer
> offsets, since they are not stored in zookeeper anymore.
> 
> Thanks in advance,
> Pierre
> 
> 
> ** Full stack of execution with kerberos and security-protocol
> specified **
> ./kafka-consumer-groups.sh --security-protocol PLAINTEXTSASL
> --new-consumer
> --bootstrap-server sandbox.hortonworks.com:6667 --list
> [2016-06-07 12 :42:26,951] INFO Successfully
> logged
> in. (org.apache.kafka.common.security.kerberos.Login)
> [2016-06-07 12 :42:26,952] DEBUG It is a Kerberos
> ticket (org.apache.kafka.common.security.kerberos.Login)
> [2016-06-07 12 :42:26,971] INFO TGT refresh
> thread
> started. (org.apache.kafka.common.security.kerberos.Login)
> [2016-06-07 12 :42:26,981] DEBUG Found TGT Ticket
> (hex) =
> : 61 82 01 5F 30 82 01 5B A0 03 02 01 05 A1 09 1B a.._0..[
> 0010: 07 49 54 52 2E 4C 41 4E A2 1C 30 1A A0 03 02 01 .ITR.LAN..0.
> 0020: 02 A1 13 30 11 1B 06 6B 72 62 74 67 74
> 
> 1B 07 49 ...0...krbtgt..I
> 0030: 54 52 2E 4C 41 4E A3 82 01 29 30 82 01 25
>  A0 03 TR.LAN...)0..%..
> 0040: 02 01 12 A1 03 02 01 01 A2 82 01 17 04 82 01 13
>  
> 0050: D9 9F 09 9C F7 96 72 D2 5F 84 20 B9 D7 5D DC 7B ..r._. ..]..
> 0060: 8D 4F A0 03 DC D5 85 54 86 4D A1 A6 F1 31 5A BF .O.T.M...1Z.
> 0070: F9 40 71 43 20 97  7F 84 D6 F7 2D 93
> 16 27 06 B2 .@qC .-..'..
> 0080: 42 45 9D DE C3 4C 61 B6 8B 9B B3 E5 F8 F7 EB 3E BE...La>
> 0090: D6 53 AE 9D 5D E0 06 DA 75 E0 43 DE 28 9C DE CB .S..]...u.C.(...
> 00A0: CF 72 00 5A CC 69 20 82 5F C6 4F 1D 7F D0 1F FB .r.Z.i ._.O.
> 00B0: 92 55 B6 31 69 F1 E8 5B FD 2B 22 F8 15 E0 5D 84 .U.1i..[.+"...].
> 00C0: 5A 1A 2B 6D 0B 90 93 97 5B 06 EC 30 37 3C BB 71 Z.+m[..07<.q
> 00D0: 0B 23 24 67 F2 70 ED 1A E2 FF 6F 3A 12 0F B2 1D .#$g.po:
> 00E0: AD B9 C9 2C 24 B3 89 B3 90 22 8F 5C 1E AE 86 99 ...,$".\
> 00F0: 1A B5 B4 4A 3E 1D 6F 73 FD CB 60 D3 E3 76 71 6C ...J>.os..`..vql
> 0100: 90 B5 EA 4A D3 74 87 0E 02 9E C4 6D 0E 49 A2 47 ...J.t.m.I.G
> 0110: A4 2A FA CD D4 96 65 F3 FC E1 FB 9A 6F A1 0F 0E .*e.o...
> 0120: AF 6F 9F 9F D5 7C 5A 29 FE BF 84 18 2E CC 7F 0C .oZ)
> 0130: 07 53 D2 F9 0A 44 DA 8E 3C B6 90 C0 71 69 5C CA .S...D..<...qi\.
> 0140: 9F E1 FE 23 71 C1 B7 B1 1A 7D 84 BD 33 AA ED A6 ...#q...3...
> 0150: 9A CE 08 A9 9B 6E 29 54 B5 6B 06 9A 4D 4C 5F 3A .n)T.k..ML_:
> 0160: CF A6 FF ...
> 
> Client Principal = kafka/sandbox.hortonworks@itr.lan
> Server Principal = krbtgt/itr@itr.lan
> Session Key = EncryptionKey: keyType=18 keyBytes (hex dump)=
> : 8E 4E 45 F8 0D B4 33 0C ED C5 7C A2 2D E2 C2 19 .NE...3.-...
> 0010: 87 CC 27 68 72 B1 5B F8 C4 7D E8 BF EC F0 E9 F4 ..'hr.[.
> 
> 
> Forwardable Ticket true
> Forwarded Ticket false
> Proxiable Ticket false
> Proxy Ticket false
> Postdated Ticket false
> Renewable Ticket true
> Initial Ticket true
> Auth Time = Tue Jun 07 12:36:46 UTC 2016
> Start Time = Tue Jun 07 12:36:46 UTC 2016
> End Time = Wed Jun 08 12:36:46 UTC 2016
> Renew Till = Tue Jun 07 12:36:46 UTC 2016
> Client Addresses Null . (org.apache.kafka.common.security.kerberos.Login)

Re: Reg slack channel

2016-04-26 Thread Harsha
We use Apache JIRA and the mailing lists for all discussions.
Thanks,
Harsha

On Tue, Apr 26, 2016, at 06:20 PM, Kanagha wrote:
> Hi,
> 
> Is there a slack channel for discussing issues related to Kafka? I would
> like to get an invite. Thanks!
> 
> 
> -- 
> Kanagha


Re: Kafka Newbie question

2016-04-10 Thread Harsha
Pradeep,
How about

https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seekToBeginning%28org.apache.kafka.common.TopicPartition...%29
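A minimal sketch with the 0.9 new consumer (the bootstrap address, topic,
and group names are placeholders); with manual assign() there is no group
rebalancing to move the position on you:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ReadFromBeginning {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        TopicPartition tp = new TopicPartition("my-topic", 0); // placeholder topic
        consumer.assign(Arrays.asList(tp));  // manual assignment, no group rebalance
        consumer.seekToBeginning(tp);        // rewind to the earliest available offset
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
        }
        consumer.close();
    }
}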

-Harsha

On Sat, Apr 9, 2016, at 09:48 PM, Pradeep Bhattiprolu wrote:
> Liquan , thanks for the response.
> By setting the auto commit to false do i have to manage queue offset
> manually ?
> I am running a multiple threads with each thread being a consumer, it
> would
> be complicated to manage offsets across threads, if i dont use kafka's
> automatic consumer group abstraction.
> 
> Thanks
> Pradeep
> 
> On Sat, Apr 9, 2016 at 3:12 AM, Liquan Pei  wrote:
> 
> > Hi Pradeep,
> >
> > Can you try to set enable.auto.commit = false if you want to read to the
> > earliest offset? According to the documentation, auto.offset.reset controls
> > what to do when there is no initial offset in Kafka or if the current
> > offset does not exist any more on the server (e.g. because that data has
> > been deleted). In case that auto commit is enabled, the committed offset is
> > available in some servers.
> >
> > Thanks,
> > Liquan
> >
> > On Fri, Apr 8, 2016 at 10:44 PM, Pradeep Bhattiprolu 
> > wrote:
> >
> > > Hi All
> > >
> > > I am a newbie to kafka. I am using the new Consumer API in a thread
> > acting
> > > as a consumer for a topic in Kafka.
> > > For my testing and other purposes I have read the queue multiple times
> > > using console-consumer.sh script of kafka.
> > >
> > > To start reading the message from the beginning in my java code , I have
> > > set the value of the auto.offset.reset to "earliest".
> > >
> > > However that property does not guarantee that i begin reading the
> > messages
> > > from start, it goes by the most recent smallest offset for the consumer
> > > group.
> > >
> > > Here is my question,
> > > Is there a assured way of starting to read the messages from beginning
> > from
> > > Java based Kafka Consumer ?
> > > Once I reset one of my consumers to zero, do i have to do offset
> > management
> > > myself for other consumer threads or does kafka automatically lower the
> > > offset to the first threads read offset ?
> > >
> > > Any information / material pointing to the solution are highly
> > appreciated.
> > >
> > > Thanks
> > > Pradeep
> > >
> >
> >
> >
> > --
> > Liquan Pei
> > Software Engineer, Confluent Inc
> >


Re: Kafka Error while doing SSL authentication on 0.9.0

2016-03-15 Thread Harsha
Are you sure the broker-side SSL configs are properly set? One quick way to
test is to use openssl to connect to the broker:
openssl s_client -debug -connect localhost:9093 -tls1
Make sure you change the port to the one where the broker is serving SSL.

More details are here
http://kafka.apache.org/documentation.html#security_ssl

-Harsha

On Thu, Mar 10, 2016, at 12:41 AM, Ranjan Baral wrote:
> i getting below warning while doing produce from client side which is
> connecting to server side which contains SSL based authentication.
> 
> *[2016-03-10 07:09:13,018] WARN The configuration ssl.keystore.location =
> /etc/pki/tls/certs/keystore-hpfs.jks was supplied but isn't a known
> config.
> (org.apache.kafka.clients.producer.ProducerConfig) [2016-03-10
> 07:09:13,019] WARN The configuration ssl.keystore.password = 1qazxsw2 was
> supplied but isn't a known config.
> (org.apache.kafka.clients.producer.ProducerConfig) [2016-03-10
> 07:09:13,019] WARN The configuration ssl.key.password = 1qazxsw2 was
> supplied but isn't a known config.
> (org.apache.kafka.clients.producer.ProducerConfig) [2016-03-10
> 07:09:13,019] WARN The configuration ssl.truststore.type = JKS was
> supplied
> but isn't a known config.
> (org.apache.kafka.clients.producer.ProducerConfig) [2016-03-10
> 07:09:13,019] WARN The configuration ecurity.protocol = SSL was supplied
> but isn't a known config.
> (org.apache.kafka.clients.producer.ProducerConfig) [2016-03-10
> 07:09:13,019] WARN The configuration ssl.keystore.type = JKS was supplied
> but isn't a known config.
> (org.apache.kafka.clients.producer.ProducerConfig) [2016-03-10
> 07:09:13,019] WA*RN The configuration ssl.enabled.protocols =
> TLSv1.2,TLSv1.1,TLSv1 was supplied but isn't a known config.
> (org.apache.kafka.clients.producer.ProducerConfig)
> 
> So i am not able to produce any message getting below error
> 
> *ERROR Error when sending message to topic test with key: null, value: 2
> bytes with error: Failed to update metadata after 6 ms.
> (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)*
> 
> 
> -- 
> Thanks & Regards
> Ranjan Kumar Baral


Re: Kafka 0.9 Spout

2016-03-01 Thread Harsha
Ankit,
We have one in the making that is getting ready to be merged in.
You can follow the PR here:
https://github.com/apache/storm/pull/1131 . For any Storm-related
questions, please send an email to the lists listed at
http://storm.apache.org/getting-help.html .

Thanks,
Harsha

On Tue, Mar 1, 2016, at 12:56 PM, Ankit Jain wrote:
> Hi All,
> 
> We would need to use the SSL feature of Kafka and that would require the
> Kafka Spout upgrade from version 0.8.x to 0.9.x as the SSL is only
> supported by new consumer API.
> 
> Please share any URL for the same.
> 
> -- 
> Thanks,
> Ankit


Re: Kerberized Kafka setup issues

2016-02-23 Thread Harsha
What does your zookeeper.connect in server.properties look like? Did you
use the hostname or localhost?
-Harsha

On Tue, Feb 23, 2016, at 12:01 PM, Oleg Zhurakousky wrote:
> Still digging, but here is more info that may help
> 
> 2016-02-23 14:59:24,240] INFO zookeeper state changed (SyncConnected)
> (org.I0Itec.zkclient.ZkClient)
> Found ticket for kafka/ubuntu.oleg@oleg.com to go to
> krbtgt/oleg@oleg.com expiring on Wed Feb 24 00:59:24 EST 2016
> Entered Krb5Context.initSecContext with state=STATE_NEW
> Found ticket for kafka/ubuntu.oleg@oleg.com to go to
> krbtgt/oleg@oleg.com expiring on Wed Feb 24 00:59:24 EST 2016
> Service ticket not found in the subject
> >>> Credentials acquireServiceCreds: same realm
> Using builtin default etypes for default_tgs_enctypes
> default etypes for default_tgs_enctypes: 17 16 23.
> >>> CksumType: sun.security.krb5.internal.crypto.RsaMd5CksumType
> >>> EType: sun.security.krb5.internal.crypto.Des3CbcHmacSha1KdEType
> >>> KrbKdcReq send: kdc=ubuntu.oleg.com UDP:88, timeout=3, number of 
> >>> retries =3, #bytes=660
> >>> KDCCommunication: kdc=ubuntu.oleg.com UDP:88, timeout=3,Attempt =1, 
> >>> #bytes=660
> >>> KrbKdcReq send: #bytes read=183
> >>> KdcAccessibility: remove ubuntu.oleg.com
> >>> KDCRep: init() encoding tag is 126 req type is 13
> >>>KRBError:
>cTime is Sat Aug 01 11:32:55 EDT 1998 901985575000
>sTime is Tue Feb 23 14:59:24 EST 2016 1456257564000
>suSec is 248635
>error code is 7
>error Message is Server not found in Kerberos database
>cname is kafka/ubuntu.oleg@oleg.com
>sname is zookeeper/localh...@oleg.com
>msgType is 30
> 
> > On Feb 23, 2016, at 2:46 PM, Oleg Zhurakousky 
> >  wrote:
> > 
> > No joy. the same error
> > 
> > KafkaServer {
> >com.sun.security.auth.module.Krb5LoginModule required
> >debug=true
> >useKeyTab=true
> >storeKey=true
> >keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
> >principal="kafka/ubuntu.oleg@oleg.com";
> > };
> > Client {
> >   com.sun.security.auth.module.Krb5LoginModule required
> >   debug=true
> >   useKeyTab=true
> >   serviceName=zookeeper
> >   storeKey=true
> >   keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
> >   principal="kafka/ubuntu.oleg@oleg.com";
> > };
> >> On Feb 23, 2016, at 2:41 PM, Harsha  wrote:
> >> 
> >> My bad it should be under Client section
> >> 
> >> Client {
> >>  com.sun.security.auth.module.Krb5LoginModule required
> >>  debug=true
> >>  useKeyTab=true
> >>  storeKey=true
> >>  serviceName=zookeeper
> >>  keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
> >>  principal="kafka/ubuntu.oleg@oleg.com";
> >> };
> >> 
> >> -Harsha
> >> 
> >> On Tue, Feb 23, 2016, at 11:37 AM, Harsha wrote:
> >>> can you try adding "serviceName=zookeeper" to KafkaServer section like
> >>> KafkaServer {
> >>>   com.sun.security.auth.module.Krb5LoginModule required
> >>>   debug=true
> >>>   useKeyTab=true
> >>>   storeKey=true
> >>>   serviceName=zookeeper
> >>>   keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
> >>>   principal="kafka/ubuntu.oleg@oleg.com";
> >>> };
> >>> 
> >>> On Tue, Feb 23, 2016, at 11:24 AM, Oleg Zhurakousky wrote:
> >>>> More info
> >>>> 
> >>>> I am starting both services as myself ‘oleg’. Validated that both key tab
> >>>> files are readable. o I am assuming Zookeeper is started as ‘zookeeper’
> >>>> and Kafka as ‘kafka’
> >>>> 
> >>>> Oleg
> >>>> 
> >>>>> On Feb 23, 2016, at 2:22 PM, Oleg Zhurakousky 
> >>>>>  wrote:
> >>>>> 
> >>>>> Harsha 
> >>>>> 
> >>>>> Thanks for following up. Here is is:
> >>>>> oleg@ubuntu:~/kafka_2.10-0.9.0.1/config$ cat  kafka_server_jaas.conf
> >>>>> KafkaServer {
> >>>>>  com.sun.security.auth.module.Krb5LoginModule required
> >>>>>  debug=

Re: Kerberized Kafka setup issues

2016-02-23 Thread Harsha
My bad, it should be under the Client section:

Client {
   com.sun.security.auth.module.Krb5LoginModule required
   debug=true
   useKeyTab=true
   storeKey=true
   serviceName=zookeeper
   keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
   principal="kafka/ubuntu.oleg@oleg.com";
};

-Harsha

On Tue, Feb 23, 2016, at 11:37 AM, Harsha wrote:
> can you try adding "serviceName=zookeeper" to KafkaServer section like
> KafkaServer {
> com.sun.security.auth.module.Krb5LoginModule required
> debug=true
> useKeyTab=true
> storeKey=true
> serviceName=zookeeper
> keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
> principal="kafka/ubuntu.oleg@oleg.com";
> };
> 
> On Tue, Feb 23, 2016, at 11:24 AM, Oleg Zhurakousky wrote:
> > More info
> > 
> > I am starting both services as myself ‘oleg’. Validated that both key tab
> > files are readable. o I am assuming Zookeeper is started as ‘zookeeper’
> > and Kafka as ‘kafka’
> > 
> > Oleg
> > 
> > > On Feb 23, 2016, at 2:22 PM, Oleg Zhurakousky 
> > >  wrote:
> > > 
> > > Harsha 
> > > 
> > > Thanks for following up. Here is is:
> > > oleg@ubuntu:~/kafka_2.10-0.9.0.1/config$ cat  kafka_server_jaas.conf
> > > KafkaServer {
> > >com.sun.security.auth.module.Krb5LoginModule required
> > >debug=true
> > >useKeyTab=true
> > >storeKey=true
> > >keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
> > >principal="kafka/ubuntu.oleg@oleg.com";
> > > };
> > > Client {
> > >   com.sun.security.auth.module.Krb5LoginModule required
> > >   debug=true
> > >   useKeyTab=true
> > >   storeKey=true
> > >   keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
> > >   principal="kafka/ubuntu.oleg@oleg.com";
> > > };
> > > 
> > > oleg@ubuntu:~/kafka_2.10-0.9.0.1/config$ cat  zookeeper_jaas.conf
> > > Server {
> > >com.sun.security.auth.module.Krb5LoginModule required
> > >debug=true
> > >useKeyTab=true
> > >keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/zookeeper.keytab"
> > >storeKey=true
> > >useTicketCache=false
> > >principal="zookeeper/ubuntu.oleg@oleg.com";
> > > };
> > > 
> > > Cheers
> > > Oleg
> > > 
> > >> On Feb 23, 2016, at 2:17 PM, Harsha  wrote:
> > >> 
> > >> Oleg,
> > >>   Can you post your jaas configs. Its important that serviceName
> > >>   must match the principal name with which zookeeper is running.
> > >>   Whats the principal name zookeeper service is running with.
> > >> -Harsha
> > >> 
> > >> On Tue, Feb 23, 2016, at 11:01 AM, Oleg Zhurakousky wrote:
> > >>> Hey guys, first post here so bear with me
> > >>> 
> > >>> Trying to setup Kerberized Kafka 0.9.0.. Followed the instructions here
> > >>> http://kafka.apache.org/documentation.html#security_sasl and i seem to 
> > >>> be
> > >>> very close, but not quite there yet.
> > >>> 
> > >>> ZOOKEEPER
> > >>> Starting Zookeeper seems to be OK (below is the relevant part of the 
> > >>> log)
> > >>> . . .
> > >>> [2016-02-23 13:22:40,336] INFO maxSessionTimeout set to -1
> > >>> (org.apache.zookeeper.server.ZooKeeperServer)
> > >>> Debug is  true storeKey true useTicketCache false useKeyTab true
> > >>> doNotPrompt false ticketCache is null isInitiator true KeyTab is
> > >>> /home/oleg/kafka_2.10-0.9.0.1/config/security/zookeeper.keytab
> > >>> refreshKrb5Config is false principal is
> > >>> zookeeper/ubuntu.oleg@oleg.com<mailto:zookeeper/ubuntu.oleg@oleg.com>
> > >>> tryFirstPass is false useFirstPass is false storePass is false clearPass
> > >>> is false
> > >>> principal is
> > >>> zookeeper/ubuntu.oleg@oleg.com<mailto:zookeeper/ubuntu.oleg@oleg.com>
> > >>> Will use keytab
> > >>> Commit Succeeded
> > >>> 
> > >>> [2016-02-23 13:22:40,541] INFO successfully logged in.
> > >

Re: Kerberized Kafka setup issues

2016-02-23 Thread Harsha
Can you try adding "serviceName=zookeeper" to the KafkaServer section, like:
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
debug=true
useKeyTab=true
storeKey=true
serviceName=zookeeper
keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
principal="kafka/ubuntu.oleg@oleg.com";
};

On Tue, Feb 23, 2016, at 11:24 AM, Oleg Zhurakousky wrote:
> More info
> 
> I am starting both services as myself ‘oleg’. Validated that both key tab
> files are readable. o I am assuming Zookeeper is started as ‘zookeeper’
> and Kafka as ‘kafka’
> 
> Oleg
> 
> > On Feb 23, 2016, at 2:22 PM, Oleg Zhurakousky 
> >  wrote:
> > 
> > Harsha 
> > 
> > Thanks for following up. Here is is:
> > oleg@ubuntu:~/kafka_2.10-0.9.0.1/config$ cat  kafka_server_jaas.conf
> > KafkaServer {
> >com.sun.security.auth.module.Krb5LoginModule required
> >debug=true
> >useKeyTab=true
> >storeKey=true
> >keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
> >principal="kafka/ubuntu.oleg@oleg.com";
> > };
> > Client {
> >   com.sun.security.auth.module.Krb5LoginModule required
> >   debug=true
> >   useKeyTab=true
> >   storeKey=true
> >   keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/kafka.keytab"
> >   principal="kafka/ubuntu.oleg@oleg.com";
> > };
> > 
> > oleg@ubuntu:~/kafka_2.10-0.9.0.1/config$ cat  zookeeper_jaas.conf
> > Server {
> >com.sun.security.auth.module.Krb5LoginModule required
> >debug=true
> >useKeyTab=true
> >keyTab="/home/oleg/kafka_2.10-0.9.0.1/config/security/zookeeper.keytab"
> >storeKey=true
> >useTicketCache=false
> >principal="zookeeper/ubuntu.oleg@oleg.com";
> > };
> > 
> > Cheers
> > Oleg
> > 
> >> On Feb 23, 2016, at 2:17 PM, Harsha  wrote:
> >> 
> >> Oleg,
> >>   Can you post your jaas configs. Its important that serviceName
> >>   must match the principal name with which zookeeper is running.
> >>   Whats the principal name zookeeper service is running with.
> >> -Harsha
> >> 
> >> On Tue, Feb 23, 2016, at 11:01 AM, Oleg Zhurakousky wrote:
> >>> Hey guys, first post here so bear with me
> >>> 
> >>> Trying to setup Kerberized Kafka 0.9.0.. Followed the instructions here
> >>> http://kafka.apache.org/documentation.html#security_sasl and i seem to be
> >>> very close, but not quite there yet.
> >>> 
> >>> ZOOKEEPER
> >>> Starting Zookeeper seems to be OK (below is the relevant part of the log)
> >>> . . .
> >>> [2016-02-23 13:22:40,336] INFO maxSessionTimeout set to -1
> >>> (org.apache.zookeeper.server.ZooKeeperServer)
> >>> Debug is  true storeKey true useTicketCache false useKeyTab true
> >>> doNotPrompt false ticketCache is null isInitiator true KeyTab is
> >>> /home/oleg/kafka_2.10-0.9.0.1/config/security/zookeeper.keytab
> >>> refreshKrb5Config is false principal is
> >>> zookeeper/ubuntu.oleg@oleg.com<mailto:zookeeper/ubuntu.oleg@oleg.com>
> >>> tryFirstPass is false useFirstPass is false storePass is false clearPass
> >>> is false
> >>> principal is
> >>> zookeeper/ubuntu.oleg@oleg.com<mailto:zookeeper/ubuntu.oleg@oleg.com>
> >>> Will use keytab
> >>> Commit Succeeded
> >>> 
> >>> [2016-02-23 13:22:40,541] INFO successfully logged in.
> >>> (org.apache.zookeeper.Login)
> >>> [2016-02-23 13:22:40,544] INFO binding to port 0.0.0.0/0.0.0.0:2181
> >>> (org.apache.zookeeper.server.NIOServerCnxnFactory)
> >>> [2016-02-23 13:22:40,544] INFO TGT refresh thread started.
> >>> (org.apache.zookeeper.Login)
> >>> [2016-02-23 13:22:40,554] INFO TGT valid starting at:Tue Feb 23
> >>> 13:22:40 EST 2016 (org.apache.zookeeper.Login)
> >>> [2016-02-23 13:22:40,554] INFO TGT expires:  Tue Feb 23
> >>> 23:22:40 EST 2016 (org.apache.zookeeper.Login)
> >>> [2016-02-23 13:22:40,554] INFO TGT refresh sleeping until: Tue Feb 23
> >>> 21:47:35 EST 2016 (org.apache.zookeeper.Login)
> >>> [2016-02-23 13:23:09,012] INFO Accepted socket connection from
> >>> /127.0.0.1:51876 (org.apache.zookeeper.server.NIOServerC

Re: Kerberized Kafka setup issues

2016-02-23 Thread Harsha
Oleg,
Can you post your jaas configs? It's important that serviceName
matches the principal name with which zookeeper is running.
What's the principal name the zookeeper service is running with?
-Harsha

On Tue, Feb 23, 2016, at 11:01 AM, Oleg Zhurakousky wrote:
> Hey guys, first post here so bear with me
> 
> Trying to setup Kerberized Kafka 0.9.0.. Followed the instructions here
> http://kafka.apache.org/documentation.html#security_sasl and i seem to be
> very close, but not quite there yet.
> 
> ZOOKEEPER
> Starting Zookeeper seems to be OK (below is the relevant part of the log)
> . . .
> [2016-02-23 13:22:40,336] INFO maxSessionTimeout set to -1
> (org.apache.zookeeper.server.ZooKeeperServer)
> Debug is  true storeKey true useTicketCache false useKeyTab true
> doNotPrompt false ticketCache is null isInitiator true KeyTab is
> /home/oleg/kafka_2.10-0.9.0.1/config/security/zookeeper.keytab
> refreshKrb5Config is false principal is
> zookeeper/ubuntu.oleg@oleg.com<mailto:zookeeper/ubuntu.oleg@oleg.com>
> tryFirstPass is false useFirstPass is false storePass is false clearPass
> is false
> principal is
> zookeeper/ubuntu.oleg@oleg.com<mailto:zookeeper/ubuntu.oleg@oleg.com>
> Will use keytab
> Commit Succeeded
> 
> [2016-02-23 13:22:40,541] INFO successfully logged in.
> (org.apache.zookeeper.Login)
> [2016-02-23 13:22:40,544] INFO binding to port 0.0.0.0/0.0.0.0:2181
> (org.apache.zookeeper.server.NIOServerCnxnFactory)
> [2016-02-23 13:22:40,544] INFO TGT refresh thread started.
> (org.apache.zookeeper.Login)
> [2016-02-23 13:22:40,554] INFO TGT valid starting at:Tue Feb 23
> 13:22:40 EST 2016 (org.apache.zookeeper.Login)
> [2016-02-23 13:22:40,554] INFO TGT expires:  Tue Feb 23
> 23:22:40 EST 2016 (org.apache.zookeeper.Login)
> [2016-02-23 13:22:40,554] INFO TGT refresh sleeping until: Tue Feb 23
> 21:47:35 EST 2016 (org.apache.zookeeper.Login)
> [2016-02-23 13:23:09,012] INFO Accepted socket connection from
> /127.0.0.1:51876 (org.apache.zookeeper.server.NIOServerCnxnFactory)
> [2016-02-23 13:23:09,025] INFO Client attempting to establish new session
> at /127.0.0.1:51876 (org.apache.zookeeper.server.ZooKeeperServer)
> [2016-02-23 13:23:09,026] INFO Creating new log file: log.57
> (org.apache.zookeeper.server.persistence.FileTxnLog)
> . . .
> 
> 
> KAFKA
> Starting Kafka server is not going well yet although I see that
> interaction with Kerberos is successful (see relevant log below. the
> error is at the bottom)
> . . .
> [2016-02-23 13:26:11,508] INFO starting (kafka.server.KafkaServer)
> [2016-02-23 13:26:11,511] INFO Connecting to zookeeper on localhost:2181
> (kafka.server.KafkaServer)
> [2016-02-23 13:26:11,519] INFO JAAS File name:
> /home/oleg/kafka_2.10-0.9.0.1/config/kafka_server_jaas.conf
> (org.I0Itec.zkclient.ZkClient)
> [2016-02-23 13:26:11,520] INFO Starting ZkClient event thread.
> (org.I0Itec.zkclient.ZkEventThread)
> [2016-02-23 13:26:11,527] INFO Client
> environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09
> GMT (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,527] INFO Client environment:host.name=172.16.137.20
> (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,527] INFO Client environment:java.version=1.8.0_72
> (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,527] INFO Client environment:java.vendor=Oracle
> Corporation (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,527] INFO Client
> environment:java.home=/usr/lib/jvm/java-8-oracle/jre
> (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,527] INFO Client
> environment:java.class.path=:/home/oleg/kafka_2.10-0.9.0.1/bin/../libs/jetty-http-9.2.12.v20150709.jar:/home/oleg/ka.
> . . . . .
> [2016-02-23 13:26:11,531] INFO Client
> environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
> (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,531] INFO Client environment:java.io.tmpdir=/tmp
> (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,531] INFO Client environment:java.compiler=
> (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,531] INFO Client environment:os.name=Linux
> (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,531] INFO Client environment:os.arch=amd64
> (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,531] INFO Client
> environment:os.version=4.2.0-27-generic (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,531] INFO Client environment:user.name=oleg
> (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,531] INFO Client environment:user.home=/home/oleg
> (org.apache.zookeeper.ZooKeeper)
> [2016-02-23 13:26:11,531] INFO Client
> environment:user.dir=/ho

Re: Java Client connection errors with Kafka 0.9.0.0 when SSL is enabled

2016-02-18 Thread Harsha
Did you try what Adam suggested in the earlier email? Also, as a quick
check, you can try removing the keystore and key.password configs from the
client side.
-Harsha

On Thu, Feb 18, 2016, at 02:49 PM, Srikrishna Alla wrote:
> Hi,
> 
> We are getting the below error when trying to use a Java new producer
> client. Please let us know the reason for this error -
> 
> Error message:
> [2016-02-18 15:41:06,182] DEBUG Accepted connection from /10.**.***.** on
> /10.**.***.**:9093. sendBufferSize [actual|requested]: [102400|102400]
> recvBufferSize [actual|requested]: [102400|102400]
> (kafka.network.Acceptor)
> [2016-02-18 15:41:06,183] DEBUG Processor 1 listening to new connection
> from /10.**.**.**:46419 (kafka.network.Processor)
> [2016-02-18 15:41:06,283] DEBUG SSLEngine.closeInBound() raised an
> exception. (org.apache.kafka.common.network.SslTransportLayer)
> javax.net.ssl.SSLException: Inbound closed before receiving peer's
> close_notify: possible truncation attack?
>   at sun.security.ssl.Alerts.getSSLException(Alerts.java:208)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1639)
>   at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1607)
>   at sun.security.ssl.SSLEngineImpl.closeInbound(SSLEngineImpl.java:1537)
>   at
>   
> org.apache.kafka.common.network.SslTransportLayer.handshakeFailure(SslTransportLayer.java:723)
>   at
>   
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:313)
>   at
>   org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
>   at kafka.network.Processor.run(SocketServer.scala:413)
>   at java.lang.Thread.run(Thread.java:722)
> [2016-02-18 15:41:06,283] DEBUG Connection with
> l.com/10.**.**.** disconnected
> (org.apache.kafka.common.network.Selector)
> javax.net.ssl.SSLException: Unrecognized SSL message, plaintext
> connection?
>   at
>   
> sun.security.ssl.EngineInputRecord.bytesInCompletePacket(EngineInputRecord.java:171)
>   at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:845)
>   at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:758)
>   at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
>   at
>   
> org.apache.kafka.common.network.SslTransportLayer.handshakeUnwrap(SslTransportLayer.java:408)
>   at
>   
> org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:269)
>   at
>   org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:68)
>   at org.apache.kafka.common.network.Selector.poll(Selector.java:281)
>   at kafka.network.Processor.run(SocketServer.scala:413)
>   at java.lang.Thread.run(Thread.java:722)
> 
> Producer Java client code:
> System.setProperty("javax.net.debug","ssl:handshake:verbose");
>Properties props = new Properties();
>props.put("bootstrap.servers", ".com:9093");
>props.put("acks", "all");
>props.put("retries", "0");
>props.put("batch.size", "16384");
>props.put("linger.ms", "1");
>props.put("buffer.memory", "33554432");
>props.put("key.serializer",
>"org.apache.kafka.common.serialization.StringSerializer");
>props.put("value.serializer",
>"org.apache.kafka.common.serialization.StringSerializer");
>props.put("security.protocol", "SSL");
>props.put("ssl.protocal", "SSL");
>props.put("ssl.truststore.location",
>"/idn/home/salla8/ssl/kafka_client_truststore.jks");
>props.put("ssl.truststore.password", "p@ssw0rd");
>props.put("ssl.keystore.location",
>"/idn/home/salla8/ssl/kafka_client_keystore.jks");
>props.put("ssl.keystore.password", "p@ssw0rd");
>props.put("ssl.key.password", "p@ssw0rd");
>Producer producer = new
>KafkaProducer(props);
> 
> 
> Configuration -server.properties:
> broker.id=0
> listeners=SSL://:9093
> num.network.threads=3
> num.io.threads=8
> socket.send.buffer.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> security.inter.broker.protocol=SSL
> ssl.keystore.location=/opt/kafka_2.11-0.9.0.0/config/ss

Re: Re: NoAuth for /controller

2016-02-16 Thread Harsha
Kafka doesn't have security support in 0.8.2.2, so make sure the zookeeper
root that you're using doesn't have any ACLs set.
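A quick way to check (assuming the default paths and no chroot) is with the
zookeeper CLI:

bin/zkCli.sh -server localhost:2181
getAcl /
getAcl /controller

If those paths carry ACLs that the 0.8.2.2 broker's session can't satisfy,
you'll see exactly this NoAuth error.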

-Harsha

On Sun, Feb 14, 2016, at 06:51 PM, 赵景波 wrote:
> Can you help me?
> 
> ___
> JingBo Zhao<http://weibo.com/zbdba> – 赵景波
> jing...@staff.sina.com.cn<mailto:jing...@staff.sina.com.cn>
> 
> R&D Center - Platform Architecture Department - Database Platform - DBA
> (Mobile) 18610917066
> Add.: 17F, Ideal International Plaza, 58 North Fourth Ring West Road, Haidian District, Beijing
> ___
> 
> From: 赵景波 (JingBo Zhao)
> Sent: 2 February 2016, 17:04
> To: 'users@kafka.apache.org' 
> Subject: NoAuth for /controller
> 
> Hi
> This is Jingbo. I am a database engineer working at Sina (China), and I
> hit some errors when starting the kafka cluster; can you help me?
> The error is:
> 2016-02-02 16:52:20,916] ERROR Error while electing or becoming leader on
> broker 10798841 (kafka.server.ZookeeperLeaderElector)
> org.I0Itec.zkclient.exception.ZkException:
> org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode =
> NoAuth for /controller
>  at
>  org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)
>  at
>  org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
>  at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:304)
>  at
>  org.I0Itec.zkclient.ZkClient.createEphemeral(ZkClient.java:328)
>  at kafka.utils.ZkUtils$.createEphemeralPath(ZkUtils.scala:222)
>  at
>  
> kafka.utils.ZkUtils$.createEphemeralPathExpectConflict(ZkUtils.scala:237)
>  at
>  
> kafka.utils.ZkUtils$.createEphemeralPathExpectConflictHandleZKBug(ZkUtils.scala:275)
>  at
>  
> kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:76)
>  at
>  
> kafka.server.ZookeeperLeaderElector$$anonfun$startup$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:49)
>  at
>  
> kafka.server.ZookeeperLeaderElector$$anonfun$startup$1.apply(ZookeeperLeaderElector.scala:47)
>  at
>  
> kafka.server.ZookeeperLeaderElector$$anonfun$startup$1.apply(ZookeeperLeaderElector.scala:47)
>  at kafka.utils.Utils$.inLock(Utils.scala:535)
>  at
>  
> kafka.server.ZookeeperLeaderElector.startup(ZookeeperLeaderElector.scala:47)
>  at
>  
> kafka.controller.KafkaController$$anonfun$startup$1.apply$mcV$sp(KafkaController.scala:650)
>  at
>  
> kafka.controller.KafkaController$$anonfun$startup$1.apply(KafkaController.scala:646)
>  at
>  
> kafka.controller.KafkaController$$anonfun$startup$1.apply(KafkaController.scala:646)
>  at kafka.utils.Utils$.inLock(Utils.scala:535)
>  at
>  kafka.controller.KafkaController.startup(KafkaController.scala:646)
>  at kafka.server.KafkaServer.startup(KafkaServer.scala:117)
>  at
>  
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:29)
>  at kafka.Kafka$.main(Kafka.scala:46)
>  at kafka.Kafka.main(Kafka.scala)
> Caused by: org.apache.zookeeper.KeeperException$NoAuthException:
> KeeperErrorCode = NoAuth for /controller
>  at
>  org.apache.zookeeper.KeeperException.create(KeeperException.java:113)
>  at
>  org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>  at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
>  at org.I0Itec.zkclient.ZkConnection.create(ZkConnection.java:87)
>  at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:308)
>  at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:304)
>  at
>  org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
>  ... 20 more
> 
> 
> BTW: I had added ACLs on the zookeeper that kafka relies on. Our kafka
> version is 0.8.2.2
> 
> 
> ___
> JingBo Zhao<http://weibo.com/zbdba> – 赵景波
> jing...@staff.sina.com.cn<mailto:jing...@staff.sina.com.cn>
> 
> R&D Center - Platform Architecture Department - Database Platform - DBA
> (Mobile) 18610917066
> Add.: 17F, Ideal International Plaza, 58 North Fourth Ring West Road, Haidian District, Beijing
> ___
> 


Re: behavior of two live brokers with the same id?

2016-02-04 Thread Harsha
You won't be able to start and register two brokers with the same id in
zookeeper.
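Broker registration is an ephemeral znode under /brokers/ids keyed by the
broker id, so the second broker's registration fails while the first
session is alive. You can inspect the live registrations with the
zookeeper CLI:

bin/zkCli.sh -server localhost:2181
ls /brokers/ids
get /brokers/ids/0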

On Thu, Feb 4, 2016, at 06:26 AM, Carlo Cabanilla wrote:
> Hello,
> 
> How does a kafka cluster behave if there are two live brokers with the
> same
> broker id, particularly if that broker is a leader? Is it deterministic
> that the older one wins? Magical master-master replication between the
> two?
> 
> .Carlo
> Datadog


Re: Consumer - Failed to find leader

2015-12-30 Thread Harsha
Can you add your jaas file details? Your jaas file might need
useTicketCache=true and storeKey=true as well. An example of a
KafkaServer jaas file:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName="kafka"
    keyTab="/vagrant/keytabs/kafka1.keytab"
    principal="kafka/kafka1.witzend@witzend.com";
};

and a KafkaClient section:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useTicketCache=true
    serviceName="kafka";
};

On Wed, Dec 30, 2015, at 03:10 AM, prabhu v wrote:
> Hi Harsha,
>
> I have used the fully qualified domain name. Just for security
> concerns, before sending this mail I replaced our FQDN hostname
> with localhost.
>
> yes, i have tried KINIT and I am able to view the tickets using klist
> command as well.
>
> Thanks, Prabhu
>
> On Wed, Dec 30, 2015 at 11:27 AM, Harsha  wrote:
>> Prabhu,
>> When using SASL/kerberos always make sure you give FQDN of
>> the hostname. In your command you are using --zookeeper
>> localhost:2181 and make sure you change that hostname.
>>
>> "javax.security.auth.login.LoginException: No key to store Will continue
>> connection to Zookeeper server without SASL authentication, if Zookeeper"
>>
>> did you try kinit with that keytab at the command line.
>>
>> -Harsha
>> On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
>>> Thanks for the input Ismael.
>>>
>>> I will try and let you know.
>>>
>>> Also need your valuable inputs for the below issue :)
>>>
>>> I am not able to run kafka-topics.sh (0.9.0.0 version):
>>>
>>> [root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
>>> [2015-12-28 12:41:32,589] WARN SASL configuration failed:
>>> javax.security.auth.login.LoginException: No key to store Will continue
>>> connection to Zookeeper server without SASL authentication, if Zookeeper
>>> server allows it. (org.apache.zookeeper.ClientCnxn)
>>> ^Z
>>>
>>> I am sure the key is present in its keytab file (I have cross verified
>>> using the kinit command as well).
>>>
>>> Am I missing anything while calling kafka-topics.sh?
>>>
>>> On Mon, Dec 28, 2015 at 3:53 PM, Ismael Juma  wrote:
>>>> Hi Prabhu,
>>>>
>>>> kafka-console-consumer.sh uses the old consumer by default, but only the
>>>> new consumer supports security. Use --new-consumer to change this.
>>>>
>>>> Hope this helps.
>>>>
>>>> Ismael
>>>> On 28 Dec 2015 05:48, "prabhu v"  wrote:
>>>>> Hi Experts,
>>>>>
>>>>> I am getting the below error when running the consumer
>>>>> "kafka-console-consumer.sh".
>>>>>
>>>>> I am using the new version 0.9.0.1.
>>>>> Topic name: test
>>>>>
>>>>> [2015-12-28 06:13:34,409] WARN
>>>>> [console-consumer-61657_localhost-1451283204993-5512891d-leader-finder-thread],
>>>>> Failed to find leader for Set([test,0])
>>>>> (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
>>>>> kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not
>>>>> found for broker 0
>>>>>     at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
>>>>>
>>>>> Please find the current configuration below.
>>>>>
>>>>> Configuration:
>>>>>
>>>>> [root@localhost config]# grep -v "^#" consumer.properties
>>>>> zookeeper.connect=localhost:2181
>>>>> zookeeper.connection.timeout.ms=6
>>>>> group.id=test-consumer-group
>>>>> security.protocol=SASL_PLAINTEXT
>>>>> sasl.kerberos.service.name="kafka"
>>>>>
>>>>> [root@localhost config]# grep -v "^#" producer.properties
>>>>> metadata.broker.list=localhost:9094,localhost:9095
>>>>> producer.type=sync
>>>>> compression.codec=none
>>>>> serializer.class=kafka.serializer.DefaultEncoder

Re: Consumer - Failed to find leader

2015-12-29 Thread Harsha
Prabhu,
   When using SASL/Kerberos, always make sure you give the FQDN of
   the hostname. In your command you are using --zookeeper
   localhost:2181; make sure you change that to the hostname.

"javax.security.auth.login.LoginException: No key to store Will continue
> connection to Zookeeper server without SASL authentication, if Zookeeper"

Did you try kinit with that keytab at the command line?
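For example (the keytab path and principal are placeholders):

kinit -kt /etc/security/keytabs/kafka.keytab kafka/broker1.example.com@EXAMPLE.COM
klist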

-Harsha
On Mon, Dec 28, 2015, at 04:07 AM, prabhu v wrote:
> Thanks for the input Ismael.
> 
> I will try and let you know.
> 
> Also need your valuable inputs for the below issue:)
> 
> i am not able to run kafka-topics.sh(0.9.0.0 version)
> 
> [root@localhost bin]# ./kafka-topics.sh --list --zookeeper localhost:2181
> [2015-12-28 12:41:32,589] WARN SASL configuration failed:
> javax.security.auth.login.LoginException: No key to store Will continue
> connection to Zookeeper server without SASL authentication, if Zookeeper
> server allows it. (org.apache.zookeeper.ClientCnxn)
> ^Z
> 
> I am sure the key is present in its keytab file ( I have cross verified
> using kinit command as well).
> 
> Am i missing anything while calling the kafka-topics.sh??
> 
> 
> 
> On Mon, Dec 28, 2015 at 3:53 PM, Ismael Juma  wrote:
> 
> > Hi Prabhu,
> >
> > kafka-console-consumer.sh uses the old consumer by default, but only the
> > new consumer supports security. Use --new-consumer to change this.
> >
> > Hope this helps.
> >
> > Ismael
> > On 28 Dec 2015 05:48, "prabhu v"  wrote:
> >
> > > Hi Experts,
> > >
> > > I am getting the below error when running the consumer
> > > "kafka-console-consumer.sh" .
> > >
> > > I am using the new version 0.9.0.1.
> > > Topic name: test
> > >
> > >
> > > [2015-12-28 06:13:34,409] WARN
> > >
> > >
> > [console-consumer-61657_localhost-1451283204993-5512891d-leader-finder-thread],
> > > Failed to find leader for Set([test,0])
> > > (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> > > kafka.common.BrokerEndPointNotAvailableException: End point PLAINTEXT not
> > > found for broker 0
> > > at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
> > >
> > >
> > > Please find the current configuration below.
> > >
> > > Configuration:
> > >
> > >
> > > [root@localhost config]# grep -v "^#" consumer.properties
> > > zookeeper.connect=localhost:2181
> > > zookeeper.connection.timeout.ms=6
> > > group.id=test-consumer-group
> > > security.protocol=SASL_PLAINTEXT
> > > sasl.kerberos.service.name="kafka"
> > >
> > >
> > > [root@localhost config]# grep -v "^#" producer.properties
> > > metadata.broker.list=localhost:9094,localhost:9095
> > > producer.type=sync
> > > compression.codec=none
> > > serializer.class=kafka.serializer.DefaultEncoder
> > > security.protocol=SASL_PLAINTEXT
> > > sasl.kerberos.service.name="kafka"
> > >
> > > [root@localhost config]# grep -v "^#" server1.properties
> > >
> > > broker.id=0
> > > listeners=SASL_PLAINTEXT://localhost:9094
> > > delete.topic.enable=true
> > > num.network.threads=3
> > > num.io.threads=8
> > > socket.send.buffer.bytes=102400
> > > socket.receive.buffer.bytes=102400
> > > socket.request.max.bytes=104857600
> > > log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs
> > > num.partitions=1
> > > num.recovery.threads.per.data.dir=1
> > > log.retention.hours=168
> > > log.segment.bytes=1073741824
> > > log.retention.check.interval.ms=30
> > > log.cleaner.enable=false
> > > zookeeper.connect=localhost:2181
> > > zookeeper.connection.timeout.ms=6
> > > inter.broker.protocol.version=0.9.0.0
> > > security.inter.broker.protocol=SASL_PLAINTEXT
> > > allow.everyone.if.no.acl.found=true
> > >
> > >
> > > [root@localhost config]# grep -v "^#" server4.properties
> > > broker.id=1
> > > listeners=SASL_PLAINTEXT://localhost:9095
> > > delete.topic.enable=true
> > > num.network.threads=3
> > > num.io.threads=8
> > > socket.send.buffer.bytes=102400
> > > socket.receive.buffer.bytes=102400
> > > socket.request.max.bytes=104857600
> > > log.dirs=/data/kafka_2.11-0.9.0.0/kafka-logs-1
> > > num.partitions=1
> > > num.recovery.threads.per.data.dir=1
> > > log.retention.hours=168
> > > log.segment.bytes=1073741824
> > > log.retention.check.interval.ms=30
> > > log.cleaner.enable=false
> > > zookeeper.connect=localhost:2181
> > > zookeeper.connection.timeout.ms=6
> > > inter.broker.protocol.version=0.9.0.0
> > > security.inter.broker.protocol=SASL_PLAINTEXT
> > > zookeeper.sasl.client=zkclient
> > >
> > > [root@localhost config]# grep -v "^#" zookeeper.properties
> > > dataDir=/data/kafka_2.11-0.9.0.0/zookeeper
> > > clientPort=2181
> > > maxClientCnxns=0
> > > requireClientAuthScheme=sasl
> > >
> > authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
> > > jaasLoginRenew=360
> > >
> > >
> > > Need your valuable inputs on this issue.
> > > --
> > > Regards,
> > >
> > > Prabhu.V
> > >
> >
> 
> 
> 
> -- 
> Regards,
> 
> Prabhu.V


Re: The JIRA Awakens [KAFKA-1841]

2015-12-24 Thread Harsha
Hi Dana,
I worked on that release. Yes, HDP 2.3.0 has a lot of
additional patches on top of 0.8.2.1, mainly the Kerberos
patches.
We did miss KAFKA-1841, which was fixed in a later maintenance release. We
have already notified everyone to upgrade to HDP 2.3.4, which is Apache
Kafka 0.9.0 plus additional patches. We are making sure on our side not to
miss any compatibility patches like these that matter to 3rd-party
developers, and to have tests in place to ensure that.

Thanks,
Harsha 
On Wed, Dec 23, 2015, at 04:11 PM, Dana Powers wrote:
> Hi all,
> 
> I've been helping debug an issue filed against kafka-python related to
> compatibility w/ Hortonworks 2.3.0.0 kafka release. As I understand it,
> HDP
> is currently based on snapshots of apache/kafka trunk, merged with some
> custom patches from HDP itself.
> 
> In this case, HDP's 2.3.0.0 kafka release missed a compatibility patch
> that
> I believe is critical for third-party library support. Unfortunately the
> patch -- KAFKA-1841 -- was initially only applied to the 0.8.2 branch (it
> was merged to trunk several months later in KAFKA-2068). Because it
> wasn't
> on trunk, it didn't get included in the HDP kafka releases.
> 
> If you recall, KAFKA-1841 was needed to maintain backwards and forwards
> compatibility wrt the change from zookeeper to kafka-backed offset
> storage.
> Not having this patch is fine if you only ever use the clients /
> libraries
> distributed in the that release -- and I imagine that is probably most
> folks that are using it. But if you remember the thread on this issue
> back
> in the 0.8.2-beta review, the API incompatibility made third-party
> clients
> hard to develop and maintain if the goal is to support multiple broker
> versions w/ the same client code [this is the goal of kafka-python].
> Anyways, I'm really glad that the fix made it into the apache release,
> but
> now I'm sad that it didn't make it into HDP's release.
> 
> Anyways, I think there's a couple takeaways here:
> 
> (1) I'd recommend anyone using HDP who intends to use third-party kafka
> consumers should upgrade to 2.3.4.0 or later. That version appears to
> include the compatibility patch (KAFKA-2068). Of course if anyone is on
> list from HDP, they may be able to provide better help on this.
> 
> (2) I think more care should probably be taken to help vendors or anyone
> tracking changes on trunk wrt released versions. Is there a list of all
> KAFKA- patches that are released but not merged into trunk ?
> KAFKA-1841
> is obviously near and dear to my heart, but I wonder if there are other
> patches like it?
> 
> Happy holidays to all, and may the force be with you
> 
> -Dana


Re: kafka 0.8.2 build problem

2015-11-15 Thread Harsha
Can you try using the 0.8.2.2 source from here:
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.2/kafka-0.8.2.2-src.tgz
https://kafka.apache.org/downloads.html
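With that tarball, the build steps are roughly the same ones you ran
against the git checkout (the extracted directory name may differ):

tar -xzf kafka-0.8.2.2-src.tgz
cd kafka-0.8.2.2-src
gradle          # bootstrap the gradle wrapper
./gradlew jar
./gradlew test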

On Sun, Nov 15, 2015, at 05:35 AM, jinxing wrote:
> Hi all, I am a newbie to kafka, and I tried to build kafka 0.8.2 from source
> code as a start;
> 
> 
> but I'm faced a lot of problem;
> 
> 
> here is the timeline of what I did:
> 
> 
> 1. I cloned kafka:git clone https://github.com/apache/kafka.git;
> 2. I created branch 0.8.2 and checkout that branch and reset this branch
> to 0.8.2:git branch 0.8.2;git checkout 0.8.2;git reset
> --hard   f76013b //this is the last commit id for 0.8.2
> 3. I run command, following (https://github.com/apache/kafka/tree/0.8.2):
>gradle;./gradlew jar; ./gradlew test
> 
> 
> But I found that 46 tests failed, and I got lots of different exceptions;
> 
> 
> If I don't do "git reset --hard f76013b", the build and tests
> succeed;
> I'm so confused; did I make any mistakes? What is the right way to build
> 0.8.2?
> 
> 
> Any ideas about this? Lots of thanks and really appreciate !
> btw my scala version is 2.10.2, gradle version is 2.6, java version is
> 1.8


Re: Kafka Broker process: Connection refused

2015-10-04 Thread Harsha
Mark,
      Can you try running /usr/hdp/current/kafka-broker/bin/kafka start
manually as the kafka user? Also, do you see jars under
kafka-broker/libs/kafka_*.jar? You could also ask the question
here: http://hortonworks.com/community/forums/forum/kafka/
Thanks, Harsha
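For example (a sketch using the HDP paths mentioned above; adjust to your
layout):

  su - kafka -c "/usr/hdp/current/kafka-broker/bin/kafka start"
  ls /usr/hdp/current/kafka-broker/libs/kafka_*.jar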

On Sun, Oct 4, 2015, at 01:49 AM, Mark Whalley wrote:
> Hi Kafka List
>
> This is my first posting to this list, and I am relatively new to
> Kafka, so please bear with me!
>
> Ambari Stack HDP-2.2
> Kafka 0.8.1.2.2.0.0
> Zookeeper 3.4.6.2.2.0.0
>
> I have been using Kafka on a small (4 node) VM cluster running CentOS
> 6.6 successfully for many months.
>
> Following a recent cluster reboot, Kafka now refuses to work.
>
> From Ambari, all services start, then within seconds, Kafka Broker
> stops with a "Kafka Broker process: Connection refused"
>
> /var/log/kafka/kafka.err reports: "Error: Could not find or load main
> class kafka.Kafka"
>
> All other services appear to be working fine.
>
> Ambari's configuration reports that nothing has changed in the
> configuration since May (Kafka was working until just over a week
> ago). This is a development cluster, so I cannot guarantee nothing
> outside Ambari / Kafka has changed.
>
> I have done the usual Google searching, and followed many threads over
> the past few days, but have drawn a blank.
>
> Any thoughts where I should start to look?
>
> Regards
> Mark
>
> ---
> Mark Whalley
> Principal Consultant
> Actian | Services
> Accelerating Big Data 2.0
> O +44 01753 559 569
> M +44 07764 290 733
> www.actian.com


EOL JDK 1.6 for Kafka

2015-07-01 Thread Harsha
Hi, 
During our SSL patch (KAFKA-1690), some of the reviewers/users
asked for support for this config:
https://docs.oracle.com/javase/8/docs/api/javax/net/ssl/SSLParameters.html#setEndpointIdentificationAlgorithm-java.lang.String-
It allows clients to verify the server and prevent a potential MITM attack.
This API doesn't exist in Java 1.6.
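For reference, a minimal sketch of what that API enables on the client side
in Java 7+ (the host and port are placeholders):

  import javax.net.ssl.SSLContext;
  import javax.net.ssl.SSLEngine;
  import javax.net.ssl.SSLParameters;

  public class EndpointIdDemo {
      public static void main(String[] args) throws Exception {
          SSLContext ctx = SSLContext.getDefault();
          SSLEngine engine = ctx.createSSLEngine("broker1.example.com", 9093);
          engine.setUseClientMode(true);
          SSLParameters params = engine.getSSLParameters();
          // verify the broker's hostname against its certificate
          params.setEndpointIdentificationAlgorithm("HTTPS");
          engine.setSSLParameters(params);
      }
  }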
Are there any users who still want 1.6 support, or can we stop supporting 1.6
from the next release onwards?

Thanks,
Harsha


Re: Failure in Leader Election on broker shutdown

2015-06-19 Thread Harsha
Sandeep,
        You need to have multiple replicas. Having a single replica means
you have one copy of the data, and if that machine goes down there isn't
another replica that can take over and be the leader for that partition. -Harsha
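For example (a sketch; the topic name and counts are placeholders), creating
the topic with a replication factor of 3 gives each partition standby
replicas that can take over on failure:

  bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 3 --partitions 3 --topic testv3p3
  bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic testv3p3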



_
From: Sandeep Bishnoi 
Sent: Friday, June 19, 2015 2:37 PM
Subject: Failure in Leader Election on broker shutdown
To:  


Hi,

 I have a kafka cluster of three nodes.

 I have constructed a topic with the following command:
 bin/kafka-topics.sh --create --zookeeper localhost:2181
--replication-factor 1 --partitions 3 --topic testv1p3

 So the topic "testv1p3" has 3 partitions and replication factor is 1.
 Here is the result of describe command:
  kafka_2.10-0.8.2.0]$ bin/kafka-topics.sh --describe --zookeeper
localhost:2181 --topic testv1p3

Topic:testv1p3    PartitionCount:3    ReplicationFactor:1    Configs:
    Topic: testv1p3    Partition: 0    Leader: 1    Replicas: 1    Isr: 1
    Topic: testv1p3    Partition: 1    Leader: 2    Replicas: 2    Isr: 2
    Topic: testv1p3    Partition: 2    Leader: 0    Replicas: 0    Isr: 0

So far things are good.

Now I tried to kill a broker using bin/kafka-server-stop.sh
The broker was stopped successfully.

 Now I wanted to ensure that there is a new leader for the partition which
was hosted on the terminated broker.
 Here is the output of describe command post broker termination:
 Topic:testv1p3    PartitionCount:3    ReplicationFactor:1    Configs:
    Topic: testv1p3    Partition: 0    Leader: 1    Replicas: 1    Isr: 1
    Topic: testv1p3    Partition: 1    Leader: -1    Replicas: 2    Isr:
    Topic: testv1p3    Partition: 2    Leader: 0    Replicas: 0    Isr: 0

Leader for partition:1 is -1.

Java API for kafka returns null for leader() in PartitionMetadata for
partition 1.

When I restarted the broker which was stopped earlier.
Things go back to normal.

1) Does leader selection happen automatically ?
2) If yes, do I need any particular configuration in broker or topic config
?
3) If not, what is command to ensure that I have a leader for partition 1
in case its lead broker goes down.
 FYI I tried to run bin/kafka-preferred-replica-election.sh --zookeeper
localhost:2181
 Post this script run, the topic description still remains same and no
leader for partition 1.

It will be great to get any help on this.


Reference:
Console log for (kafka-server-stop.sh):
[2015-06-19 14:25:00,241] INFO [Kafka Server 2], shutting down
(kafka.server.KafkaServer)
[2015-06-19 14:25:00,243] INFO [Kafka Server 2], Starting controlled
shutdown (kafka.server.KafkaServer)
[2015-06-19 14:25:00,267] INFO [Kafka Server 2], Controlled shutdown
succeeded (kafka.server.KafkaServer)
[2015-06-19 14:25:00,273] INFO Deregistered broker 2 at path
/brokers/ids/2. (kafka.utils.ZkUtils$)
[2015-06-19 14:25:00,274] INFO [Socket Server on Broker 2], Shutting down
(kafka.network.SocketServer)
[2015-06-19 14:25:00,279] INFO [Socket Server on Broker 2], Shutdown
completed (kafka.network.SocketServer)
[2015-06-19 14:25:00,280] INFO [Kafka Request Handler on Broker 2],
shutting down (kafka.server.KafkaRequestHandlerPool)
[2015-06-19 14:25:00,282] INFO [Kafka Request Handler on Broker 2], shut
down completely (kafka.server.KafkaRequestHandlerPool)
[2015-06-19 14:25:00,600] INFO [Replica Manager on Broker 2]: Shut down
(kafka.server.ReplicaManager)
[2015-06-19 14:25:00,601] INFO [ReplicaFetcherManager on broker 2] shutting
down (kafka.server.ReplicaFetcherManager)
[2015-06-19 14:25:00,602] INFO [ReplicaFetcherManager on broker 2] shutdown
completed (kafka.server.ReplicaFetcherManager)
[2015-06-19 14:25:00,604] INFO [Replica Manager on Broker 2]: Shut down
completely (kafka.server.ReplicaManager)
[2015-06-19 14:25:00,605] INFO Shutting down. (kafka.log.LogManager)
[2015-06-19 14:25:00,618] INFO Shutdown complete. (kafka.log.LogManager)
[2015-06-19 14:25:00,620] WARN Kafka scheduler has not been started
(kafka.utils.Utils$)
java.lang.IllegalStateException: Kafka scheduler has not been started
at kafka.utils.KafkaScheduler.ensureStarted(KafkaScheduler.scala:114)
at kafka.utils.KafkaScheduler.shutdown(KafkaScheduler.scala:86)
at
kafka.controller.KafkaController.onControllerResignation(KafkaController.scala:350)
at kafka.controller.KafkaController.shutdown(KafkaController.scala:664)
at
kafka.server.KafkaServer$$anonfun$shutdown$9.apply$mcV$sp(KafkaServer.scala:287)
at kafka.utils.Utils$.swallow(Utils.scala:172)
at kafka.utils.Logging$class.swallowWarn(Logging.scala:92)
at kafka.utils.Utils$.swallowWarn(Utils.scala:45)
at kafka.utils.Logging$class.swallow(Logging.scala:94)
at kafka.utils.Utils$.swallow(Utils.scala:45)
at kafka.server.KafkaServer.shutdown(KafkaServer.scala:287)
at
kafka.server.KafkaServerStartable.shutdown(KafkaServerStartable.scala:42)
at kafka.Kafka$$anon$1.run(Kafka.scala:42)
[2015-06-19 14:25:00,623] INFO Terminate Z

Re: Increasing replication factor of existing topics

2015-04-07 Thread Harsha
Hi Navneet,
          Any reason you are looking to modify the ZK nodes directly to
increase the topic's partitions? If you are looking for an API to do this,
there is AdminUtils.addPartitions.
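For increasing the replication factor itself, the approach documented at the
ops link cited below uses kafka-reassign-partitions.sh with a JSON file that
lists the expanded replica set. A minimal sketch (topic, partition, and
broker ids are placeholders):

  cat > increase-replication.json <<'EOF'
  {"version":1,
   "partitions":[{"topic":"my-topic","partition":0,"replicas":[1,2,3]}]}
  EOF
  bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
    --reassignment-json-file increase-replication.json --execute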

-- 
Harsha


On April 7, 2015 at 6:45:40 AM, Navneet Gupta (Tech - BLR) 
(navneet.gu...@flipkart.com) wrote:

Hi,  

I got a method to increase replication factor of topics here  
<https://kafka.apache.org/081/ops.html>  

However, I was wondering if it's possible to do it by altering some nodes  
in zookeeper.  

Thoughts/suggestions welcome.  


--  
Thanks & Regards,  
Navneet Gupta  


Re: Per-topic retention.bytes uses kilobytes not bytes?

2015-04-02 Thread Harsha
Hi Willy,
          retention.bytes is used to check whether the log in total exceeds
the limit: Kafka takes the diff between the total log size and retention.bytes
and only deletes whole log segment files that fit within this diff. So the
result is rounded off to the segment size, and it does check bytes, not KB.
Since your log segment size is 1024, you are seeing behavior that looks like
it is checking KB.

For example, say you set retention.bytes to 150 and the segment size to 100,
and you have two log segment files under your topic, each 100 bytes:
1) The total log size is 200 bytes and retention.bytes is 150 bytes.
2) The diff between the two is 50, which is lower than any individual log
segment file size, so no segment can be deleted.
3) If you add more data so that there are 3 segment files with a total size
of 250 bytes,
4) the diff is now 100, so 1 segment file can be deleted, keeping the total
log size at 150.

retention.bytes won't exactly match the total log size; it tries to keep the
total log file size close to retention.bytes. In the same example, if we have
3 log segment files with a total size of 300 bytes, when the log cleanup runs
it can only delete 1 log segment file, keeping the total log size at 200 bytes.
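To make the mechanics concrete, here is a rough sketch of the check described
above (illustrative pseudocode only, not the actual Kafka source; all names
are made up):

  // delete whole segments, oldest first, while the log still exceeds
  // retention.bytes by at least one full segment
  long diff = totalLogSizeBytes - retentionBytes;
  for (LogSegment segment : segmentsOldestFirst) {
      if (segment.size() <= diff) {
          delete(segment);
          diff -= segment.size();
      } else {
          break;
      }
  }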

-- 
Harsha


On April 2, 2015 at 2:29:45 PM, Willy Hoang (w...@knewton.com) wrote:

Hello,  

I’ve been having trouble using the retention.bytes per-topic configuration 
(using Kafka version 0.8.2.1). I had the same issue that users described in 
these two threads where logs were growing to sizes larger than retention.bytes. 
I couldn't find an explanation for the issue in either thread.  
http://search-hadoop.com/m/4TaT4Y2YRD1  
http://search-hadoop.com/m/4TaT4A94w9  

After a bit of exploring I came up with a hypothesis: retention.bytes uses 
kilobytes, not bytes, as its unit of measurement.  

Below are reproduceable steps to support my findings.  

# Create a new topic with retention.bytes = 1 and segment.bytes = 1024  
./kafka-topics.sh --create --zookeeper `kafka-zookeeper` --replication-factor 2 
--partitions 1 --topic test-topic-wh --config retention.ms=60480 --config 
retention.bytes=1 --config segment.bytes=1024  

# Produce a message that will add 1024 bytes to the log (26 bytes of metadata 
and 998 bytes from the message string)  
# [2015-04-01 21:31:30,192]  
./kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic-wh  
48511592621585064912153832133745068851354167277338568723801212367882940512382099547077656452011868167062280671787644034983697360468153738320733530248963074919916340211639682996497736197584019505594305204918092844365775522508769053709992262705578058943319678767004341493111503613353102924979561571028366773343124814043716584730147544725607450538227253470831289390680687225547253363513232291750196998204510607040879259384601451167183178896571219320889861706587525006032028098059014382213355803535550612056296013517434057006192416475524344248518557786455850822677869343421138195772284656076117000648020242375211903419500185954902765027000903916410762342630905680728543902271883661840640596483915010329616341194914110460126269112972976548329834183117816884560790416331259138123086341037733285781009676617847368669318437423457236162645890525200414080351181649588421908379380799396957194784506503965311272014255330651454364327607848972940341663812345678085583832958639819357061848511592621585064912153832
  

ls -r -l /mnt*/spool/kafka/test-topic-wh*  
total 4  
-rw-r--r-- 1 isaak isaak 1024 Apr 1 21:31 0018.log  
-rw-r--r-- 1 isaak isaak 10485760 Apr 1 21:27 0018.index  

# Wait about 10 minutes (longer than the 5 minute retention check interval)  
# Note that no changes occured  

# Produce any sized message to exceed the 1024 bytes (1 KB) retention limit  
# [2015-04-01 21:40:04,851]  
./kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic-wh  

ls -r -l /mnt*/spool/kafka/test-topic-wh*  
total 8  
-rw-r--r-- 1 isaak isaak 26 Apr 1 21:40 0020.log  
-rw-r--r-- 1 isaak isaak 10485760 Apr 1 21:40 0020.index  
-rw-r--r-- 1 isaak isaak 1024 Apr 1 21:31 0018.log  
-rw-r--r-- 1 isaak isaak 0 Apr 1 21:40 0018.index  

# Note from /var/log/kafka/server.log that the older segment is deleted now 
that we have exceeded the retention.bytes limit  
[2015-04-01 21:40:10,114] INFO Rolled new log segment for 'test-topic-wh-0' in 
0 ms. (kafka.log.Log)  
[2015-04-01 21:42:16,214] INFO Scheduling log segment 18 for log 
test-topic-wh-0 for deletion. (kafka.log.Log)  
[2015-04-01 21:43:16,217] INFO Deleting segment 18 from log test-topic-wh-0. 
(kafka.log.Log)  
[2015-04-01 21:43:16,217] INFO Deleting index 
/mnt/spool/kafka/test-topic-wh-0/0018.index.deleted 
(kafka.log.OffsetIndex)  

ls -r -l /mnt*/spool/kafka/test-topic-wh*  
total 4  
-rw-r--r-- 1 isaa

Re: Kafka Consumer

2015-03-31 Thread Harsha
Hi James,
        Can you elaborate on what you mean by group here? There are no groups 
on the topic side, but there are consumer groups, and these are on the 
consumer side.
"Consumers label themselves with a consumer group name, and each message 
published to a topic is delivered to one consumer instance within each 
subscribing consumer group. Consumer instances can be in separate processes or 
on separate machines."
More info on this page: http://kafka.apache.org/documentation.html (look for 
"consumer group").


-- 
Harsha


On March 31, 2015 at 6:10:59 AM, James King (jakwebin...@gmail.com) wrote:

I created a topic using:  

bin/kafka-topics.sh --create --zookeeper localhost:2181  
--replication-factor 1 --partitions 1 --topic test  

How do I find out what group it belongs to?  

Thank you.  


Re: lost messages -?

2015-03-26 Thread Harsha
Victor,
        It's under kafka.tools.DumpLogSegments; you can use kafka-run-class to 
execute it.
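For example (the log file path is a placeholder):

  bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
    --files /tmp/kafka-logs/test-0/00000000000000000000.log --print-data-log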
-- 
Harsha


On March 26, 2015 at 5:29:32 AM, Victor L (vlyamt...@gmail.com) wrote:

Where's this tool (DumpLogSegments) in Kafka distro? Is it Java class in  
kafka jar, or is it third party binary?  
Thank you,  

On Wed, Mar 25, 2015 at 1:11 PM, Mayuresh Gharat  wrote:  

> DumpLogSegments will give you output something like this :  
>  
> offset: 780613873770 isvalid: true payloadsize: 8055 magic: 1  
> compresscodec:  
> GZIPCompressionCodec  
>  
> If this is what you want you can use the tool, to detect if the messages  
> are getting to your brokers.  
> Console-Consumer will output the messages for you.  
>  
> Thanks,  
>  
> Mayuresh  
>  
>  
> On Wed, Mar 25, 2015 at 9:33 AM, tao xiao  wrote:  
>  
> > You can use kafka-console-consumer consuming the topic from the beginning  
> >  
> > *kafka-console-consumer.sh --zookeeper localhost:2181 --topic test  
> > --from-beginning*  
> >  
> >  
> > On Thu, Mar 26, 2015 at 12:17 AM, Victor L  wrote:  
> >  
> > > Can someone let me know how to dump contents of topics?  
> > > I have producers sending messages to 3 brokers but about half of them  
> > don't  
> > > seem to be consumed. I suppose they are getting stuck in queues but how  
> > can  
> > > i figure out where?  
> > > Thks,  
> > >  
> >  
> >  
> >  
> > --  
> > Regards,  
> > Tao  
> >  
>  
>  
>  
> --  
> -Regards,  
> Mayuresh R. Gharat  
> (862) 250-7125  
>  


Re: KafkaSpout forceFromStart Issue

2015-03-23 Thread Harsha
Hi Francois,
           This looks like it belongs on the Storm mailing lists. Can you 
please send this question to the Storm mailing list?

Thanks,
Harsha


On March 23, 2015 at 11:17:47 AM, François Méthot (fmetho...@gmail.com) wrote:

Hi,  

We have a storm topology that uses Kafka to read a topic with 6  
partitions. ( Kafka 0.8.2, Storm 0.9.3 )  

Recently, we had to set the KafkaSpout to read from the beginning, so we  
temporarily configured our KafkaConfig this way:  

kafkaConfig.forceFromStart=true  
kafkaConfig.startOffsetTime = OffsetRequest.EarliestTime()  

It worked well, but afterward, setting those parameters back to false and  
to LatestTime respectively had no effect. In fact the topology won't read  
from our topic anymore.  

When the topology starts, the spout successfully logs the offset and  
consumer group's cursor position for each partition in the worker log. But  
nothing is read.  

The only way we can read back from our Topic is to give our SpoutConfig a  
new Kafka ConsumerGroup Id.  

So it looks like, if we don't want to modify any KafkaSpout/Kafka code, the  
only way to read from the beginning would be to write the position we want  
to read from in Zookeeper for our Consumer Group where offset are stored  
and to restart our topology.  
Would anyone know if this is a bug in the KafkaSpout or an issue inherited  
from bug in Kafka?  
Thanks  
Francois  


Re: Check topic exists after deleting it.

2015-03-23 Thread Harsha
Just to be clear, one only needs to stop the producers and consumers that are 
writing to/reading from a topic "test" if they are trying to delete that 
specific topic "test". Not all producers and clients.

-- 
Harsha
On March 23, 2015 at 10:13:47 AM, Harsha (harsh...@fastmail.fm) wrote:

Currently we have auto.create.topics.enable set to true by default. If this is 
set to true, anyone who makes a TopicMetadataRequest can create a topic, and 
both producers and consumers can send a TopicMetadataRequest. So while a 
deletion is in progress, a running producer or consumer can re-create the 
topic that is in the deletion process. This issue is going to be addressed in 
upcoming versions. Meanwhile, if you are not creating topics via the producer, 
turn this config off, or stop producers and consumers while you are trying to 
delete a topic.
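For example, in server.properties:

  # prevent clients from implicitly (re-)creating topics
  auto.create.topics.enable=false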
-- 
Harsha


On March 23, 2015 at 9:57:53 AM, Grant Henke (ghe...@cloudera.com) wrote:

What happens when producers or consumers are running while the topic
deleting is going on?

On Mon, Mar 23, 2015 at 10:02 AM, Harsha  wrote:

> DeleteTopic makes a node in zookeeper to let controller know that there is
> a topic up for deletion. This doesn’t immediately delete the topic it can
> take time depending if all the partitions of that topic are online and
> brokers are available as well. Once all the Log files deleted zookeeper
> node gets deleted as well.
> Also make sure you don’t have any producers or consumers are running while
> the topic deleting is going on.
>
> --
> Harsha
>
>
> On March 23, 2015 at 1:29:50 AM, anthony musyoki (
> anthony.musy...@gmail.com) wrote:
>
> On deleting a topic via TopicCommand.deleteTopic()
>
> I get "Topic test-delete is marked for deletion."
>
> I follow up by checking if the topic exists by using
> AdminUtils.topicExists()
> which suprisingly returns true.
>
> I expected AdminUtils.TopicExists() to check both BrokerTopicsPath
> and DeleteTopicsPath before returning a verdict but it only checks
> BrokerTopicsPath
>
> Shouldn't a topic marked for deletion return false for topicExists() ?
>



--
Grant Henke
Solutions Consultant | Cloudera
ghe...@cloudera.com | 920-980-8979
twitter.com/ghenke <http://twitter.com/gchenke> | linkedin.com/in/granthenke


Re: Check topic exists after deleting it.

2015-03-23 Thread Harsha
Currently we have auto.create.topics.enable set to true by default. If this is 
set to true, anyone who makes a TopicMetadataRequest can create a topic, and 
both producers and consumers can send a TopicMetadataRequest. So while a 
deletion is in progress, a running producer or consumer can re-create the 
topic that is in the deletion process. This issue is going to be addressed in 
upcoming versions. Meanwhile, if you are not creating topics via the producer, 
turn this config off, or stop producers and consumers while you are trying to 
delete a topic.
-- 
Harsha


On March 23, 2015 at 9:57:53 AM, Grant Henke (ghe...@cloudera.com) wrote:

What happens when producers or consumers are running while the topic  
deleting is going on?  

On Mon, Mar 23, 2015 at 10:02 AM, Harsha  wrote:  

> DeleteTopic makes a node in zookeeper to let controller know that there is  
> a topic up for deletion. This doesn’t immediately delete the topic it can  
> take time depending if all the partitions of that topic are online and  
> brokers are available as well. Once all the Log files deleted zookeeper  
> node gets deleted as well.  
> Also make sure you don’t have any producers or consumers are running while  
> the topic deleting is going on.  
>  
> --  
> Harsha  
>  
>  
> On March 23, 2015 at 1:29:50 AM, anthony musyoki (  
> anthony.musy...@gmail.com) wrote:  
>  
> On deleting a topic via TopicCommand.deleteTopic()  
>  
> I get "Topic test-delete is marked for deletion."  
>  
> I follow up by checking if the topic exists by using  
> AdminUtils.topicExists()  
> which suprisingly returns true.  
>  
> I expected AdminUtils.TopicExists() to check both BrokerTopicsPath  
> and DeleteTopicsPath before returning a verdict but it only checks  
> BrokerTopicsPath  
>  
> Shouldn't a topic marked for deletion return false for topicExists() ?  
>  



--  
Grant Henke  
Solutions Consultant | Cloudera  
ghe...@cloudera.com | 920-980-8979  
twitter.com/ghenke <http://twitter.com/gchenke> | linkedin.com/in/granthenke  


Re: Check topic exists after deleting it.

2015-03-23 Thread Harsha
DeleteTopic creates a node in zookeeper to let the controller know that there 
is a topic up for deletion. This doesn't immediately delete the topic; it can 
take time, depending on whether all the partitions of that topic are online 
and the brokers are available as well. Once all the log files are deleted, the 
zookeeper node gets deleted as well.
Also make sure you don't have any producers or consumers running while the 
topic deletion is going on.

-- 
Harsha


On March 23, 2015 at 1:29:50 AM, anthony musyoki (anthony.musy...@gmail.com) 
wrote:

On deleting a topic via TopicCommand.deleteTopic()  

I get "Topic test-delete is marked for deletion."  

I follow up by checking if the topic exists by using  
AdminUtils.topicExists()  
which surprisingly returns true.  

I expected AdminUtils.TopicExists() to check both BrokerTopicsPath  
and DeleteTopicsPath before returning a verdict but it only checks  
BrokerTopicsPath  

Shouldn't a topic marked for deletion return false for topicExists() ?  


Re: Kafka-Storm: troubleshooting low R/W throughput

2015-03-22 Thread Harsha
Hi Emmanuel,
       Can you post your kafka server.properties? And in your producer, are 
you distributing your messages across all of the topic's partitions?
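For reference, a sketch of a 0.8 producer that spreads messages across
partitions by varying the message key (broker list, topic, and the
key/payload values are placeholders):

  import java.util.Properties;
  import kafka.javaapi.producer.Producer;
  import kafka.producer.KeyedMessage;
  import kafka.producer.ProducerConfig;

  public class SpreadDemo {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("metadata.broker.list", "broker1:9092,broker2:9092");
          props.put("serializer.class", "kafka.serializer.StringEncoder");
          Producer<String, String> producer =
              new Producer<String, String>(new ProducerConfig(props));
          for (int i = 0; i < 100; i++) {
              // a varying key lets the default partitioner spread the load;
              // with the old producer a null key can stick to one partition
              // for long stretches
              producer.send(new KeyedMessage<String, String>(
                  "my-topic", Integer.toString(i), "payload-" + i));
          }
          producer.close();
      }
  }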

-- 
Harsha


On March 20, 2015 at 12:33:02 PM, Emmanuel (ele...@msn.com) wrote:

Kafka on test cluster: 
2 Kafka nodes, 2GB, 2CPUs
3 Zookeeper nodes, 2GB, 2CPUs

Storm:
3 nodes, 3CPUs each, on the same Zookeeper cluster as Kafka.

1 topic, 5 partitions, replication x2

Whether I use 1 slot for the Kafka Spout or 5 slots (=#partitions), the 
throughput seems about the same.

I can't seem to read much more than 7000 events/sec.

Same, on writing, I set a generator spout and write to Kafka on 1 
topic/5partitions with a KafkaBolt with parallelism of 5 and I can't seem to 
write much more than 7000 events/sec.

Meanwhile, none of the CPU, IO or MEM seem to be a bottleneck: 
In Storm UI the bolts all show capacities <50%, sometimes much less (in the 
single digit %)
Top shows CPUs being used at ~30% max

We have another process moving data from Kafka to Cassandra and it gives 
similar throughput, so it seems related to Kafka more than Storm.


What could be wrong? 
Sorry for the generic question but I would appreciate any hint on where to 
start to troubleshoot.

Thanks

Re: kafka topic information

2015-03-09 Thread Harsha
In general, users are expected to run a zookeeper cluster of 3 or 5 nodes. 
Zookeeper requires a quorum of servers running, which means at least ceil(n/2) 
servers need to be up. For 3 zookeeper nodes, at least 2 zk nodes need to be 
up at any time, i.e. your cluster can function fine in case of 1 machine 
failure; in the case of 5, at least 3 nodes need to be up and running.  
For more info on zookeeper you can look here:
http://zookeeper.apache.org/doc/r3.4.6/
http://zookeeper.apache.org/doc/r3.4.6/zookeeperAdmin.html


-- 
Harsha
On March 9, 2015 at 8:39:00 AM, Yuheng Du (yuheng.du.h...@gmail.com) wrote:

Harsha,

Thanks for the reply. So what if the zookeeper cluster fails? Will the topic 
information be lost? What fault-tolerance mechanism does zookeeper offer?

best,

On Mon, Mar 9, 2015 at 11:36 AM, Harsha  wrote:
Yuheng,
          Kafka keeps cluster metadata in zookeeper, along with topic metadata.
You can use zookeeper-shell.sh or zkCli.sh to inspect the zk nodes; 
/brokers/topics will give you the list of topics.

-- 
Harsha


On March 9, 2015 at 8:20:59 AM, Yuheng Du (yuheng.du.h...@gmail.com) wrote:

I am wondering where does kafka cluster keep the topic metadata (name,
partition, replication, etc)? How does a server recover the topic's
metadata and messages after restart and what data will be lost?

Thanks for anyone to answer my questions.

best,
Yuheng



Re: kafka topic information

2015-03-09 Thread Harsha
Yuheng,
          Kafka keeps cluster metadata in zookeeper, along with topic metadata.
You can use zookeeper-shell.sh or zkCli.sh to inspect the zk nodes; 
/brokers/topics will give you the list of topics.
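For example (the topic name is a placeholder):

  bin/zookeeper-shell.sh localhost:2181
  ls /brokers/topics
  get /brokers/topics/my-topic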

-- 
Harsha


On March 9, 2015 at 8:20:59 AM, Yuheng Du (yuheng.du.h...@gmail.com) wrote:

I am wondering where does kafka cluster keep the topic metadata (name,  
partition, replication, etc)? How does a server recover the topic's  
metadata and messages after restart and what data will be lost?  

Thanks for anyone to answer my questions.  

best,  
Yuheng  


Re: Problem deleting topics in 0.8.2?

2015-03-04 Thread Harsha
Hi Jeff,
 Are you seeing any errors in state-change.log or controller.log
 after issuing the kafka-topics.sh --delete command?
There is another known issue: if you have auto.create.topics.enable =
true (this is true by default), your consumer or producer can re-create
the topic. So try stopping all of your consumers and producers, then run the
delete topic command again.
-Harsha

On Wed, Mar 4, 2015, at 10:28 AM, Jeff Schroeder wrote:
> So I've got 3 kafka brokers that were started with delete.topic.enable
> set
> to true. When they start, I can see in the logs that the property was
> successfully set. The dataset in each broker is only approximately 2G
> (per
> du). When running kafka-delete.sh with the correct arguments to delete
> all
> of the topics, it says that the topic is marked for deletion. When
> running
> again, it says that the topic is already marked for deletion.
> 
> From reading the documentation, my understanding is that one of the 10
> (default) background threads would eventually process the deletes, and
> clean up both the topics in zookeeper, and the actual data on disk. In
> reality, it didnt seem to delete the data on disk or remove anything in
> zookeeper.
> 
> What is the correct way to remove a topic in kafka 0.8.2 and what is the
> expected timeframe for that to complete? My "solution" was
> stopping the brokers and rm -rf /var/lib/kafka/*, but that is clearly a
> very poor one once we are done testing our kafka + storm setup.
> 
> -- 
> Jeff Schroeder
> 
> Don't drink and derive, alcohol and analysis don't mix.
> http://www.digitalprognosis.com


Re: 0.7 design doc?

2015-03-02 Thread Harsha
These docs might help
https://kafka.apache.org/08/design.html
http://research.microsoft.com/en-us/um/people/srikanth/netdb11/netdb11papers/netdb11-final12.pdf
-Harsha

On Sun, Mar 1, 2015, at 09:42 PM, Philip O'Toole wrote:
> Thanks Guozhang -- no this isn't quite it. The doc I read before
> contained the rationale for using physical offsets in the file, not
> logical offsets. I know the current version of Kafka now uses logical
> offsets again.  It's not a big deal though, I generally remember the
> contents of the page, and the important section about using the OS for
> caching is also contained the 0.8 docs. I was more curious about
> re-reading it.
> 
> I do have one question though. There are two ways (that I know of) of
> accessing a file -- the read() and write() system calls, or mmap'ing the
> file. Both go through the OS file cache, as far as I know. Which
> technique does Kafka actually use, when accessing log files? I always
> wondered. I started looking at the Scala source, but it's not immediately
> clear to me.
> 
> Thanks,
> Philip
>  -
> http://www.philipotoole.com 
> 
>  On Saturday, February 28, 2015 9:33 PM, Guozhang Wang
>   wrote:
>
> 
>  Is this you are looking for?
> 
> http://kafka.apache.org/07/documentation.html
> 
> On Fri, Feb 27, 2015 at 7:02 PM, Philip O'Toole <
> philip.oto...@yahoo.com.invalid> wrote:
> 
> > There used to be available a very lucid page describing Kafka 0.7, its
> > design, and the rationale behind certain decisions. I last saw it about 18
> > months ago.  I can't find it now. Is it still available? I can find the 0.8
> > version, it's up there on the site.
> >
> > Any help? Any links?
> >
> > Philip
> >
> > 
> > http://www.philipotoole.com
> 
> 
> 
> 
> -- 
> -- Guozhang
> 
> 
>


Re: Delete Topic in 0.8.2

2015-03-01 Thread Harsha

Hi Hema, can you attach controller.log and state-change.log? The image is
not showing up, at least for me. Can you also give us details on how big
the cluster is, the topic's partitions and replication factor, and any
steps to reproduce this? Thanks, Harsha


On Sun, Mar 1, 2015, at 12:40 PM, Hema Bhatia wrote:
> I upgraded my kafka server to 0.8.2 and client to use 0.8.2 as well.
>
> I am trying to test the delete topic feature and I see that delete topic
> does not work consistently.
> I saw it working fine the first few times, but after a while I saw that
> deleting topics adds them to the delete_topic node in the admin folder but
> does not remove them from the topics in the brokers.
>
> Also, to make things simple, no other application server was running,
> and no consumer was created for these topics. Just create topic followed
> by delete topic.
>
> Anything I am missing?
>
> Here is the snapshot of createTopic followed by deleteTopic:
> [screenshot not preserved in the archive]
>
> This message is private and confidential. If you have received it in
> error, please notify the sender and remove it from your system.


Re: How replicas catch up the leader

2015-02-28 Thread Harsha
You can increase num.replica.fetchers (by default it's 1) and also try
increasing replica.fetch.max.bytes.
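For example, in server.properties (values are only illustrative; tune for
your hardware and message sizes):

  num.replica.fetchers=4
  replica.fetch.max.bytes=5242880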
-Harsha

On Fri, Feb 27, 2015, at 11:15 PM, tao xiao wrote:
> Hi team,
> 
> I had a replica node that was shutdown improperly due to no disk space
> left. I managed to clean up the disk and restarted the replica but the
> replica since then never caught up the leader shown below
> 
> Topic:test PartitionCount:1 ReplicationFactor:3 Configs:
> 
> Topic: test Partition: 0 Leader: 5 Replicas: 1,5,6 Isr: 5,6
> 
> broker 1 is the replica that failed before. Is there a way that I can
> force
> the replica to catch up the leader?
> 
> -- 
> Regards,
> Tao


Re: Batching with new Producer API

2015-02-26 Thread Harsha
Akshat,
   The producer's batch.size is in bytes, and if your average message size
   is 310 bytes and your current number of messages per batch is 46, you
   are already getting close to the max batch size of 16384 bytes
   (310 * 46 is roughly 14,260). Did you try increasing the producer's
   batch.size?
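For example, with the new producer (values are illustrative; batch.size is
per-partition and in bytes):

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;

  public class BatchingDemo {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "broker1:9092");
          props.put("key.serializer",
              "org.apache.kafka.common.serialization.ByteArraySerializer");
          props.put("value.serializer",
              "org.apache.kafka.common.serialization.ByteArraySerializer");
          props.put("batch.size", "65536"); // default is 16384
          props.put("linger.ms", "100");    // wait up to 100 ms to fill a batch
          KafkaProducer<byte[], byte[]> producer =
              new KafkaProducer<byte[], byte[]>(props);
          producer.close();
      }
  }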
-Harsha

On Thu, Feb 26, 2015, at 09:49 AM, Akshat Aranya wrote:
> Hi,
> 
> I am using the new Producer API in Kafka 0.8.2. I am writing messages to
> Kafka that are ~310 bytes long with the same partition key to one single
> .
> I'm mostly using the default Producer config, which sets the max batch
> size
> to 16,384.  However, looking at the JMX stats on the broker side, I see
> that I'm only getting an average batch size of 46.  I also tried
> increasing
> the linger.ms value to 100ms (default is 0), but that didn't help either.
> Is there something else that I can tune that will improve write batching?
> 
> Thanks,
> Akshat


Re: Broker Server Crash with HW failure. Broker throwing java.lang.NumberFormatException and will not restart without removing all partitions

2015-02-24 Thread Harsha
Hi Gene,
Looks like you might be running into this
https://issues.apache.org/jira/browse/KAFKA-1758 . 
-Harsha
On Tue, Feb 24, 2015, at 07:17 AM, Gene Robichaux wrote:
> About a week ago one of our brokers crashed with a hardware failure. When
> the server restarted the Kafka broker would not start. The error is
> listed below. I tried a couple of times to restart, but no success. The
> only thing that worked was to physically remove the partitions for this
> broker. I was able to restart the broker. The partitions were recreated
> and were in-sync within about 30 minutes. My questions for the group are:
> 
> Has anyone seen this error before? If so, what was your resolution?
> Is there a more elegant way to "re-sync" a broker node? 
> Is there a way to identify which partition could be causing the issue and
> possibly just remove that one?
> 
> We have seen this a couple of other time in our environment, twice from
> HW failures and once from someone killing the broker with a KILL -9.
> 
> Is this a known issue? 
> 
> If anyone has any insight as to why this would occur I would greatly
> appreciate it.
> 
> Gene Robichaux
> Manager, Database Operations
> Match.com
> 8300 Douglas Avenue I Suite 800 I Dallas, TX  75225
> 
> 
> 02/18/2015 09:43:11 PM | FATAL | KafkaServerStartable | Fatal error
> dur

Re: Kafka question about starting kafka with same broker id

2015-02-18 Thread Harsha
Deepak,
  You should be getting the following error, and the kafka server will shut
  itself down if it sees the same broker id registered in zookeeper.
" Fatal error during KafkaServer startup. Prepare to shutdown
(kafka.server.KafkaServer)
java.lang.RuntimeException: A broker is already registered on the path
/brokers/ids/0. This probably indicates that you either have configured
a brokerid that is already in use, or else you have shutdown this broker
and restarted it faster than the zookeeper timeout so it appears to be
re-registering."

-Harsha

On Wed, Feb 18, 2015, at 03:16 PM, Deepak Dhakal wrote:
> Hi,
> 
> My name is Deepak and I work for salesforce. We are using kafka 8.11  and
> have a question about starting kafka with same broker id.
> 
> Steps:
> 
> Start a kakfa broker with broker id =1 -> it starts fine with external ZK
> Start another kafka with same broker id =1 .. it does not start the kafka
> which is expected but I am seeing the following log and it keeps retrying
> forever.
> 
> Is there way to control how many time a broker tries to starts itself
> with
> the same broker id ?
> 
> 
> Thanks
> Deepak
> 
> [2015-02-18 14:47:20,713] INFO conflict in /controller data:
> {"version":1,"brokerid":19471,"timestamp":"1424299100135"} stored data:
> {"version":1,"brokerid":19471,"timestamp":"1424288444314"}
> (kafka.utils.ZkUtils$)
> 
> [2015-02-18 14:47:20,716] INFO I wrote this conflicted ephemeral node
> [{"version":1,"brokerid":19471,"timestamp":"1424299100135"}] at
> /controller
> a while back in a different session, hence I will backoff for this node
> to
> be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)
> 
> [2015-02-18 14:47:30,719] INFO conflict in /controller data:
> {"version":1,"brokerid":19471,"timestamp":"1424299100135"} stored data:
> {"version":1,"brokerid":19471,"timestamp":"1424288444314"}
> (kafka.utils.ZkUtils$)
> 
> [2015-02-18 14:47:30,722] INFO I wrote this conflicted ephemeral node
> [{"version":1,"brokerid":19471,"timestamp":"1424299100135"}] at
> /controller
> a while back in a different session, hence I will backoff for this node
> to
> be deleted by Zookeeper and retry (kafka.utils.ZkUtils$)


Re: MetadataRequest vs Zookeeper

2015-02-13 Thread Harsha
Paul,
 There is ongoing work to move to the Kafka API instead of making
 calls to zookeeper.
Here is the JIRA https://issues.apache.org/jira/browse/STORM-650 .
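For comparison, a sketch of fetching topic metadata directly from a broker
with the 0.8 javaapi SimpleConsumer (host, port, and topic are placeholders):

  import java.util.Collections;
  import kafka.javaapi.PartitionMetadata;
  import kafka.javaapi.TopicMetadata;
  import kafka.javaapi.TopicMetadataRequest;
  import kafka.javaapi.TopicMetadataResponse;
  import kafka.javaapi.consumer.SimpleConsumer;

  public class MetadataDemo {
      public static void main(String[] args) {
          SimpleConsumer consumer = new SimpleConsumer(
              "broker1", 9092, 100000, 64 * 1024, "metadata-lookup");
          TopicMetadataRequest request =
              new TopicMetadataRequest(Collections.singletonList("my-topic"));
          TopicMetadataResponse response = consumer.send(request);
          for (TopicMetadata topic : response.topicsMetadata()) {
              for (PartitionMetadata part : topic.partitionsMetadata()) {
                  System.out.println("partition " + part.partitionId()
                      + " leader " + part.leader());
              }
          }
          consumer.close();
      }
  }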
-Harsha

On Fri, Feb 13, 2015, at 01:02 PM, Paul Mackles wrote:
> I noticed that the standard Kafka storm spout gets topic metadata from
> zookeeper (under "/brokers/topics/") instead of issuing MetadataRequests
> to one of the brokers. Aside from possible encapsulation issues, are
> there any other downsides to using ZK this way? Are there significant
> cases where ZK can be out-of-sync with the brokers?
> 
> 
> I can see that the high-level consumer gets its topic metadata by issuing
> MetadataRequests to the brokers, so I always assumed that was preferred.
> 
> 
> Thanks,
> 
> Paul
> 
> 


Re: [DISCUSS] KIP-12 - Kafka Sasl/Kerberos implementation

2015-02-12 Thread Harsha

Thanks for the review, Gwen. I'll keep the Java 6 support in mind. 
-Harsha
On Wed, Feb 11, 2015, at 03:28 PM, Gwen Shapira wrote:
> Looks good. Thanks for working on this.
> 
> One note, the Channel implementation from SSL only works on Java7 and up.
> Since we are still supporting Java 6, I'm working on a lighter wrapper
> that
> will be a composite on SocketChannel but will not extend it. Perhaps
> you'll
> want to use that.
> 
> Looking forward to the patch!
> 
> Gwen
> 
> On Wed, Feb 11, 2015 at 9:17 AM, Harsha  wrote:
> 
> > Hi,
> > Here is the initial proposal for sasl/kerberos implementation for
> > kafka https://cwiki.apache.org/confluence/x/YI4WAw
> > and JIRA https://issues.apache.org/jira/browse/KAFKA-1686. I am
> > currently working on prototype which will add more details to the KIP.
> > Just opening the thread to say the work is in progress. I'll update the
> > thread with a initial prototype patch.
> > Thanks,
> > Harsha
> >


Re: [DISCUSS] KIP-12 - Kafka Sasl/Kerberos implementation

2015-02-11 Thread Harsha
Thanks Joe. It will be part of KafkaServer and will run on its own
thread. Since each kafka server will run with a keytab, we should make
sure the tickets are all getting renewed.

On Wed, Feb 11, 2015, at 10:00 AM, Joe Stein wrote:
> Thanks Harsha, looks good so far. How were you thinking of running
> the KerberosTicketManager: as a standalone process, like the controller, or
> is it a layer of code that does the plumbing pieces everywhere?
> 
> ~ Joestein
> 
> On Wed, Feb 11, 2015 at 12:18 PM, Harsha  wrote:
> 
> > Hi,
> > Here is the initial proposal for sasl/kerberos implementation for
> > kafka https://cwiki.apache.org/confluence/x/YI4WAw
> > and JIRA https://issues.apache.org/jira/browse/KAFKA-1686. I am
> > currently working on prototype which will add more details to the KIP.
> > Just opening the thread to say the work is in progress. I'll update the
> > thread with a initial prototype patch.
> > Thanks,
> > Harsha
> >


Fwd: [DISCUSS] KIP-12 - Kafka Sasl/Kerberos implementation

2015-02-11 Thread Harsha
Hi,
Here is the initial proposal for sasl/kerberos implementation for
kafka https://cwiki.apache.org/confluence/x/YI4WAw
and JIRA https://issues.apache.org/jira/browse/KAFKA-1686. I am
currently working on prototype which will add more details to the KIP. 
Just opening the thread to say the work is in progress. I'll update the
thread with an initial prototype patch.
Thanks,
Harsha


[DISCUSS] KIP-12 - Kafka Sasl/Kerberos implementation

2015-02-11 Thread Harsha
Hi,
Here is the initial proposal for sasl/kerberos implementation for
kafka https://cwiki.apache.org/confluence/x/YI4WAw
and JIRA https://issues.apache.org/jira/browse/KAFKA-1686. I am
currently working on prototype which will add more details to the KIP. 
Just opening the thread to say the work is in progress. I'll update the
thread with an initial prototype patch.
Thanks,
Harsha


Re: Issue with topic deletion

2015-02-04 Thread Harsha

   What's your zookeeper.session.timeout.ms value?

On Wed, Feb 4, 2015, at 09:35 PM, Sumit Rangwala wrote:
> On Wed, Feb 4, 2015 at 6:14 PM, Joel Koshy  wrote:
> 
> > I took a look at your logs. I agree with Harsh that the logs seem
> > truncated. The basic issue though is that you have session expirations
> > and controller failover. Broker 49554 was the controller and hosted
> > some partition(s) of LAX1-GRIFFIN-r13-1423001701601. After controller
> > failover the new controller marks it as ineligible for deletion since
> > 49554 is considered down (until it re-registers in zookeeper) and is
> > re-elected as the leader - however I don't see those logs.
> >
> > Ok. Just wondering if the delete logic is documented anywhere.
> 
> 
> 
> > Any idea why you have session expirations? This is typically due to GC
> > and/or flaky network. Regardless, we should be handling that scenario
> > as well. However, your logs seem incomplete. Can you redo this and
> > perhaps keep the set up running a little longer and send over those
> > logs?
> >
> >
> I am stress testing my application by doing a large number of reads and
> writes to kafka. My setup consists of many docker instances (of brokers and
> clients) running (intentionally) on a single linux box. Since the machine is
> overloaded, a congested network and long GCs are a possibility.
> 
> I will redo the experiment and keep the kafka brokers running. However, I
> will move to 0.8.2 release since Jun asked me to try it for another issue
> (topic creation). I hope that is fine.
> 
> 
> Sumit
> 
> 
> 
> > Thanks,
> >
> > Joel
> >
> > On Wed, Feb 04, 2015 at 01:00:46PM -0800, Sumit Rangwala wrote:
> > > >
> > > >
> > > >> I have since stopped the container so I cannot say if
> > > > LAX1-GRIFFIN-r45-142388317 was one of the topic in "marked for
> > > > deletion" forever.  However, there were many topics (at least 10 of
> > them)
> > > > that were perennially in "marked for deletion" state.
> > > >
> > > >
> > > I have the setup to recreate the issue in case the logs are not
> > sufficient.
> > >
> > >
> > > Sumit
> > >
> > >
> > >
> > > > Sumit
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >> -Harsha
> > > >>
> > > >> On Tue, Feb 3, 2015, at 09:19 PM, Harsha wrote:
> > > >> > you are probably handling it but there is a case where you call
> > > >> > deleteTopic and kafka goes through delete topic process but your
> > > >> > consumer is running probably made a TopicMetadataRequest for the
> > same
> > > >> > topic which can re-create the topic with the default num.partitions
> > and
> > > >> > replication.factor.  Did you try stopping the consumer first and
> > issue
> > > >> > the topic delete.
> > > >> > -Harsha
> > > >> >
> > > >> > On Tue, Feb 3, 2015, at 08:37 PM, Sumit Rangwala wrote:
> > > >> > > On Tue, Feb 3, 2015 at 6:48 PM, Harsha  wrote:
> > > >> > >
> > > >> > > > Sumit,
> > > >> > > >lets say you are deleting a older topic "test1" do you
> > have
> > > >> any
> > > >> > > >consumers running simultaneously for the topic "test1"
> > while
> > > >> > > >deletion of topic going on.
> > > >> > > >
> > > >> > >
> > > >> > > Yes it is the case. However, after a small period of time (say few
> > > >> > > minutes)
> > > >> > > there won't be any consumer running for the deleted topic.
> > > >> > >
> > > >> > >
> > > >> > > Sumit
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > > -Harsha
> > > >> > > >
> > > >> > > > On Tue, Feb 3, 2015, at 06:17 PM, Joel Koshy wrote:
> > > >> > > > > Thanks for the logs - will take a look tomorrow unless someone
> > > >> else
> > > >> > > > > gets a chance to get to it today.
> > > >> > > > >
> >

Re: Issue with topic deletion

2015-02-03 Thread Harsha
Sumit,
 I grepped the logs for the topic "LAX1-GRIFFIN-r13-1423001701601";
 it looks like the topic partitions are getting deleted in
 state-change.log, and this happens around 22:59, and server.log
 has data till 22:59.
I looked at another deleted topic, "LAX1-GRIFFIN-r45-142388317", which
looks to be getting deleted properly. Do you see the same issue with the
above topic, i.e. does /admin/delete_topics/LAX1-GRIFFIN-r45-142388317
still exist? If you can post the logs from 23:00 onwards, that would be
helpful.
-Harsha

On Tue, Feb 3, 2015, at 09:19 PM, Harsha wrote:
> you are probably handling it but there is a case where you call
> deleteTopic and kafka goes through delete topic process but your
> consumer is running probably made a TopicMetadataRequest for the same
> topic which can re-create the topic with the default num.partitions and
> replication.factor.  Did you try stopping the consumer first and issue
> the topic delete.
> -Harsha
> 
> On Tue, Feb 3, 2015, at 08:37 PM, Sumit Rangwala wrote:
> > On Tue, Feb 3, 2015 at 6:48 PM, Harsha  wrote:
> > 
> > > Sumit,
> > >lets say you are deleting a older topic "test1" do you have any
> > >consumers running simultaneously for the topic "test1"  while
> > >deletion of topic going on.
> > >
> > 
> > Yes it is the case. However, after a small period of time (say few
> > minutes)
> > there won't be any consumer running for the deleted topic.
> > 
> > 
> > Sumit
> > 
> > 
> > 
> > 
> > > -Harsha
> > >
> > > On Tue, Feb 3, 2015, at 06:17 PM, Joel Koshy wrote:
> > > > Thanks for the logs - will take a look tomorrow unless someone else
> > > > gets a chance to get to it today.
> > > >
> > > > Joel
> > > >
> > > > On Tue, Feb 03, 2015 at 04:11:57PM -0800, Sumit Rangwala wrote:
> > > > > On Tue, Feb 3, 2015 at 3:37 PM, Joel Koshy 
> > > wrote:
> > > > >
> > > > > > Hey Sumit,
> > > > > >
> > > > > > I thought you would be providing the actual steps to reproduce :)
> > > > > >
> > > > >
> > > > > I want to but some proprietary code prevents me to do it.
> > > > >
> > > > >
> > > > > > Nevertheless, can you get all the relevant logs: state change logs
> > > and
> > > > > > controller logs at the very least and if possible server logs and
> > > send
> > > > > > those over?
> > > > > >
> > > > >
> > > > > Here are all the logs you requested (there are three brokers in my
> > > setup
> > > > > k1, k2, k3): http://d.pr/f/1kprY/2quHBRRT (Gmail has issue with the
> > > file)
> > > > >
> > > > >
> > > > > Sumit
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > >
> > > > > > Joel
> > > > > >
> > > > > > On Tue, Feb 03, 2015 at 03:27:43PM -0800, Sumit Rangwala wrote:
> > > > > > > In my setup kafka brokers are set for auto topic creation. In the
> > > > > > scenario
> > > > > > > below a node informs other nodes (currently 5 in total) about  a
> > > number
> > > > > > of
> > > > > > > new (non-existent) topics, and  all the nodes almost
> > > simultaneously open
> > > > > > a
> > > > > > > consumer for each of those topics. Sometime later another node
> > > informs
> > > > > > all
> > > > > > > other nodes of a new list of topics and each node, if they find
> > > that an
> > > > > > > older topic exists in kafka, goes ahead and deletes the older
> > > topic.
> > > > > > What I
> > > > > > > have found is that many of the topics stay in the "marked for
> > > deletion"
> > > > > > > state forever.
> > > > > > >
> > > > > > >
> > > > > > > I get the list of topics using ZkUtils.getAllTopics(zkClient) and
> > > delete
> > > > > > > topics using AdminUtils.deleteTopic(zkClient, topic). Since many
> > > nodes
> > > > > > > might try to delete the same topic at the same time I do
> > > > > > > see Z

Re: Issue with topic deletion

2015-02-03 Thread Harsha
You are probably handling it, but there is a case where you call
deleteTopic and kafka goes through the delete topic process, while your
running consumer has probably made a TopicMetadataRequest for the same
topic, which can re-create the topic with the default num.partitions and
replication.factor. Did you try stopping the consumer first and then issuing
the topic delete?
-Harsha

On Tue, Feb 3, 2015, at 08:37 PM, Sumit Rangwala wrote:
> On Tue, Feb 3, 2015 at 6:48 PM, Harsha  wrote:
> 
> > Sumit,
> >lets say you are deleting a older topic "test1" do you have any
> >consumers running simultaneously for the topic "test1"  while
> >deletion of topic going on.
> >
> 
> Yes it is the case. However, after a small period of time (say few
> minutes)
> there won't be any consumer running for the deleted topic.
> 
> 
> Sumit
> 
> 
> 
> 
> > -Harsha
> >
> > On Tue, Feb 3, 2015, at 06:17 PM, Joel Koshy wrote:
> > > Thanks for the logs - will take a look tomorrow unless someone else
> > > gets a chance to get to it today.
> > >
> > > Joel
> > >
> > > On Tue, Feb 03, 2015 at 04:11:57PM -0800, Sumit Rangwala wrote:
> > > > On Tue, Feb 3, 2015 at 3:37 PM, Joel Koshy 
> > wrote:
> > > >
> > > > > Hey Sumit,
> > > > >
> > > > > I thought you would be providing the actual steps to reproduce :)
> > > > >
> > > >
> > > > I want to but some proprietary code prevents me to do it.
> > > >
> > > >
> > > > > Nevertheless, can you get all the relevant logs: state change logs
> > and
> > > > > controller logs at the very least and if possible server logs and
> > send
> > > > > those over?
> > > > >
> > > >
> > > > Here are all the logs you requested (there are three brokers in my
> > setup
> > > > k1, k2, k3): http://d.pr/f/1kprY/2quHBRRT (Gmail has issue with the
> > file)
> > > >
> > > >
> > > > Sumit
> > > >
> > > >
> > > >
> > > >
> > > > >
> > > > > Joel
> > > > >
> > > > > On Tue, Feb 03, 2015 at 03:27:43PM -0800, Sumit Rangwala wrote:
> > > > > > In my setup kafka brokers are set for auto topic creation. In the
> > > > > scenario
> > > > > > below a node informs other nodes (currently 5 in total) about  a
> > number
> > > > > of
> > > > > > new (non-existent) topics, and  all the nodes almost
> > simultaneously open
> > > > > a
> > > > > > consumer for each of those topics. Sometime later another node
> > informs
> > > > > all
> > > > > > other nodes of a new list of topics and each node, if they find
> > that an
> > > > > > older topic exists in kafka, goes ahead and deletes the older
> > topic.
> > > > > What I
> > > > > > have found is that many of the topics stay in the "marked for
> > deletion"
> > > > > > state forever.
> > > > > >
> > > > > >
> > > > > > I get the list of topics using ZkUtils.getAllTopics(zkClient) and
> > delete
> > > > > > topics using AdminUtils.deleteTopic(zkClient, topic). Since many
> > nodes
> > > > > > might try to delete the same topic at the same time I do
> > > > > > see ZkNodeExistsException while deleting the topic, which I catch
> > an
> > > > > > ignore. (e.g.,
> > org.apache.zookeeper.KeeperException$NodeExistsException:
> > > > > > KeeperErrorCode = NodeExists for
> > > > > > /admin/delete_topics/LAX1-GRIFFIN-r13-1423001701601)
> > > > > >
> > > > > > # State of one deleted topic on kafka brokers:
> > > > > > Topic:LAX1-GRIFFIN-r13-1423001701601 PartitionCount:8
> > ReplicationFactor:1
> > > > > > Configs:
> > > > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 0 Leader: -1
> > Replicas:
> > > > > > 49558 Isr:
> > > > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 1 Leader: -1
> > Replicas:
> > > > > > 49554 Isr:
> > > > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 2 Leader: -1
> > Replicas:
> > > > > > 49557 Isr:
> > > > > > Topic: LAX1-G

Re: Issue with topic deletion

2015-02-03 Thread Harsha
Sumit,
    Let's say you are deleting an older topic "test1": do you have any
    consumers running simultaneously for the topic "test1" while
    the topic deletion is going on?
-Harsha

On Tue, Feb 3, 2015, at 06:17 PM, Joel Koshy wrote:
> Thanks for the logs - will take a look tomorrow unless someone else
> gets a chance to get to it today.
> 
> Joel
> 
> On Tue, Feb 03, 2015 at 04:11:57PM -0800, Sumit Rangwala wrote:
> > On Tue, Feb 3, 2015 at 3:37 PM, Joel Koshy  wrote:
> > 
> > > Hey Sumit,
> > >
> > > I thought you would be providing the actual steps to reproduce :)
> > >
> > 
> > I want to but some proprietary code prevents me to do it.
> > 
> > 
> > > Nevertheless, can you get all the relevant logs: state change logs and
> > > controller logs at the very least and if possible server logs and send
> > > those over?
> > >
> > 
> > Here are all the logs you requested (there are three brokers in my setup
> > k1, k2, k3): http://d.pr/f/1kprY/2quHBRRT (Gmail has issue with the file)
> > 
> > 
> > Sumit
> > 
> > 
> > 
> > 
> > >
> > > Joel
> > >
> > > On Tue, Feb 03, 2015 at 03:27:43PM -0800, Sumit Rangwala wrote:
> > > > In my setup kafka brokers are set for auto topic creation. In the
> > > scenario
> > > > below a node informs other nodes (currently 5 in total) about  a number
> > > of
> > > > new (non-existent) topics, and  all the nodes almost simultaneously open
> > > a
> > > > consumer for each of those topics. Sometime later another node informs
> > > all
> > > > other nodes of a new list of topics and each node, if they find that an
> > > > older topic exists in kafka, goes ahead and deletes the older topic.
> > > What I
> > > > have found is that many of the topics stay in the "marked for deletion"
> > > > state forever.
> > > >
> > > >
> > > > I get the list of topics using ZkUtils.getAllTopics(zkClient) and delete
> > > > topics using AdminUtils.deleteTopic(zkClient, topic). Since many nodes
> > > > might try to delete the same topic at the same time I do
> > > > see ZkNodeExistsException while deleting the topic, which I catch an
> > > > ignore. (e.g., org.apache.zookeeper.KeeperException$NodeExistsException:
> > > > KeeperErrorCode = NodeExists for
> > > > /admin/delete_topics/LAX1-GRIFFIN-r13-1423001701601)
> > > >
> > > > # State of one deleted topic on kafka brokers:
> > > > Topic:LAX1-GRIFFIN-r13-1423001701601 PartitionCount:8 
> > > > ReplicationFactor:1
> > > > Configs:
> > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 0 Leader: -1 Replicas:
> > > > 49558 Isr:
> > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 1 Leader: -1 Replicas:
> > > > 49554 Isr:
> > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 2 Leader: -1 Replicas:
> > > > 49557 Isr:
> > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 3 Leader: -1 Replicas:
> > > > 49558 Isr:
> > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 4 Leader: -1 Replicas:
> > > > 49554 Isr:
> > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 5 Leader: -1 Replicas:
> > > > 49557 Isr:
> > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 6 Leader: -1 Replicas:
> > > > 49558 Isr:
> > > > Topic: LAX1-GRIFFIN-r13-1423001701601 Partition: 7 Leader: -1 Replicas:
> > > > 49554 Isr:
> > > >
> > > >
> > > > # Controller log says
> > > >
> > > > [2015-02-03 22:59:03,399] INFO [delete-topics-thread-49554], Deletion 
> > > > for
> > > > replicas 49557,49554,49558 for partition
> > > >
> > > [LAX1-GRIFFIN-r13-1423001701601,0],[LAX1-GRIFFIN-r13-1423001701601,6],[LAX1-GRIFFIN-r13-1423001701601,5],[LAX1-GRIFFIN-r13-1423001701601,3],[LAX1-GRIFFIN-r13-1423001701601,7],[LAX1-GRIFFIN-r13-1423001701601,1],[LAX1-GRIFFIN-r13-1423001701601,4],[LAX1-GRIFFIN-r13-1423001701601,2]
> > > > of topic LAX1-GRIFFIN-r13-1423001701601 in progress
> > > > (kafka.controller.TopicDeletionManager$DeleteTopicsThread)
> > > >
> > > > current time: Tue Feb  3 23:20:58 UTC 2015
> > > >
> > > >
> > > > Since I don't know the delete topic algorithm, I am not sure why sure
> > > these
> > > > topics are

Re: create topic does not really execute successfully

2015-02-02 Thread Harsha
Xinyi,
Do you have any logs from when kafka-topics.sh is unable to create the
topic dirs? Apart from this, make sure you point log.dirs to a different
dir other than /tmp/kafka-logs, since that dir gets deleted when
your machine restarts and is not a place to store your topic data.
-Harsha

On Mon, Feb 2, 2015, at 07:03 PM, Xinyi Su wrote:
> Hi,
> 
> -bash-4.1$ bin/kafka-topics.sh  --zookeeper :2181 --create
> --topic
> zerg.hydra --partitions 3 --replication-factor 2
> Created topic "zerg.hydra".
> 
> -bash-4.1$ ls -lrt /tmp/kafka-logs/zerg.hydra-2
> total 0
> -rw-r--r-- 1 users0 Feb  3 02:58 .log
> -rw-r--r-- 1 users 10485760 Feb  3 02:58 .index
> 
> We can see the topic partition directory is created after the shell
> command
> is executed since I have not sent any data yet.
> But this shell command is not always executed successfully, sometimes it
> fails to create the directory for topic-partition.
> 
> Besides, the broker number is greater than replication factor in my kafka
> cluster.
> 
> Thanks.
> Xinyi
> 
> On 2 February 2015 at 22:24, Gwen Shapira  wrote:
> 
> > IIRC, the directory is only created after you send data to the topic.
> >
> > Do you get errors when your producer sends data?
> >
> > Another common issue is that you specify replication-factor 3 when you
> > have fewer than 3 brokers.
> >
> > Gwen
> >
> > On Mon, Feb 2, 2015 at 2:34 AM, Xinyi Su  wrote:
> > > Hi,
> > >
> > > I am using Kafka_2.9.2-0.8.2-beta.  When I use kafka-topic.sh to create
> > > topic, I observed sometimes the topic is not really created successfully
> > as
> > > the output shows in console.
> > >
> > > Below is my command line:
> > >
> > > # bin/kafka-topics.sh  --zookeeper :2181 --create --topic zerg.hydra
> > > --partitions 3 --replication-factor 3
> > >
> > > The command prints "Created topic xxx", but the local storage directory
> > > used for this topic under "log.dirs" is not created at all. Normally,
> > > there should be some folders like zerg.hydra-0, zerg.hydra-1..., named
> > > according to partition id and the assignment policy.
> > >
> > > I have come across this issue about four times; the disk is not full and
> > > the directory access permissions are fine. Do you know the cause of this
> > > issue?
> > >
> > > Thanks.
> > >
> > > Xinyi
> >
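
Gwen's second point is easy to check: list the registered broker ids in
ZooKeeper and compare the count against the requested replication factor.
A sketch, assuming the stock zookeeper-shell.sh and a local ensemble:

  bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
  # e.g. [0, 1] means two live brokers, so --replication-factor 3 cannot work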


Re: unable to delete topic with 0.8.2 rc2

2015-01-26 Thread Harsha
Jun,
  I made an attempt at fixing that issue as part of this JIRA:
  https://issues.apache.org/jira/browse/KAFKA-1507 .
As Jay pointed out, there should be an admin API; if there is more info on
that API, I am interested in adding/fixing this issue.
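
Until such an API lands, a workable manual sequence looks roughly like the
following; the topic name and ZooKeeper address are placeholders, and the
config changes require a broker restart:

  # 1. stop all producers and consumers of the topic
  # 2. in server.properties: auto.create.topics.enable=false
  #    and delete.topic.enable=true, then restart the brokers
  # 3. issue the delete
  bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic mytopic
  # 4. confirm it is gone
  bin/kafka-topics.sh --zookeeper localhost:2181 --list
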
Thanks,
Harsha

On Mon, Jan 26, 2015, at 07:28 AM, Jun Rao wrote:
> Yes, that's the issue. Currently, topics can be auto-created on
> TopicMetadataRequest, which can be issued from both the producer and the
> consumer. To prevent that, you would need to stop the producer and the
> consumer before deleting a topic. We plan to address this issue once we
> have a separate request for creating topics.
> 
> Thanks,
> 
> Jun
> 
> On Mon, Jan 26, 2015 at 7:21 AM, Harsha  wrote:
> 
> > There could be another case: if you have auto.create.topics.enable
> > set to true (it's true by default), any TopicMetadataRequest can
> > recreate topics. So if you issue a delete-topic command while you have
> > producers or consumers running that issue a
> > TopicMetadataRequest, then the topic will be recreated.
> > -Harsha
> >
> > On Sun, Jan 25, 2015, at 11:26 PM, Jason Rosenberg wrote:
> > > cversion did change (incremented by 2) when I issued the delete command.
> > >
> > > From the logs on the controller broker (also the leader for the topic), it
> > > looks like the delete proceeds, and then the topic gets recreated
> > > immediately (highlighted in yellow). It appears it may be due to a
> > > consumer client app trying to consume the topic. Also, the consumer is not
> > > yet updated to 0.8.2 (it's using 0.8.1.1); perhaps that's part of the
> > > problem?
> > >
> > >
> > > 2015-01-26 07:02:14,281  INFO
> > > [ZkClient-EventThread-21-myzkserver:12345/mynamespace]
> > > controller.PartitionStateMachine$DeleteTopicsListener -
> > > [DeleteTopicsListener on 6]: Starting topic deletion for topics
> > > mytopic
> > > 2015-01-26 07:02:14,282  INFO [delete-topics-thread-6]
> > > controller.TopicDeletionManager$DeleteTopicsThread -
> > > [delete-topics-thread-6], Handling deletion for topics mytopic
> > > 2015-01-26 07:02:14,286  INFO [delete-topics-thread-6]
> > > controller.TopicDeletionManager$DeleteTopicsThread -
> > > [delete-topics-thread-6], Deletion of topic mytopic (re)started
> > > 2015-01-26 07:02:14,286  INFO [delete-topics-thread-6]
> > > controller.TopicDeletionManager - [Topic Deletion Manager 6], Topic
> > > deletion callback for mytopic
> > > 2015-01-26 07:02:14,289  INFO [delete-topics-thread-6]
> > > controller.TopicDeletionManager - [Topic Deletion Manager 6],
> > > Partition deletion callback for [mytopic,0]
> > > 2015-01-26 07:02:14,295  INFO [delete-topics-thread-6]
> > > controller.ReplicaStateMachine - [Replica state machine on controller
> > > 6]: Invoking state change to OfflineReplica for replicas
> > >
> > [Topic=mytopic,Partition=0,Replica=7],[Topic=mytopic,Partition=0,Replica=6]
> > > 2015-01-26 07:02:14,303  INFO [delete-topics-thread-6]
> > > controller.KafkaController - [Controller 6]: New leader and ISR for
> > > partition [mytopic,0] is {"leader":6,"leader_epoch":1,"isr":[6]}
> > > 2015-01-26 07:02:14,312  INFO [delete-topics-thread-6]
> > > controller.KafkaController - [Controller 6]: New leader and ISR for
> > > partition [mytopic,0] is {"leader":-1,"leader_epoch":2,"isr":[]}
> > > 2015-01-26 07:02:14,313  INFO [delete-topics-thread-6]
> > > controller.ReplicaStateMachine - [Replica state machine on controller
> > > 6]: Invoking state change to ReplicaDeletionStarted for replicas
> > >
> > [Topic=mytopic,Partition=0,Replica=7],[Topic=mytopic,Partition=0,Replica=6]
> > > 2015-01-26 07:02:14,313  INFO [kafka-request-handler-5]
> > > server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 6]
> > > Removed fetcher for partitions [mytopic,0]
> > > 2015-01-26 07:02:14,313  INFO [kafka-request-handler-7]
> > > server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 6]
> > > Removed fetcher for partitions [mytopic,0]
> > > 2015-01-26 07:02:14,313  INFO [kafka-request-handler-7]
> > > log.OffsetIndex - Deleting index
> > > /mypath/mytopic-0/.index
> > > 2015-01-26 07:02:14,313  INFO [kafka-request-handler-7] log.LogManager
> > > - Deleted log for partition [mytopic,0] in /mypath/mytopic-0.
> > > 2015-01-26 07:02:14,314  INFO [Controller-6-to-broker-6-send-thr

Re: unable to delete topic with 0.8.2 rc2

2015-01-26 Thread Harsha
There could be another case: if you have auto.create.topics.enable
set to true (it's true by default), any TopicMetadataRequest can
recreate topics. So if you issue a delete-topic command while you have
producers or consumers running that issue a
TopicMetadataRequest, then the topic will be recreated.
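
A hedged server.properties sketch of the two settings involved here (both
take effect only after a broker restart):

  # stop TopicMetadataRequests from transparently recreating deleted topics
  auto.create.topics.enable=false
  # off by default in 0.8.2; required for --delete to actually remove data
  delete.topic.enable=true
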
-Harsha

On Sun, Jan 25, 2015, at 11:26 PM, Jason Rosenberg wrote:
> cversion did change (incremented by 2) when I issued the delete command.
> 
> From the logs on the controller broker (also the leader for the topic), it
> looks like the delete proceeds, and then the topic gets recreated
> immediately (highlighted in yellow). It appears it may be due to a
> consumer client app trying to consume the topic. Also, the consumer is not
> yet updated to 0.8.2 (it's using 0.8.1.1); perhaps that's part of the
> problem?
> 
> 
> 2015-01-26 07:02:14,281  INFO
> [ZkClient-EventThread-21-myzkserver:12345/mynamespace]
> controller.PartitionStateMachine$DeleteTopicsListener -
> [DeleteTopicsListener on 6]: Starting topic deletion for topics
> mytopic
> 2015-01-26 07:02:14,282  INFO [delete-topics-thread-6]
> controller.TopicDeletionManager$DeleteTopicsThread -
> [delete-topics-thread-6], Handling deletion for topics mytopic
> 2015-01-26 07:02:14,286  INFO [delete-topics-thread-6]
> controller.TopicDeletionManager$DeleteTopicsThread -
> [delete-topics-thread-6], Deletion of topic mytopic (re)started
> 2015-01-26 07:02:14,286  INFO [delete-topics-thread-6]
> controller.TopicDeletionManager - [Topic Deletion Manager 6], Topic
> deletion callback for mytopic
> 2015-01-26 07:02:14,289  INFO [delete-topics-thread-6]
> controller.TopicDeletionManager - [Topic Deletion Manager 6],
> Partition deletion callback for [mytopic,0]
> 2015-01-26 07:02:14,295  INFO [delete-topics-thread-6]
> controller.ReplicaStateMachine - [Replica state machine on controller
> 6]: Invoking state change to OfflineReplica for replicas
> [Topic=mytopic,Partition=0,Replica=7],[Topic=mytopic,Partition=0,Replica=6]
> 2015-01-26 07:02:14,303  INFO [delete-topics-thread-6]
> controller.KafkaController - [Controller 6]: New leader and ISR for
> partition [mytopic,0] is {"leader":6,"leader_epoch":1,"isr":[6]}
> 2015-01-26 07:02:14,312  INFO [delete-topics-thread-6]
> controller.KafkaController - [Controller 6]: New leader and ISR for
> partition [mytopic,0] is {"leader":-1,"leader_epoch":2,"isr":[]}
> 2015-01-26 07:02:14,313  INFO [delete-topics-thread-6]
> controller.ReplicaStateMachine - [Replica state machine on controller
> 6]: Invoking state change to ReplicaDeletionStarted for replicas
> [Topic=mytopic,Partition=0,Replica=7],[Topic=mytopic,Partition=0,Replica=6]
> 2015-01-26 07:02:14,313  INFO [kafka-request-handler-5]
> server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 6]
> Removed fetcher for partitions [mytopic,0]
> 2015-01-26 07:02:14,313  INFO [kafka-request-handler-7]
> server.ReplicaFetcherManager - [ReplicaFetcherManager on broker 6]
> Removed fetcher for partitions [mytopic,0]
> 2015-01-26 07:02:14,313  INFO [kafka-request-handler-7]
> log.OffsetIndex - Deleting index
> /mypath/mytopic-0/.index
> 2015-01-26 07:02:14,313  INFO [kafka-request-handler-7] log.LogManager
> - Deleted log for partition [mytopic,0] in /mypath/mytopic-0.
> 2015-01-26 07:02:14,314  INFO [Controller-6-to-broker-6-send-thread]
> controller.ReplicaStateMachine - [Replica state machine on controller
> 6]: Invoking state change to ReplicaDeletionSuccessful for replicas
> [Topic=mytopic,Partition=0,Replica=6]
> 2015-01-26 07:02:14,314  INFO [delete-topics-thread-6]
> controller.TopicDeletionManager$DeleteTopicsThread -
> [delete-topics-thread-6], Handling deletion for topics mytopic
> 2015-01-26 07:02:14,316  INFO [delete-topics-thread-6]
> controller.TopicDeletionManager$DeleteTopicsThread -
> [delete-topics-thread-6], Deletion for replicas 7 for partition
> [mytopic,0] of topic mytopic in progress
> 2015-01-26 07:02:14,316  INFO [Controller-6-to-broker-7-send-thread]
> controller.ReplicaStateMachine - [Replica state machine on controller
> 6]: Invoking state change to ReplicaDeletionSuccessful for replicas
> [Topic=mytopic,Partition=0,Replica=7]
> 2015-01-26 07:02:14,316  INFO [delete-topics-thread-6]
> controller.TopicDeletionManager$DeleteTopicsThread -
> [delete-topics-thread-6], Handling deletion for topics mytopic
> 2015-01-26 07:02:14,318  INFO [delete-topics-thread-6]
> controller.ReplicaStateMachine - [Replica state machine on controller
> 6]: Invoking state change to NonExistentReplica for replicas
> [Topic=mytopic,Partition=0,Replica=6],[Topic=mytopic,Partition=0,Replica=7]
> 2015-01-26 07:02:14,318  INFO [delete-topics-thread-6]
&
