Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Philippe Laflamme
Hi,

We're using compaction on some of our topics. The log cleaner output showed
that it kicked in when the broker was restarted, but now, after several
months of uptime, the log cleaner output is empty. The compacted topics'
segment files don't seem to be cleaned up (compacted) anymore.

Is there any way to confirm that the log cleaner has really stopped (the log
file doesn't mention any shutdown / error)?

I've searched the mailing list for a similar problem but found nothing.
Is there anything that might explain our issue? We're using 0.8.1.1 and the
topics have very low traffic.

Philippe


Re: Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Guozhang Wang
Hello Philippe,

You can get a thread dump and check whether the log cleaner thread is still
alive or whether it is blocked.
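For example, with the JDK's jstack (`jstack <broker-pid>`) you can look for a thread named "kafka-log-cleaner-thread". The self-contained Java sketch below performs the same check via ThreadMXBean; since it is not running inside a broker, it creates a stand-in thread with that name (the 60-second sleep is only illustrative).

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class CleanerThreadCheck {
    // Scan all JVM threads for one whose name contains namePart;
    // return "name: state", or null if no such thread exists.
    static String findThread(String namePart) {
        for (ThreadInfo info : ManagementFactory.getThreadMXBean().dumpAllThreads(false, false)) {
            if (info.getThreadName().contains(namePart)) {
                return info.getThreadName() + ": " + info.getThreadState();
            }
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the broker's cleaner thread; in 0.8.x the real thread
        // is named "kafka-log-cleaner-thread". Against a live broker you would
        // run `jstack <pid>` instead and grep for that name.
        Thread cleaner = new Thread(new Runnable() {
            public void run() {
                try { Thread.sleep(60000); } catch (InterruptedException e) { }
            }
        }, "kafka-log-cleaner-thread");
        cleaner.setDaemon(true);
        cleaner.start();
        Thread.sleep(200); // give the thread time to reach sleep()

        System.out.println(findThread("log-cleaner"));
        // e.g. kafka-log-cleaner-thread: TIMED_WAITING
    }
}
```

An idle-but-alive cleaner typically shows up as TIMED_WAITING; a missing entry, or a thread stuck in BLOCKED, would point to the problem.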

Also, are you using some compression on the messages stored on server?

Guozhang




-- 
-- Guozhang


Re: Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Philippe Laflamme
Here's the thread dump:
https://gist.github.com/plaflamme/634411b162f56d8f48f6

There's a log-cleaner thread sleeping. Would there be any reason why it's
not writing to its log-cleaner.log file if it's still running?

We are not using compression (unless it's on by default?)

Thanks,
Philippe




More partitions than consumers

2014-08-26 Thread Vetle Leinonen-Roeim

Hi,

As far as I can see, the (otherwise great and very helpful) 
documentation isn't explicit about this, but: given more partitions than 
consumers, will all messages still be read?


I've discussed this with some people, and there is some disagreement, so
a clear answer to this would be greatly appreciated!


Regards,
Vetle


Re: More partitions than consumers

2014-08-26 Thread Gwen Shapira
I hope this helps:

https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example

"if you have more partitions than you have threads, some threads will
receive data from multiple partitions"
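In other words, every partition always ends up owned by some thread, so all messages are still read. The sketch below illustrates this with a simplified round-robin assignment (not Kafka's exact range-assignment algorithm):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class PartitionAssignment {
    // Simplified round-robin assignment of partitions to consumer threads.
    // Every partition gets exactly one owner, so no messages are skipped;
    // with more partitions than threads, some threads simply own several.
    static Map<Integer, List<Integer>> assign(int partitions, int threads) {
        Map<Integer, List<Integer>> owned = new TreeMap<Integer, List<Integer>>();
        for (int t = 0; t < threads; t++) {
            owned.put(t, new ArrayList<Integer>());
        }
        for (int p = 0; p < partitions; p++) {
            owned.get(p % threads).add(p);
        }
        return owned;
    }

    public static void main(String[] args) {
        // 5 partitions spread over 2 consumer threads.
        System.out.println(assign(5, 2)); // {0=[0, 2, 4], 1=[1, 3]}
    }
}
```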



Re: More partitions than consumers

2014-08-26 Thread Vetle Leinonen-Roeim

Exactly what I'm looking for. Thanks! :)







Re: Migrating data from old brokers to new brokers question

2014-08-26 Thread Marcin Michalski
I am running on 0.8.1.1 and I thought that the partition reassignment tool
could do this job; I just wasn't sure it was the best way to do it.
I will try this out in a staging environment first and then perform the same
steps in prod.

Thanks,
marcin


On Mon, Aug 25, 2014 at 7:23 PM, Joe Stein  wrote:

> Marcin, that is a typical task now.  What version of Kafka are you running?
>
> Take a look at
> https://kafka.apache.org/documentation.html#basic_ops_cluster_expansion
> and
>
> https://kafka.apache.org/documentation.html#basic_ops_increase_replication_factor
>
> Basically you can do a --generate to get the existing JSON topology, take
> the results of "Current partition replica assignment" (the first JSON that
> is output), make whatever changes you want (like sed-ing an old node for a
> new node, or adding more replicas to increase the replication factor), and
> then --execute.
>
> With lots of data this takes time, so you will want to run --verify to see
> what is in progress... it is a good idea to do a node at a time (or even a
> topic at a time), however you want to manage it, and wait for each step to
> complete.
>
> The "preferred" replica is simply the first one in the list of replicas.
>  The kafka-preferred-replica-election.sh just makes that replica the leader
> as this is not automatic yet.
>
> If you are running a version prior to 0.8.1.1 it might make sense to
> upgrade the old nodes first then run reassign to the new servers.
>
>
> /***
>  Joe Stein
>  Founder, Principal Consultant
>  Big Data Open Source Security LLC
>  http://www.stealth.ly
>  Twitter: @allthingshadoop 
> /
>
>
> On Mon, Aug 25, 2014 at 8:59 PM, Marcin Michalski 
> wrote:
>
> > Hi, I would like to migrate my Kafka setup from old servers to new servers.
> > Let's say I have 8 really old servers that have the kafka topics/partitions
> > replicated 4 ways, and I want to migrate the data to 4 brand new servers
> > with a replication factor of 3. I wonder if anyone has ever performed this
> > type of migration?
> >
> > Will auto rebalancing take care of this automatically if I do the
> > following?
> >
> > Let's say I bring down old broker id 1 and start up new server broker id
> > 100; is there a way to migrate all of the data of the topic (where broker
> > id 1 was the leader) over to the new broker 100?
> >
> > Or do I need to use bin/kafka-preferred-replica-election.sh to reassign
> > the topics/partitions from old broker 1 to broker 100? And then just keep
> > doing the same thing until all of the old brokers are decommissioned?
> >
> > Also, would kafka-preferred-replica-election.sh let me actually lower the
> > number of replicas as well, if I just make sure that a given
> > topic/partition was only elected 3 times versus 4?
> >
> > Thanks for your insight,
> > Marcin
> >
>
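For reference, the JSON file handed to kafka-reassign-partitions.sh --execute looks like the fragment below (the topic name and broker ids here are made up):

```json
{"version": 1,
 "partitions": [
   {"topic": "my-topic", "partition": 0, "replicas": [100, 101, 102]},
   {"topic": "my-topic", "partition": 1, "replicas": [101, 102, 103]}
 ]}
```

Listing three replicas per partition is what sets the replication factor to 3; the first broker in each list is the "preferred" replica.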


Re: Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Guozhang Wang
The log cleaner will only wake up and start the cleaning work when there are
logs that are "dirty" enough to be cleaned. So if a topic-partition does not
get enough traffic to become dirty, the log cleaner will not kick in for that
partition again.

Guozhang





-- 
-- Guozhang


Handling send failures with async producer

2014-08-26 Thread Ryan Persaud
Hello,

I'm looking to insert log lines from log files into kafka, but I'm concerned 
with handling asynchronous send() failures.  Specifically, if some of the log 
lines fail to send, I want to be notified of the failure so that I can attempt 
to resend them.

Based on previous threads on the mailing list 
(http://comments.gmane.org/gmane.comp.apache.kafka.user/1322), I know that the 
trunk version of kafka supports callbacks for dealing with failures.  However, 
the callback function is not passed any metadata that can be used by the 
producer end to reference the original message.  Including the key of the 
message in the RecordMetadata seems like it would be really useful for recovery 
purposes.  Is anyone using the callback functionality to trigger resends of 
failed messages?  If so, how are they tying the callbacks to messages?  Is 
anyone using other methods for handling async errors/resending today?  I can’t 
imagine that I am the only one trying to do this.  I asked this question on the 
IRC channel today, and it sparked some discussion, but I wanted to hear from a 
wider audience.

Thanks for the information,
-Ryan



Re: Handling send failures with async producer

2014-08-26 Thread Jonathan Weeks
I am interested in this very topic as well. Also, can the trunk version of the 
producer be used with an existing 0.8.1.1 broker installation, or does one need 
to wait for 0.8.2 (at least)?

Thanks,

-Jonathan




Re: Handling send failures with async producer

2014-08-26 Thread Christian Csar
TL;DR: I use one Callback per job I send to Kafka and include that sort
of information by reference in the Callback instance.

Our system is currently moving data from beanstalkd to Kafka for historical
reasons, so we use the callback to either delete or release the message
depending on success. The org.apache.kafka.clients.producer.Callback I give
to the send method is an instance of a class that stores all the additional
information I need to process the callback. Remember that the callback runs
on the Kafka producer thread, so it must be fast to avoid constraining
throughput. My callback ends up putting information about the call to
beanstalkd into another executor service for later processing.

Christian







Re: Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Philippe Laflamme
Yes, and in order to "force" it to compact regardless of the volume, we've
set the "segment.ms" configuration key on the topic. According to the
docs[1], that should force a compaction at a certain time interval.

We're seeing the segment rolling, but not the compaction.

[1]http://kafka.apache.org/documentation.html#configuration




Re: Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Joel Koshy
If there are no "dirty" logs then the cleaner does not log anything.

You can try changing the dirty ratio config
(min.cleanable.dirty.ratio) to something smaller than the default
(which is 0.5).

Joel
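A sketch of that eligibility test (simplified from the broker's actual logic, with made-up byte counts) shows why a low-traffic compacted topic can sit below the default 0.5 ratio indefinitely:

```java
public class DirtyRatioCheck {
    // Simplified version of the cleaner's eligibility test: a log is worth
    // cleaning once dirtyBytes / (cleanBytes + dirtyBytes) reaches the
    // min.cleanable.dirty.ratio threshold (default 0.5).
    static boolean cleanable(long cleanBytes, long dirtyBytes, double minDirtyRatio) {
        long total = cleanBytes + dirtyBytes;
        return total > 0 && (double) dirtyBytes / total >= minDirtyRatio;
    }

    public static void main(String[] args) {
        // A low-traffic compacted topic: 1024 MB already cleaned, 20 MB new data.
        System.out.println(cleanable(1024, 20, 0.5));  // false: cleaner stays idle
        System.out.println(cleanable(1024, 20, 0.01)); // true: lowered ratio triggers it
    }
}
```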



Re: Handling send failures with async producer

2014-08-26 Thread Jun Rao
When you create the callback, you can pass in the original message.

Thanks,

Jun
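A minimal, self-contained sketch of that pattern follows. The Callback interface here is a stand-in, not the real org.apache.kafka.clients.producer.Callback, and the retry queue is only illustrative; the point is that the callback instance closes over the original message, so no broker-supplied metadata is needed to know what to resend.

```java
import java.util.ArrayList;
import java.util.List;

public class ResendSketch {
    // Stand-in for org.apache.kafka.clients.producer.Callback.
    interface Callback {
        void onCompletion(Exception error);
    }

    // The callback holds a reference to the original message.
    static class ResendOnFailure implements Callback {
        private final String originalLine;
        private final List<String> retryQueue;

        ResendOnFailure(String originalLine, List<String> retryQueue) {
            this.originalLine = originalLine;
            this.retryQueue = retryQueue;
        }

        public void onCompletion(Exception error) {
            // In the real client this runs on the producer's I/O thread:
            // keep it cheap and hand the work off to a queue/executor.
            if (error != null) {
                retryQueue.add(originalLine);
            }
        }
    }

    public static void main(String[] args) {
        List<String> retryQueue = new ArrayList<String>();
        Callback cb = new ResendOnFailure("2014-08-26 some log line", retryQueue);
        cb.onCompletion(new Exception("send failed")); // simulate a failed send
        cb.onCompletion(null);                         // simulate a successful send
        System.out.println(retryQueue); // [2014-08-26 some log line]
    }
}
```

With the real producer you would pass one such instance per record to send(record, callback); only failed lines land on the retry queue.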




[DISCUSSION] Error Handling and Logging at Kafka

2014-08-26 Thread Guozhang Wang
Hello all,

We want to kick off some discussions about error handling and logging
conventions. With a number of great patch contributions to Kafka recently,
it is a good time for us to sit down and think a little bit more about the
coding style guidelines we have (http://kafka.apache.org/coding-guide.html).

People at LinkedIn have discussed some observed issues with error handling
and logging verbosity, and here is a summary of the bullet points covered:

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Error+Handling+and+Logging

We would like to collect as many comments as possible before we move ahead
with the logging cleanup JIRAs mentioned in the wiki, so please feel free
to share any thoughts and suggestions around this topic.

-- Guozhang


Re: Handling send failures with async producer

2014-08-26 Thread Jay Kreps
Also, Jonathan, to answer your question: the new producer on trunk is
running in prod for some use cases at LinkedIn and can be used with
any 0.8.x version.

-Jay



Re: [DISCUSSION] Error Handling and Logging at Kafka

2014-08-26 Thread Joe Stein
Hi Guozhang, thanks for kicking this off. I made some comments in the wiki
(and we can continue the discussion there), but I think this type of
collaborative mailing list discussion and Confluence writeup is a great way
for discussions about the same thing in different organizations to coalesce
into code.

This may also be a good place for folks looking to get their feet wet
contributing code to do so.

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop 
/




Re: kafka unit test in Java, TestUtils choosePort sends NoSuchMethodError

2014-08-26 Thread Parin Jogani
Can anyone help?




On Sat, Aug 23, 2014 at 9:38 PM, Parin Jogani 
wrote:

> Kafka:
>
>> <dependency>
>>   <groupId>org.apache.kafka</groupId>
>>   <artifactId>kafka_2.9.2</artifactId>
>>   <version>0.8.1.1</version>
>>   <scope>test</scope>
>> </dependency>
>
>
> My tests are in Java (JUnit), so I don't know how Scala would make a
> difference.
>
> Hope this helps!
>
> -Parin
>
>
>
> On Sat, Aug 23, 2014 at 7:54 PM, Guozhang Wang  wrote:
>
>> Parin,
>>
>> Which scala version are you using? And which kafka version are you working
>> on?
>>
>> Guozhang
>>
>>
>> On Sat, Aug 23, 2014 at 7:16 PM, Parin Jogani 
>> wrote:
>>
>> > I am trying to create unit test case for Kafka in Java with a simple
>> call
>> >
>> >  Properties props =TestUtils.createBrokerConfig(1,
>> > TestUtils.choosePort(), true);
>> >
>> > It fails on:
>> >
>> > java.lang.NoSuchMethodError: scala.Predef$.intWrapper(I)Lscala/runtime/RichInt;
>> >   at kafka.utils.TestUtils$.choosePorts(TestUtils.scala:68)
>> >   at kafka.utils.TestUtils$.choosePort(TestUtils.scala:79)
>> >   at kafka.utils.TestUtils.choosePort(TestUtils.scala)
>> >   at com.ebay.jetstream.event.channel.kafka.test.KafkaTest.getKafkaConfig(KafkaTest.java:31)
>> >   at com.ebay.jetstream.event.channel.kafka.test.KafkaTest.runKafkaTest(KafkaTest.java:22)
>> >   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >   at java.lang.reflect.Method.invoke(Method.java:597)
>> >   at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:59)
>> >   at org.junit.internal.runners.MethodRoadie.runTestMethod(MethodRoadie.java:98)
>> >   at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:79)
>> >   at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:87)
>> >   at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:77)
>> >   at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:42)
>> >   at org.junit.internal.runners.JUnit4ClassRunner.invokeTestMethod(JUnit4ClassRunner.java:88)
>> >   at org.junit.internal.runners.JUnit4ClassRunner.runMethods(JUnit4ClassRunner.java:51)
>> >   at org.junit.internal.runners.JUnit4ClassRunner$1.run(JUnit4ClassRunner.java:44)
>> >   at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:27)
>> >   at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:37)
>> >   at org.junit.internal.runners.JUnit4ClassRunner.run(JUnit4ClassRunner.java:42)
>> >   at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>> >   at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>> >   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>> >   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>> >   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>> >   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
>> > >
>> > I also tried:
>> >
>> > scala.collection.Iterator props
>> > = TestUtils.createBrokerConfigs(1, true).iterator();
>> > Same result.
>> >
>> > This is a simple call to create a Kafka server; I don't know what is
>> > going wrong.

Re: kafka unit test in Java, TestUtils choosePort sends NoSuchMethodError

2014-08-26 Thread Jun Rao
Could you make sure that you are using the scala 2.9.2 jar?

Thanks,

Jun

