Re: EBCDIC support

2014-08-26 Thread Robert Hodges
Hi Gwen,

I like the approach of converting to general forms in most cases, which is
an early-bound design.

You can also take a late-bound approach: leave the data in its original
form but add metadata that enables translation later if needed. This is
necessary if you have homogeneous consumers downstream and the translation
is lossy in any sense; just as one example, floating-point numbers can
lose precision on conversion.

Cheers, Robert


On Mon, Aug 25, 2014 at 5:36 PM, Gwen Shapira gshap...@cloudera.com wrote:

 Personally, I like converting data before writing to Kafka, so I can
 easily support many consumers who don't know about EBCDIC.

 A third option is to have a consumer that reads EBCDIC data from one
 Kafka topic and writes ASCII to another Kafka topic. This has the
 benefits of preserving the raw data in Kafka, in case you need it for
 troubleshooting, and also supporting non-EBCDIC consumers.

 The cost is a more complex architecture, but if you already have a
 stream processing system around (Storm, Samza, Spark), it can be an
 easy addition.
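
 The core of that bridge is just a charset conversion. A minimal sketch
 (class and method names are illustrative, and Cp1047 is one common EBCDIC
 code page - the right one depends on the mainframe):

   import java.nio.charset.Charset;
   import java.nio.charset.StandardCharsets;

   public class EbcdicBridge {
       // Decode EBCDIC (code page Cp1047 here) and re-encode as ASCII.
       static byte[] toAscii(byte[] ebcdic) {
           String text = new String(ebcdic, Charset.forName("Cp1047"));
           return text.getBytes(StandardCharsets.US_ASCII);
       }
   }

 The surrounding consumer/producer plumbing is the standard client code;
 only the payload transformation above is specific to EBCDIC.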


 On Mon, Aug 25, 2014 at 5:28 PM,  sonali.parthasara...@accenture.com
 wrote:
  Thanks Gwen! Makes sense. So I'll have to weigh the pros and cons of
  doing an EBCDIC-to-ASCII conversion before sending to Kafka vs. using an
  EBCDIC library afterwards in the consumer.
 
  Thanks!
  S
 
  -Original Message-
  From: Gwen Shapira [mailto:gshap...@cloudera.com]
  Sent: Monday, August 25, 2014 5:22 PM
  To: users@kafka.apache.org
  Subject: Re: EBCDIC support
 
  Hi Sonali,
 
  Kafka doesn't really care about EBCDIC or any other format - for Kafka,
  bits are just bits. So they are all supported.

  Kafka does not read data from a socket, though. Well, it does, but the
  data has to be sent by a Kafka producer. Most likely you'll need to
  implement a producer that gets the data from the socket and sends it as
  a message to Kafka. The content of the message can be anything,
  including EBCDIC.

  Then you'll need a consumer to read the data from Kafka and do something
  with it - the consumer will need to know what to do with a message that
  contains EBCDIC data. Perhaps you have EBCDIC libraries you can reuse
  there.
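
  A minimal sketch of such a socket-reading producer, assuming the 0.8
  producer API; host, port, topic name, and the fixed-size framing are all
  illustrative - real framing depends on what the mainframe sends:

    import java.io.InputStream;
    import java.net.Socket;
    import java.util.Arrays;
    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class MainframeSocketProducer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker1:9092");
            // DefaultEncoder passes byte[] payloads through untouched,
            // so the EBCDIC bytes land in Kafka exactly as read.
            props.put("serializer.class", "kafka.serializer.DefaultEncoder");
            Producer<byte[], byte[]> producer =
                new Producer<byte[], byte[]>(new ProducerConfig(props));
            try (Socket socket = new Socket("mainframe-host", 9999)) {
                InputStream in = socket.getInputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    // One message per read; a real protocol would frame records.
                    producer.send(new KeyedMessage<byte[], byte[]>(
                        "mainframe-raw", Arrays.copyOf(buf, n)));
                }
            } finally {
                producer.close();
            }
        }
    }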
 
  Hope this helps.
 
  Gwen
 
  On Mon, Aug 25, 2014 at 5:14 PM,  sonali.parthasara...@accenture.com
 wrote:
  Hey all,
 
  This might seem like a silly question, but does Kafka have support for
  EBCDIC? Say I had to read data from an IBM mainframe via a TCP/IP socket
  where the data resides in EBCDIC format - can Kafka read that directly?
 
  Thanks,
  Sonali
 



Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Philippe Laflamme
Hi,

We're using compaction on some of our topics. The log cleaner output showed
that it kicked in when the broker was restarted. But now, after several
months of uptime, the log cleaner output is empty. The compacted topics'
segment files don't seem to be cleaned up (compacted) anymore.

Is there any way to confirm that the log cleaner has really stopped (the log
file doesn't mention any shutdown or error)?

I've searched the mailing list for a similar problem but found nothing.
Is there anything that might explain our issue? We're using 0.8.1.1 and the
topics have very low traffic.

Philippe


Re: Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Guozhang Wang
Hello Philippe,

You can get a thread dump and check whether the log cleaner thread is
still alive or whether it is blocked.

Also, are you using compression on the messages stored on the server?

Guozhang



On Tue, Aug 26, 2014 at 8:15 AM, Philippe Laflamme plafla...@hopper.com
wrote:

 Hi,

 We're using compaction on some of our topics. The log cleaner output showed
 that it kicked in when the broker was restarted. But now, after several
 months of uptime, the log cleaner output is empty. The compacted topics'
 segment files don't seem to be cleaned up (compacted) anymore.

 Is there any way to confirm that the log cleaner has really stopped (the log
 file doesn't mention any shutdown or error)?

 I've searched the mailing list for a similar problem but found nothing.
 Is there anything that might explain our issue? We're using 0.8.1.1 and the
 topics have very low traffic.

 Philippe




-- 
-- Guozhang


Re: Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Philippe Laflamme
Here's the thread dump:
https://gist.github.com/plaflamme/634411b162f56d8f48f6

There's a log-cleaner thread sleeping. Would there be any reason why it's
not writing to its log-cleaner.log file if it's still running?

We are not using compression (unless it's on by default?)

Thanks,
Philippe


On Tue, Aug 26, 2014 at 11:25 AM, Guozhang Wang wangg...@gmail.com wrote:

 Hello Philippe,

 You can get a thread dump and check whether the log cleaner thread is
 still alive or whether it is blocked.

 Also, are you using compression on the messages stored on the server?

 Guozhang



 On Tue, Aug 26, 2014 at 8:15 AM, Philippe Laflamme plafla...@hopper.com
 wrote:

  Hi,

  We're using compaction on some of our topics. The log cleaner output showed
  that it kicked in when the broker was restarted. But now, after several
  months of uptime, the log cleaner output is empty. The compacted topics'
  segment files don't seem to be cleaned up (compacted) anymore.

  Is there any way to confirm that the log cleaner has really stopped (the
  log file doesn't mention any shutdown or error)?

  I've searched the mailing list for a similar problem but found nothing.
  Is there anything that might explain our issue? We're using 0.8.1.1 and
  the topics have very low traffic.

  Philippe
 



 --
 -- Guozhang



More partitions than consumers

2014-08-26 Thread Vetle Leinonen-Roeim

Hi,

As far as I can see, the (otherwise great and very helpful)
documentation isn't explicit about this, but: given more partitions than
consumers, will all messages still be read?

I've discussed this with some people, and there is some disagreement, so
a clear answer to this would be greatly appreciated!


Regards,
Vetle


Re: More partitions than consumers

2014-08-26 Thread Gwen Shapira
I hope this helps:

https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example

If you have more partitions than you have threads, some threads will
receive data from multiple partitions.
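
A minimal sketch of the shape of that wiki example (high-level consumer
API; group id, topic name, and thread count are illustrative):

  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;
  import java.util.Properties;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import kafka.consumer.Consumer;
  import kafka.consumer.ConsumerConfig;
  import kafka.consumer.ConsumerIterator;
  import kafka.consumer.KafkaStream;
  import kafka.javaapi.consumer.ConsumerConnector;

  public class GroupConsumer {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("zookeeper.connect", "localhost:2181");
          props.put("group.id", "my-group");
          ConsumerConnector consumer =
              Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

          // Ask for 2 streams; with, say, 6 partitions, each stream is
          // handed messages from multiple partitions.
          int numThreads = 2;
          Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
          topicCountMap.put("my-topic", numThreads);
          Map<String, List<KafkaStream<byte[], byte[]>>> streams =
              consumer.createMessageStreams(topicCountMap);

          ExecutorService executor = Executors.newFixedThreadPool(numThreads);
          for (final KafkaStream<byte[], byte[]> stream : streams.get("my-topic")) {
              executor.submit(new Runnable() {
                  public void run() {
                      ConsumerIterator<byte[], byte[]> it = stream.iterator();
                      while (it.hasNext()) {
                          // Messages from all assigned partitions arrive
                          // here, interleaved.
                          byte[] message = it.next().message();
                      }
                  }
              });
          }
      }
  }

Every partition is assigned to exactly one stream, so as long as the group
has at least one live consumer thread, all messages are read.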

On Tue, Aug 26, 2014 at 10:00 AM, Vetle Leinonen-Roeim ve...@roeim.net wrote:
 Hi,

 As far as I can see, the (otherwise great and very helpful) documentation
 isn't explicit about this, but: given more partitions than consumers, will
 all messages still be read?

 I've discussed this with some people, and there is some disagreement, so a
 clear answer to this would be greatly appreciated!

 Regards,
 Vetle


Re: More partitions than consumers

2014-08-26 Thread Vetle Leinonen-Roeim

Exactly what I'm looking for. Thanks! :)

On 26.08.14 19:08, Gwen Shapira wrote:

I hope this helps:

https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example

If you have more partitions than you have threads, some threads will
receive data from multiple partitions.

On Tue, Aug 26, 2014 at 10:00 AM, Vetle Leinonen-Roeim ve...@roeim.net wrote:

Hi,

As far as I can see, the (otherwise great and very helpful) documentation
isn't explicit about this, but: given more partitions than consumers, will
all messages still be read?

I've discussed this with some people, and there is some disagreement, so a
clear answer to this would be greatly appreciated!

Regards,
Vetle


Re: Migrating data from old brokers to new brokers question

2014-08-26 Thread Marcin Michalski
I am running 0.8.1.1, and I thought that the partition reassignment tools
could do this job; I just wasn't sure whether this was the best way to do it.
I will try this out in a staging environment first and then perform the same
steps in prod.

Thanks,
marcin


On Mon, Aug 25, 2014 at 7:23 PM, Joe Stein joe.st...@stealth.ly wrote:

 Marcin, that is a typical task now.  What version of Kafka are you running?

 Take a look at
 https://kafka.apache.org/documentation.html#basic_ops_cluster_expansion
 and

 https://kafka.apache.org/documentation.html#basic_ops_increase_replication_factor

 Basically you can do a --generate to get the existing JSON topology. Take
 the results of Current partition replica assignment (the first JSON that
 is output), make whatever changes you want (like sed-ing an old node for a
 new node, or adding more replicas to increase the replication factor), and
 then --execute.

 With lots of data this takes time, so you will want to run --verify to see
 what is in progress... it's a good idea to do a node at a time (or even a
 topic at a time), however you want to manage it, and wait for each step to
 complete.

 The preferred replica is simply the first one in the list of replicas.
 The kafka-preferred-replica-election.sh tool just makes that replica the
 leader, as this is not automatic yet.

 If you are running a version prior to 0.8.1.1 it might make sense to
 upgrade the old nodes first then run reassign to the new servers.
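
 For example, the flow those docs describe looks like this (broker ids,
 file names, and the topic name are illustrative):

   # topics.json
   {"version": 1, "topics": [{"topic": "my-topic"}]}

   bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
     --topics-to-move-json-file topics.json \
     --broker-list "100,101,102,103" --generate

   # Edit the proposed assignment (e.g. trim each "replicas" list to 3
   # entries to lower the replication factor), save it, then:
   bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
     --reassignment-json-file reassignment.json --execute

   bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
     --reassignment-json-file reassignment.json --verify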


 /***
  Joe Stein
  Founder, Principal Consultant
  Big Data Open Source Security LLC
  http://www.stealth.ly
  Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
 /


 On Mon, Aug 25, 2014 at 8:59 PM, Marcin Michalski mmichal...@tagged.com
 wrote:

  Hi, I would like to migrate my Kafka setup from old servers to new servers.
  Let's say I have 8 really old servers that have the Kafka topics/partitions
  replicated 4 ways, and I want to migrate the data to 4 brand new servers
  with a replication factor of 3. I wonder if anyone has ever performed this
  type of migration?

  Will auto rebalancing take care of this automatically if I do the
  following?

  Let's say I bring old broker id 1 down and start new server broker id 100
  up. Is there a way to migrate all of the data of the topics where broker
  id 1 was the leader over to the new broker 100?

  Or do I need to use bin/kafka-preferred-replica-election.sh to reassign
  the topics/partitions from old broker 1 to broker 100, and then just keep
  doing the same thing until all of the old brokers are decommissioned?

  Also, would kafka-preferred-replica-election.sh let me actually lower the
  number of replicas as well, if I just make sure that a given
  topic/partition was only elected 3 times versus 4?

  Thanks for your insight,
  Marcin
 



Re: Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Guozhang Wang
The log cleaner will only wake up and start cleaning when there are logs
that are dirty enough to be cleaned. So if a topic-partition does not get
enough traffic to become dirty, the log cleaner will not kick in for that
partition again.

Guozhang


On Tue, Aug 26, 2014 at 9:02 AM, Philippe Laflamme plafla...@hopper.com
wrote:

 Here's the thread dump:
 https://gist.github.com/plaflamme/634411b162f56d8f48f6

 There's a log-cleaner thread sleeping. Would there be any reason why it's
 not writing to its log-cleaner.log file if it's still running?

 We are not using compression (unless it's on by default?)

 Thanks,
 Philippe


 On Tue, Aug 26, 2014 at 11:25 AM, Guozhang Wang wangg...@gmail.com
 wrote:

  Hello Philippe,
 
   You can get a thread dump and check whether the log cleaner thread is
   still alive or whether it is blocked.

   Also, are you using compression on the messages stored on the server?
 
  Guozhang
 
 
 
  On Tue, Aug 26, 2014 at 8:15 AM, Philippe Laflamme plafla...@hopper.com
  wrote:

   Hi,

   We're using compaction on some of our topics. The log cleaner output showed
   that it kicked in when the broker was restarted. But now, after several
   months of uptime, the log cleaner output is empty. The compacted topics'
   segment files don't seem to be cleaned up (compacted) anymore.

   Is there any way to confirm that the log cleaner has really stopped (the
   log file doesn't mention any shutdown or error)?

   I've searched the mailing list for a similar problem but found nothing.
   Is there anything that might explain our issue? We're using 0.8.1.1 and
   the topics have very low traffic.

   Philippe
  
 
 
 
  --
  -- Guozhang
 




-- 
-- Guozhang


Re: Handling send failures with async producer

2014-08-26 Thread Jonathan Weeks
I am interested in this very topic as well. Also, can the trunk version of the 
producer be used with an existing 0.8.1.1 broker installation, or does one need 
to wait for 0.8.2 (at least)?

Thanks,

-Jonathan

On Aug 26, 2014, at 12:35 PM, Ryan Persaud ryan_pers...@symantec.com wrote:

 Hello,
 
 I'm looking to insert log lines from log files into kafka, but I'm concerned 
 with handling asynchronous send() failures.  Specifically, if some of the log 
 lines fail to send, I want to be notified of the failure so that I can 
 attempt to resend them.
 
 Based on previous threads on the mailing list 
 (http://comments.gmane.org/gmane.comp.apache.kafka.user/1322), I know that 
 the trunk version of kafka supports callbacks for dealing with failures.  
 However, the callback function is not passed any metadata that can be used by 
 the producer end to reference the original message.  Including the key of the 
 message in the RecordMetadata seems like it would be really useful for 
 recovery purposes.  Is anyone using the callback functionality to trigger 
 resends of failed messages?  If so, how are they tying the callbacks to 
 messages?  Is anyone using other methods for handling async errors/resending 
 today?  I can’t imagine that I am the only one trying to do this.  I asked 
 this question on the IRC channel today, and it sparked some discussion, but I 
 wanted to hear from a wider audience.
 
 Thanks for the information,
 -Ryan
 



Re: Handling send failures with async producer

2014-08-26 Thread Christian Csar
TL;DR: I use one Callback per job I send to Kafka and include that sort
of information by reference in the Callback instance.

Our system currently moves data from beanstalkd to Kafka for historical
reasons, so we use the callback to either delete or release the message,
depending on success. The org.apache.kafka.clients.producer.Callback I
give to the send method is an instance of a class that stores all the
additional information I need to process the callback. Remember that
callbacks run on the Kafka producer's thread, so they must be fast to
avoid constraining throughput. My callback ends up putting information
about the call to beanstalkd into another executor service for later
processing.
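
A minimal sketch of this pattern, assuming the new (trunk) producer API;
JobCallback, the job id, and the handoff executor are illustrative:

  import java.util.concurrent.ExecutorService;
  import org.apache.kafka.clients.producer.Callback;
  import org.apache.kafka.clients.producer.ProducerRecord;
  import org.apache.kafka.clients.producer.RecordMetadata;

  // One instance per send; it carries everything needed to finish the job.
  public class JobCallback implements Callback {
      private final ProducerRecord record;   // the original message
      private final long jobId;              // e.g. a beanstalkd job id
      private final ExecutorService followUp;

      public JobCallback(ProducerRecord record, long jobId,
                         ExecutorService followUp) {
          this.record = record;
          this.jobId = jobId;
          this.followUp = followUp;
      }

      // Runs on the producer's I/O thread: stay fast, hand off real work.
      public void onCompletion(RecordMetadata metadata,
                               final Exception exception) {
          followUp.submit(new Runnable() {
              public void run() {
                  // On success (exception == null) delete the job; on
                  // failure, release it for retry and/or resend 'record'.
              }
          });
      }
  }

Usage is producer.send(record, new JobCallback(record, jobId, executor));
because the callback holds a reference to the original record, nothing in
RecordMetadata is needed to tie the result back to the message.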

Christian

On 08/26/2014 12:35 PM, Ryan Persaud wrote:
 Hello,
 
 I'm looking to insert log lines from log files into kafka, but I'm concerned 
 with handling asynchronous send() failures.  Specifically, if some of the log 
 lines fail to send, I want to be notified of the failure so that I can 
 attempt to resend them.
 
 Based on previous threads on the mailing list 
 (http://comments.gmane.org/gmane.comp.apache.kafka.user/1322), I know that 
 the trunk version of kafka supports callbacks for dealing with failures.  
 However, the callback function is not passed any metadata that can be used by 
 the producer end to reference the original message.  Including the key of the 
 message in the RecordMetadata seems like it would be really useful for 
 recovery purposes.  Is anyone using the callback functionality to trigger 
 resends of failed messages?  If so, how are they tying the callbacks to 
 messages?  Is anyone using other methods for handling async errors/resending 
 today?  I can’t imagine that I am the only one trying to do this.  I asked 
 this question on the IRC channel today, and it sparked some discussion, but I 
 wanted to hear from a wider audience.
 
 Thanks for the information,
 -Ryan
 
 






Re: Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Philippe Laflamme
Yes, and in order to force it to compact regardless of the volume, we've
set the segment.ms configuration key on the topic. According to the
docs[1], that should force a compaction at a certain time interval.

We're seeing the segment rolling, but not the compaction.

[1]http://kafka.apache.org/documentation.html#configuration


On Tue, Aug 26, 2014 at 2:20 PM, Guozhang Wang wangg...@gmail.com wrote:

 The log cleaner will only wake up and start cleaning when there are logs
 that are dirty enough to be cleaned. So if a topic-partition does not get
 enough traffic to become dirty, the log cleaner will not kick in for that
 partition again.

 Guozhang


 On Tue, Aug 26, 2014 at 9:02 AM, Philippe Laflamme plafla...@hopper.com
 wrote:

  Here's the thread dump:
  https://gist.github.com/plaflamme/634411b162f56d8f48f6
 
  There's a log-cleaner thread sleeping. Would there be any reason why it's
  not writing to its log-cleaner.log file if it's still running?
 
  We are not using compression (unless it's on by default?)
 
  Thanks,
  Philippe
 
 
  On Tue, Aug 26, 2014 at 11:25 AM, Guozhang Wang wangg...@gmail.com
  wrote:
 
   Hello Philippe,
  
    You can get a thread dump and check whether the log cleaner thread is
    still alive or whether it is blocked.

    Also, are you using compression on the messages stored on the server?
  
   Guozhang
  
  
  
    On Tue, Aug 26, 2014 at 8:15 AM, Philippe Laflamme plafla...@hopper.com
    wrote:

     Hi,

     We're using compaction on some of our topics. The log cleaner output showed
     that it kicked in when the broker was restarted. But now, after several
     months of uptime, the log cleaner output is empty. The compacted topics'
     segment files don't seem to be cleaned up (compacted) anymore.

     Is there any way to confirm that the log cleaner has really stopped (the
     log file doesn't mention any shutdown or error)?

     I've searched the mailing list for a similar problem but found nothing.
     Is there anything that might explain our issue? We're using 0.8.1.1 and
     the topics have very low traffic.

     Philippe
  
  
  
   --
   -- Guozhang
  
 



 --
 -- Guozhang



Re: Kafka Log Cleaner Stopped (?)

2014-08-26 Thread Joel Koshy
If there are no dirty logs then the cleaner does not log anything.

You can try changing the dirty ratio config
(min.cleanable.dirty.ratio) to something smaller than the default
(which is 0.5).
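
For a per-topic override, something like this should work (topic name is
illustrative; min.cleanable.dirty.ratio is the topic-level setting, and
log.cleaner.min.cleanable.ratio is the broker-wide default):

  bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
    --topic my-compacted-topic --config min.cleanable.dirty.ratio=0.1

A lower ratio makes the cleaner willing to recopy a log when only a small
fraction of it is dirty, at the cost of more cleaning I/O.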

Joel

On Tue, Aug 26, 2014 at 03:56:20PM -0400, Philippe Laflamme wrote:
 Yes, and in order to force it to compact regardless of the volume, we've
 set the segment.ms configuration key on the topic. According to the
 docs[1], that should force a compaction at a certain time interval.
 
 We're seeing the segment rolling, but not the compaction.
 
 [1]http://kafka.apache.org/documentation.html#configuration
 
 
 On Tue, Aug 26, 2014 at 2:20 PM, Guozhang Wang wangg...@gmail.com wrote:
 
  The log cleaner will only wake up and start cleaning when there are logs
  that are dirty enough to be cleaned. So if a topic-partition does not get
  enough traffic to become dirty, the log cleaner will not kick in for that
  partition again.
 
  Guozhang
 
 
  On Tue, Aug 26, 2014 at 9:02 AM, Philippe Laflamme plafla...@hopper.com
  wrote:
 
   Here's the thread dump:
   https://gist.github.com/plaflamme/634411b162f56d8f48f6
  
   There's a log-cleaner thread sleeping. Would there be any reason why it's
   not writing to its log-cleaner.log file if it's still running?
  
   We are not using compression (unless it's on by default?)
  
   Thanks,
   Philippe
  
  
   On Tue, Aug 26, 2014 at 11:25 AM, Guozhang Wang wangg...@gmail.com
   wrote:
  
Hello Philippe,
   
You can get a thread dump and check whether the log cleaner thread is
still alive or whether it is blocked.

Also, are you using compression on the messages stored on the server?
   
Guozhang
   
   
   
On Tue, Aug 26, 2014 at 8:15 AM, Philippe Laflamme plafla...@hopper.com
wrote:

 Hi,

 We're using compaction on some of our topics. The log cleaner output showed
 that it kicked in when the broker was restarted. But now, after several
 months of uptime, the log cleaner output is empty. The compacted topics'
 segment files don't seem to be cleaned up (compacted) anymore.

 Is there any way to confirm that the log cleaner has really stopped (the
 log file doesn't mention any shutdown or error)?

 I've searched the mailing list for a similar problem but found nothing.
 Is there anything that might explain our issue? We're using 0.8.1.1 and
 the topics have very low traffic.

 Philippe

   
   
   
--
-- Guozhang
   
  
 
 
 
  --
  -- Guozhang
 



Re: Handling send failures with async producer

2014-08-26 Thread Jun Rao
When you create the callback, you can pass in the original message.

Thanks,

Jun


On Tue, Aug 26, 2014 at 12:35 PM, Ryan Persaud ryan_pers...@symantec.com
wrote:

 Hello,

 I'm looking to insert log lines from log files into kafka, but I'm
 concerned with handling asynchronous send() failures.  Specifically, if
 some of the log lines fail to send, I want to be notified of the failure so
 that I can attempt to resend them.

 Based on previous threads on the mailing list (
 http://comments.gmane.org/gmane.comp.apache.kafka.user/1322), I know that
 the trunk version of kafka supports callbacks for dealing with failures.
 However, the callback function is not passed any metadata that can be used
 by the producer end to reference the original message.  Including the key
 of the message in the RecordMetadata seems like it would be really useful
 for recovery purposes.  Is anyone using the callback functionality to
 trigger resends of failed messages?  If so, how are they tying the
 callbacks to messages?  Is anyone using other methods for handling async
 errors/resending today?  I can’t imagine that I am the only one trying to
 do this.  I asked this question on the IRC channel today, and it sparked
 some discussion, but I wanted to hear from a wider audience.

 Thanks for the information,
 -Ryan




[DISCUSSION] Error Handling and Logging at Kafka

2014-08-26 Thread Guozhang Wang
Hello all,

We want to kick off some discussions about error handling and logging
conventions. With a number of great patch contributions to Kafka recently,
it is a good time for us to sit down and think a little bit more about the
coding style guidelines we have (http://kafka.apache.org/coding-guide.html).

People at LinkedIn have discussed some observed issues with error handling
and logging verbosity, and here is a summary of the bullet points covered:

https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Error+Handling+and+Logging

We would like to collect as many comments as possible before we move ahead
to those logging cleanup JIRAs mentioned in the wiki, so please feel free
to shoot any thoughts and suggestions around this topic.

-- Guozhang


Re: Handling send failures with async producer

2014-08-26 Thread Jay Kreps
Also, Jonathan, to answer your question, the new producer on trunk is
running in prod for some use cases at LinkedIn and can be used with
any 0.8.x version.

-Jay

On Tue, Aug 26, 2014 at 12:38 PM, Jonathan Weeks
jonathanbwe...@gmail.com wrote:
 I am interested in this very topic as well. Also, can the trunk version of 
 the producer be used with an existing 0.8.1.1 broker installation, or does 
 one need to wait for 0.8.2 (at least)?

 Thanks,

 -Jonathan

 On Aug 26, 2014, at 12:35 PM, Ryan Persaud ryan_pers...@symantec.com wrote:

 Hello,

 I'm looking to insert log lines from log files into kafka, but I'm concerned 
 with handling asynchronous send() failures.  Specifically, if some of the 
 log lines fail to send, I want to be notified of the failure so that I can 
 attempt to resend them.

 Based on previous threads on the mailing list 
 (http://comments.gmane.org/gmane.comp.apache.kafka.user/1322), I know that 
 the trunk version of kafka supports callbacks for dealing with failures.  
 However, the callback function is not passed any metadata that can be used 
 by the producer end to reference the original message.  Including the key of 
 the message in the RecordMetadata seems like it would be really useful for 
 recovery purposes.  Is anyone using the callback functionality to trigger 
 resends of failed messages?  If so, how are they tying the callbacks to 
 messages?  Is anyone using other methods for handling async errors/resending 
 today?  I can’t imagine that I am the only one trying to do this.  I asked 
 this question on the IRC channel today, and it sparked some discussion, but 
 I wanted to hear from a wider audience.

 Thanks for the information,
 -Ryan




Re: [DISCUSSION] Error Handling and Logging at Kafka

2014-08-26 Thread Joe Stein
Hi Guozhang, thanks for kicking this off. I made some comments in the wiki
(and we can continue the discussion there), but I think this type of
collaborative mailing list discussion and Confluence writeup is a great way
for different discussions about the same thing in different organizations
to coalesce in code.

This may also be a good place for folks looking to get their feet wet
contributing code to do so.

/***
 Joe Stein
 Founder, Principal Consultant
 Big Data Open Source Security LLC
 http://www.stealth.ly
 Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/


On Tue, Aug 26, 2014 at 6:22 PM, Guozhang Wang wangg...@gmail.com wrote:

 Hello all,

 We want to kick off some discussions about error handling and logging
 conventions. With a number of great patch contributions to Kafka recently,
 it is a good time for us to sit down and think a little bit more about the
 coding style guidelines we have (http://kafka.apache.org/coding-guide.html
 ).

 People at LinkedIn have discussed some observed issues with error handling
 and logging verbosity, and here is a summary of the bullet points covered:


 https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Error+Handling+and+Logging

 We would like to collect as many comments as possible before we move ahead
 to those logging cleanup JIRAs mentioned in the wiki, so please feel free
 to shoot any thoughts and suggestions around this topic.

 -- Guozhang



Re: kafka unit test in Java, TestUtils.choosePort throws NoSuchMethodError

2014-08-26 Thread Parin Jogani
Can anyone help?




On Sat, Aug 23, 2014 at 9:38 PM, Parin Jogani parin.jog...@gmail.com
wrote:

 Kafka:

   <dependency>
     <groupId>org.apache.kafka</groupId>
     <artifactId>kafka_2.9.2</artifactId>
     <version>0.8.1.1</version>
     <scope>test</scope>
   </dependency>


 My tests are in Java (JUnit), so I don't know how Scala would make a
 difference.

 Hope this helps!

 -Parin
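
 This particular NoSuchMethodError is the classic signature of a Scala
 binary mismatch: kafka_2.9.2 is compiled against Scala 2.9.2, and the
 scala.Predef.intWrapper signature changed in Scala 2.10, so a 2.10.x
 scala-library leaking onto the test classpath would produce exactly this
 failure. A possible fix, assuming Maven and a transitively pulled-in
 2.10.x scala-library, is to pin the matching version:

   <dependency>
     <groupId>org.scala-lang</groupId>
     <artifactId>scala-library</artifactId>
     <version>2.9.2</version>
   </dependency>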



 On Sat, Aug 23, 2014 at 7:54 PM, Guozhang Wang wangg...@gmail.com wrote:

 Parin,

 Which scala version are you using? And which kafka version are you working
 on?

 Guozhang


 On Sat, Aug 23, 2014 at 7:16 PM, Parin Jogani parin.jog...@gmail.com
 wrote:

  I am trying to create a unit test case for Kafka in Java with a simple
  call:

    Properties props = TestUtils.createBrokerConfig(1, TestUtils.choosePort(), true);

  It fails on:

    java.lang.NoSuchMethodError: scala.Predef$.intWrapper(I)Lscala/runtime/RichInt;
        at kafka.utils.TestUtils$.choosePorts(TestUtils.scala:68)
        at kafka.utils.TestUtils$.choosePort(TestUtils.scala:79)
        at kafka.utils.TestUtils.choosePort(TestUtils.scala)
        at com.ebay.jetstream.event.channel.kafka.test.KafkaTest.getKafkaConfig(KafkaTest.java:31)
        at com.ebay.jetstream.event.channel.kafka.test.KafkaTest.runKafkaTest(KafkaTest.java:22)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:59)
        at org.junit.internal.runners.MethodRoadie.runTestMethod(MethodRoadie.java:98)
        at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:79)
        at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:87)
        at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:77)
        at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:42)
        at org.junit.internal.runners.JUnit4ClassRunner.invokeTestMethod(JUnit4ClassRunner.java:88)
        at org.junit.internal.runners.JUnit4ClassRunner.runMethods(JUnit4ClassRunner.java:51)
        at org.junit.internal.runners.JUnit4ClassRunner$1.run(JUnit4ClassRunner.java:44)
        at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:27)
        at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:37)
        at org.junit.internal.runners.JUnit4ClassRunner.run(JUnit4ClassRunner.java:42)
        at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
        at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
        at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

  I also tried:

    scala.collection.Iterator<Properties> props = TestUtils.createBrokerConfigs(1, true).iterator();

  Same result.

  This is a simple call to create a Kafka server; I don't know what is
  going wrong.
 
 
 
 
 
  On Tue, Aug 19, 2014 at 10:37 PM, Parin Jogani parin.jog...@gmail.com
  wrote:
 
   I am trying to create unit test case for Kafka with a simple call

     Properties props = TestUtils.createBrokerConfig(1, TestUtils.choosePort(), true);

   It fails on

     java.lang.NoSuchMethodError: scala.Predef$.intWrapper(I)Lscala/runtime/RichInt;
         at kafka.utils.TestUtils$.choosePorts(TestUtils.scala:68)
         at kafka.utils.TestUtils$.choosePort(TestUtils.scala:79)
         at kafka.utils.TestUtils.choosePort(TestUtils.scala)
         at com.ebay.jetstream.event.channel.kafka.test.KafkaTest.getKafkaConfig(KafkaTest.java:31)
         at com.ebay.jetstream.event.channel.kafka.test.KafkaTest.runKafkaTest(KafkaTest.java:22)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at org.junit.internal.runners.TestMethod.invoke(TestMethod.java:59)
         at org.junit.internal.runners.MethodRoadie.runTestMethod(MethodRoadie.java:98)
         at org.junit.internal.runners.MethodRoadie$2.run(MethodRoadie.java:79)
         at org.junit.internal.runners.MethodRoadie.runBeforesThenTestThenAfters(MethodRoadie.java:87)
         at org.junit.internal.runners.MethodRoadie.runTest(MethodRoadie.java:77)
         at org.junit.internal.runners.MethodRoadie.run(MethodRoadie.java:42)
         at