Connectivity problem with controller breaks cluster

2016-12-27 Thread Felipe Santos
Hi,

We are using Kafka 0.10.1.0.

We have three brokers and three ZooKeeper nodes.

Today brokers 1 and 2 lost connectivity with broker 3, and I saw that broker 3
was the controller.
I saw a lot of messages like:
"[rw_campaign_broadcast_nextel_734fae3d46d4da63ee36d2b6fd25a77f3f7c3ef5,9]
on broker 3: Shrinking ISR for partition
[rw_campaign_broadcast_nextel_734fae3d46d4da63ee36d2b6fd25a77f3f7c3ef5,9]
from 1,2,3 to 3"

On brokers 1 and 2:

[2016-12-27 08:10:05,501] WARN [ReplicaFetcherThread-0-3], Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest@108fd1b0 (kafka.server.ReplicaFetcherThread)
java.io.IOException: Connection to 3 was disconnected before the response was read
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:115)
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:112)
at scala.Option.foreach(Option.scala:257)
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:112)
at kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:108)
at kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:137)
at kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollContinuously$extension(NetworkClientBlockingOps.scala:143)
at kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:108)
at kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:253)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:238)
at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:103)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)

All my consumers and producers went down.
I tried to consume and produce with kafka-console-producer.sh and
kafka-console-consumer.sh, and both failed.
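
For anyone hitting the same thing, the controller and the per-partition ISR state
can be checked roughly like this (the localhost:2181 address and the bin/ paths
are assumptions about a stock install; the topic is the one from the log above):

bin/zookeeper-shell.sh localhost:2181 get /controller
bin/kafka-topics.sh --zookeeper localhost:2181 --describe \
  --topic rw_campaign_broadcast_nextel_734fae3d46d4da63ee36d2b6fd25a77f3f7c3ef5

The first command prints the JSON stored in the /controller znode, which includes
the id of the current controller broker; the second shows the leader and ISR for
each partition. The shrink messages themselves are presumably the leader dropping
followers that stayed behind for longer than replica.lag.time.max.ms.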

The only solution was to restart broker 3; after that the problem was corrected.

Any tips?
-- 
Felipe Santos


Null Pointer Kafka Client

2016-12-26 Thread Felipe Santos
I am using Kafka 0.10.1.0. Sometimes on the client I get a NullPointerException:

" java.lang.NullPointerException
at
org.apache.kafka.common.record.ByteBufferInputStream.read(org/apache/kafka/common/record/ByteBufferInputStream.java:34)
at
java.util.zip.CheckedInputStream.read(java/util/zip/CheckedInputStream.java:59)
at
java.util.zip.GZIPInputStream.readUByte(java/util/zip/GZIPInputStream.java:266)
at
java.util.zip.GZIPInputStream.readUShort(java/util/zip/GZIPInputStream.java:258)
at
java.util.zip.GZIPInputStream.readHeader(java/util/zip/GZIPInputStream.java:164)
at
java.util.zip.GZIPInputStream.(java/util/zip/GZIPInputStream.java:79)
at
java.util.zip.GZIPInputStream.(java/util/zip/GZIPInputStream.java:91)
at
org.apache.kafka.common.record.Compressor.wrapForInput(org/apache/kafka/common/record/Compressor.java:280)
at
org.apache.kafka.common.record.MemoryRecords$RecordsIterator.(org/apache/kafka/common/record/MemoryRecords.java:247)
at
org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(org/apache/kafka/common/record/MemoryRecords.java:316)
at
org.apache.kafka.common.record.MemoryRecords$RecordsIterator.makeNext(org/apache/kafka/common/record/MemoryRecords.java:222)
at
org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(org/apache/kafka/common/utils/AbstractIterator.java:79)
at
org.apache.kafka.common.utils.AbstractIterator.hasNext(org/apache/kafka/common/utils/AbstractIterator.java:45)
at
org.apache.kafka.clients.consumer.internals.Fetcher.parseFetchedData(org/apache/kafka/clients/consumer/internals/Fetcher.java:679)
at
org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(org/apache/kafka/clients/consumer/internals/Fetcher.java:425)
at
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(org/apache/kafka/clients/consumer/KafkaConsumer.java:1021)
at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(org/apache/kafka/clients/consumer/KafkaConsumer.java:979)
at java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)
at
RUBY.thread_runner(/opt/logstash/vendor/local_gems/90fefca7/logstash-input-kafka-6.2.0/lib/logstash/inputs/kafka.rb:246)
at java.lang.Thread.run(java/lang/Thread.java:745)
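
In our case the consumer is driven by the Logstash kafka input, but the exception
surfaces from KafkaConsumer.poll() itself, i.e. from an ordinary loop like the
sketch below (broker address, topic, group id and deserializers are placeholders,
not our real settings):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PollLoop {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");   // placeholder
        props.put("group.id", "example-group");           // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic")); // placeholder
            while (true) {
                // The NPE in the trace above is raised from inside poll(), while the
                // fetcher decompresses a GZIP-compressed message set.
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}

One thing that might be worth ruling out is access to a single KafkaConsumer
instance from more than one thread, since the consumer is documented as not
thread-safe.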

-- 
Felipe Santos


Re: Oversized Message 40k

2016-11-23 Thread Felipe Santos
Thanks guys for the information, I will do some performance tests.
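
For reference, the size-related settings involved seem to be the following, going
by the 0.10 docs (the values shown are just the documented ~1 MB defaults, not a
recommendation):

# broker (server.properties)
message.max.bytes=1000012          # largest message the broker will accept
replica.fetch.max.bytes=1048576    # should be >= message.max.bytes

# producer
max.request.size=1048576           # caps the size of a single produce request

# new consumer
max.partition.fetch.bytes=1048576  # should generally be >= the largest message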

On Wed, 23 Nov 2016 at 05:14, Ignacio Solis <iso...@igso.net> wrote:

> At LinkedIn we have a number of use cases for large messages.  We stick to
> the 1MB message limit at the high end though.
>
> Nacho
>
> On Tue, Nov 22, 2016 at 6:11 PM, Gwen Shapira <g...@confluent.io> wrote:
>
> > This has been our experience as well. I think the largest we've seen
> > in production is 50MB.
> >
> > If you have performance numbers you can share for the large messages,
> > I think we'll all appreciate it :)
> >
> > On Tue, Nov 22, 2016 at 1:04 PM, Tauzell, Dave
> > <dave.tauz...@surescripts.com> wrote:
> > > I ran tests with a mix of messages, some as large as 20MB. These large
> > > messages do slow down processing, but it still works.
> > >
> > > -Dave
> > >
> > > -Original Message-
> > > From: h...@confluent.io [mailto:h...@confluent.io]
> > > Sent: Tuesday, November 22, 2016 1:41 PM
> > > To: users@kafka.apache.org
> > > Subject: Re: Oversized Message 40k
> > >
> > > The default config handles messages up to 1MB so you should be fine.
> > >
> > > -hans
> > >
> > >> On Nov 22, 2016, at 4:00 AM, Felipe Santos <felip...@gmail.com> wrote:
> > >>
> > >> I read on documentation that kafka is not optimized for big messages,
> > >> what is considered a big message?
> > >>
> > >> For us the messages will be on average from 20k ~ 40k? Is this a real
> > >> problem?
> > >>
> > >> Thanks
> > >> --
> > >> Felipe Santos
> >
> >
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>
>
>
> --
> Nacho - Ignacio Solis - iso...@igso.net
>


Oversized Message 40k

2016-11-22 Thread Felipe Santos
I read in the documentation that Kafka is not optimized for big messages;
what is considered a big message?

For us the messages will be on average 20k ~ 40k. Is this a real problem?

Thanks
-- 
Felipe Santos


Re: Stream processing meetup at LinkedIn (Mountain View) on Tuesday, August 23 at 6pm

2016-08-13 Thread Felipe Santos
Hi,

Will the event be broadcast live?

On 13 August 2016 at 08:21, Prabhjot Bharaj <prabhbha...@gmail.com> wrote:

> Hi,
>
> Thanks for the response. Wishing you the best for making all the
> arrangements.
>
> Looking forward to it
>
> Regards,
> Prabhjot
>
> On Aug 12, 2016 5:58 PM, "Ed Yakabosky" <eyakabo...@linkedin.com.invalid>
> wrote:
>
> > Hello,
> >
> > We will be sharing a live-stream as well as recording after the meetup.
> > Thanks for asking!
> >
> > Ed
> >
> > On Fri, Aug 12, 2016 at 2:56 PM, Prabhjot Bharaj <prabhbha...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > Thanks for the invitation. I won't be able to make it this soon.
> > > However, it'll be great if you could arrange to share the video
> > > recordings.
> > >
> > > Thanks,
> > > Prabhjot
> > >
> > > On Aug 12, 2016 4:33 PM, "Joel Koshy" <jjkosh...@gmail.com> wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > We would like to invite you to a Stream Processing Meetup at
> > > > LinkedIn’s *Mountain View campus on Tuesday, August 23 at 6pm*. Please
> > > > RSVP here (only if you intend to attend in person):
> > > > https://www.meetup.com/Stream-Processing-Meetup-LinkedIn/events/232864129
> > > >
> > > > We have three great talks lined up with speakers from Confluent,
> > > > LinkedIn and TripAdvisor.
> > > >
> > > > Hope to see you there!
> > > >
> > > > Joel
> > > >
> > >
> >
> >
> >
> > --
> > Thanks,
> > Ed Yakabosky
> >
>



-- 
Felipe Santos