Has anyone used Kafka 0.9.0.1 with ZooKeeper 3.4.10 (replacing the
zookeeper-3.4.6 jar in the libs folder with zookeeper-3.4.10.jar)? Are there
any issues with this combination?
Thanks,
Fang
Hi,
What is the value of acks set for the Kafka internal topic __consumer_offsets?
I know the default replication factor for __consumer_offsets is 3, and
we are using version 0.9.0.1, and set min.insync.replicas = 2 in our
server.properties.
We noticed some partitions of __consumer_offsets have ISR with
Hi,
In our application, we set topic.replication.factor to 3 on the client side
and min.insync.replicas = 2 on the Kafka server side (server.properties).
Does min.insync.replicas = 2 apply to the Kafka internal topic
__consumer_offsets? (We are using Kafka version 0.9.0.1 and have
offsets.storage =
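For what it's worth, offset commits to __consumer_offsets are acknowledged according to broker-side settings rather than a client-side acks value. A sketch of the relevant server.properties entries as I understand the 0.9 defaults (worth double-checking against your broker version):

```properties
# Replication factor used when the internal offsets topic is first created
offsets.topic.replication.factor=3
# Acks required before an offset commit is accepted; -1 means all in-sync replicas
offsets.commit.required.acks=-1
# Minimum ISR size required for writes with acks=-1 to succeed
min.insync.replicas=2
```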
/TEST-TOPIC-0/.log
> >
> > Use the --key-decoder-class and --value-decoder-class options to pass
> > custom decoders.
> >
> > On Fri, Mar 18, 2016 at 12:17 PM, Fang Wong <fw...@salesforce.com>
> wrote:
> >
> >> Thanks Guozhang:
>
comes a single big produce request.
>
> Guozhang
>
>
> On Mon, Mar 14, 2016 at 1:59 PM, Fang Wong <fw...@salesforce.com> wrote:
>
> > After changing log level from INFO to TRACE, here is kafka server.log:
> >
> > [2016-03-14 06:43:03
Kafka,
> including encryption / authentication / authorization. For your case, I
> would suggest you upgrade to 0.9 and use its authorization mechanism using
> ACLs.
>
> Guozhang
>
>
> On Wed, Mar 16, 2016 at 11:36 AM, Fang Wong <fw...@salesforce.com> wrote:
>
Guozhang
>
> On Wed, Mar 16, 2016 at 4:03 PM, Fang Wong <fw...@salesforce.com> wrote:
>
> > Thanks Guozhang!
> > We are in the process of upgrading to 0.9.0.0. We will look into using
> > ACLs.
> >
> > Is there a way to see what the request is in the kafka
Guozhang
>
> On Tue, Mar 8, 2016 at 11:45 AM, Fang Wong <fw...@salesforce.com> wrote:
>
> > Thanks Guozhang!
> >
> > No, I don't have a way to reproduce this issue. It happens randomly; I am
> > changing the log level from INFO to TRACE to see if I can get
> Could you validate that their cumulative size is less
> than the limit and then try sending them to Kafka and see if it always
> triggers the problem?
>
>
>
> Guozhang
>
> On Mon, Mar 7, 2016 at 10:23 AM, Fang Wong <fw...@salesforce.com> wrote:
>
> > No, we don
> > In newer versions we have changed the batching criterion from
> > #.messages to bytes, which is aimed at resolving such issues.
> >
> > Guozhang
> >
> > On Thu, Mar 3, 2016 at 1:04 PM, Fang Wong <fw...@salesforce.com> wrote:
> >
> > > Got
Got the following error message with Kafka 0.8.2.1:
[2016-02-26 20:33:43,025] INFO Closing socket connection to /x due to
invalid request: Request of length 1937006964 is not valid, it is larger
than the maximum size of 104857600 bytes. (kafka.network.Processor)
Didn't send a large message at
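One diagnostic worth trying: the broker reads the first four bytes on the socket as the request length, so a bogus length like this is really just the first four bytes some client sent. Decoding 1937006964 with plain JDK calls (a sketch):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RequestLengthDecoder {
    // Interpret a bogus "request length" as the first four bytes the
    // client actually wrote to the socket. A non-Kafka client (port
    // scanner, health check, misdirected tool) often explains these
    // oversized-request log lines.
    static String decode(int length) {
        byte[] raw = ByteBuffer.allocate(4).putInt(length).array();
        return new String(raw, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) {
        // 1937006964 == 0x73746174, i.e. the ASCII text "stat"
        System.out.println(decode(1937006964));
    }
}
```

Here 0x73746174 is ASCII "stat", which happens to be a ZooKeeper four-letter command, so one possibility is that a monitoring probe aimed at the ZooKeeper port hit the Kafka port instead.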
Also, the key serializer is
org.apache.kafka.common.serialization.StringSerializer and the value
serializer is org.apache.kafka.common.serialization.ByteArraySerializer.
On Wed, Mar 2, 2016 at 10:24 AM, Fang Wong <fw...@salesforce.com> wrote:
> try (ByteArrayOutputStream
you could see such an error.
>
> Thank you,
> Anirudh
>
> On Wed, Mar 2, 2016 at 6:18 AM, Fang Wong <fw...@salesforce.com> wrote:
>
> > [2016-02-26 20:33:43,025] INFO Closing socket connection to /x due to
> > invalid request: Request of length 1937
payload);
// Send is async by default; when the message is published to the
// Kafka broker, the Callback is executed with the delivery status
kafkaProducer.send(data);
On Wed, Mar 2, 2016 at 4:59 AM, Asaf Mesika <asaf.mes...@gmail.com> wrote:
> Can you show your code for send
Hi,
Does anybody know how to fix the following error? I didn't send any large
message; it seems the system was sending a large message itself:
[2016-02-26 20:33:43,025] INFO Closing socket connection to /x due to
invalid request: Request of length 1937006964 is not valid, it is larger
than the
[2016-02-26 20:33:43,025] INFO Closing socket connection to /x due to
invalid request: Request of length 1937006964 is not valid, it is larger
than the maximum size of 104857600 bytes. (kafka.network.Processor)
[2016-02-26 20:33:43,047] INFO Closing socket connection to /x due to
invalid request:
[2016-02-26 20:33:42,997] INFO Closing socket connection to /x due to
invalid request: Request of length 1937006964 is not valid,
[2016-02-26 20:33:42,997] INFO Closing socket connection to /10.224.146.58
due to invalid request: Request of length 1937006964 is not valid, it is
larger than the
We are using Kafka 0.8.2.1 with acks set to 2, and we see the following warning:
sent a produce request with request.required.acks of 2, which is now
deprecated and will be removed in next release. Valid values are -1, 0 or
1. Please consult Kafka documentation for supported and recommended
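If it helps, with the new Java producer the equivalent setting is acks, which accepts only the values the warning lists. A sketch of moving off the deprecated value (broker address is a placeholder):

```java
import java.util.Properties;

public class AcksConfigSketch {
    // request.required.acks=2 is deprecated; the new producer's "acks"
    // accepts "0" (no acknowledgement), "1" (leader only), or
    // "-1"/"all" (all in-sync replicas).
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("acks", "-1");
        return props;
    }
}
```

With acks=-1, durability is then governed by min.insync.replicas on the broker rather than a fixed replica count like 2.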
Hi,
I have a Kafka cluster of 8 servers, but only 4 of them are up; the other 4
are down.
When I started my app server, it sometimes still tried to connect to the
Kafka servers that are down, and since those servers are down, my app
server couldn't start.
Not sure why it doesn't just try to
our application server? You need
> to config the bootstrap.servers so that it includes the servers that are
> up.
>
> Thanks,
> Liquan
>
> On Fri, Jan 8, 2016 at 2:53 PM, Fang Wong <fw...@salesforce.com> wrote:
>
> > Hi,
> >
> > I have a kafka
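The quoted advice can be sketched as a client config (broker host names are placeholders). The client only uses bootstrap.servers for the initial connection and discovers the full cluster from whichever listed broker answers, so listing several brokers lets startup survive some of them being down:

```java
import java.util.Properties;

public class BootstrapConfigSketch {
    // List several brokers so the client can bootstrap as long as at
    // least one of them is reachable; the rest of the cluster is
    // discovered from that broker's metadata.
    static Properties clientProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers",
                "broker1:9092,broker2:9092,broker3:9092");
        return props;
    }
}
```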