Anthony,
I filed https://issues.apache.org/jira/browse/KAFKA-3410 to track this.
-James
> On Feb 25, 2016, at 2:16 PM, Anthony Sparks
> wrote:
>
> Hello James,
>
> We received this exact same error this past Tuesday (we are on 0.8.2). To
> answer at least one of your bullet points -- this is
The LinkedIn/Burrow tool is there for monitoring consumers.
On 16 Mar 2016 02:28, "Vinay Gulani" wrote:
> Hi,
>
> I am new to Kafka and using kafka version 0.8.2.1. I am monitoring kafka
> using kafka-manager tool.
>
> Is there any way to monitor those kafka-consumers (using kafka-manager)
> who are n
Fang,
From the logs you showed above, there is a single produce request with a
very large request size:
"[2016-03-14 06:43:03,579] INFO Closing socket connection to
/10.225.36.226 due to invalid request: Request of length *808124929* is
not valid, it is larger than the maximum size of 104857600 byt
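For context, not stated in the thread: 104857600 bytes is the broker's
default socket.request.max.bytes, and the producer's max.request.size caps
how large a single request the producer will send. A sketch of the two
settings, with illustrative values:

    # broker server.properties: largest request the broker will accept
    socket.request.max.bytes=104857600

    # producer config: largest request the producer will send
    max.request.size=1048576

Keeping the producer limit at or below the broker limit avoids this kind of
rejected request.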
Hi,
I am new to Kafka and using Kafka version 0.8.2.1. I am monitoring Kafka
using the kafka-manager tool.
Is there any way to monitor those Kafka consumers (using kafka-manager)
who are not storing their offsets in Kafka's ZooKeeper?
Thanks,
Vinay
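As a side note, not part of the original question: for consumers that do
commit offsets to ZooKeeper (the 0.8.x default), the stock
ConsumerOffsetChecker tool can report offsets and lag outside of
kafka-manager. A sketch, assuming a hypothetical group named "my-group":

    bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
      --zookeeper localhost:2181 --group my-group

Consumers that keep their offsets entirely outside Kafka and ZooKeeper are
invisible to both tools and have to expose their positions themselves.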
Thanks Jason,
That's definitely something I can work with. I
expect this to be a very rare scenario.
Thanks for your help!
Michael
On Mon, Mar 14, 2016 at 5:16 PM, Jason Gustafson wrote:
> Hey Michael,
>
> I don't think a policy of retrying indefinitely is generally possib
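For illustration, a minimal sketch (mine, not Jason's) of a bounded retry
policy with the new Java consumer; commitWithBoundedRetry is a hypothetical
helper:

    import java.util.Map;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.consumer.OffsetCommitCallback;
    import org.apache.kafka.clients.consumer.RetriableCommitFailedException;
    import org.apache.kafka.common.TopicPartition;

    // Retry an async commit a bounded number of times on retriable
    // failures, rather than retrying indefinitely.
    void commitWithBoundedRetry(final KafkaConsumer<String, String> consumer,
                                final int retriesLeft) {
        consumer.commitAsync(new OffsetCommitCallback() {
            public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets,
                                   Exception e) {
                if (e instanceof RetriableCommitFailedException && retriesLeft > 0)
                    commitWithBoundedRetry(consumer, retriesLeft - 1);
                // otherwise: success, or a non-retriable error to log and drop
            }
        });
    }

One caveat: re-issuing commitAsync can commit an older offset over a newer
one if other commits are in flight, which is related to the commitAsync
ordering bug discussed later in this digest.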
Yep, I realized it after sending the request. I did send an email to that
address and got confirmation.
Thanks for your reply!
On 3/15/16, 1:40 PM, "Christian Posta" wrote:
>Your best bet is to send a mail to subscr...@kafka.apache.org as outlined
>in the mailing list page http://kafka.apache.org/contac
Your best bet is to send a mail to subscr...@kafka.apache.org as outlined
in the mailing list page http://kafka.apache.org/contact.html
On Tue, Mar 15, 2016 at 11:48 AM, Punyavathi Ambur Krishnamurthy <
akpunyava...@athenahealth.com> wrote:
> Hi,
>
> I would like to subscribe to this list to lear
Hello, I have some questions about using Kafka to transfer data.
In my test, I created a producer and a consumer and measured the delay from
producer to consumer with 1000 bytes of data. It takes about 3ms,
but when I ping the broker, the round-trip time is about 0.1ms.
What configuration can I do
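A few settings commonly looked at for end-to-end latency, assuming the new
Java producer that ships with 0.8.2 (values are illustrative, not a
recommendation from the thread):

    # producer config: favor latency over batching
    linger.ms=0   # send immediately instead of waiting to fill batches
    acks=1        # wait only for the partition leader's acknowledgement

    # consumer config
    fetch.min.bytes=1   # broker replies as soon as any data is available

Some gap over raw ping time is expected in any case, since a produce
round-trip and a consumer fetch both sit on the path between the two hosts.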
Hi,
I would like to subscribe to this list to learn and implement Kafka.
Thanks,
Punya
Hi Gerard,
Thanks for your answer.
Just to make sure I understood it correctly.
If two consumers are running, one with group.id "purchase_order_metrics"
and another with group.id "purchase_order_communication", and both consumers
call consumer.subscribe("purchase_order_updated"), both will
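For reference, a minimal sketch of that setup with the new Java consumer
(broker address and deserializers are assumptions; topic and group names are
from the question):

    import java.util.Arrays;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");

    // Same topic, two different groups: each group keeps its own offsets,
    // so each group receives every message independently.
    props.put("group.id", "purchase_order_metrics");
    KafkaConsumer<String, String> metrics = new KafkaConsumer<>(props);
    metrics.subscribe(Arrays.asList("purchase_order_updated"));

    props.put("group.id", "purchase_order_communication");
    KafkaConsumer<String, String> comms = new KafkaConsumer<>(props);
    comms.subscribe(Arrays.asList("purchase_order_updated"));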
Yeah, I agree the bug is probably more serious than I had thought before
(I've gotten too used to examples with only a single commitAsync() call). I
had worked on a patch to fix this at one point. Let me see if I can dig it
up.
-Jason
On Tue, Mar 15, 2016 at 1:06 AM, Alexey Romanchuk <
alexey.rom
Are you sure the broker-side SSL configs are properly set? One quick way to
test is to use openssl to connect to the broker:
openssl s_client -debug -connect localhost:9093 -tls1
Make sure you change the port to the one where the broker is serving SSL.
More details are here:
http://kafka.apache.org/documentati
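For comparison, a broker-side SSL configuration along the lines of the
documentation might look like this (paths and passwords are placeholders):

    # server.properties
    listeners=PLAINTEXT://:9092,SSL://:9093
    ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
    ssl.keystore.password=changeit
    ssl.key.password=changeit
    ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
    ssl.truststore.password=changeit

If the openssl handshake fails against the SSL port, these are the first
settings to double-check.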
Yes, the consumer handles this error by rejoining the group. This is
probably a case where the logging is a bit more verbose than it should be.
I've tried to tune the logging a bit better for 0.10 so that some of these
low-level details only come out at the DEBUG level (in particular for cases
that
I can't get initial subscriptions without poll. As far as I can tell,
I won't get updated subscriptions (because a partition was added or
another topic matching the pattern was added) without poll either,
right?
I'll take a look at those JIRAs.
On Mon, Mar 14, 2016 at 4:56 PM, Jason Gustafson w
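To illustrate the point (my sketch, not from the thread; the pattern is
arbitrary): with a pattern subscription, both the initial assignment and any
later changes only surface through poll(), optionally observed via a
rebalance listener:

    import java.util.Collection;
    import java.util.regex.Pattern;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.common.TopicPartition;

    consumer.subscribe(Pattern.compile("purchase_order_.*"),
        new ConsumerRebalanceListener() {
            public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                System.out.println("revoked: " + parts);
            }
            public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                System.out.println("assigned: " + parts);
            }
        });

    while (true) {
        // poll() drives metadata refresh, rebalancing, and heartbeats;
        // without calling it, no (re)assignment ever happens.
        ConsumerRecords<String, String> records = consumer.poll(100);
        // process records...
    }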
In addition, because we also use ACLs, creating a lot of topics is
cumbersome. So in one of our tests I added a UUID to the message so I know
which message was produced for a certain test.
On Mon, Mar 14, 2016 at 11:15 PM Stevo Slavić wrote:
> See
>
> https://cwiki.apache.org/confluence/disp
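A minimal sketch of the UUID tagging described above (topic name and
producer setup are assumptions):

    import java.util.UUID;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Tag each test message with a fresh UUID so the consuming side can
    // tell which test produced it, instead of creating a topic per test.
    String testId = UUID.randomUUID().toString();
    producer.send(new ProducerRecord<String, String>("shared-test-topic",
        testId, "payload for test " + testId));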
Good points.
I would back up all partitions to HDFS (etc.) as fast as the data arrives. In
case Kafka becomes corrupted, the topics can be repopulated from the
backup. In my case, all clients track their own offsets, so they should in
theory be able to continue as if nothing had happened.
Jay, I completely agree with you. At the API level there is nothing I can do
about it. Is there anything else I can do about this issue? A reproducible
example is here - https://gist.github.com/13h3r/42633bcd64b80ddffe6b
Thanks!
On Tue, Mar 15, 2016 at 11:24 AM, Jay Kreps wrote:
> This seems like a bug, no