Hi, Liquan
I run the two workers inside Docker containers and a connector having 6
tasks. They read from a topic having 6 partitions. Then I kill one of the
two containers using the docker kill or docker restart command. When the
container is up again a rebalance happens and sometimes a few tasks don't
c
Depends on your use case but I guess something like this:
- Install everything fresh on the new VMs
- Start a mirror maker in the new VMs to copy data from the old ones
- Be sure it's working right
- Shut down the old VM's and start using the new ones
The last step is the trickiest and depends a lot
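A rough sketch of the mirror-maker step above (hostnames and the group id are placeholders I made up; the flags are the 0.9-era kafka-mirror-maker.sh ones):

```shell
# Run on the new VMs. The consumer config points at the OLD cluster,
# the producer config at the NEW one.
cat > mm-consumer.properties <<EOF
zookeeper.connect=old-zk:2181
group.id=migration-mirror
auto.offset.reset=smallest
EOF

cat > mm-producer.properties <<EOF
bootstrap.servers=new-broker:9092
EOF

bin/kafka-mirror-maker.sh \
  --consumer.config mm-consumer.properties \
  --producer.config mm-producer.properties \
  --whitelist '.*'
```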
Thanks Gwen - yes, I agree - let me work on it, make it available on github and
then I guess we can go from there.
Thanks,
Jayesh
-Original Message-
From: Gwen Shapira [mailto:g...@confluent.io]
Sent: Wednesday, May 11, 2016 12:26 PM
To: d...@kafka.apache.org; Jayesh Thakrar
Cc: Users
Hi,
This is known issue. Check below links for related discussion
https://issues.apache.org/jira/browse/KAFKA-3494
https://qnalist.com/questions/6420696/discuss-mbeans-overwritten-with-identical-clients-on-a-single-jvm
Manikumar
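Until that JIRA is resolved, the usual workaround is to give each consumer in the JVM a distinct client.id; something like the following (the values are made up):

```properties
# consumer A's properties
client.id=frylock-a

# consumer B's properties
client.id=frylock-b
```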
On Wed, May 11, 2016 at 7:29 PM, Paul Mackles wrote:
> Hi
>
>
>
I need to run an external filter program from a SinkTask. Is there anything
that might break if I fork/exec in the start() method and forward the data
through pipes?
TIA,
Dean
Hi Matteo,
I do not completely follow the steps. Can you share the exact command to
reproduce the issue? What kind of commands did you use to restart the
connector? Which version of Kafka are you using?
Thanks,
Liquan
On Wed, May 11, 2016 at 4:40 AM, Matteo Luzzi
wrote:
> Hi again, I was able
Hello Everyone,
We have Kafka brokers, Zookeepers and Mirror-makers running on old Virtual
Machines. We need to migrate all of this to brand-new VMs in a different
data center and bring down the old VMs. Is this possible? If so, please
suggest a way to do it.
Best,
Abhinav
Hello Jayesh,
Thank you for the suggestion. I like the proposal and the new tool seems useful.
Do you already have the tool available in a github repository?
If you don't, then this would be a good place to start - there are
many Kafka utilities in github repositories (Yahoo's Kafka Manager as
a
Hi Spico,
Yes, your theory is correct. The sloppy consumer waited in onPartitionsRevoked
until the session timed out and another round of group rebalancing was
triggered, which resulted in Consumer B taking all partitions. After waking up
from sleep, a group rebalance was triggered again
On Wed, 11 May 2016
Good Afternoon,
I am currently trying to do a rolling upgrade from Kafka 0.8.2.1 to 0.9.0.1
and am running into a problem when starting 0.9.0.1 with the protocol
version 0.8.2.1 set in the server.properties.
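For reference, the setting being described (per the 0.9.0.1 rolling-upgrade notes) looks like this in each broker's server.properties:

```properties
# Step 1 of the rolling upgrade: run the 0.9.0.1 code while still
# speaking the old inter-broker protocol.
inter.broker.protocol.version=0.8.2.X
```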
Here is my current Kafka topic setup, data retention and hardware used:
3 Zookeeper node
Hi,
I am using kafka version e20eba958d8de29cb4e3b6feea37ac3a1e1ab4f9
(something which identifies as 0.10.0-SNAPSHOT when built), which gives me
files like
./repository/org/apache/kafka/kafka-tools/0.10.1.0-SNAPSHOT/kafka-tools-0.10.1.0-SNAPSHOT.jar
and
./repository/org/apache/kafka/kafka_2.11/
We have a requirement that the consumer must be able to re-read messages.
In the high-level consumer API, it looks like a restart of the consumer is
needed if the offset has to be reset.
The new consumer API seems to be of beta quality.
In the stable version, I guess the only option is to go for the simple consu
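For what it's worth, one common workaround with the old high-level consumer is to avoid resetting stored offsets at all and simply consume again under a fresh group. A sketch (the group name is made up; the property names are the old 0.8-era consumer's):

```properties
# A new group.id has no stored offsets, so the consumer starts from
# the earliest available message.
group.id=reread-2016-05-11
auto.offset.reset=smallest
```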
I don't think it's possible, since the offsets of the two clusters can be
different; you don't know if it will work correctly. When I used the mirror
maker accidentally on the __consumer_offsets topic it also gave some
errors, so I don't know if it's technically possible. A possible future
solution wo
Hi
I’m looking at failing-over from one cluster to another, connected via mirror
maker, where the __consumer_offsets topic is also mirrored.
In theory this should allow consumers to be restarted to point at the secondary
cluster such that they resume from the same offset they reached in the pr
Hi
I have an app that spins up multiple threads in a single JVM. Each thread has
its own v9 consumer running under different groupIds.
Since they are part of the same application, I set the client.id property for
all 3 consumers to "frylock".
Everything runs OK but I do see the following ex
I have used:
./kafka-topics.sh --delete --topic unibs --zookeeper 127.0.0.1:2181
It did work for me:
~/Downloads/kafka_2.11-0.9.0.1/bin$ ./kafka-topics.sh --list --zookeeper
127.0.0.1:2181
unibs
~/Downloads/kafka_2.11-0.9.0.1/bin$ ./kafka-topics.sh --delete --topic unibs
--zookeeper 127.0.0.1:
Hi!
Here is a great article about the consumer API:
http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client
.
In my opinion, the consumer will not be able to send the heartbeat to the
group coordinator (due to the fact that poll calls on onPartitionRevo
Ah, yes, those consumer znodes in ZooKeeper are related to consumer group
metadata. You can actually find a full write-up of what each znode is used
for here:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper
Cheers,
On Wed, May 11, 2016 at 9:22 AM, Spico Flori
Hi, Dustin!
Thank you for your answer. I have observed that in zookeeper there was a
folder named consumers, that kept data about the topic name its partitions
and the offsets.
Were both the consumers/producers using this folder to keep track of the
offsets? What was the purpose of this folder?
I
Hi Florin,
The new consumer is intended to replace both the high level and simple
consumers.
http://kafka.apache.org/documentation.html#consumerapi
The old simple consumer API didn't rely on zookeeper to store offsets, but
rather the client was responsible for managing their own offsets as there
Hey Joe,
The closest thing is probably the ConsumerOffsetChecker (which is
deprecated in 0.9 and replaced by the ConsumerGroupCommand).
https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-ConsumerOffsetChecker
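It is exposed through bin/kafka-consumer-groups.sh; something like the following (the group name is a placeholder; flags as of 0.9):

```shell
# For old (ZooKeeper-based) consumer groups:
bin/kafka-consumer-groups.sh --zookeeper localhost:2181 \
  --describe --group my-group

# For groups using the new (0.9) consumer:
bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 \
  --describe --group my-group
```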
On Wed, May 11, 2016 at 5:09 AM, Joe San wrote:
> In versio
That's actually not the right way to delete topics (or, for that matter, to
manage a Kafka instance). It can lead to an odd/corrupt installation.
-Jaikiran
On Wednesday 11 May 2016 06:27 PM, Eduardo Costa Alfaia wrote:
Hi,
It’s better to create a script that deletes the Kafka folder where the
k
Hi,
It’s better to create a script that deletes the Kafka folder where the
Kafka topic exists and then creates it again if needed.
BR
Eduardo Costa Alfaia
Ph.D. Student in Telecommunications Engineering
Università degli Studi di Brescia
Tel: +39 3209333018
On 5/11/16, 09:48, "Snehalata Nagaj
Hello!
I'm using Kafka 0.9.1. Suppose that I have created a topic "my-topic" with
1 partition. With the following code, I got a StaleMetadataException in the
Fetcher->listOffset method and the thread is blocked in an infinite while
loop (while(true)).
I came to this error by mistake, so what to do in thi
On Tuesday 10 May 2016 09:29 PM, Radoslaw Gruchalski wrote:
Kafka expects the state to be there when the ZooKeeper comes back. One way to
protect yourself from what you see happening is to have a ZooKeeper quorum.
Run a cluster of 3 zookeepers, then repeat your exercise.
Kafka will conti
Hi again, I was able to reproduce the bug in the same scenario (two workers
on separate machines) just by deleting the connector from the Rest API and
then restarting it again.
I also got this error on one of the workers :
[2016-05-11 11:29:47,034] INFO 172.17.42.1 - - [11/May/2016:11:29:45 +]
Hi Liquan,
thanks for the fast response.
I'm able to reproduce the error by having two workers running on two
different machines. If I restart one of the two workers, the failover logic
correctly detects the failure and shuts down the tasks on the healthy worker
for rebalancing. When the failed worke
In version 0.9.0.0, we have this beautiful command that would show the
offset and the lag in a println as:
println("%s, %s, %s, %s, %s, %s, %s"
.format("GROUP", "TOPIC", "PARTITION", "CURRENT OFFSET", "LOG END
OFFSET", "LAG", "OWNER"))
is there an equivalent command that I could use for the 0.
If it’s marked for deletion, he must have configured it. I recently tried it,
and it doesn’t mark the topic otherwise.
Am 11.05.16, 09:51 schrieb "Jan Omar" :
>You have to allow topic deletion in server.properties first.
>
>delete.topic.enable = true
>
>Regards
>
>Jan
>
>> On 11 May 2016, at 09:48, Snehalata Nagaje
>> wr
Hi Matteo,
Glad to hear that you are building a connector. To better understand the
issue, can you provide the exact steps to reproduce it? One thing I am
confused about is that when one worker is shut down, you don't need to
restart the connector through the REST API; the failover logic should h
Hi,
I'm working on a custom implementation of a sink connector for the Kafka
Connect framework. I'm testing the connector for fault tolerance by killing
the worker process and restarting the connector through the REST API, and
occasionally I notice that some tasks don't receive any more messages from
th
I am using 0.9.0.1
- Original Message -
From: "Jörg Wagner"
To: users@kafka.apache.org
Sent: Wednesday, May 11, 2016 1:20:37 PM
Subject: Re: Can we delete topic in kafka
Depending on your version of kafka it may or may not work.
Before 0.8.2 it didn't work afaik and on 0.8.2 it works u
I tried with a persistent volume as Christian suggested and it works great for me.
Thanks!
Btw I need to explore the zookeeper cluster solution as well.
Paolo.
Paolo Patierno
Senior Software Engineer (IoT) @ Red Hat
Microsoft MVP on Windows Embedded & IoT
Microsoft Azure Advisor
Twitter : @ppatier
You could create a Docker image with a Kafka installation, and start a
mirror maker in it; you could set the retention time to infinite,
and mount the data volume. With the data you could always restart the
container and mirror it to somewhere else. Not sure that's what you want, but
it's an op
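The idea above might look roughly like this (image name, volume path, and topic are all made up; this is only a sketch):

```shell
# Run the mirror in a container with the log directory on a host volume.
docker run -d --name kafka-archive \
  -v /data/kafka-logs:/kafka/logs \
  my-kafka-image

# Keep the mirrored data forever: -1 disables time-based deletion on
# recent brokers; older brokers may need a large positive value instead.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-topic --config retention.ms=-1
```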
You have to allow topic deletion in server.properties first.
delete.topic.enable = true
Regards
Jan
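Concretely, something like this (the server.properties path and broker restart procedure will depend on your installation):

```shell
# 1. On every broker, set this in server.properties and restart:
#      delete.topic.enable=true
# 2. Then the delete actually takes effect:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic topic_billing
```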
> On 11 May 2016, at 09:48, Snehalata Nagaje
> wrote:
>
>
>
> Hi ,
>
> Can we delete certain topic in kafka?
>
> I have deleted using command
>
> ./kafka-topics.sh --delete --topic top
Depending on your version of Kafka it may or may not work.
Before 0.8.2 it didn't work, afaik, and on 0.8.2 it works unreliably in
my experience. Can't comment on 0.9 onward.
Cheers
On 11.05.2016 09:48, Snehalata Nagaje wrote:
Hi ,
Can we delete certain topic in kafka?
I have deleted us
Hi ,
Can we delete certain topic in kafka?
I have deleted using command
./kafka-topics.sh --delete --topic topic_billing --zookeeper localhost:2181
It says topic marked as deletion, but it does not actually delete topic.
Thanks,
Snehalata
Hi!
In Gwen's answer there is nothing related to the simple consumer API and
its relation with ZK. But the simple consumer API still uses ZK to store
offsets, doesn't it?
I look forward to your answer.
Regards,
Florin
On Wed, May 11, 2016 at 2:11 AM, R Krishna wrote:
> And where is the docume
Thanks Alex and John, it is very helpful.
Regards,
Sahitya Agrawal
On Wed, May 11, 2016 at 4:14 AM, Alex Loddengaard wrote:
> Hi Sahitya,
>
> I wonder if your consumers are experiencing soft failures because they're
> busy processing a large collection of messages and not calling poll()
> wit