Hi Abhimanyu,
What errors are you seeing? And which version of DC/OS are you running?
Tim
On Fri, Jul 22, 2016 at 6:14 AM, Chakrabarty, Abhimanyu
wrote:
> I had a question regarding Kafka on DC/OS because whenever we try to install
> the Kafka package
That's all the information available from the JMX endpoints in Kafka.
Tim
On Fri, Mar 25, 2016 at 1:21 PM, yeshwanth kumar wrote:
> can someone explain, how Cloudera manager Collects Kafka Metrics, such as
> TotalMessages in a Topic, Total Bytes read and written from and into
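Those broker metrics are exposed as JMX MBeans, so any JMX client can poll them. A minimal sketch of the polling mechanism, demonstrated against the local JVM's platform MBean server; the Kafka MBean name in the comment is version-dependent and should be treated as an assumption:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxPoll {
    // Reads one attribute from a named MBean on the given server.
    public static Object readAttribute(MBeanServer server, String name, String attr)
            throws Exception {
        return server.getAttribute(new ObjectName(name), attr);
    }

    public static void main(String[] args) throws Exception {
        // Shown against the local JVM's platform MBean server. Against a broker
        // you would open a JMXConnector to the broker's JMX port and read names
        // like "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec"
        // (attribute "Count") -- treat that exact name as version-dependent.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        long uptime = (Long) readAttribute(server, "java.lang:type=Runtime", "Uptime");
        System.out.println("JVM uptime ms: " + uptime);
    }
}
```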
Hi Jeff,
The controller should have a Topic deletion thread running
coordinating the delete in the cluster, and the progress should be
logged to the controller log.
Can you look at the controller log to see what's going on?
Tim
On Wed, Mar 4, 2015 at 10:28 AM, Jeff Schroeder
I believe that's the only way it's supported from the CLI.
Delete topic actually fully removes the topic from the cluster, which
also includes cleaning the logs and removing it from zookeeper (once
it is fully deleted).
Tim
On Fri, Jan 23, 2015 at 12:13 PM, Sumit Rangwala
What's your configured required.acks? And are you waiting for all
your messages to be acknowledged?
The new producer returns futures back, but you still need to wait for
the futures to complete.
Tim
On Fri, Jan 2, 2015 at 9:54 AM, Sa Li sal...@gmail.com wrote:
Hi, all
We are
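The wait-for-all-futures pattern described above can be sketched with stdlib futures; the send() stub here is a stand-in for the producer's send call, not the Kafka API itself:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class WaitForAcks {
    // Stand-in for producer.send(record): completes when the (simulated)
    // broker acknowledges the message with its offset.
    public static CompletableFuture<Long> send(long offset) {
        return CompletableFuture.supplyAsync(() -> offset);
    }

    // Fire all sends first, then block until every future completes --
    // only then are the messages known to be acknowledged.
    public static long sendAll(int n) {
        List<CompletableFuture<Long>> pending = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            pending.add(send(i));
        }
        return pending.stream().mapToLong(CompletableFuture::join).max().orElse(-1L);
    }

    public static void main(String[] args) {
        System.out.println("highest acked offset: " + sendAll(10));
    }
}
```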
Is this the latest master? I've added the delete option in trunk, but
it's not in any release yet.
We used to have the delete option flag, but I believe we removed it;
that's why the documentation differs.
Tim
On Wed, Aug 6, 2014 at 10:53 PM, Shlomi Hazan shl...@viber.com wrote:
cluster? Or
does the cluster also have to be running trunk? (I'm guessing it does :)).
I have some topics I'd like to delete, but don't want to wait for 0.8.2
(but will probably have to, I'm guessing).
Jason
On Thu, Aug 7, 2014 at 2:53 AM, Timothy Chen tnac...@gmail.com wrote:
Hi Gwen
Yes, the existing delete topic command just cleans up the topic entry in zk, but
doesn't really delete the topic from the cluster.
I have a patch that enables kafka-topics.sh to delete topic but not sure if
it's merged to trunk.
Tim
On Jun 18, 2014, at 1:39 PM, hsy...@gmail.com hsy...@gmail.com
wrote:
I've seen that if you have a write with zero data it will hang
On Jun 16, 2014, at 21:02, Timothy Chen tnac...@gmail.com wrote:
Can you try running it in debug mode? (./gradlew jar -d)
Tim
On Mon, Jun 16, 2014 at 8:44 PM, Jorge Marizan jorge.mari...@gmail.com
wrote:
It just hangs
with ps aux, and there are no Gradle processes left
behind when I cancel the compile job.
Jorge.
On Jun 17, 2014, at 11:45 PM, Timothy Chen tnac...@gmail.com wrote:
Not sure what's wrong, but I'm guessing there's probably a Gradle lock
somewhere.
Are there other Gradle processes
What output was it stuck on?
Tim
On Mon, Jun 16, 2014 at 6:39 PM, Jorge Marizan jorge.mari...@gmail.com wrote:
Hi team, I’m a newcomer to Kafka, but I’m having some troubles trying to get
it to run on OS X.
Basically building Kafka on OS X with 'gradlew jar' gets stuck forever
without any
It just hangs there without any output at all.
Jorge.
On Jun 16, 2014, at 11:27 PM, Timothy Chen tnac...@gmail.com wrote:
What output was it stuck
Hi Maung,
If your required.acks is 1, then the producer only ensures that one
broker receives the data before it's successfully returned to the
client.
Therefore if the broker crashes and loses all the data, then you lose
data; similarly, this can happen even before the data is fsynced.
To ensure
is zero, can we use ack other than 1?
Maung
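For reference, a hedged sketch of the 0.8-era producer configuration that trades latency for durability; the broker list is a placeholder, and request.required.acks=-1 waits on all in-sync replicas rather than just the leader:

```java
import java.util.Properties;

public class ProducerAcks {
    // Builds 0.8-era producer properties; the broker list is a placeholder.
    public static Properties durableConfig() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        // -1 waits for all in-sync replicas, 1 for the leader only, 0 for no ack.
        props.put("request.required.acks", "-1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(durableConfig().getProperty("request.required.acks"));
    }
}
```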
There is a Scala API. You can take a look at TopicCommand.scala as
kafka-topics.sh simply calls that class.
Tim
On Tue, May 20, 2014 at 3:41 PM, Saurabh Agarwal (BLOOMBERG/ 731 LEX
-) sagarwal...@bloomberg.net wrote:
Hi,
Is there java API in kafka to list topics and partitions in the kafka
It typically throws an exception in the end if the sync producer cannot
deliver your message.
In the case where there is an IOException or similar exceptions that
the broker cannot deal with, I believe it will try to return an
UnknownError response, which will then throw in the producer.
In cases
stream commit, do you mean a per partition commit like this API -
public OffsetMetadata commit(Map<TopicPartition, Long> offsets);
This API allows the consumer to commit the specified offsets only for
selected partitions.
Thanks,
Neha
On Thu, May 15, 2014 at 8:42 AM, Timothy Chen tnac
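The per-partition commit API above takes a map from partition to offset. A small sketch of building that map, with a stand-in TopicPartition class (the real one ships with the Kafka client; the commit call itself is the API under discussion, not something this sketch performs):

```java
import java.util.HashMap;
import java.util.Map;

public class PartitionCommit {
    // Minimal stand-in for Kafka's TopicPartition class.
    public static final class TopicPartition {
        final String topic;
        final int partition;
        TopicPartition(String topic, int partition) {
            this.topic = topic;
            this.partition = partition;
        }
        @Override public String toString() { return topic + "-" + partition; }
    }

    // Build the offsets map for only the partitions you want to commit;
    // passing this to commit(...) would leave all other partitions untouched.
    public static Map<TopicPartition, Long> selectedOffsets() {
        Map<TopicPartition, Long> offsets = new HashMap<>();
        offsets.put(new TopicPartition("events", 0), 42L);
        offsets.put(new TopicPartition("events", 3), 7L);
        return offsets;
    }

    public static void main(String[] args) {
        System.out.println(selectedOffsets().size() + " partitions to commit");
    }
}
```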
The C# client you're using only supports 0.7 Kafka, and the 0.8 Kafka
APIs are not backward compatible anymore.
If you want to use the latest Kafka you'll have to change the binary
protocol yourself, or work with one of the other folks that have
mentioned a .NET client on the mailing list.
Tim
What is your compression configuration for your producer?
One of the biggest CPU sources for the producer is doing compression
and also checksumming.
Tim
On Sun, May 11, 2014 at 12:24 AM, yunbinw...@travelsky.com wrote:
I write a very simple code , like this :
public class LogProducer {
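Compression happens on the producer side before the bytes hit the wire, which is why it shows up as producer CPU. A stdlib sketch of just the gzip step, with no Kafka APIs involved:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class CompressCost {
    // Gzips a payload the way a producer configured with gzip compression
    // would before handing batches to the network layer.
    public static byte[] gzip(byte[] payload) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(payload);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 100; i++) sb.append("the same log line repeated ");
        byte[] msg = sb.toString().getBytes();
        System.out.println(msg.length + " -> " + gzip(msg).length + " bytes");
    }
}
```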
Hi Chris,
Kafka producer doesn't require zookeeper anymore, so you can simply
connect to one of the brokers directly.
Tim
On Tue, Apr 29, 2014 at 9:23 AM, Chris Helck chris.he...@ebs.com wrote:
I have a few newbie questions. I need to create a Producer that sends
messages to Kafka brokers.
Done, let me know if you want more changes.
Tim
On Tue, Apr 29, 2014 at 1:54 PM, Sergiy Zuban s.zu...@gmail.com wrote:
Could someone please update Perl client information at
https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-Perl
1. GZIP and Snappy compression supported
2.
Hi Yashika,
No logs in the broker log is not normal; can you verify whether you turned off
logging in your log4j properties file?
If you did, please enable it, try again, and see what is in the logs.
Tim
On Thu, Apr 24, 2014 at 10:53 PM, Yashika Gupta
yashika.gu...@impetus.co.in wrote:
Jun,
I am
Hi Ryan,
Also KAFKA-1317 should be fixed in both trunk and latest 0.8.1 branch,
are you running with either, or just with one of the previously released
versions?
Tim
On Mon, Apr 21, 2014 at 5:00 PM, Guozhang Wang wangg...@gmail.com wrote:
Hi Ryan,
Did you see any error logs on the new
onControllerResignation assuming it's the controller, while the
broker might just be re-establishing its zookeeper session.
I'll file a jira and fix this.
Tim
On Wed, Apr 2, 2014 at 4:00 PM, Clark Breyman cl...@breyman.com wrote:
Hey Tim. Small world :).
Kafka 0.8.1_2.10
On Wed, Apr 2, 2014 at 3:54 PM, Timothy
Hi Roy,
I wonder if you were able to start the broker following the steps here:
http://kafka.apache.org/documentation.html#quickstart
That page also shows you how to create a topic and send/consume messages
using the console producer/consumer.
Let us know if you run into any problems,
Tim
Hi Sripada,
Unfortunately I can't provide a code fix, but it's an easy fix actually.
Basically the path it uses to look for kafka-run-class.bat is wrong,
as it expects that file to be relative to the current working directory.
You can either cd into the parent and run it or fix the script.
Tim
From the roadmap they published it looks like pipelining as part of the
client rewrite is happening post 0.8.
Tim
On Thu, Dec 5, 2013 at 3:52 PM, Tom Brown tombrow...@gmail.com wrote:
In our environment we currently use Kafka 0.7.1.
The core features I am looking for in a client are
Hi Philip,
So I wonder if you guys hit disk perf problems with EBS? It seemed quite common
in the past, but I haven't tried recently.
Also, can you share how you guys deployed zookeeper in AWS so that a quorum is
always available?
Tim
Sent from my iPhone
On Dec 2, 2013, at 5:15 PM, Steve Morin
Hi Roger,
That's exactly what I needed on my end; I actually internally created a new
property called zkHost.name to publish a different host to zk. This is also
needed for deploying Kafka into Azure.
I also created zkHost.port, since the internal and external ports that are
exposed might be
:
That would be great!
-Jay
On Wed, Aug 21, 2013 at 3:13 PM, Timothy Chen tnac...@gmail.com wrote:
Hi Jay,
I'm planning to test run Kafka on Windows in our test environments,
evaluating if it's suitable for production usage.
I can provide feedback on how well the patch works and if we
to have windows support in 0.8
and
it
sounds like Tim is able to get things working after these changes.
-Jay
On Mon, Sep 9, 2013 at 10:19 AM, Timothy Chen tnac...@gmail.com
wrote:
Btw, I've been running this patch in our cloud env and it's been
working
fine so far.
I
across producer and broker logs.
On Mon, Aug 19, 2013 at 11:01 PM, Timothy Chen tnac...@gmail.com wrote:
Hi,
This is probably a very obvious question, but I cannot find the answer
for
this.
What does the correlation id mean in a producer request?
Tim
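The correlation id is a client-chosen number stamped on each request and echoed back in the broker's response, so the client can match each response to the request it answers even when many requests are pipelined. A minimal sketch of that bookkeeping:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class CorrelationIds {
    private final AtomicInteger next = new AtomicInteger();
    private final Map<Integer, String> inFlight = new HashMap<>();

    // The client stamps each outgoing request with a fresh correlation id
    // and remembers it as in-flight.
    public int send(String request) {
        int id = next.incrementAndGet();
        inFlight.put(id, request);
        return id;
    }

    // The broker echoes the id back, letting the client pair the response
    // with the original request and retire it.
    public String onResponse(int correlationId) {
        return inFlight.remove(correlationId);
    }
}
```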
Hi all,
I've tried pushing a large amount of messages into Kafka on Windows, and
got the following error:
Caused by: java.io.IOException: The requested operation cannot be performed
on a
file with a user-mapped section open
at java.io.RandomAccessFile.setLength(Native Method)
at
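The error above comes from shrinking a log file while a memory mapping on it is still open, which Windows forbids. A stdlib sketch of the triggering sequence; on Linux/macOS the truncate typically succeeds, while on Windows the second setLength throws until the mapping is garbage-collected:

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;

public class MappedTruncate {
    // Maps a region of the file, then tries to shrink it while the mapping
    // is still live -- the sequence behind "user-mapped section open".
    public static long truncateWhileMapped(File f) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.setLength(4096);
            raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            // On Windows this setLength throws IOException while the mapping
            // exists; on Linux/macOS it usually succeeds.
            raf.setLength(1024);
            return raf.length();
        }
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("kafka-log", ".tmp");
        f.deleteOnExit();
        System.out.println("length after truncate: " + truncateWhileMapped(f));
    }
}
```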
Hi Jun,
I wonder when the tool will be available? We're very interested in changing
the number of partitions for a topic after creation too.
Thanks!
Tim
On Thu, Jul 4, 2013 at 9:06 PM, Jun Rao jun...@gmail.com wrote:
Currently, once a topic is created, the number of partitions can't be
Hi Robert,
The most recent one that I know of is the C# client that the ExactTarget folks
did; however, not all calls are up to the 0.8 protocol, so it doesn't
completely work.
I have a slightly more cleaned up version here
https://github.com/tnachen/kafka/tree/feature/et-develop-0.8
It will be great
Also, since you're going to be creating a topic per user, the number of
concurrent users will also be a concern for Kafka, as it doesn't handle
massive numbers of topics well.
Tim
On Thu, Jun 13, 2013 at 10:47 AM, Josh Foure user...@yahoo.com wrote:
Hi Mahendra, I think that is where it gets a little
Hi,
I'm trying to add my own custom partitioner and saw the example in the 0.8
producer example in the wiki.
However, when I set a broker list and set the custom partitioner class name
I did in the client, I see this error:
Partitioner cannot be used when broker list is set
Does this mean
partitions to the new broker without any downtime.
Thanks,
Neha
On Wed, May 22, 2013 at 2:20 PM, Timothy Chen tnac...@gmail.com wrote:
Hi Neha/Chris,
Thanks for the reply, so if I set a fixed number of partitions and just
add
brokers to the broker pool, does it rebalance the load
Hi,
I'm currently trying to understand how Kafka (0.8) can scale with our usage
pattern and how to setup the partitioning.
We want to route messages belonging to the same id to the same
queue, so its consumer will be able to consume all the messages for that id.
My questions:
- From my
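Routing every message with the same id to the same partition just needs a deterministic hash of the key; a minimal sketch (the names are illustrative, not the Kafka partitioner interface):

```java
public class IdPartitioner {
    // Routes every message with the same id to the same partition by
    // hashing the key modulo the partition count.
    public static int partitionFor(String id, int numPartitions) {
        // Math.abs overflows on Integer.MIN_VALUE, so mask the sign bit instead.
        return (id.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        System.out.println("user-42 -> partition " + partitionFor("user-42", 8));
    }
}
```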
/0.8.0+SimpleConsumer+Example
Thanks,
Neha
On Wed, May 22, 2013 at 12:37 PM, Chris Curtin curtin.ch...@gmail.com
wrote:
Hi Tim,
On Wed, May 22, 2013 at 3:25 PM, Timothy Chen tnac...@gmail.com wrote:
Hi,
I'm currently trying to understand how Kafka (0.8) can scale with our