Hi all,
I have a Kafka 0.8 cluster of two nodes on the same machine, with 4 partitions,
communicating through a single ZooKeeper instance.
I am producing data using the Kafka Producer using the following code:
KeyedMessage<String, String> data = new KeyedMessage<String, String>(topic,
input);
producer.send(data);
I am able to consume
One thing to note is that we do support controlled shutdown as part of the
regular shutdown hook in the broker. The wiki was not very clear with respect
to this, and I have updated it to convey this. You can turn on controlled
shutdown by setting "controlled.shutdown.enable" to true in the Kafka config.
This will
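In the broker's server.properties, the setting mentioned above would look like this (a minimal fragment; only the property name comes from the message, the file layout is assumed):

```properties
# server.properties -- enable controlled shutdown on the broker
controlled.shutdown.enable=true
```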
Ok thanks - I'll go through this tomorrow.
Joel
On Wed, Jul 10, 2013 at 9:14 PM, Calvin Lei wrote:
> Joel,
> So I was able to reproduce the issue that I experienced. Please see the
> steps below.
> 1. Set up a 3-ZooKeeper and 6-broker cluster. Set up one topic with 2
> partitions, with replication factor set to 3.
It's not ideal - right now we use the JMX operation (which returns an
empty set on a successful controlled shutdown). If not, it returns a
set containing the partitions still led by the broker. We retry
(with appropriate intervals) until it succeeds. After that we do a
regular broker shutdown.
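The retry loop described above can be sketched as follows. This is a self-contained simulation, not the actual Kafka tooling: `invokeShutdown()` is a hypothetical stand-in for the JMX operation, and the retry counts are made up.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class ControlledShutdownRetry {
    static int calls = 0;

    // Stand-in for the JMX "controlled shutdown" operation: returns the set
    // of partitions still led by the broker; an empty set means success.
    // Here it succeeds on the third call, purely for demonstration.
    static Set<String> invokeShutdown() {
        calls++;
        if (calls < 3) {
            Set<String> remaining = new HashSet<String>();
            remaining.add("topic1-0");
            return remaining;
        }
        return Collections.emptySet();
    }

    // Retry with a fixed interval until the broker leads no partitions,
    // then the caller proceeds with the regular broker shutdown.
    static int shutdownWithRetries(int maxRetries, long intervalMs)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            if (invokeShutdown().isEmpty()) {
                return attempt; // controlled shutdown succeeded
            }
            Thread.sleep(intervalMs);
        }
        return -1; // gave up; caller may still do an unclean shutdown
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("succeeded on attempt " + shutdownWithRetries(5, 10));
    }
}
```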
We are already using "zk.connect" to connect to ZooKeeper and have registered
multiple brokers (same topic/partitions), so when a consumer queries ZK, is
load balancing already done?
Thanks
That's right. If this only happens during shutdown, it's not a concern.
Thanks,
Jun
On Wed, Jul 10, 2013 at 10:28 PM, Philip O'Toole wrote:
> Quite possibly. We are doing lots of restarts recovering from various
> issues.
> We can pay more attention next time.
>
> If it is a shutdown scenario
My understanding is that the Sun engineers were concerned that a process
may read data mapped by another process, if unmap is supported.
Thanks,
Jun
On Wed, Jul 10, 2013 at 9:34 AM, Jay Kreps wrote:
> Does anyone understand the discussion on that ticket Sriram posted? It
> sounds like they ha
Quite possibly. We are doing lots of restarts recovering from various issues.
We can pay more attention next time.
If it is a shutdown scenario, I guess we don't need to be that concerned?
Philip
On Jul 10, 2013, at 10:22 PM, Jun Rao wrote:
> That can happen when shutting down the consumer.
That can happen when shutting down the consumer. Is that the case?
Thanks,
Jun
On Wed, Jul 10, 2013 at 6:43 PM, Philip O'Toole wrote:
> Hello -- we're doing some heavy lifting now with our high-level based
> consumer. We open a Consumer Connection per partition within the one JVM,
> and are u
Could you try 0.7.2?
Thanks,
Jun
On Wed, Jul 10, 2013 at 11:38 AM, Sybrandy, Casey <
casey.sybra...@six3systems.com> wrote:
> Hello,
>
> Apologies for bringing this back from the dead, but I'm getting the same
> exception using Kafka 0.7.0. What could be causing this?
>
> Thanks.
>
> Casey
>
That's actually not expected. We should only return live brokers to the
client. It seems that we never clear the live broker cache in the brokers.
This is a bug. Could you file a JIRA?
Thanks,
Jun
On Wed, Jul 10, 2013 at 8:52 AM, Vinicius Carvalho <
viniciusccarva...@gmail.com> wrote:
> Hi the
This is not supported in the 0.7 version; see this thread for further details.
http://search-hadoop.com/m/4TaT4dTQDe2/hussain/v=threaded
Thanks,
Hussain
-Original Message-
From: Hisham Mardam-Bey [mailto:his...@mate1inc.com]
Sent: Thursday, July 11, 2013 9:49 AM
To: users@kafka.apache.org
Su
Joel, how do you guys do Kafka service shutdown and startup?
On Wed, Jul 10, 2013 at 5:32 PM, Joel Koshy wrote:
> https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools
> has more details. The ShutdownBroker tool does not return anything.
> i.e., it does not exit with a System.exit code to pass back to a shell.
Hey guys,
In 0.7 how can one identify the broker and partition that a message was
pulled from in the high level consumer?
Thanks!
hmb.
--
Hisham Mardam-Bey
Thanks Jay. We will still suffer from network latency if we use remote
write.
We probably will explore more on the idea of having local cluster and
mirror messages across the DC.
thanks,
Cal
On Wed, Jul 10, 2013 at 12:04 PM, Jay Kreps wrote:
> To publish to a remote data center just configure
Joel,
So I was able to reproduce the issue that I experienced. Please see the
steps below.
1. Set up a 3-ZooKeeper and 6-broker cluster. Set up one topic with 2
partitions, with replication factor set to 3.
2. Set up and run the console consumer, consuming messages from that topic.
3. Produce a fe
Hello -- we're doing some heavy lifting now with our high-level based
consumer. We open a Consumer Connection per partition within the one JVM,
and are using Kafka 0.7.2. We saw a burst of the exceptions shown below. Is
this something we should be concerned about? Or is this the normal output
from r
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools
has more details. The ShutdownBroker tool does not return anything.
i.e., it does not exit with a System.exit code to pass back to a
shell. It only logs whether controlled shutdown was complete or not. You
will need to configure the num
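For reference, invoking the ShutdownBroker tool from the command line looks roughly like this (a sketch based on the replication-tools wiki linked above; the ZooKeeper address and broker id are placeholders, and flag names may differ in your build):

```sh
bin/kafka-run-class.sh kafka.admin.ShutdownBroker \
  --zookeeper localhost:2181 \
  --broker 1 \
  --num.retries 3 \
  --retry.interval.ms 60000
```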
We have deployed Kafka 0.8 beta1. It was my understanding that the
ShutdownBroker program needs to be used to initiate a proper shutdown of the
server. We are going to use this script in an automated fashion. Does the
script return a meaningful error code that can be captured by the calling
script and acted upon? What
Hello,
Apologies for bringing this back from the dead, but I'm getting the same
exception using Kafka 0.7.0. What could be causing this?
Thanks.
Casey
-Original Message-
From: Jun Rao [mailto:jun...@gmail.com]
Sent: Tuesday, March 12, 2013 12:14 AM
To: users@kafka.apache.org
Subject:
Also, just so that we are on the same page. I assume that you used the
following api. Did you just put in one topic in the topicCountMap?
def createMessageStreams(topicCountMap: Map[String,Int]): Map[String,
List[KafkaStream[Array[Byte],Array[Byte]]]]
Thanks,
Jun
On Wed, Jul 10, 2013 at 8:30 A
Does anyone understand the discussion on that ticket Sriram posted? It
sounds like they have an unmap call but they appear to be concerned about
protecting threads from one another--i.e. if one thread unmapped the file
and another mapped a different file it would show up in the old memory
mapping.
The weird part is this: if the consumers are consuming, the fetcher
thread shouldn't be blocked on enqueuing the data. Could you turn
on TRACE level logging in kafka.server.KafkaRequestHandlers and see if there
are any fetch requests issued to the broker when the consumer threads get
stuck?
To publish to a remote data center just configure the producers with the
host/port of the remote datacenter. To ensure good throughput you may want
to tune the socket send and receive buffers on the client and server to
avoid small roundtrips:
http://en.wikipedia.org/wiki/Bandwidth-delay_product
-
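As a rough illustration of the bandwidth-delay product mentioned above (the link speed and RTT are made-up example numbers): a 100 Mbit/s cross-DC link with a 100 ms round trip keeps about 1.25 MB in flight, so socket buffers smaller than that will cap throughput.

```java
public class BdpExample {
    // Bandwidth-delay product: the number of bytes "in flight" on a link.
    // Socket send/receive buffers below this value limit throughput.
    static long bdpBytes(long bitsPerSecond, double rttSeconds) {
        return (long) (bitsPerSecond / 8 * rttSeconds);
    }

    public static void main(String[] args) {
        long bdp = bdpBytes(100_000_000L, 0.100); // 100 Mbit/s, 100 ms RTT
        System.out.println(bdp + " bytes in flight"); // 1250000 bytes
    }
}
```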
Hi there. Once again, I don't think I could get the docs on another topic.
So my nodejs client connects to the broker and the first thing it does is
store the topic metadata:
data received
{
"brokers": [
{
"nodeId": 0,
"host": "10.139.245.106",
"por
Thanks Jun
On Wed, Jul 10, 2013 at 12:17 AM, Jun Rao wrote:
> For 1, you will get a response with an error.
>
> For 2, a partition # has to be specified. If it is incorrect, you will get
> a response with an error.
>
> Thanks,
>
> Jun
>
>
> On Tue, Jul 9, 2013 at 11:58 AM, Vinicius Carvalho <
>
Hi Jun,
Thanks for helping out so far.
As per your explanation we are doing exactly as you have mentioned in your
workaround below.
> A workaround is to use different consumer connectors, each consuming a
> single topic.
Here is the problem...
We have a topic which gets a lot of events (arou
Ok. One of the issues is that when you have a consumer that consumes
multiple topics, if one of the consumer threads is slow in consuming
messages from one topic, it can block the consumption of other consumer
threads. This is because we use a shared fetcher to fetch all topics. There
is an in-memo
Thanks very much for digging in! I was a tad concerned about that
approach, but I'm in the process of testing that idea out along with some
other more dramatic ideas ;). Will keep you updated - thanks again!
On 7/10/13 7:55 AM, "Jun Rao" wrote:
> From that link, one workaround is to set the buffer
From that link, one workaround is to set the buffer to null and force a GC.
Not sure if that's a good idea though.
Thanks,
Jun
On Tue, Jul 9, 2013 at 10:13 PM, Sriram Subramanian <
srsubraman...@linkedin.com> wrote:
> As far as I am aware it is not possible to resize mapped buffer without
> u
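A minimal standalone sketch of the null-and-GC workaround discussed above (not Kafka code; the file and sizes are made up). Note there is no guarantee the GC actually runs the mapping's cleaner, which is why it may not be a good idea:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class UnmapByGc {
    // Maps a temp file, drops the only reference to the mapping, and
    // forces a GC in the hope that the buffer's cleaner unmaps it.
    static void mapAndDrop() throws Exception {
        File f = File.createTempFile("mmap-demo", ".dat");
        f.deleteOnExit();
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        raf.setLength(4096);
        MappedByteBuffer buf = raf.getChannel()
                .map(FileChannel.MapMode.READ_WRITE, 0, 4096);
        buf.put(0, (byte) 42);
        raf.close(); // closing the file does NOT unmap the buffer

        // The workaround from the thread: null the reference and force GC.
        // Nothing guarantees when (or whether) the unmap actually happens.
        buf = null;
        System.gc();
    }

    public static void main(String[] args) throws Exception {
        mapAndDrop();
    }
}
```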
Thanks Jay. I thought of using the worldview architecture you suggested.
But since our consumers are also globally deployed, any new messages
arriving in the worldview need to be replicated back to the local DCs,
making the topology a bit complicated.
Would you please elaborate on the remo
Thanks Ian.
Is your consumer multi-threaded? If so, can you share how you coordinated
each of the threads so you knew it was 'okay' to commit across all the
threads? I'm stuck on how to do this without really complicating the
consumer.
Thanks,
Chris
On Tue, Jul 9, 2013 at 5:51 PM, Ian Friedman