It seems like nohup has solved this issue; even when the PuTTY window
becomes inactive, the processes keep running (I didn't need to
interact with them). I might look into using screen or tmux as a
long-term solution.
Thanks Terry and Mike!
Best,
Su
On Tue, Jun 16, 2015 at 3:42 PM, Terry Ba
Hi,
While testing message delivery using Kafka, I noticed that a few duplicate
messages got delivered by the consumers in the same consumer group (two
consumers got the same message a few milliseconds apart). However,
I do not see any redundancy at the producer or broker. One more observat
Hi,
While investigating a Kafka data discrepancy, I came across the
recurring errors below:
producer.log
>
2015-06-14 13:06:25,591 WARN [task-thread-9]
> (k.p.a.DefaultEventHandler:83) - Produce request with correlation id 624
> failed due to [mytopic,21]: kafka.common.NotLeader
Greetings,
nohup does the trick, as Mr. Bridge has shared. If you want to run
these and still have some interactivity with
the services, consider using "screen" or "tmux"; these will let you
run the programs in the foreground, with additional
windows you can use to access a shell, tail lo
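The screen approach above can be sketched roughly as follows; the session names are illustrative, and the commands assume you are in the Kafka install directory:

```shell
# Start each service in its own detached screen session:
screen -dmS zookeeper bin/zookeeper-server-start.sh config/zookeeper.properties
screen -dmS kafka bin/kafka-server-start.sh config/server.properties

# Reattach later to inspect a service interactively:
screen -r kafka        # detach again with Ctrl-a d
```

Both sessions survive the SSH/PuTTY connection dropping, and unlike nohup you can reattach at any time to interact with the running process.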
Have you tried using "nohup"?
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
nohup bin/kafka-server-start.sh config/server.properties &
On Tue, Jun 16, 2015 at 3:21 PM, Su She wrote:
> Hello Everyone,
>
> I'm wondering how to keep Zookeeper and Kafka Server up even wh
Running this seems to indicate that there is a leader at 0:
$ ./bin/kafka-topics.sh --zookeeper MY.EXTERNAL.IP:2181 --describe --topic test123
Topic:test123  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: test123  Partition: 0  Leader: 0  Replicas: 0  Isr: 0
I reran this test and my
Hello Everyone,
I'm wondering how to keep ZooKeeper and the Kafka server up even when my
SSH session (using PuTTY) becomes inactive. I've tried running them in the
background (using &), but they sometimes stop after a
couple of hours or so, and I have to restart ZooKeeper and/or the Kafka
server.
T
The topic warning is a bug (i.e., the fact that you get a warning on a
perfectly valid parameter). We fixed it for the next release.
It is also unrelated to the real issue with the LeaderNotAvailable
On Tue, Jun 16, 2015 at 2:08 PM, Mike Bridge wrote:
> I am able to get a simple one-node Kafka (kafka_2.
I am able to get a simple one-node Kafka (kafka_2.11-0.8.2.1) working
locally on one Linux machine, but when I try to run a producer remotely I'm
getting some confusing errors.
I'm following the quickstart guide at
http://kafka.apache.org/documentation.html#quickstart. I stopped the kafka
process
Ah :)
See how the first replica in your replicas list is always either 1 or 2?
This means that after re-assignment, this will be the leader (and the
preferred leader) for these partitions.
Which means that Kafka will keep trying to rebalance leaders to those
replicas (since they are preferred). Yo
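The preferred-leader behavior described above can be checked and nudged from the shell. A minimal sketch, assuming ZooKeeper at localhost:2181 and a placeholder topic name:

```shell
# The first broker id in each Replicas list is the preferred leader
# for that partition:
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic mytopic

# Ask the controller to move leadership back to the preferred replicas
# now, rather than waiting for the automatic rebalance:
bin/kafka-preferred-replica-election.sh --zookeeper localhost:2181
```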
Hello Jin,
You can subscribe following these instructions:
http://kafka.apache.org/contact.html
Guozhang
On Tue, Jun 16, 2015 at 11:31 AM, Jin Wang wrote:
--
-- Guozhang
Should not matter. We're running 12.04.
Wes
On Jun 16, 2015 12:18 PM, "Henry Cai" wrote:
> Does it still matter whether we are using Ubuntu 14 or 12?
>
> On Tue, Jun 16, 2015 at 8:44 AM, Wesley Chow wrote:
>
> >
> > A call with Amazon confirmed instability for d2 and c4 instances
> triggered
>
Does it still matter whether we are using Ubuntu 14 or 12?
On Tue, Jun 16, 2015 at 8:44 AM, Wesley Chow wrote:
>
> A call with Amazon confirmed instability for d2 and c4 instances triggered
> by lots of network activity. They fixed the problem and have since rolled
> it out. We've been running K
Hi!
I'm trying to test this fix:
https://issues.apache.org/jira/browse/KAFKA-847 (and use Log4j Kafka
appender) but I have some problems.
Here are my pom dependencies:
<dependency>
  <groupId>log4j</groupId>
  <artifactId>log4j</artifactId>
  <version>1.2.17</version>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.9.2</artifactId>
  <version>0.8.2.0</version>
</dependency>
and my log4j.properties file:
log4j.r
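For reference, a minimal log4j.properties wiring for the Kafka appender might look like the sketch below; the class and property names follow the 0.8.2-era KafkaLog4jAppender as I understand it, and the broker address and topic are placeholders:

```properties
log4j.rootLogger=INFO, KAFKA
log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
log4j.appender.KAFKA.brokerList=localhost:9092
log4j.appender.KAFKA.topic=logs
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d %-5p %m%n
```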
A call with Amazon confirmed instability for d2 and c4 instances triggered
by lots of network activity. They fixed the problem and have since rolled
it out. We've been running Kafka with d2's for a little while now and so
far so good.
Wes
On Tue, Jun 2, 2015 at 1:39 PM, Wes Chow wrote:
>
> We
OK, I got your point. Currently we check the log segment constraints
(segment.bytes, segment.ms)
only before appending new messages, so we will not create a new log segment
until new data comes in.
In your case, your approach (sending a periodic dummy/ping message) should be
fine.
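As a hedged illustration of those constraints, segment.ms can be lowered per topic so a segment rolls sooner once data does arrive; the topic name and ZooKeeper address are placeholders:

```shell
# Roll a new segment after roughly an hour's worth of appends. Note that,
# per the discussion above, this check only runs when a message is
# appended, so an idle topic still will not roll.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic mytopic \
  --config segment.ms=3600000
```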
On Tue, Jun 16, 201
Thank you for the response!
Unfortunately, those improvements would not help. It is the lack of
activity (and therefore of a new segment) that prevents compaction.
I was confused by what qualifies as the active segment. The active segment
is the last segment, as opposed to the segment that would be wri
Hi,
Your observation is correct. We never compact the active segment.
Some improvements are proposed here,
https://issues.apache.org/jira/browse/KAFKA-1981
Manikumar
On Tue, Jun 16, 2015 at 5:35 PM, Shayne S wrote:
> Some further information, and is this a bug? I'm using 0.8.2.1.
>
>
Hi Gwen,
sure, the following commands were executed:
./kafka-reassign-partitions.sh --zookeeper XXX --reassignment-json-file
~/partition_redist.json --execute
./kafka-reassign-partitions.sh --zookeeper XXX --reassignment-json-file
~/partition_redist.json --verify
The contents of partition_redist
Some further information, and is this a bug? I'm using 0.8.2.1.
Log compaction will only occur on the non-active segments. Intentional or
not, it seems that the last segment is always the active segment. In other
words, an expired segment will not be cleaned until a new segment has been
created
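The workaround discussed in this thread (a periodic dummy message forcing a segment roll) can be sketched as below; the broker address and topic are placeholders, and on a compacted topic the record would also need a key:

```shell
# Append a throwaway record every hour so the active segment eventually
# rolls and the older segments become eligible for compaction.
while true; do
  echo "__ping__" | bin/kafka-console-producer.sh \
    --broker-list localhost:9092 --topic mytopic
  sleep 3600
done
```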
Found out that there is a standard API for retrieving and committing offsets
(see
https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka
)
The problem is that the server/broker side is not extensible (see
https://github.com/apache/kafka/blob/trunk/core/src/ma