and msg2 may not be sent by one broker.
2015-11-21 7:56 GMT+08:00 Guozhang Wang <wangg...@gmail.com>:
> Yonghui,
>
> What is the ack mode for the producer clients? And are msg1 and msg2 sent
> by the same producer?
>
> Guozhang
>
> On Thu, Nov 19, 2015 at 10
no coordination between consumers, hence is lighter-weight.
>
>
> Guozhang
>
>
> On Thu, Nov 19, 2015 at 1:15 AM, Yonghui Zhao <zhaoyong...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I found in 0.9,
> >
> > def commitOffsets(offsetsToCommit: j
and retried.
> d. msg2 acked.
> e. msg1 acked.
>
> Assuming you meant "msg1 and msg2 may not be sent by one producer", out-of-order
> delivery can happen even more easily, as the messages may arrive at the broker
> over different sockets in arbitrary order.
>
> Guozhang
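The ack mode Guozhang asks about is the 0.8-era producer setting. A typical sync-producer configuration looks like this (the property names are the real 0.8 producer configs; the values are only examples):

```
# producer.properties (0.8.x producer; values are illustrative)
producer.type=sync
request.required.acks=1     # 0 = fire-and-forget, 1 = leader ack, -1 = all in-sync replicas
message.send.max.retries=3  # internal retries before send() throws
```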
Hi,
I found in 0.9,
def commitOffsets(offsetsToCommit: java.util.Map[TopicAndPartition,
OffsetAndMetadata], retryOnFailure: Boolean)
is added to ZookeeperConsumerConnector.
If I want to read data backwards or forwards, can I use this API to commit
a new offset and then read data from the new
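The intended rewind flow would look roughly like this (pseudocode only; whether the fetchers pick up the committed offset without a consumer restart is exactly the open question in this thread, and the connector name is taken from the quoted 0.9 signature):

```
# pseudocode sketch, not verified API usage
stop the consumer streams (so no fetcher is reading)
offsets = { TopicAndPartition(topic, p) -> OffsetAndMetadata(newOffset) for each partition p }
connector.commitOffsets(offsets, retryOnFailure = true)
restart the consumer streams   # fetchers should resume from the committed offsets
```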
Broker setting is: 8 partitions, 1 replica, kafka version 0.8.1
We send 2 messages at almost the same time,
Msg1 first, Msg2 second.
We have more than one producer in sync mode.
We may send msg1 to one broker, and after *producer.send returns a response*,
send msg2 to the other broker.
Both messages have
Hi,
How about this feature? thanks
*We do plan to allow the high level consumer to specify a
starting offset in the future when we revisit the consumer design. Some of
the details are described in
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Client+Re-Design
or maybe deleting
these log files? For more context to this question, please read the
discussion related to this here:
http://mail-archives.apache.org/mod_mbox/kafka-dev/201501.mbox/%3C54C47E9B.5060401%40gmail.com%3E
-Jaikiran
On Thursday 08 January 2015 11:19 AM, Yonghui Zhao wrote:
CentOS
0.8.1.1 we have 2 brokers. broker-0 and broker-1
2015-01-20 8:43 GMT+08:00 Guozhang Wang wangg...@gmail.com:
Yonghui, which version of Kafka are you using? And does your cluster only
have one (broker-0) server?
Guozhang
On Sat, Jan 17, 2015 at 11:53 PM, Yonghui Zhao zhaoyong...@gmail.com
Hi,
our Kafka cluster shut down automatically today; here is the log.
I don't find any error log. Anything wrong?
[2015-01-18 05:01:01,788] INFO [BrokerChangeListener on Controller 0]:
Broker change listener fired for path /brokers/ids with children 0
6, 2015 at 4:46 AM, Yonghui Zhao zhaoyong...@gmail.com
wrote:
Hi,
We use kafka_2.10-0.8.1.1 on our server. Today we got a disk space alert.
We found many Kafka data files deleted but still held open by Kafka,
such as:
_yellowpageV2-0/68170670.log (deleted)
java
CentOS release 6.3 (Final)
2015-01-07 22:18 GMT+08:00 Harsha ka...@harsha.io:
Yonghui,
Which OS are you running?
-Harsha
On Wed, Jan 7, 2015, at 01:38 AM, Yonghui Zhao wrote:
Yes, and I found the reason: the rename during deletion failed.
While the rename was in progress, the files were deleted
Hi,
We use kafka_2.10-0.8.1.1 on our server. Today we got a disk space alert.
We found many Kafka data files deleted but still held open by Kafka,
such as:
_yellowpageV2-0/68170670.log (deleted)
java 8446 root 724u REG 253,2 536937911 26087362
Hi,
For a non-existent topic, the consumer and producer are set up.
Then if the producer sends the first message, producer gets this exception:
[2014-11-12 16:24:28,041] WARN Error while fetching metadata
[{TopicMetadata for topic test5 -
No partition metadata for topic test5 due to
to resolve this issue.
Guozhang
On Wed, Nov 12, 2014 at 12:35 AM, Yonghui Zhao zhaoyong...@gmail.com
wrote:
Hi,
For a non-existent topic, the consumer and producer are set up.
Then if the producer sends the first message, producer gets this
exception:
[2014-11-12 16:24:28,041
Hi,
In http://kafka.apache.org/08/configuration.html, there are 2 parameters
about ZooKeeper in the consumer.
zookeeper.session.timeout.ms 6000 Zookeeper session timeout. If the
consumer fails to heartbeat to zookeeper for this period of time it is
considered dead and a rebalance will occur.
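For reference, these are the two ZooKeeper-related consumer settings most likely meant here, as they would appear in a 0.8 consumer config (the values shown are the documented defaults):

```
# consumer.properties (0.8)
zookeeper.session.timeout.ms=6000     # consumer declared dead if no ZK heartbeat within this window
zookeeper.connection.timeout.ms=6000  # max time to wait while establishing the ZK connection
```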
-based distributed logic into a centralized
coordinator. Details can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+0.9+Consumer+Rewrite+Design
Guozhang
On Mon, May 12, 2014 at 12:48 AM, Yonghui Zhao zhaoyong...@gmail.com
wrote:
Hi,
We are using kafka 0.7
If l use java producer api in sync mode.
public void send(kafka.producer.KeyedMessage<K,V> message) { /* compiled
code */ }
How to know whether a send process is successful or failed?
For example, if the kafka broker disk is not accessible, will it throw
exceptions?
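In 0.7 sync mode the producer retries internally and throws an exception once its attempts are exhausted, so the caller learns of failure via that exception. A generic caller-side retry wrapper can be sketched like this (stdlib only; the names `RetrySend`/`withRetries` are illustrative, not Kafka API):

```java
import java.util.concurrent.Callable;

// Generic retry helper: attempts the task up to maxRetries times and
// rethrows the last failure (wrapped) if every attempt fails. This only
// illustrates the caller-side pattern around a sync send; the 0.7
// producer also retries internally before throwing.
public class RetrySend {
    public static <T> T withRetries(Callable<T> task, int maxRetries) {
        RuntimeException last = new RuntimeException("maxRetries must be > 0");
        for (int i = 0; i < maxRetries; i++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = new RuntimeException("attempt " + (i + 1) + " failed", e);
            }
        }
        throw last; // all attempts failed
    }
}
```

A real `producer.send(...)` call would go inside the `Callable` body.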
Hi,
We are using kafka 0.7.
2 brokers, each broker has 10 partitions for one topic.
3 consumers in one consumer group; each consumer creates 10 streams.
Today, when we wanted to roll out a new service,
we found exceptions and warnings after restarting one consumer.
In kafka 0.8.1, when I shut down high level consumer, I found one exception.
I think it is expected, right?
14/04/19 21:12:47 INFO consumer.ZookeeperConsumerConnector:
[1_yozhao-ubuntu-1397913156352-9b13f962], ZKConsumerConnector shutting down
14/04/19 21:12:47 INFO zookeeper.ClientCnxn: Client
Hi
I am trying kafka 0.8.1, and use zkclient version:
<dependency>
  <groupId>com.101tec</groupId>
  <artifactId>zkclient</artifactId>
  <version>0.3</version>
</dependency>
But I found exceptions in the console. Is the zkclient version wrong?
*java.lang.NoSuchMethodError:
In my environment, I have 2 brokers and only 1 topic; each broker has 10
partitions,
so there are 20 partitions in total.
I have 4 consumers in one consumer group; each consumer uses
createMessageStreams to create 10 streams, 40 streams in total.
Since a partition cannot be split, there are
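With 20 partitions and 40 streams, only 20 streams can own a partition and the other 20 sit idle, since a partition goes to exactly one stream. The arithmetic can be sketched with a simplified model of range-style assignment (stdlib only; `RangeAssign` is an illustrative name, not Kafka's assignor):

```java
// Simplified range-style assignment: partitions are divided evenly
// across streams; when streams outnumber partitions, the surplus
// streams receive nothing. Models the thread's case: 20 partitions,
// 40 streams -> 20 streams with 1 partition each, 20 idle.
public class RangeAssign {
    // returns counts[streamIndex] = number of partitions that stream owns
    public static int[] assign(int partitions, int streams) {
        int[] counts = new int[streams];
        int perStream = partitions / streams; // 0 when streams > partitions
        int extra = partitions % streams;     // first `extra` streams get one more
        for (int i = 0; i < streams; i++) {
            counts[i] = perStream + (i < extra ? 1 : 0);
        }
        return counts;
    }
}
```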
Hi,
In kafka 0.8, does high level consumer support setting offset?
Our service reads Kafka data but doesn't flush it immediately, so if
the service is restarted, the data in memory will be lost. We want to reset
the Kafka consumer offset to an older offset.
If the consumer group has only 1 machine, we can
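In 0.8 the high-level consumer keeps its committed offsets in ZooKeeper under `/consumers/<group>/offsets/<topic>/<partition>`, so with the whole group shut down you can overwrite that znode and restart. A small helper for building the path (the path layout is the standard 0.8 one; the `OffsetPath` class itself is illustrative):

```java
// Builds the ZooKeeper znode path where the 0.8 high-level consumer
// stores the committed offset for one partition. Writing a new value to
// this node while the consumer group is stopped resets the offset.
public class OffsetPath {
    public static String of(String group, String topic, int partition) {
        return "/consumers/" + group + "/offsets/" + topic + "/" + partition;
    }
}
```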
Hi,
In kafka 0.7.2, we use 2 brokers and 12 consumers.
I have 2 questions:
1. It seems that if one broker is restarted, all consumers need to restart
to reconnect to Kafka. Right?
2. How to monitor whether the consumer connection with the broker is
healthy?
Thank you very much.
to try this.
Best
Guodong
On Thu, May 30, 2013 at 11:07 AM, Yonghui Zhao zhaoyong...@gmail.com
wrote:
Hi,
I am using kafka 0.7.2; have you seen this exception? What's the possible
reason?
2013/05/29 19:18:19.325 ERROR [KafkaRequestHandlers] [] Error processing
ProduceRequest
Hi,
I am using kafka 0.7.2; have you seen this exception? What's the possible
reason?
2013/05/29 19:18:19.325 ERROR [KafkaRequestHandlers] [] Error processing
ProduceRequest on gallery:0
kafka.message.InvalidMessageException: message is invalid, compression
codec: NoCompressionCodec size: 222
Hi,
I want to confirm: is kafka.javaapi.producer.Producer thread-safe?
I.e., can I use one producer to send data from multiple threads at the same time?
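The sharing pattern in question looks like the sketch below. Since no broker is available here, a thread-safe queue stands in for the shared producer instance; the point is that a single shared object receives sends from many worker threads (all names are illustrative):

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Pattern for sharing one producer across threads: one shared,
// thread-safe object (a ConcurrentLinkedQueue standing in for the
// producer) receives sends from a pool of worker threads.
public class SharedSender {
    public static int sendFromThreads(int threads, int msgsPerThread) {
        ConcurrentLinkedQueue<String> producerStandIn = new ConcurrentLinkedQueue<>();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            final int id = t;
            pool.submit(() -> {
                for (int m = 0; m < msgsPerThread; m++) {
                    producerStandIn.add("msg-" + id + "-" + m); // shared "producer.send"
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return producerStandIn.size(); // total messages "sent"
    }
}
```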
cause. So, you will
need to tune the GC setting further. Another way to avoid ZK session
timeout is to increase the session timeout config.
Thanks,
Jun
On Wed, Mar 27, 2013 at 8:35 PM, Yonghui Zhao zhaoyong...@gmail.com
wrote:
Now I used GC like this:
-server -Xms1536m -Xmx1536m
frequent your GCs are.
Thanks,
Jun
On Thu, Mar 28, 2013 at 12:23 AM, Yonghui Zhao zhaoyong...@gmail.com
wrote:
I used zookeeper-3.3.4 in kafka.
Default tickTime is 3 seconds, so minSessionTimeout is 6 seconds.
Now I changed tickTime to 5 seconds and minSessionTimeout to 10 seconds
in
the broker as well.
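ZooKeeper's minimum session timeout is 2 × tickTime, which is where the numbers above come from. In zoo.cfg terms (values follow the thread; check your own deployment's defaults):

```
# zoo.cfg (ZooKeeper 3.3.x)
tickTime=5000   # ms; minimum session timeout = 2 * tickTime = 10000 ms
# with the previous tickTime of 3000 ms, the minimum was 6000 ms
```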
Thanks,
Jun
On Thu, Mar 28, 2013 at 8:20 AM, Yonghui Zhao zhaoyong...@gmail.com
wrote:
Thanks Jun.
But I can't understand how a consumer GC pause can trigger this Kafka server issue:
java.lang.RuntimeException: A broker is already registered on the path
/brokers/ids/0. This probably
, Yonghui Zhao wrote:
Any suggestion on consumer side?
On 2013-3-25 at 9:49 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:
For Kafka 0.7 in production at Linkedin, we use a heap of size 3G,
new
gen
256 MB, CMS collector with occupancy of 70%.
Thanks,
Neha
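Neha's settings translate to JVM flags roughly like the following (the flag names are standard HotSpot options; treat the exact values as the LinkedIn example, not a universal recommendation):

```
# 3 GB heap, 256 MB new gen, CMS collector at 70% old-gen occupancy
-Xms3g -Xmx3g -Xmn256m
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=70
-XX:+UseCMSInitiatingOccupancyOnly
```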
Done, https://issues.apache.org/jira/browse/KAFKA-824
2013/3/25 Neha Narkhede neha.narkh...@gmail.com
On the Kafka JIRA - https://issues.apache.org/jira/browse/KAFKA
Thanks,
Neha
On Sunday, March 24, 2013, Yonghui Zhao wrote:
Sure. no problem. Where should I file the bug?
2013/3
logs in the consumer saying something like an expired ZK session.
Occasional rebalances are fine. Too many rebalances can slow down the
consumption and you will need to tune your GC setting.
Thanks,
Jun
On Thu, Mar 21, 2013 at 11:07 PM, Yonghui Zhao zhaoyong...@gmail.com
wrote:
Yes, before
Any suggestion on consumer side?
On 2013-3-25 at 9:49 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:
For Kafka 0.7 in production at Linkedin, we use a heap of size 3G, new gen
256 MB, CMS collector with occupancy of 70%.
Thanks,
Neha
On Sunday, March 24, 2013, Yonghui Zhao wrote:
Hi Jun,
I
get away with using CMS for the tenured generation and parallel
collector for the new generation with a small heap like 1gb or so.
Thanks,
Neha
On Monday, March 25, 2013, Yonghui Zhao wrote:
Any suggestion on consumer side?
On 2013-3-25 at 9:49 PM, Neha Narkhede <neha.narkh...@gmail.com>
is triggered or the client is
being shut down. Do you mind filing a bug?
Thanks,
Neha
On Sunday, March 24, 2013, Yonghui Zhao wrote:
Have you ever seen this exception?
2013/03/25 12:08:32.020 WARN [ZookeeperConsumerConnector] []
0_lu-ml-test10.bj-1364184411339-7c88f710 exception during
rebalances can slow down the
consumption and you will need to tune your GC setting.
Thanks,
Jun
On Thu, Mar 21, 2013 at 11:07 PM, Yonghui Zhao zhaoyong...@gmail.com wrote:
Yes, before consumer exception:
2013/03/21 12:07:17.909 INFO [ZookeeperConsumerConnector] []
0_lg-mc-db01.bj
fails?
Thanks,
Jun
On Tue, Mar 19, 2013 at 1:34 AM, Yonghui Zhao zhaoyong...@gmail.com
wrote:
Connection reset exception reproduced.
[2013-03-19 16:30:45,814] INFO Closing socket connection to /127.0.0.1.
(kafka.network.Processor)
[2013-03-19 16:30:55,253] ERROR Closing socket