Hi,
I would like to know if there is a way/configuration to keep connections
from the producer/client to the broker alive.
I observed that the connection is getting closed every time, which takes
considerable time.
Can you please suggest?
Thanks & Regards,
Neeraja
Thanks, Christian
On Thu, Nov 13, 2014 at 9:50 AM, cac...@gmail.com cac...@gmail.com wrote:
I used the 0.8.2 producer in a 0.8.1 cluster in a nonproduction
environment. No problems to report; it worked great, but my testing at that
time was not particularly extensive for failure scenarios.
@Neha, Can you share suggested consumer side GC settings?
On Wed, Nov 12, 2014 at 5:31 PM, Neha Narkhede neha.narkh...@gmail.com
wrote:
Does this help?
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog
On Wed, Nov 12, 2014 at 3:53 PM, Chen
Hello there:
I'd like to be added to the mailing list
Thanks,
--
Rohit Pujari
Solutions Engineer, Hortonworks
rpuj...@hortonworks.com
716-430-6899
Thanks a lot Guozhang, I've now upgraded to 0.8.2-beta and the issue
seems to be gone.
András
On 11/3/2014 4:45 PM, Guozhang Wang wrote:
Hi Andras,
Could you try 0.8.2-beta and see if this issue comes out again? We fixed a
couple of the purgatory issues (e.g. KAFKA-1616
Hi,
I would like to know how to configure Kafka across a firewall.
We have a firewall that filters connections based on protocol, and there
are security checks in place.
Our broker (0.8.1) sits behind the firewall, so producers sending data to the
broker have to go through these checks.
Please suggest.
Rohit,
Please send a mail to users-subscr...@kafka.apache.org.
More info here: http://kafka.apache.org/contact.html.
-Harsha
On Wed, Nov 12, 2014, at 09:20 PM, Rohit Pujari wrote:
Hello there:
I'd like to be added to the mailing list
Thanks,
--
Rohit Pujari
Solutions Engineer,
Hey Chen,
As Neha suggested, the typical reason for too many rebalances is that your
consumers keep being timed out by ZK. You can verify this by checking
your consumer logs for entries like "session timeout" (these are
not ERROR entries).
Guozhang
On Wed, Nov 12, 2014 at 5:31
Kafka uses a binary protocol over TCP.
To allow Kafka traffic, you should open every brokerIP:port in the
firewall.
kumar
On Thu, Nov 13, 2014 at 9:47 PM, Kothakota, Neeraja (HP Software)
neeraj...@hp.com wrote:
Hi,
I would like to know the way to configure kafka over firewall.
We have
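To make kumar's suggestion concrete, here is a minimal iptables sketch for one broker. Everything specific in it is an assumption: 9092 is only Kafka's conventional default port, and the 10.0.0.0/24 producer subnet is purely hypothetical; substitute your own values.

```shell
# Allow inbound Kafka traffic from the producer subnet to this broker's port.
# 9092 and 10.0.0.0/24 are placeholder values -- use your broker's configured
# port and your producers' actual address range.
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 9092 -j ACCEPT
```

Repeat the rule for every broker host/port in the cluster; consumers and replica-fetching brokers need the same ports reachable as well.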
Thanks for the info.
It makes sense; however, I didn't see any session timeout/expired
entries in the consumer log,
but I do see lots of "Closed socket" entries in the zookeeper log:
2014-11-13 10:07:53,132 [myid:1] - INFO [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2182:NIOServerCnxn@1007] - Closed socket
@Neha, Can you share suggested consumer side GC settings?
Consumer-side GC settings are not standard, since they are a function of the
application that embeds the consumer. Your consumer application's memory
patterns will dictate your GC settings. Sorry, I know that's not very
helpful, but GC tuning
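For readers looking for somewhere to begin despite the above caveat, here is one hedged, illustrative starting point for launching a JVM-based consumer in that era (CMS collector). Every flag, heap size, and the class name com.example.MyConsumer are assumptions to be tuned or replaced against your own application's heap profile, not recommendations from this thread.

```shell
# Hypothetical starting point only -- measure your own application's GC
# behavior before adopting any of these flags.
java -Xms1g -Xmx1g \
     -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled \
     -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
     -cp my-consumer.jar com.example.MyConsumer
```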
Chen,
From the ZK logs it sounds like ZK kept timing out consumers, which
triggers rebalances.
What is the zk session timeout config value in your consumers?
Guozhang
On Thu, Nov 13, 2014 at 10:15 AM, Chen Wang chen.apache.s...@gmail.com
wrote:
Thanks for the info.
It makes sense, however, I
Neeraja,
The producer does use keep-alive connections to the brokers, but a recent
change introduced in the broker actively closes connections if the broker
has not received any requests from the producer for some time. The default
period is 10 min; you can set it to INT_MAX if you do not want this
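For anyone following along on trunk, the broker setting this refers to appears to be the one introduced by KAFKA-1282; the name and default below are my understanding, so verify them against your build before relying on them:

```properties
# server.properties: the broker closes connections idle longer than this.
# Default is 10 minutes (600000 ms); raise it toward Int.MaxValue if you
# want idle producer connections kept open longer.
connections.max.idle.ms=600000
```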
@Guozhang:
In server.properties we have :
zookeeper.connection.timeout.ms=100
In zoo.cfg we have
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/logs
clientPort=2182
server.1=.com:2888:3888
server.2=.com:2888:3888
I was originally asking about consumer configs, which should contain the
following:
http://kafka.apache.org/documentation.html#consumerconfigs
zookeeper.session.timeout.ms
zookeeper.connection.timeout.ms
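In other words, the timeouts being asked about live in the consumer's own properties, not in server.properties or zoo.cfg. A sketch of where they would go (the values shown are the documented 0.8 defaults, used here only as illustration):

```properties
# Consumer-side ZooKeeper timeouts (consumer configuration, not broker/ZK)
zookeeper.session.timeout.ms=6000
zookeeper.connection.timeout.ms=6000
```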
On Thu, Nov 13, 2014 at 10:40 AM, Manish maa...@gmail.com wrote:
@Guozhang:
In
kafka.config.zookeeper.session.timeout.ms=6
kafka.config.rebalance.backoff.ms=6000
kafka.config.rebalance.max.retries=6
On Thu, Nov 13, 2014 at 10:56 AM, Guozhang Wang wangg...@gmail.com wrote:
I was originally asking about consumer configs, which should contain the
following:
I tried running kafka-simple-consumer-shell. I can see the following MBean:
kafka.consumer:type=FetchRequestAndResponseMetrics,name=SimpleConsumerShell-AllBrokersFetchRequestRateAndTimeMs
Thanks,
Jun
On Wed, Nov 12, 2014 at 9:57 PM, Madhukar Bharti bhartimadhu...@gmail.com
wrote:
Hi Jun Rao,
From your zk logs:
2014-11-13 10:07:53,132 [myid:1] - INFO [NIOServerCxn.Factory: .. Closed
socket connection for
2014-11-13 10:08:08,746 [myid:1] - INFO [NIOServerCxn.Factory: .. Closed
socket connection for
It does seem to kick out one consumer every 6ms, and you would probably
check
Dear Developers,
I am a 2nd-year master's student at IIT. I am using Kafka for one of my
research projects. My question is the following:
1. I have a producer, consumer and a broker(that contains 1st partition of
my topic) on node1
2. I have a producer, consumer, zookeeper and a broker(that
I've wondered that about Azure Event Hubs as well. They both use a
different consumer offset tracking mechanism than the one in 0.8 for their
higher level consumers.
Christian
On Thu, Nov 13, 2014 at 2:32 PM, Joseph Lawson jlaw...@roomkey.com wrote:
Oh man, they look similar. Any comments?
Perhaps humanity just hit that inevitable point where we needed streaming event
queues. Sort of like how Darwin and Alfred Russel Wallace both thought of
evolution at the same time.
From: cac...@gmail.com cac...@gmail.com
Sent: Thursday, November 13,
Are both of them in same Consumer Group?
On Fri, Nov 14, 2014 at 9:12 AM, Palur Sandeep psand...@hawk.iit.edu
wrote:
Dear Developers,
I am 2nd year masters student at IIT. I am using Kafka for one of my
research projects.My question is the following:
1. I have a producer, consumer and a
Yes, they are on the same consumer group, but I have two partitions.
On Thu, Nov 13, 2014 at 5:04 PM, Jagat Singh jagatsi...@gmail.com wrote:
Are both of them in same Consumer Group?
On Fri, Nov 14, 2014 at 9:12 AM, Palur Sandeep psand...@hawk.iit.edu
wrote:
Dear Developers,
I am 2nd
It would be worth reading once the consumer section from the documentation.
https://kafka.apache.org/documentation.html
On Fri, Nov 14, 2014 at 10:09 AM, Palur Sandeep psand...@hawk.iit.edu
wrote:
Yes, they are on the same consumer group, but I have two partitions.
On Thu, Nov 13, 2014 at
Yeah, the real question really concerns the products built on top of Kafka
(Kafka with a hat on). At the last place I worked we ended up using Kinesis
rather than Kafka, basically for the reason Niek mentions: it seemed easier
to accept the limitations and pay Amazon than to run Kafka ourselves (small
company
Do you think things will change now, with the Kafka makers setting up their
own company to provide commercial support?
On Fri, Nov 14, 2014 at 10:28 AM, cac...@gmail.com cac...@gmail.com wrote:
Yeah the real question is really are the products built on top of Kafka
(Kafka with a hat on). The
Hi guys,
Just noticed that kafka.logs.dir in log4j.properties doesn't take effect.
It's always set to $base_dir/logs in kafka-run-class.sh:
LOG_DIR=$base_dir/logs
KAFKA_LOG4J_OPTS="-Dkafka.logs.dir=$LOG_DIR $KAFKA_LOG4J_OPTS"
Best,
Siyuan
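One possible workaround, assuming you are willing to patch kafka-run-class.sh, is to make the assignment conditional so an externally supplied LOG_DIR wins. This guard is a suggestion of mine, not what the shipped script does:

```shell
#!/bin/sh
# Patched fragment for kafka-run-class.sh: only default LOG_DIR when the
# caller has not already set it in the environment.
base_dir=$(dirname "$0")/..

if [ "x$LOG_DIR" = "x" ]; then
    LOG_DIR="$base_dir/logs"
fi
KAFKA_LOG4J_OPTS="-Dkafka.logs.dir=$LOG_DIR $KAFKA_LOG4J_OPTS"
```

With that in place, `LOG_DIR=/var/log/kafka bin/kafka-server-start.sh ...` would direct the log4j output wherever you choose.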
Which version of ZK are you using?
Thanks,
Jun
On Thu, Nov 13, 2014 at 10:15 AM, Chen Wang chen.apache.s...@gmail.com
wrote:
Thanks for the info.
It makes sense, however, I didn't see any session timeout/expired
entries in consumer log..
but do see lots of connection closed entry in
Hi Palur,
When producing messages, did you specify a key in your KeyedMessage? If
not, the producer will send all messages to ONE randomly selected partition
and stick to that partition for 10 minutes by default.
regards,
Chia-Chun
2014-11-14 7:19 GMT+08:00 Jagat Singh jagatsi...@gmail.com:
It
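To make the behavior Chia-Chun describes concrete, here is a small Python sketch of the two strategies. This is an illustration of the idea only, not Kafka's actual partitioner code, and the 600-second refresh interval mirrors the 10-minute default mentioned above.

```python
import random
import time

def keyed_partition(key, num_partitions):
    # With a key: hash it, so the same key always maps to the same partition
    # and messages spread across partitions as keys vary.
    return hash(key) % num_partitions

class StickyRandomPartitioner:
    # Without a key, the 0.8 producer picks one random partition and sticks
    # to it for a refresh interval (10 minutes by default) before re-rolling,
    # which is why unkeyed traffic can pile onto a single partition.
    def __init__(self, num_partitions, refresh_secs=600):
        self.num_partitions = num_partitions
        self.refresh_secs = refresh_secs
        self._pick()

    def _pick(self):
        self.current = random.randrange(self.num_partitions)
        self.expires = time.time() + self.refresh_secs

    def partition(self):
        if time.time() >= self.expires:
            self._pick()
        return self.current
```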
Both the Kafka client and broker need to allocate memory for the whole
message. So, the larger the message, the more memory fragmentation it may
cause, which can lead to GC/OOME issues.
Thanks,
Jun
On Wed, Nov 12, 2014 at 9:23 PM, Rohit Pujari rpuj...@hortonworks.com
wrote:
I'm thinking of
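If large messages are unavoidable, the settings usually adjusted together in 0.8 are sketched below. The names are the standard 0.8 config names, but the values are placeholders; size them against Jun's memory-fragmentation warning above, and keep the fetch sizes at least as large as the biggest message.

```properties
# Broker (server.properties): largest message the broker will accept.
message.max.bytes=1000000
# Broker: replica fetches must be able to pull the largest message.
replica.fetch.max.bytes=1048576
# Consumer: per-partition fetch size must be >= the largest message.
fetch.message.max.bytes=1048576
```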
@Guozhang,
Thanks for your reply.
Is it available as part of the broker configuration? If yes, how do I configure it?
--Neeraja
-Original Message-
From: Guozhang Wang [mailto:wangg...@gmail.com]
Sent: Friday, November 14, 2014 12:02 AM
To: users@kafka.apache.org
Subject: Re: How to keep-alive
I think there are no docs for this feature yet, since it is only included in
trunk (are you using a released version? If yes, then you should not be
hitting this); but here is the ticket you can take a look at:
https://issues.apache.org/jira/browse/KAFKA-1282
On Thu, Nov 13, 2014 at 6:37 PM, Kothakota,
Yup, sounds like
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyisdatanotevenlydistributedamongpartitionswhenapartitioningkeyisnotspecified ?
This should go away with 0.8.2, with the default partitions now being 1 =8^)
with auto create topics.
Thank you Chia-Chun, Joe and Jagat.
I am not using any custom partitioner logic. Here is what I observed when I
ran Kafka on 4 nodes with the following structure:
1. Each node has a producer, consumer and a broker (that contains one
partition of my topic), and one of the machines has the Zookeeper
Hi Jun,
These are the two lines of log4j-related warnings I get when I try to run
my producer:
log4j:WARN No appenders could be found for logger
(kafka.utils.VerifiableProperties).
log4j:WARN Please initialize the log4j system properly.
I have searched extensively online and have so far not
Hi Jun Rao,
Sorry to disturb you, but in my Kafka setup it is not showing. I am
attaching a screenshot taken from all brokers.
In kafka.consumer it is listing only ReplicaFetcherThread.
As I said earlier, I am using the 2.10-0.8.1.1 version. Do I need to configure
any extra parameter for this? I am
If you're not using your own partitioning logic, messages are partitioned
randomly. This is the current default behavior I believe.
On Fri, Nov 14, 2014 at 12:01 PM, Palur Sandeep psand...@hawk.iit.edu
wrote:
Thank you Chia-chun,Joe and Jagat.
I am not using any custom partitioner logic.