You also have to update the property `zookeeper.session.timeout.ms`.
On Sat, Feb 24, 2018 at 11:25 PM, Ted Yu wrote:
> Please take a look at maxSessionTimeout under:
> http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.
> html#sc_advancedConfiguration
>
> On Sat, Feb 24, 2018 at 9:46 AM, Soheil
If you look at the default value of the above configuration, you'll see it's
Long.MAX_VALUE.
It is set that high to discourage users from editing / re-configuring it. The
above configuration
controls when messages are flushed from the cache to disk (fsync). Kafka
delegates the task of
flushing messages to disk to the operating system.
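As a sketch, the relevant broker settings in server.properties look like this (names from the broker config reference; the values shown are the shipped defaults, deliberately left high so flushing stays with the OS):

```properties
# Force an fsync after this many messages. The default (Long.MAX_VALUE)
# effectively disables forced flushes.
log.flush.interval.messages=9223372036854775807
# Maximum time a message may sit in the log before a forced flush.
# Unset by default, for the same reason.
#log.flush.interval.ms=1000
```

Leaving both at their defaults and relying on replication for durability is the intended setup.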
I think your application (where the producer resides) is facing GC issues.
The time taken by GC might be longer than `request.timeout.ms`.
Check your `jvm.log` and update `request.timeout.ms`. The same property
is applicable to the producer, consumer and broker. Increase the config only
f
Don't use `kill -9 PID`. Use `kill -s TERM PID` - it sends a signal asking the
process to end, and will trigger any cleanup routines before exiting.
If the output of the `ps` command used by kafka-server-stop.sh exceeds
4096 characters, it shows "No kafka server to stop".
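The difference can be seen without Kafka at all; a toy bash process with a TERM trap runs its cleanup code on `kill -s TERM`, whereas `kill -9` would bypass the trap entirely:

```shell
# Start a background bash process that installs a cleanup handler for SIGTERM.
bash -c 'trap "echo cleanup-ran; exit 0" TERM; sleep 30 & wait' &
pid=$!
sleep 1               # give the child time to install its trap
kill -s TERM "$pid"   # graceful shutdown: the trap fires and cleanup runs
wait "$pid"           # prints "cleanup-ran"; with kill -9 it would not
```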
On Thu, Jul 6, 2017 at 3:25 A
> bootstrap.servers = ,
Is your bootstrap.servers configuration correct? You have specified
port `9091`, but are running the GetOffsetShell command on `9094`.
On Wed, Apr 19, 2017 at 11:58 AM, Ranjith Anbazhakan <
ranjith.anbazha...@aspiresys.com> wrote:
> Unfortunately, there is no specifi
Can you enable DEBUG logs? They'll be helpful for debugging.
-- Kamal
On Mon, Jan 9, 2017 at 5:37 AM, Gupta, Swati wrote:
> Hello All,
>
> Any help on this would be appreciated.
> There seems to be no error. Does it look like a version issue?
>
> I have updated my pom.xml with the below:
>
Ben,
You can list all the available topic information and do a simple lookup
in the returned map.
Map<String, List<PartitionInfo>> topics = consumer.listTopics();
topics.containsKey(topic) - isn't that enough?
-- Kamal
On 25 Oct 2016 22:56, "Ben Osheroff" wrote:
> We won't proceed in the face of a missing table, we'll just crash
You can refer to this example [1].
[1]:
https://github.com/omkreddy/kafka-examples/blob/master/consumer/src/main/java/kafka/examples/consumer/advanced/AdvancedConsumer.java
- Kamal
On Wed, Sep 28, 2016 at 11:33 AM, Vincent Dautremont <
vincent.dautrem...@olamobile.com> wrote:
> I had the same proble
Hi all,
The log compaction article [1] doesn't explain how key comparison
takes place. AFAIK, Kafka doesn't de-serialize the records and uses an MD5
hash. I'm using the Kafka Java client v0.10. Could someone confirm whether
the statements below are correct:
1. The key can be of any data-type?
2. K1
partition up, producer can't
> connect to broker alive (always try to connect to the original lead broker,
> node 1 in my case).
>
> Kafka can't recover for this situation? Anyone has clue for this?
>
> Thanks!
> Aggie
> -Original Message-
> From: Kamal
Reduce the metadata refresh interval `metadata.max.age.ms` from 5 min to
your desired time interval.
This may shorten the window during which the broker appears unavailable to
the producer.
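As a sketch (property name from the producer config reference; the broker addresses are placeholders), the override looks like this:

```java
import java.util.Properties;

public class ProducerMetadataConfig {
    // Build producer properties with a faster metadata refresh, so the
    // producer re-discovers partition leaders sooner after a broker change.
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholders
        props.put("metadata.max.age.ms", "30000"); // 30 s instead of the 5 min default
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("metadata.max.age.ms"));
    }
}
```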
-- Kamal
this consumer it starts to read data from that partition. How
can I keep a partition paused even when a re-balance occurs?
Regards,
Kamal C
Yes, it gets called on every re-balance.
-- Kamal
On Thu, Aug 4, 2016 at 11:24 PM, sat wrote:
> Hi Kamal,
>
> Thanks for your prompt response. Does our custom partition assignor get
> called during every rebalance?
>
> Thanks and Regards
> A.SathishKumar
>
>
>
> >Implement your own custom
>
Implement your own custom
`org.apache.kafka.clients.consumer.internals.PartitionAssignor`
and assign all the subscribed partitions to the first consumer instance in
the group.
See 'partition.assignment.strategy' config in the consumer configs [1]
[1]: http://kafka.apache.org/documentation.html#ne
See the answers inline.
On Tue, Aug 2, 2016 at 12:23 AM, sat wrote:
> Hi,
>
> I am new to Kafka. We are planning to use Kafka messaging for our
> application. I was playing with Kafka 0.9.0.1 version and i have following
> queries. Sorry for asking basic questions.
>
>
> 1) I have instantiated K
Hi,
A. What is the usage of the `kafka-replica-verification.sh` script? I don't
find any documentation about it in [1] and [2].
I have a topic `test` with 10 partitions. When I ran the above script, it
continuously printed the results below.
[kamal@tcltest1 bin]$ sh kafka-replica-verification.sh --time -2
Andy,
The Kafka 0.9.0 server supports previous versions of the clients (0.8.2,
0.8.1, ...).
But new clients won't work properly with older versions of the Kafka server.
You should upgrade your server / broker first.
--Kamal
On Fri, May 20, 2016 at 10:58 PM, Andy Davidson <
a...@santacruzintegratio
ause ?
>
> # /opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --topic
> user_track --describe
> Topic:user_track PartitionCount:1 ReplicationFactor:2 Configs:
> Topic: user_track Partition: 0 Leader: 1003 Replicas: 1003,1002 Isr:
> 1003,1002
>
>
>
> Thanks
>
Can you describe your topic configuration using the below command?
*sh kafka-topics.sh --zookeeper localhost:2181 --topic <topic-name>
--describe*
A key for a record is mandatory only for compacted topics.
--Kamal
On Wed, May 4, 2016 at 2:25 PM, I PVP wrote:
> HI all,
>
> What makes a message key mandator
tional and by the default
> KafkaConsumer uses a NoOpConsumerRebalanceListener if no one is provided.
>
> I think the seek() is already done internally when a consumer joins or
> quits the group. I'm not sure this line is actually needed.
>
> 2016-04-18 15:31 GMT+02:00 Kamal C
consumer offsets into
> the onPartitionsAssigned method ?
>
>
> https://github.com/omkreddy/kafka-examples/blob/master/consumer/src/main/java/kafka/examples/consumer/advanced/AdvancedConsumer.java#L120
>
> 2016-04-15 15:06 GMT+02:00 Kamal C :
>
> > Hi Florian,
> >
Hi Florian,
This may be helpful
https://github.com/omkreddy/kafka-examples/blob/master/consumer/src/main/java/kafka/examples/consumer/advanced/AdvancedConsumer.java
--Kamal
On Fri, Apr 15, 2016 at 2:57 AM, Jason Gustafson wrote:
> Hi Florian,
>
> It's actually OK if processing takes longer tha
Hi All,
I'm using Kafka 0.9.0.1.
I have a requirement in which consumption of records are asynchronous.
for (ConsumerRecord record : records) {
    executor.submit(new Runnable() {
        public void run() {
            // process record;
        }
    });
}
consumer.commitSync();
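The hand-off above can be sketched without any Kafka dependency (plain Java; strings stand in for ConsumerRecords). The pitfall it highlights: if commitSync() runs right after the loop, offsets may be committed for records the executor hasn't finished processing, so wait for the tasks first:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncProcessSketch {
    // Hand each record to a worker thread, then wait for all of them to
    // finish BEFORE committing offsets; otherwise commitSync() could commit
    // offsets for records that are still being processed.
    static int processAll(List<String> records) throws InterruptedException {
        AtomicInteger processed = new AtomicInteger();
        ExecutorService executor = Executors.newFixedThreadPool(4);
        for (String record : records) {
            executor.submit(() -> processed.incrementAndGet()); // "process" record
        }
        executor.shutdown();                              // accept no new tasks
        executor.awaitTermination(10, TimeUnit.SECONDS);  // wait for workers
        // Only now would it be safe to call consumer.commitSync().
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(processAll(List.of("r1", "r2", "r3"))); // prints 3
    }
}
```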
Thanks Jason! It worked. I've missed it.
On 17-Mar-2016 2:00 AM, "Jason Gustafson" wrote:
> Have you looked at partitionsFor()?
>
> -Jason
>
> On Wed, Mar 16, 2016 at 4:58 AM, Kamal C wrote:
>
> > Hi,
> >
> > I'm using the new con
Hi,
I'm using the new consumer in assign mode. I would like to assign all
the partitions of a topic to the consumer.
For that, I need to know the number of partitions available in the topic.
consumer.assign(List<TopicPartition> partitions);
How to programmatically get the number of partitions in a topic?
Cody,
Use a ConsumerRebalanceListener to achieve that:
ConsumerRebalanceListener listener = new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
    }
    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
    }
};
Yes, it can sustain one failure. I misunderstood your question earlier.
On Thu, Jan 14, 2016 at 5:14 PM, Erik Forsberg wrote:
>
>
> On 2016-01-14 12:42, Kamal C wrote:
>
>> It's a single point of failure. You may lose high-availability.
>>
>
> In this case I would li
It's a single point of failure. You may lose high-availability.
On Thu, Jan 14, 2016 at 4:36 PM, Erik Forsberg wrote:
> Hi!
>
> Pondering how to configure Kafka clusters and avoid having too many
> machines to manage.. Would it be recommended to run say a 3 node kafka
> cluster where you also ru
Refer to this mailing list thread:
http://qnalist.com/questions/6002514/new-producer-metadata-update-problem-on-2-node-cluster
https://issues.apache.org/jira/browse/KAFKA-1843
On Wed, Sep 9, 2015 at 7:37 AM, Shushant Arora
wrote:
> Hi
>
> I have a kafka cluster with 3 brokers. I have a topic with ~50 partitio
't connect to the broker, how would it even report that it has a
> connectivity issue?
>
> -Ewen
>
> On Mon, May 25, 2015 at 10:05 PM, Kamal C wrote:
>
> > Hi,
> >
> > I have a cluster of 3 Kafka brokers and a remote producer. Producer
> > started t
It won't throw an OffsetOutOfRange error when you pass the latest offset to
the fetch request; the resulting fetch response message set would be empty.
You can wait for messages to become available either manually or by
configuring *maxWait* in the fetch request.
On Mon, May 25, 2015 at 3:50 PM, Nipur Patod
leader node by throwing connection
refused exception. I understand that when there is a node failure leader
gets switched. Why it's not switching the leader in this scenario ?
--
Kamal C
This is resolved. I had missed a host entry configuration in my
infrastructure.
On Mon, May 4, 2015 at 10:35 AM, Kamal C wrote:
> We are running ZooKeeper in ensemble (Cluster of 3 / 5). With further
> investigation, I found that the Connect Exception throws for all "inflight"
>
> On 5/2/15, 11:01 PM, "Kamal C" wrote:
>
> >Any comments on this issue?
> >
> >On Sat, May 2, 2015 at 9:16 AM, Kamal C wrote:
> >
> >> Hi,
> >> We are using Kafka_2.10-0.8.2.0, new Kafka producer and Kafka Simple
> >> Consum
Any comments on this issue?
On Sat, May 2, 2015 at 9:16 AM, Kamal C wrote:
> Hi,
> We are using Kafka_2.10-0.8.2.0, new Kafka producer and Kafka Simple
> Consumer. In Standalone mode, 1 ZooKeeper and 1 Kafka we haven't faced any
> problems.
>
> In cluster mode, 3 ZooKee
ts to
throw Connect Exception continuously and tries to connect with the dead
node (not all producers).
Is there any configuration available to avoid this exception ?
Regards,
Kamal C
In %SystemRoot%\system32\drivers\etc\hosts,
put an entry mapping your server IP to its hostname.
Regards,
Kamal C.
On Sat, Aug 2, 2014 at 11:52 AM, wrote:
> Hi Team,
>
> I am trying with Kafka client on Windows 7 64bit -corporate pc which is
>