Hi All,
In our 3-node test cluster running Kafka 0.10.0, we faced this error:
FATAL [2017-07-06 07:30:42,962]
kafka.server.ReplicaFetcherThread:[Logging$class:fatal:110] -
[ReplicaFetcherThread-0-0] - [ReplicaFetcherThread-0-0], Halting because
log truncation is not allowed for topic Topic3,
config, 0, time.scheduler, time
On Tue, Aug 30, 2016 at 11:37 AM, Jaikiran Pai <jai.forums2...@gmail.com>
wrote:
> Can you paste the entire exception stacktrace please?
>
> -Jaikiran
>
> On Tuesday 30 August 2016 11:23 AM, Gaurav Agarwal wrote:
Hi there, just wanted to bump up the thread one more time to check if
someone can point us in the right direction... This one was quite a serious
failure that took down many of our Kafka brokers.
On Sat, Aug 27, 2016 at 2:11 PM, Gaurav Agarwal <gauravagarw...@gmail.com>
wrote:
Hi All,
We are facing a weird problem where the Kafka broker fails to start due to an
unhandled exception while 'recovering' a log segment. I have been able to
isolate the problem to a single record and am providing the details below:
During Kafka restart, if index files are corrupted or they don't
Hi,
You can have one or two instances of Kafka, but you can have one or two
Kafka topics dedicated to each application according to the need. Partitions
will help you in increasing the throughput, and the consumer group id can
help you make a topic behave as a queue.
On Apr 22, 2016 12:37 PM, "Kuldeep Kamboj"
> ...consumers C1, C2, C3 and C4.
>
> can we have C1,C2,C3 subscribe to T1 and (C3,C4) subscribe to T2. and this
> should work like a queue. i.e. T2 should send message to only C3 or C4. and
> same in case of T1.
>
> is this possible by any means?
>
> Thanks & Regards,
> Vi
If you have the same consumer group id across multiple consumers, then Kafka
will work as a queue.
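To make this concrete: the setting that gives queue semantics is group.id. Below is a minimal sketch of 0.8-era high-level consumer properties; the ZooKeeper address and group name are assumptions for illustration, not values from this thread.

```java
import java.util.Properties;

public class QueueStyleConsumerConfig {
    // Consumers sharing the same group.id divide a topic's partitions among
    // themselves, so each message is delivered to only one of them (queue
    // semantics). Distinct group.ids each receive a full copy (pub/sub).
    static Properties queueConfig(String groupId) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // assumed address
        props.put("group.id", groupId);                   // same id => queue behaviour
        props.put("auto.offset.reset", "smallest");
        return props;
    }

    public static void main(String[] args) {
        Properties c1 = queueConfig("app1-workers");
        Properties c2 = queueConfig("app1-workers");
        // Both consumers share one group, so Kafka balances partitions between them.
        System.out.println(c1.getProperty("group.id").equals(c2.getProperty("group.id"))); // prints "true"
    }
}
```

So for the T1/T2 question above: give C1, C2, C3 one group id on T1 and C3, C4 another group id on T2, and each message goes to exactly one consumer in the subscribing group.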
On Mar 28, 2016 6:48 PM, "Sharninder" wrote:
> What kind of queue are you looking for? Kafka works as a nice FIFO queue by
> default anyway.
>
>
>
> On Mon, Mar 28, 2016 at 5:19 PM,
Prabhu, for what you need from a presentation, go to YouTube and you will
find presentations, or search for Kafka examples and you will find them.
On Mar 11, 2016 9:12 PM, "prabhu v" wrote:
> Hi,
>
> Can anyone please help me with the video presentations from Kafka experts?
>
> Seems the link provided in
> ...written to the cluster.
>
> something like this should help you figure that out.
>
> [path of kafka]/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker
> --zookeeper [zookeeperhost:2181] --topic [topicnamehere] --group
> [groupIDhere]
>
> On Thu, Dec 3, 2015 at 10:48 PM, G
Hello,
How can I find, using the Kafka 0.8.1.1 API, the count of uncommitted
offsets (unread messages) for a particular topic with the respective
consumer group id?
I am looking at AdminUtils, TopicCommand, and OffsetRequest; is there any
specific class of the Kafka API which I can use to find these things?
Did you check with ps -ef | grep kafka whether the Kafka broker is running or not?
On Nov 25, 2015 4:56 PM, "Shaikh, Mazhar A (Mazhar)" <
mazhar.sha...@in.verizon.com> wrote:
> Hi Team,
>
> In my test setup, Kafka broker goes down when consumer is stopped.
>
> Below are the version & logs.
>
> Request you
Can you share the code that you are using to publish the message? Also, can
you check whether a small message is published?
On Nov 25, 2015 9:25 PM, "Kudumula, Surender"
wrote:
> Hi all
> I am trying to get the producer working. It was working before but now
> getting the
So you have two nodes running where you want to increase the replication
factor to 2 for fault tolerance. That won't be a problem.
On Nov 25, 2015 6:26 AM, "Dillian Murphey" wrote:
> Is it safe to run this on an active production topic? A topic was created
> without a
Can you check a couple of things:
1. The message size that you are sending; try sending a small message.
2. Which encoder (DefaultEncoder or StringEncoder) you are using to consume
the message; are you serializing the message while sending, or sending a
normal stream?
3. Whether you created any partitions for the topic, or only created the topic.
Hello
I created a custom partitioner for my need: I implemented the Partitioner
interface and overrode this method:
public int partition(Object key, int a_numPartitions) {
    return partitionId;
}
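Note that returning a fixed partitionId pins every message to one partition. A standalone sketch of key-based partitioning logic is below; CorrelationIdPartitioner is an illustrative name, and the Kafka Partitioner interface itself is not reproduced here, only the logic one would put inside partition().

```java
public class CorrelationIdPartitioner {
    // Hashing the key spreads messages across partitions while keeping all
    // messages with the same key (e.g. the same correlationId) together on
    // one partition, which preserves per-key ordering.
    public int partition(Object key, int numPartitions) {
        if (key == null) {
            return 0; // fall back to a fixed partition for keyless messages
        }
        // key.hashCode() % n lies in (-n, n), so Math.abs keeps it in [0, n)
        return Math.abs(key.hashCode() % numPartitions);
    }

    public static void main(String[] args) {
        CorrelationIdPartitioner p = new CorrelationIdPartitioner();
        int part = p.partition("corr-42", 8);
        System.out.println(part >= 0 && part < 8); // prints "true"
    }
}
```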
We have something called as
We are using the key as a correlationId; that will be unique for each message.
Are you consuming data in a KafkaStream with a byte[] argument in the generics?
On Sun, Oct 18, 2015 at 4:20 PM, Kiran Singh wrote:
> Hi Pratapi
>
> I am using following serializer property at producer side:
>
> ("serializer.class", "kafka.serializer.StringEncoder");
>
> And at
You need to allocate extra memory for the topology to run.
On Wed, Sep 2, 2015 at 11:36 PM, Khalasi, Vipul Kantibhai <
vipul.kantibhai.khal...@citi.com> wrote:
> Hi ,
>
>
>
> I am using kafkaspout in my topology and each kafka topic has 8
> partitions, and every topic contains at least 1GB of data.
Can we find, from some API in Kafka, how many connections we have from the
Kafka broker to ZooKeeper? My Kafka is going down again and again.
, Manoj Khangaonkar khangaon...@gmail.com
wrote:
Hi
Your key seems to be a String.
key.serializer.class might need to be set to StringEncoder.
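A minimal sketch of the relevant 0.8-era producer properties; the broker address here is an assumption for illustration:

```java
import java.util.Properties;

public class ProducerProps {
    // Message values are byte[] -> DefaultEncoder; keys are String ->
    // StringEncoder. Without key.serializer.class, the producer tries to
    // run String keys through the value serializer and fails.
    static Properties stringKeyProps() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // assumed address
        props.put("serializer.class", "kafka.serializer.DefaultEncoder");
        props.put("key.serializer.class", "kafka.serializer.StringEncoder");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(stringKeyProps().getProperty("key.serializer.class"));
        // prints "kafka.serializer.StringEncoder"
    }
}
```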
regards
On Sat, Apr 25, 2015 at 10:43 AM, Gaurav Agarwal gaurav130...@gmail.com
wrote:
Hello
I am sending message from producer like
Hello
I am sending message from producer like this with DefaultEncoder.
KeyedMessage<String, byte[]> keyedMessage = new KeyedMessage<String,
byte[]>(topic, Serializations.serialize(s),
Serializations.getSerialized(msg, rqst));
This is a compile time error at java level as it expects
You can also use auto.offset.reset=smallest/largest. I also faced the same
issue and it worked fine for me. Maybe the key name is not correct; please
check the key in the Kafka documentation.
On 4/6/15, Madhukar Bharti bhartimadhu...@gmail.com wrote:
Hi Mayuresh,
We are having only one
I am new to Kafka; that's the reason I am asking so many questions.
KeyedMessage<String, byte[]> keyedMessage = new KeyedMessage<String,
byte[]>(request.getRequestTopicName(), SerializationUtils.serialize(message));
producer.send(keyedMessage);
Currently, I am sending the message without any key, maintained as
#topic-config
Regards,
Madhukar
On Fri, Apr 3, 2015 at 5:01 PM, Gaurav Agarwal gaurav130...@gmail.com
wrote:
hello group,
I have created a topic with delete.retention.ms set to 1000 and sent and
consumed messages across it. Nothing happened after that; I checked the
log also, message
In Kafka 0.8.1.1, when the Kafka producer sets the property
request.required.acks=1, it means that the producer gets an acknowledgement
after the leader replica has received the data. How will the producer come
to know it got the acknowledgement? Is there any API that I can see at my
application level?
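With the old sync producer the acknowledgement surfaces implicitly: send() returns only after the broker responds, and a failed or timed-out request is raised at the application level as an exception from send(). A sketch of the relevant properties follows, assuming the 0.8 producer; the broker address is an assumption:

```java
import java.util.Properties;

public class AckAwareProducerProps {
    // With producer.type=sync and request.required.acks=1, producer.send(...)
    // blocks until the leader has received the data; the application "sees"
    // the ack by send() returning normally, and a failure by send() throwing.
    static Properties syncAckProps() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // assumed address
        props.put("producer.type", "sync");       // send() returns only after the ack
        props.put("request.required.acks", "1");  // leader must have received the data
        props.put("request.timeout.ms", "10000"); // how long to wait for that ack
        return props;
    }

    public static void main(String[] args) {
        System.out.println(syncAckProps().getProperty("request.required.acks")); // prints "1"
    }
}
```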
MessageAndMetadata.partition()
On Fri, Feb 27, 2015 at 5:16 AM, Jun Rao j...@confluent.io wrote:
The partition api is exposed to the consumer in 0.8.2.
Thanks,
Jun
On Thu, Feb 26, 2015 at 10:53 AM, Gaurav Agarwal
gaurav130...@gmail.com
wrote:
Hello
After retrieving a Kafka stream or Kafka message, how do I get the
corresponding partition number to which it belongs? I am using Kafka
version 0.8.1.
More specifically, the kafka.consumer.KafkaStream and
kafka.message.MessageAndMetadata classes do not seem to provide an API to
retrieve the partition number.
hello,
We are sending a custom message across producer and consumer, but we are
getting a ClassCastException. This works fine with a String message and the
String encoder, but it did not work with a custom message; I got a
ClassCastException. I have a message with a couple of String attributes