Timestamps with Kafka REST proxy

2020-05-12 Thread Sachin Nikumbh
Hi all, Is there a way to include a timestamp with each record when using Kafka's REST proxy? The documentation does not show any examples, and when I tried to use a "timestamp" field, I got an "unknown field" error in response. Any help would be greatly appreciated. Thanks, Sachin
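For comparison, the native Java producer does accept an explicit per-record timestamp through the ProducerRecord constructor. A minimal sketch, with the broker address, topic name, key, and value as placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TimestampedProduceSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            long ts = System.currentTimeMillis();
            // The five-argument constructor carries an explicit record timestamp;
            // the partition argument may be null to let the partitioner choose.
            producer.send(new ProducerRecord<>("my-topic", null, ts, "my-key", "my-value"));
        }
    }
}
```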

Re: Kafka logs are getting deleted too soon

2019-07-18 Thread Sachin Nikumbh
…setting 'retention.ms=-1' for the topic? That should persist the data indefinitely. > On Jul 17, 2019, at 6:07 PM, Sachin Nikumbh wrote: > I am not setting the group id for the console consumer. When I say the .log files are all 0 bytes long, it is after the producer has
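The retention override suggested above can be applied with the kafka-configs tool or programmatically; a rough sketch using the Java AdminClient, where the broker address and topic name are placeholders and incrementalAlterConfigs assumes brokers on 2.3 or newer:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RetentionOverrideSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "-1"),   // -1 disables time-based deletion
                    AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> changes =
                    Collections.singletonMap(topic, Collections.singletonList(op));
            admin.incrementalAlterConfigs(changes).all().get();
        }
    }
}
```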

Re: Kafka logs are getting deleted too soon

2019-07-17 Thread Sachin Nikumbh
when it is flushed to disk can be quite a while. > On Jul 17, 2019, at 1:39 PM, Sachin Nikumbh wrote: > Hi Jamie, I have 3 brokers and the replication factor for my topic is set to 3. I know for sure that the producer is producing data successfully because I am

Re: Kafka logs are getting deleted too soon

2019-07-17 Thread Sachin Nikumbh
3 brokers, have you set offsets.topic.replication.factor to the number of brokers? Thanks, Jamie -----Original Message----- From: Sachin Nikumbh To: users Sent: Wed, 17 Jul 2019 20:21 Subject: Re: Kafka logs are getting deleted too soon Broker configs: === broker.id
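One way to check whether the offsets topic actually got the intended replication factor is to describe it with the Java AdminClient; a sketch, with the broker address as a placeholder:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;

public class OffsetsTopicCheckSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singleton("__consumer_offsets"))
                                         .all().get()
                                         .get("__consumer_offsets");
            // Each partition's replica list shows whether the topic was created with
            // the replication factor given by offsets.topic.replication.factor.
            desc.partitions().forEach(p ->
                    System.out.printf("partition %d: %d replica(s)%n",
                            p.partition(), p.replicas().size()));
        }
    }
}
```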

Re: Kafka logs are getting deleted too soon

2019-07-17 Thread Sachin Nikumbh
…periodic .deleted files. Does it mean that Kafka was deleting logs? Any help would be highly appreciated. On Wednesday, July 17, 2019, 01:47:44 PM EDT, Peter Bukowinski wrote: Can you share your broker and topic config here? > On Jul 17, 2019, at 10:09 AM, Sachin Nikumbh wrote: >

Re: Kafka logs are getting deleted too soon

2019-07-17 Thread Sachin Nikumbh
Aley wrote: Hi Sachin, Try adding --from-beginning to your console consumer to view the historically produced data. By default the console consumer starts from the last offset. Tom Aley thomas.a...@ibm.com From: Sachin Nikumbh To: Kafka Users Date: 17/07/2019 16:01 Su
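The programmatic equivalent of --from-beginning is to seek the consumer back to the earliest offsets once it has its partition assignment; a rough sketch with the Java consumer, where the broker address, group id, and topic name are placeholders:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FromBeginningSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "example-group");              // placeholder group id
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("my-topic"));   // placeholder topic
            consumer.poll(Duration.ofSeconds(1));                    // first poll joins the group
            consumer.seekToBeginning(consumer.assignment());         // rewind all assigned partitions
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            records.forEach(r -> System.out.println(r.offset() + ": " + r.value()));
        }
    }
}
```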

Kafka logs are getting deleted too soon

2019-07-17 Thread Sachin Nikumbh
Hi all, I have ~96GB of data in files that I am trying to get into a Kafka cluster. I have ~11000 keys for the data and I have created 15 partitions for my topic. While my producer is dumping data into Kafka, I have a console consumer that shows me that Kafka is getting the data. The producer ru
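A minimal sketch of the keyed-producer side described here, with the broker address, topic name, keys, and values as placeholders; with the default partitioner, records sharing a key always land in the same partition, so distinct keys spread across the topic's partitions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyedProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key hash to the same partition, so the many
            // distinct keys spread across the topic's partitions.
            producer.send(new ProducerRecord<>("my-topic", "key-0001", "payload for key-0001"));
            producer.send(new ProducerRecord<>("my-topic", "key-0002", "payload for key-0002"));
            producer.flush();   // block until buffered records are sent
        }
    }
}
```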

Unable to consume messages

2017-05-07 Thread Sachin Nikumbh
Hi all, I am relatively new to Kafka and my initial attempts at consuming messages are failing. My topic has 3 partitions and I am setting "auto.offset.reset" to "earliest". The call to poll hangs. Here's my code: …
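The poster's code is truncated in the archive; a minimal, self-contained consumer along the lines described (broker address, group id, and topic name are placeholders) would look roughly like this. Note that with a consumer group, auto.offset.reset=earliest only applies when the group has no committed offsets:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EarliestConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "example-group");              // placeholder group id
        props.put("auto.offset.reset", "earliest");          // used only when no committed offset exists
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singleton("my-topic"));   // placeholder topic
            while (true) {
                // poll() returns an empty batch if nothing arrives within the timeout;
                // it should not block indefinitely unless the brokers are unreachable.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```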

Trouble fetching messages with error "Skipping fetch for partition"

2017-05-04 Thread Sachin Nikumbh
Hello, I am using Kafka 0.10.1.0 and failing to fetch messages, with the following error message in the log: Skipping fetch for partition MYPARTITION because there is an in-flight request to MYMACHINE:9092 (id: 0 rack: null) (org.apache.kafka.clients.consumer.