will the same message be read
> twice?
>
>
> And if I set auto-commit to true, will the message be deleted?
>
> *Regards,*
> *Laxmi Narayan Patel*
> *MCA NIT Durgapur (2011-2014)*
> *Mob:-9741292048,8345847473*
>
> On Tue, Jan 3,
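On the auto-commit question quoted above: Kafka does not delete a message when it is consumed; deletion is governed only by the broker's retention settings. enable.auto.commit merely makes the consumer commit its offsets periodically, so the same message can still be read twice if a consumer fails after processing but before its next offset commit. A minimal consumer-config sketch (values are illustrative; the group name is hypothetical):

```
# consumer configuration sketch (illustrative values)
enable.auto.commit=true
auto.commit.interval.ms=5000   # commit offsets every 5 seconds
group.id=my-group              # hypothetical group name
```

With auto-commit enabled this gives at-least-once behavior, so plan for occasional duplicates and make processing idempotent.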
I don't think so, because more partitions can lead to unavailability, though
yes, they can give higher throughput. But they cause more problems, like
increased end-to-end latency, more open file handles, and more memory
required on the client side.
*Thanks, Kunal*
*+91-9958189589*
*Data Analyst*
*First Pape
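The client-side memory cost mentioned above can be sketched roughly. Assuming the consumer default max.partition.fetch.bytes of 1 MiB per assigned partition (other client buffers are ignored here; the partition count is illustrative):

```python
# Rough worst-case fetch-buffer estimate for one consumer.
# Assumes the default max.partition.fetch.bytes (1 MiB) per assigned
# partition; real memory use also depends on other client buffers.

MAX_PARTITION_FETCH_BYTES = 1_048_576  # Kafka consumer default, 1 MiB

def consumer_fetch_memory(partitions: int) -> int:
    """Bytes buffered if every assigned partition returns a full fetch."""
    return partitions * MAX_PARTITION_FETCH_BYTES

print(consumer_fetch_memory(500) // (1024 * 1024), "MiB")  # 500 MiB
```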
meout.ms=6
> auto.leader.rebalance.enable=true
> delete.topic.enable=true
>
>
> log.retention.bytes=5 is the setting; we have 5.5TB available,
> so we start deleting at 5TB of space used.
>
> On Mon, Oct 17, 2016 at 10:37 AM, Kunal Gupta
> wrote:
>
>
is called. So we never run
> out of space (don't set the value to something like 99% of your disk, as
> the log cleaner thread might not kick in in time; we leave it at 90% of
> disk space).
>
> On Monday, 17 October 2016, Kunal Gupta wrote:
>
> > Please help me :(
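The sizing rule described above (start deleting well before the disk is full) comes down to simple arithmetic; the ~90% headroom figure is from the message, everything else below is illustrative:

```python
# Sketch: choose a retention threshold as a fraction of disk capacity,
# leaving headroom because the cleaner thread may not kick in immediately.
# Note that log.retention.bytes applies per partition, so divide
# accordingly when translating this into an actual broker setting.

def retention_threshold(disk_bytes: int, headroom: float = 0.90) -> int:
    """Bytes at which deletion should start (e.g. 90% of the disk)."""
    return int(disk_bytes * headroom)

TB = 1024 ** 4
print(retention_threshold(int(5.5 * TB)) / TB)  # about 4.95 (TB)
```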
On Sun, Oct 16, 2016 at 11:23 AM, Kunal Gupta
wrote:
>
> In my organisation I have a 3-machine Kafka cluster, and each topic is
> assigned two machines for storing its data.
>
> There is one topic for which I get a lot of data from clients; that data
> exceeds my disk space in on
In my organisation I have a 3-machine Kafka cluster, and each topic is
assigned two machines for storing its data.
There is one topic for which I get a lot of data from clients; that data
exceeds my disk space on one machine, because that machine is the leader of
that topic. When I look into kafka-logs s
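To see which topic-partition directories are eating the disk on the leader, a small script over the broker's log directory can help (the /tmp/kafka-logs path below is an assumption; point it at your configured log.dirs):

```python
import os

LOG_DIR = "/tmp/kafka-logs"  # assumed path; use your broker's log.dirs value

def dir_size(path: str) -> int:
    """Total bytes of all regular files under path, recursively."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

def largest_log_dirs(log_dir: str, top: int = 10):
    """Topic-partition subdirectories sorted by on-disk size, largest first."""
    sizes = [
        (os.path.join(log_dir, d), dir_size(os.path.join(log_dir, d)))
        for d in os.listdir(log_dir)
        if os.path.isdir(os.path.join(log_dir, d))
    ]
    return sorted(sizes, key=lambda e: e[1], reverse=True)[:top]

if os.path.isdir(LOG_DIR):
    for path, size in largest_log_dirs(LOG_DIR):
        print(f"{size:>15,d}  {path}")
```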
The LinkedIn/Burrow tool is there for monitoring consumers.
On 16 Mar 2016 02:28, "Vinay Gulani" wrote:
> Hi,
>
> I am new to Kafka and am using Kafka version 0.8.2.1. I am monitoring
> Kafka using the kafka-manager tool.
>
> Is there any way to monitor those Kafka consumers (using kafka-manager)
> who are n
to 0.9.0.1.
>
> Kind regards,
> Stevo Slavic.
> On Mon, Mar 14, 2016, 07:10 Kunal Gupta wrote:
>
> > Hi everyone,
> >
> > I am new here; I recently joined the group. I faced a problem in the
> > Kafka cluster; the problem is described below.
> >
> > I am u
Hi everyone,
I am new here; I recently joined the group. I faced a problem in the Kafka
cluster; the problem is described below.
I am using Kafka version 0.9.0.0.
We have established a Kafka cluster of 3 machines, where 2 machines are
utilized as Kafka brokers and the same 3 machines are utilized for ZooKeeper. Whe
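The layout described above (3 machines, 2 running Kafka brokers, all 3 running ZooKeeper) roughly corresponds to a broker configuration like the following sketch; hostnames and paths are hypothetical:

```
# server.properties sketch for one broker (hostnames/paths hypothetical)
broker.id=1                 # must be unique per broker
log.dirs=/var/kafka-logs
zookeeper.connect=host1:2181,host2:2181,host3:2181
```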