RE: Too many open files

2013-10-04 Thread Nicolas Berthet
Hi Mark,

Sorry for the delay. We're not using a load balancer, if that's what you mean by 
LB. 

After applying the change I mentioned last time (the netfilter thing), I 
couldn't see any improvement. We even restarted Kafka, but since the restart I've 
seen the connection count slowly creeping higher.
  
Best regards,

Nicolas Berthet 


-Original Message-
From: Mark [mailto:static.void@gmail.com] 
Sent: Saturday, September 28, 2013 12:35 AM
To: users@kafka.apache.org
Subject: Re: Too many open files

No, this is all within the same DC. I think the problem has to do with the LB. 
We've upgraded our producers to point directly to a node for testing, and after 
running it all night I don't see any more connections than there are supposed 
to be. 

Can I ask which LB you are using? We are using A10s.

On Sep 26, 2013, at 6:41 PM, Nicolas Berthet  wrote:

> Hi Mark,
> 
> I'm using CentOS 6.2. My file limit is something like 500k; the value is 
> arbitrary.
> 
> One of the things I changed so far is the TCP keepalive parameters; it has had 
> moderate success so far.
> 
> net.ipv4.tcp_keepalive_time
> net.ipv4.tcp_keepalive_intvl
> net.ipv4.tcp_keepalive_probes
> 
> I still notice an abnormal number of ESTABLISHED connections. I've 
> been doing some searching and came across this page 
> (http://www.lognormal.com/blog/2012/09/27/linux-tcpip-tuning/)
> 
> I'll change the "net.netfilter.nf_conntrack_tcp_timeout_established" as 
> indicated there; it looks closer to a solution to my issue.
> 
> Are you also experiencing the issue in a cross-data-center context?
> 
> Best regards,
> 
> Nicolas Berthet
> 
> 
> -Original Message-
> From: Mark [mailto:static.void@gmail.com]
> Sent: Friday, September 27, 2013 6:08 AM
> To: users@kafka.apache.org
> Subject: Re: Too many open files
> 
> What OS settings did you change? How high is your huge file limit?
> 
> 
> On Sep 25, 2013, at 10:06 PM, Nicolas Berthet  
> wrote:
> 
>> Jun,
>> 
>> I observed a similar kind of thing recently (I didn't notice before 
>> because our file limit is huge).
>> 
>> I have a set of brokers in a datacenter, and producers in different data 
>> centers. 
>> 
>> At some point I got disconnections; from the producer's perspective I had 
>> something like 15 connections to the broker, while on the broker 
>> side I observed hundreds of connections from the producer in an ESTABLISHED 
>> state.
>> 
>> We had some default settings for the socket timeout at the OS level, which 
>> we reduced, hoping it would prevent the issue in the future. I'm not sure if 
>> the issue is from the broker or the OS configuration, though. I'm still keeping 
>> the broker under observation for the time being.
>> 
>> Note that for clients in the same datacenter we didn't see this issue; the 
>> socket count matches on both ends.
>> 
>> Nicolas Berthet
>> 
>> -Original Message-
>> From: Jun Rao [mailto:jun...@gmail.com]
>> Sent: Thursday, September 26, 2013 12:39 PM
>> To: users@kafka.apache.org
>> Subject: Re: Too many open files
>> 
>> If a client is gone, the broker should automatically close those broken 
>> sockets. Are you using a hardware load balancer?
>> 
>> Thanks,
>> 
>> Jun
>> 
>> 
>> On Wed, Sep 25, 2013 at 4:48 PM, Mark  wrote:
>> 
>>> FYI if I kill all producers I don't see the number of open files drop. 
>>> I still see all the ESTABLISHED connections.
>>> 
>>> Is there a broker setting to automatically kill any inactive TCP 
>>> connections?
>>> 
>>> 
>>> On Sep 25, 2013, at 4:30 PM, Mark  wrote:
>>> 
>>>> Any other ideas?
>>>> 
>>>> On Sep 25, 2013, at 9:06 AM, Jun Rao  wrote:
>>>> 
>>>>> We haven't seen any socket leaks with the java producer. If you have lots
>>>>> of unexplained socket connections in established mode, one possible cause
>>>>> is that the client created new producer instances, but didn't close the old
>>>>> ones.
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> Jun
>>>>> 
>>>>> 
>>>>> On Wed, Sep 25, 2013 at 6:08 AM, Mark 
>>> wrote:
>>>>> 
>>>>>> No. We are using the kafka-rb ruby gem producer.

RE: Too many open files

2013-09-26 Thread Nicolas Berthet
Hi Mark,

I'm using CentOS 6.2. My file limit is something like 500k; the value is 
arbitrary.

One of the things I changed so far is the TCP keepalive parameters; it has had 
moderate success so far.

net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_intvl
net.ipv4.tcp_keepalive_probes

I still notice an abnormal number of ESTABLISHED connections. I've been doing 
some searching and came across this page 
(http://www.lognormal.com/blog/2012/09/27/linux-tcpip-tuning/)

I'll change the "net.netfilter.nf_conntrack_tcp_timeout_established" as 
indicated there; it looks closer to a solution to my issue.
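
For reference, this is roughly what I'm planning to put in /etc/sysctl.conf (a 
sketch of my current experiment; the values below are just what I'm trying, not 
recommendations):

# TCP keepalive: detect and tear down dead peers faster than the 2-hour default
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 5

# Expire conntrack entries for idle ESTABLISHED connections sooner than the
# 5-day default
net.netfilter.nf_conntrack_tcp_timeout_established = 3600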

Are you also experiencing the issue in a cross-data-center context?

Best regards,

Nicolas Berthet 


-Original Message-
From: Mark [mailto:static.void@gmail.com] 
Sent: Friday, September 27, 2013 6:08 AM
To: users@kafka.apache.org
Subject: Re: Too many open files

What OS settings did you change? How high is your huge file limit?


On Sep 25, 2013, at 10:06 PM, Nicolas Berthet  wrote:

> Jun,
> 
> I observed a similar kind of thing recently (I didn't notice before 
> because our file limit is huge).
> 
> I have a set of brokers in a datacenter, and producers in different data 
> centers. 
> 
> At some point I got disconnections; from the producer's perspective I had 
> something like 15 connections to the broker, while on the broker 
> side I observed hundreds of connections from the producer in an ESTABLISHED 
> state.
> 
> We had some default settings for the socket timeout at the OS level, which we 
> reduced, hoping it would prevent the issue in the future. I'm not sure if the 
> issue is from the broker or the OS configuration, though. I'm still keeping the 
> broker under observation for the time being.
> 
> Note that for clients in the same datacenter we didn't see this issue; the 
> socket count matches on both ends.
> 
> Nicolas Berthet
> 
> -Original Message-
> From: Jun Rao [mailto:jun...@gmail.com]
> Sent: Thursday, September 26, 2013 12:39 PM
> To: users@kafka.apache.org
> Subject: Re: Too many open files
> 
> If a client is gone, the broker should automatically close those broken 
> sockets. Are you using a hardware load balancer?
> 
> Thanks,
> 
> Jun
> 
> 
> On Wed, Sep 25, 2013 at 4:48 PM, Mark  wrote:
> 
>> FYI if I kill all producers I don't see the number of open files drop. 
>> I still see all the ESTABLISHED connections.
>> 
>> Is there a broker setting to automatically kill any inactive TCP 
>> connections?
>> 
>> 
>> On Sep 25, 2013, at 4:30 PM, Mark  wrote:
>> 
>>> Any other ideas?
>>> 
>>> On Sep 25, 2013, at 9:06 AM, Jun Rao  wrote:
>>> 
>>>> We haven't seen any socket leaks with the java producer. If you have lots
>>>> of unexplained socket connections in established mode, one possible cause
>>>> is that the client created new producer instances, but didn't close the old
>>>> ones.
>>>> 
>>>> Thanks,
>>>> 
>>>> Jun
>>>> 
>>>> 
>>>> On Wed, Sep 25, 2013 at 6:08 AM, Mark 
>> wrote:
>>>> 
>>>>> No. We are using the kafka-rb ruby gem producer.
>>>>> https://github.com/acrosa/kafka-rb
>>>>> 
>>>>> Now that you asked that question I need to ask. Is there a problem 
>>>>> with the java producer?
>>>>> 
>>>>> Sent from my iPhone
>>>>> 
>>>>>> On Sep 24, 2013, at 9:01 PM, Jun Rao  wrote:
>>>>>> 
>>>>>> Are you using the java producer client?
>>>>>> 
>>>>>> Thanks,
>>>>>> 
>>>>>> Jun
>>>>>> 
>>>>>> 
>>>>>>> On Tue, Sep 24, 2013 at 5:33 PM, Mark 
>>>>>>> 
>>>>> wrote:
>>>>>>> 
>>>>>>> Our 0.7.2 Kafka cluster keeps crashing with:
>>>>>>> 
>>>>>>> 2013-09-24 17:21:47,513 -  [kafka-acceptor:Acceptor@153] - Error 
>>>>>>> in acceptor
>>>>>>> java.io.IOException: Too many open files
>>>>>>> 
>>>>>>> The obvious fix is to bump up the number of open files but I'm wondering
>>>>>>> if there is a leak on the Kafka side and/or our application 
>>>>>>> side. We currently have the ulimit set to a generous 4096 but 
>>>>>>> obviously we are hitting this ceiling.

RE: Too many open files

2013-09-25 Thread Nicolas Berthet
Jun,

I observed a similar kind of thing recently (I didn't notice before because our 
file limit is huge).

I have a set of brokers in a datacenter, and producers in different data 
centers. 

At some point I got disconnections; from the producer's perspective I had 
something like 15 connections to the broker, while on the broker 
side I observed hundreds of connections from the producer in an ESTABLISHED 
state.

We had some default settings for the socket timeout at the OS level, which we 
reduced, hoping it would prevent the issue in the future. I'm not sure if the 
issue is from the broker or the OS configuration, though. I'm still keeping the 
broker under observation for the time being.

Note that for clients in the same datacenter we didn't see this issue; the 
socket count matches on both ends.

Nicolas Berthet 

-Original Message-
From: Jun Rao [mailto:jun...@gmail.com] 
Sent: Thursday, September 26, 2013 12:39 PM
To: users@kafka.apache.org
Subject: Re: Too many open files

If a client is gone, the broker should automatically close those broken 
sockets. Are you using a hardware load balancer?

Thanks,

Jun


On Wed, Sep 25, 2013 at 4:48 PM, Mark  wrote:

> FYI if I kill all producers I don't see the number of open files drop. 
> I still see all the ESTABLISHED connections.
>
> Is there a broker setting to automatically kill any inactive TCP 
> connections?
>
>
> On Sep 25, 2013, at 4:30 PM, Mark  wrote:
>
> > Any other ideas?
> >
> > On Sep 25, 2013, at 9:06 AM, Jun Rao  wrote:
> >
> >> We haven't seen any socket leaks with the java producer. If you have lots
> >> of unexplained socket connections in established mode, one possible cause
> >> is that the client created new producer instances, but didn't close the old
> >> ones.
> >>
> >> Thanks,
> >>
> >> Jun
> >>
> >>
> >> On Wed, Sep 25, 2013 at 6:08 AM, Mark 
> wrote:
> >>
> >>> No. We are using the kafka-rb ruby gem producer.
> >>> https://github.com/acrosa/kafka-rb
> >>>
> >>> Now that you asked that question I need to ask. Is there a problem 
> >>> with the java producer?
> >>>
> >>> Sent from my iPhone
> >>>
> >>>> On Sep 24, 2013, at 9:01 PM, Jun Rao  wrote:
> >>>>
> >>>> Are you using the java producer client?
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Jun
> >>>>
> >>>>
> >>>>> On Tue, Sep 24, 2013 at 5:33 PM, Mark 
> >>>>> 
> >>> wrote:
> >>>>>
> >>>>> Our 0.7.2 Kafka cluster keeps crashing with:
> >>>>>
> >>>>> 2013-09-24 17:21:47,513 -  [kafka-acceptor:Acceptor@153] - Error 
> >>>>> in acceptor
> >>>>>  java.io.IOException: Too many open files
> >>>>>
> >>>>> The obvious fix is to bump up the number of open files but I'm wondering
> >>>>> if there is a leak on the Kafka side and/or our application side. We
> >>>>> currently have the ulimit set to a generous 4096 but obviously we are
> >>>>> hitting this ceiling. What's a recommended value?
> >>>>>
> >>>>> We are running rails and our Unicorn workers are connecting to our Kafka
> >>>>> cluster via round-robin load balancing. We have about 1500 workers so that
> >>>>> would be 1500 connections right there, but they should be split across our 3
> >>>>> nodes. Instead netstat shows thousands of connections that look like this:
> >>>>>
> >>>>> tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc :::10.99.99.1:22503    ESTABLISHED
> >>>>> tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc :::10.99.99.1:48398    ESTABLISHED
> >>>>> tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc :::10.99.99.2:29617    ESTABLISHED
> >>>>> tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc :::10.99.99.1:32444    ESTABLISHED
> >>>>> tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc :::10.99.99.1:34415    ESTABLISHED
> >>>>> tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc :::10.99.99.1:56901    ESTABLISHED
> >>>>> tcp        0      0 kafka1.mycompany.:XmlIpcRegSvc :::10.99.99.2:45349    ESTABLISHED
> >>>>>
> >>>>> Has anyone come across this problem before? Is this a 0.7.2 
> >>>>> leak, LB misconfiguration... ?
> >>>>>
> >>>>> Thanks
> >>>
> >
>
>


RE: Producer / Consumer - connection management

2013-05-01 Thread Nicolas Berthet
Hi Neha,

For the "connection at creation time", I had the issue with the sync producer 
only, didn't observe this with the async producer, I didn't test it yet, but I 
guess I would get similar issues.

I didn't keep the stacktrace as it happened some time ago, but basically, 
calling "new Producer()" resulted in an exception, because the connection to ZK 
wasn't working, I'm using "zk.connect".

In the setup I was testing, I have a zk cluster spanning 2 sites; in the 
second site, zk is in "observer" mode only. From time to time the observer 
loses its sync with the leader, and for a short period you can see these 
"zookeeper server not running" messages in the log (while it resyncs with the 
leader). If the producer is created (application started) at that time, it will 
fail. Again, I assume the same would happen if ZK wasn't running at all.

The solution I have so far is to wrap the producer to handle this and try to 
create it again when ZK comes back up.
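
Roughly, the wrapper does something like this (a simplified sketch, not our 
actual code; the class name and retry interval are just placeholders):

import kafka.javaapi.producer.Producer;
import kafka.producer.ProducerConfig;

// Keep retrying producer creation until ZK is reachable again, instead of
// letting the application fail at startup.
public class RetryingProducerFactory {
    public static Producer<String, String> create(ProducerConfig config,
                                                   long retryIntervalMs)
            throws InterruptedException {
        while (true) {
            try {
                // Throws while the ZK ensemble is not serving requests.
                return new Producer<String, String>(config);
            } catch (Exception e) {
                // ZK (or the broker) not reachable yet; wait and try again.
                Thread.sleep(retryIntervalMs);
            }
        }
    }
}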

As I understand it, once created, the producer / consumer will reconnect 
automatically; it would be nice to extend this behavior to support starting in a 
disconnected state.
    
Kindly,

Nicolas Berthet

-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com] 
Sent: Tuesday, April 30, 2013 10:53 PM
To: users@kafka.apache.org
Subject: Re: Producer / Consumer - connection management

> Basically, who manages network disconnections / outages, and how?

The short answer is that the producer is responsible for retrying connection 
establishment, and it automatically does so when the network connection to the 
broker fails. There is a connection retry interval that you can configure; once 
that is exhausted, it gives up trying to reconnect.

> 1 ) If the "sync" producer has been created successfully, will it
reconnect whenever it loses its connection to the broker or zk ? I could 
understand it's not happening as we expect the connection to be immediately 
available when using the sync producer.

The ZK client library manages connection loss and reconnects to ZK. But be 
aware that ZK being unavailable is never an option.

> 2) How about the "async" producer? Does it expect a connection at
creation time? Will it reconnect in case of failure?

Can you describe the connection issue you see at creation time? Is it just 
when you use the zk.connect option?

> 3) Finally, how about the "high level" consumer? Afaik, it reconnects
automatically

It does, and so does the producer.


Kafka - Camel integration

2013-04-29 Thread Nicolas Berthet
Hi,

Does anyone have a pointer to a good Kafka-Camel library?

I've been trying the one available here: 
https://github.com/BreizhBeans/camel-kafka but it has some annoying limitations, 
like the choice of the serializer.

I'm about to create something from scratch or fork an existing project, so any 
input would be greatly appreciated.

Best regards,

Nicolas Berthet


Producer / Consumer - connection management

2013-04-29 Thread Nicolas Berthet
Hi,

I'd like a few details about consumers (high level) and producers (both sync 
and async). I'm using Kafka 0.7.2.

Basically, who manages network disconnections / outages, and how? For example, 
I've been playing with the sync producer, and I realized that it can't be 
created when there's a network issue (broker or zk unavailable), as it opens 
its connection at creation time.

1) If the "sync" producer has been created successfully, will it reconnect 
whenever it loses its connection to the broker or zk? I could understand it not 
happening, as we expect the connection to be immediately available when using 
the sync producer.
2) How about the "async" producer? Does it expect a connection at creation time? 
Will it reconnect in case of failure?
3) Finally, how about the "high level" consumer? Afaik, it reconnects 
automatically.

In a nutshell, I'd like to know more about connection management, requirements, 
etc., and about any gotchas: like the connection required at creation time for 
the sync producer, or whether there is a max retry count that may cause my 
consumer to disconnect definitively if the outage is too long. My goal is to 
have all producers and consumers able to reconnect properly whenever there's a 
disconnection, outage, etc.
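
For reference, here is roughly how I create the producers today (a simplified 
sketch of the 0.7 javaapi usage; the hosts and topic are placeholders, and the 
property names are from memory, so treat them as assumptions):

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

public class ProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // ZK-based broker discovery; this is the connection that fails at
        // creation time when ZK is unreachable.
        props.put("zk.connect", "zk-host:2181");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // props.put("producer.type", "async");  // switch to the async producer

        Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));
        producer.send(new ProducerData<String, String>("test-topic", "hello"));
        producer.close();
    }
}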

Best regards,

Nicolas Berthet



RE: OffsetOutOfRangeException with 0 retention

2013-03-13 Thread Nicolas Berthet
Sadly, I don't have access to those logs anymore, I don't have access to
environment. Though I remember seeing some exception during offset writing,
most probably due to zookeeper connection issue.

What would be side effects of not being able to write the consumer offset,
beside seeing this exception ? As long as my service doesn't restart and I
do not recreate the consumer, would the consumer continue to work ? Get
duplicates of messages when it's getting connected to ZK again ?

Basically, I'm interested in whatever could go wrong with our kafka
consumers, what would be the symptoms, what would be the possible
workaround. 

Kindly,

Nicolas

-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com] 
Sent: Wednesday, March 13, 2013 9:19
To: users@kafka.apache.org
Subject: Re: OffsetOutOfRangeException with 0 retention

Looks like your consumers have never updated their offsets and are unable to
reset the offset to the earliest/latest on startup. Can you pass along the
entire consumer log?

Thanks,
Neha


On Mon, Mar 11, 2013 at 6:34 PM, Nicolas Berthet
wrote:

> Neha,
>
> Thanks for the reply. I'm using the high level consumer. Btw, I'm 
> using kafka 0.7.2 (we built it with scala 2.10); the consumer is using 
> default values, with a high ZK timeout value.
> 
> As far as I know, my consumers didn't restart; they're running on 
> services that were not restarted (unless the consumer itself 
> reconnected after some time).
> 
> I don't know if it could be part of the reason, but some of my consumers are 
> in remote sites; they have high latency and experience ZK timeouts 
> here and there. I have ZK observers on the remote sites with rather high 
> timeout values, but they still disconnect from the main site from time to 
> time due to timeouts.
> Due to the ZK timeouts, I noticed the consumers fail to write their 
> offsets.
>
>
> PS: Sorry for the previous spamming, my mail client went crazy and by 
> the time I realized it was too late.
>
> Kindly,
>
> Nicolas
>
> -Original Message-
> From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
> Sent: Monday, March 11, 2013 23:52
> To: users@kafka.apache.org
> Subject: Re: OffsetOutOfRangeException with 0 retention
>
> Nicolas,
>
> It seems that you started a consumer from the earliest offset, then 
> shut it down for a long time, and tried restarting it again. At this 
> time, you will see OffsetOutOfRange exceptions, since the offset that 
> your consumer is trying to fetch has been garbage collected from the 
> server (due to it being too old). If you are using the high level 
> consumer (ZookeeperConsumerConnector), the consumer will automatically  
> reset the offset to the earliest or latest depending on the 
> autooffset.reset config value.
>
> Which consumer are you using in this test ?
>
> Thanks,
> Neha
>
>
> On Mon, Mar 11, 2013 at 2:12 AM, Nicolas Berthet
> wrote:
>
> > Hi,
> >
> >
> >
> > I'm currently seeing a lot of OffsetOutOfRangeException in my server 
> > logs (it's not something that appeared recently, I simply didn't use 
> > Kafka before). I tried to find information on the mailing-list, but 
> > nothing seems to match my case.
> >
> >
> >
> > ERROR error when processing request FetchRequest(topic:test-topic,
> > part:0
> > offset:3004960 maxSize:1048576) (kafka.server.KafkaRequestHandlers)
> >
> >  kafka.common.OffsetOutOfRangeException: offset 3004960 is out 
> > of range
> >
> >
> >
> > I understand that, at startup, consumers will ask for a MAX_VALUE 
> > offset to trigger this exception and detect the correct offset, right ?
> >
> >
> >
> > In my case, it's just too often (much more than the number of 
> > consumer connections), but I also noticed it seems to happen in 
> > particular for topics with a "0" retention. Did anybody else suffer 
> > from the same symptoms ?
> >
> >
> >
> > Although it seems not critical (everything seems to work), it's 
> > probably far from optimal, and the log is just full of those.
> >
> >
> >
> > Regards,
> >
> >
> >
> > Nicolas
> >
> >
>
>



RE: OffsetOutOfRangeException with 0 retention

2013-03-11 Thread Nicolas Berthet
Neha,

Thanks for the reply. I'm using the high level consumer. Btw, I'm using
kafka 0.7.2 (we built it with scala 2.10); the consumer is using default
values, with a high ZK timeout value.

As far as I know, my consumers didn't restart; they're running on services
that were not restarted (unless the consumer itself reconnected after
some time).

I don't know if it could be part of the reason, but some of my consumers are in
remote sites; they have high latency and experience ZK timeouts here and
there. I have ZK observers on the remote sites with rather high timeout
values, but they still disconnect from the main site from time to time due to
timeouts.
Due to the ZK timeouts, I noticed the consumers fail to write their offsets.


PS: Sorry for the previous spamming, my mail client went crazy and by the
time I realized it was too late.

Kindly,

Nicolas

-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com] 
Sent: Monday, March 11, 2013 23:52
To: users@kafka.apache.org
Subject: Re: OffsetOutOfRangeException with 0 retention

Nicolas,

It seems that you started a consumer from the earliest offset, then shut it
down for a long time, and tried restarting it again. At this time, you will
see OffsetOutOfRange exceptions, since the offset that your consumer is
trying to fetch has been garbage collected from the server (due to it being
too old). If you are using the high level consumer
(ZookeeperConsumerConnector), the consumer will automatically reset the
offset to the earliest or latest depending on the autooffset.reset config
value.
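
For example, the high level consumer config would look something like this
(a sketch; the ZK host and group name are placeholders, and you should check
the 0.7 config docs for the exact property names):

import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zk.connect", "zk-host:2181");   // placeholder ZK host
        props.put("groupid", "test-group");        // placeholder consumer group
        // When the requested offset is out of range, jump to the earliest
        // ("smallest") or latest ("largest") available offset instead of failing.
        props.put("autooffset.reset", "smallest");

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create message streams and consume as usual ...
        connector.shutdown();
    }
}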

Which consumer are you using in this test?

Thanks,
Neha


On Mon, Mar 11, 2013 at 2:12 AM, Nicolas Berthet
wrote:

> Hi,
>
>
>
> I'm currently seeing a lot of OffsetOutOfRangeException in my server 
> logs (it's not something that appeared recently, I simply didn't use 
> Kafka before). I tried to find information on the mailing-list, but 
> nothing seems to match my case.
>
>
>
> ERROR error when processing request FetchRequest(topic:test-topic, 
> part:0
> offset:3004960 maxSize:1048576) (kafka.server.KafkaRequestHandlers)
>
>  kafka.common.OffsetOutOfRangeException: offset 3004960 is out of 
> range
>
>
>
> I understand that, at startup, consumers will ask for a MAX_VALUE 
> offset to trigger this exception and detect the correct offset, right ?
>
>
>
> In my case, it's just too often (much more than the number of consumer 
> connections), but I also noticed it seems to happen in particular for 
> topics with a "0" retention. Did anybody else suffer from the same 
> symptoms ?
>
>
>
> Although it seems not critical (everything seems to work), it's 
> probably far from optimal, and the log is just full of those.
>
>
>
> Regards,
>
>
>
> Nicolas
>
>



OffsetOutOfRangeException with 0 retention

2013-03-11 Thread Nicolas Berthet
Hi,

 

I'm currently seeing a lot of OffsetOutOfRangeException in my server logs
(it's not something that appeared recently, I simply didn't use Kafka
before). I tried to find information on the mailing-list, but nothing seems
to match my case. 

 

ERROR error when processing request FetchRequest(topic:test-topic, part:0
offset:3004960 maxSize:1048576) (kafka.server.KafkaRequestHandlers)

 kafka.common.OffsetOutOfRangeException: offset 3004960 is out of range

 

I understand that, at startup, consumers will ask for a MAX_VALUE offset to
trigger this exception and detect the correct offset, right?
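
(By "detect the correct offset" I mean something like the lookup below with the
simple consumer; this is only a sketch from my reading of the 0.7 API, so the
exact signature may be off, and the broker host and topic are placeholders:)

import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetLookupSketch {
    public static void main(String[] args) {
        // host, port, socket timeout (ms), buffer size
        SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 30000, 64 * 1024);
        // time = -1L asks for the latest offset, -2L for the earliest one
        long[] offsets = consumer.getOffsetsBefore("test-topic", 0, -1L, 1);
        System.out.println("latest offset: " + offsets[0]);
        consumer.close();
    }
}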

 

In my case, it just happens too often (much more often than the number of consumer
connections), but I also noticed it seems to happen in particular for topics
with a "0" retention. Did anybody else suffer from the same symptoms?

 

Although it doesn't seem critical (everything seems to work), it's probably far
from optimal, and the log is just full of these.

 

Regards,

 

Nicolas 


