In my opinion, for production I would run with RAID 10. It's true that Kafka
has durability with respect to broker shutdowns, but there are exceptions and
hiccups. If you want to avoid tons of partition movement between brokers and
connection errors on the clients (which may or may not happen, depending on
how loaded your cluster is)
The kafka-fast connector handles this differently from the standard Kafka
client (which allows at most one consumer per partition): it breaks
offsets into consumable ranges, which allows one partition to be read by
multiple consumers, where each consumer uniquely receives a different offset
range.
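To make the idea concrete, here is a minimal sketch of splitting one
partition's offsets into disjoint ranges; it only illustrates the approach,
it is not kafka-fast's actual implementation, and all names are invented:

import java.util.ArrayList;
import java.util.List;

// Toy illustration: split one partition's offset range into chunks so
// several consumers can fetch disjoint slices of the same partition.
public class OffsetRanges {

    static final class Range {
        final long start, end; // [start, end)
        Range(long start, long end) { this.start = start; this.end = end; }
        public String toString() { return "[" + start + ", " + end + ")"; }
    }

    // Divide [start, end) into chunks of at most chunkSize offsets.
    static List<Range> split(long start, long end, long chunkSize) {
        List<Range> ranges = new ArrayList<>();
        for (long s = start; s < end; s += chunkSize) {
            ranges.add(new Range(s, Math.min(s + chunkSize, end)));
        }
        return ranges;
    }

    public static void main(String[] args) {
        // Offsets 0..10000 of one partition, dealt out in 2500-offset
        // slices, each slice going to a different consumer:
        for (Range r : split(0, 10_000, 2_500)) {
            System.out.println("consumer gets " + r);
        }
    }
}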
it should give you errors as a general rule.
> If you are aware of certain scenarios where it should give an error and it
> doesn't, then please file a bug with steps to reproduce.
>
> Ismael
>
> On Thu, Jan 19, 2017 at 6:48 PM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com>
> has the same
> behavior. Have you experienced that as well?
>
> On Thu, Jan 19, 2017 at 11:48 AM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > Hi,
> >
> > I've added kerberos support for https://github.com/gerritjvv/kafka-fast
> >
Hi,
I've added Kerberos support for https://github.com/gerritjvv/kafka-fast and
have seen that the Kafka brokers do not send any response if the SASL
authentication is not correct or accepted, thus causing the client to hang
while waiting for a response from Kafka.
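One client-side guard is a socket read timeout, so a missing SASL response
surfaces as an error instead of a hang. A sketch (not kafka-fast's actual
code; the host, port and timeout values are illustrative):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class SaslReadGuard {
    public static void main(String[] args) throws IOException {
        try (Socket sock = new Socket()) {
            sock.connect(new InetSocketAddress("kafka1", 9092), 10_000); // connect timeout
            sock.setSoTimeout(10_000); // fail reads after 10s instead of blocking forever
            // ... write the SASL token here, then wait for the broker's reply:
            int b = sock.getInputStream().read();
            System.out.println("broker replied, first byte: " + b);
        } catch (SocketTimeoutException e) {
            // No response within the timeout -- treat it as failed authentication.
            throw new IOException("no SASL response from broker; assuming auth failure", e);
        }
    }
}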
Some things that might help to
The Kafka brokers have a maximum message size limit; this is a protection
measure that avoids sending monster messages to Kafka.
You have two options:
1. On the brokers, increase message.max.bytes (max.request.size is the
matching producer-side setting); the default broker limit is about 1mb, and
making it 5mb or even 10mb is not normally an issue. Java applications can
happily deal with messages of that size.
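If you raise the broker limit, the clients need matching settings. A sketch
of the producer side with the Java client (the property names are standard
Kafka client configs; the address, topic and sizes are illustrative):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class LargeMessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.ByteArraySerializer");
        // Allow produce requests up to 5mb; the broker must also have
        // message.max.bytes at least this large or it will reject the request.
        props.put("max.request.size", String.valueOf(5 * 1024 * 1024));

        try (KafkaProducer<String, byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", new byte[4 * 1024 * 1024]));
        }
    }
}

Consumers have a matching knob (fetch.message.max.bytes on the old consumer,
max.partition.fetch.bytes on the new one) that must also be at least as large
as the biggest message.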
After a week of reading I could get started working on
using Kerberos without getting senselessly frustrated all the time.
On Fri, Dec 30, 2016 at 5:49 PM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
> make sure kafka1 is the FQN and that the server kafka1 can resolve
make sure kafka1 is the FQDN and that the server kafka1 resolves properly
from your Kerberos server. EXAMPLE.COM should be a realm that is
configured in krb5.conf and kdc.conf, with the adequate domain mappings for
kafka1 to this realm.
Kerberos is a pain and there is a ton of stuff that can go wrong.
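On the JVM side, the realm configuration is wired in through standard system
properties plus the usual Kafka SASL settings. A sketch (the file paths are
assumptions; the property names are the standard JVM/Kafka ones):

import java.util.Properties;

public class KerberosClientSetup {
    public static void main(String[] args) {
        // Standard JVM properties: where the realm/domain_realm mappings live
        // and which JAAS login configuration to use (paths are illustrative).
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        System.setProperty("java.security.auth.login.config", "/etc/kafka/jaas.conf");

        // Standard Kafka client SASL settings for a Kerberized broker.
        Properties props = new Properties();
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka"); // must match the broker principal
        // ... plus the usual bootstrap.servers / serializer settings.
    }
}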
I don't know about speeding up rebalancing, and an hour seems to suggest
something is wrong with ZooKeeper, or maybe with your whole setup. If it
becomes an unsolvable issue for you, you could try
https://github.com/gerritjvv/kafka-fast which uses a different model and
doesn't need balancing or rebalancing.
> connection between broker and subscriber should not be terminated.
> Subscriber is free to change his topic interests without closing the
> connection.
>
> On Wed, Nov 2, 2016 at 12:43 PM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > Hi,
> >
> >
Hi,
Have a look at the kafka client lib
https://github.com/gerritjvv/kafka-fast#java-1, it already provides this
functionality.
On Wed, Nov 2, 2016 at 2:34 AM, Janagan Sivagnanasundaram <
janagan1...@gmail.com> wrote:
> Kafka's current nature does not support a dynamic subscriber
> environment
take a look at the kafka client https://github.com/gerritjvv/kafka-fast; it
uses a different approach where you can have multiple consumers
per topic+partition (i.e. no relation between topic partitions and
consumers). It uses Redis, but only for offsets and work distribution, not
for the messages themselves.
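To make the Redis role concrete, a minimal sketch of keeping per-partition
offsets in Redis using the Jedis client (the key layout is invented for
illustration and is not kafka-fast's actual schema):

import redis.clients.jedis.Jedis;

public class RedisOffsets {
    public static void main(String[] args) {
        try (Jedis redis = new Jedis("localhost", 6379)) {
            String key = "offsets:my-topic:0"; // hypothetical key: topic + partition
            // Record that processing has reached offset 1234 of this partition.
            redis.set(key, "1234");
            // Another consumer (or a restart) resumes from the stored offset.
            long next = Long.parseLong(redis.get(key)) + 1;
            System.out.println("resume fetching at offset " + next);
        }
    }
}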
Hi,
I've seen pauses using G1 in other applications and have found that
-XX:+UseParallelGC -XX:+UseParallelOldGC work best if you're having GC
issues in general on the JVM.
Regards,
Gerrit
On Wed, Oct 14, 2015 at 4:28 PM, Cory Kolbeck wrote:
> Hi folks,
>
> I'm a bit new to the operational
Also check your GC; this can be caused by using async sends without
backpressure. With the latest JDK and G1 I've found that many times a JVM
app can become unresponsive without ever throwing an OOM.
Running jstat -gcutil <pid> 250 will tell you: e.g. if S0 or S1 stay at
100.00 then you've got a GC problem and need to retune.
Hi,
I'm not sure about the high level consumer but I maintain a kafka consumer
that can add and remove topics dynamically.
https://github.com/gerritjvv/kafka-fast
see
https://github.com/gerritjvv/kafka-fast/blob/master/kafka-clj/java/kakfa_clj/core/Consumer.java
if you're using java/scala
On T
> Out of curiosity: did you choose Redis because ZooKeeper is not well
> supported in Clojure? Or were there other reasons?
>
> On Mon, Oct 13, 2014 at 2:04 PM, Gerrit Jansen van Vuuren
> wrote:
> > Hi Steven,
> >
> > Redis:
> >
> > I've had a discussion on redis to
> Possibly-uninformed comments inline,
>
>
> On Oct 13, 2014, at 2:00 AM, Gerrit Jansen van Vuuren
> wrote:
>
> > Hi Daniel,
> >
> > At the moment redis is a spof in the architecture, but you can setup
> > replication and I'm seriously looking into using redis
> I do have one question: what are
> the guarantees you offer to users of your library under failures,
> particularly when Redis fails?
>
> --
> Daniel
>
> > On 13/10/2014, at 10:22 am, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
> >
> > Hi,
>
Hi,
Just thought I'd put this out for the kafka community to see (if anyone
finds it useful, great!!).
Kafka-fast is a 100% pure Clojure implementation for Kafka, but it's not
just meant for Clojure, because it has a Java API wrapper that can be used
from Java, Groovy, JRuby or Scala.
This library does
>
>
> On Tue, Jun 17, 2014 at 1:48 AM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > Hi,
> >
> > I've installed kafka 2.8.1,
> > created a topic using:
> >
> > /opt/kafka/bin/kafka-topics.sh --create --topic "te
Hi,
I've installed kafka 2.8.1,
created a topic using:
/opt/kafka/bin/kafka-topics.sh --create --topic "test" --zookeeper
"localhost:2381" --partitions 2 --replication-factor 2
Then opened a console producer and a console consumer.
I type a few lines on the producer and then the two kafka broker
I've found the response to my own question:
http://mail-archives.apache.org/mod_mbox/kafka-users/201308.mbox/%3c44d1e1522419a14482f89ff4ce322ede25025...@brn1wnexmbx01.vcorp.ad.vrsn.com%3E
On Wed, Jan 29, 2014 at 1:17 PM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
Hi,
I've finally fixed this by closing the connection on timeout and creating a
new connection on the next send.
Thanks,
Gerrit
On Tue, Jan 14, 2014 at 10:20 AM, Gerrit Jansen van Vuuren <
gerrit...@gmail.com> wrote:
> Hi,
>
> thanks I will do this.
>
>
>
>
Hi,
I'm testing kafka 0.8.0 failover.
I have 5 brokers: 1, 2, 3, 4, 5. I shut down broker 5 (with controlled
shutdown activated).
Broker 4 is my bootstrap broker.
My config has: default.replication.factor=2, num.partitions=8.
When I look at the kafka server.log on broker 4 I get the below error,
which only
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop <http://www.twitter.com/allthingshadoop>
> ************/
>
>
> On Tue, Jan 14, 2014 at 3:38 AM, Gerrit Janse
> Thanks,
>
> Jun
>
>
> On Mon, Jan 13, 2014 at 8:42 AM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > I'm using netty and async write, read.
> > For read I used a timeout such that if I do not see anything on the read
> > channel, m
> missing
> the request? Do you use large messages in your test?
>
> If you haven't enabled compression, it's weird that you will re-get 240 and
> 241 with an offset of 242 in the fetch request. Is that easily
> reproducible?
>
> Thanks,
>
> Jun
>
>
> On Mo
Name: FetchRequest; Version: 0;
CorrelationId: 1389443537; ClientId: 1; ReplicaId: -1; MaxWait: 1000 ms;
MinBytes: 1 bytes; RequestInfo: [ping,0] ->
PartitionFetchInfo(187,1048576).
This corresponds with the timed out fetch request.
On Sat, Jan 11, 2014 at 12:19 PM, Gerrit Jansen van Vuuren
> responsible for advancing the
> offsets after consumption.
>
> Thanks,
>
> Jun
>
>
> On Thu, Jan 9, 2014 at 1:00 PM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > Hi,
> >
> > I'm writing a custom consumer for kafka 0.8.
> > Everyth
Have you tried using more producers?
The Kafka broker is performant, but the client producer's performance is
not what it should be.
You can also have a look at tuning the number of the Kafka broker's network
and I/O threads.
Regards,
Gerrit
On Fri, Jan 10, 2014 at 1:06 PM, Klaus Schaefers <
klaus
> A fetch request will return an entire
> compressed block even if the requested offset isn't the beginning of the
> compressed block. Thus a message we saw previously may be returned again.
>
> This is probably what is happening to you
>
> Chris
>
>
> On Thu, Jan 9, 2014 at 4
Hi,
I'm writing a custom consumer for kafka 0.8.
Everything works except for the following:
a. connect, send fetch, read all results
b. send fetch
c. send fetch
d. send fetch
e. via the console publisher, publish 2 messages
f. send fetch :corr-id 1
g. read 2 messages published :offsets [10 11] :c
> the max message size? Do you really expect to have a
> single message of 600MB? After that, you can reduce the fetch size.
>
> Thanks,
>
> Jun
>
>
> On Thu, Jan 2, 2014 at 8:06 AM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > There is
>
> On Thu, Jan 2, 2014 at 2:42 AM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > Hi,
> >
> > I just double checked my configuration and the broker has
> message.max.bytes
> > set to 1 gig, the consumers have the same setting for max
> and
> throw an exception, which we don't do currently.
>
> Thanks,
>
> Jun
>
>
> On Wed, Jan 1, 2014 at 9:27 AM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > Mm... could be, I'm not sure if in a single request though. I am moving
> al
> Currently, we don't handle integer overflow properly.
>
> Thanks,
>
> Jun
>
>
> On Wed, Jan 1, 2014 at 4:24 AM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > While consuming from the topics I get an IllegalArgumentException and all
> > co
ove to if
> someone's come up with it.
>
> -Chris
>
>
>
> On Wed, Jan 1, 2014 at 9:10 AM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > I've seen this bottleneck regardless of using compression or not; both
> > situations give me po
On Wed, Jan 1, 2014 at 5:42 AM, yosi botzer wrote:
>
> > This is very interesting, this is what I see as well. I wish someone
> could
> > explain why it is not as explained here:
> > http://engineering.gnip.com/kafka-async-producer/
> >
> >
> > On Wed, Jan 1, 2014 at 2:39 PM
>
> On Wed, Jan 1, 2014 at 2:22 PM, Gerrit Jansen van Vuuren <
> gerrit...@gmail.com> wrote:
>
> > The producer is heavily synchronized (i.e. all the code in the send
> method
> > is encapsulated in one huge synchronized block).
> > Try creating mu
While consuming from the topics I get an IllegalArgumentException and all
consumption stops; the error keeps on throwing.
I've tracked it down to FetchResponse.scala line 33.
The error happens when the FetchResponsePartitionData object's readFrom
method calls:
messageSetBuffer.limit(messageSetSize)
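Later replies in this thread point at integer overflow, and the link to this
exception is easy to demonstrate, since ByteBuffer.limit rejects a negative
limit. A toy illustration, not Kafka's code:

import java.nio.ByteBuffer;

public class OverflowDemo {
    public static void main(String[] args) {
        // A size computed from very large fetch/message settings can wrap
        // past int's maximum (2147483647) and come out negative.
        int messageSetSize = Integer.MAX_VALUE + 2; // wraps to -2147483647
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.limit(messageSetSize); // throws IllegalArgumentException: negative limit
    }
}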
The producer is heavily synchronized (i.e. all the code in the send method
is encapsulated in one huge synchronized block).
Try creating multiple producers and round-robin your sends over them.
e.g.
p = producers[ n++ % producers.length ]
p.send msg
This will give you one thread per producer instance.
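A runnable version of that idea against the 0.8-era Java producer API (a
sketch, not a hardened pool; the broker address, topic and pool size are
illustrative):

import java.util.Properties;
import java.util.concurrent.atomic.AtomicInteger;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerPool {
    private final Producer<String, String>[] producers;
    private final AtomicInteger n = new AtomicInteger();

    @SuppressWarnings("unchecked")
    ProducerPool(int size, Properties props) {
        producers = new Producer[size];
        for (int i = 0; i < size; i++) {
            // Each producer has its own synchronized send path.
            producers[i] = new Producer<>(new ProducerConfig(props));
        }
    }

    // Round-robin over the pool so concurrent senders don't all contend
    // on one producer's big synchronized block.
    void send(String topic, String msg) {
        Producer<String, String> p =
            producers[Math.floorMod(n.getAndIncrement(), producers.length)];
        p.send(new KeyedMessage<String, String>(topic, msg));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092"); // illustrative
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        ProducerPool pool = new ProducerPool(4, props);
        pool.send("test", "hello");
    }
}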
> I can ensure that RoundRobinPartitioner has been successfully
> registered but logic of round robin is not getting called.
>
> Any help to resolve what i am missing ?
>
> Thanks in advance !!
>
>
>
> On Tue, Dec 17, 2013 at 5:59 PM, Guozhang Wang wrote:
>
> > Hello
Hi,
I've had the same issue with the kafka producer.
You need to use a different partitioner than the default one provided with
Kafka.
I've created a round-robin partitioner that works well for equally
distributing data across partitions.
https://github.com/gerritjvv/pseidon/blob/master/pseidon-k
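The idea looks roughly like this against the 0.8-era producer's
kafka.producer.Partitioner contract (a sketch for illustration, not the
pseidon code itself):

import java.util.concurrent.atomic.AtomicInteger;
import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

// Ignores the message key entirely and deals messages out to partitions in turn.
public class RoundRobinPartitioner implements Partitioner {
    private final AtomicInteger counter = new AtomicInteger();

    // The 0.8 producer instantiates partitioners reflectively with this signature.
    public RoundRobinPartitioner(VerifiableProperties props) {}

    @Override
    public int partition(Object key, int numPartitions) {
        return Math.floorMod(counter.getAndIncrement(), numPartitions);
    }
}

It is registered on the producer with partitioner.class=RoundRobinPartitioner
(the class must be on the producer's classpath).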