Hi All,
Has anyone had experience with, or encountered any issues, using a 0.8.2.2
producer against a 0.8.1.1 broker (specifically kafka_2.9.2-0.8.1.1)?
I want to upgrade my existing producer (0.8.2-beta).
Also, is there a functional difference between the Scala versions
(2.9.2, 2.10, 2.11)?
Thanks,
…you are using the old clients
> > (which are in the core jar) and the rest of your app requires a specific
> > Scala version.
> >
> > -Ewen
> >
> > On Wed, Dec 23, 2015 at 6:31 AM, Shlomi Hazan <shl...@viber.com> wrote:
> >
> > > Hi All,
>
of the JMX UI screens to push metrics into
TSDB to get the following. The rate below is per second, so I could push
the Kafka cluster to 140k+ messages/sec on a 4-node cluster with very
little utilization (about 30%).
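For pulling rates like these without a JMX UI, Kafka ships a small command-line JMX poller that can feed a metrics store; a sketch, where the JMX port and the mbean object name are assumptions (they vary by broker version, and 0.8.1.x used a different naming scheme):

```shell
# Poll a broker's messages-in rate over JMX using Kafka's bundled JmxTool.
# Assumes the broker was started with JMX_PORT=9999; the object name below
# is illustrative and differs between 0.8.1.x and later releases.
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name 'kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec' \
  --reporting-interval 1000
```

The tool prints one CSV line per interval, which is easy to pipe into a TSDB collector.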
From: Shlomi Hazan shl...@viber.com
To: users@kafka.apache.org
On Thu, Jan 1, 2015 at 1:40 AM, Shlomi Hazan shl...@viber.com wrote:
Happy new year!
I did not set log.flush.interval.messages.
I also could not find a default value in the docs.
Could you explain that?
Thanks,
Shlomi
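For reference: in 0.8.x, log.flush.interval.messages is effectively unset by default (the internal default is Long.MAX_VALUE), so the broker leaves fsync timing to the OS page cache and relies on replication for durability. A server.properties sketch, where the values shown are illustrative and not recommendations:

```properties
# By default log.flush.interval.messages is effectively unlimited,
# so fsync timing is left to the OS. Setting these forces explicit flushes:

# fsync after this many messages per log
log.flush.interval.messages=10000
# and at least this often (milliseconds)
log.flush.interval.ms=1000
```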
On Thu, Jan 1, 2015 at 2:20 AM, Jun Rao j...@confluent.io
Jun
On Mon, Dec 29, 2014 at 3:26 AM, Shlomi Hazan shl...@viber.com wrote:
Hi,
I am using 0.8.1.1, and I see hundreds of milliseconds of latency at best
and even seconds at worst.
I have this latency both in production (with a peak load of 30K msg/sec,
replication = 2 across 5 brokers, acks = 1),
and on the local Windows machine using just one process for each of
producer,
Jay, Jun,
Thank you both for explaining. I understand this is important enough such
that it must be done, and if so, the sooner the better.
How will the change be released? A beta-2 or a release candidate? I think
that, if possible, it should not overrun the already released version.
Thank you guys
Jun
On Sun, Nov 23, 2014 at 2:12 AM, Shlomi Hazan shl...@viber.com wrote:
Hi,
Started to dig into that new producer and have a few questions:
1. What part (if any) of the old producer config still applies to the new
producer, or is it just what is specified under New Producer Configs?
2. How do you specify a partitioner to the new producer? If there is no such
option, what usage is
, but will not touch the existing ones.
Guozhang
On Mon, Nov 17, 2014 at 11:13 PM, Shlomi Hazan shl...@viber.com wrote:
Hi Guozhang,
Sorry for being too brief, but the question referred to adding partitions
with the topic tool (without specifying a JSON file).
I was not aware of the JSON file
Hi
I want to add partitions to a running topic,
and since I use the Python producer I will eventually have to restart the
producers to reflect the change.
The question is whether leadership will change for the existing partitions
too, forcing me to immediately restart the producers.
10x,
Shlomi
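For reference, the topic tool can add partitions directly, without a reassignment JSON file; a sketch where the topic name, partition count, and ZooKeeper address are placeholders:

```shell
# Grow an existing topic to 8 partitions using the topic tool.
# Leadership of the existing partitions is not moved by this operation,
# but for keyed messages the key-to-partition mapping does change once
# the partition count grows.
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-topic --partitions 8
```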
Hi,
I need to make a choice and I can't get a full picture on the differences
between the two.
E.g.:
Are both producers async-capable to the same extent?
Is the new producer stable for production?
Is there some usage example for the new producer?
What are the tradeoffs using one or another?
10x,
the existing partitions you should be fine.
Guozhang
On Mon, Nov 17, 2014 at 1:08 AM, Shlomi Hazan shl...@viber.com wrote:
.
Christian
On Wed, Nov 12, 2014 at 10:37 PM, Shlomi Hazan shl...@viber.com wrote:
I was asking to find out whether there's a point in trying...
From your answer I understand that the answer is yes.
10x,
Shlomi
On Wed, Nov 12, 2014 at 7:04 PM, Guozhang Wang wangg...@gmail.com
wrote:
Shlomi
Hi,
Is the new producer in 0.8.2 supposed to work with a 0.8.1.1 cluster?
Shlomi
On Mon, Nov 10, 2014 at 5:20 AM, Shlomi Hazan shl...@viber.com wrote:
Hmmm..
The Java producer example seems to ignore added partitions too...
How can I auto-refresh keyed producers to use new partitions as these
partitions are added?
On Mon, Nov 10, 2014 at 12:33 PM, Shlomi
On Tue, Nov 11, 2014 at 9:24 AM, Shlomi Hazan shl...@viber.com wrote:
Hi,
My ZooKeeper 'dataLogDir' is eating up my disk with tons of snapshot files
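ZooKeeper never purges old snapshots and transaction logs on its own, so some cleanup is needed; a sketch, where the installation path and retention count are placeholders:

```shell
# One-off purge: keep only the 3 most recent snapshots and transaction logs.
/opt/zookeeper/bin/zkCleanup.sh -n 3

# On ZooKeeper 3.4+ this can be automated in zoo.cfg instead:
#   autopurge.snapRetainCount=3
#   autopurge.purgeInterval=24    (purge check interval, in hours)
```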
but it may have some performance
impact
on the Kafka cluster since it ends up sending topic metadata requests to
the broker at that interval.
Thanks,
Neha
On Tue, Nov 11, 2014 at 1:45 AM, Shlomi Hazan shl...@viber.com wrote:
Neha, I understand that the producer
On Mon, Nov 10, 2014 at 9:34 AM, Shlomi Hazan shl...@viber.com wrote:
No I don't see anything like that, the question was aimed at learning if
it is worthwhile to make the effort of reimplementing the Python producer
in Java, I so I will not make all the effort just to be disappointed
On Mon, Nov 10, 2014 at 12:33 PM, Shlomi Hazan shl...@viber.com wrote:
One more thing:
I saw that the Python client is also
if the leader of the partitions being
reassigned also changes. However, it should retry and succeed. Do you see a
behavior that suggests otherwise?
On Sat, Nov 8, 2014 at 11:45 PM, Shlomi Hazan shl...@viber.com wrote:
Hi All,
I recently had an issue producing from Python where expanding a cluster
from 3 to 5 nodes and reassigning partitions forced me to restart the
producer because of a KeyError being thrown.
Is this situation handled by the Java producer automatically, or need I do
something to have the Java producer
get? What's the output of the
ConsumerOffsetChecker (see http://kafka.apache.org/documentation.html)?
For consumer.id, you don't need to set it in general. We generate some
uuid
automatically.
Thanks,
Jun
On Tue, Oct 28, 2014 at 4:59 AM, Shlomi Hazan shl...@viber.com wrote:
If there are more consumers in a group than partitions, some consumers will
never get any data.
Thanks,
Jun
On Mon, Oct 27, 2014 at 4:14 AM, Shlomi Hazan shl...@viber.com wrote:
Hi All,
Using Kafka's high-level consumer API, I have bumped into a situation where
launching a consumer process P1 with X consuming threads on a topic with X
partitions kicks out all other existing consumer threads that consumed prior
to launching the process P1.
That is, consumer process P1 is stealing
or consumers (i.e., no rebalances).
If you can reproduce this easily, can you please send exact steps to
reproduce and send over your consumer logs?
Thanks,
Joel
On Mon, Oct 20, 2014 at 09:13:27PM +0300, Shlomi Hazan wrote:
Yes I did. It is set to 2.
On Oct 20, 2014 5:38 PM, Neha Narkhede
Hi All,
Will version 0.8.1.2 happen?
Shlomi
Hi,
Running some tests on 0.8.1.1 and wanted to see what happens when a broker
is taken down with 'kill'. I bumped into the situation in the subject, where
bringing the broker back up left it a bit out of the game as far as I could
see using Stackdriver metrics.
Trying to rebalance with verify
, Shlomi Hazan shl...@viber.com wrote:
just that: print the
offset and lag for each consumer and partition.
You can either use that class directly, or use it as a guideline for
your implementation
On Wed, Oct 1, 2014 at 2:10 AM, Shlomi Hazan shl...@viber.com wrote:
Hi,
How can I programmatically get the number of items
parameter...
On Sun, Oct 5, 2014 at 1:22 PM, Shlomi Hazan shl...@viber.com wrote:
Bingo. 10x!!
On Wed, Oct 1, 2014 at 6:41 PM, chetan conikee coni...@gmail.com wrote:
The other method is via command line
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group
*groupName*
--zkconnect
Hi,
How can I programmatically get the number of items in a topic that are
pending consumption?
If no programmatic way is avail, what other method is available?
Shlomi
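There is no single "queue length" call, but the per-partition lag reported by ConsumerOffsetChecker (log end offset minus committed offset) sums to the number of messages pending consumption for a group; a sketch, where the group name and ZooKeeper address are placeholders:

```shell
# Print offset, log size, and lag for each partition owned by the group;
# summing the Lag column gives the messages still pending consumption.
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --group my-group \
  --zkconnect localhost:2181
```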
No, just a bare centos 6.5 on an EC2 instance
On Sep 11, 2014 1:39 AM, Jun Rao jun...@gmail.com wrote:
I meant whether you start the broker in service containers like Jetty or
Tomcat.
Thanks,
Jun
On Wed, Sep 10, 2014 at 12:28 AM, Shlomi Hazan shl...@viber.com wrote:
Hi, sorry, what do
On Tue, Sep 9, 2014 at 12:05 AM, Shlomi Hazan shl...@viber.com wrote:
Hi,
It's probably beyond that. It may be an issue with the number of files
Kafka can have open concurrently.
A previous conversation with Joe about (build fails for latest stable
source tgz (kafka_2.9.2-0.8.1.1
...
Thanks,
Shlomi
On Tue, Sep 9, 2014 at 7:12 AM, Jun Rao jun...@gmail.com wrote:
What type of error did you see? You may need to configure a larger open
file handle limit.
Thanks,
Jun
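Every log segment and client socket holds a file descriptor, so a loaded broker can exhaust the default limit (often 1024 on CentOS). A sketch for checking and raising it; the user name and values are illustrative:

```shell
# Show the open-file limit of the current shell (the broker inherits it):
ulimit -n

# Raise it for the session before starting the broker, e.g.:
#   ulimit -n 100000
# To make it permanent on CentOS 6, add to /etc/security/limits.conf:
#   kafka  soft  nofile  100000
#   kafka  hard  nofile  100000
```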
On Wed, Sep 3, 2014 at 12:01 PM, Shlomi Hazan hzshl...@gmail.com wrote:
Hi,
I am trying to load
(kafka.network.Acceptor)
...
On Sat, Sep 6, 2014 at 5:48 PM, Shlomi Hazan shl...@viber.com wrote:
Hi, and sorry for the late response; I just got into the weekend and it is
still Saturday here...
Well, I'm not at my desk, but will answer what I can:
1. what else on the logs? [*will vpn and check*]
2. other
Hi,
While I am not sure that JDK 8 is the problem, what I did is simply clone
and gradle the source.
I kept getting failures and excluding tasks until eventually I did this:
*gradle -PscalaVersion=2.9.2 -x :clients:javadoc -x :clients:signArchives
-x :clients:licenseTest -x :contrib:signArchives
What gradle version is used to build kafka_2.9.2-0.8.1.1?
I tried with v2 and it failed with:
gradle --stacktrace clean
FAILURE: Build failed with an exception.
* Where:
Build file
'/home/shlomi/0dec0xb/project/vpmb/master/3rdparty/kafka/code/kafka-0.8.1.1-src/build.gradle'
line: 34
* What
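The 0.8.1.1 build script predates Gradle 2.x, which is the likely cause of the failure at build.gradle line 34; bootstrapping with a Gradle 1.x release and then using the generated wrapper is one workaround (a sketch based on the source README of that era, not a verified fix for this exact error):

```shell
# Run once with a Gradle 1.x release (e.g. 1.6+) to generate the wrapper:
gradle

# Then build through the wrapper, pinning the Scala version:
./gradlew -PscalaVersion=2.9.2 jar
```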
On Thu, Sep 4, 2014 at 10:59 AM, Shlomi Hazan shl...@viber.com wrote:
it failed with JDK 8, so I hoped a newer gradle would maybe do the magic
.
/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop http://www.twitter.com/allthingshadoop
/
On Thu, Sep 4, 2014 at 11:22 AM, Shlomi Hazan
Hi,
I am trying to get rid of the log files written under “$base_dir/logs”, a
folder created by line 26 of “bin/kafka-run-class.sh”.
I use an EC2 machine with a small primary disk, and it blows up on occasion
when writing to these logs is excessive; I have bumped into a few incidents
already (from Jira it
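One workaround sketch, assuming your copy of kafka-run-class.sh resolves the log directory from the kafka.logs.dir system property the way the bundled config/log4j.properties does (the variable names vary across versions, so check the script before relying on this):

```shell
# Point the log4j file appenders at a larger volume instead of $base_dir/logs.
# KAFKA_HOME and the target directory are placeholders.
mkdir -p /mnt/data/kafka-app-logs
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$KAFKA_HOME/config/log4j.properties -Dkafka.logs.dir=/mnt/data/kafka-app-logs"
$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
```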
Hi,
Doing some evaluation testing, and accidentally created a topic with the
wrong replication factor.
Trying to delete it as in:
kafka_2.10-0.8.1.1/bin/kafka-topics.sh --zookeeper localhost:2181 --delete
--topic replicated-topic
yielded:
Command must include exactly one action: --list, --describe,
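For context, that error is expected on 0.8.1.1: its kafka-topics.sh simply has no --delete action, which is why the message lists only the other actions. Working topic deletion arrived in 0.8.2 behind a broker flag; a sketch of the 0.8.2+ procedure (the ZooKeeper address is a placeholder):

```shell
# 1. Enable deletion on every broker (server.properties), then restart them:
#      delete.topic.enable=true
# 2. Only then is the delete action accepted:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete \
  --topic replicated-topic
```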