+1 on this change — APIs are forever. As much as we’d love to see 0.8.2 release
ASAP, it is important to get this right.
-JW
On Nov 24, 2014, at 5:58 PM, Jun Rao jun...@gmail.com wrote:
Hi, Everyone,
I'd like to start a discussion on whether it makes sense to add the
serializer api back
There are various costs when a broker fails, including leader election for each
affected partition, possible issues for in-flight messages, and client
rebalancing.
So even though replication provides partition redundancy, RAID 10 on each
broker is usually a good
Shapira gshap...@cloudera.com
wrote:
Makes sense. Thanks :)
On Wed, Oct 22, 2014 at 11:10 AM, Jonathan Weeks
jonathanbwe...@gmail.com wrote:
There are various costs when a broker fails, including broker leader
election for each partition, etc., as well as exposing possible issues
probably going to test out RAID 5 and 6 to start with and see how much we
lose from the parity calculations.
-Todd
On Wed, Oct 22, 2014 at 3:59 PM, Jonathan Weeks jonathanbwe...@gmail.com
wrote:
Neha,
Do you mean RAID 10 or RAID 5 or 6? With RAID 5 or 6, recovery is definitely
Sure — take a look at the kafka unit tests as well as kafka.admin.AdminUtils, e.g.:
import kafka.admin.AdminUtils
// createTopic(zkClient, topic, numPartitions, replicationFactor)
AdminUtils.createTopic(zkClient, topicNameString, 10, 1)
Best Regards,
-Jonathan
On Oct 13, 2014, at 9:58 AM, hsy...@gmail.com wrote:
Hi guys,
Besides TopicCommand, which I
I was the one asking for 0.8.1.2 a few weeks back, when 0.8.2 was at least 6-8
weeks out.
If we truly believe that 0.8.2 will go “golden” and stable in 2-3 weeks, I, for
one, don’t need a 0.8.1.2, but it depends on the confidence in shipping 0.8.2
soonish.
YMMV,
-Jonathan
On Sep 30, 2014, at
I would look at writing a service that reads from your existing topic and
writes to a new topic with (e.g. four) partitions.
You will also need to pay attention to the partitioning policy (or implement
your own), as the default hashing in the current Kafka version can lead
to poor
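A minimal sketch of the key-based routing logic one might plug into a custom partitioning policy (pure Scala, names are illustrative; in the 0.8-era producer this is the `(key, numPartitions) => Int` function a `kafka.producer.Partitioner` implementation wraps):

```scala
// Illustrative sketch of a custom partitioning policy (assumes string keys).
// floorMod keeps the result in [0, numPartitions) even for negative
// hashCode values (math.abs would misbehave on Int.MinValue).
object StickyPartitioning {
  def partitionFor(key: String, numPartitions: Int): Int = {
    require(numPartitions > 0, "numPartitions must be positive")
    java.lang.Math.floorMod(key.hashCode, numPartitions)
  }
}
```

The same function can drive the repartitioning service described above: read each message from the old topic, compute the target partition from its key, and produce to the new topic.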
When 0.8.2 arrives in the near future, consumer offsets will be stored by the
brokers, and thus that workload will no longer impact ZK.
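For reference, once 0.8.2 is in place the high-level consumer opts into broker-based offset storage via its configuration (property names per the 0.8.2 consumer config; verify against your release):

```
# store offsets in Kafka rather than ZooKeeper (0.8.2 consumer config)
offsets.storage=kafka
# during migration, commit to both Kafka and ZooKeeper
dual.commit.enabled=true
```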
Best Regards,
-Jonathan
On Sep 10, 2014, at 8:20 AM, Mike Marzo precisionarchery...@gmail.com wrote:
Is it possible for the high level consumer to use a
+1
Topic deletion with 0.8.1.1 is extremely problematic. Coupled with the fact
that rebalance/broker-membership changes pay a per-partition cost today, so
that excessive partitions extend downtime in the case of a failure, this
means fewer topics (e.g. hundreds or thousands) is a best
On Fri, Aug 29, 2014 at 10:09 AM, Jonathan Weeks jonathanbwe...@gmail.com
wrote:
Thanks, Jay. Follow-up questions:
Some of our services will produce and consume. Is there consumer code on
trunk that is backwards compatible with an existing 0.8.1.1 broker cluster?
If not 0.8.1.1
I am interested in this very topic as well. Also, can the trunk version of the
producer be used with an existing 0.8.1.1 broker installation, or does one need
to wait for 0.8.2 (at least)?
Thanks,
-Jonathan
On Aug 26, 2014, at 12:35 PM, Ryan Persaud ryan_pers...@symantec.com wrote:
Hello,
I hand-applied this patch https://reviews.apache.org/r/23895/diff/ to the kafka
0.8.1.1 branch and was able to build successfully:
gradlew -PscalaVersion=2.11.2 -PscalaCompileOptions.useAnt=false releaseTarGz -x signArchives
I am testing the jar now, and will let you
+1 on a 0.8.1.2 release with support for Scala 2.11.x.
-Jonathan
On Aug 22, 2014, at 11:19 AM, Joe Stein joe.st...@stealth.ly wrote:
The changes are committed to trunk. We didn't create the patch for 0.8.1.1
since there were code changes required and we dropped support for Scala 2.8
(so
One tactic that might be worth exploring is to rely on the message key to
facilitate this.
It would require carefully engineering the key functions so that keys hash to
the desired partitions for your topic(s). It would also mean that your
consumers for the topic would be evaluating the key and
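One way to sketch this tactic (hypothetical helper names; assumes the producer maps a key to a partition by hashing modulo the partition count): agree out of band on which key routes where, and search for a key that lands on the intended partition.

```scala
// Hypothetical sketch: agree on a key -> partition mapping out of band.
object KeyRouting {
  // The assumed routing rule: hash the key modulo the partition count.
  def route(key: String, numPartitions: Int): Int =
    java.lang.Math.floorMod(key.hashCode, numPartitions)

  // Brute-force a key of the form "<prefix>-<n>" that routes to `target`,
  // so producers can pin a logical stream to a known partition.
  def keyFor(prefix: String, target: Int, numPartitions: Int): String =
    Iterator.from(0).map(n => s"$prefix-$n")
      .find(k => route(k, numPartitions) == target).get
}
```

Consumers can then apply the same `route` function to each message key to know which logical stream it belongs to.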
You can look at something like:
https://github.com/harelba/tail2kafka
(although I don’t know what the effort would be to update it, as it doesn’t
look like it has been updated in a couple years)
We are using flume to gather logs, and then sending them to a kafka cluster via
a flume kafka
Howdy,
I was wondering if it would be possible to update the release plan:
https://cwiki.apache.org/confluence/display/KAFKA/Future+release+plan
aligned with the feature roadmap:
https://cwiki.apache.org/confluence/display/KAFKA/Index
We have several active projects and are planning to