Yeah, well we have many producers and only a few consumers. I don't expect
the producers of a given topic to uniformly migrate at the same time, so
we'll have duplicate consumer versions. I'll know the migration is
complete when the old consumer version stops receiving any new messages...
We don't have an official code-name. You may want to try something like kafka-ng
(next generation).
Thanks,
Jun
On Wed, May 1, 2013 at 8:36 PM, Jason Rosenberg wrote:
> Hi,
>
> I'm wondering if there's a code-name or useful descriptor for the
> technology changes introduced in 0.8. Since I'll need
Partition is different from replicas. A topic can have one or more
partitions and each partition can have one or more replicas. A consumer
consumes data at the partition level. In other words, a consumer gets the same
data no matter how many replicas there are.
When you say the consumer only gets half
There is one leader per partition. For details on how to use the 0.8 api,
please see http://kafka.apache.org/08/api.html
Thanks,
Jun
On Wed, May 1, 2013 at 7:11 PM, Rob Withers wrote:
> with topic auto-creation (and no previous topic) and a replication factor
> of 2 results in 4 partitions.
Jason,
During the migration, the only thing to watch out for is that the producers
of a particular topic don't upgrade to 0.8 before the consumers do so. You
can let applications upgrade when they can to respect the above
requirement. If there are fewer applications producing to and consuming from
So, we have lots of apps producing messages to our kafka 0.7.2 instances
(and multiple consumers of the data).
We are not going to be able to follow the suggested migration path, where
we first migrate all data, then move all producers to use 0.8, etc.
Instead, many apps are on their own release c
Hi,
I'm wondering if there's a code-name or useful descriptor for the
technology changes introduced in 0.8. Since I'll need to deploy a separate
set of servers/consumers while we transition our running apps to use the
new library for producing, I am looking for a descriptive name to call the
serve
Running a consumer group (createStreams()), pointed at ZooKeeper with the
topic and 1 consumer thread, results in only half the messages
being consumed. The topic was auto-created, with a replication factor of 2,
but the producer was configured to produce to 2 brokers and so 4 partitions
With topic auto-creation (and no previous topic), a replication factor of
2 results in 4 partitions. They are numbered uniquely and sequentially, but
there are two leaders? Is that right? Should we only write to one broker? Does
it have to be the leader or will the producer get flipped by the zk?
Hi Neha,
For the "connection at creation time" issue, I only hit it with the sync
producer. I didn't observe it with the async producer, but I haven't tested
that yet, so I guess I'd see similar issues there.
I didn't keep the stacktrace as it happened some time ago, but basically,
calling "new Produce
The following is a sample encoder in Java.

class StringEncoder implements Encoder<String> {
  private String encoding;
  public StringEncoder(VerifiableProperties props) {
    if (props == null)
      encoding = "UTF8";
    else
      encoding = props.getString("serializer.encoding", "UTF8");
  }
  public byte[] toBytes(String s) {
    try { return s.getBytes(encoding); }
    catch (java.io.UnsupportedEncodingException e) { throw new RuntimeException(e); }
  }
}
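For completeness, here is a matching decoder sketch. Note this is a shape
illustration, not drop-in code: the `SimpleDecoder` interface and the
plain-String constructor below are simplified stand-ins for Kafka 0.8's
`kafka.serializer.Decoder` / `VerifiableProperties` API, chosen so the
example compiles on its own.

```java
import java.io.UnsupportedEncodingException;

// Stand-in for kafka.serializer.Decoder<T> (assumption: simplified so the
// example is self-contained without the Kafka jars).
interface SimpleDecoder<T> {
    T fromBytes(byte[] bytes);
}

// Mirrors the StringEncoder above: turns bytes back into a String using a
// configurable character encoding, defaulting to UTF8 when none is given.
class SimpleStringDecoder implements SimpleDecoder<String> {
    private final String encoding;

    SimpleStringDecoder(String encoding) {
        this.encoding = (encoding == null) ? "UTF8" : encoding;
    }

    public String fromBytes(byte[] bytes) {
        try {
            return new String(bytes, encoding);
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e);
        }
    }
}

public class DecoderDemo {
    public static void main(String[] args) throws Exception {
        SimpleStringDecoder decoder = new SimpleStringDecoder(null);
        byte[] wire = "hello kafka".getBytes("UTF8");
        System.out.println(decoder.fromBytes(wire)); // prints "hello kafka"
    }
}
```

In the real API you would register the decoder when creating consumer
streams, so encoder and decoder must agree on the encoding property.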
Hi Jun
I've added #1 and #2.
I'll need to think about where to put #3, maybe even adding a 'tips and
tricks' section?
I've not had to do any encoder/decoders. Can anyone else offer some example
code I can incorporate into an example?
Thanks,
Chris
On Wed, May 1, 2013 at 11:45 AM, Jun Rao wrote:
Chris,
Thanks. This is very helpful. I linked your wiki pages to our website. A
few more comments:
1. Producer: The details of the meaning of request.required.acks are
described in http://kafka.apache.org/08/configuration.html. It would be
great if you can add a link to the description in your wi
I've tested my examples with the new (4/30) release and they work, so I've
updated the documentation.
Thanks,
Chris
On Mon, Apr 29, 2013 at 6:18 PM, Jun Rao wrote:
> Thanks. I also updated your producer example to reflect a recent config
> change (broker.list => metadata.broker.list).
>
> Jun
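To make that rename concrete, here is a minimal sketch of 0.8-style producer
properties. The broker host/port values are placeholders (not from this
thread); the property names are the 0.8 ones discussed above.

```java
import java.util.Properties;

public class ProducerProps {
    // Builds a minimal 0.8-era producer configuration.
    public static Properties baseProps() {
        Properties props = new Properties();
        // 0.8 renamed broker.list to metadata.broker.list (placeholder hosts).
        props.put("metadata.broker.list", "broker1:9092,broker2:9092");
        // Use a string encoder like the sample shown earlier in the thread.
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // request.required.acks semantics are covered in the 0.8 config docs.
        props.put("request.required.acks", "1");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(baseProps().getProperty("metadata.broker.list"));
    }
}
```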