git pull --tags origin
git tag -l
Then you should see the following tag.
0.8.0-beta1
You can then check out the tag and see the commits.
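For example, in a clone of the Kafka repository, something like the following should show the commits on the tag (the tag name is the one listed above; the log depth is arbitrary):

git checkout 0.8.0-beta1
git log --oneline -n 20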
Thanks,
Jun
On Wed, Jul 3, 2013 at 3:02 PM, Yu, Libo wrote:
> Hi,
>
> I am trying to find out the main difference between the BETA1 and the 0.8
> ver
We can easily make a Camus configuration that would mimic the functionality
of the Hadoop consumer in contrib. It may require the addition of a
BinaryWritable decoder and a couple of minor code changes. As for the
producer, we don't have anything in Camus that does what it does. But
maybe we shoul
I guess I am more concerned about the long term than the short term. I
think if you guys want to have all the Hadoop+Kafka stuff then we should
move the producer there and it sounds like it would be possible to get
similar functionality from the existing consumer code. I am not in a rush; I
just wan
Hi,
I am trying to find out the main difference between the BETA1 and the 0.8
version we
are testing right now.
I have a question about the beta1 release notes. In the announcement it is
referred to as
a full change log. So I assume all changes since 0.7.2 are in the list. Is that
right?
If I
Nice catch--fixed.
-Jay
On Wed, Jul 3, 2013 at 5:42 AM, Markus Roder wrote:
> Hi all,
>
> I have noticed the following issue:
>
> On https://kafka.apache.org/introduction.html, in the "Getting
> started" section,
> the link to the design page is broken.
> Currently the link points to https://kafk
IMHO, I think Camus should probably be decoupled from Avro before the
simpler contribs are deleted.
We don't actually use the contribs, so I'm not saying this for our sake,
but it seems like the right thing to do to provide simple examples for this
type of stuff, no...?
--
Felix
On Wed, Jul 3,
Hi Nitin,
> We receive events from an external source (for example, Facebook status
> update events). These events are pushed to a Kafka queue when received. There
> is a possibility of a duplicate event (multiple Facebook status update events
> for the same account in quick intervals) coming again and get
Libo, if you set num.partitions on the target brokers and set topic
auto creation to true, then you don't need to create the topic
beforehand.
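As a rough sketch, the relevant settings on each target broker would look something like the following (property names as I understand them for 0.8; the partition count is just a placeholder):

# server.properties on each target broker
num.partitions=8
auto.create.topics.enable=true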
On Wed, Jul 3, 2013 at 9:15 AM, Yu, Libo wrote:
> Thanks, Jun. Previously I created the topic with the specified number of
> partitions on the destination
I haven't had the chance to try JMXTrans, although I'm planning to (and
would like to hear feedback if there is any).
That being said, they seem to focus on performance.
From their wiki (https://github.com/jmxtrans/jmxtrans/wiki):
"The JmxTransformer engine is fully multithreaded. You specify the maximum
Thanks, Jun. Previously I created the topic with the specified number of
partitions on the destination
(target) cluster before I launched mirrormaker to do mirroring. But the
mirrormaker only downloaded
data for some of the partitions, and in some cases, it stopped randomly. I will
test this scen
I've also used jolokia, http://jolokia.org/, though it can get a little slow
to respond if you don't use it right. I've also rolled a JMX/HTTP 'data dumper'
from scratch (it can be done in a couple hundred lines of Java without too
much issue)...
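For illustration only, here is a minimal sketch of the JMX-polling side of such a dumper (this is not Dave's code; the service URL, port, and the kafka*:* object-name pattern are assumptions you would adapt):

import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxDump {
    public static void main(String[] args) throws Exception {
        // Assumed JMX service URL; point it at the broker's JMX port.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Dump every readable attribute of every MBean whose domain starts with "kafka".
            for (ObjectName name : mbs.queryNames(new ObjectName("kafka*:*"), null)) {
                for (MBeanAttributeInfo attr : mbs.getMBeanInfo(name).getAttributes()) {
                    try {
                        System.out.println(name + " " + attr.getName() + " = "
                                + mbs.getAttribute(name, attr.getName()));
                    } catch (Exception e) {
                        // Some attributes are not readable over JMX; skip them.
                    }
                }
            }
        } finally {
            connector.close();
        }
    }
}

Exposing the same output over HTTP is then just a matter of wrapping this in a small servlet or HttpServer handler.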
--
Dave DeMaagd
ddema...@linkedin.com | 818 262 7958
(c
You can take a look at
https://cwiki.apache.org/confluence/display/KAFKA/Keyed+Messages+Proposal
Thanks,
Jun
On Wed, Jul 3, 2013 at 7:18 AM, S Ahmed wrote:
> When you guys refactored the offset to be human-friendly, i.e. numerical
> versus the byte offset, what was involved in that refactor?
Yes, but you can change num.partitions in the target brokers.
Thanks,
Jun
On Wed, Jul 3, 2013 at 6:21 AM, Yu, Libo wrote:
> For a topic with N partitions on the source cluster, if I let mirrormaker
> create the topic
> automatically, the same topic on the destination cluster will have only
> o
I believe something like https://github.com/jmxtrans/jmxtrans could solve
this without the need to implement 3rd party monitoring integrations.
On 7/2/13 12:57 AM, "Maxime Brugidou" wrote:
>By the way, having an official contrib package with graphite, ganglia and
>other well-known reporters woul
Good morning Joel. The bug has been filed
https://issues.apache.org/jira/browse/KAFKA-958
On Mon, Jul 1, 2013 at 1:27 PM, Joel Koshy wrote:
> Also, there are several key metrics on the broker and client side - we
> should compile a list and put it on a wiki. Can you file a jira for
> this?
>
>
When you guys refactored the offset to be human-friendly, i.e. numerical
versus the byte offset, what was involved in that refactor? Is there a
wiki page for that?
I'm guessing there was an index file that was created for this, or is this
currently managed in zookeeper?
This wiki is related to th
For a topic with N partitions on the source cluster, if I let mirrormaker
create the topic
automatically, the same topic on the destination cluster will have only one
partition.
Is that expected behavior? Could you just give a yes/no answer? Thanks.
Regards,
Libo
-Original Message-
F
Hi all,
I have noticed the following issue:
On https://kafka.apache.org/introduction.html, in the "Getting started" section,
the link to the design page is broken.
Currently the link points to https://kafka.apache.org/08/design.html but it
should be https://kafka.apache.org/design.html
regards
2013
If the Hadoop consumer/producer use case remains relevant for Kafka
(I assume it will), it would make sense to have the core components (a Kafka
input/output format at least) as part of Kafka so that they could be built,
tested and versioned together to maintain compatibility.
This would also make
Hello-
We receive events from an external source (for example, Facebook status
update events). These events are pushed to a Kafka queue when received. There
is a possibility of a duplicate event (multiple Facebook status update events
for the same account in quick intervals) coming again and getting pushed int
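One common way to handle this, sketched below in plain Java (nothing Kafka-specific; the class and method names are made up for illustration), is to suppress events for the same account that arrive within a short window before they are produced to the queue:

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: remembers when each account was last seen and drops
// events for the same account that arrive again within a short window.
public class DuplicateSuppressor {
    private final long windowMs;
    private final Map<String, Long> lastSeen;

    public DuplicateSuppressor(long windowMs, final int maxEntries) {
        this.windowMs = windowMs;
        // Access-ordered LinkedHashMap acting as an LRU cache so memory stays bounded.
        this.lastSeen = new LinkedHashMap<String, Long>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
                return size() > maxEntries;
            }
        };
    }

    // Returns true if the event should be forwarded, false if it looks like a recent duplicate.
    public synchronized boolean shouldForward(String accountId, long eventTimeMs) {
        Long previous = lastSeen.get(accountId);
        lastSeen.put(accountId, eventTimeMs);
        return previous == null || eventTimeMs - previous > windowMs;
    }
}

Whether this runs before producing or in the downstream consumer depends on where duplicates actually hurt; either way it only filters duplicates inside the chosen window.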
Jay,
What is the difference between this project and Camus? What are the advantages of using it
for loading log entries from Kafka into Hadoop?
Vadim
Sent from my iPhone
On Jul 2, 2013, at 5:01 PM, Jay Kreps wrote:
> We currently have a contrib package for consuming and producing messages
> from mapredu
Thanks Jun.
Guodong
On Wed, Jul 3, 2013 at 12:53 PM, Jun Rao wrote:
> One of the issues is that in 0.7, the partitioning key is not stored on the
> broker. So, mirror maker won't know what key to use for partitioning.
>
> In 0.8, the partitioning key will be stored on the broker. Once kafka-95
Thanks Jun
On Tue, Jul 2, 2013 at 2:35 PM, Jun Rao wrote:
> In 0.8, there are more mbeans than 0.7.
>
> Thanks,
>
> Jun
>
>
> On Mon, Jul 1, 2013 at 9:06 PM, Hanish Bansal <
> hanish.bansal.agar...@gmail.com> wrote:
>
> > Okay, I'll try the same.
> >
> > Also want to know that in kafka-0.7 there