Are you looking for a Maven repo?
You can always check out the sources from http://kafka.apache.org/code.html
and build it yourself.
2013/11/8 Liu, Raymond raymond@intel.com:
If I want to use kafka_2.10 0.8.0-beta1, which repo should I go to? It seems
the Apache repo doesn't have it. While there are …
0.8.0 is in the process of being released, and when that is done the Scala 2.10
build will be in Maven Central.
Until then, after checking out the Kafka source (as Victor just said), you can run
./sbt ++2.10 publish-local
You will be prompted to sign the jars, which you can do with a PGP key, or
remove the PGP …
The SimpleKafkaETLJob class, as mentioned in the post.
Thanks
Abhi
From Samsung Galaxy S4
On Nov 7, 2013 8:34 PM, Jun Rao jun...@gmail.com wrote:
Which class is not found?
Thanks,
Jun
On Thu, Nov 7, 2013 at 11:56 AM, Abhi Basu 9000r...@gmail.com wrote:
Let me describe my environment.
Thanks. Good to know that it will be in the formal 0.8.0 release soon.
I am really looking for the public Maven repo for porting Spark onto it
instead of running a local version ;) probably I can make do with the beta1
one for now.
Best Regards,
Raymond Liu
-Original Message-
From: Joe Stein
Thanks for your reply, Joel.
Regards,
Libo
-Original Message-
From: Joel Koshy [mailto:jjkosh...@gmail.com]
Sent: Thursday, November 07, 2013 5:00 PM
To: users@kafka.apache.org
Subject: Re: add partition tool in 0.8
kafka-add-partitions.sh is in 0.8 but not in 0.8-beta1. Therefore …
I read it and tried to understand it. It would be great to add a summary
at the beginning about what it is and how it may impact a user.
Regards,
Libo
-Original Message-
From: Joel Koshy [mailto:jjkosh...@gmail.com]
Sent: Friday, November 08, 2013 2:01 AM
To: users@kafka.apache.org
Thanks for the feedback. It is true that I never mention anything about the
impact on users, or the fact that this is mostly internal business in Kafka.
I will try to rephrase some of this.
Marc
On Nov 8, 2013 10:10 AM, Yu, Libo libo...@citi.com wrote:
I read it and tried to understand it. It would be great to …
Hi,
We have a cluster of Kafka servers. We want the data of all topics on these
servers to be compressed. Is there some configuration to achieve this?
I was able to compress data by using the compression.codec property in
ProducerConfig in the Kafka producer.
But I wanted to know if there is a way of …
Currently, the only way to send compressed data to Kafka is by enabling
compression on the producer side. To move compression to the server side, we
have https://issues.apache.org/jira/browse/KAFKA-595 filed.
Thanks,
Neha
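For reference, a minimal sketch of the producer-side settings discussed above (0.8-era property names; the topic names here are hypothetical):

```
# producer config fragment (hedged sketch)
compression.codec=gzip            # or snappy; none (the default) disables compression
compressed.topics=topicA,topicB   # optional: compress only these topics; empty means all
```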
On Fri, Nov 8, 2013 at 8:23 AM, arathi maddula arathimadd...@gmail.com wrote:
Can anyone help me with this issue? I feel like I am very close and am
probably making some silly config error.
Kafka team, please provide more detailed notes on how to make this
component work.
Thanks.
On Fri, Nov 8, 2013 at 5:23 AM, Abhi Basu 9000r...@gmail.com wrote:
SimpleKafkaETLJob
ClassNotFound means the Hadoop job is not able to find the related jar.
Have you made sure the related jars are registered in the distributed cache?
On Fri, Nov 8, 2013 at 8:40 AM, Abhi Basu 9000r...@gmail.com wrote:
Can anyone help me with this issue? I feel like I am very close and am
Hi Neha:
I was following the directions outlined here -
https://github.com/apache/kafka/tree/0.8/contrib/hadoop-consumer. It does
not mention anything about registering jars. Can you please provide more
details?
Thanks,
Abhi
On Fri, Nov 8, 2013 at 8:48 AM, Neha Narkhede
OK, sorry, I missed a step; I ran copy_jars.sh and am now retrying.
On Fri, Nov 8, 2013 at 8:55 AM, Abhi Basu 9000r...@gmail.com wrote:
Hi Neha:
I was following the directions outlined here -
https://github.com/apache/kafka/tree/0.8/contrib/hadoop-consumer. It does
not mention anything …
Still get the same error:
[root@idh251-0 hadoop-consumer]# ./run-class.sh
kafka.etl.impl.SimpleKafkaETLJob test/test.properties
Thanks Marc! I will also go through it and suggest some edits today.
Guozhang
On Fri, Nov 8, 2013 at 7:50 AM, Marc Labbe mrla...@gmail.com wrote:
Thx for the feedback. It is true I never mention anything about impact on
users or the fact this is mostly internal business in Kafka. I will try …
Hello,
I am using the beta right now.
I'm not sure if it's GC or something else at this point. To be honest, I've
never really fiddled with any GC settings before. The system can run for as
long as a day without failing, or as little as a few hours. The lack of a
pattern makes it a little harder to …
copy_jars.sh did not copy the hadoop-consumer and Kafka jars. I have copied
them to HDFS manually, but am still getting the same error.
Looks like I have all the required jars now:
[root@idh251-0 test]# hadoop fs -ls /tmp/kafka/lib
Warning: $HADOOP_HOME is deprecated.
Found 7 items
-rw-r--r-- 3 root …
Do you have write permissions in /kafka-log4j? Your logs should be
going there (at least per your log4j config) - and you may want to use
a different log4j config for your consumer so it doesn't collide with
the broker's.
I doubt the consumer thread dying issue is related to yours - again,
logs …
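As a sketch, a separate consumer-side log4j 1.x config along the lines suggested above might look like this (the file name and appender name are hypothetical; the directory matches the one discussed in the thread):

```
# consumer-log4j.properties (hypothetical name)
log4j.rootLogger=INFO, consumerFile
log4j.appender.consumerFile=org.apache.log4j.RollingFileAppender
log4j.appender.consumerFile.File=/kafka-log4j/consumer.log
log4j.appender.consumerFile.MaxFileSize=10MB
log4j.appender.consumerFile.layout=org.apache.log4j.PatternLayout
log4j.appender.consumerFile.layout.ConversionPattern=[%d] %p %m (%c)%n
```

Point the consumer JVM at it with -Dlog4j.configuration=file:consumer-log4j.properties so it does not pick up the broker's log4j.properties.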
Marc - thanks again for doing this. A couple of suggestions:
- I would suggest removing the disclaimer and email quotes, since this
can become a stand-alone clean document on what the purgatory is and
how it works.
- A diagram would be helpful - it could, say, show the watcher map and
the …
Thanks for the input. Yes, that directory is open to all users (rwx).
I don't think that the lack of logging is related to my consumer dying, but
it doesn't help when I am trying to debug with no logs.
I am struggling to find a reason behind this. I deployed the same code and
the same version of …
Hi guys, since Kafka is able to add a new broker into the cluster at runtime,
I'm wondering whether there is a way to add a new partition for a specific
topic at runtime? If not, what would you do if you wanted to add more
partitions to a topic? Thanks!
Hello,
Please check the add-partition tool:
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-5.AddPartitionTool
Guozhang
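From the wiki page above, the invocation looks roughly like this (an invocation sketch only, not runnable as-is; the topic name and ZooKeeper host are hypothetical, and the flag names follow the 0.8-era tool described on that page):

```shell
# Add 2 partitions to an existing topic (0.8 tool; not available in 0.8-beta1)
bin/kafka-add-partitions.sh \
  --topic my-topic \
  --partition 2 \
  --zookeeper zk1:2181
```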
On Fri, Nov 8, 2013 at 5:32 PM, hsy...@gmail.com hsy...@gmail.com wrote:
Hi guys, since kafka is able to add new broker into the …
It's in the branch, cool, I'll wait for its release. Actually, I find I can
use ./kafka-delete-topic.sh and ./kafka-create-topic.sh with the same topic
name and keep the broker running. It's interesting that delete-topic doesn't
actually remove the data from the brokers. So what I understand is, as long …
I mean, I assume the messages not yet consumed before delete-topic will be
delivered before you create the same topic, correct?
On Fri, Nov 8, 2013 at 6:30 PM, hsy...@gmail.com hsy...@gmail.com wrote:
It's in the branch, cool, I'll wait for its release. Actually, I find I
can use …
Delete topic doesn't work yet. We plan to fix it in trunk.
Thanks,
Jun
On Fri, Nov 8, 2013 at 6:30 PM, hsy...@gmail.com hsy...@gmail.com wrote:
It's in the branch, cool, I'll wait for it's release. actually I find I can
use ./kafka-delete-topic.sh and ./kafka-create-topic.sh with the same topic …