We tried to create our own decoder. We put it in a separate jar and ran
Camus with the hadoop -libjars arg:
hadoop jar camus-etl-kafka-jar-with-dependencies
com.linkedIn.camus.CamusJob -libjars mydecoder-jar-with-dependencies.jar -P
camus.properties
Then it gave me the following error:
Caused
Hi All,
Before Kafka 0.9 release is available, is there an immediate security
solution that we can leverage?
I've come across https://github.com/relango/kafka/tree/kafka_security and
the IP address filter patch from Kafka 0.8.3, which does not yet have a set
release date.
Thanks,
Connie
Please add my email to the mailing list.
SP Naidu | Digital River | Director, Data Technologies Engineering
p: +1 952 225 3421 | m: +1 763 203 1675 |
sna...@digitalriver.com |
http://www.digitalriver.com/
10380 Bren Road West, Minnetonka, MN 55343, USA
Thanks Jay. Any idea, when this feature can be expected? Are there any patches
for it?
Thanks
Arun
-----Original Message-----
From: Jay Kreps [mailto:jay.kr...@gmail.com]
Sent: Tuesday, February 17, 2015 10:16 PM
To: users@kafka.apache.org
Subject: Re: Producer duplicates
There are some
Hi!
I'm using the new KafkaProducer in 0.8.2.0.
I have thousands of Nodes which receive messages. Each message
idempotently mutates the state of the Node, so while duplicate messages are
fine, missed messages are not.
I'm writing these messages into a topic with dozens of partitions.
Am I
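The "missed messages" concern above usually comes down to the producer's acknowledgement and retry settings. Below is a minimal sketch of a configuration biased toward at-least-once delivery (duplicates possible, acknowledged sends not silently lost); the broker address is a placeholder, while `acks` and `retries` are standard new-producer config names:

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Build a configuration biased toward at-least-once delivery.
    static Properties atLeastOnceConfig(String brokers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);
        // Wait for all in-sync replicas to acknowledge each write.
        props.put("acks", "all");
        // Retry transient send failures instead of dropping the record.
        props.put("retries", "3");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        Properties p = atLeastOnceConfig("192.168.241.128:9092");
        System.out.println(p.getProperty("acks")); // prints all
    }
}
```

With retries enabled the producer may resend an already-delivered batch, which matches the idempotent-mutation design described above: duplicates are harmless, losses are not.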
Telnet seems to be able to connect from the Mac to the VM and from the VM to
the VM:
From Mac to VM:
Richards-MacBook-Air:kafka_2.10-0.8.2.0 rick$ telnet 192.168.241.128 9092
Trying 192.168.241.128...
Connected to 192.168.241.128.
Escape character is '^]'.
From VM to VM:
Time to debug Kafka then :)
Does the topic you are producing to exist? (you can check with
kafka-topics tool)
If not, do you have auto-creation enabled?
Which version are you on? Is it possible you ran into KAFKA-1738?
On Tue, Feb 17, 2015 at 10:08 PM, Richard Spillane r...@defend7.com
Sorry, that was my mistake while writing the e-mail. I actually use the IP
address of the appropriate machine instead of localhost (in this case
192.168.241.128). I can ssh just fine into the kafka machine, and other
services (e.g., HTTP or MySQL) work fine as well.
On Feb 17, 2015, at 9:09
What happens when you telnet to port 9092? Try it from both your Mac and
the Ubuntu VM.
On Tue, Feb 17, 2015 at 9:26 PM, Richard Spillane r...@defend7.com wrote:
I checked iptables and all rules are set to forward, so nothing should be
blocked in the VM example. In the container example the
So I would like to have two machines: one running zookeeper and a single kafka
node and another machine running a producer. I want to use the basic commands
mentioned in the Quick Start guide to do this. However, I keep getting
connection closed exceptions in the producer.
This is what I do:
The producer machine lists 'localhost:9092' for the Kafka connection? They're
on two different machines, aren't they?
-----Original Message-----
From: Gwen Shapira [gshap...@cloudera.com]
Received: Tuesday, 17 Feb 2015, 8:57PM
To: users@kafka.apache.org [users@kafka.apache.org]
Subject: Re:
Hi,
I'm trying to replicate a broker shutdown in unit tests. I've got a simple
cluster running with 2 brokers (and one ZK). I'm successfully able to
create a topic with a single partition and replication factor of 2.
I'd like to test shutting down the current leader for the partition and
make
I checked iptables and all rules are set to forward, so nothing should be
blocked in the VM example. In the container example the port is explicitly
EXPOSEd and other ports in a similar range (e.g., 8080) can be accessed just
fine.
On Feb 17, 2015, at 8:56 PM, Gwen Shapira
Is it possible that you have iptables on the Ubuntu where you run your
broker?
Try disabling iptables and see if it fixes the issue.
On Tue, Feb 17, 2015 at 8:47 PM, Richard Spillane r...@defend7.com wrote:
So I would like to have two machines: one running zookeeper and a single
kafka node
We're running 0.8.2 at the moment, and now I think I understand the concept
of consumer groups and how to see their offsets.
It does appear that consumer groups periodically get deleted (not sure why).
My question is, what's the general lifecycle of a consumer group? I would
assume they hang
Hi Chris,
In 0.8.2, the simple consumer Java API supports committing/fetching
offsets that are stored in ZooKeeper. You don't need to issue any
ConsumerMetadataRequest for this. Unfortunately, the API currently
does not support fetching offsets that are stored in Kafka.
Thanks,
Joel
On Mon,
Are you on 0.8.1.1 or higher, and are you using the kafka-topics tool to
see which leaders are on which brokers?
Did you also wait for more than 15 seconds to see if the leader election
takes place?
Are there any errors in the controller log?
On Sat, Feb 14, 2015 at 9:35 AM, nitin sharma
I think I found my answer reading the 0.8.2 release notes about the changes
to consumer groups; basically they are triples (group, topic, partition)
for the key. That's all I needed to know.
Thanks!
(but got a different follow up question coming!)
On Tue Feb 17 2015 at 9:34:29 AM Todd Palino
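The (group, topic, partition) triple described above can be modeled as a simple map key. The class below is a hypothetical illustration of why the triple works as a key (two groups track independent offsets for the same partition), not Kafka's internal representation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class OffsetKeySketch {
    // Hypothetical model of the (group, topic, partition) offset key;
    // field and class names are illustrative, not Kafka's.
    static final class GroupTopicPartition {
        final String group; final String topic; final int partition;
        GroupTopicPartition(String group, String topic, int partition) {
            this.group = group; this.topic = topic; this.partition = partition;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof GroupTopicPartition)) return false;
            GroupTopicPartition k = (GroupTopicPartition) o;
            return partition == k.partition
                    && group.equals(k.group) && topic.equals(k.topic);
        }
        @Override public int hashCode() {
            return Objects.hash(group, topic, partition);
        }
    }

    public static void main(String[] args) {
        Map<GroupTopicPartition, Long> offsets = new HashMap<>();
        // Two groups keep independent offsets for the same partition.
        offsets.put(new GroupTopicPartition("group-a", "events", 0), 42L);
        offsets.put(new GroupTopicPartition("group-b", "events", 0), 7L);
        System.out.println(offsets.size()); // prints 2
    }
}
```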
Thanks, Todd. That will work.
On Tue, Feb 17, 2015 at 10:31 PM, Todd Palino tpal...@gmail.com wrote:
In order to do that, you'll need to run it and parse the output, and then
emit it to your metrics system of choice. This is essentially what I do - I
have a monitoring application which runs
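A monitoring wrapper like the one described would parse each output line and extract the lag column before emitting it as a metric. The sketch below assumes a hypothetical whitespace-separated layout (group, topic, partition, offset, logSize, lag); it is not the exact output format of any Kafka tool:

```java
public class OffsetLineParser {
    // Parse one whitespace-separated output line into a lag value.
    // The column layout is an assumption for illustration only.
    static long lagFromLine(String line) {
        String[] cols = line.trim().split("\\s+");
        // cols: group, topic, partition, offset, logSize, lag
        return Long.parseLong(cols[5]);
    }

    public static void main(String[] args) {
        String sample = "mygroup mytopic 0 1500 1750 250";
        System.out.println(lagFromLine(sample)); // prints 250
    }
}
```

In a real wrapper you would check the actual column headers of the tool's output first, since the layout varies between versions.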
I'm assuming from your description here that all of these topics are being
consumed by a single consumer (i.e. a single process that does something
different with each topic it sees). In general, you're going to get more
efficiency out of a single consumer instance that consumes multiple topics
hi Neha,
I am using the 0.8.1.1 version and used kafka-topics to create the topics,
then later used the reassignment tool to split the partitions across the
required brokers.
After the restart, I have seen all the partitions remain on the active server
and not migrate back to the restarted node. I
Hi Jey,
I have debugged further and it looks like it was a problem with my test case.
After calling the send method for a specific time, my main thread was
finishing, and I was not waiting for the *kafka-producer-network-thread* to
send all the messages from the buffer, hence my test case was failing.
I
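The failure mode described above (the main thread exiting before the background sender drains its buffer) can be illustrated without Kafka at all. In this sketch the sender thread and STOP marker are purely illustrative, and `sender.join()` stands in for `KafkaProducer.close()`:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class DrainBeforeExit {
    // A background "network" thread drains a buffer; if the caller returns
    // before the drain completes, buffered messages are silently lost.
    static int sendAll(int n) throws InterruptedException {
        BlockingQueue<String> buffer = new LinkedBlockingQueue<>();
        AtomicInteger sent = new AtomicInteger();
        Thread sender = new Thread(() -> {
            try {
                while (!buffer.take().equals("STOP")) {
                    sent.incrementAndGet(); // pretend to send over the network
                }
            } catch (InterruptedException ignored) { }
        });
        sender.start();
        for (int i = 0; i < n; i++) buffer.put("msg-" + i);
        buffer.put("STOP");
        // Without this wait, some of the n messages may never be sent.
        sender.join();
        return sent.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sendAll(100)); // prints 100
    }
}
```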