ed in 0.10.1
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Tue, Mar 14, 2017 at 10:09 AM, Joe San wrote:
> Dear Kafka Users,
>
> What are the arguments against setting the retention policy on a Kafka
> topic to infinite?
JMS AVRO schema
that includes both the JMS metadata as well as the JMS message body (which can
be any of the JMS message types).
-hans
> On Mar 14, 2017, at 9:26 AM, Robert Quinlivan wrote:
>
> Did you look at the ConsumerRecord
> <https://kafka.apache.org/0100/javadoc/org/apach
have someone help you set it up, test it, and document the proper
failover and recovery procedures.
-hans
> On Mar 6, 2017, at 10:44 AM, Le Cyberian wrote:
>
> Thanks Han and Alexander for taking time out and your responses.
>
> I now understand the risks and the possible ou
other
messaging protocol with TLS on vs off.
If you wanted to encrypt each message independently before sending to Kafka
then zero copy would still be in effect and all the consumers would get the
same encrypted message (and have to understand how to decrypt it).
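A toy sketch of that client-side approach (stdlib-only, for illustration — NOT production crypto; a real deployment would use a vetted library). The point is that the broker only stores and fans out the ciphertext bytes, so zero copy still applies and every consumer fetches the identical encrypted payload:

```python
import hashlib
import hmac

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a keystream from HMAC-SHA256(key, nonce || counter).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # XOR stream cipher: toy construction for illustration only.
    ks = _keystream(key, nonce, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt  # XOR is its own inverse

key, nonce = b"shared-secret", b"msg-00000001"
ciphertext = encrypt(key, nonce, b"order:42")   # what the producer publishes
# Every consumer fetches the same stored ciphertext bytes from the broker:
assert decrypt(key, nonce, ciphertext) == b"order:42"
```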
-hans
> On Mar 6, 2017, at 5
which you
will only ever start up if a site goes down because otherwise 2 of 4 zookeeper
nodes is not a quorum. Again, you would be better off with 3 nodes because then
you would only have to do this in the site that has the single active node.
-hans
> On Mar 6, 2017, at 5:57 AM, Le Cyberian wr
ou
think a stretch cluster will work? That seems wrong.
-hans
On Mon, Mar 6, 2017 at 5:37 AM, Le Cyberian wrote:
> Hi Guys,
>
> Thank you very much for you reply.
>
> The
could fail.
So it SHOULD be an odd number of zookeeper nodes (not MUST).
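The quorum arithmetic behind that recommendation can be sketched as:

```python
def tolerable_failures(ensemble_size: int) -> int:
    # ZooKeeper needs a strict majority of the ensemble to form a quorum.
    quorum = ensemble_size // 2 + 1
    return ensemble_size - quorum

# An even-sized ensemble buys you nothing: 4 nodes tolerate no more
# failures than 3 nodes do, but add one more machine that can fail.
assert tolerable_failures(3) == 1
assert tolerable_failures(4) == 1
assert tolerable_failures(5) == 2
```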
-hans
> On Mar 6, 2017, at 12:20 AM, Jens Rantil wrote:
>
> Hi Hans,
>
>> On Mon, Mar 6, 2017 at 12:10 AM, Hans Jespersen wrote:
>>
>> A 4 node zookeeper ensemble will not even work.
partitions that were leaders on that node that
need to move to the remaining brokers and they will try to be balanced across
the remaining in-sync replicas on the remaining nodes.
-hans
Sent from my iPhone
> On Mar 5, 2017, at 1:46 PM, Le Cyberian wrote:
>
> Hi Guys,
>
> I
Contact the author via github if the readme is not clear.
-hans
> On Mar 1, 2017, at 7:01 AM, VIVEK KUMAR MISHRA 13BIT0066
> wrote:
>
> Hi All,
>
> I want to use kafka-connect-salesforce but i am not able to use it .
> can any one provide s
Maybe look at this Kafka source connector for salesforce
https://github.com/jcustenborder/kafka-connect-salesforce
-hans
> On Feb 27, 2017, at 4:06 PM, VIVEK KUMAR MISHRA 13BIT0066
> wrote:
>
> Actually my data sources are salesforce and mailchimp. i have
overs.
-hans
On Sun, Feb 26, 2017 at 5:44 PM, VIVEK KUMAR MISHRA 13BIT0066 <
vivekkumar.mishra2...@vit.ac.in> wrote:
> My question is can we create partitions in topic using an
would expect that once there is a reference implementation of the new Java
Admin client that other client libraries would implement similar interfaces.
In the meantime I’m afraid we just need to use the existing Java admin utils or
the CLI tools for just a little while longer.
-hans
> On
The __consumer_offsets topic should also get a tombstone message as soon as
a topic is deleted.
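A minimal model of how a tombstone (a record with a null value) removes a key from a compacted topic like __consumer_offsets — a simplified sketch, not the broker's actual compaction code:

```python
def compact(log):
    # Log compaction keeps only the latest record per key; a None value
    # is a tombstone that deletes the key once compaction runs.
    latest = {}
    for key, value in log:
        latest[key] = value
    return {k: v for k, v in latest.items() if v is not None}

# Committed offsets for a group, then the tombstone after topic deletion:
log = [("grp1/topicA/0", 42), ("grp1/topicA/0", 57), ("grp1/topicA/0", None)]
assert compact(log) == {}   # the tombstone wiped the offset entry
```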
-hans
On Wed, Feb 22, 2017 at 5:59 PM, Jun MA wrote:
> Hi Todd,
>
> Thank you so
You can't integrate 3.1.1 REST Proxy with a secure cluster because it uses the
old consumer API (hence zookeeper dependency). The 3.2 REST Proxy will allow
you to integrate with a secure cluster because it is updated with the latest
0.10.2 client.
-hans
> On Feb
sages.
-hans
> On Feb 8, 2017, at 8:17 AM, Manikumar wrote:
>
> Are you using new java consumer API? It is officially released as part of
> 0.9 release.
> 0.8.2.2 java consumer code may not be usable. You have to use old scala
> consumer API.
>
> O
” that
pre-packages only the jars and scripts needed to run Kafka Connect and not any
of the other Kafka components then the answer is unfortunately no at this time,
you need to download and install the entire Kafka distribution in order to get
the bits needed to run Kafka Connect.
-hans
else, it really
depends on target system. You have a lot of flexibility with where Connect runs
and in distributed mode it stores most data in Kafka anyway. Most connectors do
not use a lot of resources and often connectors run on machines shared with
other apps.
-hans
> On Feb 1, 2017, at 2
Today! Confluent 3.1.2 supports Kafka 0.10.1.1
https://www.confluent.io/blog/confluent-delivers-upgrades-clients-kafka-streams-brokers-apache-kafka-0-10-1-1/
-hans
>
but when it’s down, there is not a replica on any of the other brokers.
Try creating a new topic with replication-factor 3 and you should get better
availability in the event of one or even two broker failures.
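For example (assuming a 0.10-era cluster where the topic tool still talks to ZooKeeper; host, topic name, and partition count are placeholders):

```shell
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --topic my-topic --partitions 3 --replication-factor 3
```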
-hans
es in Kafka below)
-hans
> On Jan 18, 2017, at 11:17 PM, Paolo Patierno wrote:
>
> Yes I know so ... what's the value of the Offset field in the MessageSet when
> producer sends messages ?
>
> ____
> From: Hans Jespersen
> Sent: Wedne
Producer will not know the offset of the message(s) at the time they send to
the broker but they can receive that information back as confirmation of
successful publish.
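A simplified model of that handshake — not the wire protocol, just the idea that the broker's log append assigns the offset and hands it back in the produce response:

```python
class BrokerLog:
    # Toy stand-in for one partition's log: append assigns the next offset.
    def __init__(self):
        self.records = []

    def append(self, record) -> int:
        self.records.append(record)
        return len(self.records) - 1   # offset is assigned by the broker

log = BrokerLog()
# The producer does not know the offset when it sends...
ack_offset = log.append(b"event-1")
# ...but learns it from the acknowledgement:
assert ack_offset == 0
assert log.append(b"event-2") == 1
```

With a real client (e.g. kafka-python, an assumption about your stack) the same information comes back as the RecordMetadata returned by `producer.send(...).get()`.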
-hans
> On Jan 18, 2017, at 1:25 AM, Paolo Patierno wrote:
>
> Hi,
>
>
> reading about
latency guaranteed messaging --
Jiangjie (Becket) Qin (LinkedIn)
<https://www.youtube.com/watch?v=oQe7PpDDdzA> which might give you much
better context and understanding of what these parameters mean and how they
work.
-hans
is created your code won't automatically know to consume
from it.
-hans
> On Jan 6, 2017, at 4:42 PM, Pradeep Gollakota wrote:
>
> What I mean by "flapping" in this context is unnecessary rebalancing
> happening. The example I would give is what a Hadoop Datanode wo
rebalancing in off hours as the replication traffic can negatively
impact the production traffic. In the latest releases there is a feature to
throttle the replication traffic separately from client traffic.
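That throttle (KIP-73, 0.10.1+) is applied with dynamic broker configs, along these lines (broker id, rate, and ZooKeeper address are placeholders):

```shell
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type brokers --entity-name 0 \
  --add-config 'leader.replication.throttled.rate=10485760,follower.replication.throttled.rate=10485760'
```

Note the rate only takes effect on replicas listed in the topic-level throttled-replicas configs; `kafka-reassign-partitions.sh --throttle` sets those for you during a reassignment.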
-hans
> On Jan 6, 2017, at 12:23 PM, R . wrote:
>
> Hello, I have a 3nod
This sounds exactly as I would expect things to behave. If you consume from the
beginning I would think you would get all the messages but not if you consume
from the latest offset. You can separately tune the metadata refresh interval
if you want to miss fewer messages but that still won't get
Yes. Either using a Change Data Capture product like Oracle GoldenGate
Connector for Kafka or JDBC Source & Sink Kafka Connectors like the one
included with Confluent Open Source.
-hans
> On Dec 27, 2016, at 11:47 AM, Julious.Campbell
> wrote:
>
>
> Support
>
&
This is a recognized area for improvement and better version compatibility is
something that is being actively worked on. librdkafka clients already allow
for both forward and backward compatibility. Soon the java clients will be able
to do so as well.
-hans
> On Dec 24, 2016, at 12:26
No. All Java clients (including Streams) need to be the same version (or lower)
as the brokers they connect to.
-hans
> On Dec 23, 2016, at 1:03 AM, Sachin Mittal wrote:
>
> Is Kafka streams 0.10.2.0-SNAPSHOT compatible with 0.10.0.1 broker.
> I was facing broker connect issue
on these two nodes (like
zookeeper or anything else)?
-hans
> On Dec 23, 2016, at 2:08 AM, Herbert Fischer
> wrote:
>
> Hi,
>
> I have a two node Kafka cluster, and I'm catching some unusual "TCP
> retransmission" metrics from my monitoring. I did f
Newer Kafka clients (currently) do not work against older Kafka brokers/servers, so
you have no other option but to upgrade to a 0.10.1.0 or higher Kafka broker.
-hans
> On Dec 22, 2016, at 2:25 PM, Joanne Contact wrote:
>
> Hello I have a program which requires 0.10.1.0 streams API. T
documentID so the Kafka connector actually can
update multiple times without causing duplicates in ES.
-hans
> On Dec 21, 2016, at 6:12 PM, kant kodali wrote:
>
> Hi Hans,
>
> Thats a great answer compared to the paragraphs I read online! I am
> assuming you meant HDFS? what
no duplicates and
exactly once semantics.
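Consumer-side de-duplication over at-least-once delivery can be sketched as follows (assuming each message carries a unique id, which is your application's responsibility to attach):

```python
def process_effectively_once(messages, handler):
    # At-least-once delivery may redeliver a message; de-duplicating on a
    # unique message id turns that into effectively-once processing.
    seen = set()
    for msg_id, payload in messages:
        if msg_id in seen:
            continue            # duplicate redelivery, skip it
        seen.add(msg_id)
        handler(payload)

out = []
# Message id 1 is redelivered, but the handler only runs once for it:
process_effectively_once([(1, "a"), (2, "b"), (1, "a")], out.append)
assert out == ["a", "b"]
```

In a real system the `seen` set would live in durable storage committed atomically with the processing result, otherwise a crash between the two steps reintroduces duplicates.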
-hans
> On Dec 21, 2016, at 4:39 PM, kant kodali wrote:
>
> How does Kafka emulate exactly once processing currently? Does it require
> the producer to send at least once and consumer to de dupe?
>
> I did do my research but I fe
ns to remote Kafka Brokers.
I have personally used both of these node.js libraries in a number of my own
node Kafka applications and they are both up to date and stable.
-hans
> On Dec 16, 2016, at 7:01 PM, Chintan Bhatt
> wrote:
>
> Hi
> I want to give continuous output
a newer message of the same key.
-hans
On Thu, Dec 15, 2016 at 10:16 AM, Kenny Gorman wrote:
> A couple thoughts..
>
> - If you plan on fetching old messages in a non-contiguous ma
Also see https://github.com/confluentinc/kafka-rest-node for an example
JavaScript wrapper on the Confluent REST Proxy.
You definitely do not have to use Kafka Connect to pub/sub to Kafka via REST.
-hans
> On Dec 15, 2016, at 11:17 AM, Stevo Slavić wrote:
>
> https://g
Are you setting the group.id property to be the same on both consumers?
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
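The effect of group.id can be sketched with a simple round-robin assignment — a simplified model of what the group coordinator does, not its actual algorithm:

```python
def assign(partitions, members):
    # Spread the topic's partitions across the members of ONE consumer group.
    assignment = {m: [] for m in members}
    for i, p in enumerate(partitions):
        assignment[members[i % len(members)]].append(p)
    return assignment

partitions = [0, 1, 2, 3]
# Same group.id: the two consumers split the partitions (queue semantics).
assert assign(partitions, ["c1", "c2"]) == {"c1": [0, 2], "c2": [1, 3]}
# Different group.id values: each group independently receives every
# partition (pub/sub semantics).
assert assign(partitions, ["c1"]) == {"c1": [0, 1, 2, 3]}
```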
-hans
On Wed, Dec 7, 2016 at 12:36 PM
Hadoop cluster
-hans
> On Dec 6, 2016, at 3:25 AM, Aseem Bansal wrote:
>
> Hi
>
> Has anyone done a storage of Kafka JSON messages to deep storage like S3.
> We are looking to back up all of our raw Kafka JSON messages for
> Exploration. S3, HDFS, MongoDB come to mind ini
own native Kafka
Connector for each input protocol. There are over 150 Kafka Connectors already
built (search for "kafka-connect-*" in github) and see the following connector
landing page for more info on Kafka Connect
https://www.confluent.io/product/connectors/
-hans
The performance impact of upgrading and some settings you can use to
mitigate this impact when the majority of your clients are still 0.8.x are
documented on the Apache Kafka website
https://kafka.apache.org/documentation#upgrade_10_performance_impact
-hans
The default config handles messages up to 1MB so you should be fine.
-hans
> On Nov 22, 2016, at 4:00 AM, Felipe Santos wrote:
>
> I read on documentation that kafka is not optimized for big messages, what
> is considered a big message?
>
> For us the messages will be o
Publish lots of messages and measure in seconds or minutes. Otherwise you are
just benchmarking the initial SSL handshake setup time which should normally be
a one time overhead, not a per message overhead. If you just send one message
then of course SSL is much slower.
-hans
> On Nov
What is the difference using the bin/kafka-console-producer and
kafka-console-consumer as pub/sub clients?
see http://docs.confluent.io/3.1.0/kafka/ssl.html
-hans
On Thu, Nov 17, 2016 at 11
I believe that the new topics are picked up at the next metadata refresh
which is controlled by the metadata.max.age.ms parameter. The default value
is 300000 ms (which is 5 minutes).
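To pick new topics up sooner you can lower that value in the client configuration; a fragment, assuming a 60-second refresh is acceptable for your workload:

```properties
# client config: refresh cluster metadata every 60s
# (default is 300000 ms = 5 minutes)
metadata.max.age.ms=60000
```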
-hans
On
case this is no
longer a Kafka question and has become more of a distributed database
design question.
-hans
On Sun, Nov 6, 2016 at 7:08 PM, kant kodali wrote:
> Hi Hans,
>
> Th
and operating system are you using to build this
system? You have to give us more information if you want specific
recommendations.
-hans
> On Nov 6, 2016, at 2:54 PM, kant kodali wrote:
>
> Hi! Thanks. any pointers on how to do that?
>
> On Sun, Nov 6, 2016 at 2:32 PM, Tauzell
way to get you the functionality you want?
-hans
> On Nov 5, 2016, at 4:31 PM, kant kodali wrote:
>
> I am new to Kafka and reading this statement "write consumer 1 and consumer
> 2 to share a common external offset storage" I can interpret it many ways
> but my
There is no built in mechanism to do this in Apache Kafka but if you can write
consumer 1 and consumer 2 to share a common external offset storage then you
may be able to build the functionality you seek.
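A minimal sketch of that idea, with a plain dict standing in for the shared external store (in practice it would be a database or similar shared by both consumers):

```python
class SharedOffsetStore:
    # External offset storage shared by cooperating consumers.
    def __init__(self):
        self.offsets = {}

    def get(self, topic_partition):
        return self.offsets.get(topic_partition, 0)

    def commit(self, topic_partition, offset):
        # Only ever move forward, even if the two consumers race.
        self.offsets[topic_partition] = max(self.get(topic_partition), offset)

store = SharedOffsetStore()
store.commit(("t", 0), 5)   # consumer 1 commits after processing offset 5
store.commit(("t", 0), 3)   # consumer 2's stale commit is ignored
assert store.get(("t", 0)) == 5
```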
-hans
> On Nov 5, 2016, at 3:55 PM, kant kodali wrote:
>
> Sorry there
The 0.10.1 broker will use more file descriptors than previous releases
because of the new timestamp indexes. You should expect and plan for ~33%
more file descriptors to be open.
-hans
On
Just make sure they are not in the same consumer group by creating a unique
value for group.id for each independent consumer.
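One way to guarantee that is to generate the group id, e.g. (a sketch; the config keys follow the standard client property names):

```python
import uuid

def independent_consumer_config(bootstrap_servers: str) -> dict:
    # A unique group.id per consumer means each one receives ALL messages
    # (pub/sub fan-out) instead of sharing them within a single group.
    return {
        "bootstrap.servers": bootstrap_servers,
        "group.id": f"standalone-{uuid.uuid4()}",
    }

a = independent_consumer_config("localhost:9092")
b = independent_consumer_config("localhost:9092")
assert a["group.id"] != b["group.id"]
```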
On Mon, Oct 31, 2016 at 9:42 AM, Patrick Viet
wrote:
>
Are you willing to have a maximum throughput of 6.67 messages per second?
-hans
On Fri, Oct 28, 2016 at 9:07 AM, Mudit Agarwal wrote:
> Hi Hans,
>
> The latency between my
necessary because the offset numbers for a
given message will not match in both datacenters.
-hans
> On Oct 28, 2016, at 8:08 AM, Mudit Agarwal wrote:
>
> Thanks dave.
> Any ways for how we can achieve HA/Failover in kafka across two DC?
> Thanks,Mudit
>
> From: "Ta
linux-linuxfoundationx-lfs101x-0
-hans
On Mon, Oct 24, 2016 at 1:50 AM, Gourab Chowdhury
wrote:
> Thanks for the reply, I tried changing the data directory as follows:-
> dataDir=/data/zook
-hans
On Mon, Oct 24, 2016 at 7:01 AM, Demian Calcaprina
wrote:
> Hi Guys,
>
> Is there a way to remove a kafka topic from the java api?
>
> I have the following scenar
Yes.
//h...@confluent.io
Original message From: ZHU Hua B
Date: 10/24/16 12:09 AM (GMT-08:00) To:
users@kafka.apache.org Subject: RE: Mirror multi-embedded consumer's
configuration
Hi,
Many thanks for your confirm!
I have another question, if I deleted a mirrored topic on
You are going to lose everything you store in /tmp. In a production system
you never configure Kafka or zookeeper to store critical data in /tmp.
This has nothing to do with AWS or EBS; it is just standard Linux behavior that
everything under /tmp is deleted when Linux reboots.
-hans
Yes. See the description of quotas.
https://kafka.apache.org/documentation#design_quotas
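Client quotas are set per client id with the configs tool, along these lines (client name, byte rates, and ZooKeeper address are placeholders):

```shell
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type clients --entity-name my-client \
  --add-config 'producer_byte_rate=1048576,consumer_byte_rate=2097152'
```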
-hans
On Thu, Oct 20, 2016 at 3:20 PM, Adrienne Kole
wrote:
> Hi,
>
> Is there a way to
-1489 <https://issues.apache.org/jira/browse/KAFKA-1489> and KIP-61
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-61%3A+Add+a+log+retention+parameter+for+maximum+disk+space+usage+percentage>
for
more detail.
-hans
Because the producer-property option is used to set other properties that are
not compression type.
//h...@confluent.io
Original message From: ZHU Hua B
Date: 10/16/16 11:20 PM (GMT-08:00) To:
Radoslaw Gruchalski , users@kafka.apache.org Subject: RE:
A question about kafka
Watch this talk. Kafka will not lose messages if configured correctly.
http://www.confluent.io/kafka-summit-2016-ops-when-it-absolutely-positively-has-to-be-there
-hans
> On Oct 13, 2016
7:07.500]AxThreadId 23516 ->ID:4495 offset: 81][ID
date: 2016-09-28 20:07:39.000 ]
On Sun, Oct 9, 2016 at 1:31 PM, Hans Jespersen wrote:
> Then publish with the user ID as the key and all messages for the same key
> will be guaranteed to go to the same partition and ther
sumed the
way I sent, then my analytics will go haywire.
Abhi
On Sun, Oct 9, 2016 at 12:50 PM, Hans Jespersen wrote:
> You don't even have to do that because the default partitioner will spread
> the data you publish to the topic over the available partitions for you.
> Just try it
y are automatically
distributed out over the available partitions.
//h...@confluent.io
Original message From: Abhit Kalsotra
Date: 10/8/16 11:19 PM (GMT-08:00) To: users@kafka.apache.org Subject: Re:
Regarding Kafka
Hans
Thanks for the response, yeah you can say yeah I am tre
n be a bit more tricky if you are using keys but it doesn't sound like
you are if today you are publishing to topics the way you describe.
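For reference, the keyed routing works roughly like this (Kafka's real default partitioner hashes the key bytes with murmur2; CRC32 below is just a deterministic stand-in for illustration):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # The same key always hashes to the same partition, which is what
    # gives you per-key ordering.
    return zlib.crc32(key) % num_partitions

p1 = partition_for(b"user-42", 6)
p2 = partition_for(b"user-42", 6)
assert p1 == p2                      # all of user-42's events land together
assert 0 <= partition_for(b"user-7", 6) < 6
```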
-hans
> On Oct 8, 2016, at 9:01 PM, Abhit Kalsotra wrote:
>
> Guys any views ?
>
> Abhi
>
>> On Sat, Oct 8, 2016 at 4:28
ll is any consumer has finished processing
a message by checking the offsets for that client. For the older clients these
are stored in Zookeeper but for the new consumers (0.9+) they are in a special
kafka topic dedicated to storing client offsets.
-hans
> On Oct 7, 2016, at 1:34
why the 0.10.1 docs are hard to find.
-hans
On Tue, Oct 4, 2016 at 11:42 PM, Gaurav Shaha wrote:
> Hi,
>
> I want to use kafka new consumer. But in the documentation of 0.10.0
>
at 3:54 AM, Sloot, Hans-Peter
> wrote:
>
> Hi,
>
> I wrote a small python script to consume messages from kafka.
>
> The consumer is defined as follows:
> kafka = KafkaConsumer('my-replicated-topic',
> metadata_broker_list=['localhost
lf the number of messages and the
other the rest.
How can I arrange this?
Regards Hans-Peter
This e-mail and the documents attached are confidential and intended solely for
the addressee; it may also be privileged. If you receive this e-mail in error,
please notify the sender immediately an