Sounds nice!
I'm discussing with a customer how to create a fully anonymized stream for
future analytical purposes.
The remaining question is which anonymization algorithm/strategy maintains
statistical relevance while being resilient against brute force.
Thoughts?
-wim
On Thu, 23 Nov 2017 at 19:03 Sc
With the `zookeeper-shell.sh` script, I checked the `/brokers/ids` path; it
showed only the id of the broker that was unaffected.
On Fri, Nov 24, 2017 at 12:49 PM, Kamal
wrote:
> Hi Kafka Users,
>
> In our production cluster, we have faced the below error in 2 out of 3
> brokers. After th
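For context: each live broker registers an ephemeral znode under /brokers/ids, so an id missing from that list means that broker's ZooKeeper registration was lost. A minimal sketch of the same check done with the ZooKeeper Java client (the connection string and timeout below are placeholders):

```java
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class BrokerIdCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder ZooKeeper connection string; adjust to your ensemble.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 30_000, event -> { });
        try {
            // Each live broker owns an ephemeral znode named after its broker.id.
            List<String> ids = zk.getChildren("/brokers/ids", false);
            System.out.println("Registered broker ids: " + ids);
        } finally {
            zk.close();
        }
    }
}
```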
Hi Kafka Users,
In our production cluster, we have faced the below error in 2 out of 3
brokers. After this error, the ISR is not updated and we are not able to create
new topics, as the replication factor is higher than the number of available
brokers.
The session between Kafka and ZooKeeper expired. Duri
Thanks, Brett. We will work on writing a client to do this for us, then.
Regards,
Ali
On Thu, Nov 23, 2017 at 11:02 PM, Brett Rann
wrote:
> Ah apologies.
>
> Found the KIP where the one I suggested was added:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 122%3A+Add+Reset+Consumer+G
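A rough sketch of what such an offset-reset client could look like against a pre-KIP-122 cluster, assuming the spout's offsets live in Kafka consumer groups (the newer storm-kafka-client commits there; the old ZooKeeper-based spout would need its ZK nodes edited instead). Broker list, group id and topic below are placeholders, and the real consumers should be stopped while this runs:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

public class OffsetResetClient {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "spout-group");           // group whose offsets we reset
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Manually assign every partition of the topic; assign() does not
            // join the group protocol, so no rebalance is involved.
            List<TopicPartition> partitions = new ArrayList<>();
            for (PartitionInfo p : consumer.partitionsFor("my-topic")) {    // placeholder topic
                partitions.add(new TopicPartition(p.topic(), p.partition()));
            }
            consumer.assign(partitions);

            // Point every partition at the earliest available offset and
            // commit those positions on behalf of the group.
            consumer.seekToBeginning(partitions);
            Map<TopicPartition, OffsetAndMetadata> newOffsets = new HashMap<>();
            for (TopicPartition tp : partitions) {
                newOffsets.put(tp, new OffsetAndMetadata(consumer.position(tp)));
            }
            consumer.commitSync(newOffsets);
        }
    }
}
```

Running something like this once while the consumers are stopped, then restarting them, should have them pick up from the beginning of the topic.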
If you use "consumer groups", it is ensured that a single partition is
processed by only one consumer (and one consumer can get multiple partitions
assigned).
Thus, this works out of the box and is easier to manage than
parallelizing record processing within the consumer. Also, this does not
work if you ne
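To make the consumer-group behaviour above concrete, a minimal, hypothetical sketch (broker list, group id and topic are made up): every instance started with the same group.id receives a disjoint subset of the topic's partitions, visible via the rebalance listener.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class GroupMember {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");      // same id on every instance
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The listener shows which partitions this instance owns; across all
            // running instances, every partition appears exactly once.
            consumer.subscribe(Collections.singletonList("orders"), new ConsumerRebalanceListener() { // placeholder topic
                public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                    System.out.println("assigned: " + parts);
                }
                public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                    System.out.println("revoked: " + parts);
                }
            });
            while (true) {
                for (ConsumerRecord<String, String> r : consumer.poll(1000)) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            r.partition(), r.offset(), r.value());
                }
            }
        }
    }
}
```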
You might want to consider using the reset tool instead of just changing
the application.id...
https://www.confluent.io/blog/data-reprocessing-with-kafka-streams-resetting-a-streams-application/
-Matthias
On 11/23/17 3:34 AM, Artur Mrozowski wrote:
> Oh, I've got it. Need to reset the applicati
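For anyone reading along, a small sketch of how the two reset pieces fit together (application id, servers and topics are placeholders): KafkaStreams#cleanUp() only wipes the instance's local state directory, while the application reset tool described in the linked blog post additionally rewinds the committed input offsets and removes the application's internal topics, so the app restarts from scratch without needing a new application.id.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsResetSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");   // placeholder id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");  // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");                   // placeholder topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        // Step 1 (broker side): run the application reset tool from the linked
        // blog post to rewind committed offsets and delete internal topics.
        // Step 2 (local side): cleanUp() wipes this instance's state directory;
        // it may only be called before start() or after close().
        streams.cleanUp();
        streams.start();
    }
}
```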
On 2017-11-22 23:15, "Matthias J. Sax" wrote:
> A KafkaConsumer itself should be used single-threaded. If you want to
> parallelize processing, each thread should have its own KafkaConsumer
> instance and all consumers should use the same `group.id` in their
> configuration. Load will be shared
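A minimal sketch of that thread-per-consumer pattern (broker list, group id and topic are assumptions): three threads, each owning its own KafkaConsumer, all joined to the same group so the partitions are split between them.

```java
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// One KafkaConsumer per thread, never shared between threads; all three use
// the same group.id, so the topic's partitions are divided among the threads.
public class ThreadPerConsumer {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 3; i++) {
            pool.submit(() -> {
                Properties props = new Properties();
                props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
                props.put(ConsumerConfig.GROUP_ID_CONFIG, "shared-group");          // same for all threads
                props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                        "org.apache.kafka.common.serialization.StringDeserializer");
                props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                        "org.apache.kafka.common.serialization.StringDeserializer");
                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                    consumer.subscribe(Collections.singletonList("events"));        // placeholder topic
                    while (!Thread.currentThread().isInterrupted()) {
                        for (ConsumerRecord<String, String> r : consumer.poll(100)) {
                            System.out.printf("%s got %s-%d@%d%n", Thread.currentThread().getName(),
                                    r.topic(), r.partition(), r.offset());
                        }
                    }
                }
            });
        }
        // Sketch only: a real application would also shut the pool down cleanly.
    }
}
```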
Anyone here?
On Wed, Nov 22, 2017 at 4:04 PM, Raghav wrote:
> Hi
>
> If I give several locations with smaller capacity for log.dirs vs. one
> large drive for log.dirs, are there any PROS or CONS to either approach
> (assuming the total storage is the same in both cases)?
>
> I don't have access to one driv
Our legal department's interpretation is that when an account is deleted, any
data that is kept longer than K days must be deleted. We set up our un-redacted
Kafka topics to never retain data longer than K days. This simplifies the
problem greatly.
Our solution is designed to limit the ability of services to see
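The "never retain data longer than K days" part corresponds to the topic-level retention.ms setting. A hedged sketch with the Java AdminClient (topic name, bootstrap servers and K below are placeholders; incrementalAlterConfigs needs a reasonably recent client and broker):

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RetentionSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder

        long k = 30; // placeholder for "K days"
        String retentionMs = Long.toString(TimeUnit.DAYS.toMillis(k));

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "unredacted-events"); // placeholder topic
            AlterConfigOp setRetention =
                    new AlterConfigOp(new ConfigEntry("retention.ms", retentionMs), AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> updates =
                    Collections.singletonMap(topic, Collections.singletonList(setRetention));
            admin.incrementalAlterConfigs(updates).all().get(); // block until the brokers acknowledge
        }
    }
}
```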
I think the best way to implement this is via envelope encryption: your
system manages a key-encryption key (KEK), which is used to encrypt per-user/customer
data-encryption keys (DEKs), which in turn are used to encrypt the
user's/customer's data.
If the user/customer walks away, simply drop the DEK. His da
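A minimal, illustrative sketch of that scheme using plain JCE (AES-GCM throughout; the KMS, key storage and rotation are deliberately left out). The point is only that the stored record is encrypted with the per-customer DEK, the DEK is stored only in KEK-wrapped form, and dropping the wrapped DEK makes the ciphertext unrecoverable:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class EnvelopeSketch {
    private static final SecureRandom RNG = new SecureRandom();

    static SecretKey newAesKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        return kg.generateKey();
    }

    // AES-GCM encrypt; the random 12-byte IV is prepended to the ciphertext.
    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        RNG.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    public static void main(String[] args) throws Exception {
        SecretKey kek = newAesKey(); // key-encryption key, held by the key service
        SecretKey dek = newAesKey(); // data-encryption key, one per user/customer

        // What actually gets stored: the record encrypted with the DEK,
        // plus the DEK itself encrypted ("wrapped") with the KEK.
        byte[] encryptedRecord = encrypt(dek, "some customer payload".getBytes(StandardCharsets.UTF_8));
        byte[] wrappedDek = encrypt(kek, dek.getEncoded());

        // "Right to be forgotten": delete the wrapped DEK and the record above
        // can no longer be decrypted, even though its bytes still exist.
        wrappedDek = null;
        System.out.println("record bytes kept: " + encryptedRecord.length + ", DEK dropped");
    }
}
```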
Ah apologies.
Found the KIP where the one I suggested was added:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-122%3A+Add+Reset+Consumer+Group+Offsets+tooling
And from the discussion thread link, a couple in to the thread:
Currently the only way for an admin to successfully override offs
Oh, I've got it. Need to reset the application id.
On Thu, Nov 23, 2017 at 12:19 PM, Artur Mrozowski wrote:
> Hi,
> I am running a Kafka Streams application and want to read everything from
> the topic from the beginning.
> So I renamed all the stores and set (ConsumerConfig.AUTO_OFFSET_
> RESE
Hi,
I am running a Kafka Streams application and want to read everything from
the topic from the beginning.
So I renamed all the stores and set
(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"),
but it does not take effect. Nothing happens.
As soon as I put in new messages, the processing and
Unfortunately, it doesn't have that option in this version of Kafka!
On Thu, Nov 23, 2017 at 9:02 PM, Brett Rann
wrote:
> I don't know about kafka-storm spout, but you could try using
> the kafka-consumer-groups.sh cli to reset the offset. It has
> a --reset-offsets option.
>
> On Thu, Nov 23, 2
ok..got it..thanks Manikumar...😀
On Thu, Nov 23, 2017 at 2:58 PM, Manikumar
wrote:
> topic-level config "compression.type" takes precedence over the producer
> compression codec.
> By default, topic-level config "compression.type" is "producer", which means
> to retain the original compression codec se
I don't know about kafka-storm spout, but you could try using
the kafka-consumer-groups.sh cli to reset the offset. It has
a --reset-offsets option.
On Thu, Nov 23, 2017 at 7:02 PM, Ali Nazemian wrote:
> Hi All,
>
> I am using Kafka 0.10.0.2 and I am not able to upgrade my Kafka version. I
> hav
The topic-level config "compression.type" takes precedence over the producer
compression codec.
By default, the topic-level config "compression.type" is "producer", which means
the original compression codec set by the producer is retained.
If you set the topic-level config "compression.type" to any other compression
co
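To illustrate the interplay (broker list and topic are placeholders): the producer picks its own codec via compression.type, and the topic-level compression.type then decides whether the broker keeps that codec ("producer", the default) or recompresses the batches:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CompressionSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // Producer-side codec: batches are sent to the broker LZ4-compressed.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // If the topic's compression.type is "producer" (default), the broker
            // stores these batches as LZ4; if it is e.g. "gzip", it recompresses them.
            producer.send(new ProducerRecord<>("my-topic", "key", "value")); // placeholder topic
            producer.flush();
        }
    }
}
```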
Ok. So you mean: stop all producers, change the compression type for the topic at
runtime, switch the compression type for the producers, and have them start
again.
On Thu, Nov 23, 2017 at 12:46 PM, Manikumar
wrote:
> You can dynamically change topic level configs on brokers.
> http://kafka.apache.org/doc
Hi Scott and thanks for your reply.
From what you say, I guess that when you are asked to delete some user
data (that's the "right to be forgotten" in GDPR), what you are really
doing is blocking access to it. I had a similar approach, based on
Greg Young's idea of encrypting a
Hello Jakub,
nice to also see you on this list :-)
Thank you for your answer.
Kind Regards,
Andreas
On Wed, Nov 22, 2017 at 4:18 PM, Jakub Scholz wrote:
> Hi Andreas,
>
> What you are describing is basically one central cluster (managed by the
> central org.) and a set of "satellite" clusters
Hi All,
I am using Kafka 0.10.0.2 and I am not able to upgrade my Kafka version. I
have a situation where, after removing a Kafka topic, I am getting the
following error in the Kafka-Storm Spout client because the offset hasn't been
reset properly. I was wondering how I can reset the offset in the
new-con