Hi Peter,
That seems to be an issue. Do you want to file a JIRA on this?
Thanks,
Liquan
On Thu, May 12, 2016 at 5:01 PM, Peter Davis wrote:
> I'm having an issue with a Connect source worker running as a standalone
> herder embedded in a webapp. The webapp hangs on shutdown.
>
> MemoryOffsetBackingStore creates an executor but doesn't call
> ExecutorService.shutdown() on stop().
I'm having an issue with a Connect source worker running as a standalone
herder embedded in a webapp. The webapp hangs on shutdown.
MemoryOffsetBackingStore creates an executor but doesn't call
ExecutorService.shutdown() on stop().
I'd like to contribute a pull request. Question in the meantime:
One use case is implementing a data retention policy.
-Peter
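A minimal sketch of the shutdown pattern Peter describes as missing (plain JDK; the class and field names here are illustrative stand-ins, not Kafka's actual code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative stand-in for a store that owns a background executor and
// must release it on stop(), so the JVM's non-daemon threads can exit.
public class OffsetStoreShutdownSketch {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public void stop() {
        executor.shutdown(); // stop accepting new tasks
        try {
            // give in-flight tasks a bounded window to finish
            if (!executor.awaitTermination(30, TimeUnit.SECONDS)) {
                executor.shutdownNow(); // force-cancel stragglers
            }
        } catch (InterruptedException e) {
            executor.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }

    public boolean isStopped() {
        return executor.isShutdown();
    }

    public static void main(String[] args) {
        OffsetStoreShutdownSketch store = new OffsetStoreShutdownSketch();
        store.stop();
        System.out.println(store.isStopped()); // prints "true"
    }
}
```

Without the `shutdown()` call, the executor's worker thread is non-daemon and keeps the webapp's JVM alive after the container undeploys.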
> On May 12, 2016, at 17:11, Guozhang Wang wrote:
>
> Wesley,
>
> Could you describe your use case a bit more to motivate this? Is your data
> source expiring records, and hence you want to auto-"delete" the
> corresponding Kafka records as well?
Wesley,
Could you describe your use case a bit more to motivate this? Is your data
source expiring records, and hence you want to auto-"delete" the
corresponding Kafka records as well?
Guozhang
On Thu, May 12, 2016 at 2:35 PM, Wesley Chow wrote:
> Right, I’m trying to avoid explicitly managing TTLs.
Hi Mayuresh,
You need to enable client authentication by setting `ssl.client.auth` to
`required` or `requested` (I suggest the former).
Ismael
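Concretely, the broker-side setting would look something like this (keystore/truststore paths and passwords are placeholders):

```properties
ssl.client.auth=required
ssl.keystore.location=/path/to/broker.keystore.jks
ssl.keystore.password=...
ssl.truststore.location=/path/to/broker.truststore.jks
ssl.truststore.password=...
```

With `required`, the broker rejects connections from clients that don't present a certificate; with `requested`, it merely asks for one.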
On Thu, May 12, 2016 at 10:35 PM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:
> Hi, I am trying to establish an SSL connection from KafkaProducer and send
> a certificate to the Kafka broker.
Hi, I am trying to establish an SSL connection from KafkaProducer and send
a certificate to the Kafka broker.
I deployed my Kafka broker locally, listening on two ports:
listeners = PLAINTEXT://:9092,SSL://:16637
My Kafka broker SSL configs look like this:
ssl.protocol = TLS
ssl.trustmanager.algorithm
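For comparison, a typical producer-side SSL config looks something like this (paths and passwords are placeholders; the keystore entries only matter when the broker sets `ssl.client.auth` to `required` or `requested`):

```properties
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=...
# client certificate, presented only if the broker requests it
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=...
ssl.key.password=...
```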
Right, I’m trying to avoid explicitly managing TTLs. It’s nice being able to
just produce keys into Kafka without having an accompanying vacuum consumer.
Wes
> On May 12, 2016, at 5:15 PM, Benjamin Manns wrote:
>
> If you send a NULL value for a key in a compacted log, the key will be
> removed once compaction runs.
Hi Tom,
This is puzzling because, as you said, not much has changed in the TLS code
since 0.9.0.1. A JIRA sounds good. I was going to ask if you could test the
commit before/after KAFKA-3025, but I see that Gwen has already done that.
:)
Ismael
On Thu, May 12, 2016 at 9:26 PM, Tom Crayford wrote:
I know it is a big ask, but can you try bisecting?
For example, test before/after on commits:
* 45c8195 KAFKA-3025; Added timestamp to Message and use relative offset.
* 5b375d7 KAFKA-3149; Extend SASL implementation to support more mechanisms
* 69d9a66 KAFKA-3618; Handle ApiVersionsRequest before SASL handshake
If you send a NULL value for a key in a compacted log, the key will be
removed once compaction runs. You could run a process that reprocesses the
log and sends a NULL to the keys you want to purge, based on some custom
logic.
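As a toy model of the compaction semantics (plain Java, no Kafka involved): the latest value per key wins, and a null value acts as a tombstone that deletes the key.

```java
import java.util.AbstractMap;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompactionTombstoneSketch {
    // Toy model of a compacted topic: keep only the latest value per key;
    // a null value (tombstone) deletes the key when "compaction" runs.
    static Map<String, String> compact(List<? extends Map.Entry<String, String>> records) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (Map.Entry<String, String> r : records) {
            if (r.getValue() == null) {
                latest.remove(r.getKey()); // tombstone: purge the key
            } else {
                latest.put(r.getKey(), r.getValue());
            }
        }
        return latest;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, String>> log = Arrays.asList(
            new AbstractMap.SimpleEntry<>("user1", "a"),
            new AbstractMap.SimpleEntry<>("user2", "b"),
            new AbstractMap.SimpleEntry<>("user1", null)   // purge user1
        );
        System.out.println(compact(log)); // prints {user2=b}
    }
}
```

The real broker additionally keeps tombstones around for a configurable window (`delete.retention.ms`) so that lagging consumers can observe the delete.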
On Thu, May 12, 2016 at 2:01 PM, Wesley Chow wrote:
> Are there any thoughts on supporting TTLs on keys in compacted logs?
Hi Experts,
I have just downloaded the latest Confluent Platform 2.0.1 and started
playing with it, setting up a single broker instance, a ZooKeeper instance,
and a Schema Registry instance.
When I start ZooKeeper it's fine, but when I start my Kafka server I get a
lot of these...
[2016-05-12 11:16:23,343] INFO Accepted socket connection
Are there any thoughts on supporting TTLs on keys in compacted logs? In
other words, some way to set, on a per-key basis, a time after which the key
is automatically deleted.
Wes
Just to confirm:
You tested both versions with plain text and saw no performance drop?
On Thu, May 12, 2016 at 1:26 PM, Tom Crayford wrote:
> We've started running our usual suite of performance tests against Kafka
> 0.10.0.0 RC. These tests orchestrate multiple consumer/producer machines to
> run a fairly normal mixed workload of producers and consumers
Yep, confirmed.
On Thu, May 12, 2016 at 9:37 PM, Gwen Shapira wrote:
> Just to confirm:
> You tested both versions with plain text and saw no performance drop?
>
>
> On Thu, May 12, 2016 at 1:26 PM, Tom Crayford
> wrote:
> > We've started running our usual suite of performance tests against Kafka
We've started running our usual suite of performance tests against Kafka
0.10.0.0 RC. These tests orchestrate multiple consumer/producer machines to
run a fairly normal mixed workload of producers and consumers (each
producer/consumer is just an instance of Kafka's built-in producer/consumer
perf tests).
Hi,
Can I consume data in batches from Kafka using the old high-level consumer?
Is the new consumer API production-ready?
I also found this suspicious log snippet that might be relevant. The task
executed by thread 134 is the one that won't receive messages:
INFO Attempt to heart beat failed since the group is rebalancing, try to
re-join group.
(org.apache.kafka.clients.consumer.internals.AbstractCoordinator:633)
[201
Hi Michal,
There is no authentication on the PLAINTEXT port, but authorization still
happens, and the principal will always be `KafkaPrincipal.ANONYMOUS` in that
case.
Ismael
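For reference, the broker-side properties involved in this kind of dual-listener setup would look something like this (0.9-era property names; values are illustrative):

```properties
listeners=PLAINTEXT://:9092,SASL_PLAINTEXT://:9094
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# clients on the PLAINTEXT port map to User:ANONYMOUS, which needs
# explicit ACLs unless this is set to true
allow.everyone.if.no.acl.found=false
```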
On Thu, May 12, 2016 at 11:18 AM, Michał Kabocik
wrote:
> Dears,
>
> I have a three node cluster of Kafka 0.9 with two listeners configured:
> PLAINTEXT on 9092 and SASL_PLAINTEXT on 9094.
Dears,
I have a three node cluster of Kafka 0.9 with two listeners configured:
PLAINTEXT on 9092 and SASL_PLAINTEXT on 9094.
I successfully configured Kerberos + ACL and I’m able to produce messages
(using kafka_console_producer) to port 9094.
But when I try to produce to the PLAINTEXT port with