Hi all,
We are seeing the following behavior; let me know if it is expected or a
configuration error.
I have Apache Kafka running on three servers over the TLS protocol. They are
clustered at the ZooKeeper level.
*Behaviour*:
1. *Unable to run only one instance* - when 2 out of 3 servers or instances
go dow
Kafka topic names are case-sensitive.
On Tue, Aug 22, 2017 at 5:11 AM, Dominique De Vito
wrote:
> Hi,
>
> Just a short question (I was quite surprised not to find it in the Kafka
> FAQ, or in the Kafka book...).
>
> Are Kafka topic names case-sensitive or not?
>
> Thanks.
>
> Regards,
Hi,
Just a short question (I was quite surprised not to find it in the Kafka
FAQ, or in the Kafka book...).
Are Kafka topic names case-sensitive or not?
Thanks.
Regards,
Dominique
At Twilio we do this often; it is how we upgrade and deploy. We use the
partition reassignment tool to move partitions off the old node and onto
the new node. Hopefully you are using a 0.10.x cluster and have access
to the replication limiter, so the reassignment doesn't steal all the
bandwidth on
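For reference, a sketch of what that reassignment-with-throttle flow can look like. The broker ids, file names, and throttle value below are made up for illustration, and the `--throttle` flag is only available from 0.10.1 onwards:

```shell
# Generate a reassignment plan moving the listed topics onto the new brokers
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics.json --broker-list "4,5,6" --generate

# Execute it with a replication throttle (bytes/sec) so the data move
# does not saturate the network
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --execute --throttle 50000000

# Verify completion -- this also removes the throttle once done
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --verify
```

Remember to run `--verify` at the end; leaving the throttle in place slows normal replication.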
Is mirror maker something you can utilise?
On 21 Aug 2017 4:03 pm, "Nomar Morado" wrote:
Hi
My brokers are currently installed on servers that are end of life.
What is the recommended way of migrating them over to new servers?
Thanks
Printing e-mails wastes valuable natural resources. Please don't print this
message unless it is absolutely necessary. Thank you for thinking green
Hi
My brokers are currently installed on servers that are end of life.
What is the recommended way of migrating them over to new servers?
Thanks
Printing e-mails wastes valuable natural resources. Please don't print this
message unless it is absolutely necessary. Thank you for thinking green
Hi Victor,
The KTable abstraction is mainly for maintaining a keyed collection of
facts that continuously evolves from its updates, not a concrete
data structure in the Streams DSL.
For your case, I think it may be easier expressed in the lower-level API
with the StateStoreSupplier, wh
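To make that concrete, here is a rough sketch of what the lower-level approach could look like on the 0.11-era Processor API: an ordered in-memory `TreeMap` kept next to a persistent `KeyValueStore` for fault tolerance. All names here (`TreeProcessor`, `"tree-store"`) are illustrative, not from the thread, and this is a sketch rather than a definitive implementation:

```java
import java.util.TreeMap;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class TreeProcessor implements Processor<String, String> {
    // Ordered in-memory structure; the KeyValueStore backs it so the
    // contents survive a restart via the changelog topic
    private final TreeMap<String, String> tree = new TreeMap<>();
    private KeyValueStore<String, String> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        store = (KeyValueStore<String, String>) context.getStateStore("tree-store");
        // Rebuild the ordered view from the fault-tolerant store on startup
        try (KeyValueIterator<String, String> it = store.all()) {
            while (it.hasNext()) {
                KeyValue<String, String> kv = it.next();
                tree.put(kv.key, kv.value);
            }
        }
    }

    @Override
    public void process(String key, String value) {
        tree.put(key, value);   // keep the ordered view current
        store.put(key, value);  // persist for recovery
    }

    @Override
    public void punctuate(long timestamp) {}

    @Override
    public void close() {}
}
```

The store would be registered on the topology via a `StateStoreSupplier` and wired to this processor with `addStateStore(...)`; navigation methods (e.g. range scans over the tree) would be exposed however the application needs.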
Hi,
I am trying to build Kafka from source code, but I get the error below when
I try to build the project (I have used the `gradle idea` command). When I
try to click on the import statements, they end up opening in the test
folder of the client package and not the main package.
Any help would be appreciated.
/User
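For what it's worth, the build steps documented in the Kafka source README are roughly the following (note the Gradle task is `idea`, not `.idea`):

```shell
# From a fresh checkout of the Kafka source tree
gradle           # run once to bootstrap the Gradle wrapper
./gradlew jar    # build the jars
./gradlew idea   # generate IntelliJ IDEA project files
```

If the imports still resolve to the test folder after regenerating the project files, re-importing the project in IDEA usually clears stale module settings.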
Hi,
I would like to build a structure similar to a KTable, but instead of
structuring this data structure as a table I would like to structure
it, say, as a binary tree. Are there any standard approaches for this?
Is this way of using KStreams supported in the first place?
Thanks,
Victor.
Searching the codebase I found only one usage of the
`ProcessorTopologyTestDriver` with `getKeyValueStore()` (or
`getStateStore()`) [0], and that usage only reads from the store. Perhaps
the JavaDoc suggests something that cannot actually be done?
The obvious workaround is to get data into the store vi
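If driving records through the topology is acceptable, a sketch of that workaround might look like the fragment below. Note that `ProcessorTopologyTestDriver` is an internal test class, and I am recalling its `process(...)` shape from the 0.11-era sources, so treat the exact signatures as assumptions; the topic and store names (`"input-topic"`, `"my-store"`) are illustrative:

```java
// Workaround sketch: populate the store by piping a record through the
// topology rather than calling put() on the store directly.
driver.process("input-topic", "some-key", "some-value",
               new StringSerializer(), new StringSerializer());

// The store now reflects whatever the topology did with that record
KeyValueStore<String, String> store = driver.getKeyValueStore("my-store");
// For a simple pass-through topology, the value should now be readable:
assert "some-value".equals(store.get("some-key"));
```

This sidesteps the question of whether `put()` on the returned store is supported at all, since the store is only ever written by the topology itself.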
I am trying to `put()` to a KeyValueStore that I got from
ProcessorTopologyTestDriver#getKeyValueStore() as part of setup for a test.
The JavaDoc endorses this use-case:
* This is often useful in test cases to pre-populate the store before
the test case instructs the topology to
* {@link
Hi,
At my company we are currently planning to create a Data Lake. As we have
also started to use Kafka as our Event Store, and therefore to implement
some Event Sourcing on it, we are wondering if it would be a good idea to
use the same approach to create a Data Lake.
So, one of the ideas in our mind
Hi,
I just saw an example; does producer.initTransactions() take care of this
part?
I am also wondering whether transactions are thread-safe as long as I keep
begin and commit local to a thread.
Please enlighten.
-Sameer.
On Fri, Aug 18, 2017 at 3:22 PM, Sameer Kumar
wrote:
> Hi,
>
> I have a question o
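Not speaking for the original poster's setup, but a minimal sketch of the transactional flow (Kafka 0.11+) is below. While `KafkaProducer` is generally thread-safe, a producer instance can only have one transaction open at a time, so the usual pattern is one producer per thread, each with its own `transactional.id` (the topic and id names here are illustrative):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("transactional.id", "txn-producer-1"); // unique per producer instance
props.put("key.serializer",
          "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
          "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.initTransactions();          // called once; fences older instances
try {
    producer.beginTransaction();
    producer.send(new ProducerRecord<>("my-topic", "key", "value"));
    producer.commitTransaction();
} catch (Exception e) {
    producer.abortTransaction();      // roll back on failure
} finally {
    producer.close();
}
```

So yes, `initTransactions()` handles registering the transactional id with the broker, but begin/commit must not be interleaved across threads on the same producer instance.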
`through` = `to` + `stream` operation. That is why the consumer-groups
command is showing the "fname-stream" topic.
Use `to` if you just want to write the output to the topic.
-- Kamal
On Mon, Aug 21, 2017 at 12:05 PM, Sachin Mittal wrote:
> Folks, any thoughts on this?
> Basically I want to know on what
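A sketch of the difference on the 0.11-era Streams DSL, with illustrative topic names and serdes: `through()` writes to the topic and returns a new KStream consuming it back, while `to()` is write-only and ends the branch.

```java
KStream<String, String> mapped = input.mapValues(v -> v.toUpperCase());

// through(): materialize to "fname-stream" AND keep processing downstream --
// this re-consumption is why the topic shows up in consumer-groups output
mapped.through(Serdes.String(), Serdes.String(), "fname-stream")
      .foreach((k, v) -> System.out.println(k + "=" + v));

// to(): just write the records out; nothing consumes them on our side
mapped.to(Serdes.String(), Serdes.String(), "fname-out");
```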
attached log file
On Mon, Aug 21, 2017 at 11:06 AM, Elyahou Ittah
wrote:
> I am consuming from kafka using KafkaSpout of Storm and also in ruby using
> ruby-kafka gem (both use new consumer API).
>
> I noticed that after a rolling restart of the Kafka cluster, the
> KafkaSpout reconsumed all kaf
I am consuming from kafka using KafkaSpout of Storm and also in ruby using
ruby-kafka gem (both use new consumer API).
I noticed that after a rolling restart of the Kafka cluster, the KafkaSpout
reconsumed all Kafka messages, ignoring the committed offsets...
What can cause this behavior?
Attach
Short answer - you cannot. The existing data is not reprocessed, since
Kafka itself has no knowledge of how you did your partitioning.
The normal workaround is to stop producers and consumers, create a new
topic with the desired number of partitions, consume the old topic from the
beginning and w
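A quick way to sketch that replay with the stock console tools (topic names and counts here are hypothetical). Note the console tools drop keys unless you set the `print.key`/`parse.key` properties, and without keys the new partitioning would be meaningless:

```shell
# Create the new topic with the desired partition count
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic orders-v2 --partitions 12 --replication-factor 3

# Replay the old topic into the new one, preserving keys
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic orders --from-beginning \
  --property print.key=true --property key.separator=$'\t' | \
bin/kafka-console-producer.sh --broker-list localhost:9092 \
  --topic orders-v2 \
  --property parse.key=true --property key.separator=$'\t'
```

For anything beyond a one-off migration, a small dedicated consumer/producer application is more robust than piping the console tools.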
Hi Damian,
I've checked the global table and found that there is no data in the table,
here is my code to check:
val view: ReadOnlyKeyValueStore[String, UserData] =
jvnStream.store("userdata", QueryableStoreTypes.keyValueStore[String,
UserData]())
view.all().foreach((kv) => kv.toString)
And the
Yes I understand that.
The streams application takes care of that
when I do:
input
    .map(new KeyValueMapper<K, V, KeyValue<K, V>>() {
        public KeyValue<K, V> apply(K key, V value) {
            ..
            return new KeyValue<>(new_key, new_value);
        }
    }).through(k_serde, v_se