Hi,
I have multiple high-level Streams DSL topologies to execute.
The first reads from a source topic, processes the data, and sends it
to a sink.
The second again reads from the same source topic, processes the data, and
sends it to a different topic.
For now these two operations are independent.
Now
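A minimal sketch of what this could look like (assuming the 0.11-era
KStreamBuilder API; the topic names and the two mapValues steps are
hypothetical stand-ins for your processing):

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;

    KStreamBuilder builder = new KStreamBuilder();
    // A topic can only be registered once per builder, so read it once and
    // attach both independent operations to the same KStream; every record
    // flows into both branches.
    KStream<String, String> source = builder.stream("source-topic");
    source.mapValues(v -> v.toUpperCase()).to("sink-topic-1");
    source.mapValues(v -> v.toLowerCase()).to("sink-topic-2");
    KafkaStreams streams = new KafkaStreams(builder, props);  // props: your StreamsConfig properties
    streams.start();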
Hi All
- this is fixed; it was due to a classpath issue.
On Thu, Jul 27, 2017 at 3:30 PM, karan alang wrote:
> Hi All - I've installed Confluent 3.2.2 and I'm getting an error starting up
> the Broker
>
> I've attached the server.properties file; any ideas on what the issue might be?
>
> ./bin/kafka-server-start ./etc/kafka/server.properties
Hi All - I've installed Confluent 3.2.2 and I'm getting an error starting up
the Broker
I've attached the server.properties file; any ideas on what the issue might be?
./bin/kafka-server-start ./etc/kafka/server.properties
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
Using `PartitionGrouper` is correct.
As you mentioned correctly, Streams scales via the maximum number of
partitions and thus, by default, only creates one task for this case.
Another way would be to deploy multiple Streams applications, each
processing a different topic. Of course, for this you will need
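If you go the multiple-applications route, each instance mainly needs its
own application.id (a sketch; the ids, topic names, and broker address are
hypothetical):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties propsA = new Properties();
    propsA.put(StreamsConfig.APPLICATION_ID_CONFIG, "app-for-topic-a");
    propsA.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
    // ... build a topology reading "topic-a" and start it ...

    Properties propsB = new Properties();
    propsB.put(StreamsConfig.APPLICATION_ID_CONFIG, "app-for-topic-b");
    propsB.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
    // ... build a topology reading "topic-b" and start it ...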
One more thing: how do we check whether the stateful join operation resulted
in a KStream with any values in it (the size of the KStream)? How do we check
the contents of a KStream?
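For what it's worth: a KStream is unbounded, so it has no size as such, but
you can print records as they flow or materialize a running count. A sketch
(assuming "joined" is the KStream produced by the join; the store name is
hypothetical):

    // Print every record for debugging:
    joined.foreach((key, value) -> System.out.println(key + " => " + value));

    // Or keep a running count per key in a queryable store:
    KTable<String, Long> counts = joined.groupByKey().count("join-output-counts");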
- S
On Thu, Jul 27, 2017 at 2:06 PM, Shekar Tippur wrote:
> Damien,
>
> Thanks a lot for pointing that out.
>
I got a little further.
Hi,
the behavior you describe is by design. You should increase the
retention time of the re-partitioning topics manually to process old data.
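Something like this should work (0.11-era tooling; the topic name below is a
placeholder, as the actual repartition topic is named
<application.id>-<operator-name>-repartition and shows up in kafka-topics --list):

    ./bin/kafka-configs --zookeeper localhost:2181 --alter \
      --entity-type topics --entity-name my-app-my-operator-repartition \
      --add-config retention.ms=604800000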
-Matthias
On 7/25/17 7:17 AM, Gerd Behrmann wrote:
> Hi,
>
> While adding a new Streams based micro service to an existing Kafka
> infrastructure, I have run into some issues processing older data in
> existing topics.
Damien,
Thanks a lot for pointing that out.
I got a little further. I am kind of stuck with the sequencing. A couple of
issues (see the sketch after this list):
1. I cannot initialise KafkaStreams before the parser.to().
2. Do I need to create a new KafkaStreams object when I create a
KeyValueStore?
3. How do I initialize a KeyValueIterator?
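A sketch of the usual ordering, addressing 1-3 (assuming the 0.11-era API;
"store-name" and props are hypothetical): the single KafkaStreams instance
is created after the whole topology, including parser.to(), is defined, and
both the store and its KeyValueIterator come from that running instance
rather than being constructed directly:

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    KStreamBuilder builder = new KStreamBuilder();
    // ... define the complete topology here, including parser.to(...) ...
    KafkaStreams streams = new KafkaStreams(builder, props);  // one instance, created last
    streams.start();
    // No second KafkaStreams object is needed for the store; fetch it from
    // the running instance (this can throw InvalidStateStoreException until
    // the instance is fully up, so retry if needed):
    ReadOnlyKeyValueStore<String, String> store =
        streams.store("store-name", QueryableStoreTypes.<String, String>keyValueStore());
    KeyValueIterator<String, String> iter = store.all();  // iterators come from the store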
It is part of the ReadOnlyKeyValueStore interface:
https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/ReadOnlyKeyValueStore.java
On Thu, 27 Jul 2017 at 17:17 Shekar Tippur wrote:
> That's cool. Is this feature a part of the RocksDB object and not the KTable?
I would not mess with internals like this. It's safer if you treat this
like a special case of a Broker expansion.
Don't forget, if you are going to have mixed-lineage brokers (brokers on
different versions):
*From the documentation*
1. Update server.properties file on all brokers and add the following
properties:
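(The quote is cut off here; the rolling-upgrade section of the Kafka
documentation that this appears to reference lists these two properties,
presumably what was meant:)

    inter.broker.protocol.version=CURRENT_KAFKA_VERSION
    log.message.format.version=CURRENT_MESSAGE_FORMAT_VERSION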
That's cool. Is this feature a part of the RocksDB object and not the KTable?
Sent from my iPhone
> On Jul 27, 2017, at 07:57, Damian Guy wrote:
>
> Yes they can be strings,
>
> so you could do something like:
> store.range("test_host", "test_hosu");
>
> This would return an iterator containing all of the values (inclusive) from
> "test_host" -> "test_hosu".
I am using it on the non-enterprise edition... it works fine.
regards.
On Tue, Jul 25, 2017 at 5:16 PM, Hassaan Pasha wrote:
> Is confluent-kafka supported by DCOS (non-enterprise edition) or does it
> only work with the enterprise edition?
>
> On Tue, Jul 25, 2017 at 4:37 PM, Affan Syed wrote:
>
You might want to read this first:
http://www.apache.org/licenses/exports/
On 24 July 2017 at 14:33, DUGAN, TIMOTHY K wrote:
> Hello,
>
>
>
> We are looking for export compliance information related to the following
> ASF products:
>
>
>
>- Apache Maven 3.3.3, 3.1.0, 3.0.5
>- Apache Sub
Is confluent-kafka supported by DCOS (non-enterprise edition) or does it
only work with the enterprise edition?
On Tue, Jul 25, 2017 at 4:37 PM, Affan Syed wrote:
> again, another person from Confluent saying it is supported. Ask here asap
> :)
> - Affan
>
> -- Forwarded message
Hi,
While adding a new Streams based micro service to an existing Kafka
infrastructure, I have run into some issues processing older data in existing
topics. I am uncertain of the exact cause of the problems, but am looking for
advice to clarify how things are supposed to work to eliminate possible causes.
Hello,
We are looking for export compliance information related to the following ASF
products:
* Apache Maven 3.3.3, 3.1.0, 3.0.5
* Apache Subversion SVN client: SVN, version 1.6.11
* Apache Kafka_2.11-0.11.0.0 for message queue
* Apache Zookeeper-3.4.10 (background service for
Yes, they can be strings,
so you could do something like:
store.range("test_host", "test_hosu");
This would return an iterator containing all of the values (inclusive) from
"test_host" -> "test_hosu".
On Thu, 27 Jul 2017 at 14:48 Shekar Tippur wrote:
> Can you please point me to an example? Can
Hello,
Please forgive me for asking too simple a question (since I haven't done any
Scala development).
I am trying to see if a fix works for Windows OS. I have made some changes
in the core package and am trying to run the unit-test Gradle command. The
test already exists in the Kafka source code (so I am not adding a new one).
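If it helps, the Kafka README of that era documented running core tests like
this (LogTest is just an example class name):

    ./gradlew core:test                          # all tests in the core (Scala) module
    ./gradlew -Dtest.single=LogTest core:test    # a single test class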
Can you please point me to an example? Can "from" and "to" be strings?
Sent from my iPhone
> On Jul 27, 2017, at 04:04, Damian Guy wrote:
>
> Hi,
>
> You can't use a regex, but you could use a range query.
> i.e., keyValueStore.range(from, to)
>
> Thanks,
> Damian
>
>> On Wed, 26 Jul 2017 at 22:
Thanks!
On Thu, Jul 27, 2017 at 4:12 PM, Damian Guy wrote:
>
> On Wed, 26 Jul 2017 at 15:53 Debasish Ghosh
> wrote:
>
>> One of the brokers died. The good thing is that it's not a production
>> cluster, it's just a demo cluster. I have no replicas. But I can knock off
>> the current Kafka instance and have a new one.
Hi,
You can't use a regex, but you could use a range query.
i.e., keyValueStore.range(from, to)
Thanks,
Damian
On Wed, 26 Jul 2017 at 22:34 Shekar Tippur wrote:
> Hello,
>
> I am able to get the KStream-to-KTable join to work. I have some use cases
> where the key is not always an exact match.
> I
On Wed, 26 Jul 2017 at 15:53 Debasish Ghosh
wrote:
> One of the brokers died. The good thing is that it's not a production
> cluster, it's just a demo cluster. I have no replicas. But I can knock off
> the current Kafka instance and have a new one.
>
>
That explains it.
> Just for my understanding
Wouldn't it make sense to set index/segment creation, roll, and deletion
to a lower log level like FINE or DEBUG?
It seems I can't create an issue for this.
On 13/07/17 16:53, mosto...@gmail.com wrote:
Hi
While testing Kafka in our environment, we have noticed it creates A
LOT of "debug" logs (