umber fields per object. And
I'm planning on receiving about 2 million a day.
Thanks for any help,
Nick
...@confluent.io | @rmoff
>
>
> On Tue, 9 Apr 2019 at 21:26, Nick Torenvliet
> wrote:
>
> > Hi all,
> >
> > Just looking for some general guidance.
> >
> > We have a kafka -> druid pipeline we intend to use in an industrial
> setting
> > to m
et of the topics (somehow) from kafka using
some streams interface.
With all the stock ticker apps out there, I have to imagine this is a
really common use case.
Anyone have any thoughts on the best way for us to do this?
Nick
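For illustration, one rough way to pull a chosen subset of topics with the
plain Java consumer is to subscribe by regex; a fragment along these lines
(assuming a 2.x client, an already-built Properties object `props` with
bootstrap.servers, group.id and deserializers set, and a made-up topic-naming
pattern):

    KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
    // Any topic matching the (illustrative) plant.* naming scheme is consumed;
    // newly created matching topics are picked up on the next metadata refresh.
    consumer.subscribe(Pattern.compile("plant\\..*"));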
h KIP-349. Some felt that this feature could be
achieved by using existing capabilities of the current consumer API. See the
thread on the dev list (with KIP-349 in subject heading) for more details.
Cheers,
--
Nick
ill be used as input to determine if we move ahead with the
proposal. Thanks in advance for input.
Cheers,
--
Nick
Hard to say without more info, but why not just deploy something like a REST
API and expose it to your clients? They will send the data to the API, and it
will in turn feed the Kafka topic.
You will minimize coupling and be able to scale / upgrade more easily.
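For illustration, a minimal sketch of that approach, using the JDK's built-in
HttpServer (Java 9+ for readAllBytes) and the plain Java producer; topic name,
path, port and class name are all made up, and error handling is omitted:

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class RestIngestSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);

            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/ingest", exchange -> {
                // Forward the raw request body to the Kafka topic.
                byte[] body = exchange.getRequestBody().readAllBytes();
                producer.send(new ProducerRecord<>("ingest",
                        new String(body, StandardCharsets.UTF_8)));
                exchange.sendResponseHeaders(202, -1); // Accepted, empty response
                exchange.close();
            });
            server.start();
        }
    }

Clients then only need to speak HTTP, so the cluster and client libraries can
change behind the API without touching them.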
On Mar 10, 2018 2:47 AM, "adrien ruffie"
---
May I know the answer to the issue below?
a) Is there any setting that will lead the new Kafka broker to update the path
"/kafka/brokers/topics/__consumer_offsets/partitions"?
b) When will the ZK path
"/kafka/brokers/topics/__consumer_offsets/partitions/{number}/state" be updated?
Thanks
Nick
ex.html
Thanks for any info,
--
Nick
t scenario), I'd like to understand the
semantics of the `acks=1` case nonetheless.
Thanks in advance.
- nick
[0]:
https://github.com/apache/kafka/blob/0.10.1.1/clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java#L86
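For concreteness, a producer configured that way looks like the sketch below
(broker address and topic are placeholders; the comment paraphrases the
documented behaviour linked at [0]):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class AcksOneSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
            props.put(ProducerConfig.ACKS_CONFIG, "1");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // With acks=1 the send is acknowledged once the partition leader has
                // written the record to its local log; if the leader fails before the
                // followers replicate it, that record can be lost.
                producer.send(new ProducerRecord<>("example-topic", "key", "value"));
            }
        }
    }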
Sounds good, thank you!
Kind regards,
Nick
On 10 February 2017 at 22:59, Matthias J. Sax wrote:
> I agree that the API can be improved and we are working on that.
>
> Btw: KStream#toTable() was already suggested in KIP-114 discussion:
>
> http://search-hadoop.com/m/Kafka/uyzND19
// The key of the KStream becomes the key of the KTable.
// Any later value for a key overwrites the former one.
someStream.table(
    keySerde,
    valueSerde,
    topicName,
    tableName
);
or maybe the Serdes can be inferred? Either way, this would be a nice clean
approach to a (maybe?) common use case.
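In the meantime the same effect seems to require a round trip through a topic;
roughly, with the 0.10.x-era DSL (serdes, topic and store names below are
placeholders):

    // Materialize the stream as a topic, then read that topic back as a KTable.
    someStream.to(Serdes.String(), valueSerde, "some-topic");
    KTable<String, SomeValue> someTable =
        builder.table(Serdes.String(), valueSerde, "some-topic", "some-store");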
Thank you,
Nick
On 2
(the `kafka-avro-console-consumer` works because it doesn't deserialize the
key.)
Nick
On 10 February 2017 at 19:25, Nick DeCoursin
wrote:
> Finally, here's the problem:
>
> $ curl -X GET localhost:8081/subjects
> ["test-value","test-key","tes
I'd like to have both the key
and the value as Avro, not just the value.
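For what it's worth, the knobs usually involved on the Connect side are the
key/value converters; a sketch of the worker settings (registry URL is a
placeholder, and whether the key actually ends up as Avro still depends on the
connector):

    key.converter=io.confluent.connect.avro.AvroConverter
    key.converter.schema.registry.url=http://localhost:8081
    value.converter=io.confluent.connect.avro.AvroConverter
    value.converter.schema.registry.url=http://localhost:8081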
Thank you,
Nick
On 10 February 2017 at 12:55, Nick DeCoursin
wrote:
> It seems like a bug.
>
> Thanks,
> Nick
>
> On 9 February 2017 at 14:57, Nick DeCoursin
> wrote:
>
>> Hello,
>>
>
It seems like a bug.
Thanks,
Nick
On 9 February 2017 at 14:57, Nick DeCoursin
wrote:
> Hello,
>
> Here is a github repo with the failing case:
> https://github.com/decoursin/kafka-connect-test.
>
> I've tried other similar things and nothing seems to work.
>
> Tha
Hello,
Here is a github repo with the failing case:
https://github.com/decoursin/kafka-connect-test.
I've tried other similar things and nothing seems to work.
Thanks,
Nick
On 9 February 2017 at 04:40, Nick DeCoursin
wrote:
> Any help here? I can create a git repo with the code, if
Any help here? I can create a git repo with the code, if somebody assures
me they'll have a look.
Thank you,
Nick
On 8 February 2017 at 10:39, Nick DeCoursin
wrote:
> Below's the rest of my consumer, which includes the serde code. It's worth
> noting that when I run the
System.out.println("Starting stream...");
final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration);
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    try {
        streams.close();
    } catch (Exception e) {
        System.out.
streamsConfiguration.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG,
    "http://localhost:8081");
final KStream tests = builder.stream(Serdes.String(),
    testSpecificAvroSerde, "test");
tests.map((id, command) -> {
    System.out.println("test id=" + id + " c
exactly-once semantics for Kafka haven't been
> completed, so this would be at-least-once.
>
> On Wed, 25 Jan 2017 at 08:43 Nick DeCoursin
> wrote:
>
> > From the documentation
> > <http://docs.confluent.io/3.1.1/streams/developer-guide.html#id3>:
> >
> > The c
ord, update the corresponding key by
> incrementing its count by one.
How? Are there any examples of this online? To me, it doesn't seem so
trivial, because there's no such thing as a transaction in Kafka.
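For what it's worth, the DSL appears to handle that step with a local state
store plus an internal changelog topic rather than a transaction; a rough
sketch with the 0.10.x-era API (topic and store names are placeholders):

    // Count records per key; the running count lives in a state store that is
    // backed up to a changelog topic, so no explicit transaction is needed.
    KStream<String, String> input = builder.stream(Serdes.String(), Serdes.String(), "input-topic");
    KTable<String, Long> counts = input
        .groupByKey(Serdes.String(), Serdes.String())
        .count("counts-store");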
Thank you,
Nick
--
Nick DeCoursin
Software Engineer
foodpanda
Tel | +1 9
Thank you very much, both suggestions are wonderful, and I will try them.
Have a great day!
Kind regards,
Nick
On 24 January 2017 at 19:46, Matthias J. Sax wrote:
> If your data is already partitioned by key, you can save writing to a
> topic by doing a dummy reduce instead:
>
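A dummy reduce in the 0.10.x-era DSL typically looks something like this
(a sketch of the idea, not the original snippet; serdes and store name are
placeholders):

    // Keep only the newest value per key; since the data is already partitioned
    // by key, no extra through-topic is needed and the result is a KTable.
    KTable<String, SomeValue> table = someStream
        .groupByKey(keySerde, valueSerde)
        .reduce((oldValue, newValue) -> newValue, "stream-to-table-store");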
into `KTable`s. Is there any other way?
Thank you very much,
Nick DeCoursin
--
Nick DeCoursin
Software Engineer
foodpanda
Tel | +1 920 450 5434
Mail | n.decour...@foodpanda.com
Skype | nick.foodpanda
Foodpanda GmbH | Schreiberhauer Str. 30 | 10317 Berlin | Germany
Registered office | Berli
(with the reassign-partitions script)
- decommission the broker
Opened KAFKA-4681 to track this issue.
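For reference, the reassignment file the reassign-partitions tool expects is a
small JSON document along these lines (topic, partition and broker ids are
placeholders):

    {"version": 1,
     "partitions": [
       {"topic": "my-topic", "partition": 0, "replicas": [1, 2, 3]}
     ]}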
On Thu, Jan 19, 2017 at 4:50 PM, Nick Travers wrote:
> We recently tried to rebalance partitions for a topic (via
> kafka.admin.ReassignPartitionsCommand).
>
> In the .json f
particular
issue.
Thanks in advance!
- nick
twork maintenance_ is basically impossible to reproduce.
We'll be upgrading to 0.10.1.1 in the next few days nonetheless.
Thanks!
- nick
ed on any
new event.
Thank you,
Nick DeCoursin
I'm running the old 0.8 consumer, storing offsets in ZooKeeper, and want to
migrate to the new consumer introduced in 0.9. I don't see anything in the
docs about how to do that while preserving offsets. Do I need to follow the
steps from the FAQ to migrate to committing offsets to Kafka, and then I can
swa
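The usual path, as far as I know, is a dual-commit window on the old consumer
before switching; the relevant old-consumer settings (names from the 0.8.2+
docs, worth double-checking against your version) are:

    offsets.storage=kafka
    dual.commit.enabled=true

Once offsets are being committed to Kafka under the same group.id, the new
consumer should be able to pick them up after the swap.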
To follow up on my last email, I have been looking into
socket.receive.buffer.bytes as well as socket.send.buffer.bytes. Would
increasing these buffers help with the OOM issue?
All help is appreciated!
Thanks!
-nick
From: "McKoy, Nick"
mailto:nicholas.mc...@washpost.com>>
Dat
Hey all,
I have a kafka cluster of 5 nodes that’s working really hard. CPU is around 40%
idle daily.
I looked at the file descriptor note on this documentation page
http://docs.confluent.io/1.0/kafka/deployment.html#file-descriptors-and-mmap
and decided to give it a shot on one instance in the
There's mention in the doc of configuring the concurrency level of internal
thread pools, so I assume that would be applicable to this example?
--
Nick
Manual Commit (version 0.8 and above)
In order to be able to achieve "at-least-once" delivery, you can use following
API to obta
Hi,
I'm looking for a complete low-level consumer example. Ideally one in Scala
that continuously consumes from a topic and commits offsets.
Thanks for any pointers,
--
Nick
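Not the Scala / low-level example asked for, but a sketch of the same
continuously-consume-and-commit pattern with the newer Java consumer
(assuming a 2.x client; group id and topic are placeholders):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class CommittingConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "example-group");           // placeholder
            props.put("enable.auto.commit", "false");         // commit manually below
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d key=%s value=%s%n",
                                record.offset(), record.key(), record.value());
                    }
                    // Committing after processing gives at-least-once delivery.
                    consumer.commitSync();
                }
            }
        }
    }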