Kafka Connectors output to topic.

2021-03-11 Thread Nick Siviglia
umber fields per object. And I'm planning on receiving about 2 million a day. Thanks for any help, Nick

Re: Streaming Data

2019-04-10 Thread Nick Torenvliet
...@confluent.io | @rmoff > > > On Tue, 9 Apr 2019 at 21:26, Nick Torenvliet > wrote: > > > Hi all, > > > > Just looking for some general guidance. > > > > We have a kafka -> druid pipeline we intend to use in an industrial > setting > > to m

Streaming Data

2019-04-09 Thread Nick Torenvliet
et of the topics (somehow) from kafka using some streams interface. With all the stock ticker apps out there, I have to imagine this is a really common use case. Anyone have any thoughts as to what we are best to do? Nick

Re: Prioritized Topics for Kafka

2019-01-26 Thread nick
h KIP-349. Some felt that this feature could be achieved by using existing capabilities of the current consumer API. See the thread on the dev list (with KIP-349 in subject heading) for more details. Cheers, -- Nick

Prioritized Topics for Kafka

2019-01-16 Thread nick
ill be used as input to determine if we move ahead with the proposal. Thanks in advance for input. Cheers, -- Nick

Re: Suggestion over architecture

2018-03-10 Thread Nick Vasilyev
Hard to say without more info, but why not just deploy something like a REST api and expose it to your clients, they will send the data to the api and it will in turn feed the Kafka topic. You will minimize coupling and be able to scale / upgrade easier. On Mar 10, 2018 2:47 AM, "adrien ruffie"
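
The REST façade described above can be sketched with the JDK's built-in `com.sun.net.httpserver` and a queue standing in for the Kafka producer. This is a minimal illustration only: the `/events` path is a placeholder, and a real implementation would replace the queue with `producer.send(new ProducerRecord<>(...))`.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class RestIngest {
    // Stands in for a KafkaProducer; a real service would call
    // producer.send(new ProducerRecord<>("events", body)) here instead.
    static final BlockingQueue<String> toKafka = new LinkedBlockingQueue<>();

    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/events", exchange -> {
            // Accept the client payload and hand it off for publishing.
            byte[] body = exchange.getRequestBody().readAllBytes();
            toKafka.add(new String(body, StandardCharsets.UTF_8));
            byte[] resp = "accepted".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(202, resp.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(resp);
            }
        });
        server.start();
        return server;
    }
}
```

Because clients only see the HTTP contract, the Kafka cluster, topic layout, and client libraries can change behind it without touching the producers — the decoupling argument made above.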

when will zk path "/kafka/brokers/topics/__consumer_offsets/partitions/{number}/state" be updated

2017-09-08 Thread Nick
--- May I know the answer about below issue? a) Is there any setting, which will lead the new kafka to update the path "/kafka/brokers/topics/__consumer_offsets/partitions"? b) when will zk path "/kafka/brokers/topics/__consumer_offsets/partitions/{number}/state" be updated? Thanks Nick

Javadoc for org.apache.kafka.common.requests package

2017-03-15 Thread Afshartous, Nick
ex.html Thanks for any info, -- Nick

Producer acks=1, clean broker shutdown and data loss

2017-02-18 Thread Nick Travers
t scenario), I'd like to understand the semantics of the `acks=1` case nonetheless. Thanks in advance. - nick [0]: https://github.com/apache/kafka/blob/0.10.1.1/clients/src/main/java/org/apache/kafka/clients/producer/ProducerConfig.java#L86
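
On the semantics asked about here: with `acks=1` the leader acknowledges before followers have replicated, so records acknowledged just before a leader shutdown or failover can be lost even in a clean shutdown. A sketch of the durable alternative (broker address illustrative; `min.insync.replicas` is a broker/topic-level setting, shown here only as a comment):

```java
import java.util.Properties;

public class DurableProducerConfig {
    public static Properties props() {
        Properties p = new Properties();
        p.setProperty("bootstrap.servers", "localhost:9092"); // illustrative address
        // acks=1: the leader replies before followers copy the record,
        // so a leader failover can drop acknowledged writes.
        // acks=all: the leader waits for the current in-sync replicas.
        p.setProperty("acks", "all");
        // Pair with min.insync.replicas=2 on the broker/topic so that
        // "all" means at least two replicas, not just the leader itself.
        p.setProperty("retries", Integer.toString(Integer.MAX_VALUE));
        return p;
    }
}
```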

Re: Table a KStream

2017-02-10 Thread Nick DeCoursin
Sounds good, thank you! Kind regards, Nick On 10 February 2017 at 22:59, Matthias J. Sax wrote: > I agree that the API can be improved and we are working on that. > > Btw: KStream#toTable() was already suggested in KIP-114 discussion: > > http://search-hadoop.com/m/Kafka/uyzND19

Re: Table a KStream

2017-02-10 Thread Nick DeCoursin
he key of the KStream is the key of the KTable. // Any latter key overwrites the former. someStream.table( Serde, Serde, topicName, tableName ); or maybe the Serdes can be inferred? Either way, this would be a nice clean approach to a (maybe?) common use case. Thank you, Nick On 2

Re: Kafka Connect - Unknown magic byte

2017-02-10 Thread Nick DeCoursin
(the `kafka-avro-console-consumer` works because it doesn't deserialize the key.) Nick On 10 February 2017 at 19:25, Nick DeCoursin wrote: > Finally, here's the problem: > > $ curl -X GET localhost:8081/subjects > ["test-value","test-key","tes
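
The "Unknown magic byte" error in this thread comes from the Avro deserializer rejecting bytes that do not start with the Confluent wire-format header: a zero magic byte followed by a 4-byte schema id. A plain-string key handed to the Avro key converter fails exactly this check. A sketch of the check (my own illustration, not Confluent's code):

```java
import java.nio.ByteBuffer;

public class WireFormat {
    static final byte MAGIC = 0x0;

    // Returns the schema id if the payload looks like the Confluent Avro
    // wire format (magic byte + 4-byte schema id + Avro data); otherwise
    // throws, mirroring the deserializer's "Unknown magic byte!" error.
    public static int schemaId(byte[] payload) {
        if (payload == null || payload.length < 5 || payload[0] != MAGIC) {
            throw new IllegalArgumentException("Unknown magic byte!");
        }
        return ByteBuffer.wrap(payload, 1, 4).getInt();
    }
}
```

This is why the console consumer "works": it prints the key bytes without deserializing them, so the header check never runs.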

Re: Kafka Connect - Unknown magic byte

2017-02-10 Thread Nick DeCoursin
;d like to have both the key and the value as Avro, not just the value. Thank you, Nick On 10 February 2017 at 12:55, Nick DeCoursin wrote: > It seems like a bug. > > Thanks, > Nick > > On 9 February 2017 at 14:57, Nick DeCoursin > wrote: > >> Hello, >> >

Re: Kafka Connect - Unknown magic byte

2017-02-10 Thread Nick DeCoursin
It seems like a bug. Thanks, Nick On 9 February 2017 at 14:57, Nick DeCoursin wrote: > Hello, > > Here is a github repo with the failing case: https://github.com/decoursin/ > kafka-connect-test. > > I've tried other similar things and nothing seems to work. > > Tha

Re: Kafka Connect - Unknown magic byte

2017-02-09 Thread Nick DeCoursin
Hello, Here is a github repo with the failing case: https://github.com/decoursin/kafka-connect-test. I've tried other similar things and nothing seems to work. Thanks, Nick On 9 February 2017 at 04:40, Nick DeCoursin wrote: > Any help here? I can create a git repo with the code, if

Re: Kafka Connect - Unknown magic byte

2017-02-08 Thread Nick DeCoursin
Any help here? I can create a git repo with the code, if somebody assures me they'll have a look. Thank you, Nick On 8 February 2017 at 10:39, Nick DeCoursin wrote: > Below's the rest of my consumer, which includes the serde code. It's worth > noting that when I run the

Re: Kafka Connect - Unknown magic byte

2017-02-08 Thread Nick DeCoursin
arting stream..."); final KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration); streams.start(); Runtime.getRuntime().addShutdownHook(new Thread(() -> { try { streams.close(); } catch (Exception e) { System.out.

Kafka Connect - Unknown magic byte

2017-02-07 Thread Nick DeCoursin
streamsConfiguration.put(AbstractKafkaAvroSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081"); final KStream tests = builder.stream(Serdes.String(), testSpecificAvroSerde, "test"); tests.map((id, command) -> { System.out.println("test id=" + id + " c

Re: Track Latest Count - Unique Key

2017-01-25 Thread Nick DeCoursin
e semantics for Kafka haven't been > completed, so this would be at-least-once. > > On Wed, 25 Jan 2017 at 08:43 Nick DeCoursin > wrote: > > > From the documentation > > <http://docs.confluent.io/3.1.1/streams/developer-guide.html#id3>: > > > > The c

Track Latest Count - Unique Key

2017-01-25 Thread Nick DeCoursin
ord, update the corresponding key by > incrementing its count by one. How? Is there any examples of this online? To me, it doesn't seem so trivial because there's no such thing as a transaction in Kafka, Thank you, Nick -- Nick DeCoursin Software Engineer foodpanda Tel | +1 9
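
The per-key counting asked about here is, in Kafka Streams terms, a `groupByKey().count()` aggregation. The core update step — increment the arriving record's key by one — can be illustrated with a plain map (ignoring the at-least-once caveat raised in the reply; names are my own):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyCounts {
    // For each incoming record key, increment that key's count by one —
    // the same update a Streams count() performs against its state store.
    public static Map<String, Long> count(List<String> keys) {
        Map<String, Long> counts = new HashMap<>();
        for (String k : keys) {
            counts.merge(k, 1L, Long::sum);
        }
        return counts;
    }
}
```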

Re: Table a KStream

2017-01-24 Thread Nick DeCoursin
Thank you very much, both suggestions are wonderful, and I will try them. Have a great day! Kind regards, Nick On 24 January 2017 at 19:46, Matthias J. Sax wrote: > If your data is already partitioned by key, you can save writing to a > topic by doing a dummy reduce instead: >
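
The "dummy reduce" suggested in the reply turns a stream into a table by keeping, for each key, only the newest value — in later Streams APIs roughly `stream.groupByKey().reduce((oldV, newV) -> newV)`. Its last-write-wins semantics can be illustrated without a broker (class and method names are my own):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BinaryOperator;

public class StreamToTable {
    // The "dummy" reducer: discard the old value, keep the new one.
    static final BinaryOperator<String> KEEP_NEWEST = (oldV, newV) -> newV;

    // Folding a stream of (key, value) updates with that reducer yields
    // the table view: one row per key, holding that key's latest value.
    public static Map<String, String> materialize(List<SimpleEntry<String, String>> updates) {
        Map<String, String> table = new HashMap<>();
        for (SimpleEntry<String, String> u : updates) {
            table.merge(u.getKey(), u.getValue(), KEEP_NEWEST);
        }
        return table;
    }
}
```

The reduce is "dummy" because no real combining happens; it exists only to give the stream a changelog-backed state store, which is what makes it a KTable.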

Table a KStream

2017-01-24 Thread Nick DeCoursin
into `KTable`s. Is there any other way? Thank you very much, Nick DeCoursin -- Nick DeCoursin Software Engineer foodpanda Tel | +1 920 450 5434 Mail | n.decour...@foodpanda.com Skype | nick.foodpanda Foodpanda GmbH | Schreiberhauer Str. 30 | 10317 Berlin | Germany Sitz der Gesellschaft | Berli

Re: Reassigning partitions to a non-existent broker

2017-01-20 Thread Nick Travers
(with the reassign-partitions script) - decommission the broker Opened KAFKA-4681 to track this issue. On Thu, Jan 19, 2017 at 4:50 PM, Nick Travers wrote: > We recently tried to rebalance partitions for a topic (via > kafka.admin.ReassignPartitionsCommand). > > In the .json f

Reassigning partitions to a non-existent broker

2017-01-19 Thread Nick Travers
particular issue. Thanks in advance! - nick

Brokers stuck trying to shrink ISR down to themselves

2017-01-13 Thread Nick Travers
twork maintenance_ is basically impossible to reproduce. We'll be upgrading to 0.10.1.1 in the next few days nonetheless. Thanks! - nick

Process KTable on Predicate

2016-11-12 Thread Nick DeCoursin
ed on any new event. Thank you, Nick DeCoursin

Migrating old consumer offsets to new consumer

2016-09-21 Thread Nick Kleinschmidt
I’m running the old 0.8 consumer storing offsets in Zookeeper, want to migrate to the new consumer introduced in 0.9 . I don’t see anything in the docs about how to do that while preserving offsets. Do I need to follow the steps from the FAQ to migrate to committing offsets to Kafka, then I can swa
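
The FAQ procedure referred to above is the old consumer's dual-commit migration: commit offsets to both ZooKeeper and Kafka while the group rolls, then switch to Kafka only. A hedged sketch of the old-consumer properties involved (names as documented for 0.8.2-era consumers; verify against your exact version):

```properties
# Step 1: old consumer commits offsets to both Kafka and ZooKeeper.
offsets.storage=kafka
dual.commit.enabled=true

# Step 2, after every consumer in the group has rolled: Kafka only.
# dual.commit.enabled=false
```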

Re: Out of memory - Java Heap space

2016-04-18 Thread McKoy, Nick
To follow up with my last email, I have been looking into socket.receive.buffer.byte as well as socket.send.buffer.bytes. Would it help to increase the buffer for OOM issue? All help is appreciated! Thanks! -nick From: "McKoy, Nick" mailto:nicholas.mc...@washpost.com>> Dat
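
On the follow-up question: `socket.send.buffer.bytes` and `socket.receive.buffer.bytes` size the TCP socket buffers, which live outside the Java heap, so raising them is unlikely to resolve a heap OOM; request sizes are a more common heap-pressure factor. The broker-side knobs, with Kafka's documented defaults shown as illustrative values:

```properties
# TCP send/receive buffers for broker sockets, in bytes (-1 = OS default).
# These are kernel-level buffers, not Java heap allocations.
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
# Cap on a single request size; large requests are buffered on-heap.
socket.request.max.bytes=104857600
```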

Out of memory - Java Heap space

2016-04-18 Thread McKoy, Nick
Hey all, I have a kafka cluster of 5 nodes that’s working really hard. CPU is around 40% idle daily. I looked at the file descriptor note on this documentation page http://docs.confluent.io/1.0/kafka/deployment.html#file-descriptors-and-mmap and decided to give it a shot on one instance in the

Re: Low-level Consumer Example (Scala)

2016-04-06 Thread Afshartous, Nick
;s mention in the doc of configuring the concurrency level of internal thread pools, so I assume that would be applicable to this example ? -- Nick Manual Commit (version 0.8 and above) In order to be able to achieve "at-least-once" delivery, you can use following API to obta

Low-level Consumer Example (Scala)

2016-04-05 Thread Afshartous, Nick
Hi, I'm looking for a complete low-level consumer example. Ideally one in Scala that continuously consumes from a topic and commits offsets. Thanks for any pointers, -- Nick