in/java/org/apache/kafka/streams/processor/internals/RecordCollector.java#L73
which means that your error happens on the value, not the key.
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On December 6, 2016 at 9:18:53 PM, Jon Yeargers (jon.yearg...@cedexis.com)
wrote:
0.10.1.0
On Tue, Dec 6,
Jon,
Are you using 0.10.1 or 0.10.0.1?
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On December 6, 2016 at 7:55:30 PM, Damian Guy (damian@gmail.com) wrote:
Hi Jon,
At a glance the code looks OK, i.e., I believe the aggregate() should have
picked up the default Serde set in your config.
Do you mind sharing the code of the AggKey class?
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On December 6, 2016 at 7:26:51 PM, Jon Yeargers (jon.yearg...@cedexis.com)
wrote:
The 2nd.
On Tue, Dec 6, 2016 at 10:01 AM, Radek Gruchalski
wrote:
> Is the error happening at this step:
TimeWindows.of(60 * 60 * 1000L),
collectorSerde, "prt_minute_agg_stream");
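As an aside, TimeWindows.of(60 * 60 * 1000L) produces hour-long tumbling windows, and the bucketing arithmetic is simple enough to sketch in plain Java (this helper is illustrative only, not the Streams API itself):

```java
// Illustrative sketch: how a tumbling window maps a record's
// epoch-millis timestamp to its window start, as TimeWindows.of(sizeMs)
// does for an hour-long window.
public class WindowBucket {
    static final long WINDOW_SIZE_MS = 60 * 60 * 1000L; // one hour

    // Start of the tumbling window containing the given timestamp.
    static long windowStartFor(long timestampMs) {
        return timestampMs - (timestampMs % WINDOW_SIZE_MS);
    }

    // End of that window (exclusive).
    static long windowEndFor(long timestampMs) {
        return windowStartFor(timestampMs) + WINDOW_SIZE_MS;
    }
}
```

Every record whose timestamp falls in the same hour lands in the same bucket, which is what the aggregate above folds over.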
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On December 6, 2016 at 6:47:38 PM, Jon Yeargers (jon.yearg...@cedexis.com)
wrote:
If I comment out the aggregation step and just .print the .map step I don
You may have to write your own Serializer / Deserializer for
RtDetailLogLine.
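A minimal sketch of the byte-level round trip such a custom serializer would perform, in plain Java; the fields (host, latencyMs) are hypothetical, since the real RtDetailLogLine was not shown in the thread, and in an actual Streams app this logic would sit behind Kafka's Serializer/Deserializer interfaces:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Hand-rolled round trip for a hypothetical RtDetailLogLine POJO.
public class RtDetailLogLineCodec {

    public static class RtDetailLogLine {
        public String host;     // hypothetical field
        public long latencyMs;  // hypothetical field
    }

    public static byte[] serialize(RtDetailLogLine v) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            byte[] host = v.host.getBytes(StandardCharsets.UTF_8);
            out.writeInt(host.length);   // length-prefix the string field
            out.write(host);
            out.writeLong(v.latencyMs);
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            // In-memory streams do not actually throw here.
            throw new UncheckedIOException(e);
        }
    }

    public static RtDetailLogLine deserialize(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            RtDetailLogLine v = new RtDetailLogLine();
            byte[] host = new byte[in.readInt()];
            in.readFully(host);
            v.host = new String(host, StandardCharsets.UTF_8);
            v.latencyMs = in.readLong();
            return v;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

If serialization of the value is where the exception is thrown (per the RecordCollector line referenced earlier in the thread), wiring a codec like this into the aggregate's value Serde is the usual fix.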
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On December 6, 2016 at 6:28:23 PM, Jon Yeargers (jon.yearg...@cedexis.com)
wrote:
Using 0.10.1.0
This is my topology:
Properties config = new Properties();
config.put
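The Properties block is cut off in the archive, but a sketch of what a 0.10.1-era topology config with default serdes could look like is below. The application id, bootstrap address, and serde class names are all assumptions for illustration; only the config key names ("key.serde", "value.serde") follow the 0.10.1 Streams config:

```java
import java.util.Properties;

// Illustrative Streams configuration; values are placeholders,
// not taken from the thread.
public class StreamsProps {
    public static Properties build() {
        Properties config = new Properties();
        config.put("application.id", "prt-minute-agg");     // assumed name
        config.put("bootstrap.servers", "localhost:9092");  // assumed address
        // Default serdes, picked up by aggregate() unless overridden per call.
        config.put("key.serde", "com.example.AggKeySerde");             // hypothetical class
        config.put("value.serde", "com.example.RtDetailLogLineSerde");  // hypothetical class
        return config;
    }
}
```

If the default value serde here does not match the type flowing into aggregate(), you get exactly the kind of serialization failure discussed above.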
from others though.
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On December 5, 2016 at 5:27:05 PM, Thomas Becker (tobec...@tivo.com) wrote:
Thanks for the reply, Radek. So you're running with 6s then? I'm
surprised; I thought people were generally increasing this value when
Hi Thomas,
Defaults are good for sure. Never had a problem with default timeouts in
AWS.
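For reference, the 6s being discussed matches the documented broker default for the ZooKeeper session timeout (6000 ms). A sketch of stating it explicitly, with the connection timeout as an assumed companion value:

```java
import java.util.Properties;

// Illustrative broker-side timeout settings. 6000 ms is the documented
// default for zookeeper.session.timeout.ms; whether to raise it in AWS
// is the question debated in this thread.
public class BrokerTimeouts {
    public static Properties defaults() {
        Properties p = new Properties();
        p.put("zookeeper.session.timeout.ms", "6000");    // default: 6 seconds
        p.put("zookeeper.connection.timeout.ms", "6000"); // assumed, matches session timeout
        return p;
    }
}
```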
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On December 5, 2016 at 4:58:41 PM, Thomas Becker (tobec...@tivo.com) wrote:
I know several folks are running Kafka in AWS, can someone give me an
idea
You’re most likely correct that it’s not that particular change.
That commit was introduced only 6 days ago, well after the 0.10.1 release.
A minimal reproduction would be helpful, unless someone else on this list
recognizes the issue immediately.
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On November 29
thought.
–
Best regards,
Radek Gruchalski
ra...@gruchalski.com
On November 28, 2016 at 9:04:16 PM, Bart Vercammen (b...@cloutrix.com)
wrote:
Hi,
It seems that consumer group rebalance is broken in Kafka 0.10.1.0?
When running a small test-project :
- consumers running in own JVM (with
296 looks familiar: https://www.nodejitsu.com/
Kind regards,
Radek Gruchalski
radek.gruchal...@technicolor.com | radek.gruchal...@portico.io |
ra...@gruchalski.com
00447889948663
That is exactly why we've decided to stick with Java. It also has support
for all consumer settings out of the box.
Kind regards,
Radek Gruchalski
On 22 Dec 2012, at 19:17, David Arthur wrote:
> FWIW, message production is quite a bit simpler than consumption. It does
> not require the s
We use that fork of node-kafka without any issues. We have a 3-server cluster
setup. Single topic, 3 partitions. Franz-kafka is on our "to check" list, but
no rush yet.
Kind regards,
Radek Gruchalski
On 22 Dec 2012, at 18:59, Apoorva Gaurav wrote:
> Thanks Radek,
is capable of this then we
> are willing to move to Scala or Java but node.js is the first choice.
>
> Thanks & Regards,
> Apoorva
>
> On Sat, Dec 22, 2012 at 2:25 AM, Radek Gruchalski <
> radek.gruchal...@portico.io> wrote:
>
>> We are using https://gi
integration. Another
kafka module is this one: https://github.com/dannycoates/franz-kafka.
Kind regards,
Radek Gruchalski
radek.gruchal...@technicolor.com | radek.gruchal...@portico.io |
ra...@gruchalski.com