Good afternoon,
I'm Marco from BAC Credomatic, a group of banks in Central America. We are
interested in using AK to send several records from on-premises data marts
to a Pega Systems cloud. I would also like to know whether one of your
partners could have a call with us to guide us through the process.
properties for this problem, but I don't know
what it can be.
Thank you
-----Original Message-----
From: M. Manna
Sent: Friday, 17 January 2020 13:25
To: users@kafka.apache.org
Subject: Re: Kafka encoding UTF-8 problem
Hi,
On Fri, 17 Jan 2020 at 11:18, Marco Di Falco
wrote:
> Hello guys!
>
Hello guys!
I have a producer and a consumer running in a Windows shell.
I write the message ‘questo è un test’ and the consumer receives this: “questo
´┐¢ un test”.
What properties should I use to set the character encoding to UTF-8?
thank you
Marco
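For what it's worth, Kafka's StringSerializer/StringDeserializer already use
UTF-8 by default, so the mangling usually comes from the Windows console code
page rather than from Kafka itself. A possible fix, assuming cmd.exe and the
stock .bat scripts (a sketch, not verified on your setup):

```shell
:: Switch the console to the UTF-8 code page before starting the clients
chcp 65001

:: Force the JVM default charset to UTF-8 for the Kafka command-line tools
set KAFKA_OPTS=-Dfile.encoding=UTF-8

bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test
```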
ary 16, 2020 10:52 AM
> To: Kafka Users
> Subject: Re: Kafka Broker leader change without effect
>
> Marco, the replication factor of 3 is not possible when you only have
> two brokers, thus the producer will fail to send records until the
> third broker is restored. You would need to
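To make the constraint concrete: with only two live brokers, creating a topic
with replication factor 3 is rejected outright (a sketch; topic name and port
are placeholders):

```shell
# Rejected with InvalidReplicationFactorException while only 2 brokers are up:
# "Replication factor: 3 larger than available brokers: 2"
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --topic demo --partitions 1 --replication-factor 3
```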
Hello guys!
I have a problem I wrote about on Stack Overflow here:
https://stackoverflow.com/questions/59772124/kafka-broker-leader-change-without-effect
Can you help me?
thank you
Marco
oduction ready" configuration
> etc. It's just to play with the system.
>
> You should change the config accordingly.
>
>
> -Matthias
>
>
> On 11/1/18 1:57 AM, Marco Ippolito wrote:
> > Hi all,
> > yesterday I installed Kafka 2.0 in my Ubuntu 18.04.01 S
; ...
Best,
Marco
-----Original Message-----
From: 赖剑清 [mailto:laijianq...@tp-link.com.cn]
Sent: Tuesday, 23 October 2018 05:55
To: users@kafka.apache.org
Subject: RE: When Kafka stores group information in zookeeper?
Yeah, it works.
It all depends on the address of the bootstrap-server: address of
with ZooKeeper-based consumers:
bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --describe --group
my-group
Do you have any hint on this behavior?
Thank you very much,
Marco Spadoni
Italiaonline S.p.A.
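For consumers that store offsets in Kafka rather than in ZooKeeper, the same
describe goes through the brokers instead (assuming a broker listening on
localhost:9092):

```shell
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
```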
    MySession::new,
    MySession::aggregateSessions,
    MySession::mergeSessions,
    SessionWindows
        .with(WINDOW_INACTIVITY_GAPS_MS)      // 5 minutes
        .until(WINDOW_MAINTAIN_DURATION_MS),  // 7 minutes
    "aggregate_store")
.to(windowedSerde, mySessionSerde, SINK_TOPIC_KTABLE);
KafkaStreams stream = new
ActivityJoiner does:

public static JsonObject locationActivityJoiner(JsonObject activity, String loc) {
    JsonObject join = activity.copy();
    join.put("city", loc);
    return join;
}
Hmm... your question is making me think... are you telling me that since
both are KStreams, they actually
and left, elements that belong only to the activity KStream, while I was
expecting to receive an activity (a JsonObject object) and a userLocation (a
String object).
How is this possible? I can't see where I'm going wrong.
Do you have any clue as to why this is happening?
thanks a lot for your support and work.
Best
Marco
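One common source of one-sided results in stream-stream joins is leftJoin:
when no matching userLocation arrives within the join window, the right-hand
value is passed to the joiner as null. A minimal, self-contained sketch of a
null-safe joiner (a plain java.util.Map stands in for the Vert.x JsonObject
here, purely for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class JoinerSketch {
    // Null-safe variant of locationActivityJoiner: in a leftJoin the right
    // side (loc) may legitimately be null, so handle it explicitly.
    public static Map<String, Object> locationActivityJoiner(Map<String, Object> activity, String loc) {
        Map<String, Object> join = new HashMap<>(activity); // copy, like activity.copy()
        join.put("city", loc == null ? "unknown" : loc);
        return join;
    }

    public static void main(String[] args) {
        Map<String, Object> activity = new HashMap<>();
        activity.put("user", "marco");
        Map<String, Object> joined = locationActivityJoiner(activity, null);
        System.out.println(joined.get("city")); // prints "unknown"
    }
}
```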
Hello Damian,
Thanks a lot for your precious support.
I confirm that your workaround works perfectly for my use case.
I'll be glad to help you test the original code once the issue you've
spotted is fixed.
Thanks a lot again.
Marco.
On 06/Mar/2017 16:03, "Damia
arg2;
}
}
BTW (this will be a topic for another thread anyway...) is there a way to
be in control of the MySession lifecycle? I was thinking of pooling them to
reduce GC workload.
thanks a lot for your precious help.
Marco
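On the pooling idea: note that Kafka Streams itself creates and caches the
aggregate objects, so pooling them from inside a topology is generally
unsafe. For objects whose lifecycle you do own, a minimal generic pool could
look like this (hypothetical sketch; SessionPool is not a Kafka Streams API):

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// Minimal, single-threaded object pool: reuse released instances to cut
// allocation/GC pressure; fall back to the factory when the pool is empty.
public class SessionPool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public SessionPool(Supplier<T> factory) { this.factory = factory; }

    public T acquire() {
        T t = free.poll();
        return t != null ? t : factory.get();
    }

    public void release(T t) { free.push(t); }

    public static void main(String[] args) {
        SessionPool<StringBuilder> pool = new SessionPool<>(StringBuilder::new);
        StringBuilder a = pool.acquire();
        pool.release(a);
        StringBuilder b = pool.acquire();
        System.out.println(a == b); // prints "true" -- the instance was reused
    }
}
```

Remember to reset pooled objects on release; stale state is the classic
pooling bug.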
2017-03-06 11:59 GMT+01:00 Damian Guy <damian@gmail.com&g
Please let me know if you need more info.
thanks a lot,
Marco
Try using --packages to include the jars. From the error it seems it's looking
for a main class in the jars, but you are running a Python script...
On 25 Feb 2017 10:36 pm, "Raymond Xie" wrote:
That's right Anahita, however, the class name is not indicated in the
original github
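As an illustration of the --packages suggestion (the Maven coordinates below
are an assumption; match them to your Spark and Kafka versions):

```shell
# --packages resolves the connector jars from Maven so the Python script can
# use the Kafka integration; no main class from the jars is invoked.
spark-submit \
  --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.0 \
  my_kafka_job.py
```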
KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();
It also doesn't work with my real use case.
While debugging, I've noticed that it doesn't even reach the beginning
of the stream pipeline (groupBy).
Can you please help investigating this issue?
Best.
Marco
Hi Ben,
Thanks for your answer. What if the instance does not have a public DNS
hostname?
These are all private nodes without public/elastic IP, therefore I don't
know what to set.
Marco
2016-06-01 15:09 GMT+02:00 Ben Davison <ben.davi...@7digital.com>:
> Hi Marco,
>
> We use
idering that
we have a VPN between the clusters, the only choice left seems to be
setting the hostname.
What should this value be? Is there anything else I need to know for this
kind of setup? Any suggestions?
Thanks in advance.
Kind regards,
Marco
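For reference, the broker side of such a setup is controlled by the
advertised address in server.properties (advertised.host.name on the 0.8/0.9
brokers, advertised.listeners in later versions); the value must be an
address the remote cluster can reach over the VPN. A hedged example with a
placeholder private IP:

```properties
# server.properties -- 10.0.1.5 is a placeholder for the VPN-reachable address
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://10.0.1.5:9092
```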
Yes, it's online, and the version is 0.8.1.2.2.0.0-1084. jps lists it too
2014-12-05 14:56 GMT+01:00 svante karlsson s...@csi.se:
I haven't run the sandbox but check if the kafka server is started at all.
ps -ef | grep kafka
2014-12-05 14:34 GMT+01:00 Marco marco@gmail.com:
Hi,
I've
know if this can interfere?
2014-12-05 15:14 GMT+01:00 Harsha ka...@harsha.io:
I think the default port for Kafka running there is 6667. Can you check
server.properties to see what the port number is?
-Harsha
On Fri, Dec 5, 2014, at 06:10 AM, Marco wrote:
Yes, it's online and version
Ok, I've downloaded Kafka myself and that works. Anyway, thanks for the help, guys!
2014-12-05 15:55 GMT+01:00 Marco marco@gmail.com:
The port in server.properties is indeed 6667.
bin/kafka-console-producer.sh --broker-list localhost:6667 --topic test
- same error :(
I've tried also
, Guozhang Wang wangg...@gmail.com wrote:
Hi Marco,
The fetch error comes from UnresolvedAddressException, could you try to
check if you have a network partition issue during that time?
As for the "Too many file handles" error, I think this is due to not properly
handling such exceptions, so that it does
the broker should do the work.
Guozhang
On Mon, Nov 10, 2014 at 7:47 AM, Marco zentrop...@yahoo.co.uk wrote:
We're using kafka 0.8.1.1.
About a network partition, that is a possibility.
Now I'm just wondering if deleting the data folder on the second node will at
least let it come up again.
i think another