Good afternoon:
I'm Marco from BAC Credomatic. We are a group of banks in Central America. We
are interested in using Apache Kafka to send several records from on-premises
data marts to a Pega Systems cloud. I would also like to know whether one of
your partners could have a call to guide us through the process.
have to set a Kafka property for this problem, but I don't know
what it could be.
Thank you
-----Original Message-----
From: M. Manna
Sent: Friday, 17 January 2020 13:25
To: users@kafka.apache.org
Subject: Re: Kafka encoding UTF-8 problem
Hi,
On Fri, 17 Jan 2020 at 11:18, Marco Di Falco wrote:
Hello guys!
I have a producer and a consumer running in a Windows shell.
I send the message ‘questo è un test’ and the consumer receives: “questo
´┐¢ un test”.
What properties should I set to get UTF-8 character encoding?
thank you
Marco
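For the archives, a likely explanation: the Windows console's OEM code page (often CP850) is not UTF-8, and “´┐¢” is exactly the UTF-8 encoding of the replacement character U+FFFD (bytes EF BF BD) rendered through CP850. A sketch of the mechanism and the usual workaround; the CP850 default, iconv's CP850 support, and KAFKA_OPTS being passed through by kafka-run-class.bat are assumptions:

```shell
# Demonstrate the mojibake: the UTF-8 bytes of U+FFFD (EF BF BD),
# read back as CP850 glyphs, come out as "´┐¢" (assumes iconv knows CP850).
MOJIBAKE=$(printf '\357\277\275' | iconv -f CP850 -t UTF-8)
echo "$MOJIBAKE"

# Usual workaround on Windows before starting the console clients:
#   chcp 65001                              (switch the console to UTF-8)
#   set KAFKA_OPTS=-Dfile.encoding=UTF-8    (force the JVM default charset)
```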
January 16, 2020 10:52 AM
> To: Kafka Users
> Subject: Re: Kafka Broker leader change without effect
>
> Marco, the replication factor of 3 is not possible when you only have
> two brokers, thus the producer will fail to send records until the
> third broker is restored. You wou
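To make the quoted constraint concrete, a small guard sketch: a topic's replication factor can never exceed the number of live brokers. The broker count, topic name, and the commented kafka-topics.sh invocation are illustrative assumptions:

```shell
# Replication factor must not exceed the number of live brokers.
BROKER_COUNT=2
REPLICATION=3
if [ "$REPLICATION" -gt "$BROKER_COUNT" ]; then
  echo "replication factor $REPLICATION exceeds $BROKER_COUNT brokers; using $BROKER_COUNT"
  REPLICATION=$BROKER_COUNT
fi
# bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
#   --topic test --partitions 3 --replication-factor "$REPLICATION"
```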
Hello guys!
I have a problem I wrote about on Stack Overflow here:
https://stackoverflow.com/questions/59772124/kafka-broker-leader-change-without-effect
Can you help me?
thank you
Marco
> "production ready" configuration
> etc. It's just to play with the system.
>
> You should change the config accordingly.
>
>
> -Matthias
>
>
> On 11/1/18 1:57 AM, Marco Ippolito wrote:
> > Hi all,
> > yesterday I installed Kafka 2.0 in my Ubuntu 18.04.01
between the producers and the
receivers through the Kafka broker?
Looking forward to your kind help.
Marco
Best,
Marco
-----Original Message-----
From: 赖剑清 [mailto:laijianq...@tp-link.com.cn]
Sent: Tuesday, 23 October 2018 05:55
To: users@kafka.apache.org
Subject: RE: When Kafka stores group information in zookeeper?
Yeah, it works.
It all depends on the address of the bootstrap-server: address of
with ZooKeeper-based consumers:
bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --describe --group my-group
Do you have any hint on this behavior?
Thank you very much,
Marco Spadoni
Italiaonline S.p.A.
SessionWindows
.with(WINDOW_INACTIVITY_GAPS_MS) //5 minutes
.until(WINDOW_MAINTAIN_DURATION_MS), // 7 minutes
"aggregate_store")
.to(windowedSerde, mySessionSerde, SINK_TOPIC_KTABLE);
KafkaStreams stream = new KafkaStreams(builder, propsActivity);
stream.start();
Object activity, String
loc) {
JsonObject join = activity.copy();
join.put("city" , loc);
return join;
}
Hmm... your question makes me think... are you telling me that since
both are KStreams, they actually need to be re-streamed in sync?
Thanks a lot.
Marco
2017-04-16 21:
essing tasks).
Can you help me find the right clue? Do I have to push the two
streams in a synchronized fashion (such as simulating the real-time data
flow, as it came into the system the first time)?
Thanks for your support.
Best
Marco
ity (a JsonObject object) and a userLocation (a
String object) element.
How is this possible? I can't see where I'm going wrong.
Do you have any clue as to why this is happening?
thanks a lot for your support and work.
Best
Marco
Hello Damian,
Thanks a lot for your valuable support.
I confirm that your workaround works perfectly for my use case.
I'll be glad to help you test the original code once the issue
you've spotted is solved.
Thanks a lot again.
Marco.
On 06 Mar 2017 16:03,
rg2;
}
}
BTW (this will be a topic for another thread anyway...) is there a way to
be in control of the MySession lifecycle? I was thinking of pooling them to
reduce the GC workload.
thanks a lot for your precious help.
Marco
2017-03-06 11:59 GMT+01:00 Damian Guy :
> Hi Marco,
>
> Your co
his?
Please let me know if you need more info.
thanks a lot,
Marco
Try using --packages to include the jars. From the error, it seems it's looking
for a main class in the jars, but you are running a Python script...
On 25 Feb 2017 10:36 pm, "Raymond Xie" wrote:
That's right Anahita, however, the class name is not indicated in the
original github project so I don't know wh
KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();
It doesn't work with my real use case either.
While debugging, I've noticed that it doesn't even reach the beginning
of the stream pipeline (groupBy).
Can you please help investigating this issue?
Best.
Marco
"10.1.83.5" (of course, we had to do this for each hostname).
I hope that all these solutions will help others with the same issue.
Thanks a lot for your support!
Kind regards,
Marco
2016-06-02 5:40 GMT+02:00 Mudit Kumar :
> I do not think you need a public hostname. I have a similar setup
Hi Ben,
Thanks for your answer. What if the instance does not have a public DNS
hostname?
These are all private nodes without public/elastic IP, therefore I don't
know what to set.
Marco
2016-06-01 15:09 GMT+02:00 Ben Davison :
> Hi Marco,
>
> We use the public DNS hostname th
Considering that
we have a VPN between the clusters, the only choice left seems to be the
one setting the hostname.
What should this value be? Is there anything else I need to know for this
kind of setup? Any suggestions?
Thanks in advance.
Kind regards,
Marco
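For reference, the broker-side setting this thread converges on lives in server.properties. A sketch, assuming 0.8-era property names and reusing the private address mentioned elsewhere in the thread (10.1.83.5); treat the values as placeholders:

```properties
# server.properties — the address the broker advertises to clients
# (assumption: 0.8-era property names; newer brokers use advertised.listeners)
advertised.host.name=10.1.83.5
advertised.port=9092

# modern equivalent:
# advertised.listeners=PLAINTEXT://10.1.83.5:9092
```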
OK, I downloaded Kafka myself and that works. Anyway, thanks for the help, guys!
2014-12-05 15:55 GMT+01:00 Marco :
> The port in server.configuration is indeed 6667.
>
> bin/kafka-console-producer.sh --broker-list localhost:6667 --topic test
>
> -> same error :(
>
>
host... don't know if this can interfere?
2014-12-05 15:14 GMT+01:00 Harsha :
> I think the default port for kafka running there is 6667. Can you check
> server.properties to see what the port number is?
> -Harsha
>
> On Fri, Dec 5, 2014, at 06:10 AM, Marco wrote:
>> Yes, it
Yes, it's online, version 0.8.1.2.2.0.0-1084. jps also lists it.
2014-12-05 14:56 GMT+01:00 svante karlsson :
> I haven't run the sandbox but check if the kafka server is started at all.
>
> ps -ef | grep kafka
>
>
>
> 2014-12-05 14:34 GMT+01:00 Marco :
>
In my server.properties the
host is set to host.name=sandbox.hortonworks.com, which is correct.
Thanks for any help,
Marco
--
Best regards,
Marco
should do the work.
Guozhang
On Mon, Nov 10, 2014 at 7:47 AM, Marco wrote:
We're using kafka 0.8.1.1.
>
>About the network partition, it is an option.
>Now I'm just wondering if deleting the data folder on the second node will at
>least have it come up again.
>
>i th
2014 16:36, Guozhang Wang wrote:
Hi Marco,
The fetch error comes from "UnresolvedAddressException", could you try to
check if you have a network partition issue during that time?
As for the "Too many file handlers", I think this is due to not properly
handling such excepti
Hi,
I've got a 2-machine Kafka cluster. For some reason, after a restart the second
node won't start.
I get tons of "Error in fetch Name" until I get a final "Too many open files".
How do I start dealing with this?
thanks
this is the error
[2014-11-10 14:48:01,169] INFO [Kafka Server 2], start
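Since this thread ends in "Too many open files", a quick check worth recording here; the limits.conf values are illustrative assumptions, not recommendations:

```shell
# Soft limit on open file descriptors for the shell that launches Kafka.
SOFT_LIMIT=$(ulimit -n)
echo "open-file soft limit: $SOFT_LIMIT"

# A broker holds one descriptor per log segment and per client connection,
# so a low limit plus a fetch-retry storm exhausts it quickly. A typical
# fix (illustrative values) in /etc/security/limits.conf:
#   kafka  soft  nofile  100000
#   kafka  hard  nofile  100000
```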