Thanks, Robin. Got the clarification.
Thanks and Regards,
Himanshu Shukla
On Wed, Jan 15, 2020 at 3:40 PM Robin Moffatt wrote:
> If you use a load balancer bear in mind the importance of the
> advertised.listeners setting:
> https://rmoff.net/2018/08/02/kafka-listeners-explained/
>
>
> --
>
Hi all
Is it possible to deploy just the connector/worker stack onto, say, a web
server or a JBoss server?
Guessing the connector is then configured in standalone mode?
G
--
You have the obligation to inform one honestly of the risk, and as a person
you are committed to educate yourself
Hey Anindya,
On Wed, 15 Jan 2020 at 18:23, Anindya Haldar
wrote:
> Thanks for the response.
>
> Essentially, we are looking for a confirmation that a send acknowledgement
> received at the client’s end will ensure the message is indeed persisted to
> the replication logs. We initially
Thanks for the response.
Essentially, we are looking for a confirmation that a send acknowledgement
received at the client’s end will ensure the message is indeed persisted to the
replication logs. We initially wondered whether the client has to make an
explicit flush() call or whether it has
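To make the guarantee concrete: with acks=all, the Future returned by KafkaProducer.send() completes only after the broker-side acknowledgement, so a successful get() already implies the record was appended to the logs of the in-sync replicas; no per-record flush() is required (flush() only blocks until buffered records complete, it adds no durability). A minimal producer config sketch for this setup (the broker address is a placeholder):

```properties
# Hypothetical broker address; adjust to your cluster
bootstrap.servers=broker1:9092
# Wait for the leader and all in-sync replicas to acknowledge
# before the send Future completes
acks=all
# Optional: avoid duplicates introduced by retries
enable.idempotence=true
```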
You can use the RUN SCRIPT command for this
See
https://docs.confluent.io/current/ksql/docs/tutorials/examples.html#running-ksql-statements-from-the-command-line
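For example, assuming the statements live in a file such as C:\ksql\queries.sql (the path is illustrative), from the KSQL CLI:

```sql
RUN SCRIPT 'C:\ksql\queries.sql';
```

Alternatively, the file can be piped into the CLI from the command line (e.g. redirecting the script into `ksql http://localhost:8088` on stdin), as shown in the linked examples page.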
On Wed, Jan 15, 2020 at 8:12 PM KhajaAsmath Mohammed <
mdkhajaasm...@gmail.com> wrote:
> Hi,
>
> I am looking to run the ksql query
thanks
G
On Wed, Jan 15, 2020 at 6:19 PM Robin Moffatt wrote:
> If spooldir doesn't suit, there's also
> https://github.com/streamthoughts/kafka-connect-file-pulse to check out.
> Also bear in mind tools like filebeat from Elastic support Kafka as a
> target.
>
>
> --
>
> Robin Moffatt |
Anindya,
On Wed, 15 Jan 2020 at 16:49, Anindya Haldar
wrote:
> In our case, the minimum in-sync replicas is set to 2.
>
> Given that, what will be expected behavior for the scenario I outlined?
>
This means you will get confirmation once 2 of them have acknowledged the
write, so you will always have at least 2 copies.
In our case, the minimum in-sync replicas is set to 2.
Given that, what will be expected behavior for the scenario I outlined?
Sincerely,
Anindya Haldar
Oracle Responsys
> On Jan 15, 2020, at 6:38 AM, Ismael Juma wrote:
>
> To all the in-sync replicas. You can set the minimum number of
If spooldir doesn't suit, there's also
https://github.com/streamthoughts/kafka-connect-file-pulse to check out.
Also bear in mind tools like filebeat from Elastic support Kafka as a
target.
--
Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
On Wed, 15 Jan 2020 at
Pretty sure WePay are working on this:
https://github.com/debezium/debezium-incubator
--
Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
On Wed, 15 Jan 2020 at 13:55, Sachin Mittal wrote:
> Hi,
> I was looking for any cassandra source connector to read data from
>
Hi,
I am looking to run the ksql query with the queries file in Windows. May I
know how to achieve this? I cannot run it directly in the CLI as the query has
more than 3000 columns. I am looking to create a stream with it
Thanks,
Asmath
To all the in-sync replicas. You can set the minimum number of in-sync
replicas via the min.insync.replicas topic/broker config.
Ismael
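As a sketch, the topic-level override can be set with the kafka-configs tool shipped with recent Kafka versions (the broker address and topic name here are placeholders):

```shell
kafka-configs --bootstrap-server broker1:9092 \
  --entity-type topics --entity-name my-topic \
  --alter --add-config min.insync.replicas=2
```

Note that min.insync.replicas is only enforced for producers sending with acks=all; with acks=1 the leader acknowledges alone regardless of this setting.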
On Tue, Jan 14, 2020 at 11:11 AM Anindya Haldar
wrote:
> I have a question related to the semantics of a producer send and the get
> calls on the future
I would appreciate any advice on this if anyone has experienced a similar
issue.
Thanks,
Yu Watanabe
On Wed, Jan 15, 2020 at 10:53 PM Yu Watanabe wrote:
>
> I found the reason. I am not sure if this is Azure problem or Jdbc
> problem though...
>
> First of all my apology that I had
Hi,
I was looking for any cassandra source connector to read data from
cassandra column family and insert the data into kafka topic.
Are you folks aware of any community-supported version of such a tool?
I found one such:
https://docs.lenses.io/connectors/source/cassandra.html
However I am not
I found the reason. I am not sure if this is an Azure problem or a JDBC
problem, though...
First of all, my apologies that I had not elaborated on my environment.
I use,
DataSource: Azure PostgreSQL Server (Read Replica)
Kafka Connect 2.3.1 (strimzi 0.15.0)
Kafka Broker 2.3.1 (strimzi 0.15.0)
In this
Hi Tom
Will do. For now I have 5 specific sources I need to ingest:
1. reading Apache web server log files (http.log)
2. reading in our custom log files
3. reading in log4j log files
4. MySQL connection, as a source
5. Cassandra connection, as a sink
I cannot use NFS mounting the source
Hi George,
Since you mentioned CDC specifically you might want to check out Debezium (
https://debezium.io/) which operates as a connector of the sort Robin
referred to and does CDC for MySQL and others.
Cheers,
Tom
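As an illustrative sketch (all connection details below are placeholders), a Debezium MySQL source connector is configured through the Kafka Connect REST API with JSON along these lines:

```json
{
  "name": "inventory-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "mysql.example.com",
    "database.port": "3306",
    "database.user": "debezium",
    "database.password": "secret",
    "database.server.id": "184054",
    "database.server.name": "dbserver1",
    "database.history.kafka.bootstrap.servers": "broker1:9092",
    "database.history.kafka.topic": "schema-changes.inventory"
  }
}
```

This would be POSTed to a running Connect worker, e.g. `curl -X POST -H "Content-Type: application/json" --data @connector.json http://localhost:8083/connectors`; Debezium then streams each captured change event to a Kafka topic per table.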
On Wed, Jan 15, 2020 at 10:18 AM Robin Moffatt wrote:
> The integration part
The integration part of Apache Kafka that you're talking about is
called Kafka Connect. Kafka Connect runs as its own process, known as
a Kafka Connect Worker, either on its own or as part of a cluster. Kafka
Connect will usually be deployed on a separate instance from the Kafka
brokers.
Kafka
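As a minimal sketch of starting a worker (file names and contents are illustrative), Kafka ships scripts for both deployment modes:

```shell
# Standalone mode: a single worker; connector config passed on the
# command line, offsets stored in a local file
connect-standalone.sh worker.properties my-source-connector.properties

# Distributed mode: one or more workers form a cluster; connectors
# are then submitted via the REST API (default port 8083)
connect-distributed.sh worker.properties
```

Distributed mode is generally preferred even for a single worker, since it stores offsets and configs in Kafka topics and allows connectors to be managed over REST.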
If you use a load balancer bear in mind the importance of the
advertised.listeners setting:
https://rmoff.net/2018/08/02/kafka-listeners-explained/
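As a sketch of why this matters (hostnames and ports below are placeholders): each broker must advertise an address that the client can actually reach after the initial bootstrap connection, which is typically a per-broker address rather than the load balancer's:

```properties
# server.properties on broker 0 (values are illustrative)
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:19092
advertised.listeners=INTERNAL://broker0.internal:9092,EXTERNAL://broker0.public.example.com:19092
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

Clients bootstrap via any reachable address, then reconnect to the advertised listener of whichever broker leads each partition, so a load balancer in front of the brokers does not remove the need for correct advertised.listeners.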
--
Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
On Wed, 15 Jan 2020 at 08:15, Himanshu Shukla
wrote:
> is it
is it recommended to use an ALB (running all 3 nodes on EC2 instances) or
something similar?
Thanks and Regards,
Himanshu Shukla
On Wed, Jan 15, 2020 at 12:52 PM Tom Bentley wrote:
> Hi Himanshu,
>
> Short answer: yes.
>
> The way a Kafka client works is to connect to one of the given bootstrap