Re: [EXTERNAL] [VOTE] KIP-382 MirrorMaker 2.0

2019-01-01 Thread McCaig, Rhys
+1 (non-binding). Fantastic work on the KIP, Ryanne.

> On Dec 25, 2018, at 9:10 AM, Stephane Maarek wrote:
> 
> +1 ! Great stuff
> 
> Stephane
> 
> On Mon., 24 Dec. 2018, 12:07 pm Edoardo Comar wrote:
>> +1 non-binding
>> 
>> thanks for the KIP
>> --
>> 
>> Edoardo Comar
>> 
>> IBM Event Streams
>> 
>> 
>> Harsha  wrote on 21/12/2018 20:17:03:
>> 
>>> From: Harsha 
>>> To: dev@kafka.apache.org
>>> Date: 21/12/2018 20:17
>>> Subject: Re: [VOTE] KIP-382 MirrorMaker 2.0
>>> 
>>> +1 (binding). Nice work, Ryanne.
>>> -Harsha
>>> 
>>> On Fri, Dec 21, 2018, at 8:14 AM, Andrew Schofield wrote:
 +1 (non-binding)
 
 Andrew Schofield
 IBM Event Streams
 
On 21/12/2018, 01:23, "Srinivas Reddy" wrote:
 
+1 (non-binding)

Thank you Ryanne for the KIP, let me know if you need support in implementing it.
 
-
Srinivas
 
- Typed on tiny keys. pls ignore typos.{mobile app}
 
 
On Fri, 21 Dec, 2018, 08:26 Ryanne Dolan wrote:
 
> Thanks for the votes so far!
> 
> Due to recent discussions, I've removed the high-level REST API from the KIP.
> 
> On Thu, Dec 20, 2018 at 12:42 PM Paul Davidson wrote:
> 
>> +1
>> 
>> Would be great to see the community build on the basic approach we took with Mirus. Thanks Ryanne.
>> 
>> On Thu, Dec 20, 2018 at 9:01 AM Andrew Psaltis wrote:
>> 
>>> +1
>>> 
>>> Really looking forward to this and to helping in any way I can. Thanks for kicking this off, Ryanne.
>>> 
>>> On Thu, Dec 20, 2018 at 10:18 PM Andrew Otto wrote:
>>> 
 +1
 
 This looks like a huge project! Wikimedia would be very excited to have this. Thanks!
 
 On Thu, Dec 20, 2018 at 9:52 AM Ryanne Dolan wrote:
 
> Hey y'all, please vote to adopt KIP-382 by replying +1 to this thread.
> 
> For your reference, here are the highlights of the proposal:
> 
> - Leverages the Kafka Connect framework and ecosystem.
> - Includes both source and sink connectors.
> - Includes a high-level driver that manages connectors in a dedicated cluster.
> - High-level REST API abstracts over connectors between multiple Kafka clusters.
> - Detects new topics, partitions.
> - Automatically syncs topic configuration between clusters.
> - Manages downstream topic ACLs.
> - Supports "active/active" cluster pairs, as well as any number of active clusters.
> - Supports cross-data center replication, aggregation, and other complex topologies.
> - Provides new metrics including end-to-end replication latency across multiple data centers/clusters.
> - Emits offsets required to migrate consumers between clusters.
> - Tooling for offset translation.
> - MirrorMaker-compatible legacy mode.
> 
> Thanks, and happy holidays!
> Ryanne
> 
 
>>> 
>> 
>> 
>> --
>> Paul Davidson
>> Principal Engineer, Ajna Team
>> Big Data & Monitoring
>> 
> 
 
 
>>> 
>> 
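To make the highlights above a bit more concrete, here is a minimal sketch of the kind of cross-cluster replication setup the KIP describes, written as a small Java program that just assembles and prints the configuration. The property names (cluster aliases, per-cluster bootstrap servers, source->target topic whitelists) follow the style of the examples in the KIP and should be treated as illustrative assumptions rather than the final configuration surface; the broker addresses are placeholders.

import java.util.LinkedHashMap;
import java.util.Map;

public class Mm2ConfigSketch {
    public static void main(String[] args) {
        // Illustrative MirrorMaker 2.0-style configuration for an
        // "active/active" pair of clusters, as described in KIP-382.
        // Key names are assumptions based on the KIP's examples.
        Map<String, String> config = new LinkedHashMap<>();
        config.put("clusters", "primary, backup");
        config.put("primary.bootstrap.servers", "primary-broker:9092"); // placeholder
        config.put("backup.bootstrap.servers", "backup-broker:9092");   // placeholder

        // Replicate all topics in both directions; per the KIP, replicated
        // topics would be renamed with the source cluster alias as a prefix
        // (e.g. "primary.mytopic") to support active/active topologies.
        config.put("primary->backup.enabled", "true");
        config.put("primary->backup.topics", ".*");
        config.put("backup->primary.enabled", "true");
        config.put("backup->primary.topics", ".*");

        config.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}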



[Discuss] Question on KIP-298: Error Handling in Kafka Connect

2019-01-01 Thread Pere Urbón Bayes
Hi,
 a quick question on the KIP-298 dead letter queue: as I read the KIP, it is
only available for sink connectors.

While I understand the challenges of defining a dead-letter queue for
incoming (source-side) messages, I wanted to ask/discuss what the thinking is
here: has that option been completely discarded?

I can see it being useful for messages that were pulled from the source but
somehow could not be ingested into Kafka, for example because of
serialisation errors.

What do you think?
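
For reference, the sink-side behaviour that KIP-298 added is driven entirely by connector configuration. Below is a minimal sketch, written as a small Java program that assembles such a configuration; the connector name, connector class, and topic names are hypothetical placeholders, while the errors.* keys are the ones KIP-298 introduced for sink connectors.

import java.util.LinkedHashMap;
import java.util.Map;

public class DlqSinkConfigSketch {
    public static void main(String[] args) {
        // Hypothetical sink connector configuration enabling the KIP-298
        // dead letter queue. Connector name, class, and topics are placeholders.
        Map<String, String> config = new LinkedHashMap<>();
        config.put("name", "my-sink");
        config.put("connector.class", "com.example.MySinkConnector");
        config.put("topics", "input-topic");

        // Tolerate per-record failures instead of failing the task ...
        config.put("errors.tolerance", "all");
        // ... and route the failed records to a dead letter queue topic.
        config.put("errors.deadletterqueue.topic.name", "my-sink-dlq");
        config.put("errors.deadletterqueue.topic.replication.factor", "1");
        // Attach failure context (exception, original topic/partition/offset)
        // to the DLQ records as headers.
        config.put("errors.deadletterqueue.context.headers.enable", "true");
        // Also log failures for visibility.
        config.put("errors.log.enable", "true");

        config.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}

(The same keys would typically be submitted as JSON to the Connect REST API when creating the connector.)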

-- 
Pere Urbon-Bayes
Software Architect
http://www.purbon.com
https://twitter.com/purbon
https://www.linkedin.com/in/purbon/


Re: Kafka tests on a remote cluster

2019-01-01 Thread Pere Urbón Bayes
Hi,
 if I understand your question properly, you are aiming to validate failure
scenarios. I usually see this done either for learning purposes, basically to
answer how Kafka would react to such-and-such a situation, or to validate the
current setup / configuration an organisation has.

For the first case (and also the second, though not directly against the
cluster itself), colleagues of mine wrote
https://github.com/Dabz/kafka-boom-boom, which uses Docker to simulate these
scenarios. I've seen this approach used quite often.

These days I would only run chaos testing with the support of tooling like
the Netflix Chaos Monkey (https://github.com/Netflix/chaosmonkey) or the
other tools available for the cloud providers, Kubernetes, or Cloud Foundry.
A nice list is available at
https://github.com/dastergon/awesome-chaos-engineering#notable-tools

Cheers

-- Pere




Message from Parviz deyhim on Thu, 27 Dec 2018 at 21:57:

> +dev@kafka.apache.org
>
> On Wed, Dec 26, 2018 at 8:53 PM Parviz deyhim  wrote:
>
> > Thanks, fair points. Probably best if I simplify the question: how does
> > the Kafka community run tests besides using mocked local Kafka components?
> > Surely there are tests that confirm different failure scenarios, such as
> > losing a broker in a real clustered environment (a multi-node cluster with
> > IPs, ports, hostnames, etc.). The answer would be a good starting point
> > for me.
> >
> > On Wed, Dec 26, 2018 at 6:11 PM Stephen Powis wrote:
> >
> >> Without looking into how the integration tests work, my best guess is
> >> that within the context they were written to run in, it doesn't make
> >> sense to run them against a remote cluster. The "internal" cluster is
> >> running the same code, so why require coordinating with an external
> >> dependency?
> >>
> >> For the use case you gave (and I'm not sure whether tests exist that
> >> cover this behavior or not), running the brokers locally in the context
> >> of the tests means that those tests have control over the brokers (i.e.
> >> they can shut them off, restart them, etc. programmatically) and can
> >> validate behavior. Coordinating these operations on a remote broker
> >> would be significantly more difficult.
> >>
> >> Not sure this helps... but perhaps you're either asking the wrong
> >> questions or trying to solve your problem with the wrong set of tools?
> >> My gut feeling says that if you want to do a full-scale multi-server
> >> load / HA test, Kafka's test suite is not the best place to start.
> >>
> >> Stephen
> >>
> >>
> >>
> >> On Thu, Dec 27, 2018 at 10:53 AM Parviz deyhim wrote:
> >>
> >> > Hi,
> >> >
> >> > I'm looking to see who has done this before and to get some guidance. On a
> >> > frequent basis I'd like to run basic tests on a remote Kafka cluster while
> >> > some random chaos/faults are being performed. In other words, I'd like to
> >> > run chaos engineering tasks (network outage, disk outage, etc.) and see how
> >> > Kafka behaves. For example:
> >> >
> >> > 1) bring some random broker node down
> >> > 2) send 2000 messages
> >> > 3) consume the messages
> >> > 4) confirm there's no data loss
> >> >
> >> > My question: I'm pretty sure most of the scenarios I'm looking to test are
> >> > covered by Kafka's integration, unit, and other existing tests. What I
> >> > cannot figure out is how to run those tests against a remote cluster
> >> > instead of the local one the tests seem to run on. For example, I'd like to
> >> > run the following command but have the tests executed against a remote
> >> > cluster of my choice:
> >> >
> >> > ./gradlew cleanTest integrationTest
> >> >
> >> > Any guidance/help would be appreciated.
> >> >
> >> > Thanks
> >> >
> >>
> >
>
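
As a rough sketch of the produce/consume round-trip check described in steps 1-4 of the quoted message (bring a broker down, send 2000 messages, consume them, confirm no loss), something like the following standalone Java program could be run against a remote cluster while faults are injected. It assumes a reachable cluster and a pre-created, replicated test topic; the bootstrap address, topic name, and message count are placeholders.

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.time.Duration;
import java.util.Collections;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class RoundTripCheck {
    public static void main(String[] args) throws Exception {
        String bootstrap = "remote-broker-1:9092"; // placeholder remote cluster address
        String topic = "chaos-test";               // placeholder, pre-created topic
        int count = 2000;

        // 2) produce the messages with acks=all so each send survives a broker loss
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        p.put(ProducerConfig.ACKS_CONFIG, "all");
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            for (int i = 0; i < count; i++) {
                // block on each send so a failed write is surfaced immediately
                producer.send(new ProducerRecord<>(topic, Integer.toString(i), "msg-" + i)).get();
            }
        }

        // 3) consume from the beginning and 4) confirm nothing was lost
        Properties c = new Properties();
        c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        c.put(ConsumerConfig.GROUP_ID_CONFIG, "round-trip-check");
        c.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        Set<String> seenKeys = new HashSet<>();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList(topic));
            long deadline = System.currentTimeMillis() + 60_000;
            while (seenKeys.size() < count && System.currentTimeMillis() < deadline) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    seenKeys.add(record.key());
                }
            }
        }
        System.out.println(seenKeys.size() == count
                ? "OK: no data loss"
                : "LOST " + (count - seenKeys.size()) + " messages");
    }
}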


-- 
Pere Urbon-Bayes
Software Architect
http://www.purbon.com
https://twitter.com/purbon
https://www.linkedin.com/in/purbon/