Hey Edward,

Just to summarize and make sure I understood your question: you want to
implement some chaos testing to validate the Kafka EOS (exactly-once
semantics) model, but you are not sure how to start, and you are curious
whether there is already work in the community doing that?

For the correctness of Kafka EOS, we have tons of unit tests and system
tests proving its functionality. They can be found inside the repo; you
could check them out and see whether we still have gaps (which I believe we
definitely do).

Boyang

On Fri, Oct 25, 2019 at 7:25 PM Edward Capriolo <edlinuxg...@gmail.com>
wrote:

> Hello all,
>
> I used to work in adtech. Adtech was great: CPM for ads is $1-$5 per
> thousand ad impressions. If the numbers are 5% off, you can blame
> JavaScript click trackers.
>
> Now I work in a non-adtech industry, and they are really, really serious
> about exactly once.
>
> So there is this blog:
>
> https://www.confluent.io/blog/transactions-apache-kafka/
>
> Great little snippet of code. I think I can copy it and implement it
> correctly. But if you read the section about zombie fencing, you learn
> that you either need to manually assign partitions or use the rebalance
> listener and have N producers. Hunting around GitHub is not super helpful;
> some code snippets are less complete than even the snippet in the blog.
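
The per-partition fencing pattern described in that blog can be sketched roughly as follows, assuming the confluent-kafka Python client; the broker address, topic names, and the "my-app" id are made-up placeholders, and this is an illustration of the pattern, not a hardened implementation:

```python
# Sketch only: one worker per partition, with a transactional.id derived
# from the assigned partition so a restarted worker fences its zombie
# predecessor. Assumes the confluent-kafka client; "my-app", "input",
# "output", and the broker address are hypothetical.

def fencing_transactional_id(app_id: str, topic: str, partition: int) -> str:
    """Stable per-partition id: the same partition always yields the same
    transactional.id, so the broker bumps the producer epoch on restart and
    fences any older (zombie) instance still holding that id."""
    return f"{app_id}-{topic}-{partition}"

def run_partition_worker(partition: int) -> None:
    # Imported here so the pure helper above works without the client installed.
    from confluent_kafka import Consumer, Producer, TopicPartition

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "my-app",
        "isolation.level": "read_committed",
        "enable.auto.commit": False,
    })
    consumer.assign([TopicPartition("input", partition)])  # manual assignment

    producer = Producer({
        "bootstrap.servers": "localhost:9092",
        "transactional.id": fencing_transactional_id("my-app", "input", partition),
    })
    producer.init_transactions()  # fences zombies sharing this transactional.id

    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        producer.begin_transaction()
        producer.produce("output", msg.value())
        # Commit the consumer offset inside the same transaction, so the
        # read and the write succeed or fail together.
        producer.send_offsets_to_transaction(
            [TopicPartition("input", partition, msg.offset() + 1)],
            consumer.consumer_group_metadata(),
        )
        producer.commit_transaction()
```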
>
> I looked at what spring-kafka does. It does get the zombie fencing correct
> with respect to the fencing id, and the other bits of the code seem
> plausible.
>
> Notice I said "plausible", because I do not count a few end-to-end tests
> running on a single VM as solid enough evidence that this works in the face
> of failures.
>
> I have been contemplating how one stress-tests this exactly-once concept,
> with something Jepsen-like, or something brute-force that I can run for 5
> hours in a row.
>
> If I faithfully implemented the code in the transactional read-write loop
> and fed it into my Jepsen-like black-box tester, it should:
>
> Create a topic with 10 partitions; start launching the read-write
> transaction code; start feeding in input data, maybe strings like 1-1000;
> now start randomly killing VMs with kill -9 and graceful exits, maybe even
> killing Kafka itself; and make sure 1-1000 pop out on the other end.
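
The acceptance check at the end of a run like that is simple to state as code. A minimal sketch, assuming every sent and consumed value is logged somewhere we can read back:

```python
from collections import Counter

def check_exactly_once(sent, consumed):
    """Compare what was fed in against what popped out the other end.
    Returns (lost, duplicated); both must be empty for exactly-once."""
    counts = Counter(consumed)
    lost = [m for m in sent if counts[m] == 0]
    duplicated = [m for m, c in counts.items() if c > 1]
    return lost, duplicated

sent = [str(i) for i in range(1, 1001)]   # "1" .. "1000"
# A hypothetical run that dropped "7" and delivered "42" twice:
consumed = [m for m in sent if m != "7"] + ["42"]
lost, duplicated = check_exactly_once(sent, consumed)
print(lost, duplicated)   # ['7'] ['42']
```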
>
> I thought of some other "crazy" ideas. One such idea:
>
> If I make a transactional "echo" (read x; write x back to the same topic),
> run N instances of that, and kill them randomly: if I am losing messages
> (and not duplicating them), then the topic would eventually have no data.
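
That drain intuition can be sanity-checked without Kafka at all: model each broken transaction as a small per-message loss probability and watch the count decay. A toy simulation (the numbers and probabilities are arbitrary):

```python
import random

def simulate_echo(initial: int, rounds: int, loss_prob: float, seed: int = 42) -> int:
    """Each round re-publishes every message on the topic; with probability
    loss_prob a message is lost mid-flight (a broken transaction).
    Returns the message count after all rounds."""
    rng = random.Random(seed)
    count = initial
    for _ in range(rounds):
        count = sum(1 for _ in range(count) if rng.random() >= loss_prob)
    return count

print(simulate_echo(1000, 5000, 0.0))    # exactly-once: still 1000
print(simulate_echo(1000, 5000, 0.001))  # leaky: drains toward 0
```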
>
> Or should I make a program with some math formula, like receive x, write
> xx? If duplication is happening, I would start seeing multiple xx's.
>
> Or send 1,000,000,000 messages through, have the consumer log them to a
> file, and then use an ETL tool to validate that the messages come out on
> the other side.
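
If the payloads are sequence numbers, the billion-message validation does not strictly need an ETL tool: one bit per expected id (~125 MB for 10^9 ids) catches both duplicates and gaps in a single pass over the consumer log. A sketch of that checker:

```python
def validate_sequence(consumed_ids, total):
    """One bit per expected id: a bit seen set twice is a duplicate,
    a bit still unset at the end is a loss.
    Returns (n_duplicates, n_missing); ids are 0-based, 0 .. total-1."""
    seen = bytearray((total + 7) // 8)  # ~125 MB when total=1_000_000_000
    duplicates = 0
    for i in consumed_ids:
        byte, bit = divmod(i, 8)
        if seen[byte] & (1 << bit):
            duplicates += 1
        else:
            seen[byte] |= 1 << bit
    missing = total - sum(bin(b).count("1") for b in seen)
    return duplicates, missing

print(validate_sequence([0, 1, 2, 2, 4], 6))  # (1, 2): one dup; ids 3 and 5 missing
```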
>
> Or should I use a NoSQL store with increments, count up, and ensure no key
> has been incremented twice?
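
In miniature, with an in-memory dict standing in for the NoSQL store, that invariant looks like this (a sketch; a real run would have the consumers hit the store):

```python
from collections import defaultdict

class CountingStore:
    """Stand-in for a NoSQL store with atomic increments. Records any key
    that is ever incremented more than once (an exactly-once violation)."""
    def __init__(self):
        self.counts = defaultdict(int)
        self.violations = set()

    def incr(self, key):
        self.counts[key] += 1
        if self.counts[key] > 1:
            self.violations.add(key)

store = CountingStore()
for key in ["m1", "m2", "m3", "m2"]:   # "m2" processed twice
    store.incr(key)
print(sorted(store.violations))        # ['m2']
```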
>
> note: I realize I can just use Kafka Streams or Storm, which have their own
> mechanisms to guarantee "exactly once", but I am looking for a way to prove
> what can be done with pure Kafka (and not just prove it does adtech-grade
> work, where 5% here or there is good enough).
>
> I imagine someone somewhere must be doing this. Any tips on how or where?
> Is it part of some Kafka release stress test? I'm down to write it if it
> does not exist.
>
> Thanks,
> Edward
>
