Scenario
========

A partition that Flink is reading:
[ 1 - 2 - 3 - 4 - 5 - 6 - 7 | 8 - 9 - 10 - 11 | 12 - 13 ]
[         committed         |    in flight    |  unread  ]

My mental model: Kafka hands Flink successive chunks off the head of the
uncommitted part of the partition, Flink processes them downstream, and the
offsets are committed once a checkpoint completes. Is that roughly right?
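
For concreteness, this is the kind of setup I mean, using Flink's KafkaSource
(broker address, topic, and group id are placeholders); with checkpointing
enabled, offsets are committed back to Kafka on checkpoint completion, which
is what produces the "committed | in flight" split above:
```
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class OffsetDemo {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Offsets are committed to Kafka only when a checkpoint completes.
        env.enableCheckpointing(60_000);

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")   // placeholder broker
                .setTopics("events")                     // placeholder topic
                .setGroupId("flink-reader")              // placeholder group
                .setStartingOffsets(OffsetsInitializer.committedOffsets())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
           .print();

        env.execute("offset-demo");
    }
}
```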

So suppose that, when the in-flight batch is written to an external API:
- 8 & 10 succeed (API call succeeds)
- 9 & 11 fail (API call fails)

Failure Handling options
========================

As far as I can tell, there are two options for handling the failures:

A. Try/catch and write the failures to a dead-letter queue
```
    try {
        api.write(8, 9, 10, 11);
    } catch (ApiException e) {
        // 9 and 11 failed to write to the API, so we dead-letter them
        deadletterQueue.write(e.failedSet());
    }
```
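
In Flink itself, one way I could express (A) is a side output instead of a
literal try/catch around the whole batch; a sketch, where "Record", "input"
(a DataStream<Record>) and "api" are placeholders for my types:
```
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

final OutputTag<Record> deadLetters = new OutputTag<Record>("dead-letters") {};

SingleOutputStreamOperator<Record> ok = input.process(
    new ProcessFunction<Record, Record>() {
        @Override
        public void processElement(Record r, Context ctx, Collector<Record> out) {
            try {
                api.write(r);                // write one record, not the whole batch
                out.collect(r);              // success path continues downstream
            } catch (Exception e) {
                ctx.output(deadLetters, r);  // side-output exactly the failed record
            }
        }
    });

DataStream<Record> failures = ok.getSideOutput(deadLetters);
// failures could then be sunk to a dead-letter Kafka topic via a KafkaSink.
```
Because the write happens per record, the failed set is exact, so 8 and 10
are never re-sent.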

B. Let the write fail, so the job restarts and retries the whole batch
```
    api.write(8, 9, 10, 11);
    // 9 and 11 fail, the exception propagates, the job restarts from the
    // last checkpoint, and the whole batch (8-11) is replayed
```

In option (B), the replay rewrites 8 and 10 to the API as well, which is bad
unless the writes are idempotent, so option (A) seems better.
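
The other way I can see to make (B) safe is idempotent writes: derive a key
from the Kafka coordinates so replaying 8 and 10 is a harmless overwrite.
Something like this (the idempotency-key parameter on api.write is my
invention, assuming the API supports upsert semantics):
```
// Assumed upsert-style call: replays with the same key are no-ops server-side.
String key = record.topic() + "-" + record.partition() + "-" + record.offset();
api.write(key, record.value());
```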


Challenge I can't understand
============================

However, in (A) we then have to do something with the dead-letter queue, and
the same problem recurs:

A2. Try/catch to another dead-letter queue?
```
    try {
        api.write(9, 11);
    } catch (ApiException e) {
        // 11 failed to write to the API
        deadletterQueue2.write(e.failedSet());
    }
```

Do you see what I mean? Is it turtles all the way down?
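
The only way I can see to stop the recursion is to make some level terminal:
bounded retries, then a final "parking lot" topic that only humans or tooling
ever read. A sketch (maxAttempts, the backoff policy, and parkingLot are all
made up):
```
// Bounded retries end the recursion: after maxAttempts the record is parked
// in one terminal dead-letter topic instead of yet another queue.
void retryDeadLetter(Record r) throws InterruptedException {
    final int maxAttempts = 5;        // assumed policy
    long backoffMs = 500;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            api.write(r);             // retry just this one record
            return;                   // success, done
        } catch (Exception e) {
            if (attempt == maxAttempts) {
                parkingLot.write(r);  // terminal queue; no further automation
                return;
            }
            Thread.sleep(backoffMs);
            backoffMs *= 2;           // exponential backoff between attempts
        }
    }
}
```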

Should I create a separate index of semantic outcome? Where should it live?

Should I just keep things in the queue until they have been written
successfully?
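
One idea for the "separate index of semantic outcome": keep it in Flink keyed
state, which is checkpointed atomically with the Kafka offsets, so the index
and the read position restore together. Roughly (again, "Record" and "api"
are placeholders, and the stream is assumed keyed by a record id):
```
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

class WriteOnce extends KeyedProcessFunction<String, Record, Record> {
    private transient ValueState<Boolean> written;

    @Override
    public void open(Configuration conf) {
        written = getRuntimeContext().getState(
                new ValueStateDescriptor<>("written", Boolean.class));
    }

    @Override
    public void processElement(Record r, Context ctx, Collector<Record> out) throws Exception {
        if (Boolean.TRUE.equals(written.value())) {
            return;                 // outcome already recorded: a replay is a no-op
        }
        api.write(r);               // external write (placeholder client)
        written.update(true);       // the "semantic outcome" lives here,
                                    // checkpointed together with the offsets
        out.collect(r);
    }
}
```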
