Hi Mahesh,

In the Kafka tests, we're using a pattern of killing a job by throwing a
"SuccessException" after a certain number of messages have passed. Just
check the Kafka tests to see how it's done :)
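To make the pattern concrete, here is a minimal standalone sketch (plain Java, no Flink dependencies, so the names below are illustrative rather than Flink's actual test classes): a sink-like function counts records coming from an otherwise unbounded source and throws a marker `SuccessException` once the expected number has been seen; the test catches that exception and treats it as success. In Flink's own Kafka tests the marker class is `org.apache.flink.test.util.SuccessException`, but this sketch defines its own.

```java
// Standalone sketch of the "SuccessException" termination pattern used in
// Flink's Kafka tests: an unbounded job is aborted on purpose after N
// records, and the test interprets that specific exception as success.
public class SuccessExceptionSketch {

    // Marker exception; mirrors the idea of Flink's
    // org.apache.flink.test.util.SuccessException (hypothetical stand-in here).
    static class SuccessException extends RuntimeException {}

    // Stand-in for a sink: counts records and aborts the otherwise endless
    // job once enough messages have been seen.
    static void sink(String record, int[] count, int expected) {
        count[0]++;
        if (count[0] >= expected) {
            throw new SuccessException();
        }
    }

    public static void main(String[] args) {
        int expected = 3;
        int[] count = {0};
        try {
            // Simulates an endless Kafka source feeding the sink.
            int i = 0;
            while (true) {
                sink("message-" + i++, count, expected);
            }
        } catch (SuccessException e) {
            // Reaching this catch block is the success condition.
            System.out.println("job finished after " + count[0] + " messages");
        }
    }
}
```

In a real test you would wrap `env.execute()` in the same kind of try/catch (Flink's test utils provide a `tryExecute`-style helper for this), so the call returns once enough output has been observed instead of blocking forever.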

On Thu, Mar 9, 2017 at 10:09 PM, MAHESH KUMAR <r.mahesh.kumar....@gmail.com>
wrote:

> Hi Team,
>
> I am trying to write test cases to check whether the job is getting
> executed as desired. I am using the Flink test util. I am trying to do
> end-to-end testing where Flink reads from a Kafka queue, does some
> processing, and then writes the output to another Kafka topic.
> My objective is to read the message from the output topic and check
> whether it matches the expected message.
>
> I have got Zookeeper and Kafka configured for the test. When I start the
> Flink job, it never terminates, since its source is a Kafka source. Is
> there a way to run a job for a specific interval of time, or how do I go
> about testing this scenario? Is there any documentation/example for
> running test cases such as these?
>
> My code currently looks something like this:
>
> class StreamingMultipleTest extends StreamingMultipleProgramsTestBase {
>
>   @Before def initialize() = {
>     // Start Kafka and Zookeeper
>     // Call the run method of the Flink class - FlinkClass.run()
>     // (this class contains the env.execute())
>
>     // My code does not execute any further, since the previous call
>     // never returns.
>   }
>
>   @Test def Test1() = {
>     // Check whether the output topic of the Kafka queue is as expected -
>     // assert statement
>   }
>
>   @After def closeServices() = {
>     // Stop Zookeeper and Kafka
>   }
> }
>
>
> Thanks and Regards,
> Mahesh
>
> --
>
> Mahesh Kumar Ravindranathan
> Data Streaming Engineer
> Oracle Marketing Cloud - Social Platform
> Contact No:+1(720)492-4445
>
>
