I mean the `StreamExecution` generated a proper error message:
    2017-08-26 07:05:00,641 ERROR StreamExecution:? - Query [id = 8597ae0b-2183-407f-8300-239a24eb68ab, runId = c1fe627d-bcf4-4462-bbd9-b178ffaca860] terminated with error org.apache.spark.SparkException: Job aborted due to stage
When you say "the application remained alive", do you mean the
StreamingQuery stayed alive, or the whole process stayed alive? The
StreamingQuery should be terminated immediately. And the stream execution
threads are all daemon threads, so they should not affect the termination
of the application.
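To illustrate the daemon-thread point above, here is a minimal self-contained sketch (plain Scala, not Spark code; the object and method names are hypothetical). A thread marked as a daemon, like Spark's stream execution threads, does not keep the JVM alive once `main` returns:

```scala
// Minimal sketch: a daemon thread does not block JVM shutdown.
object DaemonThreadDemo {
  // Start a background loop on a daemon thread, mirroring (in spirit)
  // how StreamExecution runs its work on daemon threads.
  def startBackgroundLoop(): Thread = {
    val t = new Thread(() => while (true) Thread.sleep(100))
    t.setDaemon(true) // daemon: the JVM may exit while this still runs
    t.start()
    t
  }

  def main(args: Array[String]): Unit = {
    val t = startBackgroundLoop()
    println(s"daemon=${t.isDaemon}")
    // main returns here; the JVM exits even though the loop never ends,
    // because the only remaining thread is a daemon.
  }
}
```

Run standalone, the process terminates as soon as `main` returns; if `setDaemon(true)` were removed, the infinite loop would keep the JVM alive.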
I wasn't sure if this would be a proper bug or not.
Today, the behavior of Structured Streaming is such that if a source fails
with an exception, the `StreamExecution` class halts reading further from
the source, but the application remains alive. For applications where
the sole purpose is to