Hi All,

How do I track batch jobs in Spark? For example, is there an id or token I
can get after spawning a batch job that I can then use to track its
progress or to kill the job itself?

For Streaming, we have StreamingQuery.id()
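For context, a minimal sketch of the streaming case I mean, assuming a SparkSession `spark` and a streaming DataFrame `df` (both placeholders here):

```scala
// Hypothetical streaming DataFrame `df`; start a query and keep its handle.
val query = df.writeStream
  .format("console")
  .start()

query.id      // unique id, stable across restarts from the same checkpoint
query.runId   // unique id for this particular run
query.stop()  // stop the running query

// I'm looking for the equivalent handle for a plain batch job.
```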

Thanks!