Re: How to handle auto-restart in Kubernetes Spark application

2021-05-02 Thread Ali Gouta
Hello, it would be better to ask your question on the Spark operator GitHub rather than on this mailing list. As for the answer, try: `type: Always`. Best regards, Ali Gouta.
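For context, the suggested setting lives under `spec.restartPolicy` of the spark-on-k8s-operator's `SparkApplication` resource. A minimal sketch (application name and image details are placeholders):

```yaml
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: my-spark-app        # placeholder name
spec:
  restartPolicy:
    type: Always            # restart the application whenever it terminates
```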

How to handle auto-restart in Kubernetes Spark application

2021-05-02 Thread Sachit Murarka
Hi All, I am using Spark with Kubernetes. Can anyone please tell me how I can handle restarting failed Spark jobs? I have used the following property but it is not working: `restartPolicy: type: OnFailure`. Kind Regards, Sachit Murarka
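For reference, when `type: OnFailure` is used with the spark-on-k8s-operator, the retry behaviour is controlled by additional fields under the same `restartPolicy` block; a sketch with assumed retry counts and intervals:

```yaml
restartPolicy:
  type: OnFailure
  onFailureRetries: 3                  # retries after a runtime failure (assumed value)
  onFailureRetryInterval: 10           # seconds between runtime-failure retries
  onSubmissionFailureRetries: 5        # retries after a submission failure (assumed value)
  onSubmissionFailureRetryInterval: 20 # seconds between submission-failure retries
```

With `OnFailure`, the retry interval fields are required by the operator for the policy to take effect, which is a common reason such a config appears "not working".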

Re: Spark JDBC errors out

2021-05-02 Thread Farhan Misarwala
Thanks, Mich. I have been using the JDBC source with MySQL & Postgres drivers in production for almost 4 years now. The error looked a bit weird, and what I meant to ask was: am I doing it right? As you mentioned, I will also check with the developers of the driver to see if they have anything to say.
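For reference, a minimal sketch of how the Spark JDBC source is typically wired up against Postgres. The host, database, table, and credentials below are hypothetical, and the actual `spark.read` call is left commented out since it needs a live SparkSession and database:

```python
# Hypothetical connection details for illustration only.
jdbc_options = {
    "url": "jdbc:postgresql://db-host:5432/mydb",  # assumed host and database
    "dbtable": "public.orders",                    # assumed table
    "user": "spark",                               # assumed credentials
    "password": "secret",
    "driver": "org.postgresql.Driver",             # Postgres JDBC driver class
    "fetchsize": "1000",                           # rows fetched per round trip
}

# With a live SparkSession this becomes:
# df = spark.read.format("jdbc").options(**jdbc_options).load()
```

Mismatched driver class names or an incompatible driver jar on the classpath are the usual culprits behind odd JDBC-source errors.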

Re: Delivery Status Notification (Failure)

2021-05-02 Thread Mich Talebzadeh
Part 2: In this case, we are simply counting the number of rows to be ingested once before SSS terminates. This is shown in the above method:

batchId is 0
Total records processed in this run = 3107
wrote to DB

So it shows the batchId (0) and the total record count(), and writes to the BigQuery table.

Re: Delivery Status Notification (Failure)

2021-05-02 Thread Mich Talebzadeh
This message is in two parts. Hi, I did some tests on these. The idea is to run Spark Structured Streaming (SSS) on the collection of records accumulated since the last run of SSS, and then shut the SSS job down. Some parts of this approach have been described in the following Databricks blog: Running Streaming Jobs
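The "process everything since the last run, then stop" pattern described here is usually expressed with Structured Streaming's run-once trigger, `trigger(once=True)`, which plans a single micro-batch over all data the checkpoint marks as new and then terminates. A sketch, where the source format, paths, and sink function are placeholders:

```python
def run_once(spark, source_path, checkpoint_dir, sink_fn):
    """Process all data that arrived since the last run, then shut down.

    Relies on trigger(once=True): Spark runs one micro-batch covering
    everything the checkpoint says is new, then the query terminates.
    """
    df = spark.readStream.format("json").load(source_path)   # placeholder source
    query = (
        df.writeStream
          .foreachBatch(sink_fn)                             # e.g. write to BigQuery
          .option("checkpointLocation", checkpoint_dir)      # tracks progress between runs
          .trigger(once=True)                                # one batch, then stop
          .start()
    )
    query.awaitTermination()  # returns after the single batch completes
```

Scheduling this script from cron (or similar) gives the "streaming job once a day" behaviour the blog describes, with the checkpoint directory carrying state between runs.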