Re: Creating JDBC source table schema(DDL) dynamically

2018-07-12 Thread Kadam, Gangadhar (GE Aviation, Non-GE)
…y compilation, query execution (for triggers, referential integrity, etc.) and even to establish a connection. On 7/12/18, 9:53 AM, "Kadam, Gangadhar (GE Aviation, Non-GE)" wrote: Thanks Jayesh. I was aware of the catalog table approach but I was av…
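The "catalog table" approach mentioned in this reply can be sketched as follows: query `information_schema.columns` on the source database (over JDBC or psycopg2) and assemble a CREATE TABLE statement from the result. The rows below are a hard-coded sample standing in for that query's output, so the DDL builder can be shown in isolation; the table and column names are illustrative.

```python
def build_create_table(table, columns):
    """Build a CREATE TABLE statement.

    columns: list of (column_name, data_type, is_nullable) tuples, in
    ordinal_position order, shaped like rows from information_schema.columns.
    """
    col_defs = []
    for name, data_type, is_nullable in columns:
        null_sql = "" if is_nullable == "YES" else " NOT NULL"
        col_defs.append(f'    "{name}" {data_type}{null_sql}')
    return f"CREATE TABLE {table} (\n" + ",\n".join(col_defs) + "\n)"

# Sample rows as information_schema.columns would return them (placeholder data).
sample = [
    ("id", "integer", "NO"),
    ("name", "character varying(100)", "YES"),
]
print(build_create_table("qa.users", sample))
```

Real tables would also need defaults, primary keys, and constraints, which live in other catalog views (`information_schema.table_constraints`, `pg_catalog`), so this only covers the column list.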

Re: Creating JDBC source table schema(DDL) dynamically

2018-07-12 Thread Kadam, Gangadhar (GE Aviation, Non-GE)
…link below for some pointers… https://stackoverflow.com/questions/2593803/how-to-generate-the-create-table-sql-statement-for-an-existing-table-in-postgr On 7/11/18, 9:55 PM, "Kadam, Gangadhar (GE Aviation, Non-GE)" wrote: Hi All,
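One common pointer from the linked Stack Overflow thread is to let `pg_dump` emit the DDL for an existing table rather than reconstructing it by hand. This sketch only builds the command line (`--schema-only` restricts output to DDL); running it assumes `pg_dump` is on the PATH and can reach the source database, and the host/database/table names are placeholders.

```python
def pg_dump_ddl_cmd(host, db, table):
    """Build a pg_dump invocation that prints only the DDL for one table."""
    return [
        "pg_dump",
        "--schema-only",   # emit CREATE statements, no data
        "--table", table,  # restrict to the one table we want to recreate
        "--host", host,
        "--no-owner",      # omit ALTER ... OWNER so it applies in the target env
        db,
    ]

cmd = pg_dump_ddl_cmd("source-host", "sourcedb", "public.users")
print(" ".join(cmd))
```

The resulting SQL could then be executed against the target environment before the Spark JDBC write, so the target table exists with the right schema.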

Creating JDBC source table schema(DDL) dynamically

2018-07-11 Thread Kadam, Gangadhar (GE Aviation, Non-GE)
Hi All, I am trying to build a Spark application which will read the data from PostgreSQL (source) in one environment and write it to PostgreSQL/Aurora (target) in a different environment (like PROD to QA or QA to PROD, etc.) using Spark JDBC. When I am loading the dataframe back to
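The cross-environment copy described in this message can be sketched with the standard DataFrame JDBC API (`spark.read.jdbc` / `df.write.jdbc`). Hosts, database names, credentials, and the table name below are placeholders, and the copy function is shown uninvoked since it needs a live SparkSession and both databases.

```python
def jdbc_url(host, db):
    """Build a PostgreSQL JDBC URL (default port 5432 assumed)."""
    return f"jdbc:postgresql://{host}:5432/{db}"

def copy_table(spark, source_url, target_url, table, props):
    # Read the table over JDBC from the source environment...
    df = spark.read.jdbc(source_url, table, properties=props)
    # ...and append it into the same-named table in the target environment.
    df.write.jdbc(target_url, table, mode="append", properties=props)

# Placeholder environments (e.g. QA -> PROD):
source_url = jdbc_url("qa-host", "qadb")
target_url = jdbc_url("prod-host", "proddb")
props = {"user": "etl_user", "password": "***", "driver": "org.postgresql.Driver"}
# copy_table(spark, source_url, target_url, "public.users", props)
```

Note that `mode="append"` assumes the target table already exists; with `mode="overwrite"`, Spark recreates the table from the DataFrame schema, which is exactly where the DDL question in this thread arises.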

Re: EXT: Multiple cores/executors in Pyspark standalone mode

2017-03-24 Thread Kadam, Gangadhar (GE Aviation, Non-GE)
In local mode all processes run inside a single JVM. An application is started in local mode by setting the master to local, local[*], or local[n]. Settings such as spark.executor.cores are not applicable in local mode because there is only one embedded executor. In Standalone
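The distinction above, expressed as configuration: in local mode the master string itself fixes the parallelism (the `n` in `local[n]`), while in standalone mode per-executor cores come from `spark.executor.cores` (with `spark.cores.max` capping the whole application). The host and values below are illustrative.

```python
# Local mode: one JVM, parallelism set directly in the master string.
local_conf = {
    "spark.master": "local[4]",  # 4 worker threads, single embedded executor
}

# Standalone mode: executors are separate processes on cluster workers.
standalone_conf = {
    "spark.master": "spark://master-host:7077",  # placeholder standalone master
    "spark.executor.cores": "2",  # cores per executor (ignored in local mode)
    "spark.cores.max": "8",       # total cores the app may take (standalone)
}
print(local_conf["spark.master"], standalone_conf["spark.executor.cores"])
```

With these standalone settings the master would launch up to 8/2 = 4 executors of 2 cores each, whereas `local[4]` never launches more than the one in-process executor.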