y compilation, query execution (for triggers,
referential integrity, etc.) and even to establish a connection.
On 7/12/18, 9:53 AM, "Kadam, Gangadhar (GE Aviation, Non-GE)"
wrote:
Thanks Jayesh.
I was aware of the catalog table approach but I was av
link below for some pointers…
https://stackoverflow.com/questions/2593803/how-to-generate-the-create-table-sql-statement-for-an-existing-table-in-postgr
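For reference, the catalog-table approach discussed in that link boils down to reading column metadata from information_schema.columns and assembling the CREATE TABLE statement by hand. A minimal sketch, assuming the metadata rows have already been fetched (the build_create_table helper and the example column tuples are mine, not a library API; in practice the rows would come from a query like SELECT column_name, data_type, is_nullable FROM information_schema.columns WHERE table_name = ...):

```python
# Sketch: rebuild a CREATE TABLE statement from catalog metadata.
# The `columns` rows would normally come from a query against
# information_schema.columns: (column_name, data_type, is_nullable).

def build_create_table(table, columns):
    """columns: iterable of (name, data_type, is_nullable) tuples,
    where is_nullable is 'YES' or 'NO' as information_schema reports it."""
    defs = []
    for name, data_type, is_nullable in columns:
        null_sql = "" if is_nullable == "YES" else " NOT NULL"
        defs.append(f'    "{name}" {data_type}{null_sql}')
    return f'CREATE TABLE "{table}" (\n' + ",\n".join(defs) + "\n);"

# Example with hypothetical metadata rows:
ddl = build_create_table("events", [
    ("id", "bigint", "NO"),
    ("payload", "jsonb", "YES"),
])
print(ddl)
```

This only covers columns and nullability; constraints, defaults, and indexes would need additional catalog queries, which is why pg_dump --schema-only is often the simpler route.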
On 7/11/18, 9:55 PM, "Kadam, Gangadhar (GE Aviation, Non-GE)"
wrote:
Hi All,
I am trying to build a Spark application which will read data from
PostgreSQL (source) in one environment and write it to PostgreSQL/Aurora
(target) in a different environment (like PROD to QA or QA to PROD, etc.)
using Spark JDBC.
When I am loading the dataframe back to
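For what it's worth, the cross-environment copy described above usually comes down to a spark.read and a spark.write over the JDBC data source that differ only in their connection options. A sketch, assuming PySpark; the jdbc_options helper is mine (not a Spark API), and the hostnames, database names, and credentials are placeholders:

```python
# Sketch of the source -> target copy via Spark JDBC (PySpark).
# Hostnames, database names, and credentials below are placeholders.

def jdbc_options(host, db, table, user, password):
    """Build the option dict passed to spark.read/write.format('jdbc')."""
    return {
        "url": f"jdbc:postgresql://{host}:5432/{db}",
        "dbtable": table,
        "user": user,
        "password": password,
        "driver": "org.postgresql.Driver",
    }

source = jdbc_options("qa-host", "appdb", "public.events", "reader", "...")
target = jdbc_options("prod-host", "appdb", "public.events", "writer", "...")

# With a live SparkSession, the round trip would look like:
#   df = spark.read.format("jdbc").options(**source).load()
#   df.write.format("jdbc").options(**target).mode("append").save()
```

The write mode (append vs. overwrite) matters here: overwrite drops and recreates the target table with Spark-inferred types, which is one reason the CREATE TABLE/DDL question comes up when copying between environments.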
In local mode, all processes run inside a single JVM.
An application starts in local mode when the master is set to local, local[*], or
local[n].
Executor-scoped settings such as spark.executor.cores are not applicable in local
mode because there is only one embedded executor.
In Standalone