Thank you very much.
I understand the performance implications, and that Spark will download the
table before modifying it.
The JDBC database is just extremely small; it’s the BI/aggregated layer.
What’s interesting is that here it says I can use JDBC
Please bear in mind that INSERT/UPDATE/DELETE operations are DML,
whereas CREATE/DROP TABLE are DDL operations that are best performed in the
native database, which I presume is transactional.
Can you run CREATE TABLE (before any insert of data) using the native JDBC
database’s own syntax?
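For example, the DDL could be issued directly in the native database first. A sketch in illustrative PostgreSQL-style syntax; the schema, table name, and columns are assumptions, not taken from the thread:

```sql
-- Run this in the native JDBC database (e.g. via its own client),
-- not in Spark. Names and columns are illustrative.
CREATE TABLE bi_layer.daily_sales (
    sale_date   DATE,
    region      VARCHAR(64),
    total_sales NUMERIC(18, 2)
);
```

Once the table exists in the native database, Spark SQL can map it and insert into it.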
Alternatively
Generally, the problem is that I can’t find a way to automatically create a
table in the JDBC database when I want to insert data into it using Spark
SQL only, not the DataFrames API.
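For what it’s worth, Spark 3.x ships a JDBC table catalog (JDBCTableCatalog) that, once registered, lets CREATE TABLE ... AS SELECT create the table in the remote database from pure SQL. A sketch, assuming Spark 3.x and a PostgreSQL target; the catalog name, URL, and table names are illustrative:

```sql
-- Spark configuration (spark-defaults.conf or the SparkSession builder),
-- shown here as comments; values are illustrative:
--   spark.sql.catalog.bi        org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog
--   spark.sql.catalog.bi.url    jdbc:postgresql://host:5432/bi_db
--   spark.sql.catalog.bi.driver org.postgresql.Driver

-- With the catalog registered, pure SQL can create the remote table:
CREATE TABLE bi.public.daily_sales AS
SELECT sale_date, region, SUM(amount) AS total_sales
FROM parquet_table
GROUP BY sale_date, region;
```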
> On 2 Feb 2023, at 21:22, Harut Martirosyan wrote:
Hi, thanks for the reply.
Let’s imagine we have a Parquet-based table called parquet_table; now I want to
insert it into a new JDBC table, all using pure SQL.
If the JDBC table already exists, it’s easy: we do CREATE TABLE USING JDBC and
then we do INSERT INTO that table.
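Concretely, that existing-table flow looks something like this (the URL, credentials, and table names are illustrative placeholders):

```sql
-- Map an already-existing table in the JDBC database into Spark:
CREATE TABLE jdbc_sales
USING JDBC
OPTIONS (
  url      'jdbc:postgresql://host:5432/bi_db',
  dbtable  'public.daily_sales',
  user     'spark',
  password '***'
);

-- Then append the Parquet data:
INSERT INTO jdbc_sales
SELECT * FROM parquet_table;
```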
If the table doesn’t exist, though, I haven’t found a way to create it from
pure SQL.
Hi,
Your statement below is not very clear:
".. If the table existed, I would create a table using JDBC in spark SQL
and then insert into it, but I can't create a table if it doesn't exist in
JDBC database..."
If the table exists in your JDBC database, why do you need to create it?
How do
I have a result set (defined in SQL), and I want to insert it into my JDBC
database using only SQL, not the DataFrames API.
If the table existed, I would create a table using JDBC in spark SQL and
then insert into it, but I can't create a table if it doesn't exist in JDBC
database.
How can I do that?