We will probably fix this in Spark 1.6

https://issues.apache.org/jira/browse/SPARK-10040

On Thu, Aug 20, 2015 at 5:18 AM, Aram Mkrtchyan <aram.mkrtchyan...@gmail.com> wrote:

> We want to migrate our data (approximately 20M rows) from Parquet to Postgres.
> When we use the DataFrame writer's jdbc method, the execution time is very
> long; when we tried the same migration with manual batch inserts, it was much
> more efficient. Is the writer intentionally implemented that way?
>
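Until the writer itself batches statements, a workaround along these lines is commonly used: open one connection per partition and do JDBC batch inserts by hand. This is only a rough sketch; the table name, column layout (id: Long, name: String), JDBC URL, and batch size below are placeholders, not anything taken from the DataFrame writer's API.

    import java.sql.DriverManager

    import org.apache.spark.sql.{DataFrame, Row}

    // Writes a DataFrame of (id: Long, name: String) rows to Postgres using
    // manual JDBC batch inserts, one connection per partition.
    def writeWithBatchInserts(df: DataFrame, url: String, batchSize: Int = 1000): Unit = {
      df.foreachPartition { rows: Iterator[Row] =>
        val conn = DriverManager.getConnection(url)
        conn.setAutoCommit(false)
        // "target_table" and its columns are placeholders for this sketch.
        val stmt = conn.prepareStatement("INSERT INTO target_table (id, name) VALUES (?, ?)")
        try {
          var count = 0
          rows.foreach { row =>
            stmt.setLong(1, row.getLong(0))
            stmt.setString(2, row.getString(1))
            stmt.addBatch()
            count += 1
            // Flush a full batch to the database periodically.
            if (count % batchSize == 0) stmt.executeBatch()
          }
          stmt.executeBatch() // flush the final partial batch
          conn.commit()
        } finally {
          stmt.close()
          conn.close()
        }
      }
    }

Compared with issuing one INSERT per row, grouping rows into batches and committing once per partition cuts the number of round trips to Postgres dramatically, which is why the reporter saw batch inserts perform so much better.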
