[
https://issues.apache.org/jira/browse/SPARK-3451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15316284#comment-15316284
]
Christian Kurz commented on SPARK-3451:
---
In order to align with best practices of using jars on
Christian Kurz created SPARK-12097:
--
Summary: How to do a cached, batched JDBC-lookup in Spark Streaming?
Key: SPARK-12097
URL: https://issues.apache.org/jira/browse/SPARK-12097
Project: Spark
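The SPARK-12097 title asks for a cached, batched JDBC lookup from a streaming job. The issue body is not included here, but the pattern the title names can be sketched in pure Python (no Spark or JDBC dependency; `_query_db`, `BatchedLookup`, and the value format are all hypothetical stand-ins, not Spark API):

```python
# Hypothetical stand-in for a JDBC query; a real job would issue one
# SELECT ... WHERE key IN (...) round-trip against the database.
def _query_db(keys):
    return {k: f"value-for-{k}" for k in keys}

class BatchedLookup:
    """Cache lookup results; fetch all cache misses in one batched query."""

    def __init__(self):
        self._cache = {}
        self.db_calls = 0  # counts round-trips, to show the batching effect

    def lookup(self, keys):
        misses = [k for k in keys if k not in self._cache]
        if misses:
            # One query for every miss in this micro-batch,
            # instead of one query per key.
            self.db_calls += 1
            self._cache.update(_query_db(misses))
        return {k: self._cache[k] for k in keys}

lookup = BatchedLookup()
enriched = lookup.lookup(["a", "b"])  # first call: one DB round-trip
```

In a streaming job this kind of helper would typically live per partition or per executor, so repeated micro-batches reuse the warm cache rather than re-querying the database.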
[
https://issues.apache.org/jira/browse/SPARK-12010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15028404#comment-15028404
]
Christian Kurz commented on SPARK-12010:
Hi Huaxin,
thank you for your kind offer.
Actually I
Christian Kurz created SPARK-12010:
--
Summary: Spark JDBC requires support for column-name-free INSERT syntax
Key: SPARK-12010
URL: https://issues.apache.org/jira/browse/SPARK-12010
Project: Spark
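For context on SPARK-12010: the two INSERT shapes in question differ only in whether the column list is spelled out. A minimal sketch of a statement builder supporting both forms (`build_insert` is an illustrative helper, not Spark's `JdbcUtils` code):

```python
def build_insert(table, columns, with_column_names=True):
    """Build a parameterized INSERT statement.

    with_column_names=False produces the column-name-free form that some
    JDBC targets require; it relies on the table's declared column order.
    """
    placeholders = ", ".join(["?"] * len(columns))
    if with_column_names:
        return f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"
    return f"INSERT INTO {table} VALUES ({placeholders})"

sql = build_insert("people", ["name", "age"], with_column_names=False)
```

The trade-off: the column-name-free form works on targets that reject an explicit column list, but it breaks silently if the DataFrame column order drifts from the table definition.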
[
https://issues.apache.org/jira/browse/SPARK-11989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Christian Kurz updated SPARK-11989:
---
Description:
Writing DataFrames out to a JDBC destination currently requires the JDBC
Christian Kurz created SPARK-11989:
--
Summary: Spark JDBC write only works on technologies with transaction support
Key: SPARK-11989
URL: https://issues.apache.org/jira/browse/SPARK-11989
Project: Spark
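The SPARK-11989 summary points at writes that unconditionally assume commit/rollback support. One way to express the fix the title implies is to gate the transaction calls on a capability flag; the sketch below uses in-memory SQLite only as a demo target, and `supports_transactions` is a hypothetical flag, not Spark's actual dialect mechanism:

```python
import sqlite3

def write_rows(conn, rows, supports_transactions):
    """Insert rows, committing only when the target supports transactions.

    Targets without transaction support (e.g. some JDBC bridges over
    non-transactional stores) would fail on commit()/rollback(), so the
    calls are skipped for them.
    """
    conn.executemany("INSERT INTO target (x) VALUES (?)", [(r,) for r in rows])
    if supports_transactions:
        conn.commit()

# Demo against an in-memory SQLite database (which does support transactions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (x INTEGER)")
write_rows(conn, [1, 2, 3], supports_transactions=True)
```

On a non-transactional target each batch is effectively auto-committed, so a mid-write failure can leave partial data; that is the cost of supporting such stores at all.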