Hello Gnana,
I'm bringing this thread back to the user@ list for the benefit of anyone
else who might want to try this feature.
Running this from the root of the source tree should give you a working
full build with Kubernetes and the experimental Volcano feature, using
Scala 2.12:
build/mvn -Pk
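The command above appears to be cut off. A plausible full form, assuming the standard `kubernetes` and `volcano` Maven profiles from the Spark build documentation (check the docs for your exact version), would be:

```shell
# Hedged sketch: profile names assumed from the Spark build docs.
# Run from the root of the Spark source tree.
./build/mvn -Pkubernetes -Pvolcano -DskipTests clean package
```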
Taking this off-list.
Start here:
https://github.com/apache/spark/blob/70ec696bce7012b25ed6d8acec5e2f3b3e127f11/sql/core/src/main/scala/org/apache/spark/sql/jdbc/JdbcDialects.scala#L144
Look at subclasses of JdbcDialect too, like TeradataDialect.
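For orientation, here is a minimal sketch of a custom dialect in the style of the built-in ones such as TeradataDialect. The object name and JDBC URL prefix are illustrative only; check the linked source for the exact API in your version:

```scala
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

// Minimal sketch of a custom dialect, following the pattern of the
// built-in dialects. "MyDialect" and "jdbc:mydb" are placeholders.
object MyDialect extends JdbcDialect {
  // Claim the JDBC URLs this dialect should handle.
  override def canHandle(url: String): Boolean =
    url.startsWith("jdbc:mydb")

  // Quote identifiers the way the target database expects.
  override def quoteIdentifier(colName: String): String =
    s""""$colName""""
}

// Register the dialect before reading from or writing to that URL.
JdbcDialects.registerDialect(MyDialect)
```

The overridable methods on JdbcDialect (type mappings, identifier quoting, and so on) are easiest to learn by reading how the existing subclasses implement them.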
Note that you are using an old, unsupported version.
Hello!
I want to read CSV files with pyspark using (spark_session).read.csv().
There is a whole bunch of nice options, in particular a "locale" option, but
nonetheless a decimal comma instead of a decimal point is not understood when
reading float/double input, even when the locale is set to 'de-DE'.