Hello, I developed a custom Hive storage handler. It works on legacy
Hive (using MR). My custom storage handler extracts some values from the SQL WHERE
clause, and those values are set into the job conf. That logic lives in
HiveStoragePredicateHandler#decomposePredicate. But Spark SQL does not seem to
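For readers unfamiliar with the hook being discussed: decomposePredicate lets a storage handler claim part of a WHERE clause for itself (the "pushed" predicate) and hand the rest back to the engine (the "residual" predicate). The sketch below is a conceptual illustration in plain Python, not the Java Hive API; the column names, supported operators, and function name are all hypothetical.

```python
# Conceptual sketch (Python, not the Java Hive API) of what
# HiveStoragePredicateHandler#decomposePredicate does: split a
# conjunctive WHERE clause into a "pushed" predicate the storage
# handler evaluates at the source and a "residual" predicate the
# engine must still apply. All names here are hypothetical.

PUSHABLE_COLUMNS = {"id", "ts"}  # columns the hypothetical handler can filter on

def decompose_predicate(conjuncts):
    """Split (column, op, value) conjuncts into pushed and residual lists."""
    pushed, residual = [], []
    for col, op, value in conjuncts:
        if col in PUSHABLE_COLUMNS and op in ("=", "<", ">"):
            pushed.append((col, op, value))    # handler applies this during the scan
        else:
            residual.append((col, op, value))  # engine applies this after the scan
    return pushed, residual

pushed, residual = decompose_predicate(
    [("id", "=", 42), ("name", "LIKE", "a%")]
)
print(pushed)    # [('id', '=', 42)]
print(residual)  # [('name', 'LIKE', 'a%')]
```

The pushed part is what a handler would serialize into the job conf for its record readers, which is the step the question is about.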
Hello,
I am using createDataFrame, passing a Java Row RDD and a schema, but the time
values change when I write that DataFrame to a Parquet file.
Can anyone help?
Thank you,
Sudhir
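A common cause of this symptom (offered here as a guess, since the session timezone is not stated above): Spark persists Parquet timestamps as UTC instants and renders them in the JVM/session timezone, so the same stored instant can display as a different wall-clock value than the one written. The plain-Python sketch below (no Spark needed) shows the mechanism; the Asia/Kolkata and America/New_York zones are arbitrary examples.

```python
# Plain-Python illustration of why a timestamp can appear to "change":
# the same instant, stored as UTC, renders differently depending on the
# timezone it is displayed in. The timezones chosen are arbitrary.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A wall-clock value produced in a JVM running in Asia/Kolkata (UTC+5:30)...
local = datetime(2018, 1, 9, 18, 10, tzinfo=ZoneInfo("Asia/Kolkata"))

# ...is persisted as this UTC instant:
as_utc = local.astimezone(timezone.utc)
print(as_utc)  # 2018-01-09 12:40:00+00:00

# Read back under a different display timezone, it looks "changed":
print(as_utc.astimezone(ZoneInfo("America/New_York")))  # 2018-01-09 07:40:00-05:00
```

If this matches what you are seeing, checking the driver's JVM timezone and, on Spark 2.2+, the `spark.sql.session.timeZone` configuration is a reasonable first step.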
On 9 Jan 2018, at 18:10, Sean Owen wrote:
Just to follow up -- those are actually in a Palantir repo, not Central.
Deploying to Central would be discourteous, but this approach is legitimate and
is how it has to work for vendors to release distros.
Hi,
I am currently working on a Scala project that defines a logback.xml file
to write the logs generated by the application to a specific file and to the
console. I package the project with Maven and run it with spark-submit, but
the output log file is not generated.
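One frequent cause, offered as a guess: under spark-submit the driver and executor JVMs use Spark's own logging configuration from its classpath, so a logback.xml packaged inside the application jar may never be picked up. A sketch of shipping the file explicitly is below; the main class and jar name are placeholders, while `--files`, `--driver-java-options`, `spark.executor.extraJavaOptions`, and logback's `logback.configurationFile` system property are standard.

```shell
# Untested sketch: ship logback.xml with the job and point both JVMs at it.
# com.example.Main and my-app.jar are hypothetical placeholders.
spark-submit \
  --class com.example.Main \
  --files logback.xml \
  --driver-java-options "-Dlogback.configurationFile=logback.xml" \
  --conf "spark.executor.extraJavaOptions=-Dlogback.configurationFile=logback.xml" \
  my-app.jar
```

Note also that Spark ships log4j on its classpath, which can shadow logback's slf4j binding; if the property above has no effect, the slf4j binding actually on the runtime classpath is worth checking.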
Hi,
I've recently installed Spark 2.2.1, and it seems the SQL tab isn't
getting updated at all. Although the "Jobs" tab gets updated with new
incoming jobs, the SQL tab remains empty the whole time.
I was wondering if anyone has noticed such a regression in 2.2.1?
--
Best Regards,
Yuval