Hi,
Recently, I open-sourced a tool called DataRoaster (
https://github.com/cloudcheflabs/dataroaster) to make it easy to run data
platforms on Kubernetes.
In particular, with DataRoaster you can easily deploy Spark Thrift Server on
Kubernetes, which originated from my blog of
Hi!
Again, thanks a lot for advice: I'll have a look!
Best,
Aurelien
On Tue, Sep 7, 2021 at 8:36 PM, Haryani, Akshay
wrote:
> For custom metrics, you can take a look at Groupon’s spark-metrics:
> https://github.com/groupon/spark-metrics
>
> It is supported on Spark 2.x. Alternatively,
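The rest of the suggestion above is cut off, but independent of it, Spark also ships its own Dropwizard-based metrics system that can be enabled with configuration alone. A minimal sketch of a metrics.properties file that turns on the built-in console sink (class and property names are from the stock Spark distribution; period/unit values here are just illustrative):

```
# metrics.properties — enable Spark's built-in console metrics sink
*.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
# how often to dump metrics, and in what unit
*.sink.console.period=10
*.sink.console.unit=seconds
```

It can be activated by pointing Spark at the file, e.g. `--conf spark.metrics.conf=/path/to/metrics.properties`, or by placing it in the conf directory.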
Hello,
I use a DataFrameReader in PERMISSIVE mode for corrupted records. I would
like more information about each corrupted record: for example, in the case
of a schema mismatch, something like "Invalid type: expected Integer but got
String for column number 12" (in the case of CSV).
By
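As far as I know, PERMISSIVE mode only preserves the raw malformed line in the column named by the reader's `columnNameOfCorruptRecord` option; it does not say which column mismatched. One option is to post-process those raw rows yourself. Below is a hedged, plain-Python sketch (not a Spark API — `diagnose_csv_row` and its simple type names are made up for illustration) that re-parses a corrupt CSV row against the expected schema and reports the kind of per-column message asked for above:

```python
# Sketch (not a Spark API): given the raw text of a corrupt CSV row
# (e.g. collected from the column configured via columnNameOfCorruptRecord)
# and the expected schema, report which columns failed to parse.
def diagnose_csv_row(raw_row, schema, sep=","):
    """schema: list of (column_name, type_name) pairs with type_name in
    {"Integer", "Double", "String"}. Returns a list of messages."""
    parsers = {"Integer": int, "Double": float, "String": str}
    values = raw_row.split(sep)
    messages = []
    if len(values) != len(schema):
        messages.append(
            f"Expected {len(schema)} columns but got {len(values)}")
    # Check each value against its declared type, 1-based column numbers.
    for i, ((name, type_name), value) in enumerate(zip(schema, values), start=1):
        try:
            parsers[type_name](value)
        except ValueError:
            messages.append(
                f"Invalid type: expected {type_name} but got "
                f"{value!r} for column number {i} ({name})")
    return messages
```

This could be applied to the corrupt-record column (for example via a UDF or after collecting the bad rows), keeping Spark's permissive parsing untouched.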
Thanks, I'll check them out.
On Thu, Sep 9, 2021 at 7:22 PM Sean Owen wrote:
> - other lists, please don't cross-post to 4 lists (!)
>
> This is a problem you'd see with Java 9 or later - I assume you're running
> that under the hood. However, it should be handled by Spark in the case that
> you
- other lists, please don't cross-post to 4 lists (!)
This is a problem you'd see with Java 9 or later - I assume you're running
that under the hood. However, it should be handled by Spark in the case that
you can't access certain things in Java 9+, and this may be a bug I'll look
into. In the
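The Java 9+ access problem described above is typically an illegal reflective access to JDK internals (an IllegalAccessError or InaccessibleObjectException). Assuming that is the error here, a commonly reported workaround is to open the affected modules via `--add-opens` in the driver and executor JVM options. A hedged sketch (the module names are the commonly needed ones, not taken from this thread, and `my_app.py` is a placeholder):

```shell
# Workaround sketch for illegal-access errors on Java 9+:
# open the JDK internal packages Spark reflects into.
spark-submit \
  --conf "spark.driver.extraJavaOptions=--add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED" \
  --conf "spark.executor.extraJavaOptions=--add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED" \
  my_app.py
```

If Spark itself handles this in a newer release, upgrading is the cleaner fix; the flags above are only a stopgap.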