Hi all,
Does anyone have suggestions for optimizing Spark ML Pipeline inference in a
web app serving multiple tenants in a low-latency mode?
Any pointers would be appreciated.
Thank you!
Hello.
I am working to move our system from Spark 2.1.0 to Spark 2.3.0. Our
system runs on Spark managed via YARN. During the move I mirrored the
settings to our new cluster. However, on the Spark 2.3.0
cluster with the same resource allocation I am seeing a number of
Hi,
I am not sure whether Spark DataFrames apply to your use case. If they do,
please try creating a UDF in Python and check whether you can
call it from Scala using select and expr.
Regards,
Gourav Sengupta
On Mon, Jul 16, 2018 at 5:32 AM, Chetan Khatri
wrote:
> Hello Jayant,