Thank you Jeff! I would certainly give it a try.
Best,
Rahul
On 2021/09/26 22:49:03, Jeff Zhang wrote:
> Hi kumar,
>
> You can try Zeppelin, which supports UDF sharing across languages:
>
> http://zeppelin.apache.org/
>
>
>
>
> rahul kumar wrote on Monday, 27 September 2021 at 4:20 AM:
>
> > I'm trying to use a function defined in a Scala jar in PySpark (Spark
> > 3.0.2).
>
I'm trying to use a function defined in a Scala jar in PySpark (Spark 3.0.2).
--- scala ---
object PythonUtil {
  def customedf(dataFrame: DataFrame,
                keyCol: String,
                table: String,
                outputSchema: StructType,
                database: String):
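One common way to reach a Scala helper like this from PySpark is through the py4j gateway. The sketch below is an illustration under stated assumptions, not a confirmed answer from the thread: it assumes the jar was added to the driver classpath (e.g. via `spark-submit --jars`) and that the Scala side can accept a schema rebuilt from its JSON form, a frequent workaround since py4j cannot pass a Python `StructType` directly.

```python
def call_scala_customedf(spark, df, key_col, table, output_schema, database):
    """Hypothetical PySpark bridge to the Scala PythonUtil.customedf above.

    Assumes the jar is on the driver classpath. The StructType is
    serialized to JSON and re-parsed into a JVM StructType.
    """
    from pyspark.sql import DataFrame  # lazy import: pyspark needed at call time

    jvm = spark._jvm
    # Rebuild the schema on the JVM side from its JSON representation.
    j_schema = jvm.org.apache.spark.sql.types.DataType.fromJson(output_schema.json())
    jdf = jvm.PythonUtil.customedf(df._jdf, key_col, table, j_schema, database)
    return DataFrame(jdf, df.sql_ctx)
```

Usage would look like `result = call_scala_customedf(spark, df, "id", "my_table", schema, "my_db")`, assuming the Scala signature really takes the arguments in that order.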
Dear friends,

I'm implementing DataSource V2 for a custom NoSQL database, and I'm facing
the following issue:

a) It seems there is no way to access the user-specified schema on the
DataFrame while doing a save operation. There is an existing, unresolved
conversation about this.

Did you find a way to retrieve the schema while saving into an external
database? I'm stuck at the same place without any clear path forward.
Thanks,
Rahul
--
Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
Thank you, Jacek, for trying it out and clarifying. Appreciate it.
Best,
Rahul
I'm implementing a V2 datasource for a custom data store.
I'm trying to insert records into a temp view, in the following fashion:

insertDFWithSchema.createOrReplaceTempView(sqlView)
spark.sql(s"insert into $sqlView values (2, 'insert_record1', 200,
23000), (20001, 'insert_record2', 201,
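One thing to watch in statements like the one above: Spark's SQL parser accepts only straight ASCII single quotes for string literals, so curly quotes pasted from a word processor (like ” or ‘) fail to parse. A small hedged helper, with made-up view name and row values, that renders such a statement with plain quoting:

```python
def build_insert_sql(view, rows):
    """Render an INSERT INTO ... VALUES statement using straight ASCII quotes.

    `view` and `rows` are illustrative. Curly quotes are not valid SQL
    string delimiters; embedded single quotes are escaped by doubling.
    """
    def fmt(value):
        return "'" + str(value).replace("'", "''") + "'" if isinstance(value, str) else str(value)

    values = ", ".join("(" + ", ".join(fmt(v) for v in row) + ")" for row in rows)
    return "insert into {} values {}".format(view, values)

sql = build_insert_sql("sqlView", [(2, "insert_record1", 200, 23000)])
# spark.sql(sql)  # submit against an insertable table
print(sql)  # insert into sqlView values (2, 'insert_record1', 200, 23000)
```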
Hello everyone,
I was wondering how the Cassandra Spark connector deals with deleted/updated
records during a readStream operation. If a record was already fetched into
Spark memory and then gets updated or deleted in the database, does that get
reflected in a streaming join?
Thanks,
Rahul
I'm trying to implement a Structured Streaming source for a custom
connector. I'm wondering whether it is possible to do predicate pushdown in
a streaming source? I'm aware this may be something native to the datastore
in question. However, I would really appreciate it if someone could point me
in the right direction.
Rahul Kumar
*Software Engineer- I (Search Snapdeal)*
I faced a similar issue with the wholeTextFiles function due to version
compatibility; Spark 1.0 with Hadoop 2.4.1 worked for me. Did you try
another function, such as textFile, to check whether the issue is specific
to wholeTextFiles?
Spark needs to be re-compiled for different Hadoop versions. However, you
can keep