This is a standard practice used for chaining, to support
a.setStepSize(...)
 .setRegParam(...)
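A minimal standalone sketch (hypothetical classes, not the actual Spark API) of why setters declare `this.type` as the return type: the singleton type of `this` preserves the subclass's static type across chained calls.

```scala
class Optimizer {
  var stepSize: Double = 0.1
  // Returning this.type (the singleton type of "this") rather than
  // Optimizer means a chained call keeps the caller's static type.
  def setStepSize(s: Double): this.type = { stepSize = s; this }
}

class RegularizedOptimizer extends Optimizer {
  var regParam: Double = 0.0
  def setRegParam(r: Double): this.type = { regParam = r; this }
}

// setStepSize returns a RegularizedOptimizer here (not a plain
// Optimizer), so setRegParam is still available in the chain.
val opt = new RegularizedOptimizer().setStepSize(0.5).setRegParam(0.01)
```

If `setStepSize` instead returned `Optimizer`, the chained `setRegParam` call would not compile, because `Optimizer` has no such method.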
On Sun, Jul 23, 2017 at 8:47 PM, tao zhan wrote:
> Thank you for replying.
> But I do not get it completely. Why is the "this.type" necessary?
> Why could it not be
Hello everyone
I want to use Spark with the Java API.
Please let me know how I can configure it.
Thanks
A
-
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
It means the same object ("this") is returned.
On Sun, Jul 23, 2017 at 8:16 PM, tao zhan wrote:
> Hello,
>
> I am new to scala and spark.
> What is the "this.type" in the set function for?
>
>
>
> https://github.com/apache/spark/blob/481f0792944d9a77f0fe8b5e2596da
>
Hi all
I want to convert the binary value from Kafka to a string. Could you please help me?
val df = ss.readStream.format("kafka").option("kafka.bootstrap.servers", "")
  .option("subscribe", "")
  .load()
val value = df.select("value")
value.writeStream
  .outputMode("append")
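The binary `value` column can be cast to a string with a select expression. A sketch, assuming a console sink and placeholder broker/topic names:

```scala
// Sketch: cast the Kafka "value" column (binary) to a UTF-8 string.
// The broker address, topic name, and console sink are placeholders.
import org.apache.spark.sql.SparkSession

val ss = SparkSession.builder.appName("kafka-to-string").getOrCreate()

val df = ss.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092") // placeholder
  .option("subscribe", "mytopic")                      // placeholder
  .load()

// CAST(value AS STRING) converts the binary payload to text.
val value = df.selectExpr("CAST(value AS STRING) AS value")

value.writeStream
  .outputMode("append")
  .format("console")
  .start()
  .awaitTermination()
```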
@Sumedh Can I run streaming jobs on the same context with spark-jobserver?
so there is no waiting for results, since the Spark SQL job is expected
to stream forever and the results of each streaming job are captured
through a message queue.
In my case each Spark SQL query will be a streaming job.
Cool thanks. Will give that a try...
--Ron
On Friday, July 21, 2017 8:09 PM, Keith Chapman
wrote:
You could also enable it with --conf spark.logLineage=true if you do not want
to change any code.
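For reference, a sketch of passing that flag at submit time (the application jar and class name below are placeholders):

```shell
# Sketch: enable RDD lineage logging via configuration only,
# with no application code changes. Jar/class names are placeholders.
spark-submit \
  --class com.example.MyApp \
  --conf spark.logLineage=true \
  my-app.jar
```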
Regards,
Keith.
http://keith-chapman.com
On Fri, Jul 21, 2017
>
> left.join(right, my_fuzzy_udf (left("cola"),right("cola")))
>
While this could work, the problem will be that we'll have to check every
possible combination of tuples from left and right using your UDF. It
would be best if you could somehow partition the problem so that we could
reduce the
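One common way to partition such a problem (a hypothetical sketch, not a prescription from this thread) is "blocking": derive a coarse key on both sides, equi-join on it, and only then apply the fuzzy UDF as a filter, so the UDF compares rows within a block instead of over the full cross product. Column names and `my_fuzzy_udf` below are placeholders; the UDF is assumed to return a Boolean.

```scala
// Hypothetical blocking sketch for a fuzzy join. "left", "right",
// "cola", and my_fuzzy_udf are placeholders from the thread.
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{col, lower, substring}

// Coarse blocking key: the lowercased first character of the column.
def blockKey(c: Column): Column = lower(substring(c, 1, 1))

val leftKeyed  = left.withColumn("block", blockKey(col("cola")))
val rightKeyed = right.withColumn("block", blockKey(col("cola")))

// The equi-join on the block key lets Spark partition the work;
// the expensive fuzzy UDF then only runs within each block.
val joined = leftKeyed
  .join(rightKeyed, leftKeyed("block") === rightKeyed("block"))
  .filter(my_fuzzy_udf(leftKeyed("cola"), rightKeyed("cola")))
```

The choice of blocking key is a trade-off: a coarser key keeps more true matches in the same block but leaves more pairs for the UDF to check.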
I am facing an issue while connecting Apache Spark to the Apache Cassandra
datastore.
Please unsubscribe me