Re: Question on Spark code

2017-07-23 Thread Reynold Xin
This is a standard practice used for chaining, to support a.setStepSize(...).setRegParam(...). On Sun, Jul 23, 2017 at 8:47 PM, tao zhan wrote: > Thank you for replying. > But I do not get it completely. Why is the "this.type" necessary? > Why could it not be
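
A minimal sketch (not taken from the Spark source) of the pattern described above: each setter returns this.type, so calls chain fluently and a subclass keeps its own type instead of widening to the parent class.

    // Each setter returns this.type: the same instance, typed as the caller's own class.
    class Estimator {
      private var stepSize: Double = 0.1
      private var regParam: Double = 0.0

      def setStepSize(value: Double): this.type = {
        stepSize = value
        this // return the same object so further setters can be chained
      }

      def setRegParam(value: Double): this.type = {
        regParam = value
        this
      }
    }

    class MyEstimator extends Estimator

    // Because the return type is this.type rather than Estimator, chaining on a
    // subclass still yields the subclass type:
    val e: MyEstimator = new MyEstimator().setStepSize(0.05).setRegParam(0.01)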

How to configure spark with java

2017-07-23 Thread amit kumar singh
Hello everyone, I want to use Spark with the Java API. Please let me know how I can configure it. Thanks, A

Re: Question on Spark code

2017-07-23 Thread Reynold Xin
It means the same object ("this") is returned. On Sun, Jul 23, 2017 at 8:16 PM, tao zhan wrote: > Hello, > > I am new to Scala and Spark. > What is the "this.type" in the set function for? > > https://github.com/apache/spark/blob/481f0792944d9a77f0fe8b5e2596da >

how to convert the binary from Kafka to string please

2017-07-23 Thread ??????????
Hi all, I want to convert the binary value from Kafka to a string. Would you help me please? val df = ss.readStream.format("kafka").option("kafka.bootstrap.servers", "").option("subscribe", "").load() val value = df.select("value") value.writeStream.outputMode("append")
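
A minimal sketch of one way to do this (the broker address and topic name are placeholders): the Kafka source exposes key and value as binary columns, so cast the value column to STRING before writing the stream out.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("kafka-to-string").getOrCreate()

    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host:9092") // placeholder broker
      .option("subscribe", "my-topic")                // placeholder topic
      .load()

    // CAST(value AS STRING) turns the binary payload into a UTF-8 string column.
    val values = df.selectExpr("CAST(value AS STRING) AS value")

    val query = values.writeStream
      .outputMode("append")
      .format("console")
      .start()

    query.awaitTermination()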

Re: Is there a way to run Spark SQL through REST?

2017-07-23 Thread kant kodali
@Sumedh Can I run streaming jobs on the same context with spark-jobserver? Then there is no waiting for results, since the Spark SQL job is expected to stream forever and the results of each streaming job are captured through a message queue. In my case each Spark SQL query will be a streaming job. On

Re: Get full RDD lineage for a spark job

2017-07-23 Thread Ron Gonzalez
Cool, thanks. Will give that a try... --Ron On Friday, July 21, 2017 8:09 PM, Keith Chapman wrote: You could also enable it with --conf spark.logLineage=true if you do not want to change any code. Regards, Keith. http://keith-chapman.com On Fri, Jul 21, 2017
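
A minimal sketch of what that setting surfaces: spark.logLineage=true logs the same dependency description that RDD.toDebugString returns, so you can also print it yourself without changing the submit command.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("lineage-demo").getOrCreate()
    val sc = spark.sparkContext

    // toDebugString describes the full RDD dependency (lineage) graph.
    val lineage = sc.parallelize(1 to 100)
      .map(_ * 2)
      .filter(_ % 3 == 0)
      .toDebugString

    println(lineage)

    // Or, with no code changes: spark-submit --conf spark.logLineage=true ...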

Re: custom joins on dataframe

2017-07-23 Thread Michael Armbrust
> > left.join(right, my_fuzzy_udf(left("cola"), right("cola"))) > While this could work, the problem will be that we'll have to check every possible combination of tuples from left and right using your UDF. It would be best if you could somehow partition the problem so that we could reduce the
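
A hedged sketch of the kind of partitioning suggested above (the column names and the similarity rule here are made up): adding an equi-join on a coarse blocking key lets Spark join on that key first, so the fuzzy UDF only runs within each block instead of over the full cross product.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    val spark = SparkSession.builder().appName("fuzzy-join").getOrCreate()
    import spark.implicits._

    val left  = Seq("apple", "apply", "banana").toDF("cola")
    val right = Seq("appel", "banane").toDF("colb")

    // Hypothetical fuzzy rule: lengths differ by at most one character.
    val myFuzzyUdf = udf((a: String, b: String) => math.abs(a.length - b.length) <= 1)

    // Blocking key: the first letter. Only pairs sharing it are compared by the UDF.
    val blockedLeft  = left.withColumn("block", substring($"cola", 1, 1))
    val blockedRight = right.withColumn("block", substring($"colb", 1, 1))

    val joined = blockedLeft.join(
      blockedRight,
      blockedLeft("block") === blockedRight("block") &&
        myFuzzyUdf(blockedLeft("cola"), blockedRight("colb")))

    joined.show()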

java.lang.NoClassDefFoundError: scala/runtime/AbstractPartialFunction$mcJL$sp

2017-07-23 Thread Kaushal Shriyan
I am facing an issue while connecting Apache Spark to the Apache Cassandra datastore.

unsubscribe

2017-07-23 Thread Vasilis Hadjipanos
Please unsubscribe me