Measuring execution time of some Spark transformations

2018-05-12 Thread Guillermo Ortiz Fernández
I want to measure how long different Spark transformations, such as map, joinWithCassandraTable and so on, take. Which is the best approximation to do it?

    def time[R](block: => R): R = {
      val t0 = System.nanoTime()
      val result = block
      val t1 = System.nanoTime()
      println(s"Elapsed time: ${(t1 - t0) / 1e6} ms")
      result
    }
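One caveat, reusing the time helper above (a minimal sketch, assuming an existing SparkSession named spark): transformations such as map are lazy, so timing the transformation alone only measures building the lineage. Wrap an action to time the actual work.

    val rdd = spark.sparkContext.parallelize(1 to 1000000)

    // Returns almost immediately: map only records the transformation.
    val mapped = time { rdd.map(_ * 2) }

    // count() forces evaluation, so this timing includes the map work.
    time { mapped.count() }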

Having issues when running Spark with S3

2018-05-12 Thread Shivam Sharma
Hi, I am putting data using the spark-redshift connector ("com.databricks" %% "spark-redshift" % "1.1.0"), which uses S3. When I set the *fs.s3a.fast.upload=true* Hadoop property, my program works fine. But if this property is false, it fails with a NullPointerException.
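For reference, a minimal sketch of one way to set that property (the property name comes from the message above; the session setup is an assumption): Hadoop properties can be forwarded to Spark with the spark.hadoop. prefix.

    import org.apache.spark.sql.SparkSession

    // The spark.hadoop. prefix forwards the setting to the Hadoop Configuration.
    val spark = SparkSession.builder()
      .appName("redshift-load")
      .config("spark.hadoop.fs.s3a.fast.upload", "true")
      .getOrCreate()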

Re: Spark Structured Streaming is giving error “org.apache.spark.sql.AnalysisException: Inner join between two streaming DataFrames/Datasets is not supported;”

2018-05-12 Thread ThomasThomas
Thanks for the quick response... I'm able to inner join the DataFrames with a regular Spark session; the issue occurs only with the streaming session. BTW, I'm using Spark 2.2.0.

Re: Spark Structured Streaming is giving error “org.apache.spark.sql.AnalysisException: Inner join between two streaming DataFrames/Datasets is not supported;”

2018-05-12 Thread रविशंकर नायर
Perhaps this link might help you: https://stackoverflow.com/questions/48699445/inner-join-not-working-in-dataframe-using-spark-2-1 Best, Passion
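Worth noting: stream-stream inner joins were added in Spark 2.3.0; on 2.2.x they fail with the AnalysisException in the subject. A minimal sketch on 2.3+ (the rate source and column names are placeholders):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("stream-stream-join").getOrCreate()
    import spark.implicits._

    val left = spark.readStream.format("rate").load()
      .select($"value".as("id"), $"timestamp".as("leftTime"))
    val right = spark.readStream.format("rate").load()
      .select($"value".as("id"), $"timestamp".as("rightTime"))

    // Inner equi-join between two streaming DataFrames; unsupported before 2.3.0.
    val joined = left.join(right, "id")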

Spark Structured Streaming is giving error “org.apache.spark.sql.AnalysisException: Inner join between two streaming DataFrames/Datasets is not supported;”

2018-05-12 Thread ThomasThomas
Hi There, Our use case is like this: we have nested (multi-level) JSON messages flowing through a Kafka queue. We read the messages from Kafka using Spark Structured Streaming (SSS), explode the data, flatten everything into a single record using DataFrame joins, and land it into a relational database.
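A minimal sketch of the reading/flattening half of that pipeline (topic, schema, and field names are all hypothetical; assumes an existing SparkSession named spark with its implicits imported):

    import org.apache.spark.sql.functions.{explode, from_json}
    import org.apache.spark.sql.types._

    // Hypothetical schema for the nested JSON payload.
    val schema = new StructType()
      .add("id", StringType)
      .add("items", ArrayType(new StructType()
        .add("name", StringType)
        .add("qty", IntegerType)))

    val flattened = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host:9092")
      .option("subscribe", "events")
      .load()
      // Kafka delivers the payload as bytes; parse it against the schema.
      .select(from_json($"value".cast("string"), schema).as("msg"))
      // explode turns each array element into its own row.
      .select($"msg.id", explode($"msg.items").as("item"))
      .select($"id", $"item.name", $"item.qty")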

Dataset error with Encoder

2018-05-12 Thread Masf
Hi, I have the following issue:

    case class Item(c1: String, c2: String, c3: Option[BigDecimal])

    import sparkSession.implicits._

    val result = df.as[Item].groupByKey(_.c1).mapGroups((key, value) => {
      value
    })

But I get the following error at compile time: Unable to find encoder for type
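The likely cause (my reading, not confirmed in the thread): the lambda returns the raw Iterator from mapGroups, and Spark has no Encoder for Iterator. A minimal sketch of one way around it, assuming the intent is to keep every row per group, is flatMapGroups, which accepts any TraversableOnce of an encodable type:

    // flatMapGroups re-emits each Item, so the result is Dataset[Item],
    // for which sparkSession.implicits._ already provides an Encoder.
    val result = df.as[Item]
      .groupByKey(_.c1)
      .flatMapGroups((key, values) => values.toList)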

Re: ordered ingestion not guaranteed

2018-05-12 Thread ravidspark
Jorn, thanks for the response. My downstream database is Kudu. 1. Yes, as you suggested, I have been using a central caching mechanism that caches the RDD results and compares them with the next batch, checking for the latest timestamps and ignoring the old ones. But, I see
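For the timestamp comparison itself, a minimal sketch of one common variant (key and timestamp column names are hypothetical): keep only the newest record per key within a batch before writing downstream.

    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.expressions.Window
    import org.apache.spark.sql.functions.{col, row_number}

    def latestPerKey(batch: DataFrame): DataFrame = {
      // Rank records per key by event timestamp, newest first, and keep rank 1.
      val w = Window.partitionBy("key").orderBy(col("event_ts").desc)
      batch
        .withColumn("rn", row_number().over(w))
        .filter(col("rn") === 1)
        .drop("rn")
    }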