Fwd: [SparkSQL] Project using NamedExpression

2017-03-21 Thread Aviral Agarwal
Hi guys, I want to transform a Row using NamedExpression. Below is the code snippet that I am using:

def apply(dataFrame: DataFrame, selectExpressions: java.util.List[String]): RDD[UnsafeRow] = {
  val exprArray = selectExpressions.map(s => Column(SqlParser.parseExpression(s)).named)
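[Editor's note: SqlParser.parseExpression is the Spark 1.x parser entry point and is gone in 2.x. Below is a minimal sketch of one way to get a comparable result on Spark 2.x, assuming the goal is just to project with SQL expression strings and read back the physical rows. Note that queryExecution.toRdd is an internal API, and the InternalRows it yields are usually, but not contractually, UnsafeRows.]

    import scala.collection.JavaConverters._
    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.catalyst.InternalRow

    // Sketch only: selectExpr parses and names the expression strings for us,
    // and queryExecution.toRdd exposes the physical row representation.
    def project(df: DataFrame, selectExpressions: java.util.List[String]): RDD[InternalRow] = {
      df.selectExpr(selectExpressions.asScala: _*).queryExecution.toRdd
    }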

Re: subscribe to spark dev list

2017-03-21 Thread Yash Sharma
Sorry for the spam; I used the wrong email address. On Wed, 22 Mar 2017 at 12:01 Yash Sharma wrote: > subscribe to spark dev list >

subscribe to spark dev list

2017-03-21 Thread Yash Sharma
subscribe to spark dev list

Re: Outstanding Spark 2.1.1 issues

2017-03-21 Thread Nick Pentreath
As for SPARK-19759, I don't think that needs to be targeted for 2.1.1, so we don't need to worry about it. On Tue, 21 Mar 2017 at 13:49 Holden Karau wrote: > I agree with Michael, I think we've got some outstanding issues

Re: Outstanding Spark 2.1.1 issues

2017-03-21 Thread Holden Karau
I agree with Michael; I think we've got some outstanding issues, but none of them seem like regressions from 2.1, so we should be good to start the RC process. On Tue, Mar 21, 2017 at 1:41 PM, Michael Armbrust wrote: > Please speak up if I'm wrong, but none of these seem

Re: Outstanding Spark 2.1.1 issues

2017-03-21 Thread Michael Armbrust
Please speak up if I'm wrong, but none of these seem like critical regressions from 2.1. As such, I'll start the RC process later today. On Mon, Mar 20, 2017 at 9:52 PM, Holden Karau wrote: > I'm not super sure it should be a blocker for 2.1.1 -- is it a regression? >

Re: Why are DataFrames always read with nullable=True?

2017-03-21 Thread Jason White
Thanks for pointing to those JIRA tickets; I hadn't seen them. It's encouraging that they are recent. I hope we can find a solution there. -- View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/Why-are-DataFrames-always-read-with-nullable-True-tp21207p21218.html
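[Editor's note: one stopgap until those tickets land, sketched below on the assumption that the data really contains no nulls (Spark will not verify this for you), is to rebuild the DataFrame against a copy of its schema with every field marked non-nullable.]

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.types.StructType

    // Rewrite the schema metadata only; no data is scanned or validated.
    def asNonNullable(spark: SparkSession, df: DataFrame): DataFrame = {
      val schema = StructType(df.schema.map(_.copy(nullable = false)))
      spark.createDataFrame(df.rdd, schema)
    }

Going through df.rdd drops the optimized plan and forces a round trip through Row objects, so this is best applied once, right after the read.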

Re: Issues: Generate JSON with null values in Spark 2.0.x

2017-03-21 Thread Dongjin Lee
Hi Chetan, Sadly, you cannot; Spark is configured to ignore null values when writing JSON. (Check JacksonMessageWriter and find JsonInclude.Include.NON_NULL in the code.) If you want that functionality, it would be much better to file the problem in JIRA. Best, Dongjin On Mon, Mar 20,
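[Editor's note: until such a JIRA is resolved, one rough workaround is to bypass the built-in JSON writer and render rows yourself so that null fields survive. A sketch, assuming a flat schema of strings, numbers, and booleans (no nesting, and only double-quote escaping).]

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.DataFrame

    // Emit each field explicitly, writing "field": null instead of dropping
    // the field the way df.write.json does in 2.0.x.
    def toJsonKeepingNulls(df: DataFrame): RDD[String] = {
      val names = df.schema.fieldNames
      df.rdd.map { row =>
        names.indices.map { i =>
          val rendered =
            if (row.isNullAt(i)) "null"
            else row.get(i) match {
              case s: String => "\"" + s.replace("\"", "\\\"") + "\""
              case other     => other.toString // numbers, booleans
            }
          "\"" + names(i) + "\": " + rendered
        }.mkString("{", ", ", "}")
      }
    }

The resulting RDD[String] can then be saved with saveAsTextFile, at the cost of losing the DataFrame writer's partitioning and compression options.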