[vote] Apache Spark 2.0.0-preview release (rc1)

2016-05-17 Thread Reynold Xin
Hi, In the past the Apache Spark community has created preview packages (not official releases) and used those as opportunities to ask community members to test upcoming versions of Apache Spark. Several people in the Apache community have suggested we conduct votes for these preview

Re: CompileException for spark-sql generated code in 2.0.0-SNAPSHOT

2016-05-17 Thread Michael Armbrust
Yeah, can you open a JIRA with that reproduction please? You can ping me on it. On Tue, May 17, 2016 at 4:55 PM, Reynold Xin wrote: > It seems like the problem here is that we are not using unique names for mapelements_isNull? On Tue, May 17, 2016 at 3:29 PM,

Re: CompileException for spark-sql generated code in 2.0.0-SNAPSHOT

2016-05-17 Thread Reynold Xin
It seems like the problem here is that we are not using unique names for mapelements_isNull? On Tue, May 17, 2016 at 3:29 PM, Koert Kuipers wrote: > Hello all, we are slowly expanding our test coverage for Spark 2.0.0-SNAPSHOT to more in-house projects. Today I ran into

CompileException for spark-sql generated code in 2.0.0-SNAPSHOT

2016-05-17 Thread Koert Kuipers
Hello all, we are slowly expanding our test coverage for Spark 2.0.0-SNAPSHOT to more in-house projects. Today I ran into this issue... This runs fine: val df = sc.parallelize(List(("1", "2"), ("3", "4"))).toDF("a", "b"); df.map(row => row)(RowEncoder(df.schema)).select("a", "b").show
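For readers following along, here is a self-contained sketch of the snippet quoted above that the thread reports as working (the failing variant is cut off in this preview); it assumes a local Spark 2.0 session and the standard implicits:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.catalyst.encoders.RowEncoder

    val spark = SparkSession.builder().master("local[*]").appName("mapelements-repro").getOrCreate()
    import spark.implicits._
    val sc = spark.sparkContext

    // Identity map over Rows, re-encoded with the original schema, then a projection.
    // This is the variant the thread reports as compiling and running fine.
    val df = sc.parallelize(List(("1", "2"), ("3", "4"))).toDF("a", "b")
    df.map(row => row)(RowEncoder(df.schema))
      .select("a", "b")
      .show()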

Indexing of RDDs and DF in 2.0?

2016-05-17 Thread Michael Segel
Hi, I saw a replay of a talk about what’s coming in Spark 2.0 and its performance improvements… I am curious about indexing of data sets. In HBase/MapR-DB you can create ordered sets of indexes through an inverted table. There, you can take the intersection of the indexes to find the result set of
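A rough Scala sketch of the inverted-index-and-intersection idea described above, expressed over plain pair RDDs (Spark itself ships no such index structure; invertedIndex and lookup here are illustrative helpers, not Spark APIs):

    import org.apache.spark.rdd.RDD

    // Build a hypothetical inverted index: attribute value -> set of record ids,
    // analogous to the inverted tables described for HBase/MapR-DB.
    def invertedIndex(records: RDD[(Long, String)]): RDD[(String, Set[Long])] =
      records.map { case (id, value) => (value, Set(id)) }.reduceByKey(_ ++ _)

    // Fetch the posting set for one attribute value (empty if the value is absent).
    def lookup(index: RDD[(String, Set[Long])], value: String): Set[Long] =
      index.lookup(value).headOption.getOrElse(Set.empty)

    // The result set for a conjunctive predicate is the intersection of the posting sets,
    // e.g. ids satisfying colA = "x" AND colB = "y":
    //   val hits = lookup(indexA, "x") intersect lookup(indexB, "y")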

Re: SBT doesn't pick resource file after clean

2016-05-17 Thread Marcelo Vanzin
Perhaps you need to make the "compile" task of the appropriate module depend on the task that generates the resource file? Sorry, but my knowledge of sbt doesn't go very far. On Tue, May 17, 2016 at 11:58 AM, dhruve ashar wrote: > We are trying to pick the Spark
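A minimal build.sbt sketch of that suggestion (sbt 0.13-era syntax; the generateBuildInfo task and file name are hypothetical stand-ins for whatever the module's generator actually is):

    // Hypothetical task that writes the properties file under managed resources.
    lazy val generateBuildInfo = taskKey[File]("write the build-info resource file")

    generateBuildInfo := {
      val out = (resourceManaged in Compile).value / "spark-version-info.properties"
      IO.write(out, s"version=${version.value}\n")
      out
    }

    // Make compile depend on the generator, so the file is recreated even right after `sbt clean`.
    compile in Compile := ((compile in Compile) dependsOn generateBuildInfo).value

Registering the generator under resourceGenerators in Compile (sketched under the original question below) additionally takes care of copying the file onto the classpath.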

SBT doesn't pick resource file after clean

2016-05-17 Thread dhruve ashar
We are trying to pick the Spark version automatically from the pom instead of manually modifying the files. This also includes richer information such as the last commit, the version, and the user who built the code, to better identify the running framework. The setup is as follows: - A shell script
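One conventional way to wire such a script into sbt so that the file is regenerated even after a clean is to register it as a managed resource generator; a sketch, assuming a hypothetical script build/write-build-info.sh that writes spark-version-info.properties into the directory passed as its first argument:

    // build.sbt sketch: run the shell script during managed resource generation,
    // so the generated properties file reappears on every build, including after `sbt clean`.
    resourceGenerators in Compile += Def.task {
      import scala.sys.process._
      val outDir = (resourceManaged in Compile).value
      outDir.mkdirs()
      Seq("bash", "build/write-build-info.sh", outDir.getAbsolutePath, version.value).!!
      Seq(outDir / "spark-version-info.properties")
    }.taskValue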

Re: HiveContext.refreshTable() missing in spark 2.0

2016-05-17 Thread Yin Huai
Hi Yang, I think it was deleted accidentally while we were working on the API migration. We will add it back (https://issues.apache.org/jira/browse/SPARK-15367). Thanks, Yin On Fri, May 13, 2016 at 2:47 AM, 汪洋 wrote: > Hi all, I notice that HiveContext used to have
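For anyone hitting this in the meantime, the Spark 2.0 replacement tracked by SPARK-15367 is expected to live on the new SparkSession catalog API; a sketch, assuming that change has landed and using a placeholder table name:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().enableHiveSupport().getOrCreate()

    // Spark 2.0 counterpart of HiveContext.refreshTable: invalidate the cached
    // metadata/file listing for a table whose data changed outside of Spark.
    spark.catalog.refreshTable("my_table")   // "my_table" is a placeholder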