Thanks Michael for confirming!

On Thu, Jul 31, 2014 at 2:43 PM, Michael Armbrust <mich...@databricks.com>
wrote:

> The performance should be the same using the DSL or SQL strings.
>
>
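A quick way to sanity-check that, assuming the 1.x SchemaRDD developer API
(queryExecution) and hypothetical names (sqlContext, events): the two forms
should compile down to the same optimized and physical plans.

    // Sketch only: events is an existing SchemaRDD already registered as "events",
    // and import sqlContext._ brings the symbol/expression DSL into scope.
    val viaSql = sqlContext.sql(
      "SELECT keyword, COUNT(DISTINCT userId) FROM events GROUP BY keyword")
    val viaDsl = events.groupBy('keyword)('keyword, countDistinct('userId))

    // queryExecution shows the logical, optimized, and physical plans;
    // the plans for the two queries should match.
    println(viaSql.queryExecution)
    println(viaDsl.queryExecution)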
> On Thu, Jul 31, 2014 at 2:36 PM, Buntu Dev <buntu...@gmail.com> wrote:
>
>> I wasn't sure whether registerAsTable() and then querying against that
>> table adds extra performance overhead, and whether the DSL avoids that.
>>
>>
>> On Thu, Jul 31, 2014 at 2:33 PM, Zongheng Yang <zonghen...@gmail.com>
>> wrote:
>>
>>> Looking at what this patch [1] has to do to achieve it, I am not sure
>>> if you can do the same thing in 1.0.0 using the DSL only. Just curious,
>>> why don't you use the hql() / sql() methods and pass a query string
>>> in?
>>>
>>> [1] https://github.com/apache/spark/pull/1211/files
>>>
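For reference, a minimal sketch of the query-string route on 1.0.x, using
hypothetical names (events.parquet, an "events" table). Whether the lightweight
SqlParser behind sql() accepts COUNT(DISTINCT ...) in 1.0.0 I haven't verified;
hql() on a HiveContext accepts the HiveQL equivalent.

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)   // sc: an existing SparkContext

    // Load the Parquet file and register it so it can be queried by name.
    val events = sqlContext.parquetFile("events.parquet")
    events.registerAsTable("events")

    // Distinct count of userId per keyword via a plain SQL string.
    val counts = sqlContext.sql(
      "SELECT keyword, COUNT(DISTINCT userId) FROM events GROUP BY keyword")
    counts.collect().foreach(println)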
>>> On Thu, Jul 31, 2014 at 2:20 PM, Buntu Dev <buntu...@gmail.com> wrote:
>>> > Thanks Zongheng for the pointer. Is there a way to achieve the same in
>>> > 1.0.0?
>>> >
>>> >
>>> > On Thu, Jul 31, 2014 at 1:43 PM, Zongheng Yang <zonghen...@gmail.com>
>>> wrote:
>>> >>
>>> >> countDistinct was added recently and is in 1.0.2. If you are using that
>>> >> or the master branch, you could try something like:
>>> >>
>>> >>     r.select('keyword, countDistinct('userId)).groupBy('keyword)
>>> >>
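A sketch of how that might look end to end on 1.0.2 or master, assuming
import sqlContext._ brings the expression DSL (including countDistinct) into
scope and using a hypothetical events.parquet file. Note that groupBy on a
SchemaRDD in the 1.x DSL takes the grouping expressions and the aggregate
expressions as two separate parameter lists, so the aggregation may need to be
written in that curried form rather than as a chained select().groupBy():

    import sqlContext._   // symbol -> attribute conversions, countDistinct, etc.

    val r = sqlContext.parquetFile("events.parquet")

    // Group by keyword and compute a distinct count of userId per group.
    val distinctUsers =
      r.groupBy('keyword)('keyword, countDistinct('userId) as 'uniqueUsers)

    distinctUsers.collect().foreach(println)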
>>> >> On Thu, Jul 31, 2014 at 12:27 PM, buntu <buntu...@gmail.com> wrote:
>>> >> > I'm looking to write a select statement to get a distinct count of
>>> >> > userId grouped by the keyword column on a Parquet-file SchemaRDD,
>>> >> > equivalent to:
>>> >> >   SELECT keyword, count(distinct(userId)) FROM table GROUP BY keyword
>>> >> >
>>> >> > How do I write this using chained select().groupBy() operations?
>>> >> >
>>> >> > Thanks!
>>> >> >
>>> >> >
>>> >> >
>>> >
>>> >
>>>
>>
>>
>
