Why don't you wipe everything out and try again?
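Concretely, something like this (the build flags are copied from your earlier
mail; note that 'git clean -fdx' would also remove the scala-2.10 symlink
workaround, which may be masking the real problem):

$ cd /home/hbase/spark
$ git clean -fdx
$ build/mvn clean -Phive -Phive-thriftserver -Pyarn -Phadoop-2.6 -Dhadoop.version=2.7.0 package -DskipTests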

On Monday, April 4, 2016, Ted Yu <yuzhih...@gmail.com> wrote:

> The commit you mentioned was made Friday.
> I refreshed my workspace on Sunday, so it was included.
>
> Maybe this was related:
>
> $ bin/spark-shell
> Failed to find Spark jars directory
> (/home/hbase/spark/assembly/target/scala-2.10).
> You need to build Spark before running this program.
>
> Then I did:
>
> $ ln -s /home/hbase/spark/assembly/target/scala-2.11 assembly/target/scala-2.10
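>
> (A less hacky alternative, assuming the launcher scripts still consult this
> environment variable, would be to point them at the right directory
> explicitly:
>
> $ export SPARK_SCALA_VERSION=2.11
> $ bin/spark-shell
> )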
>
> Cheers
>
> On Mon, Apr 4, 2016 at 4:06 AM, Herman van Hövell tot Westerflier <
> hvanhov...@questtec.nl> wrote:
>
>> No, it can't. You only need the implicits when you are using the Catalyst
>> DSL.
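>>
>> For illustration only (the Catalyst DSL is an internal API, so treat this
>> as a sketch rather than supported usage): building a plan with it, e.g.
>>
>> scala> import org.apache.spark.sql.catalyst.dsl.expressions._
>> scala> import org.apache.spark.sql.catalyst.dsl.plans._
>> scala> import org.apache.spark.sql.catalyst.plans.logical.LocalRelation
>> scala> val plan = LocalRelation('a.int).where('a > 1)
>>
>> needs those implicits in scope; a plain sql("...") call does not.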
>>
>> The error you get is because the parser does not recognize the CODEGEN
>> keyword (which was the case before we introduced it in
>> https://github.com/apache/spark/commit/fa1af0aff7bde9bbf7bfa6a3ac74699734c2fd8a).
>> That suggests to me that you are not on the latest master.
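>>
>> One quick way to check, assuming a plain git checkout, is to ask git
>> whether that commit is an ancestor of your HEAD:
>>
>> $ git merge-base --is-ancestor fa1af0aff7bde9bbf7bfa6a3ac74699734c2fd8a HEAD && echo included || echo missing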
>>
>> Kind regards,
>>
>> Herman van Hövell
>>
>> 2016-04-04 12:15 GMT+02:00 Ted Yu <yuzhih...@gmail.com>:
>>
>>> Could the error I encountered be due to a missing import of implicits?
>>>
>>> Thanks
>>>
>>> On Sun, Apr 3, 2016 at 9:42 PM, Reynold Xin <r...@databricks.com> wrote:
>>>
>>>> Works for me on latest master.
>>>>
>>>>
>>>>
>>>> scala> sql("explain codegen select 'a' as a group by 1").head
>>>> res3: org.apache.spark.sql.Row =
>>>> [Found 2 WholeStageCodegen subtrees.
>>>> == Subtree 1 / 2 ==
>>>> WholeStageCodegen
>>>> :  +- TungstenAggregate(key=[], functions=[], output=[a#10])
>>>> :     +- INPUT
>>>> +- Exchange SinglePartition, None
>>>>    +- WholeStageCodegen
>>>>       :  +- TungstenAggregate(key=[], functions=[], output=[])
>>>>       :     +- INPUT
>>>>       +- Scan OneRowRelation[]
>>>>
>>>> Generated code:
>>>> /* 001 */ public Object generate(Object[] references) {
>>>> /* 002 */   return new GeneratedIterator(references);
>>>> /* 003 */ }
>>>> /* 004 */
>>>> /* 005 */ /** Codegened pipeline for:
>>>> /* 006 */ * TungstenAggregate(key=[], functions=[], output=[a#10])
>>>> /* 007 */ +- INPUT
>>>> /* 008 */ */
>>>> /* 009 */ final class GeneratedIterator extends
>>>> org.apache.spark.sql.execution.BufferedRowIterator {
>>>> /* 010 */   private Object[] references;
>>>> /* 011 */   ...
>>>>
>>>>
>>>> On Sun, Apr 3, 2016 at 9:38 PM, Jacek Laskowski <ja...@japila.pl> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Looks related to the recent commit...
>>>>>
>>>>> Repository: spark
>>>>> Updated Branches:
>>>>>   refs/heads/master 2262a9335 -> 1f0c5dceb
>>>>>
>>>>> [SPARK-14350][SQL] EXPLAIN output should be in a single cell
>>>>>
>>>>> Jacek
>>>>> 03.04.2016 7:00 PM "Ted Yu" <yuzhih...@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>> Based on master branch refreshed today, I issued 'git clean -fdx'
>>>>>> first.
>>>>>>
>>>>>> Then this command:
>>>>>> build/mvn clean -Phive -Phive-thriftserver -Pyarn -Phadoop-2.6 -Dhadoop.version=2.7.0 package -DskipTests
>>>>>>
>>>>>> I got the following error:
>>>>>>
>>>>>> scala>  sql("explain codegen select 'a' as a group by 1").head
>>>>>> org.apache.spark.sql.catalyst.parser.ParseException:
>>>>>> extraneous input 'codegen' expecting {'(', 'SELECT', 'FROM', 'ADD',
>>>>>> 'DESC', 'WITH', 'VALUES', 'CREATE', 'TABLE', 'INSERT', 'DELETE',
>>>>>> 'DESCRIBE', 'EXPLAIN', 'LOGICAL', 'SHOW', 'USE', 'DROP', 'ALTER', 'MAP',
>>>>>> 'SET', 'START', 'COMMIT', 'ROLLBACK', 'REDUCE', 'EXTENDED', 'REFRESH',
>>>>>> 'CLEAR', 'CACHE', 'UNCACHE', 'FORMATTED', 'DFS', 'TRUNCATE', 'ANALYZE',
>>>>>> 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 'EXPORT', 'IMPORT',
>>>>>> 'LOAD'}(line 1, pos 8)
>>>>>>
>>>>>> == SQL ==
>>>>>> explain codegen select 'a' as a group by 1
>>>>>> --------^^^
>>>>>>
>>>>>> Can someone shed some light?
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>
>>>>
>>>
>>
>
