The plan dump indicates that you are not using the latest code from the
ptf-windowing branch. So first of all, please try with the latest code.
Otherwise, can you post the log file of the failed task and also tell us which
version of the code you are using?
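
If it helps, the failed attempt's log is linked from the task URL that Hive
prints with the job output; on a Hadoop 1.x node it should also sit on the
TaskTracker under its userlogs directory, along these lines (the attempt-id
suffix here is only an illustration, and the exact layout can vary by version):

  $HADOOP_LOG_DIR/userlogs/job_201302221435_0001/attempt_201302221435_0001_m_000000_0/syslog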

Regards,
Harish.

From: neelesh gadhia <ngad...@yahoo.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>, neelesh gadhia <ngad...@yahoo.com>
Date: Friday, February 22, 2013 2:55 PM
To: "user@hive.apache.org" <user@hive.apache.org>, "hashut...@apache.org" <hashut...@apache.org>
Subject: Re: [SQLWindowing] Windowing function output path syntax (#26)

Thanks Harish for your quick response.

I tried it with the new syntax, using one of the examples in
ptf_general_queries.q, and got the following error. Am I still doing something
wrong here?

hive> select mid, tdate, tamt,sum(tamt) as com_sum over (rows between unbounded 
preceding and current row)
    > from t_enc
    > distribute by mid
    > sort by mid, tdate;

1.TS :
RowResolver::
    columns:[t_enc.mid, t_enc.tdate, t_enc.tamt, 
t_enc.BLOCK__OFFSET__INSIDE__FILE, t_enc.INPUT__FILE__NAME]
    Aliases:[
        t_enc:[mid -> mid, tdate -> tdate, tamt -> tamt, 
block__offset__inside__file -> BLOCK__OFFSET__INSIDE__FILE, input__file__name 
-> INPUT__FILE__NAME
    ]
    columns mapped to expressions:[
    ]

2.RS :
RowResolver::
    columns:[t_enc.mid, t_enc.tdate, t_enc.tamt, 
t_enc.BLOCK__OFFSET__INSIDE__FILE, t_enc.INPUT__FILE__NAME]
    Aliases:[
        t_enc:[mid -> mid, tdate -> tdate, tamt -> tamt, 
block__offset__inside__file -> BLOCK__OFFSET__INSIDE__FILE, input__file__name 
-> INPUT__FILE__NAME
    ]
    columns mapped to expressions:[
    ]

3.EX :
RowResolver::
    columns:[t_enc._col0, t_enc._col1, t_enc._col2]
    Aliases:[
        t_enc:[mid -> _col0, tdate -> _col1, tamt -> _col2
    ]
    columns mapped to expressions:[
    ]

4.PTF :
RowResolver::
    columns:[<null>._col0, t_enc._col1, t_enc._col2, t_enc._col3]
    Aliases:[
        :[(tok_function sum (tok_table_or_col tamt) (tok_windowspec 
(tok_windowrange (preceding unbounded) current))) -> _col0
        t_enc:[mid -> _col1, tdate -> _col2, tamt -> _col3
    ]
    columns mapped to expressions:[
        (TOK_FUNCTION sum (TOK_TABLE_OR_COL tamt) (TOK_WINDOWSPEC 
(TOK_WINDOWRANGE (preceding unbounded) current))) -> (TOK_FUNCTION sum 
(TOK_TABLE_OR_COL tamt) (TOK_WINDOWSPEC (TOK_WINDOWRANGE (preceding unbounded) 
current)))
    ]

insclause-0:
Def ObjectInspector:[_col0, _col1, _col2]
SerDe:org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
windowingtablefunction:
Def ObjectInspector:[com_sum, _col0, _col1, _col2]
SerDe:org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe
Evaluator Output ObjectInspector:[com_sum, _col0, _col1, _col2]
SelectList:_col0, _col1, _col2, _col3

1.TS :
RowResolver::
    columns:[t_enc.mid, t_enc.tdate, t_enc.tamt, 
t_enc.BLOCK__OFFSET__INSIDE__FILE, t_enc.INPUT__FILE__NAME]
    Aliases:[
        t_enc:[mid -> mid, tdate -> tdate, tamt -> tamt, 
block__offset__inside__file -> BLOCK__OFFSET__INSIDE__FILE, input__file__name 
-> INPUT__FILE__NAME
    ]
    columns mapped to expressions:[
    ]

2.RS :
RowResolver::
    columns:[t_enc.mid, t_enc.tdate, t_enc.tamt, 
t_enc.BLOCK__OFFSET__INSIDE__FILE, t_enc.INPUT__FILE__NAME]
    Aliases:[
        t_enc:[mid -> mid, tdate -> tdate, tamt -> tamt, 
block__offset__inside__file -> BLOCK__OFFSET__INSIDE__FILE, input__file__name 
-> INPUT__FILE__NAME
    ]
    columns mapped to expressions:[
    ]

3.EX :
RowResolver::
    columns:[t_enc._col0, t_enc._col1, t_enc._col2]
    Aliases:[
        t_enc:[mid -> _col0, tdate -> _col1, tamt -> _col2
    ]
    columns mapped to expressions:[
    ]

4.PTF :
RowResolver::
    columns:[<null>._col0, t_enc._col1, t_enc._col2, t_enc._col3]
    Aliases:[
        :[(tok_function sum (tok_table_or_col tamt) (tok_windowspec 
(tok_windowrange (preceding unbounded) current))) -> _col0
        t_enc:[mid -> _col1, tdate -> _col2, tamt -> _col3
    ]
    columns mapped to expressions:[
        (TOK_FUNCTION sum (TOK_TABLE_OR_COL tamt) (TOK_WINDOWSPEC 
(TOK_WINDOWRANGE (preceding unbounded) current))) -> (TOK_FUNCTION sum 
(TOK_TABLE_OR_COL tamt) (TOK_WINDOWSPEC (TOK_WINDOWRANGE (preceding unbounded) 
current)))
    ]

5.SEL :
RowResolver::
    columns:[<null>._col0, <null>._col1, <null>._col2, <null>._col3]
    Aliases:[
        <null>:[mid -> _col0, tdate -> _col1, tamt -> _col2, com_sum -> _col3
    ]
    columns mapped to expressions:[
    ]

6.FS :
RowResolver::
    columns:[<null>._col0, <null>._col1, <null>._col2, <null>._col3]
    Aliases:[
        <null>:[mid -> _col0, tdate -> _col1, tamt -> _col2, com_sum -> _col3
    ]
    columns mapped to expressions:[
    ]

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201302221435_0001, Tracking URL = 
http://localhost:50030/jobdetails.jsp?jobid=job_201302221435_0001
Kill Command = /usr/local/Cellar/hadoop/1.1.1/libexec/bin/../bin/hadoop job  
-kill job_201302221435_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2013-02-22 14:52:43,467 Stage-1 map = 0%,  reduce = 0%
2013-02-22 14:53:05,568 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201302221435_0001 with errors
Error during job, obtaining debugging information...
Job Tracking URL: 
http://localhost:50030/jobdetails.jsp?jobid=job_201302221435_0001
Examining task ID: task_201302221435_0001_m_000002 (and more) from job 
job_201302221435_0001

Task with the most failures(4):
-----
Task ID:
  task_201302221435_0001_m_000000

URL:
  
http://localhost:50030/taskdetails.jsp?jobid=job_201302221435_0001&tipid=task_201302221435_0001_m_000000
-----
Diagnostic Messages for this Task:
java.lang.RuntimeException: java.util.NoSuchElementException
    at 
org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:228)
    at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:255)
    at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:381)
    at 
org.apache.hadoop.hive.ql.io.HiveInputFormat.pushProjectionsAndFilters(HiveInputFormat.java:374)
    at 
org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:540)
    at 
org.apache.hadoop.mapred.MapTask$TrackedRecordReader.<init>(MapTask.java:197)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:418)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.util.NoSuchElementException
    at java.util.Vector.lastElement(Vector.java:456)
    at com.sun.beans.ObjectHandler.lastExp(ObjectHandler.java:134)
    at com.sun.beans.ObjectHandler.addArg(ObjectHandler.java:119)
    at com.sun.beans.ObjectHandler.endElement(ObjectHandler.java:374)
    at 
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.endElement(AbstractSAXParser.java:593)
    at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanEndElement(XMLDocumentFragmentScannerImpl.java:1782)
    at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2939)
    at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:647)
    at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
    at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
    at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
    at 
com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
    at 
com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
    at 
com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
    at javax.xml.parsers.SAXParser.parse(SAXParser.java:364)
    at javax.xml.parsers.SAXParser.parse(SAXParser.java:142)
    at java.beans.XMLDecoder$1.run(XMLDecoder.java:248)
    at java.beans.XMLDecoder$1.run(XMLDecoder.java:242)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.beans.XMLDecoder.getHandler(XMLDecoder.java:242)
    at java.beans.XMLDecoder.close(XMLDecoder.java:155)
    at 
org.apache.hadoop.hive.ql.exec.Utilities.deserializeMapRedWork(Utilities.java:525)
    at 
org.apache.hadoop.hive.ql.exec.Utilities.getMapRedWork(Utilities.java:220)
    ... 12 more


FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
hive>





________________________________
From: "Butani, Harish" <harish.but...@sap.com<mailto:harish.but...@sap.com>>
To: "user@hive.apache.org<mailto:user@hive.apache.org>" 
<user@hive.apache.org<mailto:user@hive.apache.org>>; neelesh gadhia 
<ngad...@yahoo.com<mailto:ngad...@yahoo.com>>; Ashutosh Chauhan 
<hashut...@apache.org<mailto:hashut...@apache.org>>
Sent: Friday, February 22, 2013 2:44 PM
Subject: Re: [SQLWindowing] Windowing function output path syntax (#26)

Hi Neelesh,

You are using the syntax from the SQLWindowing project, which was built on top
of HQL. The syntax is now standard SQL; see ptf_general_queries.q for examples.
Your example can be expressed as:

select sum(tamt) over (partition by mid order by mid rows between unbounded
preceding and current row) as cum_amt,
mid, tdate, tamt
from t_enc;
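
For a fully self-contained sketch of the same pattern, something along these
lines should work; the table definition, the column types, and the ORDER BY
tdate (rather than mid) are assumptions about the intended running total, not
taken from this thread:

-- hypothetical table matching the columns used in the thread
CREATE TABLE t_enc (mid INT, tdate STRING, tamt DOUBLE);

-- running total of tamt per mid, in tdate order
SELECT mid, tdate, tamt,
       SUM(tamt) OVER (PARTITION BY mid ORDER BY tdate
                       ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cum_amt
FROM t_enc;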

Regards,
Harish.
From: neelesh gadhia <ngad...@yahoo.com>
Reply-To: "user@hive.apache.org" <user@hive.apache.org>, neelesh gadhia <ngad...@yahoo.com>
Date: Friday, February 22, 2013 2:05 PM
To: "hashut...@apache.org" <hashut...@apache.org>, "user@hive.apache.org" <user@hive.apache.org>
Subject: Re: [SQLWindowing] Windowing function output path syntax (#26)

Hello,

I downloaded the source code from the ptf-windowing branch and built the dist
from it.

Now when I try to use a windowing function with the following QL, I get the
error shown below. Am I missing anything here? Please advise.


from <select mid, tdate, tamt from t_enc >
partition by mid
order by mid
with
sum(tamt) over rows between
unbounded preceding and current row as cum_amt
select mid,tdate,tamt,cum_amt;


hive> from <select mid, tdate, tamt from t_enc >
    > partition by mid
    > order by mid
    > with
    > sum(tamt) over rows between
    > unbounded preceding and current row as cum_amt
    > select mid,tdate,tamt,cum_amt;
NoViableAltException(258@[])
    at 
org.apache.hadoop.hive.ql.parse.HiveParser.joinSource(HiveParser.java:32612)
    at 
org.apache.hadoop.hive.ql.parse.HiveParser.fromClause(HiveParser.java:32498)
    at 
org.apache.hadoop.hive.ql.parse.HiveParser.queryStatement(HiveParser.java:26832)
    at 
org.apache.hadoop.hive.ql.parse.HiveParser.queryStatementExpression(HiveParser.java:26716)
    at 
org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:981)
    at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:687)
    at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:444)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:416)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:898)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:756)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
FAILED: ParseException line 1:5 cannot recognize input near '<' 'select' 'mid' 
in join source







________________________________
From: hbutani <notificati...@github.com>
To: hbutani/SQLWindowing <sqlwindow...@noreply.github.com>
Cc: ngadhia <ngad...@yahoo.com>
Sent: Sunday, February 17, 2013 4:50 PM
Subject: Re: [SQLWindowing] Windowing function output path syntax (#26)

Hi,
We don't actively support this library anymore. This functionality is in the
process of being folded into Hive. You can see the latest code in the
ptf-windowing branch at https://github.com/apache/hive. Also check out the
issues in the Hive JIRA whose component is PTF-Windowing.
regards,
Harish.
—
Reply to this email directly or view it on GitHub:
https://github.com/hbutani/SQLWindowing/issues/26#issuecomment-13701539




