Using INDEX inside SUBSTR

2013-07-10 Thread dyuti a
Hi All, one of my Teradata queries contains SUBSTR(PRESG_ID,INDEX(PRESG_ID,' ')+1,1) IN ('1','2','3'). Any idea how to convert the above to Hive? It seems to be a simple question but I am stuck...I didn't get a chance to try it out on the cluster :( -Thanks
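A minimal sketch of one common translation, assuming the intent is to test the character right after the first space: Hive's built-in instr() returns the 1-based position of a substring, much like Teradata's INDEX, so the predicate can usually be rewritten as:

  -- Teradata: SUBSTR(PRESG_ID, INDEX(PRESG_ID,' ')+1, 1) IN ('1','2','3')
  -- Hive: instr() gives the 1-based position of the first space
  WHERE substr(PRESG_ID, instr(PRESG_ID, ' ') + 1, 1) IN ('1', '2', '3')

Both Hive's instr() and Teradata's INDEX return 0 when the separator is missing, in which case the expression falls back to the first character of PRESG_ID; guard it with a CASE expression if that edge case matters.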

Help in ROW_NUMBER() OVER (PARTITION BY) in Hive

2013-07-02 Thread dyuti a
Hi Experts, I'm working on Teradata query conversion to a Hive environment (Hive version 0.10.0). The challenge I am facing here is converting the line below in the query. In the SELECT clause: ROW_NUMBER() OVER (PARTITION BY CLMST_KEY2 ORDER BY COUNTER) AS CLMST_ORDR_NBR When I searched I found things like
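For what it's worth, windowing functions such as ROW_NUMBER() only arrived in Hive 0.11, so there is no direct equivalent on 0.10.0; after an upgrade the Teradata line carries over almost verbatim. A minimal sketch, assuming a hypothetical table named claims:

  -- works on Hive 0.11+; on 0.10 a custom UDF or a join-based workaround is needed
  SELECT clmst_key2,
         counter,
         ROW_NUMBER() OVER (PARTITION BY clmst_key2 ORDER BY counter) AS clmst_ordr_nbr
  FROM   claims;   -- 'claims' is a placeholder table name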

Re: Need urgent help in hive query

2013-07-02 Thread dyuti a
, optimize. Robin From: dyuti a hadoop.hiv...@gmail.com Reply-To: user@hive.apache.org Date: Friday, June 28, 2013 12:05 PM To: user@hive.apache.org Subject: Re: Need urgent help in hive query Hi Robin, Thanks for your reply. Hope

Re: Fwd: Need urgent help in hive query

2013-06-28 Thread dyuti a
Hi Michael, Thanks for your help. Are there any other possible options apart from this? On Fri, Jun 28, 2013 at 10:33 PM, Michael Malak michaelma...@yahoo.com wrote: Just copy and paste the whole long expressions to their second occurrences. -- From: dyuti
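One hedged alternative to copying the expression twice: compute it once in an inner query (or, on Hive 0.13+, in a WITH clause) and refer to the alias from the outer query. The table and expression names below are placeholders, since the original query isn't visible in this snippet:

  -- the long expression appears only once, in the inner query
  SELECT t.long_expr,
         t.long_expr * 100 AS scaled_value
  FROM (
    SELECT (some_long_expression) AS long_expr   -- placeholder for the repeated expression
    FROM   source_table                          -- placeholder table name
  ) t;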

Re: Help in hive query

2012-10-31 Thread dyuti a
,receive_dd,receive_hh,receive_hh+1; On Tue, Oct 30, 2012 at 1:51 AM, dyuti a hadoop.hiv...@gmail.com wrote: Hi All, I want to perform (No. of approvals in an hour / No. of transactions in that hour)*100. //COUNT(1) AS cnt gives total transactions in an hour SELECT client_id
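A hedged sketch of that ratio, assuming a placeholder column auth_flag marks approved transactions; conditional aggregation gives both counts in one pass, and Hive's / operator returns a double, so no explicit cast is needed:

  SELECT client_id,
         receive_dd,
         receive_hh,
         -- approvals in the hour divided by all transactions in that hour, as a percentage
         SUM(CASE WHEN auth_flag = 'APPROVED' THEN 1 ELSE 0 END) / COUNT(1) * 100 AS approval_pct,
         COUNT(1) AS cnt               -- total transactions in the hour
  FROM   transactions                  -- placeholder table name
  GROUP BY client_id, receive_dd, receive_hh;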

Re: help in hive

2012-10-22 Thread dyuti a
BY start_time; On Mon, Oct 22, 2012 at 4:41 PM, dyuti a hadoop.hiv...@gmail.com wrote: Hi all, I have a hive table with 235 million records. SAMPLE INPUT:
  receive_year  receive_day  receive_hour  client
  2012          7             17           xyz
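Given the sample columns, a hedged guess at the shape of the per-hour aggregation being discussed (the full query isn't visible in this snippet, and the table name below is a placeholder):

  -- count of records per client per hour, using the column names from the sample input
  SELECT receive_year,
         receive_day,
         receive_hour,
         client,
         COUNT(1) AS cnt
  FROM   my_table               -- placeholder table name
  GROUP BY receive_year, receive_day, receive_hour, client;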