Re: [Error] : while registering Hbase table with hive

2016-03-03 Thread Swagatika Tripathy
Hi Divya,
Can you paste the Hive table structure as well as the HBase table structure?

Regards,
Swagatika
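
For reference, this SerDe error usually means that the hbase.columns.mapping value does not pair every Hive column with either :key or a cf:qualifier entry (a missing "cf:" prefix or stray spaces in one of the comma-separated entries are the usual culprits). A minimal sketch of a well-formed mapping, with hypothetical table and column names:

CREATE EXTERNAL TABLE hbase_hive_table (
  rowkey STRING,
  col1   STRING,
  col2   INT
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key,cf1:col1,cf1:col2"
)
TBLPROPERTIES ("hbase.table.name" = "my_hbase_table");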

On Mon, Feb 29, 2016 at 8:51 AM, Divya Gehlot 
wrote:

> Hi,
> I am trying to register an HBase table with Hive and I get the following error:
>
>  Error while processing statement: FAILED: Execution Error, return code 1
>> from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException:
>> MetaException(message:org.apache.hadoop.hive.serde2.SerDeException Error:
>> the HBase columns mapping contains a badly formed column family, column
>> qualifier specification.)
>
>
>
> May I know what could be the possible reason ?
>
>
> Thanks,
> Divya
>


Re: Cannot convert column 2 from string to map error

2016-03-03 Thread Swagatika Tripathy
Hi Buntu,
Since the attribute attrs is of type MAP, you need to do it something
like:
 insert into tmp_table values ('src1', 'uid1', map("NULL","NULL"));

Let me know if it works.
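
If the goal is a genuinely empty or NULL map value rather than the literal strings "NULL", one common workaround is to route the insert through a SELECT, since older Hive versions reject complex-type constructors inside a VALUES clause. This is a sketch only; the key name and the one-row source table are hypothetical:

INSERT INTO TABLE tmp_table
SELECT 'src1', 'uid1', map('some_key', CAST(NULL AS STRING))
FROM some_existing_table
LIMIT 1;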



On Tue, Mar 1, 2016 at 5:06 AM, Buntu Dev  wrote:

> When attempting to insert a null value into a map-type column,
> I run into this error:
>
> Cannot convert column 2 from string to map
>
>
> Here is my Avro schema and the table definition:
>
> 
> "fields": [
> {"name": "src", "type": ["null", "string"], "default": null},
> {"name": "uid", "type": ["null", "string"], "default": null},
> {"name": "attrs", "type": {"type": "map", "values": ["null",
> "string"]}, "default": null},
> ...
> ]
>
>
> > desc tmp_table;
>
> +-----------+--------------------+--------------------+
> | col_name  | data_type          | comment            |
> +-----------+--------------------+--------------------+
> | src       | bigint             | from deserializer  |
> | uid       | int                | from deserializer  |
> | attrs     | map<string,string> | from deserializer  |
> | ...       |                    |                    |
> +-----------+--------------------+--------------------+
>
> If I run this INSERT INTO, I get the error message mentioned:
>
>  insert into tmp_table values ( 'src1', 'uid1', null);
>
> Is there some way I can fix this issue?
>
>
>
> Thanks!
>
>
>
>
>


Re: Fwd: Row exception in Hive while using join

2015-03-09 Thread Swagatika Tripathy
Hi Krish,
It seems the data in the row pertaining to key 12 is corrupt. Can you try
reloading the data and then selecting again?

Let me know if it works.

Regards
Swagatika
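
A quick way to sanity-check that theory before reloading is to look at the rows behind the failing join key (table and column names are taken from the query quoted below):

SELECT * FROM table_line_n_passed WHERE chromosome_number = '12';
SELECT * FROM table_line_c_passed WHERE chromosome_number = '12';

If the key turns out to be heavily repeated on both sides, the reducer hanging at ~68% may simply be join skew rather than corruption.
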
On Mar 5, 2015 4:45 PM, "krish"  wrote:

>
> I got the following exception while executing join on Hive Query and
> reducer hang after 68% completion.
>
>
> java.lang.RuntimeException:
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while
> processing row (tag=1)
> {"key":{"joinkey0":"12"},"value":{"_col2":"rs317647905"},"alias":1}
> at
> org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:270)
> at
> org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
> at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
> at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
> at org.apache.hadoop.mapred.Child.main(Child.java:262)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime
> Error while processing row (tag=1)
> {"key":{"joinkey0":"12"},"value":{"_col2":"rs317647905"},"alias":1}
> at
> org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:258)
> ... 7 more
> Caused by: org.apache.hadoop.
>
> ---
>
> my query and table structure:
>
> create table table_llv_N_C as select
> table_line_n_passed.chromosome_number,table_line_n_passed.position,
> table_line_c_passed.id from table_line_n_passed join table_line_c_passed
> on
> (table_line_n_passed.chromosome_number=table_line_c_passed.chromosome_number)
>
> hive> desc table_line_n_passed;
> OK
> chromosome_number   string
> position            int
> id                  string
> ref                 string
> alt                 string
> quality             double
> filter              string
> info                string
> format              string
> line6               string
> Time taken: 0.854 seconds
> Why am I getting this error, and how can I solve it?
>
>
>
>
>
> --
> with regards
> krish!!
>


Re: Re: How to query data by page in Hive?

2015-02-17 Thread Swagatika Tripathy
Hello Devopam,
Please mail me the queries as well for rank/dense rank.
TIA.

Regards
Swagatika
On Feb 5, 2015 3:51 PM, "Devopam Mittra"  wrote:

> Please provide a valid table structure and the columns you wish to pick
> and I shall email you the query directly
>
>
> regards
> Devopam
>
> On Thu, Feb 5, 2015 at 3:20 PM, r7raul1...@163.com 
> wrote:
>
>> Thank you Devopam! Could you show me an example?
>>
>> --
>> r7raul1...@163.com
>>
>>
>> *From:* Devopam Mittra 
>> *Date:* 2015-02-05 18:05
>> *To:* user@hive.apache.org
>> *Subject:* Re: How to query data by page in Hive?
>> You may want to use ROW_NUMBER or RANK/DENSE_RANK in the inner query
>> and then select only a subset of it in the outer query to control
>> pagination. Based on your need, you may want to order the records as well.
>>
>> Alternatively you may want to use CTE(
>> https://cwiki.apache.org/confluence/display/Hive/Common+Table+Expression)
>> for selecting the data in one go and then use row number to select as in
>> previous case.
>>
>> regards
>> Devopam
>>
>> On Thu, Feb 5, 2015 at 1:31 PM, r7raul1...@163.com 
>> wrote:
>>
>>> Hello,
>>>  How to query data by page in Hive?
>>>
>>> hive> select * from u_data a limit 1,2;
>>> FAILED: ParseException line 1:31 missing EOF at ',' near '1'
>>>
>>> --
>>> r7raul1...@163.com
>>>
>>
>>
>>
>> --
>> Devopam Mittra
>> Life and Relations are not binary
>>
>>
>
>
> --
> Devopam Mittra
> Life and Relations are not binary
>
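
A minimal pagination sketch along the lines described above, using the u_data table from the original question (the column names and ordering key are hypothetical):

SELECT userid, rating
FROM (
  SELECT userid, rating,
         ROW_NUMBER() OVER (ORDER BY userid) AS rn
  FROM u_data
) t
WHERE rn BETWEEN 11 AND 20;   -- page 2 with a page size of 10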


Re: how to load json with nested array into hive?

2014-06-23 Thread Swagatika Tripathy
Hi,
Use the 1.9.3 JSON SerDe with-dependencies jar. It's the latest one, I suppose.

Thanks
Swagatika
On Jun 23, 2014 11:57 PM, "Roberto Congiu"  wrote:

> Hi,
> 1.1.4 is an oldish version of the JSON SerDe; have you tried the most
> recent from the master branch?
>
>
> On Mon, Jun 23, 2014 at 10:23 AM, Christian Link 
> wrote:
>
>> Hi,
>>
>> thanks...but I need to sort things out with ONE SerDe/strategy...
>> I've started with André's idea by using Roberto Congiu's SerDe and
>> André's template to create a table with the right schema and loading the
>> data afterwards.
>>
>> But it's not completely working...
>>
>> I did the following (sorry for spamming...):
>>
>> 1. create table and load data
>>
>> -- create database (if not exists)
>> CREATE DATABASE IF NOT EXISTS mdmp_api_dump;
>>
>> -- connect to database;
>> USE mdmp_api_dump;
>>
>> -- add SerDE for json processing
>> ADD JAR /home/hadoop/lib/hive/json-serde-1.1.4-jar-with-dependencies.jar;
>>
>> -- drop old raw data
>> DROP TABLE IF EXISTS mdmp_raw_data;
>>
>> -- create raw data table
>> CREATE TABLE mdmp_raw_data (
>>   action string,
>>   batch array<
>>   struct<
>> timestamp:string,
>> traits:map,
>> requestId:string,
>> sessionId:string,
>> event:string,
>> userId:string,
>> action:string,
>> context:map,
>> properties:map
>>
>>   >
>> >,
>>   context struct<
>> build:map,
>> device:struct<
>>  brand:string,
>>  manufacturer:string,
>>  model:string,
>>  release:string,
>>  sdk:int
>>>,
>> display:struct<
>>   density:double,
>>   height:int,
>>   width:int
>> >,
>> integrations:map,
>> library:string,
>> libraryVersion:string,
>> locale:map,
>> location:map,
>> telephony:map,
>> wifi:map
>>   >,
>>   received_at string,
>>   requestTimestamp string,
>>   writeKey string
>> )
>> ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
>> STORED AS TEXTFILE;
>>
>> -- load data
>> LOAD DATA INPATH 'hdfs:///input-api/1403181319.json' OVERWRITE INTO TABLE
>> `mdmp_raw_data`;
>>
>> 2. run query against the "raw data" and create "formatted table":
>>
>> ADD JAR /home/hadoop/lib/hive/json-serde-1.1.4-jar-with-dependencies.jar;
>>
>> USE mdmp_api_dump;
>>
>> DROP TABLE IF EXISTS mdmp_api_data;
>>
>> CREATE TABLE mdmp_api_data AS
>> SELECT DISTINCT
>>   a.action,
>>   a.received_at,
>>   a.requestTimestamp,
>>   a.writeKey,
>>   a.context.device.brand as brand,
>>   a.context.device.manufacturer as manufacturer,
>>   a.context.device.model as model,
>>   a.context.device.release as release,
>>   a.context.device.sdk as sdk,
>> --  a.context.display.density as density,
>>   a.context.display.height as height,
>>   a.context.display.width as width,
>>   a.context.telephony['radio'] as tel_radio,
>>   a.context.telephony['carrier'] as tel_carrier,
>>   a.context.wifi['connected'] as wifi_connected,
>>   a.context.wifi['available'] as wifi_available,
>>a.context.locale['carrier'] as loce_carrier,
>>   a.context.locale['language'] as loce_language,
>>   a.context.locale['country'] as loce_country,
>>   a.context.integrations['Tapstream'] as int_tapstream,
>>   a.context.integrations['Amplitude'] as int_amplitude,
>>   a.context.integrations['Localytics'] as int_localytics,
>>   a.context.integrations['Flurry'] as int_flurry,
>>   a.context.integrations['Countly'] as int_countly,
>>   a.context.integrations['Quantcast'] as int_quantcast,
>>   a.context.integrations['Crittercism'] as int_crittercism,
>>   a.context.integrations['Google Analytics'] as int_googleanalytics,
>>   a.context.integrations['Mixpanel'] as int_mixpanel,
>>   b.batch.action AS b_action,
>>   b.batch.context,
>>   b.batch.event,
>>   b.batch.properties,
>>   b.batch.requestId,
>>   b.batch.sessionId,
>>   b.batch.timestamp,
>>   b.batch.traits,
>>   b.batch.userId
>> FROM mdmp_raw_data a
>> LATERAL VIEW explode(a.batch) b AS batch;
>>
>> So far so good... (besides a silly double/int bug in the outdated SerDe)
>> I thought.
>>
>> But it turned out, that some fields are NULL - within all records.
>>
>> Affected fields are:
>>   b.batch.event,
>>   b.batch.requestId,
>>   b.batch.sessionId,
>>   b.batch.userId
>>
>> I can see values in the json file, but neither  in the "raw table" nor in
>> the final table...that's really strange.
>>
>> An example record:
>> {"requestTimestamp":"2014-06-19T14:25:26+02:00","context":{"libraryVersion":"0.6.13","telephony":{"radio":"gsm","carrier":"o2
>> -
>> de"},"wifi":{"connected":true,"available":true},"location":{},"locale":{"carrier":"o2
>> -
>> de","language":"Deutsch","country":"Deutschland"},"libra

Re: Executing Hive Queries in Parallel

2014-04-27 Thread Swagatika Tripathy
Hi,
You can also use Oozie's fork feature, which acts as a workflow scheduler
to run jobs in parallel. You just need to define all your HQLs inside the
workflow.xml to make them run in parallel.
On Apr 22, 2014 3:14 AM, "Subramanian, Sanjay (HQP)" <
sanjay.subraman...@roberthalf.com> wrote:

>   Hey
>
>  Instead of going into HIVE CLI
>  I would propose 2 ways
>
>  *NOHUP *
>  nohup hive -f path/to/query/file/*hive1.hql* >> ./hive1.hql_`date
> +%Y-%m-%d-%H-%M-%S`.log 2>&1 &
>  nohup hive -f path/to/query/file/*hive2.hql* >> ./hive2.hql_`date
> +%Y-%m-%d-%H-%M-%S`.log 2>&1 &
>  nohup hive -f path/to/query/file/*hive3.hql* >> ./hive3.hql_`date
> +%Y-%m-%d-%H-%M-%S`.log 2>&1 &
>  nohup hive -f path/to/query/file/*hive4.hql* >> ./hive4.hql_`date
> +%Y-%m-%d-%H-%M-%S`.log 2>&1 &
>  nohup hive -f path/to/query/file/*hive5.hql* >> ./hive5.hql_`date
> +%Y-%m-%d-%H-%M-%S`.log 2>&1 &
>
>  Each statement above will launch MR jobs on your cluster and, depending
> on the cluster configs, the jobs will run in parallel.
>  Scheduling jobs on the MR cluster is independent of Hive.
>
>  *SCREEN sessions*
>
>- Create a Screen session
>   - screen -S hive_query1
>   - You are inside the screen session hive_query1
>  - hive -f path/to/query/file/*hive1.hql*
>   - Ctrl A D
>  - You detach from the screen session
>- Repeat for each hive query you want to run
>   - I.e. say 5 screen sessions, each running a hive query
>- To display active screen sessions
>   - screen -x
>- To attach to a screen session
>   - screen  -x hive_query1
>
>
>  Thanks
>
> Warm Regards
>
>
>  Sanjay
>
>
>From: saurabh 
> Reply-To: "user@hive.apache.org" 
> Date: Monday, April 21, 2014 at 1:53 PM
> To: "user@hive.apache.org" 
> Subject: Executing Hive Queries in Parallel
>
>
>  Hi,
>  I need some inputs on executing Hive queries in parallel. I tried doing
> this using the CLI (by opening multiple ssh connections) and executed 4
> HQLs; it was observed that the queries were getting executed sequentially.
> All four queries got submitted; however, while the first one was executing,
> the others were in a pending state. I was performing this activity on EMR
> running in batch mode, hence I wasn't able to dig into the logs.
>
>  The Hive CLI uses a native Hive connection which by default uses the FIFO
> scheduler. This might be one of the reasons for the queries getting
> executed in sequence.
>
>  I also observed that when multiple queries are executed using multiple
> HUE sessions, they do run in parallel. Can you please suggest how this HUE
> functionality can be replicated using the CLI?
>
>  I am aware of the Beeswax client; however, I am not sure how it can be used
> during EMR batch-mode processing.
>
>  Thanks in advance for going through this. Kindly let me know your
> thoughts on the same.
>
>


Re: READING FILE FROM MONGO DB

2014-04-01 Thread Swagatika Tripathy
Do we have a for-loop concept in Hive to iterate through the array elements and
display them? We need an alternative to the explode method.
Well, you can use the JSON SerDe for this.

Sent from my iPhone

On Mar 26, 2014, at 8:40 PM, "Swagatika Tripathy" 
wrote:

Hi ,
The use case is that we have some unstructured data fetched from MongoDB and
stored in a particular location. Our task is to load that data into our
staging and core Hive tables in the form of rows and columns, e.g. if the data
is in key-value pairs like:
{
Id: bigint(12346),
Name:string(ABC),
Subjects:
{Subject enrolled:
Subjects:
[eng ,math]
}
{Game enrolled:
[Football,cricket]
}
This is just a very simple example for reference, but we have a complex JSON
format with a huge amount of data.

So, in this case how can we load it into hive tables and hdfs?
 On Mar 26, 2014 10:59 PM,  wrote:

> Are you swagatika mohanty?
>
>
>
>
>
>
> Thanks,
> Shouvanik
>
>
> -Original Message-
> From: Siddharth Tiwari [mailto:siddharth.tiw...@live.com]
> Sent: Wednesday, March 26, 2014 10:03 AM
> To: user@hive.apache.org
> Subject: Re: READING FILE FROM MONGO DB
>
> Hi Swagatika
> You can create external tables to Mongo and process them using Hive. New
> Mongo connectors have added support for Hive. Did you try that?
>
> Sent from my iPhone
>
> > On Mar 26, 2014, at 9:59 AM, "Swagatika Tripathy" <
> swagatikat...@gmail.com> wrote:
> >
> > Hi,
> > We have some files stored in MongoDB , mostly in key value format. We
> need to parse those files and store it into Hive tables.
> >
> > Any inputs on this will be appreciated.
> >
> > Thanks,
> > Swagatika
> >
>
>
> 
>
> This message is for the designated recipient only and may contain
> privileged, proprietary, or otherwise confidential information. If you have
> received it in error, please notify the sender immediately and delete the
> original. Any other use of the e-mail by you is prohibited. Where allowed
> by local law, electronic communications with Accenture and its affiliates,
> including e-mail and instant messaging (including content), may be scanned
> by our systems for the purposes of information security and assessment of
> internal compliance with Accenture policy.
>
> __
>
> www.accenture.com
>
>


RE: READING FILE FROM MONGO DB

2014-03-26 Thread Swagatika Tripathy
Hi ,
The use case is that we have some unstructured data fetched from MongoDB and
stored in a particular location. Our task is to load that data into our
staging and core Hive tables in the form of rows and columns, e.g. if the data
is in key-value pairs like:
{
Id: bigint(12346),
Name:string(ABC),
Subjects:
{Subject enrolled:
Subjects:
[eng ,math]
}
{Game enrolled:
[Football,cricket]
}
This is just a very simple example for reference, but we have a complex JSON
format with a huge amount of data.

So, in this case how can we load it into hive tables and hdfs?
 On Mar 26, 2014 10:59 PM,  wrote:

> Are you swagatika mohanty?
>
>
>
>
>
>
> Thanks,
> Shouvanik
>
>
> -Original Message-
> From: Siddharth Tiwari [mailto:siddharth.tiw...@live.com]
> Sent: Wednesday, March 26, 2014 10:03 AM
> To: user@hive.apache.org
> Subject: Re: READING FILE FROM MONGO DB
>
> Hi Swagatika
> You can create external tables to Mongo and process them using Hive. New
> Mongo connectors have added support for Hive. Did you try that?
>
> Sent from my iPhone
>
> > On Mar 26, 2014, at 9:59 AM, "Swagatika Tripathy" <
> swagatikat...@gmail.com> wrote:
> >
> > Hi,
> > We have some files stored in MongoDB , mostly in key value format. We
> need to parse those files and store it into Hive tables.
> >
> > Any inputs on this will be appreciated.
> >
> > Thanks,
> > Swagatika
> >
>
>
> 
>
> This message is for the designated recipient only and may contain
> privileged, proprietary, or otherwise confidential information. If you have
> received it in error, please notify the sender immediately and delete the
> original. Any other use of the e-mail by you is prohibited. Where allowed
> by local law, electronic communications with Accenture and its affiliates,
> including e-mail and instant messaging (including content), may be scanned
> by our systems for the purposes of information security and assessment of
> internal compliance with Accenture policy.
>
> __
>
> www.accenture.com
>
>


Re: READING FILE FROM MONGO DB

2014-03-26 Thread Swagatika Tripathy
Hi Siddharth,
we need to store the unstructured data in internal Hive tables. Have you
tried something similar?




On Wed, Mar 26, 2014 at 10:33 PM, Shrikanth Shankar wrote:

> https://github.com/mongodb/mongo-hadoop is from the mongo folks themselves
>
> Shrikanth
>
>
> On Wed, Mar 26, 2014 at 10:01 AM, Nitin Pawar wrote:
>
>> take a look at https://github.com/yc-huang/Hive-mongo
>>
>>
>> On Wed, Mar 26, 2014 at 10:29 PM, Swagatika Tripathy <
>> swagatikat...@gmail.com> wrote:
>>
>>> Hi,
>>> We have some files stored in MongoDB , mostly in key value format. We
>>> need to parse those files and store it into Hive tables.
>>>
>>> Any inputs on this will be appreciated.
>>>
>>> Thanks,
>>> Swagatika
>>>
>>>
>>
>>
>> --
>> Nitin Pawar
>>
>
>
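
A minimal sketch of the external-table route suggested above, using the storage handler shipped with the mongo-hadoop project linked earlier (table, column, and connection details are hypothetical):

CREATE EXTERNAL TABLE mongo_users (
  id   BIGINT,
  name STRING
)
STORED BY 'com.mongodb.hadoop.hive.MongoStorageHandler'
TBLPROPERTIES ('mongo.uri' = 'mongodb://localhost:27017/mydb.users');

From there, if the data must end up in an internal (managed) table, a plain INSERT INTO ... SELECT from the external table will copy it across.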


READING FILE FROM MONGO DB

2014-03-26 Thread Swagatika Tripathy
Hi,
We have some files stored in MongoDB, mostly in key-value format. We need
to parse those files and store them into Hive tables.

Any inputs on this will be appreciated.

Thanks,
Swagatika


Deleting a column from internally managed table

2014-01-12 Thread Swagatika Tripathy
Yes, you can do so; five individual ALTER TABLE ... CHANGE COLUMN commands (one per column) will do.
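
A minimal sketch of doing that one column at a time (table and column names are hypothetical); unlike REPLACE COLUMNS, CHANGE touches only the named column, and it rewrites table metadata only, not the underlying data files:

ALTER TABLE my_table CHANGE COLUMN old_name new_name STRING;
-- repeat once per column whose name or type you want to change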

On 1/9/14, Kishore kumar  wrote:
> No, I want to change only 4 to 5 columns out of 40 columns.
>
>
> On Wed, Jan 8, 2014 at 2:05 PM, Edward Capriolo
> wrote:
>
>> Alter table replace columns changes all columns and types.
>>
>>
>> On Wed, Jan 8, 2014 at 5:35 AM, Kishore kumar
>> wrote:
>>
>>> Hi Experts,
>>>
>>> Is there  a way to change multiple column names and types ?
>>>
>>> --
>>>
>>> *Kishore Kumar*
>>> ITIM
>>>
>>>
>>
>
>
> --
>
> *Kishore Kumar*
> ITIM
>
> Bidstalk - Ingenius Programmatic Platform
>
> Email: kish...@techdigita.in| Tel: +1 415 423 8230  | Cell: +91 741 135
> 8658 | skype: kishore.alajangi | YM: kk_asn2004 | Twitter:
> __kishorealajangi
>


SET PROPERTY FOR UDF

2013-11-23 Thread Swagatika Tripathy
Hi,

I want to customize the hive-site.xml to add some extra properties to be
used for a UDF.

The UDF is a part of the view to be created.

So, I tried setting the path through the Hive CLI as below:

set hive.property.path="/Users/path of hive-testing.xml";

but it does not seem to pick up the path and gives me a NullPointerException.

Can you please suggest an alternative? This particular property was read
inside the UDF as below:
System.getProperty("hive.site.xml", "hive-site.xml");

Regards,
Swagatika.


Re: load data stored as sequencefiles

2013-09-24 Thread Swagatika Tripathy
Make sure you have set the table properties while creating the table
structure.
Also, it should not be a problem unless it is a FixedLength format. Try
altering the table to set it to the desired file format, or else it will be
SequenceFile by default.
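
A sketch of the alter mentioned above, against the tblname table from the quoted message (this changes table metadata only; existing files are not rewritten):

ALTER TABLE tblname SET FILEFORMAT SEQUENCEFILE;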


On Tue, Sep 24, 2013 at 7:51 PM, Artem Ervits  wrote:

>  Anyone?
>
>
> *From:* Artem Ervits [mailto:are9...@nyp.org]
> *Sent:* Friday, September 20, 2013 11:18 AM
> *To:* user@hive.apache.org
> *Subject:* load data stored as sequencefiles
>
>
> Hello all,
>
>  
>
> I’m a bit lost using Hive and SequenceFiles. I loaded data using Sqoop
> from an RDBMS and stored it as a sequencefile. I jarred the class generated
> by Sqoop and added it to my create table script. Now I create a table in
> Hive and specify “STORED AS SEQUENCEFILE”, and I also “ADD JAR
> SQOOP_GENERATED.JAR”. Then I try to insert data with the same generated jar
> added. I also specify
>
>  
>
> SET hive.exec.compress.output=true;
>
> SET io.seqfile.compression.type=BLOCK;
>
>  
>
> LOAD DATA INPATH '/TEST/SeqFiles/201308300700/part-m-1' INTO TABLE
> tblname;
>
>  
>
> When the query executes, I see this “[num_partitions: 0, num_files: 2,
> num_rows: 0, total_size: 478662618, raw_data_size: 0]”
>
>  
>
> When I select on the table,  I get 
> org.apache.hadoop.hive.serde2.SerDeException:
> class org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe: expects either
> BytesWritable or Text object! 
>
>  
>
> So my question is, how do I specify my generated class along with
> SequenceFileInputFormat in my create statement? How do I specify the
> inputformats?
>
>
> This electronic message is intended to be for the use only of the named
> recipient, and may contain information that is confidential or privileged.
> If you are not the intended recipient, you are hereby notified that any
> disclosure, copying, distribution or use of the contents of this message is
> strictly prohibited. If you have received this message in error or are not
> the named recipient, please notify us immediately by contacting the sender
> at the electronic mail address noted above, and delete and destroy all
> copies of this message. Thank you.
>  --
>
>
> Confidential Information subject to NYP's (and its affiliates')
> information management and security policies (
> http://infonet.nyp.org/QA/HospitalManual).
>
>


Re: UDAF HELP

2013-08-18 Thread Swagatika Tripathy
Hi,
I have a requirement to compare two different columns in two adjacent rows;
if the values are equal, it should return a non-zero value, else it should
give 0 as the result.

Please let me know how we can implement this using a UDAF (User Defined
Aggregate Function) in Hive.

TIA.

Regards,
Swagtk
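
If a custom UDAF turns out to be more than is needed, Hive 0.11+ can express an adjacent-row comparison with a windowing function instead; a minimal sketch with hypothetical table, column, and ordering names:

SELECT col_a,
       col_b,
       CASE WHEN col_a = LAG(col_b) OVER (ORDER BY event_ts)
            THEN 1 ELSE 0 END AS match_flag
FROM my_table;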


On Fri, Aug 16, 2013 at 11:43 PM, Swagatika Tripathy <
swagatikat...@gmail.com> wrote:

> Hi,
> i have a requirement to compare two different columns in 2 adjacent rows
> and if values are equal, it should return a non zero value else should give
> 0 as result.
>
> Please let me know how can we implement using UDAF(User Defined Aggregate
> Function) in hive.
>
> TIA.
>
> Regards,
> Swagatika
>