USE CASE:: Hierarchical Structure in Hive and Java

2014-10-21 Thread yogesh dhari
Hello All,

We have a use case where we need to create a hierarchical structure
using Hive and Java.

For example, let's say that in an organisation we need to create an org chart,
i.e. senior director - director - associate director - senior manager -
manager - senior associate - associate - developer,
meaning parent, then child, then sub-child, and so on.

Input source: a summarized table that is populated by join queries which apply the
business logic and fetch the data from the base tables.
Output: a table that stores the parent-child relationships in a hierarchical
manner.
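
For illustration only, with all names assumed, the output might be modelled as an
adjacency list (each row points at its parent) plus a level and path column:

create table org_hierarchy (
  emp_id    bigint,
  emp_title string,     -- e.g. 'senior manager'
  parent_id bigint,     -- NULL for the senior director at the top
  lvl       int,        -- depth in the org chart
  path      string      -- e.g. 'senior director/director/associate director'
)
stored as textfile;

The Java side could then walk or rebuild the tree from this flat parent-child table.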


If anyone has come across this kind of requirement/scenario, kindly suggest an
approach to proceed with.

Thanks in Advance


Thanks & Regards
Yogesh


HIVE::START WITH and CONNECT BY implementation in Hive

2014-10-20 Thread yogesh dhari
Hello All


How can we achieve the equivalent of the *start with .. connect by* clause in Hive?
That clause is used to select data that has a hierarchical relationship (usually some
sort of parent-child relation, such as boss-employee or thing-parts).

If there is no direct equivalent, what would be a workaround to implement this?
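
For context, Hive has no built-in recursive query support, so a common workaround is to
expand the hierarchy one level at a time with self-joins. A minimal, untested two-level
sketch on an assumed emp(emp_id, mgr_id) table:

select * from (
  select e.emp_id, e.mgr_id, 1 as lvl
  from emp e
  where e.mgr_id is null
  union all
  select e2.emp_id, e2.mgr_id, 2 as lvl
  from emp e1
  join emp e2 on (e2.mgr_id = e1.emp_id)
  where e1.mgr_id is null
) t;

Deeper levels would need either more unions or a driver script that loops until no new
rows are produced.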



Please suggest

Thanks in advance
Yogesh


PLEASE HELP :: HOW TO DO INNER JOIN IN HIVE

2014-10-15 Thread yogesh dhari
Hello all,

I have a use case where I need to do an inner join,

like:

select A.iteam, B.Decsription
from iteam_table A INNER JOIN iteam_desc B
on A.id = B.id;


As Hive does not seem to accept this INNER JOIN syntax,

Please suggest how to do it


Thanks in Advance


Re: PLEASE HELP :: HOW TO DO INNER JOIN IN HIVE

2014-10-15 Thread yogesh dhari
Thanks Devopam ,

I did not get your point, could you pls show me a dummy query in my case


Regards

On Wed, Oct 15, 2014 at 1:42 PM, Devopam Mittra devo...@gmail.com wrote:

 hi Yogesh,
 Please try to leverage common table expression (WITH ...) to achieve your
 desired outcome
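
 A sketch of that suggestion, reusing the table and column names from the question
 (untested, and assuming a Hive version where WITH is available):

 WITH items AS (SELECT id, iteam FROM iteam_table),
      descs AS (SELECT id, Decsription FROM iteam_desc)
 SELECT a.iteam, b.Decsription
 FROM items a JOIN descs b ON (a.id = b.id);

 Note that a plain JOIN in Hive is an inner join.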

 regards
 Dev

  On Oct 15, 2014, at 1:18 PM, yogesh dhari yogeshh...@gmail.com wrote:
 
  Hello all,
 
  I have a use case where I need to do inner join..
 
  Like
 
  select A.iteam , B.Decsription,
  from iteam_table A INNER JOIN iteam_desc B
  on A.id = B.id
 
 
  As hive does not support Inner Join,
 
  Please suggest how to do it
 
 
  Thanks in Advance
 



HIVE QUERY HELP:: HOW TO IMPLEMENT THIS CASE

2014-03-03 Thread yogesh dhari
Hello All,

I have an RDBMS query for a use case which I have implemented in
Hive as follows.



*1.1) Update statement in RDBMS*

update TABLE1
set
Age = case when isnull(age,'') = '' then 'A= Current' else '240+ Days' end,
Prev_Age = case when isnull(prev_age,'') = '' then 'A= Current' else '240+ Days' end;

*1.2) Update statement in HIVE*

create table TABLE2 as select
a.* ,
case when coalesce(a.age,'') = '' then 'A=Current' else '240+ Days' end as Age,
case when coalesce(a.prev_age,'') = '' then 'A=Current' else '240+ Days' end as Prev_age
from TABLE1 a ;





*Now I have a case statement that also involves a join condition.*



*2) Join in RDBMS*
update  TABLE1
set a.Age = c.sys_buscd_item_desc1
from  TABLE1 a
join  TABLE2 c
on c.sys_type_cd='AGE'
where isnull(a.age,'00')=c.sys_item;
commit;





How can I implement this query in Hive? Please help and suggest.
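
One possible shape for this in Hive, following the same CTAS pattern as 1.2, is the
untested sketch below: the sys_type_cd filter is pushed into a subquery, rows with no
match keep their old Age via a LEFT OUTER JOIN, and in practice the columns of TABLE1
would be listed explicitly so the new value replaces Age:

create table TABLE1_NEW as
select
  a.* ,
  case when c.sys_item is not null then c.sys_buscd_item_desc1 else a.Age end as new_age
from TABLE1 a
left outer join
  (select sys_item, sys_buscd_item_desc1 from TABLE2 where sys_type_cd = 'AGE') c
on (coalesce(a.age, '00') = c.sys_item);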



Thanks In Advance

Yogesh Kumar


Re: Hive Query :: Implementing case statement

2014-02-19 Thread yogesh dhari
Thanks a lot Stephen Sprague :) :)

It worked. I just had to remove the semicolons inside the subqueries, because they
were throwing a subquery syntax error. I changed:


create table NEW_BALS as
select * from (
select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join TABLE_SQL_2 b on
(a.key=b.key) where a.code='1';
UNION ALL
select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join TABLE_SQL_3 b on
(a.key=b.key) where a.code='2';
) z
;



to

create table NEW_BALS as
select * from (
select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join TABLE_SQL_2 b on
(a.key=b.key) where a.code='1'
  UNION ALL
select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join TABLE_SQL_3 b on
(a.key=b.key) where a.code='2'
  ) z
;

Thanks  Regards
Yogesh Kumar


On Tue, Feb 18, 2014 at 9:18 PM, Stephen Sprague sprag...@gmail.com wrote:

 maybe consider something along these lines. nb. not tested.

 -- temp table holding new balances + key

 create table NEW_BALS as
   select * from (
   select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join
 TABLE_SQL_2 b on (a.key=b.key) where a.code='1';
   UNION ALL
   select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join
 TABLE_SQL_3 b on (a.key=b.key) where a.code='2';
   ) z
   ;

 -- i gave the table a new name but you could use TABLE_SQL instead of
 NEW_TABLE_SQL for the true update.

 insert overwrite table NEW_TABLE_SQL

select * from (

 -- intersected rows - you'll have to line-up the columns correctly
 substituting b.NEW_BALANCE into the right position
select  a.col1, b.NEW_BALANCE, a.col2, ... from TABLE_SQL a join
 NEW_BALS b on (a.key=b.key)

UNION ALL

 -- non-intersected rows
select a.* from TABLE_SQL a join NEW_BALS b on (a.key=b.key) where
 b.NEW_BALANCE is null

) z
 ;

 there's probably some typos in there but hopefully you get the idea and
 can take it from here.





 On Tue, Feb 18, 2014 at 4:39 AM, yogesh dhari yogeshh...@gmail.comwrote:

 Hello All,

 I have a use case where a table say TABLE_SQL is geting updated like.


 1st Update Command

 update TABLE_SQL a
 set BALANCE = b.prev
 from TABLE_SQL_2 b
 where a.key = b.key and a.code = 1


 2nd Update Command

 update TABLE_SQL a
 set BALANCE = b.prev
 from TABLE_SQL_3 b
 where a.key = b.key and a.code = 2


 same column is getting update twice in sql table,

 I have a Table in Hive say TABLE_HIVE.

 How to perform this kind operatation in HIVE,

 Pls Help,

 Thanks in Advance
 Yogesh Kumar















Re: Hive Query :: Implementing case statement

2014-02-19 Thread yogesh dhari
Hello Stephen,

Yes, actually I had used a LEFT OUTER JOIN instead of a plain JOIN, since there were
left outer joins in the RDBMS query instead of joins.

Thanks again :)



On Thu, Feb 20, 2014 at 10:45 AM, Stephen Sprague sprag...@gmail.comwrote:

 Hi Yogesh,

 i overlooked one thing and for completeness we should make note of it here.

 change:


 -- non-intersected rows
select a.* from TABLE_SQL a join NEW_BALS b on (a.key=b.key) where
 b.NEW_BALANCE is null

  to

 -- non-intersected rows
select a.* from TABLE_SQL a *LEFT OUTER* join NEW_BALS b on
 (a.key=b.key) where b.NEW_BALANCE is null


 This takes care of the rows where code != 1 and code != 2.


 But you knew that!




 On Wed, Feb 19, 2014 at 8:33 PM, yogesh dhari yogeshh...@gmail.comwrote:

 Thanks a lot Stephen Sprague  :) :)

 It worked..  , just to remove the  ;from here, bcoz it was throuig
 sub query systax error...



 create table NEW_BALS as
 select * from (
 select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join TABLE_SQL_2 b
 on (a.key=b.key) where a.code='1';
 UNION ALL
 select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join TABLE_SQL_3 b
 on (a.key=b.key) where a.code='2';
 ) z
 ;



 to

 create table NEW_BALS as
 select * from (
 select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join TABLE_SQL_2 b
 on (a.key=b.key) where a.code='1'
   UNION ALL
 select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join TABLE_SQL_3 b
 on (a.key=b.key) where a.code='2'
   ) z
 ;

 Thanks  Regards
 Yogesh Kumar


 On Tue, Feb 18, 2014 at 9:18 PM, Stephen Sprague sprag...@gmail.comwrote:

 maybe consider something along these lines. nb. not tested.

 -- temp table holding new balances + key

 create table NEW_BALS as
   select * from (
   select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join
 TABLE_SQL_2 b on (a.key=b.key) where a.code='1';
   UNION ALL
   select b.prev as NEW_BALANCE, a.key from TABLE_SQL a join
 TABLE_SQL_3 b on (a.key=b.key) where a.code='2';
   ) z
   ;

 -- i gave the table a new name but you could use TABLE_SQL instead of
 NEW_TABLE_SQL for the true update.

 insert overwrite table NEW_TABLE_SQL

select * from (

 -- intersected rows - you'll have to line-up the columns correctly
 substituting b.NEW_BALANCE into the right position
select  a.col1, b.NEW_BALANCE, a.col2, ... from TABLE_SQL a join
 NEW_BALS b on (a.key=b.key)

UNION ALL

 -- non-intersected rows
select a.* from TABLE_SQL a join NEW_BALS b on (a.key=b.key) where
 b.NEW_BALANCE is null

) z
 ;

 there's probably some typos in there but hopefully you get the idea and
 can take it from here.





 On Tue, Feb 18, 2014 at 4:39 AM, yogesh dhari yogeshh...@gmail.comwrote:

 Hello All,

 I have a use case where a table say TABLE_SQL is geting updated like.


 1st Update Command

 update TABLE_SQL a
 set BALANCE = b.prev
 from TABLE_SQL_2 b
 where a.key = b.key and a.code = 1


 2nd Update Command

 update TABLE_SQL a
 set BALANCE = b.prev
 from TABLE_SQL_3 b
 where a.key = b.key and a.code = 2


 same column is getting update twice in sql table,

 I have a Table in Hive say TABLE_HIVE.

 How to perform this kind operatation in HIVE,

 Pls Help,

 Thanks in Advance
 Yogesh Kumar

















Hive Query :: Implementing case statement

2014-02-18 Thread yogesh dhari
Hello All,

I have a use case where a table, say TABLE_SQL, is getting updated as follows.


1st Update Command

update TABLE_SQL a
set BALANCE = b.prev
from TABLE_SQL_2 b
where a.key = b.key and a.code = 1


2nd Update Command

update TABLE_SQL a
set BALANCE = b.prev
from TABLE_SQL_3 b
where a.key = b.key and a.code = 2


The same column is getting updated twice in the SQL table.

I have a table in Hive, say TABLE_HIVE.

How can I perform this kind of operation in Hive?

Please help.

Thanks in Advance
Yogesh Kumar


HIVE SUB QUERY:: How to implement this case

2014-01-22 Thread yogesh dhari
Hello all,

I have a case where I need logic like this:

select as_of_dt as as_of_dt, max_feed_key as max_feed_key, min_feed_key as min_feed_key
from feed_key_temp
where max_feed_key > ( select max(feed_key) from summ_table )
group by as_of_dt ;


Here, max_feed_key and min_feed_key are fields in the table feed_key_temp.


As Hive (version 0.9) does not support a subquery in the WHERE clause, please
suggest a workaround and how to implement it.
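
A common workaround on such Hive versions (untested sketch) is to join against the
single-row aggregate and move the comparison into the WHERE clause; note the max/min
aggregates, since the query groups by as_of_dt:

select t.as_of_dt,
       max(t.max_feed_key) as max_feed_key,
       min(t.min_feed_key) as min_feed_key
from feed_key_temp t
join (select max(feed_key) as summ_max from summ_table) s
where t.max_feed_key > s.summ_max
group by t.as_of_dt;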


Thanks & Regards

Yogesh Kumar


Re: working with HIVE VARIALBE: Pls suggest

2014-01-06 Thread yogesh dhari
Thanks all for your help.

I think I was not very clear about what I was trying to do.

I was just trying to create a variable in Hive, as in an RDBMS, and I want to
store the result of a query in that variable.

Let's say I have declared a variable MY_VAR.

I want to store the result of the Hive query
select max(salary) from Hive_employe_table ;
into MY_VAR.

Is it possible in Hive? If yes, how can it be achieved?

Thanks a lot in advance.
Please suggest how to do it, or a workaround.


Thanks & Regards
Yogesh




On Mon, Jan 6, 2014 at 12:36 PM, lxw lxw1...@qq.com wrote:


 maybe you can see:


 https://cwiki.apache.org/confluence/plugins/viewsource/viewpagesrc.action?pageId=30754722


 -- Original --
 *From: * yogesh dhari;yogeshh...@gmail.com;
 *Date: * Fri, Jan 3, 2014 01:13 AM
 *To: * useruser@hive.apache.org;
 *Subject: * working with HIVE VARIALBE: Pls suggest

 Hello Hive Champs,
  I have a case statement, where I need to check the date passed through
 parameter,
  If date is 1st date of the month then keep it as it as
 else
 set the parameter date to 1st date of the month.
  and then later opretation are being performed on that date into hive
 quries,
  I have wrote this Hive QL
  *select case when as_of_dt = ${hivevar:as_of_dt} then
 ${hivevar:as_of_dt} else date_sub(${hivevar:as_of_dt} ,
 (day(${hivevar:as_of_dt} )) -1 ) end as as_of_dt from TABLE group by
 as_of_dt ;*
  O/P of this query is, lets say = 2012-08-01

 I want to store the value of this Query into a variable.
  like

 MY_VARIABLE = (*select case when as_of_dt = ${hivevar:as_of_dt} then
 ${hivevar:as_of_dt} else date_sub(${hivevar:as_of_dt} ,
 (day(${hivevar:as_of_dt} )) -1 ) end as as_of_dt from TABLE group by
 as_of_dt; )*



  How to achieve that.
 Pls suggest,
 Thanks in advance



Setting a value into a Hive variable

2014-01-02 Thread yogesh dhari
Hello All,


I have a case statement where I need to check the date passed in as a
parameter:

if the date is the 1st of the month, then keep it as it is;
else
set the parameter date to the 1st of the month.

Later operations are then performed on that date in Hive queries.


I have written this HiveQL:

   *select case when as_of_dt = ${hivevar:as_of_dt} then
${hivevar:as_of_dt} else date_sub(${hivevar:as_of_dt} ,
(day(${hivevar:as_of_dt} )) -1 ) end as as_of_dt from TABLE group by
as_of_dt ;*

I want to store the value of this query in a variable,

like MY_VARIABLE = output of this query;

How can I achieve that?

Please suggest.
Thanks in advance




working with HIVE VARIALBE: Pls suggest

2014-01-02 Thread yogesh dhari
Hello Hive Champs,


I have a case statement where I need to check the date passed in as a
parameter:

if the date is the 1st of the month, then keep it as it is;
else
set the parameter date to the 1st of the month.

Later operations are then performed on that date in Hive queries.


I have written this HiveQL:

*select case when as_of_dt = ${hivevar:as_of_dt} then ${hivevar:as_of_dt}
else date_sub(${hivevar:as_of_dt} , (day(${hivevar:as_of_dt} )) -1 ) end as
as_of_dt from TABLE group by as_of_dt ;*

The output of this query is, let's say, 2012-08-01.

I want to store the value of this Query into a variable.

like

MY_VARIABLE = (*select case when as_of_dt = ${hivevar:as_of_dt} then
${hivevar:as_of_dt} else date_sub(${hivevar:as_of_dt} ,
(day(${hivevar:as_of_dt} )) -1 ) end as as_of_dt from TABLE group by
as_of_dt; )*




How can I achieve that?

Please suggest.
Thanks in advance


Re: Date format in Hive

2013-12-24 Thread yogesh dhari
I don't know Java, so how would I write a UDF?

Please suggest a built-in Hive function or a workaround instead,

or else tell me how to do it in a shell script.


On Tue, Dec 24, 2013 at 12:47 PM, ashok.sa...@wipro.com wrote:

 Hello Dhari,

 Write a Hive UDF that will:
 1. take the date string as an argument,
 2. extract the year, month, and day from the string and create a new string combining them,
 3. convert it to a numeric value and multiply it by 100,
 4. return this value from the UDF.
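
 A UDF-free sketch of the same steps, using only built-in functions (the column and
 table names are assumed for illustration):

 select cast(regexp_replace(dt, '-', '') as bigint) * 100 as dt_num
 from my_dates_table;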

 Thanks,
 Ashok
 
 From: yogesh dhari [yogeshh...@gmail.com]
 Sent: Tuesday, December 24, 2013 12:24 PM
 To: user@hive.apache.org
 Subject: Date format in Hive

 Hello All,

 I have a hive table in which dates are stored in string format.

 like

 2013-01-01
 2013-02-01
 2013-03-01


 I have a use case where I need this date like
 20130101
 20130201
 20130301

 and want to multiply each with 100
 like

 2013010100
 2013020100
 2013030100


 How can I do it into Hive

 pls help.

 Thanks  Regards
 Yogesh




to find the 1st day of the month in hive

2013-12-24 Thread yogesh dhari
Hello,

I have a use case where I need to find the 1st day of the month for an
entered date.



For example, if the date is 2013-12-05, I need to get 2013-12-01.

How can I do this in Hive? (I would prefer not to use a UDF; rather something
built in, like date_sub or another function.)



Pls suggest



Thanks
Yogesh


Re: to find the 1st day of the month in hive

2013-12-24 Thread yogesh dhari
Hi,
Can I use this?

select date_sub('2013-12-08', (day('2013-12-08')) - 1) from table;

I just want to cross-check.


On Tue, Dec 24, 2013 at 3:48 PM, yogesh dhari yogeshh...@gmail.com wrote:

 Hello,

  I have a use case where I need to find the 1st day of the month of
 entered date.



 like if the date is 2013-12-05 i need to get 2013-12-01.

 how to do it in Hive.  (wont preffer to use UDF, like by doing some
 date_sub kind of or other function)



 Pls suggest



 Thanks
 Yogesh



CASE Statement not working in hive

2013-12-24 Thread yogesh dhari
Hello all,



I have written this query:



*select
case when as_date rlike '2013-05-01' then as_date else '2013-07-04' end as
as_date from AA ;*


Since the value of as_date, i.e. 2013-05-01, exists in table AA, it *should
return the value 2013-05-01* (the true branch of the case statement), but it
always *returns the value of the else condition, i.e. '2013-07-04'*.



*select as_date from AA ;*

 2013-05-01
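
A small diagnostic sketch that may help narrow this down (untested): check the stored
value for stray characters and what rlike actually evaluates to:

select as_date, length(as_date), (as_date rlike '2013-05-01') as is_match
from AA;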


Why is it so? Please suggest and help.



Regards
Yogesh Kumar


Date format in Hive

2013-12-23 Thread yogesh dhari
Hello All,

I have a hive table in which dates are stored in string format.

like

2013-01-01
2013-02-01
2013-03-01


I have a use case where I need this date like
20130101
20130201
20130301

and want to multiply each with 100
like

2013010100
2013020100
2013030100


How can I do this in Hive?

Please help.

Thanks & Regards
Yogesh


JOIN comparasion PIG V/S HIVE

2012-10-22 Thread yogesh dhari

Hi All,

Is it true that Pig's JOIN operation is not as efficient as Hive's?

I have just tried it out and found differences in the JOIN query results.

Hive returned the same result as MySQL, but Pig returned lower counts than the Hive
join.

Please shed some light on JOINs in Pig and Hive.


Regards
Yogesh Kumar Dhari





  

PIG vs HIVE

2012-10-17 Thread yogesh dhari

Hi All,

I want to understand the cases where Hive wins over Pig and where
Pig wins over Hive,

leaving aside the fact that Pig is best as an ETL tool and Hive is best as a data
warehouse.

Please suggest real use cases for both.

Thanks & Regards
Yogesh Kumar



  

NEED HELP in Hive Query

2012-10-14 Thread yogesh dhari

Hi all,

I have this file, and I want to perform the operation below in Hive and Pig.

NAME              DATE          URL                                                                         HITCOUNT
timesascent.in    2008-08-27    http://timesascent.in/index.aspx?page=tparchives                            15
timesascent.in    2008-08-27    http://timesascent.in/index.aspx?page=articlesectid=1contentid=200812182008121814134447219270b26    20
timesascent.in    2008-08-27    http://timesascent.in/                                                      37
timesascent.in    2008-08-27    http://timesascent.in/section/39/Job%20Wise                                 14
timesascent.in    2008-08-27    http://timesascent.in/article/7/2011062120110621171709769aacc537/Work-environment--Employee-productivity.html    20
timesascent.in    2008-08-27    http://timesascent.in/                                                      17
timesascent.in    2008-08-27    http://timesascent.in/section/2/Interviews                                  15
timesascent.in    2008-08-27    http://timesascent.in/                                                      17
timesascent.in    2008-08-27    http://timesascent.in/                                                      27
timesascent.in    2008-08-27    http://timesascent.in/                                                      37
timesascent.in    2008-08-27    http://timesascent.in/                                                      27
timesascent.in    2008-08-27    http://www.timesascent.in/                                                  16
timesascent.in    2008-08-27    http://timesascent.in/section/2/Interviews                                  14
timesascent.in    2008-08-27    http://timesascent.in/                                                      14
timesascent.in    2008-08-27    http://timesascent.in/                                                      22


I want to add up all HITCOUNT values for the same NAME, DATE and URL,

like

timesascent.in    2008-08-27    http://timesascent.in/    (addition of all
hitcounts under the same name, date and url: 37+17+17+27+...)

Please suggest a method to perform this query.
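
In Hive, assuming the rows above are loaded into a table named weblog with those four
columns, a sketch would be:

select name, ddate, url, sum(hitcount) as total_hits
from weblog
group by name, ddate, url;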


Thanks & Regards
Yogesh Kumar



  

RE: Need Help in Hive storage format

2012-10-12 Thread yogesh dhari

Thanks Bejoy,

Now I want to load these rows in Pig,

like

A = load '/Pig/00_0' using PigStorage()
as
(id:int, name:chararray, ddate, prim, ignore, ignorecase, activat);

What should the delimiter in PigStorage( ) be?
I have tried PigStorage('/001') but it shows errors.
What delimiter should we use?

Please help and suggest.

Thanks & Regards
Yogesh Kumar





Subject: Re: Need Help in Hive storage format
To: user@hive.apache.org
From: bejoy...@yahoo.com
Date: Thu, 11 Oct 2012 17:53:12 +




Hi Yogesh. 

It should be a simple delimited file with ^A character as the field delimiter.
Regards
Bejoy KS

Sent from handheld, please excuse typos.

From: yogesh dhari yogeshdh...@live.com
Date: Thu, 11 Oct 2012 23:18:35 +0530
To: hive request user@hive.apache.org
Reply-To: user@hive.apache.org
Subject: Need Help in Hive storage format

Hi all,

If we run this query

insert overwrite local directory '/home/yogesh/Downloads/demoyy' select * from 
NYSE_LOCAL; 

{
( describe NYSE_LOCAL ;

exchange    string
symbol      string
ddate       string
open        float
high        float
low         float
) }

ls /home/yogesh/Downloads/demoyy/

it shows the file name 00_0

My questions are:

1) In what format is the file 00_0?
2) What is the delimiter between the columns?

Please help 

Thanks & Regards
Yogesh Kumar



  

RE: hive query fails

2012-10-01 Thread yogesh dhari

Hi Ajit,

A select * command doesn't invoke any reducers; it just dumps the data.

Check your network proxy settings.

Regards
Yogesh Kumar Dhari

From: ajit.shreevast...@hcl.com
To: user@hive.apache.org
Date: Mon, 1 Oct 2012 16:29:27 +0530
Subject: hive query fails

Dear all,

I am running the following query and I am not getting any output, but
select * from pokes is working fine.

[hadoop@NHCLT-PC44-2 bin]$ hive
Logging initialized using configuration in file:/home/hadoop/Hive/conf/hive-log4j.properties
Hive history file=/home/hadoop/tmp/hadoop/hive_job_log_hadoop_201210011620_669979453.txt
hive> select count(1) from pokes;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201210011527_0004, Tracking URL = http://NHCLT-PC44-2:50030/jobdetails.jsp?jobid=job_201210011527_0004
Kill Command = /home/hadoop/hadoop-1.0.3/bin/hadoop job -kill job_201210011527_0004

Thanks and Regards
Ajit Kumar Shreevastava
ADCOE (App Development Center Of Excellence)
Mobile: 9717775634



  

RE: how to perform GROUP BY:: in pig for this

2012-09-30 Thread yogesh dhari


Thanks Bejoy :-)

Regards
Yogesh Kumar
From: bejo...@outlook.com
To: user@hive.apache.org
Subject: RE: how to perform GROUP BY:: in pig for this
Date: Sun, 30 Sep 2012 18:57:55 +0530




Hi Yogesh

If you are looking for the solution in hive, then the following query will get 
you the required result

Select month(Date), max(rate) from date_sample Group BY month(Date);


Regards
Bejoy KS



 From: yogesh.kuma...@wipro.com
 To: user@hive.apache.org
 CC: yogeshdh...@live.com
 Subject: FW: how to perform GROUP BY:: in pig for this
 Date: Sat, 29 Sep 2012 11:22:29 +
 
 
 
 Hi all,
 
 I have this data, having fields  (Date, symbol, rate)
 
 and I want it to be group by Months, and to find out the maximum rate value 
 for each month.
 
 like: for month (08, 36.3), (09, 36.4), (10, 36.8), (11, 37.5) ..
 
 
 (2009-08-21,CLI,33.38)
 (2009-08-24,CLI,33.03)
 (2009-08-25,CLI,33.16)
 (2009-08-26,CLI,32.78)
 (2009-08-27,CLI,32.79)
 (2009-08-28,CLI,33.37)
 (2009-08-31,CLI,32.51)
 (2009-09-11,CLI,34.08)
 (2009-09-14,CLI,35.19)
 (2009-09-15,CLI,35.82)
 (2009-09-16,CLI,36.58)
 (2009-09-17,CLI,37.63)
 (2009-09-18,CLI,37.26)
 (2009-09-21,CLI,36.31)
 (2009-09-22,CLI,35.88)
 (2009-09-23,CLI,35.84)
 (2009-09-24,CLI,33.98)
 (2009-09-25,CLI,32.44)
 (2009-09-28,CLI,33.34)
 (2009-09-29,CLI,33.6)
 (2009-09-30,CLI,33.24)
 (2009-10-01,CLI,31.98)
 (2009-10-02,CLI,31.21)
 (2009-10-05,CLI,31.31)
 (2009-10-21,CLI,32.86)
 (2009-10-26,CLI,33.15)
 (2009-10-27,CLI,32.71)
 (2009-10-28,CLI,32.03)
 (2009-10-29,CLI,32.05)
 (2009-10-30,CLI,31.88)
 (2009-11-02,CLI,31.88)
 (2009-11-03,CLI,31.16)
 (2009-11-04,CLI,31.47)
 (2009-11-09,CLI,31.59)
 (2009-11-25,CLI,30.58)
 (2009-11-27,CLI,30.19)
 (2009-11-30,CLI,30.86)
 (2009-12-01,CLI,31.74)
 (2009-12-02,CLI,32.62)
 (2009-12-03,CLI,33.43)
 (2009-12-04,CLI,34.12)
 (2009-12-07,CLI,33.77)
 (2009-12-08,CLI,33.8)
 (2009-12-09,CLI,33.71)
 
 Please help and suggest .
 
 Thanks  Regards
 Yogesh Kumar
 

  

Report tool ISSUE.

2012-09-30 Thread yogesh dhari

Hi all,

I am trying to install iReport on Ubuntu but am not able to. There is no
start.sh file in /iReport-4.7.1/bin/;
it only has /iReport-4.7.1/bin/ireport.exe.

Please suggest how to install it on Ubuntu.

Please suggest what to use for the Linux version.

Thanks  Regards
Yogesh Kumar
  

ERROR: Hive subquery showing

2012-09-27 Thread yogesh dhari

Hi all,

I have a table called ABC, like

name    grp
A       1
B       2
C       4
D       8

I want as output the name having the greatest grp, i.e. D.

I wrote a query:

select name from ( select MAX(grp) from ABC ) gy ;

but it gives an error:

FAILED: Error in semantic analysis: Line 1:7 Invalid table alias or column
reference 'name': (possible column names are: _col0)

Please help and explain why this is so, and what the correct query would be.


Thanks  regards
Yogesh Kumar





  

RE: ERROR: Hive subquery showing

2012-09-27 Thread yogesh dhari

Thanks Chen,

I want output like (the name and grp having the highest grp):

D 8

for the table:

name  grp
A     1
B     2
C     4
D     8

The query
select name from ( select MAX(grp) as name from ABC ) gy ;
shows the output: 8

which can be obtained simply by select MAX(grp) from ABC (I think the outer
query is not doing anything here).

Please suggest.
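
Two possible ways to get both columns, as untested sketches: sort and take the top row,
or join back to the single-row aggregate:

select name, grp from ABC order by grp desc limit 1;

select a.name, a.grp
from ABC a
join (select max(grp) as max_grp from ABC) m
on (a.grp = m.max_grp);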

Regards
Yogesh Kumar


Date: Thu, 27 Sep 2012 15:33:11 -0400
Subject: Re: ERROR: Hive subquery showing
From: chen.song...@gmail.com
To: user@hive.apache.org

Can you try this?
select name from ( select MAX(grp) as name from ABC ) gy ;

On Thu, Sep 27, 2012 at 3:29 PM, yogesh dhari yogeshdh...@live.com wrote:





Hi all,

I have a table called ABC, like

name  grp
A     1
B     2
C     4
D     8

I want the output like the name having greatest grp i.e D;

I wrote a query:


select name from ( select MAX(grp) from ABC ) gy ;

but it gives error

FAILED: Error in semantic analysis: Line 1:7 Invalid table alias or column 
reference 'name': (possible column names are: _col0)


Please help and suggest why it is so, and what would be the query;


Thanks  regards
Yogesh Kumar





  


-- 
Chen Song


  

RE: ERROR: Hive subquery showing

2012-09-27 Thread yogesh dhari

Hi Bejoy,

I tried this one also, but it throws a horrible error:

i.e.:

hive> select name from ABD where grp=MAX(grp);

FAILED: Hive Internal Error: java.lang.NullPointerException(null)
java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:214)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:684)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:805)
at 
org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
at 
org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:161)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7506)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7464)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFilterPlan(SemanticAnalyzer.java:1513)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFilterPlan(SemanticAnalyzer.java:1494)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:5886)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6524)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7282)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

Regards
Yogesh Kumar

Subject: Re: ERROR: Hive subquery showing
To: user@hive.apache.org
From: bejoy...@yahoo.com
Date: Thu, 27 Sep 2012 19:48:25 +

Hi yogesh

What about a query like this
select name from ABC WHERE grp=MAX(grp); 

Regards
Bejoy KS

Sent from handheld, please excuse typos.

From: Chen Song chen.song...@gmail.com
Date: Thu, 27 Sep 2012 15:33:11 -0400
To: user@hive.apache.org
Reply-To: user@hive.apache.org
Subject: Re: ERROR: Hive subquery showing
Can you try this?
select name from ( select MAX(grp) as name from ABC ) gy ;

On Thu, Sep 27, 2012 at 3:29 PM, yogesh dhari yogeshdh...@live.com wrote:





Hi all,

I have a table called ABC, like

name    grp
A       1
B       2
C       4
D       8

I want the output like the name having greatest grp i.e D;

I wrote a query:


select name from ( select MAX(grp) from ABC ) gy ;

but it gives error

FAILED: Error in semantic analysis: Line 1:7 Invalid table alias or column 
reference 'name': (possible column names are: _col0)


Please help and suggest why it is so, and what would be the query;


Thanks  regards
Yogesh Kumar





  


-- 
Chen Song


  

RE: how to load TAB_SEPRATED file in hive table

2012-09-26 Thread yogesh dhari

Thanks Ashok :-),

I am not very familiar with storage formats in Hive (I am new to
Hive).

Could you please list some of them besides these?

1) space separated      --  FIELDS TERMINATED BY ' '
2) Control-A separated  --  FIELDS TERMINATED BY '\001'
3) tab separated        --  FIELDS TERMINATED BY '\t'

Please list some more.
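
For reference, a tab-separated version of the earlier CREATE TABLE, plus a load
statement (the local path is only a placeholder):

CREATE TABLE XYZ (name STRING, roll INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;

LOAD DATA LOCAL INPATH '/path/to/records.tsv' INTO TABLE XYZ;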

Thanks & Regards
Yogesh Kumar





 From: ashok.sa...@wipro.com
 To: user@hive.apache.org
 Subject: RE: how to load TAB_SEPRATED file in hive table
 Date: Thu, 27 Sep 2012 05:20:11 +
 
 '\t'
 
 From: yogesh dhari [yogeshdh...@live.com]
 Sent: Thursday, September 27, 2012 10:42 AM
 To: hive request
 Subject: how to load TAB_SEPRATED file in hive table
 
 Hi all,
 
 I have a file in which records are Tab-Seprated,
 Please suggest me how to upload such file in Hive table.
 
 Like how to specify
 
 Create table XYZ (name STRING, roll INT)
 ROW FORMAT DELIMITED
 FIELDS TERMINATED BY  
 
 Please suggest for  over here.
 
 
 Thanks  Regards
 Yogesh Kumar
 
  

ERROR :regarding Hive WI, hwi service is not running

2012-09-19 Thread yogesh dhari

Hi all,

I am trying to run the Hive Web Interface (HWI), but it shows a FATAL error.

I have used this command:

hive --service hwi

but it shows:

yogesh@yogesh-Aspire-5738:/opt/hive-0.8.1/lib$ hive --service hwi

12/09/20 03:12:03 INFO hwi.HWIServer: HWI is starting up
12/09/20 03:12:04 FATAL hwi.HWIServer: HWI WAR file not found at 
/opt/hive-0.8.1/lib/hive-hwi-0.8.1.war

although the file lies there.


yogesh@yogesh-Aspire-5738:/opt/hive-0.8.1/lib$  pwd

/opt/hive-0.8.1/lib

ls -l

-rw-rw-r-- 1 root root   55876 Jan 26  2012 hive-common-0.8.1.jar
-rw-rw-r-- 1 root root  112440 Jan 26  2012 hive-contrib-0.8.1.jar
-rw-rw-r-- 1 root root  112440 Jan 26  2012 hive_contrib.jar
-rw-rw-r-- 1 root root 3461228 Jan 26  2012 hive-exec-0.8.1.jar
-rw-rw-r-- 1 root root   48829 Jan 26  2012 hive-hbase-handler-0.8.1.jar
-rw-rw-r-- 1 root root   23529 Jan 26  2012 hive-hwi-0.8.1.jar
-rwxrwxrwx 1 root root   28413 Jan 26  2012 hive-hwi-0.8.1.war
-rw-rw-r-- 1 root root   58914 Jan 26  2012 hive-jdbc-0.8.1.jar
-rw-rw-r-- 1 root root 1765743 Jan 26  2012 hive-metastore-0.8.1.jar
-rw-rw-r-- 1 root root   14081 Jan 26  2012 hive-pdk-0.8.1.jar
-rw-rw-r-- 1 root root  509488 Jan 26  2012 hive-serde-0.8.1.jar
-rw-rw-r-- 1 root root  174445 Jan 26  2012 hive-service-0.8.1.jar
-rw-rw-r-- 1 root root  110154 Jan 26  2012 hive-shims-0.8.1.jar
-rw-rw-r-- 1 root root   15260 Jan 24  2012 javaewah-0.3.jar
-rw-rw-r-- 1 root root  198552 Dec 24  2009 jdo2-api-2.3-ec.jar


Please suggest what might be wrong.

Thanks & regards
Yogesh Kumar









  

RE: ERROR :regarding Hive WI, hwi service is not running

2012-09-19 Thread yogesh dhari

Hello Swarnim,

Are you saying to put hive-hwi-0.8.1.war into hadoop/lib?

I have put it there and still have the same issue.

Thanks & Regards
Yogesh Kumar

From: kulkarni.swar...@gmail.com
Date: Wed, 19 Sep 2012 16:48:37 -0500
Subject: Re: ERROR :regarding Hive WI, hwi service is not running
To: user@hive.apache.org

It's probably looking for that file on HDFS. Try placing it there under the 
given location and see if you get the same error.

On Wed, Sep 19, 2012 at 4:45 PM, yogesh dhari yogeshdh...@live.com wrote:






Hi all, 

I am trying to run hive wi but its showing FATAL,

I have used this command

hive --service hwi 

but it shows..

yogesh@yogesh-Aspire-5738:/opt/hive-0.8.1/lib$ hive --service hwi



12/09/20 03:12:03 INFO hwi.HWIServer: HWI is starting up
12/09/20 03:12:04 FATAL hwi.HWIServer: HWI WAR file not found at 
/opt/hive-0.8.1/lib/hive-hwi-0.8.1.war

although lies there.


yogesh@yogesh-Aspire-5738:/opt/hive-0.8.1/lib$  pwd



/opt/hive-0.8.1/lib

ls -l

-rw-rw-r-- 1 root root   55876 Jan 26  2012 hive-common-0.8.1.jar
-rw-rw-r-- 1 root root  112440 Jan 26  2012 hive-contrib-0.8.1.jar
-rw-rw-r-- 1 root root  112440 Jan 26  2012 hive_contrib.jar


-rw-rw-r-- 1 root root 3461228 Jan 26  2012 hive-exec-0.8.1.jar
-rw-rw-r-- 1 root root   48829 Jan 26  2012 hive-hbase-handler-0.8.1.jar
-rw-rw-r-- 1 root root   23529 Jan 26  2012 hive-hwi-0.8.1.jar
-rwxrwxrwx 1 root root   28413 Jan 26  2012 hive-hwi-0.8.1.war


-rw-rw-r-- 1 root root   58914 Jan 26  2012 hive-jdbc-0.8.1.jar
-rw-rw-r-- 1 root root 1765743 Jan 26  2012 hive-metastore-0.8.1.jar
-rw-rw-r-- 1 root root   14081 Jan 26  2012 hive-pdk-0.8.1.jar
-rw-rw-r-- 1 root root  509488 Jan 26  2012 hive-serde-0.8.1.jar


-rw-rw-r-- 1 root root  174445 Jan 26  2012 hive-service-0.8.1.jar
-rw-rw-r-- 1 root root  110154 Jan 26  2012 hive-shims-0.8.1.jar
-rw-rw-r-- 1 root root   15260 Jan 24  2012 javaewah-0.3.jar
-rw-rw-r-- 1 root root  198552 Dec 24  2009 jdo2-api-2.3-ec.jar




Please suggest regarding

Thanks  regards
Yogesh Kumar









  


-- 
Swarnim
  

RE: ERROR :regarding Hive WI, hwi service is not running

2012-09-19 Thread yogesh dhari

Swarnim,

/opt/hive-0.8.1/lib already exists; it is the path of the Hive lib dir, and
hive-hwi-0.8.1.war already exists there.

/opt/hive-0.8.1/lib

ls -l

-rw-rw-r-- 1 root root   55876 Jan 26  2012 hive-common-0.8.1.jar
-rw-rw-r-- 1 root root  112440 Jan 26  2012 hive-contrib-0.8.1.jar
-rw-rw-r-- 1 root root  112440 Jan 26  2012 hive_contrib.jar




-rw-rw-r-- 1 root root 3461228 Jan 26  2012 hive-exec-0.8.1.jar
-rw-rw-r-- 1 root root   48829 Jan 26  2012 hive-hbase-handler-0.8.1.jar
-rw-rw-r-- 1 root root   23529 Jan 26  2012 hive-hwi-0.8.1.jar
-rwxrwxrwx 1 root root   28413 Jan 26  2012 hive-hwi-0.8.1.war




-rw-rw-r-- 1 root root   58914 Jan 26  2012 hive-jdbc-0.8.1.jar
-rw-rw-r-- 1 root root 1765743 Jan 26  2012 hive-metastore-0.8.1.jar


That's why it is so surprising :-(

Thanks  Regards
Yogesh Kumar



From: kulkarni.swar...@gmail.com
Date: Wed, 19 Sep 2012 16:58:01 -0500
Subject: Re: ERROR :regarding Hive WI, hwi service is not running
To: user@hive.apache.org

No. I meant create /opt/hive-0.8.1/lib/ in HDFS and place the 
hive-hwi-0.8.1.war there.



On Wed, Sep 19, 2012 at 4:55 PM, yogesh dhari yogeshdh...@live.com wrote:






Hello Swarnim,

Are you saying to put hive-hwi-0.8.1.war into hadoop/lib ?

I have put it over there and still the same issue..

Thanks  Regards
Yogesh Kumar

From: kulkarni.swar...@gmail.com


Date: Wed, 19 Sep 2012 16:48:37 -0500
Subject: Re: ERROR :regarding Hive WI, hwi service is not running
To: user@hive.apache.org



It's probably looking for that file on HDFS. Try placing it there under the 
given location and see if you get the same error.

On Wed, Sep 19, 2012 at 4:45 PM, yogesh dhari yogeshdh...@live.com wrote:








Hi all, 

I am trying to run hive wi but its showing FATAL,

I have used this command

hive --service hwi 

but it shows..

yogesh@yogesh-Aspire-5738:/opt/hive-0.8.1/lib$ hive --service hwi





12/09/20 03:12:03 INFO hwi.HWIServer: HWI is starting up
12/09/20 03:12:04 FATAL hwi.HWIServer: HWI WAR file not found at 
/opt/hive-0.8.1/lib/hive-hwi-0.8.1.war

although lies there.


yogesh@yogesh-Aspire-5738:/opt/hive-0.8.1/lib$  pwd





/opt/hive-0.8.1/lib

ls -l

-rw-rw-r-- 1 root root   55876 Jan 26  2012 hive-common-0.8.1.jar
-rw-rw-r-- 1 root root  112440 Jan 26  2012 hive-contrib-0.8.1.jar
-rw-rw-r-- 1 root root  112440 Jan 26  2012 hive_contrib.jar




-rw-rw-r-- 1 root root 3461228 Jan 26  2012 hive-exec-0.8.1.jar
-rw-rw-r-- 1 root root   48829 Jan 26  2012 hive-hbase-handler-0.8.1.jar
-rw-rw-r-- 1 root root   23529 Jan 26  2012 hive-hwi-0.8.1.jar
-rwxrwxrwx 1 root root   28413 Jan 26  2012 hive-hwi-0.8.1.war




-rw-rw-r-- 1 root root   58914 Jan 26  2012 hive-jdbc-0.8.1.jar
-rw-rw-r-- 1 root root 1765743 Jan 26  2012 hive-metastore-0.8.1.jar
-rw-rw-r-- 1 root root   14081 Jan 26  2012 hive-pdk-0.8.1.jar
-rw-rw-r-- 1 root root  509488 Jan 26  2012 hive-serde-0.8.1.jar




-rw-rw-r-- 1 root root  174445 Jan 26  2012 hive-service-0.8.1.jar
-rw-rw-r-- 1 root root  110154 Jan 26  2012 hive-shims-0.8.1.jar
-rw-rw-r-- 1 root root   15260 Jan 24  2012 javaewah-0.3.jar
-rw-rw-r-- 1 root root  198552 Dec 24  2009 jdo2-api-2.3-ec.jar






Please suggest regarding

Thanks  regards
Yogesh Kumar









  


-- 
Swarnim
  


-- 
Swarnim
  

RE: FAILED: Error in metadata

2012-09-11 Thread yogesh dhari

Thanks all for your guidance :-)

It's done now.

Regards
Yogesh Kumar

From: rohithsharm...@huawei.com
To: user@hive.apache.org
Subject: RE: FAILED: Error in metadata
Date: Tue, 11 Sep 2012 14:35:18 +0530

Hi,

The exception trace clearly indicates that Hive is pointing to the default
configurations. Make sure your edited hive-site.xml is loaded by the Hive server.
To verify whether the correct configurations are loaded, you can enable DEBUG
logs; there you can see the loaded configurations. Verify your configuration
against the loaded configuration (in hive.log).

Regards
Rohith Sharma K S

From: Ravindra [mailto:ravindra.baj...@gmail.com]
Sent: Monday, September 10, 2012 11:49 AM
To: user@hive.apache.org
Subject: Re: FAILED: Error in metadata

I am new to Hive. Still, 2 cents -

1. Do you have metastore_db already created? I don't see this name in your
hive-site (you have try).
2. Hope you have your database client driver jar copied in the hive classpath.
--
Ravi.
''We do not inherit the earth from our ancestors, we borrow it from our
children.'' PROTECT IT!


On Mon, Sep 10, 2012 at 10:40 AM, Hongjie Guo hongjie...@gmail.com wrote:
Check your MySQL first; if you can see some tables like TBLS and COLUMNS, MySQL
should be working fine.
Otherwise you should check your configuration in hive-site.xml.

2012/9/10 yogesh dhari yogeshdh...@live.com
Hello Bejoy,

I have restarted the hive and all cluster again but still the same issue.

Please help me out


Thanks & regards
Yogesh Kumar

Date: Sun, 9 Sep 2012 02:21:47 -0700
From: bejoy...@yahoo.com
Subject: Re: FAILED: Error in metadata
To: user@hive.apache.org

Hi Yogesh
It looks like hive is still on the derby db. Can you restart your hive instances
after updating the hive-site.xml? Also please make sure that you are modifying
the right copy of the file.
Regards,
Bejoy KS

From: yogesh dhari yogeshdh...@live.com
To: hive request user@hive.apache.org
Sent: Sunday, September 9, 2012 2:21 PM
Subject: FAILED: Error in metadata

Hi all,

my hive-site.xml is


<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://127.0.0.1:3306/try?createDatabaseIfNotExist=true</value>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hadoop</value>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hadoop</value>
</property>


and created a user in my sql.

 CREATE USER 'hadoop'@'localhost' IDENTIFIED BY 'hadoop'; 
GRANT ALL PRIVILEGES ON *.* TO 'hadoop' WITH GRANT OPTION;

Schema name is try, and connection name is Demo


When I run hive in single terminal then it runs fine but when i try to run hive 
parallel on new terminal window it shows the error:


FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Failed to 
start database 'metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to start database 'metastore_db', see the next 
exception for details.
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask


Please help and suggest
Yogesh Kumar

  

FAILED: Error in metadata

2012-09-09 Thread yogesh dhari

Hi all,

 my hive-site.xml is


<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://127.0.0.1:3306/try?createDatabaseIfNotExist=true</value>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hadoop</value>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hadoop</value>
</property>


and I created a user in MySQL:

CREATE USER 'hadoop'@'localhost' IDENTIFIED BY 'hadoop';

GRANT ALL PRIVILEGES ON *.* TO 'hadoop' WITH GRANT OPTION;

The schema name is try, and the connection name is Demo.


When I run Hive in a single terminal it runs fine, but when I try to run Hive
in parallel in a new terminal window it shows the error:


FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Failed to 
start database 'metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to start database 'metastore_db', see the next 
exception for details.
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask


Please help and suggest
Yogesh Kumar


  

RE: FAILED: Error in metadata

2012-09-09 Thread yogesh dhari

Hello Bejoy,

I have restarted Hive and the whole cluster again, but I still have the same issue.

Please help me out.


Thanks & regards
Yogesh Kumar

Date: Sun, 9 Sep 2012 02:21:47 -0700
From: bejoy...@yahoo.com
Subject: Re: FAILED: Error in metadata
To: user@hive.apache.org

Hi Yogesh
It looks like hive is still on the derby db. Can you restart your hive
instances after updating the hive-site.xml? Also please make sure that you are
modifying the right copy of the file.
Regards,
Bejoy KS
From: yogesh dhari yogeshdh...@live.com
 To: hive request user@hive.apache.org 
 Sent: Sunday, September 9, 2012 2:21 PM
 Subject: FAILED: Error in metadata
   





Hi all,

 my hive-site.xml is


<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://127.0.0.1:3306/try?createDatabaseIfNotExist=true</value>
</property>

<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>

<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hadoop</value>
</property>

<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>hadoop</value>
</property>


and created a user in my sql.

 CREATE USER 'hadoop'@'localhost' IDENTIFIED BY 'hadoop';

GRANT ALL PRIVILEGES ON *.* TO 'hadoop' WITH GRANT OPTION;

Schema name is try, and connection name is Demo


When I run hive in single terminal then it runs fine but when i try to run hive 
parallel on new terminal window it shows the error:


FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Failed to 
start database 'metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to start database 'metastore_db', see the next 
exception for details.
FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.DDLTask


Please help and suggest
Yogesh Kumar


  


  

issue regarding importing hive tables from one cluster to another.

2012-09-08 Thread yogesh dhari

Hi all,

I have switched from an old HDFS cluster to a new one (none of the machines from the
old cluster are connected to the new cluster in any manner).

I brought the edits and fsimage files, including dfs.name.dir and dfs.data.dir, from
the old cluster and put them on the new cluster, and all the files and data show up
on the new cluster.

Is there any way I can bring all the tables created in Hive, and their structure,
from the old cluster to the new cluster?

Thanks & regards
Yogesh Kumar



  

RE: namenode starting error

2012-07-08 Thread yogesh dhari

Hi,

Have you formatted your namenode by using

hadoop namenode -format


Regards
Yogesh Kumar

 From: are9...@nyp.org
 To: user@hive.apache.org; common-u...@hadoop.apache.org
 Date: Sat, 7 Jul 2012 21:37:16 -0400
 Subject: Re: namenode starting error
 
 Check your firewall settings, specifically if port 54310 is open. The
 other ports to look at are:
 
 50010
 50020
 50030
 50060
 50070
 54311
 
 On 6/20/12 5:22 AM, soham sardar sohamsardart...@gmail.com wrote:
 
 the thing is i have updated my JAVA_HOME in both .bashrc and hadoop-env.sh
 
 
 now the problem is when i try
 jps is the output is
 
 6113 NodeManager
 6799 DataNode
 7562 Jps
 7104 SecondaryNameNode
 5728 ResourceManager
 
 and now except namenode ,all other nodes are working and when i try to
 give
 
 hadoop fs -ls
 
 hduser@XPS-L501X:/home/soham/cloudera/hadoop-2.0.0-cdh4.0.0/sbin$ hadoop
 fs -ls
 12/06/20 14:45:14 INFO ipc.Client: Retrying connect to server:
 localhost/127.0.0.1:54310. Already tried 0 time(s).
 12/06/20 14:45:15 INFO ipc.Client: Retrying connect to server:
 localhost/127.0.0.1:54310. Already tried 1 time(s).
 12/06/20 14:45:16 INFO ipc.Client: Retrying connect to server:
 localhost/127.0.0.1:54310. Already tried 2 time(s).
 12/06/20 14:45:17 INFO ipc.Client: Retrying connect to server:
 localhost/127.0.0.1:54310. Already tried 3 time(s).
 12/06/20 14:45:18 INFO ipc.Client: Retrying connect to server:
 localhost/127.0.0.1:54310. Already tried 4 time(s).
 12/06/20 14:45:19 INFO ipc.Client: Retrying connect to server:
 localhost/127.0.0.1:54310. Already tried 5 time(s).
 12/06/20 14:45:20 INFO ipc.Client: Retrying connect to server:
 localhost/127.0.0.1:54310. Already tried 6 time(s).
 12/06/20 14:45:21 INFO ipc.Client: Retrying connect to server:
 localhost/127.0.0.1:54310. Already tried 7 time(s).
 12/06/20 14:45:22 INFO ipc.Client: Retrying connect to server:
 localhost/127.0.0.1:54310. Already tried 8 time(s).
 12/06/20 14:45:23 INFO ipc.Client: Retrying connect to server:
 localhost/127.0.0.1:54310. Already tried 9 time(s).
 ls: Call From XPS-L501X/127.0.1.1 to localhost:54310 failed on
 connection exception: java.net.ConnectException: Connection refused;
 For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
 
 could u temme why this error is coming ?
 
 
 
 
 
  

RE: Hive upload

2012-07-04 Thread yogesh dhari





Hi Bejoy,
Thank you very much for your response.

1)
A) When I run the command show tables, it doesn't show the newhive table.
B) Yes, the newhive directory is present in /user/hive/warehouse and also contains
the values imported from the RDBMS.
Please suggest, and give me an example of the sqoop import command you would use
for this case.

2)
A) Here is the command:

describe formatted letstry;
OK
# col_namedata_type   comment 
  
rollno  int None
namestring  None
numbr   int None
sno int None
  
# Detailed Table Information  
Database:   default  
Owner:  mediaadmin   
CreateTime: Tue Jul 03 17:06:27 GMT+05:30 2012 
LastAccessTime: UNKNOWN  
Protect Mode:   None 
Retention:  0
Location:   hdfs://localhost:9000/user/hive/warehouse/letstry 
Table Type: MANAGED_TABLE
Table Parameters:  
transient_lastDdlTime1341315550  
  
# Storage Information  
SerDe Library:  org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe 
InputFormat:org.apache.hadoop.mapred.TextInputFormat 
OutputFormat:   
org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat 
Compressed: No   
Num Buckets:-1   
Bucket Columns: []   
Sort Columns:   []   
Storage Desc Params:  
serialization.format1   
Time taken: 0.101 seconds


B) hadoop dfs -ls /user/hive/warehouse/letstry/
Found 1 items
-rw-r--r--   1 mediaadmin supergroup 17 2012-07-02 12:05 
/user/hive/warehouse/letstry/part-m-0

hadoop dfs -cat /user/hive/warehouse/letstry/part-m-0
1,John,123,abc,2



Here the data is present, but when I load it into Hive it gets deleted from HDFS, and
in Hive the values appear as NULL instead of (1,John,123,abc,2). I also didn't
understand your point regarding the correct data format (this data was imported
from a MySQL table). And what kind of configuration is needed in Sqoop?
Please suggest and help.

Greetings
Yogesh Kumar
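
One thing worth checking, as a sketch only: the exported file above is comma separated,
while a Hive table created with default settings expects Ctrl-A as the delimiter, which
can make the columns load as NULL. Declaring the delimiter explicitly (the column list
is assumed from the describe output above) avoids that mismatch:

CREATE TABLE letstry_csv (rollno INT, name STRING, numbr INT, sno INT)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

LOAD DATA INPATH '/user/hive/warehouse/letstry/part-m-0' INTO TABLE letstry_csv;

(LOAD DATA INPATH moves the file, which is also why the original file disappears from
its old HDFS location after a load.)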





Subject: Re: Hive upload
To: user@hive.apache.org
From: bejoy...@yahoo.com
Date: Wed, 4 Jul 2012 05:58:41 +




Hi Yogesh

The first issue (sqoop one).
1) Is the table newhive coming when you list tables using 'show table'?
2) Are you seeing a directory 'newhive' in your hive warte house dir(usually 
/usr/hive/warehouse)?

If not sqoop is failing to create hive tables /load data into them. Only sqoop 
import to hdfs is getting successful the hive part is failing. 

If hive in stand alone mode works as desired you need to check the sqoop 
configurations.

Regarding the second issue, can you check the storage location of NewTable and 
check whether there are files within. If so then do a 'cat' of those files and 
see whether it has the correct data format.

You can get the location of your table from the following command
describe formatted NewTable;
Regards
Bejoy KS

Sent from handheld, please excuse typos.

From: yogesh dhari yogeshdh...@live.com
Date: Wed, 4 Jul 2012 11:09:02 +0530
To: hive request user@hive.apache.org
Reply-To: user@hive.apache.org
Subject: Hive upload

Hi all,
I am trying to upload the tables from RDBMS to hive through sqoop, hive imports 
successfully. but i didn't find any table in hive that imported table gets 
uploaded into hdfs idr /user/hive/warehouseI want it to be present into hive, I 
used this command
sqoop import --connect jdbc:mysql://localhost:3306/Demo --username sqoop1 
--password SQOOP1 -table newone --hive-table newhive --create-hive-table 
--hive-import --target-dir /user/hive/warehouse/new

And another thing is,If I upload any file or table from HDFS or from Local then 
its uploads but data doesn't show in Hive table,
If I run command Select * from NewTable;it reflects
Null Null NullNull

although the real data is
Yogesh4Bangalore   1234

Please Suggest and help
RegardsYogesh Kumar   
  

Hive upload

2012-07-03 Thread yogesh dhari

Hi all,

I am trying to import tables from an RDBMS into Hive through Sqoop. The Hive import
finishes successfully, but I don't find any table in Hive; the imported table only
gets uploaded into the HDFS dir /user/hive/warehouse. I want it to be present in
Hive. I used this command:

sqoop import --connect jdbc:mysql://localhost:3306/Demo --username sqoop1 
--password SQOOP1 -table newone --hive-table newhive --create-hive-table 
--hive-import --target-dir /user/hive/warehouse/new

And another thing: if I load any file or table from HDFS or from local, it loads,
but the data doesn't show in the Hive table.
If I run the command Select * from NewTable; it returns

Null    Null    Null    Null

although the real data is

Yogesh    4    Bangalore    1234

Please suggest and help.

Regards
Yogesh Kumar