Re: "desc database extended " doesn't print dbproperties?

2014-06-25 Thread Navis류승우
Logged as https://issues.apache.org/jira/browse/HIVE-7298

Thanks,



2014-06-26 14:28 GMT+09:00 Navis류승우 :

> Seems to be a regression from HIVE-6386. Will be fixed in the next version.
>
>
> 2014-06-26 7:58 GMT+09:00 Sumit Kumar :
>
>  Hey guys,
>>
>> I just discovered that this syntax doesn't print the dbproperties any
>> more. I have two Hive versions on which I'm testing the following queries:
>>
>>   create database test2 with dbproperties ('key1' = 'value1', 'key2' =
>> 'value2');
>>   desc database extended test2;
>>
>> The output on hive 11 is:
>>
>> hive>   desc database extended
>> test2;
>> OK
>> test2 hdfs://:9000/warehouse/test2.db   {key2=value2,
>> key1=value1}
>> Time taken: 0.021 seconds, Fetched: 1 row(s)
>>
>> The output on hive 13 is:
>> hive> desc database extended
>> test2;
>> OK
>> test2 hdfs://:9000/warehouse/test2.db   hadoop
>> Time taken: 0.023 seconds, Fetched: 1 row(s)
>>
>> If you look closely, you'll notice that no key/value information from
>> dbproperties was printed in the Hive 13 case, and "hadoop" (I guess it's
>> my user id) appeared instead.
>>
>> Any idea if this functionality changed since hive 11? Do we have a
>> reference jira? I searched on the wikis and JIRAs but couldn't find a
>> reference; surprised that the language manual wiki (
>> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL)
>> doesn't even talk about this functionality any more. Would appreciate input
>> on this.
>>
>> Thanks,
>> -Sumit
>>
>
>


Re: "desc database extended " doesn't print dbproperties?

2014-06-25 Thread Navis류승우
Seems to be a regression from HIVE-6386. Will be fixed in the next version.


2014-06-26 7:58 GMT+09:00 Sumit Kumar :

> Hey guys,
>
> I just discovered that this syntax doesn't print the dbproperties any
> more. I have two Hive versions on which I'm testing the following queries:
>
>   create database test2 with dbproperties ('key1' = 'value1', 'key2' =
> 'value2');
>   desc database extended test2;
>
> The output on hive 11 is:
>
> hive>   desc database extended
> test2;
> OK
> test2 hdfs://:9000/warehouse/test2.db   {key2=value2,
> key1=value1}
> Time taken: 0.021 seconds, Fetched: 1 row(s)
>
> The output on hive 13 is:
> hive> desc database extended
> test2;
> OK
> test2 hdfs://:9000/warehouse/test2.db   hadoop
> Time taken: 0.023 seconds, Fetched: 1 row(s)
>
> If you look closely, you'll notice that no key/value information from
> dbproperties was printed in the Hive 13 case, and "hadoop" (I guess it's
> my user id) appeared instead.
>
> Any idea if this functionality changed since hive 11? Do we have a
> reference jira? I searched on the wikis and JIRAs but couldn't find a
> reference; surprised that the language manual wiki (
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL)
> doesn't even talk about this functionality any more. Would appreciate input
> on this.
>
> Thanks,
> -Sumit
>


Hivemetastore Error: Duplicate entry

2014-06-25 Thread 张伟
I run hive-0.13.1 on hadoop-2.2.0. When I insert into an ORC table, I get
the following error, indicating that Hive is trying to insert a duplicate
entry into a table called "GLOBAL_PRIVS". Any idea how to fix it?

Exception in thread "main" java.lang.RuntimeException:
java.lang.RuntimeException: Unable to instantiate
org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:346)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:625)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.RuntimeException: Unable to instantiate
org.apache.hadoop.hive.metastore.HiveMetaStoreClient
at
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1412)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:62)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:72)
at
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2453)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2465)
at
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:340)
... 7 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at
org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1410)
... 12 more
Caused by: javax.jdo.JDODataStoreException: Exception thrown flushing
changes to datastore
NestedThrowables:
java.sql.BatchUpdateException: Duplicate entry 'admin-ROLE-All-admin-ROLE'
for key 'GLOBALPRIVILEGEINDEX'
at
org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
at
org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:165)
at
org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:406)
at
org.apache.hadoop.hive.metastore.ObjectStore.grantPrivileges(ObjectStore.java:3877)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
at com.sun.proxy.$Proxy7.grantPrivileges(Unknown Source)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultRoles(HiveMetaStore.java:567)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:398)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:356)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
at
org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4944)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:171)
... 17 more


"desc database extended " doesn't print dbproperties?

2014-06-25 Thread Sumit Kumar
Hey guys,

I just discovered that this syntax doesn't print the dbproperties any more.
I have two Hive versions on which I'm testing the following queries:

  create database test2 with dbproperties ('key1' = 'value1', 'key2' = 
'value2');
  desc database extended test2;


The output on hive 11 is:

hive>   desc database extended test2;   
 
OK
test2 hdfs://:9000/warehouse/test2.db   {key2=value2, 
key1=value1}
Time taken: 0.021 seconds, Fetched: 1 row(s)


The output on hive 13 is:
hive> desc database extended test2; 
 
OK
test2 hdfs://:9000/warehouse/test2.db    hadoop
Time taken: 0.023 seconds, Fetched: 1 row(s)


If you look closely, you'll notice that no key/value information from
dbproperties was printed in the Hive 13 case, and "hadoop" (I guess it's my
user id) appeared instead.

Any idea if this functionality changed since hive 11? Do we have a reference 
jira? I searched on the wikis and JIRAs but couldn't find a reference; 
surprised that the language manual wiki 
(https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL) doesn't 
even talk about this functionality any more. Would appreciate input on this.


Thanks,
-Sumit


Re: Hive 0.13 map 100 % reduce 100% and the reduce decrise to 75 % ( in join or lag function)

2014-06-25 Thread Matouk IFTISSEN
No task failed in the log. I suspect a skewed-join problem (a skewed table
using the lag function).
How can I avoid this (skewed data)?
On 26 Jun 2014 at 00:40, "Stéphane Verlet" wrote:

> If the reduce percentage is decreasing, it probably means a task failed.
> It typically retries and goes back up.
>
>
> On Mon, Jun 23, 2014 at 3:15 AM, Matouk IFTISSEN <
> matouk.iftis...@ysance.com> wrote:
>
>>
>> My HDFS space is large, so I don't think this is the cause of the problem.
>> I will try increasing the Java heap memory in the hive-env.sh file.
>>
>> 2014-06-23 11:06 GMT+02:00 Nagarjuna Vissarapu 
>> :
>>
>>> Can you please check your hdfs space once? If it is fine please increase
>>> java heap memory in hive-env.sh file
>>>
>>>
>>> On Mon, Jun 23, 2014 at 2:00 AM, Dima Machlin 
>>> wrote:
>>>
  I don’t see how this is “same” or even remotely related to my issue.

 It would be better for you to send it with a different and informative
 subject in a separate mail.



 *From:* Matouk IFTISSEN [mailto:matouk.iftis...@ysance.com]
 *Sent:* Monday, June 23, 2014 11:49 AM
 *To:* user@hive.apache.org
 *Subject:* Re: Hive 0.12 Mapjoin and MapJoinMemoryExhaustionException



 Hello,

 I have the same problem, but in a different manner:

 the map reaches 100%,

 the reduce reaches 100% and then drops to 75%!!

 I use a lag function in Hive; the table (my_first_table) has 15 million
 rows:








 *INSERT INTO TABLE my_table select *, case when nouvelle_tache = '1'
 then 'pas de rejeu' else if (lag(opg_id,1) OVER (PARTITION BY opg_par_id
 order by date_execution) is null,  opg_id, lag(opg_id,1) OVER (PARTITION BY
 opg_par_id order by date_execution) ) end opg_par_id_1, others_columns from
 my_first_table*

 *--- to limit the number of rows; I thought this was a memory problem,
 but it is not, because I have a lot of free memory*


 *where column5 > '37123T0104-10510' and column5 <=  '69191R0025-10162'
 order by column5 ;*



 No error in the log. Please help, what is wrong?

  This is the detail from the tracker (full log):

  Regards



 2014-06-23 10:18 GMT+02:00 Dima Machlin :

  Hello,

 We are running Hive 0.12 and using the hive.auto.convert.join feature
 when :

 hive.auto.convert.join.noconditionaltask.size = 5000

 hive.mapjoin.followby.gby.localtask.max.memory.usage = 0.7



 The query is a mapjoin with a group by afterwards like so :



 select id,x,max(y)

 from (

 select t1.id,t1.x,t2.y from  tbl1  join tbl2 on (t1.id=t2.id)

 ) z

 group by id,x;





 While executing a join to a table that has ~3m rows we are failing on :



 org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionException:
 2014-06-10 04:42:21  Processing rows: 250  Hashtable size: 249
 Memory usage: 704765184  percentage: 0.701

 at
 org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionHandler.checkMemoryStatus(MapJoinMemoryExhaustionHandler.java:91)



 This is understood as we pass the 70% limit.

 But, the table only takes 35 MB in HDFS, and somehow loading it into
 the hash table increases its size drastically; in the end it fails after
 reaching ~700 MB.



 So this is the first question – why does it take so much space in
 memory?



 Later, I tried to increase
 hive.mapjoin.followby.gby.localtask.max.memory.usage to allow the mapjoin
 to finish. By doing so I got another problem.

 The table is in fact loaded to memory as seen here :



 Processing rows:290 Hashtable size: 289 Memory usage:
 818590784   percentage: 0.815

 INFO exec.HashTableSinkOperator: 2014-05-28 12:16:42  Processing
 rows:290 Hashtable size: 289 Memory usage:
 818590784   percentage:   0.815

 INFO exec.TableScanOperator: 0 finished. closing...

 INFO exec.TableScanOperator: 0 forwarded 2946773 rows

 INFO exec.HashTableSinkOperator: 1 finished. closing...

 INFO exec.HashTableSinkOperator: Temp URI for side table:
 file:/tmp/hadoop/hive_2014-05-28_12-16-21_239_3089817264132856114-94/-local-10004/HashTable-Stage-2

 Dump the side-table into file:
 file:/tmp/hadoop/hive_2014-05-28_12-16-21_239_3089817264132856114-94/-local-10004/HashTable-Stage-2/MapJoin-mapfile691--.hashtable

 INFO exec.HashTableSinkOperator: 2014-05-28 12:16:42  Dump the
 side-table into file:
 file:/tmp/hadoop/hive_2014-05-28_12-16-21_239_3089817264132856114-94/-local-10004/HashTable-Stage-2/MapJoin-mapfile691--.hashtable

 Upload 1 File to:
 file:/tmp/hadoop/hive_201
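
A note on the lag query quoted in this thread: the `if(lag(...) is null, opg_id, lag(...))` pattern writes the same window expression twice. A hedged sketch of an equivalent form using coalesce (table and column names are taken from the quoted query; untested against this dataset):

```sql
-- if(x is null, a, x) is coalesce(x, a), so lag() only needs to appear once.
INSERT INTO TABLE my_table
SELECT *,
       CASE WHEN nouvelle_tache = '1' THEN 'pas de rejeu'
            ELSE coalesce(lag(opg_id, 1) OVER (PARTITION BY opg_par_id
                                               ORDER BY date_execution),
                          opg_id)
       END AS opg_par_id_1
FROM my_first_table;
```

Besides being shorter, this gives the optimizer one window expression to evaluate instead of two over the same partition.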

Re: Query execution time for Hive queries in Hue Web UI

2014-06-25 Thread Stéphane Verlet
In the results page, click on the M/R job on the left, then click on Metadata.

Stephane


On Mon, Jun 23, 2014 at 3:42 AM, Ravi Prasad  wrote:

> Hi all,
>
>  I have created a Hive table (  million records).
> I am using the Hue web UI to run Hive queries.
>
> I am running the same queries in both the Hive UI (Beeswax) and Cloudera
> Impala (web UI) in Hue to compare their performance.
>
> In Hue, I am not able to find the query execution time.
> Can someone help me find the execution time of queries in Hue?
>
>
>
> --
> Regards,
> RAVI PRASAD. T
>


Re: Hive 0.13 map 100 % reduce 100% and the reduce decrise to 75 % ( in join or lag function)

2014-06-25 Thread Stéphane Verlet
If the reduce percentage is decreasing, it probably means a task failed.
It typically retries and goes back up.


On Mon, Jun 23, 2014 at 3:15 AM, Matouk IFTISSEN  wrote:

>
> My HDFS space is large, so I don't think this is the cause of the problem.
> I will try increasing the Java heap memory in the hive-env.sh file.
>
> 2014-06-23 11:06 GMT+02:00 Nagarjuna Vissarapu :
>
>> Can you please check your hdfs space once? If it is fine please increase
>> java heap memory in hive-env.sh file
>>
>>
>> On Mon, Jun 23, 2014 at 2:00 AM, Dima Machlin 
>> wrote:
>>
>>>  I don’t see how this is “same” or even remotely related to my issue.
>>>
>>> It would be better for you to send it with a different and informative
>>> subject in a separate mail.
>>>
>>>
>>>
>>> *From:* Matouk IFTISSEN [mailto:matouk.iftis...@ysance.com]
>>> *Sent:* Monday, June 23, 2014 11:49 AM
>>> *To:* user@hive.apache.org
>>> *Subject:* Re: Hive 0.12 Mapjoin and MapJoinMemoryExhaustionException
>>>
>>>
>>>
>>> Hello,
>>>
>>> I have the same problem, but in a different manner:
>>>
>>> the map reaches 100%,
>>>
>>> the reduce reaches 100% and then drops to 75%!!
>>>
>>> I use a lag function in Hive; the table (my_first_table) has 15 million
>>> rows:
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *INSERT INTO TABLE my_table select *, case when nouvelle_tache = '1'
>>> then 'pas de rejeu' else if (lag(opg_id,1) OVER (PARTITION BY opg_par_id
>>> order by date_execution) is null,  opg_id, lag(opg_id,1) OVER (PARTITION BY
>>> opg_par_id order by date_execution) ) end opg_par_id_1, others_columns from
>>> my_first_table*
>>>
>>> *--- to limit the number of rows; I thought this was a memory problem,
>>> but it is not, because I have a lot of free memory*
>>>
>>>
>>> *where column5 > '37123T0104-10510' and column5 <=  '69191R0025-10162'
>>> order by column5 ;*
>>>
>>>
>>>
>>> No error in the log. Please help, what is wrong?
>>>
>>>  This is the detail from the tracker (full log):
>>>
>>>  Regards
>>>
>>>
>>>
>>> 2014-06-23 10:18 GMT+02:00 Dima Machlin :
>>>
>>>  Hello,
>>>
>>> We are running Hive 0.12 and using the hive.auto.convert.join feature
>>> when :
>>>
>>> hive.auto.convert.join.noconditionaltask.size = 5000
>>>
>>> hive.mapjoin.followby.gby.localtask.max.memory.usage = 0.7
>>>
>>>
>>>
>>> The query is a mapjoin with a group by afterwards like so :
>>>
>>>
>>>
>>> select id,x,max(y)
>>>
>>> from (
>>>
>>> select t1.id,t1.x,t2.y from  tbl1  join tbl2 on (t1.id=t2.id)
>>>
>>> ) z
>>>
>>> group by id,x;
>>>
>>>
>>>
>>>
>>>
>>> While executing a join to a table that has ~3m rows we are failing on :
>>>
>>>
>>>
>>> org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionException:
>>> 2014-06-10 04:42:21  Processing rows: 250  Hashtable size: 249
>>> Memory usage: 704765184  percentage: 0.701
>>>
>>> at
>>> org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionHandler.checkMemoryStatus(MapJoinMemoryExhaustionHandler.java:91)
>>>
>>>
>>>
>>> This is understood as we pass the 70% limit.
>>>
>>> But, the table only takes 35 MB in HDFS, and somehow loading it into the
>>> hash table increases its size drastically; in the end it fails after
>>> reaching ~700 MB.
>>>
>>>
>>>
>>> So this is the first question – why does it take so much space in memory?
>>>
>>>
>>>
>>> Later, I tried to increase
>>> hive.mapjoin.followby.gby.localtask.max.memory.usage to allow the mapjoin
>>> to finish. By doing so I got another problem.
>>>
>>> The table is in fact loaded to memory as seen here :
>>>
>>>
>>>
>>> Processing rows:290 Hashtable size: 289 Memory usage:
>>> 818590784   percentage: 0.815
>>>
>>> INFO exec.HashTableSinkOperator: 2014-05-28 12:16:42  Processing
>>> rows:290 Hashtable size: 289 Memory usage:
>>> 818590784   percentage:   0.815
>>>
>>> INFO exec.TableScanOperator: 0 finished. closing...
>>>
>>> INFO exec.TableScanOperator: 0 forwarded 2946773 rows
>>>
>>> INFO exec.HashTableSinkOperator: 1 finished. closing...
>>>
>>> INFO exec.HashTableSinkOperator: Temp URI for side table:
>>> file:/tmp/hadoop/hive_2014-05-28_12-16-21_239_3089817264132856114-94/-local-10004/HashTable-Stage-2
>>>
>>> Dump the side-table into file:
>>> file:/tmp/hadoop/hive_2014-05-28_12-16-21_239_3089817264132856114-94/-local-10004/HashTable-Stage-2/MapJoin-mapfile691--.hashtable
>>>
>>> INFO exec.HashTableSinkOperator: 2014-05-28 12:16:42  Dump the
>>> side-table into file:
>>> file:/tmp/hadoop/hive_2014-05-28_12-16-21_239_3089817264132856114-94/-local-10004/HashTable-Stage-2/MapJoin-mapfile691--.hashtable
>>>
>>> Upload 1 File to:
>>> file:/tmp/hadoop/hive_2014-05-28_12-16-21_239_3089817264132856114-94/-local-10004/HashTable-Stage-2/MapJoin-mapfile691--.hashtable
>>>
>>> INFO exec.HashTableSinkOperator: 2014-05-28 12:16:45  Upload 1 File to:
>>> file:/tmp/hadoop/hive_2014-05-28_12-16-21_239_3089817264132856114-94/-local-10004/HashTable-Stage-2/MapJoin-mapfile691--.hashtable
>>>
>>> INFO exec.HashTableSinkOperator: 1 forwarded 0 rows
>>>
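
For readers hitting the same MapJoinMemoryExhaustionException as Dima: the jump from 35 MB on disk to ~700 MB in memory is plausibly the per-entry Java object overhead of the local hash table rather than a measurement bug. A hedged sketch of the knobs discussed in this thread (the values are illustrative, not recommendations):

```sql
-- Fall back to a reduce-side join, avoiding the local hash table entirely:
set hive.auto.convert.join = false;

-- Or keep the mapjoin but move the ceilings the query is hitting
-- (size threshold in bytes; memory usage as a fraction of the heap):
set hive.auto.convert.join.noconditionaltask.size = 50000000;
set hive.mapjoin.followby.gby.localtask.max.memory.usage = 0.9;
```

Raising the memory fraction only helps if the local task's heap is actually large enough to hold the expanded table; otherwise disabling the auto-conversion is the safer workaround.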
>>>

Re: hive date format

2014-06-25 Thread Matouk IFTISSEN
Review your Oracle date format; normally it looks like this: yyyy-mm-dd
HH:mm:ss.
You can pass --verbose to Sqoop to debug the error.
Regards
On 26 Jun 2014 at 00:01, "rpriaa...@gmail.com" wrote:

> I am actually trying to Sqoop a Hive table to Oracle. The date
> format (25-JUN-2014 in Oracle) poses a problem. I guessed that changing
> the date format in the Hive table itself would resolve the issue, but now
> I'm not sure it is the right approach. Can someone help
> with this, please?
>
>
> On Wed, Jun 25, 2014 at 2:41 PM, D K  wrote:
>
>> Probably you meant from_unixtime(timestamp in bigint, "dd-MMM-yyyy").
>> "dd" vs "DD" does make a difference in the output.
>>
>> -Deepesh
>>
>>
>> On Wed, Jun 25, 2014 at 2:30 PM, Matouk IFTISSEN <
>> matouk.iftis...@ysance.com> wrote:
>>
>>> sorry, use this: from_unixtime(field_date,'DD-MMM-yyyy')
>>>
>>>
>>> 2014-06-25 23:27 GMT+02:00 Matouk IFTISSEN :
>>>
>>> use: unix_timestamp(field_date,'DD-MMM-yyyy')



 2014-06-25 23:20 GMT+02:00 rpriaa...@gmail.com :

 Hi,
>
> Please can someone tell me how to change the date format in hive.
> I need it in the format  '25-JUN-2014'
>
> --
> Regards,
> riad
>



 --

 Matouk IFTISSEN | Consultant BI & Big Data
 24 rue du sentier - 75002 Paris - www.ysance.com
 Fax : +33 1 73 72 97 26
 Ysance sur : Twitter | Facebook | Google+ | LinkedIn | Newsletter
 Nos autres sites : ys4you | labdecisionnel | decrypt

>>>
>>>
>>>
>>>
>>
>>
>
>
> --
> Regards,
> riad
>


Re: hive date format

2014-06-25 Thread rpriaa...@gmail.com
I am actually trying to Sqoop a Hive table to Oracle. The date
format (25-JUN-2014 in Oracle) poses a problem. I guessed that changing
the date format in the Hive table itself would resolve the issue, but now
I'm not sure it is the right approach. Can someone help
with this, please?


On Wed, Jun 25, 2014 at 2:41 PM, D K  wrote:

> Probably you meant from_unixtime(timestamp in bigint, "dd-MMM-yyyy"). "dd"
> vs "DD" does make a difference in the output.
>
> -Deepesh
>
>
> On Wed, Jun 25, 2014 at 2:30 PM, Matouk IFTISSEN <
> matouk.iftis...@ysance.com> wrote:
>
>> sorry, use this: from_unixtime(field_date,'DD-MMM-yyyy')
>>
>>
>> 2014-06-25 23:27 GMT+02:00 Matouk IFTISSEN :
>>
>> use: unix_timestamp(field_date,'DD-MMM-yyyy')
>>>
>>>
>>>
>>> 2014-06-25 23:20 GMT+02:00 rpriaa...@gmail.com :
>>>
>>> Hi,

 Please can someone tell me how to change the date format in hive.
 I need it in the format  '25-JUN-2014'

 --
 Regards,
 riad

>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>


-- 
Regards,
riad


Re: hive date format

2014-06-25 Thread D K
Probably you meant from_unixtime(timestamp in bigint, "dd-MMM-yyyy"). "dd"
vs "DD" does make a difference in the output.

-Deepesh


On Wed, Jun 25, 2014 at 2:30 PM, Matouk IFTISSEN  wrote:

> sorry, use this: from_unixtime(field_date,'DD-MMM-yyyy')
>
>
> 2014-06-25 23:27 GMT+02:00 Matouk IFTISSEN :
>
> use: unix_timestamp(field_date,'DD-MMM-yyyy')
>>
>>
>>
>> 2014-06-25 23:20 GMT+02:00 rpriaa...@gmail.com :
>>
>> Hi,
>>>
>>> Please can someone tell me how to change the date format in hive.
>>> I need it in the format  '25-JUN-2014'
>>>
>>> --
>>> Regards,
>>> riad
>>>
>>
>>
>>
>>
>
>
>
>
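
Pulling this thread's answers together: Hive's format letters follow Java's SimpleDateFormat, so 'dd' is day-of-month, 'DD' is day-of-year, and 'MMM' renders 'Jun'; upper() gives the Oracle-style 'JUN'. A hedged sketch (the literal input date and its 'yyyy-MM-dd' source format are example assumptions):

```sql
-- Parse the Hive-side string, then re-render it in Oracle's DD-MON-YYYY style.
SELECT upper(from_unixtime(unix_timestamp('2014-06-25', 'yyyy-MM-dd'),
                           'dd-MMM-yyyy'));
-- 25-JUN-2014
```

If the Hive column is already a unix timestamp in bigint, the inner unix_timestamp() call can be dropped, as in D K's reply.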


Re: hive date format

2014-06-25 Thread Matouk IFTISSEN
sorry, use this: from_unixtime(field_date,'DD-MMM-yyyy')


2014-06-25 23:27 GMT+02:00 Matouk IFTISSEN :

> use: unix_timestamp(field_date,'DD-MMM-yyyy')
>
>
>
> 2014-06-25 23:20 GMT+02:00 rpriaa...@gmail.com :
>
> Hi,
>>
>> Please can someone tell me how to change the date format in hive.
>> I need it in the format  '25-JUN-2014'
>>
>> --
>> Regards,
>> riad
>>
>
>
>
>





Re: hive date format

2014-06-25 Thread Matouk IFTISSEN
use: unix_timestamp(field_date,'DD-MMM-yyyy')



2014-06-25 23:20 GMT+02:00 rpriaa...@gmail.com :

> Hi,
>
> Please can someone tell me how to change the date format in hive.
> I need it in the format  '25-JUN-2014'
>
> --
> Regards,
> riad
>





hive date format

2014-06-25 Thread rpriaa...@gmail.com
Hi,

Please can someone tell me how to change the date format in hive.
I need it in the format  '25-JUN-2014'

-- 
Regards,
riad


RE: Trying to create an HBase Table using HCatalog

2014-06-25 Thread Carlotta Hicks
That worked!  Thank you very much!

From: D K [mailto:deepe...@gmail.com]
Sent: Wednesday, June 25, 2014 12:05 AM
To: user@hive.apache.org
Subject: Re: Trying to create an HBase Table using HCatalog

Can you change the storage handler classname in your query from 
"org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler" to 
"org.apache.hcatalog.hbase.HBaseHCatStorageHandler" and try?
-Deepesh

On Tue, Jun 24, 2014 at 12:40 PM, Carlotta Hicks
<carlotta.hi...@sas.com> wrote:
I am submitting the following with HCatalog:

CREATE TABLE
usTestHB (fname string, lname string, cname string)
STORED BY 'org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler '
TBLPROPERTIES (
'hbase.table.name'='usTest',
'hbase.columns.mapping'='fname:fname, lname:lname, cname:cname'
);

but I keep getting this exception:

SemanticException java.io.IOException: Error in loading storage 
handler.org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler

I’ve found patch HIVE-6698, but this seems to be geared towards using HCatalog
on Windows. I am on Unix.

Can anyone provide assistance with creating an HBase table using HCatalog?

HCatalog – 0.12.0
HBase – 0.96.1.1

Thanks,

Carlotta Hicks
carlotta.hi...@sas.com

SAS® … THE POWER TO KNOW®
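
For reference, here is the DDL from this thread with D K's corrected storage-handler class applied (this is just the original statement with the package name changed and the stray trailing space inside the quoted class name removed; a sketch, not verified against this HBase/HCatalog install):

```sql
CREATE TABLE usTestHB (fname string, lname string, cname string)
STORED BY 'org.apache.hcatalog.hbase.HBaseHCatStorageHandler'
TBLPROPERTIES (
  'hbase.table.name' = 'usTest',
  'hbase.columns.mapping' = 'fname:fname, lname:lname, cname:cname'
);
```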




Re: Reg: Merging Rows

2014-06-25 Thread ushahive
Thank you so much, it helped.


On Tue, Jun 24, 2014 at 2:49 PM, sumit ghosh  wrote:

> Did you try sum(col1), sum(col2) ...   group by id
>
>
>   On Tuesday, 24 June 2014 1:23 PM, usha hive  wrote:
>
>
> Hi,
>
> I am trying to merge few rows in to 1 row. I am stuck. Please help me.
>
> Example
> id  col1  col2  col3col4
> 1   44   NULLNULLNULL
> 1  NULL 37   NULL NULL
> 1  NULLNULL   88NULL
> 1  NULLNULL   NULL99
>
> to
> id  col1  col2  col3col4
> 1 44  3788   99
>
> Thanks,
> Usha
>
>
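
Sumit's suggestion works because Hive's aggregate functions skip NULLs, so grouping by id collapses the sparse rows into one. A hedged sketch (the table name is a stand-in; max() works as well as sum() here since each column holds at most one non-NULL value per id):

```sql
-- Collapse one-value-per-row records into a single row per id.
SELECT id,
       max(col1) AS col1,
       max(col2) AS col2,
       max(col3) AS col3,
       max(col4) AS col4
FROM my_sparse_table
GROUP BY id;
-- id  col1  col2  col3  col4
-- 1   44    37    88    99
```

Prefer max() over sum() if duplicate non-NULL values per id are possible, since sum() would add them together.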
>


Re: hive/hbase integration

2014-06-25 Thread Brian Jeltema
This did work, though specifying the same list in the HIVE_AUX_JARS_PATH 
environment variable
or in a ‘set hive.aux.jars.path’ command does not work. 

Thanks, I can work with this.

Brian

On Jun 24, 2014, at 11:49 PM, D K  wrote:

> If the MR job is failing can you try the following on Hive CLI before running 
> the query?
> 
> add jar $HBASE_HOME/lib/hbase-client-<version>-hadoop2.jar;
> add jar $HBASE_HOME/lib/hbase-protocol-<version>-hadoop2.jar;
> add jar $HBASE_HOME/lib/hbase-server-<version>-hadoop2.jar;
> add jar $HBASE_HOME/lib/htrace-core-2.01.jar
> 
> replace <version> based on your install environment. Also replace $HBASE_HOME
> with the full path of your hbase install.
> 
> -Deepesh
> 
> On Mon, Jun 23, 2014 at 9:14 AM, Brian Jeltema 
>  wrote:
> I’m running Hive 0.12 on Hadoop V2 (Ambari installation) and have been trying 
> to use HBase integration. Hive generated Map/Reduce jobs
> are failing with:
> 
>Error: java.lang.ClassNotFoundException: 
> org.apache.hadoop.hbase.mapreduce.TableSplit
> 
> this is discussed in several discussion threads, but there are so many
> different distributions, bugs, and versions involved
> that it’s difficult to find the correct solution. 
> 
> Can someone suggest the correct fix (or approach to a fix) for this 
> configuration? 
> 
> Thanks
> Brian
>
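
Since the session-level `add jar` approach works for Brian where hive.aux.jars.path does not, one way to avoid retyping the statements is to put them in ~/.hiverc, which the Hive CLI executes at the start of every session. A hedged sketch (the paths and version numbers are placeholders for whatever the local install actually contains, not prescriptions):

```sql
-- ~/.hiverc: run automatically by the Hive CLI on startup.
add jar /usr/lib/hbase/lib/hbase-client-0.96.1.1-hadoop2.jar;
add jar /usr/lib/hbase/lib/hbase-protocol-0.96.1.1-hadoop2.jar;
add jar /usr/lib/hbase/lib/hbase-server-0.96.1.1-hadoop2.jar;
add jar /usr/lib/hbase/lib/htrace-core-2.01.jar;
```

Note that `add jar` ships the jars with each job, which is also why it fixes the ClassNotFoundException on the Map/Reduce side where the client-only aux-path setting may not.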