Please help: Hive + Postgres integration. Drop table query is hanging.

2012-08-28 Thread rohithsharma
Hi

 

I am using PostgreSQL 9.0.7 as the metastore with Hive 0.9.0. I integrated
Postgres with Hive, and a few queries are working fine. I am using
postgresql-9.0-802.jdbc3.jar as the JDBC driver.

But the "drop table" query is hanging. The following is the Hive DEBUG log:

 

08/12/28 06:02:09 DEBUG lazy.LazySimpleSerDe: LazySimpleSerDe initialized
with: columnNames=[a] columnTypes=[int] separator=[[B@e4600c0] nullstring=\N
lastColumnTakesRest=false
08/12/28 06:02:09 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=erer
08/12/28 06:02:09 INFO metastore.HiveMetaStore: 0: drop_table : db=default tbl=erer
08/12/28 06:02:09 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=erer
08/12/28 06:02:09 DEBUG metastore.ObjectStore: Executing listMPartitions

I could not find a matching bug in the open source tracker. Is there any way
to overcome this problem? Please help me.

Regards

Rohith Sharma K S

 



[jira] [Created] (HIVE-3414) Exception cast issue in HiveMetaStore.java

2012-08-28 Thread Harsh J (JIRA)
Harsh J created HIVE-3414:
-

 Summary: Exception cast issue in HiveMetaStore.java
 Key: HIVE-3414
 URL: https://issues.apache.org/jira/browse/HIVE-3414
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.8.1
Reporter: Harsh J
Priority: Trivial


(This is reading the 0.8 code)

There is a faulty way of checking for exception types in HiveMetaStore.java,
in the HMSHandler.rename_partition method:

{code}
1914 } catch (Exception e) { 
1915 assert(e instanceof RuntimeException); 
1916 throw (RuntimeException)e; 
1917 }
{code}

Leads to:

{code}
Caused by: java.lang.ClassCastException: 
org.apache.hadoop.hive.metastore.api.InvalidOperationException cannot be cast 
to java.lang.RuntimeException
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rename_partition(HiveMetaStore.java:1916)
at 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partition(HiveMetaStore.java:1884)
at 
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_partition(HiveMetaStoreClient.java:818)
at org.apache.hadoop.hive.ql.metadata.Hive.alterPartition(Hive.java:427)
at org.apache.hadoop.hive.ql.metadata.Hive.getPartition(Hive.java:1464)
... 18 more
{code}

This ClassCastException occurs whenever a genuine checked exception is raised
while processing the alter_partition method.

Why do we cast here rather than re-throw in a wrapped fashion?
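The wrapped re-throw the reporter asks about can be sketched as follows. This is an illustrative example only (RethrowDemo and toRuntime are made-up names, not the actual Hive code): a checked exception is wrapped so it survives as the cause, instead of triggering the ClassCastException above.

```java
// Illustrative sketch only, not HiveMetaStore code: re-throw runtime
// exceptions as-is and wrap checked ones instead of casting them.
public class RethrowDemo {
    static RuntimeException toRuntime(Exception e) {
        if (e instanceof RuntimeException) {
            return (RuntimeException) e;   // safe: type was checked first
        }
        // Wrapping preserves the checked exception as the cause instead
        // of producing a ClassCastException at the throw site.
        return new RuntimeException(e);
    }

    public static void main(String[] args) {
        Exception checked = new Exception("InvalidOperationException stand-in");
        RuntimeException wrapped = toRuntime(checked);
        System.out.println(wrapped.getCause().getMessage());
        // prints: InvalidOperationException stand-in
    }
}
```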

On trunk, similar statements now exist only in the createDefaultDB and
get_database methods.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3056) Ability to bulk update location field in Db/Table/Partition records

2012-08-28 Thread Shreepadma Venugopalan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443780#comment-13443780
 ] 

Shreepadma Venugopalan commented on HIVE-3056:
--

@Carl: I've addressed your comments in the new patch. It's on Review Board at:
https://reviews.apache.org/r/6650/diff/ Thanks!

> Ability to bulk update location field in Db/Table/Partition records
> ---
>
> Key: HIVE-3056
> URL: https://issues.apache.org/jira/browse/HIVE-3056
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Carl Steinbach
>Assignee: Shreepadma Venugopalan
> Attachments: HIVE-3056.2.patch.txt, HIVE-3056.3.patch.txt, 
> HIVE-3056.4.patch.txt, HIVE-3056.5.patch.txt, HIVE-3056.patch
>
>




[jira] [Updated] (HIVE-3056) Ability to bulk update location field in Db/Table/Partition records

2012-08-28 Thread Shreepadma Venugopalan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shreepadma Venugopalan updated HIVE-3056:
-

Attachment: HIVE-3056.5.patch.txt




[jira] [Commented] (HIVE-2247) ALTER TABLE RENAME PARTITION

2012-08-28 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443778#comment-13443778
 ] 

Harsh J commented on HIVE-2247:
---

Can someone mark a Fix Version for this JIRA please? It is unclear.

> ALTER TABLE RENAME PARTITION
> 
>
> Key: HIVE-2247
> URL: https://issues.apache.org/jira/browse/HIVE-2247
> Project: Hive
>  Issue Type: New Feature
>Reporter: Siying Dong
>Assignee: Weiyan Wang
> Attachments: HIVE-2247.10.patch.txt, HIVE-2247.11.patch.txt, 
> HIVE-2247.3.patch.txt, HIVE-2247.4.patch.txt, HIVE-2247.5.patch.txt, 
> HIVE-2247.6.patch.txt, HIVE-2247.7.patch.txt, HIVE-2247.8.patch.txt, 
> HIVE-2247.9.patch.txt, HIVE-2247.9.patch.txt
>
>
> We need an ALTER TABLE RENAME PARTITION function that is similar to ALTER
> TABLE RENAME.



Re: Review Request: HIVE-3056: Ability to bulk update location field in Db/Table/Partition records

2012-08-28 Thread Shreepadma Venugopalan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6650/#review10790
---



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Done



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Done



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Done



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Moved the printing to HiveMetaTool and the results are printed only if the 
transaction succeeds.



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Done



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Done



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Done



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Done



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Changed the method to return a set of strings. HiveMetaTool prints the root
location. The current implementation uses a HashSet to eliminate duplicates.



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Done



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


done



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Replaced the iterator with a for loop.



metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


Changed the iterator to use a for loop.



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


Done



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


done



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


done



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


done



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


done



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


I'm not sure what you are trying to say here. I've changed the variable
"HAUpgrade" to "updateHDFSRootLoc". We need to add this option to the
cmdLineOptions variable.



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


Replaced the exception with a System.err message.



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


Moved to main().



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


I've removed the run method. I'll keep this in mind while coding :)



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


I've removed the run method and have moved the logic to main. I'll keep this,
i.e., that exceptions rather than error codes should be used, in mind while
coding :)



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


Moved the logic in run() to main().



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


done



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


done



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


I removed the exception and replaced it with a System.err message. I also
print the help option.



metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java


Moved the try/catch block. Changed to use getLocalizedMessage() instead of
getMessage().


- Shreepadma Venugopalan



Re: Review Request: HIVE-3056: Ability to bulk update location field in Db/Table/Partition records

2012-08-28 Thread Shreepadma Venugopalan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/6650/
---

(Updated Aug. 29, 2012, 3:24 a.m.)


Review request for hive and Carl Steinbach.


Changes
---

This revision of the original patch addresses almost all of the review
comments from revision 5. Additionally, the following changes have been made
to the -updateLocation option:

* A new option, -dryRun, has been added. When run with -dryRun, no persistent
changes are made; instead, the current location and the proposed new location
are printed to stdout.
* Both new-loc and old-loc have to be valid URIs. This validation is performed
in the code, and an error is raised if either the new-loc or the old-loc is
not a valid URI. Note that both the host name and scheme fields of the URI
are required, while the port is optional. However, if the old-loc contains a
port, the new-loc should specify a port too. The motivation behind making the
scheme a required field is to prevent an inadvertent update of the location
field when the schemes don't match. Note that the primary motivation for this
tool at this point is to update the location field so that Hive survives a
non-HA to HA upgrade and vice versa.
* The test case has been fixed to remove hard-coded paths in the locationURI
field of MDatabase.
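The URI rules just described can be sketched with java.net.URI. This is an illustrative check only; LocValidator, isValidLoc, and portsConsistent are hypothetical names, not the patch's actual code:

```java
import java.net.URI;
import java.net.URISyntaxException;

// Illustrative validation sketch (hypothetical names, not HiveMetaTool code):
// scheme and host are required, port is optional, but if the old location
// has a port the new location must specify one too.
public class LocValidator {
    static boolean isValidLoc(String loc) {
        try {
            URI u = new URI(loc);
            return u.getScheme() != null && u.getHost() != null;
        } catch (URISyntaxException e) {
            return false;
        }
    }

    static boolean portsConsistent(String oldLoc, String newLoc)
            throws URISyntaxException {
        URI o = new URI(oldLoc);
        URI n = new URI(newLoc);
        // URI.getPort() returns -1 when no port is present.
        return o.getPort() == -1 || n.getPort() != -1;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isValidLoc("hdfs://nn1:8020/warehouse")); // true
        System.out.println(isValidLoc("/warehouse"));                // false
        System.out.println(portsConsistent("hdfs://nn1:8020/w",
                                           "hdfs://ha-nn/w"));       // false
    }
}
```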

Additionally, the updateLocation option of the metatool has been manually
tested on a real cluster. All of the locations in the metastore correctly
point to the HDFS NN, and running metatool -updateLocation with the new and
old locations correctly updates the location of all relevant records to point
to the new HA NN.


Description
---

This patch implements the Hive metatool, which:

* lets admins perform an HA upgrade by patching the location of the NN in
Hive's metastore
* allows JDOQL to be executed against the metastore.


This addresses bug HIVE-3056.
https://issues.apache.org/jira/browse/HIVE-3056


Diffs (updated)
-

  bin/ext/metatool.sh PRE-CREATION 
  bin/metatool PRE-CREATION 
  build.xml 6712af9 
  eclipse-templates/TestHiveMetaTool.launchtemplate PRE-CREATION 
  metastore/ivy.xml 3011d2f 
  metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java 045b550 
  metastore/src/java/org/apache/hadoop/hive/metastore/tools/HiveMetaTool.java 
PRE-CREATION 
  metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaTool.java 
PRE-CREATION 

Diff: https://reviews.apache.org/r/6650/diff/


Testing
---

A new JUnit test - TestHiveMetaTool - has been added to test the various 
metatool options.


Thanks,

Shreepadma Venugopalan



[jira] [Commented] (HIVE-3068) Add ability to export table metadata as JSON on table drop

2012-08-28 Thread Andrew Chalfant (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443768#comment-13443768
 ] 

Andrew Chalfant commented on HIVE-3068:
---

Added, thanks Edward!

> Add ability to export table metadata as JSON on table drop
> --
>
> Key: HIVE-3068
> URL: https://issues.apache.org/jira/browse/HIVE-3068
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Serializers/Deserializers
>Reporter: Andrew Chalfant
>Assignee: Andrew Chalfant
>Priority: Minor
>  Labels: features, newbie
> Attachments: HIVE-3068.2.patch.txt
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When a table is dropped, the contents go to the users trash but the metadata 
> is lost. It would be super neat to be able to save the metadata as well so 
> that tables could be trivially re-instantiated via thrift.



[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

2012-08-28 Thread Zhenxiao Luo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenxiao Luo updated HIVE-3413:
---

Status: Patch Available  (was: Open)

> Fix pdk.PluginTest on hadoop23
> --
>
> Key: HIVE-3413
> URL: https://issues.apache.org/jira/browse/HIVE-3413
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Zhenxiao Luo
>Assignee: Zhenxiao Luo
> Attachments: HIVE-3413.1.patch.txt
>
>
> When running Hive test on Hadoop0.23, pdk.PluginTest is failing:
> test:
> [junit] Running org.apache.hive.pdk.PluginTest
> [junit] Hive history 
> file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
> [junit] Total MapReduce jobs = 1
> [junit] Launching Job 1 out of 1
> [junit] Number of reduce tasks determined at compile time: 1
> [junit] In order to change the average load for a reducer (in bytes):
> [junit]   set hive.exec.reducers.bytes.per.reducer=<number>
> [junit] In order to limit the maximum number of reducers:
> [junit]   set hive.exec.reducers.max=<number>
> [junit] In order to set a constant number of reducers:
> [junit]   set mapred.reduce.tasks=<number>
> [junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is 
> deprecated. Please use org.apache.hadoop.log.metrics.EventCounter in all the 
> log4j.properties files.
> [junit] Execution log at: 
> /tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
> [junit] java.io.IOException: Cannot initialize Cluster. Please check your 
> configuration for mapreduce.framework.name and the correspond server 
> addresses.
> [junit] at 
> org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
> [junit] at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
> [junit] at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
> [junit] at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
> [junit] at 
> org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
> [junit] at 
> org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
> [junit] at 
> org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
> [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [junit] at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> [junit] at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [junit] at java.lang.reflect.Method.invoke(Method.java:616)
> [junit] at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
> [junit] Job Submission failed with exception 'java.io.IOException(Cannot 
> initialize Cluster. Please check your configuration for 
> mapreduce.framework.name and the correspond server addresses.)'
> [junit] Execution failed with exit status: 1
> [junit] Obtaining error information
> [junit]
> [junit] Task failed!
> [junit] Task ID:
> [junit]   Stage-1
> [junit]
> [junit] Logs:
> [junit]
> [junit] /tmp/cloudera/hive.log
> [junit] FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.MapRedTask
> [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:
> Testsuite: org.apache.hive.pdk.PluginTest
> Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
> - Standard Error -
> GLOBAL SETUP:  Copying file: 
> file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
> Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
> Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
> org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
> Hive history 
> file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
> GLOBAL TEARDOWN:
> Hive history 
> file=/tmp/cloudera/hive_job_log_cloudera_201208281845_25225.txt
> OK
> Time taken: 6.874 seconds
> OK
> Time taken: 0.512 seconds
> -  ---
> Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris 
> took 4.428 sec
> FAILED
> expected:<[23]> but was:<[
> Hive history 
> file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> Number of reduce tasks determined at compile time: 1
> In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=
> In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=
> In order to set a constant number of reducers:
>   set mapred.reduce.tasks=
> WARNING: org.apache.hadoop.metrics.jvm.EventCounte

[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

2012-08-28 Thread Zhenxiao Luo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443765#comment-13443765
 ] 

Zhenxiao Luo commented on HIVE-3413:


A quick note, to build Hive on hadoop23:
$ ant very-clean package -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 
-Dhadoop.mr.rev=23

And run the tests:
$ ant test -Dhadoop.version=0.23.1 -Dhadoop-0.23.version=0.23.1 
-Dhadoop.mr.rev=23

> Fix pdk.PluginTest on hadoop23
> --
>
> Key: HIVE-3413
> URL: https://issues.apache.org/jira/browse/HIVE-3413
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Zhenxiao Luo
>Assignee: Zhenxiao Luo
> Attachments: HIVE-3413.1.patch.txt
>
>

[jira] [Updated] (HIVE-3413) Fix pdk.PluginTest on hadoop23

2012-08-28 Thread Zhenxiao Luo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenxiao Luo updated HIVE-3413:
---

Attachment: HIVE-3413.1.patch.txt

> Fix pdk.PluginTest on hadoop23
> --
>
> Key: HIVE-3413
> URL: https://issues.apache.org/jira/browse/HIVE-3413
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Zhenxiao Luo
>Assignee: Zhenxiao Luo
> Attachments: HIVE-3413.1.patch.txt
>
>

[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

2012-08-28 Thread Zhenxiao Luo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443764#comment-13443764
 ] 

Zhenxiao Luo commented on HIVE-3413:


review request submitted at:
https://reviews.facebook.net/D5001

> Fix pdk.PluginTest on hadoop23
> --
>
> Key: HIVE-3413
> URL: https://issues.apache.org/jira/browse/HIVE-3413
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.9.0
>Reporter: Zhenxiao Luo
>Assignee: Zhenxiao Luo
>

[jira] [Commented] (HIVE-3413) Fix pdk.PluginTest on hadoop23

2012-08-28 Thread Zhenxiao Luo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443753#comment-13443753
 ] 

Zhenxiao Luo commented on HIVE-3413:


There are two problems behind this bug:

#1. Missing dependency:
The compile and test classpath in pdk/scripts/build-plugin.xml is based on 
build/ivy/lib/default, and the following dependencies are missing when building 
hive-exec*.jar:

hadoop-mapreduce-client-jobclient
hadoop-minicluster

These dependencies should be added to ql/ivy.xml, which is the place for hive-exec 
dependencies.

Note that the hadoop-mapreduce-client-jobclient dependency should be updated by 
just changing the jar placement: a jar put in build/ivy/lib/test/ would 
not be included in the pdk PluginTest classpath.
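As a hedged sketch only (the rev property names and omission of any conf mappings are assumptions here, not the actual patch), the ql/ivy.xml additions might look like:

```xml
<!-- Hypothetical: the property names below may not match ql/ivy.xml conventions -->
<dependency org="org.apache.hadoop" name="hadoop-mapreduce-client-jobclient"
            rev="${hadoop-0.23.version}" transitive="false"/>
<dependency org="org.apache.hadoop" name="hadoop-minicluster"
            rev="${hadoop-0.23.version}" transitive="false"/>
```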

#2. After fixing #1, the following log4j warning message appears in the output 
stream, which fails the test case (pdk PluginTest diffs the expected output against 
the output stream):

test:
[junit] Running org.apache.hive.pdk.PluginTest
[junit] 2012-08-28 19:05:20,679 WARN  [main] conf.Configuration 
(Configuration.java:loadProperty(1621)) - 
file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an
 attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
[junit] 2012-08-28 19:05:20,680 WARN  [main] conf.Configuration 
(Configuration.java:loadProperty(1621)) - 
file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an
 attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
[junit] 2]3>)
[junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 42.318 sec


And the details in: ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt

Testsuite: org.apache.hive.pdk.PluginTest
Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 42.318 sec
- Standard Error -
GLOBAL SETUP:  Copying file: 
file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281905_427044653.txt
GLOBAL TEARDOWN:
Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281905_95794698.txt
OK
Time taken: 6.585 seconds
OK
Time taken: 0.415 seconds
-  ---

Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 
9.435 sec
FAILED
expected:<2[]3> but was:<2[012-08-28 19:05:20,464 WARN  [main] conf.HiveConf 
(HiveConf.java:<clinit>(75)) - hive-site.xml not found on CLASSPATH
2012-08-28 19:05:20,679 WARN  [main] conf.Configuration 
(Configuration.java:loadProperty(1621)) - 
file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an
 attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2012-08-28 19:05:20,680 WARN  [main] conf.Configuration 
(Configuration.java:loadProperty(1621)) - 
file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an
 attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
2]3>
junit.framework.ComparisonFailure: expected:<2[]3> but was:<2[012-08-28 
19:05:20,464 WARN  [main] conf.HiveConf (HiveConf.java:<clinit>(75)) - 
hive-site.xml not found on CLASSPATH
2012-08-28 19:05:20,679 WARN  [main] conf.Configuration 
(Configuration.java:loadProperty(1621)) - 
file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an
 attempt to override final parameter: 
mapreduce.job.end-notification.max.retry.interval;  Ignoring.
2012-08-28 19:05:20,680 WARN  [main] conf.Configuration 
(Configuration.java:loadProperty(1621)) - 
file:/tmp/cloudera/hive_2012-08-28_19-05-17_531_4347419252405007581/-local-10002/jobconf.xml:an
 attempt to override final parameter: 
mapreduce.job.end-notification.max.attempts;  Ignoring.
2]3>
at org.apache.hive.pdk.PluginTest.runTest(PluginTest.java:59)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)
at junit.extensions.TestDecorator.basicRun(TestDecorator.java:24)
at junit.extensions.TestSetup$1.protect(TestSetup.java:23)
at junit.extensions.TestSetup.run(TestSetup.java:27)

This warning is printed to the console because it happens before Hive configures 
its log4j (which happens in HiveConf.java static initialization), and Hadoop's 
default log4j configuration is INFO,console. This does not happen on previous 
branches; on hadoop 0.23 the code execution path changed, and these warnings 
appear.
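The failure mode described above — a diff-based test polluted by early WARN lines — can be illustrated with a minimal, hypothetical normalizer. This is not the actual fix, and a real version would also need to handle the multi-line continuation of each warning; it only shows why stripping logging noise before comparison makes the diff stable:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class OutputFilter {
    // Drop log4j-style "YYYY-MM-DD ... WARN ..." lines from captured test
    // output so that a diff against the expected output ignores logging noise.
    static String stripLogNoise(String captured) {
        return Arrays.stream(captured.split("\n"))
                .filter(line -> !line.matches("\\d{4}-\\d{2}-\\d{2} .*WARN.*"))
                .collect(Collectors.joining("\n"));
    }

    public static void main(String[] args) {
        String captured =
            "2012-08-28 19:05:20,679 WARN  [main] conf.Configuration\n23";
        System.out.println(stripLogNoise(captured));  // prints "23"
    }
}
```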

[jira] [Created] (HIVE-3413) Fix pdk.PluginTest on hadoop23

2012-08-28 Thread Zhenxiao Luo (JIRA)
Zhenxiao Luo created HIVE-3413:
--

 Summary: Fix pdk.PluginTest on hadoop23
 Key: HIVE-3413
 URL: https://issues.apache.org/jira/browse/HIVE-3413
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.9.0
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo


When running Hive test on Hadoop0.23, pdk.PluginTest is failing:

test:
[junit] Running org.apache.hive.pdk.PluginTest
[junit] Hive history 
file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
[junit] Total MapReduce jobs = 1
[junit] Launching Job 1 out of 1
[junit] Number of reduce tasks determined at compile time: 1
[junit] In order to change the average load for a reducer (in bytes):
[junit]   set hive.exec.reducers.bytes.per.reducer=<number>
[junit] In order to limit the maximum number of reducers:
[junit]   set hive.exec.reducers.max=<number>
[junit] In order to set a constant number of reducers:
[junit]   set mapred.reduce.tasks=<number>
[junit] WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. 
Please use org.apache.hadoop.log.metrics.EventCounter in all the 
log4j.properties files.
[junit] Execution log at: 
/tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
[junit] java.io.IOException: Cannot initialize Cluster. Please check your 
configuration for mapreduce.framework.name and the correspond server addresses.
[junit] at 
org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)
[junit] at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:85)
[junit] at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:78)
[junit] at org.apache.hadoop.mapred.JobClient.init(JobClient.java:487)
[junit] at org.apache.hadoop.mapred.JobClient.<init>(JobClient.java:466)
[junit] at 
org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:424)
[junit] at 
org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:688)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[junit] at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit] at java.lang.reflect.Method.invoke(Method.java:616)
[junit] at org.apache.hadoop.util.RunJar.main(RunJar.java:200)
[junit] Job Submission failed with exception 'java.io.IOException(Cannot 
initialize Cluster. Please check your configuration for 
mapreduce.framework.name and the correspond server addresses.)'
[junit] Execution failed with exit status: 1
[junit] Obtaining error information
[junit]
[junit] Task failed!
[junit] Task ID:
[junit]   Stage-1
[junit]
[junit] Logs:
[junit]
[junit] /tmp/cloudera/hive.log
[junit] FAILED: Execution Error, return code 1 from 
org.apache.hadoop.hive.ql.exec.MapRedTask]>)
[junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec

With details in ./build/builtins/TEST-org.apache.hive.pdk.PluginTest.txt:


Testsuite: org.apache.hive.pdk.PluginTest
Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 33.9 sec
- Standard Error -
GLOBAL SETUP:  Copying file: 
file:/home/cloudera/Code/hive2/builtins/test/onerow.txt
Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/onerow
Copying file: file:/home/cloudera/Code/hive2/builtins/test/iris.txt
Deleted /home/cloudera/Code/hive2/build/builtins/warehouse/iris
org.apache.hive.builtins.UDAFUnionMap TEARDOWN:
Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_840355011.txt
GLOBAL TEARDOWN:
Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_25225.txt
OK
Time taken: 6.874 seconds
OK
Time taken: 0.512 seconds
-  ---

Testcase: SELECT size(UNION_MAP(MAP(sepal_width, sepal_length))) FROM iris took 
4.428 sec
FAILED
expected:<[23]> but was:<[
Hive history file=/tmp/cloudera/hive_job_log_cloudera_201208281845_172375530.txt
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
WARNING: org.apache.hadoop.metrics.jvm.EventCounter is deprecated. Please use 
org.apache.hadoop.log.metrics.EventCounter in all the log4j.properties files.
Execution log at: 
/tmp/cloudera/cloudera_20120828184545_6deeb166-7dd4-40d3-9ff7-c5d5277aee39.log
java.io.IOException: Cannot initialize Cluster. Please check your configuration 
for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:123)

[jira] [Updated] (HIVE-3388) Improve Performance of UDF PERCENTILE_APPROX()

2012-08-28 Thread Rongrong Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rongrong Zhong updated HIVE-3388:
-

Status: Patch Available  (was: Open)

> Improve Performance of UDF PERCENTILE_APPROX()
> --
>
> Key: HIVE-3388
> URL: https://issues.apache.org/jira/browse/HIVE-3388
> Project: Hive
>  Issue Type: Task
>Reporter: Rongrong Zhong
>Assignee: Rongrong Zhong
>Priority: Minor
> Attachments: HIVE-3388.1.patch.txt
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3388) Improve Performance of UDF PERCENTILE_APPROX()

2012-08-28 Thread Rongrong Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rongrong Zhong updated HIVE-3388:
-

Attachment: HIVE-3388.1.patch.txt

https://reviews.facebook.net/D4959

> Improve Performance of UDF PERCENTILE_APPROX()
> --
>
> Key: HIVE-3388
> URL: https://issues.apache.org/jira/browse/HIVE-3388
> Project: Hive
>  Issue Type: Task
>Reporter: Rongrong Zhong
>Assignee: Rongrong Zhong
>Priority: Minor
> Attachments: HIVE-3388.1.patch.txt
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3412) Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 2.2.0-alpha

2012-08-28 Thread Zhenxiao Luo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenxiao Luo updated HIVE-3412:
---

Attachment: HIVE-3412.1.patch.txt

> Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 2.2.0-alpha
> -
>
> Key: HIVE-3412
> URL: https://issues.apache.org/jira/browse/HIVE-3412
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.9.0
>Reporter: Zhenxiao Luo
>Assignee: Zhenxiao Luo
> Fix For: 0.10.0
>
> Attachments: HIVE-3412.1.patch.txt
>
>
> TestCliDriver.repair fails on the following Hadoop versions:
> 0.23.3, 3.0.0, 2.2.0-alpha
> repair.q fails with "dfs -mkdir":
> [junit] mkdir: `../build/ql/test/data/warehouse/repairtable/p1=a/p2=a': No 
> such file or directory
> The problem is, after fixing HADOOP-8551, which changes the hdfs Shell syntax 
> for mkdir:
> https://issues.apache.org/jira/browse/HADOOP-8551
> all "dfs -mkdir" commands should provide "-p" in order to execute without 
> error.
> This is an intentional change in HDFS. And HADOOP-8551 will be included in 
> 0.23.3, 3.0.0, 2.2.0-alpha versions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3412) Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 2.2.0-alpha

2012-08-28 Thread Zhenxiao Luo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443694#comment-13443694
 ] 

Zhenxiao Luo commented on HIVE-3412:


Review Request submitted at:
https://reviews.facebook.net/D4989

> Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 2.2.0-alpha
> -
>
> Key: HIVE-3412
> URL: https://issues.apache.org/jira/browse/HIVE-3412
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.9.0
>Reporter: Zhenxiao Luo
>Assignee: Zhenxiao Luo
> Fix For: 0.10.0
>
> Attachments: HIVE-3412.1.patch.txt
>
>
> TestCliDriver.repair fails on the following Hadoop versions:
> 0.23.3, 3.0.0, 2.2.0-alpha
> repair.q fails with "dfs -mkdir":
> [junit] mkdir: `../build/ql/test/data/warehouse/repairtable/p1=a/p2=a': No 
> such file or directory
> The problem is, after fixing HADOOP-8551, which changes the hdfs Shell syntax 
> for mkdir:
> https://issues.apache.org/jira/browse/HADOOP-8551
> all "dfs -mkdir" commands should provide "-p" in order to execute without 
> error.
> This is an intentional change in HDFS. And HADOOP-8551 will be included in 
> 0.23.3, 3.0.0, 2.2.0-alpha versions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3412) Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 2.2.0-alpha

2012-08-28 Thread Zhenxiao Luo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhenxiao Luo updated HIVE-3412:
---

Status: Patch Available  (was: Open)

> Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 2.2.0-alpha
> -
>
> Key: HIVE-3412
> URL: https://issues.apache.org/jira/browse/HIVE-3412
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.9.0
>Reporter: Zhenxiao Luo
>Assignee: Zhenxiao Luo
> Fix For: 0.10.0
>
> Attachments: HIVE-3412.1.patch.txt
>
>
> TestCliDriver.repair fails on the following Hadoop versions:
> 0.23.3, 3.0.0, 2.2.0-alpha
> repair.q fails with "dfs -mkdir":
> [junit] mkdir: `../build/ql/test/data/warehouse/repairtable/p1=a/p2=a': No 
> such file or directory
> The problem is, after fixing HADOOP-8551, which changes the hdfs Shell syntax 
> for mkdir:
> https://issues.apache.org/jira/browse/HADOOP-8551
> all "dfs -mkdir" commands should provide "-p" in order to execute without 
> error.
> This is an intentional change in HDFS. And HADOOP-8551 will be included in 
> 0.23.3, 3.0.0, 2.2.0-alpha versions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-3412) Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 2.2.0-alpha

2012-08-28 Thread Zhenxiao Luo (JIRA)
Zhenxiao Luo created HIVE-3412:
--

 Summary: Fix TestCliDriver.repair on Hadoop 0.23.3, 3.0.0, and 
2.2.0-alpha
 Key: HIVE-3412
 URL: https://issues.apache.org/jira/browse/HIVE-3412
 Project: Hive
  Issue Type: Bug
  Components: Tests
Affects Versions: 0.9.0
Reporter: Zhenxiao Luo
Assignee: Zhenxiao Luo
 Fix For: 0.10.0


TestCliDriver.repair fails on the following Hadoop versions:
0.23.3, 3.0.0, 2.2.0-alpha

repair.q fails with "dfs -mkdir":
[junit] mkdir: `../build/ql/test/data/warehouse/repairtable/p1=a/p2=a': No such 
file or directory

The problem is that after HADOOP-8551, which changed the HDFS shell syntax 
for mkdir:
https://issues.apache.org/jira/browse/HADOOP-8551

all "dfs -mkdir" commands must pass "-p" in order to execute without error.

This is an intentional change in HDFS. And HADOOP-8551 will be included in 
0.23.3, 3.0.0, 2.2.0-alpha versions.
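The semantic change is analogous to java.nio.file's createDirectory versus createDirectories; a small stdlib sketch (the paths here are illustrative only):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirSemantics {
    // Post-HADOOP-8551, plain "dfs -mkdir" behaves like Files.createDirectory:
    // it fails when an intermediate directory is missing.
    static boolean plainMkdir(Path dir) {
        try {
            Files.createDirectory(dir);
            return true;
        } catch (IOException e) {  // NoSuchFileException when the parent is absent
            return false;
        }
    }

    // "dfs -mkdir -p" behaves like Files.createDirectories: parents get created.
    static boolean mkdirWithParents(Path dir) {
        try {
            Files.createDirectories(dir);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path nested = Files.createTempDirectory("repairtable")
                           .resolve("p1=a").resolve("p2=a");
        System.out.println(plainMkdir(nested));        // prints "false"
        System.out.println(mkdirWithParents(nested));  // prints "true"
    }
}
```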


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-h0.21 - Build # 1634 - Failure

2012-08-28 Thread Apache Jenkins Server
Changes for Build #1634



1 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1

Error Message:
Unexpected exception See build/ql/tmp/hive.log, or try "ant test ... 
-Dtest.silent=false" to get more logs.

Stack Trace:
junit.framework.AssertionFailedError: Unexpected exception
See build/ql/tmp/hive.log, or try "ant test ... -Dtest.silent=false" to get 
more logs.
at junit.framework.Assert.fail(Assert.java:47)
at 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1(TestNegativeCliDriver.java:11278)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:232)
at junit.framework.TestSuite.run(TestSuite.java:227)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)




The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1634)

Status: Failure

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1634/ to 
view the results.

Jenkins build is back to normal : Hive-0.9.1-SNAPSHOT-h0.21 #119

2012-08-28 Thread Apache Jenkins Server
See 



[jira] [Work stopped] (HIVE-3388) Improve Performance of UDF PERCENTILE_APPROX()

2012-08-28 Thread Rongrong Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-3388 stopped by Rongrong Zhong.

> Improve Performance of UDF PERCENTILE_APPROX()
> --
>
> Key: HIVE-3388
> URL: https://issues.apache.org/jira/browse/HIVE-3388
> Project: Hive
>  Issue Type: Task
>Reporter: Rongrong Zhong
>Assignee: Rongrong Zhong
>Priority: Minor
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Work started] (HIVE-3388) Improve Performance of UDF PERCENTILE_APPROX()

2012-08-28 Thread Rongrong Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-3388 started by Rongrong Zhong.

> Improve Performance of UDF PERCENTILE_APPROX()
> --
>
> Key: HIVE-3388
> URL: https://issues.apache.org/jira/browse/HIVE-3388
> Project: Hive
>  Issue Type: Task
>Reporter: Rongrong Zhong
>Assignee: Rongrong Zhong
>Priority: Minor
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3388) Improve Performance of UDF PERCENTILE_APPROX()

2012-08-28 Thread Rongrong Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rongrong Zhong updated HIVE-3388:
-

Status: Patch Available  (was: Open)

> Improve Performance of UDF PERCENTILE_APPROX()
> --
>
> Key: HIVE-3388
> URL: https://issues.apache.org/jira/browse/HIVE-3388
> Project: Hive
>  Issue Type: Task
>Reporter: Rongrong Zhong
>Assignee: Rongrong Zhong
>Priority: Minor
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-3388) Improve Performance of UDF PERCENTILE_APPROX()

2012-08-28 Thread Rongrong Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rongrong Zhong updated HIVE-3388:
-

Status: Open  (was: Patch Available)

> Improve Performance of UDF PERCENTILE_APPROX()
> --
>
> Key: HIVE-3388
> URL: https://issues.apache.org/jira/browse/HIVE-3388
> Project: Hive
>  Issue Type: Task
>Reporter: Rongrong Zhong
>Assignee: Rongrong Zhong
>Priority: Minor
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hive-0.9.1-SNAPSHOT-h0.21-keepgoing=false #119

2012-08-28 Thread Apache Jenkins Server
See 


--
[...truncated 10243 lines...]
 [echo] Project: odbc
 [copy] Warning: 

 does not exist.

ivy-resolve-test:
 [echo] Project: odbc

ivy-retrieve-test:
 [echo] Project: odbc

compile-test:
 [echo] Project: odbc

create-dirs:
 [echo] Project: serde
 [copy] Warning: 

 does not exist.

init:
 [echo] Project: serde

ivy-init-settings:
 [echo] Project: serde

ivy-resolve:
 [echo] Project: serde
[ivy:resolve] :: loading settings :: file = 

[ivy:report] Processing 

 to 


ivy-retrieve:
 [echo] Project: serde

dynamic-serde:

compile:
 [echo] Project: serde

ivy-resolve-test:
 [echo] Project: serde

ivy-retrieve-test:
 [echo] Project: serde

compile-test:
 [echo] Project: serde
[javac] Compiling 26 source files to 

[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

create-dirs:
 [echo] Project: service
 [copy] Warning: 

 does not exist.

init:
 [echo] Project: service

ivy-init-settings:
 [echo] Project: service

ivy-resolve:
 [echo] Project: service
[ivy:resolve] :: loading settings :: file = 

[ivy:report] Processing 

 to 


ivy-retrieve:
 [echo] Project: service

compile:
 [echo] Project: service

ivy-resolve-test:
 [echo] Project: service

ivy-retrieve-test:
 [echo] Project: service

compile-test:
 [echo] Project: service
[javac] Compiling 2 source files to 


test:
 [echo] Project: hive

test-shims:
 [echo] Project: hive

test-conditions:
 [echo] Project: shims

gen-test:
 [echo] Project: shims

create-dirs:
 [echo] Project: shims
 [copy] Warning: 

 does not exist.

init:
 [echo] Project: shims

ivy-init-settings:
 [echo] Project: shims

ivy-resolve:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 

[ivy:report] Processing 

 to 


ivy-retrieve:
 [echo] Project: shims

compile:
 [echo] Project: shims
 [echo] Building shims 0.20

build_shims:
 [echo] Project: shims
 [echo] Compiling 

 against hadoop 0.20.2 
(

ivy-init-settings:
 [echo] Project: shims

ivy-resolve-hadoop-shim:
 [echo] Project: shims
[ivy:resolve] :: loading settings :: file = 


ivy-retrieve-hadoop-shim:
 [echo] Project: shims
 [echo] Building shims 0.20S

build_shims:
 [echo] Project: shims
 [echo] Compiling 


[jira] [Commented] (HIVE-3408) A race condition is caused within QueryPlan class

2012-08-28 Thread Kazuki Ohta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443239#comment-13443239
 ] 

Kazuki Ohta commented on HIVE-3408:
---

Found that another HashSet within the same class could cause the same race condition.

> A race condition is caused within QueryPlan class
> -
>
> Key: HIVE-3408
> URL: https://issues.apache.org/jira/browse/HIVE-3408
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.7.1
> Environment: * Java 1.6.0_21, Java HotSpot(TM) 64-Bit Server VM
> * Ubuntu 10.04
>Reporter: Muga Nishizawa
>  Labels: Concurrency
> Attachments: race-condition-of-done-field.patch
>
>
> Hive's threads get stuck in HashMap.getEntry(..), which is used within 
> QueryPlan#extractCounters() and QueryPlan#updateCountersInQueryPlan().  It 
> seems that a race condition on a HashSet object occurs when 
> extractCounters() and updateCountersInQueryPlan() are executed concurrently.  
> I hit the problem with Hive 0.7.1, but I think it also occurs with 0.8.1.
> The problem is reported by several persons on mailing list.
> http://mail-archives.apache.org/mod_mbox/hive-dev/201201.mbox/%3CCAKTRiE+3x31FDy+3F0c+jZSXQrhxBgT4DOyfZddm7sdX+cu=z...@mail.gmail.com%3E
> http://mail-archives.apache.org/mod_mbox/hive-user/201202.mbox/%3cfc28ccd9-9c75-4f8d-b272-3d50f663a...@gmail.com%3E
> The following is a part of my thread dump.
> {quote}
> "Thread-1091" prio=10 tid=0x7fd17112b000 nid=0x1100 runnable 
> [0x7fd175f6]
>java.lang.Thread.State: RUNNABLE
>at java.util.HashMap.getEntry(HashMap.java:347)
>at java.util.HashMap.containsKey(HashMap.java:335)
>at java.util.HashSet.contains(HashSet.java:184)
>at org.apache.hadoop.hive.ql.QueryPlan.extractCounters(QueryPlan.java:342)
>at org.apache.hadoop.hive.ql.QueryPlan.getQueryPlan(QueryPlan.java:419)
>at org.apache.hadoop.hive.ql.QueryPlan.toString(QueryPlan.java:592)
>at 
> org.apache.hadoop.hive.ql.history.HiveHistory.logPlanProgress(HiveHistory.java:493)
>at org.apache.hadoop.hive.ql.exec.ExecDriver.progress(ExecDriver.java:395)
>at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:686)
>at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:123)
>at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:130)
>at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:47)
> "Thread-1090" prio=10 tid=0x7fd17012f000 nid=0x10ff runnable 
> [0x7fd175152000]
>java.lang.Thread.State: RUNNABLE
>at java.util.HashMap.getEntry(HashMap.java:347)
>at java.util.HashMap.containsKey(HashMap.java:335)
>at java.util.HashSet.contains(HashSet.java:184)
>at 
> org.apache.hadoop.hive.ql.QueryPlan.updateCountersInQueryPlan(QueryPlan.java:297)
>at org.apache.hadoop.hive.ql.QueryPlan.getQueryPlan(QueryPlan.java:420)
>at org.apache.hadoop.hive.ql.QueryPlan.toString(QueryPlan.java:592)
>at 
> org.apache.hadoop.hive.ql.history.HiveHistory.logPlanProgress(HiveHistory.java:493)
>at org.apache.hadoop.hive.ql.exec.ExecDriver.progress(ExecDriver.java:395)
>at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:686)
>at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:123)
>at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:130)
>at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:47)
> {quote}
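The attached patch targets the `done` field; purely as an illustration of the general remedy (not the actual patch), replacing the shared HashSet with a concurrent set makes the concurrent contains()/add() pattern in the thread dump safe:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SafeDoneSet {
    // A plain HashSet read and written by two threads (extractCounters vs.
    // updateCountersInQueryPlan) can corrupt its hash table and spin forever
    // inside HashMap.getEntry(). A concurrent set serializes those accesses.
    private final Set<String> done = ConcurrentHashMap.newKeySet();

    public void markDone(String taskId) { done.add(taskId); }
    public boolean isDone(String taskId) { return done.contains(taskId); }

    public static void main(String[] args) throws InterruptedException {
        SafeDoneSet plan = new SafeDoneSet();
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 10_000; i++) plan.markDone("Stage-" + i);
        });
        Thread reader = new Thread(() -> {
            for (int i = 0; i < 10_000; i++) plan.isDone("Stage-" + i);
        });
        writer.start();
        reader.start();
        writer.join();
        reader.join();
        System.out.println(plan.isDone("Stage-42"));  // prints "true"
    }
}
```

Guarding both call sites with a common lock would also work; the concurrent set just avoids having to audit every access path.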

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Please help : Hive + postgres integration. Drop table query is hanging.

2012-08-28 Thread rohithsharma
Hi

 

I am using PostgreSQL 9.0.7 as the metastore with Hive 0.9.0. I integrated
Postgres with Hive, and a few queries are working fine. I am using

postgresql-9.0-802.jdbc3.jar as the JDBC driver.

 

But the "drop table" query is hanging. The following is the Hive DEBUG log.

 

08/12/28 06:02:09 DEBUG lazy.LazySimpleSerDe: LazySimpleSerDe initialized

with: columnNames=[a] columnTypes=[int] separator=[[B@e4600c0] nullstring=\N

lastColumnTakesRest=false

08/12/28 06:02:09 INFO metastore.HiveMetaStore: 0: get_table : db=default

tbl=erer

08/12/28 06:02:09 INFO metastore.HiveMetaStore: 0: drop_table : db=default

tbl=erer

08/12/28 06:02:09 INFO metastore.HiveMetaStore: 0: get_table : db=default

tbl=erer

*08/12/28 06:02:09 DEBUG metastore.ObjectStore: Executing listMPartitions

 

 

I could not find an existing bug for this in the open source tracker. Is there
any way to overcome this problem? Please help me.

 

 

 

Regards

Rohith Sharma K S



[jira] [Commented] (HIVE-2137) JDBC driver doesn't encode string properly.

2012-08-28 Thread jokang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443045#comment-13443045
 ] 

jokang commented on HIVE-2137:
--

I have run into the same trouble. Thank you for your patches.
But I think I found an easier way to solve the problem: set the encoding 
when compiling the source files.
Here is the command:
javac -encoding UTF-8 XX.java
Alternatively, you can set it in Eclipse.

> JDBC driver doesn't encode string properly.
> ---
>
> Key: HIVE-2137
> URL: https://issues.apache.org/jira/browse/HIVE-2137
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.9.0
>Reporter: Jin Adachi
> Attachments: HIVE-2137.patch
>
>
> The JDBC driver decodes strings using the client encoding. 
> It ignores the server encoding. 
> For example: 
> server = Linux (UTF-8) 
> client = Windows (Shift_JIS, a Japanese charset) 
> This causes character corruption on the client.
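The corruption class described above can be reproduced with plain JDK charsets; a hedged sketch (it assumes Shift_JIS is available in the JRE, as it is in full JDK distributions):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        String original = "日本語";  // value stored as UTF-8 on the server side

        // Bytes as they arrive over the wire from a UTF-8 server.
        byte[] wire = original.getBytes(StandardCharsets.UTF_8);

        // Bug pattern: decoding with the client's platform charset (Shift_JIS
        // on a Japanese Windows client) corrupts the characters.
        String corrupted = new String(wire, Charset.forName("Shift_JIS"));

        // Fix pattern: decode with the server's charset explicitly.
        String correct = new String(wire, StandardCharsets.UTF_8);

        System.out.println(original.equals(correct));    // prints "true"
        System.out.println(original.equals(corrupted));  // prints "false"
    }
}
```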

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-2137) JDBC driver doesn't encode string properly.

2012-08-28 Thread Zhou Kang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhou Kang updated HIVE-2137:


Description: 
JDBC driver decode string by client encoding. 
It ignore server encoding. 

For example, 
server = Linux (utf-8) 
client = Windows (shift-jis : it's japanese charset) 
It makes character corruption in the client.

  was:JDBC driver doesn't encode string properly.


> JDBC driver doesn't encode string properly.
> ---
>
> Key: HIVE-2137
> URL: https://issues.apache.org/jira/browse/HIVE-2137
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.9.0
>Reporter: Jin Adachi
> Attachments: HIVE-2137.patch
>
>
> JDBC driver decode string by client encoding. 
> It ignore server encoding. 
> For example, 
> server = Linux (utf-8) 
> client = Windows (shift-jis : it's japanese charset) 
> It makes character corruption in the client.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-2137) JDBC driver doesn't encode string properly.

2012-08-28 Thread Zhou Kang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhou Kang updated HIVE-2137:


Description: (was: ,)

> JDBC driver doesn't encode string properly.
> ---
>
> Key: HIVE-2137
> URL: https://issues.apache.org/jira/browse/HIVE-2137
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.9.0
>Reporter: Jin Adachi
> Attachments: HIVE-2137.patch
>
>




[jira] [Updated] (HIVE-2137) JDBC driver doesn't encode string properly.

2012-08-28 Thread Zhou Kang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhou Kang updated HIVE-2137:


Description: JDBC driver doesn't encode string properly.

> JDBC driver doesn't encode string properly.
> ---
>
> Key: HIVE-2137
> URL: https://issues.apache.org/jira/browse/HIVE-2137
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.9.0
>Reporter: Jin Adachi
> Attachments: HIVE-2137.patch
>
>
> JDBC driver doesn't encode string properly.



[jira] [Updated] (HIVE-2137) JDBC driver doesn't encode string properly.

2012-08-28 Thread Zhou Kang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhou Kang updated HIVE-2137:


Description: ,  (was: The JDBC driver decodes strings using the client encoding,
ignoring the server encoding.

For example:
server = Linux (UTF-8)
client = Windows (Shift_JIS, the Japanese charset)
This corrupts characters on the client.)

> JDBC driver doesn't encode string properly.
> ---
>
> Key: HIVE-2137
> URL: https://issues.apache.org/jira/browse/HIVE-2137
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.9.0
>Reporter: Jin Adachi
> Attachments: HIVE-2137.patch
>
>
> ,
