Build failed in Jenkins: Hive-0.10.0-SNAPSHOT-h0.20.1 #2

2012-12-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hive-0.10.0-SNAPSHOT-h0.20.1/2/

--
[...truncated 39906 lines...]
[junit] Hadoop job information for null: number of mappers: 0; number of 
reducers: 0
[junit] 2012-12-15 00:34:53,414 null map = 100%,  reduce = 100%
[junit] Ended Job = job_local_0001
[junit] Execution completed successfully
[junit] Mapred Local Task Succeeded . Convert the Join into MapJoin
[junit] POSTHOOK: query: select count(1) as cnt from testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/build/service/localscratchdir/hive_2012-12-15_00-34-50_643_132277709670806983/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Copying file: 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/data/files/kv1.txt
[junit] Hive history 
file=/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/build/service/tmp/hive_job_log_jenkins_201212150034_768966977.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: load data local inpath 
'/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/data/files/kv1.txt'
 into table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] Copying data from 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/data/files/kv1.txt
[junit] Loading data to table default.testhivedrivertable
[junit] Table default.testhivedrivertable stats: [num_partitions: 0, 
num_files: 1, num_rows: 0, total_size: 5812, raw_data_size: 0]
[junit] POSTHOOK: query: load data local inpath 
'/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/data/files/kv1.txt'
 into table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: select * from testhivedrivertable limit 10
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/build/service/localscratchdir/hive_2012-12-15_00-34-54_531_5338843468725454461/-mr-1
[junit] POSTHOOK: query: select * from testhivedrivertable limit 10
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/build/service/localscratchdir/hive_2012-12-15_00-34-54_531_5338843468725454461/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/build/service/tmp/hive_job_log_jenkins_201212150034_1852793709.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable

[jira] [Commented] (HIVE-3766) Enable adding hooks to hive meta store init

2012-12-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13532971#comment-13532971
 ] 

Hudson commented on HIVE-3766:
--

Integrated in Hive-trunk-h0.21 #1856 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/1856/])
HIVE-3766. Enable adding hooks to hive meta store init. (Jean Xu via 
kevinwilfong) (Revision 1422146)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1422146
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreInitContext.java
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreInitListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/DummyMetaStoreInitListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestMetaStoreInitListener.java


 Enable adding hooks to hive meta store init
 ---

 Key: HIVE-3766
 URL: https://issues.apache.org/jira/browse/HIVE-3766
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Reporter: Jean Xu
Assignee: Jean Xu
 Attachments: jira3766.txt


 We will enable hooks to be added to init HMSHandler
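 The commit above adds MetaStoreInitListener and MetaStoreInitContext; the real API may differ, but as an illustrative sketch (all names and signatures below are assumptions, not Hive's actual code), such an init hook is essentially an observer notified once the handler finishes initializing:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: the shape of a metastore init hook, loosely
// modeled on the MetaStoreInitListener/MetaStoreInitContext files listed
// in the commit above. Real method names and signatures may differ.
public class InitHooks {
    interface InitListener {
        void onInit(String context);
    }

    private final List<InitListener> listeners = new ArrayList<>();

    public void addListener(InitListener l) {
        listeners.add(l);
    }

    // Called once the handler finishes initializing; notifies every hook.
    public void fireInit(String context) {
        for (InitListener l : listeners) {
            l.onInit(context);
        }
    }

    public static void main(String[] args) {
        InitHooks hooks = new InitHooks();
        hooks.addListener(ctx -> System.out.println("init: " + ctx));
        hooks.fireInit("HMSHandler");
    }
}
```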

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-h0.21 - Build # 1856 - Still Failing

2012-12-15 Thread Apache Jenkins Server
Changes for Build #1854

Changes for Build #1855

Changes for Build #1856
[kevinwilfong] HIVE-3766. Enable adding hooks to hive meta store init. (Jean Xu 
via kevinwilfong)




1 tests failed.
REGRESSION:  
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_stats_aggregator_error_1

Error Message:
Forked Java VM exited abnormally. Please note the time in the report does not 
reflect the time until the VM exit.

Stack Trace:
junit.framework.AssertionFailedError: Forked Java VM exited abnormally. Please 
note the time in the report does not reflect the time until the VM exit.
at 
net.sf.antcontrib.logic.ForTask.doSequentialIteration(ForTask.java:259)
at net.sf.antcontrib.logic.ForTask.doToken(ForTask.java:268)
at net.sf.antcontrib.logic.ForTask.doTheTasks(ForTask.java:324)
at net.sf.antcontrib.logic.ForTask.execute(ForTask.java:244)




The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1856)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1856/ to 
view the results.

[jira] [Created] (HIVE-3807) Hive authorization should use short username when Kerberos authentication

2012-12-15 Thread Kai Zheng (JIRA)
Kai Zheng created HIVE-3807:
---

 Summary: Hive authorization should use short username when 
Kerberos authentication
 Key: HIVE-3807
 URL: https://issues.apache.org/jira/browse/HIVE-3807
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Affects Versions: 0.9.0
Reporter: Kai Zheng


Currently, when the authentication method is Kerberos, Hive authorization uses 
the user's full principal name as the privilege principal; for example, it uses 
j...@example.com instead of john.

It should use the short name instead. The benefits:
1. Consistency. Hadoop, HBase, and other components all use the short name in 
their ACLs and authorization checks, so Hive authorization would interoperate 
with them better.
2. Convenience. It is awkward to type the lengthy Kerberos principal name when 
granting or revoking privileges via the Hive CLI.
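As an illustration of the mapping being asked for (this is a sketch, not Hive's code; Hadoop itself derives short names with configurable auth_to_local rules, and the common default keeps just the primary component before the first '/' or '@'):

```java
// Illustrative sketch only: deriving a "short" user name from a full
// Kerberos principal, e.g. "john@EXAMPLE.COM" -> "john" and
// "hive/host1@EXAMPLE.COM" -> "hive". Hadoop's real mechanism is the
// configurable auth_to_local rule set; this mimics only its default.
public class ShortName {
    public static String shortName(String principal) {
        int cut = principal.length();
        int slash = principal.indexOf('/');
        int at = principal.indexOf('@');
        if (slash >= 0) cut = Math.min(cut, slash);
        if (at >= 0) cut = Math.min(cut, at);
        return principal.substring(0, cut);  // primary component only
    }

    public static void main(String[] args) {
        System.out.println(shortName("john@EXAMPLE.COM"));       // john
        System.out.println(shortName("hive/host1@EXAMPLE.COM")); // hive
    }
}
```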



[jira] [Created] (HIVE-3808) Failed to create table in database due to HDFS permission even authorized to create

2012-12-15 Thread Kai Zheng (JIRA)
Kai Zheng created HIVE-3808:
---

 Summary: Failed to create table in database due to HDFS permission 
even authorized to create
 Key: HIVE-3808
 URL: https://issues.apache.org/jira/browse/HIVE-3808
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Affects Versions: 0.9.0
Reporter: Kai Zheng


The user is already authorized to create tables in the database, but creating 
a table still fails due to HDFS file permissions, even though the Hive 
authorization check passed.



[jira] [Commented] (HIVE-3808) Failed to create table in database due to HDFS permission even authorized to create

2012-12-15 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13533069#comment-13533069
 ] 

Kai Zheng commented on HIVE-3808:
-

This seems to be a known issue or limitation in Hive authorization; I opened 
this anyway for further discussion and reference.

When a Hive user, group, or role is granted the CREATE privilege on a 
database, the principal should automatically be given write permission on the 
corresponding database directory. Today this must be done manually, e.g. by 
running chmod as an extra step.

To resolve this:
1. One approach would be to not allow granting the CREATE privilege to others; 
in HBase, only superusers can create tables.
2. Another would be to enhance the HDFS permission mechanism so that 
additional users and groups can be allowed certain permissions on a file while 
others are still denied.
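The failure mode described above is two independent checks disagreeing. A toy model (all names here are illustrative, not Hive's or HDFS's actual APIs) makes the gap concrete: Hive's grant table says yes, the filesystem's permission set says no, and the second check is the one that actually fails the create:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the two-layer check described above. A GRANT in Hive's
// authorization layer does not touch the HDFS permission on the database
// directory, so a create can pass the first check and fail the second.
public class TwoLayerCheck {
    static final Set<String> hiveCreateGrants = new HashSet<>(); // Hive layer
    static final Set<String> hdfsWritable = new HashSet<>();     // HDFS layer

    static boolean canCreateTable(String user) {
        if (!hiveCreateGrants.contains(user)) {
            return false;                 // Hive authorization check
        }
        return hdfsWritable.contains(user); // HDFS permission, checked later
    }

    public static void main(String[] args) {
        hiveCreateGrants.add("alice");    // GRANT CREATE ... TO USER alice
        // note: no matching chmod/chown on the database directory
        System.out.println(canCreateTable("alice")); // false: HDFS denies
    }
}
```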


 Failed to create table in database due to HDFS permission even authorized to 
 create
 ---

 Key: HIVE-3808
 URL: https://issues.apache.org/jira/browse/HIVE-3808
 Project: Hive
  Issue Type: Improvement
  Components: Authorization
Affects Versions: 0.9.0
Reporter: Kai Zheng

 The user is already authorized to create tables in the database, but creating 
 a table still fails due to HDFS file permissions, even though the Hive 
 authorization check passed.



Hive-trunk-h0.21 - Build # 1857 - Still Failing

2012-12-15 Thread Apache Jenkins Server
Changes for Build #1854

Changes for Build #1855

Changes for Build #1856
[kevinwilfong] HIVE-3766. Enable adding hooks to hive meta store init. (Jean Xu 
via kevinwilfong)


Changes for Build #1857



No tests ran.

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #1857)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/1857/ to 
view the results.

[jira] [Comment Edited] (HIVE-3805) Resolve TODO in TUGIBasedProcessor

2012-12-15 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13533106#comment-13533106
 ] 

Ashutosh Chauhan edited comment on HIVE-3805 at 12/15/12 6:53 PM:
--

If you look at the HiveServer2 implementation over at HIVE-2935, it has an 
implementation of a {{Plain}} SASL server. "Plain" means the SASL server does 
not use Kerberos (or any other authentication mechanism) to authenticate the 
thrift client, while the client still transfers the end-user identity to the 
server; the server simply trusts the client, since the mode is insecure 
anyway. This SASL server is used for the thrift client/server transport in 
HiveServer2. That is a much cleaner approach than the current implementation, 
which is hacky: it makes an extra RPC call to transfer the ugi (introduced in 
HIVE-2616) instead of transferring it at connection-setup time. The current 
approach works, but it is a twisted design and harder to understand. If there 
is interest in wider adoption of transferring the ugi over insecure 
connections between thrift client and server, we should use the HS2 
mechanism. Further, since HiveServer2 already uses it, we would have parity 
between the HS2 client-server transport and the metastore client-server 
transport, and could reuse code between the two instead of maintaining two 
parallel implementations of the same feature.
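To make the "identity at connection setup" point concrete: SASL PLAIN's initial response is the byte string "authzid NUL authcid NUL password", so the end-user name rides along in the handshake itself. The sketch below (not HiveServer2's code; names are illustrative) just parses that message:

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch (not HiveServer2's implementation): the SASL PLAIN
// initial response is "authzid \0 authcid \0 password", so the client can
// carry the end-user identity in-band at connection setup instead of via a
// separate RPC. In the unsecured mode described above, the server simply
// trusts this value rather than verifying the password.
public class PlainResponse {
    // Returns {authzid, authcid, password}.
    public static String[] parse(byte[] response) {
        String s = new String(response, StandardCharsets.UTF_8);
        String[] parts = s.split("\u0000", 3);
        if (parts.length != 3) {
            throw new IllegalArgumentException("malformed PLAIN response");
        }
        return parts;
    }

    public static void main(String[] args) {
        byte[] msg = "\u0000john\u0000secret".getBytes(StandardCharsets.UTF_8);
        String[] p = parse(msg);
        System.out.println("end user: " + p[1]); // trusted, not authenticated
    }
}
```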

 Resolve TODO in TUGIBasedProcessor
 --

 Key: HIVE-3805
 URL: https://issues.apache.org/jira/browse/HIVE-3805
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.11
Reporter: Kevin Wilfong
Assignee: Kevin Wilfong
 Attachments: HIVE-3805.1.patch.txt


 There's a TODO in TUGIBasedProcessor
 // TODO get rid of following reflection after THRIFT-1465 is fixed.
 Now that we have upgraded to Thrift 0.9, THRIFT-1465 is available.
 This will also fix an issue where fb303 counters cannot be collected if the 
 TUGIBasedProcessor is used.



Build failed in Jenkins: Hive-0.10.0-SNAPSHOT-h0.20.1 #3

2012-12-15 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hive-0.10.0-SNAPSHOT-h0.20.1/3/

--
[...truncated 40092 lines...]
[junit] Hadoop job information for null: number of mappers: 0; number of 
reducers: 0
[junit] 2012-12-15 13:30:13,994 null map = 100%,  reduce = 100%
[junit] Ended Job = job_local_0001
[junit] Execution completed successfully
[junit] Mapred Local Task Succeeded . Convert the Join into MapJoin
[junit] POSTHOOK: query: select count(1) as cnt from testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/build/service/localscratchdir/hive_2012-12-15_13-30-10_882_4426311097435573980/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/build/service/tmp/hive_job_log_jenkins_201212151330_332570869.txt
[junit] Copying file: 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/data/files/kv1.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: load data local inpath 
'/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/data/files/kv1.txt'
 into table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] Copying data from 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/data/files/kv1.txt
[junit] Loading data to table default.testhivedrivertable
[junit] Table default.testhivedrivertable stats: [num_partitions: 0, 
num_files: 1, num_rows: 0, total_size: 5812, raw_data_size: 0]
[junit] POSTHOOK: query: load data local inpath 
'/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/data/files/kv1.txt'
 into table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: select * from testhivedrivertable limit 10
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/build/service/localscratchdir/hive_2012-12-15_13-30-15_238_1733294197677396282/-mr-1
[junit] POSTHOOK: query: select * from testhivedrivertable limit 10
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: 
file:/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/build/service/localscratchdir/hive_2012-12-15_13-30-15_238_1733294197677396282/-mr-1
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Input: default@testhivedrivertable
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] Hive history 
file=/x1/jenkins/jenkins-slave/workspace/Hive-0.10.0-SNAPSHOT-h0.20.1/hive/build/service/tmp/hive_job_log_jenkins_201212151330_879507617.txt
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: drop table testhivedrivertable
[junit] POSTHOOK: type: DROPTABLE
[junit] OK
[junit] PREHOOK: query: create table testhivedrivertable (num int)
[junit] PREHOOK: type: DROPTABLE
[junit] POSTHOOK: query: create table testhivedrivertable (num int)
[junit] POSTHOOK: type: DROPTABLE
[junit] POSTHOOK: Output: default@testhivedrivertable
[junit] OK
[junit] PREHOOK: query: drop table testhivedrivertable
[junit] PREHOOK: type: DROPTABLE
[junit] PREHOOK: Input: default@testhivedrivertable
[junit] PREHOOK: Output: default@testhivedrivertable

[jira] [Created] (HIVE-3809) Concurrency issue in RCFile: multiple threads can use the same decompressor

2012-12-15 Thread Mikhail Bautin (JIRA)
Mikhail Bautin created HIVE-3809:


 Summary: Concurrency issue in RCFile: multiple threads can use the 
same decompressor
 Key: HIVE-3809
 URL: https://issues.apache.org/jira/browse/HIVE-3809
 Project: Hive
  Issue Type: Bug
Reporter: Mikhail Bautin
Priority: Critical


RCFile is not thread-safe, even if each reader is only used by one thread as 
intended, because it is possible to return decompressors to the pool multiple 
times by calling close on the reader multiple times. Then, different threads 
can pick up the same decompressor twice from the pool, resulting in 
decompression failures.
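A minimal model of the bug and the obvious fix (names are illustrative, not Hive's actual RCFile API): close() returns the decompressor to a shared pool, so a second close() would hand the same object out to two threads unless the return is guarded so it happens at most once.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the race described above. Without the 'closed' guard, calling
// close() twice pushes the same decompressor into the pool twice, and two
// threads can then check out the same object concurrently.
public class PooledReader {
    private static final Deque<Object> POOL = new ArrayDeque<>();

    private Object decompressor;
    private boolean closed = false;   // guard: return to pool at most once

    public PooledReader() {
        Object d = POOL.poll();
        this.decompressor = (d != null) ? d : new Object();
    }

    public synchronized void close() {
        if (closed) {
            return;                   // second close() is a no-op
        }
        closed = true;
        POOL.push(decompressor);      // returned exactly once
        decompressor = null;
    }

    static int poolSize() {
        return POOL.size();
    }

    public static void main(String[] args) {
        PooledReader r = new PooledReader();
        r.close();
        r.close();                    // would double-return without the guard
        if (poolSize() != 1) {
            throw new AssertionError("pool holds duplicate decompressors");
        }
        System.out.println("pool size: " + poolSize());
    }
}
```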



[jira] [Updated] (HIVE-3809) Concurrency issue in RCFile: multiple threads can use the same decompressor

2012-12-15 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-3809:
--

Attachment: D7419.1.patch

mbautin requested code review of [jira] [HIVE-3809] Concurrency issue in 
RCFile: multiple threads can use the same decompressor.
Reviewers: kevinwilfong, njain, ashutoshc, cwsteinbach, edwardcapriolo, JIRA

  Making sure that decompressors are only returned to the pool once when 
readers are closed in RCFile. Otherwise the same decompressor can end up in the 
pool multiple times, and multiple threads would pick it up and try to use it 
concurrently.

TEST PLAN
  Unit tests

REVISION DETAIL
  https://reviews.facebook.net/D7419

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java

MANAGE HERALD DIFFERENTIAL RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/17709/

To: kevinwilfong, njain, ashutoshc, cwsteinbach, edwardcapriolo, JIRA, mbautin


 Concurrency issue in RCFile: multiple threads can use the same decompressor
 ---

 Key: HIVE-3809
 URL: https://issues.apache.org/jira/browse/HIVE-3809
 Project: Hive
  Issue Type: Bug
Reporter: Mikhail Bautin
Priority: Critical
 Attachments: D7419.1.patch


 RCFile is not thread-safe, even if each reader is only used by one thread as 
 intended, because it is possible to return decompressors to the pool multiple 
 times by calling close on the reader multiple times. Then, different threads 
 can pick up the same decompressor twice from the pool, resulting in 
 decompression failures.



[jira] [Updated] (HIVE-3809) Concurrency issue in RCFile: multiple threads can use the same decompressor

2012-12-15 Thread Mikhail Bautin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Bautin updated HIVE-3809:
-

Attachment: 0001-HIVE-3809-Decompressors-should-only-be-returned-to-t.patch

Attaching a manually generated patch.

 Concurrency issue in RCFile: multiple threads can use the same decompressor
 ---

 Key: HIVE-3809
 URL: https://issues.apache.org/jira/browse/HIVE-3809
 Project: Hive
  Issue Type: Bug
Reporter: Mikhail Bautin
Priority: Critical
 Attachments: 
 0001-HIVE-3809-Decompressors-should-only-be-returned-to-t.patch, D7419.1.patch


 RCFile is not thread-safe, even if each reader is only used by one thread as 
 intended, because it is possible to return decompressors to the pool multiple 
 times by calling close on the reader multiple times. Then, different threads 
 can pick up the same decompressor twice from the pool, resulting in 
 decompression failures.



[jira] [Updated] (HIVE-3778) Add MapJoinDesc.isBucketMapJoin() as part of explain plan

2012-12-15 Thread Gang Tim Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gang Tim Liu updated HIVE-3778:
---

Status: Patch Available  (was: In Progress)

patch is available.

 Add MapJoinDesc.isBucketMapJoin() as part of explain plan
 -

 Key: HIVE-3778
 URL: https://issues.apache.org/jira/browse/HIVE-3778
 Project: Hive
  Issue Type: Bug
Reporter: Gang Tim Liu
Assignee: Gang Tim Liu
Priority: Minor
 Attachments: HIVE-3778.patch.3


 This is follow up of HIVE-3767:
 Add MapJoinDesc.isBucketMapJoin() as part of explain plan



[jira] [Updated] (HIVE-3778) Add MapJoinDesc.isBucketMapJoin() as part of explain plan

2012-12-15 Thread Gang Tim Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gang Tim Liu updated HIVE-3778:
---

Attachment: HIVE-3778.patch.3

 Add MapJoinDesc.isBucketMapJoin() as part of explain plan
 -

 Key: HIVE-3778
 URL: https://issues.apache.org/jira/browse/HIVE-3778
 Project: Hive
  Issue Type: Bug
Reporter: Gang Tim Liu
Assignee: Gang Tim Liu
Priority: Minor
 Attachments: HIVE-3778.patch.3


 This is follow up of HIVE-3767:
 Add MapJoinDesc.isBucketMapJoin() as part of explain plan
