[jira] Created: (HIVE-1899) add a factory method for creating a synchronized wrapper for IMetaStoreClient

2011-01-07 Thread John Sichi (JIRA)
add a factory method for creating a synchronized wrapper for IMetaStoreClient
-

 Key: HIVE-1899
 URL: https://issues.apache.org/jira/browse/HIVE-1899
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.7.0
Reporter: John Sichi
Assignee: John Sichi
 Fix For: 0.7.0
 Attachments: HIVE-1899.1.patch

There are currently some HiveMetaStoreClient multithreading bugs.  This patch 
adds an (optional) synchronized wrapper for IMetaStoreClient using a dynamic 
proxy.  This can be used for thread safety by multithreaded apps until all 
reentrancy bugs are fixed.
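The dynamic-proxy approach described above can be sketched as follows. This is a minimal illustration, not the actual HIVE-1899 patch; the class and method names (SyncProxySketch, newSynchronized) are invented for this example, and the real factory lives on the metastore client itself.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class SyncProxySketch {
    /** Returns a proxy for {@code iface} that serializes every call on one lock. */
    @SuppressWarnings("unchecked")
    public static <T> T newSynchronized(Class<T> iface, final T delegate) {
        final Object lock = new Object();
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args)
                            throws Throwable {
                        synchronized (lock) {
                            try {
                                return method.invoke(delegate, args);
                            } catch (InvocationTargetException e) {
                                // Re-throw the delegate's own exception, not the
                                // reflection wrapper, so callers see the same
                                // exceptions the unwrapped client would throw.
                                throw e.getCause();
                            }
                        }
                    }
                });
    }
}
```

Because the proxy implements the interface, callers need no code changes beyond wrapping the client at construction time; single-threaded users can skip the wrapper and pay no locking cost.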


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Build failed in Hudson: Hive-trunk-h0.20 #472

2011-01-07 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/472/changes

Changes:

[namit] HIVE-1889 add an option (hive.index.compact.file.ignore.hdfs)
to ignore HDFS location stored in index files
(Yongqiang He via namit)

--
[...truncated 15166 lines...]
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket23.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table src
[junit] POSTHOOK: Output: defa...@src
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv3.txt
[junit] Loading data to table src1
[junit] POSTHOOK: Output: defa...@src1
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.seq
[junit] Loading data to table src_sequencefile
[junit] POSTHOOK: Output: defa...@src_sequencefile
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/complex.seq
[junit] Loading data to table src_thrift
[junit] POSTHOOK: Output: defa...@src_thrift
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/json.txt
[junit] Loading data to table src_json
[junit] POSTHOOK: Output: defa...@src_json
[junit] OK
[junit] diff 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/build/ql/test/logs/negative/unknown_table1.q.out
 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/ql/src/test/results/compiler/errors/unknown_table1.q.out
[junit] Done query: unknown_table1.q
[junit] Begin query: unknown_table2.q
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table srcpart partition (ds=2008-04-08, hr=11)
[junit] POSTHOOK: Output: defa...@srcpart@ds=2008-04-08/hr=11
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table srcpart partition (ds=2008-04-08, hr=12)
[junit] POSTHOOK: Output: defa...@srcpart@ds=2008-04-08/hr=12
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table srcpart partition (ds=2008-04-09, hr=11)
[junit] POSTHOOK: Output: defa...@srcpart@ds=2008-04-09/hr=11
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table srcpart partition (ds=2008-04-09, hr=12)
[junit] POSTHOOK: Output: defa...@srcpart@ds=2008-04-09/hr=12
[junit] OK
[junit] POSTHOOK: Output: defa...@srcbucket
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket0.txt
[junit] Loading data to table srcbucket
[junit] POSTHOOK: Output: defa...@srcbucket
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket1.txt
[junit] Loading data to table srcbucket
[junit] POSTHOOK: Output: defa...@srcbucket
[junit] OK
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket20.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket21.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket22.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/srcbucket23.txt
[junit] Loading data to table srcbucket2
[junit] POSTHOOK: Output: defa...@srcbucket2
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv1.txt
[junit] Loading data to table src
[junit] POSTHOOK: Output: defa...@src
[junit] OK
[junit] Copying data from 
https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/ws/hive/data/files/kv3.txt
[junit] Loading data to table src1
[junit] POSTHOOK: Output: defa...@src1
[junit] OK
  

[jira] Commented: (HIVE-1899) add a factory method for creating a synchronized wrapper for IMetaStoreClient

2011-01-07 Thread Jeff Hammerbacher (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978685#action_12978685
 ] 

Jeff Hammerbacher commented on HIVE-1899:
-

Hey John,

Could you link this JIRA to the JIRAs for the multithreading bugs? I couldn't 
track them down.

Thanks,
Jeff

 add a factory method for creating a synchronized wrapper for IMetaStoreClient
 -

 Key: HIVE-1899
 URL: https://issues.apache.org/jira/browse/HIVE-1899
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.7.0
Reporter: John Sichi
Assignee: John Sichi
 Fix For: 0.7.0

 Attachments: HIVE-1899.1.patch


 There are currently some HiveMetaStoreClient multithreading bugs.  This patch 
 adds an (optional) synchronized wrapper for IMetaStoreClient using a dynamic 
 proxy.  This can be used for thread safety by multithreaded apps until all 
 reentrancy bugs are fixed.




[jira] Commented: (HIVE-1539) Concurrent metastore threading problem

2011-01-07 Thread Bennie Schut (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978698#action_12978698
 ] 

Bennie Schut commented on HIVE-1539:


Are we getting errors like these on HIVE-1862?:
[junit] Exception: java.lang.RuntimeException: The table 
default__show_idx_full_idx_comment__ is an index table. Please do drop index 
instead.
[junit] org.apache.hadoop.hive.ql.metadata.HiveException: 
java.lang.RuntimeException: The table default__show_idx_full_idx_comment__ is 
an index table. Please do drop index instead.

Or is this something else?

 Concurrent metastore threading problem 
 ---

 Key: HIVE-1539
 URL: https://issues.apache.org/jira/browse/HIVE-1539
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.7.0
Reporter: Bennie Schut
Assignee: Bennie Schut
 Attachments: ClassLoaderResolver.patch, HIVE-1539-1.patch, 
 HIVE-1539.patch, thread_dump_hanging.txt


 When running hive as a service and running a high number of queries 
 concurrently I end up with multiple threads running at 100% cpu without any 
 progress.
 Looking at these threads I notice this thread(484e):
 at 
 org.apache.hadoop.hive.metastore.ObjectStore.getMTable(ObjectStore.java:598)
 But on a different thread(63a2):
 at 
 org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoReplaceField(MStorageDescriptor.java)




[jira] Updated: (HIVE-1829) Fix intermittent failures in TestRemoteMetaStore

2011-01-07 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1829:
-

Summary: Fix intermittent failures in TestRemoteMetaStore  (was: 
TestRemoteMetaStore fails if machine has multiple IPs)

 Fix intermittent failures in TestRemoteMetaStore
 

 Key: HIVE-1829
 URL: https://issues.apache.org/jira/browse/HIVE-1829
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.6.0
Reporter: Edward Capriolo
Assignee: Carl Steinbach

 Notice how "Running metastore!" appears twice.
 {noformat}
 test:
 [junit] Running org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore
 [junit] BR.recoverFromMismatchedToken
 [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 36.697 sec
 [junit] Running org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore
 [junit] Running metastore!
 [junit] Running metastore!
 [junit] org.apache.thrift.transport.TTransportException: Could not create 
 ServerSocket on address 0.0.0.0/0.0.0.0:29083.
 [junit]   at 
 org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:98)
 [junit]   at 
 org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:79)
 [junit]   at 
 org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.<init>(TServerSocketKeepAlive.java:34)
 [junit]   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:2189)
 [junit]   at 
 org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore$RunMS.run(TestRemoteHiveMetaStore.java:35)
 [junit]   at java.lang.Thread.run(Thread.java:619)
 [junit] Running org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore
 [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
 [junit] Test org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore 
 FAILED (crashed)
 {noformat}
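The two "Running metastore!" lines followed by the TTransportException are the signature of two server threads racing for the same port. The effect can be reproduced with a plain java.net.ServerSocket; this sketch is an illustration only (the test itself goes through Thrift's TServerSocket, and PortClashSketch is an invented name):

```java
import java.io.IOException;
import java.net.BindException;
import java.net.ServerSocket;

public class PortClashSketch {
    /** Tries to bind the same port twice; returns true only if both binds succeed. */
    public static boolean canBindTwice() throws IOException {
        // First "metastore" grabs a port successfully (ephemeral here, 29083 in the log).
        try (ServerSocket first = new ServerSocket(0)) {
            // Second instance races for the same port and fails, which is what
            // surfaces as "Could not create ServerSocket on address ...:29083".
            try (ServerSocket second = new ServerSocket(first.getLocalPort())) {
                return true;
            } catch (BindException clash) {
                return false;
            }
        }
    }
}
```

Making the test wait for the first server to come up (or shut down cleanly between runs) removes the race, which is why the failure is intermittent rather than deterministic.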




Review Request: HIVE-1899: Add a factory method for creating a synchronized wrapper for IMetaStoreClient

2011-01-07 Thread Carl Steinbach

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/257/
---

Review request for hive.


Summary
---

Review for CWS.


This addresses bug HIVE-1899.
https://issues.apache.org/jira/browse/HIVE-1899


Diffs
-

  
trunk/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java
 1056212 
  
trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
 1056212 

Diff: https://reviews.apache.org/r/257/diff


Testing
---


Thanks,

Carl



[jira] Commented: (HIVE-1899) add a factory method for creating a synchronized wrapper for IMetaStoreClient

2011-01-07 Thread Carl Steinbach (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978868#action_12978868
 ] 

Carl Steinbach commented on HIVE-1899:
--

https://reviews.apache.org/r/257/

+1. Will commit if tests pass.


 add a factory method for creating a synchronized wrapper for IMetaStoreClient
 -

 Key: HIVE-1899
 URL: https://issues.apache.org/jira/browse/HIVE-1899
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.7.0
Reporter: John Sichi
Assignee: John Sichi
 Fix For: 0.7.0

 Attachments: HIVE-1899.1.patch


 There are currently some HiveMetaStoreClient multithreading bugs.  This patch 
 adds an (optional) synchronized wrapper for IMetaStoreClient using a dynamic 
 proxy.  This can be used for thread safety by multithreaded apps until all 
 reentrancy bugs are fixed.




[jira] Commented: (HIVE-1862) Revive partition filtering in the Hive MetaStore

2011-01-07 Thread Mac Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978874#action_12978874
 ] 

Mac Yang commented on HIVE-1862:


Namit, thanks for posting the test.

I have run the test with partial data (36k out of 302k partitions for the jdo2 
table). With six concurrent processes, the test ran for more than a day without 
error. Will try again after I finish loading the full data set.

 Revive partition filtering in the Hive MetaStore
 

 Key: HIVE-1862
 URL: https://issues.apache.org/jira/browse/HIVE-1862
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.7.0
Reporter: Devaraj Das
 Fix For: 0.7.0

 Attachments: HIVE-1862.1.patch.txt, invoke_runqry.sh, qry, qry-sch.Z, 
 runqry


 HIVE-1853 downgraded the JDO version. This makes the feature of partition 
 filtering in the metastore unusable. This jira is to keep track of the lost 
 feature and discussing approaches to bring it back.




[jira] Commented: (HIVE-1611) Add alternative search-provider to Hive site

2011-01-07 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978881#action_12978881
 ] 

Otis Gospodnetic commented on HIVE-1611:


Edward - I see what you are describing.
Short version: I can't find the file I need to modify to change the searchbox 
behaviour.  Please help.

Explanation:
Because I saw that the search form is defined in a few places in 
author/src/documentation/skins/hadoop-pelt/xslt/html/site-to-xhtml.xsl I 
thought that the way to modify the search stuff would be by editing that 
author/src/documentation/skins/hadoop-pelt/xslt/html/site-to-xhtml.xsl and 
fixing whatever is wrong in it.

However, it looks like the above file is not even used for building the site 
pages.  Is this correct?
I say that because after I made various changes in that file it would never 
take, as if the file was not being used.  I then removed the file completely 
and re-ran ant from the site dir and it happily generated all site pages.  If 
that file were used, I assume I'd get an error.

So which file should I edit to modify the search box behaviour?
I thought I'd try looking for the "Search the site with" string to locate the 
file, because that's the text that's shown in the search box by default...

$ ffg "Search the site with" | egrep -v 'svn|build|publish'
./author/src/documentation/skins/common/translations/CommonMessages_en_US.xml:  
<message key="Search the site with">Search site with</message>
./author/src/documentation/skins/common/translations/CommonMessages_de.xml:  
<message key="Search the site with">Suche auf der Seite mit</message>
./author/src/documentation/skins/common/translations/CommonMessages_fr.xml:  
<message key="Search the site with">Rechercher sur le site avec</message>
./author/src/documentation/skins/common/translations/CommonMessages_es.xml:  
<message key="Search the site with">Buscar en</message>

But the above are just i18n messages.

Then I peeked at the source of hive.apache.org and noticed the "start Search" 
comment near the search box, so I looked for that:

$ ffg "start Search" | egrep -v 'svn|build|publish'

...and found nothing.

I'm not sure if it matters that author/src/documentation/skins seems to be some 
sort of external item pointing to 
http://svn.apache.org/repos/asf/hadoop/site/author/src/documentation/skins. 
Since Hive is not a TLP, maybe this should be changed anyway? But that may be a 
separate issue.

Hmmm help? :)


 Add alternative search-provider to Hive site
 

 Key: HIVE-1611
 URL: https://issues.apache.org/jira/browse/HIVE-1611
 Project: Hive
  Issue Type: Improvement
Reporter: Alex Baranau
Assignee: Edward Capriolo
Priority: Minor
 Attachments: HIVE-1611.patch


 Use search-hadoop.com service to make available search in Hive sources, MLs, 
 wiki, etc.
 This was initially proposed on user mailing list. The search service was 
 already added in site's skin (common for all Hadoop related projects) before 
 so this issue is about enabling it for Hive. The ultimate goal is to use it 
 at all Hadoop's sub-projects' sites.




[jira] Created: (HIVE-1900) a mapper should be able to span multiple partitions

2011-01-07 Thread Namit Jain (JIRA)
a mapper should be able to span multiple partitions
---

 Key: HIVE-1900
 URL: https://issues.apache.org/jira/browse/HIVE-1900
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: He Yongqiang


Currently, a mapper only spans a single partition, which creates a problem in 
the presence of many small partitions (which is becoming a common use case at 
Facebook).

If the plan is the same, a mapper should be able to span files across multiple 
partitions.




[jira] Commented: (HIVE-1900) a mapper should be able to span multiple partitions

2011-01-07 Thread Ning Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978918#action_12978918
 ] 

Ning Zhang commented on HIVE-1900:
--

I remember I had encountered this problem before. Enabling a mapper to read from 
multiple partitions is trivial, but there are some pitfalls to watch:

 1) Partitioning columns are not present in the data file itself. The 
partitioning column value is appended by the RecordReader (or something 
like that), which assumes that all records come from the same partition. That 
assumption will be broken here. An example query you can try is 

   select ds, count(1) from srcpart where ds is not null group by ds;

 2) The merge job should be treated specially so as not to allow combined input 
from multiple partitions. 

 3) Auto-gathering of stats from the FileSinkOperator needs to be addressed for 
this case so that stats are maintained for multiple partitions. 
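Pitfall (1) can be seen with a toy version of the read path. All names here are illustrative, not Hive's actual RecordReader: the point is only that the partition value is a per-split constant stamped onto every row, so a split combining files from two partitions would stamp rows from the second partition with the first one's value, which is exactly what the group-by-ds query above would expose.

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionStampSketch {
    /**
     * Toy record reader: rows come from the data file, while the partition
     * column (e.g. ds) is a per-split constant appended to each emitted row.
     */
    public static List<String> readSplit(List<String> fileRows, String ds) {
        List<String> out = new ArrayList<String>();
        for (String row : fileRows) {
            out.add(row + "\tds=" + ds);   // every row gets the split's single ds value
        }
        return out;
    }
}
```

If a combined split spanned ds=2008-04-08 and ds=2008-04-09 but the reader was opened with only one ds value, rows from the second partition would carry the wrong ds.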

 a mapper should be able to span multiple partitions
 ---

 Key: HIVE-1900
 URL: https://issues.apache.org/jira/browse/HIVE-1900
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: He Yongqiang

 Currently, a mapper only spans a single partition, which creates a problem in 
 the presence of many small partitions (which is becoming a common use case at 
 Facebook).
 If the plan is the same, a mapper should be able to span files across 
 multiple partitions.




Build failed in Hudson: Hive-trunk-h0.20 #473

2011-01-07 Thread Apache Hudson Server
See https://hudson.apache.org/hudson/job/Hive-trunk-h0.20/473/

--
[...truncated 7225 lines...]
[junit] Testing protocol: org.apache.thrift.protocol.TBinaryProtocol
[junit] TypeName = 
struct<_hello:int,2bye:array<string>,another:map<string,int>,nhello:int,d:double,nd:double>
[junit] bytes 
=x08xffxffx00x00x00xeax0fxffxfex0bx00x00x00x02x00x00x00x0bx66x69x72x73x74x53x74x72x69x6ex67x00x00x00x0cx73x65x63x6fx6ex64x53x74x72x69x6ex67x0dxffxfdx0bx08x00x00x00x02x00x00x00x08x66x69x72x73x74x4bx65x79x00x00x00x01x00x00x00x09x73x65x63x6fx6ex64x4bx65x79x00x00x00x02x08xffxfcxffxffxffx16x04xffxfbx3fxf0x00x00x00x00x00x00x04xffxfaxc0x04x00x00x00x00x00x00x00
[junit] o class = class java.util.ArrayList
[junit] o size = 6
[junit] o[0] class = class java.lang.Integer
[junit] o[1] class = class java.util.ArrayList
[junit] o[2] class = class java.util.HashMap
[junit] o = [234, [firstString, secondString], {firstKey=1, secondKey=2}, 
-234, 1.0, -2.5]
[junit] Testing protocol: org.apache.thrift.protocol.TJSONProtocol
[junit] TypeName = 
struct<_hello:int,2bye:array<string>,another:map<string,int>,nhello:int,d:double,nd:double>
[junit] bytes 
=x7bx22x2dx31x22x3ax7bx22x69x33x32x22x3ax32x33x34x7dx2cx22x2dx32x22x3ax7bx22x6cx73x74x22x3ax5bx22x73x74x72x22x2cx32x2cx22x66x69x72x73x74x53x74x72x69x6ex67x22x2cx22x73x65x63x6fx6ex64x53x74x72x69x6ex67x22x5dx7dx2cx22x2dx33x22x3ax7bx22x6dx61x70x22x3ax5bx22x73x74x72x22x2cx22x69x33x32x22x2cx32x2cx7bx22x66x69x72x73x74x4bx65x79x22x3ax31x2cx22x73x65x63x6fx6ex64x4bx65x79x22x3ax32x7dx5dx7dx2cx22x2dx34x22x3ax7bx22x69x33x32x22x3ax2dx32x33x34x7dx2cx22x2dx35x22x3ax7bx22x64x62x6cx22x3ax31x2ex30x7dx2cx22x2dx36x22x3ax7bx22x64x62x6cx22x3ax2dx32x2ex35x7dx7d
[junit] bytes in text 
={-1:{i32:234},-2:{lst:[str,2,firstString,secondString]},-3:{map:[str,i32,2,{firstKey:1,secondKey:2}]},-4:{i32:-234},-5:{dbl:1.0},-6:{dbl:-2.5}}
[junit] o class = class java.util.ArrayList
[junit] o size = 6
[junit] o[0] class = class java.lang.Integer
[junit] o[1] class = class java.util.ArrayList
[junit] o[2] class = class java.util.HashMap
[junit] o = [234, [firstString, secondString], {firstKey=1, secondKey=2}, 
-234, 1.0, -2.5]
[junit] Testing protocol: 
org.apache.hadoop.hive.serde2.thrift.TCTLSeparatedProtocol
[junit] TypeName = 
struct<_hello:int,2bye:array<string>,another:map<string,int>,nhello:int,d:double,nd:double>
[junit] bytes 
=x32x33x34x01x66x69x72x73x74x53x74x72x69x6ex67x02x73x65x63x6fx6ex64x53x74x72x69x6ex67x01x66x69x72x73x74x4bx65x79x03x31x02x73x65x63x6fx6ex64x4bx65x79x03x32x01x2dx32x33x34x01x31x2ex30x01x2dx32x2ex35
[junit] bytes in text 
=234firstStringsecondStringfirstKey1secondKey2-2341.0-2.5
[junit] o class = class java.util.ArrayList
[junit] o size = 6
[junit] o[0] class = class java.lang.Integer
[junit] o[1] class = class java.util.ArrayList
[junit] o[2] class = class java.util.HashMap
[junit] o = [234, [firstString, secondString], {firstKey=1, secondKey=2}, 
-234, 1.0, -2.5]
[junit] Beginning Test testTBinarySortableProtocol:
[junit] Testing struct test { double hello}
[junit] Testing struct test { i32 hello}
[junit] Testing struct test { i64 hello}
[junit] Testing struct test { string hello}
[junit] Testing struct test { string hello, double another}
[junit] Test testTBinarySortableProtocol passed!
[junit] bytes in text =234  firstStringsecondString
firstKey1secondKey2
[junit] compare to=234  firstStringsecondString
firstKey1secondKey2
[junit] o class = class java.util.ArrayList
[junit] o size = 3
[junit] o[0] class = class java.lang.Integer
[junit] o[1] class = class java.util.ArrayList
[junit] o[2] class = class java.util.HashMap
[junit] o = [234, [firstString, secondString], {firstKey=1, secondKey=2}]
[junit] bytes in text =234  firstStringsecondString
firstKey1secondKey2
[junit] compare to=234  firstStringsecondString
firstKey1secondKey2
[junit] o class = class java.util.ArrayList
[junit] o size = 3
[junit] o = [234, null, {firstKey=1, secondKey=2}]
[junit] Tests run: 9, Failures: 0, Errors: 0, Time elapsed: 0.861 sec
[junit] Running org.apache.hadoop.hive.serde2.lazy.TestLazyArrayMapStruct
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.367 sec
[junit] Running org.apache.hadoop.hive.serde2.lazy.TestLazyPrimitive
[junit] Tests run: 8, Failures: 0, Errors: 0, Time elapsed: 0.436 sec
[junit] Running org.apache.hadoop.hive.serde2.lazy.TestLazySimpleSerDe
[junit] Tests run: 4, Failures: 0, Errors: 0, Time elapsed: 0.376 sec
[junit] Running org.apache.hadoop.hive.serde2.lazybinary.TestLazyBinarySerDe
[junit] Beginning Test TestLazyBinarySerDe:
[junit] Test TestLazyBinarySerDe passed!
[junit] Tests run: 1, Failures: 0, Errors: 0, Time elapsed: 1.536 sec
[junit] Running 

[jira] Updated: (HIVE-1899) add a factory method for creating a synchronized wrapper for IMetaStoreClient

2011-01-07 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1899:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed. Thanks John!

 add a factory method for creating a synchronized wrapper for IMetaStoreClient
 -

 Key: HIVE-1899
 URL: https://issues.apache.org/jira/browse/HIVE-1899
 Project: Hive
  Issue Type: Improvement
  Components: Metastore
Affects Versions: 0.7.0
Reporter: John Sichi
Assignee: John Sichi
 Fix For: 0.7.0

 Attachments: HIVE-1899.1.patch


 There are currently some HiveMetaStoreClient multithreading bugs.  This patch 
 adds an (optional) synchronized wrapper for IMetaStoreClient using a dynamic 
 proxy.  This can be used for thread safety by multithreaded apps until all 
 reentrancy bugs are fixed.




[jira] Commented: (HIVE-1901) ColumnPruner bug in case of LATERAL VIEW and UDTF

2011-01-07 Thread Ning Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978965#action_12978965
 ] 

Ning Zhang commented on HIVE-1901:
--

This bug is related to the unnecessary columns in the subquery (dt.value and 
nt.hr). It works if these columns are removed. 

 ColumnPruner bug in case of LATERAL VIEW and UDTF
 -

 Key: HIVE-1901
 URL: https://issues.apache.org/jira/browse/HIVE-1901
 Project: Hive
  Issue Type: Bug
Reporter: Ning Zhang

 The following query 
 {code}
  EXPLAIN SELECT stat FROM (
 SELECT nt.value AS n, dt.value AS d_evt, nt.hr as hr
 FROM srcpart nt
 JOIN srcpart dt
 ON (nt.ds='2008-04-08' AND dt.ds='2008-04-08' AND nt.key=dt.key)
 ) joined
 LATERAL VIEW explode(array(n)) s AS stat;
 {code}
 throws an exception:
 FAILED: Hive Internal Error: java.lang.ArrayIndexOutOfBoundsException(-1)
 java.lang.ArrayIndexOutOfBoundsException: -1
   at java.util.ArrayList.get(ArrayList.java:324)
   at 
 org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory$ColumnPrunerSelectProc.process(ColumnPrunerProcFactory.java:398)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
   at 
 org.apache.hadoop.hive.ql.optimizer.ColumnPruner$ColumnPrunerWalker.walk(ColumnPruner.java:143)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
   at 
 org.apache.hadoop.hive.ql.optimizer.ColumnPruner.transform(ColumnPruner.java:106)
   at 
 org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:85)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6606)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
   at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:48)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:686)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:161)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:235)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:450)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)




[jira] Assigned: (HIVE-1901) ColumnPruner bug in case of LATERAL VIEW and UDTF

2011-01-07 Thread Ning Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ning Zhang reassigned HIVE-1901:


Assignee: Paul Yang

Paul, can you take a look?

 ColumnPruner bug in case of LATERAL VIEW and UDTF
 -

 Key: HIVE-1901
 URL: https://issues.apache.org/jira/browse/HIVE-1901
 Project: Hive
  Issue Type: Bug
Reporter: Ning Zhang
Assignee: Paul Yang

 The following query 
 {code}
  EXPLAIN SELECT stat FROM (
 SELECT nt.value AS n, dt.value AS d_evt, nt.hr as hr
 FROM srcpart nt
 JOIN srcpart dt
 ON (nt.ds='2008-04-08' AND dt.ds='2008-04-08' AND nt.key=dt.key)
 ) joined
 LATERAL VIEW explode(array(n)) s AS stat;
 {code}
 throws an exception:
 FAILED: Hive Internal Error: java.lang.ArrayIndexOutOfBoundsException(-1)
 java.lang.ArrayIndexOutOfBoundsException: -1
   at java.util.ArrayList.get(ArrayList.java:324)
   at 
 org.apache.hadoop.hive.ql.optimizer.ColumnPrunerProcFactory$ColumnPrunerSelectProc.process(ColumnPrunerProcFactory.java:398)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
   at 
 org.apache.hadoop.hive.ql.optimizer.ColumnPruner$ColumnPrunerWalker.walk(ColumnPruner.java:143)
   at 
 org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
   at 
 org.apache.hadoop.hive.ql.optimizer.ColumnPruner.transform(ColumnPruner.java:106)
   at 
 org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:85)
   at 
 org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:6606)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
   at 
 org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:48)
   at 
 org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:238)
   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:335)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:686)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:161)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:235)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:450)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)




[jira] Commented: (HIVE-78) Authorization infrastructure for Hive

2011-01-07 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-78?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978978#action_12978978
 ] 

Namit Jain commented on HIVE-78:


I am getting some compilation errors - can you regenerate the patch?

 Authorization infrastructure for Hive
 -

 Key: HIVE-78
 URL: https://issues.apache.org/jira/browse/HIVE-78
 Project: Hive
  Issue Type: New Feature
  Components: Metastore, Query Processor, Server Infrastructure
Reporter: Ashish Thusoo
Assignee: He Yongqiang
 Attachments: createuser-v1.patch, hive-78-metadata-v1.patch, 
 hive-78-syntax-v1.patch, HIVE-78.1.nothrift.patch, HIVE-78.1.thrift.patch, 
 HIVE-78.10.no_thrift.patch, HIVE-78.11.patch, HIVE-78.12.patch, 
 HIVE-78.2.nothrift.patch, HIVE-78.2.thrift.patch, HIVE-78.4.complete.patch, 
 HIVE-78.4.no_thrift.patch, HIVE-78.5.complete.patch, 
 HIVE-78.5.no_thrift.patch, HIVE-78.6.complete.patch, 
 HIVE-78.6.no_thrift.patch, HIVE-78.7.no_thrift.patch, HIVE-78.7.patch, 
 HIVE-78.9.no_thrift.patch, HIVE-78.9.patch, hive-78.diff


 Allow Hive to integrate with existing user repositories for authentication 
 and authorization information.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-78) Authorization infrastructure for Hive

2011-01-07 Thread He Yongqiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-78?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

He Yongqiang updated HIVE-78:
-

Attachment: HIVE-78.12.2.patch

refresh the patch

 Authorization infrastructure for Hive
 -

 Key: HIVE-78
 URL: https://issues.apache.org/jira/browse/HIVE-78
 Project: Hive
  Issue Type: New Feature
  Components: Metastore, Query Processor, Server Infrastructure
Reporter: Ashish Thusoo
Assignee: He Yongqiang
 Attachments: createuser-v1.patch, hive-78-metadata-v1.patch, 
 hive-78-syntax-v1.patch, HIVE-78.1.nothrift.patch, HIVE-78.1.thrift.patch, 
 HIVE-78.10.no_thrift.patch, HIVE-78.11.patch, HIVE-78.12.2.patch, 
 HIVE-78.12.patch, HIVE-78.2.nothrift.patch, HIVE-78.2.thrift.patch, 
 HIVE-78.4.complete.patch, HIVE-78.4.no_thrift.patch, 
 HIVE-78.5.complete.patch, HIVE-78.5.no_thrift.patch, 
 HIVE-78.6.complete.patch, HIVE-78.6.no_thrift.patch, 
 HIVE-78.7.no_thrift.patch, HIVE-78.7.patch, HIVE-78.9.no_thrift.patch, 
 HIVE-78.9.patch, hive-78.diff


 Allow Hive to integrate with existing user repositories for authentication 
 and authorization information.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1900) a mapper should be able to span multiple partitions

2011-01-07 Thread Namit Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978987#action_12978987
 ] 

Namit Jain commented on HIVE-1900:
--

1. Why should it be any different from a sort-merge join?
ExecMapper needs to keep track of the current file, and then change the 
partitioning columns whenever the file changes.
2. Why should it matter?
3. Why should it matter?

 a mapper should be able to span multiple partitions
 ---

 Key: HIVE-1900
 URL: https://issues.apache.org/jira/browse/HIVE-1900
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: He Yongqiang

 Currently, a mapper only spans a single partition, which creates a problem in 
 the presence of many small partitions (which is becoming a common use case at 
 Facebook).
 If the plan is the same, a mapper should be able to span files across 
 multiple partitions.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1900) a mapper should be able to span multiple partitions

2011-01-07 Thread Ning Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12978997#action_12978997
 ] 

Ning Zhang commented on HIVE-1900:
--

Namit, do you mean bucketized sort-merge join? In that case, don't you need to 
use a specialized InputFormat and RecordReader? If we allow mappers to get 
inputs from multiple partitions, we need to ensure HiveInputFormat, 
CombineHiveInputFormat, and the RecordReaders are partition-aware. 

2) is important because we don't want to merge different partitions into one 
file. Otherwise you need a dynamic partition insert for the merge, which may 
generate multiple small files for a partition again. 

3) If TableScanOperator can take multiple partitions, the stats have to be 
gathered according to the input partition column values. Currently the 
partition column value is checked for the first row, and it is assumed that 
all rows have the same partitioning column value. If we allow multiple 
partitions in a mapper, we have to check the partition column values for 
each row. 
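The per-row check described in point 3 can be sketched in isolation. The class and method names below are made up for illustration (this is not actual Hive stats code); it only shows why reading the partition value from the first row breaks once a mapper can see rows from several partitions:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch, hypothetical names: counting rows per partition value.
public class PartitionStatsSketch {

    // Single-partition assumption: read the partition value once, from the
    // first row, and attribute every row to it. Wrong if values are mixed.
    static Map<String, Integer> statsFirstRowOnly(List<String> partValues) {
        Map<String, Integer> stats = new HashMap<>();
        if (!partValues.isEmpty()) {
            stats.put(partValues.get(0), partValues.size());
        }
        return stats;
    }

    // Multi-partition case: inspect each row's partition column value.
    static Map<String, Integer> statsPerRow(List<String> partValues) {
        Map<String, Integer> stats = new HashMap<>();
        for (String v : partValues) {
            stats.merge(v, 1, Integer::sum);
        }
        return stats;
    }
}
```

For rows with partition values ds=1, ds=1, ds=2, the first method credits all three rows to ds=1, while the per-row method correctly reports two rows for ds=1 and one for ds=2.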

 a mapper should be able to span multiple partitions
 ---

 Key: HIVE-1900
 URL: https://issues.apache.org/jira/browse/HIVE-1900
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Namit Jain
Assignee: He Yongqiang

 Currently, a mapper only spans a single partition, which creates a problem in 
 the presence of many small partitions (which is becoming a common use case at 
 Facebook).
 If the plan is the same, a mapper should be able to span files across 
 multiple partitions.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HIVE-1903) Can't join HBase tables if one's name is the beginning of the other

2011-01-07 Thread John Sichi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Sichi reassigned HIVE-1903:


Assignee: John Sichi

 Can't join HBase tables if one's name is the beginning of the other
 ---

 Key: HIVE-1903
 URL: https://issues.apache.org/jira/browse/HIVE-1903
 Project: Hive
  Issue Type: Bug
Reporter: Jean-Daniel Cryans
Assignee: John Sichi
 Fix For: 0.7.0


 I tried joining two tables, let's call them "table" and "table_a", but I'm 
 seeing an array of errors such as this:
 {noformat}
 java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.get(ArrayList.java:322)
   at 
 org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getRecordReader(HiveHBaseTableInputFormat.java:118)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:231)
 {noformat}
 The reason is that HiveInputFormat.pushProjectionsAndFilters matches the 
 aliases with startsWith, so in my case the mappers for table_a were getting 
 the columns from table as well as their own (and since table_a had fewer 
 columns, it was trying to read one element past the end of the array).
 I don't know if just changing it to equals will fix it; my guess is it 
 won't, since it may break RCFiles.
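The over-matching can be sketched in isolation. The class and method names below are made up for illustration (this is not the actual pushProjectionsAndFilters code); it only shows why prefix matching selects too many aliases when one table name is a prefix of another:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch, hypothetical names: matching an input path against
// table aliases by prefix vs. exactly.
public class AliasMatchSketch {

    // Buggy variant: "table_a".startsWith("table") is true, so the input
    // for table_a matches both aliases and gets both tables' columns.
    static List<String> matchByPrefix(String inputName, List<String> aliases) {
        List<String> matched = new ArrayList<>();
        for (String alias : aliases) {
            if (inputName.startsWith(alias)) {
                matched.add(alias);
            }
        }
        return matched;
    }

    // Strict variant: only the owning alias matches.
    static List<String> matchExactly(String inputName, List<String> aliases) {
        List<String> matched = new ArrayList<>();
        for (String alias : aliases) {
            if (inputName.equals(alias)) {
                matched.add(alias);
            }
        }
        return matched;
    }
}
```

With aliases table and table_a, the prefix variant matches both for input table_a, while the exact variant matches only table_a; whether switching Hive itself to equals is safe (e.g. for RCFiles) is exactly the open question in the comment above.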

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1903) Can't join HBase tables if one's name is the beginning of the other

2011-01-07 Thread John Sichi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12979029#action_12979029
 ] 

John Sichi commented on HIVE-1903:
--

Can you attach the CREATE TABLE stmts plus SELECT which fails?


 Can't join HBase tables if one's name is the beginning of the other
 ---

 Key: HIVE-1903
 URL: https://issues.apache.org/jira/browse/HIVE-1903
 Project: Hive
  Issue Type: Bug
Reporter: Jean-Daniel Cryans
 Fix For: 0.7.0


 I tried joining two tables, let's call them "table" and "table_a", but I'm 
 seeing an array of errors such as this:
 {noformat}
 java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.get(ArrayList.java:322)
   at 
 org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getRecordReader(HiveHBaseTableInputFormat.java:118)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:231)
 {noformat}
 The reason is that HiveInputFormat.pushProjectionsAndFilters matches the 
 aliases with startsWith, so in my case the mappers for table_a were getting 
 the columns from table as well as their own (and since table_a had fewer 
 columns, it was trying to read one element past the end of the array).
 I don't know if just changing it to equals will fix it; my guess is it 
 won't, since it may break RCFiles.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HIVE-1903) Can't join HBase tables if one's name is the beginning of the other

2011-01-07 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12979035#action_12979035
 ] 

Jean-Daniel Cryans commented on HIVE-1903:
--

Here it is:

{noformat}
CREATE EXTERNAL TABLE users(key int, userid int, username string, created int) 
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = 
":key,f:userid,f:nickname,f:created");

CREATE EXTERNAL TABLE users_level(key int, userid int, level int)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,f:userid,f:level");

SELECT year(from_unixtime(users.created)) AS year, level, count(users.userid) 
AS num 
 FROM users JOIN users_level ON (users.userid = users_level.userid) 
 GROUP BY year(from_unixtime(users.created)), level;
{noformat}

 Can't join HBase tables if one's name is the beginning of the other
 ---

 Key: HIVE-1903
 URL: https://issues.apache.org/jira/browse/HIVE-1903
 Project: Hive
  Issue Type: Bug
Reporter: Jean-Daniel Cryans
Assignee: John Sichi
 Fix For: 0.7.0


 I tried joining two tables, let's call them "table" and "table_a", but I'm 
 seeing an array of errors such as this:
 {noformat}
 java.lang.IndexOutOfBoundsException: Index: 3, Size: 3
   at java.util.ArrayList.RangeCheck(ArrayList.java:547)
   at java.util.ArrayList.get(ArrayList.java:322)
   at 
 org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getRecordReader(HiveHBaseTableInputFormat.java:118)
   at 
 org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:231)
 {noformat}
 The reason is that HiveInputFormat.pushProjectionsAndFilters matches the 
 aliases with startsWith, so in my case the mappers for table_a were getting 
 the columns from table as well as their own (and since table_a had fewer 
 columns, it was trying to read one element past the end of the array).
 I don't know if just changing it to equals will fix it; my guess is it 
 won't, since it may break RCFiles.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1692) FetchOperator.getInputFormatFromCache hides causal exception

2011-01-07 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1692:
-

Resolution: Fixed
  Assignee: Philip Zeyliger
Status: Resolved  (was: Patch Available)

Committed. Thanks Philip!

 FetchOperator.getInputFormatFromCache hides causal exception
 

 Key: HIVE-1692
 URL: https://issues.apache.org/jira/browse/HIVE-1692
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.7.0
Reporter: Philip Zeyliger
Assignee: Philip Zeyliger
Priority: Minor
 Fix For: 0.7.0

 Attachments: HIVE-1692.2.patch.txt, HIVE-1692.patch.txt


 There's a line in FetchOperator.getInputFormatFromCache that catches all 
 exceptions and re-throws IOException instead, hiding the original cause.  I 
 ran into this, naturally, and wish to fix it.  Patch below is trivial.
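The class of fix described here (stop hiding the causal exception) can be sketched generically. The class, method, and example class names below are made up for illustration (this is not the actual Hive patch); it only contrasts re-throwing an IOException with and without the cause attached:

```java
import java.io.IOException;

// Simplified sketch, hypothetical names: chaining vs. swallowing a cause.
public class CauseChainingSketch {

    // Before: the original exception is discarded, so the reported stack
    // trace ends here and the real failure is invisible.
    static void loadHiding() throws IOException {
        try {
            Class.forName("com.example.MissingInputFormat");
        } catch (Exception e) {
            throw new IOException("cannot load input format class");
        }
    }

    // After: the cause is attached via the IOException(String, Throwable)
    // constructor, so getCause() and the printed stack trace still show
    // the original ClassNotFoundException.
    static void loadChaining() throws IOException {
        try {
            Class.forName("com.example.MissingInputFormat");
        } catch (Exception e) {
            throw new IOException("cannot load input format class", e);
        }
    }
}
```

Callers catching the IOException from the first variant see getCause() return null; from the second, they recover the underlying ClassNotFoundException.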

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HIVE-1829) Fix intermittent failures in TestRemoteMetaStore

2011-01-07 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1829:
-

Attachment: HIVE-1829.1.patch.txt

This patch attempts to fix the intermittent failures in TestRemoteHiveMetaStore 
by instituting a 60-second wait between consecutive connection attempts in 
HiveMetaStoreClient. This wait period is configurable via the new configuration 
property hive.metastore.client.connect.retry.delay. The patch also defines the 
new configuration property hive.metastore.client.socket.timeout, which is used 
to set the timeout value on the Thrift socket wrapped by HiveMetaStoreClient.
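The retry-with-delay behavior described above can be sketched generically. The class name, Transport interface, and method shape below are assumptions for illustration (not the actual HiveMetaStoreClient code); only the property name hive.metastore.client.connect.retry.delay comes from the patch description:

```java
// Simplified sketch, hypothetical names: retry a connection a fixed number
// of times, sleeping a configurable delay between consecutive attempts.
public class RetrySketch {

    interface Transport {
        void open() throws Exception;
    }

    static void connectWithRetries(Transport t, int retries, long retryDelaySeconds)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= retries; attempt++) {
            try {
                t.open();
                return;                               // connected
            } catch (Exception e) {
                last = e;
                if (attempt < retries) {
                    // the configurable wait between consecutive attempts
                    Thread.sleep(retryDelaySeconds * 1000L);
                }
            }
        }
        throw last;                                   // all attempts failed
    }
}
```

The delay matters in the test scenario above because a second metastore instance racing for the same port needs time for the first one's socket to be released before retrying.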


 Fix intermittent failures in TestRemoteMetaStore
 

 Key: HIVE-1829
 URL: https://issues.apache.org/jira/browse/HIVE-1829
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.6.0
Reporter: Edward Capriolo
Assignee: Carl Steinbach
 Attachments: HIVE-1829.1.patch.txt


 Notice how Running metastore! appears twice.
 {noformat}
 test:
 [junit] Running org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore
 [junit] BR.recoverFromMismatchedToken
 [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 36.697 sec
 [junit] Running org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore
 [junit] Running metastore!
 [junit] Running metastore!
 [junit] org.apache.thrift.transport.TTransportException: Could not create 
 ServerSocket on address 0.0.0.0/0.0.0.0:29083.
 [junit]   at 
 org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:98)
 [junit]   at 
 org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:79)
 [junit]   at 
 org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.<init>(TServerSocketKeepAlive.java:34)
 [junit]   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:2189)
 [junit]   at 
 org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore$RunMS.run(TestRemoteHiveMetaStore.java:35)
 [junit]   at java.lang.Thread.run(Thread.java:619)
 [junit] Running org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore
 [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
 [junit] Test org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore 
 FAILED (crashed)
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Review Request: HIVE-1829: Fix intermittent failures in TestRemoteMetaStore

2011-01-07 Thread Carl Steinbach

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/260/
---

Review request for hive.


Summary
---

This patch attempts to fix the intermittent failures in TestRemoteHiveMetaStore 
by instituting a 60-second wait between consecutive connection attempts in 
HiveMetaStoreClient. This wait period is configurable via the new configuration 
property hive.metastore.client.connect.retry.delay. The patch also defines the 
new configuration property hive.metastore.client.socket.timeout, which is used 
to set the timeout value on the Thrift socket wrapped by HiveMetaStoreClient.


This addresses bug HIVE-1829.
https://issues.apache.org/jira/browse/HIVE-1829


Diffs
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java c19d29f 
  conf/hive-default.xml 7662f11 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
720c1d0 
  
metastore/src/test/org/apache/hadoop/hive/metastore/TestRemoteHiveMetaStore.java
 57648b6 

Diff: https://reviews.apache.org/r/260/diff


Testing
---


Thanks,

Carl



[jira] Updated: (HIVE-1829) Fix intermittent failures in TestRemoteMetaStore

2011-01-07 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-1829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-1829:
-

Fix Version/s: 0.7.0
   Status: Patch Available  (was: Open)

Review request: https://reviews.apache.org/r/260/

 Fix intermittent failures in TestRemoteMetaStore
 

 Key: HIVE-1829
 URL: https://issues.apache.org/jira/browse/HIVE-1829
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.6.0
Reporter: Edward Capriolo
Assignee: Carl Steinbach
 Fix For: 0.7.0

 Attachments: HIVE-1829.1.patch.txt


 Notice how Running metastore! appears twice.
 {noformat}
 test:
 [junit] Running org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore
 [junit] BR.recoverFromMismatchedToken
 [junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 36.697 sec
 [junit] Running org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore
 [junit] Running metastore!
 [junit] Running metastore!
 [junit] org.apache.thrift.transport.TTransportException: Could not create 
 ServerSocket on address 0.0.0.0/0.0.0.0:29083.
 [junit]   at 
 org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:98)
 [junit]   at 
 org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:79)
 [junit]   at 
 org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.<init>(TServerSocketKeepAlive.java:34)
 [junit]   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:2189)
 [junit]   at 
 org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore$RunMS.run(TestRemoteHiveMetaStore.java:35)
 [junit]   at java.lang.Thread.run(Thread.java:619)
 [junit] Running org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore
 [junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
 [junit] Test org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore 
 FAILED (crashed)
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.