[jira] [Commented] (HIVE-2966) Revert HIVE-2795

2012-04-20 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258458#comment-13258458
 ] 

Ashutosh Chauhan commented on HIVE-2966:


There is a lot of overlap between this and HIVE-2961. I recommend doing both 
in conjunction. 
Can you also fold that in?


> Revert HIVE-2795
> 
>
> Key: HIVE-2966
> URL: https://issues.apache.org/jira/browse/HIVE-2966
> Project: Hive
>  Issue Type: Task
>  Components: Metastore
>Affects Versions: 0.9.0
>Reporter: Ashutosh Chauhan
>Priority: Blocker
> Fix For: 0.9.0
>
> Attachments: HIVE-2966.1.patch
>
>
> In the 4/18/12 contrib meeting, it was decided to revert HIVE-2795.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2965) Revert HIVE-2612

2012-04-20 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13258382#comment-13258382
 ] 

Ashutosh Chauhan commented on HIVE-2965:


All tests passed. 
{code}
BUILD SUCCESSFUL
Total time: 354 minutes 58 seconds
{code}

> Revert HIVE-2612
> 
>
> Key: HIVE-2965
> URL: https://issues.apache.org/jira/browse/HIVE-2965
> Project: Hive
>  Issue Type: Task
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Blocker
> Fix For: 0.9.0
>
> Attachments: hive-2765.patch
>
>
> In the 4/19 contrib meeting, it was decided to revert HIVE-2612.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2961) Remove need for storage descriptors for view partitions

2012-04-19 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13257593#comment-13257593
 ] 

Ashutosh Chauhan commented on HIVE-2961:


bq. But if we end up taking that route, I think we should leave the 
upgrade-0.8.0-to-0.9.0.xxx.sql scripts for the sake of consistency.

Do you mean deleting the contents of the files, effectively leaving empty 
files, since no upgrade is needed once we revert those patches?

> Remove need for storage descriptors for view partitions
> ---
>
> Key: HIVE-2961
> URL: https://issues.apache.org/jira/browse/HIVE-2961
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.9.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-2961.D2877.1.patch
>
>
> Storage descriptors were introduced for view partitions as part of HIVE-2795. 
>  This was to allow view partitions to have the concept of a region as well as 
> to fix an NPE that resulted from calling describe formatted on them.
> Since regions are no longer necessary for view partitions and the NPE can be 
> fixed by not displaying storage information for view partitions (or 
> displaying the view's storage information if this is preferred, although, 
> since a view partition is purely metadata, this does not seem necessary), 
> these are no longer needed.
> This also means the Python script added which retroactively adds storage 
> descriptors to existing view partitions can be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2961) Remove need for storage descriptors for view partitions

2012-04-18 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13256822#comment-13256822
 ] 

Ashutosh Chauhan commented on HIVE-2961:


I agree with Kevin. Since views are purely metadata, it doesn't make much sense 
to have a storage descriptor associated with them.

> Remove need for storage descriptors for view partitions
> ---
>
> Key: HIVE-2961
> URL: https://issues.apache.org/jira/browse/HIVE-2961
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.9.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-2961.D2877.1.patch
>
>
> Storage descriptors were introduced for view partitions as part of HIVE-2795. 
>  This was to allow view partitions to have the concept of a region as well as 
> to fix an NPE that resulted from calling describe formatted on them.
> Since regions are no longer necessary for view partitions and the NPE can be 
> fixed by not displaying storage information for view partitions (or 
> displaying the view's storage information if this is preferred, although, 
> since a view partition is purely metadata, this does not seem necessary), 
> these are no longer needed.
> This also means the Python script added which retroactively adds storage 
> descriptors to existing view partitions can be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2883) Metastore client doesnt close connection properly

2012-04-17 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13255747#comment-13255747
 ] 

Ashutosh Chauhan commented on HIVE-2883:


Hive Committers,

This has been independently verified by a couple of other folks. Can someone 
review it for me?

All tests passed with the patch.

> Metastore client doesnt close connection properly
> -
>
> Key: HIVE-2883
> URL: https://issues.apache.org/jira/browse/HIVE-2883
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.9.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.9.0
>
> Attachments: HIVE-2883.D2613.1.patch
>
>
> While closing the connection, it always fails with the following trace. Seemingly, 
> it doesn't have any harmful effects.
> {code}
> 12/03/20 10:55:02 ERROR hive.metastore: Unable to shutdown local metastore 
> client
> org.apache.thrift.transport.TTransportException: Cannot write to null 
> outputStream
>   at 
> org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:142)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:91)
>   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
>   at 
> com.facebook.fb303.FacebookService$Client.send_shutdown(FacebookService.java:421)
>   at 
> com.facebook.fb303.FacebookService$Client.shutdown(FacebookService.java:415)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:310)
> {code}
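
A minimal sketch of one way to guard such a shutdown call so the failure above is
logged only as a warning; the class, method, and field names are illustrative
assumptions and this is not the actual HIVE-2883 patch:

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.thrift.TException;
import org.apache.thrift.transport.TTransport;
import com.facebook.fb303.FacebookService;

public final class QuietClose {
  private static final Log LOG = LogFactory.getLog(QuietClose.class);

  /** Shut down a metastore-style Thrift client without surfacing transport errors. */
  public static void closeQuietly(FacebookService.Client client, TTransport transport) {
    try {
      if (client != null) {
        // fb303 shutdown RPC; fails with TTransportException if the stream is already gone.
        client.shutdown();
      }
    } catch (TException e) {
      LOG.warn("Unable to shutdown local metastore client", e);
    } finally {
      if (transport != null && transport.isOpen()) {
        transport.close();
      }
    }
  }
}
{code}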

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1013) Interactive 'help' command for HiveCLI

2012-04-12 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253089#comment-13253089
 ] 

Ashutosh Chauhan commented on HIVE-1013:


Judging from the lack of response on the jira, it looks like no one is working on 
this. Go for it.

> Interactive 'help' command for HiveCLI
> --
>
> Key: HIVE-1013
> URL: https://issues.apache.org/jira/browse/HIVE-1013
> Project: Hive
>  Issue Type: New Feature
>  Components: Clients
>Reporter: Carl Steinbach
>
> 'help' should return a list of HiveCLI commands. For example:
> {noformat}
> hive> help;
> Commands:
> !
> add
> dfs
> list
> set
> quit
> hive> help !;
> Command: !
> Execute shell command  from within Hive CLI.
> hive>
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2948) HiveFileFormatUtils should use Path.SEPARATOR instead of File.Separator

2012-04-12 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253080#comment-13253080
 ] 

Ashutosh Chauhan commented on HIVE-2948:


All the tests passed.

> HiveFileFormatUtils should use Path.SEPARATOR instead of File.Separator
> ---
>
> Key: HIVE-2948
> URL: https://issues.apache.org/jira/browse/HIVE-2948
> Project: Hive
>  Issue Type: Bug
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-2948.D2763.1.patch
>
>
> Because it's munging HDFS paths, not OS paths.
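
For illustration, a small sketch (not the HIVE-2948 patch itself) of why
Path.SEPARATOR is the right constant when composing HDFS paths; the path strings
and class name are made up:

{code}
import java.io.File;
import org.apache.hadoop.fs.Path;

public class PathSeparatorDemo {
  public static void main(String[] args) {
    String warehouseDir = "hdfs://namenode:8020/user/hive/warehouse";
    String tableName = "my_table";

    // Correct: Path.SEPARATOR is always "/", matching HDFS path syntax on any OS.
    Path good = new Path(warehouseDir + Path.SEPARATOR + tableName);

    // Wrong on Windows: File.separator is "\" there, producing a malformed HDFS path.
    String bad = warehouseDir + File.separator + tableName;

    System.out.println(good + " vs " + bad);
  }
}
{code}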

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2937) TestHiveServerSessions hangs when executed directly

2012-04-12 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253077#comment-13253077
 ] 

Ashutosh Chauhan commented on HIVE-2937:


Go ahead and commit; it will unblock us from this failing test. But do file a 
follow-up jira describing your findings, so that anyone interested in pursuing 
this further will have some context to begin with.

> TestHiveServerSessions hangs when executed directly
> ---
>
> Key: HIVE-2937
> URL: https://issues.apache.org/jira/browse/HIVE-2937
> Project: Hive
>  Issue Type: Test
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-2937.D2697.1.patch
>
>
> {code}
> ant test -Doffline=true -Dtestcase=TestHiveServerSessions
> {code}
> Hangs infinitely.
> I couldn't determine the exact cause of the problem, but found that by adding 'new 
> HiveServer.HiveServerHandler();' in setup(), the test succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2936) Warehouse table subdirectories should inherit the group permissions of the warehouse parent directory

2012-04-12 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253055#comment-13253055
 ] 

Ashutosh Chauhan commented on HIVE-2936:


+1 will commit, if tests pass.

> Warehouse table subdirectories should inherit the group permissions of the 
> warehouse parent directory
> -
>
> Key: HIVE-2936
> URL: https://issues.apache.org/jira/browse/HIVE-2936
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Rohini Palaniswamy
>Assignee: Rohini Palaniswamy
> Fix For: 0.9.0
>
> Attachments: HIVE-2504-1.patch, HIVE-2504.patch, HIVE-2504.patch, 
> HIVE-2936-2.patch
>
>
> When the Hive Metastore creates a subdirectory in the Hive warehouse for
> a new table it does so with the default HDFS permissions derived from 
> dfs.umask or dfs.umaskmode. There should be an option to inherit the 
> permissions of the parent directory (default warehouse or custom database 
> directory) so that the table directories have the same permissions as the 
> database directories. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2936) Warehouse table subdirectories should inherit the group permissions of the warehouse parent directory

2012-04-12 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253041#comment-13253041
 ] 

Ashutosh Chauhan commented on HIVE-2936:


{code}
if (this.inheritPerms) {
  try {
    // If the path already exists, just report whether it is a directory.
    return fs.getFileStatus(f).isDir();
  } catch (FileNotFoundException fnfe) {
    // Fall through and create it.
  }
}

boolean success = fs.mkdirs(f);

if (this.inheritPerms && success) {
  // Copy the parent directory's permissions onto the new directory.
  fs.setPermission(f, fs.getFileStatus(f.getParent()).getPermission());
}
return success;
{code}

This improves on yours by doing just one call for the "inheritPerms on an existing path" case.

> Warehouse table subdirectories should inherit the group permissions of the 
> warehouse parent directory
> -
>
> Key: HIVE-2936
> URL: https://issues.apache.org/jira/browse/HIVE-2936
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Rohini Palaniswamy
>Assignee: Rohini Palaniswamy
> Fix For: 0.9.0
>
> Attachments: HIVE-2504-1.patch, HIVE-2504.patch, HIVE-2504.patch
>
>
> When the Hive Metastore creates a subdirectory in the Hive warehouse for
> a new table it does so with the default HDFS permissions derived from 
> dfs.umask or dfs.umaskmode. There should be an option to inherit the 
> permissions of the parent directory (default warehouse or custom database 
> directory) so that the table directories have the same permissions as the 
> database directories. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2936) Warehouse table subdirectories should inherit the group permissions of the warehouse parent directory

2012-04-12 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13253019#comment-13253019
 ] 

Ashutosh Chauhan commented on HIVE-2936:


I see. Agreed on both points. Another point: this increases the number of calls to 
the NN; it would be good to reduce that if possible. How about the following:
{code}
boolean success = fs.mkdirs(f);
if (success) {
  if (this.inheritPerms && fs.exists(f.getParent())) {
    try {
      // Copy the parent directory's permissions onto the new directory.
      fs.setPermission(f, fs.getFileStatus(f.getParent()).getPermission());
    } catch (IOException ioe) {
      LOG.error("Failed to set permissions on " + f, ioe);
      success = false;
    }
  }
} else {
  // mkdirs returned false; the path may already exist as a directory.
  return fs.getFileStatus(f).isDir();
}
return success;
{code}

How about this? Whether to return false when you fail to set permissions is not 
clear-cut; I return false here. What do you think?

> Warehouse table subdirectories should inherit the group permissions of the 
> warehouse parent directory
> -
>
> Key: HIVE-2936
> URL: https://issues.apache.org/jira/browse/HIVE-2936
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Rohini Palaniswamy
>Assignee: Rohini Palaniswamy
> Fix For: 0.9.0
>
> Attachments: HIVE-2504-1.patch, HIVE-2504.patch, HIVE-2504.patch
>
>
> When the Hive Metastore creates a subdirectory in the Hive warehouse for
> a new table it does so with the default HDFS permissions derived from 
> dfs.umask or dfs.umaskmode. There should be an option to inherit the 
> permissions of the parent directory (default warehouse or custom database 
> directory) so that the table directories have the same permissions as the 
> database directories. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2736) Hive UDFs cannot emit binary constants

2012-04-12 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252992#comment-13252992
 ] 

Ashutosh Chauhan commented on HIVE-2736:


@Philip,
Patch looks good. Can you add a test case for it? You may find this how-to 
useful: 
https://cwiki.apache.org/confluence/display/Hive/HowToContribute#HowToContribute-AddaUnitTest
 

> Hive UDFs cannot emit binary constants
> --
>
> Key: HIVE-2736
> URL: https://issues.apache.org/jira/browse/HIVE-2736
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor, Serializers/Deserializers, UDF
>Affects Versions: 0.9.0
>Reporter: Philip Tromans
>Assignee: Philip Tromans
>Priority: Minor
>  Labels: newbie
> Fix For: 0.9.0
>
> Attachments: HIVE-2736.1.patch.txt
>
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> I recently wrote a UDF which emits BINARY values (as implemented in 
> [HIVE-2380|https://issues.apache.org/jira/browse/HIVE-2380]). When testing 
> this, I encountered the following exception (because I was evaluating 
> f(g(constant string)), and g() was emitting a BytesWritable type).
> FAILED: Hive Internal Error: java.lang.RuntimeException(Internal error: 
> Cannot find ConstantObjectInspector for BINARY)
> java.lang.RuntimeException: Internal error: Cannot find 
> ConstantObjectInspector for BINARY
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory.getPrimitiveWritableConstantObjectInspector(PrimitiveObjectInspectorFactory.java:196)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getConstantObjectInspector(ObjectInspectorUtils.java:899)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDF.initializeAndFoldConstants(GenericUDF.java:128)
>   at 
> org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc.newInstance(ExprNodeGenericFuncDesc.java:214)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.getXpathOrFuncExprNodeDesc(TypeCheckProcFactory.java:684)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory$DefaultExprProcessor.process(TypeCheckProcFactory.java:805)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:89)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:88)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:125)
>   at 
> org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:102)
>   at 
> org.apache.hadoop.hive.ql.parse.TypeCheckProcFactory.genExprNode(TypeCheckProcFactory.java:161)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:7708)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2301)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:2103)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:6126)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6097)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6723)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:7484)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:243)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:430)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:889)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:671)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
> It looks like a pretty simple fix - add a case for BINARY in 
> PrimitiveObjectInspectorFactory.getPrimitiveWritableConstantObjectInspector() 
> and implement a WritableConstantByteArrayObjectInspector class (almost 
> identical to the others

[jira] [Commented] (HIVE-2936) Warehouse table subdirectories should inherit the group permissions of the warehouse parent directory

2012-04-12 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252981#comment-13252981
 ] 

Ashutosh Chauhan commented on HIVE-2936:


Looks good; a couple of comments:
* I don't see a need for the parent-existence check; the parent will always exist, 
since / is its own parent.
* fs.mkdirs() takes a permission as an argument, so use that signature instead of 
first creating the dirs and then setting perms. 

So, something like the following:
{code}
boolean success;
if (this.inheritPerms) {
  success = fs.mkdirs(f, fs.getFileStatus(f.getParent()).getPermission());
} else {
  success = fs.mkdirs(f);
}
return success;
{code}

> Warehouse table subdirectories should inherit the group permissions of the 
> warehouse parent directory
> -
>
> Key: HIVE-2936
> URL: https://issues.apache.org/jira/browse/HIVE-2936
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Rohini Palaniswamy
>Assignee: Rohini Palaniswamy
> Fix For: 0.9.0
>
> Attachments: HIVE-2504-1.patch, HIVE-2504.patch, HIVE-2504.patch
>
>
> When the Hive Metastore creates a subdirectory in the Hive warehouse for
> a new table it does so with the default HDFS permissions derived from 
> dfs.umask or dfs.umaskmode. There should be an option to inherit the 
> permissions of the parent directory (default warehouse or custom database 
> directory) so that the table directories have the same permissions as the 
> database directories. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2937) TestHiveServerSessions hangs when executed directly

2012-04-12 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13252487#comment-13252487
 ] 

Ashutosh Chauhan commented on HIVE-2937:


@Kevin,

What you said makes sense. But I am still puzzled why this has started showing up 
only recently. I see this hang ~70% of the time on my builds, starting last week. 
If the bug is as described, this should always have been the case, shouldn't it?

> TestHiveServerSessions hangs when executed directly
> ---
>
> Key: HIVE-2937
> URL: https://issues.apache.org/jira/browse/HIVE-2937
> Project: Hive
>  Issue Type: Test
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-2937.D2697.1.patch
>
>
> {code}
> ant test -Doffline=true -Dtestcase=TestHiveServerSessions
> {code}
> Hangs infinitely.
> I couldn't determine the exact cause of the problem, but found that by adding 'new 
> HiveServer.HiveServerHandler();' in setup(), the test succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2767) Optionally use framed transport with metastore

2012-04-10 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13250711#comment-13250711
 ] 

Ashutosh Chauhan commented on HIVE-2767:


@Travis,
It has nothing to do with your patch; the problem exists on trunk. See HIVE-2937. 
And yes, it's a race condition, so it doesn't always show up.

> Optionally use framed transport with metastore
> --
>
> Key: HIVE-2767
> URL: https://issues.apache.org/jira/browse/HIVE-2767
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Travis Crawford
>Assignee: Travis Crawford
> Attachments: HIVE-2767.D2661.1.patch, HIVE-2767.D2661.2.patch, 
> HIVE-2767.D2661.3.patch, HIVE-2767.patch.txt, HIVE-2767_a.patch.txt
>
>
> Users may want/need to use thrift's framed transport when communicating with 
> the Hive MetaStore. This patch adds a new property 
> {{hive.metastore.thrift.framed.transport.enabled}} that enables the framed 
> transport (defaults to off, aka no change from before the patch). This 
> property must be set for both clients and the HMS server.
> It wasn't immediately clear how to use the framed transport with SASL, so as 
> written an exception is thrown if you try starting the server with both 
> options. If SASL and the framed transport will indeed work together I can 
> update the patch (although I don't have a secured environment to test in).
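
For context, a minimal, hypothetical client-side sketch of toggling Thrift's framed
transport, as the quoted {{hive.metastore.thrift.framed.transport.enabled}} property
would do; the method and variable names are illustrative, not taken from the
HIVE-2767 patch:

{code}
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class FramedTransportSketch {
  public static TTransport openTransport(String host, int port, boolean useFramed)
      throws Exception {
    // Plain socket transport, as used when the property is off (the default).
    TTransport transport = new TSocket(host, port);
    if (useFramed) {
      // Framed transport prefixes every message with a 4-byte length;
      // both the client and the HMS server must agree on this setting.
      transport = new TFramedTransport(transport);
    }
    transport.open();
    return transport;
  }
}
{code}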

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2937) TestHiveServerSessions hangs when executed directly

2012-04-09 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13250243#comment-13250243
 ] 

Ashutosh Chauhan commented on HIVE-2937:


@Navis

Can you add Namit to the reviewer list?


> TestHiveServerSessions hangs when executed directly
> ---
>
> Key: HIVE-2937
> URL: https://issues.apache.org/jira/browse/HIVE-2937
> Project: Hive
>  Issue Type: Test
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-2937.D2697.1.patch
>
>
> {code}
> ant test -Doffline=true -Dtestcase=TestHiveServerSessions
> {code}
> Hangs infinitely.
> I couldn't determine the exact cause of the problem, but found that by adding 'new 
> HiveServer.HiveServerHandler();' in setup(), the test succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2937) TestHiveServerSessions hangs when executed directly

2012-04-09 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13250172#comment-13250172
 ] 

Ashutosh Chauhan commented on HIVE-2937:


@Navis,

I have also found this test hanging on my machines. Looking at the Apache build 
logs, either HIVE-2929 or HIVE-2858 is the cause: 
https://builds.apache.org/job/Hive-trunk-h0.21/1360/ Though at least one of the 
subsequent builds has passed, so maybe there is a race condition somewhere.

Namit was involved in both of those patches, so he may have more context.

> TestHiveServerSessions hangs when executed directly
> ---
>
> Key: HIVE-2937
> URL: https://issues.apache.org/jira/browse/HIVE-2937
> Project: Hive
>  Issue Type: Test
>Reporter: Navis
>Priority: Trivial
>
> {code}
> ant test -Doffline=true -Dtestcase=TestHiveServerSessions
> {code}
> Hangs infinitely.
> I couldn't determine the exact cause of the problem, but found that by adding 'new 
> HiveServer.HiveServerHandler();' in setup(), the test succeeded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2084) Upgrade datanucleus from 2.0.3 to 3.0.1

2012-04-09 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13250161#comment-13250161
 ] 

Ashutosh Chauhan commented on HIVE-2084:


@Sushanth, 

You have two patches up, one via Phabricator and another directly on the jira, and 
they differ quite a bit. Which one are you proposing for review?

> Upgrade datanucleus from 2.0.3 to 3.0.1
> ---
>
> Key: HIVE-2084
> URL: https://issues.apache.org/jira/browse/HIVE-2084
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Ning Zhang
>Assignee: Sushanth Sowmyan
>  Labels: datanucleus
> Attachments: HIVE-2084.1.patch.txt, HIVE-2084.2.patch.txt, 
> HIVE-2084.D2397.1.patch, HIVE-2084.patch
>
>
> It seems that datanucleus 2.2.3 does a better job at caching. Getting the same 
> set of partition objects takes about 1/4 of the time it took the first time, 
> while with 2.0.3 the second execution took almost the same amount of time. We 
> should retest the test case mentioned in HIVE-1853, HIVE-1862.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2767) Optionally use framed transport with metastore

2012-04-08 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13249582#comment-13249582
 ] 

Ashutosh Chauhan commented on HIVE-2767:


After your patch, {{ant test -Dtestcase=TestHiveServerSessions}} is timing 
out. Can you take a look?

> Optionally use framed transport with metastore
> --
>
> Key: HIVE-2767
> URL: https://issues.apache.org/jira/browse/HIVE-2767
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Travis Crawford
>Assignee: Travis Crawford
> Attachments: HIVE-2767.D2661.1.patch, HIVE-2767.D2661.2.patch, 
> HIVE-2767.D2661.3.patch, HIVE-2767.patch.txt, HIVE-2767_a.patch.txt
>
>
> Users may want/need to use thrift's framed transport when communicating with 
> the Hive MetaStore. This patch adds a new property 
> {{hive.metastore.thrift.framed.transport.enabled}} that enables the framed 
> transport (defaults to off, aka no change from before the patch). This 
> property must be set for both clients and the HMS server.
> It wasn't immediately clear how to use the framed transport with SASL, so as 
> written an exception is thrown if you try starting the server with both 
> options. If SASL and the framed transport will indeed work together I can 
> update the patch (although I don't have a secured environment to test in).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2504) Warehouse table subdirectories should inherit the group permissions of the warehouse parent directory

2012-04-08 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13249581#comment-13249581
 ] 

Ashutosh Chauhan commented on HIVE-2504:


I agree that fiddling with the umask is not the cleanest approach here. But I am 
not sure about *always* inheriting permissions either, since that effectively 
implies the whole sub-tree of the warehouse dir will have the same permissions as 
the warehouse dir itself. Concretely, consider the following example. Say the wh 
dir has 700 perms. Then, if I create a table (which only the owner of the wh can 
do), I will end up with either 775 or 755 (depending on whether it was before or 
after the earlier patch of this jira). However, with your patch the table dir will 
end up with 700. In the earlier case anyone could have read the tables, but with 
your approach only the owner can. Which of these is the correct behavior is open 
for debate and depends on which security model you take as your premise. 
Additionally, this would be a change from the current behavior. So, I suggest you 
define a new config variable like {{hive.warehouse.inherit.perms}} or something 
similar, set it to false by default, and take your code path of inheriting the 
parent perms only when it is set to true. Thoughts? 
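
A minimal sketch of how such a default-off switch could gate the permission-inheriting
path, assuming the flag name suggested above ({{hive.warehouse.inherit.perms}}) and
illustrative class/method names; this is not actual Hive code:

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class InheritPermsSketch {
  // Property name as suggested in the comment; the final name could differ.
  private static final String INHERIT_PERMS = "hive.warehouse.inherit.perms";

  public static boolean mkdir(FileSystem fs, Path dir, Configuration conf) throws IOException {
    // Off by default, preserving the current behavior.
    boolean inherit = conf.getBoolean(INHERIT_PERMS, false);
    boolean created = fs.mkdirs(dir);
    if (created && inherit) {
      // Copy the parent directory's permissions onto the new directory.
      fs.setPermission(dir, fs.getFileStatus(dir.getParent()).getPermission());
    }
    return created;
  }
}
{code}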

> Warehouse table subdirectories should inherit the group permissions of the 
> warehouse parent directory
> -
>
> Key: HIVE-2504
> URL: https://issues.apache.org/jira/browse/HIVE-2504
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Carl Steinbach
>Assignee: Rohini Palaniswamy
> Fix For: 0.9.0
>
> Attachments: HIVE-2504.patch, HIVE-2504.patch
>
>
> When the Hive Metastore creates a subdirectory in the Hive warehouse for
> a new table it does so with the default HDFS permissions. Since the default
> dfs.umask value is 022, this means that the new subdirectory will not inherit 
> the
> group write permissions of the hive warehouse directory.
> We should make the umask used by Warehouse.mkdirs() configurable, and set
> it to use a default value of 002.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2928) Support for Oracle-backed Hive-Metastore ("longvarchar" to "clob" in package.jdo)

2012-04-07 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13249312#comment-13249312
 ] 

Ashutosh Chauhan commented on HIVE-2928:


bq. 2. These modified hive-libraries work as is with pre-existing mysql 
metastores. Migrating data isn't a worry.

You are changing the type of columns. That sounds like it requires data migration, 
and thus data-migration and schema-upgrade scripts for MySQL and Derby.

> Support for Oracle-backed Hive-Metastore ("longvarchar" to "clob" in 
> package.jdo)
> -
>
> Key: HIVE-2928
> URL: https://issues.apache.org/jira/browse/HIVE-2928
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Affects Versions: 0.9.0
>Reporter: Mithun Radhakrishnan
> Attachments: HIVE-2928.patch
>
>
> I'm trying to get the Hive-Metastore to work when backed by an Oracle 
> backend. There's a change to hive's package.jdo that I'd like advice/comments 
> on.
> One sticking point on working with Oracle has been the TBLS table (MTable) 
> and its 2 LONGVARCHAR properties (VIEW_ORIGINAL_TEXT and VIEW_EXPANDED_TEXT). 
> Oracle doesn't support more than one LONGVARCHAR property per table (for 
> reason of legacy), and prefers that one use CLOBs instead. If one switches to 
> CLOB properties, with no modification to hive's package.jdo, one sees the 
> following exception:
> 
> Incompatible data type for column TBLS.VIEW_EXPANDED_TEXT : was CLOB
> (datastore), but type expected was LONGVARCHAR (metadata). Please check that
> the type in the datastore and the type specified in the MetaData are
> consistent.
> org.datanucleus.store.rdbms.exceptions.IncompatibleDataTypeException:
> Incompatible data type for column TBLS.VIEW_EXPANDED_TEXT : was CLOB
> (datastore), but type expected was LONGVARCHAR (metadata). Please check that
> the type in the datastore and the type specified in the MetaData are
> consistent.
> at
> org.datanucleus.store.rdbms.table.ColumnImpl.validate(ColumnImpl.java:521)
> at
> org.datanucleus.store.rdbms.table.TableImpl.validateColumns(TableImpl.java:2
> 
> But if one rebuilds Hive with the package.jdo changed to use CLOBs instead of 
> LONGVARCHARs, things look promising:
> 1. The exception no longer occurs. Things seem to work with Oracle. (I've yet 
> to scale-test.)
> 2. These modified hive-libraries work as is with pre-existing mysql 
> metastores. Migrating data isn't a worry.
> 3. The unit-tests seem to run through. 
> Would there be opposition to changing the package.jdo's LONGVARCHAR 
> references to CLOB, if this works with mysql and with Oracle? 
> Mithun
> P.S. I also have a working hive-schema-0.9.0-oracle.sql script that I'm 
> testing, for the related issue of creating the required tables in Oracle.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1721) use bloom filters to improve the performance of joins

2012-04-07 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13249311#comment-13249311
 ] 

Ashutosh Chauhan commented on HIVE-1721:


Last line should read "then launch second MR job to do step 3 & 4"

> use bloom filters to improve the performance of joins
> -
>
> Key: HIVE-1721
> URL: https://issues.apache.org/jira/browse/HIVE-1721
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Namit Jain
>  Labels: gsoc, gsoc2012, optimization
>
> In case of map-joins, it is likely that the big table will not find many 
> matching rows from the small table.
> Currently, we perform a hash-map lookup for every row in the big table, which 
> can be pretty expensive.
> It might be useful to try out a bloom-filter containing all the elements in 
> the small table.
> Each element from the big table is first searched in the bloom filter, and 
> only in case of a positive match,
> the small table hash table is explored.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1721) use bloom filters to improve the performance of joins

2012-04-07 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13249310#comment-13249310
 ] 

Ashutosh Chauhan commented on HIVE-1721:


@Alex,
Reading the previous comments on the jira, this is proposed to work as follows:
* Create a local task and launch it on the client machine, building a bloom filter 
over the medium-sized table (~200MB).
* Create a Common Join MR job and launch it on the cluster. Also ship the bloom 
filter built in the previous step to all the mapper nodes (via the Distributed Cache).
* In the mapper, look up the key of every row of the large table in the bloom 
filter. If it exists, send that row to the reducer; otherwise filter it out.
* In the reducer, do the cross-product of the rows of the different tables for a 
given key to get your joined output. 

As outlined above, it will be a win since you will be shuffling much less data 
from mappers to reducers. The assumptions, though, are that the cost of building 
the bloom filter on the client machine is small, that there is a huge difference 
in the sizes of the two tables, and that the join key is highly selective. One or 
more of these assumptions may not hold, in which case there might be a performance 
loss. So there is a trade-off in when to use this.

I don't know if there exists a way to compute a bloom filter in distributed 
fashion. If there is such a way, then you can do step 1 through an MR job 
(instead of locally) and on a much larger table, and then launch a second MR job 
to do step 2 & 3. Again, there will be trade-offs here. 
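
A minimal sketch of the filter-then-shuffle idea using Hadoop's
org.apache.hadoop.util.bloom.BloomFilter; the sizes, keys, and driver class are
illustrative assumptions, not Hive code. The built filter is Writable, so it can be
serialized and shipped to the mappers via the Distributed Cache.

{code}
import org.apache.hadoop.util.bloom.BloomFilter;
import org.apache.hadoop.util.bloom.Key;
import org.apache.hadoop.util.hash.Hash;

public class BloomJoinSketch {
  public static void main(String[] args) {
    // Step 1: build the filter over the medium-sized table's join keys (local task).
    // Vector size and hash count are illustrative; they trade memory for false-positive rate.
    BloomFilter filter = new BloomFilter(8 * 1024 * 1024, 5, Hash.MURMUR_HASH);
    String[] mediumTableKeys = {"k1", "k2", "k3"};   // stand-in for the real table scan
    for (String k : mediumTableKeys) {
      filter.add(new Key(k.getBytes()));
    }

    // Steps 2-3: in each mapper over the big table, probe the filter and drop rows
    // whose keys cannot possibly join. False positives still pass through and are
    // resolved by the real join in the reducer.
    String[] bigTableKeys = {"k2", "k9"};
    for (String k : bigTableKeys) {
      if (filter.membershipTest(new Key(k.getBytes()))) {
        System.out.println("shuffle row with key " + k + " to the reducer");
      }
    }
  }
}
{code}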


> use bloom filters to improve the performance of joins
> -
>
> Key: HIVE-1721
> URL: https://issues.apache.org/jira/browse/HIVE-1721
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Namit Jain
>  Labels: gsoc, gsoc2012, optimization
>
> In case of map-joins, it is likely that the big table will not find many 
> matching rows from the small table.
> Currently, we perform a hash-map lookup for every row in the big table, which 
> can be pretty expensive.
> It might be useful to try out a bloom-filter containing all the elements in 
> the small table.
> Each element from the big table is first searched in the bloom filter, and 
> only in case of a positive match,
> the small table hash table is explored.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2910) Improve the HWI interface

2012-04-07 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13249298#comment-13249298
 ] 

Ashutosh Chauhan commented on HIVE-2910:


Seems like there are a few binary resources (icon PNG images) which need to be 
checked in but aren't uploaded yet. Hugo, can you upload them here with a license 
grant to the ASF?

> Improve the HWI interface
> -
>
> Key: HIVE-2910
> URL: https://issues.apache.org/jira/browse/HIVE-2910
> Project: Hive
>  Issue Type: Improvement
>  Components: Web UI
>Reporter: Hugo Trippaers
>Assignee: Hugo Trippaers
>Priority: Minor
>  Labels: newbie, patch
> Attachments: hive-2910.3.patch.log, hive-2910.3.patch.txt, 
> hive-hwi-2.patch, hive-hwi.patch, screenie001.PNG, screenie002.PNG
>
>
> I've made some improvements to the HWI interface with the Twitter bootstrap 
> system. I'm looking for feedback on the new design.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2910) Improve the HWI interface

2012-04-07 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13249297#comment-13249297
 ] 

Ashutosh Chauhan commented on HIVE-2910:


Ran the tests with the patch; all passed. I don't know much about this part of the 
code, but since Ed (who is the expert in this area) has already +1'd, I will commit it.

> Improve the HWI interface
> -
>
> Key: HIVE-2910
> URL: https://issues.apache.org/jira/browse/HIVE-2910
> Project: Hive
>  Issue Type: Improvement
>  Components: Web UI
>Reporter: Hugo Trippaers
>Assignee: Hugo Trippaers
>Priority: Minor
>  Labels: newbie, patch
> Attachments: hive-2910.3.patch.log, hive-2910.3.patch.txt, 
> hive-hwi-2.patch, hive-hwi.patch, screenie001.PNG, screenie002.PNG
>
>
> I've made some improvements to the HWI interface with the Twitter bootstrap 
> system. I'm looking for feedback on the new design.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2932) TestHBaseCliDriver breaking in trunk

2012-04-06 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13248770#comment-13248770
 ] 

Ashutosh Chauhan commented on HIVE-2932:


Also, the Apache build machines are reporting success: 
https://builds.apache.org/job/Hive-trunk-h0.21/

> TestHBaseCliDriver breaking in trunk
> 
>
> Key: HIVE-2932
> URL: https://issues.apache.org/jira/browse/HIVE-2932
> Project: Hive
>  Issue Type: Bug
>Reporter: Namit Jain
>
> I am getting 3 failures in clean trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2932) TestHBaseCliDriver breaking in trunk

2012-04-06 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13248769#comment-13248769
 ] 

Ashutosh Chauhan commented on HIVE-2932:


I just ran the tests and all of them passed. I have run them on multiple 
machines over the past few days, and they always pass. Something peculiar to your 
environment?

> TestHBaseCliDriver breaking in trunk
> 
>
> Key: HIVE-2932
> URL: https://issues.apache.org/jira/browse/HIVE-2932
> Project: Hive
>  Issue Type: Bug
>Reporter: Namit Jain
>
> I am getting 3 failures in clean trunk.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2923) testAclPositive in TestZooKeeperTokenStore failing in clean checkout when run on Mac

2012-04-05 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13247860#comment-13247860
 ] 

Ashutosh Chauhan commented on HIVE-2923:


I was able to reproduce the failures before the patch. With the latest patch, the 
failures went away and the tests pass. The patch looks good to me. Those who were 
seeing failures earlier can try this patch and report back.

> testAclPositive in TestZooKeeperTokenStore failing in clean checkout when run 
> on Mac
> 
>
> Key: HIVE-2923
> URL: https://issues.apache.org/jira/browse/HIVE-2923
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.9.0
> Environment: Mac OSX Lion
>Reporter: Kevin Wilfong
>Assignee: Thomas Weise
>Priority: Blocker
> Fix For: 0.9.0
>
> Attachments: HIVE-2923.patch
>
>
> When running testAclPositive in TestZooKeeperTokenStore in a clean checkout, 
> it fails with the error:
> Failed to validate token path. 
> org.apache.hadoop.hive.thrift.DelegationTokenStore$TokenStoreException: 
> Failed to validate token path.
> at 
> org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.init(ZooKeeperTokenStore.java:207)
> at 
> org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.setConf(ZooKeeperTokenStore.java:225)
> at 
> org.apache.hadoop.hive.thrift.TestZooKeeperTokenStore.testAclPositive(TestZooKeeperTokenStore.java:170)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /zktokenstore-testAcl
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:778)
> at 
> org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.ensurePath(ZooKeeperTokenStore.java:119)
> at 
> org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.init(ZooKeeperTokenStore.java:204)
> ... 17 more
> This message is also printed to standard out:
> Unable to load realm mapping info from SCDynamicStore
> The test seems to run fine in Linux, but more than one developer has reported 
> this on a Mac.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2923) testAclPositive in TestZooKeeperTokenStore failing in clean checkout when run on Mac

2012-04-05 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13247862#comment-13247862
 ] 

Ashutosh Chauhan commented on HIVE-2923:


Thanks, Thomas for taking this up. Very much appreciated!

> testAclPositive in TestZooKeeperTokenStore failing in clean checkout when run 
> on Mac
> 
>
> Key: HIVE-2923
> URL: https://issues.apache.org/jira/browse/HIVE-2923
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.9.0
> Environment: Mac OSX Lion
>Reporter: Kevin Wilfong
>Assignee: Thomas Weise
>Priority: Blocker
> Fix For: 0.9.0
>
> Attachments: HIVE-2923.patch
>
>
> When running testAclPositive in TestZooKeeperTokenStore in a clean checkout, 
> it fails with the error:
> Failed to validate token path. 
> org.apache.hadoop.hive.thrift.DelegationTokenStore$TokenStoreException: 
> Failed to validate token path.
> at 
> org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.init(ZooKeeperTokenStore.java:207)
> at 
> org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.setConf(ZooKeeperTokenStore.java:225)
> at 
> org.apache.hadoop.hive.thrift.TestZooKeeperTokenStore.testAclPositive(TestZooKeeperTokenStore.java:170)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:168)
> at junit.framework.TestCase.runBare(TestCase.java:134)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:232)
> at junit.framework.TestSuite.run(TestSuite.java:227)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:518)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1052)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:906)
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /zktokenstore-testAcl
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:778)
> at 
> org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.ensurePath(ZooKeeperTokenStore.java:119)
> at 
> org.apache.hadoop.hive.thrift.ZooKeeperTokenStore.init(ZooKeeperTokenStore.java:204)
> ... 17 more
> This message is also printed to standard out:
> Unable to load realm mapping info from SCDynamicStore
> The test seems to run fine in Linux, but more than one developer has reported 
> this on a Mac.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2926) Expose some information about the metastore through JMX

2012-04-05 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13247743#comment-13247743
 ] 

Ashutosh Chauhan commented on HIVE-2926:


A couple of comments:

* This assumes sh, cat, sed, etc. are available in the environment, so it may 
not work where these tools are not present. That's fine, but I thought I'd point 
it out. I tested on Mac and Linux; it builds fine.

* I don't see how your generated package-info.java will get compiled. You 
create a .java file in the build dir. Have you tested that it works as you expect? 

> Expose some information about the metastore through JMX
> ---
>
> Key: HIVE-2926
> URL: https://issues.apache.org/jira/browse/HIVE-2926
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Fix For: 0.9.0
>
> Attachments: metastore-jmx.patch
>
>
> Expose some information about the metastore through JMX

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2764) Obtain delegation tokens for MR jobs in secure hbase setup

2012-04-05 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13247075#comment-13247075
 ] 

Ashutosh Chauhan commented on HIVE-2764:


Those tests passed, so that's progress. But the following new ones failed:

* rcfile_merge3.q
* rcfile_createas1.q


> Obtain delegation tokens for MR jobs in secure hbase setup  
> 
>
> Key: HIVE-2764
> URL: https://issues.apache.org/jira/browse/HIVE-2764
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler, Security
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: HIVE-2764.D2205.1.patch, HIVE-2764.D2205.2.patch, 
> HIVE-2764.D2205.3.patch, HIVE-2764.D2205.4.patch, HIVE-2764_v0.patch
>
>
> As discussed in HCATALOG-244, in a secure hbase setup with 0.92, we need to 
> obtain delegation tokens for hbase and save them in the jobconf, so that tasks can 
> access region servers. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2559) Add target to install Hive JARs/POMs in the local Maven cache

2012-04-04 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13246928#comment-13246928
 ] 

Ashutosh Chauhan commented on HIVE-2559:


+1 will commit if tests pass.

> Add target to install Hive JARs/POMs in the local Maven cache
> -
>
> Key: HIVE-2559
> URL: https://issues.apache.org/jira/browse/HIVE-2559
> Project: Hive
>  Issue Type: Improvement
>  Components: Build Infrastructure
>Affects Versions: 0.9.0
>Reporter: Alejandro Abdelnur
>Assignee: Alan Gates
>Priority: Critical
> Attachments: HIVE-2559.patch
>
>
> HIVE-2391 is producing usable Maven artifacts.
> However, it only as a target to deploy/publish those artifacts to Apache 
> Maven repos.
> There should be a new target to locally install Hive Maven artifacts, thus 
> enabling their use from other projects before they are committed/publish to 
> Apache Maven (this is critical to test patches that may address issues in 
> downstream components).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2764) Obtain delegation tokens for MR jobs in secure hbase setup

2012-04-04 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13246830#comment-13246830
 ] 

Ashutosh Chauhan commented on HIVE-2764:


In TestCliDriver, the following tests failed:

* alter_concatenate_indexed_table.q
* alter_merge.q
* alter_merge_2.q
* alter_merge_stats.q
* concatenate_inherit_table_location.q
* create_merge_compressed.q
* escape2.q

I re-ran them individually, and they failed again.


> Obtain delegation tokens for MR jobs in secure hbase setup  
> 
>
> Key: HIVE-2764
> URL: https://issues.apache.org/jira/browse/HIVE-2764
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler, Security
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: HIVE-2764.D2205.1.patch, HIVE-2764.D2205.2.patch, 
> HIVE-2764.D2205.3.patch, HIVE-2764_v0.patch
>
>
> As discussed in HCATALOG-244, in a secure hbase setup with 0.92, we need to 
> obtain delegation tokens for hbase and save it in jobconf, so that tasks can 
> access region servers. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2883) Metastore client doesnt close connection properly

2012-04-04 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13246692#comment-13246692
 ] 

Ashutosh Chauhan commented on HIVE-2883:


Nope, this location doesn't have it. It just has the abstract class 
FacebookBase.java. I am looking for its implementation, FacebookService.java. 
Do you know where that is?

> Metastore client doesnt close connection properly
> -
>
> Key: HIVE-2883
> URL: https://issues.apache.org/jira/browse/HIVE-2883
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.9.0
>Reporter: Ashutosh Chauhan
> Fix For: 0.9.0
>
>
> While closing connection, it always fail with following trace. Seemingly, it 
> doesnt have any harmful effects.
> {code}
> 12/03/20 10:55:02 ERROR hive.metastore: Unable to shutdown local metastore 
> client
> org.apache.thrift.transport.TTransportException: Cannot write to null 
> outputStream
>   at 
> org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:142)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:91)
>   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
>   at 
> com.facebook.fb303.FacebookService$Client.send_shutdown(FacebookService.java:421)
>   at 
> com.facebook.fb303.FacebookService$Client.shutdown(FacebookService.java:415)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:310)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2883) Metastore client doesnt close connection properly

2012-04-04 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13246673#comment-13246673
 ] 

Ashutosh Chauhan commented on HIVE-2883:


Thanks, Travis for pointing this out. I will take a look.

> Metastore client doesnt close connection properly
> -
>
> Key: HIVE-2883
> URL: https://issues.apache.org/jira/browse/HIVE-2883
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.9.0
>Reporter: Ashutosh Chauhan
> Fix For: 0.9.0
>
>
> While closing connection, it always fail with following trace. Seemingly, it 
> doesnt have any harmful effects.
> {code}
> 12/03/20 10:55:02 ERROR hive.metastore: Unable to shutdown local metastore 
> client
> org.apache.thrift.transport.TTransportException: Cannot write to null 
> outputStream
>   at 
> org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:142)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:91)
>   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
>   at 
> com.facebook.fb303.FacebookService$Client.send_shutdown(FacebookService.java:421)
>   at 
> com.facebook.fb303.FacebookService$Client.shutdown(FacebookService.java:415)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:310)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2883) Metastore client doesnt close connection properly

2012-04-04 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13246546#comment-13246546
 ] 

Ashutosh Chauhan commented on HIVE-2883:


My analysis is that the client already knows about the transport, so inside 
client.shutdown() it calls transport.close(), which closes the underlying 
output stream. So we *should* not call transport.close() ourselves in 
HiveMetaStoreClient.close()? But I can't be sure of this hypothesis, since the 
implementation of client.shutdown() lives in FacebookService.Client (which is 
in libfb303.jar), and I can't find the sources for that class anywhere.

So I request that the Facebook contributors to the Hive project either fix this 
bug or release the sources of libfb303.jar so that I can verify and fix this 
myself.
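
To make the hypothesis concrete, here is a minimal sketch of the suspected 
ordering and one possible guard. The structure and names are assumptions based 
on the stack trace in the description, not the actual HiveMetaStoreClient 
source or a confirmed fix.
{code}
import org.apache.thrift.TException;
import org.apache.thrift.transport.TTransport;

import com.facebook.fb303.FacebookService;

// Hypothetical sketch only: if client.shutdown() already closes the transport,
// an unconditional transport.close() afterwards writes to a null output
// stream, which matches the trace in the description above.
final class MetastoreCloseSketch {
  static void close(FacebookService.Client client, TTransport transport) {
    try {
      client.shutdown();                     // suspected to close the transport
    } catch (TException e) {
      System.err.println("Unable to shutdown local metastore client: " + e);
    }
    if (transport != null && transport.isOpen()) {
      transport.close();                     // only close if still open
    }
  }
}
{code}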

> Metastore client doesnt close connection properly
> -
>
> Key: HIVE-2883
> URL: https://issues.apache.org/jira/browse/HIVE-2883
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.9.0
>Reporter: Ashutosh Chauhan
> Fix For: 0.9.0
>
>
> While closing connection, it always fail with following trace. Seemingly, it 
> doesnt have any harmful effects.
> {code}
> 12/03/20 10:55:02 ERROR hive.metastore: Unable to shutdown local metastore 
> client
> org.apache.thrift.transport.TTransportException: Cannot write to null 
> outputStream
>   at 
> org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:142)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeI32(TBinaryProtocol.java:163)
>   at 
> org.apache.thrift.protocol.TBinaryProtocol.writeMessageBegin(TBinaryProtocol.java:91)
>   at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
>   at 
> com.facebook.fb303.FacebookService$Client.send_shutdown(FacebookService.java:421)
>   at 
> com.facebook.fb303.FacebookService$Client.shutdown(FacebookService.java:415)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.close(HiveMetaStoreClient.java:310)
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2869) Merging small files throws RuntimeException when hive.mergejob.maponly=false

2012-04-03 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13245801#comment-13245801
 ] 

Ashutosh Chauhan commented on HIVE-2869:


@Shrijeet, Take a look at 
https://cwiki.apache.org/confluence/display/Hive/HowToContribute on how to 
contribute. Essentially, you need to submit a patch. 

> Merging small files throws RuntimeException when hive.mergejob.maponly=false
> 
>
> Key: HIVE-2869
> URL: https://issues.apache.org/jira/browse/HIVE-2869
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.8.0
> Environment: CentOS release 5.5 (Final)
>Reporter: Shrijeet Paliwal
> Attachments: data_to_reproduce.tar.gz
>
>
> Hive Version: Hive 0.8 (last commit SHA  
> b581a6192b8d4c544092679d05f45b2e50d42b45 ) 
> Hadoop version : chd3u0
> Trying to use the hive merge small file feature by setting all the necessary 
> params.
> Have disabled use of CombineHiveInputFormat since my input is compressed 
> text. 
> {noformat}
> hive> set mapred.min.split.size.per.node=10;
> hive> set mapred.min.split.size.per.rack=10;
> hive> set mapred.max.split.size=10;
> hive> set hive.merge.size.per.task=10;
> hive> set hive.merge.smallfiles.avgsize=10;
> hive> set hive.merge.size.smallfiles.avgsize=10;
> hive> set hive.merge.mapfiles=true;
> hive> set hive.merge.mapredfiles=true;
> hive> set hive.mergejob.maponly=false;
> {noformat}
> The plan decides to launch two MR jobs but after first job succeeds I get 
> runt time error 
> "java.lang.RuntimeException: Plan invalid, Reason: Reducers == 0 but reduce 
> operator specified"
> *How to reproduce :* 
> * Creare tables as follows : 
> {code}
> --create input table
> create table tmp_notmerged (
>   idint,
>   name  string
> )
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
> STORED AS TEXTFILE;
> --create o/p table
> create table tmp_merged (
>   idint
> )
> STORED AS TEXTFILE;
> {code}
> * Load data into tmp_notmerged (find files attached in with this jira)
> * set knobs and fire hive query 
> {code}
> set hive.merge.mapfiles=true;
> set hive.mergejob.maponly=false;
> insert overwrite table tmp_merged select id from tmp_notmerged;
> {code}
> * You should see error "java.lang.RuntimeException: Plan invalid, Reason: 
> Reducers == 0 but reduce operator specified"
> *Proposed fix :*
> Patch is here : https://gist.github.com/2025303

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2711) Make the header of RCFile unique

2012-04-02 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13244981#comment-13244981
 ] 

Ashutosh Chauhan commented on HIVE-2711:


The patch results in failures in TestCliDriver in the following queries:

* alter_concatenate_indexed_table.q
* alter_merge.q
* alter_merge_stats.q
* create_merge_compressed.q
* ctas.q
* partition_wise_fileformat.q
* partition_wise_fileformat3.q
* sample10.q

> Make the header of RCFile unique
> 
>
> Key: HIVE-2711
> URL: https://issues.apache.org/jira/browse/HIVE-2711
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: HIVE-2711.D2115.1.patch, HIVE-2711.D2115.2.patch, 
> HIVE-2711.D2571.1.patch
>
>
> The RCFile implementation was copied from Hadoop's SequenceFile and copied 
> the 'magic' string in the header. This means that you can't use the header to 
> distinguish between RCFiles and SequenceFiles.
> I'd propose that we create a new header for RCFiles (RCF?) to replace the 
> current SEQ. To maintain compatibility, we'll need to continue to accept the 
> current 'SEQ\06' and just make new files contain the new header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2576) HiveDataSource doesn't get a proper connection

2012-04-02 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13244958#comment-13244958
 ] 

Ashutosh Chauhan commented on HIVE-2576:


@Nicolas,
Patch looks good. Instead of hardcoding the string, can you make 
{{URI_PREFIX}} in HiveConnection public and reference that?
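
A minimal sketch of what that could look like; the class shapes below are 
simplified assumptions for illustration, not the actual Hive JDBC sources:
{code}
// Simplified sketch only (not the actual Hive JDBC sources): expose the URL
// prefix as a public constant and reference it instead of a hardcoded string.
class HiveConnection {
  public static final String URI_PREFIX = "jdbc:hive://";
}

class HiveDataSource {
  String connectionUrl(String host, int port, String db) {
    // reference the shared constant rather than repeating "jdbc:hive://"
    return HiveConnection.URI_PREFIX + host + ":" + port + "/" + db;
  }
}
{code}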

> HiveDataSource doesn't get a proper connection
> --
>
> Key: HIVE-2576
> URL: https://issues.apache.org/jira/browse/HIVE-2576
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 0.8.0
>Reporter: Nicolas Lalevée
>Assignee: Nicolas Lalevée
> Attachments: HIVE-2576-r1201637.patch
>
>
> The HiveDataSource is creating a HiveConnection with as an URL "", but the 
> connection expects to start with "jdbc:hive://"

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2738) NPE in ExprNodeGenericFuncEvaluator

2012-04-02 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13244954#comment-13244954
 ] 

Ashutosh Chauhan commented on HIVE-2738:


+1 will commit if tests pass.

> NPE in ExprNodeGenericFuncEvaluator
> ---
>
> Key: HIVE-2738
> URL: https://issues.apache.org/jira/browse/HIVE-2738
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.8.0
>Reporter: Nicolas Lalevée
>Assignee: Nicolas Lalevée
> Attachments: 750c8966-6402-465a-b011-903469fe56da.xml, 
> HIVE-2738-r1237763.patch, MapMaxUDF.java, MapToJsonUDF.java, hive_job_logs.txt
>
>
> Here is the query:
> bq. {{SELECT t.lid, '2011-12-12', 
> s_map2json(s_maxmap(UNION_MAP(t.categoryCount), 100)) FROM ( SELECT theme_lid 
> AS theme_lid, MAP(s_host(referer), COUNT( * )) AS categoryCount FROM 
> PageViewEvent WHERE day >= '20130104' AND day <= '20130112' AND date_ >= 
> '2012-01-04' AND date_ < '2012-01-13' AND lid IS NOT NULL GROUP BY lid, 
> s_host(referer) ) t GROUP BY t.lid}}
> Removing the call s_map2json make it work but not by removing s_maxmap, but I 
> don't understand what could be wrong with the implementation of my udf. And I 
> don't know how to debug remote hadoop jobs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2860) TestNegativeCliDriver autolocal1.q fails on 0.23

2012-04-02 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13244919#comment-13244919
 ] 

Ashutosh Chauhan commented on HIVE-2860:


* external1.q
* external2.q
* fetchtask_ioexception.q
* fs_default_name1.q
* fs_default_name1.q



> TestNegativeCliDriver autolocal1.q fails on 0.23
> 
>
> Key: HIVE-2860
> URL: https://issues.apache.org/jira/browse/HIVE-2860
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Affects Versions: 0.9.0
>Reporter: Carl Steinbach
>Assignee: Carl Steinbach
> Fix For: 0.9.0
>
> Attachments: HIVE-2860.D2253.1.patch, HIVE-2860.D2253.1.patch, 
> HIVE-2860.D2565.1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2860) TestNegativeCliDriver autolocal1.q fails on 0.23

2012-04-02 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13244909#comment-13244909
 ] 

Ashutosh Chauhan commented on HIVE-2860:


Latest patch results in 5 failed tests in TestNegativeCliDriver

> TestNegativeCliDriver autolocal1.q fails on 0.23
> 
>
> Key: HIVE-2860
> URL: https://issues.apache.org/jira/browse/HIVE-2860
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Affects Versions: 0.9.0
>Reporter: Carl Steinbach
>Assignee: Carl Steinbach
> Fix For: 0.9.0
>
> Attachments: HIVE-2860.D2253.1.patch, HIVE-2860.D2253.1.patch, 
> HIVE-2860.D2565.1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2585) Collapse hive.metastore.uris and hive.metastore.local

2012-04-02 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13244895#comment-13244895
 ] 

Ashutosh Chauhan commented on HIVE-2585:


{code}
BUILD SUCCESSFUL
Total time: 339 minutes 51 seconds
{code}

All the tests passed with this patch.

> Collapse hive.metastore.uris and hive.metastore.local
> -
>
> Key: HIVE-2585
> URL: https://issues.apache.org/jira/browse/HIVE-2585
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-2585.D2559.1.patch
>
>
> We should just have hive.metastore.uris. If it is empty, we shall assume 
> local mode, if non-empty we shall use that string to connect to remote 
> metastore. Having two different keys for same information is confusing.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2711) Make the header of RCFile unique

2012-04-01 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243920#comment-13243920
 ] 

Ashutosh Chauhan commented on HIVE-2711:


Patch fails to apply. Needs to be rebased.

> Make the header of RCFile unique
> 
>
> Key: HIVE-2711
> URL: https://issues.apache.org/jira/browse/HIVE-2711
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: HIVE-2711.D2115.1.patch
>
>
> The RCFile implementation was copied from Hadoop's SequenceFile and copied 
> the 'magic' string in the header. This means that you can't use the header to 
> distinguish between RCFiles and SequenceFiles.
> I'd propose that we create a new header for RCFiles (RCF?) to replace the 
> current SEQ. To maintain compatibility, we'll need to continue to accept the 
> current 'SEQ\06' and just make new files contain the new header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2711) Make the header of RCFile unique

2012-04-01 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13243918#comment-13243918
 ] 

Ashutosh Chauhan commented on HIVE-2711:


I see. Yeah, the very first commit of RCFile 
http://svn.apache.org/viewvc/hadoop/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java?view=markup&pathrev=770548
 started with SEQ6, so there can be no data written in the RCFile format with 
version SEQ5 or earlier. Backward compatibility with SEQ6 therefore suffices. 
So, +1, will commit if tests pass.

> Make the header of RCFile unique
> 
>
> Key: HIVE-2711
> URL: https://issues.apache.org/jira/browse/HIVE-2711
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: HIVE-2711.D2115.1.patch
>
>
> The RCFile implementation was copied from Hadoop's SequenceFile and copied 
> the 'magic' string in the header. This means that you can't use the header to 
> distinguish between RCFiles and SequenceFiles.
> I'd propose that we create a new header for RCFiles (RCF?) to replace the 
> current SEQ. To maintain compatibility, we'll need to continue to accept the 
> current 'SEQ\06' and just make new files contain the new header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2711) Make the header of RCFile unique

2012-03-30 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13242930#comment-13242930
 ] 

Ashutosh Chauhan commented on HIVE-2711:


That makes sense. I will take a look.

> Make the header of RCFile unique
> 
>
> Key: HIVE-2711
> URL: https://issues.apache.org/jira/browse/HIVE-2711
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: HIVE-2711.D2115.1.patch
>
>
> The RCFile implementation was copied from Hadoop's SequenceFile and copied 
> the 'magic' string in the header. This means that you can't use the header to 
> distinguish between RCFiles and SequenceFiles.
> I'd propose that we create a new header for RCFiles (RCF?) to replace the 
> current SEQ. To maintain compatibility, we'll need to continue to accept the 
> current 'SEQ\06' and just make new files contain the new header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-538) make hive_jdbc.jar self-containing

2012-03-30 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13242752#comment-13242752
 ] 

Ashutosh Chauhan commented on HIVE-538:
---

This is a patch for generating artifacts for the JDBC driver, which makes it 
easier for folks using the JDBC driver to include it in their projects. Note 
two noticeable omissions from hive-jdbc-rt-deps.jar: datanucleus-core.jar and 
datanucleus-rdbms.jar. If those are packaged in the same jar, DataNucleus has 
trouble loading them, so I excluded them. As a result, they still need to be 
put on the application's classpath separately. 

> make hive_jdbc.jar self-containing
> --
>
> Key: HIVE-538
> URL: https://issues.apache.org/jira/browse/HIVE-538
> Project: Hive
>  Issue Type: Improvement
>  Components: JDBC
>Affects Versions: 0.3.0, 0.4.0, 0.6.0
>Reporter: Raghotham Murthy
> Attachments: HIVE-538.D2553.1.patch
>
>
> Currently, most jars in hive/build/dist/lib and the hadoop-*-core.jar are 
> required in the classpath to run jdbc applications on hive. We need to do 
> atleast the following to get rid of most unnecessary dependencies:
> 1. get rid of dynamic serde and use a standard serialization format, maybe 
> tab separated, json or avro
> 2. dont use hadoop configuration parameters
> 3. repackage thrift and fb303 classes into hive_jdbc.jar

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1362) column level statistics

2012-03-28 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13240644#comment-13240644
 ] 

Ashutosh Chauhan commented on HIVE-1362:


@Namit,
Bharath plans to work on this and has a gsoc proposal : 
http://www.google-melange.com/gsoc/proposal/review/google/gsoc2012/bharathv/18002
 

> column level statistics
> ---
>
> Key: HIVE-1362
> URL: https://issues.apache.org/jira/browse/HIVE-1362
> Project: Hive
>  Issue Type: Sub-task
>  Components: Statistics
>Reporter: Ning Zhang
>  Labels: gsoc, gsoc2012
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2891) TextConverter for UDF's is inefficient if the input object is already Text or Lazy

2012-03-27 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239778#comment-13239778
 ] 

Ashutosh Chauhan commented on HIVE-2891:


Cool. I also learnt that the hard way. Looks good. Running tests now.

> TextConverter for UDF's is inefficient if the input object is already Text or 
> Lazy
> --
>
> Key: HIVE-2891
> URL: https://issues.apache.org/jira/browse/HIVE-2891
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 0.7.0, 0.7.1, 0.8.1
>Reporter: Cliff Engle
>Assignee: Cliff Engle
>Priority: Minor
> Attachments: HIVE-2891.1.patch.txt, HIVE-2891.2.patch.txt
>
>
> The TextConverter in PrimitiveObjectInspectorConverter.java is very 
> inefficient if the input object is already Text or Lazy. Since it calls 
> getPrimitiveJavaObject, each Text is decoded into a String and then 
> re-encoded into Text. The solution is to check if preferWritable() is true, 
> then call getPrimitiveWritable(input).
> To test performance, I ran the Grep query from 
> https://issues.apache.org/jira/browse/HIVE-396 on a cluster of 3 ec2 large 
> nodes (2 slaves 1 master) on 6GB of data. It took 21 map tasks. With the 
> current 0.8.1 version, it took 81 seconds. After patching, it took 66 seconds.
> I will attach a patch and testcases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2822) Add JSON output to the hive ddl commands

2012-03-27 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239651#comment-13239651
 ] 

Ashutosh Chauhan commented on HIVE-2822:


Further, there are 4 failures in TestNegativeCliDriver 
* database_drop_does_not_exist.q
* database_create_already_exists.q
* column_rename4.q
* column_rename1.q

You can reproduce them via {{ ant test -Dtestcase=TestNegativeCliDriver 
-Dqfile=column_rename1.q }} and so on. Most of these looked like changes in 
.q.out files caused by small changes in the error messages. If you expect these 
changes, you can just overwrite the .q.out files with -Doverwrite=true.


> Add JSON output to the hive ddl commands
> 
>
> Key: HIVE-2822
> URL: https://issues.apache.org/jira/browse/HIVE-2822
> Project: Hive
>  Issue Type: Improvement
>Reporter: Chris Dean
>Assignee: Chris Dean
> Attachments: HIVE-2822.03-branch0-8.patch, HIVE-2822.03.patch, 
> HIVE-2822.03b.patch, HIVE-2822.04-branch-08.patch, 
> HIVE-2822.05-branch0-8-1.patch, HIVE-2822.05-branch0-8.patch, 
> HIVE-2822.05.patch, HIVE-2822.D2475.1.patch, hive-json-01-branch0-8.patch, 
> hive-json-01.patch, hive-json-02-branch0-8.patch, hive-json-02.patch
>
>
> The goal is to have an option to produce JSON output of the DDL commands that 
> is easily machine parseable.
> For example, "desc my_table" currently gives
> {noformat}
> idbigint
> user  string
> {noformat} 
> and we want to allow a json output:
> {noformat}
> {
>   "columns": [
> {"name": "id", "type": "bigint"},
> {"name": "user", "type": "string"}
>   ]
> }
> {noformat} 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2903) Numeric binary type keys are not compared properly

2012-03-26 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239075#comment-13239075
 ] 

Ashutosh Chauhan commented on HIVE-2903:


Yeah, I am having a hard time believing that HBase lets you do this. I am not 
sure whether the bug is present in some form in HBase, but your experiment does 
suggest it is there. If it is, then it certainly makes sense to patch HBase 
instead of special-casing it ourselves. 

> Numeric binary type keys are not compared properly
> --
>
> Key: HIVE-2903
> URL: https://issues.apache.org/jira/browse/HIVE-2903
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Navis
>Assignee: Navis
> Attachments: HIVE-2903.D2481.1.patch
>
>
> In current binary format for numbers, minus values are always greater than 
> plus values, for example.
> {code}
> System.our.println(Bytes.compareTo(Bytes.toBytes(-100), Bytes.toBytes(100))); 
> // 255
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-538) make hive_jdbc.jar self-containing

2012-03-26 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239048#comment-13239048
 ] 

Ashutosh Chauhan commented on HIVE-538:
---

@Bill,
Your approach sounds reasonable to me. Would you like to work on this? You can 
reference my patch at HIVE-2900 for how to do the repackaging easily. 

> make hive_jdbc.jar self-containing
> --
>
> Key: HIVE-538
> URL: https://issues.apache.org/jira/browse/HIVE-538
> Project: Hive
>  Issue Type: Improvement
>  Components: JDBC
>Affects Versions: 0.3.0, 0.4.0, 0.6.0
>Reporter: Raghotham Murthy
>
> Currently, most jars in hive/build/dist/lib and the hadoop-*-core.jar are 
> required in the classpath to run jdbc applications on hive. We need to do 
> atleast the following to get rid of most unnecessary dependencies:
> 1. get rid of dynamic serde and use a standard serialization format, maybe 
> tab separated, json or avro
> 2. dont use hadoop configuration parameters
> 3. repackage thrift and fb303 classes into hive_jdbc.jar

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2903) Numeric binary type keys are not compared properly

2012-03-26 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13239041#comment-13239041
 ] 

Ashutosh Chauhan commented on HIVE-2903:


@Navis,
The way you have fixed it, it will work only if data is written from Hive into 
HBase and queries are then run from the Hive client against HBase. What if the 
data was written to HBase through the HBase client and then queried from the 
Hive client? The bug would still be there, wouldn't it?
This also makes me wonder whether this problem is limited to Hive at all, or 
applies to HBase in general. If you write data through the HBase client and 
then do range scans, you will hit the same bug. There must be some solution in 
the HBase space for this.
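
For illustration, here is a minimal standalone sketch of the usual 
order-preserving trick (flipping the sign bit before serializing); this is an 
assumption about one common approach, not the patch attached to this issue:
{code}
import org.apache.hadoop.hbase.util.Bytes;

// Standalone illustration of the ordering problem and a common workaround:
// flip the sign bit so signed ints sort correctly under HBase's unsigned
// lexicographic byte comparison.
public class OrderPreservingIntDemo {
  static byte[] toSortableBytes(int value) {
    return Bytes.toBytes(value ^ Integer.MIN_VALUE); // flip the sign bit
  }

  public static void main(String[] args) {
    // Plain encoding: -100 compares greater than 100 (prints a positive value).
    System.out.println(Bytes.compareTo(Bytes.toBytes(-100), Bytes.toBytes(100)));
    // Sign-flipped encoding: byte order now matches numeric order (negative).
    System.out.println(Bytes.compareTo(toSortableBytes(-100), toSortableBytes(100)));
  }
}
{code}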

> Numeric binary type keys are not compared properly
> --
>
> Key: HIVE-2903
> URL: https://issues.apache.org/jira/browse/HIVE-2903
> Project: Hive
>  Issue Type: Bug
>  Components: HBase Handler
>Reporter: Navis
>Assignee: Navis
> Attachments: HIVE-2903.D2481.1.patch
>
>
> In current binary format for numbers, minus values are always greater than 
> plus values, for example.
> {code}
> System.our.println(Bytes.compareTo(Bytes.toBytes(-100), Bytes.toBytes(100))); 
> // 255
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2518) pull junit jar from maven repos via ivy

2012-03-24 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237752#comment-13237752
 ] 

Ashutosh Chauhan commented on HIVE-2518:


No worries : ) Thanks for the update. 

> pull junit jar from maven repos via ivy
> ---
>
> Key: HIVE-2518
> URL: https://issues.apache.org/jira/browse/HIVE-2518
> Project: Hive
>  Issue Type: Improvement
>Reporter: He Yongqiang
>Assignee: Kevin Wilfong
>
> see https://issues.apache.org/jira/browse/HIVE-2505

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2711) Make the header of RCFile unique

2012-03-23 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237371#comment-13237371
 ] 

Ashutosh Chauhan commented on HIVE-2711:


@Owen,
I think the original design of RCFile was done with compatibility with 
SequenceFile in mind. This patch will break that. What's the advantage of this 
change?

> Make the header of RCFile unique
> 
>
> Key: HIVE-2711
> URL: https://issues.apache.org/jira/browse/HIVE-2711
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: HIVE-2711.D2115.1.patch
>
>
> The RCFile implementation was copied from Hadoop's SequenceFile and copied 
> the 'magic' string in the header. This means that you can't use the header to 
> distinguish between RCFiles and SequenceFiles.
> I'd propose that we create a new header for RCFiles (RCF?) to replace the 
> current SEQ. To maintain compatibility, we'll need to continue to accept the 
> current 'SEQ\06' and just make new files contain the new header.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2897) Remove filter operator of which all the predicates have pushed down to storage handler

2012-03-23 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236821#comment-13236821
 ] 

Ashutosh Chauhan commented on HIVE-2897:


Thanks, Navis, for tracking this down. We should remove the filter altogether 
once it has been pushed. I am guessing this may result in query plan changes in 
other *.q.out files in the HBase tests, though the test output itself must not 
change. Can you check that, and update the patch if that's the case?

> Remove filter operator of which all the predicates have pushed down to 
> storage handler
> --
>
> Key: HIVE-2897
> URL: https://issues.apache.org/jira/browse/HIVE-2897
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Attachments: HIVE-2897.D2427.1.patch
>
>
> If all the predicates have pushed down to StorageHandler in TS, it should be 
> removed (or make empty). Might it be intentional? 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2518) pull junit jar from maven repos via ivy

2012-03-22 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236197#comment-13236197
 ] 

Ashutosh Chauhan commented on HIVE-2518:


@Kevin,
Just checking if there is any progress on this one?

> pull junit jar from maven repos via ivy
> ---
>
> Key: HIVE-2518
> URL: https://issues.apache.org/jira/browse/HIVE-2518
> Project: Hive
>  Issue Type: Improvement
>Reporter: He Yongqiang
>Assignee: Kevin Wilfong
>
> see https://issues.apache.org/jira/browse/HIVE-2505

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1643) support range scans and non-key columns in HBase filter pushdown

2012-03-22 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236093#comment-13236093
 ] 

Ashutosh Chauhan commented on HIVE-1643:


Just a quick update:
Work on filter pushdown for primary-key range scans has been completed via 
HIVE-1634, HIVE-2771, HIVE-2815, and HIVE-2819.

Note that filters are still *not* pushed for non-primary keys. I don't plan to 
work on that in the immediate future, but I would be happy to help out if 
anyone else is planning to. 

> support range scans and non-key columns in HBase filter pushdown
> 
>
> Key: HIVE-1643
> URL: https://issues.apache.org/jira/browse/HIVE-1643
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 0.7.0
>Reporter: John Sichi
>Assignee: Vaibhav Aggarwal
> Attachments: hbase_handler.patch
>
>
> HIVE-1226 added support for WHERE rowkey=3.  We would like to support WHERE 
> rowkey BETWEEN 10 and 20, as well as predicates on non-rowkeys (plus 
> conjunctions etc).  Non-rowkey conditions can't be used to filter out entire 
> ranges, but they can be used to push the per-row filter processing as far 
> down as possible.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2797) Make the IP address of a Thrift client available to HMSHandler.

2012-03-22 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13235864#comment-13235864
 ] 

Ashutosh Chauhan commented on HIVE-2797:


The latest patch fails to apply, with the following rejection in 
shims/src/common/java/org/apache/hadoop/hive/thrift/TUGIContainingTransport.java.rej:
{code}
***
*** 71,74 
return transMap.get(trans);
  }
}
- }
--- 85,88 
return transMap.get(trans);
  }
}
+ }
{code}

> Make the IP address of a Thrift client available to HMSHandler.
> ---
>
> Key: HIVE-2797
> URL: https://issues.apache.org/jira/browse/HIVE-2797
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-2797.D1701.1.patch, HIVE-2797.D1701.2.patch, 
> HIVE-2797.D1701.3.patch, HIVE-2797.D1701.4.patch, HIVE-2797.D1701.5.patch, 
> HIVE-2797.D1701.6.patch
>
>
> Currently, in unsecured mode, metastore Thrift calls are, from the 
> HMSHandler's point of view, anonymous.  If we expose the IP address of the 
> Thrift client to the HMSHandler from the Processor, this will help to give 
> some context, in particular for audit logging, of where the call is coming 
> from.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2895) Hive cli silently fails to create index

2012-03-22 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13235748#comment-13235748
 ] 

Ashutosh Chauhan commented on HIVE-2895:


Dupe of HIVE-1496 ?

> Hive cli silently fails to create index
> ---
>
> Key: HIVE-2895
> URL: https://issues.apache.org/jira/browse/HIVE-2895
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Indexing
>Affects Versions: 0.7.0
>Reporter: Mark Schnegelberger
>  Labels: newbie
>
> Hive CLI reports 'OK' to index creation, but no index is created as confirmed 
> by later 'NoSuchObjectException' when later issuing index rebuild:
> [user@host dir]$ hive
> Hive history file=/tmp/user/hive_job_log_user_20120307_111.txt 
> hive> CREATE INDEX X ON TABLE Foo(startTime) as 'COMPACT' WITH DEFERRED 
> REBUILD; 
> OK 
> Time taken: 2.353 seconds 
> hive> SHOW INDEXES ON Foo; 
> OK 
> Time taken: 0.182 seconds 
> hive> ALTER INDEX X ON Foo REBUILD; 
> FAILED: Error in semantic analysis: 
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> NoSuchObjectException(message:default.Foo index=X not found) 
> hive>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2577) Expose the HiveConf in HiveConnection API

2012-03-22 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13235654#comment-13235654
 ] 

Ashutosh Chauhan commented on HIVE-2577:


+1 will commit if tests pass.

> Expose the HiveConf in HiveConnection API
> -
>
> Key: HIVE-2577
> URL: https://issues.apache.org/jira/browse/HIVE-2577
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Nicolas Lalevée
>Assignee: Nicolas Lalevée
> Attachments: HIVE-2577-r1201637.patch
>
>
> When running the jdbc code in a local mode, there no way to programatically 
> manage the hive conf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2809) StorageHandler authorization providers

2012-03-21 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13235003#comment-13235003
 ] 

Ashutosh Chauhan commented on HIVE-2809:


@Enis,
The patch is huge, and it has three different issues smashed together. If 
broken apart into the following three issues, it will be easier to review and 
track:
a) Adding DB as read and write entity for auth checks.
b) Improving semantics for adding checks for add partition and others.
c) Adding HDFSAuthProvider as an alternative. 

> StorageHandler authorization providers
> --
>
> Key: HIVE-2809
> URL: https://issues.apache.org/jira/browse/HIVE-2809
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 0.9.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: HIVE-2809.D1953.1.patch, HIVE-2809.D1953.2.patch, 
> HIVE-2809.D1953.3.patch, HIVE-2809.D1953.4.patch, HIVE-2809.D1953.5.patch
>
>
> In this issue, we would like to discuss the possibility of supplementing the 
> Hive authorization model with authorization at the storage level. As 
> discussed in HIVE-1943, Hive should also check for operation permissions in 
> hdfs and hbase, since otherwise, data and metadata can be in an inconsistent 
> state or be orphaned. Going a step further, some of the setups might not need 
> the full featured auth model by Hive, but want to rely on managing the 
> permissions at the data layer. In this model, the metadata operations are 
> checked first from hdfs/hbase and it is allowed only if they are allowed at 
> the data layer. The semantics are documented at 
> https://cwiki.apache.org/confluence/display/HCATALOG/Hcat+Security+Design. 
> So, the goals of this issue are: 
>  - Port storage handler specific authorization providers, and the 
> StorageDelegationAuthorizationProvider from HCATALOG-245 and HCATALOG-260 to 
> Hive. 
>  - Keep current Hive's default authorization provider, and enable user to use 
> this and/or the storage one. auth providers are already configurable.
>  - Move the manual checks that had to be performed about authorization in 
> Hcat to Hive, specifically:
>   -- CREATE DATABASE/TABLE, ADD PARTITION statements does not call 
>HiveAuthorizationProvider.authorize() with the candidate objects, which 
> means that
>we cannot do checks against defined LOCATION.
>   -- HiveOperation does not define sufficient Privileges for most of the 
> operations, 
> especially database operations. 
>   -- For some of the operations, Hive SemanticAnalyzer does not add the 
> changed 
> object as a WriteEntity or ReadEntity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2819) Closed range scans on hbase keys

2012-03-20 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13234158#comment-13234158
 ] 

Ashutosh Chauhan commented on HIVE-2819:


All the tests passed.
{code}
BUILD SUCCESSFUL
Total time: 328 minutes 17 seconds
{code}

> Closed range scans on hbase keys 
> -
>
> Key: HIVE-2819
> URL: https://issues.apache.org/jira/browse/HIVE-2819
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-2819.D1923.1.patch, HIVE-2819.D1923.2.patch, 
> HIVE-2819.D1923.3.patch
>
>
> This patch pushes range scans on keys of closed form into hbase 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2865) hive-config.sh should honor HIVE_HOME env

2012-03-16 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13231556#comment-13231556
 ] 

Ashutosh Chauhan commented on HIVE-2865:


+1 looks good. will commit if tests pass.

> hive-config.sh should honor HIVE_HOME env 
> --
>
> Key: HIVE-2865
> URL: https://issues.apache.org/jira/browse/HIVE-2865
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 0.8.0
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
> Attachments: HIVE-2865.patch
>
>
> hive-config.sh should honor HIVE_HOME env variable if set.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2702) listPartitionsByFilter only supports non-string partitions

2012-03-16 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13230971#comment-13230971
 ] 

Ashutosh Chauhan commented on HIVE-2702:


@Aniket,
This is by design. Partition values are stored as strings in the backend DB 
(MySQL), so pushing filters into the DB where the partition column is of a 
numeric type won't work, since the comparison will then happen 
lexicographically. You should be able to catch this with rigorous tests: e.g., 
with your patch on, create a table with a partition key of int type, add 
partitions 1-11, and then filter with p < 2; you will get partitions 1, 10, and 
11 instead of just 1. You can still push equality predicates, though.
Enabling this feature requires MySQL table schema updates that retain type 
information for partition keys.
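
To illustrate the lexicographic comparison described above, here is a small 
standalone Java snippet (not Hive code) reproducing the p < 2 example with 
string-typed partition values:
{code}
import java.util.ArrayList;
import java.util.List;

// Standalone illustration (not Hive code) of why a numeric filter pushed down
// as a string comparison misbehaves: with partitions 1..11 stored as strings,
// the filter p < 2 matches 1, 10 and 11, since the comparison is lexicographic.
public class LexicographicFilterDemo {
  public static void main(String[] args) {
    List<String> matched = new ArrayList<String>();
    for (int p = 1; p <= 11; p++) {
      String value = String.valueOf(p);   // partition values stored as strings
      if (value.compareTo("2") < 0) {     // the pushed-down "p < 2" filter
        matched.add(value);
      }
    }
    System.out.println(matched);          // prints [1, 10, 11]
  }
}
{code}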

> listPartitionsByFilter only supports non-string partitions
> --
>
> Key: HIVE-2702
> URL: https://issues.apache.org/jira/browse/HIVE-2702
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.8.1
>Reporter: Aniket Mokashi
>Assignee: Aniket Mokashi
> Attachments: HIVE-2702.1.patch, HIVE-2702.D2043.1.patch
>
>
> listPartitionsByFilter supports only non-string partitions. This is because 
> its explicitly specified in generateJDOFilterOverPartitions in 
> ExpressionTree.java. 
> //Can only support partitions whose types are string
>   if( ! table.getPartitionKeys().get(partitionColumnIndex).
>   
> getType().equals(org.apache.hadoop.hive.serde.Constants.STRING_TYPE_NAME) ) {
> throw new MetaException
> ("Filtering is supported only on partition keys of type string");
>   }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2503) HiveServer should provide per session configuration

2012-03-16 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13230949#comment-13230949
 ] 

Ashutosh Chauhan commented on HIVE-2503:


+1 Will commit if tests pass. Navis, can you also post the patch on jira 
granting license.

> HiveServer should provide per session configuration
> ---
>
> Key: HIVE-2503
> URL: https://issues.apache.org/jira/browse/HIVE-2503
> Project: Hive
>  Issue Type: Bug
>  Components: CLI, Server Infrastructure
>Affects Versions: 0.9.0
>Reporter: Navis
>Assignee: Navis
> Fix For: 0.9.0
>
> Attachments: HIVE-2503.1.patch.txt
>
>
> Currently ThriftHiveProcessorFactory returns same HiveConf instance to 
> HiveServerHandler, making impossible to use per sesssion configuration. Just 
> wrapping 'conf' -> 'new HiveConf(conf)' seemed to solve this problem.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2748) Upgrade Hbase and ZK dependcies

2012-03-14 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229919#comment-13229919
 ] 

Ashutosh Chauhan commented on HIVE-2748:


Patch looks good. Running tests.

> Upgrade Hbase and ZK dependcies
> ---
>
> Key: HIVE-2748
> URL: https://issues.apache.org/jira/browse/HIVE-2748
> Project: Hive
>  Issue Type: Task
>Affects Versions: 0.7.0, 0.7.1, 0.8.0, 0.8.1, 0.9.0
>Reporter: Ashutosh Chauhan
>Assignee: Enis Soztutar
> Attachments: HIVE-2748.3.patch, HIVE-2748.D1431.1.patch, 
> HIVE-2748.D1431.2.patch, HIVE-2748_v4.patch, HIVE-2748_v5.patch, 
> HIVE-2748_v6.patch, HIVE-2748_v7.patch, HIVE-2748_v8.patch
>
>
> Both softwares have moved forward with significant improvements. Lets bump 
> compile time dependency to keep up

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2748) Upgrade Hbase and ZK dependcies

2012-03-13 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229007#comment-13229007
 ] 

Ashutosh Chauhan commented on HIVE-2748:


TestHBaseSerDe fails with the latest patch. Perhaps the Jackson libs need to be 
bumped to 1.7.1 as well.

> Upgrade Hbase and ZK dependcies
> ---
>
> Key: HIVE-2748
> URL: https://issues.apache.org/jira/browse/HIVE-2748
> Project: Hive
>  Issue Type: Task
>Affects Versions: 0.7.0, 0.7.1, 0.8.0, 0.8.1, 0.9.0
>Reporter: Ashutosh Chauhan
>Assignee: Enis Soztutar
> Attachments: HIVE-2748.3.patch, HIVE-2748.D1431.1.patch, 
> HIVE-2748.D1431.2.patch, HIVE-2748_v4.patch, HIVE-2748_v5.patch, 
> HIVE-2748_v6.patch, HIVE-2748_v7.patch
>
>
> Both projects have moved forward with significant improvements. Let's bump the 
> compile-time dependencies to keep up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2748) Upgrade Hbase and ZK dependcies

2012-03-13 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13228903#comment-13228903
 ] 

Ashutosh Chauhan commented on HIVE-2748:


Unable to compile tests. Probably ivysettings.xml is missing the m2:classifier bit.

> Upgrade Hbase and ZK dependcies
> ---
>
> Key: HIVE-2748
> URL: https://issues.apache.org/jira/browse/HIVE-2748
> Project: Hive
>  Issue Type: Task
>Affects Versions: 0.7.0, 0.7.1, 0.8.0, 0.8.1, 0.9.0
>Reporter: Ashutosh Chauhan
>Assignee: Enis Soztutar
> Attachments: HIVE-2748.3.patch, HIVE-2748.D1431.1.patch, 
> HIVE-2748.D1431.2.patch, HIVE-2748_v4.patch, HIVE-2748_v5.patch, 
> HIVE-2748_v6.patch
>
>
> Both projects have moved forward with significant improvements. Let's bump the 
> compile-time dependencies to keep up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2748) Upgrade Hbase and ZK dependcies

2012-03-12 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13228099#comment-13228099
 ] 

Ashutosh Chauhan commented on HIVE-2748:


Doesn't apply cleanly; the patch needs a rebase.

> Upgrade Hbase and ZK dependcies
> ---
>
> Key: HIVE-2748
> URL: https://issues.apache.org/jira/browse/HIVE-2748
> Project: Hive
>  Issue Type: Task
>Affects Versions: 0.7.0, 0.7.1, 0.8.0, 0.8.1, 0.9.0
>Reporter: Ashutosh Chauhan
>Assignee: Enis Soztutar
> Attachments: HIVE-2748.3.patch, HIVE-2748.D1431.1.patch, 
> HIVE-2748.D1431.2.patch, HIVE-2748_v4.patch, HIVE-2748_v5.patch
>
>
> Both projects have moved forward with significant improvements. Let's bump the 
> compile-time dependencies to keep up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2748) Upgrade Hbase and ZK dependcies

2012-03-10 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13226808#comment-13226808
 ] 

Ashutosh Chauhan commented on HIVE-2748:


A few comments:

* +hbase-test.version=0.92.0
Remove this. This variable is not used anywhere later; the test version to 
download is the same as hbase.version.

* Instead of
hbaseConf.set("hbase.master", hbaseCluster.getMaster().toString());
do:
hbaseConf.set("hbase.master", hbaseCluster.getMaster().getServerName().getHostAndPort());

* There might be one more runtime dependency needed to run the HBase tests, which 
you should be able to catch while running them.



> Upgrade Hbase and ZK dependcies
> ---
>
> Key: HIVE-2748
> URL: https://issues.apache.org/jira/browse/HIVE-2748
> Project: Hive
>  Issue Type: Task
>Affects Versions: 0.7.0, 0.7.1, 0.8.0, 0.8.1, 0.9.0
>Reporter: Ashutosh Chauhan
>Assignee: Enis Soztutar
> Attachments: HIVE-2748.3.patch, HIVE-2748.D1431.1.patch, 
> HIVE-2748.D1431.2.patch, HIVE-2748_v4.patch
>
>
> Both projects have moved forward with significant improvements. Let's bump the 
> compile-time dependencies to keep up.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1634) Allow access to Primitive types stored in binary format in HBase

2012-03-08 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13225744#comment-13225744
 ] 

Ashutosh Chauhan commented on HIVE-1634:


Committed to trunk. Thanks, Carl, for the review. 

Credit for this goes to Basab Maulik, who did the initial patch. I just rebased 
it and took care of the reviewers' comments. Thanks, Basab!

> Allow access to Primitive types stored in binary format in HBase
> 
>
> Key: HIVE-1634
> URL: https://issues.apache.org/jira/browse/HIVE-1634
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 0.7.0, 0.8.0, 0.9.0
>Reporter: Basab Maulik
>Assignee: Ashutosh Chauhan
> Fix For: 0.9.0
>
> Attachments: HIVE-1634.0.patch, HIVE-1634.1.patch, 
> HIVE-1634.D1581.1.patch, HIVE-1634.D1581.2.patch, HIVE-1634.D1581.3.patch, 
> TestHiveHBaseExternalTable.java, hive-1634_3.patch
>
>
> This addresses HIVE-1245 in part, for atomic or primitive types.
> The serde property "hbase.columns.storage.types" = "-,b,b,b,b,b,b,b,b" is a 
> specification of the storage option for the corresponding column in the serde 
> property "hbase.columns.mapping". Allowed values are '-' for table default, 
> 's' for standard string storage, and 'b' for binary storage as would be 
> obtained from o.a.h.hbase.utils.Bytes. Map types for HBase column families 
> use a colon separated pair such as 's:b' for the key and value part 
> specifiers respectively. See the test cases and queries for HBase handler for 
> additional examples.
> There is also a table property "hbase.table.default.storage.type" = "string" 
> to specify a table level default storage type. The other valid specification 
> is "binary". The table level default is overridden by a column level 
> specification.
> This control is available for the boolean, tinyint, smallint, int, bigint, 
> float, and double primitive types. The attached patch also relaxes the 
> mapping of map types to HBase column families to allow any primitive type to 
> be the map key.
> Attached is a program for creating a table and populating it in HBase. The 
> external table in Hive can access the data as shown in the example below.
> hive> create external table TestHiveHBaseExternalTable
> > (key string, c_bool boolean, c_byte tinyint, c_short smallint,
> >  c_int int, c_long bigint, c_string string, c_float float, c_double 
> double)
> >  stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> >  with serdeproperties ("hbase.columns.mapping" = 
> ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double")
> >  tblproperties ("hbase.table.name" = "TestHiveHBaseExternalTable");
> OK
> Time taken: 0.691 seconds
> hive> select * from TestHiveHBaseExternalTable;
> OK
> key-1   NULL  NULL  NULL  NULL  NULL  Test-String  NULL  NULL
> Time taken: 0.346 seconds
> hive> drop table TestHiveHBaseExternalTable;
> OK
> Time taken: 0.139 seconds
> hive> create external table TestHiveHBaseExternalTable
> > (key string, c_bool boolean, c_byte tinyint, c_short smallint,
> >  c_int int, c_long bigint, c_string string, c_float float, c_double 
> double)
> >  stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> >  with serdeproperties (
> >  "hbase.columns.mapping" = 
> ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double",
> >  "hbase.columns.storage.types" = "-,b,b,b,b,b,b,b,b" )
> >  tblproperties (
> >  "hbase.table.name" = "TestHiveHBaseExternalTable",
> >  "hbase.table.default.storage.type" = "string");
> OK
> Time taken: 0.139 seconds
> hive> select * from TestHiveHBaseExternalTable;
> OK
> key-1   true  -128  -32768  -2147483648  -9223372036854775808  
> Test-String  -2.1793132E-11  2.01345E291
> Time taken: 0.151 seconds
> hive> drop table TestHiveHBaseExternalTable;
> OK
> Time taken: 0.154 seconds
> hive> create external table TestHiveHBaseExternalTable
> > (key string, c_bool boolean, c_byte tinyint, c_short smallint,
> >  c_int int, c_long bigint, c_string string, c_float float, c_double 
> double)
> >  stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> >  with serdeproperties (
> >  "hbase.columns.mapping" = 
> ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double",
> >  "hbase.columns.storage.types" = "-,b,b,b,b,b,-,b,b" )
> >  tblproperties ("hbase.table.name" = "TestHiveHBaseExternalTable");
> OK
> Time taken: 0.347 seconds
> hive> select * from TestHiveHBaseExternalTable;
> OK
> key-1   true  -128  -32768  -2147483648  -9223372036854775808  
> Test-String  -2.1793132E-11  2.01345E291
> Time taken: 0.245 seconds
> hive> 
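
A minimal sketch of how such a row with binary-encoded primitives could be 
written to HBase (this is not the attached TestHiveHBaseExternalTable.java; a 
0.92-era HBase client API is assumed, and the table must already exist):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class PopulateBinaryRowSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "TestHiveHBaseExternalTable");
    byte[] cf = Bytes.toBytes("cf");
    Put put = new Put(Bytes.toBytes("key-1"));
    // Binary storage ('b') uses the Bytes encodings; string storage ('s') would
    // store String.valueOf(value) instead.
    put.add(cf, Bytes.toBytes("boolean"), Bytes.toBytes(true));
    put.add(cf, Bytes.toBytes("byte"),    new byte[] { (byte) -128 });
    put.add(cf, Bytes.toBytes("short"),   Bytes.toBytes((short) -32768));
    put.add(cf, Bytes.toBytes("int"),     Bytes.toBytes(Integer.MIN_VALUE));
    put.add(cf, Bytes.toBytes("long"),    Bytes.toBytes(Long.MIN_VALUE));
    put.add(cf, Bytes.toBytes("string"),  Bytes.toBytes("Test-String"));
    put.add(cf, Bytes.toBytes("float"),   Bytes.toBytes(-2.1793132E-11f));
    put.add(cf, Bytes.toBytes("double"),  Bytes.toBytes(2.01345E291d));
    table.put(put);
    table.close();
  }
}
{code}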

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HIVE-2853) Add pre event listeners to metastore

2012-03-08 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13225533#comment-13225533
 ] 

Ashutosh Chauhan commented on HIVE-2853:


@Namit,
Would it be OK for you to hold off on the commit until others get a chance to 
review this? It looks like a user-facing feature is being introduced here, and I 
want to understand it a bit.

> Add pre event listeners to metastore
> 
>
> Key: HIVE-2853
> URL: https://issues.apache.org/jira/browse/HIVE-2853
> Project: Hive
>  Issue Type: Improvement
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Attachments: HIVE-2853.D2175.1.patch
>
>
> Currently there are event listeners in the metastore which run after the 
> completion of a method.  It would be useful to have similar hooks which run 
> before the metastore method is executed.  These can be used to make 
> validating names, locations, etc. customizable.
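
A rough illustration of what such a pre-hook could enable (all names here are 
illustrative placeholders, not the classes defined in the attached patch): a 
listener that runs before the metastore method and can veto it by throwing.

{code}
// Hypothetical interfaces, for illustration only.
interface PreEvent {
  String getOperation();   // e.g. "CREATE_TABLE"
  String getObjectName();  // e.g. "db.tbl"
}

interface PreEventListener {
  // Throwing here rejects the operation before any metadata is written.
  void onEvent(PreEvent event);
}

// Example: enforce a lower-case naming convention before CREATE TABLE completes.
class LowerCaseNameValidator implements PreEventListener {
  @Override
  public void onEvent(PreEvent event) {
    if ("CREATE_TABLE".equals(event.getOperation())
        && !event.getObjectName().equals(event.getObjectName().toLowerCase())) {
      throw new IllegalArgumentException("Table names must be lower case: "
          + event.getObjectName());
    }
  }
}
{code}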

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1634) Allow access to Primitive types stored in binary format in HBase

2012-03-07 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13225064#comment-13225064
 ] 

Ashutosh Chauhan commented on HIVE-1634:


All the tests passed with the latest patch on the latest trunk. 
{code}
BUILD SUCCESSFUL
Total time: 323 minutes 29 seconds
{code}

> Allow access to Primitive types stored in binary format in HBase
> 
>
> Key: HIVE-1634
> URL: https://issues.apache.org/jira/browse/HIVE-1634
> Project: Hive
>  Issue Type: Improvement
>  Components: HBase Handler
>Affects Versions: 0.7.0, 0.8.0, 0.9.0
>Reporter: Basab Maulik
>Assignee: Ashutosh Chauhan
> Fix For: 0.9.0
>
> Attachments: HIVE-1634.0.patch, HIVE-1634.1.patch, 
> HIVE-1634.D1581.1.patch, HIVE-1634.D1581.2.patch, HIVE-1634.D1581.3.patch, 
> TestHiveHBaseExternalTable.java, hive-1634_3.patch
>
>
> This addresses HIVE-1245 in part, for atomic or primitive types.
> The serde property "hbase.columns.storage.types" = "-,b,b,b,b,b,b,b,b" is a 
> specification of the storage option for the corresponding column in the serde 
> property "hbase.columns.mapping". Allowed values are '-' for table default, 
> 's' for standard string storage, and 'b' for binary storage as would be 
> obtained from o.a.h.hbase.utils.Bytes. Map types for HBase column families 
> use a colon separated pair such as 's:b' for the key and value part 
> specifiers respectively. See the test cases and queries for HBase handler for 
> additional examples.
> There is also a table property "hbase.table.default.storage.type" = "string" 
> to specify a table level default storage type. The other valid specification 
> is "binary". The table level default is overridden by a column level 
> specification.
> This control is available for the boolean, tinyint, smallint, int, bigint, 
> float, and double primitive types. The attached patch also relaxes the 
> mapping of map types to HBase column families to allow any primitive type to 
> be the map key.
> Attached is a program for creating a table and populating it in HBase. The 
> external table in Hive can access the data as shown in the example below.
> hive> create external table TestHiveHBaseExternalTable
> > (key string, c_bool boolean, c_byte tinyint, c_short smallint,
> >  c_int int, c_long bigint, c_string string, c_float float, c_double 
> double)
> >  stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> >  with serdeproperties ("hbase.columns.mapping" = 
> ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double")
> >  tblproperties ("hbase.table.name" = "TestHiveHBaseExternalTable");
> OK
> Time taken: 0.691 seconds
> hive> select * from TestHiveHBaseExternalTable;
> OK
> key-1   NULL  NULL  NULL  NULL  NULL  Test-String  NULL  NULL
> Time taken: 0.346 seconds
> hive> drop table TestHiveHBaseExternalTable;
> OK
> Time taken: 0.139 seconds
> hive> create external table TestHiveHBaseExternalTable
> > (key string, c_bool boolean, c_byte tinyint, c_short smallint,
> >  c_int int, c_long bigint, c_string string, c_float float, c_double 
> double)
> >  stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> >  with serdeproperties (
> >  "hbase.columns.mapping" = 
> ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double",
> >  "hbase.columns.storage.types" = "-,b,b,b,b,b,b,b,b" )
> >  tblproperties (
> >  "hbase.table.name" = "TestHiveHBaseExternalTable",
> >  "hbase.table.default.storage.type" = "string");
> OK
> Time taken: 0.139 seconds
> hive> select * from TestHiveHBaseExternalTable;
> OK
> key-1   true  -128  -32768  -2147483648  -9223372036854775808  
> Test-String  -2.1793132E-11  2.01345E291
> Time taken: 0.151 seconds
> hive> drop table TestHiveHBaseExternalTable;
> OK
> Time taken: 0.154 seconds
> hive> create external table TestHiveHBaseExternalTable
> > (key string, c_bool boolean, c_byte tinyint, c_short smallint,
> >  c_int int, c_long bigint, c_string string, c_float float, c_double 
> double)
> >  stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> >  with serdeproperties (
> >  "hbase.columns.mapping" = 
> ":key,cf:boolean,cf:byte,cf:short,cf:int,cf:long,cf:string,cf:float,cf:double",
> >  "hbase.columns.storage.types" = "-,b,b,b,b,b,-,b,b" )
> >  tblproperties ("hbase.table.name" = "TestHiveHBaseExternalTable");
> OK
> Time taken: 0.347 seconds
> hive> select * from TestHiveHBaseExternalTable;
> OK
> key-1   true  -128  -32768  -2147483648  -9223372036854775808  
> Test-String  -2.1793132E-11  2.01345E291
> Time taken: 0.245 seconds
> hive> 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (HIVE-2841) Fix javadoc warnings

2012-03-07 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13224724#comment-13224724
 ] 

Ashutosh Chauhan commented on HIVE-2841:


Owen,
Can you upload the patch directly to JIRA too, granting license to the ASF?

> Fix javadoc warnings
> 
>
> Key: HIVE-2841
> URL: https://issues.apache.org/jira/browse/HIVE-2841
> Project: Hive
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.9.0
>
> Attachments: HIVE-2841.D2139.1.patch
>
>
> We currently have 219 Javadoc warnings, and I'd like to fix them all.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2831) TestContribCliDriver.dboutput and TestCliDriver.input45 fail on 0.23

2012-03-07 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13224343#comment-13224343
 ] 

Ashutosh Chauhan commented on HIVE-2831:


The approach looks alright to me, but I suspect this will result in updates to 
most *.q.out files.  

> TestContribCliDriver.dboutput and TestCliDriver.input45 fail on 0.23
> 
>
> Key: HIVE-2831
> URL: https://issues.apache.org/jira/browse/HIVE-2831
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Carl Steinbach
>Assignee: Carl Steinbach
> Attachments: HIVE-2831.1.patch.txt, HIVE-2831.D2049.1.patch, 
> HIVE-2831.D2049.1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2834) Diff masking it too aggressive in index_bitmap*.q and index_compact*.q tests

2012-03-07 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13224337#comment-13224337
 ] 

Ashutosh Chauhan commented on HIVE-2834:


Failed to apply. 
{code}
Hunk #3 FAILED at 63.
1 out of 3 hunks FAILED -- saving rejects to file 
ql/src/test/queries/clientpositive/index_compact_2.q.rej
{code}

> Diff masking it too aggressive in index_bitmap*.q and index_compact*.q tests
> 
>
> Key: HIVE-2834
> URL: https://issues.apache.org/jira/browse/HIVE-2834
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Carl Steinbach
>Assignee: Carl Steinbach
> Attachments: HIVE-2834.D2061.1.patch, HIVE-2834.D2109.1.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2498) Group by operator doesnt estimate size of Timestamp & Binary data correctly

2012-03-06 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13223971#comment-13223971
 ] 

Ashutosh Chauhan commented on HIVE-2498:


Ignore the previous review request; that was meant for HIVE-2517. Use the one 
above it for this issue, HIVE-2498.

> Group by operator doesnt estimate size of Timestamp & Binary data correctly
> ---
>
> Key: HIVE-2498
> URL: https://issues.apache.org/jira/browse/HIVE-2498
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.9.0
>
> Attachments: HIVE-2498.D1185.1.patch, hive-2498.patch, 
> hive-2498_1.patch
>
>
> It currently falls through to the default case and returns a constant value, 
> whereas we can do better by getting the actual size at runtime.
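
A rough illustration of the idea (this is not the actual GroupByOperator code; 
the overhead constants are assumed placeholders): size the key from the object 
itself instead of falling through to a fixed default.

{code}
import java.sql.Timestamp;
import org.apache.hadoop.io.BytesWritable;

final class KeySizeEstimatorSketch {
  // Assumed per-object overheads; the real estimator would use Hive's own constants.
  private static final int OBJECT_OVERHEAD = 16;
  private static final int DEFAULT_SIZE = 64;

  static int estimate(Object key) {
    if (key instanceof BytesWritable) {
      // Binary data: actual byte length plus object overhead.
      return ((BytesWritable) key).getLength() + OBJECT_OVERHEAD;
    }
    if (key instanceof Timestamp) {
      // Timestamp has a small, fixed footprint (millis + nanos fields).
      return OBJECT_OVERHEAD + 12;
    }
    return DEFAULT_SIZE; // previous behaviour: constant fallback
  }
}
{code}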

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2838) cleanup readentity/writeentity

2012-03-05 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13222760#comment-13222760
 ] 

Ashutosh Chauhan commented on HIVE-2838:


@Namit,
Can you briefly explain why there should be one common entity instead of 
separate read/write entities?

> cleanup readentity/writeentity
> --
>
> Key: HIVE-2838
> URL: https://issues.apache.org/jira/browse/HIVE-2838
> Project: Hive
>  Issue Type: Bug
>Reporter: Namit Jain
>Assignee: Namit Jain
>
> Ideally, there should be one common entity instead of readentity/writeentity.
> Unfortunately, that would be a backward-incompatible change, since users of 
> Hive might have written their own hooks where they are using 
> readentity/writeentity.
> We should at least create a common class, and then we can deprecate read/write 
> entity later, in a new release.
> For now, I propose to make a backward-compatible change.
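
One possible shape of the backward-compatible refactoring (names and fields are 
illustrative; the real classes live in org.apache.hadoop.hive.ql.hooks): pull the 
shared state into a common base class and keep ReadEntity/WriteEntity as thin 
subclasses, so existing hooks keep compiling and running.

{code}
// Illustrative sketch, not the actual Hive classes.
abstract class Entity {
  enum Type { TABLE, PARTITION, DFS_DIR, LOCAL_DIR }

  private final Type type;
  private final String name;

  Entity(Type type, String name) {
    this.type = type;
    this.name = name;
  }

  Type getType()   { return type; }
  String getName() { return name; }
}

// Existing hooks that accept ReadEntity/WriteEntity continue to work unchanged;
// new code can be written against the common Entity type.
class ReadEntity extends Entity {
  ReadEntity(Type type, String name) { super(type, name); }
}

class WriteEntity extends Entity {
  WriteEntity(Type type, String name) { super(type, name); }
}
{code}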

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2833) Fix test failures caused by HIVE-2716

2012-03-02 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13221438#comment-13221438
 ] 

Ashutosh Chauhan commented on HIVE-2833:


@Kevin,
Thanks a bunch for digging into it and for the fix.

@Namit,
I will take a look at it over the weekend and will post comments if I have any.

> Fix test failures caused by HIVE-2716
> -
>
> Key: HIVE-2833
> URL: https://issues.apache.org/jira/browse/HIVE-2833
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Carl Steinbach
>Assignee: Kevin Wilfong
> Attachments: HIVE-2716.D2055.1.patch, HIVE-2833.D2055.2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2833) Fix test failures caused by HIVE-2716

2012-03-01 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13220586#comment-13220586
 ] 

Ashutosh Chauhan commented on HIVE-2833:


@Namit,

It passes all the tests on trunk. Do you have a reproducible test case for this? 

> Fix test failures caused by HIVE-2716
> -
>
> Key: HIVE-2833
> URL: https://issues.apache.org/jira/browse/HIVE-2833
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Carl Steinbach
>Assignee: Enis Soztutar
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2809) StorageHandler authorization providers

2012-02-27 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13217744#comment-13217744
 ] 

Ashutosh Chauhan commented on HIVE-2809:


New patch fails to apply too.

> StorageHandler authorization providers
> --
>
> Key: HIVE-2809
> URL: https://issues.apache.org/jira/browse/HIVE-2809
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 0.9.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: HIVE-2809.D1953.1.patch, HIVE-2809.D1953.2.patch
>
>
> In this issue, we would like to discuss the possibility of supplementing the 
> Hive authorization model with authorization at the storage level. As 
> discussed in HIVE-1943, Hive should also check for operation permissions in 
> hdfs and hbase, since otherwise, data and metadata can be in an inconsistent 
> state or be orphaned. Going a step further, some of the setups might not need 
> the full featured auth model by Hive, but want to rely on managing the 
> permissions at the data layer. In this model, the metadata operations are 
> checked first against hdfs/hbase and are allowed only if they are allowed at 
> the data layer. The semantics are documented at 
> https://cwiki.apache.org/confluence/display/HCATALOG/Hcat+Security+Design. 
> So, the goals of this issue are: 
>  - Port storage handler specific authorization providers, and the 
> StorageDelegationAuthorizationProvider from HCATALOG-245 and HCATALOG-260 to 
> Hive. 
>  - Keep current Hive's default authorization provider, and enable user to use 
> this and/or the storage one. auth providers are already configurable.
>  - Move the manual checks that had to be performed about authorization in 
> Hcat to Hive, specifically:
>   -- CREATE DATABASE/TABLE, ADD PARTITION statements do not call 
>HiveAuthorizationProvider.authorize() with the candidate objects, which 
> means that
>we cannot do checks against defined LOCATION.
>   -- HiveOperation does not define sufficient Privileges for most of the 
> operations, 
> especially database operations. 
>   -- For some of the operations, Hive SemanticAnalyzer does not add the 
> changed 
> object as a WriteEntity or ReadEntity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2767) Optionally use framed transport with metastore

2012-02-27 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13217743#comment-13217743
 ] 

Ashutosh Chauhan commented on HIVE-2767:


Thanks, Travis, for explaining the use case. Makes sense. Without that I was 
shooting in the dark about why you need the framed transport. :) Yeah, update the 
patch. I will test and commit.

> Optionally use framed transport with metastore
> --
>
> Key: HIVE-2767
> URL: https://issues.apache.org/jira/browse/HIVE-2767
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: Travis Crawford
>Assignee: Travis Crawford
> Attachments: HIVE-2767.patch.txt
>
>
> Users may want/need to use thrift's framed transport when communicating with 
> the Hive MetaStore. This patch adds a new property 
> {{hive.metastore.thrift.framed.transport.enabled}} that enables the framed 
> transport (defaults to off, aka no change from before the patch). This 
> property must be set for both clients and the HMS server.
> It wasn't immediately clear how to use the framed transport with SASL, so as 
> written an exception is thrown if you try starting the server with both 
> options. If SASL and the framed transport will indeed work together I can 
> update the patch (although I don't have a secured environment to test in).
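
A minimal client-side sketch of what the new property changes (the property name 
comes from this patch; the surrounding class is illustrative, not the actual 
HiveMetaStoreClient code):

{code}
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;
import org.apache.thrift.transport.TTransportException;

public class MetastoreTransportSketch {
  // framed = value of hive.metastore.thrift.framed.transport.enabled,
  // which must match on both the client and the HMS server.
  public static TTransport open(String host, int port, boolean framed)
      throws TTransportException {
    TTransport transport = new TSocket(host, port);
    if (framed) {
      // Wrap the plain socket so each message is length-prefixed.
      transport = new TFramedTransport(transport);
    }
    transport.open();
    return transport;
  }
}
{code}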

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2809) StorageHandler authorization providers

2012-02-27 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13217669#comment-13217669
 ] 

Ashutosh Chauhan commented on HIVE-2809:


Patch applies, but fails to compile.

> StorageHandler authorization providers
> --
>
> Key: HIVE-2809
> URL: https://issues.apache.org/jira/browse/HIVE-2809
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 0.9.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: HIVE-2809.D1953.1.patch
>
>
> In this issue, we would like to discuss the possibility of supplementing the 
> Hive authorization model with authorization at the storage level. As 
> discussed in HIVE-1943, Hive should also check for operation permissions in 
> hdfs and hbase, since otherwise, data and metadata can be in an inconsistent 
> state or be orphaned. Going a step further, some of the setups might not need 
> the full featured auth model by Hive, but want to rely on managing the 
> permissions at the data layer. In this model, the metadata operations are 
> checked first from hdfs/hbase and it is allowed only if they are allowed at 
> the data layer. The semantics are documented at 
> https://cwiki.apache.org/confluence/display/HCATALOG/Hcat+Security+Design. 
> So, the goals of this issue are: 
>  - Port storage handler specific authorization providers, and the 
> StorageDelegationAuthorizationProvider from HCATALOG-245 and HCATALOG-260 to 
> Hive. 
>  - Keep current Hive's default authorization provider, and enable user to use 
> this and/or the storage one. auth providers are already configurable.
>  - Move the manual checks that had to be performed about authorization in 
> Hcat to Hive, specifically:
>   -- CREATE DATABASE/TABLE, ADD PARTITION statements do not call 
>HiveAuthorizationProvider.authorize() with the candidate objects, which 
> means that
>we cannot do checks against defined LOCATION.
>   -- HiveOperation does not define sufficient Privileges for most of the 
> operations, 
> especially database operations. 
>   -- For some of the operations, Hive SemanticAnalyzer does not add the 
> changed 
> object as a WriteEntity or ReadEntity.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2789) When integrating into MapReduce2, 'select * from src distribute by src.key limit 1' returns non-deterministic result

2012-02-24 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13216306#comment-13216306
 ] 

Ashutosh Chauhan commented on HIVE-2789:


I still can't apply this patch cleanly, either with arc or with the patch 
command directly.

> When integrating into MapReduce2, 'select * from src distribute by src.key 
> limit 1' returns non-deterministic result
> 
>
> Key: HIVE-2789
> URL: https://issues.apache.org/jira/browse/HIVE-2789
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Zhenxiao Luo
>Assignee: Carl Steinbach
> Attachments: HIVE-2789.D1647.1.patch, HIVE-2789.D1647.1.patch, 
> HIVE-2789.D1647.2.patch, HIVE-2789.D1647.2.patch
>
>
> query_properties.q test failure:
> [junit] Begin query: query_properties.q
> [junit] 12/01/23 16:59:13 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:13 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:18 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:18 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:22 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:22 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:27 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:27 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:32 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:32 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:36 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:36 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:41 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:41 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:46 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:46 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:50 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:50 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:55 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:55 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:59 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:59 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 17:00:04 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 17:00:04 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 17:00:08 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 17:00:08 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 17:00:13 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 17:00:13 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 17:00:18 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 17:00:18 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.

[jira] [Commented] (HIVE-2761) Remove lib/javaewah-0.3.jar

2012-02-23 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13215418#comment-13215418
 ] 

Ashutosh Chauhan commented on HIVE-2761:


There is one more in .classpath

> Remove lib/javaewah-0.3.jar
> ---
>
> Key: HIVE-2761
> URL: https://issues.apache.org/jira/browse/HIVE-2761
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.8.0, 0.8.1, 0.9.0
>Reporter: Ashutosh Chauhan
>Assignee: Edward Capriolo
> Fix For: 0.9.0
>
> Attachments: HIVE-2761.2.patch.txt, HIVE-2761.D1911.1.patch, 
> HIVE-2761.D1911.2.patch
>
>
> After HIVE-2391 it is retrieved from the Maven repo via Ivy, so we can get rid 
> of it from our lib/.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2761) Remove lib/javaewah-0.3.jar

2012-02-23 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13215265#comment-13215265
 ] 

Ashutosh Chauhan commented on HIVE-2761:


@Ed,
Can you upload the patch on JIRA, granting ASF perms?

> Remove lib/javaewah-0.3.jar
> ---
>
> Key: HIVE-2761
> URL: https://issues.apache.org/jira/browse/HIVE-2761
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.8.0, 0.8.1, 0.9.0
>Reporter: Ashutosh Chauhan
>Assignee: Edward Capriolo
> Fix For: 0.9.0
>
> Attachments: HIVE-2761.D1911.1.patch
>
>
> After HIVE-2391 it is retrieved from the Maven repo via Ivy, so we can get rid 
> of it from our lib/.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2773) HiveStorageHandler.configureTableJobProperites() should let the handler know wether it is configuration for input or output

2012-02-23 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13214979#comment-13214979
 ] 

Ashutosh Chauhan commented on HIVE-2773:


bq. If I add new methods, classes implementing the interface will have to 
implement the new methods anyway,

Adding and removing methods have different impacts from the point of view of 
backward compatibility. 
If you add new methods to an interface, then implementations written against the 
older interface will continue to work even after they upgrade to a new version 
of Hive. That has no cost for people who have implemented the old interface; 
only if they recompile do they have to implement the new methods.  
On the other hand, if you remove methods, then their old implementations won't 
work anymore after the upgrade. They have to recompile, and that implies work on 
their part.

This is the difference that gets factored into backward compatibility. So, as 
long as you are adding new methods to an interface, I consider it backward 
compatible. But if you are removing methods, then it's not. Makes sense?
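
A sketch of the compatibility point above, using the method names proposed in 
this issue (signatures simplified; treat the shape as illustrative): adding the 
two new methods keeps existing compiled storage handlers loading and running, 
whereas removing the old one would break them on upgrade.

{code}
import java.util.Map;

public interface StorageHandlerExample {
  // Existing method: kept (and possibly deprecated later) so handlers compiled
  // against the old interface still link; they only need changes on recompile.
  @Deprecated
  void configureTableJobProperties(Map<String, String> jobProperties);

  // New, more specific methods proposed in this issue.
  void configureInputJobProperties(Map<String, String> jobProperties);
  void configureOutputJobProperties(Map<String, String> jobProperties);
}
{code}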

> HiveStorageHandler.configureTableJobProperites() should let the handler know 
> wether it is configuration for input or output
> ---
>
> Key: HIVE-2773
> URL: https://issues.apache.org/jira/browse/HIVE-2773
> Project: Hive
>  Issue Type: Improvement
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hcatalog, storage_handler
> Attachments: HIVE-2773.D1815.1.patch, HIVE-2773.patch
>
>
> HiveStorageHandler.configureTableJobProperties() is called to allow the 
> storage handler to setup any properties that the underlying 
> inputformat/outputformat/serde may need. But the handler implementation does 
> not know whether it is being called for configuring input or output. This 
> makes it a problem for handlers which set external state. In the case of 
> HCatalog's HBase storageHandler, whenever a write needs to be configured we 
> create a write transaction which needs to be committed or aborted later on. 
> In this case configuring for both input and output each time 
> configureTableJobProperties() is called would not be desirable. This has 
> become an issue since HCatalog is dropping storageDrivers for SerDe and 
> StorageHandler (see HCATALOG-237).
> My proposal is to replace configureTableJobProperties() with two methods:
> configureInputJobProperties()
> configureOutputJobProperties()
> Each method will have the same signature. A cursory look at the code suggests 
> the changes should be straightforward, given that we are not really changing 
> anything, just splitting responsibility. If the community is fine with this 
> approach I will go ahead and create a patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2716) Move retry logic in HiveMetaStore to a separe class

2012-02-22 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13214241#comment-13214241
 ] 

Ashutosh Chauhan commented on HIVE-2716:


I reran ant test -Dtestcase=TestNegativeCliDriver 
-Dqfile=script_broken_pipe1.q, which failed as reported, and it passed. So it 
looks like that failure is not related to this patch.

> Move retry logic in HiveMetaStore to a separe class
> ---
>
> Key: HIVE-2716
> URL: https://issues.apache.org/jira/browse/HIVE-2716
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 0.9.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.9.0
>
> Attachments: HIVE-2716.D1227.1.patch, HIVE-2716.D1227.2.patch, 
> HIVE-2716.D1227.3.patch, HIVE-2716.D1227.4.patch, HIVE-2716.patch
>
>
> In HIVE-1219, method retrying for raw store operations was introduced to 
> handle JDO operations more robustly. However, the abstraction for the 
> RawStore operations can be moved to a separate class implementing RawStore, 
> which should clean up the code base for HiveMetaStore. 
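
One way the retry wrapper can be pulled out of HiveMetaStore is a JDK dynamic 
proxy around the RawStore interface; this is only an illustrative sketch (the 
class name and retry policy here are made up, not the patch's implementation):

{code}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

class RetryingHandlerSketch implements InvocationHandler {
  private final Object base;     // the real RawStore implementation
  private final int attempts;    // assumed: at least 1

  private RetryingHandlerSketch(Object base, int attempts) {
    this.base = base;
    this.attempts = attempts;
  }

  @Override
  public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    Throwable last = null;
    for (int i = 0; i < attempts; i++) {
      try {
        return method.invoke(base, args);        // delegate every call
      } catch (InvocationTargetException e) {
        last = e.getCause();                     // unwrap the JDO/metastore error
      }
    }
    throw last;                                  // give up after the last attempt
  }

  @SuppressWarnings("unchecked")
  static <T> T wrap(Class<T> iface, T base, int attempts) {
    return (T) Proxy.newProxyInstance(iface.getClassLoader(),
        new Class<?>[] { iface }, new RetryingHandlerSketch(base, attempts));
  }
}
{code}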

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2773) HiveStorageHandler.configureTableJobProperites() should let the handler know wether it is configuration for input or output

2012-02-22 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13214018#comment-13214018
 ] 

Ashutosh Chauhan commented on HIVE-2773:


This will effectively break backward compatibility, since you are removing a 
method from an interface. You can get around this by keeping the original method 
as well as introducing the new ones. Then, at the time of invocation, you can 
check whether the newer methods exist: if they do, invoke them; otherwise invoke 
the old method. We can then deprecate the old method. But before you go down 
that path and introduce code complexity, I suggest you send an email to 
user@hive to check whether people are OK with your current approach as is and 
will be able to upgrade their storage handlers; in that case we can go ahead 
with the current approach.
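
A rough sketch of the "invoke the new methods if they exist, else fall back" 
idea (illustrative only; the real handler methods take more parameters than a 
bare Map): plain reflection keeps old handlers that only implement the original 
method working.

{code}
import java.lang.reflect.Method;
import java.util.Map;

class JobPropertyDispatchSketch {
  static void configureInput(Object handler, Map<String, String> props) throws Exception {
    try {
      // Handler built against the new interface: prefer the input-specific method.
      Method m = handler.getClass().getMethod("configureInputJobProperties", Map.class);
      m.invoke(handler, props);
    } catch (NoSuchMethodException e) {
      // Old handler: fall back to the original, undifferentiated method.
      handler.getClass()
             .getMethod("configureTableJobProperties", Map.class)
             .invoke(handler, props);
    }
  }
}
{code}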

> HiveStorageHandler.configureTableJobProperites() should let the handler know 
> wether it is configuration for input or output
> ---
>
> Key: HIVE-2773
> URL: https://issues.apache.org/jira/browse/HIVE-2773
> Project: Hive
>  Issue Type: Improvement
>Reporter: Francis Liu
>  Labels: hcatalog, storage_handler
> Attachments: HIVE-2773.D1815.1.patch, HIVE-2773.patch
>
>
> HiveStorageHandler.configureTableJobProperties() is called to allow the 
> storage handler to setup any properties that the underlying 
> inputformat/outputformat/serde may need. But the handler implementation does 
> not know whether it is being called for configuring input or output. This 
> makes it a problem for handlers which set external state. In the case of 
> HCatalog's HBase storageHandler, whenever a write needs to be configured we 
> create a write transaction which needs to be committed or aborted later on. 
> In this case configuring for both input and output each time 
> configureTableJobProperties() is called would not be desirable. This has 
> become an issue since HCatalog is dropping storageDrivers for SerDe and 
> StorageHandler (see HCATALOG-237).
> My proposal is to replace configureTableJobProperties() with two methods:
> configureInputJobProperties()
> configureOutputJobProperties()
> Each method will have the same signature. A cursory look at the code suggests 
> the changes should be straightforward, given that we are not really changing 
> anything, just splitting responsibility. If the community is fine with this 
> approach I will go ahead and create a patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2789) When integrating into MapReduce2, 'select * from src distribute by src.key limit 1' returns non-deterministic result

2012-02-20 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13212392#comment-13212392
 ] 

Ashutosh Chauhan commented on HIVE-2789:


Carl,
The patch failed to apply cleanly for me. I have reviewed it, and it looks good. 
Can you update the patch, test, and commit?

> When integrating into MapReduce2, 'select * from src distribute by src.key 
> limit 1' returns non-deterministic result
> 
>
> Key: HIVE-2789
> URL: https://issues.apache.org/jira/browse/HIVE-2789
> Project: Hive
>  Issue Type: Bug
>Reporter: Zhenxiao Luo
>Assignee: Carl Steinbach
> Attachments: HIVE-2789.D1647.1.patch, HIVE-2789.D1647.1.patch
>
>
> query_properties.q test failure:
> [junit] Begin query: query_properties.q
> [junit] 12/01/23 16:59:13 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:13 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:18 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:18 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:22 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:22 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:27 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:27 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:32 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:32 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:36 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:36 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:41 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:41 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:46 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:46 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:50 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:50 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:55 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:55 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 16:59:59 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 16:59:59 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 17:00:04 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 17:00:04 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 17:00:08 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 17:00:08 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 17:00:13 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 17:00:13 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 17:00:18 WARN conf.Configuration: mapred.system.dir is 
> deprecated. Instead, use mapreduce.jobtracker.system.dir
> [junit] 12/01/23 17:00:18 WARN conf.Configuration: mapred.local.dir is 
> deprecated. Instead, use mapreduce.cluster.local.dir
> [junit] 12/01/23 17:00:22 WAR

[jira] [Commented] (HIVE-2716) Move retry logic in HiveMetaStore to a separe class

2012-02-20 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13212385#comment-13212385
 ] 

Ashutosh Chauhan commented on HIVE-2716:


ant test -Dtestcase=TestNegativeCliDriver -Dqfile=authorization_fail_1.q failed.

> Move retry logic in HiveMetaStore to a separe class
> ---
>
> Key: HIVE-2716
> URL: https://issues.apache.org/jira/browse/HIVE-2716
> Project: Hive
>  Issue Type: Sub-task
>  Components: Metastore
>Affects Versions: 0.9.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: HIVE-2716.D1227.1.patch, HIVE-2716.D1227.2.patch, 
> HIVE-2716.D1227.3.patch
>
>
> In HIVE-1219, method retrying for raw store operations was introduced to 
> handle JDO operations more robustly. However, the abstraction for the 
> RawStore operations can be moved to a separate class implementing RawStore, 
> which should clean up the code base for HiveMetaStore. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2792) SUBSTR(CAST( AS BINARY)) produces unexpected results

2012-02-20 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13212298#comment-13212298
 ] 

Ashutosh Chauhan commented on HIVE-2792:


@Navis,
Can you please upload the patch on JIRA as well, granting ASF perms? At the top 
of this page, click More Actions > Attach Files. Then, after attaching your 
patch file, click the radio button granting ASF perms.

> SUBSTR(CAST( AS BINARY)) produces unexpected results
> 
>
> Key: HIVE-2792
> URL: https://issues.apache.org/jira/browse/HIVE-2792
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.8.0, 0.8.1
>Reporter: Carl Steinbach
>Assignee: Navis
> Fix For: 0.9.0
>
> Attachments: HIVE-2792.D1797.1.patch, HIVE-2792.D1797.2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-1054) "CHANGE COLUMN" does not support changing partition column types.

2012-02-16 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-1054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209634#comment-13209634
 ] 

Ashutosh Chauhan commented on HIVE-1054:


One problem there could be that all partition values are stored as strings, so 
type info is completely lost in MySQL. As a result, partition filters cannot be 
pushed into MySQL for any types other than string. This is also documented in 
the listPartitionsByFilter() API of the metastore client, though I don't find 
that it is enforced anywhere (which might itself be a bug). As a result, 
changing the type of a partition column from string to int or vice versa might 
result in a change in behavior.

> "CHANGE COLUMN" does not support changing partition column types.
> -
>
> Key: HIVE-1054
> URL: https://issues.apache.org/jira/browse/HIVE-1054
> Project: Hive
>  Issue Type: Bug
>Reporter: He Yongqiang
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HIVE-2799) change the following thrift apis to add a region

2012-02-16 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209485#comment-13209485
 ] 

Ashutosh Chauhan commented on HIVE-2799:


As a developer of Hive, I am definitely in favor of keeping the code simpler and 
the interfaces cleaner. As an open-source user, I am fine with wrapping the new 
calls in the existing APIs.  

> change the following thrift apis to add a region
> 
>
> Key: HIVE-2799
> URL: https://issues.apache.org/jira/browse/HIVE-2799
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Thrift API
>Reporter: Namit Jain
>Assignee: Kevin Wilfong
>
>   list<string> get_tables(1: string db_name, 2: string pattern)
>     throws (1: MetaException o1)
>   list<string> get_all_tables(1: string db_name) throws (1: MetaException o1)
>   Table get_table(1:string dbname, 2:string tbl_name)
>     throws (1:MetaException o1, 2:NoSuchObjectException o2)
>   list<Table> get_table_objects_by_name(1:string dbname, 2:list<string> tbl_names)
>     throws (1:MetaException o1, 2:InvalidOperationException o2, 3:UnknownDBException o3)
>   list<string> get_table_names_by_filter(1:string dbname, 2:string filter, 3:i16 max_tables=-1)
>     throws (1:MetaException o1, 2:InvalidOperationException o2, 3:UnknownDBException o3)
>   Partition add_partition(1:Partition new_part)
>     throws (1:InvalidObjectException o1, 2:AlreadyExistsException o2, 3:MetaException o3)
>   i32 add_partitions(1:list<Partition> new_parts)
>     throws (1:InvalidObjectException o1, 2:AlreadyExistsException o2, 3:MetaException o3)
>   Partition append_partition(1:string db_name, 2:string tbl_name, 3:list<string> part_vals)
>     throws (1:InvalidObjectException o1, 2:AlreadyExistsException o2, 3:MetaException o3)
>   Partition append_partition_by_name(1:string db_name, 2:string tbl_name, 3:string part_name)
>     throws (1:InvalidObjectException o1, 2:AlreadyExistsException o2, 3:MetaException o3)
>   bool drop_partition(1:string db_name, 2:string tbl_name, 3:list<string> part_vals, 4:bool deleteData)
>     throws (1:NoSuchObjectException o1, 2:MetaException o2)
>   bool drop_partition_by_name(1:string db_name, 2:string tbl_name, 3:string part_name, 4:bool deleteData)
>     throws (1:NoSuchObjectException o1, 2:MetaException o2)
>   Partition get_partition(1:string db_name, 2:string tbl_name, 3:list<string> part_vals)
>     throws (1:MetaException o1, 2:NoSuchObjectException o2)
>   Partition get_partition_with_auth(1:string db_name, 2:string tbl_name, 3:list<string> part_vals,
>       4:string user_name, 5:list<string> group_names)
>     throws (1:MetaException o1, 2:NoSuchObjectException o2)
>   Partition get_partition_by_name(1:string db_name, 2:string tbl_name, 3:string part_name)
>     throws (1:MetaException o1, 2:NoSuchObjectException o2)
>   list<Partition> get_partitions(1:string db_name, 2:string tbl_name, 3:i16 max_parts=-1)
>     throws (1:NoSuchObjectException o1, 2:MetaException o2)
>   list<Partition> get_partitions_with_auth(1:string db_name, 2:string tbl_name, 3:i16 max_parts=-1,
>       4:string user_name, 5:list<string> group_names)
>     throws (1:NoSuchObjectException o1, 2:MetaException o2)
>   list<string> get_partition_names(1:string db_name, 2:string tbl_name, 3:i16 max_parts=-1)
>     throws (1:MetaException o2)
>   list<Partition> get_partitions_ps(1:string db_name, 2:string tbl_name, 3:list<string> part_vals, 4:i16 max_parts=-1)
>     throws (1:MetaException o1, 2:NoSuchObjectException o2)
>   list<Partition> get_partitions_ps_with_auth(1:string db_name, 2:string tbl_name, 3:list<string> part_vals, 4:i16 max_parts=-1,
>       5:string user_name, 6:list<string> group_names)
>     throws (1:NoSuchObjectException o1, 2:MetaException o2)
>   list<string> get_partition_names_ps(1:string db_name, 2:string tbl_name, 3:list<string> part_vals, 4:i16 max_parts=-1)
>     throws (1:MetaException o1, 2:NoSuchObjectException o2)
>   list<Partition> get_partitions_by_filter(1:string db_name, 2:string tbl_name, 3:string filter, 4:i16 max_parts=-1)
>     throws (1:MetaException o1, 2:NoSuchObjectException o2)
>   list<Partition> get_partitions_by_names(1:string db_name, 2:string tbl_name, 3:list<string> names)
>     throws (1:MetaException o1, 2:NoSuchObjectException o2)
>   bool drop_index_by_name(1:string db_name, 2:string tbl_name, 3:string index_name, 4:bool deleteData)
>     throws (1:NoSuchObjectException o1, 2:MetaException o2)
>   Index get_index_by_name(1:string db_name, 2:string tbl_name, 3:string index_name)
>     throws (1:MetaException o1, 2:NoSuchObjectException o2)
>   lis

[jira] [Commented] (HIVE-2799) change the following thrift apis to add a region

2012-02-15 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13209013#comment-13209013
 ] 

Ashutosh Chauhan commented on HIVE-2799:


For that, we can modify HiveMetaStoreClient.java (the most widely used client) 
to wrap these methods in ones that don't take a region argument (which is the 
current api), passing null for the region param through the rpc client. Folks 
who use the raw rpc client will continue to work without recompilation, and if 
they are indeed recompiling they can pass a null in there. 

At this point, I think we should reconsider whether we want to add a new set of 
apis or modify the existing ones. To me, the latter seems a better choice 
to avoid code duplication and confusion.
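
A minimal sketch of that wrapping approach, for illustration only (the 
region-aware thrift method and the class shape here are assumptions based on 
this proposal, not existing Hive code; Table is stood in by Object to keep the 
sketch self-contained):

{code}
// Hypothetical sketch, not actual Hive code. Assumes the Thrift-generated
// client gained a region-aware overload such as get_table(db, tbl, region);
// the existing public API keeps its old shape and forwards null for the region.
public class HiveMetaStoreClientSketch {

  /** Stand-in for the Thrift-generated client interface (assumed shape). */
  interface ThriftClient {
    Object get_table(String dbName, String tblName, String region) throws Exception;
  }

  private final ThriftClient client;

  public HiveMetaStoreClientSketch(ThriftClient client) {
    this.client = client;
  }

  /** Existing API, unchanged for callers: no region parameter. */
  public Object getTable(String dbName, String tblName) throws Exception {
    // Forward null so existing callers keep working unchanged.
    return client.get_table(dbName, tblName, null);
  }

  /** New overload for callers that do care about the region. */
  public Object getTable(String dbName, String tblName, String region) throws Exception {
    return client.get_table(dbName, tblName, region);
  }
}
{code}

With this shape, existing callers keep the old two-argument getTable and never 
see the region, while region-aware callers opt in through the new overload.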

> change the following thrift apis to add a region
> 
>
> Key: HIVE-2799
> URL: https://issues.apache.org/jira/browse/HIVE-2799
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Thrift API
>Reporter: Namit Jain
>Assignee: Kevin Wilfong
>

[jira] [Commented] (HIVE-2799) change the following thrift apis to add a region

2012-02-15 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208988#comment-13208988
 ] 

Ashutosh Chauhan commented on HIVE-2799:


In case the client is upgraded but the server is not, my impression is that the 
extra param passed by the client is automatically dropped by the rpc layer before 
the call is made on the server side. So, that will still work. 
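
For illustration, a small self-contained sketch of that mechanism using the 
Thrift Java library directly (a toy example, not Hive's generated code): the 
writer sends a field the reader does not know about, and the reader skips it 
the same way generated server-side read() methods do.

{code}
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TField;
import org.apache.thrift.protocol.TProtocol;
import org.apache.thrift.protocol.TProtocolUtil;
import org.apache.thrift.protocol.TStruct;
import org.apache.thrift.protocol.TType;
import org.apache.thrift.transport.TMemoryBuffer;

// Toy demonstration: a "new" client writes an args struct containing an extra
// field (id 2, a hypothetical region); an "old" reader that only knows field 1
// skips the unknown field instead of failing.
public class ThriftSkipDemo {
  public static void main(String[] args) throws Exception {
    TMemoryBuffer buf = new TMemoryBuffer(128);
    TProtocol out = new TBinaryProtocol(buf);

    // "New" client: writes db_name (field 1) plus an extra region field (field 2).
    out.writeStructBegin(new TStruct("get_tables_args"));
    out.writeFieldBegin(new TField("db_name", TType.STRING, (short) 1));
    out.writeString("default");
    out.writeFieldEnd();
    out.writeFieldBegin(new TField("region", TType.STRING, (short) 2)); // unknown to old server
    out.writeString("us-east");
    out.writeFieldEnd();
    out.writeFieldStop();
    out.writeStructEnd();

    // "Old" server: only understands field 1; unknown fields are skipped.
    TProtocol in = new TBinaryProtocol(buf);
    in.readStructBegin();
    while (true) {
      TField f = in.readFieldBegin();
      if (f.type == TType.STOP) {
        break;
      }
      if (f.id == 1 && f.type == TType.STRING) {
        System.out.println("db_name = " + in.readString());
      } else {
        TProtocolUtil.skip(in, f.type); // extra param from the newer client is ignored
      }
      in.readFieldEnd();
    }
    in.readStructEnd();
  }
}
{code}

The field ids and names here are only for the demo; the point is that unknown 
fields are skipped rather than causing a failure.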

> change the following thrift apis to add a region
> 
>
> Key: HIVE-2799
> URL: https://issues.apache.org/jira/browse/HIVE-2799
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Thrift API
>Reporter: Namit Jain
>Assignee: Kevin Wilfong
>

[jira] [Commented] (HIVE-2612) support hive table/partitions exists in more than one region

2012-02-15 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208958#comment-13208958
 ] 

Ashutosh Chauhan commented on HIVE-2612:


@Kevin,

You named the scripts upgrade-0.9.0-to-0.10.0.mysql.sql, 
upgrade-0.9.0-to-0.10.0.derby.sql, hive-schema-0.10.0.derby.sql, and 
hive-schema-0.10.0.mysql.sql, but we have not released 0.9 yet. They should be 
named upgrade-0.8.0-to-0.9.0.mysql.sql, upgrade-0.8.0-to-0.9.0.derby.sql, 
hive-schema-0.9.0.derby.sql, and hive-schema-0.9.0.mysql.sql, respectively. 

> support hive table/partitions exists in more than one region
> 
>
> Key: HIVE-2612
> URL: https://issues.apache.org/jira/browse/HIVE-2612
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore
>Reporter: He Yongqiang
>Assignee: Kevin Wilfong
> Fix For: 0.9.0
>
> Attachments: HIVE-2612.1.patch, HIVE-2612.2.patch.txt, 
> HIVE-2612.3.patch.txt, HIVE-2612.4.patch.txt, HIVE-2612.6.patch.txt, 
> HIVE-2612.7.patch.txt, HIVE-2612.8.patch.txt, HIVE-2612.D1569.1.patch, 
> HIVE-2612.D1569.2.patch, HIVE-2612.D1569.3.patch, HIVE-2612.D1569.4.patch, 
> HIVE-2612.D1569.5.patch, HIVE-2612.D1569.6.patch, HIVE-2612.D1569.7.patch, 
> HIVE-2612.D1707.1.patch, hive.2612.5.patch
>
>
> 1) add region object into hive metastore
> 2) each partition/table has a primary region and a list of living regions, 
> and also data location in each region





[jira] [Commented] (HIVE-2801) When join key is null, random distribute this tuple

2012-02-15 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208953#comment-13208953
 ] 

Ashutosh Chauhan commented on HIVE-2801:


I didn't get the context. Can you expand a bit more? Better still, you can add 
a testcase which illustrates the "fix" for the problem.

> When join key is null, random distribute this tuple
> ---
>
> Key: HIVE-2801
> URL: https://issues.apache.org/jira/browse/HIVE-2801
> Project: Hive
>  Issue Type: Improvement
>Reporter: binlijin
> Attachments: HIVE-2801.patch
>
>






[jira] [Commented] (HIVE-2799) change the following thrift apis to add a region

2012-02-15 Thread Ashutosh Chauhan (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13208945#comment-13208945
 ] 

Ashutosh Chauhan commented on HIVE-2799:


My understanding of thrift is limited, but AFAIK you can add new params to 
existing thrift apis and they will still be backward compatible. Is that not the 
case? Or is there some other reason you want to add a whole new set of 
parallel apis?

> change the following thrift apis to add a region
> 
>
> Key: HIVE-2799
> URL: https://issues.apache.org/jira/browse/HIVE-2799
> Project: Hive
>  Issue Type: New Feature
>  Components: Metastore, Thrift API
>Reporter: Namit Jain
>Assignee: Kevin Wilfong
>
