[jira] [Commented] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-06 Thread Sean Busbey (JIRA)


[ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16534788#comment-16534788 ]

Sean Busbey commented on HIVE-20077:


How's that look, [~stakiar]?

> hcat command should follow same pattern as hive cli for getting HBase jars
> --
>
> Key: HIVE-20077
> URL: https://issues.apache.org/jira/browse/HIVE-20077
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.14.0, 2.3.2
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HIVE-20077.0.patch, HIVE-20077.1.patch
>
>
> Currently the {{hcat}} command adds HBase jars to the classpath by using find 
> to walk the directories under {{$HBASE_HOME/lib}}.
> {code}
> # Look for HBase in a BigTop-compatible way. Avoid thrift version
> # conflict with modern versions of HBase.
> HBASE_HOME=${HBASE_HOME:-"/usr/lib/hbase"}
> HBASE_CONF_DIR=${HBASE_CONF_DIR:-"${HBASE_HOME}/conf"}
> if [ -d ${HBASE_HOME} ] ; then
>for jar in $(find $HBASE_HOME -name '*.jar' -not -name '*thrift*'); do
>   HBASE_CLASSPATH=$HBASE_CLASSPATH:${jar}
>done
>export HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${HBASE_CLASSPATH}"
> fi
> if [ -d $HBASE_CONF_DIR ] ; then
> HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${HBASE_CONF_DIR}"
> fi
> {code}
> This is incorrect, as that path contains jars for a mixture of purposes: HBase 
> client jars, HBase server jars, and HBase shell-specific jars. The inclusion 
> of unneeded jars is mostly innocuous until the upcoming HBase 2.1.0 release. 
> That release will have HBASE-20615 and HBASE-19735, which means most 
> client-facing installations will have a number of shaded client artifacts 
> present.
> With those changes in place, the current implementation will include in the 
> hcat runtime a mix of shaded and non-shaded HBase artifacts, some of which 
> contain Hadoop classes rewritten to use a shaded version of protobuf. When 
> these mix with other Hadoop classes on the classpath that have not been 
> rewritten, hcat fails with errors that look like this:
> {code}
> Exception in thread "main" java.lang.ClassCastException: org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetFileInfoRequestProto cannot be cast to org.apache.hadoop.hbase.shaded.com.google.protobuf.Message
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:225)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy28.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:875)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy29.getFileInfo(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1643)
> at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1495)
> at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1492)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1507)
> at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1668)
> at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:686)
> at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:625)
> at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:557)
> at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:524)
> at org.apache.hive.hcatalog.cli.HCatCli.main(HCatCli.java:149)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
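For contrast, a minimal sketch of the pattern the title refers to: {{bin/hive}} asks the {{hbase}} launcher itself for the client classpath via {{hbase mapredcp}}, instead of globbing every jar under {{$HBASE_HOME/lib}}. The variable names below are illustrative, not the literal {{bin/hive}} code:

{code}
# Sketch only (illustrative names, not the exact bin/hive script): let the
# hbase launcher report the jars client/MapReduce code actually needs.
HBASE_BIN=${HBASE_BIN:-"$(command -v hbase)"}
if [ -n "${HBASE_BIN}" ] ; then
  # `hbase mapredcp` prints just the client-side dependency classpath, so
  # server-only, shell-only, and conflicting shaded artifacts stay out.
  hbase_dep_classpath=$("${HBASE_BIN}" mapredcp 2>/dev/null)
  if [ -n "${hbase_dep_classpath}" ] ; then
    export HADOOP_CLASSPATH="${HADOOP_CLASSPATH}:${hbase_dep_classpath}"
  fi
fi
{code}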

[jira] [Updated] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-05 Thread Sean Busbey (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HIVE-20077:
---
Status: Patch Available  (was: In Progress)

-v1
  - unchanged. rerunning QA.


[jira] [Updated] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-05 Thread Sean Busbey (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HIVE-20077:
---
Attachment: HIVE-20077.1.patch


[jira] [Updated] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-05 Thread Sean Busbey (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HIVE-20077:
---
Status: In Progress  (was: Patch Available)


[jira] [Commented] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-05 Thread Sean Busbey (JIRA)


[ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533679#comment-16533679 ]

Sean Busbey commented on HIVE-20077:


{quote}
Failed tests:

TestAlterTableMetadata - did not produce a TEST-*.xml file (likely timed out) (batchId=240)
TestAutoPurgeTables - did not produce a TEST-*.xml file (likely timed out) (batchId=240)
TestClearDanglingScratchDir - did not produce a TEST-*.xml file (likely timed out) (batchId=240)
TestLocationQueries - did not produce a TEST-*.xml file (likely timed out) (batchId=240)
TestReplicationScenariosAcidTables - did not produce a TEST-*.xml file (likely timed out) (batchId=240)
TestSemanticAnalyzerHookLoading - did not produce a TEST-*.xml file (likely timed out) (batchId=240)
TestSparkStatistics - did not produce a TEST-*.xml file (likely timed out) (batchId=240)
{quote}

The only connection to HCatalog code I can find in the failed tests is that 
itests' {{WarehouseInstance}} makes use of:

{code}
import org.apache.hive.hcatalog.api.repl.ReplicationV1CompatRule;
import org.apache.hive.hcatalog.listener.DbNotificationListener;
{code}

Looking at those classes, I have a hard time seeing how HBase classpath 
changes could reasonably impact them.

Is there an existing automated tool to get a rerun of just the failed tests? Or 
perhaps just batchId 240, since it seems like that one failed due to 
{{TestReplicationScenariosAcidTables}} hanging?
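For reference, one manual way to rerun a single suspect class outside the QA bot is Maven Surefire's test filter; a sketch, assuming the class lives under {{itests/hive-unit}}:

{code}
# Manual rerun of one suspect test class; -Dtest= is plain Maven Surefire,
# not a Hive-specific rerun tool. The module path is an assumption.
cd itests/hive-unit
mvn test -Dtest=TestReplicationScenariosAcidTables
{code}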


[jira] [Commented] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-05 Thread Sean Busbey (JIRA)


[ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16533644#comment-16533644 ]

Sean Busbey commented on HIVE-20077:


bq. ERROR: -1 due to no test(s) being added or modified.

I didn't see a way to expressly check the runtime environment in a way that 
would reflect this change. Any suggestions? 
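Absent a unit test, a hypothetical manual check (nothing below is an existing Hive test; the path and the grep are illustrative) is to trace the wrapper and inspect the classpath it assembles:

{code}
# Hypothetical smoke check: run the hcat wrapper with shell tracing and pull
# out the HBase entries it adds to HADOOP_CLASSPATH. Paths are illustrative.
HBASE_HOME=/usr/lib/hbase bash -x hcatalog/bin/hcat -e 'show tables;' 2>&1 \
  | grep -o 'HADOOP_CLASSPATH=[^ ]*' | tr ':' '\n' | grep -i hbase
{code}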


[jira] [Updated] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-03 Thread Sean Busbey (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HIVE-20077:
---
Status: Patch Available  (was: In Progress)

-v0
  - copy the hbase detection and jar finding from {{bin/hive}}
  - tested manually on a cluster with relevant HBase changes in place, via Pig running against hcat.
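A sketch of what that manual check might look like (the table name is made up; {{-useHCatalog}} is Pig's switch for pulling HCatalog onto its classpath):

{code}
# Rough shape of the manual verification: a Pig job reading through HCatalog
# on a cluster carrying the HBase shaded-client changes. Table name is made up.
pig -useHCatalog -e "a = LOAD 'default.some_table' USING org.apache.hive.hcatalog.pig.HCatLoader(); DUMP a;"
{code}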


[jira] [Updated] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-03 Thread Sean Busbey (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HIVE-20077:
---
Attachment: HIVE-20077.0.patch


[jira] [Updated] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-03 Thread Sean Busbey (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HIVE-20077:
---
Affects Version/s: 2.3.2


[jira] [Updated] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-03 Thread Sean Busbey (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey updated HIVE-20077:
---
Affects Version/s: 0.14.0


[jira] [Assigned] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-03 Thread Sean Busbey (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Busbey reassigned HIVE-20077:
--



[jira] [Work started] (HIVE-20077) hcat command should follow same pattern as hive cli for getting HBase jars

2018-07-03 Thread Sean Busbey (JIRA)


 [ https://issues.apache.org/jira/browse/HIVE-20077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HIVE-20077 started by Sean Busbey.
--

[jira] [Commented] (HIVE-16049) upgrade to jetty 9

2017-03-28 Thread Sean Busbey (JIRA)

[ https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15945383#comment-15945383 ]

Sean Busbey commented on HIVE-16049:


v3 looks good. Any easy way to check if the test failures are related besides 
rerunning them locally?

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Aihua Xu
> Attachments: HIVE-16049.0.patch, HIVE-16049.1.patch, 
> HIVE-16049.2.patch, HIVE-16049.3.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16281) Upgrade master branch to JDK8

2017-03-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938498#comment-15938498
 ] 

Sean Busbey commented on HIVE-16281:


Oh! You meant the Java version in the configs for the compiler plugin. Sorry; 
yeah, that's correct.

> Upgrade master branch to JDK8
> -
>
> Key: HIVE-16281
> URL: https://issues.apache.org/jira/browse/HIVE-16281
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-16281.1.patch, HIVE-16281.2.patch
>
>
> This is to track the JDK 8 upgrade work for the master branch.
> Here are threads for the discussion:
> https://lists.apache.org/thread.html/83d8235bc9547cc94a0d689580f20db4b946876b6d0369e31ea12b51@1460158490@%3Cdev.hive.apache.org%3E
> https://lists.apache.org/thread.html/dcd57844ceac7faf8975a00d5b8b1825ab5544d94734734aedc3840e@%3Cdev.hive.apache.org%3E
> JDK7 has reached the end of public updates, and some newer versions of 
> dependent libraries, like Jetty, require a newer JDK. It seems reasonable to 
> upgrade to JDK8 in 2.x.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16281) Upgrade master branch to JDK8

2017-03-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938493#comment-15938493
 ] 

Sean Busbey commented on HIVE-16281:


IIRC, you need the compiler plugin update because the version in the Apache 
parent pom is too old.

> Upgrade master branch to JDK8
> -
>
> Key: HIVE-16281
> URL: https://issues.apache.org/jira/browse/HIVE-16281
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-16281.1.patch, HIVE-16281.2.patch
>
>
> This is to track the JDK 8 upgrade work for the master branch.
> Here are threads for the discussion:
> https://lists.apache.org/thread.html/83d8235bc9547cc94a0d689580f20db4b946876b6d0369e31ea12b51@1460158490@%3Cdev.hive.apache.org%3E
> https://lists.apache.org/thread.html/dcd57844ceac7faf8975a00d5b8b1825ab5544d94734734aedc3840e@%3Cdev.hive.apache.org%3E
> JDK7 has reached the end of public updates, and some newer versions of 
> dependent libraries, like Jetty, require a newer JDK. It seems reasonable to 
> upgrade to JDK8 in 2.x.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16281) Upgrade master branch to JDK8

2017-03-23 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15938271#comment-15938271
 ] 

Sean Busbey commented on HIVE-16281:


+1 (non-binding). The change looks good, and the test failure is unrelated 
AFAICT (I can't find the specific failure details in the logs, but the same 
test didn't fail on the precommit checks over on HIVE-16049 for the patch that 
included this change).

> Upgrade master branch to JDK8
> -
>
> Key: HIVE-16281
> URL: https://issues.apache.org/jira/browse/HIVE-16281
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-16281.1.patch
>
>
> This is to track the JDK 8 upgrade work for the master branch.
> Here are threads for the discussion:
> https://lists.apache.org/thread.html/83d8235bc9547cc94a0d689580f20db4b946876b6d0369e31ea12b51@1460158490@%3Cdev.hive.apache.org%3E
> https://lists.apache.org/thread.html/dcd57844ceac7faf8975a00d5b8b1825ab5544d94734734aedc3840e@%3Cdev.hive.apache.org%3E
> JDK7 has reached the end of public updates, and some newer versions of 
> dependent libraries, like Jetty, require a newer JDK. It seems reasonable to 
> upgrade to JDK8 in 2.x.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16049) upgrade to jetty 9

2017-03-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15933416#comment-15933416
 ] 

Sean Busbey commented on HIVE-16049:


Change looks reasonable to me. Was the update to exclude stuff in 
{{hcatalog/webhcat/svr}} the only difference?

Any particular testing folks would like to see?

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Aihua Xu
> Attachments: HIVE-16049.0.patch, HIVE-16049.1.patch, 
> HIVE-16049.2.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16049) upgrade to jetty 9

2017-03-09 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HIVE-16049:
--

Assignee: (was: Sean Busbey)

Unassigning this from myself at the moment because I'm caught up in other 
things.

The remaining work is essentially cleaning up the runtime classpath for the 
failing tests. It should be similar to what I did in the service module: the 
various ways we pull in transitive versions of the servlet, JSP, Jetty, etc. 
jars need to be excluded, and the ones we actually use for Jetty need to be 
included expressly; see the sketch below.
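
(As a starting point for that audit, a hedged sketch; the {{-Dincludes}} 
patterns below are illustrative group ids, not an exhaustive list of 
offenders.)

{code}
# Sketch: list which modules drag in transitive servlet/jsp/jetty jars,
# so the right <exclusions> can be added to their poms.
mvn dependency:tree \
  -Dincludes='javax.servlet:*,org.eclipse.jetty:*,org.mortbay.jetty:*'
{code}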

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
> Attachments: HIVE-16049.0.patch, HIVE-16049.1.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HIVE-16049) upgrade to jetty 9

2017-03-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15897425#comment-15897425
 ] 

Sean Busbey edited comment on HIVE-16049 at 3/6/17 2:51 PM:


-01

  - updated dependencies for some additional jetty ones needed in hive-service
  - removed non-compatible sources of javax.servlet
  - updated to jdk 8 (updated to latest apache parent pom in the process)

I didn't see a [RESULT] for the vote Thejas started about jdk8+ yet, but it 
looks like it has consensus.

This patch passes doing {{mvn -DskipTests install}} at the top level followed 
by {{mvn verify}} of all the changed modules (common, hcatalog, llap-server, 
service, spark-client).

If there's additional testing folks would like to see beyond whatever the 
precommit process will check, let me know.


was (Author: busbey):
-01

  - updated dependencies for some additional jetty ones needed in hive-service
  - removed non-compatible sources of javax.servlet
  - updated to jdk 8 (updated to latest apache parent pom in the process)

I didn't see a [RESULT] for the vote Thejas started about jdk8+ yet, but it 
looks like it has consensus.

This patch passes doing {{mvn -DskipTests install}} at the top level followed 
by {{mvn verify}} of all the changed modules (common, hcatalog, llap-server, 
service, spark-client).

If there's additional testing beyond whatever the precommit process will check, 
let me know.

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HIVE-16049.0.patch, HIVE-16049.1.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16049) upgrade to jetty 9

2017-03-06 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HIVE-16049:
---
Release Note: JDK8+ is now required. Embedded web services now rely on 
Jetty 9; downstream users who rely on Hive's classpath for their Jetty jars 
will need to update accordingly.
  Status: Patch Available  (was: In Progress)

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HIVE-16049.0.patch, HIVE-16049.1.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16049) upgrade to jetty 9

2017-03-06 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HIVE-16049:
---
Attachment: HIVE-16049.1.patch

-01

  - updated dependencies for some additional jetty ones needed in hive-service
  - removed non-compatible sources of javax.servlet
  - updated to jdk 8 (updated to latest apache parent pom in the process)

I didn't see a [RESULT] for the vote Thejas started about jdk8+ yet, but it 
looks like it has consensus.

This patch passes doing {{mvn -DskipTests install}} at the top level followed 
by {{mvn verify}} of all the changed modules (common, hcatalog, llap-server, 
service, spark-client).

If there's additional testing folks would like to see beyond whatever the 
precommit process will check, let me know.
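
(Spelled out as commands, a sketch of the local verification described above; 
it assumes a standard multi-module checkout where each listed module is a 
top-level directory.)

{code}
# Sketch: install everything without tests, then run verify in each
# module the patch touched.
mvn -DskipTests install
for module in common hcatalog llap-server service spark-client; do
  mvn -pl "${module}" verify
done
{code}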

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HIVE-16049.0.patch, HIVE-16049.1.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16049) upgrade to jetty 9

2017-02-28 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888095#comment-15888095
 ] 

Sean Busbey commented on HIVE-16049:


For future reference, [this is the vote thread|https://s.apache.org/rgI4] that 
Thejas mentioned above.

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HIVE-16049.0.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16049) upgrade to jetty 9

2017-02-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HIVE-16049:
---
Status: In Progress  (was: Patch Available)

I have to futz with more dependency versions; the current jetty 9 updates fail 
in one of the hive-service tests because of the wrong servlet jar version.
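
(A hedged way to see which servlet artifact actually wins in hive-service, 
assuming the module lives in the top-level {{service}} directory; the filter 
pattern is illustrative.)

{code}
# Sketch: inspect hive-service's resolved dependencies to spot the wrong
# servlet jar version.
mvn -pl service dependency:tree -Dincludes='javax.servlet:*'
{code}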

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HIVE-16049.0.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HIVE-16049) upgrade to jetty 9

2017-02-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15887055#comment-15887055
 ] 

Sean Busbey edited comment on HIVE-16049 at 2/28/17 2:01 AM:
-

The [thread on jdk8 support|https://s.apache.org/vBpR] from April to June 2016 
seems to have consensus on moving to jdk8+ for master / then-2.2.0-SNAP. AFAICT 
the update just didn't happen.

For now, I'll include updating the pom to mandate jdk8. If that works for 
folks, I'll update the description / release note to include the update to 
jdk8. If folks would prefer a separate JIRA just say the word.


was (Author: busbey):
The [thread on jdk8 support|https://s.apache.org/vBpR] from April to June 2016 
seems to have consensus on moving to jdk8+ for master / then-2.2.0-SNAP. 
AFAICT the update just didn't happen.

For now, I'll include updating the pom to mandate jdk8. If that works for 
folks, I'll update the description / release note to include the update to 
jdk8. If folks would prefer a separate JIRA just say the word.

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HIVE-16049.0.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16049) upgrade to jetty 9

2017-02-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15887055#comment-15887055
 ] 

Sean Busbey commented on HIVE-16049:


The [thread on jdk8 support|https://s.apache.org/vBpR] from April to June 2016 
seems to have consensus on moving to jdk8+ for master / then-2.2.0-SNAP. 
AFAICT the update just didn't happen.

For now, I'll include updating the pom to mandate jdk8. If that works for 
folks, I'll update the description / release note to include the update to 
jdk8. If folks would prefer a separate JIRA just say the word.

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HIVE-16049.0.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16049) upgrade to jetty 9

2017-02-27 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15886935#comment-15886935
 ] 

Sean Busbey commented on HIVE-16049:


Jetty 9.3 (the version used in this patch, also used in Hadoop and HBase) 
requires Java 1.8. [Looking at the Jetty docs on version 
reqs|http://www.eclipse.org/jetty/documentation/current/what-jetty-version.html],
 I could switch to Jetty 9.2 and it ought to work for Java 1.7, though I don't 
know yet how compatible it is when colocated with 9.3.

Is there a branch I could target that is Java 1.8+ only? If not, can anyone 
point me to prior dev@ discussion(s) about java version reqs so I can check 
background before making a case to add a Java 1.8+ only branch?

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HIVE-16049.0.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16049) upgrade to jetty 9

2017-02-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HIVE-16049:
---
Status: Patch Available  (was: In Progress)

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HIVE-16049.0.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16049) upgrade to jetty 9

2017-02-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HIVE-16049:
---
Attachment: HIVE-16049.0.patch

-00

  - Replace jetty-all from jetty 7 with the needed jetty 9 artifacts
  - Update API usage from jetty 7 to jetty 9 while keeping the same behavior
  - Leave comments in for API use that no longer exists in jetty 9 (I think)


This first patch gets through {{mvn -DskipTests install}}; working on the tests 
now.

> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HIVE-16049.0.patch
>
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (HIVE-16049) upgrade to jetty 9

2017-02-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-16049 started by Sean Busbey.
--
> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16049) upgrade to jetty 9

2017-02-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HIVE-16049:
--


> upgrade to jetty 9
> --
>
> Key: HIVE-16049
> URL: https://issues.apache.org/jira/browse/HIVE-16049
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>
> Jetty 7 has been deprecated for a couple of years now. Hadoop and HBase have 
> both updated to Jetty 9 for their next major releases, which will complicate 
> classpath concerns.
> Proactively update to Jetty 9 in the few places we use a web server.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-5302) PartitionPruner logs warning on Avro non-partitioned data

2016-08-08 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HIVE-5302:
--
Resolution: Won't Fix
  Assignee: (was: Sean Busbey)
Status: Resolved  (was: Patch Available)

> PartitionPruner logs warning on Avro non-partitioned data
> -
>
> Key: HIVE-5302
> URL: https://issues.apache.org/jira/browse/HIVE-5302
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.11.0
>Reporter: Sean Busbey
>  Labels: avro
> Attachments: HIVE-5302.1-branch-0.12.patch.txt, 
> HIVE-5302.1.patch.txt, HIVE-5302.1.patch.txt
>
>
> While updating HIVE-3585 I found a test case that causes the failure in the 
> MetaStoreUtils partition retrieval from back in HIVE-4789.
> In this case, the failure is triggered when the partition pruner is handed a 
> non-partitioned table and has to construct a pseudo-partition.
> e.g.
> {code}
>   INSERT OVERWRITE TABLE partitioned_table PARTITION(col) SELECT id, foo, col 
> FROM non_partitioned_table WHERE col <= 9;
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12721) Add UUID built in function

2016-05-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15302648#comment-15302648
 ] 

Sean Busbey commented on HIVE-12721:


Can a committer with access to the QA job relaunch it to see whether these 
failures are related?

> Add UUID built in function
> --
>
> Key: HIVE-12721
> URL: https://issues.apache.org/jira/browse/HIVE-12721
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Jeremy Beard
>Assignee: Jeremy Beard
> Attachments: HIVE-12721.1.patch, HIVE-12721.2.patch, HIVE-12721.patch
>
>
> A UUID function would be very useful for ETL jobs that need to generate 
> surrogate keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12721) Add UUID built in function

2016-05-26 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15302607#comment-15302607
 ] 

Sean Busbey commented on HIVE-12721:


Patch looks like a straightforward implementation. [~jbeard], do you know if 
the test failures are related?

> Add UUID built in function
> --
>
> Key: HIVE-12721
> URL: https://issues.apache.org/jira/browse/HIVE-12721
> Project: Hive
>  Issue Type: Improvement
>  Components: UDF
>Reporter: Jeremy Beard
>Assignee: Jeremy Beard
> Attachments: HIVE-12721.1.patch, HIVE-12721.2.patch, HIVE-12721.patch
>
>
> A UUID function would be very useful for ETL jobs that need to generate 
> surrogate keys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-12002) correct implementation typo

2015-09-30 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14938732#comment-14938732
 ] 

Sean Busbey commented on HIVE-12002:


Patch looks good; the changes are only exposed in javadocs and a log message at 
INFO.

The change in the log message might require updating some expected test 
outputs. :/
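
(If so, a hedged sketch of regenerating an expected output file, following the 
usual Hive qtest workflow; the {{-Dtest.output.overwrite}} flag is the standard 
mechanism but worth double-checking on this branch, and the qfile name below is 
a placeholder.)

{code}
# Sketch: regenerate the expected output of a qtest affected by the
# log-message change. The .q file name is a placeholder.
cd itests/qtest
mvn test -Dtest=TestCliDriver -Dqfile=some_affected_test.q \
  -Dtest.output.overwrite=true
{code}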

> correct implementation typo
> ---
>
> Key: HIVE-12002
> URL: https://issues.apache.org/jira/browse/HIVE-12002
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline, HCatalog, Metastore
>Affects Versions: 1.2.1
>Reporter: Alex Moundalexis
>Assignee: Alex Moundalexis
>Priority: Trivial
>  Labels: newbie, typo
> Attachments: HIVE-12002.patch
>
>
> The term "implemenation" is seen in HiveMetaScore INFO logs. Correcting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)