[jira] [Updated] (HIVE-13648) ORC Schema Evolution doesn't support same type conversion for VARCHAR, CHAR, or DECIMAL when maxLength or precision/scale is different
[ https://issues.apache.org/jira/browse/HIVE-13648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-13648: Status: Patch Available (was: Open) > ORC Schema Evolution doesn't support same type conversion for VARCHAR, CHAR, > or DECIMAL when maxLength or precision/scale is different > -- > > Key: HIVE-13648 > URL: https://issues.apache.org/jira/browse/HIVE-13648 > Project: Hive > Issue Type: Bug >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-13648.01.patch > > > E.g. when a data file that is copied in has a VARCHAR maxLength that doesn't match > the DDL's maxLength. This error is produced: > {code} > java.io.IOException: ORC does not support type conversion from file type > varchar(145) (36) to reader type varchar(114) (36) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13648) ORC Schema Evolution doesn't support same type conversion for VARCHAR, CHAR, or DECIMAL when maxLength or precision/scale is different
[ https://issues.apache.org/jira/browse/HIVE-13648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-13648: Attachment: HIVE-13648.01.patch > ORC Schema Evolution doesn't support same type conversion for VARCHAR, CHAR, > or DECIMAL when maxLength or precision/scale is different > -- > > Key: HIVE-13648 > URL: https://issues.apache.org/jira/browse/HIVE-13648 > Project: Hive > Issue Type: Bug >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-13648.01.patch > > > E.g. when a data file that is copied in has a VARCHAR maxLength that doesn't match > the DDL's maxLength. This error is produced: > {code} > java.io.IOException: ORC does not support type conversion from file type > varchar(145) (36) to reader type varchar(114) (36) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13897) don't support comparison operator for variable of decimal or double type
[ https://issues.apache.org/jira/browse/HIVE-13897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] longgeligelong updated HIVE-13897: -- Affects Version/s: (was: 2.2.0) 2.0.1 > don't support comparison operator for variable of decimal or double type > > > Key: HIVE-13897 > URL: https://issues.apache.org/jira/browse/HIVE-13897 > Project: Hive > Issue Type: Bug > Components: hpl/sql >Affects Versions: 2.0.1 >Reporter: longgeligelong > > decimal can't compare to decimal > decimal can't compare to double > decimal can't compare to integer > double can't compare to integer -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13885) Hive session close is not resetting thread name
[ https://issues.apache.org/jira/browse/HIVE-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307417#comment-15307417 ] Rajat Khandelwal commented on HIVE-13885: - Here is a logical explanation of why this change makes sense. HiveSessionImpl has a SessionState object (I will call the instance sessionState). On HiveSessionImpl.release, sessionState.resetThreadNames() is called, but the call is guarded by a null check on the sessionState object. Now in close, we are doing sessionState.close(); sessionState = null; this.release();. The last call would have reset the thread name if sessionState hadn't been set to null just before it. Thread names are updated on acquire and reset on release. In close, we make one acquire call at the beginning and one release call at the end, which means the thread name is updated but never reset (since sessionState has been set to null before the release call). So every thread that closes a session gets an extra id appended to its name that is never removed. Hence the thread names keep growing and the logs are flooded with them. > Hive session close is not resetting thread name > --- > > Key: HIVE-13885 > URL: https://issues.apache.org/jira/browse/HIVE-13885 > Project: Hive > Issue Type: Bug >Reporter: Rajat Khandelwal >Assignee: Rajat Khandelwal > Fix For: 2.1.0 > > Attachments: HIVE-13885.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
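The acquire/release imbalance described in the comment above can be sketched in isolation. This is illustrative code with hypothetical class and method names, not the actual HiveSessionImpl source: nulling the field before release() means the null-guarded reset never runs, so the thread keeps the appended session id.

```java
// Sketch of the thread-name leak: release() guards the reset with a null
// check, so nulling the field before calling release() skips the reset.
public class SessionCloseSketch {
    static class SessionState {
        private final String sessionId;
        private String origName;
        SessionState(String id) { this.sessionId = id; }
        void updateThreadName() {               // done on acquire
            origName = Thread.currentThread().getName();
            Thread.currentThread().setName(origName + " " + sessionId);
        }
        void resetThreadName() {                // done on release
            Thread.currentThread().setName(origName);
        }
    }

    private SessionState sessionState = new SessionState("sess-1");

    void release() {
        if (sessionState != null) {             // the null check from the comment
            sessionState.resetThreadName();
        }
    }

    // Buggy ordering: the field is nulled before release(), so the
    // appended session id is never removed from the thread name.
    public void closeBuggy() {
        sessionState.updateThreadName();        // acquire at start of close()
        sessionState = null;                    // field nulled too early
        release();                              // reset is skipped
    }

    // Fixed ordering: release() first, then null the field.
    public void closeFixed() {
        sessionState.updateThreadName();
        release();
        sessionState = null;
    }

    public static void main(String[] args) {
        String original = Thread.currentThread().getName();
        new SessionCloseSketch().closeBuggy();
        boolean leaked = !Thread.currentThread().getName().equals(original);
        Thread.currentThread().setName(original);   // restore before next check
        new SessionCloseSketch().closeFixed();
        boolean clean = Thread.currentThread().getName().equals(original);
        System.out.println("leaked=" + leaked + " cleanAfterFix=" + clean);
    }
}
```

Running the sketch shows the buggy ordering leaving the session id on the thread name while the fixed ordering restores it.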
[jira] [Updated] (HIVE-13897) don't support comparison operator for variable of decimal or double type
[ https://issues.apache.org/jira/browse/HIVE-13897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] longgeligelong updated HIVE-13897: -- Attachment: HIVE-13897.patch > don't support comparison operator for variable of decimal or double type > > > Key: HIVE-13897 > URL: https://issues.apache.org/jira/browse/HIVE-13897 > Project: Hive > Issue Type: Bug > Components: hpl/sql >Affects Versions: 2.0.1 >Reporter: longgeligelong > Attachments: HIVE-13897.patch > > > decimal can't compare to decimal > decimal can't compare to double > decimal can't compare to integer > double can't compare to integer -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13897) don't support comparison operator for variable of decimal or double type
[ https://issues.apache.org/jira/browse/HIVE-13897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] longgeligelong updated HIVE-13897: -- Attachment: (was: HIVE-13897.patch) > don't support comparison operator for variable of decimal or double type > > > Key: HIVE-13897 > URL: https://issues.apache.org/jira/browse/HIVE-13897 > Project: Hive > Issue Type: Bug > Components: hpl/sql >Affects Versions: 2.0.1 >Reporter: longgeligelong > > decimal can't compare to decimal > decimal can't compare to double > decimal can't compare to integer > double can't compare to integer -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11233) Include Apache Phoenix support in HBaseStorageHandler
[ https://issues.apache.org/jira/browse/HIVE-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307466#comment-15307466 ] Hive QA commented on HIVE-11233: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12806127/HIVE-11233.4.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 10190 tests executed *Failed tests:* {noformat} TestHWISessionManager - did not produce a TEST-*.xml file TestJdbcWithMiniHA - did not produce a TEST-*.xml file TestJdbcWithMiniMr - did not produce a TEST-*.xml file TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_external_table_ppd org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_binary_storage_queries org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec org.apache.hadoop.hive.ql.TestTxnCommands.testSimpleAcidInsert org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testLocksInSubquery org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.testPigPopulation {noformat} Test results: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/463/testReport Console output: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/463/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-463/ Messages: {noformat} Executing 
org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 15 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12806127 - PreCommit-HIVE-MASTER-Build > Include Apache Phoenix support in HBaseStorageHandler > - > > Key: HIVE-11233 > URL: https://issues.apache.org/jira/browse/HIVE-11233 > Project: Hive > Issue Type: New Feature > Components: HBase Handler >Affects Versions: 1.2.1, 2.0.0 >Reporter: Svetozar Ivanov >Assignee: Svetozar Ivanov > Labels: Binary, Hbase, Numeric, Phoenix, Sortable > Attachments: HIVE-11233-branch-1.2.patch, > HIVE-11233-branch-2.0.patch, HIVE-11233.1.patch, HIVE-11233.2.patch, > HIVE-11233.3.patch, HIVE-11233.4.patch, HIVE-11233.patch > > > Currently HBaseStorageHandler doesn't provide a mechanism for storing binary > sortable keys and values. This is necessary when a given HBase table is used for > persistence by both Apache Hive and Apache Phoenix. That way, all byte arrays > read or written by Hive will be compatible with the binary sortable format used > in Phoenix. > The major difference turns out to be in the numeric data types, according to the > officially provided documentation - > https://phoenix.apache.org/language/datatypes.html. 
> That's how I'm using it in my code:
> {code}
> private static String buildWithSerDeProperties(TableDescriptor tableDescriptor) {
>     Map<String, String> serdePropertiesMap = new HashMap<>();
>     serdePropertiesMap.put(HBaseSerDe.HBASE_TABLE_NAME, tableDescriptor.getTableName());
>     serdePropertiesMap.put(HBaseSerDe.HBASE_TABLE_DEFAULT_STORAGE_TYPE, BINARY_STORAGE_TYPE);
>     serdePropertiesMap.put(HBaseSerDe.HBASE_COLUMNS_MAPPING, buildHBaseColumnsDefinition(tableDescriptor));
>     serdePropertiesMap.put(HBaseSerDe.HBASE_VALUE_FACTORY_CLASS, PhoenixValueFactory.class.getName());
>     /* Use different key factory for simple and composite primary key */
>     if (tableDescriptor.getPkDescriptors().size() == 1) {
>         serdePropertiesMap.put(HBaseSerDe.HBASE_KEY_FACTORY_CLASS, PhoenixKeyFactory.class.getName());
>     } else {
>         serdePropertiesMap.put(HBaseSerDe.HBASE_COMPOSITE_KEY_FACTORY, PhoenixCompositeKeyFactory.class.getName());
>     }
>     String serDeProperties = serdePropertiesMap.entrySet().stream()
>         .map(e -> quoteInSingleQuotes(e.getKey()) + " = " + quoteInSingleQuotes(e.getValue()))
>         .collect(Collectors.joining(COLUMNS_SEPARATOR));
>     logger.debug("SERDEPROPERTIES are [{}]", serDeProperties);
>     return serDeProperties;
> }
> {code}
[jira] [Commented] (HIVE-13847) Avoid file open call in RecordReaderUtils as the stream is already available
[ https://issues.apache.org/jira/browse/HIVE-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307467#comment-15307467 ] Hive QA commented on HIVE-13847: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12806111/HIVE-13847.1.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/464/testReport Console output: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/464/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-464/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.8.0_25 ]] + export JAVA_HOME=/usr/java/jdk1.8.0_25 + JAVA_HOME=/usr/java/jdk1.8.0_25 + export PATH=/usr/java/jdk1.8.0_25/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + PATH=/usr/java/jdk1.8.0_25/bin/:/usr/lib64/qt-3.3/bin:/usr/local/apache-maven-3.0.5/bin:/usr/java/jdk1.7.0_45-cloudera/bin:/usr/local/apache-ant-1.9.1/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hiveptest/bin + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-MASTER-Build-464/source-prep.txt + [[ false == 
\t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! -d apache-github-source-source ]] + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 4d988b1 HIVE-13837: current_timestamp() output format is different in some cases (Pengcheng Xiong, reviewed by Jason Dere) + git clean -f -d Removing hbase-handler/src/java/org/apache/hadoop/hive/hbase/phoenix/ Removing serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBaseWrapper.java Removing serde/src/java/org/apache/hadoop/hive/serde2/lazydio/LazyDioDate.java Removing serde/src/java/org/apache/hadoop/hive/serde2/lazydio/LazyDioTimestamp.java + git checkout master Already on 'master' + git reset --hard origin/master HEAD is now at 4d988b1 HIVE-13837: current_timestamp() output format is different in some cases (Pengcheng Xiong, reviewed by Jason Dere) + git merge --ff-only origin/master Already up-to-date. + git gc + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12806111 - PreCommit-HIVE-MASTER-Build > Avoid file open call in RecordReaderUtils as the stream is already available > > > Key: HIVE-13847 > URL: https://issues.apache.org/jira/browse/HIVE-13847 > Project: Hive > Issue Type: Improvement > Components: ORC >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HIVE-13847.1.patch > > > File open call in RecordReaderUtils::readRowIndex can be avoided. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13897) don't support comparison operator for variable of decimal or double type
[ https://issues.apache.org/jira/browse/HIVE-13897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] longgeligelong updated HIVE-13897: -- Attachment: HIVE-13897.patch > don't support comparison operator for variable of decimal or double type > > > Key: HIVE-13897 > URL: https://issues.apache.org/jira/browse/HIVE-13897 > Project: Hive > Issue Type: Bug > Components: hpl/sql >Affects Versions: 2.0.1 >Reporter: longgeligelong > Attachments: HIVE-13897.patch > > > decimal can't compare to decimal > decimal can't compare to double > decimal can't compare to integer > double can't compare to integer -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13898) don't support add subtract multiply or divide for variable of decimal or double type
[ https://issues.apache.org/jira/browse/HIVE-13898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] longgeligelong updated HIVE-13898: -- Attachment: HIVE-13898.patch > don't support add subtract multiply or divide for variable of decimal or > double type > > > Key: HIVE-13898 > URL: https://issues.apache.org/jira/browse/HIVE-13898 > Project: Hive > Issue Type: Bug > Components: hpl/sql >Affects Versions: 2.0.1 >Reporter: longgeligelong > Attachments: HIVE-13898.patch > > > don't support > int + decimal > int + double > decimal + double > decimal + decimal > double + double -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12779) Buffer underflow when inserting data to table
[ https://issues.apache.org/jira/browse/HIVE-12779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307573#comment-15307573 ] Oleksiy Sayankin commented on HIVE-12779: - Alina has found a workaround for this issue.
*ROOT-CAUSE:* Consider the method
{code}
protected int require (int required) throws KryoException
{code}
from the class com.esotericsoftware.kryo.io.Input, where the exception occurs:
{code}
int remaining = limit - position;
if (remaining >= required) return remaining;
if (required > capacity)
    throw new KryoException("Buffer too small: capacity: " + capacity + ", required: " + required);
int count;
// Try to fill the buffer.
if (remaining > 0) {
    count = fill(buffer, limit, capacity - limit);
    if (count == -1) throw new KryoException("Buffer underflow.");
{code}
We can see that the "Buffer underflow." exception occurs when count == -1. So let us look at the method fill(byte[] buffer, int offset, int count) in detail:
{code}
if (inputStream == null) return -1;
try {
    return inputStream.read(buffer, offset, count);
} catch (IOException ex) {
    throw new KryoException(ex);
}
{code}
It returns -1 either when inputStream == null or from inputStream.read(buffer, offset, count). We know for certain that inputStream cannot be null here because of the constructor:
{code}
public Input (InputStream inputStream) {
    this(4096);
    if (inputStream == null) throw new IllegalArgumentException("inputStream cannot be null.");
    this.inputStream = inputStream;
}
{code}
From the Java docs we know that inputStream.read(buffer, offset, count) returns -1 when no byte is available because the stream is at end of file. Hence we suspect an error in HDFS causes -1 to be returned here. Skipping the file system as query plan storage and sending the plan via RPC directly avoids the issue.
*SOLUTION:* As a workaround, set in hive-site.xml:
{code}
<property>
  <name>hive.rpc.query.plan</name>
  <value>true</value>
</property>
{code}
This property defines whether to send the query plan via local resource or RPC. > Buffer underflow when inserting data to table > - > > Key: HIVE-12779 > URL: https://issues.apache.org/jira/browse/HIVE-12779 > Project: Hive > Issue Type: Bug > Components: Database/Schema, SQL > Environment: CDH 5.4.9 >Reporter: Ming Hsuan Tu >Assignee: Alan Gates > > I face a buffer underflow problem when inserting data to table from hive > 1.1.0. > the block size is 128 MB and the data size is only 10MB, but it gives me 891 > mappers. > Task with the most failures(4): > - > Task ID: > task_1451989578563_0001_m_08 > URL: > > http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1451989578563_0001&tipid=task_1451989578563_0001_m_08 > - > Diagnostic Messages for this Task: > Error: java.lang.RuntimeException: Failed to load plan: > hdfs://tpe-nn-3-1:8020/tmp/hive/alec.tu/af798488-dbf5-45da-8adb-e4f2ddde1242/hive_2016-01-05_18-34-26_864_3947114301988950007-1/-mr-10004/bb86c923-0dca-43cd-aa5d-ef575d764e06/map.xml: > org.apache.hive.com.esotericsoftware.kryo.KryoException: Buffer underflow. 
> at > org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:450) > at > org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:296) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.init(HiveInputFormat.java:268) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:234) > at > org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:701) > at > org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:169) > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) > Caused by: org.apache.hive.com.esotericsoftware.kryo.KryoException: Buffer > underflow. > at > org.apache.hive.com.esotericsoftware.kryo.io.Input.require(Input.java:181) > at > org.apache.hive.com.esotericsoftware.kryo.io.Input.readBoolean(Input.java:783) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.UnsafeCacheFields$UnsafeBooleanField.read(UnsafeCacheFields.java:120) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:507) > at > org.apache.hi
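The failure mode analyzed in the comment above, where read() returning -1 at end of stream surfaces as a "Buffer underflow" error, can be reproduced with a minimal sketch. This is illustrative code, not Hive or Kryo source; the class and method names mirror the quoted Kryo logic only in shape:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of the root cause: a reader that, like Kryo's Input.require(),
// treats read() == -1 (end of stream) as a hard "Buffer underflow" error.
public class UnderflowSketch {

    // Mirrors Kryo's fill(): delegates to InputStream.read(), which
    // returns -1 when the stream is already at end of file.
    static int fill(InputStream in, byte[] buf, int off, int len) throws IOException {
        return in.read(buf, off, len);
    }

    // Mirrors Kryo's require(): a -1 from fill() becomes the exception
    // seen in the stack trace above.
    static int require(InputStream in, byte[] buf, int required) throws IOException {
        int count = fill(in, buf, 0, required);
        if (count == -1) {
            throw new IOException("Buffer underflow.");
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        byte[] buf = new byte[16];
        // An empty stream stands in for a truncated/unreadable plan file.
        InputStream empty = new ByteArrayInputStream(new byte[0]);
        try {
            require(empty, buf, 4);
        } catch (IOException e) {
            System.out.println(e.getMessage()); // prints: Buffer underflow.
        }
    }
}
```

The sketch shows why the error points at the plan file rather than the deserializer itself: the buffer logic is sound, and the underflow only appears when the underlying stream yields fewer bytes than the serialized plan requires.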
[jira] [Commented] (HIVE-13518) Hive on Tez: Shuffle joins do not choose the right 'big' table.
[ https://issues.apache.org/jira/browse/HIVE-13518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307588#comment-15307588 ] Hive QA commented on HIVE-13518: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12806300/HIVE-13518.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 10172 tests executed *Failed tests:* {noformat} TestHWISessionManager - did not produce a TEST-*.xml file TestJdbcWithMiniHA - did not produce a TEST-*.xml file TestJdbcWithMiniMr - did not produce a TEST-*.xml file TestMiniTezCliDriver-insert_values_non_partitioned.q-schema_evol_orc_nonvec_mapwork_part.q-union5.q-and-12-more - did not produce a TEST-*.xml file TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testLocksInSubquery {noformat} Test results: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/467/testReport Console output: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/467/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-467/ Messages: {noformat} Executing 
org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12806300 - PreCommit-HIVE-MASTER-Build > Hive on Tez: Shuffle joins do not choose the right 'big' table. > --- > > Key: HIVE-13518 > URL: https://issues.apache.org/jira/browse/HIVE-13518 > Project: Hive > Issue Type: Bug >Reporter: Vikram Dixit K >Assignee: Vikram Dixit K > Attachments: HIVE-13518.1.patch, HIVE-13518.2.patch, > HIVE-13518.3.patch > > > Currently the big table is always assumed to be at position 0 but this isn't > efficient for some queries as the big table at position 1 could have a lot > more keys/skew. We already have a mechanism of choosing the big table that > can be leveraged to make the right choice. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13852) NPE in TaskLocationHints during LLAP GetSplits request
[ https://issues.apache.org/jira/browse/HIVE-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307737#comment-15307737 ] Hive QA commented on HIVE-13852: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12806215/HIVE-13852.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 10187 tests executed *Failed tests:* {noformat} TestHWISessionManager - did not produce a TEST-*.xml file TestJdbcWithMiniHA - did not produce a TEST-*.xml file TestJdbcWithMiniMr - did not produce a TEST-*.xml file TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testLocksInSubquery {noformat} Test results: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/468/testReport Console output: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/468/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-468/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing 
org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12806215 - PreCommit-HIVE-MASTER-Build > NPE in TaskLocationHints during LLAP GetSplits request > -- > > Key: HIVE-13852 > URL: https://issues.apache.org/jira/browse/HIVE-13852 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Attachments: HIVE-13852.1.patch > > > {noformat} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.io.IOException: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:194) > at org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:116) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:434) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:426) > at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:144) > ... 15 more > Caused by: java.io.IOException: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:366) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:185) > ... 23 more > Caused by: java.lang.NullPointerException: null > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:344) > ... 
24 more > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13792) Show create table should not show stats info in the table properties
[ https://issues.apache.org/jira/browse/HIVE-13792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13792: Attachment: HIVE-13792.final.patch > Show create table should not show stats info in the table properties > > > Key: HIVE-13792 > URL: https://issues.apache.org/jira/browse/HIVE-13792 > Project: Hive > Issue Type: Sub-task > Components: Query Planning >Affects Versions: 2.1.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Fix For: 2.2.0 > > Attachments: HIVE-13792.1.patch, HIVE-13792.2.patch, > HIVE-13792.3.patch, HIVE-13792.final.patch > > > From the test > org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_queries > failure, we are printing table stats in show create table parameters. This > info should be skipped since it would be incorrect when you just copy them to > create a table. > {noformat} > PREHOOK: query: SHOW CREATE TABLE hbase_table_1_like > PREHOOK: type: SHOW_CREATETABLE > PREHOOK: Input: default@hbase_table_1_like > POSTHOOK: query: SHOW CREATE TABLE hbase_table_1_like > POSTHOOK: type: SHOW_CREATETABLE > POSTHOOK: Input: default@hbase_table_1_like > CREATE EXTERNAL TABLE `hbase_table_1_like`( > `key` int COMMENT 'It is a column key', > `value` string COMMENT 'It is the column string value') > ROW FORMAT SERDE > 'org.apache.hadoop.hive.hbase.HBaseSerDe' > STORED BY > 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' > WITH SERDEPROPERTIES ( > 'hbase.columns.mapping'='cf:string', > 'serialization.format'='1') > TBLPROPERTIES ( > 'COLUMN_STATS_ACCURATE'='{\"BASIC_STATS\":\"true\"}', > 'hbase.table.name'='hbase_table_0', > 'numFiles'='0', > 'numRows'='0', > 'rawDataSize'='0', > 'totalSize'='0', > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13863) Improve AnnotateWithStatistics with support for cartesian product
[ https://issues.apache.org/jira/browse/HIVE-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13863: --- Attachment: (was: HIVE-13863.patch) > Improve AnnotateWithStatistics with support for cartesian product > - > > Key: HIVE-13863 > URL: https://issues.apache.org/jira/browse/HIVE-13863 > Project: Hive > Issue Type: Bug > Components: Statistics >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Fix For: 2.1.0 > > > Currently cartesian product stats based on cardinality of inputs are not > inferred correctly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13863) Improve AnnotateWithStatistics with support for cartesian product
[ https://issues.apache.org/jira/browse/HIVE-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13863: --- Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Regenerated golden files, pushed to master and branch-2.1. Thanks for reviewing [~ashutoshc]! > Improve AnnotateWithStatistics with support for cartesian product > - > > Key: HIVE-13863 > URL: https://issues.apache.org/jira/browse/HIVE-13863 > Project: Hive > Issue Type: Bug > Components: Statistics >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Fix For: 2.1.0 > > > Currently cartesian product stats based on cardinality of inputs are not > inferred correctly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13863) Improve AnnotateWithStatistics with support for cartesian product
[ https://issues.apache.org/jira/browse/HIVE-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13863: --- Attachment: HIVE-13863.01.patch > Improve AnnotateWithStatistics with support for cartesian product > - > > Key: HIVE-13863 > URL: https://issues.apache.org/jira/browse/HIVE-13863 > Project: Hive > Issue Type: Bug > Components: Statistics >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Fix For: 2.1.0 > > Attachments: HIVE-13863.01.patch > > > Currently cartesian product stats based on cardinality of inputs are not > inferred correctly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-13742) Hive ptest has many failures due to metastore connection refused
[ https://issues.apache.org/jira/browse/HIVE-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña reassigned HIVE-13742: -- Assignee: Sergio Peña > Hive ptest has many failures due to metastore connection refused > > > Key: HIVE-13742 > URL: https://issues.apache.org/jira/browse/HIVE-13742 > Project: Hive > Issue Type: Bug >Reporter: Sergio Peña >Assignee: Sergio Peña > Attachments: hive.log > > > The following exception is thrown on the Hive ptest with many tests, and it > is due to some Derby database issues: > {noformat} > 016-05-11T15:46:25,123 INFO [Thread-2[]]: metastore.HiveMetaStore > (HiveMetaStore.java:newRawStore(563)) - 0: Opening raw store with > implementation class:org.apache.hadoop.hive.metastore.ObjectStore > 2016-05-11T15:46:25,175 INFO [Thread-2[]]: metastore.ObjectStore > (ObjectStore.java:initialize(324)) - ObjectStore, initialize called > 2016-05-11T15:46:25,966 DEBUG [Thread-2[]]: bonecp.BoneCPDataSource > (BoneCPDataSource.java:getConnection(119)) - JDBC URL = > jdbc:derby:;databaseName=/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db;create=true, > Username = APP, partitions = 1, max (per partition) = 10, min (per > partition) = 0, idle max age = 60 min, idle test period = 240 min, strategy = > DEFAULT > 2016-05-11T15:46:26,003 ERROR [Thread-2[]]: Datastore.Schema > (Log4JLogger.java:error(125)) - Failed initialising database. > org.datanucleus.exceptions.NucleusDataStoreException: Unable to open a test > connection to the given database. JDBC url = > jdbc:derby:;databaseName=/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db;create=true, > username = APP. Terminating connection pool (set lazyInit to true if you > expect to start your database after your app). 
Original Exception: -- > java.sql.SQLException: Failed to create database > '/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db', > see the next exception for details. > at > org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown > Source) > at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source) > at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) > at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown > Source) > at org.apache.derby.impl.jdbc.EmbedConnection.(Unknown Source) > at org.apache.derby.impl.jdbc.EmbedConnection40.(Unknown Source) > at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source) > at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source) > at org.apache.derby.jdbc.Driver20.connect(Unknown Source) > at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source) > at java.sql.DriverManager.getConnection(DriverManager.java:664) > at java.sql.DriverManager.getConnection(DriverManager.java:208) > at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361) > at com.jolbox.bonecp.BoneCP.(BoneCP.java:416) > at > com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120) > at > org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:483) > at > org.datanucleus.store.rdbms.RDBMSStoreManager.(RDBMSStoreManager.java:296) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:408) > at > org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606) > at > 
org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301) > at > org.datanucleus.NucleusContextHelper.createStoreManagerForProperties(NucleusContextHelper.java:133) > at > org.datanucleus.PersistenceNucleusContextImpl.initialise(PersistenceNucleusContextImpl.java:420) > at > org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:821) > at > org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:338) > at > org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:217) > at sun.reflect.NativeMethodAccessorI
[jira] [Updated] (HIVE-13825) Using JOIN in 2 tables that have the same path locations, but different column names fails with an error exception
[ https://issues.apache.org/jira/browse/HIVE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-13825: --- Assignee: Vihang Karajgaonkar > Using JOIN in 2 tables that has same path locations, but different colum > names fail wtih an error exception > --- > > Key: HIVE-13825 > URL: https://issues.apache.org/jira/browse/HIVE-13825 > Project: Hive > Issue Type: Bug >Reporter: Sergio Peña >Assignee: Vihang Karajgaonkar > > The following scenario of 2 tables with same locations cannot be used on a > JOIN query: > {noformat} > hive> create table t1 (a string, b string) location > '/user/hive/warehouse/test1'; > OK > hive> create table t2 (c string, d string) location > '/user/hive/warehouse/test1'; > OK > hive> select t1.a from t1 join t2 on t1.a = t2.c; > ... > 2016-05-23 16:39:57 Starting to launch local task to process map join; > maximum memory = 477102080 > Execution failed with exit status: 2 > Obtaining error information > Task failed! > Task ID: > Stage-4 > Logs: > FAILED: Execution Error, return code 2 from > org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask > {noformat} > The logs contain this error exception: > {noformat} > 2016-05-23T16:39:58,163 ERROR [main]: mr.MapredLocalTask (:()) - Hive Runtime > Error: Map local work failed > java.lang.RuntimeException: cannot find field a from [0:c, 1:d] > at > org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:485) > at > org.apache.hadoop.hive.serde2.BaseStructObjectInspector.getStructFieldRef(BaseStructObjectInspector.java:133) > at > org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:55) > at > org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:973) > at > org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:999) > at > org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:75) > at > 
org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:355) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:504) > at > org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:457) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:365) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:504) > at > org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:457) > at > org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:365) > at > org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.initializeOperators(MapredLocalTask.java:499) > at > org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.startForward(MapredLocalTask.java:403) > at > org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.executeInProcess(MapredLocalTask.java:383) > at > org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:751) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-13742) Hive ptest has many failures due to metastore connection refused
[ https://issues.apache.org/jira/browse/HIVE-13742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña resolved HIVE-13742. Resolution: Fixed Fix Version/s: 2.2.0 Reducing the number of drones per box from 3 to 2 fixed this issue. > Hive ptest has many failures due to metastore connection refused > > > Key: HIVE-13742 > URL: https://issues.apache.org/jira/browse/HIVE-13742 > Project: Hive > Issue Type: Bug >Reporter: Sergio Peña >Assignee: Sergio Peña > Fix For: 2.2.0 > > Attachments: hive.log > > > The following exception is thrown on the Hive ptest with many tests, and it > is due to some Derby database issues: > {noformat} > 2016-05-11T15:46:25,123 INFO [Thread-2[]]: metastore.HiveMetaStore > (HiveMetaStore.java:newRawStore(563)) - 0: Opening raw store with > implementation class:org.apache.hadoop.hive.metastore.ObjectStore > 2016-05-11T15:46:25,175 INFO [Thread-2[]]: metastore.ObjectStore > (ObjectStore.java:initialize(324)) - ObjectStore, initialize called > 2016-05-11T15:46:25,966 DEBUG [Thread-2[]]: bonecp.BoneCPDataSource > (BoneCPDataSource.java:getConnection(119)) - JDBC URL = > jdbc:derby:;databaseName=/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db;create=true, > Username = APP, partitions = 1, max (per partition) = 10, min (per > partition) = 0, idle max age = 60 min, idle test period = 240 min, strategy = > DEFAULT > 2016-05-11T15:46:26,003 ERROR [Thread-2[]]: Datastore.Schema > (Log4JLogger.java:error(125)) - Failed initialising database. > org.datanucleus.exceptions.NucleusDataStoreException: Unable to open a test > connection to the given database. JDBC url = > jdbc:derby:;databaseName=/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db;create=true, > username = APP. Terminating connection pool (set lazyInit to true if you > expect to start your database after your app). 
Original Exception: -- > java.sql.SQLException: Failed to create database > '/home/hiveptest/54.177.132.113-hiveptest-1/apache-github-source-source/itests/hive-unit/target/tmpTestFilterHooksmetastore_db', > see the next exception for details. > at > org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown > Source) > at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source) > at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source) > at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown > Source) > at org.apache.derby.impl.jdbc.EmbedConnection.(Unknown Source) > at org.apache.derby.impl.jdbc.EmbedConnection40.(Unknown Source) > at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source) > at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source) > at org.apache.derby.jdbc.Driver20.connect(Unknown Source) > at org.apache.derby.jdbc.AutoloadedDriver.connect(Unknown Source) > at java.sql.DriverManager.getConnection(DriverManager.java:664) > at java.sql.DriverManager.getConnection(DriverManager.java:208) > at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:361) > at com.jolbox.bonecp.BoneCP.(BoneCP.java:416) > at > com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120) > at > org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:483) > at > org.datanucleus.store.rdbms.RDBMSStoreManager.(RDBMSStoreManager.java:296) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:408) > at > org.datanucleus.plugin.NonManagedPluginRegistry.createExecutableExtension(NonManagedPluginRegistry.java:606) > at > 
org.datanucleus.plugin.PluginManager.createExecutableExtension(PluginManager.java:301) > at > org.datanucleus.NucleusContextHelper.createStoreManagerForProperties(NucleusContextHelper.java:133) > at > org.datanucleus.PersistenceNucleusContextImpl.initialise(PersistenceNucleusContextImpl.java:420) > at > org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:821) > at > org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:338) > at > org.datanucleus.api.jdo.JDOPersistenceManagerFa
[jira] [Commented] (HIVE-13831) Error pushing predicates to HBase storage handler
[ https://issues.apache.org/jira/browse/HIVE-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307783#comment-15307783 ] Jesus Camacho Rodriguez commented on HIVE-13831: {noformat}
Test Name  Duration  Age
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testLockTimeout  3 min 9 sec  1
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation  10 sec  1
org.apache.hive.hcatalog.listener.TestDbNotificationListener.cleanupNotifs  1 min 6 sec  2
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorWithinDagPriority  5.4 sec  24
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner  3.8 sec  28
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_static  1 min 52 sec  44
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_dynamic  1 min 23 sec  44
org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForMemoryTokenStore  1.7 sec  44
org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForDBTokenStore  0.36 sec  44
org.apache.hive.minikdc.TestMiniHiveKdc.testLogin  2 min 7 sec  44
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec  66 ms  64
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_selectindate  11 sec  96
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avrocountemptytbl  12 sec  96
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_order_null  34 sec  96
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_join_with_different_encryption_keys  1 min 47 sec  96
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3  7.6 sec  96
org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver  1 min 7 sec  96
{noformat} > Error pushing predicates to HBase storage handler > - > > 
Key: HIVE-13831 > URL: https://issues.apache.org/jira/browse/HIVE-13831 > Project: Hive > Issue Type: Bug > Components: HBase Handler >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-13831.01.patch, HIVE-13831.02.patch, > HIVE-13831.patch > > > Discovered while working on HIVE-13693. > There is an error in the predicates that we can push to HBaseStorageHandler. > In particular, range predicates of the shape {{(bounded, open)}} and {{(open, > bounded)}} over long or int columns get pushed and return wrong results. > The problem has to do with the storage order for keys in HBase. Keys are > sorted lexicographically. Since the byte representation of negative values > comes after the positive values, open range predicates need special handling > that we do not have right now. > Thus, for instance, when we push the predicate {{key > 2}}, we return all > records with column _key_ greater than 2, plus the records with negative > values for the column _key_. This problem does not get exposed if a filter is > kept in the Hive operator tree, but we should not assume the latter. > This fix avoids pushing this kind of predicate to the storage handler, > returning it in the _residual_ part of the predicate that cannot be pushed. > In the future, special handling might be added to support this kind of > predicate. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13831) Error pushing predicates to HBase storage handler
[ https://issues.apache.org/jira/browse/HIVE-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13831: --- Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Fails are unrelated. Pushed to master, branch-2.1. Thanks for reviewing [~ashutoshc]! > Error pushing predicates to HBase storage handler > - > > Key: HIVE-13831 > URL: https://issues.apache.org/jira/browse/HIVE-13831 > Project: Hive > Issue Type: Bug > Components: HBase Handler >Affects Versions: 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Fix For: 2.1.0 > > Attachments: HIVE-13831.01.patch, HIVE-13831.02.patch, > HIVE-13831.patch > > > Discovered while working on HIVE-13693. > There is an error on the predicates that we can push to HBaseStorageHandler. > In particular, range predicates of the shape {{(bounded, open)}} and {{(open, > bounded)}} over long or int columns get pushed and return wrong results. > The problem has to do with the storage order for keys in HBase. Keys are > sorted lexicographically. Since the byte representation of negative values > comes after the positive values, open range predicates need special handling > that we do not have right now. > Thus, for instance, when we push the predicate {{key > 2}}, we return all > records with column _key_ greater than 2, plus the records with negative > values for the column _key_. This problem does not get exposed if a filter is > kept in the Hive operator tree, but we should not assume the latest. > This fix avoids pushing this kind of predicates to the storage handler, > returning them in the _residual_ part of the predicate that cannot be pushed. > In the future, special handling might be added to support this kind of > predicates. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
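[Editor's note] The byte-ordering pitfall described in HIVE-13831 above can be shown with a small self-contained sketch. This is not Hive or HBase code; the class and method names below are invented for illustration. Under two's-complement big-endian encoding, a negative long's bytes compare greater than a positive long's bytes in unsigned lexicographic order, which is why a pushed open-ended range such as {{key > 2}} over raw row-key bytes would also match negative keys:

```java
import java.nio.ByteBuffer;

public class LexOrderDemo {
    // Encode a long as its 8-byte big-endian two's-complement representation,
    // the way a numeric row key is commonly serialized.
    static byte[] enc(long v) {
        return ByteBuffer.allocate(8).putLong(v).array();
    }

    // Unsigned lexicographic comparison, mirroring how HBase orders row keys.
    static int compareUnsigned(byte[] a, byte[] b) {
        for (int i = 0; i < a.length && i < b.length; i++) {
            int x = a[i] & 0xFF, y = b[i] & 0xFF;
            if (x != y) return Integer.compare(x, y);
        }
        return Integer.compare(a.length, b.length);
    }

    public static void main(String[] args) {
        // -1L encodes as 0xFF..FF, which sorts AFTER 2L (0x00..02) lexicographically,
        // so a raw byte-range scan for "key > 2" would wrongly include -1.
        System.out.println(compareUnsigned(enc(-1L), enc(2L)) > 0); // prints: true
    }
}
```

This is exactly the case the fix handles by keeping such predicates in the residual part instead of pushing them down.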
[jira] [Commented] (HIVE-13882) When hive.server2.async.exec.async.compile is turned on, from JDBC we will get "The query did not generate a result set"
[ https://issues.apache.org/jira/browse/HIVE-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307830#comment-15307830 ] Aihua Xu commented on HIVE-13882: - Pushed to master. Thanks Jimmy for reviewing. > When hive.server2.async.exec.async.compile is turned on, from JDBC we will > get "The query did not generate a result set" > - > > Key: HIVE-13882 > URL: https://issues.apache.org/jira/browse/HIVE-13882 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 2.2.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Fix For: 2.2.0 > > Attachments: HIVE-13882.1.patch, HIVE-13882.2.patch > > > The following would fail with "The query did not generate a result set" > stmt.execute("SET hive.driver.parallel.compilation=true"); > stmt.execute("SET hive.server2.async.exec.async.compile=true"); > ResultSet res = stmt.executeQuery("SELECT * FROM " + tableName); > res.next(); -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13882) When hive.server2.async.exec.async.compile is turned on, from JDBC we will get "The query did not generate a result set"
[ https://issues.apache.org/jira/browse/HIVE-13882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13882: Resolution: Fixed Fix Version/s: 2.2.0 Status: Resolved (was: Patch Available) > When hive.server2.async.exec.async.compile is turned on, from JDBC we will > get "The query did not generate a result set" > - > > Key: HIVE-13882 > URL: https://issues.apache.org/jira/browse/HIVE-13882 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 2.2.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Fix For: 2.2.0 > > Attachments: HIVE-13882.1.patch, HIVE-13882.2.patch > > > The following would fail with "The query did not generate a result set" > stmt.execute("SET hive.driver.parallel.compilation=true"); > stmt.execute("SET hive.server2.async.exec.async.compile=true"); > ResultSet res = stmt.executeQuery("SELECT * FROM " + tableName); > res.next(); -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8458) Potential null dereference in Utilities#clearWork()
[ https://issues.apache.org/jira/browse/HIVE-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HIVE-8458: - Description: {code} Path mapPath = getPlanPath(conf, MAP_PLAN_NAME); Path reducePath = getPlanPath(conf, REDUCE_PLAN_NAME); // if the plan path hasn't been initialized just return, nothing to clean. if (mapPath == null && reducePath == null) { return; } try { FileSystem fs = mapPath.getFileSystem(conf); {code} If mapPath is null but reducePath is not null, getFileSystem() call would produce NPE was: {code} Path mapPath = getPlanPath(conf, MAP_PLAN_NAME); Path reducePath = getPlanPath(conf, REDUCE_PLAN_NAME); // if the plan path hasn't been initialized just return, nothing to clean. if (mapPath == null && reducePath == null) { return; } try { FileSystem fs = mapPath.getFileSystem(conf); {code} If mapPath is null but reducePath is not null, getFileSystem() call would produce NPE > Potential null dereference in Utilities#clearWork() > --- > > Key: HIVE-8458 > URL: https://issues.apache.org/jira/browse/HIVE-8458 > Project: Hive > Issue Type: Bug >Affects Versions: 0.13.1 >Reporter: Ted Yu >Assignee: skrho >Priority: Minor > Attachments: HIVE-8458.v2.patch, HIVE-8458_001.patch > > > {code} > Path mapPath = getPlanPath(conf, MAP_PLAN_NAME); > Path reducePath = getPlanPath(conf, REDUCE_PLAN_NAME); > // if the plan path hasn't been initialized just return, nothing to clean. > if (mapPath == null && reducePath == null) { > return; > } > try { > FileSystem fs = mapPath.getFileSystem(conf); > {code} > If mapPath is null but reducePath is not null, getFileSystem() call would > produce NPE -- This message was sent by Atlassian JIRA (v6.3.4#6332)
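[Editor's note] The guard that HIVE-8458 above implies can be sketched minimally as follows. This is illustrative only and is not the attached patch: the real method operates on Hadoop Path objects and a FileSystem, while this stand-in uses strings to isolate the null-handling. The point is to dereference whichever plan path is non-null instead of always using mapPath:

```java
public class ClearWorkGuard {
    // Stand-in for the path handling in Utilities#clearWork(): after the
    // early return for (null, null), pick a non-null plan path before
    // calling anything on it, avoiding the NPE when only reducePath is set.
    static String pickPlanPath(String mapPath, String reducePath) {
        if (mapPath == null && reducePath == null) {
            return null; // nothing to clean
        }
        return (mapPath != null) ? mapPath : reducePath;
    }

    public static void main(String[] args) {
        // The original code would NPE here because mapPath is null.
        System.out.println(pickPlanPath(null, "reduce-plan")); // prints: reduce-plan
    }
}
```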
[jira] [Commented] (HIVE-13895) HoS start-up overhead in yarn-client mode
[ https://issues.apache.org/jira/browse/HIVE-13895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307855#comment-15307855 ] Szehon Ho commented on HIVE-13895: -- +1 as well > HoS start-up overhead in yarn-client mode > - > > Key: HIVE-13895 > URL: https://issues.apache.org/jira/browse/HIVE-13895 > Project: Hive > Issue Type: Bug >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-13895.1.patch > > > To avoid the too verbose app state report, HIVE-13376 increases the state > check interval to a default 60s. However, bigger interval brings considerable > start-up wait time for yarn-client mode. > Since the state report only exists in yarn-cluster mode, we can disable it > using {{spark.yarn.submit.waitAppCompletion}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-12983) Provide a builtin function to get Hive version
[ https://issues.apache.org/jira/browse/HIVE-12983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307861#comment-15307861 ] Szehon Ho commented on HIVE-12983: -- +1 > Provide a builtin function to get Hive version > -- > > Key: HIVE-12983 > URL: https://issues.apache.org/jira/browse/HIVE-12983 > Project: Hive > Issue Type: Improvement > Components: UDF >Affects Versions: 2.0.0 >Reporter: Lenni Kuff >Assignee: Lenni Kuff > Attachments: HIVE-12983.1.patch, HIVE-12983.2.patch > > > It would be nice to have a builtin function that would return the Hive > version. This would make it easier for a users and tests to programmatically > check the Hive version in a SQL script. It's also useful so a client can > check the Hive version on a remote cluster. > For example: > {code} > beeline> SELECT version(); > 2.1.0-SNAPSHOT r208ab352311a6cbbcd1f7fcd40964da2dbc6703d > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13693) Multi-insert query drops Filter before file output when there is a.val <> b.val
[ https://issues.apache.org/jira/browse/HIVE-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13693: --- Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Regenerated q files, pushed to master and branch-2.1. Thanks for reviewing [~ashutoshc]! > Multi-insert query drops Filter before file output when there is a.val <> > b.val > --- > > Key: HIVE-13693 > URL: https://issues.apache.org/jira/browse/HIVE-13693 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Affects Versions: 1.3.0, 2.0.0, 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Fix For: 2.1.0 > > Attachments: HIVE-13693.01.patch, HIVE-13693.01.patch, > HIVE-13693.02.patch, HIVE-13693.patch > > > To reproduce: > {noformat} > CREATE TABLE T_A ( id STRING, val STRING ); > CREATE TABLE T_B ( id STRING, val STRING ); > CREATE TABLE join_result_1 ( ida STRING, vala STRING, idb STRING, valb STRING > ); > CREATE TABLE join_result_3 ( ida STRING, vala STRING, idb STRING, valb STRING > ); > INSERT INTO TABLE T_A > VALUES ('Id_1', 'val_101'), ('Id_2', 'val_102'), ('Id_3', 'val_103'); > INSERT INTO TABLE T_B > VALUES ('Id_1', 'val_103'), ('Id_2', 'val_104'); > explain > FROM T_A a LEFT JOIN T_B b ON a.id = b.id > INSERT OVERWRITE TABLE join_result_1 > SELECT a.*, b.* > WHERE b.id = 'Id_1' AND b.val = 'val_103' > INSERT OVERWRITE TABLE join_result_3 > SELECT a.*, b.* > WHERE b.val = 'val_104' AND b.id = 'Id_2' AND a.val <> b.val; > {noformat} > The (wrong) plan is the following: > {noformat} > STAGE DEPENDENCIES: > Stage-2 is a root stage > Stage-3 depends on stages: Stage-2 > Stage-0 depends on stages: Stage-3 > Stage-4 depends on stages: Stage-0 > Stage-1 depends on stages: Stage-3 > Stage-5 depends on stages: Stage-1 > STAGE PLANS: > Stage: Stage-2 > Tez > DagId: haha_20160504140944_174465c9-5d1a-42f9-9665-fae02eeb2767:2 > Edges: > Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 3 
(SIMPLE_EDGE) > DagName: > Vertices: > Map 1 > Map Operator Tree: > TableScan > alias: a > Statistics: Num rows: 3 Data size: 36 Basic stats: COMPLETE > Column stats: NONE > Reduce Output Operator > key expressions: id (type: string) > sort order: + > Map-reduce partition columns: id (type: string) > Statistics: Num rows: 3 Data size: 36 Basic stats: > COMPLETE Column stats: NONE > value expressions: val (type: string) > Map 3 > Map Operator Tree: > TableScan > alias: b > Statistics: Num rows: 2 Data size: 24 Basic stats: COMPLETE > Column stats: NONE > Reduce Output Operator > key expressions: id (type: string) > sort order: + > Map-reduce partition columns: id (type: string) > Statistics: Num rows: 2 Data size: 24 Basic stats: > COMPLETE Column stats: NONE > value expressions: val (type: string) > Reducer 2 > Reduce Operator Tree: > Merge Join Operator > condition map: > Left Outer Join0 to 1 > keys: > 0 id (type: string) > 1 id (type: string) > outputColumnNames: _col0, _col1, _col6 > Statistics: Num rows: 3 Data size: 39 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: _col0 (type: string), _col1 (type: string), > 'Id_1' (type: string), 'val_103' (type: string) > outputColumnNames: _col0, _col1, _col2, _col3 > Statistics: Num rows: 3 Data size: 39 Basic stats: COMPLETE > Column stats: NONE > File Output Operator > compressed: false > Statistics: Num rows: 3 Data size: 39 Basic stats: > COMPLETE Column stats: NONE > table: > input format: org.apache.hadoop.mapred.TextInputFormat > output format: > org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat > serde: > org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe > name: bugtest2.join_result_1 > Filter Operator > predicate: (_col1 <> _col6) (type: boolean) > Statistics: Num rows: 3 Data size: 39 Basic stats: COMPLETE > Column sta
[jira] [Updated] (HIVE-13693) Multi-insert query drops Filter before file output when there is a.val <> b.val
[ https://issues.apache.org/jira/browse/HIVE-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13693: --- Attachment: HIVE-13693.02.patch > Multi-insert query drops Filter before file output when there is a.val <> > b.val > --- > > Key: HIVE-13693 > URL: https://issues.apache.org/jira/browse/HIVE-13693 > Project: Hive > Issue Type: Bug > Components: Logical Optimizer >Affects Versions: 1.3.0, 2.0.0, 2.1.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Fix For: 2.1.0 > > Attachments: HIVE-13693.01.patch, HIVE-13693.01.patch, > HIVE-13693.02.patch, HIVE-13693.patch > > > To reproduce: > {noformat} > CREATE TABLE T_A ( id STRING, val STRING ); > CREATE TABLE T_B ( id STRING, val STRING ); > CREATE TABLE join_result_1 ( ida STRING, vala STRING, idb STRING, valb STRING > ); > CREATE TABLE join_result_3 ( ida STRING, vala STRING, idb STRING, valb STRING > ); > INSERT INTO TABLE T_A > VALUES ('Id_1', 'val_101'), ('Id_2', 'val_102'), ('Id_3', 'val_103'); > INSERT INTO TABLE T_B > VALUES ('Id_1', 'val_103'), ('Id_2', 'val_104'); > explain > FROM T_A a LEFT JOIN T_B b ON a.id = b.id > INSERT OVERWRITE TABLE join_result_1 > SELECT a.*, b.* > WHERE b.id = 'Id_1' AND b.val = 'val_103' > INSERT OVERWRITE TABLE join_result_3 > SELECT a.*, b.* > WHERE b.val = 'val_104' AND b.id = 'Id_2' AND a.val <> b.val; > {noformat} > The (wrong) plan is the following: > {noformat} > STAGE DEPENDENCIES: > Stage-2 is a root stage > Stage-3 depends on stages: Stage-2 > Stage-0 depends on stages: Stage-3 > Stage-4 depends on stages: Stage-0 > Stage-1 depends on stages: Stage-3 > Stage-5 depends on stages: Stage-1 > STAGE PLANS: > Stage: Stage-2 > Tez > DagId: haha_20160504140944_174465c9-5d1a-42f9-9665-fae02eeb2767:2 > Edges: > Reducer 2 <- Map 1 (SIMPLE_EDGE), Map 3 (SIMPLE_EDGE) > DagName: > Vertices: > Map 1 > Map Operator Tree: > TableScan > alias: a > Statistics: Num rows: 3 Data size: 36 Basic stats: 
COMPLETE > Column stats: NONE > Reduce Output Operator > key expressions: id (type: string) > sort order: + > Map-reduce partition columns: id (type: string) > Statistics: Num rows: 3 Data size: 36 Basic stats: > COMPLETE Column stats: NONE > value expressions: val (type: string) > Map 3 > Map Operator Tree: > TableScan > alias: b > Statistics: Num rows: 2 Data size: 24 Basic stats: COMPLETE > Column stats: NONE > Reduce Output Operator > key expressions: id (type: string) > sort order: + > Map-reduce partition columns: id (type: string) > Statistics: Num rows: 2 Data size: 24 Basic stats: > COMPLETE Column stats: NONE > value expressions: val (type: string) > Reducer 2 > Reduce Operator Tree: > Merge Join Operator > condition map: > Left Outer Join0 to 1 > keys: > 0 id (type: string) > 1 id (type: string) > outputColumnNames: _col0, _col1, _col6 > Statistics: Num rows: 3 Data size: 39 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: _col0 (type: string), _col1 (type: string), > 'Id_1' (type: string), 'val_103' (type: string) > outputColumnNames: _col0, _col1, _col2, _col3 > Statistics: Num rows: 3 Data size: 39 Basic stats: COMPLETE > Column stats: NONE > File Output Operator > compressed: false > Statistics: Num rows: 3 Data size: 39 Basic stats: > COMPLETE Column stats: NONE > table: > input format: org.apache.hadoop.mapred.TextInputFormat > output format: > org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat > serde: > org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe > name: bugtest2.join_result_1 > Filter Operator > predicate: (_col1 <> _col6) (type: boolean) > Statistics: Num rows: 3 Data size: 39 Basic stats: COMPLETE > Column stats: NONE > Select Operator > expressions: _col0 (type: string), _col1 (type: string), > 'Id_2' (type: string), 'val_10
[jira] [Resolved] (HIVE-13635) HiveServer2 shows stack trace when parsing invalid inputs
[ https://issues.apache.org/jira/browse/HIVE-13635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez resolved HIVE-13635. Resolution: Cannot Reproduce Assignee: (was: Takuma Wakamori) Fix Version/s: 2.1.0 [~Takuma], I close the issue then as we are wrapping up 2.1.0. Thanks > HiveServer2 shows stack trace when parsing invalid inputs > - > > Key: HIVE-13635 > URL: https://issues.apache.org/jira/browse/HIVE-13635 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Takuma Wakamori >Priority: Trivial > Fix For: 2.1.0 > > Attachments: HIVE-13635.1.patch > > > HiveServer2 shows stack trace when parsing invalid syntax. > How to reproduce: > {code} > Input: > hostA$ hiveserver2 > hostB$ beeline -u jdbc:hive2://localhost:1 -n user -p pass -e "invalid > syntax;" > Output: > hostA$ NoViableAltException(26@[]) > > [0/1248] > at > org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1108) > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:204) > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:444) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:319) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1199) > at > org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1186) > at > org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:146) > at > org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:218) > ... 
> FAILED: ParseException line 1:0 cannot recognize input near 'invalid' > 'syntax' '' > hostB$ Error: Error while compiling statement: FAILED: ParseException line > 1:0 cannot recognize input near 'invalid' 'syntax' '' > (state=42000,code=4) > {code} > This issue is related to the post of Hive developer mailing list: > http://mail-archives.apache.org/mod_mbox/hive-dev/201604.mbox/%3CCAOLfT9AaKZ8Nt77QnvrNcxWrQ_1htaj9C0UOsnN5HheoTzM6DQ%40mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332)
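The complaint in HIVE-13635 is that the server console gets the full NoViableAltException stack trace for what is just a user typo, while the client already receives the one-line ParseException diagnostic. A minimal sketch of the pattern the report asks for follows; the class and method names (ParseError, QueryCompiler) are illustrative stand-ins, not Hive's actual API:

```java
// Hedged sketch, not Hive's code: on a parse failure, return the one-line
// diagnostic to the client and keep the full stack trace out of the server
// console (in a real server it would go to a DEBUG-level logger instead).
class ParseError extends Exception {
    ParseError(String msg) { super(msg); }
}

class QueryCompiler {
    // Toy parser stub: accepts only statements starting with SELECT.
    static void parse(String sql) throws ParseError {
        if (!sql.trim().toUpperCase().startsWith("SELECT")) {
            String firstToken = sql.trim().split("\\s+")[0];
            throw new ParseError("line 1:0 cannot recognize input near '" + firstToken + "'");
        }
    }

    // Compiles a statement; on parse failure, reports only the message.
    static String compile(String sql) {
        try {
            parse(sql);
            return "OK";
        } catch (ParseError e) {
            // log.debug("parse failure", e);  // full trace at DEBUG only
            return "FAILED: ParseException " + e.getMessage();
        }
    }
}
```

With this shape, `compile("invalid syntax;")` yields the same "FAILED: ParseException line 1:0 cannot recognize input near 'invalid'" text the beeline client shows, without a stack trace on the server side.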
[jira] [Resolved] (HIVE-13889) HiveServer2 shows stack trace when parsing invalid inputs
[ https://issues.apache.org/jira/browse/HIVE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez resolved HIVE-13889. Resolution: Duplicate Assignee: (was: Takuma Wakamori) Target Version/s: (was: 2.1.0) Duplicate of HIVE-13635. Feel free to reopen it if it is a different issue. > HiveServer2 shows stack trace when parsing invalid inputs > - > > Key: HIVE-13889 > URL: https://issues.apache.org/jira/browse/HIVE-13889 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Takuma Wakamori >Priority: Trivial > > HiveServer2 shows stack trace when parsing invalid syntax. > How to reproduce: > {code} > Input: > hostA$ hiveserver2 > hostB$ beeline -u jdbc:hive2://localhost:1 -n user -p pass -e "invalid > syntax;" > Output: > hostA$ NoViableAltException(26@[]) > > [0/1248] > at > org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1108) > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:204) > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:444) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:319) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1199) > at > org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1186) > at > org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:146) > at > org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:218) > ... 
> FAILED: ParseException line 1:0 cannot recognize input near 'invalid' > 'syntax' '' > hostB$ Error: Error while compiling statement: FAILED: ParseException line > 1:0 cannot recognize input near 'invalid' 'syntax' '' > (state=42000,code=4) > {code} > This issue is related to the post of Hive developer mailing list: > http://mail-archives.apache.org/mod_mbox/hive-dev/201604.mbox/%3CCAOLfT9AaKZ8Nt77QnvrNcxWrQ_1htaj9C0UOsnN5HheoTzM6DQ%40mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13889) HiveServer2 shows stack trace when parsing invalid inputs
[ https://issues.apache.org/jira/browse/HIVE-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13889: --- Affects Version/s: 2.1.0 > HiveServer2 shows stack trace when parsing invalid inputs > - > > Key: HIVE-13889 > URL: https://issues.apache.org/jira/browse/HIVE-13889 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 2.1.0 >Reporter: Takuma Wakamori >Priority: Trivial > > HiveServer2 shows stack trace when parsing invalid syntax. > How to reproduce: > {code} > Input: > hostA$ hiveserver2 > hostB$ beeline -u jdbc:hive2://localhost:1 -n user -p pass -e "invalid > syntax;" > Output: > hostA$ NoViableAltException(26@[]) > > [0/1248] > at > org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1108) > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:204) > at > org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:444) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:319) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1199) > at > org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1186) > at > org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:146) > at > org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:218) > ... > FAILED: ParseException line 1:0 cannot recognize input near 'invalid' > 'syntax' '' > hostB$ Error: Error while compiling statement: FAILED: ParseException line > 1:0 cannot recognize input near 'invalid' 'syntax' '' > (state=42000,code=4) > {code} > This issue is related to the post of Hive developer mailing list: > http://mail-archives.apache.org/mod_mbox/hive-dev/201604.mbox/%3CCAOLfT9AaKZ8Nt77QnvrNcxWrQ_1htaj9C0UOsnN5HheoTzM6DQ%40mail.gmail.com%3E -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13844) Invalid index handler in org.apache.hadoop.hive.ql.index.HiveIndex class
[ https://issues.apache.org/jira/browse/HIVE-13844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13844: --- Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Pushed to master, branch-2.1. Thanks for your contribution [~svetozari]! > Invalid index handler in org.apache.hadoop.hive.ql.index.HiveIndex class > > > Key: HIVE-13844 > URL: https://issues.apache.org/jira/browse/HIVE-13844 > Project: Hive > Issue Type: Bug > Components: Indexing >Affects Versions: 2.0.0 >Reporter: Svetozar Ivanov >Priority: Minor > Fix For: 2.1.0 > > Attachments: HIVE-13844.patch > > > Class org.apache.hadoop.hive.ql.index.HiveIndex has an invalid handler name, > 'org.apache.hadoop.hive.ql.AggregateIndexHandler'. The actual FQ class name > is 'org.apache.hadoop.hive.ql.index.AggregateIndexHandler'. > {code} > public static enum IndexType { > AGGREGATE_TABLE("aggregate", > "org.apache.hadoop.hive.ql.AggregateIndexHandler"), > COMPACT_SUMMARY_TABLE("compact", > "org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler"), > > BITMAP_TABLE("bitmap","org.apache.hadoop.hive.ql.index.bitmap.BitmapIndexHandler"); > private IndexType(String indexType, String className) { > indexTypeName = indexType; > this.handlerClsName = className; > } > private final String indexTypeName; > private final String handlerClsName; > public String getName() { > return indexTypeName; > } > public String getHandlerClsName() { > return handlerClsName; > } > } > > {code} > Because of this, statements like 'SHOW INDEXES ON MY_TABLE' do not work when > 'org.apache.hadoop.hive.ql.index.AggregateIndexHandler' is configured as the > index handler; a java.lang.NullPointerException is observed in the Hive > server log. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
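The fix the HIVE-13844 description implies is a one-string change: point AGGREGATE_TABLE at the real fully-qualified class, org.apache.hadoop.hive.ql.index.AggregateIndexHandler (note the ".index" package segment). A sketch of the corrected enum, trimmed to stand alone (this reproduces the description above, not necessarily the committed patch):

```java
// Corrected version of the enum quoted in the report: the AGGREGATE_TABLE
// handler name now includes the ".index" package, matching the actual class
// org.apache.hadoop.hive.ql.index.AggregateIndexHandler.
enum IndexType {
    AGGREGATE_TABLE("aggregate",
        "org.apache.hadoop.hive.ql.index.AggregateIndexHandler"),  // was missing ".index"
    COMPACT_SUMMARY_TABLE("compact",
        "org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler"),
    BITMAP_TABLE("bitmap",
        "org.apache.hadoop.hive.ql.index.bitmap.BitmapIndexHandler");

    private final String indexTypeName;
    private final String handlerClsName;

    IndexType(String indexType, String className) {
        this.indexTypeName = indexType;
        this.handlerClsName = className;
    }

    public String getName() { return indexTypeName; }
    public String getHandlerClsName() { return handlerClsName; }
}
```

With the bad string, looking up the handler class by name fails and surfaces as the NullPointerException the reporter saw on 'SHOW INDEXES ON MY_TABLE'.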
[jira] [Updated] (HIVE-10337) CBO (Calcite Return Path): java.lang.IndexOutOfBoundsException for query with rank() over(partition ...)
[ https://issues.apache.org/jira/browse/HIVE-10337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-10337: --- Assignee: (was: Jesus Camacho Rodriguez) > CBO (Calcite Return Path): java.lang.IndexOutOfBoundsException for query with > rank() over(partition ...) > > > Key: HIVE-10337 > URL: https://issues.apache.org/jira/browse/HIVE-10337 > Project: Hive > Issue Type: Bug >Reporter: Mostafa Mokhtar > > CBO throws Index out of bound exception for TPC-DS Q70. > Query > {code} > explain > select > sum(ss_net_profit) as total_sum >,s_state >,s_county >,grouping__id as lochierarchy >, rank() over(partition by grouping__id, case when grouping__id == 2 then > s_state end order by sum(ss_net_profit)) as rank_within_parent > from > store_sales ss join date_dim d1 on d1.d_date_sk = ss.ss_sold_date_sk > join store s on s.s_store_sk = ss.ss_store_sk > where > d1.d_month_seq between 1193 and 1193+11 > and s.s_state in > ( select s_state >from (select s_state as s_state, sum(ss_net_profit), > rank() over ( partition by s_state order by > sum(ss_net_profit) desc) as ranking > from store_sales, store, date_dim > where d_month_seq between 1193 and 1193+11 > and date_dim.d_date_sk = > store_sales.ss_sold_date_sk > and store.s_store_sk = store_sales.ss_store_sk > group by s_state > ) tmp1 >where ranking <= 5 > ) > group by s_state,s_county with rollup > order by >lochierarchy desc > ,case when lochierarchy = 0 then s_state end > ,rank_within_parent > limit 100 > {code} > Exception > {code} > 15/04/14 02:42:52 [main]: ERROR parse.CalcitePlanner: CBO failed, skipping > CBO. 
> java.lang.IndexOutOfBoundsException: Index: 5, Size: 5 > at java.util.ArrayList.rangeCheck(ArrayList.java:635) > at java.util.ArrayList.get(ArrayList.java:411) > at > org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter$RexVisitor.visitInputRef(ASTConverter.java:395) > at > org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter$RexVisitor.visitInputRef(ASTConverter.java:372) > at org.apache.calcite.rex.RexInputRef.accept(RexInputRef.java:112) > at > org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter$RexVisitor.visitCall(ASTConverter.java:543) > at > org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter$RexVisitor.visitCall(ASTConverter.java:372) > at org.apache.calcite.rex.RexCall.accept(RexCall.java:107) > at > org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter$RexVisitor.visitCall(ASTConverter.java:543) > at > org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter$RexVisitor.visitCall(ASTConverter.java:372) > at org.apache.calcite.rex.RexCall.accept(RexCall.java:107) > at > org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter.convertOBToASTNode(ASTConverter.java:252) > at > org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter.convert(ASTConverter.java:208) > at > org.apache.hadoop.hive.ql.optimizer.calcite.translator.ASTConverter.convert(ASTConverter.java:98) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:607) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:239) > at > org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10003) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:202) > at > org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224) > at > org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:74) > at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:224) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424) > at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308) > at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1170) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165) > at org.apache.hadoo
[jira] [Updated] (HIVE-13856) Fetching transaction batches during ACID streaming against Hive Metastore using Oracle DB fails
[ https://issues.apache.org/jira/browse/HIVE-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-13856: -- Target Version/s: 1.3.0, 2.1.0 Fix Version/s: (was: 2.1.0) (was: 1.3.0) > Fetching transaction batches during ACID streaming against Hive Metastore > using Oracle DB fails > --- > > Key: HIVE-13856 > URL: https://issues.apache.org/jira/browse/HIVE-13856 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.3.0, 2.1.0, 2.2.0 >Reporter: Deepesh Khandelwal >Assignee: Eugene Koifman >Priority: Blocker > > {noformat} > 2016-05-25 00:43:49,682 INFO [pool-4-thread-5]: txn.TxnHandler > (TxnHandler.java:checkRetryable(1585)) - Non-retryable error: ORA-00933: SQL > command not properly ended > (SQLState=42000, ErrorCode=933) > 2016-05-25 00:43:49,685 ERROR [pool-4-thread-5]: metastore.RetryingHMSHandler > (RetryingHMSHandler.java:invoke(159)) - MetaException(message:Unable to > select from transaction database java.sql.SQLSyntaxErrorException: ORA-00933: > SQL command not properly ended > at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440) > at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396) > at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837) > at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445) > at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191) > at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523) > at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193) > at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999) > at > oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315) > at > oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890) > at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855) > at > oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304) > at 
com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(TxnHandler.java:429) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.open_txns(HiveMetaStore.java:5647) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > at com.sun.proxy.$Proxy15.open_txns(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11604) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11589) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > ) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(TxnHandler.java:438) > at > 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.open_txns(HiveMetaStore.java:5647) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > at com.sun.proxy.$Proxy15.open_txns(Unknown Source) > at > org.apache.hadoo
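ORA-00933 ("SQL command not properly ended") is Oracle's generic rejection of syntax it does not accept; one classic trigger is the multi-row INSERT ... VALUES (...), (...) form that MySQL and Postgres allow. Since the failure above is in TxnHandler.openTxns, which inserts a batch of transaction rows, a plausible shape of the problem and a dialect-aware workaround can be sketched as follows (hypothetical table and column names; this is an illustration, not TxnHandler's actual code or the attached patch):

```java
// Hedged sketch: generate per-dialect INSERT statements for opening a batch
// of transactions. Oracle gets one single-row INSERT per transaction, while
// databases that accept a multi-row VALUES list get a single statement.
// Table/column names (TXNS, TXN_ID, TXN_STATE) are illustrative.
import java.util.ArrayList;
import java.util.List;

class TxnSqlGenerator {
    static List<String> buildOpenTxnInserts(String dbProduct, long firstTxnId, int batch) {
        List<String> stmts = new ArrayList<>();
        if ("ORACLE".equals(dbProduct)) {
            // Oracle rejects comma-separated VALUES lists with ORA-00933,
            // so emit one INSERT per row.
            for (int i = 0; i < batch; i++) {
                stmts.add("INSERT INTO TXNS (TXN_ID, TXN_STATE) VALUES ("
                        + (firstTxnId + i) + ", 'o')");
            }
        } else {
            // MySQL/Postgres accept a single multi-row VALUES list.
            StringBuilder sb = new StringBuilder("INSERT INTO TXNS (TXN_ID, TXN_STATE) VALUES ");
            for (int i = 0; i < batch; i++) {
                if (i > 0) sb.append(", ");
                sb.append("(").append(firstTxnId + i).append(", 'o')");
            }
            stmts.add(sb.toString());
        }
        return stmts;
    }
}
```

The design point is that SQL generated by the metastore has to be keyed off the detected database product, since the transaction tables are hit with hand-built statements rather than an ORM.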
[jira] [Commented] (HIVE-13838) Set basic stats as inaccurate for all ACID tables
[ https://issues.apache.org/jira/browse/HIVE-13838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307993#comment-15307993 ] Hive QA commented on HIVE-13838: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12806226/HIVE-13838.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 10173 tests executed *Failed tests:* {noformat} TestHWISessionManager - did not produce a TEST-*.xml file TestJdbcWithMiniHA - did not produce a TEST-*.xml file TestJdbcWithMiniMr - did not produce a TEST-*.xml file TestMiniTezCliDriver-auto_join30.q-script_pipe.q-vector_decimal_10_0.q-and-12-more - did not produce a TEST-*.xml file TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_table_stats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autoColumnStats_4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_values_orig_table_use_metadata org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_llap_acid org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec 
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testLocksInSubquery {noformat} Test results: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/469/testReport Console output: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/469/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-469/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 20 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12806226 - PreCommit-HIVE-MASTER-Build > Set basic stats as inaccurate for all ACID tables > - > > Key: HIVE-13838 > URL: https://issues.apache.org/jira/browse/HIVE-13838 > Project: Hive > Issue Type: Sub-task >Reporter: Pengcheng Xiong >Assignee: Pengcheng Xiong > Attachments: HIVE-13838.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Work started] (HIVE-13856) Fetching transaction batches during ACID streaming against Hive Metastore using Oracle DB fails
[ https://issues.apache.org/jira/browse/HIVE-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-13856 started by Eugene Koifman. - > Fetching transaction batches during ACID streaming against Hive Metastore > using Oracle DB fails > --- > > Key: HIVE-13856 > URL: https://issues.apache.org/jira/browse/HIVE-13856 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.3.0, 2.1.0, 2.2.0 >Reporter: Deepesh Khandelwal >Assignee: Eugene Koifman >Priority: Blocker > Attachments: HIVE-13856.patch > > > {noformat} > 2016-05-25 00:43:49,682 INFO [pool-4-thread-5]: txn.TxnHandler > (TxnHandler.java:checkRetryable(1585)) - Non-retryable error: ORA-00933: SQL > command not properly ended > (SQLState=42000, ErrorCode=933) > 2016-05-25 00:43:49,685 ERROR [pool-4-thread-5]: metastore.RetryingHMSHandler > (RetryingHMSHandler.java:invoke(159)) - MetaException(message:Unable to > select from transaction database java.sql.SQLSyntaxErrorException: ORA-00933: > SQL command not properly ended > at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440) > at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396) > at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837) > at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445) > at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191) > at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523) > at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193) > at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999) > at > oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315) > at > oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890) > at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855) > at > oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304) > at com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254) > at > 
org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(TxnHandler.java:429) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.open_txns(HiveMetaStore.java:5647) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > at com.sun.proxy.$Proxy15.open_txns(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11604) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11589) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > ) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(TxnHandler.java:438) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.open_txns(HiveMetaStore.java:5647) > at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > at com.sun.proxy.$Proxy15.open_txns(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor
[jira] [Updated] (HIVE-13856) Fetching transaction batches during ACID streaming against Hive Metastore using Oracle DB fails
[ https://issues.apache.org/jira/browse/HIVE-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-13856: -- Attachment: HIVE-13856.patch HIVE-13856.patch - preliminary patch > Fetching transaction batches during ACID streaming against Hive Metastore > using Oracle DB fails > --- > > Key: HIVE-13856 > URL: https://issues.apache.org/jira/browse/HIVE-13856 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.3.0, 2.1.0, 2.2.0 >Reporter: Deepesh Khandelwal >Assignee: Eugene Koifman >Priority: Blocker > Attachments: HIVE-13856.patch > > > {noformat} > 2016-05-25 00:43:49,682 INFO [pool-4-thread-5]: txn.TxnHandler > (TxnHandler.java:checkRetryable(1585)) - Non-retryable error: ORA-00933: SQL > command not properly ended > (SQLState=42000, ErrorCode=933) > 2016-05-25 00:43:49,685 ERROR [pool-4-thread-5]: metastore.RetryingHMSHandler > (RetryingHMSHandler.java:invoke(159)) - MetaException(message:Unable to > select from transaction database java.sql.SQLSyntaxErrorException: ORA-00933: > SQL command not properly ended > at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440) > at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396) > at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837) > at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445) > at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191) > at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523) > at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193) > at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999) > at > oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315) > at > oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890) > at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855) > at > oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304) > at 
com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(TxnHandler.java:429) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.open_txns(HiveMetaStore.java:5647) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > at com.sun.proxy.$Proxy15.open_txns(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11604) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11589) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > ) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(TxnHandler.java:438) > at > 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.open_txns(HiveMetaStore.java:5647) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > at com.sun.proxy.$Proxy15.open_txns(Unknown Source) > at > org.apache.hadoo
[jira] [Updated] (HIVE-13856) Fetching transaction batches during ACID streaming against Hive Metastore using Oracle DB fails
[ https://issues.apache.org/jira/browse/HIVE-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-13856: -- Status: Patch Available (was: In Progress) > Fetching transaction batches during ACID streaming against Hive Metastore > using Oracle DB fails > --- > > Key: HIVE-13856 > URL: https://issues.apache.org/jira/browse/HIVE-13856 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.3.0, 2.1.0, 2.2.0 >Reporter: Deepesh Khandelwal >Assignee: Eugene Koifman >Priority: Blocker > Attachments: HIVE-13856.patch > > > {noformat} > 2016-05-25 00:43:49,682 INFO [pool-4-thread-5]: txn.TxnHandler > (TxnHandler.java:checkRetryable(1585)) - Non-retryable error: ORA-00933: SQL > command not properly ended > (SQLState=42000, ErrorCode=933) > 2016-05-25 00:43:49,685 ERROR [pool-4-thread-5]: metastore.RetryingHMSHandler > (RetryingHMSHandler.java:invoke(159)) - MetaException(message:Unable to > select from transaction database java.sql.SQLSyntaxErrorException: ORA-00933: > SQL command not properly ended > at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440) > at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396) > at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837) > at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445) > at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191) > at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523) > at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193) > at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999) > at > oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315) > at > oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890) > at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855) > at > oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304) > at 
com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(TxnHandler.java:429) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.open_txns(HiveMetaStore.java:5647) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > at com.sun.proxy.$Proxy15.open_txns(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11604) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11589) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:745) > ) > at > org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(TxnHandler.java:438) > at > 
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.open_txns(HiveMetaStore.java:5647) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) > at com.sun.proxy.$Proxy15.open_txns(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.Th
[jira] [Updated] (HIVE-13855) select INPUT__FILE__NAME throws NPE exception
[ https://issues.apache.org/jira/browse/HIVE-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-13855: Resolution: Fixed Fix Version/s: 2.2.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks Yongzhi for reviewing. > select INPUT__FILE__NAME throws NPE exception > - > > Key: HIVE-13855 > URL: https://issues.apache.org/jira/browse/HIVE-13855 > Project: Hive > Issue Type: Bug > Components: Query Processor >Reporter: Aihua Xu >Assignee: Aihua Xu > Fix For: 2.2.0 > > Attachments: HIVE-13855.1.patch > > > The following query executes successfully > select INPUT__FILE__NAME from src limit 1; > But the following NPE is thrown > {noformat} > 16/05/25 16:49:49 ERROR exec.Utilities: Failed to load plan: null: > java.lang.NullPointerException > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:407) > at > org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:299) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:315) > at > org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79) > at > org.apache.hadoop.hive.ql.exec.FetchOperator$1.doNext(FetchOperator.java:340) > at > org.apache.hadoop.hive.ql.exec.FetchOperator$1.doNext(FetchOperator.java:331) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:484) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:424) > at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:144) > at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1884) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:252) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) > at > 
org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8282) Potential null dereference in ConvertJoinMapJoin#convertJoinBucketMapJoin()
[ https://issues.apache.org/jira/browse/HIVE-8282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HIVE-8282: - Description: In convertJoinMapJoin(): {code} for (Operator parentOp : joinOp.getParentOperators()) { if (parentOp instanceof MuxOperator) { return null; } } {code} NPE would result if convertJoinMapJoin() returns null: {code} MapJoinOperator mapJoinOp = convertJoinMapJoin(joinOp, context, bigTablePosition); MapJoinDesc joinDesc = mapJoinOp.getConf(); {code} was: In convertJoinMapJoin(): {code} for (Operator parentOp : joinOp.getParentOperators()) { if (parentOp instanceof MuxOperator) { return null; } } {code} NPE would result if convertJoinMapJoin() returns null: {code} MapJoinOperator mapJoinOp = convertJoinMapJoin(joinOp, context, bigTablePosition); MapJoinDesc joinDesc = mapJoinOp.getConf(); {code} > Potential null dereference in ConvertJoinMapJoin#convertJoinBucketMapJoin() > - > > Key: HIVE-8282 > URL: https://issues.apache.org/jira/browse/HIVE-8282 > Project: Hive > Issue Type: Bug >Affects Versions: 0.14.0 >Reporter: Ted Yu >Priority: Minor > Attachments: HIVE-8282.patch > > > In convertJoinMapJoin(): > {code} > for (Operator parentOp : > joinOp.getParentOperators()) { > if (parentOp instanceof MuxOperator) { > return null; > } > } > {code} > NPE would result if convertJoinMapJoin() returns null: > {code} > MapJoinOperator mapJoinOp = convertJoinMapJoin(joinOp, context, > bigTablePosition); > MapJoinDesc joinDesc = mapJoinOp.getConf(); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
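A hypothetical, much-simplified sketch of the hazard described in this issue (the real Hive operator types and method signatures differ): convertJoinMapJoin() returns null when it finds a MuxOperator parent, so the caller must null-check before dereferencing.

```java
import java.util.List;

// Simplified stand-in for ConvertJoinMapJoin; strings replace operator types.
class ConvertJoinMapJoinSketch {
    static String convertJoinMapJoin(List<String> parentOps) {
        for (String parentOp : parentOps) {
            if (parentOp.equals("MuxOperator")) {
                return null; // bail out: every caller must handle this
            }
        }
        return "MapJoinOperator";
    }

    static String convertJoinBucketMapJoin(List<String> parentOps) {
        String mapJoinOp = convertJoinMapJoin(parentOps);
        if (mapJoinOp == null) {
            return null; // the missing guard: avoids the NPE on mapJoinOp.getConf()
        }
        return mapJoinOp + ".getConf()";
    }
}
```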
[jira] [Commented] (HIVE-6589) Automatically add partitions for external tables
[ https://issues.apache.org/jira/browse/HIVE-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308032#comment-15308032 ] Dan Gustafsson commented on HIVE-6589: -- This is a nice-to-have, but would really simplify loading of data. Any solution in sight (that I might have missed)? > Automatically add partitions for external tables > > > Key: HIVE-6589 > URL: https://issues.apache.org/jira/browse/HIVE-6589 > Project: Hive > Issue Type: New Feature >Affects Versions: 0.14.0 >Reporter: Ken Dallmeyer >Assignee: Dharmendra Pratap Singh > > I have a data stream being loaded into Hadoop via Flume. It loads into a date > partition folder in HDFS. The path looks like this: > {code}/flume/my_data/YYYY/MM/DD/HH > /flume/my_data/2014/03/02/01 > /flume/my_data/2014/03/02/02 > /flume/my_data/2014/03/02/03{code} > On top of it I create an EXTERNAL Hive table to do querying. As of now, I > have to manually add partitions. What I want is for EXTERNAL tables, Hive > should "discover" those partitions. Additionally I would like to specify a > partition pattern so that when I query, Hive will know to use the partition > pattern to find the HDFS folder. > So something like this: > {code}CREATE EXTERNAL TABLE my_data ( > col1 STRING, > col2 INT > ) > PARTITIONED BY ( > dt STRING, > hour STRING > ) > LOCATION > '/flume/mydata' > TBLPROPERTIES ( > 'hive.partition.spec' = 'dt=$Y-$M-$D, hour=$H', > 'hive.partition.spec.location' = '$Y/$M/$D/$H', > ); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-6893) out of sequence error in HiveMetastore server
[ https://issues.apache.org/jira/browse/HIVE-6893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-6893: Resolution: Fixed Fix Version/s: 1.3.0 Status: Resolved (was: Patch Available) The fix in the proposed patch is included in HIVE-10956, so this issue should be resolved by that fix. I am closing this JIRA. > out of sequence error in HiveMetastore server > - > > Key: HIVE-6893 > URL: https://issues.apache.org/jira/browse/HIVE-6893 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 0.12.0 >Reporter: Romain Rigaux >Assignee: Naveen Gangam > Fix For: 1.3.0 > > Attachments: HIVE-6893.1.patch > > > Calls listing databases or tables fail. It seems to be a concurrency problem. > {code} > 2014-03-06 05:34:00,785 ERROR hive.log: > org.apache.thrift.TApplicationException: get_databases failed: out of > sequence response > at > org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:76) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_databases(ThriftHiveMetastore.java:472) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_databases(ThriftHiveMetastore.java:459) > at > org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaStoreClient.java:648) > at > org.apache.hive.service.cli.operation.GetSchemasOperation.run(GetSchemasOperation.java:66) > at > org.apache.hive.service.cli.session.HiveSessionImpl.getSchemas(HiveSessionImpl.java:278) > at sun.reflect.GeneratedMethodAccessor323.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:62) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408) > at > org.apache.hadoop.hive.shims.HadoopShimsSecure.doAs(HadoopShimsSecure.java:582) > at > org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:57) > at com.sun.proxy.$Proxy9.getSchemas(Unknown Source) > at > org.apache.hive.service.cli.CLIService.getSchemas(CLIService.java:192) > at > org.apache.hive.service.cli.thrift.ThriftCLIService.GetSchemas(ThriftCLIService.java:263) > at > org.apache.hive.service.cli.thrift.TCLIService$Processor$GetSchemas.getResult(TCLIService.java:1433) > at > org.apache.hive.service.cli.thrift.TCLIService$Processor$GetSchemas.getResult(TCLIService.java:1418) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39) > at > org.apache.hive.service.cli.thrift.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:38) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > at java.lang.Thread.run(Thread.java:724) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-10404) hive.exec.parallel=true causes "out of sequence response" and SocketTimeoutException: Read timed out
[ https://issues.apache.org/jira/browse/HIVE-10404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308100#comment-15308100 ] Naveen Gangam commented on HIVE-10404: -- This should be resolved by HIVE-10956 (session threadlocals are shared by other threads in the same hive session) + HIVE-12790. Would you agree? Thanks > hive.exec.parallel=true causes "out of sequence response" and > SocketTimeoutException: Read timed out > > > Key: HIVE-10404 > URL: https://issues.apache.org/jira/browse/HIVE-10404 > Project: Hive > Issue Type: Bug > Components: Query Processor >Reporter: Eugene Koifman > > With hive.exec.parallel=true, Driver.launchTask() calls Task.initialize() from > 1 thread on several Tasks. It then starts new threads to run those tasks. > Task.initialize() gets an instance of Hive and holds on to it. Hive.java > internally uses ThreadLocal to hand out instances, but since > Task.initialize() is called by a single thread from the Driver, multiple tasks > share an instance of Hive. > Each Hive instance has a single instance of MetaStoreClient; the latter is > not thread-safe. > With hive.exec.parallel=true, different threads actually execute the tasks, so > different threads end up sharing the same MetaStoreClient. > If you make 2 concurrent calls, for example Hive.getTable(String), the Thrift > responses may return to the wrong caller. > Thus the first caller gets "out of sequence response", drops this message and > reconnects. If the timing is right, it will consume the other's response, > but the other caller will block for hive.metastore.client.socket.timeout > since its response message has now been lost. > This is just one concrete example. > One possible fix is to make Task.db use ThreadLocal. > This could be related to HIVE-6893 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
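A hypothetical sketch of the fix suggested above ("make Task.db use ThreadLocal"): each executing thread lazily obtains its own client instead of all tasks sharing the one captured during the single-threaded Task.initialize() call. FakeClient is a stand-in for the non-thread-safe MetaStoreClient.

```java
// Illustrative only; not Hive's actual Task/Hive classes.
class ThreadLocalClientSketch {
    static class FakeClient {} // stand-in for the non-thread-safe client

    // initialValue() runs in whichever thread first calls get(), i.e. the
    // task-execution thread, not the Driver thread that built the tasks.
    private static final ThreadLocal<FakeClient> CLIENT =
        ThreadLocal.withInitial(FakeClient::new);

    static FakeClient get() {
        return CLIENT.get();
    }

    // Two threads observe two distinct clients, so their Thrift
    // request/response streams can no longer interleave.
    static boolean distinctPerThread() {
        final FakeClient[] seen = new FakeClient[2];
        Thread t1 = new Thread(() -> seen[0] = get());
        Thread t2 = new Thread(() -> seen[1] = get());
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            return false;
        }
        return seen[0] != null && seen[1] != null && seen[0] != seen[1];
    }
}
```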
[jira] [Commented] (HIVE-13836) DbNotifications giving an error = Invalid state. Transaction has already started
[ https://issues.apache.org/jira/browse/HIVE-13836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308104#comment-15308104 ] Nachiket Vaidya commented on HIVE-13836: The failures are not related, except for {noformat} org.apache.hive.hcatalog.listener.TestDbNotificationListener.cleanupNotifs org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropDatabase {noformat} I ran them on my system and everything was fine with those test cases: {noformat} --- T E S T S --- Running org.apache.hive.hcatalog.listener.TestDbNotificationListener Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.126 sec - in org.apache.hive.hcatalog.listener.TestDbNotificationListener Running org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.828 sec - in org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler Running org.apache.hive.hcatalog.api.TestHCatClientNotification Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.95 sec - in org.apache.hive.hcatalog.api.TestHCatClientNotification {noformat} > DbNotifications giving an error = Invalid state. Transaction has already > started > > > Key: HIVE-13836 > URL: https://issues.apache.org/jira/browse/HIVE-13836 > Project: Hive > Issue Type: Bug >Reporter: Nachiket Vaidya >Assignee: Nachiket Vaidya >Priority: Critical > Labels: patch-available > Attachments: HIVE-13836.patch > > > I used the pyhs2 Python client to create tables/partitions in Hive. It was working > fine until I moved to multithreaded scripts, which created 8 connections and > ran DDL queries concurrently. > I got this error: > {noformat} > 2016-05-04 17:49:26,226 ERROR > org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-4-thread-194]: > HMSHandler Fatal error: Invalid state. Transaction has already started > org.datanucleus.transaction.NucleusTransactionException: Invalid state. 
> Transaction has already started > at > org.datanucleus.transaction.TransactionManager.begin(TransactionManager.java:47) > at org.datanucleus.TransactionImpl.begin(TransactionImpl.java:131) > at > org.datanucleus.api.jdo.JDOTransaction.internalBegin(JDOTransaction.java:88) > at > org.datanucleus.api.jdo.JDOTransaction.begin(JDOTransaction.java:80) > at > org.apache.hadoop.hive.metastore.ObjectStore.openTransaction(ObjectStore.java:463) > at > org.apache.hadoop.hive.metastore.ObjectStore.addNotificationEvent(ObjectStore.java:7522) > at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114) > at com.sun.proxy.$Proxy10.addNotificationEvent(Unknown Source) > at > org.apache.hive.hcatalog.listener.DbNotificationListener.enqueue(DbNotificationListener.java:261) > at > org.apache.hive.hcatalog.listener.DbNotificationListener.onCreateTable(DbNotificationListener.java:123) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1483) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1502) > at sun.reflect.GeneratedMethodAccessor57.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:138) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) > at > com.sun.proxy.$Proxy14.create_table_with_environment_context(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:9267) 
> {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
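An illustrative sketch only, not Hive's actual ObjectStore code: a reentrant transaction wrapper that calls the underlying begin()/commit() only at the outermost nesting level. Calling begin() again while a transaction is active is what produces "Invalid state. Transaction has already started" in the trace above, e.g. when the listener opens a transaction inside one that create_table_core already started.

```java
// Reentrant transaction guard; the counters exist only for illustration.
class NestedTxnSketch {
    private int depth = 0;
    int underlyingBegins = 0;   // how many times the real begin() would run
    int underlyingCommits = 0;  // how many times the real commit() would run

    void openTransaction() {
        if (depth++ == 0) {
            underlyingBegins++; // the real DataNucleus begin() goes here only
        }
    }

    void commitTransaction() {
        if (depth <= 0) {
            throw new IllegalStateException("commit without open");
        }
        if (--depth == 0) {
            underlyingCommits++; // the real commit() goes here only
        }
    }
}
```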
[jira] [Commented] (HIVE-13857) insert overwrite select from some table fails throwing org.apache.hadoop.security.AccessControlException - II
[ https://issues.apache.org/jira/browse/HIVE-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308106#comment-15308106 ] Ashutosh Chauhan commented on HIVE-13857: - [~hsubramaniyan] Can you please commit this on branch-2.1 as well? > insert overwrite select from some table fails throwing > org.apache.hadoop.security.AccessControlException - II > - > > Key: HIVE-13857 > URL: https://issues.apache.org/jira/browse/HIVE-13857 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Fix For: 2.1.0 > > Attachments: HIVE-13857.1.patch, HIVE-13857.2.patch, > HIVE-13857.3.patch, HIVE-13857.4.patch, HIVE-13857.5.patch > > > HIVE-13810 missed a fix, tracking it here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HIVE-6589) Automatically add partitions for external tables
[ https://issues.apache.org/jira/browse/HIVE-6589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308032#comment-15308032 ] Dan Gustafsson edited comment on HIVE-6589 at 5/31/16 4:57 PM: --- This is a nice-to-have, but would really simplify loading of data. The solution given by the blogs is: MSCK REPAIR TABLE your_table_name; This seems to identify and repair partitions, if they have been created as folders with the "partition_symbol=partition_value" structure. was (Author: dan0704090...@hotmail.com): This is a nice-to-have, but would really simplify loading of data. Any solution in sight (that I might have missed)? > Automatically add partitions for external tables > > > Key: HIVE-6589 > URL: https://issues.apache.org/jira/browse/HIVE-6589 > Project: Hive > Issue Type: New Feature >Affects Versions: 0.14.0 >Reporter: Ken Dallmeyer >Assignee: Dharmendra Pratap Singh > > I have a data stream being loaded into Hadoop via Flume. It loads into a date > partition folder in HDFS. The path looks like this: > {code}/flume/my_data/YYYY/MM/DD/HH > /flume/my_data/2014/03/02/01 > /flume/my_data/2014/03/02/02 > /flume/my_data/2014/03/02/03{code} > On top of it I create an EXTERNAL Hive table to do querying. As of now, I > have to manually add partitions. What I want is for EXTERNAL tables, Hive > should "discover" those partitions. Additionally I would like to specify a > partition pattern so that when I query, Hive will know to use the partition > pattern to find the HDFS folder. > So something like this: > {code}CREATE EXTERNAL TABLE my_data ( > col1 STRING, > col2 INT > ) > PARTITIONED BY ( > dt STRING, > hour STRING > ) > LOCATION > '/flume/mydata' > TBLPROPERTIES ( > 'hive.partition.spec' = 'dt=$Y-$M-$D, hour=$H', > 'hive.partition.spec.location' = '$Y/$M/$D/$H', > ); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
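An illustrative sketch of the discovery step behind MSCK REPAIR TABLE mentioned above: parse "key=value" path segments under the table location into a partition spec. The path is hypothetical; real Hive additionally unescapes values and validates keys against the table's partition columns.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Parses "dt=2014-03-02/hour=01"-style directory paths into a partition spec.
class PartitionDiscoverySketch {
    static Map<String, String> parse(String relativePath) {
        Map<String, String> spec = new LinkedHashMap<>();
        for (String segment : relativePath.split("/")) {
            int eq = segment.indexOf('=');
            if (eq > 0) {
                // key is everything before '=', value everything after
                spec.put(segment.substring(0, eq), segment.substring(eq + 1));
            }
        }
        return spec;
    }
}
```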
[jira] [Commented] (HIVE-13857) insert overwrite select from some table fails throwing org.apache.hadoop.security.AccessControlException - II
[ https://issues.apache.org/jira/browse/HIVE-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308131#comment-15308131 ] Hari Sankar Sivarama Subramaniyan commented on HIVE-13857: -- This was committed to master, branch-2.1 when it was marked resolved. Thanks [~ashutoshc] for the reviews. > insert overwrite select from some table fails throwing > org.apache.hadoop.security.AccessControlException - II > - > > Key: HIVE-13857 > URL: https://issues.apache.org/jira/browse/HIVE-13857 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > Fix For: 2.1.0 > > Attachments: HIVE-13857.1.patch, HIVE-13857.2.patch, > HIVE-13857.3.patch, HIVE-13857.4.patch, HIVE-13857.5.patch > > > HIVE-13810 missed a fix, tracking it here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-11956) SHOW LOCKS should indicate what acquired the lock
[ https://issues.apache.org/jira/browse/HIVE-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-11956: -- Attachment: HIVE-11956.2.patch patch 2 addresses review comments > SHOW LOCKS should indicate what acquired the lock > - > > Key: HIVE-11956 > URL: https://issues.apache.org/jira/browse/HIVE-11956 > Project: Hive > Issue Type: Improvement > Components: CLI, Transactions >Affects Versions: 0.14.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-11956.2.patch, HIVE-11956.patch > > > This can be a queryId, Flume agent id, Storm bolt id, etc. This would > dramatically help diagnosing issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13196) UDFLike: reduce Regex NFA sizes
[ https://issues.apache.org/jira/browse/HIVE-13196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13196: Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) Pushed to master & branch-2.1 > UDFLike: reduce Regex NFA sizes > --- > > Key: HIVE-13196 > URL: https://issues.apache.org/jira/browse/HIVE-13196 > Project: Hive > Issue Type: Improvement > Components: UDF >Affects Versions: 1.3.0, 1.2.1, 2.0.0, 2.1.0 >Reporter: Gopal V >Assignee: Gopal V >Priority: Minor > Fix For: 2.1.0 > > Attachments: HIVE-13196.1.patch, HIVE-13196.1.patch > > > The NFAs built from complex regexes in UDFLike are extremely complex and > spend a lot of time doing simple expression matching with no backtracking. > Prevent NFA -> DFA explosion by using reluctant regex matches instead of > greedy matches. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
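A sketch of the idea behind this change: when translating a SQL LIKE pattern into a Java regex, emit the reluctant quantifier ".*?" instead of the greedy ".*" for '%'. Both forms accept the same strings, but the reluctant version avoids backtracking blow-up on patterns with many wildcards. The translation below is a simplified stand-in for UDFLike's, handling only '%' and '_'.

```java
import java.util.regex.Pattern;

// Translates a SQL LIKE pattern into a java.util.regex Pattern.
class LikeToRegexSketch {
    static Pattern toPattern(String like) {
        StringBuilder re = new StringBuilder();
        for (char c : like.toCharArray()) {
            if (c == '%') {
                re.append(".*?");               // reluctant, not greedy ".*"
            } else if (c == '_') {
                re.append('.');                 // any single character
            } else {
                re.append(Pattern.quote(String.valueOf(c))); // literal char
            }
        }
        return Pattern.compile(re.toString(), Pattern.DOTALL);
    }
}
```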
[jira] [Resolved] (HIVE-13880) add HiveAuthzContext to grant/revoke methods in HiveAuthorizer api
[ https://issues.apache.org/jira/browse/HIVE-13880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez resolved HIVE-13880. Resolution: Duplicate Closing as duplicate of HIVE-13879. > add HiveAuthzContext to grant/revoke methods in HiveAuthorizer api > -- > > Key: HIVE-13880 > URL: https://issues.apache.org/jira/browse/HIVE-13880 > Project: Hive > Issue Type: Bug > Components: Authorization >Reporter: Thejas M Nair >Assignee: Thejas M Nair > > Passing context information to grant/revoke methods will help auditing > logging those methods by authorizer plugin implementations. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-13823) Remove unnecessary log line in common join operator
[ https://issues.apache.org/jira/browse/HIVE-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez resolved HIVE-13823. Resolution: Fixed Fix Version/s: 2.1.0 Pushed to master, branch-2.1. Thanks [~hagleitn]! > Remove unnecessary log line in common join operator > --- > > Key: HIVE-13823 > URL: https://issues.apache.org/jira/browse/HIVE-13823 > Project: Hive > Issue Type: Bug >Reporter: Gunther Hagleitner >Assignee: Gunther Hagleitner > Fix For: 2.1.0 > > Attachments: HIVE-13823.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-11956) SHOW LOCKS should indicate what acquired the lock
[ https://issues.apache.org/jira/browse/HIVE-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308278#comment-15308278 ] Wei Zheng commented on HIVE-11956: -- +1 > SHOW LOCKS should indicate what acquired the lock > - > > Key: HIVE-11956 > URL: https://issues.apache.org/jira/browse/HIVE-11956 > Project: Hive > Issue Type: Improvement > Components: CLI, Transactions >Affects Versions: 0.14.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Critical > Attachments: HIVE-11956.2.patch, HIVE-11956.patch > > > This can be a queryId, Flume agent id, Storm bolt id, etc. This would > dramatically help diagnosing issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13855) select INPUT__FILE__NAME throws NPE exception
[ https://issues.apache.org/jira/browse/HIVE-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308281#comment-15308281 ] Hive QA commented on HIVE-13855: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12806244/HIVE-13855.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 10191 tests executed *Failed tests:* {noformat} TestHWISessionManager - did not produce a TEST-*.xml file TestJdbcWithMiniHA - did not produce a TEST-*.xml file TestJdbcWithMiniMr - did not produce a TEST-*.xml file TestOperationLoggingAPIWithTez - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constant_prop_3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constprog3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cross_join org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_view org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ppd_repeated_alias org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynamic_partition_pruning org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynamic_partition_pruning_2 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_join_view 
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testLocksInSubquery {noformat} Test results: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/470/testReport Console output: http://ec2-54-177-240-2.us-west-1.compute.amazonaws.com/job/PreCommit-HIVE-MASTER-Build/470/console Test logs: http://ec2-50-18-27-0.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-470/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 22 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12806244 - PreCommit-HIVE-MASTER-Build > select INPUT__FILE__NAME throws NPE exception > - > > Key: HIVE-13855 > URL: https://issues.apache.org/jira/browse/HIVE-13855 > Project: Hive > Issue Type: Bug > Components: Query Processor >Reporter: Aihua Xu >Assignee: Aihua Xu > Fix For: 2.2.0 > > Attachments: HIVE-13855.1.patch > > > The following query executes successfully > select INPUT__FILE__NAME from src limit 1; > But the following NPE is thrown > {noformat} > 16/05/25 16:49:49 ERROR exec.Utilities: Failed to load plan: null: > java.lang.NullPointerException > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:407) > at > org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:299) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:315) > at > org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79) > at > org.apache.hadoop.hive.ql.exec.FetchOperator$1.doNext(FetchOperator.java:340) > at > org.apache.hadoop.hive.ql.exec.FetchOperator$1.doNext(FetchOperator.java:331) > at > 
org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:484) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:424) > at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:144) > at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1884) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:252) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.
[jira] [Updated] (HIVE-13599) LLAP: Incorrect handling of the preemption queue on finishable state updates
[ https://issues.apache.org/jira/browse/HIVE-13599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13599: -- Attachment: HIVE-13599.01.patch Re-uploading for Jenkins. > LLAP: Incorrect handling of the preemption queue on finishable state updates > > > Key: HIVE-13599 > URL: https://issues.apache.org/jira/browse/HIVE-13599 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Siddharth Seth >Priority: Critical > Attachments: HIVE-13599.01.patch, HIVE-13599.01.patch > > > When running some tests with pre-emption enabled, I got the following exception. > It looks like a race condition when removing items from the pre-emption queue. > {code} > 16/04/23 23:32:00 [Wait-Queue-Scheduler-0[]] ERROR impl.TaskExecutorService : > Wait queue scheduler worker exited with failure! > java.util.NoSuchElementException > at java.util.AbstractQueue.remove(AbstractQueue.java:117) > ~[?:1.7.0_55] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.removeAndGetFromPreemptionQueue(TaskExecutorService.java:568) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:493) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:81) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:285) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > ~[?:1.7.0_55] > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > [?:1.7.0_55] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [?:1.7.0_55] > at > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [?:1.7.0_55] > at java.lang.Thread.run(Thread.java:745) [?:1.7.0_55] > 16/04/23 23:32:00 [Wait-Queue-Scheduler-0[]] INFO impl.LlapDaemon : > UncaughtExceptionHandler invoked > 16/04/23 23:32:00 [Wait-Queue-Scheduler-0[]] ERROR impl.LlapDaemon : Thread > Thread[Wait-Queue-Scheduler-0,5,main] threw an Exception. Shutting down now... > java.util.NoSuchElementException > at java.util.AbstractQueue.remove(AbstractQueue.java:117) > ~[?:1.7.0_55] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.removeAndGetFromPreemptionQueue(TaskExecutorService.java:568) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.handleScheduleAttemptedRejection(TaskExecutorService.java:493) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService.access$1100(TaskExecutorService.java:81) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.llap.daemon.impl.TaskExecutorService$WaitQueueWorker.run(TaskExecutorService.java:285) > ~[hive-llap-server-2.1.0-SNAPSHOT.jar:2.1.0-SNAPSHOT] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > ~[?:1.7.0_55] > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > [?:1.7.0_55] > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > [?:1.7.0_55] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) > [?:1.7.0_55] > at java.lang.Thread.run(Thread.java:745) [?:1.7.0_55] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
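The failure mode in the traces above can be illustrated in isolation: AbstractQueue.remove() throws NoSuchElementException on an empty queue, while poll() returns null. A check-then-remove sequence is racy when another thread can drain the queue in between; poll() plus a null check tolerates that race. The class and element values below are hypothetical, not LLAP's actual code.

```java
import java.util.concurrent.PriorityBlockingQueue;

// Contrasts the throwing and non-throwing ways to take from a queue.
class PreemptionQueueSketch {
    static Object takeOrThrow(PriorityBlockingQueue<String> q) {
        return q.remove(); // throws NoSuchElementException if q was drained
    }

    static Object takeOrNull(PriorityBlockingQueue<String> q) {
        return q.poll();   // null when empty; the caller handles it
    }
}
```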
[jira] [Updated] (HIVE-13281) Update some default configs for LLAP - disable default uber enabled
[ https://issues.apache.org/jira/browse/HIVE-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13281: -- Attachment: HIVE-13281.03.patch Re-uploading for jenkins. > Update some default configs for LLAP - disable default uber enabled > --- > > Key: HIVE-13281 > URL: https://issues.apache.org/jira/browse/HIVE-13281 > Project: Hive > Issue Type: Task >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-13281.03.patch, HIVE-13281.03.patch, > HIVE-13281.1.patch, HIVE-13281.2.patch > > > Disable uber mode. > Enable llap.io by default -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13855) select INPUT__FILE__NAME throws NPE exception
[ https://issues.apache.org/jira/browse/HIVE-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308308#comment-15308308 ] Aihua Xu commented on HIVE-13855: - Somehow I thought the test build was run and I committed the change. Good thing is the failures don't seem to be related to the patch. > select INPUT__FILE__NAME throws NPE exception > - > > Key: HIVE-13855 > URL: https://issues.apache.org/jira/browse/HIVE-13855 > Project: Hive > Issue Type: Bug > Components: Query Processor >Reporter: Aihua Xu >Assignee: Aihua Xu > Fix For: 2.2.0 > > Attachments: HIVE-13855.1.patch > > > The following query executes successfully > select INPUT__FILE__NAME from src limit 1; > But the following NPE is thrown > {noformat} > 16/05/25 16:49:49 ERROR exec.Utilities: Failed to load plan: null: > java.lang.NullPointerException > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.Utilities.getBaseWork(Utilities.java:407) > at > org.apache.hadoop.hive.ql.exec.Utilities.getMapWork(Utilities.java:299) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:315) > at > org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79) > at > org.apache.hadoop.hive.ql.exec.FetchOperator$1.doNext(FetchOperator.java:340) > at > org.apache.hadoop.hive.ql.exec.FetchOperator$1.doNext(FetchOperator.java:331) > at > org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:484) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:424) > at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:144) > at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1884) > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:252) > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) > at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) > at > org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) > at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) > at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at org.apache.hadoop.util.RunJar.run(RunJar.java:221) > at org.apache.hadoop.util.RunJar.main(RunJar.java:136) > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13788) hive msck listpartitions need to make use of directSQL instead of datanucleus
[ https://issues.apache.org/jira/browse/HIVE-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308309#comment-15308309 ] Hari Sankar Sivarama Subramaniyan commented on HIVE-13788: -- [~rajesh.balamohan] Do we have the entire Hive stack trace? We might be making unnecessary calls to {{Hive::getPartitions(tbl)}} instead of {{Hive::getAllPartitionsOf(tbl)}}, which is lightweight (because it doesn't look for auth info). Thanks Hari > hive msck listpartitions need to make use of directSQL instead of datanucleus > - > > Key: HIVE-13788 > URL: https://issues.apache.org/jira/browse/HIVE-13788 > Project: Hive > Issue Type: Improvement >Reporter: Rajesh Balamohan >Assignee: Hari Sankar Sivarama Subramaniyan >Priority: Minor > Attachments: msck_stack_trace.png > > > Currently, for tables having 1000s of partitions, too many DB calls are made > via datanucleus. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13859) mask() UDF not retaining day and month field values
[ https://issues.apache.org/jira/browse/HIVE-13859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308310#comment-15308310 ] Ashutosh Chauhan commented on HIVE-13859: - +1 > mask() UDF not retaining day and month field values > --- > > Key: HIVE-13859 > URL: https://issues.apache.org/jira/browse/HIVE-13859 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Madhan Neethiraj > Fix For: 2.1.0 > > Attachments: HIVE-13859.patch > > > For date type parameters, mask() UDF replaces year/month/day field values > with the values given in arguments to the UDF. Argument value -1 is treated > as special, to specify that mask() should retain the value in the parameter. > This allows to selectively mask only year/month/day fields. > Specifying "-1" does not retain the values for day/month fields; however the > year value is retained, as shown below. > {code} > 0: jdbc:hive2://localhost:1> select id, join_date from employee where id > < 4; > +-+-+--+ > | id | join_date | > +-+-+--+ > | 1 | 2012-01-01 | > | 2 | 2014-02-01 | > | 3 | 2013-03-01 | > +-+-+--+ > 3 rows selected (0.435 seconds) > 0: jdbc:hive2://localhost:1> select id, mask(join_date, -1, -1, -1, > -1,-1, -1,-1,-1) join_date from employee where id < 4; > +-+-+--+ > | id | join_date | > +-+-+--+ > | 1 | 2012-01-01 | > | 2 | 2014-01-01 | > | 3 | 2013-01-01 | > +-+-+--+ > 3 rows selected (0.344 seconds) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
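The retain-on-(-1) convention described in HIVE-13859 can be modeled with a tiny helper. This is an illustrative sketch of the *intended* semantics only — not the actual Hive mask() UDF implementation — showing what the query output above should have been:

```java
public class MaskFieldSketch {
    // Per the report: a mask argument of -1 means "retain the original
    // field value"; any other value replaces the field. The bug was that
    // -1 was honored for the year but not for the month/day fields.
    static int maskField(int original, int maskArg) {
        return maskArg == -1 ? original : maskArg;
    }

    public static void main(String[] args) {
        // join_date 2014-02-01 masked with all -1 arguments should
        // survive intact, rather than collapsing to 2014-01-01.
        int year  = maskField(2014, -1);
        int month = maskField(2, -1);
        int day   = maskField(1, -1);
        System.out.println(year + "-" + month + "-" + day); // 2014-2-1
    }
}
```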
[jira] [Commented] (HIVE-13884) Disallow queries fetching more than a configured number of partitions in PartitionPruner
[ https://issues.apache.org/jira/browse/HIVE-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308319#comment-15308319 ] Sergey Shelukhin commented on HIVE-13884: - Should the limit rather be passed to metastore to avoid 2 network roundtrips for normal cases? > Disallow queries fetching more than a configured number of partitions in > PartitionPruner > > > Key: HIVE-13884 > URL: https://issues.apache.org/jira/browse/HIVE-13884 > Project: Hive > Issue Type: Improvement >Reporter: Mohit Sabharwal >Assignee: Mohit Sabharwal > > Currently the PartitionPruner requests either all partitions or partitions > based on filter expression. In either scenarios, if the number of partitions > accessed is large there can be significant memory pressure at the HMS server > end. > We already have a config {{hive.limit.query.max.table.partition}} that > enforces limits on number of partitions that may be scanned per operator. But > this check happens after the PartitionPruner has already fetched all > partitions. > We should add an option at PartitionPruner level to disallow queries that > attempt to access number of partitions beyond a configurable limit. > Note that {{hive.mapred.mode=strict}} disallow queries without a partition > filter in PartitionPruner, but this check accepts any query with a pruning > condition, even if partitions fetched are large. In multi-tenant > environments, admins could use more control w.r.t. number of partitions > allowed based on HMS memory capacity. > One option is to have PartitionPruner first fetch the partition names > (instead of partition specs) and throw an exception if number of partitions > exceeds the configured value. Otherwise, fetch the partition specs. > Looks like the existing {{listPartitionNames}} call could be used if extended > to take partition filter expressions like {{getPartitionsByExpr}} call does. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
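The guard proposed in HIVE-13884 — fetch the cheap partition *names* first and fail fast if their count exceeds a configured ceiling, before issuing the expensive full-spec fetch — could look roughly like the following. The method and exception choices here are hypothetical, not Hive's actual PartitionPruner API:

```java
import java.util.Arrays;
import java.util.List;

public class PartitionLimitSketch {
    // Hypothetical guard: compare the partition-name count against a
    // configured ceiling (maxPartitions < 0 meaning "unlimited") before
    // fetching full partition specs from the metastore.
    static void checkPartitionLimit(List<String> partitionNames, int maxPartitions) {
        if (maxPartitions >= 0 && partitionNames.size() > maxPartitions) {
            throw new IllegalStateException(
                "Query would fetch " + partitionNames.size()
                + " partitions, exceeding the configured limit of " + maxPartitions);
        }
    }

    public static void main(String[] args) {
        checkPartitionLimit(Arrays.asList("ds=1", "ds=2"), 10); // within limit: no-op
        try {
            checkPartitionLimit(Arrays.asList("ds=1", "ds=2", "ds=3"), 2);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The point of ordering the calls this way is exactly the one Sergey raises in the comment above: pushing the limit into the metastore side avoids a second round trip, whereas the name-first approach keeps the check client-side at the cost of one extra call.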
[jira] [Commented] (HIVE-13885) Hive session close is not resetting thread name
[ https://issues.apache.org/jira/browse/HIVE-13885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308317#comment-15308317 ] Sergey Shelukhin commented on HIVE-13885: - +1 > Hive session close is not resetting thread name > --- > > Key: HIVE-13885 > URL: https://issues.apache.org/jira/browse/HIVE-13885 > Project: Hive > Issue Type: Bug >Reporter: Rajat Khandelwal >Assignee: Rajat Khandelwal > Fix For: 2.1.0 > > Attachments: HIVE-13885.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13859) mask() UDF not retaining day and month field values
[ https://issues.apache.org/jira/browse/HIVE-13859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13859: Resolution: Fixed Status: Resolved (was: Patch Available) Pushed to master & branch-2.1 > mask() UDF not retaining day and month field values > --- > > Key: HIVE-13859 > URL: https://issues.apache.org/jira/browse/HIVE-13859 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Madhan Neethiraj > Fix For: 2.1.0 > > Attachments: HIVE-13859.patch > > > For date type parameters, mask() UDF replaces year/month/day field values > with the values given in arguments to the UDF. Argument value -1 is treated > as special, to specify that mask() should retain the value in the parameter. > This allows to selectively mask only year/month/day fields. > Specifying "-1" does not retain the values for day/month fields; however the > year value is retained, as shown below. > {code} > 0: jdbc:hive2://localhost:1> select id, join_date from employee where id > < 4; > +-+-+--+ > | id | join_date | > +-+-+--+ > | 1 | 2012-01-01 | > | 2 | 2014-02-01 | > | 3 | 2013-03-01 | > +-+-+--+ > 3 rows selected (0.435 seconds) > 0: jdbc:hive2://localhost:1> select id, mask(join_date, -1, -1, -1, > -1,-1, -1,-1,-1) join_date from employee where id < 4; > +-+-+--+ > | id | join_date | > +-+-+--+ > | 1 | 2012-01-01 | > | 2 | 2014-01-01 | > | 3 | 2013-01-01 | > +-+-+--+ > 3 rows selected (0.344 seconds) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13719) TestConverters fails on master
[ https://issues.apache.org/jira/browse/HIVE-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308326#comment-15308326 ] Siddharth Seth commented on HIVE-13719: --- Committing this. The test failures were caused by too many executors on a single node. Have verified that the test which the patch touches succeeds locally. > TestConverters fails on master > -- > > Key: HIVE-13719 > URL: https://issues.apache.org/jira/browse/HIVE-13719 > Project: Hive > Issue Type: Bug > Components: llap, Tests >Affects Versions: 2.1.0 >Reporter: Ashutosh Chauhan >Assignee: Siddharth Seth > Attachments: HIVE-13719.01.patch, HIVE-13719.02.patch > > > Can be reproduced locally also. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13248) Change date_add/date_sub/to_date functions to return Date type rather than String
[ https://issues.apache.org/jira/browse/HIVE-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308327#comment-15308327 ] Ashutosh Chauhan commented on HIVE-13248: - [~jdere] Is this ready to go in? > Change date_add/date_sub/to_date functions to return Date type rather than > String > - > > Key: HIVE-13248 > URL: https://issues.apache.org/jira/browse/HIVE-13248 > Project: Hive > Issue Type: Improvement > Components: UDF >Affects Versions: 2.0.0, 2.1.0 >Reporter: Jason Dere >Assignee: Jason Dere > Attachments: HIVE-13248.1.patch, HIVE-13248.2.patch, > HIVE-13248.3.patch > > > Some of the original "date" related functions return string values rather > than Date values, because they were created before the Date type existed in > Hive. We can try to change these to return Date in the 2.x line. > Date values should be implicitly convertible to String. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (HIVE-13719) TestConverters fails on master
[ https://issues.apache.org/jira/browse/HIVE-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308326#comment-15308326 ] Siddharth Seth edited comment on HIVE-13719 at 5/31/16 6:45 PM: Committing this. The test failures were caused by too many executors on a single node. Have verified that the test which the patch touches succeeds locally. {code} Running org.apache.hadoop.hive.llap.tez.TestConverters Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.617 sec - in org.apache.hadoop.hive.llap.tez.TestConverters Results : Tests run: 2, Failures: 0, Errors: 0, Skipped: 0 {code} was (Author: sseth): Committing this. The test failures were caused by too many executors on a single node. Have verified that the test which the patch touches succeeds locally. > TestConverters fails on master > -- > > Key: HIVE-13719 > URL: https://issues.apache.org/jira/browse/HIVE-13719 > Project: Hive > Issue Type: Bug > Components: llap, Tests >Affects Versions: 2.1.0 >Reporter: Ashutosh Chauhan >Assignee: Siddharth Seth > Attachments: HIVE-13719.01.patch, HIVE-13719.02.patch > > > Can be reproduced locally also. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13719) TestConverters fails on master
[ https://issues.apache.org/jira/browse/HIVE-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-13719: -- Resolution: Fixed Fix Version/s: 2.1.0 Status: Resolved (was: Patch Available) > TestConverters fails on master > -- > > Key: HIVE-13719 > URL: https://issues.apache.org/jira/browse/HIVE-13719 > Project: Hive > Issue Type: Bug > Components: llap, Tests >Affects Versions: 2.1.0 >Reporter: Ashutosh Chauhan >Assignee: Siddharth Seth > Fix For: 2.1.0 > > Attachments: HIVE-13719.01.patch, HIVE-13719.02.patch > > > Can be reproduced locally also. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13798) Fix the unit test failure org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
[ https://issues.apache.org/jira/browse/HIVE-13798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308337#comment-15308337 ] Ashutosh Chauhan commented on HIVE-13798: - +1. Seems like tests didn't run on this one. > Fix the unit test failure > org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload > > > Key: HIVE-13798 > URL: https://issues.apache.org/jira/browse/HIVE-13798 > Project: Hive > Issue Type: Sub-task >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13798.2.patch, HIVE-13798.3.patch, > HIVE-13798.4.patch, HIVE-13798.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13852) NPE in TaskLocationHints during LLAP GetSplits request
[ https://issues.apache.org/jira/browse/HIVE-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308339#comment-15308339 ] Jason Dere commented on HIVE-13852: --- Failures do not look related, 2 new failures (1 of which also shows up in the next build) and the other does not fail running locally. > NPE in TaskLocationHints during LLAP GetSplits request > -- > > Key: HIVE-13852 > URL: https://issues.apache.org/jira/browse/HIVE-13852 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Jason Dere >Assignee: Jason Dere > Attachments: HIVE-13852.1.patch > > > {noformat} > Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: > java.io.IOException: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:194) > at org.apache.hadoop.hive.ql.exec.UDTFOperator.process(UDTFOperator.java:116) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:434) > at > org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:426) > at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:144) > ... 15 more > Caused by: java.io.IOException: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:366) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.process(GenericUDTFGetSplits.java:185) > ... 23 more > Caused by: java.lang.NullPointerException: null > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDTFGetSplits.getSplits(GenericUDTFGetSplits.java:344) > ... 
24 more > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13840) Orc split generation is reading file footers twice
[ https://issues.apache.org/jira/browse/HIVE-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13840: - Resolution: Fixed Fix Version/s: 2.2.0 2.1.0 Status: Resolved (was: Patch Available) Committed to branch-2.1 and master. > Orc split generation is reading file footers twice > -- > > Key: HIVE-13840 > URL: https://issues.apache.org/jira/browse/HIVE-13840 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Critical > Fix For: 2.1.0, 2.2.0 > > Attachments: HIVE-13840.1.patch, HIVE-13840.2.patch, > HIVE-13840.3.patch > > > Recent refactorings to move orc out introduced a regression in split > generation. This leads to reading the orc file footers twice during split > generation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13840) Orc split generation is reading file footers twice
[ https://issues.apache.org/jira/browse/HIVE-13840?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-13840: - Release Note: Fix for ORC split generation reading file footers twice. Also reduces number of file system calls during ORC split generation. > Orc split generation is reading file footers twice > -- > > Key: HIVE-13840 > URL: https://issues.apache.org/jira/browse/HIVE-13840 > Project: Hive > Issue Type: Bug > Components: ORC >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Critical > Fix For: 2.1.0, 2.2.0 > > Attachments: HIVE-13840.1.patch, HIVE-13840.2.patch, > HIVE-13840.3.patch > > > Recent refactorings to move orc out introduced a regression in split > generation. This leads to reading the orc file footers twice during split > generation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13872) Vectorization: Fix cross-product reduce sink serialization
[ https://issues.apache.org/jira/browse/HIVE-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13872: --- Target Version/s: 2.2.0 (was: 2.1.0) > Vectorization: Fix cross-product reduce sink serialization > -- > > Key: HIVE-13872 > URL: https://issues.apache.org/jira/browse/HIVE-13872 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 2.1.0 >Reporter: Gopal V > Attachments: HIVE-13872.WIP.patch > > > TPC-DS Q13 produces a cross-product without CBO simplifying the query > {code} > Caused by: java.lang.RuntimeException: null STRING entry: batchIndex 0 > projection column num 1 > at > org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.nullBytesReadError(VectorExtractRow.java:349) > at > org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:267) > at > org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRow(VectorExtractRow.java:343) > at > org.apache.hadoop.hive.ql.exec.vector.VectorReduceSinkOperator.process(VectorReduceSinkOperator.java:103) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:762) > ... 
18 more > {code} > Simplified query > {code} > set hive.cbo.enable=false; > -- explain > select count(1) > from store_sales > ,customer_demographics > where ( > ( > customer_demographics.cd_demo_sk = store_sales.ss_cdemo_sk > and customer_demographics.cd_marital_status = 'M' > )or > ( >customer_demographics.cd_demo_sk = ss_cdemo_sk > and customer_demographics.cd_marital_status = 'U' > )) > ; > {code} > {code} > Map 3 > Map Operator Tree: > TableScan > alias: customer_demographics > Statistics: Num rows: 1920800 Data size: 717255532 Basic > stats: COMPLETE Column stats: NONE > Reduce Output Operator > sort order: > Statistics: Num rows: 1920800 Data size: 717255532 Basic > stats: COMPLETE Column stats: NONE > value expressions: cd_demo_sk (type: int), > cd_marital_status (type: string) > Execution mode: vectorized, llap > LLAP IO: all inputs > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13879) add HiveAuthzContext to grant/revoke methods in HiveAuthorizer api
[ https://issues.apache.org/jira/browse/HIVE-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13879: --- Target Version/s: 2.2.0 (was: 2.1.0) > add HiveAuthzContext to grant/revoke methods in HiveAuthorizer api > -- > > Key: HIVE-13879 > URL: https://issues.apache.org/jira/browse/HIVE-13879 > Project: Hive > Issue Type: Bug > Components: Authorization >Reporter: Thejas M Nair >Assignee: Thejas M Nair > > Passing context information to grant/revoke methods will help auditing > logging those methods by authorizer plugin implementations. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13879) add HiveAuthzContext to grant/revoke methods in HiveAuthorizer api
[ https://issues.apache.org/jira/browse/HIVE-13879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308354#comment-15308354 ] Jesus Camacho Rodriguez commented on HIVE-13879: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > add HiveAuthzContext to grant/revoke methods in HiveAuthorizer api > -- > > Key: HIVE-13879 > URL: https://issues.apache.org/jira/browse/HIVE-13879 > Project: Hive > Issue Type: Bug > Components: Authorization >Reporter: Thejas M Nair >Assignee: Thejas M Nair > > Passing context information to grant/revoke methods will help auditing > logging those methods by authorizer plugin implementations. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13872) Vectorization: Fix cross-product reduce sink serialization
[ https://issues.apache.org/jira/browse/HIVE-13872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308356#comment-15308356 ] Jesus Camacho Rodriguez commented on HIVE-13872: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > Vectorization: Fix cross-product reduce sink serialization > -- > > Key: HIVE-13872 > URL: https://issues.apache.org/jira/browse/HIVE-13872 > Project: Hive > Issue Type: Bug > Components: Vectorization >Affects Versions: 2.1.0 >Reporter: Gopal V > Attachments: HIVE-13872.WIP.patch > > > TPC-DS Q13 produces a cross-product without CBO simplifying the query > {code} > Caused by: java.lang.RuntimeException: null STRING entry: batchIndex 0 > projection column num 1 > at > org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.nullBytesReadError(VectorExtractRow.java:349) > at > org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRowColumn(VectorExtractRow.java:267) > at > org.apache.hadoop.hive.ql.exec.vector.VectorExtractRow.extractRow(VectorExtractRow.java:343) > at > org.apache.hadoop.hive.ql.exec.vector.VectorReduceSinkOperator.process(VectorReduceSinkOperator.java:103) > at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837) > at > org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:130) > at > org.apache.hadoop.hive.ql.exec.vector.VectorMapOperator.process(VectorMapOperator.java:762) > ... 
18 more > {code} > Simplified query > {code} > set hive.cbo.enable=false; > -- explain > select count(1) > from store_sales > ,customer_demographics > where ( > ( > customer_demographics.cd_demo_sk = store_sales.ss_cdemo_sk > and customer_demographics.cd_marital_status = 'M' > )or > ( >customer_demographics.cd_demo_sk = ss_cdemo_sk > and customer_demographics.cd_marital_status = 'U' > )) > ; > {code} > {code} > Map 3 > Map Operator Tree: > TableScan > alias: customer_demographics > Statistics: Num rows: 1920800 Data size: 717255532 Basic > stats: COMPLETE Column stats: NONE > Reduce Output Operator > sort order: > Statistics: Num rows: 1920800 Data size: 717255532 Basic > stats: COMPLETE Column stats: NONE > value expressions: cd_demo_sk (type: int), > cd_marital_status (type: string) > Execution mode: vectorized, llap > LLAP IO: all inputs > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13871) Tez exec summary does not get the HIVE counters right
[ https://issues.apache.org/jira/browse/HIVE-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13871: --- Target Version/s: 2.2.0 (was: 2.1.0) > Tez exec summary does not get the HIVE counters right > - > > Key: HIVE-13871 > URL: https://issues.apache.org/jira/browse/HIVE-13871 > Project: Hive > Issue Type: Bug > Components: llap, Tez >Affects Versions: 2.1.0 >Reporter: Gopal V > > {code} > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - HIVE: > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -CREATED_FILES: 1 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > DESERIALIZE_ERRORS: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_IN_Map_1: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_IN_Map_4: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_IN_Map_5: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_IN_Map_6: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_IN_Map_7: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_OUT_0: 10 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Map_1: 418284 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Map_4: 27440 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Map_5: 365 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Map_6: 101 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Map_7: 48000 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Reducer_2: 10 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - Shuffle Errors: > {code} > However, the actual operator counters do indicate the total # of vectors. 
> {code} > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > TaskCounter_Map_1_INPUT_Map_4: > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > FIRST_EVENT_RECEIVED: 1 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > INPUT_RECORDS_PROCESSED: 27440 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13871) Tez exec summary does not get the HIVE counters right
[ https://issues.apache.org/jira/browse/HIVE-13871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308358#comment-15308358 ] Jesus Camacho Rodriguez commented on HIVE-13871: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > Tez exec summary does not get the HIVE counters right > - > > Key: HIVE-13871 > URL: https://issues.apache.org/jira/browse/HIVE-13871 > Project: Hive > Issue Type: Bug > Components: llap, Tez >Affects Versions: 2.1.0 >Reporter: Gopal V > > {code} > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - HIVE: > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -CREATED_FILES: 1 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > DESERIALIZE_ERRORS: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_IN_Map_1: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_IN_Map_4: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_IN_Map_5: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_IN_Map_6: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_IN_Map_7: 0 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) -RECORDS_OUT_0: 10 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Map_1: 418284 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Map_4: 27440 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Map_5: 365 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Map_6: 101 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Map_7: 48000 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > RECORDS_OUT_INTERMEDIATE_Reducer_2: 10 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - Shuffle Errors: > 
{code} > However, the actual operator counters do indicate the total # of vectors. > {code} > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > TaskCounter_Map_1_INPUT_Map_4: > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > FIRST_EVENT_RECEIVED: 1 > 2016-05-26T21:59:51,421 INFO [main]: exec.Task (:()) - > INPUT_RECORDS_PROCESSED: 27440 > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13862) org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter falls back to ORM
[ https://issues.apache.org/jira/browse/HIVE-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308361#comment-15308361 ] Jesus Camacho Rodriguez commented on HIVE-13862: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter > falls back to ORM > --- > > Key: HIVE-13862 > URL: https://issues.apache.org/jira/browse/HIVE-13862 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Amareshwari Sriramadasu >Assignee: Rajat Khandelwal > Fix For: 2.1.0 > > Attachments: HIVE-13862.1.patch, HIVE-13862.patch > > > We are seeing the following exception, and the calls fall back to ORM, which makes them > costly: > {noformat} > WARN org.apache.hadoop.hive.metastore.ObjectStore - Direct SQL failed, > falling back to ORM > java.lang.ClassCastException: > org.datanucleus.store.rdbms.query.ForwardQueryResult cannot be cast to > java.lang.Number > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.extractSqlInt(MetaStoreDirectSql.java:892) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilterInternal(MetaStoreDirectSql.java:855) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter(MetaStoreDirectSql.java:405) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.ObjectStore$5.getSqlResult(ObjectStore.java:2763) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.ObjectStore$5.getSqlResult(ObjectStore.java:2755) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2606) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.ObjectStore.getNumPartitionsByFilterInternal(ObjectStore.java:2770) > [hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.ObjectStore.getNumPartitionsByFilter(ObjectStore.java:2746) > [hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
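The ClassCastException above indicates the direct-SQL path casts the raw query result to Number, while DataNucleus hands back a ForwardQueryResult (a List) wrapping the value. A minimal sketch of a defensive extraction, assuming the aggregate may arrive either bare or wrapped in a single-element list; the helper name and behavior are illustrative, not the actual Hive fix:

```java
import java.util.Arrays;
import java.util.List;

public class SqlIntExtract {
    // Unwrap a possible query-result List (ForwardQueryResult implements List)
    // before casting to Number, instead of casting the raw result object.
    static int extractSqlInt(Object result) {
        Object value = (result instanceof List) ? ((List<?>) result).get(0) : result;
        return ((Number) value).intValue();
    }

    public static void main(String[] args) {
        // A COUNT(*) aggregate may come back wrapped in a single-element list.
        System.out.println(extractSqlInt(Arrays.asList(42L))); // 42
        System.out.println(extractSqlInt(7));                  // 7
    }
}
```

With the unwrap in place, the direct-SQL count succeeds instead of throwing and falling back to the slower ORM path.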
[jira] [Updated] (HIVE-13847) Avoid file open call in RecordReaderUtils as the stream is already available
[ https://issues.apache.org/jira/browse/HIVE-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13847: --- Target Version/s: 2.2.0 (was: 2.1.0) > Avoid file open call in RecordReaderUtils as the stream is already available > > > Key: HIVE-13847 > URL: https://issues.apache.org/jira/browse/HIVE-13847 > Project: Hive > Issue Type: Improvement > Components: ORC >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HIVE-13847.1.patch > > > File open call in RecordReaderUtils::readRowIndex can be avoided. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13862) org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter falls back to ORM
[ https://issues.apache.org/jira/browse/HIVE-13862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13862: --- Target Version/s: 2.2.0 (was: 2.1.0) > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter > falls back to ORM > --- > > Key: HIVE-13862 > URL: https://issues.apache.org/jira/browse/HIVE-13862 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Amareshwari Sriramadasu >Assignee: Rajat Khandelwal > Fix For: 2.1.0 > > Attachments: HIVE-13862.1.patch, HIVE-13862.patch > > > We are seeing the following exception, and the calls fall back to ORM, which makes them > costly: > {noformat} > WARN org.apache.hadoop.hive.metastore.ObjectStore - Direct SQL failed, > falling back to ORM > java.lang.ClassCastException: > org.datanucleus.store.rdbms.query.ForwardQueryResult cannot be cast to > java.lang.Number > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.extractSqlInt(MetaStoreDirectSql.java:892) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilterInternal(MetaStoreDirectSql.java:855) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getNumPartitionsViaSqlFilter(MetaStoreDirectSql.java:405) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.ObjectStore$5.getSqlResult(ObjectStore.java:2763) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.ObjectStore$5.getSqlResult(ObjectStore.java:2755) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2606) > ~[hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.ObjectStore.getNumPartitionsByFilterInternal(ObjectStore.java:2770) > [hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > at > org.apache.hadoop.hive.metastore.ObjectStore.getNumPartitionsByFilter(ObjectStore.java:2746) > [hive-exec-2.1.2-inm-SNAPSHOT.jar:2.1.2-inm-SNAPSHOT] > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13842) Expose ability to set number of connections in the pool in TxnHandler
[ https://issues.apache.org/jira/browse/HIVE-13842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13842: --- Target Version/s: 1.3.0, 2.2.0 (was: 1.3.0, 2.1.0) > Expose ability to set number of connections in the pool in TxnHandler > - > > Key: HIVE-13842 > URL: https://issues.apache.org/jira/browse/HIVE-13842 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman > > The current defaults are hardcoded to 8/10 for dbcp/bonecp -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13842) Expose ability to set number of connections in the pool in TxnHandler
[ https://issues.apache.org/jira/browse/HIVE-13842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308364#comment-15308364 ] Jesus Camacho Rodriguez commented on HIVE-13842: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > Expose ability to set number of connections in the pool in TxnHandler > - > > Key: HIVE-13842 > URL: https://issues.apache.org/jira/browse/HIVE-13842 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman > > The current defaults are hardcoded to 8/10 for dbcp/bonecp -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13833) Add an initial delay when starting the heartbeat
[ https://issues.apache.org/jira/browse/HIVE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308366#comment-15308366 ] Jesus Camacho Rodriguez commented on HIVE-13833: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > Add an initial delay when starting the heartbeat > > > Key: HIVE-13833 > URL: https://issues.apache.org/jira/browse/HIVE-13833 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0, 2.1.0 >Reporter: Wei Zheng >Assignee: Wei Zheng >Priority: Minor > Attachments: HIVE-13833.1.patch > > > Since the heartbeat is scheduled immediately after lock acquisition, > there is no need to send a heartbeat at the moment the locks are acquired. Add an > initial delay to skip this first beat. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
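The proposed change can be sketched with a scheduled executor: pass the heartbeat interval as the initial delay instead of zero, so the first beat fires one full interval after the locks are acquired. This is an illustrative sketch, not the patch's actual code:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class HeartbeatDemo {
    // Schedule the first heartbeat one full interval after lock acquisition
    // (initialDelay = interval) rather than immediately (initialDelay = 0):
    // the locks were just acquired, so an immediate beat adds nothing.
    static void startHeartbeat(ScheduledExecutorService pool, Runnable beat, long intervalMs) {
        pool.scheduleAtFixedRate(beat, intervalMs, intervalMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger beats = new AtomicInteger();
        startHeartbeat(pool, beats::incrementAndGet, 300);
        Thread.sleep(150);
        // Half an interval in, no heartbeat has fired yet thanks to the delay.
        System.out.println("beats after half an interval: " + beats.get());
        pool.shutdownNow();
    }
}
```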
[jira] [Commented] (HIVE-13847) Avoid file open call in RecordReaderUtils as the stream is already available
[ https://issues.apache.org/jira/browse/HIVE-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308362#comment-15308362 ] Jesus Camacho Rodriguez commented on HIVE-13847: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > Avoid file open call in RecordReaderUtils as the stream is already available > > > Key: HIVE-13847 > URL: https://issues.apache.org/jira/browse/HIVE-13847 > Project: Hive > Issue Type: Improvement > Components: ORC >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HIVE-13847.1.patch > > > File open call in RecordReaderUtils::readRowIndex can be avoided. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13833) Add an initial delay when starting the heartbeat
[ https://issues.apache.org/jira/browse/HIVE-13833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13833: --- Target Version/s: 2.0.0, 2.2.0 (was: 2.0.0, 2.1.0) > Add an initial delay when starting the heartbeat > > > Key: HIVE-13833 > URL: https://issues.apache.org/jira/browse/HIVE-13833 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.0.0, 2.1.0 >Reporter: Wei Zheng >Assignee: Wei Zheng >Priority: Minor > Attachments: HIVE-13833.1.patch > > > Since the heartbeat is scheduled immediately after lock acquisition, > there is no need to send a heartbeat at the moment the locks are acquired. Add an > initial delay to skip this first beat. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13827) LLAPIF: authentication on the output channel
[ https://issues.apache.org/jira/browse/HIVE-13827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308371#comment-15308371 ] Jesus Camacho Rodriguez commented on HIVE-13827: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > LLAPIF: authentication on the output channel > > > Key: HIVE-13827 > URL: https://issues.apache.org/jira/browse/HIVE-13827 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > The current thinking is that we'd send the token. There's no protocol on the > channel right now. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13822) TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot parse COLUMN_STATS
[ https://issues.apache.org/jira/browse/HIVE-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308373#comment-15308373 ] Jesus Camacho Rodriguez commented on HIVE-13822: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot > parse COLUMN_STATS > -- > > Key: HIVE-13822 > URL: https://issues.apache.org/jira/browse/HIVE-13822 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > > Thanks to [~jcamachorodriguez] for uncovering this issue as part of > HIVE-13269. StatsSetupConst.areColumnStatsUptoDate() is used to check whether > stats are up-to-date. In case of PerfCliDriver, ‘false’ (thus, not > up-to-date) is returned and the following debug message in the logs: > {code} > In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in > StatsSetupConst) > {code} > Looks like the issue started happening after HIVE-12261 went in. > The fix would be to replace > {color:red}COLUMN_STATS_ACCURATE,true{color} > with > {color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color} > where key, value are the column names. > in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13828) Enable hive.orc.splits.include.file.footer by default
[ https://issues.apache.org/jira/browse/HIVE-13828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308368#comment-15308368 ] Jesus Camacho Rodriguez commented on HIVE-13828: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > Enable hive.orc.splits.include.file.footer by default > - > > Key: HIVE-13828 > URL: https://issues.apache.org/jira/browse/HIVE-13828 > Project: Hive > Issue Type: Improvement > Components: ORC >Reporter: Rajesh Balamohan >Priority: Minor > > As part of setting up OrcInputFormat.getRecordReader on the task side, > Hive ends up opening the file and reading its metadata. If > hive.orc.splits.include.file.footer=true, this metadata can be passed to the > task side, which helps reduce the overhead. It would be good to > consider enabling this parameter by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13828) Enable hive.orc.splits.include.file.footer by default
[ https://issues.apache.org/jira/browse/HIVE-13828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13828: --- Target Version/s: 2.2.0 (was: 2.1.0) > Enable hive.orc.splits.include.file.footer by default > - > > Key: HIVE-13828 > URL: https://issues.apache.org/jira/browse/HIVE-13828 > Project: Hive > Issue Type: Improvement > Components: ORC >Reporter: Rajesh Balamohan >Priority: Minor > > As part of setting up OrcInputFormat.getRecordReader on the task side, > Hive ends up opening the file and reading its metadata. If > hive.orc.splits.include.file.footer=true, this metadata can be passed to the > task side, which helps reduce the overhead. It would be good to > consider enabling this parameter by default. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13870) Decimal vector is not resized correctly
[ https://issues.apache.org/jira/browse/HIVE-13870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-13870: Resolution: Fixed Fix Version/s: 2.0.2 Status: Resolved (was: Patch Available) Committed to branches. > Decimal vector is not resized correctly > --- > > Key: HIVE-13870 > URL: https://issues.apache.org/jira/browse/HIVE-13870 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Fix For: 2.0.2, 2.1.0 > > Attachments: HIVE-13870.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13822) TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot parse COLUMN_STATS
[ https://issues.apache.org/jira/browse/HIVE-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13822: --- Target Version/s: 2.2.0 (was: 2.1.0) > TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot > parse COLUMN_STATS > -- > > Key: HIVE-13822 > URL: https://issues.apache.org/jira/browse/HIVE-13822 > Project: Hive > Issue Type: Bug >Reporter: Hari Sankar Sivarama Subramaniyan >Assignee: Hari Sankar Sivarama Subramaniyan > > Thanks to [~jcamachorodriguez] for uncovering this issue as part of > HIVE-13269. StatsSetupConst.areColumnStatsUptoDate() is used to check whether > stats are up-to-date. In case of PerfCliDriver, ‘false’ (thus, not > up-to-date) is returned and the following debug message in the logs: > {code} > In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in > StatsSetupConst) > {code} > Looks like the issue started happening after HIVE-12261 went in. > The fix would be to replace > {color:red}COLUMN_STATS_ACCURATE,true{color} > with > {color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color} > where key, value are the column names. > in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13827) LLAPIF: authentication on the output channel
[ https://issues.apache.org/jira/browse/HIVE-13827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13827: --- Target Version/s: 2.2.0 (was: 2.1.0) > LLAPIF: authentication on the output channel > > > Key: HIVE-13827 > URL: https://issues.apache.org/jira/browse/HIVE-13827 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > The current thinking is that we'd send the token. There's no protocol on the > channel right now. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13798) Fix the unit test failure org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
[ https://issues.apache.org/jira/browse/HIVE-13798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308379#comment-15308379 ] Jesus Camacho Rodriguez commented on HIVE-13798: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > Fix the unit test failure > org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload > > > Key: HIVE-13798 > URL: https://issues.apache.org/jira/browse/HIVE-13798 > Project: Hive > Issue Type: Sub-task >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13798.2.patch, HIVE-13798.3.patch, > HIVE-13798.4.patch, HIVE-13798.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13798) Fix the unit test failure org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
[ https://issues.apache.org/jira/browse/HIVE-13798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13798: --- Target Version/s: 2.2.0 (was: 2.1.0) > Fix the unit test failure > org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload > > > Key: HIVE-13798 > URL: https://issues.apache.org/jira/browse/HIVE-13798 > Project: Hive > Issue Type: Sub-task >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-13798.2.patch, HIVE-13798.3.patch, > HIVE-13798.4.patch, HIVE-13798.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13809) Hybrid Grace Hash Join memory usage estimation didn't take into account the bloom filter size
[ https://issues.apache.org/jira/browse/HIVE-13809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308377#comment-15308377 ] Jesus Camacho Rodriguez commented on HIVE-13809: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > Hybrid Grace Hash Join memory usage estimation didn't take into account the > bloom filter size > - > > Key: HIVE-13809 > URL: https://issues.apache.org/jira/browse/HIVE-13809 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0, 2.1.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > > Memory estimation is important during hash table loading, because we need to > decide whether to load the next hash partition into memory or > spill it. If we assume there is enough memory but that turns out not to be the > case, we will run into an OOM problem. > Currently, the hybrid grace hash join memory usage estimation does not take into > account the bloom filter size. In large test cases (TB scale) the bloom > filter grows to hundreds of MB, big enough to cause estimation errors. > The solution is to include the bloom filter size in the memory estimation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
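The estimation error described above can be sketched with a toy fit-check that adds the bloom filter's footprint to the running total before admitting the next partition. Names and numbers are illustrative, not Hive's actual accounting:

```java
public class HybridHashMemEstimate {
    // Include the bloom filter's bytes when deciding whether the next hash
    // partition still fits in the memory budget; ignoring it can admit a
    // partition that actually overflows the limit (OOM).
    static boolean nextPartitionFits(long partitionBytes, long bloomFilterBytes,
                                     long usedBytes, long memoryLimitBytes) {
        return usedBytes + bloomFilterBytes + partitionBytes <= memoryLimitBytes;
    }

    public static void main(String[] args) {
        // With a 1 GiB budget and 400 MiB already used, a 500 MiB partition
        // "fits" only if the 300 MiB bloom filter is left out of the estimate.
        System.out.println(nextPartitionFits(500L << 20, 0L, 400L << 20, 1L << 30));         // true
        System.out.println(nextPartitionFits(500L << 20, 300L << 20, 400L << 20, 1L << 30)); // false
    }
}
```

The second call shows the corrected decision: once the bloom filter is counted, the partition must be spilled rather than loaded.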
[jira] [Updated] (HIVE-13809) Hybrid Grace Hash Join memory usage estimation didn't take into account the bloom filter size
[ https://issues.apache.org/jira/browse/HIVE-13809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13809: --- Target Version/s: 2.0.0, 2.2.0 (was: 2.0.0, 2.1.0) > Hybrid Grace Hash Join memory usage estimation didn't take into account the > bloom filter size > - > > Key: HIVE-13809 > URL: https://issues.apache.org/jira/browse/HIVE-13809 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.0.0, 2.1.0 >Reporter: Wei Zheng >Assignee: Wei Zheng > > Memory estimation is important during hash table loading, because we need to > decide whether to load the next hash partition into memory or > spill it. If we assume there is enough memory but that turns out not to be the > case, we will run into an OOM problem. > Currently, the hybrid grace hash join memory usage estimation does not take into > account the bloom filter size. In large test cases (TB scale) the bloom > filter grows to hundreds of MB, big enough to cause estimation errors. > The solution is to include the bloom filter size in the memory estimation. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13771) LLAPIF: generate app ID
[ https://issues.apache.org/jira/browse/HIVE-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13771: --- Target Version/s: 2.2.0 (was: 2.1.0) > LLAPIF: generate app ID > --- > > Key: HIVE-13771 > URL: https://issues.apache.org/jira/browse/HIVE-13771 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13771.patch > > > See comments in the HIVE-13675 patch. The uniqueness needs to be ensured; the > user may be allowed to supply a prefix (e.g. his YARN app Id, if any) for > ease of tracking -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13771) LLAPIF: generate app ID
[ https://issues.apache.org/jira/browse/HIVE-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308380#comment-15308380 ] Jesus Camacho Rodriguez commented on HIVE-13771: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > LLAPIF: generate app ID > --- > > Key: HIVE-13771 > URL: https://issues.apache.org/jira/browse/HIVE-13771 > Project: Hive > Issue Type: Sub-task >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-13771.patch > > > See comments in the HIVE-13675 patch. The uniqueness needs to be ensured; the > user may be allowed to supply a prefix (e.g. his YARN app Id, if any) for > ease of tracking -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13751) LlapOutputFormatService should have a configurable send buffer size
[ https://issues.apache.org/jira/browse/HIVE-13751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-13751: --- Target Version/s: 2.2.0 (was: 2.1.0) > LlapOutputFormatService should have a configurable send buffer size > --- > > Key: HIVE-13751 > URL: https://issues.apache.org/jira/browse/HIVE-13751 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13751.1.patch, HIVE-13751.2.patch, > HIVE-13751.3.patch > > > Netty channel buffer size is hard-coded 128KB now. It should be made > configurable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13751) LlapOutputFormatService should have a configurable send buffer size
[ https://issues.apache.org/jira/browse/HIVE-13751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308381#comment-15308381 ] Jesus Camacho Rodriguez commented on HIVE-13751: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > LlapOutputFormatService should have a configurable send buffer size > --- > > Key: HIVE-13751 > URL: https://issues.apache.org/jira/browse/HIVE-13751 > Project: Hive > Issue Type: Bug > Components: llap >Affects Versions: 2.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-13751.1.patch, HIVE-13751.2.patch, > HIVE-13751.3.patch > > > Netty channel buffer size is hard-coded 128KB now. It should be made > configurable. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
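Making the hard-coded 128KB buffer configurable can be sketched as a config lookup with the old constant as the fallback. The property name here is illustrative, not necessarily the one the patch introduces:

```java
import java.util.Collections;
import java.util.Map;

public class SendBufferConfig {
    // Previous behavior: the Netty channel buffer was hard-coded to 128KB.
    static final int DEFAULT_SEND_BUFFER_BYTES = 128 * 1024;

    // Resolve the send buffer size from configuration, falling back to the
    // old hard-coded default when the property is unset.
    static int sendBufferBytes(Map<String, String> conf) {
        String v = conf.get("hive.llap.daemon.output.service.send.buffer.size");
        return (v == null) ? DEFAULT_SEND_BUFFER_BYTES : Integer.parseInt(v);
    }

    public static void main(String[] args) {
        System.out.println(sendBufferBytes(Collections.emptyMap()));             // 131072 (the old default)
        System.out.println(sendBufferBytes(Collections.singletonMap(
                "hive.llap.daemon.output.service.send.buffer.size", "262144"))); // 262144
    }
}
```

The resolved value would then be handed to the Netty bootstrap when the output service channel is set up, instead of the literal 128KB.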
[jira] [Commented] (HIVE-13631) Support index in HBase Metastore
[ https://issues.apache.org/jira/browse/HIVE-13631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308384#comment-15308384 ] Jesus Camacho Rodriguez commented on HIVE-13631: Removing 2.1.0 target as issue is not tagged as Critical/Blocker and the RC will be created tomorrow. Please feel free to commit to branch-2.1 anyway and fix for 2.1.0 if this happens before the release. > Support index in HBase Metastore > > > Key: HIVE-13631 > URL: https://issues.apache.org/jira/browse/HIVE-13631 > Project: Hive > Issue Type: Improvement > Components: HBase Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai > Attachments: HIVE-13631.1-nogen.patch, HIVE-13631.1.patch, > HIVE-13631.2-nogen.patch, HIVE-13631.2.patch > > > Currently all index related methods in HBaseStore is not implemented. We need > to add those missing methods and index support in hbaseimport tool. -- This message was sent by Atlassian JIRA (v6.3.4#6332)