[jira] [Commented] (HIVE-6109) Support customized location for EXTERNAL tables created by Dynamic Partitioning
[ https://issues.apache.org/jira/browse/HIVE-6109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1392#comment-1392 ] Satish Mittal commented on HIVE-6109: - You could use the following details: A new job conf property, hcat.dynamic.partitioning.custom.pattern, is introduced that can be configured to provide a custom path pattern in the case of dynamic partitioning. E.g. suppose a table user_logs is partitioned by (year, month, day, hour, minute, country). If the user wants data for dynamic partitions to be generated in the following location format: hdfs://hcat/data/user_logs/2013/12/06/10/US, then this property can be set to: ${year}/${month}/${day}/${hour}/${minute}/${country}. Support customized location for EXTERNAL tables created by Dynamic Partitioning --- Key: HIVE-6109 URL: https://issues.apache.org/jira/browse/HIVE-6109 Project: Hive Issue Type: Improvement Components: HCatalog Reporter: Satish Mittal Assignee: Satish Mittal Fix For: 0.13.0 Attachments: HIVE-6109.1.patch.txt, HIVE-6109.2.patch.txt, HIVE-6109.3.patch.txt, HIVE-6109.pdf Currently when dynamic partitions are created by HCatalog, the underlying directories for the partitions are created in a fixed 'Hive-style' format, i.e. root_dir/key1=value1/key2=value2/ and so on. However, in the case of an external table, the user should be able to control the format of the directories created for dynamic partitions. -- This message was sent by Atlassian JIRA (v6.2#6252)
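The path substitution implied by hcat.dynamic.partitioning.custom.pattern can be sketched as follows. This is a minimal illustration, not HCatalog's actual implementation, and the function name is hypothetical:

```python
# Sketch: expand a custom dynamic-partition path pattern such as
# "${year}/${month}/..." by substituting each ${key} placeholder
# with the corresponding dynamic-partition value.
import re

def expand_pattern(pattern, partition_values):
    """Replace every ${key} placeholder with its partition value."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: str(partition_values[m.group(1)]),
                  pattern)

values = {"year": 2013, "month": 12, "day": "06",
          "hour": 10, "minute": 30, "country": "US"}
path = expand_pattern("${year}/${month}/${day}/${hour}/${minute}/${country}",
                      values)
# path == "2013/12/06/10/30/US"
```

The expanded string would then be appended to the external table's root location to form each partition directory.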
[jira] [Commented] (HIVE-6579) HiveLockObjectData constructor makes too many queryStr instances causing OOM
[ https://issues.apache.org/jira/browse/HIVE-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930012#comment-13930012 ] xieyuchen commented on HIVE-6579: - [~xuefuz] thanks for your reply. Apart from the testing code, there are actually 2 callers of this constructor: o.a.h.h.ql.Driver.acquireReadWriteLocks and o.a.h.h.ql.exec.DDLTask.lockTable. The first situation will be fixed in this patch. I think the second one will not cause an OOM. But if we have to keep HiveLockObjectData.queryStr trimmed, we can keep the trimming call in the constructor. HiveLockObjectData constructor makes too many queryStr instances causing OOM --- Key: HIVE-6579 URL: https://issues.apache.org/jira/browse/HIVE-6579 Project: Hive Issue Type: Improvement Reporter: xieyuchen Attachments: HIVE-6579.1.patch.txt We have a huge SQL query which full outer joins 10+ partitioned tables, each of which has at least 1k partitions. The query is 300KB in length (it is constructed automatically, of course). So when we run this query, there are over 10k HiveLockObjectData instances. Because the HiveLockObjectData constructor trims the queryStr, there will be 10k individual String instances, each 300KB in length! The Hive client then gets an OOM exception. The fix trims the queryStr in the Driver.compile function instead of in the HiveLockObjectData constructor, to reduce wasted memory. -- This message was sent by Atlassian JIRA (v6.2#6252)
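The memory problem and the proposed fix can be illustrated with a small sketch. These are Python stand-ins for the Java classes; the class names are hypothetical, not Hive's:

```python
class LockDataPerInstanceTrim:
    """Mimics the old behavior: each lock object trims (copies) the query string."""
    def __init__(self, query_str):
        # strip() allocates a fresh string per instance when there is
        # surrounding whitespace -> N copies of a ~300 KB query.
        self.query_str = query_str.strip()

class LockDataSharedTrim:
    """Mimics the fix: the caller trims once; instances share one string."""
    def __init__(self, trimmed_query_str):
        self.query_str = trimmed_query_str

query = " " + ("x" * 300_000) + " "   # a ~300 KB query with stray whitespace

# Old path: one trimmed copy per lock object.
locks = [LockDataPerInstanceTrim(query) for _ in range(100)]
distinct = len({id(l.query_str) for l in locks})          # 100 live copies

# Fixed path: trim once up front (as in Driver.compile), share the result.
trimmed = query.strip()
shared = [LockDataSharedTrim(trimmed) for _ in range(100)]
shared_distinct = len({id(l.query_str) for l in shared})  # 1 shared object
```

With 10k lock objects and a 300KB query, the first pattern holds roughly 3GB of duplicate strings; the second holds one.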
[jira] [Updated] (HIVE-6434) Restrict function create/drop to admin roles
[ https://issues.apache.org/jira/browse/HIVE-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6434: Status: Open (was: Patch Available) Restrict function create/drop to admin roles Key: HIVE-6434 URL: https://issues.apache.org/jira/browse/HIVE-6434 Project: Hive Issue Type: Sub-task Components: Authorization, UDF Reporter: Jason Dere Assignee: Jason Dere Attachments: HIVE-6434.1.patch, HIVE-6434.2.patch, HIVE-6434.3.patch, HIVE-6434.4.patch, HIVE-6434.5.patch, HIVE-6434.6.patch Restrict function create/drop to admin roles, if sql std auth is enabled. This would include temp/permanent functions, as well as macros. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6434) Restrict function create/drop to admin roles
[ https://issues.apache.org/jira/browse/HIVE-6434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6434: Status: Patch Available (was: Open) Jira needs to be made patch available again for pre-commit tests to pick them up. Restrict function create/drop to admin roles Key: HIVE-6434 URL: https://issues.apache.org/jira/browse/HIVE-6434 Project: Hive Issue Type: Sub-task Components: Authorization, UDF Reporter: Jason Dere Assignee: Jason Dere Attachments: HIVE-6434.1.patch, HIVE-6434.2.patch, HIVE-6434.3.patch, HIVE-6434.4.patch, HIVE-6434.5.patch, HIVE-6434.6.patch Restrict function create/drop to admin roles, if sql std auth is enabled. This would include temp/permanent functions, as well as macros. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6602) Multi-user HiveServer2 throws error
[ https://issues.apache.org/jira/browse/HIVE-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930099#comment-13930099 ] Hive QA commented on HIVE-6602: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633685/HIVE-6602.1.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5376 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16 {noformat} Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1699/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1699/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12633685 Multi-user HiveServer2 throws error --- Key: HIVE-6602 URL: https://issues.apache.org/jira/browse/HIVE-6602 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6602.1.patch Error thrown: Error while processing statement: FAILED: RuntimeException org.apache.hadoop.security.AccessControlException: Permission denied: user=user_1, access=WRITE, inode=/tmp/hive-hive:hdfs:drwxr-xr-x For hive query execution, a scratch directory specified by hive.exec.scratchdir is created with default permission 700. In HiveServer2, during the CLIService startup, we check for the presence of scratch directories (local + dfs) and if they don't exist, create them with permission 777. However, we should also change the permission from the default 700 to 777 in case the dfs scratch directory already exists. 
-- This message was sent by Atlassian JIRA (v6.2#6252)
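The repair described above (create the scratch directory with permission 777, and also chmod a pre-existing 700 directory) can be sketched as follows. This is a minimal POSIX Python illustration; HiveServer2's actual code is Java using Hadoop FileSystem APIs, and ensure_scratch_dir is a hypothetical name:

```python
import os
import stat
import tempfile

def ensure_scratch_dir(path, mode=0o777):
    """Create the scratch dir with the given mode, or repair the mode
    if the directory already exists (the case HIVE-6602 addresses)."""
    if not os.path.exists(path):
        os.makedirs(path)
    os.chmod(path, mode)   # chmod sets the mode exactly, also on a pre-existing dir
    return stat.S_IMODE(os.stat(path).st_mode)

root = tempfile.mkdtemp()
scratch = os.path.join(root, "scratch")
os.makedirs(scratch, mode=0o700)       # simulate a pre-existing 700 scratch dir
mode = ensure_scratch_dir(scratch)     # repaired to 0o777
```

The key point matches the JIRA: checking only for existence is not enough; an existing directory created earlier with the default 700 must have its permissions widened too.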
[jira] [Updated] (HIVE-6611) Joining multiple union all outputs fails on Tez
[ https://issues.apache.org/jira/browse/HIVE-6611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-6611: - Status: Patch Available (was: Open) Joining multiple union all outputs fails on Tez --- Key: HIVE-6611 URL: https://issues.apache.org/jira/browse/HIVE-6611 Project: Hive Issue Type: Bug Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Priority: Critical Attachments: HIVE-6611.1.patch Queries like: with u as (select * from src union all select * from src) select * from u join u; will fail on Tez because only one union flows into the join reduce phase. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6012) restore backward compatibility of arithmetic operations
[ https://issues.apache.org/jira/browse/HIVE-6012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6012: Status: Open (was: Patch Available) restore backward compatibility of arithmetic operations --- Key: HIVE-6012 URL: https://issues.apache.org/jira/browse/HIVE-6012 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.13.0 Reporter: Thejas M Nair Assignee: Jason Dere Attachments: HIVE-6012.1.patch, HIVE-6012.2.patch, HIVE-6012.3.patch, HIVE-6012.4.patch, HIVE-6012.5.patch, HIVE-6012.6.patch HIVE-5356 changed the behavior of some of the arithmetic operations, and the change is not backward compatible, as pointed out in this [jira comment|https://issues.apache.org/jira/browse/HIVE-5356?focusedCommentId=13813398&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13813398] {code} int / int = decimal float / float = double float * float = double float + float = double {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6012) restore backward compatibility of arithmetic operations
[ https://issues.apache.org/jira/browse/HIVE-6012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6012: Status: Patch Available (was: Open) restore backward compatibility of arithmetic operations --- Key: HIVE-6012 URL: https://issues.apache.org/jira/browse/HIVE-6012 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.13.0 Reporter: Thejas M Nair Assignee: Jason Dere Attachments: HIVE-6012.1.patch, HIVE-6012.2.patch, HIVE-6012.3.patch, HIVE-6012.4.patch, HIVE-6012.5.patch, HIVE-6012.6.patch HIVE-5356 changed the behavior of some of the arithmetic operations, and the change is not backward compatible, as pointed out in this [jira comment|https://issues.apache.org/jira/browse/HIVE-5356?focusedCommentId=13813398&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13813398] {code} int / int = decimal float / float = double float * float = double float + float = double {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
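As an illustration of the compatibility gap, here is a toy result-type resolver (not Hive's actual type-inference code) contrasting the HIVE-5356 result types quoted above with the pre-0.13 types the patch aims to restore. The "restored" column is an assumption based on Hive's historical behavior (integer division returning double, float arithmetic staying float):

```python
# Toy sketch of result-type resolution for same-type operands.
# result_type_after_5356 encodes the rules quoted in the JIRA;
# result_type_restored encodes the assumed pre-0.13 behavior.

def result_type_after_5356(lhs, op, rhs):
    if lhs == rhs == "int":
        return "decimal" if op == "/" else "int"
    if lhs == rhs == "float":
        return "double"          # float +,*,/ all widened to double
    return None                  # other combinations not modeled here

def result_type_restored(lhs, op, rhs):
    if lhs == rhs == "int":
        return "double" if op == "/" else "int"   # historical: int/int -> double
    if lhs == rhs == "float":
        return "float"           # historical: float arithmetic stays float
    return None
```

The backward-incompatibility is then just the difference between the two functions: e.g. a query relying on int/int producing a double would start seeing decimal under HIVE-5356.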
[jira] [Commented] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.
[ https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930182#comment-13930182 ] Thejas M Nair commented on HIVE-6486: - There is no server-side impact from this feature. The server side should be configured for 'secure(/kerberos)' mode. Support secure Subject.doAs() in HiveServer2 JDBC client. - Key: HIVE-6486 URL: https://issues.apache.org/jira/browse/HIVE-6486 Project: Hive Issue Type: Improvement Components: Authentication, HiveServer2, JDBC Affects Versions: 0.11.0, 0.12.0 Reporter: Shivaraju Gowda Assignee: Shivaraju Gowda Fix For: 0.13.0 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java HIVE-5155 addresses the problem of Kerberos authentication in a multi-user middleware server using a proxy user. In this mode, the principal used by the middleware server has privileges to impersonate selected users in Hive/Hadoop. This enhancement is to support Subject.doAs() authentication in the Hive JDBC layer, so that the end user's Kerberos Subject is passed through by the middleware server. With this improvement there won't be any additional setup in the server to grant proxy privileges to users, and there won't be a need to specify a proxy user in the JDBC client. This version should also be more secure, since it won't require principals with the privileges to impersonate other users in the Hive/Hadoop setup. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.
[ https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930182#comment-13930182 ] Thejas M Nair edited comment on HIVE-6486 at 3/11/14 10:18 AM: --- There is no server-side impact from this feature. The server side should be configured for 'secure(/kerberos)' mode. The server-side doc does not require changes. was (Author: thejas): There is no server-side impact from this feature. The server side should be configured for 'secure(/kerberos)' mode. Support secure Subject.doAs() in HiveServer2 JDBC client. - Key: HIVE-6486 URL: https://issues.apache.org/jira/browse/HIVE-6486 Project: Hive Issue Type: Improvement Components: Authentication, HiveServer2, JDBC Affects Versions: 0.11.0, 0.12.0 Reporter: Shivaraju Gowda Assignee: Shivaraju Gowda Fix For: 0.13.0 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, Hive_011_Support-Subject_doAS.patch, TestHive_SujectDoAs.java HIVE-5155 addresses the problem of Kerberos authentication in a multi-user middleware server using a proxy user. In this mode, the principal used by the middleware server has privileges to impersonate selected users in Hive/Hadoop. This enhancement is to support Subject.doAs() authentication in the Hive JDBC layer, so that the end user's Kerberos Subject is passed through by the middleware server. With this improvement there won't be any additional setup in the server to grant proxy privileges to users, and there won't be a need to specify a proxy user in the JDBC client. This version should also be more secure, since it won't require principals with the privileges to impersonate other users in the Hive/Hadoop setup. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5931) SQL std auth - add metastore get_principals_in_role api, support SHOW ROLE PRINCIPALS
[ https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5931: Attachment: HIVE-5931.nothrifgen.2.patch SQL std auth - add metastore get_principals_in_role api, support SHOW ROLE PRINCIPALS - Key: HIVE-5931 URL: https://issues.apache.org/jira/browse/HIVE-5931 Project: Hive Issue Type: Sub-task Components: Authorization Reporter: Thejas M Nair Attachments: HIVE-5931.1.patch, HIVE-5931.nothrifgen.1.patch, HIVE-5931.nothrifgen.2.patch, HIVE-5931.thriftapi.2.patch, HIVE-5931.thriftapi.3.patch, HIVE-5931.thriftapi.followup.patch, HIVE-5931.thriftapi.patch Original Estimate: 24h Remaining Estimate: 24h Support command for listing all members of a role. A new metastore api call also needs to be added for this. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5931) SQL std auth - add metastore get_principals_in_role api, support SHOW ROLE PRINCIPALS
[ https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5931: Attachment: HIVE-5931.2.patch HIVE-5931.*.2.patch - addressing review comments. Update syntax to 'show principals role_name' SQL std auth - add metastore get_principals_in_role api, support SHOW ROLE PRINCIPALS - Key: HIVE-5931 URL: https://issues.apache.org/jira/browse/HIVE-5931 Project: Hive Issue Type: Sub-task Components: Authorization Reporter: Thejas M Nair Attachments: HIVE-5931.1.patch, HIVE-5931.2.patch, HIVE-5931.nothrifgen.1.patch, HIVE-5931.nothrifgen.2.patch, HIVE-5931.thriftapi.2.patch, HIVE-5931.thriftapi.3.patch, HIVE-5931.thriftapi.followup.patch, HIVE-5931.thriftapi.patch Original Estimate: 24h Remaining Estimate: 24h Support command for listing all members of a role. A new metastore api call also needs to be added for this. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (HIVE-6600) Add Remus to Hive people list on credits page
[ https://issues.apache.org/jira/browse/HIVE-6600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Remus Rusanu resolved HIVE-6600. Resolution: Fixed Committed to hive-site r1576291. Add Remus to Hive people list on credits page - Key: HIVE-6600 URL: https://issues.apache.org/jira/browse/HIVE-6600 Project: Hive Issue Type: Task Components: Website Reporter: Remus Rusanu Assignee: Remus Rusanu Priority: Trivial Attachments: HIVE-6600.1.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5155) Support secure proxy user access to HiveServer2
[ https://issues.apache.org/jira/browse/HIVE-5155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930276#comment-13930276 ] Thejas M Nair commented on HIVE-5155: - +1 Support secure proxy user access to HiveServer2 --- Key: HIVE-5155 URL: https://issues.apache.org/jira/browse/HIVE-5155 Project: Hive Issue Type: Improvement Components: Authentication, HiveServer2, JDBC Affects Versions: 0.12.0 Reporter: Prasad Mujumdar Assignee: Prasad Mujumdar Attachments: HIVE-5155-1-nothrift.patch, HIVE-5155-noThrift.2.patch, HIVE-5155-noThrift.4.patch, HIVE-5155-noThrift.5.patch, HIVE-5155-noThrift.6.patch, HIVE-5155-noThrift.7.patch, HIVE-5155-noThrift.8.patch, HIVE-5155.1.patch, HIVE-5155.2.patch, HIVE-5155.3.patch, HIVE-5155.4.patch, HIVE-5155.5.patch, ProxyAuth.java, ProxyAuth.out, TestKERBEROS_Hive_JDBC.java HiveServer2 can authenticate a client via Kerberos and impersonate the connecting user with the underlying secure Hadoop. This becomes a gateway for a remote client to access a secure Hadoop cluster. This works fine when the client obtains a Kerberos ticket and directly connects to HiveServer2. There's another big use case for middleware tools where the end user wants to access Hive via another server. For example, an Oozie action, Hue submitting queries, or a BI tool server accessing HiveServer2. In these cases, the third-party server doesn't have the end user's Kerberos credentials and hence can't submit queries to HiveServer2 on behalf of the end user. This ticket is for enabling proxy access to HiveServer2 for third-party tools on behalf of end users. There are two parts to the solution proposed in this ticket: 1) Delegation-token-based connection for Oozie (OOZIE-1457). This is the common mechanism for Hadoop ecosystem components. The Hive Remote Metastore and HCatalog already support this. This is suitable for a tool like Oozie that submits MR jobs as actions on behalf of its client. 
Oozie already uses a similar mechanism for Metastore/HCatalog access. 2) Direct proxy access for privileged Hadoop users. The delegation token implementation can be a challenge for non-Hadoop (especially non-Java) components. This second part enables a privileged user to directly specify an alternate session user during the connection. If the connecting user has Hadoop-level privilege to impersonate the requested userid, then HiveServer2 will run the session as that requested user. For example, user Hue is allowed to impersonate user Bob (via core-site.xml proxy user configuration). User Hue can then connect to HiveServer2 and specify Bob as the session user via a session property. HiveServer2 will verify Hue's proxy user privilege and then impersonate user Bob instead of Hue. This enables any third-party tool to impersonate an alternate userid without having to implement a delegation token connection. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6512) HiveServer2 ThriftCLIServiceTest#testDoAs is an invalid test
[ https://issues.apache.org/jira/browse/HIVE-6512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930288#comment-13930288 ] Thejas M Nair commented on HIVE-6512: - +1 HiveServer2 ThriftCLIServiceTest#testDoAs is an invalid test Key: HIVE-6512 URL: https://issues.apache.org/jira/browse/HIVE-6512 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-6512.1.patch Basically the test tries to test a kerberos doAs which is invalid since it doesn't do a kerberos login and it's not possible to unit test a kerberos setup. Surprisingly it has been hanging around for a while. Needs to be removed from the test suite. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-4764) Support Kerberos HTTP authentication for HiveServer2 running in http mode
[ https://issues.apache.org/jira/browse/HIVE-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930295#comment-13930295 ] Thejas M Nair commented on HIVE-4764: - Ok, I will commit this after committing HIVE-6512, otherwise it will result in false alarms. Support Kerberos HTTP authentication for HiveServer2 running in http mode - Key: HIVE-4764 URL: https://issues.apache.org/jira/browse/HIVE-4764 Project: Hive Issue Type: Sub-task Components: HiveServer2 Affects Versions: 0.13.0 Reporter: Thejas M Nair Assignee: Vaibhav Gumashta Fix For: 0.13.0 Attachments: HIVE-4764.1.patch, HIVE-4764.2.patch, HIVE-4764.3.patch, HIVE-4764.4.patch Support Kerberos authentication for HiveServer2 running in http mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-5931) SQL std auth - add metastore get_principals_in_role api, support SHOW PRINCIPALS role_name
[ https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-5931: Summary: SQL std auth - add metastore get_principals_in_role api, support SHOW PRINCIPALS role_name (was: SQL std auth - add metastore get_principals_in_role api, support SHOW ROLE PRINCIPALS) SQL std auth - add metastore get_principals_in_role api, support SHOW PRINCIPALS role_name -- Key: HIVE-5931 URL: https://issues.apache.org/jira/browse/HIVE-5931 Project: Hive Issue Type: Sub-task Components: Authorization Reporter: Thejas M Nair Attachments: HIVE-5931.1.patch, HIVE-5931.2.patch, HIVE-5931.nothrifgen.1.patch, HIVE-5931.nothrifgen.2.patch, HIVE-5931.thriftapi.2.patch, HIVE-5931.thriftapi.3.patch, HIVE-5931.thriftapi.followup.patch, HIVE-5931.thriftapi.patch Original Estimate: 24h Remaining Estimate: 24h Support command for listing all members of a role. A new metastore api call also needs to be added for this. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization
[ https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Remus Rusanu updated HIVE-6594: --- Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Committed to trunk r1576317 UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization - Key: HIVE-6594 URL: https://issues.apache.org/jira/browse/HIVE-6594 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.13.0 Reporter: Remus Rusanu Assignee: Remus Rusanu Fix For: 0.14.0 Attachments: HIVE-6594.1.patch, HIVE-6594.2.patch Discovered this while investigating why my fix for HIVE-6222 produced diffs. I discovered that Decimal128.addDestructive does not adjust the internal count when the number of relevant ints increases. Since this count is used in the fast HiveDecimalWriter conversion code, the results are off. The root cause is that UnsignedDecimal128.differenceInternal does not do an updateCount() on the result. -- This message was sent by Atlassian JIRA (v6.2#6252)
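This bug class is easy to reproduce with a toy model (hypothetical code, not Hive's UnsignedInt128): a number stored as little-endian 32-bit words plus a count of significant words, where the serializer trusts the count, like the fast HiveDecimalWriter path does:

```python
class Words128:
    """Toy 128-bit unsigned integer: four 32-bit words + significant-word count."""

    def __init__(self, value):
        self.v = [(value >> (32 * i)) & 0xFFFFFFFF for i in range(4)]
        self.count = 0
        self.update_count()

    def update_count(self):
        # Index of the highest nonzero word, plus one.
        self.count = max((i + 1 for i, w in enumerate(self.v) if w), default=0)

    def to_int(self):
        return sum(w << (32 * i) for i, w in enumerate(self.v))

    def add_destructive(self, other, fix_count=True):
        total = self.to_int() + other.to_int()
        self.v = [(total >> (32 * i)) & 0xFFFFFFFF for i in range(4)]
        if fix_count:          # the missing updateCount() in the buggy path
            self.update_count()

    def serialize(self):
        # Trusts `count`: stale count -> high words silently dropped.
        return sum(self.v[i] << (32 * i) for i in range(self.count))

a = Words128(0xFFFFFFFF)                    # one significant word
a.add_destructive(Words128(1), fix_count=False)
corrupted = a.serialize()                   # count is stale (1): yields 0

b = Words128(0xFFFFFFFF)
b.add_destructive(Words128(1))              # count updated to 2
correct = b.serialize()                     # 0x100000000
```

The carry into a new word without a count update is exactly the situation where the serialized value is corrupted while the in-memory words are still right.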
[jira] [Updated] (HIVE-6602) Multi-user HiveServer2 throws error
[ https://issues.apache.org/jira/browse/HIVE-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair updated HIVE-6602: Resolution: Fixed Fix Version/s: (was: 0.13.0) 0.14.0 Status: Resolved (was: Patch Available) Patch committed to trunk. Thanks Vaibhav! Multi-user HiveServer2 throws error --- Key: HIVE-6602 URL: https://issues.apache.org/jira/browse/HIVE-6602 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.13.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.14.0 Attachments: HIVE-6602.1.patch Error thrown: Error while processing statement: FAILED: RuntimeException org.apache.hadoop.security.AccessControlException: Permission denied: user=user_1, access=WRITE, inode=/tmp/hive-hive:hdfs:drwxr-xr-x For hive query execution, a scratch directory specified by hive.exec.scratchdir is created with default permission 700. In HiveServer2, during the CLIService startup, we check for the presence of scratch directories (local + dfs) and if they don't exist, create them with permission 777. However, we should also change the permission from the default 700 to 777 in case the dfs scratch directory already exists. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6579) HiveLockObjectData constructor makes too many queryStr instances causing OOM
[ https://issues.apache.org/jira/browse/HIVE-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13930387#comment-13930387 ] Xuefu Zhang commented on HIVE-6579: --- Okay. You probably need to rebase your patch so that the test can run. HiveLockObjectData constructor makes too many queryStr instances causing OOM --- Key: HIVE-6579 URL: https://issues.apache.org/jira/browse/HIVE-6579 Project: Hive Issue Type: Improvement Reporter: xieyuchen Attachments: HIVE-6579.1.patch.txt We have a huge SQL query which full outer joins 10+ partitioned tables, each of which has at least 1k partitions. The query is 300KB in length (it is constructed automatically, of course). So when we run this query, there are over 10k HiveLockObjectData instances. Because the HiveLockObjectData constructor trims the queryStr, there will be 10k individual String instances, each 300KB in length! The Hive client then gets an OOM exception. The fix trims the queryStr in the Driver.compile function instead of in the HiveLockObjectData constructor, to reduce wasted memory. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6222) Make Vector Group By operator abandon grouping if too many distinct keys
[ https://issues.apache.org/jira/browse/HIVE-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Remus Rusanu updated HIVE-6222: --- Status: Open (was: Patch Available) Make Vector Group By operator abandon grouping if too many distinct keys Key: HIVE-6222 URL: https://issues.apache.org/jira/browse/HIVE-6222 Project: Hive Issue Type: Sub-task Components: Query Processor Affects Versions: 0.13.0 Reporter: Remus Rusanu Assignee: Remus Rusanu Priority: Minor Labels: vectorization Attachments: HIVE-6222.1.patch, HIVE-6222.2.patch Row mode GBY is becoming a pass-through if not enough aggregation occurs on the map side, relying on the shuffle+reduce side to do the work. Have VGBY do the same. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6222) Make Vector Group By operator abandon grouping if too many distinct keys
[ https://issues.apache.org/jira/browse/HIVE-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Remus Rusanu updated HIVE-6222: --- Attachment: HIVE-6222.3.patch Make Vector Group By operator abandon grouping if too many distinct keys Key: HIVE-6222 URL: https://issues.apache.org/jira/browse/HIVE-6222 Project: Hive Issue Type: Sub-task Components: Query Processor Affects Versions: 0.13.0 Reporter: Remus Rusanu Assignee: Remus Rusanu Priority: Minor Labels: vectorization Attachments: HIVE-6222.1.patch, HIVE-6222.2.patch, HIVE-6222.3.patch Row mode GBY is becoming a pass-through if not enough aggregation occurs on the map side, relying on the shuffle+reduce side to do the work. Have VGBY do the same. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6222) Make Vector Group By operator abandon grouping if too many distinct keys
[ https://issues.apache.org/jira/browse/HIVE-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Remus Rusanu updated HIVE-6222: --- Status: Patch Available (was: Open) The .3 patch addresses the test failures. An incorrect comparison in checkHashEfficiency was triggering a switch to streaming mode on the first row processed. While the fix addresses the problem, the results diff also showed that there are rounding diffs between hash mode (agg done using UnsignedInt128) and streaming mode (agg done on the reduce side, using HiveDecimal). This is similar to the issues HIVE-6511 exposed, and I'll open a separate JIRA to address it. Make Vector Group By operator abandon grouping if too many distinct keys Key: HIVE-6222 URL: https://issues.apache.org/jira/browse/HIVE-6222 Project: Hive Issue Type: Sub-task Components: Query Processor Affects Versions: 0.13.0 Reporter: Remus Rusanu Assignee: Remus Rusanu Priority: Minor Labels: vectorization Attachments: HIVE-6222.1.patch, HIVE-6222.2.patch, HIVE-6222.3.patch Row mode GBY is becoming a pass-through if not enough aggregation occurs on the map side, relying on the shuffle+reduce side to do the work. Have VGBY do the same. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6614) Vectorized aggregates computed on map side differ (hash mode) from values computed on reduce side (streaming mode)
Remus Rusanu created HIVE-6614: -- Summary: Vectorized aggregates computed on map side differ (hash mode) from values computed on reduce side (streaming mode) Key: HIVE-6614 URL: https://issues.apache.org/jira/browse/HIVE-6614 Project: Hive Issue Type: Bug Reporter: Remus Rusanu Assignee: Remus Rusanu Priority: Critical HIVE-6222 allows vectorized aggregates to operate in streaming mode, i.e. flush after each key change and let the shuffle+reduce side compute the final aggregate values. An error in patch .2 for HIVE-6222 shows that when queries run in streaming mode, there are rounding diffs for some agg functions (VAR and friends). These occurred for non-decimal types, like ctinyint: {code} select csmallint, VAR_POP(ctinyint) from alltypesorc where csmallint = -75 group by csmallint; {code} This produces 107.56 vs. 107.54. -- This message was sent by Atlassian JIRA (v6.2#6252)
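The kind of divergence described, the same aggregate yielding slightly different values under different intermediate arithmetic, can be reproduced with a small sketch. This is illustrative only: float32 vs float64 partial sums stand in for the UnsignedInt128 vs HiveDecimal paths, and the numbers are not from the JIRA query:

```python
# Sketch: VAR_POP over identical rows, computed with full-precision
# partial sums vs partial sums truncated to float32 after each step.
# Two execution paths with different intermediate precision disagree
# slightly, even though both are "correct" in their own arithmetic.
import struct

def var_pop(xs, round_partials=False):
    def maybe_f32(x):
        # Optionally round each running sum to float32, mimicking a
        # lower-precision intermediate representation.
        if round_partials:
            return struct.unpack("f", struct.pack("f", x))[0]
        return x
    n = len(xs)
    s = s2 = 0.0
    for x in xs:
        s = maybe_f32(s + x)
        s2 = maybe_f32(s2 + x * x)
    return s2 / n - (s / n) ** 2

data = [10.37, -5.2, 107.54, 3.3333] * 1000
hi_precision = var_pop(data)
lo_precision = var_pop(data, round_partials=True)
diff = abs(hi_precision - lo_precision)   # small but nonzero rounding difference
```

The fix direction implied by the JIRA is to make both paths use the same intermediate representation, rather than to tolerate the diff.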
[jira] [Commented] (HIVE-5931) SQL std auth - add metastore get_principals_in_role api, support SHOW PRINCIPALS role_name
[ https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930471#comment-13930471 ] Hive QA commented on HIVE-5931: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633684/HIVE-5931.1.patch {color:green}SUCCESS:{color} +1 5379 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1701/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1701/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12633684 SQL std auth - add metastore get_principals_in_role api, support SHOW PRINCIPALS role_name -- Key: HIVE-5931 URL: https://issues.apache.org/jira/browse/HIVE-5931 Project: Hive Issue Type: Sub-task Components: Authorization Reporter: Thejas M Nair Attachments: HIVE-5931.1.patch, HIVE-5931.2.patch, HIVE-5931.nothrifgen.1.patch, HIVE-5931.nothrifgen.2.patch, HIVE-5931.thriftapi.2.patch, HIVE-5931.thriftapi.3.patch, HIVE-5931.thriftapi.followup.patch, HIVE-5931.thriftapi.patch Original Estimate: 24h Remaining Estimate: 24h Support command for listing all members of a role. A new metastore api call also needs to be added for this. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5931) SQL std auth - add metastore get_principals_in_role api, support SHOW PRINCIPALS role_name
[ https://issues.apache.org/jira/browse/HIVE-5931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930474#comment-13930474 ] Ashutosh Chauhan commented on HIVE-5931: +1 SQL std auth - add metastore get_principals_in_role api, support SHOW PRINCIPALS role_name -- Key: HIVE-5931 URL: https://issues.apache.org/jira/browse/HIVE-5931 Project: Hive Issue Type: Sub-task Components: Authorization Reporter: Thejas M Nair Attachments: HIVE-5931.1.patch, HIVE-5931.2.patch, HIVE-5931.nothrifgen.1.patch, HIVE-5931.nothrifgen.2.patch, HIVE-5931.thriftapi.2.patch, HIVE-5931.thriftapi.3.patch, HIVE-5931.thriftapi.followup.patch, HIVE-5931.thriftapi.patch Original Estimate: 24h Remaining Estimate: 24h Support command for listing all members of a role. A new metastore api call also needs to be added for this. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Timeline for the Hive 0.13 release?
Yes, sure. On Mar 10, 2014, at 3:55 PM, Gopal V gop...@apache.org wrote: Can I add HIVE-6518 as well to the merge queue on https://cwiki.apache.org/confluence/display/Hive/Hive+0.13+release+status It is a relatively simple OOM safety patch to vectorized group-by. Tests pass locally for vec group-by, but the pre-commit tests haven't fired even though it's been PA for a while now. Cheers, Gopal
[jira] [Updated] (HIVE-6608) Add apache pom as parent pom
[ https://issues.apache.org/jira/browse/HIVE-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harish Butani updated HIVE-6608: Attachment: HIVE-6608.2.patch Add apache pom as parent pom Key: HIVE-6608 URL: https://issues.apache.org/jira/browse/HIVE-6608 Project: Hive Issue Type: Bug Reporter: Harish Butani Assignee: Harish Butani Priority: Trivial Fix For: 0.13.0 Attachments: HIVE-6608.1.patch, HIVE-6608.2.patch From https://www.apache.org/dev/publishing-maven-artifacts.html So we can use the distribution management targets. We manually did the prepare your release step. Will run Step 4 Stage the release for a vote when we are ready to release 0.13. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6608) Add apache pom as parent pom
[ https://issues.apache.org/jira/browse/HIVE-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930562#comment-13930562 ] Brock Noland commented on HIVE-6608: +1 Add apache pom as parent pom Key: HIVE-6608 URL: https://issues.apache.org/jira/browse/HIVE-6608 Project: Hive Issue Type: Bug Reporter: Harish Butani Assignee: Harish Butani Priority: Trivial Fix For: 0.13.0 Attachments: HIVE-6608.1.patch, HIVE-6608.2.patch From https://www.apache.org/dev/publishing-maven-artifacts.html So we can use the distribution management targets. We manually did the prepare your release step. Will run Step 4 Stage the release for a vote when we are ready to release 0.13. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 18972: Vectorized cast of decimal to string and timestamp produces incorrect result.
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/18972/#review36803 --- Ship it! Ship It! - Eric Hanson On March 10, 2014, 9:51 p.m., Jitendra Pandey wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/18972/ --- (Updated March 10, 2014, 9:51 p.m.) Review request for hive and Eric Hanson. Repository: hive-git Description --- Vectorized cast of decimal to string and timestamp produces incorrect result. Diffs - common/src/java/org/apache/hadoop/hive/common/type/Decimal128.java 9d25620 common/src/java/org/apache/hadoop/hive/common/type/UnsignedInt128.java 34bd9d0 common/src/test/org/apache/hadoop/hive/common/type/TestDecimal128.java debc270 common/src/test/org/apache/hadoop/hive/common/type/TestUnsignedInt128.java 9ac68fe ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/CastDecimalToString.java 2e8c3a4 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/CastDecimalToTimestamp.java df7e1ee ql/src/test/org/apache/hadoop/hive/ql/exec/vector/expressions/TestVectorTypeCasts.java 832463d ql/src/test/queries/clientpositive/vector_decimal_expressions.q 38934d2 ql/src/test/results/clientpositive/vector_decimal_expressions.q.out 629f5d5 Diff: https://reviews.apache.org/r/18972/diff/ Testing --- Thanks, Jitendra Pandey
[jira] [Commented] (HIVE-6568) Vectorized cast of decimal to string and timestamp produces incorrect result.
[ https://issues.apache.org/jira/browse/HIVE-6568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930574#comment-13930574 ] Eric Hanson commented on HIVE-6568: --- +1 Vectorized cast of decimal to string and timestamp produces incorrect result. - Key: HIVE-6568 URL: https://issues.apache.org/jira/browse/HIVE-6568 Project: Hive Issue Type: Bug Components: Vectorization Affects Versions: 0.13.0 Reporter: Jitendra Nath Pandey Assignee: Jitendra Nath Pandey Attachments: HIVE-6568.1.patch, HIVE-6568.2.patch, HIVE-6568.3.patch A decimal value 1.23 with scale 5 is represented in string as 1.23000. This behavior is different from HiveDecimal behavior. The difference in cast to timestamp is due to more aggressive rounding in vectorized expression. -- This message was sent by Atlassian JIRA (v6.2#6252)
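The scale behavior the description refers to can be reproduced with plain java.math.BigDecimal, used here only as a stand-in to illustrate the point — Hive's vectorized path uses its own Decimal128/HiveDecimal types, not BigDecimal:

```java
import java.math.BigDecimal;

class DecimalScaleDemo {
    public static void main(String[] args) {
        // A value of 1.23 carried at scale 5 renders its trailing zeros...
        BigDecimal scaled = new BigDecimal("1.23").setScale(5);
        System.out.println(scaled.toPlainString()); // 1.23000

        // ...while stripping trailing zeros gives the shorter rendering,
        // analogous to the HiveDecimal behavior the issue compares against.
        System.out.println(scaled.stripTrailingZeros().toPlainString()); // 1.23
    }
}
```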
[jira] [Resolved] (HIVE-6608) Add apache pom as parent pom
[ https://issues.apache.org/jira/browse/HIVE-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Harish Butani resolved HIVE-6608. - Resolution: Fixed Brock thanks for reviewing. Add apache pom as parent pom Key: HIVE-6608 URL: https://issues.apache.org/jira/browse/HIVE-6608 Project: Hive Issue Type: Bug Reporter: Harish Butani Assignee: Harish Butani Priority: Trivial Fix For: 0.13.0 Attachments: HIVE-6608.1.patch, HIVE-6608.2.patch From https://www.apache.org/dev/publishing-maven-artifacts.html So we can use the distribution management targets. We manually did the prepare your release step. Will run Step 4 Stage the release for a vote when we are ready to release 0.13. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization
[ https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930581#comment-13930581 ] Jitendra Nath Pandey commented on HIVE-6594: [~rhbutani] This is a serious bug that can cause incorrect results, and it affects hive-0.13 as well. I will port the fix to branch-0.13. UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization - Key: HIVE-6594 URL: https://issues.apache.org/jira/browse/HIVE-6594 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.13.0 Reporter: Remus Rusanu Assignee: Remus Rusanu Fix For: 0.14.0 Attachments: HIVE-6594.1.patch, HIVE-6594.2.patch Discovered this while investigating why my fix for HIVE-6222 produced diffs. I discovered that Decimal128.addDestructive does not adjust the internal count when the number of relevant ints increases. Since this count is used in the fast HiveDecimalWriter conversion code, the results are off. The root cause is UnsignedDecimal128.differenceInternal does not do an updateCount() on the result. -- This message was sent by Atlassian JIRA (v6.2#6252)
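The failure mode described — a cached count of significant words going stale after addition carries into a new word — can be sketched with a toy class. All names here (TinyUInt, addDestructive mirroring the real method name aside) are hypothetical; the real logic lives in UnsignedInt128/Decimal128:

```java
// Toy sketch of the bug class: a magnitude stored little-endian in an
// int[] plus a cached `count` of significant 32-bit words. Any serializer
// that trusts `count` writes a truncated value if `count` is stale.
class TinyUInt {
    final int[] words = new int[4];
    int count; // number of significant 32-bit words

    TinyUInt(long v) {
        words[0] = (int) v;
        words[1] = (int) (v >>> 32);
        updateCount();
    }

    void addDestructive(TinyUInt o) {
        long carry = 0;
        for (int i = 0; i < words.length; i++) {
            long sum = (words[i] & 0xFFFFFFFFL) + (o.words[i] & 0xFFFFFFFFL) + carry;
            words[i] = (int) sum;
            carry = sum >>> 32;
        }
        // The reported bug is the analogue of omitting this refresh:
        // a carry can extend the value into a previously zero word.
        updateCount();
    }

    void updateCount() {
        int c = words.length;
        while (c > 0 && words[c - 1] == 0) c--;
        count = c;
    }
}
```

Adding 1 to 0xFFFFFFFF carries from one word into two; without the final updateCount() the value would still claim to be one word long.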
[jira] [Created] (HIVE-6615) Cannot insert data into an ORC-based table
Michael created HIVE-6615: - Summary: Cannot insert data into an ORC-based table Key: HIVE-6615 URL: https://issues.apache.org/jira/browse/HIVE-6615 Project: Hive Issue Type: Bug Components: File Formats Affects Versions: 0.12.0, 0.11.0 Environment: Red Hat Enterprise Linux Server release 6.3 (Santiago) hadoop-2.2.0 hive-0.12.0 Java(TM) SE Runtime Environment (build 1.7.0_10-b18) Java HotSpot(TM) 64-Bit Server VM (build 23.6-b04, mixed mode) Reporter: Michael I have the following table definitions: create external table rmtail(day int, dm1 int, dm2 int) row format delimited fields terminated by ',' location '${env:HOME}/hive_db/POC_RAW_DATA'; create table dag(day int, dm2 int) partitioned by (dm1 int) stored as orc; Unfortunately, the following insert statement INSERT OVERWRITE TABLE dag PARTITION (dm1) SELECT rm.day, rm.dm2, rm.dm1 FROM rmtail rm; gives errors and does nothing. The problem is the conversion to the ORC format, as everything works fine as soon as I remove the 'stored as orc' clause. The errors printed on the terminal are shown below: Total MapReduce jobs = 3 Launching Job 1 out of 3 Number of reduce tasks is set to 0 since there's no reduce operator SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/devjuser1/jp/ccjp/michaelg/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/devjuser1/jp/ccjp/michaelg/hive-0.12.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] 14/03/11 20:49:51 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... 
using builtin-java classes where applicable 14/03/11 20:49:53 WARN conf.Configuration: file:/tmp/hive-michaelg/hive_2014-03-11_20-49-44_726_2311004197897210334-1/-local-10003/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring. 14/03/11 20:49:53 WARN conf.Configuration: file:/tmp/hive-michaelg/hive_2014-03-11_20-49-44_726_2311004197897210334-1/-local-10003/jobconf.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring. 14/03/11 20:49:53 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive 14/03/11 20:49:53 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize 14/03/11 20:49:53 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize 14/03/11 20:49:53 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack 14/03/11 20:49:53 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node 14/03/11 20:49:53 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces 14/03/11 20:49:53 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative 14/03/11 20:49:53 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore. 
Execution log at: /tmp/michaelg/.log Job running in-process (local Hadoop) Hadoop job information for null: number of mappers: 0; number of reducers: 0 2014-03-11 20:50:00,327 null map = 0%, reduce = 0% Ended Job = job_local2076881875_0001 with errors Error during job, obtaining debugging information... Execution failed with exit status: 2 Obtaining error information Task failed! Task ID: Stage-1 Logs: /tmp/michaelg/hive.log FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask And the log file contents are as follows: 2014-03-11 20:45:34,664 WARN conf.HiveConf (HiveConf.java:initialize(1142)) - DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore. 2014-03-11 20:45:35,503 WARN util.NativeCodeLoader (NativeCodeLoader.java:clinit(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2014-03-11 20:45:35,943 WARN conf.HiveConf (HiveConf.java:initialize(1142)) - DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value
[jira] [Created] (HIVE-6616) Document ORC file format to enable development of external converters to/from ORC/text files
Michael created HIVE-6616: - Summary: Document ORC file format to enable development of external converters to/from ORC/text files Key: HIVE-6616 URL: https://issues.apache.org/jira/browse/HIVE-6616 Project: Hive Issue Type: Bug Components: File Formats Affects Versions: 0.12.0, 0.11.0 Reporter: Michael Please document the structure of the ORC file format in a way that allows external software to write and read such files. I would like to be able to create ORC files myself without the help of Hive. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6610) Unit test log needs to reflect DB Name
[ https://issues.apache.org/jira/browse/HIVE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6610: - Attachment: patch_db_name Unit test log needs to reflect DB Name -- Key: HIVE-6610 URL: https://issues.apache.org/jira/browse/HIVE-6610 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: patch_db_name The following Hadoop2 unit tests are failing because DDL pre/post hooks are printing out the database name: auto_join14.q, join14.q, input12.q, input39.q. Current analysis suggests authentication changes caused it. These tests are marked as hadoop-2 only. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6610) Unit test log needs to reflect DB Name
[ https://issues.apache.org/jira/browse/HIVE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6610: - Status: Patch Available (was: Open) Unit test log needs to reflect DB Name -- Key: HIVE-6610 URL: https://issues.apache.org/jira/browse/HIVE-6610 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: patch_db_name The following Hadoop2 unit tests are failing because DDL pre/post hooks are printing out the database name: auto_join14.q, join14.q, input12.q, input39.q. Current analysis suggests authentication changes caused it. These tests are marked as hadoop-2 only. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6538) yet another annoying exception in test logs
[ https://issues.apache.org/jira/browse/HIVE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930627#comment-13930627 ] Sergey Shelukhin commented on HIVE-6538: Will commit tomorrow and fix long line on commit if there are no objections yet another annoying exception in test logs --- Key: HIVE-6538 URL: https://issues.apache.org/jira/browse/HIVE-6538 Project: Hive Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Szehon Ho Priority: Trivial Attachments: HIVE-6538.2.patch, HIVE-6538.2.patch, HIVE-6538.patch Whenever you look at failed q tests you have to go thru this useless exception. {noformat} 2014-03-03 11:22:54,872 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(143)) - MetaException(message:NoSuchObjectException(message:Function default.qtest_get_java_boolean does not exist)) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:4575) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_function(HiveMetaStore.java:4702) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105) at $Proxy8.get_function(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunction(HiveMetaStoreClient.java:1526) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89) at $Proxy9.getFunction(Unknown Source) 
at org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:2603) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfoFromMetastore(FunctionRegistry.java:546) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getQualifiedFunctionInfo(FunctionRegistry.java:578) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:599) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:606) at org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:94) at org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:60) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:445) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:345) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1078) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1121) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1014) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004) at org.apache.hadoop.hive.ql.QTestUtil.runCmd(QTestUtil.java:655) at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:772) at org.apache.hadoop.hive.cli.TestCliDriver.clinit(TestCliDriver.java:46) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:34) at org.junit.internal.runners.SuiteMethod.init(SuiteMethod.java:23) at org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:14) at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57) at 
org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29) at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57) at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262) at
[jira] [Updated] (HIVE-6550) SemanticAnalyzer.reset() doesn't clear all the state
[ https://issues.apache.org/jira/browse/HIVE-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6550: - Attachment: HIVE-6550.Patch SemanticAnalyzer.reset() doesn't clear all the state Key: HIVE-6550 URL: https://issues.apache.org/jira/browse/HIVE-6550 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6550.Patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6550) SemanticAnalyzer.reset() doesn't clear all the state
[ https://issues.apache.org/jira/browse/HIVE-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6550: - Attachment: (was: SemanticAnalyzer-Reset-Patch) SemanticAnalyzer.reset() doesn't clear all the state Key: HIVE-6550 URL: https://issues.apache.org/jira/browse/HIVE-6550 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6550.Patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6610) Unit test log needs to reflect DB Name
[ https://issues.apache.org/jira/browse/HIVE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930632#comment-13930632 ] Ashutosh Chauhan commented on HIVE-6610: Name the patch as per convention. Unit test log needs to reflect DB Name -- Key: HIVE-6610 URL: https://issues.apache.org/jira/browse/HIVE-6610 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: patch_db_name The following Hadoop2 unit tests are failing because DDL pre/post hooks are printing out the database name: auto_join14.q, join14.q, input12.q, input39.q. Current analysis suggests authentication changes caused it. These tests are marked as hadoop-2 only. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 18179: Support more generic way of using composite key for HBaseHandler
On March 10, 2014, 9:25 p.m., Xuefu Zhang wrote: ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java, line 702 https://reviews.apache.org/r/18179/diff/5/?file=513405#file513405line702 Do these methods have to be public? Private if just used locally. Navis Ryu wrote: I thought a use case for this might come up sometime. But ok. We can make them public when the use case comes, but not the other way around. On March 10, 2014, 9:25 p.m., Xuefu Zhang wrote: hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java, line 730 https://reviews.apache.org/r/18179/diff/5/?file=513392#file513392line730 Can we define a serialize() interface in HBaseKeyFactory and move the existing implementation here to HBaseCompositeKeyFactory? serialize() seems generic enough to expect from all key factories. Doing this will eliminate HBaseWritableKeyFactory and the use of the class to detect what method to call. Navis Ryu wrote: If the default serialization could be done with a simple, clean method call, I would have done it that way. But the current implementation needs seven arguments for that (plus serdeParams), which made me think twice about it. byte[] serialize(int i, List<ColumnMapping> mapping, List<? extends StructField> fields, List<Object> list, List<? extends StructField> declaredFields, boolean useJSONSerialize, ByteStream.Output serializeStream) throws IOException; Yes, I agree that too many params for a method is ugly. In this case, however, it doesn't seem too bad: 1. i and the 4 lists can be reduced to 4 fields, as i is just the index in the lists, which are derived from the object inspector and serdeParams. To further reduce the arg number, a struct can be defined to wrap the 4 items: keyMapping, keyField, keyObject, and keyDeclaredField. (keyDeclaredField may not be needed as we are talking about the row key here.) 2. useJSONSerialize seems always false, so it can be removed. I understand that some refactoring is needed. However, I think it's worth the effort for readability and maintenance. 
- Xuefu --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/18179/#review36688 --- On March 7, 2014, 7:46 a.m., Navis Ryu wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/18179/ --- (Updated March 7, 2014, 7:46 a.m.) Review request for hive. Bugs: HIVE-6411 https://issues.apache.org/jira/browse/HIVE-6411 Repository: hive-git Description --- HIVE-2599 introduced using a custom object for the row key. But it forces key objects to extend HBaseCompositeKey, which is again an extension of LazyStruct. If the user provides a proper Object and OI, we can replace the internal key and keyOI with those. Initial implementation is based on a factory interface:

{code}
public interface HBaseKeyFactory {
  void init(SerDeParameters parameters, Properties properties) throws SerDeException;
  ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
  LazyObjectBase createObject(ObjectInspector inspector) throws SerDeException;
}
{code}

Diffs - hbase-handler/pom.xml 132af43 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 5008f15 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKeyFactory.java PRE-CREATION hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java PRE-CREATION hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java PRE-CREATION hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseScanRange.java PRE-CREATION hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 2cd65cb hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 29e5da5 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseWritableKeyFactory.java PRE-CREATION hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java 704fcb9 hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java fc40195 
hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java 13c344b hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java PRE-CREATION hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory2.java PRE-CREATION hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION hbase-handler/src/test/queries/positive/hbase_custom_key2.q PRE-CREATION hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION
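The parameter-object refactoring Xuefu suggests in the thread above — collapsing the per-key serialize() arguments into one context struct — might look roughly like the following toy sketch. All names here (KeySerializeContext, StringKeySerializer) are illustrative and do not come from the actual HIVE-6411 patch:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Hypothetical context object bundling the per-key state, so serialize()
// shrinks from seven parameters to one. Fields are a simplified subset of
// the real arguments (index, key value); the real version would also carry
// the mapping, struct fields, and output stream.
final class KeySerializeContext {
    final int fieldIndex;   // index of the key column in the row
    final Object keyObject; // the key value to serialize

    KeySerializeContext(int fieldIndex, Object keyObject) {
        this.fieldIndex = fieldIndex;
        this.keyObject = keyObject;
    }
}

// The slimmed-down interface every key factory could implement.
interface KeySerializer {
    byte[] serialize(KeySerializeContext ctx) throws IOException;
}

// A trivial implementation: render the key as UTF-8 text.
class StringKeySerializer implements KeySerializer {
    public byte[] serialize(KeySerializeContext ctx) {
        return String.valueOf(ctx.keyObject).getBytes(StandardCharsets.UTF_8);
    }
}
```

The design point is the one Xuefu makes: callers construct one context, and adding a field later does not break every implementor's method signature.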
[jira] [Commented] (HIVE-6550) SemanticAnalyzer.reset() doesn't clear all the state
[ https://issues.apache.org/jira/browse/HIVE-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930630#comment-13930630 ] Laljo John Pullokkaran commented on HIVE-6550: -- Renamed patch to follow naming convention SemanticAnalyzer.reset() doesn't clear all the state Key: HIVE-6550 URL: https://issues.apache.org/jira/browse/HIVE-6550 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6550.Patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6610) Unit test log needs to reflect DB Name
[ https://issues.apache.org/jira/browse/HIVE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6610: - Attachment: (was: patch_db_name) Unit test log needs to reflect DB Name -- Key: HIVE-6610 URL: https://issues.apache.org/jira/browse/HIVE-6610 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6610.patch The following Hadoop2 unit tests are failing because DDL pre/post hooks are printing out the database name: auto_join14.q, join14.q, input12.q, input39.q. Current analysis suggests authentication changes caused it. These tests are marked as hadoop-2 only. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6610) Unit test log needs to reflect DB Name
[ https://issues.apache.org/jira/browse/HIVE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6610: - Status: Open (was: Patch Available) Unit test log needs to reflect DB Name -- Key: HIVE-6610 URL: https://issues.apache.org/jira/browse/HIVE-6610 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6610.patch The following Hadoop2 unit tests are failing because DDL pre/post hooks are printing out the database name: auto_join14.q, join14.q, input12.q, input39.q. Current analysis suggests authentication changes caused it. These tests are marked as hadoop-2 only. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6610) Unit test log needs to reflect DB Name
[ https://issues.apache.org/jira/browse/HIVE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930636#comment-13930636 ] Laljo John Pullokkaran commented on HIVE-6610: -- Renamed patch to follow naming convention. Unit test log needs to reflect DB Name -- Key: HIVE-6610 URL: https://issues.apache.org/jira/browse/HIVE-6610 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6610.patch The following Hadoop2 unit tests are failing because DDL pre/post hooks are printing out the database name: auto_join14.q, join14.q, input12.q, input39.q. Current analysis suggests authentication changes caused it. These tests are marked as hadoop-2 only. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6610) Unit test log needs to reflect DB Name
[ https://issues.apache.org/jira/browse/HIVE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6610: - Attachment: HIVE-6610.patch Unit test log needs to reflect DB Name -- Key: HIVE-6610 URL: https://issues.apache.org/jira/browse/HIVE-6610 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6610.patch The following Hadoop2 unit tests are failing because DDL pre/post hooks are printing out the database name: auto_join14.q, join14.q, input12.q, input39.q. Current analysis suggests authentication changes caused it. These tests are marked as hadoop-2 only. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6610) Unit test log needs to reflect DB Name
[ https://issues.apache.org/jira/browse/HIVE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6610: - Status: Patch Available (was: Open) Unit test log needs to reflect DB Name -- Key: HIVE-6610 URL: https://issues.apache.org/jira/browse/HIVE-6610 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6610.patch The following Hadoop2 unit tests are failing because DDL pre/post hooks are printing out the database name: auto_join14.q, join14.q, input12.q, input39.q. Current analysis suggests authentication changes caused it. These tests are marked as hadoop-2 only. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6550) SemanticAnalyzer.reset() doesn't clear all the state
[ https://issues.apache.org/jira/browse/HIVE-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6550: - Attachment: HIVE-6550.patch SemanticAnalyzer.reset() doesn't clear all the state Key: HIVE-6550 URL: https://issues.apache.org/jira/browse/HIVE-6550 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6550.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6550) SemanticAnalyzer.reset() doesn't clear all the state
[ https://issues.apache.org/jira/browse/HIVE-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6550: - Status: Patch Available (was: Open) SemanticAnalyzer.reset() doesn't clear all the state Key: HIVE-6550 URL: https://issues.apache.org/jira/browse/HIVE-6550 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6550.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6550) SemanticAnalyzer.reset() doesn't clear all the state
[ https://issues.apache.org/jira/browse/HIVE-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laljo John Pullokkaran updated HIVE-6550: - Attachment: (was: HIVE-6550.Patch) SemanticAnalyzer.reset() doesn't clear all the state Key: HIVE-6550 URL: https://issues.apache.org/jira/browse/HIVE-6550 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6550.patch -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6617) Reduce ambiguity in grammar
Ashutosh Chauhan created HIVE-6617: -- Summary: Reduce ambiguity in grammar Key: HIVE-6617 URL: https://issues.apache.org/jira/browse/HIVE-6617 Project: Hive Issue Type: Task Reporter: Ashutosh Chauhan As of today, ANTLR reports 214 warnings. We need to bring this number down, ideally to 0. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6610) Unit test log needs to reflect DB Name
[ https://issues.apache.org/jira/browse/HIVE-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930644#comment-13930644 ] Ashutosh Chauhan commented on HIVE-6610: +1 Unit test log needs to reflect DB Name -- Key: HIVE-6610 URL: https://issues.apache.org/jira/browse/HIVE-6610 Project: Hive Issue Type: Bug Affects Versions: 0.12.0 Reporter: Laljo John Pullokkaran Assignee: Laljo John Pullokkaran Attachments: HIVE-6610.patch The following Hadoop 2 unit tests are failing because the DDL pre/post hooks print out the database name: auto_join14.q, join14.q, input12.q, input39.q. Current analysis suggests the authentication changes caused it. These tests are marked as hadoop-2 only. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6605) Hive does not set the environment correctly when running in Tez mode
[ https://issues.apache.org/jira/browse/HIVE-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-6605: --- Summary: Hive does not set the environment correctly when running in Tez mode (was: Hive does not set the java.library.path correctly when running in Tez mode) Hive does not set the environment correctly when running in Tez mode Key: HIVE-6605 URL: https://issues.apache.org/jira/browse/HIVE-6605 Project: Hive Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HIVE-6605.01.patch, HIVE-6605.patch When running in Tez mode, Hive does not correctly set the java.library.path. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6587) allow specifying additional Hive classpath for Hadoop
[ https://issues.apache.org/jira/browse/HIVE-6587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-6587: --- Resolution: Fixed Fix Version/s: 0.13.0 Status: Resolved (was: Patch Available) committed to trunk and 0.13 branch allow specifying additional Hive classpath for Hadoop - Key: HIVE-6587 URL: https://issues.apache.org/jira/browse/HIVE-6587 Project: Hive Issue Type: Improvement Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Trivial Fix For: 0.13.0 Attachments: HIVE-6587.patch Allow users to add jars to hive's Hadoop classpath without explicitly modifying their Hadoop classpath -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6605) Hive does not set the environment correctly when running in Tez mode
[ https://issues.apache.org/jira/browse/HIVE-6605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-6605: --- Resolution: Fixed Fix Version/s: 0.13.0 Status: Resolved (was: Patch Available) Committed to trunk and 0.13 branch Hive does not set the environment correctly when running in Tez mode Key: HIVE-6605 URL: https://issues.apache.org/jira/browse/HIVE-6605 Project: Hive Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Fix For: 0.13.0 Attachments: HIVE-6605.01.patch, HIVE-6605.patch When running in Tez mode, Hive does not correctly set the java.library.path. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6563) hdfs jar being pulled in when creating a hadoop-2 based hive tar ball
[ https://issues.apache.org/jira/browse/HIVE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vikram Dixit K updated HIVE-6563: - Summary: hdfs jar being pulled in when creating a hadoop-2 based hive tar ball (was: hdfs jar being pulled in when creating a hadoop-2 tar ball) hdfs jar being pulled in when creating a hadoop-2 based hive tar ball - Key: HIVE-6563 URL: https://issues.apache.org/jira/browse/HIVE-6563 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.13.0, 0.14.0 Reporter: Vikram Dixit K Assignee: Vikram Dixit K Priority: Blocker Attachments: HIVE-6563.1.patch Looks like some dependency issue is causing hadoop-hdfs jar to be packaged in the hive tar ball. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization
[ https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930719#comment-13930719 ] Jitendra Nath Pandey commented on HIVE-6594: Committed to branch-0.13 as well. UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization - Key: HIVE-6594 URL: https://issues.apache.org/jira/browse/HIVE-6594 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.13.0 Reporter: Remus Rusanu Assignee: Remus Rusanu Fix For: 0.13.0, 0.14.0 Attachments: HIVE-6594.1.patch, HIVE-6594.2.patch Discovered this while investigating why my fix for HIVE-6222 produced diffs. I discovered that Decimal128.addDestructive does not adjust the internal count when the number of relevant ints increases. Since this count is used in the fast HiveDecimalWriter conversion code, the results are off. The root cause is that UnsignedDecimal128.differenceInternal does not call updateCount() on the result. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization
[ https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HIVE-6594: --- Fix Version/s: (was: 0.14.0) 0.13.0 UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization - Key: HIVE-6594 URL: https://issues.apache.org/jira/browse/HIVE-6594 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.13.0 Reporter: Remus Rusanu Assignee: Remus Rusanu Fix For: 0.13.0, 0.14.0 Attachments: HIVE-6594.1.patch, HIVE-6594.2.patch Discovered this while investigating why my fix for HIVE-6222 produced diffs. I discovered that Decimal128.addDestructive does not adjust the internal count when the number of relevant ints increases. Since this count is used in the fast HiveDecimalWriter conversion code, the results are off. The root cause is that UnsignedDecimal128.differenceInternal does not call updateCount() on the result. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6594) UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization
[ https://issues.apache.org/jira/browse/HIVE-6594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HIVE-6594: --- Fix Version/s: 0.14.0 UnsignedInt128 addition does not increase internal int array count resulting in corrupted values during serialization - Key: HIVE-6594 URL: https://issues.apache.org/jira/browse/HIVE-6594 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.13.0 Reporter: Remus Rusanu Assignee: Remus Rusanu Fix For: 0.13.0, 0.14.0 Attachments: HIVE-6594.1.patch, HIVE-6594.2.patch Discovered this while investigating why my fix for HIVE-6222 produced diffs. I discovered that Decimal128.addDestructive does not adjust the internal count when the number of relevant ints increases. Since this count is used in the fast HiveDecimalWriter conversion code, the results are off. The root cause is that UnsignedDecimal128.differenceInternal does not call updateCount() on the result. -- This message was sent by Atlassian JIRA (v6.2#6252)
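To illustrate the bug pattern HIVE-6594 describes, here is a minimal sketch (hypothetical code, not the actual Hive Decimal128/UnsignedInt128 implementation): a fixed-width integer stored as an int[] of limbs plus a cached count of significant limbs. Any destructive arithmetic that can grow the magnitude must refresh the cached count, otherwise a consumer that trusts the count (such as a fast serializer) reads stale, corrupted values.

```java
// Hypothetical sketch of the HIVE-6594 bug pattern, not Hive's actual code:
// a 128-bit unsigned value as little-endian 32-bit limbs, with a cached
// count of significant limbs that arithmetic must keep in sync.
public class CountedInt128 {
    final int[] v = new int[4]; // little-endian limbs
    int count;                  // cached number of significant limbs

    static int significantLimbs(int[] v) {
        for (int i = v.length - 1; i >= 0; i--) {
            if (v[i] != 0) return i + 1;
        }
        return 0;
    }

    void updateCount() { count = significantLimbs(v); }

    // Adds an unsigned 32-bit value into limb 0 with carry propagation.
    // The analogue of the HIVE-6594 fix is the trailing updateCount():
    // without it, an addition that carries into a new limb leaves 'count'
    // too small, and serialization based on 'count' drops the high limb.
    void addDestructive(int x) {
        long carry = x & 0xFFFFFFFFL;
        for (int i = 0; i < v.length && carry != 0; i++) {
            long sum = (v[i] & 0xFFFFFFFFL) + carry;
            v[i] = (int) sum;
            carry = sum >>> 32;
        }
        updateCount(); // the call the buggy version omitted
    }
}
```

Adding 0xFFFFFFFF and then 1 carries into the second limb; with the `updateCount()` call the count correctly becomes 2, whereas the buggy variant would still report 1.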
[jira] [Created] (HIVE-6618) assertion when getting reference key from loader with byte-array mapjoin key
Sergey Shelukhin created HIVE-6618: -- Summary: assertion when getting reference key from loader with byte-array mapjoin key Key: HIVE-6618 URL: https://issues.apache.org/jira/browse/HIVE-6618 Project: Hive Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin java.lang.AssertionError: Should be called after loading tables at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.processRow(MapRecordProcessor.java:205) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:171) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:152) This is because tables may have already been loaded. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6147) Support avro data stored in HBase columns
[ https://issues.apache.org/jira/browse/HIVE-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930726#comment-13930726 ] Swarnim Kulkarni commented on HIVE-6147: Thanks [~xuefuz] for reviewing. I agree it makes a lot of sense for HIVE-6411 to go in first, and then I can refactor this on the basis of that. Also, on the point of reusing AvroSerDe code, I have tried to write AvroLazyObjectInspector simply as a wrapper on top of AvroSerDe, still delegating most of the operations to the serde. Any specific instance you want me to look deeper into? Support avro data stored in HBase columns - Key: HIVE-6147 URL: https://issues.apache.org/jira/browse/HIVE-6147 Project: Hive Issue Type: Bug Components: HBase Handler Affects Versions: 0.12.0 Reporter: Swarnim Kulkarni Assignee: Swarnim Kulkarni Attachments: HIVE-6147.1.patch.txt, HIVE-6147.2.patch.txt, HIVE-6147.3.patch.txt, HIVE-6147.3.patch.txt, HIVE-6147.4.patch.txt, HIVE-6147.5.patch.txt Presently, the HBase Hive integration supports querying only primitive data types in columns. It would be nice to be able to store and query Avro objects in HBase columns by making them visible as structs to Hive. This will allow Hive to perform ad hoc analysis of HBase data, which can be deeply structured. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6618) assertion when getting reference key from loader with byte-array mapjoin key
[ https://issues.apache.org/jira/browse/HIVE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-6618: --- Attachment: HIVE-6618.patch assertion when getting reference key from loader with byte-array mapjoin key Key: HIVE-6618 URL: https://issues.apache.org/jira/browse/HIVE-6618 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HIVE-6618.patch java.lang.AssertionError: Should be called after loading tables at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.processRow(MapRecordProcessor.java:205) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:171) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:152) This is because tables may have already been loaded. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6618) assertion when getting reference key from loader with byte-array mapjoin key
[ https://issues.apache.org/jira/browse/HIVE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-6618: --- Affects Version/s: 0.13.0 assertion when getting reference key from loader with byte-array mapjoin key Key: HIVE-6618 URL: https://issues.apache.org/jira/browse/HIVE-6618 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HIVE-6618.patch java.lang.AssertionError: Should be called after loading tables at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.processRow(MapRecordProcessor.java:205) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:171) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:152) This is because tables may have already been loaded. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6618) assertion when getting reference key from loader with byte-array mapjoin key
[ https://issues.apache.org/jira/browse/HIVE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930735#comment-13930735 ] Sergey Shelukhin commented on HIVE-6618: Instead of getting key from loader, get from tables assertion when getting reference key from loader with byte-array mapjoin key Key: HIVE-6618 URL: https://issues.apache.org/jira/browse/HIVE-6618 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HIVE-6618.patch java.lang.AssertionError: Should be called after loading tables at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.processRow(MapRecordProcessor.java:205) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:171) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:152) This is because tables may have already been loaded. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6618) assertion when getting reference key from loader with byte-array mapjoin key
[ https://issues.apache.org/jira/browse/HIVE-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-6618: --- Status: Patch Available (was: Open) assertion when getting reference key from loader with byte-array mapjoin key Key: HIVE-6618 URL: https://issues.apache.org/jira/browse/HIVE-6618 Project: Hive Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Attachments: HIVE-6618.patch java.lang.AssertionError: Should be called after loading tables at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.processRow(MapRecordProcessor.java:205) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:171) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:152) This is because tables may have already been loaded. -- This message was sent by Atlassian JIRA (v6.2#6252)
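The HIVE-6618 comment ("Instead of getting key from loader, get from tables") can be sketched with a minimal model (hypothetical code, not Hive's actual Tez mapjoin classes): when a cache hands back hash tables loaded by an earlier task, the loader for the current task never runs, so any state kept on the loader, such as the reference key, is unset and an "only after loading" assertion fires. Keeping the key with the loaded table itself removes the dependency on whether loading happened locally.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the HIVE-6618 situation, not Hive code: a warm
// per-task cache skips the loader entirely, so state stored on the loader
// is unreliable; state stored on the loaded table is not.
public class MapJoinCacheSketch {
    static class LoadedTable {
        final byte[] refKey;            // the key travels with the table
        LoadedTable(byte[] k) { refKey = k; }
    }

    static class Loader {
        byte[] lastKey;                 // only set if THIS loader actually ran
        LoadedTable load(byte[] key) { lastKey = key; return new LoadedTable(key); }
    }

    static final Map<String, LoadedTable> cache = new HashMap<>();

    // On a warm cache the mapping function (and thus the loader) never runs.
    static LoadedTable getTable(String name, Loader loader, byte[] key) {
        return cache.computeIfAbsent(name, n -> loader.load(key));
    }
}
```

With a cold cache the loader runs and both the loader's `lastKey` and the table's `refKey` are set; with a warm cache only `refKey` is usable, which is why reading the key from the tables is the safe choice.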
[jira] [Resolved] (HIVE-6595) Hive 0.11.0 build failure
[ https://issues.apache.org/jira/browse/HIVE-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szehon Ho resolved HIVE-6595. - Resolution: Won't Fix Assignee: Szehon Ho Release Note: Resolving this JIRA as Won't Fix as it's working in current trunk Hive 0.11.0 build failure - Key: HIVE-6595 URL: https://issues.apache.org/jira/browse/HIVE-6595 Project: Hive Issue Type: Bug Components: Build Infrastructure Affects Versions: 0.11.0 Environment: CentOS 6.5, java version 1.7.0_45, Hadoop 2.2.0 Reporter: Amit Anand Assignee: Szehon Ho I am unable to build Hive 0.11.0 from source. I have a single-node Hadoop 2.2.0 cluster, built from source, running. I followed the steps given below: svn co http://svn.apache.org/repos/asf/hive/tags/release-0.11.0/ hive-0.11.0 cd hive-0.11.0 ant clean ant package I got the messages given below: compile: [echo] Project: jdbc [javac] Compiling 28 source files to /opt/apache/source/hive-0.11.0/build/jdbc/classes [javac] /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveCallableStatement.java:48: error: HiveCallableStatement is not abstract and does not override abstract method <T>getObject(String,Class<T>) in CallableStatement [javac] public class HiveCallableStatement implements java.sql.CallableStatement { [javac] ^ [javac] where T is a type-variable: [javac] T extends Object declared in method <T>getObject(String,Class<T>) [javac] /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java:65: error: HiveConnection is not abstract and does not override abstract method getNetworkTimeout() in Connection [javac] public class HiveConnection implements java.sql.Connection { [javac] ^ [javac] /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDataSource.java:31: error: HiveDataSource is not abstract and does not override abstract method getParentLogger() in CommonDataSource [javac] public class HiveDataSource implements DataSource { [javac] ^ [javac] 
/opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:56: error: HiveDatabaseMetaData is not abstract and does not override abstract method generatedKeyAlwaysReturned() in DatabaseMetaData [javac] public class HiveDatabaseMetaData implements DatabaseMetaData { [javac] ^ [javac] /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDatabaseMetaData.java:707: error: anonymous org.apache.hive.jdbc.HiveDatabaseMetaData$1 is not abstract and does not override abstract method <T>getObject(String,Class<T>) in ResultSet [javac] , null) { [javac] ^ [javac] where T is a type-variable: [javac] T extends Object declared in method <T>getObject(String,Class<T>) [javac] /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveDriver.java:35: error: HiveDriver is not abstract and does not override abstract method getParentLogger() in Driver [javac] public class HiveDriver implements Driver { [javac] ^ [javac] /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HivePreparedStatement.java:56: error: HivePreparedStatement is not abstract and does not override abstract method isCloseOnCompletion() in Statement [javac] public class HivePreparedStatement implements PreparedStatement { [javac] ^ [javac] /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveQueryResultSet.java:48: error: HiveQueryResultSet is not abstract and does not override abstract method <T>getObject(String,Class<T>) in ResultSet [javac] public class HiveQueryResultSet extends HiveBaseResultSet { [javac] ^ [javac] where T is a type-variable: [javac] T extends Object declared in method <T>getObject(String,Class<T>) [javac] /opt/apache/source/hive-0.11.0/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java:42: error: HiveStatement is not abstract and does not override abstract method isCloseOnCompletion() in Statement [javac] public class HiveStatement implements java.sql.Statement { [javac] ^ [javac] Note: Some input files use or override a 
deprecated API. [javac] Note: Recompile with -Xlint:deprecation for details. [javac] Note: Some input files use unchecked or unsafe operations. [javac] Note: Recompile with -Xlint:unchecked for details. [javac] 9 errors BUILD FAILED /opt/apache/source/hive-0.11.0/build.xml:274: The following error occurred while executing this line: /opt/apache/source/hive-0.11.0/build.xml:113: The following error occurred while executing this
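The failures in the log above are interface-evolution errors: JDK 7 ships JDBC 4.1, which added abstract methods such as `getNetworkTimeout()`, `getParentLogger()`, `isCloseOnCompletion()`, and the generic `<T> T getObject(String, Class<T>)` overloads to the `java.sql` interfaces, so implementors written against the older interfaces (as in release-0.11.0) no longer override every abstract method. The following probe (an illustrative sketch, not part of Hive) uses reflection to show that the current JDK's interfaces declare the newer methods:

```java
import java.lang.reflect.Method;
import java.util.Arrays;

// Sketch: report whether the running JDK's java.sql interfaces declare the
// JDBC 4.1 methods whose absence from the Hive 0.11.0 implementors caused
// the "is not abstract and does not override" compile errors above.
public class Jdbc41Probe {
    static boolean declares(Class<?> iface, String methodName) {
        return Arrays.stream(iface.getMethods())
                     .anyMatch(m -> m.getName().equals(methodName));
    }

    public static void main(String[] args) {
        System.out.println("Connection.getNetworkTimeout: "
                + declares(java.sql.Connection.class, "getNetworkTimeout"));
        System.out.println("Statement.isCloseOnCompletion: "
                + declares(java.sql.Statement.class, "isCloseOnCompletion"));
        System.out.println("Driver.getParentLogger: "
                + declares(java.sql.Driver.class, "getParentLogger"));
    }
}
```

On JDK 6 all three prints would show `false` and the 0.11.0 sources compile; on JDK 7 and later they show `true`, which is why building that tag requires either an older JDK or the later sources that stub these methods, as the "working in current trunk" resolution indicates.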
[jira] [Updated] (HIVE-6538) yet another annoying exception in test logs
[ https://issues.apache.org/jira/browse/HIVE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-6538: --- Status: Patch Available (was: Reopened) well, patch was not submitted :P yet another annoying exception in test logs --- Key: HIVE-6538 URL: https://issues.apache.org/jira/browse/HIVE-6538 Project: Hive Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Szehon Ho Priority: Trivial Attachments: HIVE-6538.2.patch, HIVE-6538.2.patch, HIVE-6538.patch Whenever you look at failed q tests you have to go thru this useless exception. {noformat} 2014-03-03 11:22:54,872 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(143)) - MetaException(message:NoSuchObjectException(message:Function default.qtest_get_java_boolean does not exist)) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:4575) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_function(HiveMetaStore.java:4702) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105) at $Proxy8.get_function(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunction(HiveMetaStoreClient.java:1526) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89) at $Proxy9.getFunction(Unknown Source) at 
org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:2603) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfoFromMetastore(FunctionRegistry.java:546) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getQualifiedFunctionInfo(FunctionRegistry.java:578) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:599) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:606) at org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:94) at org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:60) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:445) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:345) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1078) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1121) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1014) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004) at org.apache.hadoop.hive.ql.QTestUtil.runCmd(QTestUtil.java:655) at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:772) at org.apache.hadoop.hive.cli.TestCliDriver.<clinit>(TestCliDriver.java:46) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:34) at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:23) at org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:14) at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57) at 
org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29) at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57) at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at
[jira] [Commented] (HIVE-6538) yet another annoying exception in test logs
[ https://issues.apache.org/jira/browse/HIVE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930763#comment-13930763 ] Sergey Shelukhin commented on HIVE-6538: I'll wait for HiveQA good point... just submitted the patch yet another annoying exception in test logs --- Key: HIVE-6538 URL: https://issues.apache.org/jira/browse/HIVE-6538 Project: Hive Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Szehon Ho Priority: Trivial Attachments: HIVE-6538.2.patch, HIVE-6538.2.patch, HIVE-6538.patch Whenever you look at failed q tests you have to go thru this useless exception. {noformat} 2014-03-03 11:22:54,872 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(143)) - MetaException(message:NoSuchObjectException(message:Function default.qtest_get_java_boolean does not exist)) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:4575) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_function(HiveMetaStore.java:4702) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105) at $Proxy8.get_function(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunction(HiveMetaStoreClient.java:1526) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89) at $Proxy9.getFunction(Unknown Source) at 
org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:2603) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfoFromMetastore(FunctionRegistry.java:546) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getQualifiedFunctionInfo(FunctionRegistry.java:578) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:599) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:606) at org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:94) at org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:60) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:445) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:345) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1078) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1121) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1014) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004) at org.apache.hadoop.hive.ql.QTestUtil.runCmd(QTestUtil.java:655) at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:772) at org.apache.hadoop.hive.cli.TestCliDriver.<clinit>(TestCliDriver.java:46) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:34) at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:23) at org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:14) at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57) at 
org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29) at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57) at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262) at
[jira] [Commented] (HIVE-6538) yet another annoying exception in test logs
[ https://issues.apache.org/jira/browse/HIVE-6538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930769#comment-13930769 ] Szehon Ho commented on HIVE-6538: - Oh I missed that, thanks. yet another annoying exception in test logs --- Key: HIVE-6538 URL: https://issues.apache.org/jira/browse/HIVE-6538 Project: Hive Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Szehon Ho Priority: Trivial Attachments: HIVE-6538.2.patch, HIVE-6538.2.patch, HIVE-6538.patch Whenever you look at failed q tests you have to go thru this useless exception. {noformat} 2014-03-03 11:22:54,872 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(143)) - MetaException(message:NoSuchObjectException(message:Function default.qtest_get_java_boolean does not exist)) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:4575) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_function(HiveMetaStore.java:4702) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105) at $Proxy8.get_function(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getFunction(HiveMetaStoreClient.java:1526) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89) at $Proxy9.getFunction(Unknown Source) at 
org.apache.hadoop.hive.ql.metadata.Hive.getFunction(Hive.java:2603) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfoFromMetastore(FunctionRegistry.java:546) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getQualifiedFunctionInfo(FunctionRegistry.java:578) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:599) at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:606) at org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:94) at org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:60) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:327) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:445) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:345) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1078) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1121) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1014) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1004) at org.apache.hadoop.hive.ql.QTestUtil.runCmd(QTestUtil.java:655) at org.apache.hadoop.hive.ql.QTestUtil.createSources(QTestUtil.java:772) at org.apache.hadoop.hive.cli.TestCliDriver.<clinit>(TestCliDriver.java:46) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.internal.runners.SuiteMethod.testFromSuiteMethod(SuiteMethod.java:34) at org.junit.internal.runners.SuiteMethod.<init>(SuiteMethod.java:23) at org.junit.internal.builders.SuiteMethodBuilder.runnerForClass(SuiteMethodBuilder.java:14) at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57) at 
org.junit.internal.builders.AllDefaultPossibilitiesBuilder.runnerForClass(AllDefaultPossibilitiesBuilder.java:29) at org.junit.runners.model.RunnerBuilder.safeRunnerForClass(RunnerBuilder.java:57) at org.junit.internal.requests.ClassRequest.getRunner(ClassRequest.java:24) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:262) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) at
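The noise HIVE-6538 complains about comes from a lookup where "function does not exist" is an expected answer during test setup, yet the miss surfaces as a thrown exception that a retrying wrapper logs at ERROR with a full stack trace. A minimal sketch of the general remedy (hypothetical code, not the actual Hive patch) is to offer a non-throwing lookup for callers where absence is normal, so there is nothing for a logging wrapper to dump:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch, not Hive's fix: two lookup styles over the same
// registry. The throwing style mirrors a metastore getFunction call that
// raises (and gets logged at ERROR by a retry wrapper) on a miss; the
// Optional style treats a miss as a normal, quiet result.
public class QuietLookup {
    static final Map<String, String> registry = new HashMap<>();

    // Throwing API: a miss becomes an exception, which wrapper layers
    // tend to log with a full stack trace.
    static String getFunctionOrThrow(String name) {
        String f = registry.get(name);
        if (f == null) {
            throw new IllegalStateException("Function " + name + " does not exist");
        }
        return f;
    }

    // Quiet API: a miss is an empty Optional, so nothing is thrown and
    // nothing lands in the test logs.
    static Optional<String> findFunction(String name) {
        return Optional.ofNullable(registry.get(name));
    }
}
```

Test setup that probes for `qtest_get_java_boolean` before dropping it would use the quiet form; the throwing form stays for callers where a missing function is a real error.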
[jira] [Created] (HIVE-6619) Stats inaccurate for auto_join32.q
Laljo John Pullokkaran created HIVE-6619: Summary: Stats inaccurate for auto_join32.q Key: HIVE-6619 URL: https://issues.apache.org/jira/browse/HIVE-6619 Project: Hive Issue Type: Bug Components: Statistics Reporter: Laljo John Pullokkaran Assignee: Prasanth J auto_join32.q unit test fails for hadoop2. Seems like stats have changed. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Timeline for the Hive 0.13 release?
Can you please consider the following: https://issues.apache.org/jira/browse/HIVE-6602 (committed to trunk), https://issues.apache.org/jira/browse/HIVE-6512, https://issues.apache.org/jira/browse/HIVE-6068, https://issues.apache.org/jira/browse/HIVE-6580. Most of them are bug fixes. Thanks, --Vaibhav

On Tue, Mar 11, 2014 at 8:39 AM, Harish Butani hbut...@hortonworks.com wrote: yes sure.

On Mar 10, 2014, at 3:55 PM, Gopal V gop...@apache.org wrote: Can I add HIVE-6518 as well to the merge queue on https://cwiki.apache.org/confluence/display/Hive/Hive+0.13+release+status It is a relatively simple OOM safety patch to vectorized group-by. Tests pass locally for vec group-by, but the pre-commit tests haven't fired even though it's been PA for a while now. Cheers, Gopal

-- CONFIDENTIALITY NOTICE NOTICE: This message is intended for the use of the individual or entity to which it is addressed and may contain information that is confidential, privileged and exempt from disclosure under applicable law. If the reader of this message is not the intended recipient, you are hereby notified that any printing, copying, dissemination, distribution, disclosure or forwarding of this communication is strictly prohibited. If you have received this communication in error, please contact the sender immediately and delete it from your system. Thank You.
[jira] [Commented] (HIVE-6558) HiveServer2 Plain SASL authentication broken after hadoop 2.3 upgrade
[ https://issues.apache.org/jira/browse/HIVE-6558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930796#comment-13930796 ] Hive QA commented on HIVE-6558: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633729/HIVE-6558.2.patch {color:green}SUCCESS:{color} +1 5380 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1702/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1702/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12633729 HiveServer2 Plain SASL authentication broken after hadoop 2.3 upgrade - Key: HIVE-6558 URL: https://issues.apache.org/jira/browse/HIVE-6558 Project: Hive Issue Type: Bug Components: Authentication, HiveServer2 Affects Versions: 0.13.0 Reporter: Prasad Mujumdar Assignee: Prasad Mujumdar Priority: Blocker Attachments: HIVE-6558.2.patch, HIVE-6558.2.patch Java only includes Plain SASL client and not server. Hence HiveServer2 includes a Plain SASL server implementation. Now Hadoop has its own Plain SASL server [HADOOP-9020|https://issues.apache.org/jira/browse/HADOOP-9020] which is part of Hadoop 2.3 [release|http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/releasenotes.html]. The two servers use different Sasl callbacks and the servers are registered in java.security.Provider via static code. As a result the HiveServer2 instance could be using Hadoop's Plain SASL server which breaks the authentication. -- This message was sent by Atlassian JIRA (v6.2#6252)
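The conflict described in HIVE-6558 boils down to two libraries each statically registering a PLAIN SaslServerFactory with java.security, so whichever provider the PLAIN lookup hits first wins. The sketch below is illustrative only: the provider and factory class names are invented, not Hive's or Hadoop's actual classes; only the `SaslServerFactory.PLAIN` registration key follows the real JCA convention.

```java
import java.security.Provider;
import java.security.Security;

// Illustrative model of the HIVE-6558 clash: two components both register a
// PLAIN SaslServerFactory; lookup order decides whose implementation is used.
public class PlainSaslConflictDemo {
    static final class HivePlainProvider extends Provider {
        HivePlainProvider() {
            super("HiveSaslPlain", 1.0, "Hive-side PLAIN SASL server (illustrative)");
            // Factory class name is a placeholder, not a real Hive class.
            put("SaslServerFactory.PLAIN", "com.example.HivePlainFactory");
        }
    }

    static final class HadoopPlainProvider extends Provider {
        HadoopPlainProvider() {
            super("HadoopSaslPlain", 1.0, "Hadoop-side PLAIN SASL server (illustrative)");
            put("SaslServerFactory.PLAIN", "com.example.HadoopPlainFactory");
        }
    }

    // Returns the name of the first installed provider offering PLAIN,
    // i.e. the one Sasl.createSaslServer("PLAIN", ...) would reach first.
    public static String firstPlainProvider() {
        for (Provider p : Security.getProviders()) {
            if (p.getProperty("SaslServerFactory.PLAIN") != null) {
                return p.getName();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // In the real bug these additions happen in the two libraries'
        // static initializers, so the winner depends on class-load order.
        Security.addProvider(new HivePlainProvider());
        Security.addProvider(new HadoopPlainProvider());
        System.out.println("PLAIN served by: " + firstPlainProvider());
    }
}
```

The stock JDK ships a PLAIN SASL client but no PLAIN server factory, which is why registration order between the two add-on providers fully determines the behaviour.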
[jira] [Updated] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.
[ https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shivaraju Gowda updated HIVE-6486: -- Attachment: (was: Hive_011_Support-Subject_doAS.patch) Support secure Subject.doAs() in HiveServer2 JDBC client. - Key: HIVE-6486 URL: https://issues.apache.org/jira/browse/HIVE-6486 Project: Hive Issue Type: Improvement Components: Authentication, HiveServer2, JDBC Affects Versions: 0.11.0, 0.12.0 Reporter: Shivaraju Gowda Assignee: Shivaraju Gowda Fix For: 0.13.0 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, TestHive_SujectDoAs.java HIVE-5155 addresses the problem of Kerberos authentication in a multi-user middleware server using a proxy user. In this mode the principal used by the middleware server has privileges to impersonate selected users in Hive/Hadoop. This enhancement is to support Subject.doAs() authentication in the Hive JDBC layer so that the end user's Kerberos Subject is passed through by the middleware server. With this improvement there won't be any additional setup in the server to grant proxy privileges to some users, and there won't be a need to specify a proxy user in the JDBC client. This version should also be more secure since it won't require principals with the privileges to impersonate other users in the Hive/Hadoop setup. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.
[ https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shivaraju Gowda updated HIVE-6486: -- Attachment: (was: TestHive_SujectDoAs.java) Support secure Subject.doAs() in HiveServer2 JDBC client. - Key: HIVE-6486 URL: https://issues.apache.org/jira/browse/HIVE-6486 Project: Hive Issue Type: Improvement Components: Authentication, HiveServer2, JDBC Affects Versions: 0.11.0, 0.12.0 Reporter: Shivaraju Gowda Assignee: Shivaraju Gowda Fix For: 0.13.0 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, HIVE-6486_Hive0.11.patch, TestCase_HIVE-6486.java HIVE-5155 addresses the problem of Kerberos authentication in a multi-user middleware server using a proxy user. In this mode the principal used by the middleware server has privileges to impersonate selected users in Hive/Hadoop. This enhancement is to support Subject.doAs() authentication in the Hive JDBC layer so that the end user's Kerberos Subject is passed through by the middleware server. With this improvement there won't be any additional setup in the server to grant proxy privileges to some users, and there won't be a need to specify a proxy user in the JDBC client. This version should also be more secure since it won't require principals with the privileges to impersonate other users in the Hive/Hadoop setup. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6486) Support secure Subject.doAs() in HiveServer2 JDBC client.
[ https://issues.apache.org/jira/browse/HIVE-6486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shivaraju Gowda updated HIVE-6486: -- Attachment: TestCase_HIVE-6486.java Support secure Subject.doAs() in HiveServer2 JDBC client. - Key: HIVE-6486 URL: https://issues.apache.org/jira/browse/HIVE-6486 Project: Hive Issue Type: Improvement Components: Authentication, HiveServer2, JDBC Affects Versions: 0.11.0, 0.12.0 Reporter: Shivaraju Gowda Assignee: Shivaraju Gowda Fix For: 0.13.0 Attachments: HIVE-6486.1.patch, HIVE-6486.2.patch, HIVE-6486.3.patch, HIVE-6486_Hive0.11.patch, TestCase_HIVE-6486.java HIVE-5155 addresses the problem of Kerberos authentication in a multi-user middleware server using a proxy user. In this mode the principal used by the middleware server has privileges to impersonate selected users in Hive/Hadoop. This enhancement is to support Subject.doAs() authentication in the Hive JDBC layer so that the end user's Kerberos Subject is passed through by the middleware server. With this improvement there won't be any additional setup in the server to grant proxy privileges to some users, and there won't be a need to specify a proxy user in the JDBC client. This version should also be more secure since it won't require principals with the privileges to impersonate other users in the Hive/Hadoop setup. -- This message was sent by Atlassian JIRA (v6.2#6252)
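The usage pattern HIVE-6486 enables can be sketched as follows: the middleware server already holds the end user's authenticated Kerberos Subject and simply runs the JDBC work inside Subject.doAs(), so no proxy-user setup is needed. The JDBC URL in the comment is a placeholder; see the attached TestCase_HIVE-6486.java for the actual client code.

```java
import java.security.PrivilegedExceptionAction;
import javax.security.auth.Subject;

// Sketch of the Subject.doAs() pattern from HIVE-6486. The Subject here is
// empty for demonstration; in a middleware server it would come from the
// end user's JAAS/Kerberos login.
public class SubjectDoAsSketch {
    public static <T> T runAs(Subject userSubject, PrivilegedExceptionAction<T> work)
            throws Exception {
        // Everything inside 'work' runs with userSubject bound as the current
        // Subject, so a Kerberos-aware JDBC driver can pick up its credentials.
        return Subject.doAs(userSubject, work);
    }

    public static void main(String[] args) throws Exception {
        Subject endUser = new Subject(); // in reality: populated by a Kerberos login
        String status = runAs(endUser, () -> {
            // Placeholder for the real JDBC call, e.g.:
            // DriverManager.getConnection("jdbc:hive2://host:10000/default;principal=...")
            return "connected as end user";
        });
        System.out.println(status);
    }
}
```

The point of the improvement is that the server-side principal never needs impersonation privileges: each connection carries the end user's own credentials.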
[jira] [Commented] (HIVE-6147) Support avro data stored in HBase columns
[ https://issues.apache.org/jira/browse/HIVE-6147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930813#comment-13930813 ] Xuefu Zhang commented on HIVE-6147: --- [~swarnim] I'm glad that you have the principle of code reuse in mind. I only browsed the patch, and spotted HiveSerdeHelper.getSchemaFromFS(), which is seemingly for the same purpose as AvroSerdeUtils.getSchemaFromFS() is. This might be coincidental. No big deal. Support avro data stored in HBase columns - Key: HIVE-6147 URL: https://issues.apache.org/jira/browse/HIVE-6147 Project: Hive Issue Type: Bug Components: HBase Handler Affects Versions: 0.12.0 Reporter: Swarnim Kulkarni Assignee: Swarnim Kulkarni Attachments: HIVE-6147.1.patch.txt, HIVE-6147.2.patch.txt, HIVE-6147.3.patch.txt, HIVE-6147.3.patch.txt, HIVE-6147.4.patch.txt, HIVE-6147.5.patch.txt Presently, the HBase Hive integration supports querying only primitive data types in columns. It would be nice to be able to store and query Avro objects in HBase columns by making them visible as structs to Hive. This will allow Hive to perform ad hoc analysis of HBase data which can be deeply structured. -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Timeline for the Hive 0.13 release?
ok with the first 3. HIVE-6068 doesn’t have a patch yet. Can this be deferred?

On Mar 11, 2014, at 12:06 PM, Vaibhav Gumashta vgumas...@hortonworks.com wrote: Can you please consider the following: https://issues.apache.org/jira/browse/HIVE-6602 (committed to trunk), https://issues.apache.org/jira/browse/HIVE-6512, https://issues.apache.org/jira/browse/HIVE-6068, https://issues.apache.org/jira/browse/HIVE-6580. Most of them are bug fixes. Thanks, --Vaibhav

On Tue, Mar 11, 2014 at 8:39 AM, Harish Butani hbut...@hortonworks.com wrote: yes sure.

On Mar 10, 2014, at 3:55 PM, Gopal V gop...@apache.org wrote: Can I add HIVE-6518 as well to the merge queue on https://cwiki.apache.org/confluence/display/Hive/Hive+0.13+release+status It is a relatively simple OOM safety patch to vectorized group-by. Tests pass locally for vec group-by, but the pre-commit tests haven't fired even though it's been PA for a while now. Cheers, Gopal
Re: Review Request 18065: HIVE-6024 Load data local inpath unnecessarily creates a copy task
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/18065/ --- (Updated March 11, 2014, 8:37 p.m.) Review request for hive and Ashutosh Chauhan. Changes --- Addressed test failure, Bugs: HIVE-6024 https://issues.apache.org/jira/browse/HIVE-6024 Repository: hive-git Description --- Excerpt from the JIRA: Load data command creates an additional copy task only when its loading from local It doesn't create this additional copy task while loading from DFS though. Diffs (updated) - itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/history/TestHiveHistory.java 8beef09 ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java a190155 ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java e10bdb4 ql/src/java/org/apache/hadoop/hive/ql/metadata/Table.java 8318be1 ql/src/java/org/apache/hadoop/hive/ql/parse/LoadSemanticAnalyzer.java 3dd0f6f ql/src/java/org/apache/hadoop/hive/ql/plan/MoveWork.java 407450e ql/src/test/org/apache/hadoop/hive/ql/exec/TestExecDriver.java 5991aae ql/src/test/queries/clientpositive/load_local_dir_test.q PRE-CREATION ql/src/test/results/clientpositive/input4.q.out 9b169f9 ql/src/test/results/clientpositive/load_local_dir_test.q.out PRE-CREATION ql/src/test/results/clientpositive/stats11.q.out ce1197e ql/src/test/results/clientpositive/stats3.q.out a14e449 Diff: https://reviews.apache.org/r/18065/diff/ Testing --- Ran some existing q tests with LOAD DATA LOCAL INPATH. Thanks, Mohammad Islam
[jira] [Updated] (HIVE-6024) Load data local inpath unnecessarily creates a copy task
[ https://issues.apache.org/jira/browse/HIVE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-6024: --- Status: Open (was: Patch Available) Load data local inpath unnecessarily creates a copy task Key: HIVE-6024 URL: https://issues.apache.org/jira/browse/HIVE-6024 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Ashutosh Chauhan Assignee: Mohammad Kamrul Islam Attachments: HIVE-6024.1.patch, HIVE-6024.2.patch, HIVE-6024.3.patch, HIVE-6024.4.patch, HIVE-6024.5.patch Load data command creates an additional copy task only when its loading from {{local}} It doesn't create this additional copy task while loading from DFS though. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6024) Load data local inpath unnecessarily creates a copy task
[ https://issues.apache.org/jira/browse/HIVE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mohammad Kamrul Islam updated HIVE-6024: Attachment: HIVE-6024.5.patch Addressed failed test cases. Load data local inpath unnecessarily creates a copy task Key: HIVE-6024 URL: https://issues.apache.org/jira/browse/HIVE-6024 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Ashutosh Chauhan Assignee: Mohammad Kamrul Islam Attachments: HIVE-6024.1.patch, HIVE-6024.2.patch, HIVE-6024.3.patch, HIVE-6024.4.patch, HIVE-6024.5.patch Load data command creates an additional copy task only when its loading from {{local}} It doesn't create this additional copy task while loading from DFS though. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6024) Load data local inpath unnecessarily creates a copy task
[ https://issues.apache.org/jira/browse/HIVE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mohammad Kamrul Islam updated HIVE-6024: Attachment: HIVE-6024.6.patch Load data local inpath unnecessarily creates a copy task Key: HIVE-6024 URL: https://issues.apache.org/jira/browse/HIVE-6024 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Ashutosh Chauhan Assignee: Mohammad Kamrul Islam Attachments: HIVE-6024.1.patch, HIVE-6024.2.patch, HIVE-6024.3.patch, HIVE-6024.4.patch, HIVE-6024.5.patch, HIVE-6024.5.patch, HIVE-6024.6.patch Load data command creates an additional copy task only when its loading from {{local}} It doesn't create this additional copy task while loading from DFS though. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6024) Load data local inpath unnecessarily creates a copy task
[ https://issues.apache.org/jira/browse/HIVE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mohammad Kamrul Islam updated HIVE-6024: Attachment: HIVE-6024.5.patch replacing with the intended patch. Load data local inpath unnecessarily creates a copy task Key: HIVE-6024 URL: https://issues.apache.org/jira/browse/HIVE-6024 Project: Hive Issue Type: Improvement Components: Query Processor Reporter: Ashutosh Chauhan Assignee: Mohammad Kamrul Islam Attachments: HIVE-6024.1.patch, HIVE-6024.2.patch, HIVE-6024.3.patch, HIVE-6024.4.patch, HIVE-6024.5.patch, HIVE-6024.5.patch Load data command creates an additional copy task only when its loading from {{local}} It doesn't create this additional copy task while loading from DFS though. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6620) UDF printf doesn't take either CHAR or VARCHAR as the first argument
[ https://issues.apache.org/jira/browse/HIVE-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xuefu Zhang updated HIVE-6620: -- Summary: UDF printf doesn't take either CHAR or VARCHAR as the first argument (was: UDF printf doesn't take CHAR and VARCHAR as the first argument) UDF printf doesn't take either CHAR or VARCHAR as the first argument Key: HIVE-6620 URL: https://issues.apache.org/jira/browse/HIVE-6620 Project: Hive Issue Type: Bug Components: UDF Affects Versions: 0.12.0 Reporter: Xuefu Zhang Assignee: Xuefu Zhang
{code}
hive> desc vc;
OK
c	char(5)	None
vc	varchar(7)	None
s	string	None
hive> select printf(c) from vc;
FAILED: SemanticException [Error 10016]: Line 1:14 Argument type mismatch 'c': Argument 1 of function PRINTF must be string, but char(5) was found.
{code}
However, if the argument is string type, the query runs successfully. -- This message was sent by Atlassian JIRA (v6.2#6252)
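A plausible shape for the fix is relaxing the analyzer's "must be string" check to accept the whole string family (string, char, varchar). The toy model below illustrates that idea only; Hive's real check operates on ObjectInspectors, not type-name strings, so all names here are simplifications.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative model of the HIVE-6620 argument check: accept any member of
// the string family as printf's format argument, not just plain 'string'.
public class PrintfArgCheck {
    private static final List<String> STRING_FAMILY =
            Arrays.asList("string", "char", "varchar");

    // Strips a length parameter, e.g. "char(5)" -> "char".
    static String baseType(String typeName) {
        int paren = typeName.indexOf('(');
        return paren < 0 ? typeName : typeName.substring(0, paren);
    }

    public static boolean acceptableFormatArg(String typeName) {
        return STRING_FAMILY.contains(baseType(typeName));
    }

    public static void main(String[] args) {
        System.out.println(acceptableFormatArg("char(5)"));    // accepted after the fix
        System.out.println(acceptableFormatArg("varchar(7)")); // accepted after the fix
        System.out.println(acceptableFormatArg("int"));        // still rejected
    }
}
```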
[jira] [Resolved] (HIVE-6509) Error in metadata: MetaException(message:java.lang.IllegalStateException: Can't overwrite cause)
[ https://issues.apache.org/jira/browse/HIVE-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan resolved HIVE-6509. Resolution: Invalid I tried this on latest trunk and I was able to get it to work. Please reopen if you can repro and provide a test-case. Error in metadata: MetaException(message:java.lang.IllegalStateException: Can't overwrite cause) Key: HIVE-6509 URL: https://issues.apache.org/jira/browse/HIVE-6509 Project: Hive Issue Type: Task Reporter: Rishabh Bhardwaj Labels: newbie I have created a external table and when I provide the location of the data for this table I get the following error: FAILED: Error in metadata: MetaException(message:java.lang.IllegalStateException: Can't overwrite cause) FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask Also I am able to load the same file using PIG Script using the PigStorage() loader function. I have the following permissions on the file: -rw-rw-r-- and on the folder where this file resides (Giving the path of this folder in location in the query ) : drwxrwxr-x What can be the cause for this and how to correct this error ? -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6623) Add owner tag to ptest2 created instances
Brock Noland created HIVE-6623: -- Summary: Add owner tag to ptest2 created instances Key: HIVE-6623 URL: https://issues.apache.org/jira/browse/HIVE-6623 Project: Hive Issue Type: Bug Reporter: Brock Noland We have a new requirement to have an owner tag on instances. We need to change ptest2 to support this. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6623) Add owner tag to ptest2 created instances
[ https://issues.apache.org/jira/browse/HIVE-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13930987#comment-13930987 ] Brock Noland commented on HIVE-6623: FYI [~szehon] Add owner tag to ptest2 created instances --- Key: HIVE-6623 URL: https://issues.apache.org/jira/browse/HIVE-6623 Project: Hive Issue Type: Bug Reporter: Brock Noland We have a new requirement to have an owner tag on instances. We need to change ptest2 to support this. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6562) Protection from exceptions in ORC predicate evaluation
[ https://issues.apache.org/jira/browse/HIVE-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931013#comment-13931013 ] Sergey Shelukhin commented on HIVE-6562: minor comment on RB. Does any test (I just skimmed them) actually test the new path w/exception? Protection from exceptions in ORC predicate evaluation -- Key: HIVE-6562 URL: https://issues.apache.org/jira/browse/HIVE-6562 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Prasanth J Assignee: Prasanth J Labels: orcfile Attachments: HIVE-6562.1.patch, HIVE-6562.2.patch ORC evaluates predicate expressions to select row groups that satisfy the predicate condition. There can be exceptions (mostly ClassCastException) when the data types of the predicate constant and the min/max values differ. To avoid this, the patch catches any such exception and provides a default behaviour, i.e. selecting the row group. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6624) Support ExprNodeNullDesc in vectorized mode.
Jitendra Nath Pandey created HIVE-6624: -- Summary: Support ExprNodeNullDesc in vectorized mode. Key: HIVE-6624 URL: https://issues.apache.org/jira/browse/HIVE-6624 Project: Hive Issue Type: Bug Reporter: Jitendra Nath Pandey Support ExprNodeNullDesc in vectorized mode. An example where this shows up in the plan: case when a 0 then b else null end TPCDS query 73 has an expression like the above. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (HIVE-6624) Support ExprNodeNullDesc in vectorized mode.
[ https://issues.apache.org/jira/browse/HIVE-6624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey reassigned HIVE-6624: -- Assignee: Jitendra Nath Pandey Support ExprNodeNullDesc in vectorized mode. Key: HIVE-6624 URL: https://issues.apache.org/jira/browse/HIVE-6624 Project: Hive Issue Type: Bug Reporter: Jitendra Nath Pandey Assignee: Jitendra Nath Pandey Support ExprNodeNullDesc in vectorized mode. An example where this shows up in the plan: case when a 0 then b else null end TPCDS query 73 has an expression like the above. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6562) Protection from exceptions in ORC predicate evaluation
[ https://issues.apache.org/jira/browse/HIVE-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931021#comment-13931021 ] Prasanth J commented on HIVE-6562: -- Yes. testPredEvalWithDateStats() has some invalid cases that should throw. Protection from exceptions in ORC predicate evaluation -- Key: HIVE-6562 URL: https://issues.apache.org/jira/browse/HIVE-6562 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Prasanth J Assignee: Prasanth J Labels: orcfile Attachments: HIVE-6562.1.patch, HIVE-6562.2.patch ORC evaluates predicate expressions to select row groups that satisfy predicate condition. There can be exceptions (mostly ClassCastException) when data types of predicate constant and min/max values are different. To avoid this patch catches any such exception and provides a default behaviour i.e; selecting the row group. -- This message was sent by Atlassian JIRA (v6.2#6252)
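The defensive behaviour described in HIVE-6562 can be modeled as: evaluate the predicate against the row group's min/max statistics, and if evaluation throws (e.g. a ClassCastException from mismatched types), keep the row group rather than failing the query. The class and method names below are illustrative; ORC's real implementation lives in its SearchArgument evaluation, not in this shape.

```java
import java.util.function.Supplier;

// Model of "catch exceptions during predicate evaluation and default to
// selecting the row group" -- the safe choice, since a row group that cannot
// be proven irrelevant must still be read.
public class SafePredicateEval {
    public static boolean selectRowGroup(Supplier<Boolean> predicateOnStats) {
        try {
            return predicateOnStats.get();
        } catch (RuntimeException e) {
            // Evaluation failed (e.g. ClassCastException between the predicate
            // constant and the stored min/max types): keep the row group.
            return true;
        }
    }

    public static void main(String[] args) {
        Object min = "2014-03-11"; // stats value stored as a String
        // Casting the String stat to Number throws ClassCastException,
        // which is caught, so the row group is still selected.
        System.out.println(selectRowGroup(() -> ((Number) min).intValue() > 0));
        // Clean evaluations behave normally: skip only when provably false.
        System.out.println(selectRowGroup(() -> 5 > 3));
        System.out.println(selectRowGroup(() -> 1 > 3));
    }
}
```

Defaulting to true trades a little extra I/O for correctness: a skipped row group can silently drop matching rows, while an unnecessarily read one only costs time.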
[jira] [Commented] (HIVE-6578) Use ORC file footer statistics for analyze command
[ https://issues.apache.org/jira/browse/HIVE-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931070#comment-13931070 ] Sergey Shelukhin commented on HIVE-6578: some comments on RB Use ORC file footer statistics for analyze command -- Key: HIVE-6578 URL: https://issues.apache.org/jira/browse/HIVE-6578 Project: Hive Issue Type: New Feature Affects Versions: 0.13.0 Reporter: Prasanth J Assignee: Prasanth J Labels: orcfile Attachments: HIVE-6578.1.patch ORC provides file level statistics which can be used in analyze partialscan and noscan cases to compute basic statistics like number of rows, number of files, total file size and raw data size. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6559) sourcing txn-script from schema script results in failure for mysql & oracle
[ https://issues.apache.org/jira/browse/HIVE-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931087#comment-13931087 ] Hive QA commented on HIVE-6559: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12633741/HIVE-6559.patch {color:green}SUCCESS:{color} +1 5377 tests passed Test results: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1704/testReport Console output: http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1704/console Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12633741 sourcing txn-script from schema script results in failure for mysql oracle Key: HIVE-6559 URL: https://issues.apache.org/jira/browse/HIVE-6559 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.0 Reporter: Ashutosh Chauhan Assignee: Alan Gates Fix For: 0.13.0 Attachments: HIVE-6559.patch On mysql, I got: ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' SOURCE hive-txn-schem' at line 1 On Oracle, I got: SP2-0310: unable to open file hive-txn-schema-0.13.0.oracle.sql -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6060) Define API for RecordUpdater and UpdateReader
[ https://issues.apache.org/jira/browse/HIVE-6060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931102#comment-13931102 ] Prasanth J commented on HIVE-6060: -- [~owen.omalley] HIVE-6578 added partialscan and noscan support in the analyze statement for ORC files. When an analyze command with partialscan or noscan is executed, each partition directory is iterated, creating ORC readers for the files under each directory. Basic statistics like number of rows, file size, and raw data size are computed by reading stats from the ORC file footer. How do the HIVE-5317 and HIVE-6060 changes affect HIVE-6578's way of stats gathering? Define API for RecordUpdater and UpdateReader - Key: HIVE-6060 URL: https://issues.apache.org/jira/browse/HIVE-6060 Project: Hive Issue Type: Sub-task Reporter: Owen O'Malley Assignee: Owen O'Malley Attachments: HIVE-6060.patch, acid-io.patch, h-5317.patch, h-5317.patch, h-5317.patch, h-6060.patch, h-6060.patch We need to define some new APIs for how Hive interacts with the file formats since it needs to be much richer than the current RecordReader and RecordWriter. -- This message was sent by Atlassian JIRA (v6.2#6252)
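The stats gathering described in the comment above amounts to summing per-file footer statistics instead of scanning data. The toy model below shows only that aggregation; the field and class names are illustrative, and in the real code the numbers come from ORC's Reader API for each file in the partition directory.

```java
import java.util.Arrays;
import java.util.List;

// Toy model of "analyze ... noscan/partialscan" for ORC: every file footer
// already records row count, file size, and raw data size, so partition-level
// basic stats are just sums over the footers.
public class FooterStatsDemo {
    static final class FileFooter {
        final long numRows, fileSize, rawDataSize;
        FileFooter(long numRows, long fileSize, long rawDataSize) {
            this.numRows = numRows;
            this.fileSize = fileSize;
            this.rawDataSize = rawDataSize;
        }
    }

    // Returns [numRows, numFiles, totalSize, rawDataSize] for one partition.
    public static long[] aggregate(List<FileFooter> footers) {
        long rows = 0, bytes = 0, raw = 0;
        for (FileFooter f : footers) {
            rows += f.numRows;
            bytes += f.fileSize;
            raw += f.rawDataSize;
        }
        return new long[] { rows, footers.size(), bytes, raw };
    }

    public static void main(String[] args) {
        List<FileFooter> partition = Arrays.asList(
                new FileFooter(1000, 4096, 90000),
                new FileFooter(500, 2048, 45000));
        System.out.println(Arrays.toString(aggregate(partition)));
    }
}
```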
[jira] [Commented] (HIVE-6562) Protection from exceptions in ORC predicate evaluation
[ https://issues.apache.org/jira/browse/HIVE-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931248#comment-13931248 ] Sergey Shelukhin commented on HIVE-6562: lgtm Protection from exceptions in ORC predicate evaluation -- Key: HIVE-6562 URL: https://issues.apache.org/jira/browse/HIVE-6562 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Prasanth J Assignee: Prasanth J Labels: orcfile Attachments: HIVE-6562.1.patch, HIVE-6562.2.patch, HIVE-6562.3.patch ORC evaluates predicate expressions to select row groups that satisfy predicate condition. There can be exceptions (mostly ClassCastException) when data types of predicate constant and min/max values are different. To avoid this patch catches any such exception and provides a default behaviour i.e; selecting the row group. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6562) Protection from exceptions in ORC predicate evaluation
[ https://issues.apache.org/jira/browse/HIVE-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931249#comment-13931249 ] Sergey Shelukhin commented on HIVE-6562: (as in, +1) Protection from exceptions in ORC predicate evaluation -- Key: HIVE-6562 URL: https://issues.apache.org/jira/browse/HIVE-6562 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Prasanth J Assignee: Prasanth J Labels: orcfile Attachments: HIVE-6562.1.patch, HIVE-6562.2.patch, HIVE-6562.3.patch ORC evaluates predicate expressions to select row groups that satisfy predicate condition. There can be exceptions (mostly ClassCastException) when data types of predicate constant and min/max values are different. To avoid this patch catches any such exception and provides a default behaviour i.e; selecting the row group. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6562) Protection from exceptions in ORC predicate evaluation
[ https://issues.apache.org/jira/browse/HIVE-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth J updated HIVE-6562: - Attachment: HIVE-6562.2.patch Earlier patch had issues with Date conversions. Fixed them in this patch. Protection from exceptions in ORC predicate evaluation -- Key: HIVE-6562 URL: https://issues.apache.org/jira/browse/HIVE-6562 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Prasanth J Assignee: Prasanth J Labels: orcfile Attachments: HIVE-6562.1.patch, HIVE-6562.2.patch, HIVE-6562.3.patch ORC evaluates predicate expressions to select row groups that satisfy predicate condition. There can be exceptions (mostly ClassCastException) when data types of predicate constant and min/max values are different. To avoid this patch catches any such exception and provides a default behaviour i.e; selecting the row group. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6562) Protection from exceptions in ORC predicate evaluation
[ https://issues.apache.org/jira/browse/HIVE-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth J updated HIVE-6562: - Attachment: (was: HIVE-6562.2.patch) Protection from exceptions in ORC predicate evaluation -- Key: HIVE-6562 URL: https://issues.apache.org/jira/browse/HIVE-6562 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Prasanth J Assignee: Prasanth J Labels: orcfile Attachments: HIVE-6562.1.patch, HIVE-6562.2.patch, HIVE-6562.3.patch ORC evaluates predicate expressions to select row groups that satisfy predicate condition. There can be exceptions (mostly ClassCastException) when data types of predicate constant and min/max values are different. To avoid this patch catches any such exception and provides a default behaviour i.e; selecting the row group. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6559) sourcing txn-script from schema script results in failure for mysql & oracle
[ https://issues.apache.org/jira/browse/HIVE-6559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13931132#comment-13931132 ] Ashutosh Chauhan commented on HIVE-6559: +1. yeah.. we should get this in 0.13 as well. sourcing txn-script from schema script results in failure for mysql & oracle Key: HIVE-6559 URL: https://issues.apache.org/jira/browse/HIVE-6559 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.0 Reporter: Ashutosh Chauhan Assignee: Alan Gates Fix For: 0.13.0 Attachments: HIVE-6559.patch On mysql, I got: ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ' SOURCE hive-txn-schem' at line 1 On Oracle, I got: SP2-0310: unable to open file hive-txn-schema-0.13.0.oracle.sql -- This message was sent by Atlassian JIRA (v6.2#6252)
Re: Review Request 18936: HIVE-6430 MapJoin hash table has large memory overhead
On March 11, 2014, 12:30 a.m., Gopal V wrote: ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/BytesBytesMultiHashMap.java, line 195 https://reviews.apache.org/r/18936/diff/1/?file=513985#file513985line195 Quadratic probing is much nicer for collisions.
This is quadratic probing: it uses triangular numbers, which are (n+1)*n/2. It does reset to a random slot number when wrapping past the end of the hashmap; will fix that.
On March 11, 2014, 12:30 a.m., Gopal V wrote: ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/BytesBytesMultiHashMap.java, line 250 https://reviews.apache.org/r/18936/diff/1/?file=513985#file513985line250 the cmpLength != keyLength comparison - they cannot be equal if they are not byte-for-byte equal, right?
That's checked first thing in isEqual, but yes, it can be checked earlier.
On March 11, 2014, 12:30 a.m., Gopal V wrote: ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapJoinOperator.java, line 169 https://reviews.apache.org/r/18936/diff/1/?file=513994#file513994line169 why is there an init()?
This code is removed.
On March 11, 2014, 12:30 a.m., Gopal V wrote: serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinarySerDe.java, line 271 https://reviews.apache.org/r/18936/diff/1/?file=514006#file514006line271 Comment eaten up in diff?
No, it no longer returns.
- Sergey --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/18936/#review36680 --- On March 8, 2014, 12:31 a.m., Sergey Shelukhin wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/18936/ --- (Updated March 8, 2014, 12:31 a.m.) Review request for hive, Gopal V and Gunther Hagleitner.
Repository: hive-git Description --- See JIRA Diffs - common/src/java/org/apache/hadoop/hive/conf/HiveConf.java edc3d38 ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java 170e8c0 ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java 3daf7a5 ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/AbstractMapJoinTableContainer.java 8854b19 ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/BytesBytesMultiHashMap.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/HashMapWrapper.java 61545b5 ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinBytesTableContainer.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinKey.java a00aab3 ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinPersistableTableContainer.java PRE-CREATION ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinRowContainer.java 008a8db ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinTableContainer.java a8cb1ae ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinTableContainerSerDe.java 55b7415 ql/src/java/org/apache/hadoop/hive/ql/exec/tez/HashTableLoader.java 84739ee ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorMapJoinOperator.java 6ecbcf7 ql/src/test/org/apache/hadoop/hive/ql/exec/persistence/TestMapJoinEqualityTableContainer.java 65e3779 ql/src/test/org/apache/hadoop/hive/ql/exec/persistence/TestMapJoinTableContainer.java 755d783 ql/src/test/queries/clientpositive/mapjoin_mapjoin.q 1eb95f6 ql/src/test/results/clientpositive/mapjoin_mapjoin.q.out d79b984 ql/src/test/results/clientpositive/tez/mapjoin_mapjoin.q.out bc2c650 serde/src/java/org/apache/hadoop/hive/serde2/ByteStream.java 73d9b29 serde/src/java/org/apache/hadoop/hive/serde2/WriteBuffers.java PRE-CREATION serde/src/java/org/apache/hadoop/hive/serde2/columnar/LazyBinaryColumnarSerDe.java bab505e serde/src/java/org/apache/hadoop/hive/serde2/io/DateWritable.java 1f4ccdd 
serde/src/java/org/apache/hadoop/hive/serde2/io/HiveDecimalWritable.java a99c7b4 serde/src/java/org/apache/hadoop/hive/serde2/io/TimestampWritable.java 435d6c6 serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinarySerDe.java b188c3f serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryUtils.java 6c14081 Diff: https://reviews.apache.org/r/18936/diff/ Testing --- Thanks, Sergey Shelukhin
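The triangular-number probing mentioned in the review above can be sketched as follows (a minimal standalone illustration, not the actual BytesBytesMultiHashMap code; all names here are hypothetical). The i-th probe is offset from the home slot by the triangular number T(i) = i*(i+1)/2; for a table whose capacity is a power of two, this sequence visits every slot exactly once before repeating, so probing cannot cycle while empty slots remain.

```java
// Sketch of quadratic probing via triangular-number increments, as
// discussed in the HIVE-6430 review. Names are illustrative only.
public class TriangularProbeDemo {

    /** Slot examined on the i-th probe (0-based), for a power-of-two
     *  capacity; capacityMask is capacity - 1. */
    static int slot(int hash, int probe, int capacityMask) {
        int triangular = probe * (probe + 1) / 2; // T(probe)
        return (hash + triangular) & capacityMask;
    }

    /** Checks the coverage property: 'capacity' successive probes from any
     *  home slot hit 'capacity' distinct slots. */
    static boolean visitsAllSlots(int hash, int capacity) {
        boolean[] seen = new boolean[capacity];
        for (int i = 0; i < capacity; i++) {
            int s = slot(hash, i, capacity - 1);
            if (seen[s]) {
                return false; // revisited a slot before covering the table
            }
            seen[s] = true;
        }
        return true;
    }
}
```

For example, with capacity 8 and home slot 0 the probe sequence is 0, 1, 3, 6, 2, 7, 5, 4: all eight slots, each exactly once. This full-coverage property is what makes triangular increments preferable to the classic i² step, which can miss slots in power-of-two tables.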