[jira] [Commented] (HIVE-13425) Fix partition addition in MSCK REPAIR TABLE command

2016-05-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297789#comment-15297789
 ] 

Hive QA commented on HIVE-13425:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12805527/HIVE-13425.5.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 55 failed/errored test(s), 10043 tests 
executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_sortmerge_join_7.q-orc_merge9.q-tez_union_dynamic_partition.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-cte_4.q-vector_non_string_partition.q-delete_where_non_partitioned.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-explainuser_4.q-update_after_multiple_inserts.q-mapreduce2.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_interval_2.q-schema_evol_text_nonvec_mapwork_part_all_primitive.q-tez_fsstat.q-and-12-more
 - did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_constantPropagateForSubQuery
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_queries
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority2
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_minimr_broken_pipe
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorParallelism
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorWithinDagPriority
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote.org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote
org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs
org.apache.hadoop.hive.metastore.TestHiveMetaStoreStatsMerge.testStatsMerge
org.apache.hadoop.hive.metastore.TestMetaStoreEndFunctionListener.testEndFunctionListener
org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus
org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.org.apache.hadoop.hive.metastore.TestMetaStoreMetrics
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithCommas
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithUnicode
org.apache.hadoop.hive.metastore.TestPartitionNameWhitelistValidation.testAppendPartitionWithValidCharacters
org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testLockTimeout
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testWriteSetTracking6
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.hive.ql.security.TestFolderPermissions
org.apache.hadoop.hive.ql.security.TestMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener.org.apache.hadoop.hive.ql.security.TestMultiAuthorizationPreEventListener
org.apache.hadoop.hive.ql.security.TestStorageBasedClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropDatabase
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropTable
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropView
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProviderWithACL.testSimplePrivilege

[jira] [Updated] (HIVE-13566) Auto-gather column stats - phase 1

2016-05-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13566:
---
Affects Version/s: 2.0.0

> Auto-gather column stats - phase 1
> --
>
> Key: HIVE-13566
> URL: https://issues.apache.org/jira/browse/HIVE-13566
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13566.01.patch, HIVE-13566.02.patch, 
> HIVE-13566.03.patch
>
>
> This jira adds code and tests for auto-gather column stats. Golden file 
> update will be done in phase 2 - HIVE-11160



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13566) Auto-gather column stats - phase 1

2016-05-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13566:
---
Fix Version/s: 2.1.0

> Auto-gather column stats - phase 1
> --
>
> Key: HIVE-13566
> URL: https://issues.apache.org/jira/browse/HIVE-13566
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13566.01.patch, HIVE-13566.02.patch, 
> HIVE-13566.03.patch
>
>
> This jira adds code and tests for auto-gather column stats. Golden file 
> update will be done in phase 2 - HIVE-11160





[jira] [Commented] (HIVE-13566) Auto-gather column stats - phase 1

2016-05-23 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297724#comment-15297724
 ] 

Pengcheng Xiong commented on HIVE-13566:


Cannot repro any of them; pushed to master. Thanks [~ashutoshc] for the 
review.

> Auto-gather column stats - phase 1
> --
>
> Key: HIVE-13566
> URL: https://issues.apache.org/jira/browse/HIVE-13566
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13566.01.patch, HIVE-13566.02.patch, 
> HIVE-13566.03.patch
>
>
> This jira adds code and tests for auto-gather column stats. Golden file 
> update will be done in phase 2 - HIVE-11160





[jira] [Updated] (HIVE-13566) Auto-gather column stats - phase 1

2016-05-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13566:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Auto-gather column stats - phase 1
> --
>
> Key: HIVE-13566
> URL: https://issues.apache.org/jira/browse/HIVE-13566
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>  Labels: TODOC2.1
> Fix For: 2.1.0
>
> Attachments: HIVE-13566.01.patch, HIVE-13566.02.patch, 
> HIVE-13566.03.patch
>
>
> This jira adds code and tests for auto-gather column stats. Golden file 
> update will be done in phase 2 - HIVE-11160





[jira] [Commented] (HIVE-13566) Auto-gather column stats - phase 1

2016-05-23 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297723#comment-15297723
 ] 

Pengcheng Xiong commented on HIVE-13566:


{code}
Test Result (44 failures / +18)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_serde
org.apache.hive.hcatalog.listener.TestDbNotificationListener.cleanupNotifs
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_queries
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.org.apache.hadoop.hive.metastore.TestMetaStoreMetrics
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_gby_empty
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cte_3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_script_pipe
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_subquery_exists
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_casts
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_union7
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_string_funcs
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorization_decimal_date
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_union_multiinsert
org.apache.hadoop.hive.llap.daemon.impl.comparator.TestShortestJobFirstComparator.testWaitQueueComparatorWithinDagPriority
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_sample6
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_scriptfile1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_stats5
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_stats8
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_stats14
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_statsfs
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_elt
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_cbo_limit
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_12
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby10
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_cast_constant
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_14
org.apache.hadoop.hive.ql.exec.vector.expressions.TestVectorDateExpressions.testMultiThreadedVectorUDFDate
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_select_read_only_encrypted_tbl
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_complex_all
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_static
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_insert_partition_dynamic
org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForMemoryTokenStore
org.apache.hive.minikdc.TestHiveAuthFactory.testStartTokenManagerForDBTokenStore
org.apache.hive.minikdc.TestMiniHiveKdc.testLogin
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_selectindate
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_avrocountemptytbl
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_order_null
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver_encryption_join_with_different_encryption_keys
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver
{code}

> Auto-gather column stats - phase 1
> --
>
> Key: HIVE-13566
> URL: https://issues.apache.org/jira/browse/HIVE-13566
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>  Labels: TODOC2.1
> Attachments: HIVE-13566.01.patch, HIVE-13566.02.patch, 
> HIVE-13566.03.patch
>
>
> This jira adds code and tests for auto-gather column stats. Golden file 
> update will be done in phase 2 - HIVE-11160





[jira] [Updated] (HIVE-13354) Add ability to specify Compaction options per table and per request

2016-05-23 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13354:
-
Attachment: HIVE-13354.2.patch

> Add ability to specify Compaction options per table and per request
> ---
>
> Key: HIVE-13354
> URL: https://issues.apache.org/jira/browse/HIVE-13354
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13354.1.patch, 
> HIVE-13354.1.withoutSchemaChange.patch, HIVE-13354.2.patch
>
>
> Currently there are a few options that determine when automatic compaction is 
> triggered.  They are specified once for the warehouse.
> This doesn't make sense - some tables may be more important and need to be 
> compacted more often.
> We should allow specifying these on a per-table basis.
> Also, compaction is an MR job launched from within the metastore.  There is 
> currently no way to control job parameters (like memory, for example) except 
> to specify them in hive-site.xml for the metastore, which means they are 
> site-wide.
> We should add a way to specify these per table (perhaps even per compaction if 
> launched via ALTER TABLE).





[jira] [Updated] (HIVE-13354) Add ability to specify Compaction options per table and per request

2016-05-23 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13354:
-
Attachment: (was: HIVE-13354.2.patch)

> Add ability to specify Compaction options per table and per request
> ---
>
> Key: HIVE-13354
> URL: https://issues.apache.org/jira/browse/HIVE-13354
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13354.1.patch, 
> HIVE-13354.1.withoutSchemaChange.patch, HIVE-13354.2.patch
>
>
> Currently there are a few options that determine when automatic compaction is 
> triggered.  They are specified once for the warehouse.
> This doesn't make sense - some tables may be more important and need to be 
> compacted more often.
> We should allow specifying these on a per-table basis.
> Also, compaction is an MR job launched from within the metastore.  There is 
> currently no way to control job parameters (like memory, for example) except 
> to specify them in hive-site.xml for the metastore, which means they are 
> site-wide.
> We should add a way to specify these per table (perhaps even per compaction if 
> launched via ALTER TABLE).





[jira] [Commented] (HIVE-13354) Add ability to specify Compaction options per table and per request

2016-05-23 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297708#comment-15297708
 ] 

Wei Zheng commented on HIVE-13354:
--

Thanks [~ekoifman] for the review.
1. I moved the setConf call later to make it clearer.
2. You're right. "ready for cleaning" is due to the SQL failure in 
CompactionTxnHandler. After fixing the mismatched "?"s, I got the "succeeded" 
response.
3. "size4" is due to the serialization scheme of jobConf (4 being the length of 
the string "8192"). The complete output of job.get("hive.compactor.table.props") is:
{code}
11:9:totalSize4:207617:orc.compress.size4:819253:compactorthreshold.hive.compactor.delta.pct.threshold3:0.57:numRows1:711:rawDataSize1:021:COLUMN_STATS_ACCURATE22:{"BASIC_STATS":"true"}53:compactorthreshold.hive.compactor.delta.num.threshold1:48:numFiles1:421:transient_lastDdlTime10:146403755713:transactional4:true33:compactor.mapreduce.map.memory.mb4:2048
{code}
4. Deprecated the old compact() signature.
5. Fixed the mismatched number of value entries in the insert statement.
6. Removed cc_tblproperties from purgeCompactionHistory().
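For anyone puzzling over the dump in point 3: it reads as a length-prefixed encoding, apparently a leading entry count followed by alternating &lt;len&gt;:&lt;key&gt; and &lt;len&gt;:&lt;value&gt; runs, which is why "size4" appears glued to the next value's length prefix. A minimal Python sketch of a decoder for that assumed layout (an illustration only, not Hive's actual serializer):

```python
def decode_props(s):
    """Decode an (assumed) length-prefixed property dump:
    <count>:<klen>:<key><vlen>:<value>... repeated <count> times."""
    def read_len(pos):
        # Read the digits up to the next ':' and return (length, pos past ':').
        colon = s.index(":", pos)
        return int(s[pos:colon]), colon + 1

    props = {}
    count, pos = read_len(0)          # leading entry count, e.g. "11:"
    for _ in range(count):
        klen, pos = read_len(pos)     # key length, then the key itself
        key, pos = s[pos:pos + klen], pos + klen
        vlen, pos = read_len(pos)     # value length, then the value itself
        props[key], pos = s[pos:pos + vlen], pos + vlen
    return props
```

Applied to the dump above, this yields 11 properties, e.g. orc.compress.size mapping to 8192.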

> Add ability to specify Compaction options per table and per request
> ---
>
> Key: HIVE-13354
> URL: https://issues.apache.org/jira/browse/HIVE-13354
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13354.1.patch, 
> HIVE-13354.1.withoutSchemaChange.patch, HIVE-13354.2.patch
>
>
> Currently there are a few options that determine when automatic compaction is 
> triggered.  They are specified once for the warehouse.
> This doesn't make sense - some tables may be more important and need to be 
> compacted more often.
> We should allow specifying these on a per-table basis.
> Also, compaction is an MR job launched from within the metastore.  There is 
> currently no way to control job parameters (like memory, for example) except 
> to specify them in hive-site.xml for the metastore, which means they are 
> site-wide.
> We should add a way to specify these per table (perhaps even per compaction if 
> launched via ALTER TABLE).
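The general idea of the issue, per-table overrides for warehouse-wide compaction settings, can be sketched as follows. The property names are borrowed from the jobConf dump in the comment above; the "compactorthreshold." prefix lookup order and the default values here are assumptions for illustration, not Hive's actual resolution code:

```python
# Warehouse-wide defaults (hypothetical values, for illustration only).
WAREHOUSE_DEFAULTS = {
    "hive.compactor.delta.pct.threshold": "0.1",
    "hive.compactor.delta.num.threshold": "10",
}
PREFIX = "compactorthreshold."

def effective_setting(name, table_props):
    # A table property "compactorthreshold.<conf name>" overrides the
    # warehouse-wide default for that one table; otherwise fall back.
    return table_props.get(PREFIX + name, WAREHOUSE_DEFAULTS[name])
```

So a table whose properties set compactorthreshold.hive.compactor.delta.pct.threshold would be compacted on its own schedule while every other table keeps the warehouse default.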





[jira] [Updated] (HIVE-13566) Auto-gather column stats - phase 1

2016-05-23 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13566:
--
Labels: TODOC2.1  (was: )

> Auto-gather column stats - phase 1
> --
>
> Key: HIVE-13566
> URL: https://issues.apache.org/jira/browse/HIVE-13566
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>  Labels: TODOC2.1
> Attachments: HIVE-13566.01.patch, HIVE-13566.02.patch, 
> HIVE-13566.03.patch
>
>
> This jira adds code and tests for auto-gather column stats. Golden file 
> update will be done in phase 2 - HIVE-11160





[jira] [Commented] (HIVE-13282) GroupBy and select operator encounter ArrayIndexOutOfBoundsException

2016-05-23 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297647#comment-15297647
 ] 

Matt McCline commented on HIVE-13282:
-

[~vikram.dixit] I have attached a .q file where I attempt to repro the problem, 
but I can't seem to coax the optimizer into producing a Sorted Merge Bucket Map 
Join operator.

> GroupBy and select operator encounter ArrayIndexOutOfBoundsException
> 
>
> Key: HIVE-13282
> URL: https://issues.apache.org/jira/browse/HIVE-13282
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1, 2.0.0, 2.1.0
>Reporter: Vikram Dixit K
>Assignee: Matt McCline
> Attachments: smb_groupby.q, smb_groupby.q.out
>
>
> The group by and select operators run into the ArrayIndexOutOfBoundsException 
> when they incorrectly initialize themselves with tag 0 but the incoming tag 
> id is different.
> {code}
> select count(*) from
> (select rt1.id from
> (select t1.key as id, t1.value as od from tab t1 group by key, value) rt1) vt1
> join
> (select rt2.id from
> (select t2.key as id, t2.value as od from tab_part t2 group by key, value) 
> rt2) vt2
> where vt1.id=vt2.id;
> {code}
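The failure mode described above, an operator initialized for tag 0 receiving a row carrying a different tag, can be illustrated with a toy sketch (pure illustration in Python, not Hive's operator code):

```python
class ToyOperator:
    """Buffers rows per input tag. Mis-initializing with only tag 0
    makes any other incoming tag an out-of-bounds index."""
    def __init__(self, expected_tags):
        self.buffers = [[] for _ in expected_tags]

    def process(self, row, tag):
        # Raises IndexError when tag was not among the expected tags,
        # mirroring the ArrayIndexOutOfBoundsException in the report.
        self.buffers[tag].append(row)

# Initialized as if only tag 0 exists, tag 0 works fine;
# the first row the join sends with tag 1 blows up.
op = ToyOperator(expected_tags=[0])
op.process("row-a", 0)
```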





[jira] [Updated] (HIVE-13282) GroupBy and select operator encounter ArrayIndexOutOfBoundsException

2016-05-23 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13282:

Attachment: smb_groupby.q

> GroupBy and select operator encounter ArrayIndexOutOfBoundsException
> 
>
> Key: HIVE-13282
> URL: https://issues.apache.org/jira/browse/HIVE-13282
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1, 2.0.0, 2.1.0
>Reporter: Vikram Dixit K
>Assignee: Matt McCline
> Attachments: smb_groupby.q, smb_groupby.q.out
>
>
> The group by and select operators run into the ArrayIndexOutOfBoundsException 
> when they incorrectly initialize themselves with tag 0 but the incoming tag 
> id is different.
> {code}
> select count(*) from
> (select rt1.id from
> (select t1.key as id, t1.value as od from tab t1 group by key, value) rt1) vt1
> join
> (select rt2.id from
> (select t2.key as id, t2.value as od from tab_part t2 group by key, value) 
> rt2) vt2
> where vt1.id=vt2.id;
> {code}





[jira] [Updated] (HIVE-13282) GroupBy and select operator encounter ArrayIndexOutOfBoundsException

2016-05-23 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13282:

Attachment: smb_groupby.q.out

> GroupBy and select operator encounter ArrayIndexOutOfBoundsException
> 
>
> Key: HIVE-13282
> URL: https://issues.apache.org/jira/browse/HIVE-13282
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1, 2.0.0, 2.1.0
>Reporter: Vikram Dixit K
>Assignee: Matt McCline
> Attachments: smb_groupby.q, smb_groupby.q.out
>
>
> The group by and select operators run into the ArrayIndexOutOfBoundsException 
> when they incorrectly initialize themselves with tag 0 but the incoming tag 
> id is different.
> {code}
> select count(*) from
> (select rt1.id from
> (select t1.key as id, t1.value as od from tab t1 group by key, value) rt1) vt1
> join
> (select rt2.id from
> (select t2.key as id, t2.value as od from tab_part t2 group by key, value) 
> rt2) vt2
> where vt1.id=vt2.id;
> {code}





[jira] [Commented] (HIVE-13584) HBaseStorageHandler should support table pre-split

2016-05-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297634#comment-15297634
 ] 

Hive QA commented on HIVE-13584:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12805526/HIVE-13584.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 82 failed/errored test(s), 9924 tests 
executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_join1.q-schema_evol_text_vec_mapwork_part_all_complex.q-vector_complex_join.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_sortmerge_join_7.q-orc_merge9.q-tez_union_dynamic_partition.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-constprog_dpp.q-dynamic_partition_pruning.q-vectorization_10.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-explainuser_4.q-update_after_multiple_inserts.q-mapreduce2.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-order_null.q-vector_acid3.q-orc_merge10.q-and-12-more - 
did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_coalesce.q-cbo_windowing.q-tez_join.q-and-12-more - 
did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorization_13.q-auto_sortmerge_join_13.q-tez_bmj_schema_evolution.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorization_16.q-vector_decimal_round.q-orc_merge6.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-bucketsortoptimize_insert_7.q-smb_mapjoin_15.q-mapreduce1.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-groupby_grouping_id2.q-vectorization_13.q-auto_sortmerge_join_13.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-mapreduce2.q-groupby7_noskew.q-vectorization_5.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-union_top_level.q-join11.q-auto_join1.q-and-12-more - did 
not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_queries
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket6
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_num_buckets
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_reducers_power_two
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_diff_fs
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_bucketpruning1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_udf_udaf
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_column_names_with_leading_and_trailing_spaces
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge7
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_table
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_select_dummy_source
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_subquery_in
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_vector_dynpart_hashjoin_1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_aggregate_without_gby
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_ve

[jira] [Commented] (HIVE-13797) Provide a connection string example in beeline

2016-05-23 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297600#comment-15297600
 ] 

Lefty Leverenz commented on HIVE-13797:
---

Thanks for the changes, [~vihangk1]!  I can't give you a technical +1, but it 
looks good to me.

> Provide a connection string example in beeline
> --
>
> Key: HIVE-13797
> URL: https://issues.apache.org/jira/browse/HIVE-13797
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-13797.01.patch, HIVE-13797.02.patch
>
>
> It would save a bunch of googling if we could provide some examples of 
> connection strings directly to beeline help message
> Eg:
> {code}
> ./bin/beeline --help
> Usage: java org.apache.hive.cli.beeline.BeeLine 
>-uthe JDBC URL to connect to
>-r  reconnect to last saved connect url (in 
> conjunction with !save)
>-nthe username to connect as
>-pthe password to connect as
>-dthe driver class to use
>-i   script file for initialization
>-e   query that should be executed
>-f   script file that should be executed
>-w (or) --password-file   the password file to read 
> password from
>--hiveconf property=value   Use value for given property
>--hivevar name=valuehive variable name and value
>This is Hive specific settings in which 
> variables
>can be set at session level and referenced 
> in Hive
>commands or queries.
>--color=[true/false]control whether color is used for display
>--showHeader=[true/false]   show column names in query results
>--headerInterval=ROWS;  the interval between which headers are 
> displayed
>--fastConnect=[true/false]  skip building table/column list for 
> tab-completion
>--autoCommit=[true/false]   enable/disable automatic transaction commit
>--verbose=[true/false]  show verbose error messages and debug info
>--showWarnings=[true/false] display connection warnings
>--showNestedErrs=[true/false]   display nested errors
>--numberFormat=[pattern]format numbers using DecimalFormat pattern
>--force=[true/false]continue running script even after errors
>--maxWidth=MAXWIDTH the maximum width of the terminal
>--maxColumnWidth=MAXCOLWIDTHthe maximum width to use when displaying 
> columns
>--silent=[true/false]   be more silent
>--autosave=[true/false] automatically save preferences
>--outputformat=[table/vertical/csv2/tsv2/dsv/csv/tsv]  format mode for 
> result display
>Note that csv, and tsv are deprecated - 
> use csv2, tsv2 instead
>--incremental=[true/false]  Defaults to false. When set to false, the 
> entire result set
>is fetched and buffered before being 
> displayed, yielding optimal
>display column sizing. When set to true, 
> result rows are displayed
>immediately as they are fetched, yielding 
> lower latency and
>memory usage at the price of extra display 
> column padding.
>Setting --incremental=true is recommended 
> if you encounter an OutOfMemory
>on the client side (due to the fetched 
> result set size being large).
>--truncateTable=[true/false]truncate table column when it exceeds 
> length
>--delimiterForDSV=DELIMITER specify the delimiter for 
> delimiter-separated values output format (default: |)
>--isolation=LEVEL   set the transaction isolation level
>--nullemptystring=[true/false]  set to true to get historic behavior of 
> printing null as empty string
>--addlocaldriverjar=DRIVERJARNAME Add driver jar file in the beeline 
> client side
>--addlocaldrivername=DRIVERNAME Add driver name needs to be supported in 
> the beeline client side
>--showConnectedUrl=[true/false] Prompt HiveServer2s URI to which this 
> beeline connected.
>Only works for HiveServer2 cluster mode.
>--help  display this message
>  
>Example:
> 1. beeline -u jdbc:hive2://localhost:1 username password
> 2. beeline -n username -p password -u jdbc:hive2://hs2.local:10012
> {code}




[jira] [Updated] (HIVE-10176) skip.header.line.count causes values to be skipped when performing insert values

2016-05-23 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-10176:

Status: Open  (was: Patch Available)

Reported failures look related.
{code}
org.apache.hadoop.hive.ql.parse.SemanticException: Bucketed table metadata is 
not correct. Fix the metadata or don't use bucketed mapjoin, by setting 
hive.enforce.bucketmapjoin to false. The number of buckets for table 
bucket_small partition ds=2008-04-08 is 2, whereas the number of files is 1
at 
org.apache.hadoop.hive.ql.optimizer.AbstractBucketJoinProc.checkConvertBucketMapJoin(AbstractBucketJoinProc.java:290)
at 
org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.canConvertJoinToBucketMapJoin(AbstractSMBJoinProc.java:497)
at 
org.apache.hadoop.hive.ql.optimizer.AbstractSMBJoinProc.canConvertJoinToSMBJoin(AbstractSMBJoinProc.java:414)
at 
org.apache.hadoop.hive.ql.optimizer.SortedMergeJoinProc.process(SortedMergeJoinProc.java:45)
at 
org.apache.hadoop.hive.ql.lib.DefaultRuleDispatcher.dispatch(DefaultRuleDispatcher.java:90)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatchAndReturn(DefaultGraphWalker.java:105)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.dispatch(DefaultGraphWalker.java:89)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.walk(DefaultGraphWalker.java:158)
at 
org.apache.hadoop.hive.ql.lib.DefaultGraphWalker.startWalking(DefaultGraphWalker.java:120)
at 
org.apache.hadoop.hive.ql.optimizer.SortedMergeBucketMapJoinOptimizer.transform(SortedMergeBucketMapJoinOptimizer.java:109)
at 
org.apache.hadoop.hive.ql.optimizer.Optimizer.optimize(Optimizer.java:244)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10745)
at 
org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:236)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250)
at 
org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:75)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:250)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:437)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:329)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1157)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1252)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1083)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1071)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335)
at 
org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1137)
at 
org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:)
at 
org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:135)
at 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_auto_sortmerge_join_11(TestCliDriver.java:108)
{code}

> skip.header.line.count causes values to be skipped when performing insert 
> values
> 
>
> Key: HIVE-10176
> URL: https://issues.apache.org/jira/browse/HIVE-10176
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1, 1.0.0
>Reporter: Wenbo Wang
>Assignee: Vladyslav Pavlenko
> Fix For: 2.0.0
>
> Attachments: HIVE-10176.1.patch, HIVE-10176.10.patch, 
> HIVE-10176.11.patch, HIVE-10176.12.patch, HIVE-10176.13.patch, 
> HIVE-10176.14.patch, HIVE-10176.15.patch, HIVE-10176.16.patch, 
> HIVE-10176.2.patch, HIVE-10176.3.patch, HIVE-10176.4.patch, 
> HIVE-10176.5.patch, HIVE-10176.6.patch, HIVE-10176.7.patch, 
> HIVE-10176.8.patch, HIVE-10176.9.patch, data
>
>
> When inserting values into tables with TBLPROPERTIES 
> ("skip.header.line.count"="1"), the first value listed is also skipped. 
> create table test (row int, name string) TBLPROPERTIES 
> ("skip.header.line.count"="1"); 
> load data local inpath '/root/data' into table test;
> insert into table test values (1, 'a'), (2, 'b'), (3, 'c');
> (1, 'a') isn't inserted into the table. 
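
The behavior reported above is consistent with the header-skip count being applied to every row source instead of only to file loads. A minimal sketch of the intended distinction (hypothetical illustration, not Hive's reader code):

```python
def read_rows(rows, skip_header_lines, from_file):
    """Skip header lines only for file input; INSERT ... VALUES rows carry
    no header, so skipping them reproduces the bug described above."""
    if from_file:
        return rows[skip_header_lines:]
    return rows

file_rows = [("row", "name"), (1, "x")]       # first line is a header
values_rows = [(1, "a"), (2, "b"), (3, "c")]  # no header present

print(read_rows(file_rows, 1, from_file=True))     # header dropped: [(1, 'x')]
print(read_rows(values_rows, 1, from_file=False))  # all three rows kept
```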





[jira] [Commented] (HIVE-13782) Compile async query asynchronously

2016-05-23 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297574#comment-15297574
 ] 

Lefty Leverenz commented on HIVE-13782:
---

Doc note:  This adds the configuration parameter 
*hive.server2.async.exec.async.compile* to HiveConf.java, so it will need to be 
documented in the HiveServer2 section of Configuration Properties for release 
2.1.0.

* [Configuration Properties -- HiveServer2 | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-HiveServer2]

Added a TODOC2.1 label.
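
Once released, the parameter would presumably be enabled in hive-site.xml like any other HiveServer2 property; the value and description below are assumptions pending the official documentation:

```xml
<property>
  <name>hive.server2.async.exec.async.compile</name>
  <value>true</value>
  <description>Whether HiveServer2 should compile async queries asynchronously
  (assumed semantics; see the HiveServer2 configuration docs for the default).</description>
</property>
```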

> Compile async query asynchronously
> --
>
> Key: HIVE-13782
> URL: https://issues.apache.org/jira/browse/HIVE-13782
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
>  Labels: TODOC2.1
> Fix For: 2.0.1
>
> Attachments: HIVE-13782.1.patch
>
>
> Currently, when an async query is submitted to HS2, HS2 does the preparation 
> synchronously. One of the preparation steps is to compile the query, which may 
> take some time. It will be helpful to provide an option to do the compilation 
> asynchronously.
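
The improvement amounts to pushing the compile step onto a background thread and returning the operation handle immediately. A rough sketch of that pattern (names are illustrative, not HS2's actual classes):

```python
from concurrent.futures import ThreadPoolExecutor

compiler_pool = ThreadPoolExecutor(max_workers=4)

def compile_query(sql):
    # Stand-in for the potentially slow query-compilation step.
    return "plan-for:" + sql

def submit_async(sql):
    """Return a handle immediately; compilation finishes in the background."""
    return compiler_pool.submit(compile_query, sql)

handle = submit_async("select 1")
print(handle.result())  # blocks only when the caller actually needs the plan
```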





[jira] [Updated] (HIVE-13782) Compile async query asynchronously

2016-05-23 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13782:
--
Labels: TODOC2.1  (was: )

> Compile async query asynchronously
> --
>
> Key: HIVE-13782
> URL: https://issues.apache.org/jira/browse/HIVE-13782
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
>  Labels: TODOC2.1
> Fix For: 2.0.1
>
> Attachments: HIVE-13782.1.patch
>
>
> Currently, when an async query is submitted to HS2, HS2 does the preparation 
> synchronously. One of the preparation steps is to compile the query, which may 
> take some time. It will be helpful to provide an option to do the compilation 
> asynchronously.





[jira] [Commented] (HIVE-13782) Compile async query asynchronously

2016-05-23 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297568#comment-15297568
 ] 

Lefty Leverenz commented on HIVE-13782:
---

[~jxiang], the Fix Version has a typo -- it should be 2.1.0 instead of 2.0.1.

Also, would you please attach the final patch, just for the record?

> Compile async query asynchronously
> --
>
> Key: HIVE-13782
> URL: https://issues.apache.org/jira/browse/HIVE-13782
> Project: Hive
>  Issue Type: Improvement
>Reporter: Jimmy Xiang
>Assignee: Jimmy Xiang
>Priority: Minor
>  Labels: TODOC2.1
> Fix For: 2.0.1
>
> Attachments: HIVE-13782.1.patch
>
>
> Currently, when an async query is submitted to HS2, HS2 does the preparation 
> synchronously. One of the preparation steps is to compile the query, which may 
> take some time. It will be helpful to provide an option to do the compilation 
> asynchronously.





[jira] [Commented] (HIVE-13616) Investigate renaming a table without invalidating the column stats

2016-05-23 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297561#comment-15297561
 ] 

Lefty Leverenz commented on HIVE-13616:
---

Okay, thanks.

> Investigate renaming a table without invalidating the column stats
> --
>
> Key: HIVE-13616
> URL: https://issues.apache.org/jira/browse/HIVE-13616
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Fix For: 2.1.0
>
> Attachments: HIVE-13616.1.patch, HIVE-13616.2.patch
>
>
> Right now when we rename a table, we clear the column stats rather than 
> updating them (HIVE-9720), since ObjectStore uses DataNucleus (DN) to talk to 
> the DB. Investigate whether we can update the stats without rescanning 
> the whole table.
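
Conceptually, a rename only changes the key under which the stats live, not the data they describe, so re-keying should be enough. A toy sketch of the idea (not the ObjectStore implementation):

```python
def rename_table(stats_by_table, old_name, new_name):
    """Re-key the column stats to the new table name instead of dropping them."""
    stats_by_table[new_name] = stats_by_table.pop(old_name)
    return stats_by_table

stats = {"t1": {"col_a": {"ndv": 100, "num_nulls": 0}}}
renamed = rename_table(stats, "t1", "t2")
print("t1" in renamed, "t2" in renamed)  # False True
```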





[jira] [Commented] (HIVE-13736) View's input/output formats are TEXT by default

2016-05-23 Thread Chaoyu Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297560#comment-15297560
 ] 

Chaoyu Tang commented on HIVE-13736:


+1

> View's input/output formats are TEXT by default
> ---
>
> Key: HIVE-13736
> URL: https://issues.apache.org/jira/browse/HIVE-13736
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Pavas Garg
>Assignee: Yongzhi Chen
>Priority: Minor
> Attachments: HIVE-13736.1.patch
>
>
> Feature request: make Hive views' input/output formats TEXT by default in 
> order to help third-party compatibility.





[jira] [Resolved] (HIVE-11045) ArrayIndexOutOfBoundsException with Hive 1.2.0 and Tez 0.7.0

2016-05-23 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline resolved HIVE-11045.
-
Resolution: Duplicate

HIVE-13282

> ArrayIndexOutOfBoundsException with Hive 1.2.0 and Tez 0.7.0
> 
>
> Key: HIVE-11045
> URL: https://issues.apache.org/jira/browse/HIVE-11045
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.0
> Environment: Hive 1.2.0, HDP 2.2, Hadoop 2.6, Tez 0.7.0
>Reporter: Soundararajan Velu
>Assignee: Matt McCline
>
>  TaskAttempt 3 failed, info=[Error: Failure while running 
> task:java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row (tag=0) 
> {"key":{"_col0":4457890},"value":{"_col0":null,"_col1":null,"_col2":null,"_col3":null,"_col4":null,"_col5":null,"_col6":null,"_col7":null,"_col8":null,"_col9":null,"_col10":null,"_col11":null,"_col12":null,"_col13":null,"_col14":null,"_col15":null,"_col16":null,"_col17":"fkl_shipping_b2c","_col18":null,"_col19":null,"_col20":null,"_col21":null}}
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:171)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:345)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row (tag=0) 
> {"key":{"_col0":4457890},"value":{"_col0":null,"_col1":null,"_col2":null,"_col3":null,"_col4":null,"_col5":null,"_col6":null,"_col7":null,"_col8":null,"_col9":null,"_col10":null,"_col11":null,"_col12":null,"_col13":null,"_col14":null,"_col15":null,"_col16":null,"_col17":"fkl_shipping_b2c","_col18":null,"_col19":null,"_col20":null,"_col21":null}}
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:302)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:249)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:148)
> ... 14 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row (tag=0) 
> {"key":{"_col0":4457890},"value":{"_col0":null,"_col1":null,"_col2":null,"_col3":null,"_col4":null,"_col5":null,"_col6":null,"_col7":null,"_col8":null,"_col9":null,"_col10":null,"_col11":null,"_col12":null,"_col13":null,"_col14":null,"_col15":null,"_col16":null,"_col17":"fkl_shipping_b2c","_col18":null,"_col19":null,"_col20":null,"_col21":null}}
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:370)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:292)
> ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> Hive Runtime Error while processing row (tag=1) 
> {"key":{"_col0":6417306,"_col1":{0:{"_col0":"2014-08-01 
> 02:14:02"}}},"value":{"_col0":"2014-08-01 
> 02:14:02","_col1":20140801,"_col2":"sc_jarvis_b2c","_col3":"action_override","_col4":"WITHIN_GRACE_PERIOD","_col5":"policy_override"}}
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.fetchOneRow(CommonMergeJoinOperator.java:413)
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.fetchNextGroup(CommonMergeJoinOperator.java:381)
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOp

[jira] [Reopened] (HIVE-11045) ArrayIndexOutOfBoundsException with Hive 1.2.0 and Tez 0.7.0

2016-05-23 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reopened HIVE-11045:
-
  Assignee: Matt McCline

> ArrayIndexOutOfBoundsException with Hive 1.2.0 and Tez 0.7.0
> 
>
> Key: HIVE-11045
> URL: https://issues.apache.org/jira/browse/HIVE-11045
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.0
> Environment: Hive 1.2.0, HDP 2.2, Hadoop 2.6, Tez 0.7.0
>Reporter: Soundararajan Velu
>Assignee: Matt McCline
>
>  TaskAttempt 3 failed, info=[Error: Failure while running 
> task:java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row (tag=0) 
> {"key":{"_col0":4457890},"value":{"_col0":null,"_col1":null,"_col2":null,"_col3":null,"_col4":null,"_col5":null,"_col6":null,"_col7":null,"_col8":null,"_col9":null,"_col10":null,"_col11":null,"_col12":null,"_col13":null,"_col14":null,"_col15":null,"_col16":null,"_col17":"fkl_shipping_b2c","_col18":null,"_col19":null,"_col20":null,"_col21":null}}
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:171)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:137)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:345)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row (tag=0) 
> {"key":{"_col0":4457890},"value":{"_col0":null,"_col1":null,"_col2":null,"_col3":null,"_col4":null,"_col5":null,"_col6":null,"_col7":null,"_col8":null,"_col9":null,"_col10":null,"_col11":null,"_col12":null,"_col13":null,"_col14":null,"_col15":null,"_col16":null,"_col17":"fkl_shipping_b2c","_col18":null,"_col19":null,"_col20":null,"_col21":null}}
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:302)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:249)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:148)
> ... 14 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row (tag=0) 
> {"key":{"_col0":4457890},"value":{"_col0":null,"_col1":null,"_col2":null,"_col3":null,"_col4":null,"_col5":null,"_col6":null,"_col7":null,"_col8":null,"_col9":null,"_col10":null,"_col11":null,"_col12":null,"_col13":null,"_col14":null,"_col15":null,"_col16":null,"_col17":"fkl_shipping_b2c","_col18":null,"_col19":null,"_col20":null,"_col21":null}}
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource$GroupIterator.next(ReduceRecordSource.java:370)
> at 
> org.apache.hadoop.hive.ql.exec.tez.ReduceRecordSource.pushRecord(ReduceRecordSource.java:292)
> ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> Hive Runtime Error while processing row (tag=1) 
> {"key":{"_col0":6417306,"_col1":{0:{"_col0":"2014-08-01 
> 02:14:02"}}},"value":{"_col0":"2014-08-01 
> 02:14:02","_col1":20140801,"_col2":"sc_jarvis_b2c","_col3":"action_override","_col4":"WITHIN_GRACE_PERIOD","_col5":"policy_override"}}
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.fetchOneRow(CommonMergeJoinOperator.java:413)
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.fetchNextGroup(CommonMergeJoinOperator.java:381)
> at 
> org.apache.hadoop.hive.ql.exec.CommonMergeJoinOperator.pr

[jira] [Assigned] (HIVE-13282) GroupBy and select operator encounter ArrayIndexOutOfBoundsException

2016-05-23 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-13282:
---

Assignee: Matt McCline  (was: Vikram Dixit K)

> GroupBy and select operator encounter ArrayIndexOutOfBoundsException
> 
>
> Key: HIVE-13282
> URL: https://issues.apache.org/jira/browse/HIVE-13282
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 1.2.1, 2.0.0, 2.1.0
>Reporter: Vikram Dixit K
>Assignee: Matt McCline
>
> The group by and select operators run into the ArrayIndexOutOfBoundsException 
> when they incorrectly initialize themselves with tag 0 but the incoming tag 
> id is different.
> {code}
> select count(*) from
> (select rt1.id from
> (select t1.key as id, t1.value as od from tab t1 group by key, value) rt1) vt1
> join
> (select rt2.id from
> (select t2.key as id, t2.value as od from tab_part t2 group by key, value) 
> rt2) vt2
> where vt1.id=vt2.id;
> {code}
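
The failure mode (structures initialized for tag 0 while rows arrive with a different tag) can be sketched abstractly; the fix is to key per-input state by the tags that will actually arrive. Hypothetical illustration, not the operator code:

```python
def init_buffers(expected_tags):
    """Allocate one buffer per incoming tag id, keyed by the real tag ids."""
    return {tag: [] for tag in expected_tags}

# Initializing only for tag 0 would make buffers[1] fail with a KeyError,
# analogous to the ArrayIndexOutOfBoundsException described above.
buffers = init_buffers([0, 1])
buffers[1].append("row-from-second-input")
print(sorted(buffers))  # [0, 1]
```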





[jira] [Commented] (HIVE-13502) Beeline doesnt support session parameters in JDBC URL as documentation states.

2016-05-23 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297521#comment-15297521
 ] 

Lefty Leverenz commented on HIVE-13502:
---

(Also the fix for HIVE-9144.)

> Beeline doesnt support session parameters in JDBC URL as documentation states.
> --
>
> Key: HIVE-13502
> URL: https://issues.apache.org/jira/browse/HIVE-13502
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Fix For: 2.1.0
>
> Attachments: HIVE-13502.1.patch, HIVE-13502.2.patch, 
> HIVE-13502.3.patch, HIVE-13502.3.patch, HIVE-13502.4.patch, 
> HIVE-13502.5.patch, HIVE-13502.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-ConnectionURLs
> documents that sessions variables like credentials etc are accepted as part 
> of the URL. However, Beeline does not support such URLs today.





[jira] [Updated] (HIVE-13826) Make VectorUDFAdaptor work for GenericUDFBetween when used as FILTER

2016-05-23 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-13826:

Attachment: HIVE-13826.01.patch

> Make VectorUDFAdaptor work for GenericUDFBetween when used as FILTER
> 
>
> Key: HIVE-13826
> URL: https://issues.apache.org/jira/browse/HIVE-13826
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-13826.01.patch
>
>
> GenericUDFBetween doesn't vectorize with VectorUDFAdaptor when used as FILTER 
> (i.e. as single item for WHERE).





[jira] [Commented] (HIVE-13502) Beeline doesnt support session parameters in JDBC URL as documentation states.

2016-05-23 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297520#comment-15297520
 ] 

Lefty Leverenz commented on HIVE-13502:
---

Should this fix be documented in the wiki?

* [HiveServer2 Clients -- Connection URLs | 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-ConnectionURLs]

> Beeline doesnt support session parameters in JDBC URL as documentation states.
> --
>
> Key: HIVE-13502
> URL: https://issues.apache.org/jira/browse/HIVE-13502
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Fix For: 2.1.0
>
> Attachments: HIVE-13502.1.patch, HIVE-13502.2.patch, 
> HIVE-13502.3.patch, HIVE-13502.3.patch, HIVE-13502.4.patch, 
> HIVE-13502.5.patch, HIVE-13502.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-ConnectionURLs
> documents that sessions variables like credentials etc are accepted as part 
> of the URL. However, Beeline does not support such URLs today.





[jira] [Commented] (HIVE-13719) TestConverters fails on master

2016-05-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297510#comment-15297510
 ] 

Sergey Shelukhin commented on HIVE-13719:
-

+1

> TestConverters fails on master
> --
>
> Key: HIVE-13719
> URL: https://issues.apache.org/jira/browse/HIVE-13719
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tests
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Siddharth Seth
> Attachments: HIVE-13719.01.patch
>
>
> Can be reproduced locally also.





[jira] [Assigned] (HIVE-13719) TestConverters fails on master

2016-05-23 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth reassigned HIVE-13719:
-

Assignee: Siddharth Seth

> TestConverters fails on master
> --
>
> Key: HIVE-13719
> URL: https://issues.apache.org/jira/browse/HIVE-13719
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tests
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Siddharth Seth
> Attachments: HIVE-13719.01.patch
>
>
> Can be reproduced locally also.





[jira] [Updated] (HIVE-13719) TestConverters fails on master

2016-05-23 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13719:
--
Status: Patch Available  (was: Open)

> TestConverters fails on master
> --
>
> Key: HIVE-13719
> URL: https://issues.apache.org/jira/browse/HIVE-13719
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tests
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Siddharth Seth
> Attachments: HIVE-13719.01.patch
>
>
> Can be reproduced locally also.





[jira] [Updated] (HIVE-13719) TestConverters fails on master

2016-05-23 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13719:
--
Attachment: HIVE-13719.01.patch

Trivial patch. [~sershe] - could you please take a look?

> TestConverters fails on master
> --
>
> Key: HIVE-13719
> URL: https://issues.apache.org/jira/browse/HIVE-13719
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tests
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Siddharth Seth
> Attachments: HIVE-13719.01.patch
>
>
> Can be reproduced locally also.





[jira] [Commented] (HIVE-13651) LlapBaseInputFormat: figure out where credentials come from

2016-05-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297502#comment-15297502
 ] 

Sergey Shelukhin commented on HIVE-13651:
-

[~jdere] can you comment w.r.t. the above, and the compilation and materialization 
logic for LLAPIF? What do we need to get the tokens during that compilation?

> LlapBaseInputFormat: figure out where credentials come from
> ---
>
> Key: HIVE-13651
> URL: https://issues.apache.org/jira/browse/HIVE-13651
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Jason Dere
>
> todo in LlapBaseInputFormat.constructSubmitWorkRequestProto()
> {code}
> // TODO Figure out where credentials will come from. Normally Hive sets up
> // URLs on the tez dag, for which Tez acquires credentials.
> //taskCredentials.addAll(getContext().getCredentials());
> //
> Preconditions.checkState(currentQueryIdentifierProto.getDagIdentifier() ==
> //
> taskSpec.getTaskAttemptID().getTaskID().getVertexID().getDAGId().getId());
> //ByteBuffer credentialsBinary = 
> credentialMap.get(currentQueryIdentifierProto);
> //if (credentialsBinary == null) {
> //  credentialsBinary = 
> serializeCredentials(getContext().getCredentials());
> //  credentialMap.putIfAbsent(currentQueryIdentifierProto, 
> credentialsBinary.duplicate());
> //} else {
> //  credentialsBinary = credentialsBinary.duplicate();
> //}
> //
> builder.setCredentialsBinary(ByteString.copyFrom(credentialsBinary));
> {code}





[jira] [Updated] (HIVE-13771) LLAPIF: generate app ID

2016-05-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13771:

Issue Type: Sub-task  (was: Bug)
Parent: HIVE-13441

> LLAPIF: generate app ID
> ---
>
> Key: HIVE-13771
> URL: https://issues.apache.org/jira/browse/HIVE-13771
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13771.patch
>
>
> See comments in the HIVE-13675 patch. The uniqueness needs to be ensured; the 
> user may be allowed to supply a prefix (e.g. their YARN app ID, if any) for 
> ease of tracking.
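
One simple way to guarantee uniqueness while honoring an optional user-supplied prefix is to append a random suffix to whatever prefix is given; a sketch under that assumption (not the patch's actual scheme):

```python
import uuid

def make_app_id(prefix=None):
    """Unique id; an optional caller prefix (e.g. a YARN app id) aids tracking."""
    suffix = uuid.uuid4().hex
    return prefix + "-" + suffix if prefix else suffix

first = make_app_id("application_1463_0042")
second = make_app_id("application_1463_0042")
print(first != second)  # True: the random suffix keeps ids unique
```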





[jira] [Updated] (HIVE-13827) LLAPIF: authentication on the output channel

2016-05-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13827:

Issue Type: Sub-task  (was: Bug)
Parent: HIVE-13441

> LLAPIF: authentication on the output channel
> 
>
> Key: HIVE-13827
> URL: https://issues.apache.org/jira/browse/HIVE-13827
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> The current thinking is that we'd send the token. There's no protocol on the 
> channel right now.





[jira] [Updated] (HIVE-13441) LLAPIF: security and signed fragments

2016-05-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13441:

Summary: LLAPIF: security and signed fragments  (was: LLAP: signed 
fragments)

> LLAPIF: security and signed fragments
> -
>
> Key: HIVE-13441
> URL: https://issues.apache.org/jira/browse/HIVE-13441
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>  Labels: llap
>
> Allows external clients to get securely signed splits from HS2, and submit 
> them to LLAP without running as a privileged user; LLAP will verify the 
> splits before running.
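
The described flow (HS2 signs a split, LLAP verifies it before running) resembles ordinary HMAC signing with a shared key; the sketch below is illustrative only and not the actual HIVE-13441 mechanism:

```python
import hashlib
import hmac

SHARED_KEY = b"hs2-llap-secret"  # stand-in for whatever key material is shared

def sign_split(split_bytes):
    """HS2 side: attach a signature to the serialized split."""
    return hmac.new(SHARED_KEY, split_bytes, hashlib.sha256).hexdigest()

def verify_split(split_bytes, signature):
    """LLAP side: refuse to run a fragment whose signature does not match."""
    return hmac.compare_digest(sign_split(split_bytes), signature)

sig = sign_split(b"split-payload")
print(verify_split(b"split-payload", sig))   # True
print(verify_split(b"tampered-payload", sig))  # False
```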





[jira] [Commented] (HIVE-11233) Include Apache Phoenix support in HBaseStorageHandler

2016-05-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297482#comment-15297482
 ] 

Hive QA commented on HIVE-11233:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12805513/HIVE-11233.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 50 failed/errored test(s), 10071 tests 
executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_join30.q-vector_decimal_10_0.q-acid_globallimit.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vectorized_parquet.q-insert_values_non_partitioned.q-schema_evol_orc_nonvec_mapwork_part.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-parallel_join1.q-escape_distributeby1.q-auto_sortmerge_join_7.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-table_access_keys_stats.q-bucketsortoptimize_insert_4.q-runtime_skewjoin_mapjoin_spark.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-timestamp_lazy.q-union29.q-join23.q-and-12-more - did not 
produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_external_table_ppd
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_binary_storage_queries
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_queries
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.org.apache.hadoop.hive.cli.TestMiniTezCliDriver
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_join21
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cross_product_check_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cte_mat_4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_llapdecider
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_schema_evol_text_nonvec_mapwork_part_all_primitive
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_script_env_var1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_fsstat
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_tez_union_with_udf
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_union4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_complex_all
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_interval_2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_null_projection
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorization_12
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorization_4
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_ptf
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vectorized_shufflejoin
org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testPreemptionQueueComparator
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks
org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf.org.apache.hadoop.hive.metastore.TestHiveMetaStoreGetMetaConf
org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus
org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener
org.apache.hadoop.hive.metastore.TestRemoteUGIHiveMetaStoreIpAddress.testIpAddress
org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testRevokeTimedOutWorkers
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testLockTimeout
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationDrops.testDropPartition
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbFailure
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadDbSuccess
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableFailure
org.apache.hadoop.hive.ql.security.TestStorageBasedMetastoreAuthorizationReads.testReadTableSuccess
org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.testPigPopulation
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp
org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithM

[jira] [Commented] (HIVE-13797) Provide a connection string example in beeline

2016-05-23 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297415#comment-15297415
 ] 

Vihang Karajgaonkar commented on HIVE-13797:


{noformat}
./beeline --help
Usage: java org.apache.hive.cli.beeline.BeeLine 
   -u <url>the JDBC URL to connect to
   -n <username> the username to connect as
   -p <password> the password to connect as
   -d <driver class> the driver class to use
   -i <init file>   script file for initialization
   -e <query>   query that should be executed
   -f <script file>   script file that should be executed
   -w (or) --password-file <password file>   the password file to read password from
   --hiveconf property=value   Use value for given property
   --hivevar name=valuehive variable name and value
   These are Hive-specific settings; variables
   can be set at the session level and referenced
   in Hive commands or queries.
   --color=[true/false]control whether color is used for display
   --showHeader=[true/false]   show column names in query results
   --headerInterval=ROWS;  the interval between which headers are displayed
   --fastConnect=[true/false]  skip building table/column list for 
tab-completion
   --autoCommit=[true/false]   enable/disable automatic transaction commit
   --verbose=[true/false]  show verbose error messages and debug info
   --showWarnings=[true/false] display connection warnings
   --showNestedErrs=[true/false]   display nested errors
   --numberFormat=[pattern]format numbers using DecimalFormat pattern
   --force=[true/false]continue running script even after errors
   --maxWidth=MAXWIDTH the maximum width of the terminal
   --maxColumnWidth=MAXCOLWIDTHthe maximum width to use when displaying 
columns
   --silent=[true/false]   be more silent
   --autosave=[true/false] automatically save preferences
   --outputformat=[table/vertical/csv2/tsv2/dsv/csv/tsv]  format mode for 
result display
   Note that csv and tsv are deprecated - use csv2, tsv2 instead
   --incremental=[true/false]  Defaults to false. When set to false, the 
entire result set
   is fetched and buffered before being 
displayed, yielding optimal
   display column sizing. When set to true, 
result rows are displayed
   immediately as they are fetched, yielding 
lower latency and
   memory usage at the price of extra display 
column padding.
   Setting --incremental=true is recommended if 
you encounter an OutOfMemory
   on the client side (due to the fetched 
result set size being large).
   --truncateTable=[true/false]truncate table column when it exceeds length
   --delimiterForDSV=DELIMITER specify the delimiter for 
delimiter-separated values output format (default: |)
   --isolation=LEVEL   set the transaction isolation level
   --nullemptystring=[true/false]  set to true to get historic behavior of 
printing null as empty string
   --addlocaldriverjar=DRIVERJARNAME Add a driver jar file on the beeline client side
   --addlocaldrivername=DRIVERNAME Add a driver name to be supported on the beeline client side
   --showConnectedUrl=[true/false] Display the HiveServer2 URI to which this beeline session connected.
   Only works for HiveServer2 cluster mode.
   --help  display this message
 
   Example:
1. Connect using simple authentication to HiveServer2 on localhost:1
$ beeline -u jdbc:hive2://localhost:1 username password
2. Connect using simple authentication to HiveServer2 on hs2.local:10012 
using -n for username and -p for password
$ beeline -n username -p password -u jdbc:hive2://hs2.local:10012
3. Connect using Kerberos authentication with hive/localh...@mydomain.com 
as HiveServer2 principal
$ beeline -u "jdbc:hive2://hs2.local:10013/default;principal=hive/localh...@mydomain.com"
4. Connect using SSL connection to HiveServer2 on localhost at 1
$ beeline -u "jdbc:hive2://localhost:1/default;ssl=true;sslTrustStore=/usr/local/truststore;trustStorePassword=mytruststorepassword"
5. Connect using LDAP authentication
$ beeline -u jdbc:hive2://hs2.local:10013/default  

{noformat}

Thanks [~leftylev] for the review. I incorporated your suggestions and uploaded 
the updated patch HIVE-13797.02.patch.

> Provide a connection string example in beeline
> --
>
> Key: HIVE-13797
> URL: https://issues.a

[jira] [Updated] (HIVE-13797) Provide a connection string example in beeline

2016-05-23 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-13797:
---
Attachment: HIVE-13797.02.patch

> Provide a connection string example in beeline
> --
>
> Key: HIVE-13797
> URL: https://issues.apache.org/jira/browse/HIVE-13797
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 2.0.0
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
>Priority: Minor
> Attachments: HIVE-13797.01.patch, HIVE-13797.02.patch
>
>
> It would save a bunch of googling if we could provide some examples of 
> connection strings directly in the beeline help message
> Eg:
> {code}
> ./bin/beeline --help
> Usage: java org.apache.hive.cli.beeline.BeeLine 
>-u <url>the JDBC URL to connect to
>-r  reconnect to last saved connect url (in 
> conjunction with !save)
>-n <username> the username to connect as
>-p <password> the password to connect as
>-d <driver class> the driver class to use
>-i <init file>   script file for initialization
>-e <query>   query that should be executed
>-f <script file>   script file that should be executed
>-w (or) --password-file <password file>   the password file to read 
> password from
>--hiveconf property=value   Use value for given property
>--hivevar name=valuehive variable name and value
>These are Hive-specific settings; variables
>can be set at the session level and referenced 
> in Hive
>commands or queries.
>--color=[true/false]control whether color is used for display
>--showHeader=[true/false]   show column names in query results
>--headerInterval=ROWS;  the interval between which headers are 
> displayed
>--fastConnect=[true/false]  skip building table/column list for 
> tab-completion
>--autoCommit=[true/false]   enable/disable automatic transaction commit
>--verbose=[true/false]  show verbose error messages and debug info
>--showWarnings=[true/false] display connection warnings
>--showNestedErrs=[true/false]   display nested errors
>--numberFormat=[pattern]format numbers using DecimalFormat pattern
>--force=[true/false]continue running script even after errors
>--maxWidth=MAXWIDTH the maximum width of the terminal
>--maxColumnWidth=MAXCOLWIDTHthe maximum width to use when displaying 
> columns
>--silent=[true/false]   be more silent
>--autosave=[true/false] automatically save preferences
>--outputformat=[table/vertical/csv2/tsv2/dsv/csv/tsv]  format mode for 
> result display
>Note that csv and tsv are deprecated - 
> use csv2, tsv2 instead
>--incremental=[true/false]  Defaults to false. When set to false, the 
> entire result set
>is fetched and buffered before being 
> displayed, yielding optimal
>display column sizing. When set to true, 
> result rows are displayed
>immediately as they are fetched, yielding 
> lower latency and
>memory usage at the price of extra display 
> column padding.
>Setting --incremental=true is recommended 
> if you encounter an OutOfMemory
>on the client side (due to the fetched 
> result set size being large).
>--truncateTable=[true/false]truncate table column when it exceeds 
> length
>--delimiterForDSV=DELIMITER specify the delimiter for 
> delimiter-separated values output format (default: |)
>--isolation=LEVEL   set the transaction isolation level
>--nullemptystring=[true/false]  set to true to get historic behavior of 
> printing null as empty string
>--addlocaldriverjar=DRIVERJARNAME Add a driver jar file on the beeline 
> client side
>--addlocaldrivername=DRIVERNAME Add a driver name to be supported on 
> the beeline client side
>--showConnectedUrl=[true/false] Display the HiveServer2 URI to which this 
> beeline session connected.
>Only works for HiveServer2 cluster mode.
>--help  display this message
>  
>Example:
> 1. beeline -u jdbc:hive2://localhost:1 username password
> 2. beeline -n username -p password -u jdbc:hive2://hs2.local:10012
> {code}





[jira] [Updated] (HIVE-12643) For self describing InputFormat don't replicate schema information in partitions

2016-05-23 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-12643:

   Resolution: Fixed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

Pushed to master.

> For self describing InputFormat don't replicate schema information in 
> partitions
> 
>
> Key: HIVE-12643
> URL: https://issues.apache.org/jira/browse/HIVE-12643
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 2.1.0
>
> Attachments: HIVE-12643.1.patch, HIVE-12643.2.patch, 
> HIVE-12643.3.patch, HIVE-12643.3.patch, HIVE-12643.patch
>
>
> Since self describing Input Formats don't use individual partition schemas 
> for schema resolution, there is no need to send that info to tasks.
> Doing this should cut down plan size.
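The change can be illustrated with a toy plan structure (field names here are hypothetical; Hive's real plan objects differ):

```python
# Each partition descriptor in the plan redundantly repeats the table schema.
partitions = {
    "ds=2016-05-01": {"schema": "a:int,b:string", "path": "/warehouse/t/ds=2016-05-01"},
    "ds=2016-05-02": {"schema": "a:int,b:string", "path": "/warehouse/t/ds=2016-05-02"},
}

def strip_partition_schemas(parts):
    # For self-describing formats (e.g. ORC, Parquet), readers resolve the
    # schema from the file itself, so the plan can drop the per-partition copy.
    return {name: {k: v for k, v in desc.items() if k != "schema"}
            for name, desc in parts.items()}

slim = strip_partition_schemas(partitions)
# The serialized plan shrinks by one schema string per partition.
```

With thousands of partitions, eliminating the repeated schema string is where the plan-size saving comes from.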





[jira] [Updated] (HIVE-13773) Stats state is not captured correctly in dynpart_sort_optimization_acid.q

2016-05-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13773:
---
Attachment: t.q.out.right

> Stats state is not captured correctly in dynpart_sort_optimization_acid.q
> -
>
> Key: HIVE-13773
> URL: https://issues.apache.org/jira/browse/HIVE-13773
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13773.01.patch, t.q, t.q.out, t.q.out.right
>
>






[jira] [Commented] (HIVE-13646) make hive.optimize.sort.dynamic.partition compatible with ACID tables

2016-05-23 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297363#comment-15297363
 ] 

Pengcheng Xiong commented on HIVE-13646:


Hi [~ekoifman] and [~wzheng], thanks for your work to make 
hive.optimize.sort.dynamic.partition compatible with ACID tables. However, it 
seems that extra work is needed to make them work together. After discussing 
with [~prasanth_j], this problem looks serious: we are getting a wrong row 
count and a wrong total data size, which leads to both correctness and 
performance issues. Could you take a look at HIVE-13773? I have attached t.q 
and a wrong t.q.out, and it is easily reproduced on master. If I revert the 
patch in this JIRA, it works fine. Thanks. Also cc'ing [~ashutoshc].

> make hive.optimize.sort.dynamic.partition compatible with ACID tables
> -
>
> Key: HIVE-13646
> URL: https://issues.apache.org/jira/browse/HIVE-13646
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Fix For: 1.3.0, 2.1.0
>
> Attachments: HIVE-13646.2.patch, HIVE-13646.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> HIVE-8875 disabled hive.optimize.sort.dynamic.partition for ACID queries.
> dynamic inserts are common in ACID and this leaves users with few options if 
> they are seeing OutOfMemory errors due to too many writers.
> hive.optimize.sort.dynamic.partition sorts data by partition col/bucket 
> col/sort col to ensure each reducer only needs 1 writer.
> Acid requires data in each bucket file to be sorted by ROW__ID and thus 
> doesn't allow end user to determine sorting.
> So we should be able to support hive.optimize.sort.dynamic.partition with
> sort on partition col/bucket col/ROW__ID 
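The proposed sort order can be sketched with illustrative tuples (this is not Hive's row representation, just the ordering idea):

```python
# Hypothetical rows as (partition_value, bucket_id, ROW__ID, payload).
rows = [
    ("ds=2", 0, 7, "r4"),
    ("ds=1", 1, 3, "r2"),
    ("ds=1", 0, 5, "r1"),
    ("ds=2", 0, 2, "r3"),
]

# Sorting by (partition col, bucket col, ROW__ID) delivers all rows for a
# given bucket file contiguously and already in ROW__ID order, so each
# reducer only ever needs one open writer at a time.
rows.sort(key=lambda r: (r[0], r[1], r[2]))
```

Because ROW__ID is the last component of the key, the ACID requirement (each bucket file sorted by ROW__ID) is preserved while still avoiding one writer per partition.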





[jira] [Commented] (HIVE-13798) Fix the unit test failure org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload

2016-05-23 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297356#comment-15297356
 ] 

Ashutosh Chauhan commented on HIVE-13798:
-

+1 Thanks Aihua for looking into this.

> Fix the unit test failure 
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
> 
>
> Key: HIVE-13798
> URL: https://issues.apache.org/jira/browse/HIVE-13798
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Fix For: 2.1.0
>
> Attachments: HIVE-13798.2.patch, HIVE-13798.3.patch, HIVE-13798.patch
>
>






[jira] [Updated] (HIVE-13773) Stats state is not captured correctly in dynpart_sort_optimization_acid.q

2016-05-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13773:
---
Attachment: t.q.out
t.q

> Stats state is not captured correctly in dynpart_sort_optimization_acid.q
> -
>
> Key: HIVE-13773
> URL: https://issues.apache.org/jira/browse/HIVE-13773
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13773.01.patch, t.q, t.q.out
>
>






[jira] [Commented] (HIVE-13670) Improve Beeline connect/reconnect semantics

2016-05-23 Thread Shannon Ladymon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297341#comment-15297341
 ] 

Shannon Ladymon commented on HIVE-13670:


Doc done - thanks [~sushanth] for writing up the documentation:
* [HiveServer2 Clients - Beeline Command Options | 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-BeelineCommandOptions]
* [JDBC - Named Connection URLs | 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-NamedConnectionURLs]
* [JDBC - Reconnecting | 
https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Reconnecting]

> Improve Beeline connect/reconnect semantics
> ---
>
> Key: HIVE-13670
> URL: https://issues.apache.org/jira/browse/HIVE-13670
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Fix For: 2.1.0
>
> Attachments: HIVE-13670.2.patch, HIVE-13670.3.patch, 
> HIVE-13670.4.patch, HIVE-13670.patch
>
>
> For most users of beeline, chances are that they will be using it with a 
> single HS2 instance most of the time. In this scenario, having them type out 
> a jdbc uri for HS2 every single time to !connect can get tiresome. Thus, we 
> should improve semantics so that if a user does a successful !connect, then 
> we must store the last-connected-to-url, so that if they do a !close, and 
> then a !reconnect, then !reconnect should attempt to connect to the last 
> successfully used url.
> Also, if they then do a !save, then that last-successfully-used url must be 
> saved, so that in subsequent sessions, they can simply do !reconnect rather 
> than specifying a url for !connect.
> In addition, it would be useful to introduce a new way of doing !connect that 
> does not involve typing out a jdbc url every time (since typing it out is 
> highly likely to be error-prone)
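The semantics described above can be sketched as a small state holder (names are illustrative, not Beeline's actual internals):

```python
class ConnectionState:
    """Sketch of the proposed !connect/!save/!reconnect behavior."""

    def __init__(self, saved_url=None):
        self.last_url = None        # last successfully connected URL
        self.saved_url = saved_url  # URL persisted by !save

    def connect(self, url):
        # A successful !connect records the URL for later !reconnect.
        self.last_url = url

    def save(self):
        # !save persists the last successfully used URL across sessions.
        self.saved_url = self.last_url

    def reconnect(self):
        # !reconnect retries the last URL of this session, falling back to
        # the saved one, so the user never retypes the JDBC URL.
        url = self.last_url or self.saved_url
        if url is None:
            raise RuntimeError("no known URL; use !connect <url> first")
        return url

state = ConnectionState()
state.connect("jdbc:hive2://hs2.local:10013/default")
state.save()
fresh = ConnectionState(saved_url=state.saved_url)  # a new session
print(fresh.reconnect())
```

A !close would clear only the live connection, leaving last_url intact so a subsequent !reconnect still works within the session.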





[jira] [Updated] (HIVE-13670) Improve Beeline connect/reconnect semantics

2016-05-23 Thread Lefty Leverenz (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lefty Leverenz updated HIVE-13670:
--
Labels:   (was: TODOC2.1)

> Improve Beeline connect/reconnect semantics
> ---
>
> Key: HIVE-13670
> URL: https://issues.apache.org/jira/browse/HIVE-13670
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.1.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Fix For: 2.1.0
>
> Attachments: HIVE-13670.2.patch, HIVE-13670.3.patch, 
> HIVE-13670.4.patch, HIVE-13670.patch
>
>
> For most users of beeline, chances are that they will be using it with a 
> single HS2 instance most of the time. In this scenario, having them type out 
> a jdbc uri for HS2 every single time to !connect can get tiresome. Thus, we 
> should improve semantics so that if a user does a successful !connect, then 
> we must store the last-connected-to-url, so that if they do a !close, and 
> then a !reconnect, then !reconnect should attempt to connect to the last 
> successfully used url.
> Also, if they then do a !save, then that last-successfully-used url must be 
> saved, so that in subsequent sessions, they can simply do !reconnect rather 
> than specifying a url for !connect.
> In addition, it would be useful to introduce a new way of doing !connect that 
> does not involve typing out a jdbc url every time (since typing it out is 
> highly likely to be error-prone)





[jira] [Commented] (HIVE-12643) For self describing InputFormat don't replicate schema information in partitions

2016-05-23 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297281#comment-15297281
 ] 

Matt McCline commented on HIVE-12643:
-

LGTM +1

> For self describing InputFormat don't replicate schema information in 
> partitions
> 
>
> Key: HIVE-12643
> URL: https://issues.apache.org/jira/browse/HIVE-12643
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 2.0.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-12643.1.patch, HIVE-12643.2.patch, 
> HIVE-12643.3.patch, HIVE-12643.3.patch, HIVE-12643.patch
>
>
> Since self describing Input Formats don't use individual partition schemas 
> for schema resolution, there is no need to send that info to tasks.
> Doing this should cut down plan size.





[jira] [Commented] (HIVE-13029) NVDIMM support for LLAP Cache

2016-05-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297279#comment-15297279
 ] 

Hive QA commented on HIVE-13029:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12805509/HIVE-13029.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 56 failed/errored test(s), 10011 tests 
executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_join1.q-schema_evol_text_vec_mapwork_part_all_complex.q-vector_complex_join.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_sortmerge_join_16.q-skewjoin.q-vectorization_div0.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-cte_4.q-vector_non_string_partition.q-delete_where_non_partitioned.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-mapjoin_mapjoin.q-insert_into1.q-vector_decimal_2.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-auto_join_reordering_values.q-ptf_seqfile.q-auto_join18.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-bucketsortoptimize_insert_7.q-smb_mapjoin_15.q-mapreduce1.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-groupby2.q-custom_input_output_format.q-join41.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-groupby_complex_types.q-groupby_map_ppr_multi_distinct.q-vectorization_16.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-multi_insert.q-join5.q-groupby6.q-and-12-more - did not 
produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_queries
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket6
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_num_buckets
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_reducers_power_two
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_diff_fs
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_complex_all
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_join5
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_escape_distributeby1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby8_map_skew
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_multi_single_reducer2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_ppr_multi_distinct
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_parallel_join1
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_sample10
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_sample7
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vector_left_outer_join
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_vectorized_nested_mapjoin
org.apache.hadoop.hive.llap.daemon.impl.TestTaskExecutorService.testPreemptionQueueComparator
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.metastore.TestAuthzApiEmbedAuthorizerInRemote.org.apache.hadoop.hive.meta

[jira] [Commented] (HIVE-13720) TestLlapTaskCommunicator fails on master

2016-05-23 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297256#comment-15297256
 ] 

Prasanth Jayachandran commented on HIVE-13720:
--

[~sershe] man you are fast! are you crawling or using bots? :P

> TestLlapTaskCommunicator fails on master
> 
>
> Key: HIVE-13720
> URL: https://issues.apache.org/jira/browse/HIVE-13720
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Test
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Siddharth Seth
> Attachments: HIVE-13720.01.patch
>
>
> Can be reproduced locally as well





[jira] [Commented] (HIVE-13720) TestLlapTaskCommunicator fails on master

2016-05-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297254#comment-15297254
 ] 

Sergey Shelukhin commented on HIVE-13720:
-

+1

> TestLlapTaskCommunicator fails on master
> 
>
> Key: HIVE-13720
> URL: https://issues.apache.org/jira/browse/HIVE-13720
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Test
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Siddharth Seth
> Attachments: HIVE-13720.01.patch
>
>
> Can be reproduced locally as well





[jira] [Updated] (HIVE-13720) TestLlapTaskCommunicator fails on master

2016-05-23 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13720:
--
Attachment: HIVE-13720.01.patch

Simple patch to fix this. The failure was caused by changing the default value 
from "" to null.

[~prasanth_j] - could you please take a look.
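The failure mode is the classic empty-string-versus-null default (a generic Python illustration; the configuration key below is hypothetical):

```python
conf = {}  # the key is unset, so lookups fall back to the default

def get_conf(conf, key, default):
    # Minimal stand-in for a configuration lookup with a default value.
    return conf.get(key, default)

# Old default "": string operations on the result are always safe.
hosts = get_conf(conf, "llap.daemon.hosts", "")
assert hosts.split(",") == [""]

# New default None: the same call site now fails until it null-checks.
hosts = get_conf(conf, "llap.daemon.hosts", None)
try:
    hosts.split(",")
except AttributeError:
    print("caller must null-check before splitting")
```

Tests written against the old "" default break in exactly this way, which matches the description of the fix.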

> TestLlapTaskCommunicator fails on master
> 
>
> Key: HIVE-13720
> URL: https://issues.apache.org/jira/browse/HIVE-13720
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Test
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Siddharth Seth
> Attachments: HIVE-13720.01.patch
>
>
> Can be reproduced locally as well





[jira] [Assigned] (HIVE-13720) TestLlapTaskCommunicator fails on master

2016-05-23 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth reassigned HIVE-13720:
-

Assignee: Siddharth Seth

> TestLlapTaskCommunicator fails on master
> 
>
> Key: HIVE-13720
> URL: https://issues.apache.org/jira/browse/HIVE-13720
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Test
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Siddharth Seth
> Attachments: HIVE-13720.01.patch
>
>
> Can be reproduced locally as well





[jira] [Updated] (HIVE-13720) TestLlapTaskCommunicator fails on master

2016-05-23 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13720:
--
Status: Patch Available  (was: Open)

> TestLlapTaskCommunicator fails on master
> 
>
> Key: HIVE-13720
> URL: https://issues.apache.org/jira/browse/HIVE-13720
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Test
>Affects Versions: 2.1.0
>Reporter: Ashutosh Chauhan
>Assignee: Siddharth Seth
> Attachments: HIVE-13720.01.patch
>
>
> Can be reproduced locally as well





[jira] [Updated] (HIVE-13444) LLAP: add HMAC signatures to LLAP; verify them on LLAP side

2016-05-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13444:

Attachment: HIVE-13444.03.patch

Addressed CR feedback. Also removed the client factory for remote clients, 
since this will only be used in CLI for now.

> LLAP: add HMAC signatures to LLAP; verify them on LLAP side
> ---
>
> Key: HIVE-13444
> URL: https://issues.apache.org/jira/browse/HIVE-13444
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13444.01.patch, HIVE-13444.02.patch, 
> HIVE-13444.03.patch, HIVE-13444.WIP.patch, HIVE-13444.patch
>
>






[jira] [Issue Comment Deleted] (HIVE-13824) NOSUCHMethodFound org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I

2016-05-23 Thread Ekta Paliwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ekta Paliwal updated HIVE-13824:

Comment: was deleted

(was: Gopal V:

Hello Sir,

I tried doing what you suggested, but it gives me the error

"Missing Hive CLI JAR"

I made sure this message is not coming from the Hive launcher file itself, 
because I tested it with the help of echo. I am not sure where this message is 
coming from. What should I do?)

> NOSUCHMethodFound org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I
> --
>
> Key: HIVE-13824
> URL: https://issues.apache.org/jira/browse/HIVE-13824
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, Hive
> Environment: Windows 8, HADOOP 2.7, HIVE 1.2.1, SPARK 1.6.1
>Reporter: Ekta Paliwal
>
> I have been trying to install Hive on Windows. I have 64-bit Windows 8, on 
> which Hadoop and Spark are running. I have the
> 1. HADOOP_HOME
> 2. HIVE_HOME
> 3. SPARK_HOME
> 4. Platform
> 5. PATH
> environment variables all set up on my system. Earlier, I was getting this 
> error:
> Missing Hive Execution Jar: 
> C:\hadoop1\hadoop-2.7.2\apache-hive-1.2.1-bin/lib/hive-exec-*.jar
> I fixed that error by editing the hive script inside the bin folder of Hive: 
> it was caused by the forward slashes "/" in the paths built from the 
> environment variables in that script. I replaced them with "\" and those 
> errors are gone. But now I am facing another problem. I am getting this error:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/C:/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/C:/hadoop2.7/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Beeline version 1.6.1 by Apache Hive
> Exception in thread "main" java.lang.NoSuchMethodError: 
> org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I
> at 
> jline.WindowsTerminal.getConsoleOutputCodepage(WindowsTerminal.java:293)
> at jline.WindowsTerminal.getOutputEncoding(WindowsTerminal.java:186)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:230)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
> at org.apache.hive.beeline.BeeLine.getConsoleReader(BeeLine.java:834)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:770)
> at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:484)
> at org.apache.hive.beeline.BeeLine.main(BeeLine.java:467)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> I have searched a lot about this, and I also posted the question to the Hive 
> user mailing list, but got no response. Please help me with this; I do not 
> even get results when I google this error.





[jira] [Commented] (HIVE-13824) NOSUCHMethodFound org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I

2016-05-23 Thread Ekta Paliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297197#comment-15297197
 ] 

Ekta Paliwal commented on HIVE-13824:
-

Gopal V:
Hello Sir,
I tried doing what you suggested, but it gives me the error
"Missing Hive CLI JAR"
I made sure this message is not coming from the Hive launcher file itself, 
because I tested it with the help of echo. I am not sure where this message is 
coming from. What should I do?

> NOSUCHMethodFound org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I
> --
>
> Key: HIVE-13824
> URL: https://issues.apache.org/jira/browse/HIVE-13824
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, Hive
> Environment: Windows 8, HADOOP 2.7, HIVE 1.2.1, SPARK 1.6.1
>Reporter: Ekta Paliwal
>
> I have been trying to install Hive on Windows. I have 64-bit Windows 8, on 
> which Hadoop and Spark are running. I have the
> 1. HADOOP_HOME
> 2. HIVE_HOME
> 3. SPARK_HOME
> 4. Platform
> 5. PATH
> environment variables all set up on my system. Earlier, I was getting this 
> error:
> Missing Hive Execution Jar: 
> C:\hadoop1\hadoop-2.7.2\apache-hive-1.2.1-bin/lib/hive-exec-*.jar
> I fixed that error by editing the hive script inside the bin folder of Hive: 
> it was caused by the forward slashes "/" in the paths built from the 
> environment variables in that script. I replaced them with "\" and those 
> errors are gone. But now I am facing another problem. I am getting this error:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/C:/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/C:/hadoop2.7/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Beeline version 1.6.1 by Apache Hive
> Exception in thread "main" java.lang.NoSuchMethodError: 
> org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I
> at 
> jline.WindowsTerminal.getConsoleOutputCodepage(WindowsTerminal.java:293)
> at jline.WindowsTerminal.getOutputEncoding(WindowsTerminal.java:186)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:230)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
> at org.apache.hive.beeline.BeeLine.getConsoleReader(BeeLine.java:834)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:770)
> at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:484)
> at org.apache.hive.beeline.BeeLine.main(BeeLine.java:467)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> I have searched a lot about this, and I also posted the question to the Hive 
> user mailing list, but got no response. Please help me with this; I do not 
> even get results when I google this error.





[jira] [Commented] (HIVE-13736) View's input/output formats are TEXT by default

2016-05-23 Thread Yongzhi Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297191#comment-15297191
 ] 

Yongzhi Chen commented on HIVE-13736:
-

[~ctang.ma], Hive does not use the input/output formats for views, so this will 
not affect backward compatibility.
Some third-party tools, such as Vertica, do not support sequence files, so we 
need a way to change the default input/output format for views. 

> View's input/output formats are TEXT by default
> ---
>
> Key: HIVE-13736
> URL: https://issues.apache.org/jira/browse/HIVE-13736
> Project: Hive
>  Issue Type: New Feature
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Pavas Garg
>Assignee: Yongzhi Chen
>Priority: Minor
> Attachments: HIVE-13736.1.patch
>
>
> Feature request to make Hive views' input/output formats text by default, in 
> order to help third-party compatibility





[jira] [Commented] (HIVE-13825) Map joins with cloned tables with same locations, but different column names throw error exceptions

2016-05-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297168#comment-15297168
 ] 

Sergio Peña commented on HIVE-13825:


Current workaround solution is to use a view for the {{t2}} table.

{noformat}
hive> CREATE TABLE t1 ( a string, b string) location 
'/user/hive/warehouse/test1';
OK
hive> INSERT INTO t1 VALUES (1,2), (1, 3), (2, 4), (3, 5);
OK
hive> CREATE VIEW t2 (c, d) AS SELECT * FROM t1;
OK
hive> SELECT t1.a FROM t1 JOIN t2 ON t1.a = t2.c;
<...>
OK
1
1
1
1
2
3
{noformat}

> Map joins with cloned tables with same locations, but different column names 
> throw error exceptions
> ---
>
> Key: HIVE-13825
> URL: https://issues.apache.org/jira/browse/HIVE-13825
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergio Peña
>
> The following scenario of 2 tables with same locations cannot be used on a 
> JOIN query:
> {noformat}
> hive> create table t1 (a string, b string) location 
> '/user/hive/warehouse/test1';
> OK
> hive> create table t2 (c string, d string) location 
> '/user/hive/warehouse/test1';
> OK
> hive> select t1.a from t1 join t2 on t1.a = t2.c;
> ...
> 2016-05-23 16:39:57 Starting to launch local task to process map join;
>   maximum memory = 477102080
> Execution failed with exit status: 2
> Obtaining error information
> Task failed!
> Task ID:
>   Stage-4
> Logs:
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> {noformat}
> The logs contain this error exception:
> {noformat}
> 2016-05-23T16:39:58,163 ERROR [main]: mr.MapredLocalTask (:()) - Hive Runtime 
> Error: Map local work failed
> java.lang.RuntimeException: cannot find field a from [0:c, 1:d]
> at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:485)
> at 
> org.apache.hadoop.hive.serde2.BaseStructObjectInspector.getStructFieldRef(BaseStructObjectInspector.java:133)
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:55)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:973)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:999)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:75)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:355)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:504)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:457)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:365)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:504)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:457)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:365)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.initializeOperators(MapredLocalTask.java:499)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.startForward(MapredLocalTask.java:403)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.executeInProcess(MapredLocalTask.java:383)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:751)
> {noformat}





[jira] [Commented] (HIVE-13825) Map joins with cloned tables with same locations, but different column names throw error exceptions

2016-05-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297161#comment-15297161
 ] 

Sergio Peña commented on HIVE-13825:


I dug into the code and found that the problem occurs when getting the table 
information from {{getPathToPartitionInfo}}:
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java#L178

{{getPathToPartitionInfo}} is a method of the {{MapWork}} class, and it 
returns a HashMap whose key-value mapping is:
   table-location => table-information

Before {{MapJoinProcessor}} runs, the HashMap is initialized by the code below, 
where the {{t1}} table information is overwritten by the {{t2}} table 
information because both tables have the same location, and a HashMap cannot 
store duplicate keys:
https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMapRedUtils.java#L722

When {{MapJoinProcessor}} executes, it tries to look up the {{t1}} table 
information by its table location, but it gets the {{t2}} table information 
instead, so it throws the exception posted in this ticket.
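The key collision can be illustrated with a minimal, hypothetical sketch: the 
plain HashMap below stands in for MapWork's pathToPartitionInfo map, and the 
string values stand in for the real table descriptors.

```java
import java.util.HashMap;
import java.util.Map;

public class PathCollisionSketch {
    public static void main(String[] args) {
        // Stand-in for MapWork.getPathToPartitionInfo():
        // key = table location, value = table information.
        Map<String, String> pathToPartitionInfo = new HashMap<>();

        // t1 and t2 share the same location, so the second put()
        // silently overwrites the first entry.
        pathToPartitionInfo.put("/user/hive/warehouse/test1", "t1 (columns a, b)");
        pathToPartitionInfo.put("/user/hive/warehouse/test1", "t2 (columns c, d)");

        // A lookup meant to find t1's schema now returns t2's, which is why
        // the join fails with "cannot find field a from [0:c, 1:d]".
        System.out.println(pathToPartitionInfo.get("/user/hive/warehouse/test1"));
        System.out.println(pathToPartitionInfo.size()); // only one entry survives
    }
}
```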

> Map joins with cloned tables with same locations, but different column names 
> throw error exceptions
> ---
>
> Key: HIVE-13825
> URL: https://issues.apache.org/jira/browse/HIVE-13825
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergio Peña
>
> The following scenario of 2 tables with same locations cannot be used on a 
> JOIN query:
> {noformat}
> hive> create table t1 (a string, b string) location 
> '/user/hive/warehouse/test1';
> OK
> hive> create table t2 (c string, d string) location 
> '/user/hive/warehouse/test1';
> OK
> hive> select t1.a from t1 join t2 on t1.a = t2.c;
> ...
> 2016-05-23 16:39:57 Starting to launch local task to process map join;
>   maximum memory = 477102080
> Execution failed with exit status: 2
> Obtaining error information
> Task failed!
> Task ID:
>   Stage-4
> Logs:
> FAILED: Execution Error, return code 2 from 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> {noformat}
> The logs contain this error exception:
> {noformat}
> 2016-05-23T16:39:58,163 ERROR [main]: mr.MapredLocalTask (:()) - Hive Runtime 
> Error: Map local work failed
> java.lang.RuntimeException: cannot find field a from [0:c, 1:d]
> at 
> org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorUtils.getStandardStructFieldRef(ObjectInspectorUtils.java:485)
> at 
> org.apache.hadoop.hive.serde2.BaseStructObjectInspector.getStructFieldRef(BaseStructObjectInspector.java:133)
> at 
> org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator.initialize(ExprNodeColumnEvaluator.java:55)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluators(Operator.java:973)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initEvaluatorsAndReturnStruct(Operator.java:999)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.initializeOp(SelectOperator.java:75)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:355)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:504)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:457)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:365)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:504)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:457)
> at 
> org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:365)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.initializeOperators(MapredLocalTask.java:499)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.startForward(MapredLocalTask.java:403)
> at 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.executeInProcess(MapredLocalTask.java:383)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:751)
> {noformat}





[jira] [Commented] (HIVE-13490) Change itests to be part of the main Hive build

2016-05-23 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297156#comment-15297156
 ] 

Siddharth Seth commented on HIVE-13490:
---

[~kgyrtkirk] - I'd be in favour of the second approach, which allows the tests 
to be run from the top level with a profile, unless there are downsides to it. 
Will the profile need to be specified within the itests directory as well? 
That would break existing usage scenarios.

Please feel free to take over this jira if you're making this change.

[~spena] - should we look at failsafe in a separate jira? It looks like 
[~kgyrtkirk] already has an approach identified which builds itests directly, 
improves IDE integration, and avoids running the itests when invoking mvn test 
from the top level.
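A profile-gated module along these lines is one way to sketch the second 
approach in the root pom.xml (the profile id and module name here are 
illustrative guesses, not the actual patch):

```xml
<!-- Hypothetical sketch: gate the itests module behind a Maven profile so
     that a plain "mvn test" from the top level still skips it. -->
<profiles>
  <profile>
    <id>itests</id>
    <modules>
      <module>itests</module>
    </modules>
  </profile>
</profiles>
```

With something like this in place, "mvn test -Pitests" from the top level would 
also build and run the integration tests, while plain "mvn test" keeps its 
current behavior.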


> Change itests to be part of the main Hive build
> ---
>
> Key: HIVE-13490
> URL: https://issues.apache.org/jira/browse/HIVE-13490
> Project: Hive
>  Issue Type: Improvement
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-13490.01.patch, HIVE-13490.02.patch
>
>
> Instead of having to build Hive and then itests separately.
> With IntelliJ, this ends up being loaded as two separate dependencies, and 
> there are a lot of hops involved to make changes.
> Does anyone know why these have been kept separate?





[jira] [Commented] (HIVE-13824) NOSUCHMethodFound org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I

2016-05-23 Thread Ekta Paliwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297145#comment-15297145
 ] 

Ekta Paliwal commented on HIVE-13824:
-

Gopal V:

Hello Sir,

I tried doing what you suggested, but it gives me the error

"Missing Hive CLI JAR"

I made sure this message is not coming from the Hive launcher file itself, 
because I tested it with the help of echo. I am not sure where this message is 
coming from. What should I do?

> NOSUCHMethodFound org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I
> --
>
> Key: HIVE-13824
> URL: https://issues.apache.org/jira/browse/HIVE-13824
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, Hive
> Environment: Windows 8, HADOOP 2.7, HIVE 1.2.1, SPARK 1.6.1
>Reporter: Ekta Paliwal
>
> I have been trying to install Hive on Windows. I have 64-bit Windows 8, on 
> which Hadoop and Spark are running. I have the
> 1. HADOOP_HOME
> 2. HIVE_HOME
> 3. SPARK_HOME
> 4. Platform
> 5. PATH
> environment variables all set up on my system. Earlier, I was getting this 
> error:
> Missing Hive Execution Jar: 
> C:\hadoop1\hadoop-2.7.2\apache-hive-1.2.1-bin/lib/hive-exec-*.jar
> I fixed that error by editing the hive script inside the bin folder of Hive: 
> it was caused by the forward slashes "/" in the paths built from the 
> environment variables in that script. I replaced them with "\" and those 
> errors are gone. But now I am facing another problem. I am getting this error:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/C:/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/C:/hadoop2.7/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Beeline version 1.6.1 by Apache Hive
> Exception in thread "main" java.lang.NoSuchMethodError: 
> org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I
> at 
> jline.WindowsTerminal.getConsoleOutputCodepage(WindowsTerminal.java:293)
> at jline.WindowsTerminal.getOutputEncoding(WindowsTerminal.java:186)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:230)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
> at org.apache.hive.beeline.BeeLine.getConsoleReader(BeeLine.java:834)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:770)
> at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:484)
> at org.apache.hive.beeline.BeeLine.main(BeeLine.java:467)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> I have searched a lot about this, and I also posted the question to the Hive 
> user mailing list, but got no response. Please help me with this; I do not 
> even get results when I google this error.





[jira] [Updated] (HIVE-13798) Fix the unit test failure org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload

2016-05-23 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-13798:

Attachment: HIVE-13798.3.patch

Patch-3: create the test jar from the code before running the q tests.

> Fix the unit test failure 
> org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
> 
>
> Key: HIVE-13798
> URL: https://issues.apache.org/jira/browse/HIVE-13798
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Fix For: 2.1.0
>
> Attachments: HIVE-13798.2.patch, HIVE-13798.3.patch, HIVE-13798.patch
>
>






[jira] [Resolved] (HIVE-13520) Don't allow any test to run for longer than 60minutes in the ptest setup

2016-05-23 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth resolved HIVE-13520.
---
   Resolution: Fixed
Fix Version/s: 2.1.0

Committed to master. Thanks for the review [~ashutoshc]

> Don't allow any test to run for longer than 60minutes in the ptest setup
> 
>
> Key: HIVE-13520
> URL: https://issues.apache.org/jira/browse/HIVE-13520
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Fix For: 2.1.0
>
> Attachments: HIVE-13520.01.txt, HIVE-13520.02.txt, HIVE-13520.03.patch
>
>
> The current timeout for batches is 2 hours. This needs to be lowered. 1 hour 
> may be too much as well. We can start with this and reduce timeouts further.





[jira] [Updated] (HIVE-13800) Disable auth enabled by default on LLAP UI for secure clusters

2016-05-23 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-13800:
--
Status: Patch Available  (was: Open)

> Disable auth enabled by default on LLAP UI for secure clusters
> --
>
> Key: HIVE-13800
> URL: https://issues.apache.org/jira/browse/HIVE-13800
> Project: Hive
>  Issue Type: Task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-13800.01.patch
>
>
> There's no sensitive information that I'm aware of (the logs would be the 
> most sensitive).
> Similar to the HS2 UI, the LLAP UI can be unprotected by default, even on 
> secure clusters.





[jira] [Commented] (HIVE-13824) NOSUCHMethodFound org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I

2016-05-23 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297131#comment-15297131
 ] 

Gopal V commented on HIVE-13824:


[~epaliwal]: try disabling the in-place console update config: {{hive 
--hiveconf hive.tez.exec.inplace.progress=false}}

> NOSUCHMethodFound org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I
> --
>
> Key: HIVE-13824
> URL: https://issues.apache.org/jira/browse/HIVE-13824
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline, Hive
> Environment: Windows 8, HADOOP 2.7, HIVE 1.2.1, SPARK 1.6.1
>Reporter: Ekta Paliwal
>
> I have been trying to install Hive on Windows. I have 64-bit Windows 8, on 
> which Hadoop and Spark are running. I have the
> 1. HADOOP_HOME
> 2. HIVE_HOME
> 3. SPARK_HOME
> 4. Platform
> 5. PATH
> environment variables all set up on my system. Earlier, I was getting this 
> error:
> Missing Hive Execution Jar: 
> C:\hadoop1\hadoop-2.7.2\apache-hive-1.2.1-bin/lib/hive-exec-*.jar
> I fixed that error by editing the hive script inside the bin folder of Hive: 
> it was caused by the forward slashes "/" in the paths built from the 
> environment variables in that script. I replaced them with "\" and those 
> errors are gone. But now I am facing another problem. I am getting this error:
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/C:/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/C:/hadoop2.7/hadoop-2.7.1/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> Beeline version 1.6.1 by Apache Hive
> Exception in thread "main" java.lang.NoSuchMethodError: 
> org.fusesource.jansi.internal.Kernel32.GetConsoleOutputCP()I
> at 
> jline.WindowsTerminal.getConsoleOutputCodepage(WindowsTerminal.java:293)
> at jline.WindowsTerminal.getOutputEncoding(WindowsTerminal.java:186)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:230)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:221)
> at jline.console.ConsoleReader.<init>(ConsoleReader.java:209)
> at org.apache.hive.beeline.BeeLine.getConsoleReader(BeeLine.java:834)
> at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:770)
> at 
> org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:484)
> at org.apache.hive.beeline.BeeLine.main(BeeLine.java:467)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> I have searched a lot about this, and I also posted the question to the Hive 
> user mailing list, but got no response. Please help me with this; I do not 
> even get results when I google this error.





[jira] [Commented] (HIVE-13773) Stats state is not captured correctly in dynpart_sort_optimization_acid.q

2016-05-23 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297119#comment-15297119
 ] 

Prasanth Jayachandran commented on HIVE-13773:
--

[~pxiong] I initially added it for ORC writers (not ORC updaters - ACID). ORC 
writers implement the StatsProvidingRecordWriter interface, which returns the 
internally gathered stats (row count and raw data size). ACID was added later, 
and I guess it does not implement the interface because it cannot provide 
reliable stats (due to deletes). I wanted to make sure this works for the 
non-ACID use case. Also, this stats gathering should happen in both processOp() 
and closeOp(). The reason is that, with hive.optimize.sort.dynamic.partition, 
there is only one record writer open per reducer at any point: before closing 
the previous writer in processOp() we need to collect its statistics, and for 
the last writer we gather statistics in closeOp(). It is not clear to me why 
you are removing the stats collection from processOp().
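The per-writer harvesting pattern described here can be sketched as below. This 
is a simplified, illustrative stand-in, not Hive's actual FileSinkOperator or 
StatsProvidingRecordWriter API; all class and method names are invented.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StatsHarvestSketch {
    // Simplified stand-in for a writer that gathers its own stats,
    // in the spirit of StatsProvidingRecordWriter.
    static class CountingWriter {
        private long rows;
        void write(String row) { rows++; }
        long getRowCount()     { return rows; }
    }

    private String currentPartition;
    private CountingWriter currentWriter;  // only one open writer per reducer
    final Map<String, Long> stats = new LinkedHashMap<>();

    // Analogue of processOp(): on a partition switch, harvest the previous
    // writer's stats before replacing it, since it is about to be closed.
    void process(String partition, String row) {
        if (currentWriter != null && !partition.equals(currentPartition)) {
            stats.put(currentPartition, currentWriter.getRowCount());
        }
        if (currentWriter == null || !partition.equals(currentPartition)) {
            currentWriter = new CountingWriter();
            currentPartition = partition;
        }
        currentWriter.write(row);
    }

    // Analogue of closeOp(): the last writer can only be harvested here,
    // because no partition switch follows it.
    void close() {
        if (currentWriter != null) {
            stats.put(currentPartition, currentWriter.getRowCount());
            currentWriter = null;
        }
    }

    public static void main(String[] args) {
        StatsHarvestSketch op = new StatsHarvestSketch();
        op.process("p=1", "r1");
        op.process("p=1", "r2");
        op.process("p=2", "r3");       // triggers harvest of p=1's writer
        op.close();                    // harvests p=2's writer
        System.out.println(op.stats);  // prints {p=1=2, p=2=1}
    }
}
```

Dropping the harvest from process() would lose the stats of every writer except 
the last one, which matches the concern raised above.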

> Stats state is not captured correctly in dynpart_sort_optimization_acid.q
> -
>
> Key: HIVE-13773
> URL: https://issues.apache.org/jira/browse/HIVE-13773
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13773.01.patch
>
>






[jira] [Updated] (HIVE-13354) Add ability to specify Compaction options per table and per request

2016-05-23 Thread Wei Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Zheng updated HIVE-13354:
-
Attachment: HIVE-13354.2.patch

> Add ability to specify Compaction options per table and per request
> ---
>
> Key: HIVE-13354
> URL: https://issues.apache.org/jira/browse/HIVE-13354
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.3.0, 2.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>  Labels: TODOC2.1
> Attachments: HIVE-13354.1.patch, 
> HIVE-13354.1.withoutSchemaChange.patch, HIVE-13354.2.patch
>
>
> Currently there are a few options that determine when automatic compaction is 
> triggered. They are specified once for the entire warehouse.
> This doesn't make sense - some tables may be more important and need to be 
> compacted more often.
> We should allow specifying these on a per-table basis.
> Also, compaction is an MR job launched from within the metastore. There is 
> currently no way to control job parameters (like memory, for example) except 
> to specify them in hive-site.xml for the metastore, which makes them site wide.
> We should add a way to specify these per table (perhaps even per compaction if 
> launched via ALTER TABLE).





[jira] [Commented] (HIVE-13823) Remove unnecessary log line in common join operator

2016-05-23 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297084#comment-15297084
 ] 

Prasanth Jayachandran commented on HIVE-13823:
--

+1

> Remove unnecessary log line in common join operator
> ---
>
> Key: HIVE-13823
> URL: https://issues.apache.org/jira/browse/HIVE-13823
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 2.1.0
>
> Attachments: HIVE-13823.1.patch
>
>






[jira] [Updated] (HIVE-13787) LLAP: bug in recent security patches (wrong argument order; using full user name in id)

2016-05-23 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-13787:

   Resolution: Fixed
Fix Version/s: 2.1.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review!

> LLAP: bug in recent security patches (wrong argument order; using full user 
> name in id)
> ---
>
> Key: HIVE-13787
> URL: https://issues.apache.org/jira/browse/HIVE-13787
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 2.1.0
>
> Attachments: HIVE-13787.01.patch, HIVE-13787.02.patch, 
> HIVE-13787.patch
>
>






[jira] [Updated] (HIVE-13823) Remove unnecessary log line in common join operator

2016-05-23 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-13823:
--
Attachment: HIVE-13823.1.patch

> Remove unnecessary log line in common join operator
> ---
>
> Key: HIVE-13823
> URL: https://issues.apache.org/jira/browse/HIVE-13823
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 2.1.0
>
> Attachments: HIVE-13823.1.patch
>
>






[jira] [Commented] (HIVE-13561) HiveServer2 is leaking ClassLoaders when add jar / temporary functions are used

2016-05-23 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297070#comment-15297070
 ] 

Vaibhav Gumashta commented on HIVE-13561:
-

+1

> HiveServer2 is leaking ClassLoaders when add jar / temporary functions are 
> used
> ---
>
> Key: HIVE-13561
> URL: https://issues.apache.org/jira/browse/HIVE-13561
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.0, 1.2.1, 2.0.0
>Reporter: Trystan Leftwich
>Assignee: Trystan Leftwich
> Attachments: HIVE-13561-branch-1.2.3.patch, HIVE-13561.2.patch, 
> HIVE-13561.3.patch, HIVE-13561.4.patch
>
>
> I can repro this on branch-1.2 and branch-2.0.
> It looks to be the same issue as HIVE-11408.
> The patch from HIVE-11408 looks to fix the issue as well.
> I've updated the patch from HIVE-11408 to be aligned with branch-1.2 and 
> master.





[jira] [Commented] (HIVE-13787) LLAP: bug in recent security patches (wrong argument order; using full user name in id)

2016-05-23 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297045#comment-15297045
 ] 

Siddharth Seth commented on HIVE-13787:
---

+1

> LLAP: bug in recent security patches (wrong argument order; using full user 
> name in id)
> ---
>
> Key: HIVE-13787
> URL: https://issues.apache.org/jira/browse/HIVE-13787
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13787.01.patch, HIVE-13787.02.patch, 
> HIVE-13787.patch
>
>






[jira] [Commented] (HIVE-13787) LLAP: bug in recent security patches (wrong argument order; using full user name in id)

2016-05-23 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297038#comment-15297038
 ] 

Sergey Shelukhin commented on HIVE-13787:
-

Update - double checked, the users in the token should be set to the full user name.

> LLAP: bug in recent security patches (wrong argument order; using full user 
> name in id)
> ---
>
> Key: HIVE-13787
> URL: https://issues.apache.org/jira/browse/HIVE-13787
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-13787.01.patch, HIVE-13787.02.patch, 
> HIVE-13787.patch
>
>






[jira] [Commented] (HIVE-13651) LlapBaseInputFormat: figure out where credentials come from

2016-05-23 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297008#comment-15297008
 ] 

Siddharth Seth commented on HIVE-13651:
---

For a regular Tez dag execution, HBase tokens are obtained specifically by the 
Hive client. For HDFS tokens, Hive provides a list of URIs to Tez, and Tez 
takes care of obtaining the tokens.
The main scenario here will be accessing data on the local cluster. For HDFS, 
this should be handled by the LLAP daemons having access to data owned by Hive 
(no tokens need to be propagated). I'm not sure how HBase will work - will the 
kerberos credentials used by LLAP be sufficient to talk to HBase?

For the non-standard cases: 1. accessing data from a different cluster - HS2 
could obtain the tokens for the hive user. An important aspect to consider 
here is whether HS2 will be able to perform security checks for an alternate 
table. 2. Accessing data owned by some other user - this responsibility would 
fall on the client (HS2 does not have the required credentials to do this).

> LlapBaseInputFormat: figure out where credentials come from
> ---
>
> Key: HIVE-13651
> URL: https://issues.apache.org/jira/browse/HIVE-13651
> Project: Hive
>  Issue Type: Sub-task
>  Components: llap
>Reporter: Jason Dere
>
> todo in LlapBaseInputFormat.constructSubmitWorkRequestProto()
> {code}
> // TODO Figure out where credentials will come from. Normally Hive sets up
> // URLs on the tez dag, for which Tez acquires credentials.
> //taskCredentials.addAll(getContext().getCredentials());
> //Preconditions.checkState(currentQueryIdentifierProto.getDagIdentifier() ==
> //    taskSpec.getTaskAttemptID().getTaskID().getVertexID().getDAGId().getId());
> //ByteBuffer credentialsBinary = credentialMap.get(currentQueryIdentifierProto);
> //if (credentialsBinary == null) {
> //  credentialsBinary = serializeCredentials(getContext().getCredentials());
> //  credentialMap.putIfAbsent(currentQueryIdentifierProto, credentialsBinary.duplicate());
> //} else {
> //  credentialsBinary = credentialsBinary.duplicate();
> //}
> //builder.setCredentialsBinary(ByteString.copyFrom(credentialsBinary));
> {code}
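The commented-out TODO above can be illustrated with a self-contained sketch (hypothetical names, not the actual Hive/Tez API): serialize the caller's credentials once per query, cache the binary form keyed by query id, and hand each fragment request a duplicate() of the cached buffer so callers get independent position/limit over the shared bytes.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ConcurrentHashMap;

class CredentialCache {
  private final ConcurrentHashMap<String, ByteBuffer> credentialMap = new ConcurrentHashMap<>();

  // Stand-in for Hadoop's Credentials.writeTokenStorageToStream(); here we
  // just encode a string so the sketch stays self-contained.
  private static ByteBuffer serialize(String credentials) {
    return ByteBuffer.wrap(credentials.getBytes(StandardCharsets.UTF_8));
  }

  // Serialize at most once per query; every caller gets its own duplicate
  // view over the shared backing bytes.
  ByteBuffer credentialsFor(String queryId, String credentials) {
    return credentialMap.computeIfAbsent(queryId, q -> serialize(credentials)).duplicate();
  }
}
```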





[jira] [Commented] (HIVE-13821) OrcSplit groups all delta files together into a single split

2016-05-23 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297003#comment-15297003
 ] 

Eugene Koifman commented on HIVE-13821:
---

+1

> OrcSplit groups all delta files together into a single split
> 
>
> Key: HIVE-13821
> URL: https://issues.apache.org/jira/browse/HIVE-13821
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
> Attachments: HIVE-13821.1.patch, HIVE-13821.2.patch
>
>
> HIVE-7428 had a fix for the worst-case column projection size estimate. It 
> was removed in HIVE-10397 to return the file length, but for the ACID 
> strategy the file length is passed as 0. In the worst case, this always 
> returns 0 and all files end up in a single split.





[jira] [Commented] (HIVE-13817) Allow DNS CNAME ALIAS Resolution from apache hive beeline JDBC URL to allow for failover

2016-05-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296999#comment-15296999
 ] 

Hive QA commented on HIVE-13817:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12805512/HIVE-13817.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 56 failed/errored test(s), 9933 tests 
executed
*Failed tests:*
{noformat}
TestHWISessionManager - did not produce a TEST-*.xml file
TestMiniTezCliDriver-auto_join30.q-vector_decimal_10_0.q-acid_globallimit.q-and-12-more
 - did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_coalesce.q-cbo_windowing.q-tez_join.q-and-12-more - 
did not produce a TEST-*.xml file
TestMiniTezCliDriver-vector_interval_2.q-schema_evol_text_nonvec_mapwork_part_all_primitive.q-tez_fsstat.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-auto_join_reordering_values.q-ptf_seqfile.q-auto_join18.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-bucketsortoptimize_insert_7.q-smb_mapjoin_15.q-mapreduce1.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-join9.q-join_casesensitive.q-filter_join_breaktask.q-and-12-more
 - did not produce a TEST-*.xml file
TestSparkCliDriver-join_cond_pushdown_3.q-groupby7.q-auto_join17.q-and-12-more 
- did not produce a TEST-*.xml file
TestSparkCliDriver-order.q-auto_join18_multi_distinct.q-union2.q-and-12-more - 
did not produce a TEST-*.xml file
TestSparkCliDriver-script_pipe.q-stats12.q-auto_join24.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-smb_mapjoin_4.q-groupby8_map.q-groupby4_map.q-and-12-more - 
did not produce a TEST-*.xml file
TestSparkCliDriver-stats13.q-stats2.q-ppd_gby_join.q-and-12-more - did not 
produce a TEST-*.xml file
TestSparkCliDriver-vector_distinct_2.q-join15.q-load_dyn_part3.q-and-12-more - 
did not produce a TEST-*.xml file
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_ivyDownload
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_queries
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket5
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_bucket6
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_constprog_partitioner
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_disable_merge_for_bucketing
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_index_bitmap3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_map_operators
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_num_buckets
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_reducers_power_two
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_list_bucket_dml_10
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge9
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_diff_fs
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_reduce_deduplicate
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join3
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join4
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_vector_outer_join5
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_vector_complex_all
org.apache.hadoop.hive.llap.tez.TestConverters.testFragmentSpecToTaskSpec
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskCommunicator.testFinishableStateUpdateFailure
org.apache.hadoop.hive.metastore.TestFilterHooks.org.apache.hadoop.hive.metastore.TestFilterHooks
org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs.org.apache.hadoop.hive.metastore.TestHiveMetaStorePartitionSpecs
org.apache.hadoop.hive.metastore.TestMetaStoreEventListenerOnlyOnCommit.testEventStatus
org.apache.hadoop.hive.metastore.TestMetaStoreInitListener.testMetaStoreInitListener
org.apache.hadoop.hive.metastore.TestMetaStoreMetrics.org.apache.hadoop.hive.metastore.TestMetaStoreMetrics
org.apache.hadoop.hive.ql.security.TestClientSideAuthorizationProvider.testSimplePrivileges
org.apache.hadoop.hive.ql.security.TestExtendedAcls.org.apache.hadoop.hive.ql.security.TestExtendedAcls
org.apache.hadoop.hive.ql.security.TestFolderPermissions.org.apache.hadoop.h

[jira] [Assigned] (HIVE-13788) hive msck listpartitions need to make use of directSQL instead of datanucleus

2016-05-23 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan reassigned HIVE-13788:


Assignee: Hari Sankar Sivarama Subramaniyan

> hive msck listpartitions need to make use of directSQL instead of datanucleus
> -
>
> Key: HIVE-13788
> URL: https://issues.apache.org/jira/browse/HIVE-13788
> Project: Hive
>  Issue Type: Improvement
>Reporter: Rajesh Balamohan
>Assignee: Hari Sankar Sivarama Subramaniyan
>Priority: Minor
> Attachments: msck_stack_trace.png
>
>
> Currently, for tables with thousands of partitions, too many DB calls are 
> made via datanucleus.





[jira] [Comment Edited] (HIVE-13821) OrcSplit groups all delta files together into a single split

2016-05-23 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296995#comment-15296995
 ] 

Prasanth Jayachandran edited comment on HIVE-13821 at 5/23/16 8:09 PM:
---

[~ekoifman] Added a unit test.


was (Author: prasanth_j):
[~aechttpd] Added a unit test.

> OrcSplit groups all delta files together into a single split
> 
>
> Key: HIVE-13821
> URL: https://issues.apache.org/jira/browse/HIVE-13821
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
> Attachments: HIVE-13821.1.patch, HIVE-13821.2.patch
>
>
> HIVE-7428 had a fix for the worst-case column projection size estimate. It 
> was removed in HIVE-10397 to return the file length, but for the ACID 
> strategy the file length is passed as 0. In the worst case, this always 
> returns 0 and all files end up in a single split.





[jira] [Updated] (HIVE-13821) OrcSplit groups all delta files together into a single split

2016-05-23 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-13821:
-
Attachment: HIVE-13821.2.patch

[~aechttpd] Added a unit test.

> OrcSplit groups all delta files together into a single split
> 
>
> Key: HIVE-13821
> URL: https://issues.apache.org/jira/browse/HIVE-13821
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
> Attachments: HIVE-13821.1.patch, HIVE-13821.2.patch
>
>
> HIVE-7428 had a fix for the worst-case column projection size estimate. It 
> was removed in HIVE-10397 to return the file length, but for the ACID 
> strategy the file length is passed as 0. In the worst case, this always 
> returns 0 and all files end up in a single split.





[jira] [Updated] (HIVE-13821) OrcSplit groups all delta files together into a single split

2016-05-23 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-13821:
-
Status: Patch Available  (was: Open)

> OrcSplit groups all delta files together into a single split
> 
>
> Key: HIVE-13821
> URL: https://issues.apache.org/jira/browse/HIVE-13821
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
> Attachments: HIVE-13821.1.patch, HIVE-13821.2.patch
>
>
> HIVE-7428 had a fix for the worst-case column projection size estimate. It 
> was removed in HIVE-10397 to return the file length, but for the ACID 
> strategy the file length is passed as 0. In the worst case, this always 
> returns 0 and all files end up in a single split.





[jira] [Updated] (HIVE-13264) JDBC driver makes 2 Open Session Calls for every open session

2016-05-23 Thread NITHIN MAHESH (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

NITHIN MAHESH updated HIVE-13264:
-
Attachment: HIVE-13264.6.patch

> JDBC driver makes 2 Open Session Calls for every open session
> -
>
> Key: HIVE-13264
> URL: https://issues.apache.org/jira/browse/HIVE-13264
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Reporter: NITHIN MAHESH
>Assignee: NITHIN MAHESH
>  Labels: jdbc
> Attachments: HIVE-13264.1.patch, HIVE-13264.2.patch, 
> HIVE-13264.3.patch, HIVE-13264.4.patch, HIVE-13264.5.patch, 
> HIVE-13264.6.patch, HIVE-13264.patch
>
>
> When HTTP is used as the transport mode by the Hive JDBC driver, we noticed 
> that there is an additional open/close session just to validate the 
> connection. 
>  
> TCLIService.Iface client = new TCLIService.Client(new TBinaryProtocol(transport));
> TOpenSessionResp openResp = client.OpenSession(new TOpenSessionReq());
> if (openResp != null) {
>   client.CloseSession(new TCloseSessionReq(openResp.getSessionHandle()));
> }
>  
> The open session call is a costly one and should not be used to test the 
> transport.





[jira] [Commented] (HIVE-13725) ACID: Streaming API should synchronize calls when multiple threads use the same endpoint

2016-05-23 Thread David Edelstein (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296962#comment-15296962
 ] 

David Edelstein commented on HIVE-13725:


We run into this problem when we have multiple streaming destinations, each 
with its own connection and batches.  We get this error while heartbeating 
some transactions as a batch is being committed.  Why should separate batches 
require thread safety?

> ACID: Streaming API should synchronize calls when multiple threads use the 
> same endpoint
> 
>
> Key: HIVE-13725
> URL: https://issues.apache.org/jira/browse/HIVE-13725
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, Metastore, Transactions
>Affects Versions: 1.2.1, 2.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Critical
>  Labels: ACID, Streaming
>
> Currently, the streaming endpoint creates a metastore client which gets used 
> for RPC. The client itself is not internally thread safe. Therefore, the API 
> methods should provide the relevant synchronization so that the methods can 
> be called from different threads. A sample use case is as follows:
> 1. Thread 1 creates a streaming endpoint and opens a txn batch.
> 2. Thread 2 heartbeats the txn batch.
> With the current impl, this can result in an "out of sequence response", 
> since the response of the calls in thread1 might end up going to thread2 and 
> vice-versa.
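A minimal sketch of the fix direction (illustrative only; this is not Hive's streaming API or its metastore client): guard every client method with the same monitor, so a heartbeat thread and a commit thread cannot interleave request/response pairs on the shared, non-thread-safe transport.

```java
class SynchronizedClient {
  // Stands in for the underlying RPC transport; in the real client an
  // interleaved write here is what produces "out of sequence response".
  private final StringBuilder wire = new StringBuilder();

  // Every API method takes the same intrinsic lock, so calls from
  // different threads are fully serialized on the transport.
  public synchronized void heartbeat(long txnId) {
    wire.append("hb:").append(txnId).append(';');
  }

  public synchronized void commit(long txnId) {
    wire.append("commit:").append(txnId).append(';');
  }

  public synchronized String trace() {
    return wire.toString();
  }
}
```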





[jira] [Updated] (HIVE-12594) X lock on partition should not conflict with S lock on DB

2016-05-23 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-12594:
--
Issue Type: Improvement  (was: Bug)

> X lock on partition should not conflict with S lock on DB
> -
>
> Key: HIVE-12594
> URL: https://issues.apache.org/jira/browse/HIVE-12594
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> An S lock on the DB is acquired when creating a new table in that DB, to make 
> sure the DB is not dropped at the same time.
> This should not conflict with operations such as rebuild index, which takes 
> an exclusive lock on a partition.  See also HIVE-10242.
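The proposed compatibility rule can be sketched as follows (a hypothetical checker, not Hive's lock manager): requests on different objects, such as a DB and a partition under it, never conflict, so an S lock on the DB coexists with an X lock on a partition.

```java
enum LockType { SHARED, EXCLUSIVE }

final class LockChecker {
  // Two requests conflict only if they target the same object and at
  // least one of them is exclusive; locks taken at different
  // granularities (DB vs. partition) never conflict under this rule.
  static boolean conflicts(String objA, LockType a, String objB, LockType b) {
    return objA.equals(objB) && (a == LockType.EXCLUSIVE || b == LockType.EXCLUSIVE);
  }
}
```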





[jira] [Commented] (HIVE-13725) ACID: Streaming API should synchronize calls when multiple threads use the same endpoint

2016-05-23 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296945#comment-15296945
 ] 

Vaibhav Gumashta commented on HIVE-13725:
-

[~ekoifman] Agree. Will take that into consideration in my patch.

> ACID: Streaming API should synchronize calls when multiple threads use the 
> same endpoint
> 
>
> Key: HIVE-13725
> URL: https://issues.apache.org/jira/browse/HIVE-13725
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, Metastore, Transactions
>Affects Versions: 1.2.1, 2.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Critical
>  Labels: ACID, Streaming
>
> Currently, the streaming endpoint creates a metastore client which gets used 
> for RPC. The client itself is not internally thread safe. Therefore, the API 
> methods should provide the relevant synchronization so that the methods can 
> be called from different threads. A sample use case is as follows:
> 1. Thread 1 creates a streaming endpoint and opens a txn batch.
> 2. Thread 2 heartbeats the txn batch.
> With the current impl, this can result in an "out of sequence response", 
> since the response of the calls in thread1 might end up going to thread2 and 
> vice-versa.





[jira] [Updated] (HIVE-13369) AcidUtils.getAcidState() is not paying attention toValidTxnList when choosing the "best" base file

2016-05-23 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13369:
--
Target Version/s: 2.0.0, 1.3.0, 2.1.0  (was: 1.3.0, 2.0.0)

> AcidUtils.getAcidState() is not paying attention toValidTxnList when choosing 
> the "best" base file
> --
>
> Key: HIVE-13369
> URL: https://issues.apache.org/jira/browse/HIVE-13369
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Wei Zheng
>Priority: Blocker
>
> The JavaDoc on getAcidState() reads, in part:
> "Note that because major compactions don't
>preserve the history, we can't use a base directory that includes a
>transaction id that we must exclude."
> which is correct but there is nothing in the code that does this.
> And if we detect a situation where txn X must be excluded but there are 
> deltas that contain X, we'll have to abort the txn.  This can't (reasonably) 
> happen with auto-commit mode, but with multi-statement txns it's possible.
> Suppose some long-running txn starts and locks in a snapshot at 17 (HWM).  An 
> hour later it decides to access some partition for which all txns < 20 (for 
> example) have already been compacted (i.e. GC'd).
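The base-selection rule described in the JavaDoc can be sketched like this (illustrative only, not AcidUtils itself): choose the largest base whose transaction id is within the snapshot's high-water mark and which does not fold in any excluded (open or aborted) transaction.

```java
import java.util.List;
import java.util.Optional;

final class BaseChooser {
  // A base at txn id B contains all txns <= B, so it is unusable if any
  // excluded txn e satisfies e <= B. Among the usable bases visible in
  // the snapshot, pick the largest.
  static Optional<Long> chooseBase(List<Long> baseTxnIds, long highWaterMark,
                                   List<Long> excluded) {
    return baseTxnIds.stream()
        .filter(txn -> txn <= highWaterMark)                        // visible in the snapshot
        .filter(txn -> excluded.stream().noneMatch(e -> e <= txn))  // no excluded txn folded in
        .max(Long::compare);
  }
}
```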





[jira] [Updated] (HIVE-13725) ACID: Streaming API should synchronize calls when multiple threads use the same endpoint

2016-05-23 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13725:
--
Priority: Critical  (was: Major)

> ACID: Streaming API should synchronize calls when multiple threads use the 
> same endpoint
> 
>
> Key: HIVE-13725
> URL: https://issues.apache.org/jira/browse/HIVE-13725
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog, Metastore, Transactions
>Affects Versions: 1.2.1, 2.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>Priority: Critical
>  Labels: ACID, Streaming
>
> Currently, the streaming endpoint creates a metastore client which gets used 
> for RPC. The client itself is not internally thread safe. Therefore, the API 
> methods should provide the relevant synchronization so that the methods can 
> be called from different threads. A sample use case is as follows:
> 1. Thread 1 creates a streaming endpoint and opens a txn batch.
> 2. Thread 2 heartbeats the txn batch.
> With the current impl, this can result in an "out of sequence response", 
> since the response of the calls in thread1 might end up going to thread2 and 
> vice-versa.





[jira] [Updated] (HIVE-13821) OrcSplit groups all delta files together into a single split

2016-05-23 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13821:
--
Target Version/s: 1.3.0, 2.1.0  (was: 2.1.0)

> OrcSplit groups all delta files together into a single split
> 
>
> Key: HIVE-13821
> URL: https://issues.apache.org/jira/browse/HIVE-13821
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
> Attachments: HIVE-13821.1.patch
>
>
> HIVE-7428 had a fix for the worst-case column projection size estimate. It 
> was removed in HIVE-10397 to return the file length, but for the ACID 
> strategy the file length is passed as 0. In the worst case, this always 
> returns 0 and all files end up in a single split.





[jira] [Updated] (HIVE-13821) OrcSplit groups all delta files together into a single split

2016-05-23 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13821:
--
Priority: Critical  (was: Major)

> OrcSplit groups all delta files together into a single split
> 
>
> Key: HIVE-13821
> URL: https://issues.apache.org/jira/browse/HIVE-13821
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Critical
> Attachments: HIVE-13821.1.patch
>
>
> HIVE-7428 had a fix for the worst-case column projection size estimate. It 
> was removed in HIVE-10397 to return the file length, but for the ACID 
> strategy the file length is passed as 0. In the worst case, this always 
> returns 0 and all files end up in a single split.





[jira] [Updated] (HIVE-11956) SHOW LOCKS should indicate what acquired the lock

2016-05-23 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-11956:
--
Priority: Critical  (was: Major)

> SHOW LOCKS should indicate what acquired the lock
> -
>
> Key: HIVE-11956
> URL: https://issues.apache.org/jira/browse/HIVE-11956
> Project: Hive
>  Issue Type: Improvement
>  Components: CLI, Transactions
>Affects Versions: 0.14.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
>
> This can be a queryId, Flume agent id, Storm bolt id, etc.  This would 
> dramatically help diagnose issues.





[jira] [Updated] (HIVE-13821) OrcSplit groups all delta files together into a single split

2016-05-23 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13821:
--
Component/s: Transactions

> OrcSplit groups all delta files together into a single split
> 
>
> Key: HIVE-13821
> URL: https://issues.apache.org/jira/browse/HIVE-13821
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-13821.1.patch
>
>
> HIVE-7428 had a fix for the worst-case column projection size estimate. It 
> was removed in HIVE-10397 to return the file length, but for the ACID 
> strategy the file length is passed as 0. In the worst case, this always 
> returns 0 and all files end up in a single split.





[jira] [Updated] (HIVE-13822) TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot parse COLUMN_STATS

2016-05-23 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13822:
-
Description: 
Thanks to [~jcamachorodriguez] for uncovering this issue as part of HIVE-13269. 
StatsSetupConst.areColumnStatsUptoDate() is used to check whether stats are 
up-to-date.  In the case of PerfCliDriver, ‘false’ (thus, not up-to-date) is 
returned, and the following debug message appears in the logs:

{code}
In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
StatsSetupConst)
{code}

Looks like the issue started happening after HIVE-12261 went in. 

The fix would be to replace
{color:red}COLUMN_STATS_ACCURATE,true{color}
with
{color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color}
(where key and value are the column names)
in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt

  was:
Thanks to [~jcamachorodriguez] for uncovering this issue as part of HIVE-13269. 
StatsSetupConst.areColumnStatsUptoDate() is used to check whether stats are 
up-to-date.  In case of PerfCliDriver, ‘false’ (thus, not up-to-date) is 
returned and the following debug message in the logs:

{code}
In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
StatsSetupConst)
{code}

Looks like the issue started happening after HIVE-12261 went in. 

The fix would be to replace
{color:red}COLUMN_STATS_ACCURATE,true{color}
with
{color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color}
in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt


> TestPerfCliDriver throws warning in StatsSetupConst that  JsonParser cannot 
> parse COLUMN_STATS
> --
>
> Key: HIVE-13822
> URL: https://issues.apache.org/jira/browse/HIVE-13822
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>
> Thanks to [~jcamachorodriguez] for uncovering this issue as part of 
> HIVE-13269. StatsSetupConst.areColumnStatsUptoDate() is used to check whether 
> stats are up-to-date.  In the case of PerfCliDriver, ‘false’ (thus, not 
> up-to-date) is returned, and the following debug message appears in the logs:
> {code}
> In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
> StatsSetupConst)
> {code}
> Looks like the issue started happening after HIVE-12261 went in. 
> The fix would be to replace
> {color:red}COLUMN_STATS_ACCURATE,true{color}
> with
> {color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color}
> where key, value are the column names.
> in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt





[jira] [Updated] (HIVE-13822) TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot parse COLUMN_STATS

2016-05-23 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13822:
-
Status: Open  (was: Patch Available)

> TestPerfCliDriver throws warning in StatsSetupConst that  JsonParser cannot 
> parse COLUMN_STATS
> --
>
> Key: HIVE-13822
> URL: https://issues.apache.org/jira/browse/HIVE-13822
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>
> Thanks to [~jcamachorodriguez] for uncovering this issue as part of 
> HIVE-13269. StatsSetupConst.areColumnStatsUptoDate() is used to check whether 
> stats are up-to-date.  In the case of PerfCliDriver, ‘false’ (thus, not 
> up-to-date) is returned, and the following debug message appears in the logs:
> {code}
> In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
> StatsSetupConst)
> {code}
> Looks like the issue started happening after HIVE-12261 went in. 
> The fix would be to replace
> {color:red}COLUMN_STATS_ACCURATE,true{color}
> with
> {color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color}
> in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt, where key and 
> value are the column names.





[jira] [Updated] (HIVE-13566) Auto-gather column stats - phase 1

2016-05-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13566:
---
Status: Patch Available  (was: Open)

> Auto-gather column stats - phase 1
> --
>
> Key: HIVE-13566
> URL: https://issues.apache.org/jira/browse/HIVE-13566
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13566.01.patch, HIVE-13566.02.patch, 
> HIVE-13566.03.patch
>
>
> This JIRA adds code and tests for auto-gathered column stats. The golden file 
> update will be done in phase 2 (HIVE-11160).





[jira] [Updated] (HIVE-13566) Auto-gather column stats - phase 1

2016-05-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13566:
---
Status: Open  (was: Patch Available)

> Auto-gather column stats - phase 1
> --
>
> Key: HIVE-13566
> URL: https://issues.apache.org/jira/browse/HIVE-13566
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13566.01.patch, HIVE-13566.02.patch, 
> HIVE-13566.03.patch
>
>
> This JIRA adds code and tests for auto-gathered column stats. The golden file 
> update will be done in phase 2 (HIVE-11160).





[jira] [Updated] (HIVE-13566) Auto-gather column stats - phase 1

2016-05-23 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-13566:
---
Attachment: HIVE-13566.03.patch

address skewjoin again.

> Auto-gather column stats - phase 1
> --
>
> Key: HIVE-13566
> URL: https://issues.apache.org/jira/browse/HIVE-13566
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-13566.01.patch, HIVE-13566.02.patch, 
> HIVE-13566.03.patch
>
>
> This JIRA adds code and tests for auto-gathered column stats. The golden file 
> update will be done in phase 2 (HIVE-11160).





[jira] [Updated] (HIVE-13822) TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot parse COLUMN_STATS

2016-05-23 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13822:
-
Attachment: (was: HIVE-13822.1.patch)

> TestPerfCliDriver throws warning in StatsSetupConst that  JsonParser cannot 
> parse COLUMN_STATS
> --
>
> Key: HIVE-13822
> URL: https://issues.apache.org/jira/browse/HIVE-13822
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>
> Thanks to [~jcamachorodriguez] for uncovering this issue as part of 
> HIVE-13269. StatsSetupConst.areColumnStatsUptoDate() is used to check whether 
> stats are up-to-date.  In the case of PerfCliDriver, ‘false’ (thus, not 
> up-to-date) is returned and the following debug message appears in the logs:
> {code}
> In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
> StatsSetupConst)
> {code}
> Looks like the issue started happening after HIVE-12261 went in. 
> The fix would be to replace
> {color:red}COLUMN_STATS_ACCURATE,true{color}
> with
> {color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color}
> in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt





[jira] [Commented] (HIVE-13821) OrcSplit groups all delta files together into a single split

2016-05-23 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15296869#comment-15296869
 ] 

Eugene Koifman commented on HIVE-13821:
---

is it feasible to add a test for this?

> OrcSplit groups all delta files together into a single split
> 
>
> Key: HIVE-13821
> URL: https://issues.apache.org/jira/browse/HIVE-13821
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-13821.1.patch
>
>
> HIVE-7428 added a fix for the worst-case column projection size estimate. It 
> was removed in HIVE-10397 in favor of returning the file length, but for the 
> ACID strategy the file length is passed as 0. In the worst case the estimate 
> is therefore always 0, and all files end up in a single split.
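The effect of a zero length estimate can be shown with a toy size-based grouping model. This is an illustrative sketch only (the real logic lives in Hive's Java ORC split-generation strategies); the function name and threshold below are made up for the example:

```python
def group_into_splits(file_sizes, max_split_size):
    """Toy split grouping: accumulate files into the current split until the
    estimated cumulative size reaches max_split_size, then start a new split.
    Hypothetical model of the behavior described in this issue."""
    splits, current, current_size = [], [], 0
    for name, size in file_sizes:
        current.append(name)
        current_size += size
        if current_size >= max_split_size:
            splits.append(current)
            current, current_size = [], 0
    if current:
        splits.append(current)
    return splits

# With real lengths, the delta files spread across multiple splits...
real = [("delta_1", 64), ("delta_2", 64), ("delta_3", 64)]
print(len(group_into_splits(real, 100)))    # -> 2

# ...but when every length is reported as 0 (the ACID case above), the
# running size never reaches the threshold and everything collapses into
# one split.
zeroed = [("delta_1", 0), ("delta_2", 0), ("delta_3", 0)]
print(len(group_into_splits(zeroed, 100)))  # -> 1
```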





[jira] [Updated] (HIVE-13822) TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot parse COLUMN_STATS

2016-05-23 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13822:
-
Attachment: HIVE-13822.1.patch

> TestPerfCliDriver throws warning in StatsSetupConst that  JsonParser cannot 
> parse COLUMN_STATS
> --
>
> Key: HIVE-13822
> URL: https://issues.apache.org/jira/browse/HIVE-13822
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13822.1.patch
>
>
> Thanks to [~jcamachorodriguez] for uncovering this issue as part of 
> HIVE-13269. StatsSetupConst.areColumnStatsUptoDate() is used to check whether 
> stats are up-to-date.  In the case of PerfCliDriver, ‘false’ (thus, not 
> up-to-date) is returned and the following debug message appears in the logs:
> {code}
> In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
> StatsSetupConst)
> {code}
> Looks like the issue started happening after HIVE-12261 went in. 
> The fix would be to replace
> {color:red}COLUMN_STATS_ACCURATE,true{color}
> with
> {color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color}
> in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt





[jira] [Updated] (HIVE-13822) TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot parse COLUMN_STATS

2016-05-23 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13822:
-
Status: Patch Available  (was: Open)

> TestPerfCliDriver throws warning in StatsSetupConst that  JsonParser cannot 
> parse COLUMN_STATS
> --
>
> Key: HIVE-13822
> URL: https://issues.apache.org/jira/browse/HIVE-13822
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
> Attachments: HIVE-13822.1.patch
>
>
> Thanks to [~jcamachorodriguez] for uncovering this issue as part of 
> HIVE-13269. StatsSetupConst.areColumnStatsUptoDate() is used to check whether 
> stats are up-to-date.  In the case of PerfCliDriver, ‘false’ (thus, not 
> up-to-date) is returned and the following debug message appears in the logs:
> {code}
> In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
> StatsSetupConst)
> {code}
> Looks like the issue started happening after HIVE-12261 went in. 
> The fix would be to replace
> {color:red}COLUMN_STATS_ACCURATE,true{color}
> with
> {color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color}
> in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt





[jira] [Updated] (HIVE-13822) TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot parse COLUMN_STATS

2016-05-23 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13822:
-
Description: 
Thanks to [~jcamachorodriguez] for uncovering this issue as part of HIVE-13269. 
StatsSetupConst.areColumnStatsUptoDate() is used to check whether stats are 
up-to-date.  In the case of PerfCliDriver, ‘false’ (thus, not up-to-date) is 
returned and the following debug message appears in the logs:

{code}
In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
StatsSetupConst)
{code}

Looks like the issue started happening after HIVE-12261 went in. 

The fix would be to replace
{color:red}COLUMN_STATS_ACCURATE,true{color}
with
{color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color}
in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt

  was:
Thanks to [~jcamachorodriguez] for uncovering this issue as part of HIVE-13269. 
StatsSetupConst.areColumnStatsUptoDate() is used to check whether stats are 
up-to-date.  In case of PerfCliDriver, ‘false’ (thus, not up-to-date) is 
returned and the following debug message in the logs:

{code}
In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
StatsSetupConst)
{code}

Looks like the issue started happening after HIVE-12261 went in. 

The fix would be to replace
{{COLUMN_STATS_ACCURATE,true}}
with
{{COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}
 }}
in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt


> TestPerfCliDriver throws warning in StatsSetupConst that  JsonParser cannot 
> parse COLUMN_STATS
> --
>
> Key: HIVE-13822
> URL: https://issues.apache.org/jira/browse/HIVE-13822
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>
> Thanks to [~jcamachorodriguez] for uncovering this issue as part of 
> HIVE-13269. StatsSetupConst.areColumnStatsUptoDate() is used to check whether 
> stats are up-to-date.  In the case of PerfCliDriver, ‘false’ (thus, not 
> up-to-date) is returned and the following debug message appears in the logs:
> {code}
> In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
> StatsSetupConst)
> {code}
> Looks like the issue started happening after HIVE-12261 went in. 
> The fix would be to replace
> {color:red}COLUMN_STATS_ACCURATE,true{color}
> with
> {color:green}COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}{color}
> in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt





[jira] [Updated] (HIVE-13822) TestPerfCliDriver throws warning in StatsSetupConst that JsonParser cannot parse COLUMN_STATS

2016-05-23 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-13822:
-
Description: 
Thanks to [~jcamachorodriguez] for uncovering this issue as part of HIVE-13269. 
StatsSetupConst.areColumnStatsUptoDate() is used to check whether stats are 
up-to-date.  In the case of PerfCliDriver, ‘false’ (thus, not up-to-date) is 
returned and the following debug message appears in the logs:

{code}
In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
StatsSetupConst)
{code}

Looks like the issue started happening after HIVE-12261 went in. 

The fix would be to replace
{{COLUMN_STATS_ACCURATE,true}}
with
{{COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}}}
in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt

  was:
Thanks to [~jcamachorodriguez] for uncovering this issue as part of HIVE-13269. 
StatsSetupConst.areColumnStatsUptoDate() is used to check whether stats are 
up-to-date.  In case of PerfCliDriver, ‘false’ (thus, not up-to-date) is 
returned and the following debug message in the logs:

{code}
In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
StatsSetupConst)
{code}

Looks like the issue started happening after HIVE-12261 went in. 

The fix would be to replace
{{COLUMN_STATS_ACCURATE,true}}
with
{{COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}}}
in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt


> TestPerfCliDriver throws warning in StatsSetupConst that  JsonParser cannot 
> parse COLUMN_STATS
> --
>
> Key: HIVE-13822
> URL: https://issues.apache.org/jira/browse/HIVE-13822
> Project: Hive
>  Issue Type: Bug
>Reporter: Hari Sankar Sivarama Subramaniyan
>Assignee: Hari Sankar Sivarama Subramaniyan
>
> Thanks to [~jcamachorodriguez] for uncovering this issue as part of 
> HIVE-13269. StatsSetupConst.areColumnStatsUptoDate() is used to check whether 
> stats are up-to-date.  In the case of PerfCliDriver, ‘false’ (thus, not 
> up-to-date) is returned and the following debug message appears in the logs:
> {code}
> In StatsSetupConst, JsonParser can not parse COLUMN_STATS. (line 190 in 
> StatsSetupConst)
> {code}
> Looks like the issue started happening after HIVE-12261 went in. 
> The fix would be to replace
> {{COLUMN_STATS_ACCURATE,true}}
> with
> {{COLUMN_STATS_ACCURATE,{"COLUMN_STATS":{"key":"true","value":"true"},"BASIC_STATS":"true"}}}
> in data/files/tpcds-perf/metastore_export/csv/TABLE_PARAMS.txt




