[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-11 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472488#comment-16472488
 ] 

Sankar Hariappan commented on HIVE-18193:
-

The branch-3 patch works fine locally; the failure looks like a ptest issue.

Patch committed to branch-3 as well.

> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-18193.01-branch-3.patch, HIVE-18193.01.patch, 
> HIVE-18193.02.patch
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.
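As a rough illustration of steps 1-3 above, the metastore-side SQL could look like the sketch below. This is only a sketch: the real DDL lives in the per-database upgrade-2.3.0-to-3.0.0.*.sql scripts touched by the patch, and every identifier not named in the description (NWI_DATABASE/NWI_TABLE, T2W_DATABASE/T2W_TABLE, the TBLS/DBS/TABLE_PARAMS join, and all column types) is assumed here purely for illustration.

{code:sql}
-- Sketch only; actual syntax and types differ per database
-- (derby/mssql/mysql/oracle/postgres upgrade scripts).

-- Step 1: new write-id metadata tables (extra columns are assumed).
CREATE TABLE NEXT_WRITE_ID (
  NWI_DATABASE VARCHAR(128) NOT NULL,  -- assumed
  NWI_TABLE    VARCHAR(256) NOT NULL,  -- assumed
  NWI_NEXT     BIGINT       NOT NULL
);

CREATE TABLE TXN_TO_WRITE_ID (
  T2W_TXNID    BIGINT       NOT NULL,
  T2W_DATABASE VARCHAR(128) NOT NULL,  -- assumed
  T2W_TABLE    VARCHAR(256) NOT NULL,  -- assumed
  T2W_WRITEID  BIGINT       NOT NULL
);

-- Step 2: seed NWI_NEXT for every existing ACID/MM table from the current
-- global sequence value; the TBLS/DBS/TABLE_PARAMS join is illustrative.
INSERT INTO NEXT_WRITE_ID (NWI_DATABASE, NWI_TABLE, NWI_NEXT)
SELECT d.NAME, t.TBL_NAME, (SELECT ntx.NTXN_NEXT FROM NEXT_TXN_ID ntx)
FROM TBLS t
JOIN DBS d ON d.DB_ID = t.DB_ID
JOIN TABLE_PARAMS p ON p.TBL_ID = t.TBL_ID
WHERE p.PARAM_KEY = 'transactional' AND p.PARAM_VALUE = 'true';

-- Step 3: give every currently open/aborted transaction a write id equal to
-- its txn id, so existing base/delta directories keep lining up.
INSERT INTO TXN_TO_WRITE_ID (T2W_TXNID, T2W_DATABASE, T2W_TABLE, T2W_WRITEID)
SELECT DISTINCT tc.TC_TXNID, tc.TC_DATABASE, tc.TC_TABLE, tc.TC_TXNID
FROM TXN_COMPONENTS tc;  -- assumed source of (txn, table) pairs
{code}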



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472473#comment-16472473
 ] 

Hive QA commented on HIVE-18193:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12922796/HIVE-18193.01-branch-3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10831/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10831/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10831/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-05-11 18:45:13.081
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10831/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z branch-3 ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-05-11 18:45:13.085
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   9cfc15a..154a686  master -> origin/master
   b331338..32e29cc  branch-3   -> origin/branch-3
+ git reset --hard HEAD
HEAD is now at 9cfc15a HIVE-19435: Incremental replication cause data loss if a 
table is dropped followed by create and insert-into with different partition 
type (Sankar Hariappan, reviewed by Mahesh Kumar Behera, Thejas M Nair)
+ git clean -f -d
+ git checkout branch-3
Switched to branch 'branch-3'
Your branch is behind 'origin/branch-3' by 4 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/branch-3
HEAD is now at 32e29cc HIVE-19453 : Extend Load Data statement to take Input 
file format and Serde as parameters (Deepak Jaiswal, reviewed by Jason Dere)
+ git merge --ff-only origin/branch-3
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-05-11 18:45:18.836
+ rm -rf ../yetus_PreCommit-HIVE-Build-10831
+ mkdir ../yetus_PreCommit-HIVE-Build-10831
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10831
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10831/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/txn/TxnDbUtil.java:
 does not exist in index
error: a/standalone-metastore/src/main/sql/derby/hive-schema-3.0.0.derby.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/derby/upgrade-2.3.0-to-3.0.0.derby.sql: 
does not exist in index
error: a/standalone-metastore/src/main/sql/mssql/hive-schema-3.0.0.mssql.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/mssql/upgrade-2.3.0-to-3.0.0.mssql.sql: 
does not exist in index
error: a/standalone-metastore/src/main/sql/mysql/hive-schema-3.0.0.mysql.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/mysql/upgrade-2.3.0-to-3.0.0.mysql.sql: 
does not exist in index
error: a/standalone-metastore/src/main/sql/oracle/hive-schema-3.0.0.oracle.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/oracle/upgrade-2.3.0-to-3.0.0.oracle.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/postgres/hive-schema-3.0.0.postgres.sql: 
does not exist in index
error: 
a/standalone-metastore/src/main/sql/postgres/upgrade-2.3.0-to-3.0.0.postgres.sql:
 does not exist in index
Going to apply patch with: git apply -p1
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc5755933008813138933.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc5755933008813138933.exe, 

[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-09 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469928#comment-16469928
 ] 

Sankar Hariappan commented on HIVE-18193:
-

Thanks for the review [~thejas], [~ekoifman] and [~maheshk114]!

Patch committed to master. Will submit the branch-3 patch shortly.

> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-18193.01.patch, HIVE-18193.02.patch
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.
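Step 4 of the description amounts to a schema change plus a backfill. A minimal sketch in generic SQL, assuming BIGINT columns and leaving out per-database syntax differences and any constraints or defaults:

{code:sql}
-- Step 4 sketch: add the write id columns and backfill them from the
-- transaction id in the same row, matching the txnid == writeid mapping
-- chosen above for pre-existing data.
ALTER TABLE TXN_COMPONENTS ADD TC_WRITEID BIGINT;
ALTER TABLE COMPLETED_TXN_COMPONENTS ADD CTC_WRITEID BIGINT;

UPDATE TXN_COMPONENTS SET TC_WRITEID = TC_TXNID;
UPDATE COMPLETED_TXN_COMPONENTS SET CTC_WRITEID = CTC_TXNID;
{code}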



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-09 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469437#comment-16469437
 ] 

Eugene Koifman commented on HIVE-18193:
---

LGTM

> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-18193.01.patch, HIVE-18193.02.patch
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-09 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469277#comment-16469277
 ] 

Thejas M Nair commented on HIVE-18193:
--

+1


> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-18193.01.patch, HIVE-18193.02.patch
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-07 Thread mahesh kumar behera (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16466815#comment-16466815
 ] 

mahesh kumar behera commented on HIVE-18193:


With ACID replication not yet supported, if we upgrade all tables to ACID, will 
it not disable replication for all tables?

> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-18193.01.patch, HIVE-18193.02.patch
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-06 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465400#comment-16465400
 ] 

Sankar Hariappan commented on HIVE-18193:
-

The test failures are unrelated to this patch.

[~ekoifman], [~maheshk114], [~thejas], please review it.

> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-18193.01.patch, HIVE-18193.02.patch
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-06 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465031#comment-16465031
 ] 

Hive QA commented on HIVE-18193:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12922142/HIVE-18193.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 14322 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_vector_dynpart_hashjoin_1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning 
(batchId=309)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235)
org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel 
(batchId=235)
org.apache.hive.jdbc.TestSSL.testSSLFetchHttp (batchId=239)
org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testMultipleTriggers2 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomCreatedFiles 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomNonExistent 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerCustomReadOps 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesRead 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighBytesWrite 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerHighShuffleBytes 
(batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryElapsedTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerSlowQueryExecutionTime
 (batchId=241)
org.apache.hive.jdbc.TestTriggersWorkloadManager.testTriggerVertexRawInputSplitsNoKill
 (batchId=241)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=304)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10725/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10725/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10725/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 35 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12922142 - PreCommit-HIVE-Build

> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>   

[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-06 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465007#comment-16465007
 ] 

Hive QA commented on HIVE-18193:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10725/dev-support/hive-personality.sh
 |
| git revision | master / f8a6c74 |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10725/yetus/whitespace-tabs.txt
 |
| modules | C: metastore standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10725/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-18193.01.patch, HIVE-18193.02.patch
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16464703#comment-16464703
 ] 

Hive QA commented on HIVE-18193:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12921959/HIVE-18193.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10704/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10704/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10704/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Tests exited with: ExecutionException: java.util.concurrent.ExecutionException: 
java.io.IOException: Could not create 
/data/hiveptest/logs/PreCommit-HIVE-Build-10704/succeeded/179-TestEncryptedHDFSCliDriver-encryption_drop_table.q-encryption_insert_values.q-encryption_select_read_only_unencrypted_tbl.q-and-3-more
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12921959 - PreCommit-HIVE-Build

> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-18193.01.patch, HIVE-18193.02.patch
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-05 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16464702#comment-16464702
 ] 

Hive QA commented on HIVE-18193:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10704/dev-support/hive-personality.sh
 |
| git revision | master / 52f1b24 |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10704/yetus/whitespace-tabs.txt
 |
| modules | C: metastore standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10704/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-18193.01.patch, HIVE-18193.02.patch
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463661#comment-16463661
 ] 

Hive QA commented on HIVE-18193:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12921772/HIVE-18193.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 83 failed/errored test(s), 14316 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTxnExIm - did not produce a TEST-*.xml file (likely timed out) (batchId=286)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez2]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_4]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_time_window]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_reflect_neg] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udf_test_error] 
(batchId=96)
org.apache.hadoop.hive.metastore.TestHiveMetaStoreTxns.testLocksWithTxn 
(batchId=219)
org.apache.hadoop.hive.metastore.tools.TestSchemaToolForMetastore.testMetastoreDbPropertiesAfterUpgrade
 (batchId=211)
org.apache.hadoop.hive.metastore.tools.TestSchemaToolForMetastore.testSchemaUpgrade
 (batchId=211)
org.apache.hadoop.hive.metastore.tools.TestSchemaToolForMetastore.testValidateSchemaTables
 (batchId=211)
org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testFindPotentialCompactions
 (batchId=264)
org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.testMarkCleanedCleansTxnsAndTxnComponents
 (batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testCheckLockAcquireAfterWaiting
 (batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testCheckLockTxnAborted 
(batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testLockEESW (batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testLockESRSW (batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testLockSRSW (batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testLockSWSR (batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testLockSWSWSR (batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testLockSWSWSW (batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testUnlockOnAbort 
(batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testUnlockOnCommit 
(batchId=264)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testUnlockWithTxn 
(batchId=264)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228)
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 
(batchId=228)
org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testDelete (batchId=301)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testReadWrite (batchId=301)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testRollback (batchId=301)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testSingleWritePartition 
(batchId=301)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testSingleWriteTable 
(batchId=301)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testUpdate (batchId=301)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager.testWriteDynamicPartition 
(batchId=301)
org.apache.hadoop.hive.ql.parse.TestCopyUtils.testPrivilegedDistCpWithSameUserAsCurrentDoesNotTryToImpersonate
 (batchId=231)
org.apache.hadoop.hive.ql.parse.TestReplicationOnHDFSEncryptedZones.targetAndSourceHaveDifferentEncryptionZoneKeys
 (batchId=231)
org.apache.hadoop.hive.ql.txn.compactor.TestCleaner.blockedByLockPartition 
(batchId=273)

[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-05-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16463594#comment-16463594
 ] 

Hive QA commented on HIVE-18193:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10669/dev-support/hive-personality.sh
 |
| git revision | master / 39917ef |
| Default Java | 1.8.0_111 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10669/yetus/whitespace-tabs.txt
 |
| modules | C: metastore standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10669/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
> Attachments: HIVE-18193.01.patch
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2018-04-25 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16452979#comment-16452979
 ] 

Eugene Koifman commented on HIVE-18193:
---

This is a blocker for 3.0.

> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Blocker
>  Labels: ACID, Upgrade
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.
> 1. Create metadata tables such as NEXT_WRITE_ID and TXN_TO_WRITE_ID.
> 2. Add an entry for each ACID/MM table into NEXT_WRITE_ID, with NWI_NEXT 
> set to the current value of NEXT_TXN_ID.NTXN_NEXT.
> 3. All currently open/aborted transactions should have an entry in 
> TXN_TO_WRITE_ID such that T2W_TXNID=T2W_WRITEID=Open/AbortedTxnId.
> 4. Add new columns TC_WRITEID in TXN_COMPONENTS and CTC_WRITEID in 
> COMPLETED_TXN_COMPONENTS to store the write id, set to the respective 
> values of TC_TXNID and CTC_TXNID from the same row.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2017-12-20 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299511#comment-16299511
 ] 

anishek commented on HIVE-18193:


Hey [~asherman],

The design doc is attached to HIVE-18320, which will be the umbrella JIRA for 
ACID-related work.


> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18193) Migrate existing ACID tables to use write id per table rather than global transaction id

2017-12-01 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16274638#comment-16274638
 ] 

Andrew Sherman commented on HIVE-18193:
---

Hi [~anishek], can you explain a little more about this? How is the write id 
generated? Will this require on-disk data to be rewritten?
Sorry if I missed previous discussions, but is there a design for snapshot 
isolation? Thanks.

> Migrate existing ACID tables to use write id per table rather than global 
> transaction id
> 
>
> Key: HIVE-18193
> URL: https://issues.apache.org/jira/browse/HIVE-18193
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
>
> Dependent upon HIVE-18192.
> For existing ACID tables we need to update the table-level write id 
> metadata tables/sequences so that any new operations on these tables work 
> seamlessly without conflicting with data in existing base/delta files.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)