[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-28 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16380657#comment-16380657
 ] 

Sankar Hariappan commented on HIVE-18192:
-

[~sershe], I created HIVE-18824 to track this bug!

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch, 
> HIVE-18192.12.patch, HIVE-18192.13.patch, HIVE-18192.14.patch, 
> HIVE-18192.15.patch, HIVE-18192.16.patch, HIVE-18192.17.patch
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table.
> The current primary key is determined via (original transaction ID, bucket ID, 
> row ID), which will move to (table write ID, bucket ID, row ID).
> Each table modified by a given transaction will have a table-level write ID 
> allocated, and a persisted map of global txn ID -> table -> write ID has to be 
> maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details
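A minimal sketch of the change described above, in plain Java with hypothetical names (not the actual Hive RecordIdentifier or metastore API): the first key component moves from the global txn ID to a table-level write ID, backed by a persisted txn -> table -> write ID map.

{code:java}
// Illustrative sketch only; class and field names are hypothetical.
import java.util.HashMap;
import java.util.Map;

class RowKeySketch {
  // Before HIVE-18192: each row key carried the global transaction ID.
  record OldRowKey(long txnId, int bucketId, long rowId) {}

  // After HIVE-18192: the first component is the table-level write ID instead.
  record NewRowKey(long writeId, int bucketId, long rowId) {}

  // Persisted mapping that snapshot isolation relies on:
  // global txn ID -> (fully qualified table name -> write ID allocated in that txn).
  static final Map<Long, Map<String, Long>> TXN_TO_TABLE_WRITE_ID = new HashMap<>();

  public static void main(String[] args) {
    long txnId = 4L;
    TXN_TO_TABLE_WRITE_ID
        .computeIfAbsent(txnId, t -> new HashMap<>())
        .put("default.acid_dtt", 1L);          // write ID 1 allocated for this table in txn 4
    NewRowKey key = new NewRowKey(1L, 0, 0L);  // rows are stamped with the write ID, not the txn ID
    System.out.println(TXN_TO_TABLE_WRITE_ID + " " + key);
  }
}
{code}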



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16379515#comment-16379515
 ] 

Sergey Shelukhin commented on HIVE-18192:
-

[~sankarh] thanks! Is there a ticket #?



[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-26 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378087#comment-16378087
 ] 

Sankar Hariappan commented on HIVE-18192:
-

{quote}do you have an estimate on how long that "Only mm_exchangepartition test 
failure is relevant to this patch" will be the guest on the regularly failing 
tests' list? :)
{quote}
[~kgyrtkirk]

I'm targeting to close HIVE-18750 within a week... :)



[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-26 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378082#comment-16378082
 ] 

Sankar Hariappan commented on HIVE-18192:
-

[~sershe]

If you see "4" for "hive.txn.tables.valid.writeids" means the current txn_id is 
4. Also, we build the ValidWriteIdList only for tables in Read Entities. As it 
is insert operation, there is only WriteEntity and we assume no body reads the 
table and hence didn't bother to build ValidWriteIdList.

Now, I see, you will read this table for stats update which looks like a bug. I 
will raise a ticket to include tables from WriteEntities as well while building 
ValidWriteIdList.
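A rough sketch of the fix direction described here, using made-up types rather than the real ReadEntity/WriteEntity and ValidWriteIdList classes: collect tables from both read and write entities before building the per-table write-ID lists.

{code:java}
// Hypothetical sketch only; Entity and this helper are stand-ins for the real
// ReadEntity/WriteEntity and ValidWriteIdList machinery in the Hive driver.
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

class WriteIdListTables {
  record Entity(String fullTableName) {}

  // Previously only read entities were considered; including write entities as
  // well ensures an insert-only query still gets a ValidWriteIdList for the
  // table it writes (needed e.g. when stats are updated by reading that table).
  static Set<String> tablesNeedingWriteIdList(List<Entity> readEntities,
                                              List<Entity> writeEntities) {
    Set<String> tables = new LinkedHashSet<>();
    readEntities.forEach(e -> tables.add(e.fullTableName()));
    writeEntities.forEach(e -> tables.add(e.fullTableName()));
    return tables;
  }

  public static void main(String[] args) {
    // Insert-only query: no read entities, one write entity.
    System.out.println(tablesNeedingWriteIdList(
        List.of(), List.of(new Entity("default.acid_dtt"))));
  }
}
{code}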

cc [~ekoifman]



[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-26 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377989#comment-16377989
 ] 

Sergey Shelukhin commented on HIVE-18192:
-

Write IDs. I'm just calling the standard method to get the ValidWriteIds object; it 
didn't have the table I need, so I logged the string it uses, which was "4".
The query is:
insert into table acid_dtt select cint, cast(cstring1 as varchar(128)) from 
alltypesorc where cint is not null order by cint limit 10;




[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-26 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377977#comment-16377977
 ] 

Eugene Koifman commented on HIVE-18192:
---

What property are you looking up? There are hive.txn.valid.txns and 
hive.txn.tables.valid.writeids.
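For illustration only: reading the two properties Eugene names from a plain Hadoop Configuration. The property names come from this thread; the Configuration instance and how the driver populates these values before tasks run are assumptions of the sketch.

{code:java}
// Sketch only: in a real query these values are set by the Hive driver; here we
// just show the lookup by the property names mentioned in this thread.
import org.apache.hadoop.conf.Configuration;

public class TxnSnapshotProps {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    String validTxns = conf.get("hive.txn.valid.txns");                // global txn snapshot
    String validWriteIds = conf.get("hive.txn.tables.valid.writeids"); // per-table write IDs
    System.out.println("valid txns      = " + validTxns);
    System.out.println("valid write ids = " + validWriteIds);
  }
}
{code}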



[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-26 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16377934#comment-16377934
 ] 

Sergey Shelukhin commented on HIVE-18192:
-

[~sankarh] [~ekoifman] I'm looking at some ACID stats improvements and trying 
to get the ACID state during an insert query.
When I read the config setting where the txn state is stored as a string for an 
insert query in autoColumnStats_4 (insert into an ACID table from a non-ACID 
table), the setting is "4" - basically it has the txn ID, but no write ID for any 
table.
Is that by design? How does it write into the table without having a write ID 
for that table?



[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-24 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16375581#comment-16375581
 ] 

Zoltan Haindrich commented on HIVE-18192:
-

[~sankarh] do you have an estimate on how long that "Only mm_exchangepartition 
test failure is relevant to this patch" will be the guest on the regularly 
failing tests' list? :)



[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-23 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16374605#comment-16374605
 ] 

ASF GitHub Bot commented on HIVE-18192:
---

Github user sankarh closed the pull request at:

https://github.com/apache/hive/pull/290




[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-23 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16374596#comment-16374596
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Patch 17 committed to master!

Thanks [~ekoifman] for the review!



[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16374505#comment-16374505
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12911681/HIVE-18192.17.patch

{color:green}SUCCESS:{color} +1 due to 29 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 13017 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,create_table_failure4.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,load_wrong_fileformat.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,udf_min.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,invalid_select_column.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,alter_external_with_constraint.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,authorization_create_macro1.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subque
ry_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,char_pad_convert_fail0.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri_add_partition.q,authorization_view_disable_cbo_3.q,bad_exec_hooks.q,authorization_view_disable_cbo_2.q,fetchtask_ioexception.q,char_pad_convert_fail2.q,authorization_set_role_neg1.q,serde_regex3.q,authorization_delete_nodeletepriv.q,materialized_view_delete.q,create_or_replace_view6.q,bucket_mapjoin_wrong_table_metadata_2.q,msck_repair_3.q,udf_sort_array_by_wrong2.q,local_mapred_error_cache.q,alter_external_acid.q,mm_concatenate.q,authorization_fail_3.q,set_hiveconf_internal_variable0.q,udf_last_day_error_2.q,alter_table_constraint_invalid_ref.q,create_table_wrong_regex.q,describe_xpath4.q,j

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16374438#comment-16374438
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
28s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
31s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 17s{color} 
| {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 31s{color} 
| {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} storage-api: The patch generated 5 new + 19 unchanged 
- 2 fixed = 24 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} hcatalog/streaming: The patch generated 9 new + 423 
unchanged - 40 fixed = 432 total (was 463) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} itests/hive-unit: The patch generated 39 new + 225 
unchanged - 45 fixed = 264 total (was 270) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
19s{color} | {color:red} ql: The patch generated 107 new + 4337 unchanged - 310 
fixed =  total (was 4647) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
27s{color} | {color:red} standalone-metastore: The patch generated 46 new + 
1299 unchanged - 43 fixed = 1345 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 70 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9330/dev-support/hive-personality.sh
 |
| git revision | master / f9768af |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9330/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9330/yetus/patch-mvninstall-itests_hive-unit.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCo

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-23 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16374055#comment-16374055
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 17.patch with fixes for 2 nits from [~ekoifman] and corrected tests for a 
few test failures.

Thanks [~ekoifman] for the detailed review and +1 !



[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16373934#comment-16373934
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12911626/HIVE-18192.16.patch

{color:green}SUCCESS:{color} +1 due to 29 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 42 failed/errored test(s), 13017 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,create_table_failure4.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,load_wrong_fileformat.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,udf_min.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,invalid_select_column.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,alter_external_with_constraint.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,authorization_create_macro1.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subque
ry_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,char_pad_convert_fail0.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri_add_partition.q,authorization_view_disable_cbo_3.q,bad_exec_hooks.q,authorization_view_disable_cbo_2.q,fetchtask_ioexception.q,char_pad_convert_fail2.q,authorization_set_role_neg1.q,serde_regex3.q,authorization_delete_nodeletepriv.q,materialized_view_delete.q,create_or_replace_view6.q,bucket_mapjoin_wrong_table_metadata_2.q,msck_repair_3.q,udf_sort_array_by_wrong2.q,local_mapred_error_cache.q,alter_external_acid.q,mm_concatenate.q,authorization_fail_3.q,set_hiveconf_internal_variable0.q,udf_last_day_error_2.q,alter_table_constraint_invalid_ref.q,create_table_wrong_regex.q,describe_xpath4.q,j

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16373907#comment-16373907
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
29s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
29s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
15s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
31s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 15s{color} 
| {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 31s{color} 
| {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} storage-api: The patch generated 5 new + 19 unchanged 
- 2 fixed = 24 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} hcatalog/streaming: The patch generated 9 new + 423 
unchanged - 40 fixed = 432 total (was 463) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 39 new + 225 
unchanged - 45 fixed = 264 total (was 270) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
14s{color} | {color:red} ql: The patch generated 107 new + 4337 unchanged - 310 
fixed =  total (was 4647) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
26s{color} | {color:red} standalone-metastore: The patch generated 46 new + 
1299 unchanged - 43 fixed = 1345 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 70 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9322/dev-support/hive-personality.sh
 |
| git revision | master / 1593c10 |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9322/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-22 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16373704#comment-16373704
 ] 

Eugene Koifman commented on HIVE-18192:
---

I added a couple of nits on the pull request for patch 16 - they can be done in a 
follow-up.

+1 for patch 16 pending tests



[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-22 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16373619#comment-16373619
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Attached 16.patch with fixes for the test failures and also for a couple of 
comments from [~ekoifman].



[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16373209#comment-16373209
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12911542/HIVE-18192.15.patch

{color:green}SUCCESS:{color} +1 due to 28 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 55 failed/errored test(s), 13016 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,create_table_failure4.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,load_wrong_fileformat.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,udf_min.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,invalid_select_column.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,alter_external_with_constraint.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,authorization_create_macro1.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subque
ry_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,char_pad_convert_fail0.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri_add_partition.q,authorization_view_disable_cbo_3.q,bad_exec_hooks.q,authorization_view_disable_cbo_2.q,fetchtask_ioexception.q,char_pad_convert_fail2.q,authorization_set_role_neg1.q,serde_regex3.q,authorization_delete_nodeletepriv.q,materialized_view_delete.q,create_or_replace_view6.q,bucket_mapjoin_wrong_table_metadata_2.q,msck_repair_3.q,udf_sort_array_by_wrong2.q,local_mapred_error_cache.q,alter_external_acid.q,mm_concatenate.q,authorization_fail_3.q,set_hiveconf_internal_variable0.q,udf_last_day_error_2.q,alter_table_constraint_invalid_ref.q,create_table_wrong_regex.q,describe_xpath4.q,j

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16373106#comment-16373106
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
34s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
31s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
17s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
34s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 17s{color} 
| {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 34s{color} 
| {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} storage-api: The patch generated 5 new + 19 unchanged 
- 2 fixed = 24 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} hcatalog/streaming: The patch generated 9 new + 423 
unchanged - 40 fixed = 432 total (was 463) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 39 new + 225 
unchanged - 45 fixed = 264 total (was 270) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
19s{color} | {color:red} ql: The patch generated 107 new + 4334 unchanged - 310 
fixed = 4441 total (was 4644) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
28s{color} | {color:red} standalone-metastore: The patch generated 46 new + 
1299 unchanged - 43 fixed = 1345 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 70 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-9313/dev-support/hive-personality.sh
 |
| git revision | master / ec2378f |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9313/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-22 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16372728#comment-16372728
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Attached 15.patch with fixes for Eugene's comments.

 

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch, 
> HIVE-18192.12.patch, HIVE-18192.13.patch, HIVE-18192.14.patch, 
> HIVE-18192.15.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> Each table modified by the given transaction will have a table-level write ID 
> allocated, and a persisted map of global txn id -> table -> write id for that 
> table has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-20 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370340#comment-16370340
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Yes [~ekoifman]. We have a follow-up Jira, HIVE-18750, to handle this failure.

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch, 
> HIVE-18192.12.patch, HIVE-18192.13.patch, HIVE-18192.14.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> Each table modified by the given transaction will have a table-level write ID 
> allocated, and a persisted map of global txn id -> table -> write id for that 
> table has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-20 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16370291#comment-16370291
 ] 

Eugene Koifman commented on HIVE-18192:
---

is 
{{org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition]}}
 a related failure?

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch, 
> HIVE-18192.12.patch, HIVE-18192.13.patch, HIVE-18192.14.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> Each table modified by the given transaction will have a table-level write ID 
> allocated, and a persisted map of global txn id -> table -> write id for that 
> table has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368941#comment-16368941
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12911093/HIVE-18192.14.patch

{color:green}SUCCESS:{color} +1 due to 28 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 36 failed/errored test(s), 13771 tests 
executed
*Failed tests:*
{noformat}
TestTriggersWorkloadManager - did not produce a TEST-*.xml file (likely timed 
out) (batchId=235)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=248)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_dynpart_hashjoin_1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_string_decimal]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_udf_string_to_boolean]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_div0]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_mapjoin3]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=179)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestAddPartitionsFromPartSpec.testAddPartitionSpecEmptyDB[Remote]
 (batchId=205)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=205)
org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex 
(batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex 
(batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex 
(batchId=242)
org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd 
(batchId=235)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9268/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9268/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9268/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 36 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12911093 - PreCommit-HIVE-Build

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368897#comment-16368897
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
15s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
29s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
16s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
32s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 16s{color} 
| {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 32s{color} 
| {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} storage-api: The patch generated 5 new + 19 unchanged 
- 2 fixed = 24 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hcatalog/streaming: The patch generated 9 new + 423 
unchanged - 40 fixed = 432 total (was 463) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} itests/hive-unit: The patch generated 39 new + 225 
unchanged - 45 fixed = 264 total (was 270) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
16s{color} | {color:red} ql: The patch generated 104 new + 4337 unchanged - 307 
fixed = 4441 total (was 4644) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
28s{color} | {color:red} standalone-metastore: The patch generated 46 new + 
1299 unchanged - 43 fixed = 1345 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 68 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / e0bf12d |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9268/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-18 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368833#comment-16368833
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 14.patch, which explicitly sets the sort order to ascending while fetching 
aborted/open write ids to be filled into ValidWriteIdList; the patch is also 
rebased against master.
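
For readers following the change, here is a minimal sketch of why the ascending order matters: a write-id snapshot of this shape typically resolves ids by binary search over its exception array, which silently misbehaves if the aborted/open ids are not sorted. This is a simplified stand-in, not the actual Hive ValidWriteIdList implementation; the class and field names are illustrative only.

{code:java}
import java.util.Arrays;
import java.util.BitSet;

// Simplified stand-in for a per-table write-id snapshot; NOT the real
// Hive ValidWriteIdList classes.
public class SimpleValidWriteIdList {
  private final long highWatermark;   // highest write id covered by this snapshot
  private final long[] exceptions;    // open/aborted write ids, MUST be ascending
  private final BitSet abortedBits;   // bit i set => exceptions[i] is aborted

  public SimpleValidWriteIdList(long highWatermark, long[] exceptions, BitSet abortedBits) {
    this.highWatermark = highWatermark;
    this.exceptions = exceptions;
    this.abortedBits = abortedBits;
  }

  // Binary search is only correct because exceptions[] is sorted ascending,
  // which is why the fetch order from the metastore matters.
  public boolean isWriteIdValid(long writeId) {
    if (writeId > highWatermark) {
      return false;
    }
    return Arrays.binarySearch(exceptions, writeId) < 0;
  }

  public boolean isWriteIdAborted(long writeId) {
    int idx = Arrays.binarySearch(exceptions, writeId);
    return idx >= 0 && abortedBits.get(idx);
  }
}
{code}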

Out of scope of HIVE-18192: Will raise separate tickets for each of these items.
 # Cleaner for TXNS_TO_WRITE_ID table entries. Also, need to maintain the LWM (low water mark) for each table's write id.
 # Renaming a table should update the entries in the NEXT_WRITE_ID and TXN_TO_WRITE_ID tables.
 # Update classes to use WriteID instead of TxnId in method and variable names.
 # HiveEndPoint: Opening txns and getting write ids should be 1 metastore call, not 2, for performance.
 # Need to change transactionId to writeId in RecordIdentifier.Field.transactionId.
 # Exchange partition doesn't work, as a write id is not valid across tables.
 # Failure of TestAcidOnTez.testGetSplitsLocks.

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch, 
> HIVE-18192.12.patch, HIVE-18192.13.patch, HIVE-18192.14.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> Each table modified by the given transaction will have a table-level write ID 
> allocated, and a persisted map of global txn id -> table -> write id for that 
> table has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366141#comment-16366141
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12910731/HIVE-18192.13.patch

{color:green}SUCCESS:{color} +1 due to 28 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 28 failed/errored test(s), 13763 tests 
executed
*Failed tests:*
{noformat}
TestDFSErrorHandling - did not produce a TEST-*.xml file (likely timed out) 
(batchId=235)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
 (batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex 
(batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex 
(batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex 
(batchId=242)
org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd 
(batchId=235)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill 
(batchId=235)
org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill
 (batchId=235)
org.apache.hive.service.cli.operation.TestQueryLifeTimeHooksWithSQLOperation.testQueryInfoInHookContext
 (batchId=218)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9235/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9235/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9235/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 28 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12910731 - PreCommit-HIVE-Build

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch, 
> HIVE-18192.12.patch, HIVE-18192.13.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persis

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-15 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366072#comment-16366072
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
37s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
33s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
37s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} storage-api: The patch generated 5 new + 19 unchanged 
- 2 fixed = 24 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} hcatalog/streaming: The patch generated 9 new + 423 
unchanged - 40 fixed = 432 total (was 463) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} itests/hive-unit: The patch generated 39 new + 225 
unchanged - 45 fixed = 264 total (was 270) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
31s{color} | {color:red} ql: The patch generated 104 new + 4309 unchanged - 307 
fixed = 4413 total (was 4616) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
30s{color} | {color:red} standalone-metastore: The patch generated 46 new + 
1298 unchanged - 43 fixed = 1344 total (was 1341) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 68 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 974d419 |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9235/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-15 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16365455#comment-16365455
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 13.patch to fix the test failures reported in the previous ptest run.

Out of scope of HIVE-18192: Will raise separate tickets for each of these items.
 # Cleaner for TXNS_TO_WRITE_ID table entries. Also, need to maintain the LWM (low water mark) for each table's write id.
 # Update classes to use WriteID instead of TxnId in method and variable names.
 # HiveEndPoint: Opening txns and getting write ids should be 1 metastore call, not 2, for performance (a rough sketch of the single-call idea follows this list).
 # Need to change transactionId to writeId in RecordIdentifier.Field.transactionId.
 # Exchange partition doesn't work, as a write id is not valid across tables.
 # Failure of TestAcidOnTez.testGetSplitsLocks.
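
To make the HiveEndPoint item above concrete, here is a rough illustration of the two-round-trip pattern versus a single combined call. The interfaces and method names are hypothetical placeholders, not the real metastore client or streaming API.

{code:java}
// Hypothetical client interfaces; names are illustrative only.
interface TxnClient {
  long openTxn(String user);                                       // metastore call #1
  long allocateTableWriteId(long txnId, String db, String table);  // metastore call #2
}

interface CombinedTxnClient extends TxnClient {
  // The suggested optimization: one round trip returning {txnId, writeId}.
  long[] openTxnAndAllocateWriteId(String user, String db, String table);
}

class StreamingOpenExample {
  // Current pattern: two metastore calls per transaction opened for streaming.
  static long[] openWithTwoCalls(TxnClient client, String db, String table) {
    long txnId = client.openTxn("streaming-user");
    long writeId = client.allocateTableWriteId(txnId, db, table);
    return new long[] {txnId, writeId};
  }

  // Desired pattern: a single call that opens the txn and allocates the write id.
  static long[] openWithOneCall(CombinedTxnClient client, String db, String table) {
    return client.openTxnAndAllocateWriteId("streaming-user", db, table);
  }
}
{code}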

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch, 
> HIVE-18192.12.patch, HIVE-18192.13.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> Each table modified by the given transaction will have a table-level write ID 
> allocated, and a persisted map of global txn id -> table -> write id for that 
> table has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-14 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364515#comment-16364515
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12910601/HIVE-18192.12.patch

{color:green}SUCCESS:{color} +1 due to 25 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 13105 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=179)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketmapjoin6]
 (batchId=179)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_opt_shuffle_serde]
 (batchId=179)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=121)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query1] 
(batchId=250)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=205)
org.apache.hadoop.hive.metastore.txn.TestCompactionTxnHandler.addDynamicPartitions
 (batchId=259)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testAbortTxn (batchId=259)
org.apache.hadoop.hive.metastore.txn.TestTxnHandler.testUnlockOnAbort 
(batchId=259)
org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=224)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testToAcidConversionMultiBucket 
(batchId=280)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testToAcidConversionMultiBucket
 (batchId=280)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testWriteSetTracking4 
(batchId=293)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testWriteSetTracking5 
(batchId=293)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=187)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.alterIndex 
(batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.createIndex 
(batchId=242)
org.apache.hive.hcatalog.listener.TestDbNotificationListener.dropIndex 
(batchId=242)
org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteTimestamp 
(batchId=192)
org.apache.hive.jdbc.TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd 
(batchId=235)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9213/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9213/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9213/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 37 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12910601 - PreCommit-HIVE-Build

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-14 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364459#comment-16364459
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
38s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
19s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
39s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 19s{color} 
| {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 39s{color} 
| {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} storage-api: The patch generated 5 new + 19 unchanged 
- 2 fixed = 24 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} hcatalog/streaming: The patch generated 9 new + 423 
unchanged - 40 fixed = 432 total (was 463) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
21s{color} | {color:red} itests/hive-unit: The patch generated 39 new + 225 
unchanged - 45 fixed = 264 total (was 270) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
31s{color} | {color:red} ql: The patch generated 98 new + 3842 unchanged - 307 
fixed = 3940 total (was 4149) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
31s{color} | {color:red} standalone-metastore: The patch generated 46 new + 
1298 unchanged - 43 fixed = 1344 total (was 1341) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 68 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / b98fb1f |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9213/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-H

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-14 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16364353#comment-16364353
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 12.patch with the below updates.
 # Scripts to add the Metastore tables for write id management (a rough in-memory sketch of this bookkeeping follows this list). Add them for other non-Derby databases too.
 # CompactionTxnHandler now cleans up COMPLETED_TXN_COMPONENTS based on the highest write id instead of the highest txn id.
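
As a rough illustration of what the new metastore tables persist, the sketch below keeps the same bookkeeping in memory: one write-id counter per table (conceptually NEXT_WRITE_ID) and a txn-to-write-id map per table (conceptually TXN_TO_WRITE_ID), so a transaction gets a stable write id for each table it touches. The real logic lives in the metastore/TxnHandler and the schema scripts; the names here are purely illustrative.

{code:java}
import java.util.HashMap;
import java.util.Map;

// In-memory analogue of the per-table write-id bookkeeping; not the real schema.
public class WriteIdAllocator {
  // "db.table" -> next write id to hand out
  private final Map<String, Long> nextWriteId = new HashMap<>();
  // "db.table" -> (txnId -> writeId)
  private final Map<String, Map<Long, Long>> txnToWriteId = new HashMap<>();

  public synchronized long allocate(long txnId, String fullTableName) {
    Map<Long, Long> perTable = txnToWriteId.computeIfAbsent(fullTableName, t -> new HashMap<>());
    // A transaction gets at most one write id per table; reuse it on repeated calls.
    Long existing = perTable.get(txnId);
    if (existing != null) {
      return existing;
    }
    long writeId = nextWriteId.merge(fullTableName, 1L, Long::sum); // write ids start at 1
    perTable.put(txnId, writeId);
    return writeId;
  }
}
{code}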

Out of scope of HIVE-18192: Will raise separate tickets for each of these items.
 # Cleaner for TXNS_TO_WRITE_ID table entries. Also, need to maintain the LWM (low water mark) for each table's write id.
 # Update classes to use WriteID instead of TxnId in method and variable names.
 # HiveEndPoint: Opening txns and getting write ids should be 1 metastore call, not 2, for performance.
 # Need to change transactionId to writeId in RecordIdentifier.Field.transactionId.
 # Exchange partition doesn't work, as a write id is not valid across tables.
 # Explain plan allocates a write id but doesn't use it and never removes the entry from the meta tables.
 # Failure of TestAcidOnTez.testGetSplitsLocks.

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch, 
> HIVE-18192.12.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> Each table modified by the given transaction will have a table-level write ID 
> allocated, and a persisted map of global txn id -> table -> write id for that 
> table has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16361278#comment-16361278
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12910240/HIVE-18192.11.patch

{color:green}SUCCESS:{color} +1 due to 25 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 27 failed/errored test(s), 13173 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=241)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=180)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_opt_shuffle_serde]
 (batchId=180)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query1] 
(batchId=251)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=222)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=206)
org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded]
 (batchId=206)
org.apache.hadoop.hive.ql.TestAcidOnTez.testGetSplitsLocks (batchId=225)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testToAcidConversionMultiBucket 
(batchId=281)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testToAcidConversionMultiBucket
 (batchId=281)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=235)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=235)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=235)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9175/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9175/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9175/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 27 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12910240 - PreCommit-HIVE-Build

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that 

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16361214#comment-16361214
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
16s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
36s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
18s{color} | {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
37s{color} | {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 18s{color} 
| {color:red} streaming in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} hive-unit in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} storage-api: The patch generated 5 new + 19 unchanged 
- 2 fixed = 24 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} hcatalog/streaming: The patch generated 9 new + 425 
unchanged - 40 fixed = 434 total (was 465) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} itests/hive-unit: The patch generated 39 new + 225 
unchanged - 45 fixed = 264 total (was 270) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
17s{color} | {color:red} ql: The patch generated 87 new + 3829 unchanged - 302 
fixed = 3916 total (was 4131) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
29s{color} | {color:red} standalone-metastore: The patch generated 46 new + 
1306 unchanged - 35 fixed = 1352 total (was 1341) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 68 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 49 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 887233d |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9175/yetus/patch-mvninstall-hcatalog_streaming.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9175/yetus/patch-mvninstall-itests_hive-unit.txt
 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9175/yetus/pa

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-12 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16360920#comment-16360920
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 11.patch with the below updates.
 # Fix checkstyle issues.
 # Proper error handling when the ValidWriteIdList config is not set (a minimal guard is sketched after this list).
 # Removed unnecessary logs.
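
A minimal sketch of the guard meant in item 2, assuming a placeholder configuration key; the actual property name and exception type used by the patch may differ.

{code:java}
import java.util.Properties;

// Illustrative fail-fast check; the config key below is a placeholder, not a real Hive property.
public class WriteIdConfigCheck {
  static final String VALID_WRITEID_LIST_KEY = "example.valid.writeid.list";

  public static String requireValidWriteIdList(Properties conf, String fullTableName) {
    String writeIdListStr = conf.getProperty(VALID_WRITEID_LIST_KEY);
    if (writeIdListStr == null || writeIdListStr.isEmpty()) {
      // Report a descriptive error instead of failing later with a NullPointerException.
      throw new IllegalStateException("ValidWriteIdList is not set for table "
          + fullTableName + "; ACID readers must be given a write-id snapshot.");
    }
    return writeIdListStr;
  }
}
{code}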

Pending changes for this feature:
 # Scripts to add the Metastore tables for write id management. Add them for other non-Derby databases too.
 # CompactionTxnHandler should clean up COMPLETED_TXN_COMPONENTS based on the highest write id instead of the highest txn id.

Out of scope of HIVE-18192:
 # Cleaner for TXNS_TO_WRITE_ID table entries; also need to maintain the LWM for each table's write id.
 # Update classes to use WriteID instead of TxnId in method and variable names.
 # HiveEndPoint: opening txns and getting write ids should be 1 metastore call, not 2, for performance.
 # Need to change transactionId to writeId in RecordIdentifier.Field.transactionId.
 # Exchange partition doesn't work because a write id is not valid across tables.
 # Explain plan allocates a write id but doesn't use it and never removes the entry from the meta tables.
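
Most of the items above revolve around the txn-id-to-write-id mapping that this patch series introduces (global txn id -> per-table write id, persisted in TXNS_TO_WRITE_ID). A minimal, self-contained sketch of that allocation idea follows; all names are hypothetical and no Hive APIs are used, so this is an illustration of the concept rather than the actual metastore implementation.

{code:java}
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model of the per-table write id allocation idea: each table has its own
 * monotonically increasing write id sequence, and the txn id -> (table -> write id)
 * mapping is remembered so it can be resolved later. Purely illustrative.
 */
public class WriteIdAllocatorSketch {

  // Last write id handed out, per "db.table".
  private final Map<String, Long> lastWriteId = new HashMap<>();

  // txn id -> (db.table -> write id), the role TXNS_TO_WRITE_ID plays in the metastore.
  private final Map<Long, Map<String, Long>> txnToWriteId = new HashMap<>();

  /** Allocate (or return the already allocated) write id for a table within a txn. */
  public synchronized long allocateWriteId(long txnId, String dbTable) {
    Map<String, Long> perTable =
        txnToWriteId.computeIfAbsent(txnId, t -> new HashMap<>());
    return perTable.computeIfAbsent(dbTable,
        t -> lastWriteId.merge(t, 1L, Long::sum));
  }

  public static void main(String[] args) {
    WriteIdAllocatorSketch alloc = new WriteIdAllocatorSketch();
    System.out.println(alloc.allocateWriteId(100, "db1.t1")); // 1
    System.out.println(alloc.allocateWriteId(101, "db1.t1")); // 2
    System.out.println(alloc.allocateWriteId(101, "db1.t2")); // 1 (independent per-table sequence)
    System.out.println(alloc.allocateWriteId(101, "db1.t1")); // 2 again (same txn, same table)
  }
}
{code}

The point of the per-table sequence is that replication and compaction can reason about one table's writes without depending on the global txn counter.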

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch, HIVE-18192.11.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more detials



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16360159#comment-16360159
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12910124/HIVE-18192.10.patch

{color:green}SUCCESS:{color} +1 due to 25 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 66 failed/errored test(s), 13160 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=241)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_part_varchar]
 (batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_opt_shuffle_serde]
 (batchId=180)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query1] 
(batchId=251)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=222)
org.apache.hadoop.hive.metastore.TestMarkPartition.testMarkingPartitionSet 
(batchId=215)
org.apache.hadoop.hive.metastore.client.TestDropPartitions.testDropPartitionDeleteDataNoPurge[Remote]
 (batchId=206)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=206)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testToAcidConversionMultiBucket 
(batchId=281)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testToAcidConversionMultiBucket
 (batchId=281)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=257)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.hcatalog.mapreduce.TestHCatDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask[3]
 (batchId=197)
org.apache.hive.hcatalog.mapreduce.TestHCatDynamicPartitioned.testHCatDynamicPartitionedTable[3]
 (batchId=197)
org.apache.hive.hcatalog.mapreduce.TestHCatExternalDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask[3]
 (batchId=199)
org.apache.hive.hcatalog.mapreduce.TestHCatExternalDynamicPartitioned.testHCatDynamicPartitionedTable[3]
 (batchId=199)
org.apache.hive.hcatalog.mapreduce.TestHCatExternalDynamicPartitioned.testHCatExternalDynamicCustomLocation[3]
 (batchId=199)
org.apache.hive.hcatalog.mapreduce.TestHCatExternalNonPartitioned.testHCatNonPartitionedTable[3]
 (batchId=200)
org.apache.hive.hcatalog.mapreduce.TestHCatExternalPartitioned.testHCatPartitionedTable[3]
 (batchId=196)
org.apache.hive.hcatalog.mapreduce.TestHCatMutableDynamicPartitioned.testHCatDynamicPartitionedTableMultipleTask[3]
 (batchId=194)
org.apache.hive.hcatalog.mapreduce.TestHCatMutableDynamicPartitioned.testHCatDynamicPartitionedTable[3]
 (batchId=194)
org.apache.hive.hcatalog.mapreduce.TestHCatMutableNonPartitioned.testHCatNonPartitionedTable[3]
 (batchId=200)
org.apache.hive.hcatalog.mapreduce.TestHCatMutablePartitioned.testHCatPartitionedTable[3]
 (batchId=198)
org.apache.hive.hcatalog.mapreduce.TestHCatNonPartitioned.testHCatNonPartitionedTable[3]
 (batchId=195)
org.apache.hive.hcatalog.mapreduce.TestHCatPartitioned.testHCatPartitionedTable[3]
 (batchId=195)
org.apache.hive.hcatalog.pig.TestE2EScenarios.testReadOrcAndRCFromPig 
(batchId=193)
org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testMapNullKey[3] 
(batchId=193)
org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testMapWithComplexData[3]
 (batchId=193)
org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testSyntheticComplexSchema[3]
 (batchId=193)
org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testTupleInBagInTupleInBag[3]
 (batchId=193)
org.apache.hive.hcatalog.pig.TestOrcHCatLoader.testColumnarStorePushdown 
(batchId=193)
org.apache.hive.hcatalog.pig.TestOrcHCatLoader.testColumnarStorePushdown2 
(batchId=193)
org.apa

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-11 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16360150#comment-16360150
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
32s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
46s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 10s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} storage-api: The patch generated 36 new + 19 unchanged 
- 2 fixed = 55 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
32s{color} | {color:red} standalone-metastore: The patch generated 44 new + 
1326 unchanged - 15 fixed = 1370 total (was 1341) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
29s{color} | {color:red} ql: The patch generated 262 new + 3900 unchanged - 231 
fixed = 4162 total (was 4131) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} hcatalog/streaming: The patch generated 20 new + 450 
unchanged - 15 fixed = 470 total (was 465) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} itests/hive-unit: The patch generated 45 new + 225 
unchanged - 45 fixed = 270 total (was 270) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 68 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
48s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 9fdb601 |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9164/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9164/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9164/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9164/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9164/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9164/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9164/yetus/whit

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-11 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16360084#comment-16360084
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 10.patch with the updates below.
 # Addressed review comments from Eugene.
 # Addressed all test failures.

Pending changes for this feature:
 # Scripts to add the Metastore tables for write id management; add them for the other non-Derby databases too.
 # CompactionTxnHandler should clean up COMPLETED_TXN_COMPONENTS based on the highest write id instead of the highest txn id.
 # Fix checkstyle issues.

Out of scope of HIVE-18192:
 # Cleaner for TXNS_TO_WRITE_ID table entries; also need to maintain the LWM for each table's write id.
 # Update classes to use WriteID instead of TxnId in method and variable names.
 # HiveEndPoint: opening txns and getting write ids should be 1 metastore call, not 2, for performance.
 # Need to change transactionId to writeId in RecordIdentifier.Field.transactionId.
 # Exchange partition doesn't work because a write id is not valid across tables.
 # Explain plan allocates a write id but doesn't use it and never removes the entry from the meta tables.
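
As a toy illustration of the CompactionTxnHandler item in the pending list above, the sketch below (not Hive code; class and field names are made up) shows why the purge boundary becomes a per-table write id high-water mark rather than a global txn id.

{code:java}
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative only: after a compaction of one table, its completed-component
 * rows can be purged up to the highest write id covered by that compaction,
 * leaving other tables (with their own write id sequences) untouched.
 */
public class CompactionCleanupSketch {

  static final class CompletedComponent {
    final String dbTable;
    final long writeId;
    CompletedComponent(String dbTable, long writeId) {
      this.dbTable = dbTable;
      this.writeId = writeId;
    }
  }

  /** Keep only the rows of 'dbTable' above the compacted high-water mark. */
  static List<CompletedComponent> purgeUpTo(List<CompletedComponent> rows,
                                            String dbTable, long highestCompactedWriteId) {
    List<CompletedComponent> remaining = new ArrayList<>();
    for (CompletedComponent c : rows) {
      boolean purgeable = c.dbTable.equals(dbTable) && c.writeId <= highestCompactedWriteId;
      if (!purgeable) {
        remaining.add(c);
      }
    }
    return remaining;
  }

  public static void main(String[] args) {
    List<CompletedComponent> rows = new ArrayList<>();
    rows.add(new CompletedComponent("db1.t1", 5));
    rows.add(new CompletedComponent("db1.t1", 9));
    rows.add(new CompletedComponent("db1.t2", 5)); // different table, untouched
    System.out.println(purgeUpTo(rows, "db1.t1", 7).size()); // 2: t1@9 and t2@5 remain
  }
}
{code}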

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch, HIVE-18192.10.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more detials



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16355168#comment-16355168
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12909460/HIVE-18192.09.patch

{color:green}SUCCESS:{color} +1 due to 25 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 12983 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_join] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_subquery] 
(batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=91)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=180)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestGetTableMeta.testGetTableMetaNullOrEmptyDb[Embedded]
 (batchId=206)
org.apache.hadoop.hive.ql.TestTxnNoBuckets.testToAcidConversionMultiBucket 
(batchId=280)
org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testToAcidConversionMultiBucket
 (batchId=280)
org.apache.hadoop.hive.ql.exec.TestOperators.testNoConditionalTaskSizeForLlap 
(batchId=282)
org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=256)
org.apache.hive.beeline.cli.TestHiveCli.testNoErrorDB (batchId=188)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=234)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=234)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=234)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/9064/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/9064/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-9064/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 37 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12909460 - PreCommit-HIVE-Build

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Pro

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-07 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16355114#comment-16355114
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  9s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} storage-api: The patch generated 35 new + 19 unchanged 
- 2 fixed = 54 total (was 21) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
30s{color} | {color:red} standalone-metastore: The patch generated 41 new + 
1326 unchanged - 13 fixed = 1367 total (was 1339) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
25s{color} | {color:red} ql: The patch generated 239 new + 4064 unchanged - 221 
fixed = 4303 total (was 4285) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hcatalog/streaming: The patch generated 26 new + 463 
unchanged - 20 fixed = 489 total (was 483) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} itests/hive-unit: The patch generated 45 new + 225 
unchanged - 45 fixed = 270 total (was 270) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 68 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
48s{color} | {color:red} standalone-metastore generated 3 new + 60 unchanged - 
0 fixed = 63 total (was 60) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 2422e18 |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9064/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9064/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9064/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9064/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9064/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9064/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9064/yetus/whit

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-06 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16354062#comment-16354062
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 09.patch with the updates below.
 # Remove entries from the TXNS_TO_WRITE_ID table when a table/database is dropped. Also, split the db and table names into 2 columns.
 # Multi-statement txn support: set ValidTxnList when the txn is opened, and get/set the ValidWriteId list for each operation within the txn.

Pending changes for this feature:
 # Scripts to add the Metastore tables for write id management; add them for the other non-Derby databases too.
 # CompactionTxnHandler should clean up COMPLETED_TXN_COMPONENTS based on the highest write id instead of the highest txn id.
 # Address a few more test failures.
 # Fix checkstyle issues.

Out of scope of HIVE-18192:
 # Cleaner for TXNS_TO_WRITE_ID table entries; also need to maintain the LWM for each table's write id.
 # Update classes to use WriteID instead of TxnId in method and variable names.
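
As a simplified sketch of the multi-statement txn item above: the txn-level snapshot (valid txn ids) is captured once when the txn is opened, and each statement then consults a per-table snapshot of valid write ids. The snippet below models only that per-table snapshot; the class shape and names are assumptions for illustration, not the Hive ValidWriteIdList implementation.

{code:java}
import java.util.Arrays;

/**
 * Illustrative per-table write id snapshot: everything at or below the
 * high-water mark is valid except the listed open/aborted write ids.
 */
public class SnapshotFlowSketch {

  static final class TableWriteIdSnapshot {
    final String dbTable;
    final long highWatermark;
    final long[] openOrAbortedWriteIds;

    TableWriteIdSnapshot(String dbTable, long highWatermark, long... exceptions) {
      this.dbTable = dbTable;
      this.highWatermark = highWatermark;
      this.openOrAbortedWriteIds = exceptions;
    }

    boolean isWriteIdValid(long writeId) {
      if (writeId > highWatermark) {
        return false; // allocated after this snapshot was taken
      }
      return Arrays.stream(openOrAbortedWriteIds).noneMatch(w -> w == writeId);
    }
  }

  public static void main(String[] args) {
    // Statement-level snapshot for db1.t1: HWM=10, write ids 7 and 9 still open/aborted.
    TableWriteIdSnapshot snap = new TableWriteIdSnapshot("db1.t1", 10, 7, 9);
    System.out.println(snap.isWriteIdValid(5));  // true
    System.out.println(snap.isWriteIdValid(7));  // false (open/aborted)
    System.out.println(snap.isWriteIdValid(11)); // false (beyond the high-water mark)
  }
}
{code}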

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch, 
> HIVE-18192.09.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more detials



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351881#comment-16351881
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12909156/HIVE-18192.08.patch

{color:green}SUCCESS:{color} +1 due to 25 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 52 failed/errored test(s), 12970 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=248)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_join] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_subquery] 
(batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_cttas] (batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=248)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_cttas] 
(batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_dummy]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_input_format_excludes]
 (batchId=163)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=122)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=221)
org.apache.hadoop.hive.metastore.client.TestTablesCreateDropAlterTruncate.testAlterTableNullStorageDescriptorInNew[Embedded]
 (batchId=206)
org.apache.hadoop.hive.metastore.client.TestTablesGetExists.testGetAllTablesCaseInsensitive[Embedded]
 (batchId=206)
org.apache.hadoop.hive.metastore.client.TestTablesList.testListTableNamesByFilterNullDatabase[Embedded]
 (batchId=206)
org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=224)
org.apache.hadoop.hive.ql.TestTxnCommands.testImplici

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351871#comment-16351871
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  9s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} storage-api: The patch generated 32 new + 22 unchanged 
- 6 fixed = 54 total (was 28) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
29s{color} | {color:red} standalone-metastore: The patch generated 18 new + 
1329 unchanged - 13 fixed = 1347 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
17s{color} | {color:red} ql: The patch generated 244 new + 4070 unchanged - 217 
fixed = 4314 total (was 4287) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hcatalog/streaming: The patch generated 26 new + 463 
unchanged - 20 fixed = 489 total (was 483) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 45 new + 225 
unchanged - 45 fixed = 270 total (was 270) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
46s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 4a33ec8 |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9012/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9012/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9012/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9012/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9012/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9012/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9012/yetus/whit

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-04 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351852#comment-16351852
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Attached 08.patch with a fix to use the txn id instead of the write id when adding TXN_COMPONENTS for dynamic partitions. Also, the default write id is set to 0 when a write id is requested without an open txn (this is a hack to let a few tests run).
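
A tiny sketch of that fallback, with made-up names (the real change sits in the metastore txn handling code): outside of an open transaction, 0 is used as a placeholder write id instead of allocating one from the table's sequence.

{code:java}
/** Illustrative only; constant and method names are hypothetical. */
public class WriteIdFallbackSketch {

  static final long NO_TXN = 0L;

  static long writeIdFor(long txnId, long allocatedWriteId) {
    // No open txn -> default write id 0, mirroring the hack described above.
    return txnId == NO_TXN ? 0L : allocatedWriteId;
  }

  public static void main(String[] args) {
    System.out.println(writeIdFor(0L, 42L));  // 0 (no txn open)
    System.out.println(writeIdFor(17L, 42L)); // 42
  }
}
{code}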

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch, HIVE-18192.08.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more detials



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351572#comment-16351572
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12909111/HIVE-18192.07.patch

{color:green}SUCCESS:{color} +1 due to 24 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 120 failed/errored test(s), 12970 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=248)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_insert_overwrite] 
(batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_join] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_subquery] 
(batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_all_partitioned] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_where_partitioned]
 (batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_whole_partition] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_acid_dynamic_partition]
 (batchId=20)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_dynamic_partitioned]
 (batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_acid] (batchId=81)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_acid_fast] 
(batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_cttas] (batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[update_all_partitioned] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[update_where_partitioned]
 (batchId=62)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_dynamic]
 (batchId=177)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=248)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_cttas] 
(batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_no_buckets]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[delete_all_partitioned]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[delete_where_partitioned]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[delete_whole_partition]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_dynamic_partitioned]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=173)
org.apa

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-03 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351556#comment-16351556
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  8s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} storage-api: The patch generated 32 new + 22 unchanged 
- 6 fixed = 54 total (was 28) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
28s{color} | {color:red} standalone-metastore: The patch generated 18 new + 
1329 unchanged - 13 fixed = 1347 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
14s{color} | {color:red} ql: The patch generated 244 new + 3819 unchanged - 216 
fixed = 4063 total (was 4035) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} hcatalog/streaming: The patch generated 26 new + 463 
unchanged - 20 fixed = 489 total (was 483) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} itests/hive-unit: The patch generated 45 new + 225 
unchanged - 45 fixed = 270 total (was 270) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
47s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 4a33ec8 |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9004/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9004/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9004/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9004/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9004/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9004/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-9004/yetus/whit

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-02-03 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16351526#comment-16351526
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Attached 07.patch with the updates below.
 # Upgrade script changed for Derby.
 # Fixed a bug related to setting the ValidWriteIdList config for multi-table queries.
 # CTAS for ACID tables.
 # Non-ACID to ACID conversion through the alter table transactional property.
 # Load table flow now allocates a write id.
 # ACID test code corrections.
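
As a rough sketch of the multi-table ValidWriteIdList config issue in item 2: a query that reads several tables has to carry one write id snapshot per table in a single config value, so each snapshot must be keyed by its fully qualified table name. The "db.table:highWatermark" encoding below is invented for the sketch and is not the actual Hive serialization format.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative only: per-table snapshots packed into, and read back from, one string. */
public class MultiTableWriteIdConfigSketch {

  static String serialize(Map<String, Long> highWatermarkByTable) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, Long> e : highWatermarkByTable.entrySet()) {
      if (sb.length() > 0) {
        sb.append(',');
      }
      sb.append(e.getKey()).append(':').append(e.getValue());
    }
    return sb.toString();
  }

  static long highWatermarkFor(String serialized, String dbTable) {
    for (String entry : serialized.split(",")) {
      String[] parts = entry.split(":");
      if (parts[0].equals(dbTable)) {
        return Long.parseLong(parts[1]);
      }
    }
    throw new IllegalStateException("No write id snapshot for " + dbTable);
  }

  public static void main(String[] args) {
    Map<String, Long> snapshots = new LinkedHashMap<>();
    snapshots.put("db1.src", 12L);
    snapshots.put("db1.dst", 7L);
    String conf = serialize(snapshots); // "db1.src:12,db1.dst:7"
    System.out.println(highWatermarkFor(conf, "db1.dst")); // 7
  }
}
{code}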

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch, HIVE-18192.07.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more detials



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-31 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16347301#comment-16347301
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Based on the discussion with the ACID folks, here is the proposal for the pending changes.

Must be completed before merging the branch to master:
 # Remove entries from the TXNS_TO_WRITE_ID table when a table/database is dropped. Also, split the db and table names into 2 columns.
 # Scripts to add the Metastore tables for write id management; add them for the other non-Derby databases too.
 # CompactionTxnHandler should clean up COMPLETED_TXN_COMPONENTS based on the highest write id instead of the highest txn id.
 # Non-ACID to ACID conversion through the alter table transactional property.

Can be done after the merge to master:
 # New logic for the cleaner to remove TXNS_TO_WRITE_ID table entries; also need to maintain the LWM for each table's write id.
 # Update classes to use WriteID instead of TxnId in method and variable names.

cc [~thejas], [~ekoifman]
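
For the cleaner item in the second list, here is a conceptual sketch (hypothetical names, not Hive code) of the idea: once the global low water mark has moved past a txn id, its txn-to-write-id mapping rows can be dropped, with only the per-table LWM retained.

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Illustrative only: rows model the txn id / db.table / write id mapping entries. */
public class TxnToWriteIdCleanerSketch {

  static final class MappingRow {
    final long txnId;
    final String dbTable;
    final long writeId;
    MappingRow(long txnId, String dbTable, long writeId) {
      this.txnId = txnId;
      this.dbTable = dbTable;
      this.writeId = writeId;
    }
  }

  /** Remove rows whose txn id is below the low water mark (minimum txn id still referenced). */
  static List<MappingRow> clean(List<MappingRow> rows, long txnLowWaterMark) {
    List<MappingRow> kept = new ArrayList<>();
    for (MappingRow r : rows) {
      if (r.txnId >= txnLowWaterMark) {
        kept.add(r);
      }
    }
    return kept;
  }

  public static void main(String[] args) {
    List<MappingRow> rows = new ArrayList<>();
    rows.add(new MappingRow(90, "db1.t1", 3));
    rows.add(new MappingRow(120, "db1.t1", 4));
    System.out.println(clean(rows, 100).size()); // 1: only the txn 120 row survives
  }
}
{code}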

 

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more detials



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346194#comment-16346194
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12908434/HIVE-18192.06.patch

{color:green}SUCCESS:{color} +1 due to 20 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 116 failed/errored test(s), 12862 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=240)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=248)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_tmp_table] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_tmp_table] 
(batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_acid] (batchId=81)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=90)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=49)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_conversions] 
(batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_default] (batchId=83)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_loaddata] (batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=79)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=248)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[schemeAuthority2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[delete_tmp_table]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_tmp_table]
 (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_dummy]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mm_conversions]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocal

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-30 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16346165#comment-16346165
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  8s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} storage-api: The patch generated 30 new + 22 unchanged 
- 6 fixed = 52 total (was 28) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
28s{color} | {color:red} standalone-metastore: The patch generated 18 new + 
1329 unchanged - 13 fixed = 1347 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
2s{color} | {color:red} ql: The patch generated 71 new + 2634 unchanged - 43 
fixed = 2705 total (was 2677) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hcatalog/streaming: The patch generated 26 new + 463 
unchanged - 20 fixed = 489 total (was 483) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
16s{color} | {color:red} itests/hive-unit: The patch generated 9 new + 73 
unchanged - 9 fixed = 82 total (was 82) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
49s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 123e2eb |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8937/yetus/whitespace-

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-30 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16345824#comment-16345824
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 06.patch with the changes below.

1. Support a ValidWriteIdList config string covering multiple tables. The format 
needs to include the table name, with a '$' separator between table entries, along 
the lines of: <tableName>:hwm:minWriteId:open1,open2:abort1,abort2$<tableName>:hwm:... 
(see the parsing sketch after this list).

2. Pass the appropriate ValidWriteIdList string to ORCInputFormat without needing 
to look up the table name there (as per Gopal's suggestion).
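
For illustration, here is a minimal, hypothetical sketch of how a multi-table 
string of the above shape could be parsed. The class and method names are made up 
for this comment and do not reflect Hive's actual ValidWriteIdList implementation; 
the only assumption is the '$'-separated, colon-delimited layout described in 
item 1.

{code:java}
import java.util.*;

// Hypothetical sketch, not Hive's ValidWriteIdList. Parses strings of the shape
// "<db.table>:<hwm>:<minWriteId>:<open1,open2>:<abort1,abort2>$<db.table>:..."
public class WriteIdConfSketch {

  // Per-table snapshot: high watermark plus open/aborted write ids.
  static final class TableWriteIds {
    final long highWatermark;
    final long minOpenWriteId;
    final Set<Long> open = new TreeSet<>();
    final Set<Long> aborted = new TreeSet<>();

    TableWriteIds(long hwm, long minOpen) {
      this.highWatermark = hwm;
      this.minOpenWriteId = minOpen;
    }
  }

  // Splits on '$' per table entry, then on ':' per field.
  static Map<String, TableWriteIds> parse(String conf) {
    Map<String, TableWriteIds> result = new HashMap<>();
    for (String tableEntry : conf.split("\\$")) {
      // fields: 0=table, 1=hwm, 2=minWriteId, 3=open list, 4=abort list
      String[] f = tableEntry.split(":", -1);
      TableWriteIds ids = new TableWriteIds(Long.parseLong(f[1]), Long.parseLong(f[2]));
      if (!f[3].isEmpty()) {
        for (String w : f[3].split(",")) {
          ids.open.add(Long.parseLong(w));
        }
      }
      if (!f[4].isEmpty()) {
        for (String w : f[4].split(",")) {
          ids.aborted.add(Long.parseLong(w));
        }
      }
      result.put(f[0], ids);
    }
    return result;
  }

  public static void main(String[] args) {
    String conf = "default.t1:10:7:7,9:8$default.t2:5:5::";
    Map<String, TableWriteIds> m = parse(conf);
    System.out.println(m.get("default.t1").open);           // [7, 9]
    System.out.println(m.get("default.t2").highWatermark);  // 5
  }
}
{code}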

 

Pending changes for this feature
 # Cleaner for TXNS_TO_WRITE_ID table entries. Also need to maintain the LWM (low 
watermark) for each table's write id.
 # Update classes to use WriteID instead of TxnId in method and variable names.
 # Scripts to add the Metastore tables for write id management; add them for the 
other non-Derby databases too.
 # Remove entries from the TXNS_TO_WRITE_ID table when a table/database is dropped. 
Also, split db and table names into 2 columns.
 # CompactionTxnHandler should clean up COMPLETED_TXN_COMPONENTS based on the 
highest write id instead of the highest txn id.
 # Non-ACID to ACID conversion through the ALTER TABLE transactional property.

 

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch, 
> HIVE-18192.06.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-24 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338283#comment-16338283
 ] 

Alan Gates commented on HIVE-18192:
---

Makes sense.  Thanks for the detailed explanation.

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-24 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338208#comment-16338208
 ] 

Eugene Koifman commented on HIVE-18192:
---

Suppose A is the set of all ValidTxnLists across all active readers.  Each 
ValidTxnList has a minOpenTxnId.

MIN_HISTORY_LEVEL allows us to determine min(minOpenTxnId) across all currently 
active readers.  It's not the same as COMPLETED_TXN_COMPONENTS.

Entries from COMPLETED_TXN_COMPONENTS get removed once compaction has processed 
the relevant partition, which can happen before all readers that think txn X is 
open have gone away.

 

Suppose txn 17 starts at t1 and sees txnid 13 with writeID 13 open.

13 commits (via its parent txn) at t2 > t1 (17 is still running).

Compaction runs at t3 > t2 to produce base_14 (or delta_10_14, for example) on 
Table1/Part1 (17 is still running).

COMPLETED_TXN_COMPONENTS may be cleaned at this point.

At t4 > t3, txn 17 (a multi-statement txn) may need to read Table1/Part1. It now 
needs to construct a ValidWriteIdList as it would have looked at the time 17 
started, in order to maintain SI.

But if TXNS2WRITE_ID no longer has the txnid 13 -> writeId 13 mapping, there is no 
way to know that the ValidWriteIdList should have writeId 13 open.
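
To make the timeline concrete, here is a tiny, hypothetical sketch of the 
reader-side visibility check (the names below are invented for this comment and 
are not Hive's actual ValidWriteIdList API): once the txnid 13 -> writeId 13 
mapping is gone, the reconstructed snapshot cannot list writeId 13 as open, so 
data written by 13 wrongly becomes visible to txn 17.

{code:java}
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the visibility decision a reader makes per write id.
public class SnapshotCheckSketch {

  // A write id is visible iff it is at or below the snapshot's high watermark
  // and is not recorded as open/aborted in that snapshot.
  static boolean isVisible(long writeId, long highWatermark, Set<Long> openOrAborted) {
    return writeId <= highWatermark && !openOrAborted.contains(writeId);
  }

  public static void main(String[] args) {
    Set<Long> exceptions = new HashSet<>();

    // Correct snapshot for txn 17: writeId 13 was open when 17 started.
    exceptions.add(13L);
    System.out.println(isVisible(13L, 14L, exceptions)); // false -> 13 stays hidden, SI holds

    // If the txnid 13 -> writeId 13 mapping was already purged, the rebuilt
    // snapshot cannot list 13 as open.
    exceptions.remove(13L);
    System.out.println(isVisible(13L, 14L, exceptions)); // true -> 13 leaks into 17's snapshot
  }
}
{code}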

 

 

 

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-24 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338020#comment-16338020
 ] 

Alan Gates commented on HIVE-18192:
---

I have a question on the design.  I don't understand the MIN_HISTORY_LEVEL 
table.  In the doc it says:
{quote} need this to ensure that we don’t remove TXNS2WRITE_ID data before any 
operation that started while this txn was open/aborted is still live. This is 
populated when current_txnid starts and locks in its ValidTxnList and the entry 
is removed when this txn commits/aborts.
{quote}
Why does this need to be tracked separately?  There is already logic to 
determine when an entry can be removed from TXN_COMPONENTS and 
COMPLETED_TXN_COMPONENTS.  Can that not be extended to remove the related entry 
from TXNS2WRITE_ID?

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16336384#comment-16336384
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907175/HIVE-18192.05.patch

{color:green}SUCCESS:{color} +1 due to 20 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 107 failed/errored test(s), 11632 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=246)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_join] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_tmp_table] 
(batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_tmp_table] 
(batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_acid] (batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_conversions] 
(batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_default] (batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_loaddata] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_dynamic]
 (batchId=175)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_static]
 (batchId=172)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=246)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[delete_tmp_table]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_tmp_table]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_dummy]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16336341#comment-16336341
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  9s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} storage-api: The patch generated 29 new + 22 unchanged 
- 6 fixed = 51 total (was 28) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
28s{color} | {color:red} standalone-metastore: The patch generated 18 new + 
1329 unchanged - 13 fixed = 1347 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
1s{color} | {color:red} ql: The patch generated 67 new + 2632 unchanged - 43 
fixed = 2699 total (was 2675) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hcatalog/streaming: The patch generated 26 new + 463 
unchanged - 20 fixed = 489 total (was 483) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} itests/hive-unit: The patch generated 9 new + 73 
unchanged - 9 fixed = 82 total (was 82) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
50s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / fff86f3 |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8780/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8780/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8780/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8780/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8780/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8780/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8780/yetus/whitespace-eol.txt 
|
| ja

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335863#comment-16335863
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907175/HIVE-18192.05.patch

{color:green}SUCCESS:{color} +1 due to 20 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 103 failed/errored test(s), 11633 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=246)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_join] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_tmp_table] 
(batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_tmp_table] 
(batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_acid] (batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_conversions] 
(batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_default] (batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_loaddata] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_dynamic]
 (batchId=175)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_static]
 (batchId=172)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=246)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[delete_tmp_table]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_tmp_table]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_dummy]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.t

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-23 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335802#comment-16335802
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  9s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} storage-api: The patch generated 23 new + 18 unchanged 
- 1 fixed = 41 total (was 19) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} standalone-metastore: The patch generated 18 new + 
1329 unchanged - 13 fixed = 1347 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
47s{color} | {color:red} ql: The patch generated 55 new + 1589 unchanged - 33 
fixed = 1644 total (was 1622) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} hcatalog/streaming: The patch generated 12 new + 201 
unchanged - 7 fixed = 213 total (was 208) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
47s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3bbf35f |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8775/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8775/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8775/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8775/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8775/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8775/yetus/whitespace-eol.txt 
|
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8775/yetus/diff-javadoc-javadoc-standalone-metastore.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8775/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api common standalone-metastore metastore ql 
hcatalog/streaming i

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335400#comment-16335400
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907175/HIVE-18192.05.patch

{color:green}SUCCESS:{color} +1 due to 20 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 106 failed/errored test(s), 11633 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=246)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_join] (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=53)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_tmp_table] 
(batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_tmp_table] 
(batchId=4)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_acid] (batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=3)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_buckets] (batchId=60)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_conversions] 
(batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_default] (batchId=82)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_exchangepartition] 
(batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_loaddata] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=78)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_case_column_pruning] 
(batchId=78)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_dynamic]
 (batchId=175)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_insert_partition_static]
 (batchId=172)
org.apache.hadoop.hive.cli.TestEncryptedHDFSCliDriver.testCliDriver[encryption_move_tbl]
 (batchId=173)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=246)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[delete_tmp_table]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_tmp_table]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_dummy]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16335390#comment-16335390
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
0s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  9s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} storage-api: The patch generated 23 new + 18 unchanged 
- 1 fixed = 41 total (was 19) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
26s{color} | {color:red} standalone-metastore: The patch generated 18 new + 
1329 unchanged - 13 fixed = 1347 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
49s{color} | {color:red} ql: The patch generated 55 new + 1589 unchanged - 33 
fixed = 1644 total (was 1622) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} hcatalog/streaming: The patch generated 12 new + 201 
unchanged - 7 fixed = 213 total (was 208) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
47s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
12s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 35s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 3bbf35f |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/whitespace-eol.txt 
|
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/diff-javadoc-javadoc-standalone-metastore.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8767/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api common standalone-metastore metastore ql 
hcatalog/streaming i

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-22 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334311#comment-16334311
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 05.patch after reverting the field name in the ROW__ID struct from "writeid" 
back to "transactionid" to fix several test failures.

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch, HIVE-18192.04.patch, HIVE-18192.05.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-22 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16334073#comment-16334073
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  9s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} storage-api: The patch generated 23 new + 18 unchanged 
- 1 fixed = 41 total (was 19) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
25s{color} | {color:red} standalone-metastore: The patch generated 18 new + 
1329 unchanged - 13 fixed = 1347 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
46s{color} | {color:red} ql: The patch generated 55 new + 1589 unchanged - 33 
fixed = 1644 total (was 1622) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} hcatalog/streaming: The patch generated 12 new + 201 
unchanged - 7 fixed = 213 total (was 208) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
43s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
14s{color} | {color:red} The patch generated 6 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / deb3adc |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8748/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8748/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8748/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8748/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8748/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8748/yetus/whitespace-eol.txt 
|
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8748/yetus/diff-javadoc-javadoc-standalone-metastore.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8748/yetus/patch-asflicense-problems.txt
 |
| modules | C: storage-api common standalone-metastore metastore ql 
hcatalog/streaming i

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-21 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333960#comment-16333960
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Attached 04.patch after rebasing with master.

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch, 
> HIVE-18192.03.patch
>
>
> To support ACID replication, we will be introducing a per table write Id 
> which will replace the transaction id in the primary key for each row in a 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> For each table modified by the given transaction will have a table level 
> write ID allocated and a persisted map of global txn id -> to table -> write 
> id for that table has to be maintained to allow Snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16333553#comment-16333553
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12907005/HIVE-18192.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8738/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8738/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8738/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-01-21 15:04:14.083
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-8738/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-01-21 15:04:14.086
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 90d236a HIVE-18231 : validate resource plan - part 2 - validate 
action and trigger expressions (Harish Jaiprakash, reviewed by Sergey Shelukhin)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 90d236a HIVE-18231 : validate resource plan - part 2 - validate 
action and trigger expressions (Harish Jaiprakash, reviewed by Sergey Shelukhin)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-01-21 15:04:14.603
+ rm -rf ../yetus
+ mkdir ../yetus
+ git gc
+ cp -R . ../yetus
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-8738/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
fatal: git apply: bad git-diff - inconsistent old filename on line 48126
error: patch failed: 
standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp:1240
Falling back to three-way merge...
Applied patch to 
'standalone-metastore/src/gen/thrift/gen-cpp/ThriftHiveMetastore.cpp' with 
conflicts.
error: patch failed: 
standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp:23884
Falling back to three-way merge...
Applied patch to 
'standalone-metastore/src/gen/thrift/gen-cpp/hive_metastore_types.cpp' with 
conflicts.
error: patch failed: 
standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java:34145
Falling back to three-way merge...
Applied patch to 
'standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/ThriftHiveMetastore.java'
 with conflicts.
error: patch failed: 
standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/WMGetTriggersForResourePlanResponse.java:346
Falling back to three-way merge...
Applied patch to 
'standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/WMGetTriggersForResourePlanResponse.java'
 with conflicts.
error: patch failed: 
standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/WMValidateResourcePlanResponse.java:376
Falling back to three-way merge...
Applied patch to 
'standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/WMValidateResourcePlanResponse.java'
 with conflicts.
error: patch failed: 
standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php:12984
Falling back to three-way merge...
Applied patch to 
'standalone-metastore/src/gen/thrift/gen-php/metastore/ThriftHiveMetastore.php' 
with conflicts.
error: patch failed: 
standalone-metastore/src/gen/thrift/gen-php/metastore/Types.php:23418
Falling back to three-way merge...
Applied patch to 
'standalone-metastore/src/gen/thrift

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-19 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16332627#comment-16332627
 ] 

Hive QA commented on HIVE-18192:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m  8s{color} 
| {color:red} storage-api generated 2 new + 0 unchanged - 2 fixed = 2 total 
(was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} storage-api: The patch generated 23 new + 19 unchanged 
- 1 fixed = 42 total (was 20) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
26s{color} | {color:red} standalone-metastore: The patch generated 18 new + 
1329 unchanged - 13 fixed = 1347 total (was 1342) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 55 new + 1619 unchanged - 33 
fixed = 1674 total (was 1652) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} hcatalog/streaming: The patch generated 12 new + 207 
unchanged - 7 fixed = 219 total (was 214) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 84 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
45s{color} | {color:red} standalone-metastore generated 3 new + 62 unchanged - 
0 fixed = 65 total (was 62) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / d6c6d96 |
| Default Java | 1.8.0_111 |
| javac | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8704/yetus/diff-compile-javac-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8704/yetus/diff-checkstyle-storage-api.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8704/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8704/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8704/yetus/diff-checkstyle-hcatalog_streaming.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8704/yetus/whitespace-eol.txt 
|
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8704/yetus/diff-javadoc-javadoc-standalone-metastore.txt
 |
| modules | C: storage-api common standalone-metastore metastore ql 
hcatalog/streaming itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-87

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-19 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16331982#comment-16331982
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Added 03.patch with the below changes.
 # Fixed a compilation issue in TestCompactor.java.
 # Corrected test code in TestCompactor.java, TestAcidOnTez.java and 
TestStreaming.java; all are passing locally.
 # Streaming ingest flow is modified for the write ID change: HiveEndPoint.java. 
A minimal sketch of the per-table write ID mapping is given below.
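As context for items 1-3 above, here is a minimal, illustrative sketch of the per-table write ID 
mapping this Jira introduces: each table a transaction writes gets a write ID from that table's 
own sequence, and the global txn id -> table -> write id association is what readers later 
consult. This is only a sketch under those assumptions; the class and method names are 
hypothetical and do not correspond to the actual metastore code in the patch.

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical, in-memory model of the txn id -> table -> write id map.
// The real mapping is persisted in the metastore (TXNS_TO_WRITE_ID).
public class WriteIdAllocator {
  // Next write ID to hand out, per fully qualified table name.
  private final Map<String, AtomicLong> nextWriteId = new HashMap<>();
  // Global txn id -> (table -> write id allocated to that txn for that table).
  private final Map<Long, Map<String, Long>> txnToWriteId = new HashMap<>();

  // Allocate (or return the already-allocated) write ID for this table in this txn.
  public synchronized long allocateTableWriteId(long txnId, String dbTable) {
    Map<String, Long> perTable = txnToWriteId.computeIfAbsent(txnId, t -> new HashMap<>());
    return perTable.computeIfAbsent(dbTable,
        tbl -> nextWriteId.computeIfAbsent(tbl, x -> new AtomicLong(0)).incrementAndGet());
  }

  public static void main(String[] args) {
    WriteIdAllocator alloc = new WriteIdAllocator();
    // Two different global txns writing the same table get consecutive per-table
    // write IDs, independent of the (typically much larger) global txn IDs.
    System.out.println(alloc.allocateTableWriteId(1001L, "default.t1")); // 1
    System.out.println(alloc.allocateTableWriteId(1002L, "default.t1")); // 2
    System.out.println(alloc.allocateTableWriteId(1002L, "default.t2")); // 1
  }
}
{code}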

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table.
> The current primary key is determined via 
>  
> which will move to 
>  
> Each table modified by the given transaction will have a table-level 
> write ID allocated, and a persisted map of global txn id -> table -> write 
> id for that table has to be maintained to allow snapshot isolation.
> Readers should use the combination of ValidTxnList and 
> ValidWriteIdList(Table) for snapshot isolation.
>  
>  [Hive Replication - ACID 
> Tables.pdf|https://issues.apache.org/jira/secure/attachment/12903157/Hive%20Replication-%20ACID%20Tables.pdf]
>  has a section "Per Table Sequences (Write-Id)" with more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-18 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16330740#comment-16330740
 ] 

Hive QA commented on HIVE-18192:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12906445/HIVE-18192.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8680/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8680/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8680/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/classification/target/hive-classification-3.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceStability$Unstable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/ByteArrayOutputStream.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/OutputStream.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Closeable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/AutoCloseable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Flushable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(javax/xml/bind/annotation/XmlRootElement.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/commons/commons-exec/1.1/commons-exec-1.1.jar(org/apache/commons/exec/ExecuteException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/security/PrivilegedExceptionAction.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/ExecutionException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/TimeoutException.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.0.0-beta1/hadoop-common-3.0.0-beta1.jar(org/apache/hadoop/fs/FileSystem.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-3.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/HadoopShimsSecure.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-3.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/ShimLoader.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-3.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/HadoopShims.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-3.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/HadoopShims$WebHCatJTShim.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.0.0-beta1/hadoop-common-3.0.0-beta1.jar(org/apache/hadoop/util/ToolRunner.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/CancellationException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/RejectedExecutionException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/SynchronousQueue.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/ThreadPoolExecutor.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/TimeUnit.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/Future.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.0.0-beta1/hadoop-common-3.0.0-beta1.jar(org/apache/hadoop/conf/Configured.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.0.0-beta1/hadoop-common-3.0.0-beta1.jar(org/apache/hadoop/io/NullWritable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.0.0-beta1/hadoop-common-3.0.0-beta1.jar(org/apache/hadoop/io/Text.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-mapreduce-client-core/3.0.0-beta1/hadoop-mapreduce-client-core-3.0.0-beta1.jar(org/apache/hadoop/mapred/JobClient.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-mapreduce-c

[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-17 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329176#comment-16329176
 ] 

Sankar Hariappan commented on HIVE-18192:
-

Attached 02.patch with the below changes.
{quote}1. Metastore tables for write ID management (added only for Derby for now).
2. ValidWriteIdList generation; used Read and Write entities to build the 
tables list.
3. ACID write and read flows.
4. Compactor based on write ID instead of txn ID.
5. Working well for queries with a single table; join queries won't work yet.
6. Streaming ingest to use the table write ID.{quote}
Pending changes relevant to this Jira, which will be available in subsequent 
patches:
{quote}1. Support a ValidWriteIdList config string with multiple tables. Need to 
change the format to include the table name and a separator, as follows: 
\{:hwm:minWriteId:open1,open2:abort1,abort2:}{:hwm:…} (an illustrative sketch 
follows this list).
2. Pass the appropriate ValidWriteIdList string to ORCInputFormat without the need 
to get the table name there (as per Gopal's suggestion).
3. Store ValidTxnList in the TxnManager to be re-used for multi-statement txns. 
Also, need to pass ValidTxnList while getting ValidWriteIdList instead of 
getting the open txns list in the metastore.
4. Cleaner for TXNS_TO_WRITE_ID table entries. Also, need to maintain the LWM 
for each table's write ID.
5. Update classes to use WriteID instead of TxnId in method and variable names.
6. Correct test cases for the new changes.
7. The FetchTask optimisation that refers to ValidTxnList seems to be invalid if 
the query involves multiple tables. Need to check it.
8. Metastore tables for write ID management: add for other databases too.
9. Remove entries from the TXNS_TO_WRITE_ID table when a table/database is 
dropped.{quote}
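Since the exact multi-table string format is still open (pending item 1), here is a hedged 
sketch of one possible per-table encoding and the visibility check a reader would apply. The 
layout <db.table>:<hwm>:<minOpenWriteId>:<openIds>:<abortedIds>, the class name, and the 
separator choice are assumptions for illustration only, not the format this patch ultimately uses.

{code:java}
import java.util.Arrays;

public class ValidWriteIdListSketch {
  // Encode one table's entry; multiple tables would be concatenated with some separator.
  static String encode(String dbTable, long hwm, long minOpenWriteId,
                       long[] openIds, long[] abortedIds) {
    return dbTable + ":" + hwm + ":" + minOpenWriteId + ":"
        + join(openIds) + ":" + join(abortedIds);
  }

  static String join(long[] ids) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < ids.length; i++) {
      if (i > 0) sb.append(',');
      sb.append(ids[i]);
    }
    return sb.toString();
  }

  // A write ID is visible to a reader only if it is at or below the high-water mark
  // and is not in the open or aborted sets for that table.
  static boolean isVisible(long writeId, long hwm, long[] openIds, long[] abortedIds) {
    return writeId <= hwm
        && Arrays.stream(openIds).noneMatch(id -> id == writeId)
        && Arrays.stream(abortedIds).noneMatch(id -> id == writeId);
  }

  public static void main(String[] args) {
    long[] open = {7, 9};
    long[] aborted = {4};
    System.out.println(encode("default.t1", 10, 7, open, aborted)); // default.t1:10:7:7,9:4
    System.out.println(isVisible(5, 10, open, aborted)); // true
    System.out.println(isVisible(9, 10, open, aborted)); // false (still open)
  }
}
{code}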
 
Requesting [~ekoifman]/[~anishek] to review this patch and provide feedback.
cc [~thejas]

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-18192.01.patch, HIVE-18192.02.patch
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table. 
> The current primary key is determined via 
> 
> which will move to 
> 
> A persisted map of global txn id -> table -> write id for that table has 
> to be maintained to allow snapshot isolation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2018-01-14 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16325734#comment-16325734
 ] 

ASF GitHub Bot commented on HIVE-18192:
---

GitHub user sankarh opened a pull request:

https://github.com/apache/hive/pull/290

HIVE-18192: Introduce WriteID per table rather than using global 
transaction ID



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sankarh/hive HIVE-18192

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/290.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #290


commit ced6d749a7e65c42ed31a8af9e38faaf5941251a
Author: Sankar Hariappan 
Date:   2018-01-03T05:47:38Z

HIVE-18192: Introduce WriteID per table rather than using global 
transaction ID




> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>  Labels: ACID, DR, pull-request-available
> Fix For: 3.0.0
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table. 
> The current primary key is determined via 
> 
> which will move to 
> 
> A persisted map of global txn id -> table -> write id for that table has 
> to be maintained to allow snapshot isolation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2017-12-21 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299686#comment-16299686
 ] 

Sankar Hariappan commented on HIVE-18192:
-

The design document is attached to the umbrella Jira HIVE-18320.

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>  Labels: ACID, DR
> Fix For: 3.0.0
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table. 
> The current primary key is determined via 
> 
> which will move to 
> 
> A persisted map of global txn id -> table -> write id for that table has 
> to be maintained to allow snapshot isolation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2017-12-20 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299027#comment-16299027
 ] 

Eugene Koifman commented on HIVE-18192:
---

I'll try to share the doc by the end of the week. Snapshot isolation will of 
course be preserved.

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>  Labels: ACID, DR
> Fix For: 3.0.0
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table. 
> The current primary key is determined via 
> 
> which will move to 
> 
> A persisted map of global txn id -> table -> write id for that table has 
> to be maintained to allow snapshot isolation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18192) Introduce WriteID per table rather than using global transaction ID

2017-12-20 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16298997#comment-16298997
 ] 

Andrew Sherman commented on HIVE-18192:
---

Can someone share this design doc please?

> Introduce WriteID per table rather than using global transaction ID
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>  Labels: ACID, DR
> Fix For: 3.0.0
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table. 
> The current primary key is determined via 
> 
> which will move to 
> 
> A persisted map of global txn id -> table -> write id for that table has 
> to be maintained to allow snapshot isolation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18192) Introduce WriteId per table rather than using global transaction id

2017-12-10 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285591#comment-16285591
 ] 

Sankar Hariappan commented on HIVE-18192:
-

[~anishek]
[~ekoifman] have written the design to introduce table Write ID.

Cc [~alangates]

> Introduce WriteId per table rather than using global transaction id
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table. 
> The current primary key is determined via 
> 
> which will move to 
> 
> A persisted map of global txn id -> table -> write id for that table has 
> to be maintained to allow snapshot isolation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18192) Introduce WriteId per table rather than using global transaction id

2017-12-10 Thread anishek (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16285526#comment-16285526
 ] 

anishek commented on HIVE-18192:


[~alangates], there is a design doc that [~sankarh] has created.

[~sankarh], can you please share it and link this bug to the parent design doc so 
we can navigate easily?

> Introduce WriteId per table rather than using global transaction id
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table. 
> The current primary key is determined via 
> 
> which will move to 
> 
> A persisted map of global txn id -> table -> write id for that table has 
> to be maintained to allow snapshot isolation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18192) Introduce WriteId per table rather than using global transaction id

2017-12-01 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274739#comment-16274739
 ] 

Alan Gates commented on HIVE-18192:
---

Before we go too far down this road, is there a design doc on how this will 
work?  This will be a big change to the transaction management system.  In 
particular, it seems like multi-statement transactions will be affected.

> Introduce WriteId per table rather than using global transaction id
> ---
>
> Key: HIVE-18192
> URL: https://issues.apache.org/jira/browse/HIVE-18192
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Transactions
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
>
> To support ACID replication, we will be introducing a per-table write ID 
> which will replace the transaction ID in the primary key for each row in an 
> ACID table. 
> The current primary key is determined via 
> 
> which will move to 
> 
> A persisted map of global txn id -> table -> write id for that table has 
> to be maintained to allow snapshot isolation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)