[jira] [Updated] (HADOOP-15988) Should be able to set empty directory flag to TRUE in DynamoDBMetadataStore#innerGet when using authoritative directory listings

2018-12-11 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15988:

Attachment: HADOOP-15988.002.patch

> Should be able to set empty directory flag to TRUE in 
> DynamoDBMetadataStore#innerGet when using authoritative directory listings
> 
>
> Key: HADOOP-15988
> URL: https://issues.apache.org/jira/browse/HADOOP-15988
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15988.001.patch, HADOOP-15988.002.patch
>
>
> We have the following comment and implementation in DynamoDBMetadataStore:
> {noformat}
> // When this class has support for authoritative
> // (fully-cached) directory listings, we may also be able to answer
> // TRUE here.  Until then, we don't know if we have full listing or
> // not, thus the UNKNOWN here:
> meta.setIsEmptyDirectory(
> hasChildren ? Tristate.FALSE : Tristate.UNKNOWN);
> {noformat}
> We have authoritative listings now in dynamo since HADOOP-15621, so we should 
> resolve this comment, implement the solution and test it. 
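A minimal sketch of the intended change to the snippet above, assuming the store can determine whether it holds an authoritative (full) listing for the path; the helper name below is hypothetical and only illustrates the decision logic:
{code:java}
// Sketch only: isAuthoritativeListing(path) stands in for however the store
// decides it holds a full (authoritative) listing for this directory.
boolean authoritative = isAuthoritativeListing(path);
meta.setIsEmptyDirectory(
    hasChildren ? Tristate.FALSE
                : (authoritative ? Tristate.TRUE : Tristate.UNKNOWN));
{code}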



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16000) Remove TLSv1 and SSLv2Hello from the default value of hadoop.ssl.enabled.protocols

2018-12-11 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-16000:
---

Assignee: Gabor Bota

> Remove TLSv1 and SSLv2Hello from the default value of 
> hadoop.ssl.enabled.protocols
> --
>
> Key: HADOOP-16000
> URL: https://issues.apache.org/jira/browse/HADOOP-16000
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Major
>
> {code:java}
>   public static final String SSL_ENABLED_PROTOCOLS_DEFAULT =
>   "TLSv1,SSLv2Hello,TLSv1.1,TLSv1.2";
> {code}
> TLSv1 and SSLv2Hello are considered vulnerable. Let's remove them from the 
> default value.
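A minimal sketch of what the tightened default could look like; the exact value is for the patch to decide and is shown here only for illustration:
{code:java}
  // Hypothetical new default: drop TLSv1 and SSLv2Hello, keep TLS 1.1/1.2.
  public static final String SSL_ENABLED_PROTOCOLS_DEFAULT =
      "TLSv1.1,TLSv1.2";
{code}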



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before initializing the test

2018-12-10 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16715234#comment-16715234
 ] 

Gabor Bota commented on HADOOP-15987:
-

I'll correct that and upload a new patch soon.

> ITestDynamoDBMetadataStore should check if test ddb table set properly before 
> initializing the test
> ---
>
> Key: HADOOP-15987
> URL: https://issues.apache.org/jira/browse/HADOOP-15987
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15987.001.patch
>
>
> The jira covers the following:
> * We should assert that the table name is configured when 
> DynamoDBMetadataStore is used for testing, so the test should fail if it's 
> not configured.
> * We should assert that the test table is not the same as the production 
> table, as the test table could be modified and destroyed multiple times 
> during the test.
> * This behavior should be added to the testing docs.
> [Assume from junit 
> doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
> {noformat}
> A set of methods useful for stating assumptions about the conditions in which 
> a test is meaningful. 
> A failed assumption does not mean the code is broken, but that the test 
> provides no useful information. 
> The default JUnit runner treats tests with failing assumptions as ignored.
> {noformat}
> A failed assert will cause test failure instead of just skipping the test.
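A minimal sketch of the kind of hard assertion the test setup could run; the dedicated test-table key name ({{fs.s3a.s3guard.ddb.test.table}}) is an assumption for illustration, while {{fs.s3a.s3guard.ddb.table}} is the regular table key:
{code:java}
import org.apache.hadoop.conf.Configuration;

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotEquals;

public class TestTablePreconditionsSketch {
  /** Fail (not skip) the test run if the test table is missing or unsafe. */
  static void checkTestTable(Configuration conf) {
    // The test-table key name is assumed here for illustration.
    String testTable = conf.getTrimmed("fs.s3a.s3guard.ddb.test.table", "");
    String prodTable = conf.getTrimmed("fs.s3a.s3guard.ddb.table", "");
    assertFalse("Test DynamoDB table name must be configured",
        testTable.isEmpty());
    assertNotEquals("Test table must not be the production table",
        prodTable, testTable);
  }
}
{code}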



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15563) S3guard init and set-capacity to support DDB autoscaling

2018-12-10 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15563:

Priority: Major  (was: Minor)

> S3guard init and set-capacity to support DDB autoscaling
> 
>
> Key: HADOOP-15563
> URL: https://issues.apache.org/jira/browse/HADOOP-15563
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> To keep costs down on DDB, autoscaling is a key feature: you set the max 
> values and, when idle, you don't get billed, *at the cost of delayed scale-up 
> time and the risk of not getting the max value when AWS is busy*.
> It can be done from the AWS web UI, but not in the s3guard init and 
> set-capacity calls.
> It can be done [through the 
> API|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.HowTo.SDK.html].
> Usual issues then: wiring up, CLI params, testing. It'll be hard to test.
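A minimal sketch of what wiring this up could look like, using the AWS Application Auto Scaling SDK referenced in the link above; the table name and capacity bounds are placeholders, and only the scalable-target registration is shown:
{code:java}
import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScaling;
import com.amazonaws.services.applicationautoscaling.AWSApplicationAutoScalingClientBuilder;
import com.amazonaws.services.applicationautoscaling.model.RegisterScalableTargetRequest;
import com.amazonaws.services.applicationautoscaling.model.ScalableDimension;
import com.amazonaws.services.applicationautoscaling.model.ServiceNamespace;

public class DdbAutoscaleSketch {
  public static void main(String[] args) {
    AWSApplicationAutoScaling scaling =
        AWSApplicationAutoScalingClientBuilder.defaultClient();
    // Register the table's read capacity as a scalable target (5..100 RCU);
    // write capacity would be registered the same way.
    scaling.registerScalableTarget(new RegisterScalableTargetRequest()
        .withServiceNamespace(ServiceNamespace.Dynamodb)
        .withResourceId("table/example-s3guard-table")   // placeholder table
        .withScalableDimension(ScalableDimension.DynamodbTableReadCapacityUnits)
        .withMinCapacity(5)
        .withMaxCapacity(100));
    // A target-tracking scaling policy would then be attached with
    // putScalingPolicy(); omitted here.
  }
}
{code}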



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15563) S3guard init and set-capacity to support DDB autoscaling

2018-12-10 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-15563:
---

Assignee: Gabor Bota

> S3guard init and set-capacity to support DDB autoscaling
> 
>
> Key: HADOOP-15563
> URL: https://issues.apache.org/jira/browse/HADOOP-15563
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> To keep costs down on DDB, autoscaling is a key feature: you set the max 
> values and, when idle, you don't get billed, *at the cost of delayed scale-up 
> time and the risk of not getting the max value when AWS is busy*.
> It can be done from the AWS web UI, but not in the s3guard init and 
> set-capacity calls.
> It can be done [through the 
> API|https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.HowTo.SDK.html].
> Usual issues then: wiring up, CLI params, testing. It'll be hard to test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15988) Should be able to set empty directory flag to TRUE in DynamoDBMetadataStore#innerGet when using authoritative directory listings

2018-12-10 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15988:

Status: Patch Available  (was: In Progress)

Submitted patch v001, tested against eu-west-1. 
Issues:
* org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster#testWithMiniCluster 
(known)
* ITestS3GuardToolDynamoDB#testPruneCommandCLI (added to the [S3A flakiness 
tracking 
spreadsheet|https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing])
 - timeout, successful after rerun

These issues are not related to this patch.



> Should be able to set empty directory flag to TRUE in 
> DynamoDBMetadataStore#innerGet when using authoritative directory listings
> 
>
> Key: HADOOP-15988
> URL: https://issues.apache.org/jira/browse/HADOOP-15988
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15988.001.patch
>
>
> We have the following comment and implementation in DynamoDBMetadataStore:
> {noformat}
> // When this class has support for authoritative
> // (fully-cached) directory listings, we may also be able to answer
> // TRUE here.  Until then, we don't know if we have full listing or
> // not, thus the UNKNOWN here:
> meta.setIsEmptyDirectory(
> hasChildren ? Tristate.FALSE : Tristate.UNKNOWN);
> {noformat}
> We have authoritative listings now in dynamo since HADOOP-15621, so we should 
> resolve this comment, implement the solution and test it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15988) Should be able to set empty directory flag to TRUE in DynamoDBMetadataStore#innerGet when using authoritative directory listings

2018-12-10 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15988:

Attachment: HADOOP-15988.001.patch

> Should be able to set empty directory flag to TRUE in 
> DynamoDBMetadataStore#innerGet when using authoritative directory listings
> 
>
> Key: HADOOP-15988
> URL: https://issues.apache.org/jira/browse/HADOOP-15988
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15988.001.patch
>
>
> We have the following comment and implementation in DynamoDBMetadataStore:
> {noformat}
> // When this class has support for authoritative
> // (fully-cached) directory listings, we may also be able to answer
> // TRUE here.  Until then, we don't know if we have full listing or
> // not, thus the UNKNOWN here:
> meta.setIsEmptyDirectory(
> hasChildren ? Tristate.FALSE : Tristate.UNKNOWN);
> {noformat}
> We have authoritative listings now in dynamo since HADOOP-15621, so we should 
> resolve this comment, implement the solution and test it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15988) Should be able to set empty directory flag to TRUE in DynamoDBMetadataStore#innerGet when using authoritative directory listings

2018-12-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15988 started by Gabor Bota.
---
> Should be able to set empty directory flag to TRUE in 
> DynamoDBMetadataStore#innerGet when using authoritative directory listings
> 
>
> Key: HADOOP-15988
> URL: https://issues.apache.org/jira/browse/HADOOP-15988
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> We have the following comment and implementation in DynamoDBMetadataStore:
> {noformat}
> // When this class has support for authoritative
> // (fully-cached) directory listings, we may also be able to answer
> // TRUE here.  Until then, we don't know if we have full listing or
> // not, thus the UNKNOWN here:
> meta.setIsEmptyDirectory(
> hasChildren ? Tristate.FALSE : Tristate.UNKNOWN);
> {noformat}
> We have authoritative listings now in dynamo since HADOOP-15621, so we should 
> resolve this comment, implement the solution and test it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15988) Should be able to set empty directory flag to TRUE in DynamoDBMetadataStore#innerGet when using authoritative directory listings

2018-12-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15988:

Summary: Should be able to set empty directory flag to TRUE in 
DynamoDBMetadataStore#innerGet when using authoritative directory listings  
(was: Should be able to set empty directory flag to TRUE in 
DynamoDBMetadataStore#innerGet when using authoritative mode)

> Should be able to set empty directory flag to TRUE in 
> DynamoDBMetadataStore#innerGet when using authoritative directory listings
> 
>
> Key: HADOOP-15988
> URL: https://issues.apache.org/jira/browse/HADOOP-15988
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> We have the following comment and implementation in DynamoDBMetadataStore:
> {noformat}
> // When this class has support for authoritative
> // (fully-cached) directory listings, we may also be able to answer
> // TRUE here.  Until then, we don't know if we have full listing or
> // not, thus the UNKNOWN here:
> meta.setIsEmptyDirectory(
> hasChildren ? Tristate.FALSE : Tristate.UNKNOWN);
> {noformat}
> We have authoritative listings now in dynamo since HADOOP-15621, so we should 
> resolve this comment, implement the solution and test it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15988) Should be able to set empty directory flag to TRUE in DynamoDBMetadataStore#innerGet when using authoritative mode

2018-12-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15988:

Summary: Should be able to set empty directory flag to TRUE in 
DynamoDBMetadataStore#innerGet when using authoritative mode  (was: Set empty 
directory flag to TRUE in DynamoDBMetadataStore#innerGet when using 
authoritative mode)

> Should be able to set empty directory flag to TRUE in 
> DynamoDBMetadataStore#innerGet when using authoritative mode
> --
>
> Key: HADOOP-15988
> URL: https://issues.apache.org/jira/browse/HADOOP-15988
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> We have the following comment and implementation in DynamoDBMetadataStore:
> {noformat}
> // When this class has support for authoritative
> // (fully-cached) directory listings, we may also be able to answer
> // TRUE here.  Until then, we don't know if we have full listing or
> // not, thus the UNKNOWN here:
> meta.setIsEmptyDirectory(
> hasChildren ? Tristate.FALSE : Tristate.UNKNOWN);
> {noformat}
> We have authoritative listings now in dynamo since HADOOP-15621, so we should 
> resolve this comment, implement the solution and test it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15988) Set empty directory flag to TRUE in DynamoDBMetadataStore#innerGet when using authoritative mode

2018-12-07 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15988:
---

 Summary: Set empty directory flag to TRUE in 
DynamoDBMetadataStore#innerGet when using authoritative mode
 Key: HADOOP-15988
 URL: https://issues.apache.org/jira/browse/HADOOP-15988
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Gabor Bota
Assignee: Gabor Bota


We have the following comment and implementation in DynamoDBMetadataStore:
{noformat}
// When this class has support for authoritative
// (fully-cached) directory listings, we may also be able to answer
// TRUE here.  Until then, we don't know if we have full listing or
// not, thus the UNKNOWN here:
meta.setIsEmptyDirectory(
hasChildren ? Tristate.FALSE : Tristate.UNKNOWN);
{noformat}

We have authoritative listings now in dynamo since HADOOP-15621, so we should 
resolve this comment, implement the solution and test it. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before initializing the test

2018-12-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15987:

Status: Patch Available  (was: In Progress)

> ITestDynamoDBMetadataStore should check if test ddb table set properly before 
> initializing the test
> ---
>
> Key: HADOOP-15987
> URL: https://issues.apache.org/jira/browse/HADOOP-15987
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15987.001.patch
>
>
> The jira covers the following:
> * We should assert that the table name is configured when 
> DynamoDBMetadataStore is used for testing, so the test should fail if it's 
> not configured.
> * We should assert that the test table is not the same as the production 
> table, as the test table could be modified and destroyed multiple times 
> during the test.
> * This behavior should be added to the testing docs.
> [Assume from junit 
> doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
> {noformat}
> A set of methods useful for stating assumptions about the conditions in which 
> a test is meaningful. 
> A failed assumption does not mean the code is broken, but that the test 
> provides no useful information. 
> The default JUnit runner treats tests with failing assumptions as ignored.
> {noformat}
> A failed assert will cause test failure instead of just skipping the test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before initializing the test

2018-12-07 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16712897#comment-16712897
 ] 

Gabor Bota commented on HADOOP-15987:
-

Uploaded patch 001 with an additional docs fix: we don't have a local DynamoDB 
to test against, and the parameter is {{auth}} instead of {{non-auth}}.

Tested against eu-west-1. 
{{testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster)}} 
is still failing; other flaky tests are tracked continuously in the [S3A test failure 
tracker|https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing].

> ITestDynamoDBMetadataStore should check if test ddb table set properly before 
> initializing the test
> ---
>
> Key: HADOOP-15987
> URL: https://issues.apache.org/jira/browse/HADOOP-15987
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15987.001.patch
>
>
> The jira covers the following:
> * We should assert that the table name is configured when 
> DynamoDBMetadataStore is used for testing, so the test should fail if it's 
> not configured.
> * We should assert that the test table is not the same as the production 
> table, as the test table could be modified and destroyed multiple times 
> during the test.
> * This behavior should be added to the testing docs.
> [Assume from junit 
> doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
> {noformat}
> A set of methods useful for stating assumptions about the conditions in which 
> a test is meaningful. 
> A failed assumption does not mean the code is broken, but that the test 
> provides no useful information. 
> The default JUnit runner treats tests with failing assumptions as ignored.
> {noformat}
> A failed assert will cause test failure instead of just skipping the test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before initializing the test

2018-12-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15987:

Attachment: HADOOP-15987.001.patch

> ITestDynamoDBMetadataStore should check if test ddb table set properly before 
> initializing the test
> ---
>
> Key: HADOOP-15987
> URL: https://issues.apache.org/jira/browse/HADOOP-15987
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15987.001.patch
>
>
> The jira covers the following:
> * We should assert that the table name is configured when 
> DynamoDBMetadataStore is used for testing, so the test should fail if it's 
> not configured.
> * We should assert that the test table is not the same as the production 
> table, as the test table could be modified and destroyed multiple times 
> during the test.
> * This behavior should be added to the testing docs.
> [Assume from junit 
> doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
> {noformat}
> A set of methods useful for stating assumptions about the conditions in which 
> a test is meaningful. 
> A failed assumption does not mean the code is broken, but that the test 
> provides no useful information. 
> The default JUnit runner treats tests with failing assumptions as ignored.
> {noformat}
> A failed assert will cause test failure instead of just skipping the test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before initializing the test

2018-12-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15987:

Attachment: (was: HADOOP-15987.001.patch)

> ITestDynamoDBMetadataStore should check if test ddb table set properly before 
> initializing the test
> ---
>
> Key: HADOOP-15987
> URL: https://issues.apache.org/jira/browse/HADOOP-15987
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The jira covers the following:
> * We should assert that the table name is configured when 
> DynamoDBMetadataStore is used for testing, so the test should fail if it's 
> not configured.
> * We should assert that the test table is not the same as the production 
> table, as the test table could be modified and destroyed multiple times 
> during the test.
> * This behavior should be added to the testing docs.
> [Assume from junit 
> doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
> {noformat}
> A set of methods useful for stating assumptions about the conditions in which 
> a test is meaningful. 
> A failed assumption does not mean the code is broken, but that the test 
> provides no useful information. 
> The default JUnit runner treats tests with failing assumptions as ignored.
> {noformat}
> A failed assert will cause test failure instead of just skipping the test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before initializing the test

2018-12-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15987:

Attachment: HADOOP-15987.001.patch

> ITestDynamoDBMetadataStore should check if test ddb table set properly before 
> initializing the test
> ---
>
> Key: HADOOP-15987
> URL: https://issues.apache.org/jira/browse/HADOOP-15987
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15987.001.patch
>
>
> The jira covers the following:
> * We should assert that the table name is configured when 
> DynamoDBMetadataStore is used for testing, so the test should fail if it's 
> not configured.
> * We should assert that the test table is not the same as the production 
> table, as the test table could be modified and destroyed multiple times 
> during the test.
> * This behavior should be added to the testing docs.
> [Assume from junit 
> doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
> {noformat}
> A set of methods useful for stating assumptions about the conditions in which 
> a test is meaningful. 
> A failed assumption does not mean the code is broken, but that the test 
> provides no useful information. 
> The default JUnit runner treats tests with failing assumptions as ignored.
> {noformat}
> A failed assert will cause test failure instead of just skipping the test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before the test

2018-12-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15987 started by Gabor Bota.
---
> ITestDynamoDBMetadataStore should check if test ddb table set properly before 
> the test
> --
>
> Key: HADOOP-15987
> URL: https://issues.apache.org/jira/browse/HADOOP-15987
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The jira covers the following:
> * We should assert that the table name is configured when 
> DynamoDBMetadataStore is used for testing, so the test should fail if it's 
> not configured.
> * We should assert that the test table is not the same as the production 
> table, as the test table could be modified and destroyed multiple times 
> during the test.
> * This behavior should be added to the testing docs.
> [Assume from junit 
> doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
> {noformat}
> A set of methods useful for stating assumptions about the conditions in which 
> a test is meaningful. A failed assumption does not mean the code is broken, 
> but that the test provides no useful information. The default JUnit runner 
> treats tests with failing assumptions as ignored.
> {noformat}
> A failed assert will cause test failure instead of just skipping the test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before initializing the test

2018-12-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15987:

Summary: ITestDynamoDBMetadataStore should check if test ddb table set 
properly before initializing the test  (was: ITestDynamoDBMetadataStore should 
check if test ddb table set properly before the test)

> ITestDynamoDBMetadataStore should check if test ddb table set properly before 
> initializing the test
> ---
>
> Key: HADOOP-15987
> URL: https://issues.apache.org/jira/browse/HADOOP-15987
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The jira covers the following:
> * We should assert that the table name is configured when 
> DynamoDBMetadataStore is used for testing, so the test should fail if it's 
> not configured.
> * We should assert that the test table is not the same as the production 
> table, as the test table could be modified and destroyed multiple times 
> during the test.
> * This behavior should be added to the testing docs.
> [Assume from junit 
> doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
> {noformat}
> A set of methods useful for stating assumptions about the conditions in which 
> a test is meaningful. 
> A failed assumption does not mean the code is broken, but that the test 
> provides no useful information. 
> The default JUnit runner treats tests with failing assumptions as ignored.
> {noformat}
> A failed assert will cause test failure instead of just skipping the test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before the test

2018-12-07 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15987:

Description: 
The jira covers the following:
* We should assert that the table name is configured when DynamoDBMetadataStore 
is used for testing, so the test should fail if it's not configured.
* We should assert that the test table is not the same as the production table, 
as the test table could be modified and destroyed multiple times during the 
test.
* This behavior should be added to the testing docs.

[Assume from junit 
doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
{noformat}
A set of methods useful for stating assumptions about the conditions in which a 
test is meaningful. 
A failed assumption does not mean the code is broken, but that the test 
provides no useful information. 
The default JUnit runner treats tests with failing assumptions as ignored.
{noformat}

A failed assert will cause test failure instead of just skipping the test.

  was:
The jira covers the following:
* We should assert that the table name is configured when DynamoDBMetadataStore 
is used for testing, so the test should fail if it's not configured.
* We should assert that the test table is not the same as the production table, 
as the test table could be modified and destroyed multiple times during the 
test.
* This behavior should be added to the testing docs.

[Assume from junit 
doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
{noformat}
A set of methods useful for stating assumptions about the conditions in which a 
test is meaningful. A failed assumption does not mean the code is broken, but 
that the test provides no useful information. The default JUnit runner treats 
tests with failing assumptions as ignored.
{noformat}

A failed assert will cause test failure instead of just skipping the test.


> ITestDynamoDBMetadataStore should check if test ddb table set properly before 
> the test
> --
>
> Key: HADOOP-15987
> URL: https://issues.apache.org/jira/browse/HADOOP-15987
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The jira covers the following:
> * We should assert that the table name is configured when 
> DynamoDBMetadataStore is used for testing, so the test should fail if it's 
> not configured.
> * We should assert that the test table is not the same as the production 
> table, as the test table could be modified and destroyed multiple times 
> during the test.
> * This behavior should be added to the testing docs.
> [Assume from junit 
> doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
> {noformat}
> A set of methods useful for stating assumptions about the conditions in which 
> a test is meaningful. 
> A failed assumption does not mean the code is broken, but that the test 
> provides no useful information. 
> The default JUnit runner treats tests with failing assumptions as ignored.
> {noformat}
> A failed assert will cause test failure instead of just skipping the test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15987) ITestDynamoDBMetadataStore should check if test ddb table set properly before the test

2018-12-07 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15987:
---

 Summary: ITestDynamoDBMetadataStore should check if test ddb table 
set properly before the test
 Key: HADOOP-15987
 URL: https://issues.apache.org/jira/browse/HADOOP-15987
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Gabor Bota
Assignee: Gabor Bota


The jira covers the following:
* We should assert that the table name is configured when DynamoDBMetadataStore 
is used for testing, so the test should fail if it's not configured.
* We should assert that the test table is not the same as the production table, 
as the test table could be modified and destroyed multiple times during the 
test.
* This behavior should be added to the testing docs.

[Assume from junit 
doc|http://junit.sourceforge.net/javadoc/org/junit/Assume.html]:
{noformat}
A set of methods useful for stating assumptions about the conditions in which a 
test is meaningful. A failed assumption does not mean the code is broken, but 
that the test provides no useful information. The default JUnit runner treats 
tests with failing assumptions as ignored.
{noformat}

A failed assert will cause test failure instead of just skipping the test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-07 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16712683#comment-16712683
 ] 

Gabor Bota commented on HADOOP-15819:
-

Thanks [~adam.antal], could you remove the bad one?

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, HADOOP-15819.001.patch, 
> HADOOP-15819.001.patch, S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests run serially - no test runs on top of 
> another - so we should not see the tests failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests share the same 
> S3AFileSystem, so if test A uses a FileSystem and closes it in teardown, then 
> test B gets the same FileSystem object from the cache and tries to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error 
> occurs. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.ha
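A minimal sketch of one way tests could avoid tripping over a shared, already-closed cached instance, by taking a private FileSystem per test; the bucket URI is a placeholder:
{code:java}
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class PrivateFsSketch {
  // FileSystem.newInstance always returns a fresh, uncached instance;
  // fs.s3a.impl.disable.cache additionally keeps FileSystem.get from handing
  // out a shared (possibly closed) instance for s3a URIs.
  static FileSystem privateFs(Configuration conf)
      throws IOException, URISyntaxException {
    conf.setBoolean("fs.s3a.impl.disable.cache", true);
    return FileSystem.newInstance(new URI("s3a://example-bucket/"), conf);
  }
}
{code}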

[jira] [Assigned] (HADOOP-14425) Add more s3guard metrics

2018-12-06 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-14425:
---

Assignee: Gabor Bota

> Add more s3guard metrics
> 
>
> Key: HADOOP-14425
> URL: https://issues.apache.org/jira/browse/HADOOP-14425
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Ai Deng
>Assignee: Gabor Bota
>Priority: Major
>
> The metrics suggested to add:
> Status:
> S3GUARD_METADATASTORE_ENABLED
> S3GUARD_METADATASTORE_IS_AUTHORITATIVE
> Operations:
> S3GUARD_METADATASTORE_INITIALIZATION
> S3GUARD_METADATASTORE_DELETE_PATH
> S3GUARD_METADATASTORE_DELETE_PATH_LATENCY
> S3GUARD_METADATASTORE_DELETE_SUBTREE_PATCH
> S3GUARD_METADATASTORE_GET_PATH
> S3GUARD_METADATASTORE_GET_PATH_LATENCY
> S3GUARD_METADATASTORE_GET_CHILDREN_PATH
> S3GUARD_METADATASTORE_GET_CHILDREN_PATH_LATENCY
> S3GUARD_METADATASTORE_MOVE_PATH
> S3GUARD_METADATASTORE_PUT_PATH
> S3GUARD_METADATASTORE_PUT_PATH_LATENCY
> S3GUARD_METADATASTORE_CLOSE
> S3GUARD_METADATASTORE_DESTROY
> From S3Guard:
> S3GUARD_METADATASTORE_MERGE_DIRECTORY
> For the failures:
> S3GUARD_METADATASTORE_DELETE_FAILURE
> S3GUARD_METADATASTORE_GET_FAILURE
> S3GUARD_METADATASTORE_PUT_FAILURE
> Etc:
> S3GUARD_METADATASTORE_PUT_RETRY_TIMES
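A minimal sketch of how one of the proposed counters could be registered and incremented, using the hadoop-common metrics2 library; this is illustrative only and not the S3A instrumentation wiring itself:
{code:java}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;

public class S3GuardMetricSketch {
  public static void main(String[] args) {
    // Register one of the proposed counters and bump it once.
    MetricsRegistry registry = new MetricsRegistry("S3GuardMetadataStore");
    MutableCounterLong deletePath = registry.newCounter(
        "s3guard_metadatastore_delete_path",
        "Count of S3Guard metadata store path deletes", 0L);
    deletePath.incr();
  }
}
{code}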



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14109) improvements to S3GuardTool destroy command

2018-12-06 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711973#comment-16711973
 ] 

Gabor Bota commented on HADOOP-14109:
-

I'll start work on this after HADOOP-15428 and HADOOP-15845 gets resolved.

> improvements to S3GuardTool destroy command
> ---
>
> Key: HADOOP-14109
> URL: https://issues.apache.org/jira/browse/HADOOP-14109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> The S3GuardTool destroy operation initializes DynamoDB, and in doing so has 
> some issues:
> # if the version of the table is incompatible, init fails, so the table isn't 
> deletable
> # if the system is configured to create the table on demand, then whenever 
> destroy is called for a table that doesn't exist, it gets created and then 
> destroyed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-06 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711957#comment-16711957
 ] 

Gabor Bota commented on HADOOP-15819:
-

Thanks [~adam.antal], I'll test it tomorrow up and downstream.

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, HADOOP-15819.001.patch, 
> S3ACloseEnforcedFileSystem.java, S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests run serially - no test runs on top of 
> another - so we should not see the tests failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests share the same 
> S3AFileSystem, so if test A uses a FileSystem and closes it in teardown, then 
> test B gets the same FileSystem object from the cache and tries to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error 
> occurs. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.hadoop.fs.s3a.I

[jira] [Updated] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-06 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15845:

Status: Patch Available  (was: In Progress)

Test run against eu-west-1. No unknown issues 
(testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster) 
still failing).

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15845.001.patch
>
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values.
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of a 
> bucket or an explicit ddb table URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-06 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15845:

Attachment: HADOOP-15845.001.patch

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15845.001.patch
>
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values.
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of a 
> bucket or an explicit ddb table URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-05 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710184#comment-16710184
 ] 

Gabor Bota edited comment on HADOOP-15845 at 12/5/18 3:09 PM:
--

Note that the following is described in init, destroy, set-capacity, etc. 
commands: 
{noformat}
"  Specifying both the -" + REGION_FLAG + " option and an S3A path\n" +
"  is not supported.";
{noformat}

Just a note for myself before the patch: I should look into that and clean up 
the current implementation to work that way.


was (Author: gabor.bota):
Note that the following is described in init, destroy, set-capacity, etc. 
commands: 
{noformat}
"  URLs for Amazon DynamoDB are of the form dynamodb://TABLE_NAME.\n" +
"  Specifying both the -" + REGION_FLAG + " option and an S3A path\n" +
"  is not supported.";
{noformat}

Just a note for myself before the patch: I should look into that and clean up 
the current implementation to work that way.

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values.
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of a 
> bucket or an explicit ddb table URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-05 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710184#comment-16710184
 ] 

Gabor Bota commented on HADOOP-15845:
-

Note that the following is described in init, destroy, set-capacity, etc. 
commands: 
{noformat}
"  URLs for Amazon DynamoDB are of the form dynamodb://TABLE_NAME.\n" +
"  Specifying both the -" + REGION_FLAG + " option and an S3A path\n" +
"  is not supported.";
{noformat}

Just a note for myself before the patch: I should look into that and clean up 
the current implementation to work that way.

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values.
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of a 
> bucket or an explicit ddb table URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-05 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15845:

Comment: was deleted

(was: I think that the bucket should be mandatory on the CLI for these commands. 
This would make more sense after the HADOOP-14927 fix. I'll upload a patch for it 
soon with tests.)

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values.
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of a 
> bucket or an explicit ddb table URI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-05 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710139#comment-16710139
 ] 

Gabor Bota commented on HADOOP-15845:
-

I think that the bucket should be mandatory on the CLI for these commands. This 
would make more sense after the HADOOP-14927 fix. I'll upload a patch for it soon 
with tests.

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of a 
> bucket or an explicit ddb table URI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-12-04 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15819 started by Gabor Bota.
---
> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: HADOOP-15819.000.patch, S3ACloseEnforcedFileSystem.java, 
> S3ACloseEnforcedFileSystem.java, closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of the other - so we should not see that the tests are failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem so if A test uses a FileSystem and closes it in teardown then B 
> test will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> that the failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error should 
> occur. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AClosedFS.setup(ITestS3AClosedFS.java:40)
>   at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
>  
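
As a minimal sketch of the FileSystem cache behaviour the report above describes 
(the bucket URI and class name below are placeholders, not code from the failing 
tests), two tests sharing the cached S3AFileSystem can step on each other like this:
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SharedFsCacheSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    URI uri = URI.create("s3a://example-bucket/");      // placeholder bucket

    // Both "tests" resolve the same URI/conf, so FileSystem.get() hands back
    // one shared, cached instance.
    FileSystem fsForTestA = FileSystem.get(uri, conf);
    FileSystem fsForTestB = FileSystem.get(uri, conf);  // same object as fsForTestA

    // Test A's teardown closes the shared instance...
    fsForTestA.close();

    // ...so test B's next call on its reference fails with
    // "java.io.IOException: s3a://example-bucket: FileSystem is closed!"
    fsForTestB.exists(new Path("/"));
  }
}
{code}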

[jira] [Assigned] (HADOOP-14109) improvements to S3GuardTool destroy command

2018-12-04 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-14109:
---

Assignee: Gabor Bota

> improvements to S3GuardTool destroy command
> ---
>
> Key: HADOOP-14109
> URL: https://issues.apache.org/jira/browse/HADOOP-14109
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> The S3GuardTool destroy operation initializes dynamoDB, and in doing so has 
> some issues
> # if the version of the table is incompatible, init fails, so table isn't 
> deleteable
> # if the system is configured to create the table on demand, then whenever 
> destroy is called for a table that doesn't exist, it gets created and then 
> destroyed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-12-04 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15845 started by Gabor Bota.
---
> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of a 
> bucket or an explicit ddb table URI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-04 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16708785#comment-16708785
 ] 

Gabor Bota edited comment on HADOOP-15428 at 12/4/18 2:41 PM:
--

submitted patch v001, tested against eu-west-1. 
Issues:
* org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster#testWithMiniCluster 
(known)
* ITestS3GuardToolDynamoDB#testPruneCommandCLI (added to [S3 flakiness 
collecting 
spreadsheet|https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing])
 - timeout, successful after rerun

These issues are not related.




was (Author: gabor.bota):
submitted patch v001, tested against eu-west-1. 
Issues:
* org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster#testWithMiniCluster 
(known)
* ITestS3GuardToolDynamoDB#testPruneCommandCLI (added to [S3 flakyness 
collecting 
spreadsheet|https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing])

These issues are not related.



> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15428.001.patch
>
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-04 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15428:

Status: Patch Available  (was: In Progress)

> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15428.001.patch
>
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-04 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16708785#comment-16708785
 ] 

Gabor Bota edited comment on HADOOP-15428 at 12/4/18 2:41 PM:
--

submitted patch v001, tested against eu-west-1. 
Issues:
* org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster#testWithMiniCluster 
(known)
* ITestS3GuardToolDynamoDB#testPruneCommandCLI (added to [S3 flakyness 
collecting 
spreadsheet|https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing])

These issues are not related.




was (Author: gabor.bota):
submitted patch v001, tested against eu-west-1. 
Issues:
* org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster#testWithMiniCluster 
(known)
* ITestS3GuardToolDynamoDB#testPruneCommandCLI (added to [S3 flakyness 
collecting spreadsheet|
https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing])

These issues are not related.



> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15428.001.patch
>
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-04 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16708785#comment-16708785
 ] 

Gabor Bota edited comment on HADOOP-15428 at 12/4/18 2:40 PM:
--

submitted patch v001, tested against eu-west-1. 
Issues:
* org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster#testWithMiniCluster 
(known)
* ITestS3GuardToolDynamoDB#testPruneCommandCLI (added to [S3 flakyness 
collecting spreadsheet|
https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing])

These issues are not related.




was (Author: gabor.bota):
submitted patch v001, tested against eu-west-1. 
Issues:
* org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster#testWithMiniCluster 
(known)
* ITestS3GuardToolDynamoDB#testPruneCommandCLI (added to [S3 flakyness 
collecting spreadsheet|
https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing)

These issues are not related.



> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15428.001.patch
>
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-04 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16708785#comment-16708785
 ] 

Gabor Bota edited comment on HADOOP-15428 at 12/4/18 2:40 PM:
--

submitted patch v001, tested against eu-west-1. 
Issues:
* org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster#testWithMiniCluster 
(known)
* ITestS3GuardToolDynamoDB#testPruneCommandCLI (added to [S3 flakyness 
collecting spreadsheet|
https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing)

These issues are not related.




was (Author: gabor.bota):
submitted patch v001, tested against eu-west-1. 
Issues:
* org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster#testWithMiniCluster 
(known)
* ITestS3GuardToolDynamoDB#testPruneCommandCLI (added to [S3 flakyness 
collecting spreadsheet|
https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing)



> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15428.001.patch
>
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-04 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16708785#comment-16708785
 ] 

Gabor Bota commented on HADOOP-15428:
-

submitted patch v001, tested against eu-west-1. 
Issues:
* org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster#testWithMiniCluster 
(known)
* ITestS3GuardToolDynamoDB#testPruneCommandCLI (added to [S3 flakyness 
collecting spreadsheet|
https://docs.google.com/spreadsheets/d/1Z9dkg5yC7Hu7VQ5G2Wz40hG1pb0DhLuLjJ3PZe7WL3Q/edit?usp=sharing)



> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15428.001.patch
>
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-04 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15428:

Attachment: HADOOP-15428.001.patch

> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15428.001.patch
>
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-03 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15428 started by Gabor Bota.
---
> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Status: Patch Available  (was: In Progress)

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Description: com.google.guava:guava should be upgraded to 27.0-jre due to 
new CVE's found 
[CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].  (was: 
com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
[CVE-2018-10237|http://example.com].)

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Description: com.google.guava:guava should be upgraded to 27.0-jre due to 
new CVE's found [CVE-2018-10237|http://example.com].  (was: 
com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
CVE-2018-10237.)

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> [CVE-2018-10237|http://example.com].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16703365#comment-16703365
 ] 

Gabor Bota commented on HADOOP-15960:
-

I'll search for other guava updates and link them here.
Uploaded my initial patch for this. It compiles, and I've run tests for a few 
modules, but I'm not really sure if the way I modified 
{{ensure-jars-have-correct-contents.sh}} is the right one.

This was the error message I fixed by adding {{allowed_expr+="|^afu/"}}.
{noformat}
[INFO] Artifact looks correct: 'hadoop-client-api-3.3.0-SNAPSHOT.jar'
[ERROR] Found artifact with unexpected contents: 
'hadoop-client-modules/hadoop-client-runtime/target/hadoop-client-runtime-3.3.0-SNAPSHOT.jar'
Please check the following and either correct the build or update
the allowed list with reasoning.

afu/
afu/org/
afu/org/checkerframework/
afu/org/checkerframework/checker/formatter/
afu/org/checkerframework/checker/formatter/FormatUtil$Conversion.class

afu/org/checkerframework/checker/formatter/FormatUtil$ExcessiveOrMissingFormatArgumentException.class

afu/org/checkerframework/checker/formatter/FormatUtil$IllegalFormatConversionCategoryException.class
afu/org/checkerframework/checker/formatter/FormatUtil.class
afu/org/checkerframework/checker/nullness/
afu/org/checkerframework/checker/nullness/NullnessUtils.class
afu/org/checkerframework/checker/regex/

afu/org/checkerframework/checker/regex/RegexUtil$CheckedPatternSyntaxException.class
afu/org/checkerframework/checker/regex/RegexUtil.class
afu/org/checkerframework/checker/units/
afu/org/checkerframework/checker/units/UnitsTools.class
afu/plume/RegexUtil$CheckedPatternSyntaxException.class
afu/plume/RegexUtil.class
{noformat}

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Attachment: HADOOP-15960.000.WIP.patch

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15960.000.WIP.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Description: 
com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
CVE-2018-10237.



  was:com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's 
found CVE-2018-10237


> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15960:
---

 Summary: Update guava to 27.0-jre in hadoop-common
 Key: HADOOP-15960
 URL: https://issues.apache.org/jira/browse/HADOOP-15960
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, security
Affects Versions: 3.1.0
Reporter: Gabor Bota
Assignee: Gabor Bota


com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
CVE-2018-10237



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15960 started by Gabor Bota.
---
> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15960) Update guava to 27.0-jre in hadoop-common

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15960:

Description: com.google.guava:guava should be upgraded to 27.0-jre due to 
new CVE's found CVE-2018-10237.  (was: com.google.guava:guava should be 
upgraded to 27.0-jre due to new CVE's found CVE-2018-10237.

)

> Update guava to 27.0-jre in hadoop-common
> -
>
> Key: HADOOP-15960
> URL: https://issues.apache.org/jira/browse/HADOOP-15960
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> com.google.guava:guava should be upgraded to 27.0-jre due to new CVE's found 
> CVE-2018-10237.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15845) s3guard init and destroy command will create/destroy tables if ddb.table & region are set

2018-11-29 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-15845:
---

Assignee: Gabor Bota

> s3guard init and destroy command will create/destroy tables if ddb.table & 
> region are set
> -
>
> Key: HADOOP-15845
> URL: https://issues.apache.org/jira/browse/HADOOP-15845
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> If you have s3guard set up with a table name and a region, then s3guard init 
> will automatically create the table, without you specifying a bucket or URI.
> I had expected the command just to print out its arguments, but it actually 
> did the init with the default bucket values
> Even worse, `hadoop s3guard destroy` will destroy the table. 
> This is too dangerous to allow. The command must require either the name of a 
> bucket or an explicit ddb table URI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15798) LocalMetadataStore put() does not retain isDeleted in parent listing

2018-11-28 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15798:

Priority: Major  (was: Minor)

> LocalMetadataStore put() does not retain isDeleted in parent listing
> 
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15798.001.patch
>
>
> To reproduce: mvn verify -Dauth -Ds3guard -Dlocal -Dtest=none 
> -Dit.test=ITestS3GuardListConsistency
> ITestS3GuardListConsistency#testConsistentListAfterDelete test fails 
> constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15947) Fix ITestDynamoDBMetadataStore test error issues

2018-11-28 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16701829#comment-16701829
 ] 

Gabor Bota commented on HADOOP-15947:
-

I can't reproduce those, but (at least for me) these errors happen in every run, 
and the patch fixes them.

> Fix ITestDynamoDBMetadataStore test error issues
> 
>
> Key: HADOOP-15947
> URL: https://issues.apache.org/jira/browse/HADOOP-15947
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15947.001.patch
>
>
> When running regression hadoop-aws integration tests for HADOOP-15370, I got 
> the following errors in ITestDynamoDBMetadataStore: 
> {noformat}
> [ERROR] Tests run: 40, Failures: 4, Errors: 2, Skipped: 0, Time elapsed: 
> 177.303 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
> [ERROR] 
> testBatchWrite(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  
> Time elapsed: 1.262 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.doTestBatchWrite(ITestDynamoDBMetadataStore.java:365)
>  (...)
> [ERROR] 
> testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 11.394 s  <<< FAILURE!
> java.lang.AssertionError: expected:<20> but was:<10> (...)
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:594)
>  (...)
> [ERROR] 
> testPruneUnsetsAuthoritative(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.323 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:737)
>  (...)
> [ERROR] 
> testDeleteSubtree(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.377 s  <<< FAILURE!
> java.lang.AssertionError: Directory /ADirectory2 is null in cache (...)
> [ERROR] 
> testPutNew(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  Time 
> elapsed: 1.466 s  <<< FAILURE!
> java.lang.AssertionError: Directory /da2 is null in cache (...)
> [ERROR] 
> testDeleteSubtreeHostPath(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.378 s  <<< FAILURE!
> java.lang.AssertionError: Directory 
> s3a://cloudera-dev-gabor-ireland/ADirectory2 is null in cache (...)
> {noformat}
> I create this jira to handle and fix all of these issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-11-27 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16700370#comment-16700370
 ] 

Gabor Bota commented on HADOOP-15819:
-

FYI [~adam.antal], I've created an issue for the problems when file caching is 
disabled: HADOOP-15796

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of the other - so we should not see that the tests are failing 
> like this. The issue could be in how we handle 
> org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same 
> S3AFileSystem so if A test uses a FileSystem and closes it in teardown then B 
> test will get the same FileSystem object from the cache and try to use it, 
> but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> that the failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error should 
> occur. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AClosedFS.setup(ITestS3AClosedFS.java:4

[jira] [Commented] (HADOOP-15370) S3A log message on rm s3a://bucket/ not intuitive

2018-11-27 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16700093#comment-16700093
 ] 

Gabor Bota commented on HADOOP-15370:
-

Usually, these tests fail if the region I run them against is really remote 
(like running against us-west from the Budapest office). What target are you 
running your tests against?

Sure, we can create a Google doc for these failures. Please create one, 
[~mackrorysd], with the tests that are failing and the configuration you use. 
I'll test it and figure out what's happening.

> S3A log message on rm s3a://bucket/ not intuitive
> -
>
> Key: HADOOP-15370
> URL: https://issues.apache.org/jira/browse/HADOOP-15370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15370.001.patch
>
>
> when you try to delete the root of a bucket from command line, e.g. {{hadoop 
> fs -rm -r -skipTrash s3a://hwdev-steve-new/}}, the output isn't that useful
> {code}
> 2018-04-06 16:35:23,048 [main] INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:rejectRootDirectoryDelete(1837)) - s3a delete the 
> hwdev-steve-new root directory of true
> rm: `s3a://hwdev-steve-new/': Input/output error
> 2018-04-06 16:35:23,050 [pool-2-thread-1] DEBUG s3a.S3AFileSystem
> {code}
> the single log message doesn't parse, and the error message raised is lost by 
> the FS -rm CLI command (why?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15798) LocalMetadataStore put() does not retain isDeleted in parent listing

2018-11-26 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15798:

Description: 
To reproduce: mvn verify -Dauth -Ds3guard -Dlocal -Dtest=none 
-Dit.test=ITestS3GuardListConsistency

ITestS3GuardListConsistency#testConsistentListAfterDelete test fails constantly 
when running with LocalMetadataStore.

{noformat}
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.junit.Assert.assertFalse(Assert.java:74)
at 
org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}

  was:
ITestS3GuardListConsistency#testConsistentListAfterDelete test fails constantly 
when running with LocalMetadataStore.

{noformat}
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.junit.Assert.assertFalse(Assert.java:74)
at 
org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}


> LocalMetadataStore put() does not retain isDeleted in parent listing
> 
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15798.001.patch
>
>
> To reproduce: mvn verify -Dauth -Ds3guard -Dlocal -Dtest=none 
> -Dit.test=ITestS3GuardListConsistency
> ITestS3GuardListConsistency#testConsistentListAfterDelete test fails 
> constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.

[jira] [Updated] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-11-26 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14927:

Status: Patch Available  (was: Open)

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0, 3.0.0-alpha3, 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14927.001.patch, HADOOP-14927.002.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-11-26 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-14927:

Attachment: HADOOP-14927.002.patch

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3, 3.1.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14927.001.patch, HADOOP-14927.002.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15798) LocalMetadataStore put() does not retain isDeleted in parent listing

2018-11-26 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15798:

Description: 
ITestS3GuardListConsistency#testConsistentListAfterDelete test fails constantly 
when running with LocalMetadataStore.

{noformat}
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.junit.Assert.assertFalse(Assert.java:74)
at 
org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}

  was:
Test fails constantly when running with LocalMetadataStore.

{noformat}
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertFalse(Assert.java:64)
at org.junit.Assert.assertFalse(Assert.java:74)
at 
org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}


> LocalMetadataStore put() does not retain isDeleted in parent listing
> 
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15798.001.patch
>
>
> ITestS3GuardListConsistency#testConsistentListAfterDelete test fails 
> constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.inte

[jira] [Comment Edited] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-11-26 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16698953#comment-16698953
 ] 

Gabor Bota edited comment on HADOOP-14927 at 11/26/18 2:35 PM:
---

The issue is that org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.Destroy#run does 
not even check whether an s3a:// bucket is passed; it just tries to init the metastore 
and destroy it.
Based on the usage help, we want to support the following:
{noformat}
destroy [OPTIONS] [s3a://BUCKET]
destroy Metadata Store data (all data in S3 is preserved)

Common options:
  -meta URL - Metadata repository details (implementation-specific)

Amazon DynamoDB-specific options:
  -region REGION - Service region for connections

  URLs for Amazon DynamoDB are of the form dynamodb://TABLE_NAME.
  Specifying both the -region option and an S3A path
  is not supported.
{noformat}

So the implementation should check whether the s3a:// bucket is supplied before 
instantiating and destroying the metadata store with the configured table name, 
which could be different from what we supply on the CLI. 
I'll provide a patch for this soon.


was (Author: gabor.bota):
The issue is that org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.Destroy#run does 
not even check whether an s3a:// bucket is passed; it just tries to init the metastore 
and destroy it.
Based on the usage help, we want to support the following:
{noformat}
destroy [OPTIONS] [s3a://BUCKET]
destroy Metadata Store data (all data in S3 is preserved)

Common options:
  -meta URL - Metadata repository details (implementation-specific)

Amazon DynamoDB-specific options:
  -region REGION - Service region for connections

  URLs for Amazon DynamoDB are of the form dynamodb://TABLE_NAME.
  Specifying both the -region option and an S3A path
  is not supported.
{noformat}

So the implementation should check whether the s3a:// bucket is supplied before 
instantiating and destroying the metadata store with the configured table name 
that *could be different* from what we supply on the CLI. 
I'll provide a patch for this soon.

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3, 3.1.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14927.001.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-11-26 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16698953#comment-16698953
 ] 

Gabor Bota commented on HADOOP-14927:
-

The issue is that org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.Destroy#run does 
not even check whether an s3a:// bucket is passed; it just tries to init the metastore 
and destroy it.
Based on the usage help, we want to support the following:
{noformat}
destroy [OPTIONS] [s3a://BUCKET]
destroy Metadata Store data (all data in S3 is preserved)

Common options:
  -meta URL - Metadata repository details (implementation-specific)

Amazon DynamoDB-specific options:
  -region REGION - Service region for connections

  URLs for Amazon DynamoDB are of the form dynamodb://TABLE_NAME.
  Specifying both the -region option and an S3A path
  is not supported.
{noformat}

So the implementation should check whether the s3a:// bucket is supplied before 
instantiating and destroying the metadata store with the configured table name 
that *could be different* from what we supply on the CLI. 
I'll provide a patch for this soon.
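
To make the check concrete, here is a minimal sketch of the kind of validation I have in mind (the helper name, exception choice and message are illustrative, not the final patch):
{code:java}
// Sketch only: fail fast when no s3a:// bucket argument is supplied, before the
// metadata store for the *configured* table is ever initialized or destroyed.
static void requireS3aBucketArgument(java.util.List<String> paths) {
  if (paths.isEmpty() || !paths.get(0).startsWith("s3a://")) {
    throw new IllegalArgumentException(
        "destroy requires an s3a://BUCKET argument");
  }
}
{code}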

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3, 3.1.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14927.001.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15947) Fix ITestDynamoDBMetadataStore test error issues

2018-11-25 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15947:

Attachment: HADOOP-15947.001.patch

> Fix ITestDynamoDBMetadataStore test error issues
> 
>
> Key: HADOOP-15947
> URL: https://issues.apache.org/jira/browse/HADOOP-15947
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15947.001.patch
>
>
> When running regression hadoop-aws integration tests for HADOOP-15370, I got 
> the following errors in ITestDynamoDBMetadataStore: 
> {noformat}
> [ERROR] Tests run: 40, Failures: 4, Errors: 2, Skipped: 0, Time elapsed: 
> 177.303 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
> [ERROR] 
> testBatchWrite(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  
> Time elapsed: 1.262 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.doTestBatchWrite(ITestDynamoDBMetadataStore.java:365)
>  (...)
> [ERROR] 
> testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 11.394 s  <<< FAILURE!
> java.lang.AssertionError: expected:<20> but was:<10> (...)
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:594)
>  (...)
> [ERROR] 
> testPruneUnsetsAuthoritative(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.323 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:737)
>  (...)
> [ERROR] 
> testDeleteSubtree(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.377 s  <<< FAILURE!
> java.lang.AssertionError: Directory /ADirectory2 is null in cache (...)
> [ERROR] 
> testPutNew(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  Time 
> elapsed: 1.466 s  <<< FAILURE!
> java.lang.AssertionError: Directory /da2 is null in cache (...)
> [ERROR] 
> testDeleteSubtreeHostPath(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.378 s  <<< FAILURE!
> java.lang.AssertionError: Directory 
> s3a://cloudera-dev-gabor-ireland/ADirectory2 is null in cache (...)
> {noformat}
> I created this jira to handle and fix all of these issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15370) S3A log message on rm s3a://bucket/ not intuitive

2018-11-25 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16698141#comment-16698141
 ] 

Gabor Bota commented on HADOOP-15370:
-

I've created HADOOP-15947 and fixed the 4 issues in 
{{ITestDynamoDBMetadataStore}}, submitted a patch.

> S3A log message on rm s3a://bucket/ not intuitive
> -
>
> Key: HADOOP-15370
> URL: https://issues.apache.org/jira/browse/HADOOP-15370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15370.001.patch
>
>
> when you try to delete the root of a bucket from command line, e.g. {{hadoop 
> fs -rm -r -skipTrash s3a://hwdev-steve-new/}}, the output isn't that useful
> {code}
> 2018-04-06 16:35:23,048 [main] INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:rejectRootDirectoryDelete(1837)) - s3a delete the 
> hwdev-steve-new root directory of true
> rm: `s3a://hwdev-steve-new/': Input/output error
> 2018-04-06 16:35:23,050 [pool-2-thread-1] DEBUG s3a.S3AFileSystem
> {code}
> the single log message doesn't parse, and the error message raised is lost by 
> the FS -rm CLI command (why?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15947) Fix ITestDynamoDBMetadataStore test error issues

2018-11-25 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16698132#comment-16698132
 ] 

Gabor Bota edited comment on HADOOP-15947 at 11/25/18 11:19 AM:


Test ran against eu-west-1. Results:
* {{[ERROR] 
testDestroyNoBucket(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)}}
 - handled in HADOOP-14927
* {{[ERROR] 
testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster)}} - 
* handled in HADOOP-14556


was (Author: gabor.bota):
Test ran against eu-west-1. Results:
{{[ERROR] 
testDestroyNoBucket(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)}}
 - handled in HADOOP-14927
{{[ERROR] 
testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster) }} - 
handled in HADOOP-14556

> Fix ITestDynamoDBMetadataStore test error issues
> 
>
> Key: HADOOP-15947
> URL: https://issues.apache.org/jira/browse/HADOOP-15947
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15947.001.patch
>
>
> When running regression hadoop-aws integration tests for HADOOP-15370, I got 
> the following errors in ITestDynamoDBMetadataStore: 
> {noformat}
> [ERROR] Tests run: 40, Failures: 4, Errors: 2, Skipped: 0, Time elapsed: 
> 177.303 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
> [ERROR] 
> testBatchWrite(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  
> Time elapsed: 1.262 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.doTestBatchWrite(ITestDynamoDBMetadataStore.java:365)
>  (...)
> [ERROR] 
> testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 11.394 s  <<< FAILURE!
> java.lang.AssertionError: expected:<20> but was:<10> (...)
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:594)
>  (...)
> [ERROR] 
> testPruneUnsetsAuthoritative(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.323 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:737)
>  (...)
> [ERROR] 
> testDeleteSubtree(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.377 s  <<< FAILURE!
> java.lang.AssertionError: Directory /ADirectory2 is null in cache (...)
> [ERROR] 
> testPutNew(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  Time 
> elapsed: 1.466 s  <<< FAILURE!
> java.lang.AssertionError: Directory /da2 is null in cache (...)
> [ERROR] 
> testDeleteSubtreeHostPath(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.378 s  <<< FAILURE!
> java.lang.AssertionError: Directory 
> s3a://cloudera-dev-gabor-ireland/ADirectory2 is null in cache (...)
> {noformat}
> I created this jira to handle and fix all of these issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15947) Fix ITestDynamoDBMetadataStore test error issues

2018-11-25 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16698132#comment-16698132
 ] 

Gabor Bota edited comment on HADOOP-15947 at 11/25/18 11:19 AM:


Test ran against eu-west-1. Results:
* {{[ERROR] 
testDestroyNoBucket(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)}}
 - handled in HADOOP-14927
* {{[ERROR] 
testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster)}} - 
handled in HADOOP-14556


was (Author: gabor.bota):
Test ran against eu-west-1. Results:
* {{[ERROR] 
testDestroyNoBucket(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)}}
 - handled in HADOOP-14927
* {{[ERROR] 
testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster)}} - 
* handled in HADOOP-14556

> Fix ITestDynamoDBMetadataStore test error issues
> 
>
> Key: HADOOP-15947
> URL: https://issues.apache.org/jira/browse/HADOOP-15947
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15947.001.patch
>
>
> When running regression hadoop-aws integration tests for HADOOP-15370, I got 
> the following errors in ITestDynamoDBMetadataStore: 
> {noformat}
> [ERROR] Tests run: 40, Failures: 4, Errors: 2, Skipped: 0, Time elapsed: 
> 177.303 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
> [ERROR] 
> testBatchWrite(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  
> Time elapsed: 1.262 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.doTestBatchWrite(ITestDynamoDBMetadataStore.java:365)
>  (...)
> [ERROR] 
> testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 11.394 s  <<< FAILURE!
> java.lang.AssertionError: expected:<20> but was:<10> (...)
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:594)
>  (...)
> [ERROR] 
> testPruneUnsetsAuthoritative(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.323 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:737)
>  (...)
> [ERROR] 
> testDeleteSubtree(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.377 s  <<< FAILURE!
> java.lang.AssertionError: Directory /ADirectory2 is null in cache (...)
> [ERROR] 
> testPutNew(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  Time 
> elapsed: 1.466 s  <<< FAILURE!
> java.lang.AssertionError: Directory /da2 is null in cache (...)
> [ERROR] 
> testDeleteSubtreeHostPath(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.378 s  <<< FAILURE!
> java.lang.AssertionError: Directory 
> s3a://cloudera-dev-gabor-ireland/ADirectory2 is null in cache (...)
> {noformat}
> I created this jira to handle and fix all of these issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15947) Fix ITestDynamoDBMetadataStore test error issues

2018-11-25 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15947:

Status: Patch Available  (was: In Progress)

Test ran against eu-west-1. Results:
{{[ERROR] 
testDestroyNoBucket(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB)}}
 - handled in HADOOP-14927
{{[ERROR] 
testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster) }} - 
handled in HADOOP-14556

> Fix ITestDynamoDBMetadataStore test error issues
> 
>
> Key: HADOOP-15947
> URL: https://issues.apache.org/jira/browse/HADOOP-15947
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15947.001.patch
>
>
> When running regression hadoop-aws integration tests for HADOOP-15370, I got 
> the following errors in ITestDynamoDBMetadataStore: 
> {noformat}
> [ERROR] Tests run: 40, Failures: 4, Errors: 2, Skipped: 0, Time elapsed: 
> 177.303 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
> [ERROR] 
> testBatchWrite(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  
> Time elapsed: 1.262 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.doTestBatchWrite(ITestDynamoDBMetadataStore.java:365)
>  (...)
> [ERROR] 
> testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 11.394 s  <<< FAILURE!
> java.lang.AssertionError: expected:<20> but was:<10> (...)
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:594)
>  (...)
> [ERROR] 
> testPruneUnsetsAuthoritative(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.323 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:737)
>  (...)
> [ERROR] 
> testDeleteSubtree(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.377 s  <<< FAILURE!
> java.lang.AssertionError: Directory /ADirectory2 is null in cache (...)
> [ERROR] 
> testPutNew(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  Time 
> elapsed: 1.466 s  <<< FAILURE!
> java.lang.AssertionError: Directory /da2 is null in cache (...)
> [ERROR] 
> testDeleteSubtreeHostPath(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.378 s  <<< FAILURE!
> java.lang.AssertionError: Directory 
> s3a://cloudera-dev-gabor-ireland/ADirectory2 is null in cache (...)
> {noformat}
> I created this jira to handle and fix all of these issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()

2018-11-25 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-14927:
---

Assignee: Gabor Bota  (was: Aaron Fabbri)

> ITestS3GuardTool failures in testDestroyNoBucket()
> --
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha3, 3.1.0
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-14927.001.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify 
> -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests: 
>   
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
>   ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 
> Expected an exception, got 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15947) Fix ITestDynamoDBMetadataStore test error issues

2018-11-25 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15947 started by Gabor Bota.
---
> Fix ITestDynamoDBMetadataStore test error issues
> 
>
> Key: HADOOP-15947
> URL: https://issues.apache.org/jira/browse/HADOOP-15947
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> When running regression hadoop-aws integration tests for HADOOP-15370, I got 
> the following errors in ITestDynamoDBMetadataStore: 
> {noformat}
> [ERROR] Tests run: 40, Failures: 4, Errors: 2, Skipped: 0, Time elapsed: 
> 177.303 s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
> [ERROR] 
> testBatchWrite(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  
> Time elapsed: 1.262 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.doTestBatchWrite(ITestDynamoDBMetadataStore.java:365)
>  (...)
> [ERROR] 
> testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 11.394 s  <<< FAILURE!
> java.lang.AssertionError: expected:<20> but was:<10> (...)
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:594)
>  (...)
> [ERROR] 
> testPruneUnsetsAuthoritative(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.323 s  <<< ERROR!
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:737)
>  (...)
> [ERROR] 
> testDeleteSubtree(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.377 s  <<< FAILURE!
> java.lang.AssertionError: Directory /ADirectory2 is null in cache (...)
> [ERROR] 
> testPutNew(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  Time 
> elapsed: 1.466 s  <<< FAILURE!
> java.lang.AssertionError: Directory /da2 is null in cache (...)
> [ERROR] 
> testDeleteSubtreeHostPath(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 2.378 s  <<< FAILURE!
> java.lang.AssertionError: Directory 
> s3a://cloudera-dev-gabor-ireland/ADirectory2 is null in cache (...)
> {noformat}
> I created this jira to handle and fix all of these issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15947) Fix ITestDynamoDBMetadataStore test error issues

2018-11-25 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15947:
---

 Summary: Fix ITestDynamoDBMetadataStore test error issues
 Key: HADOOP-15947
 URL: https://issues.apache.org/jira/browse/HADOOP-15947
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.1.0
Reporter: Gabor Bota
Assignee: Gabor Bota


When running regression hadoop-aws integration tests for HADOOP-15370, I got 
the following errors in ITestDynamoDBMetadataStore: 
{noformat}
[ERROR] Tests run: 40, Failures: 4, Errors: 2, Skipped: 0, Time elapsed: 
177.303 s <<< FAILURE! - in 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore
[ERROR] 
testBatchWrite(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  
Time elapsed: 1.262 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.doTestBatchWrite(ITestDynamoDBMetadataStore.java:365)
 (...)

[ERROR] 
testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore) 
 Time elapsed: 11.394 s  <<< FAILURE!
java.lang.AssertionError: expected:<20> but was:<10> (...)
org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:594)
 (...)

[ERROR] 
testPruneUnsetsAuthoritative(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
  Time elapsed: 2.323 s  <<< ERROR!
java.lang.NullPointerException
at 
org.apache.hadoop.fs.s3a.s3guard.MetadataStoreTestBase.testPruneUnsetsAuthoritative(MetadataStoreTestBase.java:737)
 (...)

[ERROR] 
testDeleteSubtree(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)  
Time elapsed: 2.377 s  <<< FAILURE!
java.lang.AssertionError: Directory /ADirectory2 is null in cache (...)

[ERROR] testPutNew(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore) 
 Time elapsed: 1.466 s  <<< FAILURE!
java.lang.AssertionError: Directory /da2 is null in cache (...)

[ERROR] 
testDeleteSubtreeHostPath(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
  Time elapsed: 2.378 s  <<< FAILURE!
java.lang.AssertionError: Directory 
s3a://cloudera-dev-gabor-ireland/ADirectory2 is null in cache (...)
{noformat}

I created this jira to handle and fix all of these issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15843) s3guard bucket-info command to not print a stack trace on bucket-not-found

2018-11-23 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16697163#comment-16697163
 ] 

Gabor Bota commented on HADOOP-15843:
-

Thanks [~ste...@apache.org], I've run some tests as a regression and got some 
errors on dynamo. I think we should address those first. I'll create the issues 
for those.

> s3guard bucket-info command to not print a stack trace on bucket-not-found
> --
>
> Key: HADOOP-15843
> URL: https://issues.apache.org/jira/browse/HADOOP-15843
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Adam Antal
>Priority: Minor
> Attachments: HADOOP-15843-001.patch
>
>
> when you go {{hadoop s3guard bucket-info s3a://bucket-which-doesnt-exist}} 
> you get a full stack trace on the failure. This is overkill: all the caller 
> needs to know is the bucket isn't there.
> Proposed: catch FNFE and treat as special, have return code of "44", "not 
> found".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13980) S3Guard CLI: Add fsck check command

2018-11-23 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-13980:
---

Assignee: Gabor Bota

> S3Guard CLI: Add fsck check command
> ---
>
> Key: HADOOP-13980
> URL: https://issues.apache.org/jira/browse/HADOOP-13980
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
>
> As discussed in HADOOP-13650, we want to add an S3Guard CLI command which 
> compares S3 with MetadataStore, and returns a failure status if any 
> invariants are violated.
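
Just to make the intent concrete, a very rough sketch of one direction of such a check (S3 listing vs. MetadataStore only; the class name and the pass/fail policy are illustrative, not a design):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;

final class S3GuardCheckSketch {
  // Walk the raw S3 listing and report every path the MetadataStore does not know about.
  static int check(FileSystem rawS3, MetadataStore ms, Path root) throws IOException {
    int violations = 0;
    RemoteIterator<LocatedFileStatus> it = rawS3.listFiles(root, true);
    while (it.hasNext()) {
      Path p = it.next().getPath();
      if (ms.get(p) == null) {
        System.err.println("Not in MetadataStore: " + p);
        violations++;
      }
    }
    return violations == 0 ? 0 : 1;   // non-zero exit status if any invariant is violated
  }
}
{code}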



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15799) ITestS3AEmptyDirectory#testDirectoryBecomesEmpty fails when running with dynamo

2018-11-23 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16696681#comment-16696681
 ] 

Gabor Bota commented on HADOOP-15799:
-

I was not able to reproduce this issue. Closing this as Cannot Reproduce.

> ITestS3AEmptyDirectory#testDirectoryBecomesEmpty fails when running with 
> dynamo
> ---
>
> Key: HADOOP-15799
> URL: https://issues.apache.org/jira/browse/HADOOP-15799
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> I've seen a new failure when running verify for HADOOP-15621. First I thought 
> it was my new patch, but it happens on trunk. This is a major issue; it could 
> be because of an implementation issue in dynamo.
> {noformat}
> [ERROR] 
> testDirectoryBecomesEmpty(org.apache.hadoop.fs.s3a.ITestS3AEmptyDirectory) 
> Time elapsed: 1.864 s <<< FAILURE!
> java.lang.AssertionError: dir is empty expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AEmptyDirectory.assertEmptyDirectory(ITestS3AEmptyDirectory.java:56)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AEmptyDirectory.testDirectoryBecomesEmpty(ITestS3AEmptyDirectory.java:48)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15799) ITestS3AEmptyDirectory#testDirectoryBecomesEmpty fails when running with dynamo

2018-11-23 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-15799.
-
Resolution: Cannot Reproduce

> ITestS3AEmptyDirectory#testDirectoryBecomesEmpty fails when running with 
> dynamo
> ---
>
> Key: HADOOP-15799
> URL: https://issues.apache.org/jira/browse/HADOOP-15799
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> I've seen a new failure when running verify for HADOOP-15621. First I thought 
> it was my new patch, but it happens on trunk. This is a major issue; it could 
> be because of an implementation issue in dynamo.
> {noformat}
> [ERROR] 
> testDirectoryBecomesEmpty(org.apache.hadoop.fs.s3a.ITestS3AEmptyDirectory) 
> Time elapsed: 1.864 s <<< FAILURE!
> java.lang.AssertionError: dir is empty expected: but was:
> at org.junit.Assert.fail(Assert.java:88)
> at org.junit.Assert.failNotEquals(Assert.java:743)
> at org.junit.Assert.assertEquals(Assert.java:118)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AEmptyDirectory.assertEmptyDirectory(ITestS3AEmptyDirectory.java:56)
> at 
> org.apache.hadoop.fs.s3a.ITestS3AEmptyDirectory.testDirectoryBecomesEmpty(ITestS3AEmptyDirectory.java:48)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15370) S3A log message on rm s3a://bucket/ not intuitive

2018-11-23 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16696678#comment-16696678
 ] 

Gabor Bota commented on HADOOP-15370:
-

ITestS3AMiniYarnCluster#testWithMiniCluster could fail because of HADOOP-15832.
I'll check if we have created issues for these errors, and create them if we haven't 
yet.

> S3A log message on rm s3a://bucket/ not intuitive
> -
>
> Key: HADOOP-15370
> URL: https://issues.apache.org/jira/browse/HADOOP-15370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15370.001.patch
>
>
> when you try to delete the root of a bucket from command line, e.g. {{hadoop 
> fs -rm -r -skipTrash s3a://hwdev-steve-new/}}, the output isn't that useful
> {code}
> 2018-04-06 16:35:23,048 [main] INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:rejectRootDirectoryDelete(1837)) - s3a delete the 
> hwdev-steve-new root directory of true
> rm: `s3a://hwdev-steve-new/': Input/output error
> 2018-04-06 16:35:23,050 [pool-2-thread-1] DEBUG s3a.S3AFileSystem
> {code}
> the single log message doesn't parse, and the error message raised is lost by 
> the FS -rm CLI command (why?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15370) S3A log message on rm s3a://bucket/ not intuitive

2018-11-23 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16696674#comment-16696674
 ] 

Gabor Bota commented on HADOOP-15370:
-

mvn verify ran against eu-west-1. s3guard: dynamo, non-auth and auth after that.

Results are not looking good. For non-authoritative:
{noformat}
[ERROR] Failures:
[ERROR]   
ITestDynamoDBMetadataStore>MetadataStoreTestBase.testDeleteSubtree:321->MetadataStoreTestBase.deleteSubtreeHelper:346->MetadataStoreTestBase.assertEmptyDirectory:927->MetadataStoreTestBase.assertDirectorySize:882->Assert.assertNotNull:712->Assert.assertTrue:41->Assert.fail:88
 Directory /ADirectory2 is null in cache
[ERROR]   
ITestDynamoDBMetadataStore>MetadataStoreTestBase.testDeleteSubtreeHostPath:326->MetadataStoreTestBase.deleteSubtreeHelper:346->MetadataStoreTestBase.assertEmptyDirectory:927->MetadataStoreTestBase.assertDirectorySize:882->Assert.assertNotNull:712->Assert.assertTrue:41->Assert.fail:88
 Directory s3a://cloudera-dev-gabor-ireland/ADirectory2 is null in cache
[ERROR]   
ITestDynamoDBMetadataStore.testProvisionTable:594->Assert.assertEquals:631->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 expected:<20> but was:<10>
[ERROR]   
ITestDynamoDBMetadataStore>MetadataStoreTestBase.testPutNew:243->MetadataStoreTestBase.assertEmptyDirs:932->MetadataStoreTestBase.assertEmptyDirectory:927->MetadataStoreTestBase.assertDirectorySize:882->Assert.assertNotNull:712->Assert.assertTrue:41->Assert.fail:88
 Directory /da2 is null in cache
[ERROR] Errors:
[ERROR]   ITestDynamoDBMetadataStore.testBatchWrite:318->doTestBatchWrite:365 
NullPointer
[ERROR]   
ITestDynamoDBMetadataStore>MetadataStoreTestBase.testPruneUnsetsAuthoritative:737
 ? NullPointer
[ERROR]   
ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:336->AbstractS3GuardToolTestBase.run:115
 ? IllegalArgument
[ERROR]   ITestS3AMiniYarnCluster.setup:68 ? NoClassDefFound 
org/bouncycastle/jce/provid...
{noformat}

I've seen {{ITestS3AMiniYarnCluster#testWithMiniCluster}} and 
{{ITestS3GuardToolDynamoDB#testDestroyNoBucket}} fail before, but not the others.

These failures/errors are not related to the patch (obviously). I did a 
regression test with clean trunk and got the same results. 

Failures/errors with non-authoritative:
{noformat}
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 19.912 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster
[ERROR] 
testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster)  
Time elapsed: 19.697 s  <<< ERROR!
java.lang.NoClassDefFoundError: 
org/bouncycastle/jce/provider/BouncyCastleProvider
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:840)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1270)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:328)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.yarn.server.MiniYARNCluster.initResourceManager(MiniYARNCluster.java:348)
at 
org.apache.hadoop.yarn.server.MiniYARNCluster.access$200(MiniYARNCluster.java:128)
at 
org.apache.hadoop.yarn.server.MiniYARNCluster$ResourceManagerWrapper.serviceInit(MiniYARNCluster.java:497)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
at 
org.apache.hadoop.yarn.server.MiniYARNCluster.serviceInit(MiniYARNCluster.java:316)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at 
org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster.setup(ITestS3AMiniYarnCluster.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$Cal

[jira] [Issue Comment Deleted] (HADOOP-15370) S3A log message on rm s3a://bucket/ not intuitive

2018-11-23 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15370:

Comment: was deleted

(was: Maybe the ITestS3AMiniYarnCluster error occurs because of YARN-8448. I'll 
check if we have issues for these errors and create them if we don't.)

> S3A log message on rm s3a://bucket/ not intuitive
> -
>
> Key: HADOOP-15370
> URL: https://issues.apache.org/jira/browse/HADOOP-15370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15370.001.patch
>
>
> when you try to delete the root of a bucket from command line, e.g. {{hadoop 
> fs -rm -r -skipTrash s3a://hwdev-steve-new/}}, the output isn't that useful
> {code}
> 2018-04-06 16:35:23,048 [main] INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:rejectRootDirectoryDelete(1837)) - s3a delete the 
> hwdev-steve-new root directory of true
> rm: `s3a://hwdev-steve-new/': Input/output error
> 2018-04-06 16:35:23,050 [pool-2-thread-1] DEBUG s3a.S3AFileSystem
> {code}
> the single log message doesn't parse, and the error message raised is lost by 
> the FS -rm CLI command (why?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15370) S3A log message on rm s3a://bucket/ not intuitive

2018-11-23 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16696676#comment-16696676
 ] 

Gabor Bota commented on HADOOP-15370:
-

Maybe the ITestS3AMiniYarnCluster error occurs because of YARN-8448. I'll check 
if we have issues for these errors and create them if we don't.

> S3A log message on rm s3a://bucket/ not intuitive
> -
>
> Key: HADOOP-15370
> URL: https://issues.apache.org/jira/browse/HADOOP-15370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15370.001.patch
>
>
> when you try to delete the root of a bucket from command line, e.g. {{hadoop 
> fs -rm -r -skipTrash s3a://hwdev-steve-new/}}, the output isn't that useful
> {code}
> 2018-04-06 16:35:23,048 [main] INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:rejectRootDirectoryDelete(1837)) - s3a delete the 
> hwdev-steve-new root directory of true
> rm: `s3a://hwdev-steve-new/': Input/output error
> 2018-04-06 16:35:23,050 [pool-2-thread-1] DEBUG s3a.S3AFileSystem
> {code}
> the single log message doesn't parse, and the error message raised is lost by 
> the FS -rm CLI command (why?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15798) LocalMetadataStore put() does not retain isDeleted in parent listing

2018-11-22 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16696271#comment-16696271
 ] 

Gabor Bota edited comment on HADOOP-15798 at 11/22/18 9:40 PM:
---

While fixing the issue, I also cleaned up the "update cached parent dir" part of 
{{org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore#put(org.apache.hadoop.fs.s3a.s3guard.PathMetadata)}}.

mvn verify ran against eu-west-1. Two test errors:
testDestroyNoBucket(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolLocal)
testWithMiniCluster(org.apache.hadoop.fs.s3a.yarn.ITestS3AMiniYarnCluster)

testDestroyNoBucket is known, but testWithMiniCluster is not, and seems 
unrelated.


was (Author: gabor.bota):
While fixing the issue, I also cleaned up the "update cached parent dir" part of 
{{org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore#put(org.apache.hadoop.fs.s3a.s3guard.PathMetadata)}}.

> LocalMetadataStore put() does not retain isDeleted in parent listing
> 
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15798.001.patch
>
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15798) LocalMetadataStore put() does not retain isDeleted in parent listing

2018-11-22 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16696271#comment-16696271
 ] 

Gabor Bota commented on HADOOP-15798:
-

While fixing the issue, I also cleaned up the "update cached parent dir" part of 
{{org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore#put(org.apache.hadoop.fs.s3a.s3guard.PathMetadata)}}.

> LocalMetadataStore put() does not retain isDeleted in parent listing
> 
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15798.001.patch
>
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15798) LocalMetadataStore put() does not retain isDeleted in parent listing

2018-11-22 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15798:

Status: Patch Available  (was: In Progress)

> LocalMetadataStore put() does not retain isDeleted in parent listing
> 
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15798.001.patch
>
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15798) LocalMetadataStore put() does not retain isDeleted in parent listing

2018-11-22 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15798:

Summary: LocalMetadataStore put() does not retain isDeleted in parent 
listing  (was: ITestS3GuardListConsistency#testConsistentListAfterDelete 
failing with LocalMetadataStore)

> LocalMetadataStore put() does not retain isDeleted in parent listing
> 
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15798.001.patch
>
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15798) ITestS3GuardListConsistency#testConsistentListAfterDelete failing with LocalMetadataStore

2018-11-22 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15798:

Attachment: HADOOP-15798.001.patch

> ITestS3GuardListConsistency#testConsistentListAfterDelete failing with 
> LocalMetadataStore
> -
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15798.001.patch
>
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-15798) ITestS3GuardListConsistency#testConsistentListAfterDelete failing with LocalMetadataStore

2018-11-22 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15798:

Comment: was deleted

(was: The fast and dirty fix would be to add
{code:java}
if(meta.isDeleted()) {
 parentMeta.getDirListingMeta().markDeleted(path);
}
{code}
to {{LocalMetadataStore.java:281}} else branch, but I'd prefer a solution which 
is more versatile: add a new {{put}} method to {{DirListingMetadata}} with 
{{PathMetadata}} parameter type. This way we could add PathMetadata instead of 
just a FileStatus, so additional parameters like isDeleted could be set 
automatically and we could avoid problems like this. Currently running the 
tests, will report back soon.)

> ITestS3GuardListConsistency#testConsistentListAfterDelete failing with 
> LocalMetadataStore
> -
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15798) ITestS3GuardListConsistency#testConsistentListAfterDelete failing with LocalMetadataStore

2018-11-22 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16696146#comment-16696146
 ] 

Gabor Bota edited comment on HADOOP-15798 at 11/22/18 5:32 PM:
---

The fast and dirty fix would be to add
{code:java}
if(meta.isDeleted()) {
 parentMeta.getDirListingMeta().markDeleted(path);
}
{code}
to {{LocalMetadataStore.java:281}} else branch, but I'd prefer a solution which 
is more versatile: add a new {{put}} method to {{DirListingMetadata}} with 
{{PathMetadata}} parameter type. This way we could add PathMetadata instead of 
just a FileStatus, so additional parameters like isDeleted could be set 
automatically and we could avoid problems like this. Currently running the 
tests, will report back soon.


was (Author: gabor.bota):
The fast and dirty fix would be to add
{code:java}
if(meta.isDeleted()) {
 parentMeta.getDirListingMeta().markDeleted(path);
}
{code}
to {{LocalMetadataStore.java:281}} else branch, but I'd prefer a solution which 
is more versatile: add a new {{put}} method to {{DirListingMetadata}} with 
{{PathMetadata}} parameter type. This way we could add PathMetadata instead of 
just a FileStatus, so additional parameters like isDeleted could be set and we 
could avoid problems like this. Currently running the tests, will report back 
soon.

> ITestS3GuardListConsistency#testConsistentListAfterDelete failing with 
> LocalMetadataStore
> -
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15798) ITestS3GuardListConsistency#testConsistentListAfterDelete failing with LocalMetadataStore

2018-11-22 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16696146#comment-16696146
 ] 

Gabor Bota commented on HADOOP-15798:
-

The fast and dirty fix would be to add
{code:java}
if(meta.isDeleted()) {
 parentMeta.getDirListingMeta().markDeleted(path);
}
{code}
to {{LocalMetadataStore.java:281}} else branch, but I'd prefer a solution which 
is more versatile: add a new {{put}} method to {{DirListingMetadata}} with 
{{PathMetadata}} parameter type. This way we could add PathMetadata instead of 
just a FileStatus, so additional parameters like isDeleted could be set and we 
could avoid problems like this. Currently running the tests, will report back 
soon.
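
A rough sketch of what such a {{DirListingMetadata#put(PathMetadata)}} overload 
could look like (the method body and helper calls are assumptions based on the 
methods quoted in this issue, not the actual patch):
{code:java}
// Hypothetical overload (sketch only): keep tombstones visible in the listing
// instead of silently re-adding deleted children as live entries.
public void put(PathMetadata meta) {
  FileStatus status = meta.getFileStatus();
  if (meta.isDeleted()) {
    // preserve the tombstone for this child in the directory listing
    markDeleted(status.getPath());
  } else {
    // fall back to the existing FileStatus-based put
    put(status);
  }
}
{code}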

> ITestS3GuardListConsistency#testConsistentListAfterDelete failing with 
> LocalMetadataStore
> -
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15798) ITestS3GuardListConsistency#testConsistentListAfterDelete failing with LocalMetadataStore

2018-11-22 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16696050#comment-16696050
 ] 

Gabor Bota commented on HADOOP-15798:
-

It's an implementation bug in 
{{org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore#put(org.apache.hadoop.fs.s3a.s3guard.PathMetadata)}}.
{{parentDirMeta.put(status);}} and 
{{parentMeta.getDirListingMeta().put(status);}} create the entry in the 
DirListingMetadata for the child element even if it's deleted (a tombstone).
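
A minimal sketch of the call sequence in question (variable names follow the 
comment above rather than the exact source):
{code:java}
// Inside LocalMetadataStore#put(PathMetadata), roughly:
FileStatus status = meta.getFileStatus();
// Both calls insert the child into the parent's DirListingMetadata as a live
// entry regardless of meta.isDeleted(), so a tombstone previously written for
// the child is effectively lost:
parentDirMeta.put(status);
parentMeta.getDirListingMeta().put(status);
{code}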

> ITestS3GuardListConsistency#testConsistentListAfterDelete failing with 
> LocalMetadataStore
> -
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-15798) ITestS3GuardListConsistency#testConsistentListAfterDelete failing with LocalMetadataStore

2018-11-21 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15798 started by Gabor Bota.
---
> ITestS3GuardListConsistency#testConsistentListAfterDelete failing with 
> LocalMetadataStore
> -
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15798) ITestS3GuardListConsistency#testConsistentListAfterDelete failing with LocalMetadataStore

2018-11-21 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694973#comment-16694973
 ] 

Gabor Bota commented on HADOOP-15798:
-

Thanks! I think that may be a separate issue, not strongly related to this one.

> ITestS3GuardListConsistency#testConsistentListAfterDelete failing with 
> LocalMetadataStore
> -
>
> Key: HADOOP-15798
> URL: https://issues.apache.org/jira/browse/HADOOP-15798
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Minor
>
> Test fails constantly when running with LocalMetadataStore.
> {noformat}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15370) S3A log message on rm s3a://bucket/ not intuitive

2018-11-21 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15370:

Status: Patch Available  (was: Open)

> S3A log message on rm s3a://bucket/ not intuitive
> -
>
> Key: HADOOP-15370
> URL: https://issues.apache.org/jira/browse/HADOOP-15370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15370.001.patch
>
>
> when you try to delete the root of a bucket from command line, e.g. {{hadoop 
> fs -rm -r -skipTrash s3a://hwdev-steve-new/}}, the output isn't that useful
> {code}
> 2018-04-06 16:35:23,048 [main] INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:rejectRootDirectoryDelete(1837)) - s3a delete the 
> hwdev-steve-new root directory of true
> rm: `s3a://hwdev-steve-new/': Input/output error
> 2018-04-06 16:35:23,050 [pool-2-thread-1] DEBUG s3a.S3AFileSystem
> {code}
> the single log message doesn't parse, and the error message raised is lost by 
> the FS -rm CLI command (why?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15370) S3A log message on rm s3a://bucket/ not intuitive

2018-11-21 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694952#comment-16694952
 ] 

Gabor Bota edited comment on HADOOP-15370 at 11/21/18 5:07 PM:
---

Created an initial patch based on the idea that 
{{org.apache.hadoop.fs.s3a.S3AFileSystem#innerDelete}} returns what is returned 
from {{org.apache.hadoop.fs.s3a.S3AFileSystem#rejectRootDirectoryDelete}}, and 
the same value is returned to {{org.apache.hadoop.fs.s3a.S3AFileSystem#delete}}, 
which is ultimately returned to 
{{org.apache.hadoop.fs.shell.Delete.Rm#processPath}}, where we end up with the 
{{PathIOException}} that outputs the {{Input/output error}}. 
Based on that, I found that the best places to add the messages are the ones I 
used.



was (Author: gabor.bota):
Created an initial patch based on the idea that 
org.apache.hadoop.fs.s3a.S3AFileSystem#innerDelete returns what returned from 
org.apache.hadoop.fs.s3a.S3AFileSystem#rejectRootDirectoryDelete, and the same 
is returned to org.apache.hadoop.fs.s3a.S3AFileSystem#delete which ultimately 
returned to org.apache.hadoop.fs.shell.Delete.Rm#processPath and we have the 
{{PathIOException}} which outputs the Input/output error. 
Based on that, I found that the best places to add the messages are the ones I 
used.


> S3A log message on rm s3a://bucket/ not intuitive
> -
>
> Key: HADOOP-15370
> URL: https://issues.apache.org/jira/browse/HADOOP-15370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15370.001.patch
>
>
> when you try to delete the root of a bucket from command line, e.g. {{hadoop 
> fs -rm -r -skipTrash s3a://hwdev-steve-new/}}, the output isn't that useful
> {code}
> 2018-04-06 16:35:23,048 [main] INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:rejectRootDirectoryDelete(1837)) - s3a delete the 
> hwdev-steve-new root directory of true
> rm: `s3a://hwdev-steve-new/': Input/output error
> 2018-04-06 16:35:23,050 [pool-2-thread-1] DEBUG s3a.S3AFileSystem
> {code}
> the single log message doesn't parse, and the error message raised is lost by 
> the FS -rm CLI command (why?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15370) S3A log message on rm s3a://bucket/ not intuitive

2018-11-21 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16694952#comment-16694952
 ] 

Gabor Bota commented on HADOOP-15370:
-

Created an initial patch based on the idea that 
org.apache.hadoop.fs.s3a.S3AFileSystem#innerDelete returns what is returned 
from org.apache.hadoop.fs.s3a.S3AFileSystem#rejectRootDirectoryDelete, and the 
same value is returned to org.apache.hadoop.fs.s3a.S3AFileSystem#delete, which 
is ultimately returned to org.apache.hadoop.fs.shell.Delete.Rm#processPath, 
where we end up with the {{PathIOException}} that outputs the Input/output 
error. 
Based on that, I found that the best places to add the messages are the ones I 
used.
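
For context, a paraphrased sketch of why the CLI ends up printing "Input/output 
error" (this is not the exact shell source; the recursive flag and variable 
names are assumptions):
{code:java}
// The shell's rm only sees the boolean result of FileSystem#delete, so the
// more informative message logged inside S3AFileSystem never reaches the user:
boolean deleted = fs.delete(path, true /* recursive */);
if (!deleted) {
  // PathIOException maps to EIO, whose text is "Input/output error"
  throw new PathIOException(path.toString());
}
{code}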


> S3A log message on rm s3a://bucket/ not intuitive
> -
>
> Key: HADOOP-15370
> URL: https://issues.apache.org/jira/browse/HADOOP-15370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15370.001.patch
>
>
> when you try to delete the root of a bucket from command line, e.g. {{hadoop 
> fs -rm -r -skipTrash s3a://hwdev-steve-new/}}, the output isn't that useful
> {code}
> 2018-04-06 16:35:23,048 [main] INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:rejectRootDirectoryDelete(1837)) - s3a delete the 
> hwdev-steve-new root directory of true
> rm: `s3a://hwdev-steve-new/': Input/output error
> 2018-04-06 16:35:23,050 [pool-2-thread-1] DEBUG s3a.S3AFileSystem
> {code}
> the single log message doesn't parse, and the error message raised is lost by 
> the FS -rm CLI command (why?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15370) S3A log message on rm s3a://bucket/ not intuitive

2018-11-21 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15370:

Attachment: HADOOP-15370.001.patch

> S3A log message on rm s3a://bucket/ not intuitive
> -
>
> Key: HADOOP-15370
> URL: https://issues.apache.org/jira/browse/HADOOP-15370
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Trivial
> Attachments: HADOOP-15370.001.patch
>
>
> when you try to delete the root of a bucket from command line, e.g. {{hadoop 
> fs -rm -r -skipTrash s3a://hwdev-steve-new/}}, the output isn't that useful
> {code}
> 2018-04-06 16:35:23,048 [main] INFO  s3a.S3AFileSystem 
> (S3AFileSystem.java:rejectRootDirectoryDelete(1837)) - s3a delete the 
> hwdev-steve-new root directory of true
> rm: `s3a://hwdev-steve-new/': Input/output error
> 2018-04-06 16:35:23,050 [pool-2-thread-1] DEBUG s3a.S3AFileSystem
> {code}
> the single log message doesn't parse, and the error message raised is lost by 
> the FS -rm CLI command (why?)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work stopped] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-10-17 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-15819 stopped by Gabor Bota.
---
> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of another - so we should not see the tests failing like this. 
> The issue could be in how we handle org.apache.hadoop.fs.FileSystem#CACHE - 
> the tests should use the same S3AFileSystem, so if test A uses a FileSystem 
> and closes it in teardown, then test B will get the same FileSystem object 
> from the cache and try to use it, but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error should 
> occur. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193)
>   at 
> org.apache.hadoop.fs.s3a.ITestS3AClosedFS.setup(ITestS3AClosedFS.java:40)
>   at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(De
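
To illustrate the {{FileSystem}} cache behaviour described in the issue text 
above, a minimal sketch (the bucket URI is a placeholder; caching stays on 
unless {{fs.s3a.impl.disable.cache}} is set to true):
{code:java}
Configuration conf = new Configuration();
FileSystem a = FileSystem.get(new URI("s3a://example-bucket/"), conf);
FileSystem b = FileSystem.get(new URI("s3a://example-bucket/"), conf);
// a and b are the same cached instance, so when test A closes "its"
// filesystem it also closes the instance that test B is about to use:
a.close();
b.listStatus(new Path("/"));  // java.io.IOException: ... FileSystem is closed!
{code}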

[jira] [Commented] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16652265#comment-16652265
 ] 

Gabor Bota commented on HADOOP-15848:
-

You are right [~ste...@apache.org], thanks for the fix. 

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Gabor Bota
>Assignee: Ewan Higgs
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-15848.01.patch
>
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-16 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651943#comment-16651943
 ] 

Gabor Bota commented on HADOOP-15848:
-

Thanks for the patch [~ehiggs]! I think it would be better if we could skip the 
test in the pom.xml. What do you think?

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Ewan Higgs
>Priority: Major
> Attachments: HADOOP-15848.01.patch
>
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-12 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15848:

Description: 
To reproduce failure: {{mvn verify -Dscale 
-Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
-Dtest=skip}} against {{eu-west-1}}.

Test output:
{noformat}
[INFO] Running 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
[ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 59.301 
s <<< FAILURE! - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
[ERROR] 
testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
  Time elapsed: 0.75 s  <<< ERROR!
java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
inclusive, but is 0
{noformat}




  was:
To reproduce failure: {{mvn verify -Dscale 
-Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
-Dtest=skip}} against {{eu-west-1}}.

Test output:
{noformat}
[INFO] Running 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
[ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 59.301 
s <<< FAILURE! - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
[ERROR] 
testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
  Time elapsed: 0.75 s  <<< ERROR!
java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
inclusive, but is 0
{/noformat}





> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Priority: Major
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error

2018-10-12 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15848:

Summary: ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart 
test error  (was: 
ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test failure)

> ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test error
> -
>
> Key: HADOOP-15848
> URL: https://issues.apache.org/jira/browse/HADOOP-15848
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Priority: Major
>
> To reproduce failure: {{mvn verify -Dscale 
> -Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
> -Dtest=skip}} against {{eu-west-1}}.
> Test output:
> {noformat}
> [INFO] Running 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.301 s <<< FAILURE! - in 
> org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
> [ERROR] 
> testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
>   Time elapsed: 0.75 s  <<< ERROR!
> java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
> inclusive, but is 0
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15848) ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test failure

2018-10-12 Thread Gabor Bota (JIRA)
Gabor Bota created HADOOP-15848:
---

 Summary: 
ITestS3AContractMultipartUploader#testMultipartUploadEmptyPart test failure
 Key: HADOOP-15848
 URL: https://issues.apache.org/jira/browse/HADOOP-15848
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.1.1
Reporter: Gabor Bota


To reproduce failure: {{mvn verify -Dscale 
-Dit.test=org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader 
-Dtest=skip}} against {{eu-west-1}}.

Test output:
{noformat}
[INFO] Running 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
[ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 59.301 
s <<< FAILURE! - in 
org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader
[ERROR] 
testMultipartUploadEmptyPart(org.apache.hadoop.fs.contract.s3a.ITestS3AContractMultipartUploader)
  Time elapsed: 0.75 s  <<< ERROR!
java.lang.IllegalArgumentException: partNumber must be between 1 and 1 
inclusive, but is 0
{noformat}






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-10-10 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644973#comment-16644973
 ] 

Gabor Bota edited comment on HADOOP-15819 at 10/10/18 2:19 PM:
---

I have some bad news: I get these issues for tests where 
{{FS_S3A_IMPL_DISABLE_CACHE}} is set to true, so the caching _should be_ 
disabled.

What I did is I modified the 
{{org.apache.hadoop.fs.s3a.AbstractS3ATestBase#teardown}} to

{code:java}
  @Override
  public void teardown() throws Exception {
super.teardown();
boolean fsCacheDisabled = getConfiguration()
.getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false);
if(fsCacheDisabled){
  describe("closing file system");
  LOG.warn("Closing fs. FS_S3A_IMPL_DISABLE_CACHE: " + fsCacheDisabled);
  IOUtils.closeStream(getFileSystem());
}
  }
{code}

And there were still issues after this.


was (Author: gabor.bota):
I have some bad news: I get these issues for tests where 
{{FS_S3A_IMPL_DISABLE_CACHE}} set to true, so the caching _should be_ disabled.

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of another - so we should not see the tests failing like this. 
> The issue could be in how we handle org.apache.hadoop.fs.FileSystem#CACHE - 
> the tests should use the same S3AFileSystem, so if test A uses a FileSystem 
> and closes it in teardown, then test B will get the same FileSystem object 
> from the cache and try to use it, but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error should 
> occur. I'll attach this modified java file for reference. See the next 
> example of th

[jira] [Comment Edited] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run

2018-10-10 Thread Gabor Bota (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16644973#comment-16644973
 ] 

Gabor Bota edited comment on HADOOP-15819 at 10/10/18 1:47 PM:
---

I have some bad news: I get these issues for tests where 
{{FS_S3A_IMPL_DISABLE_CACHE}} is set to true, so the caching _should be_ 
disabled.


was (Author: gabor.bota):
I have some bad news: I get these issues for tests where 
{{FS_S3A_IMPL_DISABLE_CACHE}} set to true, so the caching is disabled.

> S3A integration test failures: FileSystem is closed! - without parallel test 
> run
> 
>
> Key: HADOOP-15819
> URL: https://issues.apache.org/jira/browse/HADOOP-15819
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Critical
> Attachments: S3ACloseEnforcedFileSystem.java, 
> closed_fs_closers_example_5klines.log.zip
>
>
> Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against 
> Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these 
> failures:
> {noformat}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob)
>   Time elapsed: 0.027 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob
> [ERROR] 
> testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.021 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob)
>   Time elapsed: 0.022 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 
> s <<< FAILURE! - in 
> org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest
> [ERROR] 
> testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest)
>   Time elapsed: 0.023 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob
> [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob)  
> Time elapsed: 0.039 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 
> s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory
> [ERROR] 
> testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory)  
> Time elapsed: 0.014 s  <<< ERROR!
> java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed!
> {noformat}
> The big issue is that the tests are running in a serial manner - no test is 
> running on top of another - so we should not see the tests failing like this. 
> The issue could be in how we handle org.apache.hadoop.fs.FileSystem#CACHE - 
> the tests should use the same S3AFileSystem, so if test A uses a FileSystem 
> and closes it in teardown, then test B will get the same FileSystem object 
> from the cache and try to use it, but it is closed.
> We see this a lot in our downstream testing too. It's not possible to tell 
> whether a failed regression test result is an implementation issue in the 
> runtime code or a test implementation problem. 
> I've checked when and what closes the S3AFileSystem with a slightly modified 
> version of S3AFileSystem which logs the closers of the fs if an error should 
> occur. I'll attach this modified java file for reference. See the next 
> example of the result when it's running:
> {noformat}
> 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem 
> (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): 
> java.lang.RuntimeException: Using closed FS!.
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73)
>   at 
> org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474)
>   at 
> org.apache.hadoop.fs.contract.AbstractFSContractTestBase
