[jira] [Commented] (HDFS-14651) DeadNodeDetector checks dead node periodically

2019-11-20 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979043#comment-16979043
 ] 

Yiqun Lin commented on HDFS-14651:
--

The latest patch looks almost good now; only some minor comments:
{quote}Should we make CONNECTION_TIMEOUT_MS configurable?
{quote}
Yes, you can add this if it's really needed. I suggest using the following setting 
name/description:
 dfs.client.deadnode.detection.probe.connection.timeout.ms: Connection timeout 
for probing a dead node, in milliseconds.
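For illustration, the client side could read such a setting roughly like this (a sketch only; the constant names and the 20-second default are my assumption, not part of the patch):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class DeadNodeProbeConf {
  // Hypothetical constant names and default value; only the key string
  // follows the suggestion above.
  static final String PROBE_CONN_TIMEOUT_KEY =
      "dfs.client.deadnode.detection.probe.connection.timeout.ms";
  static final long PROBE_CONN_TIMEOUT_DEFAULT = 20000L;

  public static long probeConnectionTimeoutMs(Configuration conf) {
    return conf.getLong(PROBE_CONN_TIMEOUT_KEY, PROBE_CONN_TIMEOUT_DEFAULT);
  }
}
{code}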
{code:java}
+Future<DatanodeLocalInfo> future = rpcThreadPool.submit(new Callable<DatanodeLocalInfo>() 
{
+  @Override
+  public DatanodeLocalInfo call() throws Exception {
+return proxy.getDatanodeInfo();
+  }
+
+});   < format this line
{code}
Can you format the above line?

Can we update '{{Remove the node out from dead}}' to '{{Remove the node out 
from dead node list}}'?

*TestDeadNodeDetection.java*
 Revisiting this unit test: we need an additional test for the dead node queue max 
limit. We can set 1 as the max queue limit and then verify it in the test (see the 
sketch below). Would you mind adding this additional test, [~leosun08]? Also, I 
actually saw some duplicated lines in this unit test; we can do a simple refactor in 
a follow-up task.
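The max-limit behavior can be pictured with a bounded queue (a sketch only; the real test would go through the DeadNodeDetector configuration rather than a raw queue):
{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueLimitDemo {
  public static void main(String[] args) {
    // Hypothetical stand-in for the dead node queue with max size 1.
    BlockingQueue<String> deadNodeQueue = new LinkedBlockingQueue<>(1);
    System.out.println(deadNodeQueue.offer("dn1"));  // true: accepted
    System.out.println(deadNodeQueue.offer("dn2"));  // false: over the limit
  }
}
{code}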

Everything else looks good to me.

 

> DeadNodeDetector checks dead node periodically
> --
>
> Key: HDFS-14651
> URL: https://issues.apache.org/jira/browse/HDFS-14651
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14651.001.patch, HDFS-14651.002.patch, 
> HDFS-14651.003.patch, HDFS-14651.004.patch, HDFS-14651.005.patch, 
> HDFS-14651.006.patch
>
>







[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979036#comment-16979036
 ] 

Hadoop QA commented on HDFS-14986:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
18m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 34s{color} | {color:orange} root: The patch generated 3 new + 117 unchanged 
- 0 fixed = 120 total (was 117) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 30s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}106m 55s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
56s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}224m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestFixKerberosTicketOrder |
|   | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.hdfs.TestMultipleNNPortQOP |
|   | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14986 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986378/HDFS-14986.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7a09c1a83624 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality

[jira] [Commented] (HDFS-14996) RBF: GetFileStatus fails for directory with EC policy set in case of multiple destinations

2019-11-20 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979033#comment-16979033
 ] 

Hudson commented on HDFS-14996:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17681 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17681/])
HDFS-14996. RBF: GetFileStatus fails for directory with EC policy set in 
(ayushsaxena: rev 98d249dcdabb664ca82083a323afb1a8ed13c062)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRPCMultipleDestinationMountTableResolver.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederationUtil.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java


> RBF: GetFileStatus fails for directory with EC policy set in case of multiple 
> destinations 
> ---
>
> Key: HDFS-14996
> URL: https://issues.apache.org/jira/browse/HDFS-14996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, rbf
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14996-01.patch, HDFS-14996-02.patch, 
> HDFS-14996-03.patch
>
>
> In case of multiple destinations for one mount following a PathAll-type order,
> getting FileStatus fails if the directory has an EC policy set on it.






[jira] [Commented] (HDFS-14820) The default 8KB buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream is too big

2019-11-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979030#comment-16979030
 ] 

Ayush Saxena commented on HDFS-14820:
-

Isn't it already configurable? Whoever wants to can change the value, isn't it?
It seems [~elgoiri] also had concerns about changing the default value like this.
Isn't changing the default an incompatible change?
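For reference, the alternative to changing the default is passing an explicit size at the call site, roughly like this (a sketch; the 512-byte value is hypothetical, and the stream here is a placeholder for peer.getOutputStream()):
{code:java}
import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class SmallWriteBuffer {
  public static void main(String[] args) throws IOException {
    OutputStream raw = new ByteArrayOutputStream(); // stands in for peer.getOutputStream()
    int bufSize = 512;                              // hypothetical, e.g. read from config
    DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(raw, bufSize));    // instead of the 8192-byte default
    out.writeUTF("readBlock request header");       // a small request easily fits
    out.flush();
  }
}
{code}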

>  The default 8KB buffer of 
> BlockReaderRemote#newBlockReader#BufferedOutputStream is too big
> ---
>
> Key: HDFS-14820
> URL: https://issues.apache.org/jira/browse/HDFS-14820
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14820.001.patch, HDFS-14820.002.patch
>
>
> This issue is similar to HDFS-14535.
> {code:java}
> public static BlockReader newBlockReader(String file,
> ExtendedBlock block,
> Token<BlockTokenIdentifier> blockToken,
> long startOffset, long len,
> boolean verifyChecksum,
> String clientName,
> Peer peer, DatanodeID datanodeID,
> PeerCache peerCache,
> CachingStrategy cachingStrategy,
> int networkDistance) throws IOException {
>   // in and out will be closed when sock is closed (by the caller)
>   final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
>   peer.getOutputStream()));
>   new Sender(out).readBlock(block, blockToken, clientName, startOffset, len,
>   verifyChecksum, cachingStrategy);
> }
> public BufferedOutputStream(OutputStream out) {
> this(out, 8192);
> }
> {code}
> The Sender#readBlock parameters (block, blockToken, clientName, startOffset, len, 
> verifyChecksum, cachingStrategy) do not need such a big buffer,
> so I think we should reduce the BufferedOutputStream buffer size.






[jira] [Commented] (HDFS-14651) DeadNodeDetector checks dead node periodically

2019-11-20 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979026#comment-16979026
 ] 

Lisheng Sun commented on HDFS-14651:


[~linyiqun]

I updated the patch and uploaded the v006 patch.
 Should we make CONNECTION_TIMEOUT_MS configurable? Every application may have 
different timeout requirements.
{code:java}
try {
  future.get(CONNECTION_TIMEOUT_MS, TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
  LOG.error("Probe failed, datanode: {}, type: {}.", datanodeInfo, type, e);
  deadNodeDetector.probeCallBack(this, false);
  return;
} finally {
  future.cancel(true);
}
{code}
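As a self-contained illustration of the probe-with-timeout pattern above (a sketch; the real patch submits proxy.getDatanodeInfo() to the pool instead of this placeholder task):
{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ProbeTimeoutDemo {
  public static void main(String[] args) throws Exception {
    ExecutorService rpcThreadPool = Executors.newSingleThreadExecutor();
    Future<String> future = rpcThreadPool.submit(() -> {
      Thread.sleep(5000);  // stands in for a slow proxy.getDatanodeInfo()
      return "datanode info";
    });
    try {
      System.out.println(future.get(1000, TimeUnit.MILLISECONDS));
    } catch (TimeoutException e) {
      System.out.println("Probe timed out; report the probe as failed");
    } finally {
      future.cancel(true);  // interrupt the hung probe, as the snippet above does
      rpcThreadPool.shutdown();
    }
  }
}
{code}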

> DeadNodeDetector checks dead node periodically
> --
>
> Key: HDFS-14651
> URL: https://issues.apache.org/jira/browse/HDFS-14651
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14651.001.patch, HDFS-14651.002.patch, 
> HDFS-14651.003.patch, HDFS-14651.004.patch, HDFS-14651.005.patch, 
> HDFS-14651.006.patch
>
>







[jira] [Commented] (HDFS-14820) The default 8KB buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream is too big

2019-11-20 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979024#comment-16979024
 ] 

Lisheng Sun commented on HDFS-14820:


Thanks [~weichiu] for your reminder.

I updated the patch and uploaded the v002 patch. Could you help review it? 
Thank you.

>  The default 8KB buffer of 
> BlockReaderRemote#newBlockReader#BufferedOutputStream is too big
> ---
>
> Key: HDFS-14820
> URL: https://issues.apache.org/jira/browse/HDFS-14820
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14820.001.patch, HDFS-14820.002.patch
>
>
> This issue is similar to HDFS-14535.
> {code:java}
> public static BlockReader newBlockReader(String file,
> ExtendedBlock block,
> Token<BlockTokenIdentifier> blockToken,
> long startOffset, long len,
> boolean verifyChecksum,
> String clientName,
> Peer peer, DatanodeID datanodeID,
> PeerCache peerCache,
> CachingStrategy cachingStrategy,
> int networkDistance) throws IOException {
>   // in and out will be closed when sock is closed (by the caller)
>   final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
>   peer.getOutputStream()));
>   new Sender(out).readBlock(block, blockToken, clientName, startOffset, len,
>   verifyChecksum, cachingStrategy);
> }
> public BufferedOutputStream(OutputStream out) {
> this(out, 8192);
> }
> {code}
> The Sender#readBlock parameters (block, blockToken, clientName, startOffset, len, 
> verifyChecksum, cachingStrategy) do not need such a big buffer,
> so I think we should reduce the BufferedOutputStream buffer size.






[jira] [Commented] (HDFS-14996) RBF: GetFileStatus fails for directory with EC policy set in case of multiple destinations

2019-11-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979023#comment-16979023
 ] 

Ayush Saxena commented on HDFS-14996:
-

Committed to trunk.
Thanx [~elgoiri] for the review!!!

> RBF: GetFileStatus fails for directory with EC policy set in case of multiple 
> destinations 
> ---
>
> Key: HDFS-14996
> URL: https://issues.apache.org/jira/browse/HDFS-14996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, rbf
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14996-01.patch, HDFS-14996-02.patch, 
> HDFS-14996-03.patch
>
>
> In case of multiple destinations for one mount following a PathAll-type order,
> getting FileStatus fails if the directory has an EC policy set on it.






[jira] [Updated] (HDFS-14996) RBF: GetFileStatus fails for directory with EC policy set in case of multiple destinations

2019-11-20 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14996:

Fix Version/s: 3.3.0

> RBF: GetFileStatus fails for directory with EC policy set in case of multiple 
> destinations 
> ---
>
> Key: HDFS-14996
> URL: https://issues.apache.org/jira/browse/HDFS-14996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, rbf
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14996-01.patch, HDFS-14996-02.patch, 
> HDFS-14996-03.patch
>
>
> In case of multiple destinations for one mount following a PathAll-type order,
> getting FileStatus fails if the directory has an EC policy set on it.






[jira] [Updated] (HDFS-14996) RBF: GetFileStatus fails for directory with EC policy set in case of multiple destinations

2019-11-20 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14996:

Hadoop Flags: Reviewed
  Resolution: Fixed
  Status: Resolved  (was: Patch Available)

> RBF: GetFileStatus fails for directory with EC policy set in case of multiple 
> destinations 
> ---
>
> Key: HDFS-14996
> URL: https://issues.apache.org/jira/browse/HDFS-14996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, rbf
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14996-01.patch, HDFS-14996-02.patch, 
> HDFS-14996-03.patch
>
>
> In case of multiple destinations for one mount following a PathAll-type order,
> getting FileStatus fails if the directory has an EC policy set on it.






[jira] [Updated] (HDFS-14820) The default 8KB buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream is too big

2019-11-20 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14820:
---
Attachment: HDFS-14820.002.patch

>  The default 8KB buffer of 
> BlockReaderRemote#newBlockReader#BufferedOutputStream is too big
> ---
>
> Key: HDFS-14820
> URL: https://issues.apache.org/jira/browse/HDFS-14820
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14820.001.patch, HDFS-14820.002.patch
>
>
> This issue is similar to HDFS-14535.
> {code:java}
> public static BlockReader newBlockReader(String file,
> ExtendedBlock block,
> Token<BlockTokenIdentifier> blockToken,
> long startOffset, long len,
> boolean verifyChecksum,
> String clientName,
> Peer peer, DatanodeID datanodeID,
> PeerCache peerCache,
> CachingStrategy cachingStrategy,
> int networkDistance) throws IOException {
>   // in and out will be closed when sock is closed (by the caller)
>   final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
>   peer.getOutputStream()));
>   new Sender(out).readBlock(block, blockToken, clientName, startOffset, len,
>   verifyChecksum, cachingStrategy);
> }
> public BufferedOutputStream(OutputStream out) {
> this(out, 8192);
> }
> {code}
> The Sender#readBlock parameters (block, blockToken, clientName, startOffset, len, 
> verifyChecksum, cachingStrategy) do not need such a big buffer,
> so I think we should reduce the BufferedOutputStream buffer size.






[jira] [Updated] (HDFS-14651) DeadNodeDetector checks dead node periodically

2019-11-20 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14651:
---
Attachment: HDFS-14651.006.patch

> DeadNodeDetector checks dead node periodically
> --
>
> Key: HDFS-14651
> URL: https://issues.apache.org/jira/browse/HDFS-14651
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14651.001.patch, HDFS-14651.002.patch, 
> HDFS-14651.003.patch, HDFS-14651.004.patch, HDFS-14651.005.patch, 
> HDFS-14651.006.patch
>
>







[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979016#comment-16979016
 ] 

Ayush Saxena commented on HDFS-14998:
-

Thanx [~ferhui] for the patch. Let's wait for a couple of days to see if we can 
get some more feedback.

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch
>
>
> After HDFS-14130, we should update the observer namenode doc: the observer 
> namenode can now run with ZKFC running.






[jira] [Created] (HDFS-15000) Improve FsDatasetImpl to avoid IO operation in datasetLock

2019-11-20 Thread Xiaoqiao He (Jira)
Xiaoqiao He created HDFS-15000:
--

 Summary: Improve FsDatasetImpl to avoid IO operation in datasetLock
 Key: HDFS-15000
 URL: https://issues.apache.org/jira/browse/HDFS-15000
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Xiaoqiao He
Assignee: Aiphago


As HDFS-14997 mentioned, some methods in FsDatasetImpl, such as #finalizeBlock, 
#finalizeReplica, and #createRbw, include IO operations inside the datasetLock. This 
can block other logic when IO load is very high. We should reduce the lock 
granularity or move the IO operations out of the datasetLock.
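A minimal sketch of the second option, under the assumption that the in-memory update can be separated from the disk operation (not the actual FsDatasetImpl code):
{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockGranularityDemo {
  private final Lock datasetLock = new ReentrantLock();
  private final Map<String, String> replicaMap = new HashMap<>();

  void finalizeReplica(String blockId) throws InterruptedException {
    // The expensive IO (e.g. renaming the block file) happens outside the lock.
    Thread.sleep(100);  // stands in for the disk operation
    datasetLock.lock();
    try {
      replicaMap.put(blockId, "FINALIZED");  // only the in-memory update is locked
    } finally {
      datasetLock.unlock();
    }
  }

  public static void main(String[] args) throws InterruptedException {
    new LockGranularityDemo().finalizeReplica("blk_1001");
  }
}
{code}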






[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-20 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979006#comment-16979006
 ] 

Yiqun Lin commented on HDFS-14986:
--

Hi [~Aiphag0],
{code:java}
+  // Prevent ReplicaCachingGetSpaceUsed dead lock at FsDatasetImpl#deepCopyReplica
+  if (this.getClass().getSimpleName().equals("ReplicaCachingGetSpaceUsed")) {
+    initRefeshThread(true);
+    return;
+  }
{code}
Checking for the subclass in the parent class is a little tricky; we can add a new 
flag and reset it in the subclass. I found a way to remove runImmediately for 
RefreshThread and add {{shouldInitRefresh = true}} in CachingGetSpaceUsed. By 
default, we will do the initial refresh.
{code:java}
if (used.get() < 0) {
  used.set(0);
  if (shouldInitRefresh) {
    refresh();
  }
}
{code}
And we need to define a protected method to override this only in 
FSCachingGetSpaceUsed.
{code:java}
  /**
   * Reset whether we need to do the initial refresh.
   * @param shouldInitRefresh The flag value to set.
   */
  protected void setShouldInitRefresh(boolean shouldInitRefresh) {
    this.shouldInitRefresh = shouldInitRefresh;
  }

  public FSCachingGetSpaceUsed(Builder builder) throws IOException {
    super(builder);
    this.setShouldInitRefresh(false);
  }
{code}
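Putting the pieces together, the intended flow would look roughly like this (a compilable sketch of the suggestion above, not the final patch):
{code:java}
public abstract class CachingGetSpaceUsedSketch {
  private volatile boolean shouldInitRefresh = true;  // default: do the initial refresh
  private long used = -1;

  /** Reset to false by the FSCachingGetSpaceUsed constructor. */
  protected void setShouldInitRefresh(boolean shouldInitRefresh) {
    this.shouldInitRefresh = shouldInitRefresh;
  }

  protected void init() {
    if (used < 0) {
      used = 0;
      if (shouldInitRefresh) {
        refresh();  // skipped for FS-caching subclasses, avoiding the deadlock
      }
    }
  }

  protected abstract void refresh();
}
{code}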
 

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get used space from ReplicaInfo in memory. However, the new du 
> threads throw the exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (HDFS-14924) RenameSnapshot not updating new modification time

2019-11-20 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978994#comment-16978994
 ] 

hemanthboyina commented on HDFS-14924:
--

Updated the patch with the editsStored XML and binary files. Please review.

> RenameSnapshot not updating new modification time
> -
>
> Key: HDFS-14924
> URL: https://issues.apache.org/jira/browse/HDFS-14924
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14924.001.patch, HDFS-14924.002.patch, 
> HDFS-14924.003.patch, HDFS-14924.004.patch
>
>
> RenameSnapshot doesn't update the modification time






[jira] [Updated] (HDFS-14924) RenameSnapshot not updating new modification time

2019-11-20 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14924:
-
Attachment: HDFS-14924.004.patch

> RenameSnapshot not updating new modification time
> -
>
> Key: HDFS-14924
> URL: https://issues.apache.org/jira/browse/HDFS-14924
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14924.001.patch, HDFS-14924.002.patch, 
> HDFS-14924.003.patch, HDFS-14924.004.patch
>
>
> RenameSnapshot doesn't update the modification time






[jira] [Work started] (HDDS-2385) As admin, volume list command should list all volumes not just admin user owned volumes

2019-11-20 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2385 started by YiSheng Lien.
--
> As admin, volume list command should list all volumes not just admin user 
> owned volumes
> ---
>
> Key: HDDS-2385
> URL: https://issues.apache.org/jira/browse/HDDS-2385
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone CLI
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: YiSheng Lien
>Priority: Major
>
> The command `ozone sh volume ls` lists only the volumes that are owned by the 
> user.
>  
> Expected behavior: The command should list all the volumes in the system if 
> the user is an ozone administrator. 
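A sketch of the expected check (method and parameter names here are hypothetical, not the real Ozone API):
{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class VolumeListSketch {
  // Hypothetical listing logic: admins see every volume, others only their own.
  static List<String> listVolumes(String user, boolean isAdmin,
      Map<String, String> volumeOwners) {
    List<String> result = new ArrayList<>();
    for (Map.Entry<String, String> e : volumeOwners.entrySet()) {
      if (isAdmin || e.getValue().equals(user)) {
        result.add(e.getKey());
      }
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, String> owners = Map.of("vol1", "alice", "vol2", "bob");
    System.out.println(listVolumes("alice", false, owners));  // only vol1
    System.out.println(listVolumes("alice", true, owners));   // both volumes
  }
}
{code}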






[jira] [Commented] (HDFS-14820) The default 8KB buffer of BlockReaderRemote#newBlockReader#BufferedOutputStream is too big

2019-11-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978978#comment-16978978
 ] 

Wei-Chiu Chuang commented on HDFS-14820:


This one looks useful. [~leosun08] are you planning to update the patch?

>  The default 8KB buffer of 
> BlockReaderRemote#newBlockReader#BufferedOutputStream is too big
> ---
>
> Key: HDFS-14820
> URL: https://issues.apache.org/jira/browse/HDFS-14820
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14820.001.patch
>
>
> This issue is similar to HDFS-14535.
> {code:java}
> public static BlockReader newBlockReader(String file,
> ExtendedBlock block,
> Token<BlockTokenIdentifier> blockToken,
> long startOffset, long len,
> boolean verifyChecksum,
> String clientName,
> Peer peer, DatanodeID datanodeID,
> PeerCache peerCache,
> CachingStrategy cachingStrategy,
> int networkDistance) throws IOException {
>   // in and out will be closed when sock is closed (by the caller)
>   final DataOutputStream out = new DataOutputStream(new BufferedOutputStream(
>   peer.getOutputStream()));
>   new Sender(out).readBlock(block, blockToken, clientName, startOffset, len,
>   verifyChecksum, cachingStrategy);
> }
> public BufferedOutputStream(OutputStream out) {
> this(out, 8192);
> }
> {code}
> The Sender#readBlock parameters (block, blockToken, clientName, startOffset, len, 
> verifyChecksum, cachingStrategy) do not need such a big buffer,
> so I think we should reduce the BufferedOutputStream buffer size.






[jira] [Work logged] (HDDS-2394) Ozone S3 Gateway allows bucket name with underscore to be created but throws an error during put key operation

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2394?focusedWorklogId=347159&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347159
 ]

ASF GitHub Bot logged work on HDDS-2394:


Author: ASF GitHub Bot
Created on: 21/Nov/19 04:52
Start Date: 21/Nov/19 04:52
Worklog Time Spent: 10m 
  Work Description: vivekratnavel commented on pull request #243: 
HDDS-2394. Ozone S3 Gateway allows bucket name with underscore to be created
URL: https://github.com/apache/hadoop-ozone/pull/243
 
 
   ## What changes were proposed in this pull request?
   
   The patch adds verification of the bucket name when a bucket create request is 
handled from the S3 API.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2394
   
   ## How was this patch tested?
   
   The patch was tested by trying to create a bucket with an invalid bucket 
name in the ozones3 docker compose env. Also, a robot testcase was added to catch 
a create bucket request with an invalid bucket name. I verified that this newly 
added test passes.
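   The validation being added can be pictured like this (a sketch; Ozone's actual 
rule differs in details such as length limits):
{code:java}
import java.util.regex.Pattern;

public class BucketNameCheck {
  // Rough S3/DNS-style bucket-name rule: lowercase alphanumerics, '.', '-'.
  private static final Pattern VALID =
      Pattern.compile("^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$");

  public static boolean isValid(String name) {
    return VALID.matcher(name).matches();
  }

  public static void main(String[] args) {
    System.out.println(isValid("ozone_test"));  // false: '_' is not allowed
    System.out.println(isValid("ozone-test"));  // true
  }
}
{code}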
   
 



Issue Time Tracking
---

Worklog Id: (was: 347159)
Remaining Estimate: 0h
Time Spent: 10m

> Ozone S3 Gateway allows bucket name with underscore to be created but throws 
> an error during put key operation
> --
>
> Key: HDDS-2394
> URL: https://issues.apache.org/jira/browse/HDDS-2394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
> aws s3api --endpoint http://localhost:9878 create-bucket --bucket ozone_test
> aws s3api --endpoint http://localhost:9878 put-object --bucket ozone_test 
> --key ozone-site.xml --body /etc/hadoop/conf/ozone-site.xml
> S3 gateway throws a warning:
> {code:java}
> javax.servlet.ServletException: javax.servlet.ServletException: 
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : _
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:139)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:539)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: javax.servlet.ServletException: 
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : _
>   at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>   at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1628)
>   at 
> org.ec

[jira] [Updated] (HDDS-2394) Ozone S3 Gateway allows bucket name with underscore to be created but throws an error during put key operation

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2394:
-
Labels: pull-request-available  (was: )

> Ozone S3 Gateway allows bucket name with underscore to be created but throws 
> an error during put key operation
> --
>
> Key: HDDS-2394
> URL: https://issues.apache.org/jira/browse/HDDS-2394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>  Labels: pull-request-available
>
> Steps to reproduce:
> aws s3api --endpoint http://localhost:9878 create-bucket --bucket ozone_test
> aws s3api --endpoint http://localhost:9878 put-object --bucket ozone_test 
> --key ozone-site.xml --body /etc/hadoop/conf/ozone-site.xml
> S3 gateway throws a warning:
> {code:java}
> javax.servlet.ServletException: javax.servlet.ServletException: 
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : _
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:139)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:539)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: javax.servlet.ServletException: 
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : _
>   at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>   at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1628)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   ... 13 more
> {code}




[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-20 Thread Ryan Wu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978976#comment-16978976
 ] 

Ryan Wu commented on HDFS-14986:


Hi Aiphago, thanks for providing the patch; it fixes this for me too. I will backport it.

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get used space from ReplicaInfo in memory. However, the new du 
> threads throw the exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (HDFS-14993) checkDiskError doesn't work during datanode startup

2019-11-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978973#comment-16978973
 ] 

Wei-Chiu Chuang commented on HDFS-14993:


[~sodonnell] please take a look. Perhaps the logic was changed after HDFS-14333.

> checkDiskError doesn't work during datanode startup
> ---
>
> Key: HDFS-14993
> URL: https://issues.apache.org/jira/browse/HDFS-14993
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Yang Yun
>Assignee: Yang Yun
>Priority: Major
> Attachments: HDFS-14993.patch, HDFS-14993.patch
>
>
> the function checkDiskError() is called before addBlockPool, but the list 
> bpSlices is empty at this time, so the function check() in FsVolumeImpl.java 
> does nothing:
> @Override
> public VolumeCheckResult check(VolumeCheckContext ignored)
>     throws DiskErrorException {
>   // TODO:FEDERATION valid synchronization
>   for (BlockPoolSlice s : bpSlices.values()) {
>     s.checkDirs();
>   }
>   return VolumeCheckResult.HEALTHY;
> }






[jira] [Created] (HDDS-2600) Move chaos test to org.apache.hadoop.ozone.chaos package

2019-11-20 Thread Mukul Kumar Singh (Jira)
Mukul Kumar Singh created HDDS-2600:
---

 Summary: Move chaos test to org.apache.hadoop.ozone.chaos package
 Key: HDDS-2600
 URL: https://issues.apache.org/jira/browse/HDDS-2600
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Mukul Kumar Singh


This is a simple refactoring change where all the chaos tests are moved to the 
org.apache.hadoop.ozone.chaos package.






[jira] [Updated] (HDDS-2394) Ozone S3 Gateway allows bucket name with underscore to be created but throws an error during put key operation

2019-11-20 Thread Vivek Ratnavel Subramanian (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vivek Ratnavel Subramanian updated HDDS-2394:
-
Summary: Ozone S3 Gateway allows bucket name with underscore to be created 
but throws an error during put key operation  (was: Ozone allows bucket name 
with underscore to be created but throws an error during put key operation)

> Ozone S3 Gateway allows bucket name with underscore to be created but throws 
> an error during put key operation
> --
>
> Key: HDDS-2394
> URL: https://issues.apache.org/jira/browse/HDDS-2394
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
>Reporter: Vivek Ratnavel Subramanian
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
>
> Steps to reproduce:
> aws s3api --endpoint http://localhost:9878 create-bucket --bucket ozone_test
> aws s3api --endpoint http://localhost:9878 put-object --bucket ozone_test 
> --key ozone-site.xml --body /etc/hadoop/conf/ozone-site.xml
> S3 gateway throws a warning:
> {code:java}
> javax.servlet.ServletException: javax.servlet.ServletException: 
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : _
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:139)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:539)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: javax.servlet.ServletException: 
> java.lang.IllegalArgumentException: Bucket or Volume name has an unsupported 
> character : _
>   at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>   at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>   at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>   at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>   at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1628)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   ... 13 more
> {code}




[jira] [Resolved] (HDDS-2536) Add ozone.om.internal.service.id to OM HA configuration

2019-11-20 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2536.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Add ozone.om.internal.service.id to OM HA configuration
> ---
>
> Key: HDDS-2536
> URL: https://issues.apache.org/jira/browse/HDDS-2536
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is to add ozone.om.internal.serviceid to let OM know it belongs to 
> a particular service.
>  
> Right now we have ozone.om.service.ids -> where we can define all service ids 
> in a cluster. (This can happen if the same config is shared across the cluster.)






[jira] [Work logged] (HDDS-2536) Add ozone.om.internal.service.id to OM HA configuration

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2536?focusedWorklogId=347152&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347152
 ]

ASF GitHub Bot logged work on HDDS-2536:


Author: ASF GitHub Bot
Created on: 21/Nov/19 04:19
Start Date: 21/Nov/19 04:19
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #218: 
HDDS-2536. Add ozone.om.internal.service.id to OM HA configuration.
URL: https://github.com/apache/hadoop-ozone/pull/218
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 347152)
Time Spent: 20m  (was: 10m)

> Add ozone.om.internal.service.id to OM HA configuration
> ---
>
> Key: HDDS-2536
> URL: https://issues.apache.org/jira/browse/HDDS-2536
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This Jira is to add ozone.om.internal.serviceid to let OM know it belongs to 
> a particular service.
>  
> Right now we have ozone.om.service.ids -> where we can define all service ids 
> in a cluster. (This can happen if the same config is shared across the cluster.)






[jira] [Commented] (HDFS-14563) Enhance interface about recommissioning/decommissioning

2019-11-20 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978954#comment-16978954
 ] 

Xiaoqiao He commented on HDFS-14563:


Thanks [~weichiu] for picking up this JIRA, and sorry for not updating it for a long 
time. Please feel free to commit the other PRs. I will rebase this one and re-upload 
when I have time. Thanks again.

> Enhance interface about recommissioning/decommissioning
> ---
>
> Key: HDFS-14563
> URL: https://issues.apache.org/jira/browse/HDFS-14563
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
>  Labels: decommission
> Attachments: HDFS-14563.001.patch, HDFS-14563.002.patch, mt_mode-2.txt
>
>
> In the current implementation, if we need to decommission or recommission 
> one datanode, the only way is to add the datanode to the include or exclude file 
> under the namenode configuration path and then execute the command `bin/hadoop 
> dfsadmin -refreshNodes`, which triggers the namenode to reload include/exclude and 
> start recommissioning or decommissioning the datanode.
> The shortcomings of this approach are:
> a. the namenode reloads the include/exclude configuration files from devices; if 
> I/O load is high, the handler may be blocked.
> b. the namenode has to process every datanode in the include and exclude 
> configurations; if there are many datanodes pending processing (very common for a 
> large cluster), the namenode can hang for hundreds of seconds in the worst case 
> waiting for recommission/decommission to finish, since it holds the write lock.
> I think we should expose a lightweight interface to support recommissioning 
> or decommissioning a single datanode, so we can operate on datanodes using 
> dfsadmin more smoothly.






[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-20 Thread Aiphago (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978951#comment-16978951
 ] 

Aiphago commented on HDFS-14986:


Hi [~jianliang.wu], I just assigned this Jira to myself; please feel free to 
assign it back if you would also like to work on this.

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get used space from ReplicaInfo in memory. However, the new du 
> threads throw the exception:
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Assigned] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-20 Thread Aiphago (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aiphago reassigned HDFS-14986:
--

Assignee: Aiphago  (was: Ryan Wu)

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Aiphago
>Priority: Major
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the exception
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-20 Thread Aiphago (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978950#comment-16978950
 ] 

Aiphago commented on HDFS-14986:


Hi [~linyiqun], thank you for your valuable advice. In the previous patch I 
modified CachingGetSpaceUsed#init(), but that influenced subclasses of 
CachingGetSpaceUsed such as DU. So I added a filter, and now the related unit 
tests can pass. [^HDFS-14986.003.patch]
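
For readers following along, a minimal sketch of the general idea behind this 
kind of fix: take the deep copy while holding the same lock that guards 
replica-map mutations, so the iterator can never observe a concurrent change 
(illustrative names, not the actual patch):
{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

public class ReplicaSnapshotSketch<R> {
  private final Object datasetLock = new Object();
  // Stands in for the FoldedTreeSet backing a volume's replica map.
  private final Set<R> replicas = new HashSet<>();

  public void addReplica(R replica) {
    synchronized (datasetLock) {
      replicas.add(replica);
    }
  }

  // The refresh thread iterates this snapshot, never the live set, so writers
  // cannot trigger a ConcurrentModificationException mid-iteration.
  public Collection<R> deepCopyReplicas() {
    synchronized (datasetLock) {
      return new ArrayList<>(replicas);
    }
  }
}
{code}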

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Ryan Wu
>Priority: Major
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the exception
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2596) Remove unused private method "createPipeline"

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2596?focusedWorklogId=347146&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347146
 ]

ASF GitHub Bot logged work on HDDS-2596:


Author: ASF GitHub Bot
Created on: 21/Nov/19 03:34
Start Date: 21/Nov/19 03:34
Worklog Time Spent: 10m 
  Work Description: abhishekaypurohit commented on pull request #239: 
HDDS-2596. Remove unused private method "createPipeline"
URL: https://github.com/apache/hadoop-ozone/pull/239
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347146)
Time Spent: 20m  (was: 10m)

> Remove unused private method "createPipeline"
> -
>
> Key: HDDS-2596
> URL: https://issues.apache.org/jira/browse/HDDS-2596
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWe&open=AW5md_AVKcVY8lQ4ZsWe]
> and 
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWW&open=AW5md_AVKcVY8lQ4ZsWW



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14986) ReplicaCachingGetSpaceUsed throws ConcurrentModificationException

2019-11-20 Thread Aiphago (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aiphago updated HDFS-14986:
---
Attachment: HDFS-14986.003.patch

> ReplicaCachingGetSpaceUsed throws  ConcurrentModificationException
> --
>
> Key: HDFS-14986
> URL: https://issues.apache.org/jira/browse/HDFS-14986
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, performance
>Reporter: Ryan Wu
>Assignee: Ryan Wu
>Priority: Major
> Attachments: HDFS-14986.001.patch, HDFS-14986.002.patch, 
> HDFS-14986.003.patch
>
>
> Running DU across lots of disks is very expensive. We applied the patch 
> HDFS-14313 to get the used space from ReplicaInfo in memory. However, the new 
> du threads throw the exception
> {code:java}
> // 2019-11-08 18:07:13,858 ERROR 
> [refreshUsed-/home/vipshop/hard_disk/7/dfs/dn/current/BP-1203969992--1450855658517]
>  
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed:
>  ReplicaCachingGetSpaceUsed refresh error
> java.util.ConcurrentModificationException: Tree has been modified outside of 
> iterator
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.checkForModification(FoldedTreeSet.java:311)
> 
> at 
> org.apache.hadoop.hdfs.util.FoldedTreeSet$TreeSetIterator.hasNext(FoldedTreeSet.java:256)
> 
> at java.util.AbstractCollection.addAll(AbstractCollection.java:343)
> at java.util.HashSet.<init>(HashSet.java:120)
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.deepCopyReplica(FsDatasetImpl.java:1052)
> 
> at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.ReplicaCachingGetSpaceUsed.refresh(ReplicaCachingGetSpaceUsed.java:73)
> 
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:178)
>    
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14996) RBF: GetFileStatus fails for directory with EC policy set in case of multiple destinations

2019-11-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978934#comment-16978934
 ] 

Íñigo Goiri commented on HDFS-14996:


Yetus was clean, +1 on [^HDFS-14996-03.patch].

> RBF: GetFileStatus fails for directory with EC policy set in case of multiple 
> destinations 
> ---
>
> Key: HDFS-14996
> URL: https://issues.apache.org/jira/browse/HDFS-14996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, rbf
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14996-01.patch, HDFS-14996-02.patch, 
> HDFS-14996-03.patch
>
>
> In the case of multiple destinations for one mount following a PathAll-type 
> order, getting FileStatus fails if the directory has an EC policy set on it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2538) Sonar: Fix issues found in DatabaseHelper in ozone audit parser package

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2538?focusedWorklogId=347140&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347140
 ]

ASF GitHub Bot logged work on HDDS-2538:


Author: ASF GitHub Bot
Created on: 21/Nov/19 02:58
Start Date: 21/Nov/19 02:58
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #221: 
HDDS-2538. Fix issues found in DatabaseHelper.
URL: https://github.com/apache/hadoop-ozone/pull/221
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347140)
Time Spent: 20m  (was: 10m)

> Sonar: Fix issues found in DatabaseHelper in ozone audit parser package
> ---
>
> Key: HDDS-2538
> URL: https://issues.apache.org/jira/browse/HDDS-2538
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&open=AW5md-dWKcVY8lQ4Zr39&resolved=false&severities=BLOCKER&types=BUG



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2538) Sonar: Fix issues found in DatabaseHelper in ozone audit parser package

2019-11-20 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDDS-2538.
-
Target Version/s: 0.5.0
  Resolution: Fixed

> Sonar: Fix issues found in DatabaseHelper in ozone audit parser package
> ---
>
> Key: HDDS-2538
> URL: https://issues.apache.org/jira/browse/HDDS-2538
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Tools
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&open=AW5md-dWKcVY8lQ4Zr39&resolved=false&severities=BLOCKER&types=BUG



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2594) S3 RangeReads failing with NumberFormatException

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2594?focusedWorklogId=347138&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347138
 ]

ASF GitHub Bot logged work on HDDS-2594:


Author: ASF GitHub Bot
Created on: 21/Nov/19 02:50
Start Date: 21/Nov/19 02:50
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #242: 
HDDS-2594. S3 RangeReads failing with NumberFormatException.
URL: https://github.com/apache/hadoop-ozone/pull/242
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347138)
Time Spent: 20m  (was: 10m)

> S3 RangeReads failing with NumberFormatException
> 
>
> Key: HDDS-2594
> URL: https://issues.apache.org/jira/browse/HDDS-2594
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
>  
> {code:java}
> 2019-11-20 15:32:04,684 WARN org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: java.lang.NumberFormatException: For input 
> string: "3977248768"
>         at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>         at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>         at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>         at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>         at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>         at org.eclipse.jetty.server.Server.handle(Server.java:539)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>         at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>         at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>         at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>         at java.lang.Thread.run(Thread.java:748)
>

[jira] [Updated] (HDDS-2594) S3 RangeReads failing with NumberFormatException

2019-11-20 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2594:

Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> S3 RangeReads failing with NumberFormatException
> 
>
> Key: HDDS-2594
> URL: https://issues.apache.org/jira/browse/HDDS-2594
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
>  
> {code:java}
> 2019-11-20 15:32:04,684 WARN org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: java.lang.NumberFormatException: For input 
> string: "3977248768"
>         at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>         at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>         at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>         at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>         at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>         at org.eclipse.jetty.server.Server.handle(Server.java:539)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>         at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>         at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>         at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>         at java.lang.Thread.run(Thread.java:748)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2597) Remove toString() as log calls it implicitly

2019-11-20 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDDS-2597.
-
Fix Version/s: 0.5.0
   Resolution: Fixed

> Remove toString() as log calls it implicitly
> 
>
> Key: HDDS-2597
> URL: https://issues.apache.org/jira/browse/HDDS-2597
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> No need to call the "toString()" method, as formatting and string conversion 
> are done by the Formatter.
>  
> Related to 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWb&open=AW5md_AVKcVY8lQ4ZsWb]
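
As a quick illustration of the pattern Sonar flags here, a sketch assuming 
SLF4J-style parameterized logging (as used across the Ozone code base):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LogToStringSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LogToStringSketch.class);

  public void report(Object pipeline) {
    // Redundant: the log formatter calls toString() on the argument anyway.
    LOG.info("Created pipeline {}", pipeline.toString());
    // Preferred: pass the object itself; toString() runs only if INFO is
    // actually enabled, and a null argument prints "null" instead of throwing.
    LOG.info("Created pipeline {}", pipeline);
  }
}
{code}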



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978923#comment-16978923
 ] 

Hadoop QA commented on HDFS-14998:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
5s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
40m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14998 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986376/HDFS-14998.002.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux cc2279540156 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1a0c0e4 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 305 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28358/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch
>
>
> After HDFS-14130, we should update the observer namenode doc: the observer 
> namenode can now run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2597) Remove toString() as log calls it implicitly

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2597?focusedWorklogId=347137&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347137
 ]

ASF GitHub Bot logged work on HDDS-2597:


Author: ASF GitHub Bot
Created on: 21/Nov/19 02:48
Start Date: 21/Nov/19 02:48
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #240: 
HDDS-2597. Remove toString() as log calls it implicitly
URL: https://github.com/apache/hadoop-ozone/pull/240
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347137)
Time Spent: 20m  (was: 10m)

> Remove toString() as log calls it implicitly
> 
>
> Key: HDDS-2597
> URL: https://issues.apache.org/jira/browse/HDDS-2597
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> No need to call the "toString()" method, as formatting and string conversion 
> are done by the Formatter.
>  
> Related to 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWb&open=AW5md_AVKcVY8lQ4ZsWb]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2597) Remove toString() as log calls it implicitly

2019-11-20 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2597:

Summary: Remove toString() as log calls it implicitly  (was: No need to 
call "toString()" method as formatting and string conversion is done by the 
Formatter.)

> Remove toString() as log calls it implicitly
> 
>
> Key: HDDS-2597
> URL: https://issues.apache.org/jira/browse/HDDS-2597
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> No need to call the "toString()" method, as formatting and string conversion 
> are done by the Formatter.
>  
> Related to 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWb&open=AW5md_AVKcVY8lQ4ZsWb]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2597) No need to call "toString()" method as formatting and string conversion is done by the Formatter.

2019-11-20 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2597:

Description: 
No need to call the "toString()" method, as formatting and string conversion are 
done by the Formatter.

 

Related to 
[https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWb&open=AW5md_AVKcVY8lQ4ZsWb]

  was:Related to 
https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWb&open=AW5md_AVKcVY8lQ4ZsWb


> No need to call "toString()" method as formatting and string conversion is 
> done by the Formatter.
> -
>
> Key: HDDS-2597
> URL: https://issues.apache.org/jira/browse/HDDS-2597
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> No need to call the "toString()" method, as formatting and string conversion 
> are done by the Formatter.
>  
> Related to 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWb&open=AW5md_AVKcVY8lQ4ZsWb]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2596) Remove unused private method "createPipeline"

2019-11-20 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2596:

Summary: Remove unused private method "createPipeline"  (was: Remove this 
unused private "createPipeline" method)

> Remove unused private method "createPipeline"
> -
>
> Key: HDDS-2596
> URL: https://issues.apache.org/jira/browse/HDDS-2596
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWe&open=AW5md_AVKcVY8lQ4ZsWe]
> and 
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWW&open=AW5md_AVKcVY8lQ4ZsWW



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2598) Remove unused private field "LOG"

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2598?focusedWorklogId=347136&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347136
 ]

ASF GitHub Bot logged work on HDDS-2598:


Author: ASF GitHub Bot
Created on: 21/Nov/19 02:43
Start Date: 21/Nov/19 02:43
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #241: 
HDDS-2598. Remove unused private field "LOG"
URL: https://github.com/apache/hadoop-ozone/pull/241
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347136)
Time Spent: 20m  (was: 10m)

> Remove unused private field "LOG"
> -
>
> Key: HDDS-2598
> URL: https://issues.apache.org/jira/browse/HDDS-2598
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_APKcVY8lQ4ZsWS&open=AW5md_APKcVY8lQ4ZsWS



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2598) Remove unused private field "LOG"

2019-11-20 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDDS-2598.
-
Fix Version/s: 0.5.0
   Resolution: Fixed

> Remove unused private field "LOG"
> -
>
> Key: HDDS-2598
> URL: https://issues.apache.org/jira/browse/HDDS-2598
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_APKcVY8lQ4ZsWS&open=AW5md_APKcVY8lQ4ZsWS



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2598) Remove unused private field "LOG"

2019-11-20 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2598:

Summary: Remove unused private field "LOG"  (was: Remove this unused "LOG" 
private field.)

> Remove unused private field "LOG"
> -
>
> Key: HDDS-2598
> URL: https://issues.apache.org/jira/browse/HDDS-2598
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_APKcVY8lQ4ZsWS&open=AW5md_APKcVY8lQ4ZsWS



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14996) RBF: GetFileStatus fails for directory with EC policy set in case of multiple destinations

2019-11-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978916#comment-16978916
 ] 

Hadoop QA commented on HDFS-14996:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}103m 
31s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
43s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14996 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986269/HDFS-14996-03.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b78de1d4a09c 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6f899e9 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28357/testReport/ |
| Max. process+thread count | 2843 (vs. ulimit of 5500) |
| modules |

[jira] [Comment Edited] (HDDS-2591) No tailMap needed for startIndex 0 in ContainerSet#listContainer

2019-11-20 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978910#comment-16978910
 ] 

Bharat Viswanadham edited comment on HDDS-2591 at 11/21/19 2:10 AM:


Hi [~adoroszlai]

This API was added so that it can be used by the Scanners implementation. I 
think, for now, we can leave it and fix the issue reported.


was (Author: bharatviswa):
Hi [~adoroszlai]

This API is used so that it can be used by Scanners. I think, for now, we can 
leave it, and fix the issue reported.

> No tailMap needed for startIndex 0 in ContainerSet#listContainer
> 
>
> Key: HDDS-2591
> URL: https://issues.apache.org/jira/browse/HDDS-2591
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>
> {{ContainerSet#listContainer}} has this code:
> {code:title=https://github.com/apache/hadoop-ozone/blob/3c334f6a7b344e0e5f52fec95071c369286cfdcb/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java#L198}
> map = containerMap.tailMap(containerMap.firstKey(), true);
> {code}
> This is equivalent to:
> {code}
> map = containerMap;
> {code}
> since {{tailMap}} is a sub-map with all keys larger than or equal to 
> ({{inclusive=true}}) {{firstKey}}, which is the lowest key in the map.  So it 
> is a sub-map with all keys, ie. the whole map.
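
A quick self-contained check of that equivalence (a plain TreeMap stands in 
for the container map; the inclusive tail map from the lowest key is the whole 
map):
{code:java}
import java.util.NavigableMap;
import java.util.TreeMap;

public class TailMapSketch {
  public static void main(String[] args) {
    TreeMap<Long, String> containerMap = new TreeMap<>();
    containerMap.put(1L, "container-1");
    containerMap.put(2L, "container-2");
    containerMap.put(3L, "container-3");
    // Every key is >= the first (lowest) key, so the inclusive tail map
    // contains all entries.
    NavigableMap<Long, String> tail =
        containerMap.tailMap(containerMap.firstKey(), true);
    System.out.println(tail.equals(containerMap)); // prints: true
  }
}
{code}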



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2591) No tailMap needed for startIndex 0 in ContainerSet#listContainer

2019-11-20 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978910#comment-16978910
 ] 

Bharat Viswanadham commented on HDDS-2591:
--

Hi [~adoroszlai]

This API is there so that it can be used by Scanners. I think, for now, we can 
leave it and fix the issue reported.

> No tailMap needed for startIndex 0 in ContainerSet#listContainer
> 
>
> Key: HDDS-2591
> URL: https://issues.apache.org/jira/browse/HDDS-2591
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>
> {{ContainerSet#listContainer}} has this code:
> {code:title=https://github.com/apache/hadoop-ozone/blob/3c334f6a7b344e0e5f52fec95071c369286cfdcb/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java#L198}
> map = containerMap.tailMap(containerMap.firstKey(), true);
> {code}
> This is equivalent to:
> {code}
> map = containerMap;
> {code}
> since {{tailMap}} is a sub-map with all keys larger than or equal to 
> ({{inclusive=true}}) {{firstKey}}, which is the lowest key in the map.  So it 
> is a sub-map with all keys, ie. the whole map.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978895#comment-16978895
 ] 

Íñigo Goiri commented on HDFS-14998:


[~shv], [~xiangheng], do you guys mind taking a look at this and HDFS-14961?

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch
>
>
> After HDFS-14130, we should update the observer namenode doc: the observer 
> namenode can now run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-20 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978894#comment-16978894
 ] 

Fei Hui commented on HDFS-14998:


[~ayushtkn] Thanks for your review!
Uploaded v002 patch.

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch
>
>
> After HDFS-14130, we should update the observer namenode doc: the observer 
> namenode can now run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14961) Prevent ZKFC changing Observer Namenode state

2019-11-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978892#comment-16978892
 ] 

Íñigo Goiri commented on HDFS-14961:


Changing the documentation is good, but I feel we should clarify these things in 
the method itself.
In the javadoc for {{transitionToStandby()}} we should probably add a high-level 
description and a pointer to the doc.
Similarly for the test case.

> Prevent ZKFC changing Observer Namenode state
> -
>
> Key: HDFS-14961
> URL: https://issues.apache.org/jira/browse/HDFS-14961
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14961-01.patch, HDFS-14961-02.patch, 
> ZKFC-TEST-14961.patch
>
>
> HDFS-14130 made ZKFC aware of the Observer Namenode and hence allows ZKFC to 
> run along with the Observer node.
> The Observer namenode isn't supposed to be part of the ZKFC election process.
> But if the Namenode was part of the election before turning into an Observer 
> via the transitionToObserver command, the ZKFC still sends instructions to 
> the Namenode as a result of its previous participation and sometimes tends to 
> change the state of the Observer to Standby.
> This is also the reason for the failure in TestDFSZKFailoverController.
> TestDFSZKFailoverController has been consistently failing with a timeout 
> waiting in testManualFailoverWithDFSHAAdmin(), in particular 
> {{waitForHAState(1, HAServiceState.OBSERVER);}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-20 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14998:
---
Attachment: HDFS-14998.002.patch

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
> Attachments: HDFS-14998.001.patch, HDFS-14998.002.patch
>
>
> After HDFS-14130, we should update the observer namenode doc: the observer 
> namenode can now run with ZKFC running.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14961) Prevent ZKFC changing Observer Namenode state

2019-11-20 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978885#comment-16978885
 ] 

Fei Hui commented on HDFS-14961:


[~ayushtkn] Thanks.
A potential race condition occurs when both the user and ZKFC want to transition 
the NN to other states. The NN can judge whether such a request is reasonable.
LGTM +1
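
A minimal sketch of the kind of guard being discussed, with illustrative names 
(the actual check in the patch may differ):
{code:java}
public class TransitionGuardSketch {
  enum HAState { ACTIVE, STANDBY, OBSERVER }

  private HAState state = HAState.OBSERVER;

  // Reject a ZKFC-driven demotion while the node serves as an Observer: the
  // Observer never joined the election, so a stale ZKFC request must not
  // change its state.
  public void transitionToStandby(boolean requestedByZkfc) {
    if (requestedByZkfc && state == HAState.OBSERVER) {
      throw new IllegalStateException(
          "Observer Namenode does not take part in failover");
    }
    state = HAState.STANDBY;
  }
}
{code}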


> Prevent ZKFC changing Observer Namenode state
> -
>
> Key: HDFS-14961
> URL: https://issues.apache.org/jira/browse/HDFS-14961
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14961-01.patch, HDFS-14961-02.patch, 
> ZKFC-TEST-14961.patch
>
>
> HDFS-14130 made ZKFC aware of the Observer Namenode and hence allows ZKFC to 
> run along with the Observer node.
> The Observer namenode isn't supposed to be part of the ZKFC election process.
> But if the Namenode was part of the election before turning into an Observer 
> via the transitionToObserver command, the ZKFC still sends instructions to 
> the Namenode as a result of its previous participation and sometimes tends to 
> change the state of the Observer to Standby.
> This is also the reason for the failure in TestDFSZKFailoverController.
> TestDFSZKFailoverController has been consistently failing with a timeout 
> waiting in testManualFailoverWithDFSHAAdmin(), in particular 
> {{waitForHAState(1, HAServiceState.OBSERVER);}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14563) Enhance interface about recommissioning/decommissioning

2019-11-20 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-14563:
---
Labels: decommission  (was: )

> Enhance interface about recommissioning/decommissioning
> ---
>
> Key: HDFS-14563
> URL: https://issues.apache.org/jira/browse/HDFS-14563
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
>  Labels: decommission
> Attachments: HDFS-14563.001.patch, HDFS-14563.002.patch, mt_mode-2.txt
>
>
> In the current implementation, if we need to decommission or recommission one 
> datanode, the only way is to add the datanode to the include or exclude file 
> under the namenode configuration path, then execute the command `bin/hadoop 
> dfsadmin -refreshNodes` to trigger the namenode to reload include/exclude and 
> start recommissioning or decommissioning the datanode.
> The shortcomings of this approach are:
> a. The namenode reloads the include/exclude configuration files from devices; 
> if the I/O load is high, the handler may be blocked.
> b. The namenode has to process every datanode in the include and exclude 
> configurations; if there are many datanodes pending to process (very common 
> for a large cluster), the namenode may hang for hundreds of seconds at worst, 
> waiting for recommission/decommission to finish while holding the write lock.
> I think we should expose one lightweight interface that recommissions or 
> decommissions a single datanode, so we can operate on datanodes more smoothly 
> using dfsadmin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2599) This block of commented-out lines of code should be removed.

2019-11-20 Thread Abhishek Purohit (Jira)
Abhishek Purohit created HDDS-2599:
--

 Summary: This block of commented-out lines of code should be 
removed.
 Key: HDDS-2599
 URL: https://issues.apache.org/jira/browse/HDDS-2599
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Abhishek Purohit
Assignee: Abhishek Purohit


https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md-_2KcVY8lQ4ZsVm&open=AW5md-_2KcVY8lQ4ZsVm



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2594) S3 RangeReads failing with NumberFormatException

2019-11-20 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2594:
-
Status: Patch Available  (was: Open)

> S3 RangeReads failing with NumberFormatException
> 
>
> Key: HDDS-2594
> URL: https://issues.apache.org/jira/browse/HDDS-2594
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  
> {code:java}
> 2019-11-20 15:32:04,684 WARN org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: java.lang.NumberFormatException: For input 
> string: "3977248768"
>         at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>         at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>         at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>         at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>         at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>         at org.eclipse.jetty.server.Server.handle(Server.java:539)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>         at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>         at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>         at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>         at java.lang.Thread.run(Thread.java:748)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2594) S3 RangeReads failing with NumberFormatException

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2594?focusedWorklogId=347107&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347107
 ]

ASF GitHub Bot logged work on HDDS-2594:


Author: ASF GitHub Bot
Created on: 21/Nov/19 00:57
Start Date: 21/Nov/19 00:57
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #242: 
HDDS-2594. S3 RangeReads failing with NumberFormatException.
URL: https://github.com/apache/hadoop-ozone/pull/242
 
 
   ## What changes were proposed in this pull request?
   RangeHeaderParserUtil throws NumberFormatException because it uses 
Integer.parseInt, which overflows for byte offsets beyond Integer.MAX_VALUE.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2594
   
   ## How was this patch tested?
   
   Added UT for this.
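   
   For context, a minimal sketch of the change (the class and method below are 
illustrative, not the actual RangeHeaderParserUtil code): byte offsets in a 
Range header must be parsed as long, since values such as "3977248768" exceed 
Integer.MAX_VALUE.
   
```java
// Hedged sketch of the fix: parse Range offsets with Long.parseLong instead
// of Integer.parseInt, so offsets beyond 2 GiB no longer throw
// NumberFormatException. All names here are illustrative assumptions.
public final class RangeParseSketch {

  /** Parses "bytes=start-end" into {start, end}; suffix ranges omitted. */
  static long[] parseByteRange(String rangeHeader) {
    String spec = rangeHeader.substring("bytes=".length());
    int dash = spec.indexOf('-');
    long start = Long.parseLong(spec.substring(0, dash));  // was Integer.parseInt
    long end = Long.parseLong(spec.substring(dash + 1));   // was Integer.parseInt
    return new long[] {start, end};
  }

  public static void main(String[] args) {
    long[] r = parseByteRange("bytes=3977248768-3977249791");
    System.out.println(r[0] + "-" + r[1]);  // 3977248768-3977249791
  }
}
```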
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347107)
Remaining Estimate: 0h
Time Spent: 10m

> S3 RangeReads failing with NumberFormatException
> 
>
> Key: HDDS-2594
> URL: https://issues.apache.org/jira/browse/HDDS-2594
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  
> {code:java}
> 2019-11-20 15:32:04,684 WARN org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: java.lang.NumberFormatException: For input 
> string: "3977248768"
>         at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>         at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>         at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>         at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>         at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>         at org.eclipse.jetty.server.Server.handle(Server.java:539)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>         at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>         at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>         at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>         at java.lang.Thread.run(Thread.java:748)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


[jira] [Updated] (HDDS-2594) S3 RangeReads failing with NumberFormatException

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2594:
-
Labels: pull-request-available  (was: )

> S3 RangeReads failing with NumberFormatException
> 
>
> Key: HDDS-2594
> URL: https://issues.apache.org/jira/browse/HDDS-2594
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
>  
> {code:java}
> 2019-11-20 15:32:04,684 WARN org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: java.lang.NumberFormatException: For input 
> string: "3977248768"
>         at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>         at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>         at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>         at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>         at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>         at org.eclipse.jetty.server.Server.handle(Server.java:539)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>         at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>         at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>         at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>         at java.lang.Thread.run(Thread.java:748)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2598) Remove this unused "LOG" private field.

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2598:
-
Labels: pull-request-available  (was: )

> Remove this unused "LOG" private field.
> ---
>
> Key: HDDS-2598
> URL: https://issues.apache.org/jira/browse/HDDS-2598
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_APKcVY8lQ4ZsWS&open=AW5md_APKcVY8lQ4ZsWS



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2598) Remove this unused "LOG" private field.

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2598?focusedWorklogId=347102&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347102
 ]

ASF GitHub Bot logged work on HDDS-2598:


Author: ASF GitHub Bot
Created on: 21/Nov/19 00:51
Start Date: 21/Nov/19 00:51
Worklog Time Spent: 10m 
  Work Description: abhishekaypurohit commented on pull request #241: 
HDDS-2598. Removed unused field
URL: https://github.com/apache/hadoop-ozone/pull/241
 
 
   ## What changes were proposed in this pull request?
   
   Removed unused field
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2598
   
   ## How was this patch tested?
   
   mvn builds
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347102)
Remaining Estimate: 0h
Time Spent: 10m

> Remove this unused "LOG" private field.
> ---
>
> Key: HDDS-2598
> URL: https://issues.apache.org/jira/browse/HDDS-2598
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_APKcVY8lQ4ZsWS&open=AW5md_APKcVY8lQ4ZsWS



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2598) Remove this unused "LOG" private field.

2019-11-20 Thread Abhishek Purohit (Jira)
Abhishek Purohit created HDDS-2598:
--

 Summary: Remove this unused "LOG" private field.
 Key: HDDS-2598
 URL: https://issues.apache.org/jira/browse/HDDS-2598
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Abhishek Purohit
Assignee: Abhishek Purohit


https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_APKcVY8lQ4ZsWS&open=AW5md_APKcVY8lQ4ZsWS



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2597) No need to call "toString()" method as formatting and string conversion is done by the Formatter.

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2597?focusedWorklogId=347100&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347100
 ]

ASF GitHub Bot logged work on HDDS-2597:


Author: ASF GitHub Bot
Created on: 21/Nov/19 00:47
Start Date: 21/Nov/19 00:47
Worklog Time Spent: 10m 
  Work Description: abhishekaypurohit commented on pull request #240: 
HDDS-2597. Removed tostring as log calls it implicitly.
URL: https://github.com/apache/hadoop-ozone/pull/240
 
 
   ## What changes were proposed in this pull request?
   
   Removed unnecessary toString
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2597
   
   ## How was this patch tested?
   
   mvn builds
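   
   For illustration, a minimal sketch of the Sonar finding being fixed (the 
class and log message are made up; only the pattern matches the change):
   
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// The SLF4J formatter converts arguments to strings itself, so an explicit
// toString() is redundant and forces eager conversion even when the log
// level is disabled.
public class ToStringLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(ToStringLoggingSketch.class);

  void log(Object pipeline) {
    // Before: redundant, eager conversion.
    LOG.info("Created pipeline {}", pipeline.toString());
    // After: toString() is invoked by the formatter only if INFO is enabled.
    LOG.info("Created pipeline {}", pipeline);
  }
}
```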
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347100)
Remaining Estimate: 0h
Time Spent: 10m

> No need to call "toString()" method as formatting and string conversion is 
> done by the Formatter.
> -
>
> Key: HDDS-2597
> URL: https://issues.apache.org/jira/browse/HDDS-2597
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Related to 
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWb&open=AW5md_AVKcVY8lQ4ZsWb



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2597) No need to call "toString()" method as formatting and string conversion is done by the Formatter.

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2597:
-
Labels: pull-request-available  (was: )

> No need to call "toString()" method as formatting and string conversion is 
> done by the Formatter.
> -
>
> Key: HDDS-2597
> URL: https://issues.apache.org/jira/browse/HDDS-2597
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>
> Related to 
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWb&open=AW5md_AVKcVY8lQ4ZsWb



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2597) No need to call "toString()" method as formatting and string conversion is done by the Formatter.

2019-11-20 Thread Abhishek Purohit (Jira)
Abhishek Purohit created HDDS-2597:
--

 Summary: No need to call "toString()" method as formatting and 
string conversion is done by the Formatter.
 Key: HDDS-2597
 URL: https://issues.apache.org/jira/browse/HDDS-2597
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Abhishek Purohit
Assignee: Abhishek Purohit


Related to 
https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWb&open=AW5md_AVKcVY8lQ4ZsWb



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2596) Remove this unused private "createPipeline" method

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2596?focusedWorklogId=347098&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347098
 ]

ASF GitHub Bot logged work on HDDS-2596:


Author: ASF GitHub Bot
Created on: 21/Nov/19 00:44
Start Date: 21/Nov/19 00:44
Worklog Time Spent: 10m 
  Work Description: abhishekaypurohit commented on pull request #239: 
HDDS-2596. Removed unused private method
URL: https://github.com/apache/hadoop-ozone/pull/239
 
 
   ## What changes were proposed in this pull request?
   
   Removed unused method
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2596
   
   ## How was this patch tested?
   
   mvn build
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347098)
Remaining Estimate: 0h
Time Spent: 10m

> Remove this unused private "createPipeline" method
> --
>
> Key: HDDS-2596
> URL: https://issues.apache.org/jira/browse/HDDS-2596
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWe&open=AW5md_AVKcVY8lQ4ZsWe]
> and 
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWW&open=AW5md_AVKcVY8lQ4ZsWW



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2596) Remove this unused private "createPipeline" method

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2596:
-
Labels: pull-request-available  (was: )

> Remove this unused private "createPipeline" method
> --
>
> Key: HDDS-2596
> URL: https://issues.apache.org/jira/browse/HDDS-2596
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Abhishek Purohit
>Assignee: Abhishek Purohit
>Priority: Major
>  Labels: pull-request-available
>
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWe&open=AW5md_AVKcVY8lQ4ZsWe]
> and 
> https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWW&open=AW5md_AVKcVY8lQ4ZsWW



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2596) Remove this unused private "createPipeline" method

2019-11-20 Thread Abhishek Purohit (Jira)
Abhishek Purohit created HDDS-2596:
--

 Summary: Remove this unused private "createPipeline" method
 Key: HDDS-2596
 URL: https://issues.apache.org/jira/browse/HDDS-2596
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Abhishek Purohit
Assignee: Abhishek Purohit


 

[https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWe&open=AW5md_AVKcVY8lQ4ZsWe]

and 

https://sonarcloud.io/project/issues?id=hadoop-ozone&issues=AW5md_AVKcVY8lQ4ZsWW&open=AW5md_AVKcVY8lQ4ZsWW



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2595) Update Ratis version to latest snapshot version

2019-11-20 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2595:


 Summary: Update Ratis version to latest snapshot version
 Key: HDDS-2595
 URL: https://issues.apache.org/jira/browse/HDDS-2595
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


Update the Ratis dependency to the latest snapshot 
([ce699ba|https://github.com/apache/incubator-ratis/commit/ce699ba]) to avoid 
out-of-memory exceptions (RATIS-714).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2594) S3 RangeReads failing with NumberFormatException

2019-11-20 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2594:


 Summary: S3 RangeReads failing with NumberFormatException
 Key: HDDS-2594
 URL: https://issues.apache.org/jira/browse/HDDS-2594
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


 
{code:java}
2019-11-20 15:32:04,684 WARN org.eclipse.jetty.servlet.ServletHandler:
javax.servlet.ServletException: java.lang.NumberFormatException: For input 
string: "3977248768"
        at 
org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
        at 
org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
        at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
        at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
        at 
org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
        at 
org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
        at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
        at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
        at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
        at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
        at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
        at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
        at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
        at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
        at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
        at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.eclipse.jetty.server.Server.handle(Server.java:539)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
        at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
        at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
        at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
        at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
        at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
        at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
        at java.lang.Thread.run(Thread.java:748)
{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2594) S3 RangeReads failing with NumberFormatException

2019-11-20 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-2594:


Assignee: Bharat Viswanadham

> S3 RangeReads failing with NumberFormatException
> 
>
> Key: HDDS-2594
> URL: https://issues.apache.org/jira/browse/HDDS-2594
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
>  
> {code:java}
> 2019-11-20 15:32:04,684 WARN org.eclipse.jetty.servlet.ServletHandler:
> javax.servlet.ServletException: java.lang.NumberFormatException: For input 
> string: "3977248768"
>         at 
> org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:432)
>         at 
> org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:370)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:389)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:342)
>         at 
> org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:229)
>         at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:840)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1780)
>         at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1609)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>         at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1767)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:583)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>         at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:513)
>         at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>         at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>         at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>         at org.eclipse.jetty.server.Server.handle(Server.java:539)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
>         at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>         at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>         at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>         at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>         at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>         at java.lang.Thread.run(Thread.java:748)
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1812) Du while calculating used disk space reports that chunk files are file not found

2019-11-20 Thread Anu Engineer (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978817#comment-16978817
 ] 

Anu Engineer commented on HDDS-1812:


bq. Do we really need the info on space used by Datanode? It does not seem 
suitable for decisions regarding allocation, since the disk may be full with 
other data.

I am OK with removing this information. As you mentioned, it might not be very 
useful for SCM to know.

> Du while calculating used disk space reports that chunk files are file not 
> found
> 
>
> Key: HDDS-1812
> URL: https://issues.apache.org/jira/browse/HDDS-1812
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Attila Doroszlai
>Priority: Critical
>
> {code}
> 2019-07-16 08:16:49,787 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could 
> not get disk usage information for path /data/3/ozone-0715
> ExitCodeException exitCode=1: du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/b113dd390e68e914d3ff405f3deec564_stream_60448f
> 77-6349-48fa-ae86-b2d311730569_chunk_1.tmp.1.14118085': No such file or 
> directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/37993af2849bdd0320d0f9d4a6ef4b92_stream_1f68be9f-e083-45e5-84a9-08809bc392ed
> _chunk_1.tmp.1.14118091': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a38677def61389ec0be9105b1b4fddff_stream_9c3c3741-f710-4482-8423-7ac6695be96b
> _chunk_1.tmp.1.14118102': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a689c89f71a75547471baf6182f3be01_stream_baf0f21d-2fb0-4cd8-84b0-eff1723019a0
> _chunk_1.tmp.1.14118105': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/f58cf0fa5cb9360058ae25e8bc983e84_stream_d8d5ea61-995f-4ff5-88fb-4a9e97932f00
> _chunk_1.tmp.1.14118109': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a1d13ee6bbefd1f8156b1bd8db0d1b67_stream_db214bdd-a0c0-4f4a-8bc7-a3817e047e45_chunk_1.tmp.1.14118115':
>  No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/8f8a4bd3f6c31161a70f82cb5ab8ee60_stream_d532d657-3d87-4332-baf8-effad9b3db23_chunk_1.tmp.1.14118127':
>  No such file or directory
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
> at org.apache.hadoop.util.Shell.run(Shell.java:901)
> at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:62)
> at org.apache.hadoop.fs.DU.refresh(DU.java:53)
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:181)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14996) RBF: GetFileStatus fails for directory with EC policy set in case of multiple destinations

2019-11-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978815#comment-16978815
 ] 

Íñigo Goiri commented on HDFS-14996:


[^HDFS-14996-03.patch] LGTM.
Could we get another Yetus run just for sanity?

> RBF: GetFileStatus fails for directory with EC policy set in case of multiple 
> destinations 
> ---
>
> Key: HDFS-14996
> URL: https://issues.apache.org/jira/browse/HDFS-14996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, rbf
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14996-01.patch, HDFS-14996-02.patch, 
> HDFS-14996-03.patch
>
>
> In case of multi destinations for one mount and following PathAll type Order.
> Getting FileStatus Fails if it has an EC Policy set on it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2356) Multipart upload report errors while writing to ozone Ratis pipeline

2019-11-20 Thread Bharat Viswanadham (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978812#comment-16978812
 ] 

Bharat Viswanadham commented on HDDS-2356:
--

With the HDDS-2477 PR, I was able to verify that MPU for larger files is 
working.

https://github.com/apache/hadoop-ozone/pull/159#issuecomment-556527944

> Multipart upload report errors while writing to ozone Ratis pipeline
> 
>
> Key: HDDS-2356
> URL: https://issues.apache.org/jira/browse/HDDS-2356
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.1
> Environment: Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM 
> on a separate VM
>Reporter: Li Cheng
>Assignee: Bharat Viswanadham
>Priority: Blocker
> Fix For: 0.5.0
>
> Attachments: 2018-11-15-OM-logs.txt, 2019-11-06_18_13_57_422_ERROR, 
> hs_err_pid9340.log, image-2019-10-31-18-56-56-177.png, 
> om-audit-VM_50_210_centos.log, om_audit_log_plc_1570863541668_9278.txt
>
>
> Env: 4 VMs in total: 3 Datanodes on 3 VMs, 1 OM & 1 SCM on a separate VM, say 
> it's VM0.
> I use goofys as a fuse and enable ozone S3 gateway to mount ozone to a path 
> on VM0, while reading data from VM0 local disk and write to mount path. The 
> dataset has various sizes of files from 0 byte to GB-level and it has a 
> number of ~50,000 files. 
> The writing is slow (1GB for ~10 mins) and it stops after around 4GB. As I 
> look at the hadoop-root-om-VM_50_210_centos.out log, I see OM throwing errors 
> related to multipart upload. This error eventually causes the writing to 
> terminate and OM to shut down. 
>  
> Updated on 11/06/2019:
> See new multipart upload error NO_SUCH_MULTIPART_UPLOAD_ERROR and full logs 
> are in the attachment.
>  2019-11-05 18:12:37,766 ERROR 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest:
>  MultipartUpload Commit is failed for Key:./2
> 0191012/plc_1570863541668_9278 in Volume/Bucket 
> s325d55ad283aa400af464c76d713c07ad/ozone-test
> NO_SUCH_MULTIPART_UPLOAD_ERROR 
> org.apache.hadoop.ozone.om.exceptions.OMException: No such Multipart upload 
> is with specified uploadId fcda8608-b431-48b7-8386-
> 0a332f1a709a-103084683261641950
> at 
> org.apache.hadoop.ozone.om.request.s3.multipart.S3MultipartUploadCommitPartRequest.validateAndUpdateCache(S3MultipartUploadCommitPartRequest.java:1
> 56)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequestDirectlyToOM(OzoneManagerProtocolServerSideTranslatorPB.
> java:217)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:132)
> at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
> at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:100)
> at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
>  
> Updated on 10/28/2019:
> See MISMATCH_MULTIPART_LIST error.
>  
> 2019-10-28 11:44:34,079 [qtp1383524016-70] ERROR - Error in Complete 
> Multipart Upload Request for bucket: ozone-test, key: 
> 20191012/plc_1570863541668_927
>  8
>  MISMATCH_MULTIPART_LIST org.apache.hadoop.ozone.om.exceptions.OMException: 
> Complete Multipart Upload Failed: volume: 
> s3c89e813c80ffcea9543004d57b2a1239bucket:
>  ozone-testkey: 20191012/plc_1570863541668_9278
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.handleError(OzoneManagerProtocolClientSideTranslatorPB.java:732)
>  at 
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.completeMultipartUpload(OzoneManagerProtocolClientSideTranslatorPB
>  .java:1104)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at j

[jira] [Commented] (HDDS-1812) Du while calculating used disk space reports that chunk files are file not found

2019-11-20 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978791#comment-16978791
 ] 

Attila Doroszlai commented on HDDS-1812:


Storage location report has 3 pieces of numeric information:
 # capacity
 # available space
 # space used by Datanode

* The first two pieces are cheap to obtain.
* Hadoop has two implementations for space usage: it can be calculated 
expensively (using {{du}}) or approximated cheaply (using {{df}}, i.e. 
{{capacity - available space}}; see the sketch below).  The approximation is 
much more accurate if the volume is dedicated to Datanode storage.  Is it fair 
to assume that if performance requires it, dedicated volumes will be used for 
Ozone?
* Do we really need the info on space used by Datanode?  It does not seem 
suitable for decisions regarding allocation, since the disk may be full with 
other data.
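
As a concrete illustration of the cheap {{df}}-style approximation (a 
standalone sketch, not Hadoop's actual {{GetSpaceUsed}} implementations):

{code:java}
import java.io.File;

// Standalone sketch: approximate used space as capacity - available.
// This is accurate for Datanode usage only when the volume is dedicated
// to Datanode storage, since other data inflates the result.
public class DfApproximationSketch {
  public static void main(String[] args) {
    File volume = new File(args.length > 0 ? args[0] : "/tmp");
    long capacity = volume.getTotalSpace();    // analogous to df "size"
    long available = volume.getUsableSpace();  // analogous to df "avail"
    long approxUsed = capacity - available;    // includes non-Datanode data
    System.out.printf("capacity=%d available=%d approxUsed=%d%n",
        capacity, available, approxUsed);
  }
}
{code}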

> Du while calculating used disk space reports that chunk files are file not 
> found
> 
>
> Key: HDDS-1812
> URL: https://issues.apache.org/jira/browse/HDDS-1812
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Attila Doroszlai
>Priority: Critical
>
> {code}
> 2019-07-16 08:16:49,787 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could 
> not get disk usage information for path /data/3/ozone-0715
> ExitCodeException exitCode=1: du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/b113dd390e68e914d3ff405f3deec564_stream_60448f
> 77-6349-48fa-ae86-b2d311730569_chunk_1.tmp.1.14118085': No such file or 
> directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/37993af2849bdd0320d0f9d4a6ef4b92_stream_1f68be9f-e083-45e5-84a9-08809bc392ed
> _chunk_1.tmp.1.14118091': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a38677def61389ec0be9105b1b4fddff_stream_9c3c3741-f710-4482-8423-7ac6695be96b
> _chunk_1.tmp.1.14118102': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a689c89f71a75547471baf6182f3be01_stream_baf0f21d-2fb0-4cd8-84b0-eff1723019a0
> _chunk_1.tmp.1.14118105': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/f58cf0fa5cb9360058ae25e8bc983e84_stream_d8d5ea61-995f-4ff5-88fb-4a9e97932f00
> _chunk_1.tmp.1.14118109': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a1d13ee6bbefd1f8156b1bd8db0d1b67_stream_db214bdd-a0c0-4f4a-8bc7-a3817e047e45_chunk_1.tmp.1.14118115':
>  No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/8f8a4bd3f6c31161a70f82cb5ab8ee60_stream_d532d657-3d87-4332-baf8-effad9b3db23_chunk_1.tmp.1.14118127':
>  No such file or directory
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
> at org.apache.hadoop.util.Shell.run(Shell.java:901)
> at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:62)
> at org.apache.hadoop.fs.DU.refresh(DU.java:53)
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:181)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2593) DatanodeAdminMonitor should track under replicated containers and complete the admin workflow accordingly

2019-11-20 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDDS-2593:

Summary: DatanodeAdminMonitor should track under replicated containers and 
complete the admin workflow accordingly  (was: DatanodeAdminMonitor should 
track unreplicated containers and complete the admin workflow accordingly)

> DatanodeAdminMonitor should track under replicated containers and complete 
> the admin workflow accordingly
> -
>
> Key: HDDS-2593
> URL: https://issues.apache.org/jira/browse/HDDS-2593
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>
> HDDS-2459 allowed the replicationManager to take care of containers which are 
> under-replicated due to decommission and maintenance.
> It also exposed a new API to return a ContainerReplicaCount object:
> {code}
> getContainerReplicaCount(Container container)
> {code}
> This object will allow the DatanodeAdminMonitor to check if each container is 
> "sufficiently replicated" before decommission or maintenance can complete and 
> hence can be used to track the progress of each node as it progresses through 
> the admin workflow.
> We should track the containers on each node in administration and ensure that 
> each is closed and sufficiently replicated before allowing decommission to 
> complete.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2592) Add Datanode command to allow the datanode to persist its admin state

2019-11-20 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978787#comment-16978787
 ] 

Stephen O'Donnell commented on HDDS-2592:
-

PR https://github.com/apache/hadoop-ozone/pull/160 has a proof of concept of 
this change.

> Add Datanode command to allow the datanode to persist its admin state 
> --
>
> Key: HDDS-2592
> URL: https://issues.apache.org/jira/browse/HDDS-2592
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode, SCM
>Affects Versions: 0.5.0
>Reporter: Stephen O'Donnell
>Priority: Major
>
> When a node is decommissioned or put into maintenance, SCM will receive the 
> command to kick off the workflow. As part of that workflow, it should issue a 
> further command to the datanode to set the datanode as either:
> maintenance
> decommissioned
> in_service (this is the default state)
> This state should be persisted in the datanode yaml file so it survives 
> reboots.
> Upon receiving this command, the datanode will return a new state for all its 
> containers in the next container report.
> For all closed containers it should return a state of DECOMMISSIONED or 
> MAINTENANCE accordingly, while non-closed containers should return their 
> original value until they are closed. That way SCM can monitor for unclosed 
> containers as part of the decommission flow.
> I don't believe there is any need for the datanode to have multiple states 
> for each admin state (eg decommissioning + decommissioned / 
> entering_maintenance + in_maintenance) as those are only really relevant to 
> SCM. Instead it should be enough to set the datanode state once and assume 
> SCM will cause it to eventually reach that state. 
> These states will be added via HDDS-2459 to progress the changes in the 
> Replication Manager on the SCM side:
> {code}
> ContainerReplicaProto.State.DECOMMISSIONED
> ContainerReplicaProto.State.MAINTENANCE
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2593) DatanodeAdminMonitor should track unreplicated containers and complete the admin workflow accordingly

2019-11-20 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDDS-2593:
---

 Summary: DatanodeAdminMonitor should track unreplicated containers 
and complete the admin workflow accordingly
 Key: HDDS-2593
 URL: https://issues.apache.org/jira/browse/HDDS-2593
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Affects Versions: 0.5.0
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


HDDS-2459 allowed the replicationManager to take care of containers which are 
under-replicated due to decommission and maintenance.

It also exposed a new API to return a ContainerReplicaCount object:

{code}
getContainerReplicaCount(Container container)
{code}

This object will allow the DatanodeAdminMonitor to check if each container is 
"sufficiently replicated" before decommission or maintenance can complete and 
hence can be used to track the progress of each node as it progresses through 
the admin workflow.

We should track the containers on each node in administration and ensure that 
each is closed and sufficiently replicated before allowing decommission to 
complete.
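
A self-contained sketch of that tracking logic follows (all types are 
stand-ins: the real API only guarantees getContainerReplicaCount(Container); 
isSufficientlyReplicated() and isClosed() are assumptions for illustration):

{code:java}
import java.util.List;

// Illustrative-only types; not the HDDS-2459 interfaces.
public class AdminMonitorSketch {
  interface Container { boolean isClosed(); }
  interface ContainerReplicaCount { boolean isSufficientlyReplicated(); }
  interface ReplicationManager {
    ContainerReplicaCount getContainerReplicaCount(Container c);
  }

  // Decommission or maintenance completes only when every container on the
  // node is closed and has enough healthy replicas elsewhere.
  static boolean canCompleteAdmin(ReplicationManager rm,
                                  List<Container> containersOnNode) {
    for (Container c : containersOnNode) {
      if (!c.isClosed()
          || !rm.getContainerReplicaCount(c).isSufficientlyReplicated()) {
        return false;
      }
    }
    return true;
  }
}
{code}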



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14563) Enhance interface about recommissioning/decommissioning

2019-11-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978781#comment-16978781
 ] 

Wei-Chiu Chuang commented on HDFS-14563:


I'm about to commit HDFS-14854, but I suspect it conflicts with this patch 
and will require a rebase.

> Enhance interface about recommissioning/decommissioning
> ---
>
> Key: HDFS-14563
> URL: https://issues.apache.org/jira/browse/HDFS-14563
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs-client, namenode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-14563.001.patch, HDFS-14563.002.patch, mt_mode-2.txt
>
>
> In the current implementation, if we need to decommission or recommission 
> a datanode, the only way is to add the datanode to the include or exclude 
> file under the namenode configuration path, then execute `bin/hadoop 
> dfsadmin -refreshNodes` to trigger the namenode to reload include/exclude 
> and start recommissioning or decommissioning the datanode.
> The shortcomings of this approach are:
> a. the namenode reloads the include/exclude configuration files from disk; 
> if I/O load is high, the handler may be blocked.
> b. the namenode has to process every datanode in the include and exclude 
> configurations; if there are many datanodes pending (very common for a 
> large cluster), the namenode can hang for hundreds of seconds in the worst 
> case, since it holds the write lock while waiting for 
> recommission/decommission to finish.
> I think we should expose a lightweight interface to recommission or 
> decommission a single datanode, so we can operate datanodes via dfsadmin 
> more smoothly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2592) Add Datanode command to allow the datanode to persist its admin state

2019-11-20 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDDS-2592:
---

 Summary: Add Datanode command to allow the datanode to persist its 
admin state 
 Key: HDDS-2592
 URL: https://issues.apache.org/jira/browse/HDDS-2592
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Datanode, SCM
Affects Versions: 0.5.0
Reporter: Stephen O'Donnell


When a node is decommissioned or put into maintenance, SCM will receive the 
command to kick off the workflow. As part of that workflow, it should issue a 
further command to the datanode to set the datanode as either:

maintenance
decommissioned
in_service (this is the default state)

This state should be persisted in the datanode yaml file so it survives reboots.

Upon receiving this command, the datanode will return a new state for all its 
containers in the next container report.

For all closed containers it should return a state of DECOMMISSIONED or 
MAINTENANCE accordingly, while non-closed containers should return their 
original value until they are closed. That way SCM can monitor for unclosed 
containers as part of the decommission flow.

I don't believe there is any need for the datanode to have multiple states for 
each admin state (eg decommissioning + decommissioned / entering_maintenance + 
in_maintenance) as those are only really relevant to SCM. Instead it should be 
enough to set the datanode state once and assume SCM will cause it to 
eventually reach that state. 

These states will be added via HDDS-2459 to progress the changes in the 
Replication Manager on the SCM side:

{code}
ContainerReplicaProto.State.DECOMMISSIONED
ContainerReplicaProto.State.MAINTENANCE
{code}
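
For illustration, a hedged sketch of persisting such a state so it survives 
reboots (the field name "operationalState" and the file layout are 
assumptions, not the real datanode yaml schema):

{code:java}
import java.io.FileReader;
import java.io.FileWriter;
import java.util.LinkedHashMap;
import java.util.Map;
import org.yaml.snakeyaml.Yaml;

public class AdminStatePersistenceSketch {
  enum AdminState { IN_SERVICE, DECOMMISSIONED, MAINTENANCE }

  // Write the admin state into a yaml document on disk.
  static void save(String path, AdminState state) throws Exception {
    Map<String, Object> doc = new LinkedHashMap<>();
    doc.put("operationalState", state.name());
    try (FileWriter out = new FileWriter(path)) {
      new Yaml().dump(doc, out);
    }
  }

  // Read it back after a reboot, defaulting to IN_SERVICE if absent.
  static AdminState load(String path) throws Exception {
    try (FileReader in = new FileReader(path)) {
      Map<String, Object> doc = new Yaml().load(in);
      Object v = (doc == null) ? null : doc.get("operationalState");
      return (v == null) ? AdminState.IN_SERVICE
                         : AdminState.valueOf(v.toString());
    }
  }
}
{code}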



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1812) Du while calculating used disk space reports that chunk files are file not found

2019-11-20 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1812 started by Attila Doroszlai.
--
> Du while calculating used disk space reports that chunk files are file not 
> found
> 
>
> Key: HDDS-1812
> URL: https://issues.apache.org/jira/browse/HDDS-1812
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Attila Doroszlai
>Priority: Critical
>
> {code}
> 2019-07-16 08:16:49,787 WARN org.apache.hadoop.fs.CachingGetSpaceUsed: Could 
> not get disk usage information for path /data/3/ozone-0715
> ExitCodeException exitCode=1: du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/b113dd390e68e914d3ff405f3deec564_stream_60448f
> 77-6349-48fa-ae86-b2d311730569_chunk_1.tmp.1.14118085': No such file or 
> directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/37993af2849bdd0320d0f9d4a6ef4b92_stream_1f68be9f-e083-45e5-84a9-08809bc392ed
> _chunk_1.tmp.1.14118091': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a38677def61389ec0be9105b1b4fddff_stream_9c3c3741-f710-4482-8423-7ac6695be96b
> _chunk_1.tmp.1.14118102': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a689c89f71a75547471baf6182f3be01_stream_baf0f21d-2fb0-4cd8-84b0-eff1723019a0
> _chunk_1.tmp.1.14118105': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/f58cf0fa5cb9360058ae25e8bc983e84_stream_d8d5ea61-995f-4ff5-88fb-4a9e97932f00
> _chunk_1.tmp.1.14118109': No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/a1d13ee6bbefd1f8156b1bd8db0d1b67_stream_db214bdd-a0c0-4f4a-8bc7-a3817e047e45_chunk_1.tmp.1.14118115':
>  No such file or directory
> du: cannot access 
> '/data/3/ozone-0715/hdds/1b467d25-46cd-4de0-a4a1-e9405bde23ff/current/containerDir3/1724/chunks/8f8a4bd3f6c31161a70f82cb5ab8ee60_stream_d532d657-3d87-4332-baf8-effad9b3db23_chunk_1.tmp.1.14118127':
>  No such file or directory
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1008)
> at org.apache.hadoop.util.Shell.run(Shell.java:901)
> at org.apache.hadoop.fs.DU$DUShell.startRefresh(DU.java:62)
> at org.apache.hadoop.fs.DU.refresh(DU.java:53)
> at 
> org.apache.hadoop.fs.CachingGetSpaceUsed$RefreshThread.run(CachingGetSpaceUsed.java:181)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14999) Avoid Potential Infinite Loop in DFSNetworkTopology

2019-11-20 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978779#comment-16978779
 ] 

Wei-Chiu Chuang commented on HDFS-14999:


It might be related to HADOOP-15317 (not sure; the stack trace looks familiar 
to me).
We couldn't find the root cause of the infinite loop, but the code was 
rewritten to eliminate the while loop.

> Avoid Potential Infinite Loop in DFSNetworkTopology
> ---
>
> Key: HDFS-14999
> URL: https://issues.apache.org/jira/browse/HDFS-14999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> {code:java}
> do {
>   chosen = chooseRandomWithStorageTypeAndExcludeRoot(root, excludeRoot,
>   type);
>   if (excludedNodes == null || !excludedNodes.contains(chosen)) {
> break;
>   } else {
> LOG.debug("Node {} is excluded, continuing.", chosen);
>   }
> } while (true);
> {code}
> Observed this loop getting stuck as part of testing HDFS-14913.
> There should be some exit condition or a maximum number of retries here.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14999) Avoid Potential Infinite Loop in DFSNetworkTopology

2019-11-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978766#comment-16978766
 ] 

Ayush Saxena commented on HDFS-14999:
-

I don't have a clear idea what exactly the exit condition should be. Maybe some 
configurable number of retries? Some hard-coded value? Or a count equal to the 
number of nodes?
This isn't a bug post HDFS-14913, but it logically has the potential of staying 
stuck for a long time if chooseRandom keeps returning the excluded node.
Any suggestions?
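For illustration only, the retry cap could be factored out as a small helper. 
This is just a sketch of the idea with hypothetical names ({{chooseWithRetries}} 
and {{supplier}}, the latter standing in for 
{{chooseRandomWithStorageTypeAndExcludeRoot}}), not a proposal for the actual 
DFSNetworkTopology code:
{code:java}
import java.util.Set;
import java.util.function.Supplier;

public final class BoundedChooser {
  /** Returns a non-excluded candidate, or null once maxRetries is exhausted. */
  public static <T> T chooseWithRetries(Supplier<T> supplier, Set<T> excluded,
      int maxRetries) {
    for (int i = 0; i < maxRetries; i++) {
      T candidate = supplier.get();
      if (excluded == null || !excluded.contains(candidate)) {
        return candidate;
      }
    }
    return null; // the caller decides how to handle exhaustion
  }
}
{code}
Whether {{maxRetries}} should be a configurable value or derived from the number 
of nodes is exactly the open question above.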

> Avoid Potential Infinite Loop in DFSNetworkTopology
> ---
>
> Key: HDFS-14999
> URL: https://issues.apache.org/jira/browse/HDFS-14999
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> {code:java}
> do {
>   chosen = chooseRandomWithStorageTypeAndExcludeRoot(root, excludeRoot,
>   type);
>   if (excludedNodes == null || !excludedNodes.contains(chosen)) {
> break;
>   } else {
> LOG.debug("Node {} is excluded, continuing.", chosen);
>   }
> } while (true);
> {code}
> Observed this loop getting stuck as part of testing HDFS-14913.
> There should be some exit condition or a maximum number of retries here.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14999) Avoid Potential Infinite Loop in DFSNetworkTopology

2019-11-20 Thread Ayush Saxena (Jira)
Ayush Saxena created HDFS-14999:
---

 Summary: Avoid Potential Infinite Loop in DFSNetworkTopology
 Key: HDFS-14999
 URL: https://issues.apache.org/jira/browse/HDFS-14999
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ayush Saxena
Assignee: Ayush Saxena


{code:java}
do {
  chosen = chooseRandomWithStorageTypeAndExcludeRoot(root, excludeRoot,
  type);
  if (excludedNodes == null || !excludedNodes.contains(chosen)) {
break;
  } else {
LOG.debug("Node {} is excluded, continuing.", chosen);
  }
} while (true);
{code}

Observed this loop getting stuck as part of testing HDFS-14913.

There should be some exit condition or a maximum number of retries here.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2590) Integration tests for Recon with Ozone Manager.

2019-11-20 Thread Siddharth Wagle (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-2590:
-

Assignee: Shweta

> Integration tests for Recon with Ozone Manager.
> ---
>
> Key: HDDS-2590
> URL: https://issues.apache.org/jira/browse/HDDS-2590
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Shweta
>Priority: Major
> Fix For: 0.5.0
>
>
> Currently, Recon has only unit tests. We need to add the following 
> integration tests to make sure there are no regressions or contract breakage 
> with Ozone Manager. 
> The first step would be to add Recon as a new component to Mini Ozone cluster.
> * *Test 1* - *Verify Recon can get full snapshot and subsequent delta updates 
> from Ozone Manager on startup.*
>   > Start up a Mini Ozone cluster (with Recon) with a few keys in OM.
>   > Verify Recon gets full DB snapshot from OM.
>   > Add 100 keys to OM
>   > Verify Recon picks up the new keys using the delta updates mechanism.
>   > Verify OM DB seq number == Recon's OM DB snapshot's seq number
> * *Test 2* - *Verify Recon restart does not cause issues with the OM DB 
> syncing.*
>> Startup Mini Ozone cluster (with Recon).
>> Add 100 keys to OM
>> Verify Recon picks up the new keys.
>> Stop Recon Server
>> Add 5 keys to OM.
>> Start Recon Server
>> Verify that Recon Server does not request full snapshot from OM (since 
> only a small 
>number of keys have been added, and hence Recon should be able to get 
> the 
>updates alone)
>> Verify OM DB seq number == Recon's OM DB snapshot's seq number
> *Note* : This exercise might expose a few bugs in Recon-OM integration which 
> is perfectly normal and is the exact reason why we want these tests to be 
> written. Please file JIRAs for any major issues encountered and link them 
> here. Minor issues can hopefully be fixed as part of this effort. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14961) Prevent ZKFC changing Observer Namenode state

2019-11-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978734#comment-16978734
 ] 

Ayush Saxena commented on HDFS-14961:
-

[~elgoiri] [~vinayakumarb] could you take a look? :)

> Prevent ZKFC changing Observer Namenode state
> -
>
> Key: HDFS-14961
> URL: https://issues.apache.org/jira/browse/HDFS-14961
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14961-01.patch, HDFS-14961-02.patch, 
> ZKFC-TEST-14961.patch
>
>
> HDFS-14130 made ZKFC aware of the Observer Namenode and hence allows ZKFC to 
> run along with the Observer node.
> The Observer Namenode isn't supposed to be part of the ZKFC election process.
> But if the Namenode was part of the election before turning into an Observer 
> via the transitionToObserver command, the ZKFC still sends instructions to the 
> Namenode as a result of its previous participation and sometimes changes the 
> state of the Observer to Standby.
> This is also the reason for the failure in TestDFSZKFailoverController.
> TestDFSZKFailoverController has been consistently failing with a timeout 
> waiting in testManualFailoverWithDFSHAAdmin(), in particular 
> {{waitForHAState(1, HAServiceState.OBSERVER);}}.
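A conceptual sketch of the guard being discussed (not the actual ZKFC code; the 
class and method names are hypothetical) would simply short-circuit state-change 
instructions when the target node reports OBSERVER:
{code:java}
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

public final class ObserverGuardSketch {
  /** Hypothetical check: ZKFC should leave Observer nodes alone. */
  public static boolean shouldSkipTransition(HAServiceState current) {
    return current == HAServiceState.OBSERVER;
  }
}
{code}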



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14924) RenameSnapshot not updating new modification time

2019-11-20 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978730#comment-16978730
 ] 

Hadoop QA commented on HDFS-14924:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  0s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch 
generated 1 new + 622 unchanged - 0 fixed = 623 total (was 622) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
14s{color} | {color:green} The patch generated 0 new + 104 unchanged - 132 
fixed = 104 total (was 236) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 15s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14924 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986356/HDFS-14924.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  shellcheck  shelldocs  xml  |
| uname | Linux 9d5f716b013f 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess

[jira] [Updated] (HDDS-2522) Fix TestSecureOzoneCluster

2019-11-20 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2522:

Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~adoroszlai] for reporting and fixing the issue, thanks [~xyao] for 
reviews.

> Fix TestSecureOzoneCluster
> --
>
> Key: HDDS-2522
> URL: https://issues.apache.org/jira/browse/HDDS-2522
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> TestSecureOzoneCluster is failing with {{failure to login}}.
> {code:title=https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2291-5997d/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt}
> ---
> Test set: org.apache.hadoop.ozone.TestSecureOzoneCluster
> ---
> Tests run: 10, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 23.937 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 2.474 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException: 
> failure to login: for principal: 
> scm/pr-hdds-2291-5997d-4279494...@example.com from keytab 
> /workdir/hadoop-ozone/integration-test/target/test-dir/TestSecureOzoneCluster/scm.keytab
>  javax.security.auth.login.LoginException: Unable to obtain password from user
>   at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1215)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1008)
>   at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:315)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.loginAsSCMUser(StorageContainerManager.java:508)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:254)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:212)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:600)
>   at 
> org.apache.hadoop.hdds.scm.HddsTestUtils.getScm(HddsTestUtils.java:91)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:299)
> {code}
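For reference, the login that fails above follows the standard UGI keytab 
pattern; a minimal sketch, with placeholder principal and keytab path:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public final class KeytabLoginSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);
    // Throws KerberosAuthException ("failure to login") when the keytab is
    // missing or lacks the principal's key, as in the stack trace above.
    UserGroupInformation.loginUserFromKeytab(
        "scm/host.example.com@EXAMPLE.COM", // placeholder principal
        "/path/to/scm.keytab");             // placeholder keytab path
  }
}
{code}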



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2522) Fix TestSecureOzoneCluster

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2522?focusedWorklogId=347001&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347001
 ]

ASF GitHub Bot logged work on HDDS-2522:


Author: ASF GitHub Bot
Created on: 20/Nov/19 21:00
Start Date: 20/Nov/19 21:00
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #207: 
HDDS-2522. Fix TestSecureOzoneCluster
URL: https://github.com/apache/hadoop-ozone/pull/207
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347001)
Time Spent: 20m  (was: 10m)

> Fix TestSecureOzoneCluster
> --
>
> Key: HDDS-2522
> URL: https://issues.apache.org/jira/browse/HDDS-2522
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> TestSecureOzoneCluster is failing with {{failure to login}}.
> {code:title=https://github.com/elek/ozone-ci-03/blob/master/pr/pr-hdds-2291-5997d/integration/hadoop-ozone/integration-test/org.apache.hadoop.ozone.TestSecureOzoneCluster.txt}
> ---
> Test set: org.apache.hadoop.ozone.TestSecureOzoneCluster
> ---
> Tests run: 10, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 23.937 s <<< 
> FAILURE! - in org.apache.hadoop.ozone.TestSecureOzoneCluster
> testSCMSecurityProtocol(org.apache.hadoop.ozone.TestSecureOzoneCluster)  Time 
> elapsed: 2.474 s  <<< ERROR!
> org.apache.hadoop.security.KerberosAuthException: 
> failure to login: for principal: 
> scm/pr-hdds-2291-5997d-4279494...@example.com from keytab 
> /workdir/hadoop-ozone/integration-test/target/test-dir/TestSecureOzoneCluster/scm.keytab
>  javax.security.auth.login.LoginException: Unable to obtain password from user
>   at 
> org.apache.hadoop.security.UserGroupInformation.doSubjectLogin(UserGroupInformation.java:1847)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1215)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:1008)
>   at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:315)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.loginAsSCMUser(StorageContainerManager.java:508)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:254)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.<init>(StorageContainerManager.java:212)
>   at 
> org.apache.hadoop.hdds.scm.server.StorageContainerManager.createSCM(StorageContainerManager.java:600)
>   at 
> org.apache.hadoop.hdds.scm.HddsTestUtils.getScm(HddsTestUtils.java:91)
>   at 
> org.apache.hadoop.ozone.TestSecureOzoneCluster.testSCMSecurityProtocol(TestSecureOzoneCluster.java:299)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2523) BufferPool.releaseBuffer may release a buffer different than the head of the list

2019-11-20 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2523:

Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

[~szetszwo] thanks for reporting the issue and sharing debug information.

[~adoroszlai] thanks for fixing the issue

> BufferPool.releaseBuffer may release a buffer different than the head of the 
> list
> -
>
> Key: HDDS-2523
> URL: https://issues.apache.org/jira/browse/HDDS-2523
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Tsz-wo Sze
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
> Attachments: a.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code}
> //BufferPool
>   public void releaseBuffer(ByteBuffer byteBuffer) {
> // always remove from head of the list and append at last
> ByteBuffer buffer = bufferList.remove(0);
> // Ensure the buffer to be removed is always at the head of the list.
> Preconditions.checkArgument(buffer.equals(byteBuffer));
> buffer.clear();
> bufferList.add(buffer);
> Preconditions.checkArgument(currentBufferIndex >= 0);
> currentBufferIndex--;
>   }
> {code}
> In the code above, it expects buffer and byteBuffer to be the same object, 
> i.e. buffer == byteBuffer. However, the precondition checks 
> buffer.equals(byteBuffer). Unfortunately, both buffer and byteBuffer have 
> remaining() == 0, so equals(..) returns true and the precondition does not 
> catch the bug.
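A minimal standalone demonstration of the {{equals()}} trap (plain java.nio, 
not Ozone code):
{code:java}
import java.nio.ByteBuffer;

public class BufferEqualsDemo {
  public static void main(String[] args) {
    ByteBuffer a = ByteBuffer.allocate(8);
    ByteBuffer b = ByteBuffer.allocate(16);
    a.position(a.limit()); // remaining() == 0
    b.position(b.limit()); // remaining() == 0
    // ByteBuffer.equals() compares only the remaining elements, so two
    // distinct, fully-consumed buffers compare equal.
    System.out.println(a.equals(b)); // true
    System.out.println(a == b);      // false
  }
}
{code}
An identity check ({{buffer == byteBuffer}}) would have caught the bug.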



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2523) BufferPool.releaseBuffer may release a buffer different than the head of the list

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2523?focusedWorklogId=346998&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346998
 ]

ASF GitHub Bot logged work on HDDS-2523:


Author: ASF GitHub Bot
Created on: 20/Nov/19 20:48
Start Date: 20/Nov/19 20:48
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #232: 
HDDS-2523. BufferPool.releaseBuffer may release a buffer different than the 
head of the list
URL: https://github.com/apache/hadoop-ozone/pull/232
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 346998)
Time Spent: 20m  (was: 10m)

> BufferPool.releaseBuffer may release a buffer different than the head of the 
> list
> -
>
> Key: HDDS-2523
> URL: https://issues.apache.org/jira/browse/HDDS-2523
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Tsz-wo Sze
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Attachments: a.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code}
> //BufferPool
>   public void releaseBuffer(ByteBuffer byteBuffer) {
> // always remove from head of the list and append at last
> ByteBuffer buffer = bufferList.remove(0);
> // Ensure the buffer to be removed is always at the head of the list.
> Preconditions.checkArgument(buffer.equals(byteBuffer));
> buffer.clear();
> bufferList.add(buffer);
> Preconditions.checkArgument(currentBufferIndex >= 0);
> currentBufferIndex--;
>   }
> {code}
> In the code above, it expects buffer and byteBuffer to be the same object, 
> i.e. buffer == byteBuffer. However, the precondition checks 
> buffer.equals(byteBuffer). Unfortunately, both buffer and byteBuffer have 
> remaining() == 0, so equals(..) returns true and the precondition does not 
> catch the bug.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2588) Consolidate compose environments

2019-11-20 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2588:
---
Status: Patch Available  (was: In Progress)

> Consolidate compose environments
> 
>
> Key: HDDS-2588
> URL: https://issues.apache.org/jira/browse/HDDS-2588
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are a few slightly different sample docker compose environments: ozone, 
> ozoneperf, ozones3, ozone-recon. This issue proposes to merge these 4 by 
> minor additions to ozoneperf:
>  # add {{recon}} service from {{ozone-recon}}
>  # run GDPR and S3 tests
>  # expose datanode web port (e.g. for profiling)
> Plus: also run ozone-shell test (from basic suite), which is currently run 
> only in ozonesecure
> I also propose to rename {{ozoneperf}} to {{ozone}} for simplicity.
> Consolidating these 4 environments would slightly reduce both code 
> duplication and the time needed for acceptance tests.
> CC [~elek]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2591) No tailMap needed for startIndex 0 in ContainerSet#listContainer

2019-11-20 Thread Attila Doroszlai (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978713#comment-16978713
 ] 

Attila Doroszlai commented on HDDS-2591:


Actually, the method is only used by tests.  [~bharat], do you think it can be 
removed, or is there a plan to use it later?

> No tailMap needed for startIndex 0 in ContainerSet#listContainer
> 
>
> Key: HDDS-2591
> URL: https://issues.apache.org/jira/browse/HDDS-2591
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>
> {{ContainerSet#listContainer}} has this code:
> {code:title=https://github.com/apache/hadoop-ozone/blob/3c334f6a7b344e0e5f52fec95071c369286cfdcb/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java#L198}
> map = containerMap.tailMap(containerMap.firstKey(), true);
> {code}
> This is equivalent to:
> {code}
> map = containerMap;
> {code}
> since {{tailMap}} is a sub-map with all keys larger than or equal to 
> ({{inclusive=true}}) {{firstKey}}, which is the lowest key in the map.  So it 
> is a sub-map with all keys, i.e. the whole map.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2591) No tailMap needed for startIndex 0 in ContainerSet#listContainer

2019-11-20 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2591:
--

 Summary: No tailMap needed for startIndex 0 in 
ContainerSet#listContainer
 Key: HDDS-2591
 URL: https://issues.apache.org/jira/browse/HDDS-2591
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


{{ContainerSet#listContainer}} has this code:

{code:title=https://github.com/apache/hadoop-ozone/blob/3c334f6a7b344e0e5f52fec95071c369286cfdcb/hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerSet.java#L198}
map = containerMap.tailMap(containerMap.firstKey(), true);
{code}

This is equivalent to:

{code}
map = containerMap;
{code}

since {{tailMap}} is a sub-map with all keys larger than or equal to 
({{inclusive=true}}) {{firstKey}}, which is the lowest key in the map.  So it 
is a sub-map with all keys, i.e. the whole map.
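A quick standalone check of the equivalence (using a {{ConcurrentSkipListMap}} 
for illustration; this is not the Ozone code itself):
{code:java}
import java.util.concurrent.ConcurrentSkipListMap;

public class TailMapDemo {
  public static void main(String[] args) {
    ConcurrentSkipListMap<Long, String> map = new ConcurrentSkipListMap<>();
    map.put(1L, "a");
    map.put(2L, "b");
    map.put(3L, "c");
    // tailMap from the lowest key, inclusive, is a view of the whole map.
    System.out.println(map.tailMap(map.firstKey(), true).equals(map)); // true
  }
}
{code}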



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14996) RBF: GetFileStatus fails for directory with EC policy set in case of multiple destinations

2019-11-20 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16978709#comment-16978709
 ] 

Ayush Saxena commented on HDFS-14996:
-

Test failures seem unrelated.
Please review!

> RBF: GetFileStatus fails for directory with EC policy set in case of multiple 
> destinations 
> ---
>
> Key: HDFS-14996
> URL: https://issues.apache.org/jira/browse/HDFS-14996
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec, rbf
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14996-01.patch, HDFS-14996-02.patch, 
> HDFS-14996-03.patch
>
>
> In the case of multiple destinations for one mount following a PathAll-type 
> order, getting the FileStatus fails if an EC policy is set on the directory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2590) Integration tests for Recon with Ozone Manager.

2019-11-20 Thread Aravindan Vijayan (Jira)
Aravindan Vijayan created HDDS-2590:
---

 Summary: Integration tests for Recon with Ozone Manager.
 Key: HDDS-2590
 URL: https://issues.apache.org/jira/browse/HDDS-2590
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Recon
Reporter: Aravindan Vijayan
 Fix For: 0.5.0


Currently, Recon has only unit tests. We need to add the following integration 
tests to make sure there are no regressions or contract breakage with Ozone 
Manager. 

The first step would be to add Recon as a new component to Mini Ozone cluster.

* *Test 1* - Verify Recon can get full snapshot and subsequent delta updates 
from Ozone Manager on startup.
  > Start up a Mini Ozone cluster (with Recon) with a few keys in OM.
  > Verify Recon gets full DB snapshot from OM.
  > Add 100 keys to OM
  > Verify Recon picks up the new keys using the delta updates mechanism.
  > Verify OM DB seq number == Recon's OM DB snapshot's seq number

* *Test 2* - Verify Recon restart does not cause issues with the OM DB syncing.
   > Startup Mini Ozone cluster (with Recon).
   > Add 100 keys to OM
   > Verify Recon picks up the new keys.
   > Stop Recon Server
   > Add 5 keys to OM.
   > Start Recon Server
   > Verify that Recon Server does not request full snapshot from OM (since 
only a small 
   number of keys have been added, and hence Recon should be able to get 
the 
   updates alone)
   > Verify OM DB seq number == Recon's OM DB snapshot's seq number

*Note* : This exercise might expose a few bugs in Recon-OM integration which is 
perfectly normal and is the exact reason why we want these tests to be written. 
Please file JIRAs for any major issues encountered and link them here. Minor 
issues can hopefully be fixed as part of this effort. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2590) Integration tests for Recon with Ozone Manager.

2019-11-20 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-2590:

Description: 
Currently, Recon has only unit tests. We need to add the following integration 
tests to make sure there are no regressions or contract breakage with Ozone 
Manager. 

The first step would be to add Recon as a new component to Mini Ozone cluster.

* *Test 1* - *Verify Recon can get full snapshot and subsequent delta updates 
from Ozone Manager on startup.*
  > Start up a Mini Ozone cluster (with Recon) with a few keys in OM.
  > Verify Recon gets full DB snapshot from OM.
  > Add 100 keys to OM
  > Verify Recon picks up the new keys using the delta updates mechanism.
  > Verify OM DB seq number == Recon's OM DB snapshot's seq number

* *Test 2* - *Verify Recon restart does not cause issues with the OM DB 
syncing.*
   > Startup Mini Ozone cluster (with Recon).
   > Add 100 keys to OM
   > Verify Recon picks up the new keys.
   > Stop Recon Server
   > Add 5 keys to OM.
   > Start Recon Server
   > Verify that Recon Server does not request full snapshot from OM (since 
only a small 
   number of keys have been added, and hence Recon should be able to get 
the 
   updates alone)
   > Verify OM DB seq number == Recon's OM DB snapshot's seq number

*Note* : This exercise might expose a few bugs in Recon-OM integration which is 
perfectly normal and is the exact reason why we want these tests to be written. 
Please file JIRAs for any major issues encountered and link them here. Minor 
issues can hopefully be fixed as part of this effort. 

  was:
Currently, Recon has only unit tests. We need to add the following integration 
tests to make sure there are no regressions or contract breakage with Ozone 
Manager. 

The first step would be to add Recon as a new component to Mini Ozone cluster.

* *Test 1* - Verify Recon can get full snapshot and subsequent delta updates 
from Ozone Manager on startup.
  > Start up a Mini Ozone cluster (with Recon) with a few keys in OM.
  > Verify Recon gets full DB snapshot from OM.
  > Add 100 keys to OM
  > Verify Recon picks up the new keys using the delta updates mechanism.
  > Verify OM DB seq number == Recon's OM DB snapshot's seq number

* *Test 2* - Verify Recon restart does not cause issues with the OM DB syncing.
   > Startup Mini Ozone cluster (with Recon).
   > Add 100 keys to OM
   > Verify Recon picks up the new keys.
   > Stop Recon Server
   > Add 5 keys to OM.
   > Start Recon Server
   > Verify that Recon Server does not request full snapshot from OM (since 
only a small 
   number of keys have been added, and hence Recon should be able to get 
the 
   updates alone)
   > Verify OM DB seq number == Recon's OM DB snapshot's seq number

*Note* : This exercise might expose a few bugs in Recon-OM integration which is 
perfectly normal and is the exact reason why we want these tests to be written. 
Please file JIRAs for any major issues encountered and link them here. Minor 
issues can hopefully be fixed as part of this effort. 


> Integration tests for Recon with Ozone Manager.
> ---
>
> Key: HDDS-2590
> URL: https://issues.apache.org/jira/browse/HDDS-2590
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> Currently, Recon has only unit tests. We need to add the following 
> integration tests to make sure there are no regressions or contract breakage 
> with Ozone Manager. 
> The first step would be to add Recon as a new component to Mini Ozone cluster.
> * *Test 1* - *Verify Recon can get full snapshot and subsequent delta updates 
> from Ozone Manager on startup.*
>   > Start up a Mini Ozone cluster (with Recon) with a few keys in OM.
>   > Verify Recon gets full DB snapshot from OM.
>   > Add 100 keys to OM
>   > Verify Recon picks up the new keys using the delta updates mechanism.
>   > Verify OM DB seq number == Recon's OM DB snapshot's seq number
> * *Test 2* - *Verify Recon restart does not cause issues with the OM DB 
> syncing.*
>> Startup Mini Ozone cluster (with Recon).
>> Add 100 keys to OM
>> Verify Recon picks up the new keys.
>> Stop Recon Server
>> Add 5 keys to OM.
>> Start Recon Server
>> Verify that Recon Server does not request full snapshot from OM (since 
> only a small 
>number of keys have been added, and hence Recon should be able to get 
> the 
>updates alone)
>> Verify OM DB seq number == Recon's OM DB snapshot's seq number
> *Note* : This exercise might expose a few bugs in Recon-OM integration which 
> is perfectly normal and is the exact reason why we want these tests to be 
> written. Please file JIRAs for any major issues encountered and link them 
> here. Minor issues can hopefully be fixed as part of this effort.

[jira] [Work logged] (HDDS-2588) Consolidate compose environments

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2588?focusedWorklogId=346962&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346962
 ]

ASF GitHub Bot logged work on HDDS-2588:


Author: ASF GitHub Bot
Created on: 20/Nov/19 19:36
Start Date: 20/Nov/19 19:36
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #238: HDDS-2588. 
Consolidate compose environments
URL: https://github.com/apache/hadoop-ozone/pull/238
 
 
   ## What changes were proposed in this pull request?
   
   There are a few slightly different sample docker compose environments: 
`ozone`, `ozoneperf`, `ozones3`, `ozone-recon`.  This change proposes to merge 
these 4 by minor additions to `ozoneperf`:
   
   1. add `recon` service from `ozone-recon`
   2. run GDPR and S3 tests
   3. expose datanode web port (e.g. for profiling)
   
   Plus: also run `ozone-shell` test (from `basic` suite), which is currently 
run only in `ozonesecure`
   
   I also propose to rename `ozoneperf` to `ozone` for simplicity.
   
   Consolidating these 4 environments would slightly reduce both code 
duplication and the time needed for acceptance tests.
   
   https://issues.apache.org/jira/browse/HDDS-2588
   
   ## How was this patch tested?
   
   Ran acceptance test in `ozone` dir.  Generated keys using freon, verified 
that Jaeger, Prometheus, Grafana reflect the operations.
   
   Clean CI in private branch.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 346962)
Remaining Estimate: 0h
Time Spent: 10m

> Consolidate compose environments
> 
>
> Key: HDDS-2588
> URL: https://issues.apache.org/jira/browse/HDDS-2588
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There are a few slightly different sample docker compose environments: ozone, 
> ozoneperf, ozones3, ozone-recon. This issue proposes to merge these 4 by 
> minor additions to ozoneperf:
>  # add {{recon}} service from {{ozone-recon}}
>  # run GDPR and S3 tests
>  # expose datanode web port (e.g. for profiling)
> Plus: also run ozone-shell test (from basic suite), which is currently run 
> only in ozonesecure
> I also propose to rename {{ozoneperf}} to {{ozone}} for simplicity.
> Consolidating these 4 environments would slightly reduce both code 
> duplication and the time needed for acceptance tests.
> CC [~elek]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2588) Consolidate compose environments

2019-11-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2588:
-
Labels: pull-request-available  (was: )

> Consolidate compose environments
> 
>
> Key: HDDS-2588
> URL: https://issues.apache.org/jira/browse/HDDS-2588
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> There are a few slightly different sample docker compose environments: ozone, 
> ozoneperf, ozones3, ozone-recon. This issue proposes to merge these 4 by 
> minor additions to ozoneperf:
>  # add {{recon}} service from {{ozone-recon}}
>  # run GDPR and S3 tests
>  # expose datanode web port (e.g. for profiling)
> Plus: also run ozone-shell test (from basic suite), which is currently run 
> only in ozonesecure
> I also propose to rename {{ozoneperf}} to {{ozone}} for simplicity.
> Consolidating these 4 environments would slightly reduce both code 
> duplication and the time needed for acceptance tests.
> CC [~elek]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-2588) Consolidate compose environments

2019-11-20 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-2588 started by Attila Doroszlai.
--
> Consolidate compose environments
> 
>
> Key: HDDS-2588
> URL: https://issues.apache.org/jira/browse/HDDS-2588
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: docker
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> There are a few slightly different sample docker compose environments: ozone, 
> ozoneperf, ozones3, ozone-recon. This issue proposes to merge these 4 by 
> minor additions to ozoneperf:
>  # add {{recon}} service from {{ozone-recon}}
>  # run GDPR and S3 tests
>  # expose datanode web port (e.g. for profiling)
> Plus: also run ozone-shell test (from basic suite), which is currently run 
> only in ozonesecure
> I also propose to rename {{ozoneperf}} to {{ozone}} for simplicity.
> Consolidating these 4 environments would slightly reduce both code 
> duplication and the time needed for acceptance tests.
> CC [~elek]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


