[jira] [Commented] (HDFS-11716) Ozone: SCM: CLI: Revisit delete container API

2017-04-30 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990593#comment-15990593
 ] 

Yuanbo Liu commented on HDFS-11716:
---

{quote}
Because if a container is open and we try to delete without force option, that 
seems to work. 
{quote}
The default value of force delete is false.

> Ozone: SCM: CLI: Revisit delete container API
> -
>
> Key: HDFS-11716
> URL: https://issues.apache.org/jira/browse/HDFS-11716
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> The current delete container API can possibly run into an inconsistent 
> state. SCM maintains a mapping of containers to nodes, while the datanode 
> maintains the actual container data. When deleting a container, we need to 
> make sure the db is removed and the mapping in SCM also gets updated. What 
> if the datanode fails to remove the data for a container, do we still update 
> the mapping? We need to revisit the implementation and get these issues 
> addressed. See more 
> discussion 
> [here|https://issues.apache.org/jira/browse/HDFS-11675?focusedCommentId=15987798=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15987798].
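The consistency concern above (datanode data vs. SCM's container-to-nodes mapping) can be illustrated with a delete that only drops the SCM mapping after the datanode-side delete succeeds. This is a hypothetical sketch of one possible ordering, not the actual SCM implementation; all names below are invented.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of SCM's container-to-nodes mapping.
class ScmMapping {
    private final Map<String, String> containerToNodes = new HashMap<>();

    void put(String container, String nodes) { containerToNodes.put(container, nodes); }
    boolean has(String container) { return containerToNodes.containsKey(container); }

    // Delete the datanode-side data first; only drop the SCM mapping once that
    // succeeds, so a failed datanode delete leaves the mapping (and the
    // container) intact and retryable.
    boolean delete(String container,
                   java.util.function.Predicate<String> datanodeDelete) {
        if (!datanodeDelete.test(container)) {
            return false; // datanode failed: keep mapping so we can retry
        }
        containerToNodes.remove(container);
        return true;
    }
}
```

With this ordering, a datanode failure never strands a mapping-less container; the worst case is a stale mapping that a retry can clean up.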



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-11695) [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.

2017-04-30 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990575#comment-15990575
 ] 

Yuanbo Liu commented on HDFS-11695:
---

[~surendrasingh] Thanks for your patch.
Overall it looks good to me. Two comments:
1. Would you mind writing the test cases in the SPS test JIRA instead of 
TestStoragePolicyCommands.java?
2. Fix the checkstyle issue.
Thanks again for your careful finding.

> [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.
> 
>
> Key: HDFS-11695
> URL: https://issues.apache.org/jira/browse/HDFS-11695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Attachments: fsimage.xml, HDFS-11695-HDFS-10285.001.patch
>
>
> {noformat}
> 2017-04-23 13:27:51,971 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.io.IOException: Cannot request to call satisfy storage policy on path 
> /ssl, as this file/dir was already called for satisfying storage policy.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSatisfyStoragePolicy(FSDirAttrOp.java:511)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirXAttrOp.unprotectedSetXAttrs(FSDirXAttrOp.java:284)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:918)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:241)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:150)
> {noformat}






[jira] [Updated] (HDFS-9005) Provide configuration support for upgrade domain

2017-04-30 Thread Ming Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ming Ma updated HDFS-9005:
--
Attachment: HDFS-9005.branch-2.8.001.patch

Per the discussion in the umbrella jira, we want the feature to be in 2.8. 
Here is the patch for branch-2.8. It required some manual effort. All HDFS 
tests passed locally.

> Provide configuration support for upgrade domain
> 
>
> Key: HDFS-9005
> URL: https://issues.apache.org/jira/browse/HDFS-9005
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Ming Ma
>Assignee: Ming Ma
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-9005-2.patch, HDFS-9005-3.patch, HDFS-9005-4.patch, 
> HDFS-9005.branch-2.8.001.patch, HDFS-9005.patch
>
>
> As part of the upgrade domain feature, we need to provide a mechanism to 
> specify upgrade domain for each datanode. One way to accomplish that is to 
> allow admins specify an upgrade domain script that takes DN ip or hostname as 
> input and return the upgrade domain. Then namenode will use it at run time to 
> set {{DatanodeInfo}}'s upgrade domain string. The configuration can be 
> something like:
> {noformat}
> <property>
>   <name>dfs.namenode.upgrade.domain.script.file.name</name>
>   <value>/etc/hadoop/conf/upgrade-domain.sh</value>
> </property>
> {noformat}
> just like the topology script.
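As an illustration of the script mechanism described above, here is a hedged sketch of how a namenode-side resolver might invoke such a script with a datanode hostname and read the upgrade domain from its stdout. The class name and exec details are assumptions for illustration, not Hadoop's actual implementation.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical resolver: runs `script <dnHost>` and treats the first line of
// stdout as the upgrade domain for that datanode.
class UpgradeDomainResolver {
    static String resolve(List<String> scriptCmd, String dnHost) throws Exception {
        List<String> cmd = new ArrayList<>(scriptCmd);
        cmd.add(dnHost); // the DN ip or hostname is passed as the last argument
        Process p = new ProcessBuilder(cmd).start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String domain = r.readLine();
            p.waitFor();
            return domain == null ? null : domain.trim();
        }
    }
}
```

A trivial usage example would pass `echo` as the "script", which simply prints its argument back as the domain.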






[jira] [Commented] (HDFS-7541) Upgrade Domains in HDFS

2017-04-30 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990559#comment-15990559
 ] 

Ming Ma commented on HDFS-7541:
---

Sure I can backport HDFS-9005, HDFS-9016 and HDFS-9922 to 2.8. Which 2.8 
release do we want, 2.8.1 or 2.8.2? Pushing the feature to 2.7 requires much 
more work though. Regarding the production quality, yes it has been pretty 
reliable. The only feature we don't use in our production is HDFS-9005. We used 
the script-based configuration approach while the feature was developed and 
tested, and haven't spent time changing the configuration mechanism.

> Upgrade Domains in HDFS
> ---
>
> Key: HDFS-7541
> URL: https://issues.apache.org/jira/browse/HDFS-7541
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Ming Ma
>Assignee: Kihwal Lee
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HDFS-7541-2.patch, HDFS-7541.patch, 
> SupportforfastHDFSdatanoderollingupgrade.pdf, UpgradeDomains_design_v2.pdf, 
> UpgradeDomains_Design_v3.pdf
>
>
> Current HDFS DN rolling upgrade step requires sequential DN restart to 
> minimize the impact on data availability and read/write operations. The side 
> effect is longer upgrade duration for large clusters. This might be 
> acceptable for DN JVM quick restart to update hadoop code/configuration. 
> However, for OS upgrade that requires machine reboot, the overall upgrade 
> duration will be too long if we continue to do sequential DN rolling restart.
>  






[jira] [Commented] (HDFS-11529) Add libHDFS API to return last exception

2017-04-30 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990538#comment-15990538
 ] 

John Zhuge commented on HDFS-11529:
---

Thanks [~aw] for reporting the issue HDFS-11724. I posted a patch there.

> Add libHDFS API to return last exception
> 
>
> Key: HDFS-11529
> URL: https://issues.apache.org/jira/browse/HDFS-11529
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: libhdfs
>Affects Versions: 2.6.0
>Reporter: Sailesh Mukil
>Assignee: Sailesh Mukil
>Priority: Critical
>  Labels: errorhandling, libhdfs
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-11529.000.patch, HDFS-11529.001.patch, 
> HDFS-11529.002.patch, HDFS-11529.003.patch, HDFS-11529.004.patch, 
> HDFS-11529.005.patch, HDFS-11529.006.patch
>
>
> libHDFS uses a table to compare exceptions against and returns a 
> corresponding error code to the application in case of an error.
> However, this table is manually populated and is often forgotten about 
> when new exceptions are added.
> This causes libHDFS to return EINTERNAL (or Unknown Error(255)) whenever 
> these exceptions are hit. These are some examples of exceptions that have 
> been observed on an Error(255):
> org.apache.hadoop.ipc.StandbyException (Operation category WRITE is not 
> supported in state standby)
> java.io.EOFException: Cannot seek after EOF
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)
> It is of course not possible to have an error code for each and every type of 
> exception, so one suggestion of how this can be addressed is by having a call 
> such as hdfsGetLastException() that would return the last exception that a 
> libHDFS thread encountered. This way, an application may choose to call 
> hdfsGetLastException() if it receives EINTERNAL.
> We can make use of the Thread Local Storage to store this information. Also, 
> this makes sure that the current functionality is preserved.
> This is a follow up from HDFS-4997.
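The thread-local idea in the description can be illustrated with a small Java model. The real proposal targets libHDFS's C API, so this is only an analogy; the names below are hypothetical.

```java
// Hypothetical Java analogy of the proposed hdfsGetLastException(): each
// thread records the last exception it saw, and the caller can fetch it after
// an EINTERNAL-style failure without affecting other threads.
class LastError {
    private static final ThreadLocal<String> LAST = new ThreadLocal<>();

    // Called wherever an exception is mapped to an error code.
    static void record(Throwable t) { LAST.set(t.toString()); }

    // Analogous to the proposed hdfsGetLastException(); returns null if no
    // exception has been recorded on the calling thread.
    static String get() { return LAST.get(); }
}
```

Because the storage is per-thread, recording an error on one thread does not disturb the last-error state of any other, which is how the proposal preserves existing behavior.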






[jira] [Commented] (HDFS-11695) [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.

2017-04-30 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990197#comment-15990197
 ] 

Surendra Singh Lilhore commented on HDFS-11695:
---

bq. hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier

This test case is failing randomly. I raised HDFS-11726 to fix this.
The other failed test cases are not related to this patch. Please review.

> [SPS]: Namenode failed to start while loading SPS xAttrs from the edits log.
> 
>
> Key: HDFS-11695
> URL: https://issues.apache.org/jira/browse/HDFS-11695
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-10285
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Blocker
> Attachments: fsimage.xml, HDFS-11695-HDFS-10285.001.patch
>
>
> {noformat}
> 2017-04-23 13:27:51,971 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.io.IOException: Cannot request to call satisfy storage policy on path 
> /ssl, as this file/dir was already called for satisfying storage policy.
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.unprotectedSatisfyStoragePolicy(FSDirAttrOp.java:511)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSDirXAttrOp.unprotectedSetXAttrs(FSDirXAttrOp.java:284)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:918)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:241)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:150)
> {noformat}






[jira] [Created] (HDFS-11726) [SPS] : StoragePolicySatisfier should not select same storage type as source and destination in same datanode.

2017-04-30 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-11726:
-

 Summary: [SPS] : StoragePolicySatisfier should not select same 
storage type as source and destination in same datanode.
 Key: HDFS-11726
 URL: https://issues.apache.org/jira/browse/HDFS-11726
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: HDFS-10285
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


{code}
2017-04-30 16:12:28,569 [BlockMoverTask-0] INFO  
datanode.StoragePolicySatisfyWorker (Worker.java:moveBlock(248)) - Start moving 
block:blk_1073741826_1002 from src:127.0.0.1:41699 to destin:127.0.0.1:41699 to 
satisfy storageType, sourceStoragetype:ARCHIVE and destinStoragetype:ARCHIVE
{code}

{code}
2017-04-30 16:12:28,571 [DataXceiver for client /127.0.0.1:36428 [Replacing 
block BP-1409501412-127.0.1.1-1493548923222:blk_1073741826_1002 from 
6c7aa66e-a778-43d5-89f6-053d5f6b35bc]] INFO  datanode.DataNode 
(DataXceiver.java:replaceBlock(1202)) - opReplaceBlock 
BP-1409501412-127.0.1.1-1493548923222:blk_1073741826_1002 received exception 
org.apache.hadoop.hdfs.server.datanode.ReplicaAlreadyExistsException: Replica 
FinalizedReplica, blk_1073741826_1002, FINALIZED
  getNumBytes() = 1024
  getBytesOnDisk()  = 1024
  getVisibleLength()= 1024
  getVolume()   = 
/home/sachin/software/hadoop/HDFS-10285/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data7
  getBlockURI() = 
file:/home/sachin/software/hadoop/HDFS-10285/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data7/current/BP-1409501412-127.0.1.1-1493548923222/current/finalized/subdir0/subdir0/blk_1073741826
 already exists on storage ARCHIVE
{code}






[jira] [Comment Edited] (HDFS-11675) Ozone: SCM CLI: Implement delete container command

2017-04-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990151#comment-15990151
 ] 

Weiwei Yang edited comment on HDFS-11675 at 4/30/17 6:54 AM:
-

Hi [~xyao]

Thanks, they did output this info in the console. But you actually reminded me 
that we should get rid of printing output in the CLI via log4j; the SCM CLI 
has the ability to set customized print streams (that is currently used in UT, 
and is useful in future if we want to support a common option -f to redirect 
output to some files). I just uploaded the v5 patch to remove the {{LOG}} 
entries and replace them with {{OzoneCommandHandler#logout()}}, which prints 
output to a certain print stream. See more CLI tests in 
[^Container_create_del_command_tests]. Please let me know if it sounds good or 
not. Thank you.


was (Author: cheersyang):
Hi [~xyao]

Thanks, they did output this info in the console. But you actually reminded me 
that we should get rid of printing output in the CLI via log4j; the SCM CLI 
has the ability to set customized print streams (that is currently used in UT, 
and is useful in future if we want to support a common option -f to redirect 
output to some files). I just uploaded the v5 patch to remove the {{LOG}} 
entries and replace them with {{OzoneCommandHandler#logout()}}, which prints 
output to a certain print stream. Please let me know if it sounds good or not. 
Thank you.

> Ozone: SCM CLI: Implement delete container command
> --
>
> Key: HDFS-11675
> URL: https://issues.apache.org/jira/browse/HDFS-11675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: command-line
> Attachments: Container_create_del_command_tests, 
> HDFS-11675-HDFS-7240.001.patch, HDFS-11675-HDFS-7240.002.patch, 
> HDFS-11675-HDFS-7240.003.patch, HDFS-11675-HDFS-7240.004.patch, 
> HDFS-11675-HDFS-7240.005.patch
>
>
> Implement delete container
> {code}
> hdfs scm -container del <container name> -f
> {code}
> Deletes a container if it is empty. The -f option can be used to force 
> deletion of a non-empty container. If the specified container name does not 
> exist, a clear error message is printed.






[jira] [Commented] (HDFS-11725) Ozone: Revise create container CLI specification and implementation

2017-04-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990154#comment-15990154
 ] 

Weiwei Yang commented on HDFS-11725:


Hi [~anu], please let me know how to update the design doc in HDFS-11470. Maybe 
we could add this doc to an online editor? Thanks.

> Ozone: Revise create container CLI specification and implementation
> ---
>
> Key: HDFS-11725
> URL: https://issues.apache.org/jira/browse/HDFS-11725
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> Per [design 
> doc|https://issues.apache.org/jira/secure/attachment/12861478/storage-container-manager-cli-v002.pdf]
>  in HDFS-11470
> {noformat}
> hdfs scm -container create -p <pipeline>
> Notes : This command connects to SCM and creates a container. Once the 
> container is created in the SCM, the corresponding container is created at 
> the appropriate datanode. Optional -p allows the user to control which 
> pipeline to use while creating this container, this is strictly for debugging 
> and testing.
> {noformat}
> There are 2 problems with this design. 1st, it does not support specifying 
> a container name, which is quite useful for testing; 2nd, it supports an 
> optional option for the pipeline, which is not quite necessary right now 
> given SCM handles the creation of the pipelines; we might want to support 
> this later. So it is proposed to revise the CLI to
> {code}
> hdfs scm -container create -c <container name>
> {code}
> The {{-c}} option is *required*. On the backend it performs the following 
> steps
> # Given the container name, ask SCM where the container should be replicated 
> to. This returns a pipeline.
> # Communicate with each datanode in the pipeline to create the container.
> This jira is to track the work to update both the design doc as well as the 
> implementation.






[jira] [Created] (HDFS-11725) Ozone: Revise create container CLI specification and implementation

2017-04-30 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11725:
--

 Summary: Ozone: Revise create container CLI specification and 
implementation
 Key: HDFS-11725
 URL: https://issues.apache.org/jira/browse/HDFS-11725
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang


Per [design 
doc|https://issues.apache.org/jira/secure/attachment/12861478/storage-container-manager-cli-v002.pdf]
 in HDFS-11470

{noformat}
hdfs scm -container create -p <pipeline>

Notes : This command connects to SCM and creates a container. Once the 
container is created in the SCM, the corresponding container is created at the 
appropriate datanode. Optional -p allows the user to control which pipeline to 
use while creating this container, this is strictly for debugging and testing.
{noformat}

There are 2 problems with this design. 1st, it does not support specifying a 
container name, which is quite useful for testing; 2nd, it supports an 
optional option for the pipeline, which is not quite necessary right now given 
SCM handles the creation of the pipelines; we might want to support this 
later. So it is proposed to revise the CLI to

{code}
hdfs scm -container create -c <container name>
{code}

The {{-c}} option is *required*. On the backend it performs the following steps
# Given the container name, ask SCM where the container should be replicated 
to. This returns a pipeline.
# Communicate with each datanode in the pipeline to create the container.

This jira is to track the work to update both the design doc as well as the 
implementation.
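The two backend steps above can be sketched as follows; {{ScmClient}}, {{allocatePipeline}} and the other names are invented stand-ins for illustration, not the actual Ozone API.

```java
import java.util.List;

// Hypothetical stand-in for the SCM call that decides container placement.
interface ScmClient {
    // Step 1: given the container name, return the pipeline (list of datanodes).
    List<String> allocatePipeline(String containerName);
}

class ContainerCreator {
    // Returns how many datanodes the container was created on.
    static int create(ScmClient scm, String containerName) {
        List<String> pipeline = scm.allocatePipeline(containerName);
        int created = 0;
        // Step 2: contact each datanode in the pipeline to create the container.
        for (String dn : pipeline) {
            // A real implementation would issue a createContainer RPC to dn here.
            created++;
        }
        return created;
    }
}
```
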






[jira] [Commented] (HDFS-11716) Ozone: SCM: CLI: Revisit delete container API

2017-04-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990152#comment-15990152
 ] 

Weiwei Yang commented on HDFS-11716:


Found one more issue when testing the delete API: it only checks whether the 
container is open when the force option is used, in 
{{Dispatcher#handleDeleteContainer}}

{code}
if (forceDelete) {
  if (this.containerManager.isOpen(pipeline.getContainerName())) {
    throw new StorageContainerException("Attempting to force delete "
        + "an open container.", UNCLOSED_CONTAINER_IO);
  }
}
{code}

See more in HDFS-11581; that seems an incorrect check, because if a container 
is {{open}} and we try to delete it without the force option, that seems to 
work. Cc [~yuanbo], please take a look.
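A minimal, self-contained sketch of the guard this comment argues for: rejecting deletion of an open container whether or not the force option is passed. The class and method names below are hypothetical stand-ins, not the actual Ozone dispatcher code.

```java
// Hypothetical stand-in for the real Ozone StorageContainerException.
class StorageContainerException extends Exception {
    StorageContainerException(String msg) { super(msg); }
}

class DeleteGuard {
    // Corrected check: an open container is rejected regardless of forceDelete,
    // instead of only when forceDelete is true as in the snippet above.
    static void checkDelete(boolean containerOpen, boolean forceDelete)
            throws StorageContainerException {
        if (containerOpen) {
            throw new StorageContainerException(
                "Attempting to delete an open container.");
        }
        // A closed container's emptiness would still be checked elsewhere
        // unless forceDelete is set.
    }
}
```
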

> Ozone: SCM: CLI: Revisit delete container API
> -
>
> Key: HDFS-11716
> URL: https://issues.apache.org/jira/browse/HDFS-11716
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>
> The current delete container API can possibly run into an inconsistent 
> state. SCM maintains a mapping of containers to nodes, while the datanode 
> maintains the actual container data. When deleting a container, we need to 
> make sure the db is removed and the mapping in SCM also gets updated. What 
> if the datanode fails to remove the data for a container, do we still update 
> the mapping? We need to revisit the implementation and get these issues 
> addressed. See more 
> discussion 
> [here|https://issues.apache.org/jira/browse/HDFS-11675?focusedCommentId=15987798=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15987798].






[jira] [Commented] (HDFS-11675) Ozone: SCM CLI: Implement delete container command

2017-04-30 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990151#comment-15990151
 ] 

Weiwei Yang commented on HDFS-11675:


Hi [~xyao]

Thanks, they did output this info in the console. But you actually reminded me 
that we should get rid of printing output in the CLI via log4j; the SCM CLI 
has the ability to set customized print streams (that is currently used in UT, 
and is useful in future if we want to support a common option -f to redirect 
output to some files). I just uploaded the v5 patch to remove the {{LOG}} 
entries and replace them with {{OzoneCommandHandler#logout()}}, which prints 
output to a certain print stream. Please let me know if it sounds good or not. 
Thank you.

> Ozone: SCM CLI: Implement delete container command
> --
>
> Key: HDFS-11675
> URL: https://issues.apache.org/jira/browse/HDFS-11675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: command-line
> Attachments: Container_create_del_command_tests, 
> HDFS-11675-HDFS-7240.001.patch, HDFS-11675-HDFS-7240.002.patch, 
> HDFS-11675-HDFS-7240.003.patch, HDFS-11675-HDFS-7240.004.patch, 
> HDFS-11675-HDFS-7240.005.patch
>
>
> Implement delete container
> {code}
> hdfs scm -container del <container name> -f
> {code}
> Deletes a container if it is empty. The -f option can be used to force 
> deletion of a non-empty container. If the specified container name does not 
> exist, a clear error message is printed.






[jira] [Updated] (HDFS-11675) Ozone: SCM CLI: Implement delete container command

2017-04-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11675:
---
Attachment: Container_create_del_command_tests

> Ozone: SCM CLI: Implement delete container command
> --
>
> Key: HDFS-11675
> URL: https://issues.apache.org/jira/browse/HDFS-11675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: command-line
> Attachments: Container_create_del_command_tests, 
> HDFS-11675-HDFS-7240.001.patch, HDFS-11675-HDFS-7240.002.patch, 
> HDFS-11675-HDFS-7240.003.patch, HDFS-11675-HDFS-7240.004.patch, 
> HDFS-11675-HDFS-7240.005.patch
>
>
> Implement delete container
> {code}
> hdfs scm -container del <container name> -f
> {code}
> Deletes a container if it is empty. The -f option can be used to force 
> deletion of a non-empty container. If the specified container name does not 
> exist, a clear error message is printed.






[jira] [Updated] (HDFS-11675) Ozone: SCM CLI: Implement delete container command

2017-04-30 Thread Weiwei Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated HDFS-11675:
---
Attachment: HDFS-11675-HDFS-7240.005.patch

> Ozone: SCM CLI: Implement delete container command
> --
>
> Key: HDFS-11675
> URL: https://issues.apache.org/jira/browse/HDFS-11675
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ozone
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: command-line
> Attachments: HDFS-11675-HDFS-7240.001.patch, 
> HDFS-11675-HDFS-7240.002.patch, HDFS-11675-HDFS-7240.003.patch, 
> HDFS-11675-HDFS-7240.004.patch, HDFS-11675-HDFS-7240.005.patch
>
>
> Implement delete container
> {code}
> hdfs scm -container del <container name> -f
> {code}
> Deletes a container if it is empty. The -f option can be used to force 
> deletion of a non-empty container. If the specified container name does not 
> exist, a clear error message is printed.






[jira] [Commented] (HDFS-6984) In Hadoop 3, make FileStatus serialize itself via protobuf

2017-04-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15990141#comment-15990141
 ] 

Hadoop QA commented on HDFS-6984:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
38s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client in trunk has 2 
extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 10 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m  8s{color} 
| {color:red} root generated 71 new + 788 unchanged - 0 fixed = 859 total (was 
788) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  6s{color} | {color:orange} root: The patch generated 14 new + 764 unchanged 
- 24 fixed = 778 total (was 788) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-client generated 3 new 
+ 2 unchanged - 0 fixed = 5 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
19s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 46s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 25s{color} 
| {color:red} hadoop-hdfs-httpfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}161m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs |