[jira] [Updated] (HDFS-11100) Recursively deleting file protected by sticky bit should fail

2017-02-06 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11100:
--
Status: Patch Available  (was: In Progress)

> Recursively deleting file protected by sticky bit should fail
> -
>
> Key: HDFS-11100
> URL: https://issues.apache.org/jira/browse/HDFS-11100
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
>  Labels: permissions
> Attachments: HDFS-11100.001.patch, HDFS-11100.002.patch, 
> HDFS-11100.003.patch, HDFS-11100.004.patch, hdfs_cmds
>
>
> Recursively deleting a directory that contains files or directories protected 
> by the sticky bit should fail, but in HDFS it does not. In the case below, 
> {{/tmp/test/sticky_dir/f2}} is protected by the sticky bit, so recursively 
> deleting {{/tmp/test/sticky_dir}} should fail.
> {noformat}
> + hdfs dfs -ls -R /tmp/test
> drwxrwxrwt   - jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir
> -rwxrwxrwx   1 jzhuge supergroup  0 2016-11-03 18:08 
> /tmp/test/sticky_dir/f2
> + sudo -u hadoop hdfs dfs -rm -skipTrash /tmp/test/sticky_dir/f2
> rm: Permission denied by sticky bit: user=hadoop, 
> path="/tmp/test/sticky_dir/f2":jzhuge:supergroup:-rwxrwxrwx, 
> parent="/tmp/test/sticky_dir":jzhuge:supergroup:drwxrwxrwt
> + sudo -u hadoop hdfs dfs -rm -r -skipTrash /tmp/test/sticky_dir
> Deleted /tmp/test/sticky_dir
> {noformat}
> CentOS 6.4 behavior:
> {noformat}
> $ ls -lR /tmp/test
> /tmp/test: 
> total 4
> drwxrwxrwt 2 systest systest 4096 Nov  3 18:36 sbit
> /tmp/test/sbit:
> total 0
> -rw-rw-rw- 1 systest systest 0 Nov  2 13:45 f2
> $ sudo -u mapred rm -fr /tmp/test/sbit
> rm: cannot remove `/tmp/test/sbit/f2': Operation not permitted
> $ chmod -t /tmp/test/sbit
> $ sudo -u mapred rm -fr /tmp/test/sbit
> {noformat}
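For reference, the expected semantics can be sketched with a toy model of the POSIX sticky-bit rule (class and method names below are hypothetical illustrations, not HDFS code):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of POSIX sticky-bit delete semantics (hypothetical names; not HDFS code).
class StickyModel {
    static class Node {
        final String owner;
        final boolean sticky;                       // sticky bit set on a directory
        final List<Node> children = new ArrayList<>();
        Node(String owner, boolean sticky) { this.owner = owner; this.sticky = sticky; }
    }

    // A user may remove an entry from a sticky directory only if they own the
    // entry or the directory. A recursive delete must apply this check at every
    // level of the subtree, not only at the deletion root.
    static boolean canRecursiveDelete(String user, Node parent, Node node) {
        if (parent != null && parent.sticky
                && !user.equals(node.owner) && !user.equals(parent.owner)) {
            return false;                           // blocked by the parent's sticky bit
        }
        for (Node child : node.children) {
            if (!canRecursiveDelete(user, node, child)) {
                return false;                       // a protected descendant blocks the whole delete
            }
        }
        return true;
    }
}
```

In the reported scenario, the check at the {{f2}} level is what HDFS skips during a recursive delete.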



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-11100) Recursively deleting file protected by sticky bit should fail

2017-02-06 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-11100:
--
Attachment: HDFS-11100.004.patch

Patch 004
* Optimize performance by performing the expensive operations only right before 
throwing an exception







[jira] [Commented] (HDFS-7859) Erasure Coding: Persist erasure coding policies in NameNode

2017-02-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15855094#comment-15855094
 ] 

Zhe Zhang commented on HDFS-7859:
-

Thanks for the thoughts Andrew. I think HDFS-11314 is indeed necessary to 
filter out unsuitable policies. Other than "ID/name that we already use, or an 
ID/name we might want to hardcode later", what other validations do you have in 
mind?

From our current production perspective, the built-in EC policies are 
sufficient. We prefer a lower-complexity implementation, at least in the 
initial 3.0 GA. If we do decide to add pluggable EC policies in 3.0 GA, can we 
add an on/off config option for the entire pluggable logic and default it to off?

> Erasure Coding: Persist erasure coding policies in NameNode
> ---
>
> Key: HDFS-7859
> URL: https://issues.apache.org/jira/browse/HDFS-7859
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Xinwei Qin 
>Priority: Blocker
>  Labels: BB2015-05-TBR, hdfs-ec-3.0-must-do
> Attachments: HDFS-7859.001.patch, HDFS-7859.002.patch, 
> HDFS-7859.004.patch, HDFS-7859.005.patch, HDFS-7859.006.patch, 
> HDFS-7859.007.patch, HDFS-7859.008.patch, HDFS-7859.009.patch, 
> HDFS-7859-HDFS-7285.002.patch, HDFS-7859-HDFS-7285.002.patch, 
> HDFS-7859-HDFS-7285.003.patch
>
>
> In meetup discussion with [~zhz] and [~jingzhao], it's suggested that we 
> persist EC schemas in NameNode centrally and reliably, so that EC zones can 
> reference them by name efficiently.






[jira] [Assigned] (HDFS-11314) Validate client-provided EC schema on the NameNode

2017-02-06 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HDFS-11314:
-

Assignee: Chen Liang

> Validate client-provided EC schema on the NameNode
> --
>
> Key: HDFS-11314
> URL: https://issues.apache.org/jira/browse/HDFS-11314
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha1
>Reporter: Andrew Wang
>Assignee: Chen Liang
>Priority: Blocker
>  Labels: hdfs-ec-3.0-must-do
>
> Filing based on discussion in HDFS-8095. A user might specify a policy that 
> is not appropriate for the cluster, e.g. an RS(10,4) policy when the cluster 
> only has 10 nodes. The NN should only allow the client to choose from a 
> pre-approved list determined by the cluster administrator.
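A minimal sketch of the kind of width check such validation implies (hypothetical names; the actual validation set is still under discussion in this issue):

```java
// Hypothetical validation sketch (not from an actual patch): reject an EC
// policy whose total stripe width exceeds the number of live DataNodes,
// in addition to checking the administrator's pre-approved list.
class EcPolicyCheck {
    static boolean isSuitable(int dataUnits, int parityUnits, int liveDataNodes) {
        // e.g. RS(10,4) needs 14 distinct DataNodes to place a full stripe
        return dataUnits + parityUnits <= liveDataNodes;
    }
}
```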






[jira] [Commented] (HDFS-8498) Blocks can be committed with wrong size

2017-02-06 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-8498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15854906#comment-15854906
 ] 

Jitendra Nath Pandey commented on HDFS-8498:


+1, the patch looks good to me.

> Blocks can be committed with wrong size
> ---
>
> Key: HDFS-8498
> URL: https://issues.apache.org/jira/browse/HDFS-8498
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.5.0
>Reporter: Daryn Sharp
>Assignee: Jing Zhao
>Priority: Critical
> Attachments: HDFS-8498.000.patch, HDFS-8498.001.patch
>
>
> When an IBR for a UC block arrives, the NN updates the expected location's 
> block and replica state _only_ if it's on an unexpected storage for an 
> expected DN.  If it's for an expected storage, only the genstamp is updated.  
> When the block is committed, and the expected locations are verified, only 
> the genstamp is checked.  The size is not checked but it wasn't updated in 
> the expected locations anyway.
> A faulty client may misreport the size when committing the block, leaving the 
> block effectively corrupted. If the NN then issues replications, each received 
> IBR is considered corrupt, so the NN invalidates the block and immediately 
> issues another replication. The NN eventually realizes all the original 
> replicas are corrupt once full BRs are received from the original DNs.
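One way to picture the stricter commit-time check this implies (a hypothetical sketch, not the attached patch): compare the client's committed length against the lengths the expected replicas reported, rather than checking only the generation stamp.

```java
// Toy sketch (hypothetical names; not the actual patch): validate the
// client's committed block length against the lengths reported by the
// expected replicas in their incremental block reports.
class CommitCheck {
    static boolean lengthsAgree(long clientReportedLen, long[] replicaReportedLens) {
        for (long len : replicaReportedLens) {
            if (len != clientReportedLen) {
                return false;   // size mismatch: refuse the commit as possibly corrupt
            }
        }
        return true;
    }
}
```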






[jira] [Commented] (HDFS-10860) Switch HttpFS from Tomcat to Jetty

2017-02-06 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15854790#comment-15854790
 ] 

Xiao Chen commented on HDFS-10860:
--

Thanks for revving John! +1 pending jenkins.

> Switch HttpFS from Tomcat to Jetty
> --
>
> Key: HDFS-10860
> URL: https://issues.apache.org/jira/browse/HDFS-10860
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
> Attachments: HDFS-10860.001.patch, HDFS-10860.002.patch, 
> HDFS-10860.003.patch, HDFS-10860.004.patch, HDFS-10860.005.patch, 
> HDFS-10860.006.patch, HDFS-10860.007.patch, HDFS-10860.008.patch, 
> HDFS-10860.009.patch, HDFS-10860.010.patch, HDFS-10860.011.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are {{Servlet}} containers, so we 
> would not have to change client code much. Switching to {{JAX-RS}} would 
> require more work.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.






[jira] [Updated] (HDFS-10860) Switch HttpFS from Tomcat to Jetty

2017-02-06 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10860:
--
Attachment: HDFS-10860.011.patch

Patch 011
* Fix a typo in patch 010







[jira] [Updated] (HDFS-11311) HDFS fsck continues to report all blocks present when DataNode is restarted with empty data directories

2017-02-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-11311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Frimberger updated HDFS-11311:

Attachment: HDFS-11311-branch-3.0.0-alpha2.001.patch

The attached patch adds the suggested fix and a test to verify it. I decided 
not to test against {{fsck}}, because it's just a symptom and not the root 
cause. Therefore, I relocated the test under "DataNode" to show that removing 
BlockPools offline should work as expected.

> HDFS fsck continues to report all blocks present when DataNode is restarted 
> with empty data directories
> ---
>
> Key: HDFS-11311
> URL: https://issues.apache.org/jira/browse/HDFS-11311
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.7.3, 3.0.0-alpha1
>Reporter: André Frimberger
> Attachments: HDFS-11311-branch-3.0.0-alpha2.001.patch, 
> HDFS-11311.reproduce.patch
>
>
> During cluster maintenance, we had to change parameters of the underlying 
> disk filesystem and we stopped the DataNode, reformatted all of its data 
> directories and started the DataNode again in under 10 minutes with no data 
> and only the {{VERSION}} file present. Running fsck afterwards reports that 
> all blocks are fully replicated, which does not reflect the true state of 
> HDFS. If an administrator trusts {{fsck}} and continues to replace further 
> DataNodes, *data will be lost!*
> Steps to reproduce:
> 1. Shutdown DataNode
> 2. Remove all BlockPools from all data directories (only {{VERSION}} file is 
> present)
> 3. Startup DataNode in under 10.5 minutes
> 4. Run {{hdfs fsck /}}
> *Actual result:* Average replication is falsely shown as 3.0
> *Expected result:* Average replication factor is < 3.0
> *Workaround:* Trigger a block report with {{hdfs dfsadmin -triggerBlockReport 
> $dn_host:$ipc_port}}
> *Cause:* The first block report is handled differently by NameNode and only 
> added blocks are respected. This behaviour was introduced in HDFS-7980 for 
> performance reasons. But is applied too widely and in our case data can be 
> lost.
> *Fix:* We suggest using stricter conditions on applying 
> {{processFirstBlockReport}} in {{BlockManager:processReport()}}:
> Change
> {code}
> if (storageInfo.getBlockReportCount() == 0) {
>   // The first block report can be processed a lot more efficiently than
>   // ordinary block reports.  This shortens restart times.
>   processFirstBlockReport(storageInfo, newReport);
> } else {
>   invalidatedBlocks = processReport(storageInfo, newReport);
> }
> {code}
> to
> {code}
> if (storageInfo.getBlockReportCount() == 0
>     && storageInfo.getState() != State.FAILED
>     && newReport.getNumberOfBlocks() > 0) {
>   // The first block report can be processed a lot more efficiently than
>   // ordinary block reports.  This shortens restart times.
>   processFirstBlockReport(storageInfo, newReport);
> } else {
>   invalidatedBlocks = processReport(storageInfo, newReport);
> }
> {code}
> In case the DataNode reports no blocks for a data directory, it might be a 
> new DataNode or the data directory may have been emptied for whatever reason 
> (offline replacement of storage, reformatting of data disk, etc.). In either 
> case, the changes should be reflected in the output of {{fsck}} in less than 
> 6 hours to prevent data loss due to misleading output.






[jira] [Updated] (HDFS-11311) HDFS fsck continues to report all blocks present when DataNode is restarted with empty data directories

2017-02-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/HDFS-11311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

André Frimberger updated HDFS-11311:

Description: 
During cluster maintenance, we had to change parameters of the underlying disk 
filesystem and we stopped the DataNode, reformatted all of its data directories 
and started the DataNode again in under 10 minutes with no data and only the 
{{VERSION}} file present. Running fsck afterwards reports that all blocks are 
fully replicated, which does not reflect the true state of HDFS. If an 
administrator trusts {{fsck}} and continues to replace further DataNodes, *data 
will be lost!*

Steps to reproduce:
1. Shutdown DataNode
2. Remove all BlockPools from all data directories (only {{VERSION}} file is 
present)
3. Startup DataNode in under 10.5 minutes
4. Run {{hdfs fsck /}}

*Actual result:* Average replication is falsely shown as 3.0
*Expected result:* Average replication factor is < 3.0

*Workaround:* Trigger a block report with {{hdfs dfsadmin -triggerBlockReport 
$dn_host:$ipc_port}}

*Cause:* The first block report is handled differently by NameNode and only 
added blocks are respected. This behaviour was introduced in HDFS-7980 for 
performance reasons. But is applied too widely and in our case data can be lost.

*Fix:* We suggest using stricter conditions on applying 
{{processFirstBlockReport}} in {{BlockManager:processReport()}}:
Change
{code}
if (storageInfo.getBlockReportCount() == 0) {
  // The first block report can be processed a lot more efficiently than
  // ordinary block reports.  This shortens restart times.
  processFirstBlockReport(storageInfo, newReport);
} else {
  invalidatedBlocks = processReport(storageInfo, newReport);
}
{code}

to

{code}
if (storageInfo.getBlockReportCount() == 0
    && storageInfo.getState() != State.FAILED
    && newReport.getNumberOfBlocks() > 0) {
  // The first block report can be processed a lot more efficiently than
  // ordinary block reports.  This shortens restart times.
  processFirstBlockReport(storageInfo, newReport);
} else {
  invalidatedBlocks = processReport(storageInfo, newReport);
}
{code}

In case the DataNode reports no blocks for a data directory, it might be a new 
DataNode or the data directory may have been emptied for whatever reason 
(offline replacement of storage, reformatting of data disk, etc.). In either 
case, the changes should be reflected in the output of {{fsck}} in less than 6 
hours to prevent data loss due to misleading output.


  was:
During cluster maintenance, we had to change parameters of the underlying disk 
filesystem and we stopped the DataNode, reformatted all of its data directories 
and started the DataNode again in under 10 minutes with no data and only the 
{{VERSION}} file present. Running fsck afterwards reports that all blocks are 
fully replicated, which does not reflect the true state of HDFS. If an 
administrator trusts {{fsck}} and continues to replace further DataNodes, *data 
will be lost!*

Steps to reproduce:
1. Shutdown DataNode
2. Remove all BlockPools from all data directories (only {{VERSION}} file is 
present)
3. Startup DataNode in under 10.5 minutes
4. Run {{hdfs fsck /}}

*Actual result:* Average replication is falsely shown as 3.0
*Expected result:* Average replication factor is < 3.0

*Workaround:* Trigger a block report with {{hdfs dfsadmin -triggerBlockReport 
$dn_host:$ipc_port}}

*Cause:* The first block report is handled differently by NameNode and only 
added blocks are respected. This behaviour was introduced in HDFS-7980 for 
performance reasons. But is applied too widely and in our case data can be lost.

*Fix:* We suggest using stricter conditions on applying 
{{processFirstBlockReport}} in {{BlockManager:processReport()}}:
Change
{code}
if (storageInfo.getBlockReportCount() == 0) {
  // The first block report can be processed a lot more efficiently than
  // ordinary block reports.  This shortens restart times.
  processFirstBlockReport(storageInfo, newReport);
} else {
  invalidatedBlocks = processReport(storageInfo, newReport);
}
{code}

to

{code}
if (storageInfo.getBlockReportCount() == 0
    && storageInfo.getState() != State.FAILED
    && storageInfo.numBlocks() > 0) {
  // The first block report can be processed a lot more efficiently than
  // ordinary block reports.  This shortens restart times.
  processFirstBlockReport(storageInfo, newReport);
} else {
  invalidatedBlocks = processReport(storageInfo, newReport);
}
{code}

In case the DataNode reports no blocks for a data directory, it might be a new 
DataNode or the data directory may have been emptied for whatever reason 
(offline replacement of storage, reformatting of data disk, etc.). In either 
case, the changes should be reflected in the output of {{fsck}} in less than 6 
hours to prevent data loss due to misleading output.




[jira] [Created] (HDFS-11392) FSPermissionChecker#checkSubAccess should support inodeattribute provider

2017-02-06 Thread John Zhuge (JIRA)
John Zhuge created HDFS-11392:
-

 Summary: FSPermissionChecker#checkSubAccess should support 
inodeattribute provider
 Key: HDFS-11392
 URL: https://issues.apache.org/jira/browse/HDFS-11392
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.0
Reporter: John Zhuge
Priority: Minor


HDFS-6826 added this TODO in {{FSPermissionChecker#checkSubAccess}}:
{code:title=FSPermissionChecker#checkSubAccess}
//TODO have to figure this out with inodeattribute provider
INodeAttributes inodeAttr =
getINodeAttrs(components, pathIdx, d, snapshotId);
{code}

If an INodeAttributeProvider is in play, it always incorrectly returns the 
attributes of the subtree root, even when the check descends multiple levels 
down the subtree, because the components array still describes the root path.

{code:title=FSPermissionChecker#getINodeAttrs}
  private INodeAttributes getINodeAttrs(byte[][] pathByNameArr, int pathIdx,
  INode inode, int snapshotId) {
INodeAttributes inodeAttrs = inode.getSnapshotINode(snapshotId);
if (getAttributesProvider() != null) {
  String[] elements = new String[pathIdx + 1];
  for (int i = 0; i < elements.length; i++) {
elements[i] = DFSUtil.bytes2String(pathByNameArr[i]);
  }
  inodeAttrs = getAttributesProvider().getAttributes(elements, inodeAttrs);
}
return inodeAttrs;
  }
{code}
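A sketch of the fix direction (hypothetical helper, not actual HDFS code): while descending the subtree, extend the root's path components with each visited inode's local name, so the attributes provider sees the real path of the inode being checked rather than the subtree root's path.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch: build per-level path components during subtree descent
// so getAttributes() receives the path of the inode actually being checked.
class SubtreePath {
    static String[] componentsFor(String[] rootComponents, List<String> descentNames) {
        List<String> out = new ArrayList<>(Arrays.asList(rootComponents));
        out.addAll(descentNames);   // names collected while walking down the subtree
        return out.toArray(new String[0]);
    }
}
```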






[jira] [Commented] (HDFS-9409) DataNode shutdown does not guarantee full shutdown of all threads due to race condition.

2017-02-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15854634#comment-15854634
 ] 

Chris Nauroth commented on HDFS-9409:
-

Using a hidden configuration flag for this sounds appropriate to me.  I agree 
that there is no need for a strict long wait on all threads in production 
operations if correctness doesn't depend on it.

> DataNode shutdown does not guarantee full shutdown of all threads due to race 
> condition.
> 
>
> Key: HDFS-9409
> URL: https://issues.apache.org/jira/browse/HDFS-9409
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Chris Nauroth
>
> {{DataNode#shutdown}} is documented to return "only after shutdown is 
> complete".  Even after completion of this method, it's possible that threads 
> started by the DataNode are still running.  Race conditions in the shutdown 
> sequence may cause it to skip stopping and joining the {{BPServiceActor}} 
> threads.
> This is likely not a big problem in normal operations, because these are 
> daemon threads that won't block overall process exit.  It is more of a 
> problem for tests, because it makes it impossible to write reliable 
> assertions that these threads exited cleanly.  For large test suites, it can 
> also cause an accumulation of unneeded threads, which might harm test 
> performance.






[jira] [Updated] (HDFS-8196) Erasure Coding related information on NameNode UI

2017-02-06 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8196:
-
Status: Patch Available  (was: Open)

> Erasure Coding related information on NameNode UI
> -
>
> Key: HDFS-8196
> URL: https://issues.apache.org/jira/browse/HDFS-8196
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: namenode
>Affects Versions: HDFS-7285
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>  Labels: NameNode, WebUI, hdfs-ec-3.0-nice-to-have
> Attachments: HDFS-8196.01.patch, Screen Shot 2017-02-06 at 
> 22.30.40.png
>
>
> The NameNode WebUI shows EC-related information and metrics. 
> This depends on [HDFS-7674|https://issues.apache.org/jira/browse/HDFS-7674].






[jira] [Updated] (HDFS-8196) Erasure Coding related information on NameNode UI

2017-02-06 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8196:
-
Attachment: Screen Shot 2017-02-06 at 22.30.40.png







[jira] [Updated] (HDFS-8196) Erasure Coding related information on NameNode UI

2017-02-06 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HDFS-8196:
-
Attachment: HDFS-8196.01.patch







[jira] [Commented] (HDFS-10219) Change the default value for dfs.namenode.reconstruction.pending.timeout-sec from -1 to 300

2017-02-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15853955#comment-15853955
 ] 

Hudson commented on HDFS-10219:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11214 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11214/])
HDFS-10219. Change the default value for (yqlin: rev 
663e683adfbbbffeacdddcd846bd336c121df5c7)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/PendingReconstructionBlocks.java


> Change the default value for dfs.namenode.reconstruction.pending.timeout-sec 
> from -1 to 300
> ---
>
> Key: HDFS-10219
> URL: https://issues.apache.org/jira/browse/HDFS-10219
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-10219.001.patch, HDFS-10219.002.patch, 
> HDFS-10219.003.patch
>
>
> The default value for "dfs.namenode.replication.pending.timeout-sec" 
> ({{DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_DEFAULT}}) is -1, yet the 
> effective timeout defaults to 5 minutes:
> {code:title=BlockManager.java}
> pendingReplications = new PendingReplicationBlocks(conf.getInt(
>   DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY,
>   DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_DEFAULT) * 
> 1000L);
> {code}
> {code:title=PendingReplicationBlocks.java}
>   private long timeout = 5 * 60 * 1000;
>   private final static long DEFAULT_RECHECK_INTERVAL = 5 * 60 * 1000;
>   PendingReplicationBlocks(long timeoutPeriod) {
> if ( timeoutPeriod > 0 ) {
>   this.timeout = timeoutPeriod;
> }
> pendingReplications = new HashMap<>();
> timedOutItems = new ArrayList<>();
>   }
> {code}
> I'm thinking we can change 
> {{DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_DEFAULT}} from -1 to 300 to 
> improve readability.
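If adopted, the corresponding hdfs-default.xml entry would presumably look like the following (the description text here is illustrative, not from the committed patch):

```xml
<property>
  <name>dfs.namenode.reconstruction.pending.timeout-sec</name>
  <value>300</value>
  <description>
    Timeout in seconds for a block reconstruction request to stay in the
    pending queue before being retried. 300 seconds matches the previous
    hard-coded 5-minute default.
  </description>
</property>
```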






[jira] [Commented] (HDFS-10860) Switch HttpFS from Tomcat to Jetty

2017-02-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15853854#comment-15853854
 ] 

Hadoop QA commented on HDFS-10860:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 21s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 57s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 18s{color} | {color:green} The patch generated 0 new + 564 unchanged - 8 fixed = 564 total (was 572) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-assemblies {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 21s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  9m  4s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 56s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 17s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 46s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 26s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestGroupsCaching |
|   | 

[jira] [Updated] (HDFS-10219) Change the default value for dfs.namenode.reconstruction.pending.timeout-sec from -1 to 300

2017-02-06 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10219:
-
Fix Version/s: 3.0.0-alpha3

Thanks [~ajisakaa] for the review. Committed to trunk.

> Change the default value for dfs.namenode.reconstruction.pending.timeout-sec 
> from -1 to 300
> ---
>
> Key: HDFS-10219
> URL: https://issues.apache.org/jira/browse/HDFS-10219
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha2
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>Priority: Minor
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-10219.001.patch, HDFS-10219.002.patch, 
> HDFS-10219.003.patch
>
>
> The default value for "dfs.namenode.replication.pending.timeout-sec" 
> ({{DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_DEFAULT}}) is -1, but the 
> timeout is 5 minutes by default.
> {code:title=BlockManager.java}
> pendingReplications = new PendingReplicationBlocks(conf.getInt(
>   DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_KEY,
>   DFSConfigKeys.DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_DEFAULT) * 
> 1000L);
> {code}
> {code:title=PendingReplicationBlocks.java}
>   private long timeout = 5 * 60 * 1000;
>   private final static long DEFAULT_RECHECK_INTERVAL = 5 * 60 * 1000;
>   PendingReplicationBlocks(long timeoutPeriod) {
> if ( timeoutPeriod > 0 ) {
>   this.timeout = timeoutPeriod;
> }
> pendingReplications = new HashMap<>();
> timedOutItems = new ArrayList<>();
>   }
> {code}
> I'm thinking we can change 
> {{DFS_NAMENODE_REPLICATION_PENDING_TIMEOUT_SEC_DEFAULT}} from -1 to 300 to 
> improve readability.
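The guard quoted above can be reduced to a standalone sketch showing why the change is behavior-preserving (the class and helper names below are invented for illustration, not actual Hadoop code): with the old default of -1, the `timeoutPeriod > 0` check rejects the configured value and the hard-coded 5-minute timeout applies, so a default of 300 seconds produces the same effective timeout while making the configured value match reality.

```java
public class PendingTimeoutSketch {
    // Mirrors the hard-coded fallback in PendingReplicationBlocks.
    static final long HARD_CODED_TIMEOUT_MS = 5 * 60 * 1000;

    // Hypothetical helper reproducing the constructor's guard.
    static long effectiveTimeoutMs(int configuredSeconds) {
        long timeoutMs = HARD_CODED_TIMEOUT_MS;
        long configuredMs = configuredSeconds * 1000L;
        if (configuredMs > 0) {
            timeoutMs = configuredMs;
        }
        return timeoutMs;
    }

    public static void main(String[] args) {
        // Old default: -1 is non-positive, so the fallback applies.
        System.out.println(effectiveTimeoutMs(-1));   // 300000
        // Proposed default: 300 s yields the same 300000 ms, but visibly.
        System.out.println(effectiveTimeoutMs(300));  // 300000
    }
}
```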



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-10219) Change the default value for dfs.namenode.reconstruction.pending.timeout-sec from -1 to 300

2017-02-06 Thread Yiqun Lin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiqun Lin updated HDFS-10219:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)







[jira] [Commented] (HDFS-6804) race condition between transferring block and appending block causes "Unexpected checksum mismatch exception"

2017-02-06 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15853798#comment-15853798
 ] 

Brahma Reddy Battula commented on HDFS-6804:


Any update on this?

> race condition between transferring block and appending block causes 
> "Unexpected checksum mismatch exception" 
> --
>
> Key: HDFS-6804
> URL: https://issues.apache.org/jira/browse/HDFS-6804
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.2.0
>Reporter: Gordon Wang
>Assignee: Wei-Chiu Chuang
>
> We found some error log in the datanode. like this
> {noformat}
> 2014-07-22 01:49:51,338 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Ex
> ception for BP-2072804351-192.168.2.104-1406008383435:blk_1073741997_9248
> java.io.IOException: Terminating due to a checksum error.java.io.IOException: 
> Unexpected checksum mismatch while writing 
> BP-2072804351-192.168.2.104-1406008383435:blk_1073741997_9248 from 
> /192.168.2.101:39495
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:536)
> at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:703)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:575)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:115)
> at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
> at java.lang.Thread.run(Thread.java:744)
> {noformat}
> While on the source datanode, the log says the block is transmitted.
> {noformat}
> 2014-07-22 01:49:50,805 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Da
> taTransfer: Transmitted 
> BP-2072804351-192.168.2.104-1406008383435:blk_1073741997
> _9248 (numBytes=16188152) to /192.168.2.103:50010
> {noformat}
> When the destination datanode gets the checksum mismatch, it reports a bad 
> block to the NameNode, and the NameNode marks the replica on the source 
> datanode as corrupt. But the replica on the source datanode is actually 
> valid, because it passes checksum verification.
> In short, the replica on the source datanode is wrongly marked as corrupted.
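The ordering problem described above can be reproduced deterministically in a toy sketch (this is not DataNode code; the names are invented for illustration): the transfer snapshots the data bytes, an append then rewrites the partial last chunk and its checksum, and the transfer picks up the newer checksum, so the receiver sees bytes and checksum from two different points in time.

```java
import java.util.zip.CRC32;

public class TransferAppendRaceSketch {
    static long crc(byte[] bytes) {
        CRC32 c = new CRC32();
        c.update(bytes, 0, bytes.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        // Replica's partial last chunk and its on-disk checksum.
        byte[] lastChunk = "old-data".getBytes();
        long metaCrc = crc(lastChunk);

        // 1. The transfer thread snapshots the data bytes first...
        byte[] transferredData = lastChunk.clone();

        // 2. ...then an append rewrites the chunk and its checksum...
        lastChunk = "old-data+appended".getBytes();
        metaCrc = crc(lastChunk);

        // 3. ...and the transfer thread reads the (now newer) checksum.
        long transferredCrc = metaCrc;

        // The destination verifies old bytes against the new checksum:
        // this is the "Unexpected checksum mismatch" in the log.
        System.out.println(crc(transferredData) == transferredCrc); // false
    }
}
```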






[jira] [Updated] (HDFS-6708) StorageType should be encoded in the block token

2017-02-06 Thread Ewan Higgs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewan Higgs updated HDFS-6708:
-
Attachment: HDFS-6708.0002.patch

Attached is a second version of the patch, which correctly computes the hash of 
the {{storageTypes}} array.
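For context, a generic illustration of the pitfall (not the patch itself; the enum below is a stand-in for Hadoop's {{StorageType}}): calling {{hashCode()}} directly on a Java array uses identity hashing inherited from {{Object}}, so two arrays with identical contents hash differently, whereas {{Arrays.hashCode}} produces the content-based hash a block token comparison needs.

```java
import java.util.Arrays;

public class ArrayHashSketch {
    // Stand-in for Hadoop's StorageType enum.
    enum StorageType { DISK, SSD }

    public static void main(String[] args) {
        StorageType[] a = { StorageType.DISK, StorageType.SSD };
        StorageType[] b = { StorageType.DISK, StorageType.SSD };

        // An array's own hashCode() is identity-based, so two distinct
        // arrays with equal contents almost always hash differently.
        System.out.println(a.hashCode() == b.hashCode());

        // Arrays.hashCode() hashes the contents instead.
        System.out.println(Arrays.hashCode(a) == Arrays.hashCode(b)); // true
    }
}
```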

> StorageType should be encoded in the block token
> 
>
> Key: HDFS-6708
> URL: https://issues.apache.org/jira/browse/HDFS-6708
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, namenode
>Affects Versions: 2.4.1
>Reporter: Arpit Agarwal
>Assignee: Ewan Higgs
> Fix For: 3.0.0-alpha3
>
> Attachments: HDFS-6708.0001.patch, HDFS-6708.0002.patch
>
>
> HDFS-6702 is adding support for file creation based on StorageType.
> The block token is used as a tamper-proof channel for communicating block 
> parameters from the NN to the DN during block creation. The StorageType 
> should be included in this block token.






[jira] [Updated] (HDFS-10860) Switch HttpFS from Tomcat to Jetty

2017-02-06 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HDFS-10860:
--
Attachment: HDFS-10860.010.patch

Patch 010
- Update ServerSetup.md.vm

TESTING DONE
- Verify /jmx, /logLevel, /conf, and /stack with hadoop.httpfs.http.administrators set to "$USER"
  - No access for kerberos login "hdfs"
  - Access for kerberos login "$USER"
- HttpFS Bats regression tests https://github.com/jzhuge/hadoop-bats-tests and 
https://github.com/jzhuge/hadoop-setup-scripts in insecure, ssl, and 
ssl+kerberos mode


> Switch HttpFS from Tomcat to Jetty
> --
>
> Key: HDFS-10860
> URL: https://issues.apache.org/jira/browse/HDFS-10860
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: httpfs
>Affects Versions: 2.6.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Blocker
> Attachments: HDFS-10860.001.patch, HDFS-10860.002.patch, 
> HDFS-10860.003.patch, HDFS-10860.004.patch, HDFS-10860.005.patch, 
> HDFS-10860.006.patch, HDFS-10860.007.patch, HDFS-10860.008.patch, 
> HDFS-10860.009.patch, HDFS-10860.010.patch
>
>
> The Tomcat 6 we are using will reach EOL at the end of 2017. While there are 
> other good options, I would propose switching to {{Jetty 9}} for the 
> following reasons:
> * Easier migration. Both Tomcat and Jetty are based on {{Servlet 
> Containers}}, so we don't have to change client code that much. It would 
> require more work to switch to {{JAX-RS}}.
> * Well established.
> * Good performance and scalability.
> Other alternatives:
> * Jersey + Grizzly
> * Tomcat 8
> Your opinions will be greatly appreciated.


