[jira] [Created] (HDFS-12486) GetConf to get journalnodeslist

2017-09-18 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12486:
-

 Summary: GetConf to get journalnodeslist
 Key: HDFS-12486
 URL: https://issues.apache.org/jira/browse/HDFS-12486
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Bharat Viswanadham


GetConf command to list journal nodes.
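Such a subcommand would presumably read the journal nodes from the shared-edits configuration. As a rough illustration (not the actual GetConf code), the list can be recovered from a {{qjournal://}} URI of the kind used in {{dfs.namenode.shared.edits.dir}}:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: extracts journal node addresses from a
// qjournal:// shared-edits URI of the form
//   qjournal://host1:8485;host2:8485;host3:8485/journalId
// The real tool would read the value from the Hadoop Configuration.
public class JournalNodeList {
    public static List<String> parse(String sharedEditsDir) {
        // Strip the scheme prefix and the trailing /journalId path.
        String hosts = sharedEditsDir.replaceFirst("^qjournal://", "");
        int slash = hosts.indexOf('/');
        if (slash >= 0) {
            hosts = hosts.substring(0, slash);
        }
        // Hosts are separated by semicolons.
        return Arrays.asList(hosts.split(";"));
    }

    public static void main(String[] args) {
        System.out.println(parse("qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster"));
    }
}
```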




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-12485) expunge may not remove trash from non-home directory encryption zone

2017-09-18 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12485:
--

 Summary: expunge may not remove trash from non-home directory 
encryption zone
 Key: HDFS-12485
 URL: https://issues.apache.org/jira/browse/HDFS-12485
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0-alpha1, 2.8.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


If I log into Linux as root, and then log in as the superuser h...@example.com:
{noformat}
[root@nightly511-1 ~]# hdfs dfs -rm /scale/b
17/09/18 15:21:32 INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/scale/b' to trash at: hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
[root@nightly511-1 ~]# hdfs dfs -expunge
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: TrashPolicyDefault#deleteCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: Deleted trash checkpoint: /user/hdfs/.Trash/170918143916
17/09/18 15:21:59 INFO fs.TrashPolicyDefault: TrashPolicyDefault#createCheckpoint for trashRoot: hdfs://ns1/user/hdfs/.Trash
[root@nightly511-1 ~]# hdfs dfs -ls hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
-rw-r--r--   3 hdfs systest  0 2017-09-18 15:21 hdfs://ns1/scale/.Trash/hdfs/Current/scale/b
{noformat}

expunge does not remove trash under /scale, because it does not know I am the 
'hdfs' user.

{code:title=DistributedFileSystem#getTrashRoots}
Path ezTrashRoot = new Path(it.next().getPath(),
    FileSystem.TRASH_PREFIX);
if (!exists(ezTrashRoot)) {
  continue;
}
if (allUsers) {
  for (FileStatus candidate : listStatus(ezTrashRoot)) {
    if (exists(candidate.getPath())) {
      ret.add(candidate);
    }
  }
} else {
  Path userTrash = new Path(ezTrashRoot, System.getProperty(
      "user.name")); // <-- bug: uses the OS login user
  try {
    ret.add(getFileStatus(userTrash));
  } catch (FileNotFoundException ignored) {
  }
}
{code}

It should use the UGI to get the user name, rather than the system login user name.
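The fix suggested here would resolve the name via Hadoop's {{UserGroupInformation.getCurrentUser().getShortUserName()}}. Since that API needs a Hadoop runtime, below is a dependency-free sketch that isolates just the path construction; the "hdfs" user name stands in for the UGI result:

```java
// Sketch of the proposed fix in plain Java (no Hadoop dependency here):
// the per-user trash path inside an encryption zone should be built from
// the HDFS user name resolved via UserGroupInformation, not from the
// local OS login returned by System.getProperty("user.name").
public class EzTrashPath {
    // Builds <ezTrashRoot>/<userName>, the per-user trash dir in an EZ.
    public static String userTrashPath(String ezTrashRoot, String userName) {
        return ezTrashRoot + "/" + userName;
    }

    public static void main(String[] args) {
        String ezTrashRoot = "hdfs://ns1/scale/.Trash";
        // Buggy: OS login user (e.g. "root" in the report above).
        String osUser = System.getProperty("user.name");
        // Fixed: the HDFS user, which in the report is "hdfs".
        String hdfsUser = "hdfs"; // stands in for the UGI short user name
        System.out.println("buggy: " + userTrashPath(ezTrashRoot, osUser));
        System.out.println("fixed: " + userTrashPath(ezTrashRoot, hdfsUser));
    }
}
```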






[jira] [Created] (HDFS-12484) hdfs dfs -expunge requires superuser permission after 2.8

2017-09-18 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12484:
--

 Summary: hdfs dfs -expunge requires superuser permission after 2.8
 Key: HDFS-12484
 URL: https://issues.apache.org/jira/browse/HDFS-12484
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0-alpha1, 2.8.0
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


Hadoop 2.8 added a feature to support trash inside encryption zones.

However, it breaks the existing -expunge semantics, because now a user must have 
superuser permission in order to run -expunge. The reason is that -expunge gets 
all encryption zone paths using DFSClient#listEncryptionZones, which requires 
superuser permission.

Not sure what the best way to address this is, so filing this jira to invite 
comments.
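One possible direction, offered purely as a sketch and not as the actual design: let -expunge degrade gracefully when DFSClient#listEncryptionZones is denied, falling back to the trash roots visible to the current user. The interface and exception below are stand-ins for illustration, not Hadoop APIs:

```java
import java.util.ArrayList;
import java.util.List;

// Design sketch only: if listing all encryption zones needs superuser
// permission, -expunge could catch the access error and proceed with the
// trash roots the current user can see. AccessDeniedException stands in
// for Hadoop's AccessControlException here.
public class ExpungeFallback {
    public interface TrashSource {
        List<String> allEncryptionZoneTrashRoots()
                throws java.nio.file.AccessDeniedException;
        List<String> currentUserTrashRoots();
    }

    public static List<String> trashRootsToExpunge(TrashSource fs) {
        List<String> roots = new ArrayList<>(fs.currentUserTrashRoots());
        try {
            roots.addAll(fs.allEncryptionZoneTrashRoots());
        } catch (java.nio.file.AccessDeniedException e) {
            // Not a superuser: expunge only the user's own trash roots.
        }
        return roots;
    }

    public static void main(String[] args) {
        TrashSource denied = new TrashSource() {
            public List<String> allEncryptionZoneTrashRoots()
                    throws java.nio.file.AccessDeniedException {
                throw new java.nio.file.AccessDeniedException("/");
            }
            public List<String> currentUserTrashRoots() {
                return List.of("/user/alice/.Trash");
            }
        };
        System.out.println(trashRootsToExpunge(denied));
    }
}
```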






[jira] [Created] (HDFS-12482) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery

2017-09-18 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12482:


 Summary: Provide a configuration to adjust the weight of EC 
recovery tasks to adjust the speed of recovery
 Key: HDFS-12482
 URL: https://issues.apache.org/jira/browse/HDFS-12482
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.0.0-alpha4
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor


The relative speed of EC recovery compared to 3x replica recovery is a 
function of the EC codec, the number of sources, NIC speed, CPU speed, etc.

Currently EC recovery has a fixed {{xmitsInProgress}} of {{max(# of 
sources, # of targets)}}, compared to {{1}} for 3x replica recovery, and the NN 
uses {{xmitsInProgress}} to decide how many recovery tasks to schedule to the 
DataNode. Thus we can add a coefficient for users to tune the weight of EC 
recovery tasks.
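The proposal could look roughly like the following; the method name and the configurable weight are illustrative only, and the actual configuration key name would be decided in the patch:

```java
// Sketch of the proposed tunable: scale the EC recovery xmits count by a
// user-configurable coefficient before it is reported to the NameNode.
// The key name (e.g. "dfs.datanode.ec.reconstruction.xmits.weight") is a
// hypothetical example, not a confirmed configuration property.
public class EcXmitsWeight {
    // 3x replica recovery counts as 1 xmit; EC recovery currently counts
    // as max(#sources, #targets). A weight < 1.0 lets the NN schedule
    // more EC tasks per DataNode; a weight > 1.0 throttles them.
    public static int ecXmits(int numSources, int numTargets, double weight) {
        int base = Math.max(numSources, numTargets);
        // Round up, but never below 1, so a task always costs something.
        return Math.max(1, (int) Math.ceil(weight * base));
    }
}
```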






[jira] [Created] (HDFS-12483) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery

2017-09-18 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-12483:


 Summary: Provide a configuration to adjust the weight of EC 
recovery tasks to adjust the speed of recovery
 Key: HDFS-12483
 URL: https://issues.apache.org/jira/browse/HDFS-12483
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: erasure-coding
Affects Versions: 3.0.0-alpha4
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor


The relative speed of EC recovery compared to 3x replica recovery is a 
function of the EC codec, the number of sources, NIC speed, CPU speed, etc.

Currently EC recovery has a fixed {{xmitsInProgress}} of {{max(# of 
sources, # of targets)}}, compared to {{1}} for 3x replica recovery, and the NN 
uses {{xmitsInProgress}} to decide how many recovery tasks to schedule to the 
DataNode. Thus we can add a coefficient for users to tune the weight of EC 
recovery tasks.






[jira] [Created] (HDFS-12481) Ozone: Corona: Support for variable key length in offline mode

2017-09-18 Thread Nandakumar (JIRA)
Nandakumar created HDFS-12481:
-

 Summary: Ozone: Corona: Support for variable key length in offline 
mode
 Key: HDFS-12481
 URL: https://issues.apache.org/jira/browse/HDFS-12481
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Nandakumar
Assignee: Nandakumar


This jira is to add support in corona for taking the key length from the user 
and generating random data of that length to write into Ozone.
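A minimal sketch of the data-generation half, assuming the key length arrives as a plain int from the command line (corona's actual option names are not given in this mail):

```java
import java.util.Random;

// Sketch of variable key length support: given a user-supplied size,
// generate that many random bytes to write as the key's value. A fixed
// seed keeps runs reproducible; a real tool might seed per key.
public class RandomKeyData {
    public static byte[] randomData(int length, long seed) {
        byte[] buf = new byte[length];
        new Random(seed).nextBytes(buf); // deterministic for a fixed seed
        return buf;
    }

    public static void main(String[] args) {
        // e.g. a 10240-byte value for each key written to Ozone.
        System.out.println(randomData(10240, 42L).length + " bytes generated");
    }
}
```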






[jira] [Created] (HDFS-12480) TestNameNodeMetrics#testTransactionAndCheckpointMetrics Fails in trunk

2017-09-18 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-12480:
---

 Summary: TestNameNodeMetrics#testTransactionAndCheckpointMetrics 
Fails in trunk
 Key: HDFS-12480
 URL: https://issues.apache.org/jira/browse/HDFS-12480
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Brahma Reddy Battula


{noformat}
java.lang.AssertionError: Bad value for metric LastWrittenTransactionId expected:<3> but was:<4>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:189)
	at org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics.testTransactionAndCheckpointMetrics(TestNameNodeMetrics.java:854)
{noformat}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-09-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/526/

[Sep 18, 2017 4:20:43 AM] (wangda) YARN-7172. ResourceCalculator.fitsIn() 
should not take a cluster


[Error replacing 'FILE' - Workspace is not accessible]


[jira] [Resolved] (HDFS-12413) Inotify should support erasure coding policy op as replica meta change

2017-09-18 Thread Huafeng Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huafeng Wang resolved HDFS-12413.
-
Resolution: Not A Problem

> Inotify should support erasure coding policy op as replica meta change
> --
>
> Key: HDFS-12413
> URL: https://issues.apache.org/jira/browse/HDFS-12413
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: erasure-coding
>Reporter: Kai Zheng
>Assignee: Huafeng Wang
>
> Currently HDFS Inotify already supports meta changes like replication for a 
> file. We should also support erasure coding policy setting/unsetting for a 
> file in the same way.






[jira] [Created] (HDFS-12479) Some misuses of lock in DFSStripedOutputStream

2017-09-18 Thread Huafeng Wang (JIRA)
Huafeng Wang created HDFS-12479:
---

 Summary: Some misuses of lock in DFSStripedOutputStream
 Key: HDFS-12479
 URL: https://issues.apache.org/jira/browse/HDFS-12479
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Huafeng Wang
Assignee: Huafeng Wang
Priority: Minor









[jira] [Created] (HDFS-12478) [WRITE] Command line tools for managing Provided Storage Backup mounts

2017-09-18 Thread Ewan Higgs (JIRA)
Ewan Higgs created HDFS-12478:
-

 Summary: [WRITE] Command line tools for managing Provided Storage 
Backup mounts
 Key: HDFS-12478
 URL: https://issues.apache.org/jira/browse/HDFS-12478
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: Ewan Higgs
Priority: Minor


This is a task for implementing the command line interface for attaching a 
PROVIDED storage backup system (see HDFS-9806, HDFS-12090).

# The administrator should be able to mount a PROVIDED storage volume from the 
command line. 
{code}hdfs attach -create [-name ]  {code}
# Whitelist of users who are able to manage mounts (create, attach, detach).
# Be able to interrogate the status of the attached storage (last time a 
snapshot was taken, files being backed up).
# The administrator should be able to remove an attached PROVIDED storage 
volume from the command line. This simply means that the synchronization 
process no longer runs. If the administrator has configured their setup to no 
longer keep local copies of the data, the blocks in the subtree become 
inaccessible, since the external file store system is no longer reachable.
{code}hdfs attach -remove  [-force | -flush]{code}


