[jira] [Comment Edited] (HDFS-15436) Default mount table name used by ViewFileSystem should be configurable

2020-06-24 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144585#comment-17144585
 ] 

Virajith Jalaparti edited comment on HDFS-15436 at 6/25/20, 4:33 AM:
-

Similar to the example in the description, access to {{viewfs:///foo/bar}} 
(note the absence of an authority here) also doesn't work with the following 
configurations:

(1) {{fs.defaultFS = hdfs://clustername/}}
(2) {{fs.hdfs.impl = org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}

Not specifying an authority is a common use case when the same user code/UDFs 
needs to run on two different clusters.
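
For concreteness, a minimal sketch of the failing access (the cluster name and 
path are placeholders, not from the JIRA):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch of the failing setup described above; "clustername" and the
// path are placeholders.
public class ViewFsNoAuthorityExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://clustername/");
    conf.set("fs.hdfs.impl",
        "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");

    // No authority in the URI, and the viewfs scheme doesn't match the hdfs
    // defaultFS scheme, so ViewFileSystem falls back to the mount table named
    // "default"; the access fails unless that table happens to be configured.
    FileSystem fs = FileSystem.get(URI.create("viewfs:///foo/bar"), conf);
    fs.getFileStatus(new Path("/foo/bar"));
  }
}
{code}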


was (Author: virajith):
Similar to the example in the description, access to {{viewfs:///foo/bar}} 
(note the absence of an authority here) also doesn't work with the following 
configurations:

(1) {{fs.defaultFS = hdfs://clustername/}}
(2) {{fs.hdfs.impl = org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}

Not specifying an authority is a common use case when the same user code/UDFs 
needs to run on two different clusters.

> Default mount table name used by ViewFileSystem should be configurable
> --
>
> Key: HDFS-15436
> URL: https://issues.apache.org/jira/browse/HDFS-15436
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
>
> Currently, if no authority is provided and the scheme of the Path doesn't 
> match the scheme of {{fs.defaultFS}}, the mount table used by 
> ViewFileSystem to resolve the path is {{default}}. 
> This breaks accesses to paths like {{hdfs:///foo/bar}} (without any authority) 
> when the following configurations are used:
> (1) {{fs.defaultFS}} = {{viewfs://clustername/}} 
> (2) {{fs.hdfs.impl = 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}
> This JIRA proposes to add a new configuration, 
> {{fs.viewfs.mounttable.default.name.key}}, which is used to get the name of 
> the cluster/mount table when the authority is missing in cases like the 
> above. If not set, the string {{default}} will be used, as it is today.






[jira] [Commented] (HDFS-15436) Default mount table name used by ViewFileSystem should be configurable

2020-06-24 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144638#comment-17144638
 ] 

Virajith Jalaparti commented on HDFS-15436:
---

Thanks for checking, [~umamaheswararao]. I missed the section you mentioned. 
I took a look, but it doesn't cover the case where the scheme of the URI (without 
an authority) and the scheme of fs.defaultFS are different, right? In that case, 
we would need something like this?

> Default mount table name used by ViewFileSystem should be configurable
> --
>
> Key: HDFS-15436
> URL: https://issues.apache.org/jira/browse/HDFS-15436
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
>
> Currently, if no authority is provided and the scheme of the Path doesn't 
> match the scheme of {{fs.defaultFS}}, the mount table used by 
> ViewFileSystem to resolve the path is {{default}}. 
> This breaks accesses to paths like {{hdfs:///foo/bar}} (without any authority) 
> when the following configurations are used:
> (1) {{fs.defaultFS}} = {{viewfs://clustername/}} 
> (2) {{fs.hdfs.impl = 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}
> This JIRA proposes to add a new configuration, 
> {{fs.viewfs.mounttable.default.name.key}}, which is used to get the name of 
> the cluster/mount table when the authority is missing in cases like the 
> above. If not set, the string {{default}} will be used, as it is today.






[jira] [Commented] (HDFS-15436) Default mount table name used by ViewFileSystem should be configurable

2020-06-24 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144636#comment-17144636
 ] 

Uma Maheswara Rao G commented on HDFS-15436:


[~virajith] Thanks a lot for working on this. We have covered this in the design 
doc [^ViewFSOverloadScheme - V1.0.pdf], in the last section, "What to do when 
the authority is missing".

The proposal I made was the following:
 
{code:java}
Now that viewfs uses the overloaded hdfs scheme:
 ● hdfs://xxx/ : use xxx as the cluster name
 ● hdfs:/// : take the authority from defaultfs, if defaultfs is hdfs (i.e. if 
the scheme matches){code}
However, the configurable default cluster name/mount table name also works here. 
If we want to avoid yet another config, the above solution should work? (I have 
not worked on this yet, but added the thoughts in the design doc.)
If that's not feasible, then we have the option to make it configurable.
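
For concreteness, a rough sketch of that fallback (a hypothetical helper, not 
the actual ViewFileSystemOverloadScheme code) could look like:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper sketching the fallback above for URIs without an
// authority; not the actual Hadoop implementation.
class MountTableNameSketch {
  static String resolve(URI uri, Configuration conf) {
    if (uri.getAuthority() != null) {
      return uri.getAuthority();        // hdfs://xxx/ -> mount table "xxx"
    }
    URI defaultFs = URI.create(conf.get("fs.defaultFS", "file:///"));
    if (uri.getScheme().equals(defaultFs.getScheme())) {
      return defaultFs.getAuthority();  // hdfs:///: borrow defaultFS authority
    }
    // Schemes differ: the case this JIRA is about. The proposal falls back to
    // a configurable name instead of the hard-coded "default".
    return conf.get("fs.viewfs.mounttable.default.name.key", "default");
  }
}
{code}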
 

> Default mount table name used by ViewFileSystem should be configurable
> --
>
> Key: HDFS-15436
> URL: https://issues.apache.org/jira/browse/HDFS-15436
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
>
> Currently, if no authority is provided and the scheme of the Path doesn't 
> match the scheme of {{fs.defaultFS}}, the mount table used by 
> ViewFileSystem to resolve the path is {{default}}. 
> This breaks accesses to paths like {{hdfs:///foo/bar}} (without any authority) 
> when the following configurations are used:
> (1) {{fs.defaultFS}} = {{viewfs://clustername/}} 
> (2) {{fs.hdfs.impl = 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}
> This JIRA proposes to add a new configuration, 
> {{fs.viewfs.mounttable.default.name.key}}, which is used to get the name of 
> the cluster/mount table when the authority is missing in cases like the 
> above. If not set, the string {{default}} will be used, as it is today.






[jira] [Commented] (HDFS-15436) Default mount table name used by ViewFileSystem should be configurable

2020-06-24 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144594#comment-17144594
 ] 

Virajith Jalaparti commented on HDFS-15436:
---

[~umamahesh], [~ayushsaxena] can one of you take a look?

> Default mount table name used by ViewFileSystem should be configurable
> --
>
> Key: HDFS-15436
> URL: https://issues.apache.org/jira/browse/HDFS-15436
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
>
> Currently, if no authority is provided and the scheme of the Path doesn't 
> match the scheme of {{fs.defaultFS}}, the mount table used by 
> ViewFileSystem to resolve the path is {{default}}. 
> This breaks accesses to paths like {{hdfs:///foo/bar}} (without any authority) 
> when the following configurations are used:
> (1) {{fs.defaultFS}} = {{viewfs://clustername/}} 
> (2) {{fs.hdfs.impl = 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}
> This JIRA proposes to add a new configuration, 
> {{fs.viewfs.mounttable.default.name.key}}, which is used to get the name of 
> the cluster/mount table when the authority is missing in cases like the 
> above. If not set, the string {{default}} will be used, as it is today.






[jira] [Commented] (HDFS-15436) Default mount table name used by ViewFileSystem should be configurable

2020-06-24 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144593#comment-17144593
 ] 

Virajith Jalaparti commented on HDFS-15436:
---

PR created at : [https://github.com/apache/hadoop/pull/2100]

> Default mount table name used by ViewFileSystem should be configurable
> --
>
> Key: HDFS-15436
> URL: https://issues.apache.org/jira/browse/HDFS-15436
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
>
> Currently, if no authority is provided and the scheme of the Path doesn't 
> match the scheme of {{fs.defaultFS}}, the mount table used by 
> ViewFileSystem to resolve the path is {{default}}. 
> This breaks accesses to paths like {{hdfs:///foo/bar}} (without any authority) 
> when the following configurations are used:
> (1) {{fs.defaultFS}} = {{viewfs://clustername/}} 
> (2) {{fs.hdfs.impl = 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}
> This JIRA proposes to add a new configuration, 
> {{fs.viewfs.mounttable.default.name.key}}, which is used to get the name of 
> the cluster/mount table when the authority is missing in cases like the 
> above. If not set, the string {{default}} will be used, as it is today.






[jira] [Updated] (HDFS-15070) Duplicated issue -- cancelled

2020-06-24 Thread Xudong Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Sun updated HDFS-15070:
--
Description: (was: I am using Hadoop-2.10.0.

The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
(which is the default value) and 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.

When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, the 
namenode fails to start because of an 
`InstantiationException` thrown from 
`org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 

The root cause is that while initializing the namenode, `initAuditLoggers` is 
called and tries to invoke the default constructor of 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, which doesn't have 
a default constructor. Thus the `InstantiationException` is thrown.

 

*Symptom*

*$ ./start-dfs.sh*

 
{code:java}
2019-12-18 14:05:20,670 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.lang.RuntimeException: java.lang.InstantiationException: org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
Caused by: java.lang.InstantiationException: org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
    at java.lang.Class.newInstance(Class.java:427)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
    ... 8 more
Caused by: java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
    at java.lang.Class.getConstructor0(Class.java:3082)
    at java.lang.Class.newInstance(Class.java:412)
    ... 9 more
{code}
 

 

*Detailed Root Cause*

There is no default constructor in 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`:
{code:java}
/** 
 * An {@link AuditLogger} that sends logged data directly to the metrics 
 * systems. It is used when the top service is used directly by the name node 
 */ 
@InterfaceAudience.Private 
public class TopAuditLogger implements AuditLogger { 
  public static final Logger LOG = 
LoggerFactory.getLogger(TopAuditLogger.class); 

  private final TopMetrics topMetrics; 

  public TopAuditLogger(TopMetrics topMetrics) {
Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
"TopMetrics");
this.topMetrics = topMetrics; 
  }

  @Override
  public void initialize(Configuration conf) { 
  }{code}
As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, `initAuditLoggers` 
will try to call its default constructor to make a new instance:
{code:java}
private List<AuditLogger> initAuditLoggers(Configuration conf) {
  // Initialize the custom access loggers if configured.
  Collection<String> alClasses =
      conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
  List<AuditLogger> auditLoggers = Lists.newArrayList();
  if (alClasses != null && !alClasses.isEmpty()) {
for (String className : alClasses) {
  try {
AuditLogger logger;
if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
  logger = new DefaultAuditLogger();
} else {
  logger = (AuditLogger) Class.forName(className).newInstance();
}
logger.initialize(conf);
auditLoggers.add(logger);
  } catch (RuntimeException re) {
throw re;
  } catch (Exception e) {
throw new RuntimeException(e);
  }
}
  }{code}
`initAuditLoggers` tries to call the default constructor to make a new instance 
in:
{code:java}
logger = (AuditLogger) Class.forName(className).newInstance();{code}
This is different from the default configuration value, `default`, for which 
`DefaultAuditLogger` (which has a default constructor) is constructed directly, 
so that case works fine.
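
A standalone illustration of this failure mode (hypothetical class names, not 
Hadoop code):

{code:java}
// Standalone illustration: Class.newInstance() throws InstantiationException
// when the class has no no-arg constructor, mirroring the NameNode failure.
public class NewInstanceDemo {
  static class NoDefaultCtor {
    NoDefaultCtor(String arg) {
    }
  }

  public static void main(String[] args) throws Exception {
    Object o = NoDefaultCtor.class.newInstance(); // InstantiationException
  }
}
{code}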

 

*How To Reproduce* 

The version of Hadoop: 2.10.0
 # Set the value of the configuration parameter `dfs.namenode.audit.loggers` to 
`org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger` in 
"hdfs-site.xml" (the default value is `default`)
 # Start the namenode by running "start-dfs.sh"
 # The namenode will fail to start.
  
{code:xml}
<property>
  <name>dfs.namenode.audit.loggers</name>
  <value>org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger</value>
</property>
{code}
[jira] [Updated] (HDFS-15070) Duplicated issue -- cancelled

2020-06-24 Thread Xudong Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Sun updated HDFS-15070:
--
Summary: Duplicated issue -- cancelled  (was: Duplicated: Crashing bugs in 
NameNode when using a valid configuration for `dfs.namenode.audit.loggers`)

> Duplicated issue -- cancelled
> -
>
> Key: HDFS-15070
> URL: https://issues.apache.org/jira/browse/HDFS-15070
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.10.0
>Reporter: Xudong Sun
>Priority: Critical
>
> I am using Hadoop-2.10.0.
> The configuration parameter `dfs.namenode.audit.loggers` allows `default` 
> (which is the default value) and 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`.
> When I use `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, the 
> namenode fails to start because of an 
> `InstantiationException` thrown from 
> `org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers`. 
> The root cause is that while initializing the namenode, `initAuditLoggers` is 
> called and tries to invoke the default constructor of 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, which doesn't 
> have a default constructor. Thus the `InstantiationException` is 
> thrown.
>  
> *Symptom*
> *$ ./start-dfs.sh*
>  
> {code:java}
> 2019-12-18 14:05:20,670 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
> java.lang.RuntimeException: java.lang.InstantiationException: org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1024)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:858)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:677)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:674)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:736)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
> Caused by: java.lang.InstantiationException: org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger
>     at java.lang.Class.newInstance(Class.java:427)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initAuditLoggers(FSNamesystem.java:1017)
>     ... 8 more
> Caused by: java.lang.NoSuchMethodException: org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger.<init>()
>     at java.lang.Class.getConstructor0(Class.java:3082)
>     at java.lang.Class.newInstance(Class.java:412)
>     ... 9 more
> {code}
>  
>  
> *Detailed Root Cause*
> There is no default constructor in 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`:
> {code:java}
> /** 
>  * An {@link AuditLogger} that sends logged data directly to the metrics 
>  * systems. It is used when the top service is used directly by the name node 
>  */ 
> @InterfaceAudience.Private 
> public class TopAuditLogger implements AuditLogger { 
>   public static final Logger LOG = 
> LoggerFactory.getLogger(TopAuditLogger.class); 
>   private final TopMetrics topMetrics; 
>   public TopAuditLogger(TopMetrics topMetrics) {
> Preconditions.checkNotNull(topMetrics, "Cannot init with a null " + 
> "TopMetrics");
> this.topMetrics = topMetrics; 
>   }
>   @Override
>   public void initialize(Configuration conf) { 
>   }{code}
> As long as the configuration parameter `dfs.namenode.audit.loggers` is set to 
> `org.apache.hadoop.hdfs.server.namenode.top.TopAuditLogger`, 
> `initAuditLoggers` will try to call its default constructor to make a new 
> instance:
> {code:java}
> private List<AuditLogger> initAuditLoggers(Configuration conf) {
>   // Initialize the custom access loggers if configured.
>   Collection<String> alClasses =
>       conf.getTrimmedStringCollection(DFS_NAMENODE_AUDIT_LOGGERS_KEY);
>   List<AuditLogger> auditLoggers = Lists.newArrayList();
>   if (alClasses != null && !alClasses.isEmpty()) {
> for (String className : alClasses) {
>   try {
> AuditLogger logger;
> if (DFS_NAMENODE_DEFAULT_AUDIT_LOGGER_NAME.equals(className)) {
>   logger = new DefaultAuditLogger();
> } else {
>   logger = (AuditLogger) Class.forName(className).newInstance();
> }
> logger.initialize(conf);
> auditLoggers.add(logger);
>   } catch (RuntimeException re) {
> throw re;
>   } catch (Exception e) {
> throw new RuntimeException(e);
>   }
> }
>   }{code}
> `initAuditLoggers` tries to call the default constructor to make a new 
> instance.

[jira] [Commented] (HDFS-15436) Default mount table name used by ViewFileSystem should be configurable

2020-06-24 Thread Virajith Jalaparti (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144585#comment-17144585
 ] 

Virajith Jalaparti commented on HDFS-15436:
---

Similar to the example in the description, access to {{viewfs:///foo/bar}} 
(note the absence of an authority here) also doesn't work with the following 
configurations:

(1) {{fs.defaultFS = hdfs://clustername/}}
(2) {{fs.hdfs.impl = org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}

Not specifying an authority is a common use case when the same user code/UDFs 
needs to run on two different clusters.

> Default mount table name used by ViewFileSystem should be configurable
> --
>
> Key: HDFS-15436
> URL: https://issues.apache.org/jira/browse/HDFS-15436
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
>
> Currently, if no authority is provided and the scheme of the Path doesn't 
> match the scheme of {{fs.defaultFS}}, the mount table used by 
> ViewFileSystem to resolve the path is {{default}}. 
> This breaks accesses to paths like {{hdfs:///foo/bar}} (without any authority) 
> when the following configurations are used:
> (1) {{fs.defaultFS}} = {{viewfs://clustername/}} 
> (2) {{fs.hdfs.impl = 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}
> This JIRA proposes to add a new configuration, 
> {{fs.viewfs.mounttable.default.name.key}}, which is used to get the name of 
> the cluster/mount table when the authority is missing in cases like the 
> above. If not set, the string {{default}} will be used, as it is today.






[jira] [Updated] (HDFS-15436) Default mount table name used by ViewFileSystem should be configurable

2020-06-24 Thread Virajith Jalaparti (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Virajith Jalaparti updated HDFS-15436:
--
Component/s: viewfsOverloadScheme
 viewfs

> Default mount table name used by ViewFileSystem should be configurable
> --
>
> Key: HDFS-15436
> URL: https://issues.apache.org/jira/browse/HDFS-15436
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: viewfs, viewfsOverloadScheme
>Reporter: Virajith Jalaparti
>Assignee: Virajith Jalaparti
>Priority: Major
>
> Currently, if no authority is provided and the scheme of the Path doesn't 
> match the scheme of {{fs.defaultFS}}, the mount table used by 
> ViewFileSystem to resolve the path is {{default}}. 
> This breaks accesses to paths like {{hdfs:///foo/bar}} (without any authority) 
> when the following configurations are used:
> (1) {{fs.defaultFS}} = {{viewfs://clustername/}} 
> (2) {{fs.hdfs.impl = 
> org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}
> This JIRA proposes to add a new configuration, 
> {{fs.viewfs.mounttable.default.name.key}}, which is used to get the name of 
> the cluster/mount table when the authority is missing in cases like the 
> above. If not set, the string {{default}} will be used, as it is today.






[jira] [Created] (HDFS-15436) Default mount table name used by ViewFileSystem should be configurable

2020-06-24 Thread Virajith Jalaparti (Jira)
Virajith Jalaparti created HDFS-15436:
-

 Summary: Default mount table name used by ViewFileSystem should be 
configurable
 Key: HDFS-15436
 URL: https://issues.apache.org/jira/browse/HDFS-15436
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Virajith Jalaparti
Assignee: Virajith Jalaparti


Currently, if no authority is provided and the scheme of the Path doesn't 
match the scheme of {{fs.defaultFS}}, the mount table used by 
ViewFileSystem to resolve the path is {{default}}. 

This breaks accesses to paths like {{hdfs:///foo/bar}} (without any authority) 
when the following configurations are used:
(1) {{fs.defaultFS}} = {{viewfs://clustername/}} 
(2) {{fs.hdfs.impl = org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme}}

This JIRA proposes to add a new configuration, 
{{fs.viewfs.mounttable.default.name.key}}, which is used to get the name of the 
cluster/mount table when the authority is missing in cases like the above. If 
not set, the string {{default}} will be used, as it is today.
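
A sketch of how the proposed key could be used on the client (assuming the key 
name stays as proposed here; values are placeholders):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch of a client configuration using the proposed key; values are
// placeholders.
public class DefaultMountTableConfigSketch {
  public static Configuration build() {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "viewfs://clustername/");
    conf.set("fs.hdfs.impl",
        "org.apache.hadoop.fs.viewfs.ViewFileSystemOverloadScheme");
    // With this set, hdfs:///foo/bar (no authority) would resolve against the
    // "clustername" mount table instead of the hard-coded "default".
    conf.set("fs.viewfs.mounttable.default.name.key", "clustername");
    return conf;
  }
}
{code}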










[jira] [Comment Edited] (HDFS-15421) IBR leak causes standby NN to be stuck in safe mode

2020-06-24 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144579#comment-17144579
 ] 

Takanobu Asanuma edited comment on HDFS-15421 at 6/25/20, 1:41 AM:
---

Thanks for working on this, [~aajisaka]. Thanks for reporting it, [~kihwal].

The patch looks good to me for the cases of append and truncate. But it may 
still leak when lease recovery (block recovery) runs. The following code creates 
a new GS.
 
[https://github.com/apache/hadoop/blob/4c53fb9ce102c46c6956b4aecdfd9dd513280b35/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L3724-L3735]


was (Author: tasanuma0829):
Thanks for working on this, [~aajisaka].

The patch looks good to me for the cases of append and truncate. But it may 
still leak when lease recovery (block recovery) runs. The following code creates 
a new GS.
https://github.com/apache/hadoop/blob/4c53fb9ce102c46c6956b4aecdfd9dd513280b35/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L3724-L3735

> IBR leak causes standby NN to be stuck in safe mode
> ---
>
> Key: HDFS-15421
> URL: https://issues.apache.org/jira/browse/HDFS-15421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Kihwal Lee
>Assignee: Akira Ajisaka
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-15421-000.patch, HDFS-15421-001.patch, 
> HDFS-15421.002.patch, HDFS-15421.003.patch, HDFS-15421.004.patch
>
>
> After HDFS-14941, the update of the global gen stamp is delayed in certain 
> situations.  This makes the last set of incremental block reports from append 
> appear "from the future", which causes them to be simply re-queued to the pending DN 
> message queue, rather than processed to complete the block.  The last set of 
> IBRs will leak and never be cleaned until the NN transitions to active.  The size of 
> {{pendingDNMessages}} constantly grows until then.
> If a leak happens while in startup safe mode, the namenode will never be 
> able to come out of safe mode on its own.
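
For context, a rough sketch of the re-queue condition the description refers to 
(hypothetical names, not the actual BlockManager code):

{code:java}
// Hypothetical sketch: an IBR carrying a generation stamp newer than the NN's
// current global gen stamp is treated as "from the future" and re-queued
// instead of completing the block; if the global gen stamp update is delayed
// (HDFS-14941), such IBRs accumulate in pendingDNMessages.
class IbrRequeueSketch {
  static boolean shouldRequeue(long reportedGenStamp, long globalGenStamp) {
    return reportedGenStamp > globalGenStamp;
  }
}
{code}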






[jira] [Commented] (HDFS-15421) IBR leak causes standby NN to be stuck in safe mode

2020-06-24 Thread Takanobu Asanuma (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144579#comment-17144579
 ] 

Takanobu Asanuma commented on HDFS-15421:
-

Thanks for working on this, [~aajisaka].

The patch looks good to me for the cases of append and truncate. But it may 
still leak when lease recovery (block recovery) runs. The following code creates 
a new GS.
https://github.com/apache/hadoop/blob/4c53fb9ce102c46c6956b4aecdfd9dd513280b35/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java#L3724-L3735

> IBR leak causes standby NN to be stuck in safe mode
> ---
>
> Key: HDFS-15421
> URL: https://issues.apache.org/jira/browse/HDFS-15421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Kihwal Lee
>Assignee: Akira Ajisaka
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-15421-000.patch, HDFS-15421-001.patch, 
> HDFS-15421.002.patch, HDFS-15421.003.patch, HDFS-15421.004.patch
>
>
> After HDFS-14941, the update of the global gen stamp is delayed in certain 
> situations.  This makes the last set of incremental block reports from append 
> appear "from the future", which causes them to be simply re-queued to the pending DN 
> message queue, rather than processed to complete the block.  The last set of 
> IBRs will leak and never be cleaned until the NN transitions to active.  The size of 
> {{pendingDNMessages}} constantly grows until then.
> If a leak happens while in startup safe mode, the namenode will never be 
> able to come out of safe mode on its own.






[jira] [Commented] (HDFS-15404) ShellCommandFencer should expose info about source

2020-06-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144546#comment-17144546
 ] 

Hadoop QA commented on HDFS-15404:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
21m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
10s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 52s{color} | {color:orange} root: The patch generated 5 new + 75 unchanged - 
0 fixed = 80 total (was 75) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 4s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}250m 24s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestSafeModeWithStripedFile |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSInputStream |
|   | hadoop.hdfs.tools.TestDFSHAAdminMiniCluster |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestRollingUpgrade |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HDFS-Build/29461/artifact/out/Dockerfile
 |
| JIRA Issue 

[jira] [Commented] (HDFS-15407) Hedged read will not work if a datanode slow for a long time

2020-06-24 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144356#comment-17144356
 ] 

Wei-Chiu Chuang commented on HDFS-15407:


I am aware of an HBaseCon talk from two years ago about production 
experience with HBase Read Replicas and HDFS hedged reads, and how they bring 
availability to almost 100%: https://www.youtube.com/watch?v=l6S-Vbs9WsU

A number of hedged read bugs were found and fixed, and I believe the fixes are in 
3.1.1.

IIRC, you may also need to reduce the DataNode read/connection timeouts to lower 
values.
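
For reference, a minimal sketch of the hedged-read client settings from the 
report (the socket-timeout value below is an illustrative guess, not a 
recommendation):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch of enabling hedged reads on the client with the two keys from the
// report; timeout values are illustrative only.
public class HedgedReadConfigSketch {
  public static Configuration build() {
    Configuration conf = new Configuration();
    conf.setInt("dfs.client.hedged.read.threadpool.size", 5);
    conf.setLong("dfs.client.hedged.read.threshold.millis", 500);
    // Lower read/connect timeouts, per the comment above (value is a guess).
    conf.setInt("dfs.client.socket-timeout", 10000);
    return conf;
  }
}
{code}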

> Hedged read will not work if a datanode slow for a long time
> 
>
> Key: HDFS-15407
> URL: https://issues.apache.org/jira/browse/HDFS-15407
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: liuyanyu
>Assignee: liuyanyu
>Priority: Major
>
> I used cgroups to limit the datanode IO to 1024 bytes/s and used hedged reads to 
> read a file (where dfs.client.hedged.read.threadpool.size is set to 5 and 
> dfs.client.hedged.read.threshold.millis is set to 500). The first 5 buffer 
> reads timed out, and reads switched to other datanode nodes successfully. Then 
> the client was stuck for a long time because of a SocketTimeoutException. Log as follows:
> 2020-06-11 16:40:07,832 | INFO  | main | Waited 500ms to read from 
> DatanodeInfoWithStorage[xx.xx.xx.28:25009,DS-9c843ac6-4ea1-4791-a1af-54c1ae3d5daf,DISK];
>  spawning hedged read | DFSInputStream.java:1188
> 2020-06-11 16:40:08,562 | INFO  | main | Waited 500ms to read from 
> DatanodeInfoWithStorage[xx.xx.xx.28:25009,DS-9c843ac6-4ea1-4791-a1af-54c1ae3d5daf,DISK];
>  spawning hedged read | DFSInputStream.java:1188
> 2020-06-11 16:40:09,102 | INFO  | main | Waited 500ms to read from 
> DatanodeInfoWithStorage[xx.xx.xx.28:25009,DS-9c843ac6-4ea1-4791-a1af-54c1ae3d5daf,DISK];
>  spawning hedged read | DFSInputStream.java:1188
> 2020-06-11 16:40:09,642 | INFO  | main | Waited 500ms to read from 
> DatanodeInfoWithStorage[xx.xx.xx.28:25009,DS-9c843ac6-4ea1-4791-a1af-54c1ae3d5daf,DISK];
>  spawning hedged read | DFSInputStream.java:1188
> 2020-06-11 16:40:10,182 | INFO  | main | Waited 500ms to read from 
> DatanodeInfoWithStorage[xx.xx.xx.28:25009,DS-9c843ac6-4ea1-4791-a1af-54c1ae3d5daf,DISK];
>  spawning hedged read | DFSInputStream.java:1188
> 2020-06-11 16:40:10,182 | INFO  | main | Execution rejected, Executing in 
> current thread | DFSClient.java:3049
> 2020-06-11 16:40:10,219 | INFO  | main | Execution rejected, Executing in 
> current thread | DFSClient.java:3049
> 2020-06-11 16:50:07,638 | WARN  | hedgedRead-0 | I/O error constructing 
> remote block reader. | BlockReaderFactory.java:764
> java.net.SocketTimeoutException: 60 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/xx.xx.xx.113:62750 remote=/xx.xx.xx.28:25009]
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
>   at java.io.FilterInputStream.read(FilterInputStream.java:83)
>   at 
> org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:551)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.newBlockReader(BlockReaderRemote.java:418)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:853)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:749)
>   at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:379)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:661)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream.actualGetFromOneDataNode(DFSInputStream.java:1063)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1035)
>   at 
> org.apache.hadoop.hdfs.DFSInputStream$2.call(DFSInputStream.java:1031)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> 2020-06-11 16:50:07,638 | WARN  | hedgedRead-0 | Connection failure: Failed 
> to 

[jira] [Commented] (HDFS-15404) ShellCommandFencer should expose info about source

2020-06-24 Thread Chen Liang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144249#comment-17144249
 ] 

Chen Liang commented on HDFS-15404:
---

Thanks for checking, [~shv]! These three tests might have slipped through my 
previous local testing somehow. Updated with the v03 patch to fix these tests. At 
a high level, the fixes are:
1. Some cases mock fencing with a null target HA state, which the new change 
treated as an illegal state.
2. In the new fencing logic, for a successful failover, tryFence gets called 
twice, no longer just once; for a failed failover, if the failure happens on the 
fencing target, fencing on the source is skipped. TestFailoverController needed 
to be changed to reflect this new logic. 

> ShellCommandFencer should expose info about source
> --
>
> Key: HDFS-15404
> URL: https://issues.apache.org/jira/browse/HDFS-15404
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15404.001.patch, HDFS-15404.002.patch, 
> HDFS-15404.003.patch
>
>
> Currently the HA fencing logic in ShellCommandFencer exposes environment 
> variables about only the fencing target, i.e. the $target_* variables 
> mentioned on this [document 
> page|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html].
>  
> But only the fencing target variables are exposed. Sometimes it 
> is useful to expose info about the fencing source node. One use case: it would 
> allow the source and target nodes to identify themselves separately and run 
> different commands/scripts.
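
A rough sketch of that direction (hypothetical variable names; only the 
$target_* convention is documented):

{code:java}
import java.util.Map;

// Hypothetical sketch: expose both sides of a failover to the fencing script
// so it can distinguish source from target. Variable names are illustrative.
class FencerEnvSketch {
  static void addHaEnv(Map<String, String> env, String targetHost,
      String sourceHost) {
    env.put("target_host", targetHost); // existing $target_* convention
    env.put("source_host", sourceHost); // proposed $source_* counterpart
  }
}
{code}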






[jira] [Updated] (HDFS-15404) ShellCommandFencer should expose info about source

2020-06-24 Thread Chen Liang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HDFS-15404:
--
Attachment: HDFS-15404.003.patch

> ShellCommandFencer should expose info about source
> --
>
> Key: HDFS-15404
> URL: https://issues.apache.org/jira/browse/HDFS-15404
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15404.001.patch, HDFS-15404.002.patch, 
> HDFS-15404.003.patch
>
>
> Currently the HA fencing logic in ShellCommandFencer exposes environment 
> variables about only the fencing target, i.e. the $target_* variables 
> mentioned on this [document 
> page|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html].
>  
> But only the fencing target variables are exposed. Sometimes it 
> is useful to expose info about the fencing source node. One use case: it would 
> allow the source and target nodes to identify themselves separately and run 
> different commands/scripts.






[jira] [Commented] (HDFS-15067) Optimize heartbeat for large cluster

2020-06-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144205#comment-17144205
 ] 

Hadoop QA commented on HDFS-15067:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} prototool {color} | {color:blue}  0m  
0s{color} | {color:blue} prototool was not available. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue} markdownlint was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
20m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  3m 
11s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
28s{color} | {color:blue} branch/hadoop-project no findbugs output file 
(findbugsXml.xml) {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
28s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 18m 28s{color} | 
{color:red} root generated 17 new + 145 unchanged - 17 fixed = 162 total (was 
162) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 12s{color} | {color:orange} root: The patch generated 11 new + 605 unchanged 
- 1 fixed = 616 total (was 606) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
24s{color} | {color:blue} hadoop-project has no data from findbugs {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | 

[jira] [Commented] (HDFS-15404) ShellCommandFencer should expose info about source

2020-06-24 Thread Konstantin Shvachko (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17144130#comment-17144130
 ] 

Konstantin Shvachko commented on HDFS-15404:


Hey [~vagarychen], tests are still failing: {{TestFailoverController}}, 
{{TestShellCommandFencer}}, {{TestNodeFencer}}. They look directly related to 
your change.

> ShellCommandFencer should expose info about source
> --
>
> Key: HDFS-15404
> URL: https://issues.apache.org/jira/browse/HDFS-15404
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Liang
>Assignee: Chen Liang
>Priority: Major
> Attachments: HDFS-15404.001.patch, HDFS-15404.002.patch
>
>
> Currently the HA fencing logic in ShellCommandFencer exposes environment 
> variables about only the fencing target, i.e. the $target_* variables 
> mentioned on this [document 
> page|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html].
>  
> But only the fencing target variables are exposed. Sometimes it 
> is useful to expose info about the fencing source node. One use case: it would 
> allow the source and target nodes to identify themselves separately and run 
> different commands/scripts.






[jira] [Commented] (HDFS-15067) Optimize heartbeat for large cluster

2020-06-24 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143903#comment-17143903
 ] 

Surendra Singh Lilhore commented on HDFS-15067:
---

Attached the v3 patch.

Please review.

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-15067.01.patch, HDFS-15067.02.patch, 
> HDFS-15067.03.patch, image-2020-01-09-18-00-49-556.png
>
>
> In a large cluster, the Namenode spends significant time processing heartbeats. For 
> example, in a 10K-node cluster the Namenode processes 10K heartbeat RPCs every 
> 3 seconds. This impacts client response time. The heartbeat can be 
> optimized: a DN can start skipping alternate heartbeats if no 
> work (write/replication/delete) has been allocated to it for a long time, i.e., 
> start sending a heartbeat every 6 seconds. Once the DN starts getting work from 
> the NN, it can resume sending heartbeats normally.
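
A rough sketch of the skip logic described above (hypothetical names, not the 
attached patch):

{code:java}
// Hypothetical sketch of the adaptive interval proposed above: heartbeat every
// 6s while idle, every 3s otherwise. Names and thresholds are illustrative.
class HeartbeatIntervalSketch {
  static final long BASE_INTERVAL_MS = 3000;

  static long nextIntervalMs(long lastWorkTimeMs, long nowMs,
      long idleThresholdMs) {
    boolean idleForLong = nowMs - lastWorkTimeMs > idleThresholdMs;
    return idleForLong ? 2 * BASE_INTERVAL_MS : BASE_INTERVAL_MS;
  }
}
{code}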






[jira] [Updated] (HDFS-15067) Optimize heartbeat for large cluster

2020-06-24 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-15067:
--
Attachment: HDFS-15067.03.patch

> Optimize heartbeat for large cluster
> 
>
> Key: HDFS-15067
> URL: https://issues.apache.org/jira/browse/HDFS-15067
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: Surendra Singh Lilhore
>Priority: Major
> Attachments: HDFS-15067.01.patch, HDFS-15067.02.patch, 
> HDFS-15067.03.patch, image-2020-01-09-18-00-49-556.png
>
>
> In a large cluster, the Namenode spends significant time processing heartbeats. For 
> example, in a 10K-node cluster the Namenode processes 10K heartbeat RPCs every 
> 3 seconds. This impacts client response time. The heartbeat can be 
> optimized: a DN can start skipping alternate heartbeats if no 
> work (write/replication/delete) has been allocated to it for a long time, i.e., 
> start sending a heartbeat every 6 seconds. Once the DN starts getting work from 
> the NN, it can resume sending heartbeats normally.






[jira] [Commented] (HDFS-15421) IBR leak causes standby NN to be stuck in safe mode

2020-06-24 Thread Kihwal Lee (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143880#comment-17143880
 ] 

Kihwal Lee commented on HDFS-15421:
---

Patch 004 looks good to me. +1.

> IBR leak causes standby NN to be stuck in safe mode
> ---
>
> Key: HDFS-15421
> URL: https://issues.apache.org/jira/browse/HDFS-15421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Kihwal Lee
>Assignee: Akira Ajisaka
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-15421-000.patch, HDFS-15421-001.patch, 
> HDFS-15421.002.patch, HDFS-15421.003.patch, HDFS-15421.004.patch
>
>
> After HDFS-14941, the update of the global gen stamp is delayed in certain 
> situations.  This makes the last set of incremental block reports from append 
> appear "from the future", which causes them to be simply re-queued to the pending DN 
> message queue, rather than processed to complete the block.  The last set of 
> IBRs will leak and never be cleaned until the NN transitions to active.  The size of 
> {{pendingDNMessages}} constantly grows until then.
> If a leak happens while in startup safe mode, the namenode will never be 
> able to come out of safe mode on its own.






[jira] [Commented] (HDFS-15421) IBR leak causes standby NN to be stuck in safe mode

2020-06-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143865#comment-17143865
 ] 

Hadoop QA commented on HDFS-15421:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 40s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
53s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 29s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HDFS-Build/29459/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15421 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006336/HDFS-15421.004.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux f6c8e629a2f4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 84110d850e2 |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
| unit | 

[jira] [Commented] (HDFS-15434) RBF: MountTableResolver#getDestinationForPath failing with AssertionError from localCache

2020-06-24 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143851#comment-17143851
 ] 

hemanthboyina commented on HDFS-15434:
--

Though we couldn't reproduce the issue, we suspect it occurred under high 
concurrency.

Having discussed with [~brahmareddy] offline, I think we can add a 
removalListener to the CacheBuilder to solve the problem.

Any suggestions?
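
For reference, a minimal sketch of that idea with Guava's CacheBuilder (a 
String-to-String cache is assumed here as a stand-in for the resolver's 
location cache; the real MountTableResolver types differ):

{code:java}
// A removalListener is invoked whenever an entry is evicted or
// invalidated, which would let the resolver react to evictions from
// its location cache.
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class RemovalListenerSketch {
  public static void main(String[] args) {
    RemovalListener<String, String> listener =
        (RemovalNotification<String, String> n) ->
            System.out.println("evicted " + n.getKey()
                + ", cause: " + n.getCause());

    Cache<String, String> locationCache = CacheBuilder.newBuilder()
        .maximumSize(100)            // size-based eviction fires the listener
        .removalListener(listener)
        .build();

    locationCache.put("/user/data", "ns0->/user/data");
    locationCache.invalidate("/user/data");  // fires with cause EXPLICIT
  }
}
{code}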

> RBF: MountTableResolver#getDestinationForPath failing with AssertionError 
> from localCache
> -
>
> Key: HDFS-15434
> URL: https://issues.apache.org/jira/browse/HDFS-15434
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: hemanthboyina
>Priority: Major
>
> {code:java}
> org.apache.hadoop.ipc.RemoteException: java.lang.AssertionError
> at com.google.common.cache.LocalCache$Segment.evictEntries(LocalCache.java:2698)
> at com.google.common.cache.LocalCache$Segment.storeLoadedValue(LocalCache.java:3166)
> at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2386)
> at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2351)
> at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
> at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
> at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
> at org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:382)
> at org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver.getDestinationForPath(MultipleDestinationMountTableResolver.java:87)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1406)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1389)
> at org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.getFileInfo(RouterClientProtocol.java:741)
> at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:763)
>  {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15435) HdfsDtFetcher only fetches first DT of a filesystem

2020-06-24 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-15435:
-

 Summary: HdfsDtFetcher only fetches first DT of a filesystem
 Key: HDFS-15435
 URL: https://issues.apache.org/jira/browse/HDFS-15435
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs, security
Affects Versions: 3.3.0
Reporter: Steve Loughran


Similar to HDFS-15433, only a single DT per FS is picked up.

Here the fault is in org.apache.hadoop.hdfs.HdfsDtFetcher. 
Found in testing of HADOOP-17077.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15433) hdfs fetchdt command only fetches first DT of a filesystem

2020-06-24 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-15433:
--
Priority: Minor  (was: Major)

> hdfs fetchdt command only fetches first DT of a filesystem
> --
>
> Key: HDFS-15433
> URL: https://issues.apache.org/jira/browse/HDFS-15433
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
>
> The {{hdfs fetchdt}} command only fetches the first DT of a filesystem, not 
> any other tokens issued (e.g. KMS tokens).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15434) RBF: MountTableResolver#getDestinationForPath failing with AssertionError from localCache

2020-06-24 Thread hemanthboyina (Jira)
hemanthboyina created HDFS-15434:


 Summary: RBF: MountTableResolver#getDestinationForPath failing 
with AssertionError from localCache
 Key: HDFS-15434
 URL: https://issues.apache.org/jira/browse/HDFS-15434
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: hemanthboyina


{code:java}
org.apache.hadoop.ipc.RemoteException: java.lang.AssertionError
at com.google.common.cache.LocalCache$Segment.evictEntries(LocalCache.java:2698)
at com.google.common.cache.LocalCache$Segment.storeLoadedValue(LocalCache.java:3166)
at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2386)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2351)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764)
at org.apache.hadoop.hdfs.server.federation.resolver.MountTableResolver.getDestinationForPath(MountTableResolver.java:382)
at org.apache.hadoop.hdfs.server.federation.resolver.MultipleDestinationMountTableResolver.getDestinationForPath(MultipleDestinationMountTableResolver.java:87)
at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1406)
at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getLocationsForPath(RouterRpcServer.java:1389)
at org.apache.hadoop.hdfs.server.federation.router.RouterClientProtocol.getFileInfo(RouterClientProtocol.java:741)
at org.apache.hadoop.hdfs.server.federation.router.RouterRpcServer.getFileInfo(RouterRpcServer.java:763)
 {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15433) hdfs fetchdt command only fetches first DT of a filesystem

2020-06-24 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143842#comment-17143842
 ] 

Steve Loughran commented on HDFS-15433:
---

Found in testing of HADOOP-17077.

> hdfs fetchdt command only fetches first DT of a filesystem
> --
>
> Key: HDFS-15433
> URL: https://issues.apache.org/jira/browse/HDFS-15433
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> The {{hdfs fetchdt}} command only fetches the first DT of a filesystem, not 
> any other tokens issued (e.g. KMS tokens).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15433) hdfs fetchdt command only fetches first DT of a filesystem

2020-06-24 Thread Steve Loughran (Jira)
Steve Loughran created HDFS-15433:
-

 Summary: hdfs fetchdt command only fetches first DT of a filesystem
 Key: HDFS-15433
 URL: https://issues.apache.org/jira/browse/HDFS-15433
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 3.3.0
Reporter: Steve Loughran


The {{hdfs fetchdt}} command only fetches the first DT of a filesystem, not any 
other tokens issued (e.g. KMS tokens).
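
One likely fix direction (an assumption on my part, not the committed patch) is 
to collect tokens via FileSystem#addDelegationTokens, which gathers every token 
the filesystem and its dependent services issue, instead of a single 
getDelegationToken call. A minimal sketch:

{code:java}
// Sketch, assuming a reachable hdfs://nn1/ endpoint and a logged-in
// Kerberos user; addDelegationTokens collects all tokens (HDFS, KMS, ...)
// into the Credentials object and returns the newly issued ones.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class FetchAllTokens {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(new Path("hdfs://nn1/").toUri(), conf);
    Credentials creds = new Credentials();
    Token<?>[] issued = fs.addDelegationTokens("yarn", creds);  // renewer is illustrative
    for (Token<?> t : issued) {
      System.out.println(t.getKind() + " for " + t.getService());
    }
  }
}
{code}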



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15424) Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"

2020-06-24 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143806#comment-17143806
 ] 

Akira Ajisaka edited comment on HDFS-15424 at 6/24/20, 12:35 PM:
-

Yetus PR: https://github.com/apache/yetus/pull/112
Draft Hadoop PR to test the above Yetus PR: 
https://github.com/apache/hadoop/pull/2098


was (Author: ajisakaa):
Yetus PR: https://github.com/apache/yetus/pull/112
Hadoop PR to test the above Yetus PR: https://github.com/apache/hadoop/pull/2098

> Javadoc failing with "cannot find symbol  
> com.google.protobuf.GeneratedMessageV3 implements"
> 
>
> Key: HDFS-15424
> URL: https://issues.apache.org/jira/browse/HDFS-15424
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
> Environment: Java 11
>Reporter: Uma Maheswara Rao G
>Assignee: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  17.982 s
> [INFO] Finished at: 2020-06-20T01:56:28Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
> project hadoop-hdfs: An error has occurred in Javadoc report generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197:
>  error: cannot find symbol
> [ERROR]   com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]  ^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073:
>  error: package com.google.protobuf.GeneratedMessageV3 does not exist
> [ERROR]   private 
> PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) {
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15424) Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"

2020-06-24 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143806#comment-17143806
 ] 

Akira Ajisaka commented on HDFS-15424:
--

Yetus PR: https://github.com/apache/yetus/pull/112
Hadoop PR to test the above Yetus PR: https://github.com/apache/hadoop/pull/2098

> Javadoc failing with "cannot find symbol  
> com.google.protobuf.GeneratedMessageV3 implements"
> 
>
> Key: HDFS-15424
> URL: https://issues.apache.org/jira/browse/HDFS-15424
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
> Environment: Java 11
>Reporter: Uma Maheswara Rao G
>Assignee: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  17.982 s
> [INFO] Finished at: 2020-06-20T01:56:28Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
> project hadoop-hdfs: An error has occurred in Javadoc report generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197:
>  error: cannot find symbol
> [ERROR]   com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]  ^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073:
>  error: package com.google.protobuf.GeneratedMessageV3 does not exist
> [ERROR]   private 
> PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) {
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDFS-13934) Multipart uploaders to be created through API call to FileSystem/FileContext, not service loader

2020-06-24 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-13934 started by Steve Loughran.
-
> Multipart uploaders to be created through API call to FileSystem/FileContext, 
> not service loader
> 
>
> Key: HDFS-13934
> URL: https://issues.apache.org/jira/browse/HDFS-13934
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs, fs/s3, hdfs
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The multipart uploaders are created via service loaders. This is troublesome:
> # HADOOP-12636, HADOOP-13323 and HADOOP-13625 highlight how the load process 
> forces the transient loading of dependencies. If a dependent class cannot be 
> loaded (e.g. aws-sdk is not on the classpath), that service won't load. 
> Without error handling around the load process, this stops any uploader from 
> loading. Even with that error handling, the performance hit of that load, 
> especially with reshaded dependencies, hurts performance (HADOOP-13138).
> # It makes wrapping the load with any filter impossible, and stops transitive 
> binding through viewFS, mocking, etc.
> # It complicates security in a kerberized world. If you have an FS instance 
> of user A, then you should be able to create an MPU instance with that user's 
> permissions. Currently, if a service were to try to create one, you'd be 
> looking at doAs() games around the service loading, and a more complex bind 
> process.
> Proposed (see the sketch below):
> # Remove the service loader mechanism entirely.
> # Add a createMultipartUploader(path) call to FS & FC, which will create one 
> bound to the current FS, with its permissions, DTs, etc.
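> A rough sketch of the proposed API shape (names and signatures here are 
> illustrative assumptions, not the committed Hadoop interfaces):
> {code:java}
> // Illustrative only: the uploader is created by the FileSystem instance
> // itself, so it inherits that instance's credentials and delegation
> // tokens instead of being discovered through a service loader.
> import java.io.IOException;
>
> interface MultipartUploaderSketch {
>   // initiate(path), putPart(...), complete(...), abort(...) elided
> }
>
> abstract class FileSystemWithMpu {
>   /** Default: unsupported; filesystems that can do MPU override this. */
>   public MultipartUploaderSketch createMultipartUploader(String path)
>       throws IOException {
>     throw new UnsupportedOperationException(
>         "Multipart upload not supported by " + getClass().getSimpleName());
>   }
> }
> {code}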



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15421) IBR leak causes standby NN to be stuck in safe mode

2020-06-24 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143768#comment-17143768
 ] 

Akira Ajisaka commented on HDFS-15421:
--

004
* Fixed test failure

> IBR leak causes standby NN to be stuck in safe mode
> ---
>
> Key: HDFS-15421
> URL: https://issues.apache.org/jira/browse/HDFS-15421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Kihwal Lee
>Assignee: Akira Ajisaka
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-15421-000.patch, HDFS-15421-001.patch, 
> HDFS-15421.002.patch, HDFS-15421.003.patch, HDFS-15421.004.patch
>
>
> After HDFS-14941, update of the global gen stamp is delayed in certain 
> situations.  This makes the last set of incremental block reports from append 
> appear to be "from the future", which causes them to be simply re-queued to 
> the pending DN message queue rather than processed to complete the block.  The 
> last set of IBRs will leak and never be cleaned until the namenode transitions 
> to active.  The size of 
> {{pendingDNMessages}} constantly grows until then.
> If a leak happens while in a startup safe mode, the namenode will never be 
> able to come out of safe mode on its own.
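> For intuition, a toy model of the re-queue condition described above (assumed 
> names and simplified logic, not the NameNode implementation):
> {code:java}
> // An IBR whose generation stamp is ahead of the namenode's global stamp
> // is parked in a pending queue instead of completing the block; if the
> // global stamp never advances, the parked entry leaks.
> import java.util.ArrayDeque;
> import java.util.Queue;
>
> public class IbrRequeueSketch {
>   static long globalGenStamp = 100;                      // standby NN's view
>   static Queue<Long> pendingDNMessages = new ArrayDeque<>();
>
>   static void onIncrementalBlockReport(long reportedGenStamp) {
>     if (reportedGenStamp > globalGenStamp) {
>       pendingDNMessages.add(reportedGenStamp);           // "from the future"
>     } else {
>       System.out.println("block completed @ genstamp " + reportedGenStamp);
>     }
>   }
>
>   public static void main(String[] args) {
>     onIncrementalBlockReport(101);                       // parked; may leak
>     System.out.println("pendingDNMessages size = " + pendingDNMessages.size());
>   }
> }
> {code}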



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15421) IBR leak causes standby NN to be stuck in safe mode

2020-06-24 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143770#comment-17143770
 ] 

Akira Ajisaka commented on HDFS-15421:
--

Thanks [~vagarychen] for your review.

> IBR leak causes standby NN to be stuck in safe mode
> ---
>
> Key: HDFS-15421
> URL: https://issues.apache.org/jira/browse/HDFS-15421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Kihwal Lee
>Assignee: Akira Ajisaka
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-15421-000.patch, HDFS-15421-001.patch, 
> HDFS-15421.002.patch, HDFS-15421.003.patch, HDFS-15421.004.patch
>
>
> After HDFS-14941, update of the global gen stamp is delayed in certain 
> situations.  This makes the last set of incremental block reports from append 
> appear to be "from the future", which causes them to be simply re-queued to 
> the pending DN message queue rather than processed to complete the block.  The 
> last set of IBRs will leak and never be cleaned until the namenode transitions 
> to active.  The size of 
> {{pendingDNMessages}} constantly grows until then.
> If a leak happens while in a startup safe mode, the namenode will never be 
> able to come out of safe mode on its own.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15421) IBR leak causes standby NN to be stuck in safe mode

2020-06-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15421:
-
Attachment: HDFS-15421.004.patch

> IBR leak causes standby NN to be stuck in safe mode
> ---
>
> Key: HDFS-15421
> URL: https://issues.apache.org/jira/browse/HDFS-15421
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Kihwal Lee
>Assignee: Akira Ajisaka
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HDFS-15421-000.patch, HDFS-15421-001.patch, 
> HDFS-15421.002.patch, HDFS-15421.003.patch, HDFS-15421.004.patch
>
>
> After HDFS-14941, update of the global gen stamp is delayed in certain 
> situations.  This makes the last set of incremental block reports from append 
> appear to be "from the future", which causes them to be simply re-queued to 
> the pending DN message queue rather than processed to complete the block.  The 
> last set of IBRs will leak and never be cleaned until the namenode transitions 
> to active.  The size of 
> {{pendingDNMessages}} constantly grows until then.
> If a leak happens while in a startup safe mode, the namenode will never be 
> able to come out of safe mode on its own.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15409) Optimization Strategy for choosing ShortCircuitCache

2020-06-24 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143743#comment-17143743
 ] 

Lisheng Sun edited comment on HDFS-15409 at 6/24/20, 10:44 AM:
---

{quote}
So we could do it more tricky for case when clientShortCircuitNum = 3.

Divide by modulo 100 (instead of 10).
If two last digits between 0 and 32 then put it into clientShortCircuitNum[0] 
-> 32% of blocks
If two last digits between 33 and 65 then put it into clientShortCircuitNum[1] 
-> 32% of blocks
If two last digits between 66 and 99 then put it into clientShortCircuitNum[2] 
-> 33% of blocks
Similar logic we can use when clientShortCircuitNum = 4: 

Divide by modulo 100.
If two last digits between 0 and 24 then put it into clientShortCircuitNum[0] 
-> 25% of blocks
If two last digits between 25 and 49 then put it into clientShortCircuitNum[1] 
-> 25% of blocks
If two last digits between 50 and 74 then put it into clientShortCircuitNum[2] 
-> 25% of blocks
If two last digits between 75 and 99 then put it into clientShortCircuitNum[3] 
-> 25% of blocks
{quote}
I think these results come from the new strategy rather than the current code.
Your test results are relatively uniform.
The current strategy is blockid % 10.
So if clientShortCircuitNum is 3 or 4, what is the result with the current code?
I think your new-strategy results are better than the current ones. :)


was (Author: leosun08):
{quote}
So we could do it more tricky for case when clientShortCircuitNum = 3.

Divide by modulo 100 (instead of 10).
If two last digits between 0 and 32 then put it into clientShortCircuitNum[0] 
-> 32% of blocks
If two last digits between 33 and 65 then put it into clientShortCircuitNum[1] 
-> 32% of blocks
If two last digits between 66 and 99 then put it into clientShortCircuitNum[2] 
-> 33% of blocks
Similar logic we can use when clientShortCircuitNum = 4: 

Divide by modulo 100.
If two last digits between 0 and 24 then put it into clientShortCircuitNum[0] 
-> 25% of blocks
If two last digits between 25 and 49 then put it into clientShortCircuitNum[1] 
-> 25% of blocks
If two last digits between 50 and 74 then put it into clientShortCircuitNum[2] 
-> 25% of blocks
If two last digits between 75 and 99 then put it into clientShortCircuitNum[3] 
-> 25% of blocks
{quote}
I think these results come from the new strategy rather than the current code.
Your test results are relatively uniform.
The current strategy is blockid % 10.
So if clientShortCircuitNum is 3 or 4, what is the result with the current code?

>  Optimization Strategy for choosing ShortCircuitCache
> -
>
> Key: HDFS-15409
> URL: https://issues.apache.org/jira/browse/HDFS-15409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
>
> When clientShortCircuitNum is 10, blocks fall into each ShortCircuitCache with 
> equal probability, while for other values of clientShortCircuitNum the 
> distribution is uneven.
> For example, if clientShortCircuitNum is 3 and a lot of the blockids of SSR 
> are ***1, ***4, ***7, they will all fall into a single ShortCircuitCache.
> Since real-environment blockids are completely unpredictable, I think we need 
> to design a strategy for allocating blocks to a specific ShortCircuitCache. 
> This should improve performance even more.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-15409) Optimization Strategy for choosing ShortCircuitCache

2020-06-24 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143743#comment-17143743
 ] 

Lisheng Sun edited comment on HDFS-15409 at 6/24/20, 10:42 AM:
---

{quote}
So we could do it more tricky for case when clientShortCircuitNum = 3.

Divide by modulo 100 (instead of 10).
If two last digits between 0 and 32 then put it into clientShortCircuitNum[0] 
-> 32% of blocks
If two last digits between 33 and 65 then put it into clientShortCircuitNum[1] 
-> 32% of blocks
If two last digits between 66 and 99 then put it into clientShortCircuitNum[2] 
-> 33% of blocks
Similar logic we can use when clientShortCircuitNum = 4: 

Divide by modulo 100.
If two last digits between 0 and 24 then put it into clientShortCircuitNum[0] 
-> 25% of blocks
If two last digits between 25 and 49 then put it into clientShortCircuitNum[1] 
-> 25% of blocks
If two last digits between 50 and 74 then put it into clientShortCircuitNum[2] 
-> 25% of blocks
If two last digits between 75 and 99 then put it into clientShortCircuitNum[3] 
-> 25% of blocks
{quote}
I think these results come from the new strategy rather than the current code.
Your test results are relatively uniform.
The current strategy is blockid % 10.
So if clientShortCircuitNum is 3 or 4, what is the result with the current code?


was (Author: leosun08):
{quote}
So we could do it more tricky for case when clientShortCircuitNum = 3.

Divide by modulo 100 (instead of 10).
If two last digits between 0 and 32 then put it into clientShortCircuitNum[0] 
-> 32% of blocks
If two last digits between 33 and 65 then put it into clientShortCircuitNum[1] 
-> 32% of blocks
If two last digits between 66 and 99 then put it into clientShortCircuitNum[2] 
-> 33% of blocks
Similar logic we can use when clientShortCircuitNum = 4: 

Divide by modulo 100.
If two last digits between 0 and 24 then put it into clientShortCircuitNum[0] 
-> 25% of blocks
If two last digits between 25 and 49 then put it into clientShortCircuitNum[1] 
-> 25% of blocks
If two last digits between 50 and 74 then put it into clientShortCircuitNum[2] 
-> 25% of blocks
If two last digits between 75 and 99 then put it into clientShortCircuitNum[3] 
-> 25% of blocks
{quote}
I think these results come from the new strategy rather than the current code.
The current strategy is blockid % 10.
So if clientShortCircuitNum is 3 or 4, what is the result with the current code?

>  Optimization Strategy for choosing ShortCircuitCache
> -
>
> Key: HDFS-15409
> URL: https://issues.apache.org/jira/browse/HDFS-15409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
>
> When clientShortCircuitNum is 10, blocks fall into each ShortCircuitCache with 
> equal probability, while for other values of clientShortCircuitNum the 
> distribution is uneven.
> For example, if clientShortCircuitNum is 3 and a lot of the blockids of SSR 
> are ***1, ***4, ***7, they will all fall into a single ShortCircuitCache.
> Since real-environment blockids are completely unpredictable, I think we need 
> to design a strategy for allocating blocks to a specific ShortCircuitCache. 
> This should improve performance even more.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15409) Optimization Strategy for choosing ShortCircuitCache

2020-06-24 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143743#comment-17143743
 ] 

Lisheng Sun commented on HDFS-15409:


{quote}
So we could do it more tricky for case when clientShortCircuitNum = 3.

Divide by modulo 100 (instead of 10).
If two last digits between 0 and 32 then put it into clientShortCircuitNum[0] 
-> 32% of blocks
If two last digits between 33 and 65 then put it into clientShortCircuitNum[1] 
-> 32% of blocks
If two last digits between 66 and 99 then put it into clientShortCircuitNum[2] 
-> 33% of blocks
Similar logic we can use when clientShortCircuitNum = 4: 

Divide by modulo 100.
If two last digits between 0 and 24 then put it into clientShortCircuitNum[0] 
-> 25% of blocks
If two last digits between 25 and 49 then put it into clientShortCircuitNum[1] 
-> 25% of blocks
If two last digits between 50 and 74 then put it into clientShortCircuitNum[2] 
-> 25% of blocks
If two last digits between 75 and 99 then put it into clientShortCircuitNum[3] 
-> 25% of blocks
{quote}
I think these results come from the new strategy rather than the current code.
The current strategy is blockid % 10.
So if clientShortCircuitNum is 3 or 4, what is the result with the current code?

>  Optimization Strategy for choosing ShortCircuitCache
> -
>
> Key: HDFS-15409
> URL: https://issues.apache.org/jira/browse/HDFS-15409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
>
> When clientShortCircuitNum is 10, blocks fall into each ShortCircuitCache with 
> equal probability, while for other values of clientShortCircuitNum the 
> distribution is uneven.
> For example, if clientShortCircuitNum is 3 and a lot of the blockids of SSR 
> are ***1, ***4, ***7, they will all fall into a single ShortCircuitCache.
> Since real-environment blockids are completely unpredictable, I think we need 
> to design a strategy for allocating blocks to a specific ShortCircuitCache. 
> This should improve performance even more.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15424) Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"

2020-06-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15424:
-
Component/s: build

> Javadoc failing with "cannot find symbol  
> com.google.protobuf.GeneratedMessageV3 implements"
> 
>
> Key: HDFS-15424
> URL: https://issues.apache.org/jira/browse/HDFS-15424
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: build
> Environment: Java 11
>Reporter: Uma Maheswara Rao G
>Assignee: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  17.982 s
> [INFO] Finished at: 2020-06-20T01:56:28Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
> project hadoop-hdfs: An error has occurred in Javadoc report generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197:
>  error: cannot find symbol
> [ERROR]   com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]  ^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073:
>  error: package com.google.protobuf.GeneratedMessageV3 does not exist
> [ERROR]   private 
> PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) {
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-15424) Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"

2020-06-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-15424:
-
Description: 
{noformat}
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time:  17.982 s
[INFO] Finished at: 2020-06-20T01:56:28Z
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
project hadoop-hdfs: An error has occurred in Javadoc report generation: 
[ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
as HTML 4.01 by using the -html4 option.
[ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
removed
[ERROR] in a future release. To suppress this warning, please ensure that any 
HTML constructs
[ERROR] in your comments are valid in HTML5, and remove the -html4 option.
[ERROR] 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197:
 error: cannot find symbol
[ERROR]   com.google.protobuf.GeneratedMessageV3 implements
[ERROR]  ^
[ERROR]   symbol:   class GeneratedMessageV3
[ERROR]   location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319:
 error: cannot find symbol
[ERROR] com.google.protobuf.GeneratedMessageV3 implements
[ERROR]^
[ERROR]   symbol:   class GeneratedMessageV3
[ERROR]   location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068:
 error: cannot find symbol
[ERROR] com.google.protobuf.GeneratedMessageV3 implements
[ERROR]^
[ERROR]   symbol:   class GeneratedMessageV3
[ERROR]   location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073:
 error: package com.google.protobuf.GeneratedMessageV3 does not exist
[ERROR]   private 
PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) {
{noformat}


  was:

{noformat}
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time:  17.982 s
[INFO] Finished at: 2020-06-20T01:56:28Z
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
project hadoop-hdfs: An error has occurred in Javadoc report generation: 
[ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
as HTML 4.01 by using the -html4 option.
[ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
removed
[ERROR] in a future release. To suppress this warning, please ensure that any 
HTML constructs
[ERROR] in your comments are valid in HTML5, and remove the -html4 option.
[ERROR] 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197:
 error: cannot find symbol
[ERROR]   com.google.protobuf.GeneratedMessageV3 implements
[ERROR]  ^
[ERROR]   symbol:   class GeneratedMessageV3
[ERROR]   location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319:
 error: cannot find symbol
[ERROR] com.google.protobuf.GeneratedMessageV3 implements
[ERROR]^
[ERROR]   symbol:   class GeneratedMessageV3
[ERROR]   location: package com.google.protobuf
[ERROR] 
/home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068:
 error: cannot find symbol
[ERROR] com.google.protobuf.GeneratedMessageV3 implements
[ERROR]^
[ERROR]   symbol:   class GeneratedMessageV3
[ERROR]   location: package com.google.protobuf
[ERROR] 

[jira] [Commented] (HDFS-15424) Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"

2020-06-24 Thread Uma Maheswara Rao G (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143604#comment-17143604
 ] 

Uma Maheswara Rao G commented on HDFS-15424:


Thanks a lot [~aajisaka] for taking care of this.

> Javadoc failing with "cannot find symbol  
> com.google.protobuf.GeneratedMessageV3 implements"
> 
>
> Key: HDFS-15424
> URL: https://issues.apache.org/jira/browse/HDFS-15424
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Uma Maheswara Rao G
>Assignee: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  17.982 s
> [INFO] Finished at: 2020-06-20T01:56:28Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
> project hadoop-hdfs: An error has occurred in Javadoc report generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197:
>  error: cannot find symbol
> [ERROR]   com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]  ^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073:
>  error: package com.google.protobuf.GeneratedMessageV3 does not exist
> [ERROR]   private 
> PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) {
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15409) Optimization Strategy for choosing ShortCircuitCache

2020-06-24 Thread Danil Lipovoy (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143602#comment-17143602
 ] 

Danil Lipovoy commented on HDFS-15409:
--

About HBase block distribution - all tests show the same picture - it is quite 
evenly distributed.

I've tried the approach proposed above, but unfortunately the results weren't 
good.


{code:java}
if (clientShortCircuitNum == 3) {
  idx = idx % 100;
  if (idx <= 32) {
    LOG.info("shortCircuitCache: 0");
    return shortCircuitCache[0];
  }
  if (idx > 32 && idx <= 65) {
    LOG.info("shortCircuitCache: 1");
    return shortCircuitCache[1];
  }
  // note: the test originally used "idx > 66", which left idx == 66 unhandled
  if (idx > 65) {
    LOG.info("shortCircuitCache: 2");
    return shortCircuitCache[2];
  }
}
{code}
 

 

cat /var/log/hbase/hbase-cmf-hbase-REGIONSERVER-home.com.log.out | grep 
shortCircuitCache | awk '{print $8}' | sort | uniq -c | sort -nr | awk 
'{printf "%-8s%s\n", $2, $1}' | sort

0 581125
1 621340
2 450377

So the difference is bigger here, and it looks like it's not worth the effort.
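
For context, a quick stand-alone simulation of the two selection rules (my own 
sketch, not project code). With uniformly random ids both rules spread evenly; 
the skew described in this issue only appears when block-id suffixes cluster:

{code:java}
// Compares blockId % 3 with the two-last-digits bucketing for
// clientShortCircuitNum = 3 over one million random block ids.
import java.util.Arrays;
import java.util.Random;

public class CacheSelectionSim {
  public static void main(String[] args) {
    Random rnd = new Random(42);
    int[] byModulo = new int[3];
    int[] byBuckets = new int[3];
    for (int i = 0; i < 1_000_000; i++) {
      long blockId = rnd.nextLong() & Long.MAX_VALUE;  // force non-negative
      byModulo[(int) (blockId % 3)]++;
      int lastTwo = (int) (blockId % 100);
      byBuckets[lastTwo <= 32 ? 0 : (lastTwo <= 65 ? 1 : 2)]++;
    }
    System.out.println("blockId % 3:       " + Arrays.toString(byModulo));
    System.out.println("two-digit buckets: " + Arrays.toString(byBuckets));
  }
}
{code}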

 

>  Optimization Strategy for choosing ShortCircuitCache
> -
>
> Key: HDFS-15409
> URL: https://issues.apache.org/jira/browse/HDFS-15409
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
>
> When clientShortCircuitNum is 10, the probability of falling into each 
> ShortCircuitCache is the same, while the probability of other 
> clientShortCircuitNum is different.
> For example if clientShortCircuitNum is 3, when a lot of blockids of SSR are 
> ***1, ***4, ***7, this situation will fall into a ShortCircuitCache.
> Since the real environment blockid is completely unpredictable, i think it is 
> need to design a strategy which is allocated to a specific ShortCircuitCache. 
> This should improve performance even more.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15312) Apply umask when creating directory by WebHDFS

2020-06-24 Thread Ye Ni (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143601#comment-17143601
 ] 

Ye Ni commented on HDFS-15312:
--

[~inigoiri] PR provided.

> Apply umask when creating directory by WebHDFS
> --
>
> Key: HDFS-15312
> URL: https://issues.apache.org/jira/browse/HDFS-15312
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: webhdfs
>Reporter: Ye Ni
>Assignee: Ye Ni
>Priority: Minor
>
> WebHDFS methods for creating files/directories were always creating them with 
> 755 permissions by default, for both files and directories.
> The configured *fs.permissions.umask-mode* is intentionally ignored.
> This Jira is to apply this setting in such scenarios (see the sketch below).
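> A minimal sketch of deriving the effective permission from the configured 
> umask (illustrative; presumably the actual PR wires this into the WebHDFS 
> create/mkdirs handlers):
> {code:java}
> // Derives the effective directory permission from fs.permissions.umask-mode
> // instead of hard-coding 755, using public FsPermission helpers.
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.permission.FsPermission;
>
> public class UmaskSketch {
>   public static void main(String[] args) {
>     Configuration conf = new Configuration();
>     conf.set("fs.permissions.umask-mode", "027");
>     FsPermission umask = FsPermission.getUMask(conf);
>     // directory default 777 masked by 027 -> 750
>     FsPermission effective = FsPermission.getDirDefault().applyUMask(umask);
>     System.out.println("effective dir permission: " + effective);
>   }
> }
> {code}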



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15424) Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"

2020-06-24 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143596#comment-17143596
 ] 

Akira Ajisaka commented on HDFS-15424:
--

We can fix this problem by running
{noformat}
mvn process-sources javadoc:javadoc-no-fork
{noformat}

I'd like to update the Hadoop personality setting so that Yetus uses it.

> Javadoc failing with "cannot find symbol  
> com.google.protobuf.GeneratedMessageV3 implements"
> 
>
> Key: HDFS-15424
> URL: https://issues.apache.org/jira/browse/HDFS-15424
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Uma Maheswara Rao G
>Assignee: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  17.982 s
> [INFO] Finished at: 2020-06-20T01:56:28Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
> project hadoop-hdfs: An error has occurred in Javadoc report generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197:
>  error: cannot find symbol
> [ERROR]   com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]  ^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073:
>  error: package com.google.protobuf.GeneratedMessageV3 does not exist
> [ERROR]   private 
> PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) {
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-15213) Fix indentation in BlockInfoStriped

2020-06-24 Thread Hongbing Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongbing Wang reassigned HDFS-15213:


Assignee: Hongbing Wang

> Fix indentation in BlockInfoStriped
> ---
>
> Key: HDFS-15213
> URL: https://issues.apache.org/jira/browse/HDFS-15213
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ec
>Affects Versions: 3.2.1
>Reporter: Hongbing Wang
>Assignee: Hongbing Wang
>Priority: Trivial
> Attachments: HDFS-15213.001.patch
>
>
> One method is not well indented.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15421) IBR leak causes standby NN to be stuck in safe mode

2020-06-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143590#comment-17143590
 ] 

Hadoop QA commented on HDFS-15421:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
53s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 45s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}182m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
|   | hadoop.hdfs.server.namenode.ha.TestUpdateBlockTailing |
|   | hadoop.hdfs.server.datanode.TestBPOfferService |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.balancer.TestBalancer |
|   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HDFS-Build/29458/artifact/out/Dockerfile
 |
| JIRA Issue | HDFS-15421 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13006311/HDFS-15421.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux af4782cd6508 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Assigned] (HDFS-15424) Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"

2020-06-24 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HDFS-15424:


Assignee: Akira Ajisaka

> Javadoc failing with "cannot find symbol  
> com.google.protobuf.GeneratedMessageV3 implements"
> 
>
> Key: HDFS-15424
> URL: https://issues.apache.org/jira/browse/HDFS-15424
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Uma Maheswara Rao G
>Assignee: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  17.982 s
> [INFO] Finished at: 2020-06-20T01:56:28Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
> project hadoop-hdfs: An error has occurred in Javadoc report generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197:
>  error: cannot find symbol
> [ERROR]   com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]  ^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073:
>  error: package com.google.protobuf.GeneratedMessageV3 does not exist
> [ERROR]   private 
> PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) {
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15424) Javadoc failing with "cannot find symbol com.google.protobuf.GeneratedMessageV3 implements"

2020-06-24 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143589#comment-17143589
 ] 

Akira Ajisaka commented on HDFS-15424:
--

The source file is automatically generated in the generate-sources phase, and the com.google.protobuf.* references are later replaced with org.apache.hadoop.thirdparty.protobuf.* by maven-replacer-plugin in the process-sources phase. The javadoc:javadoc goal, however, runs right after generate-sources, which is before process-sources, so the references have not yet been replaced when the javadoc command executes. This is why the error occurs. In Java 8, this issue is reported as a warning (not an error), as follows:

{noformat}
[INFO] --- maven-javadoc-plugin:3.0.1:javadoc (default-cli) @ hadoop-common ---
[INFO] 
ExcludePrivateAnnotationsStandardDoclet
101 warnings
[WARNING] Javadoc Warnings
[WARNING] 
/home/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/generated-sources/java/org/apache/hadoop/ipc/protobuf/RpcHeaderProtos.java:3467:
 error: cannot find symbol
[WARNING] com.google.protobuf.GeneratedMessageV3 implements
[WARNING] ^
[WARNING] symbol:   class GeneratedMessageV3
{noformat}

In Java 11, it becomes an error.
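
For concreteness, a minimal before/after sketch of the affected declaration (abbreviated and illustrative; the class name comes from the error log above, and the OrBuilder interface name follows the usual protoc naming convention):

{noformat}
// Before process-sources: this is what javadoc:javadoc sees, and the
// symbol com.google.protobuf.GeneratedMessageV3 cannot be resolved here.
public static final class PersistToken extends
    com.google.protobuf.GeneratedMessageV3 implements PersistTokenOrBuilder {
  // ... generated fields and methods ...
}

// After maven-replacer-plugin has run in process-sources: this is what
// actually gets compiled.
public static final class PersistToken extends
    org.apache.hadoop.thirdparty.protobuf.GeneratedMessageV3
    implements PersistTokenOrBuilder {
  // ... generated fields and methods ...
}
{noformat}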

> Javadoc failing with "cannot find symbol  
> com.google.protobuf.GeneratedMessageV3 implements"
> 
>
> Key: HDFS-15424
> URL: https://issues.apache.org/jira/browse/HDFS-15424
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Uma Maheswara Rao G
>Priority: Major
>
> {noformat}
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time:  17.982 s
> [INFO] Finished at: 2020-06-20T01:56:28Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
> project hadoop-hdfs: An error has occurred in Javadoc report generation: 
> [ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
> as HTML 4.01 by using the -html4 option.
> [ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
> removed
> [ERROR] in a future release. To suppress this warning, please ensure that any 
> HTML constructs
> [ERROR] in your comments are valid in HTML5, and remove the -html4 option.
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25197:
>  error: cannot find symbol
> [ERROR]   com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]  ^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:25319:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26068:
>  error: cannot find symbol
> [ERROR] com.google.protobuf.GeneratedMessageV3 implements
> [ERROR]^
> [ERROR]   symbol:   class GeneratedMessageV3
> [ERROR]   location: package com.google.protobuf
> [ERROR] 
> /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-2084/src/hadoop-hdfs-project/hadoop-hdfs/target/generated-sources/java/org/apache/hadoop/hdfs/server/namenode/FsImageProto.java:26073:
>  error: package com.google.protobuf.GeneratedMessageV3 does not exist
> [ERROR]   private 
> PersistToken(com.google.protobuf.GeneratedMessageV3.Builder builder) {
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-15432) RBF: Move cache datanode reports from NamenodeBeanMetrics to RouterRpcServer

2020-06-24 Thread Ye Ni (Jira)
Ye Ni created HDFS-15432:


 Summary: RBF: Move cache datanode reports from NamenodeBeanMetrics 
to RouterRpcServer
 Key: HDFS-15432
 URL: https://issues.apache.org/jira/browse/HDFS-15432
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: federation, rbf
Reporter: Ye Ni


Datanode reports in the Router should be cached in the RPC server and exposed through a set of APIs, rather than each client fetching and caching them separately.
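
A minimal sketch of the intended pattern (illustrative only, not the actual patch; the class name, report type, and refresh interval are placeholders) using Guava's Suppliers.memoizeWithExpiration:

{noformat}
import java.util.concurrent.TimeUnit;

import com.google.common.base.Supplier;
import com.google.common.base.Suppliers;

public class CachedDatanodeReport {
  // One shared, lazily refreshed copy of the report for all RPC handlers,
  // recomputed at most once per interval instead of once per client call.
  private final Supplier<String[]> cachedReport =
      Suppliers.memoizeWithExpiration(this::fetchReport, 10, TimeUnit.SECONDS);

  public String[] getDatanodeReport() {
    return cachedReport.get();
  }

  private String[] fetchReport() {
    // Placeholder: in RouterRpcServer this would aggregate the reports
    // from the downstream namespaces.
    return new String[] {"dn-1:9866", "dn-2:9866"};
  }
}
{noformat}

Callers keep invoking getDatanodeReport() as before; only the first call in each interval pays the cost of the downstream fetch.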



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-15416) DataStorage#addStorageLocations() should add more reasonable information verification.

2020-06-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143559#comment-17143559
 ] 

Hadoop QA commented on HDFS-15416:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 1s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 20m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 4m 22s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 35s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}195m 57s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
|   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.TestReconstructStripedFile |
|   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HDFS-Build/29456/artifact/out/Dockerfile |
| JIRA Issue | HDFS-15416 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13006307/HDFS-15416.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 29ebfd837aaa 4.15.0-101-generic #102-Ubuntu SMP Mon May