[jira] [Updated] (HADOOP-16761) KMSClientProvider does not work with client using ticket logged in externally

2020-07-23 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-16761:
-
Status: Open  (was: Patch Available)

> KMSClientProvider does not work with client using ticket logged in externally 
> --
>
> Key: HADOOP-16761
> URL: https://issues.apache.org/jira/browse/HADOOP-16761
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Blocker
>
> This is a regression from HDFS-13682, which checks not only for Kerberos 
> credentials but also enforces that the login is non-external. This breaks client 
> applications that need to access HDFS encrypted files using a Kerberos ticket 
> that was obtained externally and placed in the ticket cache. 
>  
>  
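
For illustration, a minimal sketch of the two login paths the report contrasts (assuming Kerberos security is enabled in core-site.xml; this is not the actual KMSClientProvider check). A ticket obtained externally with {{kinit}} is picked up from the ticket cache by UGI, while a keytab login is performed by the application itself; both report Kerberos credentials, so a check that additionally rejects external logins breaks the first path.

{code:java}
// Illustrative only -- not KMSClientProvider code. Shows the distinction the
// report relies on: external (kinit/ticket-cache) login vs. in-process keytab login.
import org.apache.hadoop.security.UserGroupInformation;

public class LoginPaths {
  public static void main(String[] args) throws Exception {
    // Path 1: external login -- the user ran `kinit`; UGI reads the ticket cache.
    UserGroupInformation fromCache = UserGroupInformation.getLoginUser();

    // Path 2: in-process keytab login (principal and keytab path are placeholders):
    // UserGroupInformation.loginUserFromKeytab("svc/host@REALM", "/etc/security/svc.keytab");

    System.out.println("kerberos creds: " + fromCache.hasKerberosCredentials()
        + ", ticket-cache login: " + UserGroupInformation.isLoginTicketBased()
        + ", keytab login: " + UserGroupInformation.isLoginKeytabBased());
  }
}
{code}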






[jira] [Updated] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2020-05-12 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15524:
-
Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~jesmith3] for the contribution. Merged the changes to trunk.

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> ---
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Joseph Smith
>Assignee: Joseph Smith
>Priority: Major
> Fix For: 3.4.0
>
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array.  On my environment, this causes an OOME
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> byte[Integer.MAX_VALUE-2] must be used to prevent this error.
> Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  
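
A minimal sketch of the proposed guard (assuming a ~1.5x growth policy; not necessarily the committed change): cap capacity growth at a VM-safe maximum, mirroring java.util.ArrayList.

{code:java}
// Hedged sketch: grow capacity without exceeding the VM array-size limit.
public class BoundedGrowth {
  // Some VMs refuse allocations this close to Integer.MAX_VALUE (see ArrayList).
  private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

  /** Grow by ~1.5x, but never past MAX_ARRAY_SIZE. */
  static int newCapacity(int currentCapacity, int requestedSize) {
    long grown = Math.max((long) currentCapacity * 3 / 2, requestedSize);
    return (int) Math.min(grown, MAX_ARRAY_SIZE);
  }

  public static void main(String[] args) {
    // Near the limit the capacity is clamped to Integer.MAX_VALUE - 8 instead of
    // requesting an array the VM rejects with "Requested array size exceeds VM limit".
    System.out.println(newCapacity(Integer.MAX_VALUE - 100, Integer.MAX_VALUE - 100));
  }
}
{code}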






[jira] [Updated] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2020-05-11 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15524:
-
Status: Patch Available  (was: Open)

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> ---
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Joseph Smith
>Assignee: Joseph Smith
>Priority: Major
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array.  On my environment, this causes an OOME
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> byte[Integer.MAX_VALUE-2] must be used to prevent this error.
> Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  






[jira] [Commented] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2020-05-11 Thread Nanda kumar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17104735#comment-17104735
 ] 

Nanda kumar commented on HADOOP-15524:
--

Thanks for the update [~arp].
I'm +1 on the change. Just retriggered Jenkins, will merge it after the build.

https://builds.apache.org/job/hadoop-multibranch/job/PR-393/10/

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> ---
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Joseph Smith
>Assignee: Joseph Smith
>Priority: Major
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array.  On my environment, this causes an OOME
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> byte[Integer.MAX_VALUE-2] must be used to prevent this error.
> Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  






[jira] [Updated] (HADOOP-16534) Exclude submarine from hadoop source build

2019-09-03 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-16534:
-
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Exclude submarine from hadoop source build
> --
>
> Key: HADOOP-16534
> URL: https://issues.apache.org/jira/browse/HADOOP-16534
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16534.000.patch
>
>
> When we build the Hadoop source package, it should not contain the submarine 
> project/code.






[jira] [Commented] (HADOOP-16534) Exclude submarine from hadoop source build

2019-09-03 Thread Nanda kumar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921377#comment-16921377
 ] 

Nanda kumar commented on HADOOP-16534:
--

Tested the patch locally; I will go ahead and commit it.

> Exclude submarine from hadoop source build
> --
>
> Key: HADOOP-16534
> URL: https://issues.apache.org/jira/browse/HADOOP-16534
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-16534.000.patch
>
>
> When we build the Hadoop source package, it should not contain the submarine 
> project/code.






[jira] [Commented] (HADOOP-16534) Exclude submarine from hadoop source build

2019-09-03 Thread Nanda kumar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921376#comment-16921376
 ] 

Nanda kumar commented on HADOOP-16534:
--

There seems to be some problem with the pre-commit build jobs; all the machines 
seem to be offline.
https://builds.apache.org/job/PreCommit-HADOOP-Build

> Exclude submarine from hadoop source build
> --
>
> Key: HADOOP-16534
> URL: https://issues.apache.org/jira/browse/HADOOP-16534
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-16534.000.patch
>
>
> When we build the Hadoop source package, it should not contain the submarine 
> project/code.






[jira] [Updated] (HADOOP-16534) Exclude submarine from hadoop source build

2019-09-02 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-16534:
-
Attachment: HADOOP-16534.000.patch

> Exclude submarine from hadoop source build
> --
>
> Key: HADOOP-16534
> URL: https://issues.apache.org/jira/browse/HADOOP-16534
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-16534.000.patch
>
>
> When we build the Hadoop source package, it should not contain the submarine 
> project/code.






[jira] [Commented] (HADOOP-16534) Exclude submarine from hadoop source build

2019-09-02 Thread Nanda kumar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921177#comment-16921177
 ] 

Nanda kumar commented on HADOOP-16534:
--

Thanks for the review, [~sunilg].

I'm not sure how the pre-commit Jenkins jobs work for PRs.
I tried to manually trigger the job: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1356/
But the result is not getting reported in the PR.

I will attach a patch here for the Jenkins job to kick in.

> Exclude submarine from hadoop source build
> --
>
> Key: HADOOP-16534
> URL: https://issues.apache.org/jira/browse/HADOOP-16534
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> When we build the Hadoop source package, it should not contain the submarine 
> project/code.






[jira] [Updated] (HADOOP-16534) Exclude submarine from hadoop source build

2019-08-29 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-16534:
-
Status: Patch Available  (was: Open)

> Exclude submarine from hadoop source build
> --
>
> Key: HADOOP-16534
> URL: https://issues.apache.org/jira/browse/HADOOP-16534
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> When we build the Hadoop source package, it should not contain the submarine 
> project/code.






[jira] [Updated] (HADOOP-16534) Exclude submarine from hadoop source build

2019-08-27 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-16534:
-
Summary: Exclude submarine from hadoop source build  (was: Exclude 
submarine code from hadoop source build)

> Exclude submarine from hadoop source build
> --
>
> Key: HADOOP-16534
> URL: https://issues.apache.org/jira/browse/HADOOP-16534
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> When we build the Hadoop source package, it should not contain the submarine 
> project/code.






[jira] [Created] (HADOOP-16534) Exclude submarine code from hadoop source build

2019-08-27 Thread Nanda kumar (Jira)
Nanda kumar created HADOOP-16534:


 Summary: Exclude submarine code from hadoop source build
 Key: HADOOP-16534
 URL: https://issues.apache.org/jira/browse/HADOOP-16534
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Nanda kumar
Assignee: Nanda kumar


When we build the Hadoop source package, it should not contain the submarine 
project/code.






[jira] [Commented] (HADOOP-15989) Synchronized at CompositeService#removeService is not required

2019-05-07 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16834600#comment-16834600
 ] 

Nanda kumar commented on HADOOP-15989:
--

Thanks [~Prabhu Joseph] for the patch. It looks good to me.
Nitpick: Can you also add a debug log in {{removeService}}, like the logs we 
have in the {{addService}} and {{serviceStart}} methods?

> Synchronized at CompositeService#removeService is not required
> --
>
> Key: HADOOP-15989
> URL: https://issues.apache.org/jira/browse/HADOOP-15989
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: 0001-HADOOP-15989.patch
>
>
> Synchronization at CompositeService#removeService method level is not 
> required.
> {code}
> protected synchronized boolean removeService(Service service) {
> synchronized (serviceList) {
> return serviceList.remove(service);
> }
> }
> {code}
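
A minimal standalone sketch of the proposed shape (not CompositeService itself, and not necessarily the committed patch): the inner synchronized block on the list is sufficient, so the method-level {{synchronized}} modifier can simply be dropped; the DEBUG log reflects the nitpick above.

{code:java}
// Standalone sketch -- a stand-in list and logger, not the real CompositeService fields.
import java.util.ArrayList;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ServiceList {
  private static final Logger LOG = LoggerFactory.getLogger(ServiceList.class);
  private final List<Object> serviceList = new ArrayList<>();

  // No method-level synchronized: the block below already guards serviceList.
  protected boolean removeService(Object service) {
    LOG.debug("Removing service {}", service);
    synchronized (serviceList) {
      return serviceList.remove(service);
    }
  }
}
{code}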






[jira] [Assigned] (HADOOP-14544) DistCp documentation for command line options is misaligned.

2019-04-11 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reassigned HADOOP-14544:


Assignee: Masatake Iwasaki  (was: Nanda kumar)

> DistCp documentation for command line options is misaligned.
> 
>
> Key: HADOOP-14544
> URL: https://issues.apache.org/jira/browse/HADOOP-14544
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.3, 3.2.1
>Reporter: Chris Nauroth
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: DistCp 2.7.3 Documentation.png, HADOOP-14544.001.patch
>
>
> In the DistCp documentation, the Command Line Options section appears to be 
> misaligned/incorrect in some of the Notes for release 2.7.3.  This is the 
> current stable version, so it's likely that users will drive into this 
> version of the document.






[jira] [Commented] (HADOOP-16243) Change Log Level to trace in NetUtils.java

2019-04-10 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16814259#comment-16814259
 ] 

Nanda kumar commented on HADOOP-16243:
--

[~bharatviswa], I moved the jira to Hadoop Common. [~candychencan], I made you a 
contributor in "Hadoop Common" and assigned the jira to you.

> Change Log Level to trace in NetUtils.java
> --
>
> Key: HADOOP-16243
> URL: https://issues.apache.org/jira/browse/HADOOP-16243
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1407.001.patch
>
>
> When there is no String constructor for the exception, we log a warning message 
> and rethrow the exception. We can change the log level to TRACE/DEBUG.
>  
> {code:java}
> private static <T extends IOException> T wrapWithMessage(
>     T exception, String msg) throws T {
>   Class<? extends Throwable> clazz = exception.getClass();
>   try {
>     Constructor<? extends Throwable> ctor = clazz.getConstructor(String.class);
>     Throwable t = ctor.newInstance(msg);
>     return (T)(t.initCause(exception));
>   } catch (Throwable e) {
>     LOG.warn("Unable to wrap exception of type {}: it has no (String) "
>         + "constructor", clazz, e);
>     throw exception;
>   }
> }{code}
> {code:java}
> 2019-04-09 18:07:27,824 WARN ipc.Client 
> (Client.java:handleConnectionFailure(938)) - Interrupted while trying for 
> connection
> 2019-04-09 18:07:27,826 WARN net.NetUtils 
> (NetUtils.java:wrapWithMessage(834)) - Unable to wrap exception of type class 
> java.nio.channels.ClosedByInterruptException: it has no (String) constructor
> java.lang.NoSuchMethodException: 
> java.nio.channels.ClosedByInterruptException.(java.lang.String)
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.getConstructor(Class.java:1825)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:830)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy84.register(Unknown Source)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.register(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:160)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:120)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}
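
A hedged sketch of the direction proposed here (not the committed patch): the wrapping logic stays the same, but the "no (String) constructor" case is logged at DEBUG instead of WARN, since the original exception is rethrown anyway.

{code:java}
// Sketch only -- a self-contained stand-in for NetUtils.wrapWithMessage with the
// log level demoted from WARN to DEBUG.
import java.io.IOException;
import java.lang.reflect.Constructor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class ExceptionWrapping {
  private static final Logger LOG = LoggerFactory.getLogger(ExceptionWrapping.class);

  @SuppressWarnings("unchecked")
  static <T extends IOException> T wrapWithMessage(T exception, String msg) throws T {
    Class<? extends Throwable> clazz = exception.getClass();
    try {
      Constructor<? extends Throwable> ctor = clazz.getConstructor(String.class);
      Throwable t = ctor.newInstance(msg);
      return (T) t.initCause(exception);
    } catch (Throwable e) {
      // Proposed change: DEBUG instead of WARN -- the caller still gets the original exception.
      LOG.debug("Unable to wrap exception of type {}: it has no (String) constructor", clazz, e);
      throw exception;
    }
  }
}
{code}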






[jira] [Assigned] (HADOOP-16243) Change Log Level to trace in NetUtils.java

2019-04-10 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reassigned HADOOP-16243:


Assignee: chencan

> Change Log Level to trace in NetUtils.java
> --
>
> Key: HADOOP-16243
> URL: https://issues.apache.org/jira/browse/HADOOP-16243
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: chencan
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1407.001.patch
>
>
> When there is no String constructor for the exception, we log a warning message 
> and rethrow the exception. We can change the log level to TRACE/DEBUG.
>  
> {code:java}
> private static <T extends IOException> T wrapWithMessage(
>     T exception, String msg) throws T {
>   Class<? extends Throwable> clazz = exception.getClass();
>   try {
>     Constructor<? extends Throwable> ctor = clazz.getConstructor(String.class);
>     Throwable t = ctor.newInstance(msg);
>     return (T)(t.initCause(exception));
>   } catch (Throwable e) {
>     LOG.warn("Unable to wrap exception of type {}: it has no (String) "
>         + "constructor", clazz, e);
>     throw exception;
>   }
> }{code}
> {code:java}
> 2019-04-09 18:07:27,824 WARN ipc.Client 
> (Client.java:handleConnectionFailure(938)) - Interrupted while trying for 
> connection
> 2019-04-09 18:07:27,826 WARN net.NetUtils 
> (NetUtils.java:wrapWithMessage(834)) - Unable to wrap exception of type class 
> java.nio.channels.ClosedByInterruptException: it has no (String) constructor
> java.lang.NoSuchMethodException: 
> java.nio.channels.ClosedByInterruptException.(java.lang.String)
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.getConstructor(Class.java:1825)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:830)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy84.register(Unknown Source)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.register(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:160)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:120)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}






[jira] [Assigned] (HADOOP-16243) Change Log Level to trace in NetUtils.java

2019-04-10 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reassigned HADOOP-16243:


Assignee: (was: chencan)
Workflow: no-reopen-closed, patch-avail  (was: patch-available, re-open 
possible)
 Key: HADOOP-16243  (was: HDDS-1407)
 Project: Hadoop Common  (was: Hadoop Distributed Data Store)

> Change Log Level to trace in NetUtils.java
> --
>
> Key: HADOOP-16243
> URL: https://issues.apache.org/jira/browse/HADOOP-16243
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
> Attachments: HDDS-1407.001.patch
>
>
> When there is no String constructor for the exception, we log a warning message 
> and rethrow the exception. We can change the log level to TRACE/DEBUG.
>  
> {code:java}
> private static <T extends IOException> T wrapWithMessage(
>     T exception, String msg) throws T {
>   Class<? extends Throwable> clazz = exception.getClass();
>   try {
>     Constructor<? extends Throwable> ctor = clazz.getConstructor(String.class);
>     Throwable t = ctor.newInstance(msg);
>     return (T)(t.initCause(exception));
>   } catch (Throwable e) {
>     LOG.warn("Unable to wrap exception of type {}: it has no (String) "
>         + "constructor", clazz, e);
>     throw exception;
>   }
> }{code}
> {code:java}
> 2019-04-09 18:07:27,824 WARN ipc.Client 
> (Client.java:handleConnectionFailure(938)) - Interrupted while trying for 
> connection
> 2019-04-09 18:07:27,826 WARN net.NetUtils 
> (NetUtils.java:wrapWithMessage(834)) - Unable to wrap exception of type class 
> java.nio.channels.ClosedByInterruptException: it has no (String) constructor
> java.lang.NoSuchMethodException: 
> java.nio.channels.ClosedByInterruptException.(java.lang.String)
> at java.lang.Class.getConstructor0(Class.java:3082)
> at java.lang.Class.getConstructor(Class.java:1825)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:830)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
> at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1515)
> at org.apache.hadoop.ipc.Client.call(Client.java:1457)
> at org.apache.hadoop.ipc.Client.call(Client.java:1367)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
> at com.sun.proxy.$Proxy84.register(Unknown Source)
> at 
> org.apache.hadoop.ozone.protocolPB.StorageContainerDatanodeProtocolClientSideTranslatorPB.register(StorageContainerDatanodeProtocolClientSideTranslatorPB.java:160)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:120)
> at 
> org.apache.hadoop.ozone.container.common.states.endpoint.RegisterEndpointTask.call(RegisterEndpointTask.java:47)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748){code}






[jira] [Resolved] (HADOOP-7050) proxyuser host/group config properties don't work if user name as DOT in it

2019-01-08 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar resolved HADOOP-7050.
-
Resolution: Duplicate

> proxyuser host/group config properties don't work if user name as DOT in it
> ---
>
> Key: HADOOP-7050
> URL: https://issues.apache.org/jira/browse/HADOOP-7050
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Alejandro Abdelnur
>Priority: Major
>
> If the user name contains a DOT, e.g. "foo.bar", the proxy user configuration fails 
> to be read properly and does not kick in.
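
For illustration only, a toy pattern showing why a dotted user name falls through (the actual Hadoop-side matching may differ): the user name is embedded in the property key, so any rule that expects a single dot-free segment never recognises {{foo.bar}}.

{code:java}
// Toy example -- the regex is illustrative, not Hadoop's actual implementation.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ProxyUserKey {
  public static void main(String[] args) {
    Pattern p = Pattern.compile("hadoop\\.proxyuser\\.([^.]+)\\.hosts");
    for (String key : new String[] {
        "hadoop.proxyuser.oozie.hosts",       // recognised, user = "oozie"
        "hadoop.proxyuser.foo.bar.hosts"}) {  // not recognised: "foo.bar" contains a dot
      Matcher m = p.matcher(key);
      System.out.println(key + " -> " + (m.matches() ? m.group(1) : "not recognised"));
    }
  }
}
{code}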






[jira] [Commented] (HADOOP-7050) proxyuser host/group config properties don't work if user name as DOT in it

2019-01-08 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-7050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16737168#comment-16737168
 ] 

Nanda kumar commented on HADOOP-7050:
-

This issue is fixed in HADOOP-15395.

> proxyuser host/group config properties don't work if user name as DOT in it
> ---
>
> Key: HADOOP-7050
> URL: https://issues.apache.org/jira/browse/HADOOP-7050
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Alejandro Abdelnur
>Priority: Major
>
> If the user name contains a DOT, e.g. "foo.bar", the proxy user configuration fails 
> to be read properly and does not kick in.






[jira] [Updated] (HADOOP-15747) warning about user:pass in URI to explicitly call out Hadoop 3.2 as removal

2018-09-16 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15747:
-
Description: Now that HADOOP-14833 has removed user:secret from URIs, 
change the warning printed when people do that to explicitly declare Hadoop 3.3 
as the release when it will happen. Do the same in the docs.

> warning about user:pass in URI to explicitly call out Hadoop 3.2 as removal
> ---
>
> Key: HADOOP-15747
> URL: https://issues.apache.org/jira/browse/HADOOP-15747
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.9.1, 3.1.1, 3.0.3
> Environment: Now that HADOOP-14833 has removed user:secret from URIs, 
> change the warning printed when people do that to explicitly declare Hadoop 
> 3.3 as the release when it will happen. Do the same in the docs
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Now that HADOOP-14833 has removed user:secret from URIs, change the warning 
> printed when people do that to explicitly declare Hadoop 3.3 as the release 
> when it will happen. Do the same in the docs.






[jira] [Updated] (HADOOP-15747) warning about user:pass in URI to explicitly call out Hadoop 3.2 as removal

2018-09-16 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15747:
-
Environment: (was: Now that HADOOP-14833 has removed user:secret from 
URIs, change the warning printed when people do that to explicitly declare 
Hadoop 3.3 as the release when it will happen. Do the same in the docs)

> warning about user:pass in URI to explicitly call out Hadoop 3.2 as removal
> ---
>
> Key: HADOOP-15747
> URL: https://issues.apache.org/jira/browse/HADOOP-15747
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 2.9.1, 3.1.1, 3.0.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Now that HADOOP-14833 has removed user:secret from URIs, change the warning 
> printed when people do that to explicitly declare Hadoop 3.3 as the release 
> when it will happen. Do the same in the docs.






[jira] [Commented] (HADOOP-15574) Suppress build error if there are no docs after excluding private annotations

2018-07-01 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16529055#comment-16529055
 ] 

Nanda kumar commented on HADOOP-15574:
--

Thanks [~tasanuma0829] for the contribution. I have committed this to 
branch-3.1 and trunk. 

> Suppress build error if there are no docs after excluding private annotations
> -
>
> Key: HADOOP-15574
> URL: https://issues.apache.org/jira/browse/HADOOP-15574
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDDS-202.1.patch
>
>
> Seen in hadoop-ozone when building with the Maven hdds profile enabled.
> {noformat}
> $ mvn clean install -DskipTests -DskipShade -Phdds -Pdist --projects 
> hadoop-ozone/ozonefs
> ...
> [INFO] --- maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) @ 
> hadoop-ozone-filesystem ---
> [INFO]
> ExcludePrivateAnnotationsStandardDoclet
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13.223 s
> [INFO] Finished at: 2018-06-28T19:46:49+09:00
> [INFO] Final Memory: 122M/1196M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-ozone-filesystem: MavenReportException: Error while 
> generating Javadoc:
> [ERROR] Exit code: 1 - Picked up _JAVA_OPTIONS: -Duser.language=en
> [ERROR] java.lang.ArrayIndexOutOfBoundsException: 0
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setTopFile(ConfigurationImpl.java:537)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setSpecificDocletOptions(ConfigurationImpl.java:309)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.Configuration.setOptions(Configuration.java:560)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:134)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:82)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:80)
> [ERROR]   at 
> com.sun.tools.doclets.standard.Standard.start(Standard.java:39)
> [ERROR]   at 
> org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet.start(ExcludePrivateAnnotationsStandardDoclet.java:41)
> [ERROR]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [ERROR]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [ERROR]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [ERROR]   at java.lang.reflect.Method.invoke(Method.java:498)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:310)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:189)
> [ERROR]   at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:366)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:219)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:205)
> [ERROR]   at com.sun.tools.javadoc.Main.execute(Main.java:64)
> [ERROR]   at com.sun.tools.javadoc.Main.main(Main.java:54)
> {noformat}






[jira] [Updated] (HADOOP-15574) Suppress build error if there are no docs after excluding private annotations

2018-07-01 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15574:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.1.1
   3.2.0
   Status: Resolved  (was: Patch Available)

> Suppress build error if there are no docs after excluding private annotations
> -
>
> Key: HADOOP-15574
> URL: https://issues.apache.org/jira/browse/HADOOP-15574
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: HDDS-202.1.patch
>
>
> Seen in hadoop-ozone when building with the Maven hdds profile enabled.
> {noformat}
> $ mvn clean install -DskipTests -DskipShade -Phdds -Pdist --projects 
> hadoop-ozone/ozonefs
> ...
> [INFO] --- maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) @ 
> hadoop-ozone-filesystem ---
> [INFO]
> ExcludePrivateAnnotationsStandardDoclet
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13.223 s
> [INFO] Finished at: 2018-06-28T19:46:49+09:00
> [INFO] Final Memory: 122M/1196M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-ozone-filesystem: MavenReportException: Error while 
> generating Javadoc:
> [ERROR] Exit code: 1 - Picked up _JAVA_OPTIONS: -Duser.language=en
> [ERROR] java.lang.ArrayIndexOutOfBoundsException: 0
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setTopFile(ConfigurationImpl.java:537)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setSpecificDocletOptions(ConfigurationImpl.java:309)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.Configuration.setOptions(Configuration.java:560)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:134)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:82)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:80)
> [ERROR]   at 
> com.sun.tools.doclets.standard.Standard.start(Standard.java:39)
> [ERROR]   at 
> org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet.start(ExcludePrivateAnnotationsStandardDoclet.java:41)
> [ERROR]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [ERROR]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [ERROR]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [ERROR]   at java.lang.reflect.Method.invoke(Method.java:498)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:310)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:189)
> [ERROR]   at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:366)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:219)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:205)
> [ERROR]   at com.sun.tools.javadoc.Main.execute(Main.java:64)
> [ERROR]   at com.sun.tools.javadoc.Main.main(Main.java:54)
> {noformat}






[jira] [Commented] (HADOOP-15574) Suppress build error if there are no docs after excluding private annotations

2018-07-01 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16529052#comment-16529052
 ] 

Nanda kumar commented on HADOOP-15574:
--

+1, LGTM. I will commit this shortly.

> Suppress build error if there are no docs after excluding private annotations
> -
>
> Key: HADOOP-15574
> URL: https://issues.apache.org/jira/browse/HADOOP-15574
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HDDS-202.1.patch
>
>
> Seen in hadoop-ozone when building with the Maven hdds profile enabled.
> {noformat}
> $ mvn clean install -DskipTests -DskipShade -Phdds -Pdist --projects 
> hadoop-ozone/ozonefs
> ...
> [INFO] --- maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) @ 
> hadoop-ozone-filesystem ---
> [INFO]
> ExcludePrivateAnnotationsStandardDoclet
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 13.223 s
> [INFO] Finished at: 2018-06-28T19:46:49+09:00
> [INFO] Final Memory: 122M/1196M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-javadoc-plugin:3.0.0-M1:jar (module-javadocs) 
> on project hadoop-ozone-filesystem: MavenReportException: Error while 
> generating Javadoc:
> [ERROR] Exit code: 1 - Picked up _JAVA_OPTIONS: -Duser.language=en
> [ERROR] java.lang.ArrayIndexOutOfBoundsException: 0
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setTopFile(ConfigurationImpl.java:537)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.ConfigurationImpl.setSpecificDocletOptions(ConfigurationImpl.java:309)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.Configuration.setOptions(Configuration.java:560)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.startGeneration(AbstractDoclet.java:134)
> [ERROR]   at 
> com.sun.tools.doclets.internal.toolkit.AbstractDoclet.start(AbstractDoclet.java:82)
> [ERROR]   at 
> com.sun.tools.doclets.formats.html.HtmlDoclet.start(HtmlDoclet.java:80)
> [ERROR]   at 
> com.sun.tools.doclets.standard.Standard.start(Standard.java:39)
> [ERROR]   at 
> org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet.start(ExcludePrivateAnnotationsStandardDoclet.java:41)
> [ERROR]   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> [ERROR]   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> [ERROR]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [ERROR]   at java.lang.reflect.Method.invoke(Method.java:498)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.invoke(DocletInvoker.java:310)
> [ERROR]   at 
> com.sun.tools.javadoc.DocletInvoker.start(DocletInvoker.java:189)
> [ERROR]   at com.sun.tools.javadoc.Start.parseAndExecute(Start.java:366)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:219)
> [ERROR]   at com.sun.tools.javadoc.Start.begin(Start.java:205)
> [ERROR]   at com.sun.tools.javadoc.Main.execute(Main.java:64)
> [ERROR]   at com.sun.tools.javadoc.Main.main(Main.java:54)
> {noformat}






[jira] [Commented] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2018-06-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511979#comment-16511979
 ] 

Nanda kumar commented on HADOOP-15524:
--

Thanks [~jesmith3] for the PR; the change looks good to me.
cc: [~arpitagarwal], can you please add [~jesmith3] as a contributor and assign 
this jira to him?

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> ---
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Joseph Smith
>Priority: Major
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array.  On my environment, this causes an OOME
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> byte[Integer.MAX_VALUE-2] must be used to prevent this error.
> Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  






[jira] [Commented] (HADOOP-15523) Shell command timeout given is in seconds whereas it is taken as millisec while scheduling

2018-06-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511203#comment-16511203
 ] 

Nanda kumar commented on HADOOP-15523:
--

Thanks [~BilwaST] for updating the patch.
Looks good to me, +1 (non-binding).

> Shell command timeout given is in seconds whereas it is taken as millisec 
> while scheduling
> --
>
> Key: HADOOP-15523
> URL: https://issues.apache.org/jira/browse/HADOOP-15523
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: HADOOP-15523-001.patch, HADOOP-15523-002.patch, 
> HADOOP-15523-003.patch
>
>
> ShellBasedUnixGroupsMapping has a property 
> {{hadoop.security.groups.shell.command.timeout}} to control how long to wait 
> for the fetch-groups command, and it can be configured in seconds. But while 
> scheduling, the value is treated as milliseconds, so currently if you give a 
> value of 60s, it is taken as 60ms.
> {code:java}
> timeout = conf.getTimeDuration(
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS,
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT,
> TimeUnit.SECONDS);{code}
> The time unit given is seconds, but it should be milliseconds.
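
One possible fix direction, sketched below (not necessarily the committed patch; the constant names are taken from the snippet above): read the duration in seconds as the property name implies, then convert explicitly to the milliseconds the scheduler expects.

{code:java}
// Sketch only: explicit SECONDS -> MILLISECONDS conversion so a configured "60s"
// resolves to 60000 ms instead of 60 ms.
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

public class ShellTimeoutFix {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    long timeoutMs = TimeUnit.SECONDS.toMillis(conf.getTimeDuration(
        CommonConfigurationKeys.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS,
        CommonConfigurationKeys.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT,
        TimeUnit.SECONDS));
    System.out.println("fetch-groups timeout in ms: " + timeoutMs);
  }
}
{code}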






[jira] [Commented] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2018-06-13 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511171#comment-16511171
 ] 

Nanda kumar commented on HADOOP-15524:
--

Thanks [~jesmith3] for reporting this.
Yes, we should handle this case in {{BytesWritable#setSize}} in a similar way to 
how it's handled in {{ArrayList}}.

Please let me know if you need any help in creating a patch for this issue.

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> ---
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Joseph Smith
>Priority: Major
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array.  On my environment, this causes an OOME
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> byte[Integer.MAX_VALUE-2] must be used to prevent this error.
> Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  






[jira] [Commented] (HADOOP-15523) Shell command timeout given is in seconds whereas it is taken as millisec while scheduling

2018-06-09 Thread Nanda kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16507284#comment-16507284
 ] 

Nanda kumar commented on HADOOP-15523:
--

Thanks [~BilwaST] for reporting and working on this.

Overall the patch looks good to me, some minor comments.
We can rename the property key name in {{CommonConfigurationKeysPublic}}:
 * from {{HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS}} to 
{{HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_KEY}}
 * from {{HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT}} to 
{{HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT}}

Not related to this patch:
 *ShellBasedUnixGroupsMapping.java*
Line:21 unused import
Line:55 {{"timeout = 0L"}} can be re-factored to
 {{"timeout = 
CommonConfigurationKeys.HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_DEFAULT"}}

> Shell command timeout given is in seconds whereas it is taken as millisec 
> while scheduling
> --
>
> Key: HADOOP-15523
> URL: https://issues.apache.org/jira/browse/HADOOP-15523
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
> Attachments: HADOOP-15523-001.patch
>
>
> ShellBasedUnixGroupsMapping has a property 
> {{hadoop.security.groups.shell.command.timeout}} to control how long to wait 
> for the fetch-groups command, and it can be configured in seconds. But while 
> scheduling, the value is treated as milliseconds, so currently if you give a 
> value of 60s, it is taken as 60ms.
> {code:java}
> timeout = conf.getTimeDuration(
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS,
> CommonConfigurationKeys.
> HADOOP_SECURITY_GROUP_SHELL_COMMAND_TIMEOUT_SECS_DEFAULT,
> TimeUnit.SECONDS);{code}
> The time unit given is seconds, but it should be milliseconds.






[jira] [Resolved] (HADOOP-15526) Remove jdiff-workaround.patch from hadoop-common-project

2018-06-09 Thread Nanda kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar resolved HADOOP-15526.
--
Resolution: Not A Problem
  Assignee: (was: Nanda kumar)

> Remove jdiff-workaround.patch from hadoop-common-project
> 
>
> Key: HADOOP-15526
> URL: https://issues.apache.org/jira/browse/HADOOP-15526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Nanda kumar
>Priority: Trivial
>
> Remove the {{jdiff-workaround.patch}} file from the codebase, which was checked 
> in as part of HADOOP-13428.






[jira] [Created] (HADOOP-15526) Remove jdiff-workaround.patch from hadoop-common-project

2018-06-09 Thread Nanda kumar (JIRA)
Nanda kumar created HADOOP-15526:


 Summary: Remove jdiff-workaround.patch from hadoop-common-project
 Key: HADOOP-15526
 URL: https://issues.apache.org/jira/browse/HADOOP-15526
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Nanda kumar
Assignee: Nanda kumar


Remove the {{jdiff-workaround.patch}} file from the codebase, which was checked 
in as part of HADOOP-13428.






[jira] [Updated] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-23 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15490:
-
Status: Patch Available  (was: Open)

> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Attachments: HADOOP-15490.000.patch
>
>
> Multiple declarations of {{maven-enforcer-plugin}} in {{pom.xml}} are causing 
> the warning below during the build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}






[jira] [Updated] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-23 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15490:
-
Attachment: HADOOP-15490.000.patch

> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Attachments: HADOOP-15490.000.patch
>
>
> Multiple declarations of {{maven-enforcer-plugin}} in {{pom.xml}} are causing 
> the warning below during the build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}






[jira] [Updated] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-22 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15490:
-
Attachment: (was: HADOOP-15490.000.patch)

> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
>
> Multiple declarations of {{maven-enforcer-plugin}} in {{pom.xml}} are causing 
> the warning below during the build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-22 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15490:
-
Status: Open  (was: Patch Available)

> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Attachments: HADOOP-15490.000.patch
>
>
> Multiple declarations of {{maven-enforcer-plugin}} in {{pom.xml}} cause the
> following warning during the build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-22 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15490:
-
Status: Patch Available  (was: Open)

> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Attachments: HADOOP-15490.000.patch
>
>
> Multiple declarations of {{maven-enforcer-plugin}} in {{pom.xml}} cause the
> following warning during the build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-22 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15490:
-
Attachment: HADOOP-15490.000.patch

> Multiple declaration of maven-enforcer-plugin found in pom.xml
> --
>
> Key: HADOOP-15490
> URL: https://issues.apache.org/jira/browse/HADOOP-15490
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Attachments: HADOOP-15490.000.patch
>
>
> Multiple declarations of {{maven-enforcer-plugin}} in {{pom.xml}} cause the
> following warning during the build.
> {noformat}
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ 
> org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
> /Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
> [WARNING]
> [WARNING] Some problems were encountered while building the effective model 
> for org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
> [WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but 
> found duplicate declaration of plugin 
> org.apache.maven.plugins:maven-enforcer-plugin @ line 431, column 15
> [WARNING]
> [WARNING] It is highly recommended to fix these problems because they 
> threaten the stability of your build.
> [WARNING]
> [WARNING] For this reason, future Maven versions might no longer support 
> building such malformed projects.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15490) Multiple declaration of maven-enforcer-plugin found in pom.xml

2018-05-22 Thread Nanda kumar (JIRA)
Nanda kumar created HADOOP-15490:


 Summary: Multiple declaration of maven-enforcer-plugin found in 
pom.xml
 Key: HADOOP-15490
 URL: https://issues.apache.org/jira/browse/HADOOP-15490
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Nanda kumar
Assignee: Nanda kumar


Multiple declarations of {{maven-enforcer-plugin}} in {{pom.xml}} cause the
following warning during the build.
{noformat}
[WARNING] Some problems were encountered while building the effective model for 
org.apache.hadoop:hadoop-project:pom:3.2.0-SNAPSHOT
[WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found 
duplicate declaration of plugin org.apache.maven.plugins:maven-enforcer-plugin 
@ org.apache.hadoop:hadoop-main:3.2.0-SNAPSHOT, 
/Users/nvadivelu/codebase/apache/hadoop/pom.xml, line 431, column 15
[WARNING]
[WARNING] Some problems were encountered while building the effective model for 
org.apache.hadoop:hadoop-main:pom:3.2.0-SNAPSHOT
[WARNING] 'build.plugins.plugin.(groupId:artifactId)' must be unique but found 
duplicate declaration of plugin org.apache.maven.plugins:maven-enforcer-plugin 
@ line 431, column 15
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten 
the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support 
building such malformed projects.
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15486) Add a config option to make NetworkTopology#netLock fair

2018-05-22 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16484408#comment-16484408
 ] 

Nanda kumar commented on HADOOP-15486:
--

Agreed, updates to NetworkTopology should be rare. Patch v001 makes the lock 
always fair.
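For readers skimming the thread: the change amounts to constructing the
ReadWriteLock with the fairness flag set. The snippet below is only a minimal
sketch of that idea (class and method names are invented for illustration), not
the actual {{NetworkTopology}} code.

{code}
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairTopologyLockSketch {
  // true requests a fair ordering policy: waiting threads acquire the lock
  // roughly in arrival order, so a single writer (the registration thread)
  // cannot be starved indefinitely by a steady stream of readers.
  private final ReadWriteLock netlock = new ReentrantReadWriteLock(true);

  public void add(Runnable topologyUpdate) {
    netlock.writeLock().lock();
    try {
      topologyUpdate.run();      // mutate the topology under the write lock
    } finally {
      netlock.writeLock().unlock();
    }
  }

  public <T> T read(java.util.function.Supplier<T> query) {
    netlock.readLock().lock();
    try {
      return query.get();        // read-only lookups share the read lock
    } finally {
      netlock.readLock().unlock();
    }
  }
}
{code}

Fairness trades a little read throughput for bounded waiting, which should be
acceptable here because topology updates are rare.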

> Add a config option to make NetworkTopology#netLock fair
> 
>
> Key: HADOOP-15486
> URL: https://issues.apache.org/jira/browse/HADOOP-15486
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15486.000.patch, HADOOP-15486.001.patch
>
>
> Whenever a datanode is restarted, the registration call received by the
> NameNode after the restart lands in {{NetworkTopology#add}} via
> {{DatanodeManager#registerDatanode}} and requires the write lock on
> {{NetworkTopology#netLock}}. This registration thread is starved by a flood of
> {{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients
> that were writing to the restarted datanode.
> The registration call waiting for the write lock on {{NetworkTopology#netLock}}
> holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls
> that require {{FSNamesystem#fsLock}} to wait.
> We can introduce a config property to make the {{NetworkTopology#netLock}} lock
> fair so that the registration thread does not starve.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15486) Make NetworkTopology#netLock fair

2018-05-22 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15486:
-
Summary: Make NetworkTopology#netLock fair  (was: Add a config option to 
make NetworkTopology#netLock fair)

> Make NetworkTopology#netLock fair
> -
>
> Key: HADOOP-15486
> URL: https://issues.apache.org/jira/browse/HADOOP-15486
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15486.000.patch, HADOOP-15486.001.patch
>
>
> Whenever a datanode is restarted, the registration call received by the
> NameNode after the restart lands in {{NetworkTopology#add}} via
> {{DatanodeManager#registerDatanode}} and requires the write lock on
> {{NetworkTopology#netLock}}. This registration thread is starved by a flood of
> {{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients
> that were writing to the restarted datanode.
> The registration call waiting for the write lock on {{NetworkTopology#netLock}}
> holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls
> that require {{FSNamesystem#fsLock}} to wait.
> We can introduce a config property to make the {{NetworkTopology#netLock}} lock
> fair so that the registration thread does not starve.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15486) Make NetworkTopology#netLock fair

2018-05-22 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15486:
-
Description: 
Whenever a datanode is restarted, the registration call received by the NameNode
after the restart lands in {{NetworkTopology#add}} via
{{DatanodeManager#registerDatanode}} and requires the write lock on
{{NetworkTopology#netLock}}. This registration thread is starved by a flood of
{{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients that
were writing to the restarted datanode.
The registration call waiting for the write lock on {{NetworkTopology#netLock}}
holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls that
require {{FSNamesystem#fsLock}} to wait.
We can make the {{NetworkTopology#netLock}} lock fair so that the registration
thread does not starve.

  was:
Whenever a datanode is restarted, the registration call received by the NameNode
after the restart lands in {{NetworkTopology#add}} via
{{DatanodeManager#registerDatanode}} and requires the write lock on
{{NetworkTopology#netLock}}. This registration thread is starved by a flood of
{{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients that
were writing to the restarted datanode.
The registration call waiting for the write lock on {{NetworkTopology#netLock}}
holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls that
require {{FSNamesystem#fsLock}} to wait.
We can introduce a config property to make the {{NetworkTopology#netLock}} lock
fair so that the registration thread does not starve.


> Make NetworkTopology#netLock fair
> -
>
> Key: HADOOP-15486
> URL: https://issues.apache.org/jira/browse/HADOOP-15486
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15486.000.patch, HADOOP-15486.001.patch
>
>
> Whenever a datanode is restarted, the registration call received by the
> NameNode after the restart lands in {{NetworkTopology#add}} via
> {{DatanodeManager#registerDatanode}} and requires the write lock on
> {{NetworkTopology#netLock}}. This registration thread is starved by a flood of
> {{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients
> that were writing to the restarted datanode.
> The registration call waiting for the write lock on {{NetworkTopology#netLock}}
> holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls
> that require {{FSNamesystem#fsLock}} to wait.
> We can make the {{NetworkTopology#netLock}} lock fair so that the registration
> thread does not starve.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15486) Add a config option to make NetworkTopology#netLock fair

2018-05-22 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15486:
-
Attachment: HADOOP-15486.001.patch

> Add a config option to make NetworkTopology#netLock fair
> 
>
> Key: HADOOP-15486
> URL: https://issues.apache.org/jira/browse/HADOOP-15486
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15486.000.patch, HADOOP-15486.001.patch
>
>
> Whenever a datanode is restarted, the registration call received by the
> NameNode after the restart lands in {{NetworkTopology#add}} via
> {{DatanodeManager#registerDatanode}} and requires the write lock on
> {{NetworkTopology#netLock}}. This registration thread is starved by a flood of
> {{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients
> that were writing to the restarted datanode.
> The registration call waiting for the write lock on {{NetworkTopology#netLock}}
> holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls
> that require {{FSNamesystem#fsLock}} to wait.
> We can introduce a config property to make the {{NetworkTopology#netLock}} lock
> fair so that the registration thread does not starve.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15486) Add a config option to make NetworkTopology#netLock fair

2018-05-21 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15486:
-
Attachment: HADOOP-15486.000.patch

> Add a config option to make NetworkTopology#netLock fair
> 
>
> Key: HADOOP-15486
> URL: https://issues.apache.org/jira/browse/HADOOP-15486
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15486.000.patch
>
>
> Whenever a datanode is restarted, the registration call received by the
> NameNode after the restart lands in {{NetworkTopology#add}} via
> {{DatanodeManager#registerDatanode}} and requires the write lock on
> {{NetworkTopology#netLock}}. This registration thread is starved by a flood of
> {{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients
> that were writing to the restarted datanode.
> The registration call waiting for the write lock on {{NetworkTopology#netLock}}
> holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls
> that require {{FSNamesystem#fsLock}} to wait.
> We can introduce a config property to make the {{NetworkTopology#netLock}} lock
> fair so that the registration thread does not starve.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15486) Add a config option to make NetworkTopology#netLock fair

2018-05-21 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15486:
-
Status: Patch Available  (was: Open)

> Add a config option to make NetworkTopology#netLock fair
> 
>
> Key: HADOOP-15486
> URL: https://issues.apache.org/jira/browse/HADOOP-15486
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15486.000.patch
>
>
> Whenever a datanode is restarted, the registration call received by the
> NameNode after the restart lands in {{NetworkTopology#add}} via
> {{DatanodeManager#registerDatanode}} and requires the write lock on
> {{NetworkTopology#netLock}}. This registration thread is starved by a flood of
> {{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients
> that were writing to the restarted datanode.
> The registration call waiting for the write lock on {{NetworkTopology#netLock}}
> holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls
> that require {{FSNamesystem#fsLock}} to wait.
> We can introduce a config property to make the {{NetworkTopology#netLock}} lock
> fair so that the registration thread does not starve.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15486) Add a config option to make NetworkTopology#netLock fair

2018-05-21 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15486:
-
Description: 
Whenever a datanode is restarted, the registration call received by the NameNode
after the restart lands in {{NetworkTopology#add}} via
{{DatanodeManager#registerDatanode}} and requires the write lock on
{{NetworkTopology#netLock}}. This registration thread is starved by a flood of
{{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients that
were writing to the restarted datanode.
The registration call waiting for the write lock on {{NetworkTopology#netLock}}
holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls that
require {{FSNamesystem#fsLock}} to wait.
We can introduce a config property to make the {{NetworkTopology#netLock}} lock
fair so that the registration thread does not starve.

  was:
Whenever a datanode is restarted, the registration call received by the NameNode
after the restart lands in {{NetworkTopology#add}} via
{{DatanodeManager#registerDatanode}} and requires the write lock on
{{NetworkTopology#netLock}}. This registration thread is starved by a flood of
{{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients that
were writing to the restarted datanode.
The registration call waiting for the write lock on {{NetworkTopology#netLock}}
holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls that
require {{FSNamesystem#fsLock}} to wait.
We can make the {{NetworkTopology#netLock}} lock fair so that the registration
thread does not starve.


> Add a config option to make NetworkTopology#netLock fair
> 
>
> Key: HADOOP-15486
> URL: https://issues.apache.org/jira/browse/HADOOP-15486
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> Whenever a datanode is restarted, the registration call received by the
> NameNode after the restart lands in {{NetworkTopology#add}} via
> {{DatanodeManager#registerDatanode}} and requires the write lock on
> {{NetworkTopology#netLock}}. This registration thread is starved by a flood of
> {{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients
> that were writing to the restarted datanode.
> The registration call waiting for the write lock on {{NetworkTopology#netLock}}
> holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls
> that require {{FSNamesystem#fsLock}} to wait.
> We can introduce a config property to make the {{NetworkTopology#netLock}} lock
> fair so that the registration thread does not starve.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15486) Make NetworkTopology#netLock fair

2018-05-21 Thread Nanda kumar (JIRA)
Nanda kumar created HADOOP-15486:


 Summary: Make NetworkTopology#netLock fair
 Key: HADOOP-15486
 URL: https://issues.apache.org/jira/browse/HADOOP-15486
 Project: Hadoop Common
  Issue Type: Improvement
  Components: net
Reporter: Nanda kumar
Assignee: Nanda kumar


Whenever a datanode is restarted, the registration call received by the NameNode
after the restart lands in {{NetworkTopology#add}} via
{{DatanodeManager#registerDatanode}} and requires the write lock on
{{NetworkTopology#netLock}}. This registration thread is starved by a flood of
{{FSNamesystem.getAdditionalDatanode}} calls, which are triggered by clients that
were writing to the restarted datanode.
The registration call waiting for the write lock on {{NetworkTopology#netLock}}
holds the write lock on {{FSNamesystem#fsLock}}, causing all other RPC calls that
require {{FSNamesystem#fsLock}} to wait.
We can make the {{NetworkTopology#netLock}} lock fair so that the registration
thread does not starve.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15474) Rename properties introduced for

2018-05-18 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480338#comment-16480338
 ] 

Nanda kumar commented on HADOOP-15474:
--

[~zvenczel], thanks for updating the patch.
+1 (non-binding), looks good to me.

> Rename properties introduced for 
> ---
>
> Key: HADOOP-15474
> URL: https://issues.apache.org/jira/browse/HADOOP-15474
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Nanda kumar
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15474.01.patch, HADOOP-15474.02.patch
>
>
> HADOOP-15007 introduces the following two properties for tagging 
> configuration properties
> * hadoop.system.tags
> * hadoop.custom.tags
> This makes it sound as if {{tags}} falls under {{hadoop.system}}- and
> {{hadoop.custom}}-related properties, but what we really want here is two
> sub-divisions of {{tags}}, namely {{system}} and {{custom}}.
> For better readability, we can rename them as
> * hadoop.tags.system
> * hadoop.tags.custom



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15474) Rename properties introduced for

2018-05-17 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479578#comment-16479578
 ] 

Nanda kumar edited comment on HADOOP-15474 at 5/17/18 7:16 PM:
---

Thanks [~zvenczel] for taking care of this.
The patch looks good to me, a minor comment in {{core-default.xml}}:
 * Can you also fix the typo in line 3038 & 3047: {{DEPRICATED}} --> 
{{DEPRECATED}}


was (Author: nandakumar131):
Thanks [~zvenczel] for taking care of this.
The patch looks good to me, a minor comment:
 * Can you also fix the typo in line 3038 & 3047: {{DEPRICATED}} --> 
{{DEPRECATED}}

> Rename properties introduced for 
> ---
>
> Key: HADOOP-15474
> URL: https://issues.apache.org/jira/browse/HADOOP-15474
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Nanda kumar
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15474.01.patch
>
>
> HADOOP-15007 introduces the following two properties for tagging 
> configuration properties
> * hadoop.system.tags
> * hadoop.custom.tags
> This makes it sound as if {{tags}} falls under {{hadoop.system}}- and
> {{hadoop.custom}}-related properties, but what we really want here is two
> sub-divisions of {{tags}}, namely {{system}} and {{custom}}.
> For better readability, we can rename them as
> * hadoop.tags.system
> * hadoop.tags.custom



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15474) Rename properties introduced for

2018-05-17 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16479578#comment-16479578
 ] 

Nanda kumar commented on HADOOP-15474:
--

Thanks [~zvenczel] for taking care of this.
The patch looks good to me, a minor comment:
 * Can you also fix the typo in line 3038 & 3047: {{DEPRICATED}} --> 
{{DEPRECATED}}

> Rename properties introduced for 
> ---
>
> Key: HADOOP-15474
> URL: https://issues.apache.org/jira/browse/HADOOP-15474
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 3.1.0
>Reporter: Nanda kumar
>Assignee: Zsolt Venczel
>Priority: Major
> Attachments: HADOOP-15474.01.patch
>
>
> HADOOP-15007 introduces the following two properties for tagging 
> configuration properties
> * hadoop.system.tags
> * hadoop.custom.tags
> This makes it sound as if {{tags}} falls under {{hadoop.system}}- and
> {{hadoop.custom}}-related properties, but what we really want here is two
> sub-divisions of {{tags}}, namely {{system}} and {{custom}}.
> For better readability, we can rename them as
> * hadoop.tags.system
> * hadoop.tags.custom



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15474) Rename properties introduced for

2018-05-17 Thread Nanda kumar (JIRA)
Nanda kumar created HADOOP-15474:


 Summary: Rename properties introduced for 
 Key: HADOOP-15474
 URL: https://issues.apache.org/jira/browse/HADOOP-15474
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.1.0
Reporter: Nanda kumar


HADOOP-15007 introduces the following two properties for tagging configuration 
properties
* hadoop.system.tags
* hadoop.custom.tags

This makes it sound as if {{tags}} falls under {{hadoop.system}}- and
{{hadoop.custom}}-related properties, but what we really want here is two
sub-divisions of {{tags}}, namely {{system}} and {{custom}}.

For better readability, we can rename them as
* hadoop.tags.system
* hadoop.tags.custom
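
As an illustration of the renamed keys (the tag values below are made up; only
the property names come from this issue), they are set and read on a
{{Configuration}} like any other property:

{code}
import org.apache.hadoop.conf.Configuration;

public class TagPropertiesExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Both keys now sit under the common "hadoop.tags." prefix.
    // The tag values here are placeholders, not values shipped by Hadoop.
    conf.set("hadoop.tags.system", "REQUIRED,PERFORMANCE,SECURITY");
    conf.set("hadoop.tags.custom", "MYAPP");

    System.out.println(conf.get("hadoop.tags.system"));
    System.out.println(conf.get("hadoop.tags.custom"));
  }
}
{code}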





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-117) mapred temporary files not deleted

2018-04-11 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reassigned HADOOP-117:
--

Assignee: Doug Cutting  (was: Nanda kumar)

> mapred temporary files not deleted
> --
>
> Key: HADOOP-117
> URL: https://issues.apache.org/jira/browse/HADOOP-117
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.1.1
> Environment: windows with cygwin
>Reporter: raghavendra prabhu
>Assignee: Doug Cutting
>Priority: Blocker
> Fix For: 0.1.1, 0.2.0
>
> Attachments: JobConf.patch
>
>
> I found out that JobConf.java created interchanged names, with the parent
> being treated as the file and the file as the parent directory.
> As a result, files were not getting deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-117) mapred temporary files not deleted

2018-04-11 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar reassigned HADOOP-117:
--

Assignee: Nanda kumar  (was: Doug Cutting)

> mapred temporary files not deleted
> --
>
> Key: HADOOP-117
> URL: https://issues.apache.org/jira/browse/HADOOP-117
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.1.1
> Environment: windows with cygwin
>Reporter: raghavendra prabhu
>Assignee: Nanda kumar
>Priority: Blocker
> Fix For: 0.1.1, 0.2.0
>
> Attachments: JobConf.patch
>
>
> I found out that JobConf.java created interchanged names, with the parent
> being treated as the file and the file as the parent directory.
> As a result, files were not getting deleted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15275) Incorrect javadoc for return type of RetryPolicy#shouldRetry

2018-02-28 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15275:
-
Status: Patch Available  (was: Open)

> Incorrect javadoc for return type of RetryPolicy#shouldRetry
> 
>
> Key: HADOOP-15275
> URL: https://issues.apache.org/jira/browse/HADOOP-15275
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Attachments: HADOOP-15275.000.patch
>
>
> The return type of {{RetryPolicy#shouldRetry}} has been changed from 
> {{boolean}} to {{RetryAction}}, but the javadoc is not updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15275) Incorrect javadoc for return type of RetryPolicy#shouldRetry

2018-02-28 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15275:
-
Attachment: HADOOP-15275.000.patch

> Incorrect javadoc for return type of RetryPolicy#shouldRetry
> 
>
> Key: HADOOP-15275
> URL: https://issues.apache.org/jira/browse/HADOOP-15275
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
> Attachments: HADOOP-15275.000.patch
>
>
> The return type of {{RetryPolicy#shouldRetry}} has been changed from 
> {{boolean}} to {{RetryAction}}, but the javadoc is not updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15275) Incorrect javadoc in return type of RetryPolicy#shouldRetry

2018-02-28 Thread Nanda kumar (JIRA)
Nanda kumar created HADOOP-15275:


 Summary: Incorrect javadoc in return type of 
RetryPolicy#shouldRetry
 Key: HADOOP-15275
 URL: https://issues.apache.org/jira/browse/HADOOP-15275
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Nanda kumar
Assignee: Nanda kumar


The return type of {{RetryPolicy#shouldRetry}} has been changed from 
{{boolean}} to {{RetryAction}}, but the javadoc is not updated.
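
To make the current contract concrete (the parameter list below is written from
memory and should be checked against the interface in the tree), an
implementation now returns a {{RetryAction}} rather than a {{boolean}}:

{code}
import org.apache.hadoop.io.retry.RetryPolicy;

public class RetryTwicePolicy implements RetryPolicy {
  @Override
  public RetryAction shouldRetry(Exception e, int retries, int failovers,
      boolean isIdempotentOrAtMostOnce) throws Exception {
    // Retry the failed call up to two times, then give up.
    return retries < 2 ? RetryAction.RETRY : RetryAction.FAIL;
  }
}
{code}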



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15275) Incorrect javadoc for return type of RetryPolicy#shouldRetry

2018-02-28 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15275:
-
Summary: Incorrect javadoc for return type of RetryPolicy#shouldRetry  
(was: Incorrect javadoc in return type of RetryPolicy#shouldRetry)

> Incorrect javadoc for return type of RetryPolicy#shouldRetry
> 
>
> Key: HADOOP-15275
> URL: https://issues.apache.org/jira/browse/HADOOP-15275
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Minor
>
> The return type of {{RetryPolicy#shouldRetry}} has been changed from 
> {{boolean}} to {{RetryAction}}, but the javadoc is not updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15266:
-
Status: Patch Available  (was: Open)

>  [branch-2] Upper/Lower case conversion support for group names in 
> LdapGroupsMapping
> 
>
> Key: HADOOP-15266
> URL: https://issues.apache.org/jira/browse/HADOOP-15266
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15266-branch-2.000.patch
>
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15266:
-
Attachment: HADOOP-15266-branch-2.000.patch

>  [branch-2] Upper/Lower case conversion support for group names in 
> LdapGroupsMapping
> 
>
> Key: HADOOP-15266
> URL: https://issues.apache.org/jira/browse/HADOOP-15266
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15266-branch-2.000.patch
>
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15266:
-
Hadoop Flags:   (was: Reviewed)

>  [branch-2] Upper/Lower case conversion support for group names in 
> LdapGroupsMapping
> 
>
> Key: HADOOP-15266
> URL: https://issues.apache.org/jira/browse/HADOOP-15266
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15266:
-
Fix Version/s: (was: 3.1.0)

>  [branch-2] Upper/Lower case conversion support for group names in 
> LdapGroupsMapping
> 
>
> Key: HADOOP-15266
> URL: https://issues.apache.org/jira/browse/HADOOP-15266
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15266) [branch-2] Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-26 Thread Nanda kumar (JIRA)
Nanda kumar created HADOOP-15266:


 Summary:  [branch-2] Upper/Lower case conversion support for group 
names in LdapGroupsMapping
 Key: HADOOP-15266
 URL: https://issues.apache.org/jira/browse/HADOOP-15266
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Nanda kumar
Assignee: Nanda kumar
 Fix For: 3.1.0


On most LDAP servers the user and group names are case-insensitive. When we use 
{{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it is 
possible to configure {{SSSD}} to force the group names to be returned in 
lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.

This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
implementation based on LdapGroupsMapping which supports forcing group names to
lower or upper case.
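
A minimal sketch of the idea, assuming a subclass of {{LdapGroupsMapping}} that
post-processes the returned names (the class name is hypothetical, and the
eventual patch may instead drive the behaviour from a configuration key):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

import org.apache.hadoop.security.LdapGroupsMapping;

public class LowerCaseLdapGroupsMapping extends LdapGroupsMapping {
  @Override
  public List<String> getGroups(String user) {
    // Resolve the groups via the regular LDAP lookup first.
    List<String> groups = super.getGroups(user);
    List<String> lowered = new ArrayList<>(groups.size());
    for (String group : groups) {
      lowered.add(group.toLowerCase(Locale.ROOT)); // force a canonical case
    }
    return lowered;
  }
}
{code}

Such a provider would then be selected through {{hadoop.security.group.mapping}}
in core-site.xml, like any other group mapping implementation.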



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15255) Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-23 Thread Nanda kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16374992#comment-16374992
 ] 

Nanda kumar commented on HADOOP-15255:
--

Fixed unit test failure and checkstyle issue in patch v001.

> Upper/Lower case conversion support for group names in LdapGroupsMapping
> 
>
> Key: HADOOP-15255
> URL: https://issues.apache.org/jira/browse/HADOOP-15255
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15255.000.patch, HADOOP-15255.001.patch
>
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15255) Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-23 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15255:
-
Attachment: HADOOP-15255.001.patch

> Upper/Lower case conversion support for group names in LdapGroupsMapping
> 
>
> Key: HADOOP-15255
> URL: https://issues.apache.org/jira/browse/HADOOP-15255
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15255.000.patch, HADOOP-15255.001.patch
>
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15255) Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-23 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15255:
-
Status: Patch Available  (was: Open)

> Upper/Lower case conversion support for group names in LdapGroupsMapping
> 
>
> Key: HADOOP-15255
> URL: https://issues.apache.org/jira/browse/HADOOP-15255
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15255.000.patch
>
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15255) Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-23 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15255:
-
Attachment: HADOOP-15255.000.patch

> Upper/Lower case conversion support for group names in LdapGroupsMapping
> 
>
> Key: HADOOP-15255
> URL: https://issues.apache.org/jira/browse/HADOOP-15255
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Nanda kumar
>Priority: Major
> Attachments: HADOOP-15255.000.patch
>
>
> On most LDAP servers the user and group names are case-insensitive. When we 
> use {{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it 
> is possible to configure {{SSSD}} to force the group names to be returned in 
> lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.
> This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
> implementation based on LdapGroupsMapping which supports forcing group names
> to lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15255) Upper/Lower case conversion support for group names in LdapGroupsMapping

2018-02-23 Thread Nanda kumar (JIRA)
Nanda kumar created HADOOP-15255:


 Summary: Upper/Lower case conversion support for group names in 
LdapGroupsMapping
 Key: HADOOP-15255
 URL: https://issues.apache.org/jira/browse/HADOOP-15255
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Nanda kumar


On most LDAP servers the user and group names are case-insensitive. When we use 
{{JniBasedUnixGroupsMappingWithFallback}} and have {{SSSD}} in place, it is 
possible to configure {{SSSD}} to force the group names to be returned in 
lowercase. If we use {{LdapGroupsMapping}}, we don't have any such option.

This jira proposes to introduce a new {{hadoop.security.group.mapping}} 
implementation based on LdapGroupsMapping which supports forcing group names to
lower or upper case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15152) Typo in javadoc of ReconfigurableBase#reconfigurePropertyImpl

2018-01-02 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15152:
-
Attachment: HADOOP-15152.000.patch

> Typo in javadoc of ReconfigurableBase#reconfigurePropertyImpl
> -
>
> Key: HADOOP-15152
> URL: https://issues.apache.org/jira/browse/HADOOP-15152
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Trivial
>  Labels: javadoc
> Attachments: HADOOP-15152.000.patch
>
>
> There is a typo in javadoc of ReconfigurableBase#reconfigurePropertyImpl
> {code}
>* Subclasses must override this. This method applies the change to
>* all internal data structures derived from the configuration property
>* that is being changed. If this object owns other Reconfigurable objects
>* reconfigureProperty should be called recursively to make sure that
>* to make sure that the configuration of these objects is updated.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15152) Typo in javadoc of ReconfigurableBase#reconfigurePropertyImpl

2018-01-02 Thread Nanda kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HADOOP-15152:
-
Status: Patch Available  (was: Open)

> Typo in javadoc of ReconfigurableBase#reconfigurePropertyImpl
> -
>
> Key: HADOOP-15152
> URL: https://issues.apache.org/jira/browse/HADOOP-15152
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Nanda kumar
>Assignee: Nanda kumar
>Priority: Trivial
>  Labels: javadoc
> Attachments: HADOOP-15152.000.patch
>
>
> There is a typo in javadoc of ReconfigurableBase#reconfigurePropertyImpl
> {code}
>* Subclasses must override this. This method applies the change to
>* all internal data structures derived from the configuration property
>* that is being changed. If this object owns other Reconfigurable objects
>* reconfigureProperty should be called recursively to make sure that
>* to make sure that the configuration of these objects is updated.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15152) Typo in javadoc of ReconfigurableBase#reconfigurePropertyImpl

2018-01-02 Thread Nanda kumar (JIRA)
Nanda kumar created HADOOP-15152:


 Summary: Typo in javadoc of 
ReconfigurableBase#reconfigurePropertyImpl
 Key: HADOOP-15152
 URL: https://issues.apache.org/jira/browse/HADOOP-15152
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Nanda kumar
Assignee: Nanda kumar
Priority: Trivial


There is a typo in javadoc of ReconfigurableBase#reconfigurePropertyImpl

{code}
   * Subclasses must override this. This method applies the change to
   * all internal data structures derived from the configuration property
   * that is being changed. If this object owns other Reconfigurable objects
   * reconfigureProperty should be called recursively to make sure that
   * to make sure that the configuration of these objects is updated.
{code}
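
For reference, with the duplication removed the sentence presumably reads along
these lines (an assumption about the final wording, not a quote from the patch):

{code}
   * Subclasses must override this. This method applies the change to
   * all internal data structures derived from the configuration property
   * that is being changed. If this object owns other Reconfigurable objects,
   * reconfigureProperty should be called recursively to make sure that
   * the configuration of these objects is updated.
{code}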



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org