[jira] [Updated] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol

2019-04-25 Thread Shen Yinjie (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shen Yinjie updated HDFS-14447:
---
Attachment: HDFS-14447-HDFS-13891.03.patch

> RBF: RouterAdminServer should support RefreshUserMappingsProtocol
> -
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, error.png
>
>
> HDFS with RBF:
> We configure hadoop.proxyuser.xx.yy and then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration;
> it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol, otherwise a proxyuser 
> client is refused when it tries to impersonate, as shown in the attached screenshot.
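
For context, a minimal, self-contained sketch of the proxy-user call path that gets refused; the superuser/impersonated names ("xx"/"yy") are just the placeholders from the description, and the listing call is illustrative rather than taken from the patch:

{code:java}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class RouterProxyUserSketch {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();
    // Point the client at the router-based federated namespace.
    conf.set("fs.defaultFS", "hdfs://router-fed");

    // The logged-in superuser ("xx" in hadoop.proxyuser.xx.*) impersonates end user "yy".
    UserGroupInformation superUser = UserGroupInformation.getCurrentUser();
    UserGroupInformation proxyUser = UserGroupInformation.createProxyUser("yy", superUser);

    // Until the router can refresh hadoop.proxyuser.* via RefreshUserMappingsProtocol,
    // calls made through the impersonated user can be rejected by the router.
    proxyUser.doAs((PrivilegedExceptionAction<Void>) () -> {
      FileSystem fs = FileSystem.get(conf);
      fs.listStatus(new Path("/"));
      return null;
    });
  }
}
{code}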



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14434) webhdfs that connect secure hdfs should not use user.name parameter

2019-04-25 Thread KWON BYUNGCHANG (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HDFS-14434:
---
Attachment: HDFS-14434.008.patch

> webhdfs that connect secure hdfs should not use user.name parameter
> ---
>
> Key: HDFS-14434
> URL: https://issues.apache.org/jira/browse/HDFS-14434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Minor
> Attachments: HDFS-14434.001.patch, HDFS-14434.002.patch, 
> HDFS-14434.003.patch, HDFS-14434.004.patch, HDFS-14434.005.patch, 
> HDFS-14434.006.patch, HDFS-14434.007.patch, HDFS-14434.008.patch
>
>
> I have two secure Hadoop clusters, and both use cross-realm 
> authentication.
> [use...@a.com|mailto:use...@a.com] can access HDFS of the B.COM realm;
> however, the Hadoop username of use...@a.com in the B.COM realm is 
> cross_realm_a_com_user_a.
> The hdfs dfs command of use...@a.com against the B.COM webhdfs failed.
> The root cause is that webhdfs connecting to secure HDFS uses the user.name parameter.
> According to the webhdfs spec, insecure webhdfs uses user.name, while secure webhdfs 
> uses SPNEGO for authentication.
> I think webhdfs that connects to secure HDFS should not use the user.name parameter.
> I will attach a patch.
> Below is the error log:
>  
> {noformat}
> $ hdfs dfs -ls  webhdfs://b.com:50070/
> ls: Usernames not matched: name=user_a != expected=cross_realm_a_com_user_a
>  
> # user.name in cross realm webhdfs
> $ curl -u : --negotiate 
> 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=user_a' 
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Usernames not 
> matched: name=user_a != expected=cross_realm_a_com_user_a"}}
> # USE SPNEGO
> $ curl -u : --negotiate 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN'
> {"Token"{"urlString":"XgA."}}
>  
> {noformat}
>  
>  
>  
>  
>  
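
For illustration, here is a minimal client-side sketch of the proposed behavior (omit user.name when security is enabled); the class and helper names are hypothetical, not the actual WebHdfsFileSystem change from the patch:

{code:java}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hdfs.web.resources.Param;
import org.apache.hadoop.hdfs.web.resources.UserParam;
import org.apache.hadoop.security.UserGroupInformation;

public class WebHdfsAuthParamsSketch {
  // Hypothetical helper: build the authentication-related query parameters.
  static List<Param<?, ?>> authParams(UserGroupInformation ugi) {
    List<Param<?, ?>> params = new ArrayList<>();
    if (!UserGroupInformation.isSecurityEnabled()) {
      // Insecure webhdfs identifies the caller with the user.name query parameter.
      params.add(new UserParam(ugi.getShortUserName()));
    }
    // Secure webhdfs authenticates via SPNEGO, so no user.name parameter is sent;
    // sending one can conflict with the Kerberos-mapped name ("Usernames not matched").
    return params;
  }
}
{code}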



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1450) Fix nightly run failures after HDDS-976

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1450?focusedWorklogId=233280&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233280
 ]

ASF GitHub Bot logged work on HDDS-1450:


Author: ASF GitHub Bot
Created on: 26/Apr/19 06:06
Start Date: 26/Apr/19 06:06
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #757: HDDS-1450. Fix 
nightly run failures after HDDS-976. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/757#issuecomment-486937246
 
 
   @cjjnjust Sorry I was not very clear on the previous comment. 
   
   conf.get() without a default value will return null for schemaFileType when 
the key is not defined, which is the case for some existing xml based tests. 
This causes an NPE, which is caught by catch (Throwable e) and rethrown as an 
RTE with the log message "Fail to load schema file...".
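
   For illustration, a small self-contained sketch of that failure mode; the configuration key name and the surrounding code are placeholders, not the actual HDDS loader:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class SchemaFileTypeSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // No default supplied: returns null when the key is absent,
    // as happens in some of the existing xml based tests.
    String schemaFileType = conf.get("net.topology.schema.file.type");

    try {
      // NullPointerException when schemaFileType is null ...
      System.out.println("yaml schema: " + schemaFileType.equalsIgnoreCase("yaml"));
    } catch (Throwable e) {
      // ... which a broad catch turns into a RuntimeException.
      throw new RuntimeException("Fail to load schema file ...", e);
    }
  }
}
{code}

   Supplying an explicit default, e.g. conf.get(key, "xml"), or validating the value before use avoids the NPE for tests that never set the key.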
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233280)
Time Spent: 1.5h  (was: 1h 20m)

> Fix nightly run failures after HDDS-976
> ---
>
> Key: HDDS-1450
> URL: https://issues.apache.org/jira/browse/HDDS-1450
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net/job/ozone-nightly/72/testReport/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1450) Fix nightly run failures after HDDS-976

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1450?focusedWorklogId=233279&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233279
 ]

ASF GitHub Bot logged work on HDDS-1450:


Author: ASF GitHub Bot
Created on: 26/Apr/19 06:05
Start Date: 26/Apr/19 06:05
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on issue #757: HDDS-1450. Fix 
nightly run failures after HDDS-976. Contributed by Xi…
URL: https://github.com/apache/hadoop/pull/757#issuecomment-486937246
 
 
   @cjjnjust Sorry I was not very clear on the previous comment. 
   
   conf.get() without a default value will return null for schemaFileType when 
the key is not defined, which is the case for some tests. This causes an NPE, 
which is caught by catch (Throwable e) and rethrown as an RTE with the log 
message "Fail to load schema file...".
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233279)
Time Spent: 1h 20m  (was: 1h 10m)

> Fix nightly run failures after HDDS-976
> ---
>
> Key: HDDS-1450
> URL: https://issues.apache.org/jira/browse/HDDS-1450
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> [https://ci.anzix.net/job/ozone-nightly/72/testReport/]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14401) Refine the implementation for HDFS cache on SCM

2019-04-25 Thread Rakesh R (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826642#comment-16826642
 ] 

Rakesh R edited comment on HDFS-14401 at 4/26/19 5:12 AM:
--

Thanks [~PhiloHe] for the good progress. Adding a few comments:
# Move the log message to the respective constructor; that will make 
FsDatasetCache.java cleaner.
{code:java}
PmemMappableBlockLoader() {
  LOG.info("Initializing cache loader: PmemMappableBlockLoader");
}

MemoryMappableBlockLoader() {
  LOG.info("Initializing cache loader: MemoryMappableBlockLoader");
}
{code}
# How about using a {{MappableBlockLoaderFactory}} and moving the 
{{#createCacheLoader(DNConf)}} function into it?
{code:java}
MappableBlockLoader loader =
    MappableBlockLoaderFactory.getInstance().createCacheLoader(this.getDnConf());
{code}
# Typo: '{{due to unsuccessfully mapping}}' should be '{{due to unsuccessful 
mapping}}'.
# Can we make {{long release}} and {{public String getCachePath}} synchronized 
functions?
# For {{maxBytes = pmemDir.getTotalSpace();}}, IMHO, use the 
[File#getUsableSpace()|https://docs.oracle.com/javase/7/docs/api/java/io/File.html#getUsableSpace()]
 function instead.
# Remove the unused variable in PmemVolumeManager.java: {{// private final 
UsedBytesCount usedBytesCount;}}
# It's good to use {} placeholders instead of string concatenation in log 
messages. Please take care of all such occurrences in the newly written code.
{code:java}
LOG.info("Added persistent memory - " + volumes[n] +
    " with size=" + maxBytes);

// becomes

LOG.info("Added persistent memory - {} with size={}",
    volumes[n], maxBytes);
{code}


was (Author: rakeshr):
Thanks [~PhiloHe] for the good progress. Adding a few comments:
# Move the log message to the respective constructor; that will make 
FsDatasetCache.java cleaner.
{code:java}
PmemMappableBlockLoader() {
  LOG.info("Initializing cache loader: PmemMappableBlockLoader");
}

MemoryMappableBlockLoader() {
  LOG.info("Initializing cache loader: MemoryMappableBlockLoader");
}
{code}
# How about using a {{MappableBlockLoaderFactory}} and moving the 
{{#createCacheLoader(DNConf)}} function into it?
{code:java}
MappableBlockLoader loader =
    MappableBlockLoaderFactory.getInstance().createCacheLoader(this.getDnConf());
{code}
# Typo: '{{due to unsuccessfully mapping}}' should be '{{due to unsuccessful 
mapping}}'.
# Can we make {{long release}} and {{public String getCachePath}} synchronized 
functions?
# For {{maxBytes = pmemDir.getTotalSpace();}}, IMHO, use the 
[File#getUsableSpace()|https://docs.oracle.com/javase/7/docs/api/java/io/File.html#getUsableSpace()]
 function instead.
# Remove the unused variable in PmemVolumeManager.java: {{// private final 
UsedBytesCount usedBytesCount;}}
# It's good to use {} placeholders instead of string concatenation in log 
messages. Please take care of all such occurrences in the newly written code.
{code:java}
LOG.info("Added persistent memory - " + volumes[n] +
    " with size=" + maxBytes);

// becomes

LOG.info("Added persistent memory - {} with size={}",
    volumes[n], maxBytes);
{code}

> Refine the implementation for HDFS cache on SCM
> ---
>
> Key: HDFS-14401
> URL: https://issues.apache.org/jira/browse/HDFS-14401
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14401.000.patch, HDFS-14401.001.patch, 
> HDFS-14401.002.patch, HDFS-14401.003.patch, HDFS-14401.004.patch, 
> HDFS-14401.005.patch
>
>
> In this Jira, we will refine the implementation of HDFS cache on SCM, for 
> example: 1) handle a full pmem volume in VolumeManager; 2) refine the pmem volume 
> selection implementation; 3) clean up the MappableBlockLoader interface; etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14401) Refine the implementation for HDFS cache on SCM

2019-04-25 Thread Rakesh R (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826642#comment-16826642
 ] 

Rakesh R commented on HDFS-14401:
-

Thanks [~PhiloHe] for the good progress. Adding a few comments:
# Move the log message to the respective constructor; that will make 
FsDatasetCache.java cleaner.
{code:java}
PmemMappableBlockLoader() {
  LOG.info("Initializing cache loader: PmemMappableBlockLoader");
}

MemoryMappableBlockLoader() {
  LOG.info("Initializing cache loader: MemoryMappableBlockLoader");
}
{code}
# How about using a {{MappableBlockLoaderFactory}} and moving the 
{{#createCacheLoader(DNConf)}} function into it? (A rough sketch follows this list.)
{code:java}
MappableBlockLoader loader =
    MappableBlockLoaderFactory.getInstance().createCacheLoader(this.getDnConf());
{code}
# Typo: '{{due to unsuccessfully mapping}}' should be '{{due to unsuccessful 
mapping}}'.
# Can we make {{long release}} and {{public String getCachePath}} synchronized 
functions?
# For {{maxBytes = pmemDir.getTotalSpace();}}, IMHO, use the 
[File#getUsableSpace()|https://docs.oracle.com/javase/7/docs/api/java/io/File.html#getUsableSpace()]
 function instead.
# Remove the unused variable in PmemVolumeManager.java: {{// private final 
UsedBytesCount usedBytesCount;}}
# It's good to use {} placeholders instead of string concatenation in log 
messages. Please take care of all such occurrences in the newly written code.
{code:java}
LOG.info("Added persistent memory - " + volumes[n] +
    " with size=" + maxBytes);

// becomes

LOG.info("Added persistent memory - {} with size={}",
    volumes[n], maxBytes);
{code}
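
A rough sketch of the factory suggested in item 2 above; the singleton shape and the pmem-volume check (including the hypothetical {{DNConf#getPmemVolumes()}} accessor) are assumptions for illustration, not the actual patch:

{code:java}
// Assumed to live alongside FsDatasetCache and the loader classes (same package),
// so no extra imports are shown here.
public final class MappableBlockLoaderFactory {
  private static final MappableBlockLoaderFactory INSTANCE =
      new MappableBlockLoaderFactory();

  private MappableBlockLoaderFactory() {
  }

  public static MappableBlockLoaderFactory getInstance() {
    return INSTANCE;
  }

  public MappableBlockLoader createCacheLoader(DNConf dnConf) {
    // Prefer the persistent-memory loader when pmem volumes are configured
    // (getPmemVolumes() is a hypothetical accessor), otherwise cache in DRAM.
    if (dnConf.getPmemVolumes() != null && dnConf.getPmemVolumes().length > 0) {
      return new PmemMappableBlockLoader();
    }
    return new MemoryMappableBlockLoader();
  }
}
{code}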

> Refine the implementation for HDFS cache on SCM
> ---
>
> Key: HDFS-14401
> URL: https://issues.apache.org/jira/browse/HDFS-14401
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14401.000.patch, HDFS-14401.001.patch, 
> HDFS-14401.002.patch, HDFS-14401.003.patch, HDFS-14401.004.patch, 
> HDFS-14401.005.patch
>
>
> In this Jira, we will refine the implementation of HDFS cache on SCM, for 
> example: 1) handle a full pmem volume in VolumeManager; 2) refine the pmem volume 
> selection implementation; 3) clean up the MappableBlockLoader interface; etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14434) webhdfs that connect secure hdfs should not use user.name parameter

2019-04-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826632#comment-16826632
 ] 

Hadoop QA commented on HDFS-14434:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
51s{color} | {color:green} hadoop-hdfs-project generated 0 new + 536 unchanged 
- 2 fixed = 536 total (was 538) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 
90 unchanged - 18 fixed = 92 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m  8s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
|   | hadoop.hdfs.TestReconstructStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14434 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967085/HDFS-14434.007.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e2b6c254e6b4 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git

[jira] [Commented] (HDFS-14437) Exception happened when rollEditLog expects empty EditsDoubleBuffer.bufCurrent but not

2019-04-25 Thread angerszhu (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826612#comment-16826612
 ] 

angerszhu commented on HDFS-14437:
--

[~starphin]

It seems to not work well under high concurrency; the locking at the FSNamesystem 
level is complex.

It is hard to say that my guess is completely right, but the error in our production 
environment seems to match it: rollEditLog gets trapped in the #wait inside logSync.

The best way to confirm this would be to add some logging in the production environment 
to show the sequence of _*txid, synctxid, myTransactionId*_. But the risk is too high; 
adding such detailed logging would add more pressure, and my boss won't let me do that.

Anyway, what I changed in the pull request is meant to prevent this situation from 
happening.

And in my tests it indeed does not happen under low concurrency, but it does happen 
under high concurrency.

> Exception happened when   rollEditLog expects empty 
> EditsDoubleBuffer.bufCurrent  but not
> -
>
> Key: HDFS-14437
> URL: https://issues.apache.org/jira/browse/HDFS-14437
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode, qjm
>Reporter: angerszhu
>Priority: Major
>
> For the problem mentioned in https://issues.apache.org/jira/browse/HDFS-10943, 
> I have sorted through the write-and-flush process of the EditLog and some important 
> functions. I found that in the FSEditLog class, the close() function 
> runs the following sequence:
>  
> {code:java}
> waitForSyncToFinish();
> endCurrentLogSegment(true);{code}
> Since we have acquired the object lock in close(), when the 
> waitForSyncToFinish() method returns it means all logSync work has finished and all 
> data in bufReady has been flushed out; and since the current thread holds the lock 
> on this object, no other thread can acquire the lock while endCurrentLogSegment() 
> runs, so they cannot write new edit log entries into bufCurrent.
> But if we don't call waitForSyncToFinish() before endCurrentLogSegment(), 
> an auto-scheduled logSync() flush may still be in progress, since 
> that flush does not need
> synchronization, as mentioned in the comment of the logSync() method:
>  
> {code:java}
> /**
>  * Sync all modifications done by this thread.
>  *
>  * The internal concurrency design of this class is as follows:
>  *   - Log items are written synchronized into an in-memory buffer,
>  * and each assigned a transaction ID.
>  *   - When a thread (client) would like to sync all of its edits, logSync()
>  * uses a ThreadLocal transaction ID to determine what edit number must
>  * be synced to.
>  *   - The isSyncRunning volatile boolean tracks whether a sync is currently
>  * under progress.
>  *
>  * The data is double-buffered within each edit log implementation so that
>  * in-memory writing can occur in parallel with the on-disk writing.
>  *
>  * Each sync occurs in three steps:
>  *   1. synchronized, it swaps the double buffer and sets the isSyncRunning
>  *  flag.
>  *   2. unsynchronized, it flushes the data to storage
>  *   3. synchronized, it resets the flag and notifies anyone waiting on the
>  *  sync.
>  *
>  * The lack of synchronization on step 2 allows other threads to continue
>  * to write into the memory buffer while the sync is in progress.
>  * Because this step is unsynchronized, actions that need to avoid
>  * concurrency with sync() should be synchronized and also call
>  * waitForSyncToFinish() before assuming they are running alone.
>  */
> public void logSync() {
>   long syncStart = 0;
>   // Fetch the transactionId of this thread. 
>   long mytxid = myTransactionId.get().txid;
>   
>   boolean sync = false;
>   try {
> EditLogOutputStream logStream = null;
> synchronized (this) {
>   try {
> printStatistics(false);
> // if somebody is already syncing, then wait
> while (mytxid > synctxid && isSyncRunning) {
>   try {
> wait(1000);
>   } catch (InterruptedException ie) {
>   }
> }
> //
> // If this transaction was already flushed, then nothing to do
> //
> if (mytxid <= synctxid) {
>   numTransactionsBatchedInSync++;
>   if (metrics != null) {
> // Metrics is non-null only when used inside name node
> metrics.incrTransactionsBatchedInSync();
>   }
>   return;
> }
>
> // now, this thread will do the sync
> syncStart = txid;
> isSyncRunning = true;
> sync = true;
> // swap buffers
> try {
>   if (journalSet.isEmpty()) {
> throw new IOException("No journals available to flush");
>   }
>   editLogStream.setReadyToFlush();
> } catch (IOException e) {
>   

[jira] [Comment Edited] (HDFS-14437) Exception happened when rollEditLog expects empty EditsDoubleBuffer.bufCurrent but not

2019-04-25 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826604#comment-16826604
 ] 

star edited comment on HDFS-14437 at 4/26/19 3:21 AM:
--

As [~kihwal] said in HDFS-10943, '{{rollEditLog()}} does not and cannot 
solely depend on {{FSEditLog}} synchronization', so it is not enough to reproduce 
this issue at the FSEditLog level. We should reproduce it at the FSNamesystem level 
or even at the RPC level. I've tried that but failed. As far as I know, all methods of 
FSEditLog are called in FSNamesystem under either the read lock or the write lock.


was (Author: starphin):
As [~kihwal] said in HDFS-10943, '{{rollEditLog()}} does not and cannot 
solely depend on {{FSEditLog}} synchronization', so it is not enough to reproduce 
such an issue at the FSEditLog level. We should reproduce it at the FSNamesystem level 
or even at the RPC level. As far as I know, all methods of FSEditLog are called in 
FSNamesystem under either the read lock or the write lock.

> Exception happened when   rollEditLog expects empty 
> EditsDoubleBuffer.bufCurrent  but not
> -
>
> Key: HDFS-14437
> URL: https://issues.apache.org/jira/browse/HDFS-14437
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode, qjm
>Reporter: angerszhu
>Priority: Major
>
> For the problem mentioned in https://issues.apache.org/jira/browse/HDFS-10943, 
> I have sorted through the write-and-flush process of the EditLog and some important 
> functions. I found that in the FSEditLog class, the close() function 
> runs the following sequence:
>  
> {code:java}
> waitForSyncToFinish();
> endCurrentLogSegment(true);{code}
> Since we have acquired the object lock in close(), when the 
> waitForSyncToFinish() method returns it means all logSync work has finished and all 
> data in bufReady has been flushed out; and since the current thread holds the lock 
> on this object, no other thread can acquire the lock while endCurrentLogSegment() 
> runs, so they cannot write new edit log entries into bufCurrent.
> But if we don't call waitForSyncToFinish() before endCurrentLogSegment(), 
> an auto-scheduled logSync() flush may still be in progress, since 
> that flush does not need
> synchronization, as mentioned in the comment of the logSync() method:
>  
> {code:java}
> /**
>  * Sync all modifications done by this thread.
>  *
>  * The internal concurrency design of this class is as follows:
>  *   - Log items are written synchronized into an in-memory buffer,
>  * and each assigned a transaction ID.
>  *   - When a thread (client) would like to sync all of its edits, logSync()
>  * uses a ThreadLocal transaction ID to determine what edit number must
>  * be synced to.
>  *   - The isSyncRunning volatile boolean tracks whether a sync is currently
>  * under progress.
>  *
>  * The data is double-buffered within each edit log implementation so that
>  * in-memory writing can occur in parallel with the on-disk writing.
>  *
>  * Each sync occurs in three steps:
>  *   1. synchronized, it swaps the double buffer and sets the isSyncRunning
>  *  flag.
>  *   2. unsynchronized, it flushes the data to storage
>  *   3. synchronized, it resets the flag and notifies anyone waiting on the
>  *  sync.
>  *
>  * The lack of synchronization on step 2 allows other threads to continue
>  * to write into the memory buffer while the sync is in progress.
>  * Because this step is unsynchronized, actions that need to avoid
>  * concurrency with sync() should be synchronized and also call
>  * waitForSyncToFinish() before assuming they are running alone.
>  */
> public void logSync() {
>   long syncStart = 0;
>   // Fetch the transactionId of this thread. 
>   long mytxid = myTransactionId.get().txid;
>   
>   boolean sync = false;
>   try {
> EditLogOutputStream logStream = null;
> synchronized (this) {
>   try {
> printStatistics(false);
> // if somebody is already syncing, then wait
> while (mytxid > synctxid && isSyncRunning) {
>   try {
> wait(1000);
>   } catch (InterruptedException ie) {
>   }
> }
> //
> // If this transaction was already flushed, then nothing to do
> //
> if (mytxid <= synctxid) {
>   numTransactionsBatchedInSync++;
>   if (metrics != null) {
> // Metrics is non-null only when used inside name node
> metrics.incrTransactionsBatchedInSync();
>   }
>   return;
> }
>
> // now, this thread will do the sync
> syncStart = txid;
> isSyncRunning = true;
> sync = true;
> // swap buffers
> try {
>   if (journalSet.isEmpty()) {
> throw n

[jira] [Commented] (HDFS-14437) Exception happened when rollEditLog expects empty EditsDoubleBuffer.bufCurrent but not

2019-04-25 Thread star (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826604#comment-16826604
 ] 

star commented on HDFS-14437:
-

As [~kihwal] said in HDFS-10943, '{{rollEditLog()}} does not and cannot 
solely depend on {{FSEditLog}} synchronization', so it is not enough to reproduce 
such an issue at the FSEditLog level. We should reproduce it at the FSNamesystem level 
or even at the RPC level. As far as I know, all methods of FSEditLog are called in 
FSNamesystem under either the read lock or the write lock.

> Exception happened when   rollEditLog expects empty 
> EditsDoubleBuffer.bufCurrent  but not
> -
>
> Key: HDFS-14437
> URL: https://issues.apache.org/jira/browse/HDFS-14437
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha, namenode, qjm
>Reporter: angerszhu
>Priority: Major
>
> For the problem mentioned in https://issues.apache.org/jira/browse/HDFS-10943, 
> I have sorted through the write-and-flush process of the EditLog and some important 
> functions. I found that in the FSEditLog class, the close() function 
> runs the following sequence:
>  
> {code:java}
> waitForSyncToFinish();
> endCurrentLogSegment(true);{code}
> Since we have acquired the object lock in close(), when the 
> waitForSyncToFinish() method returns it means all logSync work has finished and all 
> data in bufReady has been flushed out; and since the current thread holds the lock 
> on this object, no other thread can acquire the lock while endCurrentLogSegment() 
> runs, so they cannot write new edit log entries into bufCurrent.
> But if we don't call waitForSyncToFinish() before endCurrentLogSegment(), 
> an auto-scheduled logSync() flush may still be in progress, since 
> that flush does not need
> synchronization, as mentioned in the comment of the logSync() method:
>  
> {code:java}
> /**
>  * Sync all modifications done by this thread.
>  *
>  * The internal concurrency design of this class is as follows:
>  *   - Log items are written synchronized into an in-memory buffer,
>  * and each assigned a transaction ID.
>  *   - When a thread (client) would like to sync all of its edits, logSync()
>  * uses a ThreadLocal transaction ID to determine what edit number must
>  * be synced to.
>  *   - The isSyncRunning volatile boolean tracks whether a sync is currently
>  * under progress.
>  *
>  * The data is double-buffered within each edit log implementation so that
>  * in-memory writing can occur in parallel with the on-disk writing.
>  *
>  * Each sync occurs in three steps:
>  *   1. synchronized, it swaps the double buffer and sets the isSyncRunning
>  *  flag.
>  *   2. unsynchronized, it flushes the data to storage
>  *   3. synchronized, it resets the flag and notifies anyone waiting on the
>  *  sync.
>  *
>  * The lack of synchronization on step 2 allows other threads to continue
>  * to write into the memory buffer while the sync is in progress.
>  * Because this step is unsynchronized, actions that need to avoid
>  * concurrency with sync() should be synchronized and also call
>  * waitForSyncToFinish() before assuming they are running alone.
>  */
> public void logSync() {
>   long syncStart = 0;
>   // Fetch the transactionId of this thread. 
>   long mytxid = myTransactionId.get().txid;
>   
>   boolean sync = false;
>   try {
> EditLogOutputStream logStream = null;
> synchronized (this) {
>   try {
> printStatistics(false);
> // if somebody is already syncing, then wait
> while (mytxid > synctxid && isSyncRunning) {
>   try {
> wait(1000);
>   } catch (InterruptedException ie) {
>   }
> }
> //
> // If this transaction was already flushed, then nothing to do
> //
> if (mytxid <= synctxid) {
>   numTransactionsBatchedInSync++;
>   if (metrics != null) {
> // Metrics is non-null only when used inside name node
> metrics.incrTransactionsBatchedInSync();
>   }
>   return;
> }
>
> // now, this thread will do the sync
> syncStart = txid;
> isSyncRunning = true;
> sync = true;
> // swap buffers
> try {
>   if (journalSet.isEmpty()) {
> throw new IOException("No journals available to flush");
>   }
>   editLogStream.setReadyToFlush();
> } catch (IOException e) {
>   final String msg =
>   "Could not sync enough journals to persistent storage " +
>   "due to " + e.getMessage() + ". " +
>   "Unsynced transactions: " + (txid - synctxid);
>   LOG.fatal(msg, new Exception());
>   synchronized(journalSetLock) {
>   

[jira] [Comment Edited] (HDFS-14356) Implement HDFS cache on SCM with native PMDK libs

2019-04-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825899#comment-16825899
 ] 

Feilong He edited comment on HDFS-14356 at 4/26/19 2:24 AM:


[~Sammi], I am so appreciative of your review. Your suggestions are quite 
valuable to us.
{quote}Use constants or enum for pmdk status
{quote}
Good suggestion. As you suggested, it's quite reasonable to use an enum to 
maintain the PMDK support state codes. I will refine this part of the 
implementation in the new patch.
{quote}Is there potential buffer overflow risk here?
{quote}
Actually, the snprintf() method truncates excess characters when writing the 
pmem error message into msg[1000], so there should be no buffer overflow risk 
here. I will check the other pieces of code to avoid this issue. Thanks for 
your great insight!
{quote}Do you plan to support Windows in this patch?  If not, please clarify 
the supported platform in the title or in the description.  Also make sure when 
compile native is enabled(-Pnative),  following two cases pass
{quote}
{quote} Linux platform, compile with and without PMDK enabled

     Windows platform,  compile without PMDK enabled
{quote}
I will make sure the two build cases pass. On the Linux platform, the build 
passes on my side regardless of whether PMDK is enabled, and I will prepare a 
Windows environment to verify the Windows case.

 

Thanks [~Sammi] again!


was (Author: philohe):
[~Sammi], I am so appreciative of your review. Your suggestions are quite 
valuable to us.
{quote}Use constants or enum for pmdk status
{quote}
Good suggestion. As you suggested, it's quite reasonable to use an enum to keep 
the PMDK support state codes. I will refine this part of the implementation in the new patch.
{quote}Is there potential buffer overflow risk here?
{quote}
Actually, the snprintf() method truncates excess characters when writing the 
pmem error message into msg[1000], so there should be no buffer overflow risk 
here. I will check the other pieces of code to avoid this issue. Thanks for 
your great insight.
{quote}Do you plan to support Windows in this patch?  If not, please clarify 
the supported platform in the title or in the description.  Also make sure when 
compile native is enabled(-Pnative),  following two cases pass
{quote} * 
{quote} Linux platform, compile with and without PMDK enabled{quote}
 * 
{quote} Windows platform,  compile without PMDK enabled{quote}

I will make sure the two build cases pass. On the Linux platform, the build 
passes on my side regardless of whether PMDK is enabled, and I will prepare a 
Windows environment to verify the Windows case.

 

Thanks [~Sammi] again!

> Implement HDFS cache on SCM with native PMDK libs
> -
>
> Key: HDFS-14356
> URL: https://issues.apache.org/jira/browse/HDFS-14356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14356.000.patch, HDFS-14356.001.patch, 
> HDFS-14356.002.patch
>
>
> In this implementation, native PMDK libs are used to map HDFS blocks to SCM. 
> To use this implementation, users should build Hadoop with the PMDK libs by 
> specifying a build option. This implementation is only supported on the Linux 
> platform.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14434) webhdfs that connect secure hdfs should not use user.name parameter

2019-04-25 Thread KWON BYUNGCHANG (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826589#comment-16826589
 ] 

KWON BYUNGCHANG commented on HDFS-14434:


[~eyang] Thank you.
I added a non-SSL webhdfs test and removed the unused comment lines.

> webhdfs that connect secure hdfs should not use user.name parameter
> ---
>
> Key: HDFS-14434
> URL: https://issues.apache.org/jira/browse/HDFS-14434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Minor
> Attachments: HDFS-14434.001.patch, HDFS-14434.002.patch, 
> HDFS-14434.003.patch, HDFS-14434.004.patch, HDFS-14434.005.patch, 
> HDFS-14434.006.patch, HDFS-14434.007.patch
>
>
> I have two secure Hadoop clusters, and both use cross-realm 
> authentication.
> [use...@a.com|mailto:use...@a.com] can access HDFS of the B.COM realm;
> however, the Hadoop username of use...@a.com in the B.COM realm is 
> cross_realm_a_com_user_a.
> The hdfs dfs command of use...@a.com against the B.COM webhdfs failed.
> The root cause is that webhdfs connecting to secure HDFS uses the user.name parameter.
> According to the webhdfs spec, insecure webhdfs uses user.name, while secure webhdfs 
> uses SPNEGO for authentication.
> I think webhdfs that connects to secure HDFS should not use the user.name parameter.
> I will attach a patch.
> Below is the error log:
>  
> {noformat}
> $ hdfs dfs -ls  webhdfs://b.com:50070/
> ls: Usernames not matched: name=user_a != expected=cross_realm_a_com_user_a
>  
> # user.name in cross realm webhdfs
> $ curl -u : --negotiate 
> 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=user_a' 
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Usernames not 
> matched: name=user_a != expected=cross_realm_a_com_user_a"}}
> # USE SPNEGO
> $ curl -u : --negotiate 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN'
> {"Token"{"urlString":"XgA."}}
>  
> {noformat}
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14434) webhdfs that connect secure hdfs should not use user.name parameter

2019-04-25 Thread KWON BYUNGCHANG (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HDFS-14434:
---
Attachment: HDFS-14434.007.patch

> webhdfs that connect secure hdfs should not use user.name parameter
> ---
>
> Key: HDFS-14434
> URL: https://issues.apache.org/jira/browse/HDFS-14434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Minor
> Attachments: HDFS-14434.001.patch, HDFS-14434.002.patch, 
> HDFS-14434.003.patch, HDFS-14434.004.patch, HDFS-14434.005.patch, 
> HDFS-14434.006.patch, HDFS-14434.007.patch
>
>
> I have two secure Hadoop clusters, and both use cross-realm 
> authentication.
> [use...@a.com|mailto:use...@a.com] can access HDFS of the B.COM realm;
> however, the Hadoop username of use...@a.com in the B.COM realm is 
> cross_realm_a_com_user_a.
> The hdfs dfs command of use...@a.com against the B.COM webhdfs failed.
> The root cause is that webhdfs connecting to secure HDFS uses the user.name parameter.
> According to the webhdfs spec, insecure webhdfs uses user.name, while secure webhdfs 
> uses SPNEGO for authentication.
> I think webhdfs that connects to secure HDFS should not use the user.name parameter.
> I will attach a patch.
> Below is the error log:
>  
> {noformat}
> $ hdfs dfs -ls  webhdfs://b.com:50070/
> ls: Usernames not matched: name=user_a != expected=cross_realm_a_com_user_a
>  
> # user.name in cross realm webhdfs
> $ curl -u : --negotiate 
> 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=user_a' 
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Usernames not 
> matched: name=user_a != expected=cross_realm_a_com_user_a"}}
> # USE SPNEGO
> $ curl -u : --negotiate 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN'
> {"Token"{"urlString":"XgA."}}
>  
> {noformat}
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233199&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233199
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 26/Apr/19 00:22
Start Date: 26/Apr/19 00:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-486882561
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1049 | trunk passed |
   | +1 | compile | 34 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 733 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 58 | trunk passed |
   | +1 | javadoc | 26 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 38 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | +1 | checkstyle | 16 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 737 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 21 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 106 | server-scm in the patch failed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3100 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/714 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 39bcea67402b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5dcf64 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/6/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/6/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/6/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233199)
Time Spent: 5h 10m  (was: 5h)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.
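
For illustration, a minimal, self-contained sketch of the proposed change, using a placeholder list of datanodes rather than the real pipeline-creation code in RatisPipelineUtils:

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ForkJoinPool;

public class PipelineCreationSketch {
  public static void main(String[] args) throws Exception {
    List<String> datanodes = Arrays.asList("dn1", "dn2", "dn3");

    // Current behavior: parallelStream() runs on ForkJoinPool.commonPool(),
    // which is shared by the whole JVM.
    datanodes.parallelStream().forEach(dn -> System.out.println("create pipeline on " + dn));

    // Proposed behavior: a dedicated pool sized to the number of processors.
    // Tasks submitted from inside this pool keep the parallel stream on it.
    ForkJoinPool pool = new ForkJoinPool(Runtime.getRuntime().availableProcessors());
    try {
      pool.submit(() ->
          datanodes.parallelStream().forEach(dn -> System.out.println("create pipeline on " + dn))
      ).get();
    } finally {
      pool.shutdown();
    }
  }
}
{code}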



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1456) Stop the datanode, when any datanode statemachine state is set to shutdown

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1456?focusedWorklogId=233189&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233189
 ]

ASF GitHub Bot logged work on HDDS-1456:


Author: ASF GitHub Bot
Created on: 26/Apr/19 00:05
Start Date: 26/Apr/19 00:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #769: HDDS-1456. Stop 
the datanode, when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#issuecomment-486879930
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1069 | trunk passed |
   | +1 | compile | 1026 | trunk passed |
   | +1 | checkstyle | 136 | trunk passed |
   | +1 | mvnsite | 124 | trunk passed |
   | +1 | shadedclient | 1025 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 103 | trunk passed |
   | +1 | javadoc | 97 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 97 | the patch passed |
   | +1 | compile | 965 | the patch passed |
   | +1 | javac | 965 | the patch passed |
   | +1 | checkstyle | 131 | the patch passed |
   | +1 | mvnsite | 125 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 716 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 118 | the patch passed |
   | +1 | javadoc | 95 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 79 | container-service in the patch failed. |
   | +1 | unit | 126 | server-scm in the patch passed. |
   | -1 | unit | 1126 | integration-test in the patch failed. |
   | +1 | asflicense | 51 | The patch does not generate ASF License warnings. |
   | | | 7290 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.web.client.TestKeysRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.container.TestContainerReplication |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/769 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux c10765da7205 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5dcf64 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/4/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/4/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/4/testReport/ |
   | Max. process+thread count | 5247 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233189)
Time Spent: 2h 20m  (was: 2h 10m)

> Stop the datanode, when any datanode statemachine state is set to shutdown
> --
>
> Key: HDDS-1456
> URL: https://issues.apache.or

[jira] [Comment Edited] (HDDS-1458) Create a maven profile to run fault injection tests

2019-04-25 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826541#comment-16826541
 ] 

Eric Yang edited comment on HDDS-1458 at 4/26/19 12:04 AM:
---

Patch 001 is a draft of the fault injection framework in Maven plus the developer docker 
image.  What is in this patch:

# Modified start-build-env.sh to allow the docker CLI to run inside docker, so that 
additional docker containers can be spun up for testing.
# Included pytest and blockade for network fault injection tests.
# Included a maven profile to launch the Ozone blockade test suites.

Disk fault injection can be simulated by running the Ozone docker image with read-only 
disks, using the maven docker compose plugin in combination with additional 
python tests.

Developers/Jenkins can start the fault injection tests by running:
{code}
./start-build-env.sh
cd hadoop/hadoop-ozone/dist
mvn clean verify -Pit
{code}

A couple of points for discussion:
- Are we OK with the docker-in-docker addition to start-build-env.sh, given that it 
uses the --privileged flag to gain access to the host-level docker?  In my opinion, 
the existing setup already requires the user to have access to docker.  The new 
privileged flag gives more power to break out of the container environment, but it 
is necessary for simulating network or disk failures.  I am fine with not bundling 
this in start-build-env.sh, but it is nicer not to have to hunt for 
developer dependencies to start development.
- Can we move hadoop-ozone/dist/src/main/blockade into the integration-test 
project?  It seems a more logical place to host the fault injection test suites.
- Do we want the tests to run under a profile, or is the default "mvn verify" good enough?


was (Author: eyang):
Patch 001 is a draft of the fault injection framework in Maven plus the developer docker 
image.  What is in this patch:

# Modified start-build-env.sh to allow the docker CLI to run inside docker, so that 
additional docker containers can be spun up for testing.
# Included pytest and blockade for network fault injection tests.
# Included a maven profile to launch the Ozone blockade test suites.

Disk fault injection can be simulated by running the Ozone docker image with read-only 
disks, using the maven docker compose plugin in combination with additional 
python tests.

Developers/Jenkins can start the fault injection tests by running:
{code}
./start-build-env.sh
cd hadoop/hadoop-ozone/dist
mvn clean verify -Pit
{code}

A couple of points for discussion:
- Are we OK with the docker-in-docker addition to start-build-env.sh, given that it 
uses the --privileged flag to gain access to the host-level docker?  In my opinion, 
the existing setup already requires the user to have access to docker.  The new 
privileged flag gives more power to break out of the container environment, but it 
is necessary for simulating network or disk failures.  I am fine with not bundling 
this in start-build-env.sh, but it is nicer not to have to hunt for 
developer dependencies to start development.
- Can we move hadoop-ozone/dist/src/main/blockade into the integration-test 
project?  It seems a more logical place to host the fault injection test suites.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault-tolerance 
> defects.
> We can introduce a profile with id "it" (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1458) Create a maven profile to run fault injection tests

2019-04-25 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826541#comment-16826541
 ] 

Eric Yang commented on HDDS-1458:
-

Patch 001 is a draft of the fault injection framework in Maven plus the developer docker 
image.  What is in this patch:

# Modified start-build-env.sh to allow the docker CLI to run inside docker, so that 
additional docker containers can be spun up for testing.
# Included pytest and blockade for network fault injection tests.
# Included a maven profile to launch the Ozone blockade test suites.

Disk fault injection can be simulated by running the Ozone docker image with read-only 
disks, using the maven docker compose plugin in combination with additional 
python tests.

Developers/Jenkins can start the fault injection tests by running:
{code}
./start-build-env.sh
cd hadoop/hadoop-ozone/dist
mvn clean verify -Pit
{code}

A couple of points for discussion:
- Are we OK with the docker-in-docker addition to start-build-env.sh, given that it 
uses the --privileged flag to gain access to the host-level docker?  In my opinion, 
the existing setup already requires the user to have access to docker.  The new 
privileged flag gives more power to break out of the container environment, but it 
is necessary for simulating network or disk failures.  I am fine with not bundling 
this in start-build-env.sh, but it is nicer not to have to hunt for 
developer dependencies to start development.
- Can we move hadoop-ozone/dist/src/main/blockade into the integration-test 
project?  It seems a more logical place to host the fault injection test suites.

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1403) KeyOutputStream writes fails after max retries while writing to a closed container

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1403?focusedWorklogId=233186&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233186
 ]

ASF GitHub Bot logged work on HDDS-1403:


Author: ASF GitHub Bot
Created on: 25/Apr/19 23:58
Start Date: 25/Apr/19 23:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #753: HDDS-1403. 
KeyOutputStream writes fails after max retries while writing to a closed 
container
URL: https://github.com/apache/hadoop/pull/753#issuecomment-486878583
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1029 | trunk passed |
   | +1 | compile | 983 | trunk passed |
   | +1 | checkstyle | 139 | trunk passed |
   | +1 | mvnsite | 192 | trunk passed |
   | +1 | shadedclient | 1058 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 253 | trunk passed |
   | +1 | javadoc | 171 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | -1 | mvninstall | 20 | client in the patch failed. |
   | -1 | mvninstall | 20 | objectstore-service in the patch failed. |
   | +1 | compile | 918 | the patch passed |
   | +1 | javac | 918 | the patch passed |
   | +1 | checkstyle | 137 | the patch passed |
   | -1 | mvnsite | 37 | objectstore-service in the patch failed. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 679 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | findbugs | 38 | objectstore-service in the patch failed. |
   | +1 | javadoc | 169 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 82 | common in the patch passed. |
   | +1 | unit | 40 | client in the patch passed. |
   | +1 | unit | 48 | common in the patch passed. |
   | -1 | unit | 37 | objectstore-service in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6612 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/753 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
   | uname | Linux 16f011cc5d3b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5dcf64 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/3/artifact/out/patch-mvninstall-hadoop-ozone_client.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/3/artifact/out/patch-mvninstall-hadoop-ozone_objectstore-service.txt
 |
   | mvnsite | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/3/artifact/out/patch-mvnsite-hadoop-ozone_objectstore-service.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/3/artifact/out/patch-findbugs-hadoop-ozone_objectstore-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/3/artifact/out/patch-unit-hadoop-ozone_objectstore-service.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/3/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/client hadoop-ozone/common 
hadoop-ozone/objectstore-service U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-753/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apac

[jira] [Updated] (HDDS-1458) Create a maven profile to run fault injection tests

2019-04-25 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HDDS-1458:

Attachment: HDDS-1458.001.patch

> Create a maven profile to run fault injection tests
> ---
>
> Key: HDDS-1458
> URL: https://issues.apache.org/jira/browse/HDDS-1458
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: HDDS-1458.001.patch
>
>
> Some fault injection tests have been written using blockade.  It would be 
> nice to have the ability to start docker compose, exercise the blockade test 
> cases against Ozone docker containers, and generate reports.  These are 
> optional integration tests to catch race conditions and fault tolerance 
> defects. 
> We can introduce a profile with id: it (short for integration tests).  This 
> will launch docker compose via maven-exec-plugin and run blockade to simulate 
> container failures and timeouts.
> Usage command:
> {code}
> mvn clean verify -Pit
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1456) Stop the datanode, when any datanode statemachine state is set to shutdown

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1456?focusedWorklogId=233181&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233181
 ]

ASF GitHub Bot logged work on HDDS-1456:


Author: ASF GitHub Bot
Created on: 25/Apr/19 23:43
Start Date: 25/Apr/19 23:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #769: HDDS-1456. Stop 
the datanode, when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#issuecomment-486875942
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 57 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1332 | trunk passed |
   | +1 | compile | 1534 | trunk passed |
   | +1 | checkstyle | 186 | trunk passed |
   | +1 | mvnsite | 253 | trunk passed |
   | +1 | shadedclient | 1234 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 103 | trunk passed |
   | +1 | javadoc | 95 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 21 | Maven dependency ordering for patch |
   | +1 | mvninstall | 100 | the patch passed |
   | +1 | compile | 961 | the patch passed |
   | +1 | javac | 961 | the patch passed |
   | +1 | checkstyle | 145 | the patch passed |
   | +1 | mvnsite | 120 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 728 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 118 | the patch passed |
   | +1 | javadoc | 88 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 75 | container-service in the patch failed. |
   | +1 | unit | 125 | server-scm in the patch passed. |
   | -1 | unit | 1373 | integration-test in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 8543 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.container.TestContainerReplication |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/769 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 59e0fa299b64 4.4.0-144-generic #170~14.04.1-Ubuntu SMP Mon 
Mar 18 15:02:05 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5dcf64 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/3/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/3/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/3/testReport/ |
   | Max. process+thread count | 4932 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233181)
Time Spent: 2h 10m  (was: 2h)

> Stop the datanode, when any datanode statemachine state is set to shutdown
> --
>
> Key: HDDS-1456
> URL: https://issues.apache.o

[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233180&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233180
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 23:41
Start Date: 25/Apr/19 23:41
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #714: HDDS-1406. 
Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-486875564
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233180)
Time Spent: 5h  (was: 4h 50m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.
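
For readers who want the gist of the fix being reviewed below: tasks spawned by a 
parallel stream execute in the ForkJoinPool that invokes the stream's terminal 
operation, so submitting the whole fan-out to a dedicated pool keeps it off the 
JVM-wide common pool. A minimal, self-contained sketch of that technique follows; 
class and method names are illustrative only, not the actual Ozone code:
{code}
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ForkJoinPool;

public final class DedicatedPoolSketch {

  // One pool for all fan-out work, sized to the CPU count, so
  // parallelStream tasks stay out of ForkJoinPool.commonPool().
  private static final ForkJoinPool POOL =
      new ForkJoinPool(Runtime.getRuntime().availableProcessors());

  static void contactAll(List<String> datanodes) throws Exception {
    // A parallel stream runs its tasks in the ForkJoinPool that invokes the
    // terminal operation, so submitting the lambda to POOL redirects the
    // whole fan-out away from the common pool.
    POOL.submit(() ->
        datanodes.parallelStream()
            .forEach(d -> System.out.println("invoking Ratis rpc on " + d))
    ).get();
  }

  public static void main(String[] args) throws Exception {
    contactAll(Arrays.asList("dn1", "dn2", "dn3"));
    POOL.shutdown();  // mirrors shutting the pool down when the service stops
  }
}
{code}
The get() on the submitted task is also what surfaces ExecutionException and 
RejectedExecutionException, which is the exception-handling question debated in 
the review comments below.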



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233179&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233179
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 23:39
Start Date: 25/Apr/19 23:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-486875182
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1119 | trunk passed |
   | +1 | compile | 69 | trunk passed |
   | +1 | checkstyle | 24 | trunk passed |
   | +1 | mvnsite | 33 | trunk passed |
   | +1 | shadedclient | 765 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 42 | trunk passed |
   | +1 | javadoc | 24 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 826 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | the patch passed |
   | +1 | javadoc | 19 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 101 | server-scm in the patch failed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3298 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/714 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux cfa19566ca15 4.4.0-141-generic #167~14.04.1-Ubuntu SMP Mon 
Dec 10 13:20:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5dcf64 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/5/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/5/testReport/ |
   | Max. process+thread count | 342 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/server-scm U: hadoop-hdds/server-scm |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-714/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233179)
Time Spent: 4h 50m  (was: 4h 40m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233176&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233176
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 23:30
Start Date: 25/Apr/19 23:30
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278767040
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +165,37 @@ private static void callRatisRpc(List 
datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233176)
Time Spent: 4h 40m  (was: 4.5h)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1456) Stop the datanode, when any datanode statemachine state is set to shutdown

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1456?focusedWorklogId=233167&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233167
 ]

ASF GitHub Bot logged work on HDDS-1456:


Author: ASF GitHub Bot
Created on: 25/Apr/19 23:12
Start Date: 25/Apr/19 23:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #769: HDDS-1456. Stop 
the datanode, when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#issuecomment-486870085
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 61 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1031 | trunk passed |
   | +1 | compile | 967 | trunk passed |
   | +1 | checkstyle | 141 | trunk passed |
   | +1 | mvnsite | 167 | trunk passed |
   | +1 | shadedclient | 1014 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 93 | trunk passed |
   | +1 | javadoc | 83 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 91 | the patch passed |
   | +1 | compile | 924 | the patch passed |
   | +1 | javac | 924 | the patch passed |
   | +1 | checkstyle | 133 | the patch passed |
   | +1 | mvnsite | 110 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 676 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 127 | the patch passed |
   | +1 | javadoc | 105 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 68 | container-service in the patch failed. |
   | +1 | unit | 113 | server-scm in the patch passed. |
   | -1 | unit | 789 | integration-test in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 6737 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/769 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux e5a6a8f989b6 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5dcf64 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/2/artifact/out/patch-unit-hadoop-hdds_container-service.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/2/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/2/testReport/ |
   | Max. process+thread count | 5325 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm 
hadoop-ozone/integration-test U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-769/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233167)
Time Spent: 2h  (was: 1h 50m)

> Stop the datanode, when any datanode statemachine state is set to shutdown
> --
>
> Key: HDDS-1456
> URL: https://issues.apache.org/jira/browse/HDDS-1456
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
> 

[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233158&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233158
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 23:03
Start Date: 25/Apr/19 23:03
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278762154
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +165,37 @@ private static void callRatisRpc(List 
datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
+  LOG.error("Execution exception occurred during createPipeline", ex);
+  throw new IOException("Execution exception occurred during " +
+  "createPipeline", ex);
+} catch (RejectedExecutionException ex) {
+  LOG.error("RejectedExecutionException, occurred during " +
 
 Review comment:
   We throw an IOException, so in the IOException case there is no logging, right?
   It only has a break;
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233158)
Time Spent: 4h 20m  (was: 4h 10m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233159&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233159
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 23:03
Start Date: 25/Apr/19 23:03
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278760822
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +165,37 @@ private static void callRatisRpc(List 
datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
+  LOG.error("Execution exception occurred during createPipeline", ex);
+  throw new IOException("Execution exception occurred during " +
+  "createPipeline", ex);
+} catch (RejectedExecutionException ex) {
+  LOG.error("RejectedExecutionException, occurred during " +
 
 Review comment:
   Logging here because the createPipelines method in BackgroundPipelineCreator, 
which calls this method, just breaks out of its while loop when we throw 
IOException. That is the reason for the logging. Let me know if we still don't 
want to log here?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233159)
Time Spent: 4.5h  (was: 4h 20m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233157&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233157
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 23:03
Start Date: 25/Apr/19 23:03
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278762154
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +165,37 @@ private static void callRatisRpc(List 
datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
+  LOG.error("Execution exception occurred during createPipeline", ex);
+  throw new IOException("Execution exception occurred during " +
+  "createPipeline", ex);
+} catch (RejectedExecutionException ex) {
+  LOG.error("RejectedExecutionException, occurred during " +
 
 Review comment:
   We throw an IOException, so in the IOException case there is no logging, right?
   It only has a break;
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233157)
Time Spent: 4h 10m  (was: 4h)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233156&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233156
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 23:01
Start Date: 25/Apr/19 23:01
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278761757
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +165,37 @@ private static void callRatisRpc(List 
datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
+  LOG.error("Execution exception occurred during createPipeline", ex);
+  throw new IOException("Execution exception occurred during " +
+  "createPipeline", ex);
+} catch (RejectedExecutionException ex) {
+  LOG.error("RejectedExecutionException, occurred during " +
 
 Review comment:
   That method is already logging, right:
   ```
 pipelineManager.createPipeline(type, factor);
   } catch (IOException ioe) {
 break;
   } catch (Throwable t) {
 LOG.error("Error while creating pipelines {}", t);
 break;
   }
   ```
   So if we log here the exception will be logged twice. Perhaps I'm looking at 
it wrong.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233156)
Time Spent: 4h  (was: 3h 50m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233151&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233151
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 22:56
Start Date: 25/Apr/19 22:56
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278760822
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +165,37 @@ private static void callRatisRpc(List 
datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
+  LOG.error("Execution exception occurred during createPipeline", ex);
+  throw new IOException("Execution exception occurred during " +
+  "createPipeline", ex);
+} catch (RejectedExecutionException ex) {
+  LOG.error("RejectedExecutionException, occurred during " +
 
 Review comment:
   Logging here because the createPipelines method in BackgroundPipelineCreator, 
which calls this method, just breaks out of its while loop when we throw 
IOException. That is the reason for the logging. Let me know if we still don't 
want to log here?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233151)
Time Spent: 3h 50m  (was: 3h 40m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233147&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233147
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 22:52
Start Date: 25/Apr/19 22:52
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278759739
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +165,37 @@ private static void callRatisRpc(List 
datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
+  LOG.error("Execution exception occurred during createPipeline", ex);
+  throw new IOException("Execution exception occurred during " +
+  "createPipeline", ex);
+} catch (RejectedExecutionException ex) {
+  LOG.error("RejectedExecutionException, occurred during " +
 
 Review comment:
   Same here, don't log.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233147)
Time Spent: 3.5h  (was: 3h 20m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233148
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 22:52
Start Date: 25/Apr/19 22:52
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278759705
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +165,37 @@ private static void callRatisRpc(List 
datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
+  LOG.error("Execution exception occurred during createPipeline", ex);
 
 Review comment:
   Don't log here, just throw IOException that wraps the ExecutionException.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233148)
Time Spent: 3.5h  (was: 3h 20m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233149&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233149
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 22:52
Start Date: 25/Apr/19 22:52
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #714: HDDS-1406. Avoid 
usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278759939
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +165,37 @@ private static void callRatisRpc(List 
datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
 
 Review comment:
   You can simplify the code a bit by catching multiple exceptions in one 
clause. e.g.
   catch(ExecutionException|RejectedExecutionException e)
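
A minimal compilable sketch of that suggestion, with wrap-and-rethrow and no local 
logging (class and method names are illustrative, and the separate 
InterruptedException branch is an assumption added for completeness, not part of 
the patch):
{code}
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RejectedExecutionException;

final class CallRatisRpcSketch {
  private static final ForkJoinPool POOL = new ForkJoinPool();

  static void callRatisRpc(Runnable fanOut) throws IOException {
    try {
      POOL.submit(fanOut).get();
    } catch (ExecutionException | RejectedExecutionException ex) {
      // One clause wraps both failure modes; whether to also LOG.error here
      // is exactly the question debated in this review thread.
      throw new IOException("Ratis rpc fan-out failed", ex);
    } catch (InterruptedException ex) {
      // Restore the interrupt flag before propagating as IOException.
      Thread.currentThread().interrupt();
      throw new IOException("Interrupted during Ratis rpc fan-out", ex);
    }
  }
}
{code}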
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233149)
Time Spent: 3h 40m  (was: 3.5h)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233143&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233143
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 22:44
Start Date: 25/Apr/19 22:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278758443
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +164,28 @@ private static void callRatisRpc(List 
datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233143)
Time Spent: 3h 10m  (was: 3h)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233141&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233141
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 22:44
Start Date: 25/Apr/19 22:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #714: HDDS-1406. 
Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#issuecomment-486863874
 
 
   Thank you @arp7 for the review.
   Addressed the review comments.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233141)
Time Spent: 2h 50m  (was: 2h 40m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233142&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233142
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 22:44
Start Date: 25/Apr/19 22:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278758367
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/StorageContainerManager.java
 ##
 @@ -1010,6 +1011,9 @@ public void stop() {
 } catch (Exception ex) {
   LOG.error("SCM Metadata store stop failed", ex);
 }
+
+// shutdown RatisPipelineUtils pool.
+RatisPipelineUtils.POOL.shutdown();
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233142)
Time Spent: 3h  (was: 2h 50m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1406) Avoid usage of commonPool in RatisPipelineUtils

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1406?focusedWorklogId=233144&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233144
 ]

ASF GitHub Bot logged work on HDDS-1406:


Author: ASF GitHub Bot
Created on: 25/Apr/19 22:44
Start Date: 25/Apr/19 22:44
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #714: 
HDDS-1406. Avoid usage of commonPool in RatisPipelineUtils.
URL: https://github.com/apache/hadoop/pull/714#discussion_r278758443
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineUtils.java
 ##
 @@ -146,19 +164,28 @@ private static void callRatisRpc(List<DatanodeDetails> datanodes,
 SecurityConfig(ozoneConf));
 final TimeDuration requestTimeout =
 RatisHelper.getClientRequestTimeout(ozoneConf);
-datanodes.parallelStream().forEach(d -> {
-  final RaftPeer p = RatisHelper.toRaftPeer(d);
-  try (RaftClient client = RatisHelper
-  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
-  retryPolicy, maxOutstandingRequests, tlsConfig, requestTimeout)) 
{
-rpc.accept(client, p);
-  } catch (IOException ioe) {
-String errMsg =
-"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
-LOG.error(errMsg, ioe);
-exceptions.add(new IOException(errMsg, ioe));
-  }
-});
+try {
+  POOL.submit(() -> {
+datanodes.parallelStream().forEach(d -> {
+  final RaftPeer p = RatisHelper.toRaftPeer(d);
+  try (RaftClient client = RatisHelper
+  .newRaftClient(SupportedRpcType.valueOfIgnoreCase(rpcType), p,
+  retryPolicy, maxOutstandingRequests, tlsConfig,
+  requestTimeout)) {
+rpc.accept(client, p);
+  } catch (IOException ioe) {
+String errMsg =
+"Failed invoke Ratis rpc " + rpc + " for " + d.getUuid();
+LOG.error(errMsg, ioe);
+exceptions.add(new IOException(errMsg, ioe));
+  }
+});
+  }).get();
+} catch (ExecutionException ex) {
 
 Review comment:
   Done
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233144)
Time Spent: 3h 20m  (was: 3h 10m)

> Avoid usage of commonPool in RatisPipelineUtils
> ---
>
> Key: HDDS-1406
> URL: https://issues.apache.org/jira/browse/HDDS-1406
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> We use parallelStream during createPipeline; this internally uses the 
> commonPool. Use our own ForkJoinPool with parallelism set to the number of 
> processors.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14454) RBF: getContentSummary() should allow non-existing folders

2019-04-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826495#comment-16826495
 ] 

Hadoop QA commented on HDFS-14454:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
39s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs-rbf generated 0 new 
+ 11 unchanged - 1 fixed = 11 total (was 12) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 23m  
2s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14454 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967063/HDFS-14454-HDFS-13891.004.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 09c2bdd7bc85 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / d153462 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26705/testReport/ |
| Max. process+thread count | 1040 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26705/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: getContentS

[jira] [Updated] (HDDS-1065) OM and DN should persist SCM certificate as the trust root.

2019-04-25 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1065:
-
Issue Type: Task  (was: Sub-task)
Parent: (was: HDDS-4)

> OM and DN should persist SCM certificate as the trust root.
> ---
>
> Key: HDDS-1065
> URL: https://issues.apache.org/jira/browse/HDDS-1065
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> OM and DN should persist SCM certificate as the trust root.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1065) OM and DN should persist SCM certificate as the trust root.

2019-04-25 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1065:
-
Issue Type: Sub-task  (was: Task)
Parent: HDDS-1463

> OM and DN should persist SCM certificate as the trust root.
> ---
>
> Key: HDDS-1065
> URL: https://issues.apache.org/jira/browse/HDDS-1065
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> OM and DN should persist SCM certificate as the trust root.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1442) add spark container to ozonesecure-mr compose files

2019-04-25 Thread Ajay Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HDDS-1442:
-
Issue Type: New Feature  (was: Sub-task)
Parent: (was: HDDS-1463)

> add spark container to ozonesecure-mr compose files
> ---
>
> Key: HDDS-1442
> URL: https://issues.apache.org/jira/browse/HDDS-1442
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> add spark container to ozonesecure-mr compose files



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1456) Stop the datanode, when any datanode statemachine state is set to shutdown

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1456?focusedWorklogId=233117&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233117
 ]

ASF GitHub Bot logged work on HDDS-1456:


Author: ASF GitHub Bot
Created on: 25/Apr/19 21:32
Start Date: 25/Apr/19 21:32
Worklog Time Spent: 10m 
  Work Description: arp7 commented on issue #769: HDDS-1456. Stop the 
datanode, when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#issuecomment-486845634
 
 
   +1
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233117)
Time Spent: 1h 50m  (was: 1h 40m)

> Stop the datanode, when any datanode statemachine state is set to shutdown
> --
>
> Key: HDDS-1456
> URL: https://issues.apache.org/jira/browse/HDDS-1456
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Recently we have seen an issue in InitDatanodeState: an error occurs while 
> creating the path for a volume. We set the state to shutdown, which causes 
> the DatanodeStateMachine to stop, but the datanode process keeps running. In 
> this case we should stop the datanode; otherwise the user will only notice 
> the problem when running ozone commands or when observing metrics such as 
> the healthy node count.
>  
> cc [~vivekratnavel]
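
As a rough, hypothetical illustration of the requested behaviour (terminate 
the whole process once the state machine reaches SHUTDOWN, rather than only 
letting the state-machine thread exit), using Hadoop's ExitUtil; the class and 
state names are made up for this sketch, and the actual patch may wire this 
up differently (e.g. through HddsDatanodeService):

import org.apache.hadoop.util.ExitUtil;

public class DatanodeShutdownSketch {

  enum DatanodeStates { INIT, RUNNING, SHUTDOWN }

  private volatile DatanodeStates state = DatanodeStates.INIT;

  void run() {
    // Simulate InitDatanodeState failing (e.g. volume path creation error)
    // and moving the state machine straight to SHUTDOWN.
    state = DatanodeStates.SHUTDOWN;

    while (state != DatanodeStates.SHUTDOWN) {
      // normally: execute the next state transition here
    }

    // The point of the JIRA: once the state machine gives up, stop the whole
    // datanode process so the failure is visible immediately, instead of
    // leaving a half-alive process that only shows up later in metrics.
    ExitUtil.terminate(1, "DatanodeStateMachine reached SHUTDOWN state");
  }

  public static void main(String[] args) {
    new DatanodeShutdownSketch().run();
  }
}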



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1456) Stop the datanode, when any datanode statemachine state is set to shutdown

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1456?focusedWorklogId=233113&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233113
 ]

ASF GitHub Bot logged work on HDDS-1456:


Author: ASF GitHub Bot
Created on: 25/Apr/19 21:22
Start Date: 25/Apr/19 21:22
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #769: 
HDDS-1456. Stop the datanode, when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#discussion_r278737257
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/test/java/org/apache/hadoop/ozone/container/common/volume/TestVolumeSetDiskChecks.java
 ##
 @@ -141,6 +143,9 @@ HddsVolumeChecker getVolumeChecker(Configuration 
configuration)
 return new DummyChecker(configuration, new Timer(), numVolumes);
   }
 };
+
+assertEquals(volumeSet.getFailedVolumesList().size(), numVolumes);
 
 Review comment:
   This was updated due to code reordering: we now call checkAllVolumes() only 
when the volumeMap size is not zero.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233113)
Time Spent: 1h 40m  (was: 1.5h)

> Stop the datanode, when any datanode statemachine state is set to shutdown
> --
>
> Key: HDDS-1456
> URL: https://issues.apache.org/jira/browse/HDDS-1456
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Recently we have seen an issue in InitDatanodeState: an error occurs while 
> creating the path for a volume. We set the state to shutdown, which causes 
> the DatanodeStateMachine to stop, but the datanode process keeps running. In 
> this case we should stop the datanode; otherwise the user will only notice 
> the problem when running ozone commands or when observing metrics such as 
> the healthy node count.
>  
> cc [~vivekratnavel]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1456) Stop the datanode, when any datanode statemachine state is set to shutdown

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1456?focusedWorklogId=233112&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233112
 ]

ASF GitHub Bot logged work on HDDS-1456:


Author: ASF GitHub Bot
Created on: 25/Apr/19 21:21
Start Date: 25/Apr/19 21:21
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #769: HDDS-1456. Stop 
the datanode, when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#issuecomment-486842522
 
 
   Thank you @arp7 for the review and the offline discussion.
   I have addressed the review comments.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233112)
Time Spent: 1.5h  (was: 1h 20m)

> Stop the datanode, when any datanode statemachine state is set to shutdown
> --
>
> Key: HDDS-1456
> URL: https://issues.apache.org/jira/browse/HDDS-1456
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Recently we have seen an issue in InitDatanodeState: an error occurs while 
> creating the path for a volume. We set the state to shutdown, which causes 
> the DatanodeStateMachine to stop, but the datanode process keeps running. In 
> this case we should stop the datanode; otherwise the user will only notice 
> the problem when running ozone commands or when observing metrics such as 
> the healthy node count.
>  
> cc [~vivekratnavel]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1456) Stop the datanode, when any datanode statemachine state is set to shutdown

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1456?focusedWorklogId=233111&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233111
 ]

ASF GitHub Bot logged work on HDDS-1456:


Author: ASF GitHub Bot
Created on: 25/Apr/19 21:20
Start Date: 25/Apr/19 21:20
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #769: 
HDDS-1456. Stop the datanode, when any datanode statemachine state is…
URL: https://github.com/apache/hadoop/pull/769#discussion_r278736573
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/DatanodeStateMachine.java
 ##
 @@ -93,7 +95,9 @@
* enabled
*/
   public DatanodeStateMachine(DatanodeDetails datanodeDetails,
-  Configuration conf, CertificateClient certClient) throws IOException {
+  Configuration conf, CertificateClient certClient,
 
 Review comment:
   Done..
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233111)
Time Spent: 1h 20m  (was: 1h 10m)

> Stop the datanode, when any datanode statemachine state is set to shutdown
> --
>
> Key: HDDS-1456
> URL: https://issues.apache.org/jira/browse/HDDS-1456
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Recently we have seen an issue in InitDatanodeState: an error occurs while 
> creating the path for a volume. We set the state to shutdown, which causes 
> the DatanodeStateMachine to stop, but the datanode process keeps running. In 
> this case we should stop the datanode; otherwise the user will only notice 
> the problem when running ozone commands or when observing metrics such as 
> the healthy node count.
>  
> cc [~vivekratnavel]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1403) KeyOutputStream writes fails after max retries while writing to a closed container

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1403?focusedWorklogId=233110&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233110
 ]

ASF GitHub Bot logged work on HDDS-1403:


Author: ASF GitHub Bot
Created on: 25/Apr/19 21:14
Start Date: 25/Apr/19 21:14
Worklog Time Spent: 10m 
  Work Description: arp7 commented on pull request #753: HDDS-1403. 
KeyOutputStream writes fails after max retries while writing to a closed 
container
URL: https://github.com/apache/hadoop/pull/753#discussion_r278734053
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -436,11 +436,12 @@
 
   
   
-ozone.client.retry.interval.ms
-500
+ozone.client.retry.interval
+0ms
 OZONE, CLIENT
-Indicates the time duration in milliseconds a client will wait
-  before retrying a write key request on encountering an exception.
+Indicates the time duration a client will wait before
+  retrying a write key request on encountering an exception. Be default
 
 Review comment:
   Nitpick: By default
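
For context, a small sketch of how the renamed property with a unit-suffixed 
value would be set and read through Hadoop's standard 
Configuration.getTimeDuration, which is what allows values such as "0ms" in 
the diff; the 500ms value below is purely illustrative, not a recommendation:

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class RetryIntervalSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // The value carries its own unit; "0ms" mirrors the new default in the diff.
    conf.set("ozone.client.retry.interval", "500ms");

    // Read it back normalized to milliseconds.
    long intervalMs =
        conf.getTimeDuration("ozone.client.retry.interval", 0, TimeUnit.MILLISECONDS);
    System.out.println("Client retry interval: " + intervalMs + " ms");
  }
}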
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233110)
Time Spent: 1h 20m  (was: 1h 10m)

> KeyOutputStream writes fails after max retries while writing to a closed 
> container
> --
>
> Key: HDDS-1403
> URL: https://issues.apache.org/jira/browse/HDDS-1403
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently an Ozone client retries a write operation 5 times. It is possible 
> that the container being written to is already closed by the time the write 
> reaches it. The key write will then fail after exhausting the retries with 
> this error. This needs to be fixed, as it is an internal error.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14454) RBF: getContentSummary() should allow non-existing folders

2019-04-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14454:
---
Attachment: HDFS-14454-HDFS-13891.004.patch

> RBF: getContentSummary() should allow non-existing folders
> --
>
> Key: HDFS-14454
> URL: https://issues.apache.org/jira/browse/HDFS-14454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14454-HDFS-13891.000.patch, 
> HDFS-14454-HDFS-13891.001.patch, HDFS-14454-HDFS-13891.002.patch, 
> HDFS-14454-HDFS-13891.003.patch, HDFS-14454-HDFS-13891.004.patch
>
>
> We have a mount point with HASH_ALL and one of the subclusters does not 
> contain the folder.
> In this case, getContentSummary() throws a FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14454) RBF: getContentSummary() should allow non-existing folders

2019-04-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14454:
---
Attachment: (was: HDFS-14454-HDFS-13891.004.patch)

> RBF: getContentSummary() should allow non-existing folders
> --
>
> Key: HDFS-14454
> URL: https://issues.apache.org/jira/browse/HDFS-14454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14454-HDFS-13891.000.patch, 
> HDFS-14454-HDFS-13891.001.patch, HDFS-14454-HDFS-13891.002.patch, 
> HDFS-14454-HDFS-13891.003.patch
>
>
> We have a mount point with HASH_ALL and one of the subclusters does not 
> contain the folder.
> In this case, getContentSummary() throws a FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14454) RBF: getContentSummary() should allow non-existing folders

2019-04-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826430#comment-16826430
 ] 

Íñigo Goiri commented on HDFS-14454:


In  [^HDFS-14454-HDFS-13891.004.patch] I refactored the code a little to avoid 
using the same code over and over in the tests.

> RBF: getContentSummary() should allow non-existing folders
> --
>
> Key: HDFS-14454
> URL: https://issues.apache.org/jira/browse/HDFS-14454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14454-HDFS-13891.000.patch, 
> HDFS-14454-HDFS-13891.001.patch, HDFS-14454-HDFS-13891.002.patch, 
> HDFS-14454-HDFS-13891.003.patch, HDFS-14454-HDFS-13891.004.patch
>
>
> We have a mount point with HASH_ALL and one of the subclusters does not 
> contain the folder.
> In this case, getContentSummary() throws a FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14454) RBF: getContentSummary() should allow non-existing folders

2019-04-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14454:
---
Attachment: HDFS-14454-HDFS-13891.004.patch

> RBF: getContentSummary() should allow non-existing folders
> --
>
> Key: HDFS-14454
> URL: https://issues.apache.org/jira/browse/HDFS-14454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14454-HDFS-13891.000.patch, 
> HDFS-14454-HDFS-13891.001.patch, HDFS-14454-HDFS-13891.002.patch, 
> HDFS-14454-HDFS-13891.003.patch, HDFS-14454-HDFS-13891.004.patch
>
>
> We have a mount point with HASH_ALL and one of the subclusters does not 
> contain the folder.
> In this case, getContentSummary() throws a FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1470) Implement a CLI tool to dump the contents of rocksdb metadata

2019-04-25 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created HDDS-1470:
--

 Summary: Implement a CLI tool to dump the contents of rocksdb 
metadata
 Key: HDDS-1470
 URL: https://issues.apache.org/jira/browse/HDDS-1470
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hrishikesh Gadre
Assignee: Hrishikesh Gadre


The DataNode plugin for Ozone stores protobuf messages as the values in the 
rocksdb metadata store. Since the protobuf message contents are not human 
readable, the store is difficult to introspect (e.g. for debugging). This Jira 
is to add a command-line tool that dumps the contents of the rocksdb database 
in a human-readable format (e.g. JSON or YAML).
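
A minimal sketch of the kind of tool being proposed, assuming the plain 
RocksDB Java API (read-only open plus an iterator); this only illustrates the 
idea, not the proposed implementation, and a real tool would decode the 
protobuf values into JSON or YAML instead of printing their sizes:

import java.nio.charset.StandardCharsets;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;

public class RocksDbDumpSketch {
  public static void main(String[] args) throws RocksDBException {
    RocksDB.loadLibrary();
    // args[0]: path to a container/OM metadata RocksDB directory
    try (RocksDB db = RocksDB.openReadOnly(args[0]);
         RocksIterator it = db.newIterator()) {
      for (it.seekToFirst(); it.isValid(); it.next()) {
        // Values are protobuf messages; a real tool would parse them and
        // emit JSON or YAML instead of just reporting their size.
        System.out.printf("%s -> %d bytes%n",
            new String(it.key(), StandardCharsets.UTF_8), it.value().length);
      }
    }
  }
}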



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12510) RBF: Add security to UI

2019-04-25 Thread CR Hota (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826404#comment-16826404
 ] 

CR Hota commented on HDFS-12510:


[~elgoiri]  [~brahmareddy] Thanks for the comments. 

My 2 cents: one point I forgot to mention in my previous comment. Based on 
what I saw earlier in the code, the WebHDFSHandler-related changes were 
targeting a use case that allows file uploads etc. via the web UI, which 
routers do not support currently. I feel we should revisit this flow when we 
develop a feature that allows secure uploads via the router to the respective 
underlying clusters. That is when the CORS checks will mainly come into the 
picture, as the browser will have redirects to deal with.

> RBF: Add security to UI
> ---
>
> Key: HDFS-12510
> URL: https://issues.apache.org/jira/browse/HDFS-12510
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-12510-HDFS-13891.001.patch
>
>
> HDFS-12273 implemented the UI for Router Based Federation without security.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233073&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233073
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:39
Start Date: 25/Apr/19 19:39
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on issue #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#issuecomment-486810661
 
 
   > /cc @anuengineer This is the annotation processor. With a small 
modification in OzoneConfiguration (to load all the generated fragments) we 
don't need to merge all the generated config files into one big 
ozone-default.xml.
   
   I am OK with that, but some of the old-school people might like a single 
file. And in the deployment phase, don't we need a single file? Or should we 
move away from it, since the code already has the defaults?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233073)
Time Spent: 2h 20m  (was: 2h 10m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations introduced by HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by 
> OzoneConfiguration, as I added a small piece of code to the constructor that 
> checks ALL the available ozone-default-generated.xml files and adds them to 
> the available resources.
> With this approach we don't need to edit ozone-default.xml, as all the 
> configuration can be defined in Java code.
> As a side effect, each service will see only the configuration keys and 
> values available on its classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, the SCM doesn't see 
> those configs.)
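
As a rough sketch of what a configuration class using these annotations looks 
like (the prefix, key, and setter below are made-up examples, not part of the 
patch):

import java.util.concurrent.TimeUnit;
import org.apache.hadoop.hdds.conf.Config;
import org.apache.hadoop.hdds.conf.ConfigGroup;
import org.apache.hadoop.hdds.conf.ConfigType;

@ConfigGroup(prefix = "ozone.example.service")
public class ExampleServiceConfig {

  private long timeoutSeconds;

  // Injected from ozone.example.service.timeout; the default carries a unit.
  @Config(key = "timeout", type = ConfigType.TIME, timeUnit = TimeUnit.SECONDS,
      defaultValue = "30s", description = "Illustrative timeout setting.")
  public void setTimeoutSeconds(long timeoutSeconds) {
    this.timeoutSeconds = timeoutSeconds;
  }

  public long getTimeoutSeconds() {
    return timeoutSeconds;
  }
}

For a class like this, the processor would emit an ozone-default-generated.xml 
fragment containing the ozone.example.service.timeout key together with its 
default value and description.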



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233068&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233068
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278697747
 
 

 ##
 File path: 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java
 ##
 @@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import javax.annotation.processing.AbstractProcessor;
+import javax.annotation.processing.Filer;
+import javax.annotation.processing.RoundEnvironment;
+import javax.annotation.processing.SupportedAnnotationTypes;
+import javax.lang.model.element.Element;
+import javax.lang.model.element.ElementKind;
+import javax.lang.model.element.TypeElement;
+import javax.tools.Diagnostic.Kind;
+import javax.tools.FileObject;
+import javax.tools.StandardLocation;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.nio.charset.StandardCharsets;
+import java.util.Set;
+
+/**
+ * Annotation processor to generate ozone-site-generated fragments from
+ * ozone-site.xml.
 
 Review comment:
   wrong comment? from config classes?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233068)
Time Spent: 2h 10m  (was: 2h)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations introduced by HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by 
> OzoneConfiguration, as I added a small piece of code to the constructor that 
> checks ALL the available ozone-default-generated.xml files and adds them to 
> the available resources.
> With this approach we don't need to edit ozone-default.xml, as all the 
> configuration can be defined in Java code.
> As a side effect, each service will see only the configuration keys and 
> values available on its classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, the SCM doesn't see 
> those configs.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233065&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233065
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278696810
 
 

 ##
 File path: hadoop-hdds/config/pom.xml
 ##
 @@ -0,0 +1,66 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.hadoop</groupId>
+    <artifactId>hadoop-hdds</artifactId>
+    <version>0.5.0-SNAPSHOT</version>
 
 Review comment:
   Is it possible to inherit this value from the parent POM? More a question 
for my understanding.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233065)
Time Spent: 1h 40m  (was: 1.5h)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations introduced by HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by 
> OzoneConfiguration, as I added a small piece of code to the constructor that 
> checks ALL the available ozone-default-generated.xml files and adds them to 
> the available resources.
> With this approach we don't need to edit ozone-default.xml, as all the 
> configuration can be defined in Java code.
> As a side effect, each service will see only the configuration keys and 
> values available on its classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, the SCM doesn't see 
> those configs.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233066&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233066
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278692062
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
 ##
 @@ -161,14 +285,14 @@ public static void activate() {
 Configuration.addDefaultResource("hdfs-default.xml");
 Configuration.addDefaultResource("hdfs-site.xml");
 Configuration.addDefaultResource("ozone-default.xml");
-Configuration.addDefaultResource("ozone-site.xml");
 
 Review comment:
   Shouldn't we still allow this over-ride?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233066)
Time Spent: 1h 50m  (was: 1h 40m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations introduced by HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by 
> OzoneConfiguration, as I added a small piece of code to the constructor that 
> checks ALL the available ozone-default-generated.xml files and adds them to 
> the available resources.
> With this approach we don't need to edit ozone-default.xml, as all the 
> configuration can be defined in Java code.
> As a side effect, each service will see only the configuration keys and 
> values available on its classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, the SCM doesn't see 
> those configs.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233060&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233060
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278698781
 
 

 ##
 File path: 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java
 ##
 @@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import javax.annotation.processing.AbstractProcessor;
+import javax.annotation.processing.Filer;
+import javax.annotation.processing.RoundEnvironment;
+import javax.annotation.processing.SupportedAnnotationTypes;
+import javax.lang.model.element.Element;
+import javax.lang.model.element.ElementKind;
+import javax.lang.model.element.TypeElement;
+import javax.tools.Diagnostic.Kind;
+import javax.tools.FileObject;
+import javax.tools.StandardLocation;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.nio.charset.StandardCharsets;
+import java.util.Set;
+
+/**
+ * Annotation processor to generate ozone-site-generated fragments from
+ * ozone-site.xml.
+ */
+@SupportedAnnotationTypes("org.apache.hadoop.hdds.conf.ConfigGroup")
+public class ConfigFileGenerator extends AbstractProcessor {
+
+  public static final String OUTPUT_FILE_NAME = "ozone-default-generated.xml";
+
+  @Override
+  public boolean process(Set<? extends TypeElement> annotations,
+  RoundEnvironment roundEnv) {
+if (roundEnv.processingOver()) {
+  return false;
+}
+
+Filer filer = processingEnv.getFiler();
+System.out.println("round");
+
+try {
+
+  //load existing generated config (if exists)
+  ConfigFileAppender appender = new ConfigFileAppender();
+  try (InputStream input = filer
+  .getResource(StandardLocation.CLASS_OUTPUT, "",
+  OUTPUT_FILE_NAME).openInputStream()) {
+appender.load(input);
+  } catch (FileNotFoundException ex) {
+appender.init();
+  }
+
+  Set<? extends Element> annotatedElements =
+  roundEnv.getElementsAnnotatedWith(ConfigGroup.class);
+  for (Element annotatedElement : annotatedElements) {
+TypeElement configGroup = (TypeElement) annotatedElement;
+
+//check if any of the setters are annotated with @Config
+for (Element element : configGroup.getEnclosedElements()) {
+  if (element.getKind() == ElementKind.METHOD) {
+processingEnv.getMessager()
+.printMessage(Kind.WARNING, 
element.getSimpleName().toString());
+if (element.getSimpleName().toString().startsWith("set")
 
 Review comment:
   In the future, we might want to emit a warning if you have "Set", for 
example, assuming that is a mistake the user is making, and let them know we 
are ignoring it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233060)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> See the design doc in th

[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233067&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233067
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278693655
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
 ##
 @@ -45,12 +48,31 @@
 
   public OzoneConfiguration() {
 OzoneConfiguration.activate();
+loadDefaults();
   }
 
   public OzoneConfiguration(Configuration conf) {
 super(conf);
 //load the configuration from the classloader of the original conf.
 setClassLoader(conf.getClassLoader());
+loadDefaults();
+  }
+
+  private void loadDefaults() {
 
 Review comment:
   Can I be a greedy pig and request that we also sort these keys (in another 
JIRA of course) before we write out the XML? 
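
For context, a rough sketch of the constructor behaviour under discussion, 
based only on the JIRA description (scan the classpath for every 
ozone-default-generated.xml fragment and add each one as a Configuration 
resource); this is an illustration, not the actual patch:

import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;
import org.apache.hadoop.conf.Configuration;

public final class GeneratedDefaultsLoaderSketch {

  private GeneratedDefaultsLoaderSketch() {
  }

  /** Add every ozone-default-generated.xml found on the classpath to conf. */
  public static void loadGeneratedDefaults(Configuration conf) {
    try {
      Enumeration<URL> fragments =
          conf.getClassLoader().getResources("ozone-default-generated.xml");
      while (fragments.hasMoreElements()) {
        conf.addResource(fragments.nextElement());
      }
    } catch (IOException e) {
      throw new IllegalStateException(
          "Cannot scan classpath for ozone-default-generated.xml", e);
    }
  }
}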
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233067)
Time Spent: 2h  (was: 1h 50m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations introduced by HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by 
> OzoneConfiguration, as I added a small piece of code to the constructor that 
> checks ALL the available ozone-default-generated.xml files and adds them to 
> the available resources.
> With this approach we don't need to edit ozone-default.xml, as all the 
> configuration can be defined in Java code.
> As a side effect, each service will see only the configuration keys and 
> values available on its classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, the SCM doesn't see 
> those configs.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233061&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233061
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278694440
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/SimpleConfiguration.java
 ##
 @@ -0,0 +1,83 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Example configuration to test the configuration injection.
+ */
+@ConfigGroup(prefix = "ozone.scm.client")
+public class SimpleConfiguration {
+
+  private String clientAddress;
+
+  private String bindHost;
+
+  private boolean enabled;
+
+  private int port = 1234;
+
+  private long waitTime = 1;
+
+  @Config(key = "address", defaultValue = "localhost")
+  public void setClientAddress(String clientAddress) {
+this.clientAddress = clientAddress;
+  }
+
+  @Config(key = "bind.host", defaultValue = "0.0.0.0")
+  public void setBindHost(String bindHost) {
+this.bindHost = bindHost;
+  }
+
+  @Config(key = "enabled", defaultValue = "true")
+  public void setEnabled(boolean enabled) {
+this.enabled = enabled;
+  }
+
+  @Config(key = "port", defaultValue = "9878")
+  public void setPort(int port) {
+this.port = port;
+  }
+
+  @Config(key = "wait", type = ConfigType.TIME, timeUnit =
+  TimeUnit.SECONDS, defaultValue = "10m")
+  public void setWaitTime(long waitTime) {
 
 Review comment:
   Nice 👏 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233061)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 
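
A minimal sketch of how such classpath fragments could be picked up (illustrative only; the class and method names below are assumptions, not the actual HDDS-1469 change):

{code:java}
// Illustrative sketch: load every ozone-default-generated.xml found on the
// classpath and register it as a default resource. Names are assumptions.
import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;

import org.apache.hadoop.conf.Configuration;

public final class GeneratedFragmentLoader {

  private GeneratedFragmentLoader() {
  }

  /** Add every ozone-default-generated.xml on the classpath as a default. */
  public static void addGeneratedFragments(Configuration conf)
      throws IOException {
    ClassLoader loader = Thread.currentThread().getContextClassLoader();
    Enumeration<URL> fragments =
        loader.getResources("ozone-default-generated.xml");
    while (fragments.hasMoreElements()) {
      // Each module can ship its own fragment; all of them become defaults.
      conf.addResource(fragments.nextElement());
    }
  }
}
{code}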



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233063&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233063
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278695268
 
 

 ##
 File path: 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/Config.java
 ##
 @@ -0,0 +1,59 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Mark field to be configurable from ozone-site.xml.
+ */
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.METHOD)
+public @interface Config {
+
+  /**
+   * Configuration fragment relative to the prefix defined with @ConfigGroup.
+   */
+  String key();
+
+  /**
+   * Default value to use if not set.
+   */
+  String defaultValue();
+
+  /**
+   * Custom description as a help.
+   */
+  String description() default "";
+
+  /**
+   * Type of configuration. Use AUTO to decide it based on the java type.
+   */
+  ConfigType type() default ConfigType.AUTO;
+
+  /**
+   * If type == TIME the unit should be defined with this attribute.
+   */
+  TimeUnit timeUnit() default TimeUnit.MILLISECONDS;
+
+  ConfigTag[] tags() default {ConfigTag.OZONE};
 
 Review comment:
   We should enforce this and the description, so the code errors out at 
compile time when either is missing.
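
   A minimal sketch of how the processor could enforce that at compile time 
(illustrative; the helper below is an assumption, not the actual code):

{code:java}
// Illustrative sketch of compile-time enforcement in the annotation
// processor; the helper class and message text are assumptions.
import javax.annotation.processing.ProcessingEnvironment;
import javax.lang.model.element.Element;
import javax.tools.Diagnostic;

import org.apache.hadoop.hdds.conf.Config;

final class ConfigValidation {

  private ConfigValidation() {
  }

  static void requireDescriptionAndTags(ProcessingEnvironment env,
      Element method, Config config) {
    if (config.description().isEmpty() || config.tags().length == 0) {
      // Reporting Kind.ERROR makes javac fail the build for this element.
      env.getMessager().printMessage(Diagnostic.Kind.ERROR,
          "@Config must define a non-empty description and at least one tag",
          method);
    }
  }
}
{code}

   Reporting Diagnostic.Kind.ERROR through the Messager fails the build for 
the offending element, which is the behavior asked for above.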
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233063)
Time Spent: 1h 20m  (was: 1h 10m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233058&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233058
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278696010
 
 

 ##
 File path: 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileAppender.java
 ##
 @@ -0,0 +1,127 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import javax.xml.parsers.DocumentBuilder;
+import javax.xml.parsers.DocumentBuilderFactory;
+import javax.xml.transform.OutputKeys;
+import javax.xml.transform.Transformer;
+import javax.xml.transform.TransformerException;
+import javax.xml.transform.TransformerFactory;
+import javax.xml.transform.dom.DOMSource;
+import javax.xml.transform.stream.StreamResult;
+import java.io.InputStream;
+import java.io.Writer;
+import java.util.Arrays;
+import java.util.stream.Collectors;
+
+import org.w3c.dom.Document;
+import org.w3c.dom.Element;
+
+/**
+ * Simple DOM based config file writer.
+ * 
+ * This class can init/load existing ozone-site.xml fragments and append
+ * new entries and write to the file system.
 
 Review comment:
   Should this class build ozone-default.xml or ozone-site.xml? I was thinking 
the compilation process builds ozone-default.xml, and the user can define 
ozone-site.xml with a tool like GenConfig. I know we have checked in some 
empty ozone-site.xml files; just a question.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233058)
Time Spent: 1h  (was: 50m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233064&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233064
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278699642
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
 ##
 @@ -754,4 +747,64 @@ private InflightAction(final DatanodeDetails datanode,
   this.time = time;
 }
   }
+
+  /**
+   * Configuration used by the Replication Manager.
+   */
+  @ConfigGroup(prefix = "hdds.scm.replication")
+  public static class ReplicationManagerConfiguration {
+    /**
+     * The frequency in which ReplicationMonitor thread should run.
+     */
+    private long interval = 5 * 60 * 1000;
+
+    /**
+     * Timeout for container replication & deletion command issued by
+     * ReplicationManager.
+     */
+    private long eventTimeout = 10 * 60 * 1000;
+
+    @Config(key = "thread.interval",
+        type = ConfigType.TIME,
+        defaultValue = "3s",
+        tags = {SCM, OZONE},
+        description = "When a heartbeat from the data node arrives on SCM, "
+            + "It is queued for processing with the time stamp of when the "
 
 Review comment:
   @nandakumar131  Do we still use this key? Or did the code change and we 
forgot to remove this config value?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233064)
Time Spent: 1.5h  (was: 1h 20m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233062&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233062
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278697984
 
 

 ##
 File path: 
hadoop-hdds/config/src/main/java/org/apache/hadoop/hdds/conf/ConfigFileGenerator.java
 ##
 @@ -0,0 +1,115 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import javax.annotation.processing.AbstractProcessor;
+import javax.annotation.processing.Filer;
+import javax.annotation.processing.RoundEnvironment;
+import javax.annotation.processing.SupportedAnnotationTypes;
+import javax.lang.model.element.Element;
+import javax.lang.model.element.ElementKind;
+import javax.lang.model.element.TypeElement;
+import javax.tools.Diagnostic.Kind;
+import javax.tools.FileObject;
+import javax.tools.StandardLocation;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStreamWriter;
+import java.io.Writer;
+import java.nio.charset.StandardCharsets;
+import java.util.Set;
+
+/**
+ * Annotation processor to generate ozone-site-generated fragments from
+ * ozone-site.xml.
+ */
+@SupportedAnnotationTypes("org.apache.hadoop.hdds.conf.ConfigGroup")
+public class ConfigFileGenerator extends AbstractProcessor {
+
+  public static final String OUTPUT_FILE_NAME = "ozone-default-generated.xml";
+
+  @Override
+  public boolean process(Set<? extends TypeElement> annotations,
+      RoundEnvironment roundEnv) {
+    if (roundEnv.processingOver()) {
+      return false;
+    }
+
+    Filer filer = processingEnv.getFiler();
+    System.out.println("round");
 
 Review comment:
   Debug?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233062)
Time Spent: 1h 10m  (was: 1h)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233059&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233059
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:34
Start Date: 25/Apr/19 19:34
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278694151
 
 

 ##
 File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/SimpleConfiguration.java
 ##
 @@ -0,0 +1,83 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.conf;
+
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Example configuration to test the configuration injection.
+ */
+@ConfigGroup(prefix = "ozone.scm.client")
+public class SimpleConfiguration {
+
+  private String clientAddress;
+
+  private String bindHost;
+
+  private boolean enabled;
+
+  private int port = 1234;
+
+  private long waitTime = 1;
+
+  @Config(key = "address", defaultValue = "localhost")
+  public void setClientAddress(String clientAddress) {
+    this.clientAddress = clientAddress;
+  }
+
+  @Config(key = "bind.host", defaultValue = "0.0.0.0")
+  public void setBindHost(String bindHost) {
+    this.bindHost = bindHost;
+  }
+
+  @Config(key = "enabled", defaultValue = "true")
+  public void setEnabled(boolean enabled) {
+    this.enabled = enabled;
 
 Review comment:
   nit: what does client.enabled mean?
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233059)
Time Spent: 1h  (was: 50m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233043&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233043
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:09
Start Date: 25/Apr/19 19:09
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278691904
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
 ##
 @@ -161,14 +285,14 @@ public static void activate() {
 Configuration.addDefaultResource("hdfs-default.xml");
 Configuration.addDefaultResource("hdfs-site.xml");
 Configuration.addDefaultResource("ozone-default.xml");
-Configuration.addDefaultResource("ozone-site.xml");
 
 Review comment:
   Shouldn't we still allow this override? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233043)
Time Spent: 40m  (was: 0.5h)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=233044&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-233044
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 19:09
Start Date: 25/Apr/19 19:09
Worklog Time Spent: 10m 
  Work Description: anuengineer commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#discussion_r278691904
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/conf/OzoneConfiguration.java
 ##
 @@ -161,14 +285,14 @@ public static void activate() {
 Configuration.addDefaultResource("hdfs-default.xml");
 Configuration.addDefaultResource("hdfs-site.xml");
 Configuration.addDefaultResource("ozone-default.xml");
-Configuration.addDefaultResource("ozone-site.xml");
 
 Review comment:
   Shouldn't we still allow this override? 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 233044)
Time Spent: 50m  (was: 40m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14454) RBF: getContentSummary() should allow non-existing folders

2019-04-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826316#comment-16826316
 ] 

Hadoop QA commented on HDFS-14454:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} HDFS-13891 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
14s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 56s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} HDFS-13891 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} HDFS-13891 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs-rbf generated 0 new 
+ 11 unchanged - 1 fixed = 11 total (was 12) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
53s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 32s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14454 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967038/HDFS-14454-HDFS-13891.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b4f323af5444 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-13891 / d153462 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_191 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26704/testReport/ |
| Max. process+thread count | 1362 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/26704/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: getContentSu

[jira] [Commented] (HDDS-999) Make the DNS resolution in OzoneManager more resilient

2019-04-25 Thread Siddharth Wagle (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826283#comment-16826283
 ] 

Siddharth Wagle commented on HDDS-999:
--

Hi [~elek], with the patch we wait for 50 seconds now before failing. Are you 
ok with the current patch? 

> Make the DNS resolution in OzoneManager more resilient
> --
>
> Key: HDDS-999
> URL: https://issues.apache.org/jira/browse/HDDS-999
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Elek, Marton
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-999.01.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If the OzoneManager is started before scm the scm dns may not be available. 
> In this case the om should retry and re-resolve the dns, but as of now it 
> throws an exception:
> {code:java}
> 2019-01-23 17:14:25 ERROR OzoneManager:593 - Failed to start the OzoneManager.
> java.net.SocketException: Call From om-0.om to null:0 failed on socket 
> exception: java.net.SocketException: Unresolved address; For more details 
> see:  http://wiki.apache.org/hadoop/SocketException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:798)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:566)
>     at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1042)
>     at org.apache.hadoop.ipc.Server.<init>(Server.java:2815)
>     at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:994)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:421)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>     at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:804)
>     at 
> org.apache.hadoop.ozone.om.OzoneManager.startRpcServer(OzoneManager.java:563)
>     at 
> org.apache.hadoop.ozone.om.OzoneManager.getRpcServer(OzoneManager.java:927)
>     at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:265)
>     at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:674)
>     at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:587)
> Caused by: java.net.SocketException: Unresolved address
>     at sun.nio.ch.Net.translateToSocketException(Net.java:131)
>     at sun.nio.ch.Net.translateException(Net.java:157)
>     at sun.nio.ch.Net.translateException(Net.java:163)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:76)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:549)
>     ... 11 more
> Caused by: java.nio.channels.UnresolvedAddressException
>     at sun.nio.ch.Net.checkAddress(Net.java:101)
>     at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:218)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>     ... 12 more{code}
> It should be fixed. (See also HDDS-421 which fixed the same problem in 
> datanode side and HDDS-907 which is the workaround while this issue is not 
> resolved).
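
A minimal sketch of the retry-and-re-resolve loop being discussed (the attempt 
count, sleep, and class name are illustrative assumptions, not the committed 
patch):

{code:java}
// Illustrative sketch only: re-resolve the SCM address until DNS answers or
// the retry budget runs out. Attempt count and sleep are assumptions.
import java.net.InetSocketAddress;

import org.apache.hadoop.net.NetUtils;

public final class ScmAddressResolver {

  private ScmAddressResolver() {
  }

  public static InetSocketAddress resolveWithRetry(String hostPort,
      int maxAttempts, long sleepMillis) throws InterruptedException {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      InetSocketAddress addr = NetUtils.createSocketAddr(hostPort);
      if (!addr.isUnresolved()) {
        return addr;
      }
      // DNS entry for the SCM host is not available yet; wait and retry.
      Thread.sleep(sleepMillis);
    }
    throw new IllegalStateException(
        "Unable to resolve " + hostPort + " after " + maxAttempts + " attempts");
  }
}
{code}

With 10 attempts and a 5-second sleep, a loop like this gives roughly the 
50-second wait mentioned above.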



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-999) Make the DNS resolution in OzoneManager more resilient

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-999?focusedWorklogId=232986&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232986
 ]

ASF GitHub Bot logged work on HDDS-999:
---

Author: ASF GitHub Bot
Created on: 25/Apr/19 17:14
Start Date: 25/Apr/19 17:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #758: HDDS-999. Make 
the DNS resolution in OzoneManager more resilient. (swagle)
URL: https://github.com/apache/hadoop/pull/758#issuecomment-486760267
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 52 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1066 | trunk passed |
   | +1 | compile | 115 | trunk passed |
   | +1 | checkstyle | 48 | trunk passed |
   | +1 | mvnsite | 67 | trunk passed |
   | +1 | shadedclient | 862 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 58 | trunk passed |
   | +1 | javadoc | 54 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | -1 | mvninstall | 23 | dist in the patch failed. |
   | +1 | compile | 114 | the patch passed |
   | +1 | javac | 114 | the patch passed |
   | +1 | checkstyle | 27 | the patch passed |
   | +1 | mvnsite | 51 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 873 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 52 | the patch passed |
   | +1 | javadoc | 44 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 71 | ozone-manager in the patch passed. |
   | +1 | unit | 24 | dist in the patch passed. |
   | +1 | asflicense | 31 | The patch does not generate ASF License warnings. |
   | | | 3777 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-758/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/758 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  yamllint  findbugs  checkstyle  |
   | uname | Linux 87b06ff22d90 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / b5dcf64 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-758/3/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-758/3/testReport/ |
   | Max. process+thread count | 389 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager hadoop-ozone/dist U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-758/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232986)
Time Spent: 1h 40m  (was: 1.5h)

> Make the DNS resolution in OzoneManager more resilient
> --
>
> Key: HDDS-999
> URL: https://issues.apache.org/jira/browse/HDDS-999
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Elek, Marton
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-999.01.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h

[jira] [Commented] (HDFS-14434) webhdfs that connect secure hdfs should not use user.name parameter

2019-04-25 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826233#comment-16826233
 ] 

Eric Yang commented on HDFS-14434:
--

[~magnum] Thank you for the patch.  It looks like the non-SSL tests were removed 
and initSecureConf changed the tests to run with SSL only.  I think it would be 
safer to keep the non-SSL test cases as well, to confirm that no regression 
happens in future patches.

Some test code is commented out instead of removed.  I think it is safe to 
remove those lines.

{code}
-new UserParam(ugi.getRealUser().getShortUserName()).toString(),
+//new UserParam(ugi.getRealUser().getShortUserName()).toString(),
{code}

> webhdfs that connect secure hdfs should not use user.name parameter
> ---
>
> Key: HDFS-14434
> URL: https://issues.apache.org/jira/browse/HDFS-14434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Minor
> Attachments: HDFS-14434.001.patch, HDFS-14434.002.patch, 
> HDFS-14434.003.patch, HDFS-14434.004.patch, HDFS-14434.005.patch, 
> HDFS-14434.006.patch
>
>
> I have two secure hadoop cluster.  Both cluster use cross-realm 
> authentication. 
> [use...@a.com|mailto:use...@a.com] can access to HDFS of B.COM realm
> by the way, hadoop username of use...@a.com  in B.COM realm is  
> cross_realm_a_com_user_a.
> hdfs dfs command of use...@a.com using B.COM webhdfs failed.
> root cause is  webhdfs that connect secure hdfs use user.name parameter.
> according to webhdfs spec,  insecure webhdfs use user.name,  secure webhdfs 
> use SPNEGO for authentication.
> I think webhdfs that connect secure hdfs  should not use user.name parameter.
> I will attach patch.
> below is error log
>  
> {noformat}
> $ hdfs dfs -ls  webhdfs://b.com:50070/
> ls: Usernames not matched: name=user_a != expected=cross_realm_a_com_user_a
>  
> # user.name in cross realm webhdfs
> $ curl -u : --negotiate 
> 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=user_a' 
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Usernames not 
> matched: name=user_a != expected=cross_realm_a_com_user_a"}}
> # USE SPNEGO
> $ curl -u : --negotiate 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN'
> {"Token"{"urlString":"XgA."}}
>  
> {noformat}
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14454) RBF: getContentSummary() should allow non-existing folders

2019-04-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/HDFS-14454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14454:
---
Attachment: HDFS-14454-HDFS-13891.003.patch

> RBF: getContentSummary() should allow non-existing folders
> --
>
> Key: HDFS-14454
> URL: https://issues.apache.org/jira/browse/HDFS-14454
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14454-HDFS-13891.000.patch, 
> HDFS-14454-HDFS-13891.001.patch, HDFS-14454-HDFS-13891.002.patch, 
> HDFS-14454-HDFS-13891.003.patch
>
>
> We have a mount point with HASH_ALL and one of the subclusters does not 
> contain the folder.
> In this case, getContentSummary() returns FileNotFoundException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12510) RBF: Add security to UI

2019-04-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826225#comment-16826225
 ] 

Íñigo Goiri commented on HDFS-12510:


The patch is obviously good to have, but I don't think it covers the concern 
raised by [~raviprak].
Unfortunately, my knowledge of what he mentioned is fairly limited, and I'm not 
sure whether the current Router web UI already covers those issues or not.

> RBF: Add security to UI
> ---
>
> Key: HDFS-12510
> URL: https://issues.apache.org/jira/browse/HDFS-12510
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-12510-HDFS-13891.001.patch
>
>
> HDFS-12273 implemented the UI for Router Based Federation without security.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol

2019-04-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826221#comment-16826221
 ] 

Íñigo Goiri commented on HDFS-14447:


Please check the unit test report; there must be something different in your 
environment:
https://builds.apache.org/job/PreCommit-HDFS-Build/26699/testReport/

There are also a bunch of checkstyle warnings that can be fixed.
For logging, use the logger instead of System.out.
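
A minimal sketch of the logger idiom referred to above (the class name is 
illustrative only, not the actual patch code):

{code:java}
// Illustrative sketch: prefer the SLF4J logger over System.out so output
// honors log levels and the log4j configuration. Class name is an example.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RefreshUserMappingsExample {

  private static final Logger LOG =
      LoggerFactory.getLogger(RefreshUserMappingsExample.class);

  public void refresh() {
    LOG.info("Refreshing user-to-groups mappings");
  }
}
{code}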

> RBF: RouterAdminServer should support RefreshUserMappingsProtocol
> -
>
> Key: HDFS-14447
> URL: https://issues.apache.org/jira/browse/HDFS-14447
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: 3.1.0
>Reporter: Shen Yinjie
>Assignee: Shen Yinjie
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14447-HDFS-13891.01.patch, 
> HDFS-14447-HDFS-13891.02.patch, error.png
>
>
> HDFS with RBF
> We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin 
> -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration,
>  it throws "Unknown protocol: ...RefreshUserMappingProtocol".
> RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser 
> client would be refused to impersonate.As shown in the screenshot



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-999) Make the DNS resolution in OzoneManager more resilient

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-999?focusedWorklogId=232957&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232957
 ]

ASF GitHub Bot logged work on HDDS-999:
---

Author: ASF GitHub Bot
Created on: 25/Apr/19 16:11
Start Date: 25/Apr/19 16:11
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #758: HDDS-999. Make 
the DNS resolution in OzoneManager more resilient. (swagle)
URL: https://github.com/apache/hadoop/pull/758#discussion_r278625552
 
 

 ##
 File path: hadoop-ozone/dist/src/main/k8s/ozone/om-statefulset.yaml
 ##
 @@ -44,8 +44,8 @@ spec:
 - om
 - --init
   env:
-- name: "WAITFOR"
-  value: "scm-0.scm:9876"
+- name: "ENSURE_OM_INITIALIZED"
 
 Review comment:
   Thanks for the review @elek. Removed in the latest commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232957)
Time Spent: 1.5h  (was: 1h 20m)

> Make the DNS resolution in OzoneManager more resilient
> --
>
> Key: HDDS-999
> URL: https://issues.apache.org/jira/browse/HDDS-999
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Elek, Marton
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-999.01.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> If the OzoneManager is started before scm the scm dns may not be available. 
> In this case the om should retry and re-resolve the dns, but as of now it 
> throws an exception:
> {code:java}
> 2019-01-23 17:14:25 ERROR OzoneManager:593 - Failed to start the OzoneManager.
> java.net.SocketException: Call From om-0.om to null:0 failed on socket 
> exception: java.net.SocketException: Unresolved address; For more details 
> see:  http://wiki.apache.org/hadoop/SocketException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:798)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:566)
>     at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1042)
>     at org.apache.hadoop.ipc.Server.<init>(Server.java:2815)
>     at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:994)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:421)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>     at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:804)
>     at 
> org.apache.hadoop.ozone.om.OzoneManager.startRpcServer(OzoneManager.java:563)
>     at 
> org.apache.hadoop.ozone.om.OzoneManager.getRpcServer(OzoneManager.java:927)
>     at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:265)
>     at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:674)
>     at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:587)
> Caused by: java.net.SocketException: Unresolved address
>     at sun.nio.ch.Net.translateToSocketException(Net.java:131)
>     at sun.nio.ch.Net.translateException(Net.java:157)
>     at sun.nio.ch.Net.translateException(Net.java:163)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:76)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:549)
>     ... 11 more
> Caused by: java.nio.channels.UnresolvedAddressException
>     at sun.nio.ch.Net.checkAddress(Net.java:101)
>     at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:218)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>     ... 12 more{code}
> It should be fixed. (See also HDDS-421 which fixed the same problem in 
> datanode side and HDDS-907 which is the workaround while this issue is not 
> resolved).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org

[jira] [Work logged] (HDDS-999) Make the DNS resolution in OzoneManager more resilient

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-999?focusedWorklogId=232956&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232956
 ]

ASF GitHub Bot logged work on HDDS-999:
---

Author: ASF GitHub Bot
Created on: 25/Apr/19 16:10
Start Date: 25/Apr/19 16:10
Worklog Time Spent: 10m 
  Work Description: swagle commented on pull request #758: HDDS-999. Make 
the DNS resolution in OzoneManager more resilient. (swagle)
URL: https://github.com/apache/hadoop/pull/758#discussion_r278625552
 
 

 ##
 File path: hadoop-ozone/dist/src/main/k8s/ozone/om-statefulset.yaml
 ##
 @@ -44,8 +44,8 @@ spec:
 - om
 - --init
   env:
-- name: "WAITFOR"
-  value: "scm-0.scm:9876"
+- name: "ENSURE_OM_INITIALIZED"
 
 Review comment:
   Removed in the latest commit.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232956)
Time Spent: 1h 20m  (was: 1h 10m)

> Make the DNS resolution in OzoneManager more resilient
> --
>
> Key: HDDS-999
> URL: https://issues.apache.org/jira/browse/HDDS-999
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Elek, Marton
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-999.01.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> If the OzoneManager is started before scm the scm dns may not be available. 
> In this case the om should retry and re-resolve the dns, but as of now it 
> throws an exception:
> {code:java}
> 2019-01-23 17:14:25 ERROR OzoneManager:593 - Failed to start the OzoneManager.
> java.net.SocketException: Call From om-0.om to null:0 failed on socket 
> exception: java.net.SocketException: Unresolved address; For more details 
> see:  http://wiki.apache.org/hadoop/SocketException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:798)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:566)
>     at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1042)
>     at org.apache.hadoop.ipc.Server.<init>(Server.java:2815)
>     at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:994)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:421)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>     at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:804)
>     at 
> org.apache.hadoop.ozone.om.OzoneManager.startRpcServer(OzoneManager.java:563)
>     at 
> org.apache.hadoop.ozone.om.OzoneManager.getRpcServer(OzoneManager.java:927)
>     at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:265)
>     at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:674)
>     at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:587)
> Caused by: java.net.SocketException: Unresolved address
>     at sun.nio.ch.Net.translateToSocketException(Net.java:131)
>     at sun.nio.ch.Net.translateException(Net.java:157)
>     at sun.nio.ch.Net.translateException(Net.java:163)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:76)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:549)
>     ... 11 more
> Caused by: java.nio.channels.UnresolvedAddressException
>     at sun.nio.ch.Net.checkAddress(Net.java:101)
>     at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:218)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>     ... 12 more{code}
> It should be fixed. (See also HDDS-421, which fixed the same problem on the 
> datanode side, and HDDS-907, which is the workaround while this issue is not 
> resolved.)
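
A minimal sketch of the retry-and-re-resolve behaviour requested above is shown below. It is an illustration only (the class name, timeout and interval values are assumptions), not the attached HDDS-999 patch, which may differ:
{code:java}
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

public final class DnsRetryExample {

  /**
   * Re-resolve the given host/port until the address resolves or the
   * deadline passes, then return the resolved InetSocketAddress.
   */
  static InetSocketAddress resolveWithRetry(String host, int port,
      long timeoutMillis, long retryIntervalMillis) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (true) {
      // An InetSocketAddress resolves the host name once, at construction
      // time, so create a new instance on every attempt to force a fresh lookup.
      InetSocketAddress addr = new InetSocketAddress(host, port);
      if (!addr.isUnresolved()) {
        return addr;
      }
      if (System.currentTimeMillis() >= deadline) {
        throw new IllegalStateException("Could not resolve " + host
            + " within " + timeoutMillis + " ms");
      }
      TimeUnit.MILLISECONDS.sleep(retryIntervalMillis);
    }
  }

  public static void main(String[] args) throws Exception {
    InetSocketAddress scm = resolveWithRetry("scm-0.scm", 9876, 60_000L, 1_000L);
    System.out.println("Resolved SCM address: " + scm);
  }
}
{code}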



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-25 Thread Yuriy Malygin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826162#comment-16826162
 ] 

Yuriy Malygin edited comment on HDFS-13596 at 4/25/19 3:31 PM:
---

I'm testing the last patch on a cluster with Kerberos:
 # download sources from [https://github.com/apache/hadoop]
 # apply patch _HDFS-13596.007.patch_
 # build a patched version of _hadoop-3.3.0-SNAPSHOT_
 # prepare the running cluster (_hadoop-2.7.3_) for _rollingUpgrade_
 # run _hadoop-3.3.0-SNAPSHOT_ in _rollingUpgrade_ mode
 # upload test data to HDFS
 # restart the NN and all is fine
 # stop the cluster to roll back to hadoop-2.7.3
 # the hadoop-2.7.3 NN start fails with _ArrayIndexOutOfBoundsException_:
{code:java}
INFO   | jvm 1| 2019/04/25 18:01:24 | STARTUP_MSG:   build = 
https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'vinodkv' on 
2016-08-18T01:01Z
INFO   | jvm 1| 2019/04/25 18:01:24 | STARTUP_MSG:   java = 1.8.0_102
INFO   | jvm 1| 2019/04/25 18:01:24 | 
/
INFO   | jvm 1| 2019/04/25 18:01:24 | 2019-04-25 18:01:24,767  INFO 
[Thread-3] NameNode - registered UNIX signal handlers for [TERM, HUP, INT]
INFO   | jvm 1| 2019/04/25 18:01:24 | 2019-04-25 18:01:24,769  INFO 
[Thread-3] NameNode - createNameNode [-rollingUpgrade, rollback]
{code}
{code:java}
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,342 DEBUG 
[Thread-3] FSImage - Planning to load edit log stream: 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@53468702
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,342 DEBUG 
[Thread-3] FSImage - Planning to load edit log stream: 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5e350e83
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,342 DEBUG 
[Thread-3] FSImage - Planning to load edit log stream: 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@cf86362
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,342 DEBUG 
[Thread-3] FSImage - Planning to load edit log stream: 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@840d683
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,342  INFO 
[Thread-3] FSImage - Planning to load image: 
FSImageFile(file=/one/hadoop-data/dfs/current/fsimage_rollback_677,
 cpktTxId=677)
INFO   | jvm 1| 2019/04/25 18:01:30 | Total time for which application 
threads were stopped: 0.0007557 seconds, Stopping threads took: 0.0001181 
seconds
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,377 ERROR 
[Thread-3] FSImage - Failed to load image from 
FSImageFile(file=/one/hadoop-data/dfs/current/fsimage_rollback_677,
 cpktTxId=677)
INFO   | jvm 1| 2019/04/25 18:01:30 | 
java.lang.ArrayIndexOutOfBoundsException: 536870913
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadStringTableSection(FSImageFormatProtobuf.java:318)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:251)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:182)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:963)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:947)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:746)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:677)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:976)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameN

[jira] [Commented] (HDFS-13596) NN restart fails after RollingUpgrade from 2.x to 3.x

2019-04-25 Thread Yuriy Malygin (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-13596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826162#comment-16826162
 ] 

Yuriy Malygin commented on HDFS-13596:
--

I'm testing the last patch on a cluster with Kerberos:
 # download sources from [https://github.com/apache/hadoop]
 # apply patch _HDFS-13596.007.patch_
 # build a patched version of _hadoop-3.3.0-SNAPSHOT_
 # prepare the running cluster (_hadoop-2.7.3_) for _rollingUpgrade_
 # run _hadoop-3.3.0-SNAPSHOT_ in _rollingUpgrade_ mode
 # upload test data to HDFS
 # restart the NN and all is fine
 # stop the cluster to roll back to hadoop-2.7.3
 # the hadoop-2.7.3 NN start fails:
{code:java}
INFO   | jvm 1| 2019/04/25 18:01:24 | STARTUP_MSG:   build = 
https://git-wip-us.apache.org/repos/asf/hadoop.git -r 
baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'vinodkv' on 
2016-08-18T01:01Z
INFO   | jvm 1| 2019/04/25 18:01:24 | STARTUP_MSG:   java = 1.8.0_102
INFO   | jvm 1| 2019/04/25 18:01:24 | 
/
INFO   | jvm 1| 2019/04/25 18:01:24 | 2019-04-25 18:01:24,767  INFO 
[Thread-3] NameNode - registered UNIX signal handlers for [TERM, HUP, INT]
INFO   | jvm 1| 2019/04/25 18:01:24 | 2019-04-25 18:01:24,769  INFO 
[Thread-3] NameNode - createNameNode [-rollingUpgrade, rollback]
{code}
{code:java}
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,342 DEBUG 
[Thread-3] FSImage - Planning to load edit log stream: 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@53468702
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,342 DEBUG 
[Thread-3] FSImage - Planning to load edit log stream: 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@5e350e83
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,342 DEBUG 
[Thread-3] FSImage - Planning to load edit log stream: 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@cf86362
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,342 DEBUG 
[Thread-3] FSImage - Planning to load edit log stream: 
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream@840d683
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,342  INFO 
[Thread-3] FSImage - Planning to load image: 
FSImageFile(file=/one/hadoop-data/dfs/current/fsimage_rollback_677,
 cpktTxId=677)
INFO   | jvm 1| 2019/04/25 18:01:30 | Total time for which application 
threads were stopped: 0.0007557 seconds, Stopping threads took: 0.0001181 
seconds
INFO   | jvm 1| 2019/04/25 18:01:30 | 2019-04-25 18:01:30,377 ERROR 
[Thread-3] FSImage - Failed to load image from 
FSImageFile(file=/one/hadoop-data/dfs/current/fsimage_rollback_677,
 cpktTxId=677)
INFO   | jvm 1| 2019/04/25 18:01:30 | 
java.lang.ArrayIndexOutOfBoundsException: 536870913
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadStringTableSection(FSImageFormatProtobuf.java:318)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.loadInternal(FSImageFormatProtobuf.java:251)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormatProtobuf$Loader.load(FSImageFormatProtobuf.java:182)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImageFormat$LoaderDelegator.load(FSImageFormat.java:226)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:963)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:947)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:746)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:677)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:976)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
INFO   | jvm 1| 2019/04/25 18:01:30 |   at 
org.apache.hadoop.hdf
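
A side observation that may help triage (an assumption based only on the number above, not a confirmed root cause): the failing index 536870913 is exactly 2^29 + 1, which would be consistent with the 3.x fsimage writer packing an extra high-order bit into string-table entry ids that the 2.7.3 loader does not mask before indexing the table. A quick check of the arithmetic:
{code:java}
public class StringTableIndexCheck {
  public static void main(String[] args) {
    int failingIndex = 536870913;
    System.out.println(failingIndex == (1 << 29) + 1);          // true
    // Masking off bit 29 leaves a small, plausible string-table id.
    System.out.println(failingIndex & ((1 << 29) - 1));         // 1
    System.out.println(Integer.toBinaryString(failingIndex));   // 1, then 28 zeros, then 1
  }
}
{code}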

[jira] [Updated] (HDFS-14457) RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'

2019-04-25 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14457:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HDFS-13891
   Status: Resolved  (was: Patch Available)

Committed. 
Thanx [~Huachao] for the contribution.

> RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
> --
>
> Key: HDFS-14457
> URL: https://issues.apache.org/jira/browse/HDFS-14457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
>  Labels: RBF
> Fix For: HDFS-13891
>
> Attachments: HDFS-14457-HDFS-13891-01.patch, 
> HDFS-14457-HDFS-13891-02.patch, HDFS-14457.01.patch
>
>
> when executing the CLI command 'hdfs dfsrouteradmin', the text for -order does not 
> contain SPACE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14457) RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'

2019-04-25 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826147#comment-16826147
 ] 

Ayush Saxena commented on HDFS-14457:
-

v02 LGTM +1
Committing Shortly!!!

> RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
> --
>
> Key: HDFS-14457
> URL: https://issues.apache.org/jira/browse/HDFS-14457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14457-HDFS-13891-01.patch, 
> HDFS-14457-HDFS-13891-02.patch, HDFS-14457.01.patch
>
>
> when executing the CLI command 'hdfs dfsrouteradmin', the text for -order does not 
> contain SPACE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=232858&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232858
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 13:30
Start Date: 25/Apr/19 13:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#issuecomment-486673145
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 0 | Docker mode activated. |
   | -1 | patch | 8 | https://github.com/apache/hadoop/pull/773 does not apply 
to trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/773 |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-773/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232858)
Time Spent: 0.5h  (was: 20m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14434) webhdfs that connect secure hdfs should not use user.name parameter

2019-04-25 Thread KWON BYUNGCHANG (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826066#comment-16826066
 ] 

KWON BYUNGCHANG commented on HDFS-14434:


The failed test case is not related to this patch. 
Please review.

> webhdfs that connect secure hdfs should not use user.name parameter
> ---
>
> Key: HDFS-14434
> URL: https://issues.apache.org/jira/browse/HDFS-14434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Minor
> Attachments: HDFS-14434.001.patch, HDFS-14434.002.patch, 
> HDFS-14434.003.patch, HDFS-14434.004.patch, HDFS-14434.005.patch, 
> HDFS-14434.006.patch
>
>
> I have two secure hadoop cluster.  Both cluster use cross-realm 
> authentication. 
> [use...@a.com|mailto:use...@a.com] can access to HDFS of B.COM realm
> by the way, hadoop username of use...@a.com  in B.COM realm is  
> cross_realm_a_com_user_a.
> hdfs dfs command of use...@a.com using B.COM webhdfs failed.
> root cause is  webhdfs that connect secure hdfs use user.name parameter.
> according to webhdfs spec,  insecure webhdfs use user.name,  secure webhdfs 
> use SPNEGO for authentication.
> I think webhdfs that connect secure hdfs  should not use user.name parameter.
> I will attach patch.
> below is error log
>  
> {noformat}
> $ hdfs dfs -ls  webhdfs://b.com:50070/
> ls: Usernames not matched: name=user_a != expected=cross_realm_a_com_user_a
>  
> # user.name in cross realm webhdfs
> $ curl -u : --negotiate 
> 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=user_a' 
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Usernames not 
> matched: name=user_a != expected=cross_realm_a_com_user_a"}}
> # USE SPNEGO
> $ curl -u : --negotiate 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN'
> {"Token"{"urlString":"XgA."}}
>  
> {noformat}
>  
>  
>  
>  
>  
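
A minimal sketch of the client-side behaviour described above (skip the user.name query parameter when Kerberos security is enabled and let SPNEGO identify the caller). This is an illustration under that assumption, not the attached patch:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class WebHdfsUserNameParam {

  /** Query fragment carrying the caller identity, or "" when SPNEGO is used. */
  static String userNameQuery(Configuration conf, String userName) {
    UserGroupInformation.setConfiguration(conf);
    if (UserGroupInformation.isSecurityEnabled()) {
      // SPNEGO already authenticates the caller; also sending user.name can
      // clash with the Kerberos-derived name (e.g. in cross-realm setups).
      return "";
    }
    return "&user.name=" + userName;
  }
}
{code}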



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=232857&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232857
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 13:28
Start Date: 25/Apr/19 13:28
Worklog Time Spent: 10m 
  Work Description: elek commented on issue #773: HDDS-1469. Generate 
default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773#issuecomment-486672703
 
 
   /cc @anuengineer This is the annotation processor. With a small modification 
in the OzoneConfiguration (to load all the generated fragments) we don't need 
to merge all the generated config files to one big ozone-default.xml
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232857)
Time Spent: 20m  (was: 10m)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?focusedWorklogId=232855&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232855
 ]

ASF GitHub Bot logged work on HDDS-1469:


Author: ASF GitHub Bot
Created on: 25/Apr/19 13:27
Start Date: 25/Apr/19 13:27
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #773: HDDS-1469. 
Generate default configuration fragments based on annotations
URL: https://github.com/apache/hadoop/pull/773
 
 
   See the design doc in the parent jira for more details.
   
   In this jira I introduce a new annotation processor which can generate 
ozone-default.xml fragments based on the annotations which are introduced by 
HDDS-1468.
   
   The ozone-default-generated.xml fragments can be used directly by the 
OzoneConfiguration as I added a small code to the constructor to check ALL the 
available ozone-default-generated.xml files and add them to the available 
resources.
   
   With this approach we don't need to edit ozone-default.xml as all the 
configuration can be defined in java code.
   
   As a side effect each service will see only the available configuration keys 
and values based on the classpath. (If the ozone-default-generated.xml file of 
OzoneManager is not on the classpath of the SCM, SCM doesn't see the available 
configs.) 
   
   
   
   See: https://issues.apache.org/jira/browse/HDDS-1469
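
   As a rough sketch of the classpath-scanning idea described above (not the actual HDDS-1469 code; the class name and loading method are assumptions for illustration, only the `ozone-default-generated.xml` resource name comes from the description):
{code:java}
import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;
import org.apache.hadoop.conf.Configuration;

public class GeneratedFragmentLoader {

  public static Configuration loadWithGeneratedDefaults() throws IOException {
    Configuration conf = new Configuration();
    Enumeration<URL> fragments = GeneratedFragmentLoader.class.getClassLoader()
        .getResources("ozone-default-generated.xml");
    while (fragments.hasMoreElements()) {
      // Only fragments actually present on the classpath become visible to
      // this service, which matches the "side effect" described above.
      conf.addResource(fragments.nextElement());
    }
    return conf;
  }
}
{code}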
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232855)
Time Spent: 10m
Remaining Estimate: 0h

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1469:
-
Labels: pull-request-available  (was: )

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread Elek, Marton (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HDDS-1469:
---
Status: Patch Available  (was: Open)

> Generate default configuration fragments based on annotations
> -
>
> Key: HDDS-1469
> URL: https://issues.apache.org/jira/browse/HDDS-1469
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> See the design doc in the parent jira for more details.
> In this jira I introduce a new annotation processor which can generate 
> ozone-default.xml fragments based on the annotations which are introduced by 
> HDDS-1468.
> The ozone-default-generated.xml fragments can be used directly by the 
> OzoneConfiguration as I added a small code to the constructor to check ALL 
> the available ozone-default-generated.xml files and add them to the available 
> resources.
> With this approach we don't need to edit ozone-default.xml as all the 
> configuration can be defined in java code.
> As a side effect each service will see only the available configuration keys 
> and values based on the classpath. (If the ozone-default-generated.xml file 
> of OzoneManager is not on the classpath of the SCM, SCM doesn't see the 
> available configs.) 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1469) Generate default configuration fragments based on annotations

2019-04-25 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-1469:
--

 Summary: Generate default configuration fragments based on 
annotations
 Key: HDDS-1469
 URL: https://issues.apache.org/jira/browse/HDDS-1469
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Elek, Marton
Assignee: Elek, Marton


See the design doc in the parent jira for more details.

In this jira I introduce a new annotation processor which can generate 
ozone-default.xml fragments based on the annotations which are introduced by 
HDDS-1468.

The ozone-default-generated.xml fragments can be used directly by the 
OzoneConfiguration as I added a small code to the constructor to check ALL the 
available ozone-default-generated.xml files and add them to the available 
resources.

With this approach we don't need to edit ozone-default.xml as all the 
configuration can be defined in java code.

As a side effect each service will see only the available configuration keys 
and values based on the classpath. (If the ozone-default-generated.xml file of 
OzoneManager is not on the classpath of the SCM, SCM doesn't see the available 
configs.) 





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14434) webhdfs that connect secure hdfs should not use user.name parameter

2019-04-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826062#comment-16826062
 ] 

Hadoop QA commented on HDFS-14434:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 19s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
7s{color} | {color:green} hadoop-hdfs-project generated 0 new + 536 unchanged - 
2 fixed = 536 total (was 538) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
90 unchanged - 18 fixed = 91 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
45s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.tools.TestDFSZKFailoverController |
|   | hadoop.fs.viewfs.TestViewFsAtHdfsRoot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14434 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12966998/HDFS-14434.006.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 08a79fa363dd 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build too

[jira] [Commented] (HDDS-999) Make the DNS resolution in OzoneManager more resilient

2019-04-25 Thread Elek, Marton (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826056#comment-16826056
 ] 

Elek, Marton commented on HDDS-999:
---

Ok, I got it from your patch. The initialization part is good (it can wait 50 
seconds for the new DNS entry), but the startup path is not. I can reproduce the 
problem by creating a cluster, stopping all the services, and restarting the OM 
only (no init is required): it can't be started.

> Make the DNS resolution in OzoneManager more resilient
> --
>
> Key: HDDS-999
> URL: https://issues.apache.org/jira/browse/HDDS-999
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Elek, Marton
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-999.01.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> If the OzoneManager is started before SCM, the SCM DNS may not be available. 
> In this case the OM should retry and re-resolve the DNS, but as of now it 
> throws an exception:
> {code:java}
> 2019-01-23 17:14:25 ERROR OzoneManager:593 - Failed to start the OzoneManager.
> java.net.SocketException: Call From om-0.om to null:0 failed on socket 
> exception: java.net.SocketException: Unresolved address; For more details 
> see:  http://wiki.apache.org/hadoop/SocketException
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:798)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:566)
>     at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1042)
>     at org.apache.hadoop.ipc.Server.<init>(Server.java:2815)
>     at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:994)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:421)
>     at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
>     at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:804)
>     at 
> org.apache.hadoop.ozone.om.OzoneManager.startRpcServer(OzoneManager.java:563)
>     at 
> org.apache.hadoop.ozone.om.OzoneManager.getRpcServer(OzoneManager.java:927)
>     at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:265)
>     at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:674)
>     at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:587)
> Caused by: java.net.SocketException: Unresolved address
>     at sun.nio.ch.Net.translateToSocketException(Net.java:131)
>     at sun.nio.ch.Net.translateException(Net.java:157)
>     at sun.nio.ch.Net.translateException(Net.java:163)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:76)
>     at org.apache.hadoop.ipc.Server.bind(Server.java:549)
>     ... 11 more
> Caused by: java.nio.channels.UnresolvedAddressException
>     at sun.nio.ch.Net.checkAddress(Net.java:101)
>     at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:218)
>     at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>     ... 12 more{code}
> It should be fixed. (See also HDDS-421, which fixed the same problem on the 
> datanode side, and HDDS-907, which is the workaround while this issue is not 
> resolved.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1395) Key write fails with BlockOutputStream has been closed exception

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1395?focusedWorklogId=232788&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232788
 ]

ASF GitHub Bot logged work on HDDS-1395:


Author: ASF GitHub Bot
Created on: 25/Apr/19 12:06
Start Date: 25/Apr/19 12:06
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #749: HDDS-1395. Key 
write fails with BlockOutputStream has been closed exception
URL: https://github.com/apache/hadoop/pull/749#issuecomment-486644909
 
 
   /retest
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232788)
Time Spent: 3h 20m  (was: 3h 10m)

> Key write fails with BlockOutputStream has been closed exception
> 
>
> Key: HDDS-1395
> URL: https://issues.apache.org/jira/browse/HDDS-1395
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: MiniOzoneChaosCluster, pull-request-available
> Attachments: HDDS-1395.000.patch, HDDS-1395.001.patch
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Key write fails with BlockOutputStream has been closed
> {code}
> 2019-04-05 11:24:47,770 ERROR ozone.MiniOzoneLoadGenerator 
> (MiniOzoneLoadGenerator.java:load(102)) - LOADGEN: Create 
> key:pool-431-thread-9-2092651262 failed with exception, but skipping
> java.io.IOException: BlockOutputStream has been closed.
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.checkOpen(BlockOutputStream.java:662)
> at 
> org.apache.hadoop.hdds.scm.storage.BlockOutputStream.write(BlockOutputStream.java:245)
> at 
> org.apache.hadoop.ozone.client.io.BlockOutputStreamEntry.write(BlockOutputStreamEntry.java:131)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.handleWrite(KeyOutputStream.java:325)
> at 
> org.apache.hadoop.ozone.client.io.KeyOutputStream.write(KeyOutputStream.java:287)
> at 
> org.apache.hadoop.ozone.client.io.OzoneOutputStream.write(OzoneOutputStream.java:49)
> at java.io.OutputStream.write(OutputStream.java:75)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.load(MiniOzoneLoadGenerator.java:100)
> at 
> org.apache.hadoop.ozone.MiniOzoneLoadGenerator.lambda$startIO$0(MiniOzoneLoadGenerator.java:143)
> at 
> java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1626)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1468) Inject configuration values to Java objects

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1468?focusedWorklogId=232719&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232719
 ]

ASF GitHub Bot logged work on HDDS-1468:


Author: ASF GitHub Bot
Created on: 25/Apr/19 10:34
Start Date: 25/Apr/19 10:34
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #772: HDDS-1468. Inject 
configuration values to Java objects
URL: https://github.com/apache/hadoop/pull/772#issuecomment-486619157
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 23 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 40 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1207 | trunk passed |
   | +1 | compile | 74 | trunk passed |
   | +1 | checkstyle | 31 | trunk passed |
   | +1 | mvnsite | 76 | trunk passed |
   | +1 | shadedclient | 833 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 113 | trunk passed |
   | +1 | javadoc | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 74 | the patch passed |
   | +1 | compile | 70 | the patch passed |
   | +1 | javac | 70 | the patch passed |
   | -0 | checkstyle | 25 | hadoop-hdds: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 61 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 804 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 119 | the patch passed |
   | +1 | javadoc | 54 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 78 | common in the patch passed. |
   | -1 | unit | 88 | server-scm in the patch failed. |
   | +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
   | | | 3896 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-772/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/772 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 8e6273679b38 4.4.0-143-generic #169~14.04.2-Ubuntu SMP Wed 
Feb 13 15:00:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 0b3d41b |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-772/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-772/1/artifact/out/patch-unit-hadoop-hdds_server-scm.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-772/1/testReport/ |
   | Max. process+thread count | 464 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-hdds/server-scm U: hadoop-hdds |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-772/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232719)
Time Spent: 20m  (was: 10m)

> Inject configuration values to Java objects
> ---
>
> Key: HDDS-1468
> URL: https://issues.apache.org/jira/browse/HDDS-1468
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> According to the design doc in the parent issue we would like to support java 
> configuration objects which are simple POJO but the fields/setters are 
> annotated. As a first step we can introduce the 
> OzoneConfiguration.getConfigObject() api which can create the config object 
> and inject configuration.
> Later 
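
A rough sketch of the configuration-injection idea described above; the annotation, class, and method names below are hypothetical illustrations (and handle only long-valued setters), not the API actually introduced by HDDS-1468:
{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import org.apache.hadoop.conf.Configuration;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface ConfigSetter {
  String key();
  String defaultValue();
}

class ScmClientConfig {
  private long retryIntervalMs;

  @ConfigSetter(key = "example.scm.client.retry.interval.ms", defaultValue = "1000")
  public void setRetryIntervalMs(long retryIntervalMs) {
    this.retryIntervalMs = retryIntervalMs;
  }

  public long getRetryIntervalMs() {
    return retryIntervalMs;
  }
}

final class ConfigInjector {

  // Create the POJO and call every annotated setter with the configured
  // (or default) value -- the "inject configuration" step described above.
  // Simplification: only long-valued setters are handled in this sketch.
  static <T> T inject(Configuration conf, Class<T> type) throws Exception {
    T instance = type.getDeclaredConstructor().newInstance();
    for (Method m : type.getMethods()) {
      ConfigSetter a = m.getAnnotation(ConfigSetter.class);
      if (a != null) {
        m.invoke(instance, conf.getLong(a.key(), Long.parseLong(a.defaultValue())));
      }
    }
    return instance;
  }
}
{code}
Usage would then be something along the lines of {{ScmClientConfig scmConf = ConfigInjector.inject(conf, ScmClientConfig.class)}}, mirroring the getConfigObject() call mentioned above.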

[jira] [Commented] (HDFS-12510) RBF: Add security to UI

2019-04-25 Thread Brahma Reddy Battula (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825951#comment-16825951
 ] 

Brahma Reddy Battula commented on HDFS-12510:
-

Patch lgtm, but we might need to change the title.
{quote}As pointed out by [~raviprak] in HDFS-12273, we should do something like:
{quote}
Is the WebHdfsHandler change really required, since it's used by the Datanode only? Did I 
miss anything here?

Since the Router uses initWebHdfs(..) the same way as the NameNode, the same filters can be used as for the 
NameNode.
{code:java}
NameNodeHttpServer.initWebHdfs(conf, httpAddress.getHostName(), httpKeytab,
 httpServer, RouterWebHdfsMethods.class.getPackage().getName());{code}
[~elgoiri]/[~raviprak], any pointers on this issue? Can we go ahead with the commit, 
since it just exposes one metric?
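
For reference, a hedged sketch of how a Kerberos/SPNEGO filter is typically wired into a Hadoop HTTP server via filter initializers. The keys shown are standard Hadoop configuration names, but whether the Router should reuse them exactly as the NameNode does is the open question above:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.AuthenticationFilterInitializer;

public class RouterHttpAuthSketch {

  // Enable the standard authentication filter and Kerberos auth type on the
  // configuration used to build the HTTP server.
  static Configuration withSpnegoFilter(Configuration conf) {
    conf.set("hadoop.http.filter.initializers",
        AuthenticationFilterInitializer.class.getName());
    conf.set("hadoop.http.authentication.type", "kerberos");
    return conf;
  }
}
{code}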

> RBF: Add security to UI
> ---
>
> Key: HDFS-12510
> URL: https://issues.apache.org/jira/browse/HDFS-12510
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Íñigo Goiri
>Assignee: CR Hota
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-12510-HDFS-13891.001.patch
>
>
> HDFS-12273 implemented the UI for Router Based Federation without security.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14434) webhdfs that connect secure hdfs should not use user.name parameter

2019-04-25 Thread KWON BYUNGCHANG (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

KWON BYUNGCHANG updated HDFS-14434:
---
Attachment: HDFS-14434.006.patch

> webhdfs that connect secure hdfs should not use user.name parameter
> ---
>
> Key: HDFS-14434
> URL: https://issues.apache.org/jira/browse/HDFS-14434
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: webhdfs
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Minor
> Attachments: HDFS-14434.001.patch, HDFS-14434.002.patch, 
> HDFS-14434.003.patch, HDFS-14434.004.patch, HDFS-14434.005.patch, 
> HDFS-14434.006.patch
>
>
> I have two secure hadoop cluster.  Both cluster use cross-realm 
> authentication. 
> [use...@a.com|mailto:use...@a.com] can access to HDFS of B.COM realm
> by the way, hadoop username of use...@a.com  in B.COM realm is  
> cross_realm_a_com_user_a.
> hdfs dfs command of use...@a.com using B.COM webhdfs failed.
> root cause is  webhdfs that connect secure hdfs use user.name parameter.
> according to webhdfs spec,  insecure webhdfs use user.name,  secure webhdfs 
> use SPNEGO for authentication.
> I think webhdfs that connect secure hdfs  should not use user.name parameter.
> I will attach patch.
> below is error log
>  
> {noformat}
> $ hdfs dfs -ls  webhdfs://b.com:50070/
> ls: Usernames not matched: name=user_a != expected=cross_realm_a_com_user_a
>  
> # user.name in cross realm webhdfs
> $ curl -u : --negotiate 
> 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN&user.name=user_a' 
> {"RemoteException":{"exception":"SecurityException","javaClassName":"java.lang.SecurityException","message":"Failed
>  to obtain user group information: java.io.IOException: Usernames not 
> matched: name=user_a != expected=cross_realm_a_com_user_a"}}
> # USE SPNEGO
> $ curl -u : --negotiate 'http://b.com:50070/webhdfs/v1/?op=GETDELEGATIONTOKEN'
> {"Token"{"urlString":"XgA."}}
>  
> {noformat}
>  
>  
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12273) Federation UI

2019-04-25 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-12273:
---

Assignee: Íñigo Goiri  (was: Brahma Reddy Battula)

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Fix For: 2.9.0, 3.0.0
>
> Attachments: HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, 
> HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch, 
> HDFS-12273-HDFS-10467-005.patch, HDFS-12273-HDFS-10467-006.patch, 
> HDFS-12273-HDFS-10467-007.patch, HDFS-12273-HDFS-10467-008.patch, 
> HDFS-12273-HDFS-10467-009.patch, HDFS-12273-HDFS-10467-010.patch, 
> federationUI-1.png, federationUI-2.png, federationUI-3.png
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-12273) Federation UI

2019-04-25 Thread Brahma Reddy Battula (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula reassigned HDFS-12273:
---

Assignee: Brahma Reddy Battula  (was: Íñigo Goiri)

> Federation UI
> -
>
> Key: HDFS-12273
> URL: https://issues.apache.org/jira/browse/HDFS-12273
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Íñigo Goiri
>Assignee: Brahma Reddy Battula
>Priority: Major
> Fix For: 2.9.0, 3.0.0
>
> Attachments: HDFS-12273-HDFS-10467-000.patch, 
> HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, 
> HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch, 
> HDFS-12273-HDFS-10467-005.patch, HDFS-12273-HDFS-10467-006.patch, 
> HDFS-12273-HDFS-10467-007.patch, HDFS-12273-HDFS-10467-008.patch, 
> HDFS-12273-HDFS-10467-009.patch, HDFS-12273-HDFS-10467-010.patch, 
> federationUI-1.png, federationUI-2.png, federationUI-3.png
>
>
> Add the Web UI to the Router to expose the status of the federated cluster. 
> It includes the federation metrics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14434) webhdfs that connect secure hdfs should not use user.name parameter

2019-04-25 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825920#comment-16825920
 ] 

Hadoop QA commented on HDFS-14434:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
6s{color} | {color:green} hadoop-hdfs-project generated 0 new + 536 unchanged - 
2 fixed = 536 total (was 538) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 44s{color} | {color:orange} hadoop-hdfs-project: The patch generated 6 new + 
90 unchanged - 18 fixed = 96 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
49s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}100m 24s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14434 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12966988/HDFS-14434.005.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux ca4d68f2ad45 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0b3d41b |
| m

[jira] [Commented] (HDFS-14356) Implement HDFS cache on SCM with native PMDK libs

2019-04-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825902#comment-16825902
 ] 

Feilong He commented on HDFS-14356:
---

HDFS-14356.002.patch has been uploaded to address [~Sammi]'s comments. Thanks!

> Implement HDFS cache on SCM with native PMDK libs
> -
>
> Key: HDFS-14356
> URL: https://issues.apache.org/jira/browse/HDFS-14356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14356.000.patch, HDFS-14356.001.patch, 
> HDFS-14356.002.patch
>
>
> In this implementation, native PMDK libs are used to map HDFS blocks to SCM. 
> To use this implementation, users should build Hadoop with the PMDK libs by 
> specifying a build option. This implementation is only supported on the Linux 
> platform.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14356) Implement HDFS cache on SCM with native PMDK libs

2019-04-25 Thread Feilong He (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feilong He updated HDFS-14356:
--
Attachment: HDFS-14356.002.patch

> Implement HDFS cache on SCM with native PMDK libs
> -
>
> Key: HDFS-14356
> URL: https://issues.apache.org/jira/browse/HDFS-14356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14356.000.patch, HDFS-14356.001.patch, 
> HDFS-14356.002.patch
>
>
> In this implementation, native PMDK libs are used to map HDFS blocks to SCM. 
> To use this implementation, users should build Hadoop with the PMDK libs by 
> specifying a build option. This implementation is only supported on the Linux 
> platform.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14356) Implement HDFS cache on SCM with native PMDK libs

2019-04-25 Thread Feilong He (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825899#comment-16825899
 ] 

Feilong He commented on HDFS-14356:
---

[~Sammi], thank you very much for your review. Your suggestions are quite 
valuable to us.
{quote}Use constants or enum for pmdk status
{quote}
Good suggestion. As you suggested, it is reasonable to use an enum for the PMDK 
support state codes. I will refine this part of the implementation in the new 
patch.
{quote}Is there potential buffer overflow risk here?
{quote}
snprintf() truncates excess characters when it writes the pmem error message 
into msg[1000], so there should be no buffer overflow risk here. I will check 
the other pieces of native code to make sure the same holds there. Thanks for 
your great insight.
{quote}Do you plan to support Windows in this patch? If not, please clarify the 
supported platform in the title or in the description. Also make sure that when 
native compilation is enabled (-Pnative), the following two cases pass:
 * Linux platform, compile with and without PMDK enabled
 * Windows platform, compile without PMDK enabled
{quote}
I will make sure both build cases pass. On Linux, the build passes on my side 
regardless of whether PMDK is enabled, and I will set up a Windows environment 
to verify the second case.

Thanks [~Sammi] again!

> Implement HDFS cache on SCM with native PMDK libs
> -
>
> Key: HDFS-14356
> URL: https://issues.apache.org/jira/browse/HDFS-14356
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: caching, datanode
>Reporter: Feilong He
>Assignee: Feilong He
>Priority: Major
> Attachments: HDFS-14356.000.patch, HDFS-14356.001.patch
>
>
> In this implementation, native PMDK libs are used to map HDFS blocks to SCM. 
> To use this implementation, users should build Hadoop with the PMDK libs by 
> specifying a build option. This implementation is only supported on the Linux 
> platform.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1466) Improve the configuration API with using Java classes instead of constants

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1466?focusedWorklogId=232667&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232667
 ]

ASF GitHub Bot logged work on HDDS-1466:


Author: ASF GitHub Bot
Created on: 25/Apr/19 09:25
Start Date: 25/Apr/19 09:25
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #771: HDDS-1466. Improve 
the configuration API with using Java classes instead of constants 
URL: https://github.com/apache/hadoop/pull/771
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232667)
Time Spent: 20m  (was: 10m)

> Improve the configuration API with using Java classes instead of constants 
> ---
>
> Key: HDDS-1466
> URL: https://issues.apache.org/jira/browse/HDDS-1466
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
> Attachments: typesafe.pdf
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As of now we use the Configuration API from hadoop-common. We propose to add 
> a wrapper that returns configured objects instead of relying on constants, to 
> make the configuration more structured and type-safe.
> The ozone-default.xml can be generated with an annotation processor.
> Please see the attached design doc for more details.
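
For context, a minimal sketch of the constant-based pattern this proposal wants 
to move away from, written against hadoop-common's Configuration. The class 
name, key string, and default value below are placeholders invented for 
illustration, not real Ozone settings, and hadoop-common is assumed to be on 
the classpath:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Today's constant-based pattern: every reader must know the string key and
// the default value. The proposed wrapper would instead hand callers a typed
// object (see the HDDS-1468 messages further down for a sketch of that side).
public class ConstantBasedConfigExample {

  // Hypothetical constants standing in for the real *ConfigKeys classes.
  public static final String EXAMPLE_REPLICATION_KEY = "example.replication";
  public static final int EXAMPLE_REPLICATION_DEFAULT = 3;

  public static int readReplication(Configuration conf) {
    return conf.getInt(EXAMPLE_REPLICATION_KEY, EXAMPLE_REPLICATION_DEFAULT);
  }

  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setInt(EXAMPLE_REPLICATION_KEY, 5);
    System.out.println(readReplication(conf)); // prints 5
  }
}
{code}

The typed wrapper described in the design doc would replace the key/default 
constants with annotated fields or setters on a configuration object.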



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1468) Inject configuration values to Java objects

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1468?focusedWorklogId=232668&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-232668
 ]

ASF GitHub Bot logged work on HDDS-1468:


Author: ASF GitHub Bot
Created on: 25/Apr/19 09:25
Start Date: 25/Apr/19 09:25
Worklog Time Spent: 10m 
  Work Description: elek commented on pull request #772: HDDS-1468. Inject 
configuration values to Java objects
URL: https://github.com/apache/hadoop/pull/772
 
 
   According to the design doc in the parent issue, we would like to support 
Java configuration objects: simple POJOs whose fields/setters are annotated. As 
a first step we can introduce the OzoneConfiguration.getConfigObject() API, 
which can create the config object and inject the configuration values.

   Later we can improve it with an annotation processor that can generate 
ozone-default.xml.

   See: https://issues.apache.org/jira/browse/HDDS-1468
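
To make the injection idea concrete, here is a hedged sketch of an annotated 
POJO and a reflection-based injector in the spirit of the proposed 
getConfigObject(). Only that method name comes from this thread; the @Config 
annotation, its attributes, and the injector's signature are assumptions 
invented for illustration. The annotation is declared inline so the sketch 
compiles on its own:

{code:java}
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Map;

// Hypothetical annotation mapping a setter to a configuration key; the real
// patch may use different names and attributes.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Config {
  String key();
  String defaultValue();
}

// A plain POJO: configuration values are injected through the annotated setter.
class ReplicationConfig {
  private int factor;

  @Config(key = "example.replication", defaultValue = "3") // placeholder key
  public void setFactor(int factor) {
    this.factor = factor;
  }

  public int getFactor() {
    return factor;
  }
}

public class ConfigInjectionSketch {

  // Stand-in for the proposed OzoneConfiguration.getConfigObject(): create the
  // object, then push values from a key/value source into each annotated setter.
  static <T> T getConfigObject(Class<T> type, Map<String, String> source) throws Exception {
    T instance = type.getDeclaredConstructor().newInstance();
    for (Method m : type.getMethods()) {
      Config c = m.getAnnotation(Config.class);
      if (c != null) {
        String value = source.getOrDefault(c.key(), c.defaultValue());
        m.invoke(instance, Integer.parseInt(value)); // int-valued settings only, for brevity
      }
    }
    return instance;
  }

  public static void main(String[] args) throws Exception {
    Map<String, String> conf = Map.of("example.replication", "5");
    ReplicationConfig rc = getConfigObject(ReplicationConfig.class, conf);
    System.out.println(rc.getFactor()); // prints 5
  }
}
{code}

An annotation processor could later walk the same annotation metadata to emit 
the ozone-default.xml entries mentioned in the description.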
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 232668)
Time Spent: 10m
Remaining Estimate: 0h

> Inject configuration values to Java objects
> ---
>
> Key: HDDS-1468
> URL: https://issues.apache.org/jira/browse/HDDS-1468
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> According to the design doc in the parent issue, we would like to support 
> Java configuration objects: simple POJOs whose fields/setters are annotated. 
> As a first step we can introduce the OzoneConfiguration.getConfigObject() 
> API, which can create the config object and inject the configuration values.
> Later we can improve it with an annotation processor that can generate 
> ozone-default.xml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1468) Inject configuration values to Java objects

2019-04-25 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1468:
-
Labels: pull-request-available  (was: )

> Inject configuration values to Java objects
> ---
>
> Key: HDDS-1468
> URL: https://issues.apache.org/jira/browse/HDDS-1468
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Elek, Marton
>Assignee: Elek, Marton
>Priority: Major
>  Labels: pull-request-available
>
> According to the design doc in the parent issue, we would like to support 
> Java configuration objects: simple POJOs whose fields/setters are annotated. 
> As a first step we can introduce the OzoneConfiguration.getConfigObject() 
> API, which can create the config object and inject the configuration values.
> Later we can improve it with an annotation processor that can generate 
> ozone-default.xml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


