[jira] [Created] (HDFS-12501) Ozone: Cleanup javac issues

2017-09-19 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-12501:


 Summary: Ozone: Cleanup javac issues
 Key: HDFS-12501
 URL: https://issues.apache.org/jira/browse/HDFS-12501
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Yiqun Lin
Assignee: Yiqun Lin


There are a bunch of javac issues under the Ozone tree. We have to clean them
up before we call for a merge of this tree.






[jira] [Created] (HDFS-12500) Ozone: add logger for oz shell commands and move error stack traces to DEBUG level

2017-09-19 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12500:
--

 Summary: Ozone: add logger for oz shell commands and move error 
stack traces to DEBUG level
 Key: HDFS-12500
 URL: https://issues.apache.org/jira/browse/HDFS-12500
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang
Priority: Minor


Per the discussion in HDFS-12489 about reducing the verbosity of logs when an
exception happens, let's add a logger to {{Shell.java}} and move error stack
traces to DEBUG level.
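
As a minimal sketch of the intended pattern, assuming an slf4j logger in
{{Shell.java}} (the method shape and messages below are illustrative, not the
actual patch):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Shell {
  private static final Logger LOG = LoggerFactory.getLogger(Shell.class);

  void runCommand(Runnable command) {
    try {
      command.run();
    } catch (Exception ex) {
      // Keep the user-facing output terse...
      System.err.println("Command failed: " + ex.getMessage());
      // ...and reserve the full stack trace for DEBUG level.
      LOG.debug("Command failed with exception", ex);
    }
  }
}
{code}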






[jira] [Created] (HDFS-12499) dfs.namenode.shared.edits.dir property is currently namenode specific key

2017-09-19 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12499:
-

 Summary: dfs.namenode.shared.edits.dir property is currently 
namenode specific key
 Key: HDFS-12499
 URL: https://issues.apache.org/jira/browse/HDFS-12499
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham


HDFS + Federation cluster + QJM

The dfs.namenode.shared.edits.dir property can be set as:
1. dfs.namenode.shared.edits.dir.<nameserviceId>
2. dfs.namenode.shared.edits.dir.<nameserviceId>.<namenodeId>

Both ways of configuring are currently supported. Option 2 should not be
supported, as for a particular nameservice the quorum of journal nodes should
be the same.

This jira is to discuss whether we need to support the second way of
configuring or remove it.
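
For illustration, the two forms look like this in hdfs-site.xml (the
nameservice and namenode IDs below are hypothetical):

{code}
<!-- Option 1: per nameservice, one shared edits dir for all NNs in ns1 -->
<property>
  <name>dfs.namenode.shared.edits.dir.ns1</name>
  <value>qjournal://jn1:8485;jn2:8485;jn3:8485/ns1</value>
</property>

<!-- Option 2: per namenode; arguably should be rejected, since nn1 and nn2
     of the same nameservice must use the same journal quorum -->
<property>
  <name>dfs.namenode.shared.edits.dir.ns1.nn1</name>
  <value>qjournal://jn1:8485;jn2:8485;jn3:8485/ns1</value>
</property>
{code}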







[jira] [Created] (HDFS-12498) Journal Syncer is not started in Federated + HA cluster

2017-09-19 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDFS-12498:
-

 Summary: Journal Syncer is not started in Federated + HA cluster
 Key: HDFS-12498
 URL: https://issues.apache.org/jira/browse/HDFS-12498
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Bharat Viswanadham


Journal Syncer is not getting started in an HDFS Federated + HA cluster when
dfs.namenode.shared.edits.dir.<nameserviceId> is provided instead of
dfs.namenode.shared.edits.dir.

*Log Snippet:*

{code:java}
2017-09-19 21:42:40,598 WARN 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Could not construct 
Shared Edits Uri
2017-09-19 21:42:40,598 WARN 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Other JournalNode 
addresses not available. Journal Syncing cannot be done
2017-09-19 21:42:40,598 WARN 
org.apache.hadoop.hdfs.qjournal.server.JournalNodeSyncer: Failed to start 
SyncJournal daemon for journal ns1
{code}
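
For context, a plausible reading of the failure: the syncer resolves the
un-suffixed key, so a cluster that only sets the suffixed variant gets null
back. The lookup below is a sketch, not the actual JournalNodeSyncer code:

{code:java}
// Returns null when only dfs.namenode.shared.edits.dir.<nameserviceId> is set,
// which would lead to the "Could not construct Shared Edits Uri" warning above.
String sharedEditsDir = conf.get(
    DFSConfigKeys.DFS_NAMENODE_SHARED_EDITS_DIR_KEY);  // "dfs.namenode.shared.edits.dir"
{code}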








[jira] [Resolved] (HDFS-12483) Provide a configuration to adjust the weight of EC recovery tasks to adjust the speed of recovery

2017-09-19 Thread Lei (Eddy) Xu (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-12483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lei (Eddy) Xu resolved HDFS-12483.
--
Resolution: Duplicate

> Provide a configuration to adjust the weight of EC recovery tasks to adjust 
> the speed of recovery
> -
>
> Key: HDFS-12483
> URL: https://issues.apache.org/jira/browse/HDFS-12483
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: erasure-coding
>Affects Versions: 3.0.0-alpha4
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>Priority: Minor
>
> The relative speed of EC recovery compared to 3x replica recovery is a
> function of the EC codec, the number of sources, NIC speed, CPU speed, etc.
> Currently EC recovery has a fixed {{xmitsInProgress}} of {{max(# of
> sources, # of targets)}}, compared to {{1}} for 3x replica recovery, and the
> NN uses {{xmitsInProgress}} to decide how many recovery tasks to schedule to
> the DataNode. Thus we can add a coefficient for users to tune the weight of
> EC recovery tasks.
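
In code form, the proposal amounts to something like the following sketch (the
config key and variable names are illustrative, and this issue was resolved as
a duplicate, so any actual change landed elsewhere):

{code:java}
// Weight applied to EC reconstruction tasks; 1.0 keeps today's behavior.
float xmitWeight = conf.getFloat(
    "dfs.datanode.ec.reconstruction.xmits.weight", 1.0f);

// Today: xmits = max(# of sources, # of targets); a coefficient lets users
// scale how heavily EC recovery counts against the DataNode's xmit budget.
int xmits = (int) Math.ceil(xmitWeight * Math.max(numSources, numTargets));
{code}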






[jira] [Created] (HDFS-12497) Re-enable TestDFSStripedOutputStreamWithFailure tests

2017-09-19 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-12497:
--

 Summary: Re-enable TestDFSStripedOutputStreamWithFailure tests
 Key: HDFS-12497
 URL: https://issues.apache.org/jira/browse/HDFS-12497
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: erasure-coding
Affects Versions: 3.0.0-beta1
Reporter: Andrew Wang


We disabled this suite of tests in HDFS-12417 since they were very flaky. We 
should fix these tests and re-enable them.






[jira] [Created] (HDFS-12496) Make QuorumJournalManager timeout properties configurable

2017-09-19 Thread Ajay Kumar (JIRA)
Ajay Kumar created HDFS-12496:
-

 Summary: Make QuorumJournalManager timeout properties configurable
 Key: HDFS-12496
 URL: https://issues.apache.org/jira/browse/HDFS-12496
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ajay Kumar
Assignee: Ajay Kumar


Make QuorumJournalManager timeout properties configurable using a common key. 
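
As a sketch of what a common key could look like in hdfs-site.xml (the
property name and default below are assumptions, not the committed change):

{code}
<property>
  <name>dfs.qjm.operations.timeout</name>
  <value>60s</value>
  <description>Common timeout for QuorumJournalManager operations, replacing
    the collection of per-operation timeout properties.</description>
</property>
{code}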






[jira] [Created] (HDFS-12495) TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks fails intermittently

2017-09-19 Thread Eric Badger (JIRA)
Eric Badger created HDFS-12495:
--

 Summary: TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks 
fails intermittently
 Key: HDFS-12495
 URL: https://issues.apache.org/jira/browse/HDFS-12495
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Eric Badger
Assignee: Eric Badger


{noformat}
java.net.BindException: Problem binding to [localhost:36701] java.net.BindException: Address already in use; For more details see:  http://wiki.apache.org/hadoop/BindException
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.hadoop.ipc.Server.bind(Server.java:546)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:955)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:2655)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:968)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:367)
    at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:342)
    at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:810)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:954)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1314)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:481)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2611)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2499)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2546)
    at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2152)
    at org.apache.hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock.testPendingDeleteUnknownBlocks(TestPendingInvalidateBlock.java:175)
{noformat}






2017-09-19 Hadoop 3 release status update

2017-09-19 Thread Andrew Wang
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+3+release+status+updates

2017-09-19

Sorry for the late update. We're down to one blocker and one EC must do!
Made great progress over the last week and a bit.

We will likely cut RC0 this week.

Highlights:

   - Down to just two blocker issues!

Red flags:

   - HDFS unit tests are quite flaky. Some blockers were filed and then
   resolved or downgraded. More work to do here.

Previously tracked beta1 blockers that have been resolved or dropped:

   - HADOOP-14738 (Remove S3N and obsolete bits of S3A; rework docs):
   Committed!
   - HADOOP-14284 (Shade Guava everywhere): We resolved this since we decided
   it was unnecessary for beta1.
   - YARN-7162 (Remove XML excludes file format): Robert committed after
   review from Junping.
   - HADOOP-14847 (Remove Guava Supplier and change to java Supplier in
   AMRMClient and AMRMClientAsync): Committed!
   - HADOOP-14238 (Rechecking Guava's object is not exposed to user-facing
   API): We dropped this off the blocker list in the absence of other known
   issues.
   - HADOOP-14835 (mvn site build throws SAX errors): I committed after
   further discussion and review with Sean Mackrory and Allen. Planning to
   switch to japicmp for later releases.
   - HDFS-12218 (Rename split EC / replicated block metrics in BlockManager):
   Committed.


beta1 blockers:

   - HADOOP-14771 (hadoop-client does not include hadoop-yarn-client): This
   was committed but then reverted since it broke the build. Haibo and Sean
   are actively pressing towards a correct fix.


beta1 features:

   - Erasure coding
      - Resolved a number of must-dos:
         - HDFS-7859 (fsimage changes) was committed!
         - HDFS-12395 (edit log changes) was also committed!
         - HDFS-12218 is discussed above.
      - Remaining blockers:
         - HDFS-12447 is to refactor some of the fsimage code; Andrew needs
         to review.
      - There has also been progress cleaning up the flaky unit tests; still
      more to do.
   - Addressing incompatible changes (YARN-6142 and HDFS-11096)
      - Ray has gone through almost all the YARN protos and thinks we're okay
      to move forward.
      - I think we'll move forward without this committed, given that Sean
      has run it successfully.
   - Classpath isolation (HADOOP-11656)
      - We have just HADOOP-14771 left.
   - Compat guide (HADOOP-13714)
      - This was committed! Some follow-on work filed for GA.
   - TSv2 alpha 2
      - This was merged, no problems thus far :)

GA features:

   - Resource profiles (Wangda Tan)
      - Merge vote was sent out. Since branch-3.0 has been cut, this can be
      merged to trunk (3.1.0) and then backported once we've completed
      testing.
   - HDFS router-based federation (Chris Douglas)
      - This is like YARN federation: very separate, doesn't add new APIs,
      and runs in production at MSFT.
      - If it passes Cloudera internal integration testing, I'm fine putting
      this in for GA.
   - API-based scheduler configuration (Jonathan Hung)
      - Jonathan mentioned that his main goal is to get this in for 2.9.0,
      which seems likely to go out after 3.0.0 GA since there hasn't been any
      serious release planning yet. Jonathan said that delaying this until
      3.1.0 is fine.
   - YARN native services
      - Still not 100% clear when this will land.


[jira] [Created] (HDFS-12494) libhdfs SIGSEGV in setTLSExceptionStrings

2017-09-19 Thread John Zhuge (JIRA)
John Zhuge created HDFS-12494:
-

 Summary: libhdfs SIGSEGV in setTLSExceptionStrings
 Key: HDFS-12494
 URL: https://issues.apache.org/jira/browse/HDFS-12494
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: libhdfs
Affects Versions: 3.0.0-alpha4
Reporter: John Zhuge


A libhdfs application crashes when CLASSPATH is set, but not set properly:
{noformat}
$ export CLASSPATH=$(hadoop classpath)
$ pwd
/Users/jzhuge/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/native

$ ./test_libhdfs_ops
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x0001052968f7, pid=14147, tid=775
#
# JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build 1.7.0_79-b15)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C  [libhdfs.0.0.0.dylib+0x38f7]  setTLSExceptionStrings+0x47
#
# Core dump written. Default location: /cores/core or core.14147
#
# An error report file with more information is saved as:
# /Users/jzhuge/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid14147.log
#
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Abort trap: 6 (core dumped)

[jzhuge@jzhuge-MBP native]((be32925fff5...) *+)$ lldb -c /cores/core.14147
(lldb) target create --core "/cores/core.14147"
warning: (x86_64) /cores/core.14147 load command 549 LC_SEGMENT_64 has a fileoff + filesize (0x14627f000) that extends beyond the end of the file (0x14627e000), the segment will be truncated to match
warning: (x86_64) /cores/core.14147 load command 550 LC_SEGMENT_64 has a fileoff (0x14627f000) that extends beyond the end of the file (0x14627e000), ignoring this section
Core file '/cores/core.14147' (x86_64) was loaded.
(lldb) bt
* thread #1, stop reason = signal SIGSTOP
  * frame #0: 0x7fffcf89ad42 libsystem_kernel.dylib`__pthread_kill + 10
    frame #1: 0x7fffcf988457 libsystem_pthread.dylib`pthread_kill + 90
    frame #2: 0x7fffcf800420 libsystem_c.dylib`abort + 129
    frame #3: 0x0001056cd5fb libjvm.dylib`os::abort(bool) + 25
    frame #4: 0x0001057d98fc libjvm.dylib`VMError::report_and_die() + 2308
    frame #5: 0x0001056cefb5 libjvm.dylib`JVM_handle_bsd_signal + 1083
    frame #6: 0x7fffcf97bb3a libsystem_platform.dylib`_sigtramp + 26
    frame #7: 0x0001052968f8 libhdfs.0.0.0.dylib`setTLSExceptionStrings(rootCause=0x, stackTrace=0x) at jni_helper.c:589 [opt]
    frame #8: 0x0001052954f0 libhdfs.0.0.0.dylib`printExceptionAndFreeV(env=0x7ffaff0019e8, exc=0x7ffafec04140, noPrintFlags=, fmt="loadFileSystems", ap=) at exception.c:183 [opt]
    frame #9: 0x0001052956bb libhdfs.0.0.0.dylib`printExceptionAndFree(env=, exc=, noPrintFlags=, fmt=) at exception.c:213 [opt]
    frame #10: 0x0001052967f4 libhdfs.0.0.0.dylib`getJNIEnv [inlined] getGlobalJNIEnv at jni_helper.c:463 [opt]
    frame #11: 0x00010529664f libhdfs.0.0.0.dylib`getJNIEnv at jni_helper.c:528 [opt]
    frame #12: 0x0001052975eb libhdfs.0.0.0.dylib`hdfsBuilderConnect(bld=0x7ffafed0) at hdfs.c:693 [opt]
    frame #13: 0x00010528be30 test_libhdfs_ops`main(argc=, argv=) at test_libhdfs_ops.c:91 [opt]
    frame #14: 0x7fffcf76c235 libdyld.dylib`start + 1
(lldb) f 10
libhdfs.0.0.0.dylib was compiled with optimization - stepping may behave oddly; variables may not be available.
frame #10: 0x0001052967f4 libhdfs.0.0.0.dylib`getJNIEnv [inlined] getGlobalJNIEnv at jni_helper.c:463 [opt]
   460   "org/apache/hadoop/fs/FileSystem",
   461   "loadFileSystems", "()V");
   462  if (jthr) {
-> 463  printExceptionAndFree(env, jthr, PRINT_EXC_ALL, "loadFileSystems");
   464  }
   465  }
   466  else {
(lldb) f 7
frame #7: 0x0001052968f8 libhdfs.0.0.0.dylib`setTLSExceptionStrings(rootCause=0x, stackTrace=0x) at jni_helper.c:589 [opt]
   586  mutexUnlock();
   587  }
   588
-> 589  free(state->lastExceptionRootCause);
   590  free(state->lastExceptionStackTrace);
   591  state->lastExceptionRootCause = (char*)rootCause;
   592  state->lastExceptionStackTrace = (char*)stackTrace;
(lldb) p state
(ThreadLocalState *) $0 = 0x
{noformat}

The correct way to set CLASSPATH is shown below, because libhdfs does not
support wildcards in CLASSPATH:
{noformat}
$ export CLASSPATH=$(hadoop classpath --glob)
{noformat}
Filed HDFS-12491 (Support wildcard in CLASSPATH for libhdfs).




[jira] [Created] (HDFS-12493) Correct javadoc for BackupNode#startActiveServices

2017-09-19 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12493:


 Summary: Correct javadoc for BackupNode#startActiveServices
 Key: HDFS-12493
 URL: https://issues.apache.org/jira/browse/HDFS-12493
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
Priority: Trivial


The following javadoc warning needs to be fixed for
{{BackupNode#startActiveServices}}: the javadoc links are not resolved
correctly.

{code}
/**
 * Start services for BackupNode.
 * 
 * The following services should be muted
 * (not run or not pass any control commands to DataNodes)
 * on BackupNode:
 * {@link LeaseManager.Monitor} protected by SafeMode.
 * {@link BlockManager.RedundancyMonitor} protected by SafeMode.
 * {@link HeartbeatManager.Monitor} protected by SafeMode.
 * {@link DatanodeAdminManager.Monitor} need to prohibit refreshNodes().
 * {@link PendingReconstructionBlocks.PendingReconstructionMonitor}
 * harmless, because RedundancyMonitor is muted.
 */
@Override
public void startActiveServices() throws IOException {
  try {
namesystem.startActiveServices();
  } catch (Throwable t) {
doImmediateShutdown(t);
  }
}
{code}
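
One possible fix, sketched below, is to fully qualify the inner-class
references so the javadoc tool can resolve them (whether the actual patch
qualifies the names, adds imports, or de-links them instead is not known
here):

{code}
 * {@link org.apache.hadoop.hdfs.server.namenode.LeaseManager.Monitor}
 * protected by SafeMode.
 * {@link org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.RedundancyMonitor}
 * protected by SafeMode.
{code}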






Re: qbt is failing///RE: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-09-19 Thread Allen Wittenauer

> On Sep 19, 2017, at 6:35 AM, Brahma Reddy Battula wrote:
> 
> qbt has been failing for two days with the following errors, any idea on this?

Nothing to be too concerned about.

This is what it looks like when a build server gets bounced or crashed. The
INFRA team knows our jobs take forever, so they rarely wait for them to finish
if they are doing upgrades. They've been doing that work lately; you can
follow the action on builds@.








[jira] [Created] (HDFS-12492) Ozone: ListVolume output misses some attributes

2017-09-19 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12492:
--

 Summary: Ozone: ListVolume output misses some attributes
 Key: HDFS-12492
 URL: https://issues.apache.org/jira/browse/HDFS-12492
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang


When doing a listVolume call, we get output like the following:

{noformat}
[ {
  "owner" : {
    "name" : "wwei"
  },
  "quota" : {
    "unit" : "TB",
    "size" : 1048576
  },
  "volumeName" : "vol-0-84022",
  "createdOn" : "Mon, 18 Sep 2017 03:09:46 GMT",
  "createdBy" : null,
  "bytesUsed" : 0,
  "bucketCount" : 0
{noformat}

Values for *createdOn*, *createdBy*, *bytesUsed*, and *bucketCount* are all
missing.






[jira] [Created] (HDFS-12491) Support wildcard in CLASSPATH for libhdfs

2017-09-19 Thread John Zhuge (JIRA)
John Zhuge created HDFS-12491:
-

 Summary: Support wildcard in CLASSPATH for libhdfs
 Key: HDFS-12491
 URL: https://issues.apache.org/jira/browse/HDFS-12491
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: libhdfs
Affects Versions: 2.8.0
Reporter: John Zhuge


According to the libhdfs doc, wildcards in CLASSPATH are not supported:

bq. The most common problem is the CLASSPATH is not set properly when calling a
program that uses libhdfs. Make sure you set it to all the Hadoop jars needed
to run Hadoop itself as well as the right configuration directory containing
hdfs-site.xml. It is not valid to use wildcard syntax for specifying multiple
jars. It may be useful to run hadoop classpath --glob or hadoop classpath --jar
 to generate the correct classpath for your deployment.






[jira] [Created] (HDFS-12490) Ozone: OzoneClient: OzoneBucket should have information about the bucket creation time

2017-09-19 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-12490:


 Summary: Ozone: OzoneClient: OzoneBucket should have information 
about the bucket creation time
 Key: HDFS-12490
 URL: https://issues.apache.org/jira/browse/HDFS-12490
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: HDFS-7240


OzoneBucket should have information about the bucket creation time.

OzoneFileSystem needs creation time to display the file status information for 
the root of the filesystem.
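
A sketch of the accessor this implies on {{OzoneBucket}}, and of how
OzoneFileSystem might consume it for the root status (the names and the
FileStatus call below are assumptions):

{code:java}
// In OzoneBucket: expose when the bucket was created.
private long creationTime;  // millis since epoch, populated from bucket info

public long getCreationTime() {
  return creationTime;
}

// In OzoneFileSystem, the root directory status could then carry a real
// modification time instead of a placeholder:
//   new FileStatus(0, true, 1, 0, bucket.getCreationTime(), rootPath);
{code}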






qbt is failing///RE: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-09-19 Thread Brahma Reddy Battula
qbt has been failing for two days with the following errors, any idea on this?




cd /testptch/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
/opt/maven/bin/mvn -Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-trunk-full-1 -Ptest-patch -Pparallel-tests -Pshelltest -Pnative -Drequire.snappy -Drequire.openssl -Drequire.fuse -Drequire.test.libhadoop -Pyarn-ui clean test -fae > /testptch/patchprocess/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt 2>&1
FATAL: command execution failed
Command close created at
    at hudson.remoting.Command.<init>(Command.java:60)
    at hudson.remoting.Channel$CloseCommand.<init>(Channel.java:1123)
    at hudson.remoting.Channel$CloseCommand.<init>(Channel.java:1121)
    at hudson.remoting.Channel.close(Channel.java:1281)
    at hudson.remoting.Channel.close(Channel.java:1263)
    at hudson.remoting.Channel$CloseCommand.execute(Channel.java:1128)
Caused: hudson.remoting.Channel$OrderlyShutdown
    at hudson.remoting.Channel$CloseCommand.execute(Channel.java:1129)
    at hudson.remoting.Channel$1.handle(Channel.java:527)
    at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:83)
Caused: java.io.IOException: Backing channel 'H10' is disconnected.
    at hudson.remoting.RemoteInvocationHandler.channelOrFail(RemoteInvocationHandler.java:192)
    at hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:257)
    at com.sun.proxy.$Proxy125.isAlive(Unknown Source)
    at hudson.Launcher$RemoteLauncher$ProcImpl.isAlive(Launcher.java:1043)
    at hudson.Launcher$RemoteLauncher$ProcImpl.join(Launcher.java:1035)
    at hudson.tasks.CommandInterpreter.join(CommandInterpreter.java:155)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:109)
    at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:66)
    at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
    at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:735)
    at hudson.model.Build$BuildExecution.build(Build.java:206)
    at hudson.model.Build$BuildExecution.doRun(Build.java:163)
    at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:490)
    at hudson.model.Run.execute(Run.java:1735)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:405)
Build step 'Execute shell' marked build as failure
ERROR: Step 'Publish Checkstyle analysis results' failed: no workspace for hadoop-qbt-trunk-java8-linux-x86 #526
ERROR: Step 'Publish FindBugs analysis results' failed: no workspace for hadoop-qbt-trunk-java8-linux-x86 #526
ERROR: Step 'Archive the artifacts' failed: no workspace for hadoop-qbt-trunk-java8-linux-x86 #526
ERROR: Step 'Publish JUnit test result report' failed: no workspace for hadoop-qbt-trunk-java8-linux-x86 #526
ERROR: Build step failed with exception
java.lang.NullPointerException
    at hudson.plugins.violations.ViolationsPublisher.perform(ViolationsPublisher.java:74)
    at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
    at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:735)
    at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:676)
    at hudson.model.Build$BuildExecution.post2(Build.java:186)
    at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:621)
    at hudson.model.Run.execute(Run.java:1760)
    at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
    at hudson.model.ResourceController.execute(ResourceController.java:97)
    at hudson.model.Executor.run(Executor.java:405)
Build step 'Report Violations' marked build as failure

--Brahma Reddy Battula

-Original Message-
From: Apache Jenkins Server [mailto:jenk...@builds.apache.org] 
Sent: 19 September 2017 15:07
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/527/

[Sep 18, 2017 10:07:12 AM] (kai.zheng) HDFS-12460. Make addErasureCodingPolicy an idempotent operation.
[Sep 18, 2017 3:16:09 PM] (jlowe) YARN-7192. Add a pluggable StateMachine Listener that is notified of NM
[Sep 18, 2017 4:53:24 PM] (arp) HDFS-12470. DiskBalancer: Some tests create plan files under system
[Sep 18, 2017 5:32:08 PM] (rkanter) Revert "YARN-7162. Remove XML excludes file format (rkanter)" - wrong
[Sep 18, 2017 5:40:06 PM] (rkanter) MAPREDUCE-6954. Disable erasure coding for files that are uploaded to
[Sep 18, 

[jira] [Created] (HDFS-12489) Ozone: OzoneRestClientException swallows exceptions which makes client hard to debug failures

2017-09-19 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12489:
--

 Summary: Ozone: OzoneRestClientException swallows exceptions which 
makes client hard to debug failures 
 Key: HDFS-12489
 URL: https://issues.apache.org/jira/browse/HDFS-12489
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang


There are multiple try-catch places that swallow exceptions when transforming
some other exception into {{OzoneRestClientException}}. As a result, when
clients run into such code paths, they lose track of what was going on, which
makes debugging extremely difficult. See the example below:

{code}
bin/hdfs oz -listBucket  http://15oz1.fyre.ibm.com:9864/vol-0-84022 -user wwei
Command Failed : {"httpCode":0,"shortMessage":"Read timed out","resource":null,"message":"Read timed out","requestID":null,"hostName":null}
{code}

The returned message doesn't help much in debugging where and how the read
timed out.
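
A minimal sketch of the fix pattern, assuming {{OzoneRestClientException}}
follows the usual Throwable conventions (the surrounding request call is an
illustrative stand-in):

{code:java}
try {
  executeHttpRequest();  // stand-in for the REST call that timed out
} catch (IOException e) {
  // Instead of: throw new OzoneRestClientException(e.getMessage());
  // chain the cause so the original stack trace survives to the caller.
  OzoneRestClientException ozoneEx =
      new OzoneRestClientException(e.getMessage());
  ozoneEx.initCause(e);
  throw ozoneEx;
}
{code}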






[jira] [Created] (HDFS-12488) Ozone: OzoneRestClient has no notion of configuration

2017-09-19 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12488:
--

 Summary: Ozone: OzoneRestClient has no notion of configuration
 Key: HDFS-12488
 URL: https://issues.apache.org/jira/browse/HDFS-12488
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Weiwei Yang


When I test ozone on a 15-node cluster with millions of keys, responses of the
rest client become slower. The following call times out after the default 5s:

{code}
bin/hdfs oz -listBucket  http://15oz1.fyre.ibm.com:9864/vol-0-84022 -user wwei
Command Failed : {"httpCode":0,"shortMessage":"Read timed out","resource":null,"message":"Read timed out","requestID":null,"hostName":null}
{code}

Then I increased the timeout by explicitly setting the following property in
{{ozone-site.xml}}:

{code}
<property>
  <name>ozone.client.socket.timeout.ms</name>
  <value>1</value>
</property>
{code}

but this doesn't work and rest clients are still created with the default *5s*
timeout. This needs to be fixed. Just like {{DFSClient}}, we should make
{{OzoneRestClient}} configuration-aware, so that clients can adjust client
configuration on demand.
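
A sketch of what configuration awareness could look like; the constructor
shape is an assumption, and only the property name comes from the report
above:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;

public class OzoneRestClient {
  private final int socketTimeoutMs;

  // Accept a Configuration the way DFSClient does, instead of a hard-coded 5s.
  public OzoneRestClient(URI endpoint, Configuration conf) {
    this.socketTimeoutMs =
        conf.getInt("ozone.client.socket.timeout.ms", 5000);
    // ... build the underlying HTTP client with socketTimeoutMs ...
  }
}
{code}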






[jira] [Created] (HDFS-12487) FsDatasetSpi.isValidBlock() lacks null pointer check inside and neither do the callers

2017-09-19 Thread liumi (JIRA)
liumi created HDFS-12487:


 Summary: FsDatasetSpi.isValidBlock() lacks null pointer check 
inside and neither do the callers
 Key: HDFS-12487
 URL: https://issues.apache.org/jira/browse/HDFS-12487
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: balancer & mover, diskbalancer
Affects Versions: 3.0.0-alpha1
 Environment: CentOS 6.8 x64
CPU:4 core
Memory:16GB
Hadoop: Release 3.0.0-alpha4

Reporter: liumi
 Fix For: 3.0.0-alpha4


BlockIteratorImpl.nextBlock() looks for blocks in the source volume; if there
are no blocks any more, it returns null up to DiskBalancer.getBlockToCopy(),
which then checks whether it is a valid block.
When I looked into FsDatasetSpi.isValidBlock(), I found that it doesn't check
for a null pointer! In fact, we first need to check whether the block is null,
or an exception will occur.
This bug is hard to find, because the DiskBalancer rarely copies all the data
of one volume to others. Even when it does copy all the data of one volume to
other volumes, by the time the bug occurs the copy process has already
finished.
However, when we try to copy all the data of two or more volumes to other
volumes in more than one step, the thread will be shut down, which is caused
by the bug above.
The bug can be fixed in two ways:
1) Check for null before the call of FsDatasetSpi.isValidBlock()
2) Check for null inside the implementation of
FsDatasetSpi.isValidBlock()
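
A sketch of option 2), assuming an FsDatasetImpl-style implementation (the
delegated validity check is illustrative):

{code:java}
@Override // FsDatasetSpi
public boolean isValidBlock(ExtendedBlock b) {
  // Guard against the null handed up by BlockIteratorImpl.nextBlock()
  // when the source volume has run out of blocks.
  if (b == null) {
    return false;
  }
  return isValid(b, ReplicaState.FINALIZED);  // illustrative internal check
}
{code}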






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-09-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/527/

[Sep 18, 2017 10:07:12 AM] (kai.zheng) HDFS-12460. Make addErasureCodingPolicy an idempotent operation.
[Sep 18, 2017 3:16:09 PM] (jlowe) YARN-7192. Add a pluggable StateMachine Listener that is notified of NM
[Sep 18, 2017 4:53:24 PM] (arp) HDFS-12470. DiskBalancer: Some tests create plan files under system
[Sep 18, 2017 5:32:08 PM] (rkanter) Revert "YARN-7162. Remove XML excludes file format (rkanter)" - wrong
[Sep 18, 2017 5:40:06 PM] (rkanter) MAPREDUCE-6954. Disable erasure coding for files that are uploaded to
[Sep 18, 2017 6:26:44 PM] (Arun Suresh) YARN-7199. Fix
[Sep 18, 2017 9:04:05 PM] (xgong) YARN-6570. No logs were found for running application, running
[Sep 18, 2017 9:25:35 PM] (haibochen) HADOOP-14771. hadoop-client does not include hadoop-yarn-client. (Ajay
[Sep 18, 2017 10:04:43 PM] (jlowe) MAPREDUCE-6958. Shuffle audit logger should log size of shuffle
[Sep 18, 2017 10:13:42 PM] (wang) HADOOP-14835. mvn site build throws SAX errors. Contributed by Andrew
[Sep 18, 2017 10:49:31 PM] (Arun Suresh) YARN-7203. Add container ExecutionType into ContainerReport. (Botong
[Sep 19, 2017 2:05:54 AM] (aajisaka) MAPREDUCE-6947. Moving logging APIs over to slf4j in


[Error replacing 'FILE' - Workspace is not accessible]
