[jira] [Created] (HDFS-11913) Ozone: TestKeySpaceManager#testDeleteVolume fails

2017-06-01 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-11913:
--

 Summary: Ozone: TestKeySpaceManager#testDeleteVolume fails
 Key: HDFS-11913
 URL: https://issues.apache.org/jira/browse/HDFS-11913
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ozone
Reporter: Weiwei Yang
Assignee: Weiwei Yang


HDFS-11774 introduced a unit test failure in {{TestKeySpaceManager#testDeleteVolume}}; the error is as below:

{noformat}
java.util.NoSuchElementException
 at org.fusesource.leveldbjni.internal.JniDBIterator.peekNext(JniDBIterator.java:84)
 at org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:98)
 at org.fusesource.leveldbjni.internal.JniDBIterator.next(JniDBIterator.java:45)
 at org.apache.hadoop.ozone.ksm.MetadataManagerImpl.isVolumeEmpty(MetadataManagerImpl.java:221)
 at org.apache.hadoop.ozone.ksm.VolumeManagerImpl.deleteVolume(VolumeManagerImpl.java:294)
 at org.apache.hadoop.ozone.ksm.KeySpaceManager.deleteVolume(KeySpaceManager.java:340)
 at org.apache.hadoop.ozone.protocolPB.KeySpaceManagerProtocolServerSideTranslatorPB.deleteVolume(KeySpaceManagerProtocolServerSideTranslatorPB.java:200)
 at org.apache.hadoop.ozone.protocol.proto.KeySpaceManagerProtocolProtos$KeySpaceManagerService$2.callBlockingMethod(KeySpaceManagerProtocolProtos.java:22742)
 at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:522)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:867)
 at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:813)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:422)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2659)
{noformat}

This is caused by buggy code in {{MetadataManagerImpl#isVolumeEmpty}}; there are two issues that need to be fixed:
# Calling next on the iterator throws this exception when there is no next element, so the check always fails when a volume is empty.
# The code checks whether the first bucket name starts with "/volume_name". This returns a wrong value when several empty volumes share the same prefix, e.g. "/volA/" and "/volAA/". In such a case {{isVolumeEmpty}} returns false because the next element after "/volA/" is not a bucket; it is another volume, "/volAA/", which matches the prefix.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-11912) Add a snapshot unit test with randomized file IO operations

2017-06-01 Thread George Huang (JIRA)
George Huang created HDFS-11912:
---

 Summary: Add a snapshot unit test with randomized file IO 
operations
 Key: HDFS-11912
 URL: https://issues.apache.org/jira/browse/HDFS-11912
 Project: Hadoop HDFS
  Issue Type: Test
  Components: hdfs
Reporter: George Huang
Priority: Minor


Adding a snapshot unit test with randomized file IO operations.






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-06-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/332/

[May 31, 2017 3:09:08 PM] (brahma) HDFS-11901. Modifier 'static' is redundant 
for inner enums. Contributed
[May 31, 2017 3:18:42 PM] (jeagles) YARN-6497. Method length of 
ResourceManager#serviceInit() is too long
[May 31, 2017 3:55:03 PM] (kihwal) HDFS-5042. Completed files lost after power 
failure. Contributed by
[May 31, 2017 4:32:32 PM] (nroberts) YARN-6649. RollingLevelDBTimelineServer 
throws RuntimeException if
[May 31, 2017 10:48:04 PM] (templedf) YARN-6246. Identifying starved apps does 
not need the scheduler
[May 31, 2017 10:57:48 PM] (templedf) HADOOP-9849. License information is 
missing for native CRC32 code
[Jun 1, 2017 4:08:01 AM] (aajisaka) HADOOP-14466. Remove useless document from
[Jun 1, 2017 4:35:14 AM] (aajisaka) HADOOP-13921. Remove log4j classes from 
JobConf.




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.sftp.TestSFTPFileSystem
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure120 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer
   hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.hdfs.TestNNBench 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
   org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/332/artifact/out/patch-mvninstall-root.txt
  [496K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/332/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/332/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/332/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/332/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/332/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [140K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/332/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [972K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/332/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   

[jira] [Created] (HDFS-11911) SnapshotDiff should maintain the order of file/dir creation and deletion

2017-06-01 Thread Manoj Govindassamy (JIRA)
Manoj Govindassamy created HDFS-11911:
-

 Summary: SnapshotDiff should maintain the order of file/dir 
creation and deletion
 Key: HDFS-11911
 URL: https://issues.apache.org/jira/browse/HDFS-11911
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs, snapshots
Affects Versions: 3.0.0-alpha1
Reporter: Manoj Govindassamy
Assignee: Manoj Govindassamy


{{DirectoryWithSnapshotFeature}} maintains separate lists for CREATED and
DELETED children, but the ordering of these creation and deletion events is not
maintained. Assume a case like the one below, where time grows downwards...
{noformat}
|
+  CREATE File-1
|
+ Snap S1 created
|
+ DELETE File-1
|
+ Snap S2 created
|
+ CREATE File-1
|
+ Snap S3 created
|
|
V
{noformat} 

The snapshot diff report takes in the {{DirectoryWithSnapshotFeature}} diff
entries and just prints all the creations first and then the deletions, giving
the perception that file-1 was created first and then deleted. But after S3,
file-1 is still available.

{noformat}
The difference between snapshot S1 and snapshot S3 under the directory /:
M   .
+   ./file-1
-   ./file-1
{noformat}

Can we have DirectoryWithSnapshotFeature maintain the diff entries ordered by 
time or sequence? 
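The ordering loss above can be sketched with a toy event list; the types and method names below are illustrative, not the actual {{DirectoryWithSnapshotFeature}} code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class DiffOrdering {
    // Illustrative event model for entries in a snapshot diff.
    public static class Event {
        final String op;    // "CREATE" or "DELETE"
        final String name;
        public Event(String op, String name) { this.op = op; this.name = name; }
    }

    // Current rendering: all creations first, then all deletions.
    public static List<String> unorderedReport(List<Event> events) {
        List<String> out = new ArrayList<>();
        for (Event e : events) if (e.op.equals("CREATE")) out.add("+ " + e.name);
        for (Event e : events) if (e.op.equals("DELETE")) out.add("- " + e.name);
        return out;
    }

    // Proposed rendering: keep the original event order.
    public static List<String> orderedReport(List<Event> events) {
        List<String> out = new ArrayList<>();
        for (Event e : events) {
            out.add((e.op.equals("CREATE") ? "+ " : "- ") + e.name);
        }
        return out;
    }

    public static void main(String[] args) {
        // Between S1 and S3: file-1 was deleted, then recreated.
        List<Event> events = Arrays.asList(
            new Event("DELETE", "./file-1"),
            new Event("CREATE", "./file-1"));
        System.out.println(unorderedReport(events)); // [+ ./file-1, - ./file-1]
        System.out.println(orderedReport(events));   // [- ./file-1, + ./file-1]
    }
}
```

The unordered report reads as "created, then deleted", while the ordered report correctly shows a delete followed by a recreate, matching the state after S3.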







Re: Jenkins failures

2017-06-01 Thread Xiao Chen
There were some Maven-caused failures last night, which should be fixed by
INFRA-14261.
Saw another error just now, but not sure if that's just my run or something
global.

Thanks,

-Xiao

On Thu, Jun 1, 2017 at 10:12 AM, Anu Engineer wrote:

> Scratch that , it looks like Jenkins is just really slow in picking up the
> patches. Failures are all normal.
>
> Thanks
> Anu
>
>
>
>
>
> On 6/1/17, 10:04 AM, "Anu Engineer"  wrote:
>
> >Hi All,
> >
> >Looks like we are having failures in the Jenkins pipeline. Would someone
> with access to build machines be able to take a look ? Not able to see
> human readable build logs from builds.apache.org.
> >I can see a message saying builds have been broken since build #19584.
> >
> >Thanks in advance
> >Anu
> >
>


Re: Jenkins failures

2017-06-01 Thread Anu Engineer
Scratch that, it looks like Jenkins is just really slow in picking up the
patches. Failures are all normal.

Thanks
Anu





On 6/1/17, 10:04 AM, "Anu Engineer"  wrote:

>Hi All,
>
>Looks like we are having failures in the Jenkins pipeline. Would someone with 
>access to build machines be able to take a look ? Not able to see human 
>readable build logs from builds.apache.org.
>I can see a message saying builds have been broken since build #19584.
>
>Thanks in advance
>Anu
>


Jenkins failures

2017-06-01 Thread Anu Engineer
Hi All,

Looks like we are having failures in the Jenkins pipeline. Would someone with
access to the build machines be able to take a look? Not able to see
human-readable build logs from builds.apache.org.
I can see a message saying builds have been broken since build #19584.

Thanks in advance
Anu



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-06-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/421/

[May 31, 2017 10:45:35 AM] (vvasudev) YARN-6366. Refactor the NodeManager 
DeletionService to support
[May 31, 2017 3:09:08 PM] (brahma) HDFS-11901. Modifier 'static' is redundant 
for inner enums. Contributed
[May 31, 2017 3:18:42 PM] (jeagles) YARN-6497. Method length of 
ResourceManager#serviceInit() is too long
[May 31, 2017 3:55:03 PM] (kihwal) HDFS-5042. Completed files lost after power 
failure. Contributed by
[May 31, 2017 4:32:32 PM] (nroberts) YARN-6649. RollingLevelDBTimelineServer 
throws RuntimeException if
[May 31, 2017 10:48:04 PM] (templedf) YARN-6246. Identifying starved apps does 
not need the scheduler
[May 31, 2017 10:57:48 PM] (templedf) HADOOP-9849. License information is 
missing for native CRC32 code
[Jun 1, 2017 4:08:01 AM] (aajisaka) HADOOP-14466. Remove useless document from
[Jun 1, 2017 4:35:14 AM] (aajisaka) HADOOP-13921. Remove log4j classes from 
JobConf.




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 351] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet 
iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() 

[jira] [Created] (HDFS-11910) Ozone:KSM: Add setVolumeAcls to allow adding/removing acls from a KSM volume

2017-06-01 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDFS-11910:


 Summary: Ozone:KSM: Add setVolumeAcls to allow adding/removing 
acls from a KSM volume
 Key: HDFS-11910
 URL: https://issues.apache.org/jira/browse/HDFS-11910
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


Creating a KSM volume sets the acls for the user creating the volume; however,
it would be desirable to have setVolumeAcls to change the set of acls for a
volume.
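A hypothetical in-memory model of the proposed call; the real KSM protocol, acl representation, and class names may differ, and plain strings stand in for acl objects:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class VolumeAclStore {
    private final Map<String, Set<String>> aclsByVolume = new HashMap<>();

    // createVolume today: only the creator's acl is set.
    public void createVolume(String volume, String creatorAcl) {
        Set<String> acls = new HashSet<>();
        acls.add(creatorAcl);
        aclsByVolume.put(volume, acls);
    }

    // Proposed setVolumeAcls: add and remove acls after creation.
    public void setVolumeAcls(String volume, List<String> toAdd,
                              List<String> toRemove) {
        Set<String> acls = aclsByVolume.get(volume);
        if (acls == null) {
            throw new IllegalArgumentException("no such volume: " + volume);
        }
        acls.addAll(toAdd);
        acls.removeAll(toRemove);
    }

    public Set<String> getVolumeAcls(String volume) {
        return aclsByVolume.get(volume);
    }
}
```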






[jira] [Created] (HDFS-11909) Ozone: KSM : Support for simulated file system operations

2017-06-01 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-11909:
---

 Summary: Ozone: KSM :  Support for simulated file system operations
 Key: HDFS-11909
 URL: https://issues.apache.org/jira/browse/HDFS-11909
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer


This JIRA adds a proposal that makes it easy to implement OzoneFileSystem. It
makes the directory and file list operations simpler.



