wiki.apache.org is no longer editable

2017-05-31 Thread Akira Ajisaka

Hi folks,

https://wiki.apache.org/hadoop/ is no longer editable.
If you want to edit a wiki page in wiki.apache.org,
you need to migrate the page to
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Home.

If you want to edit the wiki, please tell me your account
for cwiki.apache.org.

Some error messages refer to the wiki pages. We need to update those error
messages once the pages are migrated.

Regards,
Akira


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14474) Use OpenJDK 7 instead of Oracle JDK 7 to avoid oracle-java7-installer failures

2017-05-31 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-14474:
--

 Summary: Use OpenJDK 7 instead of Oracle JDK 7 to avoid 
oracle-java7-installer failures
 Key: HADOOP-14474
 URL: https://issues.apache.org/jira/browse/HADOOP-14474
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.3, 2.8.0
Reporter: Akira Ajisaka


Oracle recently changed the download link for Oracle JDK 7, which is why 
oracle-java7-installer fails. Precommit jobs for branch-2* are failing because 
of this.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Created] (HADOOP-14473) Optimize NativeAzureFileSystem::seek for forward seeks

2017-05-31 Thread Rajesh Balamohan (JIRA)
Rajesh Balamohan created HADOOP-14473:
-

 Summary: Optimize NativeAzureFileSystem::seek for forward seeks
 Key: HADOOP-14473
 URL: https://issues.apache.org/jira/browse/HADOOP-14473
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Reporter: Rajesh Balamohan
Priority: Minor


{{NativeAzureFileSystem::seek()}} closes and re-opens the input stream 
regardless of whether the seek is forward or backward. It would be beneficial to 
re-open the stream only on backward seeks, and to skip ahead within the open 
stream for forward seeks.

https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java#L889
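The idea can be sketched as follows. This is an illustrative toy, not the actual NativeAzureFileSystem code: a byte[]-backed stream stands in for the remote blob, and all names ({{SeekableSketch}}, {{reopens}}) are made up for the example. Forward seeks consume bytes from the already-open stream; only backward seeks close and re-open it.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch only: forward seeks skip within the open stream,
// backward seeks close and re-open. A byte[] stands in for the remote blob.
class SeekableSketch {
    private final byte[] data;   // stand-in for the remote object
    private InputStream in;
    private long pos;
    int reopens = 0;             // counts how often we had to re-open

    SeekableSketch(byte[] data) {
        this.data = data;
        this.in = new ByteArrayInputStream(data);
    }

    void seek(long target) throws IOException {
        if (target >= pos) {
            // forward seek: skip ahead in the current stream
            long toSkip = target - pos;
            while (toSkip > 0) {
                long skipped = in.skip(toSkip);
                if (skipped <= 0) {
                    break; // skip failed; fall back to re-opening below
                }
                toSkip -= skipped;
            }
            if (toSkip == 0) {
                pos = target;
                return;
            }
        }
        // backward seek (or failed skip): close and re-open at target
        in.close();
        in = new ByteArrayInputStream(data);
        in.skip(target);
        reopens++;
        pos = target;
    }

    int read() throws IOException {
        pos++;
        return in.read();
    }
}
```

With this shape, a sequence of forward seeks never re-opens the stream, which is the saving the issue is after.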






[jira] [Created] (HADOOP-14472) Azure: TestReadAndSeekPageBlobAfterWrite fails intermittently

2017-05-31 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14472:
--

 Summary: Azure: TestReadAndSeekPageBlobAfterWrite fails 
intermittently
 Key: HADOOP-14472
 URL: https://issues.apache.org/jira/browse/HADOOP-14472
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Reporter: Mingliang Liu


Reported by [HADOOP-14461]
{code}
testManySmallWritesWithHFlush(org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite)
  Time elapsed: 1.051 sec  <<< FAILURE!
java.lang.AssertionError: hflush duration of 13, less than minimum expected of 
20
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite.writeAndReadOneFile(TestReadAndSeekPageBlobAfterWrite.java:286)
at 
org.apache.hadoop.fs.azure.TestReadAndSeekPageBlobAfterWrite.testManySmallWritesWithHFlush(TestReadAndSeekPageBlobAfterWrite.java:247)
{code}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2017-05-31 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/

[May 30, 2017 5:07:58 PM] (brahma) HADOOP-14456. Modifier 'static' is redundant 
for inner enums.
[May 30, 2017 6:10:12 PM] (lei) HDFS-11659. 
TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten fail
[May 30, 2017 11:58:15 PM] (haibochen) YARN-6477. Dispatcher no longer needs 
the raw types suppression. (Maya
[May 31, 2017 10:45:35 AM] (vvasudev) YARN-6366. Refactor the NodeManager 
DeletionService to support




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.sftp.TestSFTPFileSystem 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.server.namenode.TestProcessCorruptBlocks 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.fs.http.server.TestHttpFSServerNoXAttrs 
   hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestAMRMProxy 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.yarn.sls.appmaster.TestAMSimulator 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands 
   org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA 
   org.apache.hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-mvninstall-root.txt
  [492K]

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-compile-root.txt
  [20K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-compile-root.txt
  [20K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-compile-root.txt
  [20K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-assemblies.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [144K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [900K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [56K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/331/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   

Re: s3a hadoop 2.8.0 tests vs 2.7.3

2017-05-31 Thread Vasu Kulkarni
Thanks Steve

On Wed, May 31, 2017 at 4:18 AM, Steve Loughran wrote:

> see also
>
> https://github.com/apache/hadoop/blob/trunk/hadoop-
> tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
>
>
> On 26 May 2017, at 23:22, Vasu Kulkarni  wrote:
>
> Thanks Liu
>
> On Fri, May 26, 2017 at 2:42 PM, Mingliang Liu  wrote:
>
> Hi,
>
> Many of the S3A tests have been moved to "integration tests", whose names
> start with "ITestS3A". Moreover, the Maven phase for those tests is now
> "verify" instead of "test".
>
> So, you can run "mvn -Dit.test='ITestS3A*' verify" for the integration
> tests (and unit tests). "mvn test" will run unit tests only. Make sure the
> credentials are provided.
>
> By the way, upgrading from 2.7 to 2.8 is a smart choice from S3A point of
> view.
>
> https://github.com/apache/hadoop/blob/branch-2.8.1/
> hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-
> aws/index.md#running-the-tests
>
> L
>
> On May 26, 2017, at 9:00 AM, Vasu Kulkarni  wrote:
>
> Sorry, resending because I had problems with the group subscription.
>
> Hi,
>
> I am trying to run the hadoop s3a unit tests on the 2.8.0 release (using
> ceph radosgw). I notice that many tests that ran in 2.7.3 are skipped in
> the 2.8.0 release. I am following the configuration options that worked
> for 2.7.3:
> https://github.com/apache/hadoop/blob/trunk/hadoop-
> tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
>
> Have any of the configuration options changed for 2.8.0 that are not yet
> documented, or has the test structure changed? Thanks
>
> on 2.8.0:
>
> Tests: (mvn test -Dtest=S3a*,TestS3A* )
> Running org.apache.hadoop.fs.s3a.TestS3AExceptionTranslation
> Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.124
> sec - in org.apache.hadoop.fs.s3a.TestS3AExceptionTranslation
> Running org.apache.hadoop.fs.s3a.TestS3AAWSCredentialsProvider
> Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.385
> sec - in org.apache.hadoop.fs.s3a.TestS3AAWSCredentialsProvider
> Running org.apache.hadoop.fs.s3a.TestS3AInputPolicies
> Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.146
> sec - in org.apache.hadoop.fs.s3a.TestS3AInputPolicies
> Running org.apache.hadoop.fs.s3a.TestS3AGetFileStatus
>
>
> Tests run: 40, Failures: 0, Errors: 0, Skipped: 0
>
>
>
> on 2.7.3:
>
> Tests: ( mvn test -Dtest=S3a*,TestS3A*)
>
> Running org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
> Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
> Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
> .
> .
> Running org.apache.hadoop.fs.s3a.scale.TestS3ADeleteManyFiles
>
> Tests run: 88, Failures: 0, Errors: 0, Skipped: 48
>
> Thanks
>
>
>
>
>


[jira] [Created] (HADOOP-14471) Upgrade Jetty to latest version

2017-05-31 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14471:
---

 Summary: Upgrade Jetty to latest version
 Key: HADOOP-14471
 URL: https://issues.apache.org/jira/browse/HADOOP-14471
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0-alpha4
Reporter: John Zhuge
Assignee: John Zhuge


The current Jetty version is {{9.3.11.v20160721}}. Should we upgrade it to the 
latest 9.3.x release, which is {{9.3.19.v20170502}}, or to 9.4?

9.3.x changes: 
https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/VERSION.txt

9.4.x changes:
https://github.com/eclipse/jetty.project/blob/jetty-9.4.x/VERSION.txt






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-05-31 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/420/

[May 30, 2017 8:22:40 AM] (sunilg) YARN-6635. Refactor yarn-app pages in new 
YARN UI. Contributed by Akhil
[May 30, 2017 5:07:58 PM] (brahma) HADOOP-14456. Modifier 'static' is redundant 
for inner enums.
[May 30, 2017 6:10:12 PM] (lei) HDFS-11659. 
TestDataNodeHotSwapVolumes.testRemoveVolumeBeingWritten fail
[May 30, 2017 11:58:15 PM] (haibochen) YARN-6477. Dispatcher no longer needs 
the raw types suppression. (Maya




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 368] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 387] 
   Return value of org.apache.hadoop.fs.permission.FsAction.or(FsAction) 
ignored, but method has no side effect At FTPFileSystem.java:but method has no 
side effect At FTPFileSystem.java:[line 421] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 350] 
   org.apache.hadoop.io.erasurecode.ECSchema.toString() makes inefficient 
use of keySet iterator instead of entrySet iterator At ECSchema.java:keySet 
iterator instead of entrySet iterator At ECSchema.java:[line 193] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl 
At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode At 

[jira] [Created] (HADOOP-14470) the ternary operator in create method in class CommandWithDestination is redundant

2017-05-31 Thread Hongyuan Li (JIRA)
Hongyuan Li created HADOOP-14470:


 Summary: the ternary operator in create method in class 
CommandWithDestination is redundant
 Key: HADOOP-14470
 URL: https://issues.apache.org/jira/browse/HADOOP-14470
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hongyuan Li
Assignee: Hongyuan Li
Priority: Trivial


In the if statement, {{lazyPersist}} is always true, so the ternary operator is 
redundant. Related code below:
{code}
   FSDataOutputStream create(PathData item, boolean lazyPersist,
boolean direct)
throws IOException {
  try {
if (lazyPersist) {
  ……
  return create(item.path,
FsPermission.getFileDefault().applyUMask(
FsPermission.getUMask(getConf())),
createFlags,
getConf().getInt(IO_FILE_BUFFER_SIZE_KEY,
IO_FILE_BUFFER_SIZE_DEFAULT),
lazyPersist ? 1 : getDefaultReplication(item.path), // 
this is redundant 
getDefaultBlockSize(),
null,
null);
} else {
  return create(item.path, true);
}
  } finally { // might have been created but stream was interrupted
if (!direct) {
  deleteOnExit(item.path);
}
  }
}

{code}
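A minimal standalone illustration of the redundancy (not the actual Hadoop code; the method and parameter names are made up for the example): inside a branch where the condition is known to be true, a ternary on that condition always takes its first arm, so it can be replaced by the constant 1.

```java
// Illustrative only: a ternary on a condition already known true inside
// the branch is equivalent to its first arm.
class TernarySketch {
    static int replication(boolean lazyPersist, int defaultReplication) {
        if (lazyPersist) {
            // lazyPersist is always true here, so the line below is
            // equivalent to "return 1;" -- the ternary is redundant.
            return lazyPersist ? 1 : defaultReplication;
        }
        return defaultReplication;
    }
}
```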






[jira] [Resolved] (HADOOP-14462) In hadoop Path class, it is associated with java,net.URI, sometimes will boring

2017-05-31 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-14462.
-
Resolution: Won't Fix

> In hadoop Path class, it is associated with java,net.URI, sometimes will 
> boring
> ---
>
> Key: HADOOP-14462
> URL: https://issues.apache.org/jira/browse/HADOOP-14462
> Project: Hadoop Common
>  Issue Type: Wish
>Reporter: Hongyuan Li
>
> [~ste...@apache.org] [~brahmareddy] 
> The method {{new Path(String pathString)}} uses the java.net.URI constructor 
> {{public URI(String scheme, String authority, String path, String query, 
> String fragment)}}, in which %25 may be converted to %2525. For example, the 
> test below won't pass.
> {code}
> String uriString = "ftp://:%25112@nodee1/";
> URI uri = new URI(uriString);
> Path path1 = new Path(uri);
> Path path2 = new Path(uriString);
> assertEquals(path1, path2);
> {code}
> Any good ideas to solve this?
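The java.net.URI behavior behind this can be shown in isolation (the class and method names below are illustrative, not Hadoop code): the multi-argument URI constructors always quote the percent character, so a component that already contains a percent-encoded octet like "%25" comes out as "%2525", while the single-string constructor treats "%25" as an escape pair and leaves it alone.

```java
import java.net.URI;
import java.net.URISyntaxException;

// Demonstrates the double-encoding: the multi-argument constructors quote
// '%' itself, the single-string constructor does not.
class UriEncodingSketch {
    static String singleString(String s) throws URISyntaxException {
        return new URI(s).toString();
    }

    static String fromComponents(String userInfo, String host, String path)
            throws URISyntaxException {
        // '%' in userInfo gets quoted to "%25", so "%25" becomes "%2525"
        return new URI("ftp", userInfo, host, -1, path, null, null).toString();
    }
}
```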






Re: s3a hadoop 2.8.0 tests vs 2.7.3

2017-05-31 Thread Steve Loughran
see also

https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md


On 26 May 2017, at 23:22, Vasu Kulkarni wrote:

Thanks Liu

On Fri, May 26, 2017 at 2:42 PM, Mingliang Liu wrote:
Hi,

Many of the S3A tests have been moved to "integration tests", whose names
start with "ITestS3A". Moreover, the Maven phase for those tests is now
"verify" instead of "test".

So, you can run "mvn -Dit.test='ITestS3A*' verify" for the integration
tests (and unit tests). "mvn test" will run unit tests only. Make sure the
credentials are provided.

By the way, upgrading from 2.7 to 2.8 is a smart choice from S3A point of
view.

https://github.com/apache/hadoop/blob/branch-2.8.1/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md#running-the-tests

L

On May 26, 2017, at 9:00 AM, Vasu Kulkarni  wrote:

Sorry, resending because I had problems with the group subscription.

Hi,

I am trying to run the hadoop s3a unit tests on the 2.8.0 release (using
ceph radosgw). I notice that many tests that ran in 2.7.3 are skipped in
the 2.8.0 release. I am following the configuration options that worked
for 2.7.3:
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md

Have any of the configuration options changed for 2.8.0 that are not yet
documented, or has the test structure changed? Thanks

on 2.8.0:

Tests: (mvn test -Dtest=S3a*,TestS3A* )
Running org.apache.hadoop.fs.s3a.TestS3AExceptionTranslation
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.124
sec - in org.apache.hadoop.fs.s3a.TestS3AExceptionTranslation
Running org.apache.hadoop.fs.s3a.TestS3AAWSCredentialsProvider
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.385
sec - in org.apache.hadoop.fs.s3a.TestS3AAWSCredentialsProvider
Running org.apache.hadoop.fs.s3a.TestS3AInputPolicies
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.146
sec - in org.apache.hadoop.fs.s3a.TestS3AInputPolicies
Running org.apache.hadoop.fs.s3a.TestS3AGetFileStatus


Tests run: 40, Failures: 0, Errors: 0, Skipped: 0



on 2.7.3:

Tests: ( mvn test -Dtest=S3a*,TestS3A*)

Running org.apache.hadoop.fs.contract.s3a.TestS3AContractMkdir
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRootDir
Running org.apache.hadoop.fs.contract.s3a.TestS3AContractRename
.
.
Running org.apache.hadoop.fs.s3a.scale.TestS3ADeleteManyFiles

Tests run: 88, Failures: 0, Errors: 0, Skipped: 48

Thanks






[jira] [Created] (HADOOP-14469) the listStatus method of FTPFileSystem should ignore the path "." and ".."

2017-05-31 Thread Hongyuan Li (JIRA)
Hongyuan Li created HADOOP-14469:


 Summary: the listStatus method of FTPFileSystem should ignore the 
path "."  and ".."
 Key: HADOOP-14469
 URL: https://issues.apache.org/jira/browse/HADOOP-14469
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hongyuan Li
Assignee: Hongyuan Li


For some FTP systems, the listStatus method will return new Path(".") and new 
Path(".."), causing the list operation to loop. See the code below:

{code}
  private FileStatus[] listStatus(FTPClient client, Path file)
  throws IOException {
……
FileStatus[] fileStats = new FileStatus[ftpFiles.length];
for (int i = 0; i < ftpFiles.length; i++) {
  fileStats[i] = getFileStatus(ftpFiles[i], absolute);
}
return fileStats;
  }
{code}
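A hedged sketch of the proposed fix (the class and method names below are illustrative, not the actual FTPFileSystem code): drop the "." and ".." entries some FTP servers include in a listing before they are turned into Paths, since a "." entry makes a recursive listing revisit the same directory forever.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: filter out the self/parent pseudo-entries before
// building FileStatus objects from a directory listing.
class DotEntryFilter {
    static List<String> filterDotEntries(List<String> names) {
        List<String> result = new ArrayList<>();
        for (String name : names) {
            if (".".equals(name) || "..".equals(name)) {
                continue; // skip "." and ".." to avoid listing loops
            }
            result.add(name);
        }
        return result;
    }
}
```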



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
